Interactive Computer Graphics Book Summary 2013


    Mark Pearl Summary

    Oct 2013

Chapter 1 - Graphics Systems and Models

Section 1.3 - Images: Physical and Synthetic

    Two basic entities must be part of any image formation process

1. Object
2. Viewer

    The visible spectrum of light for humans is from 350 to 780 nm

    A light source is characterized by its intensity and direction

    A ray is a semi-infinite line that emanates from a point and travels to infinity

    Ray tracing and photon mapping are examples of image formation techniques

    Section 1.5 Synthetic Camera Model

The conceptual foundation for modern three-dimensional computer graphics is the synthetic camera model.

A few basic principles include:

- The specification of objects is independent of the specification of the viewer
- The image can be computed using simple geometric calculations
- COP - Center of Projection (the center of the lens)

    With synthetic cameras we move the clipping to the front by placing a clipping rectangle, or clipping

    window in the projection plane. This acts as a window through which we view the world.

Section 1.7 - Graphics Architectures

    2 main approaches

1. Object Oriented Pipeline - vertices travel through a pipeline that determines their color and pixel positions.

2. Image Oriented Pipeline - loop over pixels. For each pixel, work backwards to determine which geometric primitives can contribute to its color.

Object Oriented Pipeline

    History

Graphics architecture has progressed from a single central processor doing all the graphics work to a pipeline model. Pipeline architecture reduces the total processing time for a render (think of it as multiple specialized processors, each performing a function and then passing the result on to the next processor).

    Advantages

- Each primitive can be processed independently, which leads to fast performance
- Memory requirements are reduced because not all objects are needed in memory at the same time

Disadvantages

- Cannot handle most global effects such as shadows, reflections and blending in a physically correct manner

    4 major steps in pipeline

1. Vertex Processing
   a. Does coordinate transformations
   b. Computes a color for each vertex

2. Clipping and Primitive Assembly
   a. Clipping is performed on a primitive by primitive basis

3. Rasterization
   a. Converts from vertices to fragments
   b. The output of the rasterizer is a set of fragments

4. Fragment Processing
   a. Takes fragments generated by the rasterizer and updates pixels

Fragments - think of them as potential pixels that carry information including color, location and depth.

    6 major frames that occur in OpenGL

1. Object / Model Coordinates
2. World Coordinates
3. Eye or Camera Coordinates
4. Clip Coordinates
5. Normalized Device Coordinates
6. Window or Screen Coordinates

    Example Questions for Chapter 1

    Textbook Question 1.1

    What are the main advantages and disadvantages of the preferred method to form computer-

    generated images discussed in this chapter?

    Textbook Question 1.5

    Each image has a set of objects and each object comprises a set of graphical primitives. What does

    each primitive comprise? What are the major steps in the imaging process?


    Exam Jun 2011 1.a (6 marks)

Differentiate between the object oriented and image oriented pipeline implementation strategies and discuss the advantages of each approach. What strategy does OpenGL use?

    Exam Jun 2012 1.a (4 marks)

What is the main advantage and disadvantage of using the pipeline approach to form computer-generated images?

    Exam Jun 2012 1.b (4 marks)

    Differentiate between the object oriented and image oriented pipeline implementation strategies

    Exam Jun 2012 1.c (4 marks)

    Name the frames in the usual order in which they occur in the OpenGL pipeline

    Exam Jun 2013 1.3 (3 marks)

Can the standard OpenGL pipeline easily handle light scattering from object to object? Explain.


Chapter 2 - Graphics Programming

Key concepts that need to be understood:

    Typical composition of Vertices / Primitive Objects

Size & Colour

Immediate mode vs. retained mode graphics

    Immediate mode

- Used to be the standard method for displaying graphics
- No memory of the geometric data is stored
- Large overhead in the time needed to transfer drawing instructions and model data to the GPU each cycle

    Retained mode graphics

- The data is stored in a data structure, which allows it to be redisplayed with the option of slight modifications (e.g. a change of color) by resending the array without regenerating the points.

    Retained mode is the opposite of immediate: most rendering data is pre-loaded onto the graphics

    card and thus when a render cycle takes place, only render instructions, and not data, are sent.

    Both immediate and retained mode can be used at the same time on all graphics cards, though the

    moral of the story is that if possible, use retained mode to improve performance.

Coordinate Systems

Device Dependent Graphics - Originally graphics systems required the user to specify all information directly in units of the display device (i.e. pixels).

Device Independent Graphics - Allows users to work in any coordinate system that they desire.

World coordinate system - the coordinate system that the user decides to work in

Vertex coordinates - the units that an application program uses to specify vertex positions.

    At some point with device independent graphics the values in the vertex coordinate system must be

mapped to window coordinates. The graphics system, rather than the user, is responsible for this

    task and mapping is performed automatically as part of the rendering process.

Color - RGB vs. Indexed

    With both the indexed and RGB color models the number of colors that can be displayed depends on

    the depth of the frame (color) buffer.

    Indexed Color Model

    In the past, memory was expensive and small and displays had limited colors.

    This meant that the indexed-color model was preferred because

    - It had lower memory requirements


- Displays had limited colors available.

In an indexed color model a color lookup table is used to identify which color to display.

    Color indexing presented 2 major problems

1) When working with dynamic images that needed shading we would typically need more colors than were provided by the color index mode.

2) The interaction with the window system is more complex than with RGB color.

RGB Color Model

    As hardware has advanced, RGB has become the norm.

    Think of RGB conceptually as three separate buffers, one for red, green and blue. It allows us to

    specify the proportion of red, green and blue in a single pixel. In OpenGL this is often stored in a

    three dimensional vector.

The RGB color model can become unsuitable when the depth of the frame buffer is small because shades become too distinct/discrete.

Viewing - Orthographic and Two Dimensional

The orthographic view is the simplest and is OpenGL's default view. Mathematically, the orthographic

    projection is what we would get if the camera in our synthetic camera model had an infinitely long

    telephoto lens and we could then place the camera infinitely far from our objects.

    In OpenGL, an orthographic projection with a right-parallelepiped viewing volume is the default. The

    orthographic projection sees only those objects in the volume specified by the viewing volume.

Two dimensional viewing is a special case of three-dimensional graphics. Our viewing area is in the

    plane z = 0, within a three dimensional viewing volume. The area of the world that we image is

    known as the viewing rectangle, or clipping rectangle. Objects inside the rectangle are in the image;

    objects outside are clipped out.

    Aspect Ratio and Viewports

Aspect Ratio - The aspect ratio of a rectangle is the ratio of the rectangle's width to its height. The

    independence of the object, viewing, and workstation window specifications can cause undesirable

side effects if the aspect ratio of the viewing rectangle is not the same as the aspect ratio of the window specified.

In GLUT we use glutInitWindowSize to set the window size. Side effects can include distortion. Distortion is a

    consequence of our default mode of operation, in which the entire clipping rectangle is mapped to

    the display window.

Clipping Rectangle - The only way we can map the entire contents of the clipping rectangle to the

    entire display window is to distort the contents of clipping rectangle to fit inside the display window.

This is avoided if the display window and clipping rectangle have the same aspect ratio.


Viewport - Another more flexible approach is to use the concept of a viewport. A viewport is a

    rectangular area of the display window. By default it is the entire window, but it can be set to any

    smaller size in pixels.

    OpenGL Programming Basics

    Event Processing

    Event processing allows us to program how we would like the system to react to certain events.

    These could include mouse, keyboard or window events.

    Callbacks (Display, Reshape, Idle, Keyboard, Mouse)

Each event has a callback that can be specified. The callback is used to trigger actions when an event occurs.

    The idle callback is invoked when there are no other events to trigger. A typical use of the idle

    callback is to continue to generate graphical primitives through a display function while nothing else

    is happening.
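A minimal sketch of how these callbacks might be registered with GLUT (the handler names are placeholders for user-written functions, and their bodies are omitted):

#include <GL/glut.h>

void display(void);                  /* redraw the window  */
void reshape(int width, int height); /* window was resized */
void keyboard(unsigned char key, int x, int y);
void mouse(int button, int state, int x, int y);
void idle(void);                     /* nothing else to do */

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowSize(512, 512);
    glutCreateWindow("callback demo");

    glutDisplayFunc(display);    /* register the callbacks */
    glutReshapeFunc(reshape);
    glutKeyboardFunc(keyboard);
    glutMouseFunc(mouse);
    glutIdleFunc(idle);

    glutMainLoop();              /* enter the event loop; never returns */
    return 0;
}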

    Hidden Surface Removal

    Given the position of the viewer and the objects being rendered we should be able to draw the

    objects in such a way that the correct image is obtained. Algorithms for ordering objects so that they

    are drawn correctly are called visible-surface algorithms (or hidden-surface removal algorithms).

Z-buffer algorithm - A common hidden surface removal algorithm supported by OpenGL.

    Double buffering

    Why we need double buffering

    Because an application program typically works asynchronously, changes can occur to the display

    buffer at any time. Depending on when the display is updated, this can cause the display to show

    partially updated results.

    What is double buffering

    A way to avoid partial updates. Instead of a single frame buffer, the hardware has two frame buffers.

Front buffer - the buffer that is displayed

Back buffer - the buffer that is being updated

    Once updating the back buffer is complete, the front and back buffer are swapped. The new back

    buffer is then cleared and the system starts updating it.

    To trigger a refresh using double buffering in OpenGL we call glutSwapBuffers();
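A minimal sketch of a double-buffered display callback (this assumes the window was created with GLUT_DOUBLE in glutInitDisplayMode, as in the registration sketch above):

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  /* clear the back buffer */

    /* ... draw the scene into the back buffer ... */

    glutSwapBuffers();  /* swap: the finished back buffer becomes the front buffer */
}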

    Menus

    Glut provides pop-up menus that can be used.

    An example of doing this in code would be

    glutCreateMenu(demo_menu); //Create Callback for Menu

glutAddMenuEntry("quit", 1);

glutAddMenuEntry("start rotation", 2);

    glutAttachMenu(GLUT_RIGHT_BUTTON);

    void demo_menu(int id)

    {

//react to menu
}

Purpose of glFlush statements

Similar to computer I/O buffering, OpenGL commands are not executed immediately. Commands are first stored in buffers (including network buffers and buffers on the graphics accelerator itself) and await execution until the buffers are full.

    efficient to send a collection of commands in a single packet than to send each command over

    network one at a time.

glFlush() - empties these buffers and forces all pending commands to be executed immediately, without waiting for the buffers to fill. glFlush() guarantees that all OpenGL commands issued up to that point will complete execution in a finite amount of time after the call.

    glFlush() does not wait until previous executions are complete and may return immediately to your

    program. So you are free to send more commands even though previously issued commands are not

    finished.

    Vertex Shaders and Fragment Shaders

    OpenGL requires a minimum of a vertex and fragment shader.

    Vertex Shader

    A simple vertex shader determines the color and passes the vertex location to the fragment shader.

    The absolute minimum a vertex shader must do is send a vertex location to the rasterizer.

    In general a vertex shader will transform the representation of a vertex location from whatever

coordinate system in which it is specified to a representation in clip coordinates for the rasterizer.

Shaders are written using GLSL (which is similar to a stripped-down C).

An example would be:

in vec4 vPosition;

    void main()

    {

gl_Position = vPosition;

    }

gl_Position is a built-in variable known by OpenGL and used to pass data to the rasterizer.

    Fragment Shader

    Each invocation of the vertex shader outputs a vertex that then goes through primitive assembly and

clipping before reaching the rasterizer. The rasterizer outputs fragments for each primitive inside the clipping volume. Each fragment invokes an execution of the fragment shader.


    At a minimum, each execution of the fragment shader must output a color for the fragment unless

    the fragment is to be discarded.

void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

    Shaders need to be compiled and linked as a bare minimum for things to work.

    Example Questions for Chapter 2

    Exam Nov 2011 1.1 (4 marks)

    Explain what double buffering is and how it is used in computer graphics.

    Exam Jun 2011 6.a (4 marks)

Discuss the difference between the RGB color model and the indexed color model with respect to the depth of the frame (color) buffer.

    Exam Nov 2012 5.4 (4 marks)

    Discuss the difference between the RGB color model and the indexed color model with respect to

    the depth of the frame (color) buffer.

    Exam Nov 2012 1.1 (3 marks)

    A real-time graphics program can use a single frame buffer for rendering polygons, clearing the

    buffer, and repeating the process. Why do we usually use two buffers instead?

Exam Jun 2013 8.2 (5 marks)

GLUT uses a callback function event model. Describe how it works and state the purpose of the idle,

    Exam Jun 2013 8.3 (1 marks)

What is the purpose of the OpenGL glFlush statement?

    Exam Jun 2013 8.4 (1 marks)

Is the following code a fragment or vertex shader?

in vec4 vPosition;

void main() { gl_Position = vPosition; }

    Exam Jun 2013 1.1 (4 marks)

    Explain the difference between immediate mode graphics and retained mode graphics.

    Exam Jun 2013 1.2 (2 marks)

    Name two artifacts in computer graphics that may commonly be specified at the vertices of a

polygon and then interpolated across the polygon to give a value for each fragment within the

    polygon.


Chapter 3 - Geometric Objects and Transformations

Key concepts you should know in this chapter are the following:

    Surface Normals

    Normals are vectors that are perpendicular to a surface. They can be used to describe the

orientation or direction of that surface.

    Uses of surface normals include

- Together with a point, a normal can be used to specify the equation of a plane
- The shading of objects depends on the orientation of their surfaces, a factor that is characterized by the normal vector at each point
- Flat shading uses surface normals to determine if the normal is the same at all points on the surface
- Calculating smooth (Gouraud and Phong) shading
- Ray tracing and light interactions can be calculated from the angle of incidence and the normal

Homogeneous Coordinates

Because there can be confusion between vectors and points we use homogeneous coordinates.

For a point, the fourth coordinate is 1 and for a vector it is 0. For example:

The point (4,5,6) is represented in homogeneous coordinates by (4,5,6,1)

The vector (4,5,6) is represented in homogeneous coordinates by (4,5,6,0)

Advantages of homogeneous coordinates include

- All affine (line preserving) transformations can be represented as matrix multiplications in homogeneous coordinates
- Less arithmetic work is involved
- The uniform representation of all affine transformations makes carrying out successive transformations far easier than in three dimensional space
- Modern hardware implements homogeneous coordinate operations directly, using parallelism to achieve high speed calculations
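As a small worked illustration (standard homogeneous-coordinate arithmetic, not taken from the summary itself): translating by d = (2, 0, 0) moves a point but leaves a vector unchanged, because the vector's fourth coordinate of 0 cancels the translation column of the matrix.

T(2,0,0)\,(4,5,6,1)^{T} = (6,5,6,1)^{T} \quad \text{(the point moves)}

T(2,0,0)\,(4,5,6,0)^{T} = (4,5,6,0)^{T} \quad \text{(the vector is unchanged)}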

    Instance Transformations

    An instance transformation is the product of a translation, a rotation and a scaling.

The order of the transformations that comprise an instance transformation will affect the outcome.

    For instance, if we rotate a square before we apply a non-uniform scale, we will shear the square,

    something we cannot do if we scale then rotate.

    Frames in OpenGL

    The following is the usual order in which the frames occur in the pipeline.


1) Object (or model) coordinates
2) World coordinates
3) Eye (or camera) coordinates
4) Clip coordinates
5) Normalized device coordinates
6) Window (or screen) coordinates

    Model Frame (Represents an object we want to render in our world).

A scene may comprise many models - each is oriented, sized and positioned in the World coordinate system.

    World Frame - also referred to as the application frame - represents values in world coordinates.

    If we do not apply transformations to our object frames the world and model coordinates are the

    same.

    The camera frame (or eye frame) is a frame whose origin is the center of the camera lens and whose

    axes are aligned with the sides of the camera.

    Because there is an affine transformation that corresponds to each change of frame, there are 4x4

    matrices that represent the transformation from model coordinates to world coordinates and from

    world coordinates to eye coordinates. These transformations are usually concatenated together into

    the model-view transformation, which is specified by the model-view matrix.

After transformation, vertices are still represented in homogeneous coordinates. The division by the w component, called perspective division, yields three dimensional representations in normalized device coordinates.

The final transformation takes a position in normalized device coordinates and, taking into account the

    viewport, creates a three dimensional representation in window coordinates.

    Translation, Rotation, Scaling and Shearing

    Know how to perform Translation, Rotation, Scaling and Shearing (You do not have to learn off the

    matrices, they will be given to you if necessary).

Affine transformation - An affine transformation is any transformation that preserves collinearity (i.e., all points lying on a line initially still lie on a line after transformation) and ratios of distances (e.g., the midpoint of a line segment remains the midpoint after transformation).

Rigid-body Transformations - Rotation and translation are known as rigid-body transformations. No combination of rotations and translations can alter the shape or volume of an object; they can alter only the object's location and orientation.

Within a frame, each affine transformation is represented by a 4x4 matrix (examples are given at the end of this chapter's summary).


    Translation

    Translation displaces points by a fixed distance in a given direction.

P' = P + d

    We can also get the same result using the matrix multiplication

P' = Tp

where T is called the translation matrix

    Rotation

    Two dimensional rotations

    Three dimensional rotations.

Rotation about the x-axis by some angle followed by rotation about the y-axis by another angle does not give us the same result as the one we obtain if we reverse the order of the rotations.


Scaling

P' = SP, where S is the scaling matrix

Shear

    Sections that are not examinable include 3.13 & 3.14

    Examples of different types of matrices are below
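The matrices themselves appear as images in the original summary. As a stand-in, the standard homogeneous-coordinate forms for translation by (dx, dy, dz), non-uniform scaling by (sx, sy, sz), and anticlockwise rotation by an angle theta about the z-axis are:

T = \begin{pmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad
S = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad
R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}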


    Example Questions for Chapter 3

    Exam Jun 2011 2 (6 marks)

    Consider the diagram below and answer the question that follows

a) Determine the transformation matrix which will transform the square ABCD to the square A'B'C'D'. Show all workings.

Hint: Below are the transformation matrices for clockwise and anticlockwise rotation about the z-axis.

b) Using the transformation matrix in a, calculate the new position of A if the transformation was performed on ABCD

    Exam Nov 2011 2.1 & 2.2 (6 marks)

    Consider a triangular prism with vertices a,b,c,d,e and f at (0,0,0),(1,0,0),(0,0,1),(0,2,0),(1,2,0) and

    (0,2,1), respectively.

Perform scaling by a factor of 15 along the x-axis. (2 marks)

Then perform a clockwise rotation by 45 degrees about the y-axis (4 marks)

    Hint: The transformation matrix for rotation about the y-axis is given alongside (where theta is the

    angle of rotation)

    Exam Nov 2012 2.2 (6 marks)

    Consider the following 4x4 matrices


    Which of the matrices reflect the following (give the correct letter only)

2.2.1 Identity Matrix (no effect)
2.2.2 Uniform Scaling
2.2.3 Non-uniform scaling
2.2.4 Reflection
2.2.5 Rotation about z
2.2.6 Rotation

Exam June 2012 2.a (1 mark)

What is an instance transformation?

    Exam June 2012 2.b (3 marks)

    Will you get the same effect if the order of transformations that comprise an instance

    transformation were changed? Explain using an example.

    Exam June 2012 2.c (4 marks)

    Provide a mathematical proof to show that rotation and uniform scaling commute.

    Exam June 2013 2.1 (3 marks)

Do the following transformation sequences commute? If they do commute under certain conditions only, state those conditions.

    2.1.1 Rotation

    2.1.2 Rotation and Scaling

    2.1.3 Two Rotations

    Exam June 2013 2.2 (5 marks)

    Consider a line segment (in 3 dimensions) with endpoints a and b at (0,1,0) and (1,2,3) respectively.

    Compute the coordinates of vertices that result after each application of the following sequence of

    transformations of the line segment.

2.2.1 Perform scaling by a factor of 3 along the x-axis

    2.2.2 Then perform a translation of 2 units along the y-axis

2.2.3 Finally perform an anti-clockwise rotation by 60 degrees about the z-axis

Hint: the transformation matrix for rotation about the z-axis is given below (where omega is the

    angle of anti-clockwise rotation)


Chapter 4 - Viewing

Important concepts for Chapter 4 include

Planar Geometric Projections are the class of projections produced by parallel and perspective views. A planar geometric projection is a projection in which the projection surface is a plane and the projectors are lines.

    4.1 Classical and Computer Viewing

    Two types of views

1. Perspective Views - views with a finite COP
2. Parallel Views - views with an infinite COP

    Classical and computer viewing COP / DOP

COP - Center of Projection. For computers it is the origin of the camera frame for perspective views.

DOP - Direction of Projection

PRP - Projection Reference Point

In classical viewing there is an underlying notion of a principal face.

    Different types of classical views include

    Parallel Viewing

- Orthographic Projections - parallel view - shows a single plane
- Axonometric Projections - parallel view - projectors are still orthogonal to the projection plane but the projection plane can have any orientation with respect to the object (isometric, dimetric and trimetric views)
- Oblique Projections - the most general parallel view - the most difficult views to construct by hand


    Perspective Viewing

- Characterized by diminution of size
- Classical perspective views are known as one, two and three point perspective
- Parallel lines in each of the three principal directions converge to a finite vanishing point

Perspective foreshortening - The farther an object is from the center of projection, the smaller it appears. Perspective drawings are characterized by perspective foreshortening and vanishing points.

Perspective foreshortening is the illusion that objects and lengths appear smaller as their distance from the center of projection increases. The points at which parallel lines appear to converge are called vanishing points. Principal vanishing points are formed by the apparent intersection of lines parallel to one of the three x, y or z axes. The number of principal vanishing points is determined by the number of principal axes intersected by the view plane.


    4.2 Viewing with a computer read only

    4.3.1 Positioning of camera frame read only

    4.3.2 Normalization

Normalization transformation - specification of the projection matrix

VRP - View Reference Point

VRC - View Reference Coordinate

VUP - View Up Vector - the up direction of the camera

VPN - View Plane Normal - the orientation of the projection plane or back of the camera

    Camera is positioned at the origin, pointing in the negative z direction. Camera is centered at a point

    called the View Reference Point (VRP). Orientation of the camera is specified by View Plane Normal

    (VPN) and View Up Vector (VUP). The View Plane Normal is the orientation of the projection plane

    or back of camera. The orientation of the plane does not specify the up direction of the camera

    hence we have View Up Vector (VUP) which is the up direction of the camera. VUP fixes the camera.


Viewing Coordinate System - the orthogonal coordinate system (see pg 240)

View Orientation Matrix - the matrix that does the change of frames. It is equivalent to the viewing

    component of the model-view matrix. (Not necessary to know formulae or derivations)

    4.3.3 Look-at function

    The use of VRP, VPN and VUP is but one way to provide an API for specifying the position of a

    camera.

    The LookAt function creates a viewing matrix derived from an eye point, a reference point indicating

    the center of the scene, and an up vector. The matrix maps the reference point to the negative z-axis

    and the eye point to the origin, so that when you use a typical projection matrix, the center of the

    scene maps to the center of the viewport. Similarly, the direction described by the up vector

projected onto the viewing plane is mapped to the positive y-axis so that it points upward in the viewport. The up vector must not be parallel to the line of sight from the eye to the reference point.


Eye point (e) and at point (a) - VPN = a - e (Not necessary to know other formulae or derivations)
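A minimal sketch of how the camera might be positioned with the GLU helper (the numeric values and the set_camera name are arbitrary examples; the textbook-style helper LookAt(eye, at, up) returns an equivalent mat4 instead of multiplying the current matrix):

#include <GL/glu.h>

/* Sketch: set up the viewing transform (called from display/init code). */
static void set_camera(void)
{
    gluLookAt(0.0, 0.0, 2.0,    /* eye position e                      */
              0.0, 0.0, 0.0,    /* at (reference) point a; VPN = a - e */
              0.0, 1.0, 0.0);   /* VUP, must not be parallel to a - e  */
}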

    4.3.4 Other Viewing APIs - read only

    4.4 Parallel Projections

A parallel projection is the limit of a perspective projection in which the center of projection (COP) is

    infinitely far from the object being viewed.

Orthogonal projections - a special kind of parallel projection in which the projectors are perpendicular to the view plane. A single orthogonal view is restricted to one principal face of an object.

Axonometric view - projectors are perpendicular to the projection plane but the projection plane can have any orientation with respect to the object.

Oblique projections - projectors are parallel but can make an arbitrary angle with the projection plane, and the projection plane can have any orientation with respect to the object.

Projection Normalization - a process using translation and scaling that will transform vertices in

    camera coordinates to fit inside the default view volume. (see page 247/248 for detailed

    explanation).

    4.4.5 Oblique Projections - Leave out

    4.5 Perspective projections

Perspective projections are what we get with a camera whose lens has a finite length or, in terms of

    our synthetic camera model, the center of the projection is finite.

    4.5.1 Simple Perspective Projections - Not necessary to know formulae and derivations

    Read pg. 257


    4.6 View volume, Frustum, Perspective Functions

    Two perspective functions you need to know

1. mat4 Frustum(left, right, bottom, top, near, far)
2. mat4 Perspective(fovy, aspect, near, far)

(All parameters are of type GLfloat)

    View Volume (Canonical)

    The view volume can be thought of as the volume that a real camera would see through its lens

(Except that it is also limited in distance from the front and back). It is a section of 3D space that is

    visible from the camera or viewer between two distances.

    When using orthogonal (or parallel) projections, the view volume is rectangular. In OpenGL, an

    orthographic projection is defined with the function call glOrtho(left, right, bottom, top, near, far).


When using perspective projections, the view volume is a frustum and has a truncated pyramid shape. In OpenGL, a perspective projection is defined with the function call glFrustum(xmin, xmax, ymin, ymax, near, far) or gluPerspective(fovy, aspect, near, far).
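A minimal sketch of how these calls might appear in a legacy fixed-function program (the numeric values and the set_projection name are arbitrary examples):

#include <GL/glu.h>

/* Sketch: define the view volume in the projection matrix. */
static void set_projection(int width, int height)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    /* Orthographic alternative: a right-parallelepiped view volume */
    /* glOrtho(-2.0, 2.0, -2.0, 2.0, 0.1, 10.0); */

    /* Perspective: a frustum, given here as a field of view plus aspect ratio */
    gluPerspective(45.0, (double)width / (double)height, 0.1, 10.0);

    glMatrixMode(GL_MODELVIEW);
}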

    NB: Not necessary to know formulae or derivations.

    4.7 Perspective-Projection Matrices read only

    4.8 Hidden surface removal

    Conceptually we seek algorithms that either remove those surfaces that should not be visible to the

    viewer, called hidden-surface-removal algorithms, or find which surfaces are visible, called visible-

    surface-algorithms.

OpenGL has a particular algorithm associated with it, the z-buffer algorithm, to which we can

    interface through three function calls.

    Hidden-surface-removal algorithms can be divided into two broad classes

    1. Object-space algorithms2. Image-space algorithms

    Object-space algorithms

    Object space algorithms attempt to order the surfaces of the objects in the scene such that

    rendering surfaces in a particular order provides the correct image. i.e. render objects furthest back

    first.

    This class of algorithms does not work well with pipeline architectures in which objects are passed

    down the pipeline in an arbitrary order. In order to decide on a proper order in which to render the

objects, the graphics system must have all the objects available so it can sort them into the desired

    back-to-front order.

    Depth Sort Algorithm

    All polygons are rendered with hidden surface removal as a consequence of back to front rendering

    of polygons. Depth sort orders the polygons by how far away from the viewer their maximum z-

    value is. If the minimum depth (z-value) of a given polygon is greater than the maximum depth of

    the polygon behind the one of interest, we can render the polygons back to front.


    Image-space algorithms

    Image-space algorithms work as part of the projection process and seek to determine the

    relationship among object points on each projector. The z-buffer algorithm is an example of this.

    Z-Buffer Algorithm

    The basic idea of the z-buffer algorithm is that for each fragment on the polygon corresponding tothe intersection of the polygon with a ray (from the COP) through a pixel we compute the depth

    from the center of projection. If the depth is greater than the depth currently stored in the z -buffer,

    it is ignored else z-buffer is updated and color buffer is updated with the new color fragment.

    Ultimately we display only the closest point on each projector. The algorithm requires a depth

    buffer, or z-buffer, to store the necessary depth information as polygons are rasterized.

    Because we must keep depth information for each pixel in the color buffer, the z-buffer has the

    same spatial resolution as the color buffers. The depth buffer is initialized to a value that

    corresponds to the farthest distance from the viewer.

For instance with the diagram below, a projector from the COP passes through two surfaces.

Because the circle is closer to the viewer than the triangle is, it is the circle's color that determines

    the color placed in the color buffer at the location corresponding to where the projector pierces the

    projection plane.
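A minimal sketch of the per-fragment depth test (the buffer layout, the Color type and the function name are illustrative, not from the summary; the depth buffer is assumed to have been initialized to the farthest value):

typedef struct { float r, g, b; } Color;

/* Sketch: the z-buffer test applied to one fragment at pixel (x, y). */
void process_fragment(int x, int y, float depth, Color color,
                      float *z_buffer, Color *color_buffer, int width)
{
    int i = y * width + x;
    if (depth < z_buffer[i]) {      /* closer than what is stored there   */
        z_buffer[i] = depth;        /* remember the new nearest depth     */
        color_buffer[i] = color;    /* this fragment's color is displayed */
    }                               /* otherwise the fragment is hidden   */
}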

    2 Major Advantages of Z-Buffer Algorithm

- Its complexity is proportional to the number of fragments generated by the rasterizer
- It can be implemented with a small number of additional calculations over what we have to do to project and display polygons without hidden-surface removal

    Handling Translucent Objects using the Z-Buffer Algorithm

    Any object behind an opaque object (solid object) should not be rendered. Any object behind a

    translucent object (see through objects) should be composited.

    Basic approach in the z-buffer algorithm would be


- If the depth information allows a pixel to be rendered, it is blended (composited) with the pixel already stored there.

- If the pixel is part of an opaque polygon, the depth data is updated.

4.9 Displaying Meshes read only

    4.10 Projections and Shadows - Know

    The creation of simple shadows is an interesting application of projection matrices.

    To add physically correct shadows we would typically have to do global calculations that are difficult.

    This normally cannot be done in real time.

There is the concept of a shadow polygon - a flat polygon which is the projection of the

    original polygon onto the surface with the center of projection at the light source.

Shadows are easier to calculate if the light source is not moving; if it is moving, the shadows would possibly need to be recalculated in the idle callback function.

    For a simple environment such as a plane flying over a flat terrain casting a single shadow, this is an

    appropriate approach. When objects can cast shadows on other objects, this method becomes

impractical.

    Example Questions for Chapter 4

    Exam Jun 2011 3.3 (4 marks)

Differentiate between orthographic and perspective projections in terms of projectors and the projection plane.

    Exam Jun 2012 3.a (4 marks)

    Define the term View Volume with respect to computer graphics and with reference to both

    perspective and orthogonal views.


Exam Nov 2012 1.3 (6 marks)

Define the term View Volume with reference to both perspective and orthogonal views. Provide the OpenGL functions that are used to define the respective view volumes.

    Exam Jun 2011 3.b (4 marks)

    Orthogonal, oblique and axonometric view scenes are all parallel view scenes. Explain the

    differences between orthogonal, axonometric, and oblique view scenes.

    Exam June 2013 3.1 (1 marks)

Explain what is meant by non-uniform foreshortening of objects under a perspective camera.


    Exam June 2013 3.2 (3 marks)

    What is the purpose of projection normalization in the computer graphics pipeline? Name one

    advantage of using this technique.

    Exam June 2013 3.3 (4 marks)

Draw a view frustum. Position and name the three important rectangular planes at their correct positions. Make sure that the position of the origin and the orientation of the z-axis are clearly

    distinguishable. State the name of the coordinate system (or frame) in which the view frustum is

    defined.

    Exam June 2013 6.1 (2 marks)

    Draw a picture of a set of simple polygons that the Depth sort algorithm cannot render without

    splitting the polygons

    Exam June 2013 6.2 (3 marks)

Why can't the standard z-buffer algorithm handle scenes with both opaque and translucent objects?

    What modifications can be made to the algorithm for it to handle this?

    Exam June 2012 3.a (4 marks)

    Hidden surface removal can be divided into two broad classes. State and explain each of these

    classes.

    Exam June 2012 3.b (4 marks)

    Explain the problem of rendering translucent objects using the z-buffer algorithm, and describe how

    the algorithm can be adapted to deal with this problem (without sorting the polygons).

Exam June 2012 4.a (4 marks)

What is parallel projection? What specialty do orthogonal projections provide? What is the advantage of the normalization transformation process?

    Exam June 2012 4.b (2 marks)

    Why are projections produced by parallel and perspective viewing known as planar geometric

    projections?

    Exam June 2012 4.c (4 marks)

    The specification of the orientation of a synthetic camera can be divided into the specification of the

    view reference point (VRP), view-plane normal (VPN) and the view-up-vector (VUP). Explain each of

    these?

    Exam Nov 2012 3.1 (6 marks)

    Differentiate between Depth sort and z-buffer algorithms for hidden surface removal.

    Exam Nov 2012 3.2 (6 marks)

    Briefly describe, with any appropriate equations, the algorithm for removing (or culling) back facing

    polygons. Assume that the normal points out from the visible side of the polygon


Chapter 5 - Lighting and Shading

A surface can either emit light by self-emission, or reflect light from other surfaces that illuminate it. Some surfaces can do both.

Rendering equation - to represent lighting correctly we would need a recursive calculation that blends light between surfaces; this can be described mathematically using the rendering equation. There are various approximations of this equation using ray tracing; unfortunately these methods cannot render scenes at the rate at which we can pass polygons through the modeling-projection pipeline.

    For render pipeline architectures we focus on a simpler rendering model, based on the Phong

    reflection model that provides a compromise between physical correctness and efficient calculation.

Rather than looking at a global energy balance, we follow rays of light from light-emitting (or self-luminous) surfaces that we call light sources. We then model what happens to these rays as they

    interact with reflecting surfaces in the scene. This approach is similar to ray tracing, but we consider

    only single interactions between light sources and surfaces.

    2 independent parts of the problem

1. Model the light sources in the scene
2. Build a reflection model that deals with the interactions between materials and light.

We need to consider only those rays that leave the source and reach the viewer's eye (either directly

    or through interactions with objects). These are the rays that reach the center of projection (COP)

    after passing through the clipping rectangle.


Interactions between light and materials can be classified into three groups:

1. Specular Surfaces - appear shiny because most of the light that is reflected or scattered is in a narrow range of angles close to the angle of reflection. Mirrors are perfectly specular surfaces.
2. Diffuse Surfaces - characterized by reflected light being scattered in all directions. Walls painted with matt paint are diffuse reflectors.
3. Translucent Surfaces - allow some light to penetrate the surface and to emerge from another location on the object. The process of refraction characterizes glass and water.

    5.1 Light and Matter

There are 4 basic types of light sources

1. Ambient Lighting
2. Point Sources
3. Spotlights
4. Distant Lights

    These four lighting types are sufficient for rendering most simple scenes.

5.2 Light Sources

    Ambient Light

Ambient light produces light of constant intensity throughout the scene. All objects are illuminated

    from all sides.

Point Sources

Point sources emit light equally in all directions, but the intensity of the light diminishes with the distance between the light and the objects it illuminates. Surfaces facing away from the light source are not illuminated.

Umbra - the area that is fully in shadow

Penumbra - the area that is partially in shadow

    Spotlights

    A spot light source is similar to a point light source except that its illumination is restricted to a cone

    in a particular direction.


    Spotlights are characterized by a narrow range of angles through which light is emitted. More

realistic spotlights are characterized by the distribution of light within the cone, usually with most

    of the light concentrated in the center of the cone.

    Distant Light Sources

    A distant light source is like a point light source except that the rays of light are all parallel.

Most shading calculations require the direction from the point on the surface to the light source position. As we move across a surface, calculating the intensity at each point, we would have to re-compute this vector repeatedly - a computation that is a significant part of the shading calculation. Distant

    light sources can be calculated faster than near light sources (see pg. 294 for parallel light).

    5.3 - Phong Reflection Model

    The Phong model uses 4 vectors to calculate a color for an arbitrary point P on a surface.

1. l - from p to the light source
2. n - the normal at point p
3. v - from p to the viewer
4. r - the reflection of the ray l about the normal

    The Phong model supports the three types of material-light interactions

1. Ambient term: Ia = ka La, where ka is the ambient reflection coefficient and La is the ambient light intensity
2. Diffuse term: Id = kd (l · n) Ld
3. Specular term: Is = ks Ls max((r · v)^α, 0), where α is the shininess coefficient

    There are 3 types of reflection

1. Ambient Reflection
2. Diffuse Reflection
3. Specular Reflection

What is referred to as the Phong model, including the distance term, is written as
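The equation itself appears as an image in the original summary; the standard form, with a quadratic distance-attenuation factor applied to the diffuse and specular contributions, is:

I = \frac{1}{a + bd + cd^{2}} \Big( k_d L_d \max(l \cdot n, 0) + k_s L_s \max\big((r \cdot v)^{\alpha}, 0\big) \Big) + k_a L_a

where d is the distance to the light source and a, b, c are the attenuation constants.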


    Lambertian Surfaces (Applies to Diffuse Reflection)

An example of a Lambertian surface is a rough surface. This can also be referred to as diffuse

    reflection.

Lambert's Law - the surface is brightest at noon and dimmest at dawn and dusk because we see only

    the vertical component of the incoming light.

More technically, the amount of diffuse light reflected is directly proportional to cos θ, where θ is the angle between the normal at the point of interest and the direction of the light source.

    If both l and n are unit-length vectors, then

cos θ = l · n

Using Lambert's Law, derive the equation for calculating approximations to diffuse reflection on a computer.

If we consider the direction of the light source (l) and the normal at the point of interest (n) to be unit length vectors, then cos θ = l · n.

If we add a reflection coefficient kd representing the fraction of incoming diffuse light that is reflected, we have the diffuse reflection term:

Id = kd (l · n) Ld, where Ld is the intensity of the light source.

Difference between the Phong Model and the Blinn-Phong Model

    The Blinn-Phong model attempts to provide a performance optimization by using the unit vector

    halfway between the viewer vector and the light-source vector which avoids the recalculation of r.

When we use the halfway vector in the calculation of the specular term we are using the Blinn-Phong model. This model is the default in systems with a fixed-function pipeline.
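The halfway vector itself is not written out in the summary; the standard definition (with l toward the light and v toward the viewer, both of unit length) is

h = \frac{l + v}{\lvert l + v \rvert}

and the specular term then uses (n · h) in place of (r · v), with the exponent adjusted to give a similar-looking highlight.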

    5.4 Computation of Vectors Read Only

    5.5 Polygonal Shading

Flat Shading - a polygon is filled with a single color or shade across its surface. A single normal is calculated for the whole surface, and this determines the color. It works on the basis that if the three vectors (l, n and v) are constant, then the shading calculation needs to be carried out only once for each

    polygon, and each point on the polygon is assigned the same shade.

Smooth Shading - the color per vertex is calculated using the vertex normal and then this color is interpolated across the polygon.


Gouraud Shading - an estimate of the surface normal at each vertex is found. Using this value,

    lighting computations based on the Phong reflection model are performed to produce color

    intensities at the vertices.

Phong Shading - the normals at the vertices are interpolated across the surface of the polygon. The lighting model is then applied at every point within the polygon. Because normals give the local surface orientation, by interpolating the normals across the surface of a polygon the surface appears to be curved rather than flat, hence the smoother appearance of Phong shaded images.

    5.6 Approximation of a sphere by recursive subdivision - Read Only

    5.7 Specifying Lighting Parameters

    Read pg. 314-315

    5.8 - Implementing a Lighting Model

    Read pg. 314-315

    5.9 - Shading of the sphere model

    5.10 Per fragment Lighting

    5.11 Global Illumination

    Example Questions for Chapter 5

    Exam Nov 2013 4.1 (4 marks)

The Phong reflection model is an approximation of the physical reality to produce good renderings under a variety of lighting conditions and material properties. In this model there are three terms, an

    ambient term, a diffuse term, and a specular term. The Phong shading model for a single light source

    is

    Exam Nov 2013 4.1.1 (4 marks)

Describe the four vectors the model uses to calculate a color for an arbitrary point p. Illustrate with a

    figure.

    Exam Nov 2013 4.1.2 (2 marks)

In the specular term, there is a factor of (r · v)^p. What does p refer to? What effect does varying the

    power p have?

    Exam Nov 2013 4.1.3 (3 marks)

What is the term ka La? What does ka refer to? How will decreasing ka affect the rendering of the

    surface?


    Exam June 2012 5.a (4 marks)

    Interactions between light and materials can be classified into three categories. State and describe

    these categories.

    Exam June 2012 5.b (4 marks)

State and explain Lambert's Law using a diagram.

    Exam Nov 2011 4.b (3 marks)

State and explain Lambert's Law using a diagram.

    Exam Nov 2011 4.c (3 marks)

Using Lambert's Law, derive the equation for calculating approximations to the diffuse reflection

    term used in the Phong lighting model.

    Exam June 2012 5.c (4 marks)

Using Lambert's Law, derive the equation for calculating approximations to diffuse reflection on a

    computer.

    Exam Nov 2012 4.1 (9 marks)

    The shading intensity at any given point p on a surface is in general comprised of three

    contributions, each of which corresponds to a distinct physical phenomenon. List and describe all

    three, stating how they are computed in terms of the following vectors.

n - the normal at point p

v - from p to the viewer

l - from p to the light source

r - the reflection of the ray from the light source about the normal at p

    Exam Nov 2013 4.1.4 (3 marks)

    Consider Gouraud and Phong shading. Which one is more realistic, especially for highly curved

    surfaces? Why?

    Exam Jun 2011 4.e (3 marks)

Why do Phong shaded images appear smoother than Gouraud or flat shaded images?

    Exam Jun 2011 4.a (1 marks)

Explain what characterizes a diffuse reflecting surface.

    Exam Jun 2011 4.d (6 marks)

    Describe distinguishing features of ambient, point, spot and distant light sources.


Chapter 6 - From Vertices to Fragments

    6.1 Basic Implementation Strategies

    There are two basic implementation strategies

1. Image Oriented Strategy
2. Object Oriented Strategy

    Image Oriented Strategy

Loop through each pixel (in rows called scanlines) and work our way back to determine what determines the pixel's color.

Main disadvantage - unless we first build a data structure from the geometric data, we do not know

    which primitives affect which pixels (These types of data structures can be complex).

Main advantage - they are well suited to handle global effects such as shadows and reflections (e.g.

    Ray Tracing).

    Object Oriented Strategy

Loop through each object and determine each object's colors and whether it is visible.

In the past the main disadvantage was the memory required to do this; however, this has been overcome as memory has reduced in price and become denser.

Main disadvantage - each geometric primitive is processed independently, so complex shading effects

    that involve multiple geometric objects such as reflections cannot be handled except by approximate

    methods.

Main advantage - real-time processing and generation of 3D views.

One major exception to this is hidden-surface removal, where the z-buffer is used to store global

    information.

    6.2 Four Major Tasks

    There are four major tasks that any graphics system must perform to render a geometric entity, such

    as a 3d polygon. They are

1. Modeling
2. Geometry Processing
3. Rasterization
4. Fragment Processing

    Modeling

    Think of the modeler as a black box that produces geometric objects and is usually a user program.

One function a modeler can perform is clipping, or intelligently eliminating objects that

    do not need to be rendered or that can be simplified.

    Geometry Processing

    The first step in geometry processing is to change representations from object coordinates to

    camera or eye coordinates using the model-view transformation.


    The second step is to transform vertices using the projection transformation to a normalized view

    volume in which objects that might be visible are contained in a cube centered at the origin.

    Geometric objects are transformed by a sequence of transformations that may reshape and move

    them or may change their representations.

    Eventually only those primitives that fit within a specified volume (the view volume) can appear on

    the display after rasterization.

    2 reasons why we cannot allow all objects to be rasterized are

1. Rasterizing objects that lie outside the view volume is inefficient because such objects cannot be visible.
2. When vertices reach the rasterizer, they can no longer be processed individually and first must be assembled into primitives.

Rasterization

To generate a set of fragments that give the locations of the pixels in the frame buffer corresponding

    to these vertices, we only need their x, y components or, equivalently, the results of the orthogonal

    projection of these vertices. We determine these fragments through a process called rasterization.

    Rasterization determines which fragments should be used to approximate a line segment between

    the projected vertices.

The rasterizer starts with vertices in normalized device coordinates but outputs fragments whose locations are in units of the display (window coordinates).

    Fragment Processing

    The process of taking fragments generated by rasterizer and updating pixels in the frame buffer.

Depth information together with the transparency of fragments in front, as well as texture and bump

    mapping are used to update the fragments in the frame buffer to form pixels that can be displayed

    on the screen.

    Hidden-surface removal is typically carried out on a fragment by fragment basis. In the simplest

    situation, each fragment is assigned a color by the rasterizer and this color is placed in the frame

    buffer at the location corresponding to the fragment location.

6.3 Clipping

Clipping is performed before perspective division.

    The most common primitives to pass down the pipeline are line segments and polygons and there

    are techniques for clipping on both types of primitives.

    6.4 Line-segment Clipping

A clipper decides which primitives are accepted (displayed) or rejected. There are two well known clipping algorithms for line segments:

1. Cohen-Sutherland Clipping
2. Liang-Barsky Clipping


Liang-Barsky Clipping is more efficient than Cohen-Sutherland Clipping.

    Cohen-Sutherland Clipping

    The Cohen-Sutherland algorithm was the first to seek to replace most of the expensive floating-point

multiplications and divisions with a combination of floating-point subtractions and bit operations.

    The center region is the screen, and the other 8 regions are on different sides outside the screen.

    Each region is given a 4 bit binary number, called an "outcode". The codes are chosen as follows:

    If the region is above the screen, the first bit is 1

    If the region is below the screen, the second bit is 1

    If the region is to the right of the screen, the third bit is 1

    If the region is to the left of the screen, the fourth bit is 1

    Obviously an area can't be to the left and the right at the same time, or above and below it at the

    same time, so the third and fourth bit can't be 1 together, and the first and second bit can't be 1

    together. The screen itself has all 4 bits set to 0.

    Both endpoints of the line can lie in any of these 9 regions, and there are a few trivial cases:

    If both endpoints are inside or on the edges of the screen, the line is inside the screen or clipped,and can be drawn. This case is the trivial accept.

    If both endpoints are on the same side of the screen (e.g., both endpoints are above the screen),certainly no part of the line can be visible on the screen. This case is the trivial reject, and the

    line doesn't have to be drawn.
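A minimal sketch of the outcode computation in C (the bit positions follow the list above; the macro and function names and the xmin/xmax/ymin/ymax window bounds are illustrative):

#define ABOVE 0x8   /* first bit  */
#define BELOW 0x4   /* second bit */
#define RIGHT 0x2   /* third bit  */
#define LEFT  0x1   /* fourth bit */

unsigned int outcode(double x, double y,
                     double xmin, double xmax, double ymin, double ymax)
{
    unsigned int code = 0;
    if (y > ymax)      code |= ABOVE;
    else if (y < ymin) code |= BELOW;
    if (x > xmax)      code |= RIGHT;
    else if (x < xmin) code |= LEFT;
    /* trivial accept: outcode(p1) | outcode(p2) == 0
       trivial reject: outcode(p1) & outcode(p2) != 0 */
    return code;
}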

    Advantages

1. This algorithm works best when there are many line segments but few are actually displayed.

    2. The algorithm can be extended to three dimensions


    Disadvantage

1. It must be used recursively

Liang-Barsky Clipping

The Liang-Barsky algorithm uses the parametric equation of a line and inequalities describing the range of the clipping window to determine the intersections between the line and the clipping window. With these intersections it knows which portion of the line should be drawn.

    How it works

Suppose we have a line segment defined by two endpoints p(x1, y1) and q(x2, y2). The parametric equation of the line segment gives x-values and y-values for every point in terms of a parameter t that ranges from 0 to 1.

x(t) = (1 - t) x1 + t x2

y(t) = (1 - t) y1 + t y2

    There are four points where line intersects side of windowstB, tL, tT, tR

    We can order these points and then determine where clipping needs to take place. If for example tL

    > tR, this implies that the line must be rejected as it falls outside the window.

    To use this strategy effectively we need to avoid computing intersections until they are needed.

Many lines can be rejected before all four intersections are known.

    Efficiency

    Efficient implementation of this strategy requires that we avoid computing intersections until they

    are needed.

The Liang-Barsky algorithm is significantly more efficient than Cohen-Sutherland.

The efficiency gain of this approach, compared to that of the Cohen-Sutherland algorithm, is that we avoid the multiple shortenings of line segments and the related re-executions of the clipping algorithm.
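A minimal C++ sketch of the Liang-Barsky strategy against an axis-aligned window; the p/q formulation and the variable names are a conventional presentation assumed here, not the book's own listing. Intersections are only computed while the surviving parameter interval [t0, t1] is narrowed, and the line is rejected as soon as the interval becomes empty:

    #include <algorithm>

    // Clips the segment (x1,y1)-(x2,y2) in place; returns false if nothing is visible.
    bool clipLiangBarsky(double& x1, double& y1, double& x2, double& y2,
                         double xmin, double xmax, double ymin, double ymax)
    {
        double dx = x2 - x1, dy = y2 - y1;
        double p[4] = { -dx,  dx, -dy,  dy };                  // left, right, bottom, top
        double q[4] = { x1 - xmin, xmax - x1, y1 - ymin, ymax - y1 };
        double t0 = 0.0, t1 = 1.0;

        for (int i = 0; i < 4; ++i) {
            if (p[i] == 0.0) {                                 // parallel to this boundary
                if (q[i] < 0.0) return false;                  // and entirely outside it
            } else {
                double t = q[i] / p[i];
                if (p[i] < 0.0) t0 = std::max(t0, t);          // entering intersection
                else            t1 = std::min(t1, t);          // leaving intersection
                if (t0 > t1) return false;                     // interval empty: reject early
            }
        }
        double cx1 = x1 + t0 * dx, cy1 = y1 + t0 * dy;         // clipped endpoints
        double cx2 = x1 + t1 * dx, cy2 = y1 + t1 * dy;
        x1 = cx1; y1 = cy1; x2 = cx2; y2 = cy2;
        return true;
    }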


    6.5 Polygon Clipping do not learn

    6.6 - Clipping of other primitives do not learn

    6.7 - Clipping in three dimensions do not learn

    6.8 Rasterization

    Rasterization is the task of taking an image described in a vector graphics format (shapes) and

    converting it into a raster image (pixels or dots) for output on a video display or printer, or for

    storage in a bitmap file format.

In normal usage, the term refers to the popular rendering algorithm for displaying three-dimensional shapes on a computer. Rasterization is currently the most popular technique for producing real-time 3D computer graphics. Real-time applications need to respond immediately to user input, and

    generally need to produce frame rates of at least 24 frames per second to achieve smooth

    animation.

    Compared with other rendering techniques such as ray tracing, rasterization is extremely fast.

    However, rasterization is simply the process of computing the mapping from scene geometry to

    pixels and does not prescribe a particular way to compute the color of those pixels. Shading,

    including programmable shading, may be based on physical light transport, or artistic intent.

6.9 Bresenham's Algorithm

    Bresenham derived a line-rasterization algorithm that avoids all floating point calculations and has

    become the standard algorithm used in hardware and software rasterizers.

It is preferred over the DDA algorithm because, although the DDA algorithm is efficient and can be coded easily, it requires a floating-point addition for each pixel generated, which Bresenham's algorithm doesn't require.
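A minimal C++ sketch of Bresenham's algorithm for the first octant (0 <= slope <= 1), showing that only integer additions and comparisons are needed per pixel; setPixel is an assumed frame-buffer write:

    void setPixel(int x, int y);               // assumed to exist elsewhere

    // Assumes x1 < x2 and 0 <= (y2 - y1) <= (x2 - x1).
    void bresenhamLine(int x1, int y1, int x2, int y2)
    {
        int dx = x2 - x1, dy = y2 - y1;
        int d  = 2 * dy - dx;                  // integer decision variable
        int y  = y1;
        for (int x = x1; x <= x2; ++x) {
            setPixel(x, y);
            if (d > 0) {                       // the line has moved above the midpoint
                ++y;
                d -= 2 * dx;
            }
            d += 2 * dy;                       // integer additions only, no floating point
        }
    }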

    6.10 Polygon Rasterization

There are several different types of polygon rasterization. Some of the ones that work with the OpenGL pipeline include:

- Inside-Outside Testing
- Concave Polygons
- Fill and Sort
- Flood Fill
- Singularities

    Crossing or Odd-Even Test

The most widely used test for making inside-outside decisions. Suppose P is a point whose status we want to determine, and pretend there is a ray emanating from P, going off to infinity. Follow that ray from somewhere outside the polygon to P: if it crosses an odd number of edges, then P is inside the polygon; otherwise it is outside.
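A minimal C++ sketch of the crossing (odd-even) test: cast a horizontal ray from the query point and flip an inside/outside flag at every edge crossing. The Point type and the polygon layout are assumptions for illustration:

    #include <cstddef>
    #include <vector>

    struct Point { double x, y; };

    bool insideOddEven(const Point& p, const std::vector<Point>& poly)
    {
        bool inside = false;
        std::size_t n = poly.size();
        for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
            // Does the edge (poly[j], poly[i]) cross the horizontal ray to the right of p?
            bool straddles = (poly[i].y > p.y) != (poly[j].y > p.y);
            if (straddles) {
                double xCross = poly[j].x + (p.y - poly[j].y) *
                                (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
                if (p.x < xCross) inside = !inside;   // odd number of crossings => inside
            }
        }
        return inside;
    }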

Winding Test - p. 358


    6.11 Hidden Surface Removal

    2 main approaches

1) Object space approaches
2) Image space approaches (image space approaches are more popular)

    Scanline Algorithms

    -

    Back-Face Removal (Object Space Approach)

    A simple object space algorithm is Back-Face removal (or back face cull) where no faces on the back

    of the object are displayed.

Limitations of the back-face removal algorithm

- It can only be used on solid objects modeled as a polygon mesh.
- It works fine for convex polyhedra but not necessarily for concave polyhedra.

    Z-Buffer Algorithm (Image Space Approach)

    The easiest way to achieve hidden-surface removal is to use the depth buffer (sometimes called a z-

    buffer). A depth buffer works by associating a depth, or distance from the viewpoint, with each pixel

    on the window. Initially, the depth values for all pixels are set to the largest possible distance, and

    then the objects in the scene are drawn in any order.

    Graphical calculations in hardware or software convert each surface that's drawn to a set of pixels

    on the window where the surface will appear if it isn't obscured by something else. In addition, the

distance from the eye is computed. With depth buffering enabled, before each pixel is drawn, a

    comparison is done with the depth value already stored at the pixel.

    If the new pixel is closer to the eye than what is there, the new pixel's color and depth values replace

    those that are currently written into the pixel. If the new pixel's depth is greater than what is

    currently there, the new pixel would be obscured, and the color and depth information for the

    incoming pixel is discarded.

    Since information is discarded rather than used for drawing, hidden-surface removal can increase

    your performance.

Shading is performed before hidden surface removal. In the z-buffer algorithm polygons are first

    rasterized and then for each fragment of the polygon depth values are determined and compared to

    the z-buffer.
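A minimal C++ sketch of the per-fragment depth comparison described above; the Fragment struct and the buffer layout are assumptions for illustration:

    #include <limits>
    #include <vector>

    struct Fragment { int x, y; float depth; unsigned color; };

    struct FrameBuffer {
        int width, height;
        std::vector<unsigned> color;   // one color per pixel
        std::vector<float>    depth;   // initialized to the largest possible distance

        FrameBuffer(int w, int h)
            : width(w), height(h),
              color(w * h, 0),
              depth(w * h, std::numeric_limits<float>::max()) {}

        void process(const Fragment& f)
        {
            int idx = f.y * width + f.x;
            if (f.depth < depth[idx]) {      // new fragment is closer to the eye
                depth[idx] = f.depth;        // keep its depth ...
                color[idx] = f.color;        // ... and its (already shaded) color
            }                                // otherwise the fragment is discarded
        }
    };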

Scan Conversion with the z-Buffer

    -

Depth Sort and the Painter's Algorithm (Object Space Approach)


    The idea behind the Painter's algorithm is to draw polygons far away from the eye first, followed by

    drawing those that are close to the eye. Hidden surfaces will be written over in the image as the

    surfaces that obscure them are drawn.
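A minimal C++ sketch of the depth-sort idea: sort the polygons from farthest to nearest and draw them in that order, so that nearer polygons paint over the ones they obscure. Polygon and drawPolygon are assumed placeholders:

    #include <algorithm>
    #include <vector>

    struct Polygon { /* vertices, color, ... */ float maxDepth; };
    void drawPolygon(const Polygon& poly);        // assumed rasterization routine

    void painterRender(std::vector<Polygon> polys)
    {
        // Farthest polygons first (largest distance from the eye).
        std::sort(polys.begin(), polys.end(),
                  [](const Polygon& a, const Polygon& b) { return a.maxDepth > b.maxDepth; });
        for (const Polygon& p : polys)
            drawPolygon(p);                        // nearer polygons overwrite farther ones
    }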

Situations where the depth-sort algorithm is troublesome include:

- If three or more polygons overlap cyclically
- If a polygon pierces another polygon

6.12 Antialiasing

An error arises whenever we attempt to go from the continuous representation of an object (which has infinite resolution) to a sampled approximation, which has limited resolution; this is called aliasing.

    Aliasing errors are caused by 3 related problems with the discrete nature of the frame buffer.

1. The number of pixels in the frame buffer is fixed. Many different line segments may be approximated by the same pattern of pixels. We can say that all these segments are aliased as the same sequence of pixels.
2. Pixel locations are fixed on a uniform grid; regardless of where we would like to place pixels, we cannot place them at other than evenly spaced locations.
3. Pixels have a fixed size and shape.

In computer graphics, antialiasing is a software technique for diminishing jaggies - stairstep-like

    lines that should be smooth.

    Jaggies occur because the output device, the monitor or printer, doesn't have a high enough

    resolution to represent a smooth line. Antialiasing reduces the prominence of jaggies by surrounding

    the stairsteps with intermediate shades of gray (for gray-scaling devices) or color (for color devices).

    Although this reduces the jagged appearance of the lines, it also makes them fuzzier.

    Another method for reducing jaggies is called smoothing, in which the printer changes the size and

    horizontal alignment of dots to make curves smoother.

    Antialiasing is sometimes called oversampling.

    Interpolation

Interpolation is a way of determining the value of some parameter at any point between two endpoints for which the parameter values are known (e.g. the color of any point between two points, or the normal of any point between two points).
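A minimal C++ sketch of linear interpolation of a color between two known endpoint values, parameterized by t in [0, 1]; the same idea applies to normals or any other per-vertex parameter:

    struct Color { float r, g, b; };

    Color lerp(const Color& a, const Color& b, float t)
    {
        return { (1 - t) * a.r + t * b.r,
                 (1 - t) * a.g + t * b.g,
                 (1 - t) * a.b + t * b.b };   // t = 0 gives a, t = 1 gives b
    }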


    Accumulation Buffer

    The accumulation buffer can be used for a variety of operations that involve combining multiple

    images. One of the most important uses of the accumulation buffer is for antialiasing. Rather than

    antialiasing individual lines and polygons, we can anti-alias an entire scene using the accumulation

    buffer.
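A hedged sketch of whole-scene antialiasing with the accumulation buffer in legacy (fixed-function) OpenGL: the scene is rendered several times from slightly jittered viewpoints and the images are averaged. drawScene and jitterCamera are assumed application routines, and the context is assumed to have been created with an accumulation buffer:

    #include <GL/gl.h>

    void jitterCamera(int sample);                 // assumed: sub-pixel viewpoint offset
    void drawScene();                              // assumed: issues the scene geometry

    void renderAntialiased(int samples)
    {
        glClear(GL_ACCUM_BUFFER_BIT);
        for (int i = 0; i < samples; ++i) {
            jitterCamera(i);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene();
            glAccum(GL_ACCUM, 1.0f / samples);     // add this image, scaled, to the accumulator
        }
        glAccum(GL_RETURN, 1.0f);                  // copy the averaged image back for display
    }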

    6.13 Display Considerations do not learn

    Example Questions for Chapter 6

    Exam Nov 2012 1.2 (3 marks)

Briefly explain what the accumulation buffer is and how it is used with respect to anti-aliasing.

    Exam Nov 2012 6 (6 marks)

Using diagrams, describe briefly the Liang-Barsky clipping algorithm.

Exam Jun 2012 7.a (8 marks)

Describe, with the use of diagrams, the Cohen-Sutherland line clipping algorithm.

    Exam Jun 2012 7.b (2 marks)

    What are the advantages and disadvantages of the Cohen-Sutherland line clipping algorithm?

    Exam Jun 2013 7.1 (2 marks)

Give one advantage and one disadvantage of the Cohen-Sutherland line clipping algorithm.

    Exam Jun 2013 7.2 (3 marks)

What is the crossing or odd-even test? Explain it with respect to a point p inside a polygon.

    Exam Jun 2011 5.a (2 marks)

    In the case of the z-buffer algorithm for hidden surface removal, is shading performed before or

    after hidden surfaces are eliminated? Explain.

    Exam Jun 2011 5.c (2 marks)

    Bresenham derived a line-rasterization algorithm that has become the standard approach used in

hardware and software rasterizers as opposed to the simpler DDA algorithm. Why is this so?

    Exam Jun 2011 1.b (4 marks)

    Give brief definitions of the following terms in the context of computer graphics

i) Anti-aliasing
ii) Normal Interpolation


    Chapter 7 Discrete Techniques

    7.1 Buffers Do not learn

    7.2 Digital Images Do not learn

    7.3 - Writing into Buffers Do not learn

    7.4 - Mapping Methods

Mipmapping

A way to deal with the minification problem, i.e. the distortion of a mapped texture due to a texel being smaller than one pixel. Mipmapping enables us to use a sequence of texture images at different resolutions to give texture values that are the average of texel values over various areas.
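A hedged sketch of requesting mipmapped minification for an existing texture object in OpenGL; an extension loader (e.g. GLEW or GLAD) providing glGenerateMipmap is assumed, and tex is an already-loaded texture:

    #include <GL/gl.h>                             // plus a loader exposing glGenerateMipmap

    void enableMipmaps(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glGenerateMipmap(GL_TEXTURE_2D);           // build the chain of lower-resolution images
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }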

    7.5 Texture Mapping

    Texture mapping maps a pattern (of colors) to a surface.

    All approaches of texture mapping require a sequence of steps that involve mappings among three

    or four different coordinate systems. They are

- Screen coordinates - where the final image is produced
- World coordinates - where the objects onto which the textures will be mapped are described
- Texture coordinates - used to describe the texture
- Parametric coordinates - used to define curved surfaces

7.6 Texture Mapping in OpenGL
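A hedged sketch of the application-side part of texture mapping in OpenGL: create a texture object, supply the image, and set the filtering mode; per-fragment sampling using the interpolated texture coordinates then happens in the fragment shader. The image pointer and its dimensions are assumed application data:

    #include <GL/gl.h>

    GLuint makeTexture(const unsigned char* image, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, image);   // the texture image itself
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }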

    7.7 - Texture Generation Do not learn

    7.8 - Environment Maps

    7.9 Reflection Map Example Do not learn

    7.10 Bump Mapping

    Whereas texture maps give detail by mapping patterns onto surfaces, bump maps distort the normal

vectors during the shading process to make the surface appear to have small variations in shape, like

    bumps or depressions.
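A hedged C++ sketch of the idea behind bump mapping: neighboring values of the bump (height) map are differenced to estimate the local slope, and that slope is used to tilt the surface normal before shading. The height() lookup and the tangent-space simplification are assumptions for illustration:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float height(float u, float v);                // assumed bump-map lookup over [0,1] x [0,1]

    Vec3 perturbedNormal(float u, float v, float eps = 1.0f / 256.0f)
    {
        // Finite differences of the height field approximate the slope of the bumps.
        float du = (height(u + eps, v) - height(u - eps, v)) / (2 * eps);
        float dv = (height(u, v + eps) - height(u, v - eps)) / (2 * eps);

        // Assume the unperturbed normal is (0, 0, 1) in tangent space; tilt it by the slopes.
        Vec3 n{ -du, -dv, 1.0f };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return { n.x / len, n.y / len, n.z / len };
    }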

    7.11 Compositing Techniques

    7.12 - Sampling and Aliasing Do not learn

    Example Questions for Chapter 7

    Exam Jun 2013 5.1 (4 marks)

    Explain what is meant by bump mapping. What does the value at each pixel in a bump map

    correspond to? How is this data used in rendering?

Exam Jun 2013 5.2 (1 mark)

What technique computes the surroundings visible as a reflected image in a shiny object?


    Exam Jun 2013 5.3 (3 marks)

Describe what is meant by point sampling and linear filtering. Why is linear filtering a better choice

    than point sampling in the context of aliasing of textures?

    Exam Jun 2013 1.4 (5 marks)

The 4 major stages of the modern graphics pipeline are:

a. Vertex Processing
b. Clipping and primitive assembly
c. Rasterization
d. Fragment processing

In which of these 4 stages would the following normally occur?

1.4.1 Texture Mapping
1.4.2 Perspective Division
1.4.3 Inside-outside testing
1.4.4 Vertices are assembled into objects
1.4.5 Z-buffer algorithm

    Exam Nov 2012 5.1 (2 marks)

    Explain the term texture mapping

    Exam Nov 2012 5.3 (2 marks)

    Consider the texture map with U,V coordinates in the diagram on the left below. Draw the

    approximate mapping if the square on right were textured using the above image.

    Exam May 2012 6.a (3 marks)

Texture mapping requires interaction between the application program, the vertex shader and the fragment shader. What are the three basic steps of texture mapping?

    Exam May 2012 6.b (4 marks)

    Explain how the alpha channel and the accumulation buffer can be used to achieve antialiasing with

    line segments and edges of polygons.

    Exam May 2012 6.c (3 marks)

    Explain what is meant by texture aliasing. Explain how point sampling and linear filtering help to

    solve this problem.


    Exam Jun 2011 6.b (4 marks)

    Define the following terms and briefly explain their use in computer graphics

i) Bitmap
ii) Bump Mapping
iii) Mipmapping