Computer Graphics: Classic Rendering Pipeline Overview

  • Computer Graphics: Classic Rendering Pipeline Overview

  • What is Rendering? Rendering is the process of taking 3D models and producing a single 2D picture. The classic rendering approaches all work with polygons, triangles in particular. Rendering does not include the creation of the 3D models (modeling), nor the movement of the models (animation/physics/AI).

  • What is a Pipeline? Classic rendering is done as a pipeline: triangles travel from stage to stage, and at any point in time there are triangles at all stages of processing. A pipeline gives better throughput, and much of the pipeline is now in hardware.

  • Classic Rendering Pipeline: Model Space → (Model & View Transformations) → View Space → (Projection) → Normalized Device Space → (Viewport Mapping) → Screen Space.

  • Model Space: Model Space is the coordinate system attached to the specific model that contains the triangle. It is easiest to define models in this local coordinate system: it separates object design from world location and allows multiple instances of the object.

  • Model & View Transformations: These are 3D transformations that simply change the coordinate system with which the triangles are defined; the triangles are not actually moved. The spaces involved are Model Coordinate Space, World Coordinate Space, and View Coordinate Space.

  • Model to World Transformation: Each object is defined w.r.t. its own local model coordinate system. There is one world coordinate system for the entire scene.

  • Model to World Transformation: The transformation can be performed if one knows the position and orientation of the model coordinate system relative to the world coordinate system. These transformations place all objects into the same coordinate system (the world coordinate system). There is a different transformation for each object, and an object can consist of many triangles.

  • World to View Transformation: Once all the triangles are defined w.r.t. the world coordinate system, we need to transform them to view space. View space is defined by the coordinate system of the virtual camera. The camera is placed using world space coordinates. There is one transformation from world to view space for all triangles (assuming only one camera).

  • World to View Transformation: The camera's film is parallel to the view xy plane, and the camera points down the negative view z axis (at least for the right-handed OpenGL coordinate system; things are opposite for the left-handed DirectX system).

  • Placing the Camera: In OpenGL the default view coordinate system is identical to the world coordinate system, with the camera's lens pointing down the negative z axis. There are several ways to move the view from its default position.

  • Placing the Camera: Rotations and translations can be performed to place the view coordinate system anywhere in the world. Higher-level functions can also be used to place the camera at an exact position: gluLookAt(eye point, center point, up vector). There is a similar function in DirectX.

  • Transformation Order: Note that the order of transformations is important. Points move from model space to world space, then from world space to view (camera) space. This implies an order of P_view = (T_world2view) (T_model2world) (P_model); that is, the model-to-world transform needs to be applied to the point first.

  • OpenGL Transformation Order: OpenGL contains a single global ModelView matrix, set to the multiplication of the two transforms from the previous slide. This ModelView matrix is automatically applied to all points you send down the pipeline. You can either set the matrix directly or perform the operation MV = (MV) (SomeTransformationMatrix). Most utility operations (those that hide the details of the 4x4 matrix, such as "rotate 30 degrees about the X axis" or gluLookAt) perform this second type of operation. How does this impact the order of the transformations in your OpenGL code? See the sketch below.
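
    Because each utility call post-multiplies the current matrix, the view transformation must appear before the model transformations in code, even though it is applied to the points last. A minimal sketch, assuming a current GL context and placeholder transform values (drawModel is a hypothetical function that issues the model's triangles):

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        // World-to-view transform first (applied to points last)
        gluLookAt(0.0, 2.0, 10.0,   // eye point
                  0.0, 0.0, 0.0,    // center point
                  0.0, 1.0, 0.0);   // up vector
        // Model-to-world transform second (applied to points first)
        glTranslatef(5.0f, 0.0f, -2.0f);
        glRotatef(30.0f, 1.0f, 0.0f, 0.0f);  // rotate 30 degrees about the X axis
        drawModel();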

  • World to View Details: Just to give you a taste of what goes on behind the scenes with gluLookAt: it needs to form a 4x4 matrix that transforms world coordinate points into view coordinate points. To do this it simply forms the matrix that represents the series of transformation steps that get the camera coordinate system to line up with the world coordinate system. How does it do that? What would the steps be if you had to implement the function in the API?

  • View Space: There are several operations that take place in view space coordinates: back-face culling, view volume clipping, and lighting.

    Note that view space is still a 3D coordinate system

  • Back-face Culling: Back-face culling removes triangles that are not facing the viewer (the back face is towards the camera; the normal extends off the front face). The default is to assume triangles are defined counter-clockwise (CCW), at least for a right-handed coordinate system (OpenGL); DirectX's left-handed coordinate system is backwards (CW is front facing).

  • Surface Normal: Each triangle has a single surface normal. The normal is perpendicular to the plane of the triangle and is an easy way to define the orientation of the surface. Again, the normal is just a vector (no position).

  • Computing the Surface Normal: Let V1 be the vector from point A to point B, and V2 be the vector from point A to point C. Then N = V1 x V2, and N is often normalized. Note that the order of vertices becomes important: triangle ABC has an outward facing normal, while triangle ACB has an inward facing normal.
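
    A small helper in plain C (the Vec3 type and the normalization step are just for illustration):

        #include <math.h>

        typedef struct { float x, y, z; } Vec3;

        Vec3 triangleNormal(Vec3 a, Vec3 b, Vec3 c) {
            Vec3 v1 = { b.x - a.x, b.y - a.y, b.z - a.z };   // V1 = B - A
            Vec3 v2 = { c.x - a.x, c.y - a.y, c.z - a.z };   // V2 = C - A
            Vec3 n  = { v1.y * v2.z - v1.z * v2.y,           // N = V1 x V2
                        v1.z * v2.x - v1.x * v2.z,
                        v1.x * v2.y - v1.y * v2.x };
            float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
            if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }  // normalize
            return n;
        }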

  • Back-face Culling: Recall that V1 . V2 = |V1| |V2| cos(θ). If both vectors are unit vectors this simplifies to V1 . V2 = cos(θ). Recall that cos(θ) is positive if θ is in [-90°, +90°]. Thus, if the dot product of the view vector (V) and the polygon normal vector (Np) is positive, we can cull (remove) the triangle.

  • Back-face Culling: We do need to compute a different view vector for each triangle rendered; it needs to be as if the camera is looking directly at the triangle. Since the triangle is already in view space and the camera origin is at (0, 0, 0) in view space, we can just use a single point on the triangle as the view vector.

  • Back-face Culling: This technique should remove approximately half the triangles in a typical scene at a very early stage in the pipeline, and we always want to dump data as early as possible. Dot products are really fast to compute, and this can be optimized further because all that is necessary is the sign of the dot product.

  • Back-face Culling: When using an API such as OpenGL or DirectX there is a toggle to turn back-face culling on/off. There is also a toggle to select which side is considered the front side of the triangle (the side with the normal or the other side).
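
    In OpenGL those toggles look like this (a sketch; a current GL context is assumed, and the values shown are the defaults):

        glEnable(GL_CULL_FACE);   // turn back-face culling on
        glCullFace(GL_BACK);      // discard the back side of triangles
        glFrontFace(GL_CCW);      // counter-clockwise winding is front-facing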

  • View Volume Clipping: View volume clipping removes triangles that are not in the camera's sight. The view volume of a perspective camera is a 3D shape that looks like a pyramid with its top cut off, called a frustum; thus, this step is sometimes called frustum clipping. The frustum is defined by near and far clipping planes as well as the field of view. More info later when talking about projections.

  • View Volume Clipping

  • View Volume Clipping: View volume clipping happens automatically in OpenGL and DirectX. You need to be aware of it because it is easy to get black screens when you set your view volume to be the wrong size. Also, for some of the game speed-up techniques we will need to perform some view volume clipping by hand in software.

  • Lighting: The easiest form of lighting is to just assign a color to each vertex (again, color is a state-machine type of thing). More realistic forms of lighting involve calculating the color value based on simulated physics.

  • Real-world Lighting: Photons emanate from light sources. Photons collide with surfaces and are absorbed, reflected, or transmitted. Eventually some of the photons make it to your eyes, enabling you to see.

  • Lighting Models: There are different ways to model real-world lighting inside a computer. Local reflection models: OpenGL, Direct3D. Global illumination models: raytracing, radiosity.

  • Local Reflection Models: These calculate the reflected light intensity from a point on the surface of an object using only direct illumination, as if the object were alone in the scene. Some important artifacts not taken into account by local reflection models are shadows from other objects, inter-object reflection, and refraction.

  • Phong Local Reflection Model: Three types of lighting are considered in the Phong model: diffuse, specular, and ambient. These three types of light are then combined into a color for the surface at the point in question.

  • Diffuse: Diffuse reflection is what happens when light bounces off a matte surface. Perfect diffuse reflection scatters light equally in all directions.

  • Diffuse: We don't actually cast rays from the light source and scatter them in all directions, hoping one of them will hit the camera; this technique is not very efficient! Even offline techniques such as radiosity, which try to simulate diffuse lighting, don't go this far. We just need to know the amount of light falling on a particular surface point.

  • Diffuse: The amount of light reflected (the brightness) of the surface at a point is proportional to the cosine of the angle between the surface normal, N, and the direction of the light, L. In particular: Id = Ii cos(θ) = Ii (N . L), where Id is the resulting diffuse intensity, Ii is the incident intensity, and N and L are unit vectors.

  • Diffuse: A couple of examples. Ii = 0.8, θ = 0: Id = 0.8, so the full amount is reflected. Ii = 0.8, θ = 45°: Id = 0.57, so 71% is reflected. A small code sketch follows.
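
    As a plain-C sketch (using the Vec3 type from the earlier normal sketch; N and L are assumed to be unit vectors pointing away from the surface and toward the light, respectively):

        float diffuseIntensity(float Ii, Vec3 N, Vec3 L) {
            float nDotL = N.x * L.x + N.y * L.y + N.z * L.z;  // cos(theta)
            if (nDotL < 0.0f) nDotL = 0.0f;                   // surface faces away from the light
            return Ii * nDotL;                                 // Id = Ii (N . L)
        }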

  • Diffuse: Diffuse reflection depends only on the orientation of the surface and the position of the light; it does not depend on the viewing position. (The bottom sphere is viewed from a slightly lower position than the top sphere.)

  • Specular: Specular highlights are the mirror-like reflections found on shiny metals and plastics.

  • Specular: N is again the normal of the surface at the point we are lighting, and L is again the direction to the light source. R is the reflection vector, and V is the direction to the viewer (camera).

  • Specular: We want the intensity to be greatest in the direction of the reflection vector and to fall off quite fast around the reflection vector. In particular: Is = Ii cos^n(Ω) = Ii (R . V)^n, where Is is the resulting specular intensity, Ii is the incident intensity, R and V are unit vectors, and n is an index that simulates the degree of surface imperfection.

  • Specular: As n gets bigger the drop-off around R is faster. At n = infinity, the surface is a perfect mirror (all reflection is directly along R): cos(0) = 1 and 1^infinity = 1, while cos(anything bigger than 0) is a number < 1, and (number < 1)^infinity = 0.

  • Specular: Examples of various values of n. Left: diffuse only. Middle: low-n specular added to diffuse. Right: high-n specular added to diffuse.

  • Specular: Calculation of N, V and L is easy: N with a cross product on the triangle vertices, V and L with the surface point and the camera or light position, respectively. Calculation of R requires mirroring L about N, which requires a bit of geometry: R = 2 N ( N . L ) - L. (Note: Foley p. 730 has a good explanation of this geometry.)

  • Specular: The reflection vector, R, is time consuming to compute, so often it is approximated with the halfway vector, H, which is halfway between the light direction and the viewing direction: H = (L + V) / 2, normalized to unit length. Then the equation is Is = Ii (H . N)^n.
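
    A sketch of the halfway-vector form in plain C (Vec3 from the earlier sketch; L and V are assumed to be unit vectors from the surface point toward the light and the camera):

        float specularIntensity(float Ii, Vec3 N, Vec3 L, Vec3 V, float n) {
            Vec3 h = { L.x + V.x, L.y + V.y, L.z + V.z };            // halfway direction
            float len = sqrtf(h.x * h.x + h.y * h.y + h.z * h.z);
            if (len > 0.0f) { h.x /= len; h.y /= len; h.z /= len; }  // normalize H
            float hDotN = h.x * N.x + h.y * N.y + h.z * N.z;
            if (hDotN < 0.0f) hDotN = 0.0f;
            return Ii * powf(hDotN, n);                               // Is = Ii (H . N)^n
        }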

  • Specular: Specular reflection depends on the orientation of the surface, the position of the light, and the viewing position. The bottom picture was taken with a slightly lower viewing position; the specular highlights change when the camera moves.

  • Ambient: Note in the previous examples that the part of the sphere not facing the light is completely black. In the real world, light would bounce off other objects (like floors and walls) and eventually some light would get to the back of the sphere. This global bouncing is what the ambient component models, and "models" is a very loose term here because it isn't at all close to what happens in the real world.

  • Ambient: The amount of ambient light added to the point being lit is simply Ia. Note that this doesn't depend on surface orientation, light position, or viewing direction.

  • Phong Local Illumination Model: The three components of reflected light are combined to form the total reflected light: I = Ka Ia + Kd Id + Ks Is, where Ia, Id and Is are as computed previously and Ka, Kd and Ks are three constants that control how to mix the components. Additionally, Ka + Kd + Ks = 1. The OpenGL and DirectX models are both based on the Phong local illumination model.

  • OpenGL Model - Light Color: The incident light (Ii) represents the color of the light source, so we need three values (Iir, Iig, Iib). Example: (1.0, 0.0, 0.0) is a red light. Lighting calculations to determine Ia, Id, and Is now must be done three times each, since each color channel is calculated independently. Further control is gained by defining separate (Iir, Iig, Iib) values for ambient, diffuse, and specular.

  • OpenGL Model - Light Color: So for each light in the scene you need to define the following colors: ambient (r, g, b), diffuse (r, g, b), and specular (r, g, b). The ambient Ii values are used in the Ia equation, the diffuse Ii values in the Id equation, and the specular Ii values in the Is equation.

  • OpenGL Model - Material Color: Material properties (K values). The equations to compute Ia, Id and Is just compute how much light from the light source is reflected off the object; we must also define the color of the object: ambient color (r, g, b), diffuse color (r, g, b), and specular color (r, g, b).

  • OpenGL Model - Color: The ambient material color is multiplied by the amount of reflected ambient light (Ka Ia), with a similar process for diffuse and specular. Then, just like in the Phong model, they are all added together to produce the final color. Note that each K and I is a vector of three color values that are all computed independently. You also need to define a shininess material value to be used as the n value in the specular equation.
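
    A typical fixed-function setup might look like this (a sketch; the particular colors and shininess are placeholder values, and a current GL context is assumed):

        GLfloat lightAmbient[]  = { 0.2f, 0.2f, 0.2f, 1.0f };
        GLfloat lightDiffuse[]  = { 1.0f, 1.0f, 1.0f, 1.0f };
        GLfloat lightSpecular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        glLightfv(GL_LIGHT0, GL_AMBIENT,  lightAmbient);
        glLightfv(GL_LIGHT0, GL_DIFFUSE,  lightDiffuse);
        glLightfv(GL_LIGHT0, GL_SPECULAR, lightSpecular);

        GLfloat matAmbient[]  = { 0.2f, 0.0f, 0.0f, 1.0f };    // a red material
        GLfloat matDiffuse[]  = { 0.8f, 0.0f, 0.0f, 1.0f };
        GLfloat matSpecular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
        glMaterialfv(GL_FRONT, GL_AMBIENT,   matAmbient);
        glMaterialfv(GL_FRONT, GL_DIFFUSE,   matDiffuse);
        glMaterialfv(GL_FRONT, GL_SPECULAR,  matSpecular);
        glMaterialf (GL_FRONT, GL_SHININESS, 50.0f);            // the n exponent

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);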

  • OpenGL Model - Color: By mixing the material color with the lighting color, one can get realistic lighting. White light, red material:

    Green light, same red material

  • OpenGL Model - Emissive: The OpenGL model also allows one to make objects emissive, so they look like they produce light (glow). The extra light they produce isn't counted as an actual light as far as the lighting equations are concerned; the emissive light values (Ke) are simply added to the resulting reflected values.

  • OpenGL Model - Attenuation: One can also specify how fast the light will fade as it travels away from the light source. This is controlled by an attenuation equation: A = 1 / (kc + kl d + kq d^2), where the three k values can be set by the programmer and d represents the distance between the light source and the vertex in question.

  • OpenGL Model - Equation: So the total equation is: Vertex Color = Ke + A ( (Ka La) + (Kd Ld (L . N)) + (Ks Ls (((L + V) / 2) . N)^shininess) ), computed for each of the three colors (R, G, B) independently, for each light turned on in the scene, for each vertex in the scene. Note that the above equation is slightly simplified: if either of the dot products is negative, 0 is used instead; the spotlight effect is not included; and global ambient light is not included. A worked sketch follows below.
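
    Putting the pieces together for one color channel and one light, reusing the diffuse and specular helpers sketched earlier (a plain-C sketch with the same simplifications: no spotlight term, no global ambient):

        float vertexColorChannel(float Ke, float Ka, float La,
                                 float Kd, float Ld, float Ks, float Ls,
                                 float shininess, Vec3 N, Vec3 L, Vec3 V,
                                 float kc, float kl, float kq, float d) {
            float A = 1.0f / (kc + kl * d + kq * d * d);              // attenuation
            float diff = diffuseIntensity(Ld, N, L);                  // Ld (L . N), clamped at 0
            float spec = specularIntensity(Ls, N, L, V, shininess);   // Ls (H . N)^shininess, clamped at 0
            return Ke + A * (Ka * La + Kd * diff + Ks * spec);
        }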

  • Light Sources: There are several classifications of lights:

    Point lights

    Directional lights

    Spot lights

    Extended lights

  • Projection: Projection is what takes the scene from 3D down to 2D. There are several types of projection: orthographic (CAD), perspective (normal camera), and stereographic (fish-eye lens).

  • Orthographic Projections: Equations: x' = x, y' = y. The main property preserved is that parallel lines in 3D remain parallel in 2D.

  • Perspective Projections: Equations: x' = f x / z, y' = f y / z. This creates a foreshortening effect. The main property preserved is that straight lines in 3D remain straight in 2D.

  • Projection in OpenGL: Set the projection matrix instead of the modelview matrix. The equations given previously can be turned into 4x4 matrix form (what else would you expect!). Orthographic (view volume is a rectangular box): glOrtho(left, right, bottom, top, near, far). Perspective (view volume is a frustum): gluPerspective(vertFOV, aspectRatio, nearClipPlane, farClipPlane); note that gluPerspective takes the vertical field of view.
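
    In code (a sketch; a GL context is assumed and the values are placeholders):

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45.0, 4.0 / 3.0, 0.1, 100.0);    // 45-degree vertical FOV, 4:3 aspect, near, far
        // or, for an orthographic view volume:
        //   glOrtho(-10.0, 10.0, -7.5, 7.5, 0.1, 100.0);
        glMatrixMode(GL_MODELVIEW);                      // switch back before drawing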

  • FOV Calculation: It is important to pick a good FOV. If the image on the screen stays the same size, the bigger the FOV, the closer the center of projection is to the image plane.

  • FOV Calculation: This implies that the human viewer needs to move their eye closer to the actual screen to keep the scene from being distorted as the FOV increases. To pick a good FOV: put the actual-size window on the screen, sit at a comfortable viewing distance, and determine how much of your eye's viewing angle that window subtends. This method effectively places your eye at the center of projection and will create the least distortion; see the small calculation below.
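
    That procedure boils down to a small calculation (a sketch; screenHeight is the physical height of the window and viewDistance is the eye-to-screen distance, in the same units):

        #include <math.h>

        // Vertical FOV (in degrees) that places the eye at the center of projection
        double comfortableFov(double screenHeight, double viewDistance) {
            return 2.0 * atan((screenHeight / 2.0) / viewDistance) * 180.0 / M_PI;
        }

    For example, a window 0.3 m tall viewed from 0.6 m away gives roughly a 28-degree vertical FOV.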

  • Normalized Device Space: This is our first 2D space, although some 3D information is often kept. The major operations that happen in this space are 2D clipping, pixel shading, hidden surface removal, and texture mapping.

  • 2D Clipping: When 3D objects were clipped against the view volume, triangles that were partially inside the volume were kept. When these triangles make it to this stage they have parts that hang outside the window, and these parts are clipped.

  • Pixel Shading: The lighting equations we have seen are all about obtaining a color at a particular vertex. Pixel shading is all about taking those colors and coloring each pixel in the triangle. There are three main methods: flat shading, Gouraud shading, and Phong shading (not to be confused with the Phong local reflectance model previously discussed).

  • Flat Shading: A single color is computed for the triangle, at the center of the triangle, using the normal of the triangle surface as N. The computed color is used to shade every pixel in the triangle uniformly, which produces images that clearly show the underlying polygons. OpenGL: glShadeModel(GL_FLAT);

  • Flat Shading Example

  • Gouraud Shading: Three colors are computed for the triangle, one at each vertex, using neighbor-averaged normals as each N. What is a neighbor-averaged normal? The average of the surface normals of all triangles that share the vertex. If the triangle model is approximating an analytical surface, then normals could instead be computed directly from the surface description (I did this in my sphere examples for the lighting model).

  • Gouraud Shading: Bi-linear interpolation is used to shade the pixels from the three vertex colors; the interpolation happens in 2D. The advantage Gouraud shading has over flat shading is that the underlying polygon structure can't be seen. OpenGL: glShadeModel(GL_SMOOTH);
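
    A sketch of computing neighbor-averaged vertex normals in plain C (the indexed-triangle mesh layout is an assumption for illustration; Vec3 and triangleNormal are from the earlier sketch). The resulting normals would then be passed per vertex with glNormal3f while drawing with glShadeModel(GL_SMOOTH):

        void computeVertexNormals(const Vec3 *verts, int numVerts,
                                  const int (*tris)[3], int numTris, Vec3 *out) {
            for (int i = 0; i < numVerts; i++)
                out[i] = (Vec3){ 0.0f, 0.0f, 0.0f };
            for (int t = 0; t < numTris; t++) {            // accumulate face normals at each vertex
                Vec3 n = triangleNormal(verts[tris[t][0]], verts[tris[t][1]], verts[tris[t][2]]);
                for (int k = 0; k < 3; k++) {
                    out[tris[t][k]].x += n.x;
                    out[tris[t][k]].y += n.y;
                    out[tris[t][k]].z += n.z;
                }
            }
            for (int i = 0; i < numVerts; i++) {           // normalize the accumulated sums
                float len = sqrtf(out[i].x * out[i].x + out[i].y * out[i].y + out[i].z * out[i].z);
                if (len > 0.0f) { out[i].x /= len; out[i].y /= len; out[i].z /= len; }
            }
        }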

  • Gouraud Shading Example: The problem with Gouraud shading is that specular highlights don't interpolate correctly. If the object isn't constructed with enough triangles, artifacts can be seen (left: 320 triangles, right: 5120 triangles).

  • Phong Shading: Many colors are computed for the triangle, one at the back-projection of each pixel onto the 3D triangle, using normals that have been bi-linearly interpolated (in 2D) from the normals at the three vertices. The normals at the three vertices are still computed as neighbor-averaged normals. Each pixel gets its own computed color.

  • Phong Shading: The advantage Phong shading has over Gouraud shading is that it allows the interior of a triangle to contain specular highlights. The disadvantage is that it is easily 4-5 times more expensive. OpenGL does not support Phong shading.

  • Phong Shading Example

  • Shading Comparison ExamplesWireframe

    Flat

    Gouraud

    Phong

  • Hidden Surface Removal: The problem is that we have polygons that overlap in the image and we want to make sure that the one in front shows up in front. There are several ways to solve this problem: the Painter's algorithm and the Z-buffer algorithm.

  • Painter's Algorithm: Sort the polygons by depth from the camera, then paint the polygons in order from farthest to nearest.

  • Painter's Algorithm: There are two major problems with the Painter's algorithm: it is wasteful of time because every polygon gets drawn on the screen even if it is entirely hidden, and it only handles polygons that don't overlap in the z-coordinate.

  • Z-buffer Algorithm: The Z-buffer algorithm is pixel based (the Painter's algorithm is object based). A buffer of identical size to the color buffer is created, called the Z-buffer (or depth buffer). Recall that the color buffer is where the resulting colors are placed (two color buffers when double-buffering). The values in the Z-buffer are all set to the max; the range of depth values in the view volume is mapped to 0.0 to 1.0. OpenGL: glClearDepth(1.0f); glClear(GL_DEPTH_BUFFER_BIT);

  • Z-buffer Algorithm: For each object, for each projected pixel (x, y) in the object: if the z value of the current pixel is less than the Z-buffer's value at (x, y), then color the pixel at (x, y) in the color buffer and replace the value at (x, y) in the Z-buffer with the current z value; otherwise do nothing, because a previously rendered object is closer to the viewer at the projected (x, y) location.
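
    The inner test as a plain-C sketch (the buffer layout and the caller that produces each fragment's color and depth are assumptions for illustration):

        // width*height color and depth buffers; the depth buffer is cleared to 1.0 (the max)
        void plotPixel(int x, int y, float z, unsigned int color,
                       unsigned int *colorBuf, float *zBuf, int width) {
            int idx = y * width + x;
            if (z < zBuf[idx]) {        // this fragment is closer than what is stored
                colorBuf[idx] = color;  // write its color
                zBuf[idx] = z;          // and remember the new depth
            }
            // else: something already drawn is closer, so do nothing
        }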

  • Z-buffer Algorithm: Pros: objects can be drawn in any order, objects can overlap in depth, and it is hardware supported in almost every graphics card. Cons: memory cost (1024x768 is about 786K pixels; at 4 bytes per pixel that is about 3 MB, and 4 bytes are often necessary to get the resolution we want in depth), some pixels are still drawn and then replaced, and there are problems with transparent objects.

  • Z-buffer Algorithm and Transparency: Transparent colors need to be blended with the colors of the opaque objects behind them. There are different blending functions (more later). To make blending work, the correct opaque color needs to be known, so the opaque objects need to be drawn before the transparent ones. However, we still have the following problem: black is opaque (drawn first), blue is transparent (drawn second), red is transparent (drawn third).

  • Z-buffer Algorithm and Transparency: The problem: if blue sets the Z-buffer to its depth value, then red is assumed to be blocked by blue and won't get its color blended properly. Partial solutions: order the transparent objects from back to front (fails: transparent objects can overlap in depth); turn off the Z-buffer test for transparent objects (fails: transparent objects won't be blocked by opaque ones).

  • Z-buffer Algorithm and Transparency: Correct solution: make the Z-buffer read-only during the drawing of the transparent objects. Z-buffer tests are still done, so opaque objects block transparent objects that are behind them, but Z-buffer values are not changed, so transparent objects don't block other transparent objects that are behind them. OpenGL: glDepthMask(GL_TRUE) for read/write, glDepthMask(GL_FALSE) for read-only.
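
    The resulting per-frame draw order in OpenGL looks roughly like this (a sketch; drawOpaqueObjects and drawTransparentObjects are placeholder functions):

        drawOpaqueObjects();                                 // opaque geometry first, normal depth test

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);                               // Z-buffer becomes read-only
        drawTransparentObjects();                            // blended, still tested against opaque depths
        glDepthMask(GL_TRUE);                                // restore read/write for the next frame
        glDisable(GL_BLEND);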

  • Texture Mapping: Real objects contain subtle changes in both color and orientation. We can't model these objects with tons of little triangles to capture these changes: modeling would be too hard and rendering would be too time consuming. Instead we use mapping techniques to simulate them; texture mapping handles changes in color.

  • Texture Mapping: Model objects with normal-sized polygons and map 2D images onto the polygons.

  • The Stages of Texture Mapping: There are four major stages to texture mapping: (1) obtaining texture parameter space coordinates from 3D coordinates using a projector function; (2) mapping the texture parameter space coordinates into texture image space coordinates using a corresponder function; (3) sampling the texture image at the computed texture space coordinates; (4) blending the texture value with the object color.

  • Projector Functions: A projector function is simply a way to get from a 3D point on the object to a 2D point in texture parameter space. Texture parameter space is represented by two coordinates (u, v), both in the range [0..1).

  • Projector Functions: Projector functions can be computed automatically during the rendering process by using an intermediate object (spherical mapping, cylindrical mapping, planar mapping), or they can be pre-computed during the modeling stage and their results stored with each vertex.

  • Intermediate Objects in Projector Functions: An imaginary intermediate object is placed around the modeled object being textured. Points on the modeled object are projected onto the intermediate object, and the texture parameter space is wrapped onto the intermediate object in a known way.

  • Intermediate Objects in Projector Functions

    Also see p. 121 of Real-Time Rendering

  • Intermediate Objects in OpenGL: OpenGL has a glTexGen function that allows one to specify the type of projector function used. Often this is used to do special types of mapping, such as environment mapping (later). The quadric objects, which are basically the same shape as the intermediate objects, have their texture coordinates generated in this way.

  • Pre-computing Projector Functions: Texture coordinates are simply defined at each vertex, directly mapping the 3D vertex into 2D parameter space. In OpenGL:

        glBegin(GL_QUADS);
          glTexCoord2f(0, 1); glVertex3f(20, 20, 2); // A
          glTexCoord2f(0, 0); glVertex3f(20, 10, 2); // B
          glTexCoord2f(1, 0); glVertex3f(30, 10, 2); // C
          glTexCoord2f(1, 1); glVertex3f(30, 20, 2); // D
        glEnd();

  • Texture EditorsTexture editors can be used to help in the manual placement of texture coordinates

  • Interpolating Texture Coordinates: Texture coordinates only provide (u, v) values at the vertices of the polygon; we still need to fill in each pixel location in the interior of the polygon. These are filled by bi-linearly interpolating the texture parameter space coordinates in 2D space, which can be done at the same time as the interpolation for lighting and depth calculations.

  • Corresponder FunctionsThe Corresponder function takes the (u, v) values and maps them into the texture image space (e.g. 128 pixels by 64 pixels)

  • Corresponder Functions: Corresponder functions allow us to change the size of the image used without having to redefine our projector functions (or redefine all our texture coordinates), map to subsections of the image, and specify what happens outside the range [0..1].

  • Mapping to a Subsection: This allows you to store several small texture images in a single large texture image. By default the mapping covers the entire texture image.

  • What Happens Outside [0..1]: The 3 main approaches are (the OpenGL calls are sketched after this list):

    Repeat/Tile: Image is repeated multiple times by simply dropping the integer part of the value

    Clamp: Values are clamped to the range, resulting in the edge values being repeated

    Border: Values outside the range are displayed in a given border color
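
    In OpenGL these behaviors are selected per texture with glTexParameteri (a sketch; the texture object is assumed to be bound):

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);    // repeat/tile (the default)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        // alternatives: GL_CLAMP (repeat the edge values), GL_CLAMP_TO_BORDER (use a border color)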

  • Sampling: In general, the size of the texture image and the size of the projected surface are not the same. If the projected surface is larger than the texture image, the image will need to be blown up to fit on the surface; this process is called magnification. If the projected surface is smaller than the texture image, the image will need to be shrunk to fit on the surface; this process is called minification.

  • Magnification: Recall that there are more pixels than texels, so there is no 1-to-1 correlation; we need to sample the texels for each pixel to determine the pixel's texture color. There are two main ways to sample texels: nearest neighbor and bi-linear interpolation.

  • MagnificationNearest neighbor sampling simply picks the texel closest to the projected pixel

    Bi-linear interpolation samples the 4 texels closest to the projected pixel and linearly interpolates their values in both the horizontal and vertical directions

  • Magnification: Nearest neighbor can give a crisper feel when little magnification is occurring, but bi-linear is usually the safer choice. Bi-linear also takes 4+ times as long.

    Also see p.130 Real-time rendering

  • Minification: Recall that there are more texels than pixels, so we need to integrate the colors from many texels to form a pixel's texture color. However, integration of all the associated texels is nearly impossible in real time, so we need to use a sampling technique.

  • Minification: Two common sampling techniques are nearest neighbor, which samples the texel value at the center of the group of associated texels, and bi-linear interpolation, which samples 4 texel values in the group of associated texels and bi-linearly interpolates them. Note that the sampling techniques are the same as in magnification, but the results are quite different.

  • Minification: For nearest neighbor, severe aliasing artifacts can be seen. They are even more noticeable as the surface moves with respect to the viewer (temporal aliasing).

    See NeHe Lesson 7 (press F to cycle through the filtering modes, Page Up/Down to go forward and back). See p. 132 of Real-Time Rendering.

  • Minification: Bi-linear interpolation is only slightly better than nearest neighbor for minification. When more than 4 texels need to be integrated together, this filter shows the same aliasing artifacts as nearest neighbor.

    See NeHe Lesson 7 (second filter in cycle)

  • Mipmaps: "mip" stands for multum in parvo, which is Latin for "many things in a small place". The basic idea is to improve minification by providing down-sampled versions of the original texture image: a pyramid of texture images.

  • Mipmaps: When minification would normally occur, instead use the mipmap image that most closely matches the size of the projected surface. If the projected surface falls in between mipmap images, either use nearest neighbor to pick the mipmap image closest to the projected surface size, or use linear interpolation to combine values from the 2 closest mipmap images.

  • Sampling in OpenGL: OpenGL allows you to select a magnification filter from nearest or linear. OpenGL allows you to select a minification filter from: nearest or linear (without mipmaps), or nearest or linear texel sampling with nearest or linear mipmap selection (4 distinct choices). (Bi-)linear texel sampling with linear mipmap selection is often called tri-linear filtering. See NeHe Lesson 7 (adjust the choice in code).
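
    The corresponding calls (a sketch; the texture is assumed to be bound and its mipmap levels already built, e.g. with gluBuild2DMipmaps):

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);                // magnification: bi-linear
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);  // minification: tri-linear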

  • Blending the Texture Value: Once a sampled texture value has been obtained, we need to blend it with the computed color value. There are three main ways to perform blending. Replace: replace the computed color with the texture color, effectively removing all lighting. Decal: like replace, but transparent parts of the texture are blended with the underlying computed color. Modulate: multiply the texture color by the computed color, producing a shaded and textured surface.
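
    In fixed-function OpenGL the blend mode is chosen with glTexEnv (a sketch):

        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);   // shaded and textured
        // alternatives: GL_REPLACE, GL_DECAL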

  • Blending Restrictions: The main problem with this simple form of texture map blending is that we can only blend with the final computed color. Thus, the texture will dim both the diffuse and specular terms, which can look unnatural: in reality a dark object may still have a bright highlight. If the diffuse and specular components could be interpolated across the pixels independently, then we could blend the texture with just the diffuse term. This is not part of the Classic Rendering Pipeline, but several vendors have tried to add implementations.

  • Texture Set Management: Each graphics card can handle a certain number of textures in memory at once. Even though memory in 3D cards has increased dramatically recently, the general rule of thumb is that you never have enough texture memory. The card usually has a built-in strategy, like LRU, to manage the working set, and OpenGL allows you to set priorities on the textures to enable you to adjust this process.

  • Viewport Mapping: This is the final transformation that occurs in the Classic Rendering Pipeline. The viewport transformation simply maps the generated 2D image to a portion of the 2D window used to display it; by default the entire window is used. This is useful if you want several views of a scene in the same window.
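
    In OpenGL this mapping is set with glViewport (a sketch; by default it covers the whole window):

        glViewport(0, 0, 400, 300);   // map the image to the lower-left quarter of an 800x600 window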