
Pipelines are for Wimps

Raycasting, Raytracing, and Hardcore Rendering

Ray Casting

• Definition time: there are two definitions of ray casting, the old and the new
• The old relates to 3D games of the Wolfenstein / Doom 1 era, where gameplay took place on a 2D plane
• The new definition: non-recursive ray tracing

Ray Tracing

• http://www.flipcode.com/archives/Raytracing_Topics_Techniques-Part_1_Introduction.shtml

Glass Ball

Rays from the Sun or from the Screen

• Rays could be programmed to travel in either direction
• We choose from the screen to the light: there are only X × Y pixels to trace
• From the light, we would need to emulate millions of rays just to find the few thousand that reach the screen

Our Rays

[Diagram: rays from the centre of projection at (0,0), through the viewport and screen, bounded by the clipping planes]

Into World Coordinates

[Diagram: the same screen and clipping planes unprojected into world coordinates]

Getting Each Initial Ray

• Origin = (0, 0, 0)
• Direction = (screenX, screenY, zMin)
  – screenX and screenY are the float locations of each pixel in projected world coordinates
  – zMin is the plane on which the screen exists
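A minimal sketch of that construction in Python (the names width/height, world_w/world_h for the viewport size on the zMin plane, and primary_ray itself are illustrative, not from the slides):

import numpy as np

def primary_ray(px, py, width, height, world_w, world_h, z_min):
    # Map the integer pixel coordinate to a float position on the
    # screen plane, centred on the camera axis.
    screen_x = (px + 0.5) / width * world_w - world_w / 2.0
    screen_y = (py + 0.5) / height * world_h - world_h / 2.0
    origin = np.zeros(3)                          # camera origin (0, 0, 0)
    direction = np.array([screen_x, screen_y, z_min], dtype=float)
    return origin, direction / np.linalg.norm(direction)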

Materials

• Surfaces must have their material properties set
  – Diffuse, reflective, emissive, and colour need to be considered (one possible representation is sketched below)
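A minimal sketch of how those properties might be carried per surface (the field names are my own, not from the slides):

from dataclasses import dataclass

@dataclass
class Material:
    colour: tuple       # base RGB colour, e.g. (1.0, 0.2, 0.2)
    diffuse: float      # fraction of incoming light scattered diffusely
    reflective: float   # fraction mirrored into a secondary ray
    emissive: float     # light the surface emits itself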

For Each Pixel (the main Raytrace loop)

For each pixel {
    Construct ray from camera through pixel
    Find first primitive hit by ray
    Determine colour at intersection point
    Draw colour to pixel buffer
}
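A runnable sketch of that loop, with stand-in intersect_scene and shade helpers (both invented here; a real tracer would fill them in with the intersection and lighting steps on the following slides):

import numpy as np

WIDTH, HEIGHT = 320, 240
frame = np.zeros((HEIGHT, WIDTH, 3))              # the pixel buffer

def intersect_scene(origin, direction):
    """Return the nearest hit, or None; stands in for the primitive loop."""
    return None

def shade(hit):
    """Stands in for the lighting calculation at the intersection point."""
    return np.ones(3)

for py in range(HEIGHT):
    for px in range(WIDTH):
        # Construct ray from camera through pixel
        direction = np.array([(px + 0.5) / WIDTH - 0.5,
                              (py + 0.5) / HEIGHT - 0.5,
                              -1.0])
        direction /= np.linalg.norm(direction)
        hit = intersect_scene(np.zeros(3), direction)   # first primitive hit
        frame[py, px] = shade(hit) if hit else 0.0      # draw colour to buffer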

Intersections

• The simplest way is to loop through all your primitives (all polygons)
  – If PolygonNormal · RayDirection (cos θ) ≥ 0, the face points away from the ray: ignore it
  – Otherwise, intersect the ray with the polygon
  – Or intersect the ray with the polygon's plane
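The culling test itself is one dot product (a sketch; the sign convention assumes normals point out of the visible side of each face):

import numpy as np

def faces_ray(normal, ray_dir):
    # n . d < 0: the normal opposes the ray's travel, so the front of
    # the face is turned toward the ray and is worth intersecting.
    return np.dot(normal, ray_dir) < 0.0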

Ray / Polygon Intersection

p0, p1 and p2 are the vertices of the triangle
point(u, v) = (1 − u − v) · p0 + u · p1 + v · p2
u > 0, v > 0, u + v ≤ 1.0

Line Representation

point(t) = p + t · d
point(t) is any point on the line; p is a known point on the line; d is the direction vector
Combined: p + t · d = (1 − u − v) · p0 + u · p1 + v · p2
A point on the line (p + t · d) which is also part of the triangle [(1 − u − v) · p0 + u · p1 + v · p2]

• http://www.lighthouse3d.com/opengl/maths/index.php?raytriint
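One standard way to solve that combined system for t, u and v is the Möller–Trumbore algorithm; a minimal sketch:

import numpy as np

def ray_triangle(orig, d, p0, p1, p2, eps=1e-9):
    """Solve orig + t*d = (1-u-v)*p0 + u*p1 + v*p2 for t, u, v."""
    e1, e2 = p1 - p0, p2 - p0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:
        return None                          # ray parallel to triangle plane
    f = 1.0 / a
    s = orig - p0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None                          # outside the u barycentric range
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None                          # fails u + v <= 1
    t = f * np.dot(e2, q)
    return (t, u, v) if t > eps else None    # reject hits behind the origin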

Intersecting a Plane

• A point on the plane = p1
• Plane normal = n
• Ray: p(t) = e + t·d
  – p(t) = point on ray, e = origin, d = direction vector
• t = [(p1 − e) · n] / (d · n)
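As a sketch, guarding against the d · n = 0 case (ray parallel to the plane):

import numpy as np

def ray_plane_t(e, d, p1, n, eps=1e-9):
    denom = np.dot(d, n)
    if abs(denom) < eps:
        return None                     # ray is parallel to the plane
    t = np.dot(p1 - e, n) / denom
    return t if t >= 0.0 else None      # reject hits behind the origin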

World / Object Coordinates

• We need to translate the ray into object coordinates (and the result back out) to get this to work
• Ray: p(t) = e + t·d
• Object-space ray: p(t) = Inv(Object→World)·e + t · Inv(Object→World)·d
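A sketch of that transformation with a 4×4 object-to-world matrix; points pick up the translation (homogeneous w = 1) while directions do not (w = 0):

import numpy as np

def ray_to_object_space(e, d, object_to_world):
    world_to_object = np.linalg.inv(object_to_world)    # Inv(Object->World)
    e_obj = (world_to_object @ np.append(e, 1.0))[:3]   # transform the origin
    d_obj = (world_to_object @ np.append(d, 0.0))[:3]   # transform the direction
    return e_obj, d_obj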

After Finding the Intersecting Plane

• You need a simple way to check for a hit or a miss
• If your object has a bounding box, this can be achieved through a line check

Miss Conditions / Hit Conditions

[Diagrams: example miss and hit cases for the bounding-box line check]

For Other Shaped Flat Polygons

• An even number of intersections with the polygon's boundary means a miss (see the sketch below)
• An odd number of intersections means a hit
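A sketch of that even-odd test in 2D (it assumes the candidate point and polygon vertices have already been projected into the polygon's plane):

def point_in_polygon(x, y, verts):
    """Even-odd rule: count boundary crossings of a ray cast along +x."""
    inside = False
    n = len(verts)
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        if (y0 > y) != (y1 > y):                       # edge spans the ray
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x_cross > x:                            # crossing to the right
                inside = not inside                    # odd flips, even restores
    return inside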

Task List for Ray Casting

1) Create a vector for each pixel on the screen
   a) From the origin of the camera matrix (0, 0, 0)
   b) That intersects with a pixel in the screen

2) Use this vector to create a trace through the world
   a) From the zMin to the zMax clipping volume
   b) Unprojected into world coordinates

3) Intersect the trace with every object in the world

4) When the ray hits an object, we need to check how the pixel should be lit (see the sketch below)
   a) Check if the ray has a direct view to each of the lights in the scene
   b) Calculate the input from each light
   c) Colour the pixel based on the lighting and surface properties
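A sketch of step 4, reusing the invented Material fields and intersect_scene helper from earlier (light.position, light.intensity and hit.t are likewise assumptions):

import numpy as np

def light_pixel(point, normal, material, lights, intersect_scene):
    colour = np.array(material.colour) * material.emissive
    for light in lights:
        to_light = light.position - point
        dist = np.linalg.norm(to_light)
        to_light /= dist
        # 4a) shadow ray: is the path to this light unobstructed?
        hit = intersect_scene(point + 1e-4 * normal, to_light)
        if hit is not None and hit.t < dist:
            continue                                   # light is blocked
        # 4b) input from this light (Lambert's cosine law)
        lambert = max(0.0, float(np.dot(normal, to_light)))
        # 4c) colour from lighting and surface properties
        colour = colour + np.array(material.colour) * material.diffuse \
                 * lambert * light.intensity
    return np.clip(colour, 0.0, 1.0)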

One extra task for Ray Casting

• After intersection, calculate the reflected vector
  – r = d − 2 (d · n) n, built from the dot product of the ray and the surface normal
• Then cast a new ray (see the sketch below)
  – This continues in a recursive fashion until:
    • A ray heads off into the universe
    • A ray hits a light
    • We reach our maximum recursion level
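A sketch of the recursive step; intersect_scene again stands in for the nearest-hit search, and the three stopping conditions are the ones listed above:

import numpy as np

def intersect_scene(origin, direction):
    return None                             # placeholder nearest-hit search

def reflect(d, n):
    return d - 2.0 * np.dot(d, n) * n       # r = d - 2 (d . n) n

def trace(origin, d, depth, max_depth=4):
    if depth > max_depth:                   # maximum recursion level
        return np.zeros(3)
    hit = intersect_scene(origin, d)
    if hit is None:                         # ray heads off into the universe
        return np.zeros(3)
    if hit.material.emissive > 0.0:         # ray hits a light
        return np.array(hit.material.colour) * hit.material.emissive
    r = reflect(d, hit.normal)              # cast a new, reflected ray
    return hit.material.reflective * trace(hit.point + 1e-4 * hit.normal,
                                           r, depth + 1)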

How we would like to be able to calculate light

Conservation of Energy

• A physics-based approach to lighting
  – Surfaces will absorb some light and reflect some light
  – Any surface may also be light-emitting
  – Creating a large simultaneous equation can solve the light distribution (I mean LARGE)
  – The light leaving a point is the sum of the light emitted plus the sum of all reflected light

Don’t Scream (loudly)

The Rendering Equation

http://en.wikipedia.org/wiki/Rendering_equation

L_o(x, ω) = L_e(x, ω) + ∫_Ω f_r(x, ω′, ω) · L_i(x, ω′) · (ω′ · n) dω′

• L_o(x, ω): light leaving point x in direction ω
• L_e(x, ω): light emitted by point x in direction ω
• ∫_Ω … dω′: integral over the input hemisphere
• f_r(x, ω′, ω): bidirectional reflectance function (BRDF) into direction ω from direction ω′
• L_i(x, ω′): light toward point x from direction ω′
• (ω′ · n): attenuation of inward light related to the incidence angle

The Monte Carlo Method

• Repeated random sampling
• Deterministic algorithms may be unfeasibly complex (as light transport is)
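As a toy example of the idea, here is a Monte Carlo estimate of the hemisphere integral of cos θ (the attenuation term from the rendering equation), whose exact value is π:

import numpy as np

rng = np.random.default_rng(0)

def sample_hemisphere():
    """Uniform random direction on the unit hemisphere around +z."""
    z = rng.uniform(0.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    r = np.sqrt(1.0 - z * z)
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

# Uniform hemisphere sampling has pdf 1/(2*pi), so each sample is
# weighted by 2*pi.  cos(theta) of a direction is just its z component.
N = 100_000
total = sum(sample_hemisphere()[2] for _ in range(N))
estimate = 2.0 * np.pi * total / N
print(estimate)   # ~3.14, converging on pi as N grows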

Metropolis Light Transport

• A directed approach to simplifying sampling of the BRDF
• Still considered a Monte Carlo method
• It directs the randomness, taking more samples from directions with a higher impact on the point being assessed

BRDF Tracing

http://graphics.stanford.edu/papers/metro/

Metropolis Light Transport

http://graphics.stanford.edu/papers/metro/

Radiosity

• Simplifying the rendering equation by making all surfaces perfectly diffuse reflectors
• This simplifies the BRDF function
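Concretely (a standard identity, not from the slides): for a perfectly diffuse surface the BRDF collapses to the constant f_r = ρ/π, where ρ is the surface albedo, so it can be pulled straight out of the rendering-equation integral.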

Parallel Rendering (Rendering Farms)

• There are three major type definitions
  – Sort-First
  – Sort-Middle
  – Sort-Last
• These are just the outlines; in reality things need to be customised based on technical limitations / requirements

Sort-Middle

[Diagram: Application → parallel Geometry (Vertex Shading) units → Sort → parallel Fragment (Pixel Shading) units → one Display per tile]

Sort-Middle

• Pros
  – The number of vertex processors is independent of the number of pixel processors
• Cons
  – Normal maps may mess with polygons in overlap areas
  – Correcting aliasing between display tiles (RenderMan??)
  – Requires specific hardware
  – Rendering may not be balanced between display tiles

Sort-Last

[Diagram: Application → parallel Geometry (Vertex Shading) units, each paired with its own Fragment (Pixel Shading) unit → Composite → Display]

Sort-Last

• Pros
  – Can be easily created from networked PCs
• Cons
  – Each vertex processor requires a pixel processor
  – Unsorted geometry means each pixel processor must carry a full-size frame buffer
    • Limited scalability
  – Composing the image requires integrating X frame buffers considering X Z-buffers (see the sketch below)
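A sketch of that compositing step for two of the framebuffers (numpy arrays; applied pairwise, or arranged in a tree as the next slide suggests):

import numpy as np

def z_composite(colour_a, depth_a, colour_b, depth_b):
    # Per-pixel depth test: keep the nearer fragment from either buffer.
    nearer = depth_a <= depth_b
    colour = np.where(nearer[..., None], colour_a, colour_b)
    depth = np.where(nearer, depth_a, depth_b)
    return colour, depth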

Sort-Last

• Compositing can be done more efficiently (in memory requirements) using a binary-tree approach
  – May lead to idle processors
• Another approach is a binary-swap architecture
  – Heavy data-bus usage

Sort-First

[Diagram: Application → Sort → parallel Geometry (Vertex Shading) units → parallel Fragment (Pixel Shading) units → one Display per tile]

Sort-First

• Pros
  – Pixel processors only need a tile of the display buffer
  – Can be created using commodity PC hardware
  – Scales out with additional tiles, with no hard architectural limit
• Cons
  – We are sorting primitives BEFORE they are translated into projected space!!!
    • This requires some overhead
  – Polygons crossing tiles will be sent to both pipelines (see the sketch below)
  – An error backup could use a bus to move incorrectly sorted polygons to the correct render queue (transparency causes issues here!)
  – Correcting aliasing between display tiles
  – Rendering may not be balanced between display tiles
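A sketch of the sort itself: route each primitive to every tile its screen-space bounding box overlaps (the tile grid and all names here are my own):

def tiles_for_primitive(bbox, tile_w, tile_h, tiles_x, tiles_y):
    """bbox = (min_x, min_y, max_x, max_y) in screen pixels."""
    min_x, min_y, max_x, max_y = bbox
    x0 = max(0, int(min_x) // tile_w)
    y0 = max(0, int(min_y) // tile_h)
    x1 = min(tiles_x - 1, int(max_x) // tile_w)
    y1 = min(tiles_y - 1, int(max_y) // tile_h)
    # A polygon crossing a tile boundary lands in every overlapped queue.
    return [(tx, ty) for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]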

Parallel Processing Techniques

• Conclusively:
  – Sort-Middle is for expensive hardware
  – Sort-Last is limited in scalability
  – Sort-First requires careful consideration in implementation
• Sort-First / Sort-Last COULD be run on a cloud
  – Bandwidth??
  – Security??
  – What happens when you max out the cloud??

Image Based Rendering

• Geometric upscaling!
• The process of getting 3D information out of 2D image(s)
  – Far outside our scope, but interesting in research

RenderMan

https://renderman.pixar.com/

RenderMan / Reyes

• Reyes (Renders Everything You Ever Saw)
• RenderMan is an implementation of Reyes
  – Reyes was developed by two staff at Lucasfilm's Computer Graphics Research Group, now known as Pixar!
  – RenderMan is Pixar's current implementation of the Reyes architecture

The Goals of Reyes

• Model complexity / diversity
• Shading complexity
• Minimal ray tracing
• Speed
• Image quality (artefacts are unacceptable)
• Flexibility
  – Reyes was designed so that new technology could be incorporated without an entire re-implementation

The Functionality of Reyes / RenderMan

• Objects (polygons and curves) are divided into micropolygons as needed
  – A micropolygon is typically smaller than a pixel
  – In Reyes, micropolygons are quadrilaterals
  – Flat-shading all the quads gives an excellent representation of shading
• These quads allow Reyes to use a vector-based rendering approach
  – This allows simple parallelism

• Bound
  – Compute bounding boxes
• Split
  – Geometry culling & partials
• Dice
  – Polygons into grids of micropolygons (see the sketch after this list)
• Shade
  – Shading functions are applied to the micropolygons
  – The functions used are independent of Reyes
• Bust
  – Do bounding and visibility checking on each micropolygon
• Sample (Hide)
  – Generate the render from the remaining micropolygons
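A minimal sketch of the Dice step, treating a patch as a function of (u, v) and evaluating it on a regular grid whose cells are the quadrilateral micropolygons (everything here is illustrative, not RenderMan's actual code):

import numpy as np

def dice(patch, grid_u, grid_v):
    # Each grid cell between adjacent corners is one quad micropolygon,
    # later flat-shaded as a single unit.
    us = np.linspace(0.0, 1.0, grid_u + 1)
    vs = np.linspace(0.0, 1.0, grid_v + 1)
    return np.array([[patch(u, v) for u in us] for v in vs])

# Example: dice a bilinear patch into an 8 x 8 grid of micropolygons.
c = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], dtype=float)
def bilinear(u, v):
    return (1 - v) * ((1 - u) * c[0] + u * c[1]) + v * ((1 - u) * c[2] + u * c[3])

grid = dice(bilinear, 8, 8)     # shape (9, 9, 3): micropolygon corners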

The Reyes Pipeline

• http://en.wikipedia.org/wiki/File:Reyes-pipeline.gif

Interesting Facts

• Some frames take 90 hours!!! (for 1/24th of a second of footage)
• On average, frames take 6 hours to render!!!!
• 6 hours × 24 frames = 144 machine-hours per second of footage
  – For a 2-hour movie that is roughly a million machine-hours: about 2 years of wall-clock time even with dozens of machines rendering in parallel!!

Licensing

• $3,500 US per server
• $2,000–3,500 US per client machine
• Far cheaper than I expected!