Post on 02-Dec-2014
UNIT- I
2D PRIMITIVES
Line and Curve Drawing Algorithms
Line Drawing
y = m · x + b
m = (yend – y0) / (xend – x0)
b = y0 – m · x0
DDA Algorithm
if |m| < 1:
    xk+1 = xk + 1
    yk+1 = yk + m
if |m| > 1:
    yk+1 = yk + 1
    xk+1 = xk + 1/m
DDA Algorithm

#include <stdlib.h>
#include <math.h>

inline int round (const float a) { return int (a + 0.5); }

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = x0, y = y0;

    if (fabs (dx) > fabs (dy))
        steps = fabs (dx);   /* |m| < 1 */
    else
        steps = fabs (dy);   /* |m| >= 1 */
    xIncrement = float (dx) / float (steps);
    yIncrement = float (dy) / float (steps);

    setPixel (round (x), round (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel (round (x), round (y));
    }
}
Bresenham’s Line Algorithm

At each sampling position xk+1 the candidate pixels are (xk+1, yk) and (xk+1, yk+1); dl and du denote the vertical distances from the true line path to the lower and upper candidate pixel, and their comparison drives the decision parameter.
Bresenham’s Line Algorithm

#include <stdlib.h>
#include <math.h>

/* Bresenham line-drawing procedure for |m| < 1.0 */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
    int dx = fabs (xEnd - x0),
        dy = fabs (yEnd - y0);
    int p = 2 * dy - dx;
    int twoDy = 2 * dy,
        twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    /* Determine which endpoint to use as start position. */
    if (x0 > xEnd) {
        x = xEnd;
        y = yEnd;
        xEnd = x0;
    }
    else {
        x = x0;
        y = y0;
    }
    setPixel (x, y);

    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;
        else {
            y++;
            p += twoDyMinusDx;
        }
        setPixel (x, y);
    }
}
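The loop can be checked without a display by recording each pixel instead of calling setPixel; the vector-collecting variant below is our own sketch (not the textbook routine), keeping the same decision-parameter logic.

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Bresenham for |m| < 1 and non-negative slope, as in the procedure above,
// but collecting the chosen pixels into a vector instead of plotting them.
std::vector<std::pair<int,int>> lineBresPoints (int x0, int y0, int xEnd, int yEnd)
{
    std::vector<std::pair<int,int>> pts;
    int dx = std::abs (xEnd - x0), dy = std::abs (yEnd - y0);
    int p = 2 * dy - dx;
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    if (x0 > xEnd) { x = xEnd; y = yEnd; xEnd = x0; }
    else           { x = x0;   y = y0; }

    pts.push_back ({x, y});
    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;           // keep y: lower pixel is closer
        else {
            y++;                  // step up: upper pixel is closer
            p += twoDyMinusDx;
        }
        pts.push_back ({x, y});
    }
    return pts;
}
```

For the line from (0,0) to (4,2) this produces the pixels (0,0), (1,1), (2,1), (3,2), (4,2); at a tie (p = 0) this version, like the textbook code, picks the upper pixel.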
Circle Drawing
Pythagorean Theorem: x² + y² = r²

For a circle with center (xc, yc) and radius r:
(x – xc)² + (y – yc)² = r²

Stepping x over (xc – r) ≤ x ≤ (xc + r):
y = yc ± √(r² – (x – xc)²)
Circle Drawing
Stepping unit intervals in x gives unevenly spaced points where the slope is steep; in those parts the roles of x and y must be interchanged (step y, solve for x).
Circle Drawing using polar coordinates
x = xc + r · cos θ
y = yc + r · sin θ

Change θ with step size 1/r, which gives points approximately one pixel apart along the circle.
Use symmetry if θ > 45°: compute points only for the octant from θ = 0° to 45°. A computed point (x, y), taken relative to the center (xc, yc), also gives (y, x), (y, –x), (x, –y), (–x, –y), (–y, –x), (–y, x) and (–x, y) in the other seven octants.
Midpoint Circle Algorithm
f(x, y) = x² + y² – r²

f(x, y) < 0 if (x, y) is inside the circle
f(x, y) = 0 if (x, y) is on the circle
f(x, y) > 0 if (x, y) is outside the circle

The decision parameter is f evaluated at the midpoint between the candidate pixels, (xk + 1, yk – 1/2); use symmetry once x > y.
Midpoint Circle Algorithm

#include <GL/glut.h>

class scrPt {
public:
    GLint x, y;
};

void setPixel (GLint x, GLint y)
{
    glBegin (GL_POINTS);
        glVertex2i (x, y);
    glEnd ( );
}

void circlePlotPoints (scrPt, scrPt);

void circleMidpoint (scrPt circCtr, GLint radius)
{
    scrPt circPt;
    GLint p = 1 - radius;   /* Initial value of the midpoint parameter. */

    circPt.x = 0;           /* Set coordinates for top point of circle. */
    circPt.y = radius;

    /* Plot the initial point in each circle quadrant. */
    circlePlotPoints (circCtr, circPt);

    /* Calculate next points and plot in each octant. */
    while (circPt.x < circPt.y) {
        circPt.x++;
        if (p < 0)
            p += 2 * circPt.x + 1;
        else {
            circPt.y--;
            p += 2 * (circPt.x - circPt.y) + 1;
        }
        circlePlotPoints (circCtr, circPt);
    }
}

void circlePlotPoints (scrPt circCtr, scrPt circPt)
{
    setPixel (circCtr.x + circPt.x, circCtr.y + circPt.y);
    setPixel (circCtr.x - circPt.x, circCtr.y + circPt.y);
    setPixel (circCtr.x + circPt.x, circCtr.y - circPt.y);
    setPixel (circCtr.x - circPt.x, circCtr.y - circPt.y);
    setPixel (circCtr.x + circPt.y, circCtr.y + circPt.x);
    setPixel (circCtr.x - circPt.y, circCtr.y + circPt.x);
    setPixel (circCtr.x + circPt.y, circCtr.y - circPt.x);
    setPixel (circCtr.x - circPt.y, circCtr.y - circPt.x);
}
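The same loop can be exercised off-screen by recording the first-octant points it generates; the point-collecting helper below is our own sketch, assuming a circle centered at the origin.

```cpp
#include <utility>
#include <vector>

// First-octant points of the midpoint circle algorithm for a circle of the
// given radius centered at the origin; each point would normally be echoed
// into all eight octants by circlePlotPoints.
std::vector<std::pair<int,int>> circleOctant (int radius)
{
    std::vector<std::pair<int,int>> pts;
    int x = 0, y = radius;
    int p = 1 - radius;            // initial midpoint decision parameter

    pts.push_back ({x, y});
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;        // midpoint inside: keep y
        else {
            y--;                   // midpoint outside: step y down
            p += 2 * (x - y) + 1;
        }
        pts.push_back ({x, y});
    }
    return pts;
}
```

For radius 3 the octant points are (0,3), (1,3), (2,2), which the eight-way symmetry expands to the full circle.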
OpenGL

#include <GL/glut.h>   // (or others, depending on the system in use)

void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);     // Set display-window color to white.
    glMatrixMode (GL_PROJECTION);          // Set projection parameters.
    gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

void lineSegment (void)
{
    glClear (GL_COLOR_BUFFER_BIT);   // Clear display window.
    glColor3f (0.0, 0.0, 1.0);       // Set line segment color to blue.
    glBegin (GL_LINES);
        glVertex2i (180, 15);        // Specify line-segment geometry.
        glVertex2i (10, 145);
    glEnd ( );
    glFlush ( );   // Process all OpenGL routines as quickly as possible.
}

void main (int argc, char** argv)
{
    glutInit (&argc, argv);                          // Initialize GLUT.
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);    // Set display mode.
    glutInitWindowPosition (50, 100);                // Set top-left display-window position.
    glutInitWindowSize (400, 300);                   // Set display-window width and height.
    glutCreateWindow ("An Example OpenGL Program");  // Create display window.

    init ( );                        // Execute initialization procedure.
    glutDisplayFunc (lineSegment);   // Send graphics to display window.
    glutMainLoop ( );                // Display everything and wait.
}
OpenGL Point Functions

• glVertex*( ): the suffix gives the number of coordinates (2, 3, or 4) and the data type: i (integer), s (short), f (float), d (double); a trailing v means the coordinates are passed as an array (vector).

Ex: glBegin(GL_POINTS); glVertex2i(50, 100); glEnd();

int p1[ ] = {50, 100};
glBegin(GL_POINTS); glVertex2iv(p1); glEnd();
OpenGL Line Functions
• GL_LINES
• GL_LINE_STRIP
• GL_LINE_LOOP
Ex: glBegin(GL_LINES); glVertex2iv(p1); glVertex2iv(p2);glEnd();
OpenGL

glBegin(GL_LINES);      // or GL_LINE_STRIP, or GL_LINE_LOOP
    glVertex2iv(p1);
    glVertex2iv(p2);
    glVertex2iv(p3);
    glVertex2iv(p4);
    glVertex2iv(p5);
glEnd();

With these five vertices: GL_LINES draws the two separate segments p1-p2 and p3-p4 (p5 is ignored); GL_LINE_STRIP draws the connected polyline p1-p2-p3-p4-p5; GL_LINE_LOOP draws the same polyline and closes it with the segment from p5 back to p1.
Antialiasing

Supersampling: count the number of subpixels that overlap the line path, and set the intensity proportional to this count.
Antialiasing
Area sampling: the line is treated as a rectangle; calculate the overlap area for each pixel and set the intensity proportional to the overlap area (e.g. 80% or 25% coverage).
Antialiasing
Pixel sampling (micropositioning): the electron beam is shifted by 1/2, 1/4, or 3/4 of a pixel diameter.
Line Intensity differences
Change the line drawing algorithm:
• for horizontal and vertical lines, use the lowest intensity
• for 45° lines, use the highest intensity
(On a 45° line the pixels are spaced √2 apart, so fewer pixels cover each unit of length and each must be brighter.)
2D Transformations with Matrices
Matrices

A = | a1,1  a1,2  a1,3 |
    | a2,1  a2,2  a2,3 |
    | a3,1  a3,2  a3,3 |
A matrix is a rectangular array of numbers.
A general matrix will be represented by an upper-case italicised letter.
The element on the ith row and jth column is denoted by ai,j. Note that we start indexing at 1, whereas C indexes arrays from 0.
Given two matrices A and B, if we want to add B to A (that is, form A+B), then if A is (n×m), B must also be (n×m); otherwise A+B is not defined.
Matrices – Addition

The addition produces a result, C = A+B, with elements:

Ci,j = Ai,j + Bi,j

Example:

| 1  2 |   | 5  6 |   |  6   8 |
| 3  4 | + | 7  8 | = | 10  12 |
Given two matrices A and B, if we want to multiply B by A (that is, form AB), then if A is (n×m), B must be (m×p), i.e., the number of columns in A must equal the number of rows in B. Otherwise, AB is not defined.
The multiplication produces a result, C = AB, with elements:

Ci,j = Σ (k = 1..m) ai,k · bk,j

(Basically we multiply the first row of A with the first column of B and put this in the c1,1 element of C. And so on…)

Matrices – Multiplication (Examples)

One entry of a product: the row (2, 6, 7) times the column (6, 3, 2) gives

2·6 + 6·3 + 7·2 = 44

A 2×2 matrix times a 3×2 matrix is undefined, since the number of columns of the first (2) does not equal the number of rows of the second (3).
2×2 × 2×4 × 4×4 is allowed; the result is a 2×4 matrix.

Unlike scalar multiplication, matrix multiplication is not commutative: in general AB ≠ BA.
Matrix multiplication distributes over addition:
A(B+C) = AB + AC
Identity matrix for multiplication is defined as I.
Matrices – Basics

The transpose of a matrix A, denoted AT or A', is obtained by swapping the rows and columns of A:

A = | a1,1  a1,2  a1,3 |        A' = | a1,1  a2,1 |
    | a2,1  a2,2  a2,3 |             | a1,2  a2,2 |
                                     | a1,3  a2,3 |
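These rules are easy to verify numerically; the 2×2 helpers below are illustrative sketches of the summation formula and the row/column swap (the Mat2 type is our own).

```cpp
#include <array>

using Mat2 = std::array<std::array<int,2>,2>;

// C = A * B using  c[i][j] = sum over k of a[i][k] * b[k][j].
Mat2 mul (const Mat2& a, const Mat2& b)
{
    Mat2 c {};
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Transpose: element (i, j) moves to (j, i).
Mat2 transpose (const Mat2& a)
{
    Mat2 t {};
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            t[j][i] = a[i][j];
    return t;
}
```

With A = [1 2; 3 4] and B = [5 6; 7 8], AB = [19 22; 43 50] while BA = [23 34; 31 46], confirming AB ≠ BA.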
2D Geometrical Transformations
Translate, Rotate, Scale, Shear
Translate Points
Recall: we can translate points in the (x, y) plane to new positions by adding translation amounts to the coordinates of the points. Each point P(x, y) is moved by dx units parallel to the x axis and by dy units parallel to the y axis to the new point P'(x', y'). The translation has the following form:

x' = x + dx
y' = y + dy
In matrix format:

| x' |   | x |   | dx |
| y' | = | y | + | dy |

If we define the translation matrix

T = | dx |
    | dy |

then we have P' = P + T.
Scale Points
Points can be scaled (stretched) by sx along the x axis and by sy along the y axis into new points by the multiplications:

x' = sx · x
y' = sy · y

We can specify how much bigger or smaller by means of a "scale factor": to double the size of an object we use a scale factor of 2, to halve the size of an object we use a scale factor of 0.5.

In matrix format:

| x' |   | sx  0 |   | x |
| y' | = | 0  sy | · | y |

If we define

S = | sx  0 |
    | 0  sy |

then we have P' = S P.
Rotate Points (cont.)
Points can be rotated through an angle θ about the origin O. Let l = |OP| = |OP'| and let φ be the angle between OP and the x axis. Then:

x' = |OP'| cos(φ + θ) = l cos φ cos θ − l sin φ sin θ = x cos θ − y sin θ

y' = |OP'| sin(φ + θ) = l sin φ cos θ + l cos φ sin θ = y cos θ + x sin θ

In matrix format:

| x' |   | cos θ  −sin θ |   | x |
| y' | = | sin θ   cos θ | · | y |

P' = R P
Review…
Translate: P’ = P+T Scale: P’ = SP Rotate: P’ = RP
Spot the odd one out: translation is performed by adding a matrix, while scaling and rotation multiply by one.
• Ideally, all transformations would take the same form: easier to code.
Solution: Homogeneous Coordinates
Homogeneous Coordinates
For a given 2D coordinates (x, y), we introduce a third dimension:
[x, y, 1]
In general, a homogeneous coordinates for a 2D point has the form:
[x, y, W]
Two homogeneous coordinates [x, y, W] and [x', y', W'] are said to be the same (or equivalent) if

x = kx', y = ky', W = kW' for some k ≠ 0

e.g. [2, 3, 6] and [4, 6, 12] are equivalent with k = 2.

Therefore any [x, y, W] with W ≠ 0 can be normalised by dividing each element by W: [x/W, y/W, 1].
Homogeneous Transformations
Now, redefine the translation by using homogeneous coordinates:

| x' |   | 1  0  dx |   | x |
| y' | = | 0  1  dy | · | y |
| 1  |   | 0  0   1 |   | 1 |

P' = T P

Similarly, we have:

Scaling:

| x' |   | sx  0  0 |   | x |
| y' | = | 0  sy  0 | · | y |
| 1  |   | 0   0  1 |   | 1 |

P' = S P

Rotation:

| x' |   | cos θ  −sin θ  0 |   | x |
| y' | = | sin θ   cos θ  0 | · | y |
| 1  |   |   0       0    1 |   | 1 |

P' = R P
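With homogeneous coordinates all three transformations become the same operation, a 3×3 matrix times a column vector; a minimal numeric sketch (the type and function names are our own):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double,3>;                // homogeneous [x, y, W]
using Mat3 = std::array<std::array<double,3>,3>;

// P' = M P for a homogeneous point.
Vec3 apply (const Mat3& m, const Vec3& p)
{
    Vec3 r {0.0, 0.0, 0.0};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r[i] += m[i][j] * p[j];
    return r;
}

Mat3 translate (double dx, double dy)
{
    return {{{1, 0, dx}, {0, 1, dy}, {0, 0, 1}}};
}

Mat3 scale (double sx, double sy)
{
    return {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}};
}

Mat3 rotate (double th)
{
    return {{{std::cos(th), -std::sin(th), 0},
             {std::sin(th),  std::cos(th), 0},
             {0,             0,            1}}};
}
```

Translating (1, 2) by (3, 4) gives (4, 6), and rotating (1, 0) by 90° gives (0, 1), both now expressed as matrix multiplications.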
Composition of 2D Transformations
1. Additivity of successive translations
We want to translate a point P to P’ by T(dx1, dy1) and then to P’’ by another T(dx2, dy2)
On the other hand, we can define T21= T(dx1, dy1) T(dx2, dy2) first, then apply T21 to P:
where

P'' = T(dx2, dy2) P' = T(dx2, dy2) [T(dx1, dy1) P] = T21 P

T21 = T(dx2, dy2) T(dx1, dy1)

| 1  0  dx2 |   | 1  0  dx1 |   | 1  0  dx1+dx2 |
| 0  1  dy2 | · | 0  1  dy1 | = | 0  1  dy1+dy2 |
| 0  0   1  |   | 0  0   1  |   | 0  0     1    |

= T(dx1+dx2, dy1+dy2)
Examples of Composite 2D Transformations

Translate the point (2,1) by T(−1,2) to get (1,3), then by T(1,−1) to get (2,2). Equivalently, apply the single composite translation T21 = T(1,−1) T(−1,2) = T(0,1):

| 1  0   1 |   | 1  0  −1 |   | 1  0  0 |
| 0  1  −1 | · | 0  1   2 | = | 0  1  1 |
| 0  0   1 |   | 0  0   1 |   | 0  0  1 |
Composition of 2D Transformations (cont.)
2. Multiplicativity of successive scalings
where

P'' = S(sx2, sy2) P' = S(sx2, sy2) [S(sx1, sy1) P] = S21 P

S21 = S(sx2, sy2) S(sx1, sy1)

| sx2  0   0 |   | sx1  0   0 |   | sx1·sx2     0      0 |
| 0   sy2  0 | · | 0   sy1  0 | = |    0     sy1·sy2   0 |
| 0    0   1 |   | 0    0   1 |   |    0        0      1 |

= S(sx1·sx2, sy1·sy2)
Composition of 2D Transformations (cont.)
3. Additivity of successive rotations
where

P'' = R(θ2) P' = R(θ2) [R(θ1) P] = R21 P

R21 = R(θ2) R(θ1)

| cos θ2  −sin θ2  0 |   | cos θ1  −sin θ1  0 |   | cos(θ1+θ2)  −sin(θ1+θ2)  0 |
| sin θ2   cos θ2  0 | · | sin θ1   cos θ1  0 | = | sin(θ1+θ2)   cos(θ1+θ2)  0 |
|   0        0     1 |   |   0        0     1 |   |     0            0       1 |

= R(θ1+θ2)
Composition of 2D Transformations (cont.)
4. Different types of elementary transformations discussed above can be concatenated as well.
where

P' = R(θ) [T(dx, dy) P] = [R(θ) T(dx, dy)] P = M P

M = R(θ) T(dx, dy)
Consider the following two questions:
1) Translate a line segment P1P2, say, by −1 units in the x direction and −2 units in the y direction.

2) Rotate a line segment P1P2, say by θ degrees counterclockwise, about P1.

(For example, take P1(1,2) and P2(3,3).)
Other Than Point Transformations…
Translate Lines: translate both endpoints, then join them.
Scale or Rotate Lines: more complex. For example, to rotate an arbitrary line about a point P1, three steps are needed:
1) translate such that P1 is at the origin, e.g. T(−1,−2) for P1(1,2);
2) rotate: R(θ);
3) translate such that the point at the origin returns to P1, e.g. T(1,2).
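The three-step sequence can be verified numerically; the helper below is our own sketch, applied to the slide's example P1(1,2), P2(3,3) with θ = 90°.

```cpp
#include <cmath>
#include <utility>

// Rotate the point (px, py) about the pivot (cx, cy) by angle th:
// translate the pivot to the origin, rotate, translate back.
std::pair<double,double> rotateAbout (double px, double py,
                                      double cx, double cy, double th)
{
    double x  = px - cx, y = py - cy;                 // T(-cx, -cy)
    double xr = x * std::cos(th) - y * std::sin(th);  // R(th)
    double yr = x * std::sin(th) + y * std::cos(th);
    return { xr + cx, yr + cy };                      // T(cx, cy)
}
```

Rotating P2(3,3) about P1(1,2) by 90° counterclockwise gives (0, 4): the offset (2,1) rotates to (−1,2), which is then shifted back by (1,2).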
Another example: a composite transformation built as Scale, then Translate, then Rotate, then Translate.

Order Matters!
As we said, the order for composition of 2D geometrical transformations matters, because, in general, matrix multiplication is not commutative. However, it is easy to show that, in the following four cases, commutativity holds:
1). Translation + Translation2). Scaling + Scaling3). Rotation + Rotation4). Scaling (with sx = sy) + Rotation
just to verify case 4:

M1 = S(sx, sy) R(θ)

| sx  0  0 |   | cos θ  −sin θ  0 |   | sx·cos θ  −sx·sin θ  0 |
| 0  sy  0 | · | sin θ   cos θ  0 | = | sy·sin θ   sy·cos θ  0 |
| 0   0  1 |   |   0       0    1 |   |    0          0      1 |

M2 = R(θ) S(sx, sy)

| cos θ  −sin θ  0 |   | sx  0  0 |   | sx·cos θ  −sy·sin θ  0 |
| sin θ   cos θ  0 | · | 0  sy  0 | = | sx·sin θ   sy·cos θ  0 |
|   0       0    1 |   | 0   0  1 |   |    0          0      1 |

If sx = sy, then M1 = M2.
Rigid-Body vs. Affine Transformations
A transformation matrix of the form

| r1,1  r1,2  tx |
| r2,1  r2,2  ty |
|  0     0    1  |

where the upper 2×2 sub-matrix is orthogonal, preserves angles and lengths. Such transforms are called rigid-body transformations, because the body or object being transformed is not distorted in any way. An arbitrary sequence of rotation and translation matrices creates a matrix of this form.

The product of an arbitrary sequence of rotation, translation, and scale matrices causes an affine transformation, which has the property of preserving parallelism of lines, but not lengths and angles.
Rigid-Body vs. Affine Transformations (cont.)
Shear transformation is also affine.
(Figure: a unit cube rotated 45º is a rigid-body transformation; then scaling in x but not in y is an affine transformation.)
Shear in the x direction:        Shear in the y direction:

SHx = | 1  a  0 |                SHy = | 1  0  0 |
      | 0  1  0 |                      | b  1  0 |
      | 0  0  1 |                      | 0  0  1 |
2D Output Primitives
Points, lines, circles, ellipses, other curves, filling areas, text, patterns, polymarkers
Filling area
Polygons are considered!
1) Scan-Line Filling (between edges)
2) Interactive Filling (using an interior starting point)
1) Scan-Line Filling (scan conversion)
Problem: Given the vertices or edges of a polygon, which are the pixels to be included in the area filling?
Scan-Line filling, cont’d
Main idea:
• locate the intersections between the scan-lines and the edges of the polygon
• sort the intersection points on each scan-line on increasing x-coordinates
• generate frame-buffer positions along the current scan-line between pairwise intersection points
Scan-Line filling, cont’d
Problems with intersection points that are vertices:
Basic rule: count them as if each vertex is being two points (one to each of the two joining edges in the vertex)
Exception: if the two edges joining in the vertex are on opposite sides of the scan-line, then count the vertex only once (require some additional processing)
Vertex problem
Scan-Line filling, cont’d
Time-consuming to locate the intersection points!
If an edge is crossed by a scan-line, most probably also the next scan-line will cross it (the use of coherence properties)
Scan-Line filling, cont’d
Each edge is well described by an edge record:
• ymax
• x0 (initially the x related to ymin)
• Δx/Δy (the inverse of the slope), used for incremental calculation of the intersection points
• possibly also Δx and Δy
Scan-Line filling, cont’d
The intersection point (xn, yn) between an edge and scan-line yn follows from the line equation of the edge:

yn = (Δy/Δx) · xn + b    (cp. y = m·x + b)

The intersection between the same edge and the next scan-line yn+1 is then given from:

yn+1 = (Δy/Δx) · xn+1 + b

and also

yn+1 = yn + 1 = (Δy/Δx) · xn + b + 1

Scan-Line filling, cont’d

This gives us:

xn+1 = xn + Δx/Δy ,  n = 0, 1, 2, …

i.e. the new value of x on the next scan-line is given by adding the inverse of the slope to the current value of x.
Scan-Line filling, cont’d
An active list of edge records intersecting with the current scan-line is sorted on increasing x-coordinates
The polygon pixels that are written in the frame buffer are those which are calculated to be on the current scan-line between pairwise x-coordinates according to the active list
Scan-Line filling, cont’d
When changing from one scan-line to the next, the active edge list is updated:
a record with ymax < ”the next scan-line” is removed
in the remaining records, x0 is incremented and rounded to the nearest integer
an edge with ymin = ”the next scan-line” is included in the list
Scan-Line Filling Example
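For a single scan-line the method reduces to: collect the x values where edges cross y, sort them, and fill between successive pairs. A sketch under our own edge representation (the vertex exception is handled here by treating each edge as half-open in y):

```cpp
#include <algorithm>
#include <vector>

struct Pt { double x, y; };

// x-coordinates where the polygon's edges cross scan-line y, sorted ascending.
// The half-open test ymin <= y < ymax makes a shared vertex count once per
// edge pair, matching the "count the vertex only once" rule for most cases.
std::vector<double> scanlineCrossings (const std::vector<Pt>& poly, double y)
{
    std::vector<double> xs;
    for (size_t i = 0; i < poly.size(); i++) {
        Pt a = poly[i], b = poly[(i + 1) % poly.size()];
        if (a.y > b.y) std::swap (a, b);             // a = lower endpoint
        if (y >= a.y && y < b.y)                     // edge spans this scan-line
            xs.push_back (a.x + (y - a.y) * (b.x - a.x) / (b.y - a.y));
    }
    std::sort (xs.begin(), xs.end());
    return xs;   // fill pixels between xs[0]..xs[1], xs[2]..xs[3], ...
}
```

For the square (1,1), (5,1), (5,4), (1,4), the scan-line y = 2 crosses at x = 1 and x = 5, so the span 1..5 is filled; horizontal edges contribute no crossings.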
2) Interactive Filling
Given the boundaries of a closed surface. By choosing an arbitrary interior point, the complete interior of the surface will be filled with the color of the user’s choice.
Interactive Filling, cont’d
Definition: An area or a boundary is said to be 4-connected if from an arbitrary point all other pixels within the area or on the boundary can be reached by only moving in horizontal or vertical steps.
Furthermore, if it is also allowed to take diagonal steps, the surface or the boundary is said to be 8-connected.
4/8-connected
Interactive Filling, cont’d
A recursive procedure for filling a 4-connected (8-connected) surface can easily be defined.
Assume that the surface shall have the same color as the boundary (can easily be modified!).
The first interior position (pixel) is choosen by the user.
Interactive Filling, algorithm
void fill(int x, int y, int fillColor)
{
    int interiorColor;

    interiorColor = getPixel(x, y);
    if (interiorColor != fillColor) {
        setPixel(x, y, fillColor);
        fill(x + 1, y, fillColor);
        fill(x, y + 1, fillColor);
        fill(x - 1, y, fillColor);
        fill(x, y - 1, fillColor);
    }
}
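On a small in-memory grid the routine behaves as follows; the grid representation and the bounds guard are ours, and, as the slide assumes, the boundary already carries the fill color.

```cpp
#include <vector>

// 4-connected fill on a grid of ints. As in the slide, the boundary is
// assumed to already have color fillColor, so it stops the recursion.
void fill (std::vector<std::vector<int>>& g, int x, int y, int fillColor)
{
    if (y < 0 || y >= (int) g.size() || x < 0 || x >= (int) g[0].size())
        return;                          // guard added for the finite grid
    if (g[y][x] != fillColor) {
        g[y][x] = fillColor;
        fill (g, x + 1, y, fillColor);
        fill (g, x, y + 1, fillColor);
        fill (g, x - 1, y, fillColor);
        fill (g, x, y - 1, fillColor);
    }
}
```

Starting from any interior cell of a 5×5 grid whose border ring has color 1, the recursion colors the entire 3×3 interior. (A production version would use an explicit stack; deep recursion overflows on large areas.)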
Inside-Outside test
When is a point an interior point?

Odd-Even Rule: conceptually draw a line from the specified point to a distant point outside the coordinate space, and count the number of polygon edges that the line crosses:
if odd => interior
if even => exterior

Note! Crossings exactly at vertices need special treatment.
Text
Representation:
* bitmapped (raster): + fast; - more storage; - less good for styles/sizes
* outlined (lines and curves): + less storage; + good for styles/sizes; - slower
Other output primitives
* pattern (to fill an area)
normally, an n x m rectangular color pixel array with a specified
reference point
* polymarker (marker symbol)
a character representing a point
* (polyline)
a connected sequence of line segments
Attributes
Influence the way a primitive is displayed
Two main ways of introducing attributes:
1) added to the primitive's parameter list, e.g. setPixel(x, y, color)
2) a list of current attributes (to be updated when changed), e.g. setColor(color); setPixel(x, y);
Attributes for lines
Lines (and curves) are mathematically infinitely thin, so attributes control how they are drawn:
• type: dashed, dotted, solid, dot-dashed, …; or a pixel mask, e.g. 11100110011
• width: gives problems with the joins; line caps are used to adjust the shape
• pen/brush shape
• color (intensity)
Lines with width
Line caps
Joins
Attributes for area fill
• fill style: hollow, solid, pattern, hatch fill, …
• color
• pattern (tiling)
Tiling
Tiling = filling surfaces (polygons) with a rectangular pattern
Attributes for characters/strings
style, font (typeface), color, size (width/height), orientation, path, spacing, alignment

Text attributes
Color as attribute
Each color has a numerical value, or intensity, based on some color model.
A color model typically consists of three primary colors, in the case of displays Red, Green and Blue (RGB)
For each primary color an intensity can be given, either 0-255 (integer) or 0-1 (float) yielding the final color
256 different levels of each primary color means 3x8=24 bits of information to store
Color representations
Two different ways of storing a color value:
1) a direct color value storage/pixel
2) indirectly via a color look-up table index/pixel (typically 256 or 512 different colors in the table)
Color Look-up Table
Antialiasing
Aliasing ≈ the fact that exact points are approximated by fixed pixel positions
Antialiasing = a technique that compensates for this (more than one intensity level/pixel is required)
Antialiasing, a method
A polygon will be studied (as an example).
Area sampling (prefiltering): a pixel that is only partly included in the exact polygon, will be given an intensity that is proportional to the extent of the pixel area that is covered by the true polygon
Area sampling
P = polygon intensity
B = background intensity
f = the extent of the pixel area covered by the true polygon
pixel intensity =
P*f + B*(1 - f)
Note! Time consuming to calculate f
Topics
Clipping Cohen-Sutherland Line Clipping Algorithm
Clipping Why clipping?
• Not everything defined in the world coordinates is inside the world window
Where does clipping take place?

OpenGL does it for you.
• BUT, as a CS major, you should know how it is done.

Pipeline: Model → Clipping → Viewport Transformation
Line Clipping int clipSegment(p1, p2, window)
• Input parameters: p1, p2, window
p1, p2: 2D endpoints that define a line
window: aligned rectangle
• Returned value:
1, if part of the line is inside the window
0, otherwise
• Output parameters: p1, p2
p1 and/or p2’s value might be changed so that both p1 and p2 are inside the window
Line Clipping
Example: for each of the lines AB, BC, CD, DE, EA against a window with corners P1–P4, what is the returned value, and what are the output endpoints?
Cohen-Sutherland Line Clipping Algorithm
Trivial accept and trivial reject• If both endpoints within window trivial accept
• If both endpoints outside of same boundary of window trivial reject
Otherwise• Clip against each edge in turn
Throw away “clipped off” part of line each time
How can we do it efficiently (elegantly)?
Cohen-Sutherland Line Clipping Algorithm
Examples: for lines L1-L6 against the window, which can be trivially accepted, and which trivially rejected?
Cohen-Sutherland Line Clipping Algorithm
Use a “region outcode” for each endpoint:

outcode[1] = T if x < Window.left
outcode[2] = T if y > Window.top
outcode[3] = T if x > Window.right
outcode[4] = T if y < Window.bottom
Cohen-Sutherland Line Clipping Algorithm
Both outcodes are FFFF
• Trivial accept

Logical AND of the two outcodes ≠ FFFF
• Trivial reject

Logical AND of the two outcodes = FFFF (but at least one endpoint outside)
• Can't tell
• Clip against each edge in turn, throwing away the “clipped off” part of the line each time
Cohen-Sutherland Line Clipping Algorithm
Examples: compute the outcodes for lines L1-L6 against the window; which are trivially accepted, and which trivially rejected?
Cohen-Sutherland Line Clipping Algorithm
int clipSegment(Point2& p1, Point2& p2, RealRect W)
{
    do {
        if (trivial accept)
            return 1;
        else if (trivial reject)
            return 0;
        else {
            if (p1 is inside)
                swap(p1, p2);
            if (p1 is to the left)
                chop against the left;
            else if (p1 is to the right)
                chop against the right;
            else if (p1 is below)
                chop against the bottom;
            else if (p1 is above)
                chop against the top;
        }
    } while (1);
}
Cohen-Sutherland Line Clipping Algorithm
A segment that requires 4 clips
Cohen-Sutherland Line Clipping Algorithm
How do we chop against each boundary?
Given P1 (outside) and P2, (A.x,A.y)=?
Cohen-Sutherland Line Clipping Algorithm
Let dx = p1.x − p2.x and dy = p1.y − p2.y, and let w.r be the right window boundary, so A.x = w.r. With d = p1.y − A.y and e = p1.x − w.r, similar triangles give:

d/dy = e/dx

p1.y − A.y = (dy/dx)(p1.x − w.r)

A.y = p1.y − (dy/dx)(p1.x − w.r) = p1.y + (dy/dx)(w.r − p1.x)

As A is the new P1:

p1.y += (dy/dx)(w.r − p1.x)
p1.x = w.r

Q: Will we have a divide-by-zero problem? No: we only chop against the right boundary when p1 lies to its right and p2 does not, so dx ≠ 0.
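Putting the outcodes and the chop formulas together gives a compact runnable version; the bit layout and the Pt/Rect types below are our own choices, not from the slides.

```cpp
#include <utility>

struct Pt   { double x, y; };
struct Rect { double left, right, bottom, top; };

const int LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8;

int outcode (const Pt& p, const Rect& w)
{
    int c = 0;
    if (p.x < w.left)   c |= LEFT;
    if (p.x > w.right)  c |= RIGHT;
    if (p.y < w.bottom) c |= BOTTOM;
    if (p.y > w.top)    c |= TOP;
    return c;
}

// Returns 1 and moves p1/p2 onto the window if part of the segment is
// visible, 0 if the segment is entirely outside.
int clipSegment (Pt& p1, Pt& p2, const Rect& w)
{
    while (true) {
        int c1 = outcode (p1, w), c2 = outcode (p2, w);
        if ((c1 | c2) == 0) return 1;       // trivial accept
        if ((c1 & c2) != 0) return 0;       // trivial reject
        if (c1 == 0) { std::swap (p1, p2); std::swap (c1, c2); }  // make p1 the outside point
        if (c1 & LEFT)        { p1.y += (p2.y - p1.y) * (w.left   - p1.x) / (p2.x - p1.x); p1.x = w.left; }
        else if (c1 & RIGHT)  { p1.y += (p2.y - p1.y) * (w.right  - p1.x) / (p2.x - p1.x); p1.x = w.right; }
        else if (c1 & BOTTOM) { p1.x += (p2.x - p1.x) * (w.bottom - p1.y) / (p2.y - p1.y); p1.y = w.bottom; }
        else                  { p1.x += (p2.x - p1.x) * (w.top    - p1.y) / (p2.y - p1.y); p1.y = w.top; }
    }
}
```

Clipping the horizontal segment (−5,5) to (15,5) against the window [0,10] × [0,10] chops once on the left and once on the right, leaving endpoints at x = 0 and x = 10.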
UNIT-II
THREE-DIMENSIONAL CONCEPTS
3D VIEWING
3D Viewing - contents
• viewing pipeline
• viewing coordinates
• projections
• view volumes and general projection transformations
• clipping
3D Viewing

• World coordinate system (where the objects are modeled and defined)
• Viewing coordinate system (viewing objects with respect to another, user-defined coordinate system)
• Scene coordinate system (a viewing coordinate system chosen to be at the centre of a scene)
• Object coordinate system (a coordinate system specific to an object)
3D viewing
Simple camera analogy is adopted
3D viewing-pipeline
3D viewing
Defining the viewing coordinate system and specifying the view plane
3D viewing
First pick up a world coordinate position called the view reference point. This is the origin of the VC system
Pick up the +ve direction for the Zv axis and the orientation of the view plane by specifying the view plane normal vector ‘N’.
Choose a world coordinate position and this point establishes the direction for N relative to either the world or VC origin. The view plane normal vector is the directed line segment.
steps to establish a Viewing coordinate system or view reference coordinate system and the view plane
3D viewing
steps to establish a Viewing coordinate system or view reference coordinate system and the view plane
Some packages allow us to choose a look at point relative to the view reference point.
Or set up a Left handed viewing system and take the N and the +ve Zv axis from the viewing origin to the look- at point.
3D viewing
steps to establish a Viewing coordinate system or view reference coordinate system and the view plane
We now choose the view up vector V. It can be specified as a twist angle about Zv axis.
Using N,V U can be specified.
Generally graphics packages allow users to choose a position of the view plane along the Zv axis by specifying the view plane distance from the viewing origin.
The view plane is always parallel to the XvYv plane.
3D viewing
To obtain a series of views of a scene we can keep the view reference point fixed and change the direction of N or we can fix N direction and move the view reference point around the scene.
3D viewing
Aligning the viewing coordinate system with the world coordinate system takes the following steps:
(a) invert the viewing z axis;
(b) translate the viewing origin to the world origin;
(c) rotate about the world x axis to bring the viewing z axis into the xz plane of the world system;
(d) rotate about the world y axis to align the two z axes;
(e) rotate about the world z axis to align the two viewing systems.
Transformation from world to viewing coordinate system
Mwc,vc = Rz · Ry · Rx · T
What Are Projections?
(Figure: objects in world space projected onto a picture plane.)

Our 3-D scenes are all specified in 3-D world coordinates. To display these we need to generate a 2-D image: project the objects onto a picture plane.
Converting From 3-D To 2-D

Projection is just one part of the process of converting from 3-D world coordinates to a 2-D image:

3-D world-coordinate output primitives → clip against view volume → project onto projection plane → transform to 2-D device coordinates
Types Of Projections

There are two broad classes of projection:
• Parallel: typically used for architectural and engineering drawings
• Perspective: realistic looking and used in computer graphics
Taxonomy Of Projections
Types Of Projections
There are two broad classes of projection:
• Parallel:
  - preserves relative proportions of objects
  - accurate views of various sides of an object can be obtained
  - does not give a realistic representation of the appearance of a 3D object
• Perspective:
  - produces realistic views but does not preserve relative proportions
  - projections of distant objects are smaller than the projections of objects of the same size that are closer to the projection plane
Parallel Projections Some examples of parallel projections
Orthographic Projection(axonometric)
Orthographic oblique
Parallel Projections Some examples of parallel projections
Isometric projection for a cube
The projection plane is aligned so that it intersects each coordinate axes in which the object is defined (principal axes) at the same distance from the origin.
All the principal axes are foreshortened equally.
Parallel Projections

The transformation equations for an orthographic parallel projection are simple: any point (x, y, z) in viewing coordinates is transformed to projection coordinates as

xp = x,  yp = y
Parallel Projections

The transformation equations for oblique projections follow from the projection matrix below, where L1 is the oblique offset per unit depth and φ its direction in the view plane:

| xp |   | 1  0  L1·cos φ  0 |   | x |
| yp | = | 0  1  L1·sin φ  0 | · | y |
| zp |   | 0  0     0      0 |   | z |
| 1  |   | 0  0     0      1 |   | 1 |

i.e. xp = x + z · L1 cos φ and yp = y + z · L1 sin φ.

An orthographic projection is obtained when L1 = 0. In fact, the effect of the projection matrix is to shear planes of constant z and project them onto the view plane.
•Two common oblique parallel projections:
–Cavalier and Cabinet
Parallel Projections Oblique projections
Cavalier projection: tan α = 1, i.e. α = 45°. All lines perpendicular to the projection plane are projected with no change in length.

Cabinet projection: tan α = 2, i.e. α ≈ 63.4°. Lines perpendicular to the viewing surface are projected at one-half their length, which makes cabinet projections look more realistic than cavalier ones.
Perspective Projections

The visual effect is similar to that of the human visual system:
• 'perspective foreshortening': the size of an object varies inversely with its distance from the center of projection
• angles remain intact only for faces parallel to the projection plane
Perspective Projections
where u varies from 0 to 1
Perspective Projections
Perspective Projections
If the view plane is the UV plane itself, then zvp = 0 and the projection coordinates become:

xp = x (zprp / (zprp − z)) = x (1 / (1 − z/zprp))
yp = y (zprp / (zprp − z)) = y (1 / (1 − z/zprp))

If the PRP is selected at the viewing coordinate origin, then zprp = 0 and the projection coordinates become:

xp = x (zvp / z)
yp = y (zvp / z)
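A quick numeric check of the first pair of formulas (view plane at zvp = 0, PRP on the z axis at zprp; the function name is ours):

```cpp
#include <utility>

// Perspective projection onto the plane z = 0 with the projection
// reference point at (0, 0, zprp):  xp = x * zprp / (zprp - z), same for y.
std::pair<double,double> project (double x, double y, double z, double zprp)
{
    double t = zprp / (zprp - z);   // perspective divide factor
    return { x * t, y * t };
}
```

With zprp = 10, the point (2, 4, 5) projects to (4, 8): being halfway between the view plane and the PRP doubles its apparent size, while a point behind the plane, e.g. (2, 4, −10), shrinks to (1, 2). This is the perspective foreshortening described above.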
Perspective Projections There are a number of different kinds of perspective views The most common are one-point and two point perspectives
One-point perspective projection
Two-point perspective projection
Coordinate description
Perspective Projections
Parallel lines that are parallel to the view plane are projected as parallel lines. The point at which a set of projected parallel lines appears to converge is called a vanishing point.

If a set of lines is parallel to one of the three principal axes, the vanishing point is called a principal vanishing point. There are at most 3 such points, corresponding to the number of axes cut by the projection plane.
View volume
View volume
Perspective projectionParallel projection
The size of the view volume depends on the size of the window but the shape depends on the type of projection to be used.
Both near and far planes must be on the same side of the reference point.
View volume
Often the view plane is positioned at the view reference point or on the front clipping plane while generating parallel projection.
Perspective effects depend on the positioning of the projection reference point relative to the view plane
View volume - PHIGS

(Figure: three configurations of the view plane, front clipping plane (F), and back clipping plane (B), positioned along the view-plane normal (VPN) relative to the view reference point (VRP) and the direction of propagation.)
View volume
In an animation sequence, we can place the projection reference point at the viewing coordinate origin and put the view plane in front of the scene.
We set the field of view by adjusting the size of the window relative to the distance of the view plane from the PRP.
We move through the scene by moving the viewing reference frame and the PRP will move with the view reference point.
(Figure: a parallel-projection view volume before (a) and after (b) shearing; after the shear, the direction of projection is perpendicular to the window, near, and far planes.)
General parallel projection transformation
Shearing: let Vp = (a, b, c) be the projection vector in viewing coordinates. The shear maps (a, b, c) to (0, 0, c), aligning the projection direction with the z axis. The shear transformation can be expressed as V'p = Mparallel · Vp, where

Mparallel = | 1  0  −a/c  0 |
            | 0  1  −b/c  0 |
            | 0  0    1   0 |
            | 0  0    0   1 |

For an orthographic parallel projection a = b = 0, so Mparallel becomes the identity matrix.
Perspective

(Figure: a perspective view volume, defined by the window, near and far planes, and the center of projection, shown before (a) and after (b) the regularizing transformation.)
Regularization of Clipping (View) Volume (Cont’)
General perspective projection transformation
General perspective projection transformation Perspective
Steps
1. Shear the view volume so that the centerline of the frustum is perpendicular to the view plane
2. Scale the view volume with a scaling factor that depends on 1/z.
A shear operation is to align a general perspective view volume with the projection window.
The transformation involves a combination of z-axis shear and a translation.
Mperspective=Mscale.Mshear
Clipping
View volume clipping boundaries are planes whose orientations depend on the type of projection, the projection window and the position of the projection reference point
The process of finding the intersection of a line with one of the view volume boundaries is simplified if we convert the view volume before clipping to a rectangular parallelepiped.
i.e we first perform the projection transformation which converts coordinate values in the view volume to orthographic parallel coordinates.
Oblique projection view volumes are converted to a rectangular parallelepiped by the shearing operation and perspective view volumes are converted with a combination of shear and scale transformations.
Clipping-normalized view volumes The normalized view volume is a region defined by the planes
x = 0, x = 1, y = 0, y = 1, z = 0, z = 1
Clipping-normalized view volumes
There are several advantages to clipping against the unit cube
1. The normalized view volume provides a standard shape for representing any sized view volume.
2. Clipping procedures are simplified and standardized with unit clipping planes or the viewport planes.
3. Depth cueing and visible-surface determination are simplified, since Z-axis always points towards the viewer.
Unit cube
3D viewport
Mapping positions within a rectangular view volume to a three-dimensional rectangular viewport is accomplished with a combination of scaling and translation.
The mapping matrix is

| Dx  0   0   Kx |
| 0   Dy  0   Ky |
| 0   0   Dz  Kz |
| 0   0   0   1  |

where
Dx = (xvmax − xvmin)/(xwmax − xwmin) and Kx = xvmin − xwmin·Dx
Dy = (yvmax − yvmin)/(ywmax − ywmin) and Ky = yvmin − ywmin·Dy
Dz = (zvmax − zvmin)/(zwmax − zwmin) and Kz = zvmin − zwmin·Dz
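Because the mapping matrix has no cross terms, it can be applied one axis at a time. A small C++ sketch of the per-axis form; the helper name mapAxis is illustrative, not from the slides:

```cpp
#include <cassert>

// Map one coordinate from the window range [wmin, wmax] into the
// viewport range [vmin, vmax] using D = (vmax - vmin)/(wmax - wmin)
// and K = vmin - wmin*D, exactly as in the matrix above.
double mapAxis(double w, double wmin, double wmax,
               double vmin, double vmax) {
    double D = (vmax - vmin) / (wmax - wmin); // scale factor
    double K = vmin - wmin * D;               // translation term
    return D * w + K;
}
```

Calling mapAxis once for x, y, and z reproduces the full 4x4 mapping.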
Viewport clipping
For a line endpoint at position (x, y, z) we assign the bit positions in the region code from right to left as
Bit 1 = 1 if x < xvmin (left)
Bit 2 = 1 if x > xvmax (right)
Bit 3 = 1 if y < yvmin (below)
Bit 4 = 1 if y > yvmax (above)
Bit 5 = 1 if z < zvmin (front)
Bit 6 = 1 if z > zvmax (back)
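A minimal C++ sketch of this 6-bit region code (function name and argument order are illustrative):

```cpp
#include <cassert>

// 3D region code as described above; bit layout from right to
// left: left, right, below, above, front, back.
int regionCode3D(double x, double y, double z,
                 double xmin, double xmax,
                 double ymin, double ymax,
                 double zmin, double zmax) {
    int code = 0;
    if (x < xmin) code |= 1;    // bit 1: left
    if (x > xmax) code |= 2;    // bit 2: right
    if (y < ymin) code |= 4;    // bit 3: below
    if (y > ymax) code |= 8;    // bit 4: above
    if (z < zmin) code |= 16;   // bit 5: front
    if (z > zmax) code |= 32;   // bit 6: back
    return code;
}
```

As in 2D Cohen–Sutherland, a code of 0 means the endpoint is inside the volume.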
Viewport clipping
For a line segment with endpoints P1(x1, y1, z1) and P2(x2, y2, z2), the parametric equations can be written as
x = x1 + (x2 − x1)u
y = y1 + (y2 − y1)u
z = z1 + (z2 − z1)u,    0 ≤ u ≤ 1
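For clipping, a boundary crossing is found by solving one of these equations for u. A sketch for a plane x = xplane, assuming x2 ≠ x1 (the same form works for the other five boundary planes):

```cpp
#include <cassert>

// Parameter u at which the segment P1P2 crosses the plane
// x = xplane (assumes x2 != x1).
double intersectU(double x1, double x2, double xplane) {
    return (xplane - x1) / (x2 - x1);
}

// The parametric equation above, applied to one coordinate.
double pointAt(double p1, double p2, double u) {
    return p1 + (p2 - p1) * u;
}
```

Substituting the resulting u back into the other two parametric equations gives the full intersection point.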
Hardware implementations
WORLD-COORDINATE
Object descriptions
Transformation Operations
Clipping Operations
Conversion to Device Coordinates
3D Transformations
2D coordinates (x, y) are extended to 3D coordinates (x, y, z).
Right-handed coordinate system: with x pointing right and y pointing up, the z axis points out of the screen toward the viewer.
3D Transformations (cont.)
1. Translation in 3D is a simple extension from that in 2D:
2. Scaling is similarly extended:
T(dx, dy, dz) =
| 1  0  0  dx |
| 0  1  0  dy |
| 0  0  1  dz |
| 0  0  0  1  |

S(sx, sy, sz) =
| sx  0   0   0 |
| 0   sy  0   0 |
| 0   0   sz  0 |
| 0   0   0   1 |
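A compact C++ sketch of these two matrices applied to a homogeneous point (the type aliases and helper names are illustrative):

```cpp
#include <cassert>
#include <array>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// Multiply a homogeneous point by a 4x4 matrix.
Vec4 mul(const Mat4& m, const Vec4& p) {
    Vec4 r{0, 0, 0, 0};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * p[j];
    return r;
}

// T(dx, dy, dz) as written above.
Mat4 translate(double dx, double dy, double dz) {
    return {{{1,0,0,dx}, {0,1,0,dy}, {0,0,1,dz}, {0,0,0,1}}};
}

// S(sx, sy, sz) as written above.
Mat4 scale(double sx, double sy, double sz) {
    return {{{sx,0,0,0}, {0,sy,0,0}, {0,0,sz,0}, {0,0,0,1}}};
}
```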
3D Transformations (cont.)
3. The 2D rotation introduced previously is just a 3D rotation
about the z axis.
Similarly we have:

Rz(θ) =
| cos θ  −sin θ  0  0 |
| sin θ   cos θ  0  0 |
| 0       0      1  0 |
| 0       0      0  1 |

Rx(θ) =
| 1  0       0      0 |
| 0  cos θ  −sin θ  0 |
| 0  sin θ   cos θ  0 |
| 0  0       0      1 |

Ry(θ) =
|  cos θ  0  sin θ  0 |
|  0      1  0      0 |
| −sin θ  0  cos θ  0 |
|  0      0  0      1 |
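As an example, Rz applied directly to a point can be sketched as follows (the helper name rotateZ is illustrative; z is unchanged by Rz and so is omitted):

```cpp
#include <cassert>
#include <cmath>

// Apply the upper-left 2x2 of Rz(theta) to a point; the z
// coordinate and homogeneous w are untouched by Rz.
void rotateZ(double theta, double& x, double& y) {
    double c = std::cos(theta), s = std::sin(theta);
    double nx = c * x - s * y;   // first row of Rz
    double ny = s * x + c * y;   // second row of Rz
    x = nx;
    y = ny;
}
```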
Composition of 3D Rotations
In 3D transformations, the order of a sequence of rotations matters!
For example, with a rotation by θ about z and a rotation by φ about y:

Rz(θ)·Ry(φ) =
| cos θ cos φ  −sin θ  cos θ sin φ  0 |
| sin θ cos φ   cos θ  sin θ sin φ  0 |
| −sin φ        0      cos φ        0 |
| 0             0      0            1 |

Ry(φ)·Rz(θ) =
|  cos φ cos θ  −cos φ sin θ  sin φ  0 |
|  sin θ         cos θ        0      0 |
| −sin φ cos θ   sin φ sin θ  cos φ  0 |
|  0             0            0      1 |

Hence Rz(θ)·Ry(φ) ≠ Ry(φ)·Rz(θ).
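The inequality is easy to confirm numerically: rotating the x unit vector by 90° about z and then about y lands on a different point than the reverse order. A small sketch (struct and function names illustrative):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

// Rotate about z by angle t (3x3 part of Rz).
V3 rotZ(V3 p, double t) {
    return { std::cos(t)*p.x - std::sin(t)*p.y,
             std::sin(t)*p.x + std::cos(t)*p.y,
             p.z };
}

// Rotate about y by angle t (3x3 part of Ry).
V3 rotY(V3 p, double t) {
    return { std::cos(t)*p.x + std::sin(t)*p.z,
             p.y,
            -std::sin(t)*p.x + std::cos(t)*p.z };
}
```

Applying rotZ first sends (1,0,0) to (0,1,0), which rotY then leaves fixed; applying rotY first sends (1,0,0) to (0,0,−1), which rotZ leaves fixed.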
More Rotations
We have shown how to rotate about one of the principal axes, i.e. the axes constituting the coordinate system. There is more we can do, for example, performing a rotation about an arbitrary axis:
We want to rotate an object about an axis in space passing through P1(x1, y1, z1) and P2(x2, y2, z2).
Rotating About An Arbitrary Axis
1). Translate the object by (−x1, −y1, −z1): T(−x1, −y1, −z1)
2). Rotate the axis about x so that it lies in the xz plane: Rx(α)
3). Rotate the axis about y so that it lies on z: Ry(β)
4). Rotate the object about z by θ: Rz(θ)
Rotating About An Arbitrary Axis (cont.)
After all the efforts, don’t forget to undo the rotations and the translation!
Therefore, the mixed matrix that will perform the required task of rotating an
object about an arbitrary axis is given by:
M = T(x1, y1, z1) Rx(−α) Ry(−β) Rz(θ) Ry(β) Rx(α) T(−x1, −y1, −z1)
Finding θ is trivial, but what about α? The angle between the z axis and the projection of P1P2 on the yz plane is α.
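The conjugation pattern T⁻¹·R·T is easiest to see with an axis parallel to z, where the Rx/Ry alignment steps drop out. A hedged sketch (function name illustrative, not from the slides):

```cpp
#include <cassert>
#include <cmath>

struct P3 { double x, y, z; };

// Rotate point p by theta about the line parallel to the z axis
// that passes through (ax, ay, 0): T(ax,ay,0) Rz(theta) T(-ax,-ay,0).
P3 rotateAboutZAxisThrough(P3 p, double ax, double ay, double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    double tx = p.x - ax, ty = p.y - ay;   // T(-ax, -ay, 0)
    return { ax + c * tx - s * ty,         // Rz(theta), then
             ay + s * tx + c * ty,         // T(ax, ay, 0)
             p.z };
}
```

Rotating (2, 0, 0) by 90° about the vertical line through (1, 0, 0) gives (1, 1, 0), as expected.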
Composite 3D Transformations
Example of Composite 3D Transformations
Try to transform the line segments P1P2 and P1P3 from their start position in (a) to their ending position in (b).
The first solution is to compose the primitive transformations T, Rx, Ry, and Rz. This approach is easier to illustrate and helps build an understanding. The second, more abstract approach is to use the properties of special orthogonal matrices.
[Figure: line segments P1P2 and P1P3 in (a) their start position and (b) their ending position.]
Composition of 3D Transformations
Breaking a difficult problem into simpler sub-problems:
1. Translate P1 to the origin.
2. Rotate about the y axis such that P1P2 lies in the (y, z) plane.
3. Rotate about the x axis such that P1P2 lies on the z axis.
4. Rotate about the z axis such that P1P3 lies in the (y, z) plane.
Composition of 3D Transformations
1. T(−x1, −y1, −z1) =

| 1  0  0  −x1 |
| 0  1  0  −y1 |
| 0  0  1  −z1 |
| 0  0  0   1  |

Applying it to the three points gives

P1' = T(−x1, −y1, −z1)·P1 = [0  0  0  1]ᵀ
P2' = T(−x1, −y1, −z1)·P2 = [x2−x1  y2−y1  z2−z1  1]ᵀ
P3' = T(−x1, −y1, −z1)·P3 = [x3−x1  y3−y1  z3−z1  1]ᵀ

2. Ry(θ − 90°) =

|  cos(θ−90°)  0  sin(θ−90°)  0 |
|  0           1  0           0 |
| −sin(θ−90°)  0  cos(θ−90°)  0 |
|  0           0  0           1 |
Composition of 3D Transformations

3. Rx(φ) =

| 1  0       0      0 |
| 0  cos φ  −sin φ  0 |
| 0  sin φ   cos φ  0 |
| 0  0       0      1 |

4. Rz(α)

Finally, we have the composite matrix:

M = Rz(α) Rx(φ) Ry(θ − 90°) T(−x1, −y1, −z1)
Vector Rotation
Rotate the vector: the unit vector along the x axis is [1, 0]ᵀ. After rotating about the origin by θ, the resulting vector is

u = | cos θ  −sin θ | |1| = | cos θ |
    | sin θ   cos θ | |0|   | sin θ |
x
Vector Rotation (cont.)

Similarly, the unit vector along the y axis is [0, 1]ᵀ. After rotating about the origin by θ, the resulting vector is

v = | cos θ  −sin θ | |0| = | −sin θ |
    | sin θ   cos θ | |1|   |  cos θ |

The above result states that if we rotate a vector originally pointing in the direction of the x (or y) axis toward a new direction u (or v), the rotation matrix R can simply be written as [u | v], without any explicit knowledge of θ, the actual rotation angle.
Vector Rotation (cont.)
The reverse operation of the above rotation is to rotate a vector that is not originally pointing in the x (or y) direction into the direction of the positive x or y axis. The rotation matrix in this case is R(−θ), expressed as R⁻¹(θ):

R⁻¹(θ) = | cos(−θ)  −sin(−θ) | = |  cos θ  sin θ | = Rᵀ(θ) = | uᵀ |
         | sin(−θ)   cos(−θ) |   | −sin θ  cos θ |           | vᵀ |

where T denotes the transpose.
Example
What is the rotation matrix if one wants the vector along the positive x axis to be rotated to the direction of u, where u is the unit vector in the direction of (2, 3)?

u = (2, 3)/|(2, 3)| = (2/√13, 3/√13)ᵀ

R = [u | v] = | 2/√13  −3/√13 |
              | 3/√13   2/√13 |

If, on the other hand, one wants the vector u to be rotated to the direction of the positive x axis, the rotation matrix should be

R = | uᵀ | = |  2/√13  3/√13 |
    | vᵀ |   | −3/√13  2/√13 |
Rotation Matrices
Rotation matrix is orthonormal:
• Each row is a unit vector: cos²θ + sin²θ = 1.
• Each row is perpendicular to the other, i.e. their dot product is zero: cos θ·(−sin θ) + sin θ·cos θ = 0.
• Each row vector will be rotated by R(θ) to lie on the positive x and y axes, respectively. The two column vectors are those into which vectors along the positive x and y axes are rotated.
• For orthonormal matrices, R⁻¹(θ) = Rᵀ(θ).
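These identities can be checked numerically; a small sketch (function name illustrative):

```cpp
#include <cassert>
#include <cmath>

// Check the orthonormality properties of R(theta): unit rows and
// a zero dot product between them.
bool isOrthonormal2D(double theta, double eps = 1e-12) {
    double c = std::cos(theta), s = std::sin(theta);
    double row1 = c * c + s * s;   // |(cos, -sin)|^2
    double row2 = s * s + c * c;   // |(sin,  cos)|^2
    double dot  = c * (-s) + s * c; // rows' dot product
    return std::fabs(row1 - 1) < eps &&
           std::fabs(row2 - 1) < eps &&
           std::fabs(dot) < eps;
}
```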
Cross Product
• The cross product or vector product of two vectors, v1 and v2, is another vector:

v1 × v2 = ( y1z2 − y2z1,  z1x2 − z2x1,  x1y2 − x2y1 )

• The cross product of two vectors is orthogonal to both.
• The right-hand rule dictates the direction of the cross product.
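A direct C++ transcription of the component formula, with a dot product to confirm orthogonality (type and function names illustrative):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// Cross product, component for component as written above.
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - b.y * a.z,
             a.z * b.x - b.z * a.x,
             a.x * b.y - b.x * a.y };
}

double dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
```

For the standard basis, cross({1,0,0}, {0,1,0}) yields (0, 0, 1), in line with the right-hand rule.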
Extension to 3D Cases
The above examples can be extended to 3D cases….
In 2D, we need to know only u, which will be rotated to the direction of the positive x axis.

In 3D, however, we need to know more than one vector. Suppose, for example, that two vectors u1 and u2 are given, and let v = u1 × u2. If after rotation u1 is aligned to the positive z axis, this only gives us the third column in the rotation matrix. What about the other two columns?
3D Rotation
In many cases in 3D, only one vector will be aligned to one of the coordinate axes, and the others are often not explicitly given. Let’s see the example:
Note that in this example vector P1P2 will be rotated to the positive z direction; hence the corresponding row vector in the rotation matrix is the normalised P1P2. But what about the other two rows? After all, P1P3 is not perpendicular to P1P2. Well, we can find one by taking the cross product of P1P2 and P1P3. Since P1P2 × P1P3 is perpendicular to both P1P2 and P1P3, it will be aligned into the direction of the positive x axis. The third direction is decided by the cross product of the other two, which is P1P2 × (P1P2 × P1P3). Therefore, the rotation matrix should be
3D Rotation (cont.)
R = | uᵀ |         u = (P1P2 × P1P3)/|P1P2 × P1P3|
    | vᵀ |   with  v = (P1P2 × (P1P2 × P1P3))/|P1P2 × (P1P2 × P1P3)|
    | wᵀ |         w = P1P2/|P1P2|
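The construction of u, v, w can be sketched as follows; here v is computed as w × u, which points along the normalized P1P2 × (P1P2 × P1P3) given above (helper names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct V { double x, y, z; };

V sub(V a, V b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
V crs(V a, V b) {
    return {a.y*b.z - b.y*a.z, a.z*b.x - b.z*a.x, a.x*b.y - b.x*a.y};
}
double dt(V a, V b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
V nrm(V a) {
    double l = std::sqrt(dt(a, a));
    return {a.x / l, a.y / l, a.z / l};
}

// Build the orthonormal frame described above from P1, P2, P3.
void frameFrom(V p1, V p2, V p3, V& u, V& v, V& w) {
    w = nrm(sub(p2, p1));                        // rotated onto +z
    u = nrm(crs(sub(p2, p1), sub(p3, p1)));      // rotated onto +x
    v = crs(w, u);                               // rotated onto +y
}
```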
Yaw, Pitch, and Roll
Imagine three lines running through an airplane and intersecting at right angles at the airplane’s centre of gravity.
Roll: rotation around the front-to-back axis.
Pitch: rotation around the side-to-side axis.
Yaw: rotation around the vertical axis.
An Example of the Airplane
Consider the following example. An airplane is oriented such that its nose is pointing in the positive z direction, its right wing is pointing in the positive x direction, and its cockpit is pointing in the positive y direction. We want to transform the airplane so that it heads in the direction given by the vector DOF (direction of flight), is centred at P, and is not banked.
Solution to the Airplane Example
First we rotate the positive zp direction into the direction of DOF, which gives us the third column of the rotation matrix: DOF/|DOF|. The xp axis must be transformed into a horizontal vector perpendicular to DOF – that is, in the direction of y × DOF. The yp direction is then given by zp × xp = DOF × (y × DOF). The rotation matrix is therefore

R = [ (y × DOF)/|y × DOF|  |  (DOF × (y × DOF))/|DOF × (y × DOF)|  |  DOF/|DOF| ]
Inverses of (2D and) 3D Transformations
1. Translation:
2. Scaling:
3. Rotation:
4. Shear:
1. T⁻¹(dx, dy, dz) = T(−dx, −dy, −dz)
2. S⁻¹(sx, sy, sz) = S(1/sx, 1/sy, 1/sz)
3. R⁻¹(θ) = R(−θ) = Rᵀ(θ)
4. SH⁻¹(shx, shy) = SH(−shx, −shy)
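Rule 1 is easy to verify: translating and then translating back returns the original point. A trivial sketch:

```cpp
#include <cassert>

struct Pt { double x, y, z; };

// Apply T(dx, dy, dz) to a point.
Pt trans(Pt p, double dx, double dy, double dz) {
    return {p.x + dx, p.y + dy, p.z + dz};
}
```

Composing trans with its negated arguments is the identity, which is exactly what T⁻¹ = T(−dx, −dy, −dz) states.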
UNIT-III
GRAPHICS
PROGRAMMING
Color Models
Color models,cont’d
Different meanings of color: painting, wavelength of visible light, human eye perception.
Physical properties of light
Visible light is part of the electromagnetic radiation spectrum (380-750 nm)
1 nm (nanometer) = 10^-9 m (= 10^-7 cm)
1 Å (angstrom) = 0.1 nm
Radiation can be expressed in wavelength (λ) or frequency (f), c = λf, where c = 3·10^10 cm/sec
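A one-line illustration of c = λf with the constants above (function name illustrative):

```cpp
#include <cassert>
#include <cmath>

// Frequency in Hz of light with the given wavelength in nm,
// using c = 3e10 cm/s and 1 nm = 1e-7 cm.
double frequencyHz(double lambda_nm) {
    double lambda_cm = lambda_nm * 1e-7;
    return 3.0e10 / lambda_cm;
}
```

For example, 600 nm (orange-red) light has a frequency of 5·10^14 Hz.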
Physical properties of light
White light consists of a spectrum of all visible colors
Physical properties of light
All kinds of light can be described by the energy of each wavelength
The distribution showing the relation between energy and wavelength (or frequency) is called energy spectrum
Physical properties of light
This distribution may indicate:
1) a dominant wavelength (or frequency), which is the color of the light (hue), cp. ED
2) brightness (luminance), the intensity of the light (value), cp. the area A
3) purity (saturation), cp. ED − EW
Physical properties of light
Energy spectrum for a light source with a dominant frequency near the red color
Material properties
The color of an object depends on the so called spectral curves for transparency and reflection of the material
The spectral curves describe how light of different wavelengths are refracted and reflected (cp. the material coefficients introduced in the illumination models)
Properties of reflected light
Incident white light upon an object is for some wavelengths absorbed, for others reflected
E.g. if all light is absorbed => black. If all wavelengths but one are absorbed =>
that one color is observed as the color of the object by reflection.
Color definitions
Complementary colors - two colors combine to produce white light
Primary colors - (two or) three colors used for describing other colors
Two main principles for mixing colors: additive mixing subtractive mixing
Additive mixing
pure colors are put close to each other => a mix on the retina of the human eye (cp. RGB)
overlapping gives yellow, cyan, magenta and white the typical technique on color displays
Subtractive mixing
color pigments are mixed directly in some liquid, e.g. ink
each color in the mixture absorbs its specific part of the incident light
the color of the mixture is determined by subtraction of colored light, e.g. yellow absorbs blue => only red and green, i.e. yellow, will reach the eye (yellow because of addition)
Subtractive mixing,cont’d
primary colors: cyan, magenta and yellow, i.e. CMY
the typical technique in printers/plotters connection between additive and
subtractive primary colors (cp. the color models RGB and CMY)
Additive/subtractive mixing
Human color seeing
The retina of the human eye consists of cones (7-8M; Swedish "tappar") and rods (100-120M; "stavar"), which are connected by nerve fibres to the brain.
Human color seeing,cont’d
Theory: the cones consist of various light absorbing material
The light sensitivity of the cones and rods varies with the wavelength, and between persons
The "sum" of the energy spectrum of the light, the reflection spectrum of the object, and the response spectrum of the eye decides the color perception for a person.
Overview of color models
The human eye can perceive about 382000(!) different colors
Necessary with some kind of classification system, all using three coordinates as a basis:
1) CIE standard
2) RGB color model
3) CMY color model (also, CMYK)
4) HSV color model
5) HLS color model
CIE standard
Commission Internationale de L’Eclairage (1931)
not a computer model
each color = a weighted sum of three imaginary primary colors
RGB model
all colors are generated from the three primaries
various colors are obtained by changing the amount of each primary
additive mixing (r,g,b), 0≤r,g,b≤1
RGB model,cont’d
the RGB cube 1 bit/primary => 8 colors, 8 bits/primary => 16M colors
CMY model
cyan, magenta and yellow are complementary colors of red, green and blue, respectively
subtractive mixing the typical printer
technique
CMY model,cont’d
almost the same cube as with RGB; only black <-> white are exchanged
the various colors are obtained by reducing light, e.g. if red is absorbed => green and blue are added, i.e. cyan
RGB vs CMY
If the intensities are represented as 0 ≤ r,g,b ≤ 1 and 0 ≤ c,m,y ≤ 1 (coordinates 0-255 can also be used), then the relation between RGB and CMY can be described as:

| c |   | 1 |   | r |
| m | = | 1 | − | g |
| y |   | 1 |   | b |
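The relation is component-wise, so code for it is a pair of one-liners (struct names illustrative):

```cpp
#include <cassert>

struct RGB { double r, g, b; };
struct CMY { double c, m, y; };

// (c, m, y) = (1, 1, 1) - (r, g, b), as in the relation above.
CMY rgbToCmy(RGB in) { return {1.0 - in.r, 1.0 - in.g, 1.0 - in.b}; }
RGB cmyToRgb(CMY in) { return {1.0 - in.c, 1.0 - in.m, 1.0 - in.y}; }
```

Red (1, 0, 0) maps to (0, 1, 1), i.e. full magenta and yellow ink with no cyan, and the conversion is its own inverse.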
CMYK model
For printing and graphics art industry, CMY is not enough; a fourth primary, K which stands for black, is added.
Conversions between RGB and CMYK are possible, although they require some extra processing.
HSV model
HSV stands for Hue-Saturation-Value described by a hexcone derived from the RGB
cube
HSV model,cont’d
Hue (0-360°); "the color", cp. the dominant wavelength (128 levels)
Saturation (0-1); "the amount of white" (130 levels)
Value (0-1); "the amount of black" (23 levels)
HSV model,cont’d
The numbers given after each "primary" are estimates of how many levels a human being is capable of distinguishing, which (in theory) gives the total number of color nuances:
128 × 130 × 23 = 382720
In Computer Graphics, it is usually enough with:
128 × 8 × 16 = 16384
HLS model
Another model similar to HSV
L stands for Lightness
Color models
Some more facts about colors:
The distance between two colors in the color cube is not a measure of how far apart the colors are perceptually!
Humans are more sensitive to shifts in blue (and green?) than, for instance, in yellow
COMPUTER ANIMATIONS
Computer Animations
Any time sequence of visual changes in a scene: size, color, transparency, shape, surface texture, rotation, translation, scaling, lighting effects, morphing, changing camera parameters (position, orientation, and focal length), particle animation.
Design of animation sequences:
• Storyboard layout
• Object definitions
• Key-frame specifications
• Generation of in-between frames
Computer Animations
Frame-by-frame animation: each frame is separately generated.
Object definition: objects are defined in terms of basic shapes, such as polygons or splines. In addition, the associated movements for each object are specified along with the shape.
Storyboard: an outline of the action.
Keyframe: a detailed drawing of the scene at a particular instant.
Computer Animations
Inbetweens: intermediate frames (3 to 5 inbetweens for each pair of key frames). Motions can be generated using 2D or 3D transformations. Object parameters are stored in a database, and rendering algorithms are applied finally.
Raster animations: use raster operations. E.g., we can animate objects along 2D motion paths using color-table transformations: we predefine the object at successive positions along the motion path, and set the successive blocks of pixel values to color-table entries.
Computer Animations
Computer animation languages: typical tasks in animation specification are
Scene description – includes positions of objects and light sources, defining the photometric parameters and setting the camera parameters.
Action specification – this involves layout of motion paths for the objects and camera. We need viewing and perspective transformations, geometric transformations, visible-surface detection, surface rendering, kinematics, etc.
Keyframe systems – designed simply to generate the in-betweens from the user-specified key frames.
Computer Animations Computer animation languages:
Parameterized systems – allow object motion characteristics to be specified as part of the object definitions. The adjustable parameters control such object characteristics as degrees of freedom, motion limitations and allowable shape changes.
Scripting systems – allow object specifications and animation sequences to be defined with a user-input script. From the script a library of various objects and motions can be constructed.
Computer AnimationsInterpolation techniques Linear
Computer Animations
Interpolation techniques Non-linear
Key frame systems
Morphing – transformation of object shapes from one form into another is called morphing. Given two key frames for an object transformation, we first adjust the object specification in one of the frames so that the number of polygon edges (or vertices) is the same for the two frames.
Let Lk, Lk+1 denote the number of line segments in two different frames K, K+1. Define
Lmax = max(Lk, Lk+1)
Lmin = min(Lk, Lk+1)
Ne = Lmax mod Lmin
Ns = int(Lmax / Lmin)
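Computing Ne and Ns is a direct transcription of the two formulas (function name illustrative):

```cpp
#include <cassert>
#include <algorithm>

// Edge-count preprocessing for two key frames with Lk and Lk1
// line segments, as defined above.
void morphCounts(int Lk, int Lk1, int& Ne, int& Ns) {
    int Lmax = std::max(Lk, Lk1);
    int Lmin = std::min(Lk, Lk1);
    Ne = Lmax % Lmin;   // Ne = Lmax mod Lmin
    Ns = Lmax / Lmin;   // Ns = int(Lmax / Lmin)
}
```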
Computer Animations

Steps:
1. Divide Ne edges of keyframe_min into Ns + 1 sections.
2. Divide the remaining lines of keyframe_min into Ns sections.

[Figure: edges 1, 2 of key frame K matched to edges 1', 2', 3' of key frame K+1.]
Key frame systems
Morphing – if we equalize the vertex count instead, a similar analysis follows. Let Vk, Vk+1 denote the number of vertices in the two frames K, K+1. Define
Vmax = max(Vk, Vk+1), Vmin = min(Vk, Vk+1)
Nls = (Vmax − 1) mod (Vmin − 1)
Np = int((Vmax − 1)/(Vmin − 1))
Steps:
1. Add Np points to Nls line sections of keyframe_min.
2. Add Np − 1 points to the remaining edges of keyframe_min.
Computer Animations
Simulating accelerations
Curve-fitting techniques are often used to specify the animation paths between key frames. To simulate accelerations we can adjust the time spacing for the in-betweens. For constant speed we use equal-interval time spacing for the in-betweens. Suppose we want n in-betweens for key frames at times t1 and t2. The time interval between the key frames is then divided into n + 1 subintervals, yielding an in-between spacing of
Δt = (t2 − t1)/(n + 1)
We can calculate the time for the in-betweens as tBj = t1 + jΔt, for j = 1, 2, ..., n.
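The constant-speed timing formula as a sketch (function name illustrative):

```cpp
#include <cassert>
#include <cmath>

// Equal-interval in-between timing: dt = (t2 - t1)/(n + 1),
// tB_j = t1 + j*dt, as above.
double inBetweenTime(double t1, double t2, int n, int j) {
    double dt = (t2 - t1) / (n + 1);
    return t1 + j * dt;
}
```

With n = 4 in-betweens over [0, 1], the frames fall at 0.2, 0.4, 0.6, 0.8.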
Computer Animations
Simulating accelerations
To model increases or decreases in speed we use trigonometric functions. To model increasing speed, we want the time spacing between frames to increase so that greater changes in position occur as the object moves faster. We can obtain an increasing interval size with the function 1 − cos θ, 0 < θ < π/2. For n in-betweens, the time for the j-th in-between is then calculated as
tBj = t1 + Δt(1 − cos(jπ/2(n+1))),  j = 1, 2, ..., n
For j = 1: tB1 = t1 + Δt(1 − cos(π/2(n+1)))
For j = 2: tB2 = t1 + Δt(1 − cos(2π/2(n+1)))
where Δt is the time difference between the two key frames.
Computer Animations
Simulating decelerations
To model decreasing speed, we want the time spacing between frames to decrease. We can obtain this with the function sin θ, 0 < θ < π/2. For n in-betweens, the time for the j-th in-between is calculated as
tBj = t1 + Δt·sin(jπ/2(n+1)),  j = 1, 2, ..., n
Computer Animations
Simulating both accelerations and decelerations
A combination of increasing and decreasing speeds can be modeled with ½(1 − cos θ), 0 < θ < π. The time for the j-th in-between is calculated as
tBj = t1 + Δt·½(1 − cos(jπ/(n+1))),  j = 1, 2, ..., n
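The three trigonometric spacings can be sketched together; here dt is the full time difference between the two key frames, as in the text (function names illustrative):

```cpp
#include <cassert>
#include <cmath>

// Increasing speed: tB_j = t1 + dt*(1 - cos(j*pi/(2*(n+1)))).
double accelTime(double t1, double dt, int j, int n) {
    const double PI = std::acos(-1.0);
    return t1 + dt * (1.0 - std::cos(j * PI / (2.0 * (n + 1))));
}

// Decreasing speed: tB_j = t1 + dt*sin(j*pi/(2*(n+1))).
double decelTime(double t1, double dt, int j, int n) {
    const double PI = std::acos(-1.0);
    return t1 + dt * std::sin(j * PI / (2.0 * (n + 1)));
}

// Combined ease-in/ease-out: tB_j = t1 + dt*(1 - cos(j*pi/(n+1)))/2.
double easeTime(double t1, double dt, int j, int n) {
    const double PI = std::acos(-1.0);
    return t1 + dt * 0.5 * (1.0 - std::cos(j * PI / (n + 1)));
}
```

For a single in-between (n = 1), the accelerating spacing places it earlier than the decelerating one, and the combined curve places it at the midpoint.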
Computer Animations
Motion specifications
Direct motion specifications: here we explicitly give the rotation angles and translation vectors. Then the geometric transformation matrices are applied to transform coordinate positions.
A bouncing ball can be approximated by a damped sine curve:
y(x) = A·|sin(ωx + θ0)|·e^(−kx)
where A is the initial amplitude, ω is the angular frequency, θ0 is the phase angle, and k is the damping constant.
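The damped-sine bounce evaluates directly (function name illustrative):

```cpp
#include <cassert>
#include <cmath>

// y(x) = A * |sin(w*x + theta0)| * e^(-k*x), as above.
double bounceY(double A, double w, double theta0, double k, double x) {
    return A * std::fabs(std::sin(w * x + theta0)) * std::exp(-k * x);
}
```

With k > 0 each successive bounce peak is lower, which is the visual effect the damping constant provides.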
Computer Animations
Motion specifications
Goal-directed systems: we can specify the motions that are to take place in general terms that abstractly describe the actions; the system determines specific motion parameters given the goals of the animation.
Computer Animations
Motion specifications
Kinematics: kinematic specification of a motion can also be given by simply describing the motion path, which is often done using splines.
In inverse kinematics we specify the initial and final positions of objects at specified times, and the motion parameters are computed by the system.
Computer Animations
Motion specifications
Dynamics: specification of the forces that produce the velocities and accelerations. Descriptions of object behavior under the influence of forces are generally referred to as physically based modeling (rigid-body systems and non-rigid systems such as cloth or plastic).
Examples of forces: magnetic, gravitational, frictional, etc.
We can also use inverse dynamics to obtain the forces, given the initial and final positions of objects and the type of motion.
Computer Animations
Computer Animations
Physics-based animation is ideally suited for:
• Large volumes of objects – wind effects, liquids
• Cloth animation/draping
The underlying mechanisms are usually:
• Particle systems
• Mass-spring systems
Physics based animations
Computer Animations
Physics based animations
Computer Animations
Physics based animations
Computer Animations
Some more animation techniques………….
Anticipation and Staging
Computer Animations
Some more animation techniques………….
Secondary Motion
Computer Animations
Some more animation techniques………….
Motion Capture
Computer Graphics using OpenGL
Initial Steps in Drawing Figures
Using Open-GL
Files: .h, .lib, .dll
• The entire folder gl is placed in the Include directory of Visual C++
• The individual lib files are placed in the lib directory of Visual C++
• The individual dll files are placed in C:\Windows\System32
Using Open-GL (2)
Includes:
• <windows.h>
• <gl/gl.h>
• <gl/glu.h>
• <gl/glut.h>
• <gl/glui.h> (if used)
Include in the order given. If you use capital letters for any file or directory, use them in your include statement also.
Using Open-GL (3)
Changing project settings: Visual C++ 6.0• Project menu, Settings entry
• In Object/library modules move to the end of the line and add glui32.lib glut32.lib glu32.lib opengl32.lib (separated by spaces from last entry and each other)
• In Project Options, scroll down to end of box and add same set of .lib files
• Close Project menu and save workspace
Using Open-GL (3)
Changing Project Settings: Visual C++ .NET 2003• Project, Properties, Linker, Command Line
• In the white space at the bottom, add glui32.lib glut32.lib glu32.lib opengl32.lib
• Close Project menu and save your solution
Getting Started Making Pictures
Graphics display: Entire screen (a); windows system (b); [both have usual screen coordinates, with y-axis down]; windows system [inverted coordinates] (c)
Basic System Drawing Commands
setPixel(x, y, color)• Pixel at location (x, y) gets color specified by
color
• Other names: putPixel(), SetPixel(), or drawPoint()
line(x1, y1, x2, y2) • Draws a line between (x1, y1) and (x2, y2)
• Other names: drawLine() or Line().
Alternative Basic Drawing
current position (cp), specifies where the system is drawing now.
moveTo(x,y) moves the pen invisibly to the location (x, y) and then updates the current position to this position.
lineTo(x,y) draws a straight line from the current position to (x, y) and then updates the cp to (x, y).
Example: A Square
moveTo(4, 4);    // move to starting corner
lineTo(-2, 4);
lineTo(-2, -2);
lineTo(4, -2);
lineTo(4, 4);    // close the square
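Independently of any real drawing backend, the moveTo/lineTo protocol just maintains a current position. A sketch that records segments instead of drawing them (the class name Pen is illustrative, not a GL type):

```cpp
#include <cassert>
#include <vector>
#include <utility>

// Track the current position (CP) and record each drawn segment;
// the segment store stands in for real line output.
struct Pen {
    double cpx = 0, cpy = 0;
    std::vector<std::pair<double, double>> pts;  // segment endpoints

    void moveTo(double x, double y) { cpx = x; cpy = y; }

    void lineTo(double x, double y) {
        pts.push_back({cpx, cpy});  // segment start at CP
        pts.push_back({x, y});      // segment end
        cpx = x; cpy = y;           // CP updated after drawing
    }
};
```

Replaying the square example above produces four segments and leaves CP back at the starting corner (4, 4).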
Device Independent Graphics and OpenGL
Allows same graphics program to be run on many different machine types with nearly identical output.• .dll files must be with program
OpenGL is an API: it controls whatever hardware you are using, and you use its functions instead of controlling the hardware directly.
OpenGL is open source (free).
Event-driven Programs
Respond to events, such as mouse click or move, key press, or window reshape or resize. System manages event queue.
Programmer provides “call-back” functions to handle each event.
Call-back functions must be registered with OpenGL to let it know which function handles which event.
Registering function does *not* call it!
Skeleton Event-driven Program
// include OpenGL libraries
void main()
{
   glutDisplayFunc(myDisplay);     // register the redraw function
   glutReshapeFunc(myReshape);     // register the reshape function
   glutMouseFunc(myMouse);         // register the mouse action function
   glutMotionFunc(myMotionFunc);   // register the mouse motion function
   glutKeyboardFunc(myKeyboard);   // register the keyboard action function
   // ...perhaps initialize other things...
   glutMainLoop();                 // enter the unending main loop
}
// ...all of the callback functions are defined here
Callback Functions
glutDisplayFunc(myDisplay); • (Re)draws screen when window opened or another
window moved off it.
glutReshapeFunc(myReshape); • Reports new window width and height for reshaped
window. (Moving a window does not produce a reshape event.)
glutIdleFunc(myIdle); • when nothing else is going on, simply redraws display
using void myIdle() {glutPostRedisplay();}
Callback Functions (2)
glutMouseFunc(myMouse); • Handles mouse button presses. Knows
mouse location and nature of button (up or down and which button).
glutMotionFunc(myMotionFunc); • Handles case when the mouse is moved with
one or more mouse buttons pressed.
Callback Functions (3)
glutPassiveMotionFunc(myPassiveMotionFunc) • Handles case where mouse enters the window with
no buttons pressed. glutKeyboardFunc(myKeyboardFunc);
• Handles key presses and releases. Knows which key was pressed and mouse location.
glutMainLoop() • Runs forever waiting for an event. When one occurs,
it is handled by the appropriate callback function.
Libraries to Include GL, for which the commands begin with GL; GLUT, the GL Utility Toolkit, opens windows,
develops menus, and manages events. GLU, the GL Utility Library, which provides
high level routines to handle complex mathematical and drawing operations.
GLUI, the User Interface Library, which is completely integrated with the GLUT library.• The GLUT functions must be available for GLUI to
operate properly. • GLUI provides sophisticated controls and menus to
OpenGL applications.
A GL Program to Open a Window
// appropriate #includes go here – see Appendix 1
void main(int argc, char** argv)
{
   glutInit(&argc, argv);                        // initialize the toolkit
   glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);  // set the display mode
   glutInitWindowSize(640, 480);                 // set window size
   glutInitWindowPosition(100, 150);             // set window upper left corner position on screen
   glutCreateWindow("my first attempt");         // open the screen window (Title: my first attempt)
// continued next slide
Part 2 of Window Program
   // register the callback functions
   glutDisplayFunc(myDisplay);
   glutReshapeFunc(myReshape);
   glutMouseFunc(myMouse);
   glutKeyboardFunc(myKeyboard);
   myInit();         // additional initializations as necessary
   glutMainLoop();   // go into a perpetual loop
}
Terminate the program by closing the window(s) it is using.
What the Code Does
glutInit (&argc, argv) initializes Open-GL Toolkit
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB) allocates a single display buffer and uses colors to draw
glutInitWindowSize (640, 480) makes the window 640 pixels wide by 480 pixels high
What the Code Does (2)
glutInitWindowPosition (100, 150) puts upper left window corner at position 100 pixels from left edge and 150 pixels down from top edge
glutCreateWindow (“my first attempt”) opens and displays the window with the title “my first attempt”
Remaining functions register callbacks
What the Code Does (3)
The call-back functions you write are registered, and then the program enters an endless loop, waiting for events to occur.
When an event occurs, GL calls the relevant handler function.
Effect of Program
Drawing Dots in OpenGL
We start with a coordinate system based on the window just created: 0 to 639 in x and 0 to 479 in y.
OpenGL drawing is based on vertices (corners). To draw an object in OpenGL, you pass it a list of vertices.• The list starts with glBegin(arg); and ends with
glEnd();• Arg determines what is drawn.• glEnd() sends drawing data down the OpenGL
pipeline.
Example
glBegin(GL_POINTS);
   glVertex2i(100, 50);
   glVertex2i(100, 130);
   glVertex2i(150, 130);
glEnd();

GL_POINTS is a constant built into OpenGL (also GL_LINES, GL_POLYGON, ...). Complete code to draw the 3 dots is in Fig. 2.11.
Display for Dots
What Code Does: GL Function Construction
Example of Construction
glVertex2i(…) takes integer values; glVertex2d(…) takes floating point values.
OpenGL has its own data types to make graphics device-independent• Use these types instead of standard ones
Open-GL Data Types

suffix | data type          | C/C++ type                    | OpenGL type name
b      | 8-bit integer      | signed char                   | GLbyte
s      | 16-bit integer     | short                         | GLshort
i      | 32-bit integer     | int or long                   | GLint, GLsizei
f      | 32-bit float       | float                         | GLfloat, GLclampf
d      | 64-bit float       | double                        | GLdouble, GLclampd
ub     | 8-bit unsigned     | unsigned char                 | GLubyte, GLboolean
us     | 16-bit unsigned    | unsigned short                | GLushort
ui     | 32-bit unsigned    | unsigned int or unsigned long | GLuint, GLenum, GLbitfield
Setting Drawing Colors in GL
glColor3f(red, green, blue);   // set drawing color
• glColor3f(1.0, 0.0, 0.0);    // red
• glColor3f(0.0, 1.0, 0.0);    // green
• glColor3f(0.0, 0.0, 1.0);    // blue
• glColor3f(0.0, 0.0, 0.0);    // black
• glColor3f(1.0, 1.0, 1.0);    // bright white
• glColor3f(1.0, 1.0, 0.0);    // bright yellow
• glColor3f(1.0, 0.0, 1.0);    // magenta
Setting Background Color in GL
glClearColor (red, green, blue, alpha); • Sets background color.
• Alpha is degree of transparency; use 0.0 for now.
glClear(GL_COLOR_BUFFER_BIT); • clears window to background color
Setting Up a Coordinate System
void myInit(void)
{
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   gluOrtho2D(0, 640.0, 0, 480.0);
}
// sets up the coordinate system for the window, from (0, 0) to (640, 480)
Drawing Lines
glBegin(GL_LINES);        // draws one line
   glVertex2i(40, 100);   // between 2 vertices
   glVertex2i(202, 96);
glEnd();
glFlush();

If more than two vertices are specified between glBegin(GL_LINES) and glEnd(), they are taken in pairs and a separate line is drawn between each pair.
Line Attributes
Color, thickness, stippling. glColor3f() sets color. glLineWidth(4.0) sets thickness. The default
thickness is 1.0.
a). thin lines b). thick lines c). stippled lines
Setting Line Parameters
Polylines and Polygons: lists of vertices. Polygons are closed (right); polylines
need not be closed (left).
Polyline/Polygon Drawing
glBegin(GL_LINE_STRIP);   // use GL_LINE_LOOP to close the polyline (make it a polygon)
   // glVertex2i() calls go here
glEnd();
glFlush();

A GL_LINE_LOOP cannot be filled with color.
Examples
Drawing line graphs: connect each pair of (x, f(x)) values
Must scale and shift
Examples (2)
Drawing polyline from vertices in a file• # polylines
• # vertices in first polyline
• Coordinates of vertices, x y, one pair per line
• Repeat last 2 lines as necessary
File for dinosaur available from Web site Code to draw polylines/polygons in Fig.
2.24.
Examples (3)
Examples (4)
Parameterizing Drawings: allows making them different sizes and aspect ratios
Code for a parameterized house is in Fig. 2.27.
Examples (5)
Examples (6)
Polyline Drawing Code to set up an array of vertices is in
Fig. 2.29. Code to draw the polyline is in Fig. 2.30.
Relative Line Drawing
Requires keeping track of current position on screen (CP).
moveTo(x, y); // set CP to (x, y)
lineTo(x, y); // draw a line from CP to (x, y), then update CP to (x, y)
Code is in Fig. 2.31. Caution! CP is a global variable, and therefore vulnerable to tampering from instructions at other points in your program.
Drawing Aligned Rectangles
glRecti (GLint x1, GLint y1, GLint x2, GLint y2); // opposite corners; filled with current color; later rectangles are drawn on top of previous ones
Aspect Ratio of Aligned Rectangles
Aspect ratio = width/height
Filling Polygons with Color
Polygons must be convex: any line segment between two boundary points lies entirely inside the polygon; in the figure below, only D, E, F are convex
Filling Polygons with Color (2)
glBegin (GL_POLYGON);• //glVertex2f (…); calls go here
glEnd (); Polygon is filled with the current drawing
color
Other Graphics Primitives
GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN
GL_QUADS, GL_QUAD_STRIP
Simple User Interaction with Mouse and Keyboard
Register callback functions:
• glutMouseFunc (myMouse);
• glutKeyboardFunc (myKeyboard);
Then write the function(s). NOTE that any drawing you do when you use these functions must be done IN the mouse or keyboard function (or in a function called from within the mouse or keyboard callback functions).
Example Mouse Function
void myMouse(int button, int state, int x, int y);
Button is one of GLUT_LEFT_BUTTON, GLUT_MIDDLE_BUTTON, or GLUT_RIGHT_BUTTON.
State is GLUT_UP or GLUT_DOWN. X and y are mouse position at the time of
the event.
Example Mouse Function (2)
The x value is the number of pixels from the left of the window.
The y value is the number of pixels down from the top of the window.
In order to see the effects of some activity of the mouse or keyboard, the mouse or keyboard handler must call either myDisplay() or glutPostRedisplay().
Code for an example myMouse() is in Fig. 2.40.
Polyline Control with Mouse
Example use:
Code for Mouse-controlled Polyline
Using Mouse Motion Functions
glutMotionFunc(myMovedMouse); // moved with button held down
glutPassiveMotionFunc(myMovedMouse); // moved with buttons up
myMovedMouse(int x, int y); x and y are the position of the mouse when the event occurred.
Code for drawing rubber rectangles using these functions is in Fig. 2.41.
Example Keyboard Function
void myKeyboard(unsigned char theKey, int mouseX, int mouseY){
GLint x = mouseX;
GLint y = screenHeight - mouseY; // flip y value
switch (theKey) {
case 'p': drawDot(x, y); break; // draw dot at mouse position
case 'E': exit(-1); // terminate the program
default: break; // do nothing
}
}
Example Keyboard Function (2) Parameters to the function will always be
(unsigned char key, int mouseX, int mouseY). The y coordinate needs to be flipped by
subtracting it from screenHeight. Body is a switch with cases to handle active
keys (key value is ASCII code). Remember to end each case with a break!
Using Menus
Both GLUT and GLUI make menus available.
GLUT menus are simple, and GLUI menus are more powerful.
We will build a single menu that will allow the user to change the color of a triangle, which is undulating back and forth as the application proceeds.
GLUT Menu Callback Function
int glutCreateMenu(myMenu); // returns menu ID
void myMenu(int num); // handles choice num
void glutAddMenuEntry(char* name, int value); // value used in the myMenu switch to handle the choice
void glutAttachMenu(int button); // one of GLUT_RIGHT_BUTTON, GLUT_MIDDLE_BUTTON, or GLUT_LEFT_BUTTON
• Usually GLUT_RIGHT_BUTTON
GLUT subMenus
Create a subMenu first, using menu commands, then add it to main menu.• A submenu pops up when a main menu item is selected.
glutAddSubMenu (char* name, int menuID); // menuID is the value returned by glutCreateMenu when the submenu was created
Complete code for a GLUT Menu application is in Fig. 2.44. (No submenus are used.)
GLUI Interfaces and Menus
GLUI Interfaces
An example program illustrating how to use GLUI interface options is available on book web site.
Most of the work has been done for you; you may cut and paste from the example programs in the GLUI distribution.
UNIT-IV
RENDERING
Polygon shading model
Flat shading - compute lighting once and assign the color to the whole (mesh) polygon
Flat shading
Only one vertex normal and material property is used to compute the color for the whole polygon
Benefit: fast to compute Used when:
• Polygon is small enough
• Light source is far away (why?)
• Eye is very far away (why?)
Mach Band Effect
Flat shading suffers from the "Mach band" effect: the human eye accentuates the intensity discontinuity at polygon boundaries
[Figure: side view of a polygonal surface, actual vs. perceived intensity]
Smooth shading
Fixes the Mach band effect – removes the edge discontinuity
Compute lighting for more points on each face
Flat shading Smooth shading
Smooth shading
Two popular methods: • Gouraud shading (used by OpenGL)
• Phong shading (better specular highlight, not in OpenGL)
Gouraud Shading
The smooth shading algorithm used in OpenGL: glShadeModel(GL_SMOOTH).
Lighting is calculated at each of the polygon vertices; colors are interpolated for interior pixels.
Gouraud Shading
Per-vertex lighting calculation: a normal is needed for each vertex. The per-vertex normal can be computed by averaging the adjacent face normals, e.g. for four faces n1..n4:
n = (n1 + n2 + n3 + n4) / 4.0
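The averaging step above can be sketched as follows (the `Vec3` struct and `vertexNormal` helper are illustrative names, not from the text; note the final renormalization, since the plain average of unit vectors is generally not unit length):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Average the face normals that share a vertex, then renormalize.
Vec3 vertexNormal(const Vec3* faceNormals, int count) {
    Vec3 n = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; i++) {
        n.x += faceNormals[i].x;
        n.y += faceNormals[i].y;
        n.z += faceNormals[i].z;
    }
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```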
Gouraud Shading
Compute vertex illumination (color) before the projection transformation
Shade interior pixels: color interpolation (normals are not needed)
Given vertex colors C1, C2, C3, for each scanline:
Ca = lerp(C1, C2)
Cb = lerp(C1, C3)
interior pixel color = lerp(Ca, Cb)
* lerp: linear interpolation
Gouraud Shading
Linear interpolation
Interpolate the triangle color: use the y distance to interpolate the two end points of the scanline, and the x distance to interpolate interior pixel colors.
For a pixel at distance a from v1 and b from v2 along the scanline (the nearer endpoint gets the larger weight):
x = b/(a+b) * v1 + a/(a+b) * v2
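The distance-weighted interpolation above can be sketched for one color channel (`lerpByDistance` is a hypothetical helper name):

```cpp
// Linearly interpolate one scalar (e.g. a color channel) between two edge
// values v1 and v2, given the pixel's distances a (to v1) and b (to v2).
// The closer endpoint receives the larger weight.
float lerpByDistance(float a, float b, float v1, float v2) {
    return (b / (a + b)) * v1 + (a / (a + b)) * v2;
}
```

At a = 0 the pixel sits on v1 and receives exactly v1's value; at a = b it receives the midpoint.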
Gouraud Shading Problem
Lighting in the polygon interior can be inaccurate
Phong Shading
Instead of interpolation, we calculate lighting for each pixel inside the polygon (per pixel lighting)
Normals are needed for all the pixels, but are not provided by the user. The Phong shading algorithm interpolates the vertex normals and computes lighting during rasterization (the normal must be mapped back to world or eye space, though).
Phong Shading
Normal interpolation
• Slow – not supported by OpenGL and most graphics hardware
Given vertex normals n1, n2, n3:
na = lerp(n1, n2)
nb = lerp(n1, n3)
pixel normal = lerp(na, nb)
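The normal interpolation step might look like the sketch below (`lerpNormal` is an illustrative name). Renormalizing matters: the straight-line blend of two unit vectors is generally shorter than unit length.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Interpolate two unit normals by parameter t in [0,1], then renormalize
// so the result can be used directly in the lighting equation.
Vec3 lerpNormal(Vec3 n1, Vec3 n2, float t) {
    Vec3 n = { n1.x + t * (n2.x - n1.x),
               n1.y + t * (n2.y - n1.y),
               n1.z + t * (n2.z - n1.z) };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```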
UNIT-V
FRACTALS
Fractals
Fractals are geometric objects. Many real-world objects like ferns are
shaped like fractals. Fractals are formed by iterations. Fractals are self-similar. In computer graphics, we use fractal
functions to create complex objects.
Koch Fractals (Snowflakes)
Iteration 0 Iteration 1 Iteration 2 Iteration 3
[Figure: Koch generator – a unit segment is replaced by four segments of length 1/3]
Fractal Tree
Iteration 1 Iteration 2 Iteration 3 Iteration 4 Iteration 5
Generator
Fractal Fern
Generator
Iteration 0 Iteration 1 Iteration 2 Iteration 3
Add Some Randomness
The fractals we’ve produced so far seem to be very regular and “artificial”.
To create some realism and variability, simply perturb the angles slightly from time to time, based on a random number generator.
For example, you can curve some of the ferns to one side.
For example, you can also vary the lengths of the branches and the branching factor.
Terrain (Random Mid-point Displacement)
Given the heights of two end-points, generate a height at the mid-point.
Suppose that the two end-points are a and b. Suppose the height is in the y direction, such that the height at a is y(a), and the height at b is y(b).
Then, the height at the mid-point will be: ymid = (y(a)+y(b))/2 + r, where
• r is the random offset
The random offset is generated as r = s·rg·|b-a|, where
• s is a user-selected "roughness" factor, and
• rg is a Gaussian random variable with mean 0 and variance 1
How to generate a random number with Gaussian (or normal) probability distribution
// Given random numbers x1 and x2 uniformly distributed on [-1, 1],
// generate numbers y1 and y2 with a normal distribution centered at 0.0
// and with standard deviation 1.0 (polar Box-Muller method).
void Gaussian(float &y1, float &y2)
{
    float x1, x2, w;
    do {
        x1 = 2.0 * 0.001 * (float)(rand() % 1000) - 1.0;
        x2 = 2.0 * 0.001 * (float)(rand() % 1000) - 1.0;
        w = x1 * x1 + x2 * x2;
    } while (w >= 1.0);
    w = sqrt((-2.0 * log(w)) / w);
    y1 = x1 * w;
    y2 = x2 * w;
}
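Putting the pieces together, one midpoint-displacement step could be sketched as below (assuming the same polar Box-Muller sampling as Gaussian() above; `midpointHeight` and `gaussianSample` are illustrative names):

```cpp
#include <cstdlib>
#include <cmath>

// One sample rg ~ N(0, 1), polar Box-Muller (one of the two outputs).
float gaussianSample() {
    float x1, x2, w;
    do {
        x1 = 2.0f * 0.001f * (float)(rand() % 1000) - 1.0f;
        x2 = 2.0f * 0.001f * (float)(rand() % 1000) - 1.0f;
        w = x1 * x1 + x2 * x2;
    } while (w >= 1.0f || w == 0.0f);
    return x1 * std::sqrt((-2.0f * std::log(w)) / w);
}

// ymid = (ya + yb)/2 + r, with r = s * rg * |b - a|.
float midpointHeight(float a, float b, float ya, float yb, float s) {
    float r = s * gaussianSample() * std::fabs(b - a);
    return 0.5f * (ya + yb) + r;
}
```

With roughness s = 0 the offset vanishes and the midpoint is the plain average; larger s gives a more jagged profile.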
Procedural Terrain Example
Building a more realistic terrain
Notice that in the real world, valleys and mountains have different shapes.
If we have the same terrain-generation algorithm for both mountains and valleys, it will result in unrealistic, alien-looking landscapes.
Therefore, use different parameters for valleys and mountains.
Also, can manually create ridges, cliffs, and other geographical features, and then use fractals to create detail roughness.
Fractals
Infinite detail at every point; self-similarity between parts and overall features of the object.
Zoom into a Euclidean shape:
• the zoomed shape shows more detail
• eventually it smooths out
Zoom in on a fractal:
• see more detail
• it does not smooth out
Model: terrain, clouds, water, trees, plants, feathers, fur, patterns.
General equation: P1 = F(P0), P2 = F(P1), P3 = F(P2), …
• P3 = F(F(F(P0)))
Self similar fractals
Parts are scaled-down versions of the entire object
• use the same scaling on all subparts, or
• use different scaling factors for subparts
Statistically self-similar
• apply random variation to the subparts
• trees, shrubs, other vegetation
Fractal types
Statistically self-affine
• random variations
• Sx ≠ Sy ≠ Sz
• terrain, water, clouds
Invariant fractal sets
• nonlinear transformations
• self-squaring fractals
• Julia-Fatou set: squaring function in complex space
• Mandelbrot set: squaring function in complex space
• self-inverse fractals: inversion procedures
Julia-Fatou and Mandelbrot
Iterate x => x² + c
• x = a + bi, a complex number
Modulus: sqrt(a² + b²)
• if modulus < 1, squaring drives it toward 0
• if modulus > 1, squaring drives it toward infinity
• if modulus = 1:
• some points fall to zero
• some fall to infinity
• some do neither
• The boundary between the numbers which fall to zero and those which fall to infinity is the Julia-Fatou set
Foley/vanDam Computer Graphics-Principles and Practices, 2nd edition
Julia-Fatou
Julia-Fatou and Mandelbrot, continued
The shape of the Julia-Fatou set depends on c. To get the Mandelbrot set – the set of non-diverging points:
• Correct method
• compute the Julia sets for all possible c
• color the point c black when the set is connected and white when it is not connected
• Approximate method
• for each value of c, start with the complex number x = 0 + 0i
• apply x => x² + c
• repeat a finite number of times (say 1000)
• if after the iterations x is outside a disk defined by modulus > 100, color the point c white; otherwise color it black
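The approximate method can be sketched as an escape-time test (`inMandelbrot` is an illustrative name; the 1000-iteration and modulus-100 cutoffs follow the text):

```cpp
#include <complex>

// Approximate Mandelbrot membership: iterate x -> x^2 + c from x = 0
// a bounded number of times; treat c as inside (black) if the orbit
// never leaves the disk of radius 100.
bool inMandelbrot(std::complex<double> c, int maxIter = 1000) {
    std::complex<double> x(0.0, 0.0);
    for (int i = 0; i < maxIter; i++) {
        x = x * x + c;
        if (std::abs(x) > 100.0) return false; // diverged: color white
    }
    return true; // stayed bounded: color black
}
```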
Foley/vanDam Computer Graphics-Principles and Practices, 2nd edition
Constructing a deterministic self-similar fractal
Initiator• Given geometric shape
Generator• Pattern which replaces subparts of initiator
Koch Curve
[Figure: initiator, generator, and first iteration of the Koch curve]
Fractal dimension
D = fractal dimension
• amount of variation in the structure
• a measure of the roughness or fragmentation of the object
• small d – less jagged
• large d – more jagged
Self-similar objects satisfy n·s^d = 1 (some books write this as n·s^(-d) = 1), where
• s = scaling factor
• n = number of subparts in the subdivision
• d = ln(n)/ln(1/s)
• [d = ln(n)/ln(s) holds if s is instead taken as the number of segments rather than how much the main segment was reduced; i.e., for a line divided into 3 segments, instead of saying each piece is 1/3, say there are 3 segments – notice that 1/(1/3) = 3]
If there are different scaling factors s_k:
• Σ (k = 1..n) s_k^d = 1
Figuring out scaling factors (I prefer n·s^(-d) = 1, i.e. d = ln(n)/ln(s)):
Dimension is a ratio based on (new size)/(old size)
• divide a line into n identical segments: n = s
• divide the lines of a square into small squares by dividing each line into n identical segments: n = s² small squares
• divide a cube: get n = s³ small cubes
Koch's snowflake
• after division there are 4 segments: n = 4 (new segments)
• s = 3 (each old segment is divided into 3)
• fractal dimension: D = ln 4 / ln 3 = 1.262
For your reference, the book's method:
• n = 4 (number of new segments)
• s = 1/3 (segments reduced by 1/3)
• d = ln 4 / ln(1/(1/3))
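The dimension formula d = ln(n)/ln(s) is easy to check numerically against the examples in this unit (illustrative sketch):

```cpp
#include <cmath>

// Fractal dimension of a self-similar object built from n copies,
// each a factor-of-s reduction of the whole (s = 3 means "1/3 size").
double fractalDimension(double n, double s) {
    return std::log(n) / std::log(s);
}
```

A plain line split into 3 thirds gives d = ln 3 / ln 3 = 1, as expected for a non-fractal object.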
Sierpinski gasket Fractal Dimension
Divide each side by 2
• this makes 4 triangles; we keep 3
• therefore n = 3 (we get 3 new triangles from 1 old triangle)
• s = 2 (2 new segments from one old segment)
Fractal dimension: D = ln(3)/ln(2) = 1.585
Cube Fractal Dimension
Apply the fractal algorithm:
• divide each side by 3
• push out the middle face of each cube
• push out the center of the cube
What is the fractal dimension?
• we have 20 cubes where we used to have 1: n = 20
• we have divided each side by 3: s = 3
• fractal dimension: ln(20)/ln(3) = 2.727
Image from Angel book
Language Based Models of generating images
Typical alphabet: {A, B, [, ]}
Rules:
• A => AA
• B => A[B]AA[B]
Starting basis = B
Generate words
• a word represents a sequence of segments in a graph structure
• branch with brackets
• interesting, but I want a tree
B → A[B]AA[B] → AA[A[B]AA[B]]AAAA[A[B]AA[B]] → …
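One rewriting pass of these rules can be sketched as a string substitution (`rewrite` is an illustrative name); repeated application generates the derivation shown above:

```cpp
#include <string>

// One L-system rewriting pass: A -> AA, B -> A[B]AA[B];
// brackets (and any other symbols) are copied through unchanged.
std::string rewrite(const std::string& word) {
    std::string out;
    for (char ch : word) {
        if (ch == 'A')      out += "AA";
        else if (ch == 'B') out += "A[B]AA[B]";
        else                out += ch;
    }
    return out;
}
```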
Language Based Models of generating images con’d
Modify the alphabet: {A, B, [, ], (, )}
Rules:
• A => AA
• B => A[B]AA(B)
• [ ] = left branch, ( ) = right branch
Starting basis = B
Generate words
• a word represents a sequence of segments in a graph structure
• branch with brackets
B → A[B]AA(B) → AA[A[B]AA(B)]AAAA(A[B]AA(B)) → …
Language Based models have no inherent geometry
A grammar-based model requires
• a grammar
• a geometric interpretation
Generating an object from the word is a separate process. Examples:
• branches on the tree drawn at upward angles
• segments of the tree drawn at successively smaller lengths – the more it branches, the smaller the last branch is
• flowers or leaves drawn at terminal nodes
Grammar and Geometry
Change branch size according to depth of graph
Foley/vanDam Computer Graphics-Principles and Practices, 2nd edition
Particle Systems
A system is defined by a collection of particles that evolve over time
• particles have fluid-like properties: flowing, billowing, spattering, expanding, imploding, exploding
• the basic particle can be any shape: sphere, box, ellipsoid, etc.
Apply probabilistic rules to the particles
• generate new particles
• change attributes according to age
• what color is the particle when detected?
• what shape is the particle when detected?
• transparency over time?
• particles die (disappear from the system)
Movement
• deterministic or stochastic laws of motion
• kinematic, or dynamic with forces such as gravity
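A minimal particle update under one deterministic law of motion (gravity) might look like the sketch below; the struct fields and helper names are illustrative, not from the text:

```cpp
// One Euler step for a particle under gravity. Particles whose age
// exceeds their lifetime "die" and are removed by the caller.
struct Particle {
    float x, y;      // position
    float vx, vy;    // velocity
    float age, life; // seconds alive, total lifetime
};

void updateParticle(Particle& p, float dt, float gravity = -9.8f) {
    p.vy  += gravity * dt; // deterministic law of motion
    p.x   += p.vx * dt;
    p.y   += p.vy * dt;
    p.age += dt;
}

bool isDead(const Particle& p) { return p.age >= p.life; }
```

Stochastic rules (random spawn positions, jittered velocities) would be layered on top of this deterministic core.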
Particle Systems modeling
Model: fire, fog, smoke, fireworks, trees, grass, waterfall, water spray.
Grass
• model clumps by setting up trajectory paths for the particles
Waterfall
• particles fall from a fixed elevation
• deflected by obstacles as they splash to the ground
• e.g. drop, hit rock, finish in pool; or drop, go to the bottom of the pool, float back up
Physically based modeling
Non-rigid objects
• rope, cloth, soft rubber ball, jello
Describe behavior in terms of external and internal forces
• approximate the object with a network of point nodes connected by flexible connections
• example: springs with spring constant k
• homogeneous object: all k's equal
Hooke's Law
• Fs = -k x
• x = displacement, Fs = restoring force on the spring
Could also model with putty (doesn't spring back), or with an elastic material
• minimize the strain energy
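Hooke's law and one explicit integration step for a unit-mass node can be sketched as follows (illustrative names, not from the text):

```cpp
// Hooke's law for one spring in the point-node network:
// the restoring force opposes the displacement x, Fs = -k x.
float springForce(float k, float displacement) {
    return -k * displacement;
}

// One explicit Euler step for a unit-mass node on a spring (a = F/m, m = 1).
void stepNode(float k, float& x, float& v, float dt) {
    v += springForce(k, x) * dt;
    x += v * dt;
}
```

Repeating stepNode makes the node oscillate about its rest position, which is the behavior the spring network approximates for the non-rigid object.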
kk
kk
"Turtle Graphics": the turtle can
• F = move forward a unit
• L = turn left
• R = turn right
Stipulate the turtle directions and the angle of the turns.
Equilateral triangle
• e.g. angle = 120
• FRFRFR
What if we change the angle to 60 degrees?
• F => FLFRRFLF
• basis: F
• gives the Koch curve (snowflake)
• example taken from the Angel book
Using turtle graphics for trees
Use push ([) and pop (]) for the side branches.
F => F[RF]F[LF]F, angle = 27. Note: spaces are for readability ONLY.
F → F[RF]F[LF]F → F[RF]F[LF]F [RF[RF]F[LF]F] F[RF]F[LF]F [LF[RF]F[LF]F] F[RF]F[LF]F
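A minimal turtle interpreter for F/L/R can be sketched as below (bracket push/pop is omitted; in a full version [ and ] would save and restore the turtle state on a stack). Running the triangle program FRFRFR with angle 120 returns the turtle to its starting point:

```cpp
#include <cmath>
#include <string>

// Minimal turtle: F moves one unit forward along the current heading,
// L/R turn by a fixed angle (degrees); other symbols are ignored here.
struct Turtle { double x = 0.0, y = 0.0, heading = 0.0; };

void run(Turtle& t, const std::string& prog, double angle) {
    const double DEG = 3.14159265358979323846 / 180.0;
    for (char ch : prog) {
        if (ch == 'F') {
            t.x += std::cos(t.heading * DEG);
            t.y += std::sin(t.heading * DEG);
        } else if (ch == 'L') t.heading += angle;
        else if (ch == 'R')   t.heading -= angle;
    }
}
```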