
Source: Module 1 Introduction to Computer Graphics, bcagalgotiasuniversity.weebly.com/uploads/2/6/4/4/26443327/vip.pdf

Computer Graphics Dept of Computer Science & Engg, VJCET

Module 1

Introduction to Computer Graphics

1.1) Introduction

Today there are very few aspects of our lives not affected by computers.

Practically every cash or monetary transaction that takes place daily involves a computer.

In many cases, the same is true of computer graphics. Whether you see them on

television, in newspapers, in weather reports or while at the doctor’s surgery, computer

images are all around you. “A picture is worth a thousand words” is a well-known saying

and highlights the advantages and benefits of the visual presentation of our data. We are

able to obtain a comprehensive overall view of our data and also study features and areas

of particular interest.

A well-chosen graph is able to transform a complex table of numbers into

meaningful results. You know that such graphs are used to illustrate papers, reports and

thesis, as well as providing the basis for presentation material in the form of slides and

overhead transparencies. A range of tools and facilities are available to enable users to

visualize their data, and this document provides a brief summary and overview.

Computer graphics can be used in many disciplines. Charting, Presentations, Drawing,

Painting and Design, Image Processing and Scientific Visualization are some among

them.

In particular, we will emphasize the following

a) Basic concepts of Computer Graphics

b) Different types of Computer Graphics

c) Origin of Computer Graphics

d) Working of an interactive graphics display

e) Importance of speed in displaying pictures

f) The change in size and orientation of pictures


g) Applications of Computer Graphics

1.2) Basic concepts of Computer Graphics

I hope all of you are fond of video games and may be good at playing them. Have you seen the game of ping-pong? It is a game played by two people with a pair of video game controllers and a home television set. When the game is switched on, a small bright spot, representing a ball, is seen bouncing to and fro across the screen. Each player uses his video game controller to position a 'paddle' to bounce the ball back to his opponent. The player who hits the ball past his opponent wins a point, and the one who gains 15 points wins the game. Now how was this video game created? It was done with the aid of Computer Graphics. Video games represent a major use of computer graphics in the home. Computer graphics helps to create and manipulate pictures with the aid of computers.

Computer graphics is concerned with all aspects of producing images using a computer. It deals with the pictorial synthesis of real or imaginary objects from their computer-based models.

1.3) Different types of Computer Graphics

Computer Graphics can be broadly divided into two

a) Non Interactive Computer Graphics

b) Interactive Computer Graphics

Non Interactive Computer Graphics: In non-interactive computer graphics, otherwise known as passive computer graphics, the observer has no control over the image.

Familiar examples of this type of computer graphics include the titles shown on TV and

other forms of computer art.

Interactive Computer Graphics: Interactive computer graphics involves two-way communication between computer and user. Here the observer is given some control over the image by providing him with an input device, for example the video game controller


of the ping pong game. This helps him to signal his request to the computer. The

computer on receiving signals from the input device can modify the displayed picture

appropriately. To the user it appears that the picture is changing instantaneously in

response to his commands. He can give a series of commands, each one generating a

graphical response from the computer. In this way he maintains a conversation, or

dialogue, with the computer.

Interactive computer graphics affects our lives in a number of indirect ways. For

example, it helps to train the pilots of our airplanes. A flight simulator allows pilots to be trained not in a real aircraft but on the ground, at the controls of the simulator. The flight simulator is a mock-up of an aircraft flight deck, containing all the usual controls and surrounded by screens onto which computer-generated views of the terrain visible on take-off and landing are projected. Flight simulators have many advantages over real aircraft for training purposes, including

fuel savings, safety, and the ability to familiarize the trainee with a large number of the

world’s airports.

1.4) Origin of Computer Graphics

Years of research and development were needed to achieve the goals in the field of computer graphics. In 1950 the first computer-driven display was used to generate only simple pictures. This display made use of a cathode ray tube similar to the one used in television sets. During the 1950s interactive computer graphics made little progress because the computers of that period were so unsuited to interactive use. These computers were used to perform only lengthy calculations.

The single event that did the most to promote interactive computer graphics as an important new field was the publication in 1962 of a brilliant thesis by Ivan E. Sutherland. His thesis, entitled 'Sketchpad: A Man-Machine Graphical Communication System', proved to many readers that interactive computer graphics was a viable, useful, and exciting field of research. By the mid-1960s large computer graphics research projects were undertaken at MIT, Bell Telephone Labs and General Motors. Thus the


golden age of computer graphics began. In the 1970s these research efforts began to bear fruit. The instant appeal of computer graphics to users of all ages has helped it to spread into many applications throughout the world.

1.5) Working of an interactive graphics display

An interactive graphics display consists of four components

a) A display controller

b) A digital memory or frame buffer

c) A television monitor

d) A video controller

The display controller gets the inputs and commands from the user and determines the

image to be displayed on the monitor. The display controller will divide the image into a

number of pixels. This image which is to be displayed is stored in the frame buffer. The

image will be stored as a matrix of intensity values. The image will be displayed onto the

television monitor and the video controller will act as a simple interface that passes the

contents of the frame buffer to the monitor. The image must be repeatedly passed to the

monitor, 30 or more times a second. This helps you to maintain a steady picture on the

screen.

In the frame buffer the image is stored as a pattern of binary digital numbers. These binary digital numbers represent a rectangular array of picture elements, or pixels (a picture can be divided into a number of picture elements or pixels; you will learn more about pixels in the coming lectures). So corresponding to each pixel you have a binary digital number in the frame buffer. If your image is a black and white image you can represent the black pixels by 0s and white pixels by 1s. Therefore a 16 × 16 array of black and white pixels could be represented by the binary values stored in 32 8-bit bytes. Now what happens to this data?
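As a small sketch (in Python, not part of the original text), the 16 × 16 black-and-white image above can be packed into exactly 32 bytes, one bit per pixel:

```python
# Pack a 16 x 16 black-and-white image (0 = black, 1 = white)
# into 32 bytes: one bit per pixel, 8 pixels per byte.
def pack_bitmap(pixels):
    """pixels: a list of 16*16 = 256 values, each 0 or 1, row by row."""
    assert len(pixels) == 16 * 16
    packed = bytearray()
    for i in range(0, 256, 8):
        byte = 0
        for bit in pixels[i:i + 8]:
            byte = (byte << 1) | bit   # shift earlier bits left, append this one
        packed.append(byte)
    return bytes(packed)               # 32 bytes in total

# An all-white image packs to 32 bytes, every bit set.
frame = pack_bitmap([1] * 256)
```

This is only an illustration of the storage arithmetic (256 bits = 32 bytes); a real frame buffer is hardware memory, not a Python object.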


The video controller simply reads each successive byte of data from the frame buffer and

converts its 0s and 1s into the corresponding video signal. This signal is then fed into the

TV monitor, producing a black and white pattern on the screen. The video controller

repeats this operation 30 times a second in order to maintain a steady picture on the TV

screen.

Now what should be done to change or modify this image?

All we need to do is modify the frame buffer's contents. Set the frame buffer with a new set

of values so that it represents the new image. In this way we can achieve effects like a

rotating wheel and a wheel that grows and shrinks.

The figure given below gives an idea about the graphics display system.

1.6) Importance of speed in displaying pictures

Why is speed so important in displaying pictures?


One reason behind this is that any display based on the CRT must be refreshed by repeatedly passing the image to the monitor. The image must be transmitted to the display monitor point by point. If the image is not transmitted at least 25 times per second, it will start to flicker in an unpleasant manner. If each picture element is transmitted slowly, then fewer elements can be transmitted in each refresh and less information will be displayed.

The second reason is that the response of a computer program to the actions by its user

should be very fast i.e. the response time should be very small. The speed of response

depends on two factors.

1. The rate at which the computer can generate a fresh image in response to each

action by its users.

2. The rate at which the image is transmitted to the display monitor.

Generally speaking, slow response always makes an interactive graphics program more difficult to operate. Perhaps that is the reason why research efforts are made to improve the speed of interactive response.
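A back-of-the-envelope calculation (a sketch; the 640 × 480 raster size is an assumption, not from the text) shows why transmission speed matters — even a modest raster refreshed 30 times a second needs millions of pixels delivered every second:

```python
# How many pixels per second must the video controller deliver
# to keep the whole raster refreshed?
width, height = 640, 480      # assumed raster resolution
refresh_rate = 30             # refreshes per second, as in the text

pixels_per_frame = width * height                     # 307,200 pixels
pixels_per_second = pixels_per_frame * refresh_rate   # 9,216,000 per second
```

At 25 refreshes per second (the flicker threshold mentioned above) the budget is smaller but still in the millions, which is why a slow transmission rate forces a coarser image.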

1.7) The change in size and orientation of pictures

How are pictures made to rotate? How are pictures made to shrink? How are pictures

made to grow? Pictures can be made to change the orientation and size. How are these

changes possible?

These transformations or changes are based on standard mathematical techniques:

coordinate geometry, trigonometry and matrix methods. These techniques tell us how to

compute the new coordinates after applying the transformations. We will study more

about transformations of objects in the coming lectures.
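As a small illustration (a sketch, not part of the original text), rotation and scaling of a point use the standard trigonometric formulas; written as matrices, the same idea underlies all of these transformations:

```python
import math

def rotate(x, y, angle_deg):
    """Rotate point (x, y) about the origin by angle_deg degrees,
    counter-clockwise, using the standard formulas:
        x' = x*cos(a) - y*sin(a)
        y' = x*sin(a) + y*cos(a)
    """
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def scale(x, y, sx, sy):
    """Scale point (x, y) by factors sx and sy (grow or shrink)."""
    return (x * sx, y * sy)

# Rotating (1, 0) by 90 degrees gives (0, 1), up to rounding error.
nx, ny = rotate(1, 0, 90)
```

Applying such a function to every vertex of a picture, then redrawing the frame buffer, is what makes a picture appear to rotate, grow or shrink.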

1.8) Applications of Computer Graphics

The following are also considered graphics applications:


Paint programs: Allow you to create rough freehand drawings. The images are stored as bit maps and can easily be edited. A paint program enables you to draw pictures on the display screen which are represented as bit maps (bit-mapped graphics). In contrast, draw programs use vector graphics (object-oriented images), which scale better.

Most paint programs provide the tools shown below in the form of icons. By selecting an icon, you can perform functions associated with the tool. In addition to these tools, paint programs also provide easy ways to draw common shapes such as straight lines, rectangles, circles, and ovals.

Sophisticated paint applications are often called image editing programs. These

applications support many of the features of draw programs, such as the ability to work

with objects. Each object, however, is represented as a bit map rather than as a vector

image.

Illustration/design programs: Support more advanced features than paint programs,

particularly for drawing curved lines. The images are usually stored in vector-based

formats. Illustration/design programs are often called draw programs.

Presentation graphics software: Lets you create bar charts, pie charts, graphics, and other

types of images for slide shows and reports. The charts can be based on data imported

from spreadsheet applications.

A type of business software that enables users to create highly stylized images for slide

shows and reports. The software includes functions for creating various types of charts

and graphs and for inserting text in a variety of fonts. Most systems enable you to import

data from a spreadsheet application to create the charts and graphs. Presentation graphics

is often called business graphics.


Animation software: Enables you to chain and sequence a series of images to simulate

movement. Each image is like a frame in a movie. It can be defined as a simulation of

movement created by displaying a series of pictures, or frames. A cartoon on television is

one example of animation. Animation on computers is one of the chief ingredients of

multimedia presentations. There are many software applications that enable you to create

animations that you can display on a computer monitor.

There is a difference between animation and video. Whereas video takes continuous

motion and breaks it up into discrete frames, animation starts with independent pictures

and puts them together to form the illusion of continuous motion.

CAD software: Enables architects and engineers to draft designs. It is the acronym for

computer-aided design. A CAD system is a combination of hardware and software that

enables engineers and architects to design everything from furniture to airplanes. In

addition to the software, CAD systems require a high-quality graphics monitor; a mouse,

light pen, or digitizing tablet for drawing; and a special printer or plotter for printing

design specifications.

CAD systems allow an engineer to view a design from any angle with the push of a

button and to zoom in or out for close-ups and long-distance views. In addition, the

computer keeps track of design dependencies so that when the engineer changes one

value, all other values that depend on it are automatically changed accordingly.

Until the mid 1980s, all CAD systems were specially constructed computers. Now, you

can buy CAD software that runs on general-purpose workstations and personal

computers.

Desktop publishing: Provides a full set of word-processing features as well as fine control

over placement of text and graphics, so that you can create newsletters, advertisements,

books, and other types of documents. It means by using a personal computer or


workstation, high-quality printed documents can be produced. A desktop publishing system allows you to use different typefaces, specify various margins and justifications, and embed illustrations and graphs directly into the text. The most powerful desktop publishing systems enable you to create illustrations, while less powerful systems let you insert illustrations created by other programs.

As word-processing programs become more and more powerful, the line separating such

programs from desktop publishing systems is becoming blurred. In general, though,

desktop publishing applications give you more control over typographical characteristics,

such as kerning, and provide more support for full-color output.

A particularly important feature of desktop publishing systems is that they enable you to

see on the display screen exactly how the document will appear when printed. Systems

that support this feature are called WYSIWYGs (what you see is what you get).

Until recently, hardware costs made desktop publishing systems impractical for most

uses. But as the prices of personal computers and printers have fallen, desktop publishing

systems have become increasingly popular for producing newsletters, brochures, books,

and other documents that formerly required a typesetter.

Once you have produced a document with a desktop publishing system, you can output it

directly to a printer or you can produce a PostScript file which you can then take to a

service bureau. The service bureau has special machines that convert the PostScript file to

film, which can then be used to make plates for offset printing. Offset printing produces

higher-quality documents, especially if color is used, but is generally more expensive

than laser printing.

In general, applications that support graphics require a powerful CPU and a large

amount of memory. Many graphics applications—for example, computer animation

systems—require more computing power than is available on personal computers and


will run only on powerful workstations or specially designed graphics computers. This is

true of all three-dimensional computer graphics applications.

In addition to the CPU and memory, graphics software requires a graphics monitor and

support for one of the many graphics standards. Most PC programs, for instance, require

VGA graphics. If your computer does not have built-in support for a specific graphics

system, you can insert a video adapter card.

The quality of most graphics devices is determined by their resolution—how many pixels

per square inch they can represent—and their color capabilities.

1.9) Pixel

A pixel (short for picture element, using the common abbreviation "pix" for

"picture") is one of the many tiny dots that make up the representation of a picture in a

computer's memory. Each such information element is not really a dot, nor a square, but

an abstract sample. With care, pixels in an image can be reproduced at any size without

the appearance of visible dots or squares; but in many contexts, they are reproduced as

dots or squares and can be visibly distinct when not fine enough. The intensity of each

pixel is variable; in color systems, each pixel has typically three or four dimensions of

variability such as red, green and blue, or cyan, magenta, yellow and black.
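For instance (a sketch, assuming the common 8-bit-per-channel convention, which the text does not specify), a colour pixel can be modelled as a triple of intensities:

```python
# A colour pixel with three dimensions of variability (red, green, blue),
# each an 8-bit intensity from 0 (none) to 255 (full).
red   = (255, 0, 0)
white = (255, 255, 255)
black = (0, 0, 0)

# A grey pixel has equal red, green and blue components.
def is_grey(pixel):
    r, g, b = pixel
    return r == g == b
```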


1.10) Pixel resolution


The term resolution is often used as a pixel count in digital imaging, even though

American, Japanese, and international standards specify that it should not be so used, at

least in the digital camera field. An image of N pixels high by M pixels wide can have

any resolution less than N lines per picture height, or N TV lines. But when the pixel

counts are referred to as resolution, the convention is to describe the pixel resolution with

the set of two positive integer numbers, where the first number is the number of pixel

columns (width) and the second is the number of pixel rows (height), for example as 640

by 480. Another popular convention is to cite resolution as the total number of pixels in

the image, typically given as number of megapixels, which can be calculated by

multiplying pixel columns by pixel rows and dividing by one million. Other conventions

include describing pixels per length unit or pixels per area unit, such as pixels per inch or

per square inch. None of these pixel resolutions are true resolutions, but they are widely

referred to as such; they serve as upper bounds on image resolution.
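The megapixel convention described above can be sketched as a one-line calculation:

```python
# Megapixels = (pixel columns * pixel rows) / 1,000,000.
def megapixels(columns, rows):
    return columns * rows / 1_000_000

# A 640 by 480 image is about 0.3 megapixels;
# a 3000 by 2000 image is 6.0 megapixels.
mp = megapixels(3000, 2000)
```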

Below is an illustration of how the same image might appear at different pixel

resolutions, if the pixels were poorly rendered as sharp squares (normally, a smooth

image reconstruction from pixels would be preferred, but for illustration of pixels, the

sharp squares make the point better).

Resolution in various media

DVDs have roughly 500 lines (or TV lines, or lines per picture height).

High definition television has 1,080 lines.


35mm movie film is scanned for release on DVD at 1080 or 2000 lines as

of 2005.

35mm optical camera negative motion picture film can resolve up to 6,000

lines.

35mm projection positive motion picture film has about 2,000 lines which

results from the analogue printing from the camera negative of an

interpositive, and possibly an internegative, then a projection positive.

Newer films are scanned at 4,000 lines, called 4K scanning, anticipating

any advances in digital projection or higher resolution in flat panel

display.

1.11) Image resolution

Image resolution describes the detail an image holds. The term applies equally to

digital images, film images, and other types of images. Higher resolution means more

image detail. Image resolution can be measured in various ways. Basically, resolution

quantifies how close lines can be to each other and still be visibly resolved. Resolution

units can be tied to physical sizes (e.g. lines per mm, lines per inch) or to the overall size

of a picture (lines per picture height, also known simply as lines, or TV lines).

Furthermore, line pairs are often used instead of lines. A line pair is a pair of adjacent

dark and light lines, while a line counts both dark lines and light lines. A resolution of 10

lines per mm means 5 dark lines alternating with 5 light lines, or 5 line pairs per mm.

Photographic lens and film resolution are most often quoted in line pairs per mm.

Image resolution on raster displays

A television or raster image display with 525 scan lines makes a picture with

somewhat less than 525 TV lines of resolution. The ratio of lines of resolution to the number of format lines is known as the Kell factor, after Raymond D. Kell, who worked out details of visual resolution in scanned systems at RCA in the 1930s.
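As a numeric sketch (the Kell factor value of roughly 0.7 is a commonly quoted figure, not stated in the text):

```python
# Effective resolution of a scanned display is the number of format
# (scan) lines multiplied by the Kell factor, commonly quoted as ~0.7.
def effective_tv_lines(scan_lines, kell_factor=0.7):
    return scan_lines * kell_factor

# A 525-line display therefore resolves somewhat fewer than 525 lines
# (about 367 with a Kell factor of 0.7).
lines = effective_tv_lines(525)
```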


1.12) Graphics Primitives and Attributes

Even the most complex computer-generated graphic images are produced by a

relatively small set of Graphics Primitives. The usual sets of basic primitives provided in

Graphics Packages are:

1. A single point.

2. A line with given end-points.

3. A polyline i.e. a line joining a sequence of points.

4. A filled polygonal area with given points as vertices.

5. Text.

There may be additional, but not essential, primitives such as rectangles, circles, curves

of various types, images etc.

Associated with each graphics primitive is a set of Attributes. The attributes of a primitive determine its appearance on the output device. For example, a line commonly has attributes such as colour, width and style (full, dotted, dashed etc.).
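A minimal sketch (in Python, with invented names; no real graphics package is implied) of the primitive/attribute split — the primitive says *what* to draw, its attributes say *how*:

```python
from dataclasses import dataclass, field

# Hypothetical types illustrating a line primitive and its attributes.
@dataclass
class LineAttributes:
    colour: str = "black"
    width: int = 1
    style: str = "full"      # "full", "dotted", "dashed", ...

@dataclass
class Line:
    x1: float
    y1: float
    x2: float
    y2: float
    attrs: LineAttributes = field(default_factory=LineAttributes)

# Same geometry, different appearance, depending only on the attributes.
dashed = Line(0, 0, 100, 50, LineAttributes(colour="red", style="dashed"))
plain = Line(0, 0, 100, 50)
```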

1.13) Frame Buffer

A frame buffer is a large, contiguous piece of computer memory. At a minimum there is one memory bit for each pixel in the raster; this amount of memory is called a bit plane. The picture is built up in the frame buffer one bit at a time. You know that a memory bit has only two states; therefore a single bit plane yields a black-and-white display. You know that a frame buffer is a digital device and the CRT is an analog device. Therefore, a conversion from a digital representation to an analog signal must take place when information is read from the frame buffer and displayed on the raster CRT graphics device. For this you can use a digital-to-analog converter (DAC). Each pixel in the frame buffer must be accessed and converted before it is visible on the raster CRT.


A single bit-plane black-and-white frame buffer raster CRT graphics device

N-bit colour Frame buffer

Color or gray scales are incorporated into a frame buffer raster graphics device by using additional bit planes. The intensity of each pixel on the CRT is controlled by a corresponding pixel location in each of the N bit planes. The binary value from each of the N bit planes is loaded into corresponding positions in a register. The resulting binary number is interpreted as an intensity level between 0 (dark) and 2^N − 1 (full intensity). This is converted into an analog voltage between 0 and the maximum voltage of the electron gun by the DAC. A total of 2^N intensity levels are possible. The figure given below illustrates a system with 3 bit planes for a total of 8 (2^3) intensity levels. Each bit plane requires the full complement of memory for a given raster resolution; e.g., a 3-bit-plane frame buffer for a 1024 × 1024 raster requires 3,145,728 (3 × 1024 × 1024) memory bits.
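The memory calculation at the end of the paragraph can be sketched as:

```python
# Memory (in bits) for an N-bit-plane frame buffer:
# each of the N planes stores one bit per pixel.
def frame_buffer_bits(n_planes, width, height):
    return n_planes * width * height

# The 3-bit-plane buffer for a 1024 x 1024 raster from the text:
bits = frame_buffer_bits(3, 1024, 1024)   # 3,145,728 bits
levels = 2 ** 3                           # 8 intensity levels
```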


An N-bit plane gray level frame buffer

An increase in the number of available intensity levels is achieved for a modest increase in required memory by using a lookup table. Upon reading the bit planes in the frame buffer, the resulting number is used as an index into the lookup table. The lookup table must contain 2^N entries. Each entry in the lookup table is W bits wide, and W may be greater than N. When this occurs, 2^W intensities are available, but only 2^N different intensities are available at one time. To get additional intensities, the lookup table must be changed.
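The lookup-table scheme can be sketched as follows (the evenly spread table contents are an assumption for illustration; any 2^N values out of the 2^W possible would do):

```python
# Sketch of an N-bit frame buffer driving a W-bit-wide lookup table:
# the frame buffer value (0 .. 2**N - 1) indexes a table of 2**N
# entries, each holding a W-bit intensity (0 .. 2**W - 1).
N, W = 3, 8    # 8 table entries; 256 representable intensities

# One possible table: spread the 8 displayable levels evenly
# across the 256 representable intensities.
lookup_table = [i * (2**W - 1) // (2**N - 1) for i in range(2**N)]

def displayed_intensity(frame_buffer_value):
    # Only 2**N different intensities are available at one time;
    # changing the table changes which ones.
    return lookup_table[frame_buffer_value]
```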


Because there are three primary colours, a simple color frame buffer is implemented with

three bit planes, one for each primary color. Each bit plane drives an individual color gun

for each of the three primary colors used in color video. These three primaries (red,

green, and blue) are combined at the CRT to yield eight colors.

Simple Color buffer

1.14) Display devices

The working of a Cathode Ray Tube

The primary output device in a graphics system is a video monitor. The operation

of most video monitors is based on the standard cathode ray tube (CRT) design.

The Cathode Ray Tube


Operation of an electron gun with an accelerating anode

Basic design of a magnetic deflection CRT


The electron gun emits a beam of electrons which are focused to a point on the screen

phosphor. The beam is positioned on the screen by a deflection system which operates in

the horizontal and vertical directions. The intensity of the beam is controlled by the

intensity signal on the control grid. When the phosphor is hit by the electron beam it

absorbs energy and jumps to a higher quantum-energy level. As it returns to its normal

level it emits visible light i.e. it phosphoresces. In the phosphors used in graphics devices

the persistence of the phosphorescence is typically 10-60 microseconds.

Before the human visual system can see a transient image it must be continually redrawn

(refreshed) at a rate higher than the critical fusion frequency of the human visual system.

To allow the human visual system to see a continuously refreshed image without flicker

the refresh rate has to be at least 60 cycles per second.

To allow continuous refreshing of an image there must be some stored representation of

the image from which the refresh system can obtain the graphical information required to

re-draw the image. This representation nowadays is invariably a set of values of


intensity/colour at each of a discrete set of points laid out in a rectangular array covering

the screen.

While it may seem a disadvantage to continually refresh the image there are some very

important advantages of such refresh type systems. For example it is possible to edit an

image by changing the stored representation between refresh cycles for what appears to

be instantaneous updating of the image. Compare this with some earlier systems in which

the only way to carry out an edit was to clear the whole screen and then redraw the whole

image. Also by changing the stored representation between refresh cycles animation is

possible.

1.14) Shadow Mask CRT

In a shadow mask CRT, tiny holes in a metal plate separate the colored phosphors

in the layer behind the front glass of the screen. The holes are placed in a manner

ensuring that electrons from each of the tube's three cathode guns reach only the

appropriately-colored phosphors on the display. All three beams pass through the same

holes in the mask, but the angle of approach is different for each gun. The spacing of the

holes, the spacing of the phosphors, and the placement of the guns is arranged so that for

example the blue gun only has an unobstructed path to blue phosphors. The red, green,

and blue phosphors for each pixel are generally arranged in a triangular shape (sometimes

called a "triad"). All early color televisions and the majority of computer monitors, past

and present, use shadow mask technology.

Traditionally, shadow masks have been made of materials that expand and contract with temperature variations, to the point of affecting performance. The energy the

shadow mask absorbs from the electron gun in normal operation causes it to heat up and

expand, which leads to blurred or discolored (see doming) images. The invar shadow

mask is composed of the nickel-iron alloy invar. Therefore it expands and contracts much

less than other materials in response to temperature changes. This property allows

displays made with this technology to provide a clearer, more accurate picture. It also


reduces the amount of long-term stress and damage to the shadow mask that can result

from repeated expand/contract cycles, thus increasing the display's life expectancy.

In other words, in a shadow mask CRT, before the stream of electrons produced by the CRT's cathode reaches the phosphor-coated faceplate, it encounters the shadow mask, a

sheet of metal etched with a pattern of holes. The mask is positioned in the glass funnel

of the CRT during manufacture and the phosphor is coated onto the screen so that

electrons coming from the red, green and blue gun positions only land on the appropriate

phosphor.

Stray electrons strike the shadow mask and are absorbed by it, generating a great deal of

heat, which in turn causes the metal to expand. To allow flatter CRTs to be made, the

metal most commonly used now for shadow masks is Invar, an alloy of iron and nickel.

The metal has a low coefficient of expansion and its name derives from the supposed

invariability of its dimensions when heat is applied. In reality, its dimensions are not

completely invariable and the build up of heat in a shadow mask can lead to a form of

distortion known as doming, where the centre of the mask bulges towards the faceplate

slightly.


An alternative to the shadow mask which is less prone to distortion, the aperture grille, was included as part of the design of Trinitron CRTs by Sony in 1968 and by Mitsubishi in its Diamondtron products in the early 1990s.

Importance of Refresh Rates

When choosing a monitor, one of the factors that the customer usually considers is the

refresh rate. A high refresh rate is important in providing a clear picture and avoiding eye

fatigue.

What is a refresh rate and why is a monitor's refresh rate important?

An image appears on screen when electron beams strike the surface of the screen

in a zig-zag pattern. A refresh rate is the number of times a screen is redrawn in one

second and is measured in Hertz (Hz). Therefore, a monitor with a refresh rate of 85 Hz

is redrawn 85 times per second. A monitor should be "flicker-free", meaning that the image is redrawn quickly enough that the user cannot detect flicker, a source of eye strain. Today, a refresh rate of 75 Hz or above is considered to be flicker-free.

How are refresh rates calculated?

Factors in determining refresh rates


A refresh rate is dependent upon a monitor's horizontal scanning frequency and the number of horizontal lines displayed. The horizontal scanning frequency is the number of times the electron beam sweeps one line and returns to the beginning of the next in one second. Horizontal scanning frequency is measured in kilohertz (kHz). A monitor with a horizontal scanning frequency of 110 kHz means 110,000 lines are scanned per second.

The number of horizontal lines on the screen depends upon the monitor's resolution. If a

monitor is set to a resolution of 1024 x 768 then there are 768 horizontal lines (1024 is

the number of pixels on one line). For a monitor set to a 1280 x 1024 resolution, there are

1024 horizontal lines.

Additionally, the time it takes for the electron beam to return to the top of the screen and

begin scanning again must be taken into account. This is roughly 5% of the time it takes

to scan the entire screen. Therefore, the total is multiplied by 0.95 to calculate the

maximum refresh rate.

How to calculate maximum refresh rates?

The following formula is used to calculate maximum refresh rates

fV = (fH / number of horizontal lines) × 0.95

fV = vertical scanning frequency (refresh rate)

fH = horizontal scanning frequency

Example: A monitor with a horizontal scanning frequency of 96 kHz at a resolution of

1280 x 1024 would have the following refresh rate based on the calculation above.

fV = (96,000 / 1024) × 0.95

fV = 89.06

This figure is rounded down to produce a maximum refresh rate of 89 Hz.


If the same monitor is set to a resolution of 1600 x 1200, then the equation will be as

follows:

fV = (96,000 / 1200) × 0.95

fV = 76

The maximum refresh rate at this resolution is 76 Hz.
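The calculation above can be checked with a small Python sketch (the helper name `max_refresh_rate` is ours, chosen for illustration; the 0.95 factor is the ~5% allowance for vertical retrace described in the text):

```python
def max_refresh_rate(h_freq_khz, vertical_lines, retrace_factor=0.95):
    """Maximum vertical refresh rate (Hz) from the horizontal scanning
    frequency (kHz) and the number of horizontal lines, allowing roughly
    5% of each frame for vertical retrace (hence the 0.95 factor)."""
    return (h_freq_khz * 1000 / vertical_lines) * retrace_factor

# The two worked examples from the text: a 96 kHz monitor
# at 1280 x 1024 and at 1600 x 1200.
print(int(max_refresh_rate(96, 1024)))  # 89 (89.06 rounded down)
print(int(max_refresh_rate(96, 1200)))  # 76
```

Note that only the number of *vertical* lines (the second figure of the resolution) enters the formula, since the horizontal scanning frequency already accounts for the pixels along each line.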

1.15) Raster Graphics Display Architecture

The figure below describes the architecture of a raster graphics system.

It includes:

1. Display controller

2. Refresh Buffer

3. Video Controller

4. CRT monitor

1.15.1) Raster Graphics

A raster graphics image, digital image, or bitmap, is a data file or structure

representing a generally rectangular grid of pixels, or points of color, on a computer


monitor, paper, or other display device. The color of each pixel is individually defined;

images in the RGB color space, for instance, often consist of colored pixels defined by

three bytes—one byte each for red, green and blue. Less colorful images require less

information per pixel; an image with only black and white pixels requires only a single

bit for each pixel. Raster graphics are distinguished from vector graphics in that vector

graphics represent an image through the use of geometric objects such as curves and

polygons.

A colored raster image (or pixmap) will usually have pixels with between one and eight

bits for each of the red, green, and blue components, though other color encodings are

also used, such as four- or eight-bit indexed representations that use vector quantization

on the (R, G, B) vectors. The green component sometimes has more bits than the other

two to allow for the human eye's greater discrimination in this component.

The quality of a raster image is determined by the total number of pixels (resolution), and

the amount of information in each pixel (often called color depth). For example, an image

that stores 24 bits of color information per pixel (the standard for all displays since

around 1995) can represent smoother degrees of shading than one that only stores 16 bits

per pixel, but not as smooth as one that stores 48 bits (technically, this would not be

discernible by the human eye). Likewise, an image sampled at 640 x 480 pixels (therefore

containing 307,200 pixels) will look rough and blocky compared to one sampled at 1280

x 1024 (1,310,720 pixels). Because it takes a large amount of data to store a high-quality

image, data compression techniques are often used to reduce this size for images stored

on disk. Some techniques sacrifice information, and therefore image quality, in order to

achieve a smaller file size. Compression techniques that lose information are referred to

as "lossy" compression.
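The trade-off between resolution, colour depth, and storage can be made concrete with a short calculation (the helper name `raw_image_bytes` is ours; real file formats add headers and usually apply compression on top of this raw size):

```python
def raw_image_bytes(width, height, bits_per_pixel):
    """Uncompressed size of a raster image: one value per pixel,
    at the given colour depth."""
    return width * height * bits_per_pixel // 8

# 640 x 480 (307,200 pixels) vs 1280 x 1024 (1,310,720 pixels), both at
# the common 24 bits (3 bytes) per pixel:
print(raw_image_bytes(640, 480, 24))    # 921600 bytes (~0.9 MB)
print(raw_image_bytes(1280, 1024, 24))  # 3932160 bytes (~3.75 MB)

# A 1-bit black-and-white image of the same 640 x 480 grid is far smaller:
print(raw_image_bytes(640, 480, 1))     # 38400 bytes
```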

Raster graphics cannot be scaled to a higher resolution without loss of apparent quality.

This is in contrast to vector graphics, which easily scale to the quality of the device on

which they are rendered. Raster graphics are more practical than vector graphics for


photographs and photo-realistic images, while vector graphics are often more practical

for typesetting or graphic design. Modern computer monitors typically display about 72

to 130 pixels per inch (PPI), and some modern consumer printers can resolve 2400 dots

per inch (DPI) or more; determining the most appropriate image resolution for a given

printer resolution can be difficult, since printed output may have a greater level of detail

than can be discerned on a monitor.

Raster example

To illustrate the matter further, here is the letter "J":

J

A close look at the letter will appear as such, where the "X" and "." characters represent a

grid of pixels:

.....X

.....X

.....X

.....X

.....X

.....X

X....X

X....X

.XXX..

A computer sees something more like this, where "." represents a zero and "X" represents

a one:

000001

000001

000001

000001


000001

000001

100001

100001

011100

Where a zero appears, the computer software instructs its video hardware to paint the

current background color. A one calls for the current foreground color. The software

makes a distinction between the colors of adjacent pixels, which together form an image.

This is the basic principle behind graphics editing on a computer.
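The mapping from stored bits to painted pixels can be sketched in Python; the bitmap rows and the `render` helper below are illustrative only:

```python
# Each row of the letter "J" as a 6-bit string:
# 0 = paint the background colour, 1 = paint the foreground colour.
BITMAP_J = [
    "000001", "000001", "000001", "000001", "000001",
    "000001", "100001", "100001", "011100",
]

def render(bitmap, fg="X", bg="."):
    """Paint the foreground character wherever a 1 appears and the
    background character wherever a 0 appears."""
    return "\n".join(
        "".join(fg if bit == "1" else bg for bit in row) for row in bitmap
    )

print(render(BITMAP_J))  # reproduces the X/. grid shown above
```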

Raster graphics technology was first patented by Texas Instruments in the 1970s, and is now ever-present.

Rasterization

Rasterization is the task of taking an image described in a vector graphics format (shapes)

and converting it into a raster image (pixels or dots) for output on a video display or

printer.

Raster Display System Components

• The screen is subdivided into a matrix of pixels (smallest

addressable units).


• Raster scanline -- A line of pixels along the screen

• Frame (refresh) buffer -- Block of memory used to store the screen

pattern

How it works

1. The DISPLAY PROCESSOR produces the raster image in the frame buffer from the graphics commands

2. The VIDEO CONTROLLER moves the beam row wise across the

pixels setting it on and off according to the content of the frame

buffer

3. The display must be refreshed to avoid flickering (raster image

redisplayed 30 to 60 times per second)


Raster Scan Display

Raster Scan methods have increasingly become the dominant technology since about

1975. These methods use the TV type raster scan. The growth in the use of such methods


has been dependent on rapidly decreasing memory prices and on the availability of cheap

scan generating hardware from the TV industry.

The screen is coated with discrete dots of phosphor, usually called pixels, laid out in a

rectangular array. The image is then determined by how each pixel is intensified. The

representation of the image used in servicing the refresh system is thus an area of

memory holding a value for each pixel. This memory area holding the image

representation is called the frame buffer.

The values in the frame buffer are held as a sequence of horizontal lines of pixel values

from the top of the screen down. The scan generator then moves the beam in a series of

horizontal lines with fly-back (non-intensified) between each line and between the end of

the frame and the beginning of the next frame. This is illustrated below.

Unlike a random-scan display, which is a line-drawing device, the refresh CRT of a raster display is a point-plotting device. Raster displays store the display primitives (lines, characters, shaded and patterned areas) in a refresh buffer. The refresh buffer (also called the frame buffer) stores the drawing primitives in terms of their component points (pixels).


This scan is synchronized with the access of the intensity values held in the frame buffer.

The maximum number of points that can be displayed without overlap by a system is

called the resolution and is quoted as the number of points per horizontal line versus the

number of horizontal lines. Typical resolutions are 640*480, 1024*768, and 1280*1024.

The maximum resolution is determined by the characteristics of the monitor and/or by the

memory capacity available for storing the frame buffer.

1.16) Random Scan Display

Random scan displays, often termed vector, stroke, or line-drawing displays, came first and are still used in some applications. Here the characters are also

made of sequences of strokes (or short lines). The electron gun of a CRT illuminates

straight lines in any order. The display processor repeatedly reads a variable 'display file'

defining a sequence of X,Y coordinate pairs and brightness or color values, and converts

these to voltages controlling the electron gun.


In a random scan display the electron beam is deflected from endpoint to endpoint. The order of deflection is dictated by the arbitrary order of the display commands. The display must be refreshed at regular intervals – a minimum of 30 frames per second for a flicker-free display.

Ideal line drawing Random Scan Display


Colour in Raster Graphics

To handle colour the phosphor is applied to the screen in small triads of red, green

and blue phosphor dots, each triad representing one pixel. The CRT then has an electron

gun for each of the additive primary colours red, green and blue. When the three primary

colour dots are intensified the human visual system combines the triad of primary colours

into a single perceived colour.

To ensure each of the three beams hits the correct element of the triad, a thin perforated metal sheet (the shadow mask) is placed behind the phosphor.

1.17) Raster Scan Systems

You know that an interactive raster graphics system typically employs several processing units. In addition to the central processing unit, or CPU, there is a special-purpose processor, called the video controller or display controller. It is used to control

the operation of the display device.


Architecture of a simple raster graphics system

Here the frame buffer can be anywhere in the system memory, and the video

controller accesses the frame buffer to refresh the screen.

Video Controller

In commonly used raster systems a fixed area is reserved for the frame buffer, and

the video controller is given direct access to the frame buffer memory. Frame buffer

locations and the corresponding screen positions are referenced in Cartesian coordinates.

Architecture of a raster system with a fixed portion of the system memory reserved

for the frame buffer

For the graphics monitor, the origin is defined at the lower left screen corner. The screen is represented as a two-dimensional system, i.e. the x values increase to the right and the y values increase from bottom to top. Scan lines are labeled from ymax at the top of the screen to 0 at the bottom. Along each scan line, screen pixel positions are labeled from 0 to xmax.


Refresh operation of the video buffer

In the basic refresh operation, two registers are used. Their purpose is to store the coordinates of the screen pixels. Initially the x register is set to 0 and the y

register is set to y max. The value stored in the frame buffer for this pixel position is

retrieved and used to set the intensity of the CRT beam. Then the x register is

incremented by 1, and the process repeated for the next pixel on the top scan line. This is

repeated for each pixel along the scan line.

After the last pixel on the top of the scan line has been processed, the x register is reset to

0 and the y register is decremented by 1. The pixels along this scan line are then processed

and the procedure is repeated for each successive scan line.

After cycling through all the pixels along the bottom scan line (y = 0), the video controller resets the registers to the first pixel position on the top scan line and the refresh process starts over.
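The register-stepping loop described above can be sketched in Python (the function and callback names are invented for illustration; a real video controller does this in hardware, once per refresh cycle):

```python
def refresh_scan(frame_buffer, xmax, ymax, set_intensity):
    """One refresh cycle: step the x and y registers over every pixel,
    from the top scan line (y = ymax) down to the bottom one (y = 0)."""
    y = ymax                      # y register starts at the top scan line
    while y >= 0:
        x = 0                     # x register is reset at the start of each line
        while x <= xmax:
            set_intensity(x, y, frame_buffer[y][x])  # drive the CRT beam
            x += 1                # next pixel on this scan line
        y -= 1                    # move down to the next scan line

# A tiny 4 x 3 frame buffer; record the order in which pixels are visited.
fb = [[0] * 4 for _ in range(3)]
visited = []
refresh_scan(fb, xmax=3, ymax=2,
             set_intensity=lambda x, y, v: visited.append((x, y)))
print(visited[:4])  # [(0, 2), (1, 2), (2, 2), (3, 2)] -- top line first
```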

As you know, a screen has to be refreshed at the rate of 60 frames per second. Fetching one pixel value per memory cycle cannot keep up with this rate, since the cycle time of typical RAM chips is too slow. What can be done to speed up this process? Multiple pixels can be retrieved in one pass, i.e. the video controller retrieves multiple pixel values from the refresh buffer on each pass.


Then how do you store all these different intensity values? They are stored in separate registers and used to control the CRT beam intensity for a group of adjacent pixels. When that group of pixels has been processed, the next block of pixel values is retrieved from the frame buffer.

In high-quality systems two buffers are used, so that one is used for refreshing while the other is being filled with intensity values. Then the two buffers switch roles. This

provides a fast mechanism for generating real time animations, since different views of

moving objects can be successively loaded into the refresh buffer.
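The buffer-swapping idea can be sketched as a toy `DoubleBuffer` class (our own illustration; real systems swap a hardware address pointer rather than object references):

```python
class DoubleBuffer:
    """Two frame buffers: the front buffer is scanned out for refresh while
    the back buffer is filled with the next frame; swap() exchanges roles."""

    def __init__(self, width, height):
        self.front = [[0] * width for _ in range(height)]  # being refreshed
        self.back = [[0] * width for _ in range(height)]   # being drawn into

    def swap(self):
        # Exchanging the roles is a single pointer swap, so the new frame
        # appears on screen "instantaneously" at the next refresh cycle.
        self.front, self.back = self.back, self.front

buf = DoubleBuffer(4, 3)
buf.back[0][0] = 1      # draw the next frame off-screen
buf.swap()              # the completed frame becomes visible
print(buf.front[0][0])  # 1
```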

Raster Scan Display processor

A raster scan system may contain a separate display processor.

The raster scan display processor is sometimes referred to as a graphics controller or a

display coprocessor. The purpose of this processor is to free the CPU from the graphics

tasks. But what is its major task? Its major task is to digitize a picture definition given in

an application program into a set of pixel intensity values for storage in the frame buffer.

This digitization process is called scan conversion. Graphics commands specifying straight lines and other geometric shapes are scan converted into a set of intensity points. When we scan convert a straight line segment, we have to locate the pixel


positions closest to the line path and store the intensity for each position in the frame

buffer.

Characters can be defined with rectangular grids, or they can be defined with outlines. The array size for character grids can vary from about 5 by 7 to 9 by 12.

A character defined as a grid of pixel positions; a character defined as a curve outline

Display processors are designed to perform a number of other functions:

To generate various line styles (dashed , dotted, or solid)

To display colour areas

To perform certain manipulations and transformations on displayed objects

To interface with interactive input devices, such as mouse

How is the frame buffer implemented?

It is implemented as a linked list and encodes the intensity information. One way

to do this is to store each scan line as a set of integer pairs. One number of each pair

indicates an intensity value and the second number specifies the number of adjacent

pixels on the scan line that have that intensity. This technique is called run length

encoding.
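Run-length encoding of a scan line can be sketched as follows (the helper names are ours; each run is an (intensity, count) pair, exactly as described above):

```python
def run_length_encode(scan_line):
    """Encode a scan line as (intensity value, run length) pairs."""
    runs = []
    for value in scan_line:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [tuple(r) for r in runs]

def run_length_decode(runs):
    """Expand (intensity, count) pairs back into a full scan line."""
    return [value for value, count in runs for _ in range(count)]

line = [7, 7, 7, 0, 0, 3, 3, 3, 3]
encoded = run_length_encode(line)
print(encoded)                              # [(7, 3), (0, 2), (3, 4)]
assert run_length_decode(encoded) == line   # lossless round trip
```

Note that run-length encoding only saves space when scan lines contain long runs of identical intensities; a line of rapidly varying pixels can encode to more data than the raw form.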

1.18) Random Scan Systems

In Random Scan Systems an application program is input and stored in the system

memory along with a graphics package. Graphics commands in the application program

are translated by the graphics package into a display file stored in the system memory.

This display file is then accessed by the display processor to refresh the screen. The

display processor cycles through each command in the display file program once during


every refresh cycle. Sometimes the display processor in the random scan system is

referred to as a display processing unit or a graphics controller. Graphics patterns are

drawn on a random scan system by directing the electron beam along the component lines

of the picture. Lines are defined by the values of their coordinate endpoints, and these input coordinate values are converted to x and y deflection voltages. A scene is then drawn one

line at a time by positioning the beam to fill in the line between specified endpoints.

1.19) Input/Output Devices

KEYBOARD

The different types of Keyboards are:

Dome-switch keyboard

Dome-switch keyboards are a hybrid of membrane and mechanical keyboards. They bring two circuit board traces together under a rubber "dome" or bubble. The inside of the top of the bubble is coated in graphite. When a key is pressed, it collapses the dome, which connects the two circuit traces and completes the connection to enter the character. The pattern on the PC board is often gold-plated. This is a common switch technology used in mass market keyboards today. It is considered very quiet, but purists tend to find it "mushy" because the collapsing dome does not provide as much positive response as a hard closing switch. These are also a good choice for office or consumer environments because they are generally fairly quiet. This switch technology also happens to be the one most commonly used in handheld controllers, such as those used with home video game consoles. Dome-switch keyboards are also called direct-switch keyboards.

Scissor-switch keyboard

On a scissor-switch keyboard, the keys are attached to the keyboard via two plastic pieces that interlock in a "scissor"-like fashion, and snap to the keyboard and the key. Scissor-switch technology does not use a rubber-plunger assembly like most other keyboards. Scissor-switch keyboards are normally responsive and crisp. These keyboards are generally quiet. The keys require little force to press.This keyboard technology is mainly used in laptops.


Capacitive keyboard

In this type of keyboard, pressing the key changes the capacitance of a pattern printed on a PC board. Usually this permits a pulse or pulse train to be sensed. Unlike "dome switch" keyboards, the pattern will be covered by a thin, insulating film. Capacitive keyboards are inexpensive, and resist wear, water, foreign objects and dirt. They are common in PC keyboards.

Mechanical-switch keyboard

Mechanical-switch keyboards use real switches, one under each key. Depending on the construction of the switch, these keyboards have varying responses and travel times. Notable keyboards utilizing this technology are the Apple Extended II, and its modern imitator, the Matias Tactile Pro. These two keyboards use ALPS switches. Cherry Corporation of Germany also makes mechanical switches used in special purpose and high end keyboards. In India, the TVS Gold mechanical keyboard is very popular despite costing about five times a membrane keyboard.

Buckling-spring keyboard

Many typists prefer buckling-spring keyboards. [1][2] It is a common misconception that the IBM Model M and its derivatives are mechanical-switch keyboards. In fact, the Model M uses membrane-sheet switches, much like those found in a dome-switch keyboard. The buckling spring mechanism (U.S. Patent 4,118,611) atop the switch is responsible for the tactile and aural response of the keyboard. This mechanism controls a small hammer that strikes the membrane switch. [3]

Hall-effect keyboard

Hall effect keyboards use magnets and Hall effect sensors instead of an actual switch. When a key is depressed, it moves a magnet, which is detected by the solid-state sensor. These keyboards are extremely reliable, and are able to accept millions of keystrokes before failing. They are used for ultra-high reliability applications, in locations like nuclear powerplants or aircraft cockpits. They are also sometimes used in industrial environments. These keyboards can be easily made totally waterproof. They also resist large amounts of dust and contaminants. Because a magnet and sensor is required for each key, as well as custom control electronics, they are very expensive.

Laser keyboard

A laser projection device approximately the size of a computer mouse projects the outline of keyboard keys onto a flat surface, such as a table or desk. When the laser is interrupted in the position of a key, a keystroke is registered. This type of keyboard is portable enough to be easily used with PDAs and cellphones, and many models have retractable cords and wireless capabilities. However, sudden or accidental


disruption of the laser will register unwanted keystrokes. Also, if the laser malfunctions, the whole unit becomes useless, unlike conventional keyboards which can be used even if a variety of parts (such as the keycaps) are removed. This type of keyboard can be cumbersome to use since it is susceptible to errors, even in the course of normal typing, and its complete lack of tactile feedback makes it even less user-friendly than the cheapest membrane keyboards.

Membrane keyboard

Membrane keyboards are usually flat. They are most often found on appliances like microwave ovens or photocopiers. A common design consists of three layers. The top layer (and the one the user touches) has the labels printed on its front and conductive stripes printed on the back. Under this it has a spacer layer, which holds the front and back layer apart so that they do not normally make electrical contact. The back layer has conductive stripes printed perpendicularly to those of the front layer.

When placed together, the stripes form a grid. When the user pushes down at a particular position, his finger pushes the front layer down through the spacer layer to close a circuit at one of the intersections of the grid. This indicates to the computer or keyboard control processor that a particular button has been pressed. Membrane keyboards do not generally have much of a "feel", so many machines which use them issue a beep or flash a light when the key is pressed. They are often used in harsh environments where water or leak proofing is desirable. Although used in the early days of the personal computer (on the ZX80, ZX81 and Atari 400), they have been supplanted by the more tactile dome and mechanical switch keyboards. However, membrane keyboards with interchangeable key layouts, such as the IntelliKeys and Discover:board are still commonly used by people with physical, visual, or cognitive disabilities as well as people who require assistive technology to access a computer.

Roll-up keyboard

Most keyboards are rigid, but some keyboards are designed out of flexible materials that can roll up into a moderately tight bundle. Normally the external materials are either silicone or polyurethane. It is important to note that although many manufacturers claim that these keyboards are foldable, they cannot be folded without damaging the membrane that holds the circuitry. Typically they are completely sealed in rubber, making them watertight like membrane keyboards. Like membrane keyboards, they are reported to be very hard to get used to, as there is little tactile feedback.

Other parts of the PC keyboard

The modern PC keyboard is more than just the switch technology, however. It also includes a control processor and indicator lights to provide feedback to the user about what state the keyboard is in. Depending on the sophistication of the controller's programming, the keyboard may also offer other special features. The processor is usually a single-chip 8048 microcontroller variant. The keyboard switch matrix is wired to its inputs; it processes the incoming keystrokes and sends the results down a serial cable (the keyboard cord) to a receiver in the main computer box. It also controls the illumination of the "caps lock", "num lock" and "scroll lock" lights.

A common test for whether the computer has crashed is pressing the "caps lock" key. The keyboard sends the key code to the BIOS code running in the main computer; if the main computer is operating, it commands the light to turn on. All the other indicator lights work in a similar way. The BIOS also tracks the shift, alt and control state of the keyboard. When pressing a keyboard key, the key "bounces" like a ball against its contacts several times before it settles into firm contact. When released, it bounces some more until it reverts to the uncontacted state. If the computer was watching for each pulse, it would see many keystrokes for what the user thought was just one.
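The usual firmware fix for this bounce is to "debounce" the switch: a state change is accepted only after the contact has read the same value for several scans in a row. The sketch below is purely illustrative (the names and the stable-count threshold are assumptions, not taken from any real keyboard controller):

```python
def debounce(samples, stable_count=3):
    """Return the sequence of debounced key events.

    `samples` is the raw switch level (0 = up, 1 = down) read once per
    scan; a new state is accepted only after it has been seen
    `stable_count` times in a row, which filters out contact bounce.
    """
    state = 0          # last accepted (debounced) state
    candidate = 0      # state we are considering switching to
    run = 0            # consecutive scans that matched `candidate`
    events = []
    for s in samples:
        if s == candidate:
            run += 1
        else:
            candidate, run = s, 1
        if run >= stable_count and candidate != state:
            state = candidate
            events.append(state)
    return events

# A bouncy press followed by a bouncy release yields exactly one press
# event and one release event:
raw = [0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
print(debounce(raw))   # [1, 0]
```

The threshold trades responsiveness against noise immunity: a larger `stable_count` filters longer bounces but delays key recognition by that many scan periods.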

Keyboard switch matrix

The keyboard switch matrix is often drawn with horizontal wires and vertical wires in a grid, with a switch at some or all intersections, much like a multiplexed display. Some keyboards have only the switch at each intersection, which causes "ghost keys" and "key jamming" when multiple keys are pressed. Other, more expensive keyboards have the switch and a diode at each intersection, allowing the keyboard microcontroller to accurately sense any number of simultaneous keys being pressed.
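The "ghost key" effect in a diode-less matrix can be reproduced in a small simulation. In the sketch below (illustrative code, not driver firmware), pressing three keys that form three corners of a rectangle in the grid makes the fourth corner read as pressed too: exactly the sneak path a per-switch diode would block.

```python
def scan_matrix(pressed, cols):
    """Simulate scanning a diode-less keyboard switch matrix.

    `pressed` is the set of (row, col) switches that are really closed.
    The controller drives one column at a time and reads the row lines;
    with no diode per switch, current can sneak through other closed
    switches, so extra "ghost" keys may be reported.
    """
    seen = set()
    for c in range(cols):
        live_cols = {c}
        live_rows = {r for (r, cc) in pressed if cc == c}
        changed = True
        while changed:               # propagate sneak paths to a fixed point
            changed = False
            for (r, cc) in pressed:
                if r in live_rows and cc not in live_cols:
                    live_cols.add(cc)
                    changed = True
                if cc in live_cols and r not in live_rows:
                    live_rows.add(r)
                    changed = True
        seen |= {(r, c) for r in live_rows}
    return seen

# Pressing three corners of a rectangle makes the fourth corner appear:
real = {(0, 0), (0, 1), (1, 0)}
print(sorted(scan_matrix(real, cols=2)))
# [(0, 0), (0, 1), (1, 0), (1, 1)] -- (1, 1) is a ghost key
```

With a diode in series with each switch, current cannot flow backwards through a closed switch, so the sneak-path propagation in the inner loop never happens and any combination of keys is sensed correctly.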

MOUSE

The Mouse is a popular input device. You move it across the desk and its movement is shown on the screen by a marker known as a 'cursor'. You will need to click the buttons at the top of the mouse to select an option. A mouse contains a rotating ball with two rollers held at right angles to each other which touch the ball. The rollers register the x and y axis movements of the ball. Each roller is attached to an optical encoder, a disc which has gaps in its surface. The rotation of the encoder interrupts two beams of light between the LEDs and their detectors. These pulses of light are fed to the computer and are used to determine the speed and direction of the mouse's movement.
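The two light beams per roller form a quadrature encoder: the signals are a quarter cycle out of phase, so the order in which they change gives the direction of rotation and the rate of change gives the speed. A small decoding sketch (the transition table is the standard Gray-code one; the function name is illustrative):

```python
# Each roller's encoder produces two beam signals (A, B) a quarter cycle
# apart.  Valid transitions between 2-bit (A, B) states map to +1 or -1
# counts; anything else (a repeat or an illegal jump) is ignored.
_STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode_quadrature(states):
    """Accumulate encoder counts from a sequence of 2-bit (A, B) states."""
    pos = 0
    for prev, cur in zip(states, states[1:]):
        pos += _STEP.get((prev, cur), 0)
    return pos

# Four forward transitions then two backward ones net to +2 counts:
seq = [0b00, 0b01, 0b11, 0b10, 0b00, 0b10, 0b11]
print(decode_quadrature(seq))  # 2
```

Dividing the accumulated count by the elapsed time gives the speed along that axis; the sign gives the direction.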

LIGHT PEN

A Light Pen is a pointing device shaped like a pen and is connected to a VDU. The tip of the light pen contains a light-sensitive element which, when placed against the screen, detects the light from the screen, enabling the computer to identify the location of the pen on the screen.

Light pens have the advantage of 'drawing' directly onto the screen, but this can become uncomfortable, and they are not as accurate as digitising tablets.
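Because the pen only reports the instant at which it sees the electron beam, the computer recovers the screen position from the beam timing: the elapsed time since the start of the frame determines the scan line, and the time within that line determines the horizontal position. A toy calculation with illustrative timings (integer microseconds keep the arithmetic exact; the numbers are assumptions, not from any real display):

```python
def beam_position(t_us, line_us, visible_us, width, height):
    """Map the time `t_us` (microseconds since the start of a frame) at
    which a light pen saw the beam to an (x, y) pixel coordinate.

    Assumes an idealized raster with `line_us` microseconds per scan
    line, of which the first `visible_us` draw `width` pixels; the rest
    is horizontal retrace.
    """
    y = t_us // line_us                   # which scan line the beam is on
    t_in_line = t_us % line_us            # how far into that line
    x = t_in_line * width // visible_us   # fraction of the visible sweep
    return (min(x, width - 1), min(y, height - 1))

# With 64 us lines (48 us visible) on a 640x480 raster, a flash seen
# 10 lines plus 24 us into the frame is halfway along row 10:
print(beam_position(10 * 64 + 24, 64, 48, 640, 480))  # (320, 10)
```

Real light-pen hardware latches the video controller's row/column counters at the moment of detection, which is the same computation done in silicon.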

JOYSTICK

Joysticks are mainly used in the entertainment industry. They are used to control cursor movements in 2D and are equipped with one or more buttons. A related device is the Cyberman, which Logitech was distributing in the U.S. in 1994; it could be used to replace a spaceball or as a joystick. The Cyberman is sort of a cross between a mouse and a joystick: it uses a spring-mounted 6DOF puck about 3 inches across with a few millimetres of displacement. It connects to an RS-232 serial port and provides data for six motions:

1. X translation
2. Y translation
3. Z translation (vertical push/pull)
4. Twist
5. Front-to-back tilt
6. Left-to-right tilt

OPTICAL MARK READER

The Optical Mark Reader (OMR) can read information in the form of numbers or letters and put it into the computer. The marks have to be precisely located as in multiple choice test papers.

TOUCH SENSITIVE SCREEN


A Touch Sensitive Screen is a pointing device that enables the user to interact with the computer by touching the screen. There are three forms of touchscreen: pressure-sensitive, capacitive-surface and light-beam. Touchpads and touchscreens are operated by a pen or by the user's fingertips. With the touchscreen, the user can directly point to objects on the screen. Both devices are mainly used for simple pointing and selection tasks.

SCANNERS

Scanners allow information such as a photo or text to be input into a computer; a flatbed scanner is a typical device for digital image input. Scanners are usually either A4 size (flatbed) or hand-held to scan a much smaller area. If text is to be scanned, you would use an Optical Character Recognition (OCR) program to recognise the printed text and then convert it to a digital text file that can be accessed using a computer.

DIGITIZERS

Digitizers are 2D input devices for exact input of points, usually from already existing sketches. Their main purpose is to digitize hand-drawings, i.e. to get 2D scenes into a format which can be processed by a CAD program.

Imaging and Video input devices

• Webcam
• Image scanner
• Fingerprint scanner
• Barcode reader
• 3D scanner
  o Laser range-finder
• Medical imaging sensor technology
  o Computed tomography
  o Magnetic resonance imaging
  o Positron emission tomography
  o Medical ultrasonography

Audio input devices

• Microphone
• Speech recognition

HARD COPY DEVICES


You can print out information that is in the computer onto paper. By printing you create what is known as a 'hard copy'. There are different kinds of printers which vary in their speed and print quality. The two main types of printer are impact and non-impact.

IMPACT PRINTERS

Impact printers use a print head containing a number of metal pins which strike an inked ribbon placed between the print head and the paper. Some print heads have only 9 pins to make the dots that build up a character; some have 24 pins, which produce a better resolution. A dot matrix printer is an example: characters are formed from a matrix of dots, and the speed is usually 30–550 characters per second (cps). This is the cheapest and noisiest of the printer family, and the standard of print obtained is poor. These printers are cheap to run and relatively fast, and they are useful for low-quality carbon-copy printing.

NON-IMPACT PRINTERS

Non-impact printers are much quieter than impact printers as their printing heads do not strike the paper. Most non-impact printers produce dot-matrix patterns. Several different technologies have been used to provide a variety of printers.

The main types of non-impact printer are:

o Thermal Printer
o Laser Printer
o Ink Jet Printer

Thermal Printer

Characters are formed by heated elements being placed in contact with special heat-sensitive paper, forming darkened dots when the elements reach a critical temperature. Thermal printer paper tends to darken over time due to exposure to sunlight and heat, and the standard of print produced is poor. Thermal printers are widely used in battery-powered equipment such as portable calculators.

Laser Printer

Laser printers use a laser beam and dry powdered ink (toner) to produce a fine dot matrix pattern. This method of printing can generate about 4 pages of A4 paper per minute. The standard of print is very good, and laser printers can also produce very good quality printed graphic images. A photoconductive drum is initially charged, and a high-intensity laser beam is then used to discharge selected areas on the drum. These discharged areas correspond to the white areas of the printed document. Toner is attracted to parts of the drum with a high charge. The drum rotates and transfers the toner to the paper, which carries an even greater electrical charge. Finally a heater fixes the toner onto the paper.

Inkjet printer

Characters are formed as a result of electrically charged or heated ink being sprayed in fine jets onto the paper. Individual nozzles in the printing head produce high-resolution (up to 400 dots per inch, or 400 dpi) dot matrix characters. Inkjet printers use colour cartridges which combine magenta, yellow and cyan inks to create colour tones. A black cartridge is also used for crisp monochrome output. This method of printing can generate up to 200 cps and allows for good-quality, cheap colour printing.

PLOTTERS

Plotters are used to produce graphs or diagrams. Plotters can be of two types:

• Pen plotters
• Electrostatic plotters

Pen plotters have an ink pen attached to draw the images, and electrostatic plotters work similarly to a laser printer.

A Flatbed Plotter is a plotter where the paper is fixed on a flat surface and pens are moved to draw the image. This plotter can use several different colour pens to draw with. The size of the plot is limited only by the size of the plotter's bed.

In drum plotters the pen is moved in a single axis track and the paper itself moves on a cylindrical drum to add the other axis or dimension. The size of the graph is therefore limited only by the width of the drum and can be of any length.

An Electrostatic Plotter produces a raster image by charging the paper with a high voltage. This voltage attracts toner which is then melted into the paper with heat. This type of plotter is fast, but the quality is generally considered to be poor when compared to pen plotters.


Direct-View Storage Tube (DVST)

The DVST provides an alternative method for maintaining a screen image: the picture information is stored inside the CRT instead of being refreshed onto the screen. A DVST stores the picture information as a charge distribution just behind the phosphor-coated screen. Two electron guns are used in a DVST: the primary gun stores the picture pattern, while the second, the flood gun, maintains the picture display. No refreshing is needed for a DVST. The disadvantages are that DVSTs do not ordinarily display colour and that selected parts of the image cannot be erased.

A storage tube is a special monochromatic CRT whose screen has a kind of 'memory' (hence the name): when a portion of the screen is illuminated by the CRT's electron gun, it stays lit until a screen erase command is given. Thus, screen update commands need only be sent once, and this allows the use of a slower data connection, typically serial, a feature very well adapted to computer terminals. DVBST was an acronym used by Tektronix to describe their line of "Direct-View Bistable Storage Tubes". These were cathode ray tubes that stored information written to them using an analog technique inherent in the CRT and based upon the secondary emission of electrons from the phosphor screen itself. Some DVBST implementations also allowed the "write-through" of a small amount of dynamically refreshed, non-stored data. This allowed the display of cursors, graphic elements under construction, and the like on computer terminals.

Another version of a DVBST was the "dark trace" CRT, also known as the skiatron. This CRT replaced the conventional light-emitting phosphor layer on the face of the tube screen with a scotophor such as potassium chloride (KCl). KCl has the property that when a crystal is struck by an electron beam, that spot changes from translucent white to a dark magenta colour. By back-lighting such a CRT with a white or green fluorescent lamp, the resulting image appears as black information against a green background or as magenta information against a white background. A benefit, aside from the semi-permanent storage of the displayed image, is that the brightness of the resultant display is limited only by the illumination source and optics. The image is retained until erased by flooding the scotophor with a high-intensity infrared light or by electro-thermal heating.

b) Flat panel display

Flat panel display refers to a class of video devices that have reduced volume, weight and power requirements compared to a CRT. Flat panel displays fall into two categories: emissive and non-emissive. Emissive displays include plasma panels, electroluminescent thin-film displays and light-emitting diodes. Non-emissive displays include LCDs.

PLASMA DISPLAY

Plasma displays are bright, have a wide color gamut, and can be produced in fairly large sizes, up to 262 cm (103 inches) diagonally. They have a very low-luminance "dark-room" black level, creating a black some find more desirable for watching movies. The display panel is only about 6 cm (2½ inches) thick, while the total thickness, including electronics, is less than 10 cm (4 inches). Plasma displays use as much power per square meter as a CRT or an AMLCD television. Power consumption varies greatly depending on what is watched: bright scenes (say, a football game) draw significantly more power than darker scenes.

The xenon and neon gas in a plasma television is contained in hundreds of thousands of tiny cells positioned between two plates of glass. Long electrodes are also sandwiched between the glass plates, in front of and behind the cells. The address electrodes sit behind the cells, along the rear glass plate. The transparent display electrodes, which are surrounded by an insulating dielectric material and covered by a magnesium oxide protective layer, are mounted in front of the cells, along the front glass plate. Control circuitry charges the electrodes that cross paths at a cell, creating a voltage difference between front and back and causing the gas to ionize and form a plasma; as the gas ions rush to the electrodes and collide, photons are emitted.

In a monochrome plasma panel, the ionizing state can be maintained by applying a low-level voltage between all the horizontal and vertical electrodes, even after the ionizing voltage is removed. To erase a cell, all voltage is removed from a pair of electrodes. This type of panel has inherent memory and does not use phosphors. A small amount of nitrogen is added to the neon to increase hysteresis.

In color panels, the back of each cell is coated with a phosphor. The ultraviolet photons emitted by the plasma excite these phosphors to give off colored light. The operation of each cell is thus comparable to that of a fluorescent lamp. Every pixel is made up of three separate sub-pixel cells, each with a different colored phosphor: one sub-pixel has a red phosphor, one a green phosphor and one a blue phosphor. These colors blend together to create the overall color of the pixel, analogous to the "triad" of a shadow-mask CRT.


Thin film electro-luminescent display

Electroluminescence is the result of radiative recombination of electrons and holes in a material (usually a semiconductor). The excited electrons release their energy as photons, i.e. light. Prior to recombination, electrons and holes are separated either as a result of doping of the material to form a p-n junction (in semiconductor electroluminescent devices such as LEDs), or through excitation by impact of high-energy electrons accelerated by a strong electric field (as with the phosphors in electroluminescent displays).

Powder phosphor-based electroluminescent panels are frequently used as backlights for liquid crystal displays. They readily provide a gentle, even illumination to the entire display while consuming relatively little electric power, though they do require a relatively high voltage. Recently, blue-, red- and green-emitting thin-film electroluminescent materials have been developed that offer the potential for long life and full-color electroluminescent displays. In either case, the EL material must be enclosed between two electrodes, and at least one electrode must be transparent to allow the escape of the produced light. Glass coated with indium oxide or tin oxide is commonly used as the front (transparent) electrode, while the back electrode is made of, or coated with, a reflective metal. Other transparent conducting materials, such as carbon nanotube coatings or PEDOT, can also be used as the front electrode.

Unlike neon and fluorescent lamps, EL lamps are not negative-resistance devices, so no extra circuitry is needed to regulate the amount of current flowing through them. In principle, EL lamps can be made in any color. EL devices have low power consumption when compared with neon signs, and have a wide range of applications, such as advertising boards and safety signs.


Light-emitting diode (LED)

A light-emitting diode (LED) is a semiconductor diode that emits incoherent narrow-spectrum light when electrically biased in the forward direction of the p-n junction. This effect is a form of electroluminescence. An LED is usually a small-area source, often with extra optics added to the chip to shape its radiation pattern. The color of the emitted light depends on the composition and condition of the semiconducting material used, and can be infrared, visible, or near-ultraviolet. An LED can be used as a regular household light source.

Like a normal diode, an LED consists of a chip of semiconducting material impregnated, or doped, with impurities to create a p-n junction. As in other diodes, current flows easily from the p-side, or anode, to the n-side, or cathode, but not in the reverse direction. Charge carriers (electrons and holes) flow into the junction from electrodes with different voltages. When an electron meets a hole, it falls into a lower energy level and releases energy in the form of a photon. The wavelength of the light emitted, and therefore its color, depends on the band gap energy of the materials forming the p-n junction. In silicon or germanium diodes, the electrons and holes recombine by a non-radiative transition which produces no optical emission, because these are indirect band gap materials. The materials used for an LED have a direct band gap with energies corresponding to near-infrared, visible or near-ultraviolet light.

LEDs are usually built on an n-type substrate, with an electrode attached to the p-type layer deposited on its surface. P-type substrates, while less common, occur as well. Many commercial LEDs, especially GaN/InGaN, also use a sapphire substrate. Substrates that are transparent to the emitted wavelength, and backed by a reflective layer, increase the LED efficiency. The refractive index of the package material should match the index of the semiconductor; otherwise the produced light gets partially reflected back into the semiconductor, where it may be absorbed and turned into additional heat, lowering the efficiency. An anti-reflection coating may be added as well.
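The band gap fixes the wavelength through the photon-energy relation lambda = hc/E. A quick numeric check (the constants are standard physical values; the GaAs band gap used in the example is approximate):

```python
# Peak emission wavelength of an LED from its band-gap energy:
# lambda = h * c / E.
H = 6.626e-34      # Planck's constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt

def emission_wavelength_nm(band_gap_ev):
    """Approximate peak emission wavelength (nm) for a direct band gap."""
    return H * C / (band_gap_ev * EV) * 1e9

# GaAs has a band gap of roughly 1.42 eV, which lands in the
# near-infrared, around 873 nm:
print(round(emission_wavelength_nm(1.42)))
```

Larger band gaps give shorter wavelengths, which is why blue and near-ultraviolet LEDs required wide-gap materials such as GaN/InGaN.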

c) Liquid Crystal Display (LCD)

An active matrix liquid crystal display (AMLCD) is a type of flat panel display, currently the overwhelming choice of notebook computer manufacturers due to its light weight, very good image quality, wide color gamut and response time. The most common example of an active matrix display contains, besides the polarizing sheets and cells of liquid crystal, a matrix of thin-film transistors (TFTs), making a TFT LCD.

Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent electrodes, and two polarizing filters, the axes of transmission of which are (in most cases) perpendicular to each other. With no liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer. The surfaces of the electrodes that are in contact with the liquid crystal material are treated so as to align the liquid crystal molecules in a particular direction. This treatment typically consists of a thin polymer layer that is unidirectionally rubbed using, for example, a cloth. The direction of the liquid crystal alignment is then defined by the direction of rubbing.

When a voltage is applied across the electrodes, a torque acts to align the liquid crystal molecules parallel to the electric field, distorting their helical structure. This reduces the rotation of the polarization of the incident light, and the device appears gray. If the applied voltage is large enough, the liquid crystal molecules in the center of the layer are almost completely untwisted, and the polarization of the incident light is not rotated as it passes through the liquid crystal layer. This light will then be mainly polarized perpendicular to the second filter, and thus be blocked, so the pixel will appear black. By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts, thus constituting different levels of gray.

Both the liquid crystal material and the alignment layer material contain ionic compounds. If an electric field of one particular polarity is applied for a long period of time, this ionic material is attracted to the surfaces and degrades the device performance. This is avoided either by applying an alternating current or by reversing the polarity of the electric field as the device is addressed. When a large number of pixels is required in a display, it is not feasible to drive each directly, since each pixel would then require independent electrodes. Instead, the display is multiplexed: electrodes on one side of the display are grouped and wired together (typically in columns), and each group gets its own voltage source, while on the other side the electrodes are also grouped (typically in rows), with each group getting a voltage sink.

University Questions

(a) Write a short note on raster scan displays. (6 marks) [F 3875 Computer Graphics, November 2005]
(b) Explain the following in detail: a) Generation of a raster image. (6 marks) [3794 Computer Graphics (R, T), November 2005]
(c) What is an interactive graphics system? (4 marks) [G 1693 Computer Graphics (R, T), June/July 2006]
(d) Enumerate the applications of raster scan graphics. (4 marks) [G 1693 Computer Graphics (R, T), June/July 2006]
(e) Explain in detail the basic concepts in computer graphics. (12 marks) [G 1693 Computer Graphics (R, T), June/July 2006]
(f) What are the requirements of interactive graphics? (4 marks) [G 1867 Computer Graphics, May 2005]
(g) Explain the potential applications of computer graphics. (4 marks) [3794 Computer Graphics (R, T), November 2005]
(h) Explain with a neat diagram the working of a shadow mask CRT, bringing out its merits and demerits. (12 marks) [G 1867 Computer Graphics, May 2005]
(i) Explain in detail the basic concepts in computer graphics. (12 marks) [G 1693 Computer Graphics (R), June/July 2006]
(j) Explain and compare the working of raster scan and random scan display systems. (12 marks) [G 1867 Computer Graphics (R), May 2005]
(k) Write a short note on the display file interpreter. (6 marks) [F 3879 Computer Graphics (R), November 2005]
(l) Explain, with the help of a block diagram, the display processor for a refresh line drawing display. (12 marks) [Computer Graphics (R), old scheme, November 2001]
(m) Using an illustrative example, explain how a picture can be represented using linked lists. (12 marks) [Computer Graphics (R), old scheme, November 2001]
(n) Explain in detail the display processor. (6 marks) [3794 Computer Graphics (R), November 2005]
(o) Explain in detail the generation of a raster image. (6 marks) [3794 Computer Graphics (R), November 2005]

Summary

In this lecture we had a first look at the basic concepts of computer graphics. We developed an idea of the history and evolution of computer graphics, discussed how an interactive display system works, and discovered the different application areas of computer graphics.

We familiarized ourselves with the terms pixel and resolution. We learned that the primary output device in a graphics system is a video monitor and that its operation is based on CRT design. We also discussed the working of a shadow mask CRT and learned how the refresh rates of a monitor are calculated.

We discussed the two display methods and understood that a raster system uses a frame buffer to store intensity information for each screen position (pixel). Pictures are then painted on the screen by retrieving this information from the frame buffer as the electron beam in the CRT sweeps across each scan line, from top to bottom. In a random scan display, pictures are constructed by drawing lines between specified line endpoints; picture information is then stored as a set of line drawing instructions.


Module 2

Introduction

In the introductory lecture of this course we discussed the key features and applications of computer graphics, and noted that the end product of computer graphics is a picture. In its broadest sense, a picture means any collection of lines, points, text, etc. that can be displayed on a graphics device. In your Higher Secondary classes you learned that the fundamental building block of picture representation is a point, together with the algorithms that organize these points. In this lecture, we plan to explore how the graphics system converts graphical primitives such as points and lines from their geometrical definition into the set of pixels that make up the primitive in the image space. We will also discuss the DDA algorithm, which helps in drawing a straight line from one point to another. Before we begin on this topic, a revision of the concepts developed earlier, such as the CRT raster display, pixels (the smallest picture unit accepted by the display) and the Cartesian slope-intercept equation for straight lines, may be helpful. Once we finish this aspect, we will proceed towards an exposition of the items listed in the synopsis. In particular, we will emphasize the following:

(a) Rasterization and scan conversion
(b) Scan converting a point
(c) Design criteria of straight lines
(d) Scan converting a line
(e) Implementing the Digital Differential Analyzer
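As a preview of item (e): the Digital Differential Analyzer steps one unit at a time along the major axis, accumulates the fractional slope along the other axis, and rounds each position to the nearest pixel. A minimal sketch (illustrative code, developed in full later in the module):

```python
def dda_line(x0, y0, x1, y1):
    """Digital Differential Analyzer: return the pixels approximating
    the line from (x0, y0) to (x1, y1)."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))      # one step per unit of the major axis
    if steps == 0:
        return [(round(x0), round(y0))]
    x_inc, y_inc = dx / steps, dy / steps
    x, y, pixels = x0, y0, []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))   # nearest pixel to the true line
        x += x_inc
        y += y_inc
    return pixels

print(dda_line(0, 0, 5, 2))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

Note that one increment is always exactly 1 and the other is the slope (or its reciprocal), so the pixels never leave the true line by more than half a cell in the minor direction.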

Representing pictures

Points are the fundamental building blocks of picture representation. Of equally fundamental importance is the algorithm, which explains how to organize these points. To illustrate this, consider a unit square in the first quadrant, as in the figure given below.

The unit square can be represented by its four corner points:

P1 (0, 0)  P2 (1, 0)  P3 (1, 1)  P4 (0, 1)

An associated algorithm description might be:

Connect P1P2P3P4P1 in sequence

The unit square can also be described by its four edges:

E1 = P1P2  E2 = P2P3  E3 = P3P4  E4 = P4P1

Here, the algorithm description is:

Connect E1E2E3E4 in sequence

The fundamental building blocks, i.e. points, can be represented as either pairs or triplets of numbers, depending on whether the data are two- or three-dimensional. Thus (x1, y1) or (x1, y1, z1) represents a point in either two- or three-dimensional space. Two points represent a line or edge, and a collection of three or more points represents a polygon. Curved lines are usually represented by approximating them with connected short straight line segments.
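The point-plus-algorithm idea can be made concrete in a few lines (illustrative code, not from the text): the unit square is stored as its corner points, and the "algorithm" is simply the rule that connects consecutive points, wrapping back to the first.

```python
# A picture as points plus an algorithm that organizes them: the unit
# square stored as corner points P1..P4, drawn by connecting them in
# sequence and closing back to P1.
points = [(0, 0), (1, 0), (1, 1), (0, 1)]

def edges(pts):
    """Derive the edge list E1..E4: consecutive points, wrapping around."""
    return [(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts))]

for a, b in edges(points):
    print(f"connect {a} -> {b}")
```

The same four points with a different connection algorithm would describe a different picture, which is why the points alone are not a complete representation.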

Preparing pictures for presentation

Pictures ultimately consist of points and a drawing algorithm to display them. This information is generally stored in a file before it is used to present the picture; this file is called a database. Very complex pictures require very complex databases, which require complex algorithms to access them.

Presenting previously prepared pictures

The data used to prepare the picture for presentation is rarely the same as that used to present the picture. The data used to present the picture is frequently called a display file. The display file represents some portion, view or scene of the picture represented by the total database.

Also, refer to pages 86 to 94 of your text book.

Raster CRT graphics device

Before discussing what rasterization is we first look at some fundamentals of

cathode ray tubes and how they are used in computer graphics. The most common

method to implement a raster CRT graphics device is to use a frame buffer. A frame

buffer is a large, contiguous piece of computer memory. At a minimum there is one

memory bit for each pixel in the raster; this amount of memory is called a bit plane. The

picture is built up in the frame buffer one bit at a time. You know that a memory bit has

only two states; therefore a single bit plane yields a black-and-white display. You know

that a frame buffer is a digital device and the CRT is an analog device. Therefore, a

conversion from a digital representation to an analog signal must take place when

information is read from the frame buffer and displayed on the raster CRT graphics

device. For this you can use a digital to analog converter (DAC). Each pixel in the frame

buffer must be accessed and converted before it is visible on the raster CRT.

Color or gray scales are incorporated into a frame buffer raster graphics device by

using additional bit planes. The intensity of each pixel on the CRT is controlled by a

corresponding pixel location in each of the N bit planes. The binary value from each of

the N bit planes is loaded into corresponding positions in a register. The resulting binary

number is interpreted as an intensity level between 0 (dark) and 2^N - 1 (full intensity). This

is converted into an analog voltage between 0 and the maximum voltage of the electron

gun by the DAC.
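The intensity computation described above can be sketched in Python (the function names and the most-significant-plane-first bit ordering are our own assumptions):

```python
def pixel_intensity(bits):
    """Combine one bit from each of the N bit planes (most-significant
    plane first) into an intensity level between 0 and 2**N - 1."""
    level = 0
    for b in bits:
        level = (level << 1) | b
    return level

def dac_voltage(level, n_planes, v_max):
    """Map an intensity level to an analog voltage between 0 (dark)
    and v_max (full intensity), as the DAC would."""
    return v_max * level / (2 ** n_planes - 1)

pixel_intensity([1, 0, 1])  # 3 bit planes -> level 5 out of 0..7
```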

Rasterization of straight lines

You know that a raster graphics device can be considered as a matrix of discrete

cells, each of which can be made bright. You can find that it is not possible to directly

draw a straight line from one addressable point, or pixel, in the matrix to another. The

line can be approximated by a series of dots (pixels) close to the path of the line. The


process of determining which pixels provide the best approximation to the desired line is

properly known as rasterization. Only in the special cases of completely horizontal, vertical

or (for square pixels) 45° lines does a straight line of pixels result. All other lines

appear as a series of stair steps; this effect is called aliasing, or the ‘jaggies’.

Scan Conversion

When the process of rasterization is combined with the process of generating a

picture in scan line order, it is known as scan conversion.

Scan converting a point

A mathematical point (x, y) where x and y are real numbers within an image area,

needs to be scan converted to a pixel at location (x’, y’). This may be done by making x’

to be the integer part of x, and y’ to be the integer part of y. In other words, x’ = floor(x)

and y’ = floor(y), where function floor returns the largest integer that is less than or equal

to the arguments. Doing so in essence places the origin of a continuous coordinate system

for (x, y) at the lower left corner of the pixel grid in the image space. All the points that

satisfy x’ ≤ x < x’ + 1 and y’ ≤ y < y’ + 1 are mapped to pixel (x’, y’).

Let us take for example a point P1(1.7, 0.8). It will be represented by pixel (1, 0).

Points P2 (2.2, 1.3) and P3(2.8, 1.9) are both represented by pixel (2, 1).

Let us take another approach to align the integer values in the coordinate system

for (x, y) with the pixel coordinates. Here we can convert (x, y) by making x’ = floor(x +

0.5) and y’ = floor(y + 0.5). This approach places the origin of the coordinate system for

(x, y) at the center of pixel (0, 0). All points that satisfy x’ - 0.5 ≤ x < x’ + 0.5 and

y’ - 0.5 ≤ y < y’ + 0.5 are mapped to pixel (x’, y’). This means that points P1 and P2 are now both

represented by pixel (2, 1), whereas point P3 is represented by pixel (3, 2).
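Both mappings can be sketched in Python (function names are ours):

```python
import math

def scan_convert_floor(x, y):
    """First approach: origin of the continuous coordinate system at the
    lower left corner of pixel (0, 0); take the integer part."""
    return (math.floor(x), math.floor(y))

def scan_convert_round(x, y):
    """Second approach: origin at the center of pixel (0, 0);
    round to the nearest pixel coordinate."""
    return (math.floor(x + 0.5), math.floor(y + 0.5))
```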

Scan converting a line

You know that a line in computer graphics typically refers to a line segment,

which is a portion of a straight line; the straight line itself extends indefinitely in opposite directions. You

can define a line by its two end points and by the line equation y = mx + b, where m is

called the slope and b the y intercept of the line. Let the two end points of a line be P1(x1,

y1) and P2(x2, y2). The line equation describes the coordinates of all the points that lie

between the two endpoints.

A simple approach to scan convert a line is to first scan convert P1 and P2 to pixel

coordinates (x1’, y1’) and (x2’, y2’) respectively. Then let us set m = (y2’- y1’)/(x2’-

x1’) and b = y1’-mx1’. Find |m| and if |m| ≤ 1, then for every integer value of x between

and excluding x1’ and x2’, calculate the corresponding value of y using the equation and

scan convert (x, y). If |m| >1, then for every integer value of y between and excluding y1’

and y2’, calculate the corresponding value of x using the equation and scan convert (x, y).

Design criteria of straight lines

From geometry we know that a line, or line segment, can be uniquely specified by

two points. From algebra we also know that a line can be specified by a slope, usually

given the name m and a y-axis intercept called b. Generally in computer graphics, a line

will be specified by two endpoints. But the slope and y-intercept are often calculated as

intermediate results for use by most line-drawing algorithms.

The goal of any line drawing algorithm is to construct the best possible

approximation of an ideal line given the inherent limitations of a raster display. Before

discussing specific line drawing algorithms, it is useful to consider general requirements

for such algorithms. Let us see the desirable characteristics needed for these

lines.

The primary design criteria are as follows

• Straight lines appear as straight lines

• Straight lines start and end accurately

• Displayed lines should have constant brightness along their length,

independent of the line length and orientation.

• Lines should be drawn rapidly


2.1) Digital Differential Analyzer

DDA algorithm is an incremental scan conversion method. Here we perform

calculations at each step using the results from the preceding step. The characteristic of

the DDA algorithm is to take unit steps along one coordinate and compute the

corresponding values along the other coordinate. The unit steps are always along the

coordinate of greatest change, e.g. if dx = 10 and dy = 5, then we would take unit steps

along x and compute the steps along y.

Suppose at step i we have calculated (xi, yi) to be a point on the line. Since the

next point (x i+1,y i+1) should satisfy ∆y/∆ x =m where ∆y= y i+1–yi and ∆ x= x i+1–xi , we

have y i+1= yi + m∆ x or x i+1=xi+ ∆ y/m

These formulas are used in the DDA algorithm as follows. When |m| ≤ 1, we start

with x = x1’ (assuming that x1’ < x2’) and y = y1’, and set ∆x = 1 (i.e., unit increment in

the x direction). The y coordinate of each successive point on the line is calculated using

y i+1= yi + m. When |m| >1, we start with x= x1’ and y= y1’ (assuming that y1’ < y2’), set

∆ y =1 ( i.e., unit increment in the y direction). The x coordinate of each successive point

on the line is calculated using x i+1 = xi + 1/m. This process continues until x reaches

x2’ (for the |m| ≤ 1 case) or y reaches y2’ (for the |m| > 1 case) and all points are scan converted

to pixel points.

The explanation is as follows: In DDA algorithm we have to find the new point

xi+1 and yi+1 from the existing points xi and yi. As a first step here we identify the major

axis and the minor axis of the line to be drawn. Once the major axis is found we sample

the major axis at unit intervals and find the value in the other axis by using the slope

equation of the line. For example, suppose the end points of the line are given as (x1, y1) = (2, 2)

and (x2, y2) = (9, 5). Here we calculate y2 - y1 and x2 - x1 to find which one is greater.

Here y2 - y1 = 3 and x2 - x1 = 7; therefore the major axis is the x axis. So here we need

to sample the x axis at unit intervals i.e.∆ x = 1 and we will find out the y values for each

∆ x in the x axis using the slope equation.

In DDA we need to consider two cases: slope of the line less than or equal

to one (|m| ≤ 1) and slope of the line greater than one (|m| > 1). |m| ≤ 1 means |y2 - y1|

= |x2 - x1| or |y2 - y1| < |x2 - x1|. In both these cases we take x to be the major axis. Therefore


we sample x axis at unit intervals and find the y values corresponding to each x value.

We have the slope equation as

∆ y = m ∆ x

y2-y1 = m (x2-x1)

In general terms we can say that y i+1 - yi = m(x i+1 - xi ). But here ∆ x = 1; therefore the

equation reduces to y i+1= yi + m = yi + dy/dx.

When |m| > 1, |y2 - y1| > |x2 - x1| and therefore we take y to be the major axis. Here

we sample the y axis at unit intervals and find the x values corresponding to each y value. We

have the slope equation as

∆ y = m ∆ x

y2-y1 = m (x2-x1)

In general terms we can say that y i+1 - yi = m(x i+1 - xi ). But here ∆ y = 1; therefore the

equation reduces to 1 = m(x i+1 - xi). Therefore

x i+1=xi+ 1/m

x i+1=xi+ dx/dy

DDA Algorithm is given below:

procedure DDA(x1, y1, x2, y2: integer);
var
  dx, dy, steps, i: integer;
  x_inc, y_inc, x, y: real;
begin
  dx := x2 - x1;  dy := y2 - y1;
  if abs(dx) > abs(dy) then
    steps := abs(dx)          {steps is the larger of |dx|, |dy|}
  else
    steps := abs(dy);
  x_inc := dx / steps;  y_inc := dy / steps;
  {one of x_inc, y_inc has magnitude 1.0; the other is the slope}
  x := x1;  y := y1;
  set_pixel(round(x), round(y));
  for i := 1 to steps do
  begin
    x := x + x_inc;
    y := y + y_inc;
    set_pixel(round(x), round(y));
  end;
end; {DDA}

The DDA algorithm is faster than the direct use of the line equation since it calculates

points on the line without any floating point multiplication.
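A direct Python transcription of the procedure above may help; here the visited pixels are collected and returned instead of being drawn with set_pixel (the function name is our own):

```python
def dda_line(x1, y1, x2, y2):
    """DDA scan conversion: take unit steps along the major axis and
    accumulate the fractional increment along the minor axis."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))      # sample along the axis of greatest change
    if steps == 0:                     # degenerate case: both endpoints equal
        return [(x1, y1)]
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x1), float(y1)
    pixels = [(round(x), round(y))]
    for _ in range(steps):
        x += x_inc
        y += y_inc
        pixels.append((round(x), round(y)))
    return pixels

# the worked example from the text: end points (2, 2) and (9, 5)
dda_line(2, 2, 9, 5)
```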

2.2)Bresenham’s line drawing Algorithm

In lecture 1 we discussed the line drawing algorithms. You know that the DDA

algorithm is an incremental scan conversion method which performs calculations at each

step using the results from the preceding step. Here we are going to discuss an accurate

and efficient raster line generating algorithm, the Bresenham's line-drawing algorithm.

This algorithm was developed by Jack E. Bresenham in 1962 at IBM. As stated above, in

this lecture, I will explain how to draw lines using the Bresenham's line-drawing

algorithm. And then show you the complete line drawing function. Before we begin on

this topic, a revision of the concepts developed earlier like scan conversion methods and

rasterization may be helpful. Once we finish this aspect, we will proceed towards

exposition of items listed in the synopsis. In particular, we will emphasize the following

(a) The working of Bresenham’s algorithm.

(b) Implementation of the Bresenham’s algorithm.

The working of Bresenham’s algorithm

The following is an explanation of how the Bresenham's line-drawing algorithm works,

rather than exact implementation.


Let us consider drawing a line on a raster grid. One thing to note here is that it is impossible to draw the

true line that we want because of the pixel spacing. Putting it in other words, there is not

enough precision for drawing true lines on a PC monitor, especially when dealing with

low resolutions. The Bresenham's line-drawing algorithm is based on drawing an

approximation of the true line. The true line is indicated in bright color, and its

approximation is indicated in black pixels.

In this example the starting point of the line is located exactly at (0, 0) and the

ending point of the line is located exactly at (9, 6). Now let us discuss the way in which this

algorithm works. First it decides which axis is the major axis and which is the minor axis.

The major axis is longer than the minor axis. On this picture illustrated above the major

axis is the X axis. Each iteration progresses the current value of the major axis (starting

from the original position), by exactly one pixel. Then it decides which pixel on the

minor axis is appropriate for the current pixel of the major axis. Now how can you

approximate the right pixel on the minor axis that matches the pixel on the major axis? -

That’s what Bresenham's line-drawing algorithm is all about. And it does so by checking

which pixel's center is closer to the true line.

Now you take a closer look at the picture. The center of each pixel is marked with

a dot. The algorithm takes the coordinates of that dot and compares it to the true line. If

the span from the center of the pixel to the true line is less or equal to 0.5, the pixel is

drawn at that location. That span is more generally known as the error term.


You might think of using floating variables but you can see that the whole

algorithm is done in straight integer math with no multiplication or division in the main

loops(no fixed point math either). Now how is it possible? Basically, during each

iteration through the main drawing loop the error term is tossed around to identify the

right pixel as close as possible to the true line. Let us consider the two deltas, the

length and height of the line: dx = x1 - x0; dy = y1 - y0. This is a matter of precision, and

since we are working with integers we need to scale the deltas by 2, generating two

new values: dx2 = dx*2; dy2 = dy*2. These are the values that will be used to change the

error term. Why do we scale the deltas? Because the error term must initially be

set to 0.5, and that cannot be done using an integer. Finally, the scaled values must be

offset by either dx or dy (the original, non-scaled delta values), depending on what

the major axis is (either x or y).

The implementation of Bresenham’s algorithm

The function given below handles all lines and implements the complete Bresenham's

algorithm.

function line(x0, x1, y0, y1)

boolean steep := abs(y1 - y0) > abs(x1 - x0)

if steep then

swap(x0, y0)

swap(x1, y1)

if x0 > x1 then

swap(x0, x1)

swap(y0, y1)

int deltax := x1 - x0

int deltay := abs(y1 - y0)

real error := 0

real deltaerr := deltay / deltax


int y := y0

if y0 < y1 then ystep := 1 else ystep := -1

for x from x0 to x1

if steep then plot(y,x) else plot(x,y)

error := error + deltaerr

if error ≥ 0.5

y := y + ystep

error := error - 1.0

Note:-To draw lines with a steeper slope, we take advantage of the fact that a steep line

can be reflected across the line y=x to obtain a line with a small slope. The effect is to

switch the x and y variables throughout, including switching the parameters to plot.
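The integer-only formulation described earlier (scaled deltas, so the 0.5 threshold becomes an integer comparison) can be sketched in Python. This is one common way to realize that scheme; the function name and the returned pixel list are our own conventions:

```python
def bresenham_line(x0, y0, x1, y1):
    """Integer-only Bresenham line: the error term is kept in units of
    2*dx, so the test 'error >= 0.5' becomes 'error >= dx'."""
    pixels = []
    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:                         # reflect a steep line across y = x
        x0, y0, x1, y1 = y0, x0, y1, x1
    if x0 > x1:                       # always draw left to right
        x0, x1 = x1, x0
        y0, y1 = y1, y0
    dx = x1 - x0
    dy = abs(y1 - y0)
    dy2 = 2 * dy                      # scaled delta added each step
    error = 0
    ystep = 1 if y0 < y1 else -1
    y = y0
    for x in range(x0, x1 + 1):
        # undo the reflection when plotting a steep line
        pixels.append((y, x) if steep else (x, y))
        error += dy2
        if error >= dx:               # equivalent to error/(2*dx) >= 0.5
            y += ystep
            error -= 2 * dx           # equivalent to subtracting 1.0
    return pixels

# the example from the text: start (0, 0), end (9, 6)
bresenham_line(0, 0, 9, 6)
```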

2.3)2D Transformations

In lecture #1 we have discussed that the end product of computer graphics is a

picture. In its broadest sense you know that a picture means any collection of lines,

points, text, etc. that can be displayed on a graphics device. Also, a graphics system

should allow the programmer to define pictures that include a variety of transformations.

So let’s discuss about two dimensional transformations. As stated above, in this lecture,

we plan to explore the transformation principles, the different forms of two dimensional

transformations and their matrix representations. Before we begin on this topic, a revision

of the concepts developed earlier in your Engineering Mathematics classes like vector

calculus, trigonometry and matrices may be helpful. Once we finish this aspect, we will

proceed towards exposition of items listed in the synopsis. In particular, we will

emphasize the following

(a) Transformation Principles

(b) The coordinate systems

(c) Different forms of transformation

i. Translation


ii. Rotation

iii. Scaling

(d) The Matrix representations

(e) The concatenation process

Transformation Principles

You can say that transformations are a fundamental part of computer graphics.

Transformations are used to position objects, to shape objects, to change viewing

positions, and even to change how something is viewed (e.g. the type of perspective that

is used). Putting it in other words, a transformation is a function that maps every position

(x, y) into a new position (x', y'). Instead of applying the transformation to every point in

every line that makes up the object, we simply apply the function to the object's vertices

and then draw new lines between the resulting new endpoints.

Two aspects of the formulation of transformation should be emphasized:

• A transformation is a single mathematical entity and as such can be denoted by a

single name or symbol.

• Two transformations can be combined, or concatenated, to yield a single

transformation with the same effect as the sequential application of the original

two. Thus transformation A might be a translation and transformation B a scaling;

the concatenation property allows us to determine a transformation C = AB

whose effect is to translate and then scale.

Each of the transformations is used to generate a new point (x’, y’) from the coordinates

of a point (x, y) in the original picture description. If the original definition includes a

line, it suffices to apply the transformation to the endpoints of the line and display the

line between the two transformed endpoints.


2.5)World and Modeling Coordinates

Before going into details let us discuss on 3D coordinate systems. As you all know in 3D

graphics, one uses a coordinate system in order to represent points in space by a series of

numbers. Usually Cartesian coordinates are used, as these are the easiest to work with. It

defines three straight fixed perpendicular lines called axes (usually referred to as X, Y,

and Z). A point in 3D can be described by three numbers, which indicate the distance of

the point from each axis. This is represented by an ordered triple: (x,y,z). However, there

is not just one Cartesian coordinate system. There are many, which have their origins in

different places and their axes aligned with different things.

World Coordinates

The world coordinate system forms the "base" on which you can think of all other

coordinates being defined. Usually, its origin is at the center of your game world,

and its axes might be aligned with directions like north/south, east/west, and

up/down. (For example, it is very common for the X axis to be east, the Y axis to

be north, and the Z axis to be up. Another way is for X to be east, Y to be up, and

Z to be south.) This is the coordinate system in which most of your level

geometry - such as rooms, hallways, and landscapes - will probably be defined.

Modeling coordinates

Modeling coordinates form the space in which all drawing primitives do their drawing. The user can

select the position and orientation of the modeling space with regard to the world

space by means of translations, rotations, scales, or generalized transformations.

The relation between modeling coordinates and world coordinates is determined

by the modeling matrix. Modeling coordinates are a useful conceptual device

when drawing complex or repetitive scenes. For instance, a paper clip can be

defined once in modeling coordinates, and then drawn hundreds of times by

moving the modeling coordinate system around in world space.

2.6)Different types of Transformation


There are three basic 2D Transformation functions:

• Translation

• Scaling

• Rotation

Now let’s consider the figures described below. Here we are transforming an object defined

in modeling coordinates into the world coordinate system through a sequence of scale, rotate

and translate operations.

[Figure 1: An object defined in modeling coordinates, to be placed in world coordinates]

[Figure 2: World coordinates before the transformations]

[Figure 3: The object at its initial location (0, 0) with the x- and y-axes aligned]

Let’s look at this in detail.

[Figure 4: Scaling transformation applied (Scale 0.3, 0.3)]

[Figure 5: Rotating the object by 90° (Rotate -90)]

[Figure 6: Translating the object to a new position (Translate 5, 3)]

Translation

Translations with 2 dimensional points are not very difficult. All we need to do is

determine how much we want to move the object in the x and y direction, and add those

variables to the point's x and y respectively

A translation, that is, a shift by Tx in the x direction and Ty in the y direction, is given by

x' = x + Tx

y' = y + Ty

Tx and Ty are the translation amounts in the x and y directions respectively. For

example, in the figure described below, an object with end points (4, 5) and (7, 5) has been

shifted to a new position (7, 1) and (10, 1) by the translation Tx = 3, Ty = -4.
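This shift can be sketched in Python (the function name is our own):

```python
def translate(points, tx, ty):
    """Shift every vertex by (Tx, Ty): x' = x + Tx, y' = y + Ty."""
    return [(x + tx, y + ty) for (x, y) in points]

# the example above: (4, 5) and (7, 5) shifted by Tx = 3, Ty = -4
translate([(4, 5), (7, 5)], 3, -4)  # -> [(7, 1), (10, 1)]
```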


Scaling

A scaling by Sx in the x direction and Sy in the y directions about the origin is

x' = Sx x

y' = Sy y

If Sx and Sy are not equal this results in a stretching along the axis of the larger scale

factor. To scale about a particular point, first translate to the origin, scale, and translate

back to the original position. For example, to scale about the point (x0,y0)

x' = x0 + Sx ( x - x0 )

y' = y0 + Sy ( y - y0 )

In other words scaling a coordinate means multiplying each of its components by a

scalar. Uniform scaling means this scalar is the same for all components. For example,

we have an object initially at (4, 5) and (7, 5). A scaling factor of ¼ is applied to this

object. Look at the figure given below


Note that the house changes position, since the scaling is about the origin. If scaling were

uniform, the amount of scaling in each dimension would be equal.
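Scaling about an arbitrary point, as described above (translate to the origin, scale, translate back), can be sketched in Python; the function name is our own:

```python
def scale_about(points, sx, sy, x0=0.0, y0=0.0):
    """Scale by (Sx, Sy) about (x0, y0): x' = x0 + Sx(x - x0),
    y' = y0 + Sy(y - y0).  With the defaults this scales about the origin."""
    return [(x0 + sx * (x - x0), y0 + sy * (y - y0)) for (x, y) in points]

# the example above: uniform scaling factor 1/4 about the origin
scale_about([(4, 5), (7, 5)], 0.25, 0.25)  # -> [(1.0, 1.25), (1.75, 1.25)]
```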

Rotation

Rotation about the origin by an angle A in a clockwise direction is

x' = x cos(A) + y sin(A)

y' = y cos(A) - x sin(A)

To rotate about a particular point apply the same technique as described for scaling,

translate the coordinate system to the origin, rotate, and the translate back. Putting it in

other words to rotate about another point, first translate the point to the origin, rotate, and

then translate back.
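Rotation about an arbitrary point (translate the point to the origin, rotate, translate back) can be sketched in Python; the function name is our own, and angles are counterclockwise in radians:

```python
import math

def rotate_about(points, theta, x0=0.0, y0=0.0):
    """Rotate counterclockwise by theta (radians) about (x0, y0):
    translate (x0, y0) to the origin, rotate, translate back."""
    c, s = math.cos(theta), math.sin(theta)
    return [(x0 + (x - x0) * c - (y - y0) * s,
             y0 + (x - x0) * s + (y - y0) * c)
            for (x, y) in points]

rotate_about([(1, 0)], math.pi / 2)  # approximately (0, 1)
```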

The following transforms perform rotation about the origin, where the angle θ is

measured counterclockwise for positive angles. You should be able to reformulate the

rotational transformations for negative angles as well.

Use the identities cos(-θ) = cos θ and sin(-θ) = -sin θ.

Proof:

Consider a point (x, y) at a distance r from the origin, making an angle Φ with the x axis,

and rotate it counterclockwise by θ to (x', y').

By simple trigonometry:

x = r cos Φ and y = r sin Φ

x’ = r cos(θ + Φ) = r cos Φ cos θ – r sin Φ sin θ

y’ = r sin(θ + Φ) = r cos Φ sin θ + r sin Φ cos θ

By substitution

x’ = x cos θ – y sin θ


y’ = x sin θ + y cos θ

Matrix representation of Transformations

The matrix representations of the different transformations are given below. Using

homogeneous coordinates and the row-vector convention (x' y' 1) = (x y 1) * M, we have:

Translation

(x' y' 1) = (x y 1) * ( 1    0   0)
                      ( 0    1   0)
                      (Tx   Ty   1)

Scaling

(x' y' 1) = (x y 1) * (Sx   0   0)
                      ( 0  Sy   0)
                      ( 0   0   1)

Rotation (counterclockwise by angle q)

(x' y' 1) = (x y 1) * ( cos q   sin q   0)
                      (-sin q   cos q   0)
                      (   0       0     1)

Concatenation

We have now discussed the different types of transformations. Is it possible to

combine one transformation with another? The answer is yes. We can combine a sequence

of transformations into one transformation. This process is called concatenation. Let us look

again at transformations in terms of vectors.


Scaling: x' = x * Sx, y' = y * Sy, which can be written as:

(x' y') = (x y) * (Sx 0)

(0 Sy)

Rotation: x' = x cos q - y sin q, y' = y cos q + x sin q, which can be written as:

(x' y') = (x y) * ( cos q sin q)

(-sin q cos q)

So if we wanted to scale and then rotate the object we could do the following:

(x' y') = (x y) * ( Sx 0 )

( 0 Sy )

(x" y") = (x' y') * ( cos q sin q )

(-sin q cos q)

But this is the same as:

(x" y") = (x y) * (Sx 0) * ( cos q sin q )

(0 Sy) (-sin q cos q )

= (x y) * ( Sx * cos q Sx * sin q)

(-Sy * sin q Sy * cos q)

Hence, we can concatenate the Scaling and Rotation matrices, and then multiply the old

points by the resultant matrix.
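The claim that the concatenated matrix gives the same result as applying the two transformations in sequence can be checked with a small sketch (plain Python, using the row-vector convention above; all names are our own):

```python
import math

def mat_mul(a, b):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(v, m):
    """Row vector (x y) times a 2x2 matrix, as written in the text."""
    return [v[0] * m[0][0] + v[1] * m[1][0],
            v[0] * m[0][1] + v[1] * m[1][1]]

q = math.pi / 2
S = [[2, 0], [0, 3]]                      # scale with Sx = 2, Sy = 3
R = [[math.cos(q), math.sin(q)],
     [-math.sin(q), math.cos(q)]]         # counterclockwise rotation by q
SR = mat_mul(S, R)                        # concatenation: scale, then rotate

# applying SR once equals applying S and then R
apply([1, 1], SR)
```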


2.6)Scan converting a circle

Circles have the property of being highly symmetrical, which is handy when it comes to

drawing them on a display screen.

We know that there are 360 degrees in a circle. First we see that a circle is symmetrical

about the x axis, so only the first 180 degrees need to be calculated. Next we see that it's

also symmetrical about the y axis, so now we only need to calculate the first 90 degrees.

Finally we see that the circle is also symmetrical about the 45 degree diagonal axis, so we

only need to calculate the first 45 degrees

We only need to calculate the values on the border of the circle in the first octant. The

other values may be determined by symmetry. Assume a circle of radius r with center at

(0,0).

Procedure Circle_Points(x,y: Integer);

Begin

Plot(x,y);

Plot(y,x);

Plot(y,-x);


Plot(x,-y);

Plot(-x,-y);

Plot(-y,-x);

Plot(-y,x);

Plot(-x,y)

End;

2.7) The Bresenham’s Method of scan conversion

Bresenham's circle algorithm calculates the locations of the pixels in the first 45 degrees.

It assumes that the circle is centered on the origin. So for every pixel (x, y) it calculates,

we draw a pixel in each of the eight octants of the circle. This is done until the value

of the x coordinate becomes greater than that of the y coordinate.

PutPixel(CenterX + X, CenterY + Y)

PutPixel(CenterX + X, CenterY - Y)

PutPixel(CenterX - X, CenterY + Y)

PutPixel(CenterX - X, CenterY - Y)

PutPixel(CenterX + Y, CenterY + X)

PutPixel(CenterX + Y, CenterY - X)

PutPixel(CenterX - Y, CenterY + X)

PutPixel(CenterX - Y, CenterY - X)

The Bresenham’s Circle-Drawing Algorithm

The algorithm is given as follows:

Given a radius for the circle we perform this initialization:

d := 3 - (2 * RADIUS)

x := 0


y := RADIUS

Now for each pixel we do the following operations:

Draw the 8 circle pixels

if d < 0 then

d := d + (4 * x) + 6

else

begin

d := d + 4 * (x - y) + 10;

y := y - 1;

end;

x := x + 1
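A Python sketch of the complete routine, combining the initialization, the decision-variable update and the eight-way symmetry (the function name and the returned pixel set are our own conventions):

```python
def bresenham_circle(cx, cy, radius):
    """Bresenham's circle: walk one octant and mirror each (x, y)
    into all eight octants around the center (cx, cy)."""
    pixels = set()
    d = 3 - 2 * radius
    x, y = 0, radius
    while x <= y:
        # the 8 symmetric circle pixels for the current (x, y)
        for px, py in ((x, y), (x, -y), (-x, y), (-x, -y),
                       (y, x), (y, -x), (-y, x), (-y, -x)):
            pixels.add((cx + px, cy + py))
        if d < 0:
            d += 4 * x + 6
        else:
            d += 4 * (x - y) + 10
            y -= 1
        x += 1
    return pixels

pts = bresenham_circle(0, 0, 5)
```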

2.8)Mid point circle algorithm

In midpoint circle algorithm, we sample at unit intervals and determine the closest pixel

position to the specified circle path at each step. For a given radius r and screen center

position (xc, yc), this algorithm first calculates pixel positions around a circle path centered

at the coordinate origin (0, 0). Then each calculated position (x, y) is moved to its proper

position by adding xc to x and yc to y. Along the circle section from x = 0 to x = y in the

first quadrant, the slope of the curve varies from 0 to -1. Therefore, we can take steps in

the positive x direction over this octant and use a decision parameter to determine which

of the two possible y positions is closer to the circle path at each step. Positions of the

other seven octants are then obtained by symmetry.

f_circle(x, y) = x² + y² – r²

Any point on the boundary of the circle with radius r satisfies the equation f_circle(x, y) = 0. If the point is in the interior of the circle, the circle function is negative; if the point is outside the circle, the circle function is positive. This test is performed for the midpoint between pixels near the circle path at each sampling step.


Assuming we have just plotted the pixel at (Xk, Yk), we next need to determine whether the pixel at position (Xk + 1, Yk) or the one at (Xk + 1, Yk - 1) is closer to the circle. For that we evaluate the circle function at the midpoint between these two pixel positions:

Pk = f_circle(Xk + 1, Yk - ½)

If Pk < 0, this midpoint is inside the circle and the pixel on scan line Yk is closer to the circle boundary; otherwise the midpoint is outside or on the circle and we select the pixel on scan line Yk - 1. Successive decision parameters are obtained using incremental calculations. The initial decision parameter is obtained by evaluating the circle function at the starting position (X0, Y0) = (0, r):

P0 = f_circle(1, r - ½)
   = 1 + (r - ½)² - r²
   = 5/4 - r

The algorithm is given as follows:

Initialize x = 0, y = r and p = 1 - r
While x <= y, do the following:
    Plot the pixel (x, y) and its symmetric counterparts in the other octants
    If (p < 0)
        p = p + 2x + 3
    Else
        p = p + 2(x - y) + 5
        Decrement the value of y
    Increment the value of x
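A minimal Python sketch of the steps above, including the eight-way symmetric plotting, might look as follows; the function name and the returned pixel set are assumptions for illustration.

```python
def midpoint_circle(cx, cy, r):
    """Pixels on a circle of radius r centred at (cx, cy), using the
    initial parameter p = 1 - r and the 2x+3 / 2(x-y)+5 increments."""
    pixels = set()
    x, y, p = 0, r, 1 - r
    while x <= y:
        # plot (x, y) and its seven symmetric counterparts
        for px, py in ((x, y), (x, -y), (-x, y), (-x, -y),
                       (y, x), (y, -x), (-y, x), (-y, -x)):
            pixels.add((cx + px, cy + py))
        if p < 0:
            p += 2 * x + 3
        else:
            p += 2 * (x - y) + 5
            y -= 1
        x += 1
    return pixels
```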


2.9)Clipping

A clipping algorithm is a procedure that identifies those portions of a picture that are either inside or outside of a specified region of space. The region against which an object is clipped is called the clip window. The different types of clipping are:

1. Point Clipping
2. Line Clipping
3. Area Clipping
4. Curve Clipping
5. Text Clipping

LINE CLIPPING METHODS

A line clipping method involves several parts. First, we test a given segment to determine whether it lies completely inside the clipping window. If it does not, we try to determine whether it lies completely outside the window. Finally, if we cannot identify the line as completely inside or completely outside, we must perform intersection calculations with one or more clipping boundaries. The different line clipping methods are:

1. Cohen-Sutherland line clipping
2. Liang-Barsky line clipping
3. Nicholl-Lee-Nicholl line clipping
4. Line clipping using nonrectangular clip windows
5. Splitting concave polygons

For a line segment with endpoints (x1, y1) and (x2, y2), with one or both endpoints outside the clipping rectangle, the parametric equations are

x = x1 + u(x2 - x1)
y = y1 + u(y2 - y1),  0 ≤ u ≤ 1

Cohen-Sutherland line clipping

Each line endpoint is assigned a four-bit region code according to which of the nine regions, formed by extending the window boundaries, it lies in:

1001 | 1000 | 1010
-----+------+-----
0001 | 0000 | 0010
-----+------+-----
0101 | 0100 | 0110

In this code,


bit 1: left, bit 2: right, bit 3: below, bit 4: above.

Intersection points with a clipping boundary can be calculated using the slope-intercept form of the line equation.

Bit values in the region code are determined by comparing the endpoint coordinate values (x, y) to the clip boundaries. Bit 1 is set to 1 if x < xwmin; the other three bit values are determined using similar comparisons.
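Putting the region codes to work, a sketch of Cohen-Sutherland clipping in Python could look like this. The bit constants mirror the bit assignments above (bit 1 = left, bit 2 = right, bit 3 = below, bit 4 = above); the function names and the tuple return convention are hypothetical.

```python
INSIDE, LEFT, RIGHT, BELOW, ABOVE = 0, 1, 2, 4, 8  # bits 1-4

def region_code(x, y, xwmin, ywmin, xwmax, ywmax):
    """Four-bit region code of a point relative to the clip window."""
    code = INSIDE
    if x < xwmin:
        code |= LEFT
    elif x > xwmax:
        code |= RIGHT
    if y < ywmin:
        code |= BELOW
    elif y > ywmax:
        code |= ABOVE
    return code

def cohen_sutherland_clip(x1, y1, x2, y2, xwmin, ywmin, xwmax, ywmax):
    """Clip segment (x1,y1)-(x2,y2); return clipped endpoints or None."""
    c1 = region_code(x1, y1, xwmin, ywmin, xwmax, ywmax)
    c2 = region_code(x2, y2, xwmin, ywmin, xwmax, ywmax)
    while True:
        if c1 == 0 and c2 == 0:        # trivially accepted: both inside
            return (x1, y1), (x2, y2)
        if c1 & c2:                    # trivially rejected: same outside side
            return None
        c = c1 or c2                   # pick an endpoint that is outside
        if c & ABOVE:
            x = x1 + (x2 - x1) * (ywmax - y1) / (y2 - y1); y = ywmax
        elif c & BELOW:
            x = x1 + (x2 - x1) * (ywmin - y1) / (y2 - y1); y = ywmin
        elif c & RIGHT:
            y = y1 + (y2 - y1) * (xwmax - x1) / (x2 - x1); x = xwmax
        else:  # LEFT
            y = y1 + (y2 - y1) * (xwmin - x1) / (x2 - x1); x = xwmin
        if c == c1:
            x1, y1 = x, y
            c1 = region_code(x1, y1, xwmin, ywmin, xwmax, ywmax)
        else:
            x2, y2 = x, y
            c2 = region_code(x2, y2, xwmin, ywmin, xwmax, ywmax)
```

Each pass either accepts, rejects, or pulls one outside endpoint onto a window boundary, so the loop terminates after at most a few iterations.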

Nicholl-Lee-Nicholl line clipping

This algorithm avoids multiple clipping of an individual line segment. In the Cohen-Sutherland method, for example, multiple intersections may be calculated along the path of a single line before the intersection on the clipping rectangle is located. These extra intersection calculations are eliminated in the NLN algorithm by carrying out more region testing before intersection positions are calculated. Compared to Cohen-Sutherland, this algorithm performs fewer comparisons and divisions. The trade-off is that the NLN algorithm can only be applied to two-dimensional clipping, whereas the Cohen-Sutherland method is easily extended to three-dimensional scenes.

POLYGON CLIPPING METHODS

The various polygon clipping methods are:

1. Sutherland-Hodgman polygon clipping
2. Weiler-Atherton polygon clipping
3. Other polygon-clipping algorithms

Sutherland-Hodgman polygon clipping

We can correctly clip a polygon by processing the polygon boundary as a whole against each window edge. First clip the polygon against the left rectangle boundary to produce a new sequence of vertices. The new set of vertices is then successively passed to a right boundary clipper, a bottom boundary clipper and a top boundary clipper.

There are four cases to consider when processing each polygon edge:


1. If the first vertex is outside the window and the second is inside, both the intersection point of the polygon edge with the window boundary and the second vertex are added to the output vertex list.

2. If both vertices are inside the window boundary, only the second vertex is added to the output vertex list.

3. If the first vertex is inside and the second is outside, only the edge intersection is added to the output vertex list.

4. If both vertices are outside the window, nothing is added to the output vertex list.
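These four rules translate into one boundary-clipper pass; a full clip would apply such a pass against the left, right, bottom and top boundaries in turn. In this Python sketch, `clip_polygon` and `clip_left` are hypothetical names, and only the left-boundary clipper is shown.

```python
def clip_polygon(vertices, inside, intersect):
    """One boundary pass applying the four output rules above.
    `inside` tests a vertex; `intersect` returns the boundary crossing."""
    out = []
    for i in range(len(vertices)):
        v1, v2 = vertices[i - 1], vertices[i]   # polygon edge v1 -> v2
        if inside(v2):
            if not inside(v1):                  # rule 1: outside -> inside
                out.append(intersect(v1, v2))
            out.append(v2)                      # rules 1 and 2
        elif inside(v1):                        # rule 3: inside -> outside
            out.append(intersect(v1, v2))
        # rule 4: both outside -> add nothing
    return out

def clip_left(vertices, xwmin):
    """Clip a polygon against the left window boundary x = xwmin."""
    def inside(p):
        return p[0] >= xwmin
    def intersect(p, q):
        u = (xwmin - p[0]) / (q[0] - p[0])
        return (xwmin, p[1] + u * (q[1] - p[1]))
    return clip_polygon(vertices, inside, intersect)
```

The right, bottom, and top clippers differ only in their `inside` test and intersection formula, so the same `clip_polygon` routine can be reused for all four passes.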

Weiler-Atherton polygon clipping

This method was developed for identifying visible surfaces, and it can be applied to arbitrary polygon clipping regions. For clockwise processing of polygon vertices, we use the following rules:

For an outside-to-inside pair of vertices, follow the polygon boundary.

For an inside-to-outside pair of vertices, follow the window boundary in a clockwise direction.


University Questions

(a) Explain in detail the principle of 2D transformations. (12 marks) [F 3794 Computer Graphics (R,T) November 2005]

(b) Prove that two successive rotations are additive. (4 marks) [F 3879 Computer Graphics (R) November 2005]

(c) Prove that rotation and uniform scaling is a commutative pair of operations. (4 marks) [F 3879 Computer Graphics (R) November 2005]

(d) What do you mean by scan conversion? Explain its significance in display devices. (4 marks) [F 3879 Computer Graphics (R) November 2005]

(e) Discuss why incremental methods are used in line drawing in graphic displays. (6 marks) [Computer Graphics (R) November 2001]

(f) What is real time scan conversion? Explain. (8 marks) [Computer Graphics (R) November 2001]

Summary

In this lecture we have discussed modeling transformation principles. We have seen that to change the size of an object we can use the scaling transformation, and to move or rotate an object we use the translation and rotation transformations. We have also discussed how these transformations are represented in matrix format, and we concluded by noting how they can be concatenated into a single transformation.


Module 3

3.1)3D OBJECT REPRESENTATION

One of the major concepts in computer graphics is modeling of objects. Modeling of

objects means describing the objects in terms of their geometric properties (like shape

and size) and how they interact with light (reflect, transmit).A graphics system usually

uses a set of primitives or geometric forms to model variety of objects . Geometric forms

that are often used as primitives include points, lines, polylines, polygons and polyhedra.

More complex geometric form includes curves, curved surface patches, and quadratic

surfaces.

SIMPLE GEOMETRIC FORMS

Points and lines

The basic building blocks of computer graphics are points and lines. How do we specify a point? We do it by specifying its coordinates in three- or two-dimensional space. For example, a point can be represented as P(x, y, z). How do we specify a line segment? It can be done by specifying the two end points P1(x1, y1, z1) and P2(x2, y2, z2).

Polylines

What is a polyline? It is a chain of connected line segments, specified by giving the vertices or nodes that define the segments. For example, a polyline can be defined as

P0, P1…, Pn. The first vertex of the polyline is called the starting point or the initial point

and the last vertex is called the terminal point or the final point.

Polygons

How do you specify a polygon? A polygon can be treated as a closed polyline, that is, a polyline where the initial and the terminal vertices coincide. It is specified by its vertex list as P0, P1, ..., Pn, P0. The edges of the polygon are specified as the line segments P0P1,


P1P2, ..., PnP0. What is a planar polygon? A polygon in which all vertices lie on the same plane is called a planar polygon.

WIRE FRAME MODELS

A wire frame model consists of edges, vertices, and polygons. In a wire frame the

vertices are connected by edges, and the polygons are sequences of vertices and edges.

The edges may be curves or straight line segments. If the edges are straight line segments, then the wire frame model is called a polygonal net or polygonal mesh.

Wire frame models are used in engineering applications. Their advantage is that they are easy to construct, and they are easy to clip and manipulate if they are composed of straight lines. But they cannot be used to build realistic models: for modeling highly curved objects we would be forced to use a very large number of polygons to achieve the illusion of roundness and smoothness.

Polyhedron

A polyhedron is a closed polygonal net in which each polygon is planar. A closed

polygonal net means it encloses a definite volume. The polygons are called the faces of

the polyhedron. In modeling, polyhedra are quite often treated as solid objects (blocks).

POLYGON SURFACES

The boundary of a three-dimensional graphics object can be represented by a set of polygon surfaces that enclose the object interior. The object descriptions are stored as sets of surface polygons. Since all polygon surfaces can be described with linear equations, such representations are simple to process. A polygon representation of a polyhedron precisely defines the surface features of the object.

Polygon Tables


How do you specify a polygon surface? This can be done by using a set of vertex coordinates and associated attribute parameters. We use polygon tables to store such

information. These tables are used in processing, display and manipulation of objects in a

scene. The polygon tables can be categorized into two: geometric tables and attribute

tables. The geometric table contains vertex coordinates and parameters to identify the

spatial orientation of the polygon surfaces. The attribute table contains information about

the degree of transparency of the object and its surface reflectivity and texture

characteristics.

Now let’s see how the geometric data is stored. This can be done by creating three lists:

1. Vertex table

2. Edge table

3. Polygon table

The coordinate value for each vertex in the object is stored in the vertex table. The edge

table contains the list of vertices that form each edge. The polygon table contains the list of edges that form each polygon surface. Now how are these lists interrelated? There is a back pointer from the edge table to the vertex table, and similarly another pointer that points from the polygon table back to the edge table.

Now how do you construct a polyhedron from these three tables? First, the polygon surface table is searched to identify the edges that compose each polygon. Next we need the details of those edges, which are stored in the edge table: using the pointers from the polygon table, we move to the edge table to locate the vertices that compose the edges of the polygon. Finally we need the information about each vertex. These details are stored in the vertex table; by following the pointers from the edge table to the vertex table, we obtain the x, y, and z coordinates that compose each vertex.
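As a toy illustration of these three linked tables, consider a hypothetical object with four vertices, five edges, and two triangular surfaces sharing edge E3 (all names and coordinates invented for the example):

```python
# Geometric data split into vertex, edge and polygon tables; edges index
# into the vertex table, and surfaces index into the edge table,
# mirroring the back pointers described above.
vertex_table = {
    "V1": (0.0, 0.0, 0.0),
    "V2": (1.0, 0.0, 0.0),
    "V3": (1.0, 1.0, 0.0),
    "V4": (0.0, 1.0, 0.0),
}
edge_table = {
    "E1": ("V1", "V2"),
    "E2": ("V2", "V3"),
    "E3": ("V3", "V1"),   # shared between surfaces S1 and S2
    "E4": ("V3", "V4"),
    "E5": ("V4", "V1"),
}
polygon_table = {
    "S1": ("E1", "E2", "E3"),
    "S2": ("E3", "E4", "E5"),
}

def surface_vertices(surface):
    """Follow polygon -> edge -> vertex pointers to recover the
    coordinates that make up one surface."""
    coords = []
    for edge in polygon_table[surface]:
        for v in edge_table[edge]:
            if vertex_table[v] not in coords:
                coords.append(vertex_table[v])
    return coords
```

Because E3 is stored once and referenced by both surfaces, the shared edge is drawn only once, which is exactly the saving the edge table provides.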


We can always add extra information to the data tables for faster information extraction.

For example we could expand the edge table to include forward pointers into the polygon

table so that common edges between the polygons could be identified more rapidly.

Similarly, the vertex table could be expanded so that the vertices are cross-referenced to corresponding edges. We can also store additional information in the data tables, such as the slope for each edge and the coordinate extent of each polygon.

We can eliminate the edge table by letting the polygon table reference the vertices

directly, but we can run into problems, such as drawing some edges twice, because we

don't realize that we have visited the same set of points before, in a different polygon. We

could go even further and eliminate the vertex table by listing all the coordinates

explicitly in the polygon table, but this wastes space because the same points appear in the

polygon table several times.

Let us consider the example of a polyhedron with two surfaces. The three tables that describe the polyhedron are shown in the figure.


Now what about the spatial orientation of the individual surfaces of the object? Often in the graphics pipeline we need to know the orientation of an object. This information is obtained from the equation of the plane that contains the polygon.

The equation of the plane can be expressed as Ax + By + Cz + D = 0, where (x, y, z) is any point on the plane and the coefficients A, B, C, and D are constants defining the spatial properties of the plane. The values of A, B, C, and D can be obtained by solving a set of three plane equations using the coordinate values of three non-collinear points in the plane. For this purpose we select three consecutive polygon vertices (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) and solve the simultaneous linear plane equations

(A/D)xk + (B/D)yk + (C/D)zk = -1,  k = 1, 2, 3

Then we can use the equation to determine whether a point is on the inside or outside of

the plane formed by this polygon:

Ax + By + Cz + D < 0 : the point (x, y, z) is inside the surface
Ax + By + Cz + D > 0 : the point (x, y, z) is outside the surface

Normal to the plane

The coefficients A, B, and C can also be used to determine a vector normal to the plane

of the polygon. This vector, called the surface normal, is given simply by:

N = (A, B, C).

If we specify the vertices of a polygon counterclockwise when viewing the outer side, in

a right-handed coordinate system, the surface normal N will point from inside to outside.

You can verify this from an alternate definition for N, based on three vertices:


N = (V2 - V1) x (V3 - V1) = (A, B, C)
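The plane coefficients and the surface normal can be computed directly from three vertices using this cross-product definition. The sketch below (with an assumed function name) returns (A, B, C, D) with N = (A, B, C):

```python
def plane_from_points(v1, v2, v3):
    """Plane coefficients (A, B, C, D) for the plane through three
    non-collinear vertices, with (A, B, C) = (v2 - v1) x (v3 - v1)."""
    ax, ay, az = (v2[i] - v1[i] for i in range(3))   # edge vector v2 - v1
    bx, by, bz = (v3[i] - v1[i] for i in range(3))   # edge vector v3 - v1
    A = ay * bz - az * by                            # cross product
    B = az * bx - ax * bz
    C = ax * by - ay * bx
    D = -(A * v1[0] + B * v1[1] + C * v1[2])         # plane passes through v1
    return A, B, C, D
```

For the counterclockwise triangle (0,0,0), (1,0,0), (0,1,0) this yields N = (0, 0, 1), pointing from inside to outside as described, and the sign of Ax + By + Cz + D classifies any test point against the surface.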

Polygon Meshes

Objects can be modeled by specifying polygon surfaces individually. But when object surfaces are to be tiled, it is more convenient to specify the surface faces with a mesh function. One type of polygon mesh is the triangle strip. This function produces n - 2 connected triangles given the coordinates for n vertices, whereas a triangle list will produce only n/3 triangles from n vertices.

Another similar function is the quadrilateral mesh, which generates a mesh of (n-1) by

(m-1) quadrilaterals, given the coordinates for an n by m array of vertices. A

quadrilateral mesh containing 12 quadrilaterals constructed from a 5 by 4 input vertex

array is shown in the figure below

3.2)3D DISPLAY METHODS

To obtain the display of a 3D scene, we must first set up a coordinate reference. This defines a plane on which the view is displayed, referred to as the display plane. Object descriptions are transferred to the reference coordinates and projected onto the selected display plane.

The different 3D display methods are:


1. Parallel Projection

2. Perspective Projection

3. Depth Cueing

4. Visible surface and line detection

5. Surface Rendering

6. Exploded and Cutaway views

7. 3D and Stereoscopic views

3.3)CURVES AND SURFACES

You know that objects with complex shapes occur frequently in our 3D world. Therefore

special techniques have to be developed to model these objects so that we get realistic

images. These objects can be approximated as plane-faced polyhedra. For example, the polyhedral approximation of a coffee cup may contain 1000 faces, and in that case it will be difficult to generate and modify the shape of the cup. We need a more direct

representation of shapes which is tractable both to the computer and to the person trying

to make modifications on the shape.

Representation of curves

Let’s discuss some of the crucial properties that our representation must have. These

properties are important in designing the curves.

1. Control Points

2. Multiple value

3. Axis independence

4. Global and local control

5. Variation Diminishing property

6. Versatility

7. Order of Continuity


Control Points

How do you give shape to the curve? How do you control its shape? The answer to both questions is control points. Control points are a set of coordinate positions that indicate the general shape of the curve: the curve either passes through the control points or is pulled toward them, so it is through the control points that the shape of the curve is controlled. Examples are given below:

The line connecting the control points in order is called the control graph, control polygon, or characteristic polygon. If the curve passes exactly through each of the control points, then the curve is said to interpolate the set of control points. If the curve does not necessarily pass through the control points, then it is said to approximate the set of control points. Examples are given below:


Multiple values: The curve can have multiple values; it is not the graph of a single-valued function. A curve can be multivalued with respect to all coordinate systems.

Axis independence: The shape of the curve must not change when the control points are measured in a different coordinate system. For example, if the control points are rotated 60°, the curve should rotate 60° but the shape of the curve should not change.

Global and local control: We know that control points are used to control the shape of

the curve. When we try to manipulate or change the position of a control point the shape

of the curve changes. Manipulating a control point can have one of two effects on the curve: either the shape of the curve is affected entirely, or the shape changes only in the region near the control point. If the shape of the curve changes entirely, the curve is said to have global control. If the shape changes only in the region near the control point, it is said to have local control.

Variation-diminishing property: Certain curves have a tendency to amplify small

irregularities in the shape outlined by the control points. But there are certain other curves

that always smooth the designer’s control points diminishing the irregularities in the

shape. A curve that oscillates about its control points is usually undesirable. But there


will not be such oscillations in curves that possess variation diminishing property. The

curve will always tend to smooth out the sequence of control points.

Versatility: The shapes of the curves formed from the control points should not be

limited. It should not lack versatility. How can we bring in versatility in curve designs?

This can be done by adding or removing control points from the framework. For

example, a straight line can be formed from two control points. This straight line can be

changed into a curve by simply adding one more control point. This third control point

can create a large number of additional shapes by just changing its location.

Order of continuity: Simple object shapes can be modeled using simple curves, but for complex shapes we need complex curves. You now know how a curve is formed from control points; let us discuss how a complex curve is formed. A complex curve can be formed by joining simple curves together: a shape that cannot be described by a single simple curve can often be described by several curves joined together.

When the curves are joined the order of continuity has to be maintained. There are three

orders of continuity defined. They are:

1. Zero Order

2. First order

3. Second order

Zero order continuity means simply that two curves meet. First order continuity requires

the curve to be tangent at the point of intersection. Second order continuity requires that

the curvatures be the same.

3.4)BEZIER METHODS

This spline approximation method was developed by P. Bezier, a French engineer, for use in the design of Renault automobile bodies.

Bezier Curves


In general a Bezier curve section can be fitted to any number of control points. The

degree of the polynomial is determined by two factors. The number of control points and

their relative position are the factors that contribute to the degree of the Bezier curve. The

degree of the Bezier curve is one less than the number of control points used. If there are

n + 1 control points then the degree of the Bezier curve will be n. For example, three

points will generate a parabola and four control points will generate a cubic curve.

As we know the main form used to model curves and surfaces is the parametric function

or vector- valued function. A point on the curve is represented as a vector:

P(u) = [ x(u) y(u) z(u)]

Bezier defines the curve P(u) in terms of the locations of the n+1 control points pi:

P(u) = Σ pi Bi,n(u),  summed over i = 0 to n,  0 ≤ u ≤ 1

where Bi,n(u) is a blending function:

Bi,n(u) = C(n,i) u^i (1-u)^(n-i)

and C(n,i) is the binomial coefficient, C(n,i) = n! / (i! (n-i)!)

Here pi is a control point, which can be written as [xi yi zi]. Coordinate by coordinate,

x(u) = Σ xk BEZk,n(u),  0 ≤ k ≤ n
y(u) = Σ yk BEZk,n(u),  0 ≤ k ≤ n
z(u) = Σ zk BEZk,n(u),  0 ≤ k ≤ n

Here there are n+1 control points as pi ranges from 0 to n. These control points along

with the blending function gives the shape to the curve. The blending function defined

influences the control points i.e. it actually blends the control point pi. The blending

functions are the key to the behaviour of Bezier curves.


Let us take as an example a Bezier curve with four control points, so i ranges from 0 to 3. The Bezier parametric function can therefore be written as

P(u) = p0 B0,3(u) + p1 B1,3(u) + p2 B2,3(u) + p3 B3,3(u)

Here each control point pi is weighted by its respective blending function.

Algorithm:

a) Compute C(n,k) for k ranging from 0 to n, where n = number of control points - 1

b) For each sample value of u, repeat steps c, d, e and f with k ranging from 0 to n

c) blend = C(n,k) * u^k * (1-u)^(n-k)

d) x = x + xk * blend

e) y = y + yk * blend

f) z = z + zk * blend

g) Join the computed points

h) End
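The algorithm can be sketched in Python as follows, evaluating one point P(u) from the blending functions; `bezier_point` and the tuple conventions are illustrative assumptions, and `math.comb` supplies the binomial coefficient C(n, i).

```python
from math import comb

def bezier_point(control_points, u):
    """Point P(u) on a Bezier curve: each control point is weighted by
    its blending function B(i,n)(u) = C(n,i) u^i (1-u)^(n-i)."""
    n = len(control_points) - 1      # degree = number of control points - 1
    point = [0.0, 0.0, 0.0]
    for i, p in enumerate(control_points):
        blend = comb(n, i) * (u ** i) * ((1 - u) ** (n - i))
        for k in range(3):           # accumulate x, y and z separately
            point[k] += p[k] * blend
    return tuple(point)
```

Sampling u from 0 to 1 and joining the resulting points traces the curve; at u = 0 and u = 1 the curve reproduces the first and last control points, matching property (a) below.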

Properties:

a) Curve always passes through first and last control points

i.e. P(0)=P0

P(1)=Pn

b) The values of the parametric first derivatives at the endpoints can be calculated from the control point coordinates as

P'(0) = -n P0 + n P1

P'(1) = -n Pn-1 + n Pn


c) The slope at the beginning of the curve is along the line joining the first two control points, and the slope at the end of the curve is along the line joining the last two control points.

d) Curve lies within a convex hull (convex polygon boundary) of the control points.

The curve is contained by the polygon.

Applications:

a) Painting and drawing packages-for drawing curves

b) CAD systems- for drawing smooth curves

c) Curve Fitting- to fit a curve along the layout of an object

d) Animation- to transform from one shape to another or to simulate a motion

e) Accelerating 3D Graphics- to simulate transformation of an object

3.5)B-Spline Curves

a) B-spline functions of order m are piecewise polynomials of degree m-1. They are C^(m-2) continuous.

b) The B-spline functions form a basis; every spline function can be written as a

linear combination of B-splines.

c) Nk,m(t) has support [tk, tk+m].

d) A B-spline curve lies within the convex hull of m consecutive control points.

e) Knots can appear multiple times within the knot vector. A knot with multiplicity

m will cause the resulting curve to interpolate the corresponding control point.

f) Affine invariance: applying an affine transformation to the control points applies the same transformation to the curve.
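The basis functions Nk,m(t) listed above can be evaluated with the Cox-de Boor recursion. The sketch below uses a plain recursive form for clarity rather than efficiency; the function name and argument order are assumptions.

```python
def bspline_basis(k, m, t, knots):
    """Cox-de Boor recursion for the B-spline basis function N(k,m)(t)
    of order m (degree m-1) over the given knot vector."""
    if m == 1:
        # order-1 basis: 1 on the half-open knot span, 0 elsewhere
        return 1.0 if knots[k] <= t < knots[k + 1] else 0.0
    value = 0.0
    d1 = knots[k + m - 1] - knots[k]
    if d1 > 0:
        value += (t - knots[k]) / d1 * bspline_basis(k, m - 1, t, knots)
    d2 = knots[k + m] - knots[k + 1]
    if d2 > 0:
        value += (knots[k + m] - t) / d2 * bspline_basis(k + 1, m - 1, t, knots)
    return value
```

The zero-denominator guards handle repeated knots, which is what allows the multiplicity behaviour described in property (e). Note that Nk,m(t) is nonzero only on the support [tk, tk+m] from property (c).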


3.6)3D TRANSFORMATIONS

3D Coordinate Space

3D graphics content in Windows Presentation Foundation (WPF) is encapsulated

in an element, Viewport3D, that can participate in the two-dimensional element structure.

The graphics system treats Viewport3D as a two-dimensional visual element like many

others in Windows Presentation Foundation (WPF). Viewport3D functions as a window

—a viewport—into a three-dimensional scene. More accurately, it is a surface on which a

3-D scene is projected. Although you can use Viewport3D with other 2-D drawing

objects in the same scene graph, you cannot interpenetrate 2-D and 3-D objects within a

Viewport3D. In the following discussion, the coordinate space described is contained by

the Viewport3D element. The Windows Presentation Foundation (WPF) coordinate

system for 2-D graphics locates the origin in the upper left of the rendering surface

(typically the screen). In the 2-D system, positive x-axis values proceed to the right and

positive y-axis values proceed downward. In the 3-D coordinate system, however, the

origin is located in the center of the screen, with positive x-axis values proceeding to the

right but positive y-axis values proceeding upward instead, and positive z-axis values

proceeding outward from the origin, toward the viewer.

Coordinate System Comparison

The space defined by these axes is the stationary frame of reference for 3-D objects in

Windows Presentation Foundation (WPF). As you build models in this space and create

lights and cameras to view them, it's helpful to distinguish this stationary frame of

reference, or "world space," from the local frame of reference you create for each model


when you apply transformations to it. Remember also that objects in world space might

look entirely different, or not be visible at all, depending on light and camera settings, but

the position of the camera does not change the location of objects in world space.

In many applications, there is also a need for altering or manipulating

displays. This can be accomplished by arranging the orientations and sizes of component

parts of the scene.

The basic transformations are

1.translation

2.rotation

3.scaling

TRANSLATION

A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another. We translate a three-dimensional point by adding translation distances tx, ty and tz to the original coordinate position (x, y, z) to move the point to a new position (x', y', z').

x' = x + tx
y' = y + ty
z' = z + tz

In matrix form, we can write as P’=T.P

Translation is a rigid body transformation that moves objects without

deformation. Similar methods are used to translate curved objects.
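In homogeneous coordinates the translation P' = T·P is a 4x4 matrix multiply, which reproduces the WPF offset example below ((2,2,2) with offset (0,1.6,1) maps to (2,3.6,3)). This Python sketch uses an assumed function name:

```python
def translate(point, tx, ty, tz):
    """Apply P' = T . P with a 4x4 homogeneous translation matrix."""
    T = [[1, 0, 0, tx],
         [0, 1, 0, ty],
         [0, 0, 1, tz],
         [0, 0, 0, 1]]
    x, y, z = point
    p = (x, y, z, 1)                 # homogeneous coordinates, w = 1
    # multiply the matrix by the column vector; drop the w row
    return tuple(sum(T[r][c] * p[c] for c in range(4)) for r in range(3))
```

Expressing translation as a matrix (rather than three additions) is what lets it be concatenated with rotation and scaling matrices into a single composite transformation.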


TranslateTransform3D moves all the points in the Model3D in the direction of the

offset vector you specify with the OffsetX, OffsetY, and OffsetZ properties. For example,

given one vertex of a cube at (2,2,2), an offset vector of (0,1.6,1) would move that vertex

(2,2,2) to (2,3.6,3). The cube's vertex is still (2,2,2) in model space, but now that model

space has changed its relationship to world space so that (2,2,2) in model space is

(2,3.6,3) in world space.

Translation with Offset

ROTATION

Rotations in three dimensions are considerably more complicated than two-dimensional rotations. In general, rotations are specified by a rotation axis and an angle. In two dimensions there is only one choice of rotation axis that leaves points in the plane. A three-dimensional rotation is applied to an object by repositioning it along a circular path about a specified axis.

To generate a rotation we specify a rotation angle θ and the position

(xr,yr,zr) of the rotation point about which the object is to be rotated. We can rotate a

model in 3-D in several different ways. A typical rotation transformation specifies an axis

and an angle of rotation around that axis. The RotateTransform3D class allows you to

define a Rotation3D with its Rotation property. You then specify Axis and Angle

99

Page 100: Module 1 Introduction to Computer Graphicsbcagalgotiasuniversity.weebly.com/uploads/2/6/4/4/26443327/vip.pdf · Module 1 Introduction to Computer Graphics 1.1) Introduction Today

Computer Graphics Dept of Computer Science & Engg, VJCET

properties on the Rotation3D, in this case an AxisAngleRotation3D, to define the

transformation. The following examples rotate a model by 60 degrees around the Y axis.

Axis-angle rotations assume rotation about the origin if a value is not

specified for the CenterX, CenterY, and CenterZ properties on RotateTransform3D. As

with scaling, it's helpful to remember that the rotation transforms the model's entire

coordinate space. If the model was not created about the origin, or has been translated

previously, the rotation might "pivot" about the origin instead of rotating in place.

A positive angle value gives an anti-clockwise rotation, and a negative value gives a clockwise rotation. The transformation can also be described as a rotation about a rotation axis that is perpendicular to the xy plane and passes through the pivot point.

Coordinate-Axes Rotations

Rotation matrices can be written for rotation around the x axis, the y axis, or the z axis (they are tabulated in the 3D transformation table later in this section). In each case,

P' = R·P

A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a composite transformation involving combination of translations and the coordinate-axes rotations.

1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the rotation axis coincides with one of the coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Apply the inverse rotation to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original position.

Properties of Rotation Matrix

1. A rotation matrix is a special orthogonal matrix

A A^T = I

where A^T is the transpose of A and I is the identity matrix.

100

Page 101: Module 1 Introduction to Computer Graphicsbcagalgotiasuniversity.weebly.com/uploads/2/6/4/4/26443327/vip.pdf · Module 1 Introduction to Computer Graphics 1.1) Introduction Today

Computer Graphics Dept of Computer Science & Engg, VJCET

2. R is normalized: the squares of the elements in any row or column sum to 1.

3. R is orthogonal: the dot product of any pair of rows or any pair of columns is 0.

4. The rows of R represent the coordinates in the original space of unit vectors along the coordinate axes of the rotated space.

5. The columns of R represent the coordinates in the rotated space of unit vectors along the axes of the original space.

How do we get these matrices? Let's look first at the rotation around the z axis.
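Under the stated assumptions (column-vector convention, angle in radians), a small sketch can build the z-axis rotation matrix and numerically confirm the orthogonality property R·R^T = I; the helper names are illustrative.

```python
import math

# Illustrative sketch: rotation about the z axis and a numeric check of
# the orthogonality property R . R^T = I.

def rotation_z(a):
    """4x4 homogeneous rotation by angle a (radians) about the z axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(row) for row in zip(*m)]

r = rotation_z(math.radians(60))      # a 60-degree rotation about z
prod = matmul(r, transpose(r))        # should be (numerically) the identity
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(4) for j in range(4))
```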

SCALING

A scaling transformation alters the size of an object. ScaleTransform3D

changes the model's scale by a specified scale vector with reference to a center point.

Specify a uniform scale, which scales the model by the same value in the X, Y, and Z

axes, to change the model's size proportionally. For example, setting the transform's

ScaleX, ScaleY, and ScaleZ properties to 0.5 halves the size of the model; setting the

same properties to 2 doubles its scale in all three axes.

ScaleVector Example

By specifying a non-uniform scale transformation—a scale transformation

whose X, Y, and Z values are not all the same—you can cause a model to stretch or

contract in one or two dimensions without affecting the others. For example, setting

ScaleX to 1, ScaleY to 2, and ScaleZ to 1 would cause the transformed model to double

in height but remain unchanged along the X and Z axes.

101

Page 102: Module 1 Introduction to Computer Graphicsbcagalgotiasuniversity.weebly.com/uploads/2/6/4/4/26443327/vip.pdf · Module 1 Introduction to Computer Graphics 1.1) Introduction Today

Computer Graphics Dept of Computer Science & Engg, VJCET

By default, ScaleTransform3D causes vertices to expand or contract about the origin

(0,0,0). If the model you want to transform is not drawn from the origin, however, scaling

the model from the origin will not scale the model "in place." Instead, when the model's

vertices are multiplied by the scale vector, the scale operation will have the effect of

translating the model as well as scaling it.

x' = x·sx
y' = y·sy
z' = z·sz

The scaling matrix is

Sx 0 0 0

0 Sy 0 0

0 0 Sz 0

0 0 0 1

P' = S·P
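A minimal sketch of the scaling transformation, using the same column-vector convention (helper names are illustrative):

```python
# Illustrative sketch: scaling about the origin, P' = S . P.

def scaling_matrix(sx, sy, sz):
    return [[sx, 0, 0, 0],
            [0, sy, 0, 0],
            [0, 0, sz, 0],
            [0, 0, 0, 1]]

def apply(m, p):
    return [sum(m[r][c] * p[c] for c in range(4)) for r in range(4)]

# A uniform scale of 0.5 halves the model in all three axes;
# a non-uniform scale (1, 2, 1) doubles the height only.
print(apply(scaling_matrix(0.5, 0.5, 0.5), [2, 2, 2, 1]))  # -> [1.0, 1.0, 1.0, 1]
print(apply(scaling_matrix(1, 2, 1), [2, 2, 2, 1]))        # -> [2, 4, 2, 1]
```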

Transformations in 3D

Translation: translate(tx, ty, tz)
  Column-vector matrix:
    1 0 0 tx
    0 1 0 ty
    0 0 1 tz
    0 0 0 1
  Row-vector matrix:
    1  0  0  0
    0  1  0  0
    0  0  1  0
    tx ty tz 1
  Properties: rigid body; orthogonal submatrix

Scale: scale(sx, sy, sz)
  Column-vector and row-vector matrices (identical):
    sx 0  0  0
    0  sy 0  0
    0  0  sz 0
    0  0  0  1
  Properties: non-rigid body

Rotation around X: rotate((1, 0, 0), a)
  Column-vector matrix:
    1 0      0       0
    0 cos(a) -sin(a) 0
    0 sin(a) cos(a)  0
    0 0      0       1
  Row-vector matrix:
    1 0       0      0
    0 cos(a)  sin(a) 0
    0 -sin(a) cos(a) 0
    0 0       0      1
  Properties: rigid body; orthogonal

Rotation around Y: rotate((0, 1, 0), a)
  Column-vector matrix:
    cos(a)  0 sin(a) 0
    0       1 0      0
    -sin(a) 0 cos(a) 0
    0       0 0      1
  Row-vector matrix:
    cos(a) 0 -sin(a) 0
    0      1 0       0
    sin(a) 0 cos(a)  0
    0      0 0       1
  Properties: rigid body; orthogonal

Rotation around Z: rotate((0, 0, 1), a)
  Column-vector matrix:
    cos(a) -sin(a) 0 0
    sin(a) cos(a)  0 0
    0      0       1 0
    0      0       0 1
  Row-vector matrix:
    cos(a)  sin(a) 0 0
    -sin(a) cos(a) 0 0
    0       0      1 0
    0       0      0 1
  Properties: rigid body; orthogonal

Shear XY: [x, y, z, 1] -> [x + shx*z, y + shy*z, z, 1]
  Column-vector matrix:
    1 0 shx 0
    0 1 shy 0
    0 0 1   0
    0 0 0   1
  Row-vector matrix:
    1   0   0 0
    0   1   0 0
    shx shy 1 0
    0   0   0 1
  Properties: non-rigid body

Shear XZ: [x, y, z, 1] -> [x + shx*y, y, z + shz*y, 1]
  Column-vector matrix:
    1 shx 0 0
    0 1   0 0
    0 shz 1 0
    0 0   0 1
  Row-vector matrix:
    1   0 0   0
    shx 1 shz 0
    0   0 1   0
    0   0 0   1
  Properties: non-rigid body

Shear YZ: [x, y, z, 1] -> [x, y + shy*x, z + shz*x, 1]
  Column-vector matrix:
    1   0 0 0
    shy 1 0 0
    shz 0 1 0
    0   0 0 1
  Row-vector matrix:
    1 shy shz 0
    0 1   0   0
    0 0   1   0
    0 0   0   1
  Properties: non-rigid body
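As a sketch of one of the shear entries in the table, the XY shear offsets x and y in proportion to z (helper names are illustrative):

```python
# Illustrative sketch: the XY shear from the table, which offsets x and y
# in proportion to z: [x, y, z, 1] -> [x + shx*z, y + shy*z, z, 1].

def shear_xy(shx, shy):
    return [[1, 0, shx, 0],
            [0, 1, shy, 0],
            [0, 0, 1,   0],
            [0, 0, 0,   1]]

def apply(m, p):
    return [sum(m[r][c] * p[c] for c in range(4)) for r in range(4)]

print(apply(shear_xy(2, 3), [1, 1, 1, 1]))   # -> [3, 4, 1, 1]
```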

Module 4

4.1)3D VIEWING


For 3D graphics applications, we can view an object from any spatial position: from the front, from above, or from behind. The 3D descriptions of objects must be projected onto the flat viewing surface of the output device.

3D TRANSFORMATION PIPELINE

MC – Modeling coordinates
WC – World coordinates
VC – Viewing coordinates
PC – Projection coordinates
DC – Device coordinates

To convert a world-coordinate description of a scene to device coordinates, we first model the scene. We then convert the world-coordinate positions to viewing-coordinate positions. Finally, projection operations are performed to convert the viewing-coordinate description to coordinate positions on the projection plane.

PROJECTIONS

Once world-coordinate descriptions of the objects in a scene are converted to viewing coordinates, we can project the three-dimensional objects onto the two-dimensional view plane. There are two basic projection methods. In a parallel projection, coordinate positions are transformed to the view plane along parallel lines. In a perspective projection, object positions are transformed to the view plane along lines that converge to a point called the projection reference point (or center of projection).

MC -> (Modeling Transformation) -> WC -> (Viewing Transformation) -> VC -> (Projection Transformation) -> PC -> (Workstation Transformation) -> DC


A parallel projection preserves relative proportions of objects, and this is the method used in drafting to produce scale drawings of three-dimensional objects. Accurate views of the various sides of an object are obtained with a parallel projection, but this does not give us a realistic representation of the appearance of a three-dimensional object. A perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. Projections of distant objects are smaller than the projections of objects of the same size that are closer to the projection plane.

Parallel Projections

We can specify a parallel projection with a projection vector that defines the direction for the projection lines. When the projection is perpendicular to the view plane, we have an orthographic parallel projection. Otherwise, we have an oblique parallel projection. Figure illustrates the two types of parallel projections. Some graphics packages, such as GL on Silicon Graphics workstations, do not provide for oblique projections.


We can form orthographic projections that display more than one face of an object. Such views are called axonometric orthographic projections. The most commonly used axonometric orthographic projection is the isometric projection. Transformation equations for an orthographic parallel projection are straightforward. If the view plane is placed at position zvp along the zv axis, then any point (x, y, z) in viewing coordinates is transformed to projection coordinates as

xp = x, yp = y

where the original z-coordinate value is preserved for the depth information needed in depth cueing and visible-surface determination procedures. An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane. In some application packages, an oblique projection vector is specified


with two angles, α and φ. Point (x, y, z) is projected to position (xp, yp) on the view plane. Orthographic projection coordinates on the plane are (x, y). The oblique projection line from (x, y, z) to (xp, yp) makes an angle α with the line on the projection plane that joins (xp, yp) and (x, y). This line, of length L, is at an angle φ with the horizontal direction in the projection plane.

Length L depends on the angle α and the z coordinate of the point to be projected:

tan α = z / L, thus L = z / tan α = z L1

where L1 is the inverse of tan α, which is also the value of L when z = 1. We can then write the oblique projection equations as

xp = x + z (L1 cos φ)
yp = y + z (L1 sin φ)

The transformation matrix for producing any parallel projection onto the xy plane can be written as

1 0 L1 cos φ 0
0 1 L1 sin φ 0
0 0 0        0
0 0 0        1

An orthographic projection is obtained when L1 = 0. Oblique projections are generated with nonzero values for L1. The projection matrix has a structure similar to that of a z-axis shear matrix. In fact, the effect of this projection matrix is to shear planes of constant z and project them onto the view plane. The x- and y-coordinate values within each plane of constant z are shifted by an amount proportional to the z value of the plane, so that angles, distances, and parallel lines in the plane are projected accurately.
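These parallel-projection equations can be sketched directly; the function and parameter names are illustrative, and α = 90° gives the orthographic case (L1 = 0).

```python
import math

# Illustrative sketch of the parallel-projection equations onto the z = 0
# view plane; alpha = 90 degrees gives the orthographic case (L1 = 0).

def parallel_project(x, y, z, alpha_deg=90.0, phi_deg=0.0):
    if alpha_deg == 90.0:
        l1 = 0.0                                  # orthographic projection
    else:
        l1 = 1.0 / math.tan(math.radians(alpha_deg))
    phi = math.radians(phi_deg)
    xp = x + z * l1 * math.cos(phi)
    yp = y + z * l1 * math.sin(phi)
    return xp, yp

# Orthographic: x and y pass through unchanged.
print(parallel_project(1.0, 2.0, 5.0))            # -> (1.0, 2.0)
# Cavalier projection (alpha = 45 degrees): lines perpendicular to the
# view plane keep their true length.
print(parallel_project(1.0, 2.0, 5.0, 45.0, 30.0))
```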

Perspective Projections


To obtain a perspective projection of a three-dimensional object, we transform points along projection lines that meet at the projection reference point.

(Figure: cabinet projections of a cube onto a view plane for two values of angle φ; depth is projected as one-half that of the width and height.)

We can write equations describing coordinate positions along this perspective projection line in parametric form as

x' = x - x u,  y' = y - y u,  z' = z - (z - zprp) u,  0 ≤ u ≤ 1


Parameter u takes values from 0 to 1, and coordinate position (x', y', z') represents any point along the projection line. When u = 0, we are at position P = (x, y, z). At the other end of the line, u = 1 and we have the projection reference point coordinates (0, 0, zprp). On the view plane, z' = zvp, and we can solve the z' equation for parameter u at this position along the projection line. Substituting this value of u into the equations for x' and y', we obtain the perspective transformation equations

xp = x (dp / (zprp - z))
yp = y (dp / (zprp - z))

where dp = zprp - zvp is the distance of the view plane from the projection reference point. Using a three-dimensional homogeneous-coordinate representation, we can write the perspective projection transformation in matrix form as

In this representation, the homogeneous factor is

h = (zprp - z) / dp


and the projection coordinates on the view plane are calculated from the homogeneous coordinates as

xp = xh / h, yp = yh / h

where the original z-coordinate value would be retained in projection coordinates for visible-surface and other depth-processing procedures.
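The perspective equations above can be sketched directly; the names are illustrative, and the viewer is assumed to sit at the projection reference point on the positive z side (zprp > zvp > z).

```python
# Illustrative sketch of the perspective-projection equations, with the
# projection reference point at (0, 0, zprp) and the view plane at z = zvp.

def perspective_project(x, y, z, zprp, zvp):
    dp = zprp - zvp                    # view-plane distance from the PRP
    factor = dp / (zprp - z)           # equals 1/h, the homogeneous factor
    return x * factor, y * factor

# A point twice as far from the projection reference point as the view
# plane projects at half its size:
print(perspective_project(4.0, 4.0, 0.0, zprp=10.0, zvp=5.0))   # -> (2.0, 2.0)
```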

When a three-dimensional object is projected onto a view plane using perspective transformation equations, any set of parallel lines in the object that are not parallel to the plane are projected into converging lines. Lines that are parallel to the view plane will be projected as parallel lines. The point at which a set of projected parallel lines appears to converge is called a vanishing point. Each such set of projected parallel lines will have a separate vanishing point; in general, a scene can have any number of vanishing points, depending on how many sets of parallel lines there are in the scene. The vanishing point for any set of lines that are parallel to one of the principal axes of an object is referred to as a principal vanishing point. We control the number of principal vanishing points (one, two, or three) with the orientation of the projection plane, and perspective projections are accordingly classified as one-point, two-point, or three-point projections. The number of principal vanishing points in a projection is determined by the number of principal axes intersecting the view plane. Figure illustrates the appearance of one-point and two-point perspective projections for a cube.


4.2)VISIBLE-SURFACE DETECTION METHODS

For a realistic display of a scene, we have to identify those parts of the scene that are visible from a chosen viewing position. The various algorithms used for this are referred to as visible-surface detection methods or hidden-surface elimination methods. Visible-surface detection algorithms are broadly classified according to whether they deal with object definitions directly or with their projected images. These two approaches are called object-space methods and image-space methods, respectively. An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible. In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane. Most visible-surface algorithms use image-space methods, although object-space methods can be used effectively to locate visible surfaces in some cases. Line-display algorithms, on the other hand, generally use object-space methods to identify visible lines in wireframe displays, but many image-space visible-surface algorithms can be adapted easily to visible-line detection.

BACK-FACE DETECTION


A fast and simple object-space method for identifying the back faces of a polyhedron is based on the "inside-outside" tests. A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if

A x + B y + C z + D < 0

When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see the front of it from our viewing position). We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the eye (or "camera") position, as shown in Fig. 13-1, then this polygon is a back face if

V · N > 0

Furthermore, if object descriptions have been converted to projection coordinates and our viewing direction is parallel to the viewing zv axis, then V = (0, 0, Vz) and

V · N = Vz C

so that we only need to consider the sign of C, the z component of the normal vector N. In a right-handed viewing system with viewing direction along the negative zv axis, the polygon is a back face if C < 0. Also, we cannot see any face whose normal has z component C = 0, since our viewing direction is grazing that polygon. Thus, in general, we can label any polygon as a back face if its normal vector has a z-component value

C ≤ 0
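In the simplified case above (viewing along the negative zv axis), the back-face test reduces to the sign of C, as in this small sketch:

```python
# Illustrative sketch: in a right-handed viewing system looking along the
# negative z_v axis, a polygon with plane normal N = (A, B, C) is labeled
# a back face when C <= 0 (C = 0 means the viewing direction grazes it).

def is_back_face(normal):
    a, b, c = normal
    return c <= 0

print(is_back_face((0, 0, 1)))    # front-facing       -> False
print(is_back_face((0, 0, -1)))   # faces away         -> True
print(is_back_face((1, 0, 0)))    # edge-on / grazing  -> True
```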

DEPTH-BUFFER METHOD

A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane. This procedure is also referred to as the z-buffer method, since object depth is usually measured from the view plane along the z axis of a viewing system. Each surface of a scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because depth values can be computed very quickly and the method is easy to implement, but the method can also be applied to nonplanar surfaces. With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x, y) on the view plane, object depths can be compared by comparing z values. Figure shows three surfaces at varying distances along the orthographic projection line from


position (x, y) in a view plane taken as the xv yv plane. Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved. As implied by the name of this method, two buffer areas are required. A depth buffer is used to store depth values for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity values for each position. Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity. Each surface listed in the polygon tables is then processed, one scan line at a time, calculating the depth (z value) at each (x, y) pixel position. The calculated depth is compared to the value previously stored in the depth buffer at that position. If the calculated depth is greater than the value stored in the depth buffer, the new depth value is stored, and the surface intensity at that position is determined and placed in the same xy location in the refresh buffer.

Steps:
1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y),
   depth(x, y) = 0, refresh(x, y) = Ibackground
2. For each position on each polygon surface, compare depth values to previously stored values in the depth buffer to determine visibility.
   a) Calculate the depth z for each (x, y) position on the polygon.
   b) If z > depth(x, y), then set depth(x, y) = z, refresh(x, y) = Isurf(x, y)

After all surfaces have been processed, the depth buffer contains depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces. The ratio -A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values with a single addition. On each scan line, we start by calculating the depth on a left edge of the polygon that intersects that scan line. We first determine the y-coordinate extents of each polygon, and process the surface from the topmost scan line to the bottom scan line, as shown in Fig. Starting at a top vertex, we can recursively calculate x positions down a left edge of the polygon as x' = x - 1/m, where m is the slope of the edge. Depth values down the edge are then obtained recursively as

z' = z + (A/m + B) / C

If we are processing down a vertical edge, the slope is infinite and the recursive calculations reduce to

z' = z + B / C

An alternate approach is to use a midpoint method or Bresenham-type algorithm for determining x values on left edges for each scan line. Also the method can be applied to curved surfaces by determining depth and intensity values at each surface projection point. For polygon surfaces, the depth-buffer method is very easy to implement, and it requires no sorting of the surfaces in a scene. But it does require the availability of a second buffer in addition to the refresh buffer.
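The depth-buffer steps can be sketched as follows. Surfaces are modeled here as constant-depth pixel sets purely for illustration, since the point is the buffer logic rather than polygon rasterization; all names are invented for this example.

```python
# Illustrative sketch of the depth-buffer (z-buffer) steps over a tiny frame.

WIDTH, HEIGHT = 4, 4
I_BACKGROUND = 0

depth = [[0.0] * WIDTH for _ in range(HEIGHT)]             # 0 = minimum depth
refresh = [[I_BACKGROUND] * WIDTH for _ in range(HEIGHT)]  # background intensity

def process_surface(pixels, z, intensity):
    """pixels: (x, y) positions covered by the surface; z: its depth,
    with larger z meaning closer to the view plane in this convention."""
    for x, y in pixels:
        if z > depth[y][x]:            # closer than anything stored so far
            depth[y][x] = z
            refresh[y][x] = intensity

# A far surface covers the whole frame; a nearer one covers a 2x2 corner.
process_surface([(x, y) for x in range(4) for y in range(4)], z=0.3, intensity=5)
process_surface([(x, y) for x in range(2) for y in range(2)], z=0.8, intensity=9)

print(refresh[0][0], refresh[3][3])    # -> 9 5
```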


A-BUFFER METHOD

An extension of the ideas in the depth-buffer method is the A-buffer method. The A-buffer method represents an anti-aliased, area-averaged, accumulation-buffer method developed by Lucasfilm for implementation in the surface-rendering system called REYES (an acronym for "Renders Everything You Ever Saw"). A drawback of the depth-buffer method is that it can only find one visible surface at each pixel position. In other words, it deals only with opaque surfaces and cannot accumulate intensity values for more than one surface, as is necessary if transparent surfaces are to be displayed. The A-buffer method expands the depth buffer so that each position in the buffer can reference a linked list of surfaces. Thus, more than one surface intensity can be taken into consideration at each pixel position, and object edges can be anti-aliased. Each position in the A-buffer has two fields:

• depth field - stores a positive or negative real number
• intensity field - stores surface-intensity information or a pointer value

If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. If the depth field is negative, this indicates multiple-surface contributions to the pixel intensity.

The A-buffer can be constructed using methods similar to those in the depth-buffer algorithm. Scan lines are processed to determine surface overlaps of pixels across the individual scan lines. Surfaces are subdivided into a polygon mesh and clipped against the pixel boundaries. Using the opacity factors and percent of surface overlaps,

(Figure: organization of an A-buffer pixel position. With depth > 0, the depth and intensity fields describe a single surface overlap; with depth < 0, the intensity field points to a linked list of data for multiple surface overlaps.)


we can calculate the intensity of each pixel as an average of the contributions from the overlapping surfaces.
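One way to sketch an A-buffer pixel position; the class and field names are invented for this example, and a Python list stands in for the linked list of surface data.

```python
# Illustrative sketch of one A-buffer pixel position.

class ABufferCell:
    def __init__(self):
        self.depth = 0.0        # >= 0: single surface; < 0: multiple surfaces
        self.surfaces = []      # stands in for the linked list:
                                # entries of (depth, intensity, coverage)

    def add_surface(self, depth, intensity, coverage=1.0):
        self.surfaces.append((depth, intensity, coverage))
        # A single surface stores its depth directly; more than one
        # surface flags the cell with a negative depth value.
        self.depth = self.surfaces[0][0] if len(self.surfaces) == 1 else -1.0

    def intensity(self):
        """Coverage-weighted average of the contributing surfaces."""
        total = sum(c for _, _, c in self.surfaces)
        return sum(i * c for _, i, c in self.surfaces) / total

cell = ABufferCell()
cell.add_surface(0.7, 10.0, 0.5)   # opaque surface covering half the pixel
cell.add_surface(0.4, 2.0, 0.5)    # second surface behind it
print(cell.depth)                   # -> -1.0 (multiple-surface flag)
print(cell.intensity())             # -> 6.0
```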

SCAN-LINE METHOD

This image-space method for removing hidden surfaces is an extension of the scan-line algorithm for filling polygon interiors. Instead of filling just one surface, we now deal with multiple surfaces. As each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible. Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane. When the visible surface has been determined, the intensity value for that position is entered into the refresh buffer. Any number of overlapping polygon surfaces can be processed with this scan-line method. Flags for the surfaces are set to indicate whether a position is inside or outside, and depth calculations are performed when surfaces overlap. When these coherence methods are used, we need to be careful to keep track of which surface section is visible on each scan line. This works only if surfaces do not cut through or otherwise cyclically overlap each other. If any kind of cyclic overlap is present in a scene, we can divide the surfaces to eliminate the overlaps. The dashed lines in this figure indicate where planes could be subdivided to form two distinct surfaces, so that the cyclic overlaps are eliminated.

Light Sources

Every object in a scene is potentially a source of light. Light may be either emitted by or reflected

from objects. Generally, in computer graphics we make a distinction between light

emitters and light reflectors. The emitters are called light sources, and the reflectors are

usually the objects being rendered. Light sources are characterized by their intensities

while reflectors are characterized by their material properties.

Point Light Sources

The rays emitted from a point light radially diverge from the source. A point light source

is a fair approximation to a local light source such as a light bulb. The direction of the

light to each point on a surface changes when a point light source is used. Thus a

normalized vector to the light emitter must be computed for each point that is

illuminated.


4.3)Basic illumination models

Illumination models model the interaction of light with the surface and range from

simple to very complex.

a) Local Illumination = direct illumination - considers light traveling directly

from source to surface

b) Global Illumination = indirect illumination - takes into account reflection of

light from other surfaces

Ambient Light

Even though an object in a scene is not directly lit, it will still be visible, because light is reflected from nearby objects. Ambient light refers to any outside light, such as sunlight coming through windows, or overhead room light; i.e., ambient light is the general background light all around us. Ambient light has no spatial or directional characteristics. The amount of ambient light incident on each object is a constant for all surfaces in the scene.

Let

Ia = intensity of the ambient light
ka = ambient reflection coefficient of the surface

Then the intensity on the surface is described by

I = ka Ia

Ideal Diffuse Reflection


First, we will consider a particular type of surface called an ideal diffuse reflector. An

ideal diffuse surface is, at the microscopic level, a very rough surface.

Chalk is a good approximation to an ideal diffuse surface.

Because of the microscopic variations in the surface, an incoming ray of light is equally

likely to be reflected in any direction over the hemisphere. Ambient light

reflection is an approximation of global diffuse lighting effects. Diffuse

reflections are constant over each surface in a scene, independent of the viewing

direction. The intensity of the diffuse reflection at any point is given by

I ambdiff = Kd Ia

where Kd is the coefficient of diffuse reflection and Ia is the intensity of the ambient light. Kd is assigned a constant value in the interval 0 to 1.

Lambert's cosine law

Ideal diffuse reflectors are also called Lambertian reflectors: they reflect light according to Lambert's cosine law.


Lambert's law states that the reflected energy from a small surface area in a particular

direction is proportional to the cosine of the angle between that direction and the surface

normal. Lambert's law determines how much of the incoming light energy is reflected.

The amount of energy that is reflected in any one direction is constant in this model. In

other words, the reflected intensity is independent of the viewing direction. The intensity

does, however, depend on the light source's orientation relative to the surface, and it is

this property that is governed by Lambert's law.

If Il is the intensity of the point light source, then the diffuse reflection equation for a

point on the surface can be written as

Il,diff = Kd Il cos θ

A surface is illuminated by a point source only if the angle of incidence is in the range 0° to 90° (cos θ is then in the interval from 0 to 1). When cos θ is negative, the light source is behind the surface.

If N is the unit normal vector to a surface and L is the unit direction vector to the point

light source from a position on the surface, then

Cos θ =N.L

The diffuse reflection equation for a point source illumination is

Il,diff = Kd Il (N.L)

According to Lambert’s cosine law the intensity of the reflected light depends on the

angle of illumination. A surface that is perpendicular to the direction of the incident light


appears brighter than a surface that is at an angle to the direction of the incoming light.

As the angle of incidence increases, less of the incident light falls on the surface.

We can combine the ambient and point source intensity calculations to obtain an

expression for the total diffuse reflection.

Total diffuse reflection equation is given by

Idiff = Ka Ia + Kd Il (N.L)
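The combined ambient-plus-diffuse equation can be sketched as follows. The names are illustrative; N and L are assumed to be unit vectors, and N·L is clamped at zero when the light source is behind the surface.

```python
import math

# Illustrative sketch of the combined model I = Ka*Ia + Kd*Il*(N.L).

def diffuse_intensity(ka, kd, ia, il, n, l):
    ndotl = sum(a * b for a, b in zip(n, l))
    return ka * ia + kd * il * max(ndotl, 0.0)   # light behind surface -> 0

n = (0.0, 0.0, 1.0)
# Light directly overhead: N.L = 1.
print(round(diffuse_intensity(0.1, 0.7, 1.0, 1.0, n, (0.0, 0.0, 1.0)), 6))  # -> 0.8
# Light 60 degrees from the normal: N.L = 0.5, so the diffuse term halves.
l60 = (0.0, math.sin(math.radians(60)), math.cos(math.radians(60)))
print(round(diffuse_intensity(0.1, 0.7, 1.0, 1.0, n, l60), 6))              # -> 0.45
```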

Specular Reflection

Ispec = W(θ) Il cos^ns φ

Specular reflection, on the other hand, is viewpoint dependent. Light striking a specular surface is, by the law of reflection, reflected at an angle that mirrors the incident light angle, so the viewing angle is very important. Specular reflection forms tight, bright highlights, making the surface appear glossy.


Specular reflection is the perfect, mirror-like reflection of light (or sometimes other kinds of wave) from

a surface, in which light from a single incoming direction is reflected into a single

outgoing direction. Such behaviour is described by the law of reflection, which states that

the direction of outgoing reflected light and the direction of incoming light make the

same angle with respect to the surface normal; this is commonly stated as θi = θr.

This is in contrast to diffuse reflection, where incoming light is reflected in a broad range

of directions. The most familiar example of the distinction between specular and diffuse

reflection would be matte and glossy paints. While both exhibit a combination of specular

and diffuse reflection, matte paints have a higher proportion of diffuse reflection and

glossy paints have a greater proportion of specular reflection. Very highly polished

surfaces, such as high quality mirrors, can exhibit almost perfect specular reflection.

Even when a surface exhibits only specular reflection with no diffuse reflection, not all of

the light is necessarily reflected. Some of the light may be absorbed by the materials.

Additionally, depending on the type of material behind the surface, some of the light may

be transmitted through the surface. For most interfaces between materials, the fraction of

the light that is reflected increases with increasing angle of incidence θi. If the light is


propagating in a material with a higher index of refraction than the material whose

surface it strikes, then total internal reflection may occur.

Phong Model

This is an empirical model, which is not based on physics, but physical

observation. Phong observed that for very shiny surfaces the specular

highlight was small and the intensity fell off rapidly, while for duller

surfaces it was larger and fell off more slowly. He decided to let the

reflected intensity be a function of (cos α )n with n >= 200 for a shiny

surface and n small for a dull surface. For a perfect reflector n equals

infinity, and for a piece of cardboard n equals 0 or 1. In the diagram

below we can see how the function (cos α )n behaves for different

values of n.
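Since the diagram is not reproduced here, the falloff behaviour can be sketched numerically. The exponent values below are illustrative:

```python
import math

# Sketch of the (cos alpha)^n specular falloff described above.

def specular_falloff(alpha_deg, n):
    """Relative specular intensity (cos alpha)^n at viewing angle alpha."""
    c = max(0.0, math.cos(math.radians(alpha_deg)))  # clamp behind-surface angles
    return c ** n

# At 10 degrees off the reflection direction, a shiny surface (large n)
# has already lost most of its highlight, while a dull one (n = 1) has not.
for n in (1, 10, 200):
    print(n, round(specular_falloff(10, n), 3))
```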

Specular reflection is also a function of the light incidence angle θ . An

example is glass which has almost no specular reflectance for θ = 0

degrees but a very high specular reflectance for θ > 80 degrees.

Some substances, such as copper, actually change color with change


in the incidence angle, as shown in the following plot of the reflectance

curve as a function of the incident angle for copper.

Warn model

• Simulates studio lighting effects by controlling light intensity in different directions

• Controls light direction using a reflecting surface (only specular reflection is considered)


• Flaps are used to control the amount of light emitted by a source in various directions

• Spot lighting is simulated in the Warn model

RGB color model


The RGB color model is an additive model in which red, green, and blue (often used in

additive light models) are combined in various ways to reproduce other colors. The name

of the model and the abbreviation ‘RGB’ comes from the three primary colors, red, green,

and blue. These three colors should not be confused with the primary pigments of red, blue,

and yellow, known in the art world as ‘primary colors’.

The RGB color model itself does not define what is meant by ‘red’, ‘green’ and ‘blue’, and

the results of mixing them are not exact unless the exact spectral make-up of the red, green

and blue primaries are defined. The color model then becomes an absolute color space,

such as sRGB or Adobe RGB; see RGB color space for more details. This section discusses concepts common to all the different RGB color spaces that use the RGB color model.
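A toy illustration of additive mixing, using the common 0-255 channel convention (which the model itself does not mandate):

```python
# Additive RGB mixing: combine channel-wise, clamping at the maximum value.
def add_rgb(c1, c2):
    return tuple(min(255, a + b) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(add_rgb(RED, GREEN))                 # (255, 255, 0): yellow
print(add_rgb(add_rgb(RED, GREEN), BLUE))  # (255, 255, 255): white
```

Mixing full-intensity red and green yields yellow, and adding blue gives white, exactly the additive behaviour the model describes.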

4.4)Rendering Methods

Constant Shading

The simplest shading model for a polygon is 'constant shading', also known as 'faceted

shading' or 'flat shading'. This approach applies an illumination model once to determine a single intensity value, which is then held constant across the entire polygon to shade it. The approach is valid if several

assumptions are true:

1. The light source is at infinity, so N.L is constant across the polygon face.

2. The viewer is at infinity, so N.V is constant across the polygon face.

3. The polygon represents the actual surface being modeled, and is not an

approximation to a curved surface.

Gouraud Shading


Gouraud shading, also called 'intensity interpolation shading' or 'color interpolation shading', eliminates intensity discontinuities.

Gouraud shading extends the concept of interpolated shading applied to individual

polygons by interpolating polygon vertex illumination values that take into account the

surface being approximated. The Gouraud shading process requires that the normal

{perpendicular vector} be known for each vertex of the polygonal mesh. Gouraud was

able to compute these 'vertex normals' directly from an analytical description of the

surface. Alternatively, if the vertex normals are not stored with the mesh and cannot be

determined directly from the actual surface, then, Gouraud suggested, we can

approximate them by averaging the surface normals of all polygonal facets sharing each vertex. If an edge is meant to be visible (as at the joint between a plane's wing and body),

then we find two vertex normals, one for each side of the edge, by averaging the normals

of polygons on each side of the edge separately.

The next step in Gouraud shading is to find 'vertex intensities' by using the vertex

normals with any desired illumination model. Finally, each polygon is shaded by linear

interpolation of vertex intensities along each edge and then between edges along each scan line. The term 'Gouraud shading' is often generalized to refer to intensity

interpolation shading of even a single polygon in isolation, or to the interpolation of

arbitrary colors associated with polygon vertices {as is the case in computer games}.

The algorithm can be stated as follows:

1. Compute a normal N for each vertex of the polygon.

2. From N compute an intensity I for each vertex of the polygon.

3. By bilinear interpolation compute an intensity Ii for each pixel.

4. Paint each pixel with the shade corresponding to Ii.
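Step 3 can be sketched for a single scan line. This is a minimal, illustrative version; the edge intensities ia and ib would themselves come from interpolating the vertex intensities along the polygon edges.

```python
# Linear interpolation of intensities across one scan line, as in step 3.
# ia and ib are the intensities already interpolated along the left and
# right polygon edges for this scan line.

def scanline_intensities(ia, ib, num_pixels):
    if num_pixels == 1:
        return [ia]
    return [ia + (ib - ia) * i / (num_pixels - 1) for i in range(num_pixels)]

print([round(i, 2) for i in scanline_intensities(0.2, 1.0, 5)])
# [0.2, 0.4, 0.6, 0.8, 1.0]
```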

Phong Shading


The third shading model, Phong shading, is similar to Gouraud shading except that the normals, rather than the intensities, are interpolated. Thus, the specular highlights are computed much more precisely than in the Gouraud shading model.

The algorithm is as follows:

1. Compute a normal N for each vertex of the polygon.

2. By bilinear interpolation compute a normal Ni for each pixel. (Ni must be renormalized each time.)

3. From Ni compute an intensity Ii for each pixel of the polygon.

4. Paint each pixel with the shade corresponding to Ii.

Note that this method is much more computationally expensive than Gouraud shading.
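The per-pixel normal interpolation of step 2 can be sketched as below; the example also shows why the blended vector must be renormalized before it is used in the illumination model. Vector values are illustrative.

```python
# Interpolating two unit normals and renormalizing the result, as step 2
# of Phong shading requires; without renormalization the blended vector
# would be shorter than unit length.

def normalize(v):
    mag = sum(c * c for c in v) ** 0.5
    return tuple(c / mag for c in v)

def interp_normal(n0, n1, t):
    blended = tuple(a + (b - a) * t for a, b in zip(n0, n1))
    return normalize(blended)

n = interp_normal((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5)
print(round(sum(c * c for c in n), 6))  # 1.0: unit length after renormalizing
```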

4.5)Ray Tracing Method

Our goal is to find the color of each point on the view window. We subdivide the view window into small squares, where each square corresponds to one pixel in the final image. If you want to create an image at a resolution of 640x400, you would break up the view window into a grid of 640 squares across and 400 squares down. The real problem, then, is assigning a color to each square. This is what ray tracing does.

The objective is to determine the color of each light ray that strikes the view window

before reaching the eye. The natural assumption would be that rays are traced starting at their


point of origin, the light source, and towards their destination, the eye. Some will reach

the eye directly, others will bounce around some and then reach the eye, and many, many

more will probably never hit the eye at all. For all the rays that never reach the eye, the

effort tracing them was wasted.

Fig: Tracing rays from the light source to the eye. Many rays are wasted because they never reach the eye.

In order to save ourselves this wasted effort, we trace only those rays that are guaranteed

to hit the view window and reach the eye. It seems at first that it is impossible to know

beforehand which rays reach the eye. After all, any given ray can bounce around the

room many times before reaching the eye. However, if we look at the problem

backwards, we see that it has a very simple solution. Instead of tracing the rays starting at

the light source, we trace them backwards, starting at the eye. Consider any point on the

view window whose color we're trying to determine. Its color is given by the color of the

light ray that passes through that point on the view window and reaches the eye. We can

just as well follow the ray backwards by starting at the eye and passing through the point

on its way out into the scene. The two rays will be identical, except for their direction: if

the original ray came directly from the light source, then the backwards ray will go

directly to the light source. So the backwards method does the same thing as the original

method, except it doesn't waste any effort on rays that never reach the eye.


This, then, is how ray tracing works in computer graphics. For each pixel on the view

window, we define a ray that extends from the eye to that point. We follow this ray out into the scene as it bounces off different objects. The final color of the ray (and therefore of the corresponding pixel) is given by the colors of the objects hit by the ray as it travels through the scene.

Just as in the light-source-to-eye method it might take a very large number of bounces

before the ray ever hits the eye, in the backwards method it might take many bounces before the ray ever hits the light. Since we need to establish some limit on the number of

bounces to follow the ray on, we make the following approximation: every time a ray hits

an object, we follow a single new ray from the point of intersection directly towards the

light source.

We trace a new ray from each ray-object intersection directly towards the light source

In the figure we see two rays, a and b, which intersect the purple sphere. To determine

the color of a, we follow the new ray a' directly towards the light source. The color of a

will then depend on several factors, discussed in Color and Shading below. As you can

see, b will be shadowed because the ray b' towards the light source is blocked by the

sphere itself. Ray a would have also been shadowed if another object blocked the ray a'.
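The geometric core of each of these steps is the ray-object intersection test. The following is a minimal, illustrative sketch for a sphere; all scene values are made up for the example.

```python
import math

# Backward ray tracing's basic step: find where a ray from the eye first
# meets an object. Here, a ray-sphere intersection test.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_sphere(origin, direction, center, radius):
    """Smallest positive t with |origin + t*direction - center| = radius, else None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Ray from the eye straight down the z axis toward a sphere centered at z = 5:
print(hit_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1))  # 4.0 (near surface)
print(hit_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1))  # None (misses)
```

The same test, run from the intersection point towards the light source, decides whether the point is shadowed, as with rays a' and b' in the figure.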


Module 5

5.1)FRACTALS

Natural objects like mountains, clouds and trees have irregular features. Such objects can be described with fractal-geometry methods. A fractal is generally "a rough or fragmented geometric shape that can be subdivided in parts, each of which is (at least approximately) a reduced-size copy of the whole," a property called self-similarity. The term was coined by Benoît Mandelbrot in 1975 and was derived from the Latin fractus, meaning "broken" or "fractured".

A fractal often has the following features:

• It has a fine structure at arbitrarily small scales.

• It is too irregular to be easily described in traditional Euclidean geometric language.

• It is self-similar (at least approximately or stochastically).

• It has a Hausdorff dimension which is greater than its topological dimension (although this requirement is not met by space-filling curves such as the Hilbert curve).

• It has a simple and recursive definition.

Because they appear similar at all levels of magnification, fractals are often considered to be infinitely complex (in informal terms). Natural objects that approximate fractals to a degree include clouds, mountain ranges, lightning bolts, coastlines, and snowflakes. However, not all self-similar objects are fractals: for example, the real line (a straight Euclidean line) is formally self-similar but fails to have other fractal characteristics.

FRACTAL GENERATION PROCEDURES

A fractal object is generated by repeatedly applying a specified transformation function F to points within a region of space. If P0 = (x0, y0, z0) is a selected initial point, each application of F generates the next point in the sequence: P1 = F(P0), P2 = F(P1), P3 = F(P2), and so on.

Three common techniques for generating fractals are:


1.Escape-time fractals — These are defined by a recurrence relation at each point in a space (such as the complex plane). Examples of this type are the Mandelbrot set, Julia set, the Burning Ship fractal and the Lyapunov fractal.

2.Iterated function systems — These have a fixed geometric replacement rule. Cantor set, Sierpinski carpet, Sierpinski gasket, Peano curve, Koch snowflake, Harter-Heighway dragon curve, T-Square, Menger sponge, are some examples of such fractals.

3.Random fractals — Generated by stochastic rather than deterministic processes, for example, trajectories of the Brownian motion, Lévy flight, fractal landscapes and the Brownian tree. The latter yields so-called mass- or dendritic fractals, for example, diffusion-limited aggregation or reaction-limited aggregation clusters.
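The escape-time technique can be sketched for the Mandelbrot set, whose recurrence z → z² + c is iterated until |z| exceeds the escape radius 2 (a standard choice) or an iteration limit is reached:

```python
# Escape-time iteration for the Mandelbrot set: repeatedly apply the
# transformation z -> z*z + c and count steps until |z| exceeds 2.

def escape_count(c, max_iter=50):
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n          # escaped: c is outside the set
        z = z * z + c
    return max_iter           # never escaped: treat c as inside the set

print(escape_count(0j))       # 50: the origin stays bounded
print(escape_count(1 + 1j))   # 2: escapes after two iterations
```

A rendering assigns each pixel a complex value c and colors it by its escape count, which is what produces the familiar Mandelbrot images.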

Classification of fractals

1. Self-similar fractals: Fractals can also be classified according to their self-similarity. There are two types of self-similarity found in fractals:

Statistical self-similarity — This is the weakest type of self-similarity; the fractal has numerical or statistical measures which are preserved across scales. Most reasonable definitions of "fractal" trivially imply some form of statistical self-similarity. (Fractal dimension itself is a numerical measure which is preserved across scales.) Random fractals are examples of fractals which are statistically self-similar, but neither exactly nor quasi-self-similar.

Deterministic self-similarity – Here non-random variations are used; the parts are generated by a fixed, deterministic rule.

2. Self-affine fractals: These fractals have parts that are formed with different scaling parameters Sx, Sy, Sz in different coordinate directions.

3. Invariant fractals: These are formed with non-linear transformations. This includes self-squaring fractals.

Applications

Random fractals can be used to describe many highly irregular real-world objects. Other applications of fractals include:

• Classification of histopathology slides in medicine

• Enzyme/enzymology (Michaelis-Menten kinetics)

• Generation of new music

• Generation of various art forms

• Signal and image compression

• Seismology


• Computer and video game design, especially computer graphics for organic environments and as part of procedural generation

• Fractography and fracture mechanics

5.2)ANIMATION

Computer animation generally refers to any time sequence of visual changes in a

scene. In addition to changing object positions with translations or rotations, computer-generated animations can display time variations in object size, color, transparency

or surface texture.

Design of an animation sequence

In general an animation sequence is designed with the following steps

a) Storyboard layout

b) Object definitions

c) Key-frame specifications

d) Generation of in between frames


The storyboard is an outline of the action. It defines motion sequences as a set of

basic events that are to take place. Depending on the type of animation to be

produced, the storyboard could consist of a set of rough sketches or it could be a list

of basic ideas for the motion.

An object definition is given for each participant in the action. Objects can be defined

in terms of basic shapes, such as polygon or splines. In addition, associated sets of

movements are also specified along with the shape.

A key frame is a detailed drawing of the scene at a certain time in the animation

sequence. Within each key frame each object is positioned according to the time for

that frame.

In-betweens are intermediate frames between the key frames. The number of In-

betweens needed is determined by the media to be used to display animation. Film

requires 24 frames per second, and graphics terminal are refreshed at the rate of 30 to

60 frames per second. Typically time intervals for the motion are set up so that there

are from three to five In-betweens for each pair of key frames.

When someone creates a 3D animation on a computer, they usually don't specify the

exact position of any given object on every single frame. They create key frames. Key

frames are important frames during which an object changes its size, direction, shape

or other properties. The computer then figures out all the in between frames and saves

an extreme amount of time for the animator.
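A minimal sketch of how a system might compute in-between positions by linear interpolation; the positions and frame counts below are illustrative, and real systems interpolate other properties (size, color, orientation) the same way.

```python
# Generate `count` in-between positions between two key-frame positions
# by linear interpolation.

def in_betweens(p_start, p_end, count):
    frames = []
    for i in range(1, count + 1):
        t = i / (count + 1)  # fraction of the way from start to end
        frames.append(tuple(a + (b - a) * t for a, b in zip(p_start, p_end)))
    return frames

# Three in-betweens between key frames at x = 0 and x = 40:
print(in_betweens((0, 0), (40, 0), 3))  # [(10.0, 0.0), (20.0, 0.0), (30.0, 0.0)]
```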

Two key frames drawn by the user


In between frames generated by the computer

On raster scan systems we can generate real time animation in limited applications

using raster operations. A simple method for translation in the xy plane is to transfer a rectangular block of pixels from one location to another, while rotation through arbitrary angles requires anti-aliasing procedures.

To rotate a block of pixels we need to determine the percent of area coverage for

those pixels that overlap the rotated block. Sequences of raster operations can be

executed to produce real time animation of either two-dimensional or three-

dimensional objects, as long as we restrict the animation to motions in the projection

plane. Then no viewing or visible surface algorithms need be invoked.

5.3)Morphing

Transformation of object shapes from one form to another is called morphing. We

generate set of in-betweens from the specification of two or more key frames. Given

the animation paths we can interpolate the positions of individual objects between any

two times or key frames. With complex object transformations the shapes of the

object may change over time. If all surfaces are described with polygon meshes then

the number of edges per polygon can change from one frame to the next. Thus the

total number of line segments can be different in different frames.

Morphing methods can be applied to any motion or transition involving a change of

shape.

Given two key frames for an object transformation we first adjust the object

specification in one of the frames so that the number of polygon edges (or the number

of vertices) is the same for the two frames. This is illustrated below


Frame K Halfway Frame Frame K+1

A straight-line segment in key frame k is transformed into two line segments in key

frame k+1. Since key frame k+1 has an extra vertex, we add a vertex between 1 and 2 in

key frame K to balance the number of vertices and edges in the two key frames. Using

linear interpolation to generate the in-betweens, we transition the added vertex in key frame k into vertex 3' along the straight-line path as shown. We can state general preprocessing rules for equalizing key frames in terms of either the number of edges or the number of vertices to be added to a key frame.

Case 1: Equalizing edge count.


Frame K Frame K+1


Let the parameters Lk and Lk+1 denote the number of line segments in two consecutive

frames. We then define

Lmax = max(Lk, Lk+1)

Lmin = min(Lk, Lk+1)

Ne = Lmax mod Lmin

Ns = int(Lmax/ Lmin)

Then the preprocessing is accomplished by

1. Dividing Ne edges of keyframe_min into Ns + 1 sections.

2. Dividing the remaining lines of keyframe_min into Ns sections.
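The edge-count bookkeeping can be sketched as below, following the standard convention (as in Hearn & Baker) that the Ne edges are divided into Ns + 1 sections and the rest into Ns sections; the frame sizes are made up for the example.

```python
# Edge-count equalization: how the edges of the smaller key frame are split
# so both frames end up with the same number of line segments.

def edge_split_counts(lk, lk1):
    l_max, l_min = max(lk, lk1), min(lk, lk1)
    ne = l_max % l_min    # edges divided into ns + 1 sections
    ns = l_max // l_min   # remaining edges divided into ns sections
    return ne, ns

# Frames with 4 and 15 edges: 3 edges become 4 sections each, the
# remaining edge becomes 3 sections, giving 3*4 + 1*3 = 15 segments.
ne, ns = edge_split_counts(4, 15)
print(ne, ns)                  # 3 3
print(ne * (ns + 1) + (4 - ne) * ns)  # 15: matches the larger frame
```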

Case 2: Equalizing vertex count

Let the parameters Vk and Vk+1 denote the number of vertices in the two consecutive

frames. We define

Vmax = max(Vk, Vk+1)

Vmin = min(Vk, Vk+1)

Nls = (Vmax-1) mod (Vmin-1)

Np = int ((Vmax-1) / (Vmin-1))

Preprocessing using vertex count is performed by

1. Adding Np points to Nls line sections of keyframe_min.

2. Adding Np - 1 points to the remaining edges of keyframe_min.
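The analogous vertex-count bookkeeping can be sketched the same way; the vertex counts below are illustrative.

```python
# Vertex-count equalization: how many points to add to sections of the
# smaller key frame so both frames have the same number of vertices.

def vertex_add_counts(vk, vk1):
    v_max, v_min = max(vk, vk1), min(vk, vk1)
    nls = (v_max - 1) % (v_min - 1)    # sections receiving Np added points
    np_ = (v_max - 1) // (v_min - 1)   # remaining sections receive Np - 1
    return nls, np_

# Frames with 4 and 9 vertices: 2 sections get 2 added points each and the
# remaining section gets 1, turning 3 segments into 8 (i.e. 9 vertices).
print(vertex_add_counts(4, 9))  # (2, 2)
```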


Text Books and Websites Referred

1. Computer Graphics (C version) - Donald Hearn & Pauline Baker (Pearson Education Asia)

2. Principles of Interactive Computer Graphics – William M. Newman, Robert F. Sproull (second edition), McGraw-Hill

3. http://www.cs.unc.edu/~mantler/research/bezier/

4. http://www.rocw.raifoundation.org/computing/MCA/computergraphics/lecture-notes/lecture-06.pdf

Reference:

1. Computer Graphics: Principles & Practice – Foley, van Dam, Feiner, Hughes (second edition in C), Addison-Wesley
