
Environment Mapping and Other Applications of World Projections

Ned Greene, New York Institute of Technology

Reflection mapping, introduced by Blinn and Newell in 1976,1 is a shading technique that uses a projection of the world (a reflection map) as seen from a particular viewpoint (the "world center") to make rendered surfaces appear to reflect their environment. The mirror reflection of the environment at a surface point is taken to be the point in the world projection corresponding to the direction of a ray from the eye as reflected by the surface. Consequently, reflections are geometrically accurate only if the surface point is at the world center, or if the

An earlier version of this article appeared in Proc. Graphics Interface 86.

November 1986 0272-1716/86/1100-0021$01.00 © 1986 IEEE


Figure 1. Lizard model reflecting its environment. (Lizard model by Dick Lundin, tree model by Jules Bloomenthal, terrain model by Paul P. Xander, sky model by the author.)

reflected object is greatly distant. Geometric distortion of reflections increases as the distance from the surface point to the world center increases and as the distance from the reflected object to the world center decreases. When applying reflection mapping to a particular object, the most satisfactory results are usually obtained by centering the world projection at the object's center.
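The basic reflection-map lookup described above can be sketched in a few lines. The sketch below (Python, not from the original article) mirror-reflects the eye ray about the surface normal and converts the reflected direction to pixel coordinates in a latitude-longitude world projection; the function names and the map orientation (y is "up") are assumptions made for illustration.

```python
import math

def reflect(incident, normal):
    # Mirror reflection of an incident direction about a unit surface
    # normal: R = I - 2 (N . I) N
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def latlong_coords(direction, width, height):
    # Map a unit 3D direction to (u, v) pixel coordinates in a
    # latitude-longitude world projection (y axis is "up").
    x, y, z = direction
    longitude = math.atan2(z, x)                   # -pi .. pi
    latitude = math.asin(max(-1.0, min(1.0, y)))   # -pi/2 .. pi/2
    u = int((longitude + math.pi) / (2.0 * math.pi) * (width - 1))
    v = int((latitude + math.pi / 2.0) / math.pi * (height - 1))
    return u, v
```

The color reflected at the surface point is then simply the map pixel at (u, v), which is why the result is only geometrically correct when the surface point coincides with the world center.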

This method for approximating reflections can be extended to encompass refraction. Obtaining accurate results, however, requires more computation, since the ray from the eye should be "ray traced" through the refractive object. As with reflection, results are only approximate for geometric reasons. (Simply bending the ray at the surface point and using this as the direction of the refracted ray is not accurate, although it may convey the impression of refraction.)

Miller and Hoffman have developed a general illumination model based on reflection mapping.2 Essentially, they treat a world projection as an all-encompassing, luminous globe imprinted with a projection of the world that produces sharp reflections in smooth, glossy objects and diffuse reflections in matte objects. This is a good model of illumination in the real world, since it takes into account all illumination in the environment, although shadows are not explicitly handled, and, as with reflection mapping, results are only approximate for geometric reasons. To speak generically of this approach and conventional reflection mapping, the term "environment mapping" will be applied to techniques for shading and texturing surfaces that use a world projection. A world projection used for environment mapping is usually called an environment map.

Environment mapping vs. ray tracing

Environment mapping is often thought of as a poor man's ray tracing, a way of obtaining approximate reflection effects at a fraction of the computational expense. While ray tracing is unquestionably the more versatile and comprehensive technique, since it handles shadows and multiple levels of reflection and refraction,3 environment mapping is superior in some ways, quite aside from its enormous advantage in speed. Environment mapping gracefully handles finding diffuse illumination and antialiasing specular illumination, operations that are problematical with ray tracing, since it point-samples the 3D environment. (Amanatides' approach of ray tracing with cones is an exception.4)

Theoretically, finding the diffuse and specular illumination impinging on a region of a surface requires integration over part of the 3D environment, an inherently complex procedure. Refinements to ray tracing proposed to approximate such integration include distributed ray tracing5 and ray tracing with cones.4

Environment mapping simplifies the problem by treating the environment as a 2D projection, which allows integration to be performed by texture filtering. Specifically, diffuse and specular surface illumination can be found by filtering regions of an environment map. However, integration over a 2D projection rather than the 3D environment is a gross simplification that sacrifices the reliability of local shading information, and consequently aspects of shading that depend on the local environment (such as shadowing) cannot be performed reliably. On the other hand, environment mapping accurately conveys global illumination, and in situations where the local environment does not substantially affect surface shading, the subjective quality of reality cues produced by environment mapping may be superior to those produced by naive ray tracing.

An approach to shading that exploits the accuracy of ray tracing and the speed of environment mapping involves ray tracing the local 3D environment of an object and using an environment map representing the remainder of the environment as a backdrop.6

An example of environment mapping

The lizard of Figure 1 was rendered with reflection mapping (environment mapping with mirror reflections). Examination of this image reveals an inherent limitation with reflection mapping: Since the reflecting object is not normally in the environment map, that object cannot reflect parts of itself (for example, the legs are not reflected in the body). Actually, this limitation can be partially overcome by using different environment maps for different parts of an object.

But overall, reflection mapping performs well in

this scene. The horizon and sky, which are the most prominent reflected features, are accurately reflected because of their distance from the reflecting object. The reflections of the tree and the foreground terrain are less accurate because of their proximity, but surface curvature makes this difficult to recognize. As this example shows, reflections need not be accurate to look right, although attention should be paid to scene composition. Planar reflecting surfaces, for example, may cause problems, since their distortion of reflections may be quite noticeable.

Rendering a cube projection

Environment mapping presupposes the ability to obtain a projection of the complete environment. Suppose we wish to obtain a world projection of a 3D synthetic environment as seen from a particular viewpoint. One method is to position the camera at the viewpoint and project the scene onto a cube by rendering six perspective views, each with a 90° view angle and each looking down an axis of a coordinate system with its origin at the viewpoint. Thus, a cube projection can be created with any rendering program that can produce perspective views.

The environment map used to shade the lizard in Figure 1 is a cube projection created in this manner; in Figure 2 it is unfolded.
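The six-view construction can be summarized as six camera setups, one per cube face. In the sketch below (an illustration, not the article's code), the `render` callback stands in for whatever perspective renderer is available, and the face names and axis conventions are assumptions.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# One camera setup per cube face: (face name, forward axis, up vector).
# Rendering the scene six times with a 90-degree view angle, once per
# setup, yields the six faces of a cube projection; the six frusta
# exactly tile the sphere of directions.
CUBE_FACES = [
    ("+x", ( 1, 0, 0), (0, 1, 0)),
    ("-x", (-1, 0, 0), (0, 1, 0)),
    ("+y", ( 0, 1, 0), (0, 0, -1)),
    ("-y", ( 0, -1, 0), (0, 0, 1)),
    ("+z", ( 0, 0, 1), (0, 1, 0)),
    ("-z", ( 0, 0, -1), (0, 1, 0)),
]

def cube_views(render):
    # `render` is any perspective renderer taking (forward, up, fov_deg);
    # it stands in for the rendering program mentioned in the text.
    return {name: render(fwd, up, 90.0) for name, fwd, up in CUBE_FACES}
```

Because each view uses an ordinary perspective projection, any renderer capable of perspective views can produce the cube faces, exactly as the text notes.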

Miller and Hoffman have also used this method of creating a cube projection as an intermediate step in making a latitude-longitude projection of the world, the format they prefer for environment mapping.2 Blinn and Newell use a latitude-longitude projection also.1 Hall prefers plotting longitude against sine of latitude so that pixels in the projection correspond to equal areas of "sky."6

Surface shading

Light reflected by a surface is assumed to have a diffuse component and a specular component. The diffuse component represents light scattered equally in all directions; the specular component represents light reflected at or near the mirror direction.

This discussion of surface shading will focus on obtaining diffuse and specular illumination from an environment map. The problem of combining this information with surface properties (color, glossiness, etc.) to determine the color reflected at a surface point is beyond the scope of this article, but

Figure 2. Unfolded cube projection of a 3D environment.

the following formula is a rough rule for shading monochrome surfaces at a surface point:

reflected color = dc × D + sc × S,

where

dc is the coefficient of diffuse reflection
D is the diffuse illumination
sc is the coefficient of specular reflection
S is the specular illumination
(dc and sc depend mainly on surface glossiness)

No ambient term is required, since the diffuse illumination component accounts for all illumination from the environment.

Cook and Torrance7 provide a general discussion

of surface shading; Miller and Hoffman2 discuss shading in the context of environment mapping.
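The shading rule above is a direct per-channel combination. A minimal transcription follows (treating D and S as per-channel colors and dc, sc as scalars is an assumption; the article describes the monochrome case).

```python
def reflected_color(dc, D, sc, S):
    # reflected color = dc * D + sc * S, applied per channel.
    # No ambient term is needed: the diffuse component D already
    # gathers all illumination from the environment.
    return tuple(dc * d + sc * s for d, s in zip(D, S))
```

Setting dc = 1, sc = 0 gives a Lambertian reflector; dc = 0, sc = 1 gives a perfect mirror, matching the two extreme spheres of Figure 8 discussed later.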

Texture filtering

Environment mapping is a form of texture mapping wherein the texture applied to 3D surfaces is represented in an environment map. The diffuse and specular illumination impinging on a region of a surface can be found by texture filtering regions of the environment map with a space-variant filter.

Diffuse illumination at a surface point comes from the hemisphere of the world centered on the surface normal, and it can be found by filtering the region of the environment map corresponding to this hemisphere. Filtering should be done according to Lambert's law, which states that the illumination coming from a point on the hemisphere should be weighted by the cosine of the angle between the direction of that point and the surface normal. Since a hemisphere represents half of the environment


Figure 3. Rays from viewpoint through corners of a screen pixel are reflected by a surface. These four direction vectors define a "reflection cone" with a quadrilateral cross section.

map, filtering methods that consider texture pixels individually are very slow. A method for obtaining diffuse illumination by table lookup is discussed later.

Specular illumination can also be found by filtering a region of the environment map. Figure 3 shows the quadrilateral cone of "sky" reflected by the region of a surface subtended by a screen pixel. To find the four direction vectors defining this "reflection cone," we assume that the surface is a perfect mirror, and we reflect each ray with respect to the surface normal where it intersects the surface. Actually, this simplifies the geometry somewhat, since we would not expect the beam reflected by a nonplanar surface to have a precisely quadrilateral cross section.

Given a reflection cone, an obvious approach to determining the specular illumination is to find the subtended region of the environment map and then average the pixel values within this region. (Using an unweighted average of the pixel values is equivalent to filtering the region with a box filter.)
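The cone construction and the box-filter average can be sketched as follows (an illustration under the stated assumptions, not the article's implementation: corner rays and per-corner normals are supplied by the caller).

```python
def reflect(incident, normal):
    # Mirror reflection about a unit normal: R = I - 2 (N . I) N
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def reflection_cone(corner_rays, normals):
    # Reflect the ray through each corner of the screen pixel about the
    # surface normal at its intersection point; the reflected directions
    # bound the quadrilateral reflection cone of Figure 3.
    return [reflect(r, n) for r, n in zip(corner_rays, normals)]

def box_filter(pixels):
    # Unweighted average of the environment-map pixels subtended by the
    # cone -- equivalent to filtering the region with a box filter.
    n = float(len(pixels))
    return tuple(sum(channel) / n for channel in zip(*pixels))
```

As the text notes, this gives reasonable but not optimal results; better filters weight pixels by proximity to the region center.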

Figures 4a and 4b show regions subtended by two quadrilateral reflection cones in environment maps represented in latitude-longitude format (Figure 4a) and cube format (Figure 4b). The red regions are subtended by a relatively wide reflection cone, and the blue regions are subtended by a much narrower reflection cone. As is apparent from Figure 3, the "width" of a reflection cone depends on surface curvature, so reflection cones vary widely in size. In practice, regions subtended by reflection cones are

Figure 4. Regions subtended by two quadrilateral reflection cones in (a) latitude-longitude projection

Figure 5. Reflection cone defined by viewpoint and elliptical screen pixel is approximated by an elliptical cone. Concentric ellipses within an elliptical screen pixel are approximated by concentric elliptical cones.

typically smaller than the regions illustrated in Figures 4a and 4b, since relatively flat surfaces generate narrow reflection cones. This is not necessarily the case, however; reflection cones may be very wide on average if bump mapping is performed.

Determining the specular illumination impinging on a surface by constructing quadrilateral reflection cones and then averaging subtended pixels in the environment map gives results that are reasonable but not optimal. For theoretical reasons, filtering should not be restricted to the region of texture



and (b) cube projection. In cube format, regions on cube faces are always polygonal.

Figures 6a and 6b show regions subtended by concentric elliptical cones which correspond to the quadrilateral cones represented in Figures 4a and 4b. Regions subtended by elliptical cones in cube-format environment maps are always bounded by second-degree curves: ellipses, hyperbolas, and parabolas (for example, the concentric curves in Figure 6b).

corresponding to screen pixel bounds, and averaging of pixel values should be weighted by proximity to the center of the region being filtered.8

More accurate filtering is achieved by treating pixels as ellipses that are somewhat larger than the rectangular pixels defined by the raster grid and weighting pixel values according to displacement from pixel center (for example, weighting according to a Gaussian hump with an elliptical base8). As shown in Figure 5, elliptical pixels produce approximately elliptical reflection cones, and concentric elliptical cones corresponding to concentric ellipses within an elliptical screen pixel subtend regions of equivalent proximity to pixel center.

Figure 6. Regions subtended by two sets of concentric elliptical reflection cones in (a) latitude-longitude projection and (b) cube projection. In cube format, intersection lines are always second-degree curves: ellipses, hyperbolas, or parabolas.

Prefiltered environment maps

Since the width of a reflection cone depends on surface curvature, the area of "sky" reflected at a pixel can be arbitrarily large. A sphere covering a single pixel, for example, reflects nearly the entire world. So determining specular illumination normally requires filtering large areas of the environment map for some pixels, and filtering methods that evaluate texture pixels individually are very slow.

Approximate filtering techniques that use prefiltered texture are much faster. Several methods have been developed that have low "constant cost" per output pixel; that is, filtering time is independent of the size of the region being filtered. However, these techniques are presently limited to filtering rectangular or elliptical areas oriented parallel to the coordinate axes of the environment map.9

Constant-cost filtering is problematical when environment maps are represented in latitude-longitude format, because regions subtended by wide



Figure 7. Magnified diffuse illumination map of the environment of Figure 2 in latitude-longitude format. Table resolution is 40 × 20.

Figure 8. Spheres reflecting the environment of Figure 2. Left-hand sphere is a Lambertian reflector; right-hand sphere is a perfect mirror.

reflection cones are generally irregular (for example, the red regions of Figures 4a and 6a). With cube-format environment maps, on the other hand, reflection cones subtend fairly regular regions. The regions are always polygonal for quadrilateral reflection cones (as in Figure 4b), and for elliptical reflection cones they are bounded by second-degree curves (as in Figure 6b).

With our present implementation of environment mapping, specular illumination is obtained from a prefiltered cube projection in the form of six "mip map" image pyramids,10 one for each cube face. Texture filtering with mip maps offers fast, constant-cost performance, but it is limited to filtering square regions of texture with a box filter, so results are only approximate. In the case of Figure 4b, for example, each of the polygonal areas on the cube faces would need to be approximated by a square. Crow11 and Perlin12 have proposed similar and somewhat more general approximate filtering techniques with constant-cost performance.13

We turn now to applying prefiltered texture to find diffuse illumination. Miller and Hoffman have shown how a diffuse illumination map can be created by convolving an environment map with a Lambert's law cosine function, a kernel covering one hemisphere of the world.2 This map, which is indexed by surface normal, can be thought of as indicating colors reflected by a monochrome spherical Lambertian reflector placed at the world center.

Since a diffuse illumination map has little high-frequency content, it can be computed and stored at low resolution and accessed with bilinear interpolation. Thus, prefiltering the environment map in this manner reduces the problem of finding the diffuse illumination at a surface point to a table lookup.

Figure 7 is a magnified diffuse illumination map in latitude-longitude format corresponding to the environment of Figure 2. Since diffuse illumination maps usually change only subtly from frame to frame, for animation applications it may suffice to create these maps at intervals (say, every tenth frame) and interpolate between them.
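The table lookup can be sketched as follows (an illustration, with a latitude-longitude table layout and y-up orientation assumed): the surface normal selects a fractional position in the low-resolution grid, and bilinear interpolation smooths between the sparse entries.

```python
import math

def sample_diffuse_map(table, normal):
    # `table` is a low-resolution latitude-longitude grid of colors
    # (rows of columns), indexed by unit surface normal.
    h, w = len(table), len(table[0])
    x, y, z = normal
    u = (math.atan2(z, x) + math.pi) / (2.0 * math.pi) * (w - 1)
    v = (math.asin(max(-1.0, min(1.0, y))) + math.pi / 2.0) / math.pi * (h - 1)
    i0, j0 = int(v), int(u)
    i1, j1 = min(i0 + 1, h - 1), min(j0 + 1, w - 1)
    fv, fu = v - i0, u - j0

    def lerp(a, b, t):
        return tuple(pa + (pb - pa) * t for pa, pb in zip(a, b))

    # Bilinear interpolation between the four surrounding table entries.
    top = lerp(table[i0][j0], table[i0][j1], fu)
    bot = lerp(table[i1][j0], table[i1][j1], fu)
    return lerp(top, bot, fv)
```

Because the table is as small as 40 × 20 (Figure 7), the lookup costs almost nothing per shaded point compared with convolving the full environment map.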

Figure 8 shows three monochrome spheres reflecting the environment of Figure 2. The relative weighting of the diffuse and specular reflection coefficients (dc and sc in the shading formula given earlier) varies from completely diffuse on the left (a Lambertian reflector) to completely specular on the right (a perfect mirror). The left-hand sphere was shaded from the diffuse illumination table of Figure 7; the right-hand sphere was shaded by filtering mip map image pyramids made from the projection of Figure 2; and the middle sphere was shaded by blending values obtained with both techniques.

Hybrid texture filtering

Figure 6b suggests that a filter capable of filtering an arbitrarily oriented elliptical area could perform very accurate filtering of environment maps in cube format, provided the filter allows the elliptical area to exceed the bounds of the texture map. Actually, the concentric curves in Figure 6b include hyperbolas in addition to ellipses, but these patterns can be closely approximated by concentric ellipses.

According to Heckbert's survey of texture mapping techniques (included in this issue of IEEE CG&A),9 no existing constant-cost filters are capable of filtering arbitrarily oriented elliptical areas, but a method having nearly constant-cost performance has been described.8,9 Filtering any elliptical area can be closely approximated by filtering a small elliptical area at the appropriate level in a prefiltered image pyramid. Actually, filtering at two adjacent levels in the image pyramid and interpolating the results as described by Williams10 is preferable. Within the image pyramid, high-quality filtering is performed on arbitrarily oriented elliptical areas, and texture samples are weighted by proximity to ellipse center. The elliptical weighted average filter8 provides such capabilities. Although this hybrid filtering method is more costly than constant-cost methods, it offers quality approaching that of direct convolution.


In sum, the combination of environment maps in cube format, elliptical reflection cones, and the above hybrid filtering method is proposed as an efficient and highly accurate method of obtaining specular illumination for environment mapping.

Cube projections vs. latitude-longitude projections

The fast filtering methods described above produce more accurate results when environment maps are represented in cube format rather than latitude-longitude format, because the regions to be filtered are more regular. Cube projections have other advantages as well.

As previously described, a cube projection of a 3D environment can be rendered directly as six perspective views. Producing the corresponding latitude-longitude projection usually involves rendering a cube projection and then converting to a latitude-longitude projection by filtering the cube faces. (This is not always required; for example, ray tracers can produce a latitude-longitude projection directly.) This conversion requires additional computation, and the added generation of filtering can only degrade image clarity.

Moreover, latitude-longitude projections are nonlinear in the sense that plotting 3D direction vectors requires trigonometric operations. Projecting direction vectors onto a unit cube is much faster because it requires only division. This is a significant computational advantage, given the frequency with which this operation must be performed.
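The division-only projection can be sketched directly (face names and sign conventions are illustrative assumptions): the dominant axis of the direction vector selects the cube face, and dividing the two remaining components by the dominant one yields the face coordinates.

```python
def project_to_cube(d):
    # Project a direction vector onto the unit cube. Only comparisons
    # and two divisions are needed -- no trigonometry, unlike the
    # latitude-longitude mapping.
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return ("+x" if x > 0 else "-x", y / ax, z / ax)
    if ay >= az:
        return ("+y" if y > 0 else "-y", x / ay, z / ay)
    return ("+z" if z > 0 else "-z", x / az, y / az)
```

Compare this with the latitude-longitude case, which requires an arctangent and an arcsine per lookup.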

Cheap chrome effects

Environment mapping also has wide application where the objective is to produce a striking visual effect without particular regard for realism. Often the intent is to give objects a chrome-plated look, and the content of reflections is unimportant. In Robert Abel and Associates' "Sexy Robot" animation, for example, the "reflection map" was a smooth color gradation from earth colors at low elevations to sky colors at high elevations, and at a given elevation color was constant.14 (Of course, this is not a true reflection map.)

For applications of this sort, a color map or other one-dimensional color table suffices to specify the color gradation, and rendering computation can be greatly reduced by filtering this table instead of performing texture filtering. Since an environment map is not used, memory requirements are reduced accordingly, and no setup time is required to make and prefilter a world projection. Highlights can be produced by adding point light sources independently of environment mapping.
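The one-dimensional lookup can be sketched as follows (an illustration; the ramp contents and indexing scheme are assumptions): only the elevation of the reflected direction selects a color, so no 2D environment map is stored at all.

```python
def chrome_color(reflected_dir, ramp):
    # `ramp` is a small 1D table running from earth colors at the
    # bottom to sky colors at the top. The reflected direction's
    # elevation alone selects the entry; azimuth is ignored, since at
    # a given elevation the color is constant.
    x, y, z = reflected_dir
    length = (x * x + y * y + z * z) ** 0.5
    elevation = y / length                    # -1 (straight down) .. 1 (up)
    t = (elevation + 1.0) / 2.0
    index = min(int(t * len(ramp)), len(ramp) - 1)
    return ramp[index]
```

Filtering this table is far cheaper than texture filtering a world projection, which is the point of the technique.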

Modeling background objects with world projections

While the discussion thus far has been confined to using world projections for surface shading, they can also be used to model background objects.15 The sky component in frames of moving-camera animation can be modeled as a half-world projection, for example, the projection onto the upper half of a cube. As the camera moves through the scene, the appropriate region of the projection comes into view.

Of course, the advantage of this technique is speed: A scene element (in this case the sky) can be rendered from the world projection with texture mapping which, for complex scenes, is much faster than rendering the corresponding 3D models. This method assumes that objects in the projection are a great distance from the camera, and results are only approximate when this is not the case.

Figure 9 is a frame from animation in which the

sky component was rendered from a half-world projection made from painted panels.

Since half-world models cover the whole sky, they are particularly useful for creating world projections for environment mapping. The sky component of Figure 2 was modeled by projecting a 180° fish-eye photograph of sky onto a half cube, shown in isolation in Figure 10. This model served two purposes in producing the picture of Figure 1: modeling the sky in the background, and making the environment map used to shade the lizard.

Making sky models from photographs is an attractive option because of the difficulty of synthesizing complex sky scenes with realistic cloud forms and lighting. A 180° fish-eye lens (round type) allows the whole sky to be photographed at once, thus avoiding problems associated with making photo mosaics.

In animated scenes where the camera rotates but doesn't change location, this approach to modeling sky can be extended to modeling the whole background. In this case a single world projection centered at the camera is generated, and the background in the frames of animation is rendered directly from this projection. The geometry of the scene is faithfully reproduced regardless of how far objects in the scene are from the camera; results are not just approximate.

This technique can also be applied to moving-camera animation in some situations. A world projection of the distant environment centered at a typical camera position can be used to render distant objects in the scene while the near environment is


Figure 9. Frame from animation using a half-world sky model made from painted panels. (Animation: "Three Worlds" by the author, 1983.)

Figure 10. Half-world sky model made from a 180° fish-eye photograph.

Figure 11. Omnimax projection of the environment of Figure 2.

rendered from 3D models at each frame and then composited with the distant environment. This approach is analogous to using foreground and background levels in cel animation. As before, it is convenient to represent the world as a cube projection, since it can be rendered directly from a 3D model of the scene.

Assuming that the world is represented as a cube projection, the cube faces should have substantially higher resolution than the output frames, since only a small part of the world is subtended by the viewing pyramid for typical view angles. For example, a viewing pyramid with a 3:4 aspect ratio and a 45° vertical view angle subtends only about 6 percent of the world.

Rendering background objects from a world projection is a form of texture mapping, since each pixel in the output image is rendered by determining the corresponding region in the world projection and then filtering a neighborhood of this region. Since each output pixel corresponds to relatively few pixels in the world projection, high-quality filtering methods that consider texture pixels individually are appropriate (as opposed to approximate methods using prefiltered texture).

This approach to rendering background objects

suggests a method for performing motion blur. The region of texture traversed by the projection of an output pixel in the time interval between frames is determined, and then this region is filtered. The simplicity of performing accurate motion blur in this situation is due to the two-dimensional nature of the model.

For this application, a space-variant filter should be employed because different output pixels may traverse differently shaped paths. (This occurs, for example, if the camera rolls.) Approximate results can be obtained by filtering elliptical regions in the world projection with a filter capable of filtering arbitrarily oriented elliptical areas such as the elliptical weighted average filter.8
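The coverage figure quoted earlier (a viewing pyramid with a 3:4 aspect ratio and a 45° vertical view angle subtending about 6 percent of the world) can be checked with the standard solid-angle formula for a rectangular pyramid; this sketch is a verification added for illustration, not part of the original article.

```python
import math

def frustum_world_fraction(vertical_fov_deg, aspect):
    # Solid angle of a rectangular viewing pyramid as a fraction of the
    # full sphere (4 pi steradians). `aspect` is height / width, so a
    # 3:4 frame has aspect 0.75. Omega = 4 asin(sin(a/2) sin(b/2)),
    # where a and b are the vertical and horizontal view angles.
    half_v = math.radians(vertical_fov_deg) / 2.0
    half_h = math.atan(math.tan(half_v) / aspect)
    omega = 4.0 * math.asin(math.sin(half_v) * math.sin(half_h))
    return omega / (4.0 * math.pi)
```

Evaluating `frustum_world_fraction(45.0, 0.75)` gives roughly 0.059, consistent with the "about 6 percent" figure in the text.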

Nonlinear projections

In addition to their use in generating perspective views of an environment, world projections can be used to create nonlinear projections such as the fish-eye projection required for Omnimax frames.

An Omnimax theater screen is hemispherical, and film frames are projected through a fish-eye lens.16 Omnimax projections of 3D scenes have been obtained by projecting the scene onto a cube centered at the camera and then filtering regions of the cube faces to obtain pixels in the output image.8 This technique is similar to the method described above for rendering background objects from world projections. Figure 11 is an Omnimax projection made from the cube projection of Figure 2.
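The essential step, mapping each output pixel to a world direction, can be sketched for an idealized equidistant fish-eye covering a hemisphere; the actual Omnimax format is tilted and covers somewhat different angles, so this is only the basic mapping, added for illustration.

```python
import math

def fisheye_pixel_direction(px, py, size):
    # Idealized equidistant fish-eye: the angle from the optical axis
    # grows linearly with distance from the image center. Each output
    # pixel's direction is then used to select and filter a region of
    # the cube faces.
    cx = cy = (size - 1) / 2.0
    dx, dy = px - cx, py - cy
    r = math.hypot(dx, dy) / (size / 2.0)     # 0 at center, 1 at rim
    if r > 1.0:
        return None                            # outside the circular image
    theta = r * (math.pi / 2.0)                # angle from optical axis
    phi = math.atan2(dy, dx)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))                   # +z is the optical axis
```

With such a direction in hand, the cube-face lookup is the same division-only projection used for environment mapping.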


Conclusion

The projection of the environment onto a cube is a convenient and useful representation of the world for the applications cited in this article. In the case of environment mapping, cube projections allow relatively accurate texture filtering to be performed with fast, constant-cost (or nearly constant-cost) methods.

Generally, the methods described are ways of approximating a three-dimensional problem with a two-dimensional problem to reduce computational expense. Environment mapping approximates ray tracing, and rendering background objects from world projections with texture mapping approximates image rendering from 3D models. The subjective quality of reality cues in images produced with these approximate methods often compares favorably with results obtained by more expensive image generation techniques. For complex environments, approximate techniques may be the only practical way of producing animation having the desired features with moderate computing resources.

Acknowledgments

My thanks to Paul Heckbert, who collaborated to incorporate environment mapping in his polygon renderer, devised the "elliptical weighted average filter," and provided a critical reading of this article. Thanks also to Nelson Max and John Lewis for their comments, and to Dick Lundin, who assembled and rendered the scene of Figure 1. Ariel Shaw and Rich Carter provided photographic assistance.

References

1. James F. Blinn and Martin E. Newell, "Texture and Reflection in Computer Generated Images," Comm. ACM, Vol. 19, No. 10, Oct. 1976, pp. 542-547.

2. Gene S. Miller and C. Robert Hoffman, "Illumination and Reflection Maps: Simulated Objects in Simulated and Real Environments," SIGGRAPH 84: Advanced Computer Graphics Animation Seminar Notes, July 1984.

3. Turner Whitted, "An Improved Illumination Model for Shaded Display," Comm. ACM, Vol. 23, No. 6, June 1980, pp. 343-349.

4. John Amanatides, "Ray Tracing with Cones," Computer Graphics (Proc. SIGGRAPH 84), Vol. 18, No. 3, July 1984, pp. 129-135.

5. Robert L. Cook, Thomas Porter, and Loren Carpenter, "Distributed Ray Tracing," Computer Graphics (Proc. SIGGRAPH 84), Vol. 18, No. 3, July 1984, pp. 137-145.

6. Roy Hall, "Hybrid Techniques for Rapid Image Synthesis," course notes, SIGGRAPH 86: Image Rendering Tricks, Aug. 1986.

7. Robert L. Cook and Kenneth E. Torrance, "A Reflectance Model for Computer Graphics," Computer Graphics (Proc. SIGGRAPH 81), Vol. 15, No. 3, Aug. 1981, pp. 307-316.

8. Ned Greene and Paul S. Heckbert, "Creating Raster Omnimax Images from Multiple Perspective Views Using the Elliptical Weighted Average Filter," IEEE CG&A, Vol. 6, No. 6, June 1986, pp. 21-27.

9. Paul S. Heckbert, "Survey of Texture Mapping," IEEE CG&A, Vol. 6, No. 11, Nov. 1986, pp. 56-67. An earlier version appeared in Proc. Graphics Interface 86, May 1986, pp. 207-212.

10. Lance Williams, "Pyramidal Parametrics," Computer Graphics (Proc. SIGGRAPH 83), Vol. 17, No. 3, July 1983, pp. 1-11.

11. Franklin C. Crow, "Summed-Area Tables for Texture Mapping," Computer Graphics (Proc. SIGGRAPH 84), Vol. 18, No. 3, July 1984, pp. 207-212.

12. Kenneth Perlin, "Course Notes," SIGGRAPH 85: State of the Art in Image Synthesis Seminar Notes, July 1985, pp. 297-300.

13. Paul S. Heckbert, "Filtering by Repeated Integration," Computer Graphics (Proc. SIGGRAPH 86), Vol. 20, No. 4, Aug. 1986.

14. Torrey Byles, "Displays on Display: Commercial Demonstrates State-of-the-Art Computer Animation," IEEE CG&A, Vol. 5, No. 4, Apr. 1985, pp. 9-12.

15. Ned Greene, "A Method of Modeling Sky for Computer Animation," Proc. First Int'l Conf. Engineering and Computer Graphics, Beijing, Aug. 1984, pp. 297-300.

16. Nelson Max, "Computer Graphics Distortion for Imax and Omnimax Projection," Proc. Nicograph 83, Dec. 1983, pp. 137-159.

Ned Greene has been a senior member of the research staff at the New York Institute of Technology Computer Graphics Lab since 1980. From 1978 to 1980 he was a programmer in computer-aided design for McDonnell Douglas Automation.

His research interests include graphics software for image synthesis and animation, and computer animation as an artistic medium. His computer images have been widely published, and excerpts from his animation have been televised in the US, Europe, and Japan. Greene holds a BA in mathematics from the University of Kansas. He is a member of ACM SIGGRAPH.

The author can be contacted at the NYIT Computer Graphics Lab, PO Box 170, Old Westbury, NY 11568.
