
Computational Science Technical Note CSTN-155

Review of Stereo Vision

T. S. Lyes

2011

Stereo vision is a method of rendering objects in such a way that the viewer perceives a more realistic looking object with the added illusion of depth. The technology has become increasingly popular in recent years in the movie and video gaming industries. A paper was written previously concerning the application of stereo vision technology to the viewing of scientific data sets, using the simplest of graphics card hardware. In this paper, a review of the stereo vision technology will be presented, including the hardware and software details, visualizations of some data sets rendered in stereo, as well as some performance analysis. Improvements since made to the code will be discussed, as well as improvements to the performance due to upgrading the professional graphics card used, and comparisons of rendering on different operating systems. Some follow-up ideas for future work are also put forward.

Keywords: stereoscopic; OpenGL; Nvidia; Quadro; projection; simulation; realism

BibTeX reference:

@TECHREPORT{CSTN-155,
  author      = {T. S. Lyes},
  title       = {Review of Stereo Vision},
  institution = {Computer Science, Massey University},
  year        = {2011},
  number      = {CSTN-155},
  address     = {Albany, North Shore 102-904, Auckland, New Zealand},
  note        = {In Proc. IIMS Postgraduate Student Conference, October 2011},
  keywords    = {stereoscopic; OpenGL; Nvidia; Quadro; projection; simulation; realism},
  owner       = {kahawick},
  timestamp   = {2012.05.03}

}

This is an early preprint of a Technical Note that may have been published elsewhere. Please cite using the information provided. Comments or queries to:

Prof Ken Hawick, Computer Science, Massey University, Albany, North Shore 102-904, Auckland, New Zealand. Complete list available at: http://www.massey.ac.nz/~kahawick/cstn


Submitted to IIMS Postgraduate Conference 2011 - Author for correspondence: [email protected]

Review of Stereo Vision

T.S. Lyes

Computer Science, Institute for Information and Mathematical Sciences,

Massey University, North Shore 102-904, Auckland, New Zealand

email: [email protected]

Tel: +64 9 414 0800 Fax: +64 9 441 8181

September 2011

ABSTRACT
Stereo vision is a method of rendering objects in such a way that the viewer perceives a more realistic looking object with the added illusion of depth. The technology has become increasingly popular in recent years in the movie and video gaming industries. A paper was written previously concerning the application of stereo vision technology to the viewing of scientific data sets, using the simplest of graphics card hardware. In this paper, a review of the stereo vision technology will be presented, including the hardware and software details, visualizations of some data sets rendered in stereo, as well as some performance analysis. Improvements since made to the code will be discussed, as well as improvements to the performance due to upgrading the professional graphics card used, and comparisons of rendering on different operating systems. Some follow-up ideas for future work are also put forward.

KEY WORDS
stereoscopic; OpenGL; Nvidia; Quadro; projection; simulation; realism

1 Introduction

In [1] a method was described to add stereoscopic vision support to an OpenGL program designed to visualize scientific data sets - specifically, the "Cubes" program [2]. The added stereoscopic support was meant to provide a better understanding when visualizing data sets - for example, the scientist can gain immediate insight into changes in the data by being "inside" the data set while adjusting parameters. Stereoscopic vision is better known for its applications in the video gaming and movie industries, and has also had smaller applications in other areas such as manufacturing [3], vehicle detection [4], and medical practice [5] [6].

The stereo rendering of the program was relatively successful; however, the project encountered limitations, mainly due to the hardware requirements needed to run stereo vision programs (as described in Section 2). In short, the graphics card ultimately used was an old model and thus much slower than expected, and it supported only the Windows operating system. As a follow-up to that paper, it was suggested that the program might perform better using a more suitable graphics card with support for both the Windows and Linux operating systems.

After successfully adding stereo support to the "Cubes" program, the next step was to ask whether the stereo code itself could be compartmentalized so that it could be used with any OpenGL visual simulation (not only the Cubes program). This is the main focus of this paper. Firstly, the hardware setup needed is described (Section 2). Then, an in-depth explanation of how stereoscopic vision works is presented, along with the code fragments used to program the stereo vision support (Section 3). In Section 4 some resulting screenshots of different data sets in stereo are shown, in Section 5 the performance, issues and difficulties of coding the program are discussed, and finally some conclusions and ideas for further work are offered in Section 6.


2 Setup

In order to set up a computer to run OpenGL stereo applications, specific hardware is needed. Consumer graphics cards are designed for use with consumer products such as video games or movies; they have no OpenGL quad-buffered stereo support and cannot be used. Nvidia Quadro graphics cards are professional versions of the Nvidia GeForce cards with added support for professional applications such as Maya and 3ds Max, as well as support for OpenGL stereo. Being "professional" graphics cards, they are a lot more expensive than their GeForce counterparts, even though the performance of the two types of card is actually very similar. The biggest difference is in the software support, which is why it is puzzling that support for an open standard such as OpenGL stereo is included only in the professional Quadro cards and not the consumer GeForce cards as well. Regardless, an Nvidia Quadro FX 380 graphics card was used to perform the simulations in [1].

Figure 1: The Nvidia Quadro 4000 graphics card

Since then, a new Quadro card was obtained, the Nvidia Quadro 4000, which is a much newer and more powerful card than the FX 380. What is particularly important about this card for our purposes is that it provided stereo support under the Linux operating system, which allowed us to run the stereo programs on both Windows and Linux rather than being restricted to a single operating system. The stereo drivers are not set up automatically and OpenGL stereo support is not enabled initially, so it is important to set these up properly before writing and testing stereo programs.
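
As a sanity check before going further, it is easy to confirm that the driver actually exposes a quad-buffered stereo visual. The following is a minimal sketch (not code from the original report), assuming GLUT or freeglut:

#include <GL/glut.h>
#include <cstdio>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    // Ask for a double-buffered RGB visual with depth and quad-buffered stereo.
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
    if (!glutGet(GLUT_DISPLAY_MODE_POSSIBLE)) {
        std::fprintf(stderr, "No quad-buffered stereo visual is available\n");
        return 1;
    }
    glutCreateWindow("stereo check");
    GLboolean stereo = GL_FALSE;
    glGetBooleanv(GL_STEREO, &stereo);   // true if the context has left/right buffers
    std::printf("GL_STEREO = %s\n", stereo ? "GL_TRUE" : "GL_FALSE");
    return 0;
}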

Also needed to set up the stereo workstation are the Nvidia stereo glasses and the infra-red transmitter used to synchronize the glasses with the monitor; these come with the Nvidia stereo vision package. Finally, a high refresh-rate monitor is needed. Stereo vision glasses work by rapidly switching each eye on and off (hence they are often known as shutter glasses), so the monitor must be able to refresh at twice the normal rate for the stereo image to appear clear. A refresh rate of 60 Hz per eye is enough to be undetectable to the human eye, so the monitor must have a refresh rate of 120 Hz so that it can swap images for the left and right eye without loss of image quality. The monitor we used is the Samsung 2233RZ (which comes recommended with the Nvidia stereo vision package).

3 Stereo Rendering

So how exactly does stereo vision work? Essentially, stereo vision is a method of rendering an image that tricks the viewer's brain into believing the image has depth, just like real-world objects. Viewing an image on a screen is different from viewing real-world objects because there is only one viewport, the screen, while real-world objects are viewed with two: the left and right eye. Each eye views a real-world scene slightly differently - the projections differ in horizontal position - and from this difference a perception of depth is achieved (this is known as horizontal, retinal or binocular disparity). The idea can easily be demonstrated by completely covering one eye with your hand and looking at a non-flat object. You might already have an idea in your mind of the true shape of the object if you have seen it before, but in terms of what you can see purely with your one open eye, you will find it difficult to distinguish exactly how deep some parts of the object go or how far other parts stick out. Upon opening the closed eye, you will find the depth perception to be much more apparent.

Figure 2: Nvidia stereo 3D glasses and the infra-red transmitter which synchronizes the glasses' shutter rate with the monitor's frame rate.

Thus, depth perception can be achieved on a normal screen display simply by adding a second viewport. But how can this realistically be added to a computer screen? This is where the technology explained in Section 2 comes in. 3D shutter glasses are used to present a different image to each eye. This is done by rapidly swapping two images on the screen and closing the lens of the eye opposite to the image currently being displayed. The glasses must be synchronized with the monitor so that when the left-eye image is displayed, the right-eye lens is closed, and vice versa. To ensure a smooth and realistic image, this shuttering and swapping of images occurs at a speed undetectable to the human eye (as explained in Section 2, a good shutter rate is 120 Hz). Once these images have been adequately provided, the human brain does the rest: it has effectively been 'tricked' into thinking the object has depth.

Figure 3: The off-axis stereo projection geometry.

Once the idea of stereo vision has been understood, the idea behind writing a stereo vision program becomes very simple: non-stereo programs are rendered into one buffer, so stereo programs are rendered into two buffers. In OpenGL, this is achieved using a technique called quad-buffering. Normal non-stereo programs often use a technique called double-buffering to achieve a smooth, non-choppy image - one buffer (called the front buffer) is displayed while the second buffer (called the back buffer) is rendered into, and then the two buffers are swapped to display the next frame. Quad-buffering is very similar, except that there are two front buffers and two back buffers. One front/back pair is rendered for each eye - the pairs are usually known as the left and right buffers, making four buffers in total: front-left, back-left, front-right and back-right. Front and back buffers are swapped each frame to give a smooth image for each eye. As long as the images in the left and right buffers differ in a believable way, stereo vision will be achieved. Conversely, if the viewpoints for each eye are too far apart, the depth illusion will be very poor.

Good viewpoints to use for stereo vision can be calculated in several ways. Bourke [7] describes two main ones. The first is known as the "toe-in" method, where both camera views are pointed at a single focal point; the second is known as the "off-axis" method, where the views of the two cameras are parallel to each other. The "off-axis" method is considered the superior of the two, as it causes less viewer discomfort than the toe-in method because it does not introduce a vertical parallax [8]. This is also the method employed in our code. Figure 3 illustrates the geometry of the "off-axis" method used to project the image in stereo 3D, showing the key distances of eye separation and approximate eye-screen distance.
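
Restating Bourke's off-axis formulation in the notation of the code fragment in Figure 4 (this summary is ours, not an equation from the original text): with near-plane distance $n$, vertical aperture $\theta$, focal length $f$, eye separation $e$ and window aspect ratio $r$, the half-height of the near plane is $w = n \tan(\theta/2)$ and the horizontal offset is $\delta = \frac{e}{2} \cdot \frac{n}{f}$. The left-eye frustum then spans $[-rw + \delta,\; rw + \delta]$ horizontally and the right-eye frustum spans $[-rw - \delta,\; rw - \delta]$, while both span $[-w,\; w]$ vertically.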

The following algorithm (Algorithm 1) shows how one would typically write a stereo program in OpenGL. The code should be written in the display callback function of the program. Writing the code in this way should (in theory) allow the stereo support to be added to any OpenGL program that renders its objects from simple blocks of code, simply by 'wrapping' the stereo code around the normal rendering code or function; a concrete sketch of such a callback is given after the algorithm.

Algorithm 1: The general method of rendering a stereo image

if rendering in stereo then
    update stereo camera, normals, viewpoints
    select right buffer
    render image
    select left buffer
    render image
end if
swap buffers
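
A concrete sketch of such a display callback is shown below. This is an illustration only: stereoEnabled, drawScene() and the two projection helpers are assumed names standing in for the host program's own code, with the helpers doing for each eye what the fragment in Figure 4 does for the left eye.

#include <GL/glut.h>

// Assumed stand-ins for the host application's existing code:
extern bool stereoEnabled;            // whether a quad-buffered context is in use
extern void setLeftEyeProjection();   // selects GL_BACK_LEFT and its off-axis frustum
extern void setRightEyeProjection();  // selects GL_BACK_RIGHT and its off-axis frustum
extern void drawScene();              // whatever the program normally renders

void display()
{
    if (stereoEnabled) {
        setRightEyeProjection();      // right buffer: clear, set frustum, place camera
        drawScene();
        setLeftEyeProjection();       // left buffer: ditto, shifted the other way
        drawScene();
    } else {
        glDrawBuffer(GL_BACK);        // ordinary double-buffered rendering
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene();
    }
    glutSwapBuffers();                // swaps both the left and right front/back pairs
}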

Finally, the code fragment in Figure 4 shows how to set up a stereo buffer for rendering. The image needs to be shifted slightly left or right (depending on the buffer) of the desired perceived image. This is achieved by first using glFrustum to calculate the correct viewing projection, followed by gluLookAt to position the buffer's viewpoint correctly and point it in the right direction. This change in position and direction depends on the focal length, aperture and eye-separation variables, as different viewers will find that different settings give the clearest image and put the least strain on the eyes (much like human eyes themselves, everyone's settings are different). These parameters can be altered in the stereo version of Cubes, but this support has not yet been added to the other test programs.

void Camera::LeftBuffer()
{
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    double ratio  = scrWidth / scrHeight;
    double wd2    = near * tan(RAD * aperture / 2);   // half-height of the near plane
    double ndfl   = near / focalLength;
    double left   = -ratio * wd2 + 0.5 * eyeSeparation * ndfl;
    double right  =  ratio * wd2 + 0.5 * eyeSeparation * ndfl;
    double top    =  wd2;
    double bottom = -wd2;
    glFrustum(left, right, bottom, top, near, far);   // asymmetric (off-axis) frustum

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(-Pos.x - norm.x, -Pos.y - norm.y, -Pos.z - norm.z,
              -norm.x, -norm.y, -norm.z,
              Up.x, Up.y, Up.z);
    glRotatef(-Rotation.x, 1.0, 0.0, 0.0);
    glRotatef( Rotation.y, 0.0, 1.0, 0.0);
}

Figure 4: OpenGL code outline for preparing the left "eye" buffer for rendering.
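
The matching right-eye setup is not shown in the original fragment; assuming the same camera placement, it would differ only in the buffer selected and the sign of the horizontal offset, roughly:

glDrawBuffer(GL_BACK_RIGHT);                                 // instead of GL_BACK_LEFT
double left  = -ratio * wd2 - 0.5 * eyeSeparation * ndfl;    // offset negated
double right =  ratio * wd2 - 0.5 * eyeSeparation * ndfl;
// ... the remainder as in LeftBuffer() above.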


4 Stereo Visualisation Results

As described in more detail in [1], the stereo visualization of the Cubes program was a success. It is important to note that showing the results of a stereo visualization in a report is very difficult: there is no way to truly demonstrate the stereo aspect of it. The resulting figures therefore show, for the most part, the visualization of the data sets as they would appear to the naked eye (that is, both buffers are shown, since they cannot be rapidly swapped on paper). With the exception of Figure 5, each buffer has been rendered in a different colour (red for the right buffer, blue for the left buffer) so that the two are easy to distinguish. Hopefully this gives an idea of exactly what the stereo code has done.

Figure 5: The 'scaffold' predefined test data set as it would appear on screen to the naked eye.

Figure 5 shows the predefined test data set "Scaffold" in the Cubes program. It shows how a stereo projection appears on the screen without the stereo glasses. Each cube-like block belongs to a separate buffer, each of which shutters on and off at rapid speed, making the true on-screen image look slightly more "faded" than this one. It is a good demonstration data set for the stereo projections, as it is very easy to distinguish parts of the hypercube based on their depth.

It is worth mentioning that the stereo vision of the Cubes program works best when voxels are rendered as spheres with added lighting effects; frames around voxels should be turned off, as should the hypercube grid (a minimal sketch of this rendering style is shown below). The further example programs were not as advanced as Cubes, so such rendering options were not implemented and the effects are less noticeable, as will be shown later.
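
The sketch below illustrates that rendering style for a single voxel; it is an assumption-laden outline (the function name and parameters are ours), not the Cubes program's actual drawing code.

#include <GL/glut.h>

void drawSphereVoxel(float x, float y, float z, float radius,
                     float r, float g, float b)
{
    glEnable(GL_LIGHTING);            // shading supplies the depth cues stereo relies on
    glEnable(GL_LIGHT0);
    GLfloat colour[4] = { r, g, b, 1.0f };
    glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, colour);
    glPushMatrix();
    glTranslatef(x, y, z);
    glutSolidSphere(radius, 16, 16);  // a modest tessellation is usually enough
    glPopMatrix();
}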

Figure 6 attempts to separate the different images for each eye. It features another data set from the Cubes program, the data set Biggest, referring to the biggest tree generated by an invasion percolation. To distinguish between the two buffers, the left buffer is coloured blue while the right is coloured red. This data set is particularly effective in stereo projection, as the voxels can be shifted by the user in any direction, allowing some voxels to appear at the extreme front or back of the image and strongly enhancing the stereo experience. Voxels at the extreme front of the image appear very close to the viewer, out from the screen, while voxels at the extreme back appear "inside" the screen.

Figure 7 is a rendering similar to Figure 6, where the right and left buffers are rendered in different colours for ease of distinction. It shows a randomly generated DLA (Diffusion Limited Aggregation) cluster. A DLA data set is generated by starting with a "cluster" of particles and a "walker" particle, which moves randomly; upon touching the cluster, the walker sticks and a new walker is generated (a minimal sketch of this growth rule follows below). The cluster shown is relatively small, as clusters are usually made up of several thousand particles or many more. This program was the first used to test the portability of the stereo code. As one might gather from the image, the stereo effect is less effective when used with cubes rather than spheres, mainly because of the lighting effects added to the spheres. Nevertheless, an illusion of depth is still achieved.
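
For illustration only, here is a hedged sketch of that growth rule on a cubic lattice; it is not the code that generated the data set shown, and the lattice size and helper names are arbitrary.

#include <cstdlib>
#include <vector>

const int N = 64;                        // lattice edge length (assumed)
std::vector<char> occ(N * N * N, 0);     // occupancy grid for the cluster

inline int idx(int x, int y, int z) { return (z * N + y) * N + x; }

bool nextToCluster(int x, int y, int z)
{
    const int d[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    for (const auto &v : d) {
        int nx = x + v[0], ny = y + v[1], nz = z + v[2];
        if (nx >= 0 && nx < N && ny >= 0 && ny < N && nz >= 0 && nz < N &&
            occ[idx(nx, ny, nz)])
            return true;
    }
    return false;
}

void growDLA(int nParticles)
{
    occ[idx(N / 2, N / 2, N / 2)] = 1;   // seed the cluster at the centre
    for (int p = 0; p < nParticles; ++p) {
        int x = std::rand() % N, y = std::rand() % N, z = std::rand() % N;
        while (!nextToCluster(x, y, z)) {            // walk until touching the cluster
            switch (std::rand() % 6) {               // unbiased nearest-neighbour step
                case 0: x = (x + 1) % N;     break;
                case 1: x = (x + N - 1) % N; break;
                case 2: y = (y + 1) % N;     break;
                case 3: y = (y + N - 1) % N; break;
                case 4: z = (z + 1) % N;     break;
                case 5: z = (z + N - 1) % N; break;
            }
        }
        occ[idx(x, y, z)] = 1;                       // the walker sticks; launch the next
    }
}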


Figure 6: Red and blue colours for each eye for the predefined data set 'biggest' - a cluster pulled out of a larger simulation data set.

Figure 7: A DLA (Diffusion Limited Aggregation) cluster viewed in stereo, with the red and blue colours representing the right and left buffers respectively.


Figure 8: A data map based on the random walk algorithm, viewed in stereo (red and blue).

Figure 8 is a screenshot of the second program used to test the stereo code's portability. The image shows a data set generated by a random-walk particle (which may only move in the x and z directions), where the height at any one cell on the map equals the number of times the walker has moved over that cell (a minimal sketch is given below). As with the DLA, the effect is less striking than in the Cubes program because the lighting and rendering options are not as advanced; however, a particularly good effect can be achieved when moving forwards through the "hills" generated by the walk.
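
A hedged sketch of that map generation follows; the grid size, step count and function name are assumptions, not the test program's actual code.

#include <cstdlib>
#include <vector>

std::vector<int> walkHeightMap(int N, long steps)
{
    std::vector<int> height(N * N, 0);   // one height value per (x, z) cell
    int x = N / 2, z = N / 2;
    for (long s = 0; s < steps; ++s) {
        switch (std::rand() % 4) {       // one step in +/-x or +/-z
            case 0: x = (x + 1) % N;     break;
            case 1: x = (x + N - 1) % N; break;
            case 2: z = (z + 1) % N;     break;
            case 3: z = (z + N - 1) % N; break;
        }
        height[z * N + x] += 1;          // the rendered "hill" height at this cell
    }
    return height;
}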

5 Discussion

Using the new Quadro 4000 graphics card improved performance substantially compared with the old Quadro FX 380. When rendering a test data set from the Cubes program in stereo, the Quadro 4000 rendered on average 1 frame every 178 milliseconds over 1000 frames, while the Quadro FX 380 rendered the same set at on average 1 frame every 278.3 milliseconds over 1000 frames - that is, the FX 380 achieved only about 64 per cent of the Quadro 4000's frame rate. The rendering options used were those which give the best stereo projection effect: sphere voxels, lighting on, and no grid or frames drawn. On the simplest of settings (non-stereo, cube voxels, no lighting) the Quadro 4000 rendered on average 1 frame every 7.9 milliseconds, while the Quadro FX 380 averaged 1 frame only every 16.7 milliseconds - less than half the speed of the Quadro 4000.

There appears to be a problem rendering textures in OpenGL stereo. A third program, a particle engine, was used to test the portability of the stereo code. The stereo projection for the most part worked, but textures would not render. In more detail: some particle engines use a technique known as billboarding to enhance the realism of the system without rendering additional geometry. Each particle is rendered as a two-dimensional square texture (a billboard) with added transparency, and the billboard always faces the camera (a sketch of the idea is given below). When rendering the system in stereo, the textures were not loaded and the particles were rendered as plain 2D squares instead. Although a perception of depth was achieved, the resulting visualization was not pleasing. It is not yet known whether OpenGL stereo disallows texturing altogether or whether some adjustments to the code are needed to add this support; this is an ongoing area of work. It would also be interesting to try rendering the particles as lit spheres, as in the Cubes program, although with a system of very many particles this might not be feasible in terms of performance.
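
For reference, the sketch below shows the billboarding idea in immediate-mode OpenGL; drawBillboard and its parameters are assumed names, not the particle engine's actual code.

#include <GL/gl.h>

void drawBillboard(GLuint textureId, float px, float py, float pz, float size)
{
    float m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    // The first two rows of the upper 3x3 (column-major storage) give the
    // camera's right and up directions in world space.
    float rx = m[0], ry = m[4], rz = m[8];
    float ux = m[1], uy = m[5], uz = m[9];
    float h = 0.5f * size;

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // use the texture's transparency

    glBegin(GL_QUADS);                                   // a quad that always faces the camera
    glTexCoord2f(0, 0); glVertex3f(px - (rx + ux) * h, py - (ry + uy) * h, pz - (rz + uz) * h);
    glTexCoord2f(1, 0); glVertex3f(px + (rx - ux) * h, py + (ry - uy) * h, pz + (rz - uz) * h);
    glTexCoord2f(1, 1); glVertex3f(px + (rx + ux) * h, py + (ry + uy) * h, pz + (rz + uz) * h);
    glTexCoord2f(0, 1); glVertex3f(px - (rx - ux) * h, py - (ry - uy) * h, pz - (rz - uz) * h);
    glEnd();

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}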


6 Conclusions

The theory behind rendering an object in OpenGL stereo has been explained. The portability of the stereo vision code used with the Cubes program was tested on several other OpenGL data-set rendering programs, with varying results. Improvements in the performance of the Cubes program were observed when the program was run on a Linux box rather than Windows, using the Quadro 4000 graphics card rather than the older Quadro FX 380. Significant improvements were found when running the program on the Quadro 4000. Difficulties were encountered when trying to render textured polygons in stereo; more investigation of this issue is underway and hopefully a solution can be found in the near future.

For other future work, it would be interesting to incorporate haptic devices to manipulate objects rather than using the mouse and keyboard. Haptic devices would allow the user to rotate or move the object using hand movements or other body parts. A Microsoft Kinect device, which is capable of recognizing human silhouettes and many other figures, has recently been acquired and should also work for this purpose. Head tracking is another area worth looking into: the camera moves as the user's head moves, rotating around the image and giving the effect of a 'virtual reality' environment. Coupling head tracking with stereo vision has been found to give better results than stereo vision alone, as several studies [9] [10] [11] have demonstrated.

In summary, there are still many interesting ways in which stereo projection can be enhanced to better experience and understand simulated data sets, and expensive hardware and complicated code are not necessarily required to do this.

References

[1] Lyes, T., Hawick, K.: Implementing stereo vision of GPU-accelerated scientific simulations using commodity hardware. In: Proc. International Conference on Computer Graphics and Virtual Reality (CGVR'11). Number CGV4047, Las Vegas, USA (2011)

[2] Hawick, K.: 3D visualisation of simulation model voxel hyperbricks and the Cubes program. Technical Report CSTN-082, Computer Science, Massey University (2010)

[3] Aguilar, J., Torres, F., Lope, M.: Stereo vision for 3D measurement: accuracy analysis, calibration and industrial applications. Measurement 18 (1996) 193-200

[4] Bertozzi, M., Broggi, A., Fascioli, A., Nichele, S.: Stereo vision-based vehicle detection. In: Proceedings of the 2000 IEEE Intelligent Vehicles Symposium. (2000)

[5] Maupu, D., Horn, M.H.V., Weeks, S., Bullitt, E.: 3D stereo interactive medical visualization. IEEE Computer Graphics and Applications 25 (2005) 67-71

[6] Webster, R., Haluck, R., Ravenscroft, R., Mohler, B., Crouthamel, E., Frack, T., Terlecki, S., Sheaffer, J.: Elastically deformable 3D organs for haptic surgical simulation. In: Medicine Meets Virtual Reality. IOS Press (2002)

[7] Bourke, P.: 3D stereo rendering using OpenGL (and GLUT). Website PDF (2002)

[8] Bourke, P.: Calculating stereo pairs. http://paulbourke.net/miscellaneous/stereographics/stereorender (1999)

[9] Ware, C., Franck, G.: Evaluating stereo and motion cues for visualizing information nets in three dimensions. ACM Transactions on Graphics 15 (1996) 121-140

[10] Ware, C., Franck, G.: Viewing a graph in a virtual reality display is three times as good as a 2D diagram. Technical report, Faculty of Computer Science, University of New Brunswick (1994)

[11] Deering, M.: High resolution virtual reality. Computer Graphics 26 (1992) 195–202