
Reverse engineering using optical 3D sensors

Harald Schönfeld, Gerd Häusler, and Stefan Karbacher

Chair for Optics, Optical Metrology Group University of Erlangen-Nuremberg, Germany

ABSTRACT

Optical 3D sensors are used as tools for reverse engineering: First the shape of an object is digitized by acquisition of multiple range images from different view points. Then the range images are registered and the data are turned into a CAD description, e.g. tensor product surfaces, by surface modeling software. For many applications, however, it is sufficient to generate a polyhedral surface. We present a nearly automatic procedure covering the complete task of data acquisition, calibration, surface registration and surface reconstruction using a mesh of triangles. Several measured objects, such as teeth, works of art and cutting tools, are shown.

Keywords: reverse engineering, calibration, registration, triangulation, surface reconstruction, 3D sensing, optical 3D sensor

1. INTRODUCTION

Optical 3D sensors have become powerful tools for reverse engineering. The shape of a three-dimensional object is digitized and turned into a CAD description. The reconstructed model can be handled like synthetic CAD data. This allows the processing of physical design models on a computer. Using CAM techniques like NC milling or stereolithography, three-dimensional replicas of the digitized objects can be made. In dentistry such methods are used to scan teeth or plaster casts and to produce crowns and inlays from the data automatically.

Depending on the size of the object, the desired field of view and the measurement uncertainty, different types of 3D sensors can be chosen to digitize the object by acquiring multiple range images from different view points. The raw data delivered by the 3D sensors are not well suited for direct use in CAD systems, as the data are given in the local coordinate system of the sensor. Moreover the range images do not really describe surfaces, but clouds of point coordinates in 3D space. The number of data points may be very large (from millions to hundreds of millions). Furthermore the data are usually distorted by measuring errors like noise, aliasing, outliers, etc.

Three main problems have to be solved to gain a complete surface description: the transformation of the single range images into one global coordinate system (calibration and registration), the reconstruction of the topology of the sampled object (triangulation or surface reconstruction), and the processing of the surface geometry in order to eliminate measuring errors and reduce data (surface modeling). In practice the last two points are closely related. At present, the most frequently used method for surface reconstruction is approximation with tensor product surfaces. Unfortunately such methods require much interactive control. A simpler and more accurate way is to generate a polyhedral surface (e.g. a triangular mesh), which is sufficient (and often desired) for visualization or CAM. Figure 1 illustrates the steps necessary to gain a complete model of the object surface.

We present a nearly automatic procedure covering the complete task of data acquisition, calibration, surface registration and surface reconstruction. Some examples, like teeth, human bodies, works of art and cutting tools, that were modeled with our software system SLIM3D, are shown.

Further author information - H.S.: Email: [email protected], WWW: http://www.physik.uni-erlangen.de/optik/haeusler/people/hs/hs_e.html G.H.: Email: [email protected], WWW: http://www.physik.uni-erlangen.de/optik/home.html S.K.: Email: [email protected], WWW: http://www.physik.uni-erlangen.de/optik/haeusler/people/sbk/sbk_home_e.html



Figure 1. Data acquisition, registration and surface reconstruction of a fire fighter helmet.

2. RELATED WORK

2.1 Optical 3D Sensors

The light sectioning sensor is a typical optical 3D sensor for reverse engineering. It is based on the triangulation principle1 shown in Figure 2. A light section is projected onto the object surface by a laser and observed by the camera under the triangulation angle θ. This transforms height differences on the object surface into lateral shifts on the CCD chip of the camera. Such sensors can be adapted to fields of view from a few mm up to one meter or more. The main disadvantage is the use of coherent laser light, which results in speckle. This sets a limit for the measurement uncertainty2 of this type of sensor. During the last years phase measuring triangulation3 has therefore become more common. As white light is used to project a pattern or grid onto the object, the measurement uncertainty achievable with this kind of sensor is much smaller. The field of view again is scalable from a few mm up to about one meter. As the complete field of view needs to be illuminated at once, its size is limited.

Figure 2. The principle of laser triangulation.
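The geometric relation illustrated in Figure 2 can be made concrete with a few lines of code. The following is a minimal sketch of an idealized model (our own illustration, not the authors' sensor software): a lateral spot shift on the CCD is converted into an object-side height difference via the imaging magnification and the triangulation angle. All parameter values are invented examples.

```python
import numpy as np

def height_from_shift(dx_pixels, pixel_pitch_mm=0.011, magnification=0.1,
                      theta_deg=30.0):
    """Convert a lateral spot shift on the CCD (in pixels) into an
    object-side height difference (in mm), assuming the simple model
    dz = dx_ccd / (M * sin(theta))."""
    dx_ccd = dx_pixels * pixel_pitch_mm   # shift on the chip in mm
    dx_object = dx_ccd / magnification    # back-projected onto the object
    return dx_object / np.sin(np.radians(theta_deg))

# Example: a 2-pixel shift at theta = 30 degrees
print(height_from_shift(2.0))
```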

For small objects with a rough surface, white light or multi-λ interferometry can be applied4,5,6. Broad band sources (e.g. LEDs), "chirped" lasers or a set of properly chosen wavelengths are used for illumination. The very speckles that disturb triangulation methods are utilized by white light interferometry. Within one speckle the phase is approximately constant. The localization of the maximum intensity in a speckle makes it possible to measure the distance of the object.
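The evaluation principle behind such sensors (see also the coherence radar in Section 4.3) can be sketched as follows. This is our own crude illustration, not the authors' implementation: per camera pixel, the depth-scan position of maximum interference contrast marks the surface; real systems evaluate the correlogram envelope more carefully. The frame stack and scan range below are invented example data.

```python
import numpy as np

def depth_from_correlogram(intensity, z_positions):
    """intensity: (n_steps, h, w) stack of camera frames recorded while the
    object is scanned in depth. Returns an (h, w) depth map in the units
    of z_positions."""
    # Crude contrast envelope: deviation of each frame from the scan mean.
    envelope = np.abs(intensity - intensity.mean(axis=0))
    peak_index = envelope.argmax(axis=0)        # per-pixel envelope maximum
    return z_positions[peak_index]

# Synthetic example: 200 depth steps of 4 um each (assumed values)
z = np.linspace(0.0, 0.8, 200)                  # scan positions in mm
frames = np.random.rand(200, 4, 4)              # stand-in for camera frames
print(depth_from_correlogram(frames, z).shape)  # -> (4, 4)
```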

2.2 Calibration

There are two basic approaches to the calibration of typical optical 3D sensors. Model based calibration tries to determine the parameters of a sensor model that describes the imaging properties of the sensor as closely as possible. A few test measurements are needed to determine, for example, the coefficients of distortion and other aberrations. Imaging errors that are not covered by the sensor model, however, may impair the calibration accuracy.

When using photogrammetric calibration the observing camera is calibrated first. A standard object with circular fiducials7 is translated, tilted and observed by the camera. Using bundle adjustment algorithms, all parameters of the camera model are determined8. In the next step the projector itself is calibrated. Duwe9 observes features in a scene with the calibrated camera and determines their image coordinates. Then a grid is projected into the scene to define a projector coordinate system. From the known image and grid coordinates, the projector coordinates are computed. Again bundle adjustment is used to finally calibrate the projector.

Another approach is to calibrate the sensor using an arbitrary calibration function whose parameters are determined by a series of test measurements of a known standard object. Usually a system of two polynomial functions is used to calibrate a light sectioning sensor. Johannesson10 uses a least mean squares method to determine the parameters of the calibration polynomials. Häusler et al.11 used the Levenberg-Marquardt algorithm to calibrate a light sectioning sensor. This is the basis of our algorithm for calibrating a phase measuring 3D sensor. The advantages of this approach are that no mathematical model of the sensor is necessary and that the underlying algorithms are straightforward and robust to implement. The disadvantages are the requirement of complex standards, which may limit the size of the field of view, and the fact that registration of multiple views is not implicitly solved during the calibration process.


2.3 Registration

The registration procedure is often reduced in a first step to the alignment of a pair of views. Again, this problem is split into two steps. First, the transformation between two neighboring views is roughly determined (coarse registration). In practice, this transformation is often found by manually selecting corresponding points in both views. Extraction of features, like edges or corners, combined with Hough methods, as in our algorithm, is used to compute a rough transformation more quickly12,13. Other methods try to determine the transformation parameters directly from the data, without explicit feature extraction. They are based on mean field theory, genetic algorithms14,15 or a curvature analysis.

After finding an approximate transformation, the fine registration minimizes the remaining deviations. The well known ICP algorithm16 is used to find for every point in one view the closest point of the surface in the second view. The established pairs of points are used as corresponding points to compute the transformation parameters. In every step of the iteration the correspondence is improved, finally leading to a minimum of the error function17. Due to non overlapping areas of the views and noise, this minimum is not guaranteed to be the desired global minimum, so additional processing of non corresponding points is often done18. Our algorithm combines the ICP with simulated annealing to avoid local minima. Kalman filters are also used for fine registration to find a more global solution19.

Due to the pairwise registration, small deviations of the transformation parameters may accumulate when assembling all views to build the complete object, so gap like errors may appear. Global registration is necessary to minimize the error over the whole object. Bergevin20 enhances the ICP algorithm by iterating over all views, instead of over pairs of views only. In21 spring forces between all views are introduced and a physical model is used to find the minimum of the potential, leading to a greater number of necessary iteration steps.

2.4 Surface Reconstruction and Modeling

For a few years mainly volumetric approaches were used for surface reconstruction. These are based on well established algorithms like marching cubes22. They generate approximated surfaces, so in contrast to graph theory error smoothing is carried out automatically. The method of Hoppe et al.23 is able to detect and model sharp object features but allows the processing of only some 10,000 points. Curless and Levoy24 can handle millions of data points, but only matrix-like structured range images can be used. No mesh thinning is done, so a huge amount of data is produced.

In general the usage of range images with topology information allows fast algorithms with accurate results. Several methods for merging multiple range images into triangular meshes have been proposed. Mesh zippering25 generates dense meshes of flat triangles, whereas our approach produces meshes of flat or curved triangles with curvature dependent density.

3. OVERVIEW

The reverse engineering procedure consists of the following steps:

Data acquisition: In order to digitize the whole surface of an object and to compensate for data loss due to shading and reflexes, multiple range images from different points of view are necessary. A convenient way to do this is to simply place the object into the sensor's field of view in different positions and take the images.

Calibration: The data delivered by the sensor are given in pixel coordinates in the local sensor coordinate system. These pixel coordinates need to be transformed into metrical, Euclidean coordinates.

Registration: The multiple views are aligned to each other by transforming them into a single global coordinate system. As the relative sensor position for each view is in general unknown, the transformation parameters must be determined by a localization algorithm. This task is divided into three steps: coarse registration for a rough estimation of the parameters, which allows huge initial deviations; fine registration for more exact pairwise alignment; and global registration for minimization of the error over the whole model.

Surface reconstruction: The multiple registered views are triangulated and merged into one surface model. A description of the surface is generated that consists of a mesh of curved triangles. The coordinates, just as measured by the sensor, are used as vertices. The original ordering of the data is lost, resulting in scattered data.

Surface modeling: The algorithm allows the elimination of measuring errors like noise, aliasing, calibration and registration errors by smoothing the surface normals first and then using a geometry filter to adapt the positions of the vertices.

Export: The mesh of curved triangles can be exported in standard 3D object file formats like VRML, OBJ, DXF and others.


4. 3D SENSORS

Depending on the object to be digitized, that is, its size and surface properties, the right 3D sensor needs to be chosen. We have developed a range of sensors for different tasks:

4.1 Light Sectioning Sensor

This triangulation sensor can be used for objects with a size of about 2 to 20 cm in its standard setup, but it can also be scaled to a field of view of more than one meter. The measurement uncertainty is about 1/1000 of the field of view in the longitudinal direction. As the object needs to be scanned, the measurement of one view takes about one minute.

4.2 Phase Measuring Sensor

This kind of triangulation sensor is currently available in two setups26. The small one has a field of view of 10 to 30 mm, the bigger one has a field of view of about 80 cm. They use an FLC or LCD display and a cylindrical lens to project a sinusoidal grid onto the object. One measurement takes about 1/3 of a second. It is possible to take multiple images of an object with different shutter times, so surfaces with large differences in reflectivity can be measured, too.

4.3 Coherence Radar

This sensor is based on white light interferometry. It is a high accuracy sensor for optically rough surfaces6. The measurement uncertainty depends only on the roughness of the surface (about 1 µm in the current setup). As illumination and observation are coaxial, no shading appears, unlike in triangulation. The objects are scanned in depth, so the measuring time depends on the height to be measured (currently 4 µm per second).

5. CALIBRATION

Optical sensors deliver their data at first in distorted coordinates (xs, ys, zs). This deformation takes place due to perspective, aberrations and other effects. For real world applications a calibration of the sensor is necessary that transforms the sensor raw data (xs, ys, zs) into the metrical, Euclidean coordinates (xm, ym, zm). We present a method for the calibration of optical 3D sensors. In the following example we calibrate a phase measuring, miniaturized sensor27: We use an aluminum block with 3 tilted planes that is moved to defined positions with a translation stage. From these measurements an arbitrary polynomial calibration function is computed. The advantages of this method, compared to some other methods, are that it is usable for any kind of sensor, no mathematical model of the sensor is necessary, no localization of features is required, and the small measurement uncertainty of the 3D sensor in the longitudinal direction is exploited.

The basic concept of this method11 is as follows: An arbitrary polynomial function is used as calibration function. Its parameters need to be determined by a series of measurements with the sensor. Instead of localizing small features, like circles or crosses, we detect the virtual intersections of the planes of a calibration standard. These intersections can be located very accurately, as their positions are calculated from thousands of points.

5.1 Measurement Phase

A block of aluminum with 3 tilted planes (Figure 3) is used, which is moved in the y direction in defined steps. After every step a picture is taken. In this way about 52 range images of the 3 planes are generated. 18 images of the same plane define a class of parallel calibration planes. Now both the measured planes and the real planes (given by the standard's geometry) are known.

Figure 3. Calibration standard with 3 tilted planes.

Figure 4. Example of a measurement with 8 images taken of the 3 planes.


5.2 Analytical Description in Sensor Coordinates

Surfaces of higher degree are fitted to the sensor data to determine a polynomial description of all measured calibration planes.

5.3 Computation of the Intersections

All polynomial functions of one class are intersected with all planes of the other two classes (Figure 5). The points of intersection are spread throughout the whole field of view of the sensor (Figure 6).
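For illustration, here is a minimal sketch of the intersection step under a simplifying assumption: the fitted surfaces are treated as flat planes in Hesse normal form, so three of them, one per class, intersect where a 3×3 linear system is satisfied. The paper actually intersects higher-degree polynomial surfaces; the example values below are arbitrary.

```python
import numpy as np

def intersect_three_planes(normals, offsets):
    """normals: (3, 3) array of plane normals n_i, offsets: (3,) array of
    d_i values, for planes given as n_i . x = d_i. Returns the common
    intersection point, provided the normals are linearly independent."""
    return np.linalg.solve(np.asarray(normals), np.asarray(offsets))

# Example: three mutually tilted planes
n = [[1.0, 0.0, 1.0], [-1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
d = [1.0, 0.0, 2.0]
print(intersect_three_planes(n, d))
```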

5.4 Computation of the Calibration Function

The positions of the intersections in metrical coordinates (xm, ym, zm) can be computed from the geometry of the standard and its translation. The general solution is rather lengthy, but for the arrangement of the planes on the standard we have chosen, a simple equation gives the coordinates of each intersection from the indices i1, i2, i3 of the planes of each plane class and the translation t of the standard:

$$x_m = \frac{t\,(i_1 - i_2)}{\sqrt{2}}, \qquad z_m = \frac{t\,(2 i_1 - i_2 - i_3)}{2}$$

Now approximating polynomial functions Px(xs, ys, zs), Py(xs, ys, zs), Pz(xs, ys, zs) can be computed that transform the measured intersection positions to the known positions in metrical coordinates. We use the method of polynomial regression to determine the parameters of the functions. The polynomial functions Px, Py, Pz are finally used to convert the data after the measurement to metrical coordinates.
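The regression step can be sketched as plain linear least squares over a monomial basis, following the order-4 choice reported in Section 5.5. This is our reconstruction of the idea, not the original code; the synthetic distortion in the example is invented.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomials(points, degree=4):
    """Design matrix of all monomials xs^a * ys^b * zs^c with a+b+c <= degree."""
    cols = [np.ones(len(points))]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(3), deg):
            cols.append(np.prod(points[:, idx], axis=1))
    return np.column_stack(cols)

def fit_calibration(sensor_pts, metric_pts, degree=4):
    """Least-squares coefficients of Px, Py, Pz (one column per coordinate)."""
    A = monomials(sensor_pts, degree)
    coeffs, *_ = np.linalg.lstsq(A, metric_pts, rcond=None)
    return coeffs

def apply_calibration(coeffs, sensor_pts, degree=4):
    return monomials(sensor_pts, degree) @ coeffs

# Example with synthetic intersection points and an invented distortion
xs = np.random.rand(500, 3)                          # sensor coordinates
xm = xs @ np.diag([1.1, 0.9, 1.2]) + 0.05 * xs**2    # fake metric positions
c = fit_calibration(xs, xm)
print(np.abs(apply_calibration(c, xs) - xm).max())   # residual close to 0
```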

Figure 5. Intersection of three planes, one from each class (in arbitrary units).

5.5 Results

Figure 6. Field of view with measured points of intersection (already shown in metric coordinates).

The calibration of one range image with 512 × 540 points takes about 3 seconds on a PC with a P166. The additional error due to the calibration is at present less than 50% of the measurement uncertainty of the sensor. It is sufficient to use polynomial calibration functions of order 4 in products of xs, ys and zs, as the coefficients of products of higher degree are very close to zero.


6. REGISTRATION

Multiple range images of the same object from different directions need to be taken in order to digitize the complete surface. If the 3D sensor is moved mechanically to defined positions, this information can be used to transform these images into a common coordinate system. But for many reasons it is desirable that this can be done with arbitrary, unknown sensor or object positions; e.g. if you want to scan the bottom of the object, you have to turn it upside down. So the necessary transformation needs to be computed from the contents of the images only.

6.1 Coarse Registration

We present a feature extraction based procedure for the registration of multiple views. It is independent of the sensor used to take the images. Thus it may be applied to small objects, like teeth, and to bigger objects, like busts.

Zero dimensional intrinsic features, e.g. corners, are extracted from the range images (or congruent gray scale images)13. The detected feature locations are used to calculate the translation and rotation parameters of one view relative to another. The unknown correspondence between the features located in the first and in the second view is determined simultaneously by Hough methods. To allow an efficient calculation using Hough tables, the single six-dimensional parameter space is separated into multiple two- and one-dimensional hyper spaces12.

In cases where intrinsic features are hard or impossible to detect (e.g. on the fireman's helmet in Figure 1) artificial ones may be used, which can be detected either automatically or manually.

In this way the views are aligned to each other pair by pair. Due to the limited localization accuracy of the features, deviations between the views remain that need to be eliminated by the next step of the procedure.
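As a concrete reference point for "calculating the translation and rotation parameters" from matched features: once corresponding points are known, the rigid transformation between two views has a closed-form least-squares solution via the SVD (the standard Kabsch/Horn fit). This sketch only illustrates that final computation, not the Hough-table decomposition described above; the test data are synthetic.

```python
import numpy as np

def rot_z(angle):
    """Rotation about the z axis, used only to build test data."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rigid_transform(p, q):
    """Least-squares R, t with q_i ~ R @ p_i + t for (n, 3) point arrays."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    return R, qc - R @ pc

# Example: recover a known rotation/translation from 100 matched points
rng = np.random.default_rng(0)
p = rng.random((100, 3))
R_true, t_true = rot_z(0.7), np.array([1.0, 2.0, 3.0])
q = p @ R_true.T + t_true
R, t = rigid_transform(p, q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```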

6.2 Fine Registration

A modified ICP algorithm is used for the minimization of the remaining deviation between pairs of views. The error minimization is done by simulated annealing, so in contrast to the classic ICP, local minima of the cost function may be overcome. The achievable accuracy conforms to the distortion of the views (noise) due to the measurement uncertainty of the sensor. As simulated annealing leads to slow convergence, computation time tends to be rather high. First results with a combination of simulated annealing and Levenberg-Marquardt, however, show even smaller remaining errors in much shorter time: the registration of one pair of views takes about 15 seconds on a PC with a P166 CPU. Figure 7 shows the result of the registration of two views. They were rotated by up to 50 degrees and 0.5% noise was added. The standard deviation of the error of the final result was σ = 0.1061 mm, compared to the artificial noise of σ = 0.1372 mm.

Figure 7. Fine registration of two noisy views with very high initial deviation (α = -30°, β = 50°, γ = 40°, x = 30 mm, y = 60 mm, z = -40 mm).
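A skeleton of the ICP loop underlying this fine registration step may be helpful. This is the classic algorithm16 in minimal form, our own sketch; the simulated annealing (or Levenberg-Marquardt) layer that our method adds to escape local minima is omitted. rigid_transform is the SVD fit from the previous sketch.

```python
import numpy as np

def icp(src, dst, rigid_transform, n_iter=50):
    """Aligns point cloud src (n, 3) to dst (m, 3); returns accumulated R, t."""
    R_acc, t_acc = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iter):
        # Closest-point correspondences (brute force; fine for small clouds).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        # Best rigid transform for the current correspondences, then apply it.
        R, t = rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc
```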

6.3 Global Registration

With pair oriented registration alone, closed surfaces can not be registered satisfactorily. Due to an accumulation of small remaining errors (caused by noise and miscalibration) a chink frequently develops between the surfaces of the first and last registered view. In such cases, another iteration is started to eliminate this error and to find the global minimum of the error over all views. This is done by fixing one view and minimizing the error of all overlapping views simultaneously. Only about 5 global optimization cycles are necessary to reach the minimum, or at least an evenly distributed residual error. The time needed for the global registration of an object that consists of about 10 views is approximately one hour.

7. SURFACE RECONSTRUCTION

We present a new method to reconstruct the object surface from multiple registered range images. These consist of a matrix of coordinate triples (x, y, z)ij. The object surface may be sampled incompletely and the sampling density may vary, but it should be as high as possible. Beyond that, the object may have arbitrary shape, and the field of view may even contain several objects. The following steps are performed to turn these data into a single mesh of curved or flat triangles.

7.1 Mesh generation

Because of the matrix-like structure of the range images, it is easy to turn them into triangular meshes with the data points as vertices. For each vertex the surface normal is calculated from the normals of the surrounding triangles.
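Since the connectivity is implicit in the matrix structure, this step reduces to indexing. A minimal sketch (our own illustration, assuming an (h, w, 3) layout and ignoring invalid pixels):

```python
import numpy as np

def grid_mesh(points):
    """points: (h, w, 3) range image. Returns (vertices, triangles, normals),
    splitting each grid cell into two triangles and averaging face normals
    onto the vertices."""
    h, w, _ = points.shape
    verts = points.reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    tris = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    # Face normals, accumulated onto the three corner vertices of each face.
    e1 = verts[tris[:, 1]] - verts[tris[:, 0]]
    e2 = verts[tris[:, 2]] - verts[tris[:, 0]]
    fn = np.cross(e1, e2)
    vn = np.zeros_like(verts)
    for k in range(3):
        np.add.at(vn, tris[:, k], fn)
    vn /= np.linalg.norm(vn, axis=1, keepdims=True) + 1e-12
    return verts, tris, vn
```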

7.2 First smoothing

In order to utilize as much of the sampled information as possible, smoothing of measuring errors like noise and aliasing is done before mesh thinning.

7.3 First Mesh Thinning

Merging dense meshes usually requires too much memory, so mesh reduction must often be carried out in advance. The permitted approximation error should be chosen as small as possible, as ideally thinning should be done only at the end of the processing chain.

7.4 Merging

The meshes from different views are merged pairwise using local mesh operations like vertex insertion, gap bridging and surface growth (Figure 8). Initially a master image is chosen. The other views are merged into it successively. Only those vertices are inserted whose absence would cause an approximation error bigger than a given threshold.

7.5 Final Mesh Thinning

Mesh thinning is continued until the given approximation error is reached. For thinning purposes a classification of the surfaces according to curvature properties is also generated.

7.6 Geometrical Mesh Optimization

Thinning usually causes awkward distributions of the vertices, so that elongated triangles occur. Geometrical mesh optimization moves the vertices along the curved surface in order to produce a better balanced triangulation (Figure 9).
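One common way to realize such an operation, given here only as a stand-in for our own method, is tangential Laplacian smoothing: each vertex moves toward the centroid of its neighbors, with the displacement projected into the tangent plane so the vertex stays close to the curved surface. A minimal sketch, assuming a precomputed adjacency list:

```python
import numpy as np

def tangential_smooth(verts, normals, neighbors, lam=0.5):
    """verts, normals: (n, 3) arrays; neighbors: list of index arrays, one
    per vertex. Moves vertices toward their neighbor centroid within the
    local tangent plane."""
    out = verts.copy()
    for i, nb in enumerate(neighbors):
        if len(nb) == 0:
            continue
        d = verts[nb].mean(axis=0) - verts[i]       # pull toward centroid
        d -= np.dot(d, normals[i]) * normals[i]     # drop normal component
        out[i] = verts[i] + lam * d
    return out
```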

7.7 Topological Mesh Optimization

Finally the surface triangulation is reorganized using edge swap operations in order to optimize certain criteria. Usually, the interpolation error is minimized.


Figure 8. Merging of the meshes using vertex insertion, gap bridging and surface growth operations.

The result of this process is a mesh of curved triangles. Our new modeling method is able to interpolate curved surfaces solely from the vertex coordinates and the assigned normal vectors. This allows a compact description of the mesh, as modern data exchange formats like Wavefront OBJ, Geomview OFF or VRML support this data structure. The data may also be written in some other formats like DXF or STL. Currently no texture information is processed by the algorithm.
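As a concrete example of this compact description, a minimal Wavefront OBJ writer for vertices with per-vertex normals might look as follows (our sketch; verts and normals are (n, 3) float arrays, tris an (m, 3) integer numpy array as produced by the reconstruction):

```python
def write_obj(path, verts, normals, tris):
    """Write a mesh with per-vertex normals in Wavefront OBJ layout:
    'v' lines, 'vn' lines, and 'f v//vn' faces (indices are 1-based)."""
    with open(path, "w") as f:
        for v in verts:
            f.write("v {:.6f} {:.6f} {:.6f}\n".format(*v))
        for n in normals:
            f.write("vn {:.6f} {:.6f} {:.6f}\n".format(*n))
        for a, b, c in tris + 1:
            f.write(f"f {a}//{a} {b}//{b} {c}//{c}\n")
```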

Figure 9. Topological optimization of the mesh from Fig. 8 using edge swap operations, aiming at triangles that are as equilateral as possible.

8. SURFACE MODELING

Many of the errors that are caused by the measuring process (noise, aliasing, outliers, etc.) can be filtered at the level of the raw sensor data. A special class of errors (calibration and registration errors) first appears after merging the different views. We use a new modeling method28 that is based on the assumption that the underlying surface can be approximated by a mesh of circular arcs. This algorithm allows the elimination of measuring errors like noise, aliasing, calibration and registration errors without disturbing object edges. Filtering is done by first smoothing the normals and then using a geometry filter to adapt the positions of the vertices.
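The two-stage filtering idea can be sketched as follows. This is a simplified stand-in, not the circular-arc algorithm of reference 28: normals are first smoothed by neighbor averaging, then each vertex is displaced along its smoothed normal until it agrees with its neighborhood.

```python
import numpy as np

def smooth_normals(normals, neighbors, rounds=3):
    """Average each normal with its neighbors over a few rounds."""
    n = normals.copy()
    for _ in range(rounds):
        n = np.array([n[nb].sum(axis=0) + n[i] for i, nb in enumerate(neighbors)])
        n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
    return n

def geometry_filter(verts, smoothed_normals, neighbors, lam=0.5):
    """Move each vertex along its smoothed normal toward the plane that the
    neighbor centroid suggests, adapting positions to the new normals."""
    out = verts.copy()
    for i, nb in enumerate(neighbors):
        if len(nb) == 0:
            continue
        d = np.dot(verts[nb].mean(axis=0) - verts[i], smoothed_normals[i])
        out[i] = verts[i] + lam * d * smoothed_normals[i]
    return out
```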

9. EXAMPLES

Now we present some examples of objects that were digitized by our sensors and reconstructed with our SLIM3D software. For medical applications we measured a series of 26 tooth models. Figure 10 shows a reconstructed molar. On the left the surface after merging is shown, on the right the final result after automatic surface modeling. 10 views, corresponding to 13 MB of data with 472,506 valid points, were used as input for the algorithm. The final model requires 64,131 triangles occupying 0.87 MB. The standard deviation σ = 0.02 mm is of the same magnitude as the one originally delivered by the sensor.

Figure 11 shows a reconstructed fireman's helmet. 8 views with 874,180 valid points were merged. On the left the surface after merging shows a slight deformation in the overlapping area of some views, due to calibration or registration errors. The right picture shows that after modeling these errors were filtered out, without disturbing object edges. At the bottom the triangular mesh is shown, which now consists of only 33,298 triangles.

A rather complex object is shown in Figure 12. It is the reconstruction of a console of an altar, digitized for the Germanisches Nationalmuseum. 20 views were merged to cover the ancient, rough surface in detail. Problems during digitizing arose due to gold plating and different kinds of paint.

Figure 13 shows a cutting tool tip, measured by the coherence radar and Figure 14 shows a ceramic bust.


Figure 10. Result of the reconstruction of a plaster model of a human molar. Left: Surface after merging. Right: Surface after smoothing.

Figure 11. Reconstructed fireman's helmet. Left: With small, visible deformations of the surface after merging. Right: Result after smoothing. Bottom: The thinned triangular mesh.


Figure 12. Console of an altar.

Figure 13. Reconstruction of a cutting tool tip.

Figure 14. Reconstructed bust.

10. CONCLUSIONS

We have presented a nearly automatic procedure for digitizing the complete surface of an object and for the reconstruction of a model based on a mesh of curved triangles. The error of the registration of multiple views is of the same magnitude as the noise in the sensor data. The same applies to surface reconstruction and modeling. The procedure is well suited for metrology purposes, as the allowed approximation error in surface reconstruction is defined by the user and high accuracy can be achieved.

ACKNOWLEDGEMENTS

This work is supported by DFG grant #1319/8-2.


REFERENCES

1. K.H. Goh, N. Phillips, and R. Bell, The applicability of a laser triangulation probe to non-contacting inspection, International Journal of Production Research, 24(6):1331-1348, 1986.
2. R. Dorsch, J.M. Herrmann, and G. Häusler, Laser triangulation: Fundamental uncertainty of distance measurement, Applied Optics, 33(7):1306-1314, 1994.
3. M. Halioua, H. Liu, and V. Srinivasan, Automated phase-measuring profilometry of 3-D diffuse objects, Applied Optics, 23(18):3105-3108, 1984.
4. B.S. Lee and T.C. Strand, Applied Optics, 29:3784-3788, 1990.
5. Z. Sodurk, E. Fischer, T. Ittner, and H. Tiziani, Applied Optics, 30:3139, 1991.
6. G. Häusler and J. Neumann, Coherence radar - an accurate 3D sensor for rough surfaces, in D.J. Svetkoff, editor, Optics, Illumination, and Image Sensing for Machine Vision VII, SPIE Proc. 1822, Boston, Nov. 1992.
7. C.B. Bose and I. Amir, Design of fiducials for accurate registration using machine vision, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(12):1196-1200, 1990.
8. R. Kotowski, Photogrammetrische Bündelausgleichsrechnung zur 3D-Objektrekonstruktion und simultanen Sensorkalibrierung in der Nahbereichsphotogrammetrie, in H. Wolf, editor, Optische 3D-Formerfassung großer Objekte, 1. ABW-Workshop, Technische Akademie Esslingen, 1995.
9. H.P. Duwe, Vorstellung eines neuen 3-D Sensors zur Kombination photogrammetrischer Verfahren mit dem Codierten Licht Ansatz, in H. Wolf, editor, Optische 3D-Formerfassung großer Objekte, 2. ABW-Workshop, Technische Akademie Esslingen, 1996.
10. M. Johannesson, Calibration of a MAPP2200 sheet-of-light range camera, Proc. of the SCIA, Uppsala, June 1995.
11. G. Häusler, H. Schönfeld, and F. Stockinger, Kalibrierung von optischen 3D-Sensoren, Optik, 102(3):93-100, 1996.
12. G. Häusler, S. Karbacher, and D. Ritter, Fortschritte bei der Automatisierung des Reverse Engineerings, in H. Wolf, editor, Optische 3D-Formerfassung großer Objekte, 4. ABW-Workshop, Technische Akademie Esslingen, 1997.
13. G. Häusler and X. Laboureux, Corner detection in 2D video images, in Lehrstuhl für Optik, Annual Report, p. 28, Friedrich-Alexander-Universität Erlangen-Nürnberg, 1995.
14. A.J. Stoddart and A. Hilton, Registration of multiple point sets, in 13th Intl. Conference on Pattern Recognition, Vienna, Austria, 1996.
15. K. Brunnström and A.J. Stoddart, Genetic algorithms for free-form surface matching, in 13th Intl. Conference on Pattern Recognition, Vienna, Austria, 1996.
16. P.J. Besl and N.D. McKay, A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, February 1992.
17. K. Kanatani, Analysis of 3-D rotation fitting, IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):752-765, 1994.
18. Z. Zhang, Iterative point matching for registration of free-form curves and surfaces, Intl. Journal of Computer Vision, 13(2):119-152, 1994.
19. J. Feldmar and N. Ayache, Rigid, affine and locally affine registration of free-form surfaces, Intl. Journal of Computer Vision, 18(2):99-119, 1996.
20. R. Bergevin, M. Soucy, and H. Gagnon, Towards a general multi-view registration technique, IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(5), May 1996.
21. A.J. Stoddart and A. Hilton, Registration of multiple point sets, in 13th Intl. Conference on Pattern Recognition, Vienna, Austria, 1996.
22. W.E. Lorensen and H.E. Cline, Marching cubes: A high resolution 3D surface construction algorithm, Computer Graphics (SIGGRAPH '87 Proceedings), 21:163-169, July 1987.
23. H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle, Surface reconstruction from unorganized points, Computer Graphics (SIGGRAPH '92 Proceedings), 26:71-78, July 1992.
24. B. Curless and M. Levoy, A volumetric method for building complex models from range images, Computer Graphics (SIGGRAPH '96 Proceedings), pp. 303-312, August 1996.
25. G. Turk and M. Levoy, Zippered polygon meshes from range images, Computer Graphics (SIGGRAPH '94 Proceedings), pp. 311-318, July 1994.
26. G. Häusler, S. Kreipel, R. Lampalzer, A. Schielzeth, and B. Spellenberg, Proc. of the EOS Topical Meeting on Optoelectronics Distance Measurements and Applications, Nantes, July 1997.
27. R. Lampalzer, G. Häusler, and A. Schielzeth, Physikalische Grenzen von Triangulation mit strukturierter Beleuchtung - und wie man sie hinausschiebt, in H. Wolf, editor, Optische 3D-Formerfassung großer Objekte, 2. ABW-Workshop, Technische Akademie Esslingen, 1996.
28. S. Karbacher and G. Häusler, A new approach for modeling and smoothing of scattered 3D data, in these proceedings.