
Three-dimensional optical high-resolution profiler with a large observation field: foot arch behavior under low static charge studies

Jaime Meneses, Tijani Gharbi, and Jean Yves Cornu

Our aim is to describe a method for detecting small deformations from a three-dimensional (3D) shape of large lateral dimensions. For this purpose the measurement method is based on the simultaneous utilization of several 3D optical systems and the phase-shifting technique. In this way, the following problems appear: optical distortion due to the large field observed, nonlinear phase-to-height conversion, conversion of image coordinates into object coordinates for each 3D optical system, and coordinate unification of all optical systems. The resolution is 50 µm with a field of view of 320 mm × 150 mm. We used this system to study the 3D human foot arch deformation under low loads in vivo. First results indicate the hysteresis behavior of the human foot under a low load (50 to 450 N). © 2002 Optical Society of America

OCIS codes: 120.2650, 120.2830, 120.3890, 120.3940, 120.6650.

1. Introduction

Optical methods for surface profiling and shape measurement have found growing interest in many industrial applications such as object modeling, medical diagnostics, computer-aided design and computer-aided manufacturing, multimedia, and virtual reality systems. The ability to make such measurements remotely and without perturbation of the object is the reason for this growing interest. A comprehensive overview of shape measurements that use optical methods can be found in Refs. 1–3. Optical metrology can be classified into two categories, the passive method4–6 and the active method, depending on the illumination source used; many of these methods were described by Chen et al.2 In the first category the source of illumination for testing

J. Meneses is with the Grupo de Optica, Escuela de Física, Universidad Industrial de Santander, Bucaramanga, Colombia. T. Gharbi ([email protected]) is with the Laboratoire d'Optique P. M. Duffieux, Unité Mixte de Recherche au Centre National de la Recherche Scientifique 6603, Institut des Microtechniques, Université de Franche-Comté, 25030 Besançon CEDEX, France. J. Y. Cornu is with the Centre Hospitalier Universitaire de Besançon, 2 Place Saint Jacques, 25030 Besançon CEDEX, France.

Received 5 October 2001; revised manuscript received 24 January 2002.

0003-6935/02/255267-08$15.00/0 © 2002 Optical Society of America

objects comes from natural environments, in which the power and direction of the light source are not controlled. In contrast, in the second category the light source is controlled externally to obtain the three-dimensional (3D) profile of the objects. This broad panel of effective optical methods can satisfy the metrological constraints, particularly the resolution, the measuring range, and the lateral object dimensions,7 that are imposed on the measuring system. In many applications, the 3D shape of the object, the surface microstructure, or the surface deformation must be distinguished. Unfortunately, however, optical systems are bound to defined resolutions in fixed scale ranges. For instance, the 3D reconstruction of an adult human foot can be satisfactorily carried out with a fringe-projection system with an observation field of 320 mm × 150 mm and a resolution of 200 µm.7 This optical system is mainly used for obtaining biometrological parameters, but it is not practical for measuring the foot arch deformation under conditions of static load in vivo: the maximum height of the foot arch deforms by only 400 µm for 50 N of load applied vertically over the patient's knee. The essential problem to be solved is how to resolve small deformations from the 3D shape of large lateral dimensions. Andra et al.7 have proposed an active measurement approach for wide-scale 3D surface inspection called scaled topometry. The active measurement approach is a combination of methods with

1 September 2002 / Vol. 41, No. 25 / APPLIED OPTICS 5267

different resolutions and different depth-measuring ranges that ensure a wide global measuring range. However, this measurement approach decreases the observation field for the measurement of small details.

In this paper we present a 3D measuring system of high resolution over a wide observation field. The system measures an extended surface, ensuring a high resolution over the entire observation field. The measuring approach is based on the simultaneous utilization of several optical systems. The application of this system is illustrated by a study of the human foot's mechanical behavior under static load.

2. Description of the Measurement Concept

To guarantee a high resolution over a wide observation field, we base the measuring method on the simultaneous utilization of more than one 3D optical system. The spatial distribution of these 3D reconstruction systems must guarantee the required observation field. For the particular case of the measurement of the foot's median arch deformation, we have utilized two CCD cameras. Each image issued from the two cameras reconstructs part of the object surface so that the final system has an observation field of 320 mm × 150 mm and a resolution of 50 µm. Figure 1 shows the optical system for this special case. The optical system is composed of a liquid-crystal-display (LCD) projector and two CCD cameras with the optical axes set perpendicular to the reference plane. With a phase-shifting algorithm, the LCD projector is attractive because a range of phase shifts and fringe densities can be achieved under software control with no moving parts required. There is no phase-shift error, and a standard N-sample algorithm is therefore the preferred choice.8 In the case of a 4-sample algorithm, four LCD pixels at a projection distance of 750 mm project a fringe pattern with a period of 2.6 mm. The observation distance has been fixed to 320 mm, with a focal length of 12 mm and 150 mm for the separation distance between the optical axes of the CCD cameras. A microcomputer synchronizes the simultaneous image acquisition and the shifting of fringes for the phase calculation. Figure 2 shows the 3D reconstruction scheme. To reduce the influence of systematic errors, we present in the following sections the corrections that have been proposed in each step of the 3D reconstruction process.

3. Phase Calculation

If a square-wave Ronchi grating is projected onto the object surface, the intensity distribution of deformed fringes captured by the camera may be mathematically expressed as a Fourier cosine series:

I(x_i, y_i) = I_0(x_i, y_i) + \gamma(x_i, y_i) \sum_{n=1}^{\infty} A_n \cos[n\,\phi_z(x_i, y_i)], \qquad (1)

where γ(x_i, y_i) is the contrast and A_n is the amplitude of the nth harmonic component. The phase can be expressed as

\phi_z(x_i, y_i) = 2\pi f_0\, \hat{n} \cdot \mathbf{r}_i + \phi_0(x_i, y_i) + \Delta\phi_z(x_i, y_i), \qquad (2)

where f_0 is the fringe spatial frequency, φ_0 is the initial deformation of fringes that is due to the lens aberrations and geometrical distortions, and Δφ_z is the phase function induced by the object height distribution. If we use a phase-shifting algorithm,9,10 Eq. (1) can be written as

I_j(x_i, y_i) = I_0(x_i, y_i) + \gamma(x_i, y_i) \sum_{n=1}^{\infty} A_n \cos\{n[\phi_z(x_i, y_i) + \delta_j]\}, \qquad (3)

Fig. 1. Optical system for measurement of the foot's median arch deformation.

Fig. 2. Schematic of the 3D reconstruction process.


where δ_j = 2π(j − 1)/N, N is the step count, and j is the step index. According to the phase-shifting algorithm, φ_z can be retrieved by

\phi_z(x_i, y_i) = \arctan\!\left( \frac{-\sum_{j=1}^{N} I_j(x_i, y_i) \sin\delta_j}{\sum_{j=1}^{N} I_j(x_i, y_i) \cos\delta_j} \right). \qquad (4)

For the 4-sample algorithm used, Eq. (4) can be written as

\phi_z(x_i, y_i) = \arctan\!\left( \frac{I_4 - I_2}{I_1 - I_3} \right), \qquad (5)

where δ_1 = 0, δ_2 = π/2, δ_3 = π, and δ_4 = 3π/2. Nevertheless, Eq. (4) relies on the orthogonality properties of the cosine function, so an error is introduced for nonsinusoidal fringes.11–13 In the case of the 4-sample algorithm, with the assumption that harmonic components higher than the fifth are neglected, the phase φ_z′ obtained differs from the real phase φ_z by Δφ, according to12

\Delta\phi = \phi_z' - \phi_z \approx \frac{A_3}{A_1}\,\sin(4\phi_z), \qquad (6)

where A_1 and A_3 are, respectively, the amplitudes of the first- and third-harmonic components. This indicates that the phase calculated with the 4-sample algorithm carries a sinusoidal error whose frequency equals four times the fringe fundamental frequency f_0. Likewise, the presence of parasite fringes introduces errors in the calculated phase.14 The pixel electrodes of the LCD projector (owing to dead zones) produce parasite fringes of high spatial frequency.

The errors induced by these harmonic and parasitic frequencies have been demonstrated experimentally. The residual phase Δφ has been calculated by investigation of the reference plane and removal of the mean linear value. A standard deviation of 0.112 rad has been found. The Fourier transform of Δφ exhibits a frequency peak equal to four times f_0 and another peak of low frequency corresponding to the influence of parasite fringes. Various techniques have been proposed to reduce the influence of high-harmonic components.12,13 When the number of frames is increased beyond one fringe period (2π), the phase error Δφ can be reduced. It was shown13 that the minimum number of frames required for eliminating the effect of harmonic components as high as the nth order is n + 2 when the phase-shift interval is set to 2π/(n + 2) rad. Therefore in our case, five frames are necessary to eliminate the influence of the third-harmonic component. The drawbacks are the increase in acquisition and computation times and the reduction in fringe frequency. The application of this system to the study of a human foot's mechanical behavior under a static load requires several 3D reconstructed images corresponding to various loading steps. For instance, for

the study of 16 deformation steps [see Fig. 11(b)], 16 × 5 images would be required instead of the 16 × 4 images in our case. However, the residual phase error can be attenuated by means of a low-pass optical filtering performed by our defocused imaging of the projected fringes.12 By use of the altered optical transfer function of the projector, the defocused images have been filtered by reduction of the amplitude of the third-harmonic component and of the high frequencies of the parasite fringes. To achieve the resolution needed, we have also digitally filtered the image fringes with an 11 × 11 sliding low-pass filter. When the projector image plane is shifted around the reference plane, the optimal configuration is thus fixed, and the standard deviation of Δφ is reduced to 0.0061 rad.
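The retrieval of Eq. (5) and the behavior of its residual error can be sketched numerically. The sketch below uses synthetic 1-D fringes with invented sizes (4096 samples, 64 fringe periods, not the paper's data); it checks that square-wave fringes leave an error at four times the fringe frequency, as Eq. (6) predicts, and that low-pass filtering the frames, a 1-D stand-in for the 11 × 11 sliding filter, attenuates it.

```python
import numpy as np

# Synthetic 1-D illustration of the Section 3 pipeline (sizes invented):
# 4-sample retrieval, Eq. (5); residual error at 4*f0, Eq. (6); and its
# attenuation by low-pass filtering the frames before retrieval.
n, f0 = 4096, 64                                  # samples, fringe cycles
phi = 2 * np.pi * f0 * np.arange(n) / n           # true phase ramp

def frames(profile):
    # Four frames with phase shifts delta_j = (j - 1) * pi / 2.
    return [profile(phi + k * np.pi / 2) for k in range(4)]

def retrieve(i1, i2, i3, i4):
    return np.arctan2(i4 - i2, i1 - i3)           # Eq. (5)

ronchi = lambda p: np.cos(p) - np.cos(3 * p) / 3.0    # square wave cut at A3
err = np.angle(np.exp(1j * (retrieve(*frames(ronchi)) - phi)))
assert np.abs(np.fft.rfft(err)).argmax() == 4 * f0    # error frequency is 4*f0

# Smoothing each frame attenuates A3 more than A1, shrinking the error.
kernel = np.ones(11) / 11.0
smooth = [np.convolve(f, kernel, mode="same") for f in frames(ronchi)]
err_f = np.angle(np.exp(1j * (retrieve(*smooth) - phi)))
assert err_f.std() < err.std()
```

On image data the same two operations act per pixel and per 11 × 11 neighborhood, respectively.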

4. Phase-to-Height Conversion

The above procedure enables us to calculate the phase of the fringe image by using the phase-shifting algorithm. Δφ_z can be calculated from the reference and object phases, according to

\Delta\phi_z = \phi_z - \phi_0, \qquad (7)

where φ_0 is the phase of the fringe image without any object and φ_z is the phase of the fringe image with the object. From geometry considerations (Fig. 3) and with the assumption that the fringes are parallel to the Y axis, the object phase φ_z and the local height z can be related by

\phi_z = \frac{2\pi}{p_0} \cdot \frac{x - \dfrac{z x_0}{d_0} + z \tan\theta}{1 + \dfrac{x_0 \sin\theta}{d_p} - \dfrac{z x_0 \sin\theta}{d_p d_0} + \dfrac{z \cos\theta}{d_p}}, \qquad (8)

where p_0 is the fringe period measured on the reference plane and θ is the projection angle. With Eq. (8), Δφ_z can be expanded as

\Delta\phi_z(x, y) = A(x, y)\, z(x, y) + B(x, y)\, z(x, y)^2 + C(x, y)\, z(x, y)^3 + \cdots. \qquad (9)

Fig. 3. Optical diagram of the phase-to-height conversion.

In the telecentric case, d_p and d_0 are infinite, A = 2π tan θ/p_0, and B = C = … = 0. In the nontelecentric case, the coefficients vary according to the geometrical parameters of the system and the coordinates of the object point. A quadratic approximation of Δφ_z can be used according to the experimental conditions. We have developed a calibration procedure to determine the coefficients and the appropriate polynomial degree in Eq. (9) experimentally. Shifting the reference plane N + 1 times parallel to (X, Y) and retrieving the phase, we have obtained N phase distributions Δφ_z, one for each z value. The coefficients of Eq. (9) are calculated by use of a polynomial fit between Δφ_z and z for each pixel. Forty displacements of 1 mm ± 10 µm have been made, and the polynomial fit shows that the quadratic approximation is appropriate according to the final resolution achieved. The values of A and B for each image point thus characterize the system and allow us to compute the height z(x, y) according to Δφ_z(x, y).
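A single-pixel version of this calibration can be sketched as follows; the coefficient values and noise-free phases are invented for illustration and are not the paper's calibration data.

```python
import numpy as np

# Per-pixel phase-to-height calibration, Eq. (9), for one pixel:
# fit dphi = A*z + B*z**2 to phases measured while the reference
# plane is stepped in z, then invert the quadratic to get height.
z_steps = np.arange(1.0, 41.0, 1.0)              # forty 1-mm displacements
A_true, B_true = 0.50, -2.0e-3                   # hypothetical coefficients
dphi = A_true * z_steps + B_true * z_steps**2    # simulated unwrapped phases

# Least-squares fit without a constant term (dphi vanishes at z = 0).
M = np.column_stack([z_steps, z_steps**2])
A, B = np.linalg.lstsq(M, dphi, rcond=None)[0]

def height(dphi_obj, A, B):
    # Invert dphi = A*z + B*z**2, taking the root that tends to
    # dphi/A as B -> 0.
    return (-A + np.sqrt(A**2 + 4 * B * dphi_obj)) / (2 * B)

assert np.isclose(height(A_true * 10.0 + B_true * 100.0, A, B), 10.0)
```

In the full system this fit is repeated independently for every pixel, yielding the A(x, y) and B(x, y) maps used in Section 6.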

5. System Coordinate Calibration

The above procedure of phase calibration permits us to find the coefficients of Eq. (9) and the object height according to the phase distribution Δφ_z(x, y). The object height z is therefore obtained for each pixel (x_i, y_i) in the image plane of the CCD camera. The conversion of image coordinates into object coordinates is traditionally done by use of magnification. Figure 4 shows the image of a rectangular grid with 5 mm × 5 mm between lines placed on the reference plane. The negative radial distortion is considerable because a small focal distance is used. The geometrical lens distortion is typically determined with a camera calibration procedure before the actual 3D measurement begins.15 The distortion parameters are determined experimentally by the imaging of a calibration target with known fiducial points. The deviation of these points from their original positions is used to estimate the distortion.16 However, the distortion parameters can change when other imaging parameters are modified. A repeated calibration for all camera settings is thus required. An alternative calibration technique uses the presence of straight lines in the image.17,18 The distortion is estimated by finding the model parameters that map these curved lines to straight lines. Recently Farid and Popescu19 proposed a technique for estimating the lens distortion in the absence of any calibration information. The distortion is estimated by minimization of the high-order correlations in the frequency domain introduced by lens distortion. However, the computation time is the main drawback. We have defined a procedure to correct the radial distortion for each camera by fixing the optimal camera settings and using the distorted image of a rectangular grid.

The problem can be posed in the following way: How can a two-dimensional (2D) function M be established that transforms image coordinates (x_i, y_i) into object coordinates in the reference plane (x_0, y_0), removing the barrel distortion? Mathematically it can be expressed as

\mathbf{X}_0^{\,C} = M\, \mathbf{X}_i^{\,C}. \qquad (10)

The barrel distortion of a perfectly centered lens is governed by

\Delta\mathbf{X}_i^{\,C} = C\, r_0^{\,3} \exp(i\theta_0), \qquad (11)

where C < 0 and X_i^C = x_i + i y_i = r_i exp(iθ_i). Because the problem is a properly metric one, we can, using a rectangular grid, define two dimensionless rectangular variables U_x and U_y as

U_x = \frac{x_0}{\Delta x_0}, \qquad U_y = \frac{y_0}{\Delta y_0}, \qquad (12)

where Δx_0 and Δy_0 are the periods of the rectangular grid. Integer values of U_x thus correspond to the vertical index lines of the grid, and integer values of U_y correspond to horizontal ones. Therefore a digital processing of this rectangular grid image can be made to extract the interception coordinates between the lines. These coordinates are then used to fit the lines. Thus a horizontal line can be mathematically expressed by

y_i = a_h x_i^{\,2} + b_h x_i + c_h, \qquad (13)

and vertical lines by

x_i = a_v y_i^{\,2} + b_v y_i + c_v. \qquad (14)
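For illustration, the fit of Eq. (14) to the detected intersection points of one vertical grid line can be sketched as below; the point coordinates and curvature values are invented, not measured ones.

```python
import numpy as np

# Fit the quadratic line model of Eq. (14), x_i = a_v*y_i**2 + b_v*y_i + c_v,
# to synthetic intersection points of one distorted vertical grid line.
y_pts = np.array([20.0, 120.0, 240.0, 360.0, 460.0])     # pixel rows
a_true, b_true, c_true = 1.5e-5, -4.0e-3, 250.0          # invented curvature
x_pts = a_true * y_pts**2 + b_true * y_pts + c_true      # pixel columns

a_v, b_v, c_v = np.polyfit(y_pts, x_pts, 2)              # quadratic fit
assert np.allclose(np.polyval([a_v, b_v, c_v], y_pts), x_pts)
```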

The variation in the image space of these six coefficients characterizes the radial distortion. The variation of the coefficients for the vertical lines is plotted in Fig. 5; the horizontal lines have a similar behavior. This figure shows that a linear approximation is appropriate for describing the variation of the coefficients. Therefore the coefficients for a vertical line can be written as

a_v = m_{av} I_x + b_{av}, \qquad b_v = m_{bv} I_x + b_{bv}, \qquad c_v = I_x, \qquad (15)

Fig. 4. Image of a rectangular grid located in the reference plane.


where I_x is the interception between the vertical lines and the X axis. In a similar way, the coefficients for the horizontal lines can be written as

a_h = m_{ah} I_y + b_{ah}, \qquad b_h = m_{bh} I_y + b_{bh}, \qquad c_h = I_y. \qquad (16)

U_x and U_y can be approximated, according to Eq. (11), by

U_x = a_{U_x} I_x^{\,3} + b_{U_x} I_x^{\,2} + c_{U_x} I_x + d_{U_x}, \qquad U_y = a_{U_y} I_y^{\,3} + b_{U_y} I_y^{\,2} + c_{U_y} I_y + d_{U_y}. \qquad (17)

Combining the above equations, I_x and I_y can be expressed by

I_x = \frac{x_i - b_{av} y_i^{\,2} - b_{bv} y_i}{m_{av} y_i^{\,2} + m_{bv} y_i + 1}, \qquad I_y = \frac{y_i - b_{ah} x_i^{\,2} - b_{bh} x_i}{m_{ah} x_i^{\,2} + m_{bh} x_i + 1}. \qquad (18)

In conclusion, digital processing of a rectangular grid image makes possible the determination of the 16 coefficients that characterize the radial distortion. Through each image point (x_i, y_i) pass two lines, one vertical line U_x and one horizontal line U_y. The interceptions between these lines and the coordinate axes are calculated with Eqs. (18). These values are used to obtain the coefficients of the vertical and horizontal lines, Eqs. (15) and (16), and the U_x and U_y values, Eqs. (17). The coordinates in the reference plane (x_0, y_0) are then calculated with Eqs. (12). The above procedure has been applied to the deformed image in Fig. 4; Fig. 6 shows the corrected image. Fixing the focal length, observation distance, aperture, and observation angle of the CCD cameras, the distortion parameters can thus be used to calculate the object position on the reference plane for each image point.
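The whole mapping chain, Eqs. (18), (15)–(17), and (12), can be sketched for a single point as follows; every coefficient value below is a placeholder, not a calibrated one.

```python
import numpy as np

# Map an image point (xi, yi) to reference-plane coordinates using the
# 16 distortion coefficients of Section 5 (placeholder values only).
coef = dict(mav=1e-6, bav=1e-5, mbv=1e-4, bbv=1e-3,
            mah=1e-6, bah=1e-5, mbh=1e-4, bbh=1e-3)
poly_x = (0.0, 0.0, 1.0, 0.0)    # (aUx, bUx, cUx, dUx) of Eq. (17)
poly_y = (0.0, 0.0, 1.0, 0.0)    # identity cubics assumed for the sketch
dx0 = dy0 = 5.0                  # 5-mm grid period, Eq. (12)

def to_reference_plane(xi, yi):
    c = coef
    # Eq. (18): axis intercepts of the two lines through (xi, yi).
    ix = (xi - c["bav"] * yi**2 - c["bbv"] * yi) / (c["mav"] * yi**2 + c["mbv"] * yi + 1.0)
    iy = (yi - c["bah"] * xi**2 - c["bbh"] * xi) / (c["mah"] * xi**2 + c["mbh"] * xi + 1.0)
    # Eq. (17): cubic mapping of the intercepts to grid units (Ux, Uy).
    ux, uy = np.polyval(poly_x, ix), np.polyval(poly_y, iy)
    return ux * dx0, uy * dy0    # Eq. (12): back to millimetres

x0, y0 = to_reference_plane(100.0, 80.0)
assert np.isfinite(x0) and np.isfinite(y0)
```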

6. Three-Dimensional Reconstruction and Resolution

Fig. 5. Variation of (a) quadratic, (b) linear, and (c) intercept coefficients of vertical lines.

Fig. 6. Corrected image of the deformed rectangular grid shown in Fig. 4.

So far, we have proposed procedures for reducing the most significant systematic errors. The 4-sample phase-shifting algorithm has been used to calculate the wrapped phase values [i.e., phase values lying in the range −π to π according to Eq. (5)]. Over the past 12 years, more than 200 journal papers have been published on the removal of 2π phase jumps20; this problem has its own textbook.21 The phase of fringes without speckle and areas of low visibility have been unwrapped by use of (a) a binary mask that defines the region of interest by means of the fringe modulation function22; (b) a selection criterion for discontinuity sources, based on the local gradient modulo 2π, that drives the direction of processing and decides which points should be invalidated23; and (c) a queue-based algorithm24 that fills a queue with points validated by criterion (b) and is used for the processing of points in the binary mask.
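As a minimal numerical contrast to these mask- and queue-based strategies, the removal of 2π jumps along a single clean scan line can be sketched with a sequential unwrap; real fringe data needs the robustness measures cited above.

```python
import numpy as np

# Naive 1-D unwrapping of a smooth phase ramp: wrap it into (-pi, pi],
# then remove the 2*pi jumps sequentially with np.unwrap. This only
# works when neighboring samples differ by less than pi and the data
# are noise-free, hence the masks and queues of refs. 22-24.
z_true = np.linspace(0.0, 25.0, 500)           # smooth phase, ~4 wraps
wrapped = np.angle(np.exp(1j * z_true))        # folded into (-pi, pi]
restored = np.unwrap(wrapped)
assert np.allclose(restored, z_true)
```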

By use of the A and B values calculated in the phase-to-height calibration procedure [Eq. (9)], the height Z of each image point has been obtained according to

Z = \frac{-A + \sqrt{A^2 + 4B\,\Delta\phi_z}}{2B}. \qquad (19)

The camera calibration procedure corrects the negative radial distortion and converts the image coordinate system into a coordinate system located on the reference plane. Owing to the nontelecentric observation, a point on the object surface is projected onto the reference plane in the direction of the exit pupil center of the CCD lens. The x and y coordinates of the object surface can be calculated according to

x = x_0\!\left(1 - \frac{z}{z_0}\right), \qquad y = y_0\!\left(1 - \frac{z}{z_0}\right), \qquad (20)

where z_0 is the distance between the reference plane and the exit pupil of the CCD lens. These equations correct the lateral distance measured in the reference plane between the interception point and the object surface point (Fig. 7). For each sequence of shifted fringe images of each CCD camera, the system thus obtains the (x, y, z) coordinates for each object point imaged.
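The lateral correction of Eq. (20) is a one-line scaling; in the sketch below the pupil distance z_0 is a made-up value, not the calibrated one.

```python
# Nontelecentric lateral correction, Eq. (20): a surface point of height z
# seen at (x0, y0) on the reference plane lies closer to the optical axis.
Z0 = 750.0                                     # mm, assumed pupil distance

def lateral_correct(x0, y0, z, z0=Z0):
    s = 1.0 - z / z0
    return x0 * s, y0 * s

x, y = lateral_correct(100.0, 50.0, 30.0)      # a 30-mm-high point
assert abs(x - 96.0) < 1e-9 and abs(y - 48.0) < 1e-9
```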

As can be seen in Fig. 7, the optical axis must be perpendicular to the reference plane, and the point on the reference plane without lateral shifting corresponds to the optical center of the CCD camera. By use of internal reflections produced by a laser source, the CCD cameras have been placed perpendicular to the reference plane. The exact position of the CCD optical axes and the distance between the cameras have been calculated by use of the image of the rectangular grid displaced eight times in the Z axis over 8 mm ± 5 mm. The images of each position have been processed to calculate the lateral shifting of the interception lines. By use of a rectangular grid that fills the whole observation field, z_0 has been calculated, and the optical center of each CCD camera has been localized on the reference plane. The position in the reference plane defines the unique object coordinate system, and the coordinate system unification has been obtained by use of the distance between the optical CCD axes. The rotation of each camera around its optical axis is corrected through the calibration procedure of the coordinate systems.

The coordinate system unification has been tested with a well-known 2D pattern of three circles placed on the reference plane [Fig. 8(c)]. Figure 8 shows the images of each CCD camera and the unified image. The positions of the overlapping points have been correctly calculated without distortion. For the case of 3D objects, the final images had no height discontinuities in the overlapping regions.

Fig. 7. Coordinate system of the reference plane and the CCD sensor camera.

Fig. 8. Coordinate system unification: (a) a 2D pattern image from CCD camera 1, (b) the image from CCD camera 2 of the same pattern, and (c) the unified image of the 2D pattern.

The resolution has been checked with objects presenting several height distributions and with different fringe contrasts without CCD saturation. Figure 9 shows the 3D reconstruction of an object with a maximum height of 30 ± 0.01 mm. The mean value obtained was 30.010 mm with a standard deviation of 11 µm, for a fringe contrast of 100/256 gray levels. When the fringe contrast is reduced to 13 gray levels, the standard deviation is increased 4.4 times. As a further check of the resolution, Fig. 10 shows the 3D reconstruction of the thickness of a sheet of paper. The thickness measured with a micrometer screw was 110 ± 5 µm, and the mean value obtained was 107 ± 8 µm. Figures 9 and 10 illustrate that the system can resolve details of 50 µm in height within an observation field of 320 mm × 150 mm. The lateral resolution evaluated on the (x, y) reference plane is approximately 310 µm.

7. Foot Behavior Considerations

In this section we describe one application of this system, the study of a human foot's mechanical behavior under a static load. It is known that the median bulge and lateral concavity of the foot outline are responsible for the foot outflare. The aim of this study is to determine the footprint variation, the arch index variation, and the variation of the volume under the foot arch for loads between 50 and 450 N in vivo. In this way, it is possible to characterize the human foot's mechanical behavior under a static load. A similar experiment25 was made on an amputated foot, but its drawback is that the muscle tonus and the foot's overall reaction to the load cannot be described. Our choice of low static loads permits the measurement of bone displacement without the influence of the body's center of gravity.

We investigated the foot behavior with increasing and decreasing loading steps, as illustrated in Fig. 11(a). To characterize this behavior, we define the arch volume index (AVI) as

\mathrm{AVI} = \frac{v_0 - v_x}{v_0}, \qquad (21)

where v_0 is the volume under the foot arch without load and v_x is the volume of the same zone under a load of x N (see Fig. 12).
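For a given arch zone the index is a direct volume ratio; the sketch below evaluates Eq. (21) on a synthetic dome standing in for the reconstructed arch, with a made-up lateral sampling.

```python
import numpy as np

# Arch volume index, Eq. (21), on synthetic height maps: integrate the
# volume under the arch over a fixed (x, y) zone, unloaded vs. loaded.
def arch_volume(height_map, dx=0.31, dy=0.31):
    # dx, dy: lateral sampling in mm (roughly the paper's 310-um pitch).
    return height_map.sum() * dx * dy

yy, xx = np.mgrid[-1:1:200j, -1:1:200j]
dome = np.clip(10.0 * (1.0 - xx**2 - yy**2), 0.0, None)  # unloaded arch, mm
v0 = arch_volume(dome)
vx = arch_volume(0.9 * dome)          # loaded arch, flattened by 10 %
avi = (v0 - vx) / v0                  # Eq. (21)
assert np.isclose(avi, 0.1)
```

In practice v_0 and v_x come from the reconstructed foot surfaces of two loading steps over the same (x, y) zone.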

The foot behavior has been determined during the loading process. This study revealed a viscoelastic behavior of the foot, as shown in Fig. 11(b), similar to the hysteresis commonly observed in rheology. Ker et al.25 demonstrated this behavior with loads of as much as 4 kN on a cadaveric foot.

8. Conclusion

Fig. 9. 3D reconstruction of the test object of 30-mm height.

Fig. 10. Three-dimensional reconstruction of the thickness of a sheet of paper.

Fig. 11. (a) Steps of the loading process. (b) Foot arch behavior under load.

Fig. 12. 3D foot profile.

We have presented in this paper a 3D measuring system with a large observation field. The experimental results have demonstrated the performance of the procedures proposed. The measuring method is based on the simultaneous utilization of several 3D optical systems. For the special case of measurement of the foot's median arch deformation, we have utilized an LCD projector and two CCD cameras with the optical axes placed perpendicular to the reference plane. Each system reconstructs part of the object surface so that the final system has an observation field of 320 mm × 150 mm and a resolution of 50 µm. This resolution has been obtained by use of

• An optical and digital low-pass filter on the fringes to reduce errors in the phase calculation.

• A phase calibration procedure for phase-to-height conversion by use of a quadratic approach.

• A procedure for eliminating most of the negative radial distortion.

Thanks to this system we have described the human foot's mechanical behavior under static load in vivo. The hysteresis commonly observed in rheology has been confirmed for the human foot under low loads (50–450 N).

References

1. H. J. Tiziani, "Optical metrology of engineering surfaces: scope and trends," in Optical Measurement Techniques and Applications, P. K. Rastogi, ed. (Artech House, Boston, 1997).
2. F. Chen, G. M. Brown, and M. Song, "Overview of three-dimensional shape measurement using optical methods," Opt. Eng. 39, 10–22 (2000).
3. T. Strand, "Optical three-dimensional sensing for machine vision," Opt. Eng. 24, 33–40 (1985).
4. H. Tiziani, M. Wegner, and D. Steudle, "Confocal principle for macro- and microscopic surface and defect analysis," Opt. Eng. 39, 32–39 (2000).
5. J. S. Massa, G. S. Buller, A. C. Walker, S. Cova, M. Umasuthan, and A. Wallace, "Time of flight optical ranging system based on time correlated single photon counting," Appl. Opt. 37, 7298–7304 (1998).
6. L. L. Kontsevich, P. Petrov, and L. S. Vergelskaya, "Reconstruction of shape from shading in color images," J. Opt. Soc. Am. A 11, 1047–1058 (1994).
7. P. Andra, E. Ivanov, and W. Osten, "Scaled topometry: an active measurement approach for wide scale 3D surface inspection," in Fringe '97: Automatic Processing of Fringe Patterns, W. Juptner and W. Osten, eds. (Akademie Verlag, Berlin, 1997), pp. 179–189.
8. C. R. Coggrave and J. M. Huntley, "Optimization of a shape measurement system based on spatial light modulators," Opt. Eng. 39, 91–98 (2000).
9. J. Schwider, R. Burow, K. E. Elssner, J. Grzanna, R. Spolaczyk, and K. Merkel, "Digital wave-front measuring interferometry: some systematic error sources," Appl. Opt. 22, 3421–3432 (1983).
10. T. Judge and P. Bryanston-Cross, "A review of phase unwrapping techniques in fringe analysis," Opt. Lasers Eng. 21, 199–239 (1994).
11. K. A. Stetson and W. R. Brohinsky, "Electrooptic holography and its application to hologram interferometry," Appl. Opt. 24, 3631–3637 (1985).
12. S. Xian-Yu, Z. Wen-Se, G. von Bally, and D. Vukicevic, "Automated phase-measuring profilometry using defocused projection of a Ronchi grating," Opt. Commun. 94, 561–573 (1992).
13. K. Hibino, B. F. Oreb, D. I. Farrant, and K. Larkin, "Phase shifting for nonsinusoidal waveforms with phase-shift errors," J. Opt. Soc. Am. A 12, 761–768 (1995).
14. C. Ai and J. Wyant, "Effect of spurious reflection on phase shift interferometry," Appl. Opt. 27, 3039–3045 (1988).
15. B. Breuckmann, F. Halbauer, E. Klaas, and M. Kube, "3D-measurement for industrial applications," in Rapid Prototyping and Flexible Manufacturing, R. Ahlers and G. Reinhart, eds., Proc. SPIE 3102, 20–29 (1997).
16. J. Weng, P. Cohen, and M. Herniou, "Camera calibration with distortion models and accuracy evaluation," IEEE Trans. Pattern Anal. Mach. Intell. 14, 965–980 (1992).
17. F. Devernay and O. Faugeras, "Automatic calibration and removal of distortion from scenes of structured environments," in Investigative and Trial Image Processing, L. I. Rudin and S. K. Bramble, eds., Proc. SPIE 2567, 62–72 (1995).
18. R. Swaminathan and S. K. Nayar, "Non-metric calibration of wide-angle lenses and polycameras," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, Los Alamitos, Calif., 1999), pp. 413–419.
19. H. Farid and A. C. Popescu, "Blind removal of lens distortion," J. Opt. Soc. Am. A 18, 2072–2078 (2001).
20. J. M. Huntley, "Three-dimensional noise-immune phase-unwrapping algorithm," Appl. Opt. 40, 3901–3908 (2001).
21. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping (Wiley, New York, 1998).
22. D. Dirksen, X. Su, D. Vukicevic, and G. von Bally, "Optimized phase shifting and use of the fringe modulation function for high resolution phase evaluation," in FRINGE '93: Proceedings of the Second International Workshop on Automatic Processing of Fringe Patterns, W. Juptner and W. Osten, eds. (Akademie, Berlin, 1993), pp. 72–77.
23. J. A. Quiroga and E. Bernabeu, "Phase-unwrapping algorithm for noisy phase-map processing," Appl. Opt. 33, 6725–6731 (1994).
24. H. A. Vrooman and A. M. Maas, "Image processing algorithms for the analysis of phase-shifted speckle interference patterns," Appl. Opt. 30, 1636–1641 (1991).
25. R. Ker, M. Bennett, S. Bibby, R. Kester, and R. M. Alexander, "The spring in the arch of the human foot," Nature (London) 325, 147–149 (1987).
