Manuscript submitted to Strain: An International Journal for Experimental Mechanics (2011)
Multiple-view shape and deformation measurement
by combining fringe projection and digital image correlation
T. Nam Nguyen1, Jonathan M. Huntley1*, Richard L. Burguete2 and C. Russell Coggrave3
1Loughborough University, Wolfson School of Mechanical and Manufacturing Engineering, Loughborough, Leicestershire LE11 3TU, United Kingdom
2Airbus, Filton, Bristol BS99 7AR, United Kingdom
3Phase Vision Ltd, Loughborough Innovation Centre, Charnwood Building, Holywell Park, Ashby Road, Loughborough, Leicestershire LE11 3AQ, United Kingdom
*Corresponding author: [email protected]
Abstract
We present a new method that combines the fringe projection and the digital image correlation (DIC) techniques on a single hardware platform to simultaneously measure both shape and deformation fields of three-dimensional (3-D) surfaces with complex geometries. The method in its basic form requires only a single camera and a single projector, but this can easily be extended to a multi-camera multi-projector system to obtain complete 360° measurements. Multiple views of the surface profile and displacement field are automatically co-registered in a unified global coordinate system, thereby avoiding the significant errors that can arise through the use of statistical point cloud stitching techniques. Experimental results from a two-camera two-projector sensor are presented and compared to results from both a standard stereo-DIC approach and a finite element model.
Keywords: digital image correlation, fringe projection, multiple view, shape and deformation measurement
1. Introduction

Optical 3-D shape and deformation measurement techniques are increasingly important across many manufacturing industries. In the aerospace industry, for example, the development process of a new product normally requires inspection of both shape and deformation for prototype design, structural testing and manufacturing quality control. Popular shape measurement techniques include laser scanning, interferometry, photogrammetry and structured-light methods [1], of which the structured-light fringe projection technique [2] has the significant advantage of very high spatial resolution (i.e., one measured point per camera pixel). For deformation measurement, digital image correlation [3] is widely recognised for its greater robustness in noisy environments and larger maximum measurable displacement than some of the other full-field imaging techniques such as Electronic Speckle Pattern Interferometry (ESPI) and moiré interferometry [4]. Although fringe projection and DIC sensors are available separately from commercial vendors, the common hardware requirements of the two techniques suggest that combining the two on a common platform is a logical direction of development, with benefits including lower overall system cost and greater ease of use for the end users of the technology.
Due to a sensor's limited field of view and optical occlusion effects, large-scale and complex objects need to be measured from many different viewing directions, resulting in point clouds defined in different coordinate systems. There are at least three different approaches to connecting the coordinate systems [1]: (i) fixing the sensor and rotating the object on a mechanical stage, (ii) moving the sensor around the fixed object, and (iii) using multiple sensors to observe the fixed object. The first approach [5, 6] is not only expensive for large-scale inspections, but also unfeasible for structural tests where the object must be attached to a loading machine. In the second approach, the position and orientation of the sensor can be determined in several different ways, for example by using a mechanical positioning system to move the sensor, using a laser tracking system to track the sensor movement, matching overlapping parts of the point clouds by iteratively minimising a least-squares error measure [7], or using photogrammetry [8]. When applied to deformation measurements, this approach requires repeated translation of the sensor to exact locations and orientations in space, which is time-consuming and prone to re-positioning errors. Therefore, the third approach of using multiple sensors, which is common in surface profilometry (e.g. [9, 10]), is preferable for deformation measurements of objects with dimensions of order
1 m or above, as typically used in the aerospace industry.
There are currently only a few papers proposing methods to measure a complete 360° deformation field using multiple sensors. They generally involve using a stereovision DIC system to measure the point cloud and the associated displacement field for each view, and then registering them with respect to one another by aligning markers that are common to at least two of the views. Sutton et al. [11] used four cameras to measure simultaneously the front and back surfaces of a cracked plate undergoing bending and compression. The four cameras were grouped into two stereovision sets which were calibrated separately using a reference checkerboard. To estimate the coordinate transformation between the sets, 3-D point clouds of both sides of a metal plate of known thickness, drilled with six holes, were aligned using the holes as the markers. In references [12, 13], the full surface of a cylindrical shell undergoing compression was measured by four sets of stereo-DIC systems. Although not clearly described in the papers, the four sensor sets appear to have been connected by using a photogrammetry system that tracks coded markers distributed on the specimen. Recently, Harvent et al. [14] proposed a method to correlate multiple speckle images captured by multiple cameras so that the speckle pattern itself can be used as the markers for the camera alignment. Nevertheless, these stereo-DIC-based methods suffer from at least two of the following three problems. Firstly, the process of matching images of two different views, known as stereo correspondence [15], may result in an erroneous point cloud (and thereby an inaccurate displacement field) in the presence of large perspective distortions or occlusions. Secondly, the density of the measured point cloud is restricted by the correlation window size. Thirdly, the alignment accuracy depends strongly on the quality of the markers and the method used to detect them.
In this paper, we present a method that combines fringe projection with the digital image correlation technique on a single hardware platform to measure simultaneously both shape and deformation. The work described here can be considered a natural extension from a recently-described single-camera single-projector system [16] to multiple cameras and projectors so as to achieve up to complete 360° object coverage. In particular, no view alignment is needed as all of the sensors are automatically defined in a unified global coordinate system by an initial calibration procedure. By using the fringe projection technique, very dense point clouds can be produced, and the stereo correspondence problem can be avoided.

The paper starts with a summary of the basic technique, recently proposed in [16], to measure both shape and deformation fields with one camera and one projector. This provides the starting point for the extension to multiple sensors which, together with the validation experiments and modelling, provides the focus of the current paper. Experimental results with a two-camera two-projector system on a test sample undergoing mid-point bending are presented in the next section. These are followed by comparisons of measurement accuracy with a standard stereo-DIC system, and with numerical predictions of the displacement field from a finite element model.
2. Shape and deformation measurement with one camera and one projector
2.1. Shape measurement by fringe projection

The fringe projection technique [17] is used to estimate the surface profile with the basic configuration of one camera and one projector. A pattern of t sinusoidal fringes is generated on the spatial light modulator (SLM) of the projector and projected onto the surface to be profiled. The shape information of the surface is therefore encoded as phase variations of the fringe pattern observed by the camera. A four-step phase-shifting algorithm is used to estimate the phase values from the intensity image of the captured fringe pattern. To optimise the phase estimation accuracy, the temporal phase unwrapping technique [18] is used to obtain an unambiguous phase value for each pixel, which requires the fringe density t to change over time. In this paper, we used a reverse exponential sequence of fringe patterns [19] in which the fringe density decreases exponentially from the maximum of s = 32 fringes across the field of view (i.e. using the values t = s, s−1, s−2, s−4, s−8, s−16). A great advantage of using temporal phase unwrapping is that each pixel is analysed independently of its neighbours, allowing surfaces with complex discontinuities to be measured as easily as smooth ones. In addition, both horizontal and vertical fringe patterns, as suggested in [10], are used to provide two projector image plane coordinates per camera pixel. This provides additional constraints to the calibration procedure and is also used to improve the accuracy of the point cloud estimation. A coordinate measurement accuracy of 1/20,000 of the measurement volume diagonal is currently achievable.
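The steps above can be sketched as follows (a minimal Python illustration with illustrative function names, not the authors' implementation): the four-step wrapped-phase calculation, the reverse exponential fringe-density sequence, and one temporal unwrapping step that resolves the 2π ambiguity of a finer-fringe phase map using an already-unwrapped coarser one.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four images with phase shifts of 0, 90, 180, 270 degrees,
    assuming I_n = A + B*cos(phi + n*pi/2)."""
    return np.arctan2(i3 - i1, i0 - i2)

def reverse_exponential_densities(s=32):
    """Fringe densities t = s, s-1, s-2, s-4, ... of the reverse exponential
    sequence [19]; s = 32 gives [32, 31, 30, 28, 24, 16]."""
    return [s, s - 1] + [s - 2**k for k in range(1, int(np.log2(s)))]

def unwrap_with_reference(phi_fine, phi_coarse_unwrapped, t_fine, t_coarse):
    """One temporal unwrapping step: scale the already-unwrapped coarser phase
    to predict the finer phase, then add the 2*pi multiple that brings the
    wrapped fine phase closest to the prediction (pixel-wise operation)."""
    predicted = phi_coarse_unwrapped * (t_fine / t_coarse)
    return phi_fine + 2 * np.pi * np.round((predicted - phi_fine) / (2 * np.pi))
```

Because each pixel is unwrapped against its own coarser-phase history, no spatial neighbourhood is involved, which is why discontinuous surfaces pose no difficulty.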
2.2. Image matching by 2-D DIC

The 2-D DIC technique is used to calculate a displacement field from the texture images recorded by the camera at each deformation state of the surface. The process requires matching a subset of the reference image surrounding a point of interest with a subset of a deformed image. In practice, the surface is prepared with a high-contrast random speckle pattern to aid the matching algorithm. A successful match of two subsets is obtained by maximising their cross-correlation score [3],
$C(u,v) = \sum_{i,j} I_1(i,j)\, I_2(i+u,\, j+v)$ ,  (1)

where $I_1$ and $I_2$ are respectively the reference and deformed subsets whose DC terms have been removed to exclude the effects of changes in ambient lighting, $(i,j)$ are the indices to a pixel within the reference subset, and $(u,v)$ are the image displacements between the subsets. To improve the computational efficiency, Equation 1 is calculated in the frequency domain by using

$C(u,v) = \mathcal{F}^{-1}\left[\mathcal{F}(I'_1)^{*}\, \mathcal{F}(I'_2)\right]$ ,  (2)

where $\mathcal{F}$ and $\mathcal{F}^{-1}$ are the forward and inverse 2-D Fourier transform operators, respectively, the asterisk indicates complex conjugate, and $I'_1$ and $I'_2$ are respectively the sub-images $I_1$ and $I_2$ after padding with zeros around their edges to avoid aliasing errors.
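Equation 2 can be sketched in a few lines (an illustrative NumPy version, not the authors' optimised code): the subsets are mean-subtracted to remove the DC term and zero-padded to twice their size before the transforms, and the correlation peak gives the integer-pixel displacement.

```python
import numpy as np

def fft_cross_correlation(ref, defm):
    """Integer-pixel displacement (dy, dx) maximising the cross-correlation of
    two equal-sized square subsets, computed in the frequency domain (Eq. 2)."""
    ref = ref - ref.mean()        # remove DC term (ambient lighting changes)
    defm = defm - defm.mean()
    n = 2 * ref.shape[0]          # zero-pad to avoid wrap-around (aliasing) errors
    c = np.fft.ifft2(np.conj(np.fft.fft2(ref, (n, n)))
                     * np.fft.fft2(defm, (n, n))).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    # indices beyond half the padded size correspond to negative displacements
    if dy >= n // 2:
        dy -= n
    if dx >= n // 2:
        dx -= n
    return dy, dx
```

For a speckle subset displaced by (3, 2) pixels this returns (3, 2); the sub-pixel refinement then follows as a separate step.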
For dealing with discontinuous surfaces, we also introduced a new correlation strategy [16] that exploits the single-pixel spatial resolution of the estimated point cloud to prevent the correlation peak splitting phenomenon that occurs when a subset straddles a global geometrical discontinuity. In this strategy, continuous regions of a speckle image are identified and segmented based on the measured point cloud. Thus, the correlation process in Equation 2 is done separately for each continuous region by setting pixels that belong to other regions to zero in $I'_1$ and $I'_2$ so that they do not contribute to the correlation score.
An optimised correlation procedure based on that described in reference [20] is used to compute the image displacement $(u,v)$ with a sub-pixel accuracy that can be as small as one-hundredth of a pixel. The current implementation assumes that a subset undergoes only rigid body translation. However, it is possible to introduce higher-order terms, such as extension and shear, into the deformation model of the subset [3].
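The optimised procedure of [20] is not reproduced here, but a common way to illustrate sub-pixel peak localisation (a simplification, not necessarily the authors' method) is to fit a parabola through the correlation values either side of the integer peak, once per axis:

```python
def subpixel_peak_offset(c_minus, c_peak, c_plus):
    """Sub-pixel offset (in the range -0.5..0.5) of the correlation maximum,
    from the correlation values one pixel either side of the integer peak,
    obtained by fitting a parabola through the three samples."""
    return 0.5 * (c_minus - c_plus) / (c_minus - 2.0 * c_peak + c_plus)
```

The offset is added to the integer displacement found by the correlation; applying it along each image axis yields the sub-pixel estimate of $(u,v)$.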
2.3. 3-D displacement field calculation

Figure 1 shows the procedure to calculate a 3-D displacement field from the estimated point clouds and the image displacement fields. At the reference state, the object surface is measured with the shape measurement system using projected fringes as described in Section 2.1, generating a dense cloud of 3-D points that correspond to scattering points imaged onto the camera pixels. A white-light texture image of the speckle pattern on the object surface is also captured by the camera. At each subsequent loading state, the deformed 3-D point cloud and texture image are obtained in the same way.

A region of interest is selected on the reference image, within which a grid of reference sample points is defined. Using the image correlation technique presented in Section 2.2, those sample points are matched with corresponding points on the deformed texture image. The 3-D coordinates of the reference sample points can be extracted easily from the reference 3-D point cloud as they correspond to integer pixel positions in the reference image.

Figure 1: Point cloud and 3-D displacement estimation procedure using one camera and one projector. Courtesy of [16].

As the position of a given reference sample point will in
general no longer lie at the centre of a pixel in the deformed image, a bicubic interpolation is used to find the 3-D coordinates of this point from the coordinates at the neighbouring pixel sites. The systematic error induced by the interpolation is not usually significant, because the geometric distances between the interpolated points are small due to the high density of the point cloud.

Finally, the 3-D displacements are computed by direct subtraction of the 3-D coordinates of the deformed and reference sample points.
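The interpolation-and-subtraction step can be sketched as follows (a simplified NumPy version using bilinear rather than the paper's bicubic interpolation; array names are illustrative). Each point cloud is stored as an H×W×3 array holding one 3-D point per camera pixel:

```python
import numpy as np

def interp_cloud(cloud, y, x):
    """Bilinear interpolation of an H x W x 3 per-pixel point cloud at a
    non-integer pixel position (y, x). The paper uses bicubic interpolation;
    bilinear is shown here for brevity."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - y0, x - x0
    return ((1 - fy) * (1 - fx) * cloud[y0, x0]
            + (1 - fy) * fx * cloud[y0, x0 + 1]
            + fy * (1 - fx) * cloud[y0 + 1, x0]
            + fy * fx * cloud[y0 + 1, x0 + 1])

def displacements_3d(ref_cloud, def_cloud, ref_pts, matched_pts):
    """3-D displacements: deformed-cloud coordinates at the matched (generally
    sub-pixel) positions minus reference-cloud coordinates at the integer
    reference sample points."""
    return np.array([interp_cloud(def_cloud, my, mx) - ref_cloud[ry, rx]
                     for (ry, rx), (my, mx) in zip(ref_pts, matched_pts)])
```

Because the point cloud is dense (one point per pixel), the neighbouring sites used by the interpolation are geometrically close, which is why the interpolation error stays small.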
3. Extension to multiple sensors

The shape measurement system presented in this paper can easily be extended to a multi-camera multi-projector system due to its modular design. More cameras and/or projectors can be added in order to inspect different parts of the object surface, since the present calibration technique [21, 22] is able to automatically bring 3-D point clouds measured by different camera-projector pairs together into a unified global coordinate system.
3.1. Calibration of multiple sensors

The present calibration technique employs the principle of photogrammetry [10] to determine up to 12 parameters for each camera and projector, including 6 external parameters describing the position and orientation of the sensor in the global coordinate system, 3 internal parameters describing the principal point and focal length, and up to 3 coefficients of lens distortion. Figure 2 shows the reference artefacts used in the two stages of the calibration process: the circle pattern for the initialisation and the ball bar for the refinement. In the initialisation stage, the circle centres are detected by an ellipse-fitting algorithm and used as reference coordinates to obtain quick estimates of the first 9 calibration parameters with a direct linear transform (DLT). The global coordinate frame, which is virtually attached to some markers on the circle pattern within the measurement volume, is also defined at this stage.
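For illustration, a pinhole projection with this parameterisation might look as follows (a hedged sketch: the exact model of [21, 22] is not reproduced here, and the rotation is passed directly as a matrix rather than through its 3 angle parameters):

```python
import numpy as np

def project(Xw, R, t, f, c, k=(0.0, 0.0, 0.0)):
    """Project a world point Xw to pixel coordinates: 6 external parameters
    (rotation R, translation t), principal point c = (cx, cy) and focal
    lengths f = (fx, fy), plus up to three radial distortion coefficients k."""
    Xc = R @ np.asarray(Xw, float) + np.asarray(t, float)   # world -> camera frame
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]                     # normalised coordinates
    r2 = x * x + y * y
    d = 1.0 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3       # radial distortion factor
    return np.array([f[0] * d * x + c[0], f[1] * d * y + c[1]])
```

The DLT of the initialisation stage estimates the first 9 of these parameters linearly (ignoring distortion); the bundle adjustment described below then refines all 12.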
In the refinement stage, the centres of the balls, which are distributed in various positions within the measurement volume, are first estimated from the measured point cloud by using a 3-D Hough transform [23]. This allows points belonging to the surfaces of the two spheres to be selected from the full point cloud. A subset of these points is then used within a bundle adjustment calculation to refine the initial estimates of the 12 parameters, under the constraints that (i) camera and projector light rays must intersect in space and that (ii) the ball centres must be separated by the known distance as determined on a suitably calibrated mechanical Coordinate Measuring Machine (CMM).

Recently, a robotic positioning system has been introduced to move the ball bar in space through two rotation angles, each to a precision of 50 µrad. Besides allowing the calibration process to be more automatic and repeatable, knowledge of the rotation angles can be used to estimate the positions of the ball centres. The automated positioning also allows significantly larger numbers of artefact poses to be employed (typically 40 for the experiments reported here), which improves the accuracy of the calibration.
3.2. Combining 3-D displacement fields

The 3-D displacement field measured by every camera-projector pair (using the procedure presented in Section 2.3) is associated with the measured point cloud and thus has already been defined in the global coordinate system by the calibration technique. As a result, combining the 3-D displacement fields measured by all camera-projector pairs requires only two simple steps. First, displacement fields corresponding to the same view (i.e. measured by the same camera but different projectors) are merged together. As the 2-D reference sample points of those pairs are identical, most of their 3-D displacements have approximately the same values and thus can be statistically combined to improve the estimated displacement field. Then, the displacement fields of all views are gathered and the combined point cloud is meshed with triangular faces using a Delaunay triangulation function built into MATLAB™ that is based on [24]. In this way, a surface with several different overlapping views can be digitised with higher spatial resolution.
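The two steps can be sketched as follows (illustrative NumPy only; masked sample points are marked NaN, and the final Delaunay meshing is left to a triangulation library such as MATLAB's or SciPy's):

```python
import numpy as np

def combine_same_view(fields):
    """Step 1: statistically combine (here, simply average) displacement fields
    measured by the same camera with different projectors. Their 2-D sample
    grids coincide, so co-located estimates are merged point by point; NaNs
    mark masked points and are ignored where at least one pair has data."""
    return np.nanmean(np.stack(fields), axis=0)

def gather_views(clouds, fields):
    """Step 2: gather the point clouds and displacement fields of all views
    into single arrays; the combined cloud is then meshed with triangular
    faces by a Delaunay triangulation."""
    return np.vstack(clouds), np.vstack(fields)
```

Averaging is only one possible "statistical combination"; a variance-weighted mean would be a natural alternative if per-point uncertainties were available.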
Figure 2: Calibration arrangement for multiple sensors. The cameras are denoted as C1 and C2, and the projectors as P1 and P2.
4. Experimental results

4.1. Apparatus and procedure

The specimen used for the mid-point bending experiments is illustrated in Figure 3: an aluminium sheet of thickness 1 mm bent along four parallel lines into a top-hat profile to introduce geometrical jumps and perspective occlusions. Two circular holes were also created on the top section to mimic cut-outs that are common in aeronautical components. The edges of the specimen were clamped onto a supporting frame. A micrometer with a measurement precision of 0.01 mm was used to introduce a prescribed displacement at the centre point from the back of the specimen. To avoid local plastic deformation caused by stress concentration at the centre, the load was distributed through a penny coin of diameter 20 mm that is thicker and stiffer than the specimen. To assist the image correlation, the front surface was prepared with a high-contrast speckle pattern by spraying black paint onto the white surface. As pointed out by Lecompte et al. [25], the speckle size and density strongly affect the image correlation accuracy. In this experiment, the speckles had an average diameter of effectively 7 pixels and an average density of around 5 speckles in a correlation window of 33×33 pixels.
The specimen was placed in the measurement volume of the system configured with two cameras and two projectors as shown in Figure 4. Thus, four camera-projector pairs (i.e. C1P1, C1P2, C2P1 and C2P2) can be used to cover the entire front surface. The sensors were arranged and calibrated so that a measurement volume of 500×500×500 mm³ included the specimen surface. In this experiment, the calibration RMS error was about 0.08 mm (or 1/11,000 of the measurement volume diagonal). It may be noted that the global coordinate system XYZ defined by the calibration process is not necessarily aligned with that of the specimen.
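As a quick consistency check on the quoted ratio (assuming the space diagonal of the 500 mm cube is meant):

```python
import math

side = 500.0                       # mm, side of the cubic measurement volume
diagonal = math.sqrt(3.0) * side   # ~866 mm space diagonal
rms = 0.08                         # mm, reported calibration RMS error
print(diagonal / rms)              # ~10,825, consistent with "about 1/11,000"
```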
During the acquisition process, a sequence of prescribed displacements varying from 0 mm to 10 mm in steps of 1 mm was introduced by the micrometer. At each loading state, the shape and deformation field of the specimen were measured by the system. In the current software implementation, surface profiles are calculated in a few seconds using C++ code, whereas the displacement fields are computed off-line with code written in MATLAB™.
4.2. Estimated displacement fields

Example results of image-plane displacement fields, computed for a micrometer displacement of 5 mm by pairs C1P1 and C2P2, are shown in Figure 5. The coloured contour is visualised at a pixel level by spatially interpolating the displacement values of a grid of sample points with a spacing of 16 pixels specified on the reference image. Because this interpolation is performed within the same continuous regions of the surface, the image displacements have been obtained correctly along the surface discontinuities, such as the edges of the circular holes and the bends between the top and base sections.
Figure 6 shows the X-, Y- and Z-components of the 3-D displacement field obtained at the same load state (5 mm). In this visualisation, the Y-axis points to the left for pair C1P1 and to the right for pair C2P2. It can be seen that the out-of-plane component (approximately dZ) is dominant. The average discrepancy between the pairs is estimated to be 0.10 mm, which is slightly higher than the calibration error. Also, there are a number of pixels
Figure 3: Schematic illustration of the test specimen. Material: aluminium alloy 1050 (H14) (Young's modulus 71 GPa; Poisson's ratio 0.33; tensile strength 115 MPa). Region S represents the speckle pattern painted on the entire specimen surface. The edges are clamped, and the centre is pushed by the micrometer through a prescribed displacement.

Figure 4: Experimental configuration.
imaging dark speckles that have been automatically masked out due to their high uncertainties in the phase extraction. Figure 7 shows the magnitude of the 3-D displacement field calculated from the three displacement components. As expected, the displacement magnitude is nearly zero along the clamped edges and increases to a maximum value of approximately 5 mm towards the centre.
4.3. Standard stereo-DIC

The measured displacement fields were compared against the output from an industry-standard stereo-DIC code (the Vic-3D 2009 software from Limess GmbH). The texture images captured by cameras C1 and C2 were used as inputs for the stereo-DIC software. The correlation subset size was chosen to be 33×33 pixels, i.e. the same as for the new image matching technique. Due to severe perspective disparities between image subsets of the cameras, the user was required to manually specify several matched points between the images to assist the stereo correspondence process. This problem does not arise with the new approach because the image correlation here is always performed between images recorded by the same camera. The software also required calibrating the cameras by recording a planar checker pattern at a number of poses (20 in this experiment). The resulting calibration parameters are given in Table 1 for comparison with those converted from the camera parameters of the new system. The displacement magnitude field obtained at a micrometer displacement of 5 mm is shown in Figure 8. Only the displacements on the top section of the sample can be determined because most of the remaining sections are occluded for one of the two cameras.
Figure 5: Estimated image-plane displacement fields for a micrometer displacement of 5 mm. (a,c) Horizontal and vertical displacements by pair C1P1, respectively. (b,d) Horizontal and vertical displacements by pair C2P2, respectively.
Figure 6: Estimated components of the 3-D displacement field for a micrometer displacement of 5 mm for pair C1P1 (a,c,e)
and C2P2 (b,d,f).
4.4. Finite element simulation

To achieve more confidence in the 3-D displacement distribution over the entire surface, a finite element simulation was also carried out. The finite element model, as depicted in Figure 9, consisted of 1,100 quadrilateral shell elements with a mesh density that increased towards the loaded point. The material was modelled as linear elastic with the properties given in the caption to Figure 3. The two boundaries were clamped by constraining all six degrees of freedom at the relevant nodes. The prescribed displacement was applied to all nodes lying on the circular edge of the loading block. The MSC.Nastran™ linear static solver was used to calculate the displacement and stress fields. The resulting stress distributions predicted that the loaded region starts to deform plastically when the prescribed displacement exceeds 8 mm. The results of the simulated displacement fields are discussed in Section 5.2.
Figure 7: Estimated displacement magnitude field for a micrometer displacement of 5 mm for pair C1P1 (a) and C2P2 (b).
Figure 8: Displacement magnitude field estimated by the stereo-DIC system for a micrometer displacement of 5 mm and plotted on the image of camera C1 (a) and C2 (b).
Figure 9: Finite element model of the experimental sample.
5. Discussion

5.1. Point-wise error

The discrepancy between the displacement magnitude measured at the loaded point and the prescribed micrometer displacement is used as an error measure to compare the proposed system with the stereo-DIC. Figure 11 shows this displacement error for various micrometer displacements for pairs C1P1 and C2P2, as well as for their combined result, in comparison with that of the stereo-DIC. The displacement error for the combined pairs is generally not a simple average of the two pairs, due to the statistical data gathering and remeshing. It can be seen that the point-wise error of the combined displacement field varies from 0.02 mm to 0.13 mm with an RMS of 0.07 mm (which is 1/12,000 of the measurement volume diagonal). The error tends to increase with the loading displacement due to the increasing distortion of the texture subset around the loaded point, which is not included in the current zeroth-order subset deformation model. The error difference between pairs C1P1 and C2P2 has an RMS value of 0.07 mm, which is mainly due to misalignment of the sensors by the calibration.
In comparison, the stereo-DIC gives lower or comparable displacement error at the first four loading states, which may be attributed to the fact that it incorporates a second-order deformation model of the correlation subset. However, as the deformation increases, the error of the stereo-DIC measurements grows much more dramatically than that of the proposed system. This is possibly due to the increasing disparity of the subsets as observed by the widely separated cameras. By contrast, the proposed system does not use stereo correspondence on texture images from different cameras, and thus eliminates this important source of error.
5.2. Field-wise comparisons with stereo-DIC and finite element model

The magnitudes of the 3-D displacement fields at the loading state of 5 mm, estimated by the proposed system, the stereo-DIC code and the finite element simulation, are visualised on top of the reference shapes in Figure 10. The shape and displacement field shown in Figure 10-a is a combination of pairs C1P1 and C2P2. The roughness effect observed on
Figure 11: Error of the displacement magnitude estimated at the centre point by the new system and the stereo-DIC system for various micrometer displacement values.

Figure 10: Comparison of displacement magnitude fields estimated by the new system (a), the stereo-DIC system (b) and the finite element simulation (c) for a micrometer displacement of 5 mm. The combined data of pairs C1P1 and C2P2 are displayed here.
the combined shape is caused by the slight misalignment of the pairs, which has been visually exaggerated by the re-meshing process.
It can be seen that the proposed system has been able to provide full coverage of the surface, whereas only the top section is measured by the stereo-DIC. This is not essentially an advantage of the proposed system, since it uses all four sensors (i.e. two cameras and two projectors) as opposed to only the two sensors (i.e. two cameras) of the stereo-DIC. For the top section of the surface, the displacement fields of the two systems seem to be in fairly good agreement. Compared to the finite element model, good agreement is observed over most of the surface, although the simulated displacement field seems to be localised closer to the loaded region. The discrepancy between the finite element simulation and the two experimental techniques is possibly due to imperfections of the specimen and the residual stress (or pre-stress) induced while making the specimen and clamping it redundantly at two edges, which are not included in the finite element model.
Several advantages of the proposed system over the stereo-DIC are demonstrated by the experimental results. Firstly, the areas on the base section near the clamped edges, although visible to both cameras as shown in Figure 8, are not included in the result of the stereo-DIC. The reason is that their images appear too different (due to both perspective distortion and depth-of-field differences) for the stereo correspondence to achieve sufficient correlation scores. These areas, on the other hand, can be measured by the proposed system even with a single sensor pair, such as pair C1P1 as shown in Figure 7-a. Secondly, the proposed system has correctly measured the areas along the discontinuities, such as the bent lines between the top section and the side sections. By contrast, the stereo-DIC approach results in erroneous displacements in these discontinuous areas: as it has no prior knowledge of the 3-D scene and the occlusions that may occur, an image of the side section has been wrongly correlated with that of the base section. Thirdly, the point cloud is computed at all camera pixels in just a few seconds by the proposed system, whereas the point cloud density of the stereo-DIC is normally restricted to a subset of the camera pixels (1 pixel in 16 along each axis in this experiment) due to the significant computation time (approximately 10 minutes for all the pixels of the 1 Mp images used here).
Some limitations of the proposed system should also be pointed out. Firstly, due to the use of projected fringes and the temporal phase unwrapping technique, the proposed system requires a relatively long acquisition time (about 20 seconds for a single deformation state in this experiment) and is thus restricted to specimens undergoing relatively low strain-rate deformations. Stereo-DIC, however, is a single-shot technique that has been applied to dynamic testing (e.g. [11-13]). Secondly, phase values at dark speckles may not be extracted with high certainty by the fringe projection technique due to the low fringe modulation. The resulting random shape measurement errors could be reduced by employing a speckle pattern with reduced contrast, although this would also have the effect of increasing the displacement field errors.
6. Conclusions

The fringe projection and digital image correlation techniques have been combined on a hardware platform with multiple cameras and multiple projectors to simultaneously measure both the surface profile and the deformation field from multiple views. The proposed approach has the attractive feature of accurately measuring discontinuous surfaces by exploiting the very dense point clouds to assist the
Table 1: Calibration parameters of the stereo-DIC as compared to the proposed system. The notation is adopted from [11].

                                          Stereo-DIC system              Proposed system
Intrinsic parameters of camera C1
  Principal point {cx; cy}1, pixels       {520.695; 513.419}             {508.665; 562.773}
  Focal length {fx; fy}1, pixels          {3910.61; 3909.93}             {3912.15; 3912.15}
  Pixel skew {fs}1, pixels                0.052                          0
  Lens distortion {k1; k2; k3}1           {-0.084; 0; 0}                 {0.001; 0; 0}
Intrinsic parameters of camera C2
  Principal point {cx; cy}2, pixels       {536.137; 524.398}             {522.480; 566.328}
  Focal length {fx; fy}2, pixels          {3896.33; 3896.05}             {3894.97; 3894.97}
  Pixel skew {fs}2, pixels                0.307                          0
  Lens distortion {k1; k2; k3}2           {-0.072; 0; 0}                 {0.001; 0; 0}
Relative transform from C1 to C2
  Translation {tx; ty; tz}1-2, mm         {-1784.390; -9.059; 517.984}   {-1723.244; 14.820; 549.705}
  Rotation {nx; ny; nz}1-2, degrees       {0.589; 34.007; 0.752}         {0.603; 33.972; -1.157}
image correlation. Another advantage is that resultsfrom multiple views of the surface are automaticallycombined into a unified global coordinate system
without an extra alignment step. The experimentalresults show that the proposed system has currently
achieved an accuracy of 1/12,000 of themeasurement volume diagonal for fully 3-Ddisplacements of up to 10 mm. The results are also in
good agreement with those produced by a standardstereo-DIC system and a finite element simulation.
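The relative transform in Table 1 can be made concrete with a short sketch. It is assumed here that {nx; ny; nz} is a rotation vector (axis-angle representation, in degrees), a common convention in DIC calibration [11]; under that assumption, Rodrigues' formula converts it to a rotation matrix so that a point in camera C1 coordinates can be mapped into C2 coordinates.

```python
import numpy as np

def rotation_matrix(n_deg):
    """Rodrigues' formula: rotation vector (degrees) -> 3x3 rotation matrix."""
    n = np.radians(np.asarray(n_deg, dtype=float))
    theta = np.linalg.norm(n)        # rotation angle in radians
    if theta == 0.0:
        return np.eye(3)
    k = n / theta                    # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product (skew) matrix
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Proposed-system values from Table 1 (camera C1 to C2)
R = rotation_matrix([0.603, 33.972, -1.157])
t = np.array([-1723.244, 14.820, 549.705])  # mm
# Assumed mapping: a point X1 in C1 coordinates maps to C2 as X2 = R @ X1 + t.
```

The dominant component of roughly 34 degrees about the y-axis is consistent with the two cameras verging on the specimen from either side.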
References
1. Chen, F., Brown, G. M. and Song, M. (2000) Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 39, No.1, 10-22. doi:10.1117/1.602438.
2. Huntley, J. M. and Saldner, H. O. (1997) Shape measurement by temporal phase unwrapping: comparison of unwrapping algorithms. Meas. Sci. Technol. 8, No.9, 986. doi:10.1088/0957-0233/8/9/005.
3. Sutton, M., McNeill, S., Helm, J. and Chao, Y. (2000) Advances in two-dimensional and three-dimensional computer vision. In: Photomechanics, Topics in Applied Physics (P. K. Rastogi, Ed.). Springer, Berlin: 323-372.
4. Schmidt, T., Tyson, J. and Galanulis, K. (2003) Full-field dynamic displacement and strain measurement using advanced 3D image correlation photogrammetry: part I. Exp. Tech. 27, No.3, 47-50. doi:10.1111/j.1747-1567.2003.tb00115.x.
5. Cheng, X., Su, X. and Guo, L. (1991) Automated measurement method for 360° profilometry of 3-D diffuse objects. Appl. Opt. 30, No.10, 1274-1278. doi:10.1364/AO.30.001274.
6. Guo, H. and Chen, M. (2003) Multiview connection technique for 360-deg three-dimensional measurement. Opt. Eng. 42, No.4, 900-905. doi:10.1117/1.1555056.
7. Sansoni, G. and Docchio, F. (2004) Three-dimensional optical measurements and reverse engineering for automotive applications. Robot. Comput. Integr. Manuf. 20, No.5, 359-367. doi:10.1016/j.rcim.2004.03.001.
8. Reich, C., Ritter, R. and Thesing, J. (2000) 3-D shape measurement of complex objects by combining photogrammetry and fringe projection. Opt. Eng. 39, No.1, 224-231. doi:10.1117/1.602356.
9. Hartley, R. and Zisserman, A. (2004) Multiple view geometry in computer vision, 2nd edn. Cambridge University Press, Cambridge.
10. Schreiber, W. and Notni, G. (2000) Theory and arrangements of self-calibrating whole-body three-dimensional measurement systems using fringe projection technique. Opt. Eng. 39, No.1, 159-169. doi:10.1117/1.602347.
11. Sutton, M. A., Orteu, J. and Schreier, H. (2009) Image correlation for shape, motion and deformation measurements: basic concepts, theory and applications. Springer, New York.
12. Degenhardt, R., Kling, A., Bethge, A., Orf, J., Kärger, L., Zimmermann, R., Rohwer, K. and Calvi, A. (2010) Investigations on imperfection sensitivity and deduction of improved knock-down factors for unstiffened CFRP cylindrical shells. Compos. Struct. 92, No.8, 1939-1946. doi:10.1016/j.compstruct.2009.12.014.
13. Hühne, C., Rolfes, R., Breitbach, E. and Teßmer, J. (2008) Robust design of composite cylindrical shells under axial compression - simulation and validation. Thin Walled Struct. 46, No.7-9, 947-962. doi:10.1016/j.tws.2008.01.043.
14. Harvent, J., Bugarin, F., Orteu, J., Devy, M., Barbeau, P. and Marin, G. (2008) Inspection de pièces aéronautiques pour la détection de défauts de forme à partir d'un système multi-caméras [Inspection of aeronautical parts for the detection of shape defects using a multi-camera system]. Journées COFREND 2008, Toulouse-Labège, France.
15. Ogale, A. S. and Aloimonos, Y. (2005) Shape and the stereo correspondence problem. Int. J. Comput. Vision 65, No.3, 147-162. doi:10.1007/s11263-005-3672-3.
16. Nguyen, T. N., Huntley, J. M., Burguete, R. L. and Coggrave, C. R. (2011) Shape and displacement measurement of discontinuous surfaces by combining fringe projection and digital image correlation. Opt. Eng., accepted for publication.
17. Coggrave, C. R. and Huntley, J. M. (2000) Optimization of a shape measurement system based on spatial light modulators. Opt. Eng. 39, No.1, 91-98. doi:10.1117/1.602340.
18. Saldner, H. O. and Huntley, J. M. (1997) Temporal phase unwrapping: application to surface profiling of discontinuous objects. Appl. Opt. 36, No.13, 2770-2775. doi:10.1364/AO.36.002770.
19. Huntley, J. M. and Saldner, H. O. (1997) Error-reduction methods for shape measurement by temporal phase unwrapping. J. Opt. Soc. Am. A 14, No.12, 3188-3196. doi:10.1364/JOSAA.14.003188.
20. Huntley, J. M. (1986) An image processing system for the analysis of speckle photographs. J. Phys. E: Sci. Instrum. 19, No.1, 43-49. doi:10.1088/0022-3735/19/1/007.
21. Ogundana, O. (2007) Automated calibration of multi-sensor optical shape measurement system. PhD thesis, Loughborough University, UK.
22. Huntley, J. M., Ogundana, T., Burguete, R. L. and Coggrave, C. R. (2007) Large-scale full-field metrology using projected fringes: some challenges and solutions. Proc. SPIE 6616, 66162C-10. doi:10.1117/12.726222.
23. Ogundana, O. O., Coggrave, C. R., Burguete, R. L. and Huntley, J. M. (2007) Fast Hough transform for automated detection of spheres in three-dimensional point clouds. Opt. Eng. 46, No.5, 051002-11. doi:10.1117/1.2739011.
24. Barber, C. B., Dobkin, D. P. and Huhdanpaa, H. (1996) The quickhull algorithm for convex hulls. ACM Trans. Math. Softw. 22, 469-483. doi:10.1145/235815.235821.
25. Lecompte, D., Smits, A., Bossuyt, S., Sol, H., Vantomme, J., Van Hemelrijck, D. and Habraken, A. (2006) Quality assessment of speckle patterns for digital image correlation. Opt. Laser Eng. 44, No.11, 1132-1145. doi:10.1016/j.optlaseng.2005.10.004.