
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 60, NO. 5, MAY 2011 1861

Analysis of Errors in ToF Range Imaging With Dual-Frequency Modulation

Adrian P. P. Jongenelen, Donald G. Bailey, Senior Member, IEEE, Andrew D. Payne, Adrian A. Dorrington, and Dale A. Carnegie, Senior Member, IEEE

Abstract—Range imaging is a technology that utilizes an amplitude-modulated light source and gain-modulated image sensor to simultaneously produce distance and intensity data for all pixels of the sensor. The precision of such a system is, in part, dependent on the modulation frequency. There is typically a tradeoff between precision and maximum unambiguous range. Research has shown that, by taking two measurements at different modulation frequencies, the unambiguous range can be extended without compromising distance precision. In this paper, we present an efficient method for combining two distance measurements obtained using different modulation frequencies. The behavior of the method in the presence of noise has been investigated to determine the expected error rate. In addition, we make use of the signal amplitude to improve the precision of the combined distance measurement. Simulated results compare well to actual data obtained using a system based on the PMD19k range image sensor.

Index Terms—Ambiguity, image sensor, range imaging, time of flight (ToF), 3-D camera.

I. INTRODUCTION

TIME-OF-FLIGHT (ToF) range-imaging cameras can be used to measure the size, shape, and location of objects in a scene from a single viewpoint. They operate similarly to traditional digital video cameras but simultaneously capture both intensity and distance information for every pixel in the image. This is achieved by actively illuminating the scene with intensity-modulated light and measuring both the amplitude and the round-trip propagation time of the light from the camera to the objects in the scene and back to the camera.

Typically, an infrared light source is used, modulated in the region of 10–100 MHz. Propagation time (and, hence, distance) is determined by measuring the phase shift of the returned light's modulation envelope. This approach is performed

Manuscript received June 23, 2010; revised August 6, 2010; accepted August 11, 2010. Date of publication March 17, 2011; date of current version April 6, 2011. The work of A. P. P. Jongenelen was supported in part by the New Zealand Tertiary Education Commission through a Top Achievers Doctoral Scholarship. The Associate Editor coordinating the review process for this paper was Dr. George Xiao.

A. P. P. Jongenelen and D. A. Carnegie are with the School of Engineering and Computer Science, Victoria University of Wellington, Wellington 6140, New Zealand (e-mail: [email protected]).

D. G. Bailey is with the School of Engineering and Advanced Technology, Massey University, Palmerston North 4442, New Zealand.

A. D. Payne and A. A. Dorrington are with the School of Engineering, University of Waikato, Hamilton 3240, New Zealand.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIM.2010.2089190

by gain modulating the image sensor during the integration time at the same frequency as the illumination modulation, thereby implementing a homodyne downconverter. This gain modulation can be achieved with the combination of a traditional image sensor and an external modulation device [1] but is more commonly achieved with a specialized complementary metal–oxide–semiconductor (CMOS) image sensor, with the modulation function designed into the pixels [2]–[4]. Objects that are close to the camera will exhibit a small phase shift in the returned light, which will be nearly in phase with the sensor modulation, producing a bright pixel. More distant objects will impart a larger phase delay on the illumination, resulting in a darker pixel value being detected.

Factors such as object color, distance, background lighting, and illumination homogeneity will all influence pixel brightness. Therefore, range cameras normally reject these external influences by capturing multiple images of the scene with known phase offsets between the illumination and image sensor introduced between frames. The phase delay of the returned light can be calculated from the phase of the Fourier series over the samples as

\varphi = \tan^{-1}\!\left(\frac{\sum_{i=0}^{N-1} I_i \sin(2\pi i/N)}{\sum_{i=0}^{N-1} I_i \cos(2\pi i/N)}\right) \qquad (1)

where Ii are the pixel intensity samples from each frame (each phase shifted by 2π/N rad), and N is the number of frames per measurement cycle. Distance can be calculated from the phase by [5]

d = \frac{c}{2f}\left(\frac{\varphi}{2\pi} + k\right) \qquad (2)

where f is the modulation frequency, c is the speed of light, and k is an integer that accounts for the potential wrapping of phase. With only one phase measurement, it is impossible to establish the integer k; therefore, in many cases, it is assumed to be 0. This assumption restricts the maximum unambiguous range of the system to

d_u = \frac{c}{2f}. \qquad (3)

The maximum distance can be extended by decreasing f, but this approach also decreases the measurement precision. By utilizing two range results acquired with different modulation frequencies, it is possible to extend the maximum unambiguous range without significantly impacting range precision [4], [6]. The maximum distance of a system that utilizes two modulation frequencies, fA and fB, can be calculated based on the greatest common divisor of the frequencies by

d_E = \frac{c}{2\gcd(f_A, f_B)} = \frac{c}{2 f_E} \qquad (4)

where dE is the effective new maximum range, which is based on an effective frequency of fE.

Previous work by Payne [7] and Jongenelen [5] has shown that it is feasible to increase dE with two approaches. In the first approach, fB is much lower than fA and, therefore, is less precise. This low-frequency measure provides only a coarse estimate for the object location, and measurement precision is provided by the higher frequency fA. The advantage is that there is the option to purposely reduce the integration time of the coarse measure and relatively increase the integration time of the higher frequency measurement, thereby improving the overall precision. In the second approach, fA and fB are both relatively large, and therefore, both measurements contribute to measurement precision. However, if either measurement is very imprecise due to measurement noise, the system cannot correctly establish the coarse location of the object, i.e., k in (2), producing large errors.

This paper sets out to first demonstrate a practical approach to calculating distance based on two measurements taken with different modulation frequencies. We then analyze the precision requirements of the two measurements to establish a relationship between the relative frequencies and measurement error rate. In addition, the signal amplitude is taken into account as an estimate of the measurement precision, which is used to establish an appropriate weighting of the two measurements.

This paper is organized as follows. In Section II, we describe an efficient method for calculating the extended distance based on two independent phase measurements. In Section III, we discuss the factors that affect the precision of this measurement, and in Section IV, we investigate how poor precision in either of the phase measurements can lead to a situation where the distance measurement is disproportionately incorrect, which we deem to be an error. These ideas have been tested using a range-imaging system constructed by the authors, and in Section V, we briefly describe this system and our experimental setup. Results that show how well the theory matches the experimental data are detailed in Section VI, and conclusions are drawn in Section VII.

II. COMBINING TWO FREQUENCY MEASUREMENTS

Consider two phase measurements ϕA and ϕB captured using modulation frequencies fA and fB, respectively, where the ratio fA : fB can be expressed by the coprime integers MA : MB, as calculated by

M_A = \frac{f_A}{f_E} = \frac{d_E}{d_{uA}} \qquad M_B = \frac{f_B}{f_E} = \frac{d_E}{d_{uB}}. \qquad (5)

For convenience, we will scale the range of the phases to [0 : 1) by working with pA = ϕA/2π and pB = ϕB/2π. In the absence of noise, both of these values represent the distance by

d = d_{uA}(n_A + p_A) = d_{uB}(n_B + p_B) = d_E\, p_E \qquad (6)

where nA and nB represent the integer number of times the phases pA and pB have wrapped around, respectively, and pE is the equivalent phase for the effective modulation frequency. Due to noise in the phase measurements, these two estimates are unlikely to ever be exactly equal; therefore, the aim is to find nA and nB such that the difference between the two measurements is a minimum. This function can be expressed as

y(n_A, n_B) = \left|M_B(n_A + p_A) - M_A(n_B + p_B)\right|. \qquad (7)

One naive approach is to evaluate all possible combinations of nA and nB and select the combination that gives the difference closest to zero. With values established for nA and nB, distance is computed by

d = \frac{c}{2}\left[\frac{w(n_A + p_A)}{f_A} + \frac{(1 - w)(n_B + p_B)}{f_B}\right] \qquad (8)

where w is a weighting factor between 0 and 1 and is chosen to minimize the variance in the output distance estimate.
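The naive search over (7) followed by the weighted combination of (8) can be sketched as follows. This is a hedged illustration rather than the authors' code; `combine_naive` is our own name, and the default equal weighting w = 0.5 is an arbitrary choice.

```python
C = 299_792_458.0  # speed of light (m/s)

def combine_naive(pA, pB, MA, MB, fA, fB, w=0.5):
    """Brute-force search for the wrap counts n_A in [0, M_A) and
    n_B in [0, M_B) minimising y(n_A, n_B) of (7), then the weighted
    distance of (8)."""
    nA, nB = min(
        ((a, b) for a in range(MA) for b in range(MB)),
        key=lambda nn: abs(MB * (nn[0] + pA) - MA * (nn[1] + pB)),
    )
    return (C / 2) * (w * (nA + pA) / fA + (1 - w) * (nB + pB) / fB)

# Noiseless example: fA = 40 MHz, fB = 32 MHz, so fE = 8 MHz and MA:MB = 5:4
fA, fB, MA, MB = 40e6, 32e6, 5, 4
d_true = 10.0  # metres, beyond both single-frequency unambiguous ranges
pA = (d_true * 2 * fA / C) % 1  # wrapped phase fractions
pB = (d_true * 2 * fB / C) % 1
d = combine_naive(pA, pB, MA, MB, fA, fB)  # recovers ~10.0 m
```

The search cost grows with MA × MB, which is what motivates the more direct residue-number method described next.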

A more direct method involves the modification of a residue-to-binary converter. The distance is calculated in three steps. In the first step, we find an estimate for the integer nB using the following modified Chinese remainder theorem [7]:

e = p_A M_B - p_B M_A
n_B = \operatorname{mod}\left[k_0\,\mathrm{round}(e),\, M_B\right] \qquad (9)

where k0 is the modular multiplicative inverse such that mod(k0 MA, MB) = 1. This calculation works equally well to find nA by substituting the relevant subscripts, although the value of k0 is typically larger. For example, when MB = MA − 1, k0 equates to 1 when calculating nB, whereas k0 equates to MB when calculating nA. The second step produces an intermediate result X, i.e.,

X = w M_B(n_A + p_A) + (1 - w) M_A(n_B + p_B)
  = M_A n_B + M_A p_B + w\left(e - \mathrm{round}(e)\right). \qquad (10)

The first line is essentially a variation of (8). The second line simplifies to requiring only the calculation of nB. The first two terms assume that the measurements are error free, and the third term effectively weights the two estimates by reintroducing the weighted error term. The third step scales the intermediate result by a constant to convert from the scale of [0 : MAMB) to [0 : dE), giving the final distance measurement by

d = d_E \cdot \frac{X}{M_A M_B}. \qquad (11)


III. FACTORS THAT AFFECT DISTANCE PRECISION

Assuming that the amplitude of the returned signal is sufficiently large to minimize the effects of quantization, the noise on pA and pB follows a Gaussian distribution [9]. The inputs can be expressed as pA = p′A + ΔA and pB = p′B + ΔB, where ΔA and ΔB are independent and normally distributed, with zero mean and standard deviations of σpA and σpB, respectively. The phase measurement standard deviation σp is related to the standard deviation of the raw pixel intensities, σI, by [9]

\sigma_p = \frac{\sigma_I}{\sqrt{2}\,A} \qquad (12)

where A is the amplitude, as calculated from the Fourier series by

A = \frac{2}{N}\sqrt{\left[\sum_{i=0}^{N-1} I_i \cos(2\pi i/N)\right]^2 + \left[\sum_{i=0}^{N-1} I_i \sin(2\pi i/N)\right]^2}. \qquad (13)

Factors that can affect the amplitude include:
• less light returning due to the 1/d² relationship with distance;
• the reflectivity of objects in the scene;
• the illumination amplitude modulation depth (which reduces at higher frequencies due to bandwidth limitations);
• the integration time, with longer integration increasing the amplitude.
Most of these factors cannot be controlled in a real scene, except for the integration time, to which A is directly proportional. The standard deviation of the distance measurement σd is similarly proportional to the input noise and amplitude, with the added factor of the modulation frequency, by

\sigma_d = \frac{c}{2f} \cdot \frac{\sigma_I}{\sqrt{2}\,A}. \qquad (14)

This is not strictly true, in the sense that σI will not remain unchanged when altering f due to nonlinearities in the system electronics; however, this is a second-order effect compared to the influence of the amplitude and integration time.
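The noise model of (12)–(14) can be sketched as follows. The helper names are ours; the caveat above about σI varying with f is deliberately ignored, so this is only the first-order model.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def fourier_amplitude(samples):
    """Amplitude A of the fundamental of N phase-stepped samples, (13).
    Insensitive to any constant (background) offset in the samples."""
    n = len(samples)
    re = sum(I * math.cos(2 * math.pi * i / n) for i, I in enumerate(samples))
    im = sum(I * math.sin(2 * math.pi * i / n) for i, I in enumerate(samples))
    return (2 / n) * math.hypot(re, im)

def phase_std(sigma_I, A):
    """Phase standard deviation sigma_p = sigma_I / (sqrt(2) A), (12)."""
    return sigma_I / (math.sqrt(2) * A)

def distance_std(sigma_I, A, f):
    """Distance standard deviation, (14): the phase noise scaled by c/(2f),
    so halving f doubles sigma_d for the same amplitude."""
    return (C / (2 * f)) * phase_std(sigma_I, A)
```

For a modulated signal of amplitude 3 riding on any constant offset, `fourier_amplitude` recovers 3; doubling the integration time doubles A and therefore halves both standard deviations, which is the lever exploited in the integration-ratio experiments below.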

When combining the two phase measurements, the best estimate applies a weighting to each measurement to favor the input value with the higher precision, as shown in (10). This weighting can be determined by their relative standard deviations, which can be estimated from the relative modulation frequencies and amplitudes as

w = \frac{\sigma_{dB}}{\sigma_{dA} + \sigma_{dB}} = \frac{M_A A_A}{M_A A_A + M_B A_B}. \qquad (15)

IV. ERRORS IN DISTANCE MEASUREMENT

If there is sufficient noise on the inputs pA and pB, it is possible for the returned distance to have a very large error due to an incorrect estimate of the number of integer cycles that each phase has wrapped around. This error occurs when the estimate returned from the round function in (9) is incorrect. This error can be expressed as

\left|\mathrm{round}(e) - i\right| \geq 1 \qquad (16)

Fig. 1. Simulated error rate versus phase precision σp, with fA = 40 MHz, and frequency ratios of 5:4, 5:1, 8:7, and 8:1. Dotted lines indicate the error rate as predicted by (18) and (19).

where i is the integer that would have given the correct result. Assuming that the amplitude is sufficiently large, the occurrence of an error can be expressed as

\mathrm{error} = \begin{cases} 0, & |\Delta_A M_B - \Delta_B M_A| < 1/2 \\ 1, & |\Delta_A M_B - \Delta_B M_A| \geq 1/2. \end{cases} \qquad (17)

Because the noise is assumed to be independent, the standard deviation of the difference is

\sigma_e = \sqrt{(M_B \sigma_{pA})^2 + (M_A \sigma_{pB})^2}. \qquad (18)

The expected error rate for a given σpA and σpB can be found by finding the probability that a sample taken from a normal distribution with standard deviation σe will deviate by more than 0.5 from the mean. Using the normal cumulative distribution function, this error rate can be determined with

P = 1 - \operatorname{erf}\!\left(\frac{1}{2\sigma_e\sqrt{2}}\right). \qquad (19)

Fig. 1 shows the error rate as a function of σp for simulated data with frequency ratios, MA : MB, of 5:4, 5:1, 8:7, and 8:1. Noise on each channel is independent but with the same standard deviation. The error rate increases as MA is increased, because the integer number of cycles is larger, making the determination of the correct integer more sensitive to phase errors. For a given maximum unambiguous distance, the use of two high frequencies (where MA and MB are similar) generally increases the error rate for similar reasons. The dotted lines behind the colored lines show the error rate as predicted by (18) and (19), indicating that the simulation results closely match the theory in this case.
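The agreement between the simulated and predicted error rates can be reproduced with a small Monte Carlo check of (17)–(19). This is our own sketch, not the authors' simulation: noise is drawn on the scaled phases p ∈ [0, 1) per (17), and the trial count and seed are arbitrary.

```python
import math
import random

def predicted_error_rate(sigma_pA, sigma_pB, MA, MB):
    """Error probability from (18) and (19): the chance that a zero-mean
    normal sample with std sigma_e exceeds 0.5 in magnitude."""
    sigma_e = math.hypot(MB * sigma_pA, MA * sigma_pB)
    return 1 - math.erf(1 / (2 * sigma_e * math.sqrt(2)))

def simulated_error_rate(sigma_p, MA, MB, trials=200_000, seed=1):
    """Monte Carlo estimate of the error condition in (17), with
    independent, equal noise on both channels."""
    rng = random.Random(seed)
    hits = sum(
        abs(rng.gauss(0, sigma_p) * MB - rng.gauss(0, sigma_p) * MA) >= 0.5
        for _ in range(trials)
    )
    return hits / trials
```

For example, with σp = 0.05 (phase-fraction units) and MA : MB = 5 : 4, the predicted and simulated rates agree to within Monte Carlo noise, mirroring the match between the dotted and colored lines in Fig. 1.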

Fig. 2 shows the relationship between σd and σp for the same set of frequencies. Note that outliers that result from mismatch errors have been removed to obtain a better estimate of the underlying distribution. The use of two high frequencies generally reduces the standard deviation for the same effective frequency, because both measurements are more precise, allowing the error to be reduced by averaging. When the high-frequency measurement is paired with a low-frequency measurement, the result is less precise for a given phase error; therefore, the improvement due to averaging is reduced. Also shown in the plot is a simulated measurement using only a single 8-MHz modulation frequency, where, by comparison, σd is higher for all σp.

Fig. 2. Simulated distance precision σd versus phase precision σp for simulated data, with fA = 40 MHz, and frequency ratios MA : MB of 5:4, 5:1, 8:7, and 8:1. Also shown is σd for a capture using a single modulation frequency of 8 MHz.

Fig. 3. Simulated phase precision σp versus ratio of integration times as used in Figs. 4 and 5.

When performing a measurement using two modulation frequencies, a decision must be made with regard to how much time is spent at each frequency, assuming the constraint that the total measurement period or frame rate remains unchanged. The relative integration time for each frequency will directly affect the signal amplitude and, hence, σp. Fig. 3 shows the simulated effect of the integration time ratio on σp, with a best achievable precision of 0.1 rad. This relationship has been used for the subsequent simulations of the error rate and distance standard deviation σd.

Fig. 4. Simulated error rate versus the ratio of integration times for frequency ratios of 5:4 and 5:1.

Fig. 5. Simulated distance precision σd versus the ratio of integration times for frequency ratios of 5:4 and 5:1. Each frequency has been processed with a fixed weighting of 0.5 and a variable weighting based on (15).

Fig. 6. Diagram of test targets’ color and approximate position in scene.

Fig. 4 shows the simulated error rate versus the ratio of integration times. Using two relatively high frequencies not only has the advantage of potentially improving precision but also has an associated cost in terms of a higher error rate compared to the high- and low-frequency cases. The combination of high and low frequency (MB = 1) only produces errors at one end of the scale, where the relative time spent on the low frequency is small, because the high-frequency measurement does not contribute to resolving the ambiguity.

Fig. 7. False color images of the scene captured with high integration time and N = 240. (a) Distance measured using f = 40 MHz. (b) Distance measured using f = 32 MHz. (c) Distance calculated through a combination of (a) and (b). (d) Amplitude measured using f = 40 MHz, showing labeled test targets.

Fig. 5 shows the simulated distance standard deviation versus the ratio of integration times for both a fixed weighting of 0.5 and a variable weighting calculated using (15). In both the two-high-frequency and the high- and low-frequency cases, the use of a variable weighting based on the frequency ratio and amplitude improves precision compared to the fixed weighting of 0.5. The curve for a ratio of 5:1 clearly shows the tradeoff in the time split between the high and low frequencies. Increasing the period of the high frequency improves the accuracy only up to a certain critical proportion when using a fixed weighting. When using the variable weighting scheme, the standard deviation decreases to a minimum where 100% of the result is based on the high-frequency measurement. With two similar frequencies, there is a wide band over which precise measurements can be made. Although the precision is improved with only a single frequency (at both ends of the integration ratio scale), the error rate is such that many of the results would be meaningless.

V. EXPERIMENTAL SETUP

We have constructed a range-imaging system based on the PMD Technologies (Siegen, Germany) PMD19k image sensor. Illumination is provided by a bank of eight laser diodes, driven in a controlled-current configuration and mounted in a circular arrangement around a 16-mm focal length lens. The modulation signals are generated using an Altera (San Jose, CA, U.S.) Stratix III field-programmable gate array (FPGA). Simultaneous dual-frequency modulation has been implemented using phase-locked loops within the FPGA, one for each of the two frequencies. A controller selects which modulation source reaches the output and can vary the ratio of the time spent at each frequency during the sensor integration time.

This system exhibits an improvement in precision as the modulation frequency is increased up to 40 MHz, at which point a sharp decline in precision is noted due to bandwidth limitations of the image sensor drive electronics [10]. In addition, within the usable frequency range of less than 40 MHz, the modulation contrast is reduced as the frequency is increased, which, for this system, implies that the distance measurement precision is not strictly proportional to 1/f, as given by (14). Further details with regard to this system are described by Jongenelen et al. [11].

One scene has been set up as illustrated in Fig. 6 with five pairs of test targets, with each pair's actual distance measured at approximately 2.50, 3.75, 5.00, 6.25, and 7.50 m using a Leica Geosystems (Heerbrugg, Switzerland) DISTO distance meter (manufacturer-specified accuracy of ±1 mm). These distances have been chosen to purposely produce ambiguous distance measurements for modulation frequencies above 20 MHz. The 40-MHz measurement is particularly interesting, because ambiguity leads to some objects appearing at almost identical distances. Each target pair has a white target and a green target so that a comparison can be made between two objects at the same distance reflecting light of different intensities.


Fig. 8. Distance measurement error over time for a pixel located at test point 8 using combined 40- and 32-MHz modulation. The large spikes are due to errors in the estimate for the integer number of times the phase has wrapped around.

VI. EXPERIMENTAL RESULTS

Fig. 7(a) and (b) shows two captures of the scene using modulation frequencies of 40 and 32 MHz, respectively. These images have been captured using 240 phase steps per measurement to minimize the systematic error due to the interference of harmonics [12] and captured over 36 s to reduce the standard deviation. The relatively high accuracy of these images makes them suitable as the benchmark case for measuring the error rate of the higher frame rate captures presented later in this paper.

These images illustrate the problem of range ambiguity, where more distant objects are incorrectly measured by multiples of the maximum unambiguous range du. Fig. 7(c) shows the distance calculated by combining the 40- and 32-MHz measurements using (10). There are still a number of pixels at the top and left of the image that have been incorrectly ranged, which can be attributed to the pixels' low amplitude return as a result of poor object reflectivity. Fig. 7(d) shows the measured amplitude for the 40-MHz sequence on a logarithmic scale. The different colored targets can clearly be identified and have been labeled to match the diagram in Fig. 6.

The scene has then been imaged using the three different modulation frequencies 40, 32, and 8 MHz, with five phase steps (N = 5), and the total measurement period fixed at 100 ms (10 measurements per second). A total of 200 measurements have been taken at each setting, with precision being measured as the standard deviation of a pixel over the 200 measurements.

Fig. 8 shows a plot of the measured distance of a single pixel located on test target 8 for the 200 measurements. This plot illustrates how a signal with a typically small standard deviation can cause large errors when the phase difference between the two frequency measurements is sufficiently large. The observed spikes are deemed to be errors, as described in Section IV, and the error rate is calculated as the percentage of these error events with respect to the total number of samples. When calculating the distance standard deviation σd, these outliers are first removed from the data so as not to inflate what is otherwise a very small standard deviation.
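The error-event counting and outlier rejection just described can be sketched as follows. This is our own illustration of the procedure, not the authors' processing code; in particular, the rejection threshold is an assumption, with a value well below the wrap spacing but well above the noise floor being the natural choice.

```python
import statistics

def split_errors(distances, d_ref, threshold):
    """Separate wrap-error spikes from ordinary noise: samples more than
    `threshold` from the reference distance d_ref count as error events;
    the remaining samples give the outlier-free standard deviation."""
    good = [d for d in distances if abs(d - d_ref) < threshold]
    error_rate = 1 - len(good) / len(distances)
    sigma_d = statistics.stdev(good) if len(good) > 1 else 0.0
    return error_rate, sigma_d

# Toy series: 198 well-behaved samples plus 2 wrap-error spikes
series = [10.0] * 198 + [13.75, 6.25]
rate, sigma = split_errors(series, 10.0, threshold=1.8)  # rate ~ 0.01
```

Applied to the 200-sample series of Fig. 8, this yields both the error rate and the spike-free σd used in the comparisons below.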

Fig. 9. Relationship between distance precision σd and mean signal amplitude A for (a) all pixels in the scene and (b) only pixels with A > 30. If A is sufficiently large, the relationship is approximately linear. For smaller A, quantization effects cause larger than expected standard deviation.

Fig. 9(a) shows a plot of the inverse of the amplitude for all pixels in the scene versus the measured distance precision σd, with single modulation frequencies of 40, 32, and 8 MHz. To enable a meaningful comparison, the y-axis has also been scaled by the maximum unambiguous range du, which is different for each modulation frequency. Where the amplitude is small, the phase uncertainty is large, although for very small amplitude, it is not strictly an inverse relationship. This condition can be attributed to the fact that, for small amplitudes, the effect of quantization is for the phase to deteriorate into a sample taken from a uniform random distribution [8].

Fig. 9(b) shows a plot of the same data but with points with amplitude less than 30 units removed [the units are related to the 16-b analog-to-digital converter (ADC) used to convert the voltage output from the image sensor]. On this smaller scale, it is apparent that there is an inverse relationship between signal amplitude and distance precision, as anticipated by (12), where σI is held to be constant. Based on (15), it is desirable for the weightings applied to each independent measurement to be based on the inverse of their relative standard deviations, thereby favoring the measurement with the higher precision. Because the standard deviation for a given sample is unknown, the amplitude can act as a reasonable indicator for the sample's relative precision.

The measurements have been repeated using the same set of f and N but with varying measurement periods (and, hence, integration times) from 5 ms to 100 ms. The following two frequency pairings have been tested: 1) 40 and 32 MHz and 2) 40 and 8 MHz. In addition, the measurements have been combined such that each measurement takes a total time of 100 ms, e.g., T40MHz = 100 ms − T32MHz. The precision for these measurements is calculated as the mean standard deviation for a 12 × 16 group of pixels located at test point 8. This set of pixels has been chosen because it is central to the image; therefore, it can be assumed that the area is planar and all pixels have the same true distance.

Fig. 10. Mean phase precision σp versus integration time ratio for pixels at test point 8.

Fig. 11. Error rate versus integration ratio for pixels at test point 8.
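The precision metric described above, i.e., the mean per-pixel standard deviation over a planar patch of repeated captures, can be sketched as follows. The function name and list-of-frames representation are assumptions for illustration only.

```python
import statistics

def patch_precision(frames, row0, col0, rows, cols):
    """Mean per-pixel sample standard deviation over a pixel patch.

    frames: list of 2-D lists (repeated distance images).
    A central planar patch is chosen so every pixel shares the same
    true distance and only noise contributes to the spread.
    """
    sigmas = []
    for r in range(row0, row0 + rows):
        for c in range(col0, col0 + cols):
            series = [frame[r][c] for frame in frames]
            sigmas.append(statistics.stdev(series))  # std over repeats
    return sum(sigmas) / len(sigmas)
```

Averaging per-pixel standard deviations, rather than pooling all samples, keeps fixed per-pixel offsets from inflating the noise estimate.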

In Fig. 10, it is shown how the ratio between the time spent integrating for each frequency affects the standard deviation of the measured phase. The relationship is similar to the simulation presented in Fig. 3, where the best phase precision was 0.1 rad. However, in this case, the phase precision of each frequency is different, even when their respective integration times are the same. In general, the lower 8-MHz frequency (min σp = 0.062 rad) has a smaller phase standard deviation than the 32-MHz frequency (min σp = 0.076 rad), which, in turn, is smaller than the 40-MHz frequency (min σp = 0.101 rad). This can be attributed to a reduction in the demodulation contrast of the image sensor as the frequency is increased.

Fig. 11 shows the effect of the ratio between the integration times for each frequency on the error rate for a group of pixels located at test point 8. The behavior is very much as expected, closely following the simulated results in Fig. 4, only with the error rates offset due to the differences in phase precision.

Fig. 12. Distance precision σd versus integration ratio for pixels at test location 8. Error bars show variation in standard deviation within a 12 × 16 group of pixels.

The fewest errors when using two relatively high frequencies are obtained at a ratio of approximately 50%, although this error rate is typically higher than that for a combination of high and low frequencies.
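The error rate discussed above counts ambiguity-resolution failures, i.e., measurements whose error is far larger than ordinary noise would produce. A minimal sketch, with a hypothetical function name and an assumed error tolerance well above the noise floor:

```python
def wrap_error_rate(distances, true_distance, tolerance):
    """Fraction of measurements whose error exceeds tolerance.

    Ambiguity-resolution failures yield gross errors (a wrong cycle
    count), so a tolerance several times the noise standard deviation
    cleanly separates them from ordinary measurement noise.
    """
    failures = sum(1 for d in distances if abs(d - true_distance) > tolerance)
    return failures / len(distances)
```

For a planar test patch, the known patch distance can serve as the reference, and the resulting fraction is directly comparable across integration ratios.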

Fig. 12 shows the effect of the integration ratio on the distance standard deviation using a fixed weighting of 0.5 and a variable weighting based on relative frequency and amplitude. Error bars are also shown to indicate the variation in the standard deviation across the 12 × 16 group of pixels. Irrespective of the weighting scheme used, the combination of two relatively high frequencies gives a lower standard deviation. This closely matches the simulated behavior presented in Fig. 5. The only noticeable difference is in the position of the best integration ratio for the fixed-weighting 40 : 8-MHz measurement, which shifts from approximately 25% in the simulation to approximately 35% for the experimental data. This shift is consistent with the reduced σp of the 8-MHz signal compared to the higher frequencies, whereas the simulation assumed that σp was constant for all frequencies.

VII. CONCLUSION

This paper has investigated the errors that arise from measuring distance using two modulation frequencies in ToF range imaging systems. We have presented an efficient algorithm for combining two independent distance measurements to give a new measurement with increased precision and an extended maximum unambiguous range. The main drawback of this method is that, if either of the individual measurements has sufficient noise, the calculated unambiguous distance may contain a large error, disproportionate to the level of noise.

The rate at which these errors occur can be statistically modeled based on the phase standard deviation. Furthermore, the amplitude of the signal can be used as an indicator of the measurement standard deviation, a fact that proves useful in applying a weighting between the two individual samples to increase overall system precision.



These techniques have been applied to test data that were captured using a custom-built range-imaging system based on the PMD19k image sensor. The experimental results compare well with the theoretical models and demonstrate the viability of the proposed techniques.

REFERENCES

[1] A. A. Dorrington, M. J. Cree, D. A. Carnegie, A. D. Payne, R. M. Conroy, J. P. Godbaz, and A. P. P. Jongenelen, "Video-rate or high-precision: A flexible range imaging camera," in Proc. SPIE Image Process., Mach. Vis. Appl., 2008, vol. 6813, p. 681307.

[2] B. Büttgen and P. Seitz, "Robust optical time-of-flight range imaging based on smart pixel structures," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 55, no. 6, pp. 1512–1525, Jul. 2008.

[3] G. Gulden, D. Becker, and M. Vossiek, "Novel optical distance sensor based on MSM technology," IEEE Sensors J., vol. 4, no. 5, pp. 612–618, Oct. 2004.

[4] S. B. Gokturk, H. Yalcin, and C. Bamji, "A time-of-flight depth sensor—System description, issues, and solutions," in Proc. Comput. Vis. Pattern Recog. Workshop, 2004, vol. 3, pp. 35–43.

[5] A. P. P. Jongenelen, D. A. Carnegie, A. D. Payne, and A. A. Dorrington, "Maximizing precision over extended unambiguous range for TOF range imaging systems," in Proc. IEEE Int. Instrum. Meas. Technol. Conf., 2010, pp. 1575–1580.

[6] A. A. Dorrington, M. J. Cree, A. D. Payne, R. M. Conroy, and D. A. Carnegie, "Achieving submillimetre precision with a solid-state full-field heterodyning range imaging camera," Meas. Sci. Technol., vol. 18, no. 9, pp. 2809–2816, Jul. 2007.

[7] A. D. Payne, A. P. P. Jongenelen, A. A. Dorrington, M. J. Cree, and D. A. Carnegie, "Multiple-frequency range imaging to remove measurement ambiguity," in Proc. Opt. 3-D Meas. Tech. IX, 2009, vol. 2, pp. 139–148.

[8] Y. Wang, "New Chinese remainder theorems," in Proc. 32nd Asilomar Conf. Signals, Syst., Comput., 1998, pp. 165–171.

[9] M. Frank, M. Plaue, H. Rapp, U. Kothe, B. Jahne, and F. Hamprecht, "Theoretical and experimental error analysis of continuous-wave time-of-flight range cameras," Opt. Eng., vol. 48, no. 1, pp. 013602-1–013602-16, Jan. 2009.

[10] A. D. Payne, A. A. Dorrington, M. J. Cree, and D. A. Carnegie, "Characterization of modulated time-of-flight range image sensors," in Proc. SPIE-IS&T Electron. Imaging, 2008, vol. 7239, pp. 723904-1–723904-11.

[11] A. P. P. Jongenelen, D. A. Carnegie, A. D. Payne, and A. A. Dorrington, "Development and characterisation of an easily configurable range imaging system," in Proc. Image Vis. Comput. New Zealand, 2009, pp. 79–84.

[12] H. Rapp, M. Frank, F. Hamprecht, and B. Jahne, "A theoretical and experimental investigation of the systematic errors and statistical uncertainties of time-of-flight cameras," Int. J. Intell. Syst. Technol. Appl., vol. 5, no. 3, pp. 402–413, Nov. 2008.

Adrian P. P. Jongenelen received the M.Sc. degree (with distinction), majoring in electronics and computer systems engineering, in 2007 from the Victoria University of Wellington, Wellington, New Zealand, where he is currently working toward the Ph.D. degree in the Mechatronics Research Group, School of Engineering and Computer Science.

His research interests include 3-D time-of-flight range imaging, digital signal processing, wireless communications, and FPGA implementations.

Donald G. Bailey (M'80–SM'04) received the B.E. (Hons) and Ph.D. degrees in electrical and electronic engineering from the University of Canterbury, New Zealand, in 1982 and 1985, respectively.

He is currently an Associate Professor with the School of Engineering and Advanced Technology, Massey University, Palmerston North, New Zealand. He is the Leader of the Image and Signal Processing Research Group. His research interests include applications of image analysis, machine vision, and robot vision, in particular the application of FPGAs to implementing image processing algorithms.

Andrew D. Payne received the B.Tech. (Hons), M.Sc., and Ph.D. degrees in physics from the University of Waikato, Hamilton, New Zealand, in 2003, 2004, and 2009, respectively.

He currently holds a postdoctoral position with the Chronoptics Group, School of Engineering, University of Waikato, developing advanced modulation signal coding and measurement optimisation techniques for range imaging cameras.

Adrian A. Dorrington received the Ph.D. degree from the University of Waikato, Hamilton, New Zealand, in 2001. He has held postdoctoral fellowships from the National Research Council, NASA Langley Research Center, Hampton, VA, and from the Foundation for Research, Science and Technology, University of Waikato.

He is currently a Senior Lecturer with the School of Engineering, University of Waikato. His research interests include optoelectronics and optical measurement technologies, in particular, 3-D time-of-flight range imaging techniques.

Dale A. Carnegie (M'91–SM'02) received the M.Sc. degree (with first-class honors) in physics and electronics and the Ph.D. degree in computer science from the University of Waikato, Hamilton, New Zealand.

He is currently a Professor of electronic and computer systems engineering with the School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand, where he heads the Mechatronics Research Group. His research interests include autonomous mobile robotics, sensors, embedded controllers, and applied artificial intelligence.