
Robust Feature Matching for Loop Closing and Localization

Jungho Kim and In-So Kweon

Abstract— Recently, many vision-based SLAM methods have achieved good results using visual features. However, most algorithms suffer from the accumulated error that inevitably occurs. In this paper, we propose a robust loop-detection method that matches image features between the incoming image and the key-frame images saved during SLAM. Loop detection is the task of deciding whether a robot has returned to a previously visited area. Because the camera is unlikely to have the same pose when the robot revisits a place it previously encountered, it is crucial to match features across different views of the scene. In contrast with view-invariant features, corner points are hard to match in this situation because of the large variation of the neighboring pixels. We therefore present a corner matching method that is robust to view changes. Experimental results demonstrate loop closing and mobile-robot localization under different views using the proposed method.

I. INTRODUCTION

SLAM, a prerequisite for autonomous robot navigation, is the problem of determining a robot pose and building a geometric map simultaneously from sensor inputs in unknown environments. The visual SLAM research community has made good progress in the past years. However, one of the challenging issues in SLAM is coping with accumulated error, and many approaches have been introduced to alleviate this problem[1][2]. One possible solution is to detect the loop, that is, to decide whether the robot has returned to a previously visited place. Vision sensors are very effective for this task in that they provide much more information for evaluating the similarity between two views than range-scanning sensors such as LRF and sonar sensors. However, the camera is unlikely to have the same pose when the robot revisits a place as it did when it first encountered it. Therefore, many visual SLAM approaches employ view-invariant features such as SIFT[3][4] and GRIF[5]. In [1], maximally stable extremal regions (MSERs) are detected and encoded by SIFT descriptors for loop closing[3]. Recently, many vision-based SFM (structure from motion) and SLAM approaches use Harris corners for several reasons, for example, very fast computation, relatively accurate feature localization, and good repeatability. Moreover, for small inter-frame displacement between consecutive images, a window can be tracked well using block-matching methods such as the sum of squared differences (SSD), normalized cross-correlation (NCC), or the KLT feature tracker[6].

Jungho Kim is with the Department of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, Korea, [email protected]

In-So Kweon is with the Faculty of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, Korea, [email protected]

Many vision-based SLAM approaches[7][8][9][16] using Harris corners have recently performed successfully, except for the loop-closing problem. For these approaches to recognize a previously visited area, a robust corner matching algorithm is required. In this paper, we propose a new corner matching method that can be used for loop closing and robot localization under large view changes. We first present a visual SLAM approach based on "visual odometry"[9] using only a stereo camera. In Section III, we introduce a robust corner matching method between different views of the scene. Section IV describes the method for loop closing after recognizing a previously encountered area. Section V presents experimental results of feature matching and visual SLAM to show the feasibility of the proposed method.

II. VISUAL SLAM APPROACH

A. Feature Extraction

In each frame, we detect Harris corners[10] in the left image. Harris corners have been found to give detections that are relatively stable under small to moderate image distortion[11] and are identified as the intersection of two strong edges. The computational cost of extracting corners from a 320×240 image is below 0.02 sec.
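A minimal sketch of this per-frame corner extraction, assuming OpenCV is available and using its Harris-based goodFeaturesToTrack; the image file name and parameter values are illustrative placeholders, not values taken from our system.

```python
import cv2

# Per-frame corner extraction on the left image; the file name and parameter
# values below are illustrative placeholders.
left_gray = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
corners = cv2.goodFeaturesToTrack(
    left_gray,
    maxCorners=400,           # upper bound on the number of returned corners
    qualityLevel=0.01,        # relative threshold on the corner response
    minDistance=5,            # minimum pixel spacing between corners
    useHarrisDetector=True,   # use the Harris response rather than min-eigenvalue
    k=0.04,                   # Harris detector free parameter
)
# corners: (N, 1, 2) float32 array of (x, y) corner locations, or None if empty.
```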

B. Stereo Matching

From stereo matching of the detected feature points, we can infer the 3-D structure and distance of a scene. If we assume that the stereo images are rectified, pairs of conjugate epipolar lines become collinear and parallel to one of the image axes, which makes the stereo matching problem much easier because it reduces to a 1-D search along a trivially identified scanline. We use a modified KLT feature tracker to find correspondences between the stereo images. Because the KLT tracker gives the sub-pixel locations of matched points, we can reconstruct more realistic structures, as shown in Fig. 1. To reduce the 2-D search of the original KLT feature tracker to a 1-D search along the horizontal line, we assume that a point x in the left image I moves to the point x − d_min − d_x in the right image J, and we linearize J(x − d_min − d_x) by a Taylor expansion as in (1):

I(x, y) = J(x - d_{min}, y) - g_x d_x \qquad (1)

Here, d_min is a minimum disparity threshold and g_x is the intensity gradient along the horizontal axis. d_x is the disparity that minimizes the dissimilarity defined by the SSD in (2):

\varepsilon = \iint_A \left[ h(x, y) - g_x d_x \right]^2 w \, dA \qquad (2)



Here, h(x, y) = J(x − d_min, y) − I(x, y). To find the disparity d_x, we set the derivative to zero:

\frac{\partial \varepsilon}{\partial d_x} = \iint_A \left[ -2\, h(x, y)\, g_x + 2\, g_x^2\, d_x \right] w \, dA = 0

Finally, d_x is computed by (3):

d_x = \frac{\iint_A h(x, y)\, g_x\, w \, dA}{\iint_A g_x^2\, w \, dA} \qquad (3)
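The following sketch illustrates the 1-D disparity update of (1)-(3) for a single corner, assuming rectified grayscale images stored as NumPy arrays and a uniform weight w = 1; iterating the update a few times is an assumption of this sketch, and the window size, iteration count, and convergence tolerance are illustrative.

```python
import numpy as np

def disparity_1d_klt(left, right, x, y, d_min=0.0, half_win=7, n_iter=5):
    """Sub-pixel disparity for a corner at integer (x, y) in the left image,
    using the 1-D linearized (KLT-style) update of Eqs. (1)-(3)."""
    h_img, w_img = left.shape
    if not (half_win <= y < h_img - half_win and half_win <= x < w_img - half_win):
        return None                                    # window would leave the image
    rows = slice(y - half_win, y + half_win + 1)
    win_l = left[rows, x - half_win:x + half_win + 1].astype(np.float64)
    d = float(d_min)
    for _ in range(n_iter):
        xr = int(round(x - d))                         # current estimate in the right image
        if xr - half_win - 1 < 0 or xr + half_win + 1 >= w_img:
            break
        win_r = right[rows, xr - half_win:xr + half_win + 1].astype(np.float64)
        # Horizontal gradient g_x of the right-image window (central differences).
        gx = 0.5 * (right[rows, xr - half_win + 1:xr + half_win + 2].astype(np.float64)
                    - right[rows, xr - half_win - 1:xr + half_win].astype(np.float64))
        h_err = win_r - win_l                          # h(x, y) = J(x - d, y) - I(x, y)
        denom = np.sum(gx * gx)                        # denominator of Eq. (3), w = 1
        if denom < 1e-9:
            break
        dx = np.sum(h_err * gx) / denom                # Eq. (3): the 1-D update
        d += dx                                        # accumulate disparity beyond d_min
        if abs(dx) < 0.01:                             # sub-pixel convergence
            break
    return d
```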

Fig. 1. Top: scene image. Bottom left: 3D reconstruction from NCC. Bottom right: 3D reconstruction from KLT.

Stereo matching using the modified KLT feature tracker gives us more correspondences even when the scene is very close to the camera, as shown in Fig. 2. It also provides many matched points for remote scenes.

Fig. 2. Stereo matching (top: conventional KLT, middle: proposed method, bottom: scene remote from the camera).

C. Motion Estimation and Map Generation

Among vision-based SLAM approaches, Nister[9][12] introduced a method called "visual odometry". This method estimates the movement of a stereo head or a single camera in real time from vision data alone. In the 3-point algorithm, the images of three known world points constrain the camera pose to up to four solutions, and many more correspondences than three points are needed to obtain a single solution automatically. Fig. 3 describes the visual SLAM process based on the 3-point algorithm[12]. The overall process is as follows (a sketch of this loop is given after the list).

• Match feature points between the left and right images of the stereo pair and triangulate the matches into 3D points using the camera calibration. These 3D points are saved in the local map as landmarks.

• Track features between the incoming left image and the landmarks in the local map using the KLT feature tracker, and estimate the robot pose using the 3-point algorithm followed by RANSAC[13] as long as the following criteria are satisfied:

– The number of inliers after RANSAC is more than a pre-defined number.

– The variance of the tracked features in the incoming image is above a variance threshold, to ensure a large field of view.

• If these criteria are not satisfied, save the current left image in the database as a key-frame image that can be used for loop detection and localization. All the landmarks in the local map are then saved in the global map, and we generate a new local map from step 1.
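The sketch below outlines the key-frame logic of the steps above. The helpers stereo_match, triangulate, track_klt, and ransac_3point_pose are hypothetical placeholders standing in for the components described in this section, and the thresholds are illustrative rather than the values used in our system.

```python
import numpy as np

def run_visual_slam(stereo_frames, min_inliers=30, min_std_px=40.0):
    """Skeleton of the per-frame loop and key-frame decision described above.
    stereo_match, triangulate, track_klt, and ransac_3point_pose are hypothetical
    placeholders for the components of Section II; thresholds are illustrative."""
    global_map, key_frames, poses = [], [], []
    local_map, prev_left = None, None
    for left, right in stereo_frames:
        if local_map is None:
            # Step 1: triangulate stereo matches into 3-D landmarks (new local map).
            local_map = triangulate(stereo_match(left, right))
            prev_left = left
            continue
        # Step 2: track the landmarks' image points into the incoming left image
        # and estimate the pose with the 3-point algorithm inside RANSAC.
        pts2d = track_klt(prev_left, left, local_map)        # (N, 2) tracked points
        pose, inliers = ransac_3point_pose(local_map, pts2d)
        poses.append(pose)
        spread = np.std(np.asarray(pts2d)[inliers], axis=0).mean()
        if len(inliers) < min_inliers or spread < min_std_px:
            # Step 3: criteria violated -> save the current left image as a key frame,
            # move the local map into the global map, and restart from step 1.
            key_frames.append(left)
            global_map.extend(local_map)
            local_map = None
        prev_left = left
    return global_map, key_frames, poses
```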

Fig. 3. Visual SLAM approach


D. Localization

Given the landmarks in the map, the robot pose is determined by the 3-point algorithm after finding correspondences between the incoming image and the key-frame images with the method proposed in Section III. These correspondences in the incoming image are then tracked by the KLT tracker in subsequent images for localization until the overlapping region disappears.

III. LOOP DETECTION

When a robot returns to a previously visited area, the camera mounted on the robot usually does not have the same pose as it did for the key-frame images. Scale-invariant features such as SIFT perform well for finding correspondences under viewpoint changes. However, the computational cost of SIFT is too high for real-time use compared with Harris corners, and corner-based SLAM approaches already use corner points as landmarks. SIFT detects local maxima and minima of the DoG (difference of Gaussians) by comparing each sample point to its neighbors in the image and in the adjacent scales, and assigns a consistent orientation to each detected point based on local image properties for its keypoint descriptor. In the proposed method, we initially detect corner points in the image and match those points by NCC (normalized cross-correlation). As Barnard and Fischler[14] point out, "A problem with correlation matching is that the patch (window) size must be large enough to include enough intensity variation for matching but small enough to avoid the effects of projective distortion"[15]. Because corner points guarantee enough intensity variation within the neighboring pixels, we make the NCC window as small as possible to limit the effect of perspective distortion, and a low correlation threshold is used for the initial matching followed by the WTA (winner-take-all) method. Among the points initially matched by NCC, we randomly select 3 points, which generate one circle region whose boundary passes through the 3 points. As a circle-region descriptor, we use a 128-dimensional SIFT descriptor to encode the circle region. The SIFT descriptor is generated from the image gradient magnitudes and orientations within the circle region, and the circle center is used as the keypoint location. A SIFT descriptor is invariant to scale, affine, and partial illumination changes. The circle region defined by 3 points is computed by circle fitting using a simple vector product. Given 3 points (x1, y1), (x2, y2), (x3, y3), the circle can be computed by solving the following determinant equation (4):

\begin{vmatrix} x^2 + y^2 & x & y & 1 \\ x_1^2 + y_1^2 & x_1 & y_1 & 1 \\ x_2^2 + y_2^2 & x_2 & y_2 & 1 \\ x_3^2 + y_3^2 & x_3 & y_3 & 1 \end{vmatrix} = 0 \qquad (4)

From the determinant equation (4), the circle parameters are given by (5):

x_0 = 0.5 \times M_{12} / M_{11}
y_0 = -0.5 \times M_{13} / M_{11}
r = \sqrt{x_0^2 + y_0^2 + M_{14} / M_{11}} \qquad (5)

Here, (x_0, y_0) is the center of the circle, r is its radius, and M_{ij} is the determinant of order 3 obtained by deleting the i-th row and the j-th column of the matrix in (4). To reduce the computational cost of assigning circle descriptors, false matches are first rejected by simple geometric constraints.
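A compact implementation of the circle fit in (4)-(5), assuming NumPy; the minors M_{1j} are evaluated directly as 3×3 determinants.

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Circle through three 2-D points via the minors of Eq. (4), returning
    ((x0, y0), r) as in Eq. (5), or None for (near-)collinear points."""
    pts = np.array([p1, p2, p3], dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    sq = x ** 2 + y ** 2
    ones = np.ones(3)
    # M_1j: 3x3 minors of the determinant in Eq. (4) (delete row 1, column j).
    M11 = np.linalg.det(np.column_stack([x, y, ones]))
    M12 = np.linalg.det(np.column_stack([sq, y, ones]))
    M13 = np.linalg.det(np.column_stack([sq, x, ones]))
    M14 = np.linalg.det(np.column_stack([sq, x, y]))
    if abs(M11) < 1e-9:
        return None                                    # the three points are collinear
    x0 = 0.5 * M12 / M11
    y0 = -0.5 * M13 / M11
    r = np.sqrt(x0 ** 2 + y0 ** 2 + M14 / M11)         # Eq. (5)
    return (x0, y0), r
```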

Fig. 4. Geometric constraint

Before assigning descriptors to the circle regions, we initially use rough geometric angle constraints between the generated circle regions. If α, β, γ are the angles between the feature points x_1, x_2, x_3 as seen from the center of circle region 1 (c), and α′, β′, γ′ are the corresponding angles as seen from the center of circle region 2 (c′), then the corresponding pairs x_1, x_2, x_3 and x′_1, x′_2, x′_3 must satisfy the criteria in (6):

|\alpha - \alpha'| < \theta_{threshold}
|\beta - \beta'| < \theta_{threshold}
|\gamma - \gamma'| < \theta_{threshold} \qquad (6)

Here, θ_threshold is a pre-defined angular threshold.
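A sketch of the angle test in (6), under the assumption that α, β, γ are the central angles subtended at the circle center by the point pairs (x_1, x_2), (x_2, x_3), (x_3, x_1); the default threshold value is illustrative.

```python
import numpy as np

def central_angles(center, p1, p2, p3):
    """Angles subtended at the circle center by the pairs (p1,p2), (p2,p3), (p3,p1)."""
    c = np.asarray(center, dtype=float)
    v = [np.asarray(p, dtype=float) - c for p in (p1, p2, p3)]
    angles = []
    for i, j in ((0, 1), (1, 2), (2, 0)):
        cos_ij = np.dot(v[i], v[j]) / (np.linalg.norm(v[i]) * np.linalg.norm(v[j]) + 1e-12)
        angles.append(np.arccos(np.clip(cos_ij, -1.0, 1.0)))
    return np.array(angles)

def passes_angle_constraint(region1, region2, theta_threshold=np.deg2rad(10.0)):
    """Eq. (6): keep a candidate pair only if corresponding angles agree within
    theta_threshold. region = (center, x1, x2, x3) with points ordered consistently."""
    return bool(np.all(np.abs(central_angles(*region1) - central_angles(*region2))
                       < theta_threshold))
```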

This simple constraint is useful for reducing the computational load because it avoids computing unnecessary descriptors. Moreover, the radius of the circle computed from the 3 points determines the scale of the region, so all the descriptors are generated on a lower-resolution image to reduce the processing time, and from one descriptor we can determine 3 corresponding points. If we find matched circle regions by computing the Euclidean distance between two descriptors, namely the 128-dimensional feature vectors, then the corner points within those regions are matched by the KLT feature tracker after generating image patches, as shown in Fig. 5, and these matched points in the patch are removed from the candidates for generating further circle regions. To track features within the corresponding regions, we can predict the possible corresponding locations using the scale and the center of a circle by (7):

x_2 = (x_1 - c_{x1}) \times s_2 / s_1 + c_{x2}
y_2 = (y_1 - c_{y1}) \times s_2 / s_1 + c_{y2} \qquad (7)

Here, s_1 and s_2 are the radii of the two circle regions, and (c_x, c_y) represents the center of a circle. Using the predicted points (x_2, y_2) as initial positions, we can find corresponding points that minimize the dissimilarity using the KLT tracker. We can find many corresponding points in the matched circle region,


whereas the original KLT tracker cannot give any correspondences between the whole images, as shown in Fig. 5. If no matched regions are found within a limited number of trials, we terminate the matching process.
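A sketch of the prediction step in (7), assuming each circle region is given by its center and radius; the returned points serve as the initial positions for the KLT refinement.

```python
import numpy as np

def predict_in_matched_region(points, circle1, circle2):
    """Eq. (7): map corner points from circle region 1 into circle region 2 using
    the radius ratio as the scale; circle = ((cx, cy), radius), points is (N, 2)."""
    (cx1, cy1), s1 = circle1
    (cx2, cy2), s2 = circle2
    pts = np.asarray(points, dtype=float)
    scale = s2 / s1
    pred = np.empty_like(pts)
    pred[:, 0] = (pts[:, 0] - cx1) * scale + cx2    # x_2 in Eq. (7)
    pred[:, 1] = (pts[:, 1] - cy1) * scale + cy2    # y_2 in Eq. (7)
    return pred                                      # initial positions for the KLT search
```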

Fig. 5. Top: matching result from the original KLT. Middle: regions matched by randomly selecting 3 points. Bottom: KLT tracking of matched image patches.

IV. LOOP CLOSING

During SLAM, all the 3D landmarks in the map are projected into the current image plane according to the estimated robot pose by (8). Because every landmark carries the index of its key frame, we find the matched key-frame image by letting the projected points that fall inside the current image boundary vote for their key frames. The matched key-frame image is computed by (9).

x = K [R_c \; t_c] X \qquad (8)

I_{match} = \arg\max_{I_j} \; N_j(R_c, t_c, X) \qquad (9)

Here, X denotes the 3D locations of the landmarks in the global map, R_c and t_c are the current robot pose with respect to the global frame, and the 3 × 3 calibration matrix K holds the camera parameters. N_j(R_c, t_c, X) is the number of landmarks belonging to the j-th key-frame image whose projections are located within the current image plane and which lie in front of the camera. When the robot returns to a previously encountered area, we can estimate the current robot pose (R_b, t_b) from the revisited landmarks in the map using the 3-point algorithm followed by RANSAC. This estimate of the current robot pose can conflict with the estimate (R_f, t_f) obtained from SLAM. For loop closing in SLAM, we adopt a multi-view registration approach that distributes the pairwise error accumulated over multiple views[17].
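A sketch of the projection-and-voting step in (8)-(9), assuming the landmarks, their key-frame indices, and the current pose are given as NumPy arrays, and that (R_c, t_c) map world points into the camera frame.

```python
import numpy as np

def vote_for_key_frame(landmarks, key_frame_ids, K, R_c, t_c, img_w, img_h):
    """Eqs. (8)-(9): project every global-map landmark with the current pose and
    return the key-frame index that collects the most in-image votes.
    landmarks: (N, 3) world points X; key_frame_ids: (N,) key-frame index per landmark."""
    X = np.asarray(landmarks, dtype=float)
    ids = np.asarray(key_frame_ids)
    Xc = (R_c @ X.T + t_c.reshape(3, 1)).T           # camera-frame coordinates [R_c | t_c] X
    in_front = Xc[:, 2] > 0                          # keep landmarks in front of the camera
    Xc, ids = Xc[in_front], ids[in_front]
    x = (K @ Xc.T).T                                 # Eq. (8): x = K [R_c  t_c] X
    u, v = x[:, 0] / x[:, 2], x[:, 1] / x[:, 2]      # perspective division to pixels
    in_image = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    ids = ids[in_image]
    if ids.size == 0:
        return None                                  # no key frame received any votes
    counts = np.bincount(ids)                        # N_j(R_c, t_c, X) for each key frame j
    return int(np.argmax(counts))                    # Eq. (9): the matched key-frame index
```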

TABLE I
PERFORMANCE COMPARISON FOR 20 IMAGES

                                             SIFT        Proposed
Accuracy: number of inliers / matches      401/416       425/438
Accuracy (%)                                 96.39         97.03
Mean processing time (sec)                  0.5228        0.1170

In the ring topology shown in Fig. 6, we have estimated the key-frame poses from v_1 to v_n and their relative movements from R_{1,2}, t_{1,2} to R_{n−1,n}, t_{n−1,n}. A cycle is consistent if its associated rotations compose to the identity as in (10):

Fig. 6. Ring Topology for Loop Closing

R_{1,2} R_{2,3} R_{3,4} R_{4,5} \cdots R_{n,1} = I \qquad (10)

The total error around the cycle is computed from the two estimates R_f and R_b, and we portion this error out equally among all the relative rotations. After adjusting the rotations of all the key frames, we also portion the total translational error out equally among all the relative translations, on top of the adjusted relative rotations, so as to satisfy (11):

t_{1,2} + t_{2,3} + t_{3,4} + t_{4,5} + \cdots + t_{n,1} = 0 \qquad (11)

According to the recomputed key-frame poses, we finally reconstruct the global map.
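The following sketch illustrates one way to portion out the cycle error of (10)-(11), assuming SciPy's Rotation class for the axis-angle conversion; since rotations do not commute, spreading the error this way closes the cycle only approximately, which is adequate when the closure error is small.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def distribute_cycle_error(relative_R, relative_t):
    """Spread the loop-closure error of Eqs. (10)-(11) over the cycle.
    relative_R: list of 3x3 relative rotations R_{i,i+1} around the ring;
    relative_t: list of 3-vectors t_{i,i+1}, assumed expressed in a common frame."""
    n = len(relative_R)

    # Composed rotation around the cycle; Eq. (10) requires it to be the identity.
    R_cycle = np.eye(3)
    for R in relative_R:
        R_cycle = R_cycle @ R

    # Correction per edge: one n-th of the inverse cycle error, in axis-angle form.
    err_rotvec = Rotation.from_matrix(R_cycle.T).as_rotvec()
    R_step = Rotation.from_rotvec(err_rotvec / n).as_matrix()
    corrected_R = [R @ R_step for R in relative_R]

    # Eq. (11): the relative translations around the cycle should sum to zero;
    # portion the residual out equally over the n edges.
    t = np.asarray(relative_t, dtype=float)
    corrected_t = t - t.sum(axis=0) / n
    return corrected_R, corrected_t
```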

V. EXPERIMENTAL RESULTS

Table I shows the performance comparison between SIFT and the proposed method for 20 images. The matching accuracy was measured manually; in our experiments we use a laptop (2.0 GHz) with 1 GB of RAM. The proposed method achieves matching accuracy similar to SIFT, but its computational cost is much lower. Moreover, the proposed method can find matched points between the incoming image and the corner-based landmarks in the map. This simplifies the loop-closing and mobile-robot localization problems for corner-based SLAM approaches. Figs. 12 and 13 show some matching results from SIFT and the proposed method for various scenes: an office, a corridor, and a lobby in the building. The images were taken under viewpoint changes.


For these images, computing fewer than 20 descriptors per image is sufficient to obtain corresponding points, and the KLT tracker then provides more corresponding points within the corresponding regions. Fig. 7 shows the matching result when the robot returns to a previously visited place.

Fig. 7. Matching result after a robot returns to the previously encounteredplace

The left image is the key-frame image saved in the database during SLAM, and the right image is the incoming image after the robot revisits a similar scene. This result shows that we can robustly find correspondences despite scale and affine changes. Fig. 8 shows the effect of the accumulated pose error when a robot circulates the loop more than twice. It is clear that when a robot moves for a long

Fig. 8. Without loop closing (left: 2 loop circulations, right: 3 loop circulations).

distance, the accumulated pose error causes serious problems in SLAM. Fig. 9 shows the experimental environment; we use an off-the-shelf Bumblebee stereo camera as the vision sensor mounted on a mobile robot. Fig. 10 shows the

Fig. 9. Experimental Environment

inconsistent SLAM result obtained without the loop-closing and localization approaches, after the robot circulated 5 times around an office environment and then moved to another office as shown in Fig. 9, a path of approximately 50 m.

However, using loop closing and localization with the proposed method, we obtain a map that is much more consistent than the one in Fig. 10, as shown in Fig. 11.

Fig. 10. SLAM result without loop closing (map plotted in X (mm) versus Y (mm); 5 loop circulations).

Fig. 11. SLAM result using loop closing and localization (map plotted in X (mm) versus Y (mm); 5 loop circulations).

VI. CONCLUSIONS

In this paper, we have proposed a method for robust corner matching under large view changes. Instead of a point-based correspondence search, we generate circle regions from 3 initially matched points and use a SIFT descriptor to decide whether those regions match. From the matched regions, we can find many correspondences using the KLT feature tracker. This method is useful because its computational cost is relatively low, it produces many matched points, and it can be used with existing corner-based SLAM approaches. We use the proposed method to solve the loop-closing and localization problems and demonstrate its feasibility through various experiments.

VII. ACKNOWLEDGMENTS

This work was supported in part by MIC & IITA through the IT Leading R&D Support Project.

REFERENCES

[1] Paul Newman and Kin Ho, "SLAM-Loop Closing with Visually Salient Features", IEEE International Conference on Robotics and Automation, 2006.


Fig. 12. Feature Matching Results (SIFT)

[2] Michael Kaess and Frank Dellaert, "A Markov Chain Monte Carlo Approach to Closing the Loop in SLAM", IEEE International Conference on Robotics and Automation, 2005.

[3] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, 60(2), pp. 91-110, 2004.

[4] Stephen Se, David Lowe and Jim Little, "Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Landmarks", International Journal of Robotics Research, volume 14, number 3, pages 157-165, July 2003.

[5] Sungho Kim and In-So Kweon, "Biologically Motivated Perceptual Feature: Generalized Robust Invariant Feature", LNCS 3853, pp. 305-314 (ACCV 2006), 2006.

[6] Jianbo Shi and Carlo Tomasi, "Good Features to Track", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 593-600, 1994.

[7] Andrew J. Davison, Walterio Mayol and David W. Murray, "Real-Time Localisation and Mapping with a Single Camera", International Conference on Computer Vision (ICCV), 2003.

[8] E. Eade and T. Drummond, "Scalable Monocular SLAM", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2006.

[9] David Nister, Oleg Naroditsky and James Bergen, "Visual Odometry", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2004.

Fig. 13. Feature Matching Results (Proposed)

[10] C. Harris and M. J. Stephens, "A Combined Corner and Edge Detector", in Alvey Vision Conference, pp. 147-152, 1988.

[11] C. Schmid, R. Mohr and C. Bauckhage, "Evaluation of Interest Point Detectors", International Journal of Computer Vision, 37(2), pp. 151-172, 2000.

[12] David Nister, "A Minimal Solution to the Generalised 3-Point Pose Problem", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.

[13] M. Fischler and R. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography", Communications of the ACM, 24:381-395, 1981.

[14] S. T. Barnard and M. A. Fischler, "Stereo vision", in Encyclopedia of Artificial Intelligence, New York: John Wiley, 1987, pp. 1083-1090.

[15] T. Kanade and M. Okutomi, "A Stereo Matching Algorithm with an Adaptive Window: Theory and Experiment", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 16, no. 9, September 1994.

[16] E. Mouragnon, M. Lhuillier, M. Dhome, F. Dekeyser, P. Sayd, "Real Time Localization and 3D Reconstruction", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2006.

[17] G. Sharp, S. W. Lee and D. Wehe, "Multiview Registration of 3D Scenes by Minimizing Error Between Coordinate Frames", Lecture Notes in Computer Science (Proc. European Conference on Computer Vision (ECCV 02)), vol. LNCS-2351, pp. 587, May 2002.
