
Paracatadioptric visual servoing from lines

H. Hadj Abdelkader, Y. Mezouar, P. Martinet

LASMEA

24 avenue des Landais, 63177 AUBIERE - FRANCE

hadj,mezouar,[email protected]

Abstract

In this paper, we present a new approach to control the 6 degrees of freedom of a manipulator using the projection of 3D lines extracted from the image plane of an omnidirectional camera. The main motivation of this work is to overcome the visibility problem that arises with classical vision sensors. Indeed, one of the principal deficiencies of classical visual servoing strategies is that some parts of the visual features may leave the field of view during servoing, leading to failure. Panoramic cameras overcome this problem thanks to their wider field of view. The present paper is concerned with the use of lines in paracatadioptric visual servoing. Indeed, when dealing with real environments (indoor or outdoor) or industrial workpieces, the extraction and tracking of lines is natural. We derive the panoramic image Jacobian for lines in the case of image-based visual servoing. The corresponding control law is then designed and validated.

Keywords

Visual Servoing, Omnidirectional camera, lines

1 Introduction

In recent years, the use of visual observations to control the motion of robots has been extensively studied (an approach referred to in the literature as visual servoing). The advent of fast and inexpensive digital imaging technology has allowed camera systems to be integrated as part of a closed-loop feedback control system. Computer vision can provide the robotic system with a powerful way of sensing the environment and can potentially reduce or eliminate the need for environmental modeling, which is extremely important when the robotic tasks require the robot to move in unknown and/or dynamic environments.

Conventional cameras suffer from a restricted field of view. Many applications in vision-based robotics, such as mobile robot localisation [5] and navigation [23], can benefit from the panoramic field of view provided by omnidirectional cameras. In the literature, several methods have been proposed for increasing the field of view of camera systems. One effective way is to combine mirrors with a conventional imaging system. The obtained sensors are referred to as catadioptric imaging systems. The resulting imaging systems are termed central catadioptric when a single projection center describes the world-image mapping. From a practical point of view, a single center of projection is a desirable property for an imaging system [2]. Baker and Nayar [2] derive the entire class of catadioptric systems with a single viewpoint.

The visual servoing framework is an effective way to control robot motion from camera observations [12]. Control of motion with omnidirectional cameras appears in the literature for mobile robot formation control [20], [22]. Visual servoing schemes are generally classified in three groups, namely position-based, image-based and hybrid control [11, 12, 15]. Classical visual servoing techniques make assumptions on the link between the initial, current and desired images. They require correspondences between the visual features extracted from the initial image and those obtained from the desired one. These features are then tracked during the camera (and/or object) motion. If these steps fail, the vision-based robotic task cannot be achieved [6]. Typical cases of failure arise when matching joint image features is impossible (for example when no joint feature belongs to both the initial and desired images) or when some parts of the visual features leave the field of view during servoing. Some methods have been proposed to address this deficiency, based on path planning [18], switching control [7], zoom adjustment [13], or geometrical and topological considerations [10]. However, such strategies are sometimes difficult to adapt to a generic setup.

Clearly, visual servoing applications can also benefit from cameras with a wide field of view.


The image Jacobian (also termed interaction matrix) plays a central role in the design of vision-based control laws. It links the variations of the image observations to the camera velocity. The analytical form of the interaction matrix is available for some image features (points, circles, lines, ...) in the case of conventional cameras [11]. As explained, an omnidirectional view can be a powerful way to overcome the problem of target visibility in visual servoing. Barreto et al. [3] studied the central catadioptric Jacobian matrix for a set of image points. This paper is mainly concerned with the use of projected lines extracted from central catadioptric images as input of a visual servoing scheme. When dealing with real environments (indoor or outdoor) or industrial workpieces, line features are a natural choice. Nevertheless, most of the effort in visual servoing has been devoted to points [12]; only a few works have investigated the use of lines in visual servoing with traditional cameras (see for example [1], [14]), and none has explored the case of omnidirectional cameras. This paper is concerned with this last issue: we derive a form of the central paracatadioptric image Jacobian for lines which can be exploited to design a control law for the positioning task of a six degrees of freedom manipulator.

The remainder of this paper is organized as follows. In Section 2, following the description of the central catadioptric camera model, the projection of lines in the image plane is studied. This is done for the paracatadioptric camera. In Section 3, we present the control law and derive a form of the image Jacobian for paracatadioptric projected lines. In Section 4, simulation results are presented.

2 Central paracatadioptric image formation of lines

In this section, we describe the projection model for central paracatadioptric cameras and then we focus on 3D line features.

2.1 Paracatadioptric camera model

As noted previously, a single center of projection is a desirable property for an imaging system. A single center implies that all lines passing through a 3D point and its projection in the image plane pass through a single point in 3D space. Conventional perspective cameras are single viewpoint sensors. As shown in [2], a central catadioptric system can be built by combining a hyperbolic, elliptical or planar mirror with a perspective camera, or a parabolic mirror with an orthographic camera.

In this paper, we will thus study only the last configuration (parabolic mirror with an orthographic camera). The camera model expresses the mapping between 3D space and the 2D (image) space. The orthographic camera model is given by:

$$\mathbf{p} = \alpha \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}_{1\times3} & 1 \end{pmatrix} \begin{pmatrix} \mathbf{P}_w \\ 1 \end{pmatrix}, \qquad \mathbf{u} = \mathbf{K}\mathbf{p} \quad (1)$$

where $\mathbf{P}_w = [X\ Y\ Z]^T$ contains the coordinates of a point $P$ with respect to $\mathcal{F}_w$ (world frame), $\mathbf{R}$ and $\mathbf{t}$ are the rotation matrix and translation vector between $\mathcal{F}_w$ and $\mathcal{F}_c$ (camera frame), $\mathbf{K}$ denotes the triangular calibration matrix, $\mathbf{p} = [x\ y\ 1]^T$ are the metric homogeneous coordinates of the image point, $\mathbf{u} = [u\ v\ 1]^T$ is $\mathbf{p}$ expressed in pixels, and $\alpha$ is a non-zero scale factor.

According to the projection model [3], $\mathbf{P}_w$ is projected in the image plane of the paracatadioptric sensor to a point $\mathbf{u}$ with:

$$\mathbf{u} = \mathbf{K}\mathbf{M}\mathbf{f}(\mathbf{X}) \quad (2)$$

where:

$$\mathbf{M} = \begin{pmatrix} 2p & 0 & 0 \\ 0 & 2p & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad \mathbf{f}(\mathbf{X}) = \begin{pmatrix} \dfrac{X}{Z + \sqrt{X^2 + Y^2 + Z^2}} \\[2mm] \dfrac{Y}{Z + \sqrt{X^2 + Y^2 + Z^2}} \\[2mm] 1 \end{pmatrix} \quad (3)$$

In the sequel, we will assume without loss of generality that the $\mathbf{K}$ matrix is the identity; the mapping function describing the central catadioptric projection is then given by:

$$\mathbf{u} = \mathbf{M}\mathbf{f}(\mathbf{X}) \quad (4)$$
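For concreteness, the projection model (2)-(4) can be sketched numerically as follows. This is a minimal illustration only: the function name and the mirror parameter value are ours, and we assume $\mathbf{K} = \mathbf{I}$ as above.

```python
import numpy as np

def paracatadioptric_project(X, p=1.0):
    """Project a 3D point X = (X, Y, Z) onto the paracatadioptric
    image plane via equations (2)-(4), assuming K = I.
    p is the parabola parameter (hypothetical value)."""
    X = np.asarray(X, dtype=float)
    rho = np.linalg.norm(X)                # sqrt(X^2 + Y^2 + Z^2)
    f = np.array([X[0] / (X[2] + rho),     # f(X), equation (3)
                  X[1] / (X[2] + rho),
                  1.0])
    M = np.diag([2.0 * p, 2.0 * p, 1.0])   # mirror matrix M, equation (2)
    return M @ f                           # u = M f(X), equation (4)
```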

2.2 Projection of lines

In order to model line projections in the image of a central paracatadioptric imaging system, we use the Plücker coordinates of lines. Let $(\mathcal{L})$ be a 3D line; its Plücker coordinates are defined as:

$$(\mathcal{L}) : \begin{pmatrix} \mathbf{n} \\ \mathbf{u} \end{pmatrix} \quad (5)$$

where $\mathbf{u}$ is a unit vector lying along the line $(\mathcal{L})$, and $\mathbf{n}$ is defined by:

$$\mathbf{n} = \mathbf{P} \times \mathbf{u} \quad (6)$$

where $\mathbf{P}$ is a point on $(\mathcal{L})$. The interpretation plane contains the line in space and the central point of the paracatadioptric system; its equation is given by:

$$(\Pi) : n_x x + n_y y + n_z z = 0 \quad (7)$$
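A minimal numerical check of (6) and (7), with a hypothetical point and direction of our choosing, could read:

```python
import numpy as np

# Sketch of equations (6)-(7): the interpretation-plane normal is
# n = P x u, and every point of the line satisfies the plane equation.
P = np.array([1.0, 0.0, 2.0])   # hypothetical point on the line
u = np.array([0.0, 1.0, 0.0])   # unit direction vector of the line
n = np.cross(P, u)              # equation (6)

Q = P + 3.0 * u                 # another point on the line
assert abs(n @ Q) < 1e-12       # equation (7): n_x x + n_y y + n_z z = 0
```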


FIG. 1 – Paracatadioptric line images

The orthographic camera is placed so that the rays coming from the surface of the parabolic mirror project orthographically onto the CCD plane of the camera. For this purpose, the $Z$ axis of the camera is aligned with the symmetry axis $Z$ of the mirror (see Figure 1). The equation of the corresponding parabolic mirror is given by:

$$z = \frac{x^2 + y^2}{2 a_p} - \frac{a_p}{2}, \qquad a_p = 2p \quad (8)$$

where $p$ is the parameter of the parabola. Let $(S_p)$ be the intersection between the interpretation plane $(\Pi)$ and the mirror surface; $(S_p)$ defines the line projection on the mirror surface. The points $\mathbf{p} = [x\ y\ z]^T$ in $\mathcal{F}_m$ belonging to $(S_p)$ satisfy two constraints: on the one hand, they must lie on the mirror surface and thus verify equation (8); on the other hand, they lie in the plane $(\Pi)$ and thus satisfy equation (7). By combining (7) and (8), we prove that:

$$\mathbf{p}^T \mathbf{A}_p \mathbf{p} = 0 \quad (9)$$

where $\mathbf{p} = [x\ y\ 1]^T$ and

$$\mathbf{A}_p = \begin{pmatrix} n_z & 0 & a_p n_x \\ 0 & n_z & a_p n_y \\ a_p n_x & a_p n_y & -a_p^2 n_z \end{pmatrix} \quad (10)$$

The quadratic form (9) is a circle which depends on both the mirror parameter and the interpretation plane $(\Pi)$. Let $\mathbf{R}_c$ be the rotation matrix between the camera frame and the world frame; without loss of generality, we assume $\mathbf{R}_c = \mathbf{I}$. Using the intrinsic parameter matrix, equation (9) can be written in pixel coordinates as:

$$\mathbf{u}^T \mathbf{K}^{-T} \mathbf{A}_p \mathbf{K}^{-1} \mathbf{u} = 0 \quad (11)$$

The paracatadioptric image of a line in pixel coordinates is thus always a circle/ellipse (equation (11)). In the sequel, and without loss of generality, we assume $\mathbf{K} = \mathbf{I}_{3\times3}$, so the quadratic form (9), with the coefficients below, defines the mapping of lines onto the paracatadioptric image plane.

$$\begin{cases} A_{p0} = n_z \\ A_{p1} = n_z \\ A_{p2} = 0 \\ A_{p3} = a_p n_x \\ A_{p4} = a_p n_y \\ A_{p5} = -a_p^2 n_z \end{cases} \quad (12)$$

3 Control law

In order to control the movements of a robot from visual information, we define a task function as [21]:

$$\mathbf{e} = \mathbf{L}^+ (\mathbf{s} - \mathbf{s}^*) \quad (13)$$

where:
– $\mathbf{s}$ is composed of the current features extracted from the catadioptric image; we will define this vector in the following;
– $\mathbf{s}^*$ is the desired value of $\mathbf{s}$;
– $\mathbf{L}^+$ is the pseudo-inverse of a model of the interaction matrix $\mathbf{L}$. The interaction matrix $\mathbf{L}$ is the Jacobian defined as:

$$\mathbf{L} = \frac{\partial \mathbf{s}}{\partial \mathbf{r}} \quad (14)$$

It links the variations of the visual features to the camera velocity screw $\mathbf{T} = \dot{\mathbf{r}} = [\mathbf{V}, \mathbf{W}]^T$:

$$\dot{\mathbf{s}} = \mathbf{L}\mathbf{T} \quad (15)$$

If an exponential decrease of the task function $\mathbf{e}$ toward $0$ is desired, and if the scene is assumed to be static, the control law is given by:

$$\mathbf{T} = -\lambda \mathbf{L}^+ (\mathbf{s} - \mathbf{s}^*) \quad (16)$$
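A minimal sketch of one iteration of (16), with a hypothetical gain value and function name of our choosing, could read:

```python
import numpy as np

def control_law(L, s, s_star, lam=0.5):
    """One iteration of the control law (16): T = -lambda L^+ (s - s*).
    L is the (stacked) interaction matrix, s and s_star the current and
    desired feature vectors; the gain lam is a hypothetical value."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```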

The Jacobian matrix $\mathbf{L}$ thus plays a central role in the synthesis of the control law. In the following, after choosing the vector of visual features $\mathbf{s}$, we give the analytical form of the interaction matrix for the parabolic mirror and orthographic camera combination. The coefficients $A_{pi}$ of the quadratic form (refer to (12)) are defined up to a non-zero constant factor. In order to use these coefficients in the control loop, it is thus necessary to eliminate this factor. To do so, each coefficient of the quadratic form can be divided by an adequate linear combination of them. In order to minimize the set of degenerate configurations (those for which the normalizing coefficient is zero), we choose $A_{p5}$ as the normalizing coefficient.


This degenerate situation occurs when $n_z = 0$ (i.e. when the interpretation plane contains the $Z$ axis of the mirror frame). In this case, the set of degenerate configurations for the interpretation plane reduces to the plane $n_z = 0$. The vector of visual features is thus:

$$\mathbf{s} = \begin{pmatrix} A_{p3}/A_{p5} \\ A_{p4}/A_{p5} \end{pmatrix} \quad (17)$$

The calculation of the interaction matrix associated with this feature vector is based on the following decomposition:

$$\mathbf{L} = \frac{\partial \mathbf{s}}{\partial \mathbf{r}} = \frac{\partial \mathbf{s}}{\partial \mathbf{n}} \frac{\partial \mathbf{n}}{\partial \mathbf{r}} \quad (18)$$

The first term $\frac{\partial \mathbf{s}}{\partial \mathbf{n}}$ represents the interaction between the motion of the visual features in the image and the motion of the normal vector. The second term $\frac{\partial \mathbf{n}}{\partial \mathbf{r}}$ links the motion of the normal vector to the camera motion. By noting that:

$$\begin{cases} \dot{\mathbf{u}} = -\mathbf{W} \times \mathbf{u} \\ \dot{\mathbf{n}} = -\mathbf{V} \times \mathbf{u} - \mathbf{W} \times \mathbf{n} \end{cases} \quad (19)$$

we obtain the interaction between the variations of the normal vector and the movements of the camera:

$$\frac{\partial \mathbf{n}}{\partial \mathbf{r}} = \begin{pmatrix} 0 & -u_z & u_y & 0 & -n_z & n_y \\ u_z & 0 & -u_x & n_z & 0 & -n_x \\ -u_y & u_x & 0 & -n_y & n_x & 0 \end{pmatrix} \quad (20)$$

The interaction between the motion of the visual features in the image and the variations of the normal vector to the interpretation plane is easily obtained by differentiating (17) with respect to the components of $\mathbf{n}$:

$$\frac{\partial \mathbf{s}}{\partial \mathbf{n}} = \frac{1}{a_p n_z} \begin{pmatrix} -1 & 0 & n_x/n_z \\ 0 & -1 & n_y/n_z \end{pmatrix} \quad (21)$$

The interaction matrix is then obtained by (18), and it can be written $\mathbf{L} = [\mathbf{A}\ \mathbf{B}]$ with:

$$\mathbf{A} = -\frac{1}{a_p n_z} \begin{pmatrix} -\dfrac{n_x u_y}{n_z} & u_z + \dfrac{n_x u_x}{n_z} & -u_y \\[2mm] -u_z - \dfrac{n_y u_y}{n_z} & \dfrac{n_y u_x}{n_z} & u_x \end{pmatrix}$$

and

$$\mathbf{B} = -\frac{1}{a_p n_z} \begin{pmatrix} -\dfrac{n_x n_y}{n_z} & n_z + \dfrac{n_x^2}{n_z} & -n_y \\[2mm] -n_z - \dfrac{n_y^2}{n_z} & \dfrac{n_x n_y}{n_z} & n_x \end{pmatrix}$$
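A minimal sketch assembling this $2 \times 6$ interaction matrix from the closed-form blocks $\mathbf{A}$ and $\mathbf{B}$ above (the function name is ours, and the degenerate case $n_z = 0$ is not handled):

```python
import numpy as np

def interaction_matrix(n, u, a_p):
    """2x6 interaction matrix L = [A B] of one projected line, typed in
    from the closed-form blocks A and B above; n is the interpretation-
    plane normal, u the line direction, a_p = 2p. Undefined for n_z = 0."""
    nx, ny, nz = n
    ux, uy, uz = u
    k = -1.0 / (a_p * nz)                      # common leading factor
    A = k * np.array([[-nx * uy / nz, uz + nx * ux / nz, -uy],
                      [-uz - ny * uy / nz, ny * ux / nz, ux]])
    B = k * np.array([[-nx * ny / nz, nz + nx**2 / nz, -ny],
                      [-nz - ny**2 / nz, nx * ny / nz, nx]])
    return np.hstack([A, B])                   # L = [A B]
```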

This Jacobian has rank 2; its kernel is spanned by the following vectors:

$$\left(1,\ -\frac{u_3 n_z^2 + u_3 n_x^2 + n_y n_z u_2}{n_y (u_3 n_x - u_1 n_z)},\ 0,\ 0,\ \frac{u_3 n_x u_1 + u_3 n_y u_2 + u_3^2 n_z}{n_y (u_3 n_x - u_1 n_z)},\ 0\right)$$

$$\left(0,\ \frac{n_x^3 + n_x n_y^2 + n_x n_z^2}{n_y (u_3 n_x - u_1 n_z)},\ 0,\ 0,\ -\frac{u_3 n_x n_z + n_y^2 u_1 + n_x^2 u_1}{n_y (u_3 n_x - u_1 n_z)},\ 1\right)$$

$$\left(0,\ \frac{n_x n_y u_2 + n_x^2 u_1 + u_1 n_z^2}{n_y (u_3 n_x - u_1 n_z)},\ 1,\ 0,\ -\frac{n_z u_3 u_1 + n_y u_1 u_2 + n_x u_1^2}{n_y (u_3 n_x - u_1 n_z)},\ 0\right)$$

$$\left(0,\ -\frac{n_z^3 + n_y^2 n_z + n_z n_x^2}{n_y (u_3 n_x - u_1 n_z)},\ 0,\ 1,\ \frac{u_3 n_z^2 + n_x u_1 n_z + u_3 n_y^2}{n_y (u_3 n_x - u_1 n_z)},\ 0\right)$$

One thus needs at least three lines to control the six degrees of freedom of the camera. If one considers a set of $N$ lines, the corresponding interaction matrix is:

$$\mathbf{L} = [\mathbf{L}_1^T \cdots \mathbf{L}_N^T]^T$$
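A minimal sketch of a complete servo step for a set of lines, stacking the per-line matrices and applying (16), could read as follows (it reuses the interaction_matrix sketch above; the function name and gain value are ours):

```python
import numpy as np

def servo_step(lines, s, s_star, a_p, lam=0.5):
    """Hypothetical servo step for N >= 3 lines: stack the per-line
    interaction matrices as L = [L_1^T ... L_N^T]^T, then apply the
    control law (16) to obtain the camera velocity screw T."""
    # lines: list of (n, u) pairs; s, s_star: stacked 2N feature vectors
    L = np.vstack([interaction_matrix(n, u, a_p) for (n, u) in lines])
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```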

4 Results

In this section, we present simulation results of paracatadioptric visual servoing from lines using the control law (16). In the simulations presented below, the initial pose of the camera frame with respect to the world frame is given by $\mathbf{r}_i = [0,\ 0,\ 1,\ 0,\ 0,\ 0]^T$ (the first three components of $\mathbf{r}_i$ are the translation in meters and the last three are the rotation in radians). The desired image is acquired at the camera pose given by $\mathbf{r}_d = [0.1,\ 0.1,\ 1.1,\ \pi/8,\ \pi/8,\ \pi/8]^T$. The three lines used during visual servoing are defined in the world frame by the following Plücker coordinates:

$$(\mathcal{L}_1) : \begin{cases} \mathbf{u}_1 = (0\ \ 1\ \ 0)^T \\ \mathbf{n}_1 = (0\ \ 0\ \ {-1})^T \end{cases}$$

$$(\mathcal{L}_2) : \begin{cases} \mathbf{u}_2 = (0\ \ 0.9806\ \ 0.1961)^T \\ \mathbf{n}_2 = (0\ \ {-0.1961}\ \ 0.9806)^T \end{cases}$$

$$(\mathcal{L}_3) : \begin{cases} \mathbf{u}_3 = (0.9623\ \ 0.1925\ \ 0.1925)^T \\ \mathbf{n}_3 = (0.1961\ \ 0\ \ {-0.9806})^T \end{cases}$$

Figure 2 shows the initial spatial configuration of the lines and the camera. Image noise has been introduced (additive noise with a maximum amplitude of 1 pixel). The images corresponding to the initial and desired camera positions are given in Figures 3(a) and 3(b). Figure 3(c) shows the trajectory of the conics in the image plane. Camera velocities are given in Figure 4. As can be noted in Figure 5, which shows the errors between the desired and current feature vectors, the positioning task is correctly carried out.

5 Conclusion

In this paper, we have presented a strategy for controlling six degree-of-freedom manipulator robots using paracatadioptric visual features from lines in the control loop. The interaction matrix was obtained for the parabolic mirror and orthographic camera combination, which allowed an adequate control law to be constructed. As a continuation of this work, we wish to study the case of nonholonomic mobile robots.


FIG. 2 – 3D scene

FIG. 3 – (a) Initial image, (b) desired image, (c) trajectory of the catadioptric line images

FIG. 4 – (a) Translational velocities [m/s] and (b) rotational velocities [rad/s]

FIG. 5 – (a) Errors for the first conic, (b) errors for the second conic, (c) errors for the third conic


References

[1] N. Andreff, B. Espiau, and R. Horaud. Visual servoing from lines. International Journal of Robotics Research, 21(8), August 2002.

[2] S. Baker and S. K. Nayar. A theory of single-viewpoint catadioptric image formation. International Journal of Computer Vision, 35(2):1–22, November 1999.

[3] J. P. Barreto, F. Martin, and R. Horaud. Visual servoing/tracking using central catadioptric images. In ISER 2002 - 8th International Symposium on Experimental Robotics, July 2002.

[4] R. Benosman and S. Kang. Panoramic Vision. Springer Verlag, 2000.

[5] P. Blaer and P. K. Allen. Topological mobile robot localization using fast vision techniques. In IEEE Int. Conference on Robotics and Automation, pages 1031–1036, Washington, May 2002.

[6] F. Chaumette. Potential problems of stability and convergence in image-based and position-based visual servoing. The Confluence of Vision and Control, D. Kriegman, G. Hager, A. Morse (eds), LNCIS Series, Springer Verlag, 237:66–78, 1998.

[7] G. Chesi, K. Hashimoto, D. Prattichizzo, and A. Vicino. A switching control law for keeping features in the field of view in eye-in-hand visual servoing. In IEEE Int. Conf. on Robotics and Automation, pages 3929–3934, Taipei, Taiwan, 2003.

[8] P. Corke and S. Hutchinson. A new partitioned approach to image-based visual servo control. In IEEE Conference on Decision and Control, pages 2521–2526, Sydney, December 2000.

[9] N. J. Cowan, G. A. D. Lopes, and D. E. Koditschek. Rigid body visual servoing using navigation functions. In IEEE Int. Conf. on Decision and Control, pages 3920–3926, Sydney, Australia, 2000.

[10] N. J. Cowan, J. D. Weingarten, and D. E. Koditschek. Visual servoing via navigation functions. IEEE Transactions on Robotics and Automation, August 2002.

[11] B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Trans. on Robotics and Automation, 8(3):313–326, June 1992.

[12] S. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servo control. IEEE Trans. on Robotics and Automation, 12(5):651–670, October 1996.

[13] E. Malis and S. Benhimane. Vision-based control with respect to planar and non-planar objects using a zooming camera. In IEEE International Conference on Advanced Robotics, July 2003.

[14] E. Malis, J. Borrelly, and P. Rives. Intrinsics-free visual servoing with respect to straight lines. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, October 2002.

[15] E. Malis, F. Chaumette, and S. Boudet. 2 1/2 D visual servoing. IEEE Trans. on Robotics and Automation, 15(2):238–250, April 1999.

[16] E. Malis, F. Chaumette, and S. Boudet. Positioning a coarse-calibrated camera with respect to an unknown object by 2D 1/2 visual servoing. IEEE Int. Conference on Robotics and Automation, 2:1352–1359, May 1998.

[17] P. Martinet and J. Gallice. Position based visual servoing using a nonlinear approach. In IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 1, pages 531–536, Kyongju, South Korea, October 1999.

[18] Y. Mezouar and F. Chaumette. Path planning for robust image-based control. IEEE Trans. on Robotics and Automation, 18(4):534–549, August 2002.

[19] T. Pajdla, T. Svoboda, and V. Hlavac. Panoramic Vision: Sensors, Theory and Applications, chapter Epipolar Geometry of Central Panoramic Catadioptric Cameras, pages 73–102. R. Benosman and S. B. Kang, 2001.

[20] A. Paulino and H. Araujo. Multiple robots in geometric formation: Control structure. In Int. Symposium on Intelligent Robotic Systems, UK, 2000.

[21] C. Samson and B. Espiau. Application of the task function approach to sensor-based control of robot manipulators. In 11th IFAC World Congress, volume 9, pages 286–291, Tallinn, Estonia, USSR, August 1990.

[22] R. Vidal, O. Shakernia, and S. Sastry. Formation control of nonholonomic mobile robots with omnidirectional visual servoing and motion segmentation. In IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 2003.

[23] N. Winter, J. Gaspar, G. Lacey, and J. Santos-Victor. Omnidirectional vision for robot navigation. In OMNIVIS, pages 21–28, 2000.
