Estimate Location Using Omnidirectional Image Sensor


IEEE/RSJ International Workshop on Intelligent Robots and Systems IROS '91. Nov. 3-5, 1991, Osaka, Japan. IEEE Cat. No. 91TH0375-6

Estimating Location and Avoiding Collision Against Unknown Obstacles for the Mobile Robot Using Omnidirectional Image Sensor COPIS

Yasushi Yagi, Yoshimitsu Nishizawa and Masahiko Yachida

Department of Information & Computer Sciences, Osaka University

1-1 Machikaneyama-cho, Toyonaka, Osaka 560, Japan. Phone 06-844-1151, E-mail [email protected]

Abstract

We have proposed a new omnidirectional image sensor COPIS (Conic Projection Image Sensor) for guiding navigation of a mobile robot. Its feature is passive sensing of the omnidirectional image of the environment in real-time (at the frame rate of a TV camera) using a conic mirror. COPIS is a suitable sensor for visual navigation in a real world environment with moving objects.

This paper describes a method for estimating the location and the motion of the robot by detecting the azimuth of each object in the omnidirectional image. In this method, the azimuth is matched with the given environmental map. We also present a method to avoid collision against unknown obstacles and estimate their locations by detecting their azimuth changes while the robot is moving in the environment. Using the COPIS system, we performed several experiments in the real world.

1. Introduction

There has been much work on mobile robots with vision systems which navigate in both unknown and known environments [1-4]. These mobile robots, however, view only the front region of themselves and, as a result, they may collide against objects approaching from the side or behind. Thus, we need an image sensor to view the environment around the robot so that it may navigate safely.

Imaging methods using rotating cameras [5], a fish-eye lens [6], a spherical mirror [7] or a conic mirror [8][9] have been studied for acquiring omnidirectional views of the environment. Although very precise azimuth information can be acquired in the omnidirectional view taken by a rotating camera, the imaging takes a long time and, thereby, the method is not applicable to real-time problems such as avoiding collision against moving objects. Imaging using a fish-eye lens can acquire a wide view of a semi-sphere around the camera. However, the image analysis of the ground (floor) and objects on it is difficult because they appear along the circular boundary of the image, where the image resolution is very poor. A conic mirror yields an image of the environment around it, so we can easily take a 360 degrees view. The imaging method using a spherical mirror is similar to that using a conic mirror; however, the resulting image resembles the image from a fish-eye lens, so structures in an environment such as a wall and a door in a room appear along the circular boundary of the image.

We have proposed a new omnidirectional image sensor COPIS (Conic Projection Image Sensor) for guiding navigation of a mobile robot [9][10]. Its feature is passive sensing of the omnidirectional image of the environment in real-time (at the frame rate of a TV camera) using a conic mirror. COPIS is a suitable sensor for visual navigation in the real world environment with moving objects [11]. The imaging is a conic projection; the azimuth of each point in the scene appears in the image as its direction from the image center. Thus, if the azimuth angle is observed at two points, the relative location between the robot and the object point is calculated by triangulation.

This paper describes a method for estimating the location and the motion of the robot by detecting the azimuth of each object in the omnidirectional image. In this paper, we assume a priori knowledge (model) of the environment, and the azimuth is matched with the given environmental map. We also present a method to estimate the locations of unknown obstacles by detecting their azimuth changes while the robot is moving in the environment. Then the robot can avoid collision against them. Using the COPIS system, we performed experiments in a room.

2. COPIS System Configuration

As shown in Fig.1, the COPIS system has three components: an imaging subsystem COPIS, an image processing subsystem and a mobile robot. COPIS, mounted on the robot, consists of a conic mirror and a TV camera in a glass tube with a diameter of 200 mm and a height of 200 mm. The image processing subsystem consists of a monitor, an image processor, which converts each omnidirectional image into a 512x432x8 bit digital image, and a 32-bit workstation.

A conic mirror with a diameter of 120 mm and a TV camera are set in the glass tube in such a way that their axes are identical and vertical. Fig.2 shows an example of an input image taken in the real environment shown in Fig.3. As shown in Fig.4, the image taken by the COPIS is a 2π view around the vertical axis. Furthermore, the COPIS has the advantage that vertical edges in the environment project radially in the image, and that their azimuth angles have an invariant relation with the distance from the robot and the height of the object.

Fig.1 COPIS System Configuration
Fig.2 An Example of Input Image
Fig.3 Experimental Environment
Fig.4 View Field of COPIS
Fig.5 Invariant Relation of Azimuth Angle


As shown in Fig.5, the point P at (Xp, Yp, Zp) in the environment is projected onto the image point p (xp, yp) such that the azimuth of p in the image equals the azimuth of P about the vertical axis:

\tan\theta = \frac{y_p}{x_p} = \frac{Y_p}{X_p} \quad (1)

Thus in the COPIS system, by using the azimuths of the radial edges in the image plane, the COPIS can estimate the locations of the robot and objects.
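The invariance in (1) can be illustrated with a short sketch (ours, not the authors' code; the coordinates are invented): every point of a vertical edge maps to the same azimuth, so the edge appears as a single radial line in the image.

```python
import math

def azimuth(x: float, y: float) -> float:
    """Azimuth angle about the vertical (mirror) axis, in radians."""
    return math.atan2(y, x)

# A vertical edge: its points share (X, Y) and differ only in height Z.
edge_points = [(2.0, 1.0, z) for z in (0.0, 0.5, 1.0, 1.5)]

# Every point of the edge has the same azimuth, so the edge projects to a
# single radial line in the image, independent of height and distance.
for X, Y, Z in edge_points:
    print(f"Z = {Z:.1f}  ->  azimuth = {math.degrees(azimuth(X, Y)):.2f} deg")
```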

3. Navigation Algorithm

The robot is initially parked at a standard position and driven around a room and a corridor of the building via a given route. The robot knows the standard position (starting position) and its own movement; however, there are measurement errors caused by the swaying motion of the robot. Therefore, by using azimuth information from both the input image and the environmental map, we estimate the location and motion of the robot.

3.1 Location and motion estimation of robot

Essentially, the location of a mobile robot can be defined just by the planar polar coordinates (r, θ) as shown in Fig.5. Thus, as shown in Fig.6, if two or more azimuth angles of objects in the given environmental map are observed while the robot moves, the location of the robot is calculated by matching the obtained azimuth angles with the environmental map. Actually, we estimate a more precise location by the least squares method, as sketched below. Furthermore, the motion of the robot can be estimated by measuring its location in consecutive images. As COPIS takes an omnidirectional image, this system can estimate the location and the motion of the robot even while the robot is turning.

Fig.6 Location Estimation of Robot
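A minimal sketch of this estimation step, assuming the bearings are already matched to map edges and the robot heading is known; the linear constraint sin(θ)(xi - x) = cos(θ)(yi - y) is an equivalent rewriting of the bearing relation, not necessarily the paper's exact formulation.

```python
import numpy as np

def locate_robot(landmarks, azimuths):
    """Least-squares robot position from matched bearings.

    landmarks: list of known map positions (xi, yi);
    azimuths:  corresponding observed bearings (radians, world frame).
    Each bearing gives the linear constraint
    sin(th)*x - cos(th)*y = sin(th)*xi - cos(th)*yi.
    """
    A, b = [], []
    for (xi, yi), th in zip(landmarks, azimuths):
        A.append([np.sin(th), -np.cos(th)])
        b.append(np.sin(th) * xi - np.cos(th) * yi)
    # Least squares absorbs small bearing errors, as in the paper.
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Hypothetical check: robot at (1.0, 0.5) observing three map edges.
true = np.array([1.0, 0.5])
marks = [(0.0, 0.0), (2.5, 0.0), (2.5, 2.5)]
bearings = [np.arctan2(y - true[1], x - true[0]) for x, y in marks]
print(locate_robot(marks, bearings))  # ~ [1.0, 0.5]
```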

3.2 Predicting azimuth angle of vertical edge

Fig.7(a) shows an example of the environmental map. As seen in the figure, the map is a two-dimensional model viewed from the vertical direction. Therefore, when the robot's location is given, we can predict the azimuth angle of each edge as shown in Fig.7(b).

3.3 Matching radial lines to predicted azimuths

Since a rough location of the robot has already been obtained, we generate a predicted azimuth angle model from the environmental map. While the robot is moving, the rough location is calculated by adding the robot movement measured by the robot's encoder to the location estimated at the prior frame. Each predicted azimuth angle is then compared with the azimuth angle of the radial line obtained from the input image, as follows.

Fig.7 Environmental Map and Prediction of the Azimuth Angle ((a) vertical edges in the environmental map; (b) predicted radial lines)



In the case when the robot is at the starting position, we first set a search region around each predicted azimuth angle of a vertical edge, which is estimated from the environmental map. We then examine whether a radial line in the image exists in each search region, as in the sketch below.
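The region test might look like the following sketch (our own illustration; the 5-degree half-width is an assumed value, not taken from the paper).

```python
import math

# Assumed region size; the paper does not give a numeric width.
SEARCH_HALF_WIDTH = math.radians(5.0)

def angle_diff(a, b):
    """Smallest signed difference between two angles, in radians."""
    return (a - b + math.pi) % (2.0 * math.pi) - math.pi

def match_radial_lines(predicted, detected):
    """Map each predicted azimuth to the nearest detected radial line
    inside its search region, or to None if the region is empty."""
    matches = {}
    for i, pred in enumerate(predicted):
        candidates = [d for d in detected
                      if abs(angle_diff(d, pred)) <= SEARCH_HALF_WIDTH]
        matches[i] = min(candidates,
                         key=lambda d: abs(angle_diff(d, pred)),
                         default=None)
    return matches

print(match_radial_lines([0.0, 1.0], [0.04, 0.97, 2.0]))
# {0: 0.04, 1: 0.97}  (the line at 2.0 rad matches no prediction)
```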

In the case when the robot is moving, to set the search region, we use the observed locus of each edge in the image while the robot moves.

Let us denote the robot motion by (u(t), v(t)). Defining the position of P at time t_1 by P_1(X_1, Y_1, Z_1), the relative velocity of the point P in the environment at time t_1+t is represented by (-u(t_1+t), -v(t_1+t), 0). We get the location of point P at time t_1+t as

X_P = \int_{0}^{t} -u(t_1+\tau)\,d\tau + X_1
Y_P = \int_{0}^{t} -v(t_1+\tau)\,d\tau + Y_1 \qquad (2)
Z_P = Z_1

Thus, from (1) and (2), the relation between the azimuth angle of an object and time t is obtained as follows:

\tan\theta(t_1+t) = \frac{\int_{0}^{t} -v(t_1+\tau)\,d\tau + Y_1}{\int_{0}^{t} -u(t_1+\tau)\,d\tau + X_1} \qquad (3)

The locus of the azimuth angle in consecutive images is represented by (3). Thus, the azimuth angle at the next frame can be predicted, and we can set a search region around this predicted azimuth angle of the vertical edge.
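A discrete sketch of this prediction, with per-frame encoder increments standing in for the integrals of (3); the 5 cm step matches the experiments in Section 5, while the point's coordinates are invented.

```python
import math

def predict_azimuth(X1, Y1, dx_steps, dy_steps):
    """Predicted azimuth of a fixed point P after the robot has moved.

    X1, Y1: position of P relative to the robot at time t1;
    dx_steps, dy_steps: per-frame robot displacements (encoder odometry),
    which stand in for the integrals of u and v in equation (3).
    """
    Xp = X1 - sum(dx_steps)  # relative motion is opposite to the robot's
    Yp = Y1 - sum(dy_steps)
    return math.atan2(Yp, Xp)  # theta(t1 + t)

# 5 cm forward per frame (the step used in the experiments of Sec. 5);
# the point's initial offset (1.0 m, 0.5 m) is an invented value.
for n in range(1, 5):
    th = predict_azimuth(1.0, 0.5, [0.05] * n, [0.0] * n)
    print(f"frame {n}: predicted azimuth = {math.degrees(th):.1f} deg")
```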

After setting the search regions, we estimate the location of the robot. However, the obtained azimuth angles have observational errors due to the swaying motion (pan angle) of the robot. Therefore, we estimate a more precise location by the least squares method. By changing the assumed pan angle of the robot every 0.5 degree within a certain margin around the swaying motion of the robot, we can find the precise pan angle as the one at which the deviation of the least squares method becomes minimum. Then the location of the robot can be detected. Fig.8 shows the process of finding the minimum deviation by changing the pan angle; a sketch follows below.

Fig.8 Process of Estimating Pan Angle of Robot
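The sweep can be sketched as follows, reusing the least-squares form from the earlier localization sketch; only the 0.5-degree step comes from the paper, and the ±5-degree margin is an assumption.

```python
import numpy as np

def deviation(landmarks, azimuths, pan):
    """Least-squares residual of the bearing constraints for a trial pan."""
    th = np.asarray(azimuths) + pan  # undo the assumed sway
    xs, ys = np.asarray(landmarks, dtype=float).T
    A = np.stack([np.sin(th), -np.cos(th)], axis=1)
    b = np.sin(th) * xs - np.cos(th) * ys
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.sqrt(np.mean((A @ pos - b) ** 2))

def estimate_pan(landmarks, azimuths, margin_deg=5.0, step_deg=0.5):
    """Sweep the pan angle in 0.5-degree steps and keep the minimum."""
    pans = np.radians(np.arange(-margin_deg, margin_deg + step_deg, step_deg))
    return min(pans, key=lambda p: deviation(landmarks, azimuths, p))

# Hypothetical run: the robot sways by 2 degrees while seeing four edges.
marks = [(0.0, 0.0), (2.5, 0.0), (2.5, 2.5), (0.0, 2.5)]
pos, pan = np.array([1.0, 0.5]), np.radians(2.0)
obs = [np.arctan2(y - pos[1], x - pos[0]) - pan for x, y in marks]
print(np.degrees(estimate_pan(marks, obs)))  # ~ 2.0
```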

4. Estimation of Unknown Obstacles

After matching the observed objects with the environmental map, the objects which are not found in the environmental map are recognized as unknown obstacles. If there are edges which have not been matched to the environmental map, the robot considers that they are caused by some unknown obstacles and estimates their locations.

The relative location between the robot and the obstacle is calculated by triangulation if the azimuth angle is detected at two positions while the robot is moving.

4.1 Estimation of unknown obstacle's location

The locus of the azimuth angle in consecutive images is represented by (3). Thus, if the azimuth angle θ is observed at two points, the relative location between the robot and the object point is calculated by triangulation as follows.

Fig.9 Estimation of Unknown Obstacle's Location



\begin{bmatrix} X_1 \\ Y_1 \end{bmatrix} = \begin{bmatrix} \tan\theta_2 & -1 \\ \tan\theta_3 & -1 \end{bmatrix}^{-1} \begin{bmatrix} U(2)\tan\theta_2 - V(2) \\ U(3)\tan\theta_3 - V(3) \end{bmatrix} \qquad (4)

U(i) = \int_{0}^{t(i)} u(t_1+\tau)\,d\tau, \quad V(i) = \int_{0}^{t(i)} v(t_1+\tau)\,d\tau \quad (i = 2, 3)

where θ_2 and θ_3 are the azimuth angles θ after t_2 and t_3 seconds, respectively. In the case of θ_2 = θ_3, the matrix of (4) is singular. When the object point moves along the same axis as the robot motion, the condition tan θ_2 ≠ tan θ_3 is not satisfied; in this case, it is impossible to calculate the location. However, as the object usually has some size, it is unlikely that all points on the object move along the same axis as the robot movement. Therefore, the locations of at least a few points on the object can be calculated. In practice, the azimuth angle has an observational error due to the swaying motion of the robot. Therefore, as shown in Fig.9, we estimate a more precise location using consecutive measurements by the least squares method.

5. Experimental Results

Using the COPIS system, we performed several experiments in the real world. One of them was performed in a room with a size of 2.5 m by 2.5 m. The robot moved in the environment as shown in Fig.3. An image was taken after every 5 cm of motion while the robot moved. By observing the locus of the azimuth angles of vertical edges, COPIS can estimate its own location and motion. First, the robot moves straight from the 1st to the 16th frame; during the next 16 frames the robot changes its direction and moves around the arc toward the right side; finally, the robot moves straight again. Fig.10 shows plots of the azimuth angles of the vertical edges in the environment. The result is shown in Fig.11. The given vertical edges in the environmental map are plotted as small black rings, and the vertical edges on the unknown obstacles matched in more than 15 frames are plotted as black points. Furthermore, the real locus of the robot movement is drawn with thin straight and curved black lines, and the estimated locus of the robot is drawn with a thick black line.

The average error of the location measurement of the robot was approximately 3 cm and the maximum error was approximately 7 cm.

Fig.10 Locus Map of Vertical Edge

In this experiment, as shown in Fig.11, after the robot moves around the arc toward the right side, a large error is produced in the front region of the robot. During the first 32 frames, the given vertical edges matched with the environmental map are scattered all around the robot, so the location of the robot can be calculated from many of these vertical edges. However, when the robot moves along the final straight course, these vertical edges move toward the region behind the robot, and some vertical edges are occluded by unknown obstacles. The number of vertical edges matched with the environmental map therefore decreases, and the location of the robot cannot be calculated precisely. The average error of the location of unknown obstacles was approximately 3 cm. However, we consider that the precision of the obtained locations of the robot and the unknown obstacles is sufficient for robot navigation. Thus, these results suggest that the COPIS system is a useful sensor for robot navigation.

6. Conclusions

In this paper, we have described a method for estimating the location and motion of the robot and for estimating the locations of unknown obstacles. We consider that the measurement precision is sufficient for robot navigation.

In future work, we will try to perform the experiment with an environmental map made by learning the locations of objects in the environment while the robot moved before. Furthermore, the application of the COPIS to road following, including moving objects, with a given environmental map is the subject of on-going studies.



Fig.11 Result of Measurement of Obstacle's Location and Robot's Location and Motion (legend: environmental layout, observed locations, robot)

References

[1] A.M. Waxman, J.J. LeMoigne and B. Srinivasan, A visual navigation system for autonomous land vehicles, IEEE J. Robotics & Auto., vol. RA-3, no. 2, pp. 124-141 (1987)

[2] M. Turk, K.D. Morgenthaler, K.D. Gremban and M. Marra, VITS - A vision system for autonomous land vehicle navigation, IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-10, no. 3, pp. 342-360 (1988)

[3] C. Thorpe, M.H. Hebert, T. Kanade and S.A. Shafer, Vision and navigation for the Carnegie-Mellon Navlab, IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-10, no. 3, pp. 362-373 (1988)

[4] M. Yachida, T. Ichinose and S. Tsuji, Model-guided monitoring of a building environment by a mobile robot, Proc. 8th IJCAI, pp. 1125-1127 (August 1983)

[5] H. Ishiguro, M. Yamamoto and S. Tsuji, Omni-directional stereo for making global map, Proc. 3rd ICCV (1990)

[6] S.J. Oh and E.L. Hall, Guidance of a mobile robot using an omnidirectional vision navigation system, Proc. SPIE 852, Mobile Robots II, pp. 288-300 (1987)

[7] J. Hong, X. Tan, B. Pinette, R. Weiss and E.M. Riseman, Image-based homing, Proc. Int. Conf. Robotics and Automation, pp. 620-625 (April 1991)

[8] R.A. Jarvis and J.C. Byrne, An automated guided vehicle with map building and path finding capabilities, Proc. 4th ISRR, pp. 497-504 (1988)

[9] Y. Yagi and S. Kawato, Panorama scene analysis with conic projection, Proc. IEEE Int. Workshop Intelligent Robots & Systems, pp. 181-187 (1990)

[10] Y. Yagi and M. Yachida, Real-time generation of environmental map and obstacle avoidance using omnidirectional image sensor with conic mirror, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Hawaii (June 1991)

[11] Y. Yagi, S. Kawato and S. Tsuji, Collision avoidance using omnidirectional image sensor (COPIS), Proc. IEEE Int. Conf. on Robotics and Automation, pp. 910-915 (April 1991)
