
Robotics and Autonomous Systems 36 (2001) 103–119

Vision-based localization for mobile robots

Giovanni Adorni a,∗, Stefano Cagnoni b, Stefan Enderle c, Gerhard K. Kraetzschmar c, Monica Mordonini b, Michael Plagge d, Marcus Ritter c, Stefan Sablatnög c, Andreas Zell d

a Department of Communications, Computer and System Science, University of Genoa, Via Opera Pia 13, I-16145 Genoa, Italy
b Department of Computer Engineering, University of Parma, Parco Area delle Scienze 181a, I-43100 Parma, Italy
c Neural Information Processing Department, University of Ulm, Oberer Eselsberg, D-89069 Ulm, Germany
d Department of Computer Architecture, University of Tübingen, Sand 1, D-72076 Tübingen, Germany

Abstract

Robust self-localization and repositioning strategies are essential capabilities for robots operating in highly dynamic environments. Environments are considered highly dynamic if objects of interest move continuously and quickly, and if the chances of hitting or getting hit by other robots are quite significant. An outstanding example of such an environment is provided by RoboCup.

Vision system designs for robots used in RoboCup are based on several approaches, aimed at fitting both the mechanical characteristics of the players and the strategies and operations that the different roles or playing situations may require. This paper reviews three approaches to vision-based self-localization used in the RoboCup middle-size league competition and describes the results they achieve in the robot soccer environment for which they have been designed. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Self-localization; Visual features; Dynamic environments; RoboCup

1. Introduction

To perform complex navigation tasks and to coordinate its movement with other agents in indoor environments, an autonomous mobile robot needs knowledge about the world in which it is moving.

∗ Corresponding author. E-mail addresses: [email protected] (G. Adorni), [email protected] (S. Cagnoni), [email protected] (S. Enderle), [email protected] (G.K. Kraetzschmar), [email protected] (M. Mordonini), [email protected] (M. Plagge), [email protected] (M. Ritter), [email protected] (S. Sablatnög), [email protected] (A. Zell).

This enables it to recognize the main landmarks and determine its position with respect to the environment. In fact, answering the question “Where am I?” is one of the most crucial and, often, critical tasks (both in terms of quality and time complexity) for an autonomous mobile robot [6,12]. The self-localization problem is usually solved with a combination of relative and absolute position measurement techniques [7]. Odometry and inertial navigation (dead reckoning) systems are usually integrated with methods employing active beacons, natural or artificial landmarks, as well as environment model matching, in order to compensate for wheel slippage and other positioning errors [13]. Natural and artificial landmarks are distinct features that a robot can recognize from its sensory input. In general, landmarks have a fixed, well-known position within an absolute reference system, relative to which a robot can localize itself. The main task in self-localization is then to recognize such landmarks reliably by processing the sensory input to compute the robot’s position [4,26,30].

The literature concerning computer vision reports on a large number of natural and artificial landmark positioning systems integrated in autonomous robot architectures [3,8]. Several authors have tried to find a good correlation between what the robot perceives through a camera and its internal representation of the environment [21,27]. In indoor environments, edges of doors or walls are commonly used as landmarks to perform this correlation, but often such patterns are too generic to allow accurate localization in a more complex environment. Researchers have often focused on the choice of optimal shapes or patterns as landmarks so that robust localization can be obtained from a single image [23,24] or from multiple images [20]. Furthermore, considerable work has been devoted to the treatment of error in sensor observations [5]. Sometimes it can be useful to evaluate alternative techniques that recognize landmarks specially designed to carry both human- and robot-oriented visual information, possibly complemented by other natural signs that are often found in office and industrial environments [2,28]. Often these methods do not yield high localization accuracy, but there are a number of practical situations, e.g., docking and navigating in small openings, where coarser robot positioning accuracy suffices.

The RoboCup environment provides researchers with a common environment in which it is possible to test and compare different visual self-localization methods reliably. Self-localization in RoboCup is very important to achieve coordination between players, both to defend the team’s penalty area and to make goal-oriented decisions [19]. Although the world of RoboCup is simpler than many real-life indoor environments, it is a very dynamic world in which the robot has to react quickly to the changes caused by the continuous movement of the robots and the ball. Most robots rely on odometry for real-time self-localization. However, odometry fails immediately when a robot hits other robots. The appeal of the RoboCup challenge is actually due to its highly dynamic environment, in which robots usually obtain only partial views of the scene, since reference markers are often occluded by other robots.

To reposition themselves, robots in RoboCup use different kinds of sensors, such as cameras, laser range finders, sonars, and so on. While self-localization based on laser range finders provides a very accurate and robust estimation of robot position, visual sensors are more flexible; the number and configuration of cameras can be chosen according to which other tasks are performed by the vision system and to the role that the robot has within the team. Most distance sensors require that the field of play be surrounded by walls. This limits their future use, when RoboCup eventually moves towards a more natural, wall-free environment.

In RoboCup, according to the current rules, the appearance of relevant objects is well known and it is therefore possible to provide robots with some geometric models of the field or of its subparts. Visual self-localization is typically performed by matching and comparing the robot’s view of the world with a model. This comparison can often be used to refine and compensate for the errors introduced by odometric data, but in some cases a robot must be able to localize itself without any help from sensory systems other than that of vision.

Several kinds of vision systems have been applied in RoboCup. While most robots use traditional single-camera vision systems, it is becoming quite common to adopt more complex solutions. For example, the problem of having only a limited field of view can be solved using a camera pointing upwards to a concave mirror to obtain a global vision system. Sometimes, different vision system designs and localization strategies are used for robots playing on the same team, which allows for more specialized robots to be built according to their roles but makes sharing data among robots more difficult.

This paper reviews three approaches to visual self-localization used in the RoboCup middle-size (F-2000) competition, describing and comparing the results they achieve in the RoboCup test-bed environment. The paper is organized as follows: a self-localization method based on an omnidirectional vision system is illustrated in Section 2, a self-localization method based only on the comparison between a pair of images from a binocular vision system is proposed in Section 3, and a self-localization method based on the Monte Carlo localization technique is described in Section 4. Section 5 reports some results of the above described techniques. Finally, some conclusions from which possible future work can stem are drawn in Section 6.

2. Robust vision-based localization with monocular vision

A lot of successful work on self-localization in indoor or bounded environments with laser range finders has been presented in the past. As already mentioned in Section 1, these methods are very accurate, fast and robust. The main drawback in the use of laser range finders, or more generally in the use of data from exclusively distance-measuring sensors, is the almost complete lack of information about the objects from which the distance measurements originate. Only by extracting geometric features such as edges or lines from the data can information about objects in the environment, or the environment itself, be gathered. This leads, for example, to the inability to perform a global self-localization task even in rather simple periodic or symmetric environments like the RoboCup field, where there are always at least two equivalent position estimates with a heading difference of π. A further drawback, which affects especially 2D laser range finders, is the restriction of the effective range to a 2D plane. Any relevant object must have an intersection with this plane to enable the laser range finder to detect it.

Fig. 1. The left-hand picture shows the construction of the 360◦ camera. The middle picture shows an image directly taken by the camera. The blue goal is in the upper right, the yellow goal in the lower left corner of the image. The rightmost picture shows the outcome of the type classification of the circular horizon image.

In this section, we present an algorithm which combines the advantages of approaches based on laser range finders with the flexibility provided by a vision-based approach. For this purpose, an omnidirectional camera was built, which is able to provide both object recognition and distance measurement capabilities simultaneously. The design of this camera was developed by Franz [17] from an earlier MPI design used for biologically inspired vision experiments. A major drawback of the proposed method is the poor resolution at larger distances, which may compromise robustness. To overcome this problem, a set of simple algorithms was implemented to verify and enhance the position estimates of the main algorithm. The algorithm works particularly well in an area with good visual clues and, hence, is especially suited for a robot goalkeeper.

2.1. Localization by using distance data from image processing

The 360◦ camera supplies a circular image of the environment, which is sampled radially from the center of the circle to the fringe (see Fig. 1). This is done for 360 adjacent radii, which results in a spatial resolution of 1◦. Each sampled pixel on a line is assigned, according to its color, to one of the following color classes: GOAL-YELLOW, GOAL-BLUE, FIELD, WALL, BALL, ROBOT, UNKNOWN. By using the class information of pixels along a radial line representing the area at the horizon, the type of the static environment (GOAL-YELLOW, GOAL-BLUE, WALL) under a certain direction can be determined. This detection is quite stable, since several points around the horizon are taken into account.

By specifying the mapping between a pixel on a radial line of the image and the distance to the corresponding point on the field in the real environment, it is possible to provide a vision-based distance measurement with the 360◦ camera. Strictly speaking, only distances to points on the field can be measured, while distances to an arbitrary point, e.g. to an object within the field, cannot be measured. To obtain the mapping between pixels in the image and distances on the field, a calibration is needed. Since this method of determining distances is based on the detection of a single point type transition along a radial line in the image, it is sensitive to changing lighting conditions or badly trained object colors for classification, and is therefore error-prone and subject to noise. This is usually not true for the calibration process itself, which can be performed under almost ideal conditions.

Now the same techniques for self-localization as used with laser scanner data can be applied [6,33]. But in contrast to laser scanner data, which have a nearly constant error and resolution independent of the measured distances, the error in the data from the vision system increases with distance (see Fig. 9) while the resolution decreases. This raises the need for further processing of a position estimate. We will present a method which can perform a verification of a given position estimate on the basis of the more stable horizon type detection [32].

Let us assume that the goalkeeper defends the blue goal. First the self-localization algorithm type-filters the data from the image processing. Only distance data from directions under which the blue goal is visible at the horizon are used for further processing. Next these type-filtered distances are averaged with the values from adjacent directions within a certain angle to reduce the high-frequency noise within the data. Now the algorithm tries to fit a line to the data representing the blue goal. Because the exact position of this goal is known in the global coordinate frame, the algorithm can calculate the robot position and orientation.
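As an illustration of this pipeline, the following Python sketch chains the type-filtering, neighbourhood averaging, line fitting and pose recovery steps. It is a minimal sketch under our own assumptions: the function name, the array layout of the classified radial scan, the smoothing window and the SVD-based line fit are not taken from the authors' implementation, and the pose recovery uses the same simplification discussed in Section 5 (the midpoint of the fitted segment is taken as the goal centre).

import numpy as np

def estimate_pose_from_goal(horizon_type, distances, smooth_halfwidth=3):
    """Hypothetical sketch: type-filter the 360 radial distance measurements,
    smooth them, fit a line to the points classified as blue goal, and derive
    a pose from the known goal line (assumed to lie on the global x-axis,
    centred at the origin)."""
    angles = np.deg2rad(np.arange(360))          # one radial line per degree

    # 1. Keep only directions under which the blue goal is seen at the horizon.
    mask = (horizon_type == "GOAL-BLUE") & np.isfinite(distances)
    if mask.sum() < 2:
        return None                              # not enough data to fit a line

    # 2. Average each kept distance with its neighbours to reduce noise.
    smoothed = distances.astype(float).copy()
    for i in np.flatnonzero(mask):
        lo, hi = max(0, i - smooth_halfwidth), i + smooth_halfwidth + 1
        smoothed[i] = np.nanmean(distances[lo:hi])

    # 3. Convert the surviving polar measurements to Cartesian points in the
    #    robot frame and fit a line through them (total least squares via SVD).
    pts = np.column_stack([smoothed[mask] * np.cos(angles[mask]),
                           smoothed[mask] * np.sin(angles[mask])])
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    d = vt[0]                                    # unit direction of the fitted goal line

    # 4. The fitted line is the blue goal line, whose global pose is known:
    #    align its direction with the global x-axis and its midpoint with the
    #    goal centre to obtain the robot pose (up to the usual 180-degree ambiguity).
    heading = -np.arctan2(d[1], d[0])
    rot = np.array([[np.cos(heading), -np.sin(heading)],
                    [np.sin(heading),  np.cos(heading)]])
    position = -rot @ centroid                   # robot position in the global frame
    return position, heading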

Fig. 2. If the goal is seen under an angle α, the robot position falls on a circle with origin (0, y0) and a radius r. b is the width of the goal. The origin of the global coordinate frame is at the intersection of the dashed lines. β is the angle between the reference direction for the robot heading and the start of the cluster indicating the goal. γ is the angle between the robot global heading (e.g. from compass) and the known orientation (from an a priori model of the environment) of the goal line. Line l is parallel to the goal line.

For the verification of the given position estimate, a criterion must be found on which a decision can be made whether the estimate is appropriate or not. For that, the algorithm falls back on the robust detection of the environment. By clustering the data with the same type, a certain viewing angle α for the blue goal can be quantified (see Fig. 2). Assuming that the origin of the global coordinate frame falls on the middle of the goal line, it can be shown that the robot position must be somewhere on a circle with radius r and origin (x0, y0), where r = b/(2 sin(α)) and x0 = 0, y0 = b/(2 tan(α)), b being the width of the goal. So if an estimated position falls on the circle described by this equation, the algorithm assumes the correctness of the estimate.
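A minimal sketch of this consistency check is given below; the tolerance parameter is our own assumption, since the paper does not state one.

import numpy as np

def position_consistent_with_goal_angle(x, y, alpha, b, tol=0.1):
    """Verify a position estimate (x, y) against the viewing angle alpha of the
    goal of width b: the robot must lie on the circle of radius
    r = b / (2 sin(alpha)) centred at (0, b / (2 tan(alpha))) (cf. Fig. 2).
    tol is an assumed tolerance in the same units as b."""
    r = b / (2.0 * np.sin(alpha))
    y0 = b / (2.0 * np.tan(alpha))
    return abs(np.hypot(x, y - y0) - r) <= tol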

This algorithm for self-localization may fail under certain conditions, e.g. if the distance to the visual clues in the environment is too large for a reliable distance measurement. Therefore, two other methods were derived to estimate the global position as backup algorithms. Both are usable only under certain, and differing, conditions.

2.2. Localization by using global heading data

If not only the viewing angle α of the own goal but also the heading of the robot with respect to a global coordinate frame is known (e.g. from compass data), the position of the robot with respect to this coordinate frame can be calculated (see Fig. 2). The position is at the intersection of the previously described circle and a line:

x^2 + (y − b/(2 tan(α)))^2 = (b/(2 sin(α)))^2 and y = tan(π − δ)(x − b/2). (1)

δ can be calculated as δ = π − (β + α) + γ, where β is the angle between the reference direction for the robot heading and the start of the domain indicating the goal, and γ is the angle between the robot global heading and the known orientation of the goal line.
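Solving Eq. (1) amounts to intersecting the circle with the line; the sketch below does this by substituting the line into the circle equation and solving the resulting quadratic. The routine and its argument names are our own illustration, not the authors' code; the goal line is assumed to lie on the global x-axis with the goal centred at the origin, as in Fig. 2.

import numpy as np

def localize_from_goal_and_heading(alpha, beta, gamma, b):
    """Illustrative solution of Eq. (1): intersect the viewing-angle circle with
    the line through the goal post at (b/2, 0) whose slope follows from the
    global heading.  alpha: viewing angle of the goal; beta: angle from the
    robot's reference direction to the start of the goal cluster; gamma: angle
    between the global heading (e.g. compass) and the goal-line orientation;
    b: goal width.  Returns up to two candidate positions."""
    r = b / (2.0 * np.sin(alpha))
    y0 = b / (2.0 * np.tan(alpha))
    delta = np.pi - (beta + alpha) + gamma
    m = np.tan(np.pi - delta)                  # slope of the line in Eq. (1)

    # Substituting y = m (x - b/2) into x^2 + (y - y0)^2 = r^2 gives a
    # quadratic A x^2 + B x + C = 0 in the x-coordinate of the robot.
    A = 1.0 + m * m
    B = -m * m * b - 2.0 * m * y0
    C = m * m * b * b / 4.0 + m * b * y0 + y0 * y0 - r * r
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return []                              # measurements are inconsistent
    xs = [(-B + s * np.sqrt(disc)) / (2.0 * A) for s in (1.0, -1.0)]
    return [(x, m * (x - b / 2.0)) for x in xs]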

2.3. Localization by using data from odometry

While the robot is moving, the viewing angles α1, α2 and the angles β1, β2 between the reference direction for the robot heading and the beginning of the area indicating the goal can be recorded at two different positions t1 and t2, along with the distance d traveled between these two positions. From these data the position of the robot with respect to a global coordinate frame can be calculated (see Fig. 3).

Such a position is determined by the intersection of two circles centered on the edges of the goal line, having radii l1 and l2, respectively. As in the previous section, the coordinates of the edges within the global coordinate frame are given by the a priori model of the environment. The radii l1 and l2 can be calculated as

l1 = d sin(β1)/sin(β2 − β1) and l2 = d sin(β1 + α1)/sin(β2 − β1 + α2 − α1). (2)

Fig. 3. Moving the robot from t1 to t2, where the distance between t1 and t2 is d, leads to a change of the opening angle α1 to α2 and to a change of the direction angle β1 to β2. l1 and l2 are the distances to the two goal posts.

This algorithm could easily be used not only with translating but also with rotating movements of the robot. In our case this is not reasonable, due to the skid drive of the goalkeeper robot, which leads to unreliable odometry data when the robot is turning. The need for the robot to move before this algorithm can be applied is its main drawback.
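The following sketch applies Eq. (2) and then intersects the two circles centred on the known goal-post positions. The function name, argument layout and the choice to return both intersection candidates are our own assumptions for illustration.

import numpy as np

def localize_from_motion(alpha1, beta1, alpha2, beta2, d, post_a, post_b):
    """Sketch of the odometry-backed method: compute the distances l1, l2 to
    the two goal posts via Eq. (2), then intersect the circles centred on the
    known post positions post_a, post_b (2D points from the a priori field
    model).  Angles follow the definitions of Fig. 3."""
    l1 = d * np.sin(beta1) / np.sin(beta2 - beta1)
    l2 = d * np.sin(beta1 + alpha1) / np.sin(beta2 - beta1 + alpha2 - alpha1)

    # Standard circle-circle intersection between the two distance circles.
    pa, pb = np.asarray(post_a, float), np.asarray(post_b, float)
    base = np.linalg.norm(pb - pa)
    if base == 0.0 or base > l1 + l2 or base < abs(l1 - l2):
        return []                                   # no consistent intersection
    a = (l1**2 - l2**2 + base**2) / (2.0 * base)    # distance from pa along the baseline
    h = np.sqrt(max(l1**2 - a**2, 0.0))             # perpendicular offset
    along = (pb - pa) / base
    perp = np.array([-along[1], along[0]])
    mid = pa + a * along
    return [mid + h * perp, mid - h * perp]         # two candidates; one lies inside the field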

3. Binocular vision and geometric reasoning

Unlike the method described in the previous section, which relies on data coming from an omnidirectional vision sensor, the self-localization method described in this section has been designed for use on a robot goalkeeper equipped with a vision system based on two wide-angle cameras. The fields of view of the two cameras overlap in a region of about 20◦, which corresponds to the center of the zone that lies immediately in front of the robot. This configuration makes it possible to obtain a visual field about 220◦ wide (see Fig. 4). In operating conditions, when the ball is not detected by either of the two cameras, the camera from which the image is acquired is switched at each new frame acquisition. When the ball is detected by one of the two cameras, that camera keeps acquiring images until the ball enters the overlap region. At that point the other camera is switched on for acquisition.

The goalkeeper software architecture consists of a multiagent system composed of three agents: a visual perception agent, a goalkeeper agent, and a cooperative agent.

The visual perception agent’s duties are to detect and localize the objects that are in the playing environment and to make the robot aware of its position inside the field. The visual perception agent therefore is essential to three main robot behaviors: object detection and localization, absolute self-localization, which is described in detail in this section, and repositioning. The cooperative agent enables the robot to ‘talk’, using a messaging system, with the other team members and with a remote system from which it receives a start signal at the beginning of each half. Finally, the goalkeeper agent plans the robot’s behavior according to the activity of the other two agents. In particular, it drives the engines and actuators of the robot, and activates the robot behaviors in the different phases of the game.

Fig. 4. The goalie’s binocular vision system.

The goalie’s role and position inside the field strongly influence and constrain its mechanical design and its motion strategy: its movements should not extend beyond the goal boundaries, since it has to stand in front of it to defend it. A RoboCup goalie’s motion control strategy has to satisfy two often counteracting needs. On the one hand, the robot must react promptly to sudden changes, i.e., it must exhibit a mostly reactive behavior. The ball speed is relatively high and most computation time has to be dedicated to image processing tasks. For this reason, the specifications for the algorithms implemented in the goalkeeper agent are that the input data, which encode ball position, speed and direction, must be concise and output commands must be as direct and immediately executable as possible. On the other hand, the goalkeeper agent must keep the robot in front of its goal and well centered with respect to the direction from which the ball is most likely to come, at any moment of a game. To do so, the robot needs to be aware not only of its position with respect to the field but also of its orientation with respect to the goal line. Data from the odometers are too unreliable for this purpose. Frequent contacts with other robots in the penalty area and the rapid movements needed to follow ball deviations can make wheels skid and cause “false movements” to be recorded. The landmark-based self-localization method, implemented in the visual perception agent, has been designed to overcome such a drawback.

The actions performed by the goalkeeper agent are hierarchically organized at different levels of priority.

The highest priority is attributed to “active defense” actions, i.e., those performed to intercept or control the ball. During such actions, the visual perception agent processes images to detect the ball. The other actions, preparatory to the “active” robot behaviors, including the self-positioning maneuver, are cyclically performed during “inactive” game periods, that is, when the ball is out of the goalie’s region of operation.

The self-localization process uses the white lines of the penalty area as artificial landmarks and draws its decisions by comparing the two views coming from the left and right cameras of the binocular system. Such a comparison makes it possible to compute the actual robot position inside the penalty area, and to plan and perform the right repositioning maneuver when needed. When the self-localization process is running, the image pair received from the two cameras is first processed to detect the lines delimiting the penalty area. Segmentation is performed by a two-step algorithm. In the first step, color segmentation, the image depth is reduced by labeling each pixel as belonging to one of six classes, corresponding to the six possible colors that can be found in the RoboCup environment. This is a pre-processing step that is always performed after image acquisition, whatever section of the code of the goalkeeper agent is going to use it. In the second step, a simple edge extraction filter is applied, whose results are then AND-ed with the binary output obtained by setting all non-white pixels to 0. In this way, only the edges of the white areas can be extracted and the parameters of the straight-line equations can be computed. The limited number of pixels involved and its possible real-time implementation [18] make the Hough transform appropriate to perform such a task. Moreover, the Hough transform results are expressed in polar coordinates, allowing for an easy computation of the line equations.
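A compact sketch of this two-step extraction is given below in Python with OpenCV. It is illustrative only: the white mask stands in for the full six-colour classification, and the thresholds, the Canny edge detector and the Hough parameters are our own assumptions rather than the authors' calibrated pipeline.

import cv2
import numpy as np

def detect_penalty_area_lines(bgr_image):
    """Extract candidate white-line parameters in Hough polar form (rho, theta)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Step 1 (simplified): label "white" pixels (low saturation, high value).
    white_mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))

    # Step 2: edge extraction, then keep only edges of white areas.
    edges = cv2.Canny(bgr_image, 50, 150)
    white_edges = cv2.bitwise_and(edges, white_mask)

    # Hough transform: each line is returned as (rho, theta), i.e. the polar
    # form x cos(theta) + y sin(theta) = rho of the straight-line equation.
    lines = cv2.HoughLines(white_edges, 1, np.pi / 180, 80)
    return [] if lines is None else [tuple(l[0]) for l in lines]

Each returned (rho, theta) pair directly gives a straight-line equation, which is the polar form exploited by the geometric reasoning described next.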


The parameters of the lines obtained as described above allow the computation of three features that are compared with a set of references that correspond to a situation in which the robot is in its “default position”, that is, standing in the middle of the penalty area, in front of the goal at a preset distance and parallel to the front line. Such features are the coordinates of the point Pi where the two views of the front line of the penalty area apparently intersect, and the apparent angle α between the two views of the lines. If the goalie is parallel to the front line of the penalty area, then the left and right views of the line are symmetric (as, for example, in Fig. 4). If the goalie is rotated and/or shifted sidewards with respect to the middle of the penalty area, such symmetry is lost. The analysis of the asymmetry of the two views of the front line makes it possible to estimate the robot’s absolute position with respect to the penalty area and, thus, to the reference coordinate system of the playing field.

Fig. 5. Penalty area view as seen by the robot when in a barycentric position parallel to the reference line (default position) (a), in the center of the penalty area in front of the right post (b), standing on the front line, centered with respect to the goal (c), turned left through about 45◦ (d). Below each view is the corresponding diagrammatic representation.

The four cases shown in Fig. 5 illustrate how the geometric reasoning is performed. The upper parts of the four figures show the views from the two cameras as acquired when the robot is in the default position and in three different displaced positions inside the penalty area. In the lower parts of the figures, the corresponding diagrammatic representation of the views is reported. Such figures report the results of line segmentation in a reference system having the y-axis corresponding to the median axis of the area in which the fields of view of the two cameras overlap, and the x-axis coincident with the bottom of the frames. A “horizontal flip” operation is applied to the view of the line taken from the right camera and the two views are superimposed to simplify the computation and the visualization of the parameters Pi(xi, yi) and α.

When the robot is parallel to the front line, centered on the median vertical axis of the penalty area, Pi lies on the x- or y-axis and α ≈ 0, as shown in Fig. 5(a) and (c). When the robot moves forward, xi becomes negative (with yi = 0), as shown in Fig. 5(c); when it moves backwards, yi becomes positive (with xi = 0). A lateral shift of the robot results in a non-zero value for α (Fig. 5(a) and (b)). Such a small divergence is due to the distortion effects introduced by the cameras, but it is sufficient to compute the lateral shift, to which it is roughly proportional. Any rotation of the robot results in a composition of the previously described conditions. Fig. 5(d) shows an example in which the robot is rotated through about 45◦.
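A sketch of how Pi and α could be computed from the two Hough lines is shown below. The mirroring convention (a flip about the vertical image axis) and the shared coordinate frame are assumptions made for illustration; the paper's own diagrammatic frame places the y-axis on the median axis of the overlap region.

import numpy as np

def compare_front_line_views(left_line, right_line, image_width):
    """Compute the apparent intersection point Pi and the apparent angle alpha
    between the left view and the horizontally flipped right view of the
    penalty-area front line.  Lines are in Hough polar form (rho, theta),
    i.e. x cos(theta) + y sin(theta) = rho."""
    def to_abc(rho, theta):
        # Convert x cos(theta) + y sin(theta) = rho to a x + b y + c = 0.
        return np.cos(theta), np.sin(theta), -rho

    def mirror_x(rho, theta, width):
        # Horizontal flip x -> width - x of the line (rho, theta).
        return rho - width * np.cos(theta), np.pi - theta

    a1, b1, c1 = to_abc(*left_line)
    a2, b2, c2 = to_abc(*mirror_x(*right_line, image_width))

    # Apparent angle between the two (undirected) line views.
    diff = (np.arctan2(b1, a1) - np.arctan2(b2, a2)) % np.pi
    alpha = min(diff, np.pi - diff)

    # Apparent intersection point Pi by Cramer's rule.
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None, alpha          # parallel views: symmetric (default) situation
    xi = (b1 * c2 - b2 * c1) / det
    yi = (a2 * c1 - a1 * c2) / det
    return (xi, yi), alpha

The returned Pi and α can then be compared against the reference values of the default position, exactly as described above.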

4. Monocular vision and vision-based Monte Carlo localization

Monte Carlo localization (MCL) has recently become a very popular framework for solving the self-localization problem in mobile robots [15,16]. This method is very reliable and robust with respect to noise, especially if the robots are equipped with laser range finders or sonar sensors. However, in environments like the popular RoboCup domain [22], providing a laser scanner for each robot is difficult or impossible and sonar data is extremely noisy due to the highly dynamic environment. Furthermore, distance sensors will ultimately be of little value, because at some time in the future RoboCup fields will not have walls any more. Thus, a vision-based MCL approach exploiting visual features extracted from the robot’s unidirectional camera is of great interest.

4.1. Markov localization

The theoretical foundation for MCL is provided by Markov localization [15], where the position of a robot is estimated by computing the probability distribution Bel(l) over all possible positions l = 〈x, y, θ〉 in the environment. The Markov algorithm starts with an initial distribution Bel(l) that depends on the initial knowledge of the robot’s position. As the robot operates, two kinds of update steps are iteratively applied to incrementally refine Bel(l):

1. Belief projection across robot motion. When the robot moves, the belief about its position is projected across actions. A motion model P(l|l′, m) is used to account for behavioral uncertainty and to predict the likelihood of the robot being in position l when the robot executed motion command m and was previously in position l′. Here, it is assumed that the new position depends only on the previous position and the movement (Markov property). Using the motion model, the distribution of the robot’s position belief Bel(l) can be updated according to the commonly used formula for Markov chains [9]: Bel(l) = ∫ P(l|l′, m) Bel(l′) dl′.

2. Integration of sensor input. The data obtained from the robot’s sensors are used to update the belief Bel(l). For this step, an observation model P(o|l) must be provided which models the likelihood of an observation o (a sensor reading) given that the robot is at position l. Bel(l) is then updated by applying Bayes’ rule as follows: Bel(l) = α P(o|l) Bel(l), where α is a normalization factor ensuring that Bel(l) integrates to 1.

Markov localization provides a mathematical framework for solving the localization problem. Unlike methods based on Kalman filtering [25], it lends itself easily to multimodal distributions. However, implementing Markov localization on a mobile robot efficiently is a non-trivial task.

4.2. Monte Carlo localization

MCL [16] is an efficient implementation of the Markov localization approach (see, for example, [15]). The continuous and infinite probability distribution Bel(l) is represented by a discrete set of N samples S = {s1, . . . , sN}. Each sample si = 〈li, pi〉 consists of a robot position li and a weight pi. The weight corresponds to the likelihood of li being the robot’s correct position: pi ≈ Bel(li) [31]. Furthermore, we assume that the weights are normalized, i.e., ∑i pi = 1.


The algorithm for MCL is adopted from the general Markov localization framework. Initially, a set of samples reflecting a priori knowledge about the robot’s position is generated. During robot operation, the following two kinds of update steps are iteratively executed:

1. Sample projection across robot motion. A new sample set S is generated from a previous set S′ by applying the motion model. For each sample 〈l′, p′〉 ∈ S′, a new sample 〈l, p′〉 is added to S, where l is randomly drawn from the density P(l|l′, m), which takes into account the robot’s motion uncertainties such as drift and translational and rotational errors.

2. Observation update and weighted resampling. All samples are re-weighted by incorporating the sensor data o and applying the observation model P(o|l′). Given a sample 〈l′, p′〉, the new weight p is given by p = α P(o|l′) p′. The weights for the samples in S′ provide a probability distribution, which is used to construct a new sample set S by randomly drawing samples from S′ using the distribution given by the weights (a code sketch of this update loop is given below).
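The following Python sketch shows one way the two update steps could be wired together for a particle set. The motion and observation models are passed in as caller-supplied functions; their names, and the common convention of resetting the weights to uniform after resampling, are assumptions for illustration rather than the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def mcl_step(samples, weights, motion_cmd, observation,
             motion_model_sample, observation_likelihood):
    """One iteration of the sample-based update described above.
    samples: (N, 3) array of poses (x, y, theta); weights: (N,) array summing to 1.
    motion_model_sample(pose, m) draws a successor pose from P(l | l', m);
    observation_likelihood(o, pose) evaluates P(o | l').  Both are hypothetical
    caller-supplied callbacks."""
    n = len(samples)

    # 1. Sample projection across robot motion: draw each new pose from the
    #    motion model, keeping the previous weight.
    projected = np.array([motion_model_sample(s, motion_cmd) for s in samples])

    # 2. Observation update: re-weight every sample with the observation model,
    #    then normalize so that the weights again sum to one.
    new_w = weights * np.array([observation_likelihood(observation, s)
                                for s in projected])
    total = new_w.sum()
    new_w = new_w / total if total > 0 else np.full(n, 1.0 / n)

    # Weighted resampling: draw N samples (with replacement) according to the
    # new weights; this variant resets the weights to uniform afterwards.
    idx = rng.choice(n, size=n, p=new_w)
    return projected[idx], np.full(n, 1.0 / n)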

MCL has several interesting properties; for instance, it can be implemented as an anytime algorithm [10], and it allows one to dynamically adapt the sample size (see [15] for a detailed discussion of MCL properties).

4.3. Vision-based Monte Carlo localization

There are mainly three cases where MCL based on distance sensor readings cannot be applied: (i) if distance sensors such as laser range finders or sonars are not available; (ii) if the readings obtained by these sensors are too noisy and unreliable, e.g. in a highly dynamic environment; (iii) if distance sensors are useless for self-localization, e.g. an open field with no or few objects detectable by the sensors. In all these cases, other sensory information must be used for localization. In RoboCup, a natural candidate is the visual channel, because many robots include cameras as standard equipment for object detection. An example of using visual information for MCL has been provided by Dellaert et al. [11]. They use information on an environment’s lighting distribution, which is continuously obtained by a camera oriented towards the ceiling. However, an interesting question is whether the MCL approach still works when observations are sparse and sporadic, as is the case in our soccer robots.

4.3.1. Feature-based modeling

The sensor update mechanism needs a sensor model P(o|l) which describes how likely a sensor reading o is at a given robot location l. This probability is derived from the distance dist(o, ô) between the sensed measurement o and a predicted measurement ô, which is estimated from location l and an environment model. As it is not possible to efficiently estimate complete camera images and then compute image distances for hundreds of samples, we use a feature-based approach. After evaluating various feature detectors on our robots (see [14]), we decided to use the following features: (1) goal posts of the blue and yellow goals, (2) corners, and (3) distances to field edges. Taking into account the characteristics of the field and the robot’s camera system, at most eight environment features can be detected at any time. The average number of detectable features is actually even lower.

4.3.2. Feature detection

In a first step the camera image is color-segmented in order to simplify the feature detection process. Based on the color-segmented image, we apply different kinds of filters to detect desired color discontinuities. The goal post detector detects a horizontal white–blue or a white–yellow transition for the blue or yellow goal post, respectively (see the left image in Fig. 6). The corner detector searches for horizontal green–white–green transitions in the image (middle image in Fig. 6). The distance estimator estimates the distance to the field edges based on detected vertical green–anything transitions in the image. We select four specific columns in the image for detecting the field edges (rightmost image in Fig. 6).

Fig. 6. Detection of post, corner and edge features.


Fig. 7. Heuristics for estimating the probabilities P(fi|l).

4.3.3. Weight update

Let the sensor data o be a vector of n features f1 · · · fn. If we assume that the detection of features depends solely on the robot’s position and does not depend on the detectability of other features, then the features are independent and we can conclude P(o|l′) = P(f1 · · · fn|l′) = P(f1|l′) · · · P(fn|l′). Note that n may vary, i.e., it is possible to deal with varying numbers of features. The sensor model P(fi|l) describes how likely it is to detect a particular feature fi given a robot location l. In our implementation, this sensor model is computed by comparing the horizontal position of the detected feature with an estimated position as determined by a geometrical world model. The distance between the feature positions is mapped to a probability estimate by applying a heuristic function as illustrated in Fig. 7. The camera image shows three detectable features: a corner and two goal posts. The same features are expected, but estimated at slightly different positions. The application of these heuristics to the examples used in Fig. 6 is illustrated in Fig. 8. Probabilistic combination of evidence for several features yields significantly better results, as convincingly demonstrated by the rightmost image. Fig. 8 also illustrates various properties of our approach. The shape of the function causes all samples with comparatively high angular error to be drastically down-valued and successively sorted out. Thus, detecting a single goal post will reshape the distribution of the sample set such that mostly locations that make it likely to see a goal post in a certain direction will survive in the sample set.

Fig. 8. Probability distributions P(fi|l) for all field positions l (with fixed orientation) given the detection of a particular feature fi and feature combination. From left to right: goal posts, corners, distance measurements, and all three combined.
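A minimal sketch of this weight update is given below. The Gaussian-like shape used for the per-feature heuristic and its width, as well as the world-model callback, are our own assumptions; the paper only specifies that the horizontal (angular) error of each feature is mapped to a probability and that the per-feature values are multiplied.

import numpy as np

def feature_likelihood(expected_bearing, measured_bearing, sigma=np.deg2rad(10)):
    """Heuristic mapping from the horizontal (bearing) error of one detected
    feature to a probability, in the spirit of Fig. 7.  The shape and width
    sigma are assumptions, not the authors' exact heuristic."""
    err = np.arctan2(np.sin(measured_bearing - expected_bearing),
                     np.cos(measured_bearing - expected_bearing))
    return np.exp(-0.5 * (err / sigma) ** 2)

def observation_likelihood(detected_features, pose, world_model):
    """P(o|l) as the product of independent per-feature likelihoods P(fi|l).
    world_model(pose, feature_id) is a hypothetical callback returning the
    bearing at which feature_id should appear from the given pose."""
    p = 1.0
    for feature_id, measured_bearing in detected_features:
        p *= feature_likelihood(world_model(pose, feature_id), measured_bearing)
    return p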

5. Evaluation and discussion

The algorithm described in Section 2.1 was implemented and evaluated on the goalkeeper robot of the Attempto team Tübingen.

Calibration. Fig. 9 presents a derived mapping of the position of pixels on a radial line in the camera image where the type classification changes (e.g. from FIELD to WALL) against the distance to this point in the environment. Two different methods are applied for generating an appropriate mapping. If a robot for whose camera system the mapping is to be obtained is additionally equipped with a laser range finder, the data from the laser range finder can be used as distance reference values. The second method is used for a robot without a laser range finder. It is put at a well-defined position within a modeled environment. This permits the prediction of distances from the given position to objects (walls, goals) in the environment, which are then used as reference.

Fig. 9. The mapping of the position of a pixel on a radial line against the distance to the appropriate point on the field.

Fig. 10. Results for position at eight different locations within a RoboCup field.

Self-localization. In the experiment (Fig. 10), the robot was placed in eight different positions (a–f) pointing to the blue goal. At each position 1000 position estimates were taken. The origin of the coordinate frame coincides with a corner of the RoboCup field which is defined by the intersection of the blue goal line and the left field boundary, with respect to the connecting line from the blue to the yellow goal. That means that the blue goal line runs from [1500, 0] to [3000, 0] in this coordinate frame, since the experiments were carried out in a field setup corresponding to the RoboCup’98 rules. For each position, the mean and the covariance matrices were calculated over the 1000 position estimates. For the following it is assumed that the estimates can be modeled by a normal distribution. The ellipses in the figures represent a constant Mahalanobis distance from the center of these normal distributions. As can be seen from the results, the described algorithm supplies sufficiently accurate position estimates within a certain area in front of a goal. That the width of the distributions in the x-direction is significantly larger than the width in the y-direction is a characteristic of the algorithm. As described in Section 2.1, this algorithm tries to fit a line to the set of points which are identified as representing the blue goal. The length of this line is often shorter than the goal width, due to the characteristics of the averaging process at the borders of the point set. Since there is a lack of other information about the alignment in the x-direction, the algorithm simply assumes that the midpoint of the fitted line coincides with the midpoint of the goal line. This inaccuracy in the x-direction alignment leads to the observed asymmetric distributions.

Fig. 11. Example of the goalie’s maneuver. Left: scheme of the maneuver; right: three moments (a–c) of the maneuver sequence, as seen from outside and from the goalkeeper’s left and right cameras.
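For reference, the statistics behind the Mahalanobis-distance ellipses of Fig. 10 can be reproduced as in the sketch below; the particular distance value plotted is not stated in the paper, so the one used here is an arbitrary assumption.

import numpy as np

def error_ellipse(position_estimates, mahalanobis_distance=2.0, n_points=100):
    """Given the (x, y) position estimates collected at one test location,
    compute their mean and covariance and return points on the ellipse of
    constant Mahalanobis distance."""
    est = np.asarray(position_estimates, float)
    mean = est.mean(axis=0)
    cov = np.cov(est, rowvar=False)

    # Points at Mahalanobis distance k satisfy (p - mean)^T C^-1 (p - mean) = k^2;
    # they are obtained by mapping the unit circle through the Cholesky factor of C.
    chol = np.linalg.cholesky(cov)
    angles = np.linspace(0.0, 2.0 * np.pi, n_points)
    unit_circle = np.column_stack([np.cos(angles), np.sin(angles)])
    return mean + mahalanobis_distance * unit_circle @ chol.T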

The self-localization method described in Section 3 has been implemented and used on the robot goalkeepers (TinoZoff and Galavrón) that participated in RoboCup 1999 and 2000 as members of ART. 1

In both robots, landmark detection is used not only for self-localization during “active” play situations (i.e., when the ball is approaching or is in the proximity of the goal), but also for repositioning, which may occur when the robot is not actively engaged in a playing situation. In such an “inactive” game situation, the goalkeeper first tries to locate the front line of the goalkeeper area. In the typical case the line is found and the robot position can be computed as described. The basic repositioning maneuver can be performed in this situation. It consists of three steps: firstly, a 45◦ movement towards or away from the line is performed, depending on the “vertical” offset with respect to the default position; secondly, once the right vertical position is reached, a rotation is performed to align the robot with the line. Finally, the horizontal offset from the default position is compensated by a sidewards movement which completes the maneuver (see Fig. 11).

1 ART, Azzurra Robot Team, a joint research effort of the Universities of Genoa, Padova, Parma, Roma “La Sapienza” and Politecnico di Milano, supported by “Consorzio Padova Ricerche”, is the Italian representative that takes part in the RoboCup F-2000 events (see http://robocup.ce.unipr.it for more details).

When the goalkeeper has gone out of the goalkeeper area, so that the line is not visible, the other main landmarks of the RoboCup environment are used to roughly estimate the robot’s rotation and horizontal displacement with respect to the default position. In particular, detecting the goalie’s own goal is enough for this purpose. When the robot is out of its area without being rotated, its own goal is not visible, and the center line can be used, using geometrical reasoning in the same way as for the goalkeeper area front line, to understand if the robot is located to the right or to the left of its target position.

The results of this rather fast self-localization step allow the robot to decide in which direction it should head during the first segment of its backward movement towards the goal. Because of its kinematics, the robot cannot move straight forward or backwards; long forward and backward movements are obtained by means of subsequent pairs of short movements along directions which deviate alternately by 45◦ (or −45◦) from the desired direction, with a “zig-zag” trajectory. Each segment of the “zig-zag” is in fact the result of a rotation, followed by a shift sidewards [1].

Fig. 12. Four rounds of the Sparrow-99 along a rectangular path in the RoboCup field (odometry path, corrected path using goal post features, distance features, and all features, respectively).

When the backward movement has made the goalkeeper area front line once more visible, the robot is in the same situation as in the basic case just described and the corresponding repositioning maneuver can be applied.

The MCL method was implemented and evaluated on a Sparrow-99 robot, a custom-built soccer robot equipped with a unidirectional camera system.

Experiment 1: Number of features. In this experiment, we show that the robot can localize robustly and that the accuracy can be improved by adding more features. A Sparrow-99 robot was placed in one corner of the field (the right top circle of Fig. 12). In order to have an accurate reference path, we moved the robot by hand along a rectangular trajectory indicated by the dots. The first image in Fig. 12 displays the odometry data when moving four rounds and shows the drift error that occurs. The second image displays the corrected trajectory, which does not drift away. The third image displays the first round corrected by the localization algorithm using only the goal posts as features. In the fourth image we can see a more accurate path found using all three feature types.

Fig. 13. Effects of using different sample numbers (50, 100, 150, 1000 and 5000 samples). Left: average localization errors. Right: average and maximum errors.

Experiment 2: Number of samples. When implementing sample-based localization, it is important to have an idea of how many samples one needs. Obviously, a small number of samples is preferred, since the computational effort increases with the number of samples. On the other side, an appropriate number of samples is needed in order to achieve the desired accuracy. In this experiment, we moved four rounds exactly as in the previous experiment and used five different numbers of samples: 50, 100, 150, 1000, and 5000. In the left image of Fig. 13, the average localization errors for the different sample numbers are shown. As expected, the error decreases when more samples are used. On the other hand, the accuracy does not improve much when going from 150 samples up to 1000–5000 samples. The right image of Fig. 13 shows both the average error and the maximum error of the same five runs (50, 100, 150, 1000 and 5000 samples). Again, one can see that above 150 samples the accuracy hardly increases.

Fig. 14. Sparsity and sporadicality: number of landmark features (left) and distance features (middle) detected during a sample run in a real field, and the resulting standard deviations of samples in the sample set.

Experiment 3: Sparsity and sporadicality. The third experiment illustrates the performance of vision-based MCL under playing conditions, i.e., how MCL deals with sparse and sporadic features (see Fig. 14). The figures show that even over extended periods (30 cycles) where no landmark features and very few or no distance features can be detected, the robot can still maintain a sufficiently precise location estimate. This is in accordance with the common wisdom that odometry is reliable for narrowly limited motion sequences. MCL ensures that the robot quickly recovers and computes better position estimates if features can be detected again.

6. Conclusions

The RoboCup competition has proved to be an interesting research platform for the study of autonomous intelligent systems which can interact with the real physical world. In particular, RoboCup is especially challenging for vision systems, which need to combine efficiency and accuracy to deal with the real-time requirements of the dynamic and rapidly changing environment in which the competition takes place. All three systems described in the paper were extensively tested on the field during the latest editions of RoboCup.

The Attempto team Tübingen [29] implemented the algorithm described in Section 2 for the goalkeeper robot of the team. The images from an omnidirectional camera were utilized for distance measurements to certain points on the field. This information was used with self-localization algorithms which were originally developed for laser range finders. Even with the poorer resolution of the distance measurement with respect to a laser range finder, the position estimates of the algorithm were sufficiently accurate.

The goalkeeper of the ART team relied on the landmark-based localization described in Section 3. The robustness and reliability of the system played a key role in the performance of the goalkeeper that allowed the team to reach the second place in RoboCup 1999 and in the EuroRoboCup 2000 tournament.

The Ulm Sparrows team used the Monte Carlo approach for vision-based localization of a player. The onboard camera was used for localization purposes in addition to object recognition tasks. As a consequence, sensor input to update the robot’s belief about its position was low-dimensional and sporadic. Nevertheless, the experimental evaluation demonstrated that MCL works well even under these restrictive conditions. Even with a small number of detected features, the localization results are usable. However, by increasing the number of visual features the accuracy improves significantly.

Although the three approaches to vision-based localization were designed for the RoboCup competition, they can be adapted quite easily to other semi-structured indoor or outdoor environments. In indoor environments several natural landmarks that can be used for localization or navigation can be found; the same holds for some outdoor environments in which landmarks are commonly present. The geometric reasoning on images acquired by the binocular vision system described in Section 3 can be applied, for example, to automatic lane detection in computer-assisted vehicle driving, or to wall-following in corridor-like indoor environments. In those cases an angular disparity between the scenes observed by the two cameras can be measured between the two edges of a road or road lane, or between the boundaries between the floor and the walls, respectively.

Vision-based systems, despite not usually being the top performers among robot sensor systems, are generally applicable and very flexible. All three approaches demonstrate that vision-based systems such as the ones described in the paper can in most cases be the only source of information for self-localization and navigation of autonomous robots. This is particularly important when the use of other, more accurate but less flexible, sensors imposes restrictive requirements that are not met by the environment in which robots operate.

Acknowledgements

The authors from the University of Ulm would like to thank all the student members of the RoboCup team The Ulm Sparrows for their great commitment and support, and G. Palm, the head of the department, for his encouragement, many valuable discussions, and for providing the necessary resources. Financial support for the Ulm team was provided by the University administration and the University Society Ulm.

The projects developed at the University of Parma that are described in this paper have been partially funded by ENEA (Italian Organization for Alternative Energy) under the SENSI (Intelligent Sensors) grant, ASI (Italian Space Agency) under the “Hybrid Vision System for Long Range Rovering” grant, and CNR (Italian National Research Council) under the “ART — Azzurra Robot Team” grant.

References

[1] G. Adorni, S. Cagnoni, M. Mordonini, Landmark-based robot self-localization: A case study for the RoboCup goal-keeper, in: Proceedings of the International Conference on Information Intelligence and Systems, IEEE Computer Society Press, Silver Spring, MD, 1999, pp. 164–171.

[2] G. Adorni, M. Gori, M. Mordonini, Just-in-time sign recognition in image sequences, Journal of Real-Time Imaging 5 (1999) 95–107.

[3] N. Aihara, H. Iwasa, N. Yokoya, H. Takemura, Memory-based self-localization using omnidirectional images, in: Proceedings of the 14th International Conference on Pattern Recognition, IEEE Computer Society Press, Silver Spring, MD, 1998, pp. 1799–1803.

[4] Y. Arai, T. Fuji, H. Asama, H. Kaetsu, I. Endo, Local communication-based navigation in a multirobot environment, Advanced Robotics 13 (3) (1999) 233–234.

[5] S. Atiya, G.D. Hager, Real-time vision-based robot localization, IEEE Transactions on Robotics and Automation 9 (6) (1993) 352–358.

[6] J. Borenstein, H.R. Everett, L. Feng, Navigating Mobile Robots: Sensors and Techniques, A.K. Peters Ltd., Wellesley, MA, 1996.

[7] J. Borenstein, H.R. Everett, D. Wehe, Mobile robot positioning: Sensors and techniques, Robotics and Autonomous Systems 14 (4) (1997) 231–249.

[8] D. Burschka, S.A. Blum, Identification of 3D reference structures for video-based localization, in: Computer Vision — ACCV’98, Third Asian Conference on Computer Vision, Proceedings Vol. 1, Springer, Berlin, 1997, pp. 128–135.

[9] K. Chung, Markov Chains with Stationary Transition Probabilities, Springer, Berlin, 1960.

[10] T. Dean, M. Boddy, An analysis of time-dependent planning, in: Proceedings of AAAI-88, St. Paul, MN, AAAI Press, Menlo Park, CA, 1988, pp. 49–54.

[11] F. Dellaert, W. Burgard, D. Fox, S. Thrun, Using the condensation algorithm for robust, vision-based mobile robot localization, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1999, pp. 588–594.

[12] H.S. Dulimarta, A.K. Jain, Mobile robot localization in indoor environment, Pattern Recognition 30 (1) (1997) 99–111.

[13] T. Einsele, Real-time self-localization in unknown indoor environment using a panorama laser range finder, in: Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovative Robotics for Real-world Applications, IROS’97, Vol. 2, IEEE Press, New York, 1997, pp. 697–702.

[14] S. Enderle, M. Ritter, D. Fox, S. Sablatnög, G. Kraetzschmar, G. Palm, Soccer-robot localization using sporadic visual features, in: Proceedings of the IAS-6 International Conference on Intelligent Autonomous Systems, Venice, Italy, 2000, pp. 959–966.

[15] D. Fox, Markov localization: A probabilistic framework for mobile robot localization and navigation, Ph.D. Thesis, University of Bonn, Bonn, Germany, December 1998.

[16] D. Fox, W. Burgard, F. Dellaert, S. Thrun, Monte Carlo localization: Efficient position estimation for mobile robots, in: Proceedings of AAAI’99, Orlando, FL, AAAI Press, Menlo Park, CA, 1999, pp. 343–349.

[17] M.O. Franz, B. Schölkopf, H.A. Mallot, H.H. Bülthoff, Where did I take that snapshot? Scene-based homing by image matching, Biological Cybernetics 79 (1999) 191–202.

[18] N. Guil, J. Villalba, E.L. Zapata, A fast Hough transform for segment detection, IEEE Transactions on Image Processing 4 (11) (1995) 1541–1548.

[19] J.S. Gutmann, W. Hatzack, I. Herrmann, B. Nebel, F. Rittinger, A. Topor, T. Weigel, Reliable self-localization, multirobot sensor integration, accurate path-planning and basic soccer skills: playing an effective game of robotic soccer, in: Proceedings of the Ninth International Conference on Advanced Robotics ICAR’99, Japan Robot Association, 1999, pp. 289–296.

[20] M. Kabuka, B. Hussain, Real-time system for accurate three-dimensional determination and verification, IEEE Transactions on Robotics and Automation 6 (1) (1990) 31–43.

[21] K. Kanatani, N. Ohta, Optimal robot self-localization and accuracy bounds, IEICE Transactions on Information and Systems E82-D (2) (1999) 447–452.

[22] H. Kitano, M. Asada, Y. Kuniyoshi, I. Noda, E. Osawa, H. Matsubara, RoboCup: A challenge problem for AI, AI Magazine 18 (1) (1997) 73–85.

[23] D.H. Kite, M. Magee, Determining the 3D position and orientation of a robot camera using 2D monocular vision, Pattern Recognition 23 (8) (1990) 819–831.

[24] M. Magee, J.K. Aggarwal, Robot self-localization using visual reasoning relative to a single target object, Pattern Recognition 28 (2) (1995) 125–134.

[25] P.S. Maybeck, The Kalman Filter: An Introduction to Concepts, Springer, Berlin, 1990, pp. 194–204.

[26] L. Moreno, J.M. Armingol, A. de-la-Escalera, M.A. Salichs, Global integration of ultrasonic sensors information in mobile robot localization, in: Proceedings of the Ninth International Conference on Advanced Robotics, 1999, pp. 283–298.

[27] E.M. Mouaddib, B. Marhic, Geometrical matching for mobile robot localization, IEEE Transactions on Robotics and Automation 16 (5) (2000) 542–552.

[28] S.B. Nickerson, P. Jasiobedzki, D. Wilkes, M. Jenkin, E. Milios, J. Tsotsos, A. Jepson, O.N. Bains, The ARK project: Autonomous mobile robots for known industrial environments, Robotics and Autonomous Systems 25 (1–2) (1998) 83–104.

[29] M. Plagge, R. Günther, J. Ihlenburg, D. Jung, A. Zell, The Attempto RoboCup robot team, in: M. Veloso, E. Pagello, H. Kitano (Eds.), RoboCup-99: Robot Soccer World Cup III, Lecture Notes in Artificial Intelligence, Vol. 1856, Springer, Berlin, 2000, pp. 424–433.

[30] J. Reuter, Mobile robot self-localization using PDAB, in: Proceedings of the 2000 ICRA, Millennium Conference, IEEE International Conference on Robotics and Automation, Vol. 4, IEEE Press, New York, 2000, pp. 3512–3518.

[31] A.F.M. Smith, A.E. Gelfand, Bayesian statistics without tears: A sampling–resampling perspective, American Statistician 46 (2) (1992) 84–88.

[32] S. Suzuki, T. Kato, M. Asada, An application of vision-based learning in RoboCup for a real robot with an omnidirectional vision system and the team description of Osaka University “trackies”, in: M. Asada, H. Kitano (Eds.), RoboCup-98: Robot Soccer World Cup II, Proceedings of the Second RoboCup Workshop, Lecture Notes in Artificial Intelligence, Vol. 1604, Springer, Berlin, pp. 316–325.

[33] G. Weiss, E. von Puttkammer, A map based on laserscans without geometric interpretation, in: Intelligent Autonomous Systems, IOS Press, Amsterdam, 1995, pp. 403–407 (Chapter A).

Giovanni Adorni, formerly with the University of Parma, is a Full Professor at the Department of Communications, Computer and System Science of the University of Genoa. His research interests include artificial intelligence, computer vision, robotics and soft computing. He is a member of AAAI, IAS and AI∗IA.

Stefano Cagnoni is an Assistant Professor at the Department of Computer Engineering of the University of Parma. His research interests include computer vision, pattern recognition, robotics and soft computing. He is a member of IEEE-CS, IAPR and AI∗IA.

Stefan Enderle is a Ph.D. candidate at the Neural Information Processing Department at the University of Ulm. His research interests include sensor interpretation and fusion, map building and self-localization in robotics, and software development for autonomous mobile robots.

Gerhard K. Kraetzschmar is a Senior Research Scientist and Assistant Professor at the Neural Information Processing Department at the University of Ulm. His research interests include autonomous mobile robots, neurosymbolic integration, learning and memory, and robot control and agent architectures. He is a member of IEEE, AAAI, ACM, IAS, and GI.

Monica Mordonini is an Assistant Professor at the Department of Computer Engineering of the University of Parma. Her research interests include computer vision, robotics and soft computing.


Michael Plagge is a Ph.D. candidate at the Department of Computer Architecture at the University of Tübingen. His research interests include multisensor fusion, fast and robust world modeling, and tracking of dynamic objects with autonomous mobile robots.

Marcus Ritter was an M.S. student in Computer Science at the University of Ulm. He is now an Engineer at Wonderbits.

Stefan Sablatnög is a Ph.D. candidate at the Neural Information Processing Department at the University of Ulm. His research interests include spatial representation and reasoning in robotics and software development for autonomous mobile robots. He is a member of IEEE.

Andreas Zell is a Full Professor at the Department of Computer Architecture at the University of Tübingen. His research interests cover a wide range of fields in robotics, neural networks, evolutionary computation and genetic programming, bioinformatics and multimedia-aided learning and teaching. He is a member of IEEE and GI.