
Active Perception and Scene Modeling by Planning with Probabilistic 6D Object Poses

Robert Eidenberger and Josef Scharinger

Abstract— This paper presents an approach to probabilistic active perception planning for scene modeling in cluttered and realistic environments. When dealing with complex, multi-object scenes with arbitrary object positions, the estimation of 6D poses including their expected uncertainties is essential. The scene model keeps track of the probabilistic object hypotheses over several sequential sensing actions to represent the real object constellation.

To improve detection results and to tackle occlusion problems, a method for active planning is proposed which reasons about model and state transition uncertainties in continuous and high-dimensional domains. Information-theoretic quality criteria are used for sequential decision making to evaluate probability distributions. The probabilistic planner is realized as a partially observable Markov decision process (POMDP).

The active perception system for autonomous service robots is evaluated in experiments in a kitchen environment. In 80 test runs the efficiency and satisfactory behavior of the proposed methodology are shown in comparison to a random and a step-aside action selection strategy. The objects are selected from a large database consisting of 100 different household items.

I. INTRODUCTION

Service robots - together with their perception systems - should be able to operate in everyday environments. In practice this expectation certainly is not met today. Inaccurate sensing devices, bad classifiers, object occlusions, poor lighting, or ambiguities of object models limit the detection capabilities. Active perception approaches aim at deliberately utilizing appropriate sensing settings to gain more information about the scene.

In this paper we present an approach to probabilistic scene modeling and sensor action planning. Object hypotheses represented as probability distributions are kept in the scene model. A hypothesis contains object class and pose information. The 6D pose representation describes the position and orientation of the object.

The proposed active planning methodology uses the POMDP concept to find an efficient strategy to satisfactorily recognize all objects in a scene. The predicted effects of future sensing actions on the scene model are compared on the basis of the expected information gain of state distributions. The next best sensing action is determined and executed to improve the scene knowledge.

The scene analysis described in this paper is a component of a service robot which autonomously performs manipulation tasks such as grasping objects, where precise knowledge about the object setting is required. Figure 1 shows the robot and some of the used household items.

R. Eidenberger is with Information and Automation Technologies, Siemens AG, 81739 Munich, Germany ([email protected])

J. Scharinger is with the Department of Computational Perception, Johannes Kepler University Linz, 4040 Linz, Austria ([email protected])

Fig. 1. The service robot, operating in a complex environment.

The outline of the paper is as follows. The next section gives an overview of current approaches to active perception with a focus on high-dimensional continuous pose modeling. Section III describes the concepts of scene modeling with probabilistic 6D poses. In Sections IV and V the active perception architecture and its realization are detailed. The paper closes with experiments in Section VI which demonstrate a full perception loop and compare the outcomes of three planning strategies for various scenes.

II. RELATED WORK

Literature provides many approaches to active recognition and next best view planning. Surveys on perception planning by Chen [1] and Dutta Roy [2] list the current state of the art. Here we discuss active perception approaches with emphasis on probabilistic continuous pose modeling and their applicability to complex scenarios, according to their relevance for this paper.

The research on active perception varies with respect to the methods of evaluating sensor positions, the strategies for action planning, and the field of application. It mainly aims at fast and efficient recognition of similar and ambiguous objects, but does not cope with multi-object scenarios and cluttered environments. Forssen et al. [3] combine object recognition with an attention mechanism for obstacle avoidance to efficiently acquire scene information. This approach targets rapid identification of objects in cluttered environments, but neither models pose uncertainties nor takes object occlusions into account. Lanz [4] describes a Bayesian framework for robust multi-object tracking in the presence of occlusions. He takes object visibility into account when computing the observation likelihood, but uses discrete state spaces.

Many authors claim the possible applicability of their active approaches to continuous, high-dimensional domains. Chli and Davison [5] show 2D active feature matching in continuous domains. Flandin and Chaumette [6] present an approach to active 3D object reconstruction using spatial voxel representations. However, representations for orientations are not required there.

The usage of 6D continuous state spaces for representing object poses requires adequate probabilistic representations to model pose uncertainties. Due to the theoretical and computational complexity of using multivariate distributions for describing orientations and positions, the use of sampled distributions in the form of particle representations seems reasonable. However, their suitability for high-dimensional problems is questionable because of the great number of particles needed. The practical applicability is generally only shown in simplified experiments with low-dimensional state spaces. Denzler and Brown [7] and Derichs et al. [8] formulate their solutions for continuous domains, but only demonstrate their active object class identification methodology and neglect pose determination. Ma and Burdick [9] present a methodology for 6-DOF pose estimation and tracking to actively recognize moving objects. However, the relevant aspects of probabilistic modeling are not detailed. In their experiments only a planar problem is considered, whereby 6D pose modeling is avoided.

In this work, scenarios are considered which contain arrangements of several objects belonging to different as well as to alike classes. Their pose uncertainties are represented by 6D multivariate probability distributions.

III. PROBABILISTIC SCENE MODELING

Each real item which is considered for the recognition process is modeled by its characteristics such as geometry information or textures. All object models, also denoted as object classes, build up the object database. In a scene, various instances of these object classes might appear. Thus, the state space consists of n object hypotheses, each represented by the tuple q_i = (C_i, φ_i), where C_i describes the discrete class representation and φ_i the continuous pose. All n object tuples build up the joint state q = (q_1, q_2, ..., q_n). The entities are assumed to be mutually independent. Since q_i comprises both discrete and continuous dimensions, it is further considered as a mixed state. The dimension of the state space varies as the perception process proceeds with recognizing new object instances.
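For illustration, such a mixed-state hypothesis could be held in a small container pairing a class histogram with a Gaussian mixture over the 6D pose; the field and type names below are our own, not taken from the paper's implementation.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectHypothesis:
    """Mixed state q_i = (C_i, phi_i): a discrete class distribution
    plus a continuous 6D pose distribution (Gaussian mixture)."""
    class_probs: dict[str, float]                          # P(C_i), histogram over classes
    weights: list[float] = field(default_factory=list)     # mixture weights w_ik
    means: list[np.ndarray] = field(default_factory=list)  # 6D means mu_ik
    covs: list[np.ndarray] = field(default_factory=list)   # 6x6 covariances Sigma_ik

# the joint state is the collection of mutually independent hypotheses
SceneModel = list[ObjectHypothesis]
```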

The following section details the modeling of the pose and the probabilistic distributions over object hypotheses.

A. Probabilistic 6D Pose Modeling

In order to represent real-world environments, a six-dimensional pose representation is required to model the position and orientation of an object. In [10] we discussed several approaches to probabilistic 6D pose representation. In this work we decided to use a Rodrigues vector (or axis/angle) representation to describe the orientation because of its good applicability and simple comprehensibility.

The first three components of the pose vector φ_i = (x, y, z, θe_1, θe_2, θe_3) [11] describe the translational position. The orientation is represented by the axis and the angle: (e_1, e_2, e_3)^T denotes the unit vector of the rotation axis, and the length of the axis encodes the rotation angle θ ∈ ]0; 2π] about which the coordinate frame is rotated. However, the Rodrigues vector representation has drawbacks such as duality and singularities, which cause problems for its mathematical processing. Duality means that one specific orientation can be described by two different Rodrigues vectors. This results from the fact that a rotation of θ radians about an axis equals a rotation of 2π − θ radians about an axis pointing in the opposite direction. This effect is dealt with by setting a working point with respect to the orientation, which possibly requires using the pose representation with the Rodrigues vector in the opposite direction. A singularity arises at θ = 0 radians.
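A minimal sketch of this duality handling, assuming θ ∈ ]0; 2π] so the dual vector is well defined; the function names are ours, not the authors'.

```python
import numpy as np

def dual_rodrigues(r):
    """The dual of a Rodrigues vector: rotating theta about axis e equals
    rotating 2*pi - theta about -e. Undefined at theta = 0 (singularity)."""
    theta = np.linalg.norm(r)
    return -(2.0 * np.pi - theta) * (r / theta)

def select_working_point(r, r_ref):
    """Of the two equivalent Rodrigues vectors, keep the one closer to the
    working point r_ref so that Gaussians stay locally well behaved."""
    r_dual = dual_rodrigues(r)
    return r if np.linalg.norm(r - r_ref) <= np.linalg.norm(r_dual - r_ref) else r_dual
```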

In the simplest case the pose uncertainty is modeled by a multivariate Gaussian distribution with the pose vector φ_i as mean, where one 3D part consists of the components of the translation vector and the other 3D part of the components of the Rodrigues vector. The formulation of Gaussian distributions over the translational dimensions is simple due to the real, continuous, and infinite domains. For a reasonable modeling in the rotational space, the angular characteristics of periodicity, duality, and singularities have to be considered. The probability density function of the rotational space would ideally be defined over the finite halfsphere with radius 0 < r_HS ≤ 2π. Nonetheless, we use the infinite space for computational efficiency, but apply working point selection on the distributions. Generally, the representation as a single Gaussian is adequate for describing peaked distributions, where most of the probability mass lies within this halfsphere. In order to represent more complex, multi-peaked distributions, linear combinations of multivariate Gaussians are formulated as the multivariate Gaussian mixture distribution

p(φ_i) = ∑_{k=1}^{K} w_{ik} N(φ_i | μ_{ik}, Σ_{ik}),    (1)

which consists of K Gaussian kernels. The mixing coefficient w_{ik} denotes the weight of the mixture component, with 0 ≤ w_{ik} ≤ 1 and ∑_{k=1}^{K} w_{ik} = 1. μ_{ik} is the mean and Σ_{ik} the covariance of kernel k.
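Evaluating such a mixture at a given pose is straightforward; a minimal sketch with SciPy (our illustration, not the authors' code):

```python
from scipy.stats import multivariate_normal

def pose_mixture_pdf(phi, weights, means, covs):
    """Density of the Gaussian mixture in Eq. (1) at the 6D pose phi."""
    return sum(w * multivariate_normal.pdf(phi, mean=mu, cov=cov)
               for w, mu, cov in zip(weights, means, covs))
```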

B. Probabilistic State Representation

The probabilistic joint state space p(q) equals the product of all object instance distributions p(q_i), as they are assumed to be independent. p(q_i) is composed of the pose model p(φ_i|C_i) and the object class probability P(C_i):

p(q_i) = P(C_i) p(φ_i|C_i)    (2)


The probability distribution over the class is discrete and is described by a histogram over the object classes. The continuous pose distribution p(φ_i|C_i) over the 6D pose space from Equation (1) is conditioned on the object class, meaning that object class information, in particular the model origin, is required as reference for the pose distribution.

IV. ACTIVE PERCEPTION ARCHITECTURE

Active perception is a process where various actions a_t ∈ A are compared to each other and executed in order to achieve intelligent sensing and build up a good world model. From the wide range of different actions, we only consider sensor positioning at different viewpoints as sensing actions in this paper. A viewpoint comprises both position and orientation of the sensor. The active perception architecture of the proposed approach for next best view selection, containing the Observation Model, the Inference Model, and the Planning Module, is schematically illustrated in Figure 2.

Fig. 2. Active perception framework.

In order to find an optimal action policy π, a sequence of prospective actions and observations is evaluated by the planning module. Decision making is based on the costs of the executed action and the expected reward from the belief b_t(q′), which denotes the conditional probability distribution over the state q′, given a sequence of measurements. The inference module consists of data association and state estimation. It determines the belief distribution b_t(q′) by updating the initial distribution with an observation O_t(a_t). The observation model provides the measurement data for state estimation. The expected observation is predicted from the chosen sensing action and the state distribution after the transition update. Object occlusions in multi-object scenarios are handled for more accurate observation prediction.

The following sections describe the probabilistic modeling of multi-object scenes and the Bayesian statistical framework. The observation model, under consideration of occlusion estimation, and the probabilistic action planning are also explained in detail.

A. Inference Model

In this approach we decided on a Bayesian state estimator. The system properties for the dynamic environment follow the equations

q′ = g(a_t, q, ε_t)    (3)
O_t = h(a_t, q′, δ_t).    (4)

The index t denotes the time step, which covers the period of incorporating one action-observation pair into the state update. Often the system state q cannot be assessed right after the control action but only after the measurement. Thus the prime ′ marks the state transition due to action effects.

In the following, the state estimation process for the joint state is derived following the dynamic environment model. Equation (3) describes that reaching a future state q′ depends on the previous state q and on the applied action a_t. The system dynamics are subject to the state transition uncertainty ε_t, meaning that each executed action influences the state distributions. We consider the probability distribution over the state

b_{t−1}(q) = p(q | O_{t−1}(a_{t−1}), ..., O_0(a_0))    (5)

as the a priori belief given the previous sensor measurements O_{t−1}(a_{t−1}), ..., O_0(a_0) at time step t−1. Applying an action with its state transition probability p_{a_t}(q′|q), which contains ε_t, leads to the probabilistic model for the prediction update

b_{t−1}(q′) = p(q′ | O_{t−1}(a_{t−1}), ..., O_0(a_0)) = ∫ p_{a_t}(q′|q) b_{t−1}(q) dq.    (6)

The measurement Equation (4) describes the influence of measurement uncertainties δ_t on the observation O_t(a_t), which is performed at time step t after executing the control action a_t. In the Bayesian context, the measurement update is formulated as the probability distribution P(O_t(a_t)|q′).

A recursive Bayesian state estimator is used to calculate the posterior distribution b_t^{O_t(a_t)}(q′) by updating the a priori information with the measurement O_t(a_t). Combining the models of the transition update and the measurement update using Bayes' rule leads to

b_t^{O_t(a_t)}(q′) = p(q′ | O_t(a_t), ..., O_0(a_0)) = P(O_t(a_t)|q′) b_{t−1}(q′) / ∫ P(O_t(a_t)|q′) b_{t−1}(q′) dq′.    (7)

The rules of probability, the Markov assumption, and the theorem of total probability are applied to derive this expression. The observation O_t(a_t) is assumed to be conditionally independent of previous measurements.
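To make the recursion concrete, a minimal sketch on a discretized (sampled) state space is given below; the paper itself works with Gaussian mixtures in the continuous domain, so the grid representation here is purely illustrative.

```python
import numpy as np

def bayes_step(belief, transition, likelihood):
    """One prediction + measurement update (Eqs. (6) and (7)) on a
    discretized state space.

    belief:     (N,) prior probabilities b_{t-1} over N state samples
    transition: (N, N) matrix with transition[i, j] = p_a(q'_i | q_j)
    likelihood: (N,) measurement likelihoods P(O_t(a_t) | q'_i)
    """
    predicted = transition @ belief       # Eq. (6): integral becomes a matrix product
    posterior = likelihood * predicted    # Eq. (7): numerator
    return posterior / posterior.sum()    # Eq. (7): normalizing denominator
```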

B. Observation Model

The observation model encapsulates all processes from data acquisition to providing the joint measurement likelihood distribution P(O_t(a_t)|q′) for the current measurement O_t(a_t). Under the assumption of using interest point detectors, this measurement can be expressed as the detection of the set of F features

O_t(a_t) = {f_1(a_t), ..., f_F(a_t)}    (8)

as a subset of all database features. These features are considered to be the currently visible interest points.

While for a real measurement the set of features is acquired directly from the detector, we generate this set explicitly when predicting an observation, where the measurement is simulated. Feature characteristics and occlusion events are considered to determine the visibility of features. The methodology of the occlusion calculation is detailed in [12]. Given the set of (expected) visible features, and in consideration of the measurement uncertainty, we formulate the likelihood of seeing the feature f(a_t) as P(f(a_t)|q′). The measurement likelihood distribution P(O_t(a_t)|q′) is computed as the product of conditional feature likelihoods by applying the naive Bayes rule and assuming all features to be conditionally independent:

P(O_t(a_t)|q′) = ∏_{j=1}^{F} P(f_j(a_t)|q′).    (9)
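In practice the product in Equation (9) is prone to numerical underflow for large feature sets, so one would typically sum log-likelihoods; a small illustrative helper, assuming the per-feature likelihoods come from the detector or the simulation:

```python
import numpy as np

def measurement_log_likelihood(feature_likelihoods):
    """log P(O_t(a_t)|q') under the conditional independence assumption
    of Eq. (9): the product of likelihoods becomes a sum of logs."""
    return float(np.sum(np.log(np.asarray(feature_likelihoods))))
```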

C. Perception Planning

Sequential decision-making consists of the processes of evaluating future actions and finding the best action sequence with respect to a specific goal. The probabilistic planning concept is realized in the form of a partially observable Markov decision process, as proposed in [13]. The probabilistic planner reasons by considering information-theoretic quality criteria of the expected belief distribution b_t^{O_t(a_t)}(q′), which is abbreviated by b′ in the following equations, and control action costs r_{a_t}(b′). The objective lies in maximizing the long-term reward of all executed actions and the active reduction of uncertainty in the belief distributions. The value function

V_t(b′) = max_{a_t} ( R_{a_t}(b′) + γ ∫ V_{t−1}(b′) P(O_t(a_t)|q′) dO_t )    (10)

with V_1(b′) = max_{a_t} R_{a_t}(b′) is a recursive formulation to determine the expected future reward for sequencing actions. γ denotes the discount rate for penalizing later actions and R_{a_t}(b′) is the reward. The continuous domains and the high-dimensional state spaces make the problem intractable. As the value function is not piecewise linear, it is evaluated at specific positions, which demands the online calculation of the reward for these specific actions and observations.

The control policy

π(b′) = argmax_{a_t} ( R_{a_t}(b′) + γ ∫ V_{t−1}(b′) P(O_t(a_t)|q′) dO_t )    (11)

maps the probability distribution over the states to actions.Assuming a discrete observation space the integral can bereplaced by a sum.

The prospective action policy π is determined by maximizing the expected reward

R_{a_t}(b′) = −α E_O[ h_{b_t}(q′|O_t(a_t)) ] + ∫ r_{a_t}(b′) b(q) dq,    (12)

which relates benefits and costs of future actions a_t with the relation factor α. The first term states the expected benefit of applying the control action; the second term expresses the respective costs, with r_{a_t}(b′) denoting the action efforts. In perception problems the quality of information is usually closely related to the probability density distribution over the state space. The information-theoretic measure of the differential entropy is suitable for determining the uncertainty of the belief distribution. Since the computation of the differential entropy, both numerically and by sampling from parametric probability density distributions, is costly in terms of processing time, the sum of the upper bound estimates over the object instances

h^U_{b_t}(q′|O_t(a_t)) = ∑_i ∑_{k=1}^{K_i} w_{ik} [ −log w_{ik} + (1/2) log((2πe)^D |Σ_{ik}|) ] ≥ h_{b_t}(q′|O_t(a_t))    (13)

is used to approximate and determine the expected benefit [14]. D denotes the dimension of the state, and |Σ_{ik}| denotes the determinant of the kth component's covariance matrix.
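A direct transcription of the bound for a single mixture follows Huber et al. [14]; summing it over all object instances gives the quantity in Equation (13). This sketch is our illustration rather than the authors' code.

```python
import numpy as np

def gmm_entropy_upper_bound(weights, covs):
    """Upper bound (Eq. 13) on the differential entropy of a
    D-dimensional Gaussian mixture with the given weights and
    covariance matrices."""
    h = 0.0
    for w, cov in zip(weights, covs):
        D = cov.shape[0]
        _, logdet = np.linalg.slogdet(cov)  # log |Sigma_k|
        # weight entropy plus Gaussian entropy 0.5*log((2*pi*e)^D |Sigma_k|)
        h += w * (-np.log(w) + 0.5 * (D * np.log(2.0 * np.pi * np.e) + logdet))
    return h
```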

V. REALIZATION OF THE ACTIVE FRAMEWORK

This section details the basic concepts by applying them to a robotic scenario. As a sequence of observations from various viewpoints is fused, the measurements have to be correctly associated with the corresponding object instances of the prior knowledge. The essential process of data association in the inference model is described in this section, as well as the modeling of system and measurement uncertainties and the realized planning concept.

A. Data association

State estimation in the high-dimensional joint space is challenging. Thus, we factor the joint space into subspaces, each describing one object instance distribution. The Bayes update of Equation (7) can almost be applied directly, except for the measurement likelihood P(O_t(a_t)|q′). It has to be decomposed and associated with the priors of the corresponding object instances. The process of multi-target data association aims at finding the assigned measurement likelihoods P(O_t^i(a_t)|q′). In this work we combine global nearest neighbor (GNN) data association and geometry-based data association to find corresponding measurement components. Both build up association tables. GNN data association uses the Mahalanobis distance measure to probabilistically compare Gaussian kernels of pose distributions. This is only applicable for components of the same object class. Geometry-based data association accomplishes the association task across classes by checking object constellations for physical plausibility, meaning object instances must not intersect. Therefore, samples are drawn from the distributions and each sample arrangement is checked for intersections according to the object geometries.

If the entries of the association tables are within the validation gate, the corresponding measurements are associated. Otherwise, unassigned measurements establish a new object instance distribution, which is fused in the Bayes update with a uniform prior, resulting in an increase of the dimension of the joint state.
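A simplified sketch of the GNN step under a validation gate: the paper builds full association tables and resolves them globally, while this greedy variant (with a hypothetical gate value) only illustrates the Mahalanobis comparison of same-class kernels.

```python
import numpy as np

def mahalanobis_sq(mu_a, cov_a, mu_b, cov_b):
    """Squared Mahalanobis distance between two Gaussian kernels,
    using the summed covariance as the comparison metric."""
    d = mu_a - mu_b
    return float(d @ np.linalg.solve(cov_a + cov_b, d))

def gnn_associate(priors, measurements, gate=9.0):
    """Greedily assign measurement kernels to same-class prior kernels;
    pairs outside the validation gate stay unassigned and would spawn
    new object instance distributions."""
    pairs, used = [], set()
    for i, (mu_m, cov_m) in enumerate(measurements):
        best, best_d = None, gate
        for j, (mu_p, cov_p) in enumerate(priors):
            if j in used:
                continue
            d = mahalanobis_sq(mu_m, cov_m, mu_p, cov_p)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```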

B. Measurement uncertainties

In this work, the object recognition process uses local interest points from the SIFT algorithm as features for object classification and pose determination. Spatial information is gained from stereo images and 6D poses are calculated, which is detailed in [15].

The measurement model provides P(O_t(a_t)|q′) as a mixture distribution for the state update. The mean values of the measurement distribution are determined from the stereo matching algorithm on the basis of feature correspondences. In order to determine the measurement accuracy of the object pose, we evaluated the detection precision of several objects in a total of 260 measurements against ground truth data. The deviation of the translational components is plotted in Figure 3, with the sensor pointing in the z-direction. The covariance ellipsoid of this point cloud is used to approximate the measurement likelihood uncertainty. The uncertainty of the object class recognition is encoded in the mixture weight. It results from relations between seen and expected interest points, matching errors, and sensor and feature characteristics.

The measurement model for the state prediction slightly differs from the model for the state update, as the observation needs to be simulated. The mean object pose and the average spread are determined from a heuristic based on training data. The interest points of 360 stereo images per object are calculated, matched, and stored to build up the object database. The 3D locations of the features are mapped onto the object geometry to generate the database model. During measurement simulation we determine the expected visible features and estimate the consistent stereo matches, which are used together with the results of the occlusion estimation process to adapt the covariances of the simulated measurement components. While the shape of the ellipsoid of the translational components is modeled according to the detection results from Figure 3, the rotational covariance axes are set to equal length.

Fig. 3. Measurement uncertainty in xyz-directions with the covariance ellipsoid (97% quantile). The sensor points in the z-direction.

(a) Multi-object scenario (image taken from VP1)

(b) Viewpoint arrangement and executed action sequence

Fig. 4. Experimental setup

C. State transition uncertainties

The transition uncertainty is defined as the linear Gaussian mixture

p_{a_t}(q′|q) = ∑_{k=1}^{K} w_k N(q′ | μ_k + Δ(a_t), Σ_k(a_t)),    (14)

with Gaussian kernels equal in the number of components and mean values to the belief distribution. Δ(a_t) denotes the action-dependent change in the mean, with covariance Σ_k(a_t). Ground truth experiments on the service robot have shown standard deviations due to state transitions of 16.8 mm in the xy-directions and 19 mm in the e_3-direction. The other dimensions are negligible.
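Under the linear model of Equation (14), the prediction update of Equation (6) has a simple form for a Gaussian mixture belief: every kernel mean is shifted by the action offset and the transition covariance is added. A minimal sketch, with the additive-noise reading as our assumption:

```python
def transition_update(means, covs, delta, trans_cov):
    """Prediction update (Eq. 6) of a Gaussian mixture belief under the
    linear transition model of Eq. (14): shift each kernel mean (numpy
    array) by the action offset delta and inflate its covariance."""
    return ([mu + delta for mu in means],
            [cov + trans_cov for cov in covs])
```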

D. Greedy planning

The incorporation of observations usually changes the belief distribution enormously and possibly makes the current action policy ineffectual. Hence a 1-horizon planning strategy is more efficient for the considered scenarios, which can be achieved by applying a greedy technique to plan an action sequence until a measurement is performed. The iterative planning algorithm terminates when the desired quality criteria in the form of distribution entropies are reached. In this work the differential upper bound entropy is estimated and evaluated for each object instance separately, aiming at detecting each object instance with a specific precision.
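The loop this paragraph describes might be summarized as follows; expected_reward, execute_and_update, and instance_entropies are placeholder names for Equation (12), the Bayes update, and the per-instance bound of Equation (13), passed in as parameters because their concrete forms are given above.

```python
def greedy_perception_loop(belief, actions, expected_reward, execute_and_update,
                           instance_entropies, entropy_bound, max_steps=10):
    """1-horizon (greedy) perception planning: at each step choose the
    action with the highest expected reward, execute it, and fold the
    observation into the belief; stop once every object instance is
    known with the desired precision."""
    for _ in range(max_steps):
        # terminate when all per-instance entropies are below the bound
        if max(instance_entropies(belief), default=float("inf")) <= entropy_bound:
            break
        best_action = max(actions, key=lambda a: expected_reward(belief, a))
        belief = execute_and_update(belief, best_action)
    return belief
```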

VI. EXPERIMENTS

In this section the proposed approach is evaluated. A perception cycle is demonstrated with a focus on the localization process. The performance of the developed active planning approach is compared to two other strategies for viewpoint selection.

A. Localization process in the perception cycle

1) Experimental setup: The proposed approach is demonstrated on the example of the object constellation shown in Figure 4a. The complex scenario consists of 11 different objects belonging to 9 object classes. The object database contains a total of 100 different object classes, all household items.

The viewpoint arrangement consists of 8 different, circularly aligned viewpoints and is schematically illustrated in Figure 4b. A sensing action is defined as a change in the robot's viewpoint, equivalent to moving the sensor to a different location. These actions are evaluated in this experiment on the basis of the predicted belief distributions. We derive the costs of the robot's movement to another viewpoint from the movement angle and distance; an illustrative sketch of such a cost term follows Table I.

TABLE I
DETECTION RESULTS: NUMBER OF RECOGNIZED OBJECTS AND NUMBER OF COMPONENTS OF MIXTURE DISTRIBUTIONS

Observation    Recognized objects         Mixture components
from VP        class    accurate pose     b_{t−1}(q)    O_t(a_t)    b_t(q′)
VP 6           7        4                 0             18          9
VP 5           10       7                 9             22          12
VP 2           11       11                12            17          13
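Since the paper does not give the exact weighting of angle and distance in the cost term, the linear combination below is an assumption for illustration only.

```python
def movement_cost(angle_rad, distance_m, w_angle=0.5, w_dist=0.5):
    """Illustrative action cost from the rotation angle and travel
    distance between two viewpoints; returned negative to match the
    sign convention of Table II. The weights are assumptions, not
    values from the paper."""
    return -(w_angle * abs(angle_rad) + w_dist * distance_m)
```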

2) Perception Cycle: An action sequence resulting from the proposed planning approach and its effects on the probabilistic scene model are presented in this section.

Initially we do not have any scene knowledge, so each action promises identical benefits. We start from the current robot position, namely viewpoint 6. After performing the first measurement and accomplishing the data association and state estimation process, we retrieve the probabilistic scene distribution consisting of a total of 7 detected object instances, 3 of them of unsatisfactory pose accuracy, though. Table I lists the recognition results and the number of mixture components of the probability distributions during the state update. Figure 5a displays characteristics of the measurement process. The top left image shows one of the stereo images acquired by the camera. The bottom left graphics illustrate the current scene with object models positioned at the means of the corresponding probability distributions. The middle plots show the horizontal and upright projections of the translational covariance ellipsoids of each mixture component of all object instance distributions. The right plot pictures the translational and rotational covariance ellipsoids in a single graphic. The axis of the orientation of each mixture component is drawn in black as a vector of unit length, originating from the translational mean. The original length representing the rotation angle is colored. At the tip of the unit vector the covariance ellipsoid of the Rodrigues components is attached. The weight of the mixture component is visualized by the transparency of the ellipsoids. The table board is shown in grey. All graphics are drawn from the perspective of viewpoint 1.

3) Recognition results from viewpoint 6: The first observation is performed at viewpoint 6. The salt box and the tomato soup can are recognized very well. The recognition results of the stacked soup boxes, especially for the lower blue one, are worth looking at. Two mixture components, one for the green and one for the blue box, are assigned to the object instance of the bottom soup box. This effect results from the similarity of the objects, as they are almost identical in their textures, implying that they have many similar interest points. For the Amicelli box in the front (due to reflections) and for some objects in the back no hypotheses are found, as they are too far away or beyond the image. The planning algorithm aims at differentiating between the soup boxes and sharpening the knowledge of the Ceylon tea and the jam tins. It determines moving to viewpoint 5 as the best future action. The planning results are shown in Table II. Due to the heavy occlusion of the bottom soup box, many viewpoints are considered disadvantageous.

TABLE II
CORRELATING COSTS AND VALUE FOR CALCULATING THE REWARD R_{a_t} FROM EQUATION (12) FOR SELECTED VIEWPOINTS, FOR α = 2.0

               VP6          VP6                            VP5
Viewpoints     R_{a_t}      costs    value    R_{a_t}      costs    value    R_{a_t}
VP1            -0.75        -0.75    -1.00    -2.75        -1.00    -0.53    -2.06
VP2            -1.00        -1.00    -0.00    -1.00        -0.75    -0.00    -0.75
VP3            -0.75        -0.75    -0.95    -2.65        -0.50    -0.65    -1.80
VP4            -0.50        -0.50    -0.88    -2.26        -0.25    -0.28    -0.81
VP5            -0.25        -0.25    -0.24    -0.73        -0.50    -0.24    -0.94
VP6             0.00        -0.50    -0.13    -0.76        -0.25    -0.49    -1.23
VP7            -0.25        -0.25    -0.28    -0.81        -0.75    -0.64    -2.03
VP8            -0.50        -0.50    -0.14    -0.78        -0.50    -1.00    -2.50

4) Recognition results from viewpoint 5: Three new objects are recognized in the second measurement step, and most goals, except for improving the knowledge of the left jam tin, are reached. Note that the uncertainty of the jam tin even grew due to state transition effects. For the right jam tin and the Fenchel tea, two mixture components depict the respective object instance distributions. As these objects have similar textures on several sides of their surface, a clear identification is not possible from this observation. While the two mixtures of the Fenchel tea are almost equally weighted, the larger mixture component of the jam tin has very little weight assigned. Due to the fairly low entropy of this distribution, it is still regarded as satisfactorily recognized.

5) Recognition results from viewpoint 2: Based on the current belief distributions, the planning algorithm proposes a prospective measurement from viewpoint 2, from where all uncertain objects are supposed to be clearly visible. Except for the right jam tin, all objects are observed. The ham tin is newly discovered. This results in 11 strongly peaked hypothesis distributions, which makes the algorithm terminate. The green arrows in Figure 4b show the completed action sequence.

The detection of new object instances has great effects on the planning algorithms. Thus, especially in complex scenes with a large number of objects, the detection of all objects cannot be guaranteed. The algorithm terminates when all recognized objects are of high class and pose accuracy, but it does not have explorative behavior. The consideration of occlusion is essential for a reasonable observation sequence.

B. Evaluation against other strategies

In a series of experiments we compare the proposed approach to two other strategies, namely random viewpoint selection and an incremental strategy. The random strategy arbitrarily chooses the next sensing action. The incremental strategy decides on either moving clockwise or counterclockwise around the table and performs measurements step by step, always moving on to the next viewpoint.


[Figure 5 panels: observations from VP6, VP5, and VP2.]

Fig. 5. Detection results for three sequential measurements. The top left plot shows one stereo camera image, the bottom left plot the current scene model. The other plots illustrate the covariance ellipsoids (97% quantile) of each mixture component of all object instance distributions. The middle column shows the horizontal and upright projections of the translational components, the right column a 3D view of the translational and rotational ellipsoids. On the bottom right, important components are enlarged.


(a) Average number of iterations (b) Average costs

Fig. 6. Evaluation of number of iterations and costs for each strategy.

(a) Rates of successful termination (b) Recognition rates

Fig. 7. Experimental results, analyzed at each iteration step.

In the following we compare these strategies in 80 experiments with different scenes consisting of up to 10 objects. The proposed strategy outperforms the others in the number of average iteration steps, with an average of 1.68 steps, as shown in Figure 6a. In the movement costs, a clear increase can be seen from the proposed to the incremental strategy. The random strategy is weakest, as expected. Figure 6b shows the results.

In Figure 7a the rates of successful termination of the algorithms are plotted against the iterations. Significantly more experiments terminated after few iterations for the proposed strategy than for the others. Also, the proposed strategy almost always terminates, while the others sometimes cannot achieve the desired termination precision. Note that when the algorithms stop, this does not necessarily mean that all objects are successfully detected, as no explorative behavior is implemented. Thus, sometimes the precise recognition of a subset of all objects already meets the specified termination criterion.

The recognition rates with respect to the different strategies are presented in Figure 7b. The rates are again plotted over the iteration steps to show how the results change with the number of incorporated measurements. We differentiate between the detection rate for the object class, illustrated by the dashed line, and the recognition rate, which additionally takes into account pose accuracy (solid line). As presumed, the class detection rate is similar for all strategies due to the lack of exploration abilities. It grows with the number of iterations, as object instances accumulate with the number of observations up to the total number of objects in the scene. The recognition rate including the pose accuracy is higher for the proposed strategy between the second and the fourth iteration step, which demonstrates the deliberate reduction of uncertainty in the scene model.

VII. CONCLUSIONS AND FUTURE WORK

In this work we realized an active perception framework which probabilistically reasons over belief distributions to efficiently plan future actions. It is integrated in a real robot and operates in realistic environments. The functionality of the proposed approach is demonstrated in experiments. Its good performance is shown in comparison to two other viewpoint selection strategies.

In future work, object relations could be used to improve localization errors and measurement data. Also, it could be of interest to model occluded and invisible space to give the robot more profound knowledge for action selection and enable it to explore the environment.

VIII. ACKNOWLEDGMENTS

This work was partly funded as part of the research project DESIRE by the German Federal Ministry of Education and Research (BMBF) under grant no. 01IME01D.

REFERENCES

[1] S. Chen, "On Perception Planning for Active Robot Vision," IEEE SMC Society eNewsletter, vol. 13, December 2005.

[2] S. Dutta Roy, S. Chaudhury, and S. Banerjee, "Active recognition through next view planning: a survey," Pattern Recognition, vol. 37, no. 3, pp. 429–446, 2004.

[3] P.-E. Forssen, D. Meger, K. Lai, S. Helmer, J. J. Little, and D. G. Lowe, "Informed visual search: Combining attention and object recognition," in IEEE International Conference on Robotics and Automation, 2008, pp. 935–942.

[4] O. Lanz, "Occlusion robust tracking of multiple objects," in International Conference on Computer Vision and Graphics, 2004.

[5] M. Chli and A. J. Davison, "Active matching," in Proceedings of the European Conference on Computer Vision, 2008.

[6] G. Flandin and F. Chaumette, "Visual data fusion for objects localization by active vision," in Proceedings of the 7th European Conference on Computer Vision, 2002.

[7] J. Denzler and C. M. Brown, "Information theoretic sensor data selection for active object recognition and state estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 145–157, 2002.

[8] C. Derichs, F. Deinzer, and H. Niemann, "Integrated viewpoint fusion and viewpoint selection for optimal object recognition," in British Machine Vision Conference, 2006, pp. 287–296.

[9] J. Ma and J. W. Burdick, "Dynamic sensor planning with stereo for model identification on a mobile platform," in Proceedings of the IEEE International Conference on Robotics, Automation and Control, 2010.

[10] W. Feiten, P. Atwal, R. Eidenberger, and T. Grundmann, "6D pose uncertainty in robotic perception," in Proceedings of the German Workshop on Robotics, 2009.

[11] M. W. Spong, Robot Dynamics and Control. New York, NY, USA: John Wiley & Sons, Inc., 1989.

[12] R. Eidenberger, R. Zoellner, and J. Scharinger, "Probabilistic occlusion estimation in cluttered environments for active perception planning," in Proceedings of the IEEE International Conference on Advanced Intelligent Mechatronics, 2009.

[13] R. Eidenberger, T. Grundmann, and R. Zoellner, "Probabilistic action planning for active scene modeling in continuous high-dimensional domains," in Proceedings of the IEEE International Conference on Robotics and Automation, 2009.

[14] M. F. Huber, T. Bailey, H. Durrant-Whyte, and U. D. Hanebeck, "On entropy approximation for Gaussian mixture random vectors," in Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, 2008.

[15] T. Grundmann, R. Eidenberger, M. Schneider, and M. Fiegert, "Robust 6D pose determination in complex environments for one hundred classes," in Proceedings of the 7th International Conference on Informatics in Control, Automation and Robotics, 2010.
