Markov Localization & Bayes Filtering
with
Kalman Filters
Discrete Filters
Particle Filters
Slides adapted from Thrun et al.,
Probabilistic Robotics
Faculties of Engineering & Computer Science
Autonomous Robotics, CSCI 6905 / Mech 6905 – Section 3
Dalhousie Fall 2011 / 2012 Academic Term
• Introduction • Motion • Perception • Control • Concluding Remarks • LEGO Mindstorms
Control Scheme for Autonomous Mobile Robot
Control Scheme for Autonomous Mobile Robot – the plan
– Thomas will cover generalized Bayesian filters for localization next week.
– Mae sets up the background for him today by discussing motion and sensor models as well as robot control.
– Mae then follows up on Bayesian filters with a specific example, underwater SLAM.
Markov Localization

1. The robot does not know where it is. Thus, a reasonable initial belief about its position is a uniform distribution.
2. A sensor reading is made (USE SENSOR MODEL) indicating a door at certain locations (USE MAP). This sensor reading is integrated with the prior belief to update our belief (USE BAYES).
3. The robot moves (USE MOTION MODEL), which adds noise to the belief.
4. A new sensor reading (USE SENSOR MODEL) again indicates a door at certain locations (USE MAP) and is again integrated with the prior belief (USE BAYES).
5. The robot moves again (USE MOTION MODEL), which adds noise, and so on.
Bayes Formula

$$P(x, y) = P(x \mid y)\, P(y) = P(y \mid x)\, P(x)$$
$$\Rightarrow\quad P(x \mid y) = \frac{P(y \mid x)\, P(x)}{P(y)} = \frac{\text{likelihood} \cdot \text{prior}}{\text{evidence}}$$
Bayes Rule with Background Knowledge

$$P(x \mid y, z) = \frac{P(y \mid x, z)\, P(x \mid z)}{P(y \mid z)}$$
Normalization

$$P(x \mid y) = \eta\, P(y \mid x)\, P(x), \qquad \eta = P(y)^{-1} = \frac{1}{\sum_x P(y \mid x)\, P(x)}$$

Algorithm:
∀x: aux_{x|y} = P(y|x) P(x)
η = 1 / Σ_x aux_{x|y}
∀x: P(x|y) = η aux_{x|y}
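The normalization algorithm above fits in a few lines of Python. The two-state door/wall world and the numeric likelihoods below are made-up values for illustration:

```python
def bayes_posterior(likelihood, prior):
    """Discrete Bayes rule: return eta * P(y|x) * P(x) for all x."""
    aux = {x: likelihood[x] * prior[x] for x in prior}   # aux_x = P(y|x) P(x)
    eta = 1.0 / sum(aux.values())                        # eta = 1 / sum_x aux_x
    return {x: eta * a for x, a in aux.items()}          # P(x|y) = eta * aux_x

# Example: two states, uniform prior, a sensor reading that favors "door".
prior = {"door": 0.5, "wall": 0.5}
likelihood = {"door": 0.6, "wall": 0.2}   # P(z|x), illustrative numbers
posterior = bayes_posterior(likelihood, prior)
print(posterior)   # probability mass shifts toward "door"
```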
Recursive Bayesian Updating

$$P(x \mid z_1,\ldots,z_n) = \frac{P(z_n \mid x, z_1,\ldots,z_{n-1})\, P(x \mid z_1,\ldots,z_{n-1})}{P(z_n \mid z_1,\ldots,z_{n-1})}$$

Markov assumption: $z_n$ is independent of $z_1,\ldots,z_{n-1}$ if we know $x$.

$$P(x \mid z_1,\ldots,z_n) = \frac{P(z_n \mid x)\, P(x \mid z_1,\ldots,z_{n-1})}{P(z_n \mid z_1,\ldots,z_{n-1})} = \eta\, P(z_n \mid x)\, P(x \mid z_1,\ldots,z_{n-1})$$
$$= \eta_{1\ldots n} \left[ \prod_{i=1\ldots n} P(z_i \mid x) \right] P(x)$$
Putting observations and actions together: Bayes Filters

• Given:
  • Stream of observations z and action data u: $d_t = \{u_1, z_1, \ldots, u_t, z_t\}$
  • Sensor model $P(z \mid x)$.
  • Action model $P(x \mid u, x')$.
  • Prior probability of the system state $P(x)$.
• Wanted:
  • Estimate of the state $X$ of a dynamical system.
  • The posterior of the state is also called the belief:
$$Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t)$$
Graphical Representation and Markov Assumption

Underlying assumptions:
• Static world
• Independent noise
• Perfect model, no approximation errors

$$p(x_t \mid x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)$$
$$p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t)$$

Bayes Filters

(z = observation, u = action, x = state)

$$Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t)$$
Bayes: $\;= \eta\, P(z_t \mid x_t, u_1, z_1, \ldots, u_t)\, P(x_t \mid u_1, z_1, \ldots, u_t)$
Markov: $\;= \eta\, P(z_t \mid x_t)\, P(x_t \mid u_1, z_1, \ldots, u_t)$
Total prob.: $\;= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_1, z_1, \ldots, u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, u_t)\, dx_{t-1}$
Markov: $\;= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, u_t)\, dx_{t-1}$
Markov: $\;= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, P(x_{t-1} \mid u_1, z_1, \ldots, z_{t-1})\, dx_{t-1}$
$\;= \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$

• Prediction: $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
• Correction: $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$
Bayes Filter Algorithm

1. Algorithm Bayes_filter(Bel(x), d):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z|x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η⁻¹ Bel'(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel'(x) = ∫ P(x|u, x') Bel(x') dx'
12.  Return Bel'(x)

$$Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$
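A minimal Python sketch of the Bayes_filter routine above for a finite state space. The three-cell corridor, the sensor model `sensor_p`, and the motion model `motion_p` are toy assumptions, not from the slides:

```python
STATES = [0, 1, 2]          # a toy 3-cell circular corridor; cell 0 has a door

def sensor_p(z, x):
    """P(z|x): 'door' is likely only in cell 0 (illustrative numbers)."""
    if z == "door":
        return 0.8 if x == 0 else 0.1
    return 0.2 if x == 0 else 0.9

def motion_p(x, u, x_prev):
    """P(x|u, x_prev): advance by u cells with prob 0.8, stay put with 0.2."""
    if x == (x_prev + u) % 3:
        return 0.8
    return 0.2 if x == x_prev else 0.0

def bayes_filter(bel, d, kind):
    if kind == "perceptual":                     # correction step, lines 4-8
        unnorm = {x: sensor_p(d, x) * bel[x] for x in STATES}
        eta = sum(unnorm.values())
        return {x: p / eta for x, p in unnorm.items()}
    # action step, lines 10-11: Bel'(x) = sum_x' P(x|u, x') Bel(x')
    return {x: sum(motion_p(x, d, xp) * bel[xp] for xp in STATES)
            for x in STATES}

bel = {x: 1 / 3 for x in STATES}                 # uniform prior
bel = bayes_filter(bel, "door", "perceptual")    # observe a door
bel = bayes_filter(bel, 1, "action")             # move one cell forward
```

After seeing the door the mass concentrates on cell 0; the motion step then shifts most of it to cell 1 and smears it with the motion noise.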
Bayes Filters are Familiar!
• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)
$$Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$
Probabilistic Robotics
Bayes Filter Implementations
Gaussian filters
Univariate Gaussians

$$p(x) \sim N(\mu, \sigma^2):\qquad p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}$$

Linear transform of Gaussians

$$X \sim N(\mu, \sigma^2),\; Y = aX + b \;\Rightarrow\; Y \sim N(a\mu + b,\; a^2\sigma^2)$$
• We stay in the “Gaussian world” as long as we start with Gaussians and perform only linear transformations.
Multivariate Gaussians

$$X \sim N(\mu, \Sigma),\; Y = AX + B \;\Rightarrow\; Y \sim N(A\mu + B,\; A\Sigma A^T)$$

Product of two univariate Gaussians:

$$X_1 \sim N(\mu_1, \sigma_1^2),\; X_2 \sim N(\mu_2, \sigma_2^2) \;\Rightarrow\;
p(X_1)\cdot p(X_2) \sim N\!\left(\frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\,\mu_1 + \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\,\mu_2,\;\; \frac{1}{\sigma_1^{-2}+\sigma_2^{-2}}\right)$$
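The product formula above can be checked numerically: fusing two equally uncertain 1-D estimates should give the midpoint with half the variance. The numbers below are illustrative:

```python
def fuse(mu1, var1, mu2, var2):
    """Mean and variance of N(mu1,var1)*N(mu2,var2), up to normalization."""
    mu = (var2 * mu1 + var1 * mu2) / (var1 + var2)   # precision-weighted mean
    var = 1.0 / (1.0 / var1 + 1.0 / var2)            # precisions add
    return mu, var

mu, var = fuse(0.0, 4.0, 2.0, 4.0)
print(mu, var)   # 1.0 2.0 -- fused mean halfway between, variance halved
```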
Discrete Kalman Filter

Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation

$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$

with a measurement

$$z_t = C_t x_t + \delta_t$$
Linear Gaussian Systems: Initialization

• Initial belief is normally distributed:
$$bel(x_0) = N(x_0;\, \mu_0, \Sigma_0)$$
Linear Gaussian Systems: Dynamics

• Dynamics are a linear function of state and control plus additive noise:
$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$
$$p(x_t \mid u_t, x_{t-1}) = N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$$
$$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$$
with $p(x_t \mid u_t, x_{t-1}) \sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$ and $bel(x_{t-1}) \sim N(x_{t-1};\, \mu_{t-1}, \Sigma_{t-1})$.
Linear Gaussian Systems: Observations

• Observations are a linear function of state plus additive noise:
$$z_t = C_t x_t + \delta_t$$
$$p(z_t \mid x_t) = N(z_t;\, C_t x_t,\, Q_t)$$
$$bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$$
with $p(z_t \mid x_t) \sim N(z_t;\, C_t x_t,\, Q_t)$ and $\overline{bel}(x_t) \sim N(x_t;\, \bar\mu_t, \bar\Sigma_t)$.
Kalman Filter Algorithm

1. Algorithm Kalman_filter(μ_{t−1}, Σ_{t−1}, u_t, z_t):
2. Prediction:
3.   μ̄_t = A_t μ_{t−1} + B_t u_t
4.   Σ̄_t = A_t Σ_{t−1} A_tᵀ + R_t
5. Correction:
6.   K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹
7.   μ_t = μ̄_t + K_t (z_t − C_t μ̄_t)
8.   Σ_t = (I − K_t C_t) Σ̄_t
9. Return μ_t, Σ_t
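The Kalman_filter algorithm above transcribes almost line for line into NumPy. The 1-D static-state example at the bottom (constant position, noisy position measurements, these noise levels) is an illustrative assumption:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    # Prediction (lines 3-4)
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction (lines 6-8)
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# 1-D example: static state, three noisy position measurements near 1.0.
A = np.eye(1); B = np.zeros((1, 1)); C = np.eye(1)
R = 0.01 * np.eye(1)        # small process noise (assumed)
Q = 1.0 * np.eye(1)         # measurement noise (assumed)
mu, Sigma = np.zeros(1), 10.0 * np.eye(1)   # vague prior
for z in [1.1, 0.9, 1.05]:
    mu, Sigma = kalman_filter(mu, Sigma, np.zeros(1), np.array([z]),
                              A, B, C, R, Q)
```

The estimate converges toward the measurements and the covariance shrinks with each correction.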
Kalman Filter Summary

• Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: $O(k^{2.376} + n^2)$
• Optimal for linear Gaussian systems!
• Most robotics systems are nonlinear!
Nonlinear Dynamic Systems

• Most realistic robotic problems involve nonlinear functions:
$$x_t = g(u_t, x_{t-1})$$
$$z_t = h(x_t)$$
(Figures: Linearity Assumption Revisited; Non-linear Function; EKF Linearization (1)–(3).)
EKF Linearization: First Order Taylor Series Expansion

• Prediction:
$$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}\,(x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1})$$

• Correction:
$$h(x_t) \approx h(\bar\mu_t) + \frac{\partial h(\bar\mu_t)}{\partial x_t}\,(x_t - \bar\mu_t) = h(\bar\mu_t) + H_t\,(x_t - \bar\mu_t)$$
EKF Algorithm

1. Algorithm Extended_Kalman_filter(μ_{t−1}, Σ_{t−1}, u_t, z_t):
2. Prediction:
3.   μ̄_t = g(u_t, μ_{t−1})    (KF: μ̄_t = A_t μ_{t−1} + B_t u_t)
4.   Σ̄_t = G_t Σ_{t−1} G_tᵀ + R_t    (KF: Σ̄_t = A_t Σ_{t−1} A_tᵀ + R_t)
5. Correction:
6.   K_t = Σ̄_t H_tᵀ (H_t Σ̄_t H_tᵀ + Q_t)⁻¹    (KF: with C_t in place of H_t)
7.   μ_t = μ̄_t + K_t (z_t − h(μ̄_t))    (KF: z_t − C_t μ̄_t)
8.   Σ_t = (I − K_t H_t) Σ̄_t    (KF: (I − K_t C_t) Σ̄_t)
9. Return μ_t, Σ_t

where

$$G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}, \qquad H_t = \frac{\partial h(\bar\mu_t)}{\partial x_t}$$
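A sketch of the Extended_Kalman_filter above, parameterized by user-supplied g, h and their Jacobians G, H. The scalar system (state shifted by u, measured through x²) is a made-up example, not from the slides:

```python
import numpy as np

def ekf(mu, Sigma, u, z, g, G, h, H, R, Q):
    # Prediction: propagate mean through g, linearize around previous mean
    mu_bar = g(u, mu)
    Gt = G(u, mu)
    Sigma_bar = Gt @ Sigma @ Gt.T + R
    # Correction: linearize h around the predicted mean
    Ht = H(mu_bar)
    K = Sigma_bar @ Ht.T @ np.linalg.inv(Ht @ Sigma_bar @ Ht.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu_new, Sigma_new

# Toy nonlinear system (assumed for illustration):
g = lambda u, x: x + u                       # motion: shift by u
G = lambda u, x: np.eye(1)                   # dg/dx
h = lambda x: x ** 2                         # nonlinear measurement
H = lambda x: np.array([[2.0 * x[0]]])       # dh/dx

mu, Sigma = np.array([1.0]), np.eye(1)
mu, Sigma = ekf(mu, Sigma, np.array([0.5]), np.array([2.3]),
                g, G, h, H, 0.01 * np.eye(1), 0.1 * np.eye(1))
```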
Localization

• Given:
  • Map of the environment.
  • Sequence of sensor measurements.
• Wanted:
  • Estimate of the robot's position.
• Problem classes:
  • Position tracking
  • Global localization
  • Kidnapped robot problem (recovery)

"Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities." [Cox '91]
Landmark-based Localization
EKF Summary
• Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: $O(k^{2.376} + n^2)$
• Not optimal!
• Can diverge if nonlinearities are large!
• Works surprisingly well even when all assumptions are violated!
Kalman Filter-based System
• [Arras et al. 98]: laser range-finder and vision
• High precision (< 1 cm accuracy)
[Courtesy of Kai Arras]
Multi-hypothesis Tracking

Localization With MHT
• Belief is represented by multiple hypotheses
• Each hypothesis is tracked by a Kalman filter
• Additional problems:
  • Data association: which observation corresponds to which hypothesis?
  • Hypothesis management: when to add / delete hypotheses?
• Huge body of literature on target tracking, motion correspondence, etc.
MHT: Implemented System (2)
[Courtesy of P. Jensfelt and S. Kristensen]
Probabilistic Robotics
Bayes Filter Implementations
Discrete filters
Piecewise Constant
Discrete Bayes Filter Algorithm

1. Algorithm Discrete_Bayes_filter(Bel(x), d):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z|x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η⁻¹ Bel'(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel'(x) = Σ_{x'} P(x|u, x') Bel(x')
12.  Return Bel'(x)
Grid-based Localization

Sonars and Occupancy Grid Map
Probabilistic Robotics
Bayes Filter Implementations
Particle filters
Sample-based Localization (sonar)

Particle Filters
• Represent belief by random samples
• Estimation of non-Gaussian, nonlinear processes
• Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter, particle filter
• Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa 96]; Computer vision: [Isard and Blake 96, 98]; Dynamic Bayesian networks: [Kanazawa et al., 95]
Importance Sampling: weight samples w = f / g

Importance Sampling with Resampling: Landmark Detection Example
Particle Filters

Sensor Information: Importance Sampling
$$Bel(x) \leftarrow \alpha\, p(z \mid x)\, Bel(x)$$
$$w \leftarrow \frac{\alpha\, p(z \mid x)\, Bel(x)}{Bel(x)} = \alpha\, p(z \mid x)$$

Robot Motion
$$Bel(x) \leftarrow \int p(x \mid u, x')\, Bel(x')\, dx'$$
Particle Filter Algorithm

1. Algorithm particle_filter(S_{t−1}, u_{t−1}, z_t):
2.   S_t = ∅, η = 0
3.   For i = 1 … n    (generate new samples)
4.     Sample index j(i) from the discrete distribution given by w_{t−1}
5.     Sample x_t^i from p(x_t | x_{t−1}, u_{t−1}) using x_{t−1}^{j(i)} and u_{t−1}
6.     Compute importance weight w_t^i = p(z_t | x_t^i)
7.     Update normalization factor η = η + w_t^i
8.     Insert S_t = S_t ∪ {⟨x_t^i, w_t^i⟩}
9.   For i = 1 … n
10.    Normalize weights w_t^i = w_t^i / η
11.  Return S_t
The particles are drawn from the proposal distribution:

draw $x_{t-1}^i$ from $Bel(x_{t-1})$
draw $x_t^i$ from $p(x_t \mid x_{t-1}^i, u_{t-1})$

Importance factor for $x_t^i$:

$$w_t^i = \frac{\text{target distribution}}{\text{proposal distribution}} = \frac{\eta\, p(z_t \mid x_t)\, p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})}{p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})} \propto p(z_t \mid x_t)$$

where

$$Bel(x_t) = \eta\, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$
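The particle_filter algorithm above can be sketched for a 1-D robot. The Gaussian motion and sensor noise models (σ = 0.1 and σ = 0.5) and the commands below are illustrative assumptions:

```python
import math
import random

def particle_filter(samples, u, z):
    """One update of the particle filter above; samples = [(x, w), ...]."""
    xs, ws = zip(*samples)
    # Step 4: resample indices j(i) from the discrete distribution w_{t-1}
    picked = random.choices(xs, weights=ws, k=len(xs))
    new = []
    for xp in picked:
        # Step 5: sample x_t^i from p(x_t | x_{t-1}, u) -- assumed Gaussian
        x = xp + u + random.gauss(0.0, 0.1)
        # Step 6: importance weight w_t^i = p(z_t | x_t^i) -- assumed Gaussian
        w = math.exp(-0.5 * ((z - x) / 0.5) ** 2)
        new.append((x, w))
    eta = sum(w for _, w in new)             # Step 7: normalization factor
    return [(x, w / eta) for x, w in new]    # Step 10: normalize weights

random.seed(0)
S = [(random.uniform(-5.0, 5.0), 1.0) for _ in range(500)]   # uniform prior
S = particle_filter(S, u=1.0, z=2.0)   # move one unit, sense position ~2
mean = sum(x * w for x, w in S)        # weighted posterior mean, near z
```

After one update the weighted mean already sits near the measurement, since the prior was uninformative.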
(Figures: Particle Filter Algorithm walkthrough; Motion Model Reminder; Proximity Sensor Model Reminder — laser sensor vs. sonar sensor.)
Initial Distribution

After Incorporating Ten Ultrasound Scans

After Incorporating 65 Ultrasound Scans

Estimated Path

Localization for AIBO robots
Limitations

• The approach described so far is able to track the pose of a mobile robot and to globally localize the robot.
• How can we deal with localization errors (i.e., the kidnapped robot problem)?
Approaches

• Randomly insert samples (the robot can be teleported at any point in time).
• Insert random samples proportional to the average likelihood of the particles (the robot has been teleported with higher probability when the likelihood of its observations drops).
Global Localization

Kidnapping the Robot
Summary
• Particle filters are an implementation of recursive Bayesian filtering
• They represent the posterior by a set of weighted samples.
• In the context of localization, the particles are propagated according to the motion model.
• They are then weighted according to the likelihood of the observations.
• In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.