Qualitative Vision-Based Mobile
Robot Navigation
Zhichao Chen and Stanley T. Birchfield
Dept. of Electrical and Computer Engineering
Clemson University
Clemson, South Carolina USA
Motivation
• Goal: Enable mobile robot to follow a desired trajectory in both indoor and outdoor environments
• Applications: courier, delivery, tour guide, scout robots
• Previous approaches:
• Image Jacobian [Burschka and Hager 2001]
• Homography [Sagues and Guerrero 2005]
• Homography (flat ground plane) [Liang and Pears 2002]
• Man-made environment [Guerrero and Sagues 2001]
• Calibrated camera [Atiya and Hager 1993]
• Stereo cameras [Shimizu and Sato 2000]
• Omni-directional cameras [Adorni et al. 2003]
Our approach
• Key intuition: Vastly overdetermined system (dozens of feature points, one control decision)
• Key result: Simple control algorithm
– Teach / replay approach using sparse feature points
– Single, off-the-shelf camera
– No calibration for camera or lens
– Easy to implement (no homographies or Jacobians)
Preview of results
Tracking feature points
Kanade-Lucas-Tomasi (KLT) feature tracker:
• Automatically selects features using the eigenvalues of the 2×2 gradient covariance matrix
• Automatically tracks features by minimizing the sum of squared differences (SSD) between consecutive image frames
• Augmented with gain and bias to handle lighting changes
• Open-source implementation
The tracker minimizes the symmetric SSD over the feature window W:

    ε = ∬_W [ J(x + d/2) − I(x − d/2) ]² dx

where I and J are consecutive gray-level images and d is the unknown displacement. Features are selected using the eigenvalues of the 2×2 gradient covariance matrix

    Z = ∬_W g(x) g(x)ᵀ dx

where g(x) is the gradient of the image.

[http://www.ces.clemson.edu/~stb/klt]
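The selection criterion above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the window size, gradient operator, and function names are my choices, not the KLT library's.

```python
import numpy as np

def box_sum(a, win):
    """Sliding-window sum over (win x win) windows via 2-D cumulative
    sums; returns only fully valid window positions."""
    c = np.cumsum(np.cumsum(a, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]

def klt_goodness(img, win=7):
    """Smaller eigenvalue of Z = sum_W g(x) g(x)^T at every window
    position -- the score a KLT-style selector thresholds to pick
    trackable features (large at corners, ~0 on edges and flat areas)."""
    img = np.asarray(img, dtype=np.float64)
    gy, gx = np.gradient(img)           # image gradient g(x)
    Sxx = box_sum(gx * gx, win)         # entries of Z summed over W
    Sxy = box_sum(gx * gy, win)
    Syy = box_sum(gy * gy, win)
    # closed-form smaller eigenvalue of [[Sxx, Sxy], [Sxy, Syy]]
    return 0.5 * (Sxx + Syy - np.sqrt((Sxx - Syy) ** 2 + 4.0 * Sxy ** 2))
```

A flat window scores zero, a window on a straight edge has one near-zero eigenvalue, and only a window containing two gradient directions (e.g. a corner) scores high in both.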
Handling lighting changes
Two causes of lighting change, each shown with the original vs. the modified KLT tracker:
• environmental conditions (clouds blocking the sun)
• automatic gain control of the camera
[Figure: side-by-side video frames, original KLT tracker vs. modified KLT tracker, for each condition.]
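The gain-and-bias augmentation can be illustrated with a least-squares fit between two feature windows. This is a sketch of the idea only (function name and interface are hypothetical); the modified tracker folds the gain α and bias β into the tracking iterations rather than fitting them separately.

```python
import numpy as np

def gain_bias_residual(win_i, win_j):
    """Fit a global lighting change alpha*J + beta ~ I between two
    feature windows by linear least squares, and return the SSD that
    remains after compensation, plus the fitted alpha and beta."""
    I = np.asarray(win_i, dtype=np.float64).ravel()
    J = np.asarray(win_j, dtype=np.float64).ravel()
    A = np.stack([J, np.ones_like(J)], axis=1)     # columns: [J, 1]
    (alpha, beta), *_ = np.linalg.lstsq(A, I, rcond=None)
    r = alpha * J + beta - I                       # compensated residual
    return float(r @ r), float(alpha), float(beta)
```

A pure gain/bias change (e.g. camera auto-gain) leaves essentially zero residual after compensation, while the plain SSD between the same windows can be large.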
Teach-Replay
Teaching phase (start → destination): detect features, then track them as the robot is driven along the path.

Replay phase: track features and compare each current feature against the stored initial and goal features for the segment.
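The teaching-phase bookkeeping can be sketched as follows. The helper and data layout are hypothetical; the actual system selects milestone images while the robot is driven.

```python
def build_segments(tracks, milestones):
    """Record, for each path segment, every feature's image
    x-coordinate at the segment's start (initial feature) and end
    (goal feature).  `tracks[fid]` is the feature's per-frame
    coordinate, with None once the feature is lost; `milestones`
    are the frame indices that bound the segments.  Only features
    visible at both ends of a segment are kept."""
    segments = []
    for start, end in zip(milestones[:-1], milestones[1:]):
        seg = {}
        for fid, us in tracks.items():
            if us[start] is not None and us[end] is not None:
                seg[fid] = {"initial_u": us[start], "goal_u": us[end]}
        segments.append(seg)
    return segments
```

During replay, each segment's stored coordinates are compared against the live tracks to drive the qualitative decision rule.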
Qualitative decision rule

For each feature point (a landmark projected onto the image plane, with horizontal image coordinate u):
• Feature is to the right: |uCurrent| > |uGoal| → “Turn right”
• Feature has changed sides: sign(uCurrent) ≠ sign(uGoal) → “Turn left”
• No evidence → “Go straight”
The robot positions from which a feature provides no evidence form a “funnel lane” that narrows toward the robot's goal position.
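The per-feature rule above fits in a few lines. Note the “right”/“left” labels follow the geometry illustrated on the slide; in general the turn direction also depends on which side of the image the feature sits. The function name is my own.

```python
def feature_vote(u_current, u_goal):
    """One feature's qualitative vote.  u is the feature's horizontal
    image coordinate relative to the image center (u = 0); labels
    follow the slide's example geometry."""
    if u_current * u_goal < 0:            # feature has changed sides
        return "turn left"
    if abs(u_current) > abs(u_goal):      # feature outside the funnel
        return "turn right"
    return "go straight"                  # funnel constraints satisfied
```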
The funnel lane at an angle

When the taught path approaches the landmark at an angle α, the funnel lane is rotated by the same angle, and the same rules apply:
• Feature is to the right → “Turn right”
• Side change → “Turn left”
• No evidence → “Go straight”
The funnel lane created by multiple feature points

With multiple landmarks, the intersection of the individual funnel lanes leaves only a small ambiguous area:
• Feature is to the right → “Turn right”
• Side change → “Turn left”
• No evidence → “Do not turn”
A simplified example
Following a single landmark feature, the robot alternates between “Go straight” (inside the current funnel lane) and corrective “Turn right” / “Turn left” commands, passing through a sequence of funnel lanes until it reaches the goal.
Qualitative control algorithm
Funnel constraints (a feature provides no evidence when both hold):
• |uCurrent| ≤ |uGoal|
• sign(uCurrent) = sign(uGoal)

Voting scheme: each feature violating the constraints votes either “turn right” or “turn left”; majority rules.

End of segment: reached when the mean squared error between current and goal feature coordinates increases.
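The voting scheme and the end-of-segment test can be sketched together. This is an outline only (names are mine): the real controller smooths the error signal and converts the winning vote into a turning command.

```python
from collections import Counter

def control_step(features, prev_mse):
    """One control decision.  `features` maps feature id ->
    (u_current, u_goal).  Each feature votes via the qualitative
    rule, majority rules, and the segment ends when the mean squared
    error between current and goal coordinates starts to increase."""
    if not features:
        return "go straight", 0.0, False
    votes = Counter()
    sq_err = 0.0
    for u_c, u_g in features.values():
        if u_c * u_g < 0:
            votes["turn left"] += 1       # feature changed sides
        elif abs(u_c) > abs(u_g):
            votes["turn right"] += 1      # feature outside the funnel
        else:
            votes["go straight"] += 1     # no evidence
        sq_err += (u_c - u_g) ** 2
    mse = sq_err / len(features)
    end_of_segment = prev_mse is not None and mse > prev_mse
    return votes.most_common(1)[0][0], mse, end_of_segment
```

Because dozens of features vote on a single decision, individual tracking errors are outvoted, which is the "vastly overdetermined system" intuition from earlier.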
Experimental results
Videos available at http://www.ces.clemson.edu/~stb/research/mobile_robot
Experimental results

• Indoor: Imaging Source Firewire camera
• Outdoor: Logitech Pro 4000 webcam
Conclusion

• Approach
– teach-replay, comparing image coordinates of feature points
– qualitative decision rule (no Jacobians, homographies)
• Advantages
– off-the-shelf camera
– no calibration (not even lens distortion)
– simple, easy to implement
– tested in both indoor and outdoor environments
• Future work
– variable driving speed (sharp turns)
– integration with other sensors (odometry, GPS)
– obstacle avoidance