
Development of a Low-Cost, Self-Contained, Combined Vision and Inertial Navigation System

Justin M. Barrett, Michael A. Gennert, William R. Michalson

Robotics Engineering Program Worcester Polytechnic Institute

Worcester, MA

Abstract-In the field of autonomous vehicle research, maintaining accurate estimates of a vehicle's velocities, position, and orientation is crucial to successful navigation. Normally, this information can be obtained through the use of a GPS system. However, GPS signals are not always available in some of the more exotic or dangerous locations that a vehicle might be dispatched to. This paper proposes a hybrid navigation system for autonomous vehicles that uses inertial sensors and a stereo camera system to accurately determine the velocities, position, and orientation of a vehicle. Such a system would be ideal for vehicles that have to operate in underground, indoor, extraterrestrial, or other GPS-denied environments.

Keywords-Extended Kalman Filter; Optical Flow; IMU Calibration; Navigation; GPS Denied

I. INTRODUCTION

As the desire for unmanned vehicles with ever higher degrees of autonomy grows, so does the need for navigation systems that can produce highly accurate estimates of a vehicle's location. In many cases the primary method of achieving long-term accurate navigation for unmanned vehicles is the Global Positioning System (GPS), which is capable of determining the position of a vehicle to within approximately three meters of its true position. More accuracy can be achieved by using local differential corrections or by time-averaging data, but even with these techniques the majority of civilian-grade differential GPS receivers are only capable of one-meter accuracy in real time. While GPS provides a convenient and simple way of determining the global location of an unmanned vehicle, it does have one major drawback: GPS receivers are only ideally suited to work in relatively open outdoor areas. They do not work reliably in environments with poor satellite visibility such as heavily wooded areas, urban areas, underwater, caves, or inside buildings. Because of this drawback, unmanned vehicles trying to navigate through these environments are unable to use GPS for navigation and must use other types of navigation sensors to determine information about their location.


Michael D. Audi, Julian L. Center, Jr., James F. Kirk

Autonomous Exploration, Inc. Andover, MA


Bryan W. Welch
NASA Glenn Research Center, Cleveland, OH


For the purpose of GPS-denied navigation, Autonomous Exploration, Inc. and Worcester Polytechnic Institute have developed a hybrid navigation system that uses a combination of inertial and visual sensors. The primary navigation sensor is a low-cost inertial measurement unit (IMU) that measures accelerations and angular velocities in three dimensions, and the secondary navigation sensor is a stereo camera system that supplies Earth-referenced velocity and position fixes. Inertial and visual sensors were chosen because their sensing abilities inherently complement each other. IMUs can track high-frequency motions and can return data at a high sampling rate, which makes them ideal for navigation purposes, but their measurements drift over time. Cameras, on the other hand, return data at a relatively slow sampling rate because of the extra computation required to process image data, but their measurements do not drift over time. These characteristics make inertial and visual sensors ideally suited to work together.

The following sections of this paper will describe the design of the navigation system in more detail, explain the methodology used to test the system, and analyze the results from a proof of concept test conducted in a real-world environment.

II. SYSTEM DESIGN

A block diagram of the hybrid navigation system can be seen in Figure 1. The system is characterized as tightly coupled because its two primary sensors directly aid each other. The basic operation of the system can be described by the following steps. The IMU supplies accelerations and angular velocities, which are corrected to remove biases, scale factors, and misalignments before being integrated into linear velocity, position, and orientation information. Stereo camera images are used to estimate the inverse depth to all of the elements in a scene, and those inverse depth measurements are fed into the optical flow calculations to distinguish linear movements from angular movements. The vision calculations are aided by predictions of how the images should have changed based on the inertial navigation information provided by the IMU.
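To make the inertial half of this pipeline concrete, the following is a minimal sketch of the correction and integration steps just described. It is illustrative only: the function and variable names are hypothetical, the bias vectors and correction matrices are assumed to come from the calibration process of Section II.A, and the real system operates on a full error-state model rather than this simple first-order update.

```python
import numpy as np

def correct_imu(raw_accel, raw_gyro, bias_a, bias_g, M_a, M_g):
    """Remove biases, then undo scale factors and axis misalignments.
    M_a and M_g are assumed to be the inverse scale/misalignment
    matrices estimated during calibration (see Section II.A)."""
    return M_a @ (raw_accel - bias_a), M_g @ (raw_gyro - bias_g)

def integrate_step(p, v, R, accel, gyro, dt):
    """One strapdown step: rotate the body-frame specific force into the
    navigation frame, remove the effect of gravity, and integrate
    velocity, position, and orientation. R is the body-to-nav rotation."""
    g = np.array([0.0, 0.0, -9.81])     # gravity in the nav frame (z up)
    a_nav = R @ accel + g               # specific force + gravity = acceleration
    v_new = v + a_nav * dt
    p_new = p + v * dt + 0.5 * a_nav * dt**2
    wx, wy, wz = gyro * dt              # small body rotation over this step
    dR = np.array([[0.0, -wz,  wy],
                   [ wz, 0.0, -wx],
                   [-wy,  wx, 0.0]])
    R_new = R @ (np.eye(3) + dR)        # first-order orientation update
    return p_new, v_new, R_new
```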


Figure 1. Overview of the System Signal Flow [1]

A Bayesian Information Filter, which in this system is an Extended Kalman Filter (EKF), processes the optical flow information and uses it to estimate and correct for the long-term drifts that appear in the IMU data. The end result is a more accurate estimate of the location of the vehicle [1].

Additionally, in future iterations of the navigation system, prominent features detected in the images will be characterized and tracked by the vision system. The locations of these detected features will be used as an additional source of navigation information to improve the EKF estimates. Ultimately, the information from the EKF will be used to simultaneously localize the vehicle and build a map of its environment [1].

The following subsections explain the core elements of the navigation system in more detail. The detailed subsections include IMU Calibration, Optical Flow and Vision Corrections, and the Extended Kalman Filter.

A. IMU Calibration Process

Ideally, if IMUs were completely free of errors, then the process of converting IMU data into navigation data would be as simple as subtracting out the effects of gravity and integrating the accelerations and angular velocities into velocities, positions, and orientations. However, IMUs are not free of errors, so the errors must be accounted for using a calibration process. There are a wide variety of errors embedded in IMU data, but for this system the following types of errors were deemed the most important to address in the calibration process.

1) Measurement Noise

One of the biggest sources of error in IMU navigation data is random measurement noise. In order to accurately estimate corrections for the inertial data, the IMU measurement noise must be accurately modeled in the EKF. The measurement noise is modeled using two processes: Allan Variance and Power Spectral Density. Allan Variance is a measure of the variance of time averages of data versus the time used to compute the average, while Power Spectral Density is a measure of power versus frequency for a time series of data. If these quantities are computed using a sample of static IMU data (data that represents only IMU noise) then the results can be used to determine an appropriate model for the IMU measurement noise [2].
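As a rough illustration of the Allan Variance computation (the Power Spectral Density half can be obtained with a standard Welch estimate), the sketch below averages a static record in non-overlapping clusters and takes half the mean squared difference of successive cluster means. The sampling rate and noise parameters in the demo are invented for illustration, not measured values from this IMU.

```python
import numpy as np

def allan_variance(data, dt, m_list):
    """Allan variance of a static sensor record: for each cluster size m
    (averaging time tau = m*dt), average the data in non-overlapping
    clusters, then take half the mean squared difference of successive
    cluster averages."""
    results = []
    for m in m_list:
        k = len(data) // m
        means = data[: k * m].reshape(k, m).mean(axis=1)
        results.append((m * dt, 0.5 * np.mean(np.diff(means) ** 2)))
    return results

# Demo on synthetic "static IMU" noise: white noise plus a slow random walk
rng = np.random.default_rng(0)
n, dt = 200_000, 0.01                       # 100 Hz sampling rate, assumed
record = rng.normal(0.0, 0.02, n) + np.cumsum(rng.normal(0.0, 1e-5, n))
for tau, avar in allan_variance(record, dt, [1, 10, 100, 1000, 10_000]):
    print(f"tau = {tau:9.2f} s   AVAR = {avar:.3e}")
```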

2) Biases

Biases are another common type of IMU error. A bias is simply an offset that appears in the IMU data. If biased IMU data is directly integrated into navigation information, then the biases cause very large errors to appear very quickly. As was the case with measurement noise, biases must be properly modeled so that the EKF can track and remove them from the IMU data. In most cases, biases can be modeled as a static bias combined with a first-order Markov Process with a large time constant. In order to develop an accurate model of the biases, a large sample of static IMU data must be taken so that an accurate depiction of the bias behavior can be observed. It is also important to note that the initial value of the biases can change when the power is cycled on the IMU. This initial bias variability must also be modeled so that it can be incorporated into the model of the IMU noise used by the EKF [2].
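A minimal simulation of the bias model just described (a static turn-on bias plus a first-order Gauss-Markov process) might look like the sketch below; the time constant and standard deviations are placeholders, not values measured for this IMU.

```python
import numpy as np

def simulate_bias(n_steps, dt, tau, sigma_gm, sigma_static, rng):
    """Static turn-on bias (re-drawn on every power cycle) plus a
    first-order Markov process with time constant tau. sigma_gm is the
    steady-state standard deviation of the Markov component."""
    phi = np.exp(-dt / tau)                   # discrete-time decay factor
    q = sigma_gm**2 * (1.0 - phi**2)          # driving-noise variance
    b_static = rng.normal(0.0, sigma_static)  # turn-on bias for this run
    b, out = 0.0, np.empty(n_steps)
    for k in range(n_steps):
        b = phi * b + rng.normal(0.0, np.sqrt(q))
        out[k] = b_static + b
    return out

# One hour of simulated gyro bias at 100 Hz with a 300 s correlation time
bias = simulate_bias(360_000, dt=0.01, tau=300.0, sigma_gm=0.002,
                     sigma_static=0.01, rng=np.random.default_rng(1))
```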

3) Scale Factors and Misalignments

IMU errors can also be caused by scale factors in the IMU data and physical misalignment of the IMU's sensing axes. These terms usually cause the IMU data to be offset by a specific multiplicative factor. In order to compensate for them, tests must be conducted to determine the magnitude of the scale factors and misalignments so that the raw IMU data can be corrected. The tests used to determine these quantities are often referred to as multi-angle tests and rate table tests. Multi-angle tests use gravity as a known source of constant acceleration to test each axis of the accelerometer as its orientation with respect to gravity is changed. By comparing the data that the accelerometer actually returned against the data that it ideally should have returned, the scale factors and misalignments can be determined. For the gyroscope, a similar test is conducted using a rate table that rotates the gyroscope around each axis of rotation using a known angular velocity profile. This allows the scale factors and misalignments to be calculated for the gyroscopes as well, using the same method described above for the accelerometers [2].
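The multi-angle data reduction can be cast as an ordinary least-squares fit. The sketch below is an assumption about the data layout rather than the paper's exact procedure: it fits an affine model y = Cx + b in which the diagonal of C carries the scale factors and the off-diagonal terms carry the misalignments.

```python
import numpy as np

def fit_scale_misalignment(measured, reference):
    """Least-squares fit of y = C @ x + b from a multi-angle test.
    'reference' (n x 3) holds the ideal specific-force vectors (gravity
    resolved into the sensor axes at each orientation); 'measured' (n x 3)
    holds what the accelerometer actually returned."""
    n = reference.shape[0]
    X = np.hstack([reference, np.ones((n, 1))])      # rows are [x^T, 1]
    theta, *_ = np.linalg.lstsq(X, measured, rcond=None)
    C = theta[:3].T          # 3x3 scale-factor / misalignment matrix
    b = theta[3]             # 3-vector of accelerometer biases
    return C, b

# Raw data is then corrected with x_hat = inv(C) @ (y - b); the same fit
# applies to rate-table gyro data with known angular velocity profiles.
```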

4) Temperature Sensitivity

The final source of error deals with the effect that ambient temperature has on the IMU biases. The MEMS-based accelerometers and gyroscopes that make up the IMU are sensitive to temperature, and variations in the ambient temperature of the IMU's environment can affect the value of the IMU biases. The gyroscopes in particular are very sensitive to temperature. This issue is addressed by using a controllable heat chamber to expose the IMU to specific temperatures and then observing how the biases react. If the IMU is exposed to a range of temperatures, then a look-up table can be created that relates specific ambient temperatures to specific IMU bias values. Then, using the temperature sensor embedded in the IMU, the navigation software can check the temperature of the IMU and apply an appropriate correction from the look-up table that will compensate for any bias variations that are caused by the ambient temperature of the IMU's environment [2].
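At run time, a look-up-table correction of this kind reduces to a one-dimensional interpolation. The table values below are invented for illustration; real entries would come from the heat-chamber characterization.

```python
import numpy as np

# Hypothetical heat-chamber results: ambient temperature (deg C) versus the
# observed bias of one gyro axis (rad/s). Real values come from calibration.
table_temps = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
table_bias = np.array([0.0021, 0.0015, 0.0012, 0.0016, 0.0024])

def temperature_bias(temp_c):
    """Interpolate the look-up table at the IMU's embedded temperature
    sensor reading to get the bias correction to subtract."""
    return np.interp(temp_c, table_temps, table_bias)

corrected_rate = 0.0300 - temperature_bias(27.5)   # example raw gyro sample
```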


B. Optical Flow and Vision Corrections

Once data from the IMU has been used to estimate the vehicle's location, corrections need to be applied to that estimate to compensate for long-term drifts in the IMU data. Those corrections are provided by a stereo camera system which is used to detect errors in the estimated linear and angular velocities of the vehicle relative to the Earth. Those velocity error estimates are subsequently fed into the EKF as described earlier, which in turn produces corrections for the IMU navigation information. This section will focus on how the stereo camera system is able to determine the errors in the linear and angular velocities of the vehicle using a process called optical flow.

Several assumptions must be made in order for the optical flow process to be viable. First, it is assumed that the scene the cameras are viewing is essentially static over the small time period between successive stereo camera images ($\Delta t$). This means that any differences between two successive stereo images will have resulted purely from the motion of the vehicle over the time period $\Delta t$. Second, it is assumed that the brightness of a point remains basically constant even if the viewing angle is changed slightly. This is known as the Brightness Constancy Constraint [3].

The basic optical flow algorithm follows these steps (for a detailed explanation of how the following equations were derived, please refer to [3]). First, the inverse depth at every pixel in a pair of stereo images is calculated. Using this inverse depth information, the IMU data is used to predict how a previous image from one of the cameras should transform due to vehicle motion over a period $\Delta t$, and this prediction is differenced with the currently observed image from the same camera. Relying on the Brightness Constancy Constraint, it is assumed that the difference in the images can be attributed to errors in predicting the vehicle's motion. The change in the projection of a specific point in space onto the image plane can be mathematically related to the linear and angular velocities of the vehicle by using Equation (1) below:

$$\underline{\dot{u}} = -\underline{\omega} \times \underline{x} - f\rho\,\underline{v} + \frac{1}{f}\left[\underline{\hat{z}} \cdot \left(\underline{\omega} \times \underline{x} + f\rho\,\underline{v}\right)\right]\underline{x} \qquad (1)$$

where $\underline{\dot{u}} = [\dot{x}\ \ \dot{y}\ \ 0]^T$ is the change in the projection of the point, $\underline{x} = [x\ \ y\ \ f]^T$ is that projection on the image plane, $\rho$ is the inverse depth of the point, $f$ is the focal length of the camera, $\underline{\omega}$ is the angular velocity of the vehicle and $\underline{v}$ is the linear velocity of the vehicle. Equation (1) can then be further simplified and broken out into its individual components [3]:

$$\underline{\dot{u}} = \begin{bmatrix} \dfrac{xy}{f}\,\omega_x - \left(f + \dfrac{x^2}{f}\right)\omega_y + y\,\omega_z - f\rho\,v_x + x\rho\,v_z \\[2ex] \left(f + \dfrac{y^2}{f}\right)\omega_x - \dfrac{xy}{f}\,\omega_y - x\,\omega_z - f\rho\,v_y + y\rho\,v_z \\[2ex] 0 \end{bmatrix} \qquad (2)$$
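As a sketch of how Equation (2) would be evaluated in practice (using the sign conventions of the reconstruction above, and vectorizing over whole pixel grids), consider:

```python
import numpy as np

def predicted_flow(x, y, inv_depth, f, omega, v):
    """Evaluate Equation (2): the predicted image-plane motion (xdot, ydot)
    of points at pixel coordinates (x, y) with inverse depth rho, given the
    vehicle's angular velocity omega and linear velocity v. All of x, y,
    and inv_depth may be full image grids."""
    wx, wy, wz = omega
    vx, vy, vz = v
    rho = inv_depth
    xdot = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz \
           - f * rho * vx + x * rho * vz
    ydot = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz \
           - f * rho * vy + y * rho * vz
    return xdot, ydot
```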

The next step in the optical flow process relies on the Brightness Constancy Constraint, which states how the time derivative of image intensity can be related to image motion:

$$I_t(\underline{x}) = -\nabla I(\underline{x}) \cdot \underline{\dot{u}} \qquad (3)$$

where $I(\underline{x})$ is the image intensity at the pixel location $\underline{x}$, $I_t(\underline{x})$ is the time derivative of $I(\underline{x})$ and $\nabla I(\underline{x}) = \left[\dfrac{\partial I(\underline{x})}{\partial x}\ \ \dfrac{\partial I(\underline{x})}{\partial y}\ \ 0\right]$. By substituting the value of $\underline{\dot{u}}$ calculated in Equation (1) into the Brightness Constancy Constraint equation, the following equation can be created that shows how the time derivative of the image intensity at a point (optical flow) can be related to the linear and angular velocities of the vehicle:

$$I_t = \left(-\frac{y}{f}J - fI_y\right)\omega_x + \left(\frac{x}{f}J + fI_x\right)\omega_y + \left(xI_y - yI_x\right)\omega_z + f\rho I_x v_x + f\rho I_y v_y - \rho J v_z \qquad (4)$$

where $J = xI_x + yI_y$ and $I_x$, $I_y$ are the spatial derivatives of the image intensity. $I_t$ represents the difference of two sequential images taken $\Delta t$ seconds apart, divided by $\Delta t$ [3].
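Equation (4) is linear in the six unknown velocity components, so stacking it over every pixel yields an overdetermined linear system. The least-squares sketch below illustrates that structure; the actual system estimates these quantities within the Bayesian framework of [3] rather than with a plain pseudoinverse.

```python
import numpy as np

def velocity_from_flow(It, Ix, Iy, x, y, inv_depth, f):
    """Stack Equation (4) over all pixels and solve the resulting linear
    least-squares problem for (wx, wy, wz, vx, vy, vz). It, Ix, Iy are the
    temporal and spatial image derivatives, x and y are pixel coordinate
    grids, and inv_depth is the stereo inverse-depth map."""
    J = x * Ix + y * Iy
    rho = inv_depth
    A = np.stack([-(y / f) * J - f * Iy,    # coefficient of wx
                   (x / f) * J + f * Ix,    # coefficient of wy
                   x * Iy - y * Ix,         # coefficient of wz
                   f * rho * Ix,            # coefficient of vx
                   f * rho * Iy,            # coefficient of vy
                  -rho * J],                # coefficient of vz
                 axis=-1).reshape(-1, 6)
    sol, *_ = np.linalg.lstsq(A, It.ravel(), rcond=None)
    return sol                              # [wx, wy, wz, vx, vy, vz]
```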

These equations show how sequential stereo images can be used to determine information about the linear and angular velocities of a vehicle. The velocity information produced by this process is fed into the EKF to produce corrections for the IMU navigation information. For more information regarding the equations used to extract the navigation errors from the optical flow information and how they are used in the EKF, please refer to [3].

C. The Extended Kalman Filter (EKF)

The heart of the hybrid navigation system is the Extended Kalman Filter (EKF). The EKF uses estimates of the linear and angular velocity errors from the optical flow calculations to correct the IMU navigation information. This correction process is what allows the system to achieve a level of accuracy that is greater than what it could achieve using either one of its sensors individually [1]. The signal flow diagram of the EKF can be seen in Figure 2.

Figure 2. The signal flow diagram of the Extended Kalman Filter, showing the error dynamics matrix F and the measurement distribution matrix H [4]


The EKF works by iteratively cycling through two basic steps: a propagation step and an update step. In the propagation step, linear acceleration and angular velocity measurements are used to track the high-frequency motion of the vehicle, and linearized versions of the navigation equations are used to predict how errors will propagate between vision measurements. The statistics of the navigation errors are characterized by the covariance matrix, which indicates how confident the EKF is in its estimates. In the update step, the estimates of the linear and angular velocity errors in the IMU navigation information are received from the vision system via the optical flow calculations. These error estimates are multiplied by the Kalman gain matrix, which is calculated using the covariance matrix from the propagation step. The Kalman gain matrix distributes the vision-based error corrections to produce corrections for all of the states of the navigation equations. This final estimate of the navigation errors is then used to correct the navigation information produced from the IMU data, which results in a more accurate estimate of the current velocities, position, and orientation of the vehicle. The last action performed during the update step is to adjust the EKF covariance to reflect the reduction in the navigation errors [1].
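The two steps reduce to the standard EKF recursion, sketched below in generic form rather than with this system's exact state layout. The Joseph-form covariance update is used here because it is more robust to exactly the kind of numerical precision problems discussed in Section V.

```python
import numpy as np

def ekf_propagate(x, P, F, Q):
    """Propagation step: push the error state and its covariance forward
    with the linearized dynamics F; Q models the IMU noise."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Update step: fold in a vision-based velocity-error measurement z.
    The Kalman gain K distributes the correction across all states, and
    the covariance shrinks to reflect the reduced uncertainty."""
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain matrix
    x = x + K @ (z - H @ x)
    I_KH = np.eye(P.shape[0]) - K @ H
    P = I_KH @ P @ I_KH.T + K @ R @ K.T        # Joseph form (numerically safer)
    return x, P
```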

The two steps described above are repeated indefinitely, and over time the EKF will continue to provide corrections for the inertial navigation information based on the most recent stereo vision information that is available. These corrections will help keep the navigation information in line with what the system is actually experiencing by compensating for errors such as long-term accelerometer and gyroscope bias drift. By compensating for these errors, the velocity and position estimates produced by the system will be much more accurate than any estimates that could have been produced by the IMU or the stereo camera system alone [1].

III. METHODOLOGY

In order to test the performance of the proposed navigation system, a prototype was constructed using an ITX format computer with an Intel Atom D525 processor, two Point Grey Firefly MV FMVU-03MTM cameras with 4mm F1.2 fixed-iris lenses, and an ADIS16364 6DOF IMU from Analog Devices, Inc. The sensors are housed in a custom-made acrylic sensor head that bolts onto the front of the ITX computer case. For testing, the navigation system was mounted on a Clearpath Husky A100 six-wheeled UGV running software that allowed a driver to manually control its forward and turning velocities from a netbook over a wireless network. A picture of the prototype system mounted on the Husky A100 can be seen in Figure 3.

When selecting a testing ground, it was decided that a location should be chosen that best represents a range of typical outdoor environments. This ideal testing environment should include features such as uneven terrain, dynamic/complex visual backgrounds, and natural landmarks. The location that was ultimately selected was the Wallum Lake Rod and Gun Club in Harrisville, RI. The procedure for conducting the tests was as follows:

Figure 3. Prototype navigation system mounted on the Clearpath Husky A100 UGV

• A reference target (seen in Figure 4) was set up at the beginning of the course. The target contained a camera calibration grid on it, was adjusted to be level and plumb, and had a known global orientation.

• The vehicle was placed in a known starting position such that the reference target was ideally located in the stereo cameras' field of view. The location of the vehicle with respect to the reference target was known and was confirmed by optical survey.

• Two GPS receivers were set up. One stationary receiver was set up near the reference target, and the second was mounted to the vehicle. Using the data from both receivers, a set of differential GPS measurements could be created to serve as a source of truth data for the vehicle (see the sketch following this list).

• When the test first began, the vehicle was kept stationary for a few minutes in order to determine the orientation of the IMU with respect to the gravity vector and to estimate the heading of the vehicle using the reference target. Then the vehicle was manually driven along a known course, stopping briefly every two minutes. Each time the vehicle stopped, a waypoint marker was put into the ground to mark the vehicle's location (seen in Figure 5), and the time of the stoppage was recorded using a stopwatch. The vehicle then continued along the course for another two minutes, at which time another waypoint marker was added. The waypoint markers act as another source of truth data that can be used in addition to the GPS data from the receiver on the vehicle. The vehicle continued around the course until it arrived back at its original starting position. It was then kept stationary again to record more images of the reference target before being powered down and concluding the test run.
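As a simplified sketch of how the two receivers combine, consider the position-domain differencing below. This is illustrative only: production differential GPS applies per-satellite pseudorange corrections rather than position differences, and the surveyed base position here is a placeholder.

```python
import numpy as np

BASE_SURVEYED = np.array([0.0, 0.0, 0.0])   # known base-station position (local frame)

def differential_fix(rover_measured, base_measured):
    """Subtract the base station's momentary position error (its measured
    position minus its surveyed position) from the rover's measured
    position, assuming both receivers see similar atmospheric errors."""
    return rover_measured - (base_measured - BASE_SURVEYED)
```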

Using the test procedure described above, inertial data, stereo camera images, and GPS data were collected from several test runs at the Wallum Lake test course. That data is currently being analyzed to determine how well the navigation system performs in real-world environments.


Figure 4. The reference target used during navigation system testing

Figure 5. One of the orange washers used to mark waypoints during the course of a test run

IV. RESULTS

This section depicts the system validation results that have been obtained to date. The stereo image data is still in the process of being analyzed, so the system is currently being tested using a combination of IMU and GPS data. Figure 6 depicts an image of the ground truth model that has been constructed based on the GPS data. The data displayed in the figure is as follows. The blue line is the GPS data from the receiver located on the vehicle, the red line is the GPS data that was deemed to be good enough to be usable for position fixes, the red Xs are the time-averaged GPS locations of the waypoint markers dropped behind the vehicle, and the green Os are the locations of the waypoints according to the GPS receiver on the vehicle.

V. DISCUSSION

Overall, the current state of the system is that the testing procedures for gathering data have been successful, but the implementation of the navigation algorithms has suffered some setbacks. Numerical precision issues in computing the EKF covariance matrices have degraded the performance of the filter. Because of this, the ability to generate accurate position fixes based on the GPS data has not yet been successfully implemented. However, once the implementation issues with the EKF have been dealt with, the system should be able to accurately correct the IMU location estimates using the GPS data. From there, the next step will be to test the optical flow code that generates linear and angular velocity fixes and then subsequently integrate those velocity fixes into the EKF in place of the GPS position fixes.

There were also some issues with the ground truth model that had to be dealt with. The overall quality of the GPS data was impaired by the way in which the GPS receiver was mounted to the vehicle: the antenna's view of the satellites was obstructed during certain sections of the test course. The receiver was also subject to several different sources of multi-path reflections that most likely caused errors in the data as well. Because of this, specific sections of the GPS data contain large amounts of error and therefore cannot be used to generate accurate position fixes. These sections of data would not be fed into the EKF algorithm; only the sections of GPS data that were shown to have a small amount of position error would be used to generate position fixes for the IMU. The sections of GPS data that were deemed accurate enough to be used in the EKF are highlighted in red in Figure 6.

The stereo camera system has also presented some issues with camera calibration and with compensating for different lighting conditions. To perform camera calibration, an OpenCV function is used in conjunction with a grid-of-squares calibration target to eliminate distortions caused by the camera optics and lenses, as well as to correct for misalignments between the two stereo cameras. Determining these calibration parameters can be difficult because most of the calibration testing has been done indoors under man-made light, while ultimately most of the vehicle testing will be done outdoors under natural light. Also, transporting the prototype system to the testing area can cause the cameras to become misaligned. Because of these issues, it may be necessary to perform camera calibrations in the field before each test run.

Different lighting conditions have also been a source of problems. Indoor lighting is relatively constant and predictable, but natural light is much more difficult to predict. Normally, the cameras use an automatic exposure control algorithm to deal with varying lighting conditions. However, because the two cameras are synchronized and configured to use their global shutter mode, the automatic exposure control is disabled. Combined with the fact that the lenses have fixed irises, this means that bright lighting conditions result in overexposure and image saturation. To compensate for this problem, a custom exposure control algorithm has been written to automatically adjust the exposure time of the stereo vision cameras based on the ambient lighting conditions. The images are also filtered using a bandpass filter to remove low frequency exposure variations and high frequency compression artifacts. This has helped to compensate for the inherently dynamic lighting conditions of the outdoor test course.
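One common way to realize such a bandpass filter is a difference of Gaussians, as sketched below. The sigma values are illustrative assumptions, not the parameters used by the system.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(image, sigma_low=1.0, sigma_high=20.0):
    """Difference-of-Gaussians bandpass: the narrow blur suppresses
    high-frequency compression artifacts, and subtracting the wide blur
    removes low-frequency exposure variation across the image."""
    img = image.astype(np.float64)
    return gaussian_filter(img, sigma_low) - gaussian_filter(img, sigma_high)
```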


Figure 6. Ground truth model based on vehicle GPS data and time-averaged waypoint GPS data (X and Y position in meters; legend: GPS data, usable GPS data, waypoint markers, robot waypoints)

VI. FUTURE WORK

As the research on this system continues, the following items will be addressed in an attempt to improve the quality of the system or provide the system with additional functionality.

• Continue to analyze the data from the outdoor test runs and further assess the functionality of the navigation system algorithms

• Tweak the navigation algorithms as necessary to achieve the best possible results

• Integrate the navigation system with the Clearpath Husky driving software so that the Husky can drive itself autonomously to specified waypoints

• Port the system code to Linux and ROS for real-time implementation

• Design a custom PCB for the sensors and processor and create a smaller case for the system

VII. CONCLUSION

This paper showed the need for and explained the design of a hybrid navigation system that uses an IMU and a stereo camera system to estimate the location of an unmanned vehicle. The information from the IMU is corrected by feeding information from the optical flow calculations into an Extended Kalman Filter and estimating errors in the IMU navigation information. Currently a prototype system has been constructed and mounted on a Husky A100 UGV, and several data collection tests have been run in an outdoor environment that represents the types of operational environments that the system is most likely to encounter. The next step in the system development will be to take the collected data and post-process it to see how well the navigation algorithms estimate the position and orientation of the vehicle with respect to the GPS and/or optical survey truth data. If all goes well, the project can move on to the next series of tasks, which include tweaking the navigation algorithms to increase performance, porting the system code to Linux and ROS, and optimizing the system hardware to fit into a more efficient and compact form factor.

ACKNOWLEDGMENTS

We would like to thank NASA for funding this project through an SBIR grant (Contract Number: NNX11CA57C) and The Wallum Lake Rod and Gun Club for allowing us to test our system on their grounds.

REFERENCES

[1] J. L. Center, Jr., "Calibrating and Aligning a Low-Cost Vision-Inertial Navigation System," Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Waterloo, Canada, July 2011.

[2] J. M. Barrett, M. A. Gennert, W. R. Michalson, J. L. Center, Jr., "Analyzing and Modeling an IMU for use in a Low-Cost Combined Vision and Inertial Navigation System," Technologies for Practical Robot Applications, Woburn, MA, INSPEC Accession Number 12804337, 2012.

[3] J. L. Center, Jr., K. H. Knuth, "Bayesian Visual Odometry," Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Chamonix, France, 2010.

[4] J. L. Center, Jr., "Method for Combining Machine Vision and Inertial Measurements to Provide Accurate Motion Tracking, Localization, and Navigation," unpublished.