
Tightly-Coupled INS, GPS, and Imaging Sensors for Precision Geolocation

M. Veth, Air Force Institute of Technology
R. Anderson, National Geospatial-Intelligence Agency

F. Webber, Air Force Institute of Technology
M. Nielsen, Air Force Test Pilot School*

BIOGRAPHY

Major Mike Veth is an Assistant Professor in the Department of Electrical and Computer Engineering at the Air Force Institute of Technology. His current research focus is on the fusion of optical and inertial systems. He received his Ph.D. in Electrical Engineering from the Air Force Institute of Technology, and a B.S. in Electrical Engineering from Purdue University. In addition, Major Veth is a graduate of the Air Force Test Pilot School.

Robert C. Anderson has been working for NGA (and its predecessor organizations) specializing in the geodetic science field for the last 18 years. He currently works in the Remote Sensing Division of NGA's Basic and Applied Research Office with a current focus on developing improved surveying and mapping tools and techniques. He has Master of Science degrees from The Ohio State University in Geodetic Science and Southern Illinois University in Mathematics, and is a Ph.D. candidate in Mathematics at St. Louis University.

Fred Webber is currently studying for a Master's degree in Electrical Engineering at the Air Force Institute of Technology. His research focus is in the area of estimation theory. He is a research assistant for the Advanced Navigation Technology Center. He received his Bachelor of Science in Mechanical Engineering and Computer Science from Rose-Hulman Institute of Technology and is a member of AIAA and the Institute of Navigation.

Captain Mike Nielsen is an experimental test pilot assigned to the Air Force Test Pilot School. He is currently studying for a Master's degree in Electrical Engineering at the Air Force Institute of Technology. He has logged over 1000 hours in the F-15E, F-16, T-38, and various other aircraft. Capt Nielsen is a graduate of the Air Force Test Pilot School.

* The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, National Geospatial-Intelligence Agency, Department of Defense, or the U.S. Government.

ABSTRACT

Recent technological advances have significantly improved the capabilities of micro-air vehicles (MAV). This is evidenced by their expanding use by government, police, and military forces. A MAV can provide a real-time surveillance capability to even the smallest units, which provides commanders with a significant advantage.

This capability is a result of the availability of miniaturized autopilot systems which typically combine inertial, pitot-static, and GPS sensors into a feedback flight-control system. While these autopilots can provide an autonomous flight capability, they have some limitations which impact their operational effectiveness. One of the primary issues is poor image geolocation performance, which limits the use of these systems for direct measurements of target locations. This poor geolocation performance is primarily a consequence of the relatively large attitude errors characteristic of low-performance inertial sensors. In previous efforts, we have developed a tightly-coupled image-aided inertial navigation system to operate in areas not serviced by GPS. This system extracts navigation information by automatically detecting and tracking stationary optical features of opportunity in the environment. One characteristic of this system is vastly reduced attitude errors, even with consumer-grade inertial sensors.

In this paper, the benefits of incorporating image-based navigation techniques with inertial and GPS measurements are explored. After properly integrating GPS with the image-aided inertial architecture, the system is tested using a combination of Monte Carlo simulation and flight test data. The flight test data was flown over Edwards AFB using representative hardware. The experimental results are compared with validated truth data. The effects of variations in sensor quality and integration methods are investigated and shown to greatly improve the overall performance of the tightly-coupled image-aided sensor over the reference GPS/INS sensor.



INTRODUCTION

Virtually all UAVs and aircraft are flying with electro-optical (EO), infrared (IR), or video cameras. Even the smallest can now be equipped with an integrated Global Positioning System (GPS) and inertial navigation system (INS). Unfortunately, only a very limited set of vehicles provides the level of navigation and timing precision in the imaging path sufficient for generating high-quality geospatial-intelligence products. This is due to a variety of reasons: cost or payload restrictions that mandate small, low-cost inertial navigation systems; difficulties in precisely timing the metadata to the epoch of the frame exposure; or the expense and complications involved with establishing accurate GPS positions using dual-frequency phase-difference processing with associated communications links.

In order to address the issue of poor georegistration performance for small UAVs, a number of research projects have been conducted. The most prominent approaches address the issue in three primary ways. The first, and probably most obvious, approach is to simply improve the quality of the inertial and GPS sensors until the desired target location accuracy is achieved. The advantage of this approach is its simplicity. The disadvantages are the increased cost and weight requirements for improved sensors (especially gyroscopes), which make this approach of limited appeal for small UAV operations. A variation on this theme was proposed by Hirokawa [5], where three very low-cost GPS sensors with displaced antennas were used to greatly improve the attitude performance of the navigation sensor, resulting in improved geolocation accuracy. The second approach [1] utilizes previously-surveyed reference targets in the images to automatically update and correct the navigation errors in the UAV. This system has the capability to eliminate inertial sensor drift when the aircraft remains in an operations area with visible reference targets. The final approach uses post-flight image registration software to correct each image to a known set of ground control points. This process can be performed manually or automatically and tends to be used in an open-loop, post-flight mode (i.e., navigation error estimates are not fed back to the vehicle for correction).

The motivation for this paper is to determine if the position and orientation of camera frames can be significantly improved by augmenting the camera navigation system with an image-aided algorithm that reliably tracks image features across frames in a robust manner. This implies that the image-aiding algorithms add to the solution across a wide variety of terrain types, thus allowing for additional estimates of the camera position and orientation in the dynamic adjustment. As an additional benefit, the system should be able to continue to navigate in the presence of GPS dropouts.

The proof-of-concept testing will involve both simulations and generation of accuracy estimates, and will conclude with a MAV prototype flown over a test area with visible survey control points, against which the coordinates derived from the system will be compared. Additionally, GeoTIFFs will be generated from the camera/MAV system and compared to control. The overall goal of the project is to deliver a prototype small-UAV imagery collection system to the National Geospatial-Intelligence Agency (NGA) for further testing and use in generating Geo-Intelligence and Geo-Security products. The purpose of this paper is to outline the method used to fuse the image, inertial, and GPS measurements with a simple terrain database. The algorithm is tested using Monte Carlo simulation and experimental flight data collected using prototype hardware. Results and lessons learned will be addressed and incorporated into future designs.

This paper is organized as follows. First, a development of the assumptions and sensor models for the specific problem of interest is presented. In addition, the extended Kalman filter used to tightly couple the image tracking and GPS/INS functionality is described. Next, the performance of the integrated navigation technique is compared to the stand-alone GPS/INS technique using a Monte Carlo simulation and flight test data. Finally, conclusions are drawn regarding the performance of the technique and future work is presented.

DEVELOPMENT

As mentioned in the previous section, the goal of this research effort is to investigate the navigation and target location accuracy improvements achievable by tightly integrating an image-based feature tracking algorithm with GPS and a consumer-grade inertial sensor. In this section, the overall design of the extended Kalman filter [8, 9] is presented along with a description of the sensor models.

Assumptions

This method is based on the following assumptions.

• A strapdown inertial measurement unit (IMU) and GPS antenna are rigidly attached to one or more calibrated cameras. Synchronized raw measurements are available from all sensors.

• The inertial, GPS, and optical sensors' relative position and orientation are known (see [13] for a discussion of boresight calibration procedures).

• The camera images areas in the environment which contain some stationary objects.

• A statistical terrain model is available which provides an initial indication of range to objects in the environment.

Reference Frames

In this paper, four reference frames are used. Variables expressed in a specific reference frame are indicated using superscript notation. The Earth-Centered Earth-Fixed frame (ECEF, or e frame) is a Cartesian system with the origin at the Earth's center, the x^e axis pointing toward the intersection of the equator and the prime (Greenwich) meridian, the z^e axis extending through the North Pole, and the y^e axis completing the orthogonal set (in this paper, a caret denotes a unit vector).

The Earth-fixed navigation frame (n frame) is an orthonormal basis in R^3, with origin located at a predefined location on the Earth, typically very close to the current operations area. The Earth-fixed navigation frame's x, y, and z axes point in the north, east, and down (NED) directions relative to the origin, respectively. Down is defined as the direction of the nominal gravity vector at the origin. In contrast to the body-fixed navigation frame, the Earth-fixed navigation frame remains fixed to the surface of the Earth. While this frame is not useful for very-long-distance navigation, it can simplify the navigation kinematic equations for local navigation routes and is especially appropriate for micro-air vehicle flight profiles. The Earth-fixed navigation frame is shown in Figure 1.
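For readers implementing these frames, the rotation from the ECEF frame into the local NED navigation frame at the frame origin is a standard result; a minimal numpy sketch (not part of the paper's implementation) follows.

```python
import numpy as np

def ecef_to_ned_dcm(lat: float, lon: float) -> np.ndarray:
    """Rotate ECEF (e-frame) vectors into the local north-east-down
    navigation frame at geodetic latitude/longitude (radians)."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([[-sl * co, -sl * so,  cl],
                     [     -so,       co, 0.0],
                     [-cl * co, -cl * so, -sl]])
```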

Figure 1: Earth-fixed navigation frame. The Earth-fixed navigation frame is a Cartesian reference frame which is perpendicular to the gravity vector at the origin and fixed to the Earth.

The vehicle body frame (or b frame) is a Cartesian system with origin at the vehicle center of gravity, the x^b axis extending through the vehicle's nose, the y^b axis extending through the vehicle's right side, and the z^b axis pointing orthogonally out the bottom of the vehicle. The inertial measurements are expressed in the b frame.

The camera frame (or c frame) is a Cartesian system with origin at the center of the camera image plane, the x^c axis parallel to the camera image plane and defined as "camera up", the y^c axis parallel to the camera image plane and defined as "camera right", and the z^c axis pointing out of the camera aperture, orthogonal to the image plane. The camera frame is shown in Figure 2.

Figure 2: Camera frame illustration. The camera reference frame originates at the optical center of the camera.


Algorithm Description

The system parameters (see Table 1) consist of the navigation parameters (position, velocity, and attitude), inertial sensor biases, GPS clock bias and drift, and a vector describing the location of landmarks of interest (t^n) in the navigation frame. The navigation parameters are calculated using body-frame velocity increment (Δv^b) and angular increment (Δθ^b_ib) measurements from the inertial navigation sensor, which have been corrected for bias errors using the current filter-computed bias estimates. These measurements are integrated from an initial state in the navigation (local-level) frame using mechanization algorithms described in [12].
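A simplified local-level mechanization step, consistent with the description above but with Earth-rate and transport-rate terms omitted for brevity (the paper uses the full algorithms of [12]; the function and variable names here are illustrative):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b = cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def mechanize(p_n, v_n, C_nb, dv_b, dth_b, g_n, dt):
    """One simplified strapdown step in the Earth-fixed navigation frame.
    dv_b, dth_b are bias-corrected velocity and angular increments.
    Earth-rate and transport-rate terms are neglected (short MAV routes)."""
    C_nb = C_nb @ (np.eye(3) + skew(dth_b))      # first-order attitude update
    v_n = v_n + C_nb @ dv_b + g_n * dt           # specific force plus gravity
    p_n = p_n + v_n * dt                         # simple Euler position update
    return p_n, v_n, C_nb
```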

Table 1: System Parameter Definition

Parameter   Description
p^n         Vehicle position in navigation frame (northing, easting, and down)
v^n         Vehicle velocity in navigation frame (north, east, down)
C^n_b       Vehicle body to navigation frame DCM
a^b         Accelerometer bias vector
b^b         Gyroscope bias vector
t^n_m       Location of landmark m in the navigation frame (one for each landmark currently tracked)
δt_gps      GPS clock bias
δṫ_gps      GPS clock drift

The position, velocity, and attitude errors were modeled as a stochastic process based on the well-known Pinson navigation error model [12]. The accelerometer and gyroscope bias errors were each modeled as a first-order Gauss-Markov process [8], based on the specification for the inertial measurement unit (IMU). The GPS clock drift is modeled as a random bias. The landmarks are modeled as stationary with respect to the Earth. A small amount of process noise is added to the state dynamics to promote filter stability [9].
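A first-order Gauss-Markov bias of the kind used for the accelerometer and gyroscope states discretizes in the usual way; a sketch, with illustrative time-constant and noise values (the paper takes these from the IMU specification):

```python
import numpy as np

def gauss_markov_step(b, tau, sigma, dt, rng):
    """One discrete step of a first-order Gauss-Markov process:
    b[k+1] = exp(-dt/tau) * b[k] + w[k], with the driving noise sized
    so the process holds steady-state standard deviation sigma."""
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1.0 - phi**2)   # discrete driving-noise variance
    return phi * b + rng.normal(0.0, np.sqrt(q), size=b.shape)

rng = np.random.default_rng(0)
accel_bias = np.zeros(3)
accel_bias = gauss_markov_step(accel_bias, tau=300.0, sigma=0.02, dt=0.02, rng=rng)
```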

EKF Mechanization

Because both the system dynamics model and measurement models are nonlinear stochastic differential equations, an extended Kalman filter algorithm is employed. While other recursive estimation techniques have been proposed which have superior characteristics when dealing with nonlinear models (e.g., unscented Kalman filters or particle filters [4]), the extended Kalman filter is still widely used for integrating inertial and GPS sensors.

In this paper, the extended Kalman filter is an error-state, feedback formulation which estimates the errors about the nominal trajectory produced by the nonlinear filter dynamics model. In addition, this nominal trajectory serves as the operating point where the nonlinear dynamics and measurement models are linearized [9]. Finally, the feedback nature attempts to constrain the inevitable departure of the nominal trajectory by periodically removing error estimates. This feedback process improves the performance of the extended Kalman filter by reducing linearization errors due to errors in the nominal trajectory. For more information regarding methods for integrating GPS and inertial sensors, see [11]. A block diagram of the system is shown in Figure 3.

Figure 3: Image-aided GPS/IMU navigation filter block diagram. In this filter, the locations of stationary objects are tracked and used to estimate and update the errors in a GPS-aided inertial navigation system. The inertial navigation system is, in turn, used to support the feature tracking loop.
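The feedback step described above amounts to folding the estimated errors into the nominal trajectory after each update and zeroing the error state. A minimal sketch, assuming a [position, velocity, tilt] error-state ordering (the ordering and sign conventions are illustrative; the paper's Pinson-model conventions may differ):

```python
import numpy as np

def apply_feedback(p_n, v_n, C_nb, dx):
    """Fold the EKF error estimate dx = [dp, dv, tilt, ...] back into the
    nominal trajectory; the caller then zeros the error state. The tilt
    vector is applied as a small-angle correction rotation to the DCM."""
    dp, dv, tilt = dx[0:3], dx[3:6], dx[6:9]
    T = np.array([[0.0, -tilt[2], tilt[1]],
                  [tilt[2], 0.0, -tilt[0]],
                  [-tilt[1], tilt[0], 0.0]])
    return p_n - dp, v_n - dv, (np.eye(3) - T) @ C_nb
```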

GPS Model and Updates

The navigation filter is based on single-frequency pseudorange measurements, which can be corrected using a differential reference station at a surveyed location. A typical pseudorange measurement from the mobile receiver to satellite k is modeled by

ρ^k_m = ‖p^n_k − p^n_m‖ + c δt^m_gps + T^k_m + I^k_m + ν_m + m_m    (1)

where p^n_k is the satellite location vector in the n frame, p^n_m is the mobile receiver location vector in the n frame, c is the speed of light, δt^m_gps is the mobile receiver clock error, T^k_m is the tropospheric code delay from the mobile receiver to satellite k, I^k_m is the ionospheric code delay from the mobile receiver to satellite k, ν_m is the code measurement noise, and m_m is the code multipath.


The pseudorange measurement from the reference station to satellite k is modeled similarly as

ρ^k_r = ‖p^n_k − p^n_r‖ + c δt^r_gps + T^k_r + I^k_r + ν_r + m_r    (2)

Because the reference station is at a known location, the equation can be rewritten as

ρ^k_r − ‖p^n_k − p^n_r‖ = c δt^r_gps + T^k_r + I^k_r + ν_r + m_r    (3)

Subtracting (3) from (1) and grouping common error terms yields the differentially corrected pseudorange measurement

ρ^k_corr = ‖p^n_k − p^n_m‖ + c δt_gps + ΔT^k + ΔI^k + ν + Δm    (4)

If the distance between the reference and mobile receivers is small, the differential atmospheric and satellite position errors become insignificant. Grouping the error terms into a common random variable, ν, yields the nonlinear measurement equation

ρ^k_corr = ‖p^n_k − p^n_m‖ + c δt_gps + ν    (5)
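Equation (5) is the measurement the filter predicts for each tracked satellite. A sketch of that prediction and the corresponding position row of the measurement Jacobian, assuming satellite coordinates are already expressed in the n frame (names illustrative):

```python
import numpy as np

def predict_pseudorange(p_sat_n, p_veh_n, clk_bias_m):
    """Predicted corrected pseudorange of Eq. (5): geometric range plus
    receiver clock bias (expressed in meters). Also returns the row of
    the measurement Jacobian with respect to vehicle position."""
    los = p_sat_n - p_veh_n
    rho = np.linalg.norm(los)
    H_pos = -los / rho       # d(rho)/d(p_veh): negated unit line of sight
    return rho + clk_bias_m, H_pos
```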

Note this is of the general form

z = h[x] + ν    (6)

where x is the navigation state vector and ν is modeled as zero-mean additive white Gaussian noise with variance

σ²_ν = E[ν²]    (7)

where E[·] is the expectation operator. For this simulation, a variance of σ²_ν = 2.6 m² was chosen as a conservative estimate of the pseudorange errors found in a consumer-grade, single-frequency GPS receiver. In the next section, the automatic feature detection and tracking algorithm is presented.

Automated Feature Tracking

Significant navigation information can be extracted by tracking stationary optical features over multiple images [14]. In order to accomplish this task, a number of issues must be addressed, namely feature detection, feature correspondence matching, and correcting the navigation state. An overview of the image-aided inertial navigation algorithm is shown in Figure 4.

Figure 4: Overview of image-aided inertial algorithm. Inertial measurements are used to aid the feature correspondence search which, in turn, is used to correct the navigation state.

Feature detection is the process of automatically locating objects in an image and building some descriptor that allows the feature to be located in subsequent images. The optimal feature has three characteristics:

• easy to detect (i.e., strong)

• unique (i.e., there exist significant differences which make this feature different from other features)

• ego-motion invariant (i.e., the feature descriptor does not change as a result of camera motion)

One popular algorithm that partially addresses these issues is Lowe's scale-invariant feature transform (SIFT) [6]. The SIFT algorithm selects features which have a strong gradient in both the x and y directions. Once the features meeting the gradient threshold are selected, a 128-byte descriptor is calculated such that it is invariant to rotation and scale changes. Additionally, the SIFT descriptor has been empirically shown to be distinct and appropriate for robust feature matching. One drawback of the SIFT algorithm is that the features are only partially invariant to affine (i.e., perspective) changes. The other drawback is the computational complexity, which can lead to difficulties implementing the algorithm in real time. One solution to this issue is to leverage general-purpose graphics processing units (GPGPU) to speed up the feature extraction task [3]. Our research has shown processing time reductions of up to 30x when using a GPU to extract robust features.
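As an illustration only (the paper's implementation is GPU-accelerated [3]), OpenCV exposes a CPU SIFT detector whose descriptors can be matched with Lowe's ratio test; the filenames are hypothetical:

```python
import cv2

img1 = cv2.imread("frame_k.png", cv2.IMREAD_GRAYSCALE)   # illustrative filenames
img2 = cv2.imread("frame_k1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test [6] rejects ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.8 * n.distance]
```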

Once features of interest have been identified, the next step is to determine the correspondence between each feature and a matching feature in multiple images. When an inertial sensor is available, the method of employing stochastic constraints becomes attractive [15]. This technique is illustrated in Figure 5. The inertial measurements are used to statistically predict the location of a current-image feature in a future image in order to reduce the search space and eliminate statistically-unlikely false matches.

Figure 5: Stochastic feature projection. Optical features of interest are mapped into future images using inertial measurements and stochastic projections.
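The statistical gate implied by Figure 5 can be sketched as a Mahalanobis test: a candidate match is accepted only if its pixel residual is probable under the inertially propagated feature covariance (the threshold value here is illustrative):

```python
import numpy as np

def in_gate(z_meas, z_pred, P_zz, gate=9.21):
    """Mahalanobis gate for a 2-D pixel residual; 9.21 is the 99%
    chi-square threshold with 2 degrees of freedom."""
    r = z_meas - z_pred
    return r @ np.linalg.solve(P_zz, r) < gate
```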

Once the correspondence has been determined between the predicted location of a feature and the measured location in the current image, the navigation state is corrected using a Kalman filter update. The update is calculated using the projected feature locations in Eqns. (8) and (9) with additive white Gaussian noise. The Jacobian of the nonlinear measurement is calculated about the nominal trajectory in terms of the system states described previously.

In the next section, the camera model is defined mathematically along with the feature extraction and tracking methodology.

Imaging Model and Updates

In addition to the GPS and inertial sensors, a digital camera is used to track features of interest. The first item of interest is the definition of the camera projection model. After properly compensating for the effects of nonlinear distortions (see [2] or [7]), the camera is modeled as a pin-hole device. The pin-hole camera model is shown in Figure 6. This implies that the location of a feature on the image plane is simply the projection of the true location vector of the feature expressed in the camera frame. Thus, given a point source at location s^c, the resulting location of the point source in the image plane, relative to the optical center of the camera, is given by

s_proj = (f / s^c_z) s^c    (8)

where s^c_z is the distance of the point source from the optical center of the camera in the z^c direction. The pinhole camera model is discussed in more detail in [7].

Figure 6: Camera projection model. The pinhole camera model is represented by placing a virtual image plane one focal length in front of the optical center.

Given the location of a feature relative to the navigation frame, t^n, and the current position and orientation of the vehicle, (p^n, C^n_b), the line-of-sight vector in the camera frame can be calculated as

s^c = (C^n_b C^b_c)^T (t^n − p^n)    (9)

where C^b_c is the camera-to-body frame direction cosine matrix (DCM).
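Equations (8) and (9) chain into a single prediction of a landmark's image-plane location; a minimal sketch with illustrative names:

```python
import numpy as np

def project_landmark(t_n, p_n, C_nb, C_bc, f):
    """Predict a landmark's image-plane location by chaining Eq. (9)
    (line of sight in the camera frame) with Eq. (8) (pin-hole projection)."""
    s_c = (C_nb @ C_bc).T @ (t_n - p_n)   # Eq. (9)
    return (f / s_c[2]) * s_c             # Eq. (8)
```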

In the next section, the experimental simulation and flight test results are presented.

EXPERIMENT

The data collection system consisted of a consumer-grade MEMS IMU and two digital cameras. The IMU was a Microbotics MIDG consumer-grade MEMS unit (see [10]) which measured acceleration and angular rate at 50 Hz. The digital cameras were both Pixelink A-741 machine vision cameras, which incorporated a global shutter and a FireWire interface. The lenses were wide-angle lenses with approximately 90 degrees field of view. The sensors were mounted on an aluminum bracket and calibrated using procedures similar to those described in [13]. Images were captured at approximately 2.5 Hz. One camera pointed down and the other pointed forward at a 45-degree angle. For this paper, only the downward-pointing camera was used.

The data collection system was mounted on a C-12 Huron aircraft, owned by the USAF Test Pilot School (USAF TPS). A number of data collections were flown by USAF TPS in order to present the sensor with variations in terrain, airspeed, and altitude. For this test, a segment of flight data was used that consisted of a combination of urban and rural terrain (see Figure 7). Further variations in terrain and feature quality will be tested in future efforts. A GAINR-Lite (GLITE) Time-Space Positioning Information (TSPI) unit was mounted on the aircraft and served as the position, velocity, and orientation truth source. The GLITE system consists of an HG-1700 tactical-grade inertial sensor coupled with differentially-corrected pseudorange measurements from a dual-frequency GPS receiver. The sensors are integrated using a post-flight smoother algorithm.

Figure 7: Flight path for Palmdale data collection (START and END points marked). The segment used in this research was flown over a combination of urban and desert terrain. Map imagery: © Digital Globe, NextView.

Simulation

A Monte Carlo simulation was used to evaluate the performance and stability of the GPS-aided inertial navigation algorithm both with and without image aiding. The simulations were performed using a standard flight profile, based as closely as possible on the experimental flight data. The simulated trajectory was generated based on a semi-circular path with no overlapping portions. The trajectory was flown at approximately 1000 meters above relatively flat terrain above Palmdale, California. A five-minute segment of the profile was chosen with a combination of straight-and-level and turning portions.

For each Monte Carlo run, the inertial and GPS measurements were generated with random errors consistent with the relevant error model. In addition, a collection of ground landmarks was generated using a digital terrain elevation database. In order to simulate the effects of uncertainty in the terrain model, a random elevation error was added to the landmark elevations. These simulated landmarks were then projected into the image plane and used to generate synthetic SIFT features, complete with localization noise and a randomized descriptor component. This method was chosen in order to stimulate the feature tracking algorithm as accurately as possible without going to the difficult step of generating simulated images with enough fidelity to exercise the feature tracking algorithm. As a result, the Monte Carlo simulations should be an optimistic predictor of the true system performance, as they correspond to a "best-case" scenario with virtually perfect feature tracking and a perfect camera distortion model. Thus, when the results are interpreted from this perspective, they can help to validate the concept and model, while serving as a lower-bound performance expectation for this particular algorithm.
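The synthetic-feature generation described above might be sketched as follows, with illustrative noise levels (the paper's actual values come from the terrain database and camera model); it reuses the projection of Eqs. (8) and (9):

```python
import numpy as np

rng = np.random.default_rng(1)

def synthetic_feature(t_n, p_n, C_nb, C_bc, f, elev_sigma=5.0, pix_sigma=0.5):
    """Synthesize one SIFT-like measurement: perturb the landmark's down
    coordinate (terrain-database uncertainty), project it through the true
    pose, add pixel noise, and attach a random 128-element descriptor."""
    t = t_n + np.array([0.0, 0.0, rng.normal(0.0, elev_sigma)])
    s_c = (C_nb @ C_bc).T @ (t - p_n)
    pix = (f / s_c[2]) * s_c[:2] + rng.normal(0.0, pix_sigma, size=2)
    return pix, rng.random(128)
```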

As mentioned previously, the target location error is a function of the position and attitude errors of the navigation system, camera boresight and uncorrected optical errors, and any errors in the terrain elevation database used to initialize the height estimate of the landmark. The horizontal target error can thus be approximated by expressing the horizontal target location, y_h, as

y_h = p_h + h_v / tan θ    (10)

where p_h represents the horizontal position of the aircraft, h_v is the height of the aircraft above the terrain, and θ is the depression angle from horizontal to the target. Linearizing about the mean values results in the following first-order approximation

δy_h ≈ δp_h + (1 / tan θ) δh_v − (h_v / sin²θ) δθ    (11)

Assuming independence, the horizontal target location variance, σ²_y_h, is

σ²_y_h = σ²_p_h + (1 / tan²θ) σ²_h_v + (h_v² / sin⁴θ) σ²_θ    (12)

Examining Eq. (12) shows a dependence on the position and attitude uncertainty, as expected. Thus, for a given imaging scenario, the relative contribution of each error source can be determined. Assuming one meter of horizontal position error and three meters of vertical position error, the expected target location accuracy is a function of the height above terrain, elevation angle, and attitude errors. The resulting target location accuracy given variations of these parameters is shown in Figure 8.


Figure 8: Predicted horizontal target location error. The predicted horizontal target location error (TLE) standard deviation (m) is plotted against sensor attitude error standard deviation (rad) for depression angles of 30, 60, and 90 degrees and for heights above terrain (HAT) of 100 m, 500 m, and 1000 m. The effects of attitude accuracy and height above terrain demonstrate the critical importance of reducing attitude errors in order to improve target location accuracy. The figure assumes a 1-meter horizontal and 3-meter vertical position error for the aircraft.

Simulation Results

The simulation results for the baseline GPS/IMU (no image aiding) case illustrate the difficulty in obtaining a high-quality target location from a UAV equipped with a low-cost inertial sensor. As shown in Figure 9, the attitude errors, especially in heading, have standard deviations on the order of 100 mrad. This results in high target location errors, especially at longer slant ranges.

Incorporating the automated image-based feedback greatly improves the attitude errors. The resulting simulated attitude errors are shown in Figure 10. The standard deviations are on the order of 5 mrad, which is a significant improvement.

Flight Test

The image-aided GPS/IMU algorithm was applied to a five-minute segment of the Palmdale, CA flight profile. The flight profile consisted of a semi-circular path, flown over a combination of urban and desert terrain. A sample image from the image-aiding algorithm illustrates the typical terrain observed by the sensor as well as a depiction of the camera field-of-view overlaid on the moving-map display (see Figure 11). For purposes of comparison, the navigation solution was calculated using both the baseline GPS/IMU and the image-aided GPS/IMU.

Figure 9: Simulated attitude errors (roll, pitch, and yaw, in mrad) over time for the consumer-grade GPS/IMU integration. As expected, the Monte Carlo simulation predicts relatively large heading errors. These errors would have a significant effect on target location accuracy.

Figure 10: Simulated attitude errors (roll, pitch, and yaw, in mrad) over time for the consumer-grade GPS/IMU integration with image aiding using landmarks of opportunity. Incorporating the image-based measurements significantly improves the predicted attitude errors, resulting in an improvement in target location accuracy.


Figure 11: Sample feature track display from the Palmdale data collection. The currently-tracked features are depicted with red '+' symbols. Each feature is surrounded by a projection of the two-sigma error ellipse. In the right pane, the current features, quality, and field-of-view are overlaid on a moving-map display. Map imagery: © Digital Globe, NextView.

The position and attitude errors of the two solutions are compared in Figures 12 and 13, respectively. A detailed view of the experimental attitude errors is shown in Figure 14. Although the deviations are larger than predicted by simulation, the estimate appears to be consistent with the filter's predicted one-sigma bounds.

As expected, the automatic image aiding significantly improves the attitude errors of the system. In addition, the position errors are consistent, which indicates that the GPS measurements are dominating the positioning performance and that the image updates do not degrade this capability.

CONCLUSIONS

In this article, an algorithm was presented that improves the target location accuracy of a UAV equipped with a low-cost GPS/IMU and imaging system. The method is implemented recursively for on-line operation and only requires a terrain database. No a priori imagery is required over the area of operations. The system automatically selects and tracks stationary features in the field-of-view and uses these tracks to update the navigation state. The system was shown, using a combination of simulation and flight test data, to improve the attitude accuracy by an order of magnitude. This results in a corresponding improvement in target location accuracy.

The next step of this project is to build a smaller sensor suitable for incorporation in our micro-UAV platform.

Figure 12: Position errors (north, east, and down, in meters) over time for the Palmdale flight test. The position errors were calculated for the GPS/IMU system with image aiding (dashed green trace) and without image aiding (solid blue trace). The dashed red lines indicate the one-sigma uncertainty bounds of the EKF for the non-image-aided case. Note the image aiding does not significantly affect the positioning accuracy of the system.

Figure 13: Attitude errors (α_N, α_E, α_D, in mrad) over time for the Palmdale flight test. The attitude errors were calculated for the GPS/IMU system with image aiding (dashed green trace) and without image aiding (solid blue trace). The dashed red lines indicate the one-sigma uncertainty bounds of the EKF for the non-image-aided case. As predicted by simulation, the attitude errors are significantly improved by incorporating automatic image aiding into the GPS/IMU system.


Figure 14: Image-aided attitude errors (ψ_N, ψ_E, ψ_D, in mrad) for the Palmdale flight test (detail). The attitude errors calculated using the image-aided GPS/IMU system show significant improvement over the non-image-aided results. The estimates are reasonably consistent with the EKF predicted one-sigma bounds (red dashed lines).


References

[1] Alison Brown, Bruce Bockius, Bruce Johnson, Heather Holland, and Dave Wetlesen. Flight test results of a video-aided GPS/inertial navigation system. In Proceedings of the 2007 ION GNSS Conference, pages 1111-1117, 2007.

[2] Duane C. Brown. Close-range camera calibration. In Proceedings of the Symposium on Close-Range Photogrammetry, pages 855-866, January 1971.

[3] Jordan L. Fletcher, Michael J. Veth, and John F. Raquet. Real-time fusion of image and inertial sensors for navigation. In Proceedings of the Institute of Navigation Annual Meeting, pages 534-544, April 2007.

[4] Simon Haykin. Kalman Filtering and Neural Networks. John Wiley and Sons, Inc., New York, NY, 2001.

[5] Rui Hirokawa, Ryusuke Ohata, Takuji Ebinuma, and Tara Suzuki. A low-cost GPS/INS sensor for small UAVs augmented with multiple GPS antennas. In Proceedings of the 2007 ION GNSS Conference, pages 96-103, 2007.

[6] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.

[7] Yi Ma, Stefano Soatto, Jana Kosecka, and S. Shankar Sastry. An Invitation to 3-D Vision. Springer-Verlag, Inc., New York, NY, 2004.

[8] Peter S. Maybeck. Stochastic Models, Estimation and Control, Vol. I. Academic Press, Inc., Orlando, FL, 1979.

[9] Peter S. Maybeck. Stochastic Models, Estimation and Control, Vol. II. Academic Press, Inc., Orlando, FL, 1979.

[10] Microbotics, Inc. MIDG 2 specifications. Specification, January 2007. URL: http://www.microboticsinv.com/.

[11] Bradford W. Parkinson and James J. Spilker. Global Positioning System: Theory and Applications, Volume IV. American Institute of Aeronautics and Astronautics, Washington, DC, 1996.

[12] D. H. Titterton and J. L. Weston. Strapdown Inertial Navigation Technology. Peter Peregrinus Ltd., Lavenham, United Kingdom, 1997.

[13] Michael J. Veth and John F. Raquet. Alignment and calibration of optical and inertial sensors using stellar observations. In Proceedings of ION GNSS 2005, pages 2494-2503, September 2005.

[14] Michael J. Veth and John F. Raquet. Fusing low-cost imaging and inertial sensors for navigation. Journal of the Institute of Navigation, pages 11-20, 2007.

[15] Michael J. Veth, John F. Raquet, and Meir Pachter. Stochastic constraints for efficient image correspondence search. IEEE Transactions on Aerospace and Electronic Systems, 42(3):973-982, July 2006.