
Grant Agreement Number: 687458

Project acronym: INLANE

Project full title: Low Cost GNSS and Computer Vision Fusion for Accurate Lane Level Navigation and Enhanced Automatic Map Generation

D2.1 Sensor Data Fusion

Due delivery date: 31/12/2016
Actual delivery date: 30/12/2016

Organization name of lead participant for this deliverable: TCA

Project co-funded by the European Commission within Horizon 2020 and managed by the European GNSS Agency (GSA)

Dissemination level
PU | Public | x
PP | Restricted to other programme participants (including the GSA) |
RE | Restricted to a group specified by the consortium (including the GSA) |
CO | Confidential, only for members of the consortium (including the GSA) |


Document Control Sheet

Deliverable number: D2.1
Deliverable responsible: TeleConsult Austria
Workpackage: 2
Editor: Axel Koppert

Author(s) – in alphabetical order
Name | Organisation | E-mail
Axel Koppert | TCA | axel.koppert@teleconsult-austria.at

Document Revision History
Version | Date | Modification Reason | Modified by
V0.1 | 18/11/2016 | Table of Contents | Axel Koppert
V0.2 | 05/12/2016 | Filling the document with content | Axel Koppert
V0.3 | 19/12/2016 | Internal Review | François Fischer
V1.0 | 23/12/2016 | First final version after internal review | Claudia Fösleitner

Abstract

This deliverable is the first release of the report on the development of the sensor fusion and vision-based software modules. The report presents the INLANE sensor-data fusion and GNSS processing approach. This first release documents the status of the development (tasks 2.1 and 2.5) at M12.

Legal Disclaimer

The information in this document is provided “as is”, and no guarantee or warranty is given that the information is fit for any particular purpose. The above referenced consortium members shall have no liability for damages of any kind including without limitation direct, special, indirect, or consequential damages that may result from the use of these materials, subject to any liability which is mandatory due to applicable law. © 2016 by INLANE Consortium.


Abbreviations and Acronyms

Acronym | Definition
EDAS | EGNOS Data Access Service
EGNOS | European Geostationary Navigation Overlay Service
GNSS | Global Navigation Satellite System
IMU | Inertial Measurement Unit
INS | Inertial Navigation System
RMSE | Root Mean Square Error
WP | Work Package


Table of Contents

Executive Summary
1. Introduction
   1.1. Motivation
   1.2. Purpose of Document
   1.3. Intended Audience
2. Sensor Data
   2.1. GNSS
   2.2. Low-Cost Inertial Sensors
   2.3. Visual Odometry
   2.4. Sensor-to-Map Alignment
   2.5. Visual-Beacon-Based Positioning
3. GNSS Processing and Augmentation
   3.1. GNSS Processing Approach
   3.2. EGNOS and EDAS
   3.3. Observation Modelling
4. Sensor Data Fusion Module
5. Current Results
6. Discussion and Further Development


List of Figures

Figure 1: Current Sensor Data Fusion Scheme
Figure 2: Lane change on a highway (GPS/INS)
Figure 3: Trajectory of an urban scenario (GPS/INS fusion)
Figure 4: Trajectory of a forest scenario (white dots: GNSS 2 Hz, red dots: GPS/INS 500 Hz)

List of Tables

Table 1: Classification of sensor data for sensor data fusion in INLANE
Table 2: Modelling of the GNSS observations


Executive Summary

This deliverable presents the status of the development of the INLANE GNSS processing and sensor data fusion components at M12. An intermediate release of the report on the development will be issued at M24 and the final release at M30.

The aim of INLANE’s positioning subsystem is to provide the position of the vehicle with lane-level accuracy, while using hardware that is suited for mass-market applications. An approach for combining data from GNSS, an Inertial Measurement Unit and computer vision systems is presented. As a basis for all developments, the available sensor information is classified systematically in terms of accuracy, availability and the characteristics of the provided information. To achieve lane-level positioning accuracy even in challenging urban scenarios, the available sensor information has to be combined in the best possible way. The current fusion architecture is based on a tightly coupled GPS/INS filter with additional measurement updates from the computer vision components. Current tests include only GPS and IMU data. First results of the developed software module are shown, followed by a discussion and a proposal for further developments.


1. Introduction

1.1. Motivation

The INLANE project aims at developing a lane-level navigation system for the mass market. The accuracy of the position-determination subsystem should allow in-lane positioning, even in urban environments. It is the basis for lane-level navigation and accurate map updates. Determining the state of a car in an urban environment is a challenging task, even more so if only low-cost sensors are available. Current approaches are based on GNSS/INS fusion. However, when employing mass-market hardware, the performance of those systems in urban environments is limited due to the unfavorable GNSS observation conditions and the relatively low performance of low-cost inertial measurement units. To overcome these problems, a sensor-data fusion approach making use of computer vision techniques is under development. We expect that the information from the computer vision components will improve the overall accuracy of the positioning subsystem, as well as the percentage of time during which lane-level positioning is possible.

1.2. Purpose of Document

Deliverable 2.1 is the report on the developed software modules that fuse EGNOS-GNSS absolute position estimates, IMU relative position estimates, and vision-based (map-matching) absolute position estimates. This is the first version of D2.1, based on the status of the INLANE project at the end of the first year. It gives an overview of the INLANE approach to sensor data fusion and GNSS processing and of the status of tasks 2.1 and 2.5. Based on the presentation of the status and the first results, possibilities for further development are discussed and an outlook on the intended work for the next project phase is given.

1.3. Intended Audience

This deliverable is intended to promote a better understanding of the approach that has been taken in this important part of the INLANE system and to document the progress. This is especially interesting for the project partners and for the European Commission. Moreover, D2.1 is a public deliverable directed at everybody who is interested in the achievements of the INLANE project.


2. Sensor Data

This section gives a systematic overview of the data available for the fusion. The INLANE positioning component uses data from three sensors: a GNSS receiver, a low-cost inertial measurement unit (IMU) and a video camera. As the final prototype should be an aftermarket device that does not need a connection to the car, no data from the car’s CAN bus will be used. Table 1 provides an overview of the sensor data and derived information that are available for the sensor-data fusion component. The sensors have different characteristics, especially in different operational scenarios.

Table 1: Classification of sensor data for sensor data fusion in INLANE

Technique | Characteristic w.r.t. positioning | Short-term accuracy | Long-term accuracy | Availability | A-priori information
GNSS | Absolute | Medium | High | Limited (signal environment) | None
Low-Cost Inertial Sensors | Relative | Medium | Low | Unlimited | None
Visual Odometry | Relative | High | Medium | Limited (sight conditions) | None
Visual-Beacon-Based Positioning | Absolute (relative to map) | High | High | Limited (map, environment, sight) | Map and approximate position
Sensor-to-Map Alignment | Relative (rather lateral than longitudinal) | High | High | Limited (map, environment, sight) | Map and good approximate position

2.1. GNSS

A single-frequency GNSS receiver provides pseudorange and carrier-phase range observations as well as the navigation message of the satellites. Processing GNSS observations yields absolute positions in a global reference system. A major drawback of using GNSS in urban environments is that the accuracy of the position estimate strongly depends on the operational environment. Obstructions reduce the number of measurements. Moreover, reflected signals can be tracked by the GNSS receiver, which leads to large errors. To get a position estimate with constant accuracy, GNSS has to be fused with other sensor data. Further details on the GNSS processing approach are given in chapter 3.

2.2. Low-Cost Inertial Sensors

Inertial Measurement Units (IMUs) provide measurements of specific force and turn rates along the sensor axes. By integrating these observations, Inertial Navigation Systems (INS) can compute changes in position, velocity and attitude of a vehicle. The availability of the sensor information is unlimited, as the sensors operate autonomously. However, without aiding by other sensors, the accuracy of the navigation solution degrades quickly due to the sensor errors. The quality of automotive-grade inertial sensors does not allow stand-alone positioning for more than a few seconds before the accuracy exceeds the requirements.
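For reference, a simplified form of the strapdown mechanization underlying this integration (following the textbook treatment in [5]; Earth-rotation and transport-rate terms are omitted for brevity) reads

\begin{align}
\dot{\mathbf{p}} &= \mathbf{v}^{n}, \\
\dot{\mathbf{v}}^{n} &= \mathbf{C}_b^{n}\,\mathbf{f}^{b} + \mathbf{g}^{n}, \\
\dot{\mathbf{C}}_b^{n} &= \mathbf{C}_b^{n}\,[\boldsymbol{\omega}_{ib}^{b}\times],
\end{align}

where \(\mathbf{f}^{b}\) and \(\boldsymbol{\omega}_{ib}^{b}\) are the measured specific force and turn rates, \(\mathbf{C}_b^{n}\) is the body-to-navigation-frame rotation and \(\mathbf{g}^{n}\) the local gravity vector. Any bias in the measurements is integrated once or twice, which explains the rapid error growth noted above.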

2.3. Visual Odometry

Visual odometry is a technique that derives relative motion from camera data. The details of the technique are described in the deliverable “Report on developed vision based software modules” (D2.3). It provides a three-dimensional translation vector, as well as a rotation matrix that describes the rotation between two epochs. Visual odometry does not need any a priori information. The information obtained from visual odometry is very similar to that from the inertial sensors, but the long-term accuracy is better.
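To illustrate how these relative measurements accumulate into a trajectory, the per-epoch rotation \(\mathbf{R}_k\) and translation \(\mathbf{t}_k\) can be written as a homogeneous transform and chained (a standard formulation, not specific to the D2.3 implementation):

\[
\mathbf{T}_{k-1,k} =
\begin{bmatrix}
\mathbf{R}_k & \mathbf{t}_k \\
\mathbf{0}^{\top} & 1
\end{bmatrix},
\qquad
\mathbf{T}_{0,k} = \mathbf{T}_{0,1}\,\mathbf{T}_{1,2}\cdots\mathbf{T}_{k-1,k}
\]

Each factor contributes its own small error, so the chained pose drifts with travelled distance; this is why visual odometry is classified as a relative technique in Table 1.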

2.4. Sensor-to-Map Alignment

The sensor-to-map alignment algorithms, even though not explicitly a positioning sensor, deliver information on the car’s position. In principle, sensor-to-map alignment is more sensitive to lateral than to longitudinal position on a road. This can be very valuable in urban areas, where the obstructions of the buildings along the road provoke a GNSS geometry that is unfavorable for lateral positioning. Lateral positioning is very important for lane keeping. The camera-to-map alignment needs map data as input. This means that errors in the map may propagate into inaccurate positions. For further details see the deliverable “Report on sensor-to-map data alignment” (D2.5).

2.5. Visual-Beacon-Based Positioning

In the course of the project, information on absolute positions obtained from images will be used. This is based on so-called visual beacons with known coordinates that are recognized in the camera images. Further details on that component are not yet available.

3. GNSS Processing and Augmentation

3.1. GNSS Processing Approach

Low-cost GNSS receivers are single-frequency receivers, i.e. only one signal per GNSS is tracked. Modern receivers deliver multi-system pseudorange and carrier-phase range observations. In order to exploit those observations in an optimal manner, code and phase ranges are processed together. This technique is known as single-frequency Precise Point Positioning (PPP). The key to reaching high accuracy is to make use of precise orbit and clock products that are superior to the navigation message of the satellites. Recent research shows that it is possible to achieve sub-meter horizontal positioning accuracy in real time with this approach [1]. However, the accuracy depends strongly on the number of available satellites. Therefore, INLANE follows a multi-system approach to improve accuracy and availability, especially in urban scenarios. The signals GPS L1, Galileo E1 and EGNOS L1 will be combined for positioning. Besides ranging signals, EGNOS provides satellite orbit and clock correction data for GPS and a precise ionospheric model, which is independent of the GNSS. These data can be used as precise correction data in the PPP approach [2].
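In standard notation, the two single-frequency observation types are commonly modelled as

\begin{align}
P &= \rho + c\,(\delta t_r - \delta t^{s}) + T + I + \varepsilon_P, \\
\Phi &= \rho + c\,(\delta t_r - \delta t^{s}) + T - I + \lambda N + \varepsilon_\Phi,
\end{align}

where \(\rho\) is the geometric range, \(\delta t_r\) and \(\delta t^{s}\) the receiver and satellite clock errors, \(T\) and \(I\) the tropospheric and ionospheric delays, \(\lambda N\) the carrier-phase ambiguity and \(\varepsilon\) the measurement noise (including multipath). The correction data described below reduce the terms \(\delta t^{s}\), \(T\) and \(I\); the modelling is detailed in section 3.3.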

3.2. EGNOS and EDAS

EGNOS is the European Satellite-Based Augmentation System (SBAS) [3]. Currently, three geostationary satellites broadcast EGNOS augmentation data. Besides the correction and integrity information, every satellite provides ranging signals, which can be used for positioning like the signals of GPS and Galileo. The EGNOS service does not yet contain Galileo data; currently only GPS is supported. The EGNOS augmentation data mainly contain

• slow corrections (corrections for the satellite orbit and clock errors)
• fast corrections (corrections for the fast-varying satellite clock errors)
• an improved ionospheric model
• parameters for integrity computation.

The EGNOS satellites are in geostationary orbit. This results in low satellite elevations at user sites in Europe and hence in limited availability in urban areas. Thus, receiving the EGNOS information from the satellites should not be the preferred way of using EGNOS in an automotive environment. Through the EGNOS Data Access Service (EDAS), EGNOS data can be received over the internet, which improves the availability [4]. EDAS has two service levels:

• Service Level 0 (SL0):
  • Format: ASN.1
  • Access through EDAS client software
• Service Level 1 (SL1):
  • Format: RTCM (EDAS-specific RTCM messages)
  • Transport: NTRIP
  • Additionally, observations from the ranging and integrity monitoring stations can be received

[1] P. de Bakker, C. Tiberius: Real-time single-frequency precise point positioning for cars and trains, GPS World (2016) (http://gpsworld.com/innovation-guidance-for-road-and-track/)
[2] A. Heßelbarth, L. Wanninger: SBAS Orbit and Satellite Clock Corrections for Precise Point Positioning, GPS Solutions (2013)
[3] EGNOS OS Service Definition Document (https://egnos-portal.gsa.europa.eu/library/technical-documents)
[4] EDAS Service Definition Document (https://egnos-portal.gsa.europa.eu/library/technical-documents)


Using SL1 has the advantage that NTRIP is the de-facto standard for receiving GNSS data over the internet. Once implemented, the interface can also be used to receive additional information, e.g. more precise data for satellite orbits and clocks, or RTK corrections. Thus, SL1 will be used in INLANE.
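For illustration, the core of an NTRIP 1.0 client is a single HTTP-style request over a TCP connection, after which the caster streams RTCM data. The sketch below uses POSIX sockets; host, mount point and credentials are hypothetical placeholders, and the actual EDAS SL1 access parameters are defined in the EDAS SDD [4].

// Minimal NTRIP 1.0 client sketch (POSIX sockets). Host, mount point and
// credentials are hypothetical placeholders, not the real EDAS endpoints.
#include <cstdio>
#include <string>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    const char* host = "edas.example.eu";  // placeholder caster host
    const char* port = "2101";             // commonly used NTRIP port
    const std::string request =
        "GET /SL1_MOUNTPOINT HTTP/1.0\r\n"        // placeholder mount point
        "User-Agent: NTRIP inlane-sketch/0.1\r\n"
        "Authorization: Basic dXNlcjpwYXNz\r\n"   // base64("user:pass"), placeholder
        "\r\n";

    addrinfo hints{}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0) { perror("getaddrinfo"); return 1; }
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) { perror("connect"); return 1; }
    freeaddrinfo(res);

    send(fd, request.c_str(), request.size(), 0);

    // After the caster's "ICY 200 OK" reply, the stream carries RTCM frames,
    // which would be handed to an RTCM decoder instead of printed.
    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0) {
        fwrite(buf, 1, static_cast<size_t>(n), stdout);
    }
    close(fd);
    return 0;
}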

3.3. Observation Modelling

Careful modelling of the influences on the GNSS observations is the basis for the achievable accuracy. Before being fused by the sensor data fusion module, the GNSS observations are modelled using the methodology in Table 2.

Table 2: Modelling of the GNSS observations

Influence | Model
Satellite orbit | The satellite orbits are computed from the navigation messages. EGNOS slow correction data are used for correcting the GPS satellite coordinates. The corrected coordinates are used for computing the observation model.
Satellite clocks | The satellite clock corrections are computed from the navigation messages. EGNOS slow corrections are used to correct the satellite clocks. The EGNOS fast corrections model the fast-varying satellite clock errors and are applied directly to the pseudoranges.
Ionosphere | The ionospheric delay is computed from the EGNOS ionospheric model and applied as a correction to the observations.
Troposphere | The Saastamoinen model is used for modelling the tropospheric delay, which is applied directly as a correction to the observations.
Receiver clock bias | The receiver clock bias is estimated as a state in the Kalman filter. For every system, a separate receiver clock offset is added to account for differences between the system times.
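As a compact summary of Table 2, a corrected pseudorange could be formed along the following lines (a minimal sketch; the structure and function names are invented for illustration and do not mirror the INLANE library):

#include <cmath>

// Hypothetical container for the per-satellite correction terms of Table 2.
struct GnssCorrections {
    double satClockNav;   // satellite clock error from the navigation message [s]
    double egnosSlowClk;  // EGNOS slow clock correction [s]
    double egnosFast;     // EGNOS fast correction, applied directly to the range [m]
    double iono;          // slant ionospheric delay from the EGNOS model [m]
    double tropo;         // slant tropospheric delay (Saastamoinen) [m]
};

constexpr double C_LIGHT = 299792458.0;  // speed of light [m/s]

// Reduce a raw pseudorange so that essentially only the geometric range,
// the receiver clock bias and residual errors remain, which is what the
// fusion filter's observation model expects.
double correctedPseudorange(double rawRange, const GnssCorrections& c) {
    return rawRange
         + C_LIGHT * (c.satClockNav + c.egnosSlowClk)  // satellite clock terms
         + c.egnosFast                                 // EGNOS fast correction
         - c.iono                                      // ionospheric delay
         - c.tropo;                                    // tropospheric delay
}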

4. Sensor Data Fusion Module

In the sensor-data fusion module, the sensor data are combined by means of a Kalman filter. The challenge of sensor data fusion in INLANE is that the data are very inhomogeneous in terms of the type of information provided, accuracy and availability (cf. Table 1). By combining the available information in an optimal manner, the accuracy of the filtered trajectory should allow lane-level positioning, even in urban scenarios.

GNSS is known for good positioning performance in open-sky environments. In urban environments, however, the accuracy degrades due to blocked satellite signals (unfavorable geometry) and undetected multipath observations, which introduce large errors into the position estimate. On the other hand, GNSS is the only means in INLANE’s sensor configuration that provides information on absolute position. Therefore, it currently seems irreplaceable.

The aim is to fuse only the “good” GNSS observations with the complementary sensor data. This approach is known as tightly coupled integration [5]. In order to exclude erroneous GNSS observations, a careful selection scheme for the GNSS observations is implemented. The current integration scheme is presented in Figure 1. It is based on a system model which uses the IMU data to predict the vehicle’s state. Every time a set of observations from either the GNSS component or one of the computer vision components is available in the fusion module, it is used to correct the current prediction of the vehicle’s state. The implementation of the sensor fusion module is based on a C++ library developed for INLANE. Currently only the GPS/INS fusion is tested; the observation models for visual odometry and sensor-to-map alignment have already been implemented, though.

[5] P. D. Groves: Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, Artech House (2008)
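This predict/correct cycle has the classical Kalman filter form. The sketch below (using the Eigen library, with generic matrices standing in for the actual INLANE system and observation models) illustrates the structure only; it is not the project code.

#include <Eigen/Dense>

// Generic Kalman predict/update step. F and Q stand in for the IMU-driven
// system model, H and R for a GNSS or computer-vision observation model.
struct KalmanFilter {
    Eigen::VectorXd x;  // state (cf. the state vector X defined below)
    Eigen::MatrixXd P;  // state covariance

    // Prediction: propagate state and covariance with the system model.
    void predict(const Eigen::MatrixXd& F, const Eigen::MatrixXd& Q) {
        x = F * x;
        P = F * P * F.transpose() + Q;
    }

    // Correction: whenever a sensor delivers observations z, update the
    // current prediction of the vehicle state.
    void update(const Eigen::VectorXd& z, const Eigen::MatrixXd& H,
                const Eigen::MatrixXd& R) {
        const Eigen::VectorXd y = z - H * x;                        // innovation
        const Eigen::MatrixXd S = H * P * H.transpose() + R;        // innovation covariance
        const Eigen::MatrixXd K = P * H.transpose() * S.inverse();  // Kalman gain
        x += K * y;
        const Eigen::MatrixXd I = Eigen::MatrixXd::Identity(x.size(), x.size());
        P = (I - K * H) * P;
    }
};

A gating test on the innovation y against its covariance S is one possible way to realize the observation selection scheme mentioned above.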

Page 11: D2.1 SensorDataFusion v1 - InLane · Duedeliverydate:!31/12/2016! Actual!delivery!date:!30/12/2016! ... an Inertial Measurement Unit and computer vision systems is presented. As a

11

The following state vector is estimated:

\[
\mathbf{X} =
\begin{bmatrix}
\text{position} \\
\text{velocity} \\
\text{attitude angles} \\
\text{accelerometer bias} \\
\text{gyro bias} \\
\text{receiver clock bias} \\
\text{receiver clock drift}
\end{bmatrix}
\]

Position, velocity, and attitude describe the state of the car. Further states are introduced to account for systematic errors of the sensors. In the future, the state vector may be augmented by additional sensor-specific states.
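In code, such a state vector might be laid out as follows (a sketch only; the actual types of the INLANE C++ library are not published in this deliverable):

#include <Eigen/Dense>

// Possible layout of the estimated state vector X.
struct NavState {
    Eigen::Vector3d position;   // e.g. in a local-level navigation frame
    Eigen::Vector3d velocity;
    Eigen::Vector3d attitude;   // roll, pitch, yaw [rad]
    Eigen::Vector3d accelBias;  // accelerometer bias states
    Eigen::Vector3d gyroBias;   // gyro bias states
    double clockBias;           // receiver clock bias (one per GNSS system once
                                // multi-system processing is enabled)
    double clockDrift;          // receiver clock drift
};  // 17 states in total for the single-system case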

Figure 1: Current Sensor Data Fusion Scheme

The fusion module requires a short initialization phase before approximate values for every state are available. Currently, the horizontal attitude (roll and pitch) has to be determined while the vehicle is stationary; it takes about 30-60 s until these states are determined with sufficient precision. The yaw angle is then estimated while the car moves, using the GNSS-derived heading. This dynamic initialization process is required because the accuracy of low-cost inertial measurement units does not allow determining the yaw angle directly from the measurements by gyro compassing. Only once the attitude is known can the sensor fusion start. By making use of information obtained from computer vision, this initialization process could possibly be simplified in later versions of the module.
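For the stationary phase, the standard accelerometer-levelling relations (textbook formulas, cf. [5]) give initial roll \(\phi\) and pitch \(\theta\) from the averaged specific force, assuming an x-forward, y-right, z-down body frame:

\[
\phi = \operatorname{atan2}\left(-\bar{f}_y,\, -\bar{f}_z\right), \qquad
\theta = \operatorname{atan2}\left(\bar{f}_x,\, \sqrt{\bar{f}_y^{2} + \bar{f}_z^{2}}\right)
\]

Here \(\bar{f}\) denotes the specific force averaged over the stationary interval; averaging over 30-60 s suppresses the sensor noise sufficiently for the required initial precision.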


5. Current Results

The implementation of the basic tightly coupled GPS/INS fusion algorithm was tested in various operational environments. The results shown here are obtained by fusing GPS pseudorange and Doppler observations with the IMU data as described in the preceding section. EGNOS augmentation data and phase observations have not been used for the initial tests presented here. As the current INLANE test data set does not contain GNSS raw data, the first tests are based on a data set that was collected with the ViFTra measurement system. ViFTra is a development of TCA and the Virtual Vehicle Research Center; its GNSS and IMU sensors are very similar to the proposed INLANE configuration (u-blox LEA-M8T and a low-cost IMU). In open-sky environments, the accuracy with respect to a reference trajectory is about 0.5 m (RMSE) for North and East respectively. Figure 2 shows a lane change on a highway; the quality of the trajectory clearly allows inferring the current driving lane in this situation.
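The quoted 0.5 m value is a per-axis root mean square error with respect to the reference trajectory, i.e. for the North component

\[
\mathrm{RMSE}_N = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{N}_i - N_i^{\mathrm{ref}}\right)^{2}}
\]

where \(\hat{N}_i\) is the estimated and \(N_i^{\mathrm{ref}}\) the reference coordinate at epoch \(i\); the East and height figures below are defined analogously.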

Figure 2: Lane change on a highway (GPS/INS)

In Figure 3, the estimated trajectory of a drive in a suburban area south of Graz is presented. The route contains two railway underpasses and a tunnel (length: 250 m). The accuracy is about 1.5 m in North and East and 2 m in height (RMSE). Without integration, the position errors are considerably larger, especially during and after the underpasses, and in the tunnel no position is available at all without sensor fusion. By fusing GPS and INS, the shorter underpasses can be bridged without degradation in accuracy. Nevertheless, the trajectory drifts considerably during the longer tunnel section: here GPS is unavailable and the trajectory estimate is based on the INS alone. The quality of the IMU only allows for short time spans without GPS updates before the accuracy degrades significantly.


Figure 3: Trajectory of an urban scenario (GPS/INS fusion)

GPS positioning in forests is challenging because of attenuated and reflected signals. Comparing the GPS trajectory with the GPS/INS trajectory in a forest driving scenario (Figure 4) shows large outliers in the GPS-only trajectory and short time spans of unavailability (a few seconds). The fused trajectory remains smooth but drifts slightly.

Figure 4: Trajectory of a forest scenario (white dots: GNSS 2 Hz, red dots: GPS/INS 500 Hz)


6. Discussion and Further Development

In this document, the basic concepts of INLANE’s positioning system and sensor data fusion have been described. Initial versions of the GNSS and sensor fusion components are available, fusing GPS pseudorange and Doppler observations with IMU observations. Up to now, a GPS-only approach has been implemented; for the next phase, the algorithm will be extended to use Galileo and EGNOS to improve availability and accuracy.

First tests demonstrate good performance in open-sky scenarios, where the system already meets the requirements for lane-level navigation (accuracy of 0.5 m). The accuracy in urban scenarios is not at this level yet, but the basic sensor fusion approach is able to improve the positioning performance significantly in terms of accuracy and availability. The main problems arise if GNSS is unavailable for a longer time span: because of the IMU sensor errors, longer GNSS gaps cannot be bridged and the accuracy degrades quickly in these cases.

In the next project phase, the information from the computer vision algorithms (visual odometry, sensor-to-map alignment) will be employed. Visual odometry provides information that is very similar to that of the INS but at better accuracy. This will help to keep the accuracy at a constant level in unfavorable GNSS environments. The corresponding measurement updates have already been implemented in the software, but tests with real data are still pending.

The key to performance enhancements will be to always select the best information available for positioning, based on the operating scenario. Therefore, alternative integration schemes should be evaluated. One option would be to base the system model, which currently uses the IMU observations, on visual odometry; in this case, all other sensors would act to correct the errors of the visual odometry.