
Quadrotor Control using Optical-Flow Navigation

António Henrique Kavamoto Fayad

Thesis to obtain the Master of Science Degree in

Aerospace Engineering

Supervisor(s): Prof. Rita Maria Mendes de Almeida Correia da Cunha
Prof. Carlos Jorge Ferreira Silvestre

Examination Committee

Chairperson: Prof. João Manuel Lage de Miranda Lemos
Supervisor: Prof. Rita Maria Mendes de Almeida Correia da Cunha
Member of the Committee: Prof. Pedro Manuel Urbano de Almeida Lima

October 2014


Acknowledgments

I would first like to thank everyone who was with me at ISR (Instituto de Sistemas e Robótica) during this thesis, for always being available to pass their knowledge on to my work. Within this group, I would like to highlight the outstanding work of Bruno Gomes, without whom the whole experimental process would have been impossible. A special thank you to my supervisor, Prof. Rita Cunha, who was tireless and patient throughout the entire thesis process.

I would also like to thank my mother and father for everything they have given me and for the effort they have made throughout their lives so that I could get here. To my brother, a thank you for the support and companionship at every level. Despite the distance, your presence is felt every day through what you have passed on to me and continue to pass on.

I would also like to thank all those who, over these 5 years, helped make my life at Instituto Superior Técnico an unforgettable experience. To Ricardo Pedrosa, without whom the first years of university would not have been the same. To the group formed by Afonso Ferreira, Pedro Isidro, Telma Oliveira, Pedro Caldeira, Diogo Santiago, Inês Cadilha, Alexandre Pacheco, Luís Zilhão, Rui Jaulino, Pedro Miller, João Luís, Teresa Guegues, Hugo Pina, Luísa Barbosa, Margarida Pedro, André Tribolet and Manel Morais, for their constant presence in every aspect. A word also for Mariana Marques, for her unique view of things, for the constant reflections we exchanged and for all her availability over these years.

To my international friends, the whole university experience wouldn't have been the same without you. I'm deeply thankful for all the knowledge I absorbed as part of "La Familia".

Last, but not least, a very special thank you to Daniela Hallak, without whom the last two years wouldn't have been the same. Your unconditional support, patience and affection were vital throughout the whole thesis process.


Resumo

The work described here proposes a linear velocity and position estimator, based on information from a set of on-board sensors, for environments where GPS is not available. The velocity estimator uses information from the vehicle's dynamic model and velocity observations made by an optical flow sensor of the kind commonly used in computer mice. The velocity measured by the optical flow sensor contains both rotational and linear components, but the rotational component is removed using angular rate measurements. The position estimator uses only information from the vehicle's dynamic model, with the possibility of being corrected by GPS data. Besides the optical flow sensor, the proposed estimators require the vehicle's altitude and a basic quadrotor sensor suite composed of accelerometers and gyroscopes.

A wide set of experiments is described and their results presented. Indoor experiments allowed the accuracy of the optical flow sensor and of the required angular compensation to be evaluated. In open-sky experiments, GPS data allowed the accuracy of the velocity and position estimates to be verified for real flights. The results obtained show that the proposed velocity estimator is adequate for use in autonomous vehicles. However, the results obtained for the position estimator reveal that the accuracy achieved by pure integration of velocity is not adequate for fully autonomous flight.

Keywords: Velocity Estimation, Position Estimation, Optical Flow, GPS-denied Environments, Lyapunov Theory, Quadrotor


Abstract

The work presented here proposes a position and velocity estimator using on-board sensors for autonomous quadrotors in GPS-denied environments. The velocity estimator uses a dynamic model of the vehicle and corrects its estimates with velocity measurements made by an optical flow mouse sensor. The optical flow measurements are influenced by the rotational movement of the camera, but are corrected by fusing angular rate data. The position estimates are based on the dynamic model of the vehicle but, depending on GPS availability, can be corrected with position data. Apart from the optical flow sensor, the proposed estimators require altitude data and a basic set of sensors composed of accelerometers and gyroscopes.

A wide range of indoor and outdoor experimental results is presented. Indoor experiments are used to assess the accuracy of the optical flow sensor and of the angular compensation. In the outdoor tests, GPS information is used to verify the accuracy of the velocity and position estimates in remote-controlled flights. The results obtained prove the suitability of the proposed velocity estimator for autonomous vehicles, but show that, for fully automated flight, the position estimate should not be the result of pure integration of velocity values.

Keywords: Velocity Estimation, Position Estimation, Optical Flow, GPS-denied Environments, Lyapunov Theory, Quadrotor


Contents

Acknowledgments

Resumo

Abstract

List of Figures

List of Acronyms

List of Symbols

1 Introduction

1.1 Motivation

1.2 State-of-the-art

1.3 Objectives

1.4 Proposed Solution

1.5 Thesis Outline

2 Quadrotor 3D Motion

2.1 Kinematics

2.1.1 Rotation Matrix

2.1.2 Rotation Representation

2.1.3 Kinematics Variables

2.2 Dynamics

3 Optical Flow

3.1 Concept

3.2 Sensor Avago ADNS-3080

4 Estimation Algorithms

4.1 Mathematical Background


4.2 Velocity Estimation

4.2.1 Linear Movement

4.2.2 Angular Compensation

4.2.3 Filtering Algorithm

4.3 Trajectory Estimation

5 Experimental Validation

5.1 K Estimation

5.1.1 Experiment Description

5.1.2 Results

5.2 Angular Compensation

5.2.1 Experiment Description

5.2.2 Results

5.3 Flight Tests

5.3.1 Experiment Description

5.3.2 Results

6 Conclusions

Bibliography


List of Figures

1.1 Proposed Solution Flow Chart

2.1 Rotation of a Reference Frame

2.2 Referentials

3.1 Example of Optical Flow Fields

3.2 Schematic of OF Sensor Displacement Detection

3.3 Schematic of OF Sensor Image Deterioration

4.1 Projection of a Single Point Schematic

4.2 Optical Flow Rotational Inconsistency

4.3 Proposed Solution Block Diagram

4.4 M2 Block Diagram

5.1 Mikrocopter QuadroKopter XL

5.2 Gain KOF Estimation

5.3 Angular Compensation Experiment

5.4 Angular Compensation Experiment Results

5.5 Angular Compensation Parameter Estimation

5.6 Modelled Pitch Compensation

5.7 Modelled Roll Compensation

5.8 Compared Solutions

5.9 Manual 5 m × 3 m Trajectory Experiment

5.10 Manual 5 m × 3 m Trajectory (2 turns)

5.11 Velocity Estimation Data

5.12 ψ(t) during Flight Experiment

5.13 Angular Compensation Action During Flight


5.14 Velocity Filter Action During Flight

5.15 Proposed Solution Position Estimates

5.16 M2 and M3 Position Estimates


List of Acronyms

UAV - Unmanned Air Vehicle

GPS - Global Positioning System

GNSS - Global Navigation Satellite System

IMU - Inertial Measuring Unit

OF - Optical Flow

LRF - Laser Range Finder

SLAM - Simultaneous Localization and Mapping

LTI - Linear Time Invariant

cpi - counts per inch

RTK - Real Time Kinematic

ISS - Input to State Stability


List of Symbols

{I} Inertial reference frame

{B} Body reference frame

pi Position of a single point in the OF sensor surface reference frame, m

Pi Position of a single point in the camera reference frame, m

Ip Vehicle's position in the inertial reference frame, m

Bp Vehicle's position in the body reference frame, m

Bv Vehicle's velocity in the body reference frame, m s^-1

IvB Vehicle's body-frame velocity written in the inertial reference frame, m s^-1

vOF Vehicle's velocity as measured by the optical flow sensor, m s^-1

d Optical flow sensor pixel array length, m

φ, θ, ψ Roll, pitch and yaw angles, rad

BIR Inertial-to-body reference frame rotation matrix

ω Vehicle's angular velocity in the body reference frame, rad s^-1

∆XOF, ∆YOF Optical flow sensor output in the x and y axes, m

h LRF altitude output, m

fs Accelerometer output, m s^-2

f Focal lens distance, m

Vi i-th Lyapunov function

g Gravity of Earth, N kg^-1


Chapter 1

Introduction

Autonomous civil Unmanned Air Vehicles (UAVs) are already part of our daily routine in several outdoor applications. Their low operational cost and the drastic reduction in risk to human life make them a viable option in fields where, in the past, the helicopter played a major role; TV broadcasting, surveillance, farming and infrastructure inspection are examples.

These outdoor operations depend on the stabilization and control of the vehicle, which nowadays is only made possible by GPS information. However, GPS data is not available in urban canyons and indoor environments. In order to expand the applicability of autonomous UAVs to those situations, the scientific community needs to find suitable alternatives. GPS-denied applications are especially relevant in exploration and search-and-rescue in hostile environments (e.g. mines, disaster sites), and in on-demand inspection and problem identification in buildings and plants. Although there are other commercial applications, the use of UAVs in these situations plays a major role because it prevents the loss of human lives without putting others in jeopardy.

Throughout this thesis, the problem of position and velocity estimation using on-board sensors in unknown GPS-denied environments is addressed. To the best of the author's knowledge, at the moment only systems that require previous on-site installation of a camera network and/or a Global Navigation Satellite System repeater can achieve the necessary estimation precision in those environments. Although this work does not present a fully working solution, its experimental data provides indications that this type of solution may one day fulfil all the requirements for automation in GPS-denied environments. The proposed solution uses an Inertial Measuring Unit (IMU), an optical-mouse Optical Flow (OF) sensor and a Laser Range Finder (LRF) placed on board the UAV.


1.1 Motivation

The increasing number of outdoor applications for autonomous UAVs in the last decade naturally raises the question of transferring this technology into indoor environments. Their price and adaptability allow them to be used in hostile environments that humans cannot reach, or could only reach slowly. Although the transition may seem logical, indoor constraints clearly reduce the performance of existing outdoor autonomous UAVs: apart from GPS signal deterioration, indoor environments usually demand hover capability, higher manoeuvrability and size limitations that do not exist outdoors.

Quadrotor flight properties fit these indoor flight needs and close this gap in UAV indoor requirements. The use of rotor propellers allows the quadrotor to hover, and its thrust-to-weight ratio provides great stability and high manoeuvrability. These qualities allow quadrotors to navigate through narrow corridors, doors and windows.

When solving complex engineering problems, humans have always looked to nature for inspiration. For example, the use of visual cues as a navigation aid has been verified in several animals, such as flies, honeybees and birds. The optical flow retrieved from their visual sensors is used to estimate velocity and distance to objects, as shown in [1][2]. It should therefore be possible to perform similar tasks with UAVs by moving all sensors and intelligence on board.

The solution proposed in this thesis is based on the extraction of OF information from images of the surface below the UAV. This information, fused with IMU data and the height of the vehicle, is sufficient to successfully estimate the velocity of the vehicle. The estimated velocity can then be used to define the trajectory of the vehicle and to control its velocity.
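The scaling from raw mouse-sensor flow to metric ground velocity can be sketched with a pinhole-camera model. The following is an illustrative sketch only: the resolution (counts per inch), the focal length value, and the function name are assumptions, not the thesis's calibrated sensor model.

```python
# Hypothetical sketch: scale raw optical-mouse displacement counts to a
# body-frame velocity estimate in m/s, assuming a simple pinhole model
# where ground motion maps to image-plane motion by the ratio f / h.

INCH_M = 0.0254   # metres per inch
CPI = 1600        # sensor resolution in counts per inch (assumed value)
F = 0.012         # focal distance of the sensor optics in metres (assumed value)

def flow_to_velocity(dx_counts, dy_counts, dt, h):
    """Scale one optical-flow sample to a metric velocity.

    dx_counts, dy_counts: raw sensor displacement counts over the interval dt [s].
    h: distance to the overflown surface in metres (e.g. from an LRF).
    """
    # Counts -> displacement on the image plane, in metres.
    dx_img = dx_counts * INCH_M / CPI
    dy_img = dy_counts * INCH_M / CPI
    # Similar-triangles scaling: image-plane velocity times h / f gives
    # the velocity of the surface relative to the sensor.
    vx = (dx_img / dt) * h / F
    vy = (dy_img / dt) * h / F
    return vx, vy
```

Note how the height h enters multiplicatively: without an altitude sensor, the same flow could correspond to a slow vehicle flying low or a fast vehicle flying high.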

1.2 State-of-the-art

Several fields of scientific research that focus on quadrotors share the common goal of accurate autonomous navigation of the vehicle. The range of applications in which this goal can be achieved depends on two different but interconnected fields: control and estimation.

Due to the nonlinear, multivariable and coupled dynamics of the quadrotor, its stabilization is essential even for human-controlled missions. Considerable work has been done in the control field on stabilization of the vehicle, with both standard linear and nonlinear approaches [3, 4]. Nonlinear back-stepping and sliding-mode control are also presented in [5]. More modern approaches using quaternion feedback control [6] or adaptive fuzzy control [7] have also helped make human control more accurate.


Although there are already important applications that can be performed under human control, the larger-scale use of quadrotors depends on the ability to automate these processes.

The estimation field deals with providing approximate values of the quantities that are necessary to automate a certain action. In UAV applications, this field is mainly divided into attitude estimation and position/velocity estimation. The first concerns the 3D orientation of the vehicle, disregarding its position. Accurate estimates are already achieved with low-cost sensors, even in the presence of biased measurements, by resorting to several versions of the Kalman Filter, as in [8, 9]. For position/velocity estimates, GPS presents itself as the state-of-the-art system and, together with an attitude estimator, already makes it possible to perform waypoint navigation [10, 11], pursuit-evasion missions and target tracking [11]. Nevertheless, there is still the need to find a suitable replacement for GPS in indoor environments. Off-board alternatives like Vicon1 provide very accurate, high-frequency estimates of attitude, velocity and position, which make it possible to perform extreme control actions like ball juggling [12] or static and dynamic equilibrium of an inverted pendulum [13].

In order to perform the control actions mentioned above in unknown indoor environments, without prior on-site hardware installation, it is necessary to find on-board sensors that provide the same level of information as systems like Vicon and GPS do. Approaches similar to the ones applied in ground robots are usually adapted to aerial vehicles. For example, Simultaneous Localization and Mapping (SLAM) algorithms are presented in [14] using on-board video, and in [15] using LRF data. The state estimation is done by filtering the information from the SLAM algorithm together with IMU data. Both works provide interesting results, in which position-hold and position set-point missions are executed with common PID controllers. Due to the computational complexity of the SLAM algorithms, the retrieved data has to be sent to a ground station in order to be processed. This limits the system's applicability, since it is difficult to ensure that the data link is available at all times in unknown environments.

Another alternative is to retrieve ego-motion information from visual cues. In [16], a fish-eye camera is used to retrieve optical flow information, with the final goal of building a short-range depth map of the surroundings in corridor-like environments. However, the necessary computations on the retrieved images are also done off board. In [17], the same methodology is used, but optical flow information is retrieved using pre-defined landmarks. To avoid the transmission of data to a ground station, the use of dedicated sensors to retrieve image frames and compute optical flow on board was also explored in [18, 19]. In [18], the optical flow vector field is calculated in spherical coordinates, which makes it possible to retrieve additional information regarding the distance to the overflown surface. Set-point missions are performed, although ground markers are necessary. In [19], dedicated optical flow hardware is used

1 More information at www.vicon.com


and compared with optical mouse sensors. The vehicle's dynamics are neglected, but a Kalman filter approach is applied to provide the final velocity estimates, and the position estimates are a direct integration of the velocity. Nevertheless, it is not possible to assess the accuracy of the presented solution, since no estimation error information is provided.

The need for image transmission over the air and the use of landmarks are common limitations of the consulted literature. The work presented here tries to achieve accuracy similar to what was achieved before, using cheaper sensors such as optical mouse sensors and without constraints on the overflown surface. A novel approach is proposed in terms of the connection between attitude estimation and estimated trajectory accuracy, by separating the problems of computing the body and inertial velocities. This work presents detailed results on angular compensation accuracy and on velocity and position estimation errors that are not available in the research mentioned above.

1.3 Objectives

An estimation problem is defined by the unknown quantity or quantities that need to be approximated and by the purpose for which the estimated values will be used. In this particular case, the unknown variables are the inertial position and velocity of the vehicle. The estimated velocity values will be used to reconstruct the vehicle's trajectory. Any of the UAV and quadrotor applications mentioned in chapter 1 require position and velocity estimates to automate their procedures.

The required precision and accuracy of the estimation is highly dependent on the envisaged use of the estimated values. Throughout this thesis, the estimation will be considered adequate if the estimated velocity values can be used to control the vehicle's velocity at a desired set point. The trajectory estimate is not required to reach this accuracy for the direct purpose of this work. In order to achieve the final goal, several intermediate goals have to be met:

• Model the optical flow sensor behaviour so that its output can be used as a velocity sensor in an unknown environment;

• Model the influence of angular rates on the measured optical flow and define a compensation law such that the optical flow measurements provide only linear velocity components;

• Design an asymptotically stable velocity estimator in which the velocity observations are provided by the optical flow sensor. The estimation accuracy should be of the same order of magnitude as GPS outdoor performance.


The mentioned goals are meant to be achieved using real flight-acquired data.

1.4 Proposed Solution

The solution proposed in this thesis for velocity estimation in GPS-denied environments is based on the direct measurements made by the OF sensor, IMU and LRF, in which the problem is decoupled into the X-Y body-frame plane and the Z body axis (Figure 1.1). The estimation in the X-Y plane is the main focus of this thesis; for the Z component, a simple numeric differentiation of the LRF data will be used. The estimation is done by extracting the rotational velocity component from the UAV's optical flow field using only data from one sample time, so that only the translational component remains. Thorough explanations of the compensation method are presented in chapter 4. The trajectory estimator is based on the integration of the inertial-frame velocity, but corrects its estimates whenever GPS data is available.
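The decoupled loop described above can be sketched as follows. This is a minimal sketch under assumed interfaces: the compensation gains, the simple rate-times-gain compensation form, and the function names are placeholders, not the estimator synthesized in chapter 4.

```python
# Illustrative sketch of one update of the decoupled estimation loop:
# X-Y velocity from angular-rate-compensated optical flow, Z velocity from
# the numeric derivative of the LRF altitude, position from pure integration.

import numpy as np

# Assumed gains mapping pitch/roll rates (rad/s) to spurious flow velocity (m/s).
ALPHA = np.array([0.05, 0.05])

def estimate_step(state, of_xy, omega, h, h_prev, dt):
    """One estimator update. state = (velocity[3], position[3])."""
    v, p = state
    # Remove the rotational component that pitch/roll rates induce in the flow.
    v_xy = of_xy - ALPHA * omega[:2]
    # Z velocity: numeric derivative of the range-finder altitude.
    v_z = (h - h_prev) / dt
    v = np.array([v_xy[0], v_xy[1], v_z])
    # Trajectory: pure integration of velocity (would be reset by GPS fixes
    # whenever they are available).
    p = p + v * dt
    return v, p
```

The split keeps the X-Y and Z channels independent, so an LRF outage degrades only the vertical estimate.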

[Flow chart: the OF, IMU and LRF sensors provide the measurements ∆XOF, ∆YOF, ω and h; the computation stage applies the angular compensation (ω Comp.) and numeric derivatives (d/dt) to produce vOF and vz; the estimation stage outputs vx, vy and vz, and the trajectory estimator outputs Ip.]

Figure 1.1: Proposed Solution Flow Chart

1.5 Thesis Outline

The remaining work presented here is divided into five more chapters.

Chapter 2 derives the 3D kinematic and dynamic equations that define quadrotor motion. Detailed information on rotation matrices and reference-frame transformations is provided, as they are extensively used in this work.

Chapter 3 provides deeper insight into the optical flow concept and the ego-motion information that can be retrieved from it. The optical flow sensor used in the proposed solution is presented, and a short summary of its internal procedures is provided.

Chapter 4 contains the core information regarding the estimation processes applied in this work. Mathematically detailed information on the origin and synthesis of the velocity and position estimators is provided. The necessary angular compensation is included in the velocity estimation algorithm. Finally, the stability of both estimators is evaluated.

Chapter 5 presents all the practical experiments done in this thesis. The aerial vehicle and all sensors used are specified; afterwards, each of the experiments is described and its results presented.

The general conclusions of the work are presented in Chapter 6. The direct consequences of the achieved results are addressed, as well as future developments of the presented work.


Chapter 2

Quadrotor 3D Motion

One of the main focuses of the work presented in this thesis is the future automation of UAV motion in a 3D environment. In order to better comprehend the challenges and solutions presented, it is important to summarize the kinematic and dynamic laws that govern this motion. In addition, the mathematical model used to simulate and predict the quadrotor's motion and the mathematical nomenclature used in the remainder of this thesis will be introduced.

2.1 Kinematics

Kinematics deals with the motion of objects, disregarding the aspects that cause or change the current state of motion [23]. Nevertheless, it contains important results on how to transform vectors represented in one reference frame into another. This transformation can happen in two ways: translation and rotation. Since the frame transformations in this work will mainly be applied to free vectors, such as velocity and angular velocity, only the rotation procedure will be explained in detail.

2.1.1 Rotation Matrix

In figure 2.1 it is possible to see reference frames {1} and {2}, in which the latter is obtained by rotating

the former by θ in the counter-clockwise direction. At the same time, the components of the vector v are

represented as seen from both reference frames. It is possible to exchange between one representation

and the other by using a rotation matrix. Using the same notation as in figure 2.1, in which {}^{1}v represents

the vector v as seen from reference frame {1}, it is possible then to write:

{}^{2}v = {}^{2}_{1}R \, {}^{1}v    (2.1)

Figure 2.1: Rotation of a Reference Frame

A matrix is a rotation matrix if and only if the following conditions are true:

• ({}^{B}_{A}R)^{-1} = ({}^{B}_{A}R)^{T} = {}^{A}_{B}R

• {}^{C}_{B}R \, {}^{B}_{A}R = {}^{C}_{A}R

• det({}^{B}_{A}R) = 1

Although the rotation matrix presented here defines a 2D rotation, its use in a 3D environment follows

the same rules but with higher complexity, since instead of being defined by only one parameter (θ) it is

defined by three. In the next section the rotation representation in 3D will be introduced, since it is useful

for vector transformation and also to understand the vehicle's orientation in 3D space.
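The conditions above can be checked numerically with a minimal sketch (the angle value below is illustrative, not from the thesis):

```python
import numpy as np

def rot2d(theta):
    """2_1R for a frame {2} rotated by +theta (counter-clockwise) from frame {1}:
    it maps frame-{1} coordinates of a fixed vector to frame-{2} coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

R = rot2d(np.deg2rad(30.0))

# Rotation-matrix conditions: inverse equals transpose, determinant is +1.
assert np.allclose(np.linalg.inv(R), R.T)
assert np.isclose(np.linalg.det(R), 1.0)

# Composition of rotations adds the angles: C_B R  B_A R = C_A R.
assert np.allclose(rot2d(0.2) @ rot2d(0.3), rot2d(0.5))
```

The sign convention (transpose of the usual active rotation) follows from the frame being rotated rather than the vector.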

2.1.2 Rotation Representation

The 3D rotation representation does not focus on the centre of mass position but on the orientation of the

body in space. Simple and correct methods to represent the body orientation by a set of rotations are

essential to calculate the rotation matrices mentioned in section 2.1.1 and to provide physical intuition for 3D

body motion.

In the aerospace industry the usual representation is the one introduced by Leonhard Euler, in which the

rotation is represented by three parameters φ, θ and ψ. They represent three consecutive rotations

about the axes of a coordinate system [24].

In figure 2.2 it is possible to see the direction in which the Euler angles are measured and the two reference

frames used throughout this thesis:

• Inertial reference frame, {I}: considers a flat-earth approximation of the WGS84 earth ellipsoid,

in which the origin is given by the initial latitude, longitude and altitude. The X axis points toward

the magnetic north, the Z axis is perpendicular to the surface and the Y axis results from the

cross product {}^{I}Z × {}^{I}X. From now on, vectors measured in this frame will have the superscript

I, e.g. {}^{I}a.

• Body reference frame, {B}: its origin coincides with the vehicle's center of mass. The X axis

has the direction of the red bar of the vehicle in figure 2.2, the Y axis has the direction of the first

rotor when moving clockwise from the red bar, and the Z axis results from the cross product

{}^{B}X × {}^{B}Y. From now on, vectors measured in this frame will have the superscript B, e.g. {}^{B}a.

Figure 2.2: Referentials

The Euler angles are essential to interchange vectors between the body and the inertial reference frames.

The need to perform this action comes from the fact that most of the sensors on board provide data

expressed in the body frame. However, calculations like the vehicle's trajectory only make sense in the

inertial frame. The rotation matrix that performs this action is built using three consecutive rotations

about X, Y and Z, in this exact order, and follows the properties introduced in section 2.1.1.

{}^{I}_{B}R = ({}^{B}_{I}R)^{T} = {}^{I}_{F1}R(\psi) \, {}^{F1}_{F2}R(\theta) \, {}^{F2}_{B}R(\phi)    (2.2)

where {F1} and {F2} are intermediate reference frames.

Substituting each rotation by its matrix yields:

{}^{I}_{B}R =
\begin{bmatrix} c\psi & -s\psi & 0 \\ s\psi & c\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} c\theta & 0 & s\theta \\ 0 & 1 & 0 \\ -s\theta & 0 & c\theta \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & c\phi & -s\phi \\ 0 & s\phi & c\phi \end{bmatrix}
=
\begin{bmatrix}
c\psi c\theta & c\psi s\theta s\phi - s\psi c\phi & s\psi s\phi + c\psi s\theta c\phi \\
s\psi c\theta & c\psi c\phi + s\psi s\theta s\phi & s\psi s\theta c\phi - c\psi s\phi \\
-s\theta & c\theta s\phi & c\theta c\phi
\end{bmatrix}    (2.3)

where "c" and "s" represent the cos(·) and sin(·) functions.

Knowing {}^{I}_{B}R or λ = [ψ, θ, φ] is then enough to fully define the body orientation. It is possible to

interchange between the two representations using equation 2.3 or by applying trigonometric relations to

the rotation matrix elements in equation 2.3:

θ = atan2(-r_{31}, \sqrt{r_{11}^2 + r_{21}^2})
ψ = atan2(r_{21}, r_{11})
φ = atan2(r_{32}, r_{33})
(2.4)

where r_{ij} refers to the element in the i-th row and j-th column of the matrix {}^{I}_{B}R.
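A minimal sketch of the round trip between the two representations (angle values are illustrative):

```python
import numpy as np

def R_IB(phi, theta, psi):
    """I_B R as in equation 2.3: Rz(psi) @ Ry(theta) @ Rx(phi)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    Ry = np.array([[ct, 0, st], [0, 1, 0], [-st, 0, ct]])
    Rx = np.array([[1, 0, 0], [0, cf, -sf], [0, sf, cf]])
    return Rz @ Ry @ Rx

def euler_from_R(R):
    """Recover (phi, theta, psi) via the atan2 relations of equation 2.4."""
    theta = np.arctan2(-R[2, 0], np.sqrt(R[0, 0]**2 + R[1, 0]**2))
    psi = np.arctan2(R[1, 0], R[0, 0])
    phi = np.arctan2(R[2, 1], R[2, 2])
    return phi, theta, psi

angles = (0.1, -0.2, 0.3)            # roll, pitch, yaw in radians
R = R_IB(*angles)
assert np.allclose(np.linalg.inv(R), R.T) and np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(euler_from_R(R), angles)   # round trip, away from theta = ±pi/2
```

The angle extraction is singular at θ = ±π/2 (gimbal lock), which is why the round trip is only checked away from that configuration.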

2.1.3 Kinematics Variables

Throughout this thesis the vehicle's motion will be extensively studied both in the {I} and {B} reference

frames. It is then necessary to define the kinematic variables that will be used and how they relate to

each other:

• p = [x, y, z]^T - position in {I}

• v = [u, v, w]^T - velocity in {B}

• λ = [θ, φ, ψ] - Euler angles with respect to {F1}, {F2} and {I}

• ω = [p, q, r]^T - angular velocity in {B}

The 3D motion has six degrees of freedom, of which three come from linear motion and the

remaining three from angular motion. The defined variables therefore fully define the vehicle's

position and orientation both in {B} and {I}. The role of each variable in the body

motion is summarized in tables 2.1 and 2.2.

Linear    x direction    u    x
          y direction    v    y
          z direction    w    z

Table 2.1: Linear Motion Variables

Angular   x direction    p    φ
          y direction    q    θ
          z direction    r    ψ

Table 2.2: Angular Motion Variables

Since the velocity data of the vehicle is usually provided by sensors placed on board, it comes in

the body reference frame. Nevertheless, using the concept of the rotation matrix introduced in subsection

2.1.1, it is straightforward to relate the body-frame and inertial-frame velocities.

\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} = {}^{I}_{B}R \begin{bmatrix} u \\ v \\ w \end{bmatrix}    (2.5)

It is also possible to find a similar relationship between the body- and inertial-frame angular variables,

but it is not practical to use it because each of the Euler angles is defined in a different reference frame.

Therefore this relation is usually replaced by a differential equation that defines the rotation matrix

over time:

\dot{R} = R \, S(\omega)    (2.6)

where S(.) is a skew-symmetric matrix defined such that S(x)y = x× y

By having the rotation matrix defined over time, it is then possible to define the Euler angles over time

using equations provided in 2.4.
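Equation 2.6 can be propagated numerically with a minimal Euler-integration sketch (the constant yaw rate is an illustrative value; the re-orthonormalisation step is an implementation choice, not part of the thesis):

```python
import numpy as np

def S(w):
    """Skew-symmetric matrix such that S(w) @ y == np.cross(w, y)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def propagate(R, w, dt):
    """One Euler step of Rdot = R S(w), followed by re-orthonormalisation
    (via SVD) so that numerical error does not push R out of SO(3)."""
    R = R + R @ S(w) * dt
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt

R = np.eye(3)
w = np.array([0.0, 0.0, 0.5])        # constant yaw rate, rad/s
for _ in range(1000):
    R = propagate(R, w, 0.001)       # integrate for 1 s
# After 1 s at 0.5 rad/s about Z, psi = atan2(r21, r11) is close to 0.5 rad.
assert np.isclose(np.arctan2(R[1, 0], R[0, 0]), 0.5, atol=1e-3)
```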


2.2 Dynamics

In the previous sections the body orientation and motion were defined, but their causes were left aside.

Dynamics provides the relation between the external actuating forces and the motion caused by

them. Here a rigid-body approximation is assumed, with the body frame {B} placed at the center of

mass and the inertial frame {I} introduced in section 2.1.2.

From the inertial reference frame point of view, the relation between the external applied forces and the

acceleration in translational motion is given by Newton's second law:

{}^{I}f = m \, {}^{I}a = m \frac{d}{dt}{}^{I}v    (2.7)

However, external forces are usually known in the body reference frame and it is also more intuitive to

analyse their effect in that same frame. Therefore, equation 2.7 is usually written as a function of

body-frame vectors, using the relationship between the rate of change of a vector expressed in a rotating

frame as seen from the inertial frame shown in [25]:

{}^{B}f = m\left(\frac{d}{dt}{}^{B}v + {}^{B}\omega \times {}^{B}v\right)    (2.8)

Substituting in the above equation the definitions for v and ω introduced in subsection 2.1.3 yields:

\begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{w} \end{bmatrix} =
\begin{bmatrix} rv - qw \\ pw - ru \\ qu - pv \end{bmatrix} +
\frac{1}{m}\begin{bmatrix} f_x \\ f_y \\ f_z \end{bmatrix}    (2.9)

It is possible to specialize equation 2.9 to the quadrotor case studied here. The considered

external forces are the thrust provided by the quadrotor rotors and the gravitational force. Since the

gravitational force is defined in the inertial reference frame, in order for it to be included in equation 2.9 it

has to be transformed into the body frame using the rotation matrix introduced in equation 2.3.

Note that the quadrotor can only apply thrust along the Z axis of {B}:

{}^{B}f = \begin{bmatrix} 0 \\ 0 \\ -T \end{bmatrix} + m \, {}^{B}_{I}R \begin{bmatrix} 0 \\ 0 \\ g \end{bmatrix}    (2.10)

The same reasoning applied to the translational movement can be applied to the angular one, in order

to find a relationship between the external applied torque and the angular motion of the vehicle:

{}^{B}\tau = J\frac{d}{dt}{}^{B}\omega + {}^{B}\omega \times (J \, {}^{B}\omega)    (2.11)

where J is the 3D moment of inertia and τ is the external applied torque.

Solving the equation for \frac{d}{dt}{}^{B}\omega, replacing the cross product by the skew-symmetric matrix multiplication

and using the notation introduced in section 2.1.3 yields the body angular rates with respect to

time:

\begin{bmatrix} \dot{p} \\ \dot{q} \\ \dot{r} \end{bmatrix} = J^{-1}\left[\tau - S(\omega)J\omega\right]    (2.12)

The model of the quadrotor used in this thesis is then obtained by joining all the kinematic and dynamic

equations derived in chapter 2:

\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} = {}^{I}_{B}R \begin{bmatrix} u \\ v \\ w \end{bmatrix}

\begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{w} \end{bmatrix} =
\begin{bmatrix} rv - qw \\ pw - ru \\ qu - pv \end{bmatrix} +
\begin{bmatrix} -g s\theta \\ g c\theta s\phi \\ g c\theta c\phi \end{bmatrix} -
\frac{1}{m}\begin{bmatrix} 0 \\ 0 \\ T \end{bmatrix}

\dot{R} =
\begin{bmatrix}
r_{12}\omega_z - r_{13}\omega_y & -r_{11}\omega_z + r_{13}\omega_x & r_{11}\omega_y - r_{12}\omega_x \\
r_{22}\omega_z - r_{23}\omega_y & -r_{21}\omega_z + r_{23}\omega_x & r_{21}\omega_y - r_{22}\omega_x \\
r_{32}\omega_z - r_{33}\omega_y & -r_{31}\omega_z + r_{33}\omega_x & r_{31}\omega_y - r_{32}\omega_x
\end{bmatrix}

\begin{bmatrix} \dot{p} \\ \dot{q} \\ \dot{r} \end{bmatrix} = J^{-1}\left[\tau - S(\omega)J\omega\right]

(2.13)
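The full model can be simulated with a minimal Euler-integration sketch. The mass and inertia values below are illustrative placeholders, not the thesis vehicle's parameters, and the angular dynamics are written as ω̇ = J⁻¹(τ − ω × Jω):

```python
import numpy as np

def S(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

# Illustrative (assumed) parameters.
m, g = 1.0, 9.81                       # kg, m/s^2
J = np.diag([0.01, 0.01, 0.02])        # kg m^2

def step(p, v, R, w, T, tau, dt):
    """One Euler step of the quadrotor model (v is the body-frame velocity)."""
    p = p + R @ v * dt                                 # pdot = I_B R v
    Bg = R.T @ np.array([0.0, 0.0, g])                 # gravity in the body frame
    v = v + (-S(w) @ v + Bg - np.array([0, 0, T]) / m) * dt
    R = R + R @ S(w) * dt                              # Rdot = R S(w)
    w = w + np.linalg.solve(J, tau - S(w) @ J @ w) * dt
    return p, v, R, w

# Hover check: thrust balances weight, no torque, so the state stays at rest.
p, v, R, w = np.zeros(3), np.zeros(3), np.eye(3), np.zeros(3)
for _ in range(100):
    p, v, R, w = step(p, v, R, w, T=m * g, tau=np.zeros(3), dt=0.01)
assert np.allclose(v, 0) and np.allclose(p, 0)
```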


Chapter 3

Optical Flow

3.1 Concept

The psychologist J. J. Gibson realized that changes in the retinal image due to motion contain rich details

about the surrounding world. The optical flow concept derived from that, and he defined it as the animal's

visual feedback due to its movement [26]. From an engineering point of view the definition needed to

go further, so one can define which physical quantities can be retrieved from these images. Therefore,

optical flow was later defined as the velocity gradients of physical surfaces in a scene due to relative

motion between the observer and the scene [27][28]. This definition is extremely relevant to this work,

since it implies that by analysing images it is possible to extract information about the movement of the

camera with respect to the surrounding environment.

The main cues of motion are the optical flow gradient and the optical flow pattern. The first indicates

the magnitude of the relative velocity of the observer with respect to the surface, and the second the direction

of that velocity. These cues are represented by arrows in Figure 3.1. In this thesis the concept of optical

flow will be used to estimate the velocity components u and v.

3.2 Sensor Avago ADNS-3080

Optical flow technology has been widely used in optical computer mice since the beginning of the 21st century,

making it possible to find sensors that suit the aerial vehicle's needs in terms of price, weight and accuracy.

In order to use the sensor's information in real time, its output rate also has to be adequate for the

vehicle dynamics. In this case, Avago's ADNS-3080 communicates at 50 Hz, which does not impose any

constraints on this work. Its resolution is 30×30 pixels and it internally computes the x and y displacement

by comparing detected feature positions in consecutive frames, as represented in Figure 3.2.

Figure 3.2: Schematic of OF Sensor Displacement detection
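The frame-to-frame displacement computation sketched in Figure 3.2 can be illustrated with a toy block-matching search over a synthetic 30×30 frame. This is only an assumption about the general class of technique; the sensor's internal algorithm is proprietary:

```python
import numpy as np

def displacement(prev, curr, max_shift=3):
    """Find the integer (dx, dy) shift of the scene content that best maps
    prev onto curr, by exhaustive search over the overlapping region
    (minimum sum of squared differences)."""
    best, best_err = (0, 0), np.inf
    n = prev.shape[0]
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            a = prev[max(0, -dy):n + min(0, -dy), max(0, -dx):n + min(0, -dx)]
            b = curr[max(0, dy):n + min(0, dy), max(0, dx):n + min(0, dx)]
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

rng = np.random.default_rng(0)
frame = rng.random((30, 30))          # textured frame, like the sensor's 30x30 array
shifted = np.roll(frame, shift=(1, 2), axis=(0, 1))   # content moved 2 px in x, 1 px in y
assert displacement(frame, shifted) == (2, 1)
```

On a textureless (uniform) frame every candidate shift scores equally well, which mirrors the feature-quality problem the SQUAL output reports.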

Although in this case only the movement detection is used, the sensor provides several outputs related

to the captured frames:

• x and y displacement in counts per inch (cpi);

• A measure of the image quality (considering detected features and light conditions) in a

variable named SQUAL;

• A greyscale version of the captured frame.

An important aspect about the displacement outputs is that they result from an integration over the x and y

directions of the optical flow field, meaning that if the sensor detects two features that move in opposite

directions, the output displacement will be zero. This situation occurs, for example, if the camera

moves along the normal direction of the surface or rotates about the optical axis. Therefore, a limitation of

using this sensor is that it does not provide the full optical flow vector field. For more technical details about

the ADNS-3080, the datasheet can be found in [29].

Since the ADNS-3080 is designed to work as an optical mouse, it is optimized to detect features while

on top of a table, which differs from its application in UAVs. As one moves the sensor a few centimetres

from the captured surface, the image loses definition to a point at which no features can be detected.

A representation of this problem is shown in Figure 3.3, for the same situation shown in Figure

3.2. In order to solve this problem a lens was coupled to the sensor and calibrated to an ideal

distance of 70 cm. The influence of the lens on the acquired data will become clearer in the next chapter.

Figure 3.3: Schematic of OF Sensor Image Deterioration


Chapter 4

Estimation Algorithms

4.1 Mathematical Background

For reasons of completeness and to improve readability, this section introduces some mathematical

notation and background on dynamical systems theory, which will be used to describe the estimation

algorithms in the following sections.

Since quantities in R³ will be extensively used, the usual vector notation, ~v, will be replaced from now

on by the bold variable v. The dynamics of a free rigid body in a 3D environment is necessarily associated

with angular motion. The mathematical operator S(·) transforms a 3D vector into a skew-symmetric

matrix and has special relevance in this application. Its properties are described below; the resulting

skew-symmetric matrix satisfies S(v)^T + S(v) = 0.

• v × ω = S(v)ω

• S^{-1}(S(v)) = v

• -S(v) = S(v)^T

• tr(S(v)M) = tr(M S(v)) = -v^T S^{-1}(M - M^T), for any 3×3 matrix M

• S(v)^2 = vv^T - \|v\|^2 I
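These operator properties can be verified numerically with a short sketch (the vectors below are arbitrary illustrative values):

```python
import numpy as np

def S(v):
    """Skew-symmetric matrix such that S(v) @ y == np.cross(v, y)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def S_inv(M):
    """Inverse operator: recover v from the skew-symmetric matrix S(v)."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

v = np.array([1.0, -2.0, 0.5])
w = np.array([0.3, 0.7, -1.1])
M = np.arange(9.0).reshape(3, 3)

assert np.allclose(S(v) @ w, np.cross(v, w))                   # v x w = S(v) w
assert np.allclose(S_inv(S(v)), v)                             # S^-1(S(v)) = v
assert np.allclose(-S(v), S(v).T)                              # skew symmetry
assert np.isclose(np.trace(S(v) @ M), -v @ S_inv(M - M.T))     # trace property
assert np.allclose(S(v) @ S(v), np.outer(v, v) - (v @ v) * np.eye(3))
```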

A dynamical system is defined by one or a set of differential equations that describe the evolution of

a state (vector) over time. In this particular case, the system is composed of first-order differential

equations in which the state vector is denoted by x. The derivative of the state is then a function of the

state value, the input u and the time:

\dot{x}(t) = f(x, t, u)    (4.1)

Usually there is another set of equations associated with equation 4.1, which defines the evolution of the

output vector y. The output vector contains the variables of interest and is usually

composed of physical quantities that can be directly measured.

y(t) = h(x, t, u) (4.2)

Equations 4.1 and 4.2 form the state space model of the system.

Systems are modelled to predict their behaviour or to make them behave in a certain way, which is usually

referred to as controlling the system.

It is possible to predict the behaviour of an observable dynamical system for all t > t0, where t0 represents

the initial time, if the initial condition x(t0) = x0 and the input vector u(t) are known for all

t > t0.

The study of the stability of a system is commonly done by studying its equilibrium points. An equilibrium

point, \bar{x}, is a configuration of the state vector where the following condition is satisfied:

\dot{x}(t) = f(\bar{x}) = 0    (4.3)

Equilibrium points can be classified in three different levels in terms of stability:

• Stable: if for every ε > 0 there exists a δ > 0 such that

\|x_0 - \bar{x}\| < δ ⇒ \|x(t, x_0) - \bar{x}\| < ε, for all t > t_0

• Asymptotically stable: if it is stable and there exists δ_1 such that

\|x_0 - \bar{x}\| < δ_1 ⇒ \lim_{t \to \infty} \|x(t, x_0) - \bar{x}\| = 0

• Unstable: if it is not stable

Although the stability conditions for an equilibrium point are intuitive and simple, proving those conditions

mathematically is not. The study of the stability of a system's equilibrium points is often done by resorting to

Lyapunov functions, V(x), named after the Russian mathematician Aleksandr Lyapunov.

V(x) has the following properties:

• V : R^n → R is a continuous scalar function;

• V(0) = 0;

• V(x) > 0, ∀x ∈ U \ {0}

These functions can then be used to prove stability using the Lyapunov stability theorem for autonomous

systems:

Theorem 1. Let \bar{x} = 0 be an equilibrium point of a system \dot{x} = f(x) on R^n and let V : D → R be a Lyapunov

function on a region D around the origin. Then \bar{x} = 0 is:

• Asymptotically stable: if \dot{V}(x) < 0 along the trajectories of the system, except at x = 0;

• Stable: if \dot{V}(x) ≤ 0 along the trajectories of the system, except at x = 0.

The existence of a Lyapunov function is a sufficient condition to prove stability or asymptotic

stability.

Considering a function V(x) = x^T P x for some symmetric n×n matrix P, V(x) fulfils the Lyapunov

function conditions if and only if:

• P is a positive definite matrix;

• \dot{V}(x) = \dot{x}^T P x + x^T P \dot{x} = x^T(A^T P + PA)x ≤ 0.

Considering now a linear dynamical system \dot{x}(t) = Ax and Q = -(A^T P + PA), the conditions of theorem

1 are equivalent to Q being positive definite, which means that eig(Q) > 0.
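A minimal numerical sketch of this quadratic-Lyapunov check, for an illustrative stable matrix A and a hand-picked P (these values are assumptions, not from the thesis):

```python
import numpy as np

# A stable linear system xdot = A x (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Candidate P must be symmetric positive definite, and
# Q = -(A^T P + P A) positive definite for asymptotic stability.
P = np.array([[2.0, 0.5],
              [0.5, 0.5]])
Q = -(A.T @ P + P @ A)

assert np.all(np.linalg.eigvalsh(P) > 0)    # P positive definite
assert np.all(np.linalg.eigvalsh(Q) > 0)    # Vdot = -x^T Q x < 0 for x != 0
```

In practice P is usually obtained by solving the Lyapunov equation A^T P + PA = −Q for a chosen positive definite Q, rather than guessed.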

Real systems are usually time dependent and actuated by external forces. Depending on how the

system is actuated, it is possible to assess its stability by using the concept of input-to-state stability (ISS).

Let us consider the system in equation 4.4 and its unforced version in equation 4.5:

\dot{x} = f(t, x, u)    (4.4)        \dot{x} = f(t, x, 0)    (4.5)

Considering that the input is piecewise continuous and bounded, the system in equation 4.4 can be seen

as a perturbation of its unforced version. It is logical then to think that, if the unforced version of the

system is asymptotically stable then, depending on the structure of the system, bounded perturbations

will result in bounded errors. In [22] the notion of an input-to-state stable system is introduced:

The system in equation 4.4 is said to be input-to-state stable if there exist a class KL function

β and a class K function γ such that for any initial state x(t_0) and any bounded input u(t), the

solution x(t) exists for all t ≥ t_0 and satisfies:

\|x(t)\| \le \beta(\|x(t_0)\|, t - t_0) + \gamma\left(\sup_{t_0 \le \tau \le t} \|u(\tau)\|\right)

4.2 Velocity Estimation

The goal of the velocity estimation problem addressed here is to be able to continuously estimate the

velocity in a known reference frame with bounded error, using only on-board sensor information. When

looking at the sensors usually available in UAVs, like gyroscopes and accelerometers, it is not possible to

continuously estimate the velocity due to the unbounded growth of errors [30]. Therefore the incorporation

of additional sensors is inevitable.

The solution proposed in this work comprises two separate steps: the first is to compute the

body-frame velocity from the OF sensor output; the second subtracts the influence of rotations

on the OF sensor by fusing its data with IMU data (top part of Computations in figure 1.1). These two

steps will be thoroughly explained in sections 4.2.1 and 4.2.2.

4.2.1 Linear Movement

Mapping the OF sensor’s data to physical units is essential to its use in any engineering system. In

this case, besides the conversion from cpi to meters one has to model the influence of the lens in the

measured data, which has focal distance f and is at distance h from the floor.

Let {B} be the frame attached to the camera in which the Z-axis is coincident with the optical axis and

Pi=[Xi Yi Zi]T a stationary point on that same reference frame. The projection of Pi into the surface

S defined by the camera is shown in equation 4.6. The term ξ(Pi) represents the camera geometry

projection. The camera used by the OF sensor in this work projects the captured image in a flat plane

at a distance f as shown in figure 4.1. It is then obvious that ξ(Pi) =h

f. Finally, the goal is to map the

displacements measured by the sensor on S to the camera displacements in {B}.

pi =1

ξ(Pi)Pi ⇔

⇔ pi =f

ZiPi

(4.6)

20

Page 35: Quadrotor Control using Optical-Flow Navigation · Quadrotor Control using Optical-Flow Navigation António Henrique Kavamoto Fayad Thesis to obtain the Master of Science Degree in

As briefly explained in section 3.2, the sensor's output is a function of the number of pixels moved

by the detected features between consecutive frames. Since the ADNS-3080 datasheet is not clear about

Avago's interpretation of the cpi unit, and the internal computations of the sensor are not available to the

public, it can only be assumed that the sensor's output is directly proportional to the length of one pixel.

Defining d as the known length of the sensor's 30×30 pixel array, and ∆X_{OF} and ∆Y_{OF} as the sensor's

body-frame outputs, the displacement of P_i on S, in meters, is given by:

\begin{bmatrix} p_{i_x} \\ p_{i_y} \end{bmatrix} = \frac{d}{30K_{OF}}\begin{bmatrix} \Delta X_{OF} \\ \Delta Y_{OF} \end{bmatrix}    (4.7)
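The mapping in equation 4.7 can be sketched directly; the values of d and K_OF below are illustrative placeholders, not the thesis' calibrated constants:

```python
# Mapping raw sensor counts to a displacement on the image plane (equation 4.7).
d = 0.0017          # assumed side length of the 30x30 pixel array, in meters
K_OF = 1.0          # sensor scale constant (calibration-dependent)

def counts_to_meters(dx_counts, dy_counts):
    """Displacement of a projected point on the image surface S, in meters."""
    scale = d / (30 * K_OF)
    return scale * dx_counts, scale * dy_counts

px, py = counts_to_meters(12, -4)   # example call with illustrative readings
```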

4.2.2 Angular Compensation

Quadrotor dynamics have some particularities that need to be taken into account when retrieving velocity

information from optical flow data. For example, assuming that the {}^{B}Z axis is aligned with the {}^{I}Z axis and that

the quadrotor is hovering, it is not possible for it (unlike most ground robots) to move in

the X-Y plane without an angular rotation. This is a direct consequence of equation 2.10.

The problem introduced by rotations is that the OF sensor cannot distinguish translational from rotational

movements. This occurs because, in terms of captured images, the rotational movement looks the same as

the translational one. In figure 4.2 it is possible to see that the camera covers approximately the same

area in the initial and final positions in both circumstances.

This phenomenon can be better understood by looking back at the dynamics of the projected point p_i

from equation 4.6. Differentiating with respect to time leads to:

Figure 4.1: Projection of a Single Point Schematic

Figure 4.2: Optical Flow Rotational Inconsistency

\dot{p}_i = f\left(\frac{Z_i\dot{P}_i - \dot{Z}_iP_i}{Z_i^2}\right)    (4.8)

Taking now into account that P_i is defined in the rotating frame {B}, its time derivative has one component

due to the position change over time and another due to the angular velocity of the frame:

\dot{P}_i = -v - S(\omega)P_i    (4.9)

Using the result from equation 4.9 in equation 4.8 yields:

\dot{p}_i = f\left(\frac{Z_i[-v - S(\omega)P_i] - (-v_z + \omega_yX_i - \omega_xY_i)P_i}{Z_i^2}\right)    (4.10)

The influence of the camera movement on the optical flow output is clearer when looking at the individual

components of the equation above (the Z axis term is not represented since there are no optical flow

readings in that axis). After some mathematical simplifications it yields:

\begin{bmatrix} \dot{p}_{i_x} \\ \dot{p}_{i_y} \end{bmatrix} =
\begin{bmatrix}
\dfrac{v_zx_i - v_xf}{Z_i} - \omega_yf + \omega_zy_i + \dfrac{\omega_xx_iy_i - \omega_yx_i^2}{f} \\
\dfrac{v_zy_i - v_yf}{Z_i} + \omega_xf - \omega_zx_i + \dfrac{\omega_xy_i^2 - \omega_yx_iy_i}{f}
\end{bmatrix}    (4.11)

As mentioned in chapter 3, the optical flow output is an integration of the movement of the detected

features. Therefore, it can be defined as:

v_{OF} = \frac{1}{N}\sum_{i=1}^{N}\dot{p}_i    (4.12)

The relation shown in equation 4.11 is extremely important, since it shows what can be easily

seen in experimental data: the measured optical flow has components due to the translational

movement and the rotational movement. The last term on the right of equation 4.11 can be neglected

because \sum_i x_i ≈ 0, \sum_i y_i ≈ 0, \sum_i x_iy_i ≈ 0, \sum_i x_i^2 ≈ const and \sum_i y_i^2 ≈ const, and therefore it is one order of

magnitude smaller than the other terms. As explained in section 3.2, when

the optical flow field is integrated over X and Y, the displacements caused by rotations around the Z axis

sum to zero, and therefore the terms related to ω_z and v_z are also neglected. Thus, using the angular

velocity values obtained from the IMU, one can write the translational velocity of the vehicle as a function

of the optical-flow-indicated velocity and the angular velocity values.

Solving equation 4.11 for the translational components, assuming Z_i = h and applying the assumptions

mentioned above, leads to:

\begin{bmatrix} v_x \\ v_y \end{bmatrix} =
\begin{bmatrix} -v_{x_{OF}}\frac{h}{f} - \omega_yh \\ -v_{y_{OF}}\frac{h}{f} + \omega_xh \end{bmatrix}    (4.13)

The final equations for the translational velocity estimation are then achieved by modelling the terms

v_{x_{OF}} and v_{y_{OF}} as numeric differentiations of the displacements measured by the OF sensor in equation 4.7:

\begin{bmatrix} v_x \\ v_y \end{bmatrix} =
\frac{d}{30}\frac{h}{fK_{OF}}
\begin{bmatrix} -\frac{\Delta X_{OF}}{\Delta t} \\ -\frac{\Delta Y_{OF}}{\Delta t} \end{bmatrix} +
h\begin{bmatrix} -\omega_y \\ \omega_x \end{bmatrix}    (4.14)

The equation above directly maps the sensor's output to the vehicle's velocity as a function of its height and

angular rates. The results provided by this estimation process will be discussed in chapter 5.
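Equation 4.14 can be sketched as a small function; d, f and K_OF below are illustrative placeholder values, not the thesis' calibrated constants:

```python
import numpy as np

def velocity_from_of(dx_counts, dy_counts, dt, h, wx, wy,
                     d=0.0017, f=0.012, K_OF=1.0):
    """Body-frame velocity estimate following equation 4.14.

    dx_counts, dy_counts: raw OF displacement outputs over the interval dt;
    h: height above the surface (m); wx, wy: IMU angular rates (rad/s);
    d, f, K_OF: array length, focal distance and sensor scale (assumed values)."""
    scale = (d / 30) * h / (f * K_OF)
    vx = scale * (-dx_counts / dt) - wy * h
    vy = scale * (-dy_counts / dt) + wx * h
    return np.array([vx, vy])

# Example call with illustrative readings (h = 0.7 m, the lens' ideal distance).
v = velocity_from_of(dx_counts=15, dy_counts=-3, dt=0.02, h=0.7,
                     wx=0.01, wy=-0.02)
```

Note that the angular-rate terms correct the flow induced by pure rotation, which the sensor alone cannot separate from translation.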

4.2.3 Filtering Algorithm

Although the velocity estimate provided by equation 4.14 already follows the trend of the true velocity, it

contains noise that is undesirable for its use in a real-time controller, and it depends solely on the OF sensor being

able to capture correct information from the overflown surface. When travelling above surfaces that are

approximately uniform (e.g. a single-coloured floor), the data provided by the Avago ADNS-3080 sensor has

less quality than usual, which leads to less accurate estimations. In order to reduce noise and expand

the applicability of the velocity estimation procedure, a state observer was introduced.

As mentioned in the beginning of section 4.2, the accelerometers are not used as stand-alone

information to provide velocity estimations, due to the unbounded growth of errors. Nevertheless, this

information can be used to filter the velocity estimates provided by equation 4.14.

Accelerometers provide measures of the specific force that actuates on the body. Although it has units of

acceleration, its value is given by the acceleration caused by non-gravitational forces:

f_s = {}^{B}a - {}^{B}g    (4.15)

Based on the dynamical model introduced in chapter 2, and using the notation defined

in that chapter, the specific force measured by the accelerometers on the quadcopter is given by:

\frac{f}{m} = {}^{B}a = \dot{v} + S(\omega)v \Leftrightarrow f_s = \dot{v} + S(\omega)v - {}^{B}g    (4.16)

Note that in order to use the accelerometer measurements, the gravity vector in the body reference frame

has to be known. By using the rotation matrix {}^{B}_{I}R introduced in chapter 2 it is possible to calculate {}^{B}g:

{}^{B}g = {}^{B}_{I}R \, {}^{I}g    (4.17)

The dynamical model assumed for the quadcopter body-frame velocity can be obtained by solving equation

4.16 for \dot{v}:

\dot{v} = -S(\omega)v + f_s + {}^{B}g    (4.18)

Based on the equation above, it is possible to design a velocity estimator in which the velocity observations

retrieved from the optical flow sensor are considered as the model observations:

\dot{\hat{v}} = -S(\omega)\hat{v} + f_s + {}^{B}g + K_{ob}(v_{OF} - \hat{v})    (4.19)

The velocity estimation error and its time evolution can be defined as in equation 4.20, if it is considered

that the measured optical flow velocity equals the true velocity:

\tilde{v} = v - \hat{v}
\dot{\tilde{v}} = -S(\omega)\tilde{v} - K_{ob}\tilde{v}
(4.20)

It is possible to define a Lyapunov function V_1 with the following properties for \tilde{v} \neq 0:

V_1 = \frac{1}{2}\tilde{v}^T\tilde{v} > 0

\dot{V}_1 = \underbrace{-\tilde{v}^TS(\omega)\tilde{v}}_{=0} - K_{ob}\tilde{v}^T\tilde{v} = -K_{ob}\tilde{v}^T\tilde{v} < 0
(4.21)

V_1 meets the requirements set by theorem 1 and proves the asymptotic stability of the velocity

observer defined in equation 4.19.
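A discrete-time sketch of the observer in equation 4.19 under a toy scenario (hover, constant true velocity, noisy OF readings; all signal values are illustrative assumptions):

```python
import numpy as np

def S(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def observer_step(v_hat, w, f_s, Bg, v_of, K_ob, dt):
    """One Euler-discretised step of equation 4.19:
    v_hat_dot = -S(w) v_hat + f_s + Bg + K_ob (v_of - v_hat)."""
    v_dot = -S(w) @ v_hat + f_s + Bg + K_ob * (v_of - v_hat)
    return v_hat + v_dot * dt

rng = np.random.default_rng(1)
v_true = np.array([0.5, -0.2, 0.0])      # constant true body-frame velocity
w = np.zeros(3)                           # no rotation in this toy case
f_s, Bg = np.zeros(3), np.zeros(3)        # specific force cancels gravity
v_hat = np.zeros(3)
for _ in range(500):
    v_of = v_true + 0.05 * rng.standard_normal(3)   # noisy OF velocity
    v_hat = observer_step(v_hat, w, f_s, Bg, v_of, K_ob=5.0, dt=0.02)
# The estimate converges to a neighbourhood of the true velocity.
assert np.linalg.norm(v_hat - v_true) < 0.1
```

The gain K_ob trades convergence speed against how much of the OF measurement noise is passed into the estimate.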

Although analytically this observer produces accurate estimations, the practical results presented a drift

caused by the integration of angular rate measurements. It is possible to estimate the sensor's drift

by evaluating different obtained values for the rotation matrix. The true time evolution of {}^{I}_{B}R \equiv R is given

by equation 2.6, but it is possible to define a rotation matrix observer as in equation 4.22, where u is a

chosen input function. The goal is to find a function \hat{b}(t) that estimates the sensor's drift

over time.

\dot{\hat{R}} = \hat{R}S(\omega + u)    (4.22)

Consider the relation between the measured angular rates, ω_m, the true angular rates and the existing drift:

\omega_m = \omega + b \Leftrightarrow \omega = \omega_m - b    (4.23)

It is possible now to define the estimated drift error \tilde{b} and the rotation matrix error \tilde{R}:

\tilde{b} = b - \hat{b} = \hat{\omega} - \omega
\tilde{R} = \hat{R}^TR
(4.24)

Using the notation introduced in equation 4.23, it is possible to substitute the unknown true angular rate

in equation 4.22 by known quantities:

\dot{\hat{R}} = \hat{R}S(\omega_m - \hat{b} + u)    (4.25)

So that the estimator in equation 4.22 can be evaluated in terms of stability, its estimation error time evolution is

considered:

\dot{\tilde{R}} = S(\omega_m - \hat{b} + u)^T\hat{R}^TR + \hat{R}^TRS(\omega)
= -S(\omega_m - \hat{b} + u)\tilde{R} + \tilde{R}S(\omega)
(4.26)

A Lyapunov function V_2 is defined and its time derivative evaluated:

V_2 = tr(I - \tilde{R})    (4.27)

taking the time derivative,

\dot{V}_2 = -tr(\dot{\tilde{R}}) = -tr\left(S(-\omega_m + \hat{b} - u + \omega)\tilde{R}\right)    (4.28)

Using now the notation from equation 4.23, and having in mind the goal of achieving a law for the angular

rate sensor drift, it is possible to rewrite the equation above as:

\dot{V}_2 = -tr\left(S(-\omega_m + \hat{b} - u + \omega_m - b)\tilde{R}\right) = tr\left(S(\tilde{b} + u)\tilde{R}\right)    (4.29)

Choosing u = k_1S^{-1}(\tilde{R} - \tilde{R}^T) with k_1 > 0 yields:

\dot{V}_2 = -\tilde{b}^TS^{-1}(\tilde{R} - \tilde{R}^T) - k_1\underbrace{S^{-1}(\tilde{R} - \tilde{R}^T)^TS^{-1}(\tilde{R} - \tilde{R}^T)}_{\ge 0}    (4.30)

\dot{V}_2 does not meet the stability conditions of theorem 1. Therefore another Lyapunov function V_3 is

defined:

V_3 = V_2 + \frac{1}{2k_2}\tilde{b}^T\tilde{b}    (4.31)

Taking its time derivative:

\dot{V}_3 = -k_1S^{-1}(\tilde{R} - \tilde{R}^T)^TS^{-1}(\tilde{R} - \tilde{R}^T) - \tilde{b}^TS^{-1}(\tilde{R} - \tilde{R}^T) + \frac{\tilde{b}^T}{k_2}\dot{\tilde{b}}    (4.32)

Defining \dot{\tilde{b}} = k_2S^{-1}(\tilde{R} - \tilde{R}^T), with k_2 > 0, results in:

\dot{V}_3 = -k_1S^{-1}(\tilde{R} - \tilde{R}^T)^TS^{-1}(\tilde{R} - \tilde{R}^T) \underbrace{- \tilde{b}^TS^{-1}(\tilde{R} - \tilde{R}^T) + \tilde{b}^TS^{-1}(\tilde{R} - \tilde{R}^T)}_{=0}

\dot{V}_3 = -k_1S^{-1}(\tilde{R} - \tilde{R}^T)^TS^{-1}(\tilde{R} - \tilde{R}^T) \le 0
(4.33)

The Lyapunov function $V_3$ meets the conditions of theorem 1 for stability, but not for asymptotic stability, for any values of $k_1, k_2 > 0$. Nevertheless, it is possible to prove asymptotic stability using Barbalat's theorem as defined in [22]. This yields the final attitude and drift estimation law:

$$\dot{\hat R} = \hat R\, S\big(\omega_m - \hat b + k_1 S^{-1}(\tilde R - \tilde R^T)\big)\,, \qquad \dot{\hat b} = -k_2\, S^{-1}(\tilde R - \tilde R^T) \qquad (4.34)$$
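As an illustration, the continuous law above can be discretized with a simple forward-Euler step. The sketch below is not part of the original work: the measured rotation matrix `R_meas`, the gains, and the time step are placeholder values, and the rotation update is projected back onto SO(3) with an SVD since a plain Euler step leaves the orthogonal group.

```python
import numpy as np

def skew(v):
    """S(v): maps a vector to the corresponding skew-symmetric matrix."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def vee(M):
    """S^{-1}(M): inverse of skew() for a skew-symmetric matrix."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

def observer_step(R_hat, b_hat, omega_m, R_meas, k1=1.0, k2=0.1, dt=0.02):
    """One Euler step of the attitude/drift estimator (equation 4.34)."""
    R_tilde = R_hat.T @ R_meas                 # rotation error R~ = R^T R
    e = vee(R_tilde - R_tilde.T)               # S^{-1}(R~ - R~^T)
    R_hat = R_hat @ (np.eye(3) + skew(omega_m - b_hat + k1 * e) * dt)
    U, _, Vt = np.linalg.svd(R_hat)            # re-project onto SO(3)
    R_hat = U @ Vt
    b_hat = b_hat - k2 * e * dt                # drift update
    return R_hat, b_hat
```

With a constant true attitude and a constant gyro bias, repeated calls drive both the attitude error and the bias estimate error toward zero, mirroring the asymptotic stability argument above.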

Since the drift estimate is now defined over time, the velocity observer defined in equation 4.19 can be updated with the result of equation 4.34:

$$\dot{\hat v} = -S(\omega_m - \hat b)\hat v + f_s + {}^B g + K_{ob}(v_{OF} - \hat v) \qquad (4.35)$$

The stability of the new observer also has to be evaluated, and therefore the dynamics of the estimation error $\tilde v$ is derived again, considering $v_{OF} = v$:

$$\dot{\tilde v} = -S(\omega_m - \hat b)\tilde v + S(\tilde b)v - K_{ob}\tilde v \qquad (4.36)$$

Using the same Lyapunov function as the one used to prove the asymptotic stability of equation 4.19 with the new observer law yields:

$$\dot V_1 = -\underbrace{\tilde v^T S(\omega_m - \hat b)\tilde v}_{=0} + \tilde v^T S(\tilde b)v - K_{ob}\,\tilde v^T\tilde v = \tilde v^T S(\tilde b)v - K_{ob}\,\tilde v^T\tilde v \qquad (4.37)$$

The inclusion of the angular rate sensor drift in the velocity observer changes $\dot V_1$, and the function no longer meets the requirements of Lyapunov's stability theorem defined in theorem 1. Nevertheless, it is possible to rewrite equation 4.37 to evaluate whether the observer is input-to-state stable (ISS):

$$\dot V_1 = -K_{ob}(1-\varepsilon)\tilde v^T\tilde v - \varepsilon K_{ob}\tilde v^T\tilde v + \tilde v^T S(\tilde b)v \qquad (4.38)$$

Setting an upper bound on the equation above yields:

$$\dot V_1 \le -K_{ob}(1-\varepsilon)\|\tilde v\|^2 - K_{ob}\varepsilon\|\tilde v\|^2 + \|\tilde v\|\,\|\tilde b\|\,\|v\| = -K_{ob}(1-\varepsilon)\|\tilde v\|^2 - \|\tilde v\|\big(K_{ob}\varepsilon\|\tilde v\| - \|\tilde b\|\,\|v\|\big) \qquad (4.39)$$

The system described by equation 4.36 is therefore ISS with $S(\tilde b)$ as the input. Assuming that $v$ is bounded and $\tilde b$ converges to zero, $\dot V_1$ is negative definite if the following condition is met:

$$\dot V_1 < 0\,, \quad \text{if } \|\tilde v\| > \frac{1}{K_{ob}\,\varepsilon}\,\|\tilde b\|\,\|v\| \qquad (4.40)$$

Equation 4.40 shows that $\dot V_1$ applied to the error dynamics of equation 4.36 meets the ISS requirements, and it follows that the system described by equation 4.35 is ISS with $\tilde b$ as the input. Since $\tilde b$ converges to zero, $\tilde v$ also converges to zero.

The proposed estimator for $v$ is then obtained by combining equations 4.14, 4.34 and 4.35:

$$\begin{aligned}
(v_{OF})_x &= \frac{d}{30}\frac{h}{f}K_{OF}\left(\frac{-\Delta X_{OF}}{\Delta t}\right) + h\,\omega_y\\
(v_{OF})_y &= \frac{d}{30}\frac{h}{f}K_{OF}\left(\frac{-\Delta Y_{OF}}{\Delta t}\right) - h\,\omega_x\\
{}^I_B\dot{\hat R} &= {}^I_B\hat R\,S\big[\omega_m - \hat b + k_1 S^{-1}(\tilde R - \tilde R^T)\big]\\
\dot{\hat b} &= -k_2\, S^{-1}(\tilde R - \tilde R^T)\\
\dot{\hat v} &= -S(\omega_m - \hat b)\hat v + f_s + {}^B g + K_{ob}(v_{OF} - \hat v)
\end{aligned}\qquad(4.41)$$
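A discrete-time sketch of the first two rows (the optical-flow measurement with angular compensation) and the last row (the velocity observer) of equation 4.41 is given below. The sensor constants `d` and `f` are hypothetical placeholders, `K_of` uses the value calibrated later in section 5.1, and the z component of the flow-based velocity is simply set to zero since the OF sensor only measures in the image plane.

```python
import numpy as np

def of_velocity(dx_of, dy_of, h, omega, dt, K_of=0.0569, d=1.0, f=55.0):
    """Optical-flow velocity with angular compensation (eq. 4.41, rows 1-2).
    d and f are placeholder sensor/lens constants; K_of is the calibrated scale."""
    vx = (d / 30.0) * (h / f) * K_of * (-dx_of / dt) + h * omega[1]
    vy = (d / 30.0) * (h / f) * K_of * (-dy_of / dt) - h * omega[0]
    return np.array([vx, vy, 0.0])   # z component not observed by the OF sensor

def velocity_observer_step(v_hat, v_of, omega_m, b_hat, f_s, g_body,
                           K_ob=10.0, dt=0.02):
    """One Euler step of the velocity observer (eq. 4.41, last row)."""
    w = omega_m - b_hat
    S = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    v_dot = -S @ v_hat + f_s + g_body + K_ob * (v_of - v_hat)
    return v_hat + v_dot * dt
```

With a constant flow-derived velocity and zero specific force, the observer state converges to the measurement at a rate set by $K_{ob}$, which is the filtering behaviour discussed in section 5.3.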


4.3 Trajectory Estimation

The goal of the trajectory estimator proposed here is to continuously provide the position of the vehicle's centre of mass in the inertial reference frame. The estimator design, however, has to take into account the set of sensors available on the quadrotor. In this case, there is no absolute position or distance data in the body or inertial reference frame, as could be obtained from positioning systems such as GPS or Vicon. Therefore, the trajectory estimator has to base its estimates on the integration of the velocity provided by equation 4.41.

Since the velocity estimator and the sensors provide information in the body reference frame, $\{B\}$, the dynamics of the trajectory estimator are also written in the body frame. The desired inertial-frame trajectory is then obtained using the rotation matrix ${}^I_B R$. The position of the inertial frame with respect to the body frame is written as ${}^B p_I$ and is given by:

$$ {}^B p_I = -\,({}^I_B R)^T\, {}^I p_B \qquad (4.42)$$

Differentiating the equation above results in the position dynamics:

$$\begin{aligned}
{}^B\dot p_I &= -\Big([{}^I_B R\, S(\omega)]^T\, {}^I p_B + ({}^I_B R)^T\, {}^I\dot p_B\Big)\\
\Leftrightarrow\quad {}^B\dot p_I &= S(\omega)\underbrace{({}^I_B R)^T\, {}^I p_B}_{-\,{}^B p_I} - \underbrace{({}^I_B R)^T\, {}^I_B R}_{I_{3\times 3}}\, v\\
\Leftrightarrow\quad {}^B\dot p_I &= -S(\omega)\, {}^B p_I - v
\end{aligned}\qquad(4.43)$$

Adding the real-world constraints in terms of available data to equation 4.43 yields the proposed trajectory estimator:

$$ {}^B\dot{\hat p}_I = -S(\omega_m - \hat b)\, {}^B\hat p_I - \hat v \qquad (4.44)$$

The necessary information for equation 4.44 is provided by the velocity estimator in equation 4.41. The estimator cascade formed by equations 4.41 and 4.44 fully defines the proposed solution shown in figure 1.1. A block diagram of the observer can be seen in figure 4.3.

An alternative position estimator was also designed, in which low-rate GPS information is used to update the filter. This makes it possible to simulate applications in which GPS is available but not under ideal conditions; local blind spots or low-rate data transmission are possible real situations.


Figure 4.3: Proposed Solution Block Diagram

The multi-rate observer dynamics is given in equation 4.45, in which the GPS data is considered to provide the true position ${}^I p$, and the velocity estimate is provided by equation 4.41. A block diagram of the observer can be seen in figure 4.4.

$$ {}^B\dot{\hat p}_I = \begin{cases} -S(\omega)\, {}^B\hat p_I - \hat v + K_{GPS}\,(p - {}^B\hat p_I)\,, & \text{if GPS is available}\\ -S(\omega)\, {}^B\hat p_I - \hat v\,, & \text{if GPS is not available} \end{cases}\qquad(4.45)$$

where $p$ denotes the GPS-derived position expressed in the body frame.
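A minimal simulation sketch of the multi-rate observer above follows, with GPS corrections applied only once per $T_{GPS}$ seconds. The data is synthetic and the gain is a placeholder: a value smaller than the $K_{GPS} = 75$ used later in section 5.3 is chosen so that the explicit-Euler correction step $K_{GPS}\,\Delta t$ stays below one.

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def position_step(p_hat, omega, v_hat, dt, p_gps=None, K_gps=40.0):
    """One Euler step of equation 4.45 for the body-frame position ^B p_I.
    p_gps is the GPS-derived body-frame position (None when no fix is available)."""
    p_dot = -skew(omega) @ p_hat - v_hat
    if p_gps is not None:                       # GPS available: add correction
        p_dot = p_dot + K_gps * (p_gps - p_hat)
    return p_hat + p_dot * dt

# Synthetic check: constant body velocity, GPS fix every 1 s, integration at 50 Hz.
dt, T_gps = 0.02, 1.0
omega = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
p_true = np.zeros(3)
p_hat = np.array([5.0, -5.0, 2.0])              # deliberately wrong initial guess
for k in range(500):                            # 10 s of motion
    gps = p_true if k % int(T_gps / dt) == 0 else None
    p_hat = position_step(p_hat, omega, v, dt, p_gps=gps)
    p_true = p_true - v * dt                    # true dynamics, eq. 4.43 with omega = 0
```

Between GPS fixes the estimate drifts with the open-loop integration; each fix contracts the error, which is the behaviour evaluated for different $T_{GPS}$ values in figure 5.16.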

For the stability analysis of the multi-rate observer, only the dynamics when GPS information is available are considered. The estimation error and its time evolution are given by:

$$\tilde p = {}^B p_I - {}^B\hat p_I\,, \qquad \dot{\tilde p} = -S(\omega)\tilde p - \tilde v - K_{GPS}\,\tilde p \qquad (4.46)$$

Using the same methodology as for the velocity estimators in the previous section, it is possible to define a Lyapunov function $V_4$:

$$V_4 = \frac{1}{2}\tilde p^T \tilde p \qquad (4.47)$$

Taking its time derivative yields:

$$\dot V_4 = -K_{GPS}\,\tilde p^T\tilde p - \tilde p^T\tilde v \qquad (4.48)$$

As equation 4.48 shows, $\dot V_4$ is not negative definite and therefore does not meet the stability requirements of theorem 1. Its input-to-state stability can be assessed by rewriting equation 4.48 as:

$$\dot V_4 = -K_{GPS}(1-\varepsilon)\tilde p^T\tilde p - K_{GPS}\,\varepsilon\,\tilde p^T\tilde p - \tilde p^T\tilde v \qquad (4.49)$$

The upper bound of $\dot V_4$ is given by:

$$\dot V_4 \le -K_{GPS}(1-\varepsilon)\|\tilde p\|^2 - K_{GPS}\,\varepsilon\|\tilde p\|^2 + \|\tilde p\|\,\|\tilde v\| = -K_{GPS}(1-\varepsilon)\|\tilde p\|^2 - \|\tilde p\|\big(\varepsilon K_{GPS}\|\tilde p\| - \|\tilde v\|\big) \qquad (4.50)$$


Therefore $\dot V_4$ is negative if:

$$\dot V_4 < 0\,, \quad \text{if } \|\tilde p\| > \frac{\|\tilde v\|}{\varepsilon K_{GPS}} \qquad (4.51)$$

The equation above shows that the body-frame position error dynamics in equation 4.46 is ISS with $\tilde v$ as the input. Since $\tilde v$ converges to zero, $\tilde p$ converges to zero as well, which implies that the observer defined in equation 4.45 is asymptotically stable.

Figure 4.4: M2 Block Diagram

In order to obtain the desired inertial position, the estimates provided by equation 4.45 have to be projected onto the inertial reference frame. This step would not add error if the true rotation matrix were known, but in a real experiment it is not. Assuming that $\tilde p = 0$ yields:

$$ {}^B\hat p_I = {}^B p_I \qquad (4.52)$$

In this particular case, the inertial position estimation error is due to rotation only and is given by:

$$ {}^I\tilde p_B = {}^I\hat p_B - {}^I p_B = -\,{}^I_B\hat R\, {}^B p_I + {}^I_B R\, {}^B p_I = -\big({}^I_B\hat R - {}^I_B R\big)\, {}^B p_I \qquad (4.53)$$

The maximum value for this error is written as:

$$\max \|{}^I\tilde p_B\|^2 = \max\, {}^I p_B^T\,\big({}^I_B\hat R - {}^I_B R\big)^T\big({}^I_B\hat R - {}^I_B R\big)\, {}^I p_B = 2\, {}^I p_B^T\Big(I - \underbrace{{}^I_B\hat R^T\, {}^I_B R}_{\tilde R}\Big)\, {}^I p_B \qquad (4.54)$$

The rotation matrix error can be written using Rodrigues' formula [31], which parametrizes the rotation matrix $\tilde R$ as a rotation of angle $\tau$ about the unit axis $\gamma$:

$$\tilde R = I + \sin(\tau)\,S(\gamma) + (1-\cos(\tau))\,S(\gamma)^2 \qquad (4.55)$$

The maximum inertial position estimation error, assuming that the body-frame position is exactly known, equals:

$$\max \|{}^I\tilde p_B\|^2 = 2(1-\cos(\tau))\,\|{}^I p_B\|^2\,, \quad \text{if } {}^B\hat p_I = {}^B p_I \qquad (4.56)$$
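The bound above can be verified numerically: for a rotation error of angle $\tau$, the squared projection error over any direction of ${}^I p_B$ never exceeds $2(1-\cos\tau)\|{}^I p_B\|^2$, with near-equality when the position is orthogonal to the error axis $\gamma$. The short check below is not part of the original work; the axis, angle, and sample count are arbitrary.

```python
import numpy as np

def rodrigues(gamma, tau):
    """Rotation of angle tau about the unit axis gamma (Rodrigues' formula)."""
    S = np.array([[0.0, -gamma[2], gamma[1]],
                  [gamma[2], 0.0, -gamma[0]],
                  [-gamma[1], gamma[0], 0.0]])
    return np.eye(3) + np.sin(tau) * S + (1.0 - np.cos(tau)) * (S @ S)

tau = np.deg2rad(3.0)                    # the 3 degree error quoted in section 5.3
gamma = np.array([0.0, 0.0, 1.0])
R = np.eye(3)                            # true attitude (arbitrary)
R_hat = R @ rodrigues(gamma, tau).T      # attitude with rotation error of angle tau

rng = np.random.default_rng(0)
p = rng.standard_normal((1000, 3))       # random inertial positions
err2 = np.sum(((R_hat - R) @ p.T).T ** 2, axis=1)
bound = 2.0 * (1.0 - np.cos(tau)) * np.sum(p ** 2, axis=1)
```

Every sampled error respects the bound, and the ratio `err2 / bound` approaches one for positions nearly perpendicular to the rotation axis.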


Chapter 5

Experimental Validation

This chapter describes and presents the results of all experiments carried out in the framework of this thesis. The procedures took place in an indoor environment at IST and outdoors at the aeromodelling track Hangar 13 in Montijo. First, sections 5.1 and 5.2 present the indoor estimation procedures. Afterwards, section 5.3 shows the outdoor test results.

The tests were made with the Mikrocopter QuadroKopter XL quadrotor, figure 5.1, which was equipped with the following sensors:

• IMU: Microstrain 3DM-GX3

Used Outputs: ω,fs,φ,θ,ψ

Output Frequency: 50 Hz

• LRF: Hokuyo UTM 30LX

Used Outputs: h

Output Frequency: 50 Hz

• Optical Flow Sensor: Avago ADNS-3080 coupled with a 16 mm lens

Used Outputs: ∆XOF ,∆YOF

Output Frequency: 50 Hz

• GPS System1

Software: Ashtech MB100 with Real Time Kinematic (RTK) corrections

Antenna: Trimble AV33

Used Outputs: Ip

Output Frequency: 10 Hz

1used in the outdoor flight tests


The flight tests were performed by controlling the quadrotor with a 2.4 GHz radio control, and the goal was to simulate a flight at approximately constant height covering a rectangular-shaped area. The continuous observers defined in the previous chapter were implemented in a user-defined function in MATLAB® and simulated in SIMULINK® using the standard ode45 integration algorithm with ∆t = 0.02 s.

The solution presented here can be implemented on other UAVs equipped with the necessary sensors. Nevertheless, the experiments described in the first two sections of this chapter are essential in order to obtain accurate results with a different experimental set-up, since the velocity estimation algorithm depends on sensor calibration (distance measurements and angular compensation) and on the position of the OF sensor with respect to the IMU. The trajectory estimation procedures can be implemented in any dynamic system provided that the necessary dynamic variables are known.

Figure 5.1: Mikrocopter QuadroKopter XL

5.1 K_OF Estimation

5.1.1 Experiment Description

As stated in section 4.2, due to the lack of knowledge of the OF sensor's internal computations, the constant K_OF was introduced in equation 4.14 to correctly scale the sensor's output to the real travelled distance.

The estimation of K_OF can be easily achieved by looking at the output data when travelling a known distance without angular movement. In this case, a person walked along a pre-defined straight line while keeping one of the OF axes aligned with the desired path. This way, after the necessary computations, the output data from that axis should provide the length covered by the person.


5.1.2 Results

When travelling a known distance, the only unknown in equation 4.14 is $K_{OF}$. Evaluating it over the experiment time yields:

$$\Delta s_{real} = \frac{d}{30}\sum_{i=1}^{N_s}\frac{h}{f}\,K_{OF}\,\Delta s_{of} \qquad (5.1)$$

where $\Delta s_{real}$ represents the real displacement, $\Delta s_{of}$ the optical flow sensor output, and $N_s$ the number of samples.

Each row of table 5.1 presents the data from a single experiment used in the estimation process. The values of the real travelled distance, the OF sensor's accumulated output, the distance from the surface, and the light conditions are shown. These conditions were intentionally altered from one trial to the next so that the data reflect the diversity of conditions in which the sensor could be used.

∆s_real [m]   Σ ∆s_of [cpi]   h [m]   Light Conditions
2.5           1266            0.85    indoor
2.5           1300            0.82    indoor
3             1477            1.03    indoor
3             1731            0.72    outdoor
3             1776            0.73    outdoor
3             1507            0.81    indoor
3             1466            0.82    indoor
3             1499            0.99    outdoor
3             1438            1.00    outdoor
3             1553            1.01    outdoor
3             1388            1.01    outdoor
5             2558            0.99    outdoor
5             2727            1.01    outdoor
5             2606            1.00    outdoor
6             2809            1.04    indoor
9             4584            1.01    indoor

Table 5.1: Sensor Calibration Data

For each experiment shown in table 5.1 there is a value of $K_{OF}$ that fulfils equation 5.1. In order to estimate a single constant value, a maximum likelihood estimator can be used. Assuming that the measurements are corrupted by additive white Gaussian noise, the estimated value of $K_{OF}$ is given by the sample mean of the per-experiment values:

$$\hat K_{OF} = \frac{1}{N_e}\sum_{j=1}^{N_e}(K_{OF})_j = \frac{1}{N_e}\sum_{j=1}^{N_e}\frac{30 f}{d}\,\frac{(\Delta s_{real})_j}{h_j\sum_{i=1}^{N_s}\Delta s_{of}} = 0.0569\ \Big[\frac{1}{\text{cpi}}\Big] \qquad (5.2)$$

where $N_e$ is the total number of experiments.
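The averaging in equation 5.2 can be reproduced from the data of table 5.1. The absolute value of $K_{OF}$ depends on the constants $d$ and $f$ (not listed here, so a placeholder scale is used below), but the relative spread of the per-trial values $(K_{OF})_j$ is scale-free; it comes out at roughly 10%, which is what makes the $2\sigma \approx 20\%$ / 95.4% statement at the end of this section plausible.

```python
import numpy as np

# (delta_s_real [m], accumulated OF counts [cpi], h [m]) from table 5.1
trials = [(2.5, 1266, 0.85), (2.5, 1300, 0.82), (3, 1477, 1.03),
          (3, 1731, 0.72), (3, 1776, 0.73), (3, 1507, 0.81),
          (3, 1466, 0.82), (3, 1499, 0.99), (3, 1438, 1.00),
          (3, 1553, 1.01), (3, 1388, 1.01), (5, 2558, 0.99),
          (5, 2727, 1.01), (5, 2606, 1.00), (6, 2809, 1.04),
          (9, 4584, 1.01)]

SCALE = 1.0  # stands in for the unknown 30 f / d factor; cancels in the spread
k_j = np.array([SCALE * ds / (h * counts) for ds, counts, h in trials])

k_hat = k_j.mean()                    # maximum-likelihood estimate under AWGN
rel_spread = k_j.std(ddof=1) / k_hat  # relative standard deviation, about 0.10
```

Since the per-trial relative standard deviation is close to 10%, a Gaussian error model places 95.4% of measurements inside a ±20% band, matching the claim below.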

Figure 5.2 provides a graphical view of the distribution of the values $(K_{OF})_j$ obtained in equation 5.2, together with the final estimated value and additional statistical information about the estimation process.

Figure 5.2: Gain KOF estimation

Assuming that the measurement errors are normally distributed and that the data shown in table 5.1 and figure 5.2 fully represent the sensor's behaviour, it is possible to say that 95.4% of the measurements (a 2σ interval) will have less than 20% error.


5.2 Angular Compensation

5.2.1 Experiment Description

The angular compensation procedure derived in section 4.2.2 was tested experimentally in the set-up shown in figure 5.3. The quadrotor is fixed laterally, so that its only degree of freedom is the rotation about the line that connects the fixed points.

The goal of this experiment is to use the prior knowledge that $v = 0$ to test the performance of the optical flow angular compensation. The OF sensor detects movement due to the pure rotation, but after applying the angular compensation of equation 4.14 the computed translational velocity should be zero.

The angular movement applied to the UAV was approximately sinusoidal, varying the pitch or roll angle separately. In a standard UAV flight, the pitch and roll angles only occasionally exceed ±10◦. Nevertheless, in order to make sure that the angular compensation handles all regular angular flight positions, the tested angles were extended to ±20◦.

Figure 5.3: Angular Compensation Experiment

5.2.2 Results

Figure 5.4 presents the angular variations and the computed linear velocity for pitch and roll movements when applying the set of equations in 4.14.

The rotation induces sinusoidal velocities with peak values of 1–1.5 m s−1. Although the computed compensated velocity also has a sinusoidal shape, its peak values range between 0.1 and 0.2 m s−1.

The mean relative error of the angular compensation is presented in equation 5.3, in which the subscripts "p" and "r" correspond to pitch and roll respectively.


(a) u(t) and (vOF )x(t) (b) v(t) and (vOF )y(t)

Figure 5.4: Angular Compensation Experiment Results

εr = 12.9171% (5.3a)

εp = 12.9106% (5.3b)

Because the quadrotor is underactuated and its dynamics is influenced by vibration modes, the angular compensation plays a major role in the accuracy of the estimates. Although the compensation errors originating from the mathematical approximations explained in section 4.2.2 cannot be reduced, those that come from how the data is acquired can. For example, the previous results assume that both the IMU and the OF sensor are installed at the vehicle's centre of mass. More importantly, it is assumed that the optical flow measurements are not influenced by the current angular speed of the camera. These deviations from the ideal model can be taken into account by estimating the parameters of equation 4.14.

Since the linear velocity of the quadrotor is known to be zero, the set of equations 4.14 reduces to:

$$\frac{1}{f}\begin{bmatrix} v_{OF_x} \\ v_{OF_y} \end{bmatrix} = \begin{bmatrix} \omega_y \\ -\omega_x \end{bmatrix} \qquad (5.4)$$

This relationship can be estimated experimentally if $\frac{1}{f}$ is replaced by a general affine function $y = mx + b$. This yields equation 5.5 and makes it possible to take into account differences between the ideal model and the experimental set-up.

$$\begin{bmatrix} v_x m_p \\ v_y m_r \end{bmatrix} + \begin{bmatrix} b_p \\ b_r \end{bmatrix} = \begin{bmatrix} \omega_y \\ -\omega_x \end{bmatrix} \qquad (5.5)$$

Figure 5.5 shows the relationship between the OF-measured velocity and the angular rates for the same experiment as in figure 5.4. The computed linear fit is marked in red and the estimated values are:

$$m_r = 55.32\,, \quad m_p = 56.03\,, \quad b_r = -0.001398\,, \quad b_p = -0.003213 \qquad (5.6)$$
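The affine model of equation 5.5 amounts to a standard least-squares line fit between the logged OF velocities and the gyroscope rates. A sketch with synthetic data follows; the "true" slope and intercept below are arbitrary values merely resembling the estimates in 5.6, not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic OF-measured velocity during a sinusoidal rotation experiment
v_of = 0.02 * np.sin(np.linspace(0.0, 20.0, 1000))
m_true, b_true = 55.0, -0.002                  # hypothetical affine parameters
omega = m_true * v_of + b_true + 0.01 * rng.standard_normal(1000)

# fit omega = m * v_of + b in the least-squares sense
m_est, b_est = np.polyfit(v_of, omega, 1)
```

With real flight logs, one such fit per axis yields the pairs $(m_p, b_p)$ and $(m_r, b_r)$ reported in equation 5.6.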

The estimated parameters are informative in the sense that their values are similar to the theoretical ones while showing the expected small deviations. During this process it was also possible to see that the quantization of the OF readings influences the angular compensation. However, it is not possible to diminish its influence without changing the sensor.

(a) Pitch (b) Roll

Figure 5.5: Angular Compensation Parameter Estimation

Applying the identified angular compensation model, the induced velocity is the same, but the range of the computed translational velocity peaks is reduced from 0.1–0.2 m s−1 to 0.05–0.1 m s−1. Figures 5.6 and 5.7 compare the results of the theoretical compensation with the estimated one. The average relative error of the estimated model is shown in equation 5.7.


εr = 5.8958%

εp = 7.88%

(5.7)

Figure 5.6: Modelled Pitch Compensation Figure 5.7: Modelled Roll Compensation

5.3 Flight Tests

5.3.1 Experiment Description

Although the results presented in sections 5.1 and 5.2 can already provide a reference for the estimation performance, they result from experiments done under conditions that are completely different from an actual flight. Therefore, flight tests were performed in a real outdoor environment at Hangar 13 in Montijo. A ground GPS station was set up to provide RTK corrections to the GPS data received by the quadrotor, and the flight was manually controlled using radio control.

The results are divided into two parts. First, the performance of the estimation process is addressed; second, it is compared with current alternatives. In all results presented, the retrieved GPS information is assumed to be the ground truth, since it is the most precise information available.

In order to gain a deeper understanding of the proposed solution's performance, it is important to compare it with current procedures for velocity and position estimation that rely on the same data as that available in the experimental set-up used. Apart from the proposed solution, two other models for the estimation process are evaluated. The one introduced in equation 4.45 is referred to as M2, and the solution shown in figure 5.8 as M3. The observer structure and dynamics of M3 are the same as M2, but it does not use optical flow information. Therefore, the velocity estimates used to estimate the vehicle's trajectory are based only on the vehicle's dynamic equation 4.18.

Figure 5.8: Compared Solutions

The constants defined throughout this work were set as follows:

• k1 = 1

• k2 = 0.1

• Kob = 10

• KGPS = 75

All the results shown in the next section were computed by post-processing data retrieved and logged during the flight. MATLAB® and SIMULINK® were used to produce the results shown in the following sections.

5.3.2 Results

Since it was not possible to test the trajectory estimates in an indoor environment, due to the lack of true trajectory values and external influences on the magnetometer (e.g. cellphones and metal structures), a rectangular trajectory 5 m long and 3 m wide was performed by carrying the quadrotor in an outdoor environment while receiving GPS data. The direction of movement was always aligned with the X body axis, so the influence of a non-constant heading can also be evaluated.

Figure 5.9 shows the 2D trajectory data retrieved from the first manual experiment. The blue line shows the GPS data and the green line was obtained using the proposed solution's trajectory estimator. The measured travelled distance error is given by:

$$\varepsilon(\%) = \frac{|\Delta s - \Delta\hat s|}{\Delta s}\cdot 100 = \frac{|16.3925 - 16.9675|}{16.3925}\cdot 100 = 3.5077\% \qquad (5.8)$$

The estimated trajectory error is clearly considerably larger than the travelled distance error, which means that the error is caused by the projection of the body-frame estimated velocity onto the inertial frame. It is possible to see in figure 5.9a that the estimated heading angle, in green, is not consistent with a rectangular-shaped trajectory. In order to assess the reason for this large estimation error, the heading angle was estimated using only the rotation matrix dynamics from equation 2.6, and a second trajectory was computed using the same velocity estimates from the proposed solution. The results obtained with this approach are shown in red in figure 5.9. The general trend of the heading angle is the expected one, and consequently the estimated trajectory is more accurate than the previous estimate.

The proposed solution's rotation matrix did not present clear deviations during the flight tests, which indicates that the errors observed in the manual tests were due to external influences, such as cellphones close to the quadrotor's magnetometer.

This result shows that the accuracy of trajectory estimates based on on-board sensors is highly dependent on the accuracy of the rotation matrix. Therefore, even if more accurate techniques are found to estimate the body-frame velocity, they will not necessarily result in more accurate estimated trajectories.

A second manual experiment was performed on the same trajectory at a higher speed, in which the rectangular trajectory was travelled two consecutive times in the same direction. The results presented in figure 5.10 show that the travelled distance measurements were not affected by the increased speed, and the travelled distance error, shown in equation 5.9, was smaller than in the first experiment.

$$\varepsilon(\%) = \frac{|30.9183 - 31.42|}{30.9183}\cdot 100 = 1.62\% \qquad (5.9)$$

The second experiment confirmed the conclusion drawn from the first experiment that the heading estimate limits the accuracy of the estimated trajectory.

After the manual experiments, flight tests were performed. The left column of figure 5.11 shows both the norm and the inertial components of the velocity estimates. The proposed solution's data is presented in red, the M3 velocity in green and the GPS velocity in blue. The right column shows the estimation error of the proposed solution.

The mean of the velocity norm estimation error has an approximate value of −0.0066 m s−1, and the estimation error in each of the inertial components has an approximate average of −0.025 m s−1. The higher order of magnitude of the inertial velocity component errors can be explained by the error induced when projecting the body-frame velocity. It is possible to see in figure 5.9a that the magnetometer readings have a considerable settling time. Looking at the large heading changes around 10 and 40 seconds of the flight test in figure 5.12, the heading angle errors in those periods are expected to be considerably large. Using equation 4.56 from section 4.3, a 3◦ error already

(a) ψ(t) (b) Travelled Distance

(c) 2D Trajectory

Figure 5.9: Manual 5m x 3m Trajectory Experiment

provides a 0.0027 m s−1 error, which is of the same order of magnitude as the obtained velocity norm estimation error.

The peaks observed in the estimation error coincide with more abrupt manoeuvres, since there is additional error originating from the angular compensation. This can be seen in figure 5.13, in which the non-compensated velocity is presented in brown. This does not limit the estimator's applicability, since angular movements of that magnitude are rare in a standard quadrotor flight.

The norm of the estimated velocity can also be used to show the importance of the velocity observer introduced in section 4.2.3. It is possible to see in figure 5.14 that the observer considerably reduces the noise originating from the numerical differentiation of the OF displacement measurements and their output quantization. It is important to note that the observer does not introduce visible delay

(a) 2D Trajectory (b) Travelled Distance

Figure 5.10: Manual 5m x 3m Trajectory (2 turns)

in the velocity estimation.

Figure 5.15 shows the 3D position estimation for the proposed solution and its error with respect to the GPS data. The colours follow the same designation as in the previous images. Although the estimated trajectory provides a good indication of the quadrotor's general trajectory, the estimation error grows over time, since the position estimate originates from pure integration of the velocity data. Even in the ideal case where the velocity estimator has zero-mean error, the position error cannot be bounded.

The results presented in figure 5.16 show the M2 and M3 results for four different TGPS values. In the left column, the red and green lines represent the M2 and M3 trajectory estimates, and in the right column the estimation error for M2 is shown. For TGPS = 25 s, the trajectory provided by M3 is not presented because its accuracy is several orders of magnitude lower than that of M2.

The introduction of an OF sensor into the usual sensor suite composed of gyroscopes and accelerometers considerably improves the trajectory estimate in all cases. In fact, while M3 can only provide an acceptable trajectory with GPS updates every second, the proposed solution's estimation errors stay under 1 m over the entire scope of the experiment at the lowest update rate tested.


(a) ||Iv(t)|| (b) || Iv(t)|| Estimation Error

(c) Ivy(t) (d) Iv(t)y Estimation Error

(e) Ivx(t) (f) Iv(t)x Estimation Error

Figure 5.11: Velocity Estimation Data


Figure 5.12: ψ(t) during Flight Experiment

(a) Flight Pitch Compensation (b) Flight Roll Compensation

(c) θ(t) (d) φ(t)

Figure 5.13: Angular Compensation Action During Flight


Figure 5.14: Velocity Filter Action During Flight

(a) Trajectory Estimation (b) Estimation Error

Figure 5.15: Proposed Solution Position Estimates


(a) TGPS = 15 s (b) Estimation Error

(c) TGPS = 5 s (d) Estimation Error

(e) TGPS = 1 s (f) Estimation Error

Figure 5.16: M2 and M3 Position Estimates


Chapter 6

Conclusions

This thesis proposed a velocity and position estimator based on low-cost on-board sensors. Apart from the standard sensor suite composed of accelerometers and gyroscopes, a mouse optical flow sensor is used as a velocity indicator. The velocity induced by rotational movement is compensated by fusing IMU data. The proposed solution is tested on a quadrotor, as the ability to estimate position and velocity with on-board sensors would open the door to several new applications in indoor/GPS-denied environments. The proposed estimators are linear estimators for which global stability is proven by assuming that true values of velocity and position are received.

Experiments ranging from sensor calibration and modelling of the angular compensation to manual tests are presented, and form a set of necessary experiments for anyone who intends to use a solution similar to the one proposed here. It is shown that the angular velocity of the camera influences the quality of the measured data, as the identified model provides more accurate angular compensation than the theoretical results. Consequently, the defined modelling objectives were accomplished.

Outdoor tests were made in order to compare the proposed solution's results with those achieved with GPS data. The velocity estimator presents promising results, but its use in automation processes could not be assessed. The outdoor tests show that the heading estimate might be a bottleneck for the accuracy of the position estimator, since the estimated velocities necessarily have to be projected onto the inertial frame in order to estimate the position. Due to the lack of accuracy of the proposed solution in terms of position estimates, GPS data at a lower rate was included as true position measurements. This intermediate solution simulates environments where, although GPS is available, its transmission rate is not constant and it might even be unavailable for long periods.


Regarding future developments of the proposed solution, a SLAM algorithm could be added using LRF data. Similar algorithms for mapping were already used in [14, 15]. Since the set-up used already contains an LRF to provide altitude readings, its information could be used for mapping algorithms without changing the experimental set-up. A more robust estimator for the heading could also be applied in order to check the influence of heading accuracy on the observers implemented in this work.


Bibliography

[1] Partha S. Bhagavatula, Charles Claudianos, Michael R. Ibbotson, and Mandyam V. Srinivasan. Optic flow cues guide flight in birds. Current Biology, 21(21):1794–1799, November 2011. ISSN 1879-0445. doi: 10.1016/j.cub.2011.09.009. URL http://www.ncbi.nlm.nih.gov/pubmed/22036184.

[2] E. W. Justh and P. S. Krishnaprasad. Steering laws for motion camouflage. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 462(2076):3629–3643, December 2006. ISSN 1364-5021. doi: 10.1098/rspa.2006.1742. URL http://rspa.royalsocietypublishing.org/cgi/doi/10.1098/rspa.2006.1742.

[3] Jan Wendel, Oliver Meister, Christian Schlaile, and Gert F. Trommer. An integrated GPS/MEMS-IMU navigation system for an autonomous helicopter. Aerospace Science and Technology, 10(6):527–533, 2006. ISSN 1270-9638. doi: 10.1016/j.ast.2006.04.002. URL http://www.sciencedirect.com/science/article/pii/S1270963806000484.

[4] Stabilization of a mini rotorcraft with four rotors. IEEE Control Systems Magazine, 25:45–50, 2005.

[5] Samir Bouabdallah and Roland Siegwart. Backstepping and Sliding-mode Techniques Applied to an Indoor Micro Quadrotor. pages 47–52, April 2005.

[6] A. Tayebi and S. McGilvray. Attitude stabilization of a VTOL quadrotor aircraft. IEEE Transactions on Control Systems Technology, 14(3):562–571, May 2006. ISSN 1063-6536. doi: 10.1109/TCST.2006.872519. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1624481.

[7] Cosmin Coza and C. J. B. Macnab. A New Robust Adaptive-Fuzzy Control Method Applied to Quadrotor Helicopter Stabilization. pages 475–479.

[8] Pedro Batista, Carlos Silvestre, and Paulo Oliveira. Sensor-based Complementary Globally Asymptotically Stable Filters for Attitude Estimation. pages 7563–7568, 2009.


[9] Kenneth D. Sebesta and Nicolas Boizot. A Real-Time Adaptive High-Gain EKF, Applied to a Quadcopter Inertial Navigation System. 61(1):495–503, 2014.

[10] Jong-hyuk Kim, Salah Sukkarieh, and Stuart Wishart. Real-Time Navigation, Guidance, and Control of a UAV Using Low-Cost Sensors. pages 299–309, 2006.

[11] H. Jin Kim and David H. Shim. A flight control system for aerial robots: algorithms and experiments. Control Engineering Practice, 11(12):1389–1400, 2003. ISSN 0967-0661. doi: 10.1016/S0967-0661(03)00100-X. URL http://www.sciencedirect.com/science/article/pii/S096706610300100X. Award winning applications - 2002 IFAC World Congress.

[12] Mark Muller, S. Lupashin, and R. D'Andrea. Quadrocopter ball juggling. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5113–5120, September 2011. doi: 10.1109/IROS.2011.6094506. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6094506.

[13] Markus Hehn and Raffaello D'Andrea. A flying inverted pendulum. 2011 IEEE International Conference on Robotics and Automation, (2):763–770, May 2011. doi: 10.1109/ICRA.2011.5980244. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5980244.

[14] Jakob Engel and Daniel Cremers. Camera-Based Navigation of a Low-Cost Quadrocopter. pages

2815–2821, 2012.

[15] Abraham Bachrach, Samuel Prentice, Ruijie He, and Nicholas Roy. RANGE - Robust autonomous navigation in GPS-denied environments. Journal of Field Robotics, 28(5):644–666, September 2011. ISSN 1556-4959. doi: 10.1002/rob.20400. URL http://doi.wiley.com/10.1002/rob.20400.

[16] Simon Zingg, Davide Scaramuzza, Stephan Weiss, and Roland Siegwart. MAV Navigation through

Indoor Corridors Using Optical Flow. pages 3361–3368, 2010.

[17] Spencer Ahrens, Daniel Levine, Gregory Andrews, and Jonathan P. How. Vision-Based Guidance and Control of a Hovering Vehicle in Unknown, GPS-denied Environments. pages 2643–2648, 2009.

[18] R. Mahony, P. Corke, and T. Hamel. Dynamic Image-Based Visual Servo Control Using Centroid and Optic Flow Features. Journal of Dynamic Systems, Measurement, and Control, 130(1):011005, 2008. ISSN 0022-0434. doi: 10.1115/1.2807085. URL http://dynamicsystems.asmedigitalcollection.asme.org/article.aspx?articleid=1475501.


[19] Dominik Honegger, Lorenz Meier, Petri Tanskanen, and Marc Pollefeys. An Open Source and Open Hardware Embedded Metric Optical Flow CMOS Camera for Indoor and Outdoor Applications.

[20] G. J. Olsder, J. W. van der Woude, J. G. Maks, and D. Jeltsema. Mathematical Systems Theory, 4th Edition. VSSD, 2011.

[21] David C. Lay. Linear Algebra and Its Applications, Fourth Edition. Addison-Wesley, 2012.

[22] H. K. Khalil. Nonlinear Systems. Prentice Hall, 2002. ISBN 9780130673893. URL http://books.google.pt/books?id=t_d1QgAACAAJ.

[23] Raymond A. Serway and John W. Jewett. Physics for Scientists and Engineers with Modern Physics, Seventh Edition. David Harris, 2008.

[24] Ferdinand Beer, E. Russell Johnston Jr., David Mazurek, and Phillip Cornwell. Vector Mechanics for Engineers: Statics and Dynamics. McGraw-Hill Education, 2012.

[25] R.C. Hibbeler. Engineering Mechanics - Dynamics, Thirteenth Edition. Prentice Hall, 2012.

[26] James J. Gibson. The Perception of the Visual World. Houghton Mifflin, 1950.

[27] John Radford and Andrew Burton. Thinking in Perspective: Critical Essays in the Study of Thought Processes. Methuen, 1978.

[28] D. H. Warren and Edward R. Strelow. Electronic Spatial Sensing for the Blind: Contributions from Perception, Rehabilitation, and Computer Vision. Springer, 1985.

[29] Avago Technologies. ADNS-3080 high-performance optical mouse sensor datasheet, October 2008. URL datasheet.octopart.com/ADNS-3080-Avago-datasheet-14184657.pdf.

[30] Mitch Bryson and Salah Sukkarieh. Vehicle Model Aided Inertial Navigation for a UAV using Low-

cost Sensors. 2006.

[31] Johan E. Mebius. Derivation of the Euler-Rodrigues formula for three-dimensional rotations from the general formula for four-dimensional rotations. 2007.
