Transcript source: ufdcimages.uflib.ufl.edu/UF/E0/01/97/41/00001/branch_e.pdf

MULTI-VEHICLE COOPERATIVE CONTROL FOR VISION-BASED ENVIRONMENT MAPPING

By

ERIC ALAN BRANCH

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2007


© 2007 Eric Alan Branch


To my family and friends,

whose patience and support made this possible


ACKNOWLEDGMENTS

This work was supported jointly by the Air Force Research Laboratory and the Air Force

Office of Scientific Research under F49620-03-1-0381 and F49620-03-1-0170.

I want to acknowledge and thank AFRL-MN and Eglin Air Force Base for funding this project. I would also like to sincerely thank my advisor, Dr. Rick Lind, for his guidance and support throughout my time at the University of Florida. Special thanks also to my supervisory committee members, Dr. Warren Dixon and Dr. Carl Crane, for their time and consideration.

This work would not have been possible without the members of the Flight Controls Lab: Adam Watkins, Joe Kehoe, Ryan Causey, Daniel Grant, and Mujahid Abdulrahim, who have always been willing to share their knowledge and expertise. I also thank my parents, Mark and Peggy Branch, who have always encouraged and supported me in all my pursuits.


TABLE OF CONTENTS

page

ACKNOWLEDGMENTS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

LIST OF FIGURES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

CHAPTER

1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Multi-robot Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2 COMPUTER VISION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.1 Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Feature Point Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Feature Point Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Two-view Image Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.1 Epipolar Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.2 Structure from Motion . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3 DIMENSIONALITY REDUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2 Environment Representation . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2.1 Spatial Decomposition/Occupancy Grid . . . . . . . . . . . . . . . . . 24
3.2.2 Topological Representation . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.3 Geometric Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3 Data Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.1 K-means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.2 PCA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3.3 Plane Fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.4 Incremental Map Building . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

4 MAPPING CRITERIA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.2.1 Random Movement Method . . . . . . . . . . . . . . . . . . . . . . . 30
4.2.2 Frontier-Based Exploration . . . . . . . . . . . . . . . . . . . . . . . 31
4.2.3 Seek-Open Space Algorithm . . . . . . . . . . . . . . . . . . . . . . . 31
4.2.4 Mapping Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32


5 TRAJECTORY PLANNING . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.2 Rapidly Exploring Random Tree . . . . . . . . . . . . . . . . . . . . . . . . 34
5.3 Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.4 Collision Avoidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

6 SIMULATION RESULTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

7 CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

7.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.2 Future Direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

BIOGRAPHICAL SKETCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48


LIST OF FIGURES

Figure page

1-1 Micro Air Vehicles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1-2 Urban Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

1-3 Vehicle Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1-4 Team Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2-1 Mapping from Environment to Image Plane. . . . . . . . . . . . . . . . . . . . . . . 15

2-2 The Geometry of the Epipolar Constraint. . . . . . . . . . . . . . . . . . . . . . . . . 19

3-1 Environment Representations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3-2 Principal Component Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3-3 Incremental Plane Merging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

5-1 Rapidly-Exploring Random Tree Expansion. . . . . . . . . . . . . . . . . . . . . . . 36

5-2 Rapidly-Exploring Random Tree with Kinematic Constraints . . . . . . . . . . . . . 37

5-3 Collision Check. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

6-1 Simulated Urban Environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

6-2 Simulation at T=0s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

6-3 Simulation at T=80s. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

6-4 Simulation at T=150s. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

6-5 Simulation at T=150s Top View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42


Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

MULTI-VEHICLE COOPERATIVE CONTROL FOR VISION-BASED ENVIRONMENT MAPPING

By

Eric Alan Branch

August 2007

Chair: Richard C. Lind, Jr.
Major: Aerospace Engineering

Flight within urban environments is currently achievable using small and agile aircraft; however, path planning for such vehicles is difficult due to the unknown location of obstacles. This thesis introduces a multi-aircraft formulation to the mapping approach. The problem addressed is trajectory generation for the mapping of an unknown, urban environment by a distributed team of vision-based Micro Air Vehicles (MAVs). Most mission scenarios require a vehicle to know its position relative to its surroundings in order to assure safe navigation and successful mission planning. A vision sensor can be used to infer three-dimensional information from a two-dimensional image for use in generating an environment map. Structure from Motion (SFM) is employed to solve the problem of inferring three-dimensional information from two-dimensional image data. No global information is available in the unmapped regions, so the map must be constructed incrementally and safe trajectories must be planned on-line. A feature point data set can be reduced to a set of geometric primitives if the environment can be well estimated by a geometric model. In the case of an urban environment the scene can be well represented by planar primitives. The trajectories must be planned to gather the information required to construct an optimal map. An efficient mapping can be accomplished by maximizing environment coverage and vehicle distribution. The distribution and coverage criteria are governed by an augmented cost function which takes into account the distance between mapped areas and all other aircraft. Maximizing this cost function automatically distributes aircraft throughout a region and disperses them around regions that remain to be mapped. Also, the mapping approach


uses maneuvering capabilities to determine the paths for each aircraft. This determination results from utilizing branches of tree structures that represent achievable maneuvers. Such an approach inherently alters the path between aircraft that may have different turn radii or climb rates.


CHAPTER 1
INTRODUCTION

1.1 Problem Statement

This thesis introduces a multi-aircraft formulation to the mapping approach. The problem addressed is trajectory generation for the mapping of an unknown, urban environment by a distributed team of vision-based Micro Air Vehicles (MAVs). Most mission scenarios require a vehicle to know its position relative to its surroundings in order to assure safe navigation and successful mission planning. A vision sensor can be used to infer three-dimensional information from a two-dimensional image for use in generating an environment map. No global information is available in the unmapped regions, so the map must be constructed incrementally and safe trajectories must be planned on-line. The trajectories must be planned to gather the information required to construct an optimal map. An efficient mapping can be accomplished by maximizing environment coverage and vehicle distribution. The distribution and coverage criteria are governed by an augmented cost function which takes into account the distance between mapped areas and all other aircraft. Maximizing this cost function automatically distributes aircraft throughout a region and disperses them around regions that remain to be mapped. Also, the mapping approach uses maneuvering capabilities to determine the paths for each aircraft. This determination results from utilizing branches of tree structures that represent achievable maneuvers. Such an approach inherently alters the path between aircraft that may have different turn radii or climb rates.

1.2 Motivation

The MAV has been developed to be a low-cost platform capable of completing complex, low-altitude missions for both military and civilian applications. The long list of missions envisioned for MAVs includes military reconnaissance, surveillance and tracking of a specified target, mapping an area affected by the dispersal of a chemical/biological agent, and visual monitoring of traffic. A common aspect of these missions is that the vehicle must fly at low altitudes, even inside rooms, and function in regions that are either beyond an operator's line-of-sight or deemed too hazardous for direct human involvement. Therefore, the ability


to exhibit a high level of autonomy and situational awareness becomes a critical attribute for ensuring the usefulness of MAVs [33]. An accurate mapping of unknown environments is a critical task for ensuring accurate situational awareness. The number of hazards at these flight conditions is disproportionately large in comparison to flight at high altitudes. Information must be gathered that indicates the location of objects in these areas along with their size and shape in relation to the vehicles. It is reasonable to assume pre-flight knowledge of major objects within cities or rooms; however, it is doubtful that every tree and sign, or the angle of every door, is known in advance. As such, in-flight mapping is a realistic expectation to obtain information about the environment and permit flight operations. Figure 1-1 shows several examples of current generation MAVs.

Figure 1-1. Micro Air Vehicles

An approach was developed to optimize mapping using a single aircraft. This approach developed a metric that noted the distance between the aircraft and features that had already been mapped. Path planning thus resulted from maximizing that distance. The motivation for this thesis is to develop technologies that enable a distributed team of MAVs to autonomously map an unknown environment. The capability of cooperative map generation allows the vehicles to accurately and efficiently map the environment and to share locally generated incremental maps with the vehicle team, which increases vehicle safety and enables efficient trajectory generation beyond any individual vehicle's line-of-sight. An accurate environment map is also a critical component for future high-level mission planning.


The mapping system proposed in this thesis iteratively performs the three following activities:

1. locally building a partial map that represents the portion of the environment currently sensed

2. updating the global map according to the newly acquired partial map

3. determining and planning toward the next observation position, according to the exploration strategy
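A minimal sketch of this three-step loop follows, with the environment abstracted to a set of grid cells. The `sense_local` function and the nearest-unmapped-cell goal selection are illustrative placeholders, not the vision-based mapping and trajectory planning components developed in later chapters:

```python
def sense_local(pos, world, radius=1):
    """Step 1: build a partial map of the cells within sensor range."""
    x, y = pos
    return {(i, j) for (i, j) in world
            if abs(i - x) <= radius and abs(j - y) <= radius}

def explore(world, start, steps=100):
    """Iterate the three mapping activities until the world is covered."""
    global_map, pos = set(), start
    for _ in range(steps):
        partial = sense_local(pos, world)   # 1. local partial map
        global_map |= partial               # 2. merge into the global map
        frontier = world - global_map       # 3. choose the next observation
        if not frontier:
            break
        # placeholder exploration strategy: head toward the nearest unmapped cell
        pos = min(frontier,
                  key=lambda c: abs(c[0] - pos[0]) + abs(c[1] - pos[1]))
    return global_map
```

Any distribution-aware cost function (Chapter 4) would replace the simple distance used in step 3.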

1.3 Multi-robot Systems

Multiple autonomous robot teams are suggested to have many advantages over single

vehicle systems. Cooperating robots have the potential to accomplish tasks more efficiently and

with greater speed than a single vehicle. Beyond gains in efficiency multiple vehicle systems

have the potential to demonstrate faster and more accurate localizations if they exchange

information about their position and sensed environment in a cooperative manner. Distributed

team systems also have the added advantage of increasing redundancy and sensor variety. Using

several low cost vehicles introduces redundancy and a higher level of risk reduction than having

only one, possibly more expensive, vehicle. Multi-robot systems have the following properties:

• greater efficiency
• improved system performance
• fault tolerance
• lower economic cost
• distributed sensing and action
• a larger range of task domains

In the field of mobile robotics, research on multiple robot systems is gaining popularity

because the multiple robot approach provides distinct advantages over a single robot such as

scalability, reliability and speed. On the other hand it increases the complexity of the system over

that of a single vehicle system.

Most cooperative multi-vehicle behavior relates to the close coordination of vehicles in a formation or a "follow the leader" strategy. Exploration using multiple vehicles, in contrast, is characterized by a team


distribution strategy which efficiently covers an environment and avoids techniques which rely on

tightly coordinated behavior.

Figure 1-2. Urban Environment

1.4 System Architecture

This thesis is concerned with trajectory generation for vision-based mapping of an urban

environment by a distributed MAV team. The overall system architecture is made up of four separate problems: three-dimensional reconstruction from two-dimensional image data, environment representation and incremental map building, exploration and mapping criteria, and trajectory generation.

Chapter 2 describes the camera model and the approach for mapping a three-dimensional object from a two-dimensional image, feature point detection, and feature point tracking. It also describes the implicit relationship between the camera and environment using two-view geometry and the epipolar constraint.

Chapter 3 describes methods for environment representation and dimensionality reduction. A sensor-based data set, in this case feature points, can be reduced to a set of geometric primitives if the environment can be well estimated by a geometric model. In the case of an urban environment the scene can be well represented by planar primitives.

Chapter 4 describes the mapping criteria and team distribution strategy used to ensure

coverage of the environment. The search and exploration strategy uses a multivariate approach to


determine goal locations which have a higher probability of containing environment features of

interest.

Chapter 5 describes the trajectory planning technique using a sample-based motion planning algorithm, which can be executed as an on-line algorithm to produce a conditional trajectory that can be replanned as environment information increases.

Chapters 6 and 7 present simulation results and conclusions on the algorithms presented, and describe future directions for research and development.

An overview of how the separate components are interconnected is graphically depicted in Figure 1-3 and Figure 1-4.

Figure 1-3. Vehicle Architecture

Figure 1-4. Team Architecture


CHAPTER 2
COMPUTER VISION

2.1 Camera

A camera provides a mapping between the three-dimensional world and a two-dimensional image. For this discussion we will consider a basic pinhole camera model, consisting of the central projection of points in space onto an image plane. In this case the center of projection is the origin of a Euclidean coordinate system, with the image plane defined as a plane normal to the camera's central axis and located a focal length, f, away from the center of projection. The pinhole camera geometry is described in Figure 2-1.

Figure 2-1. Mapping from Environment to Image Plane

Under the pinhole camera model a point in space with coordinates η = [ηx, ηy, ηz]^T is mapped to the point on the image plane where a line joining the point η to the center of projection meets the image plane. By similar triangles, the point (ηx, ηy, ηz)^T is mapped to the point (µ, ν)^T. Equation 2–1 describes the central projection mapping from world to image coordinates [4].

(ηx, ηy, ηz)^T −→ (f ηx/ηz, f ηy/ηz)^T    (2–1)

For a pinhole camera with the camera origin placed at the lens, the relative positions are given by the following model.


µ = f ηx/ηz    (2–2)

ν = f ηy/ηz    (2–3)

The central projection can further be expressed in homogeneous coordinates as shown below

ηz [µ, ν, 1]^T = [ f 0 0 0 ; 0 f 0 0 ; 0 0 1 0 ] [ηx, ηy, ηz, 1]^T    (2–4)
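As a concrete check, the explicit projection of Equations 2–2 and 2–3 and the homogeneous form of Equation 2–4 can both be computed directly; this sketch (the focal length value is arbitrary, for illustration only) confirms they agree:

```python
import numpy as np

def project_pinhole(eta, f=1.0):
    """Explicit central projection (Equations 2-2 and 2-3):
    mu = f*eta_x/eta_z, nu = f*eta_y/eta_z for a point in front
    of the camera (eta_z > 0)."""
    ex, ey, ez = eta
    return f * ex / ez, f * ey / ez

def project_homogeneous(eta, f=1.0):
    """Same mapping via the homogeneous form of Equation 2-4."""
    P = np.array([[f, 0.0, 0.0, 0.0],
                  [0.0, f, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    x = P @ np.append(np.asarray(eta, float), 1.0)  # [f*eta_x, f*eta_y, eta_z]
    return x[0] / x[2], x[1] / x[2]                 # divide out the depth eta_z
```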

2.2 Feature Point Detection

The first step in the scene reconstruction process is the detection of features within the viewable environment. The detection of feature points allows image pixels, or features, to be defined as geometric entities such as points and lines. Feature point selection is based on changes in image gradient from pixel to pixel over the entire image; therefore, selected feature points are usually areas with unique characteristics. Features such as corners, edges, and lines are characterized by large changes in color intensity and therefore allow for good feature selection, capable of being easily tracked image to image [32]. For scene reconstruction the assumption is that the selected feature points correspond to obstacles in the environment which must be detected for collision avoidance and mapping tasks. Objects of interest such as buildings, bridges, and city signs have many sharp edges and abrupt color changes, allowing for good feature detection.

A commonly used feature point detection method is the gradient-based corner detection algorithm. The algorithm requires that the image gradient ∇I be calculated over the image frame. Therefore, given an image I(µ, ν), a corner feature can be detected at a given pixel (µ, ν) by computing the summation of the outer product of the gradients, G(µ, ν), over a search window of size W(µ, ν).


G(x) = ∑_{x∈W(x)} ∇I(x) ∇I^T(x) = [ ∑Iµ²  ∑IµIν ; ∑IµIν  ∑Iν² ]    (2–5)

At each pixel in the window W the matrix G is computed. If the smallest singular value of G is found to be greater than a defined threshold, τ, then the pixel (µ, ν) is classified as a corner feature [3] [12] [10].
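A minimal sketch of this detector, assuming a grayscale image array; since G is symmetric positive semidefinite, its singular values equal its eigenvalues, so the test uses the smallest eigenvalue. The window size and threshold values are illustrative, not from the thesis:

```python
import numpy as np

def detect_corners(I, win=3, tau=0.01):
    """Gradient-based corner detector sketched from Equation 2-5.
    G accumulates the outer products of the image gradients over a
    (2*win+1)^2 window; a pixel is a corner when the smallest
    eigenvalue of G exceeds the threshold tau."""
    I = np.asarray(I, dtype=float)
    Inu, Imu = np.gradient(I)          # I_nu: row direction, I_mu: column direction
    corners = []
    for r in range(win, I.shape[0] - win):
        for c in range(win, I.shape[1] - win):
            wmu = Imu[r - win:r + win + 1, c - win:c + win + 1]
            wnu = Inu[r - win:r + win + 1, c - win:c + win + 1]
            G = np.array([[np.sum(wmu * wmu), np.sum(wmu * wnu)],
                          [np.sum(wmu * wnu), np.sum(wnu * wnu)]])
            if np.linalg.eigvalsh(G)[0] > tau:   # smallest eigenvalue
                corners.append((r, c))
    return corners
```

Along a straight edge only one gradient direction is strong, so the smallest eigenvalue stays near zero and the pixel is rejected; only true corners excite both directions.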

2.3 Feature Point Tracking

In order to match individual feature points from image to image, a feature tracking process must be performed. The feature correspondence process relies on image intensity matching over a defined pixel region. The Lucas-Kanade feature point tracker is a standard algorithm for small baseline camera motion. The algorithm tracks feature points through the correspondence of pixel groupings; a neighborhood of pixels is tracked between two images. Therefore, a feature point can be associated with a feature point on a subsequent image by a translation in pixel coordinates, where the displacement is defined as h = [dx, dy]^T. The solution for h is found through the minimization of the intensity error obtained by matching pixel windows of similar intensities [21] [32].

Therefore, assuming a simple translational model, the displacement is found as

h = argmin_h ∑_{x∈W(x)} ‖I1(x) − I2(x + h)‖²    (2–6)

Applying the Taylor series expansion to this expression about the point of interest, x, while retaining only the first term in the series results in Equation 2–7.

∂I/∂µ · dµ/dt + ∂I/∂ν · dν/dt + ∂I/∂t = 0    (2–7)

Rewriting Equation 2–7 in matrix form results in Equation 2–8.

∇I^T u + It = 0    (2–8)

where u = [dµ/dt, dν/dt]^T.

We can solve for the velocities if additional constraints are enforced on the problem, which entails restricting regions to a local window that moves at constant velocity. Under these assumptions one can minimize the error function given in Equation 2–9.

E1(u) = ∑_{W(x)} [∇I^T(x, t) u(x) + It(x, t)]²    (2–9)

The minimum of this function is obtained by setting ∇E1 = 0, resulting in Equation 2–10.

[ ∑Iµ²  ∑IµIν ; ∑IµIν  ∑Iν² ] u + [ ∑IµIt ; ∑IνIt ] = 0    (2–10)

or, rewritten in compact matrix form and solved for the velocity,

u = −G⁻¹ b    (2–11)

where

G = [ ∑Iµ²  ∑IµIν ; ∑IµIν  ∑Iν² ]    (2–12)

b = [ ∑IµIt ; ∑IνIt ]    (2–13)
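A single Lucas-Kanade step for one feature window can be sketched as follows; the temporal gradient is approximated here by a simple frame difference, and the window size is an assumed parameter:

```python
import numpy as np

def lk_flow(I1, I2, r, c, win=3):
    """One Lucas-Kanade step (Equations 2-10 through 2-13): solve
    u = -G^{-1} b for the pixel velocity of the feature at (r, c)."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Inu, Imu = np.gradient(I1)        # spatial gradients of the first image
    It = I2 - I1                      # temporal gradient (frame difference)
    sl = (slice(r - win, r + win + 1), slice(c - win, c + win + 1))
    wmu, wnu, wt = Imu[sl], Inu[sl], It[sl]
    G = np.array([[np.sum(wmu * wmu), np.sum(wmu * wnu)],
                  [np.sum(wmu * wnu), np.sum(wnu * wnu)]])
    b = np.array([np.sum(wmu * wt), np.sum(wnu * wt)])
    return -np.linalg.solve(G, b)     # u = [d mu/dt, d nu/dt]
```

For a pattern translated by roughly one pixel between frames, a single step recovers the displacement approximately; trackers iterate this step to refine h.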

2.4 Two-view Image Geometry

The two-view image geometry relates the measured image coordinates to the 3D scene. The camera configuration could be either two images taken over time of the same scene, as in the monocular case, or two cameras simultaneously capturing two images of the same scene, as in the stereo vision case. The geometry of the two-view configuration generates the epipolar constraint, from which the 3D scene reconstruction will be formulated [22].


2.4.1 Epipolar Constraint

The relationship between camera and environment is formulated as the epipolar constraint, or coplanarity constraint. This constraint requires the position vectors, which describe a feature point relative to the camera at two instants in time, to be coplanar with the translation vector and the origins of the camera frames. This geometric relationship is illustrated in Figure 2-2, where η1 and η2 denote the position vectors of the feature point, P, in the camera reference frames. Also, the values of x1 and x2 represent the position vectors projected onto the focal plane, while T indicates the translation vector between the origins of the camera frames.

Figure 2-2. The Geometry of the Epipolar Constraint

A geometric relationship between the vectors in Figure 2-2 is expressed by introducing R as a rotation matrix. This rotation matrix is the transform of the camera position between image frames. Assuming a pin-hole camera, for which each point is collinear with its projection into the focal plane, the geometric relationship is described in Equation 2–14.


x2 · (T × R x1) = 0    (2–14)

The expression in Equation 2–14 reflects that the scalar triple product of three coplanar vectors is zero, since such vectors form a plane in space. These relationships can be expanded using linear algebra to generate a standard form of the epipolar geometry as in Equation 2–15. This new form indicates a relationship between the rotation and translation, written as the essential matrix denoted as E, and the intrinsic parameters of the camera and associated feature points.

x2^T E x1 = 0    (2–15)
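The constraint can be verified numerically for a synthetic correspondence; this sketch assumes unit focal length and forms the essential matrix as E = [T]_x R, the standard composition of the skew-symmetric translation matrix and the rotation:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(x1, x2, R, T):
    """Evaluate x2^T E x1 with E = [T]_x R (Equations 2-14 and 2-15).
    x1, x2 are homogeneous focal-plane projections of the feature in
    the two camera frames; the residual is zero for a consistent
    (R, T) and noise-free correspondences."""
    E = skew(T) @ R          # essential matrix
    return float(x2 @ E @ x1)
```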

2.4.2 Structure from Motion

Structure from motion (SFM) is a technique to estimate the location of environmental features in 3D space [26]. This technique utilizes the epipolar geometry in Figure 2-2 and assumes that the rotation, R, and translation, T, between frames are known. Given that, the coordinates of η1 and η2 can be computed by the fundamental relationship given in Equation 2–16 [21] [20] [27].

η2 = R η1 + T    (2–16)

The location of environmental features is obtained by first noting the relationships between feature points and image coordinates given in Equation 2–2 and Equation 2–3. These relationships allow the components ηx and ηy to be written in terms of µ and ν, which are known from the images. Thus, the only unknowns are the depth components, η1,z and η2,z, for each image. The resulting system can be cast as Equation 2–17 and solved using a least-squares approach.

[ µ2/f   −(R11 µ1/f + R12 ν1/f + R13) ]
[ ν2/f   −(R21 µ1/f + R22 ν1/f + R23) ]  [ η2,z ; η1,z ]  =  [ Tx ; Ty ; Tz ]    (2–17)
[ 1      −(R31 µ1/f + R32 ν1/f + R33) ]


This equation can be written in compact form as shown in Equation 2–18, using z = [η2,z, η1,z]^T as the desired vector of depths.

A z = T    (2–18)

The least-squares solution to Equation 2–18 obtains the depth estimates of a feature point relative to both camera frames. This information, along with the image plane coordinates, can be used to compute (η1,x, η1,y) and (η2,x, η2,y) by substituting these values back into Equations 2–2 and 2–3. The resulting components of η1 can then be converted to the coordinate frame of the second image, where they should exactly match η2. These values will never match perfectly due to noise and unknown camera parameters so, in practice, an averaging process is often used to estimate the feature coordinates [1] [13] [34].
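The depth recovery of Equations 2–17 and 2–18 can be sketched directly; the second column of A is −R x1 with x1 the normalized coordinates of the first view, matching the matrix rows above term by term (the focal length defaults to 1 for illustration):

```python
import numpy as np

def sfm_depths(p1, p2, R, T, f=1.0):
    """Recover the depths (eta2_z, eta1_z) of a feature from its image
    coordinates p1 = (mu1, nu1) and p2 = (mu2, nu2) in two frames with
    known rotation R and translation T, by solving A z = T in the
    least-squares sense (Equations 2-17 and 2-18)."""
    mu1, nu1 = p1
    mu2, nu2 = p2
    x1 = np.array([mu1 / f, nu1 / f, 1.0])   # normalized coords, frame 1
    A = np.column_stack((np.array([mu2 / f, nu2 / f, 1.0]),   # first column
                         -(R @ x1)))                          # second column
    z, *_ = np.linalg.lstsq(A, np.asarray(T, dtype=float), rcond=None)
    return z        # [eta2_z, eta1_z]
```

With noise-free synthetic data the two recovered depths are exact; with tracked features they are least-squares estimates, which motivates the averaging mentioned above.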


CHAPTER 3
DIMENSIONALITY REDUCTION

3.1 Introduction

For autonomous MAVs the ability to create maps of the environment autonomously is the first requirement for the planning and completion of many tasks. It cannot be assumed that a priori environment information will be available for navigation and mission planning; in fact, it is rarely the case. Not only do city blueprints and related types of maps fail to be consistently reliable, but numerous aspects of an environment are not likely to appear on a map, such as signs, billboards, and semi-stationary objects (i.e., stationary vehicles). Maps made for people often depend upon the interpretive skills of the person using the map and on his or her ability to infer functional information. Maps usually represent structural elements with abstract symbols and semantic labels. A MAV must be able to relate its current location directly to its own sensory information regarding its environment. This suggests that an appropriate map for an autonomous robot should relate to the types of sensor data the robot is likely to observe. In general, these factors imply that the ability to perform some degree of autonomous map construction, update, and validation is the primary requirement for successful mission planning [29] [5].

Maps can take many forms and fall between two representational extremes, both of which are particularly relevant to autonomous vehicles. The first is the metric map, which is based on an absolute reference frame and location estimates of where objects are in space. The second is the topological map, or relational map, which only explicitly represents connectivity information between important objects or landmarks. Each of these representational extremes allows for various specific descriptions of an environment [23] [30].

The amount of information contained in a representation and the assumptions made about the environment lead to a layering of representations of map data, with increasingly strong assumptions and a narrowing of suitable mission tasks. Below is a list of map structures in order of implied model assumptions.


1. Sensorial. Raw sensor data (i.e., raw feature point data)

2. Spatial Decomposition/Occupancy Grid. Discretely sampled environment, defining the probability of occupancy

3. Geometric. 2- or 3-dimensional objects inferred from sensor data

4. Topological. The large-scale relational links that connect objects and locations across the environment as a whole

5. Semantic. Function labels associated with the landmarks of the map

Geometric maps are generally the most intuitive at first glance and give a more general representation of the environment. Because the obtained map is more general in nature, it is well suited for a wide variety of mission tasks. A topological mapping, in contrast, is more closely related to an individual mission requirement and thus less easily adapted to diverse mission tasks. Sensorial mappings represent minimally altered sensor data; however, their large storage size and the difficulty of inferring information from them make them undesirable for some high-level mission planning.

Not only are existing maps of most environments inaccurate, but almost every environment occupied by humans undergoes change. As a result, robot systems must be able to accommodate change in their environment. If they are to maintain maps, they must be able to update them. The general approach of map building is to incrementally integrate new data into the map. When each new frame is obtained, it is aligned to a cumulative global map.

The exploration of an unknown environment, the construction of a geometric map from that exploration, and other related problems have been studied extensively in the field of computational geometry. Exploration relates to a variety of related capabilities. These include searching for a specific objective or goal position, searching for a route with specific properties, covering free space, and learning about the occupancy space (i.e., mapping). A broad class of problems deals with search in unknown or partially known environments. Such problems are interesting not only in their own right but also because they are closely related to environmental exploration in general, where the goal being sought is additional geometric knowledge.


3.2 Environment Representation

3.2.1 Spatial Decomposition/Occupancy Grid

One of the most straightforward and widely used environment representations for robotic mapping is the grid structure. Grid structures, or occupancy grids, are spatial decompositions of the environment, in which the two- or higher-dimensional environment is discretely sampled. The simplest method is to sample the space using a uniform grid, with each sample taken at points in the grid expressing the probability of occupancy at that sample point. At each sampled point the space is therefore defined as empty, full, or partially full. Because this method of environment representation only describes the space sampled, and not the individual objects contained within that space, no information is available to identify or discriminate between objects. Other disadvantages are that the grid resolution is limited by how coarsely or finely the environment is discretized, and that the storage requirement is the same whether the environment is empty or occupied. The main advantage of a uniform grid representation is its extreme generality, requiring no assumptions to be made regarding the type of objects contained within the space. The 2-dimensional occupancy map can be extended to represent 3-dimensional spaces through use of a volumetric model, which defines a 3-dimensional grid dividing the workspace into voxels (cubes) with edges and volume [24]. Furthermore, more efficient sampling can be obtained by using an adaptive cell size to model empty and occupied space, with high resolution in areas of cluttered features and low resolution in open space [8] [15] [28].
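As a brief sketch of the idea, a uniform 3-dimensional occupancy grid with a probabilistic update might look as follows. The cell size, workspace shape, and log-odds hit update are illustrative assumptions for this example, not parameters used in this work:

```python
import numpy as np

# Assumed example parameters: a 100 ft cube sampled at 5 ft resolution.
CELL = 5.0
SHAPE = (20, 20, 20)

# 0.5 = unknown; values near 1 mean occupied, near 0 mean empty.
grid = np.full(SHAPE, 0.5)

def world_to_voxel(p):
    """Map a world-frame point (x, y, z) in ft to integer voxel indices."""
    return tuple(int(c // CELL) for c in p)

def mark_occupied(p, p_hit=0.9):
    """Fuse one 'hit' observation at point p via a simple log-odds update."""
    i = world_to_voxel(p)
    odds = grid[i] / (1.0 - grid[i]) * (p_hit / (1.0 - p_hit))
    grid[i] = odds / (1.0 + odds)

mark_occupied((12.0, 3.0, 40.0))   # cell (2, 0, 8) rises above the 0.5 prior
```

Note the drawback described above: `grid` consumes the same storage whether the environment is empty or cluttered, and the 5 ft cell size bounds how finely obstacles can be localized.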

Figure 3-1. Environment Representations: A) Occupancy Grid, B) Topological Map, C) Geometric Map


3.2.2 Topological Representation

A topological map is a graph-like representation which uses nodes and edges to describe distinct features of an environment. Nodes represent the key places in the map, and edges represent the transitions used to navigate between key places [29]. In contrast to metric maps, topological maps naturally capture qualitative information while deemphasizing details that may be irrelevant or confusing. The key to a topological relationship is some explicit representation of connectivity between regions or objects. The vehicle has no real understanding of the geometric relationship between locations in the environment; locations are only linked by their topological representation. As a result, topological maps have a very explicit connection to the mission tasks and problem statement. A typical subway map, or the typical navigation instructions used by humans over substantial regions of space, are examples of topological representations [6].

3.2.3 Geometric Maps

Geometric maps are those made up of discrete geometric primitives, for example lines, polygons, and/or polyhedra. Such maps have the advantage of being highly space-efficient, because an arbitrarily large region of space can be represented by a model with only a few parameters. In addition, geometric maps can store occupancy data with almost arbitrarily high resolution without becoming liable to the storage limitations incurred by techniques based on spatial sampling.

The primary shortcoming of geometric model-based representations is that they can be difficult to infer reliably from sensor data. Three fundamental modeling problems are regularly encountered: the representation may change drastically depending on the modeling assumptions; many different environments may map to the same representation, leaving buildings with no discerning characteristics; and it may be difficult to represent prominent features of the environment within the modeling system.

Geometric maps can, however, be an efficient description of the environment if one assumes that the sensor provides suitable data and that the environment to be described is well suited to the geometric modeling primitives to be used. In this case the environment, an urban city, is characterized by large planar features, which lends itself to a planar modeling primitive.

The world, W, will be defined as a 3-dimensional space, W ⊂ R³. The described world will consist of two kinds of objects: obstacles and vehicles. Obstacles are the portions of the world that are permanently occupied, as in, for example, the walls of a building. Vehicles are bodies that are modeled geometrically and controllable via a motion planning step. In this case vehicles will be thought of as points in space with no geometric size. This assumption is due to the fact that the characteristic size of a MAV is very small in relation to the defined world. Therefore collision avoidance between vehicles and obstacles is assured solely by checking the vehicle path for conflict.

The following sections present a method for systematically constructing representations for obstacles and vehicles using a collection of primitives [2] [22]. Obstacle regions will be denoted by O, the set of all points in W that lie in one or more obstacles; O ⊂ W. Three-dimensional obstacles will be represented using a boundary representation consisting of convex polygons initialized in 3-dimensional space. An n-sided polygon can be described using two kinds of features: vertices and edges. Every vertex corresponds to a corner of the polygon, and every edge corresponds to a line segment between a pair of vertices. The polygon can be specified by a sequence (x1,y1), (x2,y2), ..., (xn,yn) of n points in the plane defined by P.

3.3 Data Clustering

3.3.1 K-means

K-means is an unsupervised learning algorithm which solves the clustering problem. The K-means algorithm provides a way to split a given data set of N points into K subsets so as to minimize the sum-of-squares criterion [11]

J = \sum_{j=1}^{K} \sum_{i=1}^{N} \left\| x_i^{(j)} - \mu_j \right\|^2   (3–1)

where x_i^{(j)} is a vector representing the ith data point assigned to the jth subset and \mu_j is the geometric centroid of that subset.


The algorithm works by initializing K centroids in the space represented by the data set being clustered. Once the centroids are initially set, each data point is assigned to the subgroup with the closest centroid. Once all data points are assigned, the centroids are recalculated for the currently defined data sets. The above procedure is repeated until the centroids no longer move or the change in centroid position falls below some user-defined threshold.

The K-means algorithm suffers from several shortcomings. First, the algorithm is sensitive to the initial, randomly selected centroid locations. Also, the number of centroids must be selected a priori. Finally, the algorithm does not guarantee a globally optimal solution of J over the centroid assignments.
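The assign-and-recompute loop described above can be sketched in a few lines. This is a minimal illustration: the first-k initialization and tolerance value are arbitrary choices for the sketch, and a practical implementation would randomize the initialization, with the sensitivity just noted:

```python
import numpy as np

def kmeans(points, k, tol=1e-6, max_iter=100):
    """Minimal K-means over an (N, d) array; returns (centroids, labels)."""
    # Initialize with the first k points; practical codes randomize this,
    # which is the initialization sensitivity noted in the text.
    centroids = points[:k].copy()
    for _ in range(max_iter):
        # Assign each point to the subgroup with the closest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recalculate the centroids of the currently defined subsets
        # (assumes no subset empties out mid-run).
        new = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.linalg.norm(new - centroids) < tol:   # user-defined threshold
            return new, labels
        centroids = new
    return centroids, labels
```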

3.3.2 PCA

Principal Component Analysis (PCA) is a way to identify patterns in a data set and to express the data set in such a way as to highlight its similarities and differences. PCA can be used to simplify a data set by reducing its dimensionality while retaining those characteristics that contribute most to the variation. PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the first principal component accounts for as much of the variability in the data as possible, with each succeeding component accounting for as much of the remaining variation as possible. The primary objectives of PCA are to reduce the dimensionality of a data set and to identify new, meaningful underlying variables. In PCA such directions are merely the eigenvectors of the covariance matrix having the largest eigenvalues; this projection optimizes a sum-squared-error criterion [7] [31] [35].

First, the d-dimensional mean vector \mu and the d×d covariance matrix \Sigma are computed for the full data set. Next, the eigenvectors and eigenvalues are computed and sorted in order of decreasing eigenvalue, and the k eigenvectors having the largest eigenvalues are chosen. Often there will be just a few large eigenvalues, which implies that k is the inherent dimensionality of the subspace governing the data, while the remaining dimensions generally contain noise. Next, a d×k matrix A is formed whose columns consist of the k eigenvectors. The representation of the data by principal components consists of projecting the data onto the k-dimensional subspace according to

x' = A^T (x - \mu)

Figure 3-2. Principal Component Analysis
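The mean-subtract, eigen-decompose, and project steps above can be sketched as follows (an illustrative implementation, not code from this work):

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X (n samples, d dimensions) onto the top k principal axes."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)       # d x d covariance matrix Sigma
    evals, evecs = np.linalg.eigh(cov)       # eigh returns ascending eigenvalues
    order = np.argsort(evals)[::-1]          # sort by decreasing eigenvalue
    A = evecs[:, order[:k]]                  # d x k matrix A of top eigenvectors
    return (X - mu) @ A, A, mu               # row-wise x' = A^T (x - mu)
```

For data that truly lies in a k-dimensional subspace, the discarded eigenvalues are near zero and the projection loses only noise, as described above.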

3.3.3 Plane Fitting

The plane fitting procedure works by clustering the data set using the K-means algorithm and checking the planarity of the newly defined clusters using the PCA algorithm in an iterative approach. First the number of clusters, k, is found by selectively splitting and removing centroids based on the planarity of the data sets they represent. Once a data subset can be accurately modeled by a plane, that cluster is removed from the data set. If no clusters are found to fit the planar model, then the centroid is split and the K-means clustering algorithm is iterated over the newly defined centroids. The planarity check is based on the fact that, for a data set where all data points lie within a plane, the principal components computed by PCA will give two eigenvectors which lie in the plane and a third that is the normal vector of the plane. Therefore, the decision to remove the data set associated with a centroid is made by projecting the data set onto the calculated eigenvectors (principal components). The data which, after projection, fall within a threshold distance from the origin along the normal of the defined plane are determined to approximately lie on the plane.
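The planarity check can be sketched directly from the PCA machinery (the threshold value is an arbitrary example, not one used in this work):

```python
import numpy as np

def is_planar(points, tol=0.5):
    """PCA planarity test for an (N, 3) cluster of feature points.

    The plane normal is the eigenvector of the covariance matrix with the
    smallest eigenvalue; the cluster passes if every point projects within
    tol of the plane along that normal.
    """
    mu = points.mean(axis=0)
    _, evecs = np.linalg.eigh(np.cov(points - mu, rowvar=False))
    normal = evecs[:, 0]                     # eigh sorts eigenvalues ascending
    dist = np.abs((points - mu) @ normal)    # residual along the plane normal
    return bool(np.all(dist < tol)), normal, mu
```

In the iterative procedure above, a cluster passing this test is removed and fit with a plane; a failing cluster has its centroid split before K-means runs again.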


3.4 Incremental Map Building

The general approach of map building is to incrementally integrate new data into the map. When each new frame is obtained, it is aligned to a cumulative global map.

In the case of multiple vehicles, each vehicle's local incremental map must be integrated into the global map for use in future path planning, not only by itself but by all other vehicles in the distributed team. Each vehicle continuously performs the scene reconstruction and plane fitting process over the locally sensed feature point data. By completing the dimensionality reduction process on each local data set, the incremental global map building process is reduced to a plane merging task. Performing the dimensionality reduction on the incremental local data set reduces both the storage requirement for each vehicle and the size of the data set transferred for inclusion in the global map. The plane fitting procedure is therefore completed on a smaller data set, allowing for more computationally efficient data clustering. The global map building procedure thus becomes a merging process, where the individual local low-dimensional maps are incorporated into the central global map.

Although communication restrictions are not explicitly considered in the problem statement, sharing a reduced-order data set with the centralized global map would reduce the required communication bandwidth to a more realistic level than that necessary to share the unaltered sensor data.

Figure 3-3. Incremental Plane Merging A) Local Planes B) Merged Planes (axes: x (ft), y (ft), altitude (ft))


CHAPTER 4
MAPPING CRITERIA

4.1 Introduction

The primary problem in any exploration task is, given what is known about the environment, where the vehicle should move to gain as much new information as possible, or to ensure a high probability of learning new information in unexplored regions. The choice of movement algorithm, or mapping criteria, will affect the success and efficiency of a vehicle team attempting to disperse itself in an unknown environment. This section develops a mapping criteria which simultaneously distributes a MAV team and ensures exploration of unexplored regions of the environment. Many methods are available to formulate movement algorithms and define goal locations for autonomous exploration. Random exploration strategies are easily implemented; however, they do not make use of current map knowledge for planning. Another method, used often in indoor cyclic environments, is frontier-based exploration, which works on the assumption that new information can be gained by exploring the boundary between open space and unexplored territory. Another method is the 'Seek-Open Space' method, which attempts to simultaneously distribute mobile robots and cover an environment. The 'Seek-Open Space' method is formulated for mobile robots in a 2-dimensional space; however, the basic idea can be used and expanded for 3-dimensional environments with a MAV.

The search and exploration strategy proposed here is based on selecting goal locations which offer the highest probability of finding new obstacles. The strategy is based on multivariate exploration and decision methods, which select a goal location that is statistically distant from both known obstacles and other team MAVs.

The coverage problem has been defined as the maximization of the total obstacle surface area covered by the MAV vision sensor.

4.2 Coverage

4.2.1 Random Movement Method

Random movement algorithms are the most basic exploration algorithms. The random movement algorithm works by commanding the search vehicle to move forward with small random control inputs at random time intervals. The overall vehicle path exhibits a random 'wandering' movement with a continuously curved path. To ensure forward movement and avoid repeated circling, the turn command and the time interval are constrained to promote exploration.

Another random movement strategy works by planning straight-line paths between the vehicle position and a randomly generated location contained inside the search area. This method leads to complete coverage; however, the search effort leads to excessive coverage of the central area at the expense of the boundary area.

Random exploration strategies are easily implemented and computationally inexpensive. However, they do not make use of current map knowledge for planning and do not inherently provide a method for coordinated team distribution.

4.2.2 Frontier-Based Exploration

Frontier-based exploration works on the assumption that new information can be gained by exploring the boundary between open space and unexplored territory. Frontier-based exploration has been shown to work well for 2-dimensional mobile robot mapping, where the primary goal is to map the boundaries of an environment, for example the outer walls of a room. The frontier-based approach incrementally constructs a global occupancy map of the environment. The map is analyzed to locate the 'frontiers' between free and unknown space, and exploration proceeds in the direction of the closest frontier. The frontier method has proven to work well for indoor cyclic environments, where a boundary wall can be continuously explored to provide new environment information. However, the frontier approach does not explicitly provide for team distribution. Frontier-based exploration is also formulated around an occupancy grid representation, which has the limitations detailed in Chapter 3.

4.2.3 Seek-Open Space Algorithm

The seek-open space movement algorithm causes a robot vehicle to move toward open areas in the map which are distant from known obstacles. The algorithm is executed by first computing the average obstacle vector over all obstacles in sensor range. The obstacle vector is defined such that its magnitude is large for objects close to the vehicle and small for objects far away. After the average obstacle vector is computed, the goal of the vehicle becomes to move in the opposite direction of the average obstacle vector. The vehicle is given a control input which turns it toward the direction of the negative obstacle vector, with the rate of turn determined by the magnitude of the average obstacle vector. The Seek-Open Space algorithm has the advantage that team distribution can be explicitly accounted for by modeling each vehicle as an obstacle. The vehicle team will therefore explore the environment while avoiding regions where both team vehicles and obstacles are present.
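A sketch of the average obstacle vector computation follows. The inverse-range weighting is one plausible choice for the "large when close, small when far" magnitude; the method described above does not fix a specific form:

```python
import numpy as np

def average_obstacle_vector(vehicle_pos, obstacle_pts, sensor_range=100.0):
    """Average obstacle vector over all sensed points within sensor_range."""
    vehicle_pos = np.asarray(vehicle_pos, float)
    v = np.zeros(3)
    count = 0
    for p in obstacle_pts:
        offset = np.asarray(p, float) - vehicle_pos
        d = np.linalg.norm(offset)
        if 0.0 < d < sensor_range:
            # Unit direction toward the obstacle, weighted so the magnitude
            # is large for close objects and small for distant ones.
            v += offset / d * (sensor_range - d) / sensor_range
            count += 1
    return v / count if count else v

# The commanded heading is the direction of -average_obstacle_vector, with the
# turn rate proportional to the vector's magnitude; team vehicles are simply
# appended to obstacle_pts to obtain distribution.
```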

4.2.4 Mapping Criteria

As with the Seek-Open Space strategy, the search and exploration strategy proposed here is based on selecting a goal location which offers the highest probability of finding new obstacles, which are assumed to lie in areas distant from known obstacles. The strategy is based on multivariate exploration and decision methods [25].

The presented approach enables coverage of the environment by selecting individual vehicle goal locations that are statistically distant from measured landmarks and other vehicle locations. The distance measure used to determine the goal location is the Mahalanobis distance D_M [7]. The Mahalanobis distance is a distance measurement scaled by the statistical variation of a known data set; therefore, the Mahalanobis distance can be used to determine the similarity of an unknown sample to a known distribution. For a multivariate Gaussian with mean \mu and covariance matrix \Sigma, the Mahalanobis distance of a multivariate vector x is given by

D_M(x) = \sqrt{(x - \mu)^T \Sigma^{-1} (x - \mu)}   (4–1)

The goal location x_G of the vehicle that promotes coverage of the environment and vehicle separation is chosen to be the point that maximizes the summed Mahalanobis distance over the N known landmarks and team vehicles, as given by

x_G = \arg\max_x \sum_{i=1}^{N} \| D_{M_i}(x) \|   (4–2)

The Mahalanobis distances D_{M_{v_i}} of each vehicle position are also included in order to drive the goal locations away from the current vehicle positions.

The spatial characteristics of the expected data set are modeled by the covariance matrix. This approach allows for a flexible coverage criteria which ensures exploration and team distribution regardless of whether prior information is known about the environment. The more information known about the environment, the higher the probability that a goal location determined by the exploration criteria will contain new obstacle information. If no prior knowledge of the environment is utilized, the Mahalanobis distance collapses to a standard Euclidean distance measurement.
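Equations 4–1 and 4–2 can be sketched as follows, with the continuous maximization replaced by a search over candidate sample points (an illustrative simplification, not the implementation used in this work):

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Equation 4-1: distance of x from a distribution with mean mu, covariance cov."""
    d = np.asarray(x, float) - np.asarray(mu, float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def goal_location(candidates, centers, cov):
    """Equation 4-2: candidate maximizing the summed Mahalanobis distance.

    centers holds both known landmark positions and current vehicle positions,
    so the goal is pushed away from mapped obstacles and teammates alike.
    """
    return max(candidates,
               key=lambda x: sum(mahalanobis(x, mu, cov) for mu in centers))

# With cov set to the identity matrix, the measure collapses to the standard
# Euclidean distance, as noted in the text.
```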

The goal locations are calculated by each individual vehicle, with the assumption that each vehicle location is available for inclusion in the calculation of x_G. Goal locations for each vehicle are calculated independently of each other; therefore, each vehicle's goal location is based on the current global map information and the current vehicle locations. By using this method for goal calculation, a vehicle's search pattern can be intentionally fixed, or calculated using a different performance criterion, and still influence the goal locations of the remaining vehicles.


CHAPTER 5
TRAJECTORY PLANNING

5.1 Introduction

One major disadvantage of many path planners is the assumption that the environment is known in advance. Methods that explicitly deal with the need to replan, and can reevaluate the path as it is being executed, are known as on-line algorithms, and the trajectory they produce is sometimes referred to as a conditional plan. A true on-line algorithm should be able to generate a preliminary trajectory even in the complete absence of any map. These algorithms may not guarantee an optimal trajectory; however, they do guarantee a planned trajectory and connectivity [19].

Sample-based motion planning is performed as a search over the defined configuration space, generating samples to learn about the problem space being searched. Sample-based algorithms typically rely on random sampling of states to explore the configuration space, but can also be based on a deterministic sampling scheme. Samples representing the configuration space are stored in a data structure which can be used to approximate the space. The data structure is composed of nodes and edges; nodes represent sampled states in the configuration space, and edges represent a valid path connecting two states. The data structure is stored as an expanding tree. Sample-based approaches have appealing properties: they offer guaranteed connectivity, are computationally inexpensive, and are easily made to suit the dynamic constraints of the system. Sample-based motion planning methods are also probabilistically complete, meaning that if a solution exists, the probability of finding it converges to one as the computation time approaches infinity. If no path is found through the configuration space, it could mean that no valid path exists or that the sampling process was insufficient to adequately explore the space [18].

Rapidly-Exploring Random Tree (RRT) planners are a class of random motion planning algorithms that can be used for systems involving kinodynamic constraints [14] [16] [17].

5.2 Rapidly Exploring Random Tree

The Rapidly-Exploring Random Tree (RRT) algorithm provides a randomized method for the construction of roadmaps. The goal is to rapidly and efficiently explore the state space or configuration space. The RRT algorithm was developed by LaValle and Kuffner specifically to handle problems that involve dynamics and differential constraints. An advantage of RRTs is that they can be directly applied to kinodynamic planning, which stems from the fact that RRTs do not require any connections to be made between pairs of configurations, as in probabilistic roadmaps. Through rapid exploration of the space, the RRT can achieve efficient roadmap-style solutions between two points and, as with all sample-based planners, has been shown to be probabilistically complete.

When RRTs are used for MAV path planning, the tree nodes are potential MAV waypoints and the branches are paths to the waypoints. The tree is initially composed of the UAV's position as a single node. A random MAV state is generated and the tree is extended toward the random point, creating a new branch and node, as outlined in Algorithm 1. When a path is found or a maximum number of iterations is reached, the RRT terminates.

Algorithm 1 Rapidly-Exploring Random Trees

1. Initialize a tree containing one node, the starting location of the UAV
2. while a path has not been found do
3.   X_rand = a random UAV state
4.   X_near = the state in the tree that is closest to X_rand
5.   if the tree can be connected from X_near to X_rand without collision then
6.     extend the tree from X_near to X_rand
7.   end if
8. end while
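Algorithm 1 can be rendered as a short planar sketch. The Euclidean metric, step size, bounds, and goal tolerance are example choices; the kinodynamic version would replace the straight-line extension with an integrated control input u:

```python
import math
import random

def rrt(x_init, x_goal, is_free, step=2.0, bounds=(0.0, 100.0),
        goal_tol=2.0, max_iter=5000):
    """Grow a 2-D RRT from x_init; is_free(a, b) is the segment collision check.

    Returns the node list and a parent map; a path is recovered by
    backtracking from the last node once it lands within goal_tol of x_goal.
    """
    nodes, parent = [x_init], {0: None}
    for _ in range(max_iter):
        x_rand = (random.uniform(*bounds), random.uniform(*bounds))
        # Nearest existing node under the distance metric rho (Euclidean here).
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], x_rand))
        x_near = nodes[i_near]
        d = math.dist(x_near, x_rand)
        if d == 0.0:
            continue
        t = min(step / d, 1.0)            # extend one incremental step
        x_new = (x_near[0] + t * (x_rand[0] - x_near[0]),
                 x_near[1] + t * (x_rand[1] - x_near[1]))
        if is_free(x_near, x_new):        # new node and edge must lie in X_free
            parent[len(nodes)] = i_near
            nodes.append(x_new)
            if math.dist(x_new, x_goal) < goal_tol:
                break
    return nodes, parent
```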

Path planning is viewed as the search in a space X for a continuous path from an initial state x_init to a goal state x_goal. It is assumed that the space X is a bounded region which contains a fixed obstacle region, X_obs, and free space, X_free. The RRT will be constructed so that all of its vertices are within X_free, and each edge of the RRT will correspond to a path that lies entirely in X_free. Collision detection will be performed by an incremental method, described in a later section.

For a given initial state, x_init, an RRT, T, with K vertices is constructed as follows. The first node of T is initialized to be equal to x_init, for a single forward-planning RRT. Then, for each iteration of the RRT generating algorithm, a new random state x_rand is selected from the configuration space X. The node nearest x_rand in terms of a distance metric ρ is found and used as the point for expansion. An incremental control input u is selected which minimizes the distance from x_near to x_rand. The new node and edge must be located in free space and are checked for obstacle collision. The tree, T, is then updated with the new node and control input.

Figure 5-1. Rapidly-Exploring Random Tree Expansion A) Sample Random Node B) Extend Tree

5.3 Sampling

Traditionally, randomized motion planning algorithms use a uniform sample distribution to generate x_rand, which tends to grow the tree in such a way that the entire space is covered. In robot motion planning applications, the presence of obstacles often restricts the use of 'greedy' solutions in which the robot proceeds directly to the goal. The use of the uniform distribution increases the algorithm's robustness when solving problems in which the space is not convex, meaning the space contains local minima which may result in the planning algorithm not being able to find a valid path. However, when the space is convex and the vehicle dynamics are fast enough to ensure adequate maneuverability, using a uniform distribution is unnecessary and computationally expensive, because the 'obvious' solution is not explored immediately.

Another option would be to use a distribution biased in such a way as to generate, with some probability, samples which fall close to the goal location. The possible drawback of a biased search scheme, or any 'greedy' search strategy, is that it can get stuck in local minima and fail to find a valid path.


Figure 5-2. Rapidly-Exploring Random Tree with Kinematic Constraints A) Two-Dimensional RRT B) Three-Dimensional RRT

5.4 Collision Avoidance

Once it has been decided where the samples will be placed, the next problem is to determine whether the configuration is in collision. Collision detection is thus a critical component of sampling-based planning. In most motion planning applications the majority of computation time is spent on collision checking, both for validating samples and for validating the edges connecting them, so the choice of collision detector is extremely important. The problem of obstacle avoidance for autonomous aircraft is further complicated by the dynamic constraints of the vehicle. A large body of work has focused on obstacle avoidance for mobile ground robots; in contrast to ground vehicles and rotorcraft, which can always stop, aircraft must maintain a minimum airspeed in order to generate sufficient lift to remain aloft. Likewise, in comparison to helicopters, which can turn in place, the lateral dynamics of aircraft limit the turning capability of the system.

The obstacle avoidance algorithm developed in this work assumes the UAV is stabilized by a low-level autopilot and that the standard kinematic model describes the navigation behavior of the vehicle.
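One common form of such a kinematic model is a constant-speed point mass whose heading rate follows the coordinated-turn relation, psi_dot = (g/V) tan(phi). The sketch below uses this form; the 15 ft/s speed, bank command, and time step are illustrative assumptions, not values from the thesis:

```python
import math

def step_kinematics(state, bank_deg, dt, speed=15.0, g=32.2):
    """One Euler step of a constant-speed aircraft kinematic model in the
    horizontal plane.  The heading rate comes from the coordinated-turn
    relation psi_dot = (g / V) * tan(phi); speed and bank values here are
    illustrative assumptions."""
    x, y, psi = state
    phi = math.radians(bank_deg)
    psi_dot = (g / speed) * math.tan(phi)   # coordinated-turn heading rate
    x += speed * math.cos(psi) * dt
    y += speed * math.sin(psi) * dt
    psi += psi_dot * dt
    return (x, y, psi)

# A 30-degree banked turn held for 10 s traces a tight circle; the minimum
# turn radius V**2 / (g * tan(phi)) is what constrains the RRT's
# incremental path segments.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = step_kinematics(state, bank_deg=30.0, dt=0.1)
```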

The three-dimensional obstacles are composed of two-dimensional convex polygons, as defined in Chapter 3. Each convex polygon, as shown in Figure 5-3, represents an obstacle face. A polygon is in collision with a potential incremental path segment, defined between nodes x_i and x_j, if there exists a point r which lies on the line connecting the two nodes and is contained


Figure 5-3. Collision Check: A) line-plane intersection; B) point inside polygon.

inside the polygon. The point r is contained inside the polygon if the sum of the interior angles, α_i, is equal to 2π.
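The two-stage test described above (intersect the segment with the plane of the face, then decide containment by the interior-angle sum) can be sketched as follows. The tolerances and the collision convention when r falls on a vertex are illustrative choices:

```python
import math

def segment_hits_polygon(p0, p1, poly, tol=1e-6):
    """Check a path segment p0 -> p1 against one planar convex polygon face:
    (1) intersect the segment with the face plane; (2) sum the angles
    subtended at the intersection point by consecutive vertices, which
    totals 2*pi for an interior point.  Tolerances are illustrative."""
    def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
    def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def norm(a): return math.sqrt(dot(a, a))

    n = cross(sub(poly[1], poly[0]), sub(poly[2], poly[0]))  # face normal
    d = sub(p1, p0)
    denom = dot(n, d)
    if abs(denom) < tol:            # segment parallel to the face plane
        return False
    t = dot(n, sub(poly[0], p0)) / denom
    if not 0.0 <= t <= 1.0:         # plane crossing not between the nodes
        return False
    r = tuple(p0i + t * di for p0i, di in zip(p0, d))

    total = 0.0                     # interior-angle sum at r
    for k in range(len(poly)):
        u, v = sub(poly[k], r), sub(poly[(k + 1) % len(poly)], r)
        nu, nv = norm(u), norm(v)
        if nu < tol or nv < tol:    # r on a vertex: treat as a collision
            return True
        c = max(-1.0, min(1.0, dot(u, v) / (nu * nv)))
        total += math.acos(c)
    return abs(total - 2.0 * math.pi) < 1e-3

# Example: a unit-square face in the z = 0 plane.
face = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
hit = segment_hits_polygon((0.5, 0.5, -1.0), (0.5, 0.5, 1.0), face)   # True
```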


CHAPTER 6
SIMULATION RESULTS

A simulation is used to demonstrate the team distribution and mapping strategy. The simulation consists of a two-MAV team governed by individual kinematic models. Kinematic models were used instead of nonlinear dynamic models to reduce complexity and computational cost. The purpose of the simulation is to test the coverage and team distribution algorithms; therefore, the assumption is made that the kinematic model sufficiently describes the flight characteristics that would be expected from a more accurate dynamic model combined with a suitable control system.

A camera is mounted on the nose of each MAV, aligned parallel with the longitudinal axis, and is updated at a sensor rate of 0.01 Hz. The simulation was run using a mesh of feature points placed at even intervals over the faces of the buildings. At each sensor time step the feature points in the camera frustum are located using scene reconstruction and processed with the dimensionality reduction and plane fitting described in Chapter 3. An optimal implementation of the scene reconstruction analysis is unnecessary; therefore, the actual computation time is ignored, and the sensor and scene reconstruction updates are run at rates found in the published literature. For this example, the scene reconstruction algorithm runs at 0.01 Hz.

For this simulation, perfect feature point extraction and tracking are assumed to be available, along with perfect state estimation, perfect terrain mapping, and perfect path planning. These assumptions are clearly unrealistic but suffice to demonstrate the effectiveness of the mapping criteria.

The MAV team flies through an environment designed to demonstrate the performance of the map criteria in an urban setting; the environment is shown in Figure 6-1. The mission objective is to ensure the vehicles are distributed in the environment and map the unknown environment efficiently.

Path planning is accomplished using the RRT algorithm, as described in Chapter 5. The RRT provides a kinodynamically feasible, collision-free trajectory for the currently known


Figure 6-1. Simulated Urban Environment

environment map. The planned path connects the vehicle from its current pose to the currently defined goal location, determined from the mapping criteria. The planned path serves as a 'conditional trajectory' and only ensures a feasible path for the current environment map; it guarantees neither obstacle avoidance for the unknown portion of the environment nor an optimal trajectory to the goal location. Because the vehicle is operating under a changing state of environment knowledge, the RRT algorithm is used as an 'on-line' path planner which replans the trajectory at a given rate. Here, the RRT algorithm provides a new trajectory at a rate of 0.25 Hz.
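The interplay of the two fixed rates (0.01 Hz map updates and 0.25 Hz replans) inside one simulation loop can be sketched as a simple step-counting scheduler. The 0.1 s loop step is an assumed value, not one from the thesis:

```python
def schedule_events(duration_s, dt=0.1, sensor_hz=0.01, replan_hz=0.25):
    """Count how often the scene-reconstruction (sensor) update and the RRT
    replan fire over a mission, each at its own fixed rate, within a single
    simulation loop.  Rates follow the text; the loop step is illustrative."""
    steps = int(round(duration_s / dt))
    sensor_every = int(round(1.0 / (sensor_hz * dt)))   # steps between map updates
    replan_every = int(round(1.0 / (replan_hz * dt)))   # steps between replans
    events = {"sensor": 0, "replan": 0}
    for k in range(steps):
        if k % sensor_every == 0:
            events["sensor"] += 1   # process camera frame, update environment map
        if k % replan_every == 0:
            events["replan"] += 1   # RRT issues a new conditional trajectory
    return events

counts = schedule_events(150.0)     # the 150 s mission run used in this chapter
```

Counting by integer step index, rather than accumulating floating-point time, keeps the event counts deterministic.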

Figure 6-2. Simulation at T = 0 s (axes: x, y, altitude, in ft).

The simulation is initialized with a completely unknown environment, with the search area bounded by a [3000 ft, 3000 ft, 1000 ft] environment volume. The two-vehicle team is initialized


slightly laterally separated at equal altitudes with identical initial attitudes. The initial goal location determined from the mapping criteria is based entirely on vehicle location, since there are no initially known obstacles. Figure 6-2 shows the initial environment, vehicle locations, and conditional trajectories. The initial goal locations quickly result in diverging trajectories, distributing the vehicles to opposing environment boundaries.

When a team vehicle approaches within a defined proximity of the exploration goal location, the map criteria recalculate a new exploration goal based on the current map and team knowledge. The new goal location moves the vehicle away from its current position, the other team vehicle positions, and the known portions of the map.
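One way to sketch this goal-selection behavior is a max-min distance rule over sampled candidate locations. This is an illustrative stand-in for the thesis's actual mapping criteria; the function, candidate count, and positions below are all assumptions:

```python
import math
import random

def next_goal(own_pos, team_pos, mapped_pts, bounds, n_candidates=200):
    """Pick a new exploration goal by sampling candidates in the search
    volume and keeping the one whose minimum distance to the vehicle, its
    teammates, and the already-mapped points is largest.  An illustrative
    stand-in for the mapping criteria, not the criteria themselves."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    repellers = [own_pos] + list(team_pos) + list(mapped_pts)
    best, best_score = None, -1.0
    for _ in range(n_candidates):
        cand = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        score = min(dist(cand, p) for p in repellers)
        if score > best_score:
            best, best_score = cand, score
    return best

# Both vehicles and the known map sit near one corner of the 3000 x 3000 x
# 1000 ft search volume, so the new goal should land far from all of them.
bounds = [(0.0, 3000.0), (0.0, 3000.0), (0.0, 1000.0)]
goal = next_goal((100.0, 100.0, 500.0), [(300.0, 100.0, 500.0)],
                 [(500.0, 500.0, 0.0)], bounds)
```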

Figure 6-3. Simulation at T = 80 s.

Figure 6-3 shows the replanned goal location and the new conditional trajectory. After continuing the simulation for 150 seconds, 65% obstacle coverage was achieved. Obstacle coverage is defined as the percent of the total obstacle surface area that has been mapped.
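With obstacles represented as polygonal faces, this coverage metric reduces to a ratio of face areas. A minimal sketch, with hypothetical face data:

```python
def coverage_percent(faces):
    """Obstacle coverage as defined above: mapped obstacle surface area as
    a percent of total obstacle surface area.  `faces` maps each obstacle
    face to (area_ft2, is_mapped); the data layout is illustrative."""
    total = sum(area for area, _ in faces)
    mapped = sum(area for area, seen in faces if seen)
    return 100.0 * mapped / total if total else 0.0

# Hypothetical example: five faces, three of them mapped.
faces = [(400.0, True), (400.0, True), (200.0, False), (1000.0, True), (2000.0, False)]
print(coverage_percent(faces))   # → 45.0
```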

The simulation was also run with randomly selected goal locations, so that current map knowledge and team distribution were not included in the map criteria. The simulation was again run for 150 seconds, with 60% of the obstacles mapped; using the map criteria, the team vehicles remained more widely distributed. The two simulations, random and proposed, are only intended to compare the overall representative behavior of each method as it affects team distribution and coverage. No guarantees are made that the percentage of coverage would remain


Figure 6-4. Simulation at T = 150 s.

Figure 6-5. Simulation at T = 150 s, Top View.

the same for subsequent simulation runs, due to the random nature of the trajectory generation and therefore of the path defined toward each new goal location.


CHAPTER 7
CONCLUSION

7.1 Summary

Micro Air Vehicles (MAVs) are a reasonable platform for vision-based autonomous environment mapping. Cameras are relatively small and lightweight, making them especially well suited to vehicles with small payloads, and they provide a rich stream of data describing the environment, suitable for map building. Multiple-MAV teams have the potential to exhibit many advantages over single-vehicle systems; however, team distribution and task assignments must be efficiently governed to realize the added benefits.

Use of geometric maps to represent the sensed environment allows sufficient detail to be mapped. The geometric representation provides a suitable map for a wide range of mission tasks, provided an adequate geometric model is used. The model must be able to capture environment features accurately and with enough detail to satisfy the mission requirements. In this case, an urban environment is well modeled by polygonal plane segments, providing a suitable map for mission planning, collision avoidance, and obstacle recognition.

The mapping criteria developed in this thesis proved to be a valid exploration method, allowing the intelligent selection of goal locations to govern vehicle movement and team distribution. The mapping criteria exhibited advantages over uncoordinated, random vehicle exploration strategies and were found to scale easily to any number of team vehicles without increasing system complexity.

The use of RRT path planners for trajectory generation provided a flexible method for computing conditional vehicle trajectories that can be quickly replanned when new map knowledge becomes available. RRT algorithms also allow the kinodynamic constraints of each vehicle to be tailored for heterogeneous vehicle teams, adding another layer of flexibility.

The simulations are limited by the assumptions of perfect feature point detection, feature tracking, and scene reconstruction, but they suffice to demonstrate the feasibility and performance of the proposed mapping criteria.


7.2 Future Direction

Recently, many research efforts have considered the problem of simultaneous localization and mapping (SLAM). The SLAM process uses a single mobile robot to simultaneously generate a global environment map and localize the vehicle within that environment. A somewhat different problem is that of cooperative localization and mapping (CLAM), in which two or more robots cooperate to build a map of the environment. The cooperative strategy is aimed not only at increasing the speed with which the map is constructed, but also at increasing the accuracy of the resulting maps and vehicle location estimates.



BIOGRAPHICAL SKETCH

Eric Branch was born and raised in Hollywood, Florida. He attended the University of West Florida in Pensacola, Florida. During his time in Pensacola, he worked as a flightline attendant at Pensacola Regional Airport, where he started his flight training and eventually earned a commercial pilot's license. In 2001, he transferred to the University of Florida, where he received a Bachelor of Science degree in aerospace engineering in August 2003. After graduation, Eric went to work for Naval Air Systems Command (NAVAIR) as a flight test engineer. Eric later returned to the University of Florida to earn a Master of Science degree in aerospace engineering. His graduate research involved the development of vision-based flight control methodologies.
