
Hovering by Gazing: A Novel Strategy for Implementing Saccadic Flight-based Navigation in GPS-denied Environments

Augustin Manecy, Nicolas Marchand, Stéphane Viollet

To cite this version:

Augustin Manecy, Nicolas Marchand, Stéphane Viollet. Hovering by Gazing: A Novel Strategy for Implementing Saccadic Flight-based Navigation in GPS-denied Environments. International Journal of Advanced Robotic Systems, InTech, 2014, 11 (66), 13 p.

    HAL Id: hal-00982329


    Submitted on 23 Apr 2014

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.



Hovering by Gazing: A Novel Strategy for Implementing Saccadic Flight-based Navigation in GPS-denied Environments


Abstract: Hovering flies are able to stay still in place when hovering above flowers and to burst into movement towards a new object of interest (a target). This suggests that the sensorimotor control loops implemented onboard could be usefully mimicked for controlling Unmanned Aerial Vehicles (UAVs). In this study, the fundamental head-body movements occurring in free-flying insects were simulated in a sighted twin-engine robot with a mechanical decoupling inserted between its eye (or gaze) and its body. The robot based on this gaze control system achieved robust and accurate hovering performance, without an accelerometer, over a ground target despite a narrow eye field of view (5°). The gaze stabilization strategy, validated under Processor-In-the-Loop (PIL) conditions and inspired by three biological Oculomotor Reflexes (ORs), enables the aerial robot to lock its gaze onto a fixed target regardless of its roll angle. In addition, the gaze control mechanism allows the robot to perform short-range target-to-target navigation by triggering an automatic fast "target jump" behaviour based on a saccadic eye movement.

Keywords: Micro Aerial Vehicle, Automatic Navigation, Gaze Control, Visual Control, Hover Flight, Eye Movement, Oculomotor Control, Biorobotics


    FOV: Field Of View.

    rVOR: Rotational Vestibulo-Ocular Reflex.

    tVOR: Translational Vestibulo-Ocular Reflex.

    VFR: Visual Fixation Reflex.

    ZSL: Zero-Setting System.

    VFL: Visual Feedback Loop.

    D-EYE: Decoupled eye robot.

    F-EYE: Fixed eye robot.

    PIL: Processor-In-the-Loop.

    MLD: Maximum Lateral Disturbance.

    1. Introduction

Making Unmanned Aerial Vehicles (UAVs) autonomous is likely to be a key research subject for the next 10 years. To achieve this autonomy, one of the most crucial steps is stabilizing the attitude of aerial vehicles. This objective has been classically achieved using an Inertial Measurement Unit (IMU) composed of rate gyros, accelerometers and magnetometers ([1–3]). But IMUs often have to be combined with other sensors to estimate the position of the UAV without drift. A GPS has been used,




    Int J Adv Robot Syst, 2014, 11:66 | doi: 10.5772/58429

1 University Grenoble Alpes, CNRS, GIPSA-Lab, F-38000 Grenoble, France
2 Aix-Marseille University, CNRS, ISM UMR 7287, Marseille, France
3 CNRS, France
* Corresponding author, e-mail: augustin.manecy@gmail.com

    Received 25 Sep 2013; Accepted 02 Mar 2014


© 2014 The Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Augustin Manecy1,2,3,*, Nicolas Marchand1,3 and Stéphane Viollet2,3

Hovering by Gazing: A Novel Strategy for Implementing Saccadic Flight-based Navigation in GPS-denied Environments

Regular Paper

    International Journal of Advanced Robotic Systems

for example, to remove the bias from the pose estimates in outdoor applications ([4–6]). Many strategies combining vision and an IMU have been adopted for indoor applications because they work efficiently in GPS-denied environments. For example, [7] used an external trajectory tracking device with very fast dynamics, which was able to accurately measure both the position and the attitude of a robot equipped with markers. Likewise, [8] used a simple external CCD camera to measure the positions of two LED markers onboard a quadrotor. Although external visual sensing devices of this kind are quite accurate, their visual computational resources often have to be provided by a ground-based host computer. In other studies, the visual sensor was implemented onboard a robot, which was stabilized using two elliptic markers (one placed under the robot and the other in front of it) to determine its attitude and position ([9]). Five markers with a specific geometry were used by [10] to estimate the robot's yaw and position, while an IMU gave the robot's attitude and angular velocities. Another particularly cheap solution ([11]) consisted of using a Wii remote sensor (Nintendo) to control a hovering vehicle, while the pitch and roll were measured by an IMU. Another solution ([12]) consisted of merging a five-DOF estimation provided by a monocular camera with a gravity vector estimation provided by an IMU to achieve the take-off, hovering and landing of a quadrotor. Successful control of a hovering quadrotor was achieved by cancelling the linear velocity based on optic flow measurements ([13], [14]). Optic flow was also used to estimate the altitude and the forward speed of an airborne vehicle ([15]). The robot's attitude has usually been determined in these studies by means of an IMU combined with accelerometers compensating for the drift. However, the authors of several studies have used small or panoramic cameras to determine the robot's attitude ([7]).
Visual servoing was recently applied to stabilize a hovering quadrotor equipped with a fixed camera facing a ground-coloured target ([16]). When this approach was tested under natural outdoor conditions, its vision-based hovering accuracy was found to range between -2 m and 2 m. In [17], a monocular camera and computer vision with high delays were used to avoid the drift obtained with a simple IMU pose estimation. A camera was also used in [18] as a dual sensor to achieve a drift-free hover and perform obstacle avoidance on a quadrotor.

In our bio-inspired minimalistic framework, a new control strategy was developed, which consists of using only three sensors:

• an optical angular position sensing device with a small Field-Of-View (FOV) of only a few degrees, which is able to locate the angular position of a ground target with great accuracy ([19], [20]);

• a low-cost rate gyro;

• a simple proprioceptive sensor used to measure the eye orientation in the robot's frame.
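To make the roles of these three measurements concrete, the geometric relation between them can be sketched as follows. This is an illustrative, roll-axis-only sketch with hypothetical names, not the paper's implementation: the gaze angle in the inertial frame is the sum of the body roll and the eye-in-body angle, while the visual position sensor reports the target's angular error relative to the gaze.

```python
def estimate_roll(eps_retinal, theta_eye_in_body, theta_target=0.0):
    """Recover body roll (deg) from the three measurements (names hypothetical).

    eps_retinal:       target angular error seen by the visual sensor
    theta_eye_in_body: eye orientation in the body frame (proprioceptive sensor)
    theta_target:      bearing of the fixed ground target in the inertial frame

    Gaze in the inertial frame is theta_gaze = theta_body + theta_eye_in_body,
    and the visual sensor measures eps_retinal = theta_target - theta_gaze.
    """
    theta_gaze = theta_target - eps_retinal
    theta_body = theta_gaze - theta_eye_in_body
    return theta_body

# Eye tilted -3 deg in the body frame, target seen 1 deg off the gaze axis:
print(estimate_roll(1.0, -3.0))  # 2.0
```

The rate gyro enters only between visual samples, to propagate the roll estimate, which is why a drift-free bias estimate matters.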

Using a decoupled eye onboard a UAV greatly reduces the computational complexity because a narrower FOV can be used, meaning that fewer pixels are needed. In a previous robotic study ([21]), a decoupled eye was classically used as an observation device to track a target and compensate for the UAV's displacements around this target. The latter was successfully tracked by an autonomous helicopter, the position and attitude of which were determined using a combination of GPS and IMU. In the present study, it was proposed to use the robot's own gaze (i.e., the orientation of the eye in the inertial frame) as a reference value in order to determine its rotations and positions.

An eye with a restricted FOV of only a few degrees (like a kind of fovea1) was adopted here for the following main reasons:

• a small FOV means that very few pixels have to be processed, because a small FOV can be obtained by selecting a small Region Of Interest (ROI);

• a small ROI means a high frame rate;

• a fovea makes it possible to greatly reduce the computational complexity of the visual processing algorithms, as described in [22]; therefore, a tiny, low-cost, low-consumption microcontroller can be implemented onboard the robot.
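The first two points can be quantified with a rough back-of-the-envelope calculation (assuming a hypothetical square ROI and a uniform pixel pitch, which is not necessarily the paper's sensor geometry): the number of pixels grows with the square of the FOV, so a 5° ROI carved out of a 25° sensor leaves 25 times fewer pixels to read and process per frame.

```python
def roi_pixel_ratio(fov_small_deg, fov_full_deg):
    """Factor by which the pixel count drops when a square ROI with a
    smaller FOV is selected, assuming a uniform pixel pitch: pixels
    scale with the square of the angular extent."""
    return (fov_full_deg / fov_small_deg) ** 2

print(roi_pixel_ratio(5.0, 25.0))  # 25.0
```

With readout time roughly proportional to pixel count, the same factor is an upper bound on the achievable frame-rate gain.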

The robot's eye was taken to be a visual Position-Sensing Device (PSD), which is able to locate an edge (or a bar) accurately thanks to its small FOV (here, FOV = 5°, as opposed to a classical FOV of 25°). This visual sensor was endowed with hyperacuity ([23]), i.e., the ability to locate a contrasting target with a much better resolution than that dictated by the pixel pitch. The implementation of this kind of hyperacute visual sensor was described by previous authors ([19, 20, 24–26]).
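One simple, generic way to obtain such sub-pixel edge localization, offered only as an illustrative sketch and not as the sensor actually described in [19], [20], is to take the centre of mass of the absolute intensity gradient along the photoreceptor row:

```python
import numpy as np

def subpixel_edge(intensities):
    """Locate a contrast edge with sub-pixel precision by taking the
    centre of mass of the absolute intensity differences between
    neighbouring photoreceptors (a generic hyperacuity scheme, not
    the paper's).  diff[i] sits between pixels i and i+1, hence the
    +0.5 shift back to pixel coordinates."""
    g = np.abs(np.diff(np.asarray(intensities, dtype=float)))
    return float(np.sum(np.arange(len(g)) * g) / np.sum(g)) + 0.5

# A dark-to-bright edge spread over two pixel transitions:
row = [10, 10, 10, 60, 110, 110, 110]
print(subpixel_edge(row))  # 3.0
```

The recovered position, 3.0, is the pixel whose intensity lies midway between the dark and bright levels, even though no single pixel boundary coincides with it, which is the essence of resolving better than the pixel pitch.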

The simulated robot and its decoupled eye are described in section 2, along with the nonlinear dynamic model. The Processor-In-the-Loop (PIL) system used to perform the simulations is also described in this section. The original nonlinear observer presented in section 3 estimates the linear speed, the position, the attitude (on the roll axis only in this paper), and the rate gyro's bias. Then the gaze controller, as well as the whole robot controller based on these estimations, is described in detail. In section 4, a new navigation strategy for aerial robots in terms of automatic target-to-target navigation based on saccadic flight is outlined. Lastly, the advantages of using a decoupled eye with a gaze control strategy a
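The nonlinear observer of section 3 is not reproduced here, but the underlying idea of estimating a rate-gyro bias by fusing the gyro with a vision-derived angle can be caricatured, under strong simplifying assumptions (roll axis only, a generic complementary observer with arbitrary gains, hypothetical names), as:

```python
def observer_step(rate_meas, roll_vis, roll_hat, bias_hat,
                  dt=0.001, k_p=2.0, k_i=0.5):
    """One step of a generic complementary observer (an illustrative
    sketch, not the paper's observer): integrate the bias-corrected
    gyro rate, pull the roll estimate toward the vision-derived roll,
    and let the integral path converge to the rate-gyro bias."""
    err = roll_vis - roll_hat
    roll_hat += (rate_meas - bias_hat + k_p * err) * dt
    bias_hat += -k_i * err * dt
    return roll_hat, bias_hat

# Stationary robot (true roll = 0 rad) with a gyro bias of 0.3 rad/s:
roll_hat, bias_hat = 0.0, 0.0
for _ in range(50000):  # 50 s of simulated time at 1 kHz
    roll_hat, bias_hat = observer_step(0.3, 0.0, roll_hat, bias_hat)
print(round(bias_hat, 3))  # converges towards 0.3
```

The same structure is what makes accelerometer-free attitude estimation plausible: the vision loop removes the drift that pure gyro integration would accumulate.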