Designing an Effective Millirobot for Swarm
Behaviour Studies with Human-Swarm Interaction
by
Justin Yonghui Kim
A thesis submitted in conformity with the requirements for the degree of Master of Applied Science
MECHANICAL AND INDUSTRIAL ENGINEERING
UNIVERSITY OF TORONTO
© Copyright by Justin Yonghui Kim 2017
Designing an Effective Millirobot for Swarm Behaviour Studies
with Human-Swarm Interaction
Justin Yonghui Kim
Master of Applied Science
MECHANICAL AND INDUSTRIAL ENGINEERING
UNIVERSITY OF TORONTO
2017
Abstract
This thesis presents a novel design of an open-source millirobot for swarm behaviour studies and a novel swarm localization algorithm to enable human-swarm interaction (HSI). Millirobots have become increasingly popular in swarm robotics research in recent years. However, currently existing millirobots for swarm robotics research are typically custom made, use few off-the-shelf components, and exhibit integral (i.e., non-modular) designs. Furthermore, as these robots have become smaller, their sensory capabilities and battery capacities have been reduced. This thesis proposes a new millirobot design that addresses the abovementioned limitations.
For swarm robots to be successfully integrated into real-world applications, HSI must be enabled, whereby a human supervisor monitors the robots and provides them with meaningful commands. One method of
enabling HSI is through localization. This thesis proposes a novel localization method to estimate the
swarm topology without the use of any external sensors such as overhead cameras monitoring the
robots.
Acknowledgements
I would first like to thank Professor Benhabib and Professor Nejat for the guidance and support they provided me in completing the work on mROBerTO. This work would not have been possible without their efforts. Professor Benhabib and Professor Nejat reviewed and provided feedback
for all prototype versions of mROBerTO. In addition, they provided feedback on all documentation
work for the project. Furthermore, Professor Benhabib provided the initial algorithm for the HSI
portion of this thesis.
I would like to thank Zendai Kashino for his continuous support on completing the mROBerTO
project. Specifically, I would like to thank him for helping me review the algorithms used on the robots.
Furthermore, he has helped me with analyzing the experimental data for the HSI portion of the thesis
as well as graphing the analyzed experimental data.
I would also like to thank Tyler Colaco for his undergrad thesis work which pertained to the
tracking component of mROBerTO. Specifically, he helped make the initial tracking code, which can
track both the global position and orientation of the robot on the workspace using an overhead camera,
scalable to be used with multiple robots. The new scalable code can track multiple robots moving on
the workspace by implementing a scheduler for threaded processing.
I would like to thank Evgeny Nuger for all the support he has given me during the completion of
my Master of Applied Science. Furthermore, I would like to thank Julio Vilela, Mario Luces, Benjamin
Corcoran, Ahmed Sageer, and everyone else from the Autonomous Systems and Biomechatronics Lab
that have provided support during the completion of my research.
Thank you to Bruno Korst for providing me with technical help as well as personal advice since the beginning of my engineering studies. After I finished my first year, I wanted to learn more about electrical engineering during the summer, so I decided to reach out to faculty and staff members in the ECE department. I was directed to Bruno Korst, and in our first meeting I told him, with great enthusiasm, how much I wanted to learn about electrical engineering. He decided to give me a chance and provided me with my very first microcontroller, a T.I. MSP430-series device, to tinker with, and the rest was history.
Finally, I would like to thank my friends and my family for their continuing support. I would like
to give a special thanks to Swarna Mitra who provided me continuous support throughout both my
undergraduate and graduate studies.
Table of Contents
Acronyms
Nomenclature
Chapter 1 Introduction and Motivation
1.1 Design of Swarm Millirobots
1.2 Human-Swarm Interaction
1.3 Problem Statement and Thesis Objectives
1.4 Proposed Methodology and Thesis Organization
Chapter 2 Review on Currently Existing Swarm Robots and Designs
2.1 Hardware Design
2.1.1 Processing and Sensing Capabilities
2.1.2 Robot Actuation
2.1.3 Wireless Communication
2.2 Swarm Algorithms and Control System
Chapter 3 Designing an Effective Modular Swarm Millirobot
3.1 Proposed Swarm Millirobot Design
3.1.1 Modules of the Robot
3.1.2 Sensing and Communication for Collective Behaviour
3.1.3 Software
3.1.4 Power Management
3.1.5 Comparison to Other Millirobots
3.2 Experiments
3.2.1 Open-Loop Movement Control
3.2.2 Path Following
3.2.3 Swarm Capabilities
3.3 Summary
Chapter 4 Review on Human-Swarm Interaction
4.1 Propagating Human Influence on Swarm
4.2 Localization
4.2.1 Range-Based Localization
4.2.2 Localization with Evolutionary Algorithms
4.3 Summary
Chapter 5 Enabling Localization for Human-Swarm Interaction
5.1 Proposed Swarm Localization Method
5.2 Experiments
5.3 Summary
Chapter 6 Conclusions and Recommendations for Future Work
6.1 Summary of Contributions
6.1.1 An Effective Swarm Millirobot Design
6.1.2 Swarm Localization Algorithm
6.2 Recommendations for Future Work
Appendices
Appendix A: Drawings and Assemblies of mROBerTO
A.1 Locomotion Module: Drawing and Assembly
A.2 Mainboard Module: Drawing and Assembly
A.3 Primary Sensing Module: Drawing and Assembly
A.4 Secondary Sensing (Swarm) Module: Drawing and Assembly
A.5 Secondary Sensing (Central) Module: Drawing and Assembly
Appendix B: Bill of Materials
Appendix C: Schematics and Board Layouts
C.1 Locomotion Module: Schematic and Layout
C.2 Mainboard Module: Schematic and Layout
C.3 Primary Sensing Module: Schematic and Layout
C.4 Secondary Sensing (Swarm) Module: Schematic and Layout
C.5 Secondary Sensing (Central) Module: Schematic and Layout
Appendix D: Specification of mROBerTO
Appendix E: IR Sensors for Proximity and Bearing Estimation
Appendix F: Source Code
F.1 FHT and GA on ATmega328P
F.2 Angular Velocity to PWM Function
F.3 Discretized PID Controller
Appendix G: Centralized Controller with Overhead Camera
Appendix H: Details of Four Swarm Scenarios
H.1 Aggregation Behaviour
H.2 Chain Formation
H.3 Collective Exploration
H.4 Dynamic Task Allocation
H.5 ANT Wireless Communication and Source Code Snippet
Appendix I: HSI Experimental Setup
Appendix J: MATLAB Code for the Localization Experiment
Appendix K: List of Publications and Submitted Papers
References
Acronyms
ADC Analog to Digital Converter
BoM Bill of Materials
BLE Bluetooth Low Energy
BSA Backtracking Search Algorithm
CDS Connected Dominating Set
EA Evolutionary Algorithm
FHT Fast Hartley Transformation
FSM Finite-State Machine
GA Genetic Algorithm
GPIO General Purpose Input/Output
GPS Global Positioning System
GUI Graphical User-Interface
HSI Human-Swarm Interaction
I2C Inter-Integrated Circuit
IDE Integrated Development Environment
IMU Inertial Measurement Unit
IR Infrared
Li-Po Lithium-Polymer
LoA Level of Autonomy
MEMS Microelectromechanical Systems
MRS Multi-Robot Systems
PFSM Probability Finite-State Machine
PID Proportional-Integral-Derivative
PSO Particle Swarm Optimization
PWM Pulse Width Modulation
RL Reinforcement Learning
SR Swarm Robotics
SoC System-on-Chip
SPI Serial Peripheral Interface
TWI Two-Wire Interface
UART Universal Asynchronous Receiver/Transmitter
UAV Unmanned Aerial Vehicle
UI User-Interface
UV Unmanned Vehicles
WSN Wireless Sensor Network
Nomenclature
φ̂ Estimated bearing value of the nearby robot with respect to its heading
𝛽𝑖 Angular distance between the 𝑖𝑡ℎ sensor and the robot’s heading
ρ𝑖 Scaled IR intensity reading of the 𝑖𝑡ℎ sensor
𝑟𝑖𝑗 Proximity of robot 𝑗 to robot 𝑖, as observed by robot 𝑖
𝜃𝑖𝑗 Bearing of robot 𝑗 in the local reference frame of robot 𝑖
𝛼𝑖 Transformation of data set 𝑖 from the local reference frame to the global reference
frame
𝑿𝑖𝑗 Cartesian coordinates of robot 𝑗 as observed by robot 𝑖 in the common global reference
frame
𝑎𝑖 Summed normalized IR reading values from the IR detectors
𝐼𝑅𝑖,𝑗 Raw IR reading values for the 𝑖𝑡ℎ channel and 𝑗𝑡ℎ IR detector
𝐼𝑅max 𝑖,𝑗 Maximum raw IR reading found during calibration at close distance (i.e., 50 mm)
𝐼𝑅′𝑖,𝑗 Normalized IR reading value
𝑙𝑑𝑙𝑖𝑚𝑖𝑡 Summed value of the normalized IR reading values
𝑭𝑎𝑡𝑡 Artificial attractive force
𝜁 Scaling factor for the attractive force
𝑑𝑙𝑖𝑚𝑖𝑡 Distance limit on how close the robot can get to another robot
𝑑𝑐𝑙𝑜𝑠𝑒 Transitioning value from conic to parabolic artificial potential well
𝑑𝑓𝑎𝑟 IR sensing radius limit of the robot
𝑭𝑜𝑏𝑠 Artificial repulsive force
𝜂 Scaling factor for the repulsive force
𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 Threshold distance value where the robot tries to avoid obstacles
𝑭𝑝𝑒𝑟𝑝 Next movement vector in chain formation
c𝜃 Cosine function of 𝜃
s𝜃 Sine function of 𝜃
𝜌𝑤 Weight parameter for the perpendicular movement vector
𝑭𝒐 Repulsive forces for obstacles
𝑭𝒏 Repulsive forces for nearby mobile robots
𝑘𝑜 Scaling factor for the repulsive force of nearby obstacles
𝑘𝑛 Scaling factor for the repulsive force of other nearby robots
𝑑𝑜,𝑖 Relative distance between the obstacles
𝑑𝑛,𝑖 Relative distance between the robots
𝜃𝑑 Angle of deviation in the end result of the calibration iteration
𝜃𝑖𝑛𝑖𝑡𝑖𝑎𝑙 Initial global orientation of the robot for the calibration
𝜃𝐿 Total radians of the robot’s left wheel that turned during the calibration
𝜃𝑅 Total radians of the robot’s right wheel that turned during the calibration
𝑙𝑎𝑥𝑙𝑒 Length between left and right wheel of the robot
𝐷𝐿 Diameter of the left wheel of the robot
𝐷𝑅 Diameter of the right wheel of the robot
𝜔𝐿 Angular velocity of the left wheel of the robot during calibration
𝜔𝑅 Angular velocity of the right wheel of the robot during calibration
𝑇𝑡𝑒𝑠𝑡 Total time duration of the calibration
𝑇𝑟 Transformation matrix of the robot in SE(2)
𝑣1 Next acceleration value of the robot in the x direction for the PID controller
𝑣2 Next acceleration value of the robot in the y direction for the PID controller
𝑧2𝑑̇ Next desired acceleration value of the robot in the x direction for the PID controller
𝑧4𝑑̇ Next desired acceleration value of the robot in the y direction for the PID controller
𝑧1 Current x position in the global coordinate system using the centralized controller
𝑧3 Current y position in the global coordinate system using the centralized controller
𝑧1𝑑 Next desired x position in the global coordinate system using the centralized
controller
𝑧3𝑑 Next desired y position in the global coordinate system using the centralized
controller
𝑧2 Current velocity of the robot in the x direction for the PID controller
𝑧4 Current velocity of the robot in the y direction for the PID controller
𝑘𝑝1 Proportional gain value in the x direction for the PID controller
𝑘𝑝2 Proportional gain value in the y direction for the PID controller
𝑘𝑑1 Derivative gain value in the x direction for the PID controller
𝑘𝑑2 Derivative gain value in the y direction for the PID controller
𝑘𝑖1 Integral gain value in the x direction for the PID controller
𝑘𝑖2 Integral gain value in the y direction for the PID controller
𝑥𝑝 Current x position (pixel units) in the global coordinate system using the overhead
camera and centralized controller
𝑦𝑝 Current y position (pixel units) in the global coordinate system using the overhead
camera and centralized controller
𝑥𝑔 Current x position (pixel units) of the robot’s front green LED in the global
coordinate system using the overhead camera and centralized controller
𝑦𝑔 Current y position (pixel units) of the robot’s front green LED in the global
coordinate system using the overhead camera and centralized controller
𝜃𝑔𝑙𝑜𝑏𝑎𝑙 Global orientation of the robot on the workspace using the overhead camera and the
centralized controller
Chapter 1
Introduction and Motivation
Swarm robotics (SR) is an emerging research area within the field of autonomous multi-robot systems (MRSs), where a swarm may consist of tens, hundreds, or even thousands of simple, decentralized, homogeneous robots that interact only locally [1]–[3]. Swarm robots are simple both in design and in
processing and sensing capabilities when compared with traditional MRS teams. They take inspiration
from social animals that exhibit self-organized behaviours [4]. Self-organized behaviours naturally
exhibit the following desirable properties: (i) robustness to partial failures, (ii) flexibility for adapting
to dynamic environments, and (iii) scalability to allow for changes of group size. Swarm robots found
in the literature range from milli-, to micro-, and even to nano-scale [5].
A number of challenges exist with respect to the development of millirobots. In particular, trade-
offs between size, sensory capabilities, battery life, and processing power need to be made.
Furthermore, as millirobots get smaller, their assembly becomes more complex and most require
custom-made components, including chassis and drivetrains, which can diminish their advantages of
having easy manageability and setup. To date, only a few units are commercially available [6], [7]. As
reported in the literature, although sensory capabilities are usually reduced as the size of the robot
decreases, smaller millirobots can operate collectively in larger quantities and in relatively small areas.
Communication and data transmission present another challenge in the design of millirobots. Many of the millirobots reported in the literature use only low-power IR communication, which is simple to implement but subject to slow data-transfer speeds and limited range. Robots that are modular in design
have attachable modules with radio communication that allow for alternative communication protocols
such as ZigBee® [8].
In order to successfully maintain a desired swarm topology, localization is a necessary task, and it
is preferably carried out in an on-line manner either through external [9], [10], or on-board sensors
[11]–[45]. Range-based methods are often needed for localization; these require the robots in the
swarm to directly measure their relative distance and bearing with respect to other neighbouring robots
[11]. The majority of localization algorithms that can be used for SR require anchors with known global
coordinates obtained through external sensors such as overhead cameras and global positioning
systems (GPS) [12]–[18], or that the robots be in motion [19]–[22]. Furthermore, only a few have
implemented the localization algorithms with physical robots and, even then, restricted the set-ups to
two or three robots at most [14], [20].
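Once each robot's frame rotation to a common frame is known, such range and bearing measurements can be mapped into shared Cartesian coordinates. The following is a minimal sketch using the symbols 𝑟𝑖𝑗, 𝜃𝑖𝑗, and 𝛼𝑖 defined in the Nomenclature; the function name is hypothetical and is not part of any algorithm discussed in this thesis:

```python
import math

def to_global_frame(r_ij, theta_ij, alpha_i):
    """Convert robot i's (range, bearing) observation of robot j into
    Cartesian coordinates in the common global frame.

    r_ij:     proximity of robot j, as observed by robot i
    theta_ij: bearing of robot j in robot i's local reference frame
    alpha_i:  rotation taking robot i's local frame to the global frame
    """
    # Rotating the local bearing by alpha_i expresses the observation
    # in the shared global frame; the range is rotation-invariant.
    x = r_ij * math.cos(theta_ij + alpha_i)
    y = r_ij * math.sin(theta_ij + alpha_i)
    return x, y
```

The returned pair corresponds to the coordinates 𝑿𝑖𝑗 in the common global reference frame.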
Despite the need for an effective swarm millirobot design along with a localization algorithm in
order to enable HSI in real-world SR applications, such designs and algorithms remain sparse in the literature. An effective swarm millirobot design should have maximum modularity,
maximum use of off-the-shelf components, and rich processing and sensing capabilities. Current works
mostly focus on the design and implementation of a single collective behaviour algorithm or
completion of a single task such as foraging [1]–[3], [23]. In addition, these works do not consider the
actual design of the physical robots, nor do they consider enabling HSI when designing the swarm
system. This thesis focuses on a new design of a swarm millirobot along with a new localization
method to estimate swarm topology in order to enable HSI.
1.1 Design of Swarm Millirobots
The major hardware design goals for mobile swarm millirobots include [1], [24]–[28]:
1. Minimum footprint area to allow a large number of units to work in small arenas,
2. Easy assembly for mass production and easy maintenance,
3. Maximum processing and sensing capabilities to run a variety of collective behaviour
algorithms in real-time, and,
4. Utilization of hands-free technology for scalable operation and easy manageability.
Researchers have also noted the need for modular swarm robot designs [21], [25], [26], [40], but many existing millirobots have integral designs, which makes future upgrades in both hardware and software, as well as experimentation with varying swarm scenarios, difficult. Modular design allows uncertainties to be managed when future changes and upgrades are needed [32]. The number and types
of sensors as well as the type of communication to be implemented onto swarm millirobots must be
based on the basic required functionalities of the robot [27]. The swarm control system analyzes the data received from the sensors and produces the required global behaviours through the robot's actuators and wireless communication [33]–[35].
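As an illustrative sketch of such a control system, simple collective behaviours are often encoded as probabilistic finite-state machines (PFSMs, listed in the Acronyms). The states and transition probabilities below are assumptions chosen purely for illustration, not any specific robot's controller:

```python
import random

def pfsm_step(state, neighbour_count):
    """One update of a hypothetical probabilistic finite-state machine
    for an aggregation-style behaviour: the robot searches for peers
    and tends to rest when it senses local crowding."""
    if state == "search":
        # Probability of stopping grows with the number of sensed
        # neighbours, saturating at 1.0 (constants are illustrative).
        p_stop = min(1.0, 0.2 * neighbour_count)
        return "rest" if random.random() < p_stop else "search"
    if state == "rest":
        # Small fixed chance of resuming the search each step.
        return "search" if random.random() < 0.05 else "rest"
    return state  # unknown states are left unchanged
```

Each robot would run such a step in its sense-act loop, with the global behaviour emerging from many robots executing the same local rule.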
Small-scale robots range in size from mm³ down to μm³ and have been categorized as milli-, micro-, or even nano-robots [5]. Currently existing swarm millirobots with a larger footprint area (greater than 75×75 mm²) [31], [36]–[38] typically have superior capabilities to those of smaller millirobots [8], [26], [28]–[30], [39]–[42] (e.g., more powerful processors, larger battery capacities, and increased sensing capabilities). They are typically designed for testing a few collective behaviour algorithms in lab testbed settings.
Examples of larger millirobots (above a 120×120 mm² footprint) include the E-Puck [31] and SwarmBot [36], which have onboard cameras and Linux-based operating systems to accomplish complex tasks. Another larger millirobot is the R-One [37], [38], which has static grippers that can lock onto other R-One robots as well as onto oversized objects, and can connect to an automated self-charging station.
As reported in the literature, although sensory capabilities are usually reduced as the size of the
robot decreases, smaller millirobots can operate collectively in larger quantities and in relatively small
areas. In [39], for example, the millirobot, Robomote, was equipped with a compass and distance
sensor for moving to different locations autonomously in order to enable mobility in a large wireless
sensor network (WSN).
Examples of smaller millirobots targeted for swarm behaviour studies include Alice II [8], Jasmine
[40], AMiR [26], Wanda [41], TinyTeRP [30], [43], and GRITSBot [29]. Alice II can operate for up to ten hours, and a linear camera can be attached to it as an additional sensor. Jasmine is equipped with six pairs of IR emitters and receivers placed near the outer edges of the robot's chassis for all-around coverage of IR sensing and communication. Wanda and AMiR include software development
tools for both low- and high-level control. TinyTeRP has an all-terrain add-on, where tracked wheels
can be used for movement. GRITSBot has three stackable layers that consist of varying sensors and modules. Two other smaller millirobots, Kilobot [28] and Colias [42], have also been designed for swarm-intelligence research but are limited in their modularity. Kilobots can operate in teams of more than a thousand units, programmed via an over-the-air controller. Colias can reach speeds of up to 350 mm/s, which allows rapid movement over large, flat areas.
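Several of these designs estimate a neighbour's bearing from a ring of IR detectors. Below is a minimal sketch of one common approach, an intensity-weighted circular mean, using the symbols 𝛽𝑖 (detector angle relative to the heading) and ρ𝑖 (scaled intensity reading) defined in the Nomenclature; the specific weighting scheme is an illustrative assumption rather than any particular robot's implementation:

```python
import math

def estimate_bearing(beta, rho):
    """Estimate a neighbour's bearing (phi-hat) from a ring of IR
    detectors via an intensity-weighted circular mean.

    beta: angle of each detector relative to the robot's heading (rad)
    rho:  scaled IR intensity reading of each detector
    """
    # Summing unit vectors weighted by intensity, then taking atan2,
    # avoids wrap-around problems at the +/-pi boundary that a plain
    # weighted average of angles would suffer from.
    x = sum(r * math.cos(b) for b, r in zip(beta, rho))
    y = sum(r * math.sin(b) for b, r in zip(beta, rho))
    return math.atan2(y, x)
```

For example, two equally bright detectors at 45° and 135° yield an estimated bearing of 90°, directly between them.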
1.2 Human-Swarm Interaction
The major difference between conventional MRSs and swarm systems is that a swarm can operate with a high level of autonomy (LoA), where the control algorithms used are often inspired by nature.
Furthermore, only local interactions are needed in swarm systems for a desired global behaviour to
emerge which removes the requirement of a centralized controller [44]. In short, a swarm acts as a
single entity (i.e., a superorganism), whereas conventional robots in multi-unmanned-vehicle (UV) systems act individually [23]. This raises several challenges that are not addressed in the human-robot interaction (HRI) literature for traditional MRSs.
Although swarm millirobots are expected to exhibit a high LoA [45], several benefits exist in having a human supervisor monitor and control the swarm robots, thereby enabling HSI [23], [44]. These benefits include: (i) mitigating shortcomings that the swarm millirobots cannot handle due to their limited capabilities, (ii) increasing the overall performance of the swarm by using strategies that cannot be explicitly programmed into the robots when completing global missions, and (iii) providing updated information whenever the global mission changes. To enable HSI, different localization algorithms [15], [46], [47] and
communication network configurations [48]–[50] must be considered and implemented. Furthermore,
hardware design must be considered in parallel when enabling HSI on these swarm millirobots [27].
The concept of HSI has only recently been explored and several challenges have been addressed in
the HSI domain [23]. In [44], command complexity was discussed with regard to the human supervisor's cognitive limits: a human supervisor can only control a certain number of robots before control performance significantly decreases [23]. In [51] and [52], the concept of neglect benevolence was introduced, which states that the human supervisor should not provide multiple directives to the swarm robots until the swarm converges to a stable state, so as to prevent any degradation of swarm performance. In [48], dynamic selection of human-controlled robot leaders to influence the swarm
was used to address scalability and communication latency. In [53]–[56], different design approaches
for user-interface (UI) to monitor and command the swarm as well as traditional MRS were discussed
and showcased.
It is clear, then, that several benefits exist for having a human supervisor monitor and direct swarm
robots. However, in order for a human supervisor to interact and work with a swarm of robots,
localization of the robots appears to be one of the essential components needed since the human
supervisor cannot provide any meaningful feedback without knowing the states of the robots. Other
essential components for enabling HSI include, but are not limited to, sufficient bandwidth and latency
for remote communication, streamlined UI that prevents information overload, and a decentralized
network for enabling scalability.
1.3 Problem Statement and Thesis Objectives
The literature suggests the need for the design of an effective swarm millirobot made using only off-the-shelf, easy-to-source components, with a high level of modularity, a minimum footprint area, and a high level of sensing and processing capabilities. Furthermore, several benefits exist for these
newly designed robots by enabling HSI where the robots are monitored and influenced by a human
supervisor. Thus, the primary objectives of this thesis are the following:
1. To develop and implement an effective swarm millirobot design for swarm behaviour studies,
and
2. To design and validate a swarm localization algorithm for estimating swarm topology in order
to enable HSI.
1.4 Proposed Methodology and Thesis Organization
This thesis describes the development, implementation, and evaluation of an effective swarm
millirobot design for swarm behaviour studies as well as a novel localization system which can estimate
swarm topology for enabling HSI.
Chapter 2 reviews the literature on currently existing millirobot designs used for swarm robotics
research as well as different aspects of a swarm millirobot design with regards to hardware and swarm
control system. The hardware aspects include (i) processing and sensing capabilities, (ii) robot
actuation, and (iii) wireless communication.
Chapter 3 presents a novel swarm millirobot design to be used for swarm behaviour studies in order
to test the feasibility of various swarm algorithms for future applications. The newly proposed swarm
millirobot, named mROBerTO (milli-robot-Toronto) [57], has a modular design making use of easy-to-source, off-the-shelf components. mROBerTO also includes a variety of sensors (proximity, inertial measurement unit (IMU), compass, ambient light, camera, and infrared (IR)) and has a 16×16 mm² footprint. mROBerTO's communication capabilities include ANT™, Bluetooth® Smart, or both
simultaneously. Processing is handled by an ARM processor with 256 KB of flash memory. The
millirobot supports over-the-air programming via Bluetooth®. Results for performance benchmark and
swarm experiments that were conducted are provided.
Chapter 4 reviews the literature on HSI as well as localization in SR. Several different methods of
currently existing swarm localization are discussed as well as their limitations in real world
applications.
Chapter 5 presents the novel localization algorithm for estimating swarm topology without the use
of any external sensors such as overhead cameras. The localization algorithm is evaluated using
mROBerTOs. Unlike many of the localization algorithms found in the literature for swarm robotics, this algorithm does not require knowledge of the global coordinates of any of the robots.
Finally, Chapter 6 presents concluding remarks on the developed and implemented mROBerTO
and the localization algorithm for HSI as well as future recommendations.
Chapter 2
Review on Currently Existing Swarm Robots and Designs
Small-scale robots are, typically, expandable in design, rapidly constructible, and easy to manage with regard to maintenance and setup. They can operate individually or collectively using swarm intelligence [58]–[60].
Millirobot applications include WSNs [61], [39], micro-assembly [62], medicine [63], [64], urban
search and rescue [65], as well as surveillance [66]. In [61], the challenge of reducing overall power
consumption of a network was addressed, where millirobots were used as mobile nodes within a large
WSN. In [62], a single robot was able to move with micron precision to show potential use in micro-
manufacturing, where end-effectors grasp components that are μm in length and assemble them. In
[64], a pill-sized robot, with a camera and wireless communication module, was used as an endoscope
for minimally-invasive treatment. In [65], the potential use of millirobots in urban search and rescue
was discussed, where small robots travel through tiny cracks in collapsed buildings to search for
survivors. In [66], different types of millirobot locomotion were compared and the potential use of
these robots for surveillance was highlighted.
As mentioned previously in Subsection 1.1, currently existing millirobots for swarm behaviour
studies with larger footprint areas (greater than 75×75 mm²) [31], [36]–[38] typically have superior
capabilities compared with smaller millirobots [8], [26], [28]–[30], [39]–[42]. They are designed for testing a few
collective-behaviour algorithms in lab testbed settings, without giving much consideration to the
hardware aspects of the robot design. Furthermore, they typically have integral (i.e., non-modular)
designs with custom-made components, which also makes assembly and maintenance more complex [32].
2.1 Hardware Design
In addition to the major hardware design goals mentioned in Subsection 1.1, there are other hardware
aspects of the millirobot that must be considered during the design phase. These aspects include (i)
the processing and sensing capabilities that will be required to complete a desired task, (ii) robot actuation,
which involves the type of locomotion the robot will have, and (iii) wireless communication
technology, which will enable the exchange of information between neighbouring robots.
2.1.1 Processing and Sensing Capabilities
As noted in [67], the type of processor (i.e., speed and architecture, as well as embedded peripherals
such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), etc.) determines the types of
sensors allowed on the robots. Thus, when designing and implementing a swarm millirobot, careful
selection of the processor is required in order to meet the specifications of the intended use of these
swarm millirobots. Furthermore, if modularity is one of the goals of the robot design, a scalable
communication interface, such as the two-wire interface (TWI), that can communicate with different types of
sensors and even different microprocessors should be made available to all modules of the robot [29],
[43].
Sensors such as IMUs and proximity sensors may be considered standard on robots when they are
utilized for basic functionalities such as moving and obstacle avoidance [67]. Furthermore, in order for
swarm robots to exhibit collective behaviours, these robots will be required to estimate both the bearings
and proximities of neighbouring robots [1]–[3]. Additional sensors for extended capabilities, such as
cameras, ambient-light sensors, and magnetometers, may be installed on the robots to experiment with a
variety of swarm test-case scenarios.
However, currently existing swarm millirobots have limited processing and sensing capabilities,
often sacrificing such capabilities to further reduce the overall size of the robots. For example, in the case of
[8], [28], [40], one clear limitation is that a single 8-bit processor at a relatively low clock speed (i.e.,
less than 8 MHz) handles all the sensing hardware of the robot, and even actuation tasks, which leaves
almost no processing capacity for other activities. This prevents researchers from running a
variety of swarm scenarios in their experiments, and the swarm robots are limited to running simple
swarm algorithms. Furthermore, some swarm millirobots, namely TinyTeRP [43] and Kilobot [28],
can only measure the proximities of nearby robots and not their bearings, which makes it difficult for collective
behaviours to emerge and typically requires more complex algorithms to achieve swarm
capabilities. Lastly, as mentioned above, the majority of small (i.e., less than 75×75 mm²) swarm
millirobots [8], [28], [29], [40], [42] utilize 8-bit processors that are limited both in speed and in
embedded peripherals, and that require additional clock cycles for simple operations, such as
multiplication, when compared with a 32-bit processor.
2.1.2 Robot Actuation
Another hardware design consideration that must be taken into account when designing swarm
millirobots is the type of actuation to be implemented on the robots. There are three common types of
actuation found in SR for mobile swarm millirobots: wheeled, tracked, and leg-based [27]. Wheeled
robots are the most common among millirobots due to their ease of implementation. However, these are typically
only used in lab settings on flat workspace floors [1]–[3]. Tracked millirobots take up more volume
than wheeled robots, but are more robust to the types of workspace floors they can operate on [8], [36],
[43]. Leg-based robots [68], [69] are harder to implement and maintain, and may require specialized
manufacturing technology, such as microelectromechanical systems (MEMS), for them to be used
on small (less than 4×4 mm²) millirobots [70]. There exist other types of actuation, such as the two
vibration motors utilized on Kilobots, which provide a differential-drive configuration and are considered
the cheapest and easiest to implement [28]. However, they can only operate on smooth, low-
friction floors (e.g., whiteboard and acrylic glass), and their movement cannot be controlled easily due
to the nonlinear behaviour of the vibration motors. When robots are designed for real-world
applications, their actuation must be robust enough to operate on different types of surfaces
while consuming minimal battery power.
For swarm behaviour studies and research, one of the main objectives of the robot design is to have
a minimal footprint, to allow a large number of units to work in a limited workspace area. Thus, the
actuation component of the robot should be as small as possible while still allowing the robot
relatively accurate movement around the workspace. A small actuation-component footprint with
accurate movement was the objective in [8], [29], [43], where miniaturized motors were utilized in the
robot design.
Smaller swarm millirobots currently in existence either have custom-made components, such as
drivetrains [43], that are susceptible to misalignment, or move around the workspace floor with high
inaccuracy due to an unconventional actuation method [28]. Furthermore, small wheel-based
swarm millirobots [40]–[42] have wheels that require relatively large motors and gears, which leaves
less real estate for processing and sensing hardware, as well as for battery capacity [1].
2.1.3 Wireless Communication
Swarm millirobots need wireless technology to enable communication among neighbouring robots as
well as to provide information and accept directives from the human supervisor (i.e., enable HSI) [23].
Due to the limited battery capacities of these swarm millirobots, the wireless technology must be low-
power and have minimal impact on the processor of the swarm millirobot. Furthermore, the data-
transmission speed of the wireless technology should be relatively fast (e.g., a minimum of 60 kbit/s)
for running real-time experiments and applications. Another requirement is that the wireless technology
must have a minimal footprint, to meet the robot design objective of a small overall size.
As mentioned previously, the majority of currently existing swarm millirobots use low-power IR
emitters and detectors for wireless communication [8], [28], [40]–[42], which is prone to error and
limited in both communication range and data-transfer speed. In short, IR data transmission is not
a reliable wireless communication technology. In light of these constraints, swarm millirobots such as
TinyTeRP [43] and GRITSBot [29] presented a potential solution, where the robots utilized RF
technologies (i.e., ZigBee® and Wi-Fi) for wireless communication.
Several commercially available low-power wireless communication modules exist that can meet
the abovementioned requirements, with point-to-point and mesh-network capabilities as well as
relatively fast data-transmission speeds. The two main types of commercially available wireless
technologies are IR (e.g., IrDA®) and RF (e.g., Bluetooth®, ZigBee®, and ANT™). Any wireless
communication technology that allows mesh-network capability on the swarm robot may be used to
communicate with the human supervisor in a decentralized fashion for enabling HSI [23].
2.2 Swarm Algorithms and Control System
Swarm millirobots are expected to exhibit collective behaviours using various collective (swarm)
algorithms. As detailed in [1], [2], there are three primary methods to implement swarm
behaviour: probabilistic methods, artificial-physics methods, and evolutionary methods. In the
probabilistic methods, the robot's next behaviour is partially determined in a random manner and
dictated by interactions with nearby robots and its environment. These methods are often used with
finite state machines (FSMs) as well as probabilistic FSMs (PFSMs). They accurately model the
behaviours of social insects, such as honeybees and cockroaches [71].
In artificial-physics methods (also referred to as physicomimetics), specifically for MRSs, virtual
forces are utilized by the robots to determine their next movement vectors [72]. Formations observed
in nature, such as flocks of birds, schools of fish, and swarms of insects, can be modelled by
attractive and repulsive forces.
In evolutionary methods, robot controllers are designed based on artificial evolution techniques
[73], such as genetic algorithms [74], [75], and q-tournament selection [76]. Fitness functions are often
employed, where robots are assigned a higher (better) fitness value if the desired behaviour is exhibited.
The evolution cycle continues, with the fitness functions re-evaluating the fitness levels of all robots,
until the global task given to the swarm is completed.
Approaches to designing swarm robot control systems have been divided into two categories:
behaviour-based and automatic [1]. A behaviour-based swarm control system is best suited for robots
that must operate in a dynamic environment, where real-time responses to deal with unstructured settings
are required [33]–[35]. In contrast, an automatic swarm control system makes use of reinforcement
learning (RL) and evolutionary algorithms (EAs), which require heavy computation and time [77]–[80].
For many potential applications with SR, such as dynamic wireless sensor networks [61], [81],
environmental monitoring [82]–[84], search and rescue [65], and surveillance [66], behaviour-based
approaches are used due to their inherent ability to quickly deal with uncertainty in dynamic
environments [33]–[35].
However, the small swarm millirobots found in the literature can typically only exhibit a single swarm
algorithm, such as aggregation. Furthermore, due to their limited processing and sensing capabilities,
it is difficult, if not impossible, to implement swarm control systems on currently existing
small swarm millirobots.
Next, the design of an effective modular swarm millirobot that addresses some of the
limitations discussed above is presented.
Chapter 3
Designing an Effective Modular Swarm Millirobot
As a contribution to the swarm robotics literature, a novel modular millirobot named milli-robot-Toronto
(mROBerTO), with a 16×16 mm² footprint, was developed and implemented, as shown in Fig. 1.
mROBerTO is novel in that it (a) has a modular design capable of expansion, and (b) makes exclusive
use of off-the-shelf, easy-to-source components. At the time of the writing of this thesis, it is one
of the smallest millirobots in existence with extensive sensing, communication, and processing
capabilities.
Fig. 1. mROBerTO shown next to a Canadian Nickel for comparison.
3.1 Proposed Swarm Millirobot Design
The proposed millirobot for swarm behaviour studies was designed based on three primary objectives:
a) to achieve a high degree of modularity, b) to maximize use of off-the-shelf components for ease of
assembly, production, and maintenance, and c) to occupy a minimum possible footprint area, while
maximizing processing power and sensing capabilities. Modularity is desirable to simplify the
expansion of sensing capabilities and to allow the introduction of improved circuitry in future revisions.
Using commercially available, off-the-shelf components enables easy replication and high-volume
production at low cost. A small footprint area allows researchers to operate large groups of the robots in
a limited workspace area. Lastly, powerful processing and a large number of sensors
allow researchers to run a variety of swarm scenarios during experiments.
3.1.1 Modules of the Robot
The robot has four modules within its 16×16×32 mm³ envelope: the processing and communication
module (referred to as the mainboard herein), the locomotion module, the primary sensing module, and the secondary
sensing module, as shown in Fig. 2. The modules are all structurally connected through soldering,
except for the secondary sensing module, which is connected through header pins for rapid exchange.
The primary sensing module is not soldered in Fig. 2, but soldering this module is recommended if the
camera hardware and front proximity sensor are to be utilized. An overview of mROBerTO's
architecture is given in Fig. 3. Drawings of the robot, the bill of materials (BoM), and the circuit schematics
and layouts for all modules are provided in Appendices A, B, and C, respectively. Detailed
specifications of mROBerTO are provided in Appendix D.
Fig. 2. mROBerTO exploded-assembly (front and back views).
Fig. 3. mROBerTO’s architecture overview.
Mainboard
The processing and communication module, the mainboard, is the central hub of mROBerTO; it is
shown in Fig. 4. All other modules connect to this module. The mainboard carries the Nordic
nRF51422 system-on-chip (SoC), which embeds a 32-bit ARM Cortex-M0 clocked at 16
MHz with 32 KB of RAM, 256 KB of flash memory, and built-in wireless networking capabilities. The wireless
protocols that the SoC can use are ANT™ and Bluetooth® Smart. The nRF51422 has several features
that make it particularly suitable for achieving the design objectives mentioned previously.
One such feature is the nRF51's flexible general-purpose input/output (GPIO) mapping. This
enables any set of GPIOs to be configured as TWI/inter-integrated circuit (I2C) or serial peripheral
interface/universal asynchronous receiver-transmitter (SPI/UART) pins. This feature gives users
flexibility when designing new primary and secondary sensing modules.
Bluetooth® Smart (also known as Bluetooth® Low Energy, or BLE) is a feature of the nRF51422
that allows mROBerTO to utilize high data-throughput sensors, such as an onboard camera, without
excessive energy costs (i.e., 0.153 μJ/bit for 960 usable bits per second [85]). BLE has a high data-
transfer rate (up to 1.0 Mbps) with low power consumption. Additionally, the nRF51422 can be
programmed over-the-air through its BLE interface. This simplifies the setup of the robots, especially
when there are multiple units to program, and adds overall user-friendliness.
Fig. 4. Mainboard module circuit board (back and front views).
Locomotion Module
Below the mainboard, serving as the structural base of the robot, is the locomotion module, presented
in Fig. 5. On the locomotion module there is an H-bridge for controlling two 4 mm Nano Coreless
motors manufactured by Precision Microdrives. The H-bridge is connected to the motors to provide
differential drive. The motor shafts have no wheels on them and are in direct contact with the surface,
removing the need for custom-made wheels and drivetrains. The third point of contact for mROBerTO
is a small (1/8 inch ≈ 3.175 mm diameter) polytetrafluoroethylene ball at the front of the locomotion
module, chosen for its low weight and low coefficient of friction.
With its current specifications (Appendix D), mROBerTO is capable of moving at speeds ranging
from 1 mm/s up to 150 mm/s in a straight line, and can turn at approximately 500 deg/s. In order to
reduce slippage, the heaviest components, the lithium-polymer (Li-Po) batteries, were placed
above the motors, as shown in Fig. 2.
There are alternative locomotion methods that achieve differential drive without wheels. One
example is the vibration system used in Kilobot, as mentioned previously [58]. However, past
experience with these systems shows that such methods of locomotion are typically not suitable
for precise movement over large distances, especially when making use of commercially available,
low-end vibration motors. This is due to the nonlinear behaviour of the locomotion and the random noise
added by slippage in various directions.
Fig. 5. Locomotion module circuit board (top and bottom views).
Primary Sensing Module
The configuration chosen for the primary sensing module allows a variety of sensors to be mounted on
this front-facing module. In the design presented herein, mROBerTO is outfitted with a CMOS camera,
the Toshiba TCM8230MD, and a time-of-flight proximity sensor, the VL6180X (Fig. 6). The
TCM8230MD can acquire video at 30 fps in VGA resolution, with RGB565 or YUV422 data output,
in parallel over 8 data pins. The VL6180X is a 3-in-1 (proximity sensing, ambient-light sensing, and
laser source) unit (4.8×2.8×1.0 mm³) that can be used for obstacle avoidance.
Fig. 6. Primary sensing module circuit board (front and back views).
Secondary Sensing Module
Presented below are two secondary sensing module designs. The first is designed specifically to allow for
swarm behaviour. It comprises four wide-angle IR emitters and six wide-angle IR phototransistors
on the edges of the module for all-around coverage (Fig. 7). They are mainly used for measuring
the relative distances and bearings of nearby robots (and obstacles), but can also be used for
communication purposes. The module also has a 6-axis IMU, one RGB LED, and 4-pin GPIO
access to the processor. The main processor on the swarm module is the ATmega328P, which interacts
with the main SoC via TWI.
An additional secondary sensing module was developed for centralized control. This design
includes two LEDs (one RGB, one green), a magnetometer, and an IMU with a gyroscope (Fig. 8).
This design allows access to eight GPIO pins on the processor. The LEDs are placed on top of the
robot to work with an overhead camera (external to the robots). The overhead camera can be used
to determine the global position and orientation of the robot on the workspace.
Both secondary sensing modules are attachable to the mainboard through a set of header pins for
easy exchange. This allows the sensing capabilities of the robot to be modified without having to rebuild
the whole unit.
The primary and secondary sensing modules share two GPIO pins from the SoC, allowing for a
single instance of SPI/UART or TWI to communicate with both sensing modules simultaneously. The
locomotion module provides both sensing modules with a 2.8 V power supply from the voltage
regulators located on it.
Fig. 7. Secondary sensing module circuit board for swarm purposes (top and bottom views).
Fig. 8. Secondary sensing module circuit board for centralized control (top and bottom
views).
3.1.2 Sensing and Communication for Collective Behaviour
The ability to locally measure the relative distances and bearings of neighbouring robots is fundamental
for collective behaviour in SR [86]–[89]. For mROBerTOs, the secondary sensing module, with its IR
emitters and detectors, was specifically designed and implemented for this purpose. mROBerTO sends
out modulated IR signals via its IR emitters, achieved using the pulse-width modulation (PWM) feature
of the ATmega328P microcontroller. The modulated IR signals are encoded with the robot's unique
ID; the scheme supports up to 127 unique IDs. The voltage values of the six wide-viewing-angle (120°) IR
phototransistors are constantly read by the ADC block of the microcontroller. The
incoming modulated IR signals are demodulated using the Goertzel algorithm (GA), in software
on the microcontroller, in order to decode the incoming robot's ID and determine the presence of
nearby robots [90].
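The detection step above can be illustrated with a minimal sketch of the Goertzel algorithm (illustrative Python, not the actual ATmega328P firmware; the sample rate, carrier frequencies, and block size below are assumed values):

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Return the squared magnitude of one frequency bin of `samples`
    using the Goertzel recurrence, which is cheaper than a full
    transform when only a few carrier frequencies must be checked."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2       # one MAC per sample
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

# Example: a 5 kHz carrier sampled at 40 kHz registers strongly on its
# own channel but not on a 7 kHz channel (frequencies are assumptions).
fs, n = 40_000, 200
tone = [math.sin(2 * math.pi * 5_000 * t / fs) for t in range(n)]
assert goertzel_power(tone, fs, 5_000) > goertzel_power(tone, fs, 7_000)
```

The recurrence needs only one multiply-accumulate per sample per channel, which is why it outpaces a full transform such as the FHT when only a handful of carrier frequencies must be monitored.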
The bearings of nearby robots are determined by weighting the scaled IR intensities received by
the six IR detectors, as described in [91]. The following equation is used to estimate the bearing of a
nearby robot:
$$\hat{\alpha} = \arctan\left(\frac{\sum_{i=1}^{6} \rho_i \sin(\beta_i)}{\sum_{i=1}^{6} \rho_i \cos(\beta_i)}\right), \tag{1}$$

where $\hat{\alpha}$ is the estimated bearing of the nearby robot with respect to the observing robot's heading, $i$ is the sensor position, $i = 1,\dots,6$, $\beta_i$ is the angular distance between the $i$th sensor and the robot's heading (i.e., $0°, 60°, 120°, \dots, 300°$), and $\rho_i$ is the scaled IR intensity reading of the $i$th sensor.
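As a numerical companion to Eq. (1), the weighted vector-sum bearing estimate can be sketched as follows (illustrative Python; `atan2` is used in place of a plain arctangent so the estimate covers the full 360° range, and the intensity values in the example are made up):

```python
import math

# Angular offsets of the six detectors from the robot's heading (deg).
BETAS = [0, 60, 120, 180, 240, 300]

def estimate_bearing(rho):
    """Estimate a neighbour's bearing (deg) by summing each detector's
    direction vector weighted by its scaled IR intensity, as in Eq. (1)."""
    x = sum(r * math.cos(math.radians(b)) for r, b in zip(rho, BETAS))
    y = sum(r * math.sin(math.radians(b)) for r, b in zip(rho, BETAS))
    return math.degrees(math.atan2(y, x)) % 360.0

# A source midway between the 0° and 60° detectors lights them equally,
# so the weighted estimate lands halfway between them:
print(estimate_bearing([1.0, 1.0, 0.0, 0.0, 0.0, 0.0]))  # ≈ 30°
```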
In order to estimate the relative distances of neighbouring robots, the scaled IR intensities are
summed and used to determine an approximate distance value. A translator from summed scaled
IR intensities to relative distance was created through an offline calibration procedure, where the
summed IR intensities from a detector were recorded at different positions and angles relative to
another robot acting as an IR signal source. The physical arrangement of the calibration process follows
the same procedure that was used in [92] for a modelling test, where values of IR intensities were
recorded at several different positions and angles using one IR emitter and one IR detector in order to
characterize the IR-based localization system. The calibration procedure for the swarm modules can
be conducted up to a maximum distance, between the centre of the IR emitter and the centre of the IR
detector, defined to be 125 mm in this thesis, which is approximately the end of the detection
range of the IR detectors. Appendix E has more details regarding the IR sensors and the procedures for
estimating both the proximities and bearings of nearby robots.
mROBerTO's current microcontroller, with the use of the GA, can read up to 105 bits per second (bps)
on each of two separate IR channels (210 bps in total). In order to enable the use of multiple IR channels and
to remove noise from the environment, the IR signals are modulated using amplitude modulation
(amplitude-shift keying).
Multiple channels are desired to allow for scalable communication among robots in close
proximity while increasing the data-transmission throughput on the same type of network [93].
However, the main means of communication among robots remains BLE and ANT™ wireless
communication, for more reliable and larger transfers of data. Lastly, the power of the wireless
transmission, for both BLE and ANT™, can be controlled through software and changed dynamically
during runtime, over the range −40 dBm to +4 dBm. This is useful if the
communication radii of one or more robots need to be changed for testing a specific swarm scenario,
and to save power during deployment if the robots become idle.
Several alternative approaches and designs were considered before reaching the final configuration
of the secondary sensing module for IR communication described above. Firstly, instead of the GA, the
Fast Hartley Transform (FHT) [94] was initially utilized for IR communication demodulation.
Although the FHT provides better overall amplitude resolution than the GA, the achievable
update frequency with the FHT is only 3 Hz, compared with the 105 Hz achievable with the GA. To increase
the overall performance of the GA and FHT implementations, fixed-point numbers, 12 bits in size, were
used on the microcontroller instead of floating-point numbers (32 bits long) in order to utilize the 16-bit integer
multiplier hardware in the AVR processor. Experiments showed that, by making use of the hardware
multiplier, the update frequency could almost be doubled. Secondly, the original processor clock speed
of the ATmega328P, located on the secondary sensing module for swarm purposes and used to carry
out IR communication demodulation, was set to 8 MHz. In order to increase the performance of this
module, without the need for additional components, the processor clock was overclocked from 8 to
12 MHz. Consequently, however, a new calibration process needed to be carried out to ensure that the
internal clock speed is indeed set to 12 MHz. The source code for the FHT and GA can be found in
Appendix F.1.
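The fixed-point trick can be sketched as follows (illustrative Python; the Q-format split of the 12-bit values is an assumption, as the thesis does not specify it, and the actual firmware performs these integer operations on the AVR's hardware multiplier):

```python
FRAC_BITS = 8            # assumed Q4.8 split of a 12-bit value

def to_fixed(x):
    """Quantize a float into a fixed-point integer with FRAC_BITS
    fractional bits."""
    return int(round(x * (1 << FRAC_BITS)))

def fixed_mul(a, b):
    """Multiply two fixed-point values and rescale the product back to
    the same Q-format, using integer arithmetic only."""
    return (a * b) >> FRAC_BITS

def to_float(x):
    """Convert a fixed-point integer back to a float for inspection."""
    return x / (1 << FRAC_BITS)

a, b = to_fixed(1.5), to_fixed(2.25)
print(to_float(fixed_mul(a, b)))   # → 3.375
```

Because the product and rescale use only integer multiplies and shifts, each Goertzel update avoids the software floating-point routines that would otherwise dominate the 8-bit AVR's cycle budget.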
Since the voltage sources for the primary and secondary sensing modules only provide up to 2.8 V,
adding a 16 MHz external oscillator for a faster clock speed was not an option; a minimum
of 4.5 V is required to use the 16 MHz external oscillator. Thus, the internal oscillator was overclocked
instead.
Lastly, the first version of the secondary sensing module for swarm purposes consisted of narrow-
angle (i.e., viewing angle of less than 70°) IR phototransistors for detecting the IR signals. However,
as stated in [40], dead-zone issues were noted, where IR signals arriving at angles between two IR
phototransistors were not detected due to the narrow viewing angles of the
phototransistors. One solution considered to resolve the dead-zone areas was to add more IR
phototransistors to the original six for all-around coverage. However, several
problems were noted with this approach. Firstly, the ATmega328P only has up to eight ADC input
pins, and two of those eight ADC pins are also configured as the TWI data and clock pins. Thus, unless
a different communication interface is utilized between the ATmega328P and the SoC, the user is left
with six usable ADC pins. Secondly, the ATmega328P is therefore restricted to at most eight IR
phototransistors. Thirdly, adding more IR phototransistors would require extra clock cycles from the
microcontroller to read the additional ADC pins' voltages, as well as to analyze more ADC data. Finally,
with additional IR phototransistors, the IR data-transmission rate would decrease linearly.
To overcome the abovementioned problems, wide-angle IR phototransistors with a viewing angle
of 120° were used. The trade-off of using wide-angle versus narrow-angle IR phototransistors is that the
wide-angle ones generally have a shorter sensing range. However, the wide-angle IR
phototransistors used here could detect neighbouring robots beyond 125 mm, which is more than
enough range for carrying out meaningful collective behaviours.
3.1.3 Software
High-level control functions for the sensors and motors were implemented and packaged into header-
file libraries to serve as the first mROBerTO library. For example, a motor-control function exists that
takes wheel angular velocities as inputs and carries out the low-level instructions. Calibration,
proportional-integral-derivative (PID) motion control, and wireless debugging are all pre-programmed
into the robots' firmware.
Calibration and Angular Velocity to PWM Translator Function
The calibration method from [95] was adopted to calibrate the mROBerTOs. The purpose of the calibration
is to create a translator function for translating the angular velocities of the left and right wheels into PWM
duty-cycle percentage values. This is needed because the PID controller outputs angular velocities for each
wheel, but the robot can only use PWM values to control the left and right motors, and in turn the
wheels. The calibration requires a workspace for the robots to move on, as well as an overhead camera
that monitors and tracks the robots on the workspace. This overhead camera is connected to a host
computer, which runs the calibration program and records the tracking data.
The procedure for calibrating the mROBerTOs is as follows. A single
mROBerTO is first placed on the workspace at a designated position and orientation. Next, on the host
computer running the calibration program, the user inputs the PWM duty-cycle percentages for the left
and right wheels, as well as the forward-movement duration in seconds. Once the
PWM percentage values of the two wheels and the forward-movement duration have been
entered, the host computer sends this information to the robot on the workspace via Bluetooth®.
When the robot receives this information, it begins moving forward with the set PWM duty-cycle
percentage value for each wheel for the user-specified duration. The host computer uses the
overhead camera to record the movement of the robot once per second, from the beginning to the end
of the robot's movement. Once the robot stops and the overhead camera stops recording, the beginning
position and orientation, as well as the final position of the robot on the workspace, are logged. Using this
logged data and the total duration of the robot's movement, the length of the actual path of the robot
can be determined, which in turn can be used to calculate the angular velocities of the left and right wheels.
This procedure is repeated for $N$ iterations (i.e., $N = 10$ for this thesis) with
the same PWM duty-cycle percentage values, and the average value is used in the translator function.
Fig. 9 provides an overview of the procedure for calibrating a single PWM duty-cycle
percentage.
As mentioned above, the logged data can be used to estimate the angular velocities of the left and
right wheels for the specified PWM duty-cycle percentage. Fig. 10 shows a top view of the robot during
the calibration procedure, where it starts at position $(x_i, y_i)$ with initial orientation $\theta_{\text{initial}}$, and stops at
position $(x_f, y_f)$ at the end of the calibration iteration. The first step in determining the angular
velocities from the logged data is to calculate the total angle of deviation, $\theta_d$, at the final position, as
well as the total distance, $d$, that the robot would have travelled had it moved perfectly in a straight line without any
deviation (Fig. 10). These two values can be calculated with the following equations:
$$\theta_d = \operatorname{atan2}(y_f - y_i,\; x_f - x_i) - \theta_{\text{initial}}, \tag{2}$$

$$d = \sqrt{(x_f - x_i)^2 + (y_f - y_i)^2}\,\cos(\theta_d), \tag{3}$$
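Equations (2) and (3) can be exercised with a small sketch (illustrative Python; the pose values in the example are arbitrary):

```python
import math

def deviation_and_distance(xi, yi, xf, yf, theta_initial):
    """Compute the deviation angle theta_d (Eq. 2, rad) and the
    projected straight-line distance d (Eq. 3, same units as the
    positions) from one logged calibration iteration."""
    theta_d = math.atan2(yf - yi, xf - xi) - theta_initial
    d = math.hypot(xf - xi, yf - yi) * math.cos(theta_d)
    return theta_d, d

# Robot initially pointed along +x, drifting 30° left over a 100 mm chord:
theta_d, d = deviation_and_distance(
    0.0, 0.0,
    100.0 * math.cos(math.radians(30)),
    100.0 * math.sin(math.radians(30)),
    0.0)
```

Here `theta_d` comes out to 30° and `d` is the chord length scaled by cos(30°), i.e., the component of the travelled chord along the initial heading.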
Fig. 9. Calibration procedure for mROBerTO for determining the relationship between angular
velocity and PWM for a desired PWM duty-cycle percentage; $N = 10$ for this thesis.
Fig. 10. Top view of the robot moving forward with user-specified PWM duty-cycle
percentages for the left and right wheels. The robot begins at position $(x_i, y_i)$ with orientation $\theta_{\text{initial}}$, and
stops at position $(x_f, y_f)$.
Once the values of $\theta_d$ and $d$ are determined, the total radians turned by the left and right wheels, $\theta_L$ and $\theta_R$ respectively, can be calculated. For the case where the right wheel turned faster than the left wheel (i.e., $\theta_d$ is greater than 0), the following equations are used:

$$\theta_L = \frac{d}{D_L \pi} \cdot 2\pi, \tag{4}$$

$$\theta_R = \frac{D_L \theta_L}{D_R} + \frac{4\, l_{\text{axle}} |\theta_d|}{D_R}, \tag{5}$$

where $D_L$ and $D_R$ are the diameters of the left and right wheels (i.e., 0.8 mm for mROBerTO) and $l_{\text{axle}}$ is the distance between the two wheels (i.e., 16 mm for mROBerTO). For the case where the left wheel turned faster than the right wheel (i.e., $\theta_d$ is less than 0), the following equations are used instead:

$$\theta_R = \frac{d}{D_R \pi} \cdot 2\pi, \tag{6}$$

$$\theta_L = \frac{D_R \theta_R}{D_L} + \frac{4\, l_{\text{axle}} |\theta_d|}{D_L}. \tag{7}$$

Finally, once the total radians turned have been determined, the angular velocities of the left and right wheels, $\omega_L$ and $\omega_R$, can be calculated as

$$\omega_L = \frac{\theta_L}{T_{\text{test}}}, \qquad \omega_R = \frac{\theta_R}{T_{\text{test}}}, \tag{8}$$

where $T_{\text{test}}$ is the total calibration period of one iteration, in seconds. Fig. 11 shows one iteration of the calibration with the PWM duty-cycle percentage set to 20% for both the left and right wheels, as well as the recorded data for the iteration.
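The per-wheel computation of Eqs. (4)–(8) can be sketched as follows (illustrative Python, assuming the equal 0.8 mm shaft diameters and 16 mm axle length given in the text; with equal diameters, the faster wheel simply gains a correction term proportional to |θ_d|):

```python
D_WHEEL = 0.8    # shaft ("wheel") diameter, mm
L_AXLE = 16.0    # distance between the two shafts, mm

def wheel_angles(d, theta_d):
    """Total radians turned by the (left, right) wheels for one
    calibration run with projected distance d (mm) and deviation
    theta_d (rad), assuming equal wheel diameters D_WHEEL."""
    base = 2.0 * d / D_WHEEL                 # (d / (D*pi)) rev -> rad
    correction = 4.0 * L_AXLE * abs(theta_d) / D_WHEEL
    if theta_d > 0:                          # right wheel was faster
        return base, base + correction
    return base + correction, base           # left wheel was faster

def wheel_velocities(theta_l, theta_r, t_test):
    """Average angular velocities (rad/s) over one iteration (Eq. 8)."""
    return theta_l / t_test, theta_r / t_test

# A 100 mm run that drifted 0.1 rad to the left (right wheel faster):
tl, tr = wheel_angles(100.0, 0.1)
wl, wr = wheel_velocities(tl, tr, 10.0)
```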
For mROBerTO, PWM duty cycle percentages from 5% to 25%, in increments of 2.5%, were recorded in order to build the translator from angular velocity to PWM duty cycle percentage. Appendix F.2 provides the source code of the angular-velocity-to-PWM translator function used in the firmware of the robot's microcontroller (nRF51422). As seen in the source code, the translator is a linear interpolation between the recorded angular velocity values for each wheel and the corresponding PWM duty cycle percentage values.
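A minimal sketch of such a linear-interpolation translator is given below. The calibration table values here are placeholders, not mROBerTO's measured data; the actual translator is in Appendix F.2.

```python
# Calibration table: (angular velocity in rad/s, PWM duty %) pairs, sorted
# by angular velocity. These numbers are illustrative placeholders only.
CAL = [(5.0, 5.0), (12.0, 10.0), (20.0, 15.0), (27.0, 20.0), (33.0, 25.0)]

def omega_to_pwm(omega, table=CAL):
    """Linearly interpolate a PWM duty-cycle percentage for a desired
    angular velocity, clamping at the edges of the calibrated range."""
    if omega <= table[0][0]:
        return table[0][1]
    if omega >= table[-1][0]:
        return table[-1][1]
    # find the bracketing calibration points and interpolate between them
    for (w0, p0), (w1, p1) in zip(table, table[1:]):
        if w0 <= omega <= w1:
            return p0 + (omega - w0) * (p1 - p0) / (w1 - w0)
```

Clamping keeps commanded duty cycles inside the calibrated 5%–25% range, which is one reasonable way to handle requests outside the recorded data.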
Fig. 11. One recorded iteration of mROBerTO calibration at a 20% PWM duty cycle percentage, where (a) shows the starting position, (c) shows the final position, and (d) shows the data recorded for this iteration of the calibration.
PID Controller
A PID controller was implemented on mROBerTO to be used with the overhead camera for centralized motion control, in order to test the path-following performance further explained in Subsection 3.2.2. The overhead camera tracks the robot in the workspace and provides continuous feedback of its current position to the robot for motion control (Appendix G). Only the 2D plane (the x and y axes) is considered for the robot's position, so the robot has three degrees of freedom (DoF): x, y, and θ. Thus, the robot's pose lies in the Special Euclidean group SE(2),

\[ T_r = \begin{bmatrix} \cos\theta & -\sin\theta & x \\ \sin\theta & \cos\theta & y \\ 0 & 0 & 1 \end{bmatrix}. \tag{9} \]
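For illustration, the homogeneous pose matrix of Eq. (9) can be constructed as follows (a small sketch, not part of the thesis firmware):

```python
import math

def se2(x, y, theta):
    """Homogeneous transform of Eq. (9) for the robot's planar pose."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]
```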
The PID controller in continuous time used to move the robot in the workspace is

\[ \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} \dot{z}_{2d} \\ \dot{z}_{4d} \end{bmatrix} + \begin{bmatrix} k_{p1} & 0 \\ 0 & k_{p2} \end{bmatrix} \begin{bmatrix} z_{1d} - z_1 \\ z_{3d} - z_3 \end{bmatrix} + \begin{bmatrix} k_{d1} & 0 \\ 0 & k_{d2} \end{bmatrix} \begin{bmatrix} z_{2d} - z_2 \\ z_{4d} - z_4 \end{bmatrix} + \begin{bmatrix} k_{i1} & 0 \\ 0 & k_{i2} \end{bmatrix} \begin{bmatrix} \int_{t_0}^{t} (z_{1d}(\tau) - z_1(\tau)) \, d\tau \\ \int_{t_0}^{t} (z_{3d}(\tau) - z_3(\tau)) \, d\tau \end{bmatrix}, \tag{10} \]

where v_1 and v_2 are the next acceleration values of the robot in the x and y directions, respectively; ż_2d and ż_4d are the next desired accelerations in the x and y directions; z_1 and z_3 are the current x and y positions determined by the overhead camera; z_1d and z_3d are the next desired x and y positions; z_2 and z_4 are the current x and y velocities; z_2d and z_4d are the next desired x and y velocities; and k_pj, k_dj, and k_ij are the proportional, derivative, and integral gains, respectively.
Since the robot operates in discrete time and continuous controllers cannot be utilized directly, the continuous PID controller was discretized using bilinear discretization [96]. The source code of the discretized PID controller can be found in Appendix F.3.
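One way to sketch the discretization for a single axis is shown below, using the trapezoidal (bilinear) rule for the integral term of Eq. (10). The gains, sample time, and class interface are illustrative assumptions; the actual firmware implementation is in Appendix F.3.

```python
class AxisPID:
    """One axis of the controller in Eq. (10): feedforward desired
    acceleration plus P on the position error, D on the velocity error,
    and a bilinear-rule (trapezoidal) integral of the position error."""

    def __init__(self, kp, kd, ki, Ts):
        self.kp, self.kd, self.ki, self.Ts = kp, kd, ki, Ts
        self.integral = 0.0   # running integral of the position error
        self.prev_err = 0.0   # previous position error sample

    def step(self, z_d, z, zdot_d, zdot, accel_d):
        err = z_d - z
        # trapezoidal (bilinear-rule) accumulation of the position error
        self.integral += 0.5 * self.Ts * (err + self.prev_err)
        self.prev_err = err
        return (accel_d + self.kp * err
                + self.kd * (zdot_d - zdot) + self.ki * self.integral)
```

Running one controller instance per axis (x and y) reproduces the diagonal gain-matrix structure of Eq. (10).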
Precompiled Firmware and Programming the Robot
ANT™ and Bluetooth® capabilities are provided by Nordic's SoftDevices (precompiled, linked binary software). An advantage of using the nRF51422 is that the freely available ARM GCC cross-compiler can be used for compiling the source code. Most integrated development environments (IDEs), such as the free and popular Eclipse, can make use of this free compiler. For this thesis, Eclipse 4.5.1 was utilized; a tutorial on setting up the environment for the nRF51422 can be found in [97]. Additionally, with Eclipse, one can perform on-board debugging using the ARM Serial Wire Debug interface included in the SEGGER J-Link programmer.
The microcontroller used in the swarm secondary-sensing module, the ATmega328P, is programmable using a low-cost (i.e., less than $20 CAD) programmer (e.g., Pololu's USB in-system programming (ISP) module). Atmel Studio 7.0 and AVR Libc were utilized as the main IDE and compiler, respectively, for the ATmega328P in this thesis; both are free to download and use. A tutorial on setting up Atmel Studio 7.0 can be found in [98].
3.1.4 Power Management
mROBerTO is powered by three 3.7 V lithium-polymer (Li-Po) batteries connected in parallel, giving a total battery capacity of 120 mAh (Fig. 2). mROBerTO can operate for a minimum of one hour at full operation, defined herein as driving both motors at a 30% PWM duty cycle, which results in a speed of approximately 100 mm/s. In addition, at full operation the camera and proximity sensor are toggled on/off every second, the LEDs are continuously on, the IMU is updated every 10 ms, the IR emitters/receivers send and receive data packets at 4 Hz, and data is broadcast/received at 16 Hz through ANT™.
The battery voltage level is monitored directly by utilizing a voltage divider. The positive end of the battery pack is connected to a voltage divider, which halves the voltage. The reduced voltage is then monitored through an ADC port. This allows the robot to warn the user when its battery is almost depleted and requires recharging.
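The divider arithmetic can be sketched as follows. The ADC resolution, reference voltage, and low-battery threshold used here are assumptions for illustration, not mROBerTO's actual configuration.

```python
def battery_voltage(adc_counts, adc_max=1023, v_ref=3.3):
    """Recover the pack voltage from the halved divider output sampled by
    the ADC. adc_max and v_ref are illustrative values, not necessarily
    the nRF51422's actual ADC configuration."""
    v_divided = adc_counts / adc_max * v_ref
    return 2.0 * v_divided  # the divider halves the voltage, so undo it

def low_battery(v_pack, threshold=3.5):
    """Warn when a 3.7 V Li-Po pack is nearly depleted (threshold assumed)."""
    return v_pack < threshold
```

Halving the pack voltage before sampling keeps it within the ADC's input range, which is the reason the divider is needed in the first place.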
3.1.5 Comparison to Other Millirobots
Table I shows an overview comparison of currently existing swarm millirobots of comparable size (i.e., footprint area of less than 52 × 52 mm²), including the proposed mROBerTO [57]. The list was limited to similarly sized swarm millirobots for a fair comparison since, as mentioned above, robot capabilities in SR are typically proportional to robot size.
Costs
One of the concerns in swarm robotics research is access to low-cost robots. For Kilobots, the
developers state a unit cost of about $14 for parts, when 1000 units or more are purchased. However,
for practical research purposes, it would be difficult to acquire parts at this cost and manually assemble
1000 units. A more realistic cost per unit for parts, in the case of Kilobot, is approximately $50 for a
purchase of parts for 100 units. The Kilobots also require an overhead controller that costs $588 in
order to communicate with, calibrate, and program the Kilobots (both wired and wirelessly). As of
now, assembled and preprogrammed Kilobots are commercially available for about $150 per unit.
Although low cost was not one of the main objectives when designing mROBerTO, the outcome was quite favourable when compared to other millirobots, especially when processor, sensor capabilities, and communication are taken into consideration (Table I). mROBerTO's cost is about $82 CAD in parts per unit when parts for 25 units are purchased: the mainboard costs $10, the locomotion module $19, the primary sensing module $19, the secondary sensing module $16, and the battery packs $18.
Table I. Comparison with currently existing swarm millirobots [8], [26], [28]–[30], [39]–[42].

| Robot    | Footprint (mm²) | Processor  | Communication     | Sensors                                                       | Power (hr) | Contains Custom-Made Components | Modular Design |
| mROBerTO | 16 × 16         | ARM 32-bit | Bluetooth® & ANT™ | Light, range, gyro, accelerometer, compass, distance, bearing | 1.5 to 6   | No                              | Yes, 4 modules |
| GRITSBot | 31 × 30         | AVR 8-bit  | Wi-Fi             | Light, accelerometer, gyro, distance, bearing                 | 1 to 5     | Yes                             | Yes, 3 modules |
| Colias   | 40 × 40         | AVR 8-bit  | IR                | Distance, bumper, light, range, bearing                       | 1 to 3     | No                              | No             |
| TinyTeRP | 17 × 18         | 8051 8-bit | ZigBee®           | Gyro, accelerometer                                           | 0.5 to 1   | Yes                             | Yes, 2 modules |
| Wanda    | 51 × 51         | ARM 32-bit | IR                | Colour, line, range, light, accelerometer                     | 2.5        | Yes                             | No             |
| Kilobot  | 33 × 33         | AVR 8-bit  | IR                | Distance, light                                               | 3 to 24    | No                              | No             |
| Alice II | 22 × 21         | PIC 8-bit  | IR                | Bumper, range, camera                                         | 10         | Yes                             | No             |
| Jasmine  | 30 × 30         | AVR 8-bit  | IR                | Distance, light, colour, bearing                              | 1 to 2     | Yes                             | No             |
3.2 Experiments
Performance benchmark and swarm scenario experiments were conducted using mROBerTOs. Additional experiments were conducted using Kilobots in order to compare their performance benchmark results with those of mROBerTOs. The subsections below present the results of the performance benchmark and swarm experiments.
3.2.1 Open-Loop Movement Control
Millirobots are often prone to misalignments and imbalances due to design, production, or assembly problems, as well as to motion slippage. Thus, as the first experiment, open-loop control tests were performed in the laboratory to identify these deficiencies for mROBerTO (i.e., no IMUs or any other form of odometry was used). The same experiments were also carried out using a Kilobot. The motions of each robot were recorded using an overhead camera. The plots in Fig. 12 correspond to the five experiments completed for mROBerTO and Kilobot, as well as to data extracted from [43] for the TinyTeRP robot. As one can note, after a travel distance of 300 mm, mROBerTO (blue) exhibits a spread of approximately 35 mm, the Kilobot (green) approximately 235 mm, and the TinyTeRP (red) approximately 115 mm.
Fig. 12. Open-loop experiment comparison.
3.2.2 Path Following
The second set of experiments utilized an overhead camera for closed-loop control of mROBerTO's motion, via Bluetooth® from a centralized controller and a PID controller pre-programmed into the robot's firmware. The centralized controller creates a path for the robot to follow in the workspace, divided into n segments. Once the robot is at the starting position, it is given its current global coordinates and orientation, obtained from the overhead camera, along with the next desired location on the path. The PID controller in the robot's firmware uses these data from the centralized controller to move the robot to its next desired location. The next desired location changes periodically (set to one second for this thesis) until the path is completed. A detailed explanation and snippets of the source code used for the overhead centralized controller are provided in Appendix G.
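As a sketch of how a centralized controller might discretize a path into the n waypoints the robot is sent in turn, consider the following; the parametric path function and segment count are illustrative assumptions, not the controller's actual implementation (which is in Appendix G).

```python
import math

def segment_path(path_fn, n):
    """Divide a parametric path (t in [0, 1] -> (x, y)) into n segments,
    returning the n + 1 waypoints sent to the robot one at a time."""
    return [path_fn(i / n) for i in range(n + 1)]

# Example: a circle of 500 mm diameter, as in the experiments (coordinates
# in mm, centered on the origin for simplicity).
circle = lambda t: (250 * math.cos(2 * math.pi * t),
                    250 * math.sin(2 * math.pi * t))
waypoints = segment_path(circle, 100)
```

The controller would then advance through `waypoints` periodically, handing the robot one desired location per update period.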
Similarly, for comparison purposes, a Kilobot was also controlled via feedback from the overhead
camera. However, instead of using Bluetooth® communication, the Kilobot was controlled with an IR
based overhead controller provided by K-Team, the manufacturer for Kilobot.
Kilobots have difficulty moving on a smooth trajectory without a complex controller due to the
nonlinear nature of the vibration motors and the vibration differential drive system. Specifically, they
can only be commanded to move forward, or make full left or right turns. One can recall that, unlike
mROBerTO, Kilobots are not equipped with any IMUs either. Thus, as stated in [28], Kilobots with
stick-slip based locomotion using vibration motors cannot move over long distances with acceptable
precision.
Thus, for a fair comparison, the Kilobot was controlled to move in small increments to follow a given path at a slow speed (mROBerTO moved at approximately 3 mm/s, whereas the Kilobot moved at less than approximately 1.5 mm/s), while being provided constant feedback from the overhead controller of its current position and the next position on the path. In addition, the next desired location on the path did not change periodically; instead, it changed only once the robot was near or at the current desired position.
Fig. 13 to Fig. 15 show sample test results for mROBerTO and Kilobot, respectively, following different paths. For the straight-line experiments in Fig. 13, the robots travelled approximately 500 mm. For the circular-motion experiments in Fig. 14, the robots travelled on a circle of approximately 500 mm diameter. For the curved-path experiments in Fig. 15, the robots travelled approximately 250 mm along both the x and y axes. Table II shows the error distributions obtained from a total of 10 runs each with mROBerTO and Kilobot for each set of experiments. The error distributions were calculated from the error measured in the experiments, defined as the shortest distance between the recorded positions of the robots and the desired path. mROBerTO outperformed the Kilobot, which was expected since, as mentioned previously, mROBerTO is equipped with IMUs, whereas the stick-slip movement of the Kilobot causes unwanted slippage in wrong directions, as can be seen in the figures. On the complicated curved path in particular, mROBerTO outperforms the Kilobot due to its superior maneuverability and differential drive.
Fig. 13. Straight-line experiments for (a) mROBerTO and (b) Kilobot.
Fig. 14. Circular path experiments for (a) mROBerTO and (b) Kilobot.
Fig. 15. Curved path experiments for (a) mROBerTO and (b) Kilobot.
Table II. Path error data for the path-following experiments, where a total of 10 runs were completed for each robot and each experiment (mROBerTO / Kilobot).

|                    | Straight-Line (mm) | Circular Path (mm) | Curved Path (mm) |
| Average Error      | 1 / 1              | 3 / 4              | 2 / 7            |
| Max. Error Value   | 5 / 11             | 13 / 26            | 8 / 34           |
| Standard Deviation | 0.9 / 1.0          | 2.5 / 3.7          | 1.8 / 6.6        |
3.2.3 Swarm Capabilities
Swarm robots need to be simple in design and use a distributed approach, with regard to both processing and decentralized control, in order to complete a global task collectively [2]. Collective behaviours studied in the past have been broadly categorized into [1]: (i) spatially-organizing behaviour, where formation control is involved; (ii) navigational behaviour, utilized to explore an unknown environment; and (iii) collective decision-making, using wireless communication between robots to reach a consensus.
The main emphasis of running the collective behaviour experiments with existing swarm
algorithms was to showcase mROBerTO’s ability to:
a. Measure relative distances and bearings of nearby robots and obstacles.
b. Differentiate nearby robots by their unique IDs.
c. Wirelessly communicate with other robots using point-to-point and mesh network topologies.
d. Move with enough accuracy and precision to exhibit formation control in the collective
behaviours.
All the above features are essential in swarm robotics and these experiments showcase the feasibility
of mROBerTOs being utilized in collective behaviour experiments.
Constraints
Many of the collective behaviours in swarm robotics, especially, for formation control, become trivial
if all robots in the workspace are aware of each other’s global positions and orientations, and work
with a centralized controller. However, by definition, swarm robots may only interact locally
(decentralized) and are assumed to have no global knowledge regarding any of the robots or their
environment at any given moment [1], [2], [50], [99]. Thus, the swarm scenarios discussed in the
sections below are subjected to the following common constraints:
1. No robot has global knowledge of any other robot in the workspace (i.e., no use of overhead
camera and centralized controller for tracking). Furthermore, it has no information regarding
the size of the workspace or the total number of robots operating within it.
2. A robot’s perceptual knowledge of other robots and the environment is limited to within its sensing radius. Even this perception, however, is neither absolute nor reliable.
3. There is no centralized controller. Communication between robots is limited (e.g., up to 128 bytes of data per packet for mROBerTO, using the advanced ANT™ burst mode) and is only locally broadcast in either a mesh-type or point-to-point network. Furthermore, the pertinent signals are not reliable.
4. The robots are non-holonomic differential-drive platforms without odometry; they typically rely on noise-susceptible sensors, such as IMUs and magnetometers, for motion planning.
To showcase mROBerTO’s swarm capabilities, four different collective behaviours were chosen
and implemented in experiments herein as examples of the three common collective behaviour
categories listed above: 1) aggregation, 2) chain formation, 3) collective exploration, and 4) dynamic
task allocation. Aggregation and chain formation demonstrate a spatially-organizing behaviour, where
formation control is employed by the robots. Collective exploration illustrates mROBerTO’s
navigation capabilities in an unknown environment, where obstacle avoidance is employed. Lastly, the
dynamic task allocation illustrates mROBerTO’s wireless communication capabilities using ANTTM.
The specific algorithms for the four swarm scenarios were chosen based on their popularity in the
literature. Furthermore, most have been implemented and verified by other millirobots [24], [100]–
[102]. Appendix H provides more detailed descriptions of the selected swarm scenarios as well as
source code snippets of the firmware used on mROBerTOs for achieving these scenarios.
Swarm Scenario 1: Experiments in Aggregation
In nature, self-organizing aggregation can be observed in schools of fish, flocks of birds, insects such
as bees, cockroaches, and even in bacteria [103]–[107]. Aggregation allows animals to protect
themselves from predators and increase their sensing capabilities as a whole by huddling into one unit.
Aggregation type behaviour is often employed in swarm robotics for simulating such scenarios,
allowing interactions between robots at close proximities [1], [2]. Aggregation scenarios have been
simulated in the literature via the use of FSM approaches [100], [106], [107], artificial physics [108]–
[112], and evolutionary algorithms [74], [113], [114].
For mROBerTOs, FSM approach was chosen for implementing aggregation behaviour due to its
simplicity as well as being one of the more popular methods in the literature. The main objective of
this scenario is to showcase the robot’s ability to measure nearby robots’ relative distances and bearings
and use this information in order to complete the self-organized aggregation task.
Adapting the FSM approach from [100], a behaviour-based model [33] was implemented. The robots were programmed to (i) approach other robots, where a robot looks for any nearby robots and creates a movement vector from the summed vectors of the nearby robots' relative positions, and (ii) wait, where, if one or more robots are within a user-specified relative distance d_close, the robot stops moving.
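A minimal sketch of this two-state behaviour is given below; the value of d_close and the (distance, bearing) neighbour representation are illustrative stand-ins for the firmware in Appendix H.

```python
import math

D_CLOSE = 30.0  # mm; the user-specified stopping distance (value assumed)

def aggregation_step(neighbors):
    """One control step of the two-state aggregation FSM.

    neighbors -- list of (distance_mm, bearing_rad) to detected robots.
    Returns ('wait', None) when a neighbor is within D_CLOSE, otherwise
    ('approach', (vx, vy)), where (vx, vy) is the sum of the relative
    position vectors of the nearby robots, as in the behaviour model
    adapted from the FSM approach described above."""
    if any(d <= D_CLOSE for d, _ in neighbors):
        return ('wait', None)
    vx = sum(d * math.cos(b) for d, b in neighbors)
    vy = sum(d * math.sin(b) for d, b in neighbors)
    return ('approach', (vx, vy))
```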
Fig. 16 and Fig. 17 show two separate experimental results where a green LED indicates the
approach state while a blue LED indicates the wait state. Initially, the robots have no knowledge of
other nearby robots’ positions and they start at a minimum of 80 mm distance away from each other.
Fig. 16. Aggregation behaviour, (a) to (f), demonstrated by mROBerTOs. Robots start at a
minimum distance of 80 mm amongst themselves.
Fig. 17. Second aggregation behaviour experiment demonstrated by mROBerTOs: (a) start
and (b) end.
Swarm Scenario 2: Experiments in Chain Formation
Social insects, such as ants, form paths between two locations [2] – for example, from a food source to their nest in a foraging scenario. In swarm robotics, chain formation can be used for surveillance and navigation purposes. As with aggregation, chain formation can be achieved via FSM approaches [101], [115], artificial physics [116], or evolutionary methods [117].
Herein, a FSM approach was used to implement the chain formation collective behaviour [101].
However, instead of having the robots that are not part of the chain perform random walks around the
entire arena, as done in [101], mROBerTOs stayed stationary near the nest since the main objective
was to demonstrate their ability to form a straight chain. The chain, starting from the nest outward, was
achieved with the use of relative distance and bearing information of nearby robots as well as with the
use of the ANTTM mesh communication network.
Fig. 18 and Fig. 19 show two separate experimental results for chain formation. The green LED
indicates the current tail of the chain and the red LED indicates the robot in motion to get ahead of the
tail, until it stops and becomes the new tail.
Similar to the aggregation behaviour, the chain-formation method adapted from [101] did not explicitly specify the technique used for determining each robot's next movement vector. Thus, an artificial-potential approach was adopted herein to move the robots accordingly in order to achieve chain formation (Appendix H).
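A minimal sketch of one artificial-potential step toward the current chain tail is shown below; the desired spacing and gain are illustrative values, not the thesis' tuned parameters (which are in Appendix H).

```python
import math

def potential_step(d, bearing, d_star=60.0, k=0.05):
    """Artificial-potential movement vector toward a target (e.g., the
    current chain tail) observed at (d, bearing): attractive beyond the
    desired spacing d_star, repulsive inside it. d_star (mm) and the
    gain k are illustrative assumptions."""
    mag = k * (d - d_star)  # signed magnitude: < 0 pushes away, > 0 pulls in
    return (mag * math.cos(bearing), mag * math.sin(bearing))
```

Summing such vectors over all sensed robots yields the kind of net movement command used in the potential-based schemes cited above.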
Swarm Scenario 3: Experiments in Collective Exploration
Collective exploration of an unknown environment is often used in area coverage problems for
monitoring and surveillance, as well as for swarm-guided navigation. For example, robots that cover
an area can guide others to get to specific positions within the area for mapping purposes [1], [61]. In
general, exploration must be carried out while performing obstacle avoidance [118]. Collective
exploration involves self-deployment of robots without a centralized controller, with only local sensing
capability.
Several different approaches have been proposed for solving the deployment and collective
exploration problem in an unknown environment. In [119]–[123], taking inspiration from social
insects, such as ants, a stigmergic strategy was proposed, where artificial pheromones were dropped
by robots to guide and communicate with other robots and for mapping purposes. In [102], [124],
[125], artificial physics was used to model the robots as particles and consider the walls of the unknown
environment as obstacles in order for robots to disperse evenly in any sized and shaped environment
while achieving full area coverage.
The method in [102], using artificial physics, was implemented herein, using mROBerTOs
(Appendix H).
The arena for this experiment was designed to have a wall in the middle of the workspace as shown
in Fig. 20 and Fig. 21.
Fig. 18. Chain formation demonstrated by mROBerTOs.
Fig. 19. Second chain-formation behaviour experiment demonstrated by mROBerTOs: (a)
start and (b) end.
Swarm Scenario 4: Experiments in Dynamic Task Allocation
In swarm robotics, a group of robots can be asked to occasionally share/distribute several different
tasks amongst themselves by forming subgroups. For example, in a foraging scenario, one subgroup
of robots could be tasked to bring resources back to the nest, another subgroup assigned to actively
scout for additional resources in an unknown environment, while a third subgroup is tasked with
defending the nest [126]–[129].
Fig. 20. Collective exploration demonstrated by mROBerTOs using artificial potential
physics.
Fig. 21. Second collective exploration behaviour experiment demonstrated by mROBerTOs:
(a) start and (b) end.
For the dynamic task allocation experiment, herein, the Card Dealer’s algorithm in [24] was chosen
for implementation. The algorithm begins with all the robots having the desired target distribution
vector that describes how the group should be divided into subgroups. The algorithm works iteratively
through a series of stages, where at the end of each stage a robot without an assigned task is dealt a
task. Further detailed explanation can be found in Appendix H.
One of the prerequisites of the Card Dealer's algorithm is that the robots be within proximity of each other, both to measure the relative distances of nearby robots and to relay wireless communication. The relative distances are used to choose the time period assigned to each stage: for example, each stage should have a longer broadcasting period when there are more robots in the workspace, in order to ensure that the messages are relayed successfully to all robots. The communication links between the robots define a directed graph 𝒢, where the nodes represent the robots and the edges represent the communication links.
For this experiment, the desired task distribution vector was chosen as 𝒑 = (2/9, 3/9, 4/9), where the first task (assigned to two robots) is indicated by red LEDs, the second task (three robots) by green LEDs, and the third task (four robots) by blue LEDs, Fig. 22. Communication between the robots was achieved using ANT™ wireless communication in a mesh-type network. The robots remain stationary during task allocation.
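The bookkeeping at the end of the dealing stages can be sketched centrally as follows. The real algorithm runs in a distributed fashion over the ANT™ mesh, so this only illustrates the resulting assignment, under the assumption that the group size matches the target distribution exactly (as with nine robots and 𝒑 = (2/9, 3/9, 4/9)).

```python
def deal_tasks(robot_ids, p):
    """Centralized sketch of the Card Dealer outcome: one robot is dealt a
    task per stage until the group matches the target distribution p
    (fractions summing to 1). Assumes len(robot_ids) * p yields whole
    quotas, e.g., p = (2/9, 3/9, 4/9) with nine robots."""
    n = len(robot_ids)
    quotas = [int(round(pi * n)) for pi in p]  # robots wanted per task
    assignment = {}
    task = 0
    for rid in robot_ids:         # each stage deals one unassigned robot
        while quotas[task] == 0:  # move to the next undersubscribed task
            task += 1
        assignment[rid] = task
        quotas[task] -= 1
    return assignment
```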
3.3 Summary
In this chapter, the development and implementation of the novel small (16 × 16 mm²) modular mROBerTO millirobot were presented. The robot is designed using only commercially available and easily sourced components, for a simplified design, easy assembly, and easy maintenance. In contrast to others, mROBerTO offers a rich range of sensing capabilities while being one of the smallest millirobots reported in the literature. Its modular design allows the user to easily change many of its hardware features with little disruption to the other modules. It also allows separate modules to be developed independently and exchanged easily among research teams, according to their specific needs.
Various communication network topologies can be utilized on mROBerTO, such as point-to-point,
mesh, and tree topologies using its BLE and ANTTM communications. Decentralized formation control
for swarm behaviour studies, where the robot moves autonomously with respect to other robots, can
be achieved via the use of the secondary sensing module.
The swarm scenario experiments verified the applicability of mROBerTO to swarm studies, showing it capable of exhibiting different collective behaviours, such as aggregation, chain formation, collective exploration, and dynamic task allocation. The results of these experiments, as presented above, show that a variety of complex swarm experiments can be completed with small millirobots such as mROBerTOs.
Fig. 22. Dynamic task allocation demonstrated by mROBerTOs using Card Dealer’s
algorithm and ANTTM wireless communication.
Chapter 4
Review on Human-Swarm Interaction
The concept of HSI has only recently been explored, and several challenges have been identified in the HSI domain [23]. In [44], the cognitive limits of the human supervisor were described in terms of command complexity; that is, a human supervisor can only control a certain number of robots before control performance significantly decreases. In [51], [52], the authors introduced the concept of neglect benevolence, which states that the swarm should not be given a new command until it has reached a stable state following the previous command, in order to prevent any degradation of performance. In [48], issues of scalability and communication latency were explored and addressed through the dynamic selection of robot leaders, controlled by a human, to influence the swarm. In [53]–[56], different UI design approaches for monitoring and commanding swarms, as well as traditional MRSs, were discussed and showcased. In conclusion, several benefits exist in having a human supervisor monitor and direct swarm robots. However, a swarm control system and a UI that address all the abovementioned issues are required in order to effectively pair a human supervisor with a team of swarm robots. The first step towards enabling HSI is implementing a localization system that feeds the swarm's state back to the human supervisor, so that the supervisor can use the state information to provide meaningful commands to the swarm [23].
4.1 Propagating Human Influence on Swarm
There are two main methods of interacting and communicating with a swarm: proximal interaction and remote interaction [23]. The majority of research to date has studied remote interaction with the swarm, where human operators communicate with the swarm via a PC terminal over a selected wireless communication topology. The three most important factors to consider in remote interaction are latency, bandwidth, and asynchrony. In contrast, proximal interaction requires the human operators to be within the working environment of the swarm and often does not require complicated communication topologies. Human influence should occur after feedback is provided from the swarm to the human operator; that is, a state estimate of the swarm should be provided to the user through a UI, which could include a visualization of the states.
One of the earliest studies on swarm robotics and human UIs in which both hardware and software were considered can be found in [53]. Three main components of controlling a swarm were considered in [53]: a physical infrastructure to support hands-free operation, utility software for centralized control, and robot output that allowed users to see the states of the robots. With regard to hardware, a physical infrastructure was presented for hands-free operation, in order to remove from the human operators any manual work that could be automated, allowing the human supervisors to focus their efforts on controlling the swarm. This physical infrastructure included a stationary automated charging station; long-range navigational beacons to direct the swarm robots to the automated charging station and for long-range communication purposes; and a semi-automated calibration setup. In addition, two types of centralized graphical user interfaces (GUIs) were created in order to direct individual robots as well as the swarm. Although GUIs provide highly effective data collection and display, and may allow higher performance from human supervisors controlling the swarm, the GUIs met with mixed success according to the authors of [53]. The robots were able to show their current states to the users using RGB LEDs on top of the robots, which allowed up to 12 additional states to be displayed, as well as through audio speakers inside the robots. The authors note that, although the robots can show their states through LEDs and sounds, users can suffer information overload if the lights and sounds are presented in a cluttered manner.
In [56], bandwidth and localization issues within HSI were studied, specifically with regard to erroneous and time-delayed state information of the swarm. The user was tasked with relocating the swarm to various global positions. Three different scenarios were studied: low bandwidth (only one robot's state is provided to the user at a time, which creates delays), medium bandwidth (the swarm communicates amongst itself and estimates its position information for the user), and high bandwidth (all robots send their position information to the user at the same time). As expected, the results showed that the low-bandwidth scenario performed the worst. However, the medium- and high-bandwidth results were almost the same, showing that the estimated spatial information of the swarm was accurate and precise enough for human operators to relocate the swarm to various places.
However, in order to properly propagate human influence, provide commands to the swarm, and make decisions for it, localization is required so that the state information of the swarm robots can be provided to the human supervisor.
4.2 Localization
When completing a global mission in SR, several benefits exist in having a human supervisor monitor the swarm as well as provide directives when needed, such as where the swarm should move toward and the desired topology of the swarm [23]. In order to successfully achieve such goals, the swarm's state information, including the relative positions of the robots, needs to be provided to the human supervisor for real-time feedback and decision making.
Some methods for swarm-robotics localization have been proposed in which external sensors, such as cameras in the environment, are utilized to track the robots, providing absolute positions and acting as a GPS. In [9], an infrastructure for monitoring swarm robots was developed with an emphasis on low cost, using multiple overhead cameras, with the localization data displayed in 3D on a graphical user interface. In [10], the swarm state data were gathered by tracking physical swarm robots using multiple overhead cameras, and the data were then transferred to the user via the internet.
Unlike traditional MRSs, swarm robots typically do not have GPS, due to their size and limited capabilities, and especially in indoor applications [130]. Range-based methods are thus often employed for localization, which require the robots to directly measure their relative proximity and bearing with respect to their immediate neighbouring robots using their onboard sensors [11]. Several such range-based methods have been proposed in the literature for ground and aerial robotic swarms as well as for sensor networks; furthermore, least-squares techniques are applied in several of these localization algorithms [12], [13], [20]–[22].
Localization algorithms in SR can provide relative local positions of the robots (i.e., the state of the
swarm), which may be displayed on a GUI for the human supervisor to view. The chosen localization
algorithm needs to be scalable to the large number of robots operating at once; scalability can be
achieved with distributed computation [46]. Furthermore, the localization algorithm needs to be robust
to the noise found in dynamic environments during the completion of the user-specified global mission.
To relay the information from the swarm to the human supervisor as well as the user commands to
the swarm, communication network configurations must be embedded into the swarm control system
[30]. Decentralized communication allows any number of robots to communicate within the swarm
and is robust to any sudden addition or removal of robots [59]. The chosen communication network
configuration embedded on the robots to enable the communication between the swarm and the user
needs to meet the same objectives as the selected localization algorithm: scalability and robustness to noise.
Currently existing localization algorithms [15], [46], [47] and communication network
configurations [48]–[50] address the issue of scalability. However, these algorithms and
configurations assume that the sensor nodes have sufficient processing capabilities to run complex
optimization algorithms (e.g., particle swarm optimization (PSO)), or that the results are not needed in
real-time. In SR, the localization algorithms must be implemented on small swarm millirobots with limited
processing capabilities; thus, implementing complex algorithms may not be possible. Furthermore, the
localization results must be provided to the human supervisor through the communication
network in real-time. Thus, a new localization approach and a new communication network
configuration are needed that can be implemented on small swarm millirobots while still providing
results to the user in real-time.
4.2.1 Range-Based Localization
In [22], two different localization methods are presented for large (ground) swarms to determine the
relative proximities and bearings of the robots. The first approach uses a non-linear least-squares-based
strategy, whereas the second uses a stop-and-move motion coordination scheme to determine the
relative location of neighbouring robots via trilateration. In [21], the relative locations of a pair of
unmanned aerial vehicles (UAVs) are determined using three range measurements obtained during
coordinated motion and configurations similar to mechanical linkages.
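For context, the trilateration step referenced above can be sketched with a generic linear least-squares formulation (this is a textbook approach, not the specific scheme of [22]; all names are illustrative):

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from range measurements to known points.

    Linearizes the circle equations by subtracting the first one from the
    rest, then solves the resulting linear system in a least-squares sense.
    anchors: (n, 2) array of known reference positions, n >= 3.
    ranges:  (n,) array of measured distances to each reference point.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], ranges[0]
    # Each row i: 2*(xi - x0)*x + 2*(yi - y0)*y = b_i
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + anchors[1:, 0]**2 - x0**2
         + anchors[1:, 1]**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With noiseless ranges and three non-collinear references, the linearized system recovers the position exactly; with noisy ranges, the least-squares solution averages out the inconsistencies.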
In [13], node localization for wireless networks is achieved using the Gauss-Newton method, and a
scalable scheduling algorithm is used for sharing this information with neighboring nodes. Each node
has proximity measurements to neighboring nodes as well as to any anchor nodes. An initial estimate
is assumed when using the algorithm, which estimates new updated positions for the nodes. In [14],
relative localization of pairs of robots is carried out based on visual odometry and range measurements.
A constrained optimization problem is solved by maximizing a convex function subject to a set of
convex constraints.
In [20], a Bluetooth®-based relative localization algorithm is presented for micro UAVs to prevent
collisions within the swarm. The vehicles provide their state measurements, and measure the received
signal strength of other robots to determine relative distance. In [19], multiple antennas are placed on
robots in a group to estimate their locations using lateration and angulation.
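For context, RSSI-based ranging such as that in [20] is commonly built on a log-distance path-loss model; a minimal sketch follows (the reference RSSI, reference distance, and path-loss exponent are illustrative values that would require calibration in practice):

```python
def rssi_to_distance(rssi_dbm, rssi_ref_dbm=-60.0, ref_dist_m=1.0,
                     path_loss_exp=2.0):
    """Log-distance path-loss model: estimate range from signal strength.

    rssi_ref_dbm is the RSSI measured at the calibration distance
    ref_dist_m; path_loss_exp is ~2 in free space, larger indoors.
    """
    return ref_dist_m * 10.0 ** ((rssi_ref_dbm - rssi_dbm)
                                 / (10.0 * path_loss_exp))
```

For example, with these parameters a reading 20 dB below the 1 m reference corresponds to roughly 10 m; in cluttered indoor environments the exponent (and hence the estimate) changes substantially, which is why RSSI ranging is typically coarse.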
4.2.2 Localization with Evolutionary Algorithms
In [15], a three-stage method is presented that can use either PSO or the backtracking search algorithm
(BSA) for robot/sensor-node localization. During the 1st stage, the Sum-Dist method is used to estimate
distances to neighboring robots; in the 2nd stage, the Min-Max method is used to estimate an initial
position; and, lastly, in the 3rd stage, PSO or BSA is used to optimize the position.
In [16], a two-phase swarm-intelligent localization algorithm is presented for mobile nodes. The
algorithm assumes a small number of the nodes have prior knowledge of their global positions.
Thereafter, other nodes estimate their positions utilizing this information and PSO during a coarse
phase. In the fine phase, these positions are iteratively refined through collaborations. In [17], the use
of a genetic algorithm (GA) and BSA for relative localization was proposed and compared. The swarm
robots are assumed to be only one hop away from anchor robots that have GPS coordinates, and one of
the two optimization algorithms is utilized to refine the initial localization results. In [12], a two-phase
method is presented where a Connected Dominating Set (CDS) is utilized to generate a virtual backbone
in the first phase. In the second phase, the CDS result is utilized with matrices of inner anchor nodes,
relative distances, and proximities of unknown nodes, in order to estimate the relative positions.
The majority of the abovementioned localization algorithms require anchors with known global
coordinates [12]–[18], or that the robots be in motion [19]–[22]. Furthermore, only a few have
implemented the localization algorithms with physical robots and, even then, restricted the set-ups to
two or three robots at most [14], [20].
4.3 Summary
In order to enable HSI, a localization system will need to be implemented so that human supervisors
can provide meaningful feedback using the state information of the swarm. Although several
localization algorithms have been proposed in the swarm robotics literature, few [14], [20] have
implemented the algorithms on physical robots when running experiments. Furthermore, the papers
that did implement the algorithms on physical robots were limited to two or three robots. In order to
validate the relative localization algorithm for use in real-world applications, experiments with
physical robots will need to be carried out. In addition, more than three robots should be used in the
experiments, since a swarm typically consists of tens, or even hundreds, of robots. Lastly, no external
sensors, such as overhead cameras, should be utilized for localization, since external sensors or GPS
may not be available in real-world SR applications.
Next, a novel swarm localization algorithm for estimating swarm topologies will be presented
showing a successful implementation, testing, and comprehensive analysis of the algorithm.
Chapter 5
Enabling Localization for Human-Swarm Interaction
In order to successfully maintain a desired swarm topology and enable HSI, localization is a necessary
task that must be completed in an on-line manner; it can be accomplished either through external [9],
[10] or on-board sensors [11]–[45]. One of the most common methods for localization is the
range-based approach, which requires the robots in the swarm to directly measure their relative
distance and bearing with respect to other neighboring robots [11].
Herein, the localization problem is solved without a need for anchors, external sensors, or any
previously known global coordinates and only through range-based measurements. Correspondingly,
a detailed novel localization method for swarm topology estimation is provided, without a restriction
on the number of the robots.
5.1 Proposed Swarm Localization Method
A novel localization method for swarm topology estimation was developed in-house, initially
conceived by Professor Benhabib from the Computer Integrated Manufacturing Lab at the University
of Toronto. The proposed method fuses partial localization data
acquired from individual robots to simultaneously determine the complete set of robot locations (i.e.,
all relative proximities and bearings). The method is novel in that it only requires partial location data
sets acquired from individual robots to perform a complete relative localization of all robots. In
particular, the method performs localization without any global position knowledge or motion of the
robots.
One must note that the relative location data sets acquired from individual robots, in terms of polar
coordinates of their neighboring robots, would be restricted by the detection radii of the individual
robots.
Fig. 23 below shows an example of the experimental set-up that uses nine robots. The top left
figure shows the relative locations of Robots 2, 3, 4, 5, 7, 8, 9 with respect to Robot 1. Robot 6 is not
detectable by Robot 1. Similarly, the top middle figure shows the relative locations of Robots 3 and 9
with respect to Robot 2. The other robots are not detectable by Robot 2. The next seven figures show
the other seven sets of location data for Robots 3 to 9, respectively.
The data sets shown in Fig. 23 are both position and rotation invariant. Thus, in order to fuse
relative-location data, individual reference frames need to be transformed to positions and orientations
in a common global reference frame. Namely, all the data sets need to be superimposed and
manipulated (translated and rotated), where all respective robot data points are clustered to minimize
distances between observed data points for every robot, respectively.
Fig. 23. Nine robot location data sets.
The above clustering process can be carried out sequentially and iteratively. For example, in this
case, the data set for Robot 1 is used in the beginning as a reference, top left figure in Fig. 24. Then,
data sets for Robots 1 and 2 are superimposed and manipulated to yield the top middle figure in Fig.
24. Next, the data set for Robot 3 is superimposed on this combined 1-2 data set, to yield 1-2-3, top
right figure in Fig. 24. The clustering process continues until 1-2-3-4-5-6-7-8-9 is achieved, bottom
right figure in Fig. 24. Once this combined figure is obtained, the process is repeated. Namely, next,
the Robot 1 data set is re-transformed and manipulated for better combined results, followed by the
next eight robot data sets. The process is continued until an objective-function threshold is reached.
Fig. 25 shows the optimal cluster for the nine robots. After the optimal transforms are determined,
using the iterative optimization procedure described above, estimated robot positions are calculated by
determining the mean positions of all observations for a given robot in the common global reference
frame, Fig. 26.
Fig. 24. The data sets in Fig. 23 are incrementally superimposed and manipulated.
Fig. 25. The optimal cluster.
The Algorithm
Let r_ij be the proximity of Robot j to Robot i, as observed by Robot i, and θ_ij be the bearing of Robot j
in the local reference frame of Robot i, where the polar coordinates (r_ij, θ_ij) denote the location of Robot
j as observed by Robot i.
Let the transformation of Data Set i, from the local reference frame to the common global reference
frame, be defined by a rotation of the data set, by a_i, about the local reference frame origin, and a
subsequent translation, by (x_i, y_i), in the x and y directions, respectively. Then, given the transformation
parameters of the Robot i data set, (x_i, y_i, a_i), the Cartesian coordinates of Robot j as observed by Robot
i in the common global reference frame, X_ij, are:

$$\boldsymbol{X}_{ij} = \begin{bmatrix} r_{ij}\cos(\theta_{ij} + a_i) + x_i \\ r_{ij}\sin(\theta_{ij} + a_i) + y_i \end{bmatrix}, \qquad (11)$$
Fig. 26. Estimated robot positions.
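Eq. (11) is a direct polar-to-Cartesian mapping; for illustration, it can be sketched in a few lines of Python (function and variable names are illustrative, not from the thesis):

```python
import numpy as np

def to_global(r_ij, theta_ij, x_i, y_i, a_i):
    """Eq. (11): map Robot i's local polar observation (r_ij, theta_ij)
    of Robot j into the common global frame, given Robot i's
    transformation parameters (x_i, y_i, a_i)."""
    return np.array([r_ij * np.cos(theta_ij + a_i) + x_i,
                     r_ij * np.sin(theta_ij + a_i) + y_i])
```

For example, an observation at unit range and zero local bearing, taken by a robot at (2, 3) rotated by 90 degrees, maps to the global point (2, 4).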
In order to fuse the relative-location data, transformation parameters for individual robot data sets
are optimized sequentially. At each optimization step, a single set of transformation parameters is
selected for optimization, leaving all others fixed. Optimal transformations are the ones that minimize
the distances between observed data points for every robot. In particular, the transformation parameters
of Robot i are optimized to minimize the maximum distance between the coordinates of a robot as
observed by Robot i and the coordinates of the same robot as observed by all other robots. Namely, for
the optimization of the Robot i transformation parameters, the objective function to minimize is:
$$\epsilon_i = \max_{j,k} \left\lVert \boldsymbol{X}_{ij} - \boldsymbol{X}_{kj} \right\rVert, \qquad (12)$$

where X_ij represents the Cartesian coordinates of Robot j in the global reference frame as observed by
Robot i, X_kj represents the Cartesian coordinates of Robot j in the global reference frame as observed
by Robot k, and ‖⋅‖ denotes the Euclidean norm.
The optimization of individual transforms is performed sequentially and is repeated for all robots
iteratively until the improvement of the objective function is less than the objective-function threshold
(e.g., 0.1% in this case).
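The sequential, iterative optimization described above can be sketched as follows. This is an illustrative sketch only: the thesis does not specify the numerical solver (SciPy's Nelder-Mead is used here as a stand-in), and the observation container obs[i][j] = (r_ij, θ_ij) is an assumed data structure:

```python
import numpy as np
from scipy.optimize import minimize

def fuse(obs, n, tol=1e-3, max_rounds=50):
    """Sequentially optimize the per-robot transformation parameters
    (x_i, y_i, a_i) to minimize eq. (12), iterating until the
    objective-function improvement falls below the threshold.
    obs[i][j] = (r_ij, theta_ij): polar observation of Robot j by Robot i.
    Robot 0's local frame is kept as the common global reference frame."""
    T = np.zeros((n, 3))                            # (x_i, y_i, a_i) per robot

    def point(i, j, p):                             # eq. (11) under transform p
        r, th = obs[i][j]
        return np.array([r * np.cos(th + p[2]) + p[0],
                         r * np.sin(th + p[2]) + p[1]])

    def eps(i, p):                                  # eq. (12): worst disagreement
        worst = 0.0
        for j in obs[i]:
            for k in obs:
                if k != i and j in obs[k]:
                    worst = max(worst, np.linalg.norm(
                        point(i, j, p) - point(k, j, T[k])))
        return worst

    prev = np.inf
    for _ in range(max_rounds):
        for i in range(1, n):                       # Robot 0 stays the reference
            T[i] = minimize(lambda p: eps(i, p), T[i],
                            method='Nelder-Mead').x
        total = sum(eps(i, T[i]) for i in range(1, n))
        if total < tol or prev - total < tol:       # improvement threshold
            break
        prev = total

    # Estimated robot positions: mean of each robot's own origin and all
    # global-frame observations of it (the step illustrated in Fig. 26).
    est = np.array([np.mean([T[j, :2]] + [point(i, j, T[i])
                                          for i in obs if j in obs[i]], axis=0)
                    for j in range(n)])
    return T, est
```

The Gauss-Seidel-style loop (optimize one robot's transform while holding the others fixed, then repeat) mirrors the sequential clustering process described above; with noiseless, mutually visible observations, the fused positions reproduce the true relative geometry.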
5.2 Experiments
As noted above, swarm robot localization, among other uses, is a requirement for HSI studies [23]. In
HSI, human supervisors need to be provided in real-time with information regarding the up-to-date
topology of the swarm (i.e., the relative positions of the robots). The experiments discussed in this
section illustrate mROBerTO’s affinity to such localization. Appendix I provides details on the
experimental setup for the HSI portion of this thesis. Furthermore, Appendix J includes the MATLAB
code used for analyzing the HSI experiments.
In the experiments that were conducted, a swarm was initially arranged into a known topology,
specified by both robot relative proximities and bearings. The robots were, then, allowed to
communicate with each other using IR to locate other robots within their respective communication
range. The data of relative robot proximities acquired by each robot were, subsequently, combined
according to the data-fusion algorithm detailed in Section 5.1, above. The goal was to obtain the location
(proximity and bearing) of all robots in the swarm relative to a single reference robot.
The swarm topologies obtained through data fusion were compared to the actual topologies in order
to assess the affinity of mROBerTO to localization as well as the specific performance of the proposed
localization method.
The experiments were repeated multiple times for many topologies. Fig. 27 and Fig. 28 illustrate
two of the repetitions for two different example swarm topologies, respectively. The actual (true) robot
positions are shown in purple, while the measured (estimated) robot positions are shown in green. The
first topology, shown in Fig. 27, has eight robots arranged in a diamond shape with one additional
robot in the center (Robot 1). The second topology, shown in Fig. 28, has nine robots distributed
randomly, with no single robot being identifiable in the centre of the swarm.
Fig. 27. Diamond-shaped swarm topology: (a) Repetition #1, and (b) Repetition #2.
Fig. 28. Random-shaped swarm topology: (a) Repetition #1, and (b) Repetition #2.
From the experiments, it was evident that swarm topology can be successfully estimated using the
mROBerTOs and the proposed localization method. However, in order to actually measure the
dissimilarity between the actual (true) and measured (estimated) shapes, a topology signature was
defined as the polar coordinates of all robots with respect to one reference robot, in the above examples
chosen as Robot 1.
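For illustration, the topology signature and its comparison can be sketched as follows (the normalization of the percentage errors is an assumption for this sketch — proximity errors relative to the true range, bearing errors relative to a full turn — since the thesis does not state the exact normalization):

```python
import numpy as np

def topology_signature(positions, ref=0):
    """Polar coordinates (r, theta) of every robot relative to one
    reference robot: the signature used to compare topologies."""
    p = np.asarray(positions, dtype=float) - np.asarray(positions[ref],
                                                        dtype=float)
    r = np.hypot(p[:, 0], p[:, 1])
    theta = np.arctan2(p[:, 1], p[:, 0])
    return np.delete(r, ref), np.delete(theta, ref)   # drop the reference itself

def signature_errors(true_pos, est_pos, ref=0):
    """Percentage proximity and bearing errors between two signatures."""
    rt, tt = topology_signature(true_pos, ref)
    re, te = topology_signature(est_pos, ref)
    r_err = 100.0 * np.abs(re - rt) / rt              # relative to true range
    dth = np.angle(np.exp(1j * (te - tt)))            # wrap to (-pi, pi]
    th_err = 100.0 * np.abs(dth) / (2.0 * np.pi)      # relative to a full turn
    return r_err, th_err
```

For example, an estimate that overshoots one robot's range by 10% while matching all bearings yields a 10% proximity error for that robot and zero bearing error.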
Table III shows the dissimilarity measure in terms of the percentage errors between the topology
signatures of the actual (true) and measured (estimated) relative robot locations (r: proximity, and, θ:
bearing). The minimum, maximum, and average errors for each experiment are given. Overall, for all
the (four) experiments shown here, combined, the average proximity error was 7% with a standard
deviation of 7.5%, and the average bearing error was 5% with a standard deviation of 4.5%. For all
HSI experiments completed in this thesis, the author was the human supervisor monitoring and
providing commands to the robots.
Table III. Topology-error data of 4 experiment runs.

                 | Diamond | Diamond | Random  | Random
                 | Rep. #1 | Rep. #2 | Rep. #1 | Rep. #2
Min. r Error (%) |    1    |    0    |    1    |    1
Avg. r Error (%) |    3    |    6    |    9    |    9
Max. r Error (%) |    6    |    1    |   20    |   38
Min. θ Error (%) |    1    |    1    |    0    |    0
Avg. θ Error (%) |    4    |    5    |    6    |    4
Max. θ Error (%) |    9    |   11    |   20    |   13
The distance errors between the actual (true – purple) and measured (estimated – green) respective
robot positions were also calculated. The minimum, maximum, and average errors for each experiment
are given in Table IV. Overall, for all four experiments shown here, good results were observed with
the average distance error valued at 13.1 mm with a standard deviation of 6.8 mm. The robots were
able to localize and the estimated topologies were close to the actual positions of the robots.
In addition to the static trials above, an experiment with HSI was performed as well. Namely, a
swarm of nine robots (in the field) was asked to change its topology through input from a human
supervisor. Specifically, in the experiment presented, the swarm with initially randomly positioned
robots, Fig. 29a, was instructed to reconfigure itself into a diamond topology centred about Robot 1.
The instructions were issued by a human supervisor and relayed to the robots through a GUI.
Fig. 29b shows the swarm topology after the first movement towards the diamond topology.
Iterations progress from left to right, beginning with randomly positioned robots and ending in a
diamond topology. The middle row indicates the true robot positions (corresponding to the photos in
the row above), while the bottom row indicates estimated swarm topologies.
Table IV. Localization-error data.

                | Diamond | Diamond | Random  | Random
                | Rep. #1 | Rep. #2 | Rep. #1 | Rep. #2
Min. Error (mm) |   0.6   |   2.7   |   9.2   |  11.6
Avg. Error (mm) |   6.9   |  10.0   |  17.4   |  18.0
Max. Error (mm) |   9.2   |  16.1   |  22.9   |  21.6
The human supervisor could not directly observe the physical swarm and only had access to the
estimated topology of the swarm, obtained through localization and displayed on a GUI. At the start
of the process, the robots in the swarm were assigned relocation identities/destinations to minimize
displacements, along with individual translation vectors to achieve the desired diamond topology,
based on estimated goal positions.
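Assigning relocation destinations so as to minimize total displacement is an instance of the linear assignment problem; a minimal sketch follows (the thesis does not state which assignment method was used — the Hungarian algorithm via SciPy is shown here as one standard option):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_goals(current, goals):
    """Match each robot to a goal position so that the total travel
    distance is minimized (Hungarian algorithm on the pairwise-distance
    cost matrix)."""
    cur = np.asarray(current, dtype=float)
    gl = np.asarray(goals, dtype=float)
    cost = np.linalg.norm(cur[:, None, :] - gl[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cols, cost[rows, cols].sum()   # goal index per robot, total distance
```

A greedy nearest-goal assignment can leave the last robots with distant leftovers; the optimal matching avoids this at O(n³) cost, which is negligible for swarms of tens of robots.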
A total of four iterations of movements were performed to achieve the topologies shown in Fig.
29b to Fig. 29e, respectively. Localization was performed at every iteration to provide the human
supervisor with up-to-date information regarding the latest swarm state.
Fig. 30 shows the average dissimilarity measures of the estimated robot topology signature with
respect to the desired diamond topology signature over motion iterations. The results demonstrate the
convergence of the swarm to its desired estimated topology.
Fig. 29. The evolution of the robot swarm over time from random positions to diamond
topology.
Fig. 30. Average dissimilarity of the estimated swarm from desired diamond topology.
5.3 Summary
This chapter presented a novel localization algorithm for estimating the swarm topology of multiple
robots, where no knowledge of the global positions of the robots was required. The experimental
results validated this new method using physical robots (mROBerTOs). Two different topologies were
tested in the experiments: randomly placed robot positions and a diamond formation. Furthermore, an
experiment was presented where mROBerTOs successfully changed formation from random positions
to a diamond formation through the use of estimated localization data.
Chapter 6
Conclusions and Recommendations for Future Work
This chapter presents a brief overview of the research challenges that were addressed in this thesis,
along with the proposed robot design and the localization algorithm that were developed to address
them. This thesis presented a new swarm millirobot design for swarm behaviour studies. Furthermore,
a novel localization algorithm for estimating swarm topology was presented in order to enable HSI. A
summary of contributions is presented in Section 6.1 regarding the proposed robot design and the
localization algorithm. Section 6.2 presents possible future work regarding swarm millirobots and HSI.
6.1 Summary of Contributions
The main contributions of this thesis are the following:
1. Development and implementation of an effective swarm millirobot design for swarm behaviour
studies. The robot is one of the smallest in existence, allowing a large number of units to work in
small arenas; it has a high level of modularity for future changes and upgrades; it uses only off-the-
shelf and easy-to-source components for easy assembly and maintenance; and it has rich processing
and sensing capabilities to handle complex tasks.
2. The design and validation of a swarm localization algorithm for estimating swarm topologies
without the knowledge of the robots’ global positions and without the use of any external
sensors such as overhead cameras, in order to enable HSI.
mROBerTO allows researchers to test various swarm scenarios because of its modular design and
rich processing and sensing capabilities. The high level of modularity allows researchers to
independently develop different modules for their needs. Its small size allows a large number of units
to operate in a relatively small workspace area. Since mROBerTO is designed using only off-the-shelf
and easy-to-source components, it can be easily mass produced without the need for any complex
manufacturing processes.
The newly proposed swarm localization algorithm for estimating swarm topologies requires no
knowledge of the global coordinates of any of the robots and can be run in an on-line fashion. This
allows the algorithm to be used in real-world applications where external sensors such as overhead
cameras would not be present. The algorithm was validated with physical robots (i.e., mROBerTOs)
to prove its effectiveness in real-world environments.
6.1.1 An Effective Swarm Millirobot Design
Swarm millirobots, and SR in general, belong to an emerging field that takes inspiration from social
animals that exhibit several desirable behaviours. In order to test and implement these desirable
behaviours on robots, physical robots need to be developed. Several challenges exist when
designing effective swarm millirobots for swarm behaviour studies. These challenges include making
trade-offs between size, battery capacity, and processing and sensing capabilities.
A list of currently existing swarm robots was investigated, and it was found that these robots
typically exhibit an integral design, which prevents researchers from making additional upgrades or
changes with ease. Furthermore, these robots often consist of custom-made components, such as
drivetrains, which prevent mass production. It is desirable for swarm robots to be as
small as possible so that a large number of units can operate in small workspaces when running swarm-
scenario experiments. However, as the literature reports, processing and sensing capabilities are
sacrificed in order for robots to be smaller in overall size.
The mROBerTO swarm millirobot addresses the abovementioned challenges and provides a platform
for researchers to effectively run swarm-scenario experiments. In order to show the potential use of
mROBerTO in swarm behaviour studies, several experimental results were provided regarding its
performance as well as its ability to exhibit collective behaviours.
6.1.2 Swarm Localization Algorithm
Although swarm robots are expected to have a high LoA, several benefits exist to including a human
supervisor in a team of swarm robots. However, many challenges must be overcome for humans to be
effectively included in the team of swarm robots (i.e., to enable HSI). One such challenge is the
successful implementation of localization in order to obtain the state information of the swarm. Swarm
state information provides feedback to the human supervisor in order to make decisions and provide
meaningful commands to the swarm robots.
Several localization methods in SR have been proposed in the literature and were investigated.
However, these methods typically require external sensors, such as overhead cameras, that monitor the
individual robot positions. Such methods are not suitable for real-world applications where GPS cannot
be installed on swarm millirobots due to their size, or for environments that cannot have external
sensors. Other localization methods require complex algorithms with high computation times that are
not suitable for swarm millirobots with limited processing and sensing capabilities.
The localization algorithm proposed in this thesis addresses the abovementioned issues in order to
effectively enable HSI. This algorithm does not require any external sensors and can be completed
using only the sensors already on the swarm millirobots that can measure both proximities and bearings
of nearby robots. The algorithm was validated using nine mROBerTOs but can be used for any number
of robots within the swarm team.
6.2 Recommendations for Future Work
The first recommendation for future work is investigating and developing an upgraded design of the
mROBerTO swarm millirobot. One example is the investigation of different types of actuation for the
swarm millirobots. mROBerTO's actuation only allows operation on smooth uniform surfaces, which
limits its workspaces to settings such as lab testbeds. It may be beneficial for the swarm millirobot to
have more robust actuation, such as tracked wheels, with the trade-off of a larger footprint area. The
design decision of not using any gears for locomotion simplified the design, with the trade-off of a low
torque output for each motor (i.e., a maximum of 28 nNm per motor), which may cause issues when
testing future swarm scenarios in various applications.
The catalogue of components implemented on the swarm millirobots should also be further
investigated. For example, the newly released nRF52832 SoC has the same features and
package size as the nRF51422, but with an upgraded ARM architecture, faster processing speed, and
more memory (both flash and RAM). Other sensor ICs, such as accelerometers, gyroscopes, and
proximity sensors, should be looked into as well, since there are newer versions of the previously used
components on mROBerTO that consume less power and offer more capabilities than their previous
versions.
As mentioned previously in Section 3, mROBerTO has built-in Bluetooth® and ANT™ wireless
technologies. To enable swarm capabilities, which require a mesh network, ANT™ wireless technology
was utilized. However, although ANT™ has a relatively fast data transmission speed (i.e., 60
kbit/s), the maximum data packet size per message is only up to 8 usable bytes. This may be an issue
when enabling HSI using the localization algorithms and communication network configurations proposed
in Section 4. Thus, other commercially available wireless technologies that allow more than 8 bytes
of data per message should be investigated. One possible wireless technology to replace ANT™ is
ZigBee®, which allows up to 72 usable bytes of data per packet, but requires more power for data
transmission when compared to ANT™.
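For context, sending a payload larger than 8 bytes over such a link requires fragmentation and reassembly; a minimal illustrative sketch follows (the one-byte sequence-number framing below is an assumption for illustration, not the protocol used on mROBerTO):

```python
def fragment(payload: bytes, max_bytes: int = 8):
    """Split a payload into chunks that each fit one ANT message.
    One byte of each message carries a sequence number, leaving
    max_bytes - 1 bytes for data (framing scheme is illustrative only)."""
    data_per_msg = max_bytes - 1
    return [bytes([seq]) + payload[i:i + data_per_msg]
            for seq, i in enumerate(range(0, len(payload), data_per_msg))]

def reassemble(messages):
    """Sort by the sequence byte and concatenate the data bytes."""
    return b"".join(m[1:] for m in sorted(messages, key=lambda m: m[0]))
```

With 1 byte of framing overhead per 8-byte message, only 7 of 8 bytes carry data, which further reduces the effective throughput available for localization updates.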
In [23], several suggestions are made with regards to future research areas in HSI that have yet to
be fully addressed in the literature. One of the main and common suggestions, also mentioned in
other recent surveys [1], [3], [131], is determining the effectiveness characteristics of each collective
algorithm in swarm robotics in order to compare the performance of all pre-existing algorithms (i.e.,
the creation of a standardized set of metrics) [132]. Another direction that should be considered is
studying different types of human UIs for monitoring swarm robots in order to obtain streamlined
feedback of the swarm dynamics during a mission [133]. If human operators can obtain a better
understanding of the swarm robots' states during missions, they can better predict the behaviours of
the swarm and provide more efficiently timed inputs to increase the overall performance of the swarm
[134].
Appendices
Appendix A: Drawings and Assemblies of mROBerTO
The drawings and assemblies of all circuit boards (modules) of mROBerTO are included in this
appendix. These modules include the locomotion module, mainboard, primary sensing module,
secondary sensing module (swarm), and secondary sensing module (central).
The tolerances indicated in the circuit-board drawings are taken from the manufacturer of the boards
of mROBerTO: ITEAD Studio [135]. Drawings of individual components that are placed on each module
and listed in the BoM (Appendix B) can be found on the website of the distributor/wholesaler by
searching for the respective part number.
A.1 Locomotion Module: Drawing and Assembly
A.2 Mainboard Module: Drawing and Assembly
A.3 Primary Sensing Module: Drawing and Assembly
A.4 Secondary Sensing (Swarm) Module: Drawing and Assembly
A.5 Secondary Sensing (Central) Module: Drawing and Assembly
Appendix B: Bill of Materials
Listed below are the BoMs for the individual modules of the robot. The Items column corresponds to
the circuit-schematic component names in Appendix C. Unless specified, tolerance values for resistors,
capacitors, and inductors can be ≤ 10%. Generic surface-mount resistors, capacitors, and inductors
can be soldered onto the boards with the specified footprints listed below. Generic male rectangular
header pins can be used with the specified pitch values below (i.e., 1.27 mm (0.05”) or 2.54 mm (0.1”)).
Locomotion Module

Quantity | Items       | Values                                   | Manufacturer          | Distributor/Wholesaler | Part Number       | Footprint
1        | U1          | 2.8 V 80 mA Voltage Regulator (High Acc) | Microchip             | Digi-Key               | TC1016-2.8VCTTR   | SOT-23-5
1        | U2          | 2.5 V 700 mA Voltage Regulator           | Torex Semiconductor   | Digi-Key               | XC6210B252MR-G    | SOT-23-5
1        | U3          | 2.8 V 250 mA Voltage Regulator           | Microchip             | Digi-Key               | MCP1700T-2802E/TT | SOT-23-5
1        | U4          | H-Bridge (x2 Full/x4 Half)               | Allegro MicroSystems  | Digi-Key               | A3901SEJTR-T      | 10-WFDFN
4        | C1,C2,C4,C5 | 2.2 uF Ceramic Capacitor                 | -                     | Digi-Key               | -                 | 0402
1        | C3          | 10 uF Ceramic Capacitor                  | -                     | Digi-Key               | -                 | 0402
2        | -           | 4 mm Nano DC Motors                      | Precision Microdrives | Precision Microdrives  | 104-001           | -
3        | -           | Li-Po 3.7 V Battery                      | Data Power Tech. Ltd. | SparkFun               | DTP301120         | 20 x 11 mm2
1        | -           | PTFE Bearing Ball                        | McMaster-Carr         | McMaster-Carr          | 9660K13           | -
1        | -           | PCB                                      | ITEAD Studio          | ITEAD Studio           | -                 | 5 x 5 cm2 max
Mainboard

Quantity | Items | Values                                        | Manufacturer          | Distributor/Wholesaler | Part Number          | Footprint
1        | U1    | nRF51422 System on Chip                       | Nordic                | Digi-Key               | NRF51422-QFAC-R      | 48-VFQFN
1        | U2    | 16 MHz Crystal                                | AVX Corp/Kyocera Corp | Digi-Key               | CX3225GB16000D0HPQCC | 4-SMD
1        | U3    | 2.4 GHz Chip Antenna                          | Ethertronics Inc.     | Digi-Key               | 1001312              | SMD
1        | U4    | Nordic Balun Filter                           | Johanson Tech. Inc.   | Digi-Key               | 2450BM14E0003T       | 0603
1        | L1    | 15 nH Fixed Inductor (Optional)               | -                     | Digi-Key               | -                    | 0402
1        | L2    | 10 uH Fixed Inductor (Optional)               | -                     | Digi-Key               | -                    | 0603
1        | JP1   | 1x3 2.54 mm Pitch Male Pinhead                | -                     | eBay                   | -                    | -
1        | JP3   | 1x6 1.27 mm Pitch Male Pinhead (Right-Angled) | -                     | eBay                   | -                    | -
1        | JP4   | 1x8 1.27 mm Pitch Male Pinhead                | -                     | eBay                   | -                    | -
1        | JP5   | 1x9 1.27 mm Pitch Male Pinhead                | -                     | eBay                   | -                    | -
1        | JP6   | 1x10 1.27 mm Pitch Female Pinhead             | -                     | eBay                   | -                    | -
2        | R1,R2 | 2 MOhms Resistor (1%)                         | -                     | Digi-Key               | -                    | 0402
1        | C1    | 0.1 uF Ceramic Capacitor                      | -                     | Digi-Key               | -                    | 0402
2        | C2,C3 | 12 pF Ceramic Capacitor                       | -                     | Digi-Key               | -                    | 0402
1        | C4    | 1 nF Ceramic Capacitor                        | -                     | Digi-Key               | -                    | 0402
1        | C5    | 47 nF Ceramic Capacitor                       | -                     | Digi-Key               | -                    | 0402
2        | C6,C7 | 2.2 uF Ceramic Capacitor                      | -                     | Digi-Key               | -                    | 0402
1        | -     | PCB                                           | ITEAD Studio          | ITEAD Studio           | -                    | 5 x 5 cm2 max
Primary Sensing Module

Quantity | Items | Values                                        | Manufacturer       | Distributor/Wholesaler | Part Number    | Footprint
1        | U1    | Time-of-Flight Proximity/Ambient Light Sensor | STMicroelectronics | Digi-Key               | VL6180XV0NR    | -
1        | U2    | Toshiba VGA CMOS Camera                       | Toshiba            | SparkFun               | TCM8230MD      | -
1        | U3    | 1.5 V Voltage Regulator                       | Texas Instruments  | Digi-Key               | LP2985AIM5-1.5 | SOT-25
2        | R1,R2 | 47 KOhms Resistor (1%)                        | -                  | Digi-Key               | -              | 0402
2        | R3,R4 | 2.2 KOhms Resistor (1%)                       | -                  | Digi-Key               | -              | 0402
1        | C1    | 4.7 uF Ceramic Capacitor                      | -                  | Digi-Key               | -              | 0402
1        | C2    | 2.2 uF Ceramic Capacitor                      | -                  | Digi-Key               | -              | 0402
1        | C3    | 10 nF Ceramic Capacitor                       | -                  | Digi-Key               | -              | 0402
1        | -     | PCB                                           | ITEAD Studio       | ITEAD Studio           | -              | 5 x 5 cm2 max
Secondary Sensing Module (Swarm)

Quantity | Items                   | Values                        | Manufacturer       | Distributor/Wholesaler | Part Number    | Footprint
1        | U1                      | ATMEGA328P-MU                 | Atmel              | Digi-Key               | ATMEGA328P-MU  | 32-VFQFN
1        | U2                      | MPU-6500                      | InvenSense         | Digi-Key               | MPU-6500       | 24-QFN
1        | U3                      | SMLP36RGB2W3                  | Rohm Semiconductor | Digi-Key               | SMLP36RGB2W3   | 0604
4        | LD1,LD2,LD3,LD4         | IR Emitters 880 nm            | Kingbright         | Digi-Key               | APA3010F3C-GX  | -
6        | LP1,LP2,LP3,LP4,LP5,LP6 | IR Phototransistor 860 nm     | Vishay Corp        | Digi-Key               | APA3010P3BT-GX | -
1        | R1                      | 10 KOhms Resistor             | -                  | Digi-Key               | -              | 0402
2        | R12,R13                 | 4.7 KOhms Resistor            | -                  | Digi-Key               | -              | 0402
6        | R2,R3,R4,R5,R6,R7       | 11 KOhms Resistor (1%)        | -                  | Digi-Key               | -              | 0402
4        | R8,R9,R10,R11           | 200 Ohms Resistor (1%)        | -                  | Digi-Key               | -              | 0402
3        | R14,R15,R16             | 150 Ohms Resistor (1%)        | -                  | Digi-Key               | -              | 0402
3        | C1,C9,C11               | 0.1 uF Ceramic Capacitor      | -                  | Digi-Key               | -              | 0402
1        | C2                      | 2.2 uF Ceramic Capacitor      | -                  | Digi-Key               | -              | 0402
6        | C3,C4,C5,C6,C7,C8       | 10 nF Ceramic Capacitor (1%)  | -                  | Digi-Key               | -              | 0402
1        | C10                     | 10 nF Ceramic Capacitor       | -                  | Digi-Key               | -              | 0402
1        | L1                      | 10 uH Fixed Inductor          | -                  | Digi-Key               | -              | 0603
1        | -                       | PCB                           | ITEAD Studio       | ITEAD Studio           | -              | 5 x 5 cm2 max
Secondary Sensing Module (Centralized Control)
Quantity Items Values Manufacturer Distributor/Wholesaler Part Number Footprint
1 U1 3D Magnetometer NXP USA Inc. Digi-Key MAG3110FCR1 10-VFDFN
1 U2 3D Gyroscope InvenSense Digi-Key ITG-3701 16-WFQFN
1 LD1 RGB SMD LED Rohm Semiconductor Digi-Key SMLP36RGB2W3 0603
1 LD2 Green SMD LED Rohm Semiconductor Digi-Key APT2012LZGCK 0603
3 R1,R2,R3 560 Ohms Resistor (1%) - Digi-Key - 0402
1 R4 280 Ohms Resistor (1%) - Digi-Key - 0402
5 C1,C2,C4,C5,C6 0.1 uF Ceramic Capacitor - Digi-Key - 0402
1 C3 10 nF Ceramic Capacitor - Digi-Key - 0402
1 - PCB ITEAD Studio ITEAD Studio - 5 x 5 cm2 max
Appendix C: Schematics and Board Layouts
Below are the schematics and board layouts of the modules: locomotion, mainboard, primary sensing, and the two secondary sensing (swarm and centralized control) modules. The board layouts include the top and bottom layers shown together (with the bottom layer mirrored), the top layer separately, and the bottom layer separately.
C.1 Locomotion Module: Schematic and Layout
C.2 Mainboard Module: Schematic and Layout
C.3 Primary Sensing Module: Schematic and Layout
C.4 Secondary Sensing (Swarm) Module: Schematic and Layout
C.5 Secondary Sensing (Central) Module: Schematic and Layout
Appendix D: Specification of mROBerTO
Table D1. mROBerTO’s spec sheet.
Size and Weight
Dimensions 16 × 16 × 32 mm³ (L × W × H)
Weight 8.9 g
Processor, Sensors, and Peripherals
Processor ARM Cortex-M0 (32-bit @ 16 MHz)
Memory 32 KB RAM and 256 KB flash
Inertial Measurement Units 3D gyroscope, 3D accelerometer, and 3D magnetometer
Front Range Time-of-flight proximity sensor (measures up to 255 mm) and ambient light sensor
Front Camera CMOS VGA (allows up to 640 × 480 @ 30 FPS)
Proximities and Bearings Multi-channel IR based communication module
Visual User Interface RGB LED (top side)
Locomotion and Speed
Maximum Speed 150 mm/s
Minimum Speed 1 mm/s
Type of Locomotion Differential drive with 2 wheels
Battery and Power Management
Operation Time Approximately 3 hours (ranges from 1 to 6 hours)
Battery Type Lithium polymer 3.7 V nominal with 120 mAh
Charge Time 1 hour
Wireless Communication
Communication Type (RF) Selectable or concurrent operation of BLE 4.2 (up to 1 Mbps) or ANT™ (up to 128 Kbps)
Maximum Range (RF) < 0.5 m
Communication Type (IR) Multi-channel infrared based (up to 2 × 10⁵ bps)
Maximum Range (IR) < 125 mm
Software and Debugging
IDE Eclipse with ARM GCC compiler
Programmer/Debugger SEGGER’s J-Link
Programming (Over-the-Air) Available
Operating Environment
Temperature 0 °C to 55 °C
Workspace Surface Flat with no to little roughness (e.g., whiteboard and acrylic glass)
Appendix E: IR Sensors for Proximity and Bearing Estimation
An algorithm was developed to approximate the relative distances and bearings of nearby robots using the readings of all six IR phototransistors (detectors). The procedure is as follows:
1) Grab the raw IR readings for all nearby robots.
2) Normalize the raw IR readings.
3) Use the normalized IR reading values to approximate the bearing of all nearby robots.
4) Sum the normalized IR reading values and use this summed value as well as the approximated
bearing for all nearby robots to approximate their relative distances.
Six wide-angle IR detectors (Fig. E1) are placed on top of mROBerTO to estimate both the proximities and bearings of nearby robots at ranges of up to 125 mm. Four wide-angle IR emitters, similar in form to the detectors, emit modulated IR signals at a specified carrier frequency.
Fig. E1. Wide-angle IR detector with its relative radiant sensitivity vs. angular displacement
characteristics [136].
The raw IR signals from each of the six IR detectors are gathered using six ADC input pins on the ATmega328P microcontroller of the secondary sensing module. The ADC block on the microcontroller is set to sample at approximately 14.42 ksps by setting the ADC clock prescaler to 64 with the system clock overclocked to 12 MHz (i.e., (12 MHz / 64) / (13 cycles per sample)). At present, the robots can demodulate up to nine different IR carrier frequencies (channels), ranging from 2040 to 3835 Hz, using either the FHT or the GA on the ATmega328P. Thus, by the end of one cycle of the demodulation procedure, raw IR data are measured for nine channels and six IR detectors per channel, as shown below, where $i$ denotes the $i$-th channel and $j$ the $j$-th IR detector on the secondary sensing module (Fig. E2).
\[
IR_{\mathrm{raw}} =
\begin{bmatrix}
IR_{1,1} & IR_{1,2} & IR_{1,3} & IR_{1,4} & IR_{1,5} & IR_{1,6}\\
\vdots & & & IR_{i,j} & & \vdots\\
IR_{9,1} & \cdots & & & \cdots & IR_{9,6}
\end{bmatrix}
\]
Fig. E2. Secondary sensing module with IR emitters and detectors.
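The ADC sample-rate figure quoted above follows directly from the clock settings; a standalone arithmetic check (illustrative only, not part of the robot firmware) is sketched below.

```c
/* Effective ADC sampling rate on the ATmega328P: the system clock divided by
   the ADC prescaler, then by the 13 ADC clock cycles one conversion takes. */
double adc_sample_rate(double f_sys_hz, double prescaler, double cycles_per_sample)
{
    return (f_sys_hz / prescaler) / cycles_per_sample;
}
```

With a 12 MHz system clock and a prescaler of 64, this gives (12 MHz / 64) / 13 ≈ 14.42 ksps, the figure quoted above.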
After the raw IR readings have been gathered, they are normalized using the equation below. The normalized values are used to approximate the bearings of nearby robots, while their sum is used subsequently for relative distance (proximity) estimation.
\[
a_i = \sum_{j=1}^{6} \frac{IR_{i,j}}{IR_{\max_{i,j}}} = \sum_{j=1}^{6} IR'_{i,j},
\tag{E1}
\]
where $a_i$ is the summed normalized IR reading, $IR_{i,j}$ is the raw IR reading for the $i$-th channel and $j$-th IR detector, $IR_{\max_{i,j}}$ is the raw IR reading found during calibration at close range (i.e., 50 mm in this case), and $IR'_{i,j}$ is the normalized IR reading.
As mentioned previously in Subsection 3.1.2, the bearings of nearby robots are determined by
weighting the scaled IR intensities received with the six IR detectors, as described in [91],
\[
\hat{\varphi}_i = \arctan\left(\frac{\sum_{j=1}^{6} IR'_{i,j}\,\sin(\beta_j)}{\sum_{j=1}^{6} IR'_{i,j}\,\cos(\beta_j)}\right),
\tag{E2}
\]
where $\hat{\varphi}_i$ is the estimated bearing of the $i$-th IR channel signal with respect to the robot's heading, $j = 1, \ldots, 6$ is the sensor index, and $\beta_j$ is the angular offset between the $j$-th sensor and the robot's heading (i.e., $0°, 60°, 120°, \ldots, 300°$, as shown in Fig. E2).
Once the bearing has been approximated and the summed normalized IR readings have been calculated, Equation (E3) can be used to approximate the relative distance. Equation (E3) is a piecewise linear interpolation over the 0 to 50, 50 to 75, 75 to 100, and 100 to 125 mm distance intervals at a specific bearing (Fig. E3). Fig. E3 shows the model used for approximating the relative distances: the blue curve is the average summed normalized IR reading at a 50 mm distance found during calibration, while the red, green, and cyan curves are the summed normalized IR readings at 75, 100, and 125 mm, respectively.
\[
d(a_i,\varphi_i)=
\begin{cases}
\dfrac{50\,\mathrm{mm}-0\,\mathrm{mm}}{l_{50\mathrm{mm}}(\varphi_i)-l_{0\mathrm{mm}}(\varphi_i)}\,a_i+b_{50\mathrm{mm}}, & \text{for } l_{50\mathrm{mm}}(\varphi_i)\le a_i\le l_{0\mathrm{mm}}(\varphi_i)\\[2mm]
\dfrac{75\,\mathrm{mm}-50\,\mathrm{mm}}{l_{75\mathrm{mm}}(\varphi_i)-l_{50\mathrm{mm}}(\varphi_i)}\,a_i+b_{75\mathrm{mm}}, & \text{for } l_{75\mathrm{mm}}(\varphi_i)\le a_i\le l_{50\mathrm{mm}}(\varphi_i)\\[2mm]
\dfrac{100\,\mathrm{mm}-75\,\mathrm{mm}}{l_{100\mathrm{mm}}(\varphi_i)-l_{75\mathrm{mm}}(\varphi_i)}\,a_i+b_{100\mathrm{mm}}, & \text{for } l_{100\mathrm{mm}}(\varphi_i)\le a_i\le l_{75\mathrm{mm}}(\varphi_i)\\[2mm]
\dfrac{125\,\mathrm{mm}-100\,\mathrm{mm}}{l_{125\mathrm{mm}}(\varphi_i)-l_{100\mathrm{mm}}(\varphi_i)}\,a_i+b_{125\mathrm{mm}}, & \text{for } l_{125\mathrm{mm}}(\varphi_i)\le a_i\le l_{100\mathrm{mm}}(\varphi_i)
\end{cases}
\tag{E3}
\]
where $l_{d\,\mathrm{mm}}(\varphi_i)$ is the summed normalized IR reading at bearing $\varphi_i$, as seen in Fig. E3, for $d = 50, \ldots, 125$ mm.
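The piecewise interpolation of Equation (E3) can also be expressed in a compact table-driven form. The sketch below is a simplified illustration, not the firmware implementation; it assumes the calibration limits $l_d(\varphi)$ for $d = 0, 50, 75, 100, 125$ mm have already been evaluated at the current bearing.

```c
/* Piecewise linear interpolation of Eq. (E3): maps a summed normalized IR
   reading 'a' to a distance in mm, given calibration limits lim[] holding
   l_d(phi) for d = 0, 50, 75, 100, 125 mm (readings shrink with distance). */
double interpolate_distance(double a, const double lim[5])
{
    static const double dist[5] = {0.0, 50.0, 75.0, 100.0, 125.0};

    if (a >= lim[0]) {
        return 0.0; /* reading stronger than the 0 mm calibration value */
    }
    for (int k = 0; k < 4; k++) {
        /* reading interval [lim[k+1], lim[k]] maps to [dist[k], dist[k+1]] */
        if (a <= lim[k] && a >= lim[k + 1]) {
            double m = (dist[k + 1] - dist[k]) / (lim[k + 1] - lim[k]);
            return dist[k] + m * (a - lim[k]);
        }
    }
    /* weaker than the 125 mm calibration value: extrapolate the last segment */
    return dist[3] + (dist[4] - dist[3]) / (lim[4] - lim[3]) * (a - lim[3]);
}
```

The same interval-selection logic appears, fully unrolled per 15° bearing sector, in the getProximities() listing below.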
Table E1 shows the error data for both the bearing and proximity estimates; ten readings were recorded and analyzed at each of the distances 50, 75, 100, and 125 mm, and at bearings from 0° to 360° in increments of 15°.
Fig. E3. Sum of the normalized IR readings at 50 (blue), 75 (red), 100 (green), and 125 mm (cyan), at directions from 0° to 360° in increments of 15°, during the calibration stage.
Table E1. Error data for both bearings and proximities at distances 50 to 125 mm from 0° to
360° directions at increments of 15° for the wide-angle IR detectors.
Distances | Avg. Bearing Error (°) | Std. Dev. of Bearing Error (°) | Max Bearing Error (°) | Avg. Proximity Error (mm) | Std. Dev. of Proximity Error (mm) | Max Proximity Error (mm)
50 mm | 3 | 1.8 | 10 | 4 | 2.4 | 10
75 mm | 4 | 1.7 | 7 | 2 | 1.5 | 7
100 mm | 6 | 1.9 | 11 | 5 | 2.5 | 12
125 mm | 10 | 3.0 | 18 | 6 | 2.8 | 14
Below is the source code snippet for estimating both the proximities and bearings of nearby robots
that was described above.
Proximity and Bearing Estimation Pseudocode
Below is the pseudocode for estimating the proximities and bearings of neighbouring robots with the
IR sensor readings. Note that the flag for getting proximities and bearings mentioned below is set by
the SoC of the robot.
If flag for getting proximities and bearings is raised
Start getting all values from all six IR sensors and store them;
Call the function ‘translateRawIR(…)’ to translate raw IR reading values
to normalized IR values and store the normalized IR reading values;
Call the function ‘getBearings(…)’ to get the bearings of all nearby
robots;
Store the bearing data;
Call the function ‘getProximities(…)’ to get the proximities of all nearby
robots;
Set flag to zero and wait for next command;
Else if no flag is raised
Do nothing for now and wait for next command;
Below is the flowchart explaining the above pseudocode in further detail.
Fig. E4. Overall procedure for estimating the proximities and bearings of nearby robots.
Source Code Snippet of the Proximity and Bearing Estimation
...
if (startGettingProximities) {
// starts from the front-forward (FF) position and goes clockwise
// i.e., ADC2 -> ADC3 -> ADC0 -> ADC6 -> ADC7 -> ADC1
uint8_t selectionADC[6] = {0x02, 0x03, 0x00, 0x06, 0x07, 0x01};
for (uint8_t i = 0; i < NUM_OF_IRSENS; i++) {
getAllIR(fx, selectionADC[i]);
// get the amplitudes of our selected channels (frequencies)
for (uint8_t channelNum = 0; channelNum < NUM_OF_CHANNELS; channelNum++) {
proximitiesIR[i][channelNum] = fx[channelBin[channelNum]];
}
}
// translate our raw IR readings to scaled IR readings
translateRawIR(proximitiesIR, scaledIR);
// calculate the relative bearing for all channels
getBearings(scaledIR);
// grab the proximities for all channels
getProximities(scaledIR, bearingsAllChan);
// set our get proximity flag to 0
startGettingProximities = 0;
}
...
void translateRawIR(uint8_t rawIR[NUM_OF_IRSENS][NUM_OF_CHANNELS],
float scaleIR[NUM_OF_IRSENS][NUM_OF_CHANNELS])
{
// scale factor for all channels since amplitude decreases as frequency increases
float scaleChan[9] = {1.0, SCALE_CHANNEL_1, SCALE_CHANNEL_2, SCALE_CHANNEL_3,
SCALE_CHANNEL_4, SCALE_CHANNEL_5, SCALE_CHANNEL_6, SCALE_CHANNEL_7, SCALE_CHANNEL_8};
float maxRawIR_50mm[NUM_OF_IRSENS] = {MAX_RAW_IR_1, MAX_RAW_IR_2, MAX_RAW_IR_3,
MAX_RAW_IR_4, MAX_RAW_IR_5, MAX_RAW_IR_6};
// normalize the raw IR readings with the MAX RAW IR values at 50 mm for each detector
for (uint8_t i = 0; i < NUM_OF_IRSENS; i++) {
for (uint8_t j = 0; j < NUM_OF_CHANNELS; j++) {
scaleIR[i][j] = (((float)rawIR[i][j]) * scaleChan[j]) / maxRawIR_50mm[i];
}
}
}
...
static float sensorIRPositions_SIN[NUM_OF_IRSENS] = {sin(0.0), sin(-1.0472), sin(-2.0944), sin(3.14159),
sin(2.0944), sin(1.0472)};
static float sensorIRPositions_COS[NUM_OF_IRSENS] = {cos(0.0), cos(-1.0472), cos(-2.0944), cos(3.14159),
cos(2.0944), cos(1.0472)};
void getBearings(float proximitiesIR[NUM_OF_IRSENS][NUM_OF_CHANNELS])
{
double sinValue = 0.0;
double cosValue = 0.0;
double tempRadAngle = 0.0;
for (short i = 0; i < NUM_OF_CHANNELS; i++) {
for (short j = 0; j < NUM_OF_IRSENS; j++) {
sinValue += proximitiesIR[j][i] * sensorIRPositions_SIN[j];
cosValue += proximitiesIR[j][i] * sensorIRPositions_COS[j];
}
// arctangent our sin and cos values
tempRadAngle = atan2(sinValue, cosValue);
// convert from radians to degrees
bearingsAllChan[i] = (uint16_t)((tempRadAngle * (180.0 / PI)) + 360.0);
if (bearingsAllChan[i] > 360) {
// convert all degrees to positive values
bearingsAllChan[i] -= 360;
}
// reset our cosine and sine sums
sinValue = 0.0;
cosValue = 0.0;
}
return;
}
void getProximities(float proximitiesIR[NUM_OF_IRSENS][NUM_OF_CHANNELS], uint16_t
bearingsAllChan[NUM_OF_CHANNELS])
{
// add all the scaled IR readings together for each channel
float addedIRReadings[NUM_OF_CHANNELS];
// initialize everything to 0
for (short i = 0; i < NUM_OF_CHANNELS; i++) {
addedIRReadings[i] = 0.0;
}
// add up all the IR readings for each channel
for (short i = 0; i < NUM_OF_CHANNELS; i++) {
for (short j = 0; j < NUM_OF_IRSENS; j++) {
addedIRReadings[i] += proximitiesIR[j][i];
}
}
// get proximities from channels 0 to NUM_OF_CHANNELS
for (uint8_t curChan = 0; curChan < NUM_OF_CHANNELS; curChan++) {
float LIM_50MM;
float LIM_75MM;
float LIM_100MM;
float LIM_125MM;
// between 0 to 15 degrees
if ((bearingsAllChan[curChan] >= 0) && (bearingsAllChan[curChan] <= 15)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_0_15 * bearingsAllChan[curChan] + INT_50MM_0_15;
LIM_75MM = SLO_75MM_0_15 * bearingsAllChan[curChan] + INT_75MM_0_15;
LIM_100MM = SLO_100MM_0_15 * bearingsAllChan[curChan] + INT_100MM_0_15;
LIM_125MM = SLO_125MM_0_15 * bearingsAllChan[curChan] + INT_125MM_0_15;
}
// between 15 to 30 degrees
else if ((bearingsAllChan[curChan] >= 15) && (bearingsAllChan[curChan] <= 30)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_15_30 * bearingsAllChan[curChan] + INT_50MM_15_30;
LIM_75MM = SLO_75MM_15_30 * bearingsAllChan[curChan] + INT_75MM_15_30;
LIM_100MM = SLO_100MM_15_30 * bearingsAllChan[curChan] + INT_100MM_15_30;
LIM_125MM = SLO_125MM_15_30 * bearingsAllChan[curChan] + INT_125MM_15_30;
}
// between 30 to 45 degrees
else if ((bearingsAllChan[curChan] >= 30) && (bearingsAllChan[curChan] <= 45)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_30_45 * bearingsAllChan[curChan] + INT_50MM_30_45;
LIM_75MM = SLO_75MM_30_45 * bearingsAllChan[curChan] + INT_75MM_30_45;
LIM_100MM = SLO_100MM_30_45 * bearingsAllChan[curChan] + INT_100MM_30_45;
LIM_125MM = SLO_125MM_30_45 * bearingsAllChan[curChan] + INT_125MM_30_45;
}
// between 45 to 60 degrees
else if ((bearingsAllChan[curChan] >= 45) && (bearingsAllChan[curChan] <= 60)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_45_60 * bearingsAllChan[curChan] + INT_50MM_45_60;
LIM_75MM = SLO_75MM_45_60 * bearingsAllChan[curChan] + INT_75MM_45_60;
LIM_100MM = SLO_100MM_45_60 * bearingsAllChan[curChan] + INT_100MM_45_60;
LIM_125MM = SLO_125MM_45_60 * bearingsAllChan[curChan] + INT_125MM_45_60;
}
// between 60 to 75 degrees
else if ((bearingsAllChan[curChan] >= 60) && (bearingsAllChan[curChan] <= 75)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_60_75 * bearingsAllChan[curChan] + INT_50MM_60_75;
LIM_75MM = SLO_75MM_60_75 * bearingsAllChan[curChan] + INT_75MM_60_75;
LIM_100MM = SLO_100MM_60_75 * bearingsAllChan[curChan] + INT_100MM_60_75;
LIM_125MM = SLO_125MM_60_75 * bearingsAllChan[curChan] + INT_125MM_60_75;
}
// between 75 to 90 degrees
else if ((bearingsAllChan[curChan] >= 75) && (bearingsAllChan[curChan] <= 90)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_75_90 * bearingsAllChan[curChan] + INT_50MM_75_90;
LIM_75MM = SLO_75MM_75_90 * bearingsAllChan[curChan] + INT_75MM_75_90;
LIM_100MM = SLO_100MM_75_90 * bearingsAllChan[curChan] + INT_100MM_75_90;
LIM_125MM = SLO_125MM_75_90 * bearingsAllChan[curChan] + INT_125MM_75_90;
}
// between 90 to 105 degrees
else if ((bearingsAllChan[curChan] >= 90) && (bearingsAllChan[curChan] <= 105)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_90_105 * bearingsAllChan[curChan] + INT_50MM_90_105;
LIM_75MM = SLO_75MM_90_105 * bearingsAllChan[curChan] + INT_75MM_90_105;
LIM_100MM = SLO_100MM_90_105 * bearingsAllChan[curChan] + INT_100MM_90_105;
LIM_125MM = SLO_125MM_90_105 * bearingsAllChan[curChan] + INT_125MM_90_105;
}
// between 105 to 120 degrees
else if ((bearingsAllChan[curChan] >= 105) && (bearingsAllChan[curChan] <= 120)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_105_120 * bearingsAllChan[curChan] + INT_50MM_105_120;
LIM_75MM = SLO_75MM_105_120 * bearingsAllChan[curChan] + INT_75MM_105_120;
LIM_100MM = SLO_100MM_105_120 * bearingsAllChan[curChan] + INT_100MM_105_120;
LIM_125MM = SLO_125MM_105_120 * bearingsAllChan[curChan] + INT_125MM_105_120;
}
// between 120 to 135 degrees
else if ((bearingsAllChan[curChan] >= 120) && (bearingsAllChan[curChan] <= 135)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_120_135 * bearingsAllChan[curChan] + INT_50MM_120_135;
LIM_75MM = SLO_75MM_120_135 * bearingsAllChan[curChan] + INT_75MM_120_135;
LIM_100MM = SLO_100MM_120_135 * bearingsAllChan[curChan] + INT_100MM_120_135;
LIM_125MM = SLO_125MM_120_135 * bearingsAllChan[curChan] + INT_125MM_120_135;
}
// between 135 to 150 degrees
else if ((bearingsAllChan[curChan] >= 135) && (bearingsAllChan[curChan] <= 150)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_135_150 * bearingsAllChan[curChan] + INT_50MM_135_150;
LIM_75MM = SLO_75MM_135_150 * bearingsAllChan[curChan] + INT_75MM_135_150;
LIM_100MM = SLO_100MM_135_150 * bearingsAllChan[curChan] + INT_100MM_135_150;
LIM_125MM = SLO_125MM_135_150 * bearingsAllChan[curChan] + INT_125MM_135_150;
}
// between 150 to 165 degrees
else if ((bearingsAllChan[curChan] >= 150) && (bearingsAllChan[curChan] <= 165)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_150_165 * bearingsAllChan[curChan] + INT_50MM_150_165;
LIM_75MM = SLO_75MM_150_165 * bearingsAllChan[curChan] + INT_75MM_150_165;
LIM_100MM = SLO_100MM_150_165 * bearingsAllChan[curChan] + INT_100MM_150_165;
LIM_125MM = SLO_125MM_150_165 * bearingsAllChan[curChan] + INT_125MM_150_165;
}
// between 165 to 180 degrees
else if ((bearingsAllChan[curChan] >= 165) && (bearingsAllChan[curChan] <= 180)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_165_180 * bearingsAllChan[curChan] + INT_50MM_165_180;
LIM_75MM = SLO_75MM_165_180 * bearingsAllChan[curChan] + INT_75MM_165_180;
LIM_100MM = SLO_100MM_165_180 * bearingsAllChan[curChan] + INT_100MM_165_180;
LIM_125MM = SLO_125MM_165_180 * bearingsAllChan[curChan] + INT_125MM_165_180;
}
// between 180 to 195 degrees
else if ((bearingsAllChan[curChan] >= 180) && (bearingsAllChan[curChan] <= 195)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_180_195 * bearingsAllChan[curChan] + INT_50MM_180_195;
LIM_75MM = SLO_75MM_180_195 * bearingsAllChan[curChan] + INT_75MM_180_195;
LIM_100MM = SLO_100MM_180_195 * bearingsAllChan[curChan] + INT_100MM_180_195;
LIM_125MM = SLO_125MM_180_195 * bearingsAllChan[curChan] + INT_125MM_180_195;
}
// between 195 to 210 degrees
else if ((bearingsAllChan[curChan] >= 195) && (bearingsAllChan[curChan] <= 210)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_195_210 * bearingsAllChan[curChan] + INT_50MM_195_210;
LIM_75MM = SLO_75MM_195_210 * bearingsAllChan[curChan] + INT_75MM_195_210;
LIM_100MM = SLO_100MM_195_210 * bearingsAllChan[curChan] + INT_100MM_195_210;
LIM_125MM = SLO_125MM_195_210 * bearingsAllChan[curChan] + INT_125MM_195_210;
}
// between 210 to 225 degrees
else if ((bearingsAllChan[curChan] >= 210) && (bearingsAllChan[curChan] <= 225)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_210_225 * bearingsAllChan[curChan] + INT_50MM_210_225;
LIM_75MM = SLO_75MM_210_225 * bearingsAllChan[curChan] + INT_75MM_210_225;
LIM_100MM = SLO_100MM_210_225 * bearingsAllChan[curChan] + INT_100MM_210_225;
LIM_125MM = SLO_125MM_210_225 * bearingsAllChan[curChan] + INT_125MM_210_225;
}
// between 225 to 240 degrees
else if ((bearingsAllChan[curChan] >= 225) && (bearingsAllChan[curChan] <= 240)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_225_240 * bearingsAllChan[curChan] + INT_50MM_225_240;
LIM_75MM = SLO_75MM_225_240 * bearingsAllChan[curChan] + INT_75MM_225_240;
LIM_100MM = SLO_100MM_225_240 * bearingsAllChan[curChan] + INT_100MM_225_240;
LIM_125MM = SLO_125MM_225_240 * bearingsAllChan[curChan] + INT_125MM_225_240;
}
// between 240 to 255 degrees
else if ((bearingsAllChan[curChan] >= 240) && (bearingsAllChan[curChan] <= 255)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_240_255 * bearingsAllChan[curChan] + INT_50MM_240_255;
LIM_75MM = SLO_75MM_240_255 * bearingsAllChan[curChan] + INT_75MM_240_255;
LIM_100MM = SLO_100MM_240_255 * bearingsAllChan[curChan] + INT_100MM_240_255;
LIM_125MM = SLO_125MM_240_255 * bearingsAllChan[curChan] + INT_125MM_240_255;
}
// between 255 to 270 degrees
else if ((bearingsAllChan[curChan] >= 255) && (bearingsAllChan[curChan] <= 270)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_255_270 * bearingsAllChan[curChan] + INT_50MM_255_270;
LIM_75MM = SLO_75MM_255_270 * bearingsAllChan[curChan] + INT_75MM_255_270;
LIM_100MM = SLO_100MM_255_270 * bearingsAllChan[curChan] + INT_100MM_255_270;
LIM_125MM = SLO_125MM_255_270 * bearingsAllChan[curChan] + INT_125MM_255_270;
}
// between 270 to 285 degrees
else if ((bearingsAllChan[curChan] >= 270) && (bearingsAllChan[curChan] <= 285)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_270_285 * bearingsAllChan[curChan] + INT_50MM_270_285;
LIM_75MM = SLO_75MM_270_285 * bearingsAllChan[curChan] + INT_75MM_270_285;
LIM_100MM = SLO_100MM_270_285 * bearingsAllChan[curChan] + INT_100MM_270_285;
LIM_125MM = SLO_125MM_270_285 * bearingsAllChan[curChan] + INT_125MM_270_285;
}
// between 285 to 300 degrees
else if ((bearingsAllChan[curChan] >= 285) && (bearingsAllChan[curChan] <= 300)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_285_300 * bearingsAllChan[curChan] + INT_50MM_285_300;
LIM_75MM = SLO_75MM_285_300 * bearingsAllChan[curChan] + INT_75MM_285_300;
LIM_100MM = SLO_100MM_285_300 * bearingsAllChan[curChan] + INT_100MM_285_300;
LIM_125MM = SLO_125MM_285_300 * bearingsAllChan[curChan] + INT_125MM_285_300;
}
// between 300 to 315 degrees
else if ((bearingsAllChan[curChan] >= 300) && (bearingsAllChan[curChan] <= 315)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_300_315 * bearingsAllChan[curChan] + INT_50MM_300_315;
LIM_75MM = SLO_75MM_300_315 * bearingsAllChan[curChan] + INT_75MM_300_315;
LIM_100MM = SLO_100MM_300_315 * bearingsAllChan[curChan] + INT_100MM_300_315;
LIM_125MM = SLO_125MM_300_315 * bearingsAllChan[curChan] + INT_125MM_300_315;
}
// between 315 to 330 degrees
else if ((bearingsAllChan[curChan] >= 315) && (bearingsAllChan[curChan] <= 330)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_315_330 * bearingsAllChan[curChan] + INT_50MM_315_330;
LIM_75MM = SLO_75MM_315_330 * bearingsAllChan[curChan] + INT_75MM_315_330;
LIM_100MM = SLO_100MM_315_330 * bearingsAllChan[curChan] + INT_100MM_315_330;
LIM_125MM = SLO_125MM_315_330 * bearingsAllChan[curChan] + INT_125MM_315_330;
}
// between 330 to 345 degrees
else if ((bearingsAllChan[curChan] >= 330) && (bearingsAllChan[curChan] <= 345)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_330_345 * bearingsAllChan[curChan] + INT_50MM_330_345;
LIM_75MM = SLO_75MM_330_345 * bearingsAllChan[curChan] + INT_75MM_330_345;
LIM_100MM = SLO_100MM_330_345 * bearingsAllChan[curChan] + INT_100MM_330_345;
LIM_125MM = SLO_125MM_330_345 * bearingsAllChan[curChan] + INT_125MM_330_345;
}
// between 345 to 360 degrees
else if ((bearingsAllChan[curChan] >= 345) && (bearingsAllChan[curChan] <= 360)) {
// get the limits of 50, 75, and 100 mm distances
LIM_50MM = SLO_50MM_345_360 * bearingsAllChan[curChan] + INT_50MM_345_360;
LIM_75MM = SLO_75MM_345_360 * bearingsAllChan[curChan] + INT_75MM_345_360;
LIM_100MM = SLO_100MM_345_360 * bearingsAllChan[curChan] + INT_100MM_345_360;
LIM_125MM = SLO_125MM_345_360 * bearingsAllChan[curChan] + INT_125MM_345_360;
}
//-- check where the summed IR readings lie between and approximate the distance
// it's less than 50 mm
float m = 0;
float b = 0;
if (addedIRReadings[curChan] >= LIM_50MM) {
m = (50.0 - 0.0)/(LIM_50MM - 5.0);
b = 50.0 - m * LIM_50MM;
}
// it's between 50 to 75 mm
else if ((addedIRReadings[curChan] <= LIM_50MM) && (addedIRReadings[curChan] >= LIM_75MM)) {
m = (75.0 - 50.0)/(LIM_75MM - LIM_50MM);
b = 75.0 - m * LIM_75MM;
}
// it's between 75 to 100 mm
else if ((addedIRReadings[curChan] <= LIM_75MM) && (addedIRReadings[curChan] >= LIM_100MM)) {
m = (100.0 - 75.0)/(LIM_100MM - LIM_75MM);
b = 100.0 - m * LIM_100MM;
}
// it's between 100 to 125 mm, or greater
else if (addedIRReadings[curChan] <= LIM_100MM) {
m = (125.0 - 100.0)/(LIM_125MM - LIM_100MM);
b = 100.0 - m * LIM_100MM;
}
// calculate our final relative proximity of the channel
float tempAnswer = (m * addedIRReadings[curChan] + b);
if (tempAnswer < 0.0) {
tempAnswer = 0.0;
}
proximitiesAllChan[curChan] = (uint8_t)tempAnswer;
}
}
...
Appendix F: Source Code
The subsections below provide the main code snippets used for the various tasks of the robot in this thesis's experiments. The complete source code used for the experiments (including the source code and necessary library files for the robot's nRF51422 SoC, as well as the source code for the ATmega328P microcontroller used in the swarm experiments) can be found in the Git repository in [137].
F.1 FHT and GA on ATmega328P
Pseudocode of the FHT
Below is the pseudocode of the FHT algorithm source code.
Turn off IR emitters so that they do not interfere with IR sensor readings;
Start up ADC block and begin getting all values from all six IR sensors;
Store the raw IR reading values;
Load the predetermined trig values to be used for the FHT algorithm;
Call function ‘fhtDitInt(…)’ to complete the FHT using fixed point
operations;
Change the results of the FHT from complex to real using the function
‘complexToReal(…)’;
Store the array of IR sensor readings of different channels;
Below is the flowchart explaining the above pseudocode.
Fig. F1. Procedure for completing the FHT algorithm.
Source Code Snippet of the FHT
/************************************************************************************
*
* FHT source code
*
* Below are the macro values for the below source code:
*
* NUM_OF_IRSENS = 6
* NUM_OF_CHANNELS = 9
*
* The fx variable contains the ADC samples for IR detectors 0 to 5 (i.e., 1 to 6).
* It is also used to temporarily hold the proximity values for all channels and IR
* detectors.
*
*/
// starts from the front-forward (FF) position and goes clockwise
// i.e., ADC2 -> ADC3 -> ADC0 -> ADC6 -> ADC7 -> ADC1
uint8_t selectionADC[6] = {0x02, 0x03, 0x00, 0x06, 0x07, 0x01};
for (uint8_t i = 0; i < NUM_OF_IRSENS; i++) {
getAllIR(fx, selectionADC[i]);
// get the amplitudes of our selected channels (frequencies)
for (uint8_t channelNum = 0; channelNum < NUM_OF_CHANNELS; channelNum++) {
proximitiesIR[i][channelNum] = fx[channelBin[channelNum]];
}
}
...
void getAllIR(int16_t *fx, uint8_t channelADC)
{
//--- turn off IR emitter for measurement ---/
set_IRFrequency(0);
// clear ADC channel mux register
ADMUX &= 0xF0;
// select which ADC channel to read from
ADMUX |= channelADC;
// get up to FHT_LEN number of samples
for (short i = 0; i < FHT_LEN; i++) {
// do single conversion
ADCSRA |= (1<<ADSC);
while(ADCSRA & (1<<ADSC));
fx[i] = (int16_t)ADCW;
}
//--- turn our IR emitter back on ---/
set_IRFrequency(((unsigned short)channelFreq[ourChannel]));
fhtDitInt(fx); // from Simon Inns' FHT library (simon.inns@gmail.com)
complexToReal(fx, 0); // from Simon Inns' FHT library (simon.inns@gmail.com)
}
Pseudocode of the GA
Below is the pseudocode of the GA algorithm source code.
Turn off IR emitters so that they do not interfere with IR sensor readings;
Start up ADC block and begin getting all values from all six IR sensors;
Store the raw IR reading values;
Load the predetermined coefficient values to be used for the GA algorithm;
Begin the GA algorithm calculations using fixed point operations;
Store the array of IR sensor readings of different channels;
Below is the flowchart explaining the above pseudocode.
Fig. F2. Procedure for completing the GA algorithm.
Source Code Snippet of the GA
/************************************************************************************
*
* GA source code
*
* Below are the macro values for the below source code:
*
* NUM_OF_IRSENS = 6
* NUM_OF_CHANNELS = 9
* COEFF_PRECISION = 12
*
* samplesADC contains the ADC samples of the IR detectors for IR detectors 0 to 5
* (i.e., 1 to 6).
*
*/
...
getAllIR_GA(samplesADC);
...
for (uint8_t curIR = 0; curIR < NUM_OF_IRSENS; curIR++) {
// get the amplitudes of our selected channels (frequencies)
for (uint8_t channelNum = 0; channelNum < NUM_OF_CHANNELS; channelNum++) {
//--- process the samples ---/
for (uint16_t index = 0; index < SAMPLE_SIZE; index++) {
multip = coeffTest[channelNum] * Q1;
Q0 = (multip >> COEFF_PRECISION) - Q2 + samplesADC[curIR][index];
Q2 = Q1;
Q1 = Q0;
}
// divide the answer by 2^12 (i.e., COEFF_PRECISION)
multip = coeffTest[channelNum] * Q1;
proximitiesIR[curIR][channelNum] =
((uint16_t)sqrt(Q1 * Q1 + Q2 * (Q2 - (multip >> COEFF_PRECISION)))) >> MAG_REDUCE;
// reset Goertzel variables
Q1 = 0;
Q2 = 0;
}
}
F.2 Angular Velocity to PWM Function
Pseudocode of the Function
Below is the pseudocode of the angular velocity to PWM function.
Initialize calibration parameters that were calculated offline;
Get the desired angular velocity to translate to PWM value;
Check to see which range the angular velocity is in, from the offline
calibrated parameters;
Calculate the PWM value using the specified offline calibrated values with
the desired angular velocity value;
Return the calculated PWM value;
Below is the flowchart explaining the above pseudocode.
Fig. F3. Procedure for translating angular velocity to PWM.
Source Code Snippet of the Function
/************************************************************************************
*
* wLimit_DIRECTION -> the intervals of different angular velocity of the motor
* m_DIRECTION -> slope of the ith interval for the linear interpolation
* b_DIRECTION -> constant "b" of the ith interval for the linear interpolation
* linear interpolation: y = m*x + b
*
* Following PWM powers were recorded:
* 5%, 7.25%, 10%, 12.5%, 15%, 17.5%, 20%, 22.5%, 25%
*
*/
// LEFT MOTOR
static double wLimit_LEFT[8] = {0.6959, 1.2271, 2.7278, 7.7325, 12.5011, 16.5330, 26.7562, 64.1804};
static double m_LEFT[8] = {7.1845, 4.7065, 1.6658, 0.4995, 0.5242, 0.6201, 0.2445, 0.0668};
static double b_LEFT[8] = {0, 1.7245, 5.4558, 8.6373, 8.4461, 7.2486, 13.4569, 18.2126};
// RIGHT MOTOR
static double wLimit_RIGHT[8] = {0.8382, 1.4304, 3.1117, 8.2534, 13.0034, 17.1013, 26.5204, 63.8232};
static double m_RIGHT[8] = {5.9652, 4.2212, 1.4869, 0.4862, 0.5263, 0.6101, 0.2654, 0.0670};
static double b_RIGHT[8] = {0, 1.4618, 5.3729, 8.4870, 8.1561, 7.0671, 12.9609, 18.2226};
unsigned short w2PWM_LEFT(double motorW_L)
{
unsigned short velocityMag_L = 0;
if (motorW_L <= 0) {
velocityMag_L = 0;
}
else if ((motorW_L > 0) && (motorW_L < wLimit_LEFT[0])) {
velocityMag_L = (short)((m_LEFT[0] * motorW_L + b_LEFT[0]) * 10);
}
else if ((motorW_L >= wLimit_LEFT[0]) && (motorW_L < wLimit_LEFT[1])) {
velocityMag_L = (short)((m_LEFT[1] * motorW_L + b_LEFT[1]) * 10);
}
else if ((motorW_L >= wLimit_LEFT[1]) && (motorW_L < wLimit_LEFT[2])) {
velocityMag_L = (short)((m_LEFT[2] * motorW_L + b_LEFT[2]) * 10);
}
else if ((motorW_L >= wLimit_LEFT[2]) && (motorW_L < wLimit_LEFT[3])) {
velocityMag_L = (short)((m_LEFT[3] * motorW_L + b_LEFT[3]) * 10);
}
else if ((motorW_L >= wLimit_LEFT[3]) && (motorW_L < wLimit_LEFT[4])) {
velocityMag_L = (short)((m_LEFT[4] * motorW_L + b_LEFT[4]) * 10);
}
else if ((motorW_L >= wLimit_LEFT[4]) && (motorW_L < wLimit_LEFT[5])) {
velocityMag_L = (short)((m_LEFT[5] * motorW_L + b_LEFT[5]) * 10);
}
else if ((motorW_L >= wLimit_LEFT[5]) && (motorW_L < wLimit_LEFT[6])) {
velocityMag_L = (short)((m_LEFT[6] * motorW_L + b_LEFT[6]) * 10);
}
else if ((motorW_L >= wLimit_LEFT[6]) && (motorW_L < wLimit_LEFT[7])) {
velocityMag_L = (short)((m_LEFT[7] * motorW_L + b_LEFT[7]) * 10);
}
else if ((motorW_L >= wLimit_LEFT[7]) && (motorW_L < 47)) {
velocityMag_L = (short)((m_LEFT[7] * motorW_L + b_LEFT[7]) * 10);
}
else if (motorW_L >= 47) {
velocityMag_L = 228;
}
return velocityMag_L;
}
unsigned short w2PWM_RIGHT(double motorW_R)
{
    if (motorW_R <= 0.0) {
        return 0;
    }
    if (motorW_R >= 47.0) {
        return 232; // saturate at the maximum calibrated PWM duty (23.2%)
    }
    // find the calibration interval; values at or above wLimit_RIGHT[7]
    // are extrapolated using the last interpolation segment
    unsigned short i = 0;
    while ((i < 7) && (motorW_R >= wLimit_RIGHT[i])) {
        i++;
    }
    return (unsigned short)((m_RIGHT[i] * motorW_R + b_RIGHT[i]) * 10);
}
F.3 Discretized PID Controller
Pseudocode of the Discretized PID Controller
Below is the pseudocode of the discretized PID Controller.
Get the current values of global position and orientation from the
centralized controller;
Use the PID formula to calculate the next velocity in the x and y axes;
Calculate the required angular velocities of left and right wheels to achieve
desired velocity;
Send the angular velocities to the function that will translate it into PWM
values;
Store the PWM values of left and right wheels to be used by the motors;
Below is the flowchart explaining the above pseudocode.
Fig. F4. Procedure for completing the discretized PID controller algorithm.
Source Code Snippet of the Discretized PID Controller
...
// calculate v1 and v2 in pixel units
v1 = (K_P * (nextX - currentX)) + (K_D * (x_dot_coord[timeCounter] - robotProfile::currentX_dot)) +
K_I * (v1Past + (nextX - (currentX + currentX_dot + (0.25 * v1Past)) + (nextX - currentX) / 2.0) * T_PERIOD);
v2 = (K_P * (nextY - currentY)) + (K_D * (y_dot_coord[timeCounter] - robotProfile::currentY_dot)) +
K_I * (v2Past + (nextY - (currentY + currentY_dot + (0.25 * v2Past)) + (nextY - currentY) / 2.0) * T_PERIOD);
// calculate the next x and y velocity to find next u1 and u2
nextX_dot = currentX_dot + B_PARAMETER * v1;
nextY_dot = currentY_dot + B_PARAMETER * v2;
// calculate next u1 and u2
u_1_next = sqrt((nextY_dot * nextY_dot) + (nextX_dot * nextX_dot));
u_2_next = ((v2 * cos(currentTheta * (PI/180.0))) - (v1 * sin(currentTheta * (PI/180.0)))) / (u_1_next +
EPSILON);
// calculate the angular velocity for left and right wheel
motorW_L = (u_1_next - 2.0 * AXLE_D * u_2_next) / WHEEL_R;
motorW_R = (u_1_next + 2.0 * AXLE_D * u_2_next) / WHEEL_R;
// check for signs of the angular velocities
if (motorW_L >= 0) {
velocitySign_L = 0;
}
else if (motorW_L < 0) {
velocitySign_L = 1;
}
if (motorW_R >= 0) {
velocitySign_R = 0;
}
else if (motorW_R < 0) {
velocitySign_R = 1;
}
// change our angular velocities to absolute values
motorW_L = fabs(motorW_L);
motorW_R = fabs(motorW_R);
// translate angular velocity to PWM
velocityMag_L = w2PWM_LEFT(motorW_L);
velocityMag_R = w2PWM_RIGHT(motorW_R);
...
Appendix G: Centralized Controller with Overhead Camera
An overhead camera connected to a host PC was utilized with OpenCV in order to track the global
positions and orientations of multiple mROBerTOs. For all the experiments mentioned in this thesis,
a Logitech C920 HD (1080p) camera was used. OpenCV 3.0, running on a host PC with Ubuntu Linux, was
utilized for the tracking portion of the project with the centralized controller.
The robot’s global position is tracked by the overhead camera via each robot’s unique LED colour,
as shown in the figure below for a robot with a red LED. OpenCV first finds the known LED colour of
each robot on the workspace (e.g., the red LED below) and associates that position with the robot’s
global position in pixel coordinates, (𝑥𝑝, 𝑦𝑝).
Fig. G1. Top-view of mROBerTO on the workspace.
Once the global position is found, the global orientation of the robot can be determined using the green
LED position (in pixel units) at the front of the robot, (𝑥𝑔, 𝑦𝑔). The two-argument arctangent function
is used with the robot’s global position and its front green LED position, as shown below, to
calculate the global orientation, 𝜃𝑔𝑙𝑜𝑏𝑎𝑙.
𝜃𝑔𝑙𝑜𝑏𝑎𝑙 = 𝑎𝑟𝑐𝑡𝑎𝑛2(𝑦𝑔 − 𝑦𝑝, 𝑥𝑔 − 𝑥𝑝) (G1)
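Equation (G1) can be sketched in code as follows. Note one practical adjustment, taken from the tracking code later in this appendix: image pixel coordinates have y increasing downward, so the y difference is negated to obtain a Cartesian angle (the same -1 factor appears in detectRobots()). The function name is illustrative, not the thesis's identifier.

```cpp
#include <cmath>

// Global orientation per Eq. (G1), returned in degrees in [0, 360).
// The y difference is negated because image y grows downward.
double globalOrientationDeg(double x_p, double y_p, double x_g, double y_g)
{
    double deg = std::atan2(-(y_g - y_p), x_g - x_p) * 180.0 / M_PI;
    if (deg < 0) deg += 360.0;   // map the result to [0, 360)
    return deg;
}
```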
Lastly, the global coordinates of the robot in pixel units are converted into mm using a conversion
function that was calibrated and programmed offline.
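The pixel-to-mm conversion can be sketched as a linear mapping. This is a minimal sketch only: the scale and origin constants below are assumptions for illustration, not the thesis's actual calibrated values.

```cpp
// Hypothetical calibration constants (assumed, not the thesis's values):
const double MM_PER_PIXEL = 1.25; // workspace scale, mm per pixel
const double ORIGIN_X_PX  = 0.0;  // workspace origin in the image, pixels
const double ORIGIN_Y_PX  = 0.0;

// Convert a tracked pixel coordinate (x_p, y_p) into workspace millimetres.
void pixelToMM(double x_px, double y_px, double *x_mm, double *y_mm)
{
    *x_mm = (x_px - ORIGIN_X_PX) * MM_PER_PIXEL;
    *y_mm = (y_px - ORIGIN_Y_PX) * MM_PER_PIXEL;
}
```

A real calibration would also need to correct for lens distortion; a pure linear map assumes the camera has been rectified.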
The procedure for achieving centralized control of mROBerTO on the workspace with the host PC
and the overhead camera is as follows. First, the host PC starts the centralized control software,
which has full control over the overhead camera via OpenCV. The program begins by initializing the
total number of robots to be controlled on the workspace, each with a unique profile that holds its
current global position on the workspace as well as its unique LED colour in RGB (255, 255, 255)
format. Two threads are then created: a control system thread that handles all the tracked data of
the robots and is responsible for sending the tracked data to each robot via Bluetooth®, and an
image processing thread that handles the LED tracking portion of the centralized control task. In
addition, three timers are created to keep track of start and end times and to help synchronize the
image processing task. Afterwards, one additional thread is created to handle the frame capture of
the workspace using the overhead camera. This concludes the initialization phase of the centralized
controller program; the program then begins collecting the robots’ global position and orientation
data. The collected data are sent periodically (every one second in this thesis) to the robots via
Bluetooth®, and the robots use the PID controller preprogrammed in their firmware with the received
data to reach their next desired positions on the workspace. The program ends once all robots have
completed travelling their user-defined trajectory paths. The figure below shows the overall
procedure for tracking the robots on the workspace with the centralized controller program using
the overhead camera. The next subsections provide the pseudocode and the source code for the
centralized controller program.
Fig. G2. Overall procedure for centralized controller program that uses the overhead camera for
tracking the robots and sending state information to the robots via Bluetooth®.
Pseudocode for the Centralized Controller with Overhead Camera
Below is the pseudocode for the centralized controller using the overhead camera for tracking.
Initialize usable data;
Create a set of threads for each robot on the workspace: control system
thread and image processing thread;
Create a set of timers for starting time and end time to be used by the image
capturing task;
Start frame capture thread task;
While the camera has not yet captured its first frame, wait;
While one or more robots are still finishing their trajectory
Get captured frame for analysis;
Send the captured frame to image processing thread for tracking robots;
Send the tracked data to each control system thread created for each
robot;
// Note that each control system thread will send the tracked position to
// each robot via Bluetooth
While ~1/30 of a second has not yet passed, wait;
Kill all timers;
End all thread tasks;
Output debug and additional program information for the user;
Source Code Snippets of the Centralized Controller
...
/***************************************************************************************************
*
* Below is the robot profile class that each robot on the workspace will be designated to
*
*/
class RobotProfile {
public:
// data to be Sent to Control System
double x;
double y;
double angle;
// matrix Storing frame to process
Mat image;
Mat color;
// colour moments used to determine X/Y Coordinates
CvMoments colorMoment;
double moment10;
double moment01;
double area;
// coordinates of Green Orientation LED
double green_x;
double green_y;
};
...
/***************************************************************************************************
*
* Below is the control system function that handles the coordinates of the robot
*
*/
void controlSystem(int uBotnum){
auto start = chrono::high_resolution_clock::now();
auto end =chrono::high_resolution_clock::now();
unsigned int circleCounter;
chrono::duration<double> diff = end-start;
Mat resultant;
Robot temp;
// get current x,y,theta,
//Require some init function to get values at start
temp = getInitCoords();
// create object for control system with starting x,y,angle + Create Path
robotProfile uBot(temp.x, temp.y, temp.angle);
// set the initial Coordinates
uBot.setInitialCoordinates(temp.x, temp.y, temp.angle);
// create Images that have path and points on it
for (circleCounter = 0; circleCounter < uBot.totalCoordinatePoints; circleCounter++)
{
line(*plannedPathTrajectory, cvPoint(uBot.x_coord[circleCounter], 480 -
uBot.y_coord[circleCounter]), cvPoint(uBot.x_coord[circleCounter+1], 480 - uBot.y_coord[circleCounter+1]),
cvScalar(255, 0, 255), 1); //To draw a continuous stretch of lines
circle(*plannedPathTrajectory, cvPoint(uBot.x_coord[circleCounter], 480 -
uBot.y_coord[circleCounter]), 5, cvScalar(255, 128, 90), 1, 4, 0);
}
cout << "set initial Coordinates to: " << "X Coord: " << temp.x << " Y Coord: " << temp.y << " Angle " <<
temp.angle << endl;
// require some form of synchronization so everything begins at same time
// in practice will need to coordinate between the image processing to make sure their color is chosen
// for now can just wait on Control system
init=0;
while(1){
start = chrono::high_resolution_clock::now();
end =chrono::high_resolution_clock::now();
diff = end-start;
cout << "Started CS waiting" << endl;
//Wait a second
while(diff.count() <= 1){
this_thread::yield();
end = chrono::high_resolution_clock::now();
diff = end-start;
}
cout << "Finished CS waiting" << endl;
// add(frame,*plannedPathTrajectory,resultant);
// imshow("Trajectory",resultant);
// waitKey(1);
//Get the corresponding data -- Might Need a lock here -- can combine data retrieval/send update
temp.x = average_x[uBotnum];
temp.y = average_y[uBotnum];
temp.angle = average_angle[uBotnum];
// send update
// call getNextInput with local x, y, theta values
uBot.getNextInputAndSend(temp.x, temp.y, temp.angle);
// repeat until path finished
if (uBot.timeCounter > (uBot.totalCoordinatePoints - 1))
{
printf("\n[DONE] last command is sent...\n\n");
operate =0;
break;
}
}
}
...
/***************************************************************************************************
*
* Below is the code for sending the state data to the robot via Bluetooth using BlueZ library
*
*/
// send off the next inputs of left and right angular velocity
char sendCommand[97];
sprintf(sendCommand, "gatttool -t random -b C1:8A:D9:67:54:3C --char-write-req --handle=0x000e --value=63%02X%02X%02X%02X", motorPWM_L & 0xFF, motorSign_L & 0xFF, motorPWM_R & 0xFF, motorSign_R & 0xFF);
...
/***************************************************************************************************
*
* Below is the code for detecting all of the robots’ LED (including front green LED)
*
* Author: Tyler Colaco
*
* Note that 'i' is the total number of robots on the workspace
*
*/
void detectRobots(int i)
{
Robot uBot;
//Require some form of synchronization so everything begins at same time
//In practice will need to coordinate between the image processing to make sure their color is chosen
//For now can just wait on Control system
while (init){
this_thread::yield();
}
while(operate){
//Wait on new Frame
while (!flag.test(i)){
this_thread::yield();
}
//Copy Newly captured frame to local working copy
//img[i] = frame;
frame.copyTo(uBot.image);
//GaussianBlur(img, img, Size(1, 1), 0, 0); //Just a filter to reduce the noise
//Filter Color
inRange(uBot.image, RED_LED_LOW_LIMIT, RED_LED_HIGH_LIMIT, uBot.color); // Adjust the scalar limits for a different colour; RED pixels become white, all others black
uBot.colorMoment = moments(uBot.color); // Binary-converted frame used for calculating the moments
uBot.moment10 = cvGetSpatialMoment(&uBot.colorMoment, 1, 0); // Sum of X coordinates of all white pixels
uBot.moment01 = cvGetSpatialMoment(&uBot.colorMoment, 0, 1); // Sum of Y coordinates of all white pixels
uBot.area = cvGetCentralMoment(&uBot.colorMoment, 0, 0); // Sum of all white pixels
//Ensure that the colour was found in the frame
if (uBot.moment10 > 0 && uBot.moment01 > 0 ){
uBot.x = uBot.moment10 / uBot.area;
uBot.y = uBot.moment01 / uBot.area;
// Next step zooming in on Green and getting the angle
// Zooms into the found coordinate with an image 36x36 pixels and the LED in the center (18,18)
uBot.image = uBot.image(Rect(uBot.x - (ROI_dimension/2), uBot.y - (ROI_dimension/2), ROI_dimension, ROI_dimension));
// Draw a Circle to Cover the Red LED
circle(uBot.image,Point(18,18),4,Scalar(0,0,0),-1,8,0);
//Find Green LED Coordinates
inRange(uBot.image, GREEN_LED_LOW_LIMIT, GREEN_LED_HIGH_LIMIT,
uBot.image);
uBot.colorMoment = moments(uBot.image); // Binary-converted frame used for calculating the moments
uBot.moment10 = cvGetSpatialMoment(&uBot.colorMoment, 1, 0); // Sum of X coordinates of all white pixels
uBot.moment01 = cvGetSpatialMoment(&uBot.colorMoment, 0, 1); // Sum of Y coordinates of all white pixels
uBot.area = cvGetCentralMoment(&uBot.colorMoment, 0, 0); // Sum of all white pixels
uBot.green_x = uBot.moment10 / uBot.area;
uBot.green_y = uBot.moment01 / uBot.area;
//To have greater precision of the angle MAY add a multiplier to green coordinate values
// -1 in y exists since (0,0) for image is top left corner, adjustment for cartesian
uBot.angle = atan2(-1*(uBot.green_y - 18),(uBot.green_x - 18)) * 180.0/PI;
if (uBot.angle < 0) uBot.angle+=360;
/*************************
Need to add a safety measure to account for the case where the angle is not calculated,
which would occur if the green LED isn't detected:
- Store the previous angle and reuse it?
- Have a backup method for calculating the angle
************************/
//Adjust y coordinate to represent cartesian coordinates
uBot.y = MAX_Y - uBot.y;
//Push Class into Queue corresponding to index
enqueue(uBot,i);
}
framecounter[i]++;
if (!operate) break;
//Indicate Finished Processing
mtx.lock();
flag.reset(i);
mtx.unlock();
}
cout << "Image Processing signaled to stop" << endl;
}
void initQueue(int n){
int i;
//Create space for the head and tail
for (i=0;i<n;i++){
head[i] = (node*)malloc(sizeof(node));
tail[i] = (node*)malloc(sizeof(node));
head[i]->next = NULL;
tail[i]->next = NULL;
sum_x[i] = 0;
sum_y[i] = 0;
sum_angle[i] = 0;
average_x[i] = 0;
average_y[i] = 0;
average_angle[i] = 0;
queue_size[i] = 0;
}
}
void enqueue(Robot value, int index){
node *newNode = (node*)malloc(sizeof(node));
if (queue_size[index] == FPS){
//Decrement oldest values from running sum
sum_x[index] -= tail[index]->next->data.x;
sum_y[index] -= tail[index]->next->data.y;
sum_angle[index] -= tail[index]->next->data.angle;
/* DOUBLE CHECK IF Updating LAST ELEMENT REFERENCED TO ITSELF IS VALID */
//Update the pointer to last element
tail[index]->next = tail[index]->next->prev;
//Remove the last element
free(tail[index]->next->next);
tail[index]->next->next = NULL;
//Add new element
newNode->next = head[index]->next;
head[index]->next->prev = newNode;
newNode->prev = NULL;
newNode->data.x = value.x;
newNode->data.y = value.y;
newNode->data.angle = value.angle;
//Don't update Size of Queue as it remained constant
}
if (queue_size[index] < FPS && queue_size[index] != 0){
//Connect new node to current first and vice versa
newNode->next = head[index]->next;
head[index]->next->prev = newNode;
newNode->prev = NULL;
newNode->data.x = value.x;
newNode->data.y = value.y;
newNode->data.angle = value.angle;
//Update Size of Queue
queue_size[index] += 1;
}
//First Element in the Queue
if (queue_size[index] == 0){
newNode->next = NULL;
newNode->prev = NULL;
newNode->data.x = value.x;
newNode->data.y = value.y;
newNode->data.angle = value.angle;
//Update Size of Queue
queue_size[index] += 1;
//First element in the queue is both first and last node
tail[index]->next = newNode;
}
//Update Head Pointer
head[index]->next = newNode;
//Update Running Sums
sum_x[index] += value.x;
sum_y[index] += value.y;
sum_angle[index] += value.angle;
//Calculate new Average - May Require a Lock
average_x[index] = sum_x[index]/queue_size[index];
average_y[index] = sum_y[index]/queue_size[index];
//average_angle[index] = sum_angle[index]/queue_size[index];
average_angle[index] = value.angle;
}
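The linked-list bookkeeping in enqueue() maintains a sliding-window (moving) average over the last FPS samples per robot. As a point of comparison only (not part of the thesis code, and with an illustrative class name), the same filter can be sketched with a std::deque, which makes the running-sum update easier to verify:

```cpp
#include <cstddef>
#include <deque>

// Sliding-window average of the last `window` samples; a simplified sketch
// of the per-robot filter that enqueue() maintains with a hand-rolled list.
class MovingAverage {
public:
    explicit MovingAverage(std::size_t window) : window_(window), sum_(0.0) {}

    // Insert a sample and return the average over the current window.
    double add(double sample)
    {
        samples_.push_back(sample);
        sum_ += sample;
        if (samples_.size() > window_) {  // evict the oldest sample
            sum_ -= samples_.front();
            samples_.pop_front();
        }
        return sum_ / static_cast<double>(samples_.size());
    }

private:
    std::deque<double> samples_;
    std::size_t window_;
    double sum_;
};
```

The thesis code averages x and y this way but uses the latest angle sample directly, since naively averaging angles misbehaves near the 0°/360° wrap-around.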
...
/***************************************************************************************************
*
* Below is the code for frame capture using the overhead camera
*
* Author: Tyler Colaco
*
*/
void frameCapture(){
auto start = chrono::high_resolution_clock::now();
auto end =chrono::high_resolution_clock::now();
chrono::duration<double> diff = end-start;
// cap >> frame;
// frame = imread("LEDON4.png");
int i;
//Wait for Control System to Signal Begin
while (init){
this_thread::yield();
}
while (operate){
//Capture a new Frame
cap >> frame;
//Combine the drawn path and the points with the frame
//imshow("Frame", plannedPathTrajectory);
start = chrono::high_resolution_clock::now();
//Signal that there is a new frame
mtx.lock();
//set all the bits high
flag.set();
//clear the bits of unused robots
for (i=n; i < MAX_BOTS; i++){
flag.reset(i);
}
mtx.unlock();
//wait until all processing finished
while(!flag.none()){
this_thread::yield();
}
end =chrono::high_resolution_clock::now();
diff = end-start;
//Make sure waiting 1/FPS before capturing next frame
while(diff.count() <= (double)1/FPS){
this_thread::yield();
end = chrono::high_resolution_clock::now();
diff = end-start;
}
}
cout << "Frame Capture been told to Finish" << endl;
}
...
Appendix H: Details of Four Swarm Scenarios
This appendix provides more detailed descriptions of the experimental procedures used for the four
swarm scenarios introduced in Section 3.2.3 as well as source code snippets of the firmware
programmed on the mROBerTOs to achieve these scenarios.
H.1 Aggregation Behaviour
The aggregation behaviour was implemented on mROBerTOs using the method of [100] with the
behaviour-based approach adapted from [33]. However, since these papers (and others) provide no
explicit formulae for the motor schemas, an artificial potential method was adopted in this work to
move the robots. The approach state for aggregating was achieved using an artificial attractive force.
The equation below defines the artificial attractive force, 𝑭𝑎𝑡𝑡, used for the approach state, in which
a robot looks for nearby robots and moves towards the group:
$$
\boldsymbol{F}_{att,i}(d_i,\theta_i)=
\begin{cases}
\mathbf{0} & d_i \le d_{limit}\\[1ex]
\zeta\,(d_i-d_{limit})\begin{bmatrix}\cos(\theta_i)\\ \sin(\theta_i)\end{bmatrix} & d_{limit} < d_i \le d_{close}\\[2ex]
d_{close}\,\zeta\begin{bmatrix}\cos(\theta_i)\\ \sin(\theta_i)\end{bmatrix} & d_{close} < d_i < d_{far}\\[2ex]
\mathbf{0} & d_i \ge d_{far}
\end{cases}
\tag{H1}
$$

$$
\boldsymbol{F}_{att}=\sum_{i=1}^{N}\boldsymbol{F}_{att,i}
\tag{H2}
$$
where 𝑑𝑖 and 𝜃𝑖 are the 𝑖th nearby robot’s relative distance and bearing, respectively, 𝜁 is the scaling
factor for the attractive force, 𝑑𝑙𝑖𝑚𝑖𝑡 is the distance limit of how close the robot can get, 𝑑𝑐𝑙𝑜𝑠𝑒 is the
transition value from the conic to the parabolic artificial potential well, 𝑑𝑓𝑎𝑟 is the IR sensing radius
limit of the robot (set to 125 mm in this work), and 𝑁 is the total number of nearby robots within
the sensing radius.
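The attractive-force term of Equation (H1) can be sketched in code as follows. This is a minimal sketch: the zeta, d_limit, and d_close values below are illustrative assumptions; only d_far = 125 mm comes from the text.

```cpp
#include <cmath>

const double ZETA    = 0.02;   // attractive scaling factor (assumed)
const double D_LIMIT = 30.0;   // mm, closest allowed distance (assumed)
const double D_CLOSE = 60.0;   // mm, conic/parabolic transition (assumed)
const double D_FAR   = 125.0;  // mm, IR sensing radius limit (from the text)

// Attractive force towards the i-th nearby robot at distance d (mm) and
// bearing theta (rad); zero inside d_limit and beyond the sensing radius.
void attractiveForce(double d, double theta, double *fx, double *fy)
{
    double mag = 0.0;
    if (d > D_LIMIT && d <= D_CLOSE)
        mag = ZETA * (d - D_LIMIT);   // parabolic-well region
    else if (d > D_CLOSE && d < D_FAR)
        mag = ZETA * D_CLOSE;         // conic (constant-magnitude) region
    *fx = mag * std::cos(theta);
    *fy = mag * std::sin(theta);
}
```

Summing this term over all N detected neighbours gives Equation (H2).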
An additional behaviour, referred to as avoid obstacle state herein, was implemented in order to
avoid collisions with other robots (treated as obstacles). The following artificial repulsive force, 𝑭𝑜𝑏𝑠,
was used to achieve this:
$$
\boldsymbol{F}_{obs,i}(d_i,\theta_i)=
\begin{cases}
\eta\left(\dfrac{1}{d_i}-\dfrac{1}{d_{threshold}}\right)\begin{bmatrix}\cos(\theta_i)\\ \sin(\theta_i)\end{bmatrix} & d_i < d_{threshold}\\[2ex]
\mathbf{0} & d_i \ge d_{threshold}
\end{cases}
\tag{H3}
$$

$$
\boldsymbol{F}_{obs}=\sum_{i=1}^{N}\boldsymbol{F}_{obs,i}
\tag{H4}
$$
where 𝜂 is the scaling factor for the repulsive force and 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 is the threshold distance below
which the robot tries to avoid obstacles. One can note that this state is only active when
𝑑𝑖 < 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 and that the magnitude of the repulsive velocity increases as the relative distance to
the nearby robot decreases.
Examining Equations (H1) to (H4), one can note that the approach state is activated when the 𝑖th
nearby robot’s relative distance satisfies 𝑑𝑙𝑖𝑚𝑖𝑡 < 𝑑𝑖 < 𝑑𝑓𝑎𝑟, and the avoid obstacle state is activated
when 𝑑𝑖 < 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑. If 𝑑𝑙𝑖𝑚𝑖𝑡 < 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑, then there exists a range of relative distance values, 𝑑𝑖,
for which both the approach state and the avoid obstacle state can be active simultaneously (i.e., when
𝑑𝑙𝑖𝑚𝑖𝑡 < 𝑑𝑖 < 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑). This is an invalid state and must be avoided, since only one of the two
artificial forces should be active per robot. To prevent this invalid state from occurring, the
constraint 𝑑𝑙𝑖𝑚𝑖𝑡 ≥ 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 is employed.
However, the IR detectors on the secondary sensing module that are responsible for getting the
relative distance values of nearby robots are susceptible to noise. Thus, if the relative distance value,
𝑑𝑖 for a nearby robot is close to the values of 𝑑𝑙𝑖𝑚𝑖𝑡 and 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑, oscillatory motion could be
experienced. Namely, the robot could frequently switch from approach state to avoid obstacle state. In
order to prevent this, the above constraint can be changed to 𝑑𝑙𝑖𝑚𝑖𝑡 > 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 so that the robot stays
stationary and the movement vector is zero if the 𝑖𝑡ℎ nearby robot is within the relative distance range
of 𝑑𝑙𝑖𝑚𝑖𝑡 > 𝑑𝑖 > 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑. In addition, one can set the difference between 𝑑𝑙𝑖𝑚𝑖𝑡 and 𝑑𝑡ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑 to a
large value for better robustness to the oscillatory motion.
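The state selection implied by Equations (H1) to (H4) with the d_limit > d_threshold dead band can be sketched as follows. The enum and function names are illustrative, not the thesis firmware's identifiers, and the check against d_far (sensing radius) is omitted for brevity.

```cpp
enum State { APPROACH, WAIT, REPEL };

// Select the behaviour state from the closest neighbour's distance, with a
// dead band (d_threshold <= d < d_limit) in which the robot stays stationary.
State selectState(double d, double d_limit, double d_threshold)
{
    if (d < d_threshold) return REPEL;  // too close: repulsive force active
    if (d < d_limit)     return WAIT;   // dead band: robot stays stationary
    return APPROACH;                    // attractive force active
}
```

Widening the gap between d_limit and d_threshold widens the dead band, which is what provides the robustness to sensor noise described above.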
Below are the pseudocode as well as the source code snippet of the main loop of the firmware used
on mROBerTOs for achieving aggregation behaviour that was described above.
Pseudocode of the Aggregation Behaviour
Below is the pseudocode of the aggregation behaviour firmware that was programmed on mROBerTOs
for this thesis experiment.
Initialize parameters for artificial forces as well as variables for storing
data of neighbouring robots;
Initialize the state of the robot to ‘approach’;
While indefinite
If current state is ‘approach’
Get neighbouring robots’ distances and bearings;
Else if current state is ‘wait’ or ‘repel’
Get neighbouring robots’ distances and bearings;
Check to see what the lowest distance value was;
Switch to current state case
Case: state is ‘approach’
Use ‘approachRobots(…)’ function and move closer to robots;
Case: state is ‘wait’
Use ‘approachRobots(…)’ to check that there are still robots nearby;
If no robots are nearby
Change state to ‘approach’;
Case: state is ‘repel’
Check that there are still robots being too close;
If no robots are nearby
Change state to ‘wait’;
Else use ‘repulseRobots(…)’ function and move away from robots;
Check that no nearby robot’s distance is too close and passed the
threshold value for ‘wait’ and/or ‘repel’ states;
If nearby robot did pass the threshold value for ‘wait’ state
Change state to ‘wait’;
Else if nearby robot passed the threshold value for ‘repel’ state
Change state to ‘repel’;
Below is a flowchart explaining the above pseudocode.
Fig. H1. Flowchart explaining the aggregation behaviour firmware.
Source Code Snippet of Aggregation Behaviour
...
uint8_t proximityData[NUM_OF_CHANNELS]; // holds all the proximity data of all nearby robots
uint16_t bearingData[NUM_OF_CHANNELS]; // holds all the bearing data of all nearby robots
// below is for getting the lowest out of how many tries
#define TIMES_TRYING 3
uint8_t proximityData_2D[NUM_OF_CHANNELS][TIMES_TRYING];
// vectors for robot motion forces
float v_x = 0, v_y = 0;
//--- STATE VARIABLE ---/
// 1 -> approach
// 2 -> wait
// 3 -> repel
uint8_t currentState = 1;
// enter main loop
for (;;)
{
// get our averaged proximities and bearings data
//getAvgData(MY_CHANNEL, proximityData, bearingData);
if (currentState == 1) {
setEmitChannel(MY_CHANNEL);
startGetProximities(0);
nrf_delay_ms(WAIT_MS_UPDATE);
getAllData(proximityData, bearingData);
}
else if ((currentState == 2) || (currentState == 3)) {
for (uint8_t i = 0; i < TIMES_TRYING; i++) {
setEmitChannel(MY_CHANNEL);
startGetProximities(0);
nrf_delay_ms(WAIT_MS_UPDATE);
getAllData(proximityData, bearingData);
// copy the new data into the 2D array
for (uint8_t j = 0; j < NUM_OF_CHANNELS; j++) {
proximityData_2D[j][i] = proximityData[j];
}
}
// check to see what the lowest proximity value was
for (uint8_t i = 0; i < NUM_OF_CHANNELS; i++) {
uint8_t smallestValue = 255;
for (uint8_t j = 0; j < TIMES_TRYING; j++) {
if (smallestValue > proximityData_2D[i][j]) {
smallestValue = proximityData_2D[i][j];
}
proximityData[i] = smallestValue;
}
}
}
// evaluate our current state
switch (currentState) {
//--- approach state ---/
case 1:
// start off our approach robot behaviour
approachRobots(&v_x, &v_y, MY_CHANNEL, proximityData, bearingData);
// ensure LED turns on!
for (short i = 0; i < 10; i++) {
setLEDs(0,1,0);
}
turnUsingGyro(atan2(v_y, v_x) * (180.0/PI), &PWM1, &PWM2);
moveForward(600, &PWM1, &PWM2);
break;
//--- wait state ---/
case 2:
approachRobots(&v_x, &v_y, MY_CHANNEL, proximityData, bearingData);
// break out of wait state if there are nearby robots
if ((fabs(v_x) > 0) || (fabs(v_y) > 0)) { // fabs: v_x and v_y are floats
// change to approach state
currentState = 1;
setLEDs(0,1,0);
}
// robots are still close by, stay in wait state
else {
// ensure LED turns on!
for (short i = 0; i < 10; i++) {
setLEDs(0,0,1);
}
while (app_pwm_channel_duty_set(&PWM1, 0, 0) == NRF_ERROR_BUSY); // left backward
while (app_pwm_channel_duty_set(&PWM1, 1, 0) == NRF_ERROR_BUSY); // left forward
while (app_pwm_channel_duty_set(&PWM2, 0, 0) == NRF_ERROR_BUSY); // right forward
while (app_pwm_channel_duty_set(&PWM2, 1, 0) == NRF_ERROR_BUSY); // right backward
}
break;
//--- repel state ---/
case 3:
repulseRobots(&v_x, &v_y, MY_CHANNEL, proximityData, bearingData);
// break out of repel state if there are no nearby robots
if ((fabs(v_x) == 0) && (fabs(v_y) == 0)) { // both components zero: no nearby robots
// ensure LED turns on!
for (short i = 0; i < 10; i++) {
setLEDs(0,0,1);
}
while (app_pwm_channel_duty_set(&PWM1, 0, 0) == NRF_ERROR_BUSY); // left backward
while (app_pwm_channel_duty_set(&PWM1, 1, 0) == NRF_ERROR_BUSY); // left forward
while (app_pwm_channel_duty_set(&PWM2, 0, 0) == NRF_ERROR_BUSY); // right forward
while (app_pwm_channel_duty_set(&PWM2, 1, 0) == NRF_ERROR_BUSY); // right backward
// change to wait state
currentState = 2;
}
// robots are still close by, repel
else {
// start our repel robot behaviour
// ensure LED turns on!
for (short i = 0; i < 10; i++) {
setLEDs(1,0,0);
}
turnUsingGyro(atan2(v_y, v_x) * (180.0/PI), &PWM1, &PWM2);
moveForward(600, &PWM1, &PWM2);
}
break;
default:
break;
}
//--- CHECK FOR STATE FUNCTION ---/
for (uint16_t i = 0; i < NUM_OF_CHANNELS; i++) {
if (i == MY_CHANNEL) {
// do nothing...
}
else {
if (proximityData[i] < D_LIMIT) {
if (proximityData[i] < D_THRESH) {
// robot is too close, change to REPEL state
currentState = 3;
}
else {
// all is good, set to WAIT state
currentState = 2;
}
}
else {
// nothing is 'too' near, go back to APPROACH state!
currentState = 1;
}
}
}
}
...
H.2 Chain Formation
The second collective behaviour demonstrated by mROBerTOs is chain formation using the method
from [101]. Prior to the start of the chain formation experiment, one designated robot is set to be the
nest of the chain (i.e., the very first robot in the chain formation) with the robot illuminating its green
LED. The rest of the robots are in the wait state, where the robots stay put indefinitely until chosen to
be the next tail of the chain (i.e., next robot to become part of the chain). Before the experiment starts,
the current nest of the chain is also the current tail of the chain. The chain formation procedure is as
follows,
a. A robot in the wait state within the group is randomly chosen, initially from the front column and
then the back, to be the next tail of the chain.
b. The chosen robot moves towards the existing chain, or towards the nest if the chain has yet to be formed.
c. Once the new tail-to-be robot is close to the chain, it starts moving perpendicularly, in the clockwise
direction, along the chain.
d. When the robot reaches the end of the chain, it rotates around the end (tail) of the chain until the
desired orientation relative to the end (tail) of the chain is reached.
e. Once the desired angle is reached, this robot is now the new tail of the chain (the new tail robot is
indicated with green LED). The new tail robot will relay a message to the robots that are in the wait
state in order to randomly select a new robot to continue with increasing the chain size. This
procedure repeats until no robots are left in the group in the wait state and all robots are part of the
chain.
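The steps above suggest a small per-robot state machine. The labels below are purely illustrative assumptions, not the thesis firmware's identifiers:

```cpp
// Illustrative state labels for steps (a)-(e) of the chain-formation
// procedure above (assumed names, not the thesis firmware's identifiers).
enum ChainState {
    CHAIN_WAIT,      // (a) idle until randomly selected as the next tail
    CHAIN_APPROACH,  // (b) move towards the nest or the existing chain
    CHAIN_FOLLOW,    // (c) move perpendicularly (clockwise) along the chain
    CHAIN_ROTATE,    // (d) rotate around the current tail to the target angle
    CHAIN_TAIL       // (e) act as the new tail and signal the waiting robots
};
```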
Similar to the aggregation behaviour, [101] discusses all the required motor schemas and
behaviours in order to achieve chain formation but does not explicitly express which approach is used
to create the movement vectors of the robots. Thus, as for the aggregation behaviour, the artificial
potential approach was adopted in this work to determine the next movement vectors for the robots in
order to achieve chain formation.
Reviewing the previous list of procedures for achieving chain formation, one can note that there
are two motor schemas that are required: move towards a robot (i.e., nest of the chain) and move
perpendicular to the robots. Moving towards a robot can be implemented using Equation (H2). As for
moving perpendicular to robots in order to move along the chain, an artificial force vector is needed
that points perpendicular to a robot in the clockwise direction. In addition, this artificial perpendicular
force vector needs to keep the robot at a fixed radius to other robots in the chain so that it does not drift
away from the chain or get too close and collide into the chain of robots.
When the robot moves along a chain, it should latch onto a target robot and move perpendicular to
that robot with a fixed radius until it gets close enough to the next robot on the chain. This can be seen
in Fig. 18c, where the moving robot first latches onto the nest robot and moves in a perpendicular
direction until it reaches a certain distance to the second robot in the chain. Next, the moving robot
latches onto the second robot in the chain and it moves in the perpendicular direction until end of the
chain is reached. In summary, the moving robot latches onto different robots in the chain by
continuously changing its target robot in order to move along the chain until the end of the chain is
reached.
The following equation was used to move the robot perpendicular to a chosen target robot (i.e., the robot to latch onto) while keeping the distance between the two robots within a fixed range:
$$
\boldsymbol{F}_{perp}(d,\theta)=
\begin{cases}
\eta\left(\dfrac{1}{d}-\dfrac{1}{d_{limit}}\right)\begin{bmatrix} \mathrm{c}_{\theta} \\ \mathrm{s}_{\theta} \end{bmatrix}+\rho_{w}\begin{bmatrix} \mathrm{c}_{\theta+90^{\circ}} \\ \mathrm{s}_{\theta+90^{\circ}} \end{bmatrix}, & d\le d_{limit},\\[6pt]
\rho_{w}\begin{bmatrix} \mathrm{c}_{\theta+90^{\circ}} \\ \mathrm{s}_{\theta+90^{\circ}} \end{bmatrix}, & d_{limit}<d\le d_{perp},\\[6pt]
\zeta\,(d-d_{limit})\begin{bmatrix} \mathrm{c}_{\theta} \\ \mathrm{s}_{\theta} \end{bmatrix}+\rho_{w}\begin{bmatrix} \mathrm{c}_{\theta+90^{\circ}} \\ \mathrm{s}_{\theta+90^{\circ}} \end{bmatrix}, & d_{perp}<d\le d_{close},\\[6pt]
d_{close}\,\zeta\begin{bmatrix} \mathrm{c}_{\theta} \\ \mathrm{s}_{\theta} \end{bmatrix}+\rho_{w}\begin{bmatrix} \mathrm{c}_{\theta+90^{\circ}} \\ \mathrm{s}_{\theta+90^{\circ}} \end{bmatrix}, & d_{close}<d,
\end{cases}
\tag{H5}
$$
where $\boldsymbol{F}_{perp}$ is the next force movement vector, $d$ and $\theta$ are the relative distance and bearing of the target robot, $\mathrm{c}_{\theta}$ and $\mathrm{s}_{\theta}$ denote $\cos\theta$ and $\sin\theta$, respectively, $\eta$ and $\zeta$ are the scaling factors for the repulsive and attractive forces, respectively, and $\rho_{w}$ is the weight parameter for the perpendicular movement vector.
In order to prevent collisions, an artificial repulsive force was added to the above equation; it is used when the two robots are closer than $d_{limit}$. In addition, to prevent the moving robot from drifting away from the target robot and the chain itself, an artificial attractive force was added; it is used when the distance between the moving and target robots satisfies $d > d_{perp}$. Lastly, when this distance is within the ideal range (i.e., $d_{limit} < d \le d_{perp}$), the robot moves only in the perpendicular direction.
Below are the pseudocode and a source code snippet of the main loop of the firmware used on the mROBerTOs for achieving the chain formation described above.
Pseudocode of the Chain Formation
Below is the pseudocode and the flowchart of the chain formation firmware for this thesis experiment.
Initialize parameters for artificial forces as well as variables for storing
data of neighbouring robots;
If not the first robot
Initialize the state of the robot to ‘wait’;
Else
Initialize the state of the robot to ‘tail’;
While indefinite
If robot is not in ‘wait’ state
Get the proximities and bearings of nearby robots;
Switch to current state case
Case: state is ‘wait’
Continue to listen on ANT Wireless if chosen to be the next tail;
If chosen to explore the chain to become the next tail
Change state to ‘explore’
Case: state is ‘explore’
First, move towards the nest (i.e., first robot in the chain)
Use artificial forces to move along the chain until becoming the
new tail of the chain;
If this robot is the new tail
Change state to ‘tail’
Case: state is ‘tail’
Check if the current robot exploring the chain has become the new tail;
If the current exploring robot is the new tail
Send out ANT message indicating there is a new tail;
Change state to ‘chain’
Case: state is ‘chain’
Continue to emit IR signals and turn off ANT Wireless;
Fig. H2. Flowchart explaining the chain formation firmware.
Source Code Snippet of Chain Formation
...
// vectors for robot motion forces
float v_x = 0, v_y = 0;
uint8_t proximityData[NUM_OF_CHANNELS];
uint16_t bearingData[NUM_OF_CHANNELS];
//--- STATE NUMBERS ---/
if (MY_CHANNEL == 0) {
currentState = 2;
}
else {
currentState = 0;
}
// 0 -> WAIT
// 1 -> EXPLORE CHAIN
#define NEST_ROB_ID 0
uint8_t nestFound = 0;
uint8_t curRobot2TurnOn = 0;
// 2 -> TAIL
uint8_t initTail = 1;
#define ALLOWED_THETA_DIFF 25
uint16_t desiredBearing = 0;
double desiredBearingAvg = 0.0;
double counterAvg = 1.0;
uint8_t checkCounter = 0;
// 3 -> CHAIN
uint8_t initChain = 1;
// Enter main loop
for (;;)
{
// get our relative distance and bearing data of nearby robots
if (currentState != 0) {
//while(1) {
startGetProximities(0);
// wait 340 ms for the measurement to finish
srand(time(NULL));
nrf_delay_ms(WAIT_MS_UPDATE);// + (rand()%10*MY_CHANNEL));
getAllData(proximityData, bearingData);
}
//--- go to the correct chain behaviour ---/
switch (currentState) {
//------------ STATE 0: WAIT ------------/
case 0:
// do nothing...
break;
//------------ STATE 1: EXPLORE CHAIN ------------/
case 1:
// emit your channel signal
setEmitChannel(MY_CHANNEL);
// you already found the nest so explore the chain!
if (nestFound) {
// indicate we have changed our state
for (short i = 0; i < 5; i++) {
setLEDs(1,0,1);
nrf_delay_ms(50);
}
if (curRobot2TurnOn == (MY_CHANNEL - 1)) {
orbitRobotClkWise(&v_x, &v_y, CONST_PERP_DIS, curRobot2TurnOn, proximityData, bearingData);
}
else if (curRobot2TurnOn == 2) {
orbitRobotClkWise(&v_x, &v_y, 30, curRobot2TurnOn, proximityData, bearingData);
}
else {
orbitRobotClkWise(&v_x, &v_y, CONST_PERP_DIS, curRobot2TurnOn, proximityData, bearingData);
}
if ((v_x != 0) || (v_y != 0)) {
turnUsingGyro((180.0/PI)*atan2(v_y, v_x), &PWM1, &PWM2);
moveForward(400, &PWM1, &PWM2);
}
// check if we're near the next robot on the chain
if (((curRobot2TurnOn+1) == 2)) {
if (proximityData[curRobot2TurnOn+1] <= (85)) {
curRobot2TurnOn++;
}
}
else {
if (proximityData[curRobot2TurnOn+1] <= (40)) {
if (curRobot2TurnOn < (MY_CHANNEL - 1)) {
curRobot2TurnOn++;
}
}
}
}
// first, go to the nest and then explore the chain!
else {
// indicate we have changed our state
for (short i = 0; i < 5; i++) {
setLEDs(1,0,0);
nrf_delay_ms(50);
}
// check if we're near the nest
if (proximityData[curRobot2TurnOn] <= (CONST_PERP_DIS + (35))) {
nestFound = 1;
}
else {
goToNest(&v_x, &v_y, 0, curRobot2TurnOn, proximityData, bearingData);
if ((v_x != 0) || (v_y != 0)) {
turnUsingGyro((180.0/PI)*atan2(v_y, v_x), &PWM1, &PWM2);
moveForward(1000, &PWM1, &PWM2);
}
}
}
break;
//------------ STATE 2: TAIL ROBOT ------------/
case 2:
// emit your channel signal
setEmitChannel(MY_CHANNEL);
// initialize for our tail robot
if (initTail) {
// indicate we have changed our state
for (short i = 0; i < 5; i++) {
setLEDs(0,1,0);
nrf_delay_ms(50);
}
// encode our ANT message
tx_buffer[0] = MY_CHANNEL;
tx_buffer[1] = 0x00;
// turn on our ANT TX channel
err_code = sd_ant_channel_open(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
// initialize of tail robot complete!
initTail = 0;
}
// check to see if the exploring robot has reached the desired theta
if (MY_CHANNEL == 0) {
desiredBearing = 90;
}
else {
if (counterAvg < 1000.0) {
desiredBearing = bearingData[MY_CHANNEL-1] + 180;
if (desiredBearing <= 360) {
desiredBearing += 360;
}
desiredBearingAvg = (((double)desiredBearing) * (1.0 / counterAvg)) + (desiredBearingAvg *
((counterAvg - 1.0)/counterAvg));
counterAvg++;
}
}
if (MY_CHANNEL < (NUM_OF_CHANNELS-1)) {
if ((((bearingData[MY_CHANNEL+1] + 360) >= (desiredBearingAvg - (ALLOWED_THETA_DIFF*2)))
&& (bearingData[MY_CHANNEL+1] + 360) <= (desiredBearingAvg))
&& ((proximityData[MY_CHANNEL+1]) <= (80))) {
checkCounter++;
if (checkCounter > 5) {
// stop the exploring robot, it's desired bearing has been reached
// encode our ANT message
tx_buffer[0] = MY_CHANNEL;
tx_buffer[1] = 0x01;
}
}
else {
checkCounter = 0;
}
}
break;
//------------ STATE 3: CHAIN ROBOT ------------/
case 3:
while(1) {
// emit your channel signal
setEmitChannel(MY_CHANNEL);
// indicate we have changed our state
for (short i = 0; i < 5; i++) {
setLEDs(0,0,1);
nrf_delay_ms(50);
}
// turn off our ANT TX channel
if (initChain) {
err_code = sd_ant_channel_close(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
initChain = 0;
}
nrf_delay_ms(1000);
}
break;
default:
break;
}
}
...
H.3 Collective Exploration
The third collective behaviour demonstrated by mROBerTOs was the collective exploration using the
same method found in [102]. Initially, the robots are placed relatively close to one another in a corner
of the walled arena (Fig. 20a). With the use of artificial repulsive forces, the robots move away from
both each other and the walls (obstacles) around them when the experiment begins. The equations,
with artificial forces, expressed in [102] were used to implement the artificial repulsive forces used in
the collective exploration behaviour:
$$
\boldsymbol{F} = \boldsymbol{F}_{o} + \boldsymbol{F}_{n},
\tag{H6}
$$
where $\boldsymbol{F}_{o}$ and $\boldsymbol{F}_{n}$ are the repulsive forces due to obstacles (i.e., walls) and other mobile robots, respectively, with:
$$
\boldsymbol{F}_{o} = -\nabla U_{o} = -k_{o}\sum_{i}\frac{1}{d_{o,i}^{2}}\cdot\frac{\boldsymbol{d}_{o,i}}{d_{o,i}},
\tag{H7}
$$

$$
\boldsymbol{F}_{n} = -k_{n}\sum_{i}\frac{1}{d_{n,i}^{2}}\cdot\frac{\boldsymbol{d}_{n,i}}{d_{n,i}}.
\tag{H8}
$$
$k_{o}$ and $k_{n}$ are the scaling factors for the repulsive forces from obstacles and from other mobile robots, respectively, and $d_{o,i}$ ($d_{n,i}$) is the relative distance to the $i$-th obstacle (robot), with $\boldsymbol{d}_{o,i}$ ($\boldsymbol{d}_{n,i}$) the corresponding relative position vector.
Below are the pseudocode and a source code snippet of the main loop of the firmware used on the mROBerTOs for achieving the collective exploration described above.
Pseudocode of the Collective Exploration
Below is the pseudocode and the flowchart of the collective exploration firmware for this thesis
experiment.
Initialize parameters for artificial forces as well as variables for storing
data of neighbouring robots;
Change robot state to ‘wait’
While indefinite
If robot is in ‘wait’ state
Do nothing until command is given by the user to start the experiment;
Else
Get proximities and bearings of nearby robots;
Change movement vector using the proximities and bearings information
of nearby robots;
Get proximities and bearings of nearby walls;
Adjust movement vector using the proximities and bearings information
of nearby walls;
Use the calculated movement vector for next robot movement;
Fig. H3. Flowchart explaining the collective exploration firmware.
Source Code Snippet of Collective Exploration
...
// vectors for robot motion forces
float F_x = 0, F_y = 0;
//float v_x = 0, v_y = 0;
uint8_t firstTime = 1;
uint8_t proximityData[NUM_OF_CHANNELS];
uint16_t bearingData[NUM_OF_CHANNELS];
// enter main loop
for (;;)
{
if (currentState == 0) {
// do nothing...
nrf_delay_ms(100);
}
else if (currentState == 1) {
if (firstTime) {
for (short i = 0; i < 10; i++) {
setLEDs(0,1,0);
nrf_delay_ms(100);
}
// initialize our parameters
K_O = ((float)K_O_TEMP) / 100.0;
K_N = ((float)K_N_TEMP) / 100.0;
V_STOP = ((float)V_STOP_TEMP) / 100.0;
firstTime = 0;
}
// start emitting our IR channel
setEmitChannel(MY_CHANNEL);
//--- get the proximities of nearby robots ---/
startGetProximities(0);
nrf_delay_ms(WAIT_MS_UPDATE);
getAllData(proximityData, bearingData);
repulseRobots(&F_x, &F_y, MY_CHANNEL, proximityData, bearingData);
//--- get the proximities of obstacles (walls) ---/
startGetProximities(1);
nrf_delay_ms(WAIT_MS_UPDATE);
getAllData(proximityData, bearingData);
repulseObstacles(&F_x, &F_y, MY_CHANNEL, proximityData, bearingData);
//--- only move if velocity is above deadzone band ---/
if (sqrt((F_x*F_x) + (F_y*F_y)) >= V_STOP) {
turnUsingGyro((180.0/PI)*atan2(F_y, F_x), &PWM1, &PWM2);
moveForward(1300, &PWM1, &PWM2);
}
}
nrf_delay_ms(50);
//nrf_delay_ms(1000);
}
...
H.4 Dynamic Task Allocation
The fourth collective behaviour demonstrated by mROBerTOs was dynamic task allocation using the method found in [24], more specifically, the Card Dealer's algorithm. Before the algorithm can be applied, the robots to be assigned tasks must be within communication range in a mesh-type network whose links can be represented by a directed graph 𝒢, in which the nodes represent the robots and the edges represent the communication links.
Furthermore, the robots must be able to measure the relative distances of nearby robots in order to detect their presence, which determines how long each communication period lasts: the more robots present in the group, the longer the communication period between robots should be. In addition, every robot in the group must be assigned a unique ID.
The Card Dealer's algorithm is completed in a series of stages. At the end of each stage, one of the robots that participated in the stage is dealt a task, and that robot is eliminated from all future stages before the next stage begins. This cycle repeats until all robots within the group have been assigned a task.
In each stage, a suppression technique is applied to select which robot is assigned a task at the end of the stage. At the beginning of a new stage, all robots without an assigned task broadcast their unique IDs so that every robot in the group becomes aware of them. During the stage, whenever a robot receives a broadcast containing an ID lower than the one it is currently broadcasting, it stops broadcasting its own ID and relays the lower one instead. By the end of the stage, all robots are therefore broadcasting the lowest ID relayed in the entire group. The robot holding that lowest ID is assigned a task, and the next stage begins when this newly assigned robot gives the green light to the other robots. This cycle repeats until no robot is left without a task.
One should note that the Card Dealer's algorithm is scalable to any number of robots. In addition, the algorithm can be made robust to the sudden addition or removal of robots during runtime by adding a timeout feature, which can be explained through a simple example. Suppose the robot with the lowest ID is removed, and all of its communication links are cut off, just before it is assigned a task. The stage then ends with no robot being assigned a task, while all other robots wait for the green light from the newly assigned robot to start the next stage. Since that robot was removed before being assigned a task and is now out of the group, the remaining robots would wait indefinitely. With a timeout feature, the waiting robots instead count down to a preset timeout value and automatically start the new stage without needing the command from the newly assigned robot.
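The suppression step of each stage can be sketched as follows. This is a simplified, fully connected simulation of one stage (every robot hears every other), so it ignores multi-hop relaying and the timeout feature; the names are illustrative.

```c
#include <stdint.h>

#define NO_ID 0xFF   /* sentinel: no unassigned robot heard yet */

/* One suppression update: a robot currently broadcasting `mine` hears
 * `heard` and keeps whichever ID is lower (Card Dealer suppression). */
static uint8_t suppress(uint8_t mine, uint8_t heard)
{
    return (heard < mine) ? heard : mine;
}

/* Simulate one stage over n robots with unique IDs ids[] and task flags
 * assigned[]: by the end of the stage every robot broadcasts the lowest
 * unassigned ID, and that robot is dealt the stage's task. Returns the
 * winning ID, or NO_ID if every robot already has a task. */
static uint8_t run_stage(const uint8_t *ids, uint8_t *assigned, int n)
{
    uint8_t lowest = NO_ID;
    for (int i = 0; i < n; i++)
        if (!assigned[i])
            lowest = suppress(lowest, ids[i]);
    for (int i = 0; i < n; i++)
        if (ids[i] == lowest)
            assigned[i] = 1;   /* winner is eliminated from later stages */
    return lowest;
}
```

Running the stage repeatedly deals tasks in ascending ID order until `run_stage` returns `NO_ID`.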
Below are the pseudocode and a source code snippet of the main loop of the firmware used on the mROBerTOs for achieving the dynamic task allocation described above.
Pseudocode of the Dynamic Task Allocation
Below is the pseudocode and the flowchart of the dynamic task allocation firmware for this thesis
experiment.
Initialize timing parameters for waiting values in different stages during
the task allocations;
While indefinite
If current state is ‘wait’
Wait until command is given by the user to start the experiment;
If current state is ‘determine T’
Begin broadcasting robot ID and determine the robot with the smallest
valued ID;
If this robot has the smallest valued ID
Set this robot as the root of the tree;
Get proximities and bearings of all nearby robots and broadcast it;
Determine the level of the tree using other robots’ broadcasted
information and determine period value ‘T’ using the level
values;
// Above is to determine how many robots are between the root and
// the furthest leaf (i.e., robot)
Set current state to ‘assign tasks’;
Begin broadcasting to other robots the value of ‘T’ and that task
will now be assigned and to change their current state to ‘assign
tasks’;
Else if this robot does not have the lowest robot ID
If internal state is ‘1’
Get proximities and bearings of all nearby robots;
Send the proximities and bearings information to root of the
tree;
Set internal state to ‘2’;
Else if internal state is ‘2’
Wait until root has given command to begin assigning tasks;
Else if current state is ‘assign tasks’
Begin broadcasting own robot ID;
Wait and listen, and make a list of robot IDs;
If this robot has the smallest ID
Assign task to itself;
Change current state to ‘task assigned’;
Notify other robots that task has been assigned and next round
should begin;
Stop broadcasting own robot ID and prepare for next round;
Else if current state is ‘task assigned’
Wait for other robots to finish broadcasting;
When a nearby robot notifies this robot that task has been assigned
and next round should begin, broadcast the repeated message for
other robots;
Fig. H4. Flowchart explaining the dynamic task allocation firmware.
Source Code Snippet of Dynamic Task Allocation
...
// our nearby robots’ data
uint8_t proximityData[NUM_OF_CHANNELS];
uint16_t bearingData[NUM_OF_CHANNELS];
uint8_t nearbyRobots[NUM_OF_CHANNELS];
for (uint8_t j = 0; j < NUM_OF_CHANNELS; j++) {
nearbyRobots[j] = 0;
}
uint8_t firstTime = 1;
uint8_t firstTime_state_2 = 1;
uint8_t firstTime_state_3 = 1;
currentStage = 1;
internalState = 0;
// set level counter to highest possible
levelCounter = 0xFE;
// reset our highest level counter
highestLevel = 1;
// enter main loop
for (;;)
{
if (currentState == 0) {
// do nothing...
nrf_delay_ms(100);
}
//--- STATE 1: DETERMINE 'T' ---/
#define INIT_WAIT_PERIOD 4000
else if (currentState == 1) {
if (firstTime) {
// find the smallest ID in the group
trackAndBroadcastSmallest(INIT_WAIT_PERIOD);
firstTime = 0;
}
// check if this robot is the lowest
if (lowestID == MY_CHANNEL) {
for (short i = 0; i < 10; i++) {
setLEDs(0,1,0);
nrf_delay_ms(100);
}
// this robot is the root of the tree
levelCounter = 0;
// check which robots are nearby
#define CHECKING_TIMES 7
#define R_SENSE 70
for (uint8_t i = 0; i < CHECKING_TIMES; i++) {
setEmitChannel(MY_CHANNEL);
//--- get the proximities of nearby robots ---/
startGetProximities(0);
nrf_delay_ms(WAIT_MS_UPDATE);
getAllData(proximityData, bearingData);
// check all channels
for (uint8_t j = 0; j < NUM_OF_CHANNELS; j++) {
if (proximityData[j] <= R_SENSE) {
nearbyRobots[j] = 1;
}
}
}
// begin transmitting nearby robots' ID(s) and levelCounter
tx_buffer[0] = 0xF1;
tx_buffer[1] = levelCounter;
tx_buffer[2] = 0xFF;
tx_buffer[3] = 0xFF;
tx_buffer[4] = 0xFF;
tx_buffer[5] = 0xFF;
tx_buffer[6] = 0xFF;
tx_buffer[7] = 0xFF;
uint8_t k = 2;
for (uint8_t j = 0; j < 6; j++) {
if (nearbyRobots[j] == 1) {
tx_buffer[k] = j;
k++;
}
}
// turn off IR emitter
setIR_ON_OFF(0);
// turn on ANT broadcaster
uint32_t err_code = sd_ant_channel_open(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
// wait for whatever set period
#define WAIT_ROOT_MS 6000
nrf_delay_ms(WAIT_ROOT_MS);
// turn off ANT broadcaster
err_code = sd_ant_channel_close(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
#define SHOWCASE_LEVEL 3
for (uint8_t j = 0; j < SHOWCASE_LEVEL; j++) {
for (uint8_t i = 0; i < highestLevel; i++) {
for (short i = 0; i < 5; i++) {
setLEDs(0,1,0);
nrf_delay_ms(100);
}
for (short i = 0; i < 5; i++) {
setLEDs(0,0,0);
nrf_delay_ms(100);
}
}
nrf_delay_ms(2000);
}
// go to next state
currentState++;
}
// not the lowest ID robot
else {
if (internalState == 0) {
for (short i = 0; i < 10; i++) {
setLEDs(1,0,0);
nrf_delay_ms(100);
}
// begin emitting current IR channel
setEmitChannel(MY_CHANNEL);
nrf_delay_ms(100);
}
else if (internalState == 1) {
for (uint8_t i = 0; i < CHECKING_TIMES; i++) {
setEmitChannel(MY_CHANNEL);
//--- get the proximities of nearby robots ---/
startGetProximities(0);
nrf_delay_ms(WAIT_MS_UPDATE);
getAllData(proximityData, bearingData);
// check all channels
for (uint8_t j = 0; j < NUM_OF_CHANNELS; j++) {
if (proximityData[j] <= R_SENSE) {
nearbyRobots[j] = 1;
}
}
}
// begin transmitting nearby robots' ID(s) and levelCounter
tx_buffer[0] = 0xF1;
tx_buffer[1] = levelCounter;
tx_buffer[2] = 0xFF;
tx_buffer[3] = 0xFF;
tx_buffer[4] = 0xFF;
tx_buffer[5] = 0xFF;
tx_buffer[6] = 0xFF;
tx_buffer[7] = 0xFF;
uint8_t k = 2;
for (uint8_t j = 0; j < 6; j++) {
if (nearbyRobots[j] == 1) {
tx_buffer[k] = j;
k++;
}
}
// turn off IR emitter
setIR_ON_OFF(0);
// turn on ANT broadcaster
err_code = sd_ant_channel_open(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
// wait for whatever set period
#define WAIT_NON_ROOT_MS 3000
nrf_delay_ms(WAIT_NON_ROOT_MS);
// turn off ANT broadcaster
err_code = sd_ant_channel_close(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
// indicate that this robot is a leaf, if it is
// we need to increment highestLevel since the robot with the highest level
// cannot send a message to itself regarding the highest level value
if (levelCounter == (highestLevel + 1)) {
for (short i = 0; i < 10; i++) {
setLEDs(0,0,1);
nrf_delay_ms(100);
}
}
internalState++;
}
else if (internalState == 2) {
// do nothing...
nrf_delay_ms(100);
}
}
// to turn on IR channel, use this: setEmitChannel(MY_CHANNEL);
// to turn off IR, use this: setIR_ON_OFF(0);
}
//--- STATE 2: ASSIGN TASKS ---/
else if (currentState == 2) {
if (firstTime_state_2) {
for (short i = 0; i < 5; i++) {
setLEDs(0,0,0);
nrf_delay_ms(100);
}
internalState = 0;
firstTime_state_2 = 0;
}
//--- INTERNAL STATE 1: FIND AND ASSIGN ---/
if (internalState == 0) {
#define WAIT_PERIOD 5000
trackAndBroadcastSmallest_TASK(WAIT_PERIOD*highestLevel*2);
// check if this robot is the lowest and assign a task if so
if (lowestID == MY_CHANNEL) {
uint8_t curTask = currentStage;
if ((curTask == 1) || (curTask == 2)) {
for (short i = 0; i < 10; i++) {
setLEDs(1,0,0);
nrf_delay_ms(100);
}
}
else if ((curTask == 3) || (curTask == 4) || (curTask == 5)) {
for (short i = 0; i < 10; i++) {
setLEDs(0,1,0);
nrf_delay_ms(100);
}
}
else if (curTask > 5) {
for (short i = 0; i < 10; i++) {
setLEDs(0,0,1);
nrf_delay_ms(100);
}
}
// change to finished state
currentState = 3;
// increment the current stage number
currentStage++;
}
internalState++;
}
else if (internalState == 1) {
lowestID = MY_CHANNEL;
tx_buffer[0] = 0xFF;
tx_buffer[1] = 0xFF;
tx_buffer[2] = 0xFF;
tx_buffer[3] = 0xFF;
tx_buffer[4] = 0xFF;
tx_buffer[5] = 0xFF;
tx_buffer[6] = 0xFF;
tx_buffer[7] = 0xFF;
nrf_delay_ms(100);
}
}
//--- STATE 3: TASK ASSIGNED (COMPLETE) ---/
else if (currentState == 3) {
#define WAIT_NEXT_STAGE 6000
if (firstTime_state_3) {
// wait a period before starting new stage
nrf_delay_ms(WAIT_NEXT_STAGE);
broadcastNewStage(2*WAIT_PERIOD*highestLevel);
firstTime_state_3 = 0;
}
// do nothing afterwards...
nrf_delay_ms(100);
}
}
...
H.5 ANT Wireless Communication and Source Code Snippet
ANT Wireless, with its mesh-network capability, was utilized for all swarm scenario experiments (including the HSI experiments in Section 5). Using Nordic's SoftDevice S212, the mROBerTOs were able to enable and use ANT Wireless. Below is the pseudocode of the ANT Wireless communication system used for the swarm experiments, and Figure H5 shows the procedure of the system in a flowchart.
Setup necessary variables and the SoftDevice for the wireless module;
Initialize ANT Wireless listening and transmitting channels;
Begin receiving and sending ANT packets;
While indefinite
If new RX message is received
If the RX message is for this robot
Execute necessary swarm scenario command;
If new TX message is needed to be sent
Load and update the current ANT packet to be sent;
Begin broadcasting the new ANT message;
Fig. H5. Procedure for using the ANT Wireless communication for swarm experiments.
The ANT Wireless communication system first initializes ANT Wireless by enabling the SoftDevice to run on the SoC. Next, the two channels (incoming and outgoing) are initialized and immediately begin listening for and transmitting ANT packets (messages). From this point on, the system continuously checks for new incoming messages and updates the outgoing messages when required. Below is the source code snippet for enabling, initializing, listening for, and transmitting ANT messages in the swarm scenario experiments.
Source Code Snippet of ANT Wireless Communication
...
// below code executes when we get a new interrupt event from ANT Wireless
void ant_evt_dispatch(ant_evt_t * p_ant_evt)
{
switch (p_ant_evt->channel)
{
case ANT_BS_CHANNEL_NUMBER:
background_scanner_process(p_ant_evt);
break;
case ANT_MS_CHANNEL_NUMBER:
master_beacon_process(p_ant_evt);
break;
default:
break;
}
}
// below is for initializing ANT Wireless
void application_initialize(void)
{
/* Set library config to report RSSI and Device ID */
uint32_t err_code = sd_ant_lib_config_set(ANT_LIB_CONFIG_MESG_OUT_INC_RSSI
| ANT_LIB_CONFIG_MESG_OUT_INC_DEVICE_ID);
APP_ERROR_CHECK(err_code);
//--- channel configuration for MASTER CHANNEL ---/
// responsible for sending out messages
const ant_channel_config_t ms_channel_config =
{
.channel_number = ANT_MS_CHANNEL_NUMBER,
.channel_type = ANT_MS_CHANNEL_TYPE,
.ext_assign = 0x00,
.rf_freq = ANT_FREQUENCY,
.transmission_type = ANT_TRANSMISSION_TYPE,
.device_type = ANT_DEVICE_TYPE,
.device_number = ANT_MS_DEVICE_NUMBER,
.channel_period = ANT_CHANNEL_PERIOD,//ANT_CHANNEL_PERIOD_NONE, //ANT_CHANNEL_PERIOD,
.network_number = ANT_NETWORK_NUMBER,
};
//--- channel configuration for SLAVE CHANNEL ---/
// responsible for listening to background messages
const ant_channel_config_t bs_channel_config =
{
.channel_number = ANT_BS_CHANNEL_NUMBER,
.channel_type = ANT_BS_CHANNEL_TYPE,
.ext_assign = EXT_PARAM_ALWAYS_SEARCH,
.rf_freq = ANT_FREQUENCY,
.transmission_type = ANT_TRANSMISSION_TYPE,
.device_type = ANT_DEVICE_TYPE,
.device_number = ANT_BS_DEVICE_NUMBER,
.channel_period = ANT_CHANNEL_PERIOD_NONE,
.network_number = ANT_NETWORK_NUMBER,
};
const ant_search_config_t bs_search_config =
{
.channel_number = ANT_BS_CHANNEL_NUMBER,
.low_priority_timeout = ANT_LOW_PRIORITY_TIMEOUT_DISABLE,
.high_priority_timeout = ANT_HIGH_PRIORITY_SEARCH_DISABLE,
.search_sharing_cycles = ANT_SEARCH_SHARING_CYCLES_DISABLE,
.search_priority = ANT_SEARCH_PRIORITY_DEFAULT,
.waveform = ANT_WAVEFORM_DEFAULT,
};
err_code = ant_channel_init(&ms_channel_config);
APP_ERROR_CHECK(err_code);
err_code = ant_channel_init(&bs_channel_config);
APP_ERROR_CHECK(err_code);
err_code = ant_search_init(&bs_search_config);
APP_ERROR_CHECK(err_code);
// Fill tx buffer for the first frame
//ant_message_send();
err_code = sd_ant_channel_open(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
// turn off ANT TX channel for now
err_code = sd_ant_channel_close(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
err_code = sd_ant_channel_open(ANT_BS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
}
// setup our SoftDevice
void softdevice_setup(void)
{
uint32_t err_code;
err_code = softdevice_ant_evt_handler_set(ant_evt_dispatch);
APP_ERROR_CHECK(err_code);
// use internal RC low freq oscillator as lf clk source
err_code = softdevice_handler_init(NRF_CLOCK_LFCLKSRC_RC_250_PPM_4000MS_CALIBRATION, NULL, 0, NULL);
APP_ERROR_CHECK(err_code);
err_code = ant_stack_static_config();
APP_ERROR_CHECK(err_code);
}
// below code executes when RX message event occurs
void background_scanner_process(ant_evt_t * p_ant_evt)
{
//uint32_t err_code;
ANT_MESSAGE * p_ant_message = (ANT_MESSAGE*)p_ant_evt->evt_buffer;
switch(p_ant_evt->event)
{
case EVENT_RX:
{
/*if(p_ant_message->ANT_MESSAGE_stExtMesgBF.bANTDeviceID)
{
m_last_device_id = (uint16_t)(p_ant_message->ANT_MESSAGE_aucExtData[0]
| ((uint16_t)p_ant_message->ANT_MESSAGE_aucExtData[1] << 8));
}
if(p_ant_message->ANT_MESSAGE_stExtMesgBF.bANTRssi)
{
m_last_rssi = p_ant_message->ANT_MESSAGE_aucExtData[5];
}*/
//app_trace_log("Device ID: %d\n\r", m_last_device_id);
//app_trace_log("RSSI: %d\n\r\n\r", m_last_rssi);
//--- collective exploration stuffz ---/
if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == 0xFE) &&
(p_ant_message->ANT_MESSAGE_aucPayload[1] == 0xFD)) {
currentState = 1;
K_O_TEMP = p_ant_message->ANT_MESSAGE_aucPayload[2];
K_N_TEMP = p_ant_message->ANT_MESSAGE_aucPayload[3];
V_STOP_TEMP = p_ant_message->ANT_MESSAGE_aucPayload[4];
}
//--- REMOVE: EXPERIMENTAL STUFF TO ENABLE AGGRE WITH COL EXP ---/
else if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == 0xFD) &&
(p_ant_message->ANT_MESSAGE_aucPayload[1] == 0xFD)) {
currentState = 2;
}
else if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == 0xFC) &&
(p_ant_message->ANT_MESSAGE_aucPayload[1] == 0xFD)) {
currentState = 0;
}
//--- dynamic task allocation stuffz ---/
/*if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == 0xF0) &&
(p_ant_message->ANT_MESSAGE_aucPayload[1] == 0xF3)) {
if (p_ant_message->ANT_MESSAGE_aucPayload[2] < lowestID) {
lowestID = p_ant_message->ANT_MESSAGE_aucPayload[2];
tx_buffer[2] = lowestID;
}
}
else if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == 0xF1) &&
((currentState != 2) || (currentState != 3))) {
if ((p_ant_message->ANT_MESSAGE_aucPayload[2] == MY_CHANNEL) ||
(p_ant_message->ANT_MESSAGE_aucPayload[3] == MY_CHANNEL) ||
(p_ant_message->ANT_MESSAGE_aucPayload[4] == MY_CHANNEL) ||
(p_ant_message->ANT_MESSAGE_aucPayload[5] == MY_CHANNEL) ||
(p_ant_message->ANT_MESSAGE_aucPayload[6] == MY_CHANNEL) ||
(p_ant_message->ANT_MESSAGE_aucPayload[7] == MY_CHANNEL)) {
// assign our level counter if lower
if (p_ant_message->ANT_MESSAGE_aucPayload[1] < levelCounter) {
levelCounter = p_ant_message->ANT_MESSAGE_aucPayload[1] + 1;
internalState = 1;
}
}
// track the highest level counter if this robot is root
if (p_ant_message->ANT_MESSAGE_aucPayload[1] > highestLevel) {
highestLevel = p_ant_message->ANT_MESSAGE_aucPayload[1];
}
}*/
/*if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == 0xF2) &&
(p_ant_message->ANT_MESSAGE_aucPayload[1] == 0xF3) &&
(currentState != 3)) {
currentState = 2;
if (p_ant_message->ANT_MESSAGE_aucPayload[2] < lowestID) {
lowestID = p_ant_message->ANT_MESSAGE_aucPayload[2];
tx_buffer[2] = lowestID;
}
// check to see if our current stage number is off
if (currentStage < p_ant_message->ANT_MESSAGE_aucPayload[3]) {
currentStage = p_ant_message->ANT_MESSAGE_aucPayload[3];
}
}
// start new stage
if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == 0xF2) &&
(p_ant_message->ANT_MESSAGE_aucPayload[1] == 0xF4) &&
(currentState != 3)) {
internalState = 0;
// get new stage number
currentStage = p_ant_message->ANT_MESSAGE_aucPayload[3];
}*/
/*
//--- chain formation stuffz ---/
if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == (MY_CHANNEL - 1)) &&
(p_ant_message->ANT_MESSAGE_aucPayload[1] == 0x00) &&
(currentState == 0)) {
currentState = 1;
}
else if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == (MY_CHANNEL - 1)) &&
(p_ant_message->ANT_MESSAGE_aucPayload[1] == 0x01)) {
currentState = 2;
}
// new tail robot has been chosen, we are no longer a tail robot but a chain
else if ((p_ant_message->ANT_MESSAGE_aucPayload[0] == (MY_CHANNEL + 1)) &&
(p_ant_message->ANT_MESSAGE_aucPayload[1] == 0x00) &&
(currentState == 2)) {
currentState = 3;
}*/
break;
}
default:
{
break;
}
}
}
// below code executes whenever we need to send out messages periodically
void ant_message_send(uint8_t *tx_bufferz)
{
uint32_t err_code;
err_code = sd_ant_broadcast_message_tx(ANT_MS_CHANNEL_NUMBER,
ANT_STANDARD_DATA_PAYLOAD_SIZE,
tx_bufferz);
APP_ERROR_CHECK(err_code);
}
// below is for handling TX events
void master_beacon_process(ant_evt_t * p_ant_evt)
{
switch(p_ant_evt->event)
{
case EVENT_TX:
ant_message_send(tx_buffer);
break;
default:
break;
}
}
...
Appendix I: HSI Experimental Setup
In order to gather from all mROBerTOs the proximity and bearing data of their nearby robots for the HSI experiments, a separate host PC was utilized. The host PC had a Nordic nRF52832 development kit connected to it to receive data from all robots in the workspace via ANT™ Wireless. Figure I1 shows an overview of the experimental setup.
Fig. I1. Overview of the HSI experiment setup.
The host PC communicated with the nRF52832 development kit via UART at the standard baud rate of 19200. Thus, open-source terminal tools such as PuTTY can be used to view all the data received from the nRF52832 via UART.
The procedure for the experiment is as follows. After all the robots are placed on the workspace, the user inputs a start command on the host PC, which is then transmitted to the nRF52832 development kit. The development kit in turn transmits a start command to the robots via ANT™ Wireless. Upon receiving the start command, the robots immediately begin measuring the proximities and bearings of all nearby robots, and then send the measured data to the nRF52832 development kit via ANT™ Wireless. The development kit continuously gathers the proximity and bearing measurements from all robots for a user-specified period of time and continuously outputs the data to the host PC. Once the user-specified period has passed, the data is ready for analysis. For this thesis, the experiments were not completed in real time; the robots were repositioned by hand in an online fashion to mimic their movement to the next positions, including movement error. However, with additional coding and the use of the IMU sensors on the robots, this movement process could be automated and potentially completed in real time.
The subsections below present pseudocode and source-code snippets of the firmware used on the
nRF52832 development kit for decoding the bytes received from the mROBerTOs, as well as of the
firmware used on the mROBerTOs for measuring and transmitting the proximity and bearing data of
neighbouring robots.
Pseudocode for the nRF52832 Development Kit
Below is the pseudocode for the nRF52832 development kit gathering data from mROBerTOs for the
localization experiment where it runs the code in background throughout the entire experiment.
Grab the received ANT Wireless message;
Check which robot sent the message;
If repeated message from before
Ignore the received packet;
Else
Decode the received packet into readable proximities and bearings
information;
Store the decoded data into an array of proximities and bearings with
previously received data;
// Transmit data to host PC when requested;
Below is the flowchart explaining the above pseudocode.
Fig. I2. Procedure for the nRF52832 development kit gathering data from mROBerTOs for
localization.
Source Code Snippet of nRF52832 Development Kit
...
void background_scanner_process(ant_evt_t * p_ant_evt)
{
uint32_t err_code;
ANT_MESSAGE * p_ant_message = (ANT_MESSAGE*)p_ant_evt->msg.evt_buffer;
switch(p_ant_evt->event)
{
case EVENT_RX:
{
err_code = bsp_indication_set(BSP_INDICATE_RCV_OK);
APP_ERROR_CHECK(err_code);
if(p_ant_message->ANT_MESSAGE_stExtMesgBF.bANTDeviceID)
{
m_last_device_id = uint16_decode(p_ant_message->ANT_MESSAGE_aucExtData);
}
if(p_ant_message->ANT_MESSAGE_stExtMesgBF.bANTRssi)
{
m_last_rssi = p_ant_message->ANT_MESSAGE_aucExtData[5];
}
//--- check which robot sent the message ---/
uint8_t curRobotNum = 0;
for (; curRobotNum < NUM_OF_CHAN; curRobotNum++) {
// store the correct robot's estimated data
if (robotNum[curRobotNum] == m_last_device_id) {
// leave the for loop since we found our robot
break;
}
}
//--- check if the message is repeated message; ignore if it is ---/
uint8_t newMsg = 0;
for (uint8_t i = 0; i < 8; i++) {
if (prevRX_MSG[curRobotNum][i] != p_ant_message->ANT_MESSAGE_aucPayload[i]) {
// it is not the same message! continue on and store this new msg
for (uint8_t j = 0; j < 8; j++) {
prevRX_MSG[curRobotNum][j] = p_ant_message->ANT_MESSAGE_aucPayload[j];
newMsg = 1;
}
break;
}
}
// no new message, IGNORE!
if (!newMsg) {
break;
}
//--- decode the message into readable RX message ---/
// note that each ANT packet contains data for up to 3 robots due to the 8-byte packet limit
// the execution below must therefore be repeated 3 times to receive all 9 robots' data
#define CLOSE_CHECK 3
uint8_t robNums[CLOSE_CHECK];
uint8_t proxData[CLOSE_CHECK];
uint16_t bearData[CLOSE_CHECK];
proxData[0] = p_ant_message->ANT_MESSAGE_aucPayload[0];
proxData[1] = p_ant_message->ANT_MESSAGE_aucPayload[1];
proxData[2] = p_ant_message->ANT_MESSAGE_aucPayload[2];
robNums[0] = ((p_ant_message->ANT_MESSAGE_aucPayload[3]>>4) & 0b00001111) & 0xFF;
robNums[1] = (p_ant_message->ANT_MESSAGE_aucPayload[3] & 0b00001111) & 0xFF;
robNums[2] = (p_ant_message->ANT_MESSAGE_aucPayload[4] & 0b00001111) & 0xFF;
bearData[0] = (((p_ant_message->ANT_MESSAGE_aucPayload[4]>>4) & 0b000001111) | (((p_ant_message->ANT_MESSAGE_aucPayload[5])<<1) & 0b111110000)) & 0b111111111;
bearData[1] = ((p_ant_message->ANT_MESSAGE_aucPayload[5] & 0b000000111) | ((p_ant_message->ANT_MESSAGE_aucPayload[6]<<3) & 0b111111000)) & 0b111111111;
bearData[2] = ((p_ant_message->ANT_MESSAGE_aucPayload[7]<<1) | ((p_ant_message->ANT_MESSAGE_aucPayload[6]>>7) & 0b1)) & 0b111111111;
//--- store message information into master matrix ---/
// reset the master data matrix for proximity first
for (uint8_t j = 0; j < NUM_OF_CHAN; j++) {
estimatedProxData[curRobotNum][j] = 0xFF; // set it to the highest possible value
}
// store the proximity data
estimatedProxData[curRobotNum][robNums[0]] = proxData[0];
estimatedProxData[curRobotNum][robNums[1]] = proxData[1];
estimatedProxData[curRobotNum][robNums[2]] = proxData[2];
// store the bearing data
estimatedBearData[curRobotNum][robNums[0]] = bearData[0];
estimatedBearData[curRobotNum][robNums[1]] = bearData[1];
estimatedBearData[curRobotNum][robNums[2]] = bearData[2];
//--- average out the current data readings with past history ---/
#define CUTOFF_DISTANCE 255 // in mm
for (uint8_t i = 0; i < NUM_OF_CHAN; i++) {
// only count it towards the average if it was one of the robNums[]
if ((robNums[0] == i) || (robNums[1] == i) || (robNums[2] == i)) {
if (counterAvg[curRobotNum][i] < 1000.0) {
// only count towards the estimated avg if below CUTOFF_DISTANCE
if (estimatedProxData[curRobotNum][i] <= CUTOFF_DISTANCE) {
estimatedProxDataAvg[curRobotNum][i] = (((double)estimatedProxData[curRobotNum][i]) * (1.0
/ counterAvg[curRobotNum][i])) + (estimatedProxDataAvg[curRobotNum][i] * ((counterAvg[curRobotNum][i] -
1.0)/counterAvg[curRobotNum][i]));
// deal with bearing averaging corner case
if ((estimatedBearDataAvg[curRobotNum][i] >= 270) && ((estimatedBearData[curRobotNum][i] >=
0) && (estimatedBearData[curRobotNum][i] <= 180))) {
estimatedBearData[curRobotNum][i] += 360;
estimatedBearDataAvg[curRobotNum][i] = (((double)estimatedBearData[curRobotNum][i]) *
(1.0 / counterAvg[curRobotNum][i])) + (estimatedBearDataAvg[curRobotNum][i] * ((counterAvg[curRobotNum][i] -
1.0)/counterAvg[curRobotNum][i]));
// ensure average ranges from 0 to 360
if (estimatedBearDataAvg[curRobotNum][i] >= 360) {
estimatedBearDataAvg[curRobotNum][i] -= 360;
}
}
else if ((estimatedBearDataAvg[curRobotNum][i] <= 90) &&
((estimatedBearData[curRobotNum][i] >= 180) && (estimatedBearData[curRobotNum][i] <= 360))) {
estimatedBearDataAvg[curRobotNum][i] += 360;
estimatedBearDataAvg[curRobotNum][i] = (((double)estimatedBearData[curRobotNum][i]) *
(1.0 / counterAvg[curRobotNum][i])) + (estimatedBearDataAvg[curRobotNum][i] * ((counterAvg[curRobotNum][i] -
1.0)/counterAvg[curRobotNum][i]));
// ensure average ranges from 0 to 360
if (estimatedBearDataAvg[curRobotNum][i] >= 360) {
estimatedBearDataAvg[curRobotNum][i] -= 360;
}
}
else {
estimatedBearDataAvg[curRobotNum][i] = (((double)estimatedBearData[curRobotNum][i]) *
(1.0 / counterAvg[curRobotNum][i])) + (estimatedBearDataAvg[curRobotNum][i] * ((counterAvg[curRobotNum][i] -
1.0)/counterAvg[curRobotNum][i]));
}
counterAvg[curRobotNum][i]++;
}
else {
// do nothing...
}
}
// counter has reached max limit, so don't increment and only calculate
else {
estimatedProxDataAvg[curRobotNum][i] = (((double)estimatedProxData[curRobotNum][i]) * (1.0 /
counterAvg[curRobotNum][i])) + (estimatedProxDataAvg[curRobotNum][i] * ((counterAvg[curRobotNum][i] -
1.0)/counterAvg[curRobotNum][i]));
// deal with bearing averaging corner case
if ((estimatedBearDataAvg[curRobotNum][i] >= 270) && ((estimatedBearData[curRobotNum][i] >= 0)
&& (estimatedBearData[curRobotNum][i] <= 180))) {
estimatedBearData[curRobotNum][i] += 360;
estimatedBearDataAvg[curRobotNum][i] = (((double)estimatedBearData[curRobotNum][i]) * (1.0
/ counterAvg[curRobotNum][i])) + (estimatedBearDataAvg[curRobotNum][i] * ((counterAvg[curRobotNum][i] -
1.0)/counterAvg[curRobotNum][i]));
// ensure average ranges from 0 to 360
if (estimatedBearDataAvg[curRobotNum][i] >= 360) {
estimatedBearDataAvg[curRobotNum][i] -= 360;
}
}
else if ((estimatedBearDataAvg[curRobotNum][i] <= 90) && ((estimatedBearData[curRobotNum][i]
>= 180) && (estimatedBearData[curRobotNum][i] <= 360))) {
estimatedBearDataAvg[curRobotNum][i] += 360;
estimatedBearDataAvg[curRobotNum][i] = (((double)estimatedBearData[curRobotNum][i]) * (1.0
/ counterAvg[curRobotNum][i])) + (estimatedBearDataAvg[curRobotNum][i] * ((counterAvg[curRobotNum][i] -
1.0)/counterAvg[curRobotNum][i]));
// ensure average ranges from 0 to 360
if (estimatedBearDataAvg[curRobotNum][i] >= 360) {
estimatedBearDataAvg[curRobotNum][i] -= 360;
}
}
else {
estimatedBearDataAvg[curRobotNum][i] = (((double)estimatedBearData[curRobotNum][i]) * (1.0
/ counterAvg[curRobotNum][i])) + (estimatedBearDataAvg[curRobotNum][i] * ((counterAvg[curRobotNum][i] -
1.0)/counterAvg[curRobotNum][i]));
}
}
}
}
break;
}
default:
{
break;
}
}
}
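The incremental averaging above maintains a running mean of each proximity and bearing, with a corner case for bearings that straddle the 0°/360° wrap. The recurring expression new·(1/n) + old·((n−1)/n) and the wrap handling can be factored into one helper, sketched below; the function name is hypothetical, but the logic mirrors the listing above.

```c
/* Update a running mean of a bearing (degrees) with one new sample,
 * handling the 0/360 wraparound as in the decoder above: shift whichever
 * side sits across the wrap by 360 deg, average, then bring the result
 * back into [0, 360). n is the sample count including the new sample. */
double bearing_running_avg(double avg, double sample, double n)
{
    if (avg >= 270.0 && sample >= 0.0 && sample <= 180.0)
        sample += 360.0;            /* sample wrapped past 0 */
    else if (avg <= 90.0 && sample >= 180.0 && sample <= 360.0)
        avg += 360.0;               /* running mean sits just past 0 */
    avg = sample * (1.0 / n) + avg * ((n - 1.0) / n);
    if (avg >= 360.0)
        avg -= 360.0;               /* keep the mean in [0, 360) */
    return avg;
}
```

For example, with a current mean of 350° and a new sample of 10° at n = 2, the helper returns 0° rather than the naive 180°.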
Pseudocode for the mROBerTO Firmware
Below is the pseudocode for the mROBerTO gathering proximity and bearing data of nearby robots in
the localization experiment.
Initialize ANT message packet as well as variables to store proximities and
bearings of all nearby robots;
Loop indefinitely
Get proximities and bearings of all nearby robots;
Check which robots are the closest and order them;
Send via ANT Wireless the three closest robots’ proximities and bearings;
// only 3 robots' data are sent at a time due to the limited ANT packet size
Wait 0.5 s before sending the next set of data;
Send via ANT Wireless the next three closest robots’ data;
Wait 0.5 s before sending the next set of data;
Send via ANT Wireless the last set of closest robots’ data;
Wait 0.5 s;
Below is the flowchart explaining the above pseudocode in further detail.
Fig. I3. Procedure for the mROBerTOs gathering data of nearby robots and sending the data to
the nRF52832 development kit for localization.
Source Code Snippet of mROBerTO Firmware
...
tx_buffer[0] = 0xFF;
tx_buffer[1] = 0xFF;
tx_buffer[2] = 0xFF;
tx_buffer[3] = 0xFF;
tx_buffer[4] = 0xFF;
tx_buffer[5] = 0xFF;
tx_buffer[6] = 0xFF;
tx_buffer[7] = 0xFF;
// ensure blue LED turns on!
for (short i = 0; i < 10; i++) {
setLEDs(0,0,1);
nrf_delay_ms(100);
}
uint8_t proximityData[NUM_OF_CHANNELS];
uint16_t bearingData[NUM_OF_CHANNELS];
// below are used to store the 9 closest robots
#define CLOSE_CHECK 9
uint8_t closestRob = 0xFF;
uint8_t closestRobNums[CLOSE_CHECK] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
uint8_t cloestProxData[CLOSE_CHECK] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
uint16_t cloestBearData[CLOSE_CHECK] = {0};
// enter main loop
for (;;) {
// start emitting our IR channel
setEmitChannel(MY_CHANNEL);
//--- get the proximities of nearby robots ---/
startGetProximities(0);
nrf_delay_ms(WAIT_MS_UPDATE);
getAllData(proximityData, bearingData);
//--------- send off the prox and bear data ---------/
for (uint8_t i = 0; i < CLOSE_CHECK; i++) {
for (uint8_t curRob = 0; curRob < NUM_OF_CHANNELS; curRob++) {
// only check if it's a number we have not yet checked
if ((curRob != closestRobNums[0]) && (curRob != closestRobNums[1]) &&
(curRob != closestRobNums[2]) && (curRob != closestRobNums[3]) &&
(curRob != closestRobNums[4]) && (curRob != closestRobNums[5]) &&
(curRob != closestRobNums[6]) && (curRob != closestRobNums[7]) &&
(curRob != closestRobNums[8])) {
if (proximityData[curRob] < closestRob) {
closestRob = proximityData[curRob];
closestRobNums[i] = curRob;
cloestProxData[i] = proximityData[curRob];
cloestBearData[i] = bearingData[curRob];
}
}
}
// reset the closestRob for next round
closestRob = 0xFF;
}
// turn off all wireless messaging until TX buffer has been filled
err_code = sd_ant_channel_close(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
// format the TX message to send the 3 closest robots
// [robot# (4 bits), proximity (8 bits), bearing (9 bits)]
tx_buffer[0] = cloestProxData[0];
tx_buffer[1] = cloestProxData[1];
tx_buffer[2] = cloestProxData[2];
tx_buffer[3] = (((closestRobNums[0]<<4) & 0b11110000) | (closestRobNums[1]
& 0b00001111)) & 0xFF;
tx_buffer[4] = ((closestRobNums[2] & 0b00001111) | ((cloestBearData[0]<<4)
& 0b11110000)) & 0xFF;
tx_buffer[5] = (((cloestBearData[0]>>1) & 0b11111000) | (cloestBearData[1]
& 0b00000111)) & 0xFF;
tx_buffer[6] = (((cloestBearData[1]>>3) & 0b00111111) | ((cloestBearData[2]<<7)
& 0b10000000)) & 0xFF;
tx_buffer[7] = (cloestBearData[2]>>1) & 0xFF;
// turn on all wireless messaging again!
err_code = sd_ant_channel_open(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
// send above message for 500 ms before sending other message
nrf_delay_ms(500);
// turn off all wireless messaging until TX buffer has been filled
err_code = sd_ant_channel_close(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
tx_buffer[0] = cloestProxData[3];
tx_buffer[1] = cloestProxData[4];
tx_buffer[2] = cloestProxData[5];
tx_buffer[3] = (((closestRobNums[3]<<4) & 0b11110000) | (closestRobNums[4]
& 0b00001111)) & 0xFF;
tx_buffer[4] = ((closestRobNums[5] & 0b00001111) | ((cloestBearData[3]<<4)
& 0b11110000)) & 0xFF;
tx_buffer[5] = (((cloestBearData[3]>>1) & 0b11111000) | (cloestBearData[4]
& 0b00000111)) & 0xFF;
tx_buffer[6] = (((cloestBearData[4]>>3) & 0b00111111) | ((cloestBearData[5]<<7)
& 0b10000000)) & 0xFF;
tx_buffer[7] = (cloestBearData[5]>>1) & 0xFF;
// turn on all wireless messaging again!
err_code = sd_ant_channel_open(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
// send above message for 500 ms before sending other message
nrf_delay_ms(500);
// turn off all wireless messaging until TX buffer has been filled
err_code = sd_ant_channel_close(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
tx_buffer[0] = cloestProxData[6];
tx_buffer[1] = cloestProxData[7];
tx_buffer[2] = cloestProxData[8];
tx_buffer[3] = (((closestRobNums[6]<<4) & 0b11110000) | (closestRobNums[7]
& 0b00001111)) & 0xFF;
tx_buffer[4] = ((closestRobNums[8] & 0b00001111) | ((cloestBearData[6]<<4)
& 0b11110000)) & 0xFF;
tx_buffer[5] = (((cloestBearData[6]>>1) & 0b11111000) | (cloestBearData[7]
& 0b00000111)) & 0xFF;
tx_buffer[6] = (((cloestBearData[7]>>3) & 0b00111111) | ((cloestBearData[8]<<7)
& 0b10000000)) & 0xFF;
tx_buffer[7] = (cloestBearData[8]>>1) & 0xFF;
// turn on all wireless messaging again!
err_code = sd_ant_channel_open(ANT_MS_CHANNEL_NUMBER);
APP_ERROR_CHECK(err_code);
// reset the stored proximity data for next check
closestRob = 0xFF;
for (uint8_t i = 0; i < CLOSE_CHECK; i++) {
closestRobNums[i] = 0xFF;
cloestProxData[i] = 0xFF;
}
}
...
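The 8-byte payload layout used above, a 4-bit robot number, an 8-bit proximity, and a 9-bit bearing for each of three robots, can be exercised in isolation. The sketch below re-implements the packing and the host-side unpacking as stand-alone helpers; the function names are hypothetical, but the bit layout mirrors the listings in this appendix.

```c
#include <stdint.h>

/* Pack three (robot #, proximity, bearing) triplets into one 8-byte ANT
 * payload: robot # is 4 bits, proximity 8 bits, bearing 9 bits (0-360 deg). */
void pack_triplets(uint8_t buf[8], const uint8_t id[3],
                   const uint8_t prox[3], const uint16_t bear[3])
{
    buf[0] = prox[0];
    buf[1] = prox[1];
    buf[2] = prox[2];
    buf[3] = (uint8_t)(((id[0] << 4) & 0xF0) | (id[1] & 0x0F));
    buf[4] = (uint8_t)((id[2] & 0x0F) | ((bear[0] << 4) & 0xF0));
    buf[5] = (uint8_t)(((bear[0] >> 1) & 0xF8) | (bear[1] & 0x07));
    buf[6] = (uint8_t)(((bear[1] >> 3) & 0x3F) | ((bear[2] << 7) & 0x80));
    buf[7] = (uint8_t)((bear[2] >> 1) & 0xFF);
}

/* Inverse of pack_triplets(), matching the development-kit decoder. */
void unpack_triplets(const uint8_t buf[8], uint8_t id[3],
                     uint8_t prox[3], uint16_t bear[3])
{
    prox[0] = buf[0];
    prox[1] = buf[1];
    prox[2] = buf[2];
    id[0] = (buf[3] >> 4) & 0x0F;
    id[1] = buf[3] & 0x0F;
    id[2] = buf[4] & 0x0F;
    bear[0] = (uint16_t)((((buf[4] >> 4) & 0x0F) |
                          (((uint16_t)buf[5] << 1) & 0x1F0)) & 0x1FF);
    bear[1] = (uint16_t)(((buf[5] & 0x07) |
                          (((uint16_t)buf[6] << 3) & 0x1F8)) & 0x1FF);
    bear[2] = (uint16_t)((((uint16_t)buf[7] << 1) |
                          ((buf[6] >> 7) & 0x01)) & 0x1FF);
}
```

A round trip through these two helpers returns the original values, which is a quick way to sanity-check firmware changes to the packet format.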
Appendix J: MATLAB Code for the Localization Experiment
Pseudocode for Analyzing the HSI Experiments
Below is the pseudocode for analyzing the HSI experiment results (e.g., Fig. 27 and Fig. 28).
Check that correct formatted data exists from the HSI experiment;
Initialize and store variables for analysis;
Run initial optimization (single loop) for estimating swarm topology;
Prepare for plotting the data;
Set end conditions for optimization: reach 'N' iterations OR less than 0.1%
improvement in the result;
While current iteration of optimization is < 'N'
Use 'fminsearch' to get optimization result;
Check result for stopping condition (i.e., < 0.1% improvement) and stop
if met;
Plot the current result;
Below is a flowchart further detailing the pseudocode used for the HSI experiment analysis.
Fig. J1. Procedure for analyzing the HSI experiment results.
The next subsection provides the MATLAB source code used in the HSI experiment, which was
written by Zendai Kashino.
MATLAB Source Code for the HSI Experiments
...
%% MAIN ENTRY OF THE MATLAB CODE
% example of original data
proxData(1,:,1) = [255 255 63 255 255 64 67 1 3];
proxData(2,:,1) = [255 255 66 255 54 255 255 255 82];
proxData(3,:,1) = [255 37 255 255 255 69 255 255 255];
proxData(4,:,1) = [255 255 255 255 255 39 255 83 255];
proxData(5,:,1) = [255 45 255 255 255 55 13 255 255];
proxData(6,:,1) = [68 255 50 60 72 255 255 255 85];
proxData(7,:,1) = [76 255 255 255 61 255 255 255 67];
proxData(8,:,1) = [69 255 255 61 255 255 255 255 51];
proxData(9,:,1) = [47 255 255 255 69 70 7 50 255];
bearData(1,:,1) = [0 0 81 0 0 98 136 2 261];
bearData(2,:,1) = [0 0 203 0 146 0 0 0 138];
bearData(3,:,1) = [0 26 0 0 0 181 0 0 0];
bearData(4,:,1) = [0 0 0 0 0 86 0 310 0];
bearData(5,:,1) = [0 204 0 0 0 79 30 0 0];
bearData(6,:,1) = [357 0 255 18 318 0 0 0 14];
bearData(7,:,1) = [324 0 0 0 48 0 0 0 323];
bearData(8,:,1) = [322 0 0 264 0 0 0 0 324];
bearData(9,:,1) = [131 0 0 0 191 179 267 94 0];
validData = proxData < 255;
%check that each row has at least 2 valid data points; otherwise, mark as
%valid up to two readings that are < 255 but > 70
for i = 1:size(proxData,1)
numValid = sum(validData(i,:));
if numValid >= 2
continue
else
needValid = 2-numValid;
candidates = and(proxData(i,:) < 255,proxData(i,:) > 70);
[v,j] = sort(proxData(i,:));
choices = find(candidates(j),needValid);
validData(i,j(choices)) = 1;
end
end
%make the diagonal 0s
proxData(eye(size(proxData))>0) = 0;
bearData(eye(size(bearData))>0) = 0;
validData(eye(size(validData))>0) = true;
save proxBearDataCircle.mat proxData bearData validData
...
function [ maxErr ] = utilityFunction( fixedRobotData, varyingRobotData, transRot )
%UTILITYFUNCTION utility function for the pattern recognition
% IN
% fixedRobotData - data of the fixed robots, the ones not being varied
% varyingRobotData - data of the robot that is having its translation and
% rotation varied
% transRot - the translation and rotation
% OUT
% maxErr - the maximum error, the utility function value used
try
xf = fixedRobotData.x;
yf = fixedRobotData.y;
[xv, yv] = pol2cart(varyingRobotData.bear + transRot(3),varyingRobotData.prox);
xv = xv+transRot(1);
yv = yv+transRot(2);
dists = zeros(size(xf));
for i = 1:length(xv)
dists(:,i) = pdist2([xv(i),yv(i)], [xf(:,i),yf(:,i)]);
end
maxErr = max(max(dists));
catch
maxErr = inf; % return a large error if the inputs are malformed
end
end
...
load proxBearDataCircle
%build initial solution
%start with the first robot and greedily add on robots that share the most
%information
bearData = bearData*pi/180;
bearData(~validData) = nan;
proxData(~validData) = nan;
[xData, yData] = pol2cart(bearData,proxData);
%selecting one row of good data xData(1,validData(1,:)),yData(1,validData(1,:))
transRots = zeros(size(validData,1),3); %store translations and rotations as x,y,theta columns
included = false(size(validData,1),1); included(1) = true;
%% initial optimization
for i = 1:size(validData,1)-1
%get robots currently visible
currentlyIncludedData = sum(validData(included,:),1)>0;
%get most similar
notIncludedRobotIndices = find(~included);
notIncludedRobotData = validData(notIncludedRobotIndices,:);
similarity = zeros(size(notIncludedRobotData,1),1);
for j = 1:size(notIncludedRobotData,1)
similarity(j) = sum(notIncludedRobotData(j,:).*currentlyIncludedData);
end
[~,maxIndex] = max(similarity);
robotToInclude = notIncludedRobotIndices(maxIndex);
%run optimization
fixedRobotData.x = xData(included,:);
fixedRobotData.y = yData(included,:);
varyingRobotData.prox = proxData(robotToInclude,:);
varyingRobotData.bear = bearData(robotToInclude,:);
%find best translation starting point
xstart = nanmean(fixedRobotData.x(:,robotToInclude));
ystart = nanmean(fixedRobotData.y(:,robotToInclude));
%check if starting point is nan
if isnan(xstart) || isnan(ystart)
%if it is, choose one of the robot's that the robot to include sees
%to be the centre instead of the viewing robot
includedValids = validData(included,:);
dataPoints = sum(includedValids,1);
robotsInToInclude = validData(robotToInclude,:);
possibleRobots = find(robotsInToInclude);
[~,maxIndex]= max(dataPoints(possibleRobots));
bestCenter = possibleRobots(maxIndex);
%change center
[x,y] = pol2cart(bearData(robotToInclude,:), proxData(robotToInclude,:));
x = x-x(bestCenter);
y = y-y(bestCenter);
[bearData(robotToInclude,:), proxData(robotToInclude,:)] = cart2pol(x,y);
varyingRobotData.prox = proxData(robotToInclude,:);
varyingRobotData.bear = bearData(robotToInclude,:);
xstart = nanmean(fixedRobotData.x(:,bestCenter));
ystart = nanmean(fixedRobotData.y(:,bestCenter));
end
%find best rotation starting point
options = optimset('Display','iter','PlotFcns',@optimplotfval);
rotStart = fminsearch(@(Rot)utilityFunction(fixedRobotData,varyingRobotData,[xstart ystart Rot]),
0,options);
outTransRot = fminsearch(@(transRot)utilityFunction(fixedRobotData,varyingRobotData,transRot), [xstart
ystart rotStart],options);
transRots(robotToInclude,:) = outTransRot;
[xv, yv] = pol2cart(bearData(robotToInclude,:)+transRots(robotToInclude,3),proxData(robotToInclude,:));
xv = xv+transRots(robotToInclude,1);
yv = yv+transRots(robotToInclude,2);
xData(robotToInclude,:) = xv;
yData(robotToInclude,:) = yv;
included(robotToInclude) = true;
includedIndex = find(included);
clf
for j = 1:length(includedIndex)
scatter3(xData(includedIndex(j),:),yData(includedIndex(j),:),1:9),hold on
end
end
%% prep for plotting
a = [1:size(validData,1)]'; b = num2str(a); numstrings = cellstr(b);
%% optimization
iterations = 100;
maxdist = max(pdist([xData(:),yData(:)]));
%percentage improvement required
threshold = 0.0001;
for i = 1:iterations
for j = 2:size(validData,1)
%run optimization
fixedRobotData.x = xData(1:size(validData,1) ~= j,:);
fixedRobotData.y = yData(1:size(validData,1) ~= j,:);
varyingRobotData.prox = proxData(j,:);
varyingRobotData.bear = bearData(j,:);
outTransRot = fminsearch(@(transRot)utilityFunction(fixedRobotData,varyingRobotData,transRot),
transRots(j,:));
transRots(j,:) = outTransRot;
[xv, yv] = pol2cart(bearData(j,:)+transRots(j,3),proxData(j,:));
xv = xv+transRots(j,1);
yv = yv+transRots(j,2);
xData(j,:) = xv;
yData(j,:) = yv;
end
%check stopping condition
maxdists = inf(size(validData,1),1);
for j = 1:size(validData,1)
maxdists(j) = max(pdist([xData(:,j),yData(:,j)]));
end
improvement = maxdist-max(maxdists);
if improvement < maxdist*threshold
break;
else
maxdist = max(maxdists);
end
%plot the result
clf
scatter(nanmean(xData,1),nanmean(yData,1))
text(nanmean(xData,1),nanmean(yData,1),numstrings);
title(['Iteration # ' num2str(i) ', Improvement: ' num2str(improvement,2)]);
drawnow
end
...
Appendix K: List of Publications and Submitted Papers
Journal Publications
[A] J. Y. Kim, Z. Kashino, T. Colaco, G. Nejat, B. Benhabib, "A novel millirobot for swarm
applications: design and localization," Advanced Robotics Journal, 2017. Submitted on January
23, 2017. (Under review)
[B] Z. Kashino, J. Y. Kim, G. Nejat, B. Benhabib, "A Generalized Approach to Optimal Deployment
of Sparse Static-Sensor Networks in Search of Lost Mobile Persons," IEEE Trans. on Sensors,
2016.
Conference Proceedings
[A] Z. Kashino, J. Vilela, J. Y. Kim, G. Nejat, B. Benhabib, "An Adaptive Static-Sensor Network
Deployment Strategy for Detecting Mobile Targets," IEEE/RSJ Int. Symp. on Safety, Security,
and Rescue Robot., Lausanne, 2016.
[B] J. Y. Kim, T. Colaco, Z. Kashino, G. Nejat, B. Benhabib, "mROBerTO: A Modular Millirobot
for Swarm-Behaviour Studies," IEEE/RSJ Int. Conf. on Intell. Robots and Syst., Daejeon, 2016.
[C] R. Drisdelle, Z. Kashino, J. Y. Kim, G. Nejat, B. Benhabib, "Motion Control of a Wheeled
Millirobot," Int. Conf. of Control, Dynamic Syst., and Robotics, Toronto, 2017. Submitted on
April 28, 2017.
References
[1] M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo, “Swarm robotics: a review from the
swarm engineering perspective,” Swarm Intell., vol. 7, no. 1, pp. 1–41, Jan. 2013.
[2] L. Bayındır, “A review of swarm robotics tasks,” Neurocomputing, vol. 172, pp. 292–321, Jan.
2016.
[3] J. C. Barca and Y. A. Sekercioglu, “Swarm robotics reviewed,” Robotica, vol. 31, no. 3, pp. 345–
359, May 2013.
[4] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial
Systems. OUP USA, 1999.
[5] I. Paprotny and S. Bergbreiter, Small-Scale Robotics From Nano-to-Millimeter-Sized Robotic
Systems and Applications: First International Workshop, microICRA 2013, Karlsruhe, Germany,
May 6-10, 2013, Revised and Extended Papers. Springer, 2014.
[6] Gctronic, “Mobile Robot Products.” [Online]. Available: http://www.gctronic.com/products.php.
[Accessed: 17-Feb-2016].
[7] K-Team Corporation, “K-Team Mobile Robot Products.” [Online]. Available: http://www.k-
team.com/mobile-robotics-products. [Accessed: 17-Feb-2016].
[8] G. Caprari and R. Siegwart, “Mobile micro-robots ready to use: Alice,” IEEE/RSJ Int. Conf. on
Intell. Robots and Systems, 2005, pp. 3295–3300.
[9] A. G. Millard, J. A. Hilder, J. Timmis, and A. F. Winfield, “A low-cost real-time tracking
infrastructure for ground-based robot swarms,” Swarm Intelligence: Int. Conf., ANTS, 2014, vol.
8667, p. 278.
[10] J. Gilles et al., “Robot swarming over the Internet,” American Control Conference, 2012, pp.
6065–6070.
[11] P. Ekberg, Swarm-Intelligent Localization. 2009.
[12] G. Kumar and M. K. Rai, “An energy efficient and optimized load balanced localization method
using CDS with one-hop neighbourhood and genetic algorithm in WSNs,” J. Netw. Comput.
Appl., vol. 78, pp. 73–82, Jan. 2017.
[13] B. H. Cheng, L. Vandenberghe, and K. Yao, “Distributed algorithm for node localization in
wireless ad-hoc networks,” ACM Trans. Sens. Netw., vol. 6, no. 1, 2009.
[14] A. Shariati, K. Mohta, and C. J. Taylor, “Recovering relative orientation and scale from visual
odometry and ranging radio measurements,” IEEE/RSJ Int. Conf. on Intell. Robots and Systems,
2016, pp. 3627–3633.
[15] A. O. de Sá, N. Nedjah, and L. de Macedo Mourelle, “Distributed efficient localization in swarm
robotic systems using swarm intelligence algorithms,” Neurocomputing, vol. 172, pp. 322–336,
Jan. 2016.
[16] P. Ekberg and E. C. H. Ngai, “A distributed Swarm-Intelligent Localization for sensor networks
with mobile nodes,” Int. Wireless Communications and Mobile Computing Conf., 2011, pp. 83–
88.
[17] A. O. de Sá, N. Nedjah, and L. de M. Mourelle, “Genetic and backtracking search optimisation
algorithms applied to localisation problems,” Int. J. Innov. Comput. Appl., vol. 6, no. 3–4, pp.
223–228, Jan. 2015.
[18] P. Cheng, T. Han, X. Zhang, R. Zheng, and Z. Lin, “A single-mobile-anchor based distributed
localization scheme for sensor networks,” Chinese Control Conference, 2016, pp. 8026–8031.
[19] Y. Sun, J. Xiao, and F. Cabrera-Mora1, “Robot localization and energy-efficient wireless
communications by multiple antennas,” IEEE/RSJ Int. Conf. on Intell. Robots and Systems, 2009,
pp. 377–381.
157
[20] M. Coppola, K. N. McGuire, K. Y. W. Scheper, and G. C. H. E. de Croon, “On-board Bluetooth-
based Relative Localization for Collision Avoidance in Micro Air Vehicle Swarms [arXiv],”
arXiv, p. 17 pp., Sep. 2016.
[21] J. Strader, Y. Gu, J. N. Gross, M. D. Petrillo, and J. Hardy, “Cooperative relative localization for
moving UAVs with single link range measurements,” IEEE/ION Position, Location and
Navigation Symp., 2016, pp. 336–343.
[22] A. Cornejo and R. Nagpal, “Distributed Range-Based Relative Localization of Robot Swarms,”
Algorithmic Foundations of Robotics XI, vol. 107, H. L. Akin, N. M. Amato, V. Isler, and A. F.
van der Stappen, Eds. Cham: Springer International Publishing, 2015, pp. 91–107.
[23] A. Kolling, P. Walker, N. Chakraborty, K. Sycara, and M. Lewis, “Human Interaction With
Robot Swarms: A Survey,” IEEE Trans. Hum.-Mach. Syst., vol. 46, no. 1, pp. 9–26, Feb. 2016.
[24] J. Mclurkin and D. Yamins, “Dynamic Task Assignment in Robot Swarms,” Proceedings of
Robotics: Sci. and Systems, 2005.
[25] R. Casanova et al., “Enabling swarm behavior in mm3-sized robots with specific designed
integrated electronics,” IEEE/RSJ Int. Conf. on Intell. Robots and Systems, 2007, pp. 3797–3802.
[26] K. S. Farshad Arvin, “Development of a Miniature Robot for Swarm Robotic Application,” Int.
J. Comput. Electr. Eng., 2009.
[27] M. Patil, T. Abukhalil, S. Patel, and T. Sobh, “Hardware Architecture Review of Swarm Robotics
System: Self Reconfigurability, Self Reassembly and Self Replication,” Innovations and
Advances in Computing, Informatics, Systems Sciences, Networking and Engineering, T. Sobh
and K. Elleithy, Eds. Springer International Publishing, 2015, pp. 433–444.
[28] M. Rubenstein, C. Ahler, N. Hoff, A. Cabrera, and R. Nagpal, “Kilobot: A low cost robot with
scalable operations designed for collective behaviors,” Robot. Auton. Syst., vol. 62, no. 7, pp.
966–975, Jul. 2014.
[29] D. Pickem, M. Lee, and M. Egerstedt, “The GRITSBot in its natural habitat - A multi-robot
testbed,” IEEE Int. Conf. on Robotics and Automation, 2015, pp. 4062–4067.
[30] H. B. Jang, R. D. Villalba, D. Paley, and S. Bergbreiter, “RSSI-based rendezvous on the tiny
terrestrial robotic platform (TinyTeRP),” Inst. Syst. Res., Aug. 2013.
[31] W. Liu and A. F. T. Winfield, “Open-hardware e-puck Linux extension board for experimental
swarm robotics research,” Microprocess. Microsyst., vol. 35, no. 1, pp. 60–67, Feb. 2011.
[32] C. Y. Baldwin and K. B. Clark, “Modularity in the Design of Complex Engineering Systems,”
Complex Engineered Systems, D. Braha, A. A. Minai, and Y. Bar-Yam, Eds. Springer Berlin
Heidelberg, 2006, pp. 175–205.
[33] “Behavior-Based Robotics,” MIT Press. [Online]. Available:
https://mitpress.mit.edu/books/behavior-based-robotics. [Accessed: 11-May-2016].
[34] M. Wahde, “Introduction to autonomous robots,” Dep. Appl. Mech. Chalmers Univ. Technol.
Goteborg Swed., 2012.
[35] F. Michaud and M. Nicolescu, “Behavior-Based Systems,” Springer Handbook of Robotics, B.
Siciliano and O. Khatib, Eds. Springer International Publishing, 2016, pp. 307–328.
[36] R. Gross, M. Bonani, F. Mondada, and M. Dorigo, “Autonomous Self-Assembly in Swarm-
Bots,” IEEE Trans. Robot., vol. 22, no. 6, pp. 1115–1130, Dec. 2006.
[37] J. McLurkin et al., “A robot system design for low-cost multi-robot manipulation,” IEEE/RSJ
Int. Conf. on Intell. Robots and Systems, 2014, pp. 912–918.
[38] G. Habibi, Z. Kingston, W. Xie, M. Jellins, and J. McLurkin, “Distributed centroid estimation
and motion controllers for collective transport by multi-robot systems,” IEEE Int. Conf. on
Robotics and Automation, 2015, pp. 1282–1288.
[39] K. Lembke, L. Kietlinski, M. Golanski, and R. Schoeneich, “RoboMote: Mobile autonomous
hardware platform for Wireless Ad-hoc Sensor Networks,” IEEE Int. Symp. on Industrial
Electronics, 2011, pp. 940–944.
158
[40] S. Kornienko and S. Kornienko, “IR-based Communication and Perception in Microrobotic
Swarms,” ArXiv11093617 Cs, Sep. 2011.
[41] A. Kettler, M. Szymanski, and H. Wörn, “The Wanda Robot and Its Development System for
Swarm Algorithms,” Advances in Autonomous Mini Robots, U. Rückert, S. Joaquin, and W.
Felix, Eds. Springer Berlin Heidelberg, 2012, pp. 133–146.
[42] F. Arvin, J. Murray, C. Zhang, and S. Yue, “Colias: An autonomous micro robot for swarm
robotic applications,” Int. J. Adv. Robot. Syst., vol. 11, no. 1, 2014.
[43] A. P. Sabelhaus, D. Mirsky, L. M. Hill, N. C. Martins, and S. Bergbreiter, “TinyTeRP: A Tiny
Terrestrial Robotic Platform with modular sensing,” IEEE Int. Conf. on Robotics and
Automation, 2013, pp. 2600–2605.
[44] M. Lewis, “Human Interaction With Multiple Remote Robots,” Rev. Hum. Factors Ergon., vol.
9, no. 1, pp. 131–174, Nov. 2013.
[45] T. B. Sheridan and W. L. Verplank, “Human and Computer Control of Undersea Teleoperators,”
Jul. 1978.
[46] W. Dargie and C. Poellabauer, “Localization,” in Fundamentals of Wireless Sensor Networks,
John Wiley & Sons, Ltd, 2010, pp. 249–266.
[47] A. Gasparri, R. K. Williams, A. Priolo, and G. S. Sukhatme, “Decentralized and Parallel
Constructionsfor Optimally Rigid Graphs in,” IEEE Trans. Mob. Comput., vol. 14, no. 11, pp.
2216–2228, Nov. 2015.
[48] P. Walker, S. A. Amraii, N. Chakraborty, M. Lewis, and K. Sycara, “Human control of robot
swarms with dynamic leaders,” IEEE/RSJ Int. Conf. on Intell. Robots and Systems, 2014, pp.
1108–1113.
[49] R. Nagpal, H. Shrobe, and J. Bachrach, “Organizing a Global Coordinate System from Local
Information on an Ad Hoc Sensor Network,” Inform. Processing in Sensor Networks, F. Zhao
and L. Guibas, Eds. Springer Berlin Heidelberg, 2003, pp. 333–348.
[50] R. Doriya, S. Mishra, and S. Gupta, “A brief survey and analysis of multi-robot communication
and coordination,” Int. Conf. on Computing, Communication Automation, 2015, pp. 1014–1021.
[51] P. Walker, S. Nunnally, M. Lewis, A. Kolling, N. Chakraborty, and K. Sycara, “Neglect
benevolence in human control of swarms in the presence of latency,” IEEE Int. Conf. on Systems,
Man, and Cybernetics, 2012, pp. 3009–3014.
[52] P. Walker, S. Nunnally, M. Lewis, A. Kolling, N. Chakraborty, and K. Sycara, “Neglect
Benevolence in Human-Swarm Interaction with Communication Latency,” Swarm,
Evolutionary, and Memetic Computing, B. K. Panigrahi, S. Das, P. N. Suganthan, and P. K.
Nanda, Eds. Springer Berlin Heidelberg, 2012, pp. 662–669.
[53] J. McLurkin, J. Smith, J. Frankel, D. Sotkowitz, D. Blau, and B. Schmidt, “Speaking Swarmish:
Human-Robot Interface Design for Large Swarms of Autonomous Mobile Robots.,” AAAI Spring
Symposium: To Boldly Go Where No Human-Robot Team Has Gone Before, 2006, pp. 72–75.
[54] T. Abukhalil, M. Patil, S. Patel, and T. Sobh, “Coordinating a heterogeneous robot swarm using
Robot Utility-based Task Assignment (RUTA),” IEEE Int. Workshop on Advanced Motion
Control, 2016, pp. 57–62.
[55] C. T. Recchiuto, A. Sgorbissa, and R. Zaccaria, “Visual feedback with multiple cameras in a
UAVs Human–Swarm Interface,” Robot. Auton. Syst., vol. 80, pp. 43–54, Jun. 2016.
[56] S. Nunnally et al., “Human influence of robotic swarms with bandwidth and localization issues,”
IEEE Int. Conf. on Systems, Man, and Cybernetics, 2012, pp. 333–338.
[57] J. Y. Kim, T. Colaco, Z. Kashino, G. Nejat, and B. Benhabib, “mROBerTO: A modular millirobot
for swarm-behavior studies,” IEEE/RSJ Int. Conf. on Intell. Robots and Systems, 2016, pp. 2109–
2114.
[58] Y. K. Lopes, A. B. Leal, and T. J. Dodd, “Application of Supervisory Control Theory to Swarms
of e-puck and Kilobot Robots,” Swarm Intell., Springer International Publishing, 2014, pp. 62–
73.
[59] N. Correll, S. Rutishauser, and A. Martinoli, “Comparing Coordination Schemes for Miniature
Robotic Swarms: A Case Study in Boundary Coverage of Regular Structures,” Experimental
Robotics, O. Khatib, V. Kumar, and D. Rus, Eds. Springer Berlin Heidelberg, 2008, pp. 471–480.
[60] D. Fyler, B. Sullivan, and I. A. Raptis, “Distributed object manipulation using a mobile multi-
agent system,” IEEE Int. Conf. on Technologies for Practical Robot Applications (TePRA), 2015,
pp. 1–6.
[61] F. El-Moukaddem, E. Torng, and G. Xing, “Mobile Relay Configuration in Data-Intensive
Wireless Sensor Networks,” IEEE Trans. Mob. Comput., vol. 12, no. 2, pp. 261–273, Feb. 2013.
[62] P. Vartholomeos, K. Vlachos, and E. Papadopoulos, “Analysis and Motion Control of a
Centrifugal-Force Microrobotic Platform,” IEEE Trans. Autom. Sci. Eng., vol. 10, no. 3, pp. 545–
553, Jul. 2013.
[63] A. W. Mahoney and J. J. Abbott, “Five-degree-of-freedom manipulation of an untethered
magnetic device in fluid using a single permanent magnet with application in stomach capsule
endoscopy,” Int. J. Robot. Res., p. 0278364914558006, Feb. 2015.
[64] S. Yim, E. Gultepe, D. H. Gracias, and M. Sitti, “Biopsy using a Magnetic Capsule Endoscope
Carrying, Releasing, and Retrieving Untethered Microgrippers,” IEEE Trans. Biomed. Eng., vol.
61, no. 2, pp. 513–521, Feb. 2014.
[65] R. S. Fearing, “Challenges for Effective Millirobots,” Int. Symp. on Micro-NanoMechatronics
and Human Science, 2006, pp. 1–5.
[66] S. Bergbreiter, “Effective and efficient locomotion for millimeter-sized microrobots,” IEEE/RSJ
Int. Conf. on Intell. Robots and Systems (IROS), 2008, pp. 4030–4035.
[67] T. Bräunl, Embedded Robotics: Mobile Robot Design and Applications with Embedded Systems.
Springer Science & Business Media, 2008.
[68] D. W. Haldane and R. S. Fearing, “Running beyond the bio-inspired regime,” IEEE Int. Conf. on
Robotics and Automation (ICRA), 2015, pp. 4539–4546.
[69] R. Bruhwiler et al., “Feedback control of a legged microrobot with on-board sensing,” IEEE/RSJ
Int. Conf. on Intell. Robots and Systems (IROS), 2015, pp. 5727–5733.
[70] K. Saito et al., “Neural Networks Integrated Circuit for Biomimetics MEMS Microrobot,”
Robotics, vol. 3, no. 3, pp. 235–246, Jun. 2014.
[71] R. Jeanson et al., “Self-organized aggregation in cockroaches,” Anim. Behav., vol. 69, no. 1, pp.
169–180, Jan. 2005.
[72] W. M. Spears, D. F. Spears, J. C. Hamann, and R. Heil, “Distributed, Physics-Based Control of
Swarms of Vehicles,” Auton. Robots, vol. 17, no. 2–3, pp. 137–162, Sep. 2004.
[73] Y. Jin and Y. Meng, “Morphogenetic Robotics: An Emerging New Field in Developmental
Robotics,” IEEE Trans. Syst. Man Cybern. Part C Appl. Rev., vol. 41, no. 2, pp. 145–160, Mar.
2011.
[74] V. Trianni, R. Groß, T. H. Labella, E. Şahin, and M. Dorigo, “Evolving Aggregation Behaviors
in a Swarm of Robots,” Advances in Artificial Life, W. Banzhaf, J. Ziegler, T. Christaller, P.
Dittrich, and J. T. Kim, Eds. Springer Berlin Heidelberg, 2003, pp. 865–874.
[75] G. Francesca, M. Brambilla, V. Trianni, M. Dorigo, and M. Birattari, “Analysing an Evolved
Robotic Behaviour Using a Biological Model of Collegial Decision Making,” From Animals to
Animats 12, T. Ziemke, C. Balkenius, and J. Hallam, Eds. Springer Berlin Heidelberg, 2012, pp.
381–390.
[76] M. Gauci, J. Chen, T. J. Dodd, and R. Groß, “Evolving Aggregation Behaviors in Multi-Robot
Systems with Binary Sensors,” Distributed Autonomous Robotic Systems, M. A. Hsieh and G.
Chirikjian, Eds. Springer Berlin Heidelberg, 2014, pp. 355–367.
[77] L. Panait and S. Luke, “Cooperative Multi-Agent Learning: The State of the Art,” Auton. Agents
Multi-Agent Syst., vol. 11, no. 3, pp. 387–434, Nov. 2005.
[78] Y. Yuequan, J. Lu, C. Zhiqiang, T. Hongru, X. Yang, and N. Chunbo, “A survey of reinforcement
learning research and its application for multi-robot systems,” Control Conference, 2012, pp.
3068–3074.
[79] R. Groß and M. Dorigo, “Evolution of Solitary and Group Transport Behaviors for Autonomous
Robots Capable of Self-Assembling,” Adapt. Behav., vol. 16, no. 5, pp. 285–305, Oct. 2008.
[80] V. Sperati, V. Trianni, and S. Nolfi, “Evolving coordinated group behaviours through
maximisation of mean mutual information,” Swarm Intell., vol. 2, no. 2–4, pp. 73–95, Sep. 2008.
[81] K. Dantu, M. Rahimi, H. Shah, S. Babel, A. Dhariwal, and G. S. Sukhatme, “Robomote: Enabling
Mobility in Sensor Networks,” Int. Symp. on Information Processing in Sensor Networks,
Piscataway, NJ, USA, 2005.
[82] D. J. Bruemmer, D. D. Dudenhoeffer, M. D. McKay, and M. O. Anderson, “A Robotic Swarm
for Spill Finding and Perimeter Formation,” Aug. 2002.
[83] A. Dhariwal, G. S. Sukhatme, and A. A. G. Requicha, “Bacterium-inspired robots for
environmental monitoring,” IEEE Int. Conf. on Robotics and Automation, 2004, vol. 2, pp. 1436–
1443.
[84] M. Duarte et al., “Application of swarm robotics systems to marine environmental monitoring,”
OCEANS, 2016, pp. 1–8.
[85] P. Smith, “Comparing low-power wireless technologies,” Tech Zone Digikey Online Mag. Digi-
Key Corp., vol. 701, 2011.
[86] R. Ramaithitima, M. Whitzer, S. Bhattacharya, and V. Kumar, “Sensor coverage robot swarms
using local sensing without metric information,” IEEE Int. Conf. on Robotics and Automation,
2015, pp. 3408–3415.
[87] Y. Ou, P. Kang, M. J. Kim, and A. A. Julius, “Algorithms for simultaneous motion control of
multiple T. pyriformis cells: Model predictive control and Particle Swarm Optimization,” IEEE
Int. Conf. on Robotics and Automation, 2015, pp. 3507–3512.
[88] L. Barnes, M.-A. Fields, and K. Valavanis, “Unmanned ground vehicle swarm formation control
using potential fields,” Mediterranean Conf. on Control Automation, 2007, pp. 1–8.
[89] R. Oikawa, M. Takimoto, and Y. Kambayashi, “Distributed formation control for swarm robots
using mobile agents,” IEEE Int. Symp. on Applied Computational Intell. and Informatics, 2015,
pp. 111–116.
[90] G. Goertzel, “An Algorithm for the Evaluation of Finite Trigonometric Series,” Am. Math. Mon.,
vol. 65, no. 1, pp. 34–35, 1958.
[91] J. Pugh and A. Martinoli, “Relative localization and communication module for small-scale
multi-robot systems,” IEEE SoutheastCon, 2006, pp. 188–193.
[92] A. Gutierrez, A. Campo, M. Dorigo, J. Donate, F. Monasterio-Huelin, and L. Magdalena, “Open
E-puck Range & Bearing miniaturized board for local communication in swarm robotics,” IEEE
Int. Conf. on Robotics and Automation, 2009, pp. 3111–3116.
[93] P. Li, N. Scalabrino, Y. Fang, E. Gregori, and I. Chlamtac, “How to Effectively Use
Multiple Channels in Wireless Mesh Networks,” IEEE Trans. Parallel Distrib. Syst., vol. 20, no.
11, pp. 1641–1652, Nov. 2009.
[94] R. N. Bracewell, “The fast Hartley transform,” Proc. IEEE, vol. 72, no. 8, pp. 1010–1018, Aug.
1984.
[95] Y. Maddahi, N. Sepehri, A. Maddahi, and M. Abdolmohammadi, “Calibration of wheeled mobile
robots with differential drive mechanisms: an experimental approach,” Robotica, vol. 30, no. 6,
pp. 1029–1039, Oct. 2012.
[96] A. Dunoyer, L. Balmer, K. J. Burnham, and D. J. G. James, “On the discretization of single-input
single-output bilinear systems,” Int. J. Control, vol. 68, no. 2, pp. 361–372, Jan. 1997.
[97] “Setting Up Eclipse - Nordic Developer Zone.” [Online]. Available:
https://devzone.nordicsemi.com/tutorials/7/. [Accessed: 07-Jun-2017].
[98] “Atmel Studio 7 | Microchip Technology Inc.” [Online]. Available:
http://www.microchip.com/mymicrochip/filehandler.aspx?ddocname=en594569. [Accessed:
07-Jun-2017].
[99] T. AbuKhalil, T. Sobh, and M. Patil, “Survey on Decentralized Modular Robots and Control
Platforms,” Innovations and Advances in Computing, Informatics, Syst. Sci., Networking and
Eng., T. Sobh and K. Elleithy, Eds. Springer International Publishing, 2015, pp. 165–175.
[100] O. Soysal and E. Şahin, “Probabilistic aggregation strategies in swarm robotic systems,” IEEE
Swarm Intell. Symp., 2005, pp. 325–332.
[101] S. Nouyan, R. Groß, M. Bonani, F. Mondada, and M. Dorigo, “Teamwork in Self-Organized
Robot Colonies,” IEEE Trans. Evol. Comput., vol. 13, no. 4, pp. 695–711, Aug. 2009.
[102] A. Howard, M. J. Matarić, and G. S. Sukhatme, “Mobile Sensor Network Deployment using
Potential Fields: A Distributed, Scalable Solution to the Area Coverage Problem,” Distributed
Autonomous Robotic Syst. 5, H. Asama, T. Arai, T. Fukuda, and T. Hasegawa, Eds. Springer
Japan, 2002, pp. 299–308.
[103] S. Camazine, Self-organization in Biological Systems. Princeton University Press, 2003.
[104] A. Okubo, “Dynamical aspects of animal grouping: Swarms, schools, flocks, and herds,” Adv.
Biophys., vol. 22, pp. 1–94, Jan. 1986.
[105] J. K. Parrish, S. V. Viscido, and D. Grünbaum, “Self-organized fish schools: an examination of
emergent properties,” Biol. Bull., vol. 202, no. 3, pp. 296–305, Jun. 2002.
[106] O. Soysal and E. Şahin, “A Macroscopic Model for Self-organized Aggregation in Swarm
Robotic Systems,” Swarm Robotics, E. Şahin, W. M. Spears, and A. F. T. Winfield, Eds. Springer
Berlin Heidelberg, 2006, pp. 27–42.
[107] J. Hereford, “Analysis of BEECLUST swarm algorithm,” IEEE Symp. on Swarm Intell., 2011,
pp. 1–7.
[108] S. W. Ekanayake and P. N. Pathirana, “Formations of robotic swarm: an artificial force based
approach,” Int. J. Adv. Robot. Syst., vol. 6, no. 1, pp. 7–24, 2009.
[109] K. Derr and M. Manic, “Extended Virtual Spring Mesh (EVSM): The Distributed Self-
Organizing Mobile Ad Hoc Network for Area Exploration,” IEEE Trans. Ind. Electron., vol. 58,
no. 12, pp. 5424–5437, Dec. 2011.
[110] W. M. Spears and D. F. Spears, Eds., Physicomimetics: Physics-Based Swarm Intelligence.
Springer Berlin Heidelberg, 2012.
[111] D. V. Dimarogonas and K. J. Kyriakopoulos, “Connectedness preserving distributed swarm
aggregation for multiple kinematic robots,” IEEE Trans. Robot., vol. 24, no. 5, pp. 1213–23, Oct.
2008.
[112] V. Gazi, “Swarm aggregations using artificial potentials and sliding-mode control,” IEEE Trans.
Robot., vol. 21, no. 6, pp. 1208–1214, 2005.
[113] H. Hamann, H. Wörn, K. Crailsheim, and T. Schmickl, “Spatial macroscopic models of a bio-
inspired robotic swarm algorithm,” IEEE/RSJ Int. Conf. on Intell. Robots and Systems, 2008, pp. 1415–
1420.
[114] B. Yang, Y. Ding, Y. Jin, and K. Hao, “Self-organized swarm robot for target search and trapping
inspired by bacterial chemotaxis,” Robot. Auton. Syst., vol. 72, pp. 83–92, Oct. 2015.
[115] S. Nouyan, A. Campo, and M. Dorigo, “Path formation in a robot swarm,” Swarm Intell., vol. 2,
no. 1, pp. 1–23, Dec. 2007.
[116] P. M. Maxim, W. M. Spears, and D. F. Spears, “Robotic Chain Formations,” IFAC Proc. Vol.,
vol. 42, no. 22, pp. 19–24, 2009.
[117] V. Sperati, V. Trianni, and S. Nolfi, “Self-organised path formation in a swarm of robots,” Swarm
Intell., vol. 5, no. 2, pp. 97–119, Apr. 2011.
[118] S. M. Lee and H. Myung, “Receding horizon particle swarm optimisation-based formation
control with collision avoidance for non-holonomic mobile robots,” IET Control Theory Appl.,
vol. 9, no. 14, pp. 2075–2083, 2015.
[119] E. Osherovich, V. Yanovski, I. A. Wagner, and A. M. Bruckstein, “Robust and Efficient Covering
of Unknown Continuous Domains with Simple, Ant-Like A(ge)nts,” Int. J. Robot. Res., vol. 27,
no. 7, pp. 815–831, Jul. 2008.
[120] T. Kuyucu, I. Tanev, and K. Shimohara, “Evolutionary Optimization of Pheromone-Based
Stigmergic Communication,” Applicat. of Evolutionary Computation, C. D. Chio, A. Agapitos,
S. Cagnoni, C. Cotta, F. F. de Vega, G. A. D. Caro, R. Drechsler, A. Ekárt, A. I. Esparcia-Alcázar,
M. Farooq, W. B. Langdon, J. J. Merelo-Guervós, M. Preuss, H. Richter, S. Silva, A. Simões, G.
Squillero, E. Tarantino, A. G. B. Tettamanzi, J. Togelius, N. Urquhart, A. Ş. Uyar, and G. N.
Yannakakis, Eds. Springer Berlin Heidelberg, 2012, pp. 63–72.
[121] I. A. Wagner, M. Lindenbaum, and A. M. Bruckstein, “Distributed covering by ant-robots using
evaporating traces,” IEEE Trans. Robot. Autom., vol. 15, no. 5, pp. 918–933, Oct. 1999.
[122] J. Svennebring and S. Koenig, “Building Terrain-Covering Ant Robots: A Feasibility Study,”
Auton. Robots, vol. 16, no. 3, pp. 313–332, May 2004.
[123] B. Ranjbar-Sahraei, G. Weiss, and A. Nakisaee, “A Multi-robot Coverage Approach Based on
Stigmergic Communication,” Multiagent Syst. Technologies, I. J. Timm and C. Guttmann, Eds.
Springer Berlin Heidelberg, 2012, pp. 126–138.
[124] S. Poduri and G. S. Sukhatme, “Constrained coverage for mobile sensor networks,” IEEE Int.
Conf. on Robotics and Automation, 2004, vol. 1, pp. 165–171.
[125] E. Ugur, A. E. Turgut, and E. Şahin, “Dispersion of a swarm of robots based on realistic wireless
intensity signals,” Int. Symp. on Comput. and Inform. Sci., 2007, pp. 1–6.
[126] E. Castello et al., “Adaptive foraging for simulated and real robotic swarms: the dynamical
response threshold approach,” Swarm Intell., pp. 1–31, Jan. 2016.
[127] J. P. Hecker, J. C. Carmichael, and M. E. Moses, “Exploiting clusters for complete resource
collection in biologically-inspired robot swarms,” IEEE/RSJ Int. Conf. on Intell. Robots and
Systems (IROS), 2015, pp. 434–440.
[128] N. R. Hoff, A. Sagoff, R. J. Wood, and R. Nagpal, “Two foraging algorithms for robot swarms
using only local communication,” IEEE Int. Conf. on Robotics and Biomimetics (ROBIO), 2010, pp. 123–130.
[129] K. Lerman, C. Jones, A. Galstyan, and M. J. Matarić, “Analysis of Dynamic Task Allocation in
Multi-Robot Systems,” Int. J. Robot. Res., vol. 25, no. 3, pp. 225–241, Mar. 2006.
[130] S. Shen, N. Michael, and V. Kumar, “Autonomous multi-floor indoor navigation with a
computationally constrained MAV,” IEEE Int. Conf. on Robotics and Automation, 2011, pp. 20–
25.
[131] Y. Tan and Z. Zheng, “Research Advance in Swarm Robotics,” Def. Technol., vol. 9, no. 1, pp.
18–39, Mar. 2013.
[132] A. Aly, S. Griffiths, and F. Stramandinoli, “Metrics and benchmarks in human-robot interaction:
Recent advances in cognitive robotics,” Cogn. Syst. Res.
[133] M. D. Manning, C. E. Harriott, S. T. Hayes, J. A. Adams, and A. E. Seiffert, “Heuristic Evaluation
of Swarm Metrics’ Effectiveness,” ACM/IEEE Int. Conf. on Human-Robot Interaction Extended
Abstracts, NY, USA, 2015, pp. 17–18.
[134] C. E. Harriott, A. E. Seiffert, S. T. Hayes, and J. A. Adams, “Biologically-Inspired Human-
Swarm Interaction Metrics,” Proc. Hum. Factors Ergon. Soc. Annu. Meet., vol. 58, no. 1, pp.
1471–1475, Sep. 2014.
[135] “Itead Studio: Normal Condition of PCB capabilities.” [Online]. Available:
http://support.iteadstudio.com/support/solutions/articles/1000156313-normal-condition-of-pcb-
capabilities. [Accessed: 29-Apr-2017].
[136] “APA3010P3BT-GX Kingbright | Sensors, Transducers | DigiKey.” [Online]. Available:
https://www.digikey.ca/product-detail/en/kingbright/APA3010P3BT-GX/754-1841-1-
ND/4977013. [Accessed: 09-May-2016].
[137] “mROBerTO - Git Repository.” [Online]. Available: https://bitbucket.org/beeboop/mroberto.
[Accessed: 09-Jun-2017].