
Human Robot Interaction in Guardians

Amir M. Naghsh1, Jeremi Gancet2, Andry Tanoto3, Jacques Penders1, Chris R. Roast1, and Michel Ilzkovitz2

1 Sheffield Hallam University, Sheffield, UK — [email protected], [email protected], [email protected]

2 Space Applications Services N.V., Zaventem, Belgium — [email protected], [email protected]

3 Heinz Nixdorf Institute, Paderborn, Germany — [email protected]

Summary. There are different forms of human-robot interaction, allowing a team of humans and robots to take advantage of the skills of each team member. A relatively new area of research considers interactions between humans and a team of robots, called a swarm of robots. This paper discusses possible forms of interaction in swarm robotics within the Guardians project. The paper particularly addresses the use of assistive swarm robotics to support fire-fighters in navigation and search operations. It reports on existing fire-fighting operations. Furthermore, the paper presents the current state of the art in human-swarm interaction for such operations. Then direct and remote interactions in the Guardians project are introduced, and finally related communication issues are discussed.

1 Introduction

Swarm robotics builds upon the pioneering work of Reynolds [29], who simulated a flock of birds in flight. Sahin [32] describes swarm robotics as the study of (i) a large number of (ii) homogeneous, (iii) autonomous, (iv) relatively incapable or inefficient robots with (v) local sensing and communication capabilities.

The Guardians (Group of Unmanned Assistant Robots Deployed In Aggressive Navigation by Scent) project is applying a swarm of autonomous robots in urban search and rescue operations.

The swarm consists of several robot platforms, including off-the-shelf mini robots as well as middle-size robots.

The Guardians swarm is intended to support the real-life task of navigation and search operations in an urban environment; in particular, the project has selected the search of an industrial warehouse in smoke as its central application scenario.

In addition, a base station monitors and controls the overall activities of both the swarm of robots and the humans in the field, and provides decision-making support to the operations commanders.

The Guardians project distinguishes between operations involving robots only and operations where human fire-fighters are involved as well. In this paper, we focus on a situation where human fire-fighters are involved and the major task of the robot swarm is to provide navigation support and to safeguard the humans. The safety of the human fire-fighters has the highest priority in such operations. An earlier paper on Guardians [25] reported on operations in which only robots are deployed to gather information at the early stage of an incident.

Direct user involvement in the development process is a key factor in successful design. Such involvement must proceed iteratively from the early stages of design. The Guardians project applies a participatory design approach to support direct involvement of different stakeholders, including potential human swarm members. To obtain feedback on design proposals and to comply with fire-fighting practice, the Guardians project has approached South Yorkshire Fire and Rescue Service (SyFire) to be a partner in the project. Direct involvement of SyFire is intended to ensure effective interaction between human and swarm, fit for the purpose of search and rescue support.

2 Guardians’ Environment

When SyFire was first approached to enquire about the applicability of a swarm of robots, they pointed out that industrial warehouses are a major practical problem.

In general, industrial warehouses are large single-storey buildings. The warehouses consist of large open spaces alternating with storage areas made up of vertical racks in which a multiplicity of materials is stored. The warehouses are divided into sections separated by walls that can resist fire for several hours. These fire-resistant walls restrict the fire to a certain area in the event of a fire. However, the smoke may cover a whole section or several sections of the warehouse. Apart from the smoke, the warehouse is in a normal and orderly state, and the ground is easily passable.

The large open spaces of a warehouse, the low visibility and the time constraints make searching a warehouse very risky. In addition, the smoke in the warehouse may contain substantial concentrations of toxic or inflammable material. There have been tragic examples where fire-fighters died after getting lost in the fire. In the 1991 warehouse fire in Gillender Street, London (UK), two fire-fighters died, and in the 1999 warehouse fire in Worcester (USA), six fire-fighters lost their lives. In the latter incident, thick black smoke developed and the communication link was frequently disrupted.

Fig. 1. Left: Guardians partners experiencing fire-fighting conditions in the SyFire training facilities. Right: partners experiencing the "guideline" feeling with a leather glove.

In the event of a fire incident the smoke builds up quickly, in only a few minutes, and reduces visibility dramatically, leaving fire-fighters with no sight. In conditions of poor visibility human beings easily get disoriented and may get lost. Rendered without sight, fire-fighters can only rely on their senses of touch and hearing. However, these senses are also restrained: the sense of touch is restricted by their clothing gear, and the sense of hearing is reduced by the noisy breathing apparatus. Thus, in a smoke-filled environment fire-fighters are severely hampered and have to proceed carefully (figure 1).

2.1 Firefighters Operations

To gain a better understanding of the operations carried out by fire-fighters, we have conducted a series of meetings with SyFire personnel. Additionally, to help the partners truly understand the conditions that fire-fighters are subjected to, the partners of Guardians attended a full day of training at the SyFire training centre. This section reports some of our findings on fire-fighters' operations and practice.


There is usually little information provided when the fire brigade is alerted to an incident. The first task of the arriving appliances is to assess the incident. Fire-fighter safety is considered a high priority at all times during the incident.

On arriving at the incident, fire-fighters are grouped into teams and briefed on the specific tasks and roles that they have to perform. A map of the premises is usually available. The map, however, only shows the major constructive elements such as walls and doorways, and may not contain the interior design.

The span of control for any officer is arranged to be between 3 and 5 lines of communication, in order to avoid an overload (and consequent neglect) of information. In large incidents involving industrial warehouses, the incident commander (IC) deals with the overall supervision of the incident, while the operations commander (OpsComm) deals with the sector commanders and crews who are directly involved. In addition, an Entry Control Officer (ECO) is appointed for each entry point, with the following duties: (i) to update the Entry Control Board with information on the fire-fighters who have been committed into the scene of the incident or have left it; (ii) to check the breathing apparatus's 'Time of Whistle' for crews committed into the incident4; (iii) to liaise with other ECOs; (iv) to liaise with the sector commander.

Fire-fighters are usually committed into the incident in teams of two. When entering a smoke-filled environment, they are provided with breathing apparatus supplying fresh air. In the United Kingdom the procedure is that a first team lays out and fixes a guideline along a wall (fig. 1, right part). The guideline is a special line which is used to indicate a route between the Entry Control Point and the scene of operations. Subsequent teams aiming for the scene of operations follow the guideline, but nevertheless they advance only at a crawling speed.

One of the fire-fighters, usually the squad leader, moves forward while feeling for obstacles and survivors and testing the integrity of the floor, while the other fire-fighter holds on to the leader and maintains communication with the squad leader (verbally) and with the ECO (through the radio channel). On the way out, the squad debriefs the ECO, who reports back to the sector commander. The sector commander in turn feeds the collected information back to the operations commander.

2.2 GUARDIANS Appliance Integration and Deployment Issues

A Guardians appliance is integrated into fire brigade services in the same way as any other existing specialized fire-fighting appliance (diving appliance, chemical hazard appliance, etc.). It typically consists of a truck carrying the robots, light maintenance material, communication hardware, base station infrastructure (i.e. computers running the base station modules, plus remote HRI devices), and the appliance crew, i.e. in a typical configuration 1 base station coordinator, 2 operators, 1 data specialist and 3 human crew members (with adequate equipment).

In a typical Guardians scenario, a Guardians appliance is deployed into an operational sector where particularly high risk levels are identified: e.g. if the building maps are unavailable, if the building material is particularly inflammable, if flashovers (i.e. a simultaneous ignition of all the combustible material in the area) are likely to occur, if a number of victims is suspected, etc. The Guardians appliance is moved to the target sector in the same way as a fire-fighting truck, and is then deployed at a reasonable distance. For obvious reasons, the setup of the system must be extremely fast: below ten minutes, according to end-user feedback. The setup consists of powering all the electronic devices, running the robots' check-list procedures (ensuring that the wireless communication works properly, and that all sensors and actuators are operational) and preparing the mission execution (mission template instantiation and plan generation). Then tasks are allocated both to robots and human crew members, and the operations start.
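
The setup sequence described above can be sketched as a simple pre-mission check. This is only an illustrative sketch: the check-list items (wireless link, sensors, actuators) and the ten-minute constraint come from the scenario description, while all function and field names are hypothetical.

```python
SETUP_BUDGET_S = 10 * 60  # end-user requirement: full setup below ten minutes

def run_checklist(robot):
    """Return True iff one robot passes its pre-mission check-list."""
    return (robot["wireless_ok"]                   # communication link verified
            and all(robot["sensors"].values())     # every sensor operational
            and all(robot["actuators"].values()))  # every actuator operational

def setup_appliance(robots, step_durations_s):
    """Check every robot and verify the setup fits the time budget."""
    ready = [r["id"] for r in robots if run_checklist(r)]
    total = sum(step_durations_s.values())
    return {"ready_robots": ready, "within_budget": total <= SETUP_BUDGET_S}
```

For example, a robot whose wireless check fails would simply be excluded from the mission plan generated afterwards, rather than delaying deployment of the rest of the swarm.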

Figure 2 depicts the Guardians appliance w.r.t. the usual (SyFire) fire brigade organization for the management of an incident.

4 The cylinders contain roughly 20 minutes of air supply.


[Figure 2 shows the Guardians base station and appliance, with HMI and data links connecting the on-site, on-sector and off-site actors: IC, OpsComm, CO, Safety Officer, ECO(s), sector commander, other appliances and off-site actors, the base station infrastructure (robots coordinator, sensor specialist, operators) and the human crew members.]

Fig. 2. Guardians appliance w.r.t. SyFire fire brigade organization. CO stands for Communication Officer, IC stands for Incident Commander, OpsComm stands for Operations Commander and ECO stands for Entry Control Officer.

3 Related Work

3.1 Existing Approaches to Direct HRI

The interface for human-swarm interaction is essentially different from the human-robot interfaces applied in telerobotics. In telerobotics (refer to PeLoTe IST-2001-38873, or View-Finder FP6-045541) multiple humans may control a robot: either the humans coordinate among themselves to issue commands to the robot, or each one individually interacts with the robot. In the latter case, the robot has to prioritize commands before carrying them out. In Guardians, however, one or several humans cooperate with a swarm of robots. This includes human fire-fighters who interact directly with the robots in the field of the incident, as well as human operators who may interact with the robots remotely.

The human fire-fighters cooperating with a swarm are subjected to conditions such as no visibility, a noisy environment and thick clothing gear, which restrict their senses. Therefore the human-swarm interface cannot fully rely on the commonly used audio and visual communication means. The project is forced to look at other means of establishing interaction between the humans and the robots, while taking into account the human's cognitive load.

Van Erp et al. [35] investigated the feasibility of presenting navigational information on a tactile display. They used a vibrotactile waist belt consisting of eight tactors. The tactors were used to provide information on the direction and distance to a specific location or way-point. Van Erp et al. distinguished four parameters of tactors that can be translated into a "tactile picture": (i) the location of the tactor that presents the vibration, (ii) the timing that indicates when the tactor is vibrating, (iii) the frequency of vibration and (iv) the amplitude of the vibration. Other research has also used tactors to create a "tactile picture". The location of the tactors is the first choice among the four parameters for coding navigational information [5, 3].
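
As an illustration, the first two parameters (tactor location and pulse timing) might code direction and distance roughly as follows. The eight-tactor belt layout matches the display described above; the concrete angles, intervals and maximum range are assumptions for the sketch.

```python
NUM_TACTORS = 8  # waist belt with eight equally spaced tactors

def tactor_for_bearing(bearing_deg):
    """Code direction in the location parameter: pick the tactor whose
    position on the belt is closest to the target bearing (0 deg = front)."""
    return round((bearing_deg % 360) / (360 / NUM_TACTORS)) % NUM_TACTORS

def pulse_interval_s(distance_m, near_s=0.25, far_s=2.0, max_m=50.0):
    """Code distance in the timing parameter: nearer way-points pulse faster."""
    frac = min(max(distance_m / max_m, 0.0), 1.0)
    return near_s + frac * (far_s - near_s)
```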

Matrices of tactors have been used to display more complex spatial information in the form of a 3D "tactile picture". Research by [4, 19] has used a larger number of tactors to cover a larger area of the torso in order to present complex navigational information. Presenting spatial information via the sense of touch can be considered a possible form of communication enabling the swarm to interact with the human under Guardians conditions.

A swarm that operates around human fire-fighters should move accordingly and coherently with the humans. A possible form of communication is passive human-to-robot interaction. Research on social robots [11] has investigated the use of autonomous robots engaging with humans in the form of natural person-following behavior, where the robot walks alongside a human being. To detect and follow human beings, Gockley et al. [11] used a laser-based tracking method, which is not appropriate under Guardians conditions.

The movement of the human body, or body language, has also been considered as another form that can be used for passive communication and interaction between humans and robots [20]. Other movement-based interactions consider a more limited physical space, such as the movement of the hand only or of another specific part of the body. Haptic devices such as the Phantom [26] and the Data-Glove [38] are input devices that can perform independently of the visibility conditions. For instance, the Data-Glove is a lightweight cotton glove that can provide tactile feedback by measuring the fingers' bending and hand movements.

3.2 Remote Monitoring and Control Solutions

A well-known environment for multi-robot mission specification and execution monitoring is Arkin's MissionLab suite [2, 18], which has been used and improved for more than 10 years. It allows designers to build complex robot control structures and to execute them as robot behaviors. In addition it supports plan repair through case-based reasoning [21]. MissionLab's ability to support mission design and execution monitoring could be in line with Guardians' needs. It is inspired by military-style plans and approaches, and hence could make sense with regard to integration into fire brigade procedures. However, the provided behavior-based robot control framework is not compatible with our project's needs: the robotic platforms in Guardians come with their own existing navigation capabilities and software architecture, and the MissionLab architecture cannot simply be implemented on the Guardians robots. Moreover, scalability to a swarm of robots is not clearly addressed in the MissionLab documentation.

The FAMOUS framework [10] has been developed and exploited for almost 10 years for remote monitoring and control (M&C) of robots and spacecraft. It is structured as a multi-purpose M&C solution, featuring mission recording mechanisms, multi-operator interfaces and simulation interfaces that meet space industry quality standards. However, this framework is not especially designed to cope with multi-system M&C, and hence lacks the scalability to cope with robot swarms. In addition, the existing implementation is now outdated. Nonetheless, much of the underlying philosophy still makes sense in Guardians.

MOCU [27] is another control station, currently used for military purposes: it supports monitoring and control activities for multiple robots, and can handle a number of modules related to environment modeling, data gathering and processing. It enables the editing of monitoring and control interfaces through XML configuration files. However, the chosen approach is to control a single robot at a time, so a single user cannot operate multiple robots through this scheme. As a consequence, scalability to robot swarms is an issue here too.

Although the features of these operational robot monitoring and control stations may be beneficial to the Guardians base station design, none of them fully addresses the requirements of a base station that has to deal with (i) a swarm of robots, (ii) humans in the field interacting with the robots, and (iii) flexible remote HRI with various levels of user clearance, in addition to the usual robot station concerns.

3.3 HRI Metrics

To evaluate and validate the effectiveness of interactions between robots and humans in general, we need some standardized method to measure them. Such metrics are regarded as essential but are as yet insufficiently explored.

There have been significant efforts to establish common metrics for HRI. One of the most comprehensive works is the contribution of Fong et al. [8]. In their paper, they list existing metrics that can possibly be used as common metrics for a wide range of HRI tasks and applications. They classify these common metrics into three main groups: system, operator, and robot performance.

To help us analyze human-robot interaction, it is essential to have a taxonomy of human-robot interactions. An example of such a taxonomy is [34], where Scholtz mentions two issues in evaluating human-robot interaction. The first is that we need to evaluate not only the state of the robot system after performing an action but also its state when the action was given; it is important to ensure synchronization between the specified and the actual behavior. The second issue is the separation between the performance of the HRI system and the usability of the interface.

However, we believe that existing taxonomies (such as [34, 37, 9]) do not sufficiently stress two important aspects: environmental adversity and the human condition. In Guardians, because of the environmental conditions, fire-fighters need to equip themselves with special clothing, mask and gloves, which makes it difficult to interact with each other and with the robots.

Additionally, it is important to have a good understanding of how humans collaborate with robots, which requires thorough investigation of how humans perceive robots.

Another important consideration in the design of a multiple human-robot system such as Guardians is to analyze which tasks or operations are more suitable for robots and which are more likely to be performed by humans. With respect to this task allocation issue, Rodriguez and Weisbin [31] introduced an original method for deciding who will perform which tasks in what situation, in the context of space exploration missions using human-robot teams.

Let us finally mention the work of Kooijmans et al. [14] on an interaction analysis tool, the "Interaction Debugger": its purpose is to collect, match, analyse and graphically represent interaction-related data during HRI operations. Such a tool may be very relevant in Guardians as a support for HRI performance measurement.

4 Human Swarm Interactions in GUARDIANS

4.1 Direct Interactions On The Field

As explained earlier, the objective of the Guardians project is to provide a swarm of robots to enhance the task performance of human fire-fighters, particularly to assist them in navigating and searching the industrial warehouse.

At the scene of an incident the swarm assists the fire-fighters by notifying them of the possible hazards surrounding them. When required, the swarm can also lead the fire-fighters to the exit point, towards the scene of operations, or towards any specific area of interest. Whether assisting or leading, the swarm of robots should in general not increase the navigation-related load (physical or cognitive) [17] of a human being.

We distinguish the following requirements for the human swarm interaction:

1. The swarm notifies the humans of possible hazards (e.g. obstacles, high temperature, chemicals);
2. The swarm stays within a relatively close range but also maintains its distance to the fire-fighters to allow them freedom of action;
3. The swarm indicates unambiguously the direction to the scene of incident or backwards to the exit point.

The safety of the human fire-fighters has the highest priority. Therefore interfaces indicating hazards should be the most noticeable for the fire-fighters, while interfaces that provide directional guidance have lower priority.
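
This priority rule can be made concrete with a small arbitration sketch: hazard cues always pre-empt guidance cues, and among cues of equal priority the oldest is served first. The cue structure and field names are hypothetical.

```python
PRIORITY = {"hazard": 0, "guidance": 1}  # lower value = rendered first

def next_cue(pending):
    """Select the next cue to render: hazards pre-empt directional guidance,
    and ties within a priority class are broken in favour of the oldest cue."""
    return min(pending, key=lambda c: (PRIORITY[c["kind"]], -c["age_s"]))
```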

As explained in the introduction, user participation is central to the successful design of an interactive system. Enabling users to envisage or make sense of design proposals (whether those proposals originate with 'professional designers' or with the users themselves) is an essential element of all participatory approaches to design. Users can only make informed choices when the proposals being discussed are meaningful to them. Prototyping is one popular method of helping users (and designers) to understand possible alternatives. Therefore, it is essential to apply tools and methods which allow designers to rapidly prototype novel physical user interfaces that can be evaluated with the potential human swarm members.


The Guardians project applies the Phidgets toolkit5 and the tools and methods [23] that support direct participation of all involved stakeholders, including the potential end-users. The Phidgets toolkit combines lightweight building blocks as input/output devices, such as buttons, sliders, LCD/LED displays, heat/touch/light sensors, etc. The toolkit uses the Universal Serial Bus (USB) to communicate with a PC or hand-held device. Using this toolkit does not require any soldering to connect the components, which makes it quick and easy to use for prototyping possible interaction models.

Human-Swarm Interaction

Passive: The swarm has to adapt its actions to the fire-fighters' movements in the form of a passive human-to-swarm interaction. Guardians' robots are provided with sensors (e.g. ultrasonic, infrared, radio-frequency identification sensors) that are not affected by low or no visibility and can be used to monitor the fire-fighters' movements. To help the robots distinguish the fire-fighters from surrounding obstacles, the fire-fighters' gear is equipped with a radio-frequency identification (RFID) tag. This passive human-robot interaction allows the autonomous swarm to move coherently with the fire-fighters and maintain a bubble of robots around them at all times.
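
A minimal sketch of this "bubble" behaviour, assuming each robot can estimate its range to the fire-fighter's RFID tag: the robot backs off when too close and closes in when too far. The band limits are illustrative values, not project parameters.

```python
MIN_RANGE_M, MAX_RANGE_M = 1.0, 3.0  # assumed comfort band around the human

def bubble_action(range_m):
    """Decide a robot's radial motion from its measured range to the tag."""
    if range_m < MIN_RANGE_M:
        return "retreat"   # too close: give the fire-fighter freedom of action
    if range_m > MAX_RANGE_M:
        return "approach"  # too far: stay within a relatively close range
    return "hold"          # inside the band: keep station in the bubble
```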

Tangible: When the swarm is committed into the field to accompany and safeguard the fire-fighters, it is assigned a high-level task. The swarm algorithms are built on the autonomous operation of the robots, to which human-originated tactical planning instructions may be added. Human control in swarm robotics allows dynamic authoritative control of specific swarm activities based upon local circumstances and human expertise.

In order to enable fire-fighters to interact with the swarm, a tangible interface (preferably with large buttons) will be provided, designed and built in accordance with the current standards of fire-fighters' tools and gear.

Movement-Based: The provided tangible interface has to be effective but simple, since any additional complexity in interacting with the swarm can distract the fire-fighters. To ensure the simplicity of the tangible interface, the number of provided buttons has to be limited. Therefore, based on fire-fighters' practice and code of work, the most important tasks have to be identified and assigned to the restricted number of buttons on the tangible interface. However, in order to maintain control and cooperation between the humans and the robots, the interaction language should allow fire-fighters to communicate a larger number of tasks to the swarm when required. Another possible form of interaction that can be applied, considering Guardians' requirements, is the use of movement-based devices, which can perform in no-visibility conditions. For instance, one form of interaction could be recognizing simple gestures from the feedback provided by haptic devices on the fire-fighter's hand movements.
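
The constraint of a few large buttons mapped to the most important tasks might be expressed as follows. Only the design constraint (a deliberately small, unambiguous button set) comes from the text; the button identifiers and task names are purely illustrative.

```python
# Assumed mapping from a deliberately small button set to high-level tasks.
BUTTON_TASKS = {
    "A": "lead_to_exit",    # guide the team back to the entry point
    "B": "lead_to_scene",   # guide the team to the scene of operations
    "C": "hold_position",   # keep the swarm stationary around the team
}

def on_button(button_id):
    """Translate a button press into a high-level swarm task (None if unmapped)."""
    return BUTTON_TASKS.get(button_id)
```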

Swarm-Human Interactions

Tactile: The safety of the fire-fighters is a top priority, and the most noticeable form of interaction that can be used to notify fire-fighters of possible hazards is through the sense of touch. A tactile interface consisting of eight tactors will be attached to the fire-fighter's torso. The interface displays a "tactile picture" of the locations of possible hazards surrounding the fire-fighter. Both the frequency and the amplitude parameters will be used to communicate the seriousness of hazards.
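
The proposed coding of hazard seriousness into frequency and amplitude could be sketched as follows. The eight-tactor layout and the two seriousness parameters come from the text; the numeric ranges are assumptions.

```python
def hazard_signal(bearing_deg, seriousness):
    """Map a hazard to (tactor index, vibration frequency in Hz, amplitude 0..1).

    The tactor nearest the hazard bearing fires; seriousness (0..1) scales
    both frequency and amplitude, so graver hazards feel sharper and stronger.
    """
    tactor = round((bearing_deg % 360) / 45) % 8   # eight torso tactors
    s = min(max(seriousness, 0.0), 1.0)
    freq_hz = 50 + s * 200                          # assumed 50-250 Hz range
    return tactor, freq_hz, s
```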

Visual: To lead the fire-fighters to a point of interest, the swarm has to forward navigational information to the fire-fighters. The swarm communicates unambiguous directions through a novel visual device (figure 3) to be installed within the fire-fighters' helmets. The visual device displays the directions in a simple form, requiring minimum attention from the fire-fighters to understand them.

5 http://www.phidgets.com/


Fig. 3. Early stage of visual LED-based interface prototyping for robot-to-human communication

4.2 Remote Interaction Via Base station

As a Guardians appliance is deployed in the field, robots and human crew members interact to explore the environment, assess the situation, localize victims and perform their tasks in extremely harsh conditions. In such conditions a base station is critical to the safety of the crew members and to the efficiency of the human crew members' and robots' task execution. According to potential end users (and especially firemen), the main base station features shall consist of:

1. timely remote support to human crew members and robots: providing clear instructions at the right moment is key to avoiding trouble. Even the most sensible instructions may result in unexpected actions if the human crew is overloaded.

2. realtime overall situational awareness for Guardians system operators and stakeholders: a significant effort in the design of the base station shall be devoted to enabling simple, efficient situational awareness at the different levels of the commanding chain: Guardians appliance operators, coordinator, operations commander, etc. It should be mentioned that, in a number of critical systems involving human support for M&C, failures or accidents are often due to a lack of situational awareness. Historically this has sadly been the case in notorious tragedies such as the Chernobyl disaster in 1986, or aerial disasters such as the Tenerife disaster in 1977.

Base station vision

The Guardians base station shall provide classical robot station features such as mission authoring tools, mission execution monitoring and control means, interfaces to robots and human crew members, and mission data recording. The originality of the approach is the ability to support multiple parallel client connections to a main base station server, featuring tailored (and tunable) M&C means according to user roles:

1. Base station coordinator: he is in charge of preparing and validating mission plans, coordinating the activities of operators, robots, human crew members and sensor data specialists, and taking decisions within the scope of the Guardians appliance activities; he is also the interface to and from the commanding chain above the Guardians appliance.

2. Operators: they remotely monitor and control robots' and human crew members' activities in the field, support the analysis of the operational situation and balance the autonomy level of robots (and crew members) according to the available information (situational awareness). They have the means and the clearance to teleoperate robots, groups of robots and human crew members (in this case 'fine supervision', i.e. step-by-step action requests).

3. Sensor data specialists: they observe and analyse sensor data (both raw and processed), and accordingly provide advice and reports to the appliance coordinator. They can typically influence the mission plan, providing suggestions about where to send robots and human crew members, and what activities they should perform in the current context (exploring places, sampling the air, etc.).

4. Stakeholders in the commanding chain: they remotely observe operations, requesting from the Guardians appliance only the most meaningful information for decision making (reports, snapshots, etc.). They take general-purpose decisions regarding the Guardians appliance activities, which the base station coordinator shall apply accordingly.

5. Third-party information channels (media, etc.): only restricted, relevant, synthesized information is provided to media channels, to report on the current state of the situation. Provided data are selected and validated by stakeholders.
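The role-dependent clearance scheme described above can be sketched as a simple lookup table mapping each user role to the M&C capabilities it may use. This is an illustrative sketch only; the role names follow the list above, but the capability names are hypothetical and not taken from the actual Guardians design.

```python
from enum import Enum, auto

class Role(Enum):
    COORDINATOR = auto()
    OPERATOR = auto()
    SENSOR_SPECIALIST = auto()
    STAKEHOLDER = auto()
    MEDIA = auto()

# Illustrative clearance table: which capabilities each role may use.
# Capability names are hypothetical, not part of the Guardians specification.
CLEARANCE = {
    Role.COORDINATOR:       {"edit_mission", "validate_plan", "teleoperate", "monitor", "report"},
    Role.OPERATOR:          {"teleoperate", "adjust_autonomy", "monitor"},
    Role.SENSOR_SPECIALIST: {"view_sensor_data", "suggest_plan_change", "monitor"},
    Role.STAKEHOLDER:       {"view_reports", "view_snapshots"},
    Role.MEDIA:             {"view_validated_summaries"},
}

def is_allowed(role: Role, capability: str) -> bool:
    """Check whether a connected client with the given role may use a capability."""
    return capability in CLEARANCE[role]
```

The base station server would consult such a table on every client request, so that a media client, for instance, can never issue a teleoperation command.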

[Figure 4: block diagram of the Guardians base station, showing its main modules (Mission Editor with Mission Templates; Robots & HCM Interface; Mission Planning, Scheduling & Execution Monitoring (MPSEM); Mission Data Recorder and Player; Services Manager; Sensor Data Processing Module; Users Profiles) and the data flows between them (TM/TC, AV data, sensor data, exploitation data, mission data, user data).]

Fig. 4. Guardians base station architecture and data flows. TM stands for telemetry, TC stands for telecommand, and AV stands for audio-visual.

The main modules of the base station are illustrated in figure 4. The services manager provides users with adequate remote HRI means (displays and other interaction means) according to their clearance level. It gets telemetry data (robot attitude, human crew members' health monitoring, etc.), audio-visual data and processed sensor data from the mission data recorder which, in addition to recording and replaying mission data on request, is also a data centralizer and dispatcher for all the system entities. Besides, the mission editor offers authoring tools to design mission templates which, once applied in the operational context through the MPSEM, give rise to mission tasks both for human crew members and robots in the field. The MPSEM allows different schemes of mission monitoring and control over the human crew and the robots, ranging from teleoperation (step-by-step actions) to policy-level control (high-level requests such as "explore and report any abnormal event"). Operators may intervene at any time in plan execution and can either use or disable the MPSEM autonomous execution monitoring.
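The "centralizer and dispatcher" role of the mission data recorder can be sketched as a small publish/subscribe hub: every incoming datum is appended to a log (for later replay) and forwarded to all consumers that subscribed to its data type. This is a minimal sketch under our own assumptions, not the actual Guardians implementation.

```python
from collections import defaultdict
from typing import Any, Callable

class MissionDataRecorder:
    """Minimal sketch of the recorder's dual role: log every telemetry,
    audio-visual or sensor datum for later replay, and dispatch it to all
    subscribed consumers (e.g. the services manager)."""

    def __init__(self) -> None:
        self._log: list[tuple[str, Any]] = []
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, data_type: str, callback: Callable[[Any], None]) -> None:
        """Register a consumer for one data type, e.g. 'TM', 'AV' or 'sensor'."""
        self._subscribers[data_type].append(callback)

    def publish(self, data_type: str, datum: Any) -> None:
        self._log.append((data_type, datum))      # record for replay
        for cb in self._subscribers[data_type]:   # dispatch to consumers
            cb(datum)

    def replay(self, data_type: str) -> list[Any]:
        """Return all recorded data of one type, in arrival order."""
        return [d for t, d in self._log if t == data_type]
```

Decoupling producers (robots, crew sensors) from consumers (operator displays, data specialists) in this way is what lets multiple parallel clients observe the same mission data without interfering with one another.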

Remote human - robot swarm interaction

The principle of a robot swarm is to rely on self-organization and emergent group behaviour to fulfil tasks, while benefiting from redundancy. Nonetheless a base station is useful to monitor the overall swarm activity, to send policy-level requests, and to take control if necessary over one or a few robots of the swarm, while the other robots continue their activities. Visualization of the swarm activity in the base station is an essential

issue: efficiently encompassing the overall robot activities in a single view is a major aim for the Guardians base station HRI. Besides, individual (or small-group) control can be achieved with joysticks and other interaction devices such as touchscreens. Efficient multi-scale agent control may typically find inspiration in real-time strategy video games (e.g. Warcraft-like games).

During normal operations, robots operate autonomously. However there are situations in which autonomy level adjustment [15, 16, 22, 1] is deemed necessary. Humans may take control over one or more robots to enforce a particular way of performing tasks, if they think it is appropriate in the current situation: typically if tasks are particularly complex, delicate, or require knowledge that the robots do not have. With adjustable autonomy, an entity need not make all decisions autonomously; rather it can choose to reduce its own autonomy level and transfer decision-making control to other users or agents. Thus the system is more flexible, since it can handle tasks that might not have been anticipated during design. Furthermore, the ability to shift the level of autonomy is an elegant way of balancing the load among all parties, whether human or robot.

Two main questions of this paradigm are: when should it be done [6, 7, 12, 13], and how can it be done in a seamless way [6, 12, 33]?

These questions in turn raise many more: what is actually to be adjusted, who should be or is allowed to adjust the level of autonomy of an entity or agent and in which situations, to whom can the point of control be transferred, and where inside the system does the adjustment of autonomy level take place?

We obviously do not have all the answers at the moment, but these are issues of particular importance for the Guardians control station design, and they shall accordingly be carefully addressed.
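One way to picture the adjustable-autonomy concept discussed above is an entity whose next action comes from its own plan only while it is fully autonomous, and otherwise defers to the operator's step-by-step requests. The levels and the acceptance policy below are illustrative assumptions, not the Guardians design.

```python
from enum import IntEnum
from typing import Optional

class AutonomyLevel(IntEnum):
    TELEOPERATED = 0   # step-by-step action requests from an operator
    SUPERVISED = 1     # robot plans, but operator requests take precedence
    AUTONOMOUS = 2     # policy-level requests only

class Robot:
    """Sketch of adjustable autonomy: decision-making control can be
    transferred between the robot and a remote operator at runtime."""

    def __init__(self) -> None:
        self.level = AutonomyLevel.AUTONOMOUS

    def adjust_autonomy(self, level: AutonomyLevel, requester: str) -> None:
        # Who may adjust, and in which situations, is an open design question;
        # this sketch simply accepts any request and records nothing about it.
        self.level = level

    def next_action(self, own_plan: str, operator_request: Optional[str]) -> str:
        """Use the robot's own plan when autonomous (or when the operator is
        silent); otherwise follow the operator's request."""
        if self.level == AutonomyLevel.AUTONOMOUS or operator_request is None:
            return own_plan
        return operator_request
```

The open questions listed above (who adjusts, when, to whom control is transferred) would all live in `adjust_autonomy`, which is deliberately left trivial here.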

Remote human - human crew members interaction

One of the specificities of the Guardians system is its ability to manage both robots and human beings in the field, both in terms of monitoring and control. The main benefit is the possibility to coordinate the activities of robots and humans in the field together, in a comparable way. At first glance teleoperation of human beings may seem meaningless, but it should be understood in terms of successive fine-grained action requests, as operators are likely to have better overall situation awareness than the fire-fighter in some situations: indeed fire-fighters sometimes face conditions where visibility is zero and ambient noise makes it impossible to talk with other crew members. In such a situation, the base station can send simple step-by-step elementary action requests, for instance to guide the fire-fighter toward the exit, or toward a victim that a robot has localized. Moreover it should be mentioned that the adjustable autonomy concepts introduced earlier also make sense for remote human-to-human crew member interactions.
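The step-by-step guidance just described amounts to a very small message protocol between the base station and a fire-fighter's wearable interface. The sketch below shows one hypothetical format; the field names, action vocabulary and JSON encoding are our own illustrative assumptions, not the Guardians protocol.

```python
import json

# Hypothetical vocabulary of elementary actions a wearable interface could
# render as haptic or audio cues when visibility and ambient noise rule out
# normal communication.
ACTIONS = {"forward", "turn_left", "turn_right", "stop"}

def guidance_message(seq: int, action: str, meters: float = 0.0) -> str:
    """Encode one elementary action request as a compact JSON message."""
    if action not in ACTIONS:
        raise ValueError(f"unknown action: {action}")
    return json.dumps({"seq": seq, "action": action, "meters": meters})

def guide_along_path(path: list) -> list:
    """Turn a planned path, given as (action, distance) pairs, into the
    sequence of numbered step-by-step requests the base station would send."""
    return [guidance_message(i, action, dist)
            for i, (action, dist) in enumerate(path)]
```

For example, `guide_along_path([("forward", 5.0), ("turn_left", 0.0), ("forward", 2.0)])` yields three sequenced messages; the sequence numbers let the wearable detect lost or reordered messages over an unreliable link.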

Technical flavour: preliminary considerations

The base station implementation calls for a number of technologies.

In terms of user (client) interfaces, we consider two approaches in parallel: for the "heaviest" clients (Guardians appliance coordinator, operators), a standalone interaction environment shall be designed, including visualization as well as alternative interaction modalities (voice, in particular). For lighter clients (stakeholders, sensor data specialists, media), web services technologies are on the list. In both cases, the so-called "Ecological Display" approach is considered: this is an extension of the Ecological Interface Design (EID) [28] concept, whose goal is to "make constraints and complex relationships in the work environment perceptually evident (e.g. visible, audible) to the user, allowing more of users' cognitive resources to be devoted to higher cognitive processes such as problem solving and decision making" [36]. In [30], and later in [24], the authors exploit this concept with an augmented-reality display for robot teleoperation and interaction, as a valuable alternative to more classical HMIs with multiple gauges and windows. In the context of Guardians, the ecological display approach definitely makes sense to improve the situational awareness of operators and stakeholders.

4.3 Wireless communication issues

In search and rescue operations, communication is crucial for the humans performing tasks. In Guardians, wireless communication is used to support cooperation between robots in a swarm as well as between the swarm and humans. Two forms of communication may take place during the mission: (i) short-range communication, i.e. communication within a group of robots or between robots and fire-fighters, and (ii) medium-range communication, i.e. communication between networks of robots, static activated sensors, humans and the base station.

Wireless communication channels in Guardians raise a number of issues. In the industrial warehouse search and rescue scenario, metal structures may degrade the quality of the communication. Moreover, no robot communication infrastructure is available at the incident site: it must be established quickly, without compromising reliability.

It is also important to know the size of the network when designing the protocol: for instance, a simple broadcasting approach may work well in a small group, but it becomes infeasible in large networks due to scalability problems and the limited range of communication devices.
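The scalability problem of naive broadcasting can be illustrated with the standard mitigations used in ad-hoc networks: duplicate suppression (each node rebroadcasts a message at most once) and a hop limit (TTL) that bounds how far a message propagates. The sketch below models the network as an adjacency map; it is a generic illustration, not a protocol from the Guardians project.

```python
from collections import deque

def flood(adjacency: dict, source: int, ttl: int) -> set:
    """Hop-limited flooding with duplicate suppression. Returns the set of
    nodes reached. Without the seen-set every node would rebroadcast every
    copy it hears, and without the TTL a message would traverse the whole
    network -- the two effects that make naive broadcast infeasible at scale."""
    seen = {source}
    frontier = deque([(source, ttl)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == 0:
            continue  # message expired at this node; do not rebroadcast
        for neighbour in adjacency[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, hops - 1))
    return seen
```

On a chain of five robots `{0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}`, flooding from node 0 with a TTL of 2 reaches only nodes 0, 1 and 2, showing how the hop limit trades reach for bounded traffic.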

The last issue is communication bandwidth: a lot of data must be exchanged, such as environment information (temperature, chemical concentrations), robot and fire-fighter status, video and audio. These data require large bandwidth, so the question arises which technology can provide enough bandwidth in such an environment.

Various wireless communication standards exist, e.g. Wireless LAN, Bluetooth and ZigBee. These technologies differ in network size, radio range, data rate and power consumption. Wireless LAN is suitable for high data rates and long communication range, at the cost of high power consumption. Bluetooth has a lower data rate and transmission range but significantly lower power consumption. ZigBee is highly scalable with even lower power consumption, but at the trade-off of lower data rates.

For the project, we consider that the three technologies can complement each other. Further analysis is required to determine when, where, and how these technologies can be used.

5 Conclusions

Research on the human-swarm interface has to tackle several entirely new problems. The combination of the robot swarm and the base station in Guardians is meant to help humans navigate where their senses fail. Furthermore, the project aims to develop novel interaction technology to maintain control and coordination between the swarm and the humans. The next step consists in providing a full conceptual design for human - robot swarm interactions; subsequently the physical and operational prototype will be evaluated in full-scale experiments with end users (i.e. fire-fighters).

Acknowledgement. Guardians is an EU-funded project (FP6 IST-045269). We would like to thank all Guardians partners.

References

1. J. A. Adams, P. Rani, and N. Sarkar. Mixed-initiative interaction and robotic systems. In AAAI Workshop on Supervisory Control of Learning and Adaptive Systems, 2004.

2. R. C. Arkin. Cooperation without communication: Multiagent schema based robot navigation. Journal of Robotics Systems, 9(3):351–364, 1992.

3. S. Bosman, B. Groenendaal, J. W. Findlater, T. Visser, M. De Graaf, and P. Markopoulos. GentleGuide: An exploration of haptic output for indoors pedestrian guidance. In Mobile HCI, 2003.

4. J. Chiasson, B. McGrath, and A. Rupert. Enhanced situation awareness in sea, air and land environments. In NATO RTO Human Factors and Medicine Meeting, 2002.

5. T. Dobbins and S. Samway. The use of tactile navigation cues in high-speed craft operations. In RINA Conference on High Speed Craft: Technology and Operation, 2002.

6. G. Dorais, R. Bonasso, D. Kortenkamp, B. Pell, and D. Schreckenghost. Adjustable autonomy for human-centered autonomous systems on Mars. In Proceedings of the First International Conference of the Mars Society, pages 397–420, August 1998.

7. G. Ferguson, J. F. Allen, and B. Miller. TRAINS-95: Towards a mixed-initiative planning assistant. In Proceedings of the Third Conference on Artificial Intelligence Planning Systems (AIPS), pages 70–77, 1996.

8. T. Fong, D. Kaber, M. Lewis, J. Scholtz, A. Schultz, and A. Steinfeld. Common metrics for human-robot interaction. In IEEE 2004 International Conference on Intelligent Robots and Systems, Sendai, Japan, 2004.

9. T. Fong, I. Nourbakhsh, and K. Dautenhahn. A survey of socially interactive robots. Robotics and Autonomous Systems, 42, 2002.

10. B. Fontaine, L. Steinicke, and G. Visentin. A reusable and flexible control station for preparing and executing robotics missions in space. In Proceedings of the SpaceOps Conference, Germany, 1996. DLR.

11. R. Gockley, J. Forlizzi, and R. Simmons. Natural person-following behavior for social robots. In HRI '07: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, pages 17–24, New York, NY, USA, 2007. ACM Press.

12. J. Gunderson and W. Martin. Effects of uncertainty on variable autonomy in maintenance robots. In Agents '99 Workshop on Autonomy Control Software, pages 26–34, 1999.

13. E. Horvitz. Principles of mixed-initiative user interfaces. In Proceedings of CHI '99, ACM SIGCHI Conference on Human Factors in Computing Systems, Pittsburgh, Pennsylvania, pages 159–166, May 1999.

14. T. Kooijmans, T. Kanda, C. Bartneck, H. Ishiguro, and N. Hagita. Interaction debugging: an integral approach to analyze human-robot interaction. In HRI '06: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, pages 64–71, New York, NY, USA, 2006. ACM Press.

15. D. Kortenkamp, R. P. Bonasso, D. Ryan, and D. Schreckenghost. Traded control with autonomous robots as mixed initiative interaction. In Proceedings of the AAAI Spring Symposium on Mixed Initiative Interaction, 1997.

16. D. Kortenkamp and G. Dorais. Tutorial: Designing human-centered autonomous agents, 2000.

17. V. Kulyukin, C. Gharpure, J. Nicholson, and G. Osborne. Robot-assisted wayfinding for the visually impaired in structured indoor environments. Autonomous Robots, 21(1):29–41, 2006.

18. D. MacKenzie, R. Arkin, and J. Cameron. Multiagent mission specification and execution. Autonomous Robots, (1):29–52, 1997.

19. B. J. McGrath, A. Estrada, M. G. Braithwaite, A. K. Raj, and A. H. Rupert. Tactile situation awareness system flight demonstration. 2004.

20. J. Moen. From hand-held to body-worn: embodied experiences of the design and use of a wearable movement-based interaction concept. In TEI '07: Proceedings of the 1st International Conference on Tangible and Embedded Interaction, pages 251–258, New York, NY, USA, 2007. ACM Press.

21. L. Moshkina, Y. Endo, and R. C. Arkin. Usability evaluation of an automated mission repair mechanism for mobile robot mission specification. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 2006.

22. R. R. Murphy, J. L. Casper, M. J. Micire, and J. Hyams. Mixed-initiative control of multiple heterogeneous robots for urban search and rescue. IEEE Transactions on Robotics and Automation, 2000.

23. A. M. Naghsh and O. M. B. Gabbeh – a tool for computer supported collaboration in electronic paper prototyping. In D. A. and W. L., editors, HCI 2004: Design for Life, 2004.

24. C. Nielsen, D. Bruemmer, D. Few, and M. Walton. Mixed-initiative interactions for mobile robot search. In Proceedings of the American Association for Artificial Intelligence Mobile Robot Workshop, 2006.

25. J. Penders, E. Cervera, U. Witkowski, L. Marques, J. Gancet, P. Bureau, V. Gazi, and R. Guzman. Guardians: a swarm of autonomous robots for emergencies. In Workshop on Robotics in Challenging and Hazardous Environments, ICRA 2007, 2007.

26. PHANTOM. SensAble Technologies, http://www.sensable.com.

27. D. Powell, G. Gilbreath, and M. Bruch. Multi-robot operator control unit. In Proceedings of the SPIE, volume 6230, page 62301N, 2006.

28. J. Rasmussen and K. J. Vicente. Coping with human errors through system design: Implications for ecological interface design. International Journal of Man-Machine Studies, 31:517–534, 1989.

29. C. W. Reynolds. Flocks, herds and schools: A distributed behavioral model. In SIGGRAPH '87: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, pages 25–34, New York, NY, USA, 1987. ACM Press.

30. B. Ricks, C. Nielsen, and M. Goodrich. Ecological displays for robot interaction: a new perspective. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '04), volume 3, pages 2855–2860, 2004.

31. G. Rodriguez and C. R. Weisbin. A new method to evaluate human-robot system performance. Autonomous Robots, 14(2-3):165–178, March 2003.

32. E. Sahin and W. M. Spears. Swarm robotics: a state of the art survey. Lecture Notes in Computer Science 3342, 2005.

33. P. Scerri, D. V. Pynadath, and M. Tambe. Towards adjustable autonomy for the real world. Journal of Artificial Intelligence Research (JAIR), 17:171–228, 2002.

34. J. Scholtz. Theory and evaluation of human-robot interaction. In Proceedings of HICSS 36, 2003.

35. J. B. F. Van Erp, H. A. H. C. Van Veen, C. Jansen, and T. Dobbins. Waypoint navigation with a vibrotactile waist belt. ACM Transactions on Applied Perception, 2(2):106–117, 2005.

36. Wikipedia. Ecological interface design, http://en.wikipedia.org/wiki/ecological interface design, 2007.

37. H. A. Yanco and J. L. Drury. Classifying human-robot interaction: An updated taxonomy. In Proceedings of the IEEE Conference on Systems, Man and Cybernetics, October 2004.

38. T. G. Zimmerman, J. Lanier, C. Blanchard, S. Bryson, and Y. Harvill. A hand gesture interface device. SIGCHI Bulletin, 18(4):189–192, 1987.