
Matrix-based Discrete Event Control for Surveillance Mobile Robotics

D. Di Paola · D. Naso · B. Turchiano · G. Cicirelli · A. Distante

Abstract This paper focuses on the control system of an autonomous robot for the surveillance of indoor environments. Our approach proposes a matrix-based formalism which allows us to merge discrete-event supervisory control, conflict resolution and reactive control in a single framework. As a consequence, the robot is able to autonomously handle high-level tasks as well as low-level behaviors, solving control and decision-making issues simultaneously. Moreover, the matrix-based controller is modular and can be easily reconfigured if the mission characteristics or the robot hardware configuration change. An illustrative example and a report on experimental investigations illustrate the main features of the proposed approach.

Keywords discrete event systems, mobile robots, planning, supervisory control, surveillance

1 Introduction

The increasing need for automated surveillance systems in indoor environments such as airports, warehouses and production plants has stimulated the development of intelligent systems based on mobile sensors. Differently from the mature and widespread surveillance systems based on non-mobile sensors (e.g. [1], [2]), surveillance systems based on mobile robots are still in their initial stage of development, and many issues are currently open for investigation ([3], [4]). Mobility and multifunctionality are generally adopted as a means to reduce the number of sensors needed to cover a given area. Besides, it is also important to note that the use of robots significantly expands the potential of surveillance systems.

D. Di Paola, G. Cicirelli, A. Distante
Institute of Intelligent Systems for Automation (ISSIA)
National Research Council (CNR), Bari, Italy
Tel.: +39-080-5929447
Fax: +39-080-5929460
E-mail: [email protected]

D. Naso, B. Turchiano
Department of Electrical and Electronic Engineering (DEE)
Polytechnic of Bari, Italy


Such systems can evolve from the traditional passive role, in which the system can only detect events and trigger alarms, to active surveillance, in which a robot can be used to interact with the environment (e.g. manipulating objects, removing obstacles, finding exit pathways), with humans (e.g. indicating emergency exits, prohibiting access to forbidden areas), or with other robots for more complex cooperative actions ([5], [6], [7], [8], [9]).

However, developing a fully autonomous, integrated and effective surveillance system based on mobile robots is also challenging from the control engineering viewpoint, because the control system must simultaneously address task planning, dynamic task sequencing, resolution of conflicts for shared resources, and event-based feedback control. On the one hand, it is well known that all these sub-problems have been extensively investigated in related areas such as operations research and manufacturing system control [10], [11]. On the other hand, the preponderance of the related literature focuses on one aspect only, disregarding (or strongly simplifying) the interactions with the others. This gap is particularly notable in the considered application domain, in which neither commercially available (typically passive) platforms ([12], [13]) nor architectures developed for other problems (rover missions, automated guided vehicles for manufacturing systems, etc.) seem to fully match the mentioned specifications [14], [15].

Therefore, in order to achieve an effective exploitation of the capabilities of multifunctional robotics in surveillance systems, as well as in many other areas in which mobile robotics is becoming increasingly widespread, it is important to develop integrated approaches capable of properly describing and addressing all the problems within a unified modeling and decision-making framework.

Recently, a new approach which is very efficient in both modeling and controlling discrete event systems has been proposed [16]. It is based on a novel matrix formulation that allows fast and intuitive design, mission planning, computer simulation, and actual implementation. The Matrix-based Discrete Event Control (M-DEC) is an integrated modeling and control framework based on Boolean matrices and vectors, which is intuitive (it can be interpreted using simple if-then rules), inherently modular (models of large systems can be obtained by assembling submodels of smaller size) and versatile.

In this paper we present the control system, based on the M-DEC, which has been developed for an experimental surveillance platform. This work aims to adapt the M-DEC concept to a fully autonomous platform based on a single robot in the area of surveillance, which is a novel attempt with respect to the existing literature. To date, this modeling and control tool has been applied to a variety of complex, large-scale distributed systems including manufacturing flow lines [10], material handling systems [11] and warehouses [17]. In this work we prove the efficiency of the M-DEC in controlling a surveillance robot which has to manage a variety of complex tasks.

The matrix-based framework allows a simultaneous characterization of the issues of both the supervisory and the operational levels in the same mathematical formulation. In this work, in order to enhance the reactivity and the autonomy of the robot in accomplishing its surveillance task, we furthermore extend the M-DEC with a new level implementing a reactive actuator control. In this new control system, each task is viewed and executed as a sequence of closed-loop continuous plant behaviors. That is, a discrete-event controller selects a continuous-state controller, a reference trajectory, and an output function from a library of plant control algorithms, input functions, and output functions, respectively. The selected plant behavior is kept active until an event from the environment or from the controller occurs. Thus, in this paper, the M-DEC is not only in charge of implementing preconditions, post-conditions, interdependencies of the tasks and shared-resource management, but it also controls the sequence of low-level behaviors needed to execute each task.


This strategy leads to an integrated, hybrid deliberative-reactive platform combining the full predictability and mathematical rigor of supervisory discrete-event control with the prompt, inherently sensor-driven dynamics of reactive control ([18], [19]). To show the effectiveness of the approach, this paper presents details on the implementation of the M-DEC on an experimental robotic platform developed for indoor active surveillance.

The remainder of this paper is structured as follows. Section 2 surveys related work. Sections 3 and 4 overview the main assumptions and the overall architecture of the control system, respectively. Section 5 describes the discrete event model and the supervisory control, Section 6 proposes a case study, and Section 7 summarizes experimental results. Finally, Section 8 draws some conclusions.

2 Related Work

Modeling and control of discrete event systems (DES) have been studied for some years in a variety of applications: manufacturing, process control, data acquisition systems, fault diagnosis, software engineering, transportation, and so on. Recently, the extension of DES theory to mobile robotics has been receiving increasing attention. Recent related works have proposed discrete event supervisory control to address specific issues such as behavior modulation [20] and fault accommodation [21]. In [22] event-based control of an industrial robotic workcell is also presented. Various modeling and control approaches have been proposed, such as finite state machines ([23]), Petri nets ([24], [25], [26]), vector DES ([27], [28]) and so on. The potential of the M-DEC in the area of robotic and sensor networks has also been investigated. In [29], the M-DEC is used to control a mobile sensor network for which it is assumed that task allocation is predetermined off-line by human planners using a priori information, while in [30] the framework is extended with an efficient task allocation algorithm capable of dynamically determining task-to-resource assignments in real time.

In our work the M-DEC framework is used to control a surveillance mobile robot (SMR). Traditional approaches to mobile robot control derive a single control law by constructing a complex behavior as a combination of concurrently executing elementary behaviors, avoiding the complexity of a monolithic control law and allowing the robot to perform tasks in the presence of uncertainties. In most cases such systems use DES supervisory control at a low level, where simple reactive behaviors are employed. The application of DES supervisory control to complex problems is unusual, since behaviors do not have an abstract representation that would allow them to be employed at a high level. In [31] the authors introduce the concept of abstract behaviors, which are used to specify one or more complex tasks in the form of a behavior network that can be automatically generated. Our work conceptually follows the same direction. The key difference is that in our approach the M-DEC supervisory controller is in charge of monitoring only the complex tasks at the higher level. As a consequence, it does not suffer from the state explosion problem typical of finite state machines. Moreover, the M-DEC is a versatile and intuitive tool.

3 Problem Statement

A Surveillance Mobile Robot (SMR) can be seen as a versatile unit equipped with heterogeneous sensory and actuating devices. This mobile platform is able to perform a variety of tasks, including measuring, observing, tracking, and finding or manipulating objects. The objective of this paper is to design and implement an active environment surveillance operation (Fig. 1) using a single SMR equipped with different physical sensors (laser range finder, sonar, RFID reader and antenna, or video cameras).


Fig. 1: Decomposition of the active environment surveillance operation into missions, tasks, and behaviors.

The SMR must execute a set of missions. Each mission consists of a predetermined set of atomic tasks, which may be subject to precedence constraints (the end of one task may be a necessary prerequisite for the start of some other ones). Some missions may have a repetitive nature (e.g. nocturnal patrolling of an unattended indoor environment), others may be triggered by sensed events (fire, intrusion detection or other emergency situations). Reactions to these events should be prompt and effective, in the sense that the robot should have the ability to reorganize the sequence of missions when new events occur and/or when priorities change.

To execute each task, the SMR must perform a predetermined sequence of actions. These primitive actions are the basic behaviors of the robot. Each behavior runs until an event (either triggered by sensor data or by the supervisory controller) occurs. Two or more behaviors can be active simultaneously; in such a case, a predefined coordination algorithm is used to compute the appropriate command to the robot actuators. Missions may be triggered at unknown times, while the number and type of tasks composing a mission is known a priori. Further, we make the following assumptions:

No preemption: Once started, a task interrupted by unexpected circumstances cannot beresumed and must be restarted from the beginning.

Mutual exclusion: The same behavior cannot be simultaneously associated with two or more tasks.

Hold while waiting: To start a task, all the necessary behaviors must be available (i.e.not currently executed by other tasks).

Immediate release: Behaviors are released immediately after the task ends.

Following these assumptions, we propose the use of the M-DEC as a central supervisor to control the execution of surveillance missions as described above.
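To make the mission/task/behavior decomposition of Fig. 1 concrete, the following minimal Python sketch shows one possible data structure for missions, atomic tasks, precedence constraints and task-to-behavior requirements consistent with the assumptions above. It is purely illustrative (not part of the original system); all class and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Behavior:
    name: str                              # e.g. "GoTo", "AvoidObstacle"

@dataclass
class Task:
    name: str                              # e.g. "MoveToGoal"
    behaviors: frozenset                   # behaviors that must be available to start (hold while waiting)
    predecessors: frozenset = frozenset()  # tasks whose completion is a prerequisite

@dataclass
class Mission:
    name: str                              # e.g. "Patrolling"
    trigger: str                           # triggering event, e.g. "start patrolling"
    tasks: list = field(default_factory=list)  # predetermined set of atomic tasks

# Hypothetical usage:
go_to, avoid = Behavior("GoTo"), Behavior("AvoidObstacle")
move = Task("MoveToGoal", behaviors=frozenset({go_to, avoid}))
```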


Fig. 2: A comparison of robot control schemes. (a) Traditional robot control approach using low-level control performed by a DES. (b) Hybrid DES control scheme for complex behaviors/tasks management.

In the next section we provide a brief overview of the architecture which implements the control of the SMR, using a discrete event model of surveillance missions controlled by the M-DEC.

4 Control Architecture Overview

A fully autonomous SMR needs a reactive control scheme capable of handling complex situations in unstructured environments to efficiently perform the assigned missions. Due to the complexity of the surveillance domain and the large number of tasks, the design of a robust control architecture is a challenging issue.

The surveillance scenario needs a very large number of cooperating behaviors (and additional sensor processing components) to perform complex tasks. By using a DES supervisor to control all the involved behaviors, a large number of events and states has to be handled (see Fig. 2a). A reduction of the state space size, for example, could simplify the modeling and analysis problem, but the synthesis of the supervisory controller would still remain a highly complex problem. In general such a simplification reduces the complexity of the off-line design phase, but the obtained system is incomplete and, as a consequence, its computational burden increases during the on-line phase. This clearly accentuates the necessity of an abstraction from the robot state variables and pure sensor data processing. This abstraction requires the use of underlying continuous control and sensing modules which act as event generators for the DES. In this way the system becomes a hybrid system which contains both continuous-state dynamics and logical clauses. Such a technique exploits the supervisory control capabilities of the DES framework, while reducing the complexity of the model by adding a reactive component to handle uncertainties and unpredicted effects.


Fig. 3: Overall architecture of the SMR control system.

In contrast to the classical supervision of DES, our hybrid control framework consists of three main components: a continuous-state plant, the DES supervisory controller, and an interface (see Fig. 2b). Through the interface, the continuous control elements act as generators of symbolic events for the DES controller, therefore removing the necessity of continuous monitoring at the supervisory level. By using these abstract events, the supervisor handles an abstract state space, without considering those parts of the system which are automatically handled within the continuous control elements. The described control policy can be easily integrated into a three-layer deliberative/reactive structure, a well-known robotic control paradigm in which the low-level control is independent from the high-level planning and reasoning processes ([32], [33], [34]).

Our SMR control scheme is based on a framework belonging to the hybrid three-layer system just described. In our approach a DES supervisory controller automatically coordinates the activation of a set of general reactive modules, thus reducing the amount of off-line specifications required from the system designer. More specifically, the control system can be decomposed into the continuous part and the DES supervisory controller (DEC). The first handles a set of reactive behaviors which can be activated individually or in parallel and are used by a library of complex tasks to reach a specific objective. More specifically, behaviors are simple primitive activities, such as goTo, avoidObstacle, goForward, wallFollow, wander.


The DEC allows the system to move through a task-dependent sequence of behaviors that are kept active for a given amount of time, until a prescribed plant event or controller event occurs.

As shown in Fig. 3, the SMR control architecture is composed of three main modules: two modules can be viewed as containers of objects belonging to the lower control layers, while the third contains the supervisory controller and the interface. All the modules are executed on a discrete-time basis: matrix and state updates, event detection and commands are all processed at the beginning of each sample time. Each module is connected with the sensory input. This information is used in different ways: at the higher level, sensory data are converted into events which are used by the supervisor to start or inhibit task executions; at the middle level, sensory data are used to monitor and control the execution of the task in progress; finally, at the lower level, sensory inputs are used by the active behaviors to perform the associated actions.

The controller module performs control at the behavior level. This module contains all the behaviors needed to accomplish all possible SMR tasks. Multiple behaviors can be executed at the same time, if different tasks are activated. Each behavior computes an output for each actuator on the robot, and when multiple behaviors are active at the same time, a predefined command fusion/cooperation algorithm (e.g. subsumption [35], cooperative methods [36], fuzzy logic [37]) is used to obtain the final control signal. This module interacts with the upper module in two ways: it receives the information about the activation and configuration of the behaviors to be performed (in the form of a list b_exe, shown in Fig. 3) and it sends the information about the currently active behaviors by means of the list b_a.
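As a concrete illustration of how the controller module might fuse the commands of simultaneously active behaviors, the following Python sketch implements a simple subsumption-style arbitration in the spirit of [35]. It is not the paper's actual implementation; the priority scheme, function names and velocity-command format are all assumptions.

```python
from typing import NamedTuple, Optional, List

class Command(NamedTuple):
    linear: float    # forward velocity (m/s), assumed command format
    angular: float   # rotational velocity (rad/s)

class BehaviorOutput(NamedTuple):
    priority: int               # higher value subsumes lower ones
    command: Optional[Command]  # None if the behavior has nothing to say this cycle

def subsumption_fuse(outputs: List[BehaviorOutput]) -> Command:
    """Return the command of the highest-priority behavior that produced one."""
    for out in sorted(outputs, key=lambda o: o.priority, reverse=True):
        if out.command is not None:
            return out.command
    return Command(0.0, 0.0)    # no active behavior: stop the robot

# Example: AvoidObstacle (priority 2) overrides GoTo (priority 1) when it fires.
fused = subsumption_fuse([
    BehaviorOutput(priority=1, command=Command(0.3, 0.0)),   # GoTo
    BehaviorOutput(priority=2, command=Command(0.0, 0.5)),   # AvoidObstacle
])
```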

The executor module handles the execution of the tasks as commanded by the upper level; similarly to the controller, this module is considered as a continuous-state controller container. Each task can be viewed as a procedure to achieve a result. At the end of the task, the completion flag and the result are sent to the upper level. Two different classes of tasks are considered: Passive Tasks (PT), which mainly involve the analysis of sensory inputs to generate information needed by other tasks (e.g. store data, build a model of the environment, detect a face), and Active Tasks (AT), which involve the execution of actions (e.g. move to a goal, find a tag, follow a person). For the AT, the executor sends the corresponding commands to the controller by means of b_exe. Finally, the executor sends, at every time sample, information about completed tasks using the list t_c.

The supervisor module implements the high-level functions, monitoring the mission execution and generating events through the evaluation of sensory data. More specifically, this module controls the execution of the missions in progress (e.g. guaranteeing the satisfaction of precedence constraints), sends the configuration information about the tasks that must be started (list t_a) to the executor module, and receives the information about the completed tasks (t_c). The supervisor performs its function using a discrete-event model of the SMR and a conflict resolution strategy, which are described in the following section.

5 M-DEC Supervisory Control

The matrix-based framework provides a rigorous, yet intuitive, mathematical framework to represent the dynamic evolution of the system according to linguistic if-then rules such as

Rule k: IF 〈 task j of mission i completed 〉 AND 〈 behavior h available 〉 THEN 〈 start task l of mission i 〉


Rules are converted into Boolean matrices, vectors and equations, which are updated and processed at each controller time sample. The following subsections present the system model formalism and the control scheme with the conflict resolution strategy.

5.1 Matrix-based Discrete Event Model

The discrete event model is obtained using the M-DEC formalism in the same way described in [29]. For brevity, here we provide a short introduction to the main components of the model and to the fundamental matrices and equations.

Let us consider n missions composed of an overall number of m tasks, and t behaviors. The value of the logical conditions for the activation of the q rules of a certain mission i, composed of p tasks, is summarized by the rule logical vector

x^i = [ x^i(1)  x^i(2)  ...  x^i(q) ]^T .   (1)

An entry of "1" in the k-th position of vector x^i denotes that rule k is currently fired (all the preconditions are true). The conditions of the tasks of each active mission are described by two vectors, the mission task-in-progress vector v^i_IP and the mission task-completed vector v^i:

v^i_IP = [ v^i_IP(1)  v^i_IP(2)  ...  v^i_IP(p) ]^T ,   (2)

v^i = [ v^i(1)  v^i(2)  ...  v^i(p) ]^T .   (3)

Each component of v^i_IP (v^i) is set to "1" when the corresponding task is in progress (completed), and to "0" otherwise. In particular, vector v^i represents a fundamental precondition of many logical rules of the matrix-based model, while vector v^i_IP is mainly necessary to complete the characterization of each mission. Hereinafter, for brevity, we will focus on v^i only, and omit further details about v^i_IP and its dynamical update rules. Finally, let us indicate with u^i and y^i the Boolean input and output vectors of mission i, respectively. The generic element of vector u^i is set to "1" when the triggering event (timer alarm, movement detection, intrusion, etc.) of mission i has occurred, and the generic element of vector y^i is set to "1" when mission i is completed.

In order to model n simultaneous missions we define the global rule logical vector

x = [ (x^1)^T  (x^2)^T  ...  (x^n)^T ]^T ,   (4)

obtained by stacking the n column rule logical vectors. Similarly, we can define the global task vector

v = [ (v^1)^T  (v^2)^T  ...  (v^n)^T ]^T ,   (5)

the global input vector

u = [ (u^1)^T  (u^2)^T  ...  (u^n)^T ]^T ,   (6)

and the global output vector

y = [ (y^1)^T  (y^2)^T  ...  (y^n)^T ]^T .   (7)


Moreover, let us define the Boolean global behavior vector b, having t elements, where an entry of "1" indicates that the corresponding behavior is currently available:

b = [ b1  b2  ...  bt ]^T .   (8)

Vectors v, u, y, and x have a dynamic size depending on the number of missions that the robot has to accomplish at a given time. In particular, if a new mission is assigned to the robot, the vectors are resized accordingly, introducing new elements with appropriate values (e.g., the vector u is updated by adding a new element set to "1").

All these vectors provide a static description of the conditions of the SMR at a given time. In the following, we illustrate the update laws defining the evolution of the system variables over time. As mentioned, the model is run in discrete time, i.e. the effects of events are computed at the first sample time after their occurrence. When no event occurs between two sample times, all the system variables remain unchanged. Hereafter, except where noted differently, all matrix operations are defined in the or/and algebra, where ∨ denotes logical OR, ∧ denotes logical AND, ⊕ denotes logical XOR, and an overbar indicates negation (see [16] for further details).
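For readers who prefer code to notation, the or/and algebra used throughout this section can be sketched with Boolean NumPy arrays as follows. This is an illustrative reconstruction, not the authors' implementation; the helper name and the small dimensions are arbitrary.

```python
import numpy as np

def and_or_mult(F: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Matrix-vector 'product' in the or/and algebra:
    result[k] = OR_j ( F[k, j] AND w[j] )."""
    return np.logical_and(F, w[np.newaxis, :]).any(axis=1)

# Global state vectors for a toy configuration (sizes are arbitrary):
q, m, t = 5, 4, 3              # rules, tasks, behaviors
x = np.zeros(q, dtype=bool)    # rule logical vector
v = np.zeros(m, dtype=bool)    # task-completed vector
b = np.ones(t, dtype=bool)     # behavior vector: all behaviors initially available
```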

Before describing the update rules, it is convenient to introduce some fundamental matrices. Let F^i_v (q × p) be the task sequencing matrix for an individual mission i. This matrix has element (k, j) set to "1" if the completion of the j-th task of vector v^i is a necessary prerequisite for the k-th rule in the rule logical vector x^i. Similarly, let F^i_b (q × t) be the behavior requirements matrix, having element (k, h) set to "1" if the availability of the h-th behavior is an immediate prerequisite for the k-th rule in the rule logical vector x^i. Moreover, let F^i_u (q × z), where z is the size of vector u^i, be the input matrix, having element (k, l) set to "1" if the occurrence of the l-th input is an immediate prerequisite for the k-th rule in the rule logical vector x^i.

As for the vectors of variables, the matrices related to the list of n missions can be easily obtained by either stacking or concatenating the matrix blocks corresponding to each individual mission. As an example, we report the task sequencing matrix F_v for n missions, in which each mission block occupies the rows of its own rules and the columns of its own tasks:

F_v = diag( F^1_v , ... , F^i_v , ... , F^n_v )   (9)

The vectors are updated according to the following equations.

After a task starts, the corresponding behaviors become busy and the corresponding elements of vector b are set to "0":

b = b ⊕ ((F_b)^T ∧ x)   (10)

At the end of a task, the controller commands the release of the corresponding behaviors and the SMR feeds the "behavior release" input, which is directly used in the model to reset the value of b to the idle ("1") condition.

The update of vector v occurs in two different cases. The first one is when the M-DEC model receives the "j-th task of mission i completed" message from the executor module.


In such a case, the corresponding element of v is set to one (v^i(j) = 1). The second case occurs when the vector elements corresponding to previously completed tasks are reset to "0", using the following equation:

v = v ⊕ ((F_v)^T ∧ x)   (11)

Finally, the mission input and output vectors are updated with the following equations:

u = S_u ∧ x   (12)

y = S_y ∧ x   (13)

where S_u (w × q), with w the size of vector u, is the input events matrix, having element (i, k) set to "1" if the activation of the k-th rule in vector x determines the activation of mission i, and S_y (o × q), with o the size of vector y, is the output matrix, having element (i, k) set to "1" if the activation of the k-th rule determines the completion of mission i.
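A minimal Python sketch of the model update equations (10)-(13) follows, assuming Boolean NumPy matrices shaped as defined above. It is only an illustration of the equations, not the authors' code; the function name and interface are assumptions.

```python
import numpy as np

def and_or_mult(F, w):
    # or/and-algebra matrix-vector product: result[k] = OR_j (F[k, j] AND w[j])
    return np.logical_and(F, w[np.newaxis, :]).any(axis=1)

def model_update(b, v, u, y, x, Fb, Fv, Su, Sy):
    """One discrete-time update of the M-DEC model state (equations (10)-(13))."""
    # (10): behaviors required by fired rules become busy ("1" -> "0").
    b = np.logical_xor(b, and_or_mult(Fb.T, x))
    # (11): completed-task flags consumed by fired rules are reset to "0".
    v = np.logical_xor(v, and_or_mult(Fv.T, x))
    # (12)-(13): mission input and output vectors.
    u = and_or_mult(Su, x)
    y = and_or_mult(Sy, x)
    return b, v, u, y
```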

5.2 Matrix-based Discrete Event Control

The main function of the M-DEC is to determine which rules must be fired, which tasks must be started, and which behaviors are in charge of performing the tasks. These functions are processed by means of two different sets of logical equations: one for checking the conditions for the activation of a generic rule of a generic mission, and one for defining the consequent supervisor outputs. The updated value of the rule logical vector is computed with the following controller state equation:

x = ¬[ (F_v ∧ ¬v) ∨ (F_b ∧ ¬b) ∨ (F_u ∧ ¬u) ∨ (F_ud ∧ ¬u_d) ]   (14)

where ¬ stands for the elementwise negation (overbar) introduced above: a rule is fired only if none of its required task completions, required behaviors, required inputs or conflict-resolution enablings is missing.

Matrix F_ud in equation (14) is called the SMR conflict resolution matrix and is used to model the influence of the control input vector u_d on the rule vector x. In particular, an entry of "1" ("0") in u_d disinhibits (inhibits) the activation of the corresponding task. The vector u_d can be seen as a controllable input of the SMR. Depending on the way one selects the strategy to assign the control vector u_d, different task sequencing decisions can be implemented.

Based on the current value of the rule logical vector x, the controller determines which tasks to start and which behaviors to release by means of the matrix controller output equations. In particular, the command of a task start is performed by means of the task start vector v_s (having the same structure as vectors v and v_IP) using the following Boolean matrix equation:

v_s = S_v ∧ x   (15)

where S_v (m × q) is the task start matrix and has element (j, k) set to "1" if the k-th rule in vector x triggers the start command for the j-th task in vector v_s. Similarly, the behavior release command is performed by means of the Boolean behavior release vector b_r, using the following matrix equation:

b_r = S_b ∧ x   (16)


in which S_b (t × q) is the behavior release matrix, having element (h, k) set to "1" if the k-th rule in vector x triggers the release command for the h-th behavior of the SMR. Therefore, vector b_r has t elements, each set to "1" when the controller commands the release of the corresponding behavior.

Fig. 4: Supervisor block diagram.

As illustrated in Fig. 4, at each sample time the supervisor activity can be described as follows: new input events, completed tasks and released behaviors are received by the M-DEC Discrete Event Model block; then vectors v, b, u, and x are sent to the M-DEC Supervisory Controller block, which determines the rules that must be fired, the tasks that must be started, and the behaviors for which a release command must be sent to the lower level of the SMR.
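The per-sample supervisor cycle of Fig. 4 might then be sketched as follows, reusing the reconstructed state equation (14) and the output equations (15)-(16); vector and matrix names mirror the paper's notation, while the function itself and its interface are illustrative assumptions.

```python
import numpy as np

def and_or_mult(F, w):
    # or/and-algebra matrix-vector product: result[k] = OR_j (F[k, j] AND w[j])
    return np.logical_and(F, w[np.newaxis, :]).any(axis=1)

def supervisor_step(v, b, u, ud, Fv, Fb, Fu, Fud, Sv, Sb):
    """One supervisory cycle: state equation (14), then output equations (15)-(16)."""
    # (14): a rule fires only if no required task completion, behavior,
    # input event or conflict-resolution enabling is missing.
    x = np.logical_not(
        and_or_mult(Fv, np.logical_not(v))
        | and_or_mult(Fb, np.logical_not(b))
        | and_or_mult(Fu, np.logical_not(u))
        | and_or_mult(Fud, np.logical_not(ud))
    )
    vs = and_or_mult(Sv, x)   # (15): tasks whose start command is issued
    br = and_or_mult(Sb, x)   # (16): behaviors whose release command is issued
    return x, vs, br
```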

6 A Case Study: Environment Monitoring for Object and People Detection

In order to illustrate the intuitive, modular structure of the M-DEC supervisory control of the SMR, this section discusses a simple example.

Let us consider a SMR equipped with three sensors (a laser range finder, a RFID system and a monocular camera), which has to perform the surveillance of an indoor environment (the map of the considered environment is assumed to be known in advance). Assume that the SMR has to periodically visit a set of predefined goal points. The navigation between the goal points is performed using the known map, the laser range finder and odometry. Periodically, or when a large positioning error is detected, the SMR must execute a self-localization procedure based on RFID tags and visual landmarks spread in the environment. Once a goal point is reached, the SMR has to examine the scene looking for removed or abandoned objects. If this event is detected, the SMR has to issue an alert and continue to the next goal point. Moreover, the SMR has to detect the intrusion of humans in the environment, and if this event occurs, it has to follow the human until he leaves the area.


Table 1: Events

event  description

u1     global position error
u2     start patrolling
u3     person detection

Table 2: Missions and Tasks

name                       notation  description

RFID Global Localization   m1
  FindRFIDTag              t1   The robot wanders until a specific RFID tag is detected
  LocalizeRFIDTag          t2   The robot scans the environment to estimate the RFID tag position
  MoveToGoal               t3   The robot moves to a goal point on the map
  LocalizeRobotVision      t4   The robot localizes itself using a visual landmark

Patrolling                 m2
  PathPlan                 t5   The robot computes the path to a goal in the environment
  MoveToGoal               t6   The robot moves to a goal point on the map
  DetectObjectsLaser       t7   The robot detects objects using the laser range finder
  DetectObjectsVision      t8   The robot detects objects using the monocular camera

People Following           m3
  PeopleFollowing          t9   The robot follows the nearest person

Table 3: Available Behaviors

name           notation  description

AvoidObstacle  b1        The robot avoids obstacles using the laser sensor
GoTo           b2        The robot goes to a position in the environment
Wander         b3        The robot wanders randomly in the environment
LocalizeRFID   b4        The robot wanders until a RFID tag is detected

6.1 Mission Programming

According to the M-DEC notation introduced above, the SMR has three missions: RFID Global Localization (self-localization using the RFID and visual landmarks), Patrolling (the SMR visits the sequence of goal points looking for scene changes), and People Following (the SMR follows intruders). Implementing the M-DEC consists of three different steps. First, the fundamental sets of inputs, outputs, tasks and behaviors have to be defined. In this example, we consider three main events, nine tasks, and four behaviors, all listed and described in Tables 1, 2, and 3, respectively. The association between tasks and behaviors is listed in Table 4. The behavior and task vectors are the following:

b = [ b1, b2, b3, b4 ] (17)

v = [ t1, t2, t3, t4, t5, t6, t7, t8, t9 ] (18)

The second step is defining the if-then rules describing the system constraints and the supervisory control strategy (reported in Tables 5, 6, and 7).


Table 4: Task/Behaviors Association

task name            used behaviors

DetectObjectsLaser   –
DetectObjectsVision  –
FindRFIDTag          b1, b3
LocalizeRFIDTag      b4
LocalizeRobotVision  –
MoveToGoal           b1, b2
PathPlan             –
PeopleFollowing      b1, b2

Table 5: Mission 1 - Rule Base

notation  rule description

x1   IF 〈 u1 occurs AND b1, b3 available 〉 THEN 〈 start t1 〉
x2   IF 〈 t1 completed AND b4 available 〉 THEN 〈 start t2 AND release b1, b3 〉
x3   IF 〈 t2 completed AND b1, b2 available 〉 THEN 〈 start t3 AND release b4 〉
x4   IF 〈 t3 completed 〉 THEN 〈 start t4 AND release b1, b2 〉
x5   IF 〈 t4 completed 〉 THEN 〈 terminate mission 1 [y1] 〉

Table 6: Mission 2 - Rule Base

notation  rule description

x6   IF 〈 u2 occurs 〉 THEN 〈 start t5 〉
x7   IF 〈 t5 completed AND b1, b2 available 〉 THEN 〈 start t6 〉
x8   IF 〈 t6 completed 〉 THEN 〈 start t7, t8 AND release b1, b2 〉
x9   IF 〈 t7, t8 completed 〉 THEN 〈 terminate mission 2 [y2] 〉

Table 7: Mission 3 - Rule Base

notation  rule description

x10  IF 〈 u3 occurs AND b1, b2 available 〉 THEN 〈 start t9 〉
x11  IF 〈 t9 completed 〉 THEN 〈 terminate mission 3 [y3] 〉

In the construction of the rule bases, particular attention has to be devoted to the definition of consecutive tasks that use the same behavior. If the tasks are not interdependent, before starting the new consecutive task, the DEC releases the corresponding behavior and makes sure that no other missions are waiting for it. If after a predetermined period of time no other missions request that behavior, the previous mission can continue.

The rules are then translated into Boolean matrix notation, which is better suited for mathematical analysis and computer implementation. For example, considering the rule base of mission 1 (Table 5), the first blocks of F_v and F_b can be easily written as illustrated in Fig. 5 (which also includes the blocks of the other missions, obtained with an identical procedure). In particular, F_v(k, j) is "1" if the completion of task j is a precondition for rule k, and F_b(k, j) is "1" if behavior j is required to execute the task associated with rule k.
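As an illustration of this translation step, the following Python fragment builds the mission-1 blocks of F_v and F_b directly from the rule base of Table 5 (5 rules, tasks t1-t4, behaviors b1-b4). It is a sketch for this example only, not the mission parser used on the real platform.

```python
import numpy as np

# Mission 1 (Table 5): 5 rules, 4 tasks (t1-t4), 4 behaviors (b1-b4).
Fv1 = np.zeros((5, 4), dtype=bool)   # task sequencing block
Fb1 = np.zeros((5, 4), dtype=bool)   # behavior requirements block

# x1: u1 occurs AND b1, b3 available     -> start t1
Fb1[0, [0, 2]] = True
# x2: t1 completed AND b4 available      -> start t2, release b1, b3
Fv1[1, 0] = True; Fb1[1, 3] = True
# x3: t2 completed AND b1, b2 available  -> start t3, release b4
Fv1[2, 1] = True; Fb1[2, [0, 1]] = True
# x4: t3 completed                       -> start t4, release b1, b2
Fv1[3, 2] = True
# x5: t4 completed                       -> terminate mission 1 [y1]
Fv1[4, 3] = True
```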

Fig. 5: Implementation of missions. (a) Task sequencing block-matrix F_v; (b) behavior requirements block-matrix F_b.

The S_v matrix [Fig. 6(a)] is built considering which tasks should be executed after a rule is fired. For example, for mission 2 (see Table 6), when rule x8 is fired, the DetectObjectsLaser (task t7) and DetectObjectsVision (task t8) tasks are started. Accordingly, the elements in positions (7, 8) and (8, 8) of S_v are equal to "1".

The S_b matrix [Fig. 6(b)] is built considering that S_b(h, k) is "1" if behavior h has to be released after rule k has been fired. For example, since AvoidObstacle (behavior b1) has to be released when rules 2, 4, 8, and 11 are fired, we have entries of "1" in positions (1, 2), (1, 4), (1, 8), and (1, 11).


Fig. 6: Implementation of missions. (a) Task start block-matrix S_v; (b) behavior release block-matrix S_b; (c) conflict resolution block-matrix F_ud.


Thus, the matrix formulation of the overall monitoring operation is easily obtained by stacking together the matrices of mission 1, mission 2 and mission 3 as blocks of the global matrices (as can be seen in Figs. 5 and 6).

6.2 Implementation of Conflict Resolution Strategy

The conflict resolution matrix F_ud [Fig. 6(c)] has as many columns as the number of tasks sharing one or more behaviors. Since these tasks may generate conflicts (e.g. two tasks may need the same GoTo behavior but heading to two different goal points), the element (k, j) of F_ud is set to "1" if completion of shared task j is an immediate prerequisite for the activation of the k-th rule in vector x. Then, an entry of "1" in position j of the conflict-resolution vector u_d determines the inhibition of the k-th rule in vector x. As a result, depending on the way one selects the conflict-resolution strategy to generate vector u_d, different sequencing strategies can be implemented.

In order to take into account the dynamic priorities among multiple missions, we derive the global conflict-resolution matrix F_ud with the following procedure, derived from [29]. After assigning a priority order to each mission, we calculate, for every shared behavior h and every mission i, a matrix F^i_ud(bh). This matrix is composed of a number of columns equal to the number of occurrences of "1" appearing in the h-th column of F_b for mission i. The number of rows is the number of rules associated with mission i, as in matrix F_b. Each column must contain just one "1", placed in the row of the corresponding rule as in matrix F_b; all other elements must be "0". Then, we construct the global conflict-resolution matrix of behavior bh, F_ud(bh), by inserting each F^i_ud(bh) matrix in position (i, f), where f is the priority index of mission i. The result is the overall matrix

F_ud = [ F_ud(b1)  F_ud(b2)  ...  F_ud(bh)  ...  F_ud(bt) ]   (19)

that contains the conflict-resolution information needed for dispatching shared behaviors among multiple missions for the SMR.

An analysis of the matrix F_b (see Fig. 5(b)) for mission 1 reveals that b1 and b2 are shared behaviors (due to multiple "1"s in the corresponding columns of the first block, on top, of F_b); therefore we need to specify just F^1_ud(b1) and F^1_ud(b2), the first and the second block on top, respectively, in the global F_ud matrix [Fig. 6(c)]. In the same way, the set of matrices for the shared behaviors b1 and b2, relative to mission 2 and mission 3, can be built as follows:

F_ud = [ F^1_ud(b1)  F^1_ud(b2)
         F^2_ud(b1)  F^2_ud(b2)
         F^3_ud(b1)  F^3_ud(b2) ]   (20)
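The per-mission building blocks of F_ud can be derived mechanically from the F_b blocks. The Python sketch below is one interpretation of the procedure above: it constructs F^i_ud(b_h) for a single mission, leaving the final assembly of the global matrix to the priority ordering chosen by the designer. Function and variable names are illustrative.

```python
import numpy as np

def mission_fud_block(Fb_i: np.ndarray, h: int) -> np.ndarray:
    """Build F^i_ud(b_h) for one mission from its behavior-requirements block Fb_i.

    One column per rule of mission i that requires behavior h (i.e., per "1" in
    column h of Fb_i); each column carries a single "1" in that rule's row."""
    rules_using_h = np.flatnonzero(Fb_i[:, h])        # rules requiring behavior h
    block = np.zeros((Fb_i.shape[0], rules_using_h.size), dtype=bool)
    for col, rule in enumerate(rules_using_h):
        block[rule, col] = True
    return block

# With the mission-1 block Fb1 built in the earlier sketch, behavior b1 (index 0)
# is required by rules x1 and x3, so mission_fud_block(Fb1, 0) has two columns.
```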

6.3 Execution of the Missions

A dynamic simulation of the example described above can be easily obtained by implementing the hybrid control scheme and the above vectors, matrices, and equations within any general-purpose programming environment (e.g. Matlab [16]). In this subsection we present the results of a simulation campaign on the three missions described above, to prove the reactivity of the event-driven control and the efficiency of the conflict resolution strategy. First, we show a single simulation with randomly generated events. Fig. 7 reports the evolution of the missions, the utilization time trace of the behaviors, and the execution time trace of the tasks. In these time traces, active behaviors and tasks in progress are denoted by a high level, whereas idle behaviors and tasks not in progress are denoted by a low level.


Fig. 7: Execution of missions: utilization time trace of tasks and behaviors. The circles indicate the time instants when missions and tasks start. Notice that missions are not really active until the triggering of the related tasks, because of conflicts.

At time instant 2, the event u2 (start patrolling) occurs, and mission 2 (Patrolling) is triggered. Then task t5, the first task of mission 2 (see Table 6), can start. At time instant 10, the event u1 (global position error) occurs, and mission 1 (RFID Global Localization) is triggered, but the first task of this mission, t1, cannot start because the behavior b1 (AvoidObstacle) is already assigned to mission 2 for the execution of task t6. When task t6 is completed, at instant 12, it releases behavior b1, which can then be assigned to task t1. Mission 1 can start (actually at instant 15), while mission 2 can be successfully terminated since there are no other conflicts between missions.

Since mission 3 (People Following) has a lower priority, when the event u3 (person detected) occurs at instant 28, the mission is temporarily kept waiting because behaviors b1 (AvoidObstacle) and b2 (GoTo) are assigned to mission 1 for the execution of task t3 (MoveToGoal). At time instant 37, b1 and b2 are assigned to mission 3, which can actually start, and then mission 1 ends with the execution of its last task t4 (LocalizeRobotVision). Finally, mission 3 can be successfully terminated.

To further assess the SMR, we show a second simulation with a different event sequence (see Fig. 8). The mission priority order is changed with respect to that of the previous simulation. In this second case we simulate a nocturnal patrolling of the environment, in which the SMR has to monitor an area where people are not allowed. Under this assumption, mission 3 (People Following) has the highest priority, mission 1 (RFID Global Localization) the lowest, and mission 2 (Patrolling) is in the middle.

The first mission that starts is mission 2 (Patrolling): the robot starts to monitor the environment. At time instant 4, event u3 (person detected) occurs. Since mission 3 has the highest priority, its first task is triggered and mission 2 is kept waiting.


Also in this case, the M-DEC solves the conflict: task t6 and task t9 both require the b1 (AvoidObstacle) and b2 (GoTo) behaviors, hence task t6, associated with the mission with the lower priority, is postponed. Task t9, associated with the mission with the higher priority, starts first (as shown in Fig. 8). Thus, task t6 can start only when task t9 ends.

Fig. 8: Execution of missions: utilization time trace of tasks and behaviors. The circles indicate the time instants when missions and tasks start. Notice that missions are not really active until the triggering of the related tasks, because of conflicts. The square indicates the end of task t5. Task t6 cannot start until task t9 ends.

Fig. 9: Makespan of the 30 simulations used for performance analysis.

During the execution of mission 2, at time instant 22, the event u1 (global position error) occurs and mission 1 (RFID Global Localization) is triggered. Behaviors b1 (AvoidObstacle) and b3 (Wander), required by the first task t1 of mission 1, cannot be assigned, according to the priority, since behavior b1 is already allocated to mission 2. When task t6 is completed (at instant 28), it releases behavior b1, which can then be assigned to task t1, and mission 1 can start (actually at instant 29).

From the time traces in Fig. 7 and Fig. 8, it is interesting to note that, whenever possible, the missions are executed simultaneously, and the shared behaviors are alternately assigned to the three missions.


To test the performance of the proposed M-DEC task allocation under conflict resolution, we performed 30 different simulations with the same setup as the last simulation described above. Also in these cases the events are generated randomly. As an evaluation parameter we consider the makespan (the overall number of time samples needed to accomplish all missions). To compare the results we consider, as a ground truth, a preliminary simulation in which all missions are triggered manually by an operator so as to avoid conflicts between tasks and to obtain the best execution of the three missions. The resulting makespan of this simulation is 54 time samples. After that, the 30 simulations are performed. The resulting makespans are depicted in Fig. 9. The results show that the M-DEC handles the conflicts in an efficient way: the mean makespan of these simulations is 55 time samples, comparable with the result obtained by the human operator. Moreover, the makespans in the worst case (60 time samples) and in the best case (53 time samples) demonstrate the efficiency of the M-DEC in both the conflict resolution and the task allocation problem.

The matrix-based control system described above can also be used in a straightforward way to implement the actual control system for the hardware SMR platform, as illustrated in the next section, where real experiments are shown.

7 Experimental Results

7.1 Hardware and Software Platform

The hardware platform, developed at the Institute of Intelligent Systems for Automation of the National Research Council in Bari, Italy, consists of a PeopleBot mobile robot by MobileRobots Inc. (Fig. 10). The robot is equipped with sonar and infrared sensors, a SICK LMS-200 laser range finder, an AVT Marlin IEEE 1394 FireWire monocular camera, and an RFID device. The latter consists of two circularly polarized antennas and a reader. The software system is distributed over three processing units, the robot embedded PC and two additional laptops: an Intel Pentium M @ 1.6 GHz used for the vision and RFID signal processing on board the robot, and an Intel Pentium M @ 1.5 GHz used for application control and user interface.

The software system, running on both laptops and the embedded PC, is developed under GNU/Linux, using an open source robotic development framework named MARIE [38]. The software control architecture is implemented as described in Section 4. Mission and task dependencies are defined, using XML syntax [39], in a configuration file. Using this file, the mission parser transforms the if-then rules into the matrices of the discrete event model (Figs. 5, 6). Tasks and behaviors are implemented using state-of-the-art signal processing and control algorithms. More specifically, the navigation package is based on the CARMEN Navigation Toolkit [40], the detection of abandoned or removed objects is based on an image-processing algorithm [41] and a laser-based algorithm [42], the localization of the robot is performed using a method based on the RFID sensor [43], and people following is implemented using the laser range finder [44].

7.2 Real World Experiments

Differently from the simple setup of the simulations shown in the previous section, in the real-world experiments the SMR surveillance capabilities are extended. The available events, tasks, and behaviors are implemented considering real-world situations.


Fig. 10: The robotic platform.

In particular, each mission is defined as a sequence of tasks which depends on the real environment and on the surveillance requirements. These concepts will be clarified in the following.

Five events are available: global position error (u1), start environment learning (u2), start patrolling (u3), person detection (u4) and start update map (u5). The developed missions are: RFID Global Localization, Environment Learning, Patrolling, People Recognition and Map Update. The name of each mission is self-explanatory about its role. Notice that some new missions have been added with respect to the simulation case (see Section 6), due to the requirements of the real-world experimentation. Some others, instead, are similar to those of the simulated experiment.

Table 8 lists the described missions, giving for each one the sequence of tasks that have to be run to complete the mission. Notice that the sequence of tasks of each mission depends on the real environment in which the SMR operates. For example, in mission Patrolling the SMR has to visit five goal positions in the environment in order to check if there are abandoned or removed objects. Therefore tasks MoveToGoal, DetectObjectsLaser and DetectObjectsVision are triggered five times in the sequence. Notice that each occurrence of a task has its own identification label, even if the task is the same. This is due to the abstraction ability of the M-DEC: the supervisory controller abstracts from the meaning of each single task, and simply triggers each task in the sequence depending on the occurring events, the rules, and possible conflicts among tasks and behaviors.

In the real experiment the behaviors are the same as those of Table 3, with the difference that AvoidObstacle can be performed either by using the laser range finder (AvoidObstacleLaser) or the sonar sensors (AvoidObstacleSonar).


Table 8: Missions and Tasks

name                       notation  description

RFID Global Localization   m1
  FindRFIDTag              t1   The robot wanders until a specific RFID tag is detected
  LocalizeRFIDTag          t2   The robot orients itself toward the RFID tag
  LocalizeRobotVision      t3   The robot localizes itself using a visual landmark

Environment Learning       m2
  MoveToGoal               t4   The robot moves to a goal point on the map
  StoreData (Laser)        t5   The robot stores laser data in a database
  StoreData (Vision)       t6   The robot stores image data in a database
  MoveToGoal               t7   The robot moves to a goal point on the map
  StoreData (Vision)       t8   The robot stores image data in a database
  MoveToGoal               t9   The robot moves to a goal point on the map
  StoreData (Laser)        t10  The robot stores laser data in a database
  StoreData (Vision)       t11  The robot stores image data in a database
  MoveToGoal               t12  The robot moves to a goal point on the map
  StoreData (Laser)        t13  The robot stores laser data in a database
  StoreData (Vision)       t14  The robot stores image data in a database
  MoveToGoal               t15  The robot moves to a goal point on the map
  StoreData (Vision)       t16  The robot stores image data in a database

Patrolling                 m3
  MoveToGoal               t17  The robot moves to a goal point on the map
  DetectObjectsLaser       t18  The robot detects abandoned objects using laser data
  DetectObjectsVision      t19  The robot detects abandoned objects using images
  MoveToGoal               t20  The robot moves to a goal point on the map
  LocalizeRFIDTag          t21  The robot orients itself toward the RFID tag
  DetectObjectsVision      t22  The robot detects removed objects using images
  MoveToGoal               t23  The robot moves to a goal point on the map
  DetectObjectsLaser       t24  The robot detects abandoned objects using laser data
  DetectObjectsVision      t25  The robot detects abandoned objects using images
  MoveToGoal               t26  The robot moves to a goal point on the map
  DetectObjectsLaser       t27  The robot detects abandoned objects using laser data
  DetectObjectsVision      t28  The robot detects abandoned objects using images
  MoveToGoal               t29  The robot moves to a goal point on the map
  LocalizeRFIDTag          t30  The robot orients itself toward the RFID tag
  DetectObjectsVision      t31  The robot detects removed objects using images

People Recognition         m4
  PeopleFollowing          t32  The robot follows the nearest person
  Talk                     t33  The robot speaks a sentence
  DetectFaceVision         t34  The robot detects a face and compares it with a database

Map Update                 m5
  MapUpdate                t35  The robot updates the map using laser data

Table 9: Available Behaviors

name                notation  description

AvoidObstacleLaser  b1        The robot avoids obstacles using the laser sensor
AvoidObstacleSonar  b2        The robot avoids obstacles using the sonar sensor
GoTo                b3        The robot goes to a position in the environment
Wander              b4        The robot wanders randomly in the environment
LocalizeRFID        b5        The robot wanders until a RFID tag is detected


Table 10: Task/Behaviors Association

task name            used behaviors

DetectFaceVision     –
DetectObjectsLaser   –
DetectObjectsVision  –
FindRFIDTag          b1, b2, b4
LocalizeRFIDTag      b5
LocalizeRobotVision  –
MapUpdate            b1, b2, b4
MoveToGoal           b1, b2, b3
PeopleFollowing      b1, b3
StoreData            –
Talk                 –

The complete list of behaviors and their descriptions is given in Table 9. As described in Section 4, behaviors can be coordinated by using a variety of reactive approaches; in our work behaviors are coordinated using the subsumption approach [35] in the controller module.

As described in Section 6, all this information is codified as Boolean vectors and matrices. In the real experimental case, the sizes of the vectors and matrices increase: e.g., matrix F_v has 35 columns (tasks) and 40 rows (rules), and F_b has 5 columns (behaviors) and 40 rows. This is typical of a real-world implementation; however, the M-DEC formalism allows the handling of large numbers of events, rules, tasks and behaviors in an efficient way. For brevity the rules are not presented here, but they can be easily deduced from Tables 8, 9, 10 and from the following description.

For the sake of clarity, the configuration of the operating environment is depicted in Fig. 11. This figure shows the map of the area under surveillance, the position of the goal points for scene analysis and object detection, and the position of the RFID tags and visual landmarks for global localization.

During the experimental phase, mission m2 (Environment Learning) and mission m5 (Map Update) are preliminarily activated. In this phase the robot explores the environment to store all data useful for the surveillance activity. In Fig. 12(a), the robot is performing the Map Update mission. As shown in Table 8, this mission consists of only one task. This case emphasises the potential of the hybrid control architecture, because the M-DEC supervisor starts task t35 (MapUpdate), delegating the control of task execution to the lower modules. The control of behaviors b1, b2, b4, activated by this task, is entrusted to the controller module, which performs low-level control coordinating the behaviors. When the task is completed or an event occurs, the M-DEC takes control of the overall system. An example of this situation is illustrated in Fig. 12(b). During the MapUpdate task, event u2 (start environment learning) occurs, and the M-DEC starts mission m2, suspends mission m5 according to the priority levels, and triggers task t4 first. During the execution of this mission the robot traces the goal sequence, storing laser and image data for each of those goals (G1, G3, G4) where abandoned objects must be searched for, and storing only image data for those goals (G2, G5) where an RFID-tagged object must be checked.
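The suspension-and-restart logic described above can be sketched as follows. This is only an illustrative fragment: the convention that a larger number denotes a higher priority, and the class and attribute names (Mission, next_task, etc.), are assumptions, not the paper's implementation.

from dataclasses import dataclass

@dataclass
class Mission:
    name: str
    priority: int       # assumed convention: larger value = higher priority
    tasks: list         # ordered task identifiers
    next_task: int = 0  # index of the task to (re)start

class MissionSupervisor:
    def __init__(self):
        self.suspended = []  # most recently suspended mission on top

    def interrupt(self, running, incoming):
        # A higher-priority mission suspends the running one; because of the
        # no-preemption assumption, the interrupted task will later be restarted
        # from the beginning, so next_task is deliberately not advanced.
        if incoming.priority > running.priority:
            self.suspended.append(running)
            return incoming
        return running

    def resume(self):
        # When the active mission completes, the most recently suspended one restarts.
        return self.suspended.pop() if self.suspended else None

# Mirroring the experiment: Environment Learning (m2) interrupts Map Update (m5).
m5 = Mission("Map Update", priority=1, tasks=["t35"])
m2 = Mission("Environment Learning", priority=2, tasks=["t4"])  # first task only; the rest omitted
sup = MissionSupervisor()
active = sup.interrupt(m5, m2)  # m5 is suspended, m2 becomes active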

Once the environment has been explored, the robot can perform the surveillance of the assigned area. The first mission that starts is mission m3 (Patrolling). Fig. 13(a) shows the SMR while it reaches the first goal (G1). After reaching the goal, tasks t18 and t19 are performed to search for abandoned objects. As shown in Fig. 14(a), the SMR detects a new object in the environment (an abandoned bag); in this case an alarm message is generated.


Fig. 11: Configuration of the monitored environment: goal points (G1, G2, G3, G4, G5), RFID-tagged objects (O1, O2) and RFID-VISUAL landmarks (L1, L2, L3, L4).

In Fig. 13(b) the SMR has reached goal G2 to check whether the RFID tag is in the correct position. After that, task t22 starts to check whether the monitored object is the correct object associated with the tag ID; then the robot goes toward goal G3. After the monitoring of the area close to this goal, where no abandoned objects have been detected, the robot starts task t26 [Fig. 13(b)]. During the path toward goal G4 a global position error event occurs (u1). Thus the M-DEC stops the current task, suspending the current mission (m3), in order to start the first task t1 of mission m1 to find an RFID tag in the environment. In Fig. 13(d), the robot performs a short wandering, avoiding obstacles, to find the tag and to start the localization procedure using RFID and camera sensors [Fig. 14(b)]. When the RFID Global Localization mission (m1) is completed, the suspended mission (m3) starts again, activating task t26 (MoveToGoal) to go to G4 [Fig. 15(a)]. Notice that, as described in Section 3, this task cannot be resumed and must be restarted from the beginning (no-preemption assumption). While the SMR goes toward goal G5, it detects the presence of a person (event u4), so the People Recognition mission (m4) must be activated. Since the priority of this mission is higher than that of Patrolling (m3), the SMR approaches the person, suspending mission m3. Once the person has been reached, the SMR checks whether he is an authorized person or not (task t34) [Fig. 15(b)]. Finally, the suspended mission (m3) starts again until the final goal (G5) is reached and the object detection tasks are completed [Fig. 15(c)].
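The event-driven mission switching followed in this run can be summarized by a small dispatch table. The mapping of events u1, u2 and u4 is taken from the description above, while the numeric priorities (only the orderings m1 > m3, m4 > m3 and m2 > m5 are stated or implied in the text) and the function name are assumptions.

# Assumed priority values (higher number = higher priority).
MISSION_PRIORITY = {"m1": 4, "m4": 3, "m2": 2, "m3": 1, "m5": 0}

EVENT_TO_MISSION = {
    "u1": "m1",  # global position error      -> RFID Global Localization
    "u2": "m2",  # start environment learning -> Environment Learning
    "u4": "m4",  # person detected            -> People Recognition
}

def dispatch(event, active, suspended):
    # If the event triggers a mission with higher priority than the active one,
    # suspend the active mission (to be restarted later) and switch to the new one.
    triggered = EVENT_TO_MISSION.get(event)
    if triggered and MISSION_PRIORITY[triggered] > MISSION_PRIORITY[active]:
        suspended.append(active)
        return triggered
    return active

suspended = []
active = dispatch("u1", "m3", suspended)  # Patrolling (m3) suspended, m1 activated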

Some final considerations can be made by observing the performance of the M-DEC during the real-world experiment. First of all, each mission is activated or stopped correctly depending on the occurring events.


Fig. 12: Real-world experiments execution: The robot performs the MapUpdate mission, wandering in the environment (a). During this mission the Environment Learning mission starts, then the robot starts to trace the goal sequence (b).

Suspended missions, and in particular suspended tasks, are resumed at the appropriate instant, permitting the correct completion of each mission. Moreover, due to the no-preemption assumption, the surveillance activity is exhaustively carried out: since each suspended task starts from the beginning, and not from where it was stopped or as if it were already completed, the overall execution of the missions is guaranteed to have no gaps. This is further advantageous if some change occurs in the environment. Moreover, thanks to the conflict resolution strategy, the SMR does not get stuck during the surveillance activity. Finally, task sequencing allows the SMR to carry out its missions safely and efficiently, due to the low-level reactivity of the behaviors in the controller module.

8 Conclusions

This paper presented a control system for an autonomous robot for the surveillance of indoor environments. The proposed control architecture integrates, in the same matrix-based formalism, a discrete event model, a supervisory controller and a behavior-based controller. This control system enables the robot to autonomously execute multiple heterogeneous missions in dynamic environments. In particular, the M-DEC is able to model the sequential constraints of the tasks, to define the priorities among missions and to dynamically select the most appropriate behaviors in any given circumstance. This paper shows that the M-DEC allows one to easily follow a “simulate and experiment” approach, with noticeable benefits in terms of cost, time and performance.

The M-DEC proposed in this paper is fully centralized, since it uses a single robot only. Following current trends in multi-robot systems, the research activities in progress are focused on the distribution of the M-DEC controller across a set of independent robots, to obtain a multi-agent surveillance system.


Fig. 13: Real-world experiments execution: The robot reaches the first goal and finds a new object in the environment (a). The robot reaches goal G2 searching for a tagged object (b). The robot reaches the third goal (c). During the path following toward goal G4 a global position error event occurs, then the robot starts the localization procedure using RFID and visual landmarks (d).


Fig. 14: Real-world experiments execution: The robot finds a new object in the environment (a). During the Patrolling mission a global position error event occurs, then the robot starts the localization procedure (b).


Fig. 15: Real-world experiments execution: The robot reaches goal G4 (a). During the path following toward goal G5 a person is detected and mission m4 starts (b). When mission People Recognition is completed, the robot successfully completes the pending mission Patrolling, going toward goal G5 (c).


References

1. Hu W., Tan T., Wang L., and Maybank S., A Survey on Visual Surveillance of Object Motion and Behaviors, IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, Vol. 34, No. 3 (2004)
2. Valera M. and Velastin S. A., Intelligent distributed surveillance systems: a review, IEE Proceedings of Vision, Image, and Signal Processing, Vol. 152, No. 2, pp. 192–204 (2005)
3. Everett H. R., Robotic security systems, IEEE Instruments and Measurements Magazine, Vol. 6, No. 4, pp. 30–34 (2003)


4. Dehuai Z., Gang X., Jinming Z., Li L., Development of a Mobile Platform for Security Robot, in Proceedings of the IEEE International Conference on Automation and Logistics, pp. 1262–1267 (2007)
5. Burgard W., Moors M., Fox D., Reid S., and Thrun S., Collaborative Multi-Robot Exploration, in Proceedings of the IEEE International Conference on Artificial Intelligence, pp. 852–858 (2000)
6. Kwok K. S., Driessen B. J., Phillips C. A., Tovey C. A., Analyzing the Multiple-target-multiple-agent Scenario Using Optimal Assignment Algorithms, Journal of Intelligent and Robotic Systems, Vol. 35, No. 1, pp. 111–122 (2002)
7. Grace J., Baillieul J., Stochastic Strategies for Autonomous Robotic Surveillance, in Proceedings of the IEEE Conference on Decision and Control, pp. 2200–2205 (2005)
8. Roman-Ballesteros I., Pfeiffer C. F., A Framework for Cooperative Multi-Robot Surveillance Tasks, in Proceedings of the Electronics, Robotics and Automotive Mechanics Conference, Vol. 2, pp. 163–170 (2006)
9. Vig L., Adams J. A., Coalition Formation: From Software Agents to Robots, Journal of Intelligent and Robotic Systems, Vol. 50, No. 1, pp. 85–118 (2007)
10. Mireles J., Lewis F., Deadlock analysis and routing on free-choice multipart reentrant flow lines using a matrix-based discrete event controller, in Proceedings of the IEEE International Conference on Decision and Control, Vol. 1, pp. 793–798 (2002)
11. Mireles J., Lewis F., Intelligent material handling: development and implementation of a matrix-based discrete event controller, IEEE Transactions on Industrial Electronics, Vol. 48, No. 6, pp. 1087–1097 (2001)

12. IOImage - http://www.ioimage.com
13. Alma Vision - http://www.almavision.it
14. Volpe R., Nesnas I., Estlin T., Mutz D., Petras R., and Das H., The CLARAty architecture for robotic autonomy, in Proceedings of the IEEE Aerospace Conference, Big Sky, Montana (2001)
15. Mes M., van der Heijden M., van Hillegersberg J., Design choices for agent-based control of AGVs in the dough making process, Decision Support Systems, Vol. 44, No. 4, pp. 983–999 (2008)
16. Tacconi D. and Lewis F., A new matrix model for discrete event systems: Application to simulation, IEEE Control Systems Magazine, Vol. 17, No. 5, pp. 62–71 (1997)
17. Giordano V., Zhang J. B., Naso D., Lewis F., Integrated supervisory and operational control of a warehouse with a matrix-based approach, IEEE Transactions on Automation Science and Engineering, Vol. 5, No. 1, pp. 53–70 (2008)

18. Koutsoukos X. D., Antsaklis P. J., Stiver J. A., Lemmon M. D., Supervisory control of hybrid systems, Proceedings of the IEEE, Vol. 88, No. 7, pp. 1026–1049 (2000)
19. Fierro R., Lewis F. L., A framework for hybrid control design, IEEE Transactions on Systems, Man and Cybernetics, Part A, Vol. 27, No. 6, pp. 765–773 (1997)
20. Huq R., Mann G. K. I., and Gosine R. G., Behavior-Modulation Technique in Mobile Robotics Using Fuzzy Discrete Event System, IEEE Transactions on Robotics, Vol. 22, No. 5 (2006)
21. Ji M., Sarkar N., Supervisory Fault Adaptive Control of a Mobile Robot and Its Application in Sensor-Fault Accommodation, IEEE Transactions on Robotics, Vol. 23, No. 1, pp. 174–178 (2007)
22. Brink K., Olsson M., Bolmsjö G., Increased Autonomy in Industrial Robotic Systems: A Framework, Journal of Intelligent and Robotic Systems, Vol. 19, No. 4, pp. 357–373 (1997)
23. Chen Y. L., Ling F., Modeling of Discrete Event Systems using Finite State Machines with Parameters, in Proceedings of the IEEE International Conference on Control Applications, Anchorage, Alaska (2000)
24. Ma L., Hasegawa K., Sugisawa M., Takahashi K., Miyagi P. E., Santos Filho D. J., On Resource Arc for Petri Net Modelling of Complex Resource Sharing System, Journal of Intelligent and Robotic Systems, Vol. 26, No. 3, pp. 423–437 (1999)
25. Holloway L. E., Krogh B. H., Giua A., A survey of Petri Net methods for controlled discrete event systems, Discrete Event Dynamic Systems: Theory and Applications, Vol. 7, No. 2 (1997)
26. Georgilakis P. S., Katsigiannis J. A., Valavanis K. P., Souflaris A. T., A Systematic Stochastic Petri Net Based Methodology for Transformer Fault Diagnosis and Repair Actions, Journal of Intelligent and Robotic Systems, Vol. 45, No. 2, pp. 181–201 (2006)
27. Li Y., Wonham W. M., Control of vector discrete-event systems I - the base model, IEEE Transactions on Automatic Control, Vol. 38, No. 8, pp. 1214–1227 (1993)
28. Li Y., Wonham W. M., Control of vector discrete-event systems II - controller synthesis, IEEE Transactions on Automatic Control, Vol. 39, No. 3, pp. 512–513 (1994)
29. Giordano V., Ballal P., Lewis F., Turchiano B., and Zhang J. B., Supervisory Control of Mobile Sensor Networks: Math Formulation, Simulation, and Implementation, IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 36, No. 4 (2006)
30. Schiraldi V., Giordano V., Naso D., Turchiano B., and Lewis F., Matrix-based scheduling and control of a mobile sensor network, in Proceedings of the 17th IFAC World Congress, pp. 10415–10420 (2008)
31. Nicolescu M. N., Mataric M. J., A Hierarchical Architecture for Behavior-Based Robots, in Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems, Italy (2002)


32. Gat E., Three-Layer Architectures, in D. Kortenkamp, R. P. Bonasso and R. Murphy (Eds.), Artificial Intelligence and Mobile Robots, pp. 195–210, AAAI Press (1998)
33. Connell J., SSS: a hybrid architecture applied to robot navigation, in Proceedings of the IEEE International Conference on Robotics and Automation (1992)
34. Arkin R. C., Balch T. R., AuRA: principles and practice in review, Journal of Experimental and Theoretical Artificial Intelligence, Vol. 9, pp. 175–189 (1997)

35. Brooks R. A., Intelligence without Representation, Artificial Intelligence, Vol. 47, pp. 139–159 (1991)
36. Payton D. W., Keirsey D., Kimble D. M., Krozel J. and Rosenblatt J. K., Do whatever works: A robust approach to fault-tolerant autonomous control, Applied Intelligence, Vol. 2, No. 3, pp. 225–250 (1992)
37. Cupertino F., Giordano V., Naso D., Delfine L., Fuzzy control of a mobile robot using a Matlab-based rapid prototyping system, IEEE Robotics and Automation Magazine, Vol. 13, No. 4, pp. 74–81 (2006)
38. Cote C., Brosseau Y., Letourneau D., Raïevsky C., and Michaud F., Robotic software integration using MARIE, International Journal of Advanced Robotic Systems, Vol. 3, No. 1, pp. 55–60 (2006)
39. World Wide Web Consortium - Extensible Markup Language (XML) - http://www.w3.org/XML/
40. Montemerlo M., Roy N., and Thrun S., Perspectives on Standardization in Mobile Robot Programming: The Carnegie Mellon Navigation (CARMEN) Toolkit, in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vol. 3, pp. 2436–2441 (2003)

41. Di Paola D., Milella A., Cicirelli G., and Distante A., Robust Vision-based Monitoring of Indoor Environments by an Autonomous Mobile Robot, in Proceedings of the ASME International Mechanical Engineering Congress & Exposition (2007)
42. Marotta C., Milella A., Cicirelli G., and Distante A., Using a 2D Laser Rangefinder for Environment Monitoring by an Autonomous Mobile Robot, in Proceedings of the ASME International Mechanical Engineering Congress & Exposition (2007)
43. Milella A., Vanadia P., Cicirelli G., and Distante A., RFID-based Environment Mapping for Autonomous Mobile Robot Applications, in Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (2007)
44. Milella A., Dimiccoli C., Cicirelli G. and Distante A., Laser-based people-following for human-augmented mapping of indoor environments, in Proceedings of the 25th IASTED International Multi-Conference, pp. 151–155 (2007)