
A Unified Framework based on HTN and POP Approaches for Multi-Agent Planning

Damien Pellier
Dept. of Mathematics & Computer Science
Paris-Descartes University
45 rue des Saints-Peres, F-75270 Paris
Email: [email protected]

Humbert Fiorino
Laboratoire d'Informatique de Grenoble
46, avenue Félix Viallet, F-38031 Grenoble
Email: [email protected]

Abstract— The purpose of this paper is to introduce a multi-agent model for plan synthesis in which the production of a global shared plan relies on a unified framework combining HTN and POP approaches. In order to take into account agents' partial knowledge and heterogeneous skills, we propose to consider the global multi-agent planning process as a POP planning procedure in which agents exchange proposals and counter-proposals. Each agent's proposal is produced by a relaxed HTN approach that defines partial plans in accordance with plan-space search planning, i.e., plan steps can contain open goals and threats. Agents' interactions define a joint investigation that enables them to progressively prune threats, solve open goals and elaborate solutions step by step. This distributed search is sound and complete.

INTRODUCTION

The problem of plan synthesis achieved by autonomous agents in order to solve complex and collaborative tasks is still an open challenge. An increasing number of application areas can benefit from this research domain, e.g., distributed robotics and web services [1], where centralized approaches are not possible. Consequently, the enhancement of multiagent planning models becomes more and more fundamental. Classically, multiagent planning [2] is defined as a planning process involving a group of agents, i.e., a process that takes as input agents' actions, a description of the state of the world known by those agents, and a goal. It returns an organized collection of actions whose global effect, if they are carried out and if they perform as expected, achieves the goal. Such a definition is correct, but hides the complexity of the different phases of a distributed planning process. Actually, multiagent planning can be divided into five separate phases: (1) Task decomposition, (2) Subtask delegation, (3) Individual planning, (4) Individual plan coordination, (5) Joint plan execution. Although this decomposition is convenient to explain multiagent planning, experience shows that those five phases are tightly interleaved: e.g., (i) phases 1 and 2 are often merged due to the existing link between the agents' capabilities and the task decomposition, (ii) the last joint plan execution phase involves replanning [3] after a failure and implies backtracking to phases 1, 2 and 3, and (iii) the coordination phase can be executed either before, after or during the planning phase. Thus, how can we devise a framework that integrates the different phases of the distributed planning process?

In this paper, we introduce a multiagent framework for plan synthesis in which the production of a global shared plan is viewed as a collaborative goal-directed reasoning about actions that merges the first four steps of the multiagent planning process. The key idea is to take advantage of two planning approaches: POP planning [4], which is well adapted to produce concurrent plans in distributed environments (because no explicit global state is required), and HTN planning [5], because of its expressiveness and performance in real planning environments, to generate plan refinements.

At the team's level, each agent can refine, refute or repair the ongoing team plan with a POP procedure. If the repair of a previously refuted plan succeeds, the plan becomes more robust, but it can still be refuted later. If the repair of the refuted plan fails, agents leave this part of the reasoning and explore another possibility: finally, "bad" sub-plans are ruled out because no agent is able to push the investigation process further. At the agent's level, the specificity of this approach relies on the agent's capability to elaborate plans under partial knowledge and/or to produce plans that partially contradict its knowledge. In other words, in order to reach a goal, such an agent is able to provide a plan that could be executed if certain conditions were met. Unlike "classical" planners, the planning process does not fail if some conditions are not asserted in the knowledge base, but rather proposes plans in which action preconditions are possibly not resolved and are considered as open goals. Obviously, they must be as few as possible because they become new goals for the other agents. To that end, we designed a planner based on the HTN procedure that relaxes some restrictions regarding the applicability of planning operators.

The rest of this paper is organized as follows: Section 1 introduces the primary definitions; Section 2 defines the team planning algorithm; Sections 3 and 4 present the refinement, refutation and repair mechanisms; and Section 5 briefly describes the extended HTN planning procedure used in our approach.

I. PRIMARY DEFINITIONS

In this section, we define the syntax and semantics used in the planning algorithms. In particular, we use the usual first-order logic definitions: constant, function, predicate and variable symbols. Propositions are tuples of parameters (i.e., constants or variables) and can be negated or not. Codesignation is an equivalence relation on variables and constants.


Binding constraints enforce codesignation or noncodesignation of parameters. Two propositions codesignate if both are negated or both are not negated, if the tuples are of the same length, and if corresponding parameters codesignate. An operator is a tuple of the form o = (name(o), precond(o), add(o), del(o)): name(o) is denoted by n(x1, . . . , xk) where n is an operator symbol and x1, . . . , xk its parameters; precond(o), add(o) and del(o) are respectively the preconditions, the operator's add list and delete list. A method is a tuple of the form m = (name(m), precond(m), reduction(m)): name(m) is denoted by n(x1, . . . , xk) where n is the method name and x1, . . . , xk its parameters; precond(m) are the method preconditions; reduction(m) is a sequence of relevant operators or methods to achieve m. An agent is an autonomous planning process.
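As a concrete illustration, here is a minimal Python sketch of the codesignation test on propositions. The representation (a polarity flag plus a tuple of terms) and the rep helper, which maps each term to the representative of its codesignation class, are assumptions made for the example, not the paper's own notation.

    # Hypothetical representation: a proposition is (negated, terms); `rep` maps each
    # constant or variable to the representative of its codesignation class.
    def codesignate(p, q, rep):
        neg_p, terms_p = p
        neg_q, terms_q = q
        return (neg_p == neg_q                                   # same polarity
                and len(terms_p) == len(terms_q)                 # same arity
                and all(rep(x) == rep(y)                         # parameters codesignate
                        for x, y in zip(terms_p, terms_q)))

    # Example: with ground terms, rep is the identity.
    assert codesignate((False, ("at", "robot", "room1")),
                       (False, ("at", "robot", "room1")),
                       rep=lambda t: t)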

Definition 1 (Agent): An agent ag is a tuple of the form (name(ag), operators(ag), methods(ag), beliefs(ag)) where: name(ag) identifies the agent; operators(ag) and methods(ag) are respectively a set of operators and a set of methods; beliefs(ag) are facts about the world that the agent believes to be true. It is worth noting that absent facts are not false but unknown. Moreover, we consider that each agent's beliefs are mutually consistent: if a fact is asserted in one agent's beliefs, it cannot be negated elsewhere.

Definition 2 (Planning problem): A planning problem is a tuple of the form P = (s0, T , g) where: s0 is the union of the agents' beliefs; T is a set of agents; g is the goal, i.e., facts about the world to be achieved.

In order to solve a planning problem, the agents search for the different steps that will bring them from s0 to g. These steps are partially ordered actions (concurrency is possible between agents), i.e., operators or methods whose parameters are instantiated according to their corresponding binding constraints. A partial plan is an endeavor to find a solution:

Definition 3 (Partial Plan): A partial plan is a tuple of the form π = (A, ≺, I, C) where: A = {a0, . . . , an} is a set of actions; ≺ is a set of ordering constraints on A, where ai ≺ aj means ai precedes aj; I is a set of binding constraints on action parameters, denoted by x = y, x ≠ y, or x = cst such that cst ∈ Dx, where Dx is the domain of x; C is a set of causal links, denoted ai −p→ aj, such that ai and aj are two actions of A, the ordering constraint ai ≺ aj exists in ≺, the fact p is an effect of ai and a precondition of aj, and the binding constraints about the parameters of ai and aj corresponding to p are in I.
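To make Definition 3 concrete, the sketch below encodes a partial plan as a small Python data structure. All names (Action, CausalLink, PartialPlan) and the choice of ground string facts are illustrative assumptions, not the authors' implementation.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        preconds: frozenset   # facts required by the action
        add: frozenset        # facts produced (add list)
        delete: frozenset     # facts removed (delete list)

    @dataclass(frozen=True)
    class CausalLink:
        producer: "Action"    # a_i
        fact: str             # p
        consumer: "Action"    # a_j

    @dataclass
    class PartialPlan:
        actions: set          # A
        orderings: set        # pairs (a_i, a_j) meaning a_i ≺ a_j
        bindings: set         # binding constraints I, e.g. ("x", "=", "crate1")
        links: set            # causal links C

Binding constraints are kept as explicit triples so that codesignation and noncodesignation can be checked separately.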

When asserting a proposal, an agent can use actions whose preconditions are not totally supported by causal links. In a multiagent context, this makes sense because these open goals become sub-goals to be achieved by the other agents.

Definition 4 (Open Goal): Let π = (A, ≺, I, C) be a partial plan. An open goal stated by π is defined as a precondition p of an action aj ∈ A such that, for all actions ai ∈ A, the causal link ai −p→ aj ∉ C. opengoals(π) and opengoals(aj) respectively denote the set of open goals stated by π and by aj.
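Continuing the sketch above, a direct (non-incremental) reading of Definition 4 computes open goals by subtracting the preconditions supported by causal links; function and field names are again illustrative.

    def open_goals(plan: "PartialPlan") -> set:
        # a precondition is open when no causal link in C supports it
        supported = {(link.consumer, link.fact) for link in plan.links}
        return {(a, p)
                for a in plan.actions
                for p in a.preconds
                if (a, p) not in supported}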

We assume that ≺ is consistent: it is possible to find at least one total order compliant with ≺ (i.e., a completion). In other terms, there is no cycle in A, and ≺ represents a class of total orders. Obviously, planning is finished when all the linearizations of a partial plan solve the assigned goal.

Definition 5 (Solution plan): A partial plan π = (A, ≺, I, C) is a solution plan for P = (s0, T , g) if: (i) the sets of ordering constraints ≺ and of binding constraints I are consistent, and (ii) all the linearizations λ of π define a sequence of states 〈s0, . . . , si, . . . , sn〉 for 0 ≤ i ≤ n such that the goal g is satisfied in sn, i.e., g ⊆ sn, and λ does not state open goals, i.e., opengoals(λ) = ∅.

But the number of linearizations of a partial plan is exponential, and computing them in order to decide whether a partial plan is a solution plan would be time consuming. Therefore, we propose a necessary criterion [6], based on the notion of refutation, to establish the correctness of a partial plan.

Definition 6 (Refutation): A refutation on π = (A, ≺, I, C) is a tuple (ak, ai −p→ aj) such that: (i) ak produces the effect ¬q, and p and q codesignate; (ii) the ordering constraints ai ≺ ak and ak ≺ aj are consistent with ≺; and (iii) the binding constraints corresponding to the codesignation of p and q are consistent with I.
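For ground (variable-free) facts, where codesignation reduces to equality, Definition 6 can be checked as sketched below; the acyclicity test stands in for consistency of ≺, and all helper names are our own.

    def consistent_with(orderings: set, extra: set) -> bool:
        # ≺ ∪ extra is consistent iff the induced graph is acyclic (Kahn's algorithm)
        edges = orderings | extra
        nodes = {x for edge in edges for x in edge}
        indeg = {n: 0 for n in nodes}
        for a, b in edges:
            indeg[b] += 1
        queue = [n for n in nodes if indeg[n] == 0]
        visited = 0
        while queue:
            n = queue.pop()
            visited += 1
            for a, b in edges:
                if a == n:
                    indeg[b] -= 1
                    if indeg[b] == 0:
                        queue.append(b)
        return visited == len(nodes)

    def refutations(plan: "PartialPlan") -> set:
        # pairs (a_k, a_i -p-> a_j): a_k deletes p and can be ordered between a_i and a_j
        found = set()
        for link in plan.links:
            for ak in plan.actions:
                if ak in (link.producer, link.consumer):
                    continue
                if (link.fact in ak.delete
                        and consistent_with(plan.orderings,
                                            {(link.producer, ak), (ak, link.consumer)})):
                    found.add((ak, link))
        return found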

Proposition 1: A partial plan π = (A, ≺, I, C) is a solution plan for a problem P if: (i) ≺ and I are consistent, and (ii) π contains no flaws, i.e., neither open goals nor refutations.

II. TEAM PLANNING PROCEDURE

Agents' Interaction Rules. The agents involved in the process of co-operatively building a solution plan are committed to follow several interaction rules so as to guarantee the validity of the reasoning as well as the correctness of the proposed solution plan. Each agent handles its own view of the search space, in which it records the propositions of the other agents. The search space is a Directed Acyclic Graph (DAG) whose nodes are partial plans and whose edges are refinement, refutation or repair operators. The agents produce two kinds of interactions: informational interactions, such as refine, refute, repair and failure, which allow them to exchange information about the partial plans contained in their search spaces, and contextualization interactions, such as prop.solve, prop.failure, prop.success, ack.failure and ack.success, which enable them to set the interaction context. Each agent's search space is initialized with π0: A = {a0, a∞} such that precond(a∞) = g and add(a0) = s0; the ordering constraint a0 ≺ a∞; the binding constraints corresponding to a0 and a∞; and an empty set of causal links. The agents broadcast asynchronous messages whose delivery time is finite and receive messages in the causal order of their emission. The informational interactions comply with the agents' rationality and update rules given in Table I.

Contextualization Rules. This section explains how the agents start and stop the plan synthesis. The corresponding contextualization rules are represented as a finite state automaton (cf. Figure 1) whose states are the dialog states and whose edges are the dialog acts (cf. Table I). ? and ! before an act respectively mean that the act is received or sent by the agent.

Initially, the agents are in the Idle state: they wait for a goal to solve. When sending or receiving prop.solve, they switch to the Planning state, in which they exchange refinements, refutations and repairs according to Table I.


refine(ρ, p, π): ρ is a refinement and p the refined open goal of partial plan π.
    Rationality: the agent ensures that:
        • π is a partial plan of its search space;
        • p is an open goal of π;
        • ρ has not been proposed as a refinement of p;
        • the partial plan π′, result of ρ, contains consistent binding and ordering constraints.
    Dialog: the agents can either refine all the open goals of π′ or refute π′.
    Update: add π′ as a refinement of p in π into its search space.

refute(φ, π): φ refutes π.
    Rationality: the agent ensures that:
        • π is in its search space;
        • φ has not been proposed as a refutation of π.
    Rules: the agents can repair π.
    Update: add φ as a refutation of π into its search space.

repair(ψ, φ, π): ψ is a repair of π corresponding to the refutation φ.
    Rationality: the agent ensures that:
        • π is in its search space;
        • φ refutes π;
        • ψ has not been proposed as a repair of π for the refutation φ;
        • the partial plan π′ provided by ψ contains consistent binding and ordering constraints.
    Rules: either its teammates refine all the open goals of π′ or they refute π′.
    Update: add π′ as a repair of the refutation φ in π.

failure(Φ, π):
    Rationality: the agent ensures that:
        • π is in its search space;
        • Φ is a threat of π.
    Rules: ∅
    Update: label Φ as unsolved by the agent that produces the interaction.

Table I: Interaction Rules
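As an illustration of how an agent could enforce the rationality conditions of Table I before accepting a refine message, here is a hedged sketch; the search_space dictionary, its node layout and both function names are assumptions made only for this example.

    def refine_is_rational(search_space: dict, plan_id, open_goal, refinement) -> bool:
        node = search_space.get(plan_id)
        if node is None:                                     # π must be in the search space
            return False
        if open_goal not in node["open_goals"]:              # p must be an open goal of π
            return False
        if refinement in node["refinements"].get(open_goal, set()):
            return False                                     # ρ was already proposed for p
        return has_consistent_constraints(refinement)        # π′: consistent ≺ and I

    def has_consistent_constraints(refinement) -> bool:
        # placeholder: a full check would verify that the refined plan's ordering
        # constraints are acyclic and its binding constraints satisfiable
        return True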

There are two possibilities to leave this state: either the agent proposes to stop or it receives a proposition to stop. In the former case, the agent issues the act prop.failure (resp. prop.success) to stop with failure (resp. to stop with success). Then, the automaton switches to the Failure state (resp. Success state), in which the agent waits for the local acknowledgments of the other agents. If all teammates acknowledge the proposition, the stop conditions with failure (resp. with success) are satisfied.

Figure 1: Contextualization automaton. States: Idle, Planning, CoRe, Failure and Success; edges are dialog acts, prefixed with ! when sent and ? when received (prop.solve, refine, refute, repair, prop.failure, prop.success, ack.failure, ack.success).
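A rough Python encoding of the main path through this automaton is given below; only the Idle/Planning/Failure/Success transitions described in the text are modeled, the intermediate states visible in Figure 1 (e.g., CoRe) are omitted, and all identifiers are illustrative.

    from enum import Enum, auto

    class DialogState(Enum):
        IDLE = auto()
        PLANNING = auto()
        FAILURE = auto()      # waiting for ack.failure from every teammate
        SUCCESS = auto()      # waiting for ack.success from every teammate

    def next_state(state: DialogState, act: str) -> DialogState:
        # acts are prefixed with "!" (sent) or "?" (received); both trigger the same move
        kind = act.lstrip("!?")
        if state is DialogState.IDLE and kind == "prop.solve":
            return DialogState.PLANNING
        if state is DialogState.PLANNING and kind == "prop.failure":
            return DialogState.FAILURE
        if state is DialogState.PLANNING and kind == "prop.success":
            return DialogState.SUCCESS
        return state          # refine, refute and repair keep the agent in Planning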

Agent Planning Algorithm. The agent planning algorithm is given by Algorithm 1. First of all, a non-terminal partial plan is nondeterministically chosen within the search space (a partial plan is terminal if no refinement, repair or refutation is applicable). If all the partial plans are terminal (line 3), the investigation cannot be pushed further and the agent proposes to stop with failure and ends its reasoning loop (line 5). The proposition is submitted to its teammates. If at least one of the partial plans is non-terminal, the agent must assess whether this partial plan is a solution plan: in this case, the plan π does not contain any open goal and cannot be refuted. The former condition is verified by the procedure OpenGoals(π). This procedure can be efficiently implemented with a hash table: each time a new action is added, its open goals are added to the table, and, whenever a causal link is added, the corresponding open goal is withdrawn. The latter condition is checked by the procedure Threats(π). When a plan is considered to be a solution plan (line 9), the agent proposes to stop the investigation process and ends its reasoning loop. Otherwise, it proposes to apply a refinement, a repair or a refutation. Suppose it decides to apply a refinement (line 14): Refine(φ, π) returns all the possible refinements, where φ is the open goal of π under consideration. If there is no possible refinement, φ is labeled as unsolved and the agent broadcasts the failure. Otherwise, the agent picks a refinement and submits it to the other agents. The same mechanism is applied for repairs (line 24). Finally, if φ is a refutation (line 31), the agent broadcasts it to the other agents.
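A minimal sketch of that bookkeeping, assuming ground facts and the illustrative names used in the earlier sketches, could look as follows.

    from collections import defaultdict

    class OpenGoalTable:
        """Incremental open-goal tracking: preconditions start open and are
        withdrawn when a causal link supports them."""

        def __init__(self):
            self.table = defaultdict(set)          # action -> unsupported preconditions

        def add_action(self, action, preconds):
            self.table[action] |= set(preconds)    # every precondition starts as an open goal

        def add_causal_link(self, producer, fact, consumer):
            self.table[consumer].discard(fact)     # the link closes that open goal

        def open_goals(self) -> set:
            return {(a, p) for a, goals in self.table.items() for p in goals}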

III. REFINEMENT MECHANISMS

Causal link addition. This is the simplest refinement: if p is an open goal of aj and there is an action ai that produces p, then ai −p→ aj is added to C. Causal link addition makes it possible to take advantage of the favorable relations between actions in a plan. Moreover, it enables the agents to iteratively build a0 (i.e., s0). This initial state is scattered over the agents, but thanks to causal link additions, relevant initial facts are broadcast. More generally, this kind of refinement is important because it provides a rational means to share beliefs.
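Reusing the PartialPlan, CausalLink and consistent_with sketches introduced earlier, the causal-link addition refinement might be written as below; this is an illustrative sketch, not the authors' implementation.

    def add_causal_link(plan: "PartialPlan", ai, p, aj):
        # a_i must actually produce the open goal p of a_j
        if p not in ai.add:
            return None
        new_orderings = plan.orderings | {(ai, aj)}
        # the ordering constraint a_i ≺ a_j must keep ≺ acyclic
        if not consistent_with(new_orderings, set()):
            return None
        return PartialPlan(actions=plan.actions,
                           orderings=new_orderings,
                           bindings=plan.bindings,
                           links=plan.links | {CausalLink(ai, p, aj)})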

Sub-plan addition. In this kind of refinement, a sub-plan is added to support an open goal. But this sub-plan can itself contain open goals that will have to be solved, and so on. This can be achieved by our relaxed HTN planning approach, which is detailed later.

Refutation mechanism. Refutations have been defined in Definition 6: computing a refutation means finding an action ak that invalidates a causal link ai −p→ aj.

IV. REPAIR MECHANISMS

Repair mechanisms compute the modifications to be made on a refuted plan. We distinguish ordering constraint addition, binding constraint modification and sub-plan addition.

Ordering constraints addition. In this kind of repair, ordering constraints are added in order to prevent the rebutting action ak from being executed between ai and aj.


Algorithm 1: Agent planning procedure
 1  while planning = true do
 2      plans ← non-terminal plans in the search space;
 3      if plans is empty then
 4          Submit a failure proposition;
 5          planning ← false;
 6      else
 7          Select a plan π ∈ plans;
 8          flaws ← OpenGoals(π) ∪ Threats(π);
 9          if flaws is empty then
10              Submit a success proposition about π;
11              planning ← false;
12          else
13              Select a flaw φ ∈ flaws;
14              if φ is an open goal then
15                  refinements ← Refine(φ, π);
16                  if refinements is empty then
17                      Label φ as unsolved;
18                      Assert the failed repair of φ;
19                  else
20                      Select a refinement ρ ∈ refinements;
21                      Submit ρ as a refinement of φ in π;
22              else if φ is a threat already asserted then
23                  repairs ← Repair(φ, π);
24                  if repairs is empty then
25                      Label φ as impossible to solve;
26                      Assert the failed repair of φ;
27                  else
28                      Select a repair ψ ∈ repairs;
29                      Submit ψ as a repair of φ in π;
30              else
31                  Submit φ as a new refutation of π;

Existing constraints on ak          Constraints to add
∅                                   aj ≺ ak or ak ≺ ai
ak ≺ aj                             ak ≺ ai
ai ≺ ak                             aj ≺ ak
ai ≺ ak and ak ≺ aj                 no solution

Table II: Ordering constraints addition

It is worth noting that, when the plan cannot be repaired by adding ordering constraints, it can still be repaired by adding a sub-plan. The different possibilities are listed in Table II.
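Table II boils down to trying promotion (aj ≺ ak) and demotion (ak ≺ ai) and keeping whichever remains consistent; a hedged sketch, reusing the consistent_with helper from the refutation example, is shown below.

    def ordering_repairs(plan: "PartialPlan", ak, ai, aj) -> list:
        candidates = [(aj, ak),    # promotion: a_j ≺ a_k
                      (ak, ai)]    # demotion:  a_k ≺ a_i
        repairs = []
        for constraint in candidates:
            if consistent_with(plan.orderings, {constraint}):
                repairs.append(constraint)
        return repairs             # an empty list matches the "no solution" row of Table II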

Sub-plan addition. When ak deletes a fact p necessary to aj, sub-plan addition must enforce the production of p after ak. This can be achieved by our relaxed HTN planning approach, which is detailed in the next section.

Binding constraints modification. Another way to repair a refutation is to prevent the undercutting facts p and ¬q from codesignating. To that end, binding constraints can be modified as long as consistency is preserved.
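One simple way to realize this repair is to add a noncodesignation constraint on a pair of corresponding parameters of p and q, keeping only the additions that leave the binding constraints satisfiable; the sketch below, with its union-find satisfiability check, is an assumption of ours rather than the paper's procedure.

    def separation_repairs(plan: "PartialPlan", p_terms, q_terms) -> list:
        repairs = []
        for x, y in zip(p_terms, q_terms):
            candidate = (x, "!=", y)                       # force x and y apart
            if bindings_satisfiable(plan.bindings | {candidate}):
                repairs.append(candidate)
        return repairs

    def bindings_satisfiable(bindings: set) -> bool:
        # union-find over "=" constraints, then no "!=" constraint may join two
        # terms of the same codesignation class
        parent = {}
        def find(t):
            parent.setdefault(t, t)
            while parent[t] != t:
                parent[t] = parent[parent[t]]
                t = parent[t]
            return t
        for x, op, y in bindings:
            if op == "=":
                parent[find(x)] = find(y)
        return all(find(x) != find(y) for x, op, y in bindings if op == "!=")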

V. REFINEMENT BASED ON HTN PLANNING

We postulate that operators or methods can be triggered even though some preconditions are not satisfied by the agent's beliefs. These preconditions are assumed by the agent. Formally, open goal computation is based on the identification of all the possible substitutions needed to apply a method or an operator. The results of these substitutions are respectively complex or primitive actions. The refinement algorithm is based on [5]. We consider that the goal g of a planning problem represents the preconditions of a complex action, i.e., a sequence of primitive or complex actions. Then refinement is defined as follows:

Definition 7 (Refinement): Let P = (s0, T , 〈α0, . . . , αn〉) be a planning problem. Plan π is a refinement of P if all the linearizations λ ∈ completion(π) refine P. Let λ = 〈a0, . . . , ak〉 be a sequence of primitive actions; λ refines P if one of the following conditions is true: (i) the sequence of actions to be achieved is empty and λ is empty; (ii) α0 is primitive: α0 is relevant for a0, α0 is applicable from s0, and λ = 〈a1, . . . , ak〉 refines (s1, T , 〈α1, . . . , αn〉); (iii) α0 is complex: there is a reduction 〈r1, . . . , rj〉 applicable to achieve α0 from s0, and λ refines (s′0, T , 〈r1, . . . , rj , α1, . . . , αn〉) with s′0 = s0 ∪ opengoals(α0).

The algorithm introduced in [7] implements Definition 7. It explores a state space called the refinement tree. Each node of the tree is a tuple (s, 〈α0, . . . , αn〉) where s is the state of the world and 〈α0, . . . , αn〉 the remaining actions at this decomposition step. The edges are the possible transitions between the different states. Each transition is labeled by the applied operator or method with its binding constraints and its possible open goals.
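The following sketch gives the flavor of this relaxed decomposition under strong simplifying assumptions (ground facts, method preconditions folded into the complex task, depth-first exploration); the Task structure and the refine function are illustrative, not the authors' planner.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Task:
        name: str
        primitive: bool
        preconds: frozenset = frozenset()
        add: frozenset = frozenset()
        delete: frozenset = frozenset()

    def refine(state, tasks, methods, assumed=frozenset()):
        """Returns (primitive plan, open goals) or None.
        methods maps a complex task name to its list of reductions (task sequences)."""
        if not tasks:
            return [], assumed
        head, rest = tasks[0], tasks[1:]
        if head.primitive:
            missing = head.preconds - state                 # relaxed applicability: unmet
            new_state = (state | head.add) - head.delete    # preconditions become open goals
            result = refine(new_state, rest, methods, assumed | missing)
            if result is not None:
                plan, goals = result
                return [head] + plan, goals
            return None
        for reduction in methods.get(head.name, []):        # complex task: try each reduction
            missing = head.preconds - state                 # s'0 = s0 ∪ opengoals(α0)
            result = refine(state | missing,
                            list(reduction) + list(rest),
                            methods, assumed | missing)
            if result is not None:
                return result
        return None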

CONCLUSION

In this paper, we have introduced a multi-agent model for plan synthesis in which the production of a global shared plan is viewed as a collaborative goal-directed reasoning about actions. The advantage of the presented model is to enable agents to dynamically compose their heterogeneous skills and beliefs through a unified planning framework based on POP and HTN procedures. Moreover, it structures the interactions as a collaborative, sound and complete investigation process, and former works on synchronization, coordination and conflict resolution are integrated through the notions of refutation and repair. From our point of view, this approach is suitable for applications in which agents share a common goal and in which splitting the planning and coordination steps becomes difficult due to the agents' strong interdependence.

REFERENCES

[1] D. Wu, B. Parsia, E. Sirin, J. Hendler, and D. Nau, "Automating DAML-S web services composition using SHOP2," in Proceedings of the International Semantic Web Conference, 2003.

[2] E. H. Durfee, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. MIT Press, 2000, ch. Distributed Problem Solving and Planning.

[3] M. Fox, A. Gerevini, D. Long, and I. Serina, "Plan stability: Replanning versus plan repair," in Proceedings of the International Conference on Automated Planning and Scheduling. California, USA: AAAI Press, 2006.

[4] J. Penberthy and D. Weld, "UCPOP: A sound, complete, partial order planner for ADL," in Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning, B. Nebel, C. Rich, and W. Swartout, Eds. Morgan Kaufmann, 1992, pp. 103–114.

[5] D. Nau, T.-C. Au, O. Ilghami, U. Kuter, J. W. Murdock, D. Wu, and F. Yaman, "SHOP2: An HTN planning system," Journal of Artificial Intelligence Research, vol. 20, pp. 379–404, 2003.

[6] D. Chapman, "Planning for conjunctive goals," Artificial Intelligence, vol. 32, no. 3, pp. 333–377, 1987.

[7] D. Pellier and H. Fiorino, "Assumption-based planning," in Proceedings of the International Conference on Advances in Intelligent Systems Theory and Applications, Luxembourg-Kirchberg, Luxembourg, 2004, pp. 367–376.
