Adaptive Reasoning for Cooperative Agents
Luís Moniz Pereira Alexandre Pinto
Centre for Artificial Intelligence – CENTRIAUniversidade Nova de Lisboa
INAP’09, 5-7 November Évora, Portugal
Summary - 1
• Explicit affirmation and negation, plus a third logic value of undefined, are useful in situations where decisions must be taken based on scarce, ambiguous, or contradictory information
• In a 3-valued setting, we consider agents that learn definitions for a target concept and its opposite, taking both positive and negative examples as instances of the two classes
Summary - 2
• A single agent exploring an environment can gather only so much information, which may not suffice to find good explanations
• A cooperative multi-agent strategy, where each agent explores part of the environment and shares its findings, provides better results
• We employ a distributed genetic algorithm framework, enhanced by a Lamarckian belief revision operator
• Agents communicate explanations—coded as belief chromosomes—by sharing them in a common pool
Summary - 3
• Another way of interpreting this communication is via agent argumentation
• When taking in all arguments to find a common ground or consensus, we may have to revise assumptions in each argument
• A collaborative viewpoint results: Arguments are put together to find 2-valued consensus on conflicting learnt concepts, within an evolving genetic pool, so as to identify “best” joint explanations to observations
Learning Positive and Negative Knowledge
– Autonomous agent: acquisition of information by means of experiments
– Experiment:
• execution of an action
• evaluation of the results with respect to the achievement of a goal
• positive and negative results

– Learning general rules on actions:
• distinction among actions with a positive, negative, or unknown/undefined outcome
2-valued vs. 3-valued Learning
– Two values:
• bottom-up generalization from instances
• top-down refinement from general classes

– Three values:
• learning a definition both for the concept and its opposite
[Figure: three panels (a, b, c) showing the same cluster of positive (+) and negative (–) examples, each with a different boundary a learner may draw between the two classes]
Learning in a 3-valued Setting
– Extended Logic Programs (ELP): with explicit negation "¬A"

– Clauses of the form
  L0 ← L1, ... , Lm, not Lm+1, ... , not Lm+n
  where each Li can be either A or ¬A

– Explicit representation of negative information:
  ¬fly(X) ← penguin(X).

– Three logical values: true, false, unknown or undefined
Problem definition
Given
• a set P of possible ELP programs (bias)
• a set E+ of positive examples
• a set E- of negative examples
• a consistent extended logic program B (background knowledge)

Use learning to find an ELP P ∈ P such that
• P ⊨ E+ and P ⊨ ¬E-   (completeness)
• P ⊭ ¬L or P ⊭ L, for every L ∈ E+ ∪ E-   (consistency)
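A minimal sketch of these two conditions in Python, abstracting a candidate program by the set of literals it derives and encoding explicit negation as a "-" prefix (both are illustrative choices, not from the talk):

```python
def neg(l):
    """Explicit negation: '-p' is the opposite of 'p' and vice versa."""
    return l[1:] if l.startswith("-") else "-" + l

def is_solution(derived, e_pos, e_neg):
    """Check a candidate program, given as the set of literals it derives:
    completeness - every positive example is derived, and so is the
                   opposite of every negative example;
    consistency  - no example literal is derived together with its opposite."""
    complete = (all(l in derived for l in e_pos)
                and all(neg(l) in derived for l in e_neg))
    consistent = not any(l in derived and neg(l) in derived
                         for l in e_pos + e_neg)
    return complete and consistent
```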
Intersection of definitions
[Figure: the learned definitions for p and ¬p overlap in their coverage of E+ and E-; the overlap contains:]

• Exceptions to the positive concept: negative examples
• Exceptions to the negative concept: positive examples
• Unseen atoms
Unseen atoms
– Unseen atoms which are both true and false are classified as unknown or undefined:

  p(X) ← p+(X), not ¬p(X).
  ¬p(X) ← p-(X), not p(X).

– If the concept is true and its opposite undefined, then it is classified as true:

  p(X) ← p+(X), undefined( ¬p(X) ).
  ¬p(X) ← p-(X), undefined( p(X) ).
Training set atoms
– They must be classified according to the training set
– Default literals, representing non-abnormality conditions, are added to the rules:

  p(X) ← p+(X), not ab_p+(X), not ¬p(X).
  ¬p(X) ← p-(X), not ab_p-(X), not p(X).
Example: knowledge
B: bird(a). has_wings(a).
   jet(b). has_wings(b).
   angel(c). has_wings(c). has_limbs(c).
   penguin(d). has_wings(d). has_limbs(d).
   dog(e). has_limbs(e).
   cat(f). has_limbs(f).

E+ = { flies(a) }   E- = { flies(d), flies(e) }

[Figure: coverage of the learned flies+ and flies- definitions over the individuals a, b, c, d, e, f]
Example: learned theory

flies+(X) ← has_wings(X).
flies-(X) ← has_limbs(X).

flies(X) ← flies+(X), not ab_flies+(X), not ¬flies(X).
¬flies(X) ← flies-(X), not flies(X).
ab_flies+(d).

flies(X) ← flies+(X), undefined( ¬flies(X) ).
¬flies(X) ← flies-(X), undefined( flies(X) ).

Generalizing exceptions we obtain:

ab_flies+(X) ← penguin(X).
Least General vs. Most General Solutions
– Bottom-up methods:
• search from specific to general: Least General Solution (LGS)
• GOLEM (RLGG), CIGOL (Inverse Resolution)

– Top-down methods:
• search from general to specific: Most General Solution (MGS)
• FOIL, Progol
Criteria for choosing the generality
– Risk deriving from a classification error:
  high risk → LGS
  low risk → MGS

– Confidence in the set of negative examples:
  high confidence → MGS
  low confidence → LGS
Generality of Solutions
B: bird(X) ← sparrow(X).
   mammal(X) ← cat(X).
   sparrow(a). cat(b). bird(c). mammal(d).

E+ = { flies(a) }   E- = { flies(b) }

fliesMGS(X) ← bird(X).
fliesLGS(X) ← sparrow(X).

¬fliesMGS(X) ← mammal(X).
¬fliesLGS(X) ← cat(X).
Of Beggars and Attackers

[Figure: beggar1, beggar2, attacker1, attacker2]
Example: Mixing LGS and MGS (1)
– Concept of attacker ⇒ maximize the concept and minimize its opposite:

  attacker1(X) ← attacker+MGS(X), not ¬attacker1(X).
  ¬attacker1(X) ← attacker-LGS(X), not attacker1(X).

– Concept of beggar (give only to those appearing to need it) ⇒ minimize the concept and maximize its opposite:

  beggar1(X) ← beggar+LGS(X), not ¬beggar1(X).
  ¬beggar1(X) ← beggar-MGS(X), not beggar1(X).
Example: Mixing LGS and MGS (2)

– However, rejected beggars may turn into attackers ⇒ maximize the concept and minimize its opposite:

  beggar2(X) ← beggar+MGS(X), not ¬beggar2(X).
  ¬beggar2(X) ← beggar-LGS(X), not beggar2(X).

– The concepts can be used to minimize the risk when carrying a lot of money:

  run ← lot_of_money, attacker1(Y), not beggar2(Y).
  ¬run ← give_money.
  give_money ← beggar1(Y).
  give_money ← attacker1(Y), beggar2(Y).
Example: Mixing LGS and MGS (3)

– When carrying little money, one may prefer to risk being beaten up. Therefore one wants to relax attacker1, but not so much as to use attacker-MGS:

  run ← little_money, attacker2(Y).

  attacker2(X) ← attacker+LGS(X), not ¬attacker2(X).
  ¬attacker2(X) ← attacker-LGS(X), not attacker2(X).
Abduction
• Consider rule: a => b
• Deduction: from a conclude b
• Abduction: knowing or observing b, assume a as its hypothetical explanation
• From theory + observations find abductive models —the explanations for observations
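As a minimal propositional sketch (the dictionary-of-rules encoding and the function name are illustrative assumptions, not the talk's machinery), abduction can be read as running rules backwards from an observation to the hypothesis sets that would explain it:

```python
def explain(rules, abducibles, goal):
    """All hypothesis sets (over the abducibles) that derive `goal`.
    `rules` maps each head to its alternative bodies (lists of subgoals)."""
    if goal in abducibles:
        return [{goal}]
    explanations = []
    for body in rules.get(goal, []):
        partial = [set()]               # combine explanations of all subgoals
        for sub in body:
            partial = [e | s
                       for e in partial
                       for s in explain(rules, abducibles, sub)]
        explanations.extend(partial)
    return explanations

# Rule a => b: observing b, assume a as its hypothetical explanation.
print(explain({"b": [["a"]]}, {"a"}, "b"))   # [{'a'}]
```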
Distributing Observations
• Code observations as Integrity Constraints (ICs):
← not some_observation
• Find abductive explanations for observations
• Create several agents, give each the same base theory and a subset of the observations
• Each agent comes up with alternative abductive explanations for its own ICs. These need not be minimal sets of hypotheses
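The distribution step can be sketched as a round-robin split of the integrity constraints over the agents (a simplifying assumption; any partition would do):

```python
def distribute(observations, n_agents):
    """Give each agent the same base theory but only a slice of the
    observations (ICs), assigned round-robin."""
    slices = [[] for _ in range(n_agents)]
    for i, obs in enumerate(observations):
        slices[i % n_agents].append(obs)
    return slices

# Five observed ICs shared between two agents:
print(distribute(["ic1", "ic2", "ic3", "ic4", "ic5"], 2))
# [['ic1', 'ic3', 'ic5'], ['ic2', 'ic4']]
```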
Choosing the Best Explanation
• "Brainstorming" is used for solving complex problems
• Each participant contributes by adding ideas (abducibles) to a common idea pool, shared by all
• Ideas are mixed, crossed, mutated, and selected
• Solutions arise from the pool by iterating this evolutionary process
• Our work is inspired by the evolution of alternative ideas and arguments to find collaborative solutions
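The pool dynamics can be sketched with set-valued chromosomes of abducibles; the concrete operators below (a random re-split for crossover, a single-hypothesis toggle for mutation) are illustrative choices, not the talk's exact ones:

```python
import random

def crossover(c1, c2, rng):
    """Mix two explanations: pool their hypotheses and split them anew."""
    pool = sorted(c1 | c2)
    rng.shuffle(pool)
    cut = len(pool) // 2
    return set(pool[:cut]), set(pool[cut:])

def mutate(c, abducibles, rng):
    """Toggle one hypothesis in or out of the chromosome."""
    h = rng.choice(sorted(abducibles))
    return c ^ {h}
```

Crossover never invents or loses hypotheses: the two offspring always partition the parents' combined pool.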
Lamarckian Evolution - 1
• Lamarckian evolution = meme evolution
• A "meme" is the cognitive equivalent of a gene
• In genetic programming, Lamarckian evolution has proven a powerful concept
• There are GAs that additionally include a logic-based Lamarckian belief revision operator, where assumptions are coded as memes
Lamarckian Evolution - 2
• Lamarckian operator (L-op) vs. Darwinian ones (D-ops)
• The L-op modifies chromosomes coding beliefs to improve fitness with experience, rather than randomly
• The L-op and D-ops play distinct roles
• The L-op is employed to bring chromosomes closer to a solution, by belief revision
• D-ops randomly produce alternative belief chromosomes to deal with unencountered situations, by interchanging memes
Specific Belief Evolution Method
• In traditional multi-agent problem solving, agents benefit from others’ knowledge & experience by message-passing
• Our new method: knowledge & experience coded as memes, and exchanged by crossover
• Crucial: logic-based belief revision is used to modify belief assumptions (memes) based on individual agent experience
Fitness Functions
• Various fitness functions can be used in belief revision. The simplest is:

  Fitness(ci) = (ni / n) / (1 + NC)

  where
  - ni is the number of ICs satisfied by chromosome ci
  - n is the total number of ICs
  - NC is the number of contradictions depending on chromosome ci
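The formula can be checked directly in Python (parameter names are illustrative):

```python
def fitness(n_satisfied, n_total, n_contradictions):
    """Fitness(ci) = (ni / n) / (1 + NC): the fraction of integrity
    constraints a chromosome satisfies, damped by the number of
    contradictions that depend on it."""
    return (n_satisfied / n_total) / (1 + n_contradictions)

print(fitness(3, 4, 0))   # 0.75: 3 of 4 ICs satisfied, no contradictions
print(fitness(4, 4, 1))   # 0.5:  all ICs satisfied, but one contradiction
```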
Assumptions & Argumentation - 1
• Assumptions are coded as abducible literals in LPs
• Abducibles are packed together in chromosomes
• Evolutionary operators —crossover, mutation, revision, selection— are applied to chromosomes
• This setting provides means to search for evolutionary consensus from initial assumptions
Assumptions & Argumentation - 2
• The 3-valued contradiction removal presented before (with the undefined value) is superficial
• It removes the contradiction p(X) & ¬p(X) by forcing a 3-valued semantics, not by looking into the reasons why both hold
• The improvement relies on principles of argumentation: find the arguments supporting p(X), or ¬p(X), and change some of their assumptions
Collaborative Opposition
• The challenging environment of the Semantic Web is a 'place' for future intelligent systems to float in
• Learning in 2 or in 3 values are both open possibilities
• Knowledge & reasoning are shared and distributed
• Opposing arguments will surface from agents
• We need to know how to reconcile opposing arguments
• Find a 2-valued consensus as much as possible
• Least-commitment 3-valued consensuses are not enough
Argumentation & Consensus - 1
• The non-trivial problem we addressed was that of defining 2-valued complete models, consistent with the 3-valued preferred maximal scenarios of Dung
• The resulting semantics —Approved Models— is a conservative extension to the well-known Stable Models semantics, in that every SM is an Approved Model
Argumentation & Consensus - 2
• Approved Models are guaranteed to exist for every Normal Logic Program (NLP), whereas SMs are not
• Examples show NLPs with no SMs can usefully model knowledge
• The guarantee is crucial in program composition of knowledge from diverse sources
• The warrant of model existence is also crucial after external updates or self-updates to NLPs
Argumentation & Consensus - 3
• Start by merging all opposing abductive arguments
• Draw conclusions from the program + the single merged argument
• If contradictions arise: non-deterministically choose one assumption of the single argument and revise its truth value
• Iteration finds the non-contradictory arguments
• The evolutionary method presented implements yet another mechanism to find consensual non-contradictory arguments
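The iteration can be sketched as follows; `contradicts` stands for a problem-specific contradiction test supplied by the caller, and discarding an assumption stands in for revising its truth value (both simplifying assumptions):

```python
import random

def find_consensus(arguments, contradicts, rng):
    """Merge opposing abductive arguments and, while the merged set is
    contradictory, non-deterministically revise one of its assumptions."""
    merged = set().union(*arguments)            # single merged argument
    while contradicts(merged):
        victim = rng.choice(sorted(merged))     # choose one assumption
        merged.discard(victim)                  # revise it away
    return merged

# Two agents argue for opposite assumptions about p:
contradictory = lambda s: {"p", "not_p"} <= s
consensus = find_consensus([{"p"}, {"not_p"}], contradictory, random.Random(0))
print(len(consensus))   # 1: one of the two assumptions survives
```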
Thank you for your attention!