Do software agents know what they talk about?

Page 1: Do software agents know what they talk about?

Do software agents know what they talk about?

Agents and Ontology

dr. Patrick De Causmaecker, Nottingham, March 7-11 2005

Page 2: Do software agents know what they talk about?

Deductive reasoning agents

Page 3: Do software agents know what they talk about?

Logic programming
First order logic
Example: Prolog
Example: Rule-based systems
Example: Constraint Satisfaction

Page 4: Do software agents know what they talk about?

First order logic

Predicates apply to atoms, not to other predicates; quantifiers range over atoms. Grelling's paradox cannot be expressed in first-order logic, because it requires applying a predicate to (and quantifying over) predicates themselves.

If an adjective truly describes itself, call it "autological", otherwise call it "heterological". For example, "polysyllabic" and "English" are autological, while "monosyllabic" and "pulchritudinous" are heterological. Is "heterological" heterological? If it is, then it isn't; if it isn't, then it is.

Page 6: Do software agents know what they talk about?

father(terach, abraham).
father(terach, nachor).
father(terach, haran).
father(abraham, isaac).
father(haran, lot).
father(haran, milcah).
mother(sarah, isaac).

male(terach).
male(abraham).
male(nachor).
male(haran).
male(isaac).
male(lot).

female(sarah).
female(milcah).
female(yiscah).

likes(X, pome).

son(X, Y) :- father(Y, X), male(X).
daughter(X, Z) :- father(Z, X), female(X).
grandfather(X, Z) :- father(X, Y), father(Y, Z).
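
A few sample queries against this database, as a usage sketch (the answers assume the facts exactly as listed above):

?- son(X, terach).
X = abraham ;
X = nachor ;
X = haran.

?- grandfather(terach, Z).
Z = isaac ;
Z = lot ;
Z = milcah.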

Page 7: Do software agents know what they talk about?

Towers of Hanoi

% Move N disks from peg A to peg B using peg C as intermediate;
% Moves is the resulting list of [From, To] moves.
hanoi(1, A, B, _C, [[A, B]]) :- !.
hanoi(N, A, B, C, Moves) :-
    N1 is N - 1,
    hanoi(N1, A, C, B, Ms1),
    hanoi(N1, C, B, A, Ms2),
    append(Ms1, [[A, B] | Ms2], Moves), !.
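
As a usage sketch (the peg names left, centre and right are arbitrary atoms), a query for three disks yields the expected 2^3 - 1 = 7 moves:

?- hanoi(3, left, right, centre, Moves).
Moves = [[left, right], [left, centre], [right, centre], [left, right],
         [centre, left], [centre, right], [left, right]].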

Page 8: Do software agents know what they talk about?

Example: Rule-based systems

http://www.expertise2go.com/download/demo.html

Page 9: Do software agents know what they talk about?

RULE [Is the battery dead?]
If [the result of switching on the headlights] = "nothing happens" or
   [the result of trying the starter] = "nothing happens"
Then [the recommended action] = "recharge or replace the battery"

RULE [Is the car out of gas?]
If [the gas tank] = "empty"
Then [the recommended action] = "refuel the car"

Page 10: Do software agents know what they talk about?

RULE [Is the battery weak?]
If [the result of trying the starter] : "the car cranks slowly" "the car cranks normally" and
   [the headlights dim when trying the starter] = true and
   [the amount you are willing to spend on repairs] > 24.99
Then [the recommended action] = "recharge or replace the battery"

Page 11: Do software agents know what they talk about?

RULE [Is the car flooded?]
If [the result of trying the starter] = "the car cranks normally" and
   [a gas smell] = "present when trying the starter"
Then [the recommended action] = "wait 10 minutes, then restart flooded car"

Page 12: Do software agents know what they talk about?

RULE [Is the gas tank empty?]
If [the result of trying the starter] = "the car cranks normally" and
   [a gas smell] = "not present when trying the starter"
Then [the gas tank] = "empty" @ 90
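
The rules on pages 9-12 could be sketched in Prolog roughly as follows (a hedged illustration, not the e2gLite engine; all predicate names are invented, and certainty factors such as "@ 90" are ignored):

% Observed facts such as starter_result/1 would come from the user prompts.
:- dynamic headlight_result/1, starter_result/1, gas_smell/1,
           headlights_dim_when_starting/1, repair_budget/1.

recommended_action('recharge or replace the battery') :-
    ( headlight_result('nothing happens')
    ; starter_result('nothing happens')
    ).
recommended_action('refuel the car') :-
    gas_tank(empty).
recommended_action('recharge or replace the battery') :-
    ( starter_result('the car cranks slowly')
    ; starter_result('the car cranks normally')
    ),
    headlights_dim_when_starting(true),
    repair_budget(Budget),
    Budget > 24.99.
recommended_action('wait 10 minutes, then restart flooded car') :-
    starter_result('the car cranks normally'),
    gas_smell('present when trying the starter').

gas_tank(empty) :-
    starter_result('the car cranks normally'),
    gas_smell('not present when trying the starter').

% e.g. ?- assertz(starter_result('nothing happens')),
%         recommended_action(Action).
% Action = 'recharge or replace the battery'.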

Page 13: Do software agents know what they talk about?

PROMPT [the result of trying the starter] Choice CF
"What happens when you turn the key to try to start the car?"
"the car cranks normally"
"the car cranks slowly"
"nothing happens"

Page 14: Do software agents know what they talk about?

PROMPT [a gas smell] MultChoice CF
"The smell of gasoline is:"
"present when trying the starter"
"not present when trying the starter"

Page 15: Do software agents know what they talk about?

PROMPT [the result of switching on the headlights] MultChoice CF
"The result of switching on the headlights is:"
"they light up"
"nothing happens"

PROMPT [the headlights dim when trying the starter] YesNo CF
"Do the headlights dim when you try the starter with the lights on?"

Page 16: Do software agents know what they talk about?

Example: Constraint Satisfaction
http://kti.ms.mff.cuni.cz/~bartak/constraints/index.html
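
As a minimal taste of constraint satisfaction in Prolog (a generate-and-test sketch, not taken from the tutorial above): colour three mutually adjacent regions so that neighbouring regions get different colours.

% Domains: each region may be red, green or blue.
colour(red).
colour(green).
colour(blue).

% A solution assigns colours to A, B and C subject to the constraints.
colouring(A, B, C) :-
    colour(A), colour(B), colour(C),   % generate candidate assignments
    A \= B, B \= C, A \= C.            % test the inequality constraints

% ?- colouring(A, B, C).
% A = red, B = green, C = blue ;
% A = red, B = blue, C = green ;
% ...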

Page 21: Do software agents know what they talk about?

Deductive reasoning

Intelligent behaviour can be achieved by giving the system a symbolic representation of its environment and allowing it to manipulate this representation syntactically.

The symbolic representation is a set of logical formulas; the manipulation is deduction, or theorem proving.

Page 22: Do software agents know what they talk about?

(Diagram: from perception to action)
Interpretation: pixel manipulation
Knowledge base, beliefs: dist(me, d1) = 90 cm, door(d1)
Plan: STOP
Action: BRAKE!

Page 23: Do software agents know what they talk about?

Two problems

Transduction: sufficiently fast transformation of observations into an adequate symbolic representation.

Representation/reasoning: the symbolic representation as a basis for the manipulation process.

Both should be sufficiently fast.

Page 24: Do software agents know what they talk about?

AI approach

Perception: vision, speech, natural language, learning, …

Representation: knowledge representation tasks, automatic reasoning, automatic planning.

A lot of work has been done; the results are still very limited.

Page 25: Do software agents know what they talk about?

Agents as theorem provers

The internal state of the agent is a database of first-order predicates; this database contains all beliefs of the agent:

Open(valve221)
Temperature(reactor4726, 321)
Pressure(tank776, 28)
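
Written as Prolog facts, such a belief database might look like this; the overheated/1 rule is an invented example of a belief derived by deduction:

% The agent's beliefs as a small fact base.
open(valve221).
temperature(reactor4726, 321).
pressure(tank776, 28).

% A derived belief: a reactor counts as overheated above 300 degrees.
overheated(Reactor) :-
    temperature(Reactor, Temp),
    Temp > 300.

% ?- overheated(R).
% R = reactor4726.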

Page 26: Do software agents know what they talk about?

Agents as theorem provers

Beliefs are neither exact nor complete, and the interpretation may be faulty. Still, these predicates are all the agent has to go on.

Page 27: Do software agents know what they talk about?

Agents as theorem provers

Formally:
L = {all first-order predicates}
D = ℘(L) = {all L databases}
Δ, Δ1, Δ2, … ∈ D
ρ = {deduction rules of the agent}
Δ ⊢ρ φ means that formula φ from L can be proven from database Δ using the rules ρ.

Page 28: Do software agents know what they talk about?

Agents as theorem provers

The agent:
The perception function: see : S -> Per
The adaptation of the internal state: next : D × Per -> D
The action function: action : D -> Ac

Page 29: Do software agents know what they talk about?

Function action by proof

function action(Δ : D) returns an action in Ac
begin
    for each a ∈ Ac do
        if Δ ⊢ρ Do(a) then return a
    end-for
    for each a ∈ Ac do
        if Δ ⊬ρ ¬Do(a) then return a
    end-for
    return null
end
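
In Prolog, the same selection scheme could be sketched as follows (predicate and action names are illustrative; Prolog's proof search plays the role of ⊢ρ, and forbidden/1 stands in for a provable ¬Do(a)):

:- dynamic do/1, forbidden/1.

% Candidate actions the agent can choose from.
possible_action(suck).
possible_action(forward).
possible_action(turn).

% First try an action whose Do(a) is provable; otherwise pick any action
% that is not provably forbidden; otherwise give up and return null.
select_action(Action) :-
    possible_action(Action),
    do(Action), !.
select_action(Action) :-
    possible_action(Action),
    \+ forbidden(Action), !.
select_action(null).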

Page 30: Do software agents know what they talk about?

Example: the vacuum cleaning agent

Page 31: Do software agents know what they talk about?

Vacuum cleaning: the world

Predicates: In(x, y), Dirt(x, y), Facing(d)

Previous information changes:
old(Δ) = { P(t1, …, tn) | P ∈ {In, Dirt, Facing} and P(t1, …, tn) ∈ Δ }

Page 32: Do software agents know what they talk about?

Vacuum cleaning

The function new generates new knowledge:
new : D × Per -> D (exercise)

One can then define next as:
next(Δ, p) = (Δ \ old(Δ)) ∪ new(Δ, p)
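
A possible SWI-Prolog sketch of old and next, assuming beliefs and percepts are represented as lists of ground facts over in/2, dirt/2 and facing/1 (new/3 is only a placeholder here, since the slide leaves it as an exercise):

% old(+Delta, -Old): the in/dirt/facing facts currently in the database.
old_fact(Fact) :-
    functor(Fact, Name, _),
    member(Name, [in, dirt, facing]).

old(Delta, Old) :-
    include(old_fact, Delta, Old).

% Placeholder for the exercise: here the new knowledge is just the percept.
new(_Delta, Percept, Percept).

% next(Delta, p) = (Delta \ old(Delta)) U new(Delta, p)
next(Delta, Percept, Next) :-
    old(Delta, Old),
    subtract(Delta, Old, Kept),
    new(Delta, Percept, New),
    append(Kept, New, Next).

% ?- next([in(0,0), dirt(0,0)], [in(0,1), facing(north)], Next).
% Next = [in(0,1), facing(north)].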

Page 33: Do software agents know what they talk about?

Vacuum cleaning

Deduction rules have the form φ(…) → ψ(…): "if φ is consistent with the content of the database, conclude ψ".

Rule 1: work
In(x, y) ∧ Dirt(x, y) → Do(suck)

Rule 2: move
In(0, 0) ∧ Facing(north) ∧ ¬Dirt(0, 0) → Do(forward)
In(0, 1) ∧ Facing(north) ∧ ¬Dirt(0, 1) → Do(forward)
In(0, 2) ∧ Facing(north) ∧ ¬Dirt(0, 2) → Do(turn)
In(0, 2) ∧ Facing(east) → Do(forward)
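
These rules carry over almost directly to Prolog (a sketch with lowercase names; negation as failure \+ stands in for ¬):

% Beliefs about the current situation; normally maintained by next/new.
:- dynamic in/2, dirt/2, facing/1.

do(suck)    :- in(X, Y), dirt(X, Y).
do(forward) :- in(0, 0), facing(north), \+ dirt(0, 0).
do(forward) :- in(0, 1), facing(north), \+ dirt(0, 1).
do(turn)    :- in(0, 2), facing(north), \+ dirt(0, 2).
do(forward) :- in(0, 2), facing(east).

% ?- assertz(in(0, 0)), assertz(dirt(0, 0)), do(Action).
% Action = suck.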

Page 34: Do software agents know what they talk about?

Conclusions

Rather impractical…
The agent must try to determine its optimal action by reasoning, and this takes time (deductive systems are slow). Meanwhile the world may have changed…
"Calculative rationality": the agent decides on the action that was optimal at the time the reasoning process started.
This is not always acceptable.

Page 35: Do software agents know what they talk about?

Other problems

Logic is elegant but slow.
The see function belongs to a difficult, poorly understood area of AI.
The vacuum cleaning problem was already difficult to describe!

Page 36: Do software agents know what they talk about?

Agent-oriented programming: Agent0 (Shoham 1993)

Desire, belief, intention.

In Agent0 an agent is specified by:
capabilities
initial beliefs
initial commitments
rules to derive commitments (commitment rules)

Page 37: Do software agents know what they talk about?

Agent0

A commitment rule consists of:
a message condition, to be matched against received messages
a mental condition, to be matched against the beliefs and intentions
an action, to be selected if the conditions hold

Page 38: Do software agents know what they talk about?

Agent0

Two kinds of actions:
communicative
private

Three kinds of messages:
requests to perform an action
unrequests to stop an action
inform messages to pass on information

Page 39: Do software agents know what they talk about?

COMMIT(
  (agent, REQUEST, DO(time, action)),            ;;; message condition
  (B, [now, Friend agent] AND
      CAN(self, action) AND
      NOT [time, CMT(self, anyaction)]),         ;;; mental condition
  self,
  DO(time, action)
)

Informally: if an agent I believe to be a friend requests an action, I am able to perform it, and I am not already committed to any other action at that time, then commit to performing the action at that time.
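
The commitment rule could be approximated in Prolog roughly as follows (a hypothetical rendering, not Agent0 syntax; every predicate name is invented):

:- dynamic received/1, believes/1, can/2, committed/2.

% Commit to do(Time, Action) if a friendly agent requested it, we are able
% to perform it, and we are not yet committed to anything at that time.
commit(self, do(Time, Action)) :-
    received(request(Agent, do(Time, Action))),
    believes(friend(Agent)),
    can(self, Action),
    \+ committed(Time, _OtherAction).

% ?- assertz(received(request(a2, do(t5, open_valve)))),
%    assertz(believes(friend(a2))),
%    assertz(can(self, open_valve)),
%    commit(self, Commitment).
% Commitment = do(t5, open_valve).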

Page 40: Do software agents know what they talk about?

The Agent0 interpreter loop (diagram): Initialize, then repeatedly update beliefs, update commitments, and execute.
The loop operates on the agent's beliefs, commitments and abilities: incoming messages drive the belief update, and execution produces outgoing messages and internal actions.
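
The loop could be sketched schematically like this (hypothetical predicate names; the real Agent0 interpreter is considerably richer):

:- dynamic belief/1, commitment/2.

% One pass of the loop: incoming messages update the beliefs, commitment
% rules update the commitments, and commitments due now are executed.
step(MessagesIn) :-
    forall(member(inform(Fact), MessagesIn), assertz(belief(Fact))),
    update_commitments,
    forall(retract(commitment(now, Action)),
           format("executing: ~w~n", [Action])).

% Placeholder: apply commitment rules such as the one on page 39.
update_commitments.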