Declarative Problem Solving through Abduction
Antonis C. Kakas
Department of Computer Science
University of Cyprus, CYPRUS
[email protected]
(Subject Title: “DPS-Paris2007”)
PART 2
11-18 January, 2007
Paris, France
Course Breakdown
• Introduction
• Abduction – General Introduction
• Modelling Problems for Abduction and DPS
• Computational Logic & PROLOG – Background
• Abductive Logic Programming – Semantics
• Abductive Logic Programming – Computation
• ALP for Declarative Problem Solving – Diagnosis
• Projects – Discussion
(continued…)
Lecture 1: Reasoning for Declarative Problem Solving

Deduction: analytic reasoning, inferring a result by applying general rules to particular cases, e.g.
  from A (case) and B if A (general rule)
  infer B (result)
Abduction: synthetic reasoning, inferring the case from the rule and a result, e.g.
  from B (result) and B if A (general rule)
  infer A (case)
Induction: synthetic reasoning, but inferring the rule from the case and the result.
Reasoning for Declarative Problem Solving

Deduction: analytic reasoning, inferring a result by applying general rules (a model) to particular cases, e.g.
  from A (case) and B if A (general rule)
  infer B (result)
• Deduction produces observable (phenotype) information.
• Example: sad(alex) can be observed/tested.
• But this information is already known in the model. How can we improve the model?
Logical Reasoning for Declarative Problem Solving

Deduction: concerned with PREDICTION of observations (e.g. phenotype) from a given model:
  T |= Obs
Abduction: concerned with EXPLANATION according to a given model: it produces specific information, H, to account for the observations:
  T ∪ H |= Obs
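The two entailment relations above can be illustrated with a small propositional sketch. This is illustrative only: the encoding and the function names `forward_chain` and `abduce` are my own, not part of the course material.

```python
from itertools import combinations

# One general rule: B if A, encoded as (head, body).
RULES = [("b", {"a"})]

def forward_chain(facts, rules):
    """Deduction, T |= Obs: close the given facts under the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def abduce(obs, rules, abducibles):
    """Abduction, T ∪ H |= Obs: find sets H of abducibles accounting for obs."""
    for r in range(len(abducibles) + 1):
        for h in combinations(sorted(abducibles), r):
            if obs <= forward_chain(h, rules):
                yield set(h)

# Deduction: from the case {a} and the rule, infer the result b.
print(sorted(forward_chain({"a"}, RULES)))        # ['a', 'b']
# Abduction: from the result {b} and the rule, infer the case {a}.
print(sorted(next(abduce({"b"}, RULES, {"a"}))))  # ['a']
```

Deduction runs the rules forward from the case; abduction searches backwards for a case that makes the observation derivable.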
Lecture 2 – Contents
• Modelling for Problem Solving through Abduction
• Ontological Distinctions of Information
• Applications of Abduction in Artificial Intelligence
• Examples of AI application domains
• Introduction to Abductive Logic Programming (ALP)
• Semantics of abductive explanations
• Computation of abductive explanations
• Example Programs in ALP
• Introduction to ProLogICA
• Project Discussion
Modeling Problem Domains for Abductive Reasoning (& ALP)
• Vocabulary – Language of Domain
• Theory - Knowledge Representation of Domain
• Queries – Problems of Domain
Ontological Distinctions of Information and Logical Predicates
• Observable Information/Predicates: describe observations of scientific experiments; directly testable.
• Abducible Information/Predicates: describe underlying (theoretical) relations: missing/incomplete information that needs to be inferred; not directly testable.
• Background Information/Predicates: auxiliary information that links observable and abducible information.
EXAMPLE: A “socio-economic” model of Universities
• Language/Ontology of relations:
  {sad/1, overworked/1, academic/1, student/1, lecturer/1, poor/1}
• Model & background knowledge:
  sad(X) if overworked(X), poor(X)
  overworked(oliver)   overworked(alex)   overworked(krycia)
  lecturer(alex)   lecturer(krycia)   student(oliver)
  academic(alex), …   poor(alex).
• We can deduce (compute) the information sad(alex), but we cannot deduce anything new for oliver or krycia.
Modelling through Logic & Abduction
• Any model of our problem domain can be (and usually is) incomplete!
• Information is separated into three types:
  • Observable (e.g. phenotype) – obtained from experiments via observations
  • Theoretical (e.g. functional genotype) – underlying relations that cause the observable behaviour
  • Background – known relevant properties, e.g. structural or chemical information
• Example: {sad/1, overworked/1, academic/1, student/1, lecturer/1, poor/1}
EXAMPLE: A “socio-economic” model of Universities (cnt.)
• Model & background knowledge:
  sad(X) if overworked(X), poor(X)
  overworked(oliver)   overworked(alex)   overworked(krycia)
  lecturer(alex)   lecturer(krycia)   student(oliver)
  academic(alex), …
• Observations = {sad(alex), sad(krycia), not sad(oliver)}
• Abductive Explanation = {poor(alex), poor(krycia), not poor(oliver)}
  (poor/1 is the abducible predicate)
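This slide's example can be sketched as a naive generate-and-test search over poor/1 assumptions. The sketch is my own illustration, not ProLogICA code; the function names are hypothetical.

```python
from itertools import chain, combinations

# Background knowledge of the slide; sad(X) if overworked(X), poor(X).
PEOPLE = ["oliver", "alex", "krycia"]
OVERWORKED = {"oliver", "alex", "krycia"}

def sad(person, poor):
    """Apply the rule sad(X) if overworked(X), poor(X) under hypotheses poor."""
    return person in OVERWORKED and person in poor

def explanations(observed_sad, observed_not_sad):
    """Enumerate sets of poor/1 assumptions that account for the observations."""
    for poor in (set(c) for c in chain.from_iterable(
            combinations(PEOPLE, r) for r in range(len(PEOPLE) + 1))):
        if all(sad(p, poor) for p in observed_sad) and \
           not any(sad(p, poor) for p in observed_not_sad):
            yield poor

# Observations: sad(alex), sad(krycia), not sad(oliver).
best = min(explanations({"alex", "krycia"}, {"oliver"}), key=len)
print(sorted(best))  # ['alex', 'krycia'] : poor(alex), poor(krycia), not poor(oliver)
```

The smallest explanation assumes poor(alex) and poor(krycia) and, since oliver is overworked, must avoid assuming poor(oliver).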
Abduction for AI: applications (1)
• Planning:
  • observations are goals
  • explanations are plans
• Diagnosis:
  • observations are symptoms (e.g. toothache)
  • explanations are diseases/faults (e.g. cavity)
• Design:
  • observations are design goals (e.g. a PC)
  • explanations are designs (e.g. processor + operating system)
Abduction for AI: applications (2)
• Vision:
  • observations are partial descriptions
  • explanations are objects
• Natural language understanding:
  • observations are ambiguous sentences (e.g. “The Milan office called…”)
  • explanations are interpretations (e.g. “John, working for the office located in … road in Milan, called…”)
Course Breakdown
• Introduction
• Abduction – General Introduction
• Modelling Problems for Abduction and DPS
• Computational Logic & PROLOG – Background
• Abductive Logic Programming – Semantics
• Abductive Logic Programming – Computation
• ALP for Declarative Problem Solving – Diagnosis
• Projects – Discussion
(continued…)
Abduction in Logic Programming
• A Declarative Problem Solving Framework.
• Computing Abduction!
Abductive Logic Programming (ALP)
• ALP as a:
  • Declarative Problem Solving Framework.
  • Framework for Computing Abduction!
• Aim: to present how we represent knowledge in ALP and how abductive reasoning helps to solve problems by completing this knowledge.

Useful reading: “Abduction in Logic Programming” and the “ProLogICA” paper. Another useful reference is “The role of abduction in logic programming”.
Abductive Logic Programming (ALP)
An ALP theory (or model) of a domain is a triple <T, H, IC>:
  T (theory presentation) is a normal logic program
  H (candidate hypotheses) is a set of undefined predicates
  IC (integrity constraints) is a set of (FOL) sentences
O (observation or goal) is a conjunction of literals.

Semantics: Given <T, H, IC>, E is an explanation of O iff
  1) T ∪ E entails O
  2) T ∪ E satisfies IC
  3) E ⊆ Ground(H)
where “entails” and “satisfies” are model-theoretic notions, e.g. truth in a canonical model of T ∪ E.
Integrity Constraints in ALP
T:  wobbly-wheel if broken-spokes
    wobbly-wheel if flat-tyre
    flat-tyre if leaky-valve
    flat-tyre if punctured-tube
H:  broken-spokes, leaky-valve, punctured-tube
IC: if flat-tyre & smooth-ride then false

O: {wobbly-wheel}
Es: {broken-spokes}, {punctured-tube}, {broken-spokes, punctured-tube}, {leaky-valve}, …

If we also know in T or IC that smooth-ride holds, then {broken-spokes} is the only explanation.
Is this the same as O’: {wobbly-wheel} ∪ {smooth-ride}?
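The bicycle example can be sketched by enumerating candidate hypothesis sets and filtering with the integrity constraint. This is an illustrative propositional encoding of the slide's theory, not ProLogICA code.

```python
from itertools import chain, combinations

# The bicycle theory of the slide, as ground propositional (head, body) rules.
RULES = [("wobbly_wheel", {"broken_spokes"}),
         ("wobbly_wheel", {"flat_tyre"}),
         ("flat_tyre", {"leaky_valve"}),
         ("flat_tyre", {"punctured_tube"})]
ABDUCIBLES = ["broken_spokes", "leaky_valve", "punctured_tube"]

def consequences(facts):
    """Close a set of facts under RULES (deduction)."""
    m = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in RULES:
            if body <= m and head not in m:
                m.add(head)
                changed = True
    return m

def explanations(obs, known=frozenset()):
    """Candidate E ⊆ abducibles with T ∪ E |= obs, filtered by the IC."""
    for e in chain.from_iterable(combinations(ABDUCIBLES, r)
                                 for r in range(len(ABDUCIBLES) + 1)):
        m = consequences(set(e) | set(known))
        # IC: if flat-tyre & smooth-ride then false
        if obs <= m and not {"flat_tyre", "smooth_ride"} <= m:
            yield set(e)

print(len(list(explanations({"wobbly_wheel"}))))               # 7 explanations
print(list(explanations({"wobbly_wheel"}, {"smooth_ride"})))   # [{'broken_spokes'}]
```

Without smooth-ride, every non-empty subset of the abducibles explains the wobbly wheel; once smooth-ride is known, the IC eliminates every explanation involving a flat tyre.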
Integrity Constraints in ALP
Integrity constraints are domain-specific properties that:
  • explanations are required to satisfy;
  • act as validity requirements on the explanations/hypotheses;
  • express partial information on the incomplete abducible predicates.
(cf. integrity constraints in DBs)
Example:
T:  sibling(X,Y) if parent(Z,X) and parent(Z,Y)
    parent(X,Y) if mother(X,Y)
    parent(X,Y) if father(X,Y)
    mother(mary,ann)
    father(john,ann)
H = {mother(t,s), father(t,s) | t,s ground terms}
O: sibling(ann,bob)
E = {mother(john,bob)} violates the property that
    if mother(X,Y) and father(X,Z) then false
Example Domain Theories of ALP in ProLogICA
• Cars Example
• Medical Diagnosis Example
• Circuits Example
• Other examples
Integrity Constraints in ALP
Compare with integrity constraints in DBs.
Example database:
  DB: {father(john,mary)}
  IC: father(X,Y) -> male(X)
Is the integrity constraint satisfied? DB ∪ IC is satisfiable, but …
Can we update the DB with {father(bob,john)}?
Integrity constraint satisfaction
A given knowledge base KB (e.g. T ∪ E) satisfies a set of integrity constraints (sentences) IC if:
  • consistency view: KB ∪ IC is “consistent”
  • theoremhood view: KB “entails” IC
  • epistemic/meta-level view: IC is “known/believed” in KB

e.g. IC: if employee(X) then ∃Y insurance-no(X,Y)
  KB = {employee(mary)} satisfies IC under the consistency view, but not under the theoremhood and epistemic views.
  KB = {} satisfies IC under the consistency and epistemic views, but not under the theoremhood view.
Semantics of definite logic programs (Recap)
The meaning (semantics) of a definite logic program is given by its least Herbrand model (LHM for short).

e.g. for Psimple:
  p if q
  s if q and r
  q
the LHM is {q, p}.

Given a definite logic program P, LHM(P) = least fixed point of TP = TP↑ω, where the operator TP is defined by:
  TP(I) = {A | A is an atom, (A if B) ∈ ground(P) and B ⊆ I}

e.g. for Psimple:
  TP({}) = {q};  TP({q}) = {q,p};  TP({q,p}) = {q,p} = LHM
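The TP iteration above can be run directly on a ground program. A minimal sketch, with the slide's Psimple encoded as (head, body) pairs (the fact q has an empty body); the function names are my own.

```python
# Psimple of the slide: p if q; s if q and r; q.
PSIMPLE = [("p", {"q"}), ("s", {"q", "r"}), ("q", set())]

def t_p(program, interp):
    """T_P(I) = {A | (A if B) ∈ ground(P) and B ⊆ I}."""
    return {head for head, body in program if body <= interp}

def lhm(program):
    """Least Herbrand model: iterate T_P from {} up to its least fixed point."""
    interp = set()
    while True:
        nxt = t_p(program, interp)
        if nxt == interp:
            return interp
        interp = nxt

print(sorted(t_p(PSIMPLE, set())))  # ['q']
print(sorted(lhm(PSIMPLE)))         # ['p', 'q']
```

The iteration reproduces the sequence on the slide: {} → {q} → {q,p} → {q,p}, so the LHM is {q, p} and s is never derived because r is not in the program.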
Semantics of Integrity Constraints
Example: Psimple:
  p if q
  s if q and r
H: {r, q}
IC: if r and p then false

There are 4 possible sets of hypotheses that could form explanations:
  E1 = {}    E2 = {r}    E3 = {q}    E4 = {r,q}
Which ones satisfy the IC? The LHM of P ∪ E in each case is:
  M1 = {} √    M2 = {r} √    M3 = {q,p} √    M4 = {r,q,s,p} ✗
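The four cases can be checked mechanically: compute the LHM of P ∪ E by forward chaining and test the IC in it. An illustrative sketch with hypothetical function names.

```python
# Psimple of the slide: p if q; s if q and r.
PSIMPLE = [("p", {"q"}), ("s", {"q", "r"})]

def lhm(program, facts):
    """Least Herbrand model of program ∪ facts, by forward chaining."""
    m = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in program:
            if body <= m and head not in m:
                m.add(head)
                changed = True
    return m

def satisfies_ic(model):
    """IC: if r and p then false — holds unless both r and p are in the model."""
    return not {"r", "p"} <= model

for e in [set(), {"r"}, {"q"}, {"r", "q"}]:
    m = lhm(PSIMPLE, e)
    print(sorted(e), sorted(m), satisfies_ic(m))
# []         []                   True
# ['r']      ['r']                True
# ['q']      ['p', 'q']           True
# ['q', 'r'] ['p', 'q', 'r', 's'] False
```

Only E4 = {r,q} is rejected: its model contains both r and p, violating the constraint, exactly as on the slide.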
Abductive Logic Programming (ALP)
Given T = <P,H,IC> where P is a definite logic program, E is an explanation of O iff:
  1) O is true in M;
  2) IC are true in M (epistemic view);
  3) E ⊆ G(H);
where M is the least Herbrand model of P ∪ E.

When P is a normal logic program, i.e. one containing NAF, then M is any stable model of P ∪ E in the above definition.
Abductive Logic Programming (ALP)
Given T = <P,H,IC> where P is a normal logic program, M is a Generalized Model or Extension of T iff there exists E ⊆ G(H) such that:
  1) M is a stable model of P ∪ E;
  2) IC are classically true in M.

Given T = <P,H,IC>, a goal/query G is:
  • credulously entailed by T iff there exists an extension M of T such that M |= G;
  • sceptically entailed by T iff for every extension M of T, M |= G.
Definitional and Assertional Knowledge
Given T = <P,H,IC>:
  • P is definitional knowledge: it defines the (observable) predicates.
  • IC is assertional knowledge: weaker knowledge of properties of the domain.
IC are not used to generate explanations but to validate them, more specifically to filter out invalid explanations.
For example, the IC “father(X,Y) -> male(X)” is not to be used to explain why someone is male!
Note, though, that ICs can require the generation of extra hypotheses, e.g. if we assume father(bob,john), the above IC requires that we also assume male(bob).
Additional requirements on E
In addition, E might be required to be:
  • basic, e.g. for O = p and T = {p if q, q if r, p if s}:
      E = {r} is basic; E = {q} and E = {p} are not.
  • minimal, e.g. E = {r} is minimal; E = {r,q} is not.
  • optimal, e.g. with utility(r) = 10, utility(s) = 100:
      E = {s} is optimal; E = {r} is not.
Basic and minimal explanations
• Definition: E is basic for O iff there exists no non-trivial* explanation E’ for O such that E’ is an explanation for E.
• (*) E is a trivial explanation of O iff E entails O, e.g. (any superset of) any E is a trivial explanation of itself. Note: H can be chosen so that every E is basic.
• Definition: E is minimal for O iff there exists no different E’ such that E’ ⊆ E and E’ is an explanation for O (i.e. no proper subset of E is an explanation for O).
• Note: some computational techniques guarantee only “local” minimality of explanations.
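The minimality check of this slide is a simple subset test over a pool of candidate explanations. A sketch under my own naming; the pool `ES` is a hypothetical example.

```python
def is_minimal(e, explanations):
    """E is minimal iff no other explanation is a proper subset of E."""
    return not any(other < e for other in explanations)

# Hypothetical pool of explanations for some observation:
ES = [{"r"}, {"r", "q"}, {"s"}]
print([sorted(e) for e in ES if is_minimal(e, ES)])  # [['r'], ['s']]
```

Here {r,q} is discarded because its proper subset {r} already explains the observation, while {r} and {s} are both minimal (minimality does not imply uniqueness).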
Assimilation/Rationalization of Observations in ALP
T:  sibling(X,Y) if parent(Z,X) and parent(Z,Y)
    parent(X,Y) if mother(X,Y)
    parent(X,Y) if father(X,Y)
    mother(mary,ann)
    father(john,ann)
H = {mother(t,s), father(t,s) | t,s ground terms}
IC: if mother(X,Y) and father(X,Z) then false

O: sibling(ann,bob)
E = {mother(john,bob)} is not an explanation, whereas E = {father(john,bob)} and E = {mother(mary,bob)} are.

O’: {sibling(ann,bob), sibling(john,bob)}
E = {father(john,bob)} is now not an explanation when we also consider:
IC: if sibling(X,Y) and father(X,Y) then false (plus the X <-> Y variant)
E’ = {mother(mary,bob), father(k,john), father(k,bob)}:
John’s father also married Mary (currently John’s wife, with a daughter Ann) to give birth to Bob!
Example Domain Theories of ALP in ProLogICA
• Cars Example
• Medical Diagnosis Example
• Circuits Example
• Other examples
Project Discussion (1)
Represent as an ALP theory <P,H,IC> a domain of your choice. Give in this:
  • An informal description of the domain to be modelled
  • The language and ontology needed
  • The logical representation of P and IC in ALP
  • Sample observations to be explained, together with possible resulting explanations

Deadlines:
  • Domain of your choice: approved on Monday the 15th at 16:00
  • Informal description (one page): hand in on Tuesday the 16th
  • Representation in ALP: Tuesday & Wednesday lectures
  • Project presentation: Wednesday the 17th at 18:00
Abduction in Logic (FOL): Theorist
Given:
  T (theory presentation), a FOL theory
  H (candidate hypotheses), a set of ground FOL sentences
  O (observation), a FOL sentence
E (explanation) is such that:
  1) T ∪ E entails O
  2) T ∪ E is consistent (equivalently, T ∪ E does not entail false)
  3) E ⊆ H
Note: “entailment” is FOL deductive entailment.
Note: H is a set of ground atoms.