Updates plus Preferences

Luís Moniz Pereira

José Júlio Alferes

Centro de Inteligência Artificial, Universidade Nova de Lisboa

Portugal

JELIA’00, Málaga, Spain

Motivation

To combine into one uniform framework:

results on LP updates

results on LP preferences

Join these complementary results, in order to:

enforce preferences on the result of updates

allow preferences to be updated

Outline

Motivation

Concepts: Updates versus Revisions; Preferences versus Updates

Formalizations: Updating; Preferring; Combining both

Example

Conclusions and Ongoing Work

Concepts:

Updates versus Revisions

LP and World Knowledge

Nowadays, LP allows the non-monotonic addition of new knowledge, as rules or facts, about either a static world or a dynamic world.

Until recent work on updates, LP semantics did not envisage the evolution of knowledge where new rules contradict and override old ones. Instead, LPs represented either revisable knowledge about a static world or monotonically increasing knowledge about a dynamic world.

Knowledge Evolution

In real settings, knowledge evolves by:

non-monotonically adding information – revising

changing to accompany the changes in the world itself – updating

Example: I now know I have a flight booked for London, either to Heathrow or to Gatwick. Next:

if I learn it is not for Heathrow (revision), I conclude it is for Gatwick;

if I learn flights for Heathrow were canceled (update), then either I have a flight for Gatwick, or no flight at all.

Concepts:

Preferences versus Updates

Preferring

Preferences are employed with incomplete knowledge, as modeled by default rules, so that several models are possible.

Preferences act by choosing just some of the possible models.

They do this via a partial order among rules, so that a rule always fires if it is only defeated by less preferred ones, because those are prevented from firing.

Updating

Updates model dynamically evolving worlds.

Besides facts, these can contain rules: e.g. legal or physical laws, or actions.

Updates differ from revisions, which are about an incomplete static world model.

Knowledge, whether complete or incomplete, can be updated to reflect world change.

New knowledge may contradict and override older knowledge. New models may also be created by removing such contradictions.

Preferences and Updates Combined

Despite their differences, preferences and updates display similarities.

Both can be seen as wiping out rules:

in preferences, the less preferred rules, so as to remove undesired models;

in updates, the older rules, in particular so as to obtain models in otherwise inconsistent theories.

This view helps put them together into a single uniform framework.

In this framework, preferences can be updated.

Formalization:

Updating

Dynamic LPs

DLP is a framework for LP updates. It provides meaning to sequences of LPs:

P1 ⊕ P2 ⊕ … ⊕ Pn

Intuitively, the meaning of such a sequence results from updating P1 with the rules from P2, then updating the result with the rules from P3, and so on up to the rules from Pn.

Inertia is applied to rules rather than to literals.

Updates of LPs by LPs

To represent negative information, DLP allows for not in rule heads.

P1: sleep ← not tv_on.  watch ← tv_on.  tv_on.

M1 = {tv_on, watch}

P2: not tv_on ← p_failure.  p_failure.

M2 = {sleep, p_failure}

Goals are evaluated w.r.t. the last state. The 1st rule of P1 is inherited.

P3: not p_failure.

M3 = {tv_on, watch}

Generalized LPs

For deletions, default negation is needed in heads.

The semantics is given by a generalization of the stable model semantics.

Definitions (where literals not A are considered as new atoms):

Default(P,M) = {not A : ∄ r ∈ P, head(r) = A ∧ M ⊨ body(r)}

M is a stable model of P iff:

M = least( P ∪ Default(P,M) )
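
To make this definition concrete, here is a minimal Python sketch (ours, not the paper's; the representation and helper names such as least, default_set and is_stable are illustrative assumptions) that checks the stable-model condition for a propositional generalized program, treating not A literals as fresh atoms:

```python
# Minimal sketch of the stable-model condition above (illustrative only).
# Literals are strings: "a" is an atom, "not a" its default negation;
# a rule is a pair (head_literal, tuple_of_body_literals).

def holds(lit, model):
    """Truth of a literal in a 2-valued interpretation M (a set of atoms)."""
    return (lit[4:] not in model) if lit.startswith("not ") else (lit in model)

def least(rules, facts):
    """Least set of literals closed under the rules ('not A' acts as a new atom)."""
    closure, changed = set(facts), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in closure and all(b in closure for b in body):
                closure.add(head)
                changed = True
    return closure

def default_set(rules, model, atoms):
    """Default(P,M): not A for every atom A with no rule with head A and body true in M."""
    return {"not " + a for a in atoms
            if not any(h == a and all(holds(b, model) for b in body)
                       for h, body in rules)}

def is_stable(rules, model, atoms):
    """M = least(P U Default(P,M)), reading M on the right-hand side as
    M plus the default negations of all atoms outside M."""
    target = set(model) | {"not " + a for a in atoms if a not in model}
    return least(rules, default_set(rules, model, atoms)) == target

# Program P1 of the previous slide: M1 = {tv_on, watch} is its stable model.
P1 = [("sleep", ("not tv_on",)), ("watch", ("tv_on",)), ("tv_on", ())]
print(is_stable(P1, {"tv_on", "watch"}, {"sleep", "watch", "tv_on"}))  # True
```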

Rejection of Rules

Older rules conflicting with more recent ones should be rejected, i.e. their inertia is stopped.

Given a model M and a state s, reject all rules in previous states for which a later rule exists having complementary head and true body:

Definition: Reject(s,M) = {r ∈ Pi : ∃ r’ ∈ Pj, head(r) = not head(r’) ∧ i < j ≤ s ∧ M ⊨ body(r’)}

Semantics of Updates

The models of a DLP at a state s are obtained by rejecting older rules, if in conflict, and by adding the default literals:

Definition (where P is the union of all programs in the DLP):

M is a stable model of a DLP at state s iff:

M = least( [P − Reject(s,M)] ∪ Default(P,M) )

A translation to a single GLP has been defined

An implementation exists
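
Continuing the sketch of the previous slides (same rule representation, reusing its helpers; again ours, for illustration only), rejection and the update semantics at a state s could be rendered as follows, where a DLP is simply a list of programs [P1, P2, ..., Pn]:

```python
# Continuation of the earlier sketch: rejection and the update semantics.

def neg(lit):
    """Complement of a literal ('a' <-> 'not a')."""
    return lit[4:] if lit.startswith("not ") else "not " + lit

def reject(dlp, s, model):
    """Reject(s,M): rules of an earlier Pi for which some later Pj (j <= s)
    contains a rule with complementary head and a body true in M."""
    return {r for i, Pi in enumerate(dlp[:s]) for r in Pi
            if any(h == neg(r[0]) and all(holds(b, model) for b in body)
                   for Pj in dlp[i + 1:s] for h, body in Pj)}

def is_dlp_stable(dlp, s, model, atoms):
    """M = least([P - Reject(s,M)] U Default(P,M)), with P = P1 U ... U Ps."""
    union = [r for Pi in dlp[:s] for r in Pi]
    kept = [r for r in union if r not in reject(dlp, s, model)]
    target = set(model) | {"not " + a for a in atoms if a not in model}
    return least(kept, default_set(union, model, atoms)) == target
```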

DLP example

P1: sleep ← not tv_on.  watch ← tv_on.  tv_on.

P2: not tv_on ← p_failure.  p_failure.

P3: not p_failure.

M2 = {pf, s} is a SM at state 2:

Reject(2,M2) = {tv_on}

Default(P1 ∪ P2, M2) = {not tv, not w}

least( [P1 ∪ P2 − {tv_on}] ∪ {not tv, not w} ) = {pf, s, not tv, not w}

M3 = {w, tv} is a SM at state 3:

Reject(3,M3) = {p_failure}

Default(P, M3) = {not s, not pf}

least( [P − {p_failure}] ∪ {not s, not pf} ) = {tv, w, not s, not pf}
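
Assuming the helper functions sketched on the previous slides, this example can be replayed as follows (atom names spelled out in full; the expected outputs match the models above):

```python
P1 = [("sleep", ("not tv_on",)), ("watch", ("tv_on",)), ("tv_on", ())]
P2 = [("not tv_on", ("p_failure",)), ("p_failure", ())]
P3 = [("not p_failure", ())]
dlp = [P1, P2, P3]
atoms = {"sleep", "watch", "tv_on", "p_failure"}

print(is_dlp_stable(dlp, 2, {"p_failure", "sleep"}, atoms))  # True:  M2
print(is_dlp_stable(dlp, 3, {"tv_on", "watch"}, atoms))      # True:  M3
print(is_dlp_stable(dlp, 3, {"p_failure", "sleep"}, atoms))  # False: p_failure is rejected at state 3
```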

Formalization:

Preferring

Prioritized generalized LPs

A generalized program P plus a strict partial order < over the rules of P.

r1 < r2 means “r1 is preferred to r2”

Given the priorities among the rules, which stable models should be preferred?

Brewka and Eiter defined preferred answer sets. The definition is based on two general principles:

one capturing minimality, another capturing relevance.

Principles for preferences

Principle I (minimality): If M1 is a SM generated by the rules R ∪ {r1}, M2 by the rules R ∪ {r2}, and r1 < r2, then M2 cannot be preferred.

Principle II (relevance): Adding a rule not applicable (i.e. with a false body) in a preferred stable model M cannot render M unpreferred.

Preference by rule removal

Since r1 < r2, and r1’s head defeats r2’s body (i.e. r1’s head is a, and r2’s body contains not a), r2 should be removed.

The only stable model of P − {r2} is SM1.

r1: a ← not b.  r2: b ← not a.  r1 < r2

SM1 = {a} (generated by {r1})

SM2 = {b} (generated by {r2})

SM1 is preferred (by principle I)

Remove less preferred rules whose body is defeated by the head of a more preferred one.

Preference by rule removal

But, with the reasoning before, r3 is removed (defeated by the head of r1).

Why shouldn’t r3 be removed?

r1’s body is defeated in whichever model.

In M2, b is true because of r4 and not because of r1.

r1 is unsupported (true head and false body) in M2.

r1: b ← not c.  r2: c ← not d.  r3: a ← not b.  r4: b ← not a.  r1 < r2 < r3 < r4

SM1 = {a,c} (generated by {r2,r3})

SM2 = {b,c} (generated by {r2,r4})

SM2 shouldn’t be preferred (by principle I)

Preference by rule removal

Unsupported rules cannot be used to defeat other rules.

r1: b ← not c.  r2: c ← not d.  r3: a ← not b.  r4: b ← not a.  r1 < r2 < r3 < r4

SM1 = {a,c} (generated by {r2,r3})

SM2 = {b,c} (generated by {r2,r4})

SM2 shouldn’t be preferred (by principle I)

Leaving in unsupported rules doesn’t influence the least model (their body is false).

Given a SM, remove unsupported rules

Preference by rule removal

Remove less preferred rules whose head defeats the true body of a more preferred one.

r1: a ← not b.  r2: b ← not c.  r1 < r2

SM = {b} (generated by {r2})

SM is not preferred (by principle II). Consider, e.g., the addition of c ← a.

Preferred Stable Models

Here we only consider rules without atoms in bodies. See the paper for the general case.

Def. (Unsupported and unpreferred rules):

Unsup(P,M) = {r ∈ P : M ⊨ head(r) ∧ M ⊭ body(r)}

Unpref(P,M) is the least set including Unsup(P,M) and every rule r such that:

∃ r’ ∈ P − Unpref(P,M): r’ < r ∧ [ not head(r’) ∈ body(r) ∨ (not head(r) ∈ body(r’) ∧ M ⊨ body(r)) ]

Preferred SMs (cont)

Definition: M is a preferred SM of (P,<) iff: M = least( [P − Unpref(P,M)] ∪ Default(P,M) )

Proposition: If M is a preferred SM of (P,<) then M is a SM of P.

Theorem: Preferred stable models and BE’s preferred answer sets coincide on normal programs.
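
Under the same rule representation as in the earlier sketches (and reusing their helpers), the definitions above might be rendered as follows; prefs is the strict partial order given explicitly as a set of pairs (r’, r) meaning r’ < r, and bodies are assumed to contain only default literals, as on this slide:

```python
def unsup(rules, model):
    """Unsup(P,M): rules with a true head but a false body in M."""
    return {r for r in rules
            if holds(r[0], model) and not all(holds(b, model) for b in r[1])}

def unpref(rules, model, prefs):
    """Unpref(P,M): least set containing Unsup(P,M) and every rule defeated,
    as in the definition above, by a more preferred rule that is not itself
    unpreferred (computed by recursion along the strict partial order)."""
    unsupported = unsup(rules, model)
    memo = {}
    def is_unpref(r):
        if r not in memo:
            memo[r] = r in unsupported or any(
                not is_unpref(r2) and (
                    neg(r2[0]) in r[1]                        # not head(r') in body(r)
                    or (neg(r[0]) in r2[1]                    # not head(r) in body(r')
                        and all(holds(b, model) for b in r[1])))  # and M |= body(r)
                for r2 in rules if (r2, r) in prefs)
        return memo[r]
    return {r for r in rules if is_unpref(r)}

def is_preferred_stable(rules, model, atoms, prefs):
    """M = least([P - Unpref(P,M)] U Default(P,M))."""
    kept = [r for r in rules if r not in unpref(rules, model, prefs)]
    target = set(model) | {"not " + a for a in atoms if a not in model}
    return least(kept, default_set(rules, model, atoms)) == target

# The two-rule example of the earlier slide: SM1 = {a} is preferred, SM2 = {b} is not.
r1, r2 = ("a", ("not b",)), ("b", ("not a",))
prefs = {(r1, r2)}                                               # r1 < r2
print(is_preferred_stable([r1, r2], {"a"}, {"a", "b"}, prefs))   # True
print(is_preferred_stable([r1, r2], {"b"}, {"a", "b"}, prefs))   # False
```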

Formalization:

Combination of preferences and updates

Updating LPs with preferences

With preferences and updates viewed as rejection of rules, it’s not difficult to combine both.

We are given: a DLP P1 ⊕ P2 ⊕ … ⊕ Pn

a strict partial order < over the rules of all the Pis

To allow updates of the preference relation, the order cannot be fixed.

It must be described by a language for updates: another DLP, for defining <.

Dynamic Prioritized LPs

Sequences of pairs: (P1, R1), (P2, R2), …, (Pn, Rn)

The alphabet of the Pis doesn’t include <

The alphabet of the Ris includes <

The set of constants in the Ris includes all the rules in the Pis

Semantics of prioritized DLPs

Semantics is given by a fixpoint definition.

M is a SM at state s if it is the least model of the program obtained by:

first removing all rules rejected by updates;

only then removing all unpreferred rules, taking the relation < in M.

Moreover, the relation < in M must be a strict partial order.
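
A rough sketch of this condition, reusing the helpers from the earlier sketches. It takes the priority pairs holding in M as an explicit argument (prefs_in_model) rather than deriving them from the Ri programs, and applies Default to the whole union, so it is only an illustrative reading of the definition, not the paper's construction:

```python
def is_strict_partial_order(prefs):
    """Irreflexive and transitive (asymmetry then follows)."""
    if any(a == b for a, b in prefs):
        return False
    return all((a, c) in prefs
               for a, b in prefs for b2, c in prefs if b == b2)

def is_prioritized_dlp_stable(dlp, s, model, atoms, prefs_in_model):
    """M is a SM at state s: first drop the rules rejected by the updates,
    then drop the rules unpreferred under the relation < holding in M,
    and require that relation to be a strict partial order."""
    union = [r for Pi in dlp[:s] for r in Pi]
    not_rejected = [r for r in union if r not in reject(dlp, s, model)]
    kept = [r for r in not_rejected
            if r not in unpref(not_rejected, model, prefs_in_model)]
    target = set(model) | {"not " + a for a in atoms if a not in model}
    return (is_strict_partial_order(prefs_in_model)
            and least(kept, default_set(union, model, atoms)) == target)
```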

Example

You like fast cars, and your budget doesn’t allow expensive ones. Not buying expensive cars has preference over buying fast ones.

Moreover, you know:

P1: facts plus

r1: not buy(X) ← avoid(X).

r2: avoid(X) ← not buy(X), expensive(X).

r3: buy(X) ← not avoid(X), fast(X).

r4: avoid(Y) ← buy(X), fast(X), X ≠ Y.

safe(a). fast(b). expensive(b). safe(b). fast(c).

R1: r2 < r3.  r3 < r4.

The only preferred SM includes {buy(c), avoid(a), avoid(b)}

Example (cont)

Your significant other insists that you should buy a safe car:

P2:

r5: buy(X) ← not avoid(X), safe(X).

r6: avoid(Y) ← buy(X), safe(X), X ≠ Y.

R2: r5 < r3.  r5 < r4.  r6 < r4.  r2 < r5.  r2 < r6.

The only preferred SM at state 2 includes {buy(a), avoid(b), avoid(c)}

Example (cont)

Car a is out of stock:

P3: r7: not buy(a).    R3: {}

At state 3, r5 of state 2 is rejected by the newer rule r7, and the only preferred SM now includes:

{buy(c),avoid(a),avoid(b)}

Conclusions and ongoing work

Conclusions

We have motivated the need for coupling updates and preferences

And met it within the LP paradigm, via a fixpoint semantics that also allows preferences to be updated

This approach is more general than other ones

Ongoing work

A transformational semantics of the framework into normal LPs, generalizing the one for updates alone

Automatically ensuring the strictness of < after an update

Applications to e-commerce B2B contracts, legal reasoning, security policy, software composition, and rational agents