
Proceedings of the Third Workshop on

Bridging the Gap between Human and Automated

Reasoning

Is Logic and Automated Reasoning a Foundation for Human Reasoning?

Bridging-2017

a CogSci-17 workshop

supported by IFIP

London, UK, July 26th, 2017

Edited by Ulrich Furbach and Claudia Schon


Preface

Reasoning is a core ability in human cognition. Its power lies in the ability to theorize about the environment, to make implicit knowledge explicit, to generalize given knowledge, and to gain new insights. It is a well-researched topic in cognitive psychology and cognitive science, and over the past decade impressive results have been achieved. Early researchers often used propositional logic as a normative framework, with its well-known deficiencies. Central results like findings from the Wason selection task or the suppression task inspired a shift from propositional logic and the assumption of monotonicity in human reasoning towards other reasoning approaches. This includes, but is not limited to, models using probabilistic approaches, mental models, or non-monotonic logics.

Automated deduction, on the other hand, mainly focuses on automated proof search in logical calculi, and indeed there has been tremendous success during the last decades. Recently, a coupling of the areas of cognitive science and automated reasoning has been addressed in several approaches. For example, there is increasing interest in modeling human reasoning within automated reasoning systems, including modeling with answer set programming, deontic logic, or abductive logic programming. There are also various approaches within AI research for common sense reasoning.

The goal of this workshop is to bring together leading researchers from artificial intelligence, automated deduction, computational logics, and the psychology of reasoning who are interested in a computational foundation of human reasoning, both as speakers and as audience members. Its ultimate goal is to share knowledge, discuss open research questions, and inspire new paths.

In total, nine papers were submitted to the workshop. From these, eight have been accepted for presentation. The papers present the following strands: cognitive models; logic programming approaches to model human reasoning; syllogistic reasoning; and computational models for human reasoning.

Finally, the Bridging-17 organizers seize the opportunity to thank the Program Committee members for their most valuable comments on the submissions, the authors for inspiring papers, the audience for their interest in this workshop, the local organizers from the CogSci 2017 team, and the Workshops Chairs.

We hope that in the years to come, Bridging will become a platform for dialogue and interaction for researchers in both cognitive science and automated reasoning and will effectively help to bridge the gap between human and automated reasoning.

Koblenz, November 2017

Ulrich Furbach
Claudia Schon


Table of Contents

Predicting Responses of Individual Reasoners in Syllogistic Reasoning by using Collaborative Filtering . . . . . . . . . . 1
Ilir Kola and Marco Ragni

The Search for Cognitive Models: Standards and Challenges . . . . . . . . . . 10
Marco Ragni and Nicolas Riesterer

The Weak Completion Semantics . . . . . . . . . . 18
Emmanuelle-Anna Dietz Saldanha, Steffen Hölldobler, and Isabelly Louredo Rocha

Informalizing Formal Logic . . . . . . . . . . 31
Antonis Kakas

Agent Morality via Counterfactuals in Logic Programming . . . . . . . . . . 39
Luís Moniz Pereira and Ari Saptawijaya

Towards Cognitive Social Machines for Bridging the Cognitive-Computational Gap in Creativity and Creative Reasoning . . . . . . . . . . 54
Ana-Maria Olteteanu

Principles and Clusters in Human Syllogistic Reasoning . . . . . . . . . . 69
Emmanuelle-Anna Dietz Saldanha, Steffen Hölldobler, and Richard Mörbitz

Satisfiability for First-order Logic as a Non-Modal Deontic Logic . . . . . . . . . . 84
Robert Kowalski


Program Committee

Emmanuelle-Anna Dietz Saldanha, University of Dresden
Ulrich Furbach, University of Koblenz
Steffen Hölldobler, University of Dresden
Antonis C. Kakas, University of Cyprus, Cyprus
Sangeet Khemlani, Naval Research Lab, USA
Robert A. Kowalski, Imperial College London, GB
Luís Moniz Pereira, Universidade Nova de Lisboa, Portugal
Marco Ragni, University of Freiburg
Claudia Schon, University of Koblenz
Frieder Stolzenburg, Harz University of Applied Sciences
Mariusz Urbański, Adam Mickiewicz University, Poznań, Poland


Predicting responses of individual reasoners in syllogistic reasoning by using collaborative filtering

Ilir Kola1 and Marco Ragni1

1 Cognitive Computation Lab, University of Freiburg, 79110 Freiburg, Germany [email protected] [email protected]

Abstract. A syllogism consists of two premises, each containing one of four quantifiers (All, Some, Some not, None) and two of three terms, yielding 64 reasoning problems in total. The task of the participants is to draw or evaluate a conclusion given the premise information. Most, if not all, cognitive theories for syllogistic reasoning focus on explaining and sometimes predicting the aggregated response pattern of the participants of a whole psychological experiment, while only few theories focus on the level of an individual reasoner who might have a specific mental representation that explains her response pattern. If different reasoners can be grouped by similar answer patterns, then it is even possible to identify cognitive styles that depend on the underlying representation. To test the idea of individual predictions, we start by developing a pair-wise similarity function based on the subjects' answers to the task. For 10% of the subjects, we randomly delete 15% of their answers. By using collaborative filtering techniques, we check whether it is possible to predict the deleted answers of a specific individual solely from the answers given by similar subjects to those specific questions. Results show that the correct answer is predicted in around 70% of the cases and is among the top two predictions in 89% of the cases, which outperforms other theoretical approaches; moreover, the predictions are also accurate for cases where participants deviate from the correct answer. This implies that there are cognitive principles responsible for the patterns. If these principles are identified, then there is no need for complex models, because even simple ones can achieve high accuracy. This supports that individual performance in reasoning tasks can be predicted, leading to a new level of cognitive modeling.

Keywords: computational reasoning, individual differences, syllogisms, collaborative filtering, machine learning

1 Introduction

Reasoning problems have been studied in such diverse disciplines as psychology, philosophy, cognitive science, as well as computer science. From an artificial intelligence perspective, modeling human reasoning is crucial if we want to have artificial agents which help us in everyday life. To be successful at this, it is important to understand that each individual can have a different reasoning pattern.


Sometimes deviations of the individual participants from the norms of classical logic have led to a qualification of such reasoners as rather irrational (e.g., [27]). Another possibility is that there is a so-called bounded rationality [6]. An indicator could be that these "deviators" are inherently consistent in their answers and, even more, that their answers can be predicted. Most previous work has focused on the overall distribution of answers, trying to predict the answer most frequently chosen by subjects. However, as noted by Pachur and colleagues [18], in the presence of individual differences, tests of group mean differences can be highly misleading. For this reason, we focus on individual subjects and try to predict the exact answer they would give.

Collaborative filtering, a method employed in recommender systems [23], can show that a single reasoner does not deviate from similar reasoners and that, consequently, her answers can be predicted based on the answers of the similar reasoners.

The rest of this paper is structured as follows: first, we give an introduction to theories on reasoning and individual differences, syllogistic reasoning, and recommender systems. Then, we present a model which uses collaborative filtering to predict answers in the syllogistic reasoning task and compare it to other models or theoretical predictions. Lastly, we draw conclusions and suggest further steps for research.

2 Background

2.1 Theories on reasoning and individual differences in reasoning

Scientists have tried to understand human reasoning for a long time. To date, there are at least five prominent theories on how people reason. These theories are based on heuristics [3,4,6,14], mental logic [24,25], pragmatic reasoning schemas [2], mental models [8], and probability theory [15]. Oaksford and Chater [16] offer a general review of these theories.

The need for all these theories is caused by the fact that people differ in how they answer reasoning tasks. Theories usually aim at explaining general answering patterns, but if we focus on individual answers, these differences become even more pronounced. They can be caused by intellectual abilities, memory capacity, or the strategies being used, among others [17, 26].

2.2 Syllogistic reasoning

In a syllogistic task, subjects are presented with two premises, and they have to evaluate what follows or whether a third given conclusion necessarily follows. Consider the following example [12]:

Some Artists are Bakers, All Bakers are Chemists. Therefore, some Artists are Chemists.

Each premise can have one of four possible moods, two of which are affirmative (Some, All) and two negative (Some not, No). The premises have two terms each, but overall only three terms are used. This is because the two premises always share a common term (in this case bakers), and the conclusion asks about the remaining two terms (artists and chemists). Terms can be arranged in one of four figures, based on their configuration:

Figure 1    Figure 2    Figure 3    Figure 4
A-B         B-A         A-B         B-A
B-C         C-B         C-B         B-C

Since each premise can have four moods and there are four possible figures, there are 64 distinct pairs of premises. 27 of them have a conclusion which is valid in classical logic, whereas for the remaining 37 there is no valid conclusion. The conclusion (a third statement) again allows four possible moods and two figures (A-C or C-A), so overall there are 512 syllogisms that can be evaluated.
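To make the size of this problem space concrete, the following short Python sketch (ours, for illustration only; the list names are not from the paper) enumerates the 64 premise pairs and the nine response options used in the experiment described later:

    # Illustrative sketch (not from the paper): enumerate the 64 syllogistic
    # premise pairs and the nine response options offered per task.
    from itertools import product

    MOODS = ["All", "Some", "No", "Some not"]
    FIGURES = {1: ("A-B", "B-C"), 2: ("B-A", "C-B"),
               3: ("A-B", "C-B"), 4: ("B-A", "B-C")}

    premise_pairs = [(m1, m2, figure)
                     for m1, m2 in product(MOODS, repeat=2)
                     for figure in FIGURES]          # 4 x 4 x 4 = 64 tasks

    # Four conclusion moods times two conclusion figures (A-C or C-A),
    # plus "No Valid Conclusion" (NVC), give nine answer options per task.
    answer_options = [f"{mood} ({fig})" for mood in MOODS
                      for fig in ("A-C", "C-A")] + ["NVC"]

    assert len(premise_pairs) == 64 and len(answer_options) == 9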

Studies using syllogisms with different forms of content, from abstract to realistic ones, have shown that errors are not random but occur systematically according to two main factors: figure and mood (see [5]). Syllogistic reasoning has caught the attention of many researchers. Khemlani and Johnson-Laird [12] provide a review of seven theories of syllogistic reasoning. We will describe the ones which perform best in the meta-analysis; they will later be used as baselines for the performance of our model.

The first theory, illicit conversions [1,22], is based on a misinterpretation of the quantifiers: reasoners interpret All B are A when given All A are B, and Some B are not A when told Some A are not B. Both conversions are logically invalid and lead to errors such as inferring All C are A from the premises All A are B and All C are B. In order to predict the answers to syllogisms, this theory uses classical logical conversions and operators as well as the two aforementioned invalid conversions.

The verbal models theory [20] claims that reasoners build verbal mental models from the syllogistic premises and either formulate a conclusion or declare that nothing follows. The model then performs a reencoding of the information based on the assumption that the converses of the quantifiers Some and No are valid. In another version, the model also reencodes invalid conversions. The authors argue that a crucial part of deduction is the linguistic process of encoding and reencoding the information, rather than the search for counterexamples.

Unlike the previous approach, mental models (first formulated for syllogisms in [7]) are inspired by the use of counterexamples. The core idea is that individuals understand that a putative conclusion is false if there is a counterexample to it. The theory states that when faced with a premise, individuals build a mental model of it based on meaning and knowledge. E.g., when given the premise All Artists are Beekeepers, the following model is built:

Artist   Beekeeper
Artist   Beekeeper
. . .


Each row represents the properties of an individual, and the ellipsis denotes individuals which are not artists. This model can be fleshed out to an explicit model which contains information on all potential individuals:

Artist   Beekeeper
Artist   Beekeeper
         Beekeeper

In a nutshell, the theory states that many individuals simply reach a conclusion based on the first, implicit model, which can be wrong (in this case it would give the impression that All Beekeepers are Artists). However, there are individuals who build alternative models in order to find counterexamples, which usually leads to a logically correct answer.

2.3 Collaborative filtering and recommender systems

Recommender systems are software tools used to provide suggestions for items which can be useful to users [23]. One way to implement a recommender system is through collaborative filtering. In a nutshell, collaborative filtering suggests that if Alice likes items 1 and 2, and Bob likes items 1, 2, and 3, then Alice probably also likes item 3. More formally, in collaborative filtering we look for patterns in observed preference behavior and try to predict new preferences based on those patterns. Users' preferences for the items are stored as a matrix in which each row represents a user and each column represents an item. Then, for each user we build a similarity function to determine which users have similar preferences. This means that for each user we have a neighborhood of other, similar users. When a certain item has not been rated by our user, we rely on this neighborhood to estimate how our user would rate that item. If the estimated rating is high enough, we can recommend that item to the user.

Fig. 1. Users’ ratings represented as a matrix

The main challenge in this case would be to select the appropriate similarity function and to determine the adequate size of the neighborhood.


3 Predicting performance in syllogistic reasoning by using collaborative filtering

3.1 Motivation

As aforementioned, people make mistakes when solving reasoning tasks such as syllogisms. When it comes to preference behavior, we have seen that collaborative filtering can achieve very good results in predicting which items to recommend to users. This shows that people are consistent in their preferences. Could it be the case that people are also consistent in the way they perform in reasoning tasks, and can we predict their answers (including errors) in the aforementioned reasoning domains? We will explore this by using collaborative filtering to predict participants' behavior in reasoning tasks.

3.2 The experimental setting

For this model, we will use an unpublished data set from an online experiment conducted at the Cognitive Computation Lab (University of Freiburg). It includes data from 140 subjects who completed all 64 syllogistic tasks. Each subject was presented with two premises and had to choose between nine answer options (the eight mood/figure combinations, plus a ninth option, No Valid Conclusion).

3.3 The model

In our setting, the users are the 140 subjects of the study, and the items are the 64 tasks. We define the similarity function as follows:

sim = n_sameAnswers / N        (1)

where n_sameAnswers is the number of tasks on which both subjects gave the same answer and N is the number of tasks answered by both subjects. As we can see, the similarity is a value between 0 and 1.

We start by randomly selecting 14 subjects for which there exists at least one other subject with a similarity of 0.6 or higher, and then randomly deleting 10 of their answers. These will be the answers which have to be predicted.

The model computes the pairwise similarities between subjects. Whenever there is a missing answer for the current subject, it identifies all subjects in its neighborhood (i.e., subjects with a similarity higher than 0.35) which have answered that task, and performs a "weighted voting" as follows:

    for answer in possible_answers:
        for user in users:
            value[answer] = value[answer] + sim[user] * given[user]

where sim[user] represents the similarity of our subject with the user we are currently considering, and given[user] is a binary attribute showing whether that user gave this answer to the task or not. We perform this weighting inspired by the intuition that answers given by more similar subjects should matter more. Then, the answer with the highest value is the predicted one.
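The procedure described above can be summarized in a short reconstruction sketch (ours, not the authors' implementation; the data layout, with None marking a deleted answer, and the variable names are assumptions):

    # Reconstruction sketch of the prediction step (not the authors' code).
    # `answers` maps each subject id to a list of 64 answers; deleted answers
    # that have to be predicted are stored as None.
    def similarity(a, b):
        # Eq. (1): fraction of identical answers among commonly answered tasks.
        shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        return sum(x == y for x, y in shared) / len(shared) if shared else 0.0

    def predict(answers, subject, task, threshold=0.35, top_k=1):
        votes = {}
        for other, other_answers in answers.items():
            if other == subject or other_answers[task] is None:
                continue
            sim = similarity(answers[subject], other_answers)
            if sim <= threshold:                     # only the neighborhood votes
                continue
            ans = other_answers[task]
            votes[ans] = votes.get(ans, 0.0) + sim   # weighted voting
        ranked = sorted(votes, key=votes.get, reverse=True)
        return ranked[:top_k]                        # top_k = 2 or 3 for Top 2 / Top 3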

3.4 Results

The model is very simple and does not include any learning; its performance is, however, fairly accurate. It is important to notice that the model predicts one out of nine possible options, so a model which is simply guessing would on average be correct in about 11% of the cases. Our model compares the predicted answers to the true ones and reports the percentage of correctly predicted answers.

In order to interpret the result better, it is useful to compare the performance of our model to other models or theoretical predictions. As we already stated, most theories do not focus on individual answer predictions, but on the most frequently chosen answers. For example, a theory can state that given the premises All A are B, Some B are C, people draw the answers Some A are C, Some C are A, or All A are C. We check what these theories would predict for our individual missing answers, and we use the relaxation that if the missing answer is one of the answers predicted by the theory, then it is counted as correct. We note that this is quite a big relaxation, since there are theories which predict three to four answers for the same pair of premises, which means they would of course achieve a better accuracy than our model, which always predicts just one answer. We calculate the accuracy of the predictions of theories based on illicit conversions, verbal models, and mental models, as well as the predictions of mReasoner, an implementation of the mental models theory.

One thing to keep in mind is that for some syllogisms there is more than one valid answer; however, in our experiment subjects could select only one answer. This can cause a difficulty for our comparison, as we need to deal with cognitive theories that often predict up to four or five answers per syllogism. For this reason, we construct two other versions of our model: instead of predicting only one answer, we check what the accuracy of the prediction would be if we predict the top two and top three most voted answers. We repeat the procedure 100, 300, and 500 times (to check if the results converge); the results are reported in Table 1:

           Exact   Top 2   Top 3   IC     VM     MM     mReasoner
100 runs   0.68    0.89    0.95    0.61   0.77   0.95   0.87
300 runs   0.69    0.88    0.95    0.62   0.77   0.95   0.87
500 runs   0.69    0.89    0.95    0.62   0.77   0.95   0.87

Table 1. The accuracy of the cognitive theories in predicting missing answers. The reported results are average accuracies over 100, 300, and 500 runs. (Exact, Top 2, and Top 3 refer to our model producing 1, 2, and 3 answers; IC refers to Illicit Conversions; VM to Verbal Models; MM to Mental Models.)


3.5 Discussion

The results show that our model which predicts the exact answer does not only perform reliably better than chance, but even manages to outperform the theoretical predictions based on illicit conversions, which for almost half of the syllogisms predict more than one answer. Furthermore, we notice that our model with the two most voted predictions outperforms the predictions of the verbal models as well as of mReasoner, which is right now one of the state-of-the-art predictors for syllogistic reasoning. Another thing which is important to notice is that our model reaches the same accuracy even if we delete 32 (out of the 64) answers for up to 50% of the participants, showing robust performance.

We notice that the top performance is achieved by the predictions made by the mental models theory. However, it is important to notice that for almost half of the syllogisms this theory predicts four or even five answers, which means it has an advantage for this type of metric. Still, our model which predicts the top 3 answers (still fewer than the mental models predictions) achieves the same performance.

mReasoner is an implementation which is based on mental models, but it has some parameters which limit the number of predicted answers for each syllogism (it predicts one answer for 7 syllogisms, and more than two for 16 syllogisms). In this comparison, we used the default setting for mReasoner, and we see that our model which predicts the top two answers has a better performance.

Khemlani and Johnson-Laird [13] propose a model where mReasoner learns parameters for individual subjects in a small dataset consisting of 20 participants, and then simulates the answers of each subject and compares them to the true answers. They report a mean correlation to the data of 0.7, which means that on average mReasoner made the right prediction in 70% of the cases. This result is comparable to our basic model, but it is built on general cognitive principles. Both approaches differ in their methodology: our approach requires participant data to classify and predict other reasoners and does not rely on cognitive principles, while mReasoner is built on cognitive principles but trains its parameters on the whole dataset, so it is not actually predicting. A combination of both methods to reach a "prediction" based on cognitive principles is important.

4 Conclusions and future steps

These results show that collaborative filtering can help in predicting individual performance for reasoning tasks, but also that there are new challenges (especially given the performance boost when considering the top two predictions). First of all, it will be interesting to test the same model with data from other reasoning domains, e.g., the Wason selection task [28]. This would allow us to test for consistency across different reasoning domains. Secondly, as we mentioned, the model is simple; it would be interesting to build a more adaptive model which learns from the subjects' answers and can identify cognitive principles. This could be achieved by analyzing potential reasons for differences in performance, combined with more advanced techniques from machine learning to build the recommender system.

One alternative would be to formalize the tasks by using ternary logic and then learn how different subjects map logical operators to truth tables. Ternary logic has been shown to provide high flexibility in modeling Wason's selection task [21]. Another alternative would be to include theories' predictions in the task and check whether a subject is consistent with the predictions of a certain theory (i.e., we would find similarities with theoretical predictions rather than with other subjects). This would also help us in cases where there are not enough subjects to build informative similarity functions among them.

We tried to use machine learning techniques to cluster the data in order to identify potential reasoning profiles; however, the dataset seems to be too diverse. The method fclusterdata, a hierarchical clustering technique from the scikit-learn package [19] in Python, identifies more than 40 clusters (for the 140 participants), whereas with the k-medoids technique, in which we can specify the number of clusters, for up to 6 clusters the similarity of subjects within a cluster remains low and we do not achieve better performance. Studies [17, 26] have identified reasons which might lead to individual differences, such as level of intellect, memory capacity, etc. Our intuition is that although these reasons are similar for different individuals, the way they are expressed in individual people makes it difficult to create clusters. For example, an individual can have high intellectual capacity but bad memory, another one medium intellectual capacity and very good memory, and so on. This is why we think that an approach which focuses on finding similar reasoners for each individual can be more effective.
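For illustration, such a clustering attempt might look as follows (our sketch, not the authors' code; fclusterdata is available in SciPy's scipy.cluster.hierarchy module, and the one-hot encoding of the categorical answers is our assumption):

    # Hedged sketch of hierarchical clustering over the subjects' answer vectors.
    import numpy as np
    from scipy.cluster.hierarchy import fclusterdata

    def cluster_subjects(answer_matrix, num_options=9, threshold=1.0):
        """answer_matrix: subjects x 64 tasks, entries in 0..8 (chosen option)."""
        n_subjects, n_tasks = answer_matrix.shape
        # One-hot encode the categorical answers so that Euclidean distances
        # between subjects are meaningful.
        features = np.zeros((n_subjects, n_tasks * num_options))
        for i, row in enumerate(answer_matrix):
            for j, answer in enumerate(row):
                features[i, j * num_options + int(answer)] = 1.0
        # Returns one flat cluster label per subject.
        return fclusterdata(features, t=threshold, criterion="distance",
                            method="average")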

Reasoners are relatively consistent in their performance in syllogistic reasoning, since similar reasoners tend to give similar answers and to make often predictable mistakes. This means it is possible to build reasoning models which can identify a person's reasoning pattern and exploit it to better understand the overall reasoning process. This is exactly what our simple model does, and in its relaxed version it manages to be as good as state-of-the-art complex reasoning models.

References

1. Chapman, L. J., & Chapman, J. P. Atmosphere effect re-examined. Journal of Experimental Psychology, 58(3), 220. (1959)

2. Cheng, P. W., & Holyoak, K. J. Pragmatic reasoning schemas. Cognitive Psychology, 17(4), 391-416. (1985)

3. Evans, J. St. B. T. Heuristic and analytic processes in reasoning. British Journal of Psychology, 75, 451-468. (1984)

4. Evans, J. St. B. T. Bias in human reasoning: Causes and consequences. Hillsdale, NJ: Erlbaum. (1989)

5. Evans, J. S. B., Newstead, S. E., & Byrne, R. M. Human reasoning: The psychology of deduction. Psychology Press. (1993)

6. Gigerenzer, G., & Hug, K. Domain-specific reasoning: Social contracts, cheating, and perspective change. Cognition, 43(2), 127-171. (1992)

7. Johnson-Laird, P. N. Models of deduction. Reasoning: Representation and process in children and adults, 7-54. (1975)

8. Johnson-Laird, P. N. Mental models: Towards a cognitive science of language, inference, and consciousness (No. 6). Harvard University Press. (1983)

9. Johnson-Laird, P. N., & Steedman, M. The psychology of syllogisms. Cognitive Psychology, 10(1), 64-99. (1978)

10. Johnson-Laird, P. N., & Wason, P. C. A theoretical analysis of insight into a reasoning task. Cognitive Psychology, 1(2), 134-148. (1970)

11. Kaufman, L., & Rousseeuw, P. J. Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, New York. (1990)

12. Khemlani, S., & Johnson-Laird, P. N. Theories of the syllogism: A meta-analysis. Psychological Bulletin, 138(3), 427-457. (2012)

13. Khemlani, S., & Johnson-Laird, P. N. How people differ in syllogistic reasoning. In Proceedings of the 36th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. (2016)

14. Newell, A., & Simon, H. A. Human problem solving (Vol. 104, No. 9). Englewood Cliffs, NJ: Prentice-Hall. (1972)

15. Oaksford, M., & Chater, N. A rational analysis of the selection task as optimal data selection. Psychological Review, 101, 608-631. (1994)

16. Oaksford, M., & Chater, N. Theories of reasoning and the computational explanation of everyday inference. Thinking & Reasoning, 1(2), 121-152. (1995)

17. Oberauer, K., Süß, H. M., Wilhelm, O., & Sander, N. Individual differences in working memory capacity and reasoning ability. Variation in working memory, 49-75. (2007)

18. Pachur, T., Bröder, A., & Marewski, J. N. The recognition heuristic in memory-based inference: Is recognition a non-compensatory cue? Journal of Behavioral Decision Making, 21(2), 183-210. (2008)

19. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... & Vanderplas, J. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct), 2825-2830. (2011)

20. Polk, T. A., & Newell, A. Deduction as verbal reasoning. Psychological Review, 102(3), 533. (1995)

21. Ragni, M., Dietz, E. A., Kola, I., & Hölldobler, S. Two-Valued Logic is Not Sufficient to Model Human Reasoning, but Three-Valued Logic is: A Formal Analysis. Bridging 2016 - Bridging the Gap between Human and Automated Reasoning, 1651:61-73. (2016)

22. Revlis, R. Two models of syllogistic reasoning: Feature selection and conversion. Journal of Verbal Learning and Verbal Behavior, 14(2), 180-195. (1975)

23. Resnick, P., & Varian, H. R. Recommender systems. Communications of the ACM, 40(3), 56-58. (1997)

24. Rips, L. J. Cognitive processes in propositional reasoning. Psychological Review, 90(1), 38. (1983)

25. Rips, L. J. The psychology of proof: Deductive reasoning in human thinking. MIT Press. (1994)

26. Stanovich, K. E., & West, R. F. Individual differences in rational thought. Journal of Experimental Psychology: General, 127(2), 161. (1998)

27. Tversky, A., & Kahneman, D. Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207-232. (1973)

28. Wason, P. C. Reasoning. New Horizons in Psychology, pp. 135-151. (1966)


The Search for Cognitive Models: Standards and Challenges

Marco Ragni and Nicolas Riesterer

Cognitive Computation Lab, Albert-Ludwigs-Universität Freiburg

Abstract. Cognitive modeling is the distinguishing factor of cognitive science and the method of choice for formalizing human cognition. In order to bridge the gap between logic and human reasoning, a number of foundational research questions need to be rigorously answered. The objective of this paper is to present relevant concepts and to introduce possible modeling standards as well as key discussion points for cognitive models of human reasoning.

Keywords: Cognitive Modeling; Human Reasoning; Logic

1 Introduction

All sciences are defined by their respective research objectives and methods. Cognitive science in particular is special in this regard, because it is an interdisciplinary field located at the boundaries of many other domains of research such as artificial intelligence, psychology, linguistics, computer science, and neuroscience. As a result, its goals and methods are diverse mixtures with influences from the neighboring fields.

The core research question of cognitive science focuses on investigating information processing in the human mind in order to gain an understanding of human cognition as a whole. To this end, it primarily employs the method of cognitive modeling as a means of capturing the latent natural processes of the mind by well-defined mathematical formalizations. The challenge of cognitive modeling is to develop models which are capable of representing highly complex and potentially unobservable processes in a computational manner while still guaranteeing their interpretability in order to advance the level of understanding of cognition.

This paper discusses high-level cognitive models of reasoning. In particular, it gives a brief introduction into the following three core research questions:

1. What characterizes a cognitive model?

2. What is a “good” cognitive model?

3. What are current challenges?


2 What is a cognitive model?
Step 1: Model Generation

A theory of reasoning is defined as cognitively adequate [22] with respect to a reasoning task T and a human reasoner R, if the theory is (i) representationally adequate, i.e., it uses the same mental representation as a human reasoner does, (ii) operationally adequate, i.e., the theory specifies the same operations the reasoner employs, and (iii) inferentially adequate, i.e., the theory draws the same conclusions based on the operations and mental representation as the human reasoner. While the inferential adequacy of a theory T can be determined from the given responses of a reasoner for a given task, it is impossible to directly observe the operations and mental representations a reasoner applies. They can only be determined by means of reverse engineering, i.e. the identification of functionally equivalent representations and operations leading to the generation of a given reasoner's output.

A mental representation is localized within the specific cognitive architecture of the human mind, on which the reasoning process operates. Hence, we need to distinguish between cognitive architectures and cognitive models. A cognitive architecture is a tuple ⟨D, O⟩ consisting of a data structure D (which can contain an arbitrary number of substructures) and a set of operations O specified in any formal language to manipulate the data structure. The goal of a cognitive architecture is to specify the often type-dependent flow of information (e.g., visual or auditory) between different memory-related cognitive structures in the human mind. This imposes constraints on the data structures of the reasoner and the corresponding mental operations. An example of a cognitive architecture is ACT-R, which uses so-called modules, i.e., data structures for specific types of information, and production rules as a set of general operations [2].

A cognitive computational model for a cognitive task T in a given cognitive architecture specifies algorithms based on (a subset of) the operations defined on the data structure of the underlying cognitive architecture. The application of those algorithms results in the computation of an input-output mapping for the cognitive task T with the goal of representing human cognition.
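As a rough, purely illustrative sketch of these two notions (ours, not part of the paper; the operation names "encode" and "infer" are assumptions), a cognitive architecture can be thought of as the pair ⟨D, O⟩ and a cognitive model as an algorithm that maps task input to a response using only the architecture's operations:

    # Illustrative sketch only: the tuple <D, O> and a model defined over it.
    from dataclasses import dataclass
    from typing import Any, Callable, Dict

    @dataclass
    class CognitiveArchitecture:
        data: Dict[str, Any]              # data structure D (may contain substructures)
        operations: Dict[str, Callable]   # operations O that manipulate D

    def cognitive_model(arch: CognitiveArchitecture, task_input: Any) -> Any:
        """A cognitive model: an input-output mapping built from (a subset of) O."""
        encode = arch.operations["encode"]                 # assumed operation names
        infer = arch.operations["infer"]
        arch.data["working_memory"] = encode(task_input)   # build a representation
        return infer(arch.data["working_memory"])          # derive the response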

3 What is a "good" cognitive model?
Step 2: Model Evaluation

The definition of a cognitive computational model (cognitive model for short) is rather general and allows for a large space of possible model candidates. Driven by the motivation that a cognitive theory should be explanatory for human performance, a "good" cognitive model is never just a simulation model, i.e., a model that solely reproduces existing experimental data. Instead, it must always make explicit assumptions about the latent workings of the mind.

Based on several criteria from the literature [18], the following list can serve as a starting point for defining principles of "good" cognitive modeling:


1. The model has transparent assumptions. All operations and parameters are transparent, and the model's responses can be explained by model operations.

2. The model's principles are independent from the test data. A model cannot be developed on the same data it is tested on. To avoid overfitting to a specific dataset, fine-tuning on the test data is not allowed.

3. The model generates quantitative predictions. The model computes the same type of answers a human reasoner gives based on the input she receives. Model predictions can be compared with the test data by mathematical discrepancy functions often applied in mathematical psychology and AI, such as the Root-Mean-Square Error (RMSE), statistical information criteria, or others (see below).

4. The model predicts behavior of an individual human reasoner. Often, models predict an average reasoner. However, aggregating data increases the noise and eliminates individual differences.

5. The model covers several relevant reasoning phenomena and predicts new phenomena. The goal of a cognitive model is not to just fit data perfectly, but to explain latent cognitive reasoning processes in accordance with the results obtained from psychological studies. Ultimately, models are supposed to offer an alternative view on cognition allowing for the derivation of new phenomena that can be validated or falsified by conducting studies on human reasoners.

These points also introduce an ordering based on the importance of the modeling principles. Points 1 and 2 are general requirements we consider to be mandatory for any serious modeling attempt. Points 4 and 5 are important for general cognitive models which are supposed to shed light on the inner workings of the mind. For the reverse engineering process and a comparison of different models that satisfy points 1-5, criterion 3 is the most important one.

There are different methods for assessing the quality of models. At their very basis, they all share the idea of defining a discrepancy metric that can be used to quantify the value of a specific model in comparison with others. Most fundamentally, the RMSE defines the discrepancy based on the distance between the model predictions and outcomes observed in real-world experiments. More sophisticated statistical approaches based on the likelihood of the data, such as the χ² or G² metrics, can be interpreted as test statistics, with significant results indicating large differences to the data [3]. However, since models do not only differ with respect to the goodness of fit, but also with respect to their complexity, further information must often be integrated into the model comparison process. Akaike's Information Criterion (AIC) [1] and the Bayesian Information Criterion (BIC) [21] are metrics based on G² that incorporate the number of free parameters as an indication of complexity. The Fisher Information Approximation (FIA) is an information-theoretic approach that quantifies complexity based on the minimum description length principle [8]. Furthermore, there are purely Bayesian approaches to the problem of model comparison. By relying on Bayes' Theorem, the Bayes Factor (BF) measures the relative fit of two models by integrating uncertainties about the data and parameters under the model. It quantifies whether the data provides more evidence for or against one model compared with an alternative [15].
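As a rough illustration (a sketch based on the standard textbook definitions, not code from the paper), the simplest of these metrics can be computed as follows:

    # Sketch of common discrepancy and model-selection metrics (standard definitions).
    import math

    def rmse(predictions, observations):
        # Root-Mean-Square Error between model predictions and observed data.
        return math.sqrt(sum((p - o) ** 2 for p, o in zip(predictions, observations))
                         / len(observations))

    def aic(log_likelihood, k):
        # Akaike Information Criterion: k is the number of free parameters.
        return 2 * k - 2 * log_likelihood

    def bic(log_likelihood, k, n):
        # Bayesian Information Criterion: the penalty also grows with sample size n.
        return k * math.log(n) - 2 * log_likelihood

    # For all three metrics, lower values indicate a better trade-off
    # between goodness of fit and model complexity.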

4 Challenges

The field of cognitive science can benefit greatly from interdisciplinary work and results. This ranges from the application of recent advances in modeling methods from computer science and statistics, and extends all the way to exploiting the knowledge gained in the field of theoretical psychology.

However, in order to foster this collaborative approach, which could potentially result in faster and more goal-oriented progress, the field needs to address several open questions and relevant challenges:

1. What are relevant benchmark problems? In computer science and AI, well-defined benchmark problems have been great aids to the field. By organizing annual competitions and generally maintaining low barriers for entry, progress could be boosted in various domains, such as planning or satisfiability solving for logic formulae. Additionally, the rigorous definition of benchmarks allowed for a fair comparison between different approaches based on well-defined criteria, triggering a competitive spirit for improving the state of the art of the respective domains.

We see the necessity to introduce the field of cognitive science, and especially the domain of human reasoning, to the concept of competition as well. Without defining benchmark problems and providing the data to approach them in a clear and direct manner, the field risks drowning in the continuously increasing stream of cognitive theories arguing to explain parts of human reasoning. In order to guarantee progress, we consider the definition of explicit criteria for model comparison and their application based on commonly accepted and publicly available datasets mandatory.

While psychological experiments can provide benchmark problems, they need to be differentiated with respect to priority. So far, no criteria for the identification of relevant problems have been introduced in the literature. However, they are necessary for the development of a generally accepted benchmark. The following list compiles experiments and phenomena as well as general remarks that should be taken into account when formalizing a benchmark problem:

(a) Phenomena/experiments that have often been modeled and/or cited are:
    – Conditional and propositional reasoning:
      • Simple conditional inferences [17]
      • Counterfactual reasoning [5]
      • Rule testing: the Wason Selection Task [?]
      • Illusions in propositional reasoning [9]
      • Suppression effect [4]
    – Relational reasoning:
      • Preference effect [19]
      • Pseudo-transitive relations [7]
      • Complex relations [6]
      • Indeterminacy effect [4]
      • Visual impedance effect [13]
    – Syllogistic reasoning:
      • Reasoning patterns on the 64 syllogisms [10, 11]
      • Belief-bias effect [12]
      • Generalized quantifiers [16]

(b) Data from the literature that can be included in a benchmark needs to include a description of the information a reasoner received as well as her response. Aggregated data of individual reasoners can help to formulate an intuition or can give an indication for an effect. However, for developing a profound model, answers of the individual reasoners are necessary.

2. How to translate existing descriptive theories into computational cognitive models? Most cognitive theories are not defined algorithmically. Instead they are often based on verbal descriptions alone. However, for purposes of fair mathematical comparison, a formalization of these theories is required. The challenge here is to develop a model implementation of the theory that is as close to the original theory as possible while making all additional assumptions made by the modeler explicit. There currently is no accepted methodology for general theory implementation.

3. How could a general cognitive modeling language be specified? The field of action planning greatly benefits from having a general Planning Domain Definition Language (PDDL). On the one hand, PDDL allows for the easy definition and introduction of new problems. On the other hand, it forces planners to be defined generally, without exploiting domain-dependent shortcuts and heuristics.

Especially when considering the goal to construct a model for unified cognition, finding a common cognitive modeling language might be beneficial. However, the task of defining a language which is accepted by most modelers is not an easy endeavor, as the list of potential reasoning domains is quite extensive and each has its own specific set of requirements. Additionally, there are very different modeling approaches beyond the purely symbolic methods commonly found in planning, introducing even more complexity for the desired language. Examples include models based on artificial neurons, hybrid approaches, Bayesian models, and abstract description-based models such as Multinomial Processing Trees (MPTs).

4. What are properties of the human data structures that influence the reasoning process? While working memory is resource bounded, long-term memory is not. But there are additional cognitive features that can have an influence on reasoning, such as background knowledge, cognitive bottlenecks, parallel processing, etc. These limitations are often not represented in cognitive theories but crystallized in cognitive architectures [14]. However, general approaches for developing and comparing these architectures have yet to be identified.

5 Desirables: Standards, Networks, and Competitions

5.1 Cognitive Modeling Standards

Cognitive models are usually developed in a post-hoc fashion with the goal to fit an existing set of experimental data. Alternatively, cognitive models can be created following a mixture of data- and theory-based approaches with undefined overlap. Irrespective of the motivation and development process, a fair comparison of models must be based on well-defined criteria (such as those introduced in Section 3). Generally, the research community of the field needs to settle on which criteria are mandatory, which are desirable, and which are not worthwhile to pursue further. In order to develop and maintain this set of modeling standards, close communication between researchers is necessary.

5.2 Cognitive Modeling Network

Researchers dealing with similar tasks are scattered among many diverse disciplines and research communities with little to no overlap. Amongst others, researchers developing cognitive models for reasoning can be found in

– MathPsych community (MathPsych conference) and a mailing list

– Cognitive modeling community (ICCM conference)

– Knowledge representation and reasoning community (AI conferences like IJCAI, AAAI, KR) and

– Reasoning community (with the Thinking conference and the annual London Reasoning Workshop) and a mailing list

However, there often is no overlap between the individual communities. A joint effort to combine the approaches is necessary.

5.3 Competitions

As introduced in Section 4, competitions allow researchers to compare different approaches and to test ideas. Additionally, the test data serves as a benchmark for future cognitive models and aids the development of comprehensive models of unified cognition. One way is to embrace a more competitive perspective on model development. By introducing challenges on comprehensive benchmarks, models that perform best according to a predefined list of criteria (connecting strictly quantitative requirements with theoretical profoundness) are selected.


6 Conclusion

This paper introduced challenges and research questions that the fields of cognitive science, and cognitive modeling in particular, need to address. In order to ensure progress in the understanding of the mind, models have to transcend the state of simulations focusing on fitting experimental data. The goal for modeling is to construct model candidates that account for prominent phenomena discovered in cognitive psychology. By comparing these models on fair grounds and extracting new phenomena from the computational formalizations, which can in turn be validated or falsified on experimental data, the field can advance towards a unified model of cognition.

One aim of this paper is to make general cognitive modeling principles available to the diverse communities, to open the discussion of standards, to foster interdisciplinary research, and to tackle one of the core problems of high-level cognition: human reasoning.

References

1. H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716-723, Dec 1974.

2. J. R. Anderson. How can the human mind occur in the physical universe? Oxford University Press, New York, 2007.

3. William H. Batchelder and David M. Riefer. Theoretical and empirical review of multinomial process tree modeling. Psychonomic Bulletin & Review, 6(1):57-86, 1999.

4. R. M. J. Byrne. Suppressing valid inferences with conditionals. Cognition, 31:61-83, 1989.

5. R. M. J. Byrne. Mental models and counterfactual thoughts about what might have been. Trends in Cognitive Sciences, 6(10):426-431, 2002.

6. G. P. Goodwin and P. N. Johnson-Laird. Reasoning about relations. Psychological Review, 112:468-493, 2005.

7. G. P. Goodwin and P. N. Johnson-Laird. Transitive and pseudo-transitive inferences. Cognition, 108:320-352, 2008.

8. Peter D. Grünwald. The minimum description length principle. MIT Press, 2007.

9. S. Khemlani and P. N. Johnson-Laird. Disjunctive illusory inferences and how to eliminate them. Memory & Cognition, 37(5):615-623, 2009.

10. S. Khemlani and P. N. Johnson-Laird. Theories of the syllogism: A meta-analysis. Psychological Bulletin, January 2012.

11. S. Khemlani and P. N. Johnson-Laird. How people differ in syllogistic reasoning. In Proceedings of the 36th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society, 2016.

12. K. C. Klauer, J. Musch, and B. Naumer. On belief bias in syllogistic reasoning. Psychological Review, 107(4):852-884, 2000.

13. M. Knauff and P. N. Johnson-Laird. Visual imagery can impede reasoning. Memory & Cognition, 30:363-371, 2002.

14. I. Kotseruba, O. J. A. Gonzalez, and J. K. Tsotsos. A review of 40 years of cognitive architecture research: Focus on perception, attention, learning and applications. arXiv preprint arXiv:1610.08602, 2016.

15. Michael D. Lee and Eric-Jan Wagenmakers. Bayesian cognitive modeling: A practical course. Cambridge University Press, 2014.

16. M. Oaksford and N. Chater. A rational analysis of the selection task as optimal data selection. Psychological Review, 101(4):608-631, 1994.

17. K. Oberauer. Reasoning with conditionals: A test of formal models of four theories. Cognitive Psychology, 53:238-283, 2006.

18. M. Ragni. Räumliche Repräsentation, Komplexität und Deduktion: Eine kognitive Komplexitätstheorie [Spatial representation, complexity and deduction: A cognitive theory of complexity]. PhD thesis, Albert-Ludwigs-Universität Freiburg, 2008.

19. M. Ragni and M. Knauff. A theory and a computational model of spatial reasoning with preferred mental models. Psychological Review, 120(3):561-588, 2013.

20. David M. Riefer and William H. Batchelder. Multinomial modeling and the measurement of cognitive processes. Psychological Review, 95(3):318-339, 1988.

21. Gideon Schwarz. Estimating the dimension of a model. Annals of Statistics, 6(2):461-464, 1978.

22. Gerhard Strube. The role of cognitive science in knowledge engineering. Contemporary knowledge engineering and cognition, pages 159-174, 1992.


The Weak Completion Semantics

Emmanuelle-Anna Dietz Saldanha, Steffen Hölldobler, and Isabelly Louredo Rocha

International Center for Computational Logic, TU Dresden, 01062 Dresden, [email protected] and [email protected] and [email protected]

Abstract. This is a gentle introduction to the weak completion semantics, a novel cognitive theory which has been successfully applied to a number of human reasoning tasks. In this paper we do not focus on formalities but rather on principles and examples. The reader is assumed to be familiar with classical propositional logic and the suppression task.

1 Introduction

The weak completion semantics is a novel cognitive theory, which recently has outperformed twelve established cognitive theories on syllogistic reasoning [19, 23]. It is based on ideas first expressed in [27], viz. to encode knowledge as logic programs and, in particular, to use licenses for inferences when encoding conditionals, to make assumptions about the absence of abnormalities, to interpret programs under a three-valued (Kleene) logic [20], to compute a supported model for each program as the least fixed point of an appropriate semantic operator, and to reason with respect to these least fixed points. But the weak completion semantics differs from the approach presented in [27] in that all concepts are formally specified, it is based on a different three-valued (Łukasiewicz) logic [21],¹ all results are rigorously proven, and it has been extended in many different ways. In particular, the weak completion semantics has been applied to the suppression task [6], to the selection task [7, 9], to the belief bias effect [24], to reasoning about conditionals [3, 5, 9], to human spatial reasoning [4], to syllogistic reasoning [22, 23], and to contextual reasoning [10, 25]. Furthermore, there exists a connectionist encoding of the weak completion semantics based on the core method [8, 15, 17].

Modeling human reasoning tasks under the weak completion semantics is done in three stages. Firstly, the background knowledge is encoded as the weak completion of a logic program, i.e. a finite set of facts, rules, and assumptions. The program is specified with respect to certain principles, some of which have been identified in cognitive science and computational logic, others are new principles which need to be confirmed in future experiments. Secondly, a supported model for the weak completion of the program is computed. It turns out that under the Łukasiewicz logic this model is unique and can be obtained as the least fixed point of an appropriate semantic operator. Thirdly, reasoning is done with respect to the unique supported model. This three-stage process is augmented by abduction if needed.

¹ Alternatively, the three-valued logic S3 [26] could be applied as well.

In this paper a gentle introduction to the weak completion semantics is provided. We will give an informal introduction into the three stages, focussing on the suppression task in Sections 2 and 3 and on reasoning about indicative conditionals in Section 4. In each case, we will discuss how the programs, i.e. the sets of facts, rules, and assumptions, are obtained, how they are weakly completed, how their unique supported models are generated, and how reasoning is performed with respect to these models. We will avoid formal definitions, theorems, and proofs; they can be found in [14] and the referenced technical papers. However, we assume the reader to be familiar with classical propositional logic.

2 Reasoning with respect to Least Models

2.1 Modus Ponens

Knowledge is encoded as positive facts, negative assumptions, and rules. Consider the statements she has an essay to write and if she has an essay to write, then she will study late in the library from the suppression task [1]. The first statement will be encoded in propositional logic as the fact e ← ⊤, where e denotes that she has an essay to write and ⊤ is a constant denoting truth. The second statement is a conditional which will be encoded as a license for inferences ℓ ← e ∧ ¬ab1 following [27], where ℓ denotes that she will study late in the library and ab1 is an abnormality predicate. As in the given context nothing abnormal is known about the conditional, the assumption ab1 ← ⊥ is added, where ⊥ is a constant denoting falsehood. This expression is called an assumption because – as illustrated later – it can be overridden if more knowledge becomes available. The given implications – a logic program – are weakly completed by adding the only-if-halves to obtain the set

K1 = {e ↔ ⊤, ℓ ↔ e ∧ ¬ab1, ab1 ↔ ⊥}.

The left- and the right-hand-sides of the equivalences are considered as definiendum and definiens, respectively. In particular, the propositional variables e, ℓ, and ab1 are defined by ⊤, e ∧ ¬ab1, and ⊥, respectively. In other words, the set K1 is a set of definitions which encode the given background knowledge.
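The following small sketch (ours, not part of the original paper; names such as program_K1 are illustrative) shows one way to represent such programs and to compute their weak completion by grouping rules by head and adding the only-if halves.

```python
from collections import defaultdict

# Rules are pairs (head, body); a body is a list of literals, where a literal is
# an atom or its negation written "~atom".  The fact e <- TOP has the body
# ["T"], the assumption ab1 <- BOT has the body ["F"].
program_K1 = [
    ("e",   ["T"]),          # she has an essay to write
    ("l",   ["e", "~ab1"]),  # license for inference with abnormality ab1
    ("ab1", ["F"]),          # nothing abnormal is known
]

def weak_completion(program):
    """Group the rules by head and add the only-if halves:
    every defined atom becomes head <-> body_1 or ... or body_n."""
    definitions = defaultdict(list)
    for head, body in program:
        definitions[head].append(body)
    return dict(definitions)

print(weak_completion(program_K1))
# {'e': [['T']], 'l': [['e', '~ab1']], 'ab1': [['F']]}
```

Atoms that never occur as a head deliberately receive no equivalence, which is what distinguishes the weak completion from Clark's completion (see footnote 4 below).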

If a subject is asked whether she will study late in the library, then a model for this set is constructed. In a model, propositional variables are mapped to the truth values true, false, and unknown such that all equivalences occurring in K1 are simultaneously mapped to true. In fact, there is always a unique least model if a set like K1 is interpreted under the three-valued Łukasiewicz logic [16, 21],2 whose truth tables are depicted in Table 1.

2 This does not hold if a set is interpreted under Kleene logic [20]. For example, the equivalence a ↔ b has two minimal models. In the first minimal model both, a and b, are mapped to true. In the second minimal model both, a and b, are mapped to false. The interpretation, where both, a and b, are mapped to unknown, is not a model for a ↔ b.


¬
⊤ ⊥
U U
⊥ ⊤

∧ | ⊤ U ⊥      ∨ | ⊤ U ⊥      ← | ⊤ U ⊥      ↔ | ⊤ U ⊥
⊤ | ⊤ U ⊥      ⊤ | ⊤ ⊤ ⊤      ⊤ | ⊤ ⊤ ⊤      ⊤ | ⊤ U ⊥
U | U U ⊥      U | ⊤ U U      U | U ⊤ ⊤      U | U ⊤ U
⊥ | ⊥ ⊥ ⊥      ⊥ | ⊤ U ⊥      ⊥ | ⊥ U ⊤      ⊥ | ⊥ U ⊤

Table 1. The truth tables of the Łukasiewicz logic, where true, false, and unknown are abbreviated by ⊤, ⊥, and U, respectively. In the binary tables, rows give the value of the first argument and columns the value of the second.
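As a quick cross-check, the connectives of Table 1 can be programmed directly; the following sketch (ours) uses the standard numeric encoding ⊥ = 0, U = 0.5, ⊤ = 1 of the Łukasiewicz logic.

```python
VAL = {"F": 0.0, "U": 0.5, "T": 1.0}
REV = {0.0: "F", 0.5: "U", 1.0: "T"}

def neg(x):      return REV[1.0 - VAL[x]]
def conj(x, y):  return REV[min(VAL[x], VAL[y])]
def disj(x, y):  return REV[max(VAL[x], VAL[y])]
def impl(b, h):  return REV[min(1.0, 1.0 - VAL[b] + VAL[h])]   # h <- b
def equiv(x, y): return REV[1.0 - abs(VAL[x] - VAL[y])]        # x <-> y

# The crucial difference to Kleene logic (see footnote 2): U <-> U is true.
print(equiv("U", "U"))   # T
```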

In the example, the model is constructed in two steps.3 In the first step, e ↔ ⊤ and ab1 ↔ ⊥ are satisfied by the following mapping:

true   false
e      ab1

In the second step, because the right-hand side of the equivalence ℓ ↔ e ∧ ¬ab1 is evaluated to true under the given mapping, its left-hand side ℓ must also be true and will be added to the model:

true   false
e      ab1
ℓ

The query whether she will study late in the library can now be answered positively given this model.
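The two-step construction can be read as iterating a semantic operator until a fixed point is reached. The sketch below (ours, reusing the representation of the first sketch) assigns true to a defined atom as soon as one of its bodies is true, false once all of its bodies are false, and leaves everything else unknown.

```python
definitions_K1 = {"e": [["T"]], "l": [["e", "~ab1"]], "ab1": [["F"]]}

def lit_value(lit, interp):
    if lit in ("T", "F"): return lit
    if lit.startswith("~"):
        return {"T": "F", "F": "T", "U": "U"}[interp.get(lit[1:], "U")]
    return interp.get(lit, "U")

def body_value(body, interp):
    vals = [lit_value(l, interp) for l in body]
    return "F" if "F" in vals else ("U" if "U" in vals else "T")

def least_model(definitions):
    interp = {}
    while True:
        new = {}
        for head, bodies in definitions.items():
            vals = [body_value(b, interp) for b in bodies]
            if "T" in vals:
                new[head] = "T"
            elif all(v == "F" for v in vals):
                new[head] = "F"
        if new == interp:
            return interp
        interp = new

print(least_model(definitions_K1))
# e and l are mapped to true, ab1 to false -- modus ponens succeeds
```

Running the same iteration on the definitions of K3 from Section 2.3 leaves ℓ, ab1 and o unknown, which is exactly the suppression effect described there.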

2.2 Alternative Arguments

If the statement if she has a textbook to read, then she will study late in the library is added to the example discussed in Section 2.1, then this statement will be encoded by the rule ℓ ← t ∧ ¬ab2 and the assumption ab2 ← ⊥, where t denotes that she has a textbook to read. Weakly completing the given implications we obtain the set

K2 = {e ↔ ⊤, ℓ ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2), ab1 ↔ ⊥, ab2 ↔ ⊥}.4

If a subject is asked whether she will study late in the library, then a model for K2 is constructed as follows. In the first step, e ↔ ⊤, ab1 ↔ ⊥, and ab2 ↔ ⊥ are satisfied by the following mapping:

true   false
e      ab1
       ab2

3 In [16, 27], a function is defined which computes this model.
4 The set does not include the equivalence t ↔ ⊥. In logic programming this equivalence is added under the completion semantics [2].


Because e ∧ ¬ab1 is true under this mapping, so is the right-hand side of the equivalence ℓ ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2) and, consequently, ℓ must be true as well:

true   false
e      ab1
ℓ      ab2

The query whether she will study late in the library can now be answered positively given this model.

2.3 Additional Arguments

If the statement if the library is open, then she will study late in the library is added to the example discussed in Section 2.1, then this statement will be encoded by the rule ℓ ← o ∧ ¬ab3 and the assumption ab3 ← ⊥, where o denotes that the library is open. As argued in [27] a subject being confronted with the additional statement may become aware that not being open is an exception for the rule ℓ ← e ∧ ¬ab1. This can be encoded by the rule ab1 ← ¬o. Likewise, she may not go to the library without a reason and the only reason mentioned so far is writing an essay. Thus, not having an essay to write is an exception for the rule ℓ ← o ∧ ¬ab3. This can be encoded by adding the rule ab3 ← ¬e. Weakly completing all implications we obtain the set

K3 = {e ↔ ⊤, ℓ ↔ (e ∧ ¬ab1) ∨ (o ∧ ¬ab3), ab1 ↔ ⊥ ∨ ¬o, ab3 ↔ ⊥ ∨ ¬e}.

The example shows how the initial assumption ab1 ← ⊥ is overridden by ab1 ← ¬o. In K3 the definition of ab1 is now ⊥ ∨ ¬o, which is semantically equivalent to ¬o. Likewise ab3 ← ⊥ is overridden by ab3 ← ¬e.

If a subject is asked whether she will study late in the library, then a model for K3 is constructed as follows. In the first step, e ↔ ⊤ is satisfied by the following mapping:

true   false
e

Because the right-hand side of the equivalence ab3 ↔ ⊥ ∨ ¬e is mapped to false, ab3 must be mapped to false as well:

true   false
e      ab3

The remaining propositional variables ℓ, ab1, and o are neither forced to be true nor false and, hence, remain unknown. The constructed mapping is a model for K3. As ℓ is not mapped to true, suppression is taking place.


2.4 The Denial of the Antecedent

Now suppose that in the example discussed in Section 2.1 the fact that she has an essay to write is replaced by she does not have an essay to write. This denial of the antecedent is encoded by e ← ⊥ instead of e ← ⊤. Weakly completing the implications we obtain the set

K4 = {e ↔ ⊥, ℓ ↔ e ∧ ¬ab1, ab1 ↔ ⊥}.

If a subject is asked whether she will study late in the library, then a model for K4 is constructed as follows. In the first step, e ↔ ⊥ and ab1 ↔ ⊥ are satisfied by the following mapping:

true   false
       e
       ab1

Under this mapping the right-hand side of the equivalence ℓ ↔ e ∧ ¬ab1 is mapped to false and, consequently, ℓ will be mapped to false as well:

true   false
       e
       ab1
       ℓ

The query whether she will study late in the library can now be answered negatively given this model.

The cases, where the denial of the antecedent is combined with alternative and additional arguments, can be modelled in a similar way, but now the alternative argument leads to suppression [6].

3 Skeptical Abduction

3.1 The Affirmation of the Consequent

Consider the conditional if she has an essay to write, then she will study late in the library. As before, it is encoded by the rule ℓ ← e ∧ ¬ab1 and the assumption ab1 ← ⊥. Their weak completion is

K5 = {ℓ ↔ e ∧ ¬ab1, ab1 ↔ ⊥}.

As the least model of this set we obtain:

true   false
       ab1

Under this model the propositional variables ℓ and e are mapped to unknown. Hence, if we observe that she will study late in the library, then this observation cannot be explained by this model. We propose to use abduction [13] in order to explain the observation. Because e is the only undefined propositional letter in this context, the set of abducibles is {e ← ⊤, e ← ⊥}. The observation ℓ can be explained by selecting e ← ⊤ from the set of abducibles, weakly completing it to obtain e ↔ ⊤, and adding this equivalence to K5. Thus, we obtain K1 again and conclude that she has an essay to write.

3.2 Alternative Arguments and the Affirmation of the Consequent

Consider the conditionals if she has an essay to write, then she will study late in the library and if she has a textbook to read, then she will study late in the library. As in Section 2.2 they are encoded by two rules and two assumptions, which are weakly completed to obtain

K6 = {ℓ ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2), ab1 ↔ ⊥, ab2 ↔ ⊥}.

As the least model of this set we obtain:

true   false
       ab1
       ab2

Under this model the propositional variables ℓ, e, and t are mapped to unknown. Hence, if we observe that she will study late in the library, then this observation cannot be explained by this model. In order to explain the observation we consider the set {e ← ⊤, e ← ⊥, t ← ⊤, t ← ⊥} of abducibles because e and t are undefined in K6. There are two minimal explanations, viz. e ← ⊤ and t ← ⊤. Both are weakly completed to obtain e ↔ ⊤ and t ↔ ⊤, and are added to K6 yielding K2 and

K7 = {t ↔ ⊤, ℓ ↔ (e ∧ ¬ab1) ∨ (t ∧ ¬ab2), ab1 ↔ ⊥, ab2 ↔ ⊥},

respectively. We can now construct the least models for K2 and K7:

Least model of K2:      Least model of K7:

true   false            true   false
e      ab1              t      ab1
ℓ      ab2              ℓ      ab2

Both models explain ℓ, but they give different reasons for it, viz. e and t. More formally, the literals ℓ, e, t, ¬ab1, and ¬ab2 follow credulously from the background knowledge K6 and the observation ℓ because for each of the literals there exists a minimal explanation such that the literal is true in the least model of the background knowledge and the explanation. But only the literals ℓ, ¬ab1, and ¬ab2 follow skeptically from the background knowledge K6 and the observation ℓ because all literals are true in the least models of the background knowledge and each minimal explanation. Hence, if a subject is asked whether she will study late in the library then a subject constructing only the first model and, thus, reasoning credulously, will answer positively. On the other hand, a subject constructing both models and, thus, reasoning skeptically, will not answer positively. As reported in [1] only 16% of the subjects answer positively. It appears that most subjects either reason credulously and construct only the second model or they reason skeptically.
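The credulous/skeptical distinction can be made concrete with a small brute-force sketch (ours, not the procedure used by the authors; it repeats the fixed-point helper from Section 2 so that it runs on its own): enumerate minimal explanations by increasing size and keep only those literals that receive the same known value in every resulting least model.

```python
from itertools import combinations

def lit_value(lit, m):
    if lit in ("T", "F"): return lit
    if lit.startswith("~"):
        return {"T": "F", "F": "T", "U": "U"}[m.get(lit[1:], "U")]
    return m.get(lit, "U")

def least_model(defs):
    m = {}
    while True:
        new = {}
        for head, bodies in defs.items():
            vals = ["F" if "F" in bv else ("U" if "U" in bv else "T")
                    for bv in ([lit_value(x, m) for x in b] for b in bodies)]
            if "T" in vals: new[head] = "T"
            elif all(v == "F" for v in vals): new[head] = "F"
        if new == m: return m
        m = new

def extend(defs, explanation):
    d = dict(defs)
    for atom, val in explanation:
        d[atom] = d.get(atom, []) + [[val]]   # weakly completed abduced fact/assumption
    return d

K6 = {"l": [["e", "~ab1"], ["t", "~ab2"]], "ab1": [["F"]], "ab2": [["F"]]}
abducibles = [("e", "T"), ("e", "F"), ("t", "T"), ("t", "F")]

# minimal explanations of the observation l, enumerated by increasing size
explanations = []
for k in range(len(abducibles) + 1):
    for cand in combinations(abducibles, k):
        if any(set(e) <= set(cand) for e in explanations): continue
        if least_model(extend(K6, cand)).get("l", "U") == "T":
            explanations.append(cand)

models = [least_model(extend(K6, e)) for e in explanations]
skeptical = {a for a in ("l", "e", "t", "ab1", "ab2")
             if all(m.get(a, "U") == models[0].get(a, "U") != "U" for m in models)}
print(explanations)   # (('e', 'T'),) and (('t', 'T'),)
print(skeptical)      # {'l', 'ab1', 'ab2'}: l, not-ab1 and not-ab2 follow skeptically
```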

4 Indicative Conditionals

In this section we will extend the weak completion semantics to evaluate indicative conditionals. In particular, we will consider obligation and factual conditionals. Consider the conditionals if it rains, then the streets are wet and if it rains, then she takes her umbrella taken from [9]. The conditionals have the same structure, but their semantics appears to be quite different.

4.1 Obligation Conditionals

The first conditional is an obligation conditional because its consequence is obligatory. We cannot easily imagine a case, where the condition it rains is true and its consequence the streets are wet is not. Moreover, the condition appears to be necessary as we cannot easily imagine a situation where the consequence is true and the condition is not. We may be able to imagine cases where a flooding or a tsunami has occurred, but we would expect that such an extraordinary event would have been mentioned in the context. We are also not reasoning about a specific street or a part of a street, where the sprinkler of a careless homeowner has sprinkled water on the street while watering the garden.

4.2 Factual Conditionals

The second conditional is a factual conditional. Its consequence is not obligatory. We can easily imagine the case, where the condition it rains is true and its consequence she takes her umbrella is false. She may have forgotten to take her umbrella or she has decided to take the car and does not need the umbrella. Moreover, the condition does not appear to be necessary as she may have taken the umbrella for many reasons like, for example, protecting her from sun. The condition is sufficient. The circumstance where the condition is true gives us adequate grounds to conclude that the consequence is true as well, but there is no necessity involved.

4.3 Encoding Obligation and Factual Conditionals

When we consider the two conditionals as background knowledge, then their different semantics should be reflected in different encodings. Following the principles developed in Section 2 we obtain

K8 = {s ↔ r ∧ ¬ab4, u ↔ r ∧ ¬ab5, ab4 ↔ ⊥, ab5 ↔ ⊥},


where s, r, and u denote that the streets are wet, it rains, and she takes her umbrella, respectively. Its least model is:

true   false
       ab4
       ab5

The propositional variables s, r, and u are unknown. Because r is undefined in K8, the set of abducibles contains r ← ⊤ and r ← ⊥. Because the second conditional is a factual one, it should not necessarily be the case that r being true implies u being true as well. This can be prevented by adding ab5 ← ⊤ to the set of abducibles because this fact can be used to override the assumption ab5 ← ⊥. Moreover, because the condition of the second conditional is sufficient but not necessary, observing u may not be explained by r being true but by some other reason. Hence, u ← ⊤ is also added to the set of abducibles. Altogether, we obtain the set

A8 = {r ← ⊤, r ← ⊥, ab5 ← ⊤, u ← ⊤}

of abducibles for K8.
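As a small illustration (ours; tagging u ← r ∧ ¬ab5 as factual via the pair ('u', 'ab5') is an assumption about representation, not something the paper prescribes), the set A8 can be assembled mechanically: facts and assumptions for every undefined atom, plus the abnormality and the conclusion of each factual conditional as facts.

```python
K8 = {"s": [["r", "~ab4"]], "u": [["r", "~ab5"]], "ab4": [["F"]], "ab5": [["F"]]}

def abducibles(defs, factual):
    """factual is a list of (conclusion, abnormality) pairs of factual conditionals."""
    atoms = {l.lstrip("~") for bodies in defs.values() for body in bodies
             for l in body if l not in ("T", "F")}
    undefined = atoms - set(defs)
    abds = {(a, "T") for a in undefined} | {(a, "F") for a in undefined}
    for conclusion, ab in factual:
        abds.add((ab, "T"))          # the abnormality may be abduced as a fact
        abds.add((conclusion, "T"))  # the conclusion may hold for some other reason
    return abds

print(abducibles(K8, [("u", "ab5")]))
# {('r', 'T'), ('r', 'F'), ('ab5', 'T'), ('u', 'T')} -- the set A8
```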

4.4 The Evaluation of Indicative Conditionals

Let if X then Y be a conditional, where the condition X and the consequence Y are literals. We would like to evaluate the conditional with respect to some background knowledge. The background knowledge is represented by a finite set K of definitions and a finite set A of abducibles. As discussed in Section 2.1, each set of definitions has a unique least model; let M be this model. Considering the sets K8 and A8, then let M8 be the least model of K8, i.e. the mapping, where ab4 and ab5 are mapped to false and all other propositional letters occurring in the example are mapped to unknown.

Because M is a mapping assigning a truth value to each formula, we can simply write M(X) or M(Y) to obtain the truth values for the literals X and Y, respectively. The given conditional if X then Y shall be evaluated as follows:

1. If M(X) is true, then the conditional is assigned to M(Y).
2. If M(X) is false, then the conditional is assigned to true.
3. If M(X) is unknown, then the conditional is evaluated with respect to the skeptical consequences of K given A and considering X as an observation.

The first case is the standard one: The condition X of the conditional is true and, hence, the value of the conditional hinges on the value of the consequence Y. If Y is mapped to true, then the conditional is true; if Y is mapped to unknown, then the conditional is unknown; if Y is mapped to false, then the conditional is false.

The second case is also standard if conditionals are viewed from a purely logical point: if X is mapped to false, then the conditional is true independent of the value of the consequence Y. However, humans seem to treat conditionals whose condition is false differently. In particular, the conditional may be viewed as a counterfactual. In this case, the background knowledge needs to be revised such that the condition becomes true. This case has been considered in [5], but it is beyond the scope of this introduction to discuss it here.

The third case is interesting: If the condition of a conditional is unknown, then we view the condition as an observation which needs to be explained. Moreover, we consider only skeptical consequences computed with respect to minimal explanations.
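The following self-contained sketch (ours) puts the three cases together for K8 and A8; it repeats the fixed-point and minimal-explanation helpers from the earlier sketches, and it reads "evaluated with respect to the skeptical consequences" as: the conditional gets the value of Y if Y receives the same known value in the least model of every minimal explanation of X, and is unknown otherwise.

```python
from itertools import combinations

def lit_value(lit, m):
    if lit in ("T", "F"): return lit
    if lit.startswith("~"):
        return {"T": "F", "F": "T", "U": "U"}[m.get(lit[1:], "U")]
    return m.get(lit, "U")

def least_model(defs):
    m = {}
    while True:
        new = {}
        for head, bodies in defs.items():
            vals = ["F" if "F" in bv else ("U" if "U" in bv else "T")
                    for bv in ([lit_value(x, m) for x in b] for b in bodies)]
            if "T" in vals: new[head] = "T"
            elif all(v == "F" for v in vals): new[head] = "F"
        if new == m: return m
        m = new

def extend(defs, explanation):
    d = dict(defs)
    for atom, val in explanation:
        d[atom] = d.get(atom, []) + [[val]]
    return d

def minimal_explanations(defs, abducibles, observation):
    found = []
    for k in range(len(abducibles) + 1):
        for cand in combinations(abducibles, k):
            if any(set(e) <= set(cand) for e in found): continue
            if lit_value(observation, least_model(extend(defs, cand))) == "T":
                found.append(cand)
    return found

def conditional(defs, abducibles, X, Y):
    m = least_model(defs)
    if lit_value(X, m) == "T": return lit_value(Y, m)    # case 1
    if lit_value(X, m) == "F": return "T"                # case 2 (no counterfactual revision here)
    models = [least_model(extend(defs, e))
              for e in minimal_explanations(defs, abducibles, X)]
    values = {lit_value(Y, mm) for mm in models}         # case 3: skeptical evaluation
    return values.pop() if len(values) == 1 else "U"

K8 = {"s": [["r", "~ab4"]], "u": [["r", "~ab5"]], "ab4": [["F"]], "ab5": [["F"]]}
A8 = [("r", "T"), ("r", "F"), ("ab5", "T"), ("u", "T")]

print(conditional(K8, A8, "~s", "~r"))   # T: if the streets are not wet, it did not rain
print(conditional(K8, A8, "~u", "~r"))   # U: two minimal explanations disagree on r
print(conditional(K8, A8, "s", "r"))     # T: if the streets are wet, it rained
print(conditional(K8, A8, "u", "r"))     # U: she may have taken the umbrella anyway
```

The four printed values correspond to the examples discussed in Sections 4.5 and 4.6 below.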

4.5 The Denial of the Consequent

As a first example consider the conditional if the streets are not wet, then it did not rain (if ¬s then ¬r). Its condition ¬s is unknown under M8. Applying abduction we find the only minimal explanation r ← ⊥ for the observation ¬s. Together with the background knowledge K8 we obtain

K9 = {s ↔ r ∧ ¬ab4, u ↔ r ∧ ¬ab5, ab4 ↔ ⊥, ab5 ↔ ⊥, r ↔ ⊥}.

Its least model is:

true   false
       ab4
       ab5
       r
       s
       u

It explains ¬s. Moreover, the consequence ¬r of the conditional is mapped to true making the conditional true as expected.

As a second example consider the conditional if she did not take her umbrella, then it did not rain (if ¬u then ¬r). Its condition ¬u is unknown under M8. Applying abduction we find two minimal explanations for the observation ¬u, viz. r ← ⊥ and ab5 ← ⊤. Together with the background knowledge K8 we obtain K9 and

K10 = {s ↔ r ∧ ¬ab4, u ↔ r ∧ ¬ab5, ab4 ↔ ⊥, ab5 ↔ ⊥ ∨ ⊤},

respectively. Their least models are:

Least model of K9:      Least model of K10:

true   false            true   false
       ab4              ab5    ab4
       ab5                     u
       r
       s
       u

Whereas the first explanation explains ¬u by stating that it did not rain, the second explanation explains ¬u by stating that the abnormality ab5 is true. She may have simply forgotten her umbrella when she left home. Whereas the first explanation entails that it did not rain, the background knowledge and the second explanation entail neither r nor ¬r. Hence, ¬r follows credulously, but not skeptically from the background knowledge and the observation ¬u. Because conditionals are evaluated skeptically, the conditional is evaluated to unknown as expected.

4.6 The Affirmation of the Consequent

As another example consider the conditional if the streets are wet, then it rained (if s then r). Its condition s is unknown under M8. Applying abduction we find the only minimal explanation r ← ⊤ for the observation s. Together with the background knowledge K8 we obtain:

K11 = {s ↔ r ∧ ¬ab4, u ↔ r ∧ ¬ab5, ab4 ↔ ⊥, ab5 ↔ ⊥, r ↔ ⊤}.

Its least model is:

true   false
r      ab4
s      ab5
u

It explains s. Moreover, the consequence r of the conditional is mapped to true making the conditional true as well.

As final example consider the conditional if she took her umbrella, then it rained (if u then r). Its condition u is again unknown under M8. Applying abduction we find two minimal explanations, viz. r ← ⊤ and u ← ⊤. Together with the background knowledge K8 we obtain K11 and

K12 = {s ↔ r ∧ ¬ab4, u ↔ (r ∧ ¬ab5) ∨ ⊤, ab4 ↔ ⊥, ab5 ↔ ⊥},

respectively. Their least models are:

Least model of K11:     Least model of K12:

true   false            true   false
r      ab4              u      ab4
s      ab5                     ab5
u

Whereas the first explanation explains u by stating that it rained, the second explanation explains u by stating that she took her umbrella for whatever reason. As before, r follows credulously but not skeptically. Hence, the conditional is evaluated to unknown. Skeptical reasoning yields the expected answer again, whereas a credulous approach does not.

In [9] it is also shown that the approach adequately models the abstract as well as social version of the selection task [12, 28]. The conditional if there is the letter D on one side of the card, then there is the number 3 on the other side is considered as a factual one with necessary condition, whereas the conditional if a person is drinking beer, then the person must be over 19 years of age is considered as an obligation with sufficient condition. Reasoning skeptically yields the adequate answers.

5 Conclusion

The weak completion semantics is a novel cognitive theory which has been applied to adequately model various human reasoning tasks. Background knowledge is encoded as a set of definitions based on the following principles:

– positive information is encoded as facts,
– negative information is encoded as assumptions,
– conditionals are encoded as licenses for inferences, and
– the only-if halves of definitions are added.

For each set of definitions a set of abducibles is constructed as follows:

– all facts and assumptions for the propositional letters which are undefined in the background knowledge are added,
– the abnormalities of factual conditionals are added as facts, and
– the conclusions of conditionals with sufficient condition are added as facts.

The background knowledge admits a least supported model under Łukasiewicz logic, which can be computed as the least fixed point of an appropriate semantic operator. Reasoning is performed with respect to the least supported model. If an observation is unknown under the least supported model, then skeptical abduction using minimal explanations is applied. There exists a connectionist realization.

The approach presented in this paper is restricted to propositional logic and considers neither counterfactuals nor contextual abduction. These extensions are presented in [5, 10, 22, 23]. In particular, if the weak completion semantics is extended to first-order logic, then additional principles are applied in the construction of the background knowledge like

– existential import and Gricean implicature,
– unknown generalization,
– search for alternative models,
– converse interpretation,
– blocking of conclusions by double negatives,
– negation by transformation,

but it is beyond the scope of this introduction to discuss these principles.

There are a variety of open problems and questions. For example, skeptical abduction is exponential [11, 18]. Hence, it is infeasible that humans reason skeptically if the reasoning episodes become larger. We hypothesize that humans generate some, but usually not all minimal explanations and reason skeptically with respect to them. Which explanations are generated? Are short or simple explanations preferred? Are more explanations generated if more time is available? Is the generation of explanations biased and, if so, how is it biased? Does attention play a role?


Acknowledgements The authors would like to thank Ana Oliveira da Costa, Luís Moniz Pereira, Tobias Philipp, Marco Ragni, and Christoph Wernhard for many useful discussions and comments.

References

1. R. Byrne. Suppressing valid inferences with conditionals. Cognition, 31:61–83, 1989.

2. K. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and Databases, pages 293–322. Plenum, New York, 1978.

3. E.-A. Dietz and S. Hölldobler. A new computational logic approach to reason with conditionals. In F. Calimeri, G. Ianni, and M. Truszczynski, editors, Logic Programming and Nonmonotonic Reasoning, 13th International Conference, LPNMR, volume 9345 of Lecture Notes in Artificial Intelligence, pages 265–278. Springer, 2015.

4. E.-A. Dietz, S. Hölldobler, and R. Hops. A computational logic approach to human spatial reasoning. In IEEE Symposium Series on Computational Intelligence, pages 1637–1634, 2015.

5. E.-A. Dietz, S. Hölldobler, and L. M. Pereira. On conditionals. In G. Gottlob, G. Sutcliffe, and A. Voronkov, editors, Global Conference on Artificial Intelligence, volume 36 of EPiC Series in Computing, pages 79–92. EasyChair, 2015.

6. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the suppression task. In N. Miyake, D. Peebles, and R. P. Cooper, editors, Proceedings of the 34th Annual Conference of the Cognitive Science Society, pages 1500–1505. Cognitive Science Society, 2012.

7. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the abstract and the social case of the selection task. In Proceedings Eleventh International Symposium on Logical Formalizations of Commonsense Reasoning, 2013. commonsensereasoning.org/2013/proceedings.html.

8. E.-A. Dietz Saldanha, S. Hölldobler, C. D. P. Kencana Ramli, and L. Palacios Medinacelli. A core method for the weak completion semantics with skeptical abduction. Technical report, TU Dresden, International Center for Computational Logic, 2017. (submitted).

9. E.-A. Dietz Saldanha, S. Hölldobler, and I. Louredo Rocha. Obligation versus factual conditionals under the weak completion semantics. In S. Hölldobler, A. Malikov, and C. Wernhard, editors, Proceedings of the Second Young Scientists' International Workshop on Trends in Information Processing, volume 1837, pages 55–64. CEUR-WS.org, 2017. http://ceur-ws.org/Vol-1837/.

10. E.-A. Dietz Saldanha, S. Hölldobler, and L. M. Pereira. Contextual reasoning: Usually birds can abductively fly. In Logic Programming and Nonmonotonic Reasoning, 14th International Conference, LPNMR, 2017. (to appear).

11. E.-A. Dietz Saldanha, S. Hölldobler, and T. Philipp. Contextual abduction and its complexity issues. In Proceedings of the 4th International Workshop on Defeasible and Ampliative Reasoning, 2017. (to appear).

12. R. Griggs and J. Cox. The elusive thematic materials effect in the Wason selection task. British Journal of Psychology, 73:407–420, 1982.

13. C. Hartshorne and A. Weiss, editors. Collected Papers of Charles Sanders Peirce. Belknap Press, 1932.


14. S. Hölldobler. Weak completion semantics and its applications in human reasoning. In U. Furbach and C. Schon, editors, Bridging 2015 – Bridging the Gap between Human and Automated Reasoning, volume 1412 of CEUR Workshop Proceedings, pages 2–16. CEUR-WS.org, 2015. http://ceur-ws.org/Vol-1412/.

15. S. Hölldobler and Y. Kalinke. Towards a new massively parallel computational model for logic programming. In Proceedings of the ECAI94 Workshop on Combining Symbolic and Connectionist Processing, pages 68–77. ECCAI, 1994.

16. S. Hölldobler and C. D. P. Kencana Ramli. Logic programs under three-valued Łukasiewicz semantics. In P. M. Hill and D. S. Warren, editors, Logic Programming, volume 5649 of Lecture Notes in Computer Science, pages 464–478. Springer-Verlag Berlin Heidelberg, 2009.

17. S. Hölldobler and C. D. P. Kencana Ramli. Logics and networks for human reasoning. In C. Alippi, M. M. Polycarpou, C. G. Panayiotou, and G. Ellinas, editors, Artificial Neural Networks – ICANN, volume 5769 of Lecture Notes in Computer Science, pages 85–94. Springer-Verlag Berlin Heidelberg, 2009.

18. S. Hölldobler, T. Philipp, and C. Wernhard. An abductive model for human reasoning. In Proceedings Tenth International Symposium on Logical Formalizations of Commonsense Reasoning, 2011. commonsensereasoning.org/2011/proceedings.html.

19. S. Khemlani and P. N. Johnson-Laird. Theories of the syllogism: A meta-analysis. Psychological Bulletin, 138(3):427–457, 2012.

20. S. Kleene. Introduction to Metamathematics. North-Holland, 1952.

21. J. Łukasiewicz. O logice trójwartościowej. Ruch Filozoficzny, 5:169–171, 1920. English translation: On Three-Valued Logic. In: Jan Łukasiewicz Selected Works (L. Borkowski, ed.), North-Holland, 87–88, 1990.

22. A. Oliveira da Costa, E.-A. Dietz Saldanha, and S. Hölldobler. Monadic reasoning using weak completion semantics. In S. Hölldobler, A. Malikov, and C. Wernhard, editors, Proceedings of the Second Young Scientists' International Workshop on Trends in Information Processing, volume 1837. CEUR-WS.org, 2017. http://ceur-ws.org/Vol-1837/.

23. A. Oliveira da Costa, E.-A. Dietz Saldanha, S. Hölldobler, and M. Ragni. A computational logic approach to human syllogistic reasoning. In Proceedings of the 39th Annual Conference of the Cognitive Science Society, 2017. (to appear).

24. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. An abductive reasoning approach to the belief-bias effect. In C. Baral, G. D. Giacomo, and T. Eiter, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the 14th International Conference, pages 653–656, Cambridge, MA, 2014. AAAI Press.

25. L. M. Pereira, E.-A. Dietz, and S. Hölldobler. Contextual abductive reasoning with side-effects. In I. Niemelä, editor, Theory and Practice of Logic Programming (TPLP), volume 14, pages 633–648, Cambridge, UK, 2014. Cambridge University Press.

26. N. Rescher. Many-valued logic. McGraw-Hill, New York, NY, 1969.

27. K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science. MIT Press, 2008.

28. P. C. Wason. Reasoning about a rule. The Quarterly Journal of Experimental Psychology, 20:273–281, 1968.


Informalizing Formal Logic*

Antonis Kakas

Department of Computer Science, University of Cyprus

Abstract. This paper discusses how the basic notions of formal logic can be expressed in terms of argumentation and how formal classical (or deductive) reasoning can be captured as a dialectic argumentation process. Classical propositional logical entailment of a formula is understood via the winning arguments between those supporting the formula and arguments supporting its contradictory or negated formula. Hence both informal and formal logic are captured uniformly in terms of an argumentation and its dialectic process.

1 Introduction

Informal Logic is usually equated with argumentation as used in real-life everyday situations. On the other hand, formal logic is concerned with the strict and precise reasoning in mathematics and science. There are several works aiming to capture informal logic in a precise formal setting such as that found in the article “Formalizing informal logic” [10] where informal logic is placed in the formal argumentation framework setting of the Carneades Argumentation System [3].

This paper is concerned with the other direction of linking formal logic to informal logic – taking informal logic as synonymous to argumentation. The aim is to reconstruct formal logic entirely in terms of argumentation enabling us to view formal deductive reasoning of classical logic as a process of dialectic argumentation.

The paper rests on the technical work of Argumentation Logic [5, 6] where this reformulation of classical Propositional Logic in terms of argumentation is carried out in formally precise terms. This work is based on notions coming from the fairly recent development of argumentation theory in Artificial Intelligence. The purpose of this paper is to unravel the technical results and present them in a generally accessible way thus providing a uniform argumentation view of both informal and formal logic.

Informalizing formal logic will be possible as a limiting case of “strict dialectic argumentation” where the arena of arguments together with the notions of counter-argument and defending argument are tightly fixed. This rigidity of the argumentation framework is to be expected since our task is to recover strict formal reasoning. The importance though of this reformulation of formal logical reasoning is that the strictness in the argumentation framework can be subsequently relaxed in cases where this is appropriate, as for example in commonsense reasoning. As a result we have a uniform way of capturing both formal and informal reasoning, smoothly moving from one to the other.

* A full version of this paper is in preparation.

2 Logical Arguments

The construction of arguments in informal logic typically follows some accepted argument schemas that would link premises to a conclusion or a position of the argument. Logical arguments are arguments whose link between premises and supporting position rests on a precise logical proof in some formal logical system such as Classical Logic1. Hence to informalize formal logic one starts by considering the set of proof rules in a logical proof system as argument schemes. Arguments can then be identified with sets of logical formulae that under some of the proof rule argument schemes derive and thus support a conclusion or position of the argument. The chosen proof rule argument schemes are called direct argument schemes. The support of a conclusion φ by an argument A is given through a direct derivation of φ from A. This will be denoted by A ⊢DD φ where DD denotes the chosen set of proof rules.

There are two important conditions that need to be applied to this choice of proof rule argument schemes. The first is that these argument schemes of proof rules need to be considered as strict schemes, i.e. that arguments constructed under these cannot be defeated by questioning the validity of the chosen proof rules. The other condition has a more technical nature and requires that the proof rules of Reductio ad Absurdum (RA) are excluded from this initial choice of core argument schemes. This rests on the observation that the RA rules contain an element of evaluation of arguments, as they rest on first recognizing that their posited hypothesis (or argument) is inconsistent (invalid), and hence cannot be considered as a primary scheme of construction of arguments.

The main technical task then in the re-formulation of formal logic in argumentation terms is to recover at the semantic level of argumentation the RA proofs of formal logic.

Let us illustrate these ideas with a simple example. Suppose that the premises of a propositional logic theory, T, are given by:

q → ¬p (1)

r → ¬p (2)

Given additional premises about q and r we can construct arguments for and against p. For example, if in addition we are given q in T then we can construct an argument A1 with premises the sentences (1) and q supporting the conclusion ¬p, as there is a direct derivation (using the proof rule of Modus Ponens) of this conclusion from these premises.

1 We will confine ourselves to the case of Classical Logic and more specifically to classical Propositional Logic although the ideas presented would apply more generally to other formal logics.

Given this theory T of premises, to construct arguments that support p we would need to base these on formulae that are outside T. We will call such premises hypotheses and arguments that are built on them hypothetical arguments. For example, we can simply build an argument A2 supporting p based on the hypothesis of p itself. We will see below the significance of this difference in the type of premises used when we consider the argumentation process between arguments. As expected, arguments whose premises are entirely drawn from the given theory will be stronger or preferred to hypothetical arguments allowing for example A1 to win over A2 and as a result the theory T to logically conclude ¬p.

3 Logical Reasoning as Dialectic Argumentation

In an argumentation framework, given a position of interest we can distinguish pro arguments and con arguments, i.e. arguments that support the position and arguments that oppose the position. Arguments from these different classes attack each other or are counter-arguments of each other based on some form of conflict between them2. In formal logic contradiction is captured via the conflict between formulae and their negation. Generally, this (symmetric) attack relation for formal propositional logic can be captured through the joint direct derivation of an inconsistency, namely of any formula and its negation, normally denoted by ⊥. So two arguments A1 and A2 attack each other if and only if A1 ∪ A2 ⊢DD ⊥.
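As a toy illustration (ours; it assumes, as in the example, that the direct derivation ⊢DD uses only Modus Ponens), arguments can be represented as sets of formulae and the symmetric attack relation checked by closing the union of two arguments under the proof rules and looking for a pair of complementary literals.

```python
# Formulae are either literals such as "p" / "~p" or implications (antecedent, consequent).
def closure(formulae):
    """Direct derivation |-DD restricted to the Modus Ponens argument scheme."""
    lits = {f for f in formulae if isinstance(f, str)}
    rules = [f for f in formulae if isinstance(f, tuple)]
    changed = True
    while changed:
        changed = False
        for ante, cons in rules:
            if ante in lits and cons not in lits:
                lits.add(cons)
                changed = True
    return lits

def attacks(arg1, arg2):
    """A1 and A2 attack each other iff A1 u A2 |-DD an atom and its negation."""
    lits = closure(arg1 | arg2)
    return any("~" + l in lits for l in lits if not l.startswith("~"))

T  = {("q", "~p"), ("r", "~p"), "q"}   # premises (1), (2) and the additional premise q
A1 = T                                  # argument from T alone, supporting ~p
A2 = {"p"}                              # hypothetical argument supporting p
print("~p" in closure(A1))   # True: A1 |-DD ~p
print(attacks(A1, A2))       # True: the two arguments are counter-arguments of each other
```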

We will then view formal logical reasoning as a dialectic argumentation process for and against formulae and their negation. Arguments will be evaluated with respect to the other arguments that can be constructed and in particular evaluated against their counter-arguments. Arguments are acceptable when they exhibit a good dialectic quality, namely that they can defend against all attacking arguments. Analogously, an argument is non-acceptable if there is at least one counter-argument that it cannot defend against.

To turn this into a precise definition that would then capture the strict logical reasoning of propositional logic we notice that the defence argument against any counter-argument must also be required to be acceptable and importantly to be acceptable within the context of the original argument that we want to be acceptable. Thus the central notion of acceptability of arguments is a relative notion that is recursively specified by (here S and S0 are sets of arguments):

2 In Artificial Intelligence the attacking relation in an argumentation framework [4, 2] often contains more information than simply this symmetric incompatibility of the arguments involved. This extra information, as we will see below, pertains to the relative strength or preference of the arguments involved in the attack.


“S is acceptable w.r.t. S0 if for any attacking argument A against S there exists a defending argument D that is acceptable w.r.t. S0 extended by S.”

Analogously, for the non-acceptability of S w.r.t. S0 we need to have an attacking argument all of whose possible defences are recursively non-acceptable w.r.t. S0 extended by S.

The defence relation between arguments normally captures the relative strength or preference between arguments. An argument can defend against a counter-argument if it is preferred over the attacking argument or they are non-comparable in preference. The preference and its defence relation in many domains of argumentation comes from domain specific information. Nevertheless for the quite general framework of logical reasoning as captured by Argumentation Logic [5] the preference and ensuing defence relation is minimal. It consists of two elements:

– Arguments which are entirely made out of premises in the given theory T are strictly preferred over arguments that contain hypothetical sentences and thus can be defended against only by other arguments that also consist entirely of premises in T.
– A hypothetical formula φ and its complement φc are equally preferred. We are free to choose equally between the two (provided that one is not also a direct consequence of the given theory T) with this choice allowing us to take the side appropriately needed to defend against attacks.

Note that when the given premises T are consistent the first element of defence means that attacking arguments that are made entirely from T cannot be defended against. Hence an argument that is attacked by an argument made entirely of premises in T cannot be acceptable. Similarly, an argument S made entirely from T is attacked only by arguments containing hypothetical formulae and so can always be defended against by S. Hence such arguments are always acceptable.

In the simple example given above, we can then see that ¬p is acceptably supported, given that the premises T contain the sentences (1), (2) and q.

To illustrate a more complex case of the dialectic argumentation process and how this captures formal logical conclusions of PL, let us consider that instead of q we have the premise:

¬q → r (3)

Hence we are now considering the theory T consisting of sentences (1), (2), and (3). The position of p can only be supported by arguments that contain this as a hypothesis or directly derive this from a set of formulae that contains hypotheses. Then the non-acceptability of such arguments can be determined by considering the counter-argument consisting of the premise (1) from T and hypothesis {q}. The dialectical process of argumentation that shows that this attack cannot be defended against is depicted in Figure 1 where for simplicity we only show the hypothesis part of the arguments involved3.

{p}
  ↑  attack: T ∪ {p} ∪ {q} ⊢DD ⊥
{q}
  ⇑  defence by opposing the hypothesis
{¬q}
  ↑  attack: T ∪ {¬q} ∪ {p} ⊢DD ⊥
{p}

Fig. 1. Dialectic process of argumentation for determining the non-acceptability of p, with respect to the empty set of arguments, given T = {(1), (2), (3)}, in order to determine the classical entailment of ¬p from T. Arguments are shown only by their hypotheses as indicated in brackets.

The informal reading of this figure is as follows: The argument of supposing p is attacked by the hypothesis q. The canonical objection or defence to this counter-argument is to assume ¬q. But this defence is in conflict with the argument p that it is meant to be defending as together they directly derive through (2) and (3) an inconsistency. Hence although in general there is a defence against the objection (by taking the opposite view) this is not possible in the context of the particular argument that we wish to be acceptable.
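The three checks that make up this reading can be replayed mechanically; the sketch below (ours) is a hard-wired walk through Figure 1 rather than a general acceptability procedure, and it repeats the Modus Ponens closure from the previous sketch so that it runs on its own.

```python
def closure(formulae):
    lits = {f for f in formulae if isinstance(f, str)}
    rules = [f for f in formulae if isinstance(f, tuple)]
    changed = True
    while changed:
        changed = False
        for ante, cons in rules:
            if ante in lits and cons not in lits:
                lits.add(cons)
                changed = True
    return lits

def inconsistent(formulae):
    lits = closure(formulae)
    return any("~" + l in lits for l in lits if not l.startswith("~"))

T = {("q", "~p"), ("r", "~p"), ("~q", "r")}   # premises (1), (2) and (3)
position, attack, defence = {"p"}, {"q"}, {"~q"}

print(inconsistent(T | position | attack))    # True: {q} attacks {p}
print(inconsistent(T | attack | defence))     # True: {~q} is the canonical defence against {q}
print(inconsistent(T | position | defence))   # True: but {~q} clashes with {p} itself,
                                              # so the attack cannot be defended against
```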

This example illustrates how defences against attacks to an argument must “hold well together” in the sense that they need to be conflict free or in other words they should not contain an internal attack or counter-argument relation between them. None of the defences should form a con argument against any of the other defences and of course against the original argument of interest. In general, for any acceptable argument there must exist a set of defences against all its attacks that is attack free, i.e. directly consistent under ⊢DD.

This rationality property of the set of defences points to the connection between acceptability of arguments and the formal logical notion of satisfiability of the formulae composing the argument. In fact, in the case where the given premises T are classically consistent and we have a model for the set of formulae in an argument A then we can choose the defences for A from the set of formulae that are made true in this model and hence A would be acceptable. In other words, satisfiability implies acceptability and vice versa and thus for classically consistent premises T formal logical entailment coincides with sceptical argumentation reasoning given by: a formula φ is sceptically concluded by argumentation if and only if φ is dialectically acceptable (w.r.t. the empty set of initially accepted formulae) and ¬φ is dialectically non-acceptable. This then gives the logical equivalence of formal classical (propositional) logic and argumentation logic thus informalizing formal logic.

3 For simplicity we also assume here that ⊢DD contains only the Modus Ponens proof rule argument schema.

4 Beyond Classical Reasoning

Propositional Logic or full Classical Predicate Logic are not equipped or designed to deal with contradictory information. When the given premises T are inconsistent formal classical reasoning collapses where every formula is trivially entailed. In contrast, argumentation is concerned exactly with how to deal with conflicting information and positions. Hence the argumentation based reformulation of formal classical reasoning that we have described above, mainly for the case of consistent premises T, would or should carry through when T becomes classically inconsistent.

Consider in our example that we are given p as an additional premise to those of {(1), (2), (3)} that we already have. This turns the set of premises classically inconsistent. But the argumentation-based formulation of logic that we have described above will not trivialize. For example, it would sceptically conclude that p holds, without also concluding that ¬p holds as PL does.

The dialectic argumentation proof in Figure 1, that gave us earlier the conclusion that p is non-acceptable (and hence that ¬p holds since also it is acceptable), now changes. Indeed, we now have another way to defend against the attack(s) containing the hypothesis q. We can now defend using the premise p that, as we have explained above, is preferred over arguments that contain hypotheses as it is made purely of sentences in the given premises T. On the other hand, any argument supporting ¬p will be attacked by the argument made purely from the premise p. But this cannot be defended against since there is no direct or explicit information to its contrary in the premises. Hence p is acceptable and any argument supporting ¬p is not acceptable, thus p is sceptically concluded. Another example of escaping from the trivialization of formal classical reasoning would be the case where we have in the premises T both p and ¬p. Then each of these premises would defend against each other and so both arguments would be acceptable and therefore there will be no sceptical conclusion for whether p holds or not.

The strict conditions on the argumentation framework that we have imposed so that we can match the strict reasoning of formal logic can be further relaxed by allowing the premises themselves to take a defeasible nature, e.g. implications to represent only a “normally or mostly” nature of associating their condition with their conclusion. Then relative preferences amongst this defeasible knowledge enriches the defence relation by rendering some arguments stronger and hence able to defend against (some of) their counter-arguments but not vice-versa.

This is particularly appropriate when we consider the informal logic of common sense human reasoning [9, 8, 1] where people normally reason within a context and where the common sense knowledge is in this form of loose associations. General or individual human biases give preference to some of these statements and we can then understand common sense reasoning in argumentation terms in the same way we have expressed formal logical reasoning. Argumentation thus gives a uniform umbrella framework covering the whole spectrum of reasoning from the very strict formal reasoning to the flexible informal reasoning.

5 Conclusions

We have described how formal classical reasoning can be captured through the same process of dialectic argumentation that is normally associated with informal logic. This reformulation of logic in terms of argumentation has been shown [6] to be complete for propositional logic. The same approach can be applied more generally to first order predicate logic. An interesting example of this is that of Aristotelian syllogisms when these are seen as canonical forms of strict classical reasoning of predicate logic. Furthermore, syllogisms have been studied as an example of human cognitive reasoning, see e.g. [7], where it is observed that humans do not reason according to formal logic but that their interpretation of syllogism is indeed a case of informal logic. In a recent challenge4 to model the cognitive syllogistic reasoning of humans, argumentation was shown as a promising approach towards this goal.

By varying the degree of flexibility within the argumentation framework and its dialectic process we can move from formal logic to informal logic and back. Argumentation thus provides a way to unify these two worlds of logic, normally considered as very different, under the same conceptual framework. It provides a uniform umbrella framework covering the whole spectrum of reasoning from the very strict formal reasoning to the extremely flexible informal human reasoning.

Acknowledgement

I would like to thank Loizos Michael and Francesca Toni for their continued collaboration on argumentation. This has been very useful in writing this paper.

References

1. I.-A. Diakidoy, A. Kakas, L. Michael, and R. Miller. Story Comprehension through Argumentation. In Proceedings of the 5th International Conference on Computational Models of Argument (COMMA), pages 31–42, 2014.

2. P. M. Dung. On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-person Games. Artificial Intelligence, 77:321–357, 1995.

3. Thomas F. Gordon and Douglas Walton. The Carneades argumentation framework – using presumptions and exceptions to model critical questions. In Computational Models of Argument: Proceedings of COMMA 2006, September 11–12, 2006, Liverpool, UK, pages 195–207, 2006.

4. A. Kakas, R. Kowalski, and F. Toni. Abductive Logic Programming. Journal of Logic and Computation, 2(6):719–770, 1992.

4 This challenge was announced at https://www.cc.uni-freiburg.de/syllogchallenge.


5. A. Kakas, F. Toni, and P. Mancarella. Argumentation Logic. In Proceedings of the 5th International Conference on Computational Models of Argument (COMMA), pages 12–27, 2014.

6. Antonis C. Kakas, Paolo Mancarella, and Francesca Toni. On argumentation logic and propositional logic. Studia Logica, Jul 2017.

7. Sangeet Khemlani and P. N. Johnson-Laird. Theories of the syllogism: A meta-analysis. Psychological Bulletin, 138:427–457, 2012.

8. R. Kowalski. Computational Logic and Human Thinking: How to Be Artificially Intelligent. Cambridge University Press, New York, NY, USA, 2011.

9. K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science. MIT Press, 2008.

10. Douglas Walton and Thomas F. Gordon. Formalizing Informal Logic. Informal Logic, 35(4):508–538, 2015.


Agent Morality via Counterfactuals in Logic Programming

Luís Moniz Pereira1 and Ari Saptawijaya2

1 NOVA-LINCS, Lab. for Computer Science and Informatics, Universidade Nova de Lisboa, Portugal
2 Faculty of Computer Science, Universitas Indonesia, Indonesia

Abstract. This paper presents a computational model, via Logic Programming (LP), of counterfactual reasoning with applications to agent morality. Counterfactuals are conjectures about what would have happened, had an alternative event occurred. In the first part, we show how counterfactual reasoning, inspired by Pearl's structural causal model of counterfactuals, is modeled using LP, by benefiting from LP abduction and updating. In the second part, counterfactuals are applied to agent morality, resorting to this LP-based approach. We demonstrate its potential for specifying and querying moral issues, by examining viewpoints on moral permissibility via classic moral principles and examples taken from the literature. Finally, we discuss some potential extensions of our LP approach to cover other aspects of counterfactual reasoning and show how these aspects are relevant in modeling agent morality.

Keywords: abduction, counterfactuals, logic programming, morality, non-monotonic reasoning.

1 Introduction

Counterfactuals capture the process of reasoning about a past event that did not occur, namely what would have happened had this event occurred; or, vice-versa, to reason about an event that did occur but what if it had not. An example from [5]: Lightning hits a forest and a devastating forest fire breaks out. The forest was dry after a long hot summer and many acres were destroyed. One may think of a counterfactual about it, e.g., “if only there had not been lightning, then the forest fire would not have occurred”. Counterfactuals have been widely studied, in philosophy [6, 19], psychology [5, 21, 31]. They also have been studied from the computational viewpoint [4, 11, 26, 27, 39], where approaches in Logic Programming (LP), e.g., [4, 27, 39], are mainly based on probabilistic reasoning.

In the first part of this paper, we report on our approach of using LP abduction and updating in a procedure for evaluating counterfactuals, taking the established approach of Pearl [26] as reference. LP lends itself to Pearl's causal model of counterfactuals: (1) The inferential arrow in a LP rule is adept at expressing causal direction; and (2) LP is enriched with functionalities, such as abduction and defeasible reasoning with updates. They can be exploited to establish the counterfactuals evaluation procedure of Pearl's: LP abduction is employed for providing background conditions from observations made or evidences given, whereas defeasible logic rules allow achieving at select points adjustments to the current model via hypothetical updates of intervention. Our approach therefore concentrates on pure non-probabilistic counterfactual reasoning in LP – thus distinct from but complementing existing probabilistic approaches – by instead resorting to abduction and updating, in order to determine the logical validity of counterfactuals under the Well-Founded Semantics [38].

Counterfactual thinking in moral reasoning has been investigated particularly via psychology experiments (see, e.g., [9, 21]), but it has only been limitedly explored in machine ethics. In the second part of the paper, counterfactual reasoning is applied to machine ethics, an interdisciplinary field that emerges from the need of imbuing autonomous agents with the capacity for moral decision making to enable them to function in an ethically responsible manner via their own ethical decisions. The potential of LP for machine ethics has been reported in [13, 18, 29, 32], where the main characteristics of morality aspects can appropriately be expressed by LP-based reasoning, such as abduction, integrity constraints, preferences, updating, and argumentation. The application of counterfactual reasoning to machine ethics – herein by resorting to our LP approach – therefore aims at more generally taking counterfactuals to the wider context of the aforementioned well-developed LP-based non-monotonic reasoning methods.

In this paper, counterfactuals are specifically engaged to distinguish whether an effect of an action is a cause for achieving a morally dilemmatic goal or merely a side-effect of that action. The distinction is essential for establishing moral permissibility from the viewpoints of the Doctrines of Double Effect and of Triple Effect, as scrutinized herein through several off-the-shelf classic moral examples from the literature. By materializing these doctrines in concrete moral dilemmas, the results of counterfactual evaluation – supported by our LP approach – are readily comparable to those from the literature. Note that, even though the LP technique introduced in this paper is relevant for modeling counterfactual moral reasoning, its use is general, not specific to morality.

In the final part of the paper, we discuss some potential extensions of our LP approach to cover other aspects of counterfactual reasoning. These aspects include assertive counterfactuals, extending the antecedent of a counterfactual with a LP rule, and abducing the antecedent of a counterfactual in the form of intervention. These aspects are relevant in modeling agent morality, which opens the way for further research towards employing LP-based counterfactual reasoning to machine ethics.

2 Abduction in Logic Programming

We start by recapping basic notation in LP and review how abduction is expressed andcomputed in LP.

A literal is either an atom L or its default negation not L, named positive and negative literals, respectively. They are negation complements to each other. The atoms true and false are true and false, respectively, in every interpretation. A logic program is a set of rules of the form H ← B, naturally read as "H if B", where its head H is an atom and its (finite) body B is a sequence of literals. When B is empty (equal to true), the rule is called a fact and simply written H. A rule in the form of a denial, i.e., with false as head, is an integrity constraint.

Abduction is a reasoning method where one chooses from available hypotheses those that best explain the observed evidence, in a preferred sense. In LP, an abductive hypothesis (abducible) is a 2-valued positive literal Ab or its negation complement Ab∗ (which denotes not Ab), whose truth value is not initially assumed, and which does not appear in the head of any rule. An abductive framework is a triple 〈P, A, I〉, where A is the set of abducibles, P is a logic program such that there is no rule in P whose head is in A, and I is a set of integrity constraints.

An observation O is a set of literals, analogous to a query or goal in LP. Abducing an explanation for O amounts to finding consistent abductive solutions S ⊆ A to a goal O, whilst satisfying the integrity constraints, where an abductive solution S entails O, i.e., O is true in the semantics obtained after replacing the abducibles S in P by their abduced truth value. Abduction in LP can be accomplished by a top-down query-oriented procedure for finding a query's abductive solution by need. The solution's abducibles are leaves in its procedural query-rooted call-graph, i.e., the graph is recursively generated by the procedure calls from literals in bodies of rules to heads of rules, and thence to the literals in a rule's body. The correctness of this top-down computation requires the underlying semantics to be relevant, as it avoids computing a whole model (to warrant its existence) in finding an answer to a query. Instead, it suffices to use only the rules relevant to the query – those in its procedural call-graph – to find its truth value. The 3-valued Well-Founded Semantics [38], employed by us, enjoys this relevancy property [8], i.e., it permits finding only the relevant abducibles and their truth values via the aforementioned top-down query-oriented procedure. Those abducibles not mentioned in the solution are indifferent to the query, and remain undefined.
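To fix intuitions, the following is a minimal sketch of top-down abduction by need for a toy propositional program. It is our own hypothetical illustration in standard Prolog (not the system used in this paper), and it ignores default negation, integrity constraints, and the Well-Founded Semantics:

    % Abducibles have no rules concluding them.
    abducible(rain).
    abducible(sprinkler).

    % Program rules, reified as rule(Head, BodyLiterals).
    rule(wet_grass, [rain]).
    rule(wet_grass, [sprinkler]).

    % prove(+Goals, +Assumed0, -Assumed): derive Goals top-down, collecting
    % the abducibles needed as leaves of the query-rooted call-graph.
    prove([], A, A).
    prove([L|Ls], A0, A) :-
        prove_one(L, A0, A1),
        prove(Ls, A1, A).

    prove_one(L, A, A)       :- abducible(L), memberchk(L, A).
    prove_one(L, A0, [L|A0]) :- abducible(L), \+ memberchk(L, A0).
    prove_one(L, A0, A)      :- \+ abducible(L), rule(L, Body), prove(Body, A0, A).

    % ?- prove([wet_grass], [], E).
    % E = [rain] ;
    % E = [sprinkler].

Each answer E is an abductive solution to the observation wet_grass; abducibles not mentioned in E are left untouched.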

3 LP-based Counterfactuals

Our LP approach to evaluating counterfactuals is based on Pearl's approach [26]. Therein, counterfactuals are evaluated based on a probabilistic causal model and a calculus of intervention. Its main idea is to infer background circumstances that are conditional on the current evidence, and subsequently to make a minimal required intervention in the current causal model, so as to comply with the antecedent condition of the counterfactual. The modified model serves as the basis for computing the counterfactual consequent's probability.

Since each step of our LP approach mirrors the corresponding step in Pearl's, our approach immediately compares to Pearl's, benefits from its epistemic adequacy, and its properties rely on those of Pearl's. We apply the idea of Pearl's approach to logic programs, but leave out probabilities, employing instead LP abduction and updating to determine the logical validity of counterfactuals under the Well-Founded Semantics.

3.1 Causation and Intervention in LP

Two important ingredients in Pearl's approach to counterfactuals are the causal model and intervention. With respect to an abductive framework 〈P, A, I〉, observation O corresponds to Pearl's definition of evidence. That is, O has rules concluding it in program P, and hence does not belong to the set of abducibles A. Recall that in Pearl's approach, a model consists of a set of background variables, whose values are conditional on the observed evidence of the case considered. These background variables are not causally explained in the model, as they have no parent nodes in the causal diagram of the model. In terms of LP abduction, they correspond to a set of abducibles E ⊆ A that provide abductive explanations to observation O. Indeed, these abducibles have no preceding causal explanatory mechanism, as they have no rules concluding them in the program.

Besides abduction, our approach also benefits from LP updating, which is supported by well-established theory and properties, cf. [1, 2]. It allows a program to be updated by asserting or retracting rules, thus changing the state of the program. LP updating is therefore appropriate for representing changes and dealing with incomplete information. The specific role of LP updating in our approach is twofold: (1) updating the program with the preferred explanation to the current observation, thus fixing in the program the initial abduced background context of the counterfactual being evaluated; (2) facilitating an apposite adjustment to the causal model by hypothetical updates of causal intervention on the program, affecting defeasible rules. Both roles are sufficiently accomplished by fluent (i.e., state-dependent literal) updates, rather than rule updates. In the first role, explanations are treated as fluents. In the second, reserved predicates are introduced as fluents for the purpose of intervention upon defeasible rules. For the latter role, fluent updates are particularly more appropriate than rule updates (e.g., intervention by retracting rules), because the intervention is hypothetical only. Removing rules from the program would be overkill, as the rules might be needed for elaborating justifications and for introspective debugging.

3.2 Evaluating Counterfactuals in LP

The procedure to evaluate counterfactuals in LP essentially takes the three-step process of Pearl's approach as its reference. The key idea of evaluating counterfactuals with respect to an abductive framework, at some current state (discrete time) T, is as follows.

In step 1, abduction is performed to explain the factual observation.3 The observation corresponds to the evidence that both the antecedent and the consequent literals of the present counterfactual were, in the considered past moment, factually false.4 For otherwise the counterfactual would be trivially true when making the antecedent false, or irrelevant for the aim of making the consequent true. There can be multiple explanations available for an observation; choosing a suitable one among them is a pragmatic issue, which can be dealt with via integrity constraints or preferences [7, 28]. The explanation fixes the abduced context in which the counterfactual is evaluated, by updating the program with the explanation.

3 We assume that people are using counterfactuals to convey truly relevant information rather than to fabricate arbitrary subjunctive conditionals (e.g., "If I had been watching, then I would have seen the cheese on the moon melt during the eclipse"). Otherwise, implicit observations must simply be made explicit observations, to avoid natural language conundrums or ambiguities [12].

4 This interpretation is in line with the corresponding English construct, cf. [15], commonly known as third conditionals.

In step 2, defeasible rules are introduced for the atoms forming the antecedent of the counterfactual. Given a past event E that renders the corresponding antecedent literal false and that held at factual state TE < T, its causal intervention is realized by a hypothetical update H at state TH = TE + ∆H, such that TE < TH < TE + 1 ≤ T. That is, a hypothetical update strictly takes place between two factual states, thus 0 < ∆H < 1. In the presence of defeasible rules, this update permits a hypothetical modification of the program to consistently comply with the antecedent of the counterfactual.

In step 3, the Well-Founded Model (WFM) of the hypothetically modified program is examined to verify whether the consequent of the counterfactual holds true at state T. One can easily reinstate the current factual situation by canceling the hypothetical update, e.g., via a restorative new update with H's complement at state TF = TH + ∆F, such that TH < TF < TE + 1.

Based on the aforementioned ideas, our approach is defined below, abstracting from the above state transition detail. In the sequel, the Well-Founded Model of program P is denoted by WFM(P). As our counterfactual procedure is based on the Well-Founded Semantics, the standard logical consequence relation P |= F used below presupposes the Well-Founded Model of P in verifying the truth of formula F, i.e., whether F is true in WFM(P).

Procedure 1. Let 〈P, A, I〉 be an abductive framework, where program P encodes the modeled situation on which counterfactuals are evaluated. Consider a counterfactual "if Pre had been true, then Conc would have been true", where Pre and Conc are finite conjunctions of literals.

1. Abduction: Compute an explanation E ⊆ A to the observation O = OPre ∪ OConc ∪ OOth, where:
   – OPre = {compl(Li) | Li is in Pre},
   – OConc = {compl(Li) | Li is in Conc}, and
   – OOth is other (possibly empty) observations: OOth ∩ (OPre ∪ OConc) = ∅.
   Update program P with E, obtaining program P ∪ E.

2. Action: For each literal L in conjunction Pre, introduce a pair of reserved meta-predicates make(B) and make_not(B), where B is the atom in L. These two meta-predicates are introduced for the purpose of establishing causal intervention: they are used to express hypothetical alternative events to be imposed. This step comprises two stages:
   (a) Transformation:
       – Add rule B ← make(B) to program P ∪ E.
       – Add not make_not(B) to the body of each rule in P whose head is B. If there is no such rule, add rule B ← not make_not(B) to program P ∪ E.
       Let (P ∪ E)τ be the resulting transform.
   (b) Intervention: Update program (P ∪ E)τ with literal make(B) or make_not(B), for L = B or L = not B, resp. Assuming that Pre is consistent, make(B) and make_not(B) cannot be imposed at the same time.
       Let (P ∪ E)τ,ι be the program obtained after these hypothetical updates of intervention.

3. Prediction: Verify whether (P ∪ E)τ,ι |= Conc and I is satisfied in WFM((P ∪ E)τ,ι).
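The Transformation stage (2a) is mechanical; the following small Prolog sketch (our own hypothetical illustration, with rules reified as rule(Head, BodyList) terms and the reserved make/make_not predicates written as ordinary terms) shows it for a single antecedent atom B:

    % transform(+B, +RulesIn, -RulesOut): add B <- make(B); guard every rule
    % for B with not(make_not(B)); if B has no rule, add B <- not(make_not(B)).
    transform(B, RulesIn, [rule(B, [make(B)]) | Rules1]) :-
        (   member(rule(B, _), RulesIn)
        ->  maplist(guard(B), RulesIn, Rules1)
        ;   Rules1 = [rule(B, [not(make_not(B))]) | RulesIn]
        ).

    guard(B, rule(B, Body), rule(B, Guarded)) :- !,
        append(Body, [not(make_not(B))], Guarded).
    guard(_, Rule, Rule).

    % ?- transform(l, [rule(f, [b, d]), rule(l, [s])], T).
    % T = [rule(l, [make(l)]), rule(f, [b, d]), rule(l, [s, not(make_not(l))])].

The subsequent Intervention stage then simply adds make(B) or make_not(B) as a fact, and Prediction queries Conc against the resulting program.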

This three-step procedure defines valid counterfactuals. Let 〈P, A, I〉 be an abductive framework, where program P encodes the modeled situation on which counterfactuals are evaluated. The counterfactual

“If Pre had been true, then Conc would have been true”

is valid given observation O = OPre ∪ OConc ∪ OOth iff O is explained by E ⊆ A, (P ∪ E)τ,ι |= Conc, and I is satisfied in WFM((P ∪ E)τ,ι).

Since the Well-Founded Semantics supports top-down query-oriented procedures for finding solutions, checking the validity of counterfactuals, i.e., whether their conclusion Conc follows (step 3), given the intervened program transform (step 2) with respect to the abduced background context (step 1), in fact amounts to checking in a derivation tree whether query Conc holds true while also satisfying I.

Example 1. Recall the example in the introduction. Let us slightly complicate it by having two alternative abductive causes for the forest fire, viz., storm (which implies lightning hitting the ground) or barbecue. The storm is accompanied by strong wind that causes dry leaves to fall onto the ground. Note that dry leaves are important for the forest fire in both cases. This example is expressed by the abductive framework 〈P, A, I〉, using abbreviations b, d, f, g, l, s for barbecue, dry leaves, forest fire, leaves on the ground, lightning, and storm, resp., where A = {s, b, s∗, b∗}, I = ∅, and P as follows:

f ← b, d.    f ← b∗, l, d, g.    l ← s.    g ← s.    d.
The use of b∗ in the second rule for f is intended so as to have mutually exclusive explanations.

Consider the counterfactual "if only there had not been lightning, then the forest fire would not have occurred", where Pre = not l and Conc = not f.

1. Abduction: Besides OPre = {l} and OConc = {f}, say that g is observed too: OOth = {g}. Given O = OPre ∪ OConc ∪ OOth, there are two possible explanations: E1 = {s, b∗} and E2 = {s, b}. Consider a scenario where the minimal explanation E1 (in the sense of minimal positive literals) is preferred to update P, to obtain P ∪ E1. This updated program reflects the evaluation context of the counterfactual, where all literals of Pre and Conc were false in the initial factual situation.

2. Action: The transformation results in program (P ∪ E1)τ :
   f ← b, d.    f ← b∗, l, d, g.    g ← s.    d.
   l ← make(l).    l ← s, not make_not(l).

Program (P ∪ E1)τ is updated with make_not(l) as the required intervention, viz., "if there had not been lightning".

3. Prediction: We verify that (P ∪ E1)τ,ι |= not f. That is, not f holds with respect to the intervened modified program for explanation E1 = {s, b∗} and the intervention make_not(l). Note that I = ∅ is trivially satisfied in WFM((P ∪ E1)τ,ι).

We thus conclude that, for this E1 scenario, the given counterfactual is valid.

Example 2. In the other explanatory scenario of Example 1, where E2 (instead of E1) is preferred to update P, the counterfactual is no longer valid. In this case, (P ∪ E1)τ = (P ∪ E2)τ, and the required causal intervention is also the same: make_not(l). But we now have (P ∪ E2)τ,ι ⊭ not f. Indeed, the forest fire would still have occurred, but due to an alternative cause: barbecue.
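To make the two scenarios concrete, here is a small runnable sketch in standard Prolog (our own illustration, not the authors' system): the abduced explanation and the intervention are asserted as facts rather than performed by LP updates, the negative abducible b∗ is approximated by negation as failure on b, and negation as failure stands in for the Well-Founded Semantics, with which it coincides here because the program is stratified.

    :- dynamic s/0, b/0, make_not/1, make/1.

    d.                                  % dry leaves
    g :- s.                             % leaves on the ground
    l :- make(lightning).               % transformed rules for lightning
    l :- s, \+ make_not(lightning).
    f :- b, d.                          % forest fire caused by barbecue
    f :- \+ b, l, d, g.                 % forest fire caused by lightning (b* as \+ b)

    % Scenario E1 = {s, b*}: storm abduced, barbecue excluded.
    % ?- assertz(s), assertz(make_not(lightning)), \+ f.   % no fire: valid
    % Scenario E2 = {s, b}: barbecue abduced as well.
    % ?- assertz(b), f.                 % fire persists via barbecue: not valid

The intervention make_not(lightning) defeats the rule for l, so under E1 the fire cannot be derived, whereas under E2 the barbecue still derives f.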

4 Counterfactuals in Morality

People typically reason about what they should or should not have done when they examine decisions in moral situations. It is therefore natural for them to engage in counterfactual thoughts in such settings. Counterfactual thinking has been investigated in the context of moral reasoning, notably by experimental psychology studies, e.g., to understand the kind of counterfactual alternatives people tend to imagine in contemplating moral behaviors [21] and the influence of counterfactual thoughts in moral judgment [24]. As argued in [9], the function of counterfactual thinking is not just limited to the evaluation process, but occurs also in the reflective one. Through evaluation, counterfactuals help correct wrong behavior in the past, thus guiding future moral decisions. Reflection, on the other hand, permits momentary experiential simulation of possible alternatives, thereby allowing careful consideration before a moral decision is made, and its subsequent justification.

Morality and normality judgments typically correlate. Normality mediates morality with causation and blame judgments. The controllability expressed in counterfactuals mediates between normality, blame and cause judgments. The importance of control, namely the possibility of counterfactual intervention, is highlighted in theories of blame that presume someone responsible only if they had some control of the outcome available [40].

The potential of LP for machine ethics has been reported in [13, 18, 29], and with emphasis on LP abduction and updating in [32]. Here we investigate how moral issues can innovatively be expressed with counterfactual reasoning by resorting to an LP approach. We particularly look into its application for examining viewpoints on moral permissibility, exemplified by classic moral dilemmas from the literature on the Doctrines of Double Effect (DDE) [23] and of Triple Effect (DTE) [17].

DDE was first introduced by Thomas Aquinas in his discussion of the permissibility of self-defense [3]. The current version of this principle emphasizes the permissibility of an action that causes a harm by distinguishing whether this harm is a mere side-effect of bringing about a good result, or rather an intended means to bringing about the same good end [23]. According to the Doctrine of Double Effect, the former action is permissible, whereas the latter is impermissible. In [14], DDE has been utilized to explain the consistency of judgments, shared by subjects from demographically diverse populations, on a number of variants of the classic trolley problem [10]: A trolley is headed toward five people walking on the track, who are unable to get off the track in time. The trolley can nevertheless be diverted onto a side track, thereby preventing it from killing the five people. However, there is a man standing on the side track. The dilemma is therefore whether it is morally permissible to divert the trolley, killing the man but saving the five. DDE permits diverting the trolley, since that action does not intend to harm the man on the side track in order to save the five.

Counterfactuals may provide a general way to examine DDE in moral dilemmas, by distinguishing between a cause and a side-effect as a result of performing an action to achieve a goal. This distinction between causes and side-effects may explain the permissibility of an action in accordance with DDE. That is, if some morally wrong effect E happens to be a cause for a goal G that one wants to achieve by performing an action A, and not a mere side-effect of A, then performing A is impermissible. This is expressed by the counterfactual form below, in a setting where action A is performed to achieve goal G:

   If not E had been true, then not G would have been true.

The evaluation of this counterfactual form identifies the permissibility of action A from its effect E, by identifying whether the latter is a necessary cause for goal G or a mere side-effect of action A. That is, if the counterfactual proves valid, then E is instrumental as a cause of G, and not a mere side-effect of action A. Since E is morally wrong, achieving G that way, by means of A, is impermissible; otherwise, it is not. Note that the evaluation of counterfactuals in this application is considered from the perspective of the agents who perform the action, rather than from others' (e.g., observers').

There have been a number of studies, both in philosophy and psychology, on the relation between causation and counterfactuals. The counterfactual process view of causal reasoning [22], for example, advocates counterfactual thinking as an essential part of the process involved in making causal judgments. This relation between causation and counterfactuals can be important for providing explanations in cases involving harm, which underlie people's moral cognition [36] and trigger other related questions, such as "Who is responsible?", "Who is to blame?", "Which punishment would be fair?", etc. Herein, we explore the connection between causation and counterfactuals, focusing on agents' deliberate actions, rather than on causation and counterfactuals in general. More specifically, our exploration of this topic links it to the Doctrines of Double Effect and Triple Effect and dilemmas involving harm, such as the trolley problem cases. Such cases have also been considered in experimental psychology studies concerning the role of gender and perspective (first vs. third person) in counterfactual thinking in moral reasoning, see [24]. The reader is referred to [6] and [16] for a more general and broad discussion of causation and counterfactuals.

We exemplify an application of this counterfactual form in two off-the-shelf military cases from [35] – abbreviations in parentheses: terror bombing (teb) vs. tactical bombing (tab). The former refers to bombing a civilian target (civ) during a war, thus killing civilians (kic), in order to terrorize the enemy (ror), and thereby get them to end the war (ew). The latter refers to bombing a military target (mil), which will effectively end the war (ew), but with the foreseen consequence of killing the same number of civilians (kic) nearby. According to DDE, terror bombing fails permissibility due to a deliberate element of killing civilians to achieve the goal of ending the war, whereas tactical bombing is accepted as permissible.

Example 3. We first model terror bombing with ew as the goal, by considering the abductive framework 〈Pe, Ae, Ie〉, where Ae = {teb, teb∗}, Ie = ∅ and Pe:

ew ← ror    ror ← kic    kic ← civ    civ ← teb

We consider the counterfactual "if civilians had not been killed, then the war would not have ended", where Pre = not kic and Conc = not ew. The observation O = {kic, ew}, with OOth being empty, has a single explanation Ee = {teb}. The rule kic ← civ transforms into kic ← civ, not make_not(kic). Given the intervention make_not(kic), the counterfactual is valid, because (Pe ∪ Ee)τ,ι |= not ew. That means the morally wrong kic is instrumental in achieving the goal ew: it is a cause for ew by performing teb and not a mere side-effect of teb. Hence teb is DDE morally impermissible.

Example 4. Tactical bombing with the same goal ew can be modeled by the abductive framework 〈Pa, Aa, Ia〉, where Aa = {tab, tab∗}, Ia = ∅ and Pa:

ew ← mil    mil ← tab    kic ← tab
Given the same counterfactual, we now have Ea = {tab} as the only explanation to the same observation O = {kic, ew}. Note that the transform contains rule kic ← tab, not make_not(kic), which is obtained from kic ← tab. By imposing the intervention make_not(kic), one can verify that the counterfactual is not valid, because (Pa ∪ Ea)τ,ι ⊭ not ew. Therefore, the morally wrong kic is just a side-effect in achieving the goal ew. Hence tab is DDE morally permissible.
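The contrast between Examples 3 and 4 can be reproduced with a small hedged sketch under negation as failure (again our own illustration, not the authors' update-based machinery); the two programs are kept apart by the suffixes _e and _a, which are naming assumptions of the sketch:

    :- dynamic make_not/1.

    % Terror bombing, after abducing teb and transforming the rule for kic.
    ew_e  :- ror.
    ror   :- kic_e.
    kic_e :- civ, \+ make_not(kic).
    civ.                                % civ <- teb, with teb abduced

    % Tactical bombing, after abducing tab and transforming the rule for kic.
    ew_a  :- mil.
    mil.                                % mil <- tab, with tab abduced
    kic_a :- \+ make_not(kic).          % kic <- tab, transformed

    % ?- assertz(make_not(kic)).
    % ?- \+ ew_e.    % war does not end: kic is a cause, teb impermissible
    % ?- ew_a.       % war still ends: kic is a side-effect, tab permissible

Under the same intervention make_not(kic), the goal ew fails in the terror-bombing program but still succeeds in the tactical-bombing one, which is exactly the DDE distinction drawn in the text.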

Example 5. Consider two countries, a and its ally b, that stage a concerted terror bombing, modeled by the abductive framework 〈Pab, Aab, Iab〉, where Aab = {teb, teb∗}, Iab = ∅ and Pab below. The abbreviations kic(X) and civ(X) refer to 'killing civilians by country X' and 'bombing a civilian target by country X'. As usual in LP, the underscore (_) represents an anonymous variable.

ew ← ror    ror ← kic(_)    kic(X) ← civ(X)    civ(_) ← teb

Being represented as a single program (rather than a separate knowledge base for each agent), this scenario should appropriately be viewed as a joint action performed by a single agent. Therefore, the counterfactual of interest is "if civilians had not been killed by a and b, then the war would not have ended". That is, the antecedent of the counterfactual is a conjunction: Pre = not kic(a) ∧ not kic(b). Given Eab = {teb}, one can easily verify that (Pab ∪ Eab)τ,ι |= not ew, and the counterfactual is valid: the concerted teb is DDE impermissible.

This application of counterfactuals can be challenged by a more complex scenario, to distinguish moral permissibility according to DDE vs. DTE. DTE [17] refines DDE particularly on the notion of harming someone as an intended means. That is, DTE further distinguishes between doing an action in order that an effect occurs and doing it because that effect will occur. The latter is a new category of action, which is not accounted for in DDE. Though DTE also classifies the former as impermissible, it is more tolerant of the latter (the third effect), i.e., it treats as permissible those actions performed just because instrumental harm will occur.

Kamm [17] proposed DTE to accommodate a variant of the trolley problem, viz., the Loop Case [37]: A trolley is headed toward five people walking on the track, and they will not be able to get off the track in time. The trolley can be redirected onto a side track, which loops back towards the five. A fat man sits on this looping side track, whose body will by itself stop the trolley. Is it morally permissible to divert the trolley to the looping side track, thereby hitting the man and killing him, but saving the five? Most moral philosophers find that diverting the trolley in this case is permissible [25]. In a psychology study [14], 56% of the respondents judged that diverting the trolley in this case is also permissible. To this end, DTE may provide the justification that diverting is permissible because it will hit the man, and not in order to intentionally hit him [17]. Nonetheless, DDE views diverting the trolley in the Loop case as impermissible.

We use counterfactuals to capture the distinct views of DDE and DTE in the Loop case.

Example 6. We model the Loop case with the abductive framework 〈Po, Ao, Io〉, where sav, div, hit, tst, mst stand for save the five, divert the trolley, man hit by the trolley, train on the side track and man on the side track, resp., with sav as the goal, Ao = {div, div∗}, Io = ∅, and Po:

sav ← hit    hit ← tst, mst    tst ← div    mst.
DDE views diverting the trolley as impermissible, because this action redirects the trolley onto the side track, thereby hitting the man. Consequently, it prevents the trolley from hitting the five. To establish the impermissibility of this action, it is required to show the validity of the counterfactual "if the man had not been hit by the trolley, the five people would not have been saved". Given observation O = OPre ∪ OConc = {hit, sav}, its only explanation is Eo = {div}. Note that rule hit ← tst, mst transforms into hit ← tst, mst, not make_not(hit), and the required intervention is make_not(hit). The counterfactual is therefore valid, because (Po ∪ Eo)τ,ι |= not sav. This means hit, as a consequence of action div, is instrumental as a cause of goal sav. Therefore, div is DDE morally impermissible.

DTE considers diverting the trolley as permissible, since the man is already on the side track, without any deliberate action performed in order to place him there. In Po, we have the fact mst ready, without abducing any ancillary action. The validity of the counterfactual "if the man had not been on the side track, then he would not have been hit by the trolley", which can easily be verified, ensures that the unfortunate event of the man being hit by the trolley is indeed the consequence of the man being on the side track. The lack of deliberate action (exemplified here by pushing the man – psh for short) in order to place him on the side track, and whether the absence of this action still causes the unfortunate event (the third effect), is captured by the counterfactual "if the man had not been pushed, then he would not have been hit by the trolley". This counterfactual is not valid, because the observation O = OPre ∪ OConc = {psh, hit} has no explanation E ⊆ Ao, i.e., psh ∉ Ao, and no fact psh exists either. This means that even without this hypothetical but unexplained deliberate action of pushing, the man would still have been hit by the trolley (just because he is already on the side track). Though hit is a consequence of div and instrumental in achieving sav, no deliberate action is required to cause mst, in order for hit to occur. Hence div is DTE morally permissible.
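The two verifiable counterfactuals of the Loop case can likewise be checked with a small hedged sketch under negation as failure (ours, with div abduced and the transform applied to hit and mst):

    :- dynamic make_not/1.

    mst :- \+ make_not(mst).            % man on the side track (transformed fact)
    div.                                % abduced explanation Eo = {div}
    tst :- div.
    hit :- tst, mst, \+ make_not(hit).
    sav :- hit.

    % ?- assertz(make_not(hit)), \+ sav.                % hit is a cause of sav
    % ?- retract(make_not(hit)), assertz(make_not(mst)), \+ hit.
    %                                   % without the man on the track, no hit

The third counterfactual, about pushing, cannot even be set up here: psh is neither an abducible nor a fact of the program, so its observation has no explanation, matching the discussion above.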

Next, we consider a more involved trolley example.

Example 7. Consider a variant of the Loop case, viz., the Loop-Push Case (see also the Extra Push Case in [17]). Differently from the Loop case, now the looping side track is initially empty, and besides the diverting action, an ancillary action of pushing a fat man in order to place him on the side track is additionally performed. This case is modeled by the abductive framework 〈Pp, Ap, Ip〉, where Ap = {div, psh, div∗, psh∗}, Ip = ∅, and Pp:

sav ← hit    hit ← tst, mst    tst ← div    mst ← psh
Recall the counterfactuals considered in the discussion of DDE and DTE in the Loop case:

– "If the man had not been hit by the trolley, the five people would not have been saved." The same observation O = {hit, sav} provides an extended explanation Ep1 = {div, psh}. That is, the pushing action needs to be abduced for having the man on the side track, so the trolley can be stopped by hitting him. The same intervention make_not(hit) is applied to the same transform, resulting in a valid counterfactual: (Pp ∪ Ep1)τ,ι |= not sav.

– "If the man had not been pushed, then he would not have been hit by the trolley." The relevant observation is O = {psh, hit}, explained by Ep2 = {div, psh}. Whereas this counterfactual is not valid in the DTE discussion of the Loop case, it is valid in the Loop-Push case. Given rule psh ← not make_not(psh) in the transform and the intervention make_not(psh), we verify that (Pp ∪ Ep2)τ,ι |= not hit.

From the validity of these two counterfactuals it can be inferred that, given the diverting action, the ancillary action of pushing the man onto the side track causes him to be hit by the trolley, which in turn causes the five to be saved. In the Loop-Push case, DTE agrees with DDE that such a deliberate action (pushing) performed in order to bring about harm (the man hit by the trolley), even for the purpose of a good or greater end (to save the five), is likewise impermissible.

5 Extending LP-based Counterfactuals

Our approach, in Section 3, specifically focuses on evaluating counterfactuals in order to determine their validity. We identify some potential extensions of this LP-based approach to other aspects of counterfactual reasoning:

1. We consider the so-called assertive counterfactuals, where a counterfactual is given as being a valid statement, rather than a statement whose validity has to be determined. The causality expressed by such a valid counterfactual may be useful for refining an existing knowledge base. For instance, suppose we have a rule stating that the lamp is on if the switch is on, written as lamp_on ← switch_on. Clearly, given the fact switch_on, we have lamp_on true. Now consider that the following counterfactual is given as being a valid statement:

“If the bulb had not functioned properly, then the lamp would not be on”

There are two ways that this counterfactual may refine the rule about lamp_on. First, the causality expressed by this counterfactual can be used to transform the rule into:

lamp_on ← switch_on, bulb_ok.

bulb_ok ← not make_not(bulb_ok).

So, the lamp will be on if the switch is on – that is still granted – but subject to an update make_not(bulb_ok), which captures the condition of the bulb. In the alternative, an assertive counterfactual is instead directly translated into an updating rule, and need not transform existing rules.

2. We may extend the antecedent of a counterfactual with a rule, instead of just literals. For example, consider the following program (assuming an empty abduction, so as to focus on the issue):

warm_blood(M) ← mammal(M).
mammal(M) ← dog(M).
mammal(M) ← bat(M).
dog(d).    bat(b).

Querying ?- bat(B), warm_blood(B) assures us that there is a warm-blooded bat, viz., B = b. Now consider the counterfactual:

“If bats were not mammals they would not have warm blood”.

Transforming the above program using our procedure yields:

warm_blood(M) ← mammal(M).
mammal(M) ← make(mammal(M)).
mammal(M) ← dog(M), not make_not(mammal(M)).
mammal(M) ← bat(M), not make_not(mammal(M)).
dog(d).    bat(b).

The antecedent of the given counterfactual can be expressed as the rule:

make_not(mammal(B)) ← bat(B).

We can check using our procedure that, given this rule intervention, the above counterfactual is valid: not warm_blood(b) is true in the intervened modified program (a small executable sketch of this check is given after this list).

3. Finally, we can easily imagine the situation where the antecedent Pre of a counterfactual is not given, though the conclusion Conc is, and we want to abduce Pre in the form of interventions. That is, the task is to abduce make and make_not, rather than imposing them, while respecting the integrity constraints, such that the counterfactual is valid.

   Tabling abductive solutions [33] may be relevant to this problem. Suppose that we already abduced an intervention Pre1 for a given Conc1, and we now want to find Pre2 such that the counterfactual "If Pre1 and Pre2 had been the case, then Conc1 and Conc2 would have been the case" is valid. In particular, when abduction is performed for a more complex conclusion Conc1 and Conc2, the solution Pre1, which has already been abduced and tabled, can be reused in the abduction of such a more complex conclusion, leading to the idea that problems of this kind of counterfactual reasoning can be solved in parts or in a modular way.
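As announced in item 2, the bats example can be checked with a small runnable sketch in standard Prolog (our own illustration; negation as failure replaces the authors' update-based machinery, and make/1 is given a vacuous clause only so that calls to it do not raise existence errors):

    warm_blood(M) :- mammal(M).
    mammal(M) :- make(mammal(M)).
    mammal(M) :- dog(M), \+ make_not(mammal(M)).
    mammal(M) :- bat(M), \+ make_not(mammal(M)).
    dog(d).
    bat(b).

    make(_) :- fail.                       % no positive interventions imposed
    make_not(mammal(B)) :- bat(B).         % rule intervention from the antecedent

    % ?- \+ warm_blood(b).   % succeeds: the counterfactual is valid
    % ?- warm_blood(d).      % succeeds: dogs are unaffected by the intervention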

Indeed, the above three aspects may have relevance in modeling agent morality:

1. In assertive counterfactuals, the causality expressed by a given valid counterfactual can be useful for refining moral rules, which can be achieved through incremental rule updating. This may further the application of moral updating and evolution.

2. The extension of a counterfactual with a rule antecedent opens up another possibility to express exceptions in moral rules. For instance, one can express an exception about lying, such as "If lying had been done to save an innocent from a murderer, then it would not have been wrong". That is, given a knowledge base about lying for human H:

lying_wrong(H) ← lying(H), not make_not(lying_wrong(H)).

The antecedent of the above counterfactual can be represented as a rule:

make_not(lying_wrong(H)) ← save_from_murderer(H, I), innocent(I).

3. Given that the conclusion of a counterfactual is some moral wrong W, abducing its antecedent in the form of intervention can be used for expressing a prevention of W, viz., "What could I have done to prevent a wrong W?".

6 Concluding Remarks

This paper presents a formulation of counterfactual evaluation by means of LP abduction and updating. The approach corresponds to the three-step process in Pearl's structural theory, but omits probability to concentrate on a naturalized logic. We also addressed how to examine (non-probabilistic) moral reasoning about permissibility, employing this LP approach to distinguish between causes and side-effects of agents' actions performed to achieve a goal.

The three potential extensions of our LP approach to cover other aspects of counterfactual reasoning, as well as their applications to machine ethics, are worth exploring in future work. Apart from these identified extensions, our present LP-based approach for evaluating counterfactuals may also be suitable for addressing moral justification, via compound counterfactuals: "Had I known what I know today, then if I were to have done otherwise, something preferred would have followed". Such counterfactuals, typically imagining alternatives with worse effect – the so-called downward counterfactuals [20] – may provide moral justification for what was done due to lack, at the time, of the current knowledge. This is accomplished by evaluating what would have followed if the intent had been otherwise, other things (including present knowledge) being equal. It may justify that what would have followed is not morally superior to the actually ensued consequence. We have started, in [30], to explore the application of our present LP-based approach to evaluate compound counterfactuals for moral justification. A further application of compound counterfactuals, to justify an exception that renders an action permissible, possibly leading to agents' argumentation following Scanlon's contractualism [34], is another path of future investigation.

Acknowledgements

Luís Moniz Pereira acknowledges the support from Fundação para a Ciência e a Tecnologia (FCT/MEC) NOVA LINCS PEst UID/CEC/04516/2013.

References

1. J. J. Alferes, A. Brogi, J. A. Leite, and L. M. Pereira. Evolving logic programs. In Procs. European Conference on Artificial Intelligence (JELIA 2002), volume 2424 of LNCS, pages 50–61. Springer, 2002.
2. J. J. Alferes, J. A. Leite, L. M. Pereira, H. Przymusinska, and T. Przymusinski. Dynamic updates of non-monotonic knowledge bases. Journal of Logic Programming, 45(1-3):43–70, 2000.
3. T. Aquinas. Summa Theologica II-II, Q.64, art. 7, "Of Killing". In W. P. Baumgarth and R. J. Regan, editors, On Law, Morality, and Politics. Hackett, 1988.
4. C. Baral and M. Hunsaker. Using the probabilistic logic programming language P-log for causal and counterfactual reasoning and non-naive conditioning. In Procs. 20th International Joint Conference on Artificial Intelligence (IJCAI), 2007.
5. R. M. J. Byrne. The Rational Imagination: How People Create Alternatives to Reality. MIT Press, Cambridge, MA, 2007.
6. J. Collins, N. Hall, and L. A. Paul, editors. Causation and Counterfactuals. MIT Press, Cambridge, MA, 2004.
7. P. Dell'Acqua and L. M. Pereira. Preferential theory revision. Journal of Applied Logic, 5(4):586–601, 2007.
8. J. Dix. A classification theory of semantics of normal logic programs: II. weak properties. Fundamenta Informaticae, 3(22):257–288, 1995.
9. K. Epstude and N. J. Roese. The functional theory of counterfactual thinking. Personality and Social Psychology Review, 12(2):168–192, 2008.
10. P. Foot. The problem of abortion and the doctrine of double effect. Oxford Review, 5:5–15, 1967.
11. M. L. Ginsberg. Counterfactuals. Artificial Intelligence, 30(1):35–79, 1986.
12. P. Grice. Studies in the Way of Words. Harvard University Press, Cambridge, MA, 1991.
13. T. A. Han, A. Saptawijaya, and L. M. Pereira. Moral reasoning under uncertainty. In Procs. 18th International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR), volume 7180 of LNCS, pages 212–227. Springer, 2012.
14. M. Hauser, F. Cushman, L. Young, R. K. Jin, and J. Mikhail. A dissociation between moral judgments and justifications. Mind and Language, 22(1):1–21, 2007.
15. M. Hewings. Advanced Grammar in Use with Answers: A Self-Study Reference and Practice Book for Advanced Learners of English. Cambridge University Press, New York, NY, 2013.
16. C. Hoerl, T. McCormack, and S. R. Beck, editors. Understanding Counterfactuals, Understanding Causation: Issues in Philosophy and Psychology. Oxford University Press, Oxford, UK, 2011.
17. F. M. Kamm. Intricate Ethics: Rights, Responsibilities, and Permissible Harm. Oxford University Press, Oxford, UK, 2006.
18. R. Kowalski. Computational Logic and Human Thinking: How to be Artificially Intelligent. Cambridge University Press, New York, NY, 2011.
19. D. Lewis. Counterfactuals. Harvard University Press, Cambridge, MA, 1973.
20. K. D. Markman, I. Gavanski, S. J. Sherman, and M. N. McMullen. The mental simulation of better and worse possible worlds. Journal of Experimental Social Psychology, 29:87–109, 1993.
21. R. McCloy and R. M. J. Byrne. Counterfactual thinking about controllable events. Memory and Cognition, 28:1071–1078, 2000.
22. T. McCormack, C. Frosch, and P. Burns. The relationship between children's causal and counterfactual judgements. In C. Hoerl, T. McCormack, and S. R. Beck, editors, Understanding Counterfactuals, Understanding Causation. Oxford University Press, Oxford, UK, 2011.
23. A. McIntyre. Doctrine of double effect. In E. N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Center for the Study of Language and Information, Stanford University, Fall 2011 edition, 2004. http://plato.stanford.edu/archives/fall2011/entries/double-effect/.
24. S. Migliore, G. Curcio, F. Mancini, and S. F. Cappa. Counterfactual thinking in moral judgment: an experimental study. Frontiers in Psychology, 5:451, 2014.
25. M. Otsuka. Double effect, triple effect and the trolley problem: Squaring the circle in looping cases. Utilitas, 20(1):92–110, 2008.
26. J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, Cambridge, MA, 2009.
27. L. M. Pereira, J. N. Aparício, and J. J. Alferes. Counterfactual reasoning based on revising assumptions. In Procs. International Symposium on Logic Programming (ILPS 1991), pages 566–577. MIT Press, 1991.
28. L. M. Pereira, P. Dell'Acqua, A. M. Pinto, and G. Lopes. Inspecting and preferring abductive models. In K. Nakamatsu and L. C. Jain, editors, The Handbook on Reasoning-Based Intelligent Systems, pages 243–274. World Scientific Publishers, 2013.
29. L. M. Pereira and A. Saptawijaya. Modelling Morality with Prospective Logic. In M. Anderson and S. L. Anderson, editors, Machine Ethics, pages 398–421. Cambridge U. P., 2011.
30. L. M. Pereira and A. Saptawijaya. Programming Machine Ethics, volume 26 of Studies in Applied Philosophy, Epistemology and Rational Ethics (SAPERE). Springer, 2016.
31. N. J. Roese. Counterfactual thinking. Psychological Bulletin, 121(1):133–148, 1997.
32. A. Saptawijaya and L. M. Pereira. Towards modeling morality computationally with logic programming. In PADL 2014, volume 8324 of LNCS, pages 104–119. Springer, 2014.
33. A. Saptawijaya and L. M. Pereira. TABDUAL: a tabled abduction system for logic programs. IfCoLog Journal of Logics and their Applications, 2(1):69–123, 2015.
34. T. M. Scanlon. What We Owe to Each Other. Harvard University Press, Cambridge, MA, 1998.
35. T. M. Scanlon. Moral Dimensions: Permissibility, Meaning, Blame. Harvard University Press, Cambridge, MA, 2008.
36. P. E. Tetlock, P. S. Visser, R. Singh, M. Polifroni, A. Scott, S. B. Elson, P. Mazzocco, and P. Rescober. People as intuitive prosecutors: the impact of social-control goals on attributions of responsibility. Journal of Experimental Social Psychology, 43:195–209, 2007.
37. J. J. Thomson. The trolley problem. The Yale Law Journal, 279:1395–1415, 1985.
38. A. van Gelder, K. A. Ross, and J. S. Schlipf. The well-founded semantics for general logic programs. Journal of the ACM, 38(3):620–650, 1991.
39. J. Vennekens, M. Bruynooghe, and M. Denecker. Embracing events in causal modeling: Interventions and counterfactuals in CP-logic. In JELIA 2010, volume 6341 of LNCS, pages 313–325. Springer, 2010.
40. B. Weiner. Judgments of Responsibility: A Foundation for a Theory of Social Conduct. The Guilford Press, New York, NY, 1995.

Towards Cognitive Social Machines for Bridging the Cognitive-Computational Gap in Creativity and Creative Reasoning

Ana-Maria Olteteanu1 *

Universität Bremen, Germany
Cognitive Systems, Bremen Spatial Cognition Center

Abstract. This position paper presents a view on bridging the cognitive-computational gap in the field of creativity, creative reasoning and problem solving. Starting from the levels of Cognition and Computation, their potential bootstrap is discussed in relation to the concept of Cognitive Social Machines. Five distinct aspects of bridging the cognitive-computational gap in creative reasoning using cognitive social machines are described and discussed in the creativity domain. These aspects refer to (i) building systems and models which solve creative reasoning tasks; (ii) computationally generated tools for a deeper understanding of cognition; (iii) cognitively-inspired processes and knowledge representation types; (iv) computational cognitive assistance, support and training and (v) evaluative informativity metrics for cognitive social machines.

1 Introduction

In an experiment room, a human participant is given an image to look at, like the one in Figure 1a, and asked what she can see. Presuming that she sees a snail, she is asked whether she can also see an elephant (or the other way around). Presuming she can see both, she is asked to try to switch between seeing one or the other, while pressing a key for each successful switch she manages. The response times will be measured to see how often she can do this switch. This ability to re-encode features will be considered as a potential correlate of her creativity levels.

Another participant may be prompted to focus on the image in Figure 1b (stimulus provided by [46]). He will then be asked what this image represents. He might provide answers like: a boardroom meeting, around a triangular table, viewed from above; a pendant; three bottles of wine arranged around a triangle of cheese on a shelf, etc. The number of his answers, the semantic domains they span, their novelty as rated by other human participants or their originality in comparison to answers from other participants, will be rated to provide a creativity score for his answers.

* Corresponding author: Ana-Maria Olteteanu. Email: [email protected]

Fig. 1: Task examples from the study of creative cognition: (a) an elephant-snail ambiguous figure and (b) a Pattern Meanings Test example

Meanwhile, a computational system may analyse newspaper articles or online news to construct its mood for the day. Subsequently, it may determine what article to base a poem on, and what template to use for this poem. After writing a poem, the system might computationally generate a framing for this poem, like the following [7]:

It was generally a bad news day. I read an article in the Guardian entitled: "Police investigate alleged race hate crime in Rochdale". Apparently, "Stringer-Prince, 17, has undergone surgery following the attack on Saturday in which his skull, eye sockets and cheekbone were fractured" and "This was a completely unprovoked and relentless attack that has left both victims shoked by their ordeal". I decided to focus on mood and lyricism, with an emphasis on syllables and matching line lengths, with very occasional rhyming. I like how words like attack and snake sound together. I wrote this poem.

Framing, i.e., the ability of the system to provide a commentary on the whole process, would be evaluated as part of its creativity too, besides the poem.

These examples reflect how the study of creativity is approached from the cognitive versus the computational realm. Both are supposed to measure creativity; however, the agendas of the fields seem quite different. The differences do not involve just tasks, but also goals and methods. The fields have different identities and communities, and provide different interesting answers in different research question contexts.

Given these differences, can a cognitive-computational bridge be built? Could such a bridge serve both the cognitive and computational communities? This paper explores how cognitive social machines can be used for bridging the cognitive-computational gap in the creativity domain.

The rest of this paper is organized as follows. The background of and differences between creative cognition and computational creativity are briefly summarized, and a direction for bridging the gap is proposed in section 2. Sections 3 to 7 elaborate on this direction, by exploring five different aspects of a cognitive-computational bootstrap using cognitive social machines. A summary discussion of the approach's key points is provided in section 8.

2 Background

Creativity, creative reasoning and creative problem solving are fields studied across cognitive and computational disciplines with fairly different goals and methods.

Human creativity is studied in cognitive psychology, with the purpose of understanding the human creative process. Various creativity tests [26, 17, 20, 46, 25, 10] are deployed to evaluate creative performance, and to study hypotheses about how various conditions impact creativity and creative problem solving.

The computational creativity community studies the question of what it takes for a machine to be creative. It builds computationally creative systems in a wide variety of fields, including mathematics [24, 6, 4], music [39, 38, 44, 11], art [3, 5], poetry and text composition [2, 15, 16, 7], architecture and design [42], discovery of physical laws [21–23], magic trick making [48], and video games [9]. Computational creativity also devises ways of evaluating computational creativity [47, 41, 40, 8, 45, 19].

In the middle ground between the creative cognition and computational creativity fields, a few research projects (i) computationally study processes that are also presupposed to play a role in the human cognition literature or (ii) computationally implement cognitive creativity theories. Examples of such projects have produced work on concept blending [13], analogy [14, 12], and re-interpretation and re-representation [27].

However, the authors believe that a stronger bootstrap between the cognitive and computational sides of the creativity coin is possible if comparability of cognitive and computing approaches is allowed for, and if the bootstrap is situated in the domain of cognitive social machines. While previous work has aimed at providing an initial point of reference for comparability [33], this work focuses on situating such a bootstrap in the context of cognitive social machines. For this, a working definition of cognitive social machines would be useful.

Social machines are defined as an environment comprising humans and technology interacting and producing outputs or actions which would not be possible without both parties present.1 One of the primary characteristics of social machines is that, having both human and computational participants, the line between computational process and human process becomes blurred [43]. While social machines are generally imagined around the web [18], this is just a tool that makes the blending of human and computational work more likely.

Cognitive systems, on the other hand, are considered to be systems which take inspiration from, simulate or aim to replicate cognitive processes, types of knowledge and performance. Depending on one's definition, these can range from cognitive computational models which aim to predict and replicate human performance, to systems inspired by a cognitive metaphor. Sometimes the term cognitive systems is also used in a way which is similar to the concept of human-computer interaction (HCI), to describe the fact that particular systems interact well with their user, taking the user's cognitive limitations into account, or taking into account other cognitive phenomena – like user attention span.

1 From https://en.wikipedia.org/wiki/Social_machine - retrieved 17.06.2017.

We define cognitive social machines as sets of agents (humans and cognitive systems), their processes (artificially or naturally cognitive) and data, which act productively, informing and enabling each other's progress.

We believe cognitive and computational input and process can be bootstrapped in cognitive social machines in such a way that both cognitive and computational fields gain and advance from it. The main research question of this paper is thus:

How can we bootstrap Cognition and Computation to yield Cognitive Social Machines that are:

– (a) greater than the sum of their parts
– (b) which help improve both Cognition and Computation?

In this paper, this question will be deployed in the specific domain of creativity and creative problem solving. This research can be perceived as operating on three levels:

– Cognitive level (Cog) - human reasoning and process as examined via cognitive science tools

– Computational level (Comp) - artificial creative cognitive systems
– Coupling (Cog ↔ Comp) - cognitive social machines

The premise of what follows is that, if ways to reliably boost the computational level via the cognitive level (Cog → Comp) and the cognitive level via the computational level (Cog ← Comp) can be found, then a successful (Cog ↔ Comp) coupling level can be achieved.

In the following, five aspects of cognition, computation, and their cognitive social machines coupling are described, in the context of the domain of creativity and creative problem solving. These five aspects are:

- SYSTEMS – Systems and models which solve reasoning and creativity tasks (section 3);
- TOOLS – Computationally generated tools for cognitive science (section 4);
- PROCESSES & KR – Cognitively-inspired processes and knowledge representation types (section 5);
- ASSISTANCE – Computational cognitive assistance, support and training (section 6); and
- METRICS – Evaluative informativity metrics of social machines (section 7).

3 Systems and models which solve reasoning and creativity tasks (SYSTEMS)

The first aspect refers to computational work done to enable models and prototype systems which are capable of creative problem solving feats, similar and comparable to humans. The process of realizing this aspect starts by choosing a creative reasoning ability, then finding a creativity test which evaluates this ability in humans. If such a test does not exist, a cognitive form of evaluation can be built. The next steps are to search for a source of cognitive knowledge acquisition, understand the types of cognitive process involved in the ability (or have a good cognitively-inspired process hypothesis) and to attempt to implement this ability in a computational solver. Afterwards, comparative evaluation can be performed between the human and the computational solvers. This process is shown in Figure 2.

Fig. 2: The process of the SYSTEMS aspect

Besides providing systems which are capable of solving tasks similar to the tasks humans can solve, this aspect provides other benefits. Systems which are implemented using cognitive processes, and which can be evaluated using comparable tasks, can (i) later be used by cognitive psychologists as tools to understand and base more refined cognitive models on, and (ii) shed light on possibilities of cognitive process which remain ambiguous while only theorized about, without implementation.

For example, the Remote Associates Test [26] is a test used to measure creativity as a function of the cognitive ability for association. The format of the test is that three words are given to a participant, like Dew, Comb and Bee. The participant is asked to come up with an answer that relates to all these three words – a possible answer in this case is Honey. In an attempt to cognitively solve this test computationally, [31] applied the principles of a cognitive framework of creative problem solving [36, 28] and built a system (comRAT-C) which solves the Remote Associates Test via a cognitively inspired process of association, divergence, and convergence on association overlap. Not only was the computational solver correct in a great proportion of cases, but it brought to light the issue that sometimes multiple answers might be plausible, something which was not examined in human normative data, where only one correct answer is given. The system also correlated with human data: the harder the query for humans, the lower the probability metric provided by the system. Thus the system can be used as a tool by cognitive psychologists to build more refined models starting from the initial coarse mechanism, which is now known to solve the task in a way that correlates with human performance.
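To make the convergence step concrete, the sketch below finds an answer as the overlap of the cues' associates. It is not the comRAT-C implementation; the association dictionary and its entries are hypothetical stand-ins for the language data used there, and the real system additionally ranks overlapping candidates by a frequency-based probability metric.

```python
# Minimal sketch of "convergence on association overlap" for a RAT query.
# Hypothetical toy data; not the actual comRAT-C system or its knowledge base.
from typing import Dict, Set

associates: Dict[str, Set[str]] = {
    "dew":  {"drop", "honey", "morning", "point"},
    "comb": {"honey", "hair", "bee"},
    "bee":  {"honey", "hive", "queen", "sting"},
}

def solve_rat(cues):
    """Return all candidate answers associated with every cue word."""
    sets = [associates.get(cue, set()) for cue in cues]
    return sorted(set.intersection(*sets)) if sets else []

print(solve_rat(["dew", "comb", "bee"]))  # ['honey']
```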

Cognitive Social Machine
The cognitive social machine at the SYSTEMS level can be described as follows. Cognitive science and the human part of the social machine provide the cognitive knowledge, cognitive process and cognitive evaluation. These are used to construct and assess computational systems (Cog → Comp). The computational systems in turn become models and tools, which can then be used to better understand cognitive functioning (Comp → Cog).

This approach can be generalized for a variety of tasks, as shown in Table 1. For example, in the Wallach Kogan similarity test, human participants are asked to provide ways in which various concepts, like fruits or objects, are alike. A system generating ad-hoc similarities on a dataset of objects equivalent to the ones given to humans could be implemented and used for comparability. Various types of object similarity algorithms exist which could be cognitively evaluated. More such algorithms could be created with inspiration from the cognitive processes. Systems which implement them could further be used by cognitive modelers, and in tasks in which cognitive similarity between computational-human partners is important (as will be shown in section 6).

4 Computationally generated tools for cognitive science (TOOLS)

The second aspect refers to using cognitive and computational principles and variants from the existing systems in the previous section, together with cognitive data, in order to construct creativity and creative reasoning task generators. Systems which allow for the generation of large datasets of creativity and creative reasoning queries can be used to control for various parameters of such queries. Sets of queries can then be designed to investigate specific empirical questions, at a depth which is impossible without computational intervention in crafting the stimuli. This process is shown in Figure 3.

Fig. 3: The process of the TOOLS aspect


Table 1: Examples of SYSTEMS applications

Test | Example task | System/ability
Remote Associates Test | COTTAGE SWISS CAKE | comRAT – RAT solver [31]
The Alternative Uses test | What can you use a brick for? | Creative object replacement system [32]
Similarity test (Wallach Kogan) | Tell me all the ways in which an apple and an orange are alike | Generating ad-hoc similarities
Ambiguous figures | – | Feature grouping system
Pattern Meanings Test (Wallach Kogan) | – | Multiple memory search based on features
Insight tests | – | Practical object problem solver


For example, here is an application of this aspect involving the Remote Associates Test (RAT). Cognitive psychologists administering this test do not normally have control over variables like the frequency of each of the query words, the frequency of the answer, or the probability (based on frequency) that a particular answer would be found. There is a small normative dataset of compound RAT queries often used in the literature, comprising 144 items [1]. The principles of knowledge organization from the comRAT-C solver [31] were reverse engineered to create a RAT-query generator (comRAT-G). With this generator, a set of test items that spans the entirety of English language nouns was created [35]. comRAT-G provided 17 million items which can be used by cognitive psychologists in their work to understand the human creative process, by controlling frequency and probability variables of the query words and the answer words. This can allow for more complex experimental designs. From both a computational and cognitive perspective, this system opens the door to the exploration of interesting questions, like for example: what is a good Remote Associates Test query, which requires creativity to answer?
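The generation idea can be sketched as follows; this is not comRAT-G itself, the compound list is a tiny illustrative placeholder, and a real generator would attach corpus frequencies to each word so that the frequency and probability variables mentioned above can be controlled.

```python
# Sketch of the reverse-engineering idea behind a RAT-query generator:
# instead of solving queries, enumerate them from a list of two-word compounds.
from collections import defaultdict
from itertools import combinations

compounds = [("honey", "dew"), ("honey", "comb"), ("honey", "bee"),
             ("swiss", "cheese"), ("cottage", "cheese"), ("cheese", "cake")]

partners = defaultdict(set)          # answer word -> words it compounds with
for w1, w2 in compounds:
    partners[w1].add(w2)
    partners[w2].add(w1)

def generate_queries(min_cues=3):
    """Yield (cue_triple, answer) pairs for every answer with enough partners."""
    for answer, words in sorted(partners.items()):
        if len(words) >= min_cues:
            for triple in combinations(sorted(words), min_cues):
                yield triple, answer

for cues, answer in generate_queries():
    print(cues, "->", answer)        # e.g. ('bee', 'comb', 'dew') -> honey
```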

Cognitive Social Machine
The cognitive social machine at the TOOLS level can be described as follows. Humans help evaluate the quality of the computationally created test items (Cog → Comp). The computationally created items are then used to deepen the understanding of the human creative reasoning process (Comp → Cog).

Various tasks could be generated based on various types of cognitive data, a few examples of which are shown in Table 2. For example, cognitive data on word associates (rather than compound words) can be used to generate the functional version of the Remote Associates Test. Data on cognitive visual similarity can be used to generate some of the Wallach Kogan visual tests, etc. Some initial work has been done in this direction [34, 37, 30].

Table 2: Examples of TOOLS applications

Test to generate | Based on cognitive data | Control for what variables
functional Remote Associates Test | word associates, cognitive ontologies | frequency, probability, word order, associate rank
Visual associates tests | cognitive visual associates, collocation | visual similarity, strength of association
Wallach Kogan matches | visual similarity | no. of matches, rank, orientation
Insight Tests | object properties, cognitive strategies, classical tests | no. of solving paths, no. of restructurings, strength of functional fixedness


5 Cognitively-inspired processes and knowledge representation types (PROCESSES & KR)

The third aspect – PROCESSES & Knowledge Representation – refers to (i) learning from cognitive processes to get inspiration for computational processes and (ii) learning from cognitive knowledge representation to support new types of computational knowledge representation. The aim of this, as shown in Figure 4, is to obtain cognitively friendly and innovative types of processes and knowledge representation.

Fig. 4: The process of the PROCESSES & KR aspect

Cognitive Social Machine
The cognitive social machine at the level of PROCESSES & KR can be described as follows. Humans provide inspiration for new processes and new types of knowledge organization (Cog → Comp). Computational systems using cognitive processes and knowledge representation may be capable of new tasks, of tackling old tasks in new ways and may be more cognitively friendly (Comp → Cog).

This aspect of the bootstrapping can lead to innovations of process and knowledge representation, and to cognitive adaptations of already existing processes and types of knowledge representation, as shown in Table 3. For example, the cognitive process of restructuring and re-representation can inspire new types of multiple pattern matching by framing sets of initial features from multiple representation perspectives.

Systems which process, encode knowledge and communicate information in a cognitively inspired manner might be perceived as more cognitively friendly, and thus be able to offer better support and assistance to natural cognitive agents.

6 Computational cognitive assistance, support and training (ASSISTANCE)

Aspect four refers to using systems capable of tackling similar tasks as humans, and using cognitive or cognitively inspired processes and types of knowledge organization in order to provide computational cognitive assistance.


Table 3: Examples of PROCESSES applications

Process or KR | Application
Creative association | New forms of semantic networks
Object grounding & properties comparison | Linked data and new KR for adaptive robotics
Restructuring and re-representation | Ways of looking at data from multiple perspectives; multiple pattern matching and framing with same set of initial features

Systems under this description can be roughly split into supportive systems (S) and training systems (T). Supportive systems would aim to assist their user in performing creative and creative reasoning tasks, as a partner, co-creator, co-reasoner or “muse”. Training systems would aim to help maintain or enhance the creative and creative reasoning abilities of their human partner. Input from the partner will also be used to improve the assistive system.

Assistive systems could for example propose ideas for cooking new recipes, reusing objects, or furniture and room redesign, as shown in Table 4. However, they could also be used in a deeper form of cognitive support, to provide the kind of information which would lead to productive association of ideas and restructuring. Training systems could be used to target and improve precise creativity-related skills, like the ability to exit functional fixedness.

Table 4: Examples of ASSISTANCE applications

Support (S) or training (T) for: | Examples
(S) Creative problem solving in household tasks | cooking, reusing objects, furniture and room redesign
(T) Creative and inductive reasoning | functional fixedness exits; further reach in searches; multiple modalities and types of intelligence engaged (user suited)
(S) Providing the right information for CR in cognitive tasks and search | promotes productive association of ideas; leads to restructuring
(S) Recycling - recommenders and crowdsourced human data | object recycling in households; support for recycling of wind turbines


7 Evaluative informativity metrics of cognitive social machines (METRICS)

The metrics level aims to evaluate the successful functioning of the cognitive social machines set in place, and to optimize the distribution of processes and information flow. This level thus deals with questions such as the following:

– Which part of processing and knowledge representation should each of the cognitive and computational partners do?

– What are the information gains of various processing and knowledge representation set-ups (of various types of organizing and distributing the parts of the machine)?

– How does working together increase the generativity of both natural and artificial cognitive systems?

– What measures protect working together? Information coherence? Information structure?

While the first two questions pertain to organizing and optimizing the machine, the third question focuses on generativity as an evaluative metric – thus the increase in productive capacity of both natural and artificial systems. This productive capacity can be seen both as the ability to solve more problems, and the ability to come up with new solutions – it is thus a creativity and creative reasoning type of metric. Other cognitive metrics could also be devised to assess various cognitive social machine set-ups.

The fourth question addresses measures which protect working together from the perspective of cognitive adaptation of computational and cognitive parts to each other. Thus systems which possess non-contradictory information, knowledge organized in similar ways or similar ways of structuring information [29] might be more productive than other systems, through being better adapted to working together. Complementarity of such measures could also be more formally defined, in ways in which it is usually defined for social interactions between natural agents. While this has some connections to the fields of HCI and adaptive robotics, it focuses on cognitive informational measures which have as effect the protection of cognitive resources and the protection of cognitive social work done in partnership.

8 Conclusion and Future Work

A coherent view of bridging the cognitive-computational gap in the domain of creativity and creative problem solving was proposed, using cognitive social machines. Five aspects were described to showcase the possible uses of this view.

In the SYSTEMS aspect, the human part of the machine provides cognitive knowledge, processes and access to cognitive evaluation. These are used by the computational part to build systems which can perform tasks and have abilities which are similar or comparable to those of humans. This allows for comparative evaluation between the cognitive and computational counterparts. The systems can also be used as tools for further cognitive models.

In the TOOLS aspect, the computational systems previously constructed are used to generate ample creativity and creative reasoning tasks. Such tasks are evaluated by humans in comparison to classical datasets of those tasks, or via other types of qualitative and quantitative assessment. The validated tasks can be used to control for variables and allow for more complex empirical designs, and thus provide more precise tools to explore human cognition with. Methodological and computational questions about what constitutes a good creativity or creative reasoning task can lead to developments in the theoretical and philosophical foundations of the concept of creativity.

In the PROCESSES & KR aspect, cognitive processes and cognitive types of knowledge representation are used to inform the computational processes and the computational types of knowledge representation. This cognitive inspiration can lead to new types of processes and knowledge representation, but also to more cognitively friendly systems, which process and communicate information in a way that is more similar to that of their users.

In the ASSISTANCE aspect, all the previous work is used to support and train human creativity, creative reasoning and creative problem solving. To close the loop, the assistive system learns from the human creative activity, from feedback or from human performance.

The METRICS aspect is used to optimize, evaluate and protect cognitive social machine set-ups.

As future work, the authors intend to provide a more formal description of (i) information flow, (ii) process and KR replication and inspiration and (iii) cognitive social machine metrics. A set of case studies will also be observed in depth through the lens of this approach, in all its five aspects.

Acknowledgements

Ana-Maria Olteteanu gratefully acknowledges the support of the German Research Foundation (Deutsche Forschungsgemeinschaft - DFG) for the Creative Cognitive Systems (CreaCogs, http://creacogcomp.com/) project OL 518/1-1.

References

1. Bowden, E.M., Jung-Beeman, M.: Normative data for 144 compound remote associate problems. Behavior Research Methods, Instruments, & Computers 35(4), 634–639 (2003)

2. Carpenter, J.: Electronic text composition project. The Slought Foundation (2004)

3. Cohen, H.: The further exploits of AARON, painter. Stanford Humanities Review 4(2), 141–158 (1995)

4. Colton, S.: Automated theory formation in pure mathematics. Springer Science & Business Media (2012)



5. Colton, S.: The painting fool: Stories from building an automated painter. In: Computers and Creativity, pp. 3–38. Springer (2012)

6. Colton, S., Bundy, A., Walsh, T.: On the notion of interestingness in automated mathematical discovery. International Journal of Human-Computer Studies 53(3), 351–375 (2000)

7. Colton, S., Goodwin, J., Veale, T.: Full-FACE poetry generation. In: Proceedings of the Third International Conference on Computational Creativity. pp. 95–102 (2012)

8. Colton, S., Pease, A., Charnley, J.: Computational creativity theory: The FACE and IDEA descriptive models. In: Proceedings of the Second International Conference on Computational Creativity. pp. 90–95 (2011)

9. Cook, M., Colton, S.: Ludus ex machina: Building a 3D game designer that competes alongside humans. In: Proceedings of the 5th International Conference on Computational Creativity (2014)

10. Duncker, K.: On problem solving. Psychological Monographs 58(5, Whole No. 270) (1945)

11. Eppe, M., Confalonieri, R., Maclean, E., Kaliakatsos, M., Cambouropoulos, E., Schorlemmer, M., Kuhnberger, K.U.: Computational invention of cadences and chord progressions by conceptual chord-blending (2015)

12. Falkenhainer, B., Forbus, K.D., Gentner, D.: The structure-mapping engine: Algorithm and examples. Artificial Intelligence 41(1), 1–63 (1989)

13. Fauconnier, G., Turner, M.: Conceptual integration networks. Cognitive Science 22(2), 133–187 (1998)

14. Gentner, D.: Structure-mapping: A theoretical framework for analogy. Cognitive Science 7(2), 155–170 (1983)

15. Gervas, P.: Engineering linguistic creativity: Bird flight and jet planes. In: Proceedings of the NAACL HLT 2010 Second Workshop on Computational Approaches to Linguistic Creativity. pp. 23–30. Association for Computational Linguistics (2010)

16. Greene, E., Bodrumlu, T., Knight, K.: Automatic analysis of rhythmic poetry with applications to generation and translation. In: Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. pp. 524–533. Association for Computational Linguistics (2010)

17. Guilford, J.P.: The nature of human intelligence. McGraw-Hill (1967)

18. Hendler, J., Berners-Lee, T.: From the semantic web to social machines: A research challenge for AI on the world wide web. Artificial Intelligence 174(2), 156–161 (2010)

19. Jordanous, A.: A standardised procedure for evaluating creative systems: Computational creativity evaluation based on what it is to be creative. Cognitive Computation 4(3), 246–279 (2012)

20. Kim, K.H.: Can we trust creativity tests? A review of the Torrance Tests of Creative Thinking (TTCT). Creativity Research Journal 18(1), 3–14 (2006)

21. Langley, P.: Bacon.1: A general discovery system. In: Proc. 2nd Biennial Conf. of the Canadian Society for Computational Studies of Intelligence. pp. 173–180 (1978)

22. Langley, P.: Data-driven discovery of physical laws. Cognitive Science 5(1), 31–54 (1981)

23. Langley, P., Bradshaw, G.L., Simon, H.A.: Bacon.5: The discovery of conservation laws. In: IJCAI. vol. 81, pp. 121–126 (1981)

24. Lenat, D.B.: AM: An artificial intelligence approach to discovery in mathematics as heuristic search. Tech. rep., DTIC Document (1976)

25. Maier, N.R.: Reasoning in humans. II. The solution of a problem and its appearance in consciousness. Journal of Comparative Psychology 12(2), 181 (1931)


26. Mednick, S.A., Mednick, M.: Remote associates test: Examiner's manual. Houghton Mifflin (1971)

27. O'Hara, S., Indurkhya, B.: Incorporating (re)-interpretation in case-based reasoning. Topics in case-based reasoning pp. 246–260 (1994)

28. Olteteanu, A.M.: Publications of the Institute of Cognitive Science, vol. 01-2014, chap. Two general classes in creative problem-solving? An account based on the cognitive processes involved in the problem structure - representation structure relationship. Institute of Cognitive Science, Osnabruck (2014)

29. Olteteanu, A.M.: Publications of the Institute of Cognitive Science, vol. 02-2015, chap. The Input, Coherence, Generativity (ICG) Factors. Towards a Model of Cognitive Informativity Measures for Productive Cognitive Systems. Institute of Cognitive Science, Osnabruck (2015)

30. Olteteanu, A.M.: In: Proceedings of the Workshop on Computational Creativity, Concept Invention, and General Intelligence (C3GI2016). vol. 1767. CEUR-Ws, Osnabruck (2016)

31. Olteteanu, A.M., Falomir, Z.: comRAT-C: A computational compound remote associate test solver based on language data and its comparison to human performance. Pattern Recognition Letters 67, 81–90 (2015)

32. Olteteanu, A.M., Falomir, Z.: Object replacement and object composition in a creative cognitive system. Towards a computational solver of the Alternative Uses Test. Cognitive Systems Research 39, 15–32 (2016)

33. Olteteanu, A.M., Falomir, Z., Freksa, C.: Artificial cognitive systems that can answer human creativity tests: An approach and two case studies. IEEE Transactions on Cognitive and Developmental Systems pp. 1–7 (2016)

34. Olteteanu, A.M., Gautam, B., Falomir, Z.: Towards a visual remote associates test and its computational solver. In: Proceedings of the Third International Workshop on Artificial Intelligence and Cognition 2015. vol. 1510, pp. 19–28. CEUR-Ws (2015)

35. Olteteanu, A.M., Schultheis, H., Dyer, J.B.: Constructing a repository of compound Remote Associates Test items in American English with comRAT-G. Behavior Research Methods, Instruments, & Computers (accepted)

36. Olteteanu, A.M.: From simple machines to Eureka in four not-so-easy steps. Towards creative visuospatial intelligence. In: Muller, V. (ed.) Fundamental Issues of Artificial Intelligence, Synthese Library, vol. 376, pp. 159–180. Springer (2016)

37. Olteteanu, A.M.: Towards using cognitive word associates to create functional remote associates test problems. In: Signal-Image Technology & Internet-Based Systems (SITIS), 2016 12th International Conference on. pp. 612–617. IEEE (2016)

38. Pachet, F.: Musical virtuosity and creativity. In: Computers and Creativity, pp. 115–146. Springer (2012)

39. Pearce, M., Wiggins, G.: Improved methods for statistical modelling of monophonic music. Journal of New Music Research 33(4), 367–385 (2004)

40. Pease, A., Winterstein, D., Colton, S.: Evaluating machine creativity. In: Workshop on Creative Systems, 4th International Conference on Case Based Reasoning. pp. 129–137 (2001)

41. Ritchie, G.: Some empirical criteria for attributing creativity to a computer program. Minds and Machines 17(1), 67–99 (2007)

42. Schneider, S., Fischer, J.R., Konig, R.: Rethinking automated layout design: developing a creative evolutionary design method for the layout problems in architecture and urban design. In: Design Computing and Cognition'10, pp. 367–386. Springer (2011)


43. Shadbolt, N.R., Smith, D.A., Simperl, E., Van Kleek, M., Yang, Y., Hall, W.: Towards a classification framework for social machines. In: Proceedings of the 22nd International Conference on World Wide Web. pp. 905–912. ACM (2013)

44. Smith, B.D., Garnett, G.E.: Reinforcement learning and the creative, automated music improviser. In: Evolutionary and Biologically Inspired Music, Sound, Art and Design, pp. 223–234. Springer (2012)

45. Ventura, D.: Mere generation: Essential barometer or dated concept? In: Proceedings of the Seventh International Conference on Computational Creativity (2016)

46. Wallach, M.A., Kogan, N.: Modes of thinking in young children: A study of the creativity-intelligence distinction. Holt, Rinehart & Winston (1965)

47. Wiggins, G.A.: Towards a more precise characterisation of creativity in AI. In: Case-based reasoning: Papers from the workshop programme at ICCBR. vol. 1, pp. 113–120 (2001)

48. Williams, H., McOwan, P.W.: Manufacturing magic and computational creativity. Frontiers in Psychology 7 (2016)


Principles and Clusters in Human Syllogistic Reasoning

Emmanuelle-Anna Dietz Saldanha, Steffen Holldobler, and Richard Morbitz*

International Center for Computational Logic, TU Dresden, Germany, {dietz,sh}@iccl.tu-dresden.de, [email protected]

Abstract. It seems widely accepted that human reasoning cannot be modeled by means of Classical Logic. Psychological experiments have repeatedly shown that participants' answers systematically deviate from the classically correct answers. Recently a new approach to modeling human syllogistic reasoning has been developed which seems to perform best compared to other state-of-the-art cognitive theories. We take this approach as starting point, yet instead of trying to model the human reasoner, we aim at identifying clusters of reasoners, which can be characterized by principles or by heuristic strategies.

1 Introduction

In recent years, a new cognitive theory based on the Weak Completion Semantics (WCS) has been developed. It has its roots in the ideas first expressed by Stenning and van Lambalgen [19], but is mathematically sound [8], and has been successfully applied to various human reasoning tasks. An overview can be found in [7]. Hence, it was natural to ask whether WCS is competitive in syllogistic reasoning and how it performs wrt the cognitive theories evaluated in [12]. Consider the following quantified statements:

All a are b.
Some c are not b.    (AO3)

Classically, Some c are not a logically follows from these premises. However, according to [12], the answers given by the majority of participants in experimental studies were no valid conclusion and Some c are not a. Yet, these two responses exclude each other, i.e. it is unlikely that the participants who answered no valid conclusion are the same ones who answered Some c are not a, and vice versa.

The four quantifiers and their formalization in FOL are given in Table 1. The entities can appear in four different orders called figures shown in Table 2. Hence, a problem can be completely specified by the quantifiers of the first and second premise and the figure. The example discussed above is AO3.

Recently, a computational logic approach to human syllogistic reasoning has been developed under the Weak Completion Semantics, which identifies seven principles for modeling the logical form of the representation of quantified statements in human reasoning [1]. The results of this approach achieved a match of 89% with respect to the conclusions participants gave, based on the data reported in [12].

* The authors are mentioned in alphabetical order.


Mood | First-order logic | Short
affirmative universal | ∀X(a(X) → b(X)) | Aab
affirmative existential | ∃X(a(X) ∧ b(X)) | Iab
negative universal | ∀X(a(X) → ¬b(X)) | Eab
negative existential | ∃X(a(X) ∧ ¬b(X)) | Oab

Table 1. The moods and their formalization.

       | 1st Premise | 2nd Premise
Fig. 1 | a-b | b-c
Fig. 2 | b-a | c-b
Fig. 3 | a-b | c-b
Fig. 4 | b-a | b-c

Table 2. The four figures.

This result stands out because the best of the twelve other state-of-the-art cognitive theories only achieved a match of 84%.

While reasoning with conditionals, humans seem to take certain assumptions for granted which, however, are not stated explicitly in the task description. As psychological experiments show, these assumptions seem not to be arbitrary but instead are systematic in the sense that they are repeatedly made by participants. Furthermore, some assumptions reappear in various experiments, whereas other assumptions are only made in very few experiments or only by some participants. In order to identify and structure these assumptions, we view them as principles that are either applied or ignored by the participants who have to solve the task. As starting point, we take the syllogistic reasoning approach presented in [1]. However, a major drawback of this approach is that only the matching with respect to the aggregated data is considered, i.e. the approach models the human reasoner. The above example and other examples, such as cases of the Wason Selection Task reported in [15], serve as indication that the human reasoner does not exist; instead we might better search for clusters of human reasoners. These clusters might be expressed by principles, i.e. some clusters might apply some principles that are not applied by other clusters.

The paper is structured as follows: First, we present the principles for the representation of quantified statements, motivated by findings from Cognitive Science and Linguistics. Next, the Weak Completion Semantics and the encoding of quantified statements within this approach are introduced in Sections 3 and 4. Then the clusters and heuristics are discussed and finally an overall evaluation of the Weak Completion Semantics is presented.

2 Principles about Quantified Statements

Eight principles for developing a logical form of quantified statements are presented. They originate from [1,2] except for the principles in Sections 2.5 and 2.8.

2.1 Quantified Statements as Implication (conditionals)

Independent of the quantifier's mood, we decide to formalize any relation between two objects of a quantified statement by means of the implication such that the first object is the antecedent and the second object the conclusion of the implication. For instance, the statement All a are b is expressed as ∀X(a(X) → b(X)).


2.2 Licenses for Inferences (licenses)

[19] proposed to formalize conditionals in human reasoning not by inferences straight away, but rather by licenses for inferences. Given the quantified statement All a are b, a license for this inference can then be expressed by All a that are not abnormal, are b. Given the previous formalization of this statement as ∀X(a(X) → b(X)), we extend this implication by conjoining a(X) together with an abnormality predicate as follows: ∀X(a(X) ∧ ¬abpq(X) → b(X)). Further, the closed-world assumption with respect to the abnormality predicate is expressed by nothing is abnormal wrt X, i.e. ¬abpq(X).

2.3 Existential Import and Gricean Implicature (import)

Humans understand quantifiers differently due to a pragmatic understanding of the language. For instance, in natural language, we normally do not quantify over things that do not exist. Consequently, for all implies there exists. This appears to be in line with human reasoning and has been called the Gricean implicature [6]. This corresponds to what in the literature is sometimes also called existential import and is assumed by several theories like the theory of mental models [11] or mental logic [18]. Likewise, [19] have shown that humans require existential import for a conditional to be true.

Furthermore, as mentioned by [12], the quantifier some a are b often implies that some a are not b, which again is implied by the Gricean implicature: Someone would not state some a are b if that person knew that all a are b. As the person does not say all a are b, but some a are b instead, we assume that not all a are b, which in turn implies some a are not b.

2.4 Unknown Generalization (unknownGen)

Humans seem to distinguish between some y are z and some z are y, as the results reported by [12] show. Nevertheless, if we were to represent some y are z by ∃X(y(X) ∧ z(X)) then this is semantically equivalent to ∃X(z(X) ∧ y(X)) because conjunction is commutative in FOL. Likewise, humans seem to distinguish between some y are z and all y are z, as we have already discussed in Section 2.3. Accordingly, if we only observe that an object o belongs to y and z then we do not want to conclude both, some y are z and all y are z.

In order to distinguish between some y are z and all y are z, we introduce the following principle: If we know that some y are z, then there must not only be an object o1, which belongs to y and z, but there must be another object o2, which belongs to y and for which it is unknown whether it belongs to z. To express this idea, we can make use of the principle (licenses) presented in Section 2.2 as follows: We replace ¬abpq(X) by ¬abpq(o1), i.e. the closed-world assumption about abnormal is only applied wrt o1.


2.5 Deliberate Generalization (deliberateGen)

If all of the principles introduced so far are applied to an existential premise, the only object about which an inference can be made is the one resulting from the existential import principle. This is because the abnormality introduced by the licenses for inferences principle has to be false for inference, but due to the unknown generalization principle it is unknown for other objects.

There is, however, evidence that some humans still draw conclusions in such circumstances [12]. We believe that they do not take into account abnormalities regarding objects that are not related to the premise.

2.6 Converse Implication (converse)

Although there seems to be some evidence that humans distinguish between some y are z and some z are y (see the results reported in [12]), we propose that premises of the form Iab imply Iba and vice versa. If there is an object which belongs to y and z, then there is also an object which belongs to z and y.

2.7 Search Alternative Conclusions to NVC (searchAlt)

Our hypothesis is that when participants are faced with an NVC conclusion (no valid conclusion), they might not want to accept this conclusion and proceed to check whether there exists unknown information that is relevant. This information may be explanations about the facts coming either from an existential import or from unknown generalization. We use only the first as a source for observations, since they are used directly to infer new information.

2.8 Contraposition (contraposition)

In FOL, a conditional statement of the form ∀X(a(X) ← b(X)) is logically equivalent to its contrapositive ∀X(¬b(X) ← ¬a(X)). This contraposition also holds for the syllogistic moods A and E. There is evidence in [12] that some of the participants make use of this equivalence when solving syllogistic reasoning tasks. We believe that when they encounter a premise with the mood A (e.g. All a are b), then they might reason with the contrapositive conditional as well.

3 Weak Completion Semantics

The general notation, which we will use in the paper, is based on [13].

3.1 Contextual Logic Programs

Contextual logic programs are (data) logic programs extended by the truth-functional operator ctxt, called context [5]. (Propositional) contextual logic program clauses are expressions of the forms A ← L1 ∧ . . . ∧ Lm ∧ ctxt(Lm+1) ∧ . . . ∧ ctxt(Lm+p) (called rules), A ← ⊤ (called facts), A ← ⊥ (called negative assumptions) and A ← U (called unknown assumptions). A is an atom and the Li with 1 ≤ i ≤ m + p are literals. A is called head and L1 ∧ . . . ∧ Lm ∧ ctxt(Lm+1) ∧ . . . ∧ ctxt(Lm+p) as well as ⊤, ⊥ and U, standing for true, false and unknown respectively, are called body of the corresponding clauses. A contextual (logic) program is a set of contextual logic program clauses. gP denotes the set of all ground instances of clauses occurring in P. atoms(P) denotes the set of all atoms occurring in gP. A is defined in P iff P contains a rule or a fact with head A. A is undefined in P iff A is not defined in P. The set of all atoms that are undefined in P is denoted by undef(P). The definition of A in P is defined as def(A,P) = {A ← Body | A ← Body is a rule or a fact occurring in P}. ¬A is negatively assumed in P iff P contains a negative assumption with head A, no unknown assumption with head A and def(A,P) = ∅. We omit the word contextual when we refer to programs, if not stated otherwise.

3.2 Integrity Constraints

A set of integrity constraints IC consists of clauses of the form U ← Body, where Body is a conjunction of literals and U denotes the unknown. Hence, an interpretation maps an integrity constraint to ⊤ iff Body is either mapped to U or ⊥. This understanding is similar to the definition of the integrity constraints for the Well-founded Semantics in [14]. Given an interpretation I and a set of integrity constraints IC, I satisfies IC iff all clauses in IC are true under I.

3.3 Three-Valued Łukasiewicz Logic Extended by ctxt Connective

We consider the three-valued Łukasiewicz logic together with the ctxt connective, for which the corresponding truth values are ⊤, ⊥ and U, meaning true, false and unknown, respectively. A three-valued interpretation I is a mapping from atoms(P) to the set of truth values {⊤, ⊥, U}, represented as a pair I = 〈I⊤, I⊥〉 of two disjoint sets of atoms: I⊤ = {A | A is mapped to ⊤ under I} and I⊥ = {A | A is mapped to ⊥ under I}. Atoms which do not occur in I⊤ ∪ I⊥ are mapped to U. The truth value of a given formula under I is determined according to the truth tables in Table 3. I(F) = ⊤ means that a formula F is mapped to true under I. A three-valued model M of P is a three-valued interpretation such that M(A ← Body) = ⊤ for each A ← Body ∈ P. Let I = 〈I⊤, I⊥〉 and J = 〈J⊤, J⊥〉 be two interpretations. I ⊆ J iff I⊤ ⊆ J⊤ and I⊥ ⊆ J⊥. I is the least model of P iff for any other model J of P it holds that I ⊆ J.

3.4 Forward Reasoning: Least Models under the Weak Completion

For a given P, consider the following transformation: 1. For each ground atom A which is defined in P, replace all clauses of the form A ← Body1, . . . , A ← Bodym occurring in gP by A ← Body1 ∨ . . . ∨ Bodym. 2. Replace all occurrences of ← by ↔. The obtained ground program is called weak completion of P or wcP.


F | ¬F
⊤ | ⊥
⊥ | ⊤
U | U

∧ | ⊤ U ⊥
⊤ | ⊤ U ⊥
U | U U ⊥
⊥ | ⊥ ⊥ ⊥

∨ | ⊤ U ⊥
⊤ | ⊤ ⊤ ⊤
U | ⊤ U U
⊥ | ⊤ U ⊥

← | ⊤ U ⊥
⊤ | ⊤ ⊤ ⊤
U | U ⊤ ⊤
⊥ | ⊥ U ⊤

↔ | ⊤ U ⊥
⊤ | ⊤ U ⊥
U | U ⊤ U
⊥ | ⊥ U ⊤

L | ctxt(L)
⊤ | ⊤
⊥ | ⊥
U | ⊥

Table 3. The truth tables for the connectives under the three-valued Łukasiewicz logic and for ctxt(L). L is a literal; ⊤, ⊥, and U denote true, false, and unknown, respectively.
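The tables can also be read through the usual numeric encoding of the truth values. The following small sketch (an illustration, not part of the original paper) uses ⊤ = 1, U = 0.5 and ⊥ = 0, under which Łukasiewicz conjunction is the minimum, disjunction the maximum, and the implication head ← body is min(1, 1 − body + head), while ctxt maps everything except ⊤ to ⊥.

```python
# Illustrative numeric reading of Table 3 (not part of the paper):
# ⊤ = 1.0, U = 0.5, ⊥ = 0.0.
T, U, F = 1.0, 0.5, 0.0

def neg(x):            return 1.0 - x                      # ¬
def conj(x, y):        return min(x, y)                    # ∧
def disj(x, y):        return max(x, y)                    # ∨
def impl(head, body):  return min(1.0, 1.0 - body + head)  # head ← body
def equiv(x, y):       return min(impl(x, y), impl(y, x))  # ↔
def ctxt(x):           return 1.0 if x == 1.0 else 0.0     # ctxt(L)

assert impl(U, U) == T and equiv(U, U) == T   # U ← U and U ↔ U are true
assert ctxt(U) == F                           # ctxt maps U to ⊥
```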

Consider the following semantic operator, which is due to Stenning and van Lambalgen [19]: Let I = 〈I⊤, I⊥〉 be an interpretation. ΦP(I) = 〈J⊤, J⊥〉, where

J⊤ = {A | A ← Body ∈ def(A,P) and Body is true under 〈I⊤, I⊥〉}
J⊥ = {A | def(A,P) ≠ ∅ and Body is false under 〈I⊤, I⊥〉 for all A ← Body ∈ def(A,P)}

The least fixed point of ΦP is denoted by lfp ΦP, if it exists. [9] showed that non-contextual programs as well as their weak completions always have a least model under Łukasiewicz logic, which can be obtained as the least fixed point of Φ. However, for programs with the ctxt operator this property only holds if the programs do not contain cycles [5]. We define P |=wcs F iff P is acyclic and lfp ΦP |= F. In the remainder of this paper, we only consider acyclic programs and MP denotes the least fixed point of ΦP.
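For ground, non-contextual programs the least fixed point can be computed by iterating Φ from the empty interpretation. The sketch below is only an illustration under that assumption; the dictionary representation of programs and the pseudo-atom TRUE are choices made for this example, and the program P is the one used later in Section 3.6.

```python
# Sketch of iterating the Φ operator for a ground, non-contextual program.
# Representation (assumed for this example only): a program maps each defined
# atom to a list of bodies; a body is a list of (atom, is_positive) literals,
# and facts use the pseudo-atom "TRUE".

def phi(program, I_true, I_false):
    def lit(atom, positive):
        if atom == "TRUE":
            v = "T"
        else:
            v = "T" if atom in I_true else "F" if atom in I_false else "U"
        return v if positive else {"T": "F", "F": "T", "U": "U"}[v]

    def body(literals):
        vals = [lit(a, pos) for a, pos in literals]
        if all(v == "T" for v in vals):
            return "T"
        return "F" if any(v == "F" for v in vals) else "U"

    J_true = {a for a, bodies in program.items()
              if any(body(b) == "T" for b in bodies)}
    J_false = {a for a, bodies in program.items()
               if all(body(b) == "F" for b in bodies)}
    return J_true, J_false

def least_fixed_point(program):
    I = (set(), set())
    while True:
        J = phi(program, *I)
        if J == I:
            return I
        I = J

# P = {b ← ¬a, c ← ¬b, a ← ⊤}, the program used later in Section 3.6:
P = {"b": [[("a", False)]], "c": [[("b", False)]], "a": [[("TRUE", True)]]}
print(least_fixed_point(P))  # the least model 〈{a, c}, {b}〉
```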

3.5 Backward Reasoning: Explanations by Means of Abduction

An abductive framework 〈P, A, IC, |=wcs〉 consists of a program P, a set A of abducibles, a set IC of integrity constraints, and the entailment relation |=wcs. The set of abducibles is A = {A ← ⊤ | def(A,P) = ∅} ∪ {A ← ⊥ | A ∈ undef(P)}. Let 〈P, A, IC, |=wcs〉 be an abductive framework and observation O a set of literals. O is explainable in 〈P, A, IC, |=wcs〉 if and only if there exists an E ⊆ A such that P ∪ E |= L for all L ∈ O and P ∪ E satisfies IC. E is then called explanation for O given P and IC. We restrict E to be minimal, i.e. there does not exist any other explanation E′ ⊆ A for O such that E′ ⊆ E.

Among the minimal explanations, it is possible that some of them entail a certain formula F while others do not. There exist two strategies to determine whether F is a valid conclusion in such cases. F follows credulously if it is entailed by at least one explanation. F follows skeptically if it is entailed by all explanations. Due to previous results on modeling human reasoning [3,4,1], skeptical abduction is applied. The set of observations wrt P, OP, is defined as follows:

OP = {{A} | A ← ⊤ ∈ def(A,P) ∧ (A ← B1 ∧ · · · ∧ Bn) ∈ def(A,P)},

where n > 0 and Bi is a literal for all 1 ≤ i ≤ n. These are the atoms that occur in the head of both a rule and a fact. In the following, the idea is to find an explanation for each observation O ∈ OP, where the observation is further restricted by considering only facts that result from certain principles.


3.6 Encoding Aspects about Quantified Statements

Negation by Transformation (transformation) The logic programs we consider do not allow heads of clauses to be negative literals. A negative conclusion ¬p(X) is represented by introducing an auxiliary formula p′(X) together with the clause p(X) ← ¬p′(X) and the integrity constraint U ← p(X) ∧ p′(X). This is a widely used technique in logic programming. Together with the principle (licenses) introduced in Section 2.2, this additional clause is extended by the following two clauses: p(X) ← ¬p′(X) ∧ ¬abnpp(X). abnpp(X) ← ⊥. Additionally, the integrity constraint U ← p(X) ∧ p′(X) states that an object cannot belong to both, p and p′.

No Derivation through Double Negation (doubleNeg) A positive conclusion can be derived from double negation within two conditionals. Consider the following two conditionals, each one having a negative premise: If not a, then b. If not b, then c. Additionally, assume that a is true. Let us encode the two conditionals and the fact that a is true as P = {b ← ¬a, c ← ¬b, a ← ⊤}. wcP is {b ↔ ¬a, c ↔ ¬b, a ↔ ⊤}, where MP = 〈{a, c}, {b}〉 |= a ∧ c. It appears to be the case that humans do not reason in such a way, considering the results of the participants' responses in [12]. Accordingly, we block such derivations through abnormalities.

4 Quantified Statements as Logic Programs

Based on the principles and encoding aspects in Section 2 and Section 3.6, we encode the quantified statements into logic programs. The programs are specified using the predicates y and z and depending on the figures shown in Table 2, where yz can be replaced by ab, ba, cb or bc. Here, all principles regarding a premise are described. However, we will later assume different clusters of reasoners, some of which do not apply certain principles (see Section 5). For such clusters, the clauses associated with the principles not applied are removed from the program.

4.1 All y are z (Ayz)

All y are z is represented by PAyz, which consists of the following clauses:

z(X) ← y(X) ∧ ¬abyz(X).         (conditionals & licenses)
abyz(X) ← ⊥.                    (licenses)
y(o) ← ⊤.                       (import)
abyz(X) ← ctxt(z′(X)).          (contraposition & licenses & deliberateGen)
y′(X) ← ¬z(X) ∧ ¬abzy(X).       (contraposition & conditionals & licenses)
abzy(X) ← ⊥.                    (contraposition & licenses)
y(X) ← ¬y′(X) ∧ ¬abnyy(X).      (contraposition & transformation & licenses)

The first two clauses are obtained by applying the principles of representing quantified statements as implication and licenses for inferences. The third clause follows by the principle of existential import and Gricean implicature. The last four clauses result from applying the contraposition principle. The deliberate generalization principle must also be used, because otherwise inference of ¬z(X) would not be possible. It defeats the original assumption abyz(X) ← ⊥ in the sense that the weak completion of

abyz(X) ← ⊥,   abyz(X) ← ctxt(z′(X))

is abyz(X) ↔ ⊥ ∨ ctxt(z′(X)), which is equivalent to abyz(X) ↔ ctxt(z′(X)). As the contrapositive conditional would have a negative atom in the head, the negation by transformation encoding is used. Note that there is no import of an object for which ¬z(X) holds, because this does not follow from the premises. Consequently, the abnormality introduced by the principles licenses for inferences and negation by transformation does not have to be assumed as false for any object. MPAyz is 〈{y(o), z(o)}, {abyz(o)}〉. If contraposition is applied, by negation by transformation we have the following integrity constraint: U ← y(X) ∧ y′(X).

4.2 No y is z (Eyz)

No y is z is represented by PEyz, which consists of the following clauses:

z′(X) ← y(X) ∧ ¬abynz(X).       (transformation & licenses)
abynz(X) ← ⊥.                   (licenses)
z(X) ← ¬z′(X) ∧ ¬abnzz(X).      (transformation & licenses)
y(o1) ← ⊤.                      (import)
abnzz(o1) ← ⊥.                  (licenses & doubleNeg)
y′(X) ← z(X) ∧ ¬abzny(X).       (converse & transformation & licenses)
abzny(X) ← ⊥.                   (converse & licenses)
y(X) ← ¬y′(X) ∧ ¬abnyy(X).      (converse & transformation & licenses)
z(o2) ← ⊤.                      (converse & import)
abnyy(o2) ← ⊥.                  (converse & licenses & doubleNeg)

In addition, we have the following two integrity constraints:

U ← z(X) ∧ z′(X).               (transformation)
U ← y(X) ∧ y′(X).               (converse & transformation)

The first two clauses in PEyz are obtained by applying the principles of representing quantified statements as conditionals and using licenses for inferences, where z′(X) is an auxiliary formula used to denote the negation of z(X). z′(X) is related to z(X) by the third clause applying negation by transformation. In addition, this principle enforces the integrity constraint. The fourth clause of PEyz follows by the principle of Gricean implicature and the fifth because of licenses for inferences and no derivation through double negation. The last five clauses are obtained for the same reasons as the first five clauses together with the principle of converse implication. Note that the last clause in PEyz cannot be generalized to all X, because otherwise we would allow conclusions by double negatives. Therefore we apply the encoding doubleNeg. MPEyz is

〈{y(o1), z′(o1), z(o2), y′(o2)},
 {abynz(o1), abnzz(o1), z(o1), abzny(o2), abnyy(o2), y(o2)}〉.


4.3 Some y are z (Iyz)

Some y are z is represented by PIyz, which consists of the following clauses:

z(X) ← y(X) ∧ ¬abyz(X).         (conditionals & licenses)
abyz(o1) ← ⊥.                   (unknownGen & licenses)
y(o1) ← ⊤.                      (import)
y(o2) ← ⊤.                      (unknownGen)
abyz(X) ← ctxt(z′(X)).          (licenses & deliberateGen)
abyz(o2) ← U.                   (licenses & deliberateGen)
y(X) ← z(X) ∧ ¬abzy(X).         (converse & conditionals & licenses)
abzy(o3) ← ⊥.                   (converse & licenses & unknownGen)
z(o3) ← ⊤.                      (converse & import)
z(o4) ← ⊤.                      (converse & unknownGen)
abzy(X) ← ctxt(y′(X)).          (converse & licenses & deliberateGen)
abzy(o4) ← U.                   (converse & licenses & deliberateGen)

The first two clauses are again obtained by the principles of representing quantified statements as conditionals and using licenses for inferences. The abnormality predicate is restricted to the object o1, which is assumed to exist by the principle of Gricean implicature, represented by the third clause. The fourth clause is obtained by the principle of unknown generalization. The fifth and sixth clause are obtained by the principle of deliberate generalization. The last six clauses are obtained for the same reasons as the first six clauses together with the principle of converse implication. MPIyz is 〈{y(o1), y(o2), z(o1)}, {abyz(o1)}〉. Note abyz(o2) is an unknown assumption in PIyz. Accordingly, z(o2) stays unknown in MPIyz.

4.4 Some y are not z (Oyz)

Some y are not z is represented by POyz which consists of the following clauses:

z′(X) ← y(X) ∧ ¬abynz(X).       (conditionals & transformation & licenses)
abynz(o1) ← ⊥.                  (unknownGen & licenses)
z(X) ← ¬z′(X) ∧ ¬abnzz(X).      (transformation & licenses)
y(o1) ← ⊤.                      (import)
y(o2) ← ⊤.                      (unknownGen)
abnzz(o1) ← ⊥.                  (doubleNeg & licenses)
abnzz(o2) ← ⊥.                  (doubleNeg & licenses)

In addition, we have the following integrity constraint:

U ← z(X) ∧ z′(X). (transformation)

The first four clauses as well as the integrity constraint are derived as in the program PEyz, except that object o1 is used instead of o and abynz is restricted to o1 like in PIyz. The fifth clause of POyz is obtained by the principle of unknown generalization. The last two clauses are again not generalized to all objects for the same reason as previously discussed in Section 4.2 for the representation of E: the generalization of abnzz to all objects can lead to conclusions through double negation, in case there is a second premise. MPOyz is 〈{y(o1), y(o2), z′(o1)}, {abynz(o1), abnzz(o1), abnzz(o2), z(o1)}〉.


4.5 Entailment of Syllogisms

We define when MP entails a conclusion, where yz is to be replaced by ac or ca.

All (A) P |= Ayz iff there exists an object o such that P |=wcs y(o) and for all objects o we find that if P |=wcs y(o) then P |=wcs z(o).

No (E) P |= Eyz iff there exists an object o1 such that P |=wcs y(o1) and for all objects o1 we find that if P |=wcs y(o1) then P |=wcs ¬z(o1), and there exists an object o2 such that P |=wcs z(o2) and for all objects o2 we find that if P |=wcs z(o2) then P |=wcs ¬y(o2).

Some (I) P |= Iyz iff there exists an object o1 such that P |=wcs y(o1) ∧ z(o1) and there exists an object o2 such that P |=wcs y(o2) and P ⊭wcs z(o2), and there exists an object o3 such that P |=wcs z(o3) ∧ y(o3) and there exists an object o4 such that P |=wcs z(o4) and P ⊭wcs y(o4).

Some Are Not (O) P |= Oyz iff there exists an object o1 such that P |=wcs y(o1) ∧ ¬z(o1) and there exists an object o2 such that P |=wcs y(o2) and P ⊭wcs ¬z(o2).

NVC When no previous conclusion can be derived, no valid conclusion holds.
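These conditions can be checked mechanically against a least model. The following sketch (an illustration, not the authors' implementation) encodes a model as two sets of ground atoms and checks the A and O conditions; the string-based atom encoding and helper names are assumptions made for this example.

```python
# Illustrative check of the A and O entailment conditions against a least model
# given as two sets of ground atoms (true atoms, false atoms).

def entails_A(model, objects, y, z):
    true_atoms, _ = model
    ys = [o for o in objects if f"{y}({o})" in true_atoms]
    return bool(ys) and all(f"{z}({o})" in true_atoms for o in ys)

def entails_O(model, objects, y, z):
    true_atoms, false_atoms = model
    has_counterexample = any(f"{y}({o})" in true_atoms and f"{z}({o})" in false_atoms
                             for o in objects)
    has_open_object = any(f"{y}({o})" in true_atoms and f"{z}({o})" not in false_atoms
                          for o in objects)
    return has_counterexample and has_open_object

# The least model of PAyz from Section 4.1, 〈{y(o), z(o)}, {abyz(o)}〉:
m_ayz = ({"y(o)", "z(o)"}, {"abyz(o)"})
print(entails_A(m_ayz, ["o"], "y", "z"))  # True
print(entails_O(m_ayz, ["o"], "y", "z"))  # False
```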

4.6 Accuracy of Predictions

We have nine different answer possibilities for each of the 64 syllogisms:

Aac, Eac, Iac, Oac, Aca, Eca, Ica, Oca and NVC.

For every syllogism, we define a list of length 9 for the predictions of the Weak Completion Semantics, where the first element represents Aac, the second element represents Eac, and so forth. When Aac is predicted under the Weak Completion Semantics for a given syllogism, then the value of the first element of this list is a 1, otherwise it is a 0, and the same holds for the other eight elements in the list. Analogously, for every syllogism we define a list of the participants' conclusions of length 9 containing either 1 or 0 for all nine answer possibilities, depending on whether the majority concluded Aac, Eac, and so forth. For each syllogism we compare each element of both lists as follows, where i is the ith element of both lists:

comp(i) = 1 if both lists have the same value for the ith element, and 0 otherwise.

The matching percentage of this syllogism is then computed by ∑ comp(i)/9, where i ranges from 1 to 9. Note that the percentage of the match does not only take into account when the Weak Completion Semantics correctly predicts a conclusion, but also whenever it correctly rejects a conclusion.
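For illustration, the computation amounts to comparing two 0/1 vectors position by position; a minimal sketch, using the AO3 answers from Section 5 as an example, is:

```python
# Tiny illustration of the matching computation. Each list has one 0/1 entry
# per answer possibility, in the order Aac, Eac, Iac, Oac, Aca, Eca, Ica, Oca, NVC.

def match_percentage(predicted, observed):
    comp = [1 if p == o else 0 for p, o in zip(predicted, observed)]
    return sum(comp) / len(comp)

# For AO3 both WCS and the participants give Oca and NVC (cf. Section 5):
wcs          = [0, 0, 0, 0, 0, 0, 0, 1, 1]
participants = [0, 0, 0, 0, 0, 0, 0, 1, 1]
print(match_percentage(wcs, participants))  # 1.0
```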

5 Clusters and Heuristics

We consider clusters of human reasoners in terms of principles. Each cluster is a group of humans that applies the same principles. When identifying such clusters, e.g. among the participants of [12], the principles used by a single cluster should lead to a significant answer for the syllogism in question. As the answers of all participants have been accumulated in the meta-analysis, the combined answers of all clusters should exactly correspond to the significant answers for that syllogism.

5.1 Basic Principles

Basic principles are assumed to be applied by all reasoners, regardless of any cluster. These are conditionals, licenses, import, and unknownGen. Note that they are not necessarily applicable to every syllogism: unknownGen may only be used for premises with an existential mood.

5.2 Advanced Principles and Clusters

Advanced principles are not assumed to be used by all humans, making them the starting point for clusters. Advanced principles considered in this paper are converse, deliberateGen, contraposition, and searchAlt, but there may exist more. When two individuals differ in the sense that one applies such a principle and the other one does not, we assume that they belong to different clusters.

As an example, consider the syllogism AO3 introduced in Section 1. According to the encoding described in Section 4, it is represented as the following logic program PAO3,basic if only the basic principles are applied:

b(X) ← a(X) ∧ ¬abab(X).      b′(X) ← c(X) ∧ ¬abcnb(X).      c(o3) ← ⊤.
abab(X) ← ⊥.                 c(o2) ← ⊤.                     abnbb(o2) ← ⊥.
a(o1) ← ⊤.                   b(X) ← ¬b′(X) ∧ ¬abnbb(X).     abcnb(o2) ← ⊥.
abnbb(o3) ← ⊥.

M of PAO3,basic is

〈{a(o1), b(o1), c(o2), c(o3), b′(o2)},
 {abab(o1), abab(o2), abab(o3), abcnb(o2), abnbb(o2), abnbb(o3)}〉.

NVC follows from this model. If additionally contraposition is used, then we consider the following program instead:

PAO3,contraposition = PAO3,basic ∪ {a′(X) ← ¬b(X) ∧ ¬abba(X), abba(X) ← ⊥, a(X) ← ¬a′(X) ∧ ¬abnaa(X), abab(X) ← ctxt(b′(X))}.

M of PAO3,contraposition is as follows:

〈{a(o1), abab(o2), b(o1), c(o2), c(o3), a′(o2), b′(o2)},
 {a(o2), abab(o1), abab(o3), abcnb(o2), abnba(o1), abnba(o2),
  abnba(o3), abnbb(o2), abnbb(o3), b(o2), a′(o1)}〉.

It entails the conclusion Oca. Let us assume there are two clusters of people whose reasoning process differs in the application of the contraposition principle. We unite the conclusions predicted for the clusters just as the answers of the participants of psychological studies are accumulated, obtaining {Oca, NVC}. These are exactly the significant answers reported in [12].

Fig. 1. MPT for the syllogism AO3: from the root (basic principles), the branch with probability 1 − pcontraposition leads to NVC and the branch with probability pcontraposition leads to Oca.

In order to represent what principles lead to what conclusions, Multinomial Processing Trees (MPTs) [17] are used. They have been suggested for modeling cognitive theories, because they represent cognitive processes as probabilistic procedures, thus being able to predict multiple answers and even their quantitative distribution [16]. We set the latent states (inner nodes) of the MPTs to the decisions whether to use certain principles and put the corresponding conclusions in the leaves. An MPT for the AO3 syllogism based on the clustering described above is presented in Figure 1. The parameter pcontraposition models the probability that an individual applies the contraposition principle and therefore belongs to the corresponding cluster. It can be trained from experimental data with algorithms like Expectation-Maximization [10]. Note that the MPT cannot predict all possible conclusions for a syllogism. This issue is addressed below.
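For a tree as simple as the one in Figure 1, the induced answer distribution and a maximum-likelihood estimate of pcontraposition can be written down directly. The sketch below uses hypothetical answer counts and only illustrates the role of the parameter; for this two-leaf tree, fitting by Expectation-Maximization reduces to taking the relative frequency.

```python
# Sketch of the answer distribution induced by the MPT of Figure 1, plus a
# maximum-likelihood estimate of the branching parameter (hypothetical counts).

def mpt_ao3_prediction(p_contraposition):
    return {"Oca": p_contraposition, "NVC": 1.0 - p_contraposition}

def estimate_p(n_oca, n_nvc):
    # For this two-leaf tree the ML estimate is simply the relative frequency.
    return n_oca / (n_oca + n_nvc)

p = estimate_p(n_oca=30, n_nvc=20)   # hypothetical answer counts
print(mpt_ao3_prediction(p))          # {'Oca': 0.6, 'NVC': 0.4}
```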

5.3 Heuristic Strategies

Some theories suggest that some humans do not use logic at all to solve a syllogism, but rely on heuristics such as the atmosphere bias [21] or the matching bias [20]. Given the participants' answers presented in [12], it seems that often answers are given by a small number of people (less than 5%). Many of these answers, but also some significant ones, are not (yet) explainable by the Weak Completion Semantics. A plausible explanation for that is that these people simply guess or use one of the heuristics mentioned below (educated guess).

A generative approach to model this lies in using MPTs. An MPT for a random guess can lead to all nine conclusions. MPTs for a particular heuristic strategy only take into account the valid conclusions under the corresponding theory. For the atmosphere bias, universal and affirmative conclusions are excluded when one of the premises is existential or negative, respectively. In the case of identical moods, the conclusion must have this mood as well. For the matching bias, the following order from the most to the least conservative quantifier is defined on moods:

E > O = I > A



A conclusion may not be given as an answer if it is less conservative than one of the premises with respect to that order. We have also observed biased conclusions in the data of [12] that may be explained by one of these heuristic strategies: in almost all syllogisms with figure 1, Xac is answered, where X is the least conservative mood from the premises that is still allowed under the matching strategy (I is preferred over O). The answer Xca is not given at all.
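The two heuristic filters can be stated as simple predicates over the premise and conclusion moods. The following Python sketch is our own illustration; the function names and the mood encoding are not taken from the paper.

    # Moods: A (universal affirmative), I (existential affirmative),
    #        E (universal negative),   O (existential negative).
    UNIVERSAL = {"A", "E"}
    AFFIRMATIVE = {"A", "I"}

    # Conservativeness order for the matching bias: E > O = I > A.
    CONSERVATIVENESS = {"E": 3, "O": 2, "I": 2, "A": 1}

    def atmosphere_allows(premise1, premise2, conclusion):
        """Atmosphere bias: no universal conclusion if a premise is existential,
        no affirmative conclusion if a premise is negative; identical premise
        moods force the conclusion to have that mood."""
        premises = {premise1, premise2}
        if conclusion in UNIVERSAL and premises - UNIVERSAL:
            return False
        if conclusion in AFFIRMATIVE and premises - AFFIRMATIVE:
            return False
        if premise1 == premise2 and conclusion != premise1:
            return False
        return True

    def matching_allows(premise1, premise2, conclusion):
        """Matching bias: the conclusion may not be less conservative than
        either premise with respect to E > O = I > A."""
        limit = max(CONSERVATIVENESS[premise1], CONSERVATIVENESS[premise2])
        return CONSERVATIVENESS[conclusion] >= limit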

As an alternative to generating the answers given by a cluster of guessers using MPTs, the following inverse process can be considered: predictions of the Weak Completion Semantics that are not in accordance with a particular heuristic strategy are not given by a cluster using that strategy. In the filtering approach, these conclusions are suppressed in the predictions for such a cluster. If no conclusion remains, NVC is answered instead. As it is likely that some participants do not use logic [20], such clusters must be modeled under the Weak Completion Semantics by using either the generative or the filtering approach. As a consequence, MPTs can construct a prediction for all answer possibilities.
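A minimal sketch of the filtering approach, reusing matching_allows from the previous sketch; conclusions are encoded as strings such as "Oca" whose first character is the mood, an encoding chosen here purely for illustration.

    def filter_predictions(wcs_predictions, premise1, premise2, allows=matching_allows):
        """Filtering approach: keep only the WCS predictions compatible with the
        heuristic strategy; if nothing survives, the cluster answers NVC."""
        kept = [c for c in wcs_predictions
                if c == "NVC" or allows(premise1, premise2, c[0])]
        return kept if kept else ["NVC"]

    # Hypothetical use for a syllogism with premise moods A and O:
    print(filter_predictions(["Oca"], "A", "O"))   # ['Oca'] -- O is allowed under matching
    print(filter_predictions(["Aac"], "A", "O"))   # ['NVC'] -- A is filtered out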

5.4 A Clustering Approach

Based on the principles and heuristic strategies described in this paper, the participants of [12] have been partitioned into three clusters using logic and two clusters applying heuristic strategies:

1. Basic principles, searchAlt, and converse for I
2. Basic principles, converse for I and deliberateGen
3. Basic principles, converse for I, E, and contraposition for A
4. Matching strategy
5. Biased conclusions in figure 1

Abduction was only used in one cluster because of the computational effort it requires. Although it would be interesting to model it for different clusters, no advanced principle other than converse would have an impact, because the others do not add existential imports. According to the results of [1], abduction yields the same results regardless of whether only the converse I mood or both the converse I and E moods are used. The matching strategy was implemented using the filtering approach. The heuristic of biased conclusions in figure 1 was implemented using the generative approach, such that its prediction overwrites the answers of the other clusters, except NVC.

5.5 Evaluation

We evaluate the predictions of WCS based on the clustering approach described in Section 5.4. The prediction for the syllogism AO3 and the overall results are compared with other cognitive theories in Table 4. The Weak Completion Semantics predicts the participants' answers in [12] correctly for 33 out of the 64 syllogisms. For 19 syllogisms there is one incorrect prediction, for 11 syllogisms there are two, and for one syllogism there are three mismatches.



Syllogism   Participants   PSYCOP          Verbal Models   Mental Models    Conversion   WCS
AO3         Oca, NVC       Oca, Ica, Iac   Oca, NVC        Oca, NVC, Oac    Oca, NVC     Oca, NVC
Overall     100%           77%             84%             78%              83%          92%

Table 4. Comparison of the Weak Completion Semantics with other cognitive theories.

6 Conclusions

The starting point of this paper was the cognitive theory based on the Weak Completion Semantics and the principles defined in [1,2]. We have successfully extended this approach by introducing two new principles and applying a clustering approach to model individual differences in human reasoning. This also takes into account that some people may not use logic at all, but rather guess or apply heuristic strategies. The clustering presented in Section 5.4 is only the best clustering currently known under WCS; we do not know whether it is already the optimal one. However, due to the combinatorial explosion¹, it is difficult to find the global optimum. Future work may investigate alternative clusters and possibly identify new principles. The question whether the predictions change if abduction is applied to more than one cluster would be particularly interesting.

Finally, we have applied Multinomial Processing Trees to model that different principles lead to different conclusions. This information is lost if the predictions for all clusters are accumulated. This shows how much we depend on the way experimental results are reported. If we had more insight into the patterns participants opted for, we could model single syllogisms by MPTs instead of fitting to the overall results.

¹ For n principles, there are up to 2^n possible clusters. Additionally, it is unknown whether the current set of principles is already complete.

References

1. Ana Costa, Emmanuelle-Anna Dietz, Steffen Hölldobler, and Marco Ragni. A computational logic approach to human syllogistic reasoning. In TBA, editor, Proceedings of the 39th Annual Conference of the Cognitive Science Society, COGSCI 2017, page TBA. Cognitive Science Society, 2017.

2. Ana Costa, Emmanuelle-Anna Dietz Saldanha, and Steffen Hölldobler. Monadic reasoning using weak completion semantics. In Steffen Hölldobler, Andrey Malikov, and Christoph Wernhard, editors, Proceedings of the Young Scientist's Second International Workshop on Trends in Information Processing (YSIP2) 2017. CEUR Workshop Proc., 2017.

3. E.-A. Dietz, S. Hölldobler, and M. Ragni. A computational logic approach to the suppression task. In N. Miyake, D. Peebles, and R. P. Cooper, editors, Proc. of the 34th Annual Conference of the Cognitive Science Society, pages 1500–1505, 2012.

4. Emmanuelle-Anna Dietz. A computational logic approach to the belief bias in human syllogistic reasoning. In Patrick Brézillon, Roy Turner, and Carlo Penco, editors, 10th International and Interdisciplinary Conference on Modeling and Using Context, CONTEXT 2017, Proceedings, volume 10257, pages 692–707. Springer, 2017.

5. Emmanuelle-Anna Dietz Saldanha, Steffen Hölldobler, and Luís Moniz Pereira. Contextual reasoning: Usually birds can abductively fly. In TBA, editor, 14th Int. Conference on Logic Programming and Nonmonotonic Reasoning, volume TBA of LNAI. Springer, 2017.

6. H. P. Grice. Logic and conversation. In Peter Cole and Jerry L. Morgan, editors, Syntax and Semantics, volume 3. Academic Press, 1975.

7. S. Hölldobler. Weak completion semantics and its applications in human reasoning. In U. Furbach and Claudia Schon, editors, Proceedings of the Workshop on Bridging the Gap between Human and Automated Reasoning on the 25th International Conference on Automated Deduction, pages 2–16. CEUR-WS.org, 2015.

8. S. Hölldobler and C. D. Kencana Ramli. Logic programs under three-valued Łukasiewicz semantics. In P. M. Hill and D. S. Warren, editors, 25th Int. Conference on Logic Programming, volume 5649 of Lecture Notes in Computer Science, pages 464–478. Springer, 2009.

9. S. Hölldobler and C. D. Kencana Ramli. Logics and networks for human reasoning. In Int. Conference on Artificial Neural Networks, LNCS 5769, pages 85–94. Springer, 2009.

10. Xiangen Hu and William H. Batchelder. The statistical analysis of general processing tree models with the EM algorithm. Psychometrika, 59(1):21–47, 1994.

11. P. N. Johnson-Laird. Mental models: towards a cognitive science of language, inference, and consciousness. Harvard University Press, Cambridge, MA, 1983.

12. S. Khemlani and P. N. Johnson-Laird. Theories of the syllogism: A meta-analysis. Psychological Bulletin, pages 427–457, 2012.

13. J. W. Lloyd. Foundations of Logic Programming. Springer, 1984.

14. Luís M. Pereira, Joaquim Nunes Aparício, and José Júlio Alferes. Hypothetical reasoning with well founded semantics. In B. Mayoh, editor, Scandinavian Conference on Artificial Intelligence: Proc. of the SCAI'91, pages 289–300. IOS Press, Amsterdam, 1991.

15. Marco Ragni, Emmanuelle-Anna Dietz, Ilir Kola, and Steffen Hölldobler. Two-valued logic is not sufficient to model human reasoning, but three-valued logic is: A formal analysis. In Claudia Schon and Ulrich Furbach, editors, Proceedings of the Workshop on Bridging the Gap between Human and Automated Reasoning co-located with the 25th International Joint Conference on Artificial Intelligence IJCAI, volume 1651 of CEUR Workshop Proc., pages 61–73. CEUR-WS.org, 2016.

16. Marco Ragni, Henrik Singmann, and Eva-Maria Steinlein. Theory comparison for generalized quantifiers. In CogSci, 2014.

17. David M. Riefer and William H. Batchelder. Multinomial modeling and the measurement of cognitive processes. Psychological Review, 95(3):318–339, 1988.

18. L. J. Rips. The psychology of proof: Deductive reasoning in human thinking. MIT Press, 1994.

19. K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science. A Bradford Book. MIT Press, 2008.

20. N. E. Wetherick and K. J. Gilhooly. 'Atmosphere', matching, and logic in syllogistic reasoning. Current Psychology, 14(3):169–178, 1995.

21. R. S. Woodworth and S. B. Sells. An atmosphere effect in formal syllogistic reasoning. Journal of Experimental Psychology, 18(4):451, 1935.



Satisfiability for First-order Logic as a Non-Modal Deontic Logic

Robert Kowalski

Imperial College London, United Kingdom [email protected]

Abstract. In modal deontic logics, the focus is on inferring logical consequences, for example inferring whether an obligation O mail, to mail a letter, logically implies O [mail ∨ burn], an obligation to mail or burn the letter. Here I present an alternative approach in which obligations are sentences (such as mail) in first-order logic (FOL), and the focus is on satisfying those sentences by making them true in some best model of the world. To facilitate this task and to make it manageable, candidate models are defined by a logic program (LP) extended by means of candidate action assumptions (A). The resulting combination of FOL, LP and A is a variant of abductive logic programming (ALP).

1 Goal satisfaction in FOL

In the abductive logic programming (ALP) approach to reasoning about obligations of [8, 10], candidate assumptions (A), representing actions and other "abducibles", and logic programs (LP), representing an agent's beliefs, are combined with first-order logic (FOL), representing an agent's goals. However, this characterisation of ALP is potentially misleading, because it fails to identify the primary role of goals, and the supporting role of beliefs and assumptions in helping to make goals true. Here is a more abstract characterisation, which is formulated entirely in terms of FOL, and does not mention A or LP at all:

A goal satisfaction problem is a tuple 〈G, M0, W〉 where:

G is a set of sentences in FOL, representing goals.

M0 is a classical FOL model-theoretic structure, representing a partial history of the world.

W is a set of classical FOL model-theoretic structures, representing alternative extensions of M0.

M∈W satisfies a goal satisfaction problem 〈G, M0, W〉 if and only if

G is true in M.

We will see that the models M0 and W can be defined constructively, by using a logic program P to define M0, and by using a set of candidate actions A to specify alternative ways of extending M0. We will also see that it is possible to satisfy goals without generating complete models, by using backward reasoning.



There can be many models that satisfy the same goal satisfaction problem, and some models may be better than others. Usual formulations of ALP ignore this fact, and are neutral with respect to the choice between different models satisfying the same goals. In contrast, in deontic logic, an agent is obliged to satisfy its goals in the best way possible. To capture this normative character of deontic logic in FOL/ALP, we introduce a partial ordering between the models in W. A normative goal satisfaction problem is a tuple 〈G, M0, W, <〉 where:

< is a strict partial ordering over W, where M < M’ represents M’ being better than M.

M∈W satisfies a normative goal satisfaction problem 〈G, M0, W, <〉 if and only if

M satisfies the goal satisfaction problem 〈G, M0, W〉 and there does not exist M’∈W such that M’ satisfies 〈G, M0, W〉 and M < M’.
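The two definitions above are small enough to prototype directly. The following Python sketch is not taken from the paper or from any ALP system: it enumerates the candidate worlds W = {M0 ∪ Δ | Δ ⊆ A}, represents models as sets of ground atoms, and, as a simplifying assumption, takes the goals and the preference ordering as plain Python predicates. The later examples reuse these three functions.

    from itertools import chain, combinations

    def candidate_worlds(m0, actions):
        """W = { M0 ∪ Δ | Δ ⊆ A }, with models represented as frozensets of ground atoms."""
        deltas = chain.from_iterable(combinations(actions, r) for r in range(len(actions) + 1))
        return [frozenset(m0) | frozenset(d) for d in deltas]

    def satisfying_models(goals, m0, actions):
        """Models in W in which every goal holds (each goal is a predicate on a model)."""
        return [m for m in candidate_worlds(m0, actions) if all(g(m) for g in goals)]

    def best_models(goals, m0, actions, better):
        """Normative version: keep the satisfying models that no other satisfying
        model beats under the strict partial order better(m, m2) ('m2 is better than m')."""
        sat = satisfying_models(goals, m0, actions)
        return [m for m in sat if not any(better(m, m2) for m2 in sat)]

For instance, with G = {mail, ¬burn} and A = {mail, burn} (Section 4 below), satisfying_models returns only the model {mail}.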

There is an obvious relationship with the possible world semantics of modal logics: W is like a frame of possible worlds. The extension of M0 to M is like the accessibility relation between possible worlds. The partial ordering < is like the preference relation in preference-based deontic logics, such as those of [5] and [14]. However, whereas preference-based deontic logics build the preference relation into the semantics, here the partial ordering is external to the logic, which is simply FOL. Moreover, while in modal logics, actions and events are normally represented by labels on the accessibility relation, here they are "reified", as part of the record of a partial history of the world.

Deontic logic, the logic of obligation, contrasts with alethic logic, the logic of necessity. In alethic logic, a necessary truth cannot be falsified. But in deontic logic, an obligation is a normative ideal. If an obligation is violated, it does not cause the end of the world, but it results in a world that is less than ideal. The focus in modal logics on deriving logical consequences makes it difficult to deal with violations, conflicting norms, and contrary-to-duty obligations. In contrast, the focus in FOL/ALP on goal satisfaction turns these problems into pragmatic choices between alternative models. Moreover, it makes it possible for an agent, aspiring towards the normative ideal, to fall short, but nonetheless succeed in generating the best model possible with the limited resources available.

2 Logic programs as constructive definitions of models

These definitions of goal satisfaction imply that, for an agent to satisfy its goals G, whether in some best way or not, the agent needs to search the space of alternative world histories W, to find some model M ∈ W of G. For this to be feasible in practice, the models M ∈ W must be constructible, and the space W itself must be searchable. This is where ALP comes in. Logic programs in ALP provide a constructive representation of models, and an efficient way to guarantee truth in a model without necessarily generating the model in its totality.



In our variant of ALP, the given model M0 is defined by a logic program P, and the space of candidate models M ∈ W is defined by logic programs P ∪ Δ, where Δ ⊆ A and A is a set of ground atomic sentences representing candidate assumptions. In ordinary abduction, the goals G represent observations, and A represents candidate hypotheses that can be used to explain G. In default reasoning, A represents assumptions that conditions are normal, and G represents constraints that ensure conditions are not assumed to be normal if they are exceptional. In deontic applications, G represents obligations and prohibitions, and A represents candidate actions that can be used to satisfy G.

In the simplest case, a logic program is a set of definite clauses of the form conclusion ← condition1 ∧ … ∧ conditionn, where conclusion and each conditioni is an atomic formula. All variables are universally quantified with scope the entire clause. Every such set of definite clauses P has a unique minimal model M [3]. The minimal model M can be viewed as the intended model of P, and P can be regarded as a constructive definition of M. In LP, it is convenient to represent models M as Herbrand interpretations, which are sets of atomic sentences representing all the atomic sentences that are true in M. A Herbrand interpretation M of a logic program P is then a minimal model of P if and only if M ⊆ M′ for any other Herbrand model M′ of P.

Minimal models can be constructed by forward reasoning: using modus ponens to exhaustively derive atomic sentences that are instances of the conclusions of clauses from atomic sentences that are instances of the conditions of clauses. Forward reasoning can also be used to satisfy a goal: guessing a set of candidate assumptions Δ ⊆ A, adding them to P, generating the minimal model M of P ∪ Δ, and then checking whether M is also a model of G. However, in the general case this kind of reasoning is not computationally feasible. It is not feasible to generate candidate Δ blindly and independently of G; nor is it feasible to generate models in their totality.

In contrast, backward reasoning using SLD resolution [9] is computationally feasible. SLD resolution reasons backwards from G, reducing goals that match the conclusion of a clause in P to subgoals that are the instantiated conditions of the clause. It continues this process of goal-reduction until all subgoals are solved directly, either by atomic sentences in P or by assumptions in A. In this way, backward reasoning generates only assumptions Δ that are relevant to satisfying G. Moreover, it does so by generating only the assumptions Δ, which together with P determine the minimal model of P ∪ Δ.

In the following three simple examples, the logic program P is either an empty set of clauses {} or a singleton set containing the clause X = X. Backward reasoning generates only those assumptions/actions in A or those instances of X = X that are needed to satisfy the goal. Moreover, in the first two examples, the preference relation is empty ({}), so all models are equally good.
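As a concrete illustration of the forward-reasoning construction of a minimal model, the following Python sketch handles only ground definite clauses; the clause encoding is an assumption made for brevity and is not taken from the paper.

    def minimal_model(clauses):
        """clauses: list of pairs (conclusion, [condition1, ..., conditionN]) of ground atoms.
        Apply every clause whose conditions already hold until a fixed point is reached;
        the fixed point is the minimal Herbrand model of the program."""
        model = set()
        changed = True
        while changed:
            changed = False
            for conclusion, conditions in clauses:
                if conclusion not in model and all(c in model for c in conditions):
                    model.add(conclusion)
                    changed = True
        return model

    # Facts are clauses with an empty list of conditions.
    program = [("country(iz)", []), ("country(oz)", []),
               ("coloured(iz)", ["paint(iz,red)"])]
    print(minimal_model(program))                             # paint(iz,red) is not derivable
    print(minimal_model(program + [("paint(iz,red)", [])]))   # now coloured(iz) is derived as well

Backward reasoning, by contrast, would start from a goal such as coloured(iz) and generate only the assumption paint(iz,red) needed to prove it, without constructing the whole model.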

3 Map colouring as goal satisfaction

The classic map colouring problem illustrates the use of FOL/ALP for goal satisfaction. However, it can also be formulated in deontic terms:

It is obligatory that every country is painted with a colour.
It is forbidden to paint the same country two different colours.
It is forbidden to paint two adjacent countries the same colour.

The problem can be stated as a goal G in FOL:

{ ∀X [country(X) → ∃C [colour(C) ∧ paint(X, C)]],
  ∀X C1 C2 [paint(X, C1) ∧ paint(X, C2) → C1 = C2],
  ∀X Y C ¬[adjacent(X, Y) ∧ paint(X, C) ∧ paint(Y, C)] }

The map can be represented by an initial model M0, defined by a logic program P that specifies the countries, the colours and the adjacency relation. For a simple problem with two adjacent countries and two colours, this is given by the following program P, which also includes a definition of the identity relation:

{country(iz), country(oz), adjacent(iz, oz), colour(red), colour(blue), X = X}

Because the program P is so simple, the only difference between P and its minimal model M0 is that P contains a general clause X = X, whereas M0 contains all variable-free instances iz = iz, oz = oz, red = red, blue = blue of the clause. The alternative ways of colouring the two countries are given by the set of candidate actions A:

{ paint(iz, red), paint(iz, blue), paint(oz, red), paint(oz, blue) }

There are exactly two models that satisfy the goals G; and, given no restrictions on the preference relation, both are equally good:

M1 = M0 ∪ { paint(iz, red), paint(oz, blue) }   and   M2 = M0 ∪ { paint(iz, blue), paint(oz, red) }

The map colouring problem can also be expressed in modal deontic logic, formalising the English statement of the problem. It would then be possible to infer the following logical consequence:

O [paint(iz, red) ∧ paint(oz, blue)] ∨ O [paint(iz, blue) ∧ paint(oz, red)]

which describes all solutions of the problem. It is not obvious how to infer a single solution, which would be more useful in practice.
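Using the candidate_worlds/satisfying_models sketch from Section 1, the same toy instance can be enumerated directly. The goals are rewritten here as Python predicates over a model, which only approximates the FOL goals G above; the helper painted and the atom strings are assumptions made for this sketch.

    # Map colouring as goal satisfaction, reusing the functions sketched in Section 1.
    M0 = frozenset({"country(iz)", "country(oz)", "adjacent(iz,oz)",
                    "colour(red)", "colour(blue)"})
    A = ["paint(iz,red)", "paint(iz,blue)", "paint(oz,red)", "paint(oz,blue)"]

    def painted(country, colour, m):
        return f"paint({country},{colour})" in m

    goals = [
        # every country is painted with some colour
        lambda m: all(any(painted(x, c, m) for c in ("red", "blue")) for x in ("iz", "oz")),
        # no country is painted with two different colours
        lambda m: not any(painted(x, "red", m) and painted(x, "blue", m) for x in ("iz", "oz")),
        # adjacent countries do not share a colour
        lambda m: not any(painted("iz", c, m) and painted("oz", c, m) for c in ("red", "blue")),
    ]

    for m in satisfying_models(goals, M0, A):
        print(sorted(a for a in m if a.startswith("paint")))
    # Prints the paint atoms of the two solutions M1 and M2 described above.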

4 Ross's Paradox

SDL (Standard Deontic Logic) is commonly used as a basis for comparison between different deontic logics. It is a propositional logic with a modal operator O representing obligation. Among the many problems of SDL, one that also affects many other deontic logics is Ross's Paradox [12]:

It is obligatory that the letter is mailed.
If the letter is mailed, then the letter is mailed or the letter is burned.

i.e.  O mail,  mail → mail ∨ burn.

In SDL it is a logical consequence of O mail that O [mail ∨ burn]. As McNamara [11] puts it, "it seems rather odd to say that an obligation to mail the letter entails an obligation that can be fulfilled by burning the letter (something presumably forbidden), and one that would appear to be violated by not burning it if I don't mail the letter".

The "Paradox" can be understood as another example of the inadequacy of logical consequence for dealing with problems of satisfying obligations. Here is a formulation of the paradox as a goal satisfaction problem 〈G, M0, W, <〉:

G = {mail, ¬burn}
M0 = {}
W = {M0 ∪ Δ | Δ ⊆ A} = { {}, {mail}, {burn}, {mail, burn} }, where A = {mail, burn}
< = {}
P = {} and M0 is the minimal model of P.

M = {mail} is the only minimal model that satisfies G. But mail ∨ burn is true in M. So satisfying G entails satisfying mail ∨ burn. But, contrary to suggestions that may be associated with the fact that O [mail ∨ burn] is a logical consequence of O mail, satisfying the goal mail ∨ burn does not satisfy the goal mail. Viewed in this way, Ross's Paradox is not a paradox at all, but rather, as Fox [4] also argues, a confusion between satisfying an obligation and implying that one obligation is a logical consequence of another.

5 Chisholm's Paradox

Ross's Paradox and the map colouring problem do not need preferences between alternative models. However, the need for preferences arises with Chisholm's Paradox [2]:

It ought to be that Jones goes to assist his neighbours.
It ought to be that, if Jones goes, then he tells them he is coming.
If Jones doesn't go, then he ought not tell them he is coming.
Jones doesn't go.

i.e.  O go,  O (go → tell),  ¬go → O ¬tell,  ¬go



Much of the discussion [1] in the deontic logic literature concerns the problems that arise with alternative representations of the conditional obligations in the second and third sentences. I will not repeat this discussion here, but will present the example as a normative goal satisfaction problem 〈G, M0, W, <〉:

G = {go → tell, ¬go → ¬tell}
M0 = {}
W = {M0 ∪ Δ | Δ ⊆ A} = { {}, {go}, {tell}, {go, tell} }, where A = {go, tell}
M < M′ if go ∉ M and go ∈ M′.

Here the "obligation" for Jones to go is represented by the preference for models in which go is true over models in which go is false. There are two models M1 = {} and M2 = {go, tell} that make G true. But M2 is better than M1. So only M2 satisfies 〈G, M0, W, <〉. This means that Jones must go and tell.

Now suppose that Jones doesn't go. We can represent this simply by removing go from the candidate actions A, updating the problem to 〈G, M0, W′, <〉 where W′ = {{}, {tell}}. The only (and best) candidate model that now satisfies the updated problem is the less than ideal model M1 = {}, which means that Jones must not tell, as is intuitively correct. In a more realistic representation with explicit time, when we discover that Jones doesn't go, we would update M0 to a new world history in which going is no longer an option. Notice that, in any case, whether we update M0, W or both, the solution of the updated problem changes, even though the goals remain the same. This gives the satisfiability problem for FOL a non-monotonic character, even though logical consequence in FOL is monotonic.
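The scenario and its update can be replayed with the best_models sketch from Section 1. The encoding of the conditional goals and of the ordering as Python predicates is an illustrative assumption, chosen to match the sets G, A and the ordering < above.

    # Chisholm's Paradox with the best_models sketch from Section 1.
    M0 = frozenset()
    A = ["go", "tell"]

    goals = [
        lambda m: ("go" not in m) or ("tell" in m),    # go → tell
        lambda m: ("go" in m) or ("tell" not in m),    # ¬go → ¬tell
    ]

    # M < M' iff go is false in M and true in M': models in which Jones goes are better.
    better = lambda m, m2: ("go" not in m) and ("go" in m2)

    print(best_models(goals, M0, A, better))
    # [frozenset({'go', 'tell'})]  -> Jones must go and tell
    print(best_models(goals, M0, ["tell"], better))
    # [frozenset()]                -> after removing go from A, Jones must not tell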

6 LPS (Logical Production System)

The logic and computer language LPS [6, 7] is a scaled-down implementation of ALP, intended for practical applications of goal satisfaction. Goals in LPS have the simplified form of rules antecedent → consequent, which are a logical reconstruction of condition-action rules in production systems [13]. All variables in the antecedent of a rule are universally quantified with scope the entire rule, and all variables in the consequent but not in the antecedent are existentially quantified with scope the consequent of the rule. Variables include time variables, and all times in the consequent are later than or equal to all times in the antecedent.

Computation in LPS observes external events and executes actions, generating a sequence M0 ⊆ … ⊆ Mi−1 ⊆ Mi ⊆ … of partial histories, which in the limit determines a model M = M0 ∪ … ∪ Mi−1 ∪ Mi ∪ … that makes all the goals true. Each history Mi is obtained from the previous history Mi−1 by updating Mi−1 with the set of all external events and actions that take place between Mi−1 and Mi. These updates are performed destructively, like change of state in the real world, and like destructive updates in imperative computer languages.

Computation in LPS gives rules both a logical and an imperative interpretation. Whenever any instance of an antecedent of a rule becomes true in some history Mi, forward reasoning treats the corresponding instance of the consequent of the rule as a command to perform actions, to make that instance of the consequent true in some future history Mj, j ≥ i+1. Clauses conclusion ← conditions in LP also have both a logical and an imperative interpretation. In addition to their purely logical interpretation, they have an imperative interpretation as procedures, which use backward reasoning to decompose a goal of determining whether a conclusion is true, or of making a conclusion true, to the subgoal of determining or making the conditions true. Conditions representing actions are made true by adding them to future histories Mj.

In the current implementation, the only way to indicate preferences between alternative future histories is to use the order in which clauses are written. Histories generated by clauses that are written earlier are given preference over histories generated by clauses written later. There is also an in-built preference for models that satisfy goals as soon as possible.

An online implementation of LPS is accessible from http://lps.doc.ic.ac.uk/. The LPS examples notebook in the examples menu contains executable links to the map colouring problem, and to several other examples having a deontic interpretation. The first steps notebook presents an example in which two agents have conflicting goals. Agent bob wants the light on whenever he is in a room, and agent dad wants the light off whenever there is a room in which a light is on. It may be natural to think of bob's goal as a personal goal, and dad's goal as an obligation. But LPS makes no distinction between the two kinds of goals.

7 Conclusions

In this short paper, I have formulated deontic reasoning in ALP as goal satisfaction in FOL, with the A and LP components of ALP used to define the space of candidate models. I have argued that backward reasoning with LP overcomes the need to generate complete models, and makes it possible to avoid generating candidate actions blindly without relevance to the goals that need to be satisfied. I have also argued that, for philosophical applications, the focus on goal satisfaction in ALP is more useful than the focus on logical consequence in modal deontic logics.

References

1. Carmo, J. and Jones, A. J. (2002) Deontic logic and contrary-to-duties. Handbook of Philosophical Logic. Springer.
2. Chisholm, R. M. (1963) Contrary-to-Duty Imperatives and Deontic Logic. Analysis.
3. van Emden, M. H. and Kowalski, R. A. (1976) The semantics of predicate logic as a programming language. JACM.
4. Fox, C. (2015) The Semantics of Imperatives. The Handbook of Contemporary Semantic Theory, 3, 433-469.
5. Hansson, B. (1969) An analysis of some deontic logics. Nous.
6. Kowalski, R. and Sadri, F. (2014) A logical characterization of a reactive system language. Proceedings of RuleML 2014, Springer Verlag.
7. Kowalski, R. and Sadri, F. (2016) Programming in logic without logic programming. Theory and Practice of Logic Programming, 16(3), 269-295.
8. Kowalski, R. and Satoh, K. (2017) Obligation as optimal goal satisfaction. Journal of Philosophical Logic, Springer. https://link.springer.com/article/10.1007/s10992-017-9440-3
9. Kowalski, R. (1974) Predicate logic as programming language. In IFIP Congress, Vol. 74, 569-574.
10. Kowalski, R. (2011) Computational logic and human thinking: how to be artificially intelligent. Cambridge University Press.
11. McNamara, P. (2006) Deontic logic. Handbook of the History of Logic, 7, 197-289.
12. Ross, A. (1941) Imperatives and Logic. Theoria, 7, 53–71.
13. Simon, H. (2001) Production systems. In The MIT Encyclopedia of the Cognitive Sciences. MIT Press.
14. van Benthem, J., Grossi, D. and Liu, F. (2014) Priority structures in deontic logic. Theoria.



Author Index

Dietz Saldanha, Emmanuelle-Anna, 18, 69

Hölldobler, Steffen, 18, 69

Kakas, Antonis, 31

Kola, Ilir, 1

Kowalski, Robert, 84

Louredo Rocha, Isabelly, 18

Mörbitz, Richard, 69

Olteţeanu, Ana-Maria, 54

Pereira, Luís Moniz, 39

Ragni, Marco, 1, 10

Riesterer, Nicolas, 10


Saptawijaya, Ari, 39