Cognitive processes in propositional reasoning.


Psychological Review, 1983, Vol. 90, No. 1, 38-71

Copyright 1983 by the American Psychological Association, Inc. 0033-295X/83/9001-0038$00.75

Cognitive Processes in Propositional Reasoning

Lance J. Rips

University of Chicago

Propositional reasoning is the ability to draw conclusions on the basis of sentence connectives such as and, if, or, and not. A psychological theory of propositional reasoning explains the mental operations that underlie this ability. The ANDS (A Natural Deduction System) model, described in the following pages, is one such theory that makes explicit assumptions about memory and control in deduction. ANDS uses natural deduction rules that manipulate propositions in a hierarchically structured working memory and that apply in either a forward or a backward direction (from the premises of an argument to its conclusion or from the conclusion to the premises). The rules also allow suppositions to be introduced during the deduction process. A computer simulation incorporating these ideas yields proofs that are similar to those of untrained subjects, as assessed by their decisions and explanations concerning the validity of arguments. The model also provides an account of memory for proofs in text and can be extended to a theory of causal connectives.

The importance of deductive reasoning to cognitive theory lies in its centrality among other modes of thought. Explanations of people's statements and actions presuppose some degree of logical consistency (Davidson, 1970; Dennett, 1981). If we explain why Steve stayed away from the dinner party by saying that Steve believes all parties are boring, we tacitly assign to him the ability to deduce, from his general belief and his recognition that this is a party, the conclusion that this party will be boring. True, we sometimes attribute to others deductive errors, particularly when the route to the conclusion is long and complicated, and we will see many instances in what follows. Nevertheless, these mistakes are only discernible against a core of logically accurate thought. As Davidson (1970) puts it, "Crediting people with a large degree of consistency cannot be counted mere charity: it is unavoidable if we are to be in a position to accuse them meaningfully of error and some degree of irrationality" (p. 96).

I would like to thank Jonathan Adler, Norman Brown, Carol Cleland, Allan Collins, Fred Conrad, Donald Fiske, Gary Kahn, Lola Lopes, Bob McCauley, Jim McCawley, Gregg Oden, Jay Russo, Steve Schacht, Paul Thagard, and especially Sandy Marcus for their help with this article. The research was supported by U.S. Public Health Service Grant K02 MH00236 and National Science Foundation Grant BNS 80-14131.

Requests for reprints should be sent to Lance Rips, Behavioral Sciences Department, University of Chicago, 5848 South University Avenue, Chicago, Illinois 60637.

This logical core is evident in subjects' unanimous intuitions about the validity of certain problems. Virtually all subjects are willing to accept Argument 1, which has the familiar modus ponens form, and indeed, it is very difficult to imagine what one could say to convince someone who affirmed the premises of the argument (i.e., the first two sentences of Argument 1) but denied the conclusion (the final sentence).

1. If there is an M on the blackboard, there is an R.

There is an M.

There is an R.

One can often bring new evidence to bear in persuading someone to change his or her belief in particular propositions, but it is not clear what sort of evidence would be relevant in overcoming resistance to the validity of this inference. Further, any proof or explanation of Argument 1 is likely to be no more convincing than the argument itself (Carroll, 1895; Haack, 1976; Quine, 1936). If Argument 1 is not conclusive, what is? Thus, a major goal for a psychological theory of deduction is to account for the pervasive appeal of such arguments.

Of course, not all deductive inferences are as obvious as that of Argument 1, and even those that seem straightforward at first glance may turn out to be beyond the reach of some subjects. For example, only 61% of 10th- and 11th-grade students accept Argument 2 as valid, according to some data of Osherson (1975, p. 146).

2. If there is not both an M and a P on the blackboard, then there is an R.

If there is no M, then there is an R.

This inference presents no problem for logical analysis, since there are well-known algorithms or proof procedures for deciding its validity (see Chang & Lee, 1973, for a survey). But although these formal methods may be of use in suggesting psychological hypotheses, a psychological theory must explain the difficulties encountered by the unsuccessful subjects as well as the correct procedures of the successful ones. This provides a second goal for the research that follows.
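The simplest such decision procedure is truth-table enumeration. The sketch below is only an illustration of that idea (not Chang & Lee's actual methods): it tests every assignment of truth values to M, P, and R for a counterexample to Argument 2.

```python
from itertools import product

def implies(p, q):
    """Material conditional: IF p THEN q."""
    return (not p) or q

def valid_argument_2():
    """Truth-table test of Argument 2: search every assignment of truth
    values for one that makes the premise true and the conclusion false;
    validity means no such counterexample exists."""
    for M, P, R in product([True, False], repeat=3):
        premise = implies(not (M and P), R)
        conclusion = implies(not M, R)
        if premise and not conclusion:
            return False  # counterexample found
    return True

print(valid_argument_2())  # → True
```

Eight rows suffice here, but the method scales exponentially with the number of atomic propositions, one reason it is an implausible model of untrained reasoning.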

Fulfilling these goals is vital to cognitive psychology because an account of inference is part of the explanation of many cognitive components. To take just a few examples: (a) Most current theories of comprehension assume some sort of reasoning mechanism in order to explain how people anticipate upcoming information in text and relate new information to what has come before (see, e.g., Clark, 1977; Warren, Nicholas, & Trabasso, 1979). (b) One's understanding of others and one's commitment to social attitudes can be seen as the product of certain (not necessarily optimal) inference strategies (e.g., Hastie, in press; Nisbett & Ross, 1980). (c) Piagetians hypothesize that small changes in base-level inference procedures are responsible for massive changes in cognitive development. (d) According to standard theories of perception, knowledge of external objects is based on inference from visual cues, particularly when the objects are three-dimensional, occluded, or ambiguous ones (e.g., Fodor & Pylyshyn, 1981; Ullman, 1980).

This dependence of cognitive explanations on reasoning is no accident. Because these explanations are couched in terms of mentally represented beliefs and goals, and because reasoning is our basic means for changing or reconciling beliefs, reasoning is naturally invoked in accounts of comprehension, intellectual development, and the like. Obviously, the kind of reasoning that figures in these explanations is not always deductive inference of the type illustrated by Arguments 1 and 2. But where beliefs and goals are propositional in form, as in many current psychological models (e.g., J. R. Anderson, 1976; Miller & Johnson-Laird, 1976), it is plausible that one important brand of inference deals with propositional connectives such as if, and, or, and not. Propositional inference has been a focus of study in logic for more than a century, but only recently has it received from psychologists anything like a general treatment (Braine, 1978; Johnson-Laird, 1975; Osherson, 1975). The effort in this article is to extend these earlier models to a more powerful theory that can be given firmer empirical support.

The approach to this problem outlined below takes the form of a computer model of propositional reasoning. The model is nicknamed ANDS—for A Natural Deduction System—and, as its name implies, is descended from the formal natural-deduction procedures pioneered by Gentzen (1935/1969) and Jaskowski (1934). It is also related to artificial intelligence (AI) theorem provers such as PLANNER (Hewitt, Note 1; see also Bledsoe, 1977). A working version of ANDS has been implemented in the LISP programming language. Within its natural-deduction framework, ANDS provides a simple way to handle temporary assumptions or "suppositions" that facilitate human reasoning (Rips & Marcus, 1977). In addition, it provides a specific account of working memory during deduction and of processes that manage its memory structures. ANDS also possesses the ability to reason both forward from the premises and backward from the conclusion. These features are set out below in the first section of this article. An overview of ANDS is followed by a closer look at its main components: its memory structures and inference routines. The second section of the article is devoted to an examination of ANDS's similarity to human inference, as evidenced by protocol data, recall performance for proofs in text, and judgments of the validity of sample arguments. The final section considers possible ways to expand ANDS's basic inference abilities.


ANDS as a Theory of Propositional Reasoning

Overview

ANDS's central assumption—one that it shares with the other psychological models cited above—is that deductive reasoning consists in the application of mental inference rules to the premises and conclusion of an argument. The sequence of applied rules forms a mental proof or derivation of the conclusion from the premises, where these implicit proofs are analogous to the explicit proofs of elementary logic. In the simplest case, a mental proof has a single step, formed by applying just one of these internal rules. For example, suppose that among the stock of rules is one that specifies that propositions of the form IF p, q and p jointly imply the proposition q, where p and q are arbitrary propositions. This is the modus ponens rule, mentioned above, and we can see that this schema matches the premises and conclusion of Argument 1. Because of this match between the argument and the inference rule, the conclusion of Argument 1 is cognitively derivable or provable from the premises. We can also say that Argument 1 is cognitively acceptable or valid (although in doing so we depart somewhat from the way valid is traditionally used in logic). In more complex examples of deductive reasoning, more than a single rule will be needed to derive the conclusion; but in all such cases, the rules are the ultimate authority in determining which arguments are valid.
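The modus ponens schema can be illustrated with a small piece of code. The sketch below is purely illustrative (the article's own implementation was in LISP, and the tuple encoding of propositions here is a hypothetical convenience): it applies the rule in the forward direction to the premises of Argument 1.

```python
# Propositions as nested tuples, e.g. ("if", "M", "R") for IF M THEN R;
# atomic propositions are plain strings. This encoding is an assumption
# made for this sketch, not the representation used in the model itself.

def modus_ponens(assertions):
    """Forward modus ponens: for each IF p THEN q whose antecedent p is
    also asserted, derive the consequent q. Returns the new propositions."""
    new = set()
    for a in assertions:
        if isinstance(a, tuple) and a[0] == "if":
            _, p, q = a
            if p in assertions and q not in assertions:
                new.add(q)
    return new

premises = {("if", "M", "R"), "M"}   # the two premises of Argument 1
print(modus_ponens(premises))        # → {'R'}
```

A single application of the schema yields R, mirroring the one-step mental proof described in the text.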

ANDS's assumption about inference rules is not unassailable, since there are other well-known methods for showing that arguments are valid. These include truth tables (Wittgenstein, 1921/1961) and various kinds of diagrams (e.g., Gardner, 1958). These methods are, of course, rule governed in the sense of being algorithms for evaluating arguments, but they do not involve rules like the modus ponens example that manipulate propositions directly. There is no evidence to date, however, that these alternative methods can successfully predict subject responses for more than a very narrow range of reasoning problems. Moreover, when these methods have been compared to proposition-manipulating systems, the latter have provided a better account of the data (Osherson, 1974, 1975). It is certainly possible that a nonpropositional procedure will eventually be found that is sufficiently accurate and general; but for the present, inference rules appear the most viable choice (however, see Johnson-Laird, 1982, for a different view of this issue).

In short, ANDS is a theory of deductive reasoning that consists of propositional inference rules embodied in a set of computational routines. The routines apply the rules to produce a proof when an argument is evaluated. Both the routines and the proof are claimed to correspond to those used intuitively by subjects who have not received formal training in logic. This section of the article is devoted to an explanation of these components, using as an illustration a sample argument that ANDS evaluates with a fairly simple, but nontrivial, proof. The first part of the section considers informally what a proof of this argument might look like and compares this informal proof to ANDS's proof in a preliminary way. The comparison points out some of ANDS's special features, and these features are then described in detail, concentrating on the shape of the proofs in ANDS's working memory and on the inference routines that construct them. The next part returns to the sample argument and shows how its proof is built up step by step. (A lengthier proof, which illustrates some further aspects of ANDS, is included in the Appendix.) The last part of this section compares ANDS to other psychological deduction systems.

An initial proof of Argument 2. The sample argument I will focus on is Argument 2, so we should take a moment to think about its meaning. Recall that the premise was If there is not both an M and a P on the blackboard, then there is an R, and the conclusion was If there is no M, then there is an R. Our task is to decide whether this conclusion follows from the premise. Looking at the premise, we see that a main stumbling block is the phrase there is not both an M and a P. One thing to notice, however, is that situations that are consistent with this phrase are ones in which there is an M but no P, a P but no M, or neither a P nor an M. In other words, any arrangement of letters that lacks an M or lacks a P is one in which "there is not both an M and a P." We could therefore rephrase the premise as If there is no M or no P, then there is an R. But remember that the first half of the conclusion is If there is no M. Supposing that there really was no M on the blackboard, then surely the rephrased premise tells us that there will be an R. If there is no M or no P, then there is an R and There is no M together imply There is an R. Hence, given the premise, it follows that if there is no M, there will be an R. But this is exactly the conclusion, and Argument 2 must therefore be valid.
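The rephrasing step turns on the equivalence of "not both an M and a P" with "no M or no P," an instance of De Morgan's laws. A quick enumeration over truth values, offered here only as an illustration, confirms the equivalence:

```python
from itertools import product

# Verify that not(M and P) holds in exactly the situations described in
# the text: an M but no P, a P but no M, or neither, i.e. whenever
# (not M) or (not P) holds.
for M, P in product([True, False], repeat=2):
    assert (not (M and P)) == ((not M) or (not P))
print("equivalent under all four truth-value assignments")
```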

The following proof summarizes these steps:

3. a. If there is not both an M and a P on the blackboard, then there is an R. (Premise)

b. If there is no M or no P on the blackboard, then there is an R. (Consequence of a)

c. Suppose there is no M on the blackboard. (Supposition)

d. Then there will be an R. (Consequence of b and c)

e. Therefore, if there is no M, there will be an R. (Consequence of c and d)

The initial sentence in this proof is the premise of Argument 2, and the final sentence is its conclusion. The intermediate steps in Lines b and d, as well as the conclusion itself, are logical consequences of earlier steps and are derived from these earlier propositions by means of inference rules. These rules must be specified in a formal deductive system but are left unstated in this example (see the following section for a discussion of ANDS's deductive rules). Line c contains a temporary assumption or supposition, which is used to simplify deduction of the conclusion.

No axioms appear in the above proof, and this is characteristic of "natural deduction," the type of proof developed by Gentzen and others cited above. These systems were intended to "come as close as possible to actual reasoning" (Gentzen, 1935/1969, p. 68) on the theory that in "actual reasoning" the conclusion is deduced directly from the premises of the argument rather than from primitive propositions or axioms. Natural deduction proofs are typically simpler than axiomatic proofs, and for this reason such systems have been adopted in many elementary logic texts (e.g., Copi, 1954; Fitch, 1952; McCawley, 1980; Suppes, 1957; and Thomason, 1970b). Most previous psychological models of propositional reasoning, though differing in other respects, have used natural deduction methods (see Braine, 1978; Johnson-Laird, 1975; Osherson, 1975).

ANDS's proof of Argument 2. ANDS's proof is very similar to Proof 3. Figure 1 displays the contents of ANDS's working memory at the conclusion of the proof, and we can see that each of the propositions in Proof 3 finds a place in this configuration. The proof structure contains two parts, which are called the assertion tree and the subgoal tree. ANDS begins the problem by considering the premise (Assertion 1 in Figure 1), and this proposition (along with some inferences based on it) is placed in the top node of the assertion tree. ANDS also starts by taking into account the conclusion, the proposition that it must ultimately prove. This proposition appears at the top of the subgoal tree as Subgoal 2, and it is connected by a pointer to other propositions (subgoals) whose truth guarantees the truth of the conclusion. As in Proof 3, ANDS sometimes needs to "suppose" temporarily that a particular proposition is true and use this supposition to draw further inferences. When this happens, ANDS places the supposition in a subordinate node of the assertion tree. For example, Line c of Proof 3 contained the supposition that there was no M on the blackboard, and this same proposition Not M appears as Assertion 5 in the subordinate node. (Admittedly, the two structures in Figure 1 do not look much like trees, since they contain only a single branch apiece; in general, though, the assertion and subgoal trees will be bushier, as can be seen in the Appendix and Figure A1.)
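The two-tree working memory can be sketched as a small data structure. The fragment below is a hypothetical reconstruction (the model itself was written in LISP): each node holds a list of propositions plus pointers to subordinate nodes, and the Figure 1 assertion tree is rebuilt by hand.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of an assertion or subgoal tree: a list of propositions
    plus pointers to subordinate (e.g., supposition) nodes."""
    propositions: list = field(default_factory=list)
    children: list = field(default_factory=list)

# The final assertion tree of Figure 1: the premise and its consequences
# in the top node, the supposition and its consequence in a subordinate node.
top = Node(["If not (M and P) then R",      # Assertion 1 (premise)
            "If not M or not P then R",     # Assertion 3
            "If not M then R"])             # Assertion 8 (conclusion)
top.children.append(Node(["Not M",          # Assertion 5 (supposition)
                          "R"]))            # Assertion 7
print(len(top.children))  # → 1: a single subordinate supposition node
```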

[Figure 1: Assertion Tree (left) and Subgoal Tree (right). Assertion tree, top node: 1. If not (M and P) then R; 3. If not M or not P then R; 8. If not M then R. Subordinate node: Not M; R. Subgoal tree: the conclusion, with the subgoals R and Not M beneath it.]

Figure 1. ANDS's memory structure at the conclusion of the proof of the argument, If not both M and P then R; therefore, if not M then R. (See text for explanation.)

At the beginning of the problem, both the assertion tree and the subgoal tree are empty. The procedure begins when ANDS adds the premise of the argument to the main branch of the assertion tree and the conclusion to the top of the subgoal tree. The remaining propositions are placed in the memory trees during the course of the proof by inference routines that continually inspect the trees' contents and respond to appropriate propositional patterns within them. In Figure 1, the numbering of the individual propositions indicates the order in which the routines have entered them in memory. First, one of the routines spots the fact that the premise, If not (M and P) then R, can be paraphrased to read If not M or not P then R, and it places this new proposition in the assertion tree at 3. Second, ANDS notices that the conclusion is a conditional, If not M then R. As in the earlier proof, ANDS supposes that the first part of the conditional, Not M, is true, putting it in a new subordinate assertion node. At the same time, it sets up a subgoal (Subgoal 4 in the figure) to try to prove that R is true; if it can show that R is true when Not M is true, it will have derived the conclusion itself. Finally, ANDS sees that R can be deduced from Assertion 3, provided that it can show that either Not M or Not P is true. The subgoal of proving Not M is entered at 6, but this proposition has already been established (it is just Assertion 5), and this match between the subgoal and the assertion clinches the proof. Given the premise, R is true if Not M is true, and, therefore, If not M then R is provable. After placing R and If not M then R in the assertion tree, ANDS declares the argument valid. If ANDS had run out of applicable rules before finding the critical match, it would have pronounced the argument invalid.

This example highlights ANDS's main features: ANDS's proofs always consist of an assertion tree containing the premises and other propositions derived from them and a subgoal tree containing the conclusion and other propositions that warrant it. Inference rules fill in these working-memory trees in response to propositions previously placed inside them. The proof succeeds (and the argument is cognitively valid) if the rules can find a match between suitable subgoals and assertions. The proof fails (and the argument is cognitively invalid) if the procedure runs out of applicable rules before finding a match. A deeper understanding of the proof in Figure 1 requires a more explicit description of both the rules and the tree structures, but the example will stand us in good stead until these elements have been examined.

Working-Memory Components

Reasoning in ANDS means constructing a proof in working memory. Inspection of Figure 1 turned up the basic parts of these proofs—the assertion and subgoal trees—but it remains to clarify their internal structure. In particular, we need to know the significance of the links and nodes of the figure and the way in which these links and nodes are established. The first of these questions is taken up at this point; the second will be delayed until we have had a chance to discuss the inference routines.

The assertion tree. ANDS's assertion tree encodes a natural-deduction proof, as we have seen in comparing Proof 3 to the similar proof in Figure 1. The basic difference between these two versions is that the assertion tree represents explicitly the relation of the supposition (i.e., Not M) to the rest of the proof: Suppositions are always placed in a new node of the assertion tree.

Unlike most of the other propositions in the assertion tree, suppositions are not necessarily true in those situations where the premises are true, and it is for this reason that suppositions are segregated in nodes of their own. For instance, in Figure 1 Not M need not be true when the premise is true, since the premise only mentions what will happen if there is no M (or no P). The advantage of suppositions is that they provide a way of exploring their consequences without our first having to know their truth value. (An analogy is the way one sometimes imagines contingencies in planning an important action so that one can anticipate potential problems or opportunities.) The result is a more streamlined proof than would be possible if all the propositions in the proof had to be guaranteed by the premises.
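The role of a supposition in proving a conditional can be sketched in code. The fragment below is a simplified illustration, not ANDS's actual routine: it supposes the antecedent in a subordinate context, closes that context under modus ponens over conditionals given as (antecedent, consequent) pairs, and checks whether the consequent emerges.

```python
def prove_conditional(conditionals, antecedent, consequent):
    """Conditional-proof sketch: suppose the antecedent in a subordinate
    context, repeatedly apply modus ponens using the supplied (p, q)
    pairs (each encoding IF p THEN q), and report whether the consequent
    was derived under the supposition."""
    context = {antecedent}            # the supposition node
    changed = True
    while changed:
        changed = False
        for p, q in conditionals:
            if p in context and q not in context:
                context.add(q)
                changed = True
    return consequent in context

# The rephrased premise of Argument 2, split into two conditionals
# (IF A-or-B THEN R entails IF A THEN R and IF B THEN R):
rules = [("not M", "R"), ("not P", "R")]
print(prove_conditional(rules, "not M", "R"))  # → True
```

Crucially, nothing in the computation requires knowing whether Not M is actually true; the supposition is explored for its consequences alone, just as the text describes.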

The logical consequences of a supposition, like the supposition itself, are not guaranteed to be true, and they are therefore placed in the same node as the supposition on which they are based. For example, in the assertion tree of Figure 1, the truth of R depends on the supposition Not M, and for this reason, R too appears in the subordinate node. By contrast, the remaining propositions in the assertion tree do not depend on the truth of Not M but on the premise alone, and ANDS therefore places them in the upper node of the tree with the premise itself. ANDS's inference routines are in charge of noticing when a supposition is used in a deduction and of placing the deduced proposition in the proper node of the tree.

In determining the consequences of a supposition, ANDS's rules are free to use any of the superordinate propositions in the tree. For example, Assertion 3 of Figure 1 was used to deduce Assertion 7 in the subordinate node. Thus, whereas propositions in subordinate (supposition) nodes are not necessarily true in the superordinate nodes, superordinate propositions are always true in the subordinate contexts. This one-way logical relation is represented by the pointers in the assertion tree.

The ability to represent suppositions is an important characteristic of ANDS's proofs and is responsible for the form of the assertion tree. Branching in the tree will occur if ANDS considers one or more trial suppositions before hitting on a fruitful one (see the Appendix proof for examples of unsuccessful suppositions). This hierarchical system of nodes in the assertion tree replaces the bracketing or numbering devices that are more common in textbook presentations of natural deduction.1

The subgoal tree. If we think of the assertion tree as recording the logical steps leading from the premises to the conclusion of an argument, the subgoal tree indicates the reverse pathway from the conclusion to the premises. Typically, ANDS proves its theorems from "outside in," working alternately backward from the conclusion (in the subgoal tree) and forward from the premises (in the assertion tree). Unlike the assertion tree, however, the subgoal tree has no obvious counterpart in formal logic proofs. Perhaps the closest equivalent to ANDS's use of subgoals occurs when an author breaks a complex theorem into lemmas or announces at the beginning of a problem that proof of a given formula is sufficient to establish the conclusion. Informal argumentation provides a better analogy to the subgoal tree. Argumentative discourse often starts with a statement on the part of a speaker that is challenged by another participant. The speaker can meet this challenge by producing evidence for the statement, where this evidence can be questioned in turn (Toulmin, 1958). Thus, the original statement plays the part of the conclusion in an argument, the main goal that the speaker wants to establish. The evidence is a subgoal, which, if agreed upon, would support the truth of the main goal. Disputes about the evidence can lead to further sub-subgoals on the part of the speaker, and so on, until common ground is reached.

A basic motivation for the subgoal tree is processing efficiency: Subgoals keep the proof procedure aimed in the direction of the argument's conclusion rather than allowing it to produce implications at random from the premises (see Newell & Simon, 1972). The subgoal tree tells the system that if it wants to deduce conclusion S0, it should first deduce the propositional subgoal S1; if it wants to deduce S1, it should try to deduce proposition S2, and so on, until an achievable subgoal Sk has been located. This subgoal chaining is achieved by making many of ANDS's inference rules sensitive to the current subgoal at any given state of the problem. If at some stage one of these rules notices that the current subgoal is Si and that some further proposition Sj will serve to establish Si, then this inference procedure will place Sj in the subgoal tree as the now-current subgoal. This process is monitored by ANDS to ensure that the subgoal does not duplicate a previously attempted one. Whenever the current subgoal fails (none of the rules applies to it), ANDS backs up to the nearest superordinate goal and tries again. In other words, ANDS generates its subgoal tree in a depth-first manner.

1 The term assertion tree may be something of a misnomer, since it contains suppositions as well as flat-out entailments of the premises. Proof tree might have been more appropriate, except that ANDS's proofs also involve the subgoal tree. Assertion tree seems harmless if we bear in mind that not all of the propositions in the tree have equally factual status.
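The depth-first subgoal search can be sketched as a small backward chainer. This is an illustrative reconstruction rather than ANDS's code; here `rules` maps a goal proposition to alternative lists of subgoals that suffice to establish it.

```python
def backchain(goal, assertions, rules, tried=None):
    """Depth-first backward chaining sketch: a goal succeeds if it matches
    an assertion or if every subgoal in some alternative succeeds; goals
    already attempted are skipped (no duplicate subgoals), and a failed
    branch backs up to the superordinate goal."""
    tried = set() if tried is None else tried
    if goal in assertions:        # subgoal matches an assertion
        return True
    if goal in tried:             # do not duplicate an attempted subgoal
        return False
    tried.add(goal)
    for subgoals in rules.get(goal, []):
        if all(backchain(s, assertions, rules, tried) for s in subgoals):
            return True
    return False                  # no rule applies: back up and try again

# R can be established either from Not M or from Not P; Not M is asserted.
rules = {"R": [["not M"], ["not P"]]}
print(backchain("R", {"not M"}, rules))  # → True
```

The recursion explores one branch of the subgoal tree to its end before considering the next alternative, which is exactly the depth-first order described in the text.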

The form of the subgoal tree is dictated by two considerations. First, because the final goal of the problem is to deduce the conclusion, the subgoals should be propositions whose truth will warrant that of the conclusion itself. For any conclusion, ANDS may consider several subgoals of this sort (as illustrated in the Appendix), creating a tree structure. In general, deduction of any subgoal in the tree will be sufficient to guarantee the truth of any proposition along the path leading up from it. The pointers in the subgoal tree represent this "deducible from" relation.2

Second, superimposed on this structure are nodes that correspond to those of the assertion tree. Certain subgoals require ANDS to suppose that a particular proposition is true, that is, to set up a new supposition node in the assertion tree. This supposition is then used as an aid in deducing the subgoal. In the proof of Figure 1, for example, ANDS assumes that Not M is true in order to deduce R. So when R is first added as a subgoal, ANDS creates a new assertion node with Not M as its initial proposition. A supposition-creating subgoal of this type is partitioned from the rest of the subgoals and is connected to the relevant assertion node (with dotted lines in the figure) to indicate that the subgoal is to be deduced in the context of the supposition. In the above example, R is to be deduced under the supposition that Not M is true. By contrast, the main goal is to be deduced in a context in which only the premise (and its entailments) are assumed true. Hence, it appears in the top node of the subgoal tree, which is connected to the assertion node that contains the premise.

Both of these structural features—the subgoal hierarchy and the nodes—are controlled by the inference rules. These rules inspect the current subgoal to determine whether there might be a new proposition whose truth would establish the subgoal. Similarly, the rules are in charge of coordinating supposition nodes in the assertion tree with their counterparts in the subgoal tree. To see how all this is accomplished, we need to take a look at the rules themselves.

Processing Components

ANDS's inference routines control its proofs by placing new assertions and subgoals in memory. Most of these routines correspond to familiar rules in propositional logic. For example, ANDS has two routines (R1 and R1') corresponding to modus ponens. These relations are spelled out in Table 1, and the procedures themselves are summarized in Table 2. Each of these routines consists of a set of conditions that must be checked against the memory trees to determine whether the routine applies. If all the conditions are met, then a series of actions is carried out that modifies the trees in specific ways. In the current version of ANDS, the routines are tested for applicability in a fixed order, and the first routine whose conditions are fulfilled is then carried out. In general, the simpler routines are tested first, where simplicity is determined by the amount of mental bookkeeping that is required (in a sense that will become clear below). Once a routine from this list has been applied, the process starts again at the top of the list in search of further routines.
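That control regime (test routines in a fixed order, fire the first whose conditions hold, then restart at the top of the list) can be sketched as a short loop. This is our illustration, not ANDS's code; the representation of routines as (condition, action) pairs over a memory structure is an assumption for the sketch.

```python
def run_inference(routines, memory, max_steps=50):
    """Fixed-order control: scan the routine list from the top, apply
    the first routine whose conditions are met, then rescan from the
    top.  Stop when no routine applies (quiescence), or after
    max_steps as a safeguard.  Each routine is a (condition, action)
    pair: condition(memory) -> bool; action(memory) mutates memory."""
    for _ in range(max_steps):
        for condition, action in routines:   # simpler routines come first
            if condition(memory):
                action(memory)
                break                        # restart at the top of the list
        else:
            break                            # no routine applied
    return memory
```

A routine that adds nothing new on a second pass (for instance, one that tags the propositions it has used) lets the loop reach quiescence rather than cycling forever.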

The Table 2 routines are not intended as an exhaustive set. This is true in the sense that there are arguments in classical sentential logic that ANDS cannot prove. But more important, there are undoubtedly deductive procedures that subjects use but that are not currently part of ANDS. Nevertheless, the routines in Table 2 were suggested by subject protocols and can claim psychological plausibility for this reason. Although not exhaustive, this set of routines can handle a fairly wide range of propositional arguments and permits us to test some of ANDS's assumptions.

We assume that each of the Table 2 routines is available to a subject only on a probabilistic basis. For example, on a particular trial of an experiment, a certain routine Ri will come to mind with probability pi (a parameter of the model). This assumption is

2 As a minor qualification, ANDS must sometimes achieve both members of a pair of subgoals in order to fulfill the superordinate goal. See the Appendix for an example of these joint subgoals (Subgoals 16 and 17 in Figure A1).

COGNITIVE PROCESSES IN REASONING 45

important in deriving predictions from ANDS, as will be seen later. However, for purposes of describing ANDS's proof procedure, it is convenient to ignore the assumption temporarily. Thus, in the examples below, we take all routines to be available with Probability 1 so that ANDS behaves as a kind of ideal logician within the compass of its remaining structural and processing constraints.

There are some notational practices that should be kept in mind in reading Table 2. Italicized lower case letters (e.g., p, q, and r) are propositional variables that can be matched against any (possibly complex) proposition in memory. However, words in italicized caps like IF and OR are logical constants and must exactly match those words within a memory proposition. So, for example, the pattern IF p, q in Rule R1 will successfully match the propositions "If Jane is in Tulsa, Morry is in Chicago" or "If Jane is in Tulsa and Sam is in Austin, Morry is in Chicago," but not "Jane is in Tulsa and Sam is in Austin" or "Sam is in Austin, and if Jane is in Tulsa, Morry is in Chicago." When letters occur within the propositions themselves, as in Arguments 1 and 2, they are capitalized to distinguish them from variables. If a pattern matches a given assertion or subgoal, then the variables in the pattern are bound to the corresponding propositions. For instance, if IF p, q is matched to the first of the sample sentences above, then p is bound to "Jane is in Tulsa" and q to "Morry is in Chicago" for the remaining steps in the rule. Two different variables can be bound to the same proposition, as would happen, for example, if IF p, q were matched to "If Jane is in Tulsa, Jane is in Tulsa." The condition-action format of ANDS's rules resembles that of production systems (e.g., J. R. Anderson, 1976; Newell & Simon, 1972), but no attempt has been made in ANDS to follow the programming conventions of production-system languages.

Table 1
A Comparison Between Standard Logical Rules and ANDS's Inference Routines

Rule name                  Inference-rule schema                ANDS routine

Modus Ponens               IF p, q; p / q                       R1, R1'
DeMorgan's Law             NOT (p AND q) / NOT p OR NOT q       R2'
Disjunctive Syllogism      p OR q; NOT p / q                    R3, R3'
Disjunctive Modus Ponens   IF p OR q, r; p / r                  R4
And Elimination            p AND q / p                          R5, R5'
And Introduction           p; q / p AND q                       R6
Or Introduction            p / p OR q                           R7
Law of Excluded Middle     / p OR NOT p                         R8
If Introduction            [p ... q] / IF p, q                  R9
Not Introduction           [p ... q, NOT q] / NOT p             R10
Or Elimination             p OR q; [p ... r]; [q ... r] / r     R11

Note. Premises appear before the slash and the conclusion after it. Boxes (rendered here as brackets, [ ... ]) represent subproofs within which the bottom line (or lines) is deduced from the top line (see, e.g., Fitch, 1952). These correspond to subordinate assertion nodes in ANDS.
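These matching conventions (lowercase variables bind to any proposition, connective constants must match literally, repeated variables must share a binding) can be sketched as a small matcher. The function below is our illustration, not ANDS's actual routine; propositions are encoded as nested tuples whose heads are the connective constants, and atoms are capitalized strings, following the paper's capitalization convention.

```python
def match(pattern, prop, bindings=None):
    """Match a rule pattern against a memory proposition.
    Lowercase strings ('p', 'q', 'r') are variables and bind to any
    (possibly complex) proposition; heads such as 'IF', 'AND', 'OR',
    and 'NOT' are constants and must match exactly.  Returns a dict
    of variable bindings, or None if the match fails."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.islower():       # a variable
        if pattern in bindings:                              # already bound:
            return bindings if bindings[pattern] == prop else None
        new = dict(bindings)
        new[pattern] = prop
        return new
    if (isinstance(pattern, tuple) and isinstance(prop, tuple)
            and len(pattern) == len(prop) and pattern[0] == prop[0]):
        for pat_arg, prop_arg in zip(pattern[1:], prop[1:]):
            bindings = match(pat_arg, prop_arg, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == prop else None
```

For example, the pattern ('IF', 'p', 'q') matches ('IF', ('AND', 'JaneTulsa', 'SamAustin'), 'MorryChicago'), binding p to the whole conjunction, but fails against a bare conjunction, just as the text describes for IF p, q.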

Table 2
Deduction Rules in ANDS

R1 (Modus Ponens, backward version)
Conditions: 1. Current subgoal = q
            2. Assertion tree contains IF p, q
Actions:    1. Set up subgoal to deduce p
            2. If Subgoal 1 is achieved, add q to assertion tree

R1' (Modus Ponens, forward version)
Conditions: 1. Assertion tree contains proposition x = IF p, q
            2. x has not been used by R1 or R1'
            3. Assertion tree contains proposition p
Actions:    1. Add q to assertion tree

R2' (DeMorgan)
Conditions: 1. Assertion tree contains a proposition x with subformula NOT (p AND q)
            2. x has not been used by R2'
Actions:    1. Set y = x
            2. Substitute NOT p OR NOT q for NOT (p AND q) in y
            3. Add y to assertion tree

R3 (Disjunctive Syllogism, backward version)
Conditions: 1. Current subgoal = q
            2. Assertion tree contains p OR q (alternatively, q OR p)
Actions:    1. Set up subgoal to deduce NOT p
            2. If Subgoal 1 is achieved, add q to assertion tree

R3' (Disjunctive Syllogism, forward version)
Conditions: 1. Assertion tree contains x = p OR q
            2. x not used previously by R3 or R3'
            3. Assertion tree contains NOT p (alternatively, NOT q)
Actions:    1. If assertion tree contains NOT p, add q to assertion tree
            2. If assertion tree contains NOT q, add p to assertion tree

R4 (Disjunctive Modus Ponens)
Conditions: 1. Current subgoal = q
            2. Assertion tree contains IF p OR r, q
Actions:    1. Set up subgoal to deduce p
            2. If Subgoal 1 is achieved, add q to assertion tree and return
            3. Set up subgoal to deduce r
            4. If Subgoal 3 is achieved, add q to assertion tree

R5 (And Elimination, backward version)
Conditions: 1. Current subgoal = p
            2. Assertion tree contains a proposition with subformula x = p AND q (alternatively, q AND p)
Actions:    1. Set up subgoal to deduce x
            2. If Subgoal 1 is achieved, add p to assertion tree

R5' (And Elimination, forward version)
Conditions: 1. Assertion tree contains x = p AND q
            2. x not used previously by R5 or R5'
Actions:    1. Add p to assertion tree
            2. Add q to assertion tree

R6 (And Introduction)
Conditions: 1. Current subgoal = p AND q
Actions:    1. Set up subgoal to deduce p
            2. If Subgoal 1 is achieved, set up subgoal to deduce q
            3. If Subgoal 2 is achieved, add p AND q to assertion tree

R7 (Or Introduction)
Conditions: 1. Current subgoal = p OR q
Actions:    1. Set up subgoal to deduce p
            2. If Subgoal 1 is achieved, add p OR q to assertion tree and return
            3. Set up subgoal to deduce q
            4. If Subgoal 3 is achieved, add p OR q to assertion tree

R8 (Restricted Law of Excluded Middle)
Conditions: 1. Current subgoal = p OR q
            2. Premises contain a proposition with subformula NOT p (alternatively, NOT q)
Actions:    1. Add to assertion tree p OR NOT p (alternatively, q OR NOT q)

R9 (If Introduction)
Conditions: 1. Current subgoal = IF p, q
Actions:    1. Add new subordinate node to assertion tree containing assumption p
            2. Set up corresponding subgoal node to deduce q
            3. If Subgoal 2 is achieved, add IF p, q to superordinate node of assertion tree

R10 (Not Introduction)
Conditions: 1. Current subgoal = NOT p
            2. Premise contains subformula q and conclusion contains subformula NOT q (alternatively, premise contains NOT q and conclusion q)
Actions:    1. Add new subordinate node to assertion tree containing assumption p
            2. Set up corresponding subgoal node to deduce q
            3. If Subgoal 2 is achieved, set up subgoal node to deduce NOT q
            4. If Subgoal 3 is achieved, add NOT p to superordinate node

R11 (Or Elimination)
Conditions: 1. Current subgoal = r
            2. Assertion tree contains p OR q
Actions:    1. Add new subordinate node to assertion tree with assumption p
            2. Set up corresponding subgoal node to deduce r
            3. If Subgoal 2 is achieved, add new subordinate node to assertion tree with assumption q
            4. Set up corresponding subgoal node to deduce r
            5. If Subgoal 4 is achieved, add r to superordinate node

Note. All rules are stated in the form of condition-action pairs: The action portion of the rule will be executed only if all of the listed conditions have been fulfilled. In certain cases, ANDS tags a proposition to signal that a rule has applied to it, and these tags are stored in the proposition's property list. The tags are checked in Condition 2 of Rules R1', R2', R3', and R5'. Further conditions on the position of the component propositions in the trees are omitted in this listing.

Forward versus backward rules. The inference routines of Table 2 can be divided into two main types: forward rules that work from the premises of the argument to the conclusion and backward rules that work from the conclusion to the premises. Forward rules will be denoted by a prime following the rule number; thus, in Table 2, R1', R2', R3', and R5' are all forward rules. The unprimed rules—R1 and R3-R11—are backward rules. This forward/backward distinction depends on the sensitivity of the rules to problem subgoals. The condition part of backward rules contains a reference to the current subgoal, and their actions typically involve placing one or more new subgoals in the subgoal tree (though new assertions may also be added). Forward rules, however, operate independently of any subgoals. Their conditions refer only to propositions in the assertion tree, and their actions result only in new assertions, never in new subgoals. Hewitt's (Note 1) PLANNER system includes both forward rules ("antecedent theorems") and backward rules ("consequent theorems"), as do CONNIVER (McDermott & Sussman, Note 2), SNePS (Martins, McKay, & Shapiro, Note 3), and the deductive system described by Klahr (1978). J. R. Anderson, Greeno, Kline, and Neves (1981) have also incorporated forward and backward inference in their simulation of high-school geometry students' theorem proving. Let us first look at some examples of forward and backward rules, and then examine the case for their psychological validity.

As an illustration of the difference between forward and backward rules, we can consider Rules R1 and R1' from Table 2. Both of these rules implement the modus ponens inference, a pattern that was exemplified by Argument 1. However, R1 and R1' carry out modus ponens in different ways and so provide an optimal contrast for our forward/backward distinction. Rule R1' is the forward version of modus ponens, and a glance at Tables 1 and 2 shows that it closely resembles the way this inference is usually expressed in logic texts. The conditions of the rule check to see whether propositions of the form IF p, q and p exist in the assertion tree, and if they do, proposition q is then asserted. The only unfamiliar aspect of this routine is that it explicitly tests whether modus ponens has already applied to the conditional proposition (see Condition 2 of R1'). This is done to keep the rule from applying repeatedly to the same assertions in an endless loop. Once modus ponens has applied to IF p, q, applying it again produces no new information, only redundant copies of q. Therefore, imposing this condition prevents looping without restricting the power of the rule.

Rule R1' allows us to conclude that q is true whenever both p and IF p, q have already been asserted. However, suppose that we would like to deduce q in a situation where IF p, q is known to be true, but p is not. A natural strategy might be to try to deduce p from other assertions because if p can be derived, modus ponens will permit us to conclude that q. This strategy is the one implemented by Rule R1, the backward version of modus ponens. The conditions on this rule are met if the procedure notices that the current subgoal is the consequent of some previously asserted conditional. (In a conditional of the form IF p, q, q is said to be its "consequent" and p its "antecedent.") The action part of the rule then sets up the antecedent of the same conditional as the new subgoal. In other words, R1 adds p as a subgoal for q in the presence of an assertion IF p, q. If this new subgoal can be derived, then q itself has been deduced and can be added to the assertion tree. Thus, the point of the rule is to motivate further reasoning that may ultimately lead to fulfillment of the original goal.
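The contrast between the two versions of modus ponens can be rendered in code. This is a hypothetical sketch of Table 2's condition-action descriptions, not ANDS's implementation; propositions are tuples with the connective as head, and the used set stands in for the property-list tags of Condition 2 of R1'.

```python
def r1_forward(assertions, used):
    """R1' (forward modus ponens): whenever IF p, q and p are both in
    the assertion tree and the conditional is not yet tagged as used,
    tag it and assert q."""
    new = []
    for a in assertions:
        if isinstance(a, tuple) and a[0] == 'IF' and a not in used:
            _, p, q = a
            if p in assertions:
                used.add(a)     # Condition 2: blocks the endless loop
                new.append(q)
    return new

def r1_backward(assertions, subgoal):
    """R1 (backward modus ponens): if the current subgoal is the
    consequent of an asserted conditional, return its antecedent(s)
    as new subgoals rather than asserting anything directly."""
    return [a[1] for a in assertions
            if isinstance(a, tuple) and a[0] == 'IF' and a[2] == subgoal]
```

Note the asymmetry: the forward rule consults only the assertions and produces assertions, while the backward rule is triggered by the subgoal and produces subgoals.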

It is worth noticing that backward rules like R1 are not passive implementations of logical principles; they incorporate heuristics that specify the sorts of proof strategies that are likely to pay off. The easiest way to appreciate this point is to consider an alternative way of formulating backward modus ponens that eliminates one of these heuristics. As just mentioned, Rule R1 is triggered by an assertion IF p, q and a subgoal q, and its action is to produce a new subgoal p. This means that the new subgoal can never be a longer proposition than the assertion that triggers it, and this heuristic limits the amount of search that the rule can initiate. We can contrast this with a different type of rule (R1*) that would be triggered by an assertion p and a subgoal q and whose action would be to introduce as a subgoal IF p, q. From the point of view of logic, R1 and R1* are equally reasonable, since all we have done is


to reverse the role of the two premises of modus ponens—p and IF p, q—in the procedure. However, from a strategic standpoint, R1* is almost always a bad idea. For example, imagine a proof in which the current subgoal is q and the only assertion is p. These propositions meet the conditions on R1*, and so this rule will set up a subgoal to deduce IF p, q. But no such deduction is possible in the situation at hand because there is no valid way to get from p alone to IF p, q. We ought to give up at this point, but to make things worse, Rule R1* is still applicable, and with assertion p and subgoal IF p, q, it produces the new subgoal IF p, IF p, q. Again, there is no way to deduce this subgoal, and again R1* applies to yield yet another subgoal IF p, IF p, IF p, q, and so on. In the case of R1, the number of new subgoals is limited by the fact that the subgoal (p) is contained in the assertion (IF p, q). R1* drops this heuristic and gets us (literally) into no end of trouble.
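The runaway behavior of the hypothetical rule R1* is easy to exhibit: each application wraps the previous subgoal in a further conditional, which again satisfies the rule's own conditions. The sketch below (ours, with an artificial cutoff so that it terminates) generates the first few subgoals such a rule would propose.

```python
def r1_star_subgoals(assertion, subgoal, limit=3):
    """Hypothetical ill-advised rule R1*: from assertion p and subgoal
    q, propose the subgoal IF p, q.  The new subgoal again meets the
    rule's conditions, so without a cutoff this never terminates; we
    return only the first `limit` subgoals it would generate."""
    out = []
    for _ in range(limit):
        subgoal = ('IF', assertion, subgoal)
        out.append(subgoal)
    return out
```

Starting from assertion p and subgoal q, the generated subgoals are IF p, q; then IF p, IF p, q; then IF p, IF p, IF p, q; each strictly longer than the last, which is exactly the unbounded search that R1's containment heuristic rules out.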

Why do we need both forward and backward rules? In particular, why is it necessary to have rules like R1 and R1' that are forward and backward versions of the same deductive scheme? The answer to these questions is that the two rule types play different roles in theorem proving, both equally important. Intuitively, forward rules are useful in getting started on a problem by carrying out obvious or routine deductions that clarify the meaning of the premises. At a later stage, they may also be helpful in simplifying intermediate results. Forward strategies have been observed in subjects' performance on other mathematical tasks. For example, Simon and his colleagues have documented forward use of equations by skilled subjects solving simple word problems in kinematics and thermodynamics (Bhaskar & Simon, 1977; Simon & Simon, 1978). Similarly, in proving geometry theorems, high-school students may consider congruent parts of a figure on the assumption that these parts will eventually be helpful, without having a specific subgoal in mind (Greeno, 1976). Moreover, forward routines are needed in situations where no particular goal is specified. For instance, Johnson-Laird and Steedman (1978) have given subjects sets of premises and have asked them to generate their own conclusions from these sets. Subjects are clearly able to comply

with these instructions, but it would be impossible for them to do so if all deductive steps had to be tied to well-formulated goals.

On the other hand, forward rules alone are not sufficient to account for ordinary reasoning, and to see why not, consider the problem of proving the following argument:

4. If there is both an M and a P, then there is an R.
There is an M.
There is a P.

There is an R.

An attempt to derive the conclusion on the basis of R1'—forward modus ponens—will fail in this situation because R1' requires an assertion to match the antecedent of the first-premise conditional (i.e., There is both an M and a P). Although we have the makings for such an assertion in the second and third premises, forward rules like R1' lack the capacity to set up a subgoal to combine these propositions. However, the backward version, R1, encounters no such problems. Because the conclusion matches the consequent of the first premise, it will advertise the need to prove the antecedent. ANDS's conjunction-formation rule, R6, can then apply to produce this proposition.

The problem in deriving the conclusion of Argument 4 is that R6 is itself a backward rule and is therefore unable to operate unless triggered by a subgoal. This suggests that one way around the problem would be to transform R6 into a forward rule that would form conjunctions whenever possible. But such a move has a fatal difficulty. For not only would a forward R6 produce the desired sentence but it would also produce an infinite number of undesired ones such as There is an M and there is a P, and there is a P; There is an M and there is a P, and there is a P, and there is a P; and so on. Applying rules like R6 in a forward direction produces an avalanche of irrelevant conclusions, a process similar to the one Newell and Simon (1972) named the "British Museum" algorithm. Perhaps we could limit this runaway productivity as we did with R1' by prohibiting R6 from applying more than once to any proposition. But recall that in the case of R1', we were eliminating only multiple tokens of the same sentence. Here we would be eliminating different sentence types, and some of these could prove useful in other problems. For this reason, it seems that the best way to curb inferences like R6 (and others that operate on their own output) is to treat them as backward rules, using them only as needed in support of subgoals.3

Supposition-creating rules. All of ANDS's backward rules operate by positing new subgoals on the basis of old ones. However, some of the backward rules have the additional task of advancing suppositions within the proof, producing new nodes on both the assertion and subgoal trees. A good example of such a rule is R9, which performs the inference called "If Introduction" in logic textbooks (see Table 1). "If Introduction" is used to prove conditional conclusions, and it does so by assuming that the antecedent of the conditional is true and by seeing whether the consequent can be deduced on that assumption. If this consequent is deduced, then the conditional itself must be true. This pattern of reasoning is familiar from mathematics, and we have already run into an example in Proof 3. Faced with the task of showing that If there is no M on the blackboard, then there is an R, we made the assumption that there is no M, and then went on to demonstrate that this assumption, coupled with earlier assertions, implies that there is an R. The implication justifies the conditional conclusion.

Rule R9 implements this deduction in the steps shown in Table 2. Its only condition is that the present subgoal have the IF p, q form. If this is the case, it builds a new subgoal node directly subordinate to the one containing the conditional and houses the consequent (i.e., q) in the new node. At the same time, it constructs an assertion node for the antecedent (p). The consequent then becomes the current subgoal, and its deduction can be attempted on the basis of the antecedent as well as any superordinate propositions. If this subgoal can be fulfilled, the entire conditional can be added to the assertion tree just above the node containing the antecedent. Thus, in Figure 1, If not M, R is placed in the superordinate node of the assertion tree.4

There is an intimate connection between supposition-creating rules and conditional propositions. Sentences of the IF p, q form

(e.g., "If the problem is complex, Morry won't be able to solve it") can often be paraphrased as Suppose that p; then q ("Suppose that the problem is complex; then Morry won't be able to solve it"). More generally, Mackie (1973) and Rips and Marcus (1977) have given accounts of the uses of if in ordinary English in terms of if's ability to evoke suppositions. However, it is clear that suppositions can also take part in inferences that do not directly involve conditionals, and ANDS's Rules R10 and R11 perform deductions of this type. Rule R10 handles reductio ad absurdum arguments in which a given proposition is supposed true in order to derive a contradiction from it. Because contradictions cannot be derived from true propositions, their presence indicates that the supposed sentence is false. The negation of that sentence can therefore be added to the superordinate node in the assertion tree (the Appendix deals with a proof of this type). In

3 Another way to limit the productivity of the forward version of R6 is to restrict it to produce only conjunctions that actually appear in the premises or conclusion. But although this restriction is successful in avoiding the problems with Argument 4, it is too limiting in general. For example, consider a deduction system (e.g., Braine, 1978, or Osherson, 1975) that contains the rule, "p AND (q OR r) implies (p AND q) OR (p AND r)." Within such a system, one would like to be able to prove that arguments of the following kind are valid:

There is a T.
There is an X or a Y.

There is a T and an X, or a T and a Y.

To prove this, we need to conjoin the premises to form There is a T, and an X or a Y and then apply the above rule. However, this conjunction is blocked by the above restriction on "And Introduction" because the conjunction appears in neither the premises nor the conclusion. The point is that we need And Introduction to feed other rules, and the proposed restriction sometimes makes this impossible. Notice, however, that casting And Introduction as a backward rule does have exactly the desired property, and this supports the formulation of R6 that appears in Table 2.

4 A kind of bonus from the ANDS approach is that some "relevance" restrictions of the sort advocated by A. R. Anderson and Belnap (1975) fall out quite naturally. For example, because propositions are only supposed true if they are needed in support of a subgoal, they will usually take part in the deduction of that subgoal. (One can sometimes trick the model into ignoring such restrictions, and it is an interesting question whether one can likewise get subjects to abandon them.)


R11, each of the disjuncts of a proposition p OR q is supposed in turn. If some common proposition r can be deduced, both on the supposition that p and on the supposition that q, then r can be placed in the same assertion node as p OR q.

We can also expect supposition-creating rules to take part in reasoning schemes that go beyond the kinds of arguments considered so far, since the ability to suppose or to imagine something is hardly limited to dealing with connectives like if, not, and or and might even be considered a distinctive feature of human judgment. Reasoning about counterfactual situations obviously depends on this ability (Revlis & Hayes, 1972). But, in addition, causal reasoning may also involve supposing that a given event occurs in order to contemplate the effects of this cause (Kahneman & Tversky, 1982; Tversky & Kahneman, 1978). Probabilistic reasoning may similarly entail imagining that certain propositions are true and estimating the likelihood of other propositions on that basis (Ramsey, 1926/1980). I return to this point later in discussing how ANDS's suppositions can be modified to account for subjects' judgments about causal arguments.

Of the rules in Table 2, supposition-creating rules produce the most complex structural changes, and because of this complexity, these rules are only attempted after other means of establishing a subgoal have failed. Similarly, the remaining backward rules (R1 and R3-R8) are more complex than forward rules because backward rules must set up subgoals, monitor the status of the subgoals, and coordinate changes in the subgoal and assertion trees. Therefore, at each stage of a problem, forward rules are checked first, normal backward rules second, and supposition-creating rules last.

An Example

With our knowledge of ANDS's rules, we are in a position to understand exactly how they are used in constructing a proof. To illustrate this procedure, consider Argument 2, which is repeated below:

2. If there is not both an M and a P on the blackboard, then there is an R.

If there is no M, then there is an R.

Figure 2 shows the stages in the proof of this argument by picturing the state of the assertion and subgoal trees before and after each of the rules in the proof has applied. The last panel of the figure is identical to that of Figure 1 and represents the completed proof. The interest in Figure 2 is in the way these proof trees are built up, proposition by proposition.

A complete psychological theory of deductive reasoning would have to include some provision for handling natural-language sentence structure, either by adapting the deductive rules so that they apply to natural language directly or by parsing natural language into the more formal system in which the rules are currently specified. But although this problem is an interesting one, it has not been addressed in the present version of the theory. Instead, ANDS accepts as input a list of premises and a conclusion in formal notation. For readability, ANDS's propositions are represented here by sentences composed of connectives, proposition letters, and parentheses. Thus, ANDS receives on input the propositions If not (M and P) then R and If not M then R in place of the sentences in Argument 2. At the beginning of the proof in Figure 2 (a), memory consists of just this simplified premise in the assertion tree and the corresponding conclusion in the subgoal tree.

To get things started, ANDS scans its set of rules (in the order R1', R2', R3', R5', R3-R8, R1, R9-R11) to see whether the conditions on any of these rules are satisfied. As it turns out, Rule R2' applies in this setting since the single proposition in the assertion tree contains within it the formula not (M and P), which matches the pattern specified by Condition 1. Condition 2 checks to make sure that this proposition has not been used in a previous application of R2', and because it obviously has not, both of the conditions are fulfilled (see Table 2). The action part of this rule copies the original proposition, substituting not M or not P for not (M and P), and adds this new proposition, If not M or not P then R, to the assertion tree, as shown in Figure 2 (b). Note that Rule R2' makes no mention of subgoals and so provides us with another example of a forward rule.

[Figure 2 layout: Assertion Tree | Subgoal Tree | Rule]
Figure 2. Proof of the argument, If not both M and P then R; therefore, if not M then R. (Panels a-e illustrate ANDS's memory trees before and after each of the inference rules has applied.)

None of the other forward rules applies at this point, but it is possible to do some work in the backward direction. The main goal of the problem is conditional in form and therefore satisfies Rule R9, a supposition-creating rule that we are already familiar with. In this context, R9 supposes that the antecedent (Not M) of the conclusion is true by placing it in a subordinate node of the assertion tree. At the same time, it proposes a corresponding subgoal, R, which it takes from the consequent of the conclusion, and places it in a new subgoal node. The result of these operations is the memory structure shown in Figure 2 (c). At this stage of the proof, then, ANDS is looking for some way to establish R in the context of the supposition node in the assertion tree. If it can do so, the conditional conclusion is guaranteed.

The turning point of the proof occurs in Figure 2 (d). ANDS's search of the rules comes up with R4, since the new subgoal R and the assertion If not M or not P then R match this rule's two conditions. The action half of the rule knows, in effect, that Not M together with the above assertion implies R, so it sets up a subgoal to deduce Not M. However, this subgoal is immediately fulfilled since Not M has just been supposed. Because this subgoal is successful, R itself must be true within the subordinate context, and Action 2 of R4 adds this proposition to the assertion tree. Note that if subgoal Not M had failed, ANDS would have continued by trying to prove Not P. The fact that ANDS hit upon Not M before Not P is a consequence of the ordering of the steps in R4. But although ordering the subgoals in the opposite way would have slowed down this proof, it would not have prevented ANDS from finding it eventually.

Nearly all of the pieces are now in place. Recall that R was introduced as a subgoal by Rule R9, which was intent on proving the main conclusion of the argument. Because that subgoal has now been achieved, the conclusion itself has been deduced (see Action 3 of Rule R9). So as a finishing touch, this conclusion too can be added to the assertion tree, as shown in the final memory configuration in Figure 2 (e). And because this represents the main goal of the problem, ANDS can declare the original argument valid. The proof is, in fact, a fairly straightforward one due to the speed with which ANDS found the correct solution path. For an example of a more complex proof involving a number of false starts, see the Appendix. But even in a simple proof like that of Figure 2, difficulties may arise if one of the crucial rules is unavailable or hard to apply. For instance, removing R9 from the set of rules not only blocks the above proof but makes it impossible for ANDS to reach the correct (valid) answer at all. This source of difficulty will concern us later in accounting for subjects' performance on similar problems.
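The proof trace just walked through can be re-created with a toy prover restricted to the three rules it uses: forward R2' (DeMorgan), backward R9 (If Introduction), and backward R4 (Disjunctive Modus Ponens). The sketch below is our simplification, not ANDS itself; it flattens the assertion tree into a set (so a supposition simply extends the set), and R4's subgoals are checked by membership, which suffices for this example.

```python
def is_compound(p):
    return isinstance(p, tuple)

def demorgan(prop):
    """R2': rewrite any subformula NOT (p AND q) as NOT p OR NOT q."""
    if not is_compound(prop):
        return prop
    if prop[0] == 'NOT' and is_compound(prop[1]) and prop[1][0] == 'AND':
        _, (_, p, q) = prop
        return ('OR', ('NOT', demorgan(p)), ('NOT', demorgan(q)))
    return (prop[0],) + tuple(demorgan(x) for x in prop[1:])

def prove(assertions, goal, depth=0):
    """Backward proof search with R9 and R4, closing the assertions
    under the forward rule R2' at each step."""
    if depth > 8:
        return False
    assertions = assertions | {demorgan(a) for a in assertions}
    if goal in assertions:
        return True
    if is_compound(goal) and goal[0] == 'IF':       # R9: suppose p, prove q
        return prove(assertions | {goal[1]}, goal[2], depth + 1)
    for a in assertions:                            # R4: IF (p OR r), goal
        if (is_compound(a) and a[0] == 'IF' and a[2] == goal
                and is_compound(a[1]) and a[1][0] == 'OR'):
            _, (_, p, r), _ = a
            if p in assertions or r in assertions:
                return True
    return False

# Argument 2: If not both M and P, then R; therefore, if not M, then R.
premise = ('IF', ('NOT', ('AND', 'M', 'P')), 'R')
conclusion = ('IF', ('NOT', 'M'), 'R')
assert prove({premise}, conclusion)   # the argument is declared valid
```

The call sequence mirrors Figure 2: DeMorgan first rewrites the premise, R9 supposes Not M with subgoal R, and R4 discharges that subgoal because Not M is among the assertions. Dropping R9 from this sketch, like removing it from ANDS, makes the conclusion unreachable.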

A Comparison to Other Psychological Models

Our tour of ANDS's structural and processing characteristics suggests some ways to sort out differences among current psychological models of deduction. On a first pass, the most obvious difference is the kind and number of the rules that the models adopt: 22 rules in Osherson (1975), 18 in Braine


(1978), 12 in Johnson-Laird (1975), and 14 in the present model. These rule systems are not merely notational variants, for some arguments are provable in one system but not in another. (For example, ANDS can prove valid the argument [P or Q] and not P; therefore, Q, which cannot be proved by the Osherson model. On the other hand, Osherson's model, but not ANDS, can handle the argument If P and Q, R; therefore, if not R, not P or not Q.) Nonetheless, there are reasons to think that this difference in rules is not the most revealing way to compare the models. First, the difference in number of rules is somewhat misleading since it depends on arbitrary decisions about how rules are to be counted. For example, the "And Elimination" rule can be expressed either as two separate rules ("p AND q entails p" and "p AND q entails q") or as a single rule (as in ANDS's Rule R5). Second, despite some nonoverlap, there is considerable agreement in choice of rules. All models, for example, have some rule corresponding to "And Elimination"; all but one has a rule for "Or Introduction" (e.g., R7), and so on. Third, with the exception of Braine, no one argues that his particular choice of rules is definitive in covering all and only the cognitively basic inference patterns. Rather, the claims are that the designated rules constitute a subset of the basic ones and that they allow us to predict certain facts about those more complex inferences that are derivable from them. Fourth, the effects of rule choice are apparent only if we know the control mechanisms, since even the same set of rules can produce nonequivalent inferences in different control environments.

Whereas no processing assumptions are offered by Braine,5 both Johnson-Laird and Osherson are quite explicit about these matters, and this makes for a fruitful contrast. The main differences seem to lie in (a) whether the models can look ahead to the conclusion of the argument to be evaluated and (b) whether they can make use of subgoals. In ANDS, the backward rules handle both the conclusion and its subgoals. However, the conclusion can be treated independently of the subgoals, as the Johnson-Laird and Osherson models illustrate: The former allows subgoals without conclusion-sensitivity and the latter conclusion-sensitivity without subgoals.

Let's first consider Johnson-Laird's proposal. In this system, the primary deductive step is an attempt to apply three specially designated rules to the given set of premises. These rules are modus ponens, the disjunctive syllogism, and a final rule of the form NOT (p AND q); p; therefore, NOT q. Other "auxiliary" rules are used to aid these primary inferences in a manner similar to ANDS's backward rules. For example, given two premises of the form IF p, q and p AND r, the model will attempt to carry out a modus ponens inference, determine that the second premise is not of the requisite form, propose an auxiliary goal to deduce p, and accomplish this deduction through an auxiliary rule (i.e., p AND r implies p). Other facilities may also be called into play to produce negative or conditional conclusions. What is of interest is that none of this depends on having a conclusion available, since the primary rules are triggered solely by the premises. Thus, the primary rules, like ANDS's forward rules, are initiated by premises, but like backward rules, they can propose their own subgoals. The result is that although the model is able to produce fairly powerful conclusions, it is unable to evaluate arguments (premise-conclusion pairs) except in extremely simple cases.6 (In more recent work, Johnson-Laird, 1982, Note 5, has abandoned the use of inference rules in favor of principles of meaning combination associated with the connectives. However, the points raised above apply equally to this new system.)

5 The main innovation in Braine's (1978) theory is the idea that conditional sentences in English have the same meaning (and should be represented in the same way) as the inference relation between the premises and conclusion of a valid argument. In particular, Braine advocates treating both conditionals and inference relations as material conditionals (true just in case the antecedent is false or the consequent is true). In a reply to Braine, Grandy (1979) questioned whether this identification doesn't distort the meaning of if . . . then; but a possibly more serious problem is the account of the inference relation. Although children may initially confuse truth-functional and non-truth-functional relations, the available evidence implies that by college age most subjects acquire a notion of inferential validity that is logically stronger than that of a material conditional (Osherson & Markman, 1975; Moshman, Note 4). Although ANDS provisionally interprets if . . . then as a material conditional, it maintains the classical logic distinction between the truth of a conditional and the validity of an argument. (A strengthened form of the conditional for causal propositions is considered in the Extensions section later on.)

Osherson's model has the opposite properties. In this system, there are no subgoals apart from the conclusion and no way to return to a previous step in the problem once the current strategy has failed. At the first step of a deduction, a rule is applied to the premise to produce a new proposition, say, S1. On the second step, a rule is applied to S1 to produce S2, and so on, until either the conclusion has been reached or no more rules apply. Once a rule has operated on a given proposition, no memory of that proposition is retained, so it is impossible to return to it later in the deductive process. However, to guide the derivation to the argument's conclusion, the inference rules apply only if the conclusion has a specified form. For instance, the rule that derives p from p AND q operates only if some subformula of p appears in the conclusion with no subformula of q conjoined to it.

One might suspect that it would be easy to fool such a model by constructing the conclusion of the argument in such a way as to lead it off the correct inferential path, and indeed, this is the case. For example, although the model can correctly evaluate the argument If M, N; therefore, either (if M, N or O) or (both P and Q), it fails with the similar argument If M, N; therefore, either (if M, N or O) or (both M and Q). In the first problem, the model correctly transforms If M, N into If M, N or O, and this sentence in turn is rewritten as the conclusion. The second problem ought to have exactly the same proof, but in this case the first step is preempted by another rule that is triggered by the presence of M and Q in the conclusion. Instead of deducing If M, N or O from If M, N, it comes up with If M and Q, N and then halts after two more futile steps. Of course, any program that uses heuristics to guide the proof can be tripped up, and this is no less true of ANDS than of the Osherson model. The present point is that for the latter model, this type of error is always fatal because there is no way to return to the place in the proof where the wrong rule was applied.

In some sense, then, we can think of ANDS as combining the subgoal capability of Johnson-Laird's theory with the look-ahead feature of Osherson's. One can perhaps defend the last two models as explanations of special types of deduction—inference production in the case of Johnson-Laird and simple inference evaluations in the case of Osherson (see the distinction between "logical intuition" and "proof finding" in Chapter 2 of Osherson, 1975). However, it seems a reasonable working hypothesis to regard these activities as two modes of a more general-purpose deductive program. Indeed, it would be odd to suppose that humans have evolved three different logical systems, one for production of inferences, another for evaluation of simple arguments, and a third for more complicated ones. Granted that a general-purpose deduction system is a live possibility, the above considerations suggest that it will have to incorporate both some form of subgoals and some look-ahead feature.
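The combination argued for here, premise-driven forward rules plus conclusion-driven backward rules that can spawn subgoals and add suppositions, can also be sketched in miniature. The fragment below is illustrative only: its five rules and tuple encoding are my own simplifications, not ANDS's actual rule set of Table 2.

```python
def close(assertions):
    """Forward rules: saturate the assertion set using And Elimination,
    modus ponens, and the disjunctive syllogism."""
    facts = set(assertions)
    changed = True
    while changed:
        changed = False
        for f in list(facts):
            derived = set()
            if isinstance(f, tuple):
                if f[0] == 'and':
                    derived |= {f[1], f[2]}
                elif f[0] == 'if' and f[1] in facts:
                    derived.add(f[2])
                elif f[0] == 'or':
                    if ('not', f[1]) in facts:
                        derived.add(f[2])
                    if ('not', f[2]) in facts:
                        derived.add(f[1])
            if derived - facts:
                facts |= derived
                changed = True
    return facts

def prove(goal, premises, seen=frozenset()):
    """Backward chaining with look-ahead: the goal spawns subgoals, and
    If Introduction adds a supposition to the premise set."""
    state = (goal, premises)
    if state in seen:                  # guard against circular subgoals
        return False
    seen = seen | {state}
    facts = close(premises)
    if goal in facts:                  # goal already follows by forward rules
        return True
    if isinstance(goal, tuple):
        if goal[0] == 'and':           # backward And Introduction
            return (prove(goal[1], premises, seen)
                    and prove(goal[2], premises, seen))
        if goal[0] == 'or':            # backward Or Introduction
            return (prove(goal[1], premises, seen)
                    or prove(goal[2], premises, seen))
        if goal[0] == 'if':            # backward If Introduction: suppose p
            return prove(goal[2], premises | {goal[1]}, seen)
    # backward If Elimination: prove the antecedent of a conditional
    # whose consequent matches the goal
    return any(isinstance(f, tuple) and f[0] == 'if' and f[2] == goal
               and prove(f[1], premises, seen)
               for f in facts)
```

On the premises If P, Q and If Q, R, the sketch proves If P, R by supposing P and closing the assertion set forward, and it correctly fails on the invalid converse If R, P.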

To sum up, ANDS's inference mechanism is more general than that of its progenitors, and it should therefore be better able to serve the varied demands placed on the inference system by other cognitive components (e.g., in comprehension or choice). Of course, generality has been purchased at the price of increased complexity, and we should check whether this complexity is required in explaining actual instances of reasoning.

6 The emphasis on production over evaluation of arguments might be justified on the grounds that production can be used to test validity (i.e., it can double as an evaluation procedure). Given an argument with premises P1, P2, . . . , Pk, and a conclusion C, one can add the negation of the conclusion to the premises and check whether a contradiction can be produced from this augmented set of propositions. However, such a mechanism is not very realistic psychologically (though it does form the basis for resolution theorem proving; see Chang & Lee, 1973). Furthermore, one could equally well support the primacy of evaluation over production on the basis of a (similarly unrealistic) procedure: In order to produce valid conclusions, one can systematically generate propositions using the syntactic rules of the language, test to see if they follow from the premises, and output any that do. In sum, although production and evaluation are related logically, it is unlikely that one is psychologically subsumed by the other. (For further discussion of production, see the section on Extensions.)

ANDS's Empirical Consequences

Protocol Examples

Most of the psychological research on propositional reasoning has focused on one sentential connective at a time, for example, determining the way people answer questions about negative or conditional statements (see Wason & Johnson-Laird, 1972, for a review of this research and Osherson, 1975, for an important exception to the one-connective-at-a-time strategy). However, ANDS's aim is to explain reasoning with arbitrary combinations of connectives, and so to evaluate the theory's empirical adequacy, we need a more general set of results. In the present section, I examine new data that bear on this adequacy issue, beginning with a sample protocol from a subject who was trying to decide whether Argument 2 is valid. Because this argument is the same one used to illustrate ANDS's proof procedure in Figure 2, the example can help assess ANDS's advantages and disadvantages as a theory of reasoning. This protocol was collected before the theory was formulated and shaped some of the decisions that went into it. For this reason, the similarities discussed below cannot be taken as detailed confirmation of the theory. However, they may at least indicate the extent to which ANDS's general structure is compatible with human styles of reasoning.

In the experiment from which the protocol was taken, two groups of subjects evaluated single-premise arguments while thinking aloud. Four subjects were included in each group, and the groups were assigned to separate sets of 12 arguments. Each set contained six valid and six invalid problems (the valid arguments were those of Osherson, 1975, Table 11.6). Individual arguments were presented to subjects one at a time in random order on index cards, which remained in view while the problem was being solved. The subjects' task was to decide whether "the conclusion had to be true if the premise was true." In addition, they were told to answer the question first in whatever way seemed natural to them. Then, in order to encourage the subjects to mention all of the steps in their thinking, they were asked to explain their answers once again as if they were talking to a child who did not have a clear understanding of the premise and conclusion. The sample protocol in Table 3 is a complete transcript from a subject who correctly evaluated Argument 2. The subject was at the time a graduate student in English literature who (like the other subjects in this experiment) had no previous formal training in logic.

In the subject's initial solution, he first reads the premise and conclusion of the argument (Lines a and b in Table 3) and then begins working over the premise by paraphrasing it in various ways (Lines c-e). The most helpful of these paraphrases occurs in Line d: "If you come upon any blackboard without an M or without a P . . . there will be an R." From this proposition, the answer seems to be self-evident because after repeating the conclusion in Line f, the subject declares this conclusion to be true. Although the last step is not elaborated in the initial solution, the subject's explanation to the imaginary child provides some insight into his reasoning. He first tries to get the child to understand that if either an M or a P is missing or if both of them are missing from the blackboard, then there has to be an R, essentially a restatement of Line d. He then has the child imagine a situation in which the antecedent of the conclusion is true: "Look at this blackboard here. There is no M on it." Because the M is missing, then "there has to be an R there."

There is a clear parallel between this informal proof and ANDS's assertion tree for the same argument in Figure 1. In particular, Lines a, i, and the second part of Line e in the transcript correspond to the premise (Assertion 1 in the figure), and the statement of the conclusion in Lines b and f to Assertion 8. Lines d, k, and the second half of Line m are all closely related to Assertion 3 (i.e., If not M or not P then R), and finally, the two sentences in Lines o and q are equivalent to the two propositions of the subordinate node. It seems fair to say that the main thread of the subject's explanation is basically the one abstracted in Proof 3 and that ANDS reproduced in the earlier example.

Table 3
Protocol From Subject 7 on the Problem, "If it is not true that there is both an M and a P, then there is an R; therefore, if there is no M, then there is an R"

Initial solution
a. The sentence above the line reads, "If it is not true that there is both an M and a P, then there is an R."
b. The sentence beneath the line reads, "If there is no M, then there is an R."
c. If it is not true that there is both an M and a P—if you come upon a blackboard and there is an M and a P, there will always be an R.
d. If you come upon any blackboard without an M or without a P, without both of them together, there will be an R.
e. So with an M and a P, no R, and if it is not true that they're both there, then there is an R.
f. Now the sentence below says if there is no M, then there is an R.
g. That's true.
h. Now I'm construing the top sentence a little differently, but I think that shows how I'm reasoning, incorrectly or otherwise.
i. If it is not true that there is both, then there is an R.

Explanation to a "child"
j. OK. Anytime you see both the letters M and P on a blackboard, then you can be sure that there is no R.
k. But if one of the letters is missing or if both of the letters is missing, then you'll see an R on the blackboard.
l. Now look at this blackboard here.
m. There is no M on it, so if either an M or a P is missing or both of them are missing, then there has to be an R.
n. Now what about this blackboard?
o. There is an M missing.
p. What does that say?
q. That says there has to be an R there.

However, the comparison between the subject's proof and ANDS's proof of the same theorem also points up a number of dissimilarities. The first of these is that some of the subject's statements never appear in ANDS's assertion or subgoal trees. This is true, for example, of the sentence in Line c of Table 3 ("if . . . there is an M and a P, there will always be an R"), of the statement in the first half of Line e, and of the very similar one in Line j ("Anytime you see both the letters M and P on a blackboard, then you can be sure that there is no R"). What makes these statements initially puzzling is that they are of the form IF p, q and IF p, NOT q, and it is somewhat unlikely that the subject believed both of these sentences to be true at the same time. But because the first of them is never repeated in the proof, it can plausibly be considered a slip of the tongue or a temporary misunderstanding of the premise, which is later abandoned in favor of the second type of sentence. The factors responsible for this error are unclear, and ANDS makes no attempt to duplicate it.

Lines e and j by themselves are easier to understand. These sentences are the inverse of the original premise; that is, from If not (M and P) then R the subject has inferred If M and P then not R. This inference is not valid on most formal interpretations of IF and cannot be deduced from the rules in Table 2. Nevertheless, the inverse does follow if the premise is understood as a biconditional—Not (M and P) if and only if R—as might be appropriate in certain contexts (Fillenbaum, 1977; Geis & Zwicky, 1971). For sentences of the type used here, a substantial minority of subjects accepts the inverse as valid on the basis of a premise conditional (Pollard & Evans, 1980). This behavior could be simulated by providing ANDS with the corresponding deduction rule (see the section on Extensions below), though for the present ANDS sticks to the classical interpretation of IF. Note that such a rule would have to work in a forward direction to account for the protocol, because the inverse does not seem to be motivated by any obvious subgoal. Indeed, the inverse plays no direct role in the proof at all, if this analysis of the subject's reasoning is correct.
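The biconditional account can be checked mechanically. The sketch below (the encoding is mine, for illustration only) enumerates all eight truth assignments to M, P, and R: every assignment satisfying the biconditional Not (M and P) if and only if R also satisfies the inverse If M and P then not R, whereas the plain conditional reading admits a counterexample.

```python
from itertools import product

def implies(a, b):
    # material conditional: false only when a is true and b is false
    return (not a) or b

inverse_follows_from_conditional = True
inverse_follows_from_biconditional = True
for m, p, r in product([True, False], repeat=3):
    inverse = implies(m and p, not r)
    # plain conditional premise: If not (M and P) then R
    if implies(not (m and p), r) and not inverse:
        inverse_follows_from_conditional = False      # counterexample found
    # biconditional premise: Not (M and P) if and only if R
    if ((not (m and p)) == r) and not inverse:
        inverse_follows_from_biconditional = False

print(inverse_follows_from_conditional)    # False
print(inverse_follows_from_biconditional)  # True
```

The counterexample for the conditional reading is the row in which M, P, and R are all true: the premise holds vacuously, but the inverse fails.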


A second discrepancy between the two proofs is that the subject omits mention of one of the problem's subgoals, There is an R. (He does, of course, mention the main goal or conclusion in Lines b and f.) This is likely due to the subgoal being a simple and obvious one, and it seems to be implicit in the subject's explanation. In order to account for the directedness of the subject's proof, we need to make some assumptions about his subgoals, and those that appear in the subgoal tree of Figure 1 seem to be reasonable candidates.

In the previous section, I noted that earlier psychological models have problems in accounting for certain aspects of the deductive process—look-ahead features in the case of Johnson-Laird's model and subgoals in the case of Osherson's. Some further excerpts from the protocols demonstrate how subjects use these features. Argument 5, for example, provides us with an instance where a look-ahead mechanism is particularly helpful, since this argument is easy to handle by supposing as true that part of the conclusion inside the negative (i.e., There is both a Y and no R) and showing that this piece of the conclusion yields a contradiction when taken with the premise.

5. If there is a Y or an M on the blackboard, then there is an R.

It is not true that there is both a Y and no R.

ANDS's strategy on this problem is to work backward from the conclusion via Rule R10, first supposing that there is a Y and no R and then hunting for a contradiction based on this supposition. Subjects' strategies are remarkably similar. Thus, one subject's protocol begins as follows:

6. Ok, I think I have to take this one by analyzing the second part of the second sentence first. It says that there's both a Y and no R, and above it says if there is a Y, then there is an R. So then the second part of the second sentence is not true, so the second sentence as a whole is true.

Another subject explains the validity of the same argument in this way:

7. The bottom sentence says that—the second half of the bottom sentence says that there's a Y there and there is no R there, and this cannot be because the top sentence says that if there is a Y there then there must be an R on the blackboard. So when you put the whole bottom sentence together, it says that it is not true that there is both a Y and no R. So this must be right.

The Johnson-Laird model cannot account for the subjects' reasoning in Excerpts 6 and 7 since the deductive procedure in that model has no access to the argument's conclusion and, hence, cannot analyze it in the manner of these subjects. (As presently formulated, the Osherson model also has trouble with this example. Although the model can at least inspect the conclusion, it has no way to extract part of the conclusion and use it to make further inferences. In other words, the Osherson system does not use suppositions, whereas the solutions in Excerpts 6 and 7 depend on supposing Y and not R.)

There are also many illustrations in the protocols of subjects giving up a particular line of reasoning and switching to another. One case in point occurred with Argument 8:

8. If there is not a Q or not an N, then there is a B.

There is a Q or a B.

Part of the transcript from one subject working on this problem shows a typical logical doubletake:

9. That's wrong, this is not necessarily true. Because the top sentence is a conditional, . . . because there is an or in there, that doesn't mean that there is a Q or a B. There might be. What it does mean is that there's a Q or an N or else there's a B, but not that there's either a Q or a B . . . I messed this up again . . . If Q is on the blackboard, then the bottom sentence is true, because either Q or B is on the blackboard, namely, Q is on the blackboard. If Q is missing from the blackboard, then there must be a B on the blackboard, because the top sentence says that if there is no Q or no N, then there is a B. It says that if Q is missing from the blackboard, then there is a B, and the bottom sentence is true.

This reversal is difficult to explain unless some provision is made in a deductive model for abandoning unsuccessful strategies and initiating new ones. This is prohibited in the system developed by Osherson; but by contrast, in a system with subgoals it is easy to abandon a failed subgoal, back up to a superordinate goal, and begin a new line of reasoning.

As a more global test of ANDS's proofs, two judges, who were familiar with logic, rated the similarity between these proofs and the subjects' protocols for the same arguments. As a basis for comparison, these judges were also asked to rate the similarity between the transcribed protocols and the proofs generated by the Osherson system (1975, p. 262), the only alternative psychological model that yields clear-cut predictions for these arguments. Two steps were taken to facilitate the comparison: First, the stimuli were limited to those arguments that were valid in both the ANDS and Osherson models and to those protocols in which the subject had also correctly reached a valid decision. This left a total of 32 protocols, representing 10 different arguments and 8 subjects. Second, both types of proofs were reexpressed in a common format, that of Fitch (1952), with which the judges were acquainted. The judges were instructed to estimate the extent to which the protocols reflected the same method shown in the proof, and their ratings were given on a scale of 0 to 5, where 0 indicated that the protocol method was definitely not the same as the proof and 5 meant that the method was definitely the same as the proof.

Several of the protocols used in the experiment were not very revealing because subjects were inexplicit about their solution strategies despite our instructions. These vague responses place a limit on the maximum similarity of the proofs to the transcripts. Nevertheless, the mean similarity rating for ANDS's proofs was 3.64 and that of Osherson's proofs 3.21. Taking the two judges' ratings as dependent measures, the difference between the proof systems is reliable in a multivariate analysis of variance in which the individual protocols served as the unit of analysis: Wilks's Λ = .620, corresponding to F(2, 30) = 9.21, p < .01 (see Rao, 1973, pp. 555-556; the result is also significant by other standard multivariate criteria). For the reasons given above, these data should be interpreted cautiously. However, they help bolster the claim that ANDS's assumptions are fairly realistic with respect to reasoning by untrained subjects.

Memory for Proofs

The data most often cited in support of a theory of reasoning are judgments about the validity of inferences. The protocols just examined are a special kind of judgment of this sort, and validity judgments are taken up again in the following sections. However, because ANDS makes specific assumptions about working memory for proofs, another way to evaluate the model is to see which propositions subjects recall after reading or listening to a demonstration like that in Proof 3. The rationale for such a test derives from previous experiments on memory for text (e.g., Kintsch, Kozminsky, Streby, McKoon, & Keenan, 1975; Meyer, 1975). A consistent finding in these studies—the so-called levels effect—is that recall probability for a given proposition is correlated with the height of that proposition in a hierarchical reconstruction of the text. High-level facts that are closely related to the central theme of the passage are more accurately recalled than low-level details. Because ANDS provides us with a ready-made hierarchical analysis of proofs in the form of its tree structures, an obvious prediction is that subjects should recall propositions from the top of these trees better than more embedded propositions.

An experiment by Marcus (1982) on memory for proofs allows us to test this prediction. In this study, subjects listened to a series of passages, each of which was an instantiation of a simple proof, and then attempted to recall the passages in response to their titles. The crucial variable was whether the passage contained lines that would appear in a subordinate node of the assertion tree. Each passage appeared in two versions, one with subordinate sentences of this kind and the other without. (Passages were balanced so that a given subject saw only one version of each.) The sentences in Proof 10 are an example of a passage with subordinate propositions:

10. a. Suppose the runner stretches before running.
b. If the runner stretches before running, she will decrease the chance of muscle strain.
c. Under that condition, she would decrease the chance of muscle strain.
d. If she decreases the chance of muscle strain, she can continue to train in cold weather.
e. In that case, she could continue to train in cold weather.
f. Therefore, if the runner stretches before running, she can continue to train in cold weather.

The assertion tree for this proof is shown on the left-hand side of Figure 3. The conditionals in Sentences b and d are premises


Figure 3. Assertion trees for two proofs from Marcus (1982). (Proof A is an embedded proof; Proof B is nonembedded.)

of the argument and are placed in the top node. However, Sentence a is introduced as an explicit supposition whose truth value is uncertain. Its role is exactly the same as the supposition Not M in the proof of Figure 1—it is temporarily assumed in order to facilitate further conclusions—and as in the case of Not M, it belongs to the subordinate node of the tree. Propositions c and e are derived by modus ponens from this supposition and the conditional premises. Finally, Sentence f is deduced from Sentences a and e on the basis of Rule R9. Other subordinate proofs in Marcus's study used the supposition-creating rules R10 or R11 in place of R9.

We can compare the above example to its nonembedded counterpart in Proof 11:

11. a. The runner stretches before running.
b. If the runner stretches before running, she will decrease the chance of muscle strain.
c. Therefore, she will decrease the chance of muscle strain.
d. If she decreases the chance of muscle strain, she can continue to train in cold weather.
e. Thus, she can continue to train in cold weather.
f. If she did not decrease the chance of muscle strain, she would ruin her chance of running in the Boston Marathon.

The basic difference between Proofs 10 and 11 is that the latter employs no supposition-creating rules and hence is represented by an assertion tree with a single node, as shown at the right of Figure 3. The conditional premises in Sentences 11b and 11d are identical to the ones in Sentences 10b and 10d. But this time Sentence 11a also serves as a premise, as does the filler item in Sentence 11f. (This last sentence was added to equate the length of the two passages.) Propositions 11c and 11e, like the corresponding sentences in Proof 10, are obtained by modus ponens from the earlier assertions.

The important comparison in this experiment is between the recall rates for Propositions 10a, 10c, and 10e on the one hand ("embedded lines") and for Propositions 11a, 11c, and 11e on the other ("unembedded controls"). Apart from the initial adverbial phrases,7 embedded lines and unembedded controls have the same content; however, whereas embedded lines appear in the subordinate node in their proof, the controls belong to the superordinate node. Thus, if the representations in Figure 3 are correct and if recall is better for superordinate material, subjects should be more accurate with

7 In a subsequent experiment, Marcus (1982) has shown that the adverbial phrases alone do not account for the obtained differences between embedded lines and controls.


the controls than with the embedded items. In fact, subjects were correct on 44.6% of trials with sentences like 11a, 11c, and 11e but only 25.4% correct with 10a, 10c, and 10e, in accord with the above prediction. This result cannot be blamed on global differences between the passages. Such differences would produce higher recall rates for sentences like 11b than for equivalents like 10b, both of which are in the superordinate nodes of their respective assertion trees. However, the obtained effect is in the opposite direction, accuracy on 11b-type sentences being 71.2% versus 83.1% for sentences like 10b. (There is an obvious recall advantage for Sentences 10b, 11b, and their analogs over Sentences 10c, 11c, and the like. This could be due to a number of structural or content factors, which were not controlled in this experiment and which have no bearing on the basic contrast.)

In short, these results confirm ANDS's memory organization. Subordinate propositions in the assertion tree exist only to provide the grounds for a higher level conclusion, and because the truth of these subordinate propositions is uncertain, they should be less useful in the context in which the problem is stated. Given limited memory capacity, forgetting from the bottom of the proof trees would seem a more adaptive process than alternative possibilities.

The Acceptability of Arguments

As an additional test of ANDS, we can try fitting the model directly to subjects' decisions about the validity of arguments. In doing so we make crucial use of the assumption, mentioned above, that the deduction rules in Table 2 are unavailable on some proportion of trials (where unavailability may be due to failure to retrieve a rule, to recognize the rule as applicable, or to apply it properly). If we know which rules are needed in proving a given argument and how likely these are to be available, we can arrive at predictions about how often subjects will evaluate the argument correctly.

For example, let us suppose that Argument 12 is presented to a group of subjects who are asked to determine its validity.

12. If Judy is in Albany or Barbara is in Detroit, then Janice is in Los Angeles.

If Judy is in Albany or Janice is in Los Angeles, then Janice is in Los Angeles.

We assume that the subjects will attempt to construct a mental derivation in order to tell if the conclusion follows from the premise, doing implicitly what the subjects of our protocol experiment were doing explicitly. If the subjects are successful in deducing the conclusion, they will judge it to be valid. Otherwise, they will either guess at the answer with some probability, pg, or simply declare the argument invalid (with probability 1 - pg). If ANDS is a correct model of the derivation process, the subjects' ability to deduce the conclusion depends on whether the necessary inference rules from Table 2 are available. That is, the overall probability of a correct valid response to Argument 12 should be equal to the joint probability of having all essential rules available to construct the proof plus some increment due to correct guessing. Other factors—for example, misperceiving part of the argument—could keep subjects from finding a correct proof, but in order to simplify the predictions, let's assume that these sources of error are negligible compared to rule availability.

When all of its deduction rules are available, ANDS proves Argument 12 using R4, R9, and R11. If these rules are accessible with probabilities p4, p9, and p11, and if we assume these probabilities are independent, then the likelihood of a correct response can be expressed as

P(valid) = p4p9p11 + .5pg(1 - p4p9p11), (1)

where the first term is the probability of producing the proof and the second term reflects a correct guess after failing to find a proof. As it stands, however, Equation 1 is not quite right. It would be perfectly correct if the R4-R9-R11 proof were the only possible one, but it turns out that ANDS can still find a proof of Argument 12 even if R4 is missing: A combination of Rules R1 and R7 fills the place left by R4. This means that additional terms must be added to Equation 1 to reflect this alternative derivation. The correct prediction equation is:


P(valid) = p4p9p11 + (1 - p4)p1p7p9p11
    + .5pg[1 - p4p9p11 - (1 - p4)p1p7p9p11]. (2)

Here, the first term is again the probability of finding the original proof, the second term is the probability of finding the alternative proof, and the final term is the probability of finding neither proof but making a correct guess. Because any further deletion of rules results in no proof at all, Equation 2 is our prediction about the proportion of correct responses to Argument 12. It should be clear that prediction equations like Equation 2 can be developed for any argument by systematically deleting combinations of rules from ANDS's repertoire and seeing if the stripped-down model comes up with a proof.
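To make the bookkeeping concrete, Equation 2 can be evaluated with a short program. The sketch below is illustrative only: the function mirrors the two proof paths and the guessing term described above, but the availability values passed to it are placeholders, not the fitted estimates reported later in Table 5.

```python
# Sketch of Equation 2: probability of a correct "valid" response to
# Argument 12 under independent rule availabilities. The main proof
# needs R4, R9, and R11; if R4 is unavailable, R1 and R7 substitute
# for it; otherwise the subject guesses correctly half the time.

def p_valid_argument12(p4, p9, p11, p1, p7, pg):
    main_proof = p4 * p9 * p11
    alt_proof = (1 - p4) * p1 * p7 * p9 * p11
    no_proof = 1 - main_proof - alt_proof
    return main_proof + alt_proof + 0.5 * pg * no_proof

# Placeholder availabilities (pg = .458, as estimated from invalid problems):
print(p_valid_argument12(p4=0.9, p9=0.8, p11=0.8, p1=0.7, p7=0.2, pg=0.458))
```

With all availabilities at 1 the function returns 1, and with all at 0 it returns the pure-guessing rate .5pg, as the derivation requires.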

Predictions were derived in this way for 32 valid arguments that are listed schematically in Table 4. The arguments were constructed by selecting triples of rules from Table 2 and then attempting to generate a proof from each triple. With the exception of R8, all of the rules in Table 2 appear in ANDS's proofs of two or more of the arguments. In addition to these critical problems, an equal number of invalid arguments were produced by rearranging the original premises and conclusions. These problems were included to check whether subjects were responding to the form of the individual sentences rather than to the logical relations among them. Finally, 40 filler arguments were added to this set, most of which were simple valid problems.

Thirty-six subjects participated in this experiment, none of whom had any formal coursework in logic. For half of them, the arguments were presented in terms of propositions about the location of people in cities, as in Argument 12. For the remaining subjects, the same problems were phrased with propositions describing the actions of parts of a machine. For instance, these subjects would have seen Argument 13 in place of Argument 12.

13. If the light goes on or the piston expands, then the wheel turns.

If the light goes on or the wheel turns, then the wheel turns.

Within each of these two groups, atomic propositions (e.g., "Judy is in Albany" or "The light goes on") were randomly assigned to the arguments, with no proposition repeated in more than one problem. The order of the arguments was randomized separately for each subject and presented in a single printed list. Beneath each problem were the phrases "necessarily true" and "not necessarily true." Subjects were asked to circle the first response if the conclusion had to be true whenever the premise was true and to circle the second response otherwise. A response was required for each problem, even if it meant guessing. Subjects were also cautioned that the answer to a problem did not depend on the answer to any other problem on their list. Subjects were tested in groups of varying size and were allowed to proceed at their own pace, normally about 1 hour to evaluate the 104 arguments.

Table 4 presents the results for the valid problems from this experiment, and a preliminary look shows that subjects correctly judged them as valid on 50.6% of trials, with scores for the individual arguments varying from 16.7% on Problem 3 of the table to 91.7% on Problem 27. Although all of these arguments are valid in classical logic, they clearly span a wide range of difficulty for our subjects. The percentage of valid responses to the invalid arguments (22.9%) was significantly less than for the valid ones, F(1, 34) = 84.26, p < .01. To some extent, then, subjects were successful in discriminating the two problem types, even though their overall hit rate was low. Because the valid and invalid items were matched for the complexity of their premises and conclusions, this result suggests that complexity alone cannot account for subjects' decisions. This is backed by relatively low correlations between the percentage of valid responses in Table 4 and factors such as the number of premises in the argument (r = -.23), the number of types of atomic propositions (r = -.04), and the number of proposition tokens (r = .10). An analysis of variance of the valid items showed no reliable difference due to the content of the problems (people in places or machine movements) and no interaction of content with scores on the individual arguments: for the content main effect, F(1, 34) = .20, p > .10; for the interaction, F(31, 1054) = 1.12, p > .10. For this reason, the results for the two


Table 4
Observed and Predicted Percentages of Valid Responses to Stimulus Arguments

Argument                                              Observed   Predicted

 1. (p v q) & ~p ∴ q v r                                33.3       33.3
 2. s; p v q ∴ ~p -> (q & s)                            66.7       70.2a
 3. p -> ~(q & r); (~q v ~r) -> ~p ∴ ~p                 16.7       32.4
 4. p; ~(p & q) ∴ ~q v r                                22.2       30.6
 5. ~p ∴ (p v q) -> q                                    …         70.2a
 6. ~p & q ∴ q & ~(p & r)                               41.7       40.5
 7. (p v q) -> ~r; r v s ∴ …                            61.1       70.2a
 8. (p -> q) & (p & r) ∴ q & r                          80.6       76.6
 9. (p v q) -> ~s; s ∴ ~p & s                           55.6       41.2
10. q ∴ p -> [(p & q) v r]                              33.3       36.0a
11. (p v ~q) -> ~p; p v ~q ∴ ~(q & r)                   22.2       35.6
12. (p v q) -> ~(r & s) ∴ p -> (~r v ~s)                75.0       70.4a
13. ~p; … ∴ ~(p & r) & (q v s)                          22.2       26.4
14. (p v r) -> ~s ∴ p -> ~(s & t)                       50.0       38.1a
15. ~(p & q); (~p v ~q) -> r ∴ ~(p & q) & r             77.8       75.8
16. (q v r) & s ∴ …                                     69.4       68.5a
17. p; (p v q) -> ~r ∴ p & ~(r & s)                     33.3       40.5
18. p -> r ∴ (p & q) -> r                               58.3       69.1a
19. p -> r; s ∴ p -> (r & s)                            75.0       70.9a
20. p v q ∴ ~p -> (q v r)                               33.3       32.2a
21. p; (p v q) -> r; r -> s ∴ s v t                     38.9       33.9
22. p & q ∴ q & (p v r)                                 47.2       37.6
23. ~(p & q); (~p v ~q) -> ~r ∴ ~(r & s)                23.0       35.5
24. (p v s) -> r; s ∴ ~(r -> ~s)                        50.0       36.1
25. p -> ~q ∴ p -> ~(q & r)                             36.1       33.9
26. ~(p & q) & r; (~p v ~q) -> s ∴ s                    66.7       73.9
27. (p v q) -> (r & s) ∴ …                              91.7       86.9a
28. ~r; q v r ∴ …                                       36.1       38.7a
29. ~(p & q); p ∴ ~q & ~(p & q)                         72.2       62.2
30. (p v q) & [(r v s) -> …] ∴ …                        83.3       75.8
31. p v s; (p v r) -> s ∴ s v t                         26.1       36.0
32. t; ~(r & s) ∴ [(~r v ~s) & t] v u                   36.1       33.8

Note. Arguments are given in logical form, where & = and, v = or, -> = if . . . then, and ~ = not; premises are separated by semicolons, and ∴ marks the conclusion. Ellipses mark material that is illegible in the available reproduction. The original arguments appeared as English sentences.
a Causally invalid problems.

content types are collapsed in Table 4 and in the analyses reported below.

Fitting the full model to the data in Table 4 obviously requires a large number of parameters: one for each of the rules used in the proofs, plus an additional guessing parameter. In order to reduce this number somewhat, the guessing parameter, pg, was estimated from the data for the invalid problems. Assuming that valid responses to these problems arise only from (bad) guesses, we can take pg equal to twice this proportion, or .458. Second, because of the similarities between Rules R1 and R1', R3 and R3', and R5 and R5', single parameters (p1, p3, and p5) were allotted for each forward and backward pair. The remaining set of parameters is still fairly large (10 parameters in all); nevertheless, 22 degrees of freedom remain from the Table 4 data for an assessment of the model. Equations for the valid problems were fit using the STEPIT program (Chandler, 1969) with a least-squares criterion, and the resulting predicted values are shown in Table 4. Table 5 gives the parameter estimates for the critical rules.
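The fitting step can be illustrated with a toy program. The sketch below is not STEPIT and does not use the real stimulus set; it assumes, purely for illustration, two invented arguments whose proofs each require a single set of rules, and it minimizes squared error with a naive coordinate search.

```python
# Toy illustration of the parameter-fitting step: choose rule
# availabilities in [0, 1] to minimize squared error between observed
# acceptance rates and model predictions. The data are invented, and
# the naive coordinate search merely stands in for STEPIT.

PG = 0.458  # guessing parameter, fixed from the invalid problems

def predict(avails, rules_needed):
    """P(valid) for an argument with a single proof needing these rules."""
    p = 1.0
    for r in rules_needed:
        p *= avails[r]
    return p + 0.5 * PG * (1 - p)

DATA = [(("R5", "R3", "R7"), 0.333),   # invented argument/observation pairs
        (("R9", "R3", "R6"), 0.667)]

def sse(avails):
    return sum((predict(avails, rules) - obs) ** 2 for rules, obs in DATA)

def fit(rule_names, sweeps=300, step=0.01):
    """Coordinate search: nudge one parameter at a time, keep improvements."""
    avails = {r: 0.5 for r in rule_names}
    for _ in range(sweeps):
        for r in rule_names:
            for cand in (avails[r] - step, avails[r] + step):
                if 0.0 <= cand <= 1.0 and sse({**avails, r: cand}) < sse(avails):
                    avails = {**avails, r: cand}
    return avails

rules = ["R3", "R5", "R6", "R7", "R9"]
fitted = fit(rules)
print(round(sse(fitted), 4), "<=", round(sse({r: 0.5 for r in rules}), 4))
```

Because the search only ever accepts improvements, the fitted error can never exceed the error at the starting values, which is all this sketch is meant to show.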

In general, the predicted values yield a reasonable fit. The correlation of predicted and observed proportions is .933, and the variance accounted for by the model is significant, using the Problem × Subject interaction

from the analysis of variance as an error term, F(10, 1054) = 26.43, p < .01. The F for the residual variance is small but significant due to the large numbers of degrees of freedom, F(22, 1054) = 1.88, p < .05. The standard error of the means in Table 4 is .071 (based on the above error term), and this can be compared to a root mean square deviation of .079. Although the model does not account for all of the systematic variance among the means, it does quite well given our somewhat restrictive assumptions (i.e.,

Table 5
Availability Parameters for Rules in ANDS

Rule                                      Parameter estimate

R1   (Modus Ponens)                             .723
R2   (DeMorgan)                                 .715
R3   (Disjunctive Syllogism)                    .713
R4   (Disjunctive Modus Ponens)                1.000
R5   (And Elimination)                          .963
R6   (And Introduction)                        1.000
R7   (Or Introduction)                          .197
R8   (Restricted Excluded Middle)a
R9   (If Introduction)                          .861
R10  (Not Introduction)                         .238
R11  (Or Elimination)                           .858

a Rule R8 was not used in the proofs of the experimental problems.


no memory loss or miscomprehension and independent retrieval of rules).

For the most part, the parameter estimates are also sensible. As a check on parameter stability, the subjects were split randomly into two equal groups, and the model was fit separately to their data. The resulting parameter values were quite similar to each other and to those of Table 5. The correlation between the free parameters was .871 for the two split halves and .970 and .963 between each half and the full model. The values also make intuitive sense. Among the rules with the highest availabilities are those that deal with conjunctions (R5 and R6), in line with the obviousness of these rules. The least available rule is R7, which introduces disjunctive conclusions (e.g., "Dorothy is in Topeka or Jill is in Kansas City") on the basis of one of the disjuncts ("Dorothy is in Topeka"). This rule seems odd to many people, perhaps because it is misleading to assert a disjunction when one of the disjuncts is already known to be true (Grice, Note 6). Subjects in the protocol experiment tended to reject arguments based on R7 on the grounds that the new disjunct is irrelevant to any previous information.

More surprising is the fact that the parameter value for Rule R4 is higher than the apparently similar R1. Both of these rules derive a conclusion on the basis of a conditional premise, the main difference being that R4 is specialized for disjunctive antecedents. The discrepancy in the parameter estimates could indicate a trade-off between them due to their overlapping function; however, the asymptotic correlation between the estimates (as obtained from STEPIT) is positive and of moderate value (.34). A more likely possibility is that the special conditional required by R4 (IF p OR q, r) makes it a more obvious match to the corresponding assertion. R1 is a more general rule, but its generality may mean that subjects must abstract from more surface detail in applying it to these problems. For example, both R1 and R4 apply to a conditional such as "If Hilda is in Boston or Kathy is in Las Vegas, then Eve is in Providence." However, R4 provides a more specific match than does R1. If the availability of a rule depends on the salience of its components, the difference in specificity may help account for the recovered parameter values.8

Note that in fitting the model to the acceptability scores, we have made no assumptions about memory capacity of the sort that proved essential in accounting for Marcus's results. This is perhaps as it should be, since subjects in the present experiment were under no time pressure and were able to view the premises and conclusions and to write down whatever intermediate results they wished. In a different sort of experiment, however, capacity problems almost certainly would come into play. We must leave it an open question whether these limits on proof generation are best imposed as an upper bound on the total number of elements in the working memory trees, on the length of a branch in these trees, on the amount of branching at a node, or some combination of these factors.
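The three candidate limits just mentioned are easy to state computationally. The sketch below assumes a toy representation of a working-memory tree as nested dictionaries (not ANDS's actual data structure) and computes the three quantities on which a capacity bound might be placed.

```python
# Toy working-memory tree as nested dicts: {node: subtree}. The three
# measures correspond to the candidate capacity limits discussed above:
# total elements, longest branch, and maximum branching at a node.

def tree_measures(tree):
    """Return (total nodes, maximum depth, maximum branching factor)."""
    def walk(t, depth):
        nodes, branch = len(t), len(t)
        max_depth = depth if t else depth - 1
        for child in t.values():
            n, d, b = walk(child, depth + 1)
            nodes += n
            max_depth = max(max_depth, d)
            branch = max(branch, b)
        return nodes, max_depth, branch
    return walk(tree, 1)

# A premise node with two suppositions, one holding a sub-supposition:
example = {"premises": {"supposition-1": {}, "supposition-2": {"sub": {}}}}
print(tree_measures(example))  # (4, 3, 2)
```

A capacity-limited version of the model would reject any proof step that pushed one of these measures past its bound.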

Extensions

The line that we have taken so far is conservative, in that the sort of reasoning accomplished by ANDS (and the reasoning attributed to subjects) is a subset of a well-understood formal system: classical sentential logic. It is obvious, however, that many perfectly good inferences cannot be captured in classical logic. For example, inferences about causation, belief, obligation, logical necessity, and probabilistic or plausible inferences (e.g., those described by Collins, 1978) all go beyond classical logic in interesting ways. Even within the domain of propositional reasoning, it is conceivable that people rely on rules that are not sanctioned by the usual formal treatment. It is unclear whether all of these inference types can be handled by the kind of mechanisms embodied in ANDS, and it seems that the only way to resolve this question is by systematic theory building and experimentation. By way of a start on a more

8 A somewhat more realistic model of the logical decision process would handle such facts by means of separate parameters for the different aspects involved in applying a rule. For example, one could estimate separately the difficulty in recognizing whether a rule's conditions are met (which may change from problem to problem) and the difficulty in actually carrying out the actions of the rule (which might be problem invariant). No attempt has been made to incorporate this distinction in the present model because of the already large number of parameters.


general deductive model, the next section outlines an extension of ANDS for causal reasoning together with a test of this extension. Other possible modifications are discussed in the conclusion.

Causal Reasoning

It is a notorious fact that people reason differently about causal connectives than about truth-functional conditional connectives. Experimental differences between conditional and causal reasoning have been documented by Staudenmayer (1975), who compared the acceptability of arguments containing cause to similar arguments with if . . . then (e.g., X causes Y; Not Y; therefore, not X vs. If X then Y; Not Y; therefore, not X).

In ANDS's framework, the easiest way to handle causal propositions is through the use of suppositions and the rules that govern them. The basic idea is that in thinking about a causal sentence, one tries to imagine a situation where that cause obtains and then fills in the details of the situation according to one's knowledge of its effects (Tversky & Kahneman, 1978). This suggests adding to ANDS a rule like the following one, which is analogous to Rule R9 of Table 2:

R12 (Cause Introduction):
Conditions: 1. Current subgoal = q CAUSALLY-DEPENDS-ON p.9
Actions:    1. Add new subordinate causal node to assertion tree containing assumption p.
            2. Set up subgoal to deduce q in new node.
            3. If Subgoal 2 is achieved, add q CAUSALLY-DEPENDS-ON p to superordinate node of the assertion tree.

The subordinate causal node or C node in this rule corresponds to the supposition node of Rules R9-R11; however, as we shall see, there are reasons to distinguish this new kind of assertion node from the old one. In addition, we may want to add further rules to characterize other aspects of causal reasoning. For example, Lewis (1973) suggests the following two principles:

R13 (Cause Elimination):
Conditions: 1. Assertion tree contains a proposition q CAUSALLY-DEPENDS-ON p.
            2. Assertion tree contains a C node whose initial proposition is p, and which does not contain q.
Actions:    1. Add q to the C node identified in Condition 2.

R14 (Cause Inversion):
Conditions: 1. Assertion tree contains a proposition q CAUSALLY-DEPENDS-ON p.
            2. Assertion tree contains a C node whose initial proposition is NOT p, and which does not contain NOT q.
Actions:    1. Add NOT q to the C node.

The first rule asserts that if q is causally dependent on p, then if p occurs, so will q. The second asserts that if q is causally dependent on p, then if p does not occur, neither will q.

However, these kinds of rules are still not enough to capture ordinary causal reasoning, since there are too many arguments that come out valid under their aegis. For example, we can derive the conclusion "Cheryl's being in Santa Fe causally depends on Martha's being in New Brunswick" from the assertions "Cheryl is in Santa Fe" and "Martha is in New Brunswick," which is patently invalid. Given the new rule R12, the proof of this argument is trivial: The condition on R12 is fulfilled by the conclusion of the argument, which has the required q CAUSALLY-DEPENDS-ON p form. ANDS would then add a C node to the assertion tree with the supposition that "Martha is in New Brunswick" and would set up a corresponding subgoal node to deduce "Cheryl is in Santa Fe." But this latter proposition is one of the original assertions, and this allows the conclusion to be derived by the third action step of R12. To put this another way, causal sentences come out as truth functional on the basis of R12-R14 (i.e., the truth of a causal sentence is determined by the truth of its component propositions). But this is incorrect. We need to know more than just that Martha is in New Brunswick and that Cheryl

9 In Lewis's (1973) theory of causality, on which the following rules are based, "causal dependence" is a more proximal relation between two events than is "cause." Event e is causally dependent on Event c if it is the case that (a) e would occur if c were to occur and (b) e would not occur if c were not to occur. Event c causes Event e if there is a chain of events, d1, d2, . . . , dk, such that d1 causally depends on c, d2 causally depends on d1, . . . , e causally depends on dk.


is in Santa Fe before concluding that there is a causal relation involved. This means that rules R12-R14 must be supplemented by an additional restriction.

The problem lies in the way superordinate propositions in the assertion tree get used in subordinate nodes. Recall that in the proofs we have studied, any proposition in the assertion tree could be used to carry out a deduction in any node to which it was a superordinate. For example, in the proof of Figure 1, the proposition If not M or not P, then R in the superordinate node was used with the subordinate proposition Not M to deduce R. However, for causal inferences, this procedure won't do. We want the C nodes to represent situations that result from particular causes, and hence, propositions deduced in these nodes should be those causes' effects. This means that we can no longer import any superordinate proposition for a deduction down below, since these propositions can introduce noneffects, as in the Martha-Cheryl example. The right policy is to restrict ANDS's deductions in C nodes to propositions that are actually placed there by the causal rules, R12-R14. Under this restriction, causal sentences are no longer truth functional; in particular, the inference about Martha and Cheryl is blocked, because R12 no longer has access to the premises "Martha is in New Brunswick" and "Cheryl is in Santa Fe." (For a similar modal natural-deduction system for counterfactual conditionals, see Thomason, 1970a.)
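The policy can be pictured with a small sketch. The classes below are a hypothetical simplification, not ANDS's implementation: an ordinary supposition node may import any superordinate proposition, whereas a C node exposes only what the causal rules R12-R14 have explicitly placed in it, which is what blocks the Martha-Cheryl inference.

```python
# Toy illustration of the C-node restriction (not ANDS's real code).
# In an ordinary supposition node, deduction may import any
# superordinate assertion; in a causal C node it may not.

class Node:
    def __init__(self, props, parent=None, causal=False):
        self.props = set(props)   # propositions placed in this node
        self.parent = parent
        self.causal = causal

    def visible(self):
        """Propositions usable for deduction inside this node."""
        if self.causal:
            return set(self.props)      # only what the causal rules added
        seen, node = set(self.props), self.parent
        while node is not None:         # import all superordinates
            seen |= node.props
            node = node.parent
        return seen

top = Node({"Martha is in New Brunswick", "Cheryl is in Santa Fe"})
supp = Node({"not-M"}, parent=top)             # ordinary supposition node
c_node = Node({"Martha is in New Brunswick"},  # causal supposition (C node)
              parent=top, causal=True)

print("Cheryl is in Santa Fe" in supp.visible())    # True: importable
print("Cheryl is in Santa Fe" in c_node.visible())  # False: blocked
```

Because the C node cannot see "Cheryl is in Santa Fe," the subgoal set up by R12 fails, and the spurious causal conclusion is never derived.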

The restriction on superordinate propositions suggests a test of the augmented model. Given this limitation, it is clear that many formerly valid arguments with conditionals become invalid when the conditionals are replaced by causal connectives. In particular, of the 32 arguments in Table 4, 20 remain valid under a causal interpretation, whereas the remaining 12 become invalid (the identity of these causally valid and causally invalid problems is indicated in Table 4). The prediction is that if these problems are rephrased in causal language and presented to a new group of subjects, acceptance rates for causally invalid arguments will decline, but acceptance rates for causally valid arguments will remain at their previous levels.

This prediction was tested in a study quite similar to the one reported above. The problems again dealt with the actions of a machine (the same set of machine parts and action verbs were used), but sentences of the form IF p, q were replaced by ones of the form q CAUSALLY-DEPENDS-ON p. For example, Problem 13 above would be changed to 14, which is a causally invalid argument:

14. That the wheel turns causally depends on the light going on or the piston expanding.

That the wheel turns causally depends on the light going on or the wheel turning.

As in the previous experiment, the critical problems were mixed with filler items, and the entire set was presented to a new group of 18 subjects for their validity judgments.

The results from this experiment are summarized in Table 6 along with those from the earlier study. One can see immediately that the percentage of valid judgments for the 20 causally valid problems changes very little across conditions. However, the causally invalid items show the expected dip when the connective is changed from if . . . then to causally depends: for the interaction between causal validity and condition, SE = 2.6%; F(1, 51) = 25.78, p < .01. We should be somewhat wary of this result, however, since many of the causally valid items do not contain a conditional premise or conclusion (e.g., Argument 1 of Table 4), and these problems are unchanged in moving from conditional to causal propositions. Perhaps items of this type mask effects of causal propositions on the remaining causally valid problems. We can look into this by removing the arguments with no conditionals and retabulating the results for the remaining problems. But as it turns out, the retabulated percentages are nearly the same as in Table 6: For the causally valid arguments, we now have 49.3% for the people conditionals, 44.6% for machine conditionals, and 46.6% for machine causals. It is unlikely, then, that the mere mention of causal dependence lowers the acceptance rates. What is important is the logical role it plays in the arguments.

Two further points about these data deserve mention. First, it has sometimes been claimed that conditional sentences are understood as causal if the conditional is about


two events that could plausibly stand in a causal relation (Legrenzi, 1970; Wason & Johnson-Laird, 1972, Chapter 6). Given this hypothesis, one might suppose that the conditional sentences about machines (as in Argument 13) would be interpreted causally and, hence, produce low acceptance rates for the causally invalid problems. But this seems not to have been the case. In Table 6, percentages for machine conditionals are about the same as those for the people conditionals, and both of these percentages are higher than for explicitly causal statements, F(1, 51) = 86.86, p < .01. The failure of the above hypothesis is consistent with some earlier results of Marcus and Rips (1979; Rips & Marcus, 1977), indicating that "causal content" of the antecedent and consequent is normally not sufficient to alter subjects' responses (though other changes in the content of the problem may do so).

Second, causally valid and invalid problems should all be valid for the conditional-machine and the conditional-people arguments. However, Table 6 reveals significantly higher acceptance rates for the causally invalid arguments in these conditions, F(1, 51) = 40.83, p < .01. The reason for this difference may be that the causally valid items more often depend on one of the inaccessible rules discussed earlier (see Table 5); no attempt was made to equate the problems on this factor. The existence of this difference, however, does not affect our basic conclusions about causal validity.

Some Remaining Questions

In ANDS, propositional reasoning amounts to the application of certain kinds of deductive rules within a certain kind of memory configuration. Given an argument to evaluate, some of the rules operate in an automatic way on the premises to produce subsidiary inferences; other rules are driven by the conclusion to produce additional subgoals and suppositions. If all goes well (if the argument is valid and if the required rules are accessible), then the inferences will mesh with the subgoals and the argument will be labeled valid. In other cases, the deductive process runs out of rules to apply, and the argument is deemed invalid (or a guess is made to determine its validity). The resulting structure of the proof (or nonproof) is hierarchical in that suppositions are subordinated to the original premises and subgoals to the conclusion. The suppositions may themselves differ in type: one sort for truth-functional reasoning; a different sort for causal reasoning; and perhaps yet other kinds for reasoning about logical necessity, belief, obligation, and other non-truth-functional operators. In support of these assumptions, we have seen that the model can predict validity judgments for a broad set of propositional arguments. In addition, the model accords with subjects' recall of informal proofs and with subjects' overt construction of proofs. Finally, an extension of the model can help explain differences between reasoning with conditional and causal sentences.

Table 6
Percentage Valid Responses for Causally Valid and Invalid Arguments

Argument format                                        Causally valid   Causally invalid

People conditionals (e.g., If Judy is in
  Providence, Mary is in Baltimore)                         45.9             62.0
Machine conditionals (e.g., If the whistle
  blows, the wheel stops)                                   42.6             60.2
Machine causals (e.g., That the wheel stops
  causally depends on the whistle blowing)                  47.0             31.0

Nevertheless, a number of important issues about propositional reasoning are still to be resolved. For one thing, all of the deductive rules mentioned above (the causal rules R12-R14 excepted) are valid with respect to classical logic. This is an assumption that ANDS shares with all of the other psychological natural-deduction models, but it is one that is not past doubt. For example, among the thinking-aloud protocols are hints of "pathological rules" (Massey, 1981) that are not standardly valid. I have already noted one or two instances in the discussion of Table 3; however, a more dramatic example comes from a different subject working on the same problem (i.e., Argument 2), and a fragment from the transcript of this subject is worth citing:


15. The first sentence tells us that on a blackboard if there is no M or a P written on it, both M and P are absent from the blackboard, an R can be written on it. If M is absent alone, still an R can't be written on it. If P is absent alone, still an R can't be written on it. They both have to be absent for an R to be on the blackboard. The second sentence is telling us that there is no M on the blackboard but there is an R . . . We really can't tell if the sentence is true or not because we don't know if a P is not also there.

The subject interprets the premise If it is not true that there is both an M and a P, then there is an R to mean that If there is not an M and not a P, then there is an R and from this interpretation reasons that the conclusion If not M, R is invalid. An interesting question is just how deep this mistake goes (Cohen, 1981; Kahn & Rips, in press; Rips, in press). That is, should it be regarded as a temporary performance error or as a permanent feature of the subject's reasoning? In either case, however, some new mechanism will have to be added to ANDS before it can account for this type of deviation.

A second place where the model is incomplete is in its handling of invalid arguments. ANDS assumes that an argument is invalid just in case it fails to find a proof, but there are indications from the protocols that subjects often cut short this exhaustive search. The simplest such cases occur when the conclusion contradicts the premises, because subjects who detect the contradiction sometimes make an immediate "invalid" response. For example, the invalid argument There is no L and there is a G; therefore, there is no G and there is an L evoked from one of the subjects the comment that the conclusion was "a total reverse of the top [sentence] . . . the bottom is wrong. You can't have both of the two. And the two totally contradict; it's one or the other. If the top is true, the bottom is false." For invalid arguments not involving a contradiction, subjects' usual response was, in effect, that the premises do not supply enough information to establish the conclusion. This is consistent with ANDS's exhaustive search strategy but also with other sorts of strategies (Collins, Note 7). Suppose, for example, that subjects know a rule that would yield the conclusion provided that some extra piece of information were available. If this information is obviously absent, they may reason that this absence shows the argument to be invalid. This line of reasoning is usually fallacious, since there could be other ways of arriving at the conclusion aside from the rule in question. Nevertheless, there is some evidence that subjects use such a strategy for problems like those considered above (Rips, in press). In other words, ANDS appears to be missing rules or heuristics for direct detection of invalidity.

Inference production has also received slight treatment in the present model. It would certainly be desirable if ANDS could produce conclusions to a set of premises as well as evaluate prespecified conclusions. But so far, its ability to do this is rather limited. The only inferences it can produce are those of its forward rules. For example, ANDS can produce the conclusion Q from the premises If (M and N), Q and M and N by forward modus ponens. But it is incapable of producing Q from If (M and N), Q; M; and N; where M and N are separate premises. The difficulty in the second case is that ANDS has no way to produce a subgoal for the needed conjunction, and without this subgoal the And Introduction rule (R6) cannot apply (for good reason, as we saw in the section on Forward Versus Backward Rules). However, pilot studies suggest that subjects have little trouble with the second problem. Johnson-Laird's (1975) proposal has an advantage over ANDS in this respect, since its production rules are able to set up subgoals. One way to handle this problem in the ANDS framework is to remove Condition 1 of Rule R1, allowing backward modus ponens to run without having to be invoked by a subgoal. R1 can then bring into play other backward rules like R6. Likewise, we can also remove Condition 1 of Rules R3, R5, R10, and R11, in this way greatly increasing the number of conclusions that the model can produce. Thus, in a "conclusion production" mode, the rules would operate without having to be specifically triggered, whereas in the usual "argument evaluation" mode, they would be responsive to subgoals as before. It remains to be seen, however, whether this strategy correctly predicts subjects' actual inferences.

Because the forms of reasoning are varied and because these forms impinge on most of our mental activities, it is not surprising that


many questions survive these modeling efforts. The hope is that the model provides a consistent way of thinking about inference, one that can serve as a base for exploring other inference types. As mentioned at the outset, many parts of cognitive psychology presuppose some type of mechanism for drawing inferences, and in the absence of a clearly specified and general inference theory, investigators have been forced to posit ad hoc reasoning routines or heuristics in these subdisciplines. But ad hoc inference mechanisms mean that the larger models they support are ad hoc as well. There is little chance for principled explanations in these areas until this inferential debt is cleared up. The present effort is far from the kind of general theory that is needed. But if its basic notions (for example, the distinction between forward and backward inference and the use of suppositions) carry over to other forms of reasoning, then it will at least have provided a hint about where to begin.

Reference Notes

1. Hewitt, C. Description and theoretical analysis (using schemata) of PLANNER: A language for proving theorems and manipulating models in a robot (Tech. Rep. AI-TR-258). Cambridge, Mass.: Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 1972.

2. McDermott, D. V., & Sussman, G. J. The CONNIVER reference manual (Memo No. 259). Cambridge, Mass.: Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 1972.

3. Martins, J. P., McKay, D. P., & Shapiro, S. C. Bi-directional inference (Tech. Rep. 174). Buffalo, N.Y.: State University of New York at Buffalo, Department of Computer Science, March 1981.

4. Moshman, D. College students' understanding of the concept of inferential validity. Paper presented at the meeting of the Jean Piaget Society, Philadelphia, May 1981.

5. Johnson-Laird, P. N. Propositional representation & procedural semantics -> mental models. Paper presented at the Centre National de la Recherche Scientifique Conference, Royaumont, France, June 1980.

6. Grice, H. P. Logic and conversation. William James Lectures, Harvard University, 1967.

7. Collins, A. Personal communication, 1982.

References

Anderson, A. R., & Belnap, N. D., Jr. Entailment: The logic of relevance and necessity (Vol. 1). Princeton, N.J.: Princeton University Press, 1975.

Anderson, J. R. Language, memory, and thought. Hillsdale, N.J.: Erlbaum, 1976.

Anderson, J. R., Greeno, J. G., Kline, P. J., & Neves, D. M. Acquisition of problem-solving skill. In J. R. Anderson (Ed.), Cognitive skills and their acquisition. Hillsdale, N.J.: Erlbaum, 1981.

Bhaskar, R., & Simon, H. A. Problem solving in semantically rich domains: An example from engineering thermodynamics. Cognitive Science, 1977, 1, 193-215.

Bledsoe, W. W. Non-resolution theorem proving. Artificial Intelligence, 1977, 9, 1-35.

Braine, M. D. S. On the relation between the natural logic of reasoning and standard logic. Psychological Review, 1978, 85, 1-21.

Carroll, L. What the tortoise said to Achilles. Mind, 1895, 4, 278-280.

Chandler, J. P. STEPIT: Finds local minima of a smooth function of several parameters. Behavioral Science, 1969, 14, 81-82.

Chang, C.-L., & Lee, C.-T. Symbolic logic and mechanical theorem proving. New York: Academic Press, 1973.

Clark, H. H. Inferences in comprehension. In D. LaBerge & S. J. Samuels (Eds.), Basic processes in reading: Perception and comprehension. Hillsdale, N.J.: Erlbaum, 1977.

Cohen, L. J. Can human irrationality be experimentally demonstrated? Behavioral and Brain Sciences, 1981, 4, 317-370.

Collins, A. Fragments of a theory of human plausible reasoning. In D. L. Waltz (Ed.), Theoretical issues in natural language processing—2. New York: Association for Computing Machinery, 1978.

Copi, I. M. Symbolic logic. New York: Macmillan, 1954.

Davidson, D. Mental events. In L. Foster & J. W. Swanson (Eds.), Experience and theory. Amherst: University of Massachusetts Press, 1970.

Dennett, D. C. True believers: The intentional strategy and why it works. In A. F. Heath (Ed.), Scientific explanation. Oxford, England: Clarendon Press, 1981.

Fillenbaum, S. Mind your p's and q's: The role of content and context in some uses of and, or, and if. In G. H. Bower (Ed.), Psychology of learning and motivation (Vol. 11). New York: Academic Press, 1977.

Fitch, F. B. Symbolic logic. New York: Ronald Press, 1952.

Fodor, J. A., & Pylyshyn, Z. W. How direct is visual perception? Some reflections on Gibson's "Ecological Approach." Cognition, 1981, 9, 139-196.

Gardner, M. Logic machines and diagrams. New York: McGraw-Hill, 1958.

Geis, M. L., & Zwicky, A. M. On invited inferences. Linguistic Inquiry, 1971, 2, 561-566.

Gentzen, G. Investigations into logical deduction. In M. E. Szabo (Ed. and trans.), The collected papers of Gerhard Gentzen. Amsterdam: North-Holland, 1969. (Originally published, 1935.)

Grandy, R. E. Inference and if-then. Psychological Review, 1979, 86, 152-153.

Greeno, J. G. Indefinite goals in well-structured problems. Psychological Review, 1976, 83, 479-491.

Haack, S. The justification of deduction. Mind, 1976, 85, 112-119.

Hastie, R. Social inference. Annual Review of Psychology, in press.


Jaskowski, S. On the rules of supposition in formal logic. Studia Logica, 1934, 1, 5-32.

Johnson-Laird, P. N. Models of deduction. In R. J. Falmagne (Ed.), Reasoning: Representation and process in children and adults. Hillsdale, N.J.: Erlbaum, 1975.

Johnson-Laird, P. N. Thinking as a skill. Quarterly Journal of Experimental Psychology, 1982, 34A, 1-29.

Johnson-Laird, P. N., & Steedman, M. The psychology of syllogisms. Cognitive Psychology, 1978, 10, 64-99.

Kahn, G. S., & Rips, L. J. Norms, competence, and the explanation of reasoning. Behavioral and Brain Sciences, in press.

Kahneman, D., & Tversky, A. The simulation heuristic. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press, 1982.

Kintsch, W., Kozminsky, E., Streby, W. J., McKoon, G., & Keenan, J. M. Comprehension and recall of text as a function of content variables. Journal of Verbal Learning and Verbal Behavior, 1975, 14, 196-214.

Klahr, P. Planning techniques for rule selection in deductive question-answering. In D. A. Waterman & F. Hayes-Roth (Eds.), Pattern-directed inference systems. New York: Academic Press, 1978.

Legrenzi, P. Relations between language and reasoning about deductive rules. In G. B. Flores D'Arcais & W. J. M. Levelt (Eds.), Advances in psycholinguistics. Amsterdam: North-Holland, 1970.

Lewis, D. Causation. Journal of Philosophy, 1973, 70, 556-567.

Mackie, J. L. Truth, probability, and paradox. Oxford, England: Clarendon Press, 1973.

Marcus, S. L. Recall of logical argument lines. Journal of Verbal Learning and Verbal Behavior, 1982, 21, 549-562.

Marcus, S. L., & Rips, L. J. Conditional reasoning. Journal of Verbal Learning and Verbal Behavior, 1979, 18, 199-223.

Massey, G. J. The fallacy behind fallacies. In P. A. French, T. E. Uehling, Jr., & H. K. Wettstein (Eds.), Midwest studies in philosophy (Vol. 6). Minneapolis: University of Minnesota Press, 1981.

McCawley, J. D. Everything that linguists have always wanted to know about logic but were ashamed to ask. Chicago: University of Chicago Press, 1980.

Meyer, B. J. F. The organization of prose and its effects on memory. Amsterdam: North-Holland, 1975.

Miller, G. A., & Johnson-Laird, P. N. Language and perception. Cambridge, Mass.: Harvard University Press, 1976.

Newell, A., & Simon, H. A. Human problem solving. Englewood Cliffs, N.J.: Prentice-Hall, 1972.

Nisbett, R., & Ross, L. Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, N.J.: Prentice-Hall, 1980.

Osherson, D. N. Logical abilities in children (Vol. 2). Hillsdale, N.J.: Erlbaum, 1974.

Osherson, D. N. Logical abilities in children (Vol. 3). Hillsdale, N.J.: Erlbaum, 1975.

Osherson, D. N., & Markman, E. Language and the ability to evaluate contradictions and tautologies. Cognition, 1975, 3, 213-226.

Pollard, P., & Evans, J. St. B. T. The influence of logic on conditional reasoning performance. Quarterly Journal of Experimental Psychology, 1980, 32, 605-624.

Quine, W. V. Truth by convention. In O. H. Lee (Ed.), Philosophical essays for A. N. Whitehead. New York: Longmans, 1936.

Ramsey, F. P. Truth and probability. In H. E. Kyburg & H. E. Smokler (Eds.), Studies in subjective probability. Huntington, N.Y.: Krieger, 1980. (Originally written, 1926.)

Rao, C. R. Linear statistical inference and its applications. New York: Wiley, 1973.

Revlis, R., & Hayes, J. R. The primacy of generalities in hypothetical reasoning. Cognitive Psychology, 1972, 3, 268-290.

Rips, L. J. Reasoning as a central intellective ability. In R. J. Sternberg (Ed.), Advances in the study of human intelligence (Vol. 2). Hillsdale, N.J.: Erlbaum, in press.

Rips, L. J., & Marcus, S. L. Suppositions and the analysis of conditional sentences. In M. A. Just & P. A. Carpenter (Eds.), Cognitive processes in comprehension. Hillsdale, N.J.: Erlbaum, 1977.

Simon, D. P., & Simon, H. A. Individual differences in solving physics problems. In R. S. Siegler (Ed.), Children's thinking: What develops? Hillsdale, N.J.: Erlbaum, 1978.

Staudenmayer, H. Understanding conditional reasoning with meaningful propositions. In R. J. Falmagne (Ed.), Reasoning: Representation and process in children and adults. Hillsdale, N.J.: Erlbaum, 1975.

Suppes, P. C. Introduction to logic. Princeton, N.J.: Van Nostrand, 1957.

Thomason, R. H. A Fitch-style formulation of conditional logic. Logique et Analyse, 1970, 52, 397-412. (a)

Thomason, R. H. Symbolic logic. Toronto: Macmillan, 1970. (b)

Toulmin, S. E. The uses of argument. Cambridge, England: Cambridge University Press, 1958.

Tversky, A., & Kahneman, D. Causal schemas in judgments under uncertainty. In M. Fishbein (Ed.), Progress in social psychology. Hillsdale, N.J.: Erlbaum, 1978.

Ullman, S. Against direct perception. Behavioral and Brain Sciences, 1980, 3, 373-415.

Warren, W. H., Nicholas, D. W., & Trabasso, T. Event chains and inferences in understanding narratives. In R. O. Freedle (Ed.), New directions in discourse processing: Advances in discourse processing (Vol. 2). Norwood, N.J.: Ablex, 1979.

Wason, P. C., & Johnson-Laird, P. N. Psychology of reasoning. Cambridge, Mass.: Harvard University Press, 1972.

Wittgenstein, L. Tractatus logico-philosophicus (D. F. Pears & B. F. McGuinness, Trans.). London: Routledge & Kegan Paul, 1961. (Originally published, 1921.)



Appendix

ANDS's Proof of a Difficult Theorem

Figure A1 depicts the state of ANDS's data base at the conclusion of its proof of the following argument:

If M, not (N and P).
If not N or not P, then not M.

Not M.

The numbers of the propositions in the figure give the order in which these statements are added to the data base. The complexity of Figure A1 is due to some false starts that ANDS makes before hitting the right solution path. The successful part of the proof includes Subgoals 1, 16, and 17 and Assertions 2, 3, 4, 15, 18, 19, 20, and 21.

At the beginning of the deduction, only the premises (Assertions 2 and 3) and the conclusion (Subgoal 1) occupy the data base. As an opening move, ANDS notices that the premise in Assertion 2 meets the conditions for the DeMorgan rule, R2', and so adds the transformed version of Assertion 2 to the assertion tree as Assertion 4, following the same line of reasoning as in the Figure 1 example. The next rule whose conditions are fulfilled is R4, since the current subgoal is Not M and since ANDS knows about an assertion If not N or not P, not M, which may be of help in deducing this subgoal. R4 sets up a subgoal to deduce Not N in Subgoal 5, and if this proposition happened to reside in the assertion tree, no further work would be necessary. But, unfortunately, no such assertion exists. Before giving up, however, ANDS looks to see whether Not N can be deduced from some further premise. Because the form of this proposition is negative, R10 is triggered, which attempts to prove Not N by reductio ad absurdum, that is, by deducing a contradiction from the assumption that N is true. This assumption is shown in Assertion 6. The contradiction that ANDS proposes to prove is M and Not M, the first half of which is Subgoal 7. This particular choice of a contradiction is motivated by the fact that M exists as part of the premise and Not M as part of the conclusion, making it a likely candidate. However, at this stage, ANDS knows no way to prove M, and it therefore gives up, returning to the main goal. The same strategy that led to the subgoal branch from Subgoal 1 to Subgoal 7 also leads to the next, equally futile branch from Subgoal 1 to Subgoal 10.
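The bookkeeping at work here, backward chaining on subgoals plus a memory for subgoals that have already failed, so that a dead end such as Not N is not retried, can be sketched as follows. The structures and names are illustrative assumptions, not ANDS's actual implementation:

```python
# Illustrative sketch (not ANDS's code) of backward chaining with a
# record of failed subgoals.  A "rule" here is a pair mapping a goal
# to the subgoals that would establish it.

def prove(goal, assertions, rules, failed=None):
    """Try to prove goal backward; remember subgoals that have failed."""
    if failed is None:
        failed = set()
    if goal in assertions:
        return True
    if goal in failed:
        return False  # avoid re-trying a known dead end
    for rule_goal, subgoals in rules:
        if rule_goal == goal and all(
                prove(s, assertions, rules, failed) for s in subgoals):
            return True
    failed.add(goal)
    return False

# Toy example: not_m would follow from not_n_or_not_p, which in turn
# needs not_n.  The proof succeeds only if not_n is already asserted.
rules = [("not_m", ["not_n_or_not_p"]),
         ("not_n_or_not_p", ["not_n"])]
assert prove("not_m", {"not_n"}, rules) is True
assert prove("not_m", set(), rules) is False
```

Once `not_n` fails, it lands in the `failed` set, so any later rule that would regenerate the same subgoal is cut off immediately, mirroring the way ANDS declines to revisit Subgoals 12 and 13 below.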

ANDS's third try at proving Not M uses modus ponens (R1). The idea is that If not N or not P, not M (Assertion 3) allows one to deduce Not M, provided one can deduce Not N or not P. This is shown at Subgoal 11, and ANDS makes three attempts on it. Because of its disjunctive form, Rule R7 (Or Introduction) applies and sets up a subgoal to deduce Not N. But because this subgoal has failed before, ANDS avoids it. Similarly, Subgoal 13, Not P, has also been tried and rejected. The final possibility of proving Not N or not P is to use Assertion 4, If M, then not N or not P, in a modus ponens inference. But, alas, this requires deduction of M (Subgoal 14), which ANDS is still unable to prove.

[Figure A1 appears here. Recoverable entries: Assertion Tree: 2. If M then not (N and P). 3. If not N or not P then not M. 4. If M then not N or not P. ... 21. Not M. Subgoal Tree: 1. Not M.]

Figure A1. ANDS's memory structure at the conclusion of the proof of the argument, If M then not both N and P; if not N or not P then not M; therefore, not M. (See text of the Appendix for an analysis of this proof.)

Finally, ANDS attempts a reductio ad absurdum proof of the main goal, Not M itself. It assumes that M is true at Assertion 15 and tries to prove both M and Not M. The first of these goals is trivially fulfilled by the assumption itself, so Subgoal 16 is successful. The hard part is to prove Not M. But because M is assumed true and because If M, then not (N and P) is an assertion, Not (N and P) must be true by R1' (forward modus ponens). Similarly, the assumption M, together with Assertion 4, yields Not N or not P by the same rule. This last assertion again triggers R1', since If not N or not P, not M exists as Assertion 3. This gives the desired result, Not M, which completes the contradiction. Because assumption M led to this contradiction, it must be false, and hence the main goal Not M is true. ANDS victoriously adds this to the assertion tree at 21.
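The successful part of this proof can be written out as a Fitch-style derivation. The line numbers below are ours, introduced for readability; they do not correspond to the assertion numbers of Figure A1:

```latex
\begin{array}{lll}
1. & \text{If } M,\ \neg(N \wedge P)              & \text{Premise} \\
2. & \text{If } \neg N \vee \neg P,\ \neg M       & \text{Premise} \\
3. & \text{If } M,\ \neg N \vee \neg P            & \text{DeMorgan (R2$'$), from 1} \\
4. & \quad M                                      & \text{Supposition} \\
5. & \quad \neg(N \wedge P)                       & \text{Forward modus ponens (R1$'$), 1, 4} \\
6. & \quad \neg N \vee \neg P                     & \text{Forward modus ponens (R1$'$), 3, 4} \\
7. & \quad \neg M                                 & \text{Forward modus ponens (R1$'$), 2, 6} \\
8. & \neg M                                       & \text{Reductio (4--7): } M \text{ yields both } M \text{ and } \neg M
\end{array}
```

Lines 4 through 7 are the supposition subdomain; discharging the supposition at line 8 corresponds to ANDS's final addition of Not M to the assertion tree.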

Received December 15, 1981
Revision received June 11, 1982
