
European Journal of Social Psychology, Vol. 13, 1–44 (1983)

Bias and error in human judgment

ARIE W. KRUGLANSKI Tel Aviv University

ICEK AJZEN University of Massachusetts

Abstract

Currently prevalent views of human inference are contrasted with an integrated theory of the epistemic process. The prevailing views are characterized by the following orienting assumptions: (1) There exist reliable criteria of inferential validity based on objectively veridical or optimal modes of information processing. (2) Motivational and cognitive factors bias inferences away from these criteria and thus enhance the likelihood of judgmental error. (3) The layperson’s epistemic process is pluralistic; it consists of a diverse repertory of information-processing strategies (heuristics, schemas) selectively invoked under various circumstances. By contrast, the present analysis yields the following conclusions: (1) There exist no secure criteria of validity. (2) Psychological factors that bias inferences away from any currently accepted criteria need not enhance the likelihood of error. (3) The inference process may be considered unitary rather than pluralistic. The various strategies and biases discussed in the literature typically confound universal epistemic process with specific examples (or contents) of such processes. Empirical support for the present analysis is presented, including evidence refuting proposals that specific contents of inference are of universal applicability; evidence suggesting that people do not, because of a reliance on subnormative heuristics, underutilize normative statistical information; rather, people seem unlikely to utilize any information if it is nonsalient or (subjectively) irrelevant; and evidence demonstrating that the tendency of beliefs to persevere despite discrediting information can be heightened or lowered by introducing appropriate motivational orientations.

Order of authorship was determined at random. Requests for reprints should be sent either to Icek Ajzen, Department of Psychology, University of Massachusetts, Amherst, MA 01003, U.S.A. or to Arie W. Kruglanski, Department of Psychology, Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel.

0046-2772/83/010001-44$04.40 © 1983 by John Wiley & Sons, Ltd.

Received 12 October 1982

INTRODUCTION

A great deal of research in contemporary social psychology is devoted to the processes whereby people form and revise their beliefs about themselves and about the world in which they live. Interest in such inference processes was stimulated by the development of attribution theories concerned primarily with lay explanations of human behaviour (Heider, 1958; Jones and Davis, 1965; Kelley, 1967) and by work on judgment under uncertainty dealing largely with the intuitive prediction of events (see Slovic, Fischhoff and Lichtenstein, 1977 for a review). Early analyses of attribution likened individuals to naive scientists who, for the most part, make systematic use of available information in their attempts to explain their own or another person’s behaviour (e.g. Heider, 1958; Kelley, 1967). Similarly, early research on human judgment led to the conclusion that, by and large, people form and revise their beliefs in accordance with the normative principles of statistical models (cf. Peterson and Beach, 1967).

This view of the inference process has changed drastically in recent years. People’s judgments are no longer viewed as following rationally and impartially from the relevant information. Instead, attributions and predictions are said to be subject to systematic biases and errors. Some early indications of this revised view can be found in Heider’s (1958) discussion of factors likely to bias the otherwise sensible process of causal attribution; his qualifications were further elaborated by Jones and Davis (1965) and Kelley (1967). The strongest challenge to the view that people process information in a reasonable fashion, however, has come from the human judgment camp. Dissatisfaction with the normative models as descriptions of the inference process can perhaps be traced to the work of Tversky and Kahneman (1973, 1974; Kahneman and Tversky, 1973) who argued that, instead of using sophisticated information-processing strategies, people rely on rather simple intuitive heuristics or rules of thumb in making their judgments. Although these intuitive strategies can result in reasonable inferences, they are said to produce biases and systematic errors of judgment (see also Nisbett and Ross, 1980).

Contemporary research on bias and error in human judgment is decidedly empirical in character. It lacks a clearly articulated theory, and even the central concepts of ‘error’ and ‘bias’ are not explicitly defined. Nor is it easy to find a clear characterization of the objective, or unbiased, inference process from which lay judgments are presumed to deviate. Nevertheless, it is possible to discern in this research certain assumptions and orientations that lend it direction and structure. In the present paper we attempt to reconstruct these major orienting assumptions, focussing in particular on postulated biases and errors in human judgment. We contrast this view with an alternative perspective on lay epistemology, some aspects of which were recently discussed by Kruglanski and his colleagues (Kruglanski, 1979, 1980; Kruglanski, Hamel, Maides and Schwartz, 1978; Kruglanski and Jaffe, in press). We consider the implications of our lay epistemic framework for several key issues in research on biases and errors and we examine experimental evidence pertinent to these implications.

RESEARCH ON INFERENTIAL BIASES AND ERRORS

Many studies of inferential biases and errors share the following fundamental assumptions. (1) There exist known criteria of valid inference. (2) Psychological factors may bias judgments away from these criteria of validity and enhance the probability of error. (3) The process of lay inference is basically pluralistic; it


subsumes a wide variety of disparate strategies variously labelled schemas, attributional principles, heuristics, scripts, intuitive theories, etc. On the following pages, we examine these assumptions in some detail.

Criteria of valid inference

Normative models

Leading analyses of bias and error in the inference process imply the existence of recognized methods of judgment whose adoption would lessen the incidence of inaccuracies. These methods are sometimes referred to as ‘normative’ and are frequently identified with statistical or scientific modes of inference. For example, Kelley (1973) contrasted his analysis of variance model of attribution, which he viewed as optimal or idealized, with less perfect attributional principles and schemata (see Kelley, 1971, 1972). The latter, while capable of yielding reasonable estimates of reality, are assumed occasionally to result in bias and error.
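To make the covariation logic concrete, the following sketch renders the ‘ANOVA cube’ in the high/low form common in later expositions (cf. McArthur, 1976). The function, its names, and the mapping of information patterns to causes are our illustration of that reading, not Kelley’s formal model.

```python
# Minimal sketch of the covariation logic often read into Kelley's model.
# The high/low-to-attribution mapping follows the common textbook rendering
# (cf. McArthur, 1976); the function and names are illustrative only.

def covariation_attribution(consensus: bool, distinctiveness: bool,
                            consistency: bool) -> str:
    """Return the attribution licensed by a high (True) / low (False) pattern."""
    if consensus and distinctiveness and consistency:
        return "entity (stimulus)"       # everyone reacts, only to this stimulus, every time
    if not consensus and not distinctiveness and consistency:
        return "person (disposition)"    # only this actor, to many stimuli, every time
    if not consistency:
        return "circumstances"           # the reaction does not recur
    return "person-entity conjunction"   # mixed patterns get compound causes (a simplification)

# Only Ann laughs at the comedian, she laughs at every comedian, and she
# always laughs at this one: the schema licenses a dispositional attribution.
print(covariation_attribution(consensus=False, distinctiveness=False, consistency=True))
```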

Similarly, Kahneman and Tversky (1973; Tversky and Kahneman, 1974) implied that statistical inference models (such as Bayes’ theorem) are normative and that judgments which deviate systematically from such normative models are erroneous and indicative of bias in the underlying inference process. Consistent with this approach, Ross and Lepper (1980) viewed the scientific method in general as superior to, and closer to normative than, lay modes of inference and suggested that one way in which the tendency to err might be mitigated is instruction of lay persons in the canons of scientific (e.g. experimental) methodology.
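For reference, the normative benchmark most often invoked in this literature can be written out; Bayes’ theorem prescribes how confidence in a hypothesis H should be revised in the light of a datum D:

$$ P(H \mid D) \;=\; \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)} $$

On the prevailing view, judgments that deviate systematically from this updating rule are classified as biased.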

Direct verification

The existence of normative models is assumed to be restricted to certain domains of judgment (e.g. probabilistic estimates) while other domains (e.g. causal attributions) are assumed to lack, at least for the time being, equally normative formulations (cf. Fischhoff, 1976; Nisbett and Ross, 1980). In the absence of normative models, inferential errors are sometimes identified on the basis of what might be called ‘direct verification’, that is, by comparing intuitive judgments to such generally trusted procedures as ‘eye-balling’, counting, and so on. For example, direct verification is sometimes employed in tests of biases attributed to the availability heuristic (Tversky and Kahneman, 1974). The respondents’ quantitative estimates (such as their estimates of the proportions of male or female names on a list) are compared to the quantities as counted, and in this sense directly verified, by the experimenter.

The investigator’s perspective

For many judgments, however, neither normative models nor direct verification seem to be available. Here, the investigator’s own judgment as to what would constitute a valid inference is frequently used as a standard of veridicality, and deviations from this standard are considered erroneous. For example, it has been suggested that an actor’s behaviour should not be used to infer dispositional attributions when it is performed under strong situational constraints (Jones, 1979; Jones and Harris, 1967). A finding that the actor’s behaviour does affect attributions under these conditions is taken as evidence of error occasioned by a bias toward placing disproportionate weight on the causal effects of dispositional relative to situational factors. Similarly, Nisbett and Wilson (1977a) argued that people are often unaware of factors that influence their own judgments and actions. To demonstrate the errors of inference resulting from this state of affairs, Nisbett and Wilson showed that research participants who are asked to explain their own behaviour fail to mention factors which, in the investigators’ judgment, had a demonstrable effect on the behaviour in question.

In sum, research on biases and errors typically assumes the existence of secure criteria of inferential validity in reference to which lay judgments must be assessed. Conformity of a judgment to a criterion is assumed to provide evidence for its accuracy whereas deviation from a criterion is considered evidence for its erroneous nature. We have identified three separate validity criteria commonly invoked by researchers in the area: normative models, often based on the laws of probability or scientific canon; direct verification, whereby the correct judgment is assumed to be immediately apparent to any reasonable person; and the investigator’s own perspective regarding what constitutes a realistic judgment under a given set of circumstances. In a later section we will have opportunity to question the adequacy of these criteria. First, however, we must consider another major assumption of research on the shortcomings of human inference, namely, the assumption that factors which cause judgment to deviate from a recognized criterion of validity are sources of bias and error.


Bias in human inference

The various biases of human judgment that have been described in the psychological literature are often classified as either motivational or cognitive in origin (cf. Ross, 1977). Motivational biases are characterized by a tendency to form and hold beliefs that serve the individual’s needs and desires. Individuals are said to avoid drawing inferences they would find distasteful, and to prefer inferences that are pleasing or need-congruent. Being dependent on the momentary salience of different needs, such motivational influences could presumably yield judgmental biases and errors.

Even in the absence of motivated distortions, human judgments are assumed subject to biases of a more cognitive nature. Unlike motivational biases that are presumed to constitute largely irrational tendencies, cognitive biases are said to originate in the limitations of otherwise reasonable information-processing strategies. It has been proposed that individuals possess a considerable repertory of such suboptimal strategies which they bring to the task of predicting and explaining events (see Nisbett and Ross, 1980). Generally speaking, these strategies are assumed to direct people’s attention to some types of information and hypotheses and to lead to an underestimation or disregard of other information and hypotheses which, although relevant to the judgment in question, have no ready place in the information-processing strategy that is being employed.

Motivational biases

It has sometimes been argued that people have a general tendency to engage in ‘wishful thinking’, that is, to judge a future event as likely to the extent that they perceive the event to be desirable (see McGuire, 1960). Proposed motivational biases are tied to specific needs or desires people are assumed to bring to the situation. In principle, therefore, it is possible to postulate as many motivational biases as there are human needs that can be meaningfully distinguished from one another. Investigators of the attribution process have in fact considered a variety of needs that might provide the foundation for bias, but it appears that for the most part these needs fall into two broad categories: ego enhancement or defence and the need for effective control.

Ego enhancement and defence

Many investigators have argued that individuals are motivated to enhance and protect their egos. As a consequence of this presumed need, people are said to readily accept credit for success but to be reluctant to accept blame for failure. Research has thus attempted to demonstrate that individuals tend to attribute their own successes to internal factors and to blame external factors for their failures (e.g. Beckman, 1970; Johnson, Feigenbaum and Weiby, 1964). Some investigators have argued that such tendencies can be adequately accounted for in terms of informational as opposed to motivational factors. For example, assuming that people generally expect to have the ability and motivation to succeed, ascription of success to internal factors and failure to external factors may simply reflect their understandable tendency to believe in propositions that are consistent with expectations (cf. Ajzen and Fishbein, 1975; Feather, 1969; Miller and Ross, 1975). Nevertheless, several studies have attempted to control for possible informational differences (e.g. Miller, 1976; Ross and Sicoly, 1979) and have again reported evidence for self-serving biases. Reviews of this research, and evaluations of the extent to which it supports the operation of ego-protective and ego-enhancing biases, can be found in Miller and Ross (1975), Bradley (1978), Zuckerman (1979), Kelley and Michela (1980), Ross and Fletcher (in press), and Tetlock and Levi (1982).

Motivation for self-enhancement and self-protection could bias human inferences in areas other than explanation of success and failure. To give just one example, it could predispose people to attribute positive personality traits to themselves, their friends, or their reference groups, and to avoid attributing negative characteristics to these targets.

The idea of ‘egocentric attribution’ (Heider, 1958) or ‘false consensus’ (Ross, 1977) can be viewed as another example of an assumed bias toward self-enhancement or defence. According to Ross (1977, p. 188), ‘... laymen tend to perceive a “false consensus”, that is, to see their own behaviour choices and judgments as relatively common and appropriate to existing circumstances while viewing alternative responses as uncommon, deviant, and inappropriate’. As a result, observers are said to judge an actor’s behaviour that differs from their own as deviant and therefore as revealing of the actor’s stable dispositions. Although Ross did not provide a clear rationale for the false consensus bias, it could be interpreted as a mechanism designed to maintain a favourable image of the self as a wise and judicious person whose judgments and behaviours are likely to be met by wide approval.

Effective control

A different motivating force is assumed to underlie causal attributions in Kelley’s (1971) analysis. Kelley postulated that ‘attribution processes are to be understood, not only as a means of providing the individual with a veridical view of his world, but also as a means of encouraging and maintaining his effective exercise of control in that world’ (p. 22). According to Kelley, the desire to exercise effective control may bias individuals to attribute events to controllable factors rather than to factors over which they have no control. In many cases of human behaviour, this motive could lead to a preference for explanation in terms of internal factors (see also Berscheid, Graziano, Monson and Dermer, 1976; Miller, Norman and Wright, 1978).

Hedonic relevance. Consistent with the proposed need to exercise effective control is the notion of ‘hedonic relevance’. An actor’s behaviour may sometimes have important rewarding or punishing implications for the observer. According to Jones and DeCharms (1957) and Jones and Davis (1965), such hedonic relevance increases attribution of the actor’s behaviour to internal factors. It may be argued that any bias of this kind is motivated by the observer’s desire to gain potential control over the positive or negative outcomes of the actor’s behaviour.

Belief in a just world. Possibly related to effective control is the need, proposed by Lerner (1970), to believe that the world is a fair and just place in which people get what they deserve and deserve what they get (Lerner, Miller and Holmes, 1976). In such a world one can enhance control over one’s outcomes by acting in a worthwhile manner deserving of positive reward. Research concerning the just-world hypothesis has often attempted to demonstrate that people tend to be blamed for their own misfortunes and to receive credit for positive outcomes, and that a ‘deserving person’ is expected to be rewarded whereas an ‘undeserving person’ is expected to be punished (e.g. Lerner, 1965; Lerner and Mathews, 1967).

Avoidability of physical harm. Walster (1966) suggested a bias motivated by the need to believe in the avoidability of accidental misfortunes and thus in the ability to control one’s own fate. According to Walster, if individuals concluded that an accident they observed was caused by circumstances beyond the actor’s control they would have to infer that they, too, might suffer a similar fate. To avoid such an undesirable inference, observers are said to prefer blaming the person who caused the accident. The need to assign responsibility for an accident to the person who caused it, rather than to circumstances, is assumed to increase with the severity of the consequences. Most research, therefore, has examined the effects of outcome severity on attribution of responsibility for an accident (see Fishbein and Ajzen, 1973 for a review and critical analysis of this research).

In conclusion, various needs are assumed to influence intuitive prediction and explanation of events. Although by no means exhaustive, the above discussion illustrates the range of motivating forces that have been considered. In addition, many related ideas have appeared over the years in the psychological literature. For example, ‘rationalization’, a central defence mechanism in psychoanalytic theory, is concerned with cognitive work aimed at upholding desirable beliefs. The ‘new look’ in perception research of the 1940s and 1950s emphasized ‘perceptual defence’ and ‘perceptual vigilance’; stimuli with negative affect were assumed to have a relatively high threshold of perception while positive stimuli were assumed to have a relatively low threshold of perception (Bruner and Postman, 1947; McGinnies, 1949). A somewhat related idea is expressed in Festinger’s (1957) theory of cognitive dissonance. Once committed to a position or course of action, people are assumed to be motivated to search for supportive or consonant evidence and to avoid contradictory or dissonant evidence (for a review of research, see Freedman and Sears, 1965). Clearly, the general notion of need-based cognitions and perceptions is deeply ingrained in psychological theorizing.

Cognitive biases

Cognitive biases are assumed to arise because of people’s limited ability to attend to and properly process all the information that is potentially available to them. As in the case of motivational biases, the idea of a cognitive bias implies systematic errors, that is, judgments that deviate systematically from some accepted norm or standard. Three broad classes of factors have been postulated as causing systematic errors in attribution and prediction: salience and availability of information, preconceived ideas or theories about people and events, and anchoring and perseverance phenomena.

Salience and availability

When drawing inferences, people by necessity rely only on information that is salient or available to them at the time they make their judgments. It has been suggested that, for a variety of reasons, this information will often be biased in such a way as to result in systematic errors of judgment.

Sampling bias. One obvious possibility is that the sample of information available to a person is not representative of the population as a whole. For example, an actor’s behaviour on a given occasion may be rather atypical of his or her behaviour in general. Unaware of this sampling bias, observers may derive erroneous conclusions concerning personal dispositions or other factors responsible for the actor’s behaviour. Nisbett and Ross (1980) have argued that individuals are remarkably insensitive to sampling bias. They cited research which suggests that even when the biased nature of the data is quite obvious, people base their judgments on these data, treating them as if they were representative of the population (Hamill, Wilson and Nisbett, Note 1; Nisbett and Borgida, 1975; Ross, Amabile and Steinmetz, 1977).

Selective attention. A second potential source of error is related to selectivity in attention and perception. Although all relevant information may be accessible to observers, they may focus selectively on those features of a situation that are perceptually salient (Arkin and Duval, 1975; Duval and Wicklund, 1972; McArthur and Post, 1977; Taylor and Fiske, 1975). When such selective attention is biased in a given direction, it could lead to systematic errors of judgment.

Based on Heider’s (1958, p. 54) assertion that ‘behaviour... has such salient properties it tends to engulf the total field’, many investigators have argued that in trying to explain an actor’s behaviour, observers tend to overestimate the importance of dispositional factors (internal to the actor) relative to environmental factors (e.g. Jones, 1979; Jones and Harris, 1967; Miller, 1976; Snyder and Frankel, 1976). In fact, this bias is assumed to be so pervasive that it has been called the ‘fundamental attribution error’ (Ross, 1977).

In a widely cited paper, Jones and Nisbett (1971) proposed that observers of an actor’s behaviour are more susceptible to this bias than are actors trying to account for their own behaviour. Whereas observers are viewed as inclined to attribute the behaviour of an actor to such dispositional properties as the actor’s attitudes, abilities, or personality traits, the actor is presumed to have a tendency to explain his or her own behaviour by reference to environmental or situational factors. One reason for this expectation is that, in contrast to observers, actors are assumed to focus their attention on the environment rather than on their own behaviours.

Selective recall. According to Tversky and Kahneman (1974), when people assess the frequency of a class or the probability of an event they rely on an availability heuristic. Their judgments are said to be influenced by the ease with which instances of the class can be brought to mind. Although sampling bias and selective attention are also factors that may influence availability of information, Tversky and Kahneman focused primarily on biases due to selective retrieval from memory. Since recall of instances may be determined by factors that are largely unrelated to the actual frequency or probability of the event in question, reliance on the availability heuristic is said to produce systematic errors of judgment.
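The logic of the claimed bias can be illustrated with a small, purely hypothetical computation: if the probability that an instance is recalled reflects its memorability rather than its true frequency, then estimates based on ease of recall are systematically distorted.

```python
# Illustrative sketch (all numbers hypothetical): when recall is driven by
# memorability rather than frequency, availability-based frequency
# estimates are systematically biased.
true_freq = {"vivid, rare events": 0.01, "mundane, common events": 0.99}
memorability = {"vivid, rare events": 0.9, "mundane, common events": 0.1}

recalled = {k: true_freq[k] * memorability[k] for k in true_freq}
total = sum(recalled.values())
estimates = {k: v / total for k, v in recalled.items()}  # judged frequency = share of recalled instances

for k in true_freq:
    print(f"{k}: true {true_freq[k]:.2f}, availability-based estimate {estimates[k]:.2f}")
# The rare but memorable class dominates recall and is judged roughly eight
# times more frequent than it actually is.
```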


Preconceptions

Individuals approach inferential judgments with a variety of preconceived ideas or theories about people and events. Of course, such preconceptions need not produce systematically biased judgments; inferences based on a priori assumptions may in fact be quite accurate. However, it has been argued that reliance on intuitive preconceptions can lead to systematic bias. Generally speaking, it is often assumed that intuitive a priori conceptions guide the use of information in the prediction and explanation of events; they are said to sensitize people to certain hypotheses and items of information while predisposing them to disregard relevant information if it has no clear place in their intuitive ideas about the event or behaviour under consideration (Ajzen, 1977a; Bar-Hillel, 1980; Kelley, 1972; Nisbett and Wilson, 1977a; Tversky and Kahneman, 1980).

Presumed covariation. That our intuitive understanding of relationships among variables can influence our judgments has been known for some time. Beginning with the experiments of Asch (1946), work on impression formation has repeatedly demonstrated that respondents will draw far-ranging inferences about a stranger’s personality on the basis of a few traits attributed to the person. These judgments are said to follow perceived covariation among traits, or ‘implicit theories of personality’ (see Schneider, 1973 for a review of research in this area).

Kelley (1972) described two schemas that seem to represent preconceived ideas concerning the covariation of events. In the pairing schema, an individual assumes that relations among people are symmetrical such that if person A relates to person B in a given way (e.g. A dislikes or hates B), B relates to A reciprocally (i.e. B dislikes or hates A in return). The grouping schema involves transitivity besides symmetry, such that if person A relates in a given way to person B, and B to C (e.g. if A likes B, and B likes C) it is inferred that A stands in the same relation to C (transitivity) and B and C to A (symmetry).
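These schemas amount to closure rules over an interpersonal relation. The sketch below is our formalization, not Kelley’s: symmetry alone yields the pairing schema, symmetry plus transitivity the grouping schema.

```python
# Kelley's pairing and grouping schemas rendered as closure rules over a
# 'likes' relation (our formalization; names illustrative).
def apply_schema(pairs: set, transitive: bool = False) -> set:
    inferred = set(pairs)
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for (a, b) in list(inferred):
            if (b, a) not in inferred:   # symmetry: A likes B -> B likes A
                inferred.add((b, a))
                changed = True
            if transitive:
                for (c, d) in list(inferred):
                    # transitivity: A likes B and B likes D -> A likes D
                    if c == b and a != d and (a, d) not in inferred:
                        inferred.add((a, d))
                        changed = True
    return inferred

observed = {("A", "B"), ("B", "C")}
print(sorted(apply_schema(observed)))                   # pairing: adds (B,A) and (C,B)
print(sorted(apply_schema(observed, transitive=True)))  # grouping: the full A-B-C clique
```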

Preconceived ideas concerning covariation among personality traits or behaviours are said to be capable of producing systematic biases and errors. A case in point is the ‘illusory correlation’ reported by Chapman and Chapman (1967). Participants in experiments were found to overestimate the co-occurrence of informational items that intuitively appeared to go together and failed to detect relationships in the data that did not conform to their intuitive theories (see also Berman and Kenny, 1976; Shweder, 1975).

Representativeness. Another preconceived idea assumed to guide lay judgments is contained in the ‘representativeness heuristic’ proposed by Kahneman and Tversky (1973). People are said to appraise the likelihood that object A belongs to class B by the degree to which, in its essential features, A is representative of B; that is, by the degree to which A resembles B. For example, ‘... the probability that Steve is a librarian is assessed by the degree to which he is representative of, or similar to, the stereotype of a librarian...’ (Tversky and Kahneman, 1974, p. 1124). According to Tversky and Kahneman, ‘This approach to the judgment of probability leads to serious errors, because similarity or representativeness is not influenced by several factors that should affect judgments of probability...’ (ibid.). A case in point is the prior probability or base rate of the judged event. For example, a person is more likely to be an engineer if he or she is a member of a group containing 70 per cent engineers than one containing only 30 per cent engineers. This would be true even if we had a brief personality description of the individual which is rather unrepresentative of the class of engineers. However, since the population base rate is not represented in any given instance, predictions of the person’s profession which rely on application of the representativeness heuristic are expected to take insufficient account of the prior probabilities (Hammerton, 1973; Kahneman and Tversky, 1973; Lyon and Slovic, 1976). Similar ideas have been expressed with respect to the impact of consensus information on causal attribution (Feldman, Higgins, Karlovac and Ruble, 1976; McArthur, 1976; Nisbett and Borgida, 1975). A comprehensive review of empirical research concerning the effects of base-rate information can be found in Borgida and Brekke (in press).
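The base-rate argument can be made concrete with a short worked example. The 70 and 30 per cent figures come from the text above; the likelihoods assigned to the personality description are hypothetical.

```python
# Worked version of the engineer example. Base rates (0.70 vs 0.30) are from
# the text; the description likelihoods are hypothetical. Under Bayes'
# theorem a weakly diagnostic description cannot override a strong base rate.
def posterior_engineer(base_rate: float,
                       p_desc_given_engineer: float = 0.5,
                       p_desc_given_other: float = 0.5) -> float:
    p_desc = (p_desc_given_engineer * base_rate
              + p_desc_given_other * (1 - base_rate))
    return p_desc_given_engineer * base_rate / p_desc

# A completely undiagnostic description leaves the base rate intact:
print(posterior_engineer(0.70))  # 0.70
print(posterior_engineer(0.30))  # 0.30
# Respondents relying on representativeness, however, tend to give similar
# answers in the two groups, neglecting the prior probabilities.
```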

As another consequence of relying on the representativeness heuristic, people are said to disregard the accuracy of information or its reliability. That is, respondents are assumed to consider the diagnosticity of information with respect to a given proposition, but to neglect the extent to which the evidence is reliable or credible. As a result, judgments are said to be insufficiently regressive (Kahneman and Tversky, 1973), to disregard sample size (Tversky and Kahneman, 1971), and to ignore the fact that information retrieved from memory is less than perfectly reliable (Trope, 1978).
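The sense in which predictions should normatively be ‘regressive’ can be stated compactly. With predictor and criterion in standardized form, the least-squares prediction is

$$ \hat{z}_Y = r_{XY}\, z_X , $$

so the less reliable or less diagnostic the cue (the smaller $r_{XY}$), the closer the prediction should sit to the mean. Matching the prediction directly to the impression conveyed by the cue, as the representativeness heuristic implies, therefore yields insufficiently regressive judgments.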

Causal theories. Many investigators have emphasized the importance of causal theories in the formation and revision of beliefs about various phenomena (e.g. Ajzen, 1977a; Kelley, 1972; Tversky and Kahneman, 1980). These intuitive theories represent a person’s understanding of the factors that influence a given behaviour or predispose the occurrence of certain events.

Kelley’s (1972, 1973) analysis of causal schemas describes a few prototypical theories of this kind. For instance, when using the multiple necessary cause schema, people assume that an effect is produced by several separate factors (causes) operating in conjunction so that given ‘... the information that the effect is present the schema for multiple necessary causes permits the inference that both causes were present...’ (Kelley, 1972, p. 6). In the person-entity schema a person is said to entertain the possibilities (hypotheses) that an effect was caused by the actor, an external entity, or possibly a combination of the two.

In contrast to Tversky and Kahneman’s analyses which concentrate primarily on the biasing potential of intuitive heuristics, Kelley’s portrayal of causal schemas emphasizes their relative reasonableness: ‘... These conceptions answer the attributor’s need for economical and fast attributional analysis by providing a framework within which bits and pieces of relevant information can be fitted in order to draw reasonably good inferences’ (Kelley, 1972, p. 2). The presumed reason for employing such imperfect devices is the press of time and the competition of other interests that may make it impractical to use more thorough inferential procedures, such as the analysis of variance cube (Kelley, 1967).

As in the case of preconceived covariations, reliance on intuitive theories of cause and effect is assumed to predispose systematic errors of judgment. For example, Nisbett and Wilson (1977a) argued that, in reporting the factors that influence their own behaviour, people usually rely on their intuitive understanding of the factors that should have an effect. According to the authors, people are incapable of accurately reporting many of the factors that have a causal effect on their actions because their a priori theories fail to direct their attention to these factors. Research has attempted to demonstrate discrepancies between the investigator’s understanding of the factors that influence the respondents’ behaviour and the explanations provided by the respondents themselves (e.g. Nisbett and Bellows, 1977; Nisbett and Wilson, 1977b). Other research suggesting that intuitive theories of cause and effect tend to guide social judgments can be found in Ajzen (1977a), Bar-Hillel (1980), and Tversky and Kahneman (1980).


Anchoring and perseverance

A final source of cognitive bias is said to reside in the process whereby people revise their beliefs in the course of arriving at a final judgment. Salience, availability, or preconceived ideas may guide selection of initial hypotheses. Once formed, the initial hypothesis is assumed to serve as an anchor (Tversky and Kahneman, 1974) or cognitive set (Ajzen, Dalto and Blyth, 1979) that guides interpretation of new information (Higgins, Rhodes and Jones, 1977; Snyder and Cantor, 1979; Srull and Wyer, 1979).

Some investigators have (again) suggested that these tendencies are likely to result in systematic errors of judgment. According to Tversky and Kahneman (1974), when an initial hypothesis serves as an anchor it provides a starting point that is adjusted to reach the final estimate. This process may result in error because ‘... adjustments are typically insufficient... (and)... different starting points yield different estimates... biased toward the initial values...’ (p. 1128). Suggestions regarding the perseverance of initial hypotheses and a reluctance to modify them despite undermining evidence have also been made by Ross (1977) and his associates (Nisbett and Ross, 1980; Ross, Lepper and Hubbard, 1975; Ross and Lepper, 1980).
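The claim of insufficient adjustment can be expressed as a one-line model (ours, for illustration): if only a fraction of the distance between the anchor and the warranted estimate is traversed, final estimates from different starting points remain biased toward those starting points.

```python
# Illustrative sketch (our construction, not Tversky and Kahneman's model):
# final estimate = anchor plus a fraction of the required adjustment.
def anchor_and_adjust(anchor: float, target: float,
                      adjustment_rate: float = 0.5) -> float:
    return anchor + adjustment_rate * (target - anchor)  # insufficient when rate < 1

target = 100.0  # the estimate that full adjustment would reach
for anchor in (10.0, 50.0, 200.0):
    print(f"anchor {anchor:5.0f} -> final estimate {anchor_and_adjust(anchor, target):.1f}")
# anchors 10, 50 and 200 yield 55.0, 75.0 and 150.0: different starting
# points produce different estimates, each biased toward its anchor.
```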

Epistemic pluralism

Our discussion has revealed a long list of motivational and cognitive factors that are assumed to have a biasing influence on human judgment. They are said to deflect inferences away from some accepted criterion of validity, resulting in serious errors. Our review also implied another orienting assumption of research on biases and errors which may be called the assumption of epistemic pluralism. People are assumed to possess a large repertory of judgmental strategies and to employ them selectively in different problem situations. For instance, the representativeness heuristic is assumed to be employed when the question of interest is whether some event belongs to a given class. In contrast, the availability heuristic is said to be invoked when a person judges the frequency of a given class or the probability of a given event. Kelley’s (1972) schemas are assumed to take effect when people lack the information concerning multiple cases or multiple occasions indispensable for a full-fledged ANOVA. In a similar fashion, a variety of motivational factors has been assumed capable of biasing naive inferences in different situations. The need for ego-enhancement and defence is said to affect causal attributions in relation to success and failure; the need to believe in a just world is assumed to influence interpretations of rewards and punishments; and the need to believe in the avoidability of physical harm is said to bias judgments of responsibility for accidental misfortunes. This pluralistic approach to human judgment implies a research strategy that would attempt to identify the diverse cognitive processes and motivations in the lay person’s repertory and the particular situations and problems to which they apply. Ideally, this research strategy would eventually permit prediction of information-processing behaviour from knowledge of the situation and problem under consideration.

Recapitulation

In the preceding sections we attempted to characterize the prevailing approach to human inference, in particular as it concerns judgmental errors and biases. There seems to exist an implicit commitment to the idea of recognized criteria of validity against which it is possible to compare intuitive judgments. In some cases, these criteria are equated with formally stated, normative models of inference; alternatively, direct observation provides the reference standard; and in yet other instances, the experimenter’s own perspective provides the criterion.

Motivational and cognitive factors are assumed to bias inferences away from these standards, thus producing systematic errors of judgment. Motivational biases have to do with the tendency to engage in ‘wishful thinking’; to bring one’s judgments in line with one’s needs and desires. They are said to reflect deviations from rationality: logic is supposedly compromised to accommodate self-serving motives such as the need for self-enhancement or effective control. By way of contrast, distortions produced by cognitive biases serve no hidden needs or desires. Instead, they are said to be inherent in the numerous judgmental strategies, heuristics, or schemas people are assumed to employ. These strategies are held to be suboptimal and inferior to the more ‘objective’ modes of information processing embodied in the accepted standards of validity, such as the normative inference models. On the whole, intuitive strategies are assumed to enhance the likelihood of error, although it is recognized that they can often result in reasonable judgments.

The long list of motivational and cognitive biases we have reviewed suggests a pluralistic conception of human inference. People’s intuitive judgments are assumed to rely on a repertory of information-processing modes employed selectively in different circumstances. This suggests the need to construct a comprehensive strategies-by-situations matrix that would allow the prediction of information-processing behaviour in various circumstances.

In the present paper we take issue with the foregoing view of the inference process. In contrast to the prevailing epistemic pluralism, we believe that human judgments can be described as relying on a single, unitary process, and that the various strategies and heuristics mentioned in the literature are best viewed as specific instances of that process. On the following pages we provide an outline of a general theory of lay epistemology. Based on this theory, we examine the nature of the different cognitive heuristics and the role of bias and error in the inference process.

A THEORY OF LAY EPISTEMOLOGY

Two dimensions of knowledge: content and confidence

All knowledge may be characterized in terms of two aspects: its content and the confidence with which it is held. The contents of knowledge span a wide diversity of possible topics. One may ‘know’ that a given wine tastes good, that it is daytime now, that two plus two is four, or that the Earth is round. All these are specific instances of knowledge content, and our theory of lay epistemology assumes that they are arrived at via the same epistemic process.

Some knowledge contents are held with great confidence, others with less assurance. Strongly accepted propositions are often referred to as ‘facts’, while propositions in which we have less confidence are usually termed ‘hypotheses’. Thus, the difference between facts and hypotheses is assumed to be one of subjective confidence, rather than of objective veridicality. Note that we are not denying the distinction between ‘facts’ and ‘hypotheses’, only the implication that facts are objectively certain and cannot be falsified. The thesis that all human knowledge, including factual knowledge, is perennially conjectural and uncertain is fundamental to the nonjustificationist philosophy of science, propounded by such authors as Karl Popper (e.g. 1973), Thomas Kuhn (1962, 1970), Imre Lakatos (1968), and Paul Feyerabend (1976)¹. This thesis should not be considered a mere philosophical abstraction devoid of empirical implications. For instance, propositions or ideas that at one time are assumed to be beyond doubt may be discredited and abandoned. The history of Western science provides rich support for this implication in the form of numerous recorded instances of ‘laws’ (e.g. those of Newtonian mechanics) that for a time were considered proven beyond the shadow of a doubt, yet were subsequently falsified and replaced by apparently more believable alternatives (cf. Popper, 1966, 1973; Kuhn, 1962). Not only scientific laws and theories, but also hard ‘perceptual facts’ may sometimes abdicate their credibility. This is demonstrated by countless magicians the world over to the surprised delight of their audiences.
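The two dimensions can be rendered as a minimal data structure; the sketch below is our illustration of the claim, not a construct from the theory itself.

```python
# Minimal rendering (illustrative) of the two dimensions of knowledge: every
# cognition has a content and a confidence, and a 'fact' differs from a
# 'hypothesis' only in the confidence attached to it.
from dataclasses import dataclass

@dataclass
class Cognition:
    content: str       # what is believed
    confidence: float  # subjective confidence, here scaled to [0, 1]

    def is_fact(self, threshold: float = 0.95) -> bool:
        """'Fact' is merely a label for a strongly held proposition."""
        return self.confidence >= threshold

earth_is_round = Cognition("the Earth is round", 0.99)
wine_tastes_good = Cognition("this wine tastes good", 0.60)
print(earth_is_round.is_fact(), wine_tastes_good.is_fact())  # True False
```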

The epistemic sequence: cognition-generation and validation

An epistemic sequence is a series of cognitive operations that the individual performs en route to the experience of knowing. The epistemic sequence is initiated by an epistemic purpose: an individual’s interest in a given item of information or the resolution of a given problem. Our conceptualization of knowledge in terms of content and confidence suggests two phases of the epistemic sequence. In the first phase, the contents of knowledge, the specific cognitions or hypotheses, are generated. The second phase is concerned with validation. Here, the individual assigns a certain degree of confidence to a given hypothesis or set of hypotheses.

¹For a recent review of the nonjustificationist arguments with particular implications for psychology see Weimer (1976).

Cognition generation. Little is known about the ways in which cognitions are generated. Generally speaking, this process is related to the stream of consciousness (James, 1890), depending as it may on attentional shifts, the salience of ambient stimuli (Taylor and Fiske, 1975, 1978; McArthur, 1981; Rumelhart and Ortony, 1977), and the momentary mental availability of various ideas and beliefs (Tversky and Kahneman, 1974). Below we speculate about conditions that may motivate the generation of hypotheses. These ideas may help us understand ‘why’ hypothesis generation occurred in a given instance, but as to the ‘how’ of its occurrence we are still largely in the dark. (For an excellent historical survey of thinking about the ‘how’ of hypothesis generation see Campbell, 1974.)

Cognition validation. The validation of hypotheses is accomplished via deductive logic: individuals are assumed to deduce their inferences from premises in which they already believe. In other words, a person comes to have confidence in a given inference if it is logically consistent with (or deducible from) other subjectively credible cognitions². For example, a person who sees that rain drops are falling and hears the distinctive sound they make when hitting the window panes is likely to infer that it is raining. The deductive nature of reasoning in this instance becomes apparent when we assume that the person subscribes to the following premises: (1) Only when it is raining can we see rain drops falling and hear their sounds on window panes; and (2) exactly such a sight-sound combination is presently in evidence. Following the rules of deductive logic, these two premises yield the conclusion that it is indeed raining.
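A minimal sketch of this validation process, under the simplifying assumption (ours) that ‘if-then’ linkages can be chained by modus ponens over the beliefs a person currently accepts:

```python
# Validation by deduction: a conclusion is accepted when it follows, by
# modus ponens, from premises already held. Rules are 'if-then' linkages;
# the rain example from the text is encoded as a single linkage.
def validate(beliefs: set, rules: list) -> set:
    accepted = set(beliefs)
    changed = True
    while changed:                      # chain rules to a fixed point
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= accepted and conclusion not in accepted:
                accepted.add(conclusion)
                changed = True
    return accepted

beliefs = {"drops seen falling", "drops heard on window panes"}
rules = [(frozenset({"drops seen falling", "drops heard on window panes"}),
          "it is raining")]
print("it is raining" in validate(beliefs, rules))  # True
```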

Reason for doubt: logical inconsistency. In the same way that the experience of knowledge regarding a given proposition depends on its logical consistency with (or deducibility from) one’s premises, the absence of knowledge (or doubt) will arise when logical inconsistency (contradiction) among one’s cognitions is encountered³.

²This is not to gloss over the fact that deduction is generally considered merely one among the three primary modes of inference, the other two being induction (which Wundt (1862) considered the cornerstone of all inference), and analogy (cf. Graumann and Sommer, in press). However, both analogy and induction can be shown to constitute special cases of deduction. In analogy one infers that certain similarities between particulars imply further similarities because the particulars are assumed to represent a more general class from which these further similarities can be deduced. For instance, one might reason by analogy that ‘computers are problem-solving systems that proceed according to recursive rules’, ‘humans are problem-solving systems’, therefore, ‘humans proceed according to recursive rules’. This inference must have been mediated by the implicit assumption that ‘all problem-solving systems proceed according to recursive rules’, from which it follows by simple deduction that humans do insofar as they are problem-solving systems.

In inductive reasoning the very belief in induction is the major premise of the syllogism. For example, one might believe that ‘if very many x’s are A’s then all x’s are A’s’. Furthermore one might note as a minor premise that the antecedent of the above conditional is instantiated and ‘many x’s are noted to be A’s’. From these two premises one might deduce the conclusion that all x’s are A’s. (For a similar deductive interpretation of statistical induction see Turner, 1965, pp. 473–477.) The grand philosophical debate concerning the justifiability of induction has centred around the question of whether such major premises as given above can be reasonably entertained. Popper (1959, 1966, 1973), for example, took a strong position that they cannot, whereas his inductivist adversaries (e.g. Carnap, 1971; Hintikka, 1968) held that they can under some conditions.

³A special case of this would be if the person had no cognitions or ideas on the topic in question. In such circumstances all possibilities would seem equi-probable or equi-believable to the individual, including contradictory possibilities.


Consider a person who is told, by one informant, that it is 5 p.m. and, by another, that it is 6 p.m. The person may be truly confused and remain confused until the inconsistency is resolved. The resolution of this inconsistency can be accomplished if one of the contradictory cognitions is denied. For example, if it turned out that the first informant’s watch had stopped, the datum he had provided could be discounted. Denial attempts are likely to be directed at the contradictory cognition that is held with relatively less confidence. For instance, the ‘hypothesis’ that it is sunny, based on last evening’s weather forecast, will be abandoned immediately when confronted with the visible ‘fact’ of a torrential downpour.
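The denial rule just described reduces to a comparison of confidence values; the following sketch (ours) encodes the forecast-versus-downpour example.

```python
# Resolving a contradiction by denying the cognition held with less
# confidence (illustrative encoding of the example in the text).
def resolve(c1: tuple, c2: tuple) -> tuple:
    """Given two contradictory (content, confidence) pairs, keep the stronger."""
    return c1 if c1[1] >= c2[1] else c2

forecast = ("it is sunny", 0.40)            # last evening's forecast: a mere 'hypothesis'
window = ("a downpour is visible", 0.99)    # the perceptual 'fact'
print(resolve(forecast, window))            # the forecast-based belief is abandoned
```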


A note on subjective logic

We do not mean to imply that every person is familiar with the intricacies of formal logic or that lay reasoning is immune to syllogistic errors of various kinds. Such claims would fly in the face of everyday experience as well as generally accepted findings in the psychology of reasoning (e.g. Woodworth and Sells, 1935; McGuire, 1960). We are suggesting, instead, that human beings are subjectively logical; that is, they operate deductively by forming ‘if-then’ linkages among cognitions and reaching their conclusions in accordance with such reasoning. This is not to say that the subjective linkages formed by one person would necessarily correspond to the ‘external data’ or ‘stimulus materials’ as adjudged by somebody else, or that another person (e.g. the experimenter, or a trained logician) would form the same linkages in the same situation.

Consider a sample of respondents presented with the following two premises: (1) All singers are professors and (2) all poets are professors. Imagine, further, that our respondents concluded from these premises that ‘all singers are poets’. A careful examination of the premises suggests that the respondents erred in this instance. Does this, however, mean that they were illogical? Not in our subjective sense of logic. Caring very little about professors and poets, and not believing the statements anyway, the respondents could have easily confused ‘all poets are professors’ with ‘all professors are poets’, which implies the conclusion reached that ‘all singers are poets’. The likelihood of such confusion presumably depends on the respondents’ comprehension and acceptance of the premises with which they were presented. For instance, the premise ‘all dogs have four legs’ is less likely to be confusing to most people than the premise ‘all A’s are B’s’, or ‘all zips are moms’. The former premise is therefore less likely to result in syllogistic errors. Indeed, cognitive psychologists have demonstrated that the tendency to commit logical errors of various kinds is markedly reduced when respondents are presented with everyday or concrete propositions that they understand and accept as opposed to abstract or symbolic statements of the kind employed by professional logicians (see Wason and Johnson-Laird, 1972, pp. 66–85).

Cross-cultural research on reasoning yields evidence consistent with our arguments for subjective logic. While early authors (in particular Lévy-Bruhl, 1966) contended that the reasoning of primitive peoples is pre-logical and fraught with contradictions, this argument is now generally rejected by cultural anthropologists (cf. Cole and Scribner, 1974). A primitive tribesman may frequently commit logical errors in response to propositions presented by a Westerner. Those are propositions alien to the tribesman and ones that he may fail to comprehend or accept. But primitive people appear to reason as consistently as the rest of us when dealing with familiar material derived from their own special cultural heritage (ibid., p. 168).

The unstable epistemic equilibrium

According to the present epistemic theory, any conclusion or judgment can be deduced from different types of evidence depending on the conditional ‘if-then’ linkages (major premises) that an individual happens to form. As the potential number of such linkages is endless, there is a correspondingly inexhaustible number of evidential items relevant to any inference. Similarly, any amount of evidence is compatible with (or can be conditionally linked with) a vast number of alternative inferences or generalizations (cf. Campbell, 1969, p. 354). Thus no amount of evidence ever proves, or can even be used to assign a meaningful probability to, a given inference or generalization. No conclusion or inference is uniquely implied by a given body of information. In this sense, any conclusion or inference inevitably goes ‘beyond the information given’ (Bruner, 1957).

The particular inference reached thus depends heavily on a given individual’s epistemic process during which alternative cognitions (comprising premises for the person’s conclusions) are generated. Such a sequential process during which cognitions are generated and validated has no unique point of termination. In principle, it could continue endlessly, as alternative competing (contradictory) cognitions are put forth and validated in tireless succession. In actuality, however, the process does come to a halt at some point. At any given time we seem to have in our possession various items of knowledge without which decision-making and action would be virtually inconceivable. Let us now consider this aspect of the inference process.

Freezing and unfreezing the epistemic sequence

To account for the crystallization of knowledge it is necessary to postulate ‘braking’ or ‘freezing’ mechanisms that terminate the potentially endless epistemic sequence. Similarly, to account for the modification of once firmly held beliefs it is necessary to postulate ‘releasing’ or ‘unfreezing’ mechanisms that may reactivate a previously halted sequence. Two categories of such freezing or unfreezing mechanisms are presently identified having to do with the individual’s capacity and motivation to cognize competing alternatives to a currently entertained inference.

Capacity. The capacity to generate cognitions on a given topic depends in part on the possession of relevant background information. Enhancing one’s store of knowledge via learning or education should thus improve one’s capacity to cognize various experiences in terms of meaningful hypotheses. For example, only a person who is familiar with the workings of an automobile engine is likely to hypothesize that the engine’s failure to ignite may be attributable to a torn timing chain.

In addition to background knowledge, situational factors may affect the person’s capability to generate hypotheses on a given topic. Incidental occurrences may enhance the mental availability of some ideas and enable their coupling in ‘if-then’ linkages with other available concepts. For instance, students of statistics may have in their store of knowledge a fair comprehension of the ‘regression to the mean’ notion, yet fail to realize its relevance to a set of data until some incidental event makes its relevance salient (e.g. bumping into the statistics professor; recalling, for some reason, the Freudian concept of regression). The sophisticated scientist seems as susceptible to the vicissitudes of salience and availability as the layperson. Poincaré’s intriguing personal account of a mathematician’s experiences in this regard attests to such susceptibility (in Campbell, 1974). Later we shall examine how experimentally manipulated comprehension and salience of statistical notions can increase the likelihood of their use by naive respondents.

Motivation. The individual’s tendency to generate alternative hypotheses may be influenced by at least three relevant motivations: the need for structure, the fear of invalidity, and the preference for desirable conclusions.

The need for structure is a desire to have knowledge on a given topic, any knowledge, as opposed to a state of ambiguity. This need is assumed to exert an inhibiting or braking influence on the hypothesis-generation process because the generation of alternative hypotheses endangers the existing structure. It stands to reason that one’s need for structure is heightened in situations where immediate action is required. For example, the press of time and the need to reach a decision may heighten the tendency to seek cognitive closure and to refrain from critical probing and extended assessment of a given, seemingly adequate, solution to a problem (cf. Frenkel-Brunswik, 1949; Smock, 1955; Tolman, 1948). Thus, a heightened need for structure is assumed to enhance the person’s tendency to adhere to an early hypothesis and to be insensitive to evidence for possible alternatives.

The fear of invalidity stems from the threat of making a costly mistake. This fear is assumed to exert a facilitating influence on the hypothesis-generating process because of a reluctance to commit oneself to a given, potentially erroneous, hypothesis. For example, the greater the ridicule expected from significant others for the commission of an error, the greater might be an individual’s readiness to consider multiple alternative solutions before accepting any one as valid. Unlike the need for structure, the fear of invalidity is assumed to sensitize the individual to evidence for competing alternatives to a currently entertained hypothesis. While opposite in their effects on the cognition-generating process, need for structure and fear of invalidity are assumed to be orthogonal to each other; a person could be high on both, or low on both, or high on one and low on the other.

In turn orthogonal to the need for structure as well as the fear of invalidity is the preference for desirable conclusions. The contents of some conclusions may correspond to the individual’s wishes while other contents may not. According to the present analysis, the preference for desirable conclusions disposes a person to continue or discontinue the generation of further alternative hypotheses depending on whether a given, currently entertained hypothesis does or does not correspond with this person’s desires. For example, recent work by Snyder and Wicklund (in press) suggests that individuals are sometimes motivated to attain a state of attributional ambiguity. This is likely to be the case when unambiguous knowledge might be painful or unpleasant. Our analysis suggests that individuals are more likely to reach conclusions congruent with their wishes than incongruent ones. The reader will recognize the similarity between the preference for desirable conclusions and the general notion of motivational bias discussed at length in an earlier portion of this paper.

Lay epistemology: a brief summary of the theory

Knowledge is characterized in terms of two aspects: its contents and the confidence with which it is held. Knowledge is reached in the course of a sequence in which cognitions are generated and validated. This epistemic sequence is initiated by an epistemic purpose, i.e. the individual’s interest in a given item of information. Cognitions are generated as part of the fluctuating stream of associations. The validation of these cognitions is accomplished via deductive logic: an hypothesis will be believed if it is deducible from already accepted evidence. However, beliefs can be highly precarious and unstable. At any time the person may generate plausible alternative hypotheses inconsistent with an original judgment. Such inconsistency casts doubt on the judgment at issue and may lead to its rejection as erroneous. Lack of capacity or motivation may induce a person to cease generating new hypotheses or ideas that might compete with a current judgment, but under appropriate conditions the process can be ‘unfrozen’ and the judgment can come to be doubted and possibly abandoned.

PREVALENT VIEWS OF HUMAN INFERENCE AND THE LAY EPISTEMIC PERSPECTIVE: SOME CONTRASTING IMPLICATIONS

Earlier we characterized the prevailing conceptions of human inference by reviewing their major orienting assumptions, namely, that there exist secure criteria of validity in reference to which the accuracy of lay judgments can be determined; that motivational or cognitive factors bias judgments away from those criteria, thereby increasing the likelihood of error; and that the human inference process is pluralistic in the sense that people selectively apply a variety of distinct strategies under different circumstances. We now contrast these orienting assumptions with implications derived from our theory of lay epistemology.

Validity criteria

In contrast to the accepted view of inference, the present theory denies the possibility of defining secure validity criteria. According to our theory (and the nonjustificationist philosophy of knowledge on which it rests) one is never objectively justified in holding a given proposition as true or in making quantitative (e.g. probabilistic) estimates of the extent to which it is. Let us contrast these ideas with accepted criteria of validity mentioned earlier: normative models, direct verifications, and the investigator’s perspective.

Normative models

According to the nonjustificationist approach, any model of empirical reality is a conceptual construction whose degree of actual correspondence to objective reality is in principle inestimable. This characterization is assumed to apply with equal force to formal models (e.g. mathematical or statistical) and to less formal conceptualizations phrased, for example, in ordinary English. Thus, the distinction between formal and informal conceptualizations is assumed to lie in the precision of their formulation (mathematical terms are defined more precisely than English terms) but not in the degree to which their validity is estimable.

According to this reasoning, the ‘laws’ of physics, for example, no matter how formal their expressions, are mere conjectures whose degree of objective certitude cannot be established (cf. Popper, 1959; Kuhn, 1970). Identical arguments hold for the ‘normative’ models of inference, such as the probabilistic Bayesian model frequently referred to in the judgmental literature. Bayes’ theorem addresses the revision of subjective probabilities on the basis of new data. According to the present analysis, however, ‘data’ are as conceptual or conjectural as are ‘hypotheses’. No one would deny that Bayesian computations based on ‘invalid’ data inputs are themselves invalid. Since there is no way of securely determining the validity of a given datum, the indeterminacy inevitably transfers to all applications of the Bayesian model (see also Weimer, 1976, pp. 220-223). In sum, according to the present approach, normative models of inference cannot be regarded as reliable criteria of judgmental validity, and inferences based on such models are not necessarily closer to objective reality than are inferences derived by alternative means.
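In its standard form the theorem prescribes

\[ P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)} , \]

so that the posterior confidence in a hypothesis H can be no more secure than the probabilities assigned to the datum D and to the likelihoods; the formula supplies no independent warrant for its own inputs.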

Direct observation

The direct observation criterion of validity also falters for reasons just considered. Direct observation is a datum or a ‘fact’. But, strictly speaking, a fact is no less conceptual and hence no less indeterminate than is a hypothesis. In terms of the present theoretical perspective, therefore, the direct-verification criterion again seems of indeterminate reliability.

The experimenter’s perspective

It should be clear by now that the experimenter’s own perspective, sometimes used as a standard of validity, is from the present point of view inestimably prone to falsity. Indeed, criticism of this standard has been voiced in the recent judgmental literature. For example, Smith and Miller (1978) have argued that divergent explanations of a person’s behaviour provided by an experimenter and by the person himself do not provide evidence for the invalidity of self reports. Instead, they may simply demonstrate differences in perspective: ‘. . . the cause question has different answers depending on where the respondent stands and on what information he or she has available. . .’ (p. 357). Since there exists no generally accepted criterion of validity in these situations, it may be rather difficult to decide whether to believe the experimenter’s account or that of the respondents.

Bias versus error

In the conventional view of human judgment, bias, as a tendency to deviate systematically from an accepted criterion of validity, naturally leads to error. In fact, the terms bias and error are typically used interchangeably throughout the judgmental literature. According to the present perspective, however, bias need not result in error, if by the latter term is meant a departure from some accepted criterion of validity. If the accepted validity criteria are themselves arbitrary and of indeterminate reliability, it seems unjustifiable to define bias by reference to these criteria.

We define bias as a subjectively-based preference for a given conclusion or inference over possible alternative conclusions. According to our theory it is, in principle, possible to generate a vast number of alternative hypotheses consistent with a given array of evidence. The decision to stop the cognition-generating process at some point is assumed to be governed by such factors as the mental availability of a given conception and the person’s epistemically relevant motivations. Clearly, the strength of these factors and their direction (toward inhibiting or facilitating the cognition-generating process) are affected by subjective considerations: they vary widely from one individual to another and from one situation to the next. In this sense, then, all knowledge can be considered ‘biased’, for it is affected by various psychological mechanisms whose specific manifestations vary across persons.

In a similar fashion, we also define error subjectively as the type of experience a person might have following an encountered inconsistency between a given hypothesis, conclusion, or inference, and a firmly held belief. For instance, most of us would admit to an error about not having any money upon discovering a $100 bill in our wallets. Similarly, we might conclude that someone else is in error if that person’s inferences stood in blatant contradiction to some of our firm beliefs. Thus, an experimenter might pronounce a respondent’s beliefs as mistaken if those were inconsistent with some of the experimenter’s solid knowledge. It is noteworthy that just as with the ‘truth’ label, the ‘error’ label can be attached to a given judgment only tentatively and might be revoked upon further examination. In the philosophy of science, for example, Popper’s followers were quick to point out that the falsification of theories is no more objectively secure or probable than the corroboration of theories (cf. Lakatos, 1968; Feyerabend, 1976).

According to these definitions, bias need not result in error. All knowledge is subject to bias, but not all knowledge need be experienced as erroneous. Indeed it can be shown that the various sources of bias listed in the contemporary literature need not result in erroneous inferences, as here defined. Consider first motivational biases. Parents may be strongly motivated to believe that their child is a gifted genius and therefore reach such a ‘biased’ conclusion with considerable facility. Yet it could so happen that this conclusion would be considered consistent with available evidence by impartial observers; nobody need judge this inference erroneous. The same is true of cognitive biases. Conclusions reached because certain items of evidence happen to be salient or available could be acclaimed as veridical by all. For example, a radar system may be deliberately designed to make signals from a specific type of aircraft particularly salient, and this ‘bias’ may enhance the likelihood of ‘veridical’ recognitions on the part of the radar operators. Preconceived ideas, theories, and schemas may result in expectations concerning the norms of behaviour at some public institution, such as a bank. These expectations may lead a person to expect certain patterns of conduct on the part of the institution’s representatives and, in the individual’s experience, those expectations might invariably be confirmed. Finally, biases produced by anchoring and perseverance of beliefs may also lead to ‘veridical’ inferences, for example in those instances where new relevant evidence, to which observers are impervious and close-minded, is itself ‘invalid’. An army general might doggedly adhere to his preconceptions regarding the low likelihood of a surprise attack despite numerous alarming reports from a group of intelligence agents. Such obstinacy, however, need not result in a mistake if, for instance, the group had itself been infiltrated by enemy spies.

Thus, the various sources of bias identified in the judgmental literature need not result in inferences that would be considered erroneous. On the face of it, this conclusion may appear fully compatible with the frequently stated qualification that the proposed sources of bias result in inferential errors only on some occasions, while often they yield reasonably valid inferences. Yet the accepted view of human judgment is committed to the position that, on the whole, intuitive strategies result in a greater preponderance of errors than do the alternative, more nearly optimal modes of information processing, characteristic in particular of scientific inference. By contrast, according to the present analysis the alternative, allegedly superior, modes of inference (including the scientific method) are as fallible and of as indeterminate validity as are the various lay strategies. Indeed, we will try to demonstrate that the motivational and cognitive ‘sources of bias’ identified in the judgmental literature do not represent deviations from a reasonable inference process, but are inherent in all human judgment, whether intuitive or scientific.

Epistemic pluralism re-examined

In this section we attempt to show that the numerous cognitive and motivational strategies discussed by different investigators constitute special cases or manifestations of the processes described in the theory of lay epistemology. In particular, these strategies attempt to describe the contents of human judgment by identifying propositions from which certain conclusions can be deduced, by specifying the constructs or hypotheses that will be available in a given situation, and by naming specific conclusions individuals are likely to prefer under certain sets of conditions.

Contents of deductive logic

Consider for example the representativeness heuristic discussed by Tversky and Kahneman (1974). This heuristic addresses the case in which a person assesses the probability that ‘Object A belongs to class B’. In terms of the present analysis, this is a specific judgmental problem that a person may formulate in some circumstances, namely, the problem of classification or categorization. Of course, classification is merely one among a vast number of problems in which a person might be interested, e.g. whether A is taller than B, whether A is symmetrical with regard to B, whether A is a family relation to B, and so on.

The representativeness heuristic assumes that probability judgments involving classification are deduced by considering the degree to which object A is representative of class B, that is, the extent to which A resembles B. For most people, class membership implies that individual members possess the characteristics of the class. This proposition naturally leads to the judgment that if A possesses the defining characteristics of class B (that is, A resembles B or is similar to B) then A is a member of class B. Clearly, a strategy of this kind is quite consistent with the theory of lay epistemology which assumes that hypotheses are validated by examining whether they are consistent with, or deducible from, available evidence. In so doing the individual proceeds just as would scientists evaluating the validity of their hypotheses concerning class membership. Thus when scientists wish to determine whether a given athlete is a male or a female they might consider whether the person resembled the male or the female category in terms of chromosomal structure, because the latter is considered a defining characteristic (or implication) of sex classification.
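The deductive structure of such resemblance judgments can be rendered schematically. The following sketch is merely illustrative; the classes, features, and function names are hypothetical, not drawn from the judgment literature:

```python
# Schematic rendering of classification by resemblance: A is judged a
# member of class B to the degree that A possesses B's defining
# characteristics. All classes and features are hypothetical examples.

def resemblance(features_of_a, defining_features_of_b):
    """Proportion of B's defining features that A possesses."""
    shared = features_of_a & defining_features_of_b
    return len(shared) / len(defining_features_of_b)

BIRD = {"feathers", "beak", "lays eggs"}
robin = {"feathers", "beak", "lays eggs", "red breast"}
bat = {"wings", "fur", "nocturnal"}

print(resemblance(robin, BIRD))  # 1.0 -> 'robin is a bird' is accepted
print(resemblance(bat, BIRD))    # 0.0 -> the membership hypothesis is rejected
```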

Or consider the covariation principle articulated by Kelley (1971) whereby ‘an effect is attributed to the one of its possible causes with which over time it covaries’ (p. 3). The meaning of covariation is conjunctive presence and absence of two entities across different occasions. It should be clear that covariation (or correlation) is one implication of the notion of causality: ‘One implication of the proposition that in a given situation A was the cause of B is that under those circumstances the removal of A would eliminate B as well (conjunctive absence) whereas an introduction of A would lead to the appearance of B (conjunctive presence)’ (Kruglanski, 1980, p. 79). As a matter of fact, most people understand by the term cause something that (necessarily) covaries with the effect and temporally precedes it. When validating a causality hypothesis by observing temporal precedence and covariation, the person may be reasoning deductively, as follows: ‘Only if A covaries with B (and temporally precedes it) is A the cause of B. A covaries with B (and temporally precedes it), therefore A is the cause of B’. While covariation is entailed by the concept of causality, it is not entailed by numerous alternative concepts in which a person might sometimes be interested. For example, the notion of location (A is located to the east of B) does not imply covariation, nor does the notion of order (A is taller or heavier than B), marriage (A is married to B), symmetry, and so on. Thus, the principle of covariation is a special case of the present theory in the sense that it identifies a special type of proposition (covariation) from which a given cognitive content, the content of causality, can be deduced. In other words, the principle illustrates how the deducibility standard of validation is applied to this particular cognitive content. Just as a person who attempts to validate the hypothesis of class membership may look for evidence of resemblance from which the hypothesis might be deduced, a person attempting to validate the causality hypothesis may examine the existence of covariation (and temporal ordering) from which this hypothesis may be deduced.
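The same deductive structure can be sketched for covariation evidence. The occasion records below are hypothetical illustrations, not an implementation of Kelley’s model:

```python
# Schematic check for covariation in the strict sense used above:
# candidate cause A 'covaries' with effect B when, across occasions,
# the two are conjunctively present and conjunctively absent.

def covaries(occasions):
    """occasions: list of (a_present, b_present) pairs of booleans."""
    return all(a == b for a, b in occasions)

# A is present exactly when B is present: evidence from which 'A causes
# B' may be deduced (given, additionally, temporal precedence of A).
print(covaries([(True, True), (False, False), (True, True)]))   # True

# A occurs once without B: the covariation premise, and with it the
# causal conclusion, is undermined.
print(covaries([(True, True), (True, False), (False, False)]))  # False
```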

Similar considerations apply to the availability heuristic (Tversky and Kahneman, 1974), whereby ‘. . . people assess the frequency of a class or the probability of an event by the ease with which instances or occurrences can be brought to mind’ (p. 1127). Examination reveals that the availability heuristic, as applied to the determination of class frequency or probability, is also a special case of the present theory. A person might reason in the following fashion: ‘Assuming that the sample is representative, if it manifests a given class frequency then the population should manifest a similar frequency’ (major premise); ‘to the best of my recollection, the sample manifests frequency x’ (minor premise); therefore ‘the population frequency is approximately x as well’. In short, the availability heuristic refers to the deductive inference of a population frequency from the (remembered or available) sample frequency; the representativeness heuristic refers to the deductive inference of class membership from the resemblance evidence; and the covariation principle refers to the deductive inference of causality from covariation evidence.
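The availability inference admits the same schematic treatment; the recalled instances below are hypothetical:

```python
# Schematic rendering of the availability inference: the population
# frequency of an event is deduced from its frequency in whatever
# sample of instances happens to come to mind.

recalled = ["flood", "earthquake", "flood", "flood"]  # instances brought to mind

def estimated_population_frequency(recalled_instances, event):
    # Minor premise: to the best of recollection, the sample manifests
    # frequency x.
    x = recalled_instances.count(event) / len(recalled_instances)
    # Conclusion (given the major premise that the remembered sample is
    # representative): the population frequency is approximately x.
    return x

print(estimated_population_frequency(recalled, "flood"))  # 0.75
```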

Similar analyses may be applied to numerous additional information-processing strategies, heuristics, or schemas. For example, it could be demonstrated that the causal schemas discussed by Kelley (1972) differ among themselves in the specific cognitive contents they address while again demonstrating in common the idea that any such contents are validated via the principle of logical consistency or deducibility. (For details, the interested reader is referred to Kruglanski, 1980).

Availability of specific constructs or hypotheses

Several cognitive ‘biases’ discussed in the literature involve the concept of availability. As noted earlier, Heider (1958, p. 54) stated that ‘behaviour. . . has such salient properties it tends to engulf the total field’. Subsequently, many investigators have argued that because of such salience, observers attempting to explain an actor’s behaviour tend to overestimate the causal importance of dispositional relative to situational factors (e.g. Jones, 1979; Jones and Harris, 1967; Miller, 1976; Snyder and Frankel, 1976). In this case, situations and behaviours are singled out for special consideration in terms of their salience or availability for hypothesis generation.

In other instances, base rates (Kahneman and Tversky, 1973) and consensus information (e.g. Nisbett and Borgida, 1975) have been given special attention. These types of information are usually assumed to have relatively low salience and therefore to be ignored in favour of more readily available information, such as information concerning the individual case (see also Nisbett and Ross, 1980).

Clearly, these ideas are quite consistent with the principles embodied in the theory of lay epistemology. Unless cognitions are salient or available, they are unlikely to affect the inference process. In contrast to the prevailing approach, however, the theory of lay epistemology makes no attempt to identify the particular cognitions that will be salient in any given situation. Considering the large number of historical, situational, perceptual, and cognitive factors that may influence salience, such attempts are likely to produce inconclusive research findings. Support for this assertion will be considered below.

Preference for specific conclusions

Finally, most of the motivations that are alleged to distort inferences can be viewed as special instances of the preference for desirable conclusions. Ego-enhancement and defence, the needs for effective control, for physical safety, and so on are all assumed to affect the epistemic process in the same way: by swaying it toward need-correspondent inferences. Again, as we shall see below, given the vast array of potential needs of this kind, attempts to specify activation of specific needs under given conditions are likely to prove counterproductive.

Summary

In contrast to most prevailing views of human inference, the theory of lay epistemology rejects the proposition that accuracy of intuitive judgments can be ascertained by comparing these judgments to ‘objective’ criteria of validity. Normative models, direct verification, and the experimenter’s perspective all represent conceptual frameworks with unknown veridicality. A distinction was drawn between error and bias. Error was defined as the subjective experience of inconsistency between a given hypothesis and another, firmly accepted cognition whose validity appears beyond doubt. Bias was defined as a subjectively based preference for a given conclusion over possible alternative conclusions. It was noted that, as presently defined, all knowledge is inevitably biased in that it depends on subjective factors in the epistemic process, particularly, the individual’s capacity and motivation for hypothesis generation. Although biased, not all knowledge must be experienced as erroneous. Finally, it was noted that the pluralism of processes in the judgmental literature can be reduced to a relatively small number of constructs. The various cognitive and motivational strategies identified by different investigators were shown to represent specific instantiations of the deducibility principle, special cases of availability or saliency, and special cases of the preference for desirable conclusions.

EMPIRICAL EVIDENCE

In the preceding sections we attempted to show how the present theory of epistemic behaviour differs conceptually from generally accepted views of human judgment. Beyond its conceptual properties, the present framework is, first and foremost, a psychological theory of inference with numerous empirical implications. On the following pages we consider some of the empirical evidence for our theory.

The content-process confusion and epistemic pluralism

The epistemic pluralism evidenced in the judgmental literature stems at least in part from a failure to distinguish between the process of inference and the specific instances or contents of that process. This failure sometimes results in inappropriate generalization of specific contents to the inference process in general. Examination of the empirical evidence demonstrates the fallacy of such generalization and points to the limiting conditions under which a given content is in fact descriptive of a person’s judgmental behaviour.

The internal-external partition

Much of the attribution literature to date has implied that the distinction between internal and external causes is basic to the process of causal attribution (cf. Harvey, Ickes and Kidd, 1976, 1978). According to the present analysis, however, the internal and external categories are simply two out of a vast number of possible causal categories, as are Kelley’s (1967) person, entity, time, and modality categories; Weiner et al.’s (1971) ability, effort, luck, and task-difficulty categories; and the endogenous-exogenous (means-ends) categories discussed by Kruglanski (1975).

From the present perspective it is not very meaningful to characterize any given causal category as more basic to the attribution process than possible alternative categories. Given the vast variety of potential causal categories, any single classification is of doubtful utility. On the other hand, the present epistemic analysis suggests the limiting conditions under which any causal category is likely to be invoked by the individual. Causal categories are invoked to the extent they are teleologically functional for the individual; that is, to the extent that they contain information relevant to the individual’s epistemic purpose or situational goal (cf. Jones and Thibaut, 1958; Kruglanski, 1980).

An experiment by Kruglanski et al. (1978) provides support for this idea. Participants were presented with one of two different objectives. For example, some of the participants had to decide ‘whether to invite John to a Saturday night party’, whereas other participants were to decide ‘whether it is worthwhile to buy tickets to a given movie for oneself and a friend’. To make their decisions, participants could choose between two sets of causal information. One set would resolve the question as to whether ‘John’s decision to attend a movie on Saturday night was caused by the movie’s properties or by John’s unique personality’. This pair of hypotheses was cast in terms of external versus internal causes, respectively. The second set of information would help the respondent decide whether ‘John’s decision to attend the movie was an end in itself, or a means of combatting loneliness’, a pair of hypotheses selected to correspond to Kruglanski’s (1975) endogenous versus exogenous causes.

Note that the internal-external set is teleologically functional to the objective of deciding whether or not to buy the movie tickets. Specifically, if John’s decision to attend the movie had been prompted by the movie’s attractive properties (external attribution), a decision to buy the tickets would be reasonable. Similarly, the means-ends problem is teleologically functional to the objective of deciding whether to invite John to the party. If John’s contemplated visit to the movies had been merely a means of combatting loneliness, he might well be persuaded to attend the party instead. As predicted, the respondents expressed a significantly greater interest in the internal (versus external) set of information when this set was teleologically functional to their objective. However, when the endogenous-exogenous set was more teleologically functional, they expressed a greater interest in this set than in the internal-external set. These results are consistent with the idea that the internal-external categories are not of basic or pervasive relevance to the attributor. As with all other possible categories they are of relevance when the information they convey appears functional to the advancement of the individual’s goals.

Attributional criteria

Like the partition into internal and external factors, the attributional criteria of Kelley’s (1967) ANOVA model have also been regarded as fundamental to the process of causal attribution. According to Kelley’s model, an effect is confidently ascribed to an external entity if the following four criteria are satisfied:

(1) Distinctiveness: The impression is attributed to the thing if it uniquely occurs when the thing is present and does not occur in its absence.

(2) Consistency over time: Each time the thing is present, the individual’s reaction must be the same or nearly so.

(3) Consistency over modality: This reaction must be consistent even though his mode of interaction with the thing varies.

(4) Consensus: Attributes of external origin are experienced the same by all observers (Kelley, 1967, p. 197).

The presumed centrality of these criteria to the attribution process is demonstrated, for example, by the considerable research efforts that have been expended to ascertain whether people are generally sensitive or insensitive to consensus information (see Nisbett and Borgida, 1975; Hansen and Donoghue, 1977; Wells and Harvey, 1977; Ruble and Feldman, 1976; Tyler, 1980).

According to the present analysis, people’s interest in Kelley’s criteria will be restricted to cases where they wish to determine whether an observed effect is attributable to an external entity as opposed to factors related to the actor, the modality, or the time (occasion). Consistency of the effect across different occasions allows one to deduce that the specific occasion was not the effect’s cause. Similarly, consistency over different modalities can be used to deduce that the specific modality did not cause the effect, and consensus across different persons implies that the effect was not caused by the specific actor. On the other hand, distinctiveness of the effect to a given external entity (a conjunctive presence or absence of the entity and the effect) allows one to deduce that this particular entity was the likely cause of the effect. (For a detailed analysis of Kelley’s criteria, see Kruglanski, 1980.)

The above interpretation suggests that an individual interested in attributional categories other than external entity, actor, time, and modality would have little interest in the distinctiveness, consensus, and consistency criteria. This derivation was tested in an experiment by Kruglanski et al. (1978). Respondents were presented with one of several causal problems. Some respondents were given the standard problem which asked them to decide whether the person (actor) or the stimulus (external entity) was the cause of some event (similar to McArthur, 1972). Other respondents, however, had to decide whether the effect was endogenously or exogenously caused (cf. Kruglanski, 1975); whether it was caused by the actor’s ability or effort (cf. Weiner et al., 1971); or whether it was intentional or unintentional (cf. Weiner, 1974). The respondents were also shown two sets of information. One set always contained consistency, consensus, and distinctiveness information from which it was possible to deduce whether the actor or the external situation was the cause of the effect. The second informational set allowed the deduction of an answer to the particular problem posed to a given respondent: whether the cause was endogenous or exogenous, whether the effect was due to ability or effort, or whether it was due to intentional or unintentional factors. The respondents’ task was to rate which of the two informational sets was more useful to the resolution of their problem. As expected, they rated the consistency-distinctiveness set as more useful than any of the three alternative informational sets when their problem was to decide between the person versus stimulus attributions. However, they rated the information related to Kelley’s criteria as less useful when their problem was to make one of the alternative attributional choices. Thus, the widely discussed and researched attributional criteria of consistency, consensus, and distinctiveness do not seem of pervasive concern to people asked to explain an actor’s behaviour. As predicted by the present epistemic model, they seem of interest only when inferences deducible from these criteria happen to be of interest to the person making the attribution.

Actor versus situation attribution

Another example of the tendency to single out specific contents of attribution and generalize them to the process of attribution originated from Heider’s (1958) proposal that behaviour tends to ‘engulf the field’. Consequently, causal attribution was presumed to be levelled predominantly at the behaving person rather than at the surrounding situation. Jones and Nisbett (1971) qualified this hypothesis, suggesting that dispositional (i.e. person) attributions would be preferred by observers while situational ones would be preferred by actors. Research has indicated, however, that regardless of whether a person is the actor or an observer, when attention is drawn to the actor there is an increase in attributions of behaviour to dispositional factors (Taylor and Fiske, 1975), but when the environment is made more salient, there is an increase in attributions to external (situational) factors (Arkin and Duval, 1975). Consistent with this argument, Monson and Snyder (1977) uncovered approximately equal numbers of studies reporting actor-observer differences in the predicted pattern and in the opposite pattern (see also Ajzen and Fishbein, in press). There is thus little support for systematically biased discrepancies between attributions made by actors and observers.

To conclude, the pluralism of constructs in the literature seems to reflect a tendency to single out a given content of inference (e.g. person versus situation attribution, consensus versus distinctiveness information) and to generalize it to all inferential judgments. Empirical research demonstrates, however, that this confusion between inferential process and contents leads to unsupportable generalizations. Thus, the person-environment (internal-external) categories do not seem basic to the process of attribution; instead they appear restricted to cases in which the information contained in these particular categories may in some way further the individual’s objectives. Similarly, the informational criteria of distinctiveness, consensus and consistency appear to command no universal attention from attributors. Instead, they are of interest only when attributors happen to be concerned with specific (person-situation) inferences deducible from these criteria. Finally, there seems to exist no fundamental tendency to prefer person attributions over situational attributions. Rather, people seem to prefer dispositional attributions when person factors happen to be more salient or available to the individual, and to make situational attributions when external factors are more salient.

In addition to refuting the suggested generalizations of inferential contents and their incorporation into a theory of the inferential process, the research examined so far supports the present theory in a number of ways. First, people seem to formulate cognitions that are functional to advancing their goals or interests. Second, people validate their cognitions deductively; they come to believe in hypotheses or propositions if these are deduced from other credible cognitions or evidential criteria. Finally, the formulation of cognitions depends on their momentary saliency or mental availability. The last point is also pertinent to the following discussion of evidence regarding people’s presumed insensitivity to statistical information.

Statistics versus heuristics

An assumption underlying many contemporary analyses of human judgment is that statistical reasoning represents a superior (normative) mode of inference and that its application enhances the likelihood of veridical judgments. Further, laypeople are considered generally incapable of reasoning in statistical terms and instead are thought to employ intuitive or prescientific heuristics that often lead to erroneous inferences.

We quite agree that people frequently employ ‘heuristics’ in making everyday judgments. As shown in earlier sections of this paper, such heuristics simply constitute specific conceptions (like class membership or class frequency) that people draw upon in order to reach their inferences. We also agree that in most instances, these conceptions are not phrased in sophisticated statistical terms with which most people are simply unfamiliar (see Cohen, 1981, p. 326). Finally, we agree that the use of heuristics can lead to (subjectively defined) errors. However, our interpretation of the foregoing phenomena differs markedly from prevailing views. We regard statistics and the calculus of probability as a set of formally expressed concepts in terms of which certain events can be understood. There is no a priori reason to assume that this set of concepts is necessarily superior to any other set of concepts. As with all concepts that people employ, statistical interpretations of phenomena are of indeterminate validity; we gain or lose confidence in them as a function of their consistency or inconsistency with other evidence.⁴ Further, the array of data accounted for by a statistical hypothesis is usually also open to interpretation in terms of competing, nonstatistical hypotheses.

⁴For the argument that statistical induction is a special case of deduction, see Turner (1965), and also Footnote 2.

Take statistical regression for example. When extreme scorers become less extreme on repeated testing this phenomenon could be accounted for by the notion of statistical regression. It could also be explained, however, by numerous nonstatistical hypotheses. Perhaps, deviant persons (extreme scorers) make every effort to become less deviant because of the social stigma frequently attached to deviance. Whether statistical regression or avoidance of social stigma better explains the data in a given case can be a matter for further investigation. For instance, statistical regression effects should occur regardless of whether the individuals are aware of their scores, whereas social stigma effects should depend on awareness.
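The purely statistical account lends itself to simulation. In the sketch below (all parameter values are arbitrary illustrations), retest scores of extreme first-test scorers drift toward the mean even though nothing about the scorers has changed between tests:

```python
# Simulation of regression to the mean: each observed score is a stable
# 'true score' plus independent measurement noise, so extreme scorers on
# a first test are, on average, less extreme on a retest.

import random

random.seed(1)
true_scores = [random.gauss(100, 15) for _ in range(10_000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

# Select extreme scorers on the first test; compare the group's means.
extreme = [(s1, s2) for s1, s2 in zip(test1, test2) if s1 > 130]
mean1 = sum(s1 for s1, _ in extreme) / len(extreme)
mean2 = sum(s2 for _, s2 in extreme) / len(extreme)
print(f"test 1 mean of extreme group: {mean1:.1f}")   # well above 130
print(f"test 2 mean of the same group: {mean2:.1f}")  # closer to 100
```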

Thus, within the present interpretative framework, statistical reasoning does not have a special status as compared to reliance on heuristics. Both refer to the contents of different conceptual constructions that people might employ in an attempt to impose order on experience. The way they do so is assumed to follow the guidelines of the present theory. According to this analysis there is nothing unusual about people’s occasional insensitivity to statistical notions. Under appropriate conditions people could be similarly insensitive to any other notions, particularly if these notions were insufficiently salient or their relevance (their if-then linkage) to the judgments of interest was not readily apparent. However, when salient and subjectively relevant, statistical notions should readily be employed by laypeople. Research examined in the following section provides evidence in support of this analysis.

Enhancing the salience of statistical information via within-subject designs

Fischhoff, Slovic and Lichtenstein (1979) argued that studies demonstrating insensitivity to statistical information (e.g. Lyon and Slovic, 1976; Kahneman and Tversky, 1973; Tversky and Kahneman, 1971) typically employed between-subjects designs which exposed participants to only one value of the statistical information. Such a procedure tends to ensure low salience of the statistical information, thereby reducing the likelihood of its being utilized. It follows that respondents’ tendencies to use statistical information could be increased by providing this information in a within-subjects design which exposes them to several different values of the statistical variable. To test these ideas empirically, Fischhoff et al. (1979) conducted three separate experiments employing within-subjects designs. In one experiment, the statistical information variable was base-rate frequency; in the second experiment, it was the predictive validity of scores; and in the third it was sample size. Despite individuating information, which should have allowed invocation of the ‘representativeness heuristic’, respondents in the first two experiments were sensitive to statistical information and employed it in general conformity with the statistical model. However, in the third experiment, the within-subjects design failed to increase sensitivity to the statistical information (regarding sample size).

Fischhoff et al. (1979) commented on the possible criticism of within-subjects designs as susceptible to demand characteristics. Such demands could strongly suggest to respondents that they are supposed to vary their reactions in response to the different values of the statistical information. Fischhoff et al. aptly noted, however, that even if they existed, such demands could not, in and of themselves, determine the directionality of the judgments. Respondents were not only sensitive to statistical information but also appreciated the way in which this information was relevant to the judgments at hand. Moreover, the same criticism regarding demand characteristics would also have to be levelled against the use of within-subjects designs in the presentation of individuating information (e.g. Kahneman and Tversky, 1973).

As to the third experiment, respondents possibly failed to appreciate the information’s relevance to the requisite judgments. Indeed, the notion of an inverse relation between sample size and sampling variance seems highly sophisticated and complex. Thus, the relevance of sample size differences to shapes of estimated distributions (the dependent variable of the third experiment) may not have been appreciated by many respondents. Consistent with this argument, Bar-Hillel (1979) demonstrated that statistically naive respondents do take sample size into account when they are simply asked to express their confidence in the results obtained by samples of different sizes.
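The statistical notion at issue is the standard result that the sampling variability of a mean shrinks with sample size,

\[ \operatorname{Var}(\bar{X}) = \frac{\sigma^{2}}{n}, \qquad \operatorname{SE}(\bar{X}) = \frac{\sigma}{\sqrt{n}} , \]

a relation whose bearing on the shape of an estimated distribution is indeed far from transparent to the statistically untrained respondent.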

Increasing the utilization of base-rate information via causal linkages

Ajzen (1977a) reasoned that in certain situations the relevance of statistical base-rate information to a given judgment may be made salient by linking it causally to the judged event. For example, the proportion of engineers or lawyers in a given sample does not have causal implications; it does not cause anyone to become an engineer or a lawyer. By contrast, the proportion of students passing or failing a given exam provides information about the exam’s ease or difficulty, a causal factor that allows prediction of the likelihood of future exam performance. Ajzen (1977a) conducted two experiments testing the hypothesis that the tendency to utilize base-rate information is stronger when this information has causal significance than when it lacks such significance. The data of both studies lent strong support to this hypothesis.
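In the standard Bayesian formulation, by contrast, base rates enter prediction as prior odds regardless of any causal significance:

\[ \frac{P(H_1 \mid D)}{P(H_2 \mid D)} = \frac{P(D \mid H_1)}{P(D \mid H_2)} \times \frac{P(H_1)}{P(H_2)} , \]

where the prior odds \( P(H_1)/P(H_2) \) are fixed by the base-rate proportions (e.g. the proportion of engineers to lawyers in the sample). The hypothesis under test was precisely that laypeople, unlike the model, weight such proportions chiefly when they carry causal implications.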

Enhancing the perceived relevance of reliability and validity information

In Ajzen’s (1977a) experiments, base rates were made relevant to the required judgments in a manner unrelated to any statistical rationale. By contrast, two experiments conducted by Farkash (Note 2) attempted to increase the perceived judgmental relevance of statistical concepts by increasing the salience of the underlying statistical rationale.

The first experiment examined Kahneman and Tversky’s (1973) suggestion that people pay little attention to the empirical reliability or predictive validity of information, relying instead on its perceived representativeness or diagnosticity in relation to the required judgment (see also Trope, 1978). For example, it follows from a statistical analysis that the distribution of predicted scores should match the distribution of a reliable predictor more closely than that of an unreliable predictor. In the latter case, the predicted scores should be more regressed; that is, they should cluster more closely around some central value, such as the mean. Kahneman and Tversky (1973) reported data which seemed to indicate that people’s predictions are in fact insufficiently regressive, reflecting insensitivity to information regarding the predictor’s unreliability.
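In standard-score form, the statistical rationale can be stated compactly. The least-squares prediction of a criterion Y from a predictor X is

\[ \hat{z}_{Y} = r_{XY}\, z_{X} , \]

and unreliability attenuates the validity coefficient, \( r_{XY} \leq \sqrt{r_{XX}\, r_{YY}} \); the lower the predictor’s reliability \( r_{XX} \), the smaller \( r_{XY} \) can be, and the more closely the predictions should cluster around the mean.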

In contrast, we noted earlier that Fischhoff et al. (1979) successfully induced respondents to make use of reliability information by increasing the salience of such information in a within-subjects design. Farkash (Note 2) demonstrated that people can also be sensitive to reliability information in a between-subjects design. Following Kahneman and Tversky (1973), respondents were asked to predict students’ grade point averages (GPA) from their scores on a mental concentration test. To clarify the importance of reliability information, Farkash’s instructions distinguished between mental concentration as an achievement-related ability and the particular test of mental concentration employed (whose reliability could be high or low).

To ensure the representativeness of the predictor variable in relation to GPA, respondents were told that ‘mental concentration is one of the most important determinants of success at school’. They were then shown 10 students’ mental concentration scores as assessed by a particular test of mental concentration. Half the participants were told that the reliability of this particular test was about 5 per cent (low reliability condition) and the other half that the reliability was about 95 per cent (high reliability condition). The statistical relevance of reliability to prediction was made salient by means of a weighing scale metaphor. A reliable scale was described as one which yields similar readings on successive weighings of the same object and an unreliable scale as one which yields disparate readings. The respondents’ task was to predict the GPA of each student from his or her mental concentration score. In marked contrast to the results reported by Kahneman and Tversky (1973), these predictions were found to be significantly more regressive in the low as opposed to the high reliability condition. Furthermore, respondents in the low reliability condition expressed significantly less confidence in their predictions than did respondents in the high reliability condition.

In her second study, Farkash (Note 2) examined the hypothesis that, because of reliance on the representativeness heuristic, and contrary to the statistical rationale, people derive greater confidence from redundant than from nonredundant items of information. In support of this suggestion, Kahneman and Tversky (1973) had found that respondents predicted students’ grade point averages with greater confidence on the basis of scores on two tests that were said to be (and actually were) highly correlated than on the basis of two uncorrelated tests.
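The statistical rationale here can also be made explicit. For two standardized predictors that each correlate \( r \) with the criterion and \( \rho \) with each other, the squared multiple correlation is

\[ R^{2} = \frac{2r^{2}}{1+\rho} , \]

which decreases as the redundancy \( \rho \) between the predictors increases; statistically warranted confidence is therefore lower, not higher, for redundant information.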

To see whether respondents can be induced to take redundancy of information into account, Farkash made an attempt to increase the perceived relevance of the statistical information to the requisite judgments. Respondents were asked to predict final grade point averages from information concerning mean grades in a set of courses taken over two semesters and a set of courses taken over three semesters. In the high redundancy condition, the courses listed in the two sets covered identical subject matters, while in the low redundancy condition, they dealt with different subjects.⁵ As expected, respondents’ confidence in their judgments was significantly higher under conditions of low redundancy than under conditions of high redundancy.

⁵The study also varied the reliability of the source who provided the information concerning the students’ grades. This manipulation again showed that respondents had more confidence in predictions based on reliable, rather than unreliable, information.

Statistics versus heuristics: an epilogue

The data reviewed above demonstrate that even when information is highly representative, people are not stubbornly insensitive to statistical considerations. It was shown that when statistical information and its relevance to the required judgments are salient, respondents typically make use of that information. The perceived relevance of statistical information to particular judgments may derive from the respondents’ comprehension of statistical notions like reliability or intercorrelation (redundancy); or it may derive from nonstatistical connotations of statistical information, as in Ajzen’s (1977a) studies of causal implications.

According to the present theory, utilization by the lay person of any proposition depends on its situational salience and its apparent relevance to the judgments at hand. In this sense, statistical considerations are equivalent to any other kind of consideration. Thus, we contest the suggestion that statistical principles are uniquely underutilized by lay judges and that an increase in the degree to which they are utilized will necessarily improve judgmental accuracy. The present data are compatible with the interpretation that ‘statistics’ as well as ‘heuristics’ refer to the different contents of considerations people may invoke when making judgments. Whether a given judgment is reached via statistics or heuristics, it could subsequently come to be doubted in the face of inconsistent evidence, and it is impossible to estimate the likelihood that such evidence will emerge. Thus, the use of statistics does not guarantee judgmental veridicality any more than does the use of heuristics.

The above argument is not meant to imply that statistics are useless or that they should not be taught in schools (as recommended by Nisbett and Ross, 1980). Indeed, it is our view that statistics constitute an important intellectual achievement and as such should definitely be disseminated in the educational system. Statistics represent a sophisticated way of thinking about particular aspects of events, a conceptual language like algebra, geometry, or differential calculus that can be used to phrase and deductively resolve various problems. However, the teaching of statistics will not necessarily increase the overall validity of our inferences in the empirical realm, any more than the teaching of differential calculus.

Overcoming perseverant beliefs

Within the currently prevailing conceptions of human inference, the tendency of people’s beliefs to persevere despite exposure to discrediting evidence (Ross, Lepper and Hubbard, 1975) or to be excessively anchored at initial estimates (Tversky and Kahneman, 1974) is generally considered an unfortunate feature of intuitive judgments, setting them apart from scientific inferences and contributing to the preponderance of judgmental errors. By contrast, from the present perspective, belief perseverance is regarded as an inevitable feature of the epistemic process, as characteristic of lay as it is of scientific inferences, and bearing no necessary relation to the commission of mistakes. According to this interpretation, belief perseverance reflects the phenomenon of epistemic freezing whereby the person ceases at some point to generate hypotheses and accepts a given, currently plausible proposition as valid. Epistemic freezing is considered to be an inevitable feature of the judgmental process because of the potentially endless character of cognition generation. The epistemic sequence must come to a halt at some point lest the individual be left without any crystallized knowledge necessary for decision making and action. In this sense then, all firm knowledge can be regarded as ‘frozen’ knowledge and all confident beliefs as ‘perseverant beliefs’. That epistemic freezing occurs in science even as it does in lay inference has been demonstrated by Thomas Kuhn (1962) in his classical work on paradigms. According to Kuhn, scientific communities tend to be committed uncritically to particular sets of assumptions defining ‘paradigms’ and setting the constraints within which research activities are conducted for a period of time. However, paradigms can be unfrozen and eventually replaced by alternative paradigms. According to Kuhn, this might occur when anomalous evidence inconsistent with a current paradigm reaches such ‘alarming proportions’ that the paradigm’s overall validity comes to be doubted.

As noted earlier, the present interpretation of the freezing phenomenon suggests that it need not be related to inferential errors in any systematic way. Conversely, open-mindedness about a belief or the readiness to ‘unfreeze’ it need not signify an improvement in judgmental accuracy and may in fact sometimes result in ‘errors’. Perhaps more interesting than the question of whether perseverance leads to ‘truth’ or ‘error’ is the identification of factors that may determine the degree of perseverance. The present epistemic theory contains several specific suggestions in this regard which were tested in the experiments described below.

Unfreezing self-perceptions in the debriefing paradigm

The first experiment in this series (Kruglanski and Meschiany, Note 3) replicated and extended the study by Ross, Lepper, and Hubbard (1975). In the latter study, participants were found to adhere to experimentally induced beliefs regarding their success or failure at a task of identifying which of a series of suicide notes were fictitious (as opposed to authentic). This perseverance occurred despite an extensive debriefing which disclosed the deception on which the original belief-induction had been based. Interpreting the perseverance effect of Ross et al. (1975) as an instance of epistemic freezing suggests the type of factors that might work to relax it: heightened fear of invalidity, preference for a different conclusion, and credible evidence inconsistent with the perseverant belief.

Kruglanski and Meschiany first replicated a portion of the Ross et al. design. Respondents were exposed to fictitious success or failure experiences on the suicide notes task, and they were either debriefed or not debriefed concerning the deceptive nature of the feedback they had received. The data were consistent with Ross et al.’s original perseverance effect: in comparison to the failure condition, respondents exposed to the success feedback, even after debriefing, continued to believe that they had done significantly better at the task, that they would do better again in future performance of the task, and that they had a higher level of the required ability.

More interesting from the present vantage point are those conditions in which the epistemically relevant variables were specifically manipulated, after the debriefing had taken place. In the accuracy-set condition, an attempt was made to heighten the respondents’ fear of invalidity by asking them to evaluate their actual performance and leading them to believe that these evaluations would be publicly compared with their real scores. They were also informed that accuracy in self-perception is of considerable importance to an individual and is particularly valuable for decision-making and adjustment to new situations.

In the success-desirability condition, participants were led to believe that respondents with high actual performance would receive extra credits and would be invited to participate in further exciting research. In the failure-desirability condition these rewards were promised to participants with low actual performance. Finally, in the inconsistency condition, participants were shown their ‘actual scores’ which suggested that their performance on the task had been about average.

The results demonstrated that the perseverance effects described earlier were totally eliminated in the accuracy-set, success-desirability, and inconsistency conditions. In the failure-desirability condition, a significant success-failure difference remained only with respect to judgments as to how well participants believed they had done on the task. Possibly due to the high value people tend to place on success, the failure-desirability induction was not as powerful as the success-desirability induction. Generally, however, the data lent strong support to the present analysis which views belief perseverance as an instance of epistemic freezing.

Primacy effects, ethnic stereotypes, and anchoring phenomena

A series of experiments completed recently by Freund (Note 4) explored the proposition that epistemic freezing underlies such disparate phenomena as primacy effects in impression formation, ethnic stereotyping, and anchoring of numerical estimates in initial values. Specifically, Freund tested the contrasting effects of the need for structure and the fear of invalidity on epistemic freezing and unfreezing. As noted earlier, high need for structure is expected to induce epistemic freezing, while high fear of invalidity is expected to promote epistemic unfreezing. Need for structure and fear of invalidity may also be expected to yield an interaction effect. When the fear of invalidity is low (the person does not have a high stake in being right), need for structure may be the only salient motive in the situation. Because of a ceiling effect, further experimental enhancement of this motive might, therefore, have little additional impact on judgments. On the other hand, when fear of invalidity is high, judgments should be affected by variations in the need for structure. These predictions were examined in three areas: primacy effects, ethnic stereotypes, and anchoring phenomena.

Primacy effects. A primacy effect in impression formation is often observed when individuals are presented with serial information about another person and are asked to form an impression of that individual. Under such circumstances it is often found that individuals base their impressions more on information appearing early in the sequence than on later information (see Asch, 1946; Luchins, 1957).

In Freund’s experiment, participants were presented with two contrasting descriptions of the past performance of a candidate for a new job. One description portrayed the candidate in a positive light and the other in a negative light. One half of the participants received the information in a positive-negative sequence and the remaining half in a negative-positive sequence. The respondents’ task was to predict the candidate’s chances of succeeding at the job. In the high fear of invalidity condition, respondents expected to have to rationalize their judgment to their peers as well as have it publicly compared with the candidate’s ‘actual’ success record. In the low fear of invalidity group, neither of these expectations was induced. Finally, in the high need for structure condition, the respondents had a limited amount of time to reach their judgments whereas in the low need for structure condition, the amount of time was unlimited.

The results of the study revealed a basic primacy effect. Ratings of the candidate’s success were significantly higher in the positive-negative sequence as compared with the negative-positive sequence. More importantly, the hypotheses derived from the present theory were supported without exception. Across both sequences, primacy effects were significantly stronger when the need for structure was high versus low, and significantly weaker when the fear of invalidity was high versus low. Finally, the predicted interactive effect of the structure and validity needs was also confirmed. Across the two informational sequences, the difference between high and low need for structure was significantly greater when fear of invalidity was high rather than low.

Ethnic stereotyping. Ethnic stereotyping or prejudice is said to occur when a member of a given group is judged on the basis of preconceptions regarding the group as a category rather than on the basis of evidence concerning the member as an individual (cf. Hamilton, 1979). Like primacy effects, ethnic stereotypes are presently assumed to represent freezing of the epistemic process, so that information inconsistent with an existing belief is not seriously attended to or allowed to affect the perceiver's mind. Given this interpretation, stereotyping should be affected by the needs for structure and validity in much the same ways as elaborated above with respect to primacy effects.

In an experimental test of this hypothesis, Freund (Note 4) presented students in the graduating class of an Israeli teachers' college with a Hebrew composition (the topic of the essay was: 'An interesting occurrence that happened to me'), presumably written by an 8th grader. In one experimental condition, the writer's surname and his father's birthplace suggested that his ethnic origin was Sephardic (Oriental). In a second condition, the writer's ethnic origin was identified as Ashkenazic (Occidental); and in a third (control) condition, no ethnic information was provided. The respondents' task was to grade the composition for its literary excellence on a scale ranging from 40 points (representing 'failure') to 100 points (representing 'excellent performance').

In the high fear of invalidity condition, the respondents expected to have to justify their grade assignments to their peers and to have their tabulations publicly compared with the judgments of experts (a team of experienced teachers). In the low fear of invalidity condition, the respondents had no similar expectations; instead, they were told that literary compositions are inherently difficult to grade, and that ‘there are no right or wrong answers when it comes to humanistic subjects’. In the high need for structure condition, the respondents had to complete the grading within a restricted period of time (10 minutes), while in the low need for structure condition, they were allowed a full hour.

The results revealed a strong stereotyping effect: significantly higher grades were assigned to the essay ostensibly written by the Ashkenazi writer than to the essays attributed to the Sephardi and unidentified writers, who did not differ systematically from one another. More interestingly, the stereotyping effect was significantly stronger under high as compared to low need for structure conditions, and significantly weaker under high as opposed to low fear of invalidity conditions. Finally, variation in the need for structure had a greater effect on judgments when the fear of invalidity was high rather than low.

Anchoring phenomena. An anchoring effect in numerical judgments occurs when the individual begins with an initial estimate and fails to adjust it sufficiently in the light of subsequent information (cf. Tversky and Kahneman, 1974). For instance, the anchoring effect has been found to result in an overestimation of the probabilities of conjunctive events and an underestimation of the probabilities of disjunctive events (Bar-Hillel, 1973). Participants in Bar-Hillel's experiment had to decide on each trial which of a pair of yoked events was more likely to occur. On some trials, a simple event (drawing a red marble from an urn containing the proportion q of red marbles) was paired with a conjunctive event, and on other trials, the simple event was paired with a disjunctive event. The initial q's of the conjunctive events were always larger than those of the yoked simple events, yet the final conjunctive probabilities were always smaller. By contrast, the q's of the disjunctive events were always smaller than those of the yoked simple events, yet the final disjunctive probabilities were always larger. Under these conditions, clear anchoring effects emerged: respondents judged conjunctive events to be more likely, and disjunctive events to be less likely, than the simple events with which they had been paired.
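The arithmetic underlying these yoked events is easy to reproduce. The sketch below uses urn parameters of the kind discussed by Tversky and Kahneman (1974) in connection with this design; the particular values are illustrative assumptions, not Bar-Hillel's exact stimuli.

    # Simple event: one draw from an urn with 50 per cent red marbles.
    # Conjunctive event: red on every one of 7 draws (with replacement)
    # from an urn with q = 0.90; its true probability falls below 0.50.
    # Disjunctive event: red on at least one of 7 draws from an urn with
    # q = 0.10; its true probability rises above 0.50.
    q_conj, q_disj, n = 0.90, 0.10, 7

    p_conjunctive = q_conj ** n            # 0.9**7 is about 0.478
    p_disjunctive = 1 - (1 - q_disj) ** n  # 1 - 0.9**7 is about 0.522

    print(round(p_conjunctive, 3), round(p_disjunctive, 3))
    # Anchoring on the initial q's (0.90 high, 0.10 low) thus yields the
    # observed overestimation of conjunctive, and underestimation of
    # disjunctive, probabilities relative to the simple event (0.50).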

Freund (Note 4) conducted an extended replication of Bar-Hillel's experiment, superimposing upon it the (by now familiar) 2 x 2 design with two need-for-structure levels cross-cutting two fear-of-invalidity levels. In the high fear of invalidity condition, it was explained that the experimenter would announce the correct answers and publicly compare them with the estimates provided by the respondent. No such expectations were induced in the low fear of invalidity condition where, in addition, the experimenter announced that his interest was in group averages rather than in individual achievements. In the high need for structure condition, respondents worked under a time limit of 3 minutes whereas under the low need for structure condition no time limit was imposed.


As expected, anchoring effects were found to be contingent on both the need for structure and the fear of invalidity. The proportion of choices consistent with the anchoring phenomenon was significantly greater under high as opposed to low need for structure, and significantly smaller under high as opposed to low fear of invalidity. Furthermore, the need for structure manipulation had a greater effect on the anchoring tendency when the fear of invalidity was high rather than low.

Summary

The foregoing sections considered empirical research based on our theory of lay epistemology. The findings raise questions regarding several generalizations found in the current judgmental and attributional literature. It was shown that the distinction between internal and external (dispositional and situational) causes of events is not of unique or fundamental concern for ordinary people. As with all other possible causal categories, the internal-external partition seems to be of concern only when the information conveyed by those categories advances the individual's epistemic purposes in some way.

Similarly, the attributional criteria of consensus, consistency, and distinctiveness do not appear to be of general interest to attributors. As with all other types of evidence, these criteria appear to be relevant only when inferences deducible from them (e.g. inferences regarding person or stimulus attributions) are of interest to the individual. Furthermore, the generalization that people prefer to make dispositional as opposed to situational attributions, or that observers are inclined to exhibit this preference whereas actors manifest the opposite preference, has been found wanting. Rather, it now appears that, as with all inferences, when a causal explanation related to the person or to the situation is made salient, the tendency to accept it is enhanced (provided it is consistent with other available evidence).

Considerations of salience also qualify the generalization that people are typically insensitive to statistical information and instead rely on intuitive heuristics. The evidence reviewed suggests that even in the presence of information allowing the utilization of heuristics (i.e. information representative of the required judgment), people can be quite responsive to statistical information, provided that its relevance to the requisite judgments is made salient.

Finally, we considered evidence suggesting that the perseverance of beliefs and the anchoring of estimates are not general and unfortunate characteristics of intuitive judgments. Whether a given judgment will persevere or be modified in light of new evidence seems to depend on the relative strength of several factors that affect epistemic freezing: the need for structure, the fear of invalidity, the preference for desirable conclusions, and the salience of inconsistent information. It was shown that our analysis of the freezing phenomenon affords insight into such seemingly disparate research topics as primacy effects, ethnic stereotyping, belief perseverance, and anchoring phenomena.

CONCLUDING REMARKS

The present analysis implies a twofold reorientation of research efforts in the area of social cognition: a clear separation between the contents and the process of human inferences; and a shift away from the focus on inferential rationality (or veridicality).

The content-process distinction

We attempted to show how failure to distinguish between inferential process and contents has led to a growing proliferation of constructs in the social cognition literature. It is important to point out that we are not objecting to the study of cognitive contents. Identification of contents is indispensable for the prediction and understanding of behaviour in specific situations. Nor are we advocating a shift away from the study of specific phenomena (contents) to the study of general phenomena (process). The importance of a phenomenon bears no necessary relation to its generality. In some cases a thorough understanding of a restricted phenomenon (e.g. the personality of a political leader) may have momentous consequences while understanding of a general phenomenon may occasionally be devoid of appreciable significance. Besides, there are no natural limits to levels of generality; one person's process may be another person's content if the latter focused on a higher level of generality than the former. Given such relativism, any recommendation to study universal rather than particular phenomena would be rather empty.

Our objection is related to the confusion between levels of the universal and the particular (process and contents). Most investigators would be disappointed if a learning theorist promised to deliver the definition of a reinforcer and instead proceeded to describe a carrot. The same is true of a cognitive or social psychologist who sets out to characterize how inferences in general are derived and instead proceeds to elaborate the contents of a few specific inferences. As we have seen, this confusion has led to inappropriate generalization of numerous inferential contents that were mistaken for aspects of the universal inference process. Since the contents of inferences are virtually unlimited, mistaken generalization of these contents threatens to make any theory of inference prohibitively cumbersome and complex. From the standpoint of a theory of inference, the various content categories that have been generalized are far too many, yet from the standpoint of the layman's phenomenology they are far too few, coming nowhere near to exhausting the limitless categories and propositions populating people's minds.

Instead of a content-ridden theory of process, we propose a two-step approach. First, a nomothetic theory of inferential process is elaborated. This step is followed by a deliberately idiographic description of inferential contents, characterized uniquely for each situation of interest rather than being derived from a pre-ordained taxonomy of content categories. Within such an approach, the general theory of inference should be applicable to all possible content areas, including many areas of traditional concern to social and cognitive psychologists.

To give just one example, consider the topic of social comparison processes. In Festinger's (1954) original formulation, a distinction is drawn between abilities and opinions, two major content domains in which social comparison is presumed to take place. According to the present conception, however, the case of abilities may be considered an instance of the case of opinions in which people form opinions about their own abilities relative to the abilities of comparison individuals. In other words, abilities need not be conceived as separate from opinions, only as a subcategory of opinions whose specific content revolves about relative abilities. To the extent that opinions are actually inferences that people make, the present theory of the inference process should be applicable to the social comparison domain, just as it is applicable to any domain of inferential contents in which psychologists might be interested.

Rationality in human inference

Most current approaches in the area of social cognition seem tacitly committed to the position that it is possible to discriminate between levels of rationality and to pinpoint procedures that will ensure the veridicality of one's inferences. Within clinical psychology, for example, 'normals' are generally considered to be more rational and to have more veridical perceptions of the world than 'neurotics'. In developmental (in particular, Piagetian) psychology, adult human beings are considered more rational and conceptually accurate than children. In cultural anthropology, members of Western civilization were considered more rational and accurate than aborigines (Levy-Bruhl, 1966). Within contemporary cognitive psychology, scientists and statisticians are considered more rational and less error-prone than laypeople. Yet evidence seems to belie such distinctions in degrees of rationality. For instance, research by Alloy and Abramson (1979) suggests that, in many instances, judgments made by depressive neurotics are more 'veridical' or congruent with the experimenter's judgments than are judgments of normal individuals (see also Lazarus, 1981; Lewinsohn et al., in press). In cultural anthropology, the thesis of primitive irrationality has long been abandoned under the weight of contradictory evidence (Cole and Scribner, 1974). Modern philosophers of science never tire of stressing that the most widely accepted and revered scientific (or statistical) theories are mere conjectures of ultimately unknown veridicality.

Searching for the paragon of rationality may well have a 'will-o'-the-wisp' quality. It could lead right back into the content-process confusion discussed earlier. In any case, thus far the search for rationality has often led to mistaking beliefs of a given content (e.g. beliefs phrased in statistical terms, 'normal' beliefs, or Western beliefs) for the process (or method) of rational reasoning. One immediate practical implication of this approach has been the proposal that people be instructed in 'correct reasoning' by inculcating in them a specific set of propositions. The present analysis suggests that psychologists might well refrain from taking sides in the rationality debate.

ACKNOWLEDGEMENTS

We are grateful to Dieter Frey, Volker Gadenne, Kenneth Gergen, Avi Gotlieb, Ronnie Janoff-Bulman, Rachel Karniol, Norbert Schwarz, Fritz Strack, Shelagh Towson, and Yaacov Trope for their comments on earlier drafts of this paper.

REFERENCE NOTES

1. Hamill, R., Wilson, T. D. and Nisbett, R. E. (1979). 'Ignoring sample bias: Inferences about collectivities from atypical cases'. Unpublished manuscript, University of Michigan.


2. Farkash, E. (1980). 'Biases and errors in the process of inference: On the presumptive lesser accuracy of lay versus scientific judgments'. Unpublished master's thesis, Tel-Aviv University.

3. Kruglanski, A. W. and Meschiany, A. (1981). 'Overcoming belief perseverance'. Unpublished manuscript, Tel-Aviv University.

4. Freund, T. (1981). 'Freezing and unfreezing of primacy effects, ethnic stereotypes, and anchoring phenomena'. Unpublished master's thesis, Tel-Aviv University.

REFERENCES

Ajzen, I. (1977a). ‘Intuitive theories of events and the effects of base rate information on prediction’, Journal of Personality and Social Psychology, 35: 303-314.

Ajzen, I., Dalto, C. A. and Blyth, D. P. (1979). 'Consistency and bias in the attribution of attitudes', Journal of Personality and Social Psychology, 37: 1871-1876.

Ajzen, I. and Fishbein, M. (1975). 'A Bayesian analysis of attribution processes', Psychological Bulletin, 82: 261-277.

Ajzen, I. and Fishbein, M. (in press). 'Relevance and availability in the attribution process'. In: Jaspars, J., Fincham, F. and Hewstone, M. (Eds) Attribution Theory: Essays and Experiments, Academic Press, London.

Alloy, L. B. and Abramson, L. Y. (1979). 'Judgment of contingency in depressed and non-depressed college students: A nondepressive distortion', Journal of Experimental Psychology: General, 108: 441-485.

Arkin, R. and Duval, S. (1975). ‘Focus of attention and causal attributions of actors and observers’, Journal of Experimental Social Psychology, 11: 427-438.

Asch, S. E. (1946). 'Forming impressions of personality', Journal of Abnormal and Social Psychology, 41: 258-290.

Bar-Hillel, M. (1973). 'On the subjective probability of compound events', Organizational Behavior and Human Performance, 9: 396-406.

Bar-Hillel, M. (1979). 'The role of sample size in sample evaluation', Organizational Behavior and Human Performance, 24: 245-257.

Bar-Hillel, M. (1980). 'The base rate fallacy in probability judgments', Acta Psychologica, 6: 578-589.

Beckman, L. (1970). 'Effects of students' performance on teachers' and observers' attributions of causality', Journal of Educational Psychology, 61: 75-82.

Berman, J. S. and Kenny, D. A. (1976). 'Correlation bias in observer ratings', Journal of Personality and Social Psychology, 34: 263-273.

Berscheid, E., Graziano, W., Monson, T. and Dermer, M. (1976). 'Outcome dependency: Attention, attribution, and attraction', Journal of Personality and Social Psychology, 34: 978-989.

Borgida, E. and Brekke, N. (in press). 'The base rate fallacy in attribution and prediction'. In: Harvey, J. H., Ickes, W. J. and Kidd, R. F. (Eds) New Directions in Attribution Research, Vol. 3, Erlbaum, Hillsdale, N.J.

Bradley, G. W. (1978). 'Self-serving biases in the attribution process: A reexamination of the fact or fiction question', Journal of Personality and Social Psychology, 36: 56-71.

Bruner, J. S. (1957). 'Going beyond the information given'. In: Bruner, J. S. et al. (Eds) Contemporary Approaches to Cognition, Harvard University Press, Cambridge, Mass.

Bruner, J. S. and Postman, L. (1947). 'Emotional selectivity in perception and reaction', Journal of Personality, 16: 69-71.

Campbell, D. T. (1969). 'Prospective: Artifact and control'. In: Rosenthal, R. and Rosnow, R. L. (Eds) Artifact in Behavioral Research, Academic Press, New York.

Campbell, D. T. (1974). 'Evolutionary epistemology'. In: Schilpp, P. A. (Ed.) The Philosophy of Karl Popper, Vol. 14, I and II, The Library of Living Philosophers, Open Court Publishing Co., La Salle, Ill.

Carnap, R. (1971). Studies in Inductive Logic and Probability, University of California Press, Berkeley.

Chapman, L. J. and Chapman, J. P. (1967). 'Genesis of popular but erroneous psycho-diagnostic observation', Journal of Abnormal Psychology, 72: 193-204.

Cohen, L. J. (1981). 'Can human irrationality be experimentally demonstrated?', The Behavioral and Brain Sciences, 4: 317-370.

Cole, M. and Scribner, S. (1974). Culture and Thought: A Psychological Introduction, Wiley, New York.

Duval, S. and Wicklund, R. A. (1972). A Theory of Objective Self-awareness, Academic Press, New York.

Feather, N. T. (1969). 'Attribution of responsibility and valence of success and failure in relation to initial confidence and task performance', Journal of Personality and Social Psychology, 13: 129-144.

Feldman, N. S., Higgins, E. T., Karlovac, M. and Ruble, D. N. (1976). 'Use of consensus information in causal attribution as a function of temporal presentation and availability of direct information', Journal of Personality and Social Psychology, 34: 694-698.

Festinger, L. (1954). 'A theory of social comparison processes', Human Relations, 7: 117-140.

Festinger, L. (1957). A Theory of Cognitive Dissonance, Row, Peterson, Evanston, Ill.

Feyerabend, P. (1976). 'On the critique of scientific reason'. In: Cohen, R. and Wartofsky, M. (Eds) Boston Studies in the Philosophy of Science, Vol. 34, D. Reidel, Boston.

Fischhoff, B. (1976). 'Attribution theory and judgment under uncertainty'. In: Harvey, J. H., Ickes, W. J. and Kidd, R. F. (Eds) New Directions in Attribution Research, Vol. 1, Erlbaum, Hillsdale, N.J.

Fischhoff, B., Slovic, P. and Lichtenstein, S. (1979). 'Improving intuitive judgment by subjective sensitivity analysis', Organizational Behavior and Human Performance, 23: 339-359.

Fishbein, M. and Ajzen, I. (1973). 'Attribution of responsibility: A theoretical note', Journal of Experimental Social Psychology, 9: 148-153.

Freedman, J. L. and Sears, D. O. (1965). 'Selective exposure'. In: Berkowitz, L. (Ed.) Advances in Experimental Social Psychology, Vol. 2, Academic Press, New York.

Frenkel-Brunswik, E. (1949). 'Intolerance of ambiguity as an emotional and perceptual personality variable', Journal of Personality, 18: 103-143.

Graumann, C. F. and Sommer, M. (in press). 'Schema and inference models in cognitive social psychology', International Journal of Theoretical Psychology: Annals, Vol. 1.

Hamilton, D. L. (1979). 'A cognitive attributional analysis of stereotyping'. In: Berkowitz, L. (Ed.) Advances in Experimental Social Psychology, Vol. 12, Academic Press, New York.

Hammerton, M. (1973). 'A case of radical probability estimation', Journal of Experimental Psychology, 101: 252-254.

Harvey, J. H., Ickes, W. J. and Kidd, R. F. (1976, 1978). New Directions in Attribution Research, Vols 1 and 2, Erlbaum, Hillsdale, N.J.

Heider, F. (1958). The Psychology of Interpersonal Relations, Wiley, New York.

Higgins, E. T., Rholes, W. J. and Jones, C. R. (1977). 'Category accessibility and impression formation', Journal of Experimental Social Psychology, 13: 141-154.

Hintikka, J. (1968). 'The varieties of information and scientific explanation'. In: van Rootselaar, B. and Staal, J. F. (Eds) Logic, Methodology and the Philosophy of Science, North Holland, Amsterdam.

James, W. (1890). Principles of Psychology, Holt, New York.

Johnson, T. J., Feigenbaum, R. and Weiby, M. (1964). 'Some determinants and consequences of the teacher's perception of causation', Journal of Educational Psychology, 55: 237-246.

Jones, E. E. (1979). 'The rocky road from acts to dispositions', American Psychologist, 34: 107-117.

Jones, E. E. and Davis, K. E. (1965). 'From acts to dispositions'. In: Berkowitz, L. (Ed.) Advances in Experimental Social Psychology, Vol. 2, Academic Press, New York.

Jones, E. E. and DeCharms, R. (1957). 'Changes in social perception as a function of the personal relevance of behavior', Sociometry, 20: 75-85.

Jones, E. E. and Harris, V. A. (1967). 'The attribution of attitudes', Journal of Experimental Social Psychology, 3: 1-24.

Jones, E. E. and Nisbett, R. E. (1971). The Actor and the Observer: Divergent Perceptions of the Causes of Behavior, General Learning Press, New York.

Jones, E. E. and Thibaut, J. W. (1958). 'Interaction goals as bases of inference in interpersonal perception'. In: Tagiuri, R. and Petrullo, L. (Eds) Person Perception and Interpersonal Behavior, Stanford University Press, Stanford, CA.

Kahneman, D. and Tversky, A. (1973). 'On the psychology of prediction', Psychological Review, 80: 237-251.

Kelley, H. H. (1967). 'Attribution theory in social psychology'. In: Levine, D. (Ed.) Nebraska Symposium on Motivation, University of Nebraska Press, Lincoln, Nebraska.

Kelley, H. H. (1971). Attribution in Social Interaction, General Learning Press, New York.

Kelley, H. H. (1972). Causal Schemata and the Attribution Process, General Learning Press, New York.

Kelley, H. H. (1973). 'The process of causal attribution', American Psychologist, 28: 107-128.

Kelley, H. H. and Michela, J. L. (1980). 'Attribution theory and research', Annual Review of Psychology, 31: 457-501.

Kruglanski, A. W. (1975). 'The endogenous-exogenous partition in attribution theory', Psychological Review, 82: 387-406.

Kruglanski, A. W. (1979). 'Causal explanation, teleological explanation: On radical particularism in attribution theory', Journal of Personality and Social Psychology, 37: 1447-1457.

Kruglanski, A. W. (1980). 'Lay epistemo-logic - process and contents', Psychological Review, 87: 70-87.

Kruglanski, A. W., Hamel, I. A., Maides, S. A. and Schwartz, J. M. (1978). 'Attribution theory as a special case of lay epistemology'. In: Harvey, J. H., Ickes, W. J. and Kidd, R. F. (Eds) New Directions in Attribution Research, Vol. 2, Erlbaum, Hillsdale, N.J.

Kruglanski, A. W. and Jaffe, Y. (in press). 'Lay epistemology: A theory for cognitive therapy'. In: Abramson, L. Y. (Ed.) An Attributional Perspective in Clinical Psychology, Guilford Press, New York.

Kuhn, T. S. (1962). The Structure of Scientific Revolutions, The University of Chicago Press, Chicago.

Kuhn, T. S. (1970). 'Logic of discovery or psychology of research?' In: Lakatos, I. and Musgrave, A. (Eds) Criticism and the Growth of Knowledge, Cambridge University Press, Cambridge.

Lakatos, I. (1968). 'Criticism and the methodology of scientific research programmes', Proceedings of the Aristotelian Society, 69: 149-186.

Lazarus, R. S. (1981). 'Cognitive behavior therapy as psychodynamics revisited'. In: Mahoney, M. J. (Ed.) Cognition and Clinical Science, Plenum Press, New York.

Lerner, M. J. (1965). 'Evaluation of performance as a function of performer's reward and attractiveness', Journal of Personality and Social Psychology, 1: 355-360.

Lerner, M. J. (1970). 'The desire for justice and reactions to victims'. In: Macaulay, J. and Berkowitz, L. (Eds) Altruism and Helping Behavior, Academic Press, New York.

Lerner, M. J. and Mathews, G. (1967). 'Reactions to suffering of others under conditions of indirect responsibility', Journal of Personality and Social Psychology, 5: 319-325.

Lerner, M. J., Miller, D. T. and Holmes, J. G. (1976). 'Deserving and the emergence of forms of justice'. In: Berkowitz, L. and Walster, E. (Eds) Advances in Experimental Social Psychology, Vol. 9, Academic Press, New York.

Levy-Bruhl, L. (1966). How Natives Think (translated from French), Washington Square Press, New York.

Lewinsohn, P. M., Mischel, W., Chaplin, W. and Barton, R. (in press). 'Social competence and depression: The role of illusory self perceptions', Journal of Abnormal Psychology.

Luchins, A. S. (1957). 'Experimental attempts to minimize the impact of first impressions'. In: Hovland, C. I. (Ed.) The Order of Presentation in Persuasion, Yale University Press, New Haven, CT.

Lyon, D. and Slovic, P. (1976). 'Dominance of accuracy information and neglect of base rates in probability estimation', Acta Psychologica, 40: 287-298.

Masterman, M. (1970). 'The nature of a paradigm'. In: Lakatos, I. and Musgrave, A. (Eds) Criticism and the Growth of Knowledge, Cambridge University Press, Cambridge.

McArthur, L. A. (1972). 'The how and what of why: Some determinants and consequences of causal attribution', Journal of Personality and Social Psychology, 22: 171-193.

McArthur, L. Z. (1976). 'The lesser influence of consensus than distinctiveness information on causal attributions: A test of the person-thing hypothesis', Journal of Personality and Social Psychology, 33: 733-742.

McArthur, L. Z. (1981). 'What grabs you? The role of attention in impression formation and causal attribution'. In: Higgins, E. T., Herman, P. and Zanna, M. P. (Eds) The Ontario Symposium on Personality and Social Psychology, Vol. 1, Lawrence Erlbaum Associates, Hillsdale, New Jersey.

McArthur, L. Z. and Post, D. (1977). 'Figural emphasis and person perception', Journal of Experimental Social Psychology, 13: 520-535.

McGinnies, E. (1949). 'Emotionality and perceptual defense', Psychological Review, 56: 244-251.

McGuire, W. J. (1960). 'A syllogistic analysis of cognitive relationships'. In: Hovland, C. I. and Rosenberg, M. J. (Eds) Attitude Organization and Change, Yale University Press, New Haven, Conn.

Miller, D. T. (1976). 'Ego involvement and attributions for success and failure', Journal of Personality and Social Psychology, 34: 901-906.

Miller, D. T., Norman, S. A. and Wright, E. (1978). 'Distortion in person perception as a consequence of the need for effective control', Journal of Personality and Social Psychology, 36: 598-607.

Miller, D. T. and Ross, M. (1975). 'Self-serving biases in the attribution of causality: Fact or fiction?' Psychological Bulletin, 82: 213-225.

Monson, T. C. and Snyder, M. (1977). 'Actors, observers, and the attribution process', Journal of Experimental Social Psychology, 13: 89-111.

Nisbett, R. E. and Bellows, N. (1977). 'Verbal reports about causal influences on social judgments: Private access versus public theories', Journal of Personality and Social Psychology, 35: 613-624.

Nisbett, R. E. and Borgida, E. (1975). 'Attribution and the psychology of prediction', Journal of Personality and Social Psychology, 32: 932-943.

Nisbett, R. E. and Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment, Prentice-Hall, Englewood Cliffs, N.J.

Nisbett, R. E. and Wilson, T. D. (1977a). 'Telling more than we can know: Verbal reports on mental processes', Psychological Review, 84: 231-259.

Nisbett, R. E. and Wilson, T. D. (1977b). 'The halo effect: Evidence for unconscious alteration of judgment', Journal of Personality and Social Psychology, 35: 250-256.

Peterson, C. R. and Beach, L. R. (1967). 'Man as an intuitive statistician', Psychological Bulletin, 68: 29-46.

Popper, K. R. (1959). The Logic of Scientific Discovery, Harper, New York.

Popper, K. R. (1966). Conjectures and Refutations, Basic Books, New York.

Popper, K. R. (1973). Objective Knowledge: An Evolutionary Approach, Clarendon, Oxford.

Ross, L. (1977). 'The intuitive psychologist and his shortcomings'. In: Berkowitz, L. (Ed.) Advances in Experimental Social Psychology, Vol. 10, Academic Press, New York.

Ross, L., Amabile, T. M. and Steinmetz, J. L. (1977). 'Social roles, social control, and biases in social-perception processes', Journal of Personality and Social Psychology, 35: 485-494.

Ross, L. and Lepper, M. R. (1980). 'The perseverance of beliefs: Empirical and normative considerations'. In: Shweder, R. A. and Fiske, D. (Eds) New Directions for Methodology of Behavioral Science: Fallible Judgment in Behavioral Research, Jossey-Bass, San Francisco.

Ross, L., Lepper, M. R. and Hubbard, M. (1975). 'Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm', Journal of Personality and Social Psychology, 32: 880-892.

Ross, M. and Sicoly, F. (1979). 'Egocentric biases in availability and attribution', Journal of Personality and Social Psychology, 37: 322-336.

Ross, M. and Fletcher, G. (in press). 'Social and cultural factors in cognition'. In: Lindzey, G. and Aronson, E. (Eds) Handbook of Social Psychology, 3rd edn, Addison-Wesley, Reading, MA.

Ruble, D. N. and Feldman, N. S. (1976). 'Order of consensus, distinctiveness and consistency information and causal attributions', Journal of Personality and Social Psychology, 34: 930-937.

Rumelhart, D. E. and Ortony, A. (1977). 'The representation of knowledge in memory'. In: Anderson, R. C., Spiro, R. J. and Montague, W. E. (Eds) Schooling and the Acquisition of Knowledge, Erlbaum, Hillsdale, N.J.

Schneider, D. J. (1973). 'Implicit personality theory: A review', Psychological Bulletin, 79: 294-309.

Shweder, R. A. (1975). 'How relevant is an individual difference theory of personality?' Journal of Personality, 43: 455-484.

Slovic, P., Fischhoff, B. and Lichtenstein, S. (1977). 'Behavioral decision theory', Annual Review of Psychology, 28: 1-39.

Smith, E. R. and Miller, F. D. (1978). 'Limits on perception of cognitive processes: A reply to Nisbett and Wilson', Psychological Review, 85: 355-362.

Smock, C. D. (1955). 'The influence of psychological stress on the "intolerance of ambiguity"', Journal of Abnormal and Social Psychology, 50: 177-182.

Snyder, M. and Cantor, N. (1979). 'Testing hypotheses about other people: The use of historical knowledge', Journal of Experimental Social Psychology, 15: 330-342.

Snyder, M. L. and Frankel, A. (1976). 'Observer bias: A stringent test of behavior engulfing the field', Journal of Personality and Social Psychology, 34: 857-864.

Snyder, M. L. and Wicklund, R. A. (in press). 'Attribute ambiguity'. In: Harvey, J. H., Ickes, W. and Kidd, R. F. (Eds) New Directions in Attribution Research, Vol. 3, Erlbaum, Hillsdale, N.J.

Srull, T. K. and Wyer, R. S., Jr. (1979). 'The role of category accessibility in the interpretation of information about persons: Some determinants and implications', Journal of Personality and Social Psychology, 37: 1660-1672.

Taylor, S. and Fiske, S. (1975). 'Point of view and perception of causality', Journal of Personality and Social Psychology, 32: 439-445.

Taylor, S. E. and Fiske, S. T. (1978). 'Salience, attention and attribution: Top of the head phenomena'. In: Berkowitz, L. (Ed.) Advances in Experimental Social Psychology, Vol. 11, Academic Press, New York.

Tetlock, P. E. and Levi, A. (1982). 'Attribution bias: On the inconclusiveness of the cognition-motivation debate', Journal of Experimental Social Psychology, 18: 68-88.

Tolman, E. C. (1948). 'Cognitive maps in rats and men', Psychological Review, 55: 189-208.

Trope, Y. (1978). 'Inferences of personal characteristics on the basis of information retrieved from one's own memory', Journal of Personality and Social Psychology, 36: 93-106.

Turner, M. B. (1965). Philosophy and the Science of Behavior, Appleton-Century-Crofts, New York.

Tversky, A. and Kahneman, D. (1971). 'Belief in the law of small numbers', Psychological Bulletin, 76: 105-110.

Tversky, A. and Kahneman, D. (1973). 'Availability: A heuristic for judging frequency and probability', Cognitive Psychology, 5: 207-232.

Tversky, A. and Kahneman, D. (1974). 'Judgment under uncertainty: Heuristics and biases', Science, 185: 1124-1131.

Tversky, A. and Kahneman, D. (1980). 'Causal schemas in judgments under uncertainty'. In: Fishbein, M. (Ed.) Progress in Social Psychology, Vol. 1, Erlbaum, Hillsdale, N.J.

Tyler, T. R. (1980). 'Impact of directly and indirectly experienced events: The origin of crime-related judgments and behaviors', Journal of Personality and Social Psychology, 39: 13-28.

Walster, E. (1966). 'Assignment of responsibility for an accident', Journal of Personality and Social Psychology, 3: 73-79.

Wason, P. C. and Johnson-Laird, P. N. (1972). Psychology of Reasoning: Structure and Content, B. T. Batsford, London.

Weimer, W. B. (1976). Psychology and the Conceptual Foundations of Science, Erlbaum, Hillsdale, N.J.

Weiner, B. (1974). Achievement Motivation and Attribution Theory, General Learning Press, Morristown, N.J.

Weiner, B., Frieze, I., Kukla, A., Reed, L., Rest, S. and Rosenbaum, R. (1971). Perceiving the Causes of Success and Failure, General Learning Press, Morristown, N.J.

Wells, G. L. and Harvey, J. H. (1977). 'Do people use consensus information in making causal attributions?' Journal of Personality and Social Psychology, 35: 279-293.

Woodworth, R. S. and Sells, S. B. (1935). 'An atmosphere effect in formal syllogistic reasoning', Journal of Experimental Psychology, 18: 451-460.

Wundt, W. (1862). Beiträge zur Theorie der Sinneswahrnehmung, Winter, Leipzig.

Zuckerman, M. (1979). 'Attribution of success and failure revisited, or: The motivational bias is alive and well in attribution theory', Journal of Personality, 47: 245-287.



Author's Address: Dr Arie W. Kruglanski, Department of Psychology, Tel Aviv University, Ramat Aviv, Tel Aviv 69978, Israel.


Editorial Note: Rejoinders or critical comments on this article are welcome. Please send these (3 copies) to Professor J.-Ph. Leyens, Faculté de Psychologie, 20 Voie du Roman Pays, B-1348 Louvain-la-Neuve, Belgium.