Ten Little Treasures of Game Theory and Ten Intuitive Contradictions

By Jacob K. Goeree and Charles A. Holt*

This paper reports laboratory data for games that are played only once. These games span the standard categories: static and dynamic games with complete and incomplete information. For each game, the treasure is a treatment in which behavior conforms nicely to predictions of the Nash equilibrium or relevant refinement. In each case, however, a change in the payoff structure produces a large inconsistency between theoretical predictions and observed behavior. These contradictions are generally consistent with simple intuition based on the interaction of payoff asymmetries and noisy introspection about others’ decisions. (JEL C72, C92)

The Nash equilibrium has been the centerpiece of game theory since its introduction about 50 years ago. Along with supply and demand, the Nash equilibrium is one of the most commonly used theoretical constructs in economics, and it is increasingly being applied in other social sciences. Indeed, game theory has finally gained the central role envisioned by John von Neumann and Oscar Morgenstern, and in some areas of economics (e.g., industrial organization) virtually all recent theoretical developments are applications of game theory. The impression one gets from recent surveys and game theory textbooks is that the field has reached a comfortable maturity, with neat classifications of games and successively stronger (more “refined”) versions of the basic approach being appropriate for more complex categories of games: Nash equilibrium for static games with complete information, Bayesian Nash for static games with incomplete information, subgame perfectness for dynamic games with complete information, and some refinement of the sequential Nash equilibrium for dynamic games with incomplete information (e.g., Robert Gibbons, 1997). The rationality assumptions that underlie this analysis are often preceded by persuasive adjectives like “perfect,” “intuitive,” and “divine.” If any noise in decision-making is admitted, it is eliminated in the limit in a process of “purification.” It is hard not to notice parallels with theology, and the highly mathematical nature of the developments makes this work about as inaccessible to mainstream economists as medieval treatises on theology would have been to the general public.

* Goeree: Department of Economics, 114 Rouss Hall, University of Virginia, P.O. Box 400182, Charlottesville, VA 22904, and University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands (e-mail: [email protected]); Holt: Department of Economics, 114 Rouss Hall, University of Virginia, P.O. Box 400182, Charlottesville, VA 22904 (e-mail: [email protected]). We wish to thank Rachel Parkin and Scott Saiers for research assistance, and Colin Camerer, Monica Capra, Glenn Harrison, Susan Laury, Melayne McInnes, Theo Offerman, Amnon Rapoport, Joel Sobel, and an anonymous referee for helpful comments. This research was funded in part by the National Science Foundation (SES–9818683).

The discordant note in this view of game theory has come primarily from laboratory experiments, but the prevailing opinion among game theorists seems to be that behavior will eventually converge to Nash predictions under the right conditions.1 This paper presents a much more unsettled perspective of the current state of game theory. In each of the major types of games, we present one or more examples for which the relevant version of the Nash equilibrium predicts remarkably well. These “treasures” are observed in games played only once by financially motivated subjects who have had prior experience in other, similar, strategic situations. In each of these games, however, we show that a change in the payoff structure can produce a large inconsistency between theoretical prediction(s) and human behavior. For example, a payoff change that does not alter the unique Nash equilibrium may move the data to the opposite side of the range of feasible decisions. Alternatively, a payoff change may cause a major shift in the game-theoretic prediction and have no noticeable effect on actual behavior. The observed contradictions are typically somewhat intuitive, even though they are not explained by standard game theory. In a simultaneous effort-choice coordination game, for example, an increase in the cost of players’ “effort” decisions is shown to cause a dramatic decrease in effort, despite the fact that any common effort is a Nash equilibrium for a range of effort costs. In some of these games, it seems like the Nash equilibrium works only by coincidence, e.g., in symmetric cases where the costs of errors in each direction are balanced. In other cases, the Nash equilibrium has considerable drawing power, but economically significant deviations remain to be explained.

1 For example, George J. Mailath’s (1998) survey of evolutionary models cites the failure of backward induction as the main cause of behavioral deviations from Nash predictions.

The idea that game theory should be tested with laboratory experiments is as old as the notion of a Nash equilibrium, and indeed, the classic prisoner’s dilemma paradigm was inspired by an experiment conducted at the RAND Corporation in 1950. Some of the strategic analysts at RAND were dissatisfied with the received theory of cooperative and zero-sum games in von Neumann and Morgenstern’s (1944) Theory of Games and Economic Behavior. In particular, nuclear conflict was not thought of as a zero-sum game because both parties may lose. Sylvia Nasar (1998) describes the interest at RAND when word spread that a Princeton graduate student, John Nash, had generalized von Neumann’s existence proof for zero-sum games to the class of all games with finite numbers of strategies. Two mathematicians, Melvin Dresher and Merrill Flood, had been running some game experiments with their colleagues, and they were skeptical that human behavior would be consistent with Nash’s notion of equilibrium. In fact, they designed an experiment that was run on the same day they heard about Nash’s proof. Each player in this game had a dominant strategy to defect, but both would earn more if they both used the cooperative strategy. The game was repeated 100 times with the same two players, and a fair amount of cooperation was observed. One of Nash’s professors, Al W. Tucker, saw the payoffs for this game written on a blackboard, and he invented the prisoner’s dilemma story that was later used in a lecture on game theory that he gave in the Psychology Department at Stanford (Tucker, 1950).

Interestingly, Nash’s response to Dresher and Flood’s repeated prisoner’s dilemma experiment is contained in a note to the authors that was published as a footnote to their paper:

The flaw in the experiment as a test of equilibrium point theory is that the experiment really amounts to having the players play one large multi-move game. One cannot just as well think of the thing as a sequence of independent games as one can in zero-sum cases. There is just too much interaction ... (Nasar, 1998 p. 119).

In contrast, the experiments that we report in this paper involved games that were played only once, although related results for repeated games with random matching will be cited where appropriate. As Nash noted, the advantage of one-shot games is that they insulate behavior from the incentives for cooperation and reciprocity that are present in repeated games. One potential disadvantage of one-shot games is that, without an opportunity to learn and adapt, subjects may be especially prone to the effects of confusion. The games used in this paper, however, are simple enough in structure to ensure that Nash-like behavior can be observed in the “treasure” treatments. In addition, the study of games played only once is of independent interest given the widespread applications of game theory to model one-shot interactions in economics and other social sciences, e.g., the FCC license auctions, elections, military campaigns, and legal disputes.

The categories of games to be considered are based on the usual distinctions: static versus dynamic and complete versus incomplete information. Section I describes the experiments based on static games with complete information: social dilemma, matching pennies, and coordination games. Section II contains results from dynamic games with complete information: bargaining games and games with threats that are not credible. The games reported in Sections III and IV have incomplete information about other players’ payoffs: in static settings (auctions) and two-stage settings (signaling games).

It is well known that decisions can be affected by psychological factors such as framing, aspiration levels, social distance, and heuristics (e.g., Daniel Kahneman et al., 1982; Catherine Eckel and Rick Wilson, 1999). In this paper we try to hold psychological factors constant and focus on payoff changes that are primarily economic in nature. As noted below, economic theories can and are being devised to explain the resulting anomalies. For example, the rational choice assumption underlying the notion of a Nash equilibrium eliminates all errors, but if the costs of “overshooting” an optimal decision are much lower than the costs of “undershooting,” one might expect an upward bias in decisions. In a game, the endogenous effects of such biases may be reinforcing in a way that creates a “snowball” effect that moves decisions well away from a Nash prediction. Models that introduce (possibly small) amounts of noise into the decision-making process can produce predictions that are quite far from any Nash equilibrium (Richard D. McKelvey and Thomas R. Palfrey, 1995, 1998; Goeree and Holt, 1999). Equilibrium models of noisy behavior have been used to explain behavior in a variety of contexts, including jury decision-making, bargaining, public goods games, imperfect price competition, and coordination (Simon P. Anderson et al., 1998a, b, 2001a; C. Monica Capra et al., 1999, 2002; Stanley S. Reynolds, 1999; Serena Guarnaschelli et al., 2001).

A second type of rationality assumption that is built into the Nash equilibrium is that beliefs are consistent with actual decisions. Beliefs are not likely to be confirmed out of equilibrium, and learning will presumably occur in such cases. There is a large recent literature on incorporating learning into models of adjustment in games that are played repeatedly with different partners.2 These models include adaptive learning (e.g., Vincent P. Crawford, 1995; David J. Cooper et al., 1997), naive Bayesian learning (e.g., Jordi Brandts and Holt, 1995; Dilip Mookherjee and Barry Sopher, 1997), reinforcement or stimulus-response learning (e.g., Ido Erev and Alvin E. Roth, 1998), and hybrid models with elements of both belief and reinforcement learning (Colin Camerer and Teck-Hua Ho, 1999). Learning from experience is not possible in games that are only played once, and beliefs must be formed from introspective thought processes, which may be subject to considerable noise. Without noise, iterated best responses will converge to a Nash equilibrium, if they converge at all. Some promising approaches to explaining deviations from Nash predictions are based on models that limit players’ capacities for introspection, either by limiting the number of iterations (e.g., Dale O. Stahl and Paul W. Wilson, 1995; Rosemarie Nagel, 1995) or by injecting increasing amounts of noise into higher levels of iterated beliefs (Goeree and Holt, 1999; Dorothea Kübler and Georg Weizsäcker, 2000). The predictions derived from these approaches, discussed in Section V, generally conform to Nash predictions in the treasure treatments and to the systematic intuitive deviations in the contradiction treatments. Some conclusions are offered in Section VI.

2 See, for instance, Drew Fudenberg and David K. Levine (1998) for a survey.
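To make the iterated-beliefs idea concrete, here is a minimal sketch of a noisy-introspection calculation of the kind described above, simplified to a constant logit noise parameter rather than the increasing noise levels used by Goeree and Holt (1999). The payoff matrices, the noise level mu, and the number of iterations are illustrative assumptions, not values from the paper.

```python
import numpy as np

def logit_response(payoffs, opponent_mix, mu):
    """Logit (noisy) best response: choice probabilities are proportional to
    exp(expected payoff / mu); as mu -> 0 this approaches the exact best response."""
    expected = payoffs @ opponent_mix                      # expected payoff of each own action
    weights = np.exp((expected - expected.max()) / mu)
    return weights / weights.sum()

def noisy_introspection(row_payoffs, col_payoffs, mu=10.0, iterations=5):
    """Start from uniform beliefs and iterate noisy responses a few times
    (both players are updated simultaneously from the previous beliefs)."""
    row_mix = np.array([0.5, 0.5])
    col_mix = np.array([0.5, 0.5])
    for _ in range(iterations):
        row_mix, col_mix = (logit_response(row_payoffs, col_mix, mu),
                            logit_response(col_payoffs.T, row_mix, mu))
    return row_mix, col_mix

# Illustrative 2x2 payoffs in cents (rows = own actions, columns = other's actions).
row_pay = np.array([[80, 40], [40, 80]])
col_pay = np.array([[40, 80], [80, 40]])
print(noisy_introspection(row_pay, col_pay))
```

With symmetric payoffs the iteration stays near the 50-50 mix; asymmetric payoffs pull the noisy responses away from the Nash mixture, which is the qualitative pattern discussed in Section V.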

I. Static Games with Complete Information

In this section we consider a series of two-player, simultaneous-move games, for which the Nash equilibria show an increasing degree of complexity. The first game is a “social dilemma” in which the pure-strategy Nash equilibrium coincides with the unique rationalizable outcome. Next, we consider a matching pennies game with a unique Nash equilibrium in mixed strategies. Finally, we discuss coordination games that have multiple Nash equilibria, some of which are better for all players.

In all of the games reported here and in subsequent sections, we used cohorts of student subjects recruited from undergraduate economics classes at the University of Virginia. Each cohort consisted of ten students who were paid $6 for arriving on time, plus all cash they earned in the games played. These one-shot games followed an initial “part A” in which the subjects played the same two-person game for ten periods with new pairings made randomly in each period.3 Earnings for the two-hour sessions ranged from $15 to $60, with an average of about $35. Each one-shot game began with the distribution and reading of the instructions for that game.4 These instructions contained assurances that all money earned would be paid and that the game would be followed by “another, quite different, decision-making experiment.” Since the one-shot treatments were paired, we switched the order of the treasure and contradiction conditions in each subsequent session. Finally, the paired treatments were always separated by other one-shot games of a different type.

3 We only had time to run about six one-shot games in each session, so the data are obtained from a large number of sessions where part A involved a wide range of repeated games, including public goods, coordination, price competition, and auction games that are reported in other papers. The one-shot games never followed a repeated game of the same type.

4 These instructions can be downloaded from http://www.people.virginia.edu/~cah2k/datapage.html.

A. The One-Shot Traveler’s Dilemma Game

The Nash equilibrium concept is based on the twin assumptions of perfect error-free decision-making and the consistency of actions and beliefs. The latter requirement may seem especially strong in the presence of multiple equilibria when there is no obvious way for players to coordinate. More compelling arguments can be given for the Nash equilibrium when it predicts the play of the unique justifiable, or rationalizable, action (B. Douglas Bernheim, 1984; David G. Pierce, 1984). Rationalizability is based on the idea that players should eliminate those strategies that are never a best response for any possible beliefs, and realize that other (rational) players will do the same.5

5 A well-known example for which this iterated deletion process results in a unique outcome is a Cournot duopoly game with linear demand (Fudenberg and Jean Tirole, 1993, pp. 47–48).

To illustrate this procedure, consider the game in which two players independently and simultaneously choose integer numbers between (and including) 180 and 300. Both players are paid the lower of the two numbers, and, in addition, an amount R > 1 is transferred from the player with the higher number to the player with the lower number. For instance, if one person chooses 210 and the other chooses 250, they receive payoffs of 210 + R and 210 − R respectively. Since R > 1, the best response is to undercut the other’s decision by 1 (if that decision were known), and therefore, the upper bound 300 is never a best response to any possible beliefs that one could have. Consequently, a rational person must assign a probability of zero to a choice of 300, and hence 299 cannot be a best response to any possible beliefs that rule out choices of 300, etc. Only the lower bound 180 survives this iterated deletion process and is thus the unique rationalizable action and hence the unique Nash equilibrium.6 This game was introduced by Kaushik Basu (1994), who coined it the “traveler’s dilemma” game.7

6 In other games, rationalizability may allow outcomes that are not Nash equilibria, so it is a weaker concept than that of a Nash equilibrium, allowing a wider range of possible behavior. It is in this sense that Nash is more persuasive when it corresponds to the unique rationalizable outcome.

7 The associated story is that two travelers purchase identical antiques while on a tropical vacation. Their luggage is lost on the return trip, and the airline asks them to make independent claims for compensation. In anticipation of excessive claims, the airline representative announces: “We know that the bags have identical contents, and we will entertain any claim between $180 and $300, but you will each be reimbursed at an amount that equals the minimum of the two claims submitted. If the two claims differ, we will also pay a reward R to the person making the smaller claim and we will deduct a penalty R from the reimbursement to the person making the larger claim.”

Although the Nash equilibrium for this game can be motivated by successively dropping those strategies that are never a best response (to any beliefs about strategies that have not been eliminated from consideration), this deletion process may be too lengthy for human subjects with limited cognitive abilities. When the cost of having the higher number is small, i.e., for small values of R, one might expect more errors in the direction of high claims, well away from the unique equilibrium at 180, and indeed this is the intuition behind the dilemma. In contrast, with a large penalty for having the higher of the two claims, players are likely to end up with claims that are near the unique Nash prediction of 180.
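The iterated deletion argument above can be checked mechanically. The sketch below is a simplification in that it considers only degenerate (point) beliefs about the other claim, but that is enough to reproduce the deletion down to the unique claim of 180 for any R > 1.

```python
def payoff(own, other, R):
    """Traveler's dilemma payoff: both are paid the lower claim, and R is
    transferred from the higher claimant to the lower claimant."""
    if own == other:
        return own
    low = min(own, other)
    return low + R if own < other else low - R

def iterated_deletion(R, low=180, high=300):
    """Repeatedly delete claims that are not a best response to any
    remaining (point) belief about the other player's claim."""
    claims = list(range(low, high + 1))
    while True:
        best_responses = set()
        for belief in claims:                       # each possible point belief
            best = max(payoff(a, belief, R) for a in claims)
            best_responses.update(a for a in claims if payoff(a, belief, R) == best)
        survivors = [c for c in claims if c in best_responses]
        if survivors == claims:
            return survivors
        claims = survivors

print(iterated_deletion(R=5))    # -> [180]
print(iterated_deletion(R=180))  # -> [180]
```

Note that the surviving claim is 180 regardless of R, which is exactly why the treatment effect reported below cannot be explained by the Nash prediction itself.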

To test these hypotheses we asked 50 subjects (25 pairs) to make choices in a treatment with R = 180, and again in a matched treatment with R = 5. All subjects made decisions in each treatment, and the two games were separated by a number of other one-shot games. The ordering of the two treatments was alternated. The instructions asked the participants to devise their own numerical examples to be sure that they understood the payoff structure.

Figure 1 shows the frequencies for each 10-cent category centered around the claim labeled on the horizontal axis. The lighter bars pertain to the high-R “treasure” treatment, where close to 80 percent of all the subjects chose the Nash equilibrium strategy, with an average claim of 201. However, roughly the same fraction chose the highest possible claim in the low-R treatment, for which the average was 280, as shown by the darker bars. Notice that the data in the contradiction treatment are clustered at the opposite end of the set of feasible decisions from the unique (rationalizable) Nash equilibrium.8

FIGURE 1. CLAIM FREQUENCIES IN A TRAVELER’S DILEMMA FOR R = 180 (LIGHT BARS) AND R = 5 (DARK BARS)

8 This result is statistically significant at all conventional levels, given the strong treatment effect and the relatively large number of independent observations (two paired observations for each of the 50 subjects). We will not report specific nonparametric tests for cases that are so clearly significant. The individual choice data are provided in the Data Appendix for this paper on: http://www.people.virginia.edu/~cah2k/datapage.html.

Moreover, the “anomalous” result for the low-R treatment does not disappear or even diminish over time when subjects play the game repeatedly and have the opportunity to learn.9 Since the treatment change does not alter the unique Nash (and rationalizable) prediction, standard game theory simply cannot explain the most salient feature of the data, i.e., the effect of the penalty/reward parameter on average claims.

9 In Capra et al. (1999), we report results of a repeated traveler’s dilemma game (with random matching). When subjects chose numbers in the range [80, 200] with R = 5, the average claim rose from approximately 180 in the first period to 196 in period 5, and the average remained above 190 in later periods. Different cohorts played this game with different values of R, and successive increases in R resulted in successive reductions in average claims. With a penalty/reward parameter of 5, 10, 20, 25, 50, and 80 the average claims in the final three periods were 195, 186, 119, 138, 85, and 81 respectively. Even though there is one treatment reversal, the effect of the penalty/reward parameter on average claims is significant at the 1-percent level. The patterns of adjustment are well explained by a naive Bayesian learning model with decision error, and the claim distributions for the final five periods are close to those predicted by a logit equilibrium (McKelvey and Palfrey, 1995).

B. A Matching Pennies Game

Consider a symmetric matching pennies game in which the row player chooses between Top and Bottom and the column player simultaneously chooses between Left and Right, as shown in the top part of Table 1. The payoff for the row player is $0.80 when the outcome is (Top, Left) or (Bottom, Right) and $0.40 otherwise. The motivations for the two players are exactly opposite: column earns $0.80 when row earns $0.40, and vice versa. Since the players have opposite interests there is no equilibrium in pure strategies. Moreover, in order not to be exploited by the opponent, neither player should favor one of their strategies, and the mixed-strategy Nash equilibrium involves randomizing over both alternatives with equal probabilities. As before, we obtained decisions from 50 subjects in a one-shot version of this game (five cohorts of ten subjects, who were randomly matched and assigned row or column roles). The choice percentages are shown in parentheses next to the decision labels in the top part of Table 1. Note that the choice percentages are essentially “50-50,” or as close as possible given that there was an odd number of players in each role.

TABLE 1—THREE ONE-SHOT MATCHING PENNIES GAMES (WITH CHOICE PERCENTAGES)

Symmetric matching pennies:
                 Left (48)    Right (52)
  Top (48)       80, 40       40, 80
  Bottom (52)    40, 80       80, 40

Asymmetric matching pennies:
                 Left (16)    Right (84)
  Top (96)       320, 40      40, 80
  Bottom (4)     40, 80       80, 40

Reversed asymmetry:
                 Left (80)    Right (20)
  Top (8)        44, 40       40, 80
  Bottom (92)    40, 80       80, 40

Now consider what happens if the row player’s payoff of $0.80 in the (Top, Left) box is increased to $3.20, as shown in the asymmetric matching pennies game in the middle part of Table 1. In a mixed-strategy equilibrium, a player’s own decision probabilities should be such that the other player is made indifferent between the two alternatives. Since the column player’s payoffs are unchanged, the mixed-strategy Nash equilibrium predicts that row’s decision probabilities do not change either. In other words, the row player should ignore the unusually high payoff of $3.20 and still choose Top or Bottom with probabilities of one-half. (Since column’s payoffs are either 40 or 80 for playing Left and either 80 or 40 for playing Right, row’s decision probabilities must equal one-half to keep column indifferent between Left and Right, and hence willing to randomize.)10 This counterintuitive prediction is dramatically rejected by the data, with 96 percent of the row players choosing the Top decision that gives a chance of the high $3.20 payoff. Interestingly, the column players seemed to have anticipated this, and they played Right 84 percent of the time, which is quite close to the equilibrium mixed strategy of 7/8. Next, we lowered the row player’s (Top, Left) payoff to $0.44, which again should leave the row player’s own choice probabilities unaffected in a mixed-strategy Nash equilibrium. Again the effect is dramatic, with 92 percent of the choices being Down, as shown in the bottom part of Table 1. As before, the column players seemed to have anticipated this reaction, playing Left 80 percent of the time. To summarize, the unique Nash prediction is for the bolded row-choice percentages to be unchanged at 50 percent in all three treatments. This prediction is violated in an intuitive manner, with row players’ choices responding to their own payoffs.11 In this context, the Nash mixed-strategy prediction seems to work only by coincidence, when the payoffs are symmetric.

10 The predicted equilibrium probabilities for the row player are not affected if we relax the assumption of risk neutrality. There are only two possible payoff levels for column so, without loss of generality, columns’ utilities for payoffs of 40 and 80 can be normalized to 0 and 1. Hence, even a risk-averse column player will only be indifferent when row uses choice probabilities of one-half.

11 This anomaly is persistent when subjects play the game repeatedly. Jack Ochs (1995a, b) investigates a matching pennies game with an asymmetry similar to that of the middle game in Table 1, and reports that the row players continue to select Top considerably more than one-half of the time, even after as many as 50 rounds. These results are replicated in McKelvey et al. (2000). Similarly, Goeree et al. (2000) report results for ten-period repeated matching pennies games that exactly match those in Table 1. The results are qualitatively similar but less dramatic than those in Table 1, with row’s choice probabilities showing strong “own-payoff” effects that are not predicted by the Nash equilibrium.
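The indifference logic behind these predictions can be verified directly: row’s equilibrium mixture depends only on column’s payoffs, which never change, while column’s mixture responds to row’s payoffs. The sketch below solves the two indifference conditions for the asymmetric game in the middle of Table 1.

```python
from fractions import Fraction

def mixed_equilibrium(row_pay, col_pay):
    """Mixed-strategy equilibrium of a 2x2 game with no pure equilibrium.
    Row's P(Top) is set to make column indifferent between Left and Right;
    column's P(Left) is set to make row indifferent between Top and Bottom."""
    (a, b), (c, d) = row_pay          # row's payoffs: [[TL, TR], [BL, BR]]
    (e, f), (g, h) = col_pay          # column's payoffs, same layout
    p_top = Fraction(h - g, e - f - g + h)    # column-indifference condition
    q_left = Fraction(d - b, a - b - c + d)   # row-indifference condition
    return p_top, q_left

# Asymmetric matching pennies from Table 1 (payoffs in cents).
p, q = mixed_equilibrium([[320, 40], [40, 80]], [[40, 80], [80, 40]])
print(p, q)   # 1/2 and 1/8: row still mixes 50-50, column plays Left 1/8 of the time
```

Running the same function on the symmetric and reversed-asymmetry payoffs shows how only column’s mixture moves with row’s own payoffs, which is the counterintuitive feature that the data reject.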

C. A Coordination Game with a Secure Outside Option

Games with multiple Nash equilibria pose interesting new problems for predicting behavior, especially when some equilibria produce higher payoffs for all players. The problem of coordinating on the high-payoff equilibrium may be complicated by the possible gains and losses associated with payoffs that are not part of any equilibrium outcome. Consider a coordination game in which players receive $1.80 if they coordinate on the high-payoff equilibrium (H, H), $0.90 if they coordinate on the low-payoff equilibrium (L, L), and they receive nothing if they fail to coordinate (i.e., when one player chooses H and the other L). Suppose that, in addition, the column player has a secure option S that yields $0.40 for column and results in a zero payoff for the row player. This game is given in Table 2 when x = 0. To analyze the Nash equilibria of this game, notice that for the column player a 50-50 combination of L and H dominates S, and a rational column player should therefore avoid the secure option. Eliminating S turns the game into a standard 2 × 2 coordination game that has three Nash equilibria: both players choosing L, both choosing H, and a mixed-strategy equilibrium in which both players choose L with probability 2/3.

TABLE 2—AN EXTENDED COORDINATION GAME

         L           H            S
  L      90, 90      0, 0         x, 40
  H      0, 0        180, 180     0, 40
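The dominance argument above is easy to verify numerically: whatever row does, a 50-50 mixture of L and H gives column an expected payoff of either 45 or 90 cents, which exceeds the 40 cents from S. A sketch using the column payoffs of Table 2 (in cents):

```python
# Column's payoffs in the extended coordination game of Table 2,
# indexed by (row action, column action).
COL_PAY = {('L', 'L'): 90, ('L', 'H'): 0,   ('L', 'S'): 40,
           ('H', 'L'): 0,  ('H', 'H'): 180, ('H', 'S'): 40}

def column_payoff(row_action, col_mix):
    """Column's expected payoff against a given row action when column
    plays the mixture col_mix = {action: probability}."""
    return sum(prob * COL_PAY[(row_action, a)] for a, prob in col_mix.items())

fifty_fifty = {'L': 0.5, 'H': 0.5}
for row_action in ('L', 'H'):
    print(row_action,
          column_payoff(row_action, fifty_fifty),     # 45 against L, 90 against H
          column_payoff(row_action, {'S': 1.0}))      # 40 from the secure option
```

Since the comparison does not involve x at all, the set of Nash equilibria is the same in both treatments, which is what makes the behavioral difference reported below a contradiction for standard theory.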

The Nash equilibria are independent of x, which is the payoff to the row player when (L, S) is the outcome, since the argument for eliminating S is based solely on column’s payoffs. However, the magnitude of x may affect the coordination process: for x = 0, row is indifferent between L and H when column selects S, and row is likely to prefer H when column does not select S (since then L and H have the same number of zero payoffs for row, but H has a higher potential payoff). Row is thus more likely to choose H, which is then also the optimal action for the column player. However, when x is large, say 400, the column player may anticipate that row will select L, in which case column should avoid H.

This intuition is borne out by the experimental data: in the treasure treatment with x = 0, 96 percent of the row players and 84 percent of the column players chose the high-payoff action H, while in the contradiction treatment with x = 400 only 64 percent of the row players and 76 percent of the column players chose H. The percentages of outcomes that were coordinated on the high-payoff equilibrium were 80 for the treasure treatment versus 32 for the contradiction treatment. In the latter treatment, an additional 16 percent of the outcomes were coordinated on the low-payoff equilibrium, but more than half of all the outcomes were uncoordinated, non-Nash outcomes.

D. A Minimum-Effort Coordination Game

The next game we consider is also a coordination game with multiple equilibria, but in this case the focus is on the effect of payoff asymmetries that determine the risks of deviating in the upward and downward directions. The two players in this game choose “effort” levels simultaneously, and the cost of effort determines the risk of deviation. The joint product is of the fixed-coefficients variety, so that each person’s payoff is the minimum of the two efforts, minus the product of the player’s own effort and a constant cost factor, c. In the experiment, we let efforts be any integer in the range from 110 to 170. If c < 1, any common effort in this range is a Nash equilibrium, because a unilateral one-unit increase in effort above a common starting point will not change the minimum but will reduce one’s payoff by c. Similarly, a one-unit decrease in effort will reduce one’s payoff by 1 − c, i.e., the reduction in the minimum is more than the savings in effort costs when c < 1. Obviously, a higher effort cost increases the risk of raising effort and reduces the risk of lowering effort. Thus simple intuition suggests that effort levels will be inversely related to effort costs, despite the fact that any common effort level is a Nash equilibrium.
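The deviation costs described above can be written out directly from the payoff function, the minimum of the two efforts minus c times one’s own effort. The common effort of 140 used below is an arbitrary point inside the 110–170 range, chosen only for illustration.

```python
def effort_payoff(own, other, c):
    """Minimum-effort coordination payoff: the minimum of the two efforts
    minus the effort cost c times the player's own effort."""
    return min(own, other) - c * own

def deviation_losses(common_effort, c):
    """Loss from a unilateral one-unit deviation up or down from a common effort."""
    base = effort_payoff(common_effort, common_effort, c)
    up_loss = base - effort_payoff(common_effort + 1, common_effort, c)
    down_loss = base - effort_payoff(common_effort - 1, common_effort, c)
    return up_loss, down_loss

for c in (0.1, 0.9):
    print(c, deviation_losses(140, c))
# c = 0.1: deviating up costs 0.1, deviating down costs 0.9 (floating-point rounding aside)
# c = 0.9: deviating up costs 0.9, deviating down costs 0.1
```

Both deviations are losses whenever c < 1, which is why every common effort is a Nash equilibrium; only the relative size of the two losses changes with c.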

We ran one treatment with a low effort cost of 0.1, and the data for 50 randomly matched subjects in this treatment are shown by the dark bars in Figure 2. Notice that behavior is quite concentrated at the highest effort level of 170; subjects coordinate on the Pareto-dominant outcome. The high effort cost treatment (c = 0.9), however, produced a preponderance of efforts at the lowest possible level, as can be seen by the lighter bars in the figure. Clearly, the extent of this “coordination failure” is affected by the key economic variable in this model, even though Nash theory is silent.12

FIGURE 2. EFFORT CHOICE FREQUENCIES FOR A MINIMUM-EFFORT COORDINATION GAME WITH HIGH EFFORT COST (LIGHT BARS) AND LOW EFFORT COST (DARK BARS)

12 The standard analysis of equilibrium selection in coordination is based on John C. Harsanyi and Reinhard Selten’s (1988) notion of risk dominance, which allows a formal analysis of the trade-off between risk and payoff dominance. Paul G. Straub (1995) reports experimental evidence for risk dominance as a selection criterion. There is no agreement on how to generalize risk dominance beyond 2 × 2 games, but see Anderson et al. (2001b) for a proposed generalization based on the “stochastic potential.” Experiments with repeated plays of coordination games have shown that behavior may begin near the Pareto-dominant equilibrium, but later converge to the equilibrium that is worst for all concerned (John B. Van Huyck et al., 1990). Moreover, the equilibrium that is selected may be affected by the payoff structure for dominated strategies (Russell Cooper et al., 1992). See Goeree and Holt (1998) for results of a repeated coordination game with random matching. They show that the dynamic patterns of effort choices are well explained by a simple evolutionary model of noisy adjustment toward higher payoffs, and that final-period effort decisions can be explained by the maximization of a stochastic potential function.


TABLE 3—TWO VERSIONS OF THE KREPS GAME (WITH CHOICE PERCENTAGES)

Basic game:
                 Left (26)    Middle (8)    Non-Nash (68)    Right (0)
  Top (68)       200, 50      0, 45         10, 30           20, −250
  Bottom (32)    0, −250      10, −100      30, 30           50, 40

Positive payoff frame:
                 Left (24)    Middle (12)   Non-Nash (64)    Right (0)
  Top (84)       500, 350     300, 345      310, 330         320, 50
  Bottom (16)    300, 50      310, 200      330, 330         350, 340


E. The Kreps Game

The previous examples demonstrate how the cold logic of game theory can be at odds with intuitive notions about human behavior. This tension has not gone unnoticed by some game theorists. For instance, David M. Kreps (1995) discusses a variant of the game in the top part of Table 3 (where we have scaled back the payoffs to levels that are appropriate for the laboratory). The pure-strategy equilibrium outcomes of this game are (Top, Left) and (Bottom, Right). In addition, there is a mixed-strategy equilibrium in which row randomizes between Top and Bottom and column randomizes between Left and Middle. The only column strategy that is not part of any Nash equilibrium is labeled Non-Nash. Kreps argues that column players will tend to choose Non-Nash because the other options yield at best a slightly higher payoff (i.e., 10, 15, or 20 cents higher) but could lead to substantial losses of $1 or $2.50. Notice that this intuition is based on payoff magnitudes out of equilibrium, in contrast to Nash calculations based only on signs of payoff differences.
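Kreps’s intuition can be quantified by computing column’s expected payoff from each decision under some belief about row’s play; the 50-50 belief used below is an illustrative assumption, not an estimate from the data. The payoffs are column’s payoffs from the basic game in Table 3, in cents.

```python
# Column's payoffs in the basic Kreps game of Table 3 (in cents).
COLUMN_PAYOFFS = {
    'Left':     {'Top': 50,   'Bottom': -250},
    'Middle':   {'Top': 45,   'Bottom': -100},
    'Non-Nash': {'Top': 30,   'Bottom': 30},
    'Right':    {'Top': -250, 'Bottom': 40},
}

def expected_payoffs(belief_top):
    """Column's expected payoff from each action, given P(row plays Top)."""
    return {action: belief_top * pay['Top'] + (1 - belief_top) * pay['Bottom']
            for action, pay in COLUMN_PAYOFFS.items()}

print(expected_payoffs(0.5))
# Left: -100, Middle: -27.5, Non-Nash: 30, Right: -105 -- Non-Nash looks safest.
```

Under a wide range of beliefs the small extra gains from Left, Middle, or Right are swamped by the possible losses, which is exactly the magnitude-based reasoning that the sign-based Nash logic ignores.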

Kreps did try the high-hypothetical-payoff version of this game on several graduate students, but let us consider what happens with financially motivated subjects in an anonymous laboratory situation. As before, we randomly paired 50 subjects and let them make a single choice. Subjects were told that losses would be subtracted from prior earnings, which were quite substantial by that point. As seen from the percentages in parentheses in the top part of the table, the Non-Nash decision was selected by approximately two-thirds of the column players. Of course, it is possible that this result is simply a consequence of “loss-aversion,” i.e., the disutility of losing some amount of money is greater than the utility associated with winning the same amount (Daniel Kahneman et al., 1991). Since all the other columns contain negative payoffs, loss-averse subjects would thus be naturally inclined to choose Non-Nash. Therefore, we ran another 50 subjects through the same game, but with 300 cents added to payoffs to avoid losses, as shown in the bottom part of Table 3. The choice percentages shown in parentheses indicate very little change, with close to two-thirds of column players choosing Non-Nash as before. Thus “loss aversion” biases are not apparent in the data, and do not seem to be the source of the prevalence of Non-Nash decisions. Finally, we ran 50 new subjects through the original version in the top part of the table, with the (Bottom, Right) payoffs of (50, 40) being replaced by (350, 400), which (again) does not alter the equilibrium structure of the game. With this admittedly heavy-handed enhancement of the equilibrium in that cell, we observed 96 percent Bottom choices and 84 percent Right choices, with 16 percent Non-Nash persisting in this, the “treasure” treatment.

II. Dynamic Games with Complete Information

As game theory became more widely used in fields like industrial organization, the complexity of the applications increased to accommodate dynamics and asymmetric information. One of the major developments coming out of these applications was the use of backward induction to eliminate equilibria with threats that are not “credible” (Selten, 1975). Backward induction was also used to develop solutions to alternating-offer bargaining games (Ariel Rubinstein, 1982), which was the first major advance on this historically perplexing topic since Nash’s (1950) axiomatic approach. However, there have been persistent doubts that people are able to figure out complicated, multistage backward induction arguments. Robert W. Rosenthal (1981) quickly proposed a game, later dubbed the “centipede game,” in which backward induction over a large number of stages (e.g., 100 stages) was thought to be particularly problematic (e.g., McKelvey and Palfrey, 1992). Many of the games in this section are inspired by Rosenthal’s (1981) doubts and Randolph T. Beard and Beil’s (1994) experimental results. Indeed, the anomalies in this section are better known than those in other sections, but we focus on very simple games with two or three stages, using parallel procedures and subjects who have previously made a number of strategic decisions in different one-shot games.

A. Should You Trust Others to Be Rational?

The power of backward induction is illustrated in the top game in Figure 3. The first player begins by choosing between a safe decision, S, and a risky decision, R. If R is chosen, the second player must choose between a decision P that punishes both of them and a decision N that leads to a Nash equilibrium that is also a joint-payoff maximum. There is, however, a second Nash equilibrium where the first player chooses S and the second chooses P. The second player has no incentive to deviate from this equilibrium because the self-inflicted punishment occurs off of the equilibrium path. Subgame perfectness rules out this equilibrium by requiring equilibrium behavior in each subgame, i.e., that the second player behave optimally in the event that the second-stage subgame is reached.

FIGURE 3. SHOULD YOU TRUST OTHERS TO BE RATIONAL?

Again, we used 50 randomly paired subjects who played this game only once. The data for this treasure treatment are quite consistent with the subgame-perfect equilibrium; a preponderance of first players trust the other’s rationality enough to choose R, and there are no irrational P decisions that follow. The game shown in the bottom part of Figure 3 is identical, except that the second player only forgoes 2 cents by choosing P. This change does not alter the fact that there are two Nash equilibria, one of which is ruled out by subgame perfectness. The choice percentages for 50 subjects indicate that a majority of the first players did not trust others to be perfectly rational when the cost of irrationality is so small. Only about a third of the outcomes matched the subgame-perfect equilibrium in this game.13 We did a third treatment (not shown) in which we multiplied all payoffs by a factor of 5, except that the P decision led to (100, 348) instead of (100, 340). This large increase in payoffs produced an even more dramatic result; only 16 percent of the outcomes were subgame perfect, and 80 percent of the outcomes were at the Nash equilibrium that is not subgame perfect.

13 See Beard and Beil (1994) for similar results in a two-stage game played only once.
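The backward-induction calculation for a game with this structure takes only a few lines. The second-stage payoffs below follow from the text (the punishment outcome is (20, 68), since the scaled treatment used (100, 340) before the adjustment, and N gives the second player 2 cents more, i.e., 70); the first player’s payoff from N and the payoffs of the safe outcome S are not stated in this excerpt, so the 90 and (80, 50) used below are illustrative assumptions only.

```python
# Payoffs are (first mover, second mover) in cents for a game with the
# structure of the bottom game in Figure 3; see the note above about which
# numbers are taken from the text and which are assumed.
PAYOFFS = {'S': (80, 50), ('R', 'N'): (90, 70), ('R', 'P'): (20, 68)}

def backward_induction(payoffs):
    """Solve the two-stage game by backward induction (subgame perfection)."""
    # Second stage: after R, the second mover picks the action best for herself.
    second_choice = max(('N', 'P'), key=lambda a: payoffs[('R', a)][1])
    # First stage: the first mover compares S with the induced outcome of R.
    r_outcome = payoffs[('R', second_choice)]
    first_choice = 'R' if r_outcome[0] > payoffs['S'][0] else 'S'
    outcome = r_outcome if first_choice == 'R' else payoffs['S']
    return first_choice, second_choice, outcome

print(backward_induction(PAYOFFS))   # ('R', 'N', (90, 70))
```

The calculation is the same no matter how small the second mover’s 2-cent incentive is, which is why the observed sensitivity to that incentive is a contradiction for subgame perfectness.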

B. Should You Believe a Threat That Is Not Credible?

The game just considered is a little unusual in that, in the absence of relative payoff effects, the second player has no reason to punish, since the first player’s R decision also benefits the second player. This is not the case for the game in Figure 4, where an R decision by the first player will lower the second player’s payoff. As before, there are two Nash equilibria, with the (R, P) equilibrium ruled out by subgame perfectness. In addition to not being credible, the threat to play P is a relatively costly punishment for the second player to administer (40 cents).

The threat to play P in the top part of Figure 4 is evidently not believed, and 88 percent of the first players choose the R strategy, with impunity. The threat is cheap (2 cents) for the game in the bottom part of the figure, and outcomes for 25 subject pairs are evenly divided between the subgame-imperfect outcome, the incredible threat outcome, and the subgame-perfect outcome. Cheap threats often are (and apparently should be) believed. Again we see that payoff magnitudes and off-the-equilibrium-path risks matter.

FIGURE 4. SHOULD YOU BELIEVE A THREAT THAT IS NOT CREDIBLE?

Since the P decisions in the bottom games of Figures 3 and 4 only reduce the second player’s payoff by 2 cents, behavior may be affected by small variations in payoff preferences or emotions, e.g., spite or rivalry. As suggested by Ernst Fehr and Klaus Schmidt (1999) and Gary E. Bolton and Axel Ockenfels (2000), players may be willing to sacrifice own earnings in order to reduce payoff inequities, which would explain the P choices in the contradiction treatments. Alternatively, the occurrence of the high fraction of P decisions in the bottom game of Figure 4 may be due to negative emotions that follow the first player’s R decision, which reduces the second player’s earnings (Matthew Rabin, 1993). Notice that this earnings reduction does not occur when the first player chooses R for the game in the bottom part of Figure 3, which could explain the lower rate of punishments in that game.

The anomalous results of the contradiction treatments may not come as any surprise to Selten, the originator of the notion of subgame perfectness. His attitude toward game theory has been that there is a sharp contrast between standard theory and behavior. For a long time he essentially wore different hats when he did theory and ran experiments, although his 1994 Nobel prize was clearly for his contributions in theory. This schizophrenic stance may seem inconsistent, but it may prevent unnecessary anxiety, and some of Selten’s recent theoretical work is based on models of boundedly rational (directional) learning (Selten and Joachim Buchta, 1998). In contrast, John Nash was reportedly discouraged by the predictive failures of game theory and gave up on both experimentation and game theory (Nasar, 1998 p. 150).

C. Two-Stage Bargaining Games

Bargaining has long been considered a central part of economic analysis, and at the same time, one of the most difficult problems for economic theory. One promising approach is to model unstructured bargaining situations “as if” the parties take turns making offers, with the costs of delayed agreement reflected in a shrinking size of the pie to be divided. This problem is particularly easy to analyze when the number of alternating offers is fixed and small.

Consider a bargaining game in which each player gets to make a single proposal for how to split a pie, but the amount of money to be divided falls from $5 in the first stage to $2 in the second. The first player proposes a split of $5 that is either accepted (and implemented) or rejected, in which case the second player proposes a split of $2 that is either accepted or rejected by the first player. This final rejection results in payoffs of zero for both players, so the second player can (in theory) successfully demand $1.99 in the second stage if the first player prefers a penny to nothing. Knowing this, the first player should demand $3 and offer $2 to the other in the first stage. In a subgame-perfect equilibrium, the first player receives the amount by which the pie shrinks, so a larger cost of delay confers a greater advantage to the player making the initial demand, which seems reasonable. For example, a similar argument shows that if the pie shrinks by $4.50, from $5 to $0.50, then the first player should make an initial demand of $4.50.
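The backward-induction arithmetic above is simple enough to write down explicitly. The sketch below treats the pie as continuous and uses a one-penny concession in the second stage, which is why the computed demands come out a penny above the rounded $3.00 and $4.50 figures in the text.

```python
def subgame_perfect_demand(pie_stage1, pie_stage2, penny=0.01):
    """Two-stage alternating-offer bargaining with a shrinking pie.
    In stage 2 the responder can successfully demand the whole remaining pie
    minus a penny, so in stage 1 the proposer offers just that amount and
    keeps the difference between the two pie sizes (plus the penny)."""
    responder_share = pie_stage2 - penny          # what the responder can secure later
    first_demand = pie_stage1 - responder_share   # proposer's stage-1 demand
    return round(first_demand, 2)

print(subgame_perfect_demand(5.00, 2.00))   # 3.01, about the $3.00 demand in the text
print(subgame_perfect_demand(5.00, 0.50))   # 4.51, about the $4.50 demand in the text
```

The prediction that the first demand rises one-for-one with the shrinkage of the pie is the feature that the contradiction treatment below fails to confirm.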

We used 60 subjects (six cohorts of ten subjects each), who were randomly paired for each of the two treatments described above (alternating in order and separated by other one-shot games). The average demand for the first player was $2.83 for the $5/$2 treatment, with a standard deviation of $0.29. This is quite close to the predicted $3.00 demand, and 14 of the 30 initial demands were exactly equal to $3.00 in this treasure treatment. But the average demand only increased to $3.38 for the other treatment with a $4.50 prediction, and 28 of the 30 demands were below the prediction of $4.50. Rejections were quite common in this contradiction treatment with higher demands and correspondingly lower offers to the second player, which is not surprising given the smaller costs of rejecting “stingy” offers.

These results fit into a larger pattern surveyed in Douglas D. Davis and Holt (1993, Chapter 5) and Roth (1995); initial demands in two-stage bargaining games tend to be “too low” relative to theoretical predictions when the equilibrium demand is high, say more than 80 percent of the pie as in our $5.00/$0.50 treatment, and initial demands tend to be close to predictions when the equilibrium demand is 50–75 percent of the pie (as in our $5.00/$2.00 treatment). Interestingly, initial demands are “too high” when the equilibrium demand is less than half of the pie. Here is an example of why theoretical explanations of behavior should not be based on experiments in only one part of the parameter space, and why theorists should have more than just a casual, secondhand knowledge of the experimental economics literature.14 Many of the diverse theoretical explanations for anomalous behavior in bargaining games hinge on models of preferences in which a person’s utility depends on the payoffs of both players, i.e., distribution matters (Bolton, 1998; Fehr and Schmidt, 1999; Bolton and Ockenfels, 2000; Miguel Costa-Gomes and Klaus G. Zauner, 2001). The role of fairness is illustrated dramatically in the experiment reported in Goeree and Holt (2000a), who obtained even larger deviations from subgame-perfect Nash predictions than those reported here by giving subjects asymmetric money endowments that were paid independently of the bargaining outcome. The endowments were selected to accentuate the payoff inequities that result in the subgame-perfect Nash equilibrium, and hence their effect was to exaggerate fairness issues without altering the equilibrium prediction. The result (for seven different one-shot bargaining games) was for demands to be inversely related to the subgame-perfect Nash predictions.

14 Another example is the development of theories of generalized expected utility to explain “fanning out” preferences in Allais paradox situations, when later experiments in other parts of the probability triangle found “fanning in.”

III. Static Games with Incomplete Information

William Vickrey’s (1961) models of auctions with incomplete information constitute one of the most widely used applications of game theory. If private values are drawn from a uniform distribution, the Bayesian Nash equilibrium predicts that bids will be proportional to value, which is generally consistent with laboratory evidence. The main deviation from theoretical predictions is the tendency of human subjects to “overbid” (relative to Nash), which is commonly rationalized in terms of risk aversion, an explanation that has led to some controversy. Glenn W. Harrison (1989), for instance, argues that deviations from the Nash equilibrium may well be caused by a lack of monetary incentives since the costs of such deviations are rather small: the “flat maximum critique.” Our approach here is to specify two auction games with identical Nash equilibria, but with differing incentives not to overbid.

First, consider a game in which each of two bidders receives a private value for a prize to be auctioned in a first-price, sealed-bid auction. In other words, the prize goes to the highest bidder for a price equal to that bidder’s own bid. Each bidder’s value for the prize is equally likely to be $0, $2, or $5. Bids are constrained to be integer dollar amounts, with ties decided by the flip of a coin.

The relevant Nash equilibrium in this game with incomplete information about others’ preferences is the Bayesian Nash equilibrium, which specifies an equilibrium bid for each possible realization of a bidder’s value. It is straightforward but tedious to verify that the Nash equilibrium bids are $0, $1, and $2 for a value of $0, $2, and $5 respectively, as can be seen from the equilibrium expected payoffs in Table 4. For example, consider a bidder with a private value of $5 (in the bottom row) who faces a rival that bids according to the proposed Nash solution. A bid of 0 has a one-half chance of winning (decided by a coin flip) if the rival’s value, and hence the rival’s bid, is zero, which happens with probability one-third. Therefore, the expected payoff of a zero bid with a value of $5 equals 1/2 × 1/3 × ($5 − $0) = $5/6 = 0.83. If the bid is raised to $1, the probability of winning becomes 1/2 (1/3 when the rival’s value is $0 plus 1/6 when the rival’s value is $2). Hence, the expected payoff of a $1 bid is 1/2 × ($5 − $1) = $2. The other numbers in Table 4 are derived in a similar way. The maximum expected payoff in each row coincides with the equilibrium bid, as indicated by an asterisk (*). Note that the equilibrium involves bidding about one-half of the value.15

15 The bids would be exactly one-half of the value if the highest value were $4 instead of $5, but we had to raise the highest value to eliminate multiple Nash equilibria.

TABLE 4—EQUILIBRIUM EXPECTED PAYOFFS FOR THE (0, 2, 5) TREATMENT (EQUILIBRIUM BIDS MARKED WITH AN ASTERISK *)

               Bid = 0    Bid = 1    Bid = 2    Bid = 3    Bid = 4    Bid = 5
  Value = $0     0*        −0.5       −1.66      −3         −4         −5
  Value = $2     0.33       0.5*       0         −1         −2         −3
  Value = $5     0.83       2          2.5*       2          1          0

TABLE 5—EQUILIBRIUM EXPECTED PAYOFFS FOR THE (0, 3, 6) TREATMENT (EQUILIBRIUM BIDS MARKED WITH AN ASTERISK *)

               Bid = 0    Bid = 1    Bid = 2    Bid = 3    Bid = 4    Bid = 5
  Value = $0     0*        −0.5       −1.66      −3         −4         −5
  Value = $3     0.5        1*         0.83       0         −1         −2
  Value = $6     1          2.5        3.33*      3          2          1

Table 5 shows the analogous calculations for the second treatment, with equally likely private values of $0, $3, or $6. Interestingly, this increase in values does not alter the equilibrium bids in the unique Bayesian Nash equilibrium, as indicated by the location of optimal bids for each value. Even though the equilibria are the same, we expected more of an upward bias in bids in the second (0, 3, 6) treatment. This intuition can be seen by looking at payoff losses associated with deviations from the Nash equilibrium. Consider, for instance, the middle-value bidder with expected payoffs shown in the second rows of Tables 4 and 5. In the (0, 3, 6) treatment, the cost of bidding $1 above the equilibrium bid is $1 − $0.83 = $0.17, which is less than the cost of bidding $1 below the equilibrium bid: $1 − $0.50 = $0.50. In the (0, 2, 5) treatment, the cost of an upward deviation from the equilibrium bid is greater than the cost of a downward deviation; see the middle row of Table 4. A similar argument applies to the high-value bidders, while deviation costs are the same in both treatments for the low-value bidder. Hence we expected more overbidding for the (0, 3, 6) treatment.
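The entries in Tables 4 and 5 can be reproduced by a short expected-payoff calculation against a rival who bids the proposed equilibrium bid for each equally likely value, with ties broken by a coin flip:

```python
from fractions import Fraction

def expected_payoff(value, bid, values, equilibrium_bid):
    """Expected payoff of a bid in the two-bidder first-price auction when the
    rival's value is uniform over `values` and the rival bids equilibrium_bid[v]."""
    total = Fraction(0)
    for rival_value in values:
        rival_bid = equilibrium_bid[rival_value]
        if bid > rival_bid:
            win_prob = Fraction(1)
        elif bid == rival_bid:
            win_prob = Fraction(1, 2)        # ties decided by a coin flip
        else:
            win_prob = Fraction(0)
        total += Fraction(1, len(values)) * win_prob * (value - bid)
    return total

values = (0, 2, 5)                           # the (0, 2, 5) treatment
eq_bid = {0: 0, 2: 1, 5: 2}                  # proposed equilibrium bids
for value in values:
    row = [float(expected_payoff(value, b, values, eq_bid)) for b in range(6)]
    print(value, [round(x, 2) for x in row])
# Matches Table 4 up to rounding: e.g. value 5 gives 0.83, 2.0, 2.5, 2.0, 1.0, 0.0.
```

Replacing `values` with (0, 3, 6) and `eq_bid` with {0: 0, 3: 1, 6: 2} reproduces Table 5 and makes the asymmetry in deviation costs easy to see.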

This intuition is borne out by bid data for th50 subjects who participated in a single auctiunder each condition (again alternating the oder of the two treatments and separating the tauctions with other one-shot games). Eighpercent of the bids in the (0, 2, 5) treatmematched the equilibrium: the average bids flow-, medium-, and high-value bidders were $$1.06, and $2.64, respectively. In contrast, taverage bids for the (0, 3, 6) treatment were $$1.82, and $3.40 for the three value levels, aonly 50 percent of all bids were Nash bids. Thbid frequencies for each value are shownTable 6. As in previous games, deviations froNash behavior in these private-value auctioseem to be sensitive to the costs of deviation.

he

Page 13: Ten Little Treasures of Game Theory and Ten Intuitive ...

av-g

npdri-at

e

yfusedye

s

e

e

n

1414 THE AMERICAN ECONOMIC REVIEW DECEMBER 2001

course, this does not rule out the possibility thrisk aversion or some other factor may also hasome role in explaining the overbidding observed here, especially the slight overbiddinfor the high value in the (0, 2, 5) treatment.16

IV. Dynamic Games with Incomplete Information: Signaling

Signaling games are complex and interesting because the two-stage structure allows an opportunity for players to make inferences and change others’ inferences about private information. This complexity often generates multiple equilibria that, in turn, have stimulated a sequence of increasingly complex refinements of the Nash equilibrium condition. Although it is unlikely that introspective thinking about the game will produce equilibrium behavior in a single play of a game this complex (except by coincidence), the one-shot play reveals useful information about subjects’ cognitive processes.

In the experiment, half of the subjects were designated as “senders” and half as “responders.” After reading the instructions, we began by throwing a die for each sender to determine whether the sender was of type A or B.

TABLE 6—BID FREQUENCIES
(EQUILIBRIUM BIDS MARKED WITH AN ASTERISK*)

             (0, 2, 5) Treatment                 (0, 3, 6) Treatment
             Bid       Frequency                 Bid       Frequency

Value = 0    0*           20       Value = 0    0*           17

Value = 2    1*           15       Value = 3    1*            5
             2             1                    2            11
             3             0                    3             2

Value = 5    1             1       Value = 6    1             0
             2*            5                    2*            3
             3             6                    3             4
             4             2                    4             6
             5             0                    5             1
             6             0                    6             1


16 Goeree et al. (2001) report a first-price auction experiment with six possible values, under repeated random matching for ten periods. A two-parameter econometric model that includes both decision error and risk aversion provides a good fit of 67 value/bid frequencies and shows that both the error parameter and the risk-aversion parameter are significantly different from zero. David Lucking-Reiley (1999) mentions risk aversion as a possible explanation for overbidding in a variety of auction experiments.


Everybody knew that the ex ante probability of a type A sender was one-half. The sender, knowing his/her own type, would choose a signal, Left or Right. This signal determined whether the payoffs on the right or left side of Table 7 would be used. (The instructions used letters to identify the signals, but we will use words here to facilitate the explanations.) This signal would be communicated to the responder that was matched with that sender. The responder would see the sender’s signal, Left or Right, but not the sender’s type, and then choose a response, C, D, or E. The payoffs were determined by Table 7, with the sender’s payoff to the left in each cell.

First, consider the problem facing a type A sender, for whom the possible payoffs from sending a Left signal (300, 0, 500) seem, in some loose sense, less attractive than those for sending a Right signal (450, 150, 1,000). For example, if each response is thought to be equally likely (the “principle of insufficient reason”), then the Right signal has a higher expected payoff. Consequently, type A’s payoffs have been made bold for the Right row in the top right part of Table 7. Applying the principle of insufficient reason again, a type B sender looking at the payoffs in the bottom row of the table might be more attracted by the Left signal, with payoffs of (500, 300, 300) as compared with (450, 0, 0).17 Therefore, sender B’s payoffs are in bold for the Left signal. In fact, all of the type B subjects did send the Left signal, and seven of the ten type A subjects sent the Right signal. All responses in this game were C, so all but three of the outcomes were in one of the two cells marked by an asterisk. Notice that this is an equilibrium, since neither type of sender would benefit from sending the other signal, and the respondent cannot do any better than the maximum payoff received in the marked cells. This is a separating Nash equilibrium; the signal reveals the sender’s type.
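The “principle of insufficient reason” calculation is simple enough to spell out. The sketch below is ours; the payoff vectors are taken from Table 7, each of the three responses is treated as equally likely, and the two signals’ expected payoffs are compared for each sender type.

    # Sender payoffs from Table 7, listed in the order of the responses (C, D, E).
    SENDER_PAYOFFS = {
        ("A", "Left"):  (300, 0, 500),
        ("A", "Right"): (450, 150, 1000),
        ("B", "Left"):  (500, 300, 300),
        ("B", "Right"): (450, 0, 0),
    }

    for sender_type in ("A", "B"):
        for signal in ("Left", "Right"):
            payoffs = SENDER_PAYOFFS[(sender_type, signal)]
            expected = sum(payoffs) / len(payoffs)   # uniform beliefs over C, D, E
            print(f"type {sender_type}, {signal}: {expected:.1f}")
    # Type A: Left 266.7 < Right 533.3, so Right looks better;
    # type B: Left 366.7 > Right 150.0, so Left looks better.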

The payoff structure for this game becomes a little clearer if you think of the responses as one of three answers to a request: Concede, Deny, or Evade.


17 These are not dominance arguments, since the responder can respond differently to each signal, and the lowest payoff from sending one signal is not higher than the highest payoff that can be obtained from sending the other signal.


TABLE 7—SIGNALING WITH A SEPARATING EQUILIBRIUM (MARKED BY ASTERISKS) (SENDER’S PAYOFF, RESPONDER’S PAYOFF)

                       Response to Left signal                                 Response to Right signal
                       C           D          E                                C           D          E

Type A sends Left      300, 300    0, 0       500, 300    Type A sends Right   450, 900*   150, 150   1,000, 300

Type B sends Left      500, 500*   300, 450   300, 0      Type B sends Right   450, 0      0, 300     0, 150

TABLE 8—SIGNALING WITHOUT A SEPARATING EQUILIBRIUM (SENDER’S PAYOFF, RESPONDER’S PAYOFF)

                       Response to Left signal                                 Response to Right signal
                       C           D          E                                C           D          E

Type A sends Left      300, 300    0, 0       500, 300    Type A sends Right   450, 900    150, 150   1,000, 300

Type B sends Left      300, 300    300, 450   300, 0      Type B sends Right   450, 0      0, 300     0, 150


19 Brandts and Holt (1992, 1993) report experimental data that contradict the predictions of the intuitive criterion, i.e., decisions converged to an equilibrium ruled out by that criterion.

20 Unlike the paired treatments considered previously, the payoff change for these signaling games does alter the set of Nash equilibria.

With some uncertainty about the sender’s type, Evade is sufficiently unattractive to respondents that it is never selected. Consider the other two responses and note that a sender always prefers that the responder choose Concede instead of Deny. In the separating equilibrium, the signals reveal the senders’ types, the responder always Concedes, and all players are satisfied. There is, however, a second equilibrium for the game in Table 7 in which the responder Concedes to Left and Denies Right, and therefore both sender types send Left to avoid being Denied.18 Backward induction rationality (of the sequential Nash equilibrium) does not rule out these beliefs, since a deviation does not occur in equilibrium, and the respondent is making a best response to the beliefs. What is unintuitive about these beliefs (that a deviant Right signal comes from a type B) is that the type B sender is earning 500 in the (Left, Concede) equilibrium outcome, and no deviation could conceivably increase this payoff. In contrast, the type A sender is earning 300 in the Left-side pooling equilibrium, and this type could possibly earn more (450 or even 1,000), depending on the response to a deviation. The In-Koo Cho and Kreps (1987) intuitive criterion rules out these beliefs, and selects the separating equilibrium observed in the treasure treatment.19

18 To check that the responder has no incentive to deviate, note that Concede is a best response to a Left signal regardless of the sender’s type, and that Deny is a best response to a deviant Right signal if the responder believes that it was sent by a type B.



The game in Table 8 is a minor variation of the previous game, with the only change being that the (500, 500) in the bottom left part of Table 7 is replaced by a (300, 300) payoff.20 As before, consider the sender’s expected payoffs when each response is presumed to be equally likely, which leads one to expect that type A senders will choose Right and that type B senders will choose Left, as indicated by the bold payoff numbers. In the experiment, 10 of the 13 type A senders did choose Right, and 9 of the 11 type B senders did choose Left. But the separation observed in this contradiction treatment is not a Nash equilibrium.21 All equilibria for this contradiction treatment involve “pooling,” with both types sending the same signal.22


21 The respondents would prefer to Concede to a Right signal and Deny a Left signal. Type B senders would therefore have an incentive to deviate from the proposed separating equilibrium and send a Right signal. In the experiment, half of the Left signals were Denied, whereas only 2 of the 12 Right signals were Denied.
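The argument in footnote 21 can be checked mechanically. The sketch below is ours; it uses the Table 8 payoffs, computes the responder’s best responses under separating beliefs (a Left signal from type B, a Right signal from type A), and then asks whether a type B sender would want to deviate.

    RESPONSES = ("C", "D", "E")

    # (sender payoff, responder payoff) from Table 8, indexed by (type, signal) and response.
    PAYOFFS = {
        ("A", "Left"):  {"C": (300, 300), "D": (0, 0),     "E": (500, 300)},
        ("A", "Right"): {"C": (450, 900), "D": (150, 150), "E": (1000, 300)},
        ("B", "Left"):  {"C": (300, 300), "D": (300, 450), "E": (300, 0)},
        ("B", "Right"): {"C": (450, 0),   "D": (0, 300),   "E": (0, 150)},
    }

    belief = {"Left": "B", "Right": "A"}     # separating beliefs about the sender's type
    best_response = {
        signal: max(RESPONSES, key=lambda r: PAYOFFS[(belief[signal], signal)][r][1])
        for signal in ("Left", "Right")
    }
    print(best_response)    # {'Left': 'D', 'Right': 'C'}: Deny a Left signal, Concede to a Right signal

    stays    = PAYOFFS[("B", "Left")][best_response["Left"]][0]     # type B follows the separation: 300
    deviates = PAYOFFS[("B", "Right")][best_response["Right"]][0]   # type B deviates to Right: 450
    print(stays, deviates)  # deviating pays more, so the proposed separation is not an equilibrium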



V. Explaining Anomalous Behavior in One-Shot Games

Although the results for the contradiction treatments seem to preclude a game-theoretic explanation, many of the anomalous data patterns are related to the nature of the incentives. This suggests that it may be possible to develop formal models that explain both treasures and contradictions. Below we discuss several recent approaches that relax the common assumptions of perfect selfishness, perfect decision-making (no error), and perfect foresight (no surprises).

As noted in Section II, the anomalies observed for the dynamic games in Figures 3 and 4 are consistent with models of inequity aversion (Fehr and Schmidt, 1999; Bolton and Ockenfels, 2000), which assume that people like higher payoffs for themselves and dislike earning less than the other person (“envy”) or earning more (“guilt”). Inequity aversion also seems to play a role when players bargain over the division of a fixed amount of money (Goeree and Holt, 2000a). However, it cannot explain observed behavior in the contradiction matching pennies treatments. Consider, for example, the “320” version of the matching pennies game in Table 1. Since the column player is averse to the (320, 40) outcome, the column player would only be willing to randomize between Left and Right if the attractiveness of Right is increased by having the row player play Bottom more often than the 0.5 probability that would make a purely selfish column player indifferent. This prediction, that the row player should play Bottom more often, is sharply contradicted by the data in the middle part of Table 1.23,24

22 For example, it is an equilibrium for both types to send Right if a Left signal triggers a C or a D response. The D response to Left is appropriate if the respondent thinks the deviant signal comes from a type B sender, and the C response is appropriate if the deviant is thought to be of type A. Beliefs that the deviant is of type A are intuitive, since type A earns 450 in equilibrium and could possibly earn more (500) by switching to Left (if an E response follows). A second pooling equilibrium involves both types sending a Left signal to which the respondents Concede. A deviant Right signal is Denied, which is appropriate if the respondent thinks the deviant signal comes from a type B sender. Again these beliefs are intuitive since the type B sender could possibly gain by deviating.



Another possibility is that behavior in one-shot games conforms to a simple heuristic. Indeed, some experimental economists have suggested that subjects in the initial period of a repeated game choose the decision that maximizes their security level, i.e., the “maximin” decision. For example, in the Kreps game of Table 3, the frequently observed Non-Nash decision maximizes column’s security. The strong treatment effects in the matching pennies games cannot be explained in this way, however, since in all three treatments each player’s minimum payoff is the same for both decisions. A similar argument applies to the coordination game in Table 2. Moreover, the security-maximizing choices in the traveler’s dilemma and the minimum-effort coordination game are the lowest possible decisions, which is contradicted by the high claim and effort choices in the contradiction treatments. Subjects may be risk averse in unfamiliar situations, but the extreme risk aversion implied by maximum security is generally not observed. Furthermore, heuristics based on reciprocity or a status quo bias do not apply to single-stage, one-shot games where there is neither a precedent nor an opportunity to reciprocate. Nor can loss aversion be the primary cause, since losses are impossible in most of the games reported here, and the possibility of a loss had no effect in the Kreps game.

As an alternative to simple heuristics, one could try to model players’ introspective thought processes. Previous models have typically specified some process of belief formation, assuming that players best respond to the resulting beliefs.25


23 Goeree et al. (2000) report formal econometric tests that reject the predictions of inequity aversion models in the context of a group of repeated asymmetric matching pennies games.

24 Payoff inequity aversion also has no effect in the minimum-effort coordination game; any common effort level is still a Nash equilibrium. To see this, note that a unilateral effort increase from a common level reduces one’s own payoff and creates a disadvantageous inequity. Similarly, a unilateral decrease from a common effort level reduces one’s payoff and creates an inequity where one earns more than the other, since their costly extra effort is wasted. Thus inequity aversion cannot explain the strong effect of an increase in effort costs.


The experiments reported above indicate that magnitudes (not just signs) of payoff differences matter, and it is thus natural to consider a decision rule for which choice probabilities are positively but imperfectly related to payoffs. The logit rule, for example, specifies that choice probabilities, p_i, for options i = 1, ..., m, are proportional to exponential functions of the associated expected payoffs, π_i^e:

(1)    p_i = exp(π_i^e / μ) / Σ_{j=1}^{m} exp(π_j^e / μ),        i = 1, ..., m,

where the sum in the denominator ensures that the probabilities sum to one, and the “error parameter,” μ, determines how sensitive choice probabilities are to payoff differences.26
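For concreteness, the logit rule in (1) can be implemented in a few lines. The sketch below is ours, and the payoff vector is purely illustrative; the two print statements correspond to the limiting cases discussed in footnote 26.

    import math

    def logit_response(expected_payoffs, mu):
        """Logit choice probabilities from equation (1): p_i proportional to exp(payoff_i / mu)."""
        weights = [math.exp(p / mu) for p in expected_payoffs]
        total = sum(weights)
        return [w / total for w in weights]

    payoffs = [2.0, 1.0, 0.5]                  # hypothetical expected payoffs for three options
    print(logit_response(payoffs, mu=0.1))     # small mu: close to a deterministic best response
    print(logit_response(payoffs, mu=100.0))   # large mu: probabilities approach 1/m = 1/3 each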

In order to use the “logit best response” in (1), we need to model the process of belief formation, since belief probabilities are used to calculate the expected payoffs on the right side of (1). By the principle of insufficient reason, one might postulate that each of the other players’ actions is equally likely. This corresponds to the Stahl and Wilson (1995) notion of “level-one” rationality, which captures many of the first-period decisions in the “guessing game” reported by Nagel (1995).27


25 Perhaps the best-known model of introspection is Harsanyi and Selten’s (1988) “tracing procedure.” This procedure involves an axiomatic determination of players’ common priors (the “preliminary theory”) and the construction of a modified game with payoffs for each decision that are weighted averages of those in the original game and of the expected payoffs determined by the prior distribution. By varying the weight on the original game, a sequence of best responses for the modified game is generated. This process is used to select one of the Nash equilibria of the original game. Gonzalo Olcina and Amparo Urbano (1994) also use an axiomatic approach to select a prior distribution, which is then revised by a simulated learning process that is essentially a partial adjustment from current beliefs to best responses to current beliefs. Since neither the Harsanyi/Selten model nor the Olcina/Urbano model incorporates any noise, they predict that behavior will converge to the Nash equilibrium in games with a unique equilibrium, which is an undesirable feature in light of the contradictions data reported above.

26 As μ goes to zero, payoff differences are “blown up,” and the probability of the optimal decision converges to 1. In the other extreme, as μ goes to infinity, the choice probabilities converge to 1/m independently of expected payoffs. See R. Duncan Luce (1959) for an axiomatic derivation of the logit choice rule in (1).


It is easy to verify that level-one rationality also provides good predictions for both treasure and contradiction treatments in the traveler’s dilemma, the minimum-effort coordination game, and the Kreps game. There is evidence, however, that at least some subjects form more precise beliefs about others’ actions, possibly through higher levels of introspection.28 In the matching pennies games in Table 1, for example, a flat prior makes column indifferent between Left and Right, and yet most column players seem to anticipate that row will choose Top in the 320 version and Bottom in the 44 version of this game.

Of course, what the other player does depends on what they think you will do, so the next logical step is to assume that others make responses to a flat prior, and then you respond to that anticipated response (Selten, 1991). This is Stahl and Wilson’s (1995) “level-two” rationality. There is, however, no obvious reason to truncate the levels of iterated thinking. The notion of rationalizability discussed above, for example, involves infinitely many levels of iterated thinking, with “never-best” responses eliminated in succession. But rationalizability seems to imply too much rationality, since it predicts that all claims in the traveler’s dilemma will be equal to the minimum claim, independent of the penalty/reward parameter. One way to limit the precision of the thought process, without making an arbitrary assumption about the number of iterations, is to inject increasing amounts of noise into higher levels of iterated thinking (Goeree and Holt, 1999; Kübler and Weizsäcker, 2000). Let f_μ denote the logit best-response map (for error rate μ) on the right side of (1). Just as a single logit response to beliefs, p^0, can be represented as p = f_μ(p^0), a series of such responses can be represented as:29


27 In our own work, we have used a noisy response to a flat prior as a way of starting computer simulations of behavior in repeated games (Brandts and Holt, 1996; Capra et al., 1999, 2002; Goeree and Holt, 1999).

28 Costa-Gomes et al. (2001), for example, infer some heterogeneity in the amount of introspection by observing the types of information that subjects acquire before making a decision.

29 Goeree and Holt (2000b) use continuity arguments to show that the limit in (2) exists even if the (increasing) error parameters are person specific.


(2)    p = lim_{n→∞} f_{μ_1}(f_{μ_2}( ... f_{μ_n}(p^0) ... )),

where μ_1 ≤ μ_2 ≤ ..., with μ_n converging to infinity.30 This assumption captures the idea that it becomes increasingly complex to do more and more iterations.31 Since f_μ for μ = ∞ maps the whole probability simplex to a single point, the right side of (2) is independent of the initial belief vector p^0. Moreover, the introspection process in (2) yields a unique outcome even in games with multiple Nash equilibria. Note that the choice probabilities on the left side of (2) generally do not match the beliefs at any stage of the iterative process on the right. In other words, the introspective process allows for surprises, which are likely to occur in one-shot games.

For games with very different levels of complexity, such as the ones reported here, the error parameters that provide the best fit are likely to be different. In this case, the estimates indicate the degree of complexity, i.e., they serve as a measurement device. For games of similar complexity, the model in (2) could be applied to predict behavior across games. We have used it to explain data patterns in a series of 37 simple matrix games, assuming a simple two-parameter model for which μ_n = μt^n, where t determines the rate at which noise increases with higher iterations (Goeree and Holt, 2000b). The estimated value (t = 4.1) implies that there is more noise for higher levels of introspection, a result that is roughly consistent with estimates obtained by Kübler and Weizsäcker (2000) for data from information-cascade experiments.
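A truncated version of the introspective process in (2) is straightforward to compute. The sketch below is ours, not the estimation procedure used in the papers cited above; it applies the logit map repeatedly with error parameters μ_n = μt^n, under one natural reading for two-player games in which the maps alternate between the two players’ payoff matrices so that the outermost response is the row player’s own choice. The 2 × 2 game and the value of μ are purely illustrative; t = 4.1 is the estimate reported above.

    import math

    def logit(payoffs, mu):
        """Logit choice rule from equation (1)."""
        weights = [math.exp(p / mu) for p in payoffs]
        total = sum(weights)
        return [w / total for w in weights]

    def noisy_introspection(A, B, mu, t, depth=20):
        """Row player's choice probabilities from a truncated version of equation (2).

        A[i][j]: row player's payoff from row action i against column action j.
        B[j][i]: column player's payoff from column action j against row action i.
        The error parameters mu_n = mu * t**n grow with the level of iterated thinking,
        so very deep beliefs are nearly uniform and the innermost starting point hardly matters.
        """
        matrices = {1: A, 0: B}    # level n is the row player's response if n is odd, the column player's if even
        sizes = {1: len(A), 0: len(B)}
        start = (depth + 1) % 2    # the innermost beliefs p0 are over this player's actions
        probs = [1.0 / sizes[start]] * sizes[start]
        for n in range(depth, 0, -1):
            player = n % 2
            payoffs = [sum(row[j] * probs[j] for j in range(len(probs)))
                       for row in matrices[player]]
            probs = logit(payoffs, mu * t ** n)
        return probs

    # Hypothetical 2 x 2 example (not one of the games in the paper):
    A = [[3.0, 0.0], [1.0, 1.0]]   # row player's payoffs
    B = [[2.0, 0.0], [0.0, 3.0]]   # column player's payoffs, rows indexed by the column player's actions
    print(noisy_introspection(A, B, mu=0.5, t=4.1))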

The analysis of introspection is a relatively understudied topic in game theory, as compared with equilibrium refinements and learning, for example. Several of the models discussed above do a fairly good job of organizing the qualitative patterns of conformity and deviation from the predictions of standard theory, but there are obvious discrepancies. We hope that this paper will stimulate further theoretical work on models of behavior in one-shot games. One potentially useful approach may be to elicit beliefs directly as the games are played (Theo Offerman, 1997; Andrew Schotter and Yaw Narkov, 1998).


30 The case of a constant parameter (μ_1 = μ_2 = ... = μ) is of special interest. In this case, the process may not converge for some games (e.g., matching pennies), but when it does, the limit probabilities, p*, must be invariant under the logit map: f_μ(p*) = p*. A fixed point of this type constitutes a “logit equilibrium,” which is a special case of the quantal-response equilibrium defined in McKelvey and Palfrey (1995). It is in this sense that the logit equilibrium arises as a limit case of the noisy introspective process defined in (2).

31 For an interesting alternative approach, see Capra (1998). In her model, beliefs are represented by degenerate distributions that put all probability mass at a single point. The location of the belief points is, ex ante, stochastic.



VI. Conclusion

One-shot game experiments are interesting because many games are in fact only played once; single play is especially relevant in applications of game theory in other fields, e.g., international conflicts, election campaigns, and legal disputes. The decision makers in these contexts, like the subjects in our experiments, typically have experience in similar games with other people. One-shot games are also appealing because they allow us to abstract away from issues of learning and attempts to manipulate others’ beliefs, behavior, or preferences (e.g., reciprocity, cooperativeness). This paper reports the results of ten pairs of games that are played only once by subjects who have experience with other one-shot and repeated games. The Nash equilibrium (or relevant refinement) provides accurate predictions for standard versions of these games. In each case, however, there is a matched game for which the Nash prediction clearly fails, although it fails in a way that is consistent with simple (non-game-theoretic) intuition. The results for these experienced subjects show:

(1) Behavior may diverge sharply from the unique rationalizable (Nash) equilibrium in a social (traveler’s) dilemma. In these games, the Nash equilibrium is located on one side of the range of feasible decisions, and data for the contradiction treatment have a mode on the opposite side of this range. The most salient feature of the data is the extreme sensitivity to a parameter that has no effect on the Nash outcome.

(2) Students suffering through game theory classes may have good reasons when they have trouble understanding why a change in one player’s payoffs only affects the other player’s decision probabilities in a mixed-strategy Nash equilibrium. The data from matching pennies experiments show strong “own-payoff” effects that are not predicted by the unique (mixed-strategy) Nash equilibrium. The Nash analysis seems to work only by coincidence, when the payoff structure is symmetric and deviation risks are balanced.



(3) Effort choices are strongly influenced by the cost of effort in coordination games, an intuitive result that is not explained by standard theory, since any common effort is a Nash equilibrium in such games. Moreover, as Kreps conjectured, it is possible to design coordination games where the majority of one player’s decisions correspond to the only action that is not part of any Nash equilibrium.

(4) Subjects often do not trust others to be rational when irrationality is relatively costless. Moreover, “threats” that are not credible in a technical sense may nevertheless alter behavior in simple two-stage games when carrying out these threats is not costly.

(5) Deviations from Nash predictions in alternating-offer bargaining games and in private-value auctions are inversely related to the costs of such deviations. The effects of these biases can be quite large in the games considered.

(6) It is possible to set up a simple signaling game in which the decisions reveal the signaler’s type (separation), even though the equilibrium involves pooling.

So what should be done? Reinhard Selten, one of the three game theorists to share the 1994 Nobel Prize, has said: “Game theory is for proving theorems, not for playing games.”32 Indeed, the internal elegance of traditional game theory is appealing, and it has been defended as being a normative theory about how perfectly rational people should play games with each other, rather than a positive theory that predicts actual behavior (Rubinstein, 1982). It is natural to separate normative and positive studies of individual decision-making, which allows one to compare actual and optimal decision-making.

32 Selten reiterated this point of view in a personal communication to the authors.


This normative-based defense is not convincing for games, however, since the best way for one to play a game depends on how others actually play, not on how some theory dictates that rational people should play. John Nash, one of the other Nobel recipients, saw no way around this dilemma, and when his experiments were not providing support for the theory, he lost whatever confidence he had in the relevance of game theory and focused on more purely mathematical topics in his later research (Nasar, 1998).

Nash seems to have undersold the importance of his insight, and we will be the first to admit that we begin the analysis of a new strategic problem by considering the equilibria derived from standard game theory, before considering the effects of payoff and risk asymmetries on incentives to deviate. But in an interactive, strategic context, biases can have reinforcing effects that drive behavior well away from Nash predictions, and economists are starting to explain such deviations using computer simulations and theoretical analyses of learning and decision error processes. There has been relatively little theoretical analysis of one-shot games where learning is impossible. The models of iterated introspection discussed here offer some promise in explaining the qualitative features of deviations from Nash predictions enumerated above. Taken together, these new approaches to a stochastic game theory enhance the behavioral relevance of standard game theory. And looking at laboratory data is a lot less stressful than before.

REFERENCES

Anderson, Simon P.; Goeree, Jacob K. and Holt, Charles A. “Rent Seeking with Bounded Rationality: An Analysis of the All-Pay Auction.” Journal of Political Economy, August 1998a, 106(4), pp. 828–53.

———. “A Theoretical Analysis of Altruism and Decision Error in Public Goods Games.” Journal of Public Economics, November 1998b, 70(2), pp. 297–323.

———. “The Logit Equilibrium: A Perspective on Intuitive Behavioral Anomalies.” Southern Economic Journal, 2001a (forthcoming).

———. “Minimum-Effort Coordination Games: Stochastic Potential and Logit Equilibrium.” Games and Economic Behavior, February 2001b, 34(2), pp. 177–99.

Basu, Kaushik. “The Traveler’s Dilemma: Paradoxes of Rationality in Game Theory.” American Economic Review, May 1994 (Papers and Proceedings), 84(2), pp. 391–95.

Beard, Randolph T. and Beil, Richard O. “Do People Rely on the Self-interested Maximization of Others: An Experimental Test.” Management Science, February 1994, 40(2), pp. 252–62.

Bernheim, B. Douglas. “Rationalizable Strategic Behavior.” Econometrica, July 1984, 52(4), pp. 1007–28.

Bolton, Gary E. “Bargaining and Dilemma Games: From Laboratory Data Towards Theoretical Synthesis.” Experimental Economics, 1998, 1(3), pp. 257–81.

Bolton, Gary E. and Ockenfels, Axel. “A Theory of Equity, Reciprocity, and Competition.” American Economic Review, March 2000, 90(1), pp. 166–93.

Brandts, Jordi and Holt, Charles A. “An Experimental Test of Equilibrium Dominance in Signaling Games.” American Economic Review, December 1992, 82(5), pp. 1350–65.

———. “Adjustment Patterns and Equilibrium Selection in Experimental Signaling Games.” International Journal of Game Theory, 1993, 22(3), pp. 279–302.

———. “Naive Bayesian Learning and Adjustments to Equilibrium in Signaling Games.” Working paper, University of Virginia, 1996.

Camerer, Colin and Ho, Teck-Hua. “Experience-Weighted Attraction Learning in Normal Form Games.” Econometrica, July 1999, 67(4), pp. 827–74.

Capra, C. Monica. “Noisy Expectation Formation in One-Shot Games.” Ph.D. dissertation, University of Virginia, 1998.

Capra, C. Monica; Goeree, Jacob K.; Gomez, Rosario and Holt, Charles A. “Anomalous Behavior in a Traveler’s Dilemma?” American Economic Review, June 1999, 89(3), pp. 678–90.

———. “Learning and Noisy Equilibrium Behavior in an Experimental Study of Imperfect Price Competition.” International Economic Review, 2002 (forthcoming).

Cho, In-Koo and Kreps, David M. “Signaling Games and Stable Equilibria.” Quarterly Journal of Economics, May 1987, 102(2), pp. 179–221.

Cooper, David J.; Garvin, Susan and Kagel, John H. “Signaling and Adaptive Learning in an Entry Limit Pricing Game.” Rand Journal of Economics, Winter 1997, 28(4), pp. 662–83.

Cooper, Russell; DeJong, Douglas V.; Forsythe, Robert and Ross, Thomas W. “Communication in Coordination Games.” Quarterly Journal of Economics, May 1992, 107(2), pp. 739–71.

Costa-Gomes, Miguel; Crawford, Vincent P. and Broseta, Bruno. “Cognition and Behavior in Normal-Form Games: An Experimental Study.” Econometrica, September 2001, 69, pp. 1193–235.

Costa-Gomes, Miguel and Zauner, Klaus G. “Ultimatum Bargaining Behavior in Israel, Japan, Slovenia, and the United States: A Social Utility Analysis.” Games and Economic Behavior, February 2001, 34(2), pp. 238–69.

Crawford, Vincent P. “Adaptive Dynamics in Coordination Games.” Econometrica, January 1995, 63(1), pp. 103–43.

Davis, Douglas D. and Holt, Charles A. Experimental economics. Princeton, NJ: Princeton University Press, 1993.

Eckel, Catherine and Wilson, Rick. “The Human Face of Game Theory: Trust and Reciprocity in Sequential Games.” Working paper, Rice University, 1999.

Erev, Ido and Roth, Alvin E. “Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria.” American Economic Review, September 1998, 88(4), pp. 848–81.

Fehr, Ernst and Schmidt, Klaus. “A Theory of Fairness, Competition, and Cooperation.” Quarterly Journal of Economics, August 1999, 114(3), pp. 817–68.

Fudenberg, Drew and Levine, David K. The theory of learning in games. Cambridge, MA: MIT Press, 1998.

Fudenberg, Drew and Tirole, Jean. Game theory. Cambridge, MA: MIT Press, 1993.

Gibbons, Robert. “An Introduction to Applicable Game Theory.” Journal of Economic Perspectives, Winter 1997, 11(1), pp. 127–49.

Goeree, Jacob K. and Holt, Charles A. “An Experimental Study of Costly Coordination.” Working paper, University of Virginia, 1998.


———. “Stochastic Game Theory: For Playing Games, Not Just for Doing Theory.” Proceedings of the National Academy of Sciences, September 1999, 96, pp. 10,564–67.

———. “Asymmetric Inequality Aversion and Noisy Behavior in Alternating-Offer Bargaining Games.” European Economic Review, May 2000a, 44(4–6), pp. 1079–89.

———. “Models of Noisy Introspection.” Working paper, University of Virginia, 2000b.

Goeree, Jacob K.; Holt, Charles A. and Palfrey, Thomas R. “Risk-Averse Behavior in Asymmetric Matching Pennies Games.” Working paper, University of Virginia, 2000.

———. “Quantal Response Equilibrium and Overbidding in Private Value Auctions.” Journal of Economic Theory, 2001 (forthcoming).

Guarnaschelli, Serena; McKelvey, Richard D. and Palfrey, Thomas R. “An Experimental Study of Jury Decision Making.” American Political Science Review, 2001 (forthcoming).

Harrison, Glenn W. “Theory and Misbehavior of First-Price Auctions.” American Economic Review, September 1989, 79(4), pp. 749–62.

Harsanyi, John C. and Selten, Reinhard. A general theory of equilibrium selection in games. Cambridge, MA: MIT Press, 1988.

Kahneman, Daniel; Knetsch, Jack L. and Thaler, Richard H. “The Endowment Effect, Loss Aversion, and Status Quo Bias: Anomalies.” Journal of Economic Perspectives, Winter 1991, 5(1), pp. 193–206.

Kahneman, Daniel; Slovic, Paul and Tversky, Amos. Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press, 1982.

Kreps, David M. “Nash Equilibrium,” in John Eatwell, Murray Milgate, and Peter Newman, eds., The new Palgrave: Game theory. New York: Norton, 1995, pp. 176–77.

Kübler, Dorothea and Weizsäcker, Georg. “Limited Depth of Reasoning and Failure of Cascade Formation in the Laboratory.” Working paper, Harvard University, 2000.

Luce, R. Duncan. Individual choice behavior. New York: Wiley, 1959.

Lucking-Reiley, David. “Using Field Experiments to Test the Equivalence Between Auction Formats: Magic on the Internet.” American Economic Review, December 1999, 89(5), pp. 1063–80.

Mailath, George. “Do People Play Nash Equilibrium? Lessons from Evolutionary Game Theory.” Journal of Economic Literature, September 1998, 36(3), pp. 1347–74.

McKelvey, Richard D. and Palfrey, Thomas R. “An Experimental Study of the Centipede Game.” Econometrica, July 1992, 60(4), pp. 803–36.

———. “Quantal Response Equilibria for Normal Form Games.” Games and Economic Behavior, July 1995, 10(1), pp. 6–38.

———. “Quantal Response Equilibria for Extensive Form Games.” Experimental Economics, 1998, 1(1), pp. 9–41.

McKelvey, Richard D.; Palfrey, Thomas R. and Weber, Roberto A. “The Effects of Payoff Magnitude and Heterogeneity on Behavior in 2 × 2 Games with Unique Mixed Strategy Equilibria.” Journal of Economic Behavior and Organization, August 2000, 42(4), pp. 523–48.

Mookherjee, Dilip and Sopher, Barry. “Learning and Decision Costs in Experimental Constant Sum Games.” Games and Economic Behavior, April 1997, 19(1), pp. 97–132.

Nagel, Rosemarie. “Unraveling in Guessing Games: An Experimental Study.” American Economic Review, December 1995, 85(5), pp. 1313–26.

Nasar, Sylvia. A beautiful mind. New York: Simon and Schuster, 1998.

Nash, John. “The Bargaining Problem.” Econometrica, April 1950, 18(2), pp. 155–62.

Ochs, Jack. “Games with Unique Mixed Strategy Equilibria: An Experimental Study.” Games and Economic Behavior, July 1995a, 10(1), pp. 202–17.

———. “Coordination Problems,” in John H. Kagel and Alvin E. Roth, eds., Handbook of experimental economics. Princeton, NJ: Princeton University Press, 1995b, pp. 195–249.

Offerman, Theo. Beliefs and decision rules in public goods games: Theory and experiments. Dordrecht: Kluwer Academic Press, 1997.

Olcina, Gonzalo and Urbano, Amparo. “Introspection and Equilibrium Selection in 2 × 2 Matrix Games.” International Journal of Game Theory, 1994, 23(3), pp. 183–206.

Pearce, David G. “Rationalizable Strategic Behavior and the Problem of Perfection.” Econometrica, July 1984, 52(4), pp. 1029–50.

Rabin, Matthew. “Incorporating Fairness into Game Theory and Economics.” American Economic Review, December 1993, 83(5), pp. 1281–302.

Reynolds, Stanley S. “Sequential Bargaining with Asymmetric Information: The Role of Quantal Response Equilibrium in Explaining Experimental Results.” Working paper, University of Arizona, 1999.

Rosenthal, Robert W. “Games of Perfect Information, Predatory Pricing and the Chain Store Paradox.” Journal of Economic Theory, August 1981, 25(1), pp. 92–100.

Roth, Alvin E. “Bargaining Experiments,” in John H. Kagel and Alvin E. Roth, eds., Handbook of experimental economics. Princeton, NJ: Princeton University Press, 1995, pp. 253–348.

Rubinstein, Ariel. “Perfect Equilibrium in a Bargaining Model.” Econometrica, January 1982, 50(1), pp. 97–109.

Schotter, Andrew and Narkov, Yaw. “Equilibria in Beliefs and Our Belief in Equilibrium: An Experimental Approach.” Working paper, New York University, 1998.

Selten, Reinhard. “Re-examination of the Perfectness Concept for Equilibrium Points in Extensive Games.” International Journal of Game Theory, 1975, 4, pp. 25–55.

———. “Anticipatory Learning in Two-Person Games,” in Reinhard Selten, ed., Game equilibrium models I. Berlin: Springer-Verlag, 1991, pp. 98–154.

Selten, Reinhard and Buchta, Joachim. “Experimental Sealed Bid First Price Auctions with Directly Observed Bid Functions,” in I. Erev, D. Budescu, and R. Zwick, eds., Games and human behavior: Essays in honor of Amnon Rapoport. Hillsdale, NJ: Erlbaum Associates, 1998.

Stahl, Dale O. and Wilson, Paul W. “On Players’ Models of Other Players: Theory and Experimental Evidence.” Games and Economic Behavior, July 1995, 10(1), pp. 218–54.

Straub, Paul G. “Risk Dominance and Coordination Failures in Static Games.” Quarterly Review of Economics and Finance, Winter 1995, 35(4), pp. 339–63.

Tucker, Al W. “A Two-Person Dilemma.” Mimeo, Stanford University, 1950. [Published under the heading “On Jargon: The Prisoner’s Dilemma.” UMAP Journal, 1980, 1, p. 101.]

Van Huyck, John B.; Battalio, Raymond C. and Beil, Richard O. “Tacit Coordination Games, Strategic Uncertainty, and Coordination Failure.” American Economic Review, March 1990, 80(1), pp. 234–48.

Vickrey, William. “Counterspeculation, Auctions, and Sealed Tenders.” Journal of Finance, 1961, 16, pp. 8–37.

von Neumann, John and Morgenstern, Oscar. Theory of games and economic behavior. Princeton, NJ: Princeton University Press, 1944.