
  • GAME THEORY:

    Analysis of Strategic Thinking

    Pierpaolo Battigalli

    December 2013

  • Abstract

These lecture notes introduce some concepts of the theory of games: rationality, dominance, rationalizability and several notions of equilibrium (Nash, randomized, correlated, self-confirming, subgame perfect, Bayesian, perfect Bayesian equilibrium). For each of these concepts, the interpretative aspect is emphasized. Even though no advanced mathematical knowledge is required, the reader should nonetheless be familiar with the concepts of set, function, probability, and more generally be able to follow abstract reasoning and formal arguments.

Milano, P. Battigalli

  • Preface

These lecture notes provide an introduction to game theory, the formal analysis of strategic interaction. Game theory now pervades most non-elementary models in microeconomic theory and many models in the other branches of economics and in other social sciences. We introduce the necessary analytical tools to be able to understand these models, and illustrate them with some economic applications.

We also aim at developing an abstract analysis of strategic thinking, and a critical and open-minded attitude toward the standard game-theoretic concepts as well as new concepts.

Most of these notes rely on relatively elementary mathematics. Yet, our approach is formal and rigorous. The reader should be familiar with mathematical notation about sets and functions, with elementary linear algebra and topology in Euclidean spaces, and with proofs by mathematical induction. Elementary calculus is sometimes used in examples.

[Warning: These notes are work in progress! They probably contain some mistakes or typos. They are not yet complete. Many parts have been translated by assistants from Italian lecture notes and the English has to be improved. Examples and additional explanations and clarifications have to be added. So, the lecture notes are a poor substitute for lectures in the classroom.]

  • Contents

1 Introduction
  1.1 Decision Theory and Game Theory
  1.2 Why Economists Should Use Game Theory
  1.3 Abstract Game Models
  1.4 Terminology and Classification of Games
  1.5 Rational Behavior
  1.6 Assumptions and Solution Concepts in Game Theory

Part I: Static Games

2 Static Games: Formal Representation and Interpretation

3 Rationality and Dominance
  3.1 Expected Payoff
  3.2 Conjectures
  3.3 Best Replies and Undominated Actions

4 Rationalizability and Iterated Dominance
  4.1 Assumptions about players' beliefs
  4.2 The Rationalization Operator
  4.3 Rationalizability: The Powers of the Rationalization Operator
  4.4 Iterated Dominance
  4.5 Rationalizability and Iterated Dominance in Nice Games

5 Equilibrium
  5.1 Nash equilibrium
  5.2 Probabilistic Equilibria

6 Learning Dynamics, Equilibria and Rationalizability
  6.1 Perfectly observable actions
  6.2 Imperfectly observable actions

7 Games with Incomplete Information
  7.1 Games with Incomplete Information
  7.2 Equilibrium and Beliefs: the simplest case
  7.3 The General Case: Bayesian Games and Equilibrium
  7.4 Incomplete Information and Asymmetric Information
  7.5 Rationalizability in Bayesian Games
  7.6 The Electronic Mail Game
  7.7 Self-confirming Equilibrium and Incomplete Information

Part II: Dynamic Games

8 Multistage Games with Complete Information
  8.1 Preliminary Definitions
  8.2 Backward Induction and Subgame Perfection
  8.3 The One-Shot-Deviation Principle
  8.4 Some Simple Results About Repeated Games
  8.5 Multistage Games With Chance Moves
  8.6 Randomized Strategies
  8.7 Appendix

9 Multistage Games with Incomplete Information
  9.1 Multistage game forms
  9.2 Dynamic environments with incomplete information
  9.3 Dynamic Bayesian Games
  9.4 Bayes Rule
  9.5 Perfect Bayesian Equilibrium
  9.6 Signaling Games and Perfect Bayesian Equilibrium

Bibliography

• 1 Introduction

Game theory is the formal analysis of the behavior of interacting individuals. The crucial feature of an interactive situation is that the consequences of the actions of an individual depend (also) on the actions of other individuals. This is typical of many games people play for fun, such as chess or poker. Hence, interactive situations are called "games" and interacting individuals are called "players." If a player's behavior is intentional and he is aware of the interaction (which is not always the case), he should try and anticipate the behavior of other players. This is the essence of strategic thinking. In the rest of this chapter we provide a semi-formal introduction to some key concepts in game theory.

    1.1 Decision Theory and Game Theory

Decision theory is a branch of applied mathematics that analyzes the decision problem of an individual (or a group of individuals acting as a single decision unit) in isolation. The external environment is a primitive of the decision problem. Decision theory provides simple decision criteria characterizing an individual's preferences over different courses of action, provided that these preferences satisfy some rationality properties, such as completeness (any two alternatives are comparable) and transitivity (if a is preferred to b and b is preferred to c, then a is preferred to c). These criteria are used to find optimal (or rational) decisions.

Game theory could be more appropriately called "interactive decision theory." Indeed, game theory is a branch of applied mathematics


that analyzes interactive decision problems: there are several individuals, called players, each facing a decision problem whereby the external environment (from the point of view of this particular player) is given by the other players' behavior (and possibly some random variables). In other words, the welfare (utility, payoff, final wealth) of each player is affected not only by his behavior, but also by the behavior of other players. Therefore, in order to figure out the best course of action, each player has to guess which course of action the other players are going to take.

    1.2 Why Economists Should Use Game Theory

We are going to argue that game theory should be the main analytical tool used to build formal economic models. More generally, game theory should be used in all formal models in the social sciences that adhere to methodological individualism, i.e. try to explain social phenomena as the result of the actions of many agents, which in turn are freely chosen according to some consistent criterion.

The reader familiar with the pervasiveness of game theory in economics may wonder why we want to stress this point. Isn't it well known that game theory is used in countless applications to model imperfect competition, bargaining, contracting, political competition and, in general, all social interactions where the action of each individual has a non-negligible effect on the social outcome? Yes, indeed! And yet it is often explicitly or implicitly suggested that game theory is not needed to model situations where each individual is negligible, such as perfectly competitive markets. We are going to explain why this is, in our view, incorrect: the bottom line will be that every "complete" formal model of an economic (or social) interaction must be a game; economic theory has analyzed perfect competition by taking shortcuts that have been very fruitful, but they must be seen as such, just shortcuts.1

If we subscribe to methodological individualism, as mainstream economists claim to do, every social or economic observable phenomenon we are interested in analyzing should be reduced to the actions of the individuals who form the social or economic system. For example, if we

1 Our view is very similar to the original motivations for the study of game theory offered by the "founding fathers" von Neumann and Morgenstern in their seminal book The Theory of Games and Economic Behavior.


want to study prices and allocations, then we should specify which actions the individuals in the system can choose and how prices and allocations depend on such actions: if I = {1, 2, ..., n} is the set of agents, p is the vector of prices, y is the allocation and a = (ai)i∈I is the profile2 of actions, one for each agent, then we should specify relations p = f(a) and y = g(a). This is done in all models of auctions. For example, in a sealed-bid, first-price, single-object auction, a = (a1, ..., an) is the profile of bids by the n bidders for the object on sale, f(a1, ..., an) = max{a1, ..., an} (the object is sold for a price equal to the highest bid) and g(a) is such that the object is allocated to the highest bidder,3 who has to pay his bid. To be more general, we have to allow the variables of interest to depend also on some exogenous shocks x, as in the functional forms p = f(a, x), y = g(a, x). Furthermore, we should account for dynamics when choices and shocks take place over time, as in yt = gt(a1, x1, ..., at, xt). Of course, all the constraints on agents' choices (such as those determined by technology) should also be explicitly specified. Finally, if we are to explain choices according to some rationality criterion, we should include in the model the preferences of each individual i over possible outcomes. This is what we call a "complete model" of the interactive situation.4 We call variables, such as y, that depend on actions (and exogenous shocks) endogenous. (Actions themselves are "endogenous" in a trivial sense.) The rationale for this terminology is that we try to analyze/explain actions and variables that depend on actions.
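As a minimal sketch (not part of the original text, with hypothetical bid numbers), the outcome functions f and g of the sealed-bid, first-price auction described above can be written down directly:

```python
# Sketch: outcome functions of a sealed-bid, first-price, single-object auction.
# f maps the bid profile to the price; g maps it to the allocation (winner, payment).
# Ties are broken at random, as in footnote 3. All numbers below are hypothetical.
import random

def f(bids):
    """Price: the highest bid."""
    return max(bids)

def g(bids):
    """Allocation: the object goes to (one of) the highest bidder(s), who pays his bid."""
    price = max(bids)
    winners = [i for i, b in enumerate(bids) if b == price]
    winner = random.choice(winners)          # ties broken at random
    return winner, price

bids = [3.0, 5.0, 4.5]                       # hypothetical bid profile a = (a1, a2, a3)
print(f(bids))                               # 5.0 = price
print(g(bids))                               # (1, 5.0) = (winning bidder index, payment)
```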

At this point you may think that this is just a trite repetition of some abstract methodology of economic modelling. Well, think twice! The most standard concept in the economist's toolkit, Walrasian equilibrium, is not based on a complete model and is able to explain prices and allocations only by taking a two-step shortcut: (1) the modeler "pretends" that prices are observed and taken as parametrically given by all agents (including all

2 Given a set of n individuals, we always call "profile" a list of elements from a given set, one for each individual i. For instance, a = (a1, ..., an) typically denotes a profile of actions.

3 Ties can be broken at random.

4 The model is still in some sense incomplete: we have not even addressed the issue of what the individuals know about the situation and about each other. But the elements sketched in the main text are sufficient for the discussion. Let us stress that what we call a "complete model" does not include the modeler's hypotheses on how the agents choose, which are key to provide explanations or predictions of economic/social phenomena.


firms) before they act, hence before they can affect such prices; this is a kind of logical short-circuit, but it allows one to determine demand and supply functions D(p), S(p). Next, (2) market-clearing conditions D(p) = S(p) determine equilibrium prices. Well, this can only be seen as a (clever) reduced-form approach; absent an explicit model of price formation (such as an auction model), the modeler postulates that somehow the choices-prices-choices feedback process has reached a rest point and he describes this point as a market-clearing equilibrium. In many applications of economic theory to the study of competitive markets, this is a very reasonable and useful shortcut, but it remains just a shortcut, forced by the lack of what we call a complete model of the interactive situation.
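For concreteness, here is a minimal sketch of the two-step shortcut with hypothetical linear demand and supply functions (the functional forms and parameter values are assumptions made only for illustration, not taken from the text):

```python
# Sketch of the Walrasian shortcut: postulate D(p), S(p), then impose market clearing.
# Hypothetical linear specification: D(p) = A - B*p, S(p) = m*p.
A, B, m = 10.0, 2.0, 1.0

def D(p):          # demand taken as given by price-taking agents
    return max(A - B * p, 0.0)

def S(p):          # supply taken as given by price-taking agents
    return m * p

# Step (2): market clearing D(p) = S(p)  =>  A - B*p = m*p  =>  p = A / (B + m)
p_star = A / (B + m)
print(p_star, D(p_star), S(p_star))   # clearing price and the common quantity
```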

So, what do we get when instead we have a complete model? As we are going to show a bit more formally in the next section, we get what game theorists call a "game." This is why game theory should be a basic tool in economic modelling, even if one wants to analyze perfectly competitive situations. To illustrate this point, we will present a purely game-theoretic analysis of a perfectly competitive market, showing not only how such an analysis is possible, but also that it adds to our understanding of how equilibrium can be reached.

    1.3 Abstract Game Models

A completely formal definition of the mathematical object called (non-cooperative) game will be given in due time. We start with a semi-formal introduction to the key concepts, illustrated by a very simple example, a seller-buyer mini-game. Consider two individuals, S (Seller) and B (Buyer). Let S be the owner of an object and B a potential buyer. For simplicity, consider the following bargaining protocol: S can ask one Euro (1) or two Euros (2) to sell the object, and B can only accept (a) or reject (r). The monetary value of the object for individual i (i = S, B) is denoted Vi. This situation can be analyzed as a game which can be represented with a rooted tree with utility numbers attached to terminal nodes (leaves), player labels attached to non-terminal nodes, and action labels attached to arcs:


[Figure 1.1: A seller-buyer mini-game tree. S moves first and asks price 1 or 2; B then accepts (a) or rejects (r). Terminal payoffs (to S, to B): (1 - VS, VB - 1) after (1, a), (2 - VS, VB - 2) after (2, a), and (0, 0) after either rejection.]

The game tree represents the formal elements of the analysis: the set of players (or roles in the game, such as seller and buyer), the actions, the rules of interaction, the consequences of complete sequences of actions, and how players value such consequences.

    Formally, a game form is given by the following elements:

• I, a set of players.

• For each player i ∈ I, a set Ai of actions which could conceivably be chosen by i at some point of the game as the play unfolds.

• C, a set of consequences.

• E, an extensive form, that is, a mathematical representation of the rules saying whose turn it is to move, what a player knows, i.e., his or her information about past moves and random events, and what are the feasible actions at each point of the game; this determines a set Z of possible paths of play (sequences of feasible actions); a path of play z ∈ Z may also contain some random events such as the outcomes of throwing dice.

• g : Z → C, a consequence (or outcome) function which assigns to each play z a consequence g(z) in C.

  • 6 1. Introduction

The above elements represent what the layperson would call "the rules of the game." To complete the description of the actual interactive situation (which may differ from how the players perceive it) we have to add players' preferences over consequences (and, via expected utility calculations, lotteries over consequences):

• (vi)i∈I, where each vi : C → R is a utility function representing player i's preferences over consequences; preferences over lotteries over consequences are obtained via expected utility comparisons.

With this, we obtain the kind of mathematical structure called "game" in the technical sense of game theory. This formal description does not say how such a game would (or should) be played. The description of an interactive decision problem is only the first step toward making a prediction (or a prescription) about players' behavior.

To illustrate, in the seller-buyer mini-game: I = {S, B}; AS = {1, 2}, AB = {a, r}; C is a specification of the set of final allocations of the object and of monetary payments (for example, C = {oS, oB} × {0, 1, 2}, where (oi, k) means "object to i and k euros are given by B to S"); E is represented by the game tree, without the utility numbers at end nodes; g is the rule stipulating that if B accepts the ask price p then B gets the object and gives p euros to S, and if B rejects then S keeps the object and B keeps the money (g(p, a) = (oB, p), g(p, r) = (oS, 0)); vi is a risk-neutral (quasi-linear) utility function normalized so that the utility of no exchange is zero for both players (vS(oB, p) = p - VS, vB(oB, p) = VB - p, vi(oS, 0) = 0 for i = S, B). Game theory provides predictions on the behavior of S and B in this game based on hypotheses about players' knowledge of the rules of the game and of each other's preferences, and on hypotheses about strategic thinking. For example, B is assumed to accept an ask price if and only if it is below his valuation. Whether the ask price is high or low depends on the valuation of S and what S knows about the valuation of B. If S knows the valuation of B, he can anticipate B's response to each offer and how much surplus he can extract.
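The specification above translates almost line by line into data structures. The following is a minimal sketch (with hypothetical valuations VS and VB chosen only for illustration) of the game form and utility functions of the seller-buyer mini-game:

```python
# Sketch of the seller-buyer mini-game: players, actions, outcome function g, utilities v_i.
# V_S and V_B below are hypothetical valuations, not values given in the text.
V_S, V_B = 0.5, 2.5

players = ["S", "B"]
A_S = [1, 2]            # ask prices available to the seller
A_B = ["a", "r"]        # accept or reject

def g(p, response):
    """Outcome function: who gets the object and how much B pays to S."""
    return ("oB", p) if response == "a" else ("oS", 0)

def v_S(outcome):
    owner, payment = outcome
    return payment - V_S if owner == "oB" else 0.0   # no exchange normalized to 0

def v_B(outcome):
    owner, payment = outcome
    return V_B - payment if owner == "oB" else 0.0

for p in A_S:
    for r in A_B:
        out = g(p, r)
        print(p, r, out, v_S(out), v_B(out))
```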

    1.4 Terminology and Classification of Games

Games come in different varieties and are analyzed with different methodologies. The same strategic interaction may be represented in a


detailed way, using the mathematical objects described above, or in a much more parsimonious way. The amount of detail in the formal representation constrains the methods used in the analysis. The terminology used to refer to different kinds of strategic situations and to the different formal objects used to represent them may be confusing: some terms that have almost the same meaning in daily language, such as "perfect information" and "complete information," have very different meanings in game theory; other terms, such as "non-cooperative game," may be misleading. Furthermore, there is a tendency to confuse the substantive properties of the situations of strategic interaction that game theory aims at studying and the formal properties of the mathematical structures used in this endeavour. Here we briefly summarize the terminology and classification of game theory, doing our best to dispel such confusion.

    1.4.1 Cooperative vs Non-Cooperative Games

Suppose that the players (or at least some of them) could meet before the game is played and sign a binding agreement specifying their course of action (an external enforcing agency, e.g. the courts system, will force each player to follow the agreement). This could be mutually beneficial for them, and if this possibility exists we should take it into account. But how? The theory of cooperative games does not model the process of bargaining (offers and counteroffers) which takes place before the real game starts; this theory considers instead a simple and parsimonious representation of the situation, that is, how much of the total surplus each possible coalition of players can guarantee to itself by means of some binding agreement. For example, in the seller-buyer situation discussed above, the (normalized) surplus that each player can guarantee to itself is zero, while the surplus that the {S, B} coalition can guarantee to itself is VB - VS. This simplified representation is called a coalitional game. For every given coalitional game the theory tries to figure out which division of the surplus could result, or, at least, the set of allocations that are not excluded by strategic considerations (see, for example, Part IV of Osborne and Rubinstein, 1994).

On the other hand, the theory of non-cooperative games assumes either that binding agreements are not feasible (e.g. in an oligopolistic market they could be forbidden by antitrust laws), or that the bargaining process which could lead to a binding agreement on how to play a game


G is appropriately formalized as a sequence of moves in a larger game Γ(G). The players cannot agree on how to bargain in Γ(G), and this is the game to be analyzed with the tools of non-cooperative game theory. It is therefore argued that non-cooperative game theory is more fundamental than cooperative game theory. For example, in the seller-buyer situation Γ(G) would be the game tree displayed above to illustrate the formal objects comprised by the mathematical representation of a game. The analysis of this game reveals that if S knows VB he can use his first-mover advantage to ask for the highest price below VB. Of course, the results of the analysis depend on details of the bargaining rules that may be unknown to an external observer and analyst. Neglecting such details, cooperative game theory typically gives weaker but more robust predictions. But in our view, the main advantage of non-cooperative game theory is not so much its sharper predictions, but rather the conceptual clarity of working with explicit assumptions and showing exactly how different assumptions lead to different results.

Cooperative game theory is an elegant and useful analytical toolkit. It is especially appropriate when the analyst has little understanding of the true rules of interaction and wishes to derive some robust results from parsimonious information about the outcomes that coalitions of players can implement. However, non-cooperative game theory is much more pervasive in modern economics, and more generally in modern formal social sciences. We will therefore focus on non-cooperative game theory.

One thing should be clear: focusing on this part of the theory does not mean that cooperation is neglected. Non-cooperative game theory is not the study of un-cooperative behavior, but rather a method of analysis. Indeed we find the name "non-cooperative game theory" misleading, but it is now entrenched and we will conform to it.

    1.4.2 Static and Dynamic Games

A game is static if each player moves only once and all players move simultaneously (or at least without any information about other players' moves). Examples of static games are: Matching Pennies, Stone-Scissors-Paper, and sealed-bid auctions.

A game is dynamic if some moves are sequential and some player may observe (at least partially) the behavior of his or her co-players. Examples of dynamic games are: Chess, Poker, and open outcry auctions.


Dynamic games can be analyzed as if the players moved only once and simultaneously. The trick is to pretend that each player chooses in advance his strategy, i.e. a contingent plan of action specifying how to behave in every circumstance that may arise while playing the game. Each profile of strategies s = (si)i∈I determines a particular path, viz. z(s) ∈ Z, hence an outcome g(z(s)), and ultimately a profile of utilities, or payoffs, (vi(g(z(s))))i∈I. This mapping s ↦ (vi(g(z(s))))i∈I is called the normal form, or strategic form, of the game. The strategic form of a game can be seen as a static game where players simultaneously choose strategies in advance. For example, in the seller-buyer mini-game the strategies of S (seller) coincide with his possible ask prices; on the other hand, the set of strategies of B (buyer) contains four response rules: SB = {a1a2, a1r2, r1a2, r1r2}, where ap (rp) is the instruction "accept (reject) price p." The strategic form is as follows:

S \ B    a1a2              a1r2              r1a2              r1r2
p = 1    1 - VS, VB - 1    1 - VS, VB - 1    0, 0              0, 0
p = 2    2 - VS, VB - 2    0, 0              2 - VS, VB - 2    0, 0

Figure 1.2: Strategic form of the seller-buyer mini-game.
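As a sketch (not from the original text), the strategic form in Figure 1.2 can be computed mechanically from the tree: enumerate B's four response rules and evaluate the payoffs of each (ask price, rule) pair. The valuations used are again hypothetical.

```python
# Sketch: derive the strategic form of the seller-buyer mini-game from its extensive form.
# Each strategy of B is a response rule mapping each ask price to "a" (accept) or "r" (reject).
from itertools import product

V_S, V_B = 0.5, 2.5                       # hypothetical valuations

def payoffs(p, response):
    """Payoffs (u_S, u_B) at the terminal node reached by ask price p and B's response."""
    return (p - V_S, V_B - p) if response == "a" else (0.0, 0.0)

ask_prices = [1, 2]
# B's strategies: one response per price, e.g. ("a", "r") is the rule a1 r2.
B_strategies = list(product("ar", repeat=len(ask_prices)))

for p in ask_prices:
    row = [payoffs(p, s_B[ask_prices.index(p)]) for s_B in B_strategies]
    print(f"p = {p}:", row)
```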

A strategy in a static game is just the plan to take a specific action, with no possibility of making this choice contingent on the actions of others, as such actions are being chosen simultaneously. Therefore, the mathematical representation of a static game and its normal form must be the same. For this reason, static games are also called "normal-form games," or "strategic-form games." This is an unfortunately widespread abuse of language. In a correct language there would be no "normal-form games"; rather, looking at the normal form of a game is a method of analysis.

    1.4.3 Assumptions About Information

    Perfect information and observable actions

A dynamic game has perfect information if players move one at a time and each player, when it is his or her turn to move, is informed of all the previous moves (including the realizations of chance moves). If some


moves are simultaneous, but each player can observe all past moves, we say that the game has observable actions (or "perfect monitoring," or "almost perfect information"). Examples of games with perfect information are the seller-buyer mini-game, Chess, Backgammon, and Tic-Tac-Toe. An example of a game with observable actions is the repeated Cournot oligopoly, with simultaneous choice of outputs in every period and perfect monitoring of past outputs. Note, perfect information is an assumption about the rules of the game.

    Asymmetric information

A game with imperfect information features asymmetric information if different players get different pieces of information about past moves and, in particular, about the realizations of chance moves. Poker and Bridge are games with asymmetric information because players observe their own cards, but not the cards of others. Like perfect information, asymmetric information is also entailed by the rules of the game.

    Complete and incomplete information

An event E is common knowledge if everybody knows E, everybody knows that everybody knows E, and so on for all iterations of "everybody knows that." To use a suggestive semi-formal expression: for every m = 1, 2, ... it is the case that [(everybody knows that)^m E is the case].

There is complete information in an interactive decision problem represented by a game G = ⟨I, A, C, E, g, (vi)i∈I⟩ if it is common knowledge that G is the actual game to be played. Conversely, there is incomplete information if for some player i either it is not the case that [i knows that G is the actual game] or for some m = 1, 2, ... it is not the case that [i knows that (everybody knows that)^m G is the actual game]. Most economic situations feature incomplete information because either the outcome function, g, or players' preferences, (vi)i∈I, are not common knowledge. Note, complete (or incomplete) information is not an assumption about the rules of the game; it is an assumption on players' "interactive knowledge" about the rules and preferences. For example, in the seller-buyer mini-game it can be safely assumed that there is common knowledge of the outcome function g (who gets the object and monetary transfers), but the valuations VS, VB need not be commonly known. For


some types of objects Vi is known only to i; for other types of objects, the seller may know the quality of the object better than the buyer, so VB could be given by an idiosyncratic component known to B plus a quality component known to S.

Although situations with asymmetric information about chance moves, such as Poker, are conceptually different from situations with incomplete information, we will see that there is a formal similarity which allows (at least to some extent) the use of the same analytical tools to study both situations.

Examples: In parlor games like chess and poker there is presumably complete information; indeed, the rules of the game are common knowledge, and it may be taken for granted that it is common knowledge that players like to win. On the other hand, auctions typically feature incomplete information because the competitors' valuations of the objects on sale are not commonly known.

    1.5 Rational Behavior

The decision problem of a single individual i can be represented as follows: i can take an action in a set of feasible alternatives, or decisions, Di; i's welfare (payoff, utility) is determined by his decision di and an external state θi ∈ Θi (which represents a vector of variables beyond i's control, e.g., the outcome of a random event and/or the decisions of other players) according to the outcome function

g : Di × Θi → C

and the utility function

vi : C → R.

We assume that i's choice cannot affect θi and that i does not have any information about θi when he chooses, beyond the fact that θi ∈ Θi. The choice is made once and for all. (We will show in the part on dynamic games that this model is much more general than it seems.) Assume, just for simplicity, that Di and Θi are both finite and let, for notational convenience, Θi = {θi^1, θi^2, ..., θi^n}. In order to make a rational choice, i has to assess the probability of the different states in Θi. Suppose


that his beliefs about θi are represented by a probability distribution μi ∈ Δ(Θi), where

Δ(Θi) = { μi ∈ R^n_+ : Σ_{k=1}^{n} μi(θi^k) = 1 }.

Then i's rational decision (given μi) is to take any alternative that maximizes i's expected payoff, i.e. any di* such that

di* ∈ arg max_{di ∈ Di} Σ_{θi ∈ Θi} μi(θi) vi(g(di, θi)).

Decision di* is also called a best response, or best reply, to μi. The probability distribution μi could be exogenously given (roulette lottery) or simply represent i's subjective beliefs about θi (horse lottery, game against an opponent). The composite function vi ∘ g is denoted ui, that is, ui(di, θi) = vi(g(di, θi)); ui is called the payoff function, because in many games of interest the rules of the decision problem (or game) attach monetary payoffs to the possible outcomes of the game and it is taken for granted that the decision maker maximizes his expected monetary payoff. In the finite case, ui is often represented as a matrix with (k, ℓ) entry ui^{kℓ} = ui(di^k, θi^ℓ):

         θi^1     θi^2     ...
di^1     ui^11    ui^12    ...
di^2     ui^21    ui^22    ...
...      ...      ...      ...

Figure 1.3: Matrix representation of the payoff function.
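A minimal sketch of the best-reply calculation just described, using a hypothetical 2-by-3 payoff matrix and a hypothetical belief μi (neither appears in the text):

```python
# Sketch: expected-payoff maximization for a finite decision problem.
# Rows are decisions d_i, columns are external states theta_i; u[d][k] = u_i(d, theta^k).
u = {
    "d1": [3.0, 0.0, 1.0],    # hypothetical payoffs
    "d2": [1.0, 2.0, 2.0],
}
mu = [0.5, 0.3, 0.2]          # hypothetical belief over the three states (sums to 1)

def expected_payoff(payoffs, belief):
    return sum(p * q for p, q in zip(belief, payoffs))

# Best reply: any decision maximizing expected payoff given the belief mu.
best = max(u, key=lambda d: expected_payoff(u[d], mu))
print({d: expected_payoff(u[d], mu) for d in u})   # {'d1': 1.7, 'd2': 1.5}
print(best)                                        # 'd1'
```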

This representation of a decision maker's choice, of course, fits very well the case of static games. In this case Di = Ai is simply the set of feasible actions and Θi may be the set A-i of the opponents' feasible action profiles (or a larger space if there is incomplete information).

However, the representation is more general: consider a dynamic game. Its extensive form E specifies all the circumstances under which player i


may have to choose an action and the information i would have in those circumstances. Then player i can decide how to play this game according to a plan of action, or strategy, specifying how he would behave in any such circumstance given the available information. His uncertainty concerns the strategies of others. Every possible profile of strategies induces a particular outcome, or consequence. Therefore, the decision problem faced by a rational agent is the problem of choosing a strategy under uncertainty about the strategies chosen by the opponents (and possibly about other unknowns). In the part of these lecture notes devoted to dynamic games we will discuss some subtleties related to the concepts of "strategy" and "plan."

1.6 Assumptions and Solution Concepts in Game Theory

Game theory provides a mathematical language to formulate assumptions about the rules of the game and players' preferences, that is, all the elements listed in Section 1.3. But such assumptions are not enough to derive conclusions about how the players behave in a given game. The central behavioral assumption in game theory is that players are rational (see Section 1.5). However, without further assumptions about what players believe about the variables which affect their payoffs but are beyond their control (in particular their opponents' behavior), the rationality assumption has little behavioral content: we can only say that each player's choice is undominated, i.e. is a best response to some belief. In most games, this is too little to derive interesting results.5

How can we obtain interesting results from our assumptions about the rules of the game? The standard approach used in game theory is analogous to the one used in the undergraduate economic analysis of competitive markets: we formulate assumptions about preferences and technology, and then we assume that economic agents' plans are mutually consistent best responses to equilibrium prices.

As explained in Section 1.2, there is an important difference: the textbook analysis of competitive markets does not specify a price-formation mechanism and uses equilibrium market clearing as a theoretical shortcut to overcome this problem. On the contrary, in a game-theoretic model all the observable variables we try to explain depend on players' actions (and exogenous shocks) according to an explicitly specified function, as in auction models.

5 There are important exceptions. In many situations where each individual in a group decides how much to contribute to a public good, it is strictly dominant to contribute nothing (see Example 2 in Chapter 2).

Yet, there are also similarities with the analysis of competitive markets. One could say that the role of prices is played (sic) by players' beliefs, since we assume that they are, in some sense, mutually consistent. The precise meaning of the statement "beliefs are mutually consistent" is captured by a solution concept. The simplest and most widely used solution concept in game theory, Nash equilibrium, assumes that players' beliefs about each other's strategies are correct and each player best responds to his beliefs; as a result, each player uses a strategy that is a best response to the strategies used by the other players.

Nash equilibrium is not the only solution concept used in game theory. Recent developments made clear that solution concepts implicitly capture expressible6 assumptions about players' rationality and beliefs, and some assumptions that are appropriate in some contexts are too weak, or simply inappropriate, in different contexts. Therefore, it is very important to provide convincing motivations for the solution concept used in a specific application.

Let us somewhat vaguely define as strategic thinking what intelligent agents do when they are fully aware of participating in an interactive situation and form conjectures by putting themselves in the shoes of other intelligent agents. As the title of this book suggests, we mostly (though not exclusively) present game theory as an analysis of strategic thinking. We will often follow traditional textbook game theory and present different solution concepts, providing informal motivations for them, sometimes with a lot of hand-waving. However, you should always be aware that a more fundamental approach is being developed, where game theorists formulate explicit assumptions not only about the rules of the game and preferences, but also about players' rationality and initial beliefs, and also about how beliefs change as the play unfolds. Implications about choices and observables can be derived directly

6 We can give a mathematical meaning to the label "expressible," but, at the moment, you should just understand something that can be expressed in a clear and precise language.


from such assumptions, without the mediation of solution concepts. Then, solution concepts become mere shortcuts to characterize the behavioral implications of assumptions about rationality and beliefs.

For example, a standard solution concept in game theory, subgame perfect equilibrium, requires players' strategies to form an equilibrium not only in the game itself, but also in every subgame.7 In finite games with (complete and) perfect information, if there are no relevant ties, there is a unique subgame perfect equilibrium that can be computed with a backward induction algorithm. For two-stage games with perfect information, subgame perfect equilibrium can be derived from simple assumptions about rationality and beliefs: players are rational and the first mover believes that the second mover is rational. To illustrate, consider the seller-buyer mini-game and assume players have complete information. Specifically, suppose that it is common knowledge that VS < 1 and VB > 2. The assumption of rationality of the buyer, denoted RB, implies that he accepts every price below his valuation; since VB > 2, the only strategy consistent with RB is a1a2. Rationality of the seller, denoted RS, implies that he attaches subjective probabilities to the four strategies of the buyer and chooses the ask price that maximizes his expected utility; if the seller has deterministic beliefs, he asks for the highest price he expects to be accepted, e.g., he asks p = 1 if he is certain of strategy a1r2. But if the seller is certain that the buyer is rational, given the complete information assumption, he assigns probability 1 to strategy a1a2. Then the rationality of the seller and the belief of the seller in the rationality of the buyer imply that the seller asks for the highest price, i.e. he uses the bargaining power derived from the knowledge that VB > 2 and the first-mover advantage. In symbols, write Bi(E) for "i believes (with probability one) E," where E is some event; then RS ∧ BS(RB) implies that S asks for p = 2; RB implies that p = 2 is accepted; RS ∧ BS(RB) ∧ RB implies sS = 2 and sB = a1a2, which is the subgame perfect equilibrium obtained via backward induction.
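The backward-induction computation for this two-stage example can be sketched in a few lines (a sketch only; the valuations satisfying VS < 1 and VB > 2 are instantiated by hypothetical numbers):

```python
# Sketch: backward induction in the seller-buyer mini-game under complete information.
# Assumes V_S < 1 and V_B > 2, as in the text; the particular numbers are hypothetical.
V_S, V_B = 0.5, 2.5
ask_prices = [1, 2]

def buyer_best_reply(p):
    """R_B: the buyer accepts any price below his valuation."""
    return "a" if p < V_B else "r"

def seller_payoff(p, response):
    return p - V_S if response == "a" else 0.0

# R_S and B_S(R_B): the seller anticipates the buyer's best reply at each price
# and asks the price that maximizes his own payoff.
p_star = max(ask_prices, key=lambda p: seller_payoff(p, buyer_best_reply(p)))
print(p_star, buyer_best_reply(p_star))   # 2 'a' : the subgame perfect outcome
```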

7 A precise definition of a subgame will be given in Part 2. For the time being, you should think of a subgame as a "smaller" game that starts at a non-terminal node of the original game.

Two-stage games with complete and perfect information are so simple that the analysis above may seem trivial. On the other hand, the rigorous analysis of more complicated dynamic games based on higher levels of mutual belief in rationality presents conceptual difficulties that will be addressed only in the second part of these lecture notes. Here we further illustrate strategic thinking with a semi-formal analysis of a static game representing a perfectly competitive market. We hope this will also have the side effect of making the reader understand that game theory is useful to analyze every "complete" model of economic interaction, including models of perfect competition.

1.6.1 Game Theoretic Analysis of a Perfectly Competitive Market

Let us analyze the decisions of a large number n of small, identical agricultural firms producing a crop (say, corn). For reasons that will be clear momentarily, it is convenient to index firms as equally spaced rational numbers in the interval (0, 1], that is, I = {1/n, 2/n, ..., (n-1)/n, 1} ⊆ (0, 1], with generic element i ∈ I. Each firm i decides in advance how much to produce. This quantity is denoted q(i). The output of corn q(i) will be ready to be sold only in six months. Markets are incomplete: it is impossible to agree in advance on a price for corn to be delivered in six months. So, each firm i has to guess the price p for corn in six months. It is common knowledge that the demand for corn (not explicitly "micro-founded" in this example) is given by the function D(p) = n · max{A - Bp, 0}, that is, D(p) = n(A - Bp) if p ≤ A/B and D(p) = 0 otherwise. It is also common knowledge that each firm will fetch the highest price at which the market can absorb the total output for sale, Q = Σ_{i=1}^{n} q(i). Thus, each firm will sell each unit of output at the uniform price P(Q/n) = (1/B)(A - Q/n) if Q/n ≤ A and p = 0 if Q/n > A (for notational convenience, we express inverse demand as a function of the average output Q/n). Each firm i has total cost function C(q) = q²/(2m) and marginal cost function MC(q) = q/m. Each firm has obvious preferences: it wants to maximize the difference between revenues and costs. The technology (cost function) and preferences of each firm are common knowledge.8

Now, we want to represent mathematically the assumption that each firm is "negligible" with respect to the (maximum) size of the market A.

8 The analysis can be generalized assuming heterogeneous firms, where each firm i is characterized by the marginal cost parameter m(i). Then it is enough to assume that each i knows m(i) and that the average m = (1/n) Σ_{i=1}^{n} m(i) is common knowledge.


Instead of assuming a large but finite set of firms given by a fine grid I in the interval (0, 1], we use a quite standard mathematical idealization and assume that there is a continuum of firms, normalized to the unit interval I = (0, 1]. Thus, the average output is q̄ = ∫_0^1 q(i) di instead of (1/n) Σ_{i=1}^{n} q(i). Each firm understands that its decision cannot affect the total output and the market price.9

Given that all the above is common knowledge, we want to derive the price implications of the following assumptions about rationality and beliefs:

R (rationality): each firm has a conjecture about (q̄ and) p and chooses a best response to such conjecture; for brevity, each firm is rational;10

B(R) (mutual belief in rationality): all firms believe (with certainty) that all firms are rational;

B^2(R) = B(B(R)) (mutual belief of order 2 in rationality): all firms believe that all firms believe that all firms are rational;

...

B^k(R) = B(B^{k-1}(R)) (mutual belief of order k in rationality): [all firms believe that]^k all firms are rational...

Note, we are using symbols for these assumptions. In a more formal mathematical analysis, these symbols would correspond to events in a space of states of the world. Here they are just useful abbreviations. Conjunctions of assumptions will be denoted by the symbol ∧, as in R ∧ B(R). It is a good thing that you get used to these symbols, but if you find them baffling, just ignore them and focus on the words.

What are the consequences of rationality (R) in this market? Let p(i) denote the price expected by firm i.11 Then i solves

max_{q ≥ 0} { p(i) · q - q²/(2m) },

whose first-order condition p(i) - q/m = 0 yields

q(i) = m · p(i).

9 This example is borrowed from Guesnerie (1992).

10 The symbols R, B(R), etc. below should be read as abbreviations of sentences. They can also be formally defined using mathematics, but this goes beyond the scope of these lecture notes.

11 This is, in general, the mathematical expectation of p according to the subjective probabilistic conjecture of firm i.


Since firms know the inverse demand function P(q̄) = max{(A - q̄)/B, 0}, each firm i's expectation of the price is below the upper bound p^0 := A/B (":=" means "equal by definition"): p(i) ≤ A/B. It follows that

q̄ = ∫_0^1 m · p(i) di = m ∫_0^1 p(i) di ≤ q^1 := m · A/B,

where q^1 denotes the upper bound on average output when firms are rational.

By mutual belief in rationality, each firm understands that q̄ ≤ q^1 and p ≥ max{(A - mA/B)/B, 0}. If m ≥ B, this is not very helpful: assuming belief in rationality does not reduce the span of possible prices and does not yield additional implications for the endogenous variables we are interested in. It is not hard to understand that rationality and mutual belief in rationality of every order yield the same result q̄ ≤ mA/B, where mA/B ≥ A because m ≥ B.

So, let's assume from now on that m < B. Note, this is the so-called cobweb stability condition. Given belief in rationality, B(R), each firm expects a price at least as high as the lower bound

p^1 := (1/B)(A - q^1) = (A/B)(1 - m/B) > 0.

Therefore R ∧ B(R) implies that q̄ = m ∫_0^1 p(i) di ≥ m · p^1, that is, average output is above the lower bound

q^2 := m · p^1 = A · (m/B)(1 - m/B),

    and hence the price is below the upper bound

p^2 := (1/B)(A - q^2) = (A/B)(1 - (m/B)(1 - m/B)) = (A/B) Σ_{k=0}^{2} (-m/B)^k.

By B(R) and B^2(R) (mutual belief of order 2 in rationality), each firm predicts that p ≤ p^2. Rational firms with such expectations choose an output below the upper bound

q^3 := m · p^2 = A · (m/B)(1 - (m/B)(1 - m/B)) = A · (m/B) Σ_{k=0}^{2} (-m/B)^k.


Thus, R ∧ B(R) ∧ B^2(R) implies that q̄ ≤ q^3 and the price must be above the lower bound

p^3 := (1/B)(A - q^3) = (A/B) Σ_{k=0}^{3} (-m/B)^k.

Can you guess the consequences for price and average output of assuming rationality and common belief in rationality (mutual belief in rationality of every order)? Draw the classical Marshallian cross of demand and supply, trace the upper and lower bounds found above, and go on. You will find the answer.

More formally, define the following sequence of upper and lower bounds on the price:

p^L := (A/B) Σ_{k=0}^{L} (-m/B)^k,

which is an upper bound when L is even and a lower bound when L is odd. It can be shown by mathematical induction that R ∧ B(R) ∧ ... ∧ B^{L-1}(R) (rationality and mutual belief in rationality of order k = 1, ..., L-1, with L ≥ 2) implies p ≤ p^L if L is even, and p ≥ p^L if L is odd. Since m/B < 1, the sequence p^{2ℓ} = (A/B) Σ_{k=0}^{2ℓ} (-m/B)^k is decreasing in ℓ, the sequence p^{2ℓ+1} = (A/B) Σ_{k=0}^{2ℓ+1} (-m/B)^k is increasing in ℓ, and both converge to the competitive equilibrium price p* = A/(B + m); see Figure 1.4.
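The iteration of bounds can be checked numerically. The sketch below (with hypothetical parameter values satisfying the cobweb stability condition m < B) computes p^L for increasing L and compares it with p* = A/(B + m):

```python
# Sketch: alternating price bounds under rationality and L-1 orders of mutual belief in it.
# Hypothetical parameters with m < B (cobweb stability), so the bounds converge.
A, B, m = 10.0, 2.0, 1.0

def price_bound(L):
    """p^L = (A/B) * sum_{k=0}^{L} (-m/B)^k: upper bound for even L, lower bound for odd L."""
    return (A / B) * sum((-m / B) ** k for k in range(L + 1))

p_star = A / (B + m)                     # competitive equilibrium price
for L in range(7):
    print(L, round(price_bound(L), 4))   # even L: above p*, odd L: below p*, closing in
print("p* =", round(p_star, 4))          # 3.3333
```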

A comment on "rational" versus "rationalizable" expectations.

This price p* is often called the "rational expectations" price, because it is the price that the competitive firms would expect if they performed the same equilibrium analysis as the modeler. We refrain from using such terminology. Game theory allows a much more rigorous and precise terminology, the one that we have been using above. In these lecture notes, "rationality" is a joint property of choices and beliefs and it means only that agents best respond to their beliefs. The phrase "rational beliefs," or "rational expectations," was coined at a time when theoretical economists did not have the formal tools to be precise and rigorous in the analysis of rationality and interactive beliefs. Now that these tools exist, such phrases as "rational expectations" should be avoided, at least in game-theoretic analysis. On the other hand, expectations may be consistent with common belief in rationality (e.g., under common knowledge of the game), and it makes sense to call such expectations rationalizable. Note well, the fact that there is only one rationalizable expectations price, p*, is a conclusion of the analysis; it has not been assumed! Also, this conclusion holds only under the cobweb stability condition m < B. This contrasts with standard equilibrium analysis, whereby it is assumed at the outset that expectations are correct, whether or not this follows from assumptions like common belief in rationality.

[Figure 1.4: Strategic thinking in a perfectly competitive market. The Marshallian cross with inverse demand P(q) and marginal cost MC(q); the bounds q^2, q^3, q^1 on average output and p^1, p^2, p^0 on the price bracket the competitive equilibrium (q*, p*).]

What do we learn from this example? First, it shows that certain assumptions about rationality and beliefs (plus common knowledge of the game) may yield interesting implications or not, according to the given model and parameter values. Here, rationality and common belief in rationality have sharp implications for the market price and average output if the cobweb stability condition holds. Otherwise they only yield a very weak upper bound on average output.

Second, recall that strategic thinking is what intelligent agents do when they are fully aware of participating in an interactive situation and form conjectures by putting themselves in the shoes of other intelligent agents; in this sense, the above is as good an analysis of strategic thinking as any, and yet it is the analysis of a perfectly competitive market!

Third, the example shows how solution concepts can be interpreted as shortcuts. Here we presented an iterated deletion of prices and conjectures, where one deletes prices that cannot occur when firms best respond to conjectures and deletes conjectures that do not rule out deleted prices. Note, mutual beliefs of order k do not appear explicitly in this iterative deletion procedure. The procedure is a solution concept of a kind and it can be used as a shortcut to obtain the implications of rationality and common belief in rationality. Under cobweb stability, the "rational expectations" equilibrium price p* results. This shows that under appropriate conditions we can use the equilibrium concept to draw the implications of rationality and common belief in rationality. This theme will be played time and again.

  • Part I

    Static Games


• 2 Static Games: Formal Representation and Interpretation

A static game is an interactive situation where all players move simultaneously and only once.1 The key features of a static game are formally represented by a mathematical structure (a list of sets and functions)

G = ⟨I, C, (Ai)i∈I, g, (vi)i∈I⟩,

where:

• I = {1, 2, ..., n} is the set of individuals or players, typically denoted by i, j;

• Ai is the set of possible actions for player i; ai, a′i, a″i, ... are alternative action labels we frequently use;

1 Sometimes static games are also called "normal form games" or "strategic form games." As we mentioned in the Introduction, this terminology is somewhat misleading. The normal, or strategic, form of a game has the same structure as a static game, but the game itself may have a sequential structure. The normal form of a game shows the payoffs induced by any combination of plans of actions of the players. Some game theorists, including the founding fathers von Neumann and Morgenstern, argue that from a theoretical point of view all strategically relevant aspects of a game are contained in its normal form. Anyway, here by "static game" we specifically mean a game where players move simultaneously.



• g : A1 × A2 × ... × An → C is the outcome (or consequence) function, which captures the essence of the rules of the game beyond the assumption of simultaneous moves;

• vi : C → R is the von Neumann-Morgenstern utility function of player i.

The structure ⟨I, C, (Ai)i∈I, g⟩, that is, the game without the utility functions, is called a game form. The game form represents the essential features of the rules of the game. A game is obtained by adding to the game form a profile of utility functions (vi)i∈I = (v1, ..., vn), which represent players' preferences over lotteries over consequences, according to expected utility calculations.

From the consequence function g and the utility function vi of player i, we obtain a function that assigns to each action profile a = (aj)j∈I the utility vi(g(a)) for player i of consequence g(a). This function

ui = vi ∘ g : A1 × A2 × ... × An → R

is called the payoff function of player i. The reason why ui is called the payoff function is that the early work on game theory assumed that consequences are distributions of monetary payments, or payoffs, and that players are risk neutral, so that it is sufficient to specify, for each player i, the monetary payoff implied by each action profile. But in modern game theory ui(a) = vi(g(a)) is interpreted as the von Neumann-Morgenstern utility of outcome g(a) for player i. If there are monetary consequences, then

g = (g1, ..., gn) : A → R^n, where mi = gi(a) is the net monetary gain of player i when a is played. Assuming that player i is selfish, the function vi is strictly increasing in mi and constant in each mj with j ≠ i (note that selfishness is not an assumption of game theory; it is an economic assumption that may be adopted in game-theoretic models). Then the function vi captures the risk attitudes of player i. For example, i is strictly risk averse if and only if vi is strictly concave.
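A minimal sketch of the composition ui = vi ∘ g with monetary consequences and a strictly concave (risk-averse) utility; the functional forms and numbers below are hypothetical assumptions made only for illustration:

```python
# Sketch: payoff function u_i = v_i(g(a)) with monetary consequences.
# g returns the profile of net monetary gains; v is a selfish, strictly increasing and
# strictly concave utility of own money (risk aversion). All forms are hypothetical.
import math

def g(a):
    """Hypothetical monetary outcome: each player's gain is his action minus half the total."""
    total = sum(a)
    return [ai - 0.5 * total for ai in a]

def v(m_i):
    """Strictly increasing, strictly concave utility of own monetary gain (CARA form)."""
    return 1.0 - math.exp(-m_i)

def u(i, a):
    return v(g(a)[i])     # u_i(a) = v_i(g_i(a))

a = (2.0, 1.0)            # hypothetical action profile
print(g(a), u(0, a), u(1, a))
```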

    Games are typically represented in the reduced form

G = ⟨I, (Ai, ui)i∈I⟩,

  • 25

which shows only the payoff functions. We often do the same. However, it is conceptually important to keep in mind that payoff functions are derived from a consequence function and utility functions.

For simplicity, we often assume that all the sets Ai (i ∈ I) are finite; when the sets of actions are infinite, we explicitly say so, otherwise they are tacitly assumed to be finite. This simplifies the analysis of probabilistic beliefs. A static game with finite action sets is called a finite static game (or just a finite game, if "static" is clear from the context). When the finiteness assumption is important to obtain some results, we will explicitly mention that the game is finite in the formal statement of the result.

We call a profile a list of objects (xi)i∈I = (x1, ..., xn), one object xi from some set Xi for each player i. In particular, an action profile is a list a = (a1, ..., an) ∈ A := ×i∈I Ai.2 The payoff function ui numerically represents the preferences of player i among the different action profiles a, a′, a″, ... ∈ A. The strategic interdependence is due to the fact that the outcome function g depends on the entire action profile a and, consequently, the utility that a generic individual i can achieve depends not only on his choice, but also on those of the other players.3 To stress how the payoff of i depends on a variable under i's control as well as on a vector of variables controlled by other individuals, we denote by -i = I\{i} the set of individuals different from i, we define

A-i := A1 × ... × Ai-1 × Ai+1 × ... × An,

and we write the payoff of i as a function of the two arguments ai ∈ Ai and a-i ∈ A-i, i.e. ui : Ai × A-i → R.

In order to be able to reach some conclusions regarding players' behavior in a game G, we impose two minimal assumptions (further

2 Recall that ":=" means "equal by definition."

3 Note, in our formulation, players' utility depends only on the consequence determined by action profile a through the outcome function g. Thus, we are assuming a form of consequentialism. If you think that consequentialism is an obvious assumption, think twice. Substantial experimental evidence suggests agents also care about other players' intentions: agent i may evaluate the same action profile differently depending on his belief about opponents' intentions. To provide a formal analysis of players' intentions, we need to specify what an agent thinks other agents will do, what an agent thinks other agents think other agents will do, what an agent thinks other agents think other agents think that other agents will do, and so on. Although the analysis of these issues is beyond the scope of these notes, the interested reader is referred to the growing literature on psychological games.


assumptions will be introduced later on as necessary):

[1] Each player i knows Ai, A-i and his own payoff function ui : Ai × A-i → R.

[2] Each player is rational (see the next chapter for a more formal definition of rationality).

It could be argued that assumption [1] ("i knows ui") is tautological. Indeed, it seems almost tautological to assume that every individual knows his preferences over A. However, recall that ui = vi ∘ g; thus, although it may be reasonable to take for granted that i knows vi, i.e. his preferences over the set of consequences C, i might not know the outcome function g. The following example clarifies this point.

Example 1. (Knowledge of the utility and payoff functions) Players 1 and 2 work in a team for the production of a public good. Their action is the effort ai ∈ [0, 1]. The output, y, depends on efforts according to a Cobb-Douglas production function

y = K(a1)^α1 (a2)^α2.

The cost of effort measured in terms of the public good is ci(ai) = (ai)². The outcome function g : [0, 1] × [0, 1] → R³_+ is then

g(a1, a2) = (K(a1)^α1 (a2)^α2, (a1)², (a2)²).

The utility function of player i is vi(y, c1, c2) = y - ci. The payoff function is then

ui(a1, a2) = vi(g(a1, a2)) = K(a1)^α1 (a2)^α2 - (ai)².

If i does not know all the parameters K, α1 and α2 (as well as the functional forms above), then he does not know the payoff function ui, even if he knows the utility function vi.
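Example 1 is easy to encode; the sketch below (with hypothetical values for K, α1 and α2) makes the distinction concrete: the utility function vi can be known to a player, but computing the payoff ui requires the outcome function, hence the parameters.

```python
# Sketch of Example 1: utility v_i is known, but the payoff u_i = v_i(g(a)) also requires
# the outcome function g, i.e. the parameters K, alpha1, alpha2 (hypothetical values below).
K, alpha1, alpha2 = 2.0, 0.5, 0.5

def g(a1, a2):
    """Outcome: (public good output, cost of player 1's effort, cost of player 2's effort)."""
    y = K * (a1 ** alpha1) * (a2 ** alpha2)
    return y, a1 ** 2, a2 ** 2

def v(i, y, c1, c2):
    """Utility of player i: output minus own cost."""
    return y - (c1 if i == 1 else c2)

def u(i, a1, a2):
    return v(i, *g(a1, a2))   # u_i(a) = v_i(g(a)): requires knowledge of g

print(u(1, 0.5, 0.5), u(2, 0.5, 0.5))   # 0.75 0.75 with the hypothetical parameters
```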

    Example 1 also clarifies the difference between gamein the technicalsense of game theory, and the rules of the gameas understood by thelayperson (see the Introduction). In the example above, the rules ofthe gamespecify the action set [0, 1] and the outcome function, that isfunctions y = K(a1)

    1(a2)2 and ci(ai) = (ai)

    2 (i = 1, 2). In order tocomplete the description of the game in the technical sense, we also haveto specify the utility functions vi(y, c1, c2) = y ci (i = 1, 2).4

$^4$ This specification assumes that players are risk-neutral with respect to their consumption of the public good.


Unlike the games people play for fun (and those that experimental subjects play in most game-theoretic experiments), in many economic games the rules of the game may not be fully known. In the above example, for instance, it is possible that player $i$ knows his own productivity parameter $\alpha_i$, but does not know $K$ or $\alpha_{-i}$. Thus, assumption [1] is substantive: $i$ may not know the consequence function $g$ and hence he may not know $u_i = v_i \circ g$.

The complete information assumption that we will consider later on (e.g. in Chapter 4) is much stronger; recall from the Introduction that there is complete information if the rules of the game and players' preferences over (lotteries over) consequences are common knowledge. Although, as we explained above, the rules of the game may be only partially known, there are still many interactive situations where it is reasonable to assume that they are not only known, but indeed commonly known. Yet, assuming common knowledge of players' preferences is often far-fetched. Thus, complete information should be thought of as an idealization that simplifies the analysis of strategic thinking. Chapter 7 will introduce the formal tools necessary to model the absence of complete information.

3 Rationality and Dominance

In this chapter we analyze static games from the perspective of decision theory. We take as given the conjecture$^1$ of a player about his opponents' behavior. This conjecture is, in general, probabilistic, i.e. it can be expressed by assigning subjective probabilities to the different action profiles of the other players. A player is said to be rational if he maximizes his expected payoff given his conjecture. The concept of dominance allows us to characterize which actions are consistent with the rationality assumption without knowing a player's conjecture.

    3.1 Expected Payoff

The payoff function $u_i$, which is a derived utility function, implicitly represents the preferences of $i$ over different probability measures, or lotteries, over $A$. The preferred lotteries are those that yield a higher expected payoff. We explain this in detail below.

    The set of probability measures over a generic domain X is denoted by

$^1$ In general, we use the word "conjecture" to refer to a player's belief about variables that affect his payoff and are beyond his control, such as the actions of other players in a static game.



$\Delta(X)$. If $X$ is finite,$^2$
$$\Delta(X) := \left\{ \mu \in \mathbb{R}^X_+ : \sum_{x \in X} \mu(x) = 1 \right\}.$$

Note that $X$ can be regarded as a subset of $\Delta(X)$: a point $x \in X$ corresponds to the degenerate probability measure $\mu$ that satisfies $\mu(x) = 1$. Hence, with a slight abuse of notation, we can write $X \subseteq \Delta(X)$ (more abuses of notation will follow).

Definition 1. Consider a probability measure $\mu \in \Delta(X)$, where $X$ is a finite set. The subset of the elements $x \in X$ to which $\mu$ assigns a positive probability is called the support of $\mu$ and is denoted by$^3$
$$\mathrm{Supp}\,\mu := \{x \in X : \mu(x) > 0\}.$$

Suppose that an individual has a preference relation over a set of alternatives $X$; then one can consider $\Delta(X)$ as a set of lotteries. In a static game, letting $X = A$, we assume that the individual holds a complete preference relation not only over $A$, but over $\Delta(A)$ as well. This preference relation is represented by the payoff function $u_i$ as follows: $\mu$ is preferred to $\nu$ if and only if the expected payoff of $\mu$ is higher than the expected payoff of $\nu$, i.e.
$$\sum_{a \in A} \mu(a) u_i(a) > \sum_{a \in A} \nu(a) u_i(a).$$
(Of course, $i$ cannot fully control the lottery over $A$, because $a_{-i}$ is chosen by the opponents. Nevertheless he can still have preferences about things he does not fully control.)

    3.2 Conjectures

The reason why one needs to introduce preferences over lotteries and expected payoffs is that individual $i$ cannot observe the other individuals' actions ($a_{-i}$) before making his own choice. Hence, he needs to form a conjecture about such actions. If $i$ were certain of the other players'

$^2$ Recall that $\mathbb{R}^X_+$ is the set of non-negative real-valued functions defined over the domain $X$. If $X$ is the finite set $X = \{x_1, \ldots, x_n\}$, then $\mathbb{R}^X_+$ is isomorphic to $\mathbb{R}^n_+$, the positive orthant of the Euclidean space $\mathbb{R}^n$.

$^3$ If $X$ is a closed and measurable subset of $\mathbb{R}^m$, we define the support of a probability measure $\mu \in \Delta(X)$ as the intersection of all closed sets $C$ such that $\mu(C) = 1$. If $X$ is finite, this definition is equivalent to the previous one.


choices, then one could represent $i$'s conjecture simply with an action profile $a_{-i} \in A_{-i}$. However, in general, $i$ might be uncertain about the other players' actions and assign a strictly positive (subjective) probability to several profiles $a_{-i}, a'_{-i}$, etc.

Definition 2. A conjecture of player $i$ is a (subjective) probability measure $\mu^i \in \Delta(A_{-i})$. A deterministic conjecture is a probability measure $\mu^i \in \Delta(A_{-i})$ that assigns probability one to a particular action profile $a_{-i} \in A_{-i}$.

Note that we call "conjecture" a (probabilistic) belief about the behavior of other players, while we use the term "belief" to refer to a more general type of uncertainty.

Remark 1. The set of deterministic conjectures of player $i$ coincides with the set of the other players' action profiles, $A_{-i} \subseteq \Delta(A_{-i})$.

One of the most interesting aspects of game theory consists in determining players' conjectures, or, at least, in narrowing down the set of possible conjectures, combining some general assumptions about players' rationality and beliefs with specific assumptions about the given game $G$. However, in this chapter we will not try to "explain" players' conjectures. This is left for the following chapters.

For any given conjecture $\mu^i$, the choice of a particular action $a_i$ corresponds to the choice of the lottery over $A$ that assigns probability $\mu^i(a_{-i})$ to each action profile $(a_i, a_{-i})$ ($a_{-i} \in A_{-i}$) and probability zero to the profiles $(a'_i, a_{-i})$ with $a'_i \neq a_i$. Therefore, if player $i$ has conjecture $\mu^i$ and chooses (or plans to choose) action $a_i$, the corresponding (subjective) expected payoff is
$$u_i(a_i, \mu^i) := \sum_{a_{-i} \in A_{-i}} \mu^i(a_{-i})\, u_i(a_i, a_{-i}).$$
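As a minimal computational sketch of this formula (illustrative Python; the labels and payoff numbers in the usage example are made up and are not part of the text):

    def expected_payoff(u_i, a_i, conjecture):
        """Expected payoff of the pure action a_i under the conjecture mu^i.

        u_i: dict mapping (a_i, a_minus_i) -> payoff
        conjecture: dict mapping a_minus_i -> subjective probability (sums to 1)
        """
        return sum(p * u_i[(a_i, a_mi)] for a_mi, p in conjecture.items())

    # Hypothetical two-action example:
    u1 = {("t", "l"): 3, ("t", "r"): 0, ("b", "l"): 1, ("b", "r"): 2}
    mu = {"l": 0.25, "r": 0.75}
    print(expected_payoff(u1, "t", mu))   # 0.25*3 + 0.75*0 = 0.75
    print(expected_payoff(u1, "b", mu))   # 0.25*1 + 0.75*2 = 1.75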

There are many different ways to represent graphically actions/lotteries, conjectures, and preferences when the number of available actions is small. Let us focus our attention on the instructive case where player $i$'s opponent ($-i$) has only two actions, denoted $l$ (left) and $r$ (right). Consider, for instance, the function $u_i$ represented by the following payoff matrix for player $i$:


    i \ -i    l    r
      a       4    1
      b       1    4
      c       2    2
      e       4    0
      f       1    1

Matrix 1

Given that $-i$ has only two actions, we can represent the conjecture $\mu^i$ of $i$ about $-i$ with a single number: $\mu^i(r)$, the subjective probability that $i$ assigns to $r$. Each action $a_i$ corresponds to a function that assigns to each value of $\mu^i(r)$ the expected payoff of $a_i$. Since the expected payoff function $(1 - \mu^i(r))u_i(a_i, l) + \mu^i(r)u_i(a_i, r)$ is linear in the probabilities, every action is represented by a line, such as $aa$, $bb$, $cc$, etc. in Figure 3.1.

[Figure 3.1: Expected payoff as a function of beliefs. Horizontal axis: $\mu^i(r)$ from 0 to 1; vertical axis: expected payoff; each of the actions $a, b, c, e, f$ of Matrix 1 corresponds to a line.]


From Figure 3.1 it is apparent that only actions $a$, $b$ and $e$ are "justifiable" by some conjecture. In particular, if $\mu^i(r) = 0$, then $a$ and $e$ are both optimal; if $0 < \mu^i(r) < \frac{1}{2}$, then $a$ is the only optimal action (the line $aa$ yields the maximum expected payoff); if $\mu^i(r) > \frac{1}{2}$, then $b$ is the only optimal action (the line $bb$ yields the maximum expected payoff); if $\mu^i(r) = \frac{1}{2}$, then there are two optimal actions: $a$ and $b$.
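These cutoffs can be verified numerically. The sketch below (ours, for illustration only) sweeps the probability $\mu^i(r)$ and reports the pure best replies in Matrix 1:

    # Payoffs of player i in Matrix 1, listed as (u_i(., l), u_i(., r)).
    MATRIX1 = {"a": (4, 1), "b": (1, 4), "c": (2, 2), "e": (4, 0), "f": (1, 1)}

    def best_replies(p_r, tol=1e-9):
        """Pure best replies when the conjecture assigns probability p_r to r."""
        ev = {act: (1 - p_r) * ul + p_r * ur for act, (ul, ur) in MATRIX1.items()}
        m = max(ev.values())
        return sorted(act for act, value in ev.items() if value >= m - tol)

    print(best_replies(0.0))    # ['a', 'e']
    print(best_replies(0.25))   # ['a']
    print(best_replies(0.5))    # ['a', 'b']
    print(best_replies(0.75))   # ['b']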

    3.2.1 Mixed actions

In principle, instead of choosing directly a certain action, a player could delegate his decision to a randomizing device, such as spinning a roulette wheel or tossing a coin. In other words, a player could simply choose the probability of playing any given action.

Definition 3. A random choice by player $i$, also called a mixed action, is a probability measure $\alpha_i \in \Delta(A_i)$. An action $a_i \in A_i$ is also called a pure action.

Remark 2. The set of pure actions can be regarded as a subset of the set of mixed actions, i.e. $A_i \subseteq \Delta(A_i)$.

It is assumed that (according to $i$'s beliefs) the random draw of an action of $i$ is stochastically independent of the other players' actions. For example, the following situation is excluded: $i$ chooses his action according to the (random) weather and he thinks his opponents are doing the same, so that there is correlation between $a_i$ and $a_{-i}$ even though there is no causal link between $a_i$ and $a_{-i}$ (this type of correlation will be discussed in Section 5.2.2 of Chapter 5). More importantly, player $i$ knows that moves are simultaneous and therefore by changing his action he cannot cause any change in the probability distribution of the opponents' actions. Hence, if player $i$ has conjecture $\mu^i$ and chooses the mixed action $\alpha_i$, the subjective probability of each action profile $(a_i, a_{-i})$ is $\alpha_i(a_i)\mu^i(a_{-i})$ and $i$'s expected payoff is

$$u_i(\alpha_i, \mu^i) := \sum_{a_i \in A_i} \sum_{a_{-i} \in A_{-i}} \alpha_i(a_i)\mu^i(a_{-i})\, u_i(a_i, a_{-i}) = \sum_{a_i \in A_i} \alpha_i(a_i)\, u_i(a_i, \mu^i).$$

If the opponent has only two feasible actions, it is possible to use a graph to represent the lotteries corresponding to pure and mixed actions. For each action $a_i$, we consider a corresponding point in the Cartesian plane with


coordinates given by the utilities that $i$ obtains for each of the two actions of the opponent. If the actions of the opponent are $l$ and $r$, we denote such coordinates $x = u_i(\cdot, l)$ and $y = u_i(\cdot, r)$. Any pure action $a_i$ corresponds to the vector $(x, y) = (u_i(a_i, l), u_i(a_i, r))$ (a row in the payoff matrix of $i$). The same holds for the mixed actions: $\alpha_i$ corresponds to the vector $(x, y) = (u_i(\alpha_i, l), u_i(\alpha_i, r))$. The set of points (vectors) corresponding to the mixed actions is simply the convex hull of the points corresponding to the pure actions.$^4$ Figure 3.2 represents such a set for Matrix 1.

How can conjectures be represented in such a figure? It is quite simple. Every conjecture induces a preference relation on the space of payoff pairs. Such preferences can be represented through a map of iso-expected payoff curves (or indifference curves). Let $(x, y)$ be the generic vector of (expected) payoffs corresponding to $l$ and $r$ respectively. The expected payoff induced by the conjecture $\mu^i$ is $\mu^i(l)x + \mu^i(r)y$. Therefore, the indifference curves are straight lines defined by the equation
$$y = \frac{\bar{u}}{\mu^i(r)} - \frac{\mu^i(l)}{\mu^i(r)}\, x,$$
where $\bar{u}$ denotes the constant expected payoff. Every conjecture $\mu^i$ corresponds to a set of parallel lines with negative slope (or, in the extreme cases, horizontal or vertical slope) determined by the orthogonal vector $(\mu^i(l), \mu^i(r))$. The direction of increasing expected payoff is given precisely by this orthogonal vector. The optimal actions (pure or mixed) can be graphically determined by considering, for any conjecture $\mu^i$, the highest iso-expected payoff line among those that touch the set of feasible payoff vectors (the shaded area in Figure 3.2).

Allowing for mixed actions, the set of feasible (expected) payoff vectors is a convex polyhedron (as in Figure 3.2). To see this, note that if $\alpha_i$ and $\beta_i$ are mixed actions, then $p\,\alpha_i + (1-p)\,\beta_i$ is also a mixed action, where $p\,\alpha_i + (1-p)\,\beta_i$ is the function that assigns to each pure action $a_i$ the weight $p\,\alpha_i(a_i) + (1-p)\,\beta_i(a_i)$. Thus, all the convex combinations of feasible payoff vectors are feasible payoff vectors, once we allow for randomization.

However, the idea that individuals use coins or roulette wheels to randomize their choices may seem weird and unrealistic. Furthermore, as can be gathered from Figure 3.2, for any conjecture $\mu^i$ and any mixed action $\alpha_i$, there is always a pure action $a_i$ that yields the same or a higher expected payoff than $\alpha_i$ (check all possible slopes of the iso-expected payoff

$^4$ The convex hull of a set of points $X \subseteq \mathbb{R}^k$ is the smallest convex set containing $X$, that is, the intersection of all the convex sets containing $X$.


[Figure 3.2: State-contingent representation of expected payoff. Horizontal axis: $u_i(\cdot, l)$; vertical axis: $u_i(\cdot, r)$. The points corresponding to $a, b, c, e, f$ and their convex hull are shown, together with iso-expected payoff lines for the conjecture $\mu^i(l) = \frac{1}{3}$, $\mu^i(r) = \frac{2}{3}$.]

curves and verify what the set of optimal points looks like).$^5$ Hence, a player cannot be strictly better off by choosing a mixed action rather than a pure action.

$^5$ A formal proof of this result will be provided in the next section.
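This claim can also be checked by brute force on Matrix 1. The sketch below (ours; the random sampling scheme is purely illustrative) draws many conjectures and mixed actions and verifies that no mixed action ever beats the best pure reply:

    import random

    MATRIX1 = {"a": (4, 1), "b": (1, 4), "c": (2, 2), "e": (4, 0), "f": (1, 1)}
    ACTIONS = list(MATRIX1)

    def ev_pure(act, p_r):
        """Expected payoff of a pure action when mu^i(r) = p_r."""
        ul, ur = MATRIX1[act]
        return (1 - p_r) * ul + p_r * ur

    random.seed(0)
    for _ in range(10_000):
        p_r = random.random()                          # a random conjecture mu^i(r)
        weights = [random.random() for _ in ACTIONS]
        total = sum(weights)
        alpha = [w / total for w in weights]           # a random mixed action
        ev_mixed = sum(w * ev_pure(act, p_r) for w, act in zip(alpha, ACTIONS))
        best_pure = max(ev_pure(act, p_r) for act in ACTIONS)
        assert ev_mixed <= best_pure + 1e-12
    print("no sampled mixed action beat the best pure reply")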

The point of view we adopt in these lecture notes is that players never randomize (although their choices might depend on extrinsic, payoff-irrelevant signals, as in Chapter 5, Section 5.2.2). Nonetheless, it will be shown that in order to evaluate the rationality of a given pure action it is analytically convenient to introduce mixed actions. (In Chapter 5, we discuss interpretations of mixed actions that do not involve randomization.)

    3.3 Best Replies and Undominated Actions

Definition 4. A (mixed) action $\alpha_i^*$ is a best reply to conjecture $\mu^i$ if
$$\forall \alpha_i \in \Delta(A_i), \quad u_i(\alpha_i^*, \mu^i) \geq u_i(\alpha_i, \mu^i),$$

  • 3.3. Best Replies and Undominated Actions 35

that is,
$$\alpha_i^* \in \arg\max_{\alpha_i \in \Delta(A_i)} u_i(\alpha_i, \mu^i).$$

The set of pure actions that are best replies to conjecture $\mu^i$ is denoted by$^6$

$$r_i(\mu^i) := A_i \cap \arg\max_{\alpha_i \in \Delta(A_i)} u_i(\alpha_i, \mu^i).$$

The correspondence $r_i : \Delta(A_{-i}) \rightrightarrows A_i$ is called the best reply correspondence.$^7$ An action $a_i$ is called justifiable if there exists a conjecture $\mu^i \in \Delta(A_{-i})$ such that $a_i \in r_i(\mu^i)$.

Note that, even if $A_i$ is finite, one cannot conclude without proof that the set of pure best replies to the conjecture $\mu^i$ is non-empty, i.e. that $r_i(\mu^i) \neq \emptyset$. In principle, it could happen that in order to maximize expected payoff it is necessary to use mixed actions that assign positive probability to more than one pure action. However, we will show that $r_i(\mu^i) \neq \emptyset$ for every $\mu^i$ provided that $A_i$ is finite (or, more generally, that $A_i$ is compact and $u_i$ is continuous in $a_i$, two properties that trivially hold if $A_i$ is finite); thus it is never necessary to use non-pure actions to maximize expected payoff.

We say that a player is rational if he chooses a best reply to the conjecture he holds. The following result shows, as anticipated, that a rational player does not need to use mixed actions. Therefore, it can be assumed without loss of generality that his choice is restricted to the set of pure actions.

Lemma 1. Fix a conjecture $\mu^i \in \Delta(A_{-i})$. A mixed action $\alpha_i$ is a best reply to $\mu^i$ if and only if every pure action in the support of $\alpha_i$ is a best reply to $\mu^i$, that is,
$$\alpha_i \in \arg\max_{\beta_i \in \Delta(A_i)} u_i(\beta_i, \mu^i) \iff \mathrm{Supp}\,\alpha_i \subseteq r_i(\mu^i).$$

Proof. This is a rather trivial result. But since this is the first proof of these notes, we will go over it rather slowly.

$^6$ Recall: $A_i$ is a subset of $\Delta(A_i)$.

$^7$ A correspondence $\varphi : X \rightrightarrows Y$ is a multi-function that assigns to every element $x \in X$ a set of elements $\varphi(x) \subseteq Y$. A correspondence $\varphi : X \rightrightarrows Y$ can equivalently be expressed as a function with domain $X$ and range $2^Y$, the power set of $Y$.


(Only if) We first show that if $\mathrm{Supp}\,\alpha_i$ is not included in $r_i(\mu^i)$, then $\alpha_i$ is not a best reply to $\mu^i$. Let $a_i^*$ be a pure action such that $\alpha_i(a_i^*) > 0$ and assume that, for some $\beta_i$, $u_i(\beta_i, \mu^i) > u_i(a_i^*, \mu^i)$, so that $a_i^* \in \mathrm{Supp}\,\alpha_i \backslash r_i(\mu^i)$.$^8$ Since $u_i(\beta_i, \mu^i)$ is a weighted average of the values $\{u_i(a_i, \mu^i)\}_{a_i \in A_i}$, there must be a pure action $a_i'$ such that $u_i(a_i', \mu^i) > u_i(a_i^*, \mu^i)$. But then we can construct a mixed action $\alpha_i'$ that satisfies $u_i(\alpha_i', \mu^i) > u_i(\alpha_i, \mu^i)$ by shifting probability weight from $a_i^*$ to $a_i'$, which is possible because $\alpha_i(a_i^*) > 0$. Specifically, for every $a_i \in A_i$, let
$$\alpha_i'(a_i) = \begin{cases} 0, & \text{if } a_i = a_i^*, \\ \alpha_i(a_i') + \alpha_i(a_i^*), & \text{if } a_i = a_i', \\ \alpha_i(a_i), & \text{if } a_i \neq a_i^*, a_i'. \end{cases}$$
$\alpha_i'$ is a mixed action since $\sum_{a_i \in A_i} \alpha_i'(a_i) = \sum_{a_i \in A_i} \alpha_i(a_i) = 1$. Moreover, it can easily be checked that
$$u_i(\alpha_i', \mu^i) - u_i(\alpha_i, \mu^i) = \alpha_i(a_i^*)\left[u_i(a_i', \mu^i) - u_i(a_i^*, \mu^i)\right] > 0,$$
where the inequality holds by assumption. Thus $\alpha_i$ is not a best reply to $\mu^i$.

(If) Next we show that if each pure action in the support of $\alpha_i$ is a best reply, then $\alpha_i$ is also a best reply. It is convenient to introduce the following notation:
$$\bar{u}_i(\mu^i) := \max_{\beta_i \in \Delta(A_i)} u_i(\beta_i, \mu^i).$$
By definition of $r_i(\mu^i)$,
$$\forall a_i \in r_i(\mu^i), \quad \bar{u}_i(\mu^i) = u_i(a_i, \mu^i), \qquad (3.3.1)$$
$$\forall a_i \in A_i, \quad \bar{u}_i(\mu^i) \geq u_i(a_i, \mu^i). \qquad (3.3.2)$$
Assume that $\mathrm{Supp}\,\alpha_i \subseteq r_i(\mu^i)$. Then, for every $\beta_i \in \Delta(A_i)$,
$$u_i(\alpha_i, \mu^i) = \sum_{a_i \in \mathrm{Supp}\,\alpha_i} \alpha_i(a_i) u_i(a_i, \mu^i) = \sum_{a_i \in r_i(\mu^i)} \alpha_i(a_i) u_i(a_i, \mu^i) = \bar{u}_i(\mu^i)$$
$$= \bar{u}_i(\mu^i) \sum_{a_i \in A_i} \beta_i(a_i) = \sum_{a_i \in A_i} \beta_i(a_i) \bar{u}_i(\mu^i) \geq \sum_{a_i \in A_i} \beta_i(a_i) u_i(a_i, \mu^i) = u_i(\beta_i, \mu^i).$$

$^8$ Recall that $X \backslash Y$ denotes the set of elements of $X$ that do not belong to $Y$: $X \backslash Y = \{x \in X : x \notin Y\}$.


The first equality holds by definition, the second follows from $\mathrm{Supp}\,\alpha_i \subseteq r_i(\mu^i)$, the third holds by eq. (3.3.1), the fourth and fifth are obvious, the inequality follows from (3.3.2), and the final equality is just the definition of $u_i(\beta_i, \mu^i)$.

In Matrix 1, for example, any mixed action that assigns positive probability only to $a$ and/or $b$ is a best reply to the conjecture $(\frac{1}{2}, \frac{1}{2})$. Clearly, the set of pure best replies to $(\frac{1}{2}, \frac{1}{2})$ is $r_i((\frac{1}{2}, \frac{1}{2})) = \{a, b\}$.
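A quick numerical check of this instance of Lemma 1 (illustrative sketch; the particular mixtures tested are arbitrary):

    MATRIX1 = {"a": (4, 1), "b": (1, 4), "c": (2, 2), "e": (4, 0), "f": (1, 1)}

    def ev(mixed, p_r=0.5):
        """Expected payoff of a mixed action against the conjecture (1 - p_r, p_r)."""
        return sum(w * ((1 - p_r) * MATRIX1[a][0] + p_r * MATRIX1[a][1])
                   for a, w in mixed.items())

    print(ev({"a": 1.0}))             # 2.5
    print(ev({"a": 0.3, "b": 0.7}))   # 2.5: any mixture supported on {a, b} is optimal
    print(ev({"a": 0.5, "e": 0.5}))   # 2.25: weight on e (not a best reply) lowers the payoff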

Note that if at least one pure action is a best reply among all pure and mixed actions, then the maximum that can be attained by constraining the choice to pure actions is necessarily equal to what could be attained by choosing among (pure and) mixed actions, i.e.
$$r_i(\mu^i) \neq \emptyset \implies \max_{\alpha_i \in \Delta(A_i)} u_i(\alpha_i, \mu^i) = \max_{a_i \in A_i} u_i(a_i, \mu^i).$$

    This observation along with Lemma 1 yields the following:

Corollary 1. For every conjecture $\mu^i \in \Delta(A_{-i})$,
$$r_i(\mu^i) = \arg\max_{a_i \in A_i} u_i(a_i, \mu^i).$$

Hence, it is not necessary to use mixed actions to maximize expected payoff. Moreover, if $A_i$ is finite, the best reply correspondence has non-empty values, that is, $r_i(\mu^i) \neq \emptyset$ for every $\mu^i \in \Delta(A_{-i})$.$^9$

Proof. For every conjecture $\mu^i$, the expected payoff function $u_i(\cdot, \mu^i) : \Delta(A_i) \to \mathbb{R}$ is continuous in $\alpha_i$ and the domain $\Delta(A_i)$ is a compact subset of $\mathbb{R}^{A_i}$. Hence, the function has at least one maximizer $\alpha_i^*$. By Lemma 1, $\mathrm{Supp}\,\alpha_i^* \subseteq r_i(\mu^i)$, so that $r_i(\mu^i) \neq \emptyset$. As we have seen above, this implies that
$$\max_{\alpha_i \in \Delta(A_i)} u_i(\alpha_i, \mu^i) = \max_{a_i \in A_i} u_i(a_i, \mu^i)$$
and therefore $r_i(\mu^i) = \arg\max_{a_i \in A_i} u_i(a_i, \mu^i)$.

Recall that, for a given function $f : X \to Y$ and a given subset $C \subseteq X$,
$$f(C) := \{y \in Y : \exists x \in C, \ y = f(x)\}$$

$^9$ The same conclusion holds if $A_i$ is compact and $u_i$ is continuous.


denotes the set of images of elements of $C$. Analogously, for a given correspondence $\varphi : X \rightrightarrows Y$,
$$\varphi(C) := \{y \in Y : \exists x \in C, \ y \in \varphi(x)\}$$
denotes the set of elements $y$ that belong to the image $\varphi(x)$ of some point $x \in C$. In particular, we use this notation for the best reply correspondence. For example, $r_i(\Delta(A_{-i}))$ is the set of justifiable actions (see Definition 4).

A question should spontaneously arise at this point: can we characterize the set of justifiable actions with no reference to conjectures and expected payoff maximization? In other words, can we conclude that an action will not be chosen by a rational player without checking directly that it is not a best reply to any conjecture? The answer comes from the concept of dominance.

Definition 5. A mixed action $\alpha_i$ dominates a (pure or) mixed action $\beta_i$ if it yields a strictly higher expected payoff whatever the choices of the other players are:
$$\forall a_{-i} \in A_{-i}, \quad u_i(\alpha_i, a_{-i}) > u_i(\beta_i, a_{-i}).$$
The set of pure actions of agent $i$ that are not dominated by any mixed action is denoted by $ND_i$.

    The proof of the following statement is left as an exercise for the reader.

Remark 3. If a (pure) action $a_i$ is dominated by a mixed action $\alpha_i$ then, for every conjecture $\mu^i \in \Delta(A_{-i})$, $u_i(a_i, \mu^i) < u_i(\alpha_i, \mu^i)$. Hence, a dominated action is not justifiable; equivalently, a justifiable action cannot be dominated.

In Matrix 1, for instance, the action $f$ is dominated by $c$, which in turn is dominated by the mixed action that assigns probability $\frac{1}{2}$ to each of $a$ and $b$. Therefore, $ND_i \subseteq \{a, b, e\}$. Given that $a$, $b$, and $e$ are best replies to at least one conjecture, we have that $\{a, b, e\} \subseteq ND_i$. Hence, $ND_i = \{a, b, e\}$.
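Checking whether a given pure action is dominated by some mixed action can be formulated as a small linear program, in the spirit of the duality argument used in the proof of Lemma 2 below. The following sketch (ours; it uses scipy's generic LP solver and is not prescribed by the text) maximizes the worst-case payoff gap $\varepsilon$ of a mixture over the remaining pure actions; the action is strictly dominated exactly when the optimal $\varepsilon$ is positive.

    import numpy as np
    from scipy.optimize import linprog

    MATRIX1 = {"a": (4, 1), "b": (1, 4), "c": (2, 2), "e": (4, 0), "f": (1, 1)}

    def strictly_dominated(target, payoffs=MATRIX1):
        """Return a mixed action (over the other pure actions) dominating `target`, or None.

        Solves:  max eps  s.t.  sum_s alpha_s u(s, z) >= u(target, z) + eps  for all z,
                                alpha in the simplex.
        """
        others = [s for s in payoffs if s != target]
        U = np.array([payoffs[s] for s in others], dtype=float)    # |others| x |A_{-i}|
        u_t = np.array(payoffs[target], dtype=float)
        n = len(others)
        c = np.zeros(n + 1)
        c[-1] = -1.0                                               # minimize -eps
        A_ub = np.hstack([-U.T, np.ones((U.shape[1], 1))])         # -U^T alpha + eps <= -u_t
        b_ub = -u_t
        A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)  # sum alpha = 1
        b_eq = [1.0]
        bounds = [(0, 1)] * n + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        if res.success and -res.fun > 1e-9:
            return dict(zip(others, np.round(res.x[:n], 3)))
        return None

    for act in MATRIX1:
        print(act, strictly_dominated(act))   # only c and f come out as dominated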

The following lemma states that the converse of Remark 3 holds. Therefore, it provides a complete answer to our previous question about the characterization of the set of actions that a rational player would never choose.


Lemma 2. (See Pearce [29]) Fix an action $a_i \in A_i$ in a finite game. There exists a conjecture $\mu^i \in \Delta(A_{-i})$ such that $a_i$ is a best reply to $\mu^i$ if and only if $a_i$ is not dominated by any (pure or) mixed action. In other words, the set of undominated (pure) actions and the set of justifiable actions coincide:
$$ND_i = r_i\left(\Delta(A_{-i})\right).$$

Proof. The proof of this result uses an important result known as Farkas' lemma.

Lemma 3 (Farkas' Lemma). Let $M$ be an $n \times m$ matrix and $c$ be an $n$-dimensional vector. Then exactly one of the following statements is true:
(1) there exists a vector $x \in \mathbb{R}^m$ such that $Mx \geq c$;
(2) there exists a vector $y \in \mathbb{R}^n$ such that $y \geq 0$, $y^T M = 0$ and $y^T c > 0$.

We will use Farkas' lemma to prove that an action is justifiable if and only if it is not dominated. Since the game is finite, let $k = |A_i|$ and $m = |A_{-i}|$, and label the elements of $A_i$ as $\{1, 2, \ldots, k\}$ and the elements of $A_{-i}$ as $\{1, 2, \ldots, m\}$.$^{10}$ Then, construct a $k \times m$ matrix $U$ whose $(w, z)$-th coordinate is given by
$$U_{w,z} = u_i(a_i^*, z) - u_i(w, z).$$
The action $a_i^*$ is justifiable if and only if we can find a conjecture $\mu^i \in \Delta(A_{-i})$ such that $U\mu^i \geq 0$. This condition can be rewritten as follows: $a_i^*$ is justifiable if and only if we can find $x \in \mathbb{R}^m$ satisfying the inequality
$$Mx \geq c, \qquad (3.3.3)$$

where $M$ is a $(k+m+1) \times m$ matrix and $c$ is a $(k+m+1)$-dimensional vector defined as follows:
$$M = \begin{pmatrix} U \\ I_m \\ \mathbf{1}_m^T \end{pmatrix}, \qquad c = \begin{pmatrix} \mathbf{0}_k \\ \mathbf{0}_m \\ 1 \end{pmatrix}$$
($I_m$ denotes the $m$-dimensional identity matrix, that is, an $m \times m$ matrix having 1's along the main diagonal and 0's everywhere else; $\mathbf{1}_s$ and $\mathbf{0}_s$ denote the $s$-dimensional vectors having respectively 1's and 0's everywhere).

$^{10}$ Recall that $|X|$ denotes the cardinality of the set $X$.


Note that if $\sum_{s=1}^m x_s > 1$, we can normalize the vector, substituting $x = (x_1, \ldots, x_m)$ with
$$\hat{x} = \left(\frac{x_1}{\sum_{s=1}^m x_s}, \ldots, \frac{x_m}{\sum_{s=1}^m x_s}\right),$$
and still satisfy inequality (3.3.3); the normalized vector is then a conjecture to which $a_i^*$ is a best reply. If $a_i^*$ is not justifiable, inequality (3.3.3) does not hold, and Farkas' lemma implies that we can find $y \in \mathbb{R}^{k+m+1}$ such that

$$y \geq 0, \qquad y^T c > 0, \qquad y^T M = 0.$$
By construction $y^T c = y_{k+m+1}$; thus the second condition can be rewritten as $y_{k+m+1} > 0$. Furthermore, $y^T M = 0$ implies that for every $z \in A_{-i} = \{1, 2, \ldots, m\}$:
$$\sum_{s=1}^{k} y_s \left[u_i(a_i^*, z) - u_i(a_s, z)\right] + y_{k+z} + y_{k+m+1} = 0.$$
Since $y_{k+z} \geq 0$ and $y_{k+m+1} > 0$, we must have $\sum_{s=1}^{k} y_s \left[u_i(a_s, z) - u_i(a_i^*, z)\right] > 0$ for every $z$, and hence $y_s > 0$ for some $s \in A_i$. Then we can construct a probability distribution $\hat{y}$,$^{11}$ such that for all $z \in A_{-i}$, $\sum_{s=1}^{k} \hat{y}_s u_i(a_s, z) > u_i(a_i^*, z)$. We conclude that $a_i^*$ is dominated by the mixed action $\hat{y}$.

It is possible to understand intuitively why the lemma is true by observing Figure 3.2. Note that the "only if" part of Lemma 2 is just Remark 3. The set of justifiable actions corresponds to the extreme points (the kinks) of the North-East boundary of the shaded convex polyhedron representing the set of feasible (pure and randomized) choices. An action $a_i$ is not justifiable if and only if the corresponding point lies (strictly) below and to the left of this boundary. But for any point $(x, y)$ below and to the left of the North-East boundary, there exists a corresponding point $(x', y')$ on the boundary that dominates $(x, y)$.$^{12}$

$^{11}$ If $\sum_{s=1}^k y_s \neq 1$, we can once more normalize the vector, substituting each element $y_r$ with $y_r / \sum_{s=1}^k y_s$.

$^{12}$ As Figure 3.2 suggests, a similar result relates undominated mixed actions and mixed best replies: the two sets are the same and correspond to the North-East boundary of the feasible set. But since we take the perspective that players do not actually randomize, we are only interested in characterizing justifiable pure actions.


We can thus conclude that a rational player always chooses undominated actions and that each undominated action is justifiable as a best reply to some conjecture.

In some interesting situations of strategic interaction an action dominates all others. In those cases, a rational player should choose that action. This consideration motivates the following:

Definition 6. An action $a_i^*$ is dominant if it dominates every other action, i.e. if
$$\forall a_i \in A_i \backslash \{a_i^*\}, \ \forall a_{-i} \in A_{-i}, \quad u_i(a_i^*, a_{-i}) > u_i(a_i, a_{-i}).$$

    You should try to prove the following statement as an exercise:

Remark 4. Fix an action $a_i^*$ arbitrarily. The following conditions are equivalent:
(0) action $a_i^*$ is dominant;
(1) action $a_i^*$ dominates every mixed action $\alpha_i \neq a_i^*$;
(2) action $a_i^*$ is the unique best reply to every conjecture.

The following example illustrates the notion of dominant action. The example shows that, assuming that players are motivated by their material self-interest, individual rationality may lead to socially undesirable outcomes.

Example 2. (Linear Public Good Game) In a community composed of $n$ individuals it is possible to produce a quantity $y$ of a public good using an input $x$ according to the production function $y = kx$. Both $y$ and $x$ are measured in monetary units (say, in Euros). To make the example interesting, assume that the productivity parameter $k$ satisfies $1 > k > \frac{1}{n}$. A generic individual $i$ can freely choose how many Euros to contribute to the community for the production of the public good. The community members cannot sign a binding agreement on such contributions because no authority with coercive power can enforce the agreement. Let $W_i$ be player $i$'s wealth. The game can be represented as follows: $A_i = [0, W_i]$, $a_i \in A_i$ is the contribution of $i$; consequences are allocations of the public and private good $(y, W_1 - a_1, \ldots, W_n - a_n)$; agents are selfish,$^{13}$ hence their

    13And risk-neutral.


utility function can be written as
$$v_i(y, W_1 - a_1, \ldots, W_n - a_n) = y - a_i,$$
which yields the payoff function
$$u_i(a_1, \ldots, a_n) = k\left(\sum_{j=1}^{n} a_j\right) - a_i.$$
It can be easily checked that $a_i = 0$ is dominant for any player $i$: just rewrite $u_i$ as
$$u_i(a_i, a_{-i}) = k\sum_{j \neq i} a_j - (1-k)a_i$$
and recall that $k < 1$. The profile of dominant actions $(0, \ldots, 0)$ is Pareto-dominated by any symmetric profile of positive contributions $(\varepsilon, \ldots, \varepsilon)$ (with $\varepsilon > 0$). Indeed, $u_i(0, \ldots, 0) = 0 < \varepsilon(nk - 1) = u_i(\varepsilon, \ldots, \varepsilon)$, where the inequality holds because $k > \frac{1}{n}$. Let $S(a) = \sum_{i=1}^n u_i(a)$ be the social surplus; the surplus-maximizing profile is $a_i = W_i$ for each $i$.
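A quick numerical illustration of both claims (a minimal sketch; the parameter values $n = 4$, $k = 0.4$, $W_i = 10$ are ours and satisfy $1 > k > 1/n$):

    # Hypothetical parameters: n = 4 players, k = 0.4, W_i = 10, so 1 > k > 1/n.
    n, k, W = 4, 0.4, 10.0

    def u(i, a):
        """u_i(a) = k * sum_j a_j - a_i for a contribution profile a."""
        return k * sum(a) - a[i]

    # Contributing 0 is dominant: raising a_i by d > 0 changes u_i by -(1 - k) d < 0,
    # whatever the others contribute.
    others = [3.0, 7.0, 1.0]
    for a_i in (0.0, 2.0, 5.0):
        print(a_i, u(0, [a_i] + others))       # payoff falls as own contribution rises

    # Yet (0, ..., 0) is Pareto-dominated by any symmetric positive profile:
    print(u(0, [0.0] * n), u(0, [1.0] * n))    # 0.0 < n*k - 1 = 0.6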

An action could be a best reply only to conjectures that assign zero probability to some action profiles of the other players. For instance, action $e$ in Matrix 1 is justifiable as a best reply only if $i$ is certain that $-i$ does not choose $r$. Let us say that a player $i$ is cautious if his conjecture assigns positive probability to every $a_{-i} \in A_{-i}$. Let
$$\Delta^o(A_{-i}) := \left\{\mu^i \in \Delta(A_{-i}) : \forall a_{-i} \in A_{-i}, \ \mu^i(a_{-i}) > 0\right\}$$
be the set of such conjectures.$^{14}$ A rational and cautious player chooses actions in $r_i(\Delta^o(A_{-i}))$. These considerations motivate the following definition and result.

Definition 7. A mixed action $\alpha_i$ weakly dominates another (pure or) mixed action $\beta_i$ if it yields at least the same expected payoff for every action profile $a_{-i}$ of the other players and strictly more for at least one $a_{-i}$:
$$\forall a_{-i} \in A_{-i}, \ u_i(\alpha_i, a_{-i}) \geq u_i(\beta_i, a_{-i}), \qquad \exists a_{-i} \in A_{-i}, \ u_i(\alpha_i, a_{-i}) > u_i(\beta_i, a_{-i}).$$

$^{14}$ $\Delta^o(A_{-i})$ is the relative interior of $\Delta(A_{-i})$ (the superscript $o$ should remind you that this is a relatively open set, i.e. the intersection between the simplex $\Delta(A_{-i}) \subseteq \mathbb{R}^{A_{-i}}$ and the strictly positive orthant $\mathbb{R}^{A_{-i}}_{++}$, an open set in $\mathbb{R}^{A_{-i}}$).


The set of pure actions that are not weakly dominated by any mixed action is denoted by $NWD_i$.

Lemma 4. (See Pearce [29]) Fix an action $a_i \in A_i$ in a finite game. There exists a conjecture $\mu^i \in \Delta^o(A_{-i})$ such that $a_i$ is a best reply to $\mu^i$ if and only if $a_i$ is not weakly dominated by any pure or mixed action:
$$NWD_i = r_i\left(\Delta^o(A_{-i})\right).$$

The intuition is similar to the one given for Lemma 2. Looking at Figure 3.2 one can see that the set of (pure or mixed) actions that are not weakly dominated corresponds to the part of the North-East boundary of the feasible set with strictly negative slope. This coincides with the set of points that maximize the expected payoff for at least one conjecture that assigns strictly positive probability to all actions of $-i$, that is, a conjecture that is represented by a set of parallel lines with strictly negative slope (neither horizontal nor vertical).
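For instance, in Matrix 1 the action $e$ is weakly (but not strictly) dominated by $a$, which matches the earlier observation that $e$ is a best reply only to the conjecture assigning zero probability to $r$. A brute-force sketch of this pure-action comparison (ours; a full test against mixed actions would use a linear program like the one sketched after Remark 3):

    MATRIX1 = {"a": (4, 1), "b": (1, 4), "c": (2, 2), "e": (4, 0), "f": (1, 1)}

    def weakly_dominates(x, y):
        """True if pure action x weakly dominates pure action y in Matrix 1."""
        ux, uy = MATRIX1[x], MATRIX1[y]
        return (all(p >= q for p, q in zip(ux, uy))
                and any(p > q for p, q in zip(ux, uy)))

    print(weakly_dominates("a", "e"))   # True: (4, 1) >= (4, 0), strictly better against r
    print(weakly_dominates("a", "b"))   # False: a and b are not ranked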

Definition 8. An action $a_i^*$ is weakly dominant if it weakly dominates every other (pure) action, i.e. if for every other action $a_i \in A_i \backslash \{a_i^*\}$ the following conditions hold:
$$\forall a_{-i} \in A_{-i}, \ u_i(a_i^*, a_{-i}) \geq u_i(a_i, a_{-i}), \qquad \exists a_{-i} \in A_{-i}, \ u_i(a_i^*, a_{-i}) > u_i(a_i, a_{-i}).$$

    You should prove the following statement as an exercise:

Remark 5. Fix an action $a_i^*$ arbitrarily. The following conditions are equivalent:
(0) $a_i^*$ is weakly dominant;
(1) $a_i^*$ weakly dominates every mixed action $\alpha_i \neq a_i^*$;
(2) $a_i^*$ is the unique best reply to every strictly positive conjecture $\mu^i \in \Delta^o(A_{-i})$;
(3) $a_i^*$ is the only action with the property of being a best reply to every conjecture $\mu^i$ (note that if $\mu^i$ is not strictly positive, $\mu^i \in \Delta(A_{-i}) \backslash \Delta^o(A_{-i})$, then there may be other best replies to $\mu^i$).

If a rational and cautious player has a weakly dominant action, then he will choose such an action. There are interesting economic examples where individuals have weakly dominant actions.


Example 3. (Second-Price Auction) An art merchant has to auction a work of art (e.g. a painting) at the highest possible price. However, he does not know how much such a work of art is worth to the potential buyers. The buyers are collectors who buy the artwork with the only objective of keeping it, i.e. they are not interested in what its future price might be. The authenticity of the work is not an issue. The potential buyer $i$ is willing to spend at most $v_i > 0$ to buy it. Such a valuation is completely subjective, meaning that if $i$ were to know the other players' valuations, he would not change his own.$^{15}$ Following the advice of W. Vickrey, winner of the Nobel Prize for Economics,$^{16}$ the merchant decides to adopt the following auction rule: the artwork will go to the player who submits the highest offer, but the price paid will be equal to the second-highest offer (in case of a tie between the maximum offers, the work will be assigned by a random draw). This auction rule induces a game among the buyers $i = 1, \ldots, n$ where $A_i = [0, +\infty)$, and

$$u_i(a_i, a_{-i}) = \begin{cases} v_i - \max a_{-i}, & \text{if } a_i > \max a_{-i}, \\ 0, & \text{if } a_i < \max a_{-i}, \\ \dfrac{1}{1 + |\arg\max a_{-i}|}\,(v_i - \max a_{-i}), & \text{if } a_i = \max a_{-i}, \end{cases}$$

where $\max a_{-i} = \max_{j \neq i}(a_1, \ldots, a_{i-1}, a_{i+1}, \ldots, a_n)$ and $\arg\max a_{-i}$ is the set of opponents submitting this highest offer (recall that, for every finite set $X$, $|X|$ is the number of elements of $X$, or cardinality of $X$). It turns out that offering $a_i = v_i$ is the weakly dominant action (can you prove it?). Hence, if the potential buyer $i$ is rational and cautious, he will offer exactly $v_i$. Doing so, he will expect to make some profits. In fact, being cautious, he will assign positive probability to the event $[\max a_{-i} < v_i]$. Since by offering $v_i$ he will obtain the object only in the event that the price paid is lower than his valuation $v_i$, the expected payoff from this offer is strictly positive.

$^{15}$ If a buyer were to take into account a potential future resale, things might be rather different. In fact, other potential buyers could hold some relevant information that affects the estimate of how much the artwork could be worth in the future. Similarly, if there were doubts regarding the authenticity of the work, it would be relevant to know the other buyers' valuations.

$^{16}$ See Vickrey [34].
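A simulation-style sanity check of the weak-dominance claim (a sketch; the valuation $v_i = 10$, the number of opponents, and the sampled bid distributions are ours, chosen only for illustration):

    import random

    def payoff(v_i, bid_i, other_bids):
        """Second-price auction payoff of bidder i; ties are valued in expectation."""
        m = max(other_bids)
        if bid_i > m:
            return v_i - m
        if bid_i < m:
            return 0.0
        winners = 1 + other_bids.count(m)      # i ties with the highest opponents
        return (v_i - m) / winners

    random.seed(1)
    v_i = 10.0
    for _ in range(10_000):
        others = [random.uniform(0, 20) for _ in range(3)]
        for alt in (0.0, 5.0, 12.0, random.uniform(0, 20)):
            assert payoff(v_i, v_i, others) >= payoff(v_i, alt, others) - 1e-12
    print("bidding v_i was never outperformed in the sampled cases")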


3.3.1 Best Replies and Undominated Actions in Nice Games

Many models used in applications of game theory to economics and other disciplines have the f