GamesTheorySB,ESA,etc

139
Part I THE CLASSICAL THEORY OF GAMES 1 Static games of complete information In this chapter we consider games of the following simple form: rst the play- ers simultaneously choose actions, then the players receive payo/s that depend on the combination of actions just chosen. Within the class of such static (or simultaneous-move) games we restrict attention to games of complete infor- mation. That is, each players payo/ function (the function that determines the players payo/ from the combination of action by the players) is common knowledge among all the players. 1

Transcript of GamesTheorySB,ESA,etc

Page 1: GamesTheorySB,ESA,etc

Part I

THE CLASSICAL THEORY OFGAMES

1 Static games of complete information

In this chapter we consider games of the following simple form: �rst the play-ers simultaneously choose actions, then the players receive payo¤s that dependon the combination of actions just chosen. Within the class of such static (orsimultaneous-move) games we restrict attention to games of complete infor-mation. That is, each player�s payo¤ function (the function that determinesthe player�s payo¤ from the combination of action by the players) is commonknowledge among all the players.

1

Page 2: GamesTheorySB,ESA,etc

1.1 Zero-sum two-person games

We consider a game with two players, the player 1 and the player 2. Whatplayer 1 wins is just what player 2 loses, and vice versa.In order to have an intuitive understanding of a such game we introduce

some related basic ideas through a few simple examples.Example 1.1. (Matching pennies)Each of two participants (players) puts down a coin on the table without

letting the other player see it. If the coins match, that is, if both coins showheads or both show tails, player 1 wins the two coins. If they do not match,player 2 wins the two coins. In other words, in the �rst case, player 1 receives apayment of 1 from player 2, and, in the second case, player 1 receives a paymentof �1.These outcomes can be listed in the following table:

Player 2

Player 11 (heads) 2 (tails)

1 (heads) 1 �12 (tails) �1 1

Also, they can be written in the payo¤s matrices for the two players:

H1 =

�1 �1�1 1

�; H2 =

��1 11 �1

�:

We say that each player has two strategies (actions, moves). In thematrix H1 the �rst row represents the �rst strategy of player 1, the second rowrepresents the second strategy of player 1. If player 1 chooses his strategy 1, itmeans that his coin shows heads up. Strategy 2 means tails up. Similarly, the�rst and the second columns of matrix H1 correspond respectively to the �rstand the second strategies of player 2. In H2 we have the same situations, butfor player 2. �Remark 1.1. This gambling contest is a zero-sum two-person game.

Brie�y speaking, a game is a set of rules, in which the regulations of the entireprocedure of competition (or contest, or struggle), including players, strate-gies, and the outcome after each play of the game is over, etc., are speci�callydescribed. �Remark 1.2. The entries in above table form a payo¤ matrix (of player

1, that is H1). The matrix H2 is the payo¤ matrix of player 2, and we have

H1 +Ht2 = O2;

where Ht2 is the transpose of H2. �

Remark 1.3. The payo¤ is a function of the strategies of the two players.If, for instance, players 1�s coin shows heads up (strategy 1) and player 2�s coinalso shows heads up (strategy 1), then the element h11 = 1 denotes the amountwhich player 1 receives from player 2. Again, if player 1 chooses strategy 2(tails) and player 2 chooses strategy 1 (heads), then the element h21 = �1 is

2

Page 3: GamesTheorySB,ESA,etc

the payment that player 1 receives. In this case, the payment that player 1receives is a negative number. This means that player 1 loses one unit, that is,player 1 pays one unit to player 2. �Example 1.2. (Stone-paper-scissors)Scissors defeats paper, paper defeats stone, and stone in turn defeats scissors.

There are two players: 1 and 2. Each player has three strategies. Let strategies1, 2, 3 represent stone, paper, scissors respectively. If we suppose that thewinner wins one unit from the loser, then the payo¤ matrix is

Player 2

Player 1

1 2 31 0 �1 12 1 0 �13 �1 1 0

Remark 1.4. The payo¤ matrices for the two players are:

H1 =

24 0 �1 11 0 �1�1 1 0

35 , H2 =

24 0 �1 11 0 �1�1 1 0

35 :We have H1 = H2 and H1 +H

t2 = O3. �

Example 1.3. We consider zero-sum two-person game for which the payo¤matrix is given in the following table:

Player 2

Player 1

pnq 0 1 20 0 1 41 -1 2 72 -4 1 83 -9 -2 7

We have the payo¤ matrices:

H1 =

26640 1 4�1 2 7�4 1 8�9 �2 7

3775 ; H2 =

24 0 1 4 9�1 �2 �1 2�4 �7 �8 �7

35Player 1 has four strategies, while player 2 has three strategies.Remark 1.5. The payo¤ of player 1 (that is, the amount that player 2 pays

to player 1) can be determined by the function

f : f0; 1; 2; 3g � f0; 1; 2g ! Z; f(p; q) = q2 � p2 + 2pq: �

In each of the above examples there are two players, namely player 1 andplayer 2, and a payo¤matrix, H1 (there is H2 too such that H1+H

t2 = 0). Each

3

Page 4: GamesTheorySB,ESA,etc

player has several strategies. The strategies of player 1 are represented by therows of the payo¤matrix H1, and those of player 2 by the columns of the payo¤matrix H1. (The strategies of player 2 are represented by the rows of the payo¤matrix H2, and those of player 1 by the columns of the payo¤ matrix H2.)The player 1 chooses a strategy from his strategy set, and player 2, indepen-

dently, chooses a strategy from his strategy set. After the two choices have beenmade, player 2 pays an amount to player 1 as the outcome of this particularplay of the game. The amount is shown in the payo¤matrix. This amount maybe with positive, 0, or negative value. If the payo¤ is positive, player 1 receivesa positive amount from player 2, that is, player 1 wins an amount from player2. If the payo¤ is negative, player 1 receives a negative amount from player 2,that is, player 1 loses an amount to player 2 (player 2 wins an amount fromplayer 1). The gain of player 1 equals the loss of player 2. What player 1 winsis just what player 2 loses, and vice versa, For this, such a game is called azero-sum game.

1.2 Matrix games

In what follows we suppose that player 1 has m strategies and player 2 has nstrategies. We denote by aij ,i = 1;m, j = 1; n, the payo¤ which player 1 gainsfrom player 2 if player 1 chooses strategy i and player 2 chooses strategy j. So,we obtain the payo¤ matrix H1(� A):

A = (aij) = (1)

=

24 a11 a12 ::: a1n::: ::: ::: :::am1 am2 ::: amn

35De�nition 1.1. We call matrix game the game which is completely de-

termined by above matrix A. �To solve the game, that is, to �nd out the solution (what maximum payo¤

has the player 1 and what strategies are chosen by both players to do this) weexamine the elements of matrix A.In this game, player 1 wishes to gain a payo¤ aij as large as it is possible,

while player 2 will do his best to reach a value aij as small as it is possible. Theinterests of the two players are completely con�icting.If player 1 chooses strategy i he can be sure to obtain at least the payo¤

min1�j�n

aij : (2)

This is the minimum of the ith-row element in the payo¤ matrix A.Since player 1 wishes to maximize his payo¤ he can choose strategy i so as

to make the value in (2) as large as it is possible. That is to say, player 1 canchoose strategy i in order to receive a payo¤ not less than

max1�i�m

min1�j�n

aij : (3)

4

Page 5: GamesTheorySB,ESA,etc

In other words, if player 1 makes his best choice, the payo¤ which player 1receives cannot be less than the value given in (3).Similarly, if player 2 chooses his strategy j, he will lose at most

max1�i�m

aij : (4)

Now, player 2 wishes to minimize his lose so, he will try to choose strategyj so as to obtain the minimum of the value in (4). Namely, player 2 can choosej so as to have his loss not greater than

min1�j�n

max1�i�m

aij : (5)

So, if player 2 makes his best choice, the payo¤which player 1 receives cannotbe greater than the value given by (5).We have seen that player 1 can choose the strategy i to ensure a payo¤which

is at leastv1 = max

1�i�mmin1�j�n

aij ;

while player 2 can choose the strategy j to make player 1 get at most

v2 = min1�j�n

max1�i�m

aij :

Is there any relationship between these two values, v1 and v2?Lemma 1.1. The following inequality holds: v1 � v2, that is

v1 = max1�i�m

min1�j�n

aij � min1�j�n

max1�i�m

aij = v2: (6)

Proof. For every i we have

min1�j�n

aij � aij ; j = 1; n;

and for every j we have

aij � max1�i�m

aij ; i = 1;m:

Hence the inequalitymin1�j�n

aij � max1�i�m

aij

holds, for all i = 1;m and all j = 1; n.Since the left-hand side of the last inequality is independent of j, taking the

minimum with respect to j on both sides we have

min1�j�n

aij � min1�j�n

max1�i�m

aij = v2; i = 1;m;

that is,min1�j�n

aij � v2:

5

Page 6: GamesTheorySB,ESA,etc

Since the right-hand side of the last inequality is independent of i, takingthe maximum with respect to i on both sides we obtain

max1�i�m

min1�j�n

aij � v2;

that is, v1 � v2, and the proof is completed. �Let us examine the three examples from the section 1.1.In Example 1.1 we have m = 2, n = 2, therefore

v1 = max1�i�2

min1�j�2

aij = max(�1;�1) = �1;

v2 = min1�j�2

max1�i�2

aij = min(1; 1) = 1:

So, in Example 1.1 we have v1 < v2.In Example 1.2, we have m = 3, n = 3, therefore

v1 = max1�i�3

min1�j�3

aij = max(�1;�1;�1) = �1;

v2 = min1�j�3

max1�i�3

aij = min(1; 1; 1) = 1:

So, in Example 1.2 we have v1 < v2.In Example 1.3, we have m = 4, n = 3, therefore

v1 = max1�i�4

min1�j�3

aij = max(0;�1;�4;�9) = 0;

v2 = min1�j�3

max1�i�4

aij = min(0; 2; 8) = 0:

So, in Example 1.3 we have v1 = v2.

1.3 Saddle points in pure strategies

There are situations in which v1 = v2. Consequently we giveDe�nition 1.2. If the elements of the payo¤ matrix A of a matrix game

satisfy the following equality

v1 = max1�i�m

min1�j�n

aij = min1�j�n

max1�i�m

aij = v2; (7)

then the quantity v(= v1 = v2) is called the value of the game. �Remark 1.6. The value v is the common value of those given in (3) and

(5). �The value of the game in Example 1.3 is v = 0.If the equality (7) holds, then there exist an i� and a j� such that

min1�j�n

ai�j = max1�i�m

min1�j�n

aij = v;

andmax1�i�m

aij� = min1�j�n

max1�i�m

aij = v:

6

Page 7: GamesTheorySB,ESA,etc

Thereforemin1�j�n

ai�j = max1�i�m

aij� :

But, obviously we have

min1�j�n

ai�j � ai�j� � max1�i�m

aij� :

Thusmax1�i�m

aij� = ai�j� = v = min1�j�n

ai�j :

Therefore, for all i and all j

aij� � ai�j� = v � ai�j : (8)

Consequently, if player 1 chooses the strategy i�, then the payo¤ cannotbe less than v if player 2 departs from the strategy j�; if player 2 chooses thestrategy j�, then the payo¤ cannot exceed v if player 1 departs from the strategyi�.De�nition 1.3. We call i� and j� optimal strategies of players 1 and 2

respectively. The pair (i�; j�) is a saddle point (in pure strategies) of thegame. We say that i = i�, j = j� is a solution (or Nash equilibrium) of thegame. �Remark 1.7. The relationship (8) shows us that the payo¤ at the saddle

point (i�; j�) (solution of the game) is the value of the game. When player 1sticks to his optimal strategy i�, he can hope to increase his payo¤ if player 2departs from his optimal strategy j�. Similarly, if player 2 sticks to his optimalstrategy j�, player 1�s payo¤may decrease if he departs from his optimal strategyi�. Thus if the game has a saddle point (i�; j�) then the equality (7) holds andai�j� = v. �Remark 1.8. A matrix game may have more than one saddle point. How-

ever, the payo¤s at di¤erent saddle points are all equal, the common value beingthe value of the game. �Example 1.4. Consider the matrix game with the payo¤ matrix

A =

24 4 3 6 21 2 0 05 6 7 5

35 :We have for the minimum of its rows

min(4; 3; 6; 2) = 2; min(1; 2; 0; 0) = 0; min(5; 6; 7; 5) = 5

and then the maximum of these minimums:

max(2; 0; 5) = 5 = v1:

Now, we have for the maximum of its columns

max(4; 1; 5) = 5; max(3; 2; 6) = 6; max(6; 0; 7) = 7; max(2; 0; 5) = 5;

7

Page 8: GamesTheorySB,ESA,etc

and then the minimum of these maximums:

min(5; 6; 7; 5) = 5 = v2:

How v1 = v2 = 5 we have saddle point. It is easy to verify that (3; 1) and(3; 4) are both saddle points because

a31 = a34 = v = 5: �

Remark 1.9. If the matrix game has a saddle point (i�; j�), then it is veryeasy to found it. Really, by the De�nition 1.3 of a saddle point (8), the valueai�j� is an element in the payo¤ matrix A=(aij) which is at the same time theminimum of its row and the maximum of its column. �In Example 1.3, (1; 1) is a saddle point of the game because a11 = 0 is the

smallest element in the �rst row and at the same time the largest element inthe �rst column. In Example 1.4 a31 = a34 = 5 are two smallest elements inthe third row, and at the same time the largest element in the �rst and fourthcolumns, respectively.A matrix game can have several saddle points. In this case we can prove the

following result:Lemma 1.2. Let (i�; j�) and (i��; j��) be saddle points of a matrix game.

Then (i�; j��) and (i��; j�) are also saddle points, and the values at all saddlepoints are equal, that is

ai�j� = ai��j�� = ai�j�� = ai��j� : (9)

Proof. We prove that (i�; j��) is a saddle point. The fact that (i��; j�) is asaddle point can be proved in a similar way.Since (i�; j�) is a saddle point, we have

aij� � ai�j� � ai�j

for all i = 1;m and all j = 1; n. Since (i��; j��) is a saddle point, we have

aij�� � ai��j�� � ai��j

for all i = 1;m and j = 1; n. From these inequalities we obtain

ai�j� � ai�j�� � ai��j�� � ai��j� � ai�j� ;

which proves (9). By (9) and the above inequalities, we have

aij� � ai�j�� � ai�j

for all i = 1;m and all j = 1; n. Hence (i�; j��) is a saddle point. �From this lemma we see that a matrix game with saddle points has the

following properties:�the exchangeability or rectangular property of saddle points,

8

Page 9: GamesTheorySB,ESA,etc

�the equality of the values at all saddle points.Example 1.5. The game with the payo¤ matrix

A =

24 3 0 �5�7 �1 42 1 1

35has the saddle point (3; 2) because v1 = 1, v2 = 1 and a32 = v = 1. �Example 1.6. The pair (3; 3) is a saddle point for the game with the payo¤

matrix

A =

24 0 �1 �11 0 �11 1 0

35 :We have v = 0. �Example 1.7. The pair (2; 3) is a saddle point for the game with the payo¤

matrix

A =

24 2 �3 �1 41 2 0 1�2 3 �1 �2

35 :The value of the game is v = 0.Example 1.8. The game with the payo¤ matrix

A =

24 4 1 12 1 1�7 �1 4

35has four saddle points because we have (see Lemma 1.2)

a12 = a13 = a22 = a23 = 1 = v:

Example 1.9. The game with the payo¤ matrix

A =

24 7 5 60 9 414 1 8

35hasn�t a saddle point in the sense of De�nition 1.2 because

v1 = max(5; 0; 1) = 5

andv2 = min(14; 9; 8) = 8:

1.4 Mixed strategies

We have seen so far that there exist matrix games which have saddle points andmatrix games that don�t.

9

Page 10: GamesTheorySB,ESA,etc

When a matrix game hasn�t saddle point, that is, if

v1 = max1�i�m

min1�j�n

aij < min1�j�n

max1�i�m

aij = v2 (10)

we cannot solve the game in the sense given in the previous section. The payo¤matrix given in Example 1.2 (Stone-paper-scissors) hasn�t saddle point becausev1 = �1 < 1 = v2. The same situation is in Example 1.9, where v1 = 5 < 8 = v2.About the game given in Example 1.2, with the payo¤ matrix

A =

24 0 �1 11 0 �1�1 1 0

35we can say the following.Player 1 can be sure to gain at least v1 = �1, player 2 can guarantee that

his loss is at most v2 = 1. In this situation, player 1 will try to gain a payo¤greater than �1, player 2 will try to make the payo¤ (to player 1) less than 1.For these purposes, each player will make e¤orts to prevent his opponent from�nding out his actual choice of strategy. To accomplish this, player 1 can usesome chance device to determine which strategy he is going to choose; similarly,player 2 will also decide his choice of strategy by some chance method. This isthe mixed strategy that we introduce in this section.We consider a matrix game with the payo¤matrix A = (aij) where i = 1;m,

j = 1; n.De�nition 1.4. A mixed strategy of player 1 is a set of m numbers

xi � 0, i = 1;m satisfying the relationshipPm

i=1 xi = 1. A mixed strategy ofplayer 2 is a set of n numbers yj � 0, j = 1; n, satisfying

Pnj=1 yj = 1. �

Remark 1.10. The numbers xi and yj are probabilities. Player 1 chooseshis strategy i with probability xi, and player 2 chooses his strategy j withprobability yj . Hence xiyj is the probability that player 1 chooses strategy iand player 2 chooses strategy j with payo¤ aij for player 1 (and �aij for player2). �In opposite to mixed strategies, the strategies in the saddle points are called

pure strategies. The pure strategy i = i0 is a special mixed strategy: xi0 = 1,xi = 0 for i 6= i0.Let X = (x1; x2; : : : ; xm) and Y = (y1; y2; : : : ; yn) be the mixed strategies of

players 1 and 2, respectively.De�nition 1.5. The expected payo¤ of player 1 is the following real

numbermXi=1

nXj=1

aijxiyj (11)

which is obtained through multiplying every payo¤ aij by the correspondingprobability xiyj and summing for all i and all j. �Player 1 wishes to maximize the expected payo¤, while player 2 wants to

minimize it.

10

Page 11: GamesTheorySB,ESA,etc

Let Sm and Sn be the sets of allX = (x1; x2; : : : ; xm) and Y = (y1; y2; : : : ; yn)respectively, satisfying the following conditions

xi � 0; i = 1;m;mXi=1

xi = 1;

yj � 0; j = 1; n;nXj=1

yj = 1:

If player 1 uses the mixed (or no) strategy X 2 Sm, then his expected payo¤is at least

minY 2Sn

mXi=1

nXj=1

aijxiyj : (12)

Player 1 can choose X 2 Sm such as to obtain the maximum of the value in(12), that is he can be sure of an expected payo¤ not less than

v1 = maxX2Sm

minY 2Sn

mXi=1

nXj=1

aijxiyj : (13)

If player 2 chooses the strategy Y 2 Sn, then the expected payo¤ of player1 is at most

maxX2Sm

mXi=1

nXj=1

aijxiyj : (14)

Player 2 can choose Y 2 Sn such as to obtain the minimum of the value in(14), that is, he can prevent player 1 from gaining an expected payo¤ greaterthan

v2 = minY 2Sn

maxX2Sm

mXi=1

nXj=1

aijxiyj : (15)

As in the case studied in section 1.2 (Lemma 1.1) we have the followingresult:Lemma 1.3. For all X = (x1; x2; : : : ; xm) 2 Sm and all Y = (y1; y2; : : : ; yn) 2

Sn the following inequality holds v1 � v2, that is

v1 = maxX2Sm

minY 2Sn

mXi=1

nXj=1

aijxiyj � minY 2Sn

maxX2Sm

mXi=1

nXj=1

aijxiyj = v2 (16)

Proof. For all X 2 Sm and all Y 2 Sn, we have

minY 2Sn

mXi=1

nXj=1

aijxiyj �mXi=1

nXj=1

aijxiyj :

11

Page 12: GamesTheorySB,ESA,etc

Then, taking the maximum for all X 2 Sm on both sides of the inequality,we get

v1 = maxX2Sm

minY 2Sn

mXi=1

nXj=1

aijxiyj � maxX2Sm

nXi=1

nXj=1

aijxiyj :

This inequality holds for all Y 2 Sn. Therefore,

v1 = maxX2Sm

minY 2Sn

mXi=1

nXj=1

aijxiyj � minY 2Sn

maxX2Sm

mXi=1

nXj=1

aijxiyj = v2;

that is, v1 � v2, and the proof is completed. �The main result of this chapter is the well-known fundamental theorem of

the theory of matrix game, the minimax theorem. This is the aim of thefollowing section.

1.5 The minimax theorem

J. von Neumann was the �rst which proved this theorem. We present herevon Neumann�s proof given in [15].Theorem 1.1. If the matrix game has the payo¤ matrix A = (aij), then

v1 = v2, that is,

v1 = maxX2Sm

minY 2Sn

mXi=1

nXj=1

aijxiyj = minY 2Sn

maxX2Sm

mXi=1

nXj=1

aijxiyj = v2: (17)

To prove this theorem we need some auxiliary notions and results.Let A = (aij) be a m� n matrix, and

a(1) = (a11; a21; : : : ; am1); a(2) = (a12; a22; : : : ; am2); : : : ;

a(n) = (a1n; a2n; : : : ; amn)

obtained by using the columns of matrixA, that are n points in them-dimensionalEuclidean space Rm.De�nition 1.6. We call the convex hull (CH) of the n points a(1); a(2); : : : ; a(n)

the set CH = CH(a(1); a(2); : : : ; a(n)) de�ned by

CH =naj a 2 Rm; a = t1a

(1) + t2a(2) + � � �+ tna(n);

tk 2 R; tk � 0; k = 1; n;nXk=1

tk = 1o: �

Remark 1.11. The elements of CH are expressed as a convex linear com-bination of the n points a(1); a(2); : : : ; a(n). CH is a convex set, this can be easyveri�ed by showing that every convex linear combination of two arbitrary pointsof CH also belongs to CH.

12

Page 13: GamesTheorySB,ESA,etc

Lemma 1.4. Let CH be the convex hull of a(1); a(2); : : : ; a(n). If 0 62 CH,then there exist m real numbers �1; �2; : : : ; �m such that for every point a 2 CH,a = (a1; a2; : : : ; am) we have

�1a1 + �2a2 + � � �+ �mam > 0:

Proof. Since 0 62 CH, there exists a point � = (�1; �2; : : : ; �m) 2 CH,� 6= 0, such that the distance j�j from � to 0 is the smallest. This is equivalentto the statement that �21 + �

22 + � � �+ �2m > 0 is the smallest.

Now, let a = (a1; a2; : : : ; am) be an arbitrary point in CH. Then

�a+ (1� �)� 2 CH; 0 � � � 1;

andj�a+ (1� �)�j2 � j�j2;

ormXi=1

[�ai + (1� �)�i]2 =mXi=1

[�(ai � �i) + �i]2 =

= �2mXi=1

(ai � �i)2 + 2�mXi=1

(ai � �i)�i +mXi=1

�2i �mXi=1

�2i :

Thus, if � 6= 0, we obtain

mXi=1

(ai � �i)2 + 2mXi=1

(ai�i � �2i ) � 0:

Now let �! 0; we get

mXi=1

ai�i �mXi=1

�2i > 0;

and the lemma is proved. �Remark 1.12. This result is usually referred to as the theorem of the

supporting hyperplanes. It states that if the origin 0 doesn�t belong to theconvex hull CH of the n points a(1); a(2); : : : ; a(n), then there exists a supportinghyperplane p passing through 0 such that CH lies entirely in one side of p, thatis, in one of the two half-spaces formed by p.Lemma 1.5. Let A = (aij) be an arbitrary m� n matrix. Then either(1) there exist numbers y1; y2; : : : ; yn with

yj � 0; j = 1; n;nXj=1

yj = 1;

such thatnXj=1

aijyj = ai1y1 + ai2y2 + � � �+ ainyn � 0; i = 1;m;

13

Page 14: GamesTheorySB,ESA,etc

or(2) there exist numbers x1; x2; : : : ; xm with

xi � 0; i = 1;m;mXi=1

xi = 1

such that

mXi=1

aijxi = a1jx1 + a2jx2 + � � �+ amjxm > 0; j = 1; n:

Proof. We consider the convex hull of the n+m points

a(1) = (a11; a21; : : : ; am1); a(2) = (a12; a22; : : : ; am2); : : : ;

a(n) = (a1n; a2n; : : : ; amn)

e(1) = (1; 0; : : : ; 0); e(2) = (0; 1; 0; : : : ; 0); : : : ; e(m) = (0; 0; : : : ; 1):

We denote by CH this convex hull. We distinguish two cases:(1) 0 2 CH, respectively (2) 0 62 CH.Let 0 2 CH be. Then there exist real numbers

t1; t2; : : : ; tn+m � 0;n+mXj=1

tj = 1

such thatt1a

(1) + t2a(2) + � � �+ tna(n) + tn+1e(1)+

+tn+2e(2) + � � �+ tn+me(m) = 0;

that is, 0 was written as a convex linear combination of the above n+m points.Expressed in terms of the components, the ith equation (there are m equa-

tions), ist1ai1 + t2ai2 + � � �+ tnain + tn+i � 1 = 0:

Hencet1ai1 + t2ai2 + � � �+ tnain = �tn+i � 0; i = 1;m: (18)

It follows that t1 + t2 + � � �+ tn > 0, for otherwise we have

t1 = t2 = � � � = tn = 0 = tn+1 = � � � = tn+m;

which contradicts thatPn+m

j=1 tj = 1.Dividing each inequality of (18) by t1 + t2 + � � �+ tn > 0 and putting

y1 =t1

t1 + :::+ tn; y2 =

t2t1 + :::+ tn

; : : : ; yn =tn

t1 + :::+ tn

14

Page 15: GamesTheorySB,ESA,etc

we obtainnXj=1

aijyj = ai1y1 + � � �+ ainyn � 0; i = 1;m:

(2) 0 62 CH. By Lemma 1.4, there exists � = (�1; : : : ; �m) 2 CH such that

�a(j) = �1a1j + �2a2j + � � �+ �mamj > 0; j = 1; n; �e(i) = �i > 0; i = 1;m:(19)

Dividing each inequality in (19) by �1 + � � �+ �m > 0 and putting

x1 =�1

�1 + :::�m; x2 =

�2�1 + :::�m

; : : : ; xm =�m

�1 + :::�m

we obtain

mXi=1

aijxi = a1jx1 + a2jx2 + � � �+ amjxm > 0; j = 1; n:

This complete the proof of Lemma. �Proof of Theorem 1.1. We have proved that v1 � v2 in Lemma 1.3, so it

is su¢ cient to give the proof for v1 � v2.By Lemma 1.5, one of the following two statements holds.(1) There exist y1; y2; : : : ; yn � 0,

Pnj=1 yj = 1, such that

nXj=1

aijyj � 0; i = 1;m:

Hence, for any X = (x1; x2; : : : ; xm) 2 Sm we have

mXi=1

0@ nXj=1

aijyj

1Axi � 0:

Therefore

maxX2Sm

mXi=1

nXj=1

aijxiyj � 0:

It follows that

v2 = minY 2Sn

maxX2Sm

mXi=1

nXj=1

aijxiyj � 0: (20)

(2) There exist x1; x2; : : : ; xm � 0,Pm

i=1 xi = 1, such that

mXi=1

aijxi > 0; j = 1; n:

15

Page 16: GamesTheorySB,ESA,etc

Hence, for any Y = (y1; y2; : : : ; yn) 2 Sn, we havenXj=1

mXi=1

aijxi

!yj � 0:

Therefore,

minY 2Sn

mXi=1

nXj=1

aijxiyj � 0:

It follows that

v1 = maxX2Sm

minY 2Sn

mXi=1

nXj=1

aijxiyj � 0: (21)

By (20) and (21) it follows that, either v1 � 0 or v2 � 0, that is, neverv1 < 0 < v2. We repeat the above judgement with the new matrix B = (aij�k),where k is an arbitrary number. Because

Pmi=1 xi = 1 and

Pnj=1 yj = 1 we

obtain never v1 � k < 0 < v2 � k, or never v1 < k < v2. Therefore, v1 < v2is impossible, for otherwise there would be a number k satisfying v1 < k < v2,thus contradicting the statement "never v1 < k < v2". We have proved v1 � v2.�Remark 1.13. For another proof of the minimax theorem �an inductive

proof �see [20]. Here the new statement of minimax theorem is the following:Let A = (aij) be an arbitrary m�n matrix, and Sm and Sn respectively sets

of points X = (x1; x2; : : : ; xm) and Y = (y1; y2; : : : ; yn) satisfying

xi � 0; i = 1;m;

mXi=1

xi = 1; yj � 0; j = 1; n;

nXj=1

yj = 1:

Then we have

maxX2Sm

min1�j�n

mXi=1

aijxi = minY 2Sn

max1�i�n

nXj=1

aijyj : � (22)

1.6 Saddle points in mixed strategies

In this section, we show, that for any matrix game, a saddle point alwaysexists.Let A = (aij) be the payo¤ matrix of an m � n matrix game. If X =

(x1; x2; : : : ; xm) 2 Sm and Y = (y1; y2; : : : ; yn) 2 Sn are respectively mixedstrategies of players 1 and 2, then the expected payo¤

Pmi=1

Pnj=1 aijxiyj can

be written in matrix notationmXi=1

nXj=1

aijxiyj = XAY t:

16

Page 17: GamesTheorySB,ESA,etc

De�nition 1.7. A pair (X�; Y �) 2 Sm � Sn is called a saddle point (inmixed strategies)(or Nash equilibrium) of the matrix game A = (aij) if

XAY �t � X�AY �t � X�AY t; (23)

for all X 2 Sm and all Y 2 Sn. �The following important result establishes the equivalence between the ex-

istence of a saddle point and the minimax theorem.Theorem 1.2. The m� n matrix game A = (aij) has a saddle point if and

only if the numbers

maxX2Sm

minY 2Sn

XAY t and minY 2Sn

maxX2Sm

XAY t (24)

exist and are equal.Proof. "=)" The two numbers in (24) both exist, obviously (there are

optimal values of continuous functions de�ned on compact sets). Assume thatm � n matrix game has a saddle point (X�; Y �). That is to say that, theinequalities from relationship (23) hold for all X 2 Sm and all Y 2 Sn. Fromthe �rst inequality in (23), we obtain

maxX2Sm

XAY �t � X�AY �t

henceminY 2Sn

maxX2Sm

XAY t � X�AY �t: (25)

Similarly, from the second inequality in (23), we have

X�AY �t � minY 2Sn

X�AY t � maxX2Sm

minY 2Sn

XAY t: (26)

From (25) and (26) it follows that

v2 = minY 2Sn

maxX2Sm

XAY t � maxX2Sm

minY 2Sn

XAY t = v1:

But it is known (see Lemma 1.3) that the reverse inequality v1 � v2 holds.Therefore,

v1 = maxX2Sm

minY 2Sn

XAY t = minY 2Sn

maxX2Sm

XAY t = v2;

and the necessity of the condition is proved."(=" Assume that the two values in (24) are equal. Let X� 2 Sm and

Y � 2 Sn be, such that

maxX2Sm

minY 2Sn

XAY t = minY 2Sn

X�AY t; (27)

minY 2Sn

maxX2Sn

XAY t = maxX2Sm

XAY �t: (28)

17

Page 18: GamesTheorySB,ESA,etc

By the de�nitions of minimum and maximum, we have

minY 2Sn

X�AY t � X�AY �t; X�AY �t � maxX2Sm

XAY �t: (29)

Since the left-hand sides of (27) and (28) are equal, all terms in (27) through(29) are equal to each other. In particular, we have

maxX2Sm

XAY �t = X�AY �t:

Therefore, for all X 2 Sm,

XAY �t � X�AY �t: (30)

Similarly, for all Y 2 Sn,

X�AY �t � X�AY t: (31)

By (30) and (31), it results that (X�; Y �) is a saddle point of XAY t, andthe su¢ ciency of the condition is proved. �De�nition 1.8. If (X�; Y �) is a saddle point (see De�nition 1.7), then

we say that X�; Y � are respectively optimal strategies of players 1 and 2,and v = X�AY �t is the value of the game. We also say that (X�; Y �) is asolution(or a Nash equilibrium) of the game. �Remark 1.14. By Theorem 1.2 the value v of the game is the common

value of v1 = maxX2Sm

minY 2Sn

XAY t and v2 = minY 2Sn

maxX2Sm

XAY t.

The de�nition of a saddle point shows us that, as long as player 1 sticksto his optimal strategy X�, he can be sure to get at least the expected payo¤v = X�AY �t no matter which strategy player 2 chooses; similarly, as long asplayer 2 sticks to his optimal strategy Y �, he can hold player 1�s expected payo¤down to at most v no matter how player 1 makes his choice of strategy. �Now, we give some essential properties of optimal strategies. To do this, we

introduce �rst, some notations.For the matrix A = (aij) we denote the ith row vector of A by Ai� and the

jth column vector of A by A�j . Thus

XA�j =mXi=1

aijxi; Ai�Yt =

nXj=1

aijyj ;

andXA�j is the expected payo¤when player 1 chooses the mixed strategyX andplayer 2 chooses the pure strategy j, again Ai�Y t is the expected payo¤ whenplayer 2 chooses the mixed strategy Y and player 1 chooses the pure strategy i.We give some essential properties of optimal strategies.Lemma 1.6. Let A = (aij) be the payo¤ matrix of an m � n matrix game

whose value is v. The following statements are true:(1) If Y � is an optimal strategy of player 2 and Ai�Y �t < v, then x�i = 0 in

every optimal strategy X� of player 1.

18

Page 19: GamesTheorySB,ESA,etc

(2) If X� is an optimal strategy of player 1 and X�A�j > v, then y�j = 0 inevery optimal strategy Y � of player 2.Proof. We prove only (1). The proof of (2) is similar. Since Y � is an

optimal strategy of player 2, we have Ai�Y �t � v, i = 1;m. We denote byS1 = fij Ai�Y �t < vg, S2 = fij Ai�Y �t = vg.Then we can write

v = X�AY �t =mXi=1

x�iAi�Y�t =

=Xi2S1

x�iAi�Y�t +

Xi2S2

x�iAi�Y�t =

Xi2S1

x�iAi�Y�t +

Xi2S2

x�i v:

Hence

v

1�

Xi2S2

x�i

!=Xi2S1

x�iAi�Y�t;

that is,

vXi2S1

x�i =X

i 2 S1x�iAi�Y �t; orXi2S1

(v �Ai�Y �t)x�i = 0:

Since i 2 S1 implies v �Ai�Y �t > 0, we have x�i = 0. �Remark 1.15. This result states that if player 2 has an optimal strategy

Y � in a matrix game with value v, and if player 1, by using the ith pure strategycannot attain the expected payo¤ v, then the pure strategy i is a bad strategyand cannot appear in any of his optimal mixed strategies. �Lemma 1.7. Let A = (aij) be the payo¤ matrix of an m � n matrix game

whose value is v. The following statements are true:(1) X� 2 Sm is an optimal strategy of player 1 if and only if v � X�A�j,

j = 1; n.(2) Y � 2 Sn is an optimal strategy of player 2 if and only if Ai�Y �t � v,

i = 1;m.Proof. We prove only (1), the proof of (2) is similar. Necessity ("=)") of

the condition follows directly from the de�nition of a saddle point.To prove the su¢ ciency ("(=") of the condition, assume that v � X�A�j ,

j = 1; n.Let (X�; Y �) be a saddle point of the game, that is XAY �t � X�AY � �

X�AY t, for all X 2 Sm and all Y 2 Sn.We prove that (X�; Y �) is a saddle point of the game. Let Y = (y1; y2; : : : ; yn) 2

Sn be any mixed strategy of player 2. Multiplying both sides of inequalityv � X�A�j , j = 1; n, by yj and summing for j = 1; n we obtain

v �nXj=1

X�A�jyj = X�AY t:

In particular, v � X�AY �t. But, the de�nition of saddle point impliesX�AY �t � X�AY �t = v. It follows that XAY �t � X�AY �t � X�AY t, which

19

Page 20: GamesTheorySB,ESA,etc

proves us that (X�; Y �) is a saddle point of the game. Hence, X� is an optimalstrategy of player 1. �Remark 1.16. If the value of a game is known, the above lemma can be

used to examine whether a given strategy X� of player 1 is optimal, or a givenstrategy Y � of player 2 is optimal. �Example 1.10. The matrix game with the payo¤ matrix

A =

24 2 3 11 2 33 1 2

35has the value v = 2, and X� = Y � =

�13 ;

13 ;

13

�are the optimal strategies for the

players 1 and 2. According to Remark 1.16 the pure strategy x2 = 1, namelyX2 = (0; 1; 0) is a bad strategy. Really, we have

v �X2A�1 = 2� (0; 1; 0)

0@ 213

1A = 2� 1 = 1;

so v > X2A�1:

Thus the pure strategy X2 = (0; 1; 0); is a bad strategy. The same for theothers strategies.

Also, according to Lemma 1.7, the strategy X� = (1=3; 1=3; 1=3) of player1 is optimal. Really, we have v = 2; and

X�A:1 = (1=3; 1=3; 1=3)213= 2;

X�A:2 = (1=3; 1=3; 1=3)321= 2;

X�A:3 = (1=3; 1=3; 1=3)132= 2;

therefore v = 2 = X�A:j ; j = 1; 3:

Moreover, we have

20

Page 21: GamesTheorySB,ESA,etc

v �A2�Y �t = 2� (1; 2; 3)

0@ 131313

1A = 2� 2 = 0;

v �A1�Y �t = 2� (2; 3; 1)

0@ 131313

1A = 2� 2 = 0;

and

v �A3�Y �t = 2� (3; 1; 2)

0@ 131313

1A = 2� 2 = 0;

so, Y � is an optimal strategy.The game hasn�t saddle point in pure strategy because we have

v1 = maxmin aij = max(1; 1; 1) = 1;

whilev2 = minmax aij = min(3; 3; 3) = 3:

1.7 Domination of strategies

There are situations in which, an examination of the elements of the payo¤matrix shows us that player 1 will never use a pure strategy since each elementof this row (pure strategy) is smaller than the corresponding element in theother row (pure strategy). For example, we consider the matrix game whosepayo¤ matrix is

A =

24 2 �1 10 1 �11 �2 0

35 :In this matrix A the elements of third row are smaller than the corresponding

elements in the �rst row. Consequently, the player 1 will never use his thirdstrategy. Hence, regardless of which strategy player 2 chooses, player 1 will gainmore by choosing strategy 1 than by choosing strategy 3. Strategy 3 of player1 can only appear in his optimal mixed strategies with probability zero.Thus, in order to solve the matrix game with the payo¤ matrix A, the third

row can be deleted and we need to consider only the resulting matrix

A0 =

�2 �1 10 1 �1

�:

Now, in this matrix A0 each element of the �rst column is greater thanthe corresponding element of the third column. So, player 2 will lose less bychoosing strategy 3 than by choosing strategy 1. Thus, the �rst strategy ofplayer 2 will never be included in any of his optimal mixed strategies withpositive probability.

21

Page 22: GamesTheorySB,ESA,etc

Therefore, the �rst column of the matrix A0 can be deleted to obtain A" =1 �1�1 1

:

It is easy to verify that this 2�2matrix game has the mixed strategy solutionX� = Y � =

�12 ;

12

�and v = 0.

Returning to the original 3�3matrix game with payo¤matrix A, its solutionis

X� =

�1

2;1

2; 0

�; Y � =

�0;1

2;1

2

�; v = 0:

Remark 1.17. We have seen that in matrix game with the payo¤ matrixA, player 1 will never use his strategy 3 since strategy 1 gives him a greaterpayo¤ than strategy 3. Similarly, in matrix game with the payo¤ matrix A0,player 2 will never use his strategy 1 since it always costs him a greater lossthan strategy 3. Therefore the strict dominated strategies will not play by arational player 1, so they can be eliminated, and the strict dominant strategieswill not play by a rational player 2, so they can be eliminated. �De�nition 1.9. Let A = (aij) be the payo¤ matrix of an m � n matrix

game. Ifakj � alj ; j = 1; n (32)

we say that player 1�s strategy k dominates strategy l.If

aik � ail; i = 1;m (33)

we say that player 2�s strategy k dominates strategy l.If the inequalities in (32) or (33) are replaced by strict inequalities, we say

that the strategy k of player 1 or 2 strictly dominates his strategy l. �Remark 1.18. It can be proved that in the case in which a pure strategy is

strict dominated by a pure strategy (or by a convex linear combination of severalother pure strategies), then we can delete the row or column in the payo¤matrixcorresponding to the dominated pure strategy and solve the reduces matrixgame. The optimal strategies of the original matrix game can be obtain fromthose of the reduced one by assigning the probability zero to the pure strategycorresponding to the deleted row or column. �Remark 1.19. If the domination isn�t strict, we can still obtain a solution

for the original game from that of the reduced game. But, the deletion of a rowor column may involve loss of some optimal strategies of the original game. �Example 1.11. Let be the payo¤ matrix of a matrix game

A =

24 2 1 43 1 21 0 3

35 :Strategy 3 of player 2 is dominated by his strategy 2, so we can delete the

third column of the payo¤ matrix and we obtain

A0 =

24 2 13 11 0

35 :22

Page 23: GamesTheorySB,ESA,etc

Then, strategy 1 of player 2 is dominated by his strategy 2, so the �rstcolumn can be deleted; one obtain

A00 =

24 110

35 :Strategy 3 of player 1 is dominated by his strategy 2 (or 1), so we delete the

third row and it result

A000 =

�11

�:

The reduced game has the pure strategiesX�1 = (1; 0), X

�2 = (0; 1), Y

� = (1),hence the original game has the pure strategies X�

1 = (1; 0; 0), X�2 = (0; 1; 0),

Y � = (0; 1; 0). The value of game is v = 1. �Remark 1.20. The game in Example 1.11 has the saddle points (1; 2) and

(2; 2). The optimal strategies of this game are X� = (t1; t2; 0), Y � = (0; 1; 0)where t1; t2 � 0, t1 + t2 = 1, that is, X� is the convex linear combination ofpure strategies X�

1 and X�2 . �

Example 1.12.In the matrix game with the payo¤ matrix

A =

26643 2 4 03 4 2 34 3 4 20 4 0 8

3775 :we can delete the strategies dominated and so we get the reduce game with

the matrix4 20 8

. It is easy to verify that the optimal strategies of 2 x 2 matrix

game are X� = ( 45 ;15 ); Y

� = ( 35 ;25 ) and the value of game is v =

165 . Therefore

X� = (0; 0; 45 ;15 ), Y

�1 = (0:0;

35 ;

25 ) are optimal strategies of the original matrix

game, and v = 165 . There exists the optimal strategy Y

�2 = (0:

815 ;

13 ;

215 ) too. �

Remark 1.21. In the 3 � 3 matrix game, and in the 3 � 2 matrix gameobtained above we used domination of a strategy by a convex linear combinationwith t1 = t2 =

12 . �

Remark 1.22. The deletion of a certain row or column of a payo¤ matrixusing non-strict domination of strategies may result in a reduced game whosecomplete set of solutions does not lead to the complete set of solutions of theoriginal larger game. That is, the solution procedure may lose some optimalstrategies of the original game. This situation appears, for example, for matrixgame with payo¤ matrix

A =

24 3 5 34 �3 23 2 3

35 :We get the reduced game with the matrix

A00 =

�5 3�3 2

�;

23

Page 24: GamesTheorySB,ESA,etc

which has the optimal mixed strategies X�1 =

�13 ;

23

�, Y �1 =

�12 ;

12

�. Thus the

original game has the optimal mixed strategies X�1 =

�13 ;

23 ; 0�, Y �1 =

�0; 12 ;

12

�.

But, we have again the optimal pure strategies X�2 = (1; 0; 0), Y

�2 = (0; 0; 1).

Really, all convex linear combinations of X�1 and X

�2 are optimal (mixed)

strategies of player 1, respectively, all convex linear combinations of Y �1 and Y�2

are optimal strategies of player 2.

1.8 Solution of 2� 2 matrix gameWriting these equations in terms of elements of the payo¤ matrix, we have:

ap+ c(1� p) = v; aq + b(1� q) = v; bp+ d(1� p) = v; cq + d(1� q) = v:

The equations in p give us p� = d�ca+d�b�c , and the equations in q give us

q� = d�ba+d�b�c . Then v =

ad�bca+d�b�c .

Remark 1.23. The above formulae are also valid for the case a > d, a > c,d > b, d > c. �Example 1.13. The 2� 2 matrix game with the payo¤ matrix

A =

�5 3�3 2

�has solution in pure strategies X� = (1; 0), Y � = (0; 1), v = 3. We havev1 = max(3;�3) = 3, v2 = min(5; 3) = 3 and a12 = 3. �Example 1.14. The 2� 2 matrix game with the payo¤ matrix

A =

�3 20 5

�hasn�t solution in pure strategies. We have v1 = max(2; 0) = 2, v2 = min(3; 5) =3. Thus, we obtain

p� =5� 0

3 + 5� 2� 0 =5

6; q� =

5� 23 + 5� 2� 0 =

3

6=1

2;

hence X� =�56 ;

16

�, Y � =

�12 ;

12

�. Then the value of game is v = 15�0

6 = 52 .

Indeed we have

v = X�AY �t =

�5

6;1

6

��3 20 5

��1212

�=

=

�15

6;15

6

��1212

�=15

6=5

2: �

Remark 1.24. For the 2 � 2 matrix game with no saddle point, an inter-esting technique of solution is described by Williams. Let be the payo¤ matrix

A =

�a bc d

�:

24

Page 25: GamesTheorySB,ESA,etc

First, subtract each element of the second column from the correspondingelement of the �rst column: a � b and c � d. Then take absolute values of thetwo di¤erences and reverse the order of the absolute values: jc� dj and ja� bj.The ratio jc�dj

ja�bj is the ratio of x1 and x2 in player 1�s optimal strategy, namely

X� = (x1; x2) = (p; 1�p). Hence x1x2 =jc�djja�bj , and how x1+x2 = 1, we get x1; x2.

The similar technique, but with the rows, lead us to Y � = (y1; y2) = (q; 1� q).�Example 1.15. In the case of Example 1.14, we have

A =

�3 20 5

�;

hence, 3� 2 and 0� 5| {z } that is 1 and � 5| {z }in the �rst step.Then 1 and � 5| {z }! 5 and 1| {z }

in the second step.In the end we have x1

x2= 5, hence x1 = 5x2. How x1 + x2 = 1 we obtain

6x2 = 1, that is x2 = 16 , x1 =

56 . Thus X

� =�56 ;

16

�.

In the �rst step we have 3� 0 and 2� 5| {z } that is 3 and � 3| {z }with the elements of rows. Then, in the second step, we take absolute values ofthe two di¤erences and reverse the order of the absolute values

3 and � 3| {z } ! 3 and 3| {z }The ratio 3=3 is the ratio of y1 to y2 in player 2�s optimal strategy, hence

y1 = y2. So, we obtain y1 = y2 =12 , that is Y

� =�12 ;

12

�. These results are the

same as those of Example 1.14. �

1.9 Graphical solution of 2� n and m� 2 matrix gamesIn the case of 2�n and m�2 matrix games we can present a graphical methodfor �nding the solution. We illustrate the method by a 3 � 2 matrix game.Suppose that the payo¤ matrix A is

A =

24 a bc de f

35 :Denote player 1�s pure strategies by T;M;B and player 2�s pure strategies

by L;R. Assume that player 2 uses the mixed strategy Y = (y1; y2) = (y; 1�y),where 0 � y � 1. Suppose that y = 1 and y = 0 represent the pure strategies L

25

Page 26: GamesTheorySB,ESA,etc

and R respectively. So, we can write

A =

y 1� yL R

TMB

24 a bc de f

35If player 2 chooses the pure strategy L, that is, y = 1, and if player 1 chooses

the pure strategy T , the payo¤ is a, as it is shown in Fig. 1.1. If player 2 choosesthe pure strategy R, that is, y = 0, the payo¤ corresponding to T is b. We jointhe line ab in Fig. 1.1.

Figure 1.1: Mixed strategy Y

Now, we suppose that player 2 chooses a mixed strategy Y = (y; 1 � y),represented by P in the �gure. Then it can see that the height PQ representsthe expected payo¤ when player 2 uses Y and player 1 uses T . This amount is

A1�Yt = ay + b(1� y):

Similarly, corresponding to player 1�s strategies M and B we have the linecd and ef and the amounts are

A2�Yt = cy + d(1� y)

A3�Yt = ey + f(1� y):

The heights of the points on these lines represent the expected payo¤ ifplayer 2 uses Y while player 1 uses M and B, respectively.For any mixed strategy Y of player 2, his expected lost is at more the

maximum of the three ordinates on the lines ab; cd; ef at the point y, thatis,

max1�i�3

Ai�Yt = max

1�i�3

2Xj=1

aijyj : (34)

The graphic of this function is represented by the heavy black line in theFig. 1.1.Player 2 wishes to choose an Y so as to minimize the maximum function in

(34). We see from the �gure that he should choose the mixed strategy corre-sponding to the point A0. At this point the expected payo¤ is

A0B0 = minY 2S2

max1�i�3

2Xj=1

aijyj

and A0B0 is the value of the game.

26

Page 27: GamesTheorySB,ESA,etc

The graphical solution of a 2� n matrix game is similar. We explain it forthe case n = 3 and let the payo¤ matrix A of the game be

A =

�a b cd e f

�:

Denote player 1�s pure strategies by U;D and player 2�s pure strategies byL;M;R.Assume that player 1 uses the mixed strategy X = (x1; x2) = (x; 1 � x),

where 0 � x � 1. Suppose that x = 1 represents the pure strategy U and x = 0represents the pure strategy D.If player 1 chooses the pure strategy U , that is when x = 1, and if player 2

chooses the pure strategy L, the payo¤ is a, as it is shown in Fig. 1.2. If player1 chooses D, that is, x = 0, the payo¤ corresponding to L is d. We join the linead in the �gure.Now suppose that player 1 chooses a mixed strategy X = (x; 1 � x) repre-

sented by P in the �gure. Then it can see that the height PQ represents theexpected payo¤ when player 1 uses X and player 2 uses L. The amount is

XA�1 =2Xi=1

ai1xi = ax+ d(1� x):

Similarly, corresponding to player 2�s strategiesM and R we have the line beand cf . The heights of the points on these lines represents the expected payo¤sif player 1 uses X while player 2 uses M and R respectively.

Figure 1.2: Mixed strategy X

For any mixed strategy X of player 1, his expected payo¤ is at least theminimum of the three ordinates on the lines ad; be; cf at the point x, that is,

min1�j�3

XA�j = min1�j�3

2Xi=1

aijxi: (35)

The graphic of this function is represented by the heavy black line in the�gure.Player 1 wishes to choose an X so as to maximize the minimum function in

(35). We see from the �gure that he should choose the mixed strategy corre-sponding to the point A0. At this point the expected payo¤ is

A0B0 = maxX2S2

min1�j�3

2Xi=1

aijxi = maxX2S2

min1�j�3

XA�j ;

which is the value of the game.

27

Page 28: GamesTheorySB,ESA,etc

We note that the point B0 in Fig. 1.2. is the intersection of the lines ad andcf . The abscissa x = x� of the point A0 and the value of A0B0 can be evaluatedby solving a system of two linear equations in two unknowns.

Remark 1.25. The graph also shows us that player 2�s optimal strategydoesn�t involve his pure strategy M. Therefore, the solution of the 2 x 3 matrix

game can be obtained from the solution of the 2 x 2 matrix game�a cd f

�: �

The graphical method described above can be used to solve all 2�n matrixgames.Example 1.16. Find out the solution of 2� 4 matrix game with the payo¤

matrix

A =

�1 5 5 34 1 3 2

�:

The third column is dominated by the fourth column and so it can be elim-inate. We have the payo¤ matrix

A =L M R

UP

�1 5 34 1 2

�Now suppose that player 1 chooses a mixed strategy X = (x; 1� x). In the

Figure 1.3 we have the lines ad, be and cf corresponding to player 2�s strategiesL;M and R.

Figure 1.3: X for Example 1.16.

We see from the �gure that player 1 should choose the mixed strategy cor-responding to the point A0. The abscissa x = x� of the point A0 and the valueof A0B0 can be evaluated by solving the system of two linear equations corre-sponding to strategies L and R. The system of linear equations is�

3x+ y = 4 (L)x� y = �2 (R)

and the solution is x = 12 , y =

52 . Thus the optimal mixed strategy of player

1 is X =�12 ;

12

�, and the value of game is v = 5

2 . To �nd the optimal mixedstrategy of player 2 we have Y = (q1; q2; q3) and equality�

1

2;1

2

��1 5 34 1 2

�24 q1q2q3

35 = 5

2:

So, we obtain 52q1 + 3q2 +

52q3 =

52 , and because q1 + q2 + q3 = 1 we get

q2 = 0 and q1 + q3 = 1. Thus we have Y = (q; 0; 1� q), where q = q1 2 [0; 1].For the original matrix game the optimal strategies of player 2 are Y =

(q; 0; 0; 1� q), q 2 [0; 1]. The value of game is v = 52 .

28

Page 29: GamesTheorySB,ESA,etc

1.10 Solution of 3� 3 matrix gameTo obtain the solution of 3�3matrix game we use the fact that a linear functionon a convex polygon can reach its maximum (minimum) only at a vertex of thepolygon.Consider the payo¤ matrix of an arbitrary 3� 3 matrix game given by

A =

24 a11 a12 a13a21 a22 a23a31 a32 a33

35 :A mixed (pure) strategy for the player 1 has the form X = (x1; x2; x3) with

x1; x2; x3 � 0 and x1 + x2 + x3 = 1. The value of the game is

v = maxX2S3

min1�j�3

XA�j = maxX2S3

minfXA�1; XA�2; XA�3g: (36)

Consider the equations

XA�1 = XA�2; XA�2 = XA�3; XA�3 = XA�1: (37)

Each equation represents a straight line which divides the whole plane intotwo half-planes.The conditions x1; x2; x3 � 0, x1 + x2 + x3 = 1 show us that x1; x2; x3 are

baricentric coordinates of the point X = (x1; x2; x3). The set of all pointsin the closed equilateral triangle 123 with the vertices (1; 0; 0), (0; 1; 0), (0; 0; 1)is the simplex S3.The numbers x1; x2; x3 with the above conditions represents the distances

from X to the sides of triangle S3 with the vertices 1, 2, 3 respectively. Theequations of the three sides 23, 31, 12 of the triangle are x1 = 0, x2 = 0, x3 = 0,respectively (see Fig. 1.4.).

Figure 1.4: Baricentric coordinates

Equation XA�1 = XA�2, for instance, divides the whole plane into two half-planes. The points X in one half-plane satisfy the condition XA�1 < XA�2,while those in the other half-plane satisfy the condition XA�1 > XA�2.The same situation is for other two equations in (37).The three lines (37) either intersect at one point or are parallel to each other.

In both cases these lines divide the whole plane into three regions R1; R2; R3,see Fig. 1.5. (The points outside of triangle can be regarded as points with oneor two of the three coordinates x1; x2; x3 assuming negative values.)

Figure 1.5: The three regions

29

Page 30: GamesTheorySB,ESA,etc

In the region R1 we have

min1�j�3

XA�j = XA�1;

in the region R2 we have

min1�j�3

XA�j = XA�2;

and in the region R3 we have

min1�j�3

XA�j = XA�3:

Therefore, the value of game (36) can be written as

v = maxX2S3

min1�j�3

XA�j =

= max

�min

X2S3\R1

XA�1; minX2S3\R2

XA�2; minX2S3\R3

XA�3

�: (38)

To determine the value v, we should �rst compute

minX2S3\Rj

XA�j ; j = 1; n:

Each of the sets S3 \ Rj , j = 1; n, is a convex polygon. It is su¢ cientto evaluate the values of XA�j at the relevant vertices of this polygon and tomake a comparison between these values. The maximum value must be v. Theoptimal strategies of player 1 can be determined by comparison.The optimal strategies of player 2 can be determined in a similar manner,

after the value v of the game is determined. We have

v = minY 2S3

max1�i�3

Ai�Yt =

= minY 2S3

maxfA1�Y t; A2�Y t; A3�Y tg =

= min

�max

Y 2S3\T1A1�Y

t; maxY 2S3\T2

A2�Yt; maxY 2S3\T3

A3�Yt

�;

where Ti is the region in which the linear function Ai�Y t satis�es

Ai�Yt = max

1�i�3Ai�Y

t; i = 1;m:

It su¢ ces to compute the values of Ai�Y t at the vertices of convex polygonsand to make a comparison between them. The minimum value must be v, andthe vertices Y at which the minimum is assumed are points corresponding tothe optimal strategies of player 2.Remark 1.26. To simplify the computation we can add a convenient con-

stant to each element of the initial matrix. �

30

Page 31: GamesTheorySB,ESA,etc

Example 1.17. Let us compute the value of game and �nd out the optimalstrategies of the game for which the payo¤ matrix is

A =

24 4 2 33 4 24 0 8

35 :To simplify the computation we add the constant �4 to each element of the

matrix. The result is the matrix

A =

24 0 �2 �1�1 0 �20 �4 4

35 :For this matrix game A we have, with a mixed strategy X = (x1; x2; x3),

XA�1 = �x2; XA�2 = �2x1 � 4x3; XA�3 = �x1 � 2x2 + 4x3:

The equation of the line XA�1 = XA�2 is

2x1 � x2 + 4x3 = 0; or 3x1 + 5x3 = 1:

The equation of the line XA�2 = XA�3 is

�x1 + 2x2 � 8x3 = 0; or � 3x1 � 10x3 = �2:

The equation of the line XA�3 = XA�1 is

�x1 � x2 + 4x3 = 0; or 5x3 = 1:

The regions R1; R2; R3 in which min1�j�3

XA�j are equal with XA�1, XA�2,

XA�3 respectively are shown in Fig. 1.6.

Figure 1.6: The three regions for Example 1.17

We evaluate XA�1 at the point�0; 45 ;

15

�. It results

�0;4

5;1

5

�0@ 0�10

1A = �45:

The values of XA�2 at the points�23 ;

13 ; 0�; (1; 0; 0); (0; 0; 1) and

�0; 45 ;

15

�are: �

2

3;1

3; 0

�0@ �20�4

1A = �43; (1; 0; 0)

0@ �20�4

1A = �2

(0; 0; 1)

0@ �20�4

1A = �4;�0;4

5;1

5

�0@ �20�4

1A = �45:

31

Page 32: GamesTheorySB,ESA,etc

The value of XA�3 at the points�23 ;

13 ; 0�; (0; 1; 0) and

�0; 45 ;

15

�are

�2

3;1

3; 0

�0@ �1�24

1A = �43; (0; 1; 0)

0@ �1�24

1A = �2

and �0;4

5;1

5

�0@ �1�24

1A = �45:

By comparison of the above �ve values, (� 45 ;�4 and �2); we see that the maxi-

mum value of the matrix game is v = � 45 , and the vertex at which the maximum

is reached is X�=�0; 45 ;

15

�. Thus X�=

�0; 45 ;

15

�is the optimal strategy of player

1.

We proceed in a similar way to �nd out the optimal strategy of player 2.We get that the vertices Y �1 =

�0; 35 ;

25

�, Y �2 =

�815 ;

13 ;

215

�represent optimal

strategies of player 2. Hence Y � = �Y �1 + (1 � �)Y �2 , 0 � � � 1. By comingback to the original matrix game with the payo¤ matrix B we obtain the valuevB = vA + 4 that is vB = 16

5 . The optimal strategies are

X� =

�0;4

5;1

5

�;

Y � =

�8(1� �)15

;3

5�+

1� �3

;3

5�+

2(1� �)15

�; 0 � � � 1: �

Remark 1.27. We have the same result as that obtained in Example 1.12where we used the elimination of dominated strategies. �

1.11 Matrix games and linear programming

Next, we formulate the matrix game problem as a linear programming problem.Let A = (aij) be the payo¤ matrix of a matrix game. It isn�t a restriction toassume that aij > 0 for all i = 1;m and all j = 1; n. Then the value v of thegame must be a positive number.By choosing a mixed strategy X 2 Sm player 1 can get at least the expected

payo¤min1�j�n

XA�j = u:

Therefore, we have XA�j � u, j = 1; n, that is

mXi=1

aijxi � u; j = 1; n

withmXi=1

xi = 1; xi � 0; i = 1;m:

32

Page 33: GamesTheorySB,ESA,etc

We denote xiiu = x0i, i = 1;m: Then the above problem becomes

mXi=1

aijx0i � 1; j = 1; n

mXi=1

x0i =1

u

x0i � 0; i = 1;m:

Player 1 wishes to maximize u, (this maximum is the value v of the game),that is, he wishes to minimize 1

u . Thus the problem reduces to the followinglinear programming problem8>>>><>>>>:

[min]f = x01 + x02 + � � �+ x0mPm

i=1 aijx0i � 1; j = 1; n

x0i � 0; i = 1;m

(39)

Similarly, player 2, by choosing a mixed strategy Y 2 Sn, can keep player 1from getting more than

max1�i�m

Ai�Yt = w:

So, we have Ai�Y t � w, i = 1;m, that is,

nXj=1

aijyj � w; i = 1;m;

wherenXj=1

yj = 1; yj � 0; j = 1; n:

We denote yjw = y0j , j = 1; n.

Since player 2 wishes to minimize w (this minimum is also the value v ofthe game), that is, he wishes to maximize 1

w , the above problem reduces to thefollowing linear programming problem, which is the dual of (39), formulatedabove: 8>>>><>>>>:

[max]g = y01 + y02 + � � �+ y0nPn

j=1 aijy0j � 1; i = 1;m

y0j � 0; j = 1; n

(40)

Thus the solution of a matrix game is equivalent to the problem of solvinga pair of dual linear programming problems.Remark 1.28. Due to the duality theorem, well known in linear program-

ming, it is enough to solve one of those above problems. �

33

Page 34: GamesTheorySB,ESA,etc

Example 1.18. We consider the same matrix game as in Example 1.17.Thus we have

B =

24 4 2 33 4 24 0 8

35 :To obtain aij > 0 we add the constant 1 at each element of matrix B and

so we obtain

A =

24 5 3 44 5 35 1 9

35 :The corresponding linear programming problem (40) is8>>>>>>>>>>>><>>>>>>>>>>>>:

[max]g = y01 + y02 + y

03

5y01 + 3y02 + 4y

03 � 1

4y01 + 5y02 + 3y

03 � 1

5 y01 + y02 + 9y

03 � 1

y01; y02; y

03 � 0

In order to solve this problem we use the simplex method. The simplexmatrix can be written successively2664

5 3 4 1 0 0 14 5 3 0 1 0 1

5 1 9 0 0 1 11 1 1 0 0 0 0

3775 3�!

!

26640 2 �5 1 0 �1 00 21=5 �21=5 0 1 �4=5 1=51 1=5 9=5 0 0 1=5 1=50 4=5 �4=5 0 0 �1=5 �1=5

3775 �5;�4;�1!

!

26640 1 �5=2 1=2 0 �1=2 0

0 0 63=10 �21=10 1 13=10 1=5

1 0 23=10 �1=10 0 3=10 1=50 0 6=5 �2=5 0 1=5 �1=5

3775!

!

26640 1 0 �1=3 25=63 1=63 5=630 0 1 �1=3 10=63 13=63 2=63

1 0 0 2=3 �23=63 11=63 8=63

0 0 0 0 �4=21 �1=21 �5=21

3775Thus the solution is gmax = 5

21 , y01 =

863 , y

02 =

563 , y

03 =

263 , y

04 = 0, y

05 = 0,

y06 = 0, x01 = 0, x

02 =

421 , x

03 =

121 .

34

Page 35: GamesTheorySB,ESA,etc

We have gmax = 1w , hence w =

215 is the value of game with the matrix A.

Also, y1 = y01w =863 �

218 =

815 , y2 =

13 , y3 =

215 , x1 = 0, x2 =

45 , x3 =

15 .

The problem has still another solution because we have26641=2 1 0 0 3=14 13=126 3=211=2 0 1 0 �1=42 37=126 2=213=2 0 0 1 �23=42 11=42 4=210 0 0 0 �4=21 �1=21 �5=21

3775Therefore y01 = 0, y02 = 3=21, y03 = 2=21, y04 = 4=21, y05 = 0, y06 = 0, thus

y1 = 0, y2 = 3=5, y3 = 2=5.In conclusion, the solution of matrix game with payo¤ matrix B is:

v = w � 1 = 21

5� 1 = 16

5; X� =

�0;4

5;1

5

�;

Y �1 =

�8

15;1

3;2

15

�; Y �2 =

�0;3

5;2

5

�hence

Y � = �Y �1 + (1� �)Y �2 ; 0 � � � 1: �Remark 1.29. In a next section we will do an another approach for this

kind of problems. �

1.12 De�nition of the non-cooperative game

For each game there are n players, n 2 N, n � 2. In our mathematical consider-ations it is important the existence of the players and the possibility to identifyand to distinguish them between the others players. The set of the players Iis identi�ed with the set of �rst n non zero natural numbers I = f1; 2; : : : ; ng.Each player i, i 2 I can apply many strategies. In the case of an e¤ectivegame the player i, in the moments of the decision during the game, may choosefrom a set of variants Si. We consider that Si is a �nite set, for every i. Be-cause from mathematical point of view the concrete nature of the variants isn�tessential but the possibility to identify them is important, we denote generallySi = f1; : : : ;mig and we consider in what follows the general notation Si = fsig,i = 1; n and for each �xed i, si = 1;mi. If we take a strategy of each playerthen we obtain a situation (strategy) of the game s = (s1; : : : ; sn) which it isan element of the cartezian product S1�� � ��Sn =

Qi2I Si. For every situation

s, each player i obtains a payo¤ Hi(s). So, H is a function de�ned on the setof all situations s and we call it the payo¤matrix of the player i.De�nition 1.10. The ensemble � = < I; fSig; fHig; i 2 I > is called non-

cooperative game. Here I and Si are sets which contain natural numbers,Hi = Hi(s), i 2 I, are real functions de�ned on the set S, s 2 S, S =

Qi2I Si.

35

Page 36: GamesTheorySB,ESA,etc

Remark 1.30. We call the function Hi the payo¤ matrix because its set ofvalues can be e¤ective written as a n-dimensional matrix of type fm1; : : : ;mng.So, we can accept the name matrix game when we want to underline that thisgame is given by a n-dimensional matrix. �Example 1.19. Two players put on the table a coin of same kind. If the

both players choose same face, then the �rst player take the two coins, and incontrary case, the second player take the two coins. (See the Example 1.1).The �rst player is denoted by 1 and the second by 2. So, I = f1; 2g. Each

player has two strategies, S1 = S2 = f1; 2g. If s1 = 1 or s1 = 2, then the player1 chose "heads" respectively "tails". Similarly are the values s2 = 1 respectivelys2 = 2 for the player 2. It follows that S = S1�S2 = f(1; 1); (1; 2); (2; 1); (2; 2)g.Then this game is

� =< I; S1; S2;H1;H2 > :

The payo¤ matrix H1(s) of the player 1 can be written as:

H1(s) =

�H1(1; 1) H1(1; 2)H1(2; 1) H1(2; 2)

�=

�1 �1�1 1

�;

where the rows correspond to the strategies of player 1 and the columns to thestrategies of player 2.The payo¤ matrix of the player 2 is

H2(s) =

�H2(1; 1) H2(2; 1)H2(1; 2) H2(2; 2)

�=

��1 11 �1

�and here the rows correspond to the strategies of player 2 and the columnscorrespond to the strategies of player 1. �Remark 1.31. A general notation for the payo¤ matrices is given by the

following table:

Situation Payo¤ matrixs1 . . . sn H1 . . . Hn

So, for the game considered in Example 1.19 the payo¤ matrices are

Situation Payo¤ matrixs1 s2 H1 H2

1 1 1 -11 2 -1 12 1 -1 12 2 1 -1

36

Page 37: GamesTheorySB,ESA,etc

1.13 De�nition of the equilibrium point

Let us consider a non-cooperative game

� =< I; fSig; fHig; i 2 I > :

We suppose that the game repeats oneself many times.Example 1.19 shows us that it isn�t in advantage for every player to apply

the same strategy all the time. If, for example, the player 1 applies only thestrategy 1, then the player 2 observe this thing and he applies the strategy 2and so the player 1 will be a loser all the time.The similar situation is, if the player 1 applies the strategy 2 all the time.Similarly, for the player 2.So, it follows that in every situation s = (s1; s2), each player can choose

a preferred situation s0, in opposite with the strategy s, which exists at thatmoment of time. This strategy can be obtained by modifying only the strategyof the player with another one.

Given situation Preferred situations s0 for the player

1 2(1,1) (1,2)(1,2) (2,2)(2,1) (1,1)(2,2) (2,1)

Hence, by repeating the game, it is necessary to apply each strategy si withthe probability (relative frequency) pisi , in order to obtain a payo¤ as much asit is possible, for every player, in all games which are played. That is to ensurethe possible average value of the game for every player. For the row matrix ofall probabilities pisi , si = 1;mi, which correspond to the player i, we use thenotation Pi = [pi1 ; : : : ; pimi

]. The vector Pi, for all values of the probabilities, iscalled the mixed strategy of the player i. If only a probability from the vector Piis di¤erent from 0, and it is equal with 1, then Pi is the pure strategy si of theplayer. If all strategies Pi, i = 1; n are pure strategies, then P = (P1; : : : ; Pn) ispure strategy (the situation s = (s1; : : : ; sn)) of the whole game.We denote Ji the row matrix which contains 1 and, so, we can write PiJ ti = 1,

where t is the symbol for the transposed matrix.We denote Pi the mixed strategy of all players except the player i. We

suppose that each player �xed his strategy which is independent of those ofthe others players: Pi =

Qj 6=i Pj , where we consider this product of the vec-

tors as a cartezian product (each component with each component). Whenwe write the elements of the vector Pi we consider the lexicographic ordo-nation of the elements. For example, if P1 = [p11; p12], P2 = [p21; p22; p23],P3 = [p31; p32; p33; p34] are the mixed strategies of the players I = f1; 2; 3g, thenwe have

P1 = P2 � P3 = [p21p31; p21p32; p21p33; p21p34; p22p31; p22p32;

37

Page 38: GamesTheorySB,ESA,etc

p22p33; p22p34; p23p31; p23p32; p23p33; p23p34];

P2 = P1 � P3 = [p11p31; p11p32; p11p33; p11p34; p12p31; p12p32; p12p33; p12p34];

P3 = P1 � P2 = [p11p21; p11p22; p11p23; p12p21; p12p22; p12p23]:

De�nition 1.11. We say that the non-cooperative game is solved if wecan determine those mixed strategies (solutions) Pi, PiJ ti = 1, i = 1; n, forwhich considering a constant vector Pi, the payo¤ function Fi = PiHiP

tihas

the maximum value, for every i = 1; n.We denote the strategies Pi, i = 1; n as P = (P1; : : : ; Pn).The mathematical object obtained here is called the equilibrium point

(Nash equilibrium) of the game. �Example 1.20. The data of non-cooperative game from Example 1.19 can

be represented in the following form: (P1 = P2, P2 = P1):

P1P1nP2 p21 p22p11 1 �1p12 �1 1

P2P2nP1 p11 p12p21 �1 1p22 1 �1

In this case the corresponding system is:

p11 + p12 = 1; p21 + p22 = 1;

F1 = (p21 � p22)p11 + (�p21 + p22)p12;

F2 = (�p11 + p12)p21 + (p11 � p12)p22:

If P1; P2 is the solution of the problem, then it isn�t any vector with prob-abilities P 01 = [p011; p

012] for which F

01 = F1(P

01 ; P2) > F1 = F1(P1; P2), where

F 01 = (p21 � p22)p011 + (p21 + p22)p

012 and there isn�t any vector with proba-

bilities P 02 = [p021; p022] for which F

02 = F2(P1; P

02 ) > F2 = F2(P1; P2), where

F 02 = (�p11 + p12)p021 + (p11 � p12)p022.For example, P1 =

�12 ;

12

�, P2 =

�12 ;

12

�is a solution of the game and we have

F1 = F2 = 0. �

1.14 The establishing of the equilibrium points of a non-cooperative game

The solution of the game from Example 1.20 have been obtained by a privateprocedure, by using the elements of the matrices of this game. But we don�t usea general method to solve every non-cooperative game and for every solution ofthe game.In order to solve the non-cooperative game as in the De�nition 1.11, we

suppose that we obtained the mixed strategies Pi, i = 1; n, and we write thepayo¤ functions Fi in the matriceal form Fi = Pi�

ti, where �i = [Fi; : : : ; Fi] is

a row matrix with mi components that are equal with Fi.

38

Page 39: GamesTheorySB,ESA,etc

We remind that Ji is a row vector with mi components all equal with 1.So, by the given de�nition, it results that we can write PiHiP

ti= Pi�

ti, hence

Pi(HiPi � �ti) = 0 where Pi � 0, i = 1; n and HiPti� �ti � 0.

If the j-component of the vector HiPti��ti is positive, then by multiplying,

on the left, with the vector P �i , with all its components equal with 0, except thej-component that it is equal with 1, it results that F �i = P �i HiP

ti> Fi. But

this is in opposite with De�nition 1.11 which shows us that Fi is the maximumvalue of the expression PiHiP

ti, for �xed Pi.

For every values of the probabilities pisi , 0 � pisi � 1, that is for the solutionthat gives us the maximum too, the maximum value of the payo¤ function Fiis obtained (between others values) for a strategy si for which

Hi(si)Pti= max

s0i

Hi(s0i )P

ti:

Here s0i is an arbitrary strategy. We denote Hi(si)Pti, respectively Hi(s

0i )P

ti

the element with row index si, respectively s0i of the matrix HiPti.

By introducing a row matrix Ti with independent non-negative variablestisi , si = 1;mi, Ti = [ti1 ; : : : ; timi ], for every player i, we can write a matricealequation that is with the inequation HiP

ti��ti � 0 equivalent: HiPi��ti+T ti =

0, or �ti �HiPti= T ti .

We haveTheorem 1.3. The determination of the equilibrium points of a non-

cooperative game consists in solving, in non-negative numbers, of the system ofmultilinear equations: PiJ ti = 1, HiP

ti��ti+T ti = 0, PiT ti = 0, where i = 1; n.�

Remark 1.32. We consider that the unknown real values Fi have beenwritten as di¤erence between two non-negative values F 0i and F

00i , Fi = F 0i �F 00i ,

in order to have all the unknowns as non-negative numbers. �Remark 1.33. To solve the problem formulated by Theorem 1.3, we can

apply a method for solving the systems of equations and inequations with anarbitrary degree in non-negative numbers. Such a method can be the completeelimination method. �Remark 1.34. Because the determination of the equilibrium points of a

non-cooperative game consists in solving of a system of the multilinear equa-tions, we can call this theory as "the theory of multilinear games". �From the previous presentation we don�t obtain that the solution is e¤ective

and it is nonempty.So, the following Nash�s theorem is important:Theorem 1.4. Every non-cooperative game has nonempty solution.We don�t present here the proof of this theorem.Example 1.21. Because of Theorem 1.3, the problem given in Example

1.20 is with the problem of solving in non-negative numbers P1 � 0, P2 � 0,T1 � 0, T2 � 0 of a system with multilinear equations equivalent:

P1Jt1 = 1; H1P

t1� �t1 + T

t1 = 0; P1T

t1 = 0

P2Jt2 = 1; H2P

t2� �t2 + T t2 = 0; P2T

t2 = 0;

39

Page 40: GamesTheorySB,ESA,etc

whereP1 = [p11; p12]; P2 = [p21; p22];

P1 = P2; P2 = P1; J1 = J2 = [1; 1]

T1 = [t11; t12]; T2 = [t21; t22];

H1 =

�1 �1�1 1

�; H2 =

��1 11 �1

�;

�1 = [F01 � F 001 ; F 01 � F 001 ];�2 = [F 02 � F 002 ; F 02 � F 002 ]

and

F 01 � 0; F 001 � 0; F 02 � 0; F 002 � 0; F1 = F 01 � F 001 ; F2 = F 02 � F 002

or in the developed form:

p11 + p12 = 1; p21 + p22 = 1;

p21 � p22 � F 01 + F 001 + t11 = 0; � p21 + p22 � F 01 + F 001 + t12 = 0;

�p11 + p12 � F 02 + F 002 + t21 = 0; p11 � p12 � F 02 + F 002 + t22 = 0;

p11t11 = 0; p12t12 = 0; p21t21 = 0; p22t22 = 0:

By solving this system with complete elimination method we obtain the samesolution as that obtained by the private procedure given in Example 1.20. �Remark 1.35. Because of the non-negativity of the unknowns we have the

following equivalences:

p11t11 + p12t12 = 0 ) p11t11 = 0; p12t12 = 0

p21t21 + p22t22 = 0 ) p21t21 = 0; p22t22 = 0

and so the equation PiTti = 0 can be replaced by mi equations of the form

pisitisi = 0, si = 1;mi, for every i, i = 1; n. �

1.15 The establishing of the equilibrium points of a bi-matrix game

De�nition 1.12. The non-cooperative game for two players is called bi-matrixgame. �Such a game let us to solve it easily. The problem given by Theorem 1.2,

because P1 = P2 and P2 = P1, can be decomposed in three problems that areindependent. The subproblem (41) consists in solving in non-negative numbersP2 of a system with linear equations8<: P2J

t2 = 1

H1Pt2 � �t1 + T t1 = 0;

(41)

40

Page 41: GamesTheorySB,ESA,etc

the subproblem (42) consists in solving in non-negative numbers P1 of thesystem of equations

8<: P1Jt1 = 1

H2Pt1 � �t2 + T t2 = 0;

(42)

and both subproblems can be solved by simplex method. Because the generalsolution is a linear convex combination of the basic solutions, it results that wemust select those basic solutions (P1; P2) for which it is veri�ed the subproblem(43) too, that is given by the system of equations

8<: P1Tt1 = 0

P2Tt2 = 0:

(43)

If for an arbitrary index s1, 1 � s1 � m1, the unknown t1s1 is a componentof a basic solution of subproblem (41) and t1s1 6= 0 (t1s1 = 0 when there isdegenerate case), then p1s1 = 0. So, in all cases t1s1 6= 0 we have p1s1 = 0.Similarly, if t2s2 6= 0 then it results p2s2 = 0, the property that let us to �ndthe solution which verify the system (43).The general solution can be obtained by linear convex combination of all

basic solutions P1 corresponding to a �xed P2 and linear convex combination ofall basic solutions P2, which correspond to a P1.Example 1.22. The problem given in Example 1.21 refers to a bi-matrix

game. The three systems are the following:8>>>><>>>>:p21 + p22 = 1

p21 � p22 � F 01 + F 001 + t11 = 0

�p21 + p22 � F 01 + F 001 + t12 = 0

(410)

8>>>><>>>>:p11 + p12 = 1

�p11 + p12 � F 02 + F 002 + t21 = 0

p11 � p12 � F 02 + F 002 + t22 = 0

(420)

p11t11 = 0; p12t12 = 0; p21t21 = 0; p22t22 = 0: (430)

To subproblem (410) it corresponds the simplex matrix given below. Therow corresponding to the objective function (that will be minimized) is equal 0.

S1 =

26641 1 0 0 0 0 11 �1 �1 1 1 0 0�1 1 �1 1 0 1 00 0 0 0 0 0 0

377541

Page 42: GamesTheorySB,ESA,etc

We obtain the following basic solutions

X11 =

�1

2;1

2; 0; 0; 0; 0

�; X12 = [1; 0; 1; 0; 0; 2]; X13 = [0; 1; 1; 0; 2; 0]:

Here, we use the symbol X to have a uniformized notation of the unknowns,

x1 = p21; x2 = p22; x3 = F 01; x4 = F 001 ; x5 = t11; x6 = t12:

Such uniformizations will be used in what follows every time when they areuseful to us. To subproblem (420) it corresponds the following simplex matrix:

S2 =

26641 1 0 0 0 0 1�1 1 �1 1 1 0 01 �1 �1 1 0 1 00 0 0 0 0 0 0

3775and it has the basic solutions

X21 =

�1

2;1

2; 0; 0; 0; 0

�; X22 = [1; 0; 1; 0; 2; 0]; X23 = [0; 1; 1; 0; 0; 2]:

We denote X 0ij , i = 1; 2, j = 1; 3 the vectors obtained by omission of com-

ponents F 0i , F00i , i = 1; 2. We obtain

X 011 =

�1

2;1

2; 0; 0

�; X 0

12 = [1; 0; 0; 2]; X 013 = [0; 1; 2; 0]

X 021 =

�1

2;1

2; 0; 0

�; X 0

22 = [1; 0; 2; 0]; X 023 = [0; 1; 0; 2]:

To establish the pairs of solutions (X 01i; X

02j) that would be the solutions

of bi-matrix game it must satisfy the condition: t1s1 6= 0 ) p1s1 = 0 andt2s2 6= 0) p2s2 = 0.We observe that there exists only a solution: P1 = P2 =

�12 ;

12

�, obtained for

t11 = t12 = t21 = t22 = 0 and F 01 = F 002 = F 02 = F 002 = 0. So F1 = F2 = 0. �

1.16 The establishing of equilibrium points of an antago-nistic game

De�nition 1.13. We call antagonistic game a bi-matrix game with the bi-dimensional matrices H1 and H2 for which the following relationship is satis�ed:H1 +H

t2 = 0, where 0 is the zero matrix. �

Because every equality is with two inequality equivalent, the systems (41),(42) and (43) from 1.15 can be written as:

42

Page 43: GamesTheorySB,ESA,etc

8>>>><>>>>:H1P

t2 � �t1 � 0

�J2P t2 � �1

J2Pt2 � 1

(44)

8>>>><>>>>:P1H1 � �2 � 0

P1Jt1 � 1

�P1J t1 � �1

(45)

8<: P1(H1Pt2 � �t1) = 0

(P1H1 � �2)P t2 = 0(46)

where �1 contains as elements one and the same value F1, and �2 containsone and the same value �F2.From subproblem (46) it results that P1�t1 = �2P

t2 , hence �F2 = F1. We

can consider these values as minimax values (the minimum of some maximvalues) obtained by minimization of function F1 (the maximum is equal in�nite)respectively maximin (the maximum of some minimum values) obtained bymaximization of function �F2 = F1 (the minimum is equal in�nite). Addingto system (44) the function F1 = F 01 � F 001 and to system (45) the function�F2 = �F 02 + F 002 , it results two linear programming problems, that are dualproblems.Certainly, we can use the simpli�ed notation F = F1 = �F2 and we can

consider that we determine F =MIN by using the system (44) and F =MAXby using the system (45). So, at least a private solution of the antagonistic gamecan be obtained by solving only one of systems (44), (45). The antagonistic gamemay have, as a bi-matrix game, another solutions which result by solving thesystems (41), (42) and (43), by setting �F2 = F1 = F .Because of the symmetry of the systems (44) and (45) and setting F =

F1 = �F2, it results the following theorem relative to antagonistic games (vonNeumann-Morgenstern theorem):

Theorem 1.5. The minimum with respect to P2 of the maximum (minimax)of the function F (P1; P2) with respect to P1, for �xed P2, is equal to themaximum with respect to P1 of the minimum (maximin) of the function

F (P1; P2) with respect to P2, for �xed P1, namely

minP2 maxP1 F (P1; P2) = maxP1 minP2 F (P1; P2): (47)

Remark 1.36. In the case of a bi-matrix game, as a generalization ofthe condition which appears in the antagonistic game, we can formulate thequestion: for which solution (P1; P2) the function F1+F2 reaches the minimumvalue?

43

Page 44: GamesTheorySB,ESA,etc

This formulation leads us to the problem of cooperation: when there iscooperation and when there isn�t? For the antagonistic game we have F1+F2 =0.Example 1.23. The bi-matrix game given in Example 1.22 is an antag-

onistic game. Because it has only a solution, this solution can be obtained ifwe minimize the function F = F1, namely F = F 01 � F 001 , supposing that weuse, in order to solve the system (44), by replacing the row that contains onlyzeros in the simplex matrix S1 by the corresponding to the function to minimize[0; 0; 1;�1; 0; 0; 0; 0]. So, we obtain the simplex matrix:2664

1 1 0 0 0 0 11 �1 �1 1 1 0 0�1 1 �1 1 0 1 00 0 1 �1 0 0 0

3775that, by reducing leads us to the same solution P1 = P2 =

�12 ;

12

�for which

F 01 = F 002 = 0. So FMIN = 0.

1.17 Applications in economics

In this sequel we give some applications of games in economics. We will beevaluate the payo¤ function for each player, which will be dependent of theplayer�s strategies. Here the sets of strategies are real intervals.

1.17.1 Cournot model of duopoly [21]

We consider a very simple version of Cournot�s model. Let q1 and q2 denote thequantities of a homogeneous product, produced by �rms 1 and 2, respectively.Let P (Q) = a�Q be the market-clearing price when the aggregate quantity onthe market is Q = q1 + q2. Hence we have

P (Q) =

8<: a�Q; for Q < a

0; for Q � a:

Assume that the total cost to �rm i of producing quantity qi is Ci(qi) = cqi.That is, there are no �xed costs and the marginal cost is constant at c, wherewe assume c < a. Suppose that the �rms choose their quantities simultaneously.We �rst translate the problem into a "continuous" game. For this, we specify:the players in the game (the two �rms), the strategies available to each player(the di¤erent quantities it might produce), the payo¤ received by each player foreach combination of strategies that could be chosen by the players (the �rm�spayo¤ is its pro�t). We will assume that output is continuously divisible andnegative outputs are not feasible. Thus, each �rm�s strategy space is Si = [0;1),the nonnegative real numbers, in which case a typical strategy si is a quantitychoice, qi � 0. Because P (Q) = 0 for Q � a, neither �rm will produce a

44

Page 45: GamesTheorySB,ESA,etc

quantity qi > a. The payo¤ to �rm i, a function of the strategies chosen by itand by the other �rm, its pro�t, can be written as

�i(qi; qj) = qi[a� (qi + qj)� c]:

How we know an equilibrium point (Nash equilibrium) is the pair (q�1 ; q�2)

where q�i , for each �rm i, solves the optimization problem

max0�qi<1

�i(qi; q�j ) = max

0�qi<1qi[a� (qi + q�j )� c]:

Assuming q�j < a� c (as will be shown to be true), the �rst order condition for�rm i�s optimization problem is necessary and su¢ cient

qi =12 (a� q

�j � c): (48)

Thus, if the quantity pair (q�1 ; q�2) is to be a Nash equilibrium, the �rm�s

quantity choices must satisfy

q�1 =1

2(a� q�2 � c); q�2 =

1

2(a� q�1 � c):

Solving this pair of equations yields q�1 = q�2 =a�c3 , which is indeed less than

a� c, as assumed.The intuition behind this equilibrium is simple.Each �rm would of course like to be a monopolist in this market, in which

case it would choose qi to maximize �i(qi; 0) = qi(a�qi�c), it would produce themonopoly quantity qm = a�c

2 , and earn the monopoly pro�t �i(qm; 0) =(a�c)24 .

Given that there are two �rms, aggregate pro�ts for the duopoly would bemaximized by setting the aggregate quantity q1 + q2 equal to the monopolyquantity qm, as would occur if qi =

qm2 for each i, for example. The problem

with this arrangement is that each �rm has an incentive to deviate: becausethe monopoly quantity is low, the associated price P (qm) is high, and at thisprice each �rm would like to increase its quantity, in spite of the fact that suchan increase in production drives down the market-clearing price. To see thisformally, use (48) to check that qm

2 isn�t �rm 2�s best response to the choice ofqm2 by �rm 1.In the Cournot equilibrium, in contrast, the aggregate quantity is higher, so

the associated price is lower, so the temptation to increase output is reduced �reduced by just enough that each �rm is just deterred from increasing its outputby the realization that the market-clearing price will fall.Remark 1.37. Rather than solving for Nash equilibrium in the Cournot

game algebraically, one could instead proceed graphically, using the best re-sponse to a �rm:

R2(q1) =12 (a� q1 � c) ��rm 2�s best response, and

R1(q2) =12 (a� q2 � c) ��rm 1�s best response.

A third way to solve for this Nash equilibrium is to apply the process ofiterated elimination of strictly dominated strategies (see [7]). �

45

Page 46: GamesTheorySB,ESA,etc

1.17.2 Bertrand model of duopoly [21]

This Bertrand�s model is based on suggestion that �rms actually choose prices,rather than quantities as in Cournot�s model. The Bertrand�s model is a dif-ferent game than Cournot�s model because: the strategy spaces are di¤erent,the payo¤ functions are di¤erent. Thus we obtain other equilibrium point, butthe equilibrium concept used is the Nash equilibrium de�ned in the previoussections.We consider the case of di¤erentiated products. If �rms 1 and 2 choose prices

p1 and p2, respectively, the quantity that consumers demand from �rm i is

qi(pi; pj) = a� pi + bpj ;

where b > 0 re�ects the extent to which �rm i�s product is a substitute for�rm j�s product. This is an unrealistic demand function because demand for�rm i�s product is positive even when �rm i charges an arbitrarily high price,provided �rm j also charges a high enough price. We assume that there are no�xed costs of production and that marginal costs are constant at c, where c < a,and that the �rms act simultaneously (choose their prices). We translate theeconomic problem into a non-cooperative game. There are again two players.This time, however, the strategies available to each �rm are the di¤erent pricesit might charge, rather than the di¤erent quantities it might produce. We willassume that negative prices are not feasible but that any non-negative pricecan be charged �there is no restriction to prices denominated in pennies. Thuseach �rm�s strategy space can again be represented as Si = [0;1), and a typicalstrategy si is now a price choice, pi � 0.We will again assume that the payo¤ function for each �rm is just its pro�t.

The pro�t to �rm i when it chooses the price pi and its rival choose the pricepj is

�i(pi; pj) = qi(pi; pj)(pi � c) = (a� pi + bpj)(pi � c):Thus, the price pair (p�1; p

�2) is Nash equilibrium if, for each �rm i, p�i solves

the problem

max0�pi<1

�i(pi; p�j ) = max

0�pi<1(a� pi + bp�j )(pi � c):

The solution to �rm i�s optimization problem is

p�i =1

2(a+ bp�j + c):

Therefore, if the price pair (p�1; p�2) is to be a Nash equilibrium, the �rm�s

price choices must satisfy

p�1 =1

2(a+ bp�2 + c) and p�2 =

1

2(a+ bp�1 + c):

Solving this pair of equations yields

p�1 = p�2 =a+ c

2� b :

46

Page 47: GamesTheorySB,ESA,etc

1.17.3 Final-o¤er arbitration [6]

Many public-sector workers are forbidden to strike; instead, wage disputes aresettled by binding arbitration. Many other disputes including medical malprac-tice cases and claims by shareholders against their stockbrokers, also involvearbitration. The two major forms of arbitration are conventional and �nal-o¤erarbitration. In �nal-o¤er arbitration, the two sides make wage o¤ers and thenthe arbitrator piks one of the o¤ers as the settlement. In conventional arbitra-tion, in contrast, the arbitrator is free to impose any wage as the settlement.We now derive the Nash equilibrium wage o¤ers in a model of �nal-o¤er

arbitration.Suppose the parties to the dispute are a �rm and a union and the dispute

concerns wages. First, the �rm and the union simultaneously make o¤ers, de-noted by wf and wu, respectively. Second, the arbitrator chooses one of the twoo¤ers as the settlement. Assume that the arbitrator has an ideal settlement shewould like to impose, denoted by x. Assume, further that, after observing theparties�o¤ers, wf and wu, the arbitrator simply chooses the o¤er that is closerto x: provided that wf < wu, the arbitrator chooses wf if x <

wf+wu2 , chooses

wu if x >wf+wu

2 and chooses wf or wu if x =wf+wu

2 . The arbitrator knows xbut the parties do not. The parties believe that x is randomly distributed ac-cording to a probability distribution denoted by F , with associated probabilitydensity function denoted by f . Thus, the parties believe that the probabilitiesPfwf choseng and Pfwu choseng depend of arbitrator�s behavior, and canbe expressed as

Pfwf choseng = P

�x <

wf + wu2

�= F

�wf + wu

2

�and

Pfwu choseng = 1� F�wf + wu

2

�:

Thus, the expected wage settlement is

wfPfwf choseng+ wuPfwu choseng = wfF

�wf + wu

2

�+

+wu

�1� F

�wf + wu

2

��:

We assume that the �rm wants to minimize the expected wage settlementimposed by the arbitrator and the union wants to maximize it.If the pair of o¤ers (w�f ; w

�u) is to be a Nash equilibrium of the game between

the �rm and the union, w�f must solve the optimization problem

minwf

�wfF

�wf + w

�u

2

�+ w�u

�1� F

�wf + w

�u

2

���

47

Page 48: GamesTheorySB,ESA,etc

and w�u must solve the optimization problem

maxwu

�w�fF

�w�f + wu

2

�+ wu

�1� F

�w�f + wu

2

���:

Thus, the wage-o¤er pair (w�f ; w�u) must solve the �rst-order conditions for

these optimization problems

(w�u � w�f ) 12f�w�f+w

�u

2

�= F

�w�f+w

�u

2

�and (49)

(w�u � w�f ) 12f�w�f+w

�u

2

�= 1� F

�w�f+w

�u

2

�:

It result

F�w�f+w

�u

2

�= 1

2 ; (50)

that is, the average of the o¤ers must equal the median of the arbitrator�spreferred settlement. Substituting (50) into either of the �rst-order conditionsthen yields

w�u � w�f = 1

f

�w�f+w�u2

� : (51)

Remark 1.38. Suppose that the arbitrator�s preferred settlement is nor-mally distributed with mean m and variance ��, in which case the densityfunction is given by

f(x) =1

�p2�e�

(x�m)2

2�2 ; m; � 2 R; � > 0:

We know that, in the case of normal distribution, the median of the distri-bution equals the mean m of the distribution. Thus, (50) and (51) become:

wf + wu2

= m; w�u � w�f =1

f(m)= �

p2�;

and the Nash equilibrium o¤ers are

w�u = m+ �

r�

2; w�f = m� �

r�

2: �

48

Page 49: GamesTheorySB,ESA,etc

1.17.4 The problem of the commons [9]

Consider the n farmers in a village. Each summer, all the farmers graze theirgoats on the village green. Denote the number of goats the ith farmer ownsby gi and the total number of goats in the village by G = g1 + g2 + � � � + gn.The cost of buying and caring for a goat is c, independent of how many goats afarmer owns. The value to a farmer of grazing a goat on the green when a totalof G goats are grazing is v(G) per goat. Since a goat needs at least a certainamount of grass in order to survive, there is a maximum number of goats thatcan be grazed on the green, Gmax : v(G) > 0 for G < Gmax but v(G) = 0 forG � Gmax. Also since the �rst few goats have plenty of room to graze, addingone more does little harm to those already grazing, but when so many goats aregrazing that they are all just barely surviving, that is G is just below Gmax,then adding one more dramatically harms the rest. Formally: for G < Gmax,v0(G) < 0 and v00(G) < 0.During the spring, the farmers simultaneously choose how many goats to

own. Assume goats are continuously divisible. A strategy for farmer i is thechoice of a number of goats to graze on the village green, gi. Assuming thatthe strategy space is [0;1) covers all the choices that could possibly be ofinterest to the farmer, [0; Gmax) would also su¢ ce. The payo¤ to farmer i fromgrazing gi goats, when the numbers of goats grazed by the other farmers are(g1;; gi�1; gi+1; : : : ; gn); is

giv(g1 + � � �+ gi�1 + gi + gi+1 + � � �+ gn)� cgi: (52)

Thus, if (g�1 ; : : : ; g�n) is to be a Nash equilibrium then, for each i, g�i must

maximize (52) given that the other farmers choose (g�1 ; : : : ; g�i�1; g

�i+1; : : : ; g

�n).

The �rst-order condition for this optimization problem is

v(gi + g�1 + � � �+ g�i�1 + g�i+1 + � � �+ g�n) + (53)

+giv0(gi + g

�1 + � � �+ g�i�1 + g�i+1 + � � �+ g�n)� c = 0

Substituting g�i into (53), summing over all n farmer�s �rst-order conditions,and then dividing by n, yields

v(G�) + 1nG

�v0(G�)� c = 0; (54)

where G� = g�1 + � � �+ g�n.The �rst-order condition (52) re�ects the incentives faced by a farmer who is

already grazing gi goats but is considering adding one more, or a tiny fraction ofone more. The value of the additional goat is v(gi+g�1+� � �+g�i�1+g�i+1+� � �+g�n)and its cost is c. The harm to the farmer�s existing goats is v0(gi + g�1 + � � � +g�i�1+ g

�i+1+ � � �+ g�n) per goat, or giv0(gi+ g�1 + � � �+ g�i�1+ g�i+1+ � � �+ g�n) in

total. The common resource is over utilized because each farmer considers only

49

Page 50: GamesTheorySB,ESA,etc

his her own incentives, not the e¤ect of his or her actions on the other farmers,hence the presence of G�v0(G�)=n in (54).Remark 1.39. The social optimum, denoted by G��, solves the problem

max0�G<1Gv(G)�Gc, the �rst-order condition for which is

v(G��) +G��v0(G��)� c = 0:

We have G� > G��. �

1.18 Exercises and problems solved

1. Let be a zero-sum two-person game with the payo¤ matrix

H1 = A =

26649 3 16 5 83 4 106 5 6

3775 :Which is the payo¤ matrix of player 2? What strategies has the player 1

and 2, respectively?Solution. The payo¤ matrix of player 2 is

H2 =

24 �9 �6 �3 �6�3 �5 �4 �5�1 �8 �10 �6

35 ;because H1 +H

t2 = O4;3.

The player 1 has four strategies, because the matrix A has four rows. Theplayer 2 has three strategies, because in the matrix A there are three columns.2. Two players write independently, one of the numbers 1, 2 or 3. If they

have written the same number then the player 1 pays to player 2 equivalentin unities monetary of this number. In the contrary case the player 2 pays toplayer 1 this number of unities monetary that he has chosen. Which is thepayo¤ matrix of this game?Solution. Easily we get that the payo¤ matrix of player 1 is

A =

24 �1 1 12 �2 23 3 �3

35 :3. What game in previous problems has the saddle point?Solution. For the �rst game we have

v1 = max1�i�4

min1�j�3

aij = max(1; 5; 3; 5) = 5;

v2 = min1�j�3

max1�i�4

aij = min(9; 5; 10) = 5:

50

Page 51: GamesTheorySB,ESA,etc

How v1 = v2 = 5 it results that the �rst game has saddle point. It is easilyto verify that (2,2) and (4,2) are both saddle points because a22 = a42 = v = 5.Thus i� = 2, i�� = 4 are optimal strategies of player 1, and j� = 2 is the

optimal strategy of player 2.For the second game we have

v1 = max1�i�4

min1�j�3

aij = max(�1;�2;�3) = �1;

v2 = min1�j�3

max1�i�4

aij = min(3; 3; 2) = 2:

Thus, the second game hasn�t a saddle point in the sense of pure strategiesbecause v1 = �1 < 2 = v2.4. Which are the expected payo¤s of player 1 in the previous games?Solution. For the �rst game, let X = (x1; x2; x3; x4), Y = (y1; y2; y3) be

the mixed strategies of players 1 and 2, respectively. Then the expected payo¤of player 1 is

4Xi=1

3Xj=1

aijxiyj = 9x1y1 + 3x1y2 + x1y3 + 6x2y1 + 5x2y2 + � � �+ 6x4y3:

For the second game, let X = (x1; x2; x3), Y = (y1; y2; y3) be the mixedstrategies of players 1 and 2, respectively. Then the expected payo¤ of player 1is

3Xi=1

3Xj=1

aijxiyj = �x1y1 + x1y2 + x1y3 + 2x2y1 � 2x2y2 + 2x2y3+

+3x3y1 + 3x3y2 � 3x3y3:5. Using the iterated elimination of strictly dominated strategies solve the

matrix game with the payo¤ matrix

A =

24 0 �1 �11 0 �11 1 0

35 :Solution. In this matrix A the elements of �rst row is smaller than the

corresponding elements of the third row. Consequently, the player 1 will neveruse his �rst strategy. The �rst row will be eliminated. We obtain the payo¤matrix

A0 =

�1 0 �11 1 0

�:

Now, in this matrix A0 each element of �rst column is greater than thecorresponding element of the third column. Thus, the �rst strategy of player 2will never be included in any of his optimal mixed strategies, therefore, the �rstcolumn of the matrix A0 can be deleted to obtain

A00 =

�0 �11 0

�:

51

Page 52: GamesTheorySB,ESA,etc

Similarly, we obtain successive,

A000 = [1 0] and AIV = [0]:

Thus, the optimal (pure) strategies are X� = (0; 0; 1), Y � = (0; 0; 1) andthe value of game is v = 0. We have, actually, a saddle point (i�; j�) = (3; 3),because a33 = v = 0.6. Find the optimal strategies of the following matrix game with the payo¤

matrix

a) A =

�2 01 3

�; b) A =

�1 22 0

�; c) A =

�1 �1�1 1

�.

Solution. These games are 2� 2 matrix game. Thus, we can use the mixedstrategies X� = (p; 1� p), Y � = (q; 1� q) where

p� =d� c

a+ d� b� c ; q� =d� b

a+ d� b� c ; v =ad� bc

a+ d� b� c :

a) We obtain

p� =3� 1

2 + 3� 0� 1 =1

2; q� =

3� 02 + 3� 0� 1 =

3

4;

v =2:3� 0:1

2 + 3� 0� 1 =3

2;

hence X� =�12 ;

12

�, Y � =

�34 ;

14

�, v = 3

2 :b) We have

p� =0� 2

1 + 0� 2� 2 =2

3; q� =

0� 21 + 0� 2� 2 =

2

3;

v =1:0� 2:2

1 + 0� 2� 2 =4

3;

hence X� =�23 ;

13

�, Y � =

�23 ;

13

�, v = 4

3 .c) We obtain

p� =1� (�1)

1 + 1� (�1)� (�1) =1

2; q� =

1

2; v =

1:1� (�1)(�1)4

= 0;

hence X� = Y � =�12 ;

12

�, v = 0.

7. Solve the problem 6 with the procedure described in the Remark 1.24(the Williams method).Solution. Let X = (x1; x2), Y = (y1; y2) be the mixed strategies for players

1 and 2, respectively. Here x1+ x2 = 1 and x1x2= jc�dj

ja�bj , y1+ y2 = 1,y1y2= jd�bj

jc�aj .

a) We obtain x1 + x2 = 1, x1x2 =j1�3jj2�0j = 1, hence x1 = x2, x1 = x2 =

12 ,

respectively y1 + y2 = 1,y1y2= j3�0j

j1�2j = 3, hence 3y2 = y1, y1 = 34 , y2 =

14 . Thus

X� =�12 ;

12

�, Y � =

�34 ;

14

�and

v� =

�1

2;1

2

��2 01 3

� �3=41=4

�=

52

Page 53: GamesTheorySB,ESA,etc

=

�3

2;3

2

��3=41=4

�=9

8+3

8=12

8=3

2:

b) We have x1 + x2 = 1, x1x2 =j2�0jj1�2j = 2, hence x1 = 2x2, x1 =

23 , x2 =

13 ,

respectively y1 + y2 = 1, y1y2 =j0�2jj2�1j = 2, y1 = 2

3 , y2 =13 . Thus X

� = Y � =�23 ;

13

�, and

v� =

�2

3;1

3

��1 22 0

� �2=31=3

�=

=

�4

3;4

3

��2=31=3

�=8

9+4

9=12

9=4

3:

c) We have x1 + x2 = 1, x1x2 =j�1�1jj1�(�1)j = 1, hence x1 = x2, x1 = x2 =

12 ,

respectively y1 + y2 = 1, y1y2= j1�(�1)j

j�1�1j = 1, y1 = y2, y1 = y2 =12 . Thus

X� = Y � =�12 ;

12

�, and

v� =

�1

2;1

2

��1 �1�1 1

� �1=21=2

�= (0; 0)

�1=21=2

�= 0:

8. Solve the problem 6 with the graphical method described for 2 � n andm� 2 matrix games.Solution. Let X = (x; 1 � x), Y = (y; 1 � y) be, respectively, the mixed

strategies for the players 1 and 2. The lines ac, bd, and ab, cd respectively, willbe represented in an illustrative �gure.

a) The payo¤ matrix is A =�2 01 3

�; thus we have the lines

Figure 1.7: The problem 8. a)

The intersection points have, respectively, the abscissa x = 12 ; y =

34 ; hence

X� = (12 ;12 ); Y

� = (34 ;14 ); v =

32 :

b) The payo¤ matrix is A =�1 22 0

�; thus we have the lines

Figure 1.8: The problem 8. b)

The intersection points have, respectively, the abscissa x = 23 ; y =

23 ; hence

X� = Y � = (23 ;13 ); v =

43 :

c) The payo¤ matrix is A =�1 �1�1 1

�; thus we have the lines

53

Page 54: GamesTheorySB,ESA,etc

Figure 1.9: The problem 8. c)

The intersection points have, respectively, the abscissa x = 12 ; y =

12 ; hence

X� = Y � = ( 12 ;12 ); v = 0:

9. Using the graphical method, solve the following matrix games with thepayo¤ matrices:

a) A =

�2 9 6 38 3 7 5

�; b) A =

24 5 69 41 8

35 :Solution. a) Let X = (x; 1�x) be the mixed strategy for the player 1. The

lines ae, df , cg, dh are represented in the following �gure.

Figure 1.10: The problem 9.a)

The abscissa x = x� of the point A0, and the value of A0B0 = v, can be eval-uated by solving the system of two linear equations corresponding to strategiestwo and four of player 2. The linear equations correspond to lines which passthrough points (1,3), (0,5) and respectively (1,9), (0,3) (see the heavy black linein �gure), that is 8<: 2x+ y = 5

6x� y = �3:

The solution is x = 14 , y =

92 . Hence X =

�14 ;

34

�, v = 9

2 .For the mixed strategy of player 2 we have Y = (q1; q2; q3; q4) and the equal-

ity �1

4;3

4

��2 9 6 38 3 7 5

�2664q1q2q3q4

3775 = 9

2

We obtain �26

4;18

4;27

4;18

4

�2664q1q2q3q4

3775 = 9

2;

hence26

4q1 +

18

4q2 +

27

4q3 +

18

4q4 =

9

2:

Thus we have 8>>>><>>>>:26q1 + 18q2 + 28q3 + 18q4 = 18;

q1 + q2 + q3 + q4 = 1;

qj � 0; j = 1; 4

54

Page 55: GamesTheorySB,ESA,etc

with the solution q1 = 0, q3 = 0, q2 = q, q4 = 1� q.The optimal strategies of player 2 are Y = (0; q; 0; 1� q), where q 2 [0; 1].b) Let Y = (y; 1 � y) be the mixed strategy for the player 2. The lines ab,

cd and ef are represented in the following �gure.

Figure 1.11: The problem 9. b)

The linear equations correspond to lines which pass through points (0,4),(1,9) and (0,8), (1,1) (see the heavy black line in �gure), that is,8<: �5y + z = 4

7y + z = 8:

The solution is y = 13 , z =

173 . Hence Y =

�13 ;

23

�, v = 17

3 .For mixed strategy of player 1 we have X = (p1; p2; p3) and the equality

(p1; p2; p3)

24 5 69 41 8

35� 1=32=3

�=17

3:

We obtain

(p1; p2; p3)

24 17=317=317=3

35 = 17

3;

hence17

3p1 +

17

3p2 +

17

3p3 =

17

3:

Thus we have 8>>>><>>>>:17p1 + 17p2 + 17p3 = 17

p1 + p2 + p3 = 1

pi � 0; i = 1; 3

For p1 = 0 we obtain p2 =712 ; p3 =

512 from the equality XA:1 = XA:2;

namely 5p1+9p2+ p3 = 6p1+4p2+8p3 = 173 . The optimal strategies of player

1 is X = (0; 712 ;512 ).

10. Solve the matrix game with the payo¤ matrix

A =

24 6 0 38 �2 34 6 5

35 :Solution. We use the method described for the 3 � 3 matrix game. Thus

we have, with the mixed strategy X = (x1; x2; x3);

XA�1 = 6x1 + 8x2 + 4x3; XA�2 = �2x2 + 6x3; XA�3 = 3x1 + 3x2 + 5x3:

55

Page 56: GamesTheorySB,ESA,etc

The equation of the line XA�1 = XA�2 is 6x1 + 8x2 + 4x3 = �2x2 + 6x3; or6x1 + 10x2 � 2x3 = 0. But x1 + x2 + x3 = 1, so we get 2x1 + 6x3 = 5.The equation of the line XA�2 = XA�3 is �2x2 + 6x3 = 3x1 + 3x2 + 5x3 or

3x1 + 5x2 � x3 = 0, that is 2x1 + 6x3 = 5.The equation of the line XA�3 = XA�1 is 3x1+3x2+5x3 = 6x1+8x2+4x3,

or 3x1 + 5x2 � x3 = 0, that is 2x1 + 6x3 = 5.We obtain only the equation 2x1 + 6x3 = 5, hence the solutions are x1 = p,

x2 =1�4p6 , x3 =

5�2p6 , p 2 [0; 1]. Thus X =

�p; 1�4p6 ; 5�2p6

�, p 2 [0; 1]. The

values XA�1, XA�2, XA�3 are28�4p6 , and this is maximum 14

3 when p = 0.Hence X� =

�0; 16 ;

56

�is optimal strategy of player 1.

For player 2, let Y = (y1; y2; y3) be a mixed strategy. We have A1�Y t =6y1 + 3y3, A2�Y t = 8y1 � 2y2 + 3y3, A3�Y t = 4y1 + 6y2 + 5y3.The equation of the line A1�Y t = A2�Y

t is 6y1 + 3y3 = 8y1 � 2y2 + 3y3 or2y1 � 2y2 = 0, hence y1 = y2.The equation of the line A2�Y t = A3�Y

t is 8y1�2y2+3y3 = 4y1+6y2+5y3,or 4y1 � 8y2 � 2y3 = 0, hence 2y1 � 4y2 = y3. This equation is 3y1 � 3y2 = 1and it means that the lines y1 = y2 and 3y1 � 3y2 = 1 are parallel.The equation of the line A3�Y t = A1�Y

t is 4y1 + 6y2 + 5y3 = 6y1 + 3y3, or2y1 � 6y2 � 2y3 = 0, hence y1 � 3y2 � y3 = 0. This equation is 2y1 � 2y2 = 1,and thus this line is parallel to another.The line 3y1 � 3y2 = 1 is essential, because the intersection of the regions

R1 and R2 is this line, and the region R2 = ;: So we must consider the valuesA1:Y

t and A1:Yt in the cases Y1 = ( 23 ;

13 ; 0), and Y2 =

�13 ; 0;

23

�. We get that

Y �1 =�23 ;

13 ; 0�, Y �2 =

�13 ; 0;

23

�are the optimal strategies for the player 2. Hence

Y � = �Y �1 + (1� �)Y �2 , � 2 [0; 1], is the solution for player 2 and also, v = 143 .

11. Using the linear programming problem, solve the following matrix gameswith the payo¤ matrices:

a) A =

�0 25 1

�; b) A =

24 6 0 3 58 �2 3 94 6 5 4

35 :Solution. We consider the linear programming problem (40)

[max]g = y01 + y02 + � � �+ y0n

nXj=1

aijy0j � 1; i = 1;m

y0j � 0; j = 1; n:

a) We have[max]g = y01 + y

02

2y02 � 1

5y01 + y02 � 1

y01; y02 � 0

56

Page 57: GamesTheorySB,ESA,etc

and so the simplex matrix is24 0 2 1 0 1

5 1 0 1 11 1 0 0 0

35!24 0 2 1 0 11 1=5 0 1=5 1=50 4=5 0 �1=5 �1=5

35!

!

24 0 1 1=2 0 1=21 0 �1=10 1=5 1=100 0 �2=5 �1=5 �3=5

35 :Thus gmax = 3=5 = 1=w ) w = 5=3, y01 = 1=10 ) y1 = 1=6, y02 = 1=2 )

y2 = 5=6, y03 = 0 ) y3 = 0, y04 = 0 ) y4 = 0, x01 = 2=5 ) x1 = 2=3,x02 = 1=5) x2 = 1=3. We have X� = (2=3; 1=3), Y � = (1=6; 5=6) and v = 5=3.b) The simplex matrix in this case is2664

6 0 3 5 1 0 0 1

8 �2 3 9 0 1 0 14 6 5 4 0 0 1 11 1 1 1 0 0 0 0

3775!

!

26640 3=2 3=4 �7=4 1 �3=4 0 1=41 �1=4 3=8 9=8 0 1=8 0 1=8

0 7 7=2 �1=2 0 �1=2 1 1=20 5=4 5=8 �1=8 0 �1=8 0 �1=8

3775!

!

26640 0 0 �23=14 1 �9=14 �3=14 1=71 0 1=2 31=28 0 3=28 1=28 1=7

0 1 1=2 �1=14 0 �1=14 1=7 1=14

0 0 0 �1=28 0 �1=28 �5=28 �3=14

3775We have gmax = 3=14 = 1=w, hence w = 14=3, y01 = 1=7 ) y1 = 2=3,

y02 = 1=14 ) y2 = 1=3, y03 = 0 ) y3 = 0, y04 = 0 ) y4 = 0, x01 = 0 ) x1 = 0,x02 = 1=28) x2 = 1=6, x03 = 5=28) x3 = 5=6.So, an optimal solution is X� = (0; 1=6; 5=6), Y �1 = (2=3; 1=3; 0; 0), v = 14=3.There exists another optimal solution because we have the matrix2664

0 0 0 �23=14 1 �9=14 �3=14 1=71 �1 0 15=16 0 5=28 �3=28 1=140 2 1 �1=7 0 �1=7 2=7 1=70 0 0 �1=28 0 �1=28 �5=28 �3=14

3775 :Thus y01 = 1=14 ) y1 = 1=3, y02 = 0 ) y2 = 0, y03 = 1=7 ) y3 = 2=3,

y04 = 0) y4 = 0, and so we have Y �2 = (1=3; 0; 2=3; 0).The optimal solution of matrix game is X� = (0; 1=6; 5=6), Y � = �Y �1 +(1�

�)Y �2 , � 2 [0; 1], where Y �1 = (2=3; 1=3; 0; 0), Y �2 = (1=3; 0; 2=3; 0), and v = 14=3.12. The payo¤ matrix in general representation. As technological

utilization, three �rms use water from the same source. Each �rm has twostrategies: the �rm build a station that makes water pure (strategy 1) or it uses

57

Page 58: GamesTheorySB,ESA,etc

water that isn�t pure (strategy 2). We suppose that if at most one �rm useswater which isn�t pure then the water that exists it is good to it and this �rmare not expenses. If at least two �rms uses water that isn�t pure, then every�rm that uses water loses 3 monetary unities (u.m.). By using the station thatmakes water pure, it costs 1 u.m. for the �rm that do it.Write the payo¤ matrix of this game.Solution. The payo¤ matrix is given in Table 1. Let us consider, for

example, the situation (1,2,2). The �rms 2 and 3 use water that isn�t pure. So,every �rm has 3 u.m. as a expense (negative payo¤). For the �rm 1 that hasthe station to do water pure, there is a expense equal 1 u.m. more.

Table 1Situation Payo¤ matrixs1 s2 s3 H1 H2 H3

1 1 1 -1 -1 -11 1 2 -1 -1 01 2 1 -1 0 -11 2 2 -4 -3 -32 1 1 0 -1 -12 1 2 -3 -4 -32 2 1 -3 -3 -42 2 2 -3 -3 -3

13. The payo¤matrix in bi-dimensional representation. Two facto-ries produce the same type of production A, respectively B in two assortmentA1 and A2, respectively B1 and B2. The products are interchangeable. Bymaking a test in advance, we obtain that the preferences given in percentagesare the following representation:

AnB B1 B2A1 40 90A2 70 20

The percentages given in the above table refer to the �rst factory (�rstproduction), the percentage for the second factory (second production) are thecomplementarities percentages (face from the total percentage 100%). Writethe payo¤ matrix.Solution.We have the general representation:

Situation Payo¤ matrixs1 s2 H1 H2

1 1 40 601 2 90 102 1 70 302 2 20 80

58

Page 59: GamesTheorySB,ESA,etc

which is with the following bi-dimensional representation equivalent:

H1 =

�40 9070 20

�; H2 =

�60 3010 80

�:

14. Solving of the bi-matrix game. We consider Problem 13 withthe following modi�cation of purchasing conditions: we remark that 50% fromthose buyers that buy the product A2, respectively B2, buy the product B2,respectively A2, too. By expressing the sales in absolute value, by considering1000 units that have been sale in the condition of the �rst version of the problem,we ask:1. to express the payo¤ matrix2. to solve the non-cooperative bi-matrix game.Solution. We have the table:

Situation Payo¤ matrixs1 s2 H1 H2

1 1 400 6001 2 900 1002 1 700 3002 2 600 900

where in the situation (2,2) there are

600 = 200 +1

2� 800; 900 = 800 +

1

2� 200:

The bi-dimensional writing is:

H1 =

�400 900700 600

�; H2 =

�600 300100 900

�:

2. The corresponding simplex matrices are:

SA =

266664inj 1 2 3 4 51 1 1 0 0 1 =2 400 900 -1 1 0 �3 700 600 -1 1 0 �4 0 0 0 0 0 MIN

377775

SB =

266664inj 1 2 3 4 51 1 1 0 0 1 =2 600 300 -1 1 0 �3 100 900 -1 1 0 �4 0 0 0 0 0 MIN

377775By solving the linear programming problems we obtain the following solu-

tions:

59

Page 60: GamesTheorySB,ESA,etc

P p1 p2 p3 p4 p5 p6P1 0,55 0,45 463,64 0 0 0P2 1 0 600 0 0 500P3 0 1 900 0 600 0

Q q1 q2 q3 q4 q5 q6Q1 0,5 0,5 650 0 0 0Q2 1 0 700 0 300 0Q3 0 1 900 0 0 300

The value of the game for the �rst factory is FA = 650 and for the second isFB = 463; 54. These pairs of solutions (P;Q) are equilibrium points that verifythe condition: p4+i 6= 0 ) qi = 0 and q4+i 6= 0 ) pi = 0. We observe that thesingle equilibrium point is (P1; Q1): P1 = [0; 55; 0; 45], Q1 = [0; 5; 0; 5].15. Let us consider the game 13, that is a antagonistic game with constant

sum 100%.1. Solve the game.2. Write the structure matrices.Solution. 1. The simplex table corresponding to this game is:

1 2 3 4 51 1 1 0 0 1 =2 40 90 -1 1 0 �3 70 20 -1 1 0 �4 0 0 1 -1 0 MIN

By solving the linear programming problem we obtain: P = [0; 5; 0; 5], Q =[0; 7; 0; 3]. The value of the game is 55%.2. The structures matrices of the game are:

�A =

AnB B1 B2A1 14 13,5 27,5A2 24,5 3 27,5

38,5 16,5 55

�B =

BnA A1 A2B1 21 10,5 31,5B2 1,5 12 13,5

22,5 22,5 45

So, we can see that the syntetique situation expressed in percentages aboutthe structure of the types of production are the following:

A1: 27,5%, A2: 27,5%, B1: 31,5%, B2: 13,5%if the production of both factories is 100%. Because of antagonistic marketcompetition the second factory realizes a less sale that the �rst, that it is 45%from all sales.16. Relation between information and income. Let us consider Prob-

lem 15, by supposing that the second factory, at the moment of choosing itsstrategy, knows the strategy applied by the �rst factory.

60

Page 61: GamesTheorySB,ESA,etc

We ask:1. Write the matrix of game.2. Solve the game.3. Compare the results obtained here with those obtained by solving Problem

15 and interpret the di¤erence between these two solutions.Solution. 1. Because the second factory knows the strategy applied by the

�rst factory, it can apply another two strategies obtained by combination ofstrategies B1 and B2:strategy B1 respond to the strategy A1;strategy B2 respond to the strategy A2;strategy B2 respond to the strategy A1;strategy B1 respond to the strategy A2.We denote Q0 = [q01; q

02; q

03; q

04] the strategy of the second factory in agreement

with another four strategies to respond to two strategies P = [p1; p2] of the �rstfactory.We denote V 0 the value of the new game. The matrix of the game is given

by the following table:

AnB B1 B1 B2 B2B1 B2 B1 B2

A1 40 40 90 90A2 70 20 70 20

2. By elimination of the dominate column 3, the corresponding simplex tableis:

inj 1 2 3 4 5 61 1 1 1 0 0 1 =2 40 40 90 -1 1 0 �3 70 20 20 -1 1 0 �4 0 0 0 1 -1 0 MIN

Solving this linear programming problem we obtain: P = [1; 0], Q0 =�1[0; 1; 0; 0] + �2[0; 4; 0; 6; 0; 0], �1; �2 � 0, �1 + �2 = 1, v0 = 40%.3. To compare the results with those obtained in 15, we write the structure

matrices of the game (rows and columns for which the strategy is equal zerowill be empty).

�A =

AnB B1 B1B2 B2

A1 16�2 40�1 + 24�2 4016�2 40�1 + 24�2

�B =

BnA A1B1 : B1 24�2 24�2B1 : B2 60�1 + 36�2 60�1 + 36�2

60 60

61

Page 62: GamesTheorySB,ESA,etc

We remark a decreasing equal V 0 � V = �15% for the �rst factory and anincreasing equal 15% for the second factory, as a result of the fact that it ownsan information important to it.How is separated all production 100% of both factories?The �rst factory produces only the assortment A1 as 40% and the second

factory produces only the assortment B1, as 24�2% (inside the strategy B1 : B1),(60�1 + 36�2)% (inside the strategy B1 : B2), namely a total of 60%.

1.19 Exercises and problems unsolved

Let be a zero-sum two-person game with the payo¤ matrix

H1 = A =

24 3 6 9 610 6 1 84 5 3 5

35 :Which is the payo¤ matrix of player 2?What strategies have the player 1 and the player 2?2. (The Morra game) Two players show simultaneous one or two �ngers from

the left hand and in the same time yells the number of �ngers that the believethat shows the opponent. If a player forecasts the number of �ngers showedby the opponent, he receives so many unities monetary as much as �ngers theyshowed together. If the both players forecast or neither forecast no then neitherreceives nothing. Which is the payo¤ matrix of this game?3. What game in previous problems has the saddle point?4. Which are the expected payo¤s of player 1 in the previous games?5. Using the iterated elimination of strictly dominated strategies solve the

matrix game with the payo¤ matrix

A =

26641 �1 �2 03 0 2 44 5 1 52 3 �1 3

3775 :6. Find the optimal strategies of the following matrix game with the payo¤

matrix:

a) A =

�2 35 2

�; b) A =

�6 �14 5

�; c) A =

�2 43 1

�:

7. Solve the problem 6 with the Williams method.8. Solve the problem 6 with the graphical method for 2�n and m�2 matrix

games.9. Using the graphical method, solve the following matrix games with the

payo¤ matrices:

a) A =

�2 1 43 5 1

�; b) A =

26642 43 11 65 0

3775 :

62

Page 63: GamesTheorySB,ESA,etc

10. Solve the matrix game with the payo¤ matrix

A =

24 1 �1 �2�1 1 12 �1 0

35 :11. Using the linear programming problem solve the following matrix game

with the payo¤ matrix:

a) A =

24 2 3 0�1 8 �30 �1 2

35 ; b) A =

24 7 5 60 9 414 1 8

35 :12. A factory produces three types of production A: A1, A2, A3. To produce

one unit of product we use three types of materials: B: B1 �metal, B2 �woodenmaterial, B3 �plastic material. The expenses with pole materials in a unit ofproduction are given in the table:

BnA A1 A2 A3B1 4 4 6B2 3 5 3B3 5 2 4

Write the matrix of the game in general representation.13. Two branches have to do investments in four objectives. The strategy i

consists to �nance the objective i, i = 1; 4. In accordance to all considerations,the payo¤s of the �rst branch are given by the matrix:

A =

26640 1 �1 2�1 0 3 20 1 2 �12 0 0 0

3775 :We suppose that every branch materializes its payo¤ in agreement with

another one: that is what the �rst wins the second loses and what the �rst losesthe second wins.Write the matrix of the game in general representation.14. Let us consider two persons playing a bi-matrix non-cooperative game,

given by the matrices

A =

�1 73 4

�; B =

�1 87 8

�:

Solve the game.15. In order to get an economical and social development of a town, it

appears the problem to build or not to build two economical objectives. Thereare two strategies for the corresponding ministry and for the leaders of thetown: 1 �the building of �rst objective; 2 �the building of second objective.The people that represent the town may have two strategies: 1 � they agree

63

Page 64: GamesTheorySB,ESA,etc

with the proposal of Ministry; 2 � they don�t agree with it. The strategiesapply independent. The payo¤s are given by the matrices:

A =

��10 21 �1

�; B =

�5 �2�1 1

�.Solve the non-cooperative game.16. Let us consider Problem 12, and we ask:16.1. What are the percentages p1 : p2 : p3 that we have to make the supply

in advance (supply before to know the volume of the contracts for the nextperiod of time) with prime materials in order to obtain that the stock will besurely used and to ensure a maximum value of the production?16.2. Find a production plan corresponding to a total production of 4 mil-

lions u.m.17. Solve the antagonistic game given in Problem 13.

Answers

1. H2 =

2664�3 �10 �4�6 �6 �5�9 �1 �3�6 �8 �5

3775 ; three strategies for player 1 and for strategiesfor player 2.

2. H1 = A =

26640 2 �3 0�2 0 0 33 0 0 �40 �3 4 0

3775The rows are: L11; L12; L21; L22; where L11 means 1 �nger, 1 yells, L12�

1 �nger, 2 yells, L21� 2 �ngers, 1 yells, L22� 2 �ngers, 2 yells.3. v1 = maxmin aij = 3, v2 = minmax aij = 6, there isn�t saddle point, in

pure strategy, for �rst game; v1 = �2, v2 = 2, there isn�t saddle point, in purestrategy, for the second game.4.P3

i=1

P4j=1 aijxiyj = 3x1y1 + 6x1y2 + � � �+ 5x3y4;P4

i=1

P3j=1 aijxiyj = 2x1y2 � 3x1y3 � � � �+ 4x4y3.

5. X� = (0; 2=3; 1=3; 0), Y � = (0; 1=6; 5=6; 0), v = 5=3.6. a) X� = (1=4; 3=4), Y � = (3=4; 1=4), v = 1b) X� = (3=4; 1=4), Y � = (1=8; 7=8), v = 17=4c) X� = (3=4; 1=4), Y � = (1=2; 1=2), v = 5=2.9. a) X� = (1=2; 1=2), Y � = (3=4; 0; 1=4), v = 5=2.b) X� = (0; 1=3; 0; 2=3), Y � = (1=3; 2=3), v = 1=3.10. X� = (0; 3=5; 2=5), Y � = (2=5; 3=5; 0), v = 1=5.11. a) X� = (1=2; 0; 1=2), Y �1 = (1=2; 0; 1=2), Y

�2 = (0; 1=3; 2=3), v = 1

b) X� = (0; 7=12; 5=12), Y � = (0; 1=3; 2=3), v = 17=3.14. First solution: (P;Q), P = [1; 0], Q = [0; 1], FA = FB = 7.

64

Page 65: GamesTheorySB,ESA,etc

Second solution: (P;Q): P = [0; 1], Q = �1Q1 + �2Q2, Q1 = [0; 6; 0; 4],Q2 = [1; 0], �1 � 0, �2 � 0, �1 + �2 = 1, FA = 3; 4�1 + 3�2, FB = 8.15. (P;Q), P = [0; 33; 0; 67], Q = [0; 21; 0; 79], FA = �0; 57, FB = 0; 33.16. 16.1. 1:0:016.2. A1: a = 2680000�1 + 2000000�2 u.m.A2: b = 1320000�1 + 2000000�2 u.m.�1; �2 � 0; �1 + �2 = 1.17. P = [0; 3; 0; 11; 0; 26; 0; 33], Q = [0; 28; 0; 38; 0; 17; 0; 17].The �rst factory wins 0,56 u.m.

1.20 References

1. Blaga, P., Muresan, A.S., Lupas, Al., Applied mathematics, Vol. II, Ed.Promedia Plus, Cluj-Napoca, 1999 (In Romanian)2. Ciucu, G., Craiu, V., Stef¼anescu, A., Mathematical statistics and opera-

tional research, Ed. Did. Ped., Bucuresti, 1978 (In Romanian)3. Craiu, I., Mihoc, Gh., Craiu, V., Mathematics for economists, Ed. Sti-

inti�c¼a, Bucuresti, 1971 (In Romanian)4. Dani, E., Numerical methods in games theory, Ed. Dacia, ClujNapoca,

1983 (In Romanian)5. Dani, E., Muresan, A.S., Applied mathematics in economy, Lito. Univ.

Babes-Bolyai, Cluj-Napoca, 1981 (In Romanian)6. Faber, H., An analysis of �nal-o¤er arbitration, J. of Con�ict Resolution,

35, 1980, 683-7057. Gibbons, R., Games theory for applied economists, Princeton University

Press, New Jersey, 19928. Guiasu, S., Malita, M., Games with three players, Ed. Stiinti�c¼a, Bu-

curesti, 1973 (In Romanian)9. Hardin, G., The tragedy of the commons, Science, 162, 1968, 1243-124810. Muresan, A.S., Operational research, Lito. Univ., Babes-Bolyai, Cluj-

Napoca, 1996 (In Romanian)11. Muresan, A.S., Applied mathematics in �nance, banks and exchanges,

Ed. Risoprint, Cluj-Napoca, 2000 (In Romanian)12. Muresan, A.S., Blaga, P., Applied mathematics in economy, Vol. II, Ed.

Transilvania Press, Cluj-Napoca, 1996 (In Romanian)13. Muresan, A.S., Rahman, M., Applied mathematics in �nance, banks and

exchanges, Vol. I, Ed. Risoprint, Cluj-Napoca, 2001 (In Romanian)14. Muresan, A.S., Rahman, M., Applied mathematics in �nance, banks and

exchanges, Vol. II, Ed. Risoprint, Cluj-Napoca, 2002 (In Romanian)15. von Neumann, J., Morgenstern, O., Theory of games and economic

behavior (3 rd edn), Princeton University Press, New Jersey, 195316. Onicescu, O., Strategy of games with applications to linear programming,

Ed. Academiei, Bucuresti, 1971 (In Romanian)17. Owen, G., Game theory (2 nd edn), Academic Press, New York, 1982

65

Page 66: GamesTheorySB,ESA,etc

18. Schatteles, T., Strategically games and economic analysis, Ed. Stiinti�c¼a,Bucuresti, 1969 (In Romanian)19. Tirole, J., The theory of industrial organization, M I T Press, 198820. Wang, J., An inductive proof of von Neumann�s minimax theorem, Chi-

nese J. of Operations Research, 1 (1987), 68-7021. Wang, J., The theory of games, Clarendon Press, Oxford, 1988

2 Static games of incomplete information

In this chapter we consider games of incomplete information (Bayasiangames) that is, games in which at least one player is uncertain about anotherplayer�s payo¤ function. One common example of a static game of incompleteinformation is a sealed-bid auction: each bidder knows his own valuation for thegood being sold but doesn�t know any other bidder�s valuation; bids are submit-ted in sealed envelopes, so the players�move can be thought of as simultaneous.

2.1 Static Bayesian games and Bayesian Nash equilibrium

In this section we define the normal-form representation of a static Bayesian game and a Bayesian Nash equilibrium in such a game. Since these definitions are abstract and a bit complex, we introduce the main ideas with a simple example, namely Cournot competition under asymmetric information. Consider a Cournot duopoly model with inverse demand given by P(Q) = a − Q, where Q = q1 + q2 is the aggregate quantity on the market. Firm 1's cost function is C1(q1) = cq1. Firm 2's cost function is C2(q2), which has the probabilistic distribution

C2(q2) :  ( cL·q2   cH·q2 )
          ( 1 − θ    θ    ),

where cL < cH. Furthermore, information is asymmetric: firm 2 knows its cost function and firm 1's, but firm 1 knows its own cost function and only that firm 2's marginal cost c has the probabilistic distribution

c :  ( cL      cH )
     ( 1 − θ   θ  ).

This situation may arise when firm 2 is a new entrant to the industry, or has just invented a new technology. All of this is common knowledge: firm 1 knows that firm 2 has superior information, firm 2 knows that firm 1 knows this, and so on. Naturally, firm 2 may want to choose a different (and presumably lower) quantity if its marginal cost is high than if it is low. Firm 1, for its part, should anticipate that firm 2 may tailor its quantity to its cost in this way. Let q2*(c) denote firm 2's quantity choice as a function of its cost, that is

q2*(c) = { q2*(cL), if c = cL;  q2*(cH), if c = cH.   (41)

Let q1* denote firm 1's single quantity choice. If firm 2's cost is low, it will choose q2*(cL) to solve the problem

max_{q2} [(a − q1* − q2) − cL] q2.

Similarly, if firm 2's cost is high, q2*(cH) will solve the problem

max_{q2} [(a − q1* − q2) − cH] q2.

Firm 1 knows that firm 2's cost is low with probability 1 − θ and should anticipate that firm 2's quantity choice will be q2*(cL) or q2*(cH), depending on firm 2's cost. Thus firm 1 chooses q1* to solve the problem

max_{q1} (1 − θ)[(a − q1 − q2*(cL)) − c] q1 + θ[(a − q1 − q2*(cH)) − c] q1

so as to maximize expected profit. The first-order conditions for these three optimization problems are

q2*(cL) = (a − q1* − cL)/2,    q2*(cH) = (a − q1* − cH)/2,

and

q1* = [(1 − θ)(a − q2*(cL) − c) + θ(a − q2*(cH) − c)]/2.

Assume that these first-order conditions characterize the solutions to the earlier optimization problems. Then, the solutions to the three first-order conditions are

q2*(cL) = (a − 2cL + c)/3 − (θ/6)(cH − cL),

q2*(cH) = (a − 2cH + c)/3 + ((1 − θ)/6)(cH − cL),

and

q1* = [a − 2c + (1 − θ)cL + θcH]/3.

Compare q2*(cL), q2*(cH) and q1* to the Cournot equilibrium under complete information with costs c1 and c2. Assuming that the values of c1 and c2 are such that both firms' equilibrium quantities are positive, firm i produces qi* = (a − 2ci + cj)/3 in this complete-information case. In the incomplete-information case, in contrast, q2*(cH) is greater than (a − 2cH + c)/3 and q2*(cL) is less than (a − 2cL + c)/3. This occurs because firm 2 not only tailors its quantity to its cost but also responds to the fact that firm 1 cannot do so. If firm 2's cost is high, for example, it produces less because its cost is high but also produces more because it knows that firm 1 will produce a quantity that maximizes its expected profit and thus is smaller than firm 1 would produce if it knew firm 2's cost to be high.
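Since the closed-form quantities above follow from the three first-order conditions, they are easy to check numerically. The following sketch (in Python) uses hypothetical parameter values for a, c, cL, cH and θ, which are not taken from the text, and verifies that each formula satisfies its first-order condition.

    # Check of the asymmetric-information Cournot solution.
    # The parameter values are hypothetical, chosen only for this check.
    a, c, c_L, c_H, theta = 10.0, 2.0, 1.0, 3.0, 0.4

    q2_L = (a - 2*c_L + c) / 3 - theta * (c_H - c_L) / 6
    q2_H = (a - 2*c_H + c) / 3 + (1 - theta) * (c_H - c_L) / 6
    q1   = (a - 2*c + (1 - theta)*c_L + theta*c_H) / 3

    # First-order conditions: each right-hand side should reproduce the quantity.
    assert abs(q2_L - (a - q1 - c_L) / 2) < 1e-12
    assert abs(q2_H - (a - q1 - c_H) / 2) < 1e-12
    assert abs(q1 - ((1 - theta)*(a - q2_L - c) + theta*(a - q2_H - c)) / 2) < 1e-12
    print(q1, q2_L, q2_H)   # approximately 2.6, 3.2, 2.2 for these values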


2.2 Normal-form representation of static Bayesian games

Recall that the ensemble Γ = <I, {Si}, {Hi}, i ∈ I> is a non-cooperative game (see Definition 1.10), where Si is player i's strategy space and Hi is player i's payoff, hence Hi(s) = Hi(s1, s2, ..., sn) is player i's payoff when the players choose the strategies (s1, s2, ..., sn).

Remark 2.1. The non-cooperative game can also be described as Γ = <I, {Ai}, {Hi}, i ∈ I>, where Ai is player i's action space and Hi is player i's payoff, hence Hi(a) = Hi(a1, a2, ..., an) is player i's payoff when the players choose the actions a = (a1, a2, ..., an). In a simultaneous-move game of complete information a strategy for a player is simply an action, but in a dynamic game of complete information (a finitely or infinitely repeated game) a strategy can be different from an action. A player's strategy is a complete plan of action - it specifies a feasible action for the player in every contingency in which the player might be called upon to act. Hence, in a dynamic game a strategy is more complicated. □

To prepare for our description of the timing of a static game of incomplete information, we describe the timing of a static game of complete information as follows: (1) the players simultaneously choose actions (player i chooses ai from the feasible set Ai), and then (2) payoffs Hi(a1, a2, ..., an) are received. Now we want to develop the normal-form representation of a static Bayesian game, namely a simultaneous-move game of incomplete information.

The first step is to represent the idea that each player knows his own payoff function but may be uncertain about the other players' payoff functions. Let player i's possible payoff functions be represented by Hi(a1, a2, ..., an; ti), where ti is called player i's type and belongs to a set of possible types (or type space) Ti. Each type ti corresponds to a different payoff function that player i might have.

Given this definition of a player's type, saying that player i knows his own payoff function is equivalent to saying that player i knows his type. Likewise, saying that player i may be uncertain about the other players' payoff functions is equivalent to saying that player i may be uncertain about the types of the other players, denoted by t−i = (t1, ..., t_{i−1}, t_{i+1}, ..., tn). We use T−i to denote the set of all possible values of t−i, and we use the probability distribution pi(t−i|ti) to denote player i's belief about the other players' types, t−i, given player i's knowledge of his own type, ti.

Remark 2.2. In most applications the players' types are independent, in which case pi(t−i|ti) doesn't depend on ti, so we can write player i's belief as pi(t−i). □

Definition 2.1. The normal-form representation of an n-player static Bayesian game specifies the players' action spaces A1, A2, ..., An, their type spaces T1, T2, ..., Tn, their beliefs p1, p2, ..., pn, and their payoff functions H1, H2, ..., Hn. □

Remark 2.3. We use Γ = <I, {Ai}, {Ti}, {pi}, {Hi}, i ∈ I> to denote an n-player static Bayesian game. □

Remark 2.4. Player i's type, ti, is privately known by player i, determines player i's payoff function Hi(a1, a2, ..., an; ti), and is a member of the set of possible types Ti. Player i's belief pi(t−i|ti) describes i's uncertainty about the n − 1 other players' possible types, t−i, given i's own type, ti. □

Example 2.1. In the Cournot game the firms' actions are their quantity choices, q1 and q2. Firm 2 has two possible cost functions and thus two possible profit or payoff functions:

H2(q1, q2; cL) = [(a − q1 − q2) − cL] q2

and

H2(q1, q2; cH) = [(a − q1 − q2) − cH] q2.

Firm 1 has only one possible payoff function,

H1(q1, q2; c) = [(a − q1 − q2) − c] q1.

Thus, firm 1's type space is T1 = {c}, and firm 2's type space is T2 = {cL, cH}. □

Example 2.2. Suppose that player i has two possible payoff functions. We would say that player i has two types, ti1 and ti2, that player i's type space is Ti = {ti1, ti2}, and that player i's two payoff functions are Hi(a1, a2, ..., an; ti1) and Hi(a1, a2, ..., an; ti2). We can use the idea that each of a player's types corresponds to a different payoff function the player might have to represent the possibility that the player might have different sets of feasible actions, as follows. Suppose that player i's set of feasible actions is {a, b} with probability q and {a, b, c} with probability 1 − q. Then we can say that i has two types and we can define i's feasible set of actions to be {a, b, c} for both types, but define the payoff from taking action c to be −∞ for type ti1. □

Remark 2.5. The timing of a static Bayesian game is as follows:
(1) nature draws a type vector t = (t1, t2, ..., tn), where ti is drawn from the set of possible types Ti;
(2) nature reveals ti to player i but not to any other player;
(3) the players simultaneously choose actions, player i choosing ai from the feasible set Ai;
(4) payoffs Hi(a1, a2, ..., an; ti) are received.
Because nature reveals player i's type to player i but not to player j in step (2), player j doesn't know the complete history of the game when actions are chosen in step (3). □

Remark 2.6. There are games in which player i has private information not only about his own payoff function but also about another player's payoff function. We capture this possibility by allowing player i's payoff to depend not only on the actions (a1, a2, ..., an) but also on all the types (t1, t2, ..., tn). We write this payoff as Hi(a1, a2, ..., an; t1, t2, ..., tn). □

Remark 2.7. The second technical point involves the beliefs, pi(t−i|ti). We will assume that it is common knowledge that in step (1) of the timing of a static Bayesian game, nature draws a type vector t = (t1, t2, ..., tn) according to the prior probability distribution p(t). When nature then reveals ti to player i, he can compute the belief pi(t−i|ti) using Bayes' rule

pi(t−i|ti) = p(t−i, ti) / p(ti) = p(t−i, ti) / ∑_{t−i∈T−i} p(t−i, ti).
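As a small numerical illustration of this formula (the joint prior below is hypothetical and not taken from the text), the posterior belief of player 1 about player 2's type can be computed directly from a joint prior p(t):

    # Hypothetical joint prior over the type vector (t1, t2) of a two-player game.
    prior = {('L', 'L'): 0.3, ('L', 'H'): 0.3, ('H', 'L'): 0.1, ('H', 'H'): 0.3}

    def belief_of_player1(t1):
        """p1(t2 | t1) obtained from the prior by Bayes' rule."""
        p_t1 = sum(p for (s1, _), p in prior.items() if s1 == t1)   # p(t1)
        return {s2: p / p_t1 for (s1, s2), p in prior.items() if s1 == t1}

    print(belief_of_player1('L'))   # approximately {'L': 0.5, 'H': 0.5}
    print(belief_of_player1('H'))   # approximately {'L': 0.25, 'H': 0.75}

Because the types in this prior are correlated, the belief depends on the player's own type; with independent types it would not (Remark 2.2).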

2.3 Definition of Bayesian Nash equilibrium

First, we define the players' strategy spaces in static Bayesian games. We know that a player's strategy is a complete plan of action, specifying a feasible action in every contingency in which the player might be called on to act. Given the timing of a static Bayesian game, in which nature begins the game by drawing the players' types, a (pure) strategy for player i must specify a feasible action for each of player i's possible types.

Definition 2.2. In the static Bayesian game Γ, a strategy for player i is a function si, where for each type ti ∈ Ti, si(ti) specifies the action from the feasible set Ai that type ti would choose if drawn by nature. □

The strategy spaces aren't given in the normal-form representation of the Bayesian game. Instead, in a static Bayesian game the strategy spaces are constructed from the type and action spaces: player i's set of possible (pure) strategies, Si, is the set of all possible functions with domain Ti and range Ai.

Remark 2.8. In the discussion of dynamic games of incomplete information we will distinguish between two categories of strategies. Thus, in a separating strategy each type ti ∈ Ti chooses a different action ai ∈ Ai; in a pooling strategy all types choose the same action. We introduce the distinction here only to help describe the wide variety of strategies that can be constructed from a given pair of type and action spaces, Ti and Ai. □

Example 2.3. In the asymmetric-information Cournot game in Example 2.1 the solution consists of three quantity choices: q2*(cL), q2*(cH) and q1*. In terms of Definition 2.2 of a strategy, the pair (q2*(cL), q2*(cH)) is firm 2's strategy and q1* is firm 1's strategy. Firm 2 will choose a different quantity depending on its cost. It is important to note, however, that firm 1's single quantity choice should take into account that firm 2's quantity will depend on firm 2's cost in this way. Thus, if our equilibrium concept is to require that firm 1's strategy be a best response to firm 2's strategy, then firm 2's strategy must be a pair of quantities, one for each possible cost type, else firm 1 simply cannot compute whether its strategy is indeed a best response to firm 2's. □

Given the definition of a strategy in a Bayesian game, we turn next to the definition of a Bayesian Nash equilibrium. The central idea is both simple and familiar: each player's strategy must be a best response to the other players' strategies. That is, a Bayesian Nash equilibrium is simply a Nash equilibrium in a Bayesian game.

Definition 2.3. In the static Bayesian game Γ the strategies s* = (s1*, s2*, ..., sn*) are a (pure-strategy) Bayesian Nash equilibrium if for each player i and for each of i's types ti ∈ Ti, si*(ti) solves the problem

max_{ai∈Ai} ∑_{t−i∈T−i} Hi(s1*(t1), ..., s*_{i−1}(t_{i−1}), ai, s*_{i+1}(t_{i+1}), ..., sn*(tn); t) pi(t−i|ti). □

Remark 2.9. In a Bayesian Nash equilibrium no player wants to change his strategy, even if the change involves only one action by one type. □

Remark 2.10. One can show that in a finite static Bayesian game there exists a Bayesian Nash equilibrium, perhaps in mixed strategies. □
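In a finite static Bayesian game, Definition 2.3 can be verified by brute force: enumerate every pure strategy (one action per type) and check that no type of any player has a profitable deviation. The sketch below does this for a small hypothetical game - two types for player 1, one type for player 2, and invented payoff numbers used only for illustration.

    from itertools import product

    # Hypothetical finite Bayesian game: player 1 has types 'a','b' (prior 1/2 each),
    # player 2 has a single type; both players choose an action in {0, 1}.
    T1, A = ['a', 'b'], [0, 1]
    prob = {'a': 0.5, 'b': 0.5}

    # Payoffs (invented numbers): H1 depends on (a1, a2, t1), H2 on (a1, a2).
    H1 = {(0, 0, 'a'): 2, (0, 1, 'a'): 0, (1, 0, 'a'): 0, (1, 1, 'a'): 1,
          (0, 0, 'b'): 0, (0, 1, 'b'): 1, (1, 0, 'b'): 2, (1, 1, 'b'): 0}
    H2 = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 2}

    def eu2(s1, a2):
        # player 2 averages over player 1's types using the prior
        return sum(prob[t1] * H2[(s1[t1], a2)] for t1 in T1)

    equilibria = []
    for acts1 in product(A, repeat=len(T1)):        # a strategy maps each type to an action
        s1 = dict(zip(T1, acts1))
        for a2 in A:
            best1 = all(H1[(s1[t1], a2, t1)] == max(H1[(d, a2, t1)] for d in A) for t1 in T1)
            best2 = eu2(s1, a2) == max(eu2(s1, d) for d in A)
            if best1 and best2:
                equilibria.append((s1, a2))
    print(equilibria)   # -> [({'a': 1, 'b': 0}, 1)] for these payoff numbers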

2.4 The revelation principle

An important tool for designing games when the players have private information, due to Myerson [ ], in the context of Bayesian games, is the revelation principle. It can be applied in the auction and bilateral-trading problems solved in Section 2.5 below, as well as in a wide variety of other problems. Before we state and prove the revelation principle for static Bayesian games, we sketch the way the revelation principle is used in auction and bilateral-trading problems.

Consider a seller who wishes to design an auction to maximize his expected revenue. In the auction considered in the first problem of Section 2.5, the highest bidder pays money to the seller and receives the good, but there are many other possibilities. The bidders might have to pay an entry fee. More generally, some of the losing bidders might have to pay money, perhaps in amounts that depend on their own and others' bids. Also, the seller might set a reservation price - a floor below which bids will not be accepted. More generally, the good might stay with the seller with some probability, and might not always go to the highest bidder when the seller does release it.

The seller can use the revelation principle to simplify this problem in two ways. First, the seller can restrict attention to the following class of games:
1) The bidders simultaneously make claims about their types (i.e., their valuations). Bidder i can claim to be any type τi from i's set of feasible types Ti, no matter what i's true type, ti.
2) Given the bidders' claims (τ1, τ2, ..., τn), bidder i pays xi(τ1, τ2, ..., τn) and receives the good with probability qi(τ1, τ2, ..., τn).
For each possible combination of claims (τ1, τ2, ..., τn), the sum of the probabilities q1(τ1, τ2, ..., τn) + ... + qn(τ1, τ2, ..., τn) must be less than or equal to one. The second way the seller can use the revelation principle is to restrict attention to those direct mechanisms in which it is a Bayesian Nash equilibrium for each bidder to tell the truth - that is, payment and probability functions

x1(τ1, τ2, ..., τn), ..., xn(τ1, τ2, ..., τn),
q1(τ1, τ2, ..., τn), ..., qn(τ1, τ2, ..., τn)

such that each player i's equilibrium strategy is to claim τi(ti) = ti for each ti ∈ Ti.

Definition 2.4. A static Bayesian game in which each player's only action is to submit a claim about his type is called a direct mechanism. A direct mechanism in which truth-telling is a Bayesian Nash equilibrium is called incentive-compatible. □

Remark 2.11. Outside the context of auction design, the revelation principle can again be used in these two ways. Any Bayesian Nash equilibrium of any Bayesian game can be represented by a new equilibrium in an appropriately chosen new Bayesian game, where by "represented" we mean that for each possible combination of the players' types (t1, t2, ..., tn), the players' actions and payoffs in the new equilibrium are identical to those in the old equilibrium. No matter what the original game, the new Bayesian game is always a direct mechanism; no matter what the original equilibrium, the new equilibrium in the new game is always truth-telling. □

The following result holds.

Theorem 2.1. (The revelation principle). Any Bayesian Nash equilibrium of any Bayesian game can be represented by an incentive-compatible direct mechanism.

Proof. Consider the Bayesian Nash equilibrium s* = (s1*, s2*, ..., sn*) in the Bayesian game Γ = <I, {Ai}, {Ti}, {pi}, {Hi}, i ∈ I>. We will construct a direct mechanism with a truth-telling equilibrium that represents s*. The appropriate direct mechanism is a static Bayesian game with the same type spaces and beliefs as Γ but with new action spaces and new payoff functions. The new action spaces are simple: player i's feasible actions in the direct mechanism are claims about i's possible types, that is, player i's action space is Ti. The new payoff functions are more complicated. They depend not only on the original game Γ, but also on the original equilibrium in that game, s*. The idea is to use the fact that s* is an equilibrium in Γ to ensure that truth-telling is an equilibrium of the direct mechanism, as follows. The fact that s* is a Bayesian Nash equilibrium of Γ means that for each player i, si* is i's best response to the other players' strategies (s1*, ..., s*_{i−1}, s*_{i+1}, ..., sn*). Hence, for each of i's types ti ∈ Ti, si*(ti) is the best action for i to choose from Ai, given that the other players' strategies are (s1*, ..., s*_{i−1}, s*_{i+1}, ..., sn*). Thus, if i's type is ti and we allow i to choose an action from a subset of Ai that includes si*(ti), then i's optimal choice remains si*(ti), again assuming that the other players play (s1*, ..., s*_{i−1}, s*_{i+1}, ..., sn*). The payoff functions in the direct mechanism are chosen so as to confront each player with a choice of exactly this kind.

We define the payoffs in the direct mechanism by substituting the players' type reports in the new game, τ = (τ1, τ2, ..., τn), into their equilibrium strategies from the old game, s*, and then substituting the resulting actions in the old game, s*(τ) = (s1*(τ1), s2*(τ2), ..., sn*(τn)), into the payoff functions from the old game. Formally, i's payoff function is

vi(τ, t) = Hi(s*(τ), t),

where t = (t1, t2, ..., tn). We conclude the proof by showing that truth-telling is a Bayesian Nash equilibrium of this direct mechanism. By claiming to be type τi from Ti, player i is in effect choosing to take the action si*(τi) from Ai. If all the other players tell the truth, then they are in effect playing the strategies (s1*, ..., s*_{i−1}, s*_{i+1}, ..., sn*). But we argued earlier that if they play these strategies, then when i's type is ti the best action for i to choose is si*(ti). Thus, if the other players tell the truth, then when i's type is ti the best type to claim to be is ti. That is, truth-telling is an equilibrium. Hence, it is a Bayesian Nash equilibrium of the static Bayesian game Γ' = <I, {Ti}, {Ti}, {pi}, {vi}, i ∈ I> for each player i to play the truth-telling strategy τi(ti) = ti for every ti ∈ Ti. □
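A familiar concrete instance of an incentive-compatible direct mechanism (not the construction used in the proof above) is the second-price sealed-bid auction, in which each bidder's only action is to report a valuation. The sketch below assumes a small discrete type space with a uniform prior - both are illustrative choices - and checks Definition 2.4: reporting one's true valuation is a best response for every type when the other bidder reports truthfully.

    # Second-price sealed-bid auction as a direct mechanism: the highest report
    # wins and pays the second-highest report.  Discrete valuations and the
    # uniform prior are assumptions made only for this check.
    types = [0, 1, 2, 3]
    prior = 1 / len(types)

    def payoff(v, my_report, other_report):
        if my_report > other_report:
            return v - other_report
        if my_report == other_report:            # ties broken by a fair coin
            return 0.5 * (v - other_report)
        return 0.0

    def expected_payoff(v, my_report):
        # the opponent reports truthfully; take the expectation over her type
        return sum(prior * payoff(v, my_report, t) for t in types)

    for v in types:
        truthful = expected_payoff(v, v)
        assert truthful >= max(expected_payoff(v, r) for r in types) - 1e-12
    print("truth-telling is a best response for every type")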

In [4], Harsanyi suggested that player j's mixed strategy represents player i's uncertainty about j's choice of a pure strategy, and that j's choice in turn depends on the realization of a small amount of private information. A mixed-strategy Nash equilibrium in a game of complete information can be interpreted as a pure-strategy Bayesian Nash equilibrium in a closely related game with a little bit of incomplete information. The crucial feature of a mixed-strategy Nash equilibrium is not that player j chooses a strategy randomly, but rather that player i is uncertain about player j's choice; this uncertainty arises either because of randomization or because of a little incomplete information, as in the following example.

Example 2.4. Consider a bi-matrix game (like Battle of the Sexes) in which the players, although they have known each other for quite some time, aren't quite sure of each other's payoffs. We suppose that player 1's payoff if both choose the first strategy is 2 + t1, where t1 is privately known by player 1; player 2's payoff if both choose the second strategy is 2 + t2, where t2 is privately known by player 2; and t1, t2 are independent draws from a uniform distribution on [0, x]. In terms of the static Bayesian game in normal form Γ = <{1, 2}, A1, A2, T1, T2, p1, p2, H1, H2>, the action spaces are A1 = A2 = {1, 2}, the type spaces are T1 = T2 = [0, x], the beliefs are p1(t2) = p2(t1) = 1/x for all t1 and t2, and the payoffs are as in the following table:

    s1  s2    H1       H2
    1   1     2 + t1   1
    1   2     0        0
    2   1     0        0
    2   2     1        2 + t2

We will construct a pure-strategy Bayesian Nash equilibrium of this incomplete-information static game in which player 1 plays strategy 1 if t1 exceeds a critical value, c1, and plays strategy 2 otherwise, and player 2 plays strategy 2 if t2 exceeds a critical value, c2, and plays strategy 1 otherwise. In such an equilibrium, player 1 plays strategy 1 with probability (x − c1)/x and player 2 plays strategy 2 with probability (x − c2)/x. We will show that as the incomplete information disappears, that is, as x approaches zero, the players' behavior in this pure-strategy Bayesian Nash equilibrium approaches their behavior in the mixed-strategy Nash equilibrium of the original game of complete information. The original game has the payoff matrices

    H1 = [ 2  0        H2 = [ 1  0
           0  1 ],            0  2 ],

and there are two pure-strategy Nash equilibria, (1, 1) and (2, 2), and a mixed-strategy Nash equilibrium in which player 1 plays strategy 1 with probability 2/3 and player 2 plays strategy 2 with probability 2/3. Indeed, both probabilities (x − c1)/x and (x − c2)/x approach 2/3 as x approaches zero.

Suppose that players 1 and 2 play the strategies just described. For a given value of x, we will determine values of c1 and c2 such that these strategies are a Bayesian Nash equilibrium. Given player 2's strategy, player 1's expected payoffs from playing strategy 1 and from playing strategy 2 are

(c2/x)(2 + t1) + (1 − c2/x)·0 = (c2/x)(2 + t1)

and

(c2/x)·0 + (1 − c2/x)·1 = 1 − c2/x,

respectively. Thus playing strategy 1 is optimal if and only if

t1 ≥ x/c2 − 3 = c1.

Similarly, given player 1's strategy, player 2's expected payoffs from playing strategy 2 and from playing strategy 1 are

(1 − c1/x)·0 + (c1/x)(2 + t2) = (c1/x)(2 + t2)

and

(1 − c1/x)·1 + (c1/x)·0 = 1 − c1/x,

respectively. Thus, playing strategy 2 is optimal if and only if

t2 ≥ x/c1 − 3 = c2.

The above relationships yield c1 = c2 and c2² + 3c2 − x = 0. Solving the quadratic then shows that the probability that player 1 plays strategy 1, namely (x − c1)/x, and the probability that player 2 plays strategy 2, namely (x − c2)/x, both equal

1 − (−3 + √(9 + 4x))/(2x),

which approaches 2/3 as x approaches zero. Thus, as the incomplete information disappears, the players' behavior in this pure-strategy Bayesian Nash equilibrium of the incomplete-information game approaches their behavior in the mixed-strategy Nash equilibrium of the original game of complete information. □
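The limit claimed at the end of the example is easy to see numerically; the values of x in the sketch below are arbitrary.

    # As the incomplete information vanishes (x -> 0), the cutoff-equilibrium
    # probability approaches the mixed-strategy probability 2/3.
    from math import sqrt

    def prob_high_action(x):
        c = (-3 + sqrt(9 + 4 * x)) / 2     # positive root of c^2 + 3c - x = 0
        return (x - c) / x                 # prob. that player 1 plays strategy 1

    for x in [1.0, 0.1, 0.01, 0.001]:
        print(x, prob_high_action(x))      # tends to 0.666... as x shrinks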

2.5 Exercises and problems solved

1. (An auction) There are two bidders, i = 1, 2. Bidder i has a valuation vi for the good - that is, if bidder i gets the good and pays the price p, then i's payoff is vi − p. The two bidders' valuations are independently and uniformly distributed on [0, 1]. Bids are constrained to be nonnegative. The bidders simultaneously submit their bids. The higher bidder wins the good and pays the price she bid; the other bidder gets and pays nothing. In the case of a tie, the winner is determined by a flip of a coin. The bidders are risk-neutral. All of this is common knowledge. Formulate this problem as a static Bayesian game, and find a Bayesian Nash equilibrium.

Solution. In terms of a static Bayesian game Γ = <I, A1, A2, T1, T2, p1, p2, H1, H2>, where I = {1, 2}, the action space is Ai = [0, ∞), that is, player i's action is to submit a nonnegative bid, bi, and his type is his valuation vi, hence the type space is Ti = [0, 1]. We must identify the beliefs and the payoff functions. Because the valuations are independent, player i believes that vj is uniformly distributed on [0, 1], no matter what the value of vi. Player i's payoff function Hi : A1 × A2 × T1 × T2 → R is given by the relationship

Hi(b1, b2; v1, v2) = { vi − bi, if bi > bj;  (vi − bi)/2, if bi = bj;  0, if bi < bj.   (42)

To derive a Bayesian Nash equilibrium of this game we construct the players' strategy spaces. We know that in a static Bayesian game a strategy is a function from the type space to the action space, bi : Ti → Ai, vi ↦ bi(vi), where bi(vi) specifies the bid that each of i's types (valuations) would choose. In a Bayesian Nash equilibrium, player 1's strategy b1(v1) is a best response to player 2's strategy b2(v2), and vice versa. The pair of strategies (b1(v1), b2(v2)) is a Bayesian Nash equilibrium if for each vi in [0, 1], bi(vi) solves the problem

max_{bi} (vi − bi) P{bi > bj(vj)} + (1/2)(vi − bi) P{bi = bj(vj)},   i = 1, 2.

We simplify the exposition and the calculations by looking for a linear equilibrium b1(v1) = a1 + c1v1 and b2(v2) = a2 + c2v2. For a given value of vi, player i's best response solves the problem

max_{bi} (vi − bi) P{bi > aj + cjvj},

where we have used the fact that P{bi = bj(vj)} = 0, because bj(vj) = aj + cjvj and vj is uniformly distributed, so bj is uniformly distributed. Since it is pointless for player i to bid below j's minimum bid or above j's maximum bid, we have aj ≤ bi ≤ aj + cj, so

P{bi > aj + cjvj} = P{vj < (bi − aj)/cj} = (bi − aj)/cj.

Player i's best response is therefore

bi(vi) = { (vi + aj)/2, if vi ≥ aj;  aj, if vi < aj.   (43)

We prove that aj ≤ 0. If 0 < aj < 1 then there are some values of vi such that vi < aj, in which case bi(vi) isn't linear; rather, it is flat at first and positively sloped later. Since we are looking for a linear equilibrium, we therefore rule out 0 < aj < 1, focusing instead on aj ≥ 1 and aj ≤ 0. But the former cannot occur in equilibrium: since it is optimal for a higher type to bid at least as much as a lower type's optimal bid, we have cj ≥ 0, but then aj ≥ 1 would imply that bj(vj) ≥ vj, which cannot be optimal. Thus, if bi(vi) is to be linear, then we must have aj ≤ 0, in which case bi(vi) = (vi + aj)/2, so ai = aj/2 and ci = 1/2. We can repeat the same analysis for player j under the assumption that player i adopts the strategy bi(vi) = ai + civi. This yields ai ≤ 0, aj = ai/2, and cj = 1/2. Combining these two sets of results then yields ai = aj = 0 and ci = cj = 1/2. That is, bi(vi) = vi/2.

Remark 2.12. Note well that we aren't restricting the players' strategy spaces to include only linear strategies. Rather, we allow the players to choose arbitrary strategies but ask whether there is an equilibrium that is linear. It turns out that because the players' valuations are uniformly distributed, a linear equilibrium not only exists but is unique. We find that bi(vi) = vi/2. That is, each player submits a bid equal to half her valuation. Such a bid reflects the fundamental trade-off a bidder faces in an auction: the higher the bid, the more likely the bidder is to win; the lower the bid, the larger the gain if the bidder does win. □
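As a sanity check on the linear equilibrium, the sketch below estimates a bidder's expected payoff against an opponent who bids half her valuation and confirms that the payoff-maximizing bid is close to vi/2; the sample size and bid grid are arbitrary choices.

    # Numerical best-response check for the first-price auction with two bidders,
    # uniform valuations on [0,1] and an opponent who bids half her valuation.
    import numpy as np

    rng = np.random.default_rng(0)
    opp_bids = rng.uniform(0.0, 1.0, size=200_000) / 2.0

    def expected_payoff(v, b):
        # ties have probability zero under a continuous bid distribution
        return np.mean((b > opp_bids) * (v - b))

    for v in [0.2, 0.5, 0.9]:
        bids = np.linspace(0.0, v, 201)
        payoffs = [expected_payoff(v, b) for b in bids]
        print(v, bids[int(np.argmax(payoffs))])   # best bid is close to v/2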

equilibrium of game treated in the �rst problem. Also, how equilibrium biddingchanges as the distribution of the bidders�valuations changes. Neither of thesequestions can be answered using the technique just applied: it is fruitless totry to guess all the functional forms other equilibria of this game might have,and a linear equilibrium doesn�t exist for any other distribution of valuations.We derive, next, a symmetric Bayesian Nash equilibrium (namely, the players�strategies are identical, there is a single function b(vi) such that player 10s strat-egy b1(v1) is b(v1) and player 20s strategy b2(v2) is b(v2); and this single strategyis a best response to itself, again for the case of uniformly distributed valua-tions. Under the assumption that the players�strategies are strictly increasingand di¤erentiable, we show that the unique symmetric Bayesian Nash equilib-rium is the linear equilibrium. The technique we use can easily be extended toa broad class of valuation distributions, as well as case of n bidders. �2. In the game of problem 1 there are other Bayesian Nash equilibria ?

Derive a symmetric Bayesian Nash equilibrium.Solution As we just have mentioned in Remark 2.13, it is fruitless to try

to guess all the functional forms other equilibria. Suppose player j adopts thestrategy b, and assume that b is strictly increasing and di¤erentiable. Then fora given value of vi, player i0s optimal bid solves the problem

max_{bi} (vi − bi) P{bi > b(vj)}.

Let b^{-1}(bj) denote the valuation that bidder j must have in order to bid bj; that is, b^{-1}(bj) = vj if bj = b(vj). Since vj is uniformly distributed on [0, 1], P{bi > b(vj)} = P{b^{-1}(bi) > vj} = b^{-1}(bi). The first-order condition for player i's optimization problem is therefore

−b^{-1}(bi) + (vi − bi) (d/dbi) b^{-1}(bi) = 0.

This first-order condition is an implicit equation for bidder i's best response to the strategy b played by bidder j, given that bidder i's valuation is vi. If the strategy b is to be a symmetric Bayesian Nash equilibrium, we require that the solution to the first-order condition be b(vi); that is, for each of bidder i's possible valuations, bidder i doesn't wish to deviate from the strategy b, given that bidder j plays this strategy. To impose this requirement, we substitute bi = b(vi) into the first-order condition, yielding

−b^{-1}(b(vi)) + (vi − b(vi)) (d/dbi) b^{-1}(b(vi)) = 0.

We have b^{-1}(b(vi)) = vi, of course. Furthermore, (d/dbi) b^{-1}(b(vi)) = 1/b'(vi). That is, (d/dbi) b^{-1}(bi) measures how much bidder i's valuation must change to produce a unit change in the bid, whereas b'(vi) measures how much the bid changes in response to a unit change in the valuation. Thus, b must satisfy the first-order differential equation

−vi + (vi − b(vi)) (1/b'(vi)) = 0,

which is more conveniently expressed as vi b'(vi) + b(vi) = vi. The left-hand side of this differential equation is (d/dvi)(vi b(vi)). Integrating both sides of the equation therefore yields

vi b(vi) = (1/2)vi² + k,

where k is a constant of integration. To eliminate k, we need a boundary condition. Fortunately, simple economic reasoning provides one: no player should bid more than his valuation. Thus, we require b(vi) ≤ vi for every vi. In particular, we require b(0) ≤ 0. Since bids are constrained to be nonnegative, this implies that b(0) = 0, so k = 0 and b(vi) = vi/2, as claimed.
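The differential equation vi b'(vi) + b(vi) = vi with b(0) = 0 can also be integrated numerically; the simple Euler sketch below (the step size is an arbitrary choice) reproduces b(v) = v/2.

    # Numerical integration of the symmetric-equilibrium ODE  v*b'(v) + b(v) = v
    # with b(0) = 0; the solution should match b(v) = v/2.
    h = 1e-4
    v, b = h, h / 2            # start one step away from 0 to avoid dividing by 0
    while v < 1.0:
        b += h * (v - b) / v   # Euler step for b'(v) = (v - b)/v
        v += h
    print(b, v / 2)            # both are close to 0.5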

3. (A double auction) Consider a trading game called a double auction. The seller names an asking price, ps, and the buyer simultaneously names an offer price, pb. If pb ≥ ps, then trade occurs at price p = (pb + ps)/2; if pb < ps, then no trade occurs. The buyer's valuation for the seller's good is vb, the seller's is vs. These valuations are private information and are drawn from independent uniform distributions on [0, 1]. If the buyer gets the good for price p, then the buyer's utility is vb − p; if there is no trade, then the buyer's utility is zero. If the seller sells the good for price p, then the seller's utility is p − vs; if there is no trade, then the seller's utility is zero. Find the Bayesian Nash equilibria.

Solution. In this static Bayesian game, a strategy for the buyer is a function pb specifying the price the buyer will offer for each of the buyer's possible valuations, namely pb(vb). Likewise, a strategy for the seller is a function ps specifying the price the seller will demand for each of the seller's valuations, namely ps(vs). A pair of strategies (pb(vb), ps(vs)) is a Bayesian Nash equilibrium if the following two conditions hold. For each vb in [0, 1], pb(vb) solves the problem

max_{pb} { vb − [pb + M[ps(vs) | pb ≥ ps(vs)]]/2 } P{pb ≥ ps(vs)},

where M[ps(vs) | pb ≥ ps(vs)] is the expected price the seller will demand, conditional on the demand being less than the buyer's offer of pb. For each vs in [0, 1], ps(vs) solves the problem

max_{ps} { [ps + M[pb(vb) | pb(vb) ≥ ps]]/2 − vs } P{pb(vb) ≥ ps},

where M[pb(vb) | pb(vb) ≥ ps] is the expected price the buyer will offer, conditional on the offer being greater than the seller's demand of ps.

There are many Bayesian Nash equilibria of this game. Consider the following one-price equilibrium, for example, in which trade occurs at a single price if it occurs at all. For any value x in [0, 1], let the buyer's strategy be to offer x if vb ≥ x and to offer zero otherwise, and let the seller's strategy be to demand x if vs ≤ x and to demand one otherwise. Given the buyer's strategy, the seller's choices amount to trading at x or not trading, so the seller's strategy is a best response to the buyer's because the seller types who prefer trading at x to not trading do so, and vice versa. The analogous argument shows that the buyer's strategy is a best response to the seller's, so these strategies are indeed a Bayesian Nash equilibrium. In this equilibrium, trade occurs for the (vs, vb) pairs that can be indicated in a figure; trade would be efficient for all (vs, vb) pairs such that vb ≥ vs, but it doesn't occur in the two regions for which vb ≥ vs and vb < x, or vb ≥ vs and vs > x.

We now derive a linear Bayesian Nash equilibrium of the double auction. As in the previous problem, we aren't restricting the players' strategy spaces to include only linear strategies. Rather, we allow the players to choose arbitrary strategies but ask whether there is an equilibrium that is linear. Many other equilibria exist besides the one-price equilibria and the linear equilibrium, but the linear equilibrium has interesting efficiency properties, which we describe later.

Suppose the seller's strategy is ps(vs) = as + csvs. Then ps is uniformly distributed on [as, as + cs], so the first relationship (the first problem) becomes

max_{pb} [vb − (1/2)(pb + (as + pb)/2)] (pb − as)/cs,

the first-order condition for which yields

pb = (2/3)vb + (1/3)as.

Thus, if the seller plays a linear strategy, then the buyer's best response is also linear. Analogously, suppose the buyer's strategy is pb(vb) = ab + cbvb. Then pb is uniformly distributed on [ab, ab + cb], so the second relationship (the second problem) becomes

max_{ps} [(1/2)(ps + (ps + ab + cb)/2) − vs] (ab + cb − ps)/cb,

the first-order condition for which yields

ps = (2/3)vs + (1/3)(ab + cb).

Thus, if the buyer plays a linear strategy, then the seller's best response is also linear. If the players' linear strategies are to be best responses to each other, the relationship for pb implies that cb = 2/3 and ab = as/3, and the relationship for ps implies that cs = 2/3 and as = (ab + cb)/3. Therefore, the linear equilibrium strategies are

pb(vb) = (2/3)vb + 1/12

and

ps(vs) = (2/3)vs + 1/4.

Recall that trade occurs in the double auction if and only if pb ≥ ps. The last relationships show that trade occurs in the linear equilibrium if and only if vb ≥ vs + 1/4. □

A figure of this situation reveals that seller types above 3/4 make demands above the buyer's highest offer, pb(1) = 3/4, and buyer types below 1/4 make offers below the seller's lowest demand, ps(0) = 1/4. One can depict which valuation pairs trade in the one-price and in the linear equilibrium, respectively. In both cases, the most valuable possible trade, namely vs = 0 and vb = 1, does occur. But the one-price equilibrium misses some valuable trades (such as vs = 0 and vb = x − ε, where ε is small) and achieves some trades that are worth next to nothing (such as vs = x − ε and vb = x + ε). The linear equilibrium, in contrast, misses all trades worth less than 1/4 but achieves all trades worth at least 1/4. This suggests that the linear equilibrium may dominate the one-price equilibria in terms of the expected gains the players receive, but it also raises the possibility that the players might do even better in an alternative equilibrium.
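This efficiency comparison can be quantified with a short simulation; the sample size and the choice x = 1/2 (the best one-price value for uniform valuations) are arbitrary illustrative choices. For uniform valuations the expected total gains from trade are about 9/64 in the linear equilibrium, 1/8 in the one-price equilibrium with x = 1/2, and 1/6 under fully efficient trade.

    # Monte Carlo comparison of expected gains from trade in the double auction:
    # linear equilibrium (trade iff v_b >= v_s + 1/4) versus the one-price
    # equilibrium with x = 1/2 (trade iff v_b >= 1/2 >= v_s).
    import numpy as np

    rng = np.random.default_rng(1)
    vb = rng.uniform(0.0, 1.0, 1_000_000)
    vs = rng.uniform(0.0, 1.0, 1_000_000)
    gains = vb - vs                                # total surplus if trade occurs

    linear    = gains[vb >= vs + 0.25].sum() / len(vb)
    one_price = gains[(vb >= 0.5) & (vs <= 0.5)].sum() / len(vb)
    efficient = gains[gains >= 0].sum() / len(vb)
    print(linear, one_price, efficient)            # about 0.141, 0.125, 0.167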

tributions considered here, the linear equilibrium yields higher expected gainsfor the player than any other Bayesian Nash equilibria of the double auction(including but far from limited to the one - price equilibria). This implies that

80

Page 81: GamesTheorySB,ESA,etc

there isn�t Bayesian Nash equilibrium of the double auction in which trade oc-curs if and only if vb � vs; that is, it is e¢ cient.They also show that this latter result is very general: if vb is continuously

distribution on [xb; yb] and vs is continuously distributed on [xs; ys]; where yb >xs and ys > xb, then there isn�t bargaining game the buyer and seller wouldwillingly play that a Bayesian Nash equilibrium in which trade occurs if andonly if is e¢ cient.Remark 2.14. The revelation principle can be used to prove this general

result, and then translating the result into Hall and Lazear�s employment model.If the �rm has private information about the worker�s marginal product (m) andthe worker has private information about his outside opportunity (v), then thereisn�t bargaining game that the �rm and the worker would willingly play thatproduces employment if and only if it is e¢ cient, that is, m � v: �


2.6 Exercises and problems unsolved

1. Consider a Cournot duopoly operating in a market with inverse demand P(Q) = a − Q, where Q = q1 + q2 is the aggregate quantity on the market. Both firms have total costs Ci(qi) = cqi, but demand is uncertain: it is low, a = aL, with probability 1 − θ, and high, a = aH, with probability θ. Furthermore, information is asymmetric: firm 1 knows whether demand is high or low, but firm 2 doesn't. All of this is common knowledge. The two firms simultaneously choose quantities. What are the strategy spaces for the two firms? Make assumptions concerning aH, aL, θ, and c such that all equilibrium quantities are positive. What is the Bayesian Nash equilibrium of this game?

2. Consider the following asymmetric-information model of Bertrand duopoly with differentiated products. Demand for firm i is qi(pi, pj) = a − pi − bi·pj. Costs are zero for both firms. The sensitivity of firm i's demand to firm j's price is either high or low: bi is either bH or bL, where bH > bL > 0. For each firm, bi = bH with probability θ and bi = bL with probability 1 − θ, independent of the realization of bj. Each firm knows its own bi but not its competitor's. All of this is common knowledge. What are the action spaces, type spaces, beliefs, and utility functions in this game? What are the strategy spaces? What conditions define a symmetric pure-strategy Bayesian Nash equilibrium of this game? Solve for such an equilibrium.

3. Find all the pure-strategy Bayesian Nash equilibria in the following static Bayesian game:
1. Nature determines whether the payoffs are as in Game 1 or as in Game 2, each game being equally likely.
2. Player 1 learns whether nature has drawn Game 1 or Game 2, but player 2 doesn't.
3. Player 1 chooses either T or B; player 2 simultaneously chooses either L or R.
4. Payoffs are given by the game drawn by nature.

           L     R                 L     R
    T     1,1   0,0        T      0,0   0,0
    B     0,0   0,0        B      0,0   2,2
         Game 1                  Game 2

4. Recall from Section 1.1 of Chapter 1 that Matching pennies has no pure-strategy Nash equilibrium but has one mixed-strategy Nash equilibrium: each player plays H with probability 1/2.

                      Player 2
                      H        T
    Player 1    H     1,−1    −1,1
                T    −1,1      1,−1

Provide a pure-strategy Bayesian Nash equilibrium of a corresponding game of incomplete information such that, as the incomplete information disappears, the players' behavior in the Bayesian Nash equilibrium approaches their behavior in the mixed-strategy Nash equilibrium of the original game of complete information.

5. Consider a first-price, sealed-bid auction in which the bidders' valuations are independently and uniformly distributed on [0, 1]. Show that if there are n bidders, then the strategy of bidding (n − 1)/n times one's valuation is a symmetric Bayesian Nash equilibrium of this auction.

6. Consider a first-price, sealed-bid auction in which the bidders' valuations are independently and identically distributed according to the strictly positive density f(vi) on [0, 1]. Compute a symmetric Bayesian Nash equilibrium for the two-bidder case.

7. Reinterpret the buyer and seller in the double auction analyzed in problem 3 (A double auction) from Section 2.5 as a firm that knows a worker's marginal product (m) and a worker who knows his outside opportunity (v), respectively. In this context, trade means that the worker is employed by the firm, and the price at which the parties trade is the worker's wage w. If there is trade then the firm's payoff is m − w and the worker's is w; if there is no trade then the firm's payoff is zero and the worker's is v. Suppose that m and v are independent draws from a uniform distribution on [0, 1], as in the text. For purposes of comparison, compute the players' expected payoffs in the linear equilibrium of the double auction. Now consider the following two trading games as alternatives to the double auction.

Game I: Before the parties learn their private information, they sign a contract specifying that if the worker is employed by the firm then the worker's wage will be w, but also that either side can escape from the employment relationship at no cost. After the parties learn the values of their respective pieces of private information, they simultaneously announce either that they Accept the wage w or that they Reject that wage. If both announce Accept, then trade occurs; otherwise it doesn't. Given an arbitrary value of w from [0, 1], what is the Bayesian Nash equilibrium of this game? Draw a diagram showing the type pairs that trade. Find the value of w that maximizes the sum of the players' expected payoffs and compute this maximized sum.

Game II: Before the parties learn their private information, they sign a contract specifying that the following dynamic game will be used to determine whether the worker joins the firm and, if so, at what wage. After the parties learn the values of their respective pieces of private information, the firm chooses a wage w to offer the worker, which the worker then accepts or rejects. Try to analyze this game using backwards induction. Given w and v, what will the worker do? If the firm anticipates what the worker will do, then, given m, what will the firm do? What is the sum of the players' expected payoffs?


2.7 References

1. Dani, E., Numerical methods in games theory, Ed. Dacia, Cluj-Napoca, 1983
2. Dani, E., Muresan, A.S., Applied mathematics in economy, Lito. Univ. Babes-Bolyai, Cluj-Napoca, 1981
3. Gibbons, R., Game theory for applied economists, Princeton University Press, New Jersey, 1992
4. Harsanyi, J., Games with randomly disturbed payoffs: A new rationale for mixed strategy equilibrium points, International Journal of Game Theory, 2, 1973, 1-23
5. Muresan, A.S., Operational research, Lito. Univ. Babes-Bolyai, Cluj-Napoca, 1996
6. Muresan, A.S., Applied mathematics in finance, banks and exchanges, Ed. Risoprint, Cluj-Napoca, 2000
7. Muresan, A.S., Applied mathematics in finance, banks and exchanges, Vol. I, Ed. Risoprint, Cluj-Napoca, 2001
8. Muresan, A.S., Applied mathematics in finance, banks and exchanges, Vol. II, Ed. Risoprint, Cluj-Napoca, 2002
9. Myerson, R., Satterthwaite, M., Efficient mechanisms for bilateral trading, Journal of Economic Theory, 28, 1983, 265-281
10. Owen, G., Game theory (2nd edn), Academic Press, New York, 1982
11. Wang, J., The theory of games, Clarendon Press, Oxford, 1988

Part II

THE ABSTRACT THEORY OF GAMES

3 Generalized games and abstract economies

Fixed point theorems are the basic mathematical tools for showing the existence of solutions in game theory and economics. While I have tried to integrate the mathematics and the applications, this chapter isn't a comprehensive introduction to either general equilibrium theory or game theory. Here only finite-dimensional spaces are used. While many of the results presented here are true in arbitrary locally convex spaces, no attempt has been made to cover the infinite-dimensional results.

The main bibliographical source for this chapter is Border's book [10], which I have used in my lectures with the students in Computer Science from the Faculty of Mathematics and Informatics. Also, we use recent results obtained by Aliprantis, Tourky, Yannelis, Maugeri, Ray, D'Agata, Oetli, Schlager, Agarwal, O'Regan, Rim, Kim, Husai, Tarafdar, Llinares, Muresan, and so on.

3.1 Introduction

The fundamental idealization made in modelling an economy is the notion of a commodity. We suppose that it is possible to classify all the different goods and services in the world into a finite number, m, of commodities, which are available in infinitely divisible units. The commodity space is then R^m. A vector in R^m specifies a list of quantities of each commodity. It is commodity vectors that are exchanged, manufactured and consumed in the course of economic activity, not individual commodities, although a typical exchange involves a zero quantity of most commodities. A price vector lists the value of a unit of each commodity and so belongs to R^m. Thus the value of the commodity vector x at price p is ∑_{i=1}^m pi·xi = p·x.

The principal participants in an economy are the consumers. We will assume that there is a given finite number of consumers. Not every commodity vector is admissible as a final consumption for a consumer. The set Xi ⊂ R^m of all admissible consumption vectors for consumer i is his consumption set. There are a variety of restrictions that might be embodied in the consumption set. One possible restriction that might be placed on admissible consumption vectors is that they be nonnegative. Under this interpretation, negative quantities of a commodity in a final consumption vector mean that the consumer is supplying the commodity as a service.

In a private ownership economy consumers are also partially characterized by their initial endowment of commodities. This is represented as a point wi in the commodity space. In a market economy a consumer must purchase his consumption vector at the market prices. The set of admissible commodity vectors that he can afford at prices p, given an income Mi, is called his budget set and is just {x ∈ Xi | p·x ≤ Mi}. The budget set might well be empty. The problem faced by a consumer in a market economy is to choose a consumption vector, or a set of them, from the budget set. To do this, the consumer must have some criterion for choosing. One way to formalize the criterion is to assume that the consumer has a utility index, that is, a real-valued function ui, ui : Xi → R, x ↦ ui(x). The idea is that a consumer would prefer to consume vector x rather than vector y if ui(x) > ui(y) and would be indifferent if ui(x) = ui(y). The solution to the consumer's problem is then to find all vectors x which maximize ui on the budget set. The set of solutions to a consumer's problem for given prices is his demand set.

The supplier's problem is simple. Suppliers are motivated by profits. Each supplier j has a production set Yj of technologically feasible supply vectors. A supply vector specifies the quantities of each commodity supplied and the amount of each commodity used as an input. Inputs are denoted by negative quantities and outputs by positive ones. The profit or net income associated with supply vector y at price p is just ∑_{i=1}^m pi·yi = p·y. The supplier's problem is then to choose a y from the set of technologically feasible supply vectors which maximizes the associated profit. The set of profit-maximizing production vectors is the supply set.

A variation on the notion of a noncooperative game is that of an abstract economy. In an abstract economy, the set of strategies available to a player depends on the strategy choices of the other players. Consider, for example, the problem of finding an equilibrium price vector for a market economy. This can be converted into a game where the strategy sets of consumers are their consumption sets and those of suppliers are their production sets.
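To make the consumer's problem concrete, the following sketch computes an (approximate) demand set for a single consumer with a Cobb-Douglas utility index on the nonnegative orthant of R^2; the utility function, prices and income are illustrative assumptions, not data from the text.

    # Consumer's problem: maximize a utility index over the budget set
    # {x in X_i : p.x <= M_i}.  The Cobb-Douglas utility, prices and income
    # below are illustrative choices only.
    import numpy as np

    p = np.array([2.0, 1.0])       # price vector
    M = 10.0                       # income
    alpha = 0.3                    # Cobb-Douglas exponent on good 1

    def u(x):
        return (x[0] ** alpha) * (x[1] ** (1 - alpha))

    # crude grid search over the budget set
    grid = np.linspace(0.0, M / p.min(), 201)
    best, best_u = None, -np.inf
    for x1 in grid:
        for x2 in grid:
            x = np.array([x1, x2])
            if p @ x <= M and u(x) > best_u:
                best, best_u = x, u(x)

    # analytic Cobb-Douglas demand: x1 = alpha*M/p1, x2 = (1-alpha)*M/p2
    print(best, np.array([alpha * M / p[0], (1 - alpha) * M / p[1]]))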


3.2 Equilibrium of excess demand correspondences

There is a fundamental theorem for proving the existence of a market equilibrium of an abstract economy [10]. If ζ is the excess demand multivalued mapping, then p is an equilibrium price if 0 ∈ ζ(p). The price p is a free disposal equilibrium price if there is a z ∈ ζ(p) such that z ≤ 0.

Theorem 3.1. (Gale-Debreu-Nikaido Lemma). Let ζ : Δ ⇉ R^m be an upper hemi-continuous multivalued mapping with nonempty compact convex values such that for all p ∈ Δ,

p·z ≤ 0 for each z ∈ ζ(p).

Put N = −R^m_+. Then the set {p ∈ Δ | N ∩ ζ(p) ≠ ∅} of free disposal equilibrium prices is nonempty and compact.

Proof. For each p ∈ Δ set

U(p) = {q | q·z > 0 for all z ∈ ζ(p)}.

Then U(p) is convex for each p and p ∉ U(p), and we have that U^{-1}(p) is open for each p. For if q ∈ U^{-1}(p), we have that p·z > 0 for all z ∈ ζ(q). Then, since ζ is upper hemi-continuous, ζ^+[{x | p·x > 0}] is a neighborhood of q contained in U^{-1}(p).

Now p is U-maximal if and only if

for each q ∈ Δ there is a z ∈ ζ(p) with q·z ≤ 0.

It is known that if C ⊂ R^m is a closed convex cone and K ⊂ R^m is compact and convex, then K ∩ C° ≠ ∅ if and only if

for all p ∈ C there exists z ∈ K with p·z ≤ 0.

So p is U-maximal if and only if ζ(p) ∩ N ≠ ∅. Thus, by a theorem of Sonnenschein, {p | ζ(p) ∩ N ≠ ∅} is nonempty and compact. □
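Although the lemma is stated for correspondences, the notions of Walras' law and of an equilibrium price with 0 ∈ ζ(p) are easy to illustrate when the excess demand happens to be single-valued. The sketch below builds the excess demand function of a small two-good exchange economy (the endowments and preferences are invented for the illustration), verifies Walras' law, and locates the equilibrium price by bisection.

    # Excess demand of a 2-good exchange economy with two Cobb-Douglas consumers.
    # Endowments and preference parameters are illustrative assumptions.
    import numpy as np

    endowments = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    alphas = [0.3, 0.6]       # each consumer's expenditure share on good 1

    def excess_demand(p):
        z = np.zeros(2)
        for w, a in zip(endowments, alphas):
            income = p @ w
            z += np.array([a * income / p[0], (1 - a) * income / p[1]]) - w
        return z

    # Walras' law: p.z(p) = 0 at every strictly positive price vector.
    p = np.array([0.4, 0.6])
    assert abs(p @ excess_demand(p)) < 1e-12

    # Equilibrium price (normalized so that p1 + p2 = 1), found by bisection on p1;
    # here the excess demand for good 1 is decreasing in its own price.
    lo, hi = 1e-6, 1 - 1e-6
    for _ in range(60):
        mid = (lo + hi) / 2
        z1 = excess_demand(np.array([mid, 1 - mid]))[0]
        lo, hi = (mid, hi) if z1 > 0 else (lo, mid)
    p_star = np.array([mid, 1 - mid])
    print(p_star, excess_demand(p_star))   # excess demand approximately 0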

Theorem 3.2. (Neuefeind Lemma). Let S = {p | p ∈ R^m, p > 0, ∑_{i=1}^m pi = 1}. Let ζ : S ⇉ R^m be upper hemi-continuous with nonempty closed convex values, satisfy the strong form of Walras' law,

p·z = 0 for all z ∈ ζ(p),

and the boundary condition: there is a p* ∈ S and a neighborhood V of Δ \ S in Δ such that for all p ∈ V ∩ S, p*·z > 0 for all z ∈ ζ(p).

Then the set {p | p ∈ S, 0 ∈ ζ(p)} of equilibrium prices for ζ is compact and nonempty.

Proof. Define the binary relation U on Δ by

p ∈ U(q) if {p·z > 0 for all z ∈ ζ(q) and p, q ∈ S} or {p ∈ S, q ∈ Δ \ S}.

First we show that the U-maximal elements are precisely the equilibrium prices. Suppose that p is U-maximal, that is, U(p) = ∅. Since U(p) = S for all p ∈ Δ \ S, it follows that p ∈ S. Since p ∈ S and U(p) = ∅,

for each q ∈ S there is a z ∈ ζ(p) with q·z ≤ 0.   (*)

Now (*) implies 0 ∈ ζ(p). Suppose, by way of contradiction, that 0 ∉ ζ(p). Then, since {0} is compact and convex and ζ(p) is closed and convex, by the separating hyperplane theorem there is a p̄ ∈ R^m satisfying p̄·z > 0 for all z ∈ ζ(p). Put p_λ = λp̄ + (1 − λ)p. Then for z ∈ ζ(p), p_λ·z = λp̄·z + (1 − λ)p·z = λp̄·z > 0 for λ > 0. (Recall that p·z = 0 for z ∈ ζ(p) by Walras' law.) For λ > 0 small enough, p_λ > 0, so that the normalized price vector q_λ = (∑_i p_λ,i)^{-1} p_λ ∈ S and q_λ·z > 0 for all z ∈ ζ(p), which violates (*).

Conversely, if p is an equilibrium price, then 0 ∈ ζ(p), and since q·0 = 0 for all q, it follows that U(p) = ∅.

Next we verify that U satisfies the hypotheses of Sonnenschein's theorem.

(ia) p ∉ U(p): for p ∈ S this follows from Walras' law; for p ∈ Δ \ S, p ∉ S = U(p).

(ib) U(p) is convex: for p ∈ S, let q1, q2 ∈ U(p), that is, q1·z > 0 and q2·z > 0 for z ∈ ζ(p). Then [λq1 + (1 − λ)q2]·z > 0 as well. For p ∈ Δ \ S, U(p) = S, which is convex.

(ii) If q ∈ U^{-1}(p), then there is a p' with q ∈ int U^{-1}(p'). There are two cases: (a) q ∈ S and (b) q ∈ Δ \ S.

(iia) q ∈ S ∩ U^{-1}(p): then p·z > 0 for all z ∈ ζ(q). Let H = {x | p·x > 0}, which is open. Then, by upper hemi-continuity, ζ^+[H] is a neighborhood of q contained in U^{-1}(p).

(iib) q ∈ (Δ \ S) ∩ U^{-1}(p): by the boundary condition in the statement of the theorem, q ∈ int U^{-1}(p*). □

Theorem 3.3. (Grandmont's Lemma). Let S = {p | p ∈ R^m, p > 0, ∑_{i=1}^m pi = 1}. Let ζ : S ⇉ R^m be upper hemi-continuous with nonempty compact convex values, satisfy the strong form of Walras' law,

p·z = 0 for all z ∈ ζ(p),

and the boundary condition: for every sequence qn → q ∈ Δ \ S and zn ∈ ζ(qn), there is a p ∈ S (which may depend on {zn}) such that p·zn > 0 for infinitely many n.

Then ζ has an equilibrium price p, that is, 0 ∈ ζ(p).

Proof. Set Kn = co{x | x ∈ S, dist(x, Δ \ S) ≥ 1/n}. Then {Kn} is an increasing family of compact convex sets and S = ∪_n Kn. Let Cn be the cone generated by Kn. Use a theorem of Debreu to conclude that for each n there is qn ∈ Kn such that ζ(qn) ∩ Cn° ≠ ∅. Let zn ∈ ζ(qn) ∩ Cn°. Suppose that qn → q ∈ Δ \ S. Then, by the boundary condition, there is a p ∈ S such that p·zn > 0 infinitely often. But for large enough n, p ∈ Kn ⊂ Cn. Since zn ∈ Cn°, it follows that p·zn ≤ 0, a contradiction.

It follows then that no subsequence of qn converges to a point in Δ \ S. Since Δ is compact, some subsequence must converge to some p ∈ S. Since ζ is upper hemi-continuous with compact values, by the sequential characterization of hemi-continuity there is a subsequence of zn converging to z̄ ∈ ζ(p). This z̄ lies in ∩_n Cn° = −R^m_+. This fact, together with the strong form of Walras' law, implies that z̄ = 0. □

3.3 Existence of equilibrium for abstract economies

3.3.1 Preliminaries

Let A be a subset of a topological space X. We shall denote by 2^A the family of all subsets of A and by cl A the closure of A in X. If A is a subset of a vector space, we shall denote by co A the convex hull of A. If A is a nonempty subset of a topological vector space X and S, T : A → 2^X are multivalued mappings, then co T, cl T, T ∩ S : A → 2^X are the multivalued mappings defined by (co T)(x) = co T(x), (cl T)(x) = cl T(x) and (T ∩ S)(x) = T(x) ∩ S(x) for each x ∈ A, respectively. Let B be a nonempty subset of A. Denote the restriction of T to B by T|_B.

Let X be a nonempty subset of a topological vector space and x ∈ X. Let φ : X → 2^X be a given multivalued mapping. A multivalued mapping φ_x : X → 2^X is said to be a φ-majorant of φ at x if there exists an open neighborhood Nx of x in X such that
(a) for each z ∈ Nx, φ(z) ⊂ φ_x(z),
(b) for each z ∈ Nx, z ∉ cl co φ_x(z), and
(c) φ_x|_{Nx} has open graph in Nx × X.
The multivalued mapping φ is said to be φ-majorised if for each x ∈ X with φ(x) ≠ ∅ there exists a φ-majorant of φ at x.

It is clear that every multivalued mapping φ having an open graph with x ∉ cl co φ(x) for each x ∈ X is a φ-majorised multivalued mapping. However, the following simple multivalued mapping is φ-majorised but doesn't have an open graph: the multivalued mapping φ : X = (0, 1) → 2^X defined by φ(x) = (0, x/2] for each x ∈ X. Then φ doesn't have an open graph, but φ_x(z) = (0, z) for all z ∈ X is a φ-majorant of φ at any x ∈ X.

We now state the following definition.

Definition 3.1. Let X and Y be two topological spaces. Then a multivalued mapping T : X → 2^Y is said to be upper semicontinuous (respectively, almost upper semicontinuous) if for each x ∈ X and each open set V in Y with T(x) ⊂ V, there exists an open neighborhood U of x in X such that T(y) ⊂ V (respectively, T(y) ⊂ cl V) for each y ∈ U. □

Remark 3.1. An upper semicontinuous multivalued mapping is clearly almost upper semicontinuous. From the definition, if T is almost upper semicontinuous, then cl T is also almost upper semicontinuous. And it should be noted that we don't need the closedness assumption on T(x) for each x ∈ X in the definitions. □

The following example shows an almost upper semicontinuous multivalued mapping which isn't upper semicontinuous.

Example 3.1. Let X = [0, ∞) and φ : X → 2^X be defined by φ(2) = (1, 3), and φ(x) = [1, 3] if x ≠ 2. Then φ isn't upper semicontinuous at 2, since for the open neighborhood (1, 3) of φ(2) there doesn't exist any neighborhood U of 2 such that φ(y) ⊂ (1, 3) for all y ∈ U; however, φ(y) ⊂ [1, 3] = cl(1, 3) for all y in any neighborhood of 2. Therefore φ is almost upper semicontinuous. □

ematical economics. Let I be a �nite set of agents. For each i 2 I, let Xi be anonempty set of actions.De�nition 3.2. An abstract economy (or generalized game) � =

(Xi; Ai; Bi; Pi)i2I is de�ned as a family of ordered quadruples (Xi; Ai; Bi; Pi)whereXi is a nonempty topological vector space (a choice set), Ai; Bi :

Qj2I Xj !

2Xi are constraint multivalued mappings and Pi :Qj2I Xj ! 2Xi is a prefer-

ence multivalued mapping. An equilibrium for � (Schafer-Sonnenchein type)is a point x 2 X =

Qi2I Xi such that for each i 2 I; xi 2 clBi(x) and

Pi(x)TAi(x) = ;: �

Remark 3.2. When Ai = Bi for each i 2 I, our de�nitions of an abstracteconomy and an equilibrium coincide with the standard de�nitions of Shafer-Sonnenchein. �For each i 2 I; P 0i : X ! 2X will denote the multivalued mapping de�ned

by P 0i (x) = fyjy 2 X; yi 2 Pi(x)g(= ��1i (Pi(x)), where �i : X ! Xi is the i-thprojection).And we shall use the following notation:

Xi =Y

j2I;j 6=iXj

and let �i : X ! Xi; �i : X ! Xi be the projections of X onto Xi and

Xi, respectively. For any x 2 X, we simply denote �i(x) 2 Xi by xi andx = (xi; xi):In [28] Greenberg introduced a further generalized concept of equilibrium as

follows: Under same settings as above, = f igi2I be a family of functions i : X ! R+ for each i 2 I.De�nition 3.3. A - quasi-equilibrium for � is a point x 2 X such that

for all i 2 I,(1) xi 2 clAi(x);(2) Pi(x)

TAi(x) = ; and/or i(x) = 0: �

Remark 3.3. Quasi-equilibria are of special interest for economies with a tax authority, a situation to which the Shafer-Sonnenschein result cannot be applied. □
Next we give another definition of equilibrium for an abstract economy given by utility functions. Following Debreu, an abstract economy Γ = (X_i, A_i, f_i)_{i∈I} is defined as a family of ordered triples (X_i, A_i, f_i) where X_i is a nonempty topological vector space (a choice set), A_i : ∏_{j∈I} X_j = X → 2^{X_i} is a constraint multivalued mapping and f_i : ∏_{j∈I} X_j → R is a utility function (payoff function).
Definition 3.4. An equilibrium for Γ (Nash type) is a point x ∈ X such that for each i ∈ I, x_i ∈ cl A_i(x) and
f_i(x) = f_i(x^i, x_i) = inf{ f_i(x_1, ..., x_{i-1}, z, x_{i+1}, ...) | z ∈ cl A_i(x) }. □
Remark 3.4. It should be noted that if A_i(x) = X_i for all x ∈ X, then the concept of an equilibrium for Γ coincides with the well-known Nash equilibrium. The two types of equilibrium points coincide when the preference multivalued mapping P_i is defined by
P_i(x) = { z_i ∈ X_i | f_i(x^i, z_i) < f_i(x) } for each x ∈ X. □
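In a finite setting, the infimum condition of Definition 3.4 and the induced preference mapping of Remark 3.4 can both be checked by enumeration. The sketch below uses invented data (two players, three actions each, payoffs f_i to be minimized, unconstrained A_i) purely to illustrate that the two formulations single out the same points.

from itertools import product

actions = [0, 1, 2]                       # hypothetical finite action sets
def f(i, x):                              # payoffs, to be minimized (cf. the inf above)
    return (x[i] - x[1 - i]) ** 2 + 0.1 * x[i]

def A(i, x):                              # constraint: here the whole action set
    return actions

def deviate(x, i, z):
    y = list(x); y[i] = z; return tuple(y)

def is_nash_type_equilibrium(x):
    # Definition 3.4: f_i(x) equals the infimum of f_i(x^i, z) over z in cl A_i(x)
    return all(f(i, x) <= min(f(i, deviate(x, i, z)) for z in A(i, x))
               for i in range(2))

def P(i, x):                              # preference map of Remark 3.4
    return {z for z in actions if f(i, deviate(x, i, z)) < f(i, x)}

for x in product(actions, repeat=2):
    if is_nash_type_equilibrium(x):
        # equivalently: P_i(x) ∩ A_i(x) is empty for each i
        assert all(not (P(i, x) & set(A(i, x))) for i in range(2))
        print(x)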

3.3.2 A generalization of Himmelberg's fixed point theorem

We begin with the following lemma.
Lemma 3.1. Let X be a nonempty subset of a topological space and D be a nonempty compact subset of X. Let T : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, T(x) is closed. Then T is upper semicontinuous.
Proof. For any x ∈ X, let U be an open neighborhood of T(x) in D. Since T(x) is closed in D, there exists an open neighborhood V of T(x) such that
T(x) ⊆ V ⊆ cl V ⊆ U.
Since T is almost upper semicontinuous at x, for such an open neighborhood V of T(x) we can find an open neighborhood W of x such that T(y) ⊆ cl V ⊆ U for all y ∈ W. Therefore T is upper semicontinuous at x. □
Remark 3.5. For an upper semicontinuous multivalued mapping T : X → 2^Y, the mappings co T and cl co T are not necessarily upper semicontinuous in general, even if X = Y is compact convex in a locally convex Hausdorff topological vector space. □
However, almost upper semicontinuity is preserved:
Lemma 3.2. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let T : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, co T(x) ⊆ D. Then cl co T is almost upper semicontinuous.
Proof. For any x ∈ X, let U be an open set containing cl co T(x). Since cl co T(x) is closed in D, we can find an open convex neighborhood N of 0 such that
cl co T(x) + N ⊆ cl(cl co T(x) + N) = cl co T(x) + cl N ⊆ U.
Clearly V = cl co T(x) + N is an open convex set containing cl co T(x), and V ⊆ U. Since T is almost upper semicontinuous, there exists an open neighborhood W of x in X such that T(y) ⊆ cl V for all y ∈ W. Since V is convex, cl co T(y) ⊆ cl V ⊆ cl U for all y ∈ W. Therefore cl co T is almost upper semicontinuous. □
Remark 3.6. In Lemma 3.2 we do not know whether the multivalued mapping co T is almost upper semicontinuous, even when T is upper semicontinuous. □
We now prove the following generalization of Himmelberg's fixed point theorem.
Theorem 3.4. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let S, T : X → 2^D be almost upper semicontinuous multivalued mappings such that
(1) for each x ∈ X, ∅ ≠ co S(x) ⊆ T(x);
(2) for each x ∈ X, T(x) is closed.
Then there exists a point x ∈ D such that x ∈ T(x).
Proof. For each x ∈ X, since co S(x) ⊆ T(x) and T(x) is closed, we have cl co S(x) ⊆ T(x). By Lemma 3.2, the multivalued mapping cl co S : X → 2^D is also almost upper semicontinuous, so that by Lemma 3.1, cl co S is upper semicontinuous and closed convex valued in D. Therefore, by Himmelberg's fixed point theorem, there exists a point x ∈ D such that x ∈ cl co S(x) ⊆ T(x), which completes the proof. □
Corollary 3.1. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let S : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, co S(x) is a nonempty subset of D. Then there exists a point x ∈ D such that x ∈ cl co S(x).
Proof. Define a multivalued mapping T : X → 2^D by T(x) = cl co S(x) for all x ∈ X. Then by Lemma 3.2, T is almost upper semicontinuous. Clearly the pair (S, T) satisfies all the conditions of Theorem 3.4, so there exists a point x ∈ D such that x ∈ T(x). □
When S = T in Theorem 3.4, we obtain Himmelberg's fixed point theorem as a corollary:
Corollary 3.2. Let X be a convex subset of a locally convex Hausdorff topological vector space and D be a nonempty compact subset of X. Let T : X → 2^D be an upper semicontinuous multivalued mapping such that for each x ∈ X, T(x) is a nonempty closed convex subset of D. Then there exists a point x ∈ D such that x ∈ T(x). □

3.3.3 Existence of equilibria in abstract economies

In this section we consider both kinds of economy described in the preliminaries (that is, an abstract economy given by preference multivalued mappings (Shafer-Sonnenschein type) in a compact setting, and an abstract economy given by utility functions (Nash type) in a non-compact setting) and prove the existence of equilibrium points or quasi-equilibrium points in either case by using the fixed point theorems of the previous section.


First, using φ-majorised multivalued mappings, we prove an equilibrium existence result for a compact abstract economy which generalizes the powerful result of Shafer-Sonnenschein. For simplicity, we may assume that A_i = B_i for each i ∈ I in an abstract economy.
Theorem 3.5. Let Γ = (X_i, A_i, P_i)_{i∈I} be an abstract economy where I is a countable set such that for each i ∈ I,
(1) X_i is a nonempty compact convex subset of a metrisable locally convex Hausdorff topological vector space,
(2) for each x ∈ X = ∏_{i∈I} X_i, A_i(x) is nonempty convex,
(3) the multivalued mapping cl A_i : X → 2^{X_i} is continuous,
(4) the multivalued mapping P_i is φ-majorised.
Then Γ has an equilibrium choice x ∈ X, that is, for each i ∈ I, x_i ∈ cl A_i(x) and A_i(x) ∩ P_i(x) = ∅.

Proof. Let i ∈ I be fixed. Since P_i is φ-majorised, for each x ∈ X there exist a multivalued mapping φ_x : X → 2^{X_i} and an open neighborhood U_x of x in X such that P_i(z) ⊆ φ_x(z) and z_i ∉ cl co φ_x(z) for each z ∈ U_x, and φ_x|U_x has an open graph in U_x × X_i. By compactness of X, the open cover {U_x | x ∈ X} of X contains a finite subcover {U_{x_j} | j ∈ J}, where J = {1, 2, ..., n}. For each j ∈ J we define φ_j : X → 2^{X_i} by
φ_j(z) = φ_{x_j}(z) if z ∈ U_{x_j}, and φ_j(z) = X_i if z ∉ U_{x_j},    (44)
and next we define φ_i : X → 2^{X_i} by φ_i(z) = ∩_{j∈J} φ_j(z) for each z ∈ X.
For each z ∈ X there exists k ∈ J such that z ∈ U_{x_k}, so that z_i ∉ cl co φ_{x_k}(z) = cl co φ_k(z); thus z_i ∉ cl co φ_i(z). We now show that the graph of φ_i is open in X × X_i. For each (z, x) in the graph of φ_i, since X = ∪_{j∈J} U_{x_j}, there exists {i_1, ..., i_k} ⊆ J such that z ∈ U_{x_{i_1}} ∩ ... ∩ U_{x_{i_k}}. Then we can find an open neighborhood U of z in X such that U ⊆ U_{x_{i_1}} ∩ ... ∩ U_{x_{i_k}}. Since φ_{x_{i_1}}(z) ∩ ... ∩ φ_{x_{i_k}}(z) is an open subset of X_i containing x, there exists an open neighborhood V of x in X_i such that x ∈ V ⊆ φ_{x_{i_1}}(z) ∩ ... ∩ φ_{x_{i_k}}(z). Therefore we have an open neighborhood U × V of (z, x) with U × V contained in the graph of φ_i, so that the graph of φ_i is open in X × X_i. It is also clear that P_i(z) ⊆ φ_i(z) for each z ∈ X.
Next, since X × X_i is compact and metrisable, it is perfectly normal. Since the graph of φ_i is open in X × X_i, by a result of Dugundji there exists a continuous function C_i : X × X_i → [0,1] such that C_i(x, y) = 0 for all (x, y) not in the graph of φ_i and C_i(x, y) ≠ 0 for all (x, y) in the graph of φ_i. For each i ∈ I we define a multivalued mapping F_i : X → 2^{X_i} by
F_i(x) = { y ∈ cl A_i(x) | C_i(x, y) = max_{z ∈ cl A_i(x)} C_i(x, z) }.
Then, by a result of Aubin and Ekeland, F_i is upper semicontinuous and, for each x ∈ X, F_i(x) is nonempty and closed. The multivalued mapping G : X → 2^X defined by G(x) = ∏_{i∈I} F_i(x) is also upper semicontinuous by a result of Fan, and G(x) is a nonempty compact subset of X for each x ∈ X. Therefore, by Corollary 3.1, there exists a point x ∈ X such that x ∈ cl co G(x) ⊆ ∏_{i∈I} cl co F_i(x). Since F_i(x) ⊆ cl A_i(x) and A_i(x) is convex, cl co F_i(x) ⊆ cl A_i(x). Therefore x_i ∈ cl A_i(x) for each i ∈ I. It remains to show that A_i(x) ∩ P_i(x) = ∅. If z_i ∈ A_i(x) ∩ P_i(x) ≠ ∅, then C_i(x, z_i) > 0, so that C_i(x, z'_i) > 0 for all z'_i ∈ F_i(x). This implies that F_i(x) ⊆ φ_i(x), which implies x_i ∈ cl co F_i(x) ⊆ cl co φ_i(x); this is a contradiction. So the theorem is proved. □
Remark 3.7. In a finite dimensional space, for a compact set A, co A is compact and convex. Therefore, when X_i is a subset of R^n, we can relax assumption (b) of the definition of a φ-majorant as follows without affecting the conclusion of Theorem 3.5:
(b') for each z ∈ N_x, z ∉ co φ_x(z).
In this case Theorem 3.5 generalizes the Shafer-Sonnenschein theorem in two respects: (i) P_i need not have an open graph, and (ii) the index set I need not be finite. □
Using the concept of ψ-quasi-equilibrium described in the preliminaries, we

further generalize Theorem 3.5 as follows.
Theorem 3.6. Let Γ = (X_i, A_i, P_i)_{i∈I} be an abstract economy where I is a countable set such that for each i ∈ I,
(1) X_i is a nonempty compact convex subset of a metrisable locally convex Hausdorff topological vector space,
(2) ψ_i : X = ∏_{i∈I} X_i → R_+ is a nonnegative real-valued lower semicontinuous function,
(3) for each x ∈ X, A_i(x) is nonempty convex,
(4) the multivalued mapping cl A_i : X → 2^{X_i} is continuous at all x with ψ_i(x) > 0 and is almost upper semicontinuous at all x with ψ_i(x) = 0,
(5) the multivalued mapping P_i is φ-majorised.
Then Γ has a ψ-quasi-equilibrium choice x ∈ X, that is, for each i ∈ I,
(a) x_i ∈ cl A_i(x),
(b) A_i(x) ∩ P_i(x) = ∅ and/or ψ_i(x) = 0.

Proof. We can repeat the proof of Theorem 3.5. In that proof, for each i ∈ I we replace the multivalued mapping F_i by a new multivalued mapping F*_i : X → 2^{X_i} defined by
F*_i(x) = { y ∈ cl A_i(x) | C_i(x, y) ψ_i(x) = max_{z ∈ cl A_i(x)} C_i(x, z) ψ_i(x) }  for each x ∈ X.
Since {x ∈ X | ψ_i(x) > 0} is open, F*_i is also upper semicontinuous. In fact, let V be any open set containing F*_i(x). If ψ_i(x) = 0, then F*_i(x) = cl A_i(x) ⊆ V, and since cl A_i is upper semicontinuous there exists an open neighborhood W of x such that F*_i(y) ⊆ cl A_i(y) ⊆ V for all y ∈ W. If ψ_i(x) > 0, then by a result of Aubin and Ekeland F*_i(x) = F_i(x) is upper semicontinuous at x, so there exists an open neighborhood W of x such that F_i(y) ⊆ V for each y ∈ W; then W' = W ∩ {z ∈ X | ψ_i(z) > 0} is an open neighborhood of x such that F*_i(y) ⊆ V for each y ∈ W'. Therefore F*_i is upper semicontinuous.
Then G = ∏_{i∈I} F*_i : X → 2^X is also upper semicontinuous by a result of Fan, and G(x) is a nonempty compact subset of X for each x ∈ X. Therefore, by the same argument as in Theorem 3.5, there exists a point x ∈ X such that x_i ∈ cl A_i(x) for each i ∈ I. Finally, if ψ_i(x) = 0, then conclusion (b) holds. In case ψ_i(x) > 0, if z_i ∈ A_i(x) ∩ P_i(x) ≠ ∅, then C_i(x, z_i) > 0 and hence C_i(x, z'_i) > 0 for all z'_i ∈ F_i(x). This implies that F_i(x) ⊆ φ_i(x), which implies x_i ∈ cl co F_i(x) ⊆ cl co φ_i(x); this is a contradiction. Therefore we have A_i(x) ∩ P_i(x) = ∅. □

In most results on the existence of equilibria for abstract economies the underlying spaces (commodity spaces or choice sets) are compact and convex. In more recent papers, however, the underlying spaces are not always compact, and many kinds of multivalued mappings arise in various economic situations; it is therefore important to consider several types of multivalued mappings and to obtain existence results in non-compact settings. We now prove a quasi-equilibrium existence theorem for a Nash-type non-compact abstract economy.
Theorem 3.7. Let I be any (possibly uncountable) index set and, for each i ∈ I, let X_i be a convex subset of a locally convex Hausdorff topological vector space E_i and D_i a nonempty compact subset of X_i. For each i ∈ I, let f_i : X = ∏_{i∈I} X_i → R be a continuous function and ψ_i : X → R_+ a nonnegative real-valued lower semicontinuous function. For each i ∈ I, let S_i : X → 2^{D_i} be a continuous multivalued mapping at all x ∈ X with ψ_i(x) > 0 and almost upper semicontinuous at all x ∈ X with ψ_i(x) = 0, such that
(1) S_i(x) is a nonempty closed convex subset of D_i,
(2) x_i → f_i(x^i, x_i) is quasi-convex on S_i(x).
Then there exists an equilibrium point x ∈ D = ∏_{i∈I} D_i such that for each i ∈ I,
(a) x_i ∈ S_i(x),
(b) f_i(x^i, x_i) = inf_{z∈S_i(x)} f_i(x^i, z) and/or ψ_i(x) = 0.
Proof. For each i ∈ I, define a multivalued mapping V_i : X → 2^{X_i} by
V_i(x) = { y ∈ S_i(x) | f_i(x^i, y) ψ_i(x) = inf_{z∈S_i(x)} f_i(x^i, z) ψ_i(x) }.
Since {x ∈ X | ψ_i(x) > 0} is open, for each x ∈ X with ψ_i(x) > 0 the mapping V_i is upper semicontinuous at x, by a result of Aubin and Ekeland and the same argument as in the proof of Theorem 3.6; and for each x ∈ X with ψ_i(x) = 0 we have V_i(x) = S_i(x), so that V_i is also upper semicontinuous at x. Therefore V_i is upper semicontinuous at each x ∈ X, and V_i(x) is nonempty, compact and convex.
Now define V : X → 2^D by V(x) = ∏_{i∈I} V_i(x) for each x ∈ X. Then, by a result of Fan, V is also upper semicontinuous, and V(x) is a nonempty compact convex subset of D for each x ∈ X. Therefore, by Corollary 3.2, there exists a point x ∈ D such that x ∈ V(x), that is, for each i ∈ I we have
(a) x_i ∈ V_i(x) ⊆ S_i(x) and
(b) f_i(x^i, x_i) = inf_{z∈S_i(x)} f_i(x^i, z) and/or ψ_i(x) = 0. □

3.3.4 Nash equilibrium of games and abstract economies

Each strategy vector determines an outcome (which may be a lottery in some models). Players have preferences over outcomes, and these induce preferences over strategy vectors. For convenience we work with preferences over strategy vectors. There are two ways we might do this. The first is to describe player i's preferences by a binary relation Ū_i defined on X; then Ū_i(x) is the set of all strategy vectors preferred to x. Since player i only has control over the i-th component of x, we find it more useful to describe player i's preferences in terms of the good reply set. Given a strategy vector x ∈ X and a strategy y_i ∈ X_i, let x|y_i denote the strategy vector obtained from x when player i chooses y_i and the other players keep their choices fixed. We say that y_i is a good reply for player i to the strategy vector x if x|y_i ∈ Ū_i(x). This defines a multivalued mapping U_i : X ⇉ X_i, called the good reply multivalued mapping, by U_i(x) = { y_i ∈ X_i | x|y_i ∈ Ū_i(x) }. It is convenient to describe preferences in terms of the good reply multivalued mapping U_i rather than the preference relation Ū_i. Note, however, that we lose some information by doing this: given a good reply multivalued mapping U_i it is not in general possible to reconstruct the preference relation Ū_i, unless we know that Ū_i is transitive, and we do not make this assumption. Thus a game in strategic form is a tuple (I, (X_i), (U_i)) where each U_i : ∏_{j∈I} X_j ⇉ X_i.
A shortcoming of this model of a game is that frequently the choices of players cannot be made independently. A simplified example is the pumping of oil out of a common oil field by several producers. Each producer chooses an amount x_i to pump out and sell. The price depends on the total amount sold, so each producer has partial control of the price and hence of their profits. But the x_i cannot be chosen independently, because their sum cannot exceed the total amount of oil in the ground. To take such possibilities into account we introduce a multivalued mapping F_i : X ⇉ X_i which tells which strategies are actually feasible for player i, given the strategy vector of the others. (We have written F_i as a function of the strategies of all the players, including i, as a technical convenience; in modelling most situations F_i will be independent of player i's choice.) The jointly feasible strategy vectors are thus the fixed points of the multivalued mapping F = ∏_{i∈I} F_i : X ⇉ X. A game with the added feasibility or constraint multivalued mappings is called a generalized game or abstract economy. It is specified by a tuple (I, (X_i), (F_i), (U_i)) where F_i : X ⇉ X_i and U_i : X ⇉ X_i.
A Nash equilibrium of a strategic form game or abstract economy is a strategy vector x for which no player has a good reply. For a game, an equilibrium is an x ∈ X such that U_i(x) = ∅ for each i. For an abstract economy, an equilibrium is an x ∈ X such that x ∈ F(x) and U_i(x) ∩ F_i(x) = ∅ for each i.
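For a finite version of the oil-field story just described, the good reply and feasibility correspondences, and hence the Nash equilibria of the abstract economy, can be computed by brute force. The sketch below is only illustrative: the pumping levels, capacity, and inverse demand (price = 4 minus total quantity) are all made up.

from itertools import product

AMOUNTS = [0, 1, 2]                 # hypothetical pumping levels
CAPACITY = 3                        # total oil in the ground (made up)

def profit(i, x):                   # price falls with the total quantity sold
    return (4 - sum(x)) * x[i]

def U(i, x):                        # good reply set: strictly better own choices
    def swap(y):                    # the strategy vector x | y_i
        z = list(x); z[i] = y; return tuple(z)
    return {y for y in AMOUNTS if profit(i, swap(y)) > profit(i, x)}

def F(i, x):                        # feasible own choices, given the others' total
    others = sum(x) - x[i]
    return {y for y in AMOUNTS if others + y <= CAPACITY}

def is_equilibrium(x):
    # x in F(x) and U_i(x) ∩ F_i(x) = empty for each player i
    return all(x[i] in F(i, x) and not (U(i, x) & F(i, x)) for i in range(2))

print([x for x in product(AMOUNTS, repeat=2) if is_equilibrium(x)])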


Nash proves the existence of equilibria for games in which the players' preferences are representable by continuous quasi-concave utilities and the strategy sets are simplexes. Debreu proves the existence of equilibrium for abstract economies. He assumes that the strategy sets are contractible polyhedra, that the feasibility multivalued mappings have closed graph, that the maximized utility is continuous, and that the set of utility maximizers over each constraint set is contractible. These are joint assumptions on utility and feasibility; the simplest way to make separate assumptions is to assume that the strategy sets are compact and convex, the utilities continuous and quasi-concave, and the constraint multivalued mappings continuous with compact convex values. Then the maximum theorem guarantees continuity of the maximized utility, and convexity of the feasible sets together with quasi-concavity implies convexity (and hence contractibility) of the set of maximizers. Arrow and Debreu used Debreu's result to prove the existence of Walrasian equilibrium of an economy and coined the term abstract economy.
Gale and Mas-Colell prove a lemma which allows them to establish the existence of equilibrium for a game without ordered preferences. They assume that the strategy sets are compact convex sets and that the good reply multivalued mappings are convex valued and have open graph. Shafer and Sonnenschein prove the existence of equilibria for abstract economies without ordered preferences. They assume that the good reply multivalued mappings have open graph and satisfy the convexity/irreflexivity condition x_i ∉ co U_i(x), and that the feasibility multivalued mappings are continuous with compact convex values. This result does not strictly generalize Debreu's result, since convexity rather than contractibility assumptions are made.
Theorem 3.8 (Gale, Mas-Colell). Let X = ∏_{i∈I} X_i, with each X_i a nonempty, compact, convex subset of R^{k_i}, and let U_i : X ⇉ X_i be a multivalued mapping satisfying
(i) U_i(x) is convex for all x ∈ X,
(ii) U_i^{-1}({x_i}) is open in X for all x_i ∈ X_i.
Then there exists x ∈ X such that for each i, either x_i ∈ U_i(x) or U_i(x) = ∅.
Proof. Let W_i = { x | U_i(x) ≠ ∅ }. Then W_i is open by (ii), and U_i|W_i : W_i ⇉ X_i satisfies the hypotheses of the selection theorem, so there is a continuous function f_i : W_i → X_i with f_i(x) ∈ U_i(x). Define the multivalued mapping γ_i : X ⇉ X_i by
γ_i(x) = {f_i(x)} if x ∈ W_i, and γ_i(x) = X_i if x ∉ W_i.    (45)
Then γ_i is upper hemi-continuous with nonempty compact convex values, and thus so is γ = ∏_{i∈I} γ_i : X ⇉ X. Thus, by the Kakutani theorem, γ has a fixed point x̄. If γ_i(x̄) ≠ X_i, then x̄_i ∈ γ_i(x̄) implies x̄_i = f_i(x̄) ∈ U_i(x̄). If γ_i(x̄) = X_i, then it must be that U_i(x̄) = ∅ (unless, of course, X_i is a singleton, in which case {x̄_i} = γ_i(x̄)). □
Remark 3.8. The previous theorem possesses a trivial extension. Each U_i

is assumed to satisfy (i) and (ii) so that the selection theorem may be employed.


If some U_i is already a singleton-valued multivalued mapping, then the selection problem is trivial. Thus we may allow some of the U_i's to be continuous singleton-valued multivalued mappings instead, and the conclusion follows. The next corollary is derived from Theorem 3.8 by assuming x_i ∉ U_i(x) for each x, and concludes that there exists some x with U_i(x) = ∅ for each i. Assuming instead that U_i(x) is never empty yields a result equivalent to a result of Fan. □
Corollary 3.3. For each i, let U_i : X ⇉ X_i have open graph and satisfy x_i ∉ co U_i(x) for each x. Then there exists x ∈ X with U_i(x) = ∅ for all i.
Proof. Because X_i is a convex set, the multivalued mappings co U_i satisfy the hypotheses of Theorem 3.8, so there is x ∈ X such that for each i, x_i ∈ co U_i(x) or co U_i(x) = ∅. Since x_i ∉ co U_i(x) by hypothesis, we have co U_i(x) = ∅, so U_i(x) = ∅. □
Theorem 3.9 (Shafer-Sonnenschein). Let (I, (X_i), (F_i), (U_i)) be an abstract economy such that for each i,
(i) X_i ⊆ R^{k_i} is nonempty, compact and convex,
(ii) F_i is a continuous multivalued mapping with nonempty compact convex values,
(iii) Gr U_i is open in X × X_i,
(iv) x_i ∉ co U_i(x) for all x ∈ X.
Then there is an equilibrium.
Proof. Define α_i : X × X_i → R_+ by α_i(x, y_i) = dist[(x, y_i), (Gr U_i)^c]. Then α_i(x, y_i) > 0 if and only if y_i ∈ U_i(x), and α_i is continuous since Gr U_i is open. Define H_i : X ⇉ X_i by
H_i(x) = { y_i ∈ X_i | y_i maximizes α_i(x, ·) on F_i(x) }.
Then H_i has nonempty compact values and is upper hemi-continuous, hence closed. (To see that H_i is upper hemi-continuous, apply the maximum theorem to the multivalued mapping (x, y_i) ⇉ {x} × F_i(x) and the function α_i.) Define G : X ⇉ X by G(x) = ∏_{i∈I} co H_i(x). Then, by well known results, G is upper hemi-continuous with compact convex values and so satisfies the hypotheses of the Kakutani fixed point theorem; hence there is x̄ ∈ X with x̄ ∈ G(x̄). Since H_i(x̄) ⊆ F_i(x̄), which is convex, x̄_i ∈ G_i(x̄) = co H_i(x̄) ⊆ F_i(x̄). We now show U_i(x̄) ∩ F_i(x̄) = ∅. Suppose not, that is, there is z_i ∈ U_i(x̄) ∩ F_i(x̄). Then, since z_i ∈ U_i(x̄), we have α_i(x̄, z_i) > 0, and since H_i(x̄) consists of the maximizers of α_i(x̄, ·) on F_i(x̄), we have α_i(x̄, y_i) > 0 for all y_i ∈ H_i(x̄). This says that y_i ∈ U_i(x̄) for all y_i ∈ H_i(x̄). Thus H_i(x̄) ⊆ U_i(x̄), so x̄_i ∈ G_i(x̄) = co H_i(x̄) ⊆ co U_i(x̄), which contradicts (iv). Thus U_i(x̄) ∩ F_i(x̄) = ∅. □
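The gap function α_i used in this proof is easy to approximate numerically once the graph of U_i is known on a grid. The Python sketch below is only an illustration under invented one-dimensional data: U(x) = { y : y > x } on [0,1] (an open-graph good reply map), a constant feasibility set F(x) = [0, 0.8], and a crude grid distance in place of the exact Euclidean distance to the complement of the graph. It shows how α_i(x, ·) singles out the maximizer set H_i(x) inside F_i(x).

import numpy as np

grid = np.linspace(0.0, 1.0, 101)

def in_graph(x, y):          # hypothetical good reply map with open graph: U(x) = {y : y > x}
    return y > x

def alpha(x, y):             # crude grid approximation of dist((x, y), complement of Gr U)
    if not in_graph(x, y):
        return 0.0
    pts = np.array([(a, b) for a in grid for b in grid if not in_graph(a, b)])
    return np.min(np.hypot(pts[:, 0] - x, pts[:, 1] - y))

def F(x):                    # feasibility: own strategies up to 0.8, independent of x
    return grid[grid <= 0.8]

def H(x):                    # maximizers of alpha(x, .) on F(x), up to a grid tolerance
    feas = F(x)
    vals = np.array([alpha(x, y) for y in feas])
    return feas[vals >= vals.max() - 1e-12]

print(H(0.3))   # points of F(0.3) farthest from the boundary y = x of the graph
print(H(0.9))   # U(0.9) misses F(0.9) entirely, so alpha vanishes on F and H(0.9) = F(0.9)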

Remark 3.9. The multivalued mappings H_i used in the proof of the previous theorem are not natural constructions, which is the cleverness of Shafer and Sonnenschein's proof. The natural approach would be to use the best reply multivalued mappings, x ⇉ { x_i | U_i(x|x_i) ∩ F_i(x) = ∅ }. These multivalued mappings are compact-valued and upper hemi-continuous, but they may fail to be convex-valued; Mas-Colell gives an example for which the best reply multivalued mapping does not even have a connected-valued sub-multivalued mapping. Taking the convex hull of the best reply multivalued mapping does not help, since a fixed point of the convex hull multivalued mapping may fail to be an equilibrium.
Another natural approach would be to use the good reply multivalued mapping x ⇉ co U_i(x) ∩ F_i(x). This multivalued mapping, while convex-valued, is not closed-valued, and so the Kakutani theorem does not apply. What Shafer and Sonnenschein do is choose a multivalued mapping that is a sub-multivalued mapping of the good reply set when it is nonempty, and equal to the whole feasible strategy set otherwise. Under stronger assumptions on the multivalued mappings F_i this approach can be made to work without taking a proper subset of the good reply set. The additional assumptions on F_i are the following. First, F_i(x) is assumed to be topologically regular for each x, that is, F_i(x) = cl[int F_i(x)]. Second, the multivalued mapping x ⇉ int F_i(x) is assumed to have open graph. The requirement of open graph is stronger than lower hemi-continuity. These assumptions were used by Borglin and Keiding, who reduced the multi-player abstract economy to a one-person game. The proof below adds an additional player to the abstract economy by introducing an "abstract auctioneer", and incorporates the feasibility constraints into the preferences, which converts it into a game. Both the topological regularity and open graph assumptions are satisfied by budget multivalued mappings, provided income is always greater than the minimum consumption expenditure on the consumption set. The proof is closely related to the arguments used by Gale and Mas-Colell to reduce an economy to a noncooperative game. □
Theorem 3.10 (A special case of the Shafer-Sonnenschein theorem).

Let (I, (X_i), (F_i), (U_i)) be an abstract economy such that for each i we have
(i) X_i ⊆ R^{k_i} is nonempty, compact and convex,
(ii) F_i is an upper hemi-continuous multivalued mapping with nonempty compact convex values satisfying, for all x, F_i(x) = cl[int F_i(x)], and x ⇉ int F_i(x) has open graph,
(iii) Gr U_i is open in X × X_i,
(iv) for all x, x_i ∉ co U_i(x).
Then there is an equilibrium, that is, an x* ∈ X such that for each i,
x*_i ∈ F_i(x*) and U_i(x*) ∩ F_i(x*) = ∅.
Proof. We define a game as follows. Put Z_0 = ∏_{i∈I} X_i. For i ∈ I put Z_i = X_i, and set Z = Z_0 × ∏_{i∈I} Z_i. A typical element of Z will be denoted (x, y), where x ∈ Z_0 and y ∈ ∏_{i∈I} Z_i. Define preference multivalued mappings Φ_i : Z ⇉ Z_i as follows. Define Φ_0 by Φ_0(x, y) = {y}, and for i ∈ I set
Φ_i(x, y) = F_i(x) if y_i ∉ F_i(x), and Φ_i(x, y) = co U_i(y) ∩ int F_i(x) if y_i ∈ F_i(x).    (46)
Note that Φ_0 is continuous and never empty-valued, and that for i ∈ I the multivalued mapping Φ_i is convex-valued and satisfies y_i ∉ Φ_i(x, y). Also, for i ∈ I, the graph of Φ_i is open. To see this, set
A_i = { (x, y, z_i) | z_i ∈ int F_i(x) },  B_i = { (x, y, z_i) | y_i ∉ F_i(x) },  C_i = { (x, y, z_i) | z_i ∈ co U_i(y) },
and note that
Gr Φ_i = (A_i ∩ B_i) ∪ (A_i ∩ C_i).
The set A_i is open because int F_i has open graph, and C_i is open by hypothesis (iii). The set B_i is also open: if y_i ∉ F_i(x), then there is a closed neighborhood W of y_i such that F_i(x) ⊆ W^c, and upper hemi-continuity of F_i then gives the desired result.
Thus the hypothesis of Remark 3.8 is satisfied, and so there exists (x*, y*) ∈ Z such that
x* ∈ Φ_0(x*, y*),    (*)
and for i ∈ I
Φ_i(x*, y*) = ∅.    (**)
Now (*) implies x* = y*, and since F_i(x) is never empty, (**) becomes
co U_i(x*) ∩ int F_i(x*) = ∅, for i ∈ I.
Thus U_i(x*) ∩ int F_i(x*) = ∅. But F_i(x*) = cl[int F_i(x*)] and U_i(x*) is open, so U_i(x*) ∩ F_i(x*) = ∅; that is, x* is an equilibrium. □

3.3.5 Walrasian equilibrium of an economy

We now have several tools for proving the existence of a Walrasian equilibrium of an economy. We will focus on two approaches: the excess demand approach and the abstract economy approach. The excess demand approach utilizes the Debreu-Gale-Nikaido lemma, namely Theorem 3.1. The abstract economy approach converts the problem of finding a Walrasian equilibrium of the economy into the problem of finding a Nash equilibrium of an associated abstract economy.
The central difficulty of the excess demand approach is proving the upper hemi-continuity of the excess demand multivalued mapping.
The abstract economy approach explicitly introduces a fictitious agent, the "auctioneer", into the picture and models the economy as an abstract economy or generalized game. The strategies of consumers are consumption vectors, the strategies of suppliers are production vectors, and the strategies of the auctioneer are prices. The auctioneer's preference is to increase the value of excess demand. A Nash equilibrium of the abstract economy corresponds to a Walrasian equilibrium of the original economy. The principal difficulty in applying the existence theorems for abstract economies is that they require compact strategy sets, while the consumption and production sets are not compact. This problem is dealt with by showing that any equilibrium must lie in a compact set, truncating the consumption and production sets, and then showing that a Nash equilibrium of the truncated abstract economy is a Walrasian equilibrium of the original economy.
We now recall some notation and definitions needed in what follows. Let R^m denote the commodity space. For i = 1, 2, ..., n let X_i ⊆ R^m denote the i-th consumer's consumption set, w_i ∈ R^m his private endowment, and U_i his preference relation on X_i. For j = 1, 2, ..., k let Y_j denote the j-th supplier's production set. Set X = ∑_{i=1}^n X_i, w = ∑_{i=1}^n w_i, and Y = ∑_{j=1}^k Y_j. Let a_{ij} denote the share of consumer i in the profits of supplier j. An economy is then described by a tuple ((X_i, w_i, U_i), (Y_j), (a_{ij})).

Definition 3.5. An attainable state of the economy is a tuple ((x_i), (y_j)) ∈ ∏_{i=1}^n X_i × ∏_{j=1}^k Y_j satisfying
∑_{i=1}^n x_i − ∑_{j=1}^k y_j − w = 0. □
Let F denote the set of attainable states and let
M = { ((x_i), (y_j)) ∈ (R^m)^{n+k} | ∑_{i=1}^n x_i − ∑_{j=1}^k y_j − w = 0 }.
Then F = (∏_{i=1}^n X_i × ∏_{j=1}^k Y_j) ∩ M. Let X'_i be the projection of F on X_i, and let Y'_j be the projection of F on Y_j.
Definition 3.6. A Walrasian free disposal equilibrium is a price p* ∈ Δ together with an attainable state ((x*_i), (y*_j)) satisfying:
(i) For each j = 1, 2, ..., k,
p*·y*_j ≥ p*·y_j for all y_j ∈ Y_j.
(ii) For each i = 1, 2, ..., n,
x*_i ∈ B_i and U_i(x*_i) ∩ B_i = ∅,
where
B_i = { x_i ∈ X_i | p*·x_i ≤ p*·w_i + ∑_{j=1}^k a_{ij}(p*·y*_j) }. □
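For a concrete toy economy with finitely many candidate bundles, the conditions of Definition 3.6 can be verified directly. The sketch below is purely illustrative: the endowments, the tiny finite stand-in for the production set, the profit shares, and the utility x_1·x_2 generating U_i are all invented, and the "no preferred affordable bundle" test is run only over a finite grid of candidate bundles.

import numpy as np

# Hypothetical 2-commodity economy: two consumers, one supplier (finite stand-in for Y_1).
w  = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]        # endowments w_i
Ys = [[np.array([0.0, 0.0]), np.array([-0.5, 0.4])]]     # production sets Y_j (finite stand-ins)
a  = [[0.5], [0.5]]                                      # profit shares a_ij (sum to 1 for each j)

def u(i, x):                                             # utility generating U_i(x) = {x' : u(x') > u(x)}
    return x[0] * x[1]

def is_free_disposal_equilibrium(p, xs, ys, candidates):
    # attainable state: sum_i x_i - sum_j y_j - w = 0
    if not np.allclose(sum(xs) - sum(ys) - sum(w), 0.0):
        return False
    # (i) each y*_j maximizes profit p.y over Y_j
    for j, Yj in enumerate(Ys):
        if p @ ys[j] < max(p @ y for y in Yj) - 1e-9:
            return False
    # (ii) x*_i lies in the budget set B_i and no strictly preferred bundle lies in B_i
    for i in range(len(xs)):
        income = p @ w[i] + sum(a[i][j] * (p @ ys[j]) for j in range(len(ys)))
        if p @ xs[i] > income + 1e-9:
            return False
        if any(u(i, x) > u(i, xs[i]) + 1e-12 and p @ x <= income + 1e-9
               for x in candidates):
            return False
    return True

candidates = [np.array([s, t]) for s in np.linspace(0, 2, 21)
                               for t in np.linspace(0, 2, 21)]
p_star = np.array([0.5, 0.5])
x_star = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
y_star = [np.array([0.0, 0.0])]
print(is_free_disposal_equilibrium(p_star, x_star, y_star, candidates))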

Lemma 3.3. Let the economy ((X_i, w_i, U_i), (Y_j), (a_{ij})) satisfy:
For i = 1, 2, ..., n,
(1) X_i is closed, convex and bounded from below, and w_i ∈ X_i.
For j = 1, 2, ..., k,
(2) Y_j is closed, convex and 0 ∈ Y_j.
(3) AY ∩ R^m_+ = {0}.
(4) Y ∩ (−Y) = {0}.
Then the set F of attainable states is compact and nonempty. Furthermore, 0 ∈ Y'_j, j = 1, 2, ..., k.
Suppose, in addition, that the following two assumptions hold. For each i = 1, 2, ..., n,
(5) there is some x⁰_i ∈ X_i satisfying w_i > x⁰_i;
(6) Y ⊇ −R^m_+.
Then x⁰_i ∈ X'_i, i = 1, 2, ..., n.
Proof. Clearly ((w_i), (0_j)) ∈ F, so F is nonempty and 0 ∈ Y'_j. The set F of attainable states is clearly closed, being the intersection of two closed sets. So it suffices to show that AF = {0}, where AF is the asymptotic cone of F (the set of all possible limits of sequences of the form {λ_n x_n}, where each x_n ∈ F and λ_n ↓ 0). By a well known result, we have
AF ⊆ A(∏_{i=1}^n X_i × ∏_{j=1}^k Y_j) ∩ AM.
Also, we have
A(∏_{i=1}^n X_i × ∏_{j=1}^k Y_j) ⊆ ∏_{i=1}^n (AX_i) × ∏_{j=1}^k (AY_j).
Since each X_i is bounded below, there is some b_i ∈ R^m such that X_i ⊆ b_i + R^m_+. Thus AX_i ⊆ A(b_i + R^m_+) = AR^m_+ = R^m_+. Also, we have AY_j ⊆ AY. Again, since M − w is a cone, AM = M − w. Thus we can show AF = {0} if we can show that
(∏_{i=1}^n R^m_+ × ∏_{j=1}^k AY) ∩ (M − w) = {0}.
In other words, we need to show that if x_i ∈ R^m_+, i = 1, 2, ..., n, and y_j ∈ AY, j = 1, 2, ..., k, and ∑_{i=1}^n x_i − ∑_{j=1}^k y_j = 0, then x_1 = ... = x_n = y_1 = ... = y_k = 0. Now ∑_{i=1}^n x_i ≥ 0, so that ∑_{j=1}^k y_j ≥ 0 too. Since AY is a convex cone, ∑_{j=1}^k y_j ∈ AY. Since AY ∩ R^m_+ = {0}, the equality ∑_{i=1}^n x_i − ∑_{j=1}^k y_j = 0 implies ∑_{i=1}^n x_i = 0 = ∑_{j=1}^k y_j. Now x_i ≥ 0 and ∑_{i=1}^n x_i = 0 clearly imply that x_i = 0, i = 1, 2, ..., n. Rewriting ∑_{j=1}^k y_j = 0 yields y_i = −(∑_{j≠i} y_j). Both y_i and this last sum belong to Y, as AY ⊆ Y. Thus y_i ∈ Y ∩ (−Y), so y_i = 0. This is true for all i = 1, 2, ..., k.
Now assume that (5) and (6) hold. By (5), ∑_{i=1}^n x⁰_i < ∑_{i=1}^n w_i. Set y⁰ = ∑_{i=1}^n x⁰_i − ∑_{i=1}^n w_i. Then y⁰ < 0, so by (6) there are y⁰_j, j = 1, 2, ..., k, satisfying y⁰ = ∑_{j=1}^k y⁰_j. Then ((x⁰_i), (y⁰_j)) ∈ F, so x⁰_i ∈ X'_i. □
Under the hypotheses of Lemma 3.3 the set F of attainable states is compact. Thus for each consumer i there is a compact convex set K_i containing X'_i in its interior. Set X''_i = K_i ∩ X_i; then X''_i is a compact convex subset of X_i containing X'_i. Likewise, for each supplier j there is a compact convex set C_j containing Y'_j in its interior. Set Y''_j = C_j ∩ Y_j.

Theorem 3.11. Let the economy ((X_i, w_i, U_i), (Y_j), (a_{ij})) satisfy:
For i = 1, 2, ..., n,
(1) X_i is closed, convex and bounded from below, and w_i ∈ X_i.
(2) There is some x⁰_i ∈ X_i satisfying w_i > x⁰_i.
(3) (a) U_i has open graph, (b) x_i ∉ co U_i(x_i), (c) x_i ∈ cl U_i(x_i).
For each j = 1, 2, ..., k,
(4) Y_j is closed and convex and 0 ∈ Y_j.
(5) Y ∩ R^m_+ = {0}.
(6) Y ∩ (−Y) = {0}.
(7) Y ⊇ −R^m_+.
Then there is a free disposal equilibrium of the economy.
Proof. Define an abstract economy as follows. Player 0 is the auctioneer. His strategy set is Δ^{m−1}, the closed standard (m−1)-simplex; these strategies are price vectors. The strategy set of consumer i is X''_i and the strategy set of supplier j is Y''_j. A typical strategy vector is thus of the form (p, (x_i), (y_j)).
The auctioneer's preferences are represented by the multivalued mapping U_0 : Δ × ∏_i X''_i × ∏_j Y''_j ⇉ Δ defined by
U_0(p, (x_i), (y_j)) = { q ∈ Δ | q·(∑_i x_i − ∑_j y_j − w) > p·(∑_i x_i − ∑_j y_j − w) }.
Thus the auctioneer prefers to raise the value of excess demand. Observe that U_0 has open graph, convex upper contour sets, and p ∉ U_0(p, (x_i), (y_j)).
Supplier j*'s preferences are represented by the multivalued mapping V_{j*} : Δ × ∏_i X''_i × ∏_j Y''_j ⇉ Y''_{j*} defined by
V_{j*}(p, (x_i), (y_j)) = { y''_{j*} ∈ Y''_{j*} | p·y''_{j*} > p·y_{j*} }.
Thus suppliers prefer larger profits. These multivalued mappings have open graph, convex upper contour sets, and satisfy y_{j*} ∉ V_{j*}(p, (x_i), (y_j)).
The preferences of consumer i* are represented by the multivalued mapping U'_{i*} : Δ × ∏_i X''_i × ∏_j Y''_j ⇉ X_{i*} defined by
U'_{i*}(p, (x_i), (y_j)) = co U_{i*}(x_{i*}).
This multivalued mapping has open graph, convex upper contour sets, and satisfies x_{i*} ∉ U'_{i*}(p, (x_i), (y_j)).
The feasibility multivalued mappings are as follows. For suppliers and the auctioneer they are constant multivalued mappings, with values equal to their entire strategy sets; thus they are continuous with compact convex values. For consumers things are more complicated. Start by setting π_j(p) = max_{y_j ∈ Y''_j} p·y_j. By the maximum theorem this is a continuous function. Since 0 ∈ Y'_j, π_j(p) is always nonnegative. Set
F_{i*}(p, (x_i), (y_j)) = { x''_{i*} ∈ X''_{i*} | p·x''_{i*} ≤ p·w_{i*} + ∑_{j=1}^k a_{i*j} π_j(p) }.
Since π_j(p) is nonnegative and x⁰_{i*} < w_{i*} with x⁰_{i*} ∈ X''_{i*}, we have p·x⁰_{i*} < p·w_{i*} for any p ∈ Δ. Thus F_{i*} is lower hemi-continuous and nonempty-valued. Since X''_{i*} is compact, F_{i*} is upper hemi-continuous, since it clearly has closed graph. Thus for each consumer the feasibility multivalued mapping is a continuous multivalued mapping with nonempty compact convex values.
The abstract economy so constructed satisfies all the hypotheses of the Shafer-Sonnenschein theorem, and so has a Nash equilibrium. Translating the definition of Nash equilibrium to the case at hand yields the existence of (p*, (x*_i), (y'_j)) ∈ Δ × ∏_i X''_i × ∏_j Y''_j satisfying
(i) q·(∑_i x*_i − ∑_j y'_j − w) ≤ p*·(∑_i x*_i − ∑_j y'_j − w) for all q ∈ Δ;
(ii) p*·y'_j ≥ p*·y_j for all y_j ∈ Y''_j, j = 1, 2, ..., k;
(iii) x*_i ∈ B_i and co U_i(x*_i) ∩ B_i = ∅, i = 1, 2, ..., n, where
B_i = { x_i ∈ X''_i | p*·x_i ≤ p*·w_i + ∑_{j=1}^k a_{ij}(p*·y'_j) }.
Let M_i = p*·w_i + ∑_{j=1}^k a_{ij}(p*·y'_j). Then, in fact, each consumer spends all his income, so that we have the budget equality p*·x*_i = M_i. Suppose not. Then, since U_i(x*_i) is open and x*_i ∈ cl U_i(x*_i), it would follow that U_i(x*_i) ∩ B_i ≠ ∅, a contradiction.
Summing up the budget equalities and using ∑_{i=1}^n a_{ij} = 1 for each j yields p*·∑_i x*_i = p*·(∑_j y'_j + w), so that
p*·(∑_i x*_i − ∑_j y'_j − w) = 0.
This and (i) yield
∑_i x*_i − ∑_j y'_j − w ≤ 0.
We next show that p*·y'_j ≥ p*·y_j for all y_j ∈ Y_j. Suppose not, and let ỹ_j ∈ Y_j satisfy p*·ỹ_j > p*·y'_j. Since Y_j is convex, λỹ_j + (1−λ)y'_j ∈ Y_j, and it too yields a higher profit than y'_j. But for λ small enough, λỹ_j + (1−λ)y'_j ∈ Y''_j, because Y'_j is in the interior of C_j. This contradicts (ii).
By (7), z* = ∑_i x*_i − ∑_j y'_j − w ∈ Y, so that there exist y''_j ∈ Y_j, j = 1, 2, ..., k, satisfying z* = ∑_j y''_j. Set y*_j = y'_j + y''_j. Since each y'_j maximizes p*·y_j over Y_j, the sum ∑_j y'_j maximizes p*·y over Y. But since p*·z* = 0, ∑_j y*_j also maximizes p*·y over Y. But then each y*_j must also maximize p*·y_j over Y_j. Thus we have shown so far that p*·y*_j ≥ p*·y_j for all y_j ∈ Y_j, j = 1, 2, ..., k. By construction, ((x*_i), (y*_j)) ∈ F. To show that (p*, (x*_i), (y*_j)) is indeed a Walrasian free disposal equilibrium, it remains to prove that for each i,
U_i(x*_i) ∩ { x_i ∈ X_i | p*·x_i ≤ p*·w_i + ∑_{j=1}^k a_{ij}(p*·y*_j) } = ∅.
Suppose there is some x'_i belonging to this intersection. Then for small enough λ > 0, λx'_i + (1−λ)x*_i ∈ X''_i and, since x*_i ∈ cl U_i(x*_i), λx'_i + (1−λ)x*_i ∈ co U_i(x*_i) ∩ B_i, contradicting (iii). Thus ((x*_i), (y*_j)) is a Walrasian free disposal equilibrium. □

Theorem 3.12. Let the economy ((X_i, w_i, U_i), (Y_j), (a_{ij})) satisfy the hypotheses of Theorem 3.11 and assume, in addition, that there is a continuous quasi-concave utility u_i satisfying U_i(x_i) = { x'_i ∈ X_i | u_i(x'_i) > u_i(x_i) }. Then the economy has a Walrasian free disposal equilibrium.
Proof. Let Y''_j be as in the proof of the previous theorem. Define the multivalued mapping γ_j : Δ ⇉ Y''_j by
γ_j(p) = { y_j ∈ Y''_j | p·y_j ≥ p·y''_j for all y''_j ∈ Y''_j }.
Define π_j : Δ → R by π_j(p) = max_{y_j ∈ Y''_j} p·y_j. By the maximum theorem, γ_j is upper hemi-continuous with nonempty compact values and π_j is continuous. Since 0 ∈ Y''_j, π_j is nonnegative. Since Y''_j is convex, γ_j(p) is convex too.
Let X''_i be as in the proof of the previous theorem and define β_i : Δ ⇉ X''_i by
β_i(p) = { x_i ∈ X''_i | p·x_i ≤ p·w_i + ∑_j a_{ij} π_j(p) }.
As in the proof of the previous theorem, the existence of x⁰_i < w_i in X''_i implies that β_i is a continuous multivalued mapping with nonempty values. Since X''_i is compact and convex, β_i has compact convex values. Define δ_i : Δ ⇉ X''_i by
δ_i(p) = { x_i ∈ β_i(p) | u_i(x_i) ≥ u_i(x''_i) for all x''_i ∈ β_i(p) }.
By a theorem of Berge, δ_i is an upper hemi-continuous multivalued mapping with nonempty compact values. Since u_i is quasi-concave, δ_i has convex values. Set
Z(p) = ∑_{i=1}^n δ_i(p) − ∑_{j=1}^k γ_j(p) − w.
This Z is upper hemi-continuous and has nonempty compact convex values. Also, for any z ∈ Z(p), p·z ≤ 0; to see this, just add up the budget constraints of the consumers. By Theorem 3.1, there are some p* ∈ Δ and z* ∈ Z(p*) satisfying z* ≤ 0. Thus there are x*_i ∈ δ_i(p*) and y*_j ∈ γ_j(p*) such that
∑_{i=1}^n x*_i − ∑_{j=1}^k y*_j − w ≤ 0.
It follows, just as in the proof of the previous theorem, that ((x*_i), (y*_j)) is a Walrasian free disposal equilibrium. □
Remark 3.10. The literature on Walrasian equilibrium is enormous. Two standard texts in the field are Debreu and Arrow-Hahn. □
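The excess demand mapping Z(p) used in the proof of Theorem 3.12 is easy to evaluate numerically for a toy economy, and a price p* with z* ≤ 0 in Z(p*), as delivered by the Debreu-Gale-Nikaido lemma, can be located by a crude search over the price simplex. The sketch below does this for a made-up two-good pure exchange economy (no suppliers, Cobb-Douglas demands); it only illustrates the objects involved, not the fixed-point argument itself.

import numpy as np

# Made-up pure exchange economy: two goods, two consumers, Cobb-Douglas demands.
endowments = np.array([[1.0, 0.0], [0.0, 1.0]])   # w_i
alphas     = np.array([0.3, 0.7])                 # consumer i spends alpha_i of income on good 1

def demand(i, p):
    income = p @ endowments[i]
    return np.array([alphas[i] * income / p[0], (1 - alphas[i]) * income / p[1]])

def Z(p):                                          # excess demand (single-valued in this toy case)
    return sum(demand(i, p) for i in range(2)) - endowments.sum(axis=0)

# crude search over the price simplex for p* with Z(p*) <= 0 componentwise
best = min(np.linspace(0.01, 0.99, 981), key=lambda t: np.max(Z(np.array([t, 1 - t]))))
p_star = np.array([best, 1 - best])
print(p_star, Z(p_star))                           # excess demand is approximately zero at p*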

3.3.6 Equilibria for abstract economies

The object of this subsection is to use new fixed-point theorems of Agarwal and O'Regan to establish the existence of equilibrium points of abstract economies. These results improve, extend and complement those in the literature.
Throughout this subsection, I will be a countable set of agents and we describe an abstract economy by Γ = (Q_i, F_i, P_i)_{i∈I} where, for each i ∈ I, Q_i is a choice (or strategy) set, F_i : ∏_{i∈I} Q_i = Q → 2^{Q_i} (nonempty subsets of Q_i) is a constraint multivalued mapping, and P_i : Q → 2^{Q_i} is a preference multivalued mapping; here Q_i will be a subset of a Fréchet space (complete, metrizable locally convex topological vector space) E_i for each i ∈ I. A point x ∈ Q is called an equilibrium point of Γ if for each i ∈ I we have
x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅;
here x_i is the projection of x on E_i.
Theorem 3.13. Let Ω be a closed, convex subset of a Fréchet space E with x_0 ∈ Ω. Suppose that there is an upper semicontinuous map F : Ω → CK(Ω) (here CK(Ω) denotes the family of nonempty, compact, convex subsets of Ω) with the following condition holding:
A ⊆ Ω, A = co({x_0} ∪ F(A)) with A = cl C and C ⊆ A countable, implies A is compact.    (*)
Then F has a fixed point in Ω. □
Remark 3.11. Suppose, in addition to the hypotheses of Theorem 3.13, that
for any A ⊆ Ω we have F(cl A) ⊆ cl F(A);
then we can replace (*) with
C ⊆ Ω countable, C = co({x_0} ∪ F(C)) implies C is compact,    (**)
and the result of Theorem 3.13 is again true. □
Now Theorem 3.13 together with Remark 3.11 yields the following theorem of Mönch type for single valued maps.
Theorem 3.14. Let Ω be a closed, convex subset of a Fréchet space E with x_0 ∈ Ω. Suppose that there is a continuous map f : Ω → Ω with the following condition holding:
C ⊆ Ω countable, C = co({x_0} ∪ f(C)) implies C is compact.
Then f has a fixed point in Ω. □
Next we present a fixed point result of Furi-Pera type.
Theorem 3.15. Let E be a Fréchet space with Q a closed, convex subset of E and 0 ∈ Q. Suppose F : Q → CK(E) is a compact upper semicontinuous map with the following condition holding:
if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0,1] converging to (x, λ) with x ∈ λF(x) and 0 ≤ λ < 1, then there exists n_0 ∈ {1, 2, ...} with {λ_n F(x_n)} ⊆ Q for n ≥ n_0.
Then F has a fixed point in Q. □
Remark 3.12. In Theorem 3.15, if E is a Hilbert space, then one can replace the compact map F : Q → CK(E) with a one-set contractive, condensing map F : Q → CK(E) such that F(Q) is a bounded set in E. □
Let Z be a subset of a Hausdorff topological space E_1 and W a subset of a topological vector space E_2. We say F ∈ DTK(Z, W) if W is convex and there exists a map B : Z → 2^W with
co(B(x)) ⊆ F(x) for all x ∈ Z, B(x) ≠ ∅ for each x ∈ Z,
and the fibres
B^{-1}(y) = { z ∈ Z | y ∈ B(z) }
open (in Z) for each y ∈ W.
The following selection theorem holds.
Theorem 3.16. Let Z be a nonempty, paracompact Hausdorff topological space and W a nonempty, convex subset of a Hausdorff topological vector space. Suppose F ∈ DTK(Z, W). Then F has a continuous selection, that is, there exists a continuous single-valued map f : Z → W with f(z) ∈ F(z) for each z ∈ Z. □
The following result is a fixed point theorem of Furi-Pera type for DTK maps.
Theorem 3.17. Let I be a countable index set and {Q_i}_{i∈I} a family of nonempty closed, convex sets, each in a Fréchet space E_i. Let Q = ∏_{i∈I} Q_i and assume 0 ∈ Q. For each i ∈ I, let F_i ∈ DTK(Q, E_i) be a compact map. Let F : Q → 2^E (here E = ∏_{i∈I} E_i) be given by
F(x) = ∏_{i∈I} F_i(x), for x ∈ Q,
and suppose the following condition holds:
if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0,1] converging to (x, λ) with x ∈ λF(x) and 0 ≤ λ < 1, then there exists n_0 ∈ {1, 2, ...} with {λ_n F(x_n)} ⊆ Q for n ≥ n_0.
Then F has a fixed point in Q. □
Remark 3.13. In Theorem 3.17, if E_i is a Hilbert space for each i ∈ I, then one can replace the compact maps F_i by F : Q → 2^E, a one-set contractive, condensing map with F(Q) a bounded set in E. □
We will now use the above fixed point results to obtain equilibrium theorems

for an abstract economy.
Theorem 3.18. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty closed, convex subset of a Fréchet space E_i,
(2) F_i : Q → CK(Q_i) is upper semicontinuous; here CK(Q_i) denotes the family of nonempty, compact, convex subsets of Q_i,
(3) U_i = { x ∈ Q | F_i(x) ∩ P_i(x) ≠ ∅ } is open in Q,
(4) P_i|U_i : U_i → 2^{E_i} is upper semicontinuous with P_i(x) closed and convex for each x ∈ U_i,
(5) x_i ∉ F_i(x) ∩ P_i(x) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose x_0 ∈ Q with
(6) A ⊆ Q, A ⊆ co({x_0} ∪ F(A)) with A = cl C and C ⊆ A countable, implies A is compact
holding; here F : Q → 2^Q is given by
F(x) = ∏_{i∈I} F_i(x), for x ∈ Q.
Then Γ has an equilibrium point. That is, for each i ∈ I we have
x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅;
here x_i is the projection of x on E_i.
Proof. Fix i ∈ I. Let G_i : U_i → 2^{Q_i} be given by
G_i(x) = F_i(x) ∩ P_i(x),
which is upper semicontinuous. Let H_i : Q → 2^{Q_i} be defined by
H_i(x) = G_i(x) if x ∈ U_i, and H_i(x) = F_i(x) if x ∉ U_i,
which is upper semicontinuous with nonempty, compact, convex values (note G_i(x) ⊆ F_i(x) for x ∈ U_i).
Let H : Q → 2^Q be defined by
H(x) = ∏_{i∈I} H_i(x).
We have that H : Q → CK(Q) is upper semicontinuous. We wish to apply Theorem 3.13 to H. To see this, let A ⊆ Q with A = co({x_0} ∪ H(A)), A = cl C and C ⊆ A countable. Then, since
H(x) ⊆ F(x) for x ∈ A
(note H_i(x) ⊆ F_i(x) for x ∈ Q), we have
A ⊆ co({x_0} ∪ F(A)).
Now (6) guarantees that A is compact. Theorem 3.13 guarantees that there exists x ∈ Q with x ∈ H(x). From (5), we have x ∉ U_i for each i ∈ I. As a result, for each i ∈ I we have x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅; here x_i is the projection of x on E_i. □
Remark 3.14. If F(cl B) ⊆ cl F(B) for any B ⊆ Q, then one can replace (6) in Theorem 3.18 with (see Remark 3.11)
C ⊆ Q countable, C ⊆ co({x_0} ∪ F(C)) implies C is compact. □
Theorem 3.19. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract

economy such that for each i 2 I, the following conditions hold:(1) Qi is a nonempty closed, convex subset of a Fréchet space Ei,(2) Fi : Q! CK(Qi) is upper semicontinuous, compact map,(3) Ui = fx j x 2 Q;Fi(x)

TPi(x) 6= ;g is open in Q,

(4) PijUi : Ui ! 2Ei is upper semicontinuous with Pi(x) closed and convexfor each x 2 Ui,(5) xi =2 Fi(x)

TPi(x); for each x 2 Q; here xi is the projection of x on E:

In addition, suppose 0 2 Q with(6)

if f(xn; �n)gn�1 is a sequence in @Q� [0; 1] converging to (x; �)

with x 2 � F (x) and 0 � � � 1; then there exists n0 2 f1; 2; :::g

with f�nF (xn)g � Q for n � n0


holding; here F : Q! 2E (here E =Qi2I Ei) is given by

F(x) = ∏_{i∈I} F_i(x).

Then � has an equilibrium point x 2 Q. That is, for each i 2 I, we have

xi 2 Fi(x); and Fi(x)\Pi(x) = ;;

here xi is the projection of x on Ei.Proof. Fix i 2 I and let Hi be as in Theorem 3.18. The same reasoning as

in Theorem 3.18 guarantees that Hi : Q ! CK(Ei) is upper semicontinuous.Let H : Q ! 2E be as in proof of previous theorem. Notice H : Q ! CK(E)is an upper semicontinuous, compact map (use (2) with Hi(x) � Fi(x) forx 2 Q). We wish to apply Theorem 3.15. To see this, suppose f(xn; �n)gn�1 isa sequence in @Q � [0; 1] converging to (x; �) with x 2 �H(x) and 0 � � < 1:Then sinceH(x) � F (x) for x 2 Q; we have x 2 �F (x): Now (6) guarantees thatthere exists n0 2 f1; 2; :::g with f�nFn(xn)g � Q for each n � n0: Consequently,f�nHn(xn)g � Q for each n � n0: Theorem 3.15 guarantees that there existsx 2 Q with x 2 H(x); and it is easy to check, as in Theorem 3.18, that x is anequilibrium point of �. �Remark 3.15. Notice (6) can be replaced by

if f(xn; �n)gn�1 is a sequence in @Q� [0; 1] converging to (x; �)

with x 2 � H(x) and 0 � � � 1; then there exists n0 2 f1; 2; :::g

with f�nH(xn)g � Q for n � n0;

where H is given in proof of Theorem 3.18, and the result in Theorem 3.19is again true. �Remark 3.16. If Ei is a Hilbert space for each i 2 I, then one could replace

Fi : Q ! CK(Ei) a compact map for each i 2 I in (2) with F : Q ! 2E aone-set contractive, condensing map with F (Q) a bounded set in E. �Next we present a generalization of Theorems 3.18 and 3.19.Theorem 3.20. Let I be a countable set and � = (Qi; Fi; Pi)i2I an abstract

economy. Assume for each i 2 I that (1), (2), (3) and (5) of Theorem 3.18 hold.In addition, suppose for each i 2 I that there exists an upper semicontinuousselector

i : Ui ! 2Qi of Fi\PijUi : Ui ! 2Qi

(7) with i(x) closed and convex for each x 2 Uiis satis�ed. Then � has an equilibrium point x 2 Q: That is, for each i 2 I,

we have


xi 2 Fi(x) and Fi(x)\Pi(x) = ;;

here xi is the projection of x on Ei.Proof. Fix i 2 I. Let Hi : Q! 2Qi be de�ned by

Hi(x) = i(x); if x 2 Ui;

andHi(x) = Fi(x); if x =2 Ui:

This Hi : Q ! CK(Qi) is upper semicontinuous (note i(x) � Fi(x) forx 2 Ui). Essentially the same reasoning as in Theorem 3.18 onwards establishesresult. �Remark 3.17. If PijUi : Ui ! 2Ei is upper semicontinuous with Pi(x)

closed and convex for each x 2 Ui, then of course (7) holds.If Fi

TPijUi : Ui ! 2Qi is lower semicontinuous with Pi(x) closed and convex

for each x 2 Ui, then (7) holds. �Theorem 3.21. Let I be a countable set and � = (Qi; Fi; Pi)i2I an abstract

economy. Assume for each i 2 I that (1), (2), (3) and (5) of Theorem 3.19 hold.In addition, suppose for each i 2 I that there exists an upper semicontinuousselector

i : Ui ! 2Ei of Fi\PijUi : Ui ! 2Ei

(8) with i(x) closed and convex for each x 2 Uiis satis�ed. Also assume 0 2 Q with (6) holding. Then � has an equilibrium

point.Proof. Fix i 2 I and let Hi be as in Theorem 3.20. Essentially the same

reasoning as in Theorem 3.19 establishes the result. �The theorems so far in this subsection assume Ui is open in Q. Our next

two results consider the case when Ui is closed in Q.Theorem 3.22. Let I be a countable set and � = (Qi; Fi; Pi)i2I an abstract

economy such that for each i 2 I, the following conditions hold:(1) Qi is a nonempty, closed, convex subset of a Fréchet space Ei,(2) Fi : Q! CK(Qi) is lower semicontinuous,(3) Ui = fx j x 2 Q;Fi(x)

TPi(x) 6= ;g is closed in Q,

(4) there exists a lower semicontinuous selector i : Ui ! 2Qi of FiTPijUi :

Ui ! 2Qi with i(x) closed and convex for each x 2 Ui,and(5) xi =2 Fi(x)

TPi(x) for each x 2 Q; here xi is the projection of x on Ei.

In addition, suppose x0 2 Q with(6) A � Q; A � co(fx0g

SF (A)) with A = C and C � A countable, implies

A is compactholding; here F : Q! 2Q is given by


F(x) = ∏_{i∈I} F_i(x).

Then � has an equilibrium point. �Proof. Fix i 2 I and let Hi : Q! 2Qi be given by

Hi(x) = i(x); if x 2 UiHi(x) = Fi(x); if x =2 Ui:

This Hi : Q! CK(Qi) is lower semicontinuous. Then, there exists an uppersemicontinuous selector �i : Q! CK(Qi) of Hi. Let � : Q! 2Q be given by

�(x) =Yi2I�i(x); for x 2 Q:

Now � : Q! CK(Q) is upper semicontinuous. We wish to apply Theorem3.13 to �. To see this, let A � Q with A = co(fx0g

S�(A)), A = C and C � A

countable. Then since

�(x) � F (x); for x 2 Q(note �i(x) � Hi(x) � Fi(x), for x 2 Q), we have

A � co(fx0g[F (A)):

Now (5) guarantees that A is compact. Theorem 3.13 guarantees that thereexists x 2 Q with x 2 �(x): Now if x 2 Ui for some i 2 I, then

xi 2 �i(x) � Hi(x) = i(x)

(here xi is the projection of x on Ei), and so xi 2 Fi(x)TPi(x), a contra-

diction. As a result x =2 Ui for each i 2 I, so xi 2 Fi(x) and Fi(x)TPi(x) = ;:

�Remark 3.18. If Fi

TPijUi : Ui ! 2Qi is lower semicontinuous with Pi(x)

closed and convex for each x 2 Ui, then (4) is clearly satis�ed. �Theorem 3.23. Let I be a countable set and � = (Qi; Fi; Pi)i2I an abstract

economy such that for each i 2 I, the following conditions hold:(1) Qi is a nonempty, closed, convex subset of a Fréchet space Ei,(2) Fi : Q! CK(Ei) is lower semicontinuous, compact map,(3) Ui = fx j x 2 Q;Fi(x)

TPi(x) 6= ;g is closed in Q,

(4) there exists a lower semicontinuous selector i : Ui ! 2Ei of FiTPijUi :

Ui ! 2Ei with i(x) closed and convex for each x 2 Ui,and(5) xi =2 Fi(x)

TPi(x) for each x 2 Q; here xi is the projection of x on Ei.

In addition, suppose 0 2 Q with(6)

if f(xn; �n)gn�1 is a sequence in @Q� [0; 1] converging to (x; �)


with x 2 � F (x) and 0 � � � 1; then there exists n0 2 f1; 2; :::g

with f�nF (xn)g � Q for n � n0

holding; here F : Q! 2E (here E =Qi2I Ei) is given by

F(x) = ∏_{i∈I} F_i(x).

Then � has an equilibrium point x 2 Q.Proof. Fix i 2 I and letHi be as in Theorem 3.22. The same reasoning as in

Theorem 3.22 guarantees that Hi : Q! CK(Ei) is upper semicontinuous, andthat there exists an upper semicontinuous selector �i : Q! CK(Ei) of Hi: Let� : Q ! 2E be as in proof of previous theorem. Notice � : Q ! CK(E) is anupper semicontinuous, compact map (use (2) with �i(x) � Fi(x) for x 2 Q). Wewish to apply Theorem 3.22. To see this, suppose f(xn; �n)gn�1 is a sequencein @Q � [0; 1] converging to (x; �) with x 2 ��(x) and 0 � � < 1: Then since�(x) � F (x) for x 2 Q; we have x 2 �F (x): Now (6) guarantees that thereexists n0 2 f1; 2; :::g with f�nFn(xn)g � Q for each n � n0: Consequently,f�n�n(xn)g � Q for each n � n0: Theorem 3.22 guarantees that there existsx 2 Q with x 2 �(x); and it is easy to check, as in Theorem 3.22, that x is anequilibrium point of �. �Next we discuss an abstract economy � = (Qi; Fi; Gi; Pi)i2I (here I is count-

able) where for each i 2 I, Qi � Ei is the choice set, Fi; Gi :Qi2I Qi = Q! 2Ei

are constraint multivalued mapping, and Pi : Q ! 2Ei is a preference multi-valued mapping. A point x 2 Q is called an equilibrium point of � if for eachi 2 I, we have

xi 2 clEiGi(x) = Gi(x) and Fi(x)\Pi(x) = ;

(here xi is the projection of x on Ei). The results which follows improvethose of Regan, Ding, Kim, Tan, Yannelis and Prabhaker. We establish a new�xed point result for DTK maps.Theorem 3.24. Let I be a countable index set and fQigi2I a family of

nonempty closed, convex sets each in a Fréchet Ei. For each i 2 I, let Gi 2DTK(Q;Qi) where Q =

Qi2I Qi: Assume x0 2 Q and suppose G : Q ! 2Q,

de�ned by G(x) =Qi2I Gi(x) for x 2 Q, satis�es the following condition:

C � Q countable; C � co(fx0g[G(C)) implies C is compact:

Then G has a �xed point in Q.Proof. Since Q is a subset of a metrizable space E =

Qi2I Ei we have that

Q is paracompact. Fix i 2 I. Then Gi 2 DTK(Q;Qi) together with Theorem3.16 guarantees that there exists a continuous selector gi : Q ! Qi of Gi. Letg : Q! Q be de�ned by

114

Page 115: GamesTheorySB,ESA,etc

g(x) =Yi2I

gi(x); for x 2 Q:

Notice G : Q! Q is continuous and g is a selector of G. We now show

if C � Q is countable and C = co(fx0g[g(C)) then C is compact:

To see this, notice if C � Q is countable and C = co(fx0gSg(C)), then

since g is a selector of G, we have

C � co(fx0g[G(C)):

Now the condition in state of theorem implies C is compact. Theorem 3.14guarantees that there exists x 2 Q with x = g(x): That is,

x = g(x) =Yi2I

gi(x) �Yi2I

Gi(x) = G(x):

Theorem 3.25. Let I be a countable set and Γ = (Q_i, F_i, G_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty, closed, convex subset of a Fréchet space E_i;
(2) for each x ∈ Q, F_i(x) ≠ ∅ and co(F_i(x)) ⊆ G_i(x);
(3) for each y_i ∈ Q_i, the set [(co P_i)^{-1}(y_i) ∪ M_i] ∩ F_i^{-1}(y_i) is open in Q; here M_i = { x ∈ Q | F_i(x) ∩ P_i(x) = ∅ };
(4) G_i : Q → 2^{Q_i}; and
(5) x_i ∉ co(P_i(x)) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose x_0 ∈ Q with
(6) C ⊆ Q countable, C ⊆ co({x_0} ∪ G(C)) implies C is compact
holding; here G : Q → 2^Q is given by
(7) G(x) = ∏_{i∈I} G_i(x), for x ∈ Q.
Then Γ has an equilibrium point x ∈ Q. That is, for each i ∈ I we have
x_i ∈ G_i(x) and F_i(x) ∩ P_i(x) = ∅;
here x_i is the projection of x on E_i.
Proof. For each i ∈ I, let
N_i = { x ∈ Q | F_i(x) ∩ P_i(x) ≠ ∅ },
and for each x ∈ Q let
I(x) = { i ∈ I | F_i(x) ∩ P_i(x) ≠ ∅ }.
For each i ∈ I, define multivalued mappings A_i, B_i : Q → 2^{Q_i} by
A_i(x) = co P_i(x) ∩ F_i(x) if i ∈ I(x) (that is, x ∈ N_i), and A_i(x) = F_i(x) if i ∉ I(x),
and
B_i(x) = co P_i(x) ∩ G_i(x) if i ∈ I(x) (that is, x ∈ N_i), and B_i(x) = G_i(x) if i ∉ I(x).
It is easy to see (using (2) and the definition of I(x)) that for each i ∈ I and x ∈ Q,
co(A_i(x)) ⊆ B_i(x) and A_i(x) ≠ ∅.

Also, for each i ∈ I and y_i ∈ Q_i we have
A_i^{-1}(y_i) = { x ∈ Q | y_i ∈ A_i(x) }
= { x ∈ N_i | y_i ∈ co P_i(x) ∩ F_i(x) } ∪ { x ∈ M_i | y_i ∈ F_i(x) }
= [ { x ∈ N_i | y_i ∈ co P_i(x) } ∩ { x ∈ N_i | y_i ∈ F_i(x) } ] ∪ { x ∈ M_i | y_i ∈ F_i(x) }
= { [ (co P_i)^{-1}(y_i) ∩ F_i^{-1}(y_i) ] ∩ N_i } ∪ [ F_i^{-1}(y_i) ∩ M_i ]
= [ (co P_i)^{-1}(y_i) ∩ F_i^{-1}(y_i) ] ∪ [ F_i^{-1}(y_i) ∩ M_i ]
= [ (co P_i)^{-1}(y_i) ∪ M_i ] ∩ F_i^{-1}(y_i),
which is open in Q by (3). Thus B_i ∈ DTK(Q, Q_i).
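The chain of set identities just established is purely combinatorial and can be sanity-checked on random finite data. The sketch below is an added illustration only: it draws random finite correspondences on a small finite set (with one correspondence playing the role of both co P_i and P_i, which is harmless for the identity) and confirms that A_i^{-1}(y_i) = [(co P_i)^{-1}(y_i) ∪ M_i] ∩ F_i^{-1}(y_i).

import random

random.seed(0)
Q  = range(8)                       # a small finite stand-in for the strategy space
Yi = range(4)                       # a small finite stand-in for Q_i

# random finite "correspondences": coP[x] plays the role of co P_i(x), Fset[x] of F_i(x)
coP  = {x: {y for y in Yi if random.random() < 0.4} for x in Q}
Fset = {x: {y for y in Yi if random.random() < 0.6} or {0} for x in Q}

M = {x for x in Q if not (Fset[x] & coP[x])}          # M_i = {x : F_i(x) ∩ P_i(x) = empty}
def A(x):                                             # the map A_i of the proof
    return (coP[x] & Fset[x]) if (Fset[x] & coP[x]) else Fset[x]

for y in Yi:
    lhs = {x for x in Q if y in A(x)}                                  # A_i^{-1}(y)
    inv_coP = {x for x in Q if y in coP[x]}
    inv_F   = {x for x in Q if y in Fset[x]}
    rhs = (inv_coP | M) & inv_F                                        # right side of the identity
    assert lhs == rhs
print("fibre identity verified on random data")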

Let B : Q → 2^Q be defined by
B(x) = ∏_{i∈I} B_i(x), for x ∈ Q.
We now show:
C ⊆ Q countable, C ⊆ co({x_0} ∪ B(C)) implies C is compact.
To see this, let C ⊆ Q be countable with C ⊆ co({x_0} ∪ B(C)). Since B(x) ⊆ G(x) for x ∈ Q (note that B_i(x) ⊆ G_i(x) for each i ∈ I and x ∈ Q), we have
C ⊆ co({x_0} ∪ G(C)).
Now (6) implies C is compact, so we have the above implication. Theorem 3.24 guarantees that there exists x ∈ Q with x ∈ B(x), that is, x_i ∈ B_i(x) for each i ∈ I. Note that if i ∈ I(x) for some i ∈ I, then F_i(x) ∩ P_i(x) ≠ ∅ and so x_i ∈ co(P_i(x)) ∩ G_i(x); in particular x_i ∈ co(P_i(x)), and this contradicts (5). Thus i ∉ I(x) for all i ∈ I. Consequently, F_i(x) ∩ P_i(x) = ∅ and x_i ∈ G_i(x) for all i ∈ I. □
Theorem 3.26. Let I be a countable set and Γ = (Q_i, F_i, G_i, P_i)_{i∈I} an

abstract economy such that for each i 2 I, the following conditions hold:(1) Qi is a nonempty, closed, convex subset of a Fréchet space Ei;(2) for each x 2 Q, Fi(x) 6= ; and co(Fi(x)) � Gi(x);(3) for each yi 2 Ei; the set [(coPi)�1(yi)

SMi]

TF�1i (yi) is open in Q; here

Mi = fx j x 2 Q;Fi(x)TPi(x) = ;g;

(4) Gi : Q! 2Ei , is a compact map, and(5) xi =2 co(Pi(x)) for each x 2 Q; here xi is the projection of x on Ei:In addition, suppose 0 2 Q with(6)

if f(xn; �n)gn�1 is a sequence in @Q� [0; 1] converging to (x; �)

with x 2 � G(x) and 0 � � � 1; then there exists n0 2 f1; 2; :::g

with f�nG(xn)g � Q for n � n0

holding; here G : Q! 2E (here E =Qi2I Ei) is given by

G(x) =Yi2I

Gi(x):

Then � has an equilibrium point x 2 Q. That is, for each i 2 I, we have

xi 2 Gi(x) and Fi(x)\Pi(x) = ;;

here xi is the projection of x on Ei.Proof. For each i 2 I, let Ni; Ai and Bi be as in Theorem 3.25. Essentially

the same reasoning as in Theorem 3.25 guarantees that Bi 2 DTK(Q;Ei) foreach i 2 I. Also note that Bi is a compact map for each i 2 I. Let B : Q! 2E

be as in proof of previous theorem. We wish to apply Theorem 3.17. To seethis, suppose f(xn; �n)gn�1 is a sequence in @Q � [0; 1] converging to (x; �)with x 2 �B(x) and 0 � � < 1. Then, since B(x) � G(x) for x 2 Q, wehave x 2 �G(x): Now (6) guarantees that there exists n0 2 f1; 2; :::g withf�nG(xn)g � Q for each n � n0: Consequently, f�nB(xn)g � Q for eachn � n0: Theorem 3.17 guarantees that there exists x 2 Q with x 2 B(x); and itis easy to check, as in Theorem 3.25, that x is an equilibrium point of �. �


Remark 3.19. If $E_i$ is a Hilbert space for each $i \in I$, then one could replace "$G_i : Q \to 2^{E_i}$ a compact map for each $i \in I$" in (4) with "$G : Q \to 2^E$ a 1-set contractive, condensing map with $G(Q)$ a bounded set in $E$". $\square$

Finally, in this subsection we present two more results for upper semicontinuous maps, which extend well-known results in the literature.

Theorem 3.27. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, G_i, P_i)_{i \in I}$ an abstract economy such that for each $i \in I$, the following conditions hold:

(1) $Q_i$ is a nonempty, closed, convex subset of a Fréchet space $E_i$;

(2) $F_i : Q \to 2^{Q_i}$ is such that $co(F_i(x)) \subseteq G_i(x)$;

(3) $G_i : Q \to 2^{Q_i}$ and $G_i(x)$ is convex for each $x \in Q$;

(4) the multivalued mapping $\overline{G}_i : Q \to CK(Q_i)$, defined by $\overline{G}_i(x) = cl_{Q_i} G_i(x)$, is upper semicontinuous;

(5) for each $y_i \in Q_i$, $F_i^{-1}(y_i)$ is open in $Q$;

(6) for each $y_i \in Q_i$, $P_i^{-1}(y_i)$ is open in $Q$; and

(7) $x_i \notin co(P_i(x))$ for each $x \in Q$; here $x_i$ is the projection of $x$ on $E_i$.

In addition, suppose $x_0 \in Q$ with

(8) $A \subseteq Q$, $A \subseteq co(\{x_0\} \cup G(A))$ with $A = \overline{C}$ and $C \subseteq A$ countable, implies $A$ is compact

holding; here $G : Q \to 2^Q$ is given by

(9) $G(x) = \prod_{i \in I} G_i(x)$, for $x \in Q$.

Then $\Gamma$ has an equilibrium point $x \in Q$. That is, for each $i \in I$, we have

$x_i \in G_i(x)$ and $F_i(x) \cap P_i(x) = \emptyset$.

Proof. Fix $i \in I$ and let $\Phi_i : Q \to 2^{Q_i}$ be defined by

$\Phi_i(x) = co(F_i(x)) \cap co(P_i(x))$, for $x \in Q$,

and

$U_i = \{x \in Q : \Phi_i(x) \neq \emptyset\}$.

Now (5) and (6), together with a result of Yannelis and Prabhakar, imply for each $y \in Q_i$ that $(co\,F_i)^{-1}(y)$ and $(co\,P_i)^{-1}(y)$ are open in $Q$. As a result, for each $y \in Q_i$ we have that

$\Phi_i^{-1}(y) = (co\,F_i)^{-1}(y) \cap (co\,P_i)^{-1}(y)$

is open in $Q$. Now it is easy to check that

$U_i = \bigcup_{y \in Q_i} \Phi_i^{-1}(y)$,

and as a result $U_i$ is open in $Q$. Since $U_i$ is a subset of the metrizable space $E = \prod_{i \in I} E_i$, $U_i$ is paracompact. Notice as well that

$\Psi_i = \Phi_i|_{U_i} : U_i \to 2^{Q_i}$

has convex values. Also, for $y \in Q_i$ we have

$\Psi_i^{-1}(y) = \{x \in U_i : y \in \Psi_i(x)\} = \{x \in Q : y \in \Phi_i(x)\} \cap U_i = \Phi_i^{-1}(y) \cap U_i$,

so $\Psi_i^{-1}(y)$ is open in $U_i$. Theorem 3.16 guarantees that there exists a continuous selection $f_i : U_i \to Q_i$ of $\Psi_i$. For each $i \in I$, let $H_i : Q \to 2^{Q_i}$ be given by

$H_i(x) = \{f_i(x)\}$, if $x \in U_i$, and $H_i(x) = G_i(x)$, if $x \notin U_i$.

This $H_i$ is upper semicontinuous (note for each $x \in U_i$ that $\{f_i(x)\} \subseteq \Psi_i(x) \subseteq co(F_i(x)) \subseteq G_i(x)$). Also, notice that (4) guarantees that $H_i : Q \to CK(Q_i)$. Let $H : Q \to 2^Q$ be given by

$H(x) = \prod_{i \in I} H_i(x)$, for $x \in Q$.

This $H : Q \to CK(Q)$ is upper semicontinuous. We wish to apply Theorem 3.13 to $H$. To see this, let $A \subseteq Q$ with $A = co(\{x_0\} \cup H(A))$, $A = \overline{C}$, and $C \subseteq A$ countable. Then, since $H_i(x) \subseteq G_i(x)$ for each $x \in Q$, we have

$H(x) \subseteq \prod_{i \in I} G_i(x) = G(x)$, for $x \in Q$.

Thus

$A \subseteq co(\{x_0\} \cup G(A))$,

so (8) guarantees that $A$ is compact. Theorem 3.13 guarantees that there exists $x \in Q$ with $x \in H(x)$. If $x \in U_i$ for some $i$, then

$x_i = f_i(x) \in co(F_i(x)) \cap co(P_i(x)) \subseteq co(P_i(x))$.

This contradicts (7). Thus, for each $i \in I$ we must have $x \notin U_i$, so $x_i \in G_i(x)$ and $co(F_i(x)) \cap co(P_i(x)) = \emptyset$. Our result follows since

$F_i(x) \cap P_i(x) \subseteq co(F_i(x)) \cap co(P_i(x))$. $\square$

Remark 3.20. Notice that (5) and (6) in the last theorem could be replaced by

(10) for each $i \in I$ and each $y_i \in Q_i$, $(co\,F_i)^{-1}(y_i) \cap (co\,P_i)^{-1}(y_i)$ is open in $Q$,

and the result is again true. $\square$

Theorem 3.28. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, G_i, P_i)_{i \in I}$ an

abstract economy such that for each $i \in I$, the following conditions hold:

(1) $Q_i$ is a nonempty, closed, convex subset of a Fréchet space $E_i$;

(2) $F_i : Q \to 2^{E_i}$ is such that $co(F_i(x)) \subseteq G_i(x)$;

(3) $G_i : Q \to 2^{E_i}$ and $G_i(x)$ is convex for each $x \in Q$;

(4) the multivalued mapping $\overline{G}_i : Q \to CK(E_i)$, defined by $\overline{G}_i(x) = cl_{E_i} G_i(x)$, is upper semicontinuous;

(5) for each $y_i \in E_i$, $F_i^{-1}(y_i)$ is open in $Q$;

(6) for each $y_i \in E_i$, $P_i^{-1}(y_i)$ is open in $Q$; and

(7) $x_i \notin co(P_i(x))$ for each $x \in Q$; here $x_i$ is the projection of $x$ on $E_i$.

In addition, suppose $x_0 \in Q$ with

(8) if $\{(x_n, \lambda_n)\}_{n \geq 1}$ is a sequence in $\partial Q \times [0, 1]$ converging to $(x, \lambda)$ with $x \in \lambda G(x)$ and $0 \leq \lambda \leq 1$, then there exists $n_0 \in \{1, 2, \ldots\}$ with $\{\lambda_n G(x_n)\} \subseteq Q$ for $n \geq n_0$

holding; here $G : Q \to 2^E$ (where $E = \prod_{i \in I} E_i$) is given by

$G(x) = \prod_{i \in I} G_i(x)$.

Then $\Gamma$ has an equilibrium point $x \in Q$. That is, for each $i \in I$, we have

$x_i \in G_i(x)$ and $F_i(x) \cap P_i(x) = \emptyset$;

here $x_i$ is the projection of $x$ on $E_i$.

Proof. Let $\Phi_i, U_i, H_i$ and $H$ be as in the previous theorem. Essentially the same reasoning as in the previous theorem guarantees that $H : Q \to CK(E)$ is upper semicontinuous. Notice also that $H$ is compact. We wish to apply Theorem 3.15 to $H$. To see this, suppose $\{(x_n, \lambda_n)\}_{n \geq 1}$ is a sequence in $\partial Q \times [0, 1]$ converging to $(x, \lambda)$ with $x \in \lambda H(x)$ and $0 \leq \lambda < 1$. Then, since $H(x) \subseteq G(x)$ for $x \in Q$, we have $x \in \lambda G(x)$. Now (8) guarantees that there exists $n_0 \in \{1, 2, \ldots\}$ with $\{\lambda_n G(x_n)\} \subseteq Q$ for each $n \geq n_0$. Consequently, $\{\lambda_n H(x_n)\} \subseteq Q$ for each $n \geq n_0$. Theorem 3.15 guarantees that there exists $x \in Q$ with $x \in H(x)$. $\square$

Theorem 3.29. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, G_i, P_i)_{i \in I}$ an

abstract economy such that for each $i \in I$, the following conditions hold:

(1) $Q_i$ is convex;

(2) $D_i$ is a nonempty compact subset of $Q_i$;

(3) for each $x \in Q$, $F_i(x)$ is a nonempty convex subset of $D_i$;

(4) for each $x_i \in D_i$, $\{P_i^{-1}(x_i) \cup U_i\} \cap F_i^{-1}(x_i)$ contains a relatively open subset $O_{x_i}$ of $co\,D$ such that $\bigcup_{x_i \in D_i} O_{x_i} = co\,D$, where $U_i = \{x \in Q : P_i(x) \cap F_i(x) = \emptyset\}$ and $D = \prod_{i \in I} D_i$;

(5) for each $x = \{x_i\} \in Q$, $x_i \notin co\,P_i(x)$.

Then $\Gamma$ has an equilibrium point.

Proof. For each $i \in I$, let

$G_i = \{x \in Q : P_i(x) \cap F_i(x) \neq \emptyset\}$

and for each $x \in Q$, let

$I(x) = \{i \in I : P_i(x) \cap F_i(x) \neq \emptyset\}$.

Now for each $i \in I$ we define a multivalued mapping $T_i : Q \to 2^{D_i}$ by

$T_i(x) = co\,P_i(x) \cap F_i(x)$, if $i \in I(x)$, and $T_i(x) = F_i(x)$, if $i \notin I(x)$.

Clearly, for each $x \in Q$, $T_i(x)$ is a nonempty convex subset of $D_i$. Also, for each $y_i \in D_i$,

$T_i^{-1}(y_i) = [\{(co\,P_i)^{-1}(y_i) \cap F_i^{-1}(y_i)\} \cap G_i] \cup [F_i^{-1}(y_i) \cap U_i]$
$\supseteq [\{P_i^{-1}(y_i) \cap F_i^{-1}(y_i)\} \cap G_i] \cup [F_i^{-1}(y_i) \cap U_i]$
$= [P_i^{-1}(y_i) \cap F_i^{-1}(y_i)] \cup [F_i^{-1}(y_i) \cap U_i] = [P_i^{-1}(y_i) \cup U_i] \cap F_i^{-1}(y_i)$.

We note that the first inclusion follows from the fact that, for each $y_i \in D_i$, $P_i^{-1}(y_i) \subseteq (co\,P_i)^{-1}(y_i)$, because $P_i(x) \subseteq (co\,P_i)(x)$ for each $x \in Q$. Furthermore, by virtue of (4), for each $y_i \in D_i$, $T_i^{-1}(y_i)$ contains a relatively open set $O_{y_i}$ of $Q$ such that $\bigcup_{y_i \in D_i} O_{y_i} = co\,D$. Hence, by a result of Husain and Tarafdar, there exists a point $x = \{x_i\}$ such that $x_i \in T_i(x)$ for each $i \in I$. By condition (5) and the definition of $T_i$, it now easily follows that $x \in Q$ is an equilibrium point of $\Gamma$. $\square$

Corollary 3.4. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, G_i, P_i)_{i \in I}$ an abstract economy such that for each $i \in I$, the following conditions hold:

(1) $Q_i$ is convex;

(2) $D_i$ is a nonempty compact subset of $Q_i$;

(3) for each $x \in Q$, $F_i(x)$ is a nonempty convex subset of $D_i$;

(4) the set $G_i = \{x \in Q : P_i(x) \cap F_i(x) \neq \emptyset\}$ is a closed subset of $Q$;

(5) for each $y_i \in D_i$, $P_i^{-1}(y_i)$ is a relatively open subset of $G_i$ and $F_i^{-1}(y_i)$ is a relatively open subset of $Q$;

(6) for each $x = \{x_i\} \in Q$, $x_i \notin co\,P_i(x)$.


Then there is an equilibrium point of the economy $\Gamma$.

Proof. Since $P_i^{-1}(y_i)$ is relatively open in $G_i$, there is an open subset $V_i$ of $Q$ with $P_i^{-1}(y_i) = G_i \cap V_i$. Hence, for $y_i \in D_i$,

$P_i^{-1}(y_i) \cup U_i = (G_i \cap V_i) \cup U_i = Q \cap (V_i \cup U_i)$.

Thus

$\{P_i^{-1}(y_i) \cup U_i\} \cap F_i^{-1}(y_i) = (V_i \cup U_i) \cap F_i^{-1}(y_i) = O_{y_i}$,

say, which is a relatively open subset of $Q$ for each $y_i \in D_i$, since $V_i$, $U_i$ and $F_i^{-1}(y_i)$ are open subsets of $Q$. It now follows that $\bigcup_{y_i \in D_i} O_{y_i} = co\,D$. The corollary is thus a consequence of Theorem 3.29. $\square$
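To make the equilibrium notion used in Theorems 3.25-3.29 concrete in the simplest possible setting, the following Python sketch checks the defining conditions $x_i \in F_i(x)$ and $F_i(x) \cap P_i(x) = \emptyset$ by brute force for a finite abstract economy. It is only an illustration of the equilibrium concept under hypothetical data (the sets and mappings below are invented for the example), not of the infinite-dimensional fixed point machinery used in the proofs above.

```python
from itertools import product

def equilibria(strategy_sets, F, P):
    """Brute-force search for equilibrium points of a finite abstract economy.

    strategy_sets : list of finite strategy sets Q_i (one per player)
    F, P          : F[i](x) and P[i](x) return the constraint set F_i(x)
                    and the preference set P_i(x) as Python sets
    Returns all profiles x with x_i in F_i(x) and F_i(x) & P_i(x) empty.
    """
    found = []
    for x in product(*strategy_sets):
        if all(x[i] in F[i](x) and not (F[i](x) & P[i](x))
               for i in range(len(strategy_sets))):
            found.append(x)
    return found

# Two players, two strategies each (hypothetical data):
Q = [{0, 1}, {0, 1}]
F = [lambda x: {0, 1},            # player 1 is unconstrained
     lambda x: {x[0]}]            # player 2 must match player 1
P = [lambda x: {1} - {x[0]},      # player 1 strictly prefers 1 unless already there
     lambda x: set()]             # player 2 has no strict preference

print(equilibria(Q, F, P))        # -> [(1, 1)]
```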

3.4 Existence of first-order locally consistent equilibria

3.4.1 Introduction

A first-order locally consistent equilibrium (1-LCE) of a game is a configuration of strategies at which the first-order condition for payoff maximization is simultaneously satisfied for all players. The economic motivation for introducing this equilibrium concept is that oligopolistic firms do not know their effective demand function, but "at any given status quo each firm knows only the linear approximation of its demand curve and believes it to be the demand curve it faces". In what follows, in order to distinguish between the abstract concept of 1-LCE, that is, a configuration of a game in which the first-order condition for payoff maximization is satisfied for all players, and its economic interpretation, that is, a profit-maximizing configuration in a market or in an economy in which firms know only the linear approximation of their demand functions, the latter equilibrium concept will be called a first-order locally consistent economic equilibrium (1-LCEE) (see [1], [22], [23]).

3.4.2 First-order equilibria for non-cooperative games

Consider the following non-cooperative game $\Gamma = (I, (S_i), (H_i))_{i \in I}$, where $I = \{1, 2, \ldots, n\}$ is the index set of players, $S_i$ is the strategy set of player $i$, and $H_i$ is the payoff function of player $i$. Set $S = \prod_{i \in I} S_i$ and $S_{-i} = \prod_{j \in I, j \neq i} S_j$. The generic element of the set $S$ (respectively $S_{-i}$, respectively $S_i$) is denoted by $x$ (resp. $x_{-i}$, resp. $x_i$). Denote by $D_{x_i} H_i$ the derivative of $H_i$ with respect to $x_i$; its value at the point $x$ is denoted by $D_{x_i} H_i(x)$.

A.1. For every $i \in I$, $S_i$ is a convex and compact subset of a Banach space.

A.2. For every $i \in I$, the function $H_i : S \to \mathbb{R}$ is continuous; moreover, for every $x \in S$ the derivative $D_{x_i} H_i$ exists and is continuous, that is, there exists an open set $W_i' \supseteq S_i$ and an extension of the function $H_i$ to $W_i'$ which is continuously differentiable with respect to $x_i$.

Definition 3.7. A 1-LCE for the game $\Gamma$ is a configuration $x^* \in S$ such that:

(i) if $x_i^* \in S_i \setminus \partial S_i$, then $D_{x_i} H_i(x^*) = 0$;

(ii) if $x_i^* \in \partial S_i$, then there exists a neighborhood $N(x_i^*)$ of $x_i^*$ in $S_i$ such that $D_{x_i} H_i(x^*)(x_i - x_i^*) \leq 0$ for every $x_i \in N(x_i^*)$. $\square$


Condition (ii) means that if $x_i^*$ belongs to the boundary of the strategy set, then either it satisfies the first-order condition for payoff maximization, or it is a local maximum. Notice that Definition 3.7 is in line with the usual idea that at a 1-LCE players carry out local experiments by employing the linear approximation of some appropriate function, in this case the payoff function.

Given a configuration $x^0 \in S$, interpreted as the status quo, define the function $\varphi_i : S \times S_i \to \mathbb{R}$ as follows: $\varphi_i(x^0, x_i) = H_i(x^0) + D_{x_i} H_i(x^0)(x_i - x_i^0)$. With some abuse of language, the following fictitious $n$-person non-cooperative game $\Gamma^c = (I, (S_i), (H_i), (\varphi_i))_{i \in I}$ will be associated with the game $\Gamma$. In the game $\Gamma^c$, given the status quo $x^0$, the best strategy for player $i$ is the solution to the following problem:

$(P_i)$  $\max \varphi_i(x^0, x_i)$, subject to $x_i \in S_i$.

Denote by $F_i(x^0)$ the set of solutions to problem $(P_i)$. If we interpret the game $\Gamma^c$ as an oligopolistic game among firms which choose, for example, the level of production, then the behavioral hypothesis underlying problem $(P_i)$ is that, given the status quo, firms maximize the linear approximation of their profit functions.

Definition 3.8. An equilibrium for the game $\Gamma^c$ is a configuration $x^* \in S$ such that $x_i^* \in F_i(x^*)$ for every $i \in I$. Denote by $E(\Gamma^c)$ the set of equilibria of the game $\Gamma^c$, and by $LCE(\Gamma)$ the set of 1-LCEs of the game $\Gamma$. $\square$

Theorem 3.30. Under A.1 and A.2, $LCE(\Gamma) \neq \emptyset$.

Proof. First we show that $E(\Gamma^c) = LCE(\Gamma)$. Suppose that $x^* \in E(\Gamma^c)$. It is sufficient to show that $x^*$ satisfies the following conditions:

(i) if $x_i^* \in S_i \setminus \partial S_i$, then $D_{x_i} H_i(x^*) = 0$;

(ii) if $x_i^* \in \partial S_i$, then $D_{x_i} H_i(x^*)(x_i - x_i^*) \leq 0$ for every $x_i \in S_i$.

To this end, suppose that $x_i^* \in S_i \setminus \partial S_i$ but $D_{x_i} H_i(x^*) \neq 0$. Since $x_i^*$ is an interior point of $S_i$, the linearity of $\varphi_i$ implies that there exists a point $x_i^{\dagger} \in \partial S_i$ such that $\varphi_i(x^*, x_i^{\dagger}) > \varphi_i(x^*, x_i^*)$, which is a contradiction. Suppose now that $x_i^* \in \partial S_i$ and $D_{x_i} H_i(x^*)(x_i - x_i^*) > 0$ for some $x_i \in S_i$. Clearly, $x_i^*$ does not solve problem $(P_i)$, a contradiction. Summarizing, $x^* \in LCE(\Gamma)$.

Finally, suppose that $x^* \in LCE(\Gamma)$. Then $x^*$ satisfies conditions (i) and (ii) in Definition 3.7. If $x_i^* \in S_i \setminus \partial S_i$, then $D_{x_i} H_i(x^*) = 0$; therefore $\varphi_i(x^*, x_i) = H_i(x^*)$ for every $x_i \in S_i$. It follows that $x_i^*$ solves problem $(P_i)$. Consider now the case $x_i^* \in \partial S_i$ with $D_{x_i} H_i(x^*)(x_i - x_i^*) \leq 0$ for every $x_i$ in some neighborhood $N(x_i^*)$ of $x_i^*$. By linearity, one obtains that $D_{x_i} H_i(x^*)(x_i - x_i^*) \leq 0$ for every $x_i \in S_i$. Thus, also in this case $x_i^*$ solves problem $(P_i)$. Therefore $x^* \in E(\Gamma^c)$.

Now it is sufficient to show that $E(\Gamma^c) \neq \emptyset$. By A.2 it follows that the function $\varphi_i : S \times S_i \to \mathbb{R}$ is continuous. Thus, by Berge's maximum theorem, the multivalued mapping $F_i : S \rightrightarrows S_i$ is upper hemicontinuous. It is also convex-valued because of the linearity of $\varphi_i$. Define the multivalued mapping $F : S \rightrightarrows S$ by $F = \prod_{i \in I} F_i$. Because of A.1, Bohnenblust and Karlin's fixed point theorem ensures that there exists $x^* \in S$ such that $x^* \in F(x^*)$. Thus $x^* \in E(\Gamma^c)$. $\square$


3.4.3 Existence of a first-order economic equilibrium

Next, in the following example, we prove the existence of a first-order locally consistent economic equilibrium in a model of monopolistic competition similar to that of Bonanno and Zeeman.

Example 3.2. We consider a monopolistically competitive market with $n$ price-making firms, $i \in I$, $I = \{1, 2, \ldots, n\}$. The cost function of firm $i$ is $C_i(q_i) = c_i q_i$, where $q_i$ is the level of output of firm $i$ and $c_i$ is a positive number. We assume that firm $i$ may choose any price in the interval $J_i = [c_i, P_i]$. Set $J = \prod_{i \in I} J_i$ and $J_{-i} = \prod_{j \in I, j \neq i} J_j$. The price set by firm $i$ is denoted by $p_i$. Denote by $p_{-i}$ the $(n-1)$-dimensional vector whose elements are the prices set by all firms except the $i$-th one. Set $p = (p_i, p_{-i})$. The function $D_i : J \to \mathbb{R}$ is the demand function of firm $i$, written $D_i(p)$. The true profits of the firms are given by

$H_i(p) = D_i(p)(p_i - c_i)$. $\square$

Next, we show that there exists a first-order locally consistent economic equilibrium for the above monopolistic market. We suppose that:

A.1. For every $i \in I$, the function $D_i$ is continuous on $J$, and the derivative $\partial D_i / \partial p_i : J \to \mathbb{R}$ exists and is continuous.

A.2. For every $p_{-i} \in J_{-i}$, if $D_i(p_i', p_{-i}) = 0$ with $p_i' \in J_i \setminus \{P_i\}$, then $(\partial D_i / \partial p_i)(p_i', p_{-i}) \leq 0$ and $D_i(p_i'', p_{-i}) = 0$ for every $p_i'' \geq p_i'$.

Here it is possible that, for every price in $J_i$, firm $i$'s market demand is zero.

Remark 3.21. We shall assume that firms maximize their conjectural profit function, calculated by taking into account the linear approximation of their demand function. Given the status quo $p^0 \in J$, the conjectural demand of firm $i$ is

$\delta_i(p_i, p^0) := D_i(p^0) + (\partial D_i / \partial p_i)(p^0)(p_i - p_i^0)$

and the conjectural profit is

$H_i^*(p_i, p^0) := \delta_i(p_i, p^0)(p_i - c_i)$. $\square$

Definition 3.9. A first-order locally consistent economic equilibrium is a vector $p^* \in J$ such that for every $i \in I$ we have

$H_i^*(p_i^*, p^*) \geq H_i^*(p_i, p^*)$, for every $p_i \in J_i$. $\square$

Definition 3.9 means that at equilibrium firms are maximizing their conjectural profit functions. It is easily seen that if $p^*$ is a first-order locally consistent economic equilibrium then:

i) $\delta_i(p_i^*, p^*) = D_i(p^*)$, and

ii) $(\partial \delta_i / \partial p_i)(p^*) = (\partial D_i / \partial p_i)(p^*)$.


Condition i) means that at equilibrium the conjectural demand must be equal to the true demand. Condition ii) means that at equilibrium the slope of the true demand function is equal to the slope of the conjectural demand.

We have

Theorem 3.31. Under A.1 and A.2 there exists a first-order locally consistent economic equilibrium.

Proof. By setting $S_i = J_i$ and $x_i = p_i$, $i \in I$, the industry we are considering reduces to the game $\Gamma$ considered above. Under A.1 and A.2 the game $\Gamma$ clearly has a first-order locally consistent equilibrium $x^* = (x_i^*)_{i \in I}$. Set $p_i^* = x_i^*$, $i \in I$. Thus, to prove Theorem 3.31 it is sufficient to prove that if $(p_i^*)_{i \in I}$ is a first-order locally consistent equilibrium then it satisfies the condition in Definition 3.9. We have to consider three possible cases:

a) $p_i^* = P_i$;  b) $p_i^* = c_i$;  c) $p_i^* \in J_i \setminus \partial J_i$, $i \in I$.

Case a). $p_i^* = P_i$. Assumption A.2 ensures that $D_i(p^*) = (\partial D_i/\partial p_i)(p^*) = 0$. It follows that $\delta_i(p_i, p^*) = 0$, $p_i \in J_i$. Therefore $H_i^*(p_i^*, p^*) = H_i^*(p_i, p^*) = 0$, $p_i \in J_i$. Thus the condition in Definition 3.9 is satisfied.

Case b). $p_i^* = c_i$. Two cases can occur:

b1) $(\partial H_i/\partial p_i)(p^*) = 0$;

b2) $(\partial H_i/\partial p_i)(p^*) \neq 0$.

In case b1) it is not possible that $D_i(p^*) > 0$. In fact, if it were so, one would have $(\partial H_i/\partial p_i)(p^*) = D_i(p^*) > 0$, which is a contradiction. If $D_i(p^*) = 0$, then $H_i^*(p_i^*, p^*) \geq H_i^*(p_i, p^*)$ for $p_i \in J_i$, since $H_i^*(p_i^*, p^*) = 0$ and $H_i^*(p_i, p^*) = ((\partial D_i/\partial p_i)(p^*)(p_i - p_i^*))(p_i - c_i) \leq 0$, because $p_i^* = c_i$ and $(\partial D_i/\partial p_i)(p^*) \leq 0$ from assumption A.2.

In case b2), by the fact that $p^*$ is a first-order locally consistent equilibrium, it must satisfy the condition $(D_i(p^*) + (\partial D_i/\partial p_i)(p^*)(p_i^* - c_i))(p_i - p_i^*) \leq 0$, $p_i \in N(c_i)$, where $N(c_i)$ is a right neighborhood of $c_i$. Because $p_i^* = c_i$, one has $D_i(p^*)(p_i - p_i^*) \leq 0$, $p_i \in J_i$. This implies that $D_i(p^*) = 0$, and therefore, by A.2, that $(\partial D_i/\partial p_i)(p^*) \leq 0$ and that $D_i(p_i, p_{-i}^*) = 0$ for every $p_i \in J_i \setminus \{c_i\}$. We shall prove that $H_i^*(p_i^*, p^*) \geq H_i^*(p_i, p^*)$ for $p_i \in J_i$. In fact, $H_i^*(p_i^*, p^*) = 0$, while $H_i^*(p_i, p^*) = (D_i(p^*) + (\partial D_i/\partial p_i)(p^*)(p_i - p_i^*))(p_i - c_i) = (\partial D_i/\partial p_i)(p^*)(p_i - c_i)^2 \leq 0$ for every $p_i \in J_i \setminus \{c_i\}$, from the above argument. Thus, also in this case the condition of Definition 3.9 is satisfied.

Case c). $p_i^* \in J_i \setminus \partial J_i$. By the definition of a first-order locally consistent equilibrium, one must have $(\partial H_i/\partial p_i)(p^*) = 0$. Two cases can occur:

c1) $D_i(p^*) > 0$, and

c2) $D_i(p^*) = 0$.

In case c1), by noticing that $(\partial H_i/\partial p_i)(p^*) = 0$ implies $(\partial D_i/\partial p_i)(p^*) < 0$ and that $(\partial^2 H_i^*/\partial p_i^2)(p_i^*, p^*) = 2(\partial D_i/\partial p_i)(p^*)$, one can conclude that $(\partial H_i/\partial p_i)(p^*) = 0$ implies $(\partial^2 H_i^*/\partial p_i^2)(p_i^*, p^*) < 0$; since $p_i^*$ also satisfies the first-order condition for maximizing $H_i^*(\cdot, p^*)$, the condition in Definition 3.9 is satisfied.

In case c2), if we prove that $(\partial D_i/\partial p_i)(p^*) = 0$ we have completed the proof, because in this case $H_i^*(p_i^*, p^*) = H_i^*(p_i, p^*) = 0$, $p_i \in J_i$. Suppose, on the contrary, that $(\partial D_i/\partial p_i)(p^*) < 0$; then $(\partial H_i/\partial p_i)(p^*)(p_i - p_i^*) = (D_i(p^*) + (\partial D_i/\partial p_i)(p^*)(p_i^* - c_i))(p_i - p_i^*) = (\partial D_i/\partial p_i)(p^*)(p_i^* - c_i)(p_i - p_i^*) > 0$ for $p_i < p_i^*$, contradicting the hypothesis that $p^*$ is a first-order locally consistent equilibrium. Thus, also in this last case the condition in Definition 3.9 is satisfied. The proof is complete. $\square$

Remark 3.22. In [9], Bonanno and Zeeman have provided a general existence result for a first-order locally consistent equilibrium in an abstract game-theoretic setting, and they employ their existence result to prove the existence of a first-order locally consistent equilibrium in a monopolistically competitive industry with price-making firms. $\square$
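For a concrete, hypothetical specification of Example 3.2 one can compute a 1-LCEE directly. Take linear demand $D_i(p) = a - b p_i + d \sum_{j \neq i} p_j$ (so the conjectural demand of Remark 3.21 coincides with the true one) and, assuming an interior solution, iterate the first-order condition $D_i(p) - b(p_i - c_i) = 0$. The parameter values below are illustrative only.

```python
def demand(i, p, a=10.0, b=1.0, d=0.3):
    """Hypothetical linear demand of firm i: D_i(p) = a - b p_i + d * sum_{j!=i} p_j."""
    return a - b * p[i] + d * (sum(p) - p[i])

def lcee_prices(c, a=10.0, b=1.0, d=0.3, iters=200):
    """Iterate the first-order condition D_i(p) - b (p_i - c_i) = 0,
    i.e. p_i = (a + d * sum_{j!=i} p_j + b c_i) / (2 b)."""
    p = list(c)                          # start at marginal cost
    for _ in range(iters):
        p = [(a + d * (sum(p) - p[i]) + b * c[i]) / (2 * b)
             for i in range(len(c))]
    return p

c = [1.0, 1.0, 2.0]                      # hypothetical marginal costs
p_star = lcee_prices(c)
print(p_star)
# with b = 1, the first-order condition D_i(p*) - (p_i* - c_i) = 0 holds
# up to numerical tolerance, and conjectural and true demand coincide:
print([demand(i, p_star) - (p_star[i] - c[i]) for i in range(3)])
```

At the computed prices the conjectural and true demands coincide (condition i)) and the first-order condition holds approximately, as the printed residuals show.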

3.4.4 First-order equilibria for an abstract economy

We consider an abstract economy with production, with $m$ firms and $n$ goods, given by

$\mathcal{E} = (G, I, J, (u_i)_{i \in I}, (X_i)_{i \in I}, (\omega_i)_{i \in I}, (\theta_i)_{i \in I}, (Y_j)_{j \in J})$,

where $G$, $I$ and $J$ are the index sets of goods, households and firms, respectively.

Given the production profile $y = (y_1, y_2, \ldots, y_m) \in Y$, where $Y = \prod_{j \in J} Y_j$, the intermediate endowment of consumer $i$ is $\omega_i^0(y) = \omega_i + \sum_{j \in J} \theta_{ij} y_j$. We denote by $F_i(p, y)$ the individual demand mapping, and by $z(p, y)$ the aggregate excess demand mapping of the economy at price $p \in \Delta \subseteq \mathbb{R}^n_+$, given the production profile $y$. The symbol $W(y)$ indicates the set of Walrasian prices associated with the production profile $y$, that is, $W(y) = \{p \in \Delta : z(p, y) = 0\}$. We set $V = \{y \in \mathbb{R}^m : \omega_i^0(y) \gg 0, i \in I\}$.

We suppose that:

A1. For all $i \in I$, $u_i$ is such that $F_i(p, y)$ is single-valued, strictly positive and of class $C^1$ on $\mathbb{R}^n_{++} \times V$.

A2. $Y \subseteq V$. Moreover, $Y$ is compact, and $Y_j$ is a convex set, $j \in J$.

A3. If $W(y)$ is nonempty, then $W(y)$ is a singleton.

A4. For all $y \in V$, the rank of $D_{p_{-n}} z_{-n}[p(y), y]$ is $n - 1$, where $z_{-n}$ is the function $z$ without its last component, $p(y) \in W(y)$, and $D_{p_{-n}}$ is the derivative with respect to the first $n - 1$ components of $p$.

Producer $j$ calculates his profits on the basis of the linear approximation of the effective demand function,

$p_j^*(y_j, y^0) = p(y^0) + (y_j - y_j^0) D_{y_j} p(y^0)^T$,

where $y^0$ is a status quo, $D_{y_j}$ denotes the derivative with respect to $y_j$, and the symbol $T$ indicates transposition of matrices.

Definition 3.10. A first-order locally consistent economic equilibrium for the economy $\mathcal{E}$ is a configuration $(p^*, (y_j^*)_{j \in J}) \in \Delta \times Y$ such that

$p_j^*(y_j^*, y^*) y_j^* \geq p_j^*(y_j, y^*) y_j$, for all $y_j \in Y_j$, $j \in J$. $\square$

This definition means that at a first-order locally consistent economic equilibrium firms are maximizing their profits according to their perceived demand functions. It is easily seen that if $(p^*, (y_j^*)_{j \in J})$ is a first-order locally consistent economic equilibrium then

(a) $p_j^*(y_j^*, y^*) = p(y^*)$, $j \in J$, and

(b) $D_{y_j} p_j^*(y_j^*, y^*) = D_{y_j} p(y^*)$, $j \in J$.

Condition (a) means that at a first-order locally consistent economic equilibrium the perceived prices are equal to the true ones, while condition (b) means that the slopes of the perceived demand curves are equal to the slopes of the true demand curves.

We have

Theorem 3.32. If assumptions A1-A4 hold and $\xi D_{y_j} p(y) \xi \leq 0$ for every $y \in Y$ and every $\xi \in \mathbb{R}^n$, then the economy $\mathcal{E}$ has a first-order locally consistent economic equilibrium.

Proof. If we set $S_j = Y_j$, $x_j = y_j$ and $H_j(y_j, y) = p_j^*(y_j, y) y_j$, $j \in J$, the economy $\mathcal{E}$ reduces to the game $\Gamma$ introduced in the first subsection of this section. Under assumptions A1-A4, $p(y)$ is $C^1$ and this game clearly has a first-order locally consistent equilibrium $(x_j^*)_{j \in J}$. We set $y_j^* = x_j^*$, $j \in J$. In order to prove the theorem it is sufficient to prove that $y^* = (y_j^*)_{j \in J}$ satisfies the condition in Definition 3.10.

To this end, note that since $y^*$ is a first-order locally consistent equilibrium, it must satisfy the condition

$[p(y^*) + y_j^* D_{y_j} p(y^*)](y_j - y_j^*) \leq 0$,

that is,

$p(y^*)(y_j - y_j^*) \leq y_j^* D_{y_j} p(y^*)(y_j^* - y_j)$, for every $y_j \in Y_j$.

We prove the assertion if we show that $y^*$ satisfies the following condition:

$p(y^*) y_j^* \geq p(y^*) y_j + (y_j - y_j^*) D_{y_j} p(y^*)^T y_j$, $y_j \in Y_j$,

that is,

$[p(y^*) + y_j D_{y_j} p(y^*)](y_j - y_j^*) \leq 0$, $y_j \in Y_j$.

From the first member of the last relation, taking into account the previous relation, one obtains

$p(y^*)(y_j - y_j^*) + y_j D_{y_j} p(y^*)(y_j - y_j^*)$
$\leq y_j^* D_{y_j} p(y^*)(y_j^* - y_j) - y_j D_{y_j} p(y^*)(y_j^* - y_j)$
$= (y_j^* - y_j) D_{y_j} p(y^*)(y_j^* - y_j) \leq 0$, $y_j \in Y_j$,

by assumption. This ends the proof. $\square$
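A minimal quantity-setting illustration of Definition 3.10, for the scalar one-good case with a hypothetical linear inverse demand $p(y) = a - B \sum_j y_j$ (so $D_{y_j} p \equiv -B$ and the negative semidefiniteness assumption of Theorem 3.32 holds), is sketched below; the grid search at the end checks that each firm's output maximizes its perceived profit at the computed profile. All parameter values are assumptions made for the example.

```python
def perceived_price(yj, j, y0, a=10.0, B=1.0):
    """Linear approximation of the true inverse demand p(y) = a - B * sum(y)
    around the status quo y0, as a function of firm j's own output."""
    p0 = a - B * sum(y0)
    return p0 - B * (yj - y0[j])         # D_{y_j} p = -B for every j

def lcee_outputs(m, a=10.0, B=1.0, iters=200):
    """Iterate each firm's first-order condition for perceived profit,
    y_j = (p(y0) + B * y0_j) / (2 B)."""
    y = [0.0] * m
    for _ in range(iters):
        p0 = a - B * sum(y)
        y = [(p0 + B * y[j]) / (2 * B) for j in range(m)]
    return y

m = 2
y_star = lcee_outputs(m)
print(y_star)                            # each close to a / (B * (m + 1)) = 10/3
# Definition 3.10, checked by grid search over a hypothetical Y_j = [0, 5]:
grid = [k * 0.01 for k in range(501)]
for j in range(m):
    best = max(grid, key=lambda yj: perceived_price(yj, j, y_star) * yj)
    print(j, abs(best - y_star[j]) < 0.01)
```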


3.5 Existence of equilibrium in generalized games with non-convex strategy spaces

3.5.1 Introduction

The generalized game (or abstract economy) concept extends the notion of a Nash non-cooperative game by allowing each player's strategy set to depend on the choices of all the other players. This concept was introduced by Debreu, who proved the existence of equilibrium in generalized games under general assumptions. Arrow and Debreu applied this result to obtain the existence of competitive equilibrium by considering convex strategy subsets of a finite dimensional space and a finite number of agents with continuous quasi-concave utility functions. Since then, the Arrow-Debreu result has been extended in several directions by assuming weaker hypotheses on strategy spaces, agent preferences, and so on. We mention the works of Gale and Mas-Colell, who consider preference relations which are neither transitive nor complete; Shafer and Sonnenschein and Border, who modify the continuity conditions on the constraint and preference multivalued mappings; as well as Borglin and Keiding, Yannelis and Prabhakar, and Tarafdar, who consider infinite dimensional strategy spaces or an infinite number of agents. Most of these existence theorems are proved by assuming convexity conditions on the strategy spaces as well as on the constraint multivalued mapping, which makes it possible to apply well-known fixed point theorems, such as those of Brouwer, Kakutani or Browder. The purpose of this section is to present generalizations of some of these results on the existence of equilibrium in generalized games by relaxing the convexity conditions. In order to do that, we make use of an abstract convexity notion called mc-spaces, which generalizes usual convexity as well as other abstract convexity structures. These results cover situations in which neither the strategy spaces nor the preferences are convex.

3.5.2 Abstract convexity

This subsection is devoted to introducing the notion of abstract convexity, mc-spaces, which will be used throughout the section. Formally, an abstract convexity on a set $X$ is a family $\mathcal{F} = \{A_i\}_{i \in I}$ of subsets of $X$ stable under arbitrary intersections, that is, $\bigcap_{i \in J} A_i \in \mathcal{F}$ for all $J \subseteq I$, and containing the empty set and the total set, $\emptyset, X \in \mathcal{F}$. The notion of mc-spaces is based on the idea of replacing the linear segments which join any pair of points (or the convex hull of a finite set of points) in the usual convexity by a path (respectively, a set) that plays its role.

Definition 3.11. A topological space $X$ is an mc-space, or has an mc-structure, if for any nonempty finite subset $A \subseteq X$ there exist an ordering on it, say $A = \{a_0, a_1, \ldots, a_n\}$, a set of elements $\{b_0, b_1, \ldots, b_n\} \subseteq X$ (not necessarily different), and a family of functions

$P_i^A : X \times [0, 1] \to X$, $i = 0, 1, \ldots, n$,

such that

1. $P_i^A(x, 0) = x$, $P_i^A(x, 1) = b_i$, for all $x \in X$;

2. the function $G_A : [0, 1]^n \to X$ given by

$G_A(t_0, t_1, \ldots, t_{n-1}) = P_0^A(\ldots(P_{n-1}^A(P_n^A(b_n, 1), t_{n-1}), \ldots), t_0)$

is continuous. $\square$

Remark 3.23. Note that if $P_i^A(x, t)$ is continuous in $t$, then

$P_i^A(x, [0, 1])$ represents a continuous path which joins $x$ and $b_i$. These paths depend, in some sense, on the points which are considered, as well as on the finite subset $A$ which contains them. Thus, the function $G_A$ can be interpreted as follows: $P_{n-1}^A(b_n, t_{n-1}) = p_{n-1}$ represents a point of the path which joins $b_n$ with $b_{n-1}$; $P_{n-2}^A(p_{n-1}, t_{n-2}) = p_{n-2}$ is a point of the path which joins $p_{n-1}$ with $b_{n-2}$; and so on. So $G_A$ can be seen as a composition of these paths and can be considered as an abstract convex combination of the finite set $A$. $\square$

Given an mc-structure, it is possible to define an abstract convexity by the family of those sets which are stable under the functions $G_A$. In order to define this convexity, we need some preliminary concepts.

Definition 3.12. If $X$ is an mc-space, $Z$ is a subset of $X$, and we denote by $\langle X \rangle$ the family of nonempty finite subsets of $X$, then for all $A \in \langle X \rangle$ such that $A \cap Z \neq \emptyset$, $A \cap Z = \{a_{i_0}, a_{i_1}, \ldots, a_{i_m}\}$ ($i_0 < i_1 < \ldots < i_m$), we define the restriction of the function $G_A$ to $Z$ as follows:

$G_A|_Z : [0, 1]^m \to X$,

$G_A|_Z(t) = P_{i_0}^A(\ldots(P_{i_{m-1}}^A(P_{i_m}^A(b_{i_m}, 1), t_{i_{m-1}}), \ldots), t_{i_0})$,

where the $P_{i_k}^A$ are the functions associated with the elements $a_{i_k} \in A \cap Z$. $\square$

By making use of this notion, we can define mc-sets, which generalize the usual convex sets.

Definition 3.13. A subset $Z$ of an mc-space $X$ is an mc-set if and only if

for all $A \in \langle X \rangle$ with $A \cap Z \neq \emptyset$, $G_A|_Z([0, 1]^m) \subseteq Z$,

where $m = |A \cap Z| - 1$. $\square$

Since the family of mc-sets is stable under arbitrary intersections, it defines an abstract convexity on $X$. Furthermore, we can define the mc-hull operator in the usual way:

$C_{mc}(Z) = \bigcap \{B : Z \subseteq B, \ B \text{ is an mc-set}\}$.

Then it is obvious that

for all $A \in \langle X \rangle$ with $A \cap Z \neq \emptyset$, $G_A|_Z([0, 1]^m) \subseteq C_{mc}(Z)$.

Remark 3.24. If $X$ is a convex subset of a topological vector space, then for any finite subset $A = \{a_0, a_1, \ldots, a_n\}$ we can define the functions $P_i^A(x, t) = (1 - t)x + t a_i$, which represent the segment joining $a_i$ and $x$ as $t$ runs over $[0, 1]$. In this case, the image of the composition $G_A([0, 1]^n)$ coincides with the convex hull of $A$, so mc-sets generalize convex sets. Other abstract convexity structures which are generalized by the notion of mc-structure are simplicial convexity, c-spaces or H-spaces, G-convex spaces, and so on. $\square$
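For the convex case described in Remark 3.24, the composition $G_A$ can be written out explicitly. The Python sketch below (with illustrative data in the plane) evaluates $G_A$ and recovers the corresponding convex weights, showing that every value of $G_A$ is a convex combination of $A$, i.e. $G_A([0,1]^n) \subseteq co(A)$.

```python
def G_A(A, t):
    """Abstract convex combination G_A of Remark 3.24, convex case:
    P_i^A(x, s) = (1 - s) * x + s * a_i with b_i = a_i.
    A is a list of n+1 points in R^d, t a list of n parameters in [0, 1]."""
    z = A[-1]                                     # start at b_n = a_n
    for a_i, t_i in zip(reversed(A[:-1]), reversed(t)):
        z = tuple((1 - t_i) * zc + t_i * ac for zc, ac in zip(z, a_i))
    return z

def weights(t):
    """Convex weights of a_0, ..., a_n implied by the composition above."""
    w, prod = [], 1.0
    for t_i in t:
        w.append(t_i * prod)
        prod *= (1 - t_i)
    return w + [prod]                             # weight of a_n is prod_{j<n}(1 - t_j)

A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]          # hypothetical finite set in the plane
t = [0.2, 0.7]
point, w = G_A(A, t), weights(t)
print(point, w, sum(w))                           # weights are >= 0 and sum to 1
print(all(abs(p - sum(wi * ai[k] for wi, ai in zip(w, A))) < 1e-12
          for k, p in enumerate(point)))          # G_A(t) lies in co(A)
```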

It is important to point out that in some applications the space is required to satisfy local properties, so we also introduce the notion of local convexity in the context of mc-spaces.

Definition 3.14. A metric mc-space $(X, d)$ is a locally mc-space if and only if, for all $\varepsilon > 0$, $B(E, \varepsilon) = \{x \in X : d(x, E) < \varepsilon\}$ is an mc-set whenever $E$ is an mc-set. $\square$

It is not hard to prove that the product of mc-spaces is an mc-space, and that the product of a countable family of locally mc-spaces is also a locally mc-space.

Next, the notions of KF-multivalued mapping and KF-majorized multivalued mapping, introduced by Borglin and Keiding, are defined in the context of mc-spaces.

Definition 3.15. If $X$ is an mc-space, then an mc-set valued multivalued mapping $\phi : X \rightrightarrows X$ is a $KF^*$-multivalued mapping if for all $x \in X$, $\phi^{-1}(x)$ is open and $x \notin \phi(x)$. A multivalued mapping $P : X \rightrightarrows X$ is called $KF^*$-majorized if there is a $KF^*$-multivalued mapping $\phi : X \rightrightarrows X$ (a majorant) such that $P(x) \subseteq \phi(x)$ for all $x \in X$. $\square$

The local version of a $KF^*$-multivalued mapping is defined as follows.

Definition 3.16. If $X$ is an mc-space, then a multivalued mapping $\phi : X \rightrightarrows X$ is a locally $KF^*$-multivalued mapping if for all $x \in X$ such that $\phi(x) \neq \emptyset$ there exist an open neighborhood $V_x$ of $x$ and a $KF^*$-multivalued mapping $\phi_x : X \rightrightarrows X$ such that

$\phi(z) \subseteq \phi_x(z)$ for all $z \in V_x$. $\square$

3.5.3 Fixed point results

We now present some fixed point results which will be applied to prove the existence of equilibrium in generalized games. The following lemma of Llinares states the existence of a continuous selection, with a fixed point, of the mc-hull of a multivalued mapping defined on an mc-space.

Lemma 3.4. Let $X$ be a compact topological mc-space and $\Phi : X \rightrightarrows X$ a nonempty-valued multivalued mapping such that if $y \in \Phi^{-1}(x)$, then there exists some $x' \in X$ such that $y \in int\,\Phi^{-1}(x')$. Then there exist a nonempty finite subset $A$ of $X$ and a continuous function $f : X \to X$ satisfying:

1. there exists $x^* \in X$ such that $x^* = f(x^*)$;

2. for all $x \in X$, $f(x) \in G_A|_{\Phi(x)}([0, 1]^m)$. $\square$

The next result is an extension of Browder's theorem. The proof is obtained immediately by applying Lemma 3.4.

Theorem 3.33. If $X$ is a compact topological mc-space and $\Phi : X \rightrightarrows X$ is a multivalued mapping with open inverse images and nonempty mc-set values, then $\Phi$ has a continuous selection and a fixed point.

A consequence of Theorem 3.33 is that any $KF^*$-multivalued mapping defined from a compact topological mc-space into itself has a point with empty image. In the context of binary relations, the existence of points with empty image for the multivalued mapping of upper contour sets is equivalent to the existence of a maximal element (it is enough to consider $\Phi(x)$ as the set of alternatives better than $x$).

Corollary 3.5. If $X$ is a compact topological mc-space and $\Phi : X \rightrightarrows X$ is a $KF^*$-multivalued mapping, then there exists $x^* \in X$ such that $\Phi(x^*) = \emptyset$.
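In the finite, discrete case the maximal-element reading of Corollary 3.5 is elementary: it suffices to scan for an alternative whose upper contour set is empty. The sketch below uses hypothetical preference data and is meant only to illustrate the interpretation, not the corollary itself (which concerns compact topological mc-spaces).

```python
def maximal_elements(X, better):
    """Return the points x of a finite set X whose upper contour set
    Phi(x) = {y in X : y is better than x} is empty."""
    return [x for x in X if not any(better(y, x) for y in X)]

# Hypothetical strict preference on four alternatives:
X = ["a", "b", "c", "d"]
ranking = {"a": 1, "b": 3, "c": 2, "d": 3}
better = lambda y, x: ranking[y] > ranking[x]
print(maximal_elements(X, better))       # -> ['b', 'd']
```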

In order to extend the previous result to locally $KF^*$-majorized multivalued mappings, we first present the following lemma.

Lemma 3.5. If $X$ is a compact topological mc-space and $P : X \rightrightarrows X$ is a locally $KF^*$-majorized multivalued mapping, then there exists a $KF^*$-multivalued mapping $\psi : X \rightrightarrows X$ such that $P(x) \subseteq \psi(x)$ for all $x \in X$.

Proof. Consider $D = \{x \in X : P(x) \neq \emptyset\}$ and, for each $x \in D$, choose a $KF^*$-multivalued mapping $\psi_x$ majorizing $P$ at $x$ and an open neighborhood $G_x$ of $x$. The set $G = \bigcup_{x \in D} G_x$ is paracompact, so the open covering $\{G_x\}_{x \in D}$ of $G$ has a closed locally finite refinement $\{G'_x\}$. For each $x \in D$ define the set

$J(x) = \{x_i : x \in G'_{x_i}\}$

and the multivalued mapping

$\psi(x) = \bigcap_{x_i \in J(x)} \psi_{x_i}(x)$, if $x \in G$, and $\psi(x) = \emptyset$, if $x \notin G$.

We now show that $\psi$ is the required $KF^*$-multivalued mapping. It is clear that $\psi$ has no fixed point, since the $\psi_{x_i}$ are $KF^*$-multivalued mappings; it has mc-set values by construction; and it satisfies $P(x) \subseteq \psi(x)$ for all $x \in X$. Finally, to see that $\psi$ has open lower sections, consider

$x \in \psi^{-1}(y)$, that is, $y \in \psi(x) = \bigcap_{x_i \in J(x)} \psi_{x_i}(x)$,

so that

$y \in \psi_{x_i}(x)$ for all $x_i \in J(x)$.

Since the $\psi_{x_i}$ are $KF^*$-multivalued mappings, they have open lower sections, so for each $x_i \in J(x)$ there exists an open neighborhood $W_x^i$ of $x$ such that

$W_x^i \subseteq \psi_{x_i}^{-1}(y)$, for all $x_i \in J(x)$.

Considering $W_x^0 = \bigcap_{x_i \in J(x)} W_x^i$, $W_x^0$ is an open neighborhood of $x$, since $J(x)$ is finite. Moreover,

$x \notin \bigcup_{x_i \notin J(x)} G'_{x_i}$

(which is a closed set, since $\{G'_{x_i}\}$ is a locally finite refinement). Therefore, there exists an open set $W_x^*$ containing $x$ such that

$W_x^* \cap [\bigcup_{x_i \notin J(x)} G'_{x_i}] = \emptyset$,

so $J(w) \subseteq J(x)$ for each $w \in W_x^*$. But then

$W_x^0 \cap W_x^* = W_x \subseteq \psi_{x_i}^{-1}(y)$, for all $x_i \in J(x)$,

and

$y \in \psi_{x_i}(w)$, for all $w \in W_x$ and all $x_i \in J(x)$,

that is,

$y \in \bigcap_{x_i \in J(x)} \psi_{x_i}(w) \subseteq \bigcap_{x_i \in J(w)} \psi_{x_i}(w) = \psi(w)$, for all $w \in W_x$;

therefore

$W_x \subseteq \psi^{-1}(y)$,

and we conclude that $\psi$ has open lower sections. $\square$

As a consequence of Lemma 3.5, we now state the extension of Corollary 3.5 to locally $KF^*$-majorized multivalued mappings.

Theorem 3.34. If $X$ is a compact topological mc-space and $\Phi : X \rightrightarrows X$ is a locally $KF^*$-majorized multivalued mapping, then there exists $x^* \in X$ such that $\Phi(x^*) = \emptyset$.

3.5.4 Existence of equilibrium

In this subsection we analyze the existence of equilibrium for generalized games in the context of mc-spaces, under conditions similar to those of Borglin and Keiding and of Tulcea. We use the well-known notation for generalized games. The first result is a version of Borglin and Keiding's result in the context of mc-spaces.

Lemma 3.6. If $X$ is a compact topological mc-space, $F : X \rightrightarrows X$ is a nonempty mc-set valued multivalued mapping such that for all $x \in X$, $F^{-1}(x)$ is an open set, and $P : X \rightrightarrows X$ is a locally $KF^*$-majorized multivalued mapping, then there exists $x^* \in X$ such that

$x^* \in F(x^*)$ and $F(x^*) \cap P(x^*) = \emptyset$.

Proof. By Lemma 3.5, and without loss of generality, we can assume that the multivalued mapping $P$ is a $KF^*$-multivalued mapping. Define the multivalued mapping $\phi : X \rightrightarrows X$ by

$\phi(x) = F(x)$, if $x \notin F(x)$, and $\phi(x) = P(x) \cap F(x)$, if $x \in F(x)$.

In order to see that the multivalued mapping $\phi$ is a $KF^*$-multivalued mapping, consider $x \in X$ such that $\phi(x) \neq \emptyset$ (if $\phi(x) = \emptyset$, we have the conclusion). It is easy to see that $\phi$ has no fixed points and has mc-set values. To see that $\phi$ has open lower sections, consider $x \in \phi^{-1}(y)$, that is, $y \in \phi(x)$.

On the one hand, if $x \notin F(x)$, then it is possible to choose a neighborhood $V_x$ of $x$ such that

$z \notin F(z)$ for all $z \in V_x$.

Moreover, since $y \in \phi(x) = F(x)$, that is, $x \in F^{-1}(y)$, which is open, there exists an open set $W_x$ containing $x$ such that $W_x \subseteq F^{-1}(y)$. If we take $U = W_x \cap V_x$, then $U \subseteq \phi^{-1}(y)$.

On the other hand, if $x \in F(x)$, then $y \in \phi(x) = P(x) \cap F(x)$, so

$x \in P^{-1}(y) \cap F^{-1}(y)$,

which are open sets; therefore there exists an open set $W_x$ containing $x$ such that

$W_x \subseteq P^{-1}(y) \cap F^{-1}(y) \subseteq \phi^{-1}(y)$.

So the multivalued mapping $\phi$ is a $KF^*$-multivalued mapping, and by applying Corollary 3.5 we obtain the conclusion. $\square$

The next result shows that the previous lemma remains valid for a generalized game with a finite number of agents.

Lemma 3.7. If, for each $i = 1, 2, \ldots, n$, $X_i$ is a compact topological mc-space, $X = \prod_{i=1}^n X_i$, $F_i : X \rightrightarrows X_i$ is a nonempty mc-set valued multivalued mapping with open lower sections, and $P_i : X \rightrightarrows X_i$ is a locally $KF^*$-majorized multivalued mapping, then there exists $x^* \in X$ such that

$x_i^* \in F_i(x^*)$ and $F_i(x^*) \cap P_i(x^*) = \emptyset$, $i = 1, 2, \ldots, n$.

Proof. Consider the multivalued mapping $F : X \rightrightarrows X$ defined as follows:

$y \in F(x)$ if and only if $y_i \in F_i(x)$, $i = 1, 2, \ldots, n$,

that is, $F(x) = \prod_{i=1}^n F_i(x)$. So the multivalued mapping $F$ has nonempty mc-set values and open lower sections.


By Lemma 3.5, and without loss of generality, we can assume that the multivalued mappings $P_i$ are $KF^*$-multivalued mappings. Moreover, for each $i = 1, 2, \ldots, n$, we define the following multivalued mappings:

a) $P_i^* : X \rightrightarrows X$ such that $y \in P_i^*(x)$ if and only if $y_i \in P_i(x)$;

b) $P : X \rightrightarrows X$ by

$P(x) = \bigcap_{i \in I(x)} P_i^*(x)$, if $I(x) \neq \emptyset$, and $P(x) = \emptyset$, if $I(x) = \emptyset$,

where $I(x) = \{i \in I : P_i(x) \cap F_i(x) \neq \emptyset\}$.

Next we show that $P$ is $KF^*$-majorized. To do so, consider $x \in X$ such that $P(x) \neq \emptyset$; then there exists $i_0 \in I(x)$ such that $P_{i_0}(x) \cap F_{i_0}(x) \neq \emptyset$. Since the set

$\{x \in X : P_{i_0}(x) \cap F_{i_0}(x) \neq \emptyset\}$

is open, there exists a neighborhood $V$ of $x$ such that $P_{i_0}(z) \cap F_{i_0}(z) \neq \emptyset$ for all $z \in V$, that is, $i_0 \in I(z)$, so

$P(z) = \bigcap_{i \in I(z)} P_i^*(z) \subseteq P_{i_0}^*(z)$, for all $z \in V$.

Moreover, since $P_{i_0}$ is a $KF^*$-multivalued mapping, $P_{i_0}^*$ is a $KF^*$-multivalued mapping, and therefore the multivalued mapping $P$ is majorized by $P_{i_0}^*$. By applying the previous lemma to the multivalued mappings $F$ and $P$ we obtain the conclusion. $\square$

In order to analyze the existence of equilibrium for a countable set of agents, we use the following approximation result.

Lemma 3.8. Let $X$ be a compact topological metric space and $Y$ a locally mc-space. If $\Phi : X \rightrightarrows Y$ is an upper hemicontinuous multivalued mapping with mc-set values, then for all $\varepsilon > 0$ there exists an mc-set valued multivalued mapping $H_\varepsilon : X \rightrightarrows Y$ with open graph such that

$Gr(\Phi) \subseteq Gr(H_\varepsilon) \subseteq B(Gr(\Phi), \varepsilon)$.

Proof. Since the multivalued mapping $\Phi$ is upper hemicontinuous, we know that for all $\varepsilon > 0$ there exists $0 < \delta(x) < \varepsilon$ such that

$\Phi(z) \subseteq B(\Phi(x), \varepsilon/2)$ for all $z \in B(x, \delta(x))$.

So the family $\{B(x, \delta(x)/2)\}_{x \in X}$ is an open covering of $X$, which is compact; thus there exists a finite subcovering $\{B(x_i, \delta(x_i)/2)\}_{i=1}^n$. Consider $\delta_i = \delta(x_i)/2$ and define, for all $x \in X$, $I(x) = \{i : x \in B(x_i, \delta_i)\}$ and the multivalued mapping

$H_\varepsilon(x) = \bigcap_{i \in I(x)} B(\Phi(x_i), \varepsilon/2)$.

It is clear that $H_\varepsilon$ is mc-set valued. Moreover, it has open graph: for every $x$, since $x \notin \bigcup_{i \notin I(x)} B(x_i, \delta_i)$, which is a closed set, there exists $\rho > 0$ such that $B(x, \rho) \cap (\bigcup_{i \notin I(x)} B(x_i, \delta_i)) = \emptyset$, so $I(z) \subseteq I(x)$ for all $z \in B(x, \rho)$, and

$H_\varepsilon(x) = \bigcap_{i \in I(x)} B(\Phi(x_i), \varepsilon/2) \subseteq \bigcap_{i \in I(z)} B(\Phi(x_i), \varepsilon/2) = H_\varepsilon(z)$;

moreover, $H_\varepsilon(x)$ is open because it is a finite intersection of open sets. Since

$H_\varepsilon(x) \subseteq H_\varepsilon(z)$ for all $(z, u) \in B(x, \rho) \times H_\varepsilon(x)$,

we have $B(x, \rho) \times H_\varepsilon(x) \subseteq Gr(H_\varepsilon)$, that is, $Gr(H_\varepsilon)$ is open.

Furthermore, $Gr(\Phi) \subseteq Gr(H_\varepsilon) \subseteq B(Gr(\Phi), \varepsilon)$, since for all $x \in X$,

$\Phi(x) \subseteq B(\Phi(x_i), \varepsilon/2)$, for each $i \in I(x)$;

therefore

$\Phi(x) \subseteq \bigcap_{i \in I(x)} B(\Phi(x_i), \varepsilon/2) = H_\varepsilon(x)$,

thus $Gr(\Phi) \subseteq Gr(H_\varepsilon)$, and it is easy to see that $Gr(H_\varepsilon) \subseteq B(Gr(\Phi), \varepsilon)$. $\square$
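The construction of $H_\varepsilon$ in the proof of Lemma 3.8 can be carried out explicitly in the simplest metric setting. The Python sketch below uses a hypothetical interval-valued map on $[0, 1]$ whose values are intervals around a Lipschitz function, chooses $\delta$ from the Lipschitz bound instead of abstract upper hemicontinuity, builds a finite subcover, and checks the inclusion $Gr(\Phi) \subseteq Gr(H_\varepsilon)$ on a sample grid.

```python
import math

# Hypothetical upper hemicontinuous map Phi(x) = [f(x) - r, f(x) + r] on [0, 1],
# with f Lipschitz of constant <= L, so z in B(x, delta) implies
# Phi(z) in B(Phi(x), eps/2) whenever delta <= eps / (2 L).
L, r = 2.0, 0.1
f = lambda x: math.sin(L * x) / 2.0            # Lipschitz constant <= L
Phi = lambda x: (f(x) - r, f(x) + r)

eps = 0.2
delta = eps / (2 * L)
centers = [k * delta / 2 for k in range(int(2 / delta) + 2)]   # finite subcover of [0, 1]

def H_eps(x):
    """Intersection of the eps/2-enlargements B(Phi(x_i), eps/2) over the
    centers x_i whose ball B(x_i, delta) contains x."""
    lo, hi = -math.inf, math.inf
    for xi in centers:
        if abs(x - xi) < delta:
            a, b = Phi(xi)
            lo, hi = max(lo, a - eps / 2), min(hi, b + eps / 2)
    return lo, hi

# Check Gr(Phi) subset of Gr(H_eps) on a sample grid:
ok = all(H_eps(x)[0] <= Phi(x)[0] and Phi(x)[1] <= H_eps(x)[1]
         for x in [k / 1000 for k in range(1001)])
print(ok)                                       # True
```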

Theorem 3.35. If $X$ is a compact locally mc-space, $F : X \rightrightarrows X$ is a nonempty mc-set valued multivalued mapping with closed graph, $P : X \rightrightarrows X$ is a locally $KF^*$-majorized multivalued mapping, and the set $\{x \in X : P(x) \cap F(x) = \emptyset\}$ is closed in $X$, then there exists $x^* \in X$ such that

$x^* \in F(x^*)$ and $F(x^*) \cap P(x^*) = \emptyset$.

Proof. By applying Lemma 3.8, for all $\varepsilon > 0$ there exists $H_\varepsilon$ such that

$Gr(F) \subseteq Gr(H_\varepsilon) \subseteq B(Gr(F), \varepsilon)$,

where $H_\varepsilon$ is a multivalued mapping with open graph whose values are mc-sets. If we consider $(X, H_\varepsilon, P)$ and apply Lemma 3.6, we can ensure that there exists an element $x_\varepsilon$ such that

$x_\varepsilon \in H_\varepsilon(x_\varepsilon)$ and $H_\varepsilon(x_\varepsilon) \cap P(x_\varepsilon) = \emptyset$.

Let $\{\varepsilon_n\}$ be a sequence converging to $0$; reasoning as above, we obtain a sequence $\{x_{\varepsilon_n}\}_{n \in \mathbb{N}}$ such that

$[F(x_{\varepsilon_n}) \cap P(x_{\varepsilon_n})] \subseteq [H_{\varepsilon_n}(x_{\varepsilon_n}) \cap P(x_{\varepsilon_n})] = \emptyset$, for all $n \in \mathbb{N}$,

and since

$x_{\varepsilon_n} \in \{x \in X : P(x) \cap F(x) = \emptyset\}$,

the sequence lies in a compact set, so there exists a subsequence converging to a point $x^*$, which belongs to this set since the set is closed.

In order to prove that $x^*$ is a fixed point of $F$, note that for all $n \in \mathbb{N}$,

$(x_{\varepsilon_n}, x_{\varepsilon_n}) \in Gr(H_{\varepsilon_n}) \subseteq B(Gr(F), \varepsilon_n)$,

and since $Gr(F)$ is a compact set, $(x_{\varepsilon_n}, x_{\varepsilon_n})$ converges to $(x^*, x^*) \in Gr(F)$. $\square$

Next, a result on the existence of equilibrium in generalized games with a countable number of agents is presented.

Theorem 3.36. Let $\Gamma = (S_i, F_i, P_i)_{i \in I}$ be a generalized game such that $I$ is a countable index set and, for each $i \in I$, the following are satisfied: $S_i$ is a nonempty compact locally mc-space; $F_i$ is a closed graph multivalued mapping such that $F_i(x)$ is a nonempty mc-set for all $x \in X$; $P_i$ is a locally $KF^*$-majorized multivalued mapping; and the set $\{x \in X : P_i(x) \cap F_i(x) = \emptyset\}$ is closed in $X$. Then there exists an equilibrium for the generalized game.

Proof. Consider the multivalued mapping $F : X \rightrightarrows X$ defined as follows:

$y \in F(x)$ if and only if $y_i \in F_i(x)$ for all $i \in I$,

that is, $F(x) = \prod_{i \in I} F_i(x)$. So the multivalued mapping $F$ has closed graph and nonempty mc-set values. Moreover, for each $i \in I$ we define the following multivalued mappings:

a) $P_i^* : X \rightrightarrows X$ such that $y \in P_i^*(x)$ if and only if $y_i \in P_i(x)$;

b) $P : X \rightrightarrows X$ by

$P(x) = \bigcap_{i \in I(x)} P_i^*(x)$, if $I(x) \neq \emptyset$, and $P(x) = \emptyset$, if $I(x) = \emptyset$,

where $I(x) = \{i \in I : P_i(x) \cap F_i(x) \neq \emptyset\}$.

In order to see that $P$ is $KF^*$-majorized, consider $x \in X$ such that $P(x) \neq \emptyset$; then there exists $i_0 \in I(x)$ such that $P_{i_0}(x) \cap F_{i_0}(x) \neq \emptyset$. Since the set

$\{x \in X : P_{i_0}(x) \cap F_{i_0}(x) \neq \emptyset\}$

is open, there exists a neighborhood $V$ of $x$ such that $P_{i_0}(z) \cap F_{i_0}(z) \neq \emptyset$ for all $z \in V$, that is, $i_0 \in I(z)$, so

$P(z) = \bigcap_{i \in I(z)} P_i^*(z) \subseteq P_{i_0}^*(z)$, for all $z \in V$.

Moreover, by Lemma 3.5 and without loss of generality, we can assume that the multivalued mapping $P_{i_0}$ is a $KF^*$-multivalued mapping, so $P_{i_0}^*$ is the $KF^*$-multivalued mapping which majorizes $P$.

Finally, we show that the set $\{x \in X : P(x) \cap F(x) = \emptyset\}$ is closed. For each $i \in I$ we define the multivalued mapping $Q_i : X \rightrightarrows S_i$ by

$Q_i(x) = P_i(x) \cap F_i(x)$, if $i \in I(x)$, and $Q_i(x) = F_i(x)$, if $i \notin I(x)$.

It is clear that

$P(x) \cap F(x) = \prod_{i \in I} Q_i(x)$, if $I(x) \neq \emptyset$, and $P(x) \cap F(x) = \emptyset$ otherwise.

The multivalued mappings $Q_i : X \rightrightarrows S_i$ have nonempty values, thus $P(x) \cap F(x) = \emptyset$ if and only if $I(x) = \emptyset$. Therefore,

$\{x \in X : P(x) \cap F(x) = \emptyset\} = \{x \in X : I(x) = \emptyset\} = \bigcap_{i \in I} \{x \in X : P_i(x) \cap F_i(x) = \emptyset\}$.

Hence $\{x \in X : P(x) \cap F(x) = \emptyset\}$ is closed, because it is an intersection of closed sets. So, by applying the previous theorem, we obtain that there exists an element $x^* \in X$ such that

$x^* \in F(x^*)$ and $F(x^*) \cap P(x^*) = \emptyset$,

so $I(x^*) = \emptyset$ and, finally,

$x_i^* \in F_i(x^*)$ and $F_i(x^*) \cap P_i(x^*) = \emptyset$ for all $i \in I$. $\square$


3.6 References

1. D'Agata, A., Existence of first-order locally consistent equilibria, Annales d'Economie et de Statistique, 43 (1996), 171-179

2. Agarwal, R.P., O'Regan, D., A note on equilibria for abstract economies, Mathematical and Computer Modelling, 34 (2001), 331-343

3. Aliprantis, C.D., Tourky, R., Yannelis, N.C., Cone conditions in general equilibrium theory, Journal of Economic Theory, 92 (2000), 96-121

4. Aliprantis, C.D., Tourky, R., Yannelis, N.C., The Riesz-Kantorovich formula and general equilibrium theory, Journal of Mathematical Economics, 34 (2000), 55-76

5. Arrow, K.J., Debreu, G., Existence of an equilibrium for a competitive economy, Econometrica, 22 (1954), 265-290

6. Arrow, K.J., Hahn, F., General Competitive Analysis, Holden-Day, San Francisco, 1971

7. Aubin, J.P., Ekeland, I., Applied Nonlinear Analysis, John Wiley and Sons, New York, 1984

8. Berge, C., Topological Spaces, Macmillan, New York, 1963

9. Bonanno, G., Zeeman, E.C., Limited knowledge of demand and oligopoly equilibria, Journal of Economic Theory, 35 (1985), 276-283

10. Border, K.C., Fixed Point Theorems with Applications to Economics and Game Theory, Cambridge University Press, 1985

11. Borglin, A., Keiding, H., Existence of equilibrium actions and of equilibrium: A note on the new existence theorems, J. Math. Econom., 3 (1976), 313-316

12. Browder, F.E., The fixed point theory of multi-valued mappings in topological vector spaces, Math. Annalen, 177 (1968), 283-301

13. Debreu, G., New concepts and techniques for equilibrium analysis, International Economic Review, 3 (1962), 257-273

14. Ding, X., Kim, W., Tan, K., A selection theorem and its applications, Bull. Austral. Math. Soc., 46 (1992), 205-212

15. Gale, D., Mas-Colell, A., An equilibrium existence theorem for a general model without ordered preferences, J. Math. Econom., 2 (1975), 9-15

16. Grandmont, J.M., Temporary general equilibrium theory, Econometrica, 45 (1977), 535-572

17. Himmelberg, C.J., Fixed points of compact multifunctions, J. Math. Anal. Appl., 38 (1972), 205-207

18. Husain, T., Tarafdar, E., A selection and a fixed point theorem and an equilibrium of an abstract economy, Internat. J. Math. and Math. Sci., 18 (1995), no. 1, 179-184

19. Kakutani, S., A generalization of Brouwer's fixed point theorem, Duke Mathematical Journal, 8 (1941), 416-427

20. Llinares, J.V., Existence of equilibrium in generalized games with non-convex strategy spaces, CEPREMAP, No. 9801 (1998), 1-14

21. Maugeri, A., Time dependent generalized equilibrium problems, Rendiconti del Circolo Matematico di Palermo, 58 (1999), 197-204

22. Muresan, A.S., First-order equilibria for an abstract economy, I, Bull. Stiint. Univ. Baia Mare, Ser. B, Matematică-Informatică, Vol. XIV, 2 (1998), 191-196

23. Muresan, A.S., First-order equilibria for an abstract economy, II, Acta Technica Napocensis, Ser. Applied Mathematics and Mechanics, 41 (1998), 201-204

24. Neuefeind, W., Notes on existence of equilibrium proofs and the boundary behavior of supply, Econometrica, 48 (1980), 1831-1837

25. Nikaido, H., Convex Structures and Economic Theory, Academic Press, New York, 1968

26. Oettli, W., Schlager, D., Generalized vectorial equilibria and generalized monotonicity, in Functional Analysis with Current Applications in Science, Technology and Industry (Aligarh, 1996), 145-154, Pitman Res. Notes Math. Ser., 377, Longman, Harlow, 1998

27. Petrusel, A., Multifunctions and Applications, Cluj University Press, Cluj-Napoca, 2002 (in Romanian)

28. Ray, I., On games with identical equilibrium payoffs, Economic Theory, 17 (2001), 223-231

29. Rim, D.I., Kim, W.K., A fixed point theorem and existence of equilibrium for abstract economies, Bull. Austral. Math. Soc., 45 (1992), 385-394

30. Rus, A.I., Generalized Contractions and Applications, Cluj University Press, Cluj-Napoca, 2001

31. Rus, A.I., Iancu, C., Mathematical Modelling, Transilvania Press, Cluj-Napoca, 2000 (in Romanian)

32. Shafer, W.J., Sonnenschein, H., Equilibrium in abstract economies without ordered preferences, J. Math. Econom., 2 (1975), 345-348

33. Tarafdar, E., A fixed point theorem and equilibrium point of an abstract economy, J. Math. Econom., 20 (1991), 211-218

34. Tulcea, C.I., On the approximation of upper semi-continuous correspondences and equilibrium of generalized games, J. Math. Anal. Appl., 136 (1988), 267-289

35. Yannelis, N., Prabhakar, N., Existence of maximal elements and equilibria in linear topological spaces, J. Math. Econom., 12 (1983), 233-246
