
IE675 Game Theory

Lecture Note Set 2


Wayne F. Bialas1
Wednesday, January 19, 2005

2 TWO-PERSON GAMES
2.1 Two-Person Zero-Sum Games

2.1.1 Basic ideas

Definition 2.1. A game (in extensive form) is said to be zero-sum if and only if,
at each terminal vertex, the payoff vector (p_1, …, p_n) satisfies Σ_{i=1}^n p_i = 0.

Two-person zero-sum games in normal form. Here's an example:

    A = [ -1  -3  -3  -2
           0   1  -2  -1
           2  -2   0   1 ]
The rows represent the strategies of Player 1. The columns represent the strategies
of Player 2. The entries aij represent the payoff vector (aij , −aij ). That is, if
Player 1 chooses row i and Player 2 chooses column j , then Player 1 wins aij and
Player 2 loses aij . If aij < 0, then Player 1 pays Player 2 |aij |.
Note 2.1. We are using the term strategy rather than action to describe the player’s
options. The reasons for this will become evident in the next chapter when we use
this formulation to analyze games in extensive form.
Note 2.2. Some authors (in particular, those in the field of control theory) prefer
to represent the outcome of a game in terms of losses rather than profits. During
the semester, we will use both conventions.
1. Department of Industrial Engineering, University at Buffalo, 301 Bell Hall, Buffalo, NY 14260-
2050 USA; E-mail: bialas@buffalo.edu; Web: http://www.acsu.buffalo.edu/~bialas. Copyright ©
MMV Wayne F. Bialas. All Rights Reserved. Duplication of this work is prohibited without written
permission. This document produced January 19, 2005 at 3:33 pm.

How should each player behave? Player 1, for example, might want to place a
bound on his profits. Player 1 could ask “For each of my possible strategies, what
is the least desirable thing that Player 2 could do to minimize my profits?” For
each of Player 1’s strategies i, compute

    α_i = min_j a_ij

and then choose the i which produces max_i α_i. Suppose this maximum is achieved
for i = i∗. In other words, Player 1 is guaranteed to get at least

    V̲(A) = min_j a_{i∗ j} ≥ min_j a_ij,   i = 1, …, m
The value V̲(A) is called the gain-floor for the game A. In this case V̲(A) = −2
with i∗ ∈ {2, 3}.

Player 2 could perform a similar analysis and find that j∗ which yields

    V̄(A) = max_i a_{i j∗} ≤ max_i a_ij,   j = 1, …, n

The value V̄(A) is called the loss-ceiling for the game A. In this case V̄(A) = 0
with j∗ = 3.
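These hand computations are easy to automate. A minimal sketch in plain Python
(variable names are my own choosing; note the indices it reports are 0-based, so
rows 1 and 2 correspond to the text's i∗ ∈ {2, 3}):

```python
# Example payoff matrix from the text (rows: Player 1's strategies,
# columns: Player 2's strategies).
A = [[-1, -3, -3, -2],
     [ 0,  1, -2, -1],
     [ 2, -2,  0,  1]]

# alpha_i = min_j a_ij; the gain-floor is max_i alpha_i.
alphas = [min(row) for row in A]
gain_floor = max(alphas)
i_star = [i for i, a in enumerate(alphas) if a == gain_floor]

# beta_j = max_i a_ij; the loss-ceiling is min_j beta_j.
betas = [max(col) for col in zip(*A)]
loss_ceiling = min(betas)
j_star = [j for j, b in enumerate(betas) if b == loss_ceiling]

print(gain_floor, i_star)      # -2 [1, 2]
print(loss_ceiling, j_star)    # 0 [2]
```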
Now, consider the joint strategies (i∗, j∗). We immediately get the following:

Theorem 2.1. For every (finite) matrix game A = [a_ij]:
1. The values V̲(A) and V̄(A) are unique.
2. There exists at least one security strategy for each player, given by (i∗, j∗).
3. min_j a_{i∗ j} = V̲(A) ≤ V̄(A) = max_i a_{i j∗}

Proof: (1) and (2) are easy. To prove (3), note that for any k and ℓ,

    min_j a_kj ≤ a_kℓ ≤ max_i a_iℓ

and the result follows.

2.1.2 Discussion

Let’s examine the decision-making philosophy that underlies the choice of (i∗, j∗).
For instance, Player 1 appears to be acting as if Player 2 is trying to do as much
harm to him as possible. This seems reasonable since this is a zero-sum game:
whatever Player 1 wins, Player 2 loses.
As we proceed through this presentation, note that this same reasoning is also used
in the field of statistical decision theory where Player 1 is the statistician, and Player
2 is “nature.” Is it reasonable to assume that “nature” is a malevolent opponent?

2.1.3 Stability

Consider another example:

    A = [ -4   0   1
           0   1  -3
          -1  -2  -1 ]

Player 1 should consider i∗ = 3 (V̲ = −2) and Player 2 should consider j∗ = 1
(V̄ = 0).
However, Player 2 can continue his analysis as follows
• Player 2 will choose strategy 1
• So Player 1 should choose strategy 2 rather than strategy 3
• But Player 2 would predict that and then prefer strategy 3
and so on.
Question 2.1. When do we have a stable choice of strategies?
The answer to the above question gives rise to some of the really important early
results in game theory and mathematical programming.
We can see that if V̲(A) = V̄(A), then both players will settle on (i∗, j∗) with

    min_j a_{i∗ j} = V̲(A) = V̄(A) = max_i a_{i j∗}

Theorem 2.2. If V̲(A) = V̄(A) then
1. A has a saddle point.
2. The saddle point corresponds to the security strategies for each player.
3. The value for the game is V = V̲(A) = V̄(A).
Question 2.2. Suppose V̲(A) < V̄(A). What can we do? Can we establish a
“spy-proof” mechanism to implement a strategy?
Question 2.3. Is it ever sensible to use expected loss (or profit) as a perfor-
mance criterion in determining strategies for “one-shot” (non-repeated) decision
problems?

2.1.4 Developing Mixed Strategies

Consider the following matrix game:

    A = [ 3  -1
          0   1 ]

For Player 1, we have V̲(A) = 0 and i∗ = 2. For Player 2, we have V̄(A) = 1 and
j∗ = 2. This game does not have a saddle point.
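A quick way to test for a saddle point is to look for an entry that is simultaneously
a minimum of its row and a maximum of its column. A minimal sketch (the helper
name and the second test matrix are my own):

```python
def saddle_points(A):
    """Return all (i, j), 0-indexed, where a_ij is both a minimum of
    row i and a maximum of column j."""
    return [(i, j)
            for i, row in enumerate(A)
            for j, a in enumerate(row)
            if a == min(row) and a == max(r[j] for r in A)]

print(saddle_points([[3, -1], [0, 1]]))   # [] -- the game above has none
print(saddle_points([[2, 1], [4, 3]]))    # [(1, 1)] -- here a_22 = 3 is a saddle point
```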
Let’s try to create a “spy-proof” strategy. Let Player 1 randomize over his two pure
strategies. That is, Player 1 will pick the vector of probabilities x = (x_1, x_2) where
Σ_i x_i = 1 and x_i ≥ 0 for all i. He will then select strategy i with probability x_i.

Note 2.3. When we formalize this, we will call the probability vector x a mixed
strategy.
To determine the “best” choice of x, Player 1 analyzes the problem as follows:

[Figure: Player 1's expected payoff against each of Player 2's pure strategies, as a
function of x_1 over 0 ≤ x_1 ≤ 1. The two lines cross at x_1 = 1/5, where the
guaranteed payoff is 3/5.]

Player 2 might do the same thing using probability vector y = (y_1, y_2) where
Σ_i y_i = 1 and y_i ≥ 0 for all i.

[Figure: Player 2's expected loss against each of Player 1's pure strategies, as a
function of y_1 over 0 ≤ y_1 ≤ 1. The two lines cross at y_1 = 2/5, where the
guaranteed loss is 3/5.]
If Player 1 adopts mixed strategy (x_1, x_2) and Player 2 adopts mixed strategy
(y_1, y_2), we obtain an expected payoff of

    V = 3x_1y_1 + 0(1 − x_1)y_1 − x_1(1 − y_1) + (1 − x_1)(1 − y_1)
      = 5x_1y_1 − y_1 − 2x_1 + 1

Suppose Player 1 uses x_1∗ = 1/5; then

    V = 5(1/5)y_1 − y_1 − 2(1/5) + 1 = 3/5

which doesn't depend on y! Similarly, suppose Player 2 uses y_1∗ = 2/5; then

    V = 5x_1(2/5) − (2/5) − 2x_1 + 1 = 3/5

which doesn't depend on x!
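The “spy-proof” property can be verified with exact arithmetic; this sketch uses
Python's fractions module so no rounding intrudes:

```python
from fractions import Fraction as F

def expected_payoff(x1, y1):
    # E(x, y) = 5*x1*y1 - y1 - 2*x1 + 1 for the game A = [[3, -1], [0, 1]]
    return 5*x1*y1 - y1 - 2*x1 + 1

# With x1* = 1/5 the payoff is 3/5 no matter what Player 2 does...
vals_vs_y = {expected_payoff(F(1, 5), F(k, 4)) for k in range(5)}
# ...and with y1* = 2/5 it is 3/5 no matter what Player 1 does.
vals_vs_x = {expected_payoff(F(k, 4), F(2, 5)) for k in range(5)}

print(vals_vs_y, vals_vs_x)   # {Fraction(3, 5)} {Fraction(3, 5)}
```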
Each player is solving a constrained optimization problem. For Player 1 the problem
is
max{v}
st: +3x1 + 0x2 ≥ v
−1x1 + 1x2 ≥ v
x1 + x2 = 1
xi ≥ 0 ∀i
which can be illustrated as follows:

[Figure: the lower envelope of Player 1's two payoff lines over 0 ≤ x_1 ≤ 1; the
optimal v is the highest point of this envelope.]

This problem is equivalent to

    max_x min{(3x_1 + 0x_2), (−x_1 + x_2)}

For Player 2 the problem is

    min{v}
    st: +3y_1 − 1y_2 ≤ v
        +0y_1 + 1y_2 ≤ v
        y_1 + y_2 = 1
        y_j ≥ 0  ∀j

which is equivalent to

    min_y max{(3y_1 − y_2), (0y_1 + y_2)}

We recognize these as dual linear programming problems.


Question 2.4. We now have a way to compute a “spy-proof” mixed strategy for
each player. Modify these two mathematical programming problems to produce
the pure security strategy for each player.

In general, the players are solving the following pair of dual linear programming
problems:

    max{v}
    st: Σ_i a_ij x_i ≥ v  ∀j
        Σ_i x_i = 1
        x_i ≥ 0  ∀i

and

    min{v}
    st: Σ_j a_ij y_j ≤ v  ∀i
        Σ_j y_j = 1
        y_j ≥ 0  ∀j
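Player 1's program can be handed to any LP solver. A sketch, assuming SciPy is
available (scipy.optimize.linprog minimizes, so we minimize −v; the variable vector
is (x_1, …, x_m, v)):

```python
from scipy.optimize import linprog

A = [[3, -1],
     [0,  1]]          # the 2x2 game from Section 2.1.4
m, n = len(A), len(A[0])

c = [0.0] * m + [-1.0]                    # minimize -v, i.e. maximize v
# Each column j gives the constraint v - sum_i a_ij x_i <= 0.
A_ub = [[-A[i][j] for i in range(m)] + [1.0] for j in range(n)]
b_ub = [0.0] * n
A_eq = [[1.0] * m + [0.0]]                # sum_i x_i = 1
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)] # x >= 0, v unrestricted

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)   # approximately [0.2, 0.8, 0.6]: x* = (1/5, 4/5), v = 3/5
```

Solving Player 2's program the same way (with ≤ constraints) recovers
y∗ = (2/5, 3/5) and the same value 3/5, illustrating the duality.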

Note 2.4. Consider, once again, the example game

    A = [ 3  -1
          0   1 ]

If Player 1 (the maximizer) uses mixed strategy (x_1, (1 − x_1)), and if Player 2 (the
minimizer) uses mixed strategy (y_1, (1 − y_1)), we get

    E(x, y) = 5x_1y_1 − y_1 − 2x_1 + 1

and letting x_1∗ = 1/5 and y_1∗ = 2/5 we get E(x∗, y) = E(x, y∗) = 3/5 for any x and y.


These choices for x∗ and y ∗ make the expected value independent of the opposing
strategy. So, if Player 1 becomes a minimizer (or if Player 2 becomes a maximizer)
the resulting mixed strategies would be the same!
Note 2.5. Consider the game

    A = [ 1  3
          4  2 ]

By “factoring” the expression for E(x, y), we can write

    E(x, y) = x_1y_1 + 3x_1(1 − y_1) + 4(1 − x_1)y_1 + 2(1 − x_1)(1 − y_1)
            = −4x_1y_1 + x_1 + 2y_1 + 2
            = −4(x_1y_1 − x_1/4 − y_1/2 + 1/8) + 2 + 1/2
            = −4(x_1 − 1/2)(y_1 − 1/4) + 5/2

It's now easy to see that x_1∗ = 1/2, y_1∗ = 1/4 and v = 5/2.
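The factored form makes the check mechanical; a small sketch with exact fractions:

```python
from fractions import Fraction as F

def E(x1, y1):
    # Expanded form of the expected payoff for A = [[1, 3], [4, 2]].
    return -4*x1*y1 + x1 + 2*y1 + 2

def E_factored(x1, y1):
    # The "factored" form from the text.
    return -4*(x1 - F(1, 2))*(y1 - F(1, 4)) + F(5, 2)

grid = [F(k, 10) for k in range(11)]
# The two forms agree everywhere on a grid of exact rationals...
assert all(E(x, y) == E_factored(x, y) for x in grid for y in grid)

# ...and at x1* = 1/2 the payoff is 5/2 regardless of y1 (symmetrically at y1* = 1/4).
assert all(E(F(1, 2), y) == F(5, 2) for y in grid)
assert all(E(x, F(1, 4)) == F(5, 2) for x in grid)
```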

2.1.5 A more formal statement of the problem

Suppose we are given a matrix game A(m×n) ≡ [a_ij]. Each row of A is a pure
strategy for Player 1. Each column of A is a pure strategy for Player 2. The value
of a_ij is the payoff from Player 2 to Player 1 (it may be negative).

For Player 1 let

    V̲(A) = max_i min_j a_ij

For Player 2 let

    V̄(A) = min_j max_i a_ij
{Case 1} (Saddle Point Case, where V̲(A) = V̄(A) = V)

Player 1 can assure himself of getting at least V from Player 2 by playing his
maximin strategy.

{Case 2} (Mixed Strategy Case, where V̲(A) < V̄(A))

Player 1 uses probability vector

    x = (x_1, …, x_m),   Σ_i x_i = 1,   x_i ≥ 0

Player 2 uses probability vector

    y = (y_1, …, y_n),   Σ_j y_j = 1,   y_j ≥ 0

If Player 1 uses x and Player 2 uses strategy j, the expected payoff is

    E(x, j) = Σ_i x_i a_ij = xA^j

where A^j is column j from matrix A.

If Player 2 uses y and Player 1 uses strategy i, the expected payoff is

    E(i, y) = Σ_j a_ij y_j = A_i y^T

where A_i is row i from matrix A.

Combined, if Player 1 uses x and Player 2 uses y, the expected payoff is

    E(x, y) = Σ_i Σ_j x_i a_ij y_j = xAy^T
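For the 2×2 game of Section 2.1.4 these quantities are easy to tabulate; a sketch in
exact fractions, using the optimal mixed strategies found in that section:

```python
from fractions import Fraction as F

A = [[3, -1], [0, 1]]
x = [F(1, 5), F(4, 5)]     # Player 1's optimal mixed strategy
y = [F(2, 5), F(3, 5)]     # Player 2's optimal mixed strategy

E_xj = [sum(x[i]*A[i][j] for i in range(2)) for j in range(2)]   # E(x, j)
E_iy = [sum(A[i][j]*y[j] for j in range(2)) for i in range(2)]   # E(i, y)
E_xy = sum(x[i]*A[i][j]*y[j] for i in range(2) for j in range(2))

# Every value is 3/5: each optimal mixed strategy equalizes the payoff
# across the opponent's pure strategies.
print(E_xj, E_iy, E_xy)
```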

The players are solving the following pair of dual linear programming prob-
lems:

    max{v}
    st: Σ_i a_ij x_i ≥ v  ∀j
        Σ_i x_i = 1
        x_i ≥ 0  ∀i

and

    min{v}
    st: Σ_j a_ij y_j ≤ v  ∀i
        Σ_j y_j = 1
        y_j ≥ 0  ∀j

The Minimax Theorem (von Neumann, 1928) states that there exists mixed strate-
gies x∗ and y ∗ for Players 1 and 2 which solve each of the above problems with
equal objective function values.

2.1.6 Proof of the Minimax Theorem

Note 2.6. (From Başar and Olsder [2]) The theory of finite zero-sum games dates
back to Borel in the early 1920’s whose work on the subject was later translated
into English (Borel, 1953). Borel introduced the notion of a conflicting decision
situation that involves more than one decision maker, and the concepts of pure
and mixed strategies, but he did not really develop a complete theory of zero-sum
games. Borel even conjectured that the Minimax Theorem was false.
It was von Neumann who first came up with a proof of the Minimax Theorem,
and laid down the foundations of game theory as we know it today (von Neumann
1928, 1937).
We will provide two proofs of this important theorem. The first proof (Theorem 2.4)
uses only the Separating Hyperplane Theorem. The second proof (Theorem 2.5)
uses the similar, but more powerful, tool of duality from the theory of linear
programming.

Our first, and direct, proof of the Minimax Theorem is based on the proof by von
Neumann and Morgenstern [7]. It also appears in the book by Başar and Olsder [2].
It depends on the Separating Hyperplane Theorem:¹

Theorem 2.3. (From [1]) Separating Hyperplane Theorem. Let S and T be
two non-empty, convex sets in R^n with no interior point in common. Then there
exists a pair (p, c) with p ∈ R^n, p ≠ 0, and c ∈ R such that

    px ≥ c  ∀x ∈ S
    py ≤ c  ∀y ∈ T

i.e., there is a hyperplane H(p, c) = {x ∈ R^n | px = c} that separates S and T.

Proof: Define S − T = {x − y ∈ R^n | x ∈ S, y ∈ T}; S − T is convex. Then
0 ∉ int(S − T): if we had 0 ∈ int(S − T), there would be an x ∈ int(S) and a
y ∈ int(T) with x − y = 0, or simply x = y, which would be a common interior
point. Thus, we can “separate” 0 from S − T, i.e., there exist p ∈ R^n with p ≠ 0
and c ∈ R such that p · (x − y) ≥ c and p · 0 ≤ c. But this implies that

    p · 0 = 0 ≤ c ≤ p · (x − y)

which implies p · (x − y) ≥ 0. Hence px ≥ py for all x ∈ S and for all y ∈ T.
That is, there must be a c ∈ R such that

    py ≤ c ≤ px  ∀x ∈ S and ∀y ∈ T

A version of Theorem 2.3 also appears in a paper by Gale [5] and a text by Boot [3].
Theorem 2.3 can be used to produce the following corollary that we will use to
prove the Minimax Theorem:

Corollary 2.1. Let A be an arbitrary (m × n)-dimensional matrix. Then either
(i) there exists a nonzero vector x ∈ R^m, x ≥ 0, such that xA ≥ 0, or
(ii) there exists a nonzero vector y ∈ R^n, y ≥ 0, such that Ay^T ≤ 0.

Theorem 2.4. Minimax Theorem. Let A = [a_ij] be an m × n matrix of real
numbers. Let Ξ_r denote the set of all r-dimensional probability vectors, that is,

    Ξ_r = {x ∈ R^r | Σ_{i=1}^r x_i = 1 and x_i ≥ 0}
1
I must thank Yong Bao for his help in finding several errors in a previous version of these notes.

We sometimes call Ξ_r the probability simplex.
Let x ∈ Ξ_m and y ∈ Ξ_n. Define

    V̲_m(A) ≡ max_x min_y xAy^T
    V̄_m(A) ≡ min_y max_x xAy^T

Then V̲_m(A) = V̄_m(A).

Proof: First we will prove that

    (1)  V̲_m(A) ≤ V̄_m(A)

To do so, note that xAy^T, max_x xAy^T and min_y xAy^T are all continuous
functions of (x, y), y and x, respectively. Any continuous, real-valued function on a
compact set attains an extremum. Therefore, there exist x^0 and y^0 such that

    V̲_m(A) = min_y x^0 A y^T
    V̄_m(A) = max_x x A y^{0T}

It is clear that

    (2)  V̲_m(A) ≤ x^0 A y^{0T} ≤ V̄_m(A)

Thus relation (1) is true.
Now we will show that one of the following must be true:

    (3)  V̄_m(A) ≤ 0  or  V̲_m(A) ≥ 0

Corollary 2.1 provides that, for any matrix A, one of the two conditions (i) or (ii)
in the corollary must be true. Suppose that condition (ii) is true. Then there exists
y^0 ∈ Ξ_n such that²

    Ay^{0T} ≤ 0
    ⇒ xAy^{0T} ≤ 0  ∀x ∈ Ξ_m
    ⇒ max_x xAy^{0T} ≤ 0

Hence

    V̄_m(A) = min_y max_x xAy^T ≤ 0

²Corollary 2.1 says that there must exist such a y^0 ∈ R^n. Why doesn't it make a difference when
we use Ξ_n rather than R^n?

Alternatively, if (i) is true then we can similarly show that

    V̲_m(A) = max_x min_y xAy^T ≥ 0

Define the (m × n) matrix B = [b_ij] where b_ij = a_ij − c for all (i, j) and where c
is a constant. Note that

    V̲_m(B) = V̲_m(A) − c  and  V̄_m(B) = V̄_m(A) − c

Since A was an arbitrary matrix, the previous results also hold for B. Hence either

    V̄_m(B) = V̄_m(A) − c ≤ 0  or
    V̲_m(B) = V̲_m(A) − c ≥ 0

Thus, for any constant c, either

    V̄_m(A) ≤ c  or  V̲_m(A) ≥ c

Relation (1) guarantees that V̲_m(A) ≤ V̄_m(A). Therefore, there exists a ∆ ≥ 0
such that V̲_m(A) + ∆ = V̄_m(A).

Suppose ∆ > 0. Choose c = V̲_m(A) + ∆/2. Then V̲_m(A) < c and V̄_m(A) > c,
so neither V̄_m(A) ≤ c nor V̲_m(A) ≥ c holds. This contradicts our previous result.
Hence ∆ = 0 and V̲_m(A) = V̄_m(A).

2.1.7 The Minimax Theorem and duality

The next version of the Minimax Theorem uses duality and provides several fun-
damental links between game theory and the theory of linear programming.3
Theorem 2.5. Consider the matrix game A with mixed strategies x and y for
Player 1 and Player 2, respectively. Then
3
This theorem and proof are from my own notebook from a Game Theory course taught at Cornell
in the summer of 1972. The course was taught by Professors William Lucas and Louis Billera. I
believe, but I cannot be sure, that this particular proof is from Professor Billera.

1. minimax statement

    max_x min_y E(x, y) = min_y max_x E(x, y)

2. saddle point statement (mixed strategies) There exist x∗ and y∗ such that

    E(x, y∗) ≤ E(x∗, y∗) ≤ E(x∗, y)

for all x and y.

2a. saddle point statement (pure strategies) Let E(i, y) denote the expected
value for the game if Player 1 uses pure strategy i and Player 2 uses mixed
strategy y. Let E(x, j) denote the expected value for the game if Player 1
uses mixed strategy x and Player 2 uses pure strategy j. There exist x∗ and
y∗ such that

    E(i, y∗) ≤ E(x∗, y∗) ≤ E(x∗, j)

for all i and j.

3. LP feasibility statement There exist x∗, y∗, and v′ = v″ such that

    Σ_i a_ij x_i∗ ≥ v′  ∀j        Σ_j a_ij y_j∗ ≤ v″  ∀i
    Σ_i x_i∗ = 1                  Σ_j y_j∗ = 1
    x_i∗ ≥ 0  ∀i                  y_j∗ ≥ 0  ∀j

4. LP duality statement The objective function values are the same for the
following two linear programming problems:

    max{v}                           min{v}
    st: Σ_i a_ij x_i ≥ v  ∀j         st: Σ_j a_ij y_j ≤ v  ∀i
        Σ_i x_i = 1                      Σ_j y_j = 1
        x_i ≥ 0  ∀i                      y_j ≥ 0  ∀j

Proof: We will sketch the proof for the above results by showing that

    (4) ⇒ (3) ⇒ (2) ⇒ (1) ⇒ (3) ⇒ (4)

and

    (2) ⇔ (2a).

{(4) ⇒ (3)} (3) is just a special case of (4).

{(3) ⇒ (2)} Let 1_n denote a column vector of n ones. Then (3) implies that there
exist x∗, y∗, and v′ = v″ such that

    x∗A ≥ v′1_n^T
    ⇒  x∗Ay^T ≥ v′(1_n^T y^T) = v′  ∀y

and

    Ay∗T ≤ v″1_m
    ⇒  xAy∗T ≤ v″(x 1_m) = v″  ∀x

Hence,

    E(x∗, y) ≥ v′ = v″ ≥ E(x, y∗)  ∀x, y

and, taking x = x∗ and y = y∗, E(x∗, y∗) = v′ = v″. This gives (2).

{(2) ⇒ (2a)} (2a) is just a special case of (2), using the mixed strategy x with
x_i = 1 and x_k = 0 for k ≠ i (and similarly for j).

{(2a) ⇒ (2)} Any mixed strategy is a convex combination of pure strategies. Since
E(x, y∗) = Σ_i x_i E(i, y∗) and E(i, y∗) ≤ E(x∗, y∗) for every i, we must have
E(x, y∗) ≤ E(x∗, y∗) for all x. The argument for the other inequality is similar.

{(2) ⇒ (1)}

• {Case ≥}

    E(x, y∗) ≤ E(x∗, y)  ∀x, y
    max_x E(x, y∗) ≤ E(x∗, y)  ∀y
    max_x E(x, y∗) ≤ min_y E(x∗, y)
    min_y max_x E(x, y) ≤ max_x E(x, y∗) ≤ min_y E(x∗, y) ≤ max_x min_y E(x, y)

• {Case ≤}

    min_y E(x, y) ≤ E(x, y)  ∀x, y
    max_x min_y E(x, y) ≤ max_x E(x, y)  ∀y
    max_x min_y E(x, y) ≤ min_y max_x E(x, y)

{(1) ⇒ (3)}

    max_x min_y E(x, y) = min_y max_x E(x, y)

Let f(x) = min_y E(x, y). Since f is continuous on the compact set Ξ_m, there
exists x∗ at which f attains its maximum. Hence

    min_y E(x∗, y) = max_x min_y E(x, y)

Taking v′ to be this common value, x∗ satisfies the constraints of (3); a symmetric
argument produces y∗ and v″, and (1) gives v′ = v″.

{(3) ⇒ (4)} This is direct from the duality theorem of LP. (See Chapter 13 of
Dantzig's text.)

Question 2.5. Can the LP problem in section (4) of Theorem 2.5 have alternate
optimal solutions? If so, how does that affect the choice of (x∗, y∗)?⁴

2.2 Two-Person General-Sum Games

2.2.1 Basic ideas

Two-person general-sum games (sometimes called “bi-matrix games”) can be
represented by two (m × n) matrices A = [a_ij] and B = [b_ij], where a_ij is the
“payoff” to Player 1 and b_ij is the “payoff” to Player 2. If A = −B then we get a
two-person zero-sum game, A.
Note 2.7. These are non-cooperative games with no side payments.
Definition 2.2. The (pure) strategy (i∗ , j ∗ ) is a Nash equilibrium solution to the
game (A, B) if

ai∗ ,j ∗ ≥ ai,j ∗ ∀i
bi∗ ,j ∗ ≥ bi∗ ,j ∀j

Note 2.8. If both players are placed on their respective Nash equilibrium strategies
(i∗, j∗), then neither player can unilaterally move away from that strategy and
improve his payoff.
4
Thanks to Esra E. Aleisa for this question.

Question 2.6. Show that if A = −B (zero-sum case), the above definition of a
Nash solution corresponds to our previous definition of a saddle point.
Note 2.9. Not every game has a Nash solution using pure strategies.
Note 2.10. A Nash solution need not be the best solution, or even a reasonable
solution for a game. It’s merely a stable solution against unilateral moves by a
single player. For example, consider the game
    (A, B) = [ (4, 0)  (4, 1)
               (5, 3)  (3, 2) ]

This game has two Nash equilibrium strategies, with payoffs (4, 1) and (5, 3). Note
that both players prefer (5, 3) when compared with (4, 1).
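Pure Nash equilibria of a bi-matrix game can be found by brute force straight from
Definition 2.2. A minimal sketch (the function name is my own), applied to the
game above:

```python
def pure_nash(A, B):
    """All pure-strategy Nash equilibria (i, j), 0-indexed."""
    m, n = len(A), len(A[0])
    return [(i, j)
            for i in range(m) for j in range(n)
            if all(A[i][j] >= A[k][j] for k in range(m))     # Player 1 can't improve
            and all(B[i][j] >= B[i][l] for l in range(n))]   # Player 2 can't improve

A = [[4, 4], [5, 3]]    # Player 1's payoffs
B = [[0, 1], [3, 2]]    # Player 2's payoffs
print(pure_nash(A, B))  # [(0, 1), (1, 0)] -- the outcomes with payoffs (4,1) and (5,3)
```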
Question 2.7. What is the solution to the following simple modification of the
above game?⁵

    (A, B) = [ (4, 0)  (4, 1)
               (4, 2)  (3, 2) ]

Example 2.1. (Prisoner’s Dilemma) Two suspects in a crime have been picked up
by police and placed in separate rooms. If both confess (C ), each will be sentenced
to 3 years in prison. If only one confesses, he will be set free and the other (who
didn’t confess (N C )) will be sent to prison for 4 years. If neither confesses, they
will both go to prison for 1 year.
This game can be represented in strategic form, as follows:

C NC
C (-3,-3) (0,-4)
NC (-4,0) (-1,-1)

This game has one Nash equilibrium strategy, (C, C), with payoffs (−3, −3). When
compared with the other outcomes, note that it represents one of the worst results
for both players.

2.2.2 Properties of Nash strategies


5
Thanks to Esra E. Aleisa for this question.

Definition 2.3. The pure strategy pair (i1 , j1 ) weakly dominates (i2 , j2 ) if and
only if
ai1 ,j1 ≥ ai2 ,j2
bi1 ,j1 ≥ bi2 ,j2
and one of the above inequalities is strict.
Definition 2.4. The pure strategy pair (i1 , j1 ) strongly dominates (i2 , j2 ) if and
only if
ai1 ,j1 > ai2 ,j2
bi1 ,j1 > bi2 ,j2

Definition 2.5. (Weiss [8]) The pure strategy pair (i, j) is inadmissible if there
exists some strategy pair (i0 , j0 ) that weakly dominates (i, j).
Definition 2.6. (Weiss [8]) The pure strategy pair (i, j) is admissible if it is not
inadmissible.
Example 2.2. Consider again the game

    (A, B) = [ (4, 0)  (4, 1)
               (5, 3)  (3, 2) ]

with Nash equilibrium payoffs (4, 1) and (5, 3). Only (5, 3) is admissible.
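Admissibility (Definitions 2.5 and 2.6) can be checked the same way, comparing
each outcome against every other strategy pair. A sketch (helper names mine):

```python
def admissible_pairs(A, B):
    """All strategy pairs (i, j), 0-indexed, not weakly dominated by any pair."""
    cells = [(i, j) for i in range(len(A)) for j in range(len(A[0]))]

    def weakly_dominates(p, q):
        # p weakly dominates q: at least as good for both, strictly better for one.
        (i1, j1), (i2, j2) = p, q
        return (A[i1][j1] >= A[i2][j2] and B[i1][j1] >= B[i2][j2]
                and (A[i1][j1] > A[i2][j2] or B[i1][j1] > B[i2][j2]))

    return [q for q in cells
            if not any(weakly_dominates(p, q) for p in cells)]

A = [[4, 4], [5, 3]]
B = [[0, 1], [3, 2]]
print(admissible_pairs(A, B))   # [(1, 0)] -- only the (5, 3) outcome survives
```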
Note 2.11. If there exist multiple admissible Nash equilibria, then side-payments
(with collusion) may yield a “better” solution for all players.

Definition 2.7. Two bi-matrix games (A, B) and (C, D) are strategically equiv-
alent if there exist α_1 > 0, α_2 > 0 and scalars β_1, β_2 such that

    a_ij = α_1 c_ij + β_1  ∀i, j
    b_ij = α_2 d_ij + β_2  ∀i, j

Theorem 2.6. If bi-matrix games (A, B) and (C, D) are strategically equivalent
and (i∗, j∗) is a Nash strategy for (A, B), then (i∗, j∗) is also a Nash strategy for
(C, D).
Note 2.12. This was used to modify the original matrices for the Prisoner's
Dilemma problem in Example 2.1.

2.2.3 Nash equilibria using mixed strategies

Sometimes the bi-matrix game (A, B) does not have a Nash strategy using pure
strategies. As before, we can use mixed strategies for such games.
Definition 2.8. The (mixed) strategy (x∗, y∗) is a Nash equilibrium solution to
the game (A, B) if

    x∗Ay∗T ≥ xAy∗T  ∀x ∈ Ξ_m
    x∗By∗T ≥ x∗By^T  ∀y ∈ Ξ_n

where Ξ_r is the r-dimensional probability simplex.


Question 2.8. Consider the game

    (A, B) = [ (−2, −4)  (0, −3)
               (−3, 0)   (1, −1) ]

Can we find mixed strategies (x∗, y∗) that provide a Nash solution as defined
above?
Theorem 2.7. Every bi-matrix game has at least one Nash equilibrium solution
in mixed strategies.
Proof: (This is the sketch provided by the text for Proposition 33.1; see Chapter 3
for a complete proof for N ≥ 2 players.)

Consider the sets Ξ_m and Ξ_n consisting of the mixed strategies for Player 1 and
Player 2, respectively. Note that Ξ_m × Ξ_n is non-empty, convex and compact.
Since the expected payoff functions xAy^T and xBy^T are bilinear in (x, y), the
result follows using Brouwer's fixed point theorem.

2.2.4 Finding Nash mixed strategies

Consider again the game

    (A, B) = [ (−2, −4)  (0, −3)
               (−3, 0)   (1, −1) ]

For Player 1,

    xAy^T = −2x_1y_1 − 3(1 − x_1)y_1 + (1 − x_1)(1 − y_1)
          = 2x_1y_1 − x_1 − 4y_1 + 1

For Player 2,

    xBy^T = −2x_1y_1 − 2x_1 + y_1 − 1

In order for (x∗, y∗) to be a Nash equilibrium, we must have

    (4)  x∗Ay∗T ≥ xAy∗T  ∀x ∈ Ξ_m
    (5)  x∗By∗T ≥ x∗By^T  ∀y ∈ Ξ_n

For Player 1 this means that we want (x∗, y∗) so that for all 0 ≤ x_1 ≤ 1

    2x_1∗y_1∗ − x_1∗ − 4y_1∗ + 1 ≥ 2x_1y_1∗ − x_1 − 4y_1∗ + 1
    2x_1∗y_1∗ − x_1∗ ≥ 2x_1y_1∗ − x_1

Let's try y_1∗ = 1/2. We get

    2x_1∗(1/2) − x_1∗ ≥ 2x_1(1/2) − x_1
    0 ≥ 0

Therefore, if y∗ = (1/2, 1/2) then any x∗ can be chosen and condition (4) will be
satisfied.

Note that only condition (4) and Player 1's matrix A were used to get Player 2's
strategy y∗.

For Player 2 the same thing happens if we use x_1∗ = 1/2 and condition (5). That is,
for all 0 ≤ y_1 ≤ 1

    −2x_1∗y_1∗ − 2x_1∗ + y_1∗ − 1 ≥ −2x_1∗y_1 − 2x_1∗ + y_1 − 1
    −2x_1∗y_1∗ + y_1∗ ≥ −2x_1∗y_1 + y_1
    −2(1/2)y_1∗ + y_1∗ ≥ −2(1/2)y_1 + y_1
    0 ≥ 0

How can we get the values of (x∗, y∗) that will work? One suggested approach
(from Başar and Olsder [2]) uses the following:

Theorem 2.8. Any mixed Nash equilibrium solution (x∗, y∗) in the interior of
Ξ_m × Ξ_n must satisfy

    (6)  Σ_{j=1}^n y_j∗ (a_ij − a_1j) = 0  ∀i ≠ 1

    (7)  Σ_{i=1}^m x_i∗ (b_ij − b_i1) = 0  ∀j ≠ 1

Proof: Recall that

    E(x, y) = xAy^T = Σ_{i=1}^m Σ_{j=1}^n x_i y_j a_ij = Σ_{j=1}^n Σ_{i=1}^m x_i y_j a_ij

Since x_1 = 1 − Σ_{i=2}^m x_i, we have

    xAy^T = Σ_{j=1}^n [ Σ_{i=2}^m x_i y_j a_ij + (1 − Σ_{i=2}^m x_i) y_j a_1j ]
          = Σ_{j=1}^n [ y_j a_1j + y_j Σ_{i=2}^m x_i (a_ij − a_1j) ]
          = Σ_{j=1}^n y_j a_1j + Σ_{i=2}^m x_i [ Σ_{j=1}^n y_j (a_ij − a_1j) ]

If (x∗, y∗) is an interior maximum (or minimum) then

    ∂(xAy^T)/∂x_i = Σ_{j=1}^n y_j (a_ij − a_1j) = 0  for i = 2, …, m

which provides Equations (6).

The derivation of Equations (7) is similar.
Note 2.13. In the proof we have the equation

    xAy^T = Σ_{j=1}^n y_j a_1j + Σ_{i=2}^m x_i [ Σ_{j=1}^n y_j (a_ij − a_1j) ]

Any Nash solution (x∗, y∗) in the interior of Ξ_m × Ξ_n has

    Σ_{j=1}^n y_j∗ (a_ij − a_1j) = 0  ∀i ≠ 1

So this choice of y∗ produces

    xAy^T = Σ_{j=1}^n y_j a_1j + Σ_{i=2}^m x_i [0]

making this expression independent of x.


Note 2.14. Equations (6) and (7) only provide necessary (not sufficient) conditions,
and only characterize solutions on the interior of the probability simplex (i.e., where
every component of x and y is strictly positive).

For our example, these equations produce

    y_1∗(a_21 − a_11) + y_2∗(a_22 − a_12) = 0
    x_1∗(b_12 − b_11) + x_2∗(b_22 − b_21) = 0

Since x_2∗ = 1 − x_1∗ and y_2∗ = 1 − y_1∗, we get

    y_1∗(−3 − (−2)) + (1 − y_1∗)(1 − 0) = 0
    −y_1∗ + (1 − y_1∗) = 0
    y_1∗ = 1/2

    x_1∗(−3 − (−4)) + (1 − x_1∗)(−1 − 0) = 0
    x_1∗ − (1 − x_1∗) = 0
    x_1∗ = 1/2

But, in addition, one must check that x_1∗ = 1/2 and y_1∗ = 1/2 are actually Nash
solutions.
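That check can be done with exact arithmetic, using the payoff expressions
xAy^T = 2x_1y_1 − x_1 − 4y_1 + 1 and xBy^T = −2x_1y_1 − 2x_1 + y_1 − 1 derived
in Section 2.2.4; a minimal sketch:

```python
from fractions import Fraction as F

def E1(x1, y1):   # Player 1's expected payoff xAy^T
    return 2*x1*y1 - x1 - 4*y1 + 1

def E2(x1, y1):   # Player 2's expected payoff xBy^T
    return -2*x1*y1 - 2*x1 + y1 - 1

half = F(1, 2)
grid = [F(k, 10) for k in range(11)]

# At y1* = 1/2, Player 1 gets the same payoff for every x1 (no profitable
# deviation); at x1* = 1/2, the same holds for Player 2 and every y1.
assert all(E1(x1, half) == E1(half, half) for x1 in grid)
assert all(E2(half, y1) == E2(half, half) for y1 in grid)
print(E1(half, half), E2(half, half))   # -1 -2
```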

2.2.5 The Lemke-Howson algorithm

Lemke and Howson [6] developed a quadratic programming technique for finding
mixed Nash strategies for two-person general-sum games (A, B) in strategic form.
Their method is based on the following fact, provided in their paper:

Let ek denote a column vector of k ones, and let x and y be row vectors of dimension
m and n, respectively. Let p and q denote scalars. We will also assume that A and
B are matrices, each with m rows and n columns.
A mixed strategy is defined by a pair (x, y) such that

(8) xem = yen = 1, and x ≥ 0, y ≥ 0

with expected payoffs


(9) xAy T and xBy T .
A Nash equilibrium solution is a pair (x̄, ȳ) satisfying (8) such that for all (x, y)
satisfying (8),
(10) xAȳ T ≤ x̄Aȳ T and x̄By T ≤ x̄B ȳ T .
But this implies that

(11) Aȳ T ≤ x̄Aȳ T em and x̄B ≤ x̄B ȳ T eTn .

Conversely, suppose (11) holds for (x̄, ȳ) satisfying (8). Now choose an arbitrary
(x, y) satisfying (8). Multiply the first expression in (11) on the left by x and
second expression in (11) on the right by y T to get (10). Hence, (8) and (11) are,
together, equivalent to (8) and (10).
This serves as the foundation for the proof of the following theorem:
Theorem 2.9. Any mixed strategy (x∗, y∗) for bi-matrix game (A, B) is a Nash
equilibrium solution if and only if x∗, y∗, p∗ and q∗ solve problem (LH):

    (LH): max_{x,y,p,q} {xAy^T + xBy^T − p − q}
          st: Ay^T ≤ p e_m
              B^T x^T ≤ q e_n
              x_i ≥ 0  ∀i
              y_j ≥ 0  ∀j
              Σ_{i=1}^m x_i = 1
              Σ_{j=1}^n y_j = 1

Proof: (⇒)
Every feasible solution (x, y, p, q) to problem (LH) must satisfy the constraints

    Ay^T ≤ p e_m
    xB ≤ q e_n^T

Multiply both sides of the first constraint on the left by x and multiply the second
constraint on the right by y T . As a result, we see that a feasible (x, y, p, q) must
satisfy

xAy T ≤ p
xBy T ≤ q.

Hence, for any feasible (x, y, p, q), the objective function must satisfy

    xAy^T + xBy^T − p − q ≤ 0.

Suppose (x∗ , y ∗ ) is any Nash solution for (A, B). Let

p∗ = x∗ Ay ∗T
q ∗ = x∗ By ∗T .

Because of (10) and (11), this implies

Ay ∗T ≤ x∗ Ay ∗T em = p∗ em
x∗ B ≤ x∗ By ∗T eTn = q ∗ eTn .

So this choice of (x∗, y∗, p∗, q∗) is feasible, and results in an objective function
value of zero. Hence it is an optimal solution to problem (LH).
(⇐)
Suppose (x̄, ȳ, p̄, q̄) solves problem (LH). From Theorem 2.7, there is at least one
Nash solution (x∗ , y ∗ ). Using the above argument, (x∗ , y ∗ ) must be an optimal
solution to (LH) with an objective function value of zero. Since (x̄, ȳ, p̄, q̄) is an
optimal solution to (LH), we must then have

(12) x̄Aȳ T + x̄B ȳ T − p̄ − q̄ = 0

with (x̄, ȳ, p̄, q̄) satisfying the constraints

(13) Aȳ T ≤ p̄em


(14) x̄B ≤ q̄eTn .

Now multiply (13) on the left by x̄ and multiply (14) on the right by ȳ T to get

(15) x̄Aȳ T ≤ p̄
(16) x̄B ȳ T ≤ q̄.

Then (12), (15), and (16) together imply
x̄Aȳ T = p̄
x̄B ȳ T = q̄.

So (13) and (14) can now be rewritten as

    (17) Aȳ^T ≤ (x̄Aȳ^T) e_m
    (18) x̄B ≤ (x̄Bȳ^T) e_n^T

Choose an arbitrary (x, y) ∈ Ξ_m × Ξ_n and, this time, multiply (17) on the left by
x and multiply (18) on the right by y^T to get

    (19) xAȳ^T ≤ x̄Aȳ^T
    (20) x̄By^T ≤ x̄Bȳ^T

for all (x, y) ∈ Ξ_m × Ξ_n. Hence (x̄, ȳ) is a Nash equilibrium solution.
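We can check the (LH) characterization on the game of Question 2.8, whose mixed
equilibrium x∗ = y∗ = (1/2, 1/2) was found in Section 2.2.4. At an equilibrium,
with p = x∗Ay∗T and q = x∗By∗T, the constraints hold and the objective is zero
(a sketch in exact arithmetic):

```python
from fractions import Fraction as F

A = [[-2, 0], [-3, 1]]
B = [[-4, -3], [0, -1]]
x = y = [F(1, 2), F(1, 2)]

xAy = sum(x[i]*A[i][j]*y[j] for i in range(2) for j in range(2))
xBy = sum(x[i]*B[i][j]*y[j] for i in range(2) for j in range(2))
p, q = xAy, xBy

# Feasibility for (LH): (A y^T)_i <= p for all i and (x B)_j <= q for all j.
assert all(sum(A[i][j]*y[j] for j in range(2)) <= p for i in range(2))
assert all(sum(x[i]*B[i][j] for i in range(2)) <= q for j in range(2))

# The (LH) objective attains its maximum value, zero.
print(xAy + xBy - p - q)   # 0
```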

2.3 BIBLIOGRAPHY

[1] Anon., The history of economic thought web site, Department of Economics,
New School University (2003)
http://cepa.newschool.edu/het/home.htm

[2] T. Başar and G. Olsder, Dynamic noncooperative game theory, Academic Press
(1982).
[3] John C. G. Boot, Quadratic programming, North-Holland, Amsterdam, (1964).
[4] G. Debreu, Separation theorems for convex sets, in T. C. Koopmans and A.
F. Bausch, Selected topics in economics involving mathematical reasoning,
SIAM Review 1. (1959) 79–148.
[5] D. Gale, The basic theorems of real linear equations, inequalities, linear pro-
gramming and game theory, Naval Research Logistics Quarterly, Vol. 3 (1956)
193–200.
[6] C. E. Lemke and J. T. Howson, Jr., Equilibrium points of bimatrix games, SIAM
Journal, Volume 12, Issue 2 (Jun., 1964), pp 413–423.
[7] J. von Neumann and O. Morgenstern, Theory of games and economic behavior,
Princeton Univ. Press (1947).
[8] L. Weiss, Statistical decision theory, McGraw-Hill (1961).
