Lecture 8: Game Theory

Game Theory: Introduction

Basic framework
● Studies strategically interdependent situations.
● Provides tools for analyzing many problems in
social science involving multiple agents and their interactions.
● Employs equilibrium as a solution concept.
● Models can be static (simultaneous decisions) or dynamic
(sequential decisions).
● Game forms: normal vs. extensive form
● Deals with information problems
– complete vs. incomplete information
– perfect vs. imperfect information
Model Structure
● Players: who is involved?
● Rules of the game: when and how do players move?
What information do they know? What can they do?
● Outcomes: for each possible combination of actions, what
is the outcome?
● Payoffs: each player's preferences over the possible outcomes
● Each player's decision: choose the strategy (s*) that gives
her the highest payoff
Equilibrium Concepts
              Complete Information            Incomplete Information

Static        Nash Equilibrium                Bayesian Nash Equilibrium

Dynamic       Subgame Perfect Equilibrium     Perfect Bayesian Equilibrium
Example 1: Prisoner's Dilemma
Two suspects are charged with a joint crime, and are held
separately by the police.
Each prisoner is told the following:
1) If one prisoner confesses and the other one does not, the
former will be given a reward of 1 and the latter will receive a
fine equal to 2.
2) If both confess, each will receive a fine equal to 1.
3) If neither confesses, both will be set free.

Define the model structure:
players, rules, outcomes, and payoffs?
Normal Form of PD game
(bi-matrix)
                       Player 2
                  Confess         Not Confess
Player 1
  Confess         -1, -1           1, -2
  Not Confess     -2, 1            0, 0

(Payoffs listed as Player 1, Player 2: a reward of 1 is +1, a fine of k is -k, being set free is 0.)
Solution Concepts
Nash equilibrium (mathematical definition)
A strategy profile s* is called a Nash equilibrium if and only if
the following condition is satisfied:

i, si
 i ( s )   i ( s , si )
* *
i

Nash equilibrium is defined over strategy profiles, NOT over
individual strategies.

What is the solution of the PD above?
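As a concrete check, the definition above can be tested by brute force. The following is a minimal Python sketch (not part of the lecture), assuming the payoff numbers from the example: a reward of 1 is +1, a fine of k is -k, and being set free is 0.

```python
from itertools import product

# Prisoner's dilemma payoffs.  Keys are (Player 1's action, Player 2's action);
# values are (Player 1's payoff, Player 2's payoff).
ACTIONS = ["Confess", "Not Confess"]
PAYOFFS = {
    ("Confess", "Confess"): (-1, -1),
    ("Confess", "Not Confess"): (1, -2),
    ("Not Confess", "Confess"): (-2, 1),
    ("Not Confess", "Not Confess"): (0, 0),
}

def is_nash(a1, a2):
    """A profile is a Nash equilibrium if no player gains by a unilateral deviation."""
    u1, u2 = PAYOFFS[(a1, a2)]
    best1 = all(PAYOFFS[(d, a2)][0] <= u1 for d in ACTIONS)
    best2 = all(PAYOFFS[(a1, d)][1] <= u2 for d in ACTIONS)
    return best1 and best2

print([s for s in product(ACTIONS, ACTIONS) if is_nash(*s)])
# [('Confess', 'Confess')] -- mutual confession is the unique pure-strategy NE
```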
Zero-Sum Game

Matching Pennies

                       Player 2
                  Heads           Tails
Player 1
  Heads           -1, 1            1, -1
  Tails            1, -1          -1, 1

(Payoffs listed as Player 1, Player 2: Player 1 loses when the pennies match.)
No Nash Equilibrium?

The distinguishing feature of a zero-sum game is that
each player would like to outguess the other, since
there is no “win-win” situation.

When each player would always like to outguess the
other(s), there is no Nash equilibrium.
 Contradicting Nash’s theorem??
Mixed Strategy

A mixed strategy for a player is a probability
distribution over some (or all) of her pure strategies.

The strategy we have studied so far, i.e., taking some
action for sure, is called a pure strategy.

When the outcome of the game is uncertain, we
assume that each player maximizes the expected value
of her payoff.
 Expected utility theory (von Neumann and
Morgenstern, 1944)
Matching Pennies Again

Introducing mixed strategies:

                       Player 2
                  Heads (p)       Tails (1-p)
Player 1
  Heads (q)       -1, 1            1, -1
  Tails (1-q)      1, -1          -1, 1
How to Find Equilibrium?

If a player takes both “Heads” and “Tails” with positive
probability, she must be indifferent between these two
pure strategies, i.e., the expected payoff derived by
choosing Heads must be equal to that by choosing Tails.

If Player 1 chooses H: E(U) = (-1)p + (1)(1-p)

If Player 1 chooses T: E(U) = (1)p + (-1)(1-p)

Indifference between H and T: -p + (1-p) = p - (1-p), hence p = 0.5.

For Player 2: q - (1-q) = -q + (1-q), hence q = 0.5.
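The indifference condition is linear, so it can also be solved mechanically. A minimal Python sketch (illustrative only, assuming the payoff convention used above: Player 1 earns -1 when the pennies match and +1 otherwise, Player 2 the opposite):

```python
# Indifference condition in a 2x2 game: the opponent's mixing probability p
# (on her first strategy) must equalize the row player's expected payoffs.
# A[r][c] is the row player's payoff; rows/columns 0 = Heads, 1 = Tails.

def indifference_prob(A):
    """Solve A[0][0]p + A[0][1](1-p) = A[1][0]p + A[1][1](1-p) for p."""
    return (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])

def transpose(A):
    """Swap roles so the column player can be treated as the row player."""
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

# Matching pennies: Player 1 loses when the pennies match.
A1 = [[-1, 1], [1, -1]]    # Player 1's payoffs
A2 = [[1, -1], [-1, 1]]    # Player 2's payoffs (zero-sum)

p = indifference_prob(A1)             # Player 2's Pr(Heads) keeping Player 1 indifferent
q = indifference_prob(transpose(A2))  # Player 1's Pr(Heads) keeping Player 2 indifferent
print(p, q)  # 0.5 0.5
```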
How to Verify Equilibrium?

Note that if p=0.5, Player 1 does not have a strict
incentive to change her strategy from q=0.5.

Similarly, Player 2 does not have a strict incentive to
change his strategy from p=0.5, if q=0.5.
 Therefore, p=q=0.5 constitutes a mixed-strategy
equilibrium.
Modified Matching Pennies

Suppose the payoffs in the upper-left cell change as
follows:

                       Player 2
                  Heads (p)       Tails (1-p)
Player 1
  Heads (q)       -2, 2            1, -1
  Tails (1-q)      1, -1          -1, 1
Indifference Property

In a mixed-strategy NE, Player 1 must be
indifferent between choosing H and T:
 -2p + (1-p) = p - (1-p), hence p = 0.4.

Similarly, Player 2 must be indifferent between
choosing H and T:
 2q - (1-q) = -q + (1-q), hence q = 0.4.

You can easily verify that (p, q) = (0.4, 0.4)
indeed constitutes a mixed-strategy NE.
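A quick numeric check of this claim (a sketch under the same payoff convention as above, with the upper-left cell changed to (-2, 2)):

```python
# Modified matching pennies: A1/A2 hold Player 1's and Player 2's payoffs;
# rows/columns 0 = Heads, 1 = Tails.
A1 = [[-2, 1], [1, -1]]
A2 = [[2, -1], [-1, 1]]
p, q = 0.4, 0.4   # p = Pr(Player 2 plays H), q = Pr(Player 1 plays H)

# Player 1's expected payoffs against p:
u1_H = A1[0][0] * p + A1[0][1] * (1 - p)   # -2(0.4) + 1(0.6)
u1_T = A1[1][0] * p + A1[1][1] * (1 - p)   #  1(0.4) - 1(0.6)

# Player 2's expected payoffs against q:
u2_H = A2[0][0] * q + A2[1][0] * (1 - q)   #  2(0.4) - 1(0.6)
u2_T = A2[0][1] * q + A2[1][1] * (1 - q)   # -1(0.4) + 1(0.6)

print([round(u, 6) for u in (u1_H, u1_T, u2_H, u2_T)])
# [-0.2, -0.2, 0.2, 0.2] -> each player is indifferent, so (0.4, 0.4) is a mixed NE
```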
Dominated strategy and NE

If iterated elimination of strictly dominated
strategies eliminates all but one combination of
strategies, denoted s, then s is the
unique NE of the game.

If a combination of strategies s is a NE, then s
survives iterated elimination of strictly
dominated strategies.
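The following is a minimal brute-force sketch of iterated elimination (illustrative only, not an algorithm given in the lecture); applied to the prisoner's dilemma from the earlier slide, it leaves only (Confess, Confess), consistent with the result above.

```python
# Iterated elimination of strictly dominated pure strategies in a two-player
# game.  A1[i][j] / A2[i][j] are the row and column players' payoffs when the
# row player uses strategy i and the column player uses strategy j.

def iterated_elimination(A1, A2):
    rows = set(range(len(A1)))
    cols = set(range(len(A1[0])))
    changed = True
    while changed:
        changed = False
        for r in list(rows):   # drop rows strictly dominated by a surviving row
            if any(all(A1[alt][c] > A1[r][c] for c in cols)
                   for alt in rows if alt != r):
                rows.discard(r)
                changed = True
        for c in list(cols):   # drop columns strictly dominated by a surviving column
            if any(all(A2[r][alt] > A2[r][c] for r in rows)
                   for alt in cols if alt != c):
                cols.discard(c)
                changed = True
    return rows, cols

# Prisoner's dilemma (index 0 = Confess, 1 = Not Confess; reward +1, fine of k = -k):
A1 = [[-1, 1], [-2, 0]]   # Player 1 (rows)
A2 = [[-1, -2], [1, 0]]   # Player 2 (columns)
print(iterated_elimination(A1, A2))  # ({0}, {0}): only (Confess, Confess) survives
```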
Entry Deterrence
                     Monopolist
                Price War       Accommodate
Entrant
  In            -1, -1           1, 1
  Out            0, 4            0, 4

(Payoffs listed as Entrant, Monopolist.)

There are two NE: (In, Accommodate) and (Out, Price War).

(Out, Price War) relies on a non-credible threat.
Dynamic Games

Dynamic games often have multiple Nash equilibria,
and some of them do not seem plausible since they
rely on non-credible threats.

By solving games from the end backwards, we
can eliminate those implausible equilibria.
 Backward Induction

This idea leads us to a refinement of NE: the
subgame perfect Nash equilibrium.
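Backward induction on the entry-deterrence game can be written out directly. A minimal Python sketch (the tree layout and the payoff order, entrant first and monopolist second, follow the table on the previous slide):

```python
# Backward induction on the entry-deterrence game.  A decision node is
# (player, {action: subtree}); a terminal node is a payoff tuple
# (entrant's payoff, monopolist's payoff).
ENTRY_GAME = ("Entrant", {
    "In":  ("Monopolist", {"Price War": (-1, -1), "Accommodate": (1, 1)}),
    "Out": (0, 4),
})
PLAYER_INDEX = {"Entrant": 0, "Monopolist": 1}

def backward_induction(node):
    """Return (payoffs, chosen action at each node along the solved path)."""
    if isinstance(node[1], dict):          # decision node
        player, actions = node
        best = None
        for action, subtree in actions.items():
            payoffs, plan = backward_induction(subtree)
            if best is None or payoffs[PLAYER_INDEX[player]] > best[0][PLAYER_INDEX[player]]:
                best = (payoffs, {player: action, **plan})
        return best
    return node, {}                        # terminal node

payoffs, plan = backward_induction(ENTRY_GAME)
print(plan, payoffs)
# {'Entrant': 'In', 'Monopolist': 'Accommodate'} (1, 1): the equilibrium built
# on the non-credible price-war threat is eliminated
```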
Extensive Form Games
The extensive-form representation of a game specifies
the following 5 elements:
• The players in the game
• When each player has the move
• What each player can do at each of her
opportunities to move
• What each player knows at each of her opportunities to move
• The payoff received by each player for each
combination of moves that could be chosen by the
players.
Game Tree

An extensive-form game is defined by a tree that
consists of nodes connected by branches.

Each branch is an arrow, pointing from one node (a
predecessor) to another (a successor).

For nodes x, y, and z, if x is a predecessor of y and y
is a predecessor of z, then it must be that x is a
predecessor of z.

A tree starts with the initial node and ends at terminal
nodes where payoffs are specified.
Tree Rules
1. Every node is a successor of the initial node.
2. Each node except the initial node has exactly one
immediate predecessor. The initial node has no
predecessor.
3. Multiple branches extending from the same node
have different action labels.
4. Each information set contains decision nodes for
only one of the players.
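A minimal data-structure sketch that encodes these rules (illustrative only; the class and field names are my own): each node records the player who moves there or the terminal payoffs, its branches keyed by action label, and its single immediate predecessor.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Node:
    player: Optional[str] = None               # who moves here; None at terminal nodes
    payoffs: Optional[Tuple[int, ...]] = None  # set only at terminal nodes
    parent: Optional["Node"] = None            # unique immediate predecessor (Rule 2)
    branches: Dict[str, "Node"] = field(default_factory=dict)

    def add_branch(self, action: str, child: "Node") -> "Node":
        # Rule 3: branches leaving the same node carry different action labels.
        assert action not in self.branches, "duplicate action label"
        # Rule 2: every non-initial node has exactly one immediate predecessor.
        assert child.parent is None, "node already has a predecessor"
        child.parent = self
        self.branches[action] = child
        return child

# The entry-deterrence tree, built under these rules:
root = Node(player="Entrant")
entry = root.add_branch("In", Node(player="Monopolist"))
root.add_branch("Out", Node(payoffs=(0, 4)))
entry.add_branch("Price War", Node(payoffs=(-1, -1)))
entry.add_branch("Accommodate", Node(payoffs=(1, 1)))
```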
Information Set

An information set for a player is a collection of
decision nodes satisfying that (i) the player has the
move at every node in the information set, and (ii)
when the play of the game reaches a node in the
information set, the player with the move does not
know which node in the information set has been
reached.
 At every decision node in an information set, the
player with the move must (i) have the same set of
feasible actions, and (ii) choose the same action.
Subgame

A subgame in an extensive-form game (a)
begins at some decision node n with a
singleton information set, (b) includes all the
decision and terminal nodes following n, and
(c) does not cut any information sets.
 We can analyze a subgame on its own,
separating it from the other part of the game.
Subgame Perfect NE

A subgame perfect Nash equilibrium (SPNE) is
a combination of strategies in an extensive-form
game that constitutes a Nash equilibrium in every
subgame.
 Since the entire game itself is a subgame, it is
obvious that a SPNE is a NE, i.e., SPNE is a
stronger solution concept than NE.
Stackelberg Model
The Stackelberg model is a dynamic version of the
Cournot model in which a dominant firm moves first
and a subordinate firm moves second.
• Firm 1 (a leader) chooses a quantity first
• Firm 2 (a follower) observes firm 1's quantity
and then chooses a quantity
 Solve the game backwards!
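A sketch of that backward solution, under the common textbook assumptions of linear inverse demand P = a - (q1 + q2) and a constant marginal cost c shared by both firms (these functional forms are not given on the slide):

```python
import sympy as sp

q1, q2, a, c = sp.symbols("q1 q2 a c")

# Step 1 (follower): firm 2 best-responds to the observed q1.
profit2 = (a - q1 - q2 - c) * q2
br2 = sp.solve(sp.diff(profit2, q2), q2)[0]        # q2 = (a - c - q1)/2

# Step 2 (leader): firm 1 chooses q1 anticipating that best response.
profit1 = (a - q1 - br2 - c) * q1
q1_star = sp.solve(sp.diff(profit1, q1), q1)[0]    # q1* = (a - c)/2
q2_star = sp.simplify(br2.subs(q1, q1_star))       # q2* = (a - c)/4

print(q1_star, q2_star)   # the leader produces twice the follower's quantity
```

For comparison, the symmetric Cournot outcome under the same assumptions is (a - c)/3 per firm, so moving first lets the leader commit to a larger quantity.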
