
Methodology for Analyzing

Market Dynamics.

Adapted from three lectures given in 2014.


• The Cowles Lecture: N.A. Econometric
Society, Minneapolis, June.
• Keynote Address: Society for Applied
Dynamic Games, Amsterdam, July.
• Keynote Address: L.A. Econometric Society
Meetings, Sao Paulo, November.

by

Ariel Pakes
(Harvard University and the NBER).

1
Background: Methodological
Developments in IO.

• We have been developing tools that enable us to better analyze market outcomes.

• Common thread: emphasis on incorporating the institutional background needed to make sense of the data used in analyzing the likely causes of historical events, or the likely responses to environmental and policy changes.

• Focus. Incorporate
(i) heterogeneity (in plant productivity, products demanded, bidders and/or consumers) and, where possible,
(ii) equilibrium conditions (Nash in prices or quantities, and extensions designed to analyze allocations in network, platform, and vertical markets).

2
We largely relied on earlier work by our game
theory colleagues for the analytic frameworks.

• Each agent's actions affect all agents' payoffs, and

• At the "equilibrium" or "rest point"
(i) agents have correct perceptions, and
(ii) the system is in some form of Nash equilibrium (policies are such that no agent has an incentive to deviate).

• Our contribution is the development of an ability to adapt the analysis to the richness of different real world institutions.

3
Claim 1. The tools developed for the analysis of market allocations conditional on the "state variables" of the problem (characteristics of products marketed, cost determinants, ...) pass a market test for success as:
(i) They have been incorporated into applied work in virtually all of economics that deals with market allocations (especially where productivity and/or demand is needed),
(ii) They are used by public agencies, consultancies, and to some extent by firms, and
(iii) They do surprisingly well, both in fit and in providing a deeper understanding of empirical phenomena.

Note. Improvements are still being made, and there is important new work on analyzing equilibrium allocations in markets where Nash in prices or quantities seems inappropriate; e.g. vertical markets, platform markets, ...
4
E.g. of Fit: Pricing Behavior. Wollman's dissertation (commercial trucks). Estimate BLP demand, regress the Nash markup on instruments to get a fitted markup (R² = .44, or .46 with time dummies; sophisticated IV would do better). Then look at the fit, and at whether the coefficient on the fitted markup ≈ 1.

Table 1: Fit of Pricing Equilibrium.

                 Price  (S.E.)   Price  (S.E.)
Gross Weight      .36   (0.01)    .36   (.003)
Cab-over          .13   (0.01)    .13   (0.01)
Compact front    -.19   (0.04)    0.21  (0.03)
Long cab         -.01   (0.04)    0.03  (0.03)
Wage              .08   (.003)    0.08  (.003)
Fitted markup     .92   (0.31)    1.12  (0.22)
Time dummies?     No     n.r.     Yes    n.r.
R²                0.86   n.r.     0.94   n.r.

Nobs = 1,777; firms = 16; t = 1992-2012; heteroskedasticity-consistent s.e.

Note. Level shifts (time dummies) account for 8 of the 14 percentage points of unexplained variance.
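The two-step check above can be sketched as follows. Everything here is simulated stand-in data (hypothetical instruments, characteristics, and coefficients), not Wollman's data; the point is only the mechanics: project the markup on instruments, then regress price on characteristics and the fitted markup and check whether its coefficient comes out ≈ 1.

```python
import numpy as np

def ols(X, y):
    """OLS coefficients via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 1777                                  # observations, as in Table 1
z = rng.normal(size=(n, 3))               # instruments (hypothetical)
x = rng.normal(size=(n, 2))               # product characteristics (hypothetical)

# Model-implied markup: driven by the instruments plus noise (hypothetical DGP).
markup = z @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=0.3, size=n)
price = x @ np.array([0.36, 0.13]) + markup + rng.normal(scale=0.1, size=n)

# Stage 1: project the markup on the instruments to get the fitted markup.
Z = np.column_stack([np.ones(n), z])
markup_hat = Z @ ols(Z, markup)

# Stage 2: regress price on characteristics and the fitted markup; under the
# pricing equilibrium the fitted-markup coefficient should be close to 1.
X2 = np.column_stack([np.ones(n), x, markup_hat])
beta2 = ols(X2, price)
resid = price - X2 @ beta2
r2 = 1 - resid.var() / price.var()
print(beta2[-1], r2)
```

Because the instruments here are exogenous by construction, the fitted-markup coefficient is consistent for 1; with real data the quality of the instruments does the work.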
5
What About "Dynamics"? Use the textbook distinction: (i) static models solve for profits conditional on state variables, (ii) dynamics analyzes the evolution of those state variables.

The initial frameworks by our theory colleagues made assumptions which ensured that

1. the state variables evolve as a Markov process, and

2. the equilibrium is some form of Markov Perfection (no agent has an incentive to deviate at any value of the state variables).

E.g. Maskin and Tirole (1988) and Ericson and Pakes (1995). We now consider each of these in turn.

6
On (1); the Markov Assumption. Except in situations involving active experimentation and learning (where policies are transient), applied work is likely to stick with the assumption that states evolve as a time-homogeneous Markov process of finite order. There are at least three reasons for this:
• It is convenient and fits the data well.
• Realism suggests information access and retention conditions limit the memory used.
• We can bound unilateral deviations (similar to Weintraub, 2014), and have conditions which ensure those deviations can be made arbitrarily small by letting the length of the kept history grow (White and Scherer, 1994).

On (2); Perfection. The type of rationality built into Markov Perfection is more questionable, even though it has been useful in the simple models used by our theory colleagues to explore possible outcomes in a structured way. We come back to this below.
7
Empirical work on dynamics proceeded in a similar way to what we did in static analysis: we took the Markov Perfect framework and tried to incorporate the institutions that seemed necessary to analyze actual markets.

The Result. Though the MP framework was useful in guiding analysis of several issues (e.g. productivity), it became unwieldy when confronted with the task of analyzing market dynamics. This is because of the complexity of the institutions we were trying to model. The difficulties became evident when we tried to use the Markov Perfect notions to structure

• the estimation of parameters, or to

• compute the fixed points that defined the equilibria or rest points of the system.
8
Our response. Keep the equilibrium notion
and develop techniques to make it easier to
circumvent the estimation and computational
problems. Useful contribution in this regard:

• The development of estimation techniques that circumvent the problem of repeatedly computing equilibria (i.e. that do not require a nested fixed point algorithm).

• The use of approximations and functional forms for primitives which enabled us to compute equilibria more quickly and/or with smaller memory requirements.

The underlying ideas: (i) are useful under other equilibrium assumptions, and (ii) enabled an expansion of computational dynamic theory. However they were not powerful enough to allow us to incorporate sufficient realism into empirical work based on Markov Perfection. This leads me to my second claim.

Claim 2. Empirical work on dynamic models has not passed beyond the hands of a few diligent I.O. researchers.

As a result, dynamic issues are analyzed in a much less rigorous way than static issues, even when dynamic computational results indicate that they are essential to understanding the implications of the phenomena of interest (e.g. mergers or collusion). Moreover the complexity of Markov Perfection not only limits our ability to do dynamic analysis of market outcomes, it also

• leads to the question of whether some other notion of equilibria will better approximate agents' behavior.

9
I want to focus on the last point. The fact that the Markov Perfect framework becomes unwieldy when confronted by the complexity of real world institutions not only limits our ability to do empirical analysis of market dynamics,

• it also raises the question of whether some other notion of equilibria will better approximate agents' behavior.

I.e. if we abandon Markov Perfection, can we both

• better approximate agents' behavior, and

• enlarge the set of dynamic questions we are able to analyze?

10
The complexity issue. When we try to incorporate what seems to be essential institutional background we find

• That the agent is required to: (i) access a large amount of information (all state variables), and (ii) either compute or learn an unrealistic number of strategies (one for each information set).

How demanding is this? Consider markets where consumer, as well as producer, choices are dynamic (e.g. durable, experience, or network goods); we need the distribution of: current stocks × household characteristics, production costs, .... In a symmetric information MPE an agent would have to access all state variables, and then either compute a doubly nested fixed point, or learn and retain policies from each distinct information set.

Theory Fix: Assume agents only have access to a subset of the state variables.

• Since agents presumably know their own characteristics, and these tend to be persistent, we would need to allow for asymmetric information: the "perfectness" notion would then lead us to a "Bayesian" Markov Perfect solution.

Problem. The burden of computing its strategies ensures that they will not be directly computed by either agents or the analyst for even the simplest (realistic) applied problem. The additional burden results from the need to compute posteriors, as well as optimal policies, and the requirement that they be consistent with one another.

Could agents learn these policies, or at least policies which maintain some of the logical features of Bayesian Perfect policies, from combining data on past behavior with market outcomes? They would have to learn about:
• primitives (some empirical work on this),
• the likely behavior of their competitors, and
• market outcomes given primitives, competitor behavior, and their own policies.

There is surprisingly little empirical evidence on how firms formulate their perceptions about either other firms' behavior, or the impact of their own strategies given primitives and the actions of competitors.

11
An ongoing study by U. Doraszelski, G. Lewis and myself of bids from the date the British Electric Utility market for frequency response opened addresses this question. The conclusions we are reasonably confident of to date are (see the figures):

• The bids do eventually converge, and they converge to what looks like a Nash equilibrium (in 2009), and

• In its initial stages, the learning process is complex, involves experimentation, and differs among firms.

• The smaller changes that occurred in the first half of 2010 (Drax signs a long-term contract with NG) seem more structured and proceed to an equilibrium much more quickly.

12
[Figure: Weighted average bid by date, 2005m7-2011m7, by unit: Aberthaw, Cottam, Connah's Quay, Drax, Eggborough, Peterhead, Rats, Seabank.]

[Figure: Bids: cross-unit variance (active firms) by date, 2006m4-2012m4; share-weighted and unweighted.]
Rest of Talk.

• Unfortunately, I have little to say on "active" experimentation periods; on modeling beliefs about the value of different experiments.

• For more stable environments I introduce (i) a notion of equilibrium that is less demanding than Markov Perfect for both the agents and the analyst, and show how to (ii) compute the equilibrium and (iii) estimate off of the equilibrium conditions.

• Consider restrictions that mitigate multiple equilibria.

• Provide a computed example of this equilibrium (electric utility generation).
13
I start with strategies that are "rest points" of a dynamical system. Later I will consider institutional change, but only changes where it is reasonable to model responses to the change with a simple reinforcement learning process (I do not consider changes that lead to active experimentation). This makes my job much easier because:

• Strategies at the rest point likely satisfy a Nash condition of some sort; else someone has an incentive to deviate.

• However it still leaves open the question: What is the form of the Nash condition?

What Conditions Can We Assume for the Rest Point at States that are Visited Repeatedly?

We expect (and I believe should integrate into our modelling) that

1. Agents perceive that they are doing the best they can at each of these points, and that

2. These perceptions are at least consistent with what they observe.

Note. It might be reasonable to assume more than this: that agents (i) know and/or (ii) explore properties of outcomes of states not visited repeatedly. I come back to this below.

14
Formalization of Assumptions.

• Denote the information set of firm i in period t by J_{i,t}. J_{i,t} will contain both public (ξ_t) and private (ω_{i,t}) information, so J_{i,t} = {ξ_t, ω_{i,t}}.

• Assume J_{i,t} evolves as a (controlled) finite state Markov process on J (or can be adequately approximated by one), and that only a finite number of firms are ever simultaneously active.

• Policies, say m_{i,t} ∈ M, will be functions of J_{i,t}. For simplicity assume #M is finite and that it is a simple capital accumulation game, i.e. ∀(m_i, m_{-i}) ∈ M^n and ∀ω ∈ Ω,

P_ω(·|m_i, m_{-i}, ω) = P_ω(·|m_i, ω),

where the public information, ξ, is used to predict competitor behavior and common demand and cost conditions, which evolve as an exogenous Markov process.
15
• So a "state" of the system is

s_t = {J_{1,t}, . . . , J_{n_t,t}} ∈ S,

with #S finite. ⇒ any set of policies will ensure that s_t wanders into a recurrent subset of S, say R ⊂ S, in finite time, and after that s_{t+τ} ∈ R w.p.1 forever.

• Agents do not: (i) know s_t, or (ii) calculate policies for its components.

• ⇒ If J_{i,t} ∈ J and #J = K, while the number of firms is N, the number of states changes from K^N to either:
(i) K (symmetric agents), or
(ii) K × N (otherwise).
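A back-of-the-envelope illustration of this reduction; the values of K and N here are hypothetical magnitudes, chosen only to show the orders involved:

```python
# State counts for policy computation: K information sets per firm,
# N firms (both hypothetical magnitudes).
K, N = 100, 5

mpe_states = K ** N       # symmetric-information MPE: policies on all of S
ebe_symmetric = K         # EBE policies on J only, symmetric agents
ebe_asymmetric = K * N    # otherwise: one policy map per firm

print(mpe_states, ebe_symmetric, ebe_asymmetric)
```

With only 100 information sets per firm and 5 firms, the full state space already has 10 billion points, while the per-firm information sets number in the hundreds.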

16
• Let the agent's perception of the expected discounted value of current and future net cash flow, were it to choose m at state J_i, be

W(m|J_i), ∀m ∈ M and ∀J_i ∈ J,

• and of expected profits be

π^E(m|J_i).

Our assumptions imply:

• Each agent chooses an action which maximizes its perception of its expected discounted value, and

• For those states that are visited repeatedly (are in R), these perceptions are consistent with observed outcomes.

17
Formally

A. W(m*|J_i) ≥ W(m|J_i), ∀m ∈ M and ∀J_i ∈ J,

B. and, ∀J_i which is a component of an s ∈ R,

W(m(J_i)|J_i) = π^E(m|J_i) + β Σ_{J_i'} W(m'(J_i')|J_i') p^e(J_i'|J_i),

where, if p^e(·) provides the empirical probability (the fraction of periods the event occurs),

π^E(m|J_i) ≡ Σ_{J_{-i}} E[π(·)|J_i, J_{-i}] p^e(J_{-i}|J_i),

and

p^e(J_{-i}|J_i) ≡ p^e(J_{-i}, J_i) / p^e(J_i),

while

p^e(J_i'|J_i) ≡ p^e(J_i', J_i) / p^e(J_i).

18
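The empirical probabilities p^e in condition B are just visit frequencies. A minimal sketch of their construction, with a hypothetical three-state chain standing in for the evolution of J_i:

```python
from collections import Counter
import random

random.seed(1)

# Hypothetical 3-state chain standing in for the J_i process.
STATES = ["J0", "J1", "J2"]
P = {"J0": [0.7, 0.2, 0.1], "J1": [0.3, 0.5, 0.2], "J2": [0.1, 0.3, 0.6]}

path, j = [], "J0"
for _ in range(200_000):
    path.append(j)
    j = random.choices(STATES, weights=P[j])[0]

visits = Counter(path)                 # ~ p^e(J_i), up to division by n
pairs = Counter(zip(path, path[1:]))   # ~ p^e(J_i', J_i), joint frequencies

def p_e(j_next, j):
    """Empirical conditional p^e(J'|J) = p^e(J', J) / p^e(J), as in B."""
    return pairs[(j, j_next)] / visits[j]

print(round(p_e("J1", "J0"), 2))  # close to the true P("J1"|"J0") = 0.2
```

For states visited repeatedly these frequencies converge to the true transition probabilities, which is exactly why condition B only disciplines perceptions on the recurrent class.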
"Experience Based Equilibrium"

These are the conditions of a (restricted) EBE (Fershtman and Pakes, 2012; for related earlier work see Fudenberg and Levine, 1993). Bayesian Perfect equilibria satisfy them, but so do weaker notions. We now turn to its:
(i) computational and estimation properties,
(ii) overcoming multiplicity issues,
(iii) and then to an example.

Computational Algorithm. A "reinforcement learning" algorithm (Pakes and McGuire, 2001).
• Can be viewed as a learning process. This makes it a candidate to: (i) analyze (small) perturbations to the environment, as well as (ii) compute equilibrium.
• Does not generate a curse of dimensionality in either: (i) the number of states or (ii) the computation of continuation values.
19
Iterative Algorithm: Iterations are defined by
• A location, say L^k = (J_1^k, . . . , J_{n(k)}^k) ∈ S: the information sets of the active agents.
• Objects in memory (i.e. M^k):
(i) perceived evaluations, W^k,
(ii) the number of visits to each point, h^k.

So the algorithm must update (L^k, W^k, h^k). The computational burden is determined by the memory constraint and compute time. I use a simple (not necessarily optimal) structure for memory.

Update Location.

• Calculate "greedy" policies for each agent:

m*_{i,k} = arg max_{m∈M} W^k(m|J_{i,k}).

• Take random draws on outcomes conditional on m*_{i,k}: i.e. if we invest in "payoff relevant" ω_{i,k} ∈ J_{i,k}, draw ω_{i,k+1} conditional on (ω_{i,k}, m*_{i,k}).
• Update {J_{i,k}}_i.

Update W^k.

• "Learning" interpretation: Assume the agent observes b(m_{-i}) and knows the primitives (π(·), p(·|ω, m)), so it can evaluate outcomes conditional on (b(m_{-i}), ω, m).

• Its ex post perception of what its value would have been had it chosen m is

V^{k+1}(J_{i,k}, m) = π(ω_{i,k}, m, b(m_{-i,k}), d_k) + β max_{m̃∈M} W^k(m̃|J_{i,k+1}(m)),

where J_{i,k+1}(m) is what the k+1 information would have been given m and competitors' actual play.

Treat V^{k+1}(J_{i,k}, m) as a random draw from the possible realizations of W(m|J_{i,k}), and update W^k as in stochastic approximation (Robbins and Monro, 1951):

W^{k+1}(m|J_{i,k}) − W^k(m|J_{i,k}) = [V^{k+1}(J_{i,k}, m) − W^k(m|J_{i,k})] / h^k(J_{i,k}),

or

W^{k+1}(m|J_{i,k}) = V^{k+1}(J_{i,k}, m) / h^k(J_{i,k}) + [(h^k(J_{i,k}) − 1) / h^k(J_{i,k})] W^k(m|J_{i,k})

(other weights might be more efficient).
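The averaging update can be sketched as follows. The single state, the two actions, and the "true" continuation values are hypothetical toys, and the draws V^{k+1} are reduced to noisy draws around fixed means; the point is only the mechanics of the 1/h Robbins-Monro weights:

```python
import random

random.seed(0)

# Tabular objects from the algorithm: perceived values W^k(m|J) for one
# toy state "J0" and two actions; draws are noise around hypothetical
# true values 4.0 (m=0) and 5.0 (m=1).
W = {("J0", 0): 10.0, ("J0", 1): 10.0}   # optimistic initial valuations
TRUE_VALUE = {0: 4.0, 1: 5.0}

for k in range(1, 20_001):               # here the visit count h^k("J0") = k
    for m in (0, 1):
        v_draw = TRUE_VALUE[m] + random.gauss(0, 1)
        # Robbins-Monro averaging: W <- W + (V - W) / h.
        W[("J0", m)] += (v_draw - W[("J0", m)]) / k

greedy = max((0, 1), key=lambda m: W[("J0", m)])
print(greedy, round(W[("J0", 0)], 2), round(W[("J0", 1)], 2))
```

With the 1/h weights, W^k is just the running average of the draws, so it converges to the mean of V and the greedy policy settles on the higher-valued action; constant step sizes would instead track a changing environment at the cost of extra noise.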

Notes.
• If we have equilibrium valuations we tend to stay there, i.e. if * designates equilibrium,

E[V*(J_i, m*)|W*] = W*(m*|J_i).

• To learn equilibrium values we need to visit points repeatedly; this is only likely for states in R.

• Agents (not only the analyst) could use the algorithm to find equilibrium policies or to adjust to perturbations in the environment.

• The algorithm has no curse of dimensionality.
(i) Computing continuation values: integration is replaced by averaging two numbers.
(ii) States: the algorithm eventually wanders into R and stays there, and #R ≤ #J.

• The algorithm uses the stochastic approximation literature, and as in that literature it can be augmented to use functional form approximations where needed ("TD learning"; Sutton and Barto, 1998).
Computational Properties.

• Testing. The algorithm does not necessarily converge, but a test for convergence exists and does not involve a curse of dimensionality (Fershtman and Pakes, 2012).

• The test is based on simulation. It produces a consistent estimate of an L2(P(R)) norm of the percentage bias in the implied estimates of V(m, J_i), where P(R) is the invariant measure on the recurrent class.

Estimation.

• Need a candidate for J_i. Either: (i) empirically investigate the determinants of controls, or (ii) ask actual participants.

• Does not require a nested fixed point algorithm. Use estimation advances designed for MP equilibria (POB or BBL), or a perturbation (or "Euler"-like) condition (below).
21
Euler-Like Condition.

• With asymmetric information the equilibrium condition

W(m*|J_i) ≥ W(m|J_i)

is an inequality which can generate (set) estimators of parameters.

• J_i contains both public and private information. Let J^1 have the same public, but different private, information than J^2. If a firm is at J^1 it knows it could have played m*(J^2), and its competitors would respond by playing on the equilibrium path from J^2.

• If m*(J^2) results in outcomes in R, we can simulate a sample path from J^2 using only observed equilibrium play. The Markov property ensures it would intersect the sample path from the DGP at a random stopping time with probability one, and from that time forward the two paths would generate the same profits.

• The conditional (on J_i) expectation of the difference in discounted profits between the simulated and actual paths, from the period of the deviation to the random stopping time, should, when evaluated at the true parameter vector, be positive. This yields moment inequalities for estimation as in Pakes, Porter, Ho and Ishii (forthcoming) and Pakes (2010).
Multiplicity.

• R contains both "interior" and "boundary" points. Points at which there are feasible strategies which can lead outside of R are boundary points. Interior points are points that can only transit to other points in R, no matter which (feasible) policy is chosen.

• Our conditions only ensure that perceptions of outcomes are consistent with the results from actual play at interior points. Perceptions of outcomes for some feasible (but suboptimal) policy at boundary points are not tied down by actual outcomes.

• MPBE are a special case of (restricted) EBE, and they have multiplicity. Here differing perceptions at boundary points can support a (possibly much) wider range of equilibria.
23
Narrowing the Set of Equilibria.

• In any empirical application the data will rule out equilibria. m* is observable, at least for states in R, and this implies inequalities on W(m|·). With enough data W(m*|·) will also be observable up to a mean zero error.

• Use external information to constrain perceptions of the value of outcomes outside of R, if it is available.

• Allow firms to experiment with m_i ≠ m*_i at boundary points (as in Asker, Fershtman, Jihye, and Pakes, 2014). This leads to a stronger notion of, and test for, equilibrium: we ensure that perceptions are consistent with the results from actual play for each feasible action at boundary points (and hence on R).
24
Boundary Consistency.

Let B(J_i|W) be the set of actions at J_i ∈ s ∈ R which could generate outcomes which are not in the recurrent class (so J_i is a boundary point), and B(W) = ∪_{J_i∈R} B(J_i|W). Then the extra condition needed to ensure "Boundary Consistency" is:

Extra Condition. Let τ index future periods; then ∀(m, J_i) ∈ B(W),

E[ Σ_{τ=0}^{∞} β^τ π(m(J_{i,τ}), m(J_{-i,τ})) | J_i = J_{i,0}, W ] ≤ W(m*|J_i),

where E[·|J_i, W] takes expectations over future states starting at J_i using the policies generated by W.

25
Testing for Boundary Consistency.

From each (m, J_i) ∈ B(W) simulate independent sample paths. Index the periods of a path by τ, and terminate it the first time J_{i,τ} ∈ s ∈ R (or, if that does not occur, at some large number), say τ*. The path's estimate of W(m|J_i) is

W̃(m|J_i) ≡ Σ_{τ=0}^{τ*} β^τ π(m(J_{i,τ}), m(J_{-i,τ})) + β^{τ*} W(m*|J_{i,τ*}),

with mean and variance W(m|J_i) and Var[W(m|J_i)].

Let f(x)+ = max[0, f(x)], and

T(m|J_i) ≡ [W(m|J_i) − W(m*|J_i)]+ / W(m*|J_i).

Now use a one-sided test of

H_0: Σ_{(m,J_i)∈B(W)} T(m|J_i) / sqrt( Σ_{(m,J_i)∈B(W)} Var[T(m|J_i)] ) = 0,

where Var[T(m|J_i)] is the variance of T(m|J_i).
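A sketch of the simulation side of this test. The per-period profit process, the re-entry probability into R, and the equilibrium values W(m*|·) are all hypothetical placeholders (not output of a computed equilibrium); the sketch only shows how the path estimates, the positive parts, and the normalized statistic fit together:

```python
import math
import random

random.seed(2)

beta = 0.95
W_star = {"J0": 50.0, "J1": 40.0}   # equilibrium values W(m*|J) (hypothetical)

def w_tilde_draws(J, n_paths=500, tau_max=200):
    """Simulated path estimates of W(m|J) for a boundary action at J:
    discounted per-period profits until the path re-enters R, plus the
    discounted equilibrium continuation value at the re-entry state."""
    draws = []
    for _ in range(n_paths):
        total, tau = 0.0, 0
        while tau < tau_max:
            total += beta ** tau * random.gauss(2.0, 1.0)  # period profit
            tau += 1
            if random.random() < 0.1:                      # re-entered R
                break
        re_entry = random.choice(sorted(W_star))
        draws.append(total + beta ** tau * W_star[re_entry])
    return draws

def T_and_var(J):
    """T(m|J): positive part of the percentage excess over W(m*|J),
    together with the (identically scaled) variance of the mean."""
    d = w_tilde_draws(J)
    mean = sum(d) / len(d)
    var_mean = sum((x - mean) ** 2 for x in d) / (len(d) - 1) / len(d)
    t = max(0.0, mean - W_star[J]) / W_star[J]
    return t, var_mean / W_star[J] ** 2

pairs = [T_and_var(J) for J in ("J0", "J1")]
stat = sum(t for t, _ in pairs) / math.sqrt(sum(v for _, v in pairs))
print(round(stat, 2))   # large positive values reject boundary consistency
```

By construction the statistic is non-negative; under H_0 it should be statistically indistinguishable from zero once the simulation error is accounted for.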
26
Simple Electric Utility E.g.

Two firms: each has a vector of generators.
Firm's decisions: bid or not each generator. If not bid, do maintenance or not.
ISO: sum the bid functions, intersect with demand (which varies by day of the week), pay a uniform price for accepted electricity.

• ω ∈ Ω. The cost of producing electricity on each firm's generators. Cost increases stochastically with use, but reverts to a starting value if the firm goes down for maintenance.

• m_i ∈ M_i. A vector of m_{i,r} ∈ {0, 1, 2}; 0 ⇒ shutdown without maintenance, 1 ⇒ shutdown with maintenance, 2 ⇒ bid into the market.
27
• b(m_i): m_i → {0, b_i}^{n_i}, where b_i is the fixed bid schedule of firm i. b is observed; m is not observed.

• d is demand on that day, f is maintenance cost ("investment"), p = p(b(m_i), b(m_{-i}), d) is price, q = q(b(m_i), b(m_{-i}), d) is the allocated quantity vector, so realized profits are

π_{i,t} = Σ_r p_t q_{i,r,t} − Σ_r c_i(ω_{i,r,t}, q_{i,r,t}) − f_i Σ_r 1{m_{i,r,t} = 1} ≡ π_i(ω_i, m_i, b(m_{-i}), d).

Transitions:
m_{i,r,t} = 0 ⇒ ω_{i,r,t+1} = ω_{i,r,t},
m_{i,r,t} = 1 ⇒ ω_{i,r,t+1} = ω̲_{i,r} (the restart state),
m_{i,r,t} = 2 ⇒ ω_{i,r,t+1} = ω_{i,r,t} + η_{i,r,t},
with P(η) > 0 for η ∈ {0, 1}.
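The cost-state transitions above can be sketched directly. Here p_up, the probability that η = 1, is a hypothetical parameter, and the cap at ω = 4 reflects the shutdown state in the Model Details table below:

```python
import random

random.seed(3)

OMEGA_MAX = 4   # at omega = 4 the generator shuts down (see Model Details)
RESTART = 0     # maintenance reverts the cost state to its starting value

def next_omega(omega, action, p_up=0.5):
    """Transition for one generator's cost state omega given the action:
    0 = shutdown without maintenance, 1 = maintenance, 2 = bid into market.
    p_up (hypothetical) is P(eta = 1), the chance that use degrades the unit."""
    if action == 0:
        return omega                      # idle: state unchanged
    if action == 1:
        return RESTART                    # maintenance: back to restart state
    eta = 1 if random.random() < p_up else 0
    return min(omega + eta, OMEGA_MAX)    # bidding: stochastic wear

# A generator that always bids drifts up toward the shutdown state.
omega = 0
for _ in range(100):
    omega = next_omega(omega, action=2)
print(omega)
```

This captures why maintenance is the dynamic control in the example: only action 1 resets the otherwise upward-drifting marginal cost state.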

Note. b(m) is the only signal sent in each period. b(m_{-i,t-1}) is a signal on ω_{-i,t-1}, which is unobserved by i and is a determinant of b(m_{-i,t}) (and so of π_{i,t}).

State of the game. s_t = (J_{1,t}, . . . , J_{n_t,t}) ∈ S, and

J_{i,t} = (ξ_t, ω_{i,t}) ∈ (Ω(ξ), Ω),

where

• ω_{i,t} represents private information,

• and ξ_t is public information (shared by all). Example: ξ_t = {b(m_{1,τ}), b(m_{2,τ}), d_τ}_{τ≤t}, plus knowledge of ω_{-i} at the last period of revelation (which happens every T periods).

28
Model Details.

Parameter                    Firm B           Firm S
Number of Generators         2                3
Range of ω                   0-4              0-4
MC at ω = (0,1,2,3)*         (20,60,80,100)   (50,100,130,170)
Capacity at Constant MC      25               15
Costs of Maintenance         5,000            2,000

* MC is constant at this cost until capacity and then goes up linearly. At ω = 4 the generator shuts down.

Firm S: small (gas-fired) generators with high MC but low start-up costs.
Firm B: large (coal-fired) generators with lower MC and higher start-up costs.
Constant, small, elasticity of demand.

Computational Details.
• High initial conditions "ensure" that we try all strategies (this induces a lot of experimentation).
• The convergence test is in terms of the L2(P(R)) norm of the percentage bias in the estimates of W. After 300 million iterations, L2(P(R)) ≈ .00005.
29
The Economics of Alternative Environments: Planner vs AsI.

Base Case: Planner Strategy (SP = social planner; AsI = asymmetric information; FI = full information). Constrain the planner to use the same bid function (so we compare just investment strategies). The planner never shuts down without doing maintenance. Weekdays: operates at almost full capacity. Maintenance is done on the weekend, in about 15% of the periods for both B and S generators.

Base Case: AsI Equilibrium. Shuts down in about 20% of the periods. However, about half the time generators are shut down they are not doing maintenance; maintenance is done in only about 10% of the periods. ⇒ 25-30% more shutdown but 30% less maintenance than the social planner. Most (but not all) shutdowns are on weekends (just as for the social planner).

30
Base Case: Costs. The planner does more maintenance and can optimize maintenance jointly over large and small generators. ⇒ much lower production costs and lower total costs per unit quantity.
• I.e. the planner produces more and has lower average total costs in a model in which marginal costs increase in quantity: an effect of increased maintenance.

Base Case: Prices and Quantities. The planner produces 2% more output on weekdays; with inelastic demand ⇒ a price fall of ≈ 10%.
• The planner's extra maintenance makes it optimal for it to bid in more and therefore keep price down, and it internalizes the extra CS. AsI firms do not.
• Even the social planner has weekday prices that are 20% higher than weekend prices (the AsI difference is larger). With these primitives, large weekend/weekday price differences are "optimal".
31
Base Case
                                    SP      AsI     FI
Panel A: Strategies.
Firm B: Shutdown and Maintenance.
Shutdown %                          14.52   19.96   12.31
Maintenance %                       14.52   10.1    10.9
Firm S: Shutdown and Maintenance.
Shutdown %                          16.85   21.48   20.74
Maintenance %                       16.85   9.83    9.91
Firm B: Operating Generators (by day).
Saturday                            1.41    1.08    1.72
Sunday                              .88     1.21    1.65
Weekday Ave.                        1.93    1.78    1.78
Firm S: Operating Generators (by day).
Saturday                            1.55    1.56    2.03
Sunday                              1.89    1.75    1.86
Weekday Ave.                        2.80    2.64    2.55
Panel B: Costs (×10³).
Maint. B                            29      20.2    21.95
Maint. S                            20.2    11.8    11.9
Var. B                              211.1   235.1   240.4
Var. S                              174.8   228.1   215.9
Total/Quantity                      0.389   0.452   0.444
Panel C: Quantities and Prices.
Ave. Q Wkend                        93.5    92.0    98.6
Ave. P Wkend                        303     325     260
Ave. Q Wkday                        185.7   181.8   181.2
Base Case vs Excess Capacity: AsI & FI

• Maintenance and Shutdown.
Base case: the FI equilibrium generates less shutdown and more maintenance.
"Excess" capacity (more capacity relative to demand): the AsI equilibrium generates less shutdown and more maintenance.

• Weekday vs Weekend.
Base case, AsI vs FI strategies: on weekends the AsI equilibrium shuts down more generators. This enables the firms to signal that their generators will be bid in on the high-priced weekdays.
Excess capacity: now the AsI firm no longer distinguishes much between weekend and weekday.

• Prices.
With excess capacity the difference between weekday and weekend prices drops dramatically (to 5.4% in the AsI and 1% in the FI equilibrium), and AsI operation increases on weekends.
34
• Costs.
Increasing capacity relative to demand, average cost is over 30% lower. This raises the question of what the capital costs and incentives for private generator construction are.

• Total Surplus.
The increase in the capacity/demand ratio generates a large increase in consumer surplus, and a somewhat smaller increase in total surplus. Does the increased surplus cover the social cost of generator construction? And if so, how do we induce the investment?

35
                                    Base Case        Excess Capacity
                                    AsI     FI       AsI     FI
Panel A: Strategies.
Firm B: Shutdown and Maintenance.
Shutdown %                          19.96   12.31    41.97   43.75
Maintenance %                       10.1    10.9     6.47    6.25
Firm S: Shutdown and Maintenance.
Shutdown %                          21.48   20.74    53.1    56.4
Maintenance %                       9.83    9.91     5.22    4.84
Firm B: Operating Generators (by day).
Saturday                            1.08    1.72     1.03    1.0
Sunday                              1.21    1.65     1.03    1.0
Weekday Ave.                        1.78    1.78     1.03    1.0
Firm S: Operating Generators (by day).
Saturday                            1.56    2.03     1.21    0.48
Sunday                              1.75    1.86     1.20    0.44
Weekday Ave.                        2.64    2.55     1.25    1.44
Panel B: Quantities and Prices.
Ave. Q Wkend                        92.0    98.6     33.6    33.1
Ave. P Wkend                        325     260      168     175.6
Ave. Q Wkday                        181.8   181.2    42.50   42.43
Ave. P Wkday                        401     411      177     177

Costs, Consumer Surplus and Total Surplus (×10³).
                                    Base Case        Excess Capacity
                                    AsI     FI       AsI     FI
Average Cost                        .452    .444     .290    .282
CS*                                 581.5   595.0    1,316   1,311
Total Surplus**                     288.9   301.4    1,374   1,373

* CS = these numbers plus 58,000.
** Total Surplus = these numbers plus 59,000.

36
Conclusions.

• There is a need for increased research on the dynamics of market outcomes.

• The framework used for this analysis probably ought to require less of both the agent and the analyst than do "Bayesian Perfect" notions of equilibria.

• Ultimately, that framework will have to integrate the analysis of reactions to changes in institutions with an analysis of policies for states that are observed repeatedly. "Adaptation" processes, like reinforcement learning, might be adequate for reactions to changes that do not induce calculated experimentation.

• A start for equilibrium conditions at situations that are observed repeatedly are those of "Experience Based Equilibrium". If more stringent equilibrium conditions are justified they should be imposed, as they will result in a more precise analysis.
37
