
Optimization

Daniel N. Hauser

This note summarizes some of the key concepts from the fourth set of lectures.

Economic "applications" of KKT conditions (beyond supply and demand) include:

1. Designing a wage scheme to incentivize unobservable effort.
2. Designing the optimal redistributive tax scheme.
3. Supervised learning models in machine learning (e.g. SVMs).

KKT conditions

Now we'd like to solve problems of the form

max f(x)
s.t. g(x) ≤ 0

for f : Rm → R and g : Rm → Rn. We can leverage the inequality constraints here to get a fancier version of the Lagrange multiplier conditions.

Definition (Karush-Kuhn-Tucker Conditions). An x ∈ Rm and a vector λ ∈ Rn satisfy the KKT conditions if:

∇f(x)T = λT Dg(x)    (First Order Conditions)
λT g(x) = 0          (Complementary Slackness)
g(x) ≤ 0             (Feasibility)
λ ≥ 0                (Positivity of the Multiplier).

A couple of things to note about these conditions:

• The multiplier effectively "turns on" constraints. Complementary slackness means that any gi whose associated multiplier is non-zero must hold with equality, i.e. λi > 0 ⇒ gi(x) = 0.
• The sign of the multiplier is important. We can rule out any feasible points that satisfy the gradient condition but don't have the right sign for the multipliers. Be careful with the directions of the inequalities!
• Minimizers satisfy the same conditions, but with λ ≤ 0.

Like with Lagrange multipliers, these conditions are necessary at any maximizer that satisfies a rank condition on Dg. Fortunately, we can find much simpler conditions for concave problems.
Theorem 1. The KKT conditions are necessary if f is concave, each gi is convex, and there exists an x s.t. gi(x) < 0 for all i.
The KKT conditions are sufficient if ∇f(x) ≠ 0 for all feasible x, f is quasiconcave, and the gi's are quasiconvex.

So for nice enough problems, the KKT conditions exactly identify the set of global maximizers.¹

¹ Note that, unlike with equality constraints, the conditions for maxima and minima are different, so there was no hope that these conditions on their own could identify maximizers. Here the sign of the multiplier in well-behaved problems sidesteps the need to worry about things like second order conditions.

In general these conditions are going to be annoying (or more likely impossible) to solve without careful thought. When approaching a maximization problem, before going to this step, think carefully about the problem. Are there any constraints that clearly bind (hold with equality) or clearly don't? This can make looking for solutions to the KKT conditions much more manageable. On the other hand, even if you can't explicitly solve for the maximizer, these conditions can still tell us a lot about its properties.
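As a concrete sketch of how the conditions above are checked in practice, consider a small hypothetical problem (not from the note): maximize f(x) = −(x1 − 1)² − (x2 − 1)² subject to g1(x) = x1 + x2 − 1 ≤ 0 and g2(x) = −x1 ≤ 0. Here f is concave, each gi is linear (hence convex), and Slater's condition holds, so Theorem 1 applies. The unconstrained maximizer (1, 1) is infeasible, so g1 binds, which suggests the candidate x* = (1/2, 1/2) with multipliers λ = (1, 0):

```python
import numpy as np

# Candidate maximizer and multipliers for the hypothetical problem:
#   max -(x1 - 1)^2 - (x2 - 1)^2
#   s.t. x1 + x2 - 1 <= 0,  -x1 <= 0
x = np.array([0.5, 0.5])
lam = np.array([1.0, 0.0])

grad_f = np.array([-2 * (x[0] - 1), -2 * (x[1] - 1)])  # ∇f(x)
Dg = np.array([[1.0, 1.0],                             # row i = ∇gi(x)
               [-1.0, 0.0]])
g = np.array([x[0] + x[1] - 1, -x[0]])

# First Order Conditions: ∇f(x)^T = λ^T Dg(x)
assert np.allclose(grad_f, lam @ Dg)
# Complementary Slackness: λ^T g(x) = 0
assert np.isclose(float(lam @ g), 0.0)
# Feasibility and Positivity of the Multiplier
assert np.all(g <= 1e-12) and np.all(lam >= 0)
print("KKT conditions hold at x* =", x)
```

Note how complementary slackness works here: λ1 > 0 forces g1(x*) = 0 (the constraint binds), while g2(x*) < 0 forces λ2 = 0.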

Correspondences

A correspondence is a function that maps each element of the domain to a set of elements in the range. We consider two different continuity notions that each capture some aspects of what it means for a function to be continuous.

Definition. A compact-valued correspondence Γ : A ⇒ B with B compact is said to be upper hemicontinuous iff it has a closed graph.

Definition. A correspondence Γ : A ⇒ B is said to be lower hemicontinuous iff for all x ∈ A, all sequences xn → x, and all y ∈ Γ(x), there exist a subsequence xnk and a sequence ynk ∈ Γ(xnk) such that ynk → y.
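To see how the two notions can come apart, here is a standard hypothetical example (not from the note): Γ(x) = [0, 1] if x = 0 and Γ(x) = {0} otherwise. This correspondence has a closed graph (so it is UHC on [0, 1] ⇒ [0, 1]), but it fails lower hemicontinuity at x = 0: the point y = 1 ∈ Γ(0) cannot be approached by any points of Γ(xn) along xn → 0. A quick numerical sketch:

```python
import numpy as np

# Γ(x) = [0, 1] if x = 0, and {0} otherwise: UHC but not LHC at x = 0.
def Gamma(x):
    """Return Γ(x) as a finite grid of points."""
    if x == 0:
        return np.linspace(0.0, 1.0, 101)  # the whole interval [0, 1]
    return np.array([0.0])                 # the singleton {0}

# Take the sequence xn = 1/n -> 0 and the point y = 1 in Γ(0).
y = 1.0
xs = [1.0 / n for n in range(1, 50)]

# The closest point of Γ(xn) to y stays at distance 1 for every n, so no
# sequence yn ∈ Γ(xn) converges to y: lower hemicontinuity fails at 0.
gaps = [float(np.min(np.abs(Gamma(xn) - y))) for xn in xs]
assert all(gap == 1.0 for gap in gaps)
print("min distance from Γ(xn) to y = 1 stays at:", gaps[0])
```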

Value functions

Let Θ ⊆ Rm, X ⊆ Rn, and consider a continuous function f : X × Θ → R and a correspondence C : Θ ⇒ X. We call the object

V(θ) := max_{x∈X} f(x; θ)
        s.t. x ∈ C(θ)

the value function. This function describes the maximum for different values of the exogenous parameters (the elements of θ). The corresponding arg max we call the policy correspondence. The value function and policy correspondence inherit some nice properties from f and C.

Unsurprisingly, we use value functions, policy correspondences, and the envelope theorem a lot. Supply, compensated and uncompensated demand, input demand, etc. are all policy functions, while the expenditure, profit, indirect utility, and cost functions are all value functions. The value function is also a surprisingly convenient technical tool:

• Dynamic maximization problems can be written recursively using the value function.
• In mechanism design, when trying to design a pricing scheme that induces specific decisions from consumers, the envelope theorem lets us characterize how decision makers respond to different menus.

Theorem 2 (Berge's Maximum Theorem). Let Θ, X be non-empty and compact, f continuous, and C compact-valued and continuous. Then the value function is continuous and the policy correspondence is non-empty, compact-valued, and UHC.
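A minimal hypothetical illustration of Berge's theorem (not from the note): take f(x; θ) = θx on X = [0, 1] with C(θ) = [0, 1] for every θ. The theorem's hypotheses hold, so V(θ) = max(θ, 0) is continuous, while the policy correspondence is only UHC, not single-valued: it is {0} for θ < 0, all of [0, 1] at θ = 0, and {1} for θ > 0.

```python
import numpy as np

# f(x; θ) = θx on X = [0, 1], C(θ) = [0, 1]: Berge's hypotheses hold.
xs = np.linspace(0.0, 1.0, 1001)  # grid approximation of X

def V(theta):
    """Value function V(θ) = max over the grid of θx."""
    return np.max(theta * xs)

def policy(theta, tol=1e-9):
    """Policy correspondence: grid points attaining the maximum."""
    vals = theta * xs
    return xs[vals >= vals.max() - tol]

thetas = np.linspace(-1.0, 1.0, 201)
values = np.array([V(t) for t in thetas])

# The value function equals max(θ, 0): continuous in θ.
assert np.allclose(values, np.maximum(thetas, 0.0))
# The policy correspondence is a singleton away from θ = 0 ...
assert policy(-0.5).size == 1 and policy(0.5).size == 1
# ... but explodes to the whole feasible set at θ = 0 (UHC, not LHC).
assert policy(0.0).size == xs.size
print("V is continuous; the policy correspondence jumps at θ = 0")
```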

Similarly, the value function and policy correspondence inherit concavity/convexity properties. We can also describe the derivative of the value function in terms of the primitives. Consider a problem of the form

max_{x∈X} f(x; θ)
s.t. g(x; θ) = 0

with single-valued policy function ϕ(θ). Then the change in the value function with respect to the exogenous parameters is entirely determined by how f and g change with respect to the exogenous parameters.

Theorem 3 (The Envelope Theorem). Suppose f, g, ϕ are continuously differentiable and Dg(ϕ(θ); θ) has full rank. Then the value function V is differentiable and

DV(θ) = −λT Dθ g(ϕ(θ); θ) + Dθ f(ϕ(θ); θ).
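A hypothetical numerical check of this formula (not from the note): take f(x; θ) = −(x − θ)² with constraint g(x; θ) = x − θ/2 = 0. Then ϕ(θ) = θ/2 and V(θ) = −θ²/4, so DV(θ) = −θ/2. The first order condition ∇x f = λ ∇x g gives λ = −2(ϕ(θ) − θ) = θ, and the envelope formula should reproduce the finite-difference derivative of V:

```python
import numpy as np

theta = 0.8

phi = theta / 2                          # policy function ϕ(θ)
lam = -2 * (phi - theta)                 # multiplier from the FOC: λ = θ
D_theta_g = -0.5                         # ∂g/∂θ
D_theta_f = 2 * (phi - theta)            # ∂f/∂θ evaluated at (ϕ(θ); θ)

envelope = -lam * D_theta_g + D_theta_f  # Theorem 3's formula

# Compare against a central finite difference of V(θ) = -θ²/4.
V = lambda t: -((t / 2) - t) ** 2
h = 1e-6
fd = (V(theta + h) - V(theta - h)) / (2 * h)

assert np.isclose(envelope, fd, atol=1e-6)
assert np.isclose(envelope, -theta / 2)
print("envelope formula:", envelope, " finite difference:", fd)
```

Note that only the direct dependence of f and g on θ enters the formula; the indirect effect through ϕ(θ) drops out, which is exactly the content of the theorem.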

Fixed Point Theorems

A problem we'll run into a lot is establishing that a system of non-linear equations has a solution. Occasionally the separating hyperplane theorem will be enough, but often we'll have to apply a fixed point theorem. We'll use these theorems to establish existence of Nash and Walrasian equilibrium.

Theorem 4 (Brouwer's Fixed Point Theorem). Every continuous function that maps from a non-empty, convex, compact subset of Rn to itself has a fixed point.

Theorem 5 (Kakutani's Fixed Point Theorem). Every non-empty, convex-valued, UHC correspondence from a non-empty, convex, compact subset of Rn to itself has a fixed point.
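As a hypothetical illustration of Brouwer's theorem (not from the note): g(x) = cos(x) maps the non-empty, convex, compact set [0, 1] continuously into itself (cos([0, 1]) ⊆ [cos 1, 1] ⊂ [0, 1]), so a fixed point must exist. Brouwer only guarantees existence; here the map happens also to be a contraction, so simple iteration locates the fixed point:

```python
import math

def brouwer_fixed_point(g, x0=0.5, tol=1e-10, max_iter=10_000):
    """Locate a fixed point of g by repeated application.
    Works when g is a contraction; Brouwer alone only gives existence."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

x_star = brouwer_fixed_point(math.cos)
assert abs(math.cos(x_star) - x_star) < 1e-9  # g(x*) = x*
print("fixed point of cos on [0, 1]:", round(x_star, 6))
```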
