Mathematical Appendix I.
Kuhn-Tucker Theorems
I.1 Constrained Maximization: Necessary Conditions.
The function $F : \mathbb{R}^n_+ \to \mathbb{R}$ is the objective function; the functions $g^j : \mathbb{R}^n_+ \to \mathbb{R}$, for $j = 1, \ldots, k$, are the constraint functions. Assume that $F$ and the $g^j$ are differentiable, with partial derivatives $\partial F / \partial x_i$ and $\partial g^j / \partial x_i$ denoted by $\partial_i F$ and $\partial_i g^j$, respectively.
The constrained maximization problem (with nonnegativity constraints) is
$$\max_x \; F(x) \tag{1}$$
subject to
$$g^1(x) \ge 0, \; \ldots, \; g^k(x) \ge 0,$$
$$x_1 \ge 0, \; \ldots, \; x_n \ge 0.$$
We write the Lagrangian as
$$L(\lambda_1, \ldots, \lambda_k, x) = F(x) + \sum_{j=1}^{k} \lambda_j g^j(x),$$
where $\lambda_j \ge 0$, for $j = 1, \ldots, k$, are the Lagrange multipliers. We use $\lambda$ to denote the $k$-vector of multipliers.
Kuhn-Tucker conditions for $x^* \ge 0$ and $\lambda^* \ge 0$ are: for all $i = 1, \ldots, n$ and $j = 1, \ldots, k$,
$$\partial_i F(x^*) + \sum_{j=1}^{k} \lambda_j^* \, \partial_i g^j(x^*) \le 0, \quad \text{and if } x_i^* > 0 \text{, then ``$=0$''}, \tag{2a}$$
$$g^j(x^*) \ge 0, \quad \text{and if } \lambda_j^* > 0 \text{, then ``$=0$''}. \tag{2b}$$
Where do these conditions come from? Think of maximizing the Lagrangian $L(\lambda, x)$ with respect to $x$ and minimizing it with respect to $\lambda$, unconstrained except for $x \ge 0$ and $\lambda \ge 0$. This is the saddle-point problem, and the K-T conditions (2) are the first-order conditions for such a max-min (saddle-point) problem.
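For concreteness, here is a minimal numerical sketch (in Python) of checking conditions (2a) and (2b) at a candidate pair $(x^*, \lambda^*)$. The names F_grad, g_list and g_grads are hypothetical stand-ins for the gradient of $F$, the constraint functions, and their gradients; tol absorbs floating-point error.

    import numpy as np

    # Sketch: verify the K-T conditions (2a)-(2b) at a candidate (x, lam).
    # F_grad, g_list, g_grads are assumed callables (not from any library).
    def check_kt(x, lam, F_grad, g_list, g_grads, tol=1e-8):
        x, lam = np.asarray(x, float), np.asarray(lam, float)
        # (2a): dL/dx_i <= 0 for all i, with equality whenever x_i > 0.
        dL = F_grad(x) + sum(l * dg(x) for l, dg in zip(lam, g_grads))
        ok_2a = np.all(dL <= tol) and np.all(np.abs(dL[x > tol]) <= tol)
        # (2b): g^j(x) >= 0 for all j, with equality whenever lambda_j > 0.
        g_vals = np.array([g(x) for g in g_list])
        ok_2b = np.all(g_vals >= -tol) and np.all(np.abs(g_vals[lam > tol]) <= tol)
        return ok_2a and ok_2b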
Theorem (Kuhn-Tucker): If x∗ ≥ 0 is a solution to the constrained maxi-
mization problem, and the Constraint Qualification Condition holds, then x∗
and some λ∗ ≥ 0 satisfy K-T conditions (2).
Constraint Qualification Condition (any of the following versions):
(i) Kuhn-Tucker's original condition (don't touch it);
(ii) $g^j$ concave for all $j$, and Slater's condition, that is, there is some $x^0 \ge 0$ with $g^j(x^0) > 0$ for all $j$ (see the sketch after this list);
(iii) a rank condition (see Takayama 1.D.4, or Varian, ch. 27);
(iv) $g^j$ linear for all $j$ (Arrow-Hurwicz-Uzawa; see Takayama 1.D.4).
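As a trivial sketch of version (ii), checking Slater's condition amounts to exhibiting one nonnegative point at which every constraint is strictly slack; gs and x0 below are illustrative names.

    # Sketch: Slater's condition holds if some x0 >= 0 makes every g^j(x0) > 0.
    def slater_holds(gs, x0):
        return all(xi >= 0 for xi in x0) and all(g(x0) > 0 for g in gs)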
I.2 Sufficiency of Kuhn-Tucker Conditions.
The most standard theorem is:
Theorem S1: Suppose that $F$ and $g^1, \ldots, g^k$ are all concave functions. If $x^* \ge 0$ and $\lambda^* \ge 0$ satisfy the K-T conditions (2), then $x^*$ is a solution to the constrained maximization problem.
A better theorem is due to Arrow and Enthoven (1961).
Theorem S2: Suppose that $F$ and $g^1, \ldots, g^k$ are all quasi-concave functions and some "mild" condition holds. If $x^* \ge 0$ and $\lambda^* \ge 0$ satisfy the K-T conditions (2), then $x^*$ is a solution to the constrained maximization problem.
The extra ("mild") condition is not needed if $F$ is concave (and $g^1, \ldots, g^k$ are quasi-concave). See Takayama 1.E for three versions of the condition.
Quasi-concavity (and hence also concavity) of the functions $g^j$ implies that the constraint set, i.e., the set of $x \ge 0$ satisfying $g^1(x) \ge 0, \ldots, g^k(x) \ge 0$, is convex.
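Concavity or quasi-concavity should be verified analytically; still, a rough numerical probe can catch violations. The sketch below (all names are illustrative, and a negative semidefinite Hessian at the sampled points is only a necessary check) evaluates a finite-difference Hessian of $F$ at random interior points:

    import numpy as np

    # Sketch: probe concavity of f on the interior of R^n_+ by testing that a
    # finite-difference Hessian is negative semidefinite at sampled points.
    def hessian_fd(f, x, h=1e-4):
        n = len(x)
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                ei, ej = h * np.eye(n)[i], h * np.eye(n)[j]
                H[i, j] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x)) / h**2
        return H

    def looks_concave(f, n, trials=200, tol=1e-3):
        rng = np.random.default_rng(0)
        for _ in range(trials):
            x = rng.uniform(0.1, 10.0, n)                   # interior samples
            if np.max(np.linalg.eigvalsh(hessian_fd(f, x))) > tol:
                return False                                # concavity violated
        return True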
I.3 Constrained Minimization.
The constrained minimization problem (with nonnegativity constraints) is
$$\min_x \; F(x) \tag{3}$$
subject to
$$g^1(x) \le 0, \; \ldots, \; g^k(x) \le 0,$$
$$x_1 \ge 0, \; \ldots, \; x_n \ge 0.$$
The Lagrangian is
$$L(\lambda, x) = F(x) + \sum_{j=1}^{k} \lambda_j g^j(x).$$
Kuhn-Tucker conditions for $x^* \ge 0$ and $\lambda^* \ge 0$ are: for all $i = 1, \ldots, n$ and $j = 1, \ldots, k$,
$$\partial_i F(x^*) + \sum_{j=1}^{k} \lambda_j^* \, \partial_i g^j(x^*) \ge 0, \quad \text{and if } x_i^* > 0 \text{, then ``$=0$''}, \tag{4a}$$
$$g^j(x^*) \le 0, \quad \text{and if } \lambda_j^* > 0 \text{, then ``$=0$''}. \tag{4b}$$
The corresponding saddle-point problem is to minimize the Lagrangian $L(\lambda, x)$ with respect to $x$ and maximize it with respect to $\lambda$, for $x \ge 0$ and $\lambda \ge 0$.
The Kuhn-Tucker Theorem holds with no change for the constrained minimization problem. However, in the constraint qualification conditions, concavity of the functions $g^j$, where present, has to be replaced by their convexity. This guarantees convexity of the constraint set, described here by the inequalities $g^j(x) \le 0$.
Theorems S1 and S2 continue to hold with concavity (quasi-concavity) of the functions $F$ and $g^j$ replaced by their convexity (quasi-convexity, respectively).
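In numerical work the sign conventions of problems (1) and (3) must be mapped onto the solver's. For instance, scipy.optimize.minimize interprets an 'ineq' constraint as fun(x) ≥ 0, so the $g^j \ge 0$ form of problem (1) matches directly (after negating $F$, since scipy minimizes), while problem (3) needs each $g^j$ negated. A sketch, with F, gs and x0 as assumed inputs:

    from scipy.optimize import minimize

    # Sketch: problem (1), max F s.t. g^j(x) >= 0, x >= 0.
    def solve_max(F, gs, x0):
        cons = [{'type': 'ineq', 'fun': g} for g in gs]       # g(x) >= 0
        bounds = [(0, None)] * len(x0)                        # x >= 0
        return minimize(lambda x: -F(x), x0, bounds=bounds, constraints=cons)

    # Sketch: problem (3), min F s.t. g^j(x) <= 0, x >= 0.
    def solve_min(F, gs, x0):
        # encode g^j(x) <= 0 as -g^j(x) >= 0
        cons = [{'type': 'ineq', 'fun': lambda x, g=g: -g(x)} for g in gs]
        bounds = [(0, None)] * len(x0)
        return minimize(F, x0, bounds=bounds, constraints=cons)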
I.4 Remarks:
• Applications of K-T theorems in microeconomics:
  (i) Consumer theory: utility maximization subject to budget constraint,
      and expenditure minimization.
 (ii) Welfare economics: Characterization of Pareto optimal allocations as
      solutions to maximization of a welfare function subject to resource con-
      straints, or maximization of one agent’s utility subject to constraints on
      other agents’ utilities and resource constraints.
(iii) Producer theory: cost minimization.
• There are versions of the K-T theorems for maximization and minimization with mixed constraints, i.e., when some constraints are of the equality form $g^j(x) = 0$. See Sundaram [2], Section 6.4.
• K-T theorems hold for local maxima (minima) as well.
References: [1] Takayama (1995), Sections 1.D and 1.E. [2] Sundaram (1999), Chapter 6. [3] Varian, ch. 27. [4] Mas-Colell et al. Warning: Takayama [1] and Sundaram [2] do not explicitly write the nonnegativity constraints $x \ge 0$. Varian [3] writes the constraints in the maximization problem as $g^j(x) \le 0$.
I.5 Example: Consider the following constrained maximization problem:
$$\text{maximize} \quad \ln(x_1 + 1) + \ln(x_2 + 1)$$
$$\text{subject to} \quad p_1 x_1 + p_2 x_2 \le m,$$
$$x_1 \ge 0, \quad x_2 \ge 0,$$
where $p_1 > 0$, $p_2 > 0$ and $m > 0$.
In order to derive the solution (as a function of the parameters $p_1$, $p_2$ and $m$), we write the Kuhn-Tucker first-order conditions (2) as
$$\text{(1)} \quad \frac{1}{x_1^* + 1} - \lambda^* p_1 \le 0, \quad \text{and if } x_1^* > 0 \text{, then ``$=0$''},$$
$$\text{(2)} \quad \frac{1}{x_2^* + 1} - \lambda^* p_2 \le 0, \quad \text{and if } x_2^* > 0 \text{, then ``$=0$''},$$
$$\text{(3)} \quad p_1 x_1^* + p_2 x_2^* \le m, \quad \text{and if } \lambda^* > 0 \text{, then ``$=0$''},$$
with $x^* \ge 0$ and $\lambda^* \ge 0$.
Note that (3) holds with equality: by (1), $\lambda^* p_1 \ge 1/(x_1^* + 1) > 0$, so $\lambda^* > 0$.
We solve inequalities (1)-(3) by considering cases:
Case 1. $x_1^* > 0$, $x_2^* > 0$.
Then (1) and (2) hold with equality. Solving (1), (2) and (3), we find
$$x_1^* = \frac{m + p_2 - p_1}{2p_1}, \quad x_2^* = \frac{m + p_1 - p_2}{2p_2}, \quad \lambda^* = \frac{2}{m + p_1 + p_2}.$$
For $x_1^*$ and $x_2^*$ to be strictly positive, it has to be that $m + p_2 > p_1$ and $m + p_1 > p_2$. Thus Case 1 applies, with $x_1^*$ and $x_2^*$ as listed above, if $m + p_2 > p_1$ and $m + p_1 > p_2$.
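The Case 1 formulas can be cross-checked symbolically, for example by solving equalities (1)-(3) with sympy (a sketch):

    import sympy as sp

    x1, x2, lam = sp.symbols('x1 x2 lam')
    p1, p2, m = sp.symbols('p1 p2 m', positive=True)
    # Case 1: all three K-T conditions hold with equality.
    sol = sp.solve(
        [1/(x1 + 1) - lam*p1,          # (1) with equality
         1/(x2 + 1) - lam*p2,          # (2) with equality
         p1*x1 + p2*x2 - m],           # (3) with equality
        [x1, x2, lam], dict=True)[0]
    print(sp.simplify(sol[x1] - (m + p2 - p1)/(2*p1)))   # 0
    print(sp.simplify(sol[x2] - (m + p1 - p2)/(2*p2)))   # 0
    print(sp.simplify(sol[lam] - 2/(m + p1 + p2)))       # 0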
Case 2. $x_1^* > 0$, $x_2^* = 0$.
Equality in (3) implies that $x_1^* = m/p_1$. Since (1) holds with equality, we solve it for $\lambda^* = \frac{1}{m + p_1}$. Next we need to verify inequality (2). It states
$$1 - \frac{p_2}{m + p_1} \le 0,$$
and it holds if $p_2 \ge m + p_1$. Thus Case 2 applies (with $x_1^* = m/p_1$, $x_2^* = 0$) if $p_2 \ge m + p_1$.
Case 3. $x_1^* = 0$, $x_2^* > 0$.
This case is very similar to Case 2. From (3) and (2) we obtain $x_2^* = m/p_2$ and $\lambda^* = \frac{1}{m + p_2}$. Verifying inequality (1), we obtain $p_1 \ge m + p_2$. Thus Case 3 applies (with $x_1^* = 0$, $x_2^* = m/p_2$) if $p_1 \ge m + p_2$.
The case $x_1^* = x_2^* = 0$ cannot hold, since it violates (3), which must hold with equality. This concludes our solution of the K-T conditions.
Since the utility function is concave and the constraint function is concave (in fact, it is linear), the K-T conditions are sufficient (Theorem S1). Hence the solution to the K-T conditions is a constrained maximizer. Further, since Slater's condition holds, every constrained maximizer has to satisfy the K-T conditions.
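As a final check, the case-by-case closed form can be compared with a numerical solver at a few parameter values; the helper names below are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    # Closed-form solution assembled from the three cases above.
    def closed_form(p1, p2, m):
        if m + p2 > p1 and m + p1 > p2:            # Case 1: interior
            return (m + p2 - p1) / (2*p1), (m + p1 - p2) / (2*p2)
        if p2 >= m + p1:                           # Case 2: x2* = 0
            return m / p1, 0.0
        return 0.0, m / p2                         # Case 3: x1* = 0

    # Direct numerical maximization of the same problem.
    def numeric(p1, p2, m):
        res = minimize(lambda x: -(np.log(x[0] + 1) + np.log(x[1] + 1)),
                       x0=[0.1, 0.1], bounds=[(0, None), (0, None)],
                       constraints=[{'type': 'ineq',
                                     'fun': lambda x: m - p1*x[0] - p2*x[1]}])
        return tuple(res.x)

    # One parameter point per case: interior, x2* = 0, x1* = 0.
    for p1, p2, m in [(1, 1, 4), (1, 6, 4), (6, 1, 4)]:
        print(closed_form(p1, p2, m), numeric(p1, p2, m))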