
A First Course in Optimization:

Answers to Selected Exercises


Charles L. Byrne
Department of Mathematical Sciences
University of Massachusetts Lowell
Lowell, MA 01854
December 2, 2009
(The most recent draft is available as a pdf file at
http://faculty.uml.edu/cbyrne/cbyrne.html)

Contents

1  Preface
2  Introduction
   2.1  Exercise
3  Optimization Without Calculus
   3.1  Exercises
4  Geometric Programming
   4.1  Exercise
5  Convex Sets
   5.1  References Used in the Exercises
   5.2  Exercises
6  Linear Programming
   6.1  Needed References
   6.2  Exercises
7  Matrix Games and Optimization
   7.1  Some Exercises
8  Differentiation
   8.1  Exercises
9  Convex Functions
   9.1  Exercises
10 Fenchel Duality
   10.1  Exercises
11 Convex Programming
   11.1  Referenced Results
   11.2  Exercises
12 Iterative Optimization
   12.1  Referenced Results
   12.2  Exercises
13 Modified-Gradient Algorithms
14 Quadratic Programming
15 Solving Systems of Linear Equations
   15.1  Regularizing the ART
   15.2  Exercises
16 Conjugate-Direction Methods
   16.1  Proofs of Lemmas
   16.2  Exercises
17 Sequential Unconstrained Minimization Algorithms
   17.1  Exercises
18 Likelihood Maximization
19 Operators
   19.1  Referenced Results
   19.2  Exercises
20 Convex Feasibility and Related Problems
   20.1  A Lemma
   20.2  Exercises

Chapter 1

Preface
In the chapters that follow you will find solutions to many, but not all,
of the exercises in the text. Some chapters in the text have no exercises,
but those chapters are included here to keep the chapter numbers the same
as in the text. Please note that the numbering of exercises within each
chapter may differ from the numbering in the text itself, because certain
exercises have been skipped.


Chapter 2

Introduction
2.1 Exercise

2.1 Consider the function f(x) defined by $f(x) = e^x$, for x > 0, by
$f(x) = -e^x$, for x < 0, and by f(0) = 0. Show that
$$-1 = \liminf_{x \to 0} f(x)$$
and
$$+1 = \limsup_{x \to 0} f(x).$$

As we approach x = 0 from the left, the limit is -1, while as we approach
from the right, the limit is +1. Therefore, the set S is $S = \{-1, 0, +1\}$. The
greatest lower bound for S is its minimum member, -1, while the least
upper bound is its maximum member, +1.

2.2 For n = 1, 2, ..., let
$$A_n = \{x \,|\, \|x - a\| \le \tfrac{1}{n}\},$$
and let $\beta_n$ and $\alpha_n$ be defined by
$$\beta_n = \inf\{f(x) \,|\, x \in A_n\},$$
and
$$\alpha_n = \sup\{f(x) \,|\, x \in A_n\}.$$
a) Show that the sequence $\{\beta_n\}$ is increasing, bounded above by f(a),
and converges to some $\beta$, while the sequence $\{\alpha_n\}$ is decreasing,
bounded below by f(a), and converges to some $\alpha$. Hint: use the
fact that, if $A \subseteq B$, where A and B are sets of real numbers, then
$\inf(A) \ge \inf(B)$.
b) Show that $\beta$ and $\alpha$ are in S. Hint: prove that there is a sequence
$\{x_n\}$ with $x_n$ in $A_n$ and $f(x_n) \le \beta_n + \frac{1}{n}$.
c) Show that, if $\{x_m\}$ is any sequence converging to a, then there is
a subsequence, denoted $\{x_{m_n}\}$, such that $x_{m_n}$ is in $A_n$, for each n,
and so
$$\beta_n \le f(x_{m_n}) \le \alpha_n.$$
d) Show that, if $\{f(x_m)\}$ converges to $\gamma$, then
$$\beta \le \gamma \le \alpha.$$
e) Show that
$$\beta = \liminf_{x \to a} f(x)$$
and
$$\alpha = \limsup_{x \to a} f(x).$$

According to the hint in a), we use the fact that $A_{n+1} \subseteq A_n$. For b),
since $\beta_n$ is the infimum, there must be a member of $A_n$, call it $x_n$, such
that
$$\beta_n \le f(x_n) \le \beta_n + \frac{1}{n}.$$
Then the sequence $\{x_n\}$ converges to a and the sequence $\{f(x_n)\}$ converges
to $\beta$.
Now suppose that $\{x_m\}$ converges to a and $\{f(x_m)\}$ converges to $\gamma$.
We don't know that each $x_m$ is in $A_m$, but we do know that, for each n,
there must be a term of the sequence, call it $x_{m_n}$, that is within $\frac{1}{n}$ of a,
that is, $x_{m_n}$ is in $A_n$. From c) we have
$$\beta_n \le f(x_{m_n}) \le \alpha_n.$$
The sequence $\{f(x_{m_n})\}$ also converges to $\gamma$, so that
$$\beta \le \gamma \le \alpha.$$
Finally, since we have shown that $\beta$ is the smallest number in S, it follows
that
$$\beta = \liminf_{x \to a} f(x).$$
The argument for $\alpha$ is similar.

Chapter 3

Optimization Without Calculus

3.1 Exercises

3.1 Let A be the arithmetic mean of a finite set of positive numbers, with
x the smallest of these numbers, and y the largest. Show that
$$xy \le A(x + y - A),$$
with equality if and only if x = y = A.

This is equivalent to showing that
$$A^2 - A(x + y) + xy = (A - x)(A - y) \le 0,$$
which is true, since $A - x \ge 0$ and $A - y \le 0$.
3.2 Minimize the function
$$f(x) = x^2 + \frac{1}{x^2} + 4x + \frac{4}{x},$$
over positive x. Note that the minimum value of f(x) cannot be found
by a straight-forward application of the AGM Inequality to the four terms
taken together. Try to find a way of rewriting f(x), perhaps using more
than four terms, so that the AGM Inequality can be applied to all the terms.

The product of the four terms is 16, whose fourth root is 2, so the AGM
Inequality tells us that $f(x) \ge 8$, with equality if and only if all four terms
are equal. But it is not possible for all four terms to be equal. Instead, we
can write
$$f(x) = x^2 + \frac{1}{x^2} + x + \frac{1}{x} + x + \frac{1}{x} + x + \frac{1}{x} + x + \frac{1}{x}.$$
These ten terms have a product of 1, which tells us that
$$f(x) \ge 10,$$
with equality if and only if x = 1.
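As a quick numerical check (an illustrative Python sketch, with the grid chosen arbitrarily), one can confirm that the minimum of f over positive x is 10, attained at x = 1:

```python
import numpy as np

# Sample f(x) = x^2 + 1/x^2 + 4x + 4/x on a fine grid of positive x
x = np.linspace(0.1, 5.0, 100000)
f = x**2 + 1.0/x**2 + 4*x + 4.0/x
i = np.argmin(f)
print(x[i], f[i])   # approximately 1.0 and 10.0, as the AGM argument predicts
```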
3.3 Find the maximum value of $f(x, y) = x^2 y$, if x and y are restricted
to positive real numbers for which 6x + 5y = 45.

Write
$$f(x, y) = x \cdot x \cdot y = \frac{1}{45}(3x)(3x)(5y),$$
and
$$45 = 6x + 5y = 3x + 3x + 5y.$$
3.4 Find the smallest value of
$$f(x) = 5x + \frac{16}{x} + 21,$$
over positive x.

Just focus on the first two terms.
3.5 Find the smallest value of the function
$$f(x, y) = \sqrt{x^2 + y^2},$$
among those values of x and y satisfying 3x - y = 20.

By Cauchy's Inequality, we know that
$$20 = 3x - y = \langle (x, y), (3, -1) \rangle \le \sqrt{x^2 + y^2}\,\sqrt{3^2 + (-1)^2} = \sqrt{10}\,\sqrt{x^2 + y^2},$$
with equality if and only if (x, y) = c(3, -1) for some number c. Then solve
for c.
3.6 Find the maximum and minimum values of the function
$$f(x) = \sqrt{100 + x^2} - x$$
over non-negative x.

The derivative of f(x) is negative, so f(x) is decreasing and attains its
maximum of 10 at x = 0. As $x \to +\infty$, f(x) goes to zero, but never reaches
zero.
3.7 Multiply out the product
$$(x + y + z)\Big(\frac{1}{x} + \frac{1}{y} + \frac{1}{z}\Big)$$
and deduce that the least value of this product, over non-negative x, y, and
z, is 9. Use this to find the least value of the function
$$f(x, y, z) = \frac{1}{x} + \frac{1}{y} + \frac{1}{z},$$
over non-negative x, y, and z having a constant sum c.

See the more general Exercise 3.9.
3.8 The harmonic mean of positive numbers $a_1, ..., a_N$ is
$$H = \Big[\Big(\frac{1}{a_1} + ... + \frac{1}{a_N}\Big)\Big/ N\Big]^{-1}.$$
Prove that the geometric mean G is not less than H.

Use the AGM Inequality to show that $H^{-1} \ge G^{-1}$.
3.9 Prove that
$$\Big(\frac{1}{a_1} + ... + \frac{1}{a_N}\Big)(a_1 + ... + a_N) \ge N^2,$$
with equality if and only if $a_1 = ... = a_N$.

When we multiply everything out, we find that there are N ones, and
$\frac{N(N-1)}{2}$ pairs of the form $\frac{a_m}{a_n} + \frac{a_n}{a_m}$, for $m \ne n$. Each of these pairs has
the form $x + \frac{1}{x}$, which is always greater than or equal to two. Therefore,
the entire product is greater than or equal to $N + N(N-1) = N^2$, with
equality if and only if all the $a_n$ are the same.

3.10 Show that the equation $S = ULU^T$ can be written as
$$S = \lambda_1 u^1 (u^1)^T + \lambda_2 u^2 (u^2)^T + ... + \lambda_N u^N (u^N)^T, \qquad (3.1)$$
and
$$S^{-1} = \frac{1}{\lambda_1} u^1 (u^1)^T + \frac{1}{\lambda_2} u^2 (u^2)^T + ... + \frac{1}{\lambda_N} u^N (u^N)^T. \qquad (3.2)$$

This is just algebra.

3.11 Let Q be positive-definite, with positive eigenvalues
$$\lambda_1 \ge ... \ge \lambda_N > 0$$
and associated mutually orthogonal norm-one eigenvectors $u^n$. Show that
$$x^T Q x \le \lambda_1,$$
for all vectors x with $\|x\| = 1$, with equality if $x = u^1$. Hints: use
$$1 = \|x\|^2 = x^T x = x^T I x,$$
$$I = u^1 (u^1)^T + ... + u^N (u^N)^T,$$
and Equation (3.1).

Use the inequality
$$x^T Q x = \sum_{n=1}^{N} \lambda_n |x^T u^n|^2 \le \lambda_1 \sum_{n=1}^{N} |x^T u^n|^2 = \lambda_1.$$
3.12 Relate Example 4 to eigenvectors and eigenvalues.

We can write
$$\begin{bmatrix} 4 & 6 & 12 \\ 6 & 9 & 18 \\ 12 & 18 & 36 \end{bmatrix} = 49\,\Big(\frac{2}{7}, \frac{3}{7}, \frac{6}{7}\Big)^T \Big(\frac{2}{7}, \frac{3}{7}, \frac{6}{7}\Big),$$
so that
$$(2x + 3y + 6z)^2 = |(x, y, z)(2, 3, 6)^T|^2 = (x, y, z) \begin{bmatrix} 4 & 6 & 12 \\ 6 & 9 & 18 \\ 12 & 18 & 36 \end{bmatrix} (x, y, z)^T.$$
The only positive eigenvalue of the matrix is 49 and the corresponding
normalized eigenvector is $(\frac{2}{7}, \frac{3}{7}, \frac{6}{7})^T$, so, for unit vectors (x, y, z), the
largest value of 2x + 3y + 6z is $\sqrt{49} = 7$. Then use Exercise 3.11.
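A short numerical check (an illustrative sketch) confirms the rank-one structure and the single positive eigenvalue:

```python
import numpy as np

# The rank-one matrix from Exercise 3.12
M = np.array([[4., 6., 12.],
              [6., 9., 18.],
              [12., 18., 36.]])
vals, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
print(vals)                              # approximately [0, 0, 49]
u = vecs[:, -1]                          # eigenvector for the largest eigenvalue
print(np.abs(u))                         # proportional to (2/7, 3/7, 6/7)
```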
3.13 (Young's Inequality) Suppose that p and q are positive numbers
greater than one such that $\frac{1}{p} + \frac{1}{q} = 1$. If x and y are positive numbers, then
$$xy \le \frac{x^p}{p} + \frac{y^q}{q},$$
with equality if and only if $x^p = y^q$. Hint: use the GAGM Inequality.

This one is pretty easy.

3.14 For given constants c and d, find the largest and smallest values of
cx + dy taken over all points (x, y) of the ellipse
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.$$

3.15 Find the largest and smallest values of 2x + y on the circle $x^2 + y^2 = 1$.
Where do these values occur? What does this have to do with eigenvectors
and eigenvalues?

This one is similar to Exercise 3.12.
3.16 When a real M by N matrix A is stored in the computer it is usually
vectorized; that is, the matrix
$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ A_{21} & A_{22} & \cdots & A_{2N} \\ \vdots & & & \vdots \\ A_{M1} & A_{M2} & \cdots & A_{MN} \end{bmatrix}$$
becomes
$$\mathrm{vec}(A) = (A_{11}, A_{21}, ..., A_{M1}, A_{12}, A_{22}, ..., A_{M2}, ..., A_{MN})^T.$$
Show that the dot product $\mathrm{vec}(A) \cdot \mathrm{vec}(B) = \mathrm{vec}(B)^T \mathrm{vec}(A)$ can be obtained by
$$\mathrm{vec}(A) \cdot \mathrm{vec}(B) = \mathrm{trace}\,(AB^T) = \mathrm{trace}\,(B^T A).$$

Easy.


Chapter 4

Geometric Programming

4.1 Exercise

4.1 Show that there is no solution to the problem of minimizing the function
$$g(t_1, t_2) = \frac{2}{t_1 t_2} + t_1 t_2 + t_1, \qquad (4.1)$$
over $t_1 > 0$, $t_2 > 0$. Can $g(t_1, t_2)$ ever be smaller than $2\sqrt{2}$?

Show that the only vector $\delta$ of dual weights that works must have $\delta_3 = 0$,
which is not allowed.
To answer the second part, begin by showing that if $t_2 \le 1$ then
$g(t_1, t_2) \ge 4$, and if $t_1 \ge 1$, then $g(t_1, t_2) \ge 3$. We want to consider
the case in which $t_1 \to 0$, $t_2 \to +\infty$, and both $t_1 t_2$ and $(t_1 t_2)^{-1}$ remain
bounded. For example, we try
$$t_2 = \frac{1}{t_1}\big(a e^{t_1} + 1\big),$$
for some positive a. Then $g(t_1, t_2) \to \frac{2}{a+1} + a + 1$ as $t_1 \to 0$. Which a
gives the smallest value? Minimizing the limit, with respect to a, gives
$a = \sqrt{2} - 1$, for which the limit is $2\sqrt{2}$.
More generally, let
$$t_2 = \frac{f(t_1)}{t_1},$$
for some positive function $f(t_1)$ such that $f(t_1)$ does not go to zero as $t_1$
goes to zero. Then we have
$$g(t_1, t_2) = \frac{2}{f(t_1)} + f(t_1) + t_1,$$
which converges to $\frac{2}{f(0)} + f(0)$, as $t_1 \to 0$. The minimum of this limit, as a
function of f(0), occurs when $f(0) = \sqrt{2}$, and the minimum limit is $2\sqrt{2}$.
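A numerical probe (an illustrative sketch, with the path chosen as above) shows g approaching, but never reaching, $2\sqrt{2}$:

```python
import numpy as np

# Probe g(t1, t2) = 2/(t1 t2) + t1 t2 + t1 along the path t2 = sqrt(2)/t1,
# which drives g toward its infimum 2*sqrt(2) ~ 2.828.
def g(t1, t2):
    return 2.0 / (t1 * t2) + t1 * t2 + t1

for t1 in [1e-1, 1e-3, 1e-6]:
    print(t1, g(t1, np.sqrt(2.0) / t1))   # approaches 2*sqrt(2), never reaches it
```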

Chapter 5

Convex Sets

5.1 References Used in the Exercises

Proposition 5.1 The convex hull of a set S is the set C of all convex
combinations of members of S.

Proposition 5.2 For a given x, a vector z in C is $P_C x$ if and only if
$$\langle c - z, z - x \rangle \ge 0, \qquad (5.1)$$
for all c in the set C.

Lemma 5.1 For $H = H(a, \gamma)$, $z = P_H x$ is the vector
$$z = P_H x = x + (\gamma - \langle a, x \rangle)a. \qquad (5.2)$$

Lemma 5.2 For $H = H(a, \gamma)$, $H_0 = H(a, 0)$, and any x and y in $R^J$, we
have
$$P_H(x + y) = P_H x + P_H y - P_H 0, \qquad (5.3)$$
so that
$$P_{H_0}(x + y) = P_{H_0} x + P_{H_0} y, \qquad (5.4)$$
that is, the operator $P_{H_0}$ is an additive operator. In addition,
$$P_{H_0}(\alpha x) = \alpha P_{H_0} x, \qquad (5.5)$$
so that $P_{H_0}$ is a linear operator.

Lemma 5.3 For any hyperplane $H = H(a, \gamma)$ and $H_0 = H(a, 0)$,
$$P_H x = P_{H_0} x + P_H 0, \qquad (5.6)$$
so $P_H$ is an affine linear operator.

Lemma 5.4 For i = 1, ..., I let $H_i$ be the hyperplane $H_i = H(a^i, \gamma_i)$,
$H_{i0} = H(a^i, 0)$, and $P_i$ and $P_{i0}$ the orthogonal projections onto $H_i$ and
$H_{i0}$, respectively. Let T be the operator $T = P_I P_{I-1} \cdots P_2 P_1$. Then
$Tx = Bx + d$, for some square matrix B and vector d; that is, T is an
affine linear operator.

Definition 5.1 Let S be a subset of $R^J$ and $f : S \to [-\infty, \infty]$ a function
defined on S. The subset of $R^{J+1}$ defined by
$$\mathrm{epi}(f) = \{(x, \gamma)\,|\, f(x) \le \gamma\}$$
is the epi-graph of f. Then we say that f is convex if its epi-graph is a
convex set.

5.2 Exercises

5.1 Let $C \subseteq R^J$, and let $x^n$, n = 1, ..., N, be members of C. For n =
1, ..., N, let $\alpha_n > 0$, with $\alpha_1 + ... + \alpha_N = 1$. Show that, if C is convex, then
the convex combination
$$\alpha_1 x^1 + \alpha_2 x^2 + ... + \alpha_N x^N$$
is in C.

We know that the convex combination of any two members of C is again
in C. We prove the apparently more general result by induction. Suppose
that it is true that any convex combination of N - 1 or fewer members of
C is again in C. Now we show it is true for N members of C. To see this,
merely write
$$\sum_{n=1}^{N} \alpha_n x^n = (1 - \alpha_N) \sum_{n=1}^{N-1} \frac{\alpha_n}{1 - \alpha_N} x^n + \alpha_N x^N.$$
5.2 Prove Proposition 5.1. Hint: show that the set C is convex.

Note that it is not obvious that C is convex; we need to prove this fact.
We need to show that if we take two convex combinations of members of
S, say x and y, and form $z = (1 - \alpha)x + \alpha y$, then z is also a member of C.
To be concrete, let
$$x = \sum_{n=1}^{N} \alpha_n x^n,$$
and
$$y = \sum_{m=1}^{M} \beta_m y^m,$$
where the $x^n$ and $y^m$ are in S and the positive $\alpha_n$ and $\beta_m$ both sum to
one. Then
$$z = \sum_{n=1}^{N} (1 - \alpha)\alpha_n x^n + \sum_{m=1}^{M} \alpha\beta_m y^m,$$
which is a convex combination of N + M members of S, since
$$\sum_{n=1}^{N} (1 - \alpha)\alpha_n + \sum_{m=1}^{M} \alpha\beta_m = 1.$$

5.3 Show that the subset of $R^J$ consisting of all vectors x with $\|x\|_2 = 1$
is not convex.

Don't make the problem harder than it is; all we need is one counterexample.
Take x and -x, both with norm one. Then $\frac{1}{2}(x + (-x)) = 0$.
5.4 Let $\|x\|_2 = \|y\|_2 = 1$ and $z = \frac{1}{2}(x + y)$ in $R^J$. Show that $\|z\|_2 < 1$
unless x = y. Show that this conclusion does not hold if the two-norm $\|\cdot\|_2$
is replaced by the one-norm, defined by
$$\|x\|_1 = \sum_{j=1}^{J} |x_j|.$$

Now x and y are given, and z is their midpoint. From the Parallelogram
Law,
$$\|x + y\|_2^2 + \|x - y\|_2^2 = 2\|x\|_2^2 + 2\|y\|_2^2,$$
we have
$$\|x + y\|_2^2 + \|x - y\|_2^2 = 4,$$
so that
$$\|z\|_2^2 + \frac{1}{4}\|x - y\|_2^2 = 1,$$
from which we conclude that $\|z\|_2 < 1$, unless x = y.
5.5 Let C be the set of all vectors x in $R^J$ with $\|x\|_2 \le 1$. Let K be a
subset of C obtained by removing from C any number of its members for
which $\|x\|_2 = 1$. Show that K is convex. Consequently, every x in C with
$\|x\|_2 = 1$ is an extreme point of C.

Suppose we remove some x with $\|x\|_2 = 1$. In order for the set without
x to fail to be convex it is necessary that there be some $y \ne x$ and $z \ne x$ in
C for which $x = \frac{y+z}{2}$. But we know that if $\|y\|_2 = \|z\|_2 = 1$, then $\|x\|_2 < 1$,
unless y = z, in which case we would also have y = z = x. So we cannot
have $\|y\|_2 = \|z\|_2 = 1$. But what if their norms are less than one? In that
case, the norm of the midpoint of y and z is not greater than the larger of
the two norms, so again, it cannot be one. So no x on the boundary of C
is a convex combination of two distinct members of C, and removing any
number of such points leaves the remaining set convex.
5.6 Prove that every subspace of $R^J$ is convex, and every linear manifold
is convex.

For the first part, take the coefficients in the definition of a subspace to be
$\alpha$ and $1 - \alpha$, and require that $0 \le \alpha \le 1$. For the second part, let M = S + b, where S
is a subspace, and b is a fixed vector. Let x = s + b and y = t + b, where s
and t are in S. Then
$$(1 - \alpha)x + \alpha y = [(1 - \alpha)s + \alpha t] + b,$$
which is then also in M.
5.7 Prove that every hyperplane $H(a, \gamma)$ is a linear manifold.

Show that
$$H(a, \gamma) = H(a, 0) + \frac{\gamma}{\|a\|^2}\, a.$$

5.8 (a) Let C be a circular region in $R^2$. Determine the normal cone
for a point on its circumference. (b) Let C be a rectangular region in $R^2$.
Determine the normal cone for a point on its boundary.

For (a), let $x^0$ be the center of the circle and x be the point on its
circumference. Then the normal cone at x is the set of all z of the form
$z = x + \alpha(x - x^0)$, for $\alpha \ge 0$. For (b), if the point x lies on one of the sides
of the rectangle, then the normal cone consists of the points on the ray
from x in the direction of the outward normal to that side. If the point
x is a corner, then the normal cone consists of all points in the intersection
of the half-spaces external to the rectangle whose boundary lines are the
two sides that meet at x.
5.9 Prove Lemmas 5.2, 5.3 and 5.4.

Applying Equation (5.2),
$$P_H x = x + (\gamma - \langle a, x \rangle)a,$$
we find that
$$P_H(x + y) = x + y + (\gamma - \langle a, x + y \rangle)a = \big[x + (\gamma - \langle a, x \rangle)a\big] + \big[y + (\gamma - \langle a, y \rangle)a\big] - \gamma a$$
$$= P_H(x) + P_H(y) - P_H(0).$$
We also have
$$P_H(x) = x + \gamma a - \langle a, x \rangle a = \gamma a + x - \langle a, x \rangle a = P_H(0) + P_{H_0}(x).$$
Finally, consider the case of I = 2. Then we have
$$Tx = P_2 P_1(x) = P_2\big(x + (\gamma_1 - \langle a^1, x \rangle)a^1\big)$$
$$= \big[x - \langle a^1, x \rangle a^1 - \langle a^2, x \rangle a^2 + \langle a^1, x \rangle \langle a^2, a^1 \rangle a^2\big] + \gamma_1 a^1 + \gamma_2 a^2 - \gamma_1 \langle a^2, a^1 \rangle a^2$$
$$= Lx + b,$$
where L is the linear operator
$$Lx = x - \langle a^1, x \rangle a^1 - \langle a^2, x \rangle a^2 + \langle a^1, x \rangle \langle a^2, a^1 \rangle a^2,$$
and
$$b = \gamma_1 a^1 + \gamma_2 a^2 - \gamma_1 \langle a^2, a^1 \rangle a^2.$$
We can then associate with the linear operator L the matrix
$$B = I - a^1 (a^1)^T - a^2 (a^2)^T + \langle a^2, a^1 \rangle a^2 (a^1)^T.$$
5.10 Let C be a convex set and $f : C \subseteq R^J \to (-\infty, \infty]$. Prove that f(x)
is a convex function, according to Definition 5.1, if and only if, for all x
and y in C, and for all $0 < \alpha < 1$, we have
$$f(\alpha x + (1 - \alpha)y) \le \alpha f(x) + (1 - \alpha)f(y).$$

Suppose that the epigraph of f is a convex set. Then
$$\alpha(x, f(x)) + (1 - \alpha)(y, f(y)) = \big(\alpha x + (1 - \alpha)y,\, \alpha f(x) + (1 - \alpha)f(y)\big)$$
is a member of the epigraph, which then tells us that
$$f(\alpha x + (1 - \alpha)y) \le \alpha f(x) + (1 - \alpha)f(y).$$
Conversely, let $(x, \gamma)$ and $(y, \delta)$ be members of the epigraph of f, so that
$f(x) \le \gamma$, and $f(y) \le \delta$. Let $\alpha$ be in [0, 1]. We want to show that
$$\alpha(x, \gamma) + (1 - \alpha)(y, \delta) = \big(\alpha x + (1 - \alpha)y,\, \alpha\gamma + (1 - \alpha)\delta\big)$$
is in the epigraph. But this follows from the inequality
$$f(\alpha x + (1 - \alpha)y) \le \alpha f(x) + (1 - \alpha)f(y) \le \alpha\gamma + (1 - \alpha)\delta.$$
5.11 Given a point s in a convex set C, where are the points x for which
$s = P_C x$?

Suppose that $s = P_C x$. From the inequality
$$\langle x - s, c - s \rangle \le 0,$$
for all c in C, it follows that $x - s$ is in $N_C(s)$. So $s = P_C x$ if and only if
the vector $x - s$ is in $N_C(s)$.
5.12 Let C be a closed non-empty convex set in $R^J$, x a vector not in C,
and d > 0 the distance from x to C. Let
$$\sigma_C(a) = \sup_{c \in C} \langle a, c \rangle,$$
the support function of C. Show that
$$d = m = \max_{\|a\| \le 1} \{\langle a, x \rangle - \sigma_C(a)\}.$$
Hints: Consider the unit vector $\frac{1}{d}(x - P_C x)$, and use Cauchy's Inequality
and Proposition 5.2.

Let $a^0 = \frac{1}{d}(x - P_C x)$. First, show that
$$\sigma_C(a^0) = \langle a^0, P_C x \rangle.$$
Then
$$d = \langle a^0, x - P_C x \rangle = \langle a^0, x \rangle - \langle a^0, P_C x \rangle = \langle a^0, x \rangle - \sigma_C(a^0) \le m;$$
therefore, $d \le m$.
From Cauchy's Inequality, we know that
$$\langle a, x - P_C x \rangle \le \|x - P_C x\|\,\|a\| = d,$$
for any vector a with norm one. Therefore, for any a with norm one, we
have
$$d \ge \langle a, x \rangle - \langle a, P_C x \rangle \ge \langle a, x \rangle - \sigma_C(a).$$
Therefore, we can conclude that $d \ge m$, and so d = m.
5.13 (Rådström Cancellation)
(a) Show that, for any subset S of $R^J$, we have $2S \subseteq S + S$, and
$2S = S + S$ if S is convex.
(b) Find three finite subsets of R, say A, B, and C, with A not
contained in B, but with the property that $A + C \subseteq B + C$. Hint: try
to find an example where the set C is $C = \{-1, 0, 1\}$.
(c) Show that, if A and B are convex, B is closed, and C is bounded,
then $A + C \subseteq B + C$ implies that $A \subseteq B$. Hint: Note that, under
these assumptions, $2A + C = A + (A + C) \subseteq 2B + C$.

(a) If x and y are in S, then x + y is in S + S. If S is convex, then
$\frac{x+y}{2}$ is also in S, or
$$x + y = 2\Big(\frac{x + y}{2}\Big)$$
is in 2S; therefore S + S = 2S.
(b) Take $A = \{0, 1\}$, $B = \{0, 2\}$ and $C = \{-1, 0, 1\}$. Then we have
$$A + C = \{-1, 0, 1, 2\},$$
and
$$B + C = \{-1, 0, 1, 2, 3\}.$$
(c) Begin with
$$2A + C = (A + A) + C = A + (A + C) \subseteq A + (B + C) = (A + C) + B$$
$$\subseteq (B + C) + B = (B + B) + C = 2B + C.$$
Continuing in this way, show that
$$nA + C \subseteq nB + C,$$
or
$$A + \frac{1}{n}C \subseteq B + \frac{1}{n}C,$$
for n = 1, 2, 3, .... Now select a point a in A; we show that a is in B.
Since C is a bounded set, we know that there is a constant K > 0
such that $\|c\| \le K$, for all c in C. For each n there are $c^n$ and $d^n$ in
C and $b^n$ in B such that
$$a + \frac{1}{n} c^n = b^n + \frac{1}{n} d^n.$$
Then we have
$$b^n = a + \frac{1}{n}(c^n - d^n). \qquad (5.7)$$
Since C is bounded, we know that $\frac{1}{n}(c^n - d^n)$ converges to zero, as
$n \to \infty$. Then, taking limits on both sides of Equation (5.7), we find
that $\{b^n\} \to a$, which must be in B, since B is closed.


Chapter 6

Linear Programming

6.1 Needed References

Theorem 6.1 Let x and y be feasible vectors. Then
$$z = c^T x \ge b^T y = w. \qquad (6.1)$$

Corollary 6.1 If z is not bounded below, then there are no feasible y.

Corollary 6.2 If x and y are both feasible, and z = w, then both x and y
are optimal for their respective problems.

Lemma 6.1 Let $W = \{w^1, ..., w^N\}$ be a spanning set for a subspace S
in $R^I$, and $V = \{v^1, ..., v^M\}$ a linearly independent subset of S. Then
$M \le N$.

6.2 Exercises

6.1 Prove Theorem 6.1 and its corollaries.

Since y is feasible, we know that $A^T y \le c$, and since x is feasible, we
know that $x \ge 0$ and Ax = b. Therefore, we have
$$w = b^T y = (Ax)^T y = x^T A^T y \le x^T c = c^T x = z.$$
This tells us that, if there are feasible x and y, then $z = c^T x$ is bounded
below, as we run over all feasible x, by any w. Consequently, if z is not
bounded below, there cannot be any w, so there can be no feasible y.
If both x and y are feasible and $z = c^T x = b^T y = w$, then we cannot
decrease z by replacing x, nor can we increase w by replacing y. Therefore,
x and y are the best we can do.

6.2 Let $W = \{w^1, ..., w^N\}$ be a spanning set for a subspace S in $R^I$, and
$V = \{v^1, ..., v^M\}$ a linearly independent subset of S. Let A be the matrix
whose columns are the $v^m$, B the matrix whose columns are the $w^n$. Show
that there is an N by M matrix C such that A = BC. Prove Lemma
6.1 by showing that, if M > N, then there is a non-zero vector x with
Cx = Ax = 0.

If C is any N by M matrix with M > N, then, using Gauss elimination,
we can show that there must be non-zero solutions of Cx = 0. Now, since
W is a spanning set, each column of A is a linear combination of the
columns of B, which means that we can find some C such that A = BC. If
there is a non-zero x for which Cx = 0, then Ax = 0 also. But the columns
of A are linearly independent, which tells us that Ax = 0 has only the zero
solution. We conclude that $M \le N$.
6.3 Show that, when the simplex method has reached the optimal solution
for the primal problem PS, the vector y with $y^T = c_B^T B^{-1}$ becomes a feasible
vector for the dual problem and is therefore the optimal solution for DS.
Hint: Clearly, we have
$$z = c^T x = c_B^T B^{-1} b = y^T b = w,$$
so we need only show that $A^T y \le c$.

We know that the simplex algorithm halts when the vector
$$r^T = c_N^T - c_B^T B^{-1} N = c_N^T - y^T N$$
has only non-negative entries, or when
$$c_N \ge N^T y.$$
Then we have
$$A^T y = \begin{bmatrix} B^T \\ N^T \end{bmatrix} y = \begin{bmatrix} B^T y \\ N^T y \end{bmatrix} = \begin{bmatrix} B^T (B^T)^{-1} c_B \\ N^T y \end{bmatrix} \le \begin{bmatrix} c_B \\ c_N \end{bmatrix} = c,$$
so y is feasible, and therefore optimal.

Chapter 7

Matrix Games and Optimization

7.1 Some Exercises

7.1 Show that the vectors $\hat{p} = \frac{1}{\mu}\hat{x}$ and $\hat{q} = \frac{1}{\mu}\hat{y}$ are probability vectors and
are optimal randomized strategies for the matrix game.

Since
$$z = c^T \hat{x} = \mu = b^T \hat{y} = w,$$
it follows that $c^T \hat{p} = b^T \hat{q} = 1$. We also have
$$\hat{x}^T A \hat{y} \le \hat{x}^T c = \mu,$$
and
$$\hat{x}^T A \hat{y} = (A^T \hat{x})^T \hat{y} \ge b^T \hat{y} = \mu,$$
so that
$$\hat{x}^T A \hat{y} = \mu,$$
and
$$\hat{p}^T A \hat{q} = \frac{1}{\mu}.$$
For any probability vectors p and q we have
$$p^T A \hat{q} = \frac{1}{\mu} p^T A \hat{y} \le \frac{1}{\mu} p^T c = \frac{1}{\mu},$$
and
$$\hat{p}^T A q = (A^T \hat{p})^T q = \frac{1}{\mu}(A^T \hat{x})^T q \ge \frac{1}{\mu} b^T q = \frac{1}{\mu}.$$

7.2 Given an arbitrary I by J matrix A, there is $\alpha > 0$ so that the matrix
B with entries $B_{ij} = A_{ij} + \alpha$ has only positive entries. Show that any
optimal randomized probability vectors for the game with pay-off matrix B
are also optimal for the game with pay-off matrix A.

This one is easy.

Chapter 8

Differentiation

8.1 Exercises

8.1 Let Q be a real, positive-definite symmetric matrix. Define the Q-inner
product on $R^J$ to be
$$\langle x, y \rangle_Q = x^T Q y = \langle x, Qy \rangle,$$
and the Q-norm to be
$$\|x\|_Q = \sqrt{\langle x, x \rangle_Q}.$$
Show that, if $\nabla f(a)$ is the Fréchet derivative of f(x) at x = a, for the
usual Euclidean norm, then $Q^{-1}\nabla f(a)$ is the Fréchet derivative of f(x) at
x = a, for the Q-norm. Hint: use the inequality
$$\sqrt{\lambda_J}\,\|h\|_2 \le \|h\|_Q \le \sqrt{\lambda_1}\,\|h\|_2,$$
where $\lambda_1$ and $\lambda_J$ denote the greatest and smallest eigenvalues of Q, respectively.
8.2 For (x, y) not equal to (0, 0), let
$$f(x, y) = \frac{x^a y^b}{x^p + y^q},$$
with f(0, 0) = 0. In each of the five cases below, determine if the function
is continuous, Gâteaux, Fréchet or continuously differentiable at (0, 0).
1) a = 2, b = 3, p = 2, and q = 4;
2) a = 1, b = 3, p = 2, and q = 4;
3) a = 2, b = 4, p = 4, and q = 8;
4) a = 1, b = 2, p = 2, and q = 2;
5) a = 1, b = 2, p = 2, and q = 4.

Chapter 9

Convex Functions

Proposition 9.1 The following are equivalent:
1) the epi-graph of g(x) is convex;
2) for all points a < x < b
$$g(x) \le \frac{g(b) - g(a)}{b - a}(x - a) + g(a); \qquad (9.1)$$
3) for all points a < x < b
$$g(x) \le \frac{g(b) - g(a)}{b - a}(x - b) + g(b); \qquad (9.2)$$
4) for all points a and b in R and for all $\alpha$ in the interval (0, 1)
$$g((1 - \alpha)a + \alpha b) \le (1 - \alpha)g(a) + \alpha g(b). \qquad (9.3)$$

Lemma 9.1 A firmly non-expansive operator on $R^J$ is non-expansive.

9.1 Exercises

9.1 Prove Proposition 9.1.

The key idea here is to use $\alpha = \frac{x - a}{b - a}$, so that $x = (1 - \alpha)a + \alpha b$.

9.2 Prove Lemma 9.1.

From the definition, F is firmly non-expansive if
$$\langle F(x) - F(y), x - y \rangle \ge \|F(x) - F(y)\|_2^2. \qquad (9.4)$$
Now use the Cauchy Inequality on the left side.


9.3 Show that, if $\hat{x}$ minimizes the function g(x) over all x in $R^J$, then
0 is in the sub-differential $\partial g(\hat{x})$.

This is easy.
9.4 If f(x) and g(x) are convex functions on $R^J$, is f(x) + g(x) convex?
Is f(x)g(x) convex?

It is easy to show that the sum of two convex functions is again convex.
The product of two is, however, not always convex; take f(x) = -1 and
$g(x) = x^2$, for example.
9.5 Let $\iota_C(x)$ be the indicator function of the closed convex set C, that is,
$$\iota_C(x) = \begin{cases} 0, & \text{if } x \in C; \\ +\infty, & \text{if } x \notin C. \end{cases}$$
Show that the sub-differential of the function $\iota_C$ at a point c in C is the
normal cone to C at the point c, that is, $\partial\iota_C(c) = N_C(c)$, for all c in C.

This follows immediately from the definitions.
9.6 Let g(t) be a strictly convex function for t > 0. For x > 0 and y > 0,
define the function
$$f(x, y) = x\, g\Big(\frac{y}{x}\Big).$$
Use induction to prove that
$$\sum_{n=1}^{N} f(x_n, y_n) \ge f(x_+, y_+),$$
for any positive numbers $x_n$ and $y_n$, where $x_+ = \sum_{n=1}^{N} x_n$ and
$y_+ = \sum_{n=1}^{N} y_n$. Also show that equality obtains if and only if the finite
sequences $\{x_n\}$ and $\{y_n\}$ are proportional.

We show this for the case of N = 2; the more general case is similar.
We need to show that
$$f(x_1, y_1) + f(x_2, y_2) \ge f(x_1 + x_2, y_1 + y_2).$$
Write
$$f(x_1, y_1) + f(x_2, y_2) = x_1 g\Big(\frac{y_1}{x_1}\Big) + x_2 g\Big(\frac{y_2}{x_2}\Big) = (x_1 + x_2)\Big[\frac{x_1}{x_1 + x_2} g\Big(\frac{y_1}{x_1}\Big) + \frac{x_2}{x_1 + x_2} g\Big(\frac{y_2}{x_2}\Big)\Big]$$
$$\ge (x_1 + x_2)\, g\Big(\frac{x_1}{x_1 + x_2}\frac{y_1}{x_1} + \frac{x_2}{x_1 + x_2}\frac{y_2}{x_2}\Big) = x_+ g\Big(\frac{y_+}{x_+}\Big) = f(x_+, y_+).$$


9.7 Use the result in Exercise 9.6 to obtain Cauchy's Inequality. Hint: let
$g(t) = -\sqrt{t}$.

Using the result in Exercise 9.6, and the choice of $g(t) = -\sqrt{t}$, we obtain
$$\sum_{n=1}^{N} x_n \sqrt{\frac{y_n}{x_n}} \le x_+ \sqrt{\frac{y_+}{x_+}},$$
so that
$$\sum_{n=1}^{N} \sqrt{x_n y_n} \le \sqrt{x_+ y_+},$$
which is Cauchy's Inequality.


9.8 Use the result in Exercise 9.6 to obtain Milne's Inequality:
$$x_+ y_+ \ge \Big(\sum_{n=1}^{N}(x_n + y_n)\Big)\Big(\sum_{n=1}^{N} \frac{x_n y_n}{x_n + y_n}\Big).$$
Hint: let $g(t) = \frac{-t}{1+t}$.

This is a direct application of the result in Exercise 9.6.


9.9 For x > 0 and y > 0, let f(x, y) be the Kullback-Leibler function,
$$f(x, y) = KL(x, y) = x\log\frac{x}{y} + y - x.$$
Use Exercise 9.6 to show that
$$\sum_{n=1}^{N} KL(x_n, y_n) \ge KL(x_+, y_+).$$

Use $g(t) = -\log t$, noting that the terms $y_n - x_n$ sum to $y_+ - x_+$.


Chapter 10

Fenchel Duality

10.1 Exercises

10.1 Let A be a real symmetric positive-definite matrix and
$$f(x) = \frac{1}{2}\langle Ax, x \rangle.$$
Show that
$$f^*(a) = \frac{1}{2}\langle A^{-1}a, a \rangle.$$
Hints: Find $\nabla f(x)$ and use Equation (10.1).

We have
$$f^*(a) = \langle a, (\nabla f)^{-1}(a) \rangle - f\big((\nabla f)^{-1}(a)\big). \qquad (10.1)$$
Since $\nabla f(x) = Ax$, it follows from Equation (10.1) that
$$f^*(a) = \langle a, A^{-1}a \rangle - \frac{1}{2}\langle A(A^{-1}a), A^{-1}a \rangle = \frac{1}{2}\langle a, A^{-1}a \rangle.$$


Chapter 11

Convex Programming

11.1 Referenced Results

Theorem 11.1 Let $(\hat{x}, \hat{y})$ be a saddle point for K(x, y). Then $\hat{x}$ solves the
primal problem, that is, $\hat{x}$ minimizes f(x), over all x in X, and $\hat{y}$ solves
the dual problem, that is, $\hat{y}$ maximizes g(y), over all y in Y. In addition,
we have
$$g(y) \le K(\hat{x}, \hat{y}) \le f(x), \qquad (11.1)$$
for all x and y, so that the maximum value of g(y) and the minimum value
of f(x) are both equal to $K(\hat{x}, \hat{y})$.

Theorem 11.2 Let $x^*$ be a regular point. If $x^*$ is a local constrained
minimizer of f(x), then there is a vector $\lambda^*$ such that
1) $\lambda_i^* \ge 0$, for i = 1, ..., K;
2) $\lambda_i^* g_i(x^*) = 0$, for i = 1, ..., K;
3) $\nabla f(x^*) + \sum_{i=1}^{K} \lambda_i^* \nabla g_i(x^*) = 0$.

11.2 Exercises

11.1 Prove Theorem 11.1.

We have
$$f(\hat{x}) = \sup_y K(\hat{x}, y) = K(\hat{x}, \hat{y}),$$
and
$$f(x) = \sup_y K(x, y) \ge K(x, \hat{y}) \ge K(\hat{x}, \hat{y}) = f(\hat{x}).$$
From $f(x) \ge f(\hat{x})$ we conclude that $x = \hat{x}$ minimizes f(x). In a similar
way, we can show that $y = \hat{y}$ maximizes g(y).
11.2 Apply the gradient form of the KKT Theorem to minimize the function
$f(x, y) = (x + 1)^2 + y^2$ over all $x \ge 0$ and $y \ge 0$.

Setting the x- and y-derivatives of the Lagrangian to zero, we obtain the equations
$$2(x + 1) - \lambda_1 = 0,$$
and
$$2y - \lambda_2 = 0.$$
Since $x \ge 0$, we cannot have $\lambda_1 = 0$; consequently $g_1(x, y) = -x = 0$, so
x = 0. We also have that y = 0, whether or not $\lambda_2 = 0$. Therefore, the
answer is x = 0, y = 0.
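As a numerical check (an illustrative sketch using scipy's bound-constrained minimizer, with the starting point chosen arbitrarily):

```python
from scipy.optimize import minimize

# Minimize (x+1)^2 + y^2 subject to x >= 0, y >= 0
res = minimize(lambda v: (v[0] + 1.0)**2 + v[1]**2,
               x0=[1.0, 1.0],
               bounds=[(0.0, None), (0.0, None)])
print(res.x)   # approximately [0, 0], matching the KKT analysis
```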
11.3 Minimize the function
$$f(x, y) = \sqrt{x^2 + y^2},$$
subject to
$$x + y \le 0.$$
Show that the function MP(z) is not differentiable at z = 0.

Equivalently, we minimize $f(x, y)^2 = x^2 + y^2$, subject to the constraint
$g(x, y) \le z$. If $z \ge 0$, the optimal point is (0, 0) and MP(z) = 0. If
$z < 0$, the optimal point is $x = \frac{z}{2} = y$, and $MP(z) = \frac{-z}{\sqrt{2}}$. The directional
derivative of MP(z) at z = 0, in the positive direction, is zero, while in the
negative direction it is $\frac{1}{\sqrt{2}}$, so MP(z) is not differentiable at z = 0.
11.4 Minimize the function
$$f(x, y) = -2x - y,$$
subject to
$$x + y \le 1,$$
$$0 \le x \le 1,$$
and
$$y \ge 0.$$

We write $g_1(x, y) = x + y - 1 \le 0$, $g_2(x, y) = x - 1 \le 0$, $g_3(x, y) = -x \le 0$,
and $g_4(x, y) = -y \le 0$. Setting the partial derivatives of the Lagrangian
to zero, we get
$$0 = -2 + \lambda_1 + \lambda_2 - \lambda_3,$$
and
$$0 = -1 + \lambda_1 - \lambda_4.$$
Since $\lambda_i \ge 0$ for each i, it follows that $\lambda_1 \ne 0$, so that $0 = g_1(x, y) =
x + y - 1$, or x + y = 1.
If $\lambda_4 = 0$, then $\lambda_1 = 1$ and $\lambda_2 - \lambda_3 = 1$. Therefore, we cannot have
$\lambda_2 = 0$, which tells us that $g_2(x, y) = 0$, or x = 1, in addition to y = 0.
If $\lambda_4 > 0$, then y = 0, which we already know, and so x = 1 again. In
any case, the answer must be x = 1 and y = 0, so that the minimum is
f(1, 0) = -2.
11.5 Apply the theory of convex programming to the primal Quadratic
Programming Problem (QP), which is to minimize the function
$$f(x) = \frac{1}{2} x^T Q x,$$
subject to
$$a^T x \le c,$$
where $a \ne 0$ is in $R^J$, c < 0 is real, and Q is symmetric and positive-definite.

With $g(x) = a^T x - c \le 0$, we set the x-gradient of $L(x, \lambda)$ equal to zero,
obtaining
$$Qx + \lambda a = 0,$$
or
$$x = -\lambda Q^{-1} a.$$
So, for any $\lambda \ge 0$, $\nabla_x L(x, \lambda) = 0$ has a solution, so all $\lambda \ge 0$ are feasible
for the dual problem. We can then write
$$L(x^*, \lambda) = \frac{\lambda^2}{2} a^T Q^{-1} a - \lambda^2 a^T Q^{-1} a - \lambda c = -\Big(\frac{\lambda^2}{2} a^T Q^{-1} a + \lambda c\Big).$$
Now we maximize $L(x^*, \lambda)$ over $\lambda \ge 0$. We get
$$0 = -\lambda a^T Q^{-1} a - c,$$
so that
$$\lambda^* = \frac{-c}{a^T Q^{-1} a},$$
and the optimal x is
$$x^* = -\lambda^* Q^{-1} a = \frac{c\, Q^{-1} a}{a^T Q^{-1} a}.$$
11.6 Use Theorem 11.2 to prove that any real N by N symmetric matrix
has N mutually orthonormal eigenvectors.

Let Q be symmetric. First, we minimize $f(x) = \frac{1}{2}x^T Q x$ subject to
$g(x) = 1 - x^T x = 0$. The feasible set is closed and bounded, and f(x)
is continuous, so there must be a minimum. From the KKT Theorem we
have
$$Qx - \lambda x = 0,$$
so that $Qx = \lambda x$ and x is an eigenvector of Q; call $x = x^N$, and $\lambda = \lambda_N$.
Note that the optimal value of f(x) is $\frac{\lambda_N}{2}$.
Next, minimize f(x), subject to $g_1(x) = 1 - x^T x = 0$ and $x^T x^N = 0$.
The Lagrangian is
$$L(x, \lambda) = \frac{1}{2} x^T Q x + \lambda_1(1 - x^T x) + \lambda_2 x^T x^N.$$
Setting the x-gradient of $L(x, \lambda)$ to zero, we get
$$0 = Qx - \lambda_1 x + \lambda_2 x^N.$$
Therefore,
$$0 = x^T Q x - \lambda_1 x^T x + \lambda_2 x^T x^N = x^T Q x - \lambda_1,$$
or
$$\lambda_1 = x^T Q x.$$
We also have
$$0 = (x^N)^T Q x - \lambda_1 (x^N)^T x + \lambda_2 (x^N)^T x^N,$$
and
$$(x^N)^T Q = \lambda_N (x^N)^T,$$
so that
$$0 = \lambda_N (x^N)^T x - \lambda_1 (x^N)^T x + \lambda_2 = \lambda_2,$$
so $\lambda_2 = 0$. Therefore, we have
$$Qx = \lambda_1 x,$$
so x is another eigenvector of Q, with associated eigenvalue $\lambda_1$; we write
$x = x^{N-1}$, and $\lambda_1 = \lambda_{N-1}$. Note that the optimal value of f(x) is now
$\frac{\lambda_{N-1}}{2}$. Since the second optimization problem has more constraints than
the first one, we must conclude that $\lambda_{N-1} \ge \lambda_N$.
We continue in this manner, each time including one more orthogonality
constraint, which thereby increases the optimal value. When we have
performed N optimizations, and are about to perform one more, we find
that the constraint conditions now require x to be orthogonal to each of
the previously calculated $x^n$. But these vectors are linearly independent in
$R^N$, and so the only vector orthogonal to all of them is zero, and we are
finished.

Chapter 12

Iterative Optimization

12.1 Referenced Results

Lemma 12.1 Suppose that $x^{k+1}$ is chosen using the optimal value of $\gamma_k$,
as described by Equation (12.1),
$$g\big(x^k - \gamma_k \nabla g(x^k)\big) \le g\big(x^k - \gamma \nabla g(x^k)\big). \qquad (12.1)$$
Then
$$\langle \nabla g(x^{k+1}), \nabla g(x^k) \rangle = 0. \qquad (12.2)$$

12.2 Exercises

12.1 Prove Lemma 12.1.

Use the Chain Rule to calculate the derivative of the function $f(\gamma)$ given
by
$$f(\gamma) = g\big(x^k - \gamma \nabla g(x^k)\big),$$
and then set this derivative equal to zero.
12.2 Apply the Newton-Raphson method to obtain an iterative procedure
for finding $\sqrt{a}$, for any positive a. For which $x^0$ does the method converge?
There are two answers, of course; how does the choice of $x^0$ determine
which square root becomes the limit?

This is best done as a computer exercise.
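A minimal Python sketch of such a computer exercise, applying Newton-Raphson to $f(x) = x^2 - a$ (starting points chosen for illustration):

```python
def newton_sqrt(a, x0, iters=20):
    """Newton-Raphson for f(x) = x^2 - a: x <- x - f(x)/f'(x) = (x + a/x)/2."""
    x = x0
    for _ in range(iters):
        x = 0.5 * (x + a / x)
    return x

# Positive starting points converge to +sqrt(a), negative ones to -sqrt(a);
# x0 = 0 is excluded, since f'(0) = 0.
print(newton_sqrt(2.0, 1.0))    # ~ 1.41421356
print(newton_sqrt(2.0, -1.0))   # ~ -1.41421356
```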
12.3 Apply the Newton-Raphson method to obtain an iterative procedure
for finding $a^{1/3}$, for any real a. For which $x^0$ does the method converge?


Another computer exercise.

12.4 Extend the Newton-Raphson method to complex variables. Redo the
previous exercises for the case of complex a. For the complex case, a has
two square roots and three cube roots. How does the choice of $x^0$ affect
the limit? Warning: The case of the cube root is not as simple as it may
appear, and has a close connection to fractals and chaos.

Consult Schroeder's book before trying this exercise.
12.5 (The Sherman-Morrison-Woodbury Identity) Let A be an invertible
matrix. Show that, if $\beta = 1 + v^T A^{-1} u \ne 0$, then $A + uv^T$ is
invertible and
$$(A + uv^T)^{-1} = A^{-1} - \frac{1}{\beta} A^{-1} u v^T A^{-1}. \qquad (12.3)$$

This is easy.
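A numerical check of the identity on a randomly generated instance (an illustrative sketch):

```python
import numpy as np

# Check (A + u v^T)^{-1} = A^{-1} - (1/beta) A^{-1} u v^T A^{-1} with beta = 1 + v^T A^{-1} u
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
u = rng.standard_normal((4, 1))
v = rng.standard_normal((4, 1))

Ainv = np.linalg.inv(A)
beta = 1.0 + (v.T @ Ainv @ u).item()
lhs = np.linalg.inv(A + u @ v.T)
rhs = Ainv - (Ainv @ u @ v.T @ Ainv) / beta
print(np.allclose(lhs, rhs))   # True
```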
12.6 Use the reduced Newton-Raphson method to minimize the function
$\frac{1}{2}x^T Q x$, subject to Ax = b, where
$$Q = \begin{bmatrix} 0 & -13 & -6 & -3 \\ -13 & 23 & -9 & 3 \\ -6 & -9 & -12 & 1 \\ -3 & 3 & 1 & -1 \end{bmatrix},$$
$$A = \begin{bmatrix} 2 & 1 & 2 & 1 \\ 1 & 1 & 3 & -1 \end{bmatrix},$$
and
$$b = \begin{bmatrix} 3 \\ 2 \end{bmatrix}.$$
Start with
$$x^0 = \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}.$$

We begin by finding a basis for the null space of A, which we do by
using Gauss elimination to solve Ax = 0. We find that $(1, -4, 1, 0)^T$ and
$(-2, 3, 0, 1)^T$ do the job, so the matrix Z is
$$Z = \begin{bmatrix} 1 & -2 \\ -4 & 3 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}.$$

The matrix $Z^T Q$ is then
$$Z^T Q = \begin{bmatrix} 46 & -114 & 18 & -14 \\ -42 & 98 & -14 & 14 \end{bmatrix},$$
and
$$Z^T Q x^0 = \begin{bmatrix} -68 \\ 56 \end{bmatrix}.$$
We have
$$Z^T Q Z = \begin{bmatrix} 520 & -448 \\ -448 & 392 \end{bmatrix},$$
so that
$$(Z^T Q Z)^{-1} = \frac{1}{3136}\begin{bmatrix} 392 & 448 \\ 448 & 520 \end{bmatrix}.$$
One step of the reduced Newton-Raphson algorithm, beginning with $v^0 = 0$,
gives
$$v^1 = -(Z^T Q Z)^{-1} Z^T Q x^0 = \begin{bmatrix} 0.5 \\ 0.4286 \end{bmatrix},$$
so that
$$x^1 = x^0 + Z v^1 = \begin{bmatrix} 0.6428 \\ 0.2858 \\ 0.5 \\ 0.4286 \end{bmatrix}.$$
When we check, we find that $Z^T Q x^1 = 0$, so we are finished.
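The arithmetic can be checked numerically; the sketch below uses the matrices exactly as written above (sign conventions as reconstructed here):

```python
import numpy as np

# Reduced Newton-Raphson step for min (1/2) x^T Q x subject to Ax = b
Q = np.array([[0., -13., -6., -3.],
              [-13., 23., -9., 3.],
              [-6., -9., -12., 1.],
              [-3., 3., 1., -1.]])
A = np.array([[2., 1., 2., 1.],
              [1., 1., 3., -1.]])
x0 = np.array([1., 1., 0., 0.])
Z = np.array([[1., -2.],
              [-4., 3.],
              [1., 0.],
              [0., 1.]])          # columns span the null space of A

v1 = -np.linalg.solve(Z.T @ Q @ Z, Z.T @ Q @ x0)
x1 = x0 + Z @ v1
print(x1)                         # ~ [0.643, 0.286, 0.5, 0.429]
print(Z.T @ Q @ x1)               # reduced gradient is (numerically) zero
print(A @ x1)                     # still equals b = (3, 2)
```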


12.7 Use the reduced steepest descent method with an exact line search to
solve the problem in the previous exercise.
Do as a computer problem.
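A sketch of such a computer solution (reusing Q, Z and $x^0$ from the sketch above; the step length is the standard exact line search for a quadratic, an assumption made here for illustration):

```python
import numpy as np

def reduced_steepest_descent(Q, Z, x0, iters=200):
    """Reduced steepest descent with exact line search for min (1/2) x^T Q x s.t. Ax = b.
    For the reduced quadratic phi(v) = (1/2)(x0 + Zv)^T Q (x0 + Zv), the exact step
    along the gradient g is alpha = (g.g) / (g.(Z^T Q Z)g)."""
    H = Z.T @ Q @ Z
    v = np.zeros(Z.shape[1])
    for _ in range(iters):
        g = Z.T @ Q @ (x0 + Z @ v)      # reduced gradient
        if np.linalg.norm(g) < 1e-12:
            break
        alpha = (g @ g) / (g @ H @ g)   # exact line-search step
        v = v - alpha * g
    return x0 + Z @ v

# Called with the data of Exercise 12.6, this converges to the same x^1 found there.
```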


Chapter 13

Modified-Gradient Algorithms

There are no exercises in this chapter.


Chapter 14

Quadratic Programming
There are no exercises in this chapter.


Chapter 15

Solving Systems of Linear Equations

15.1 Regularizing the ART

In our first method we use ART to solve the system of equations given in
matrix form by
$$[\, A^T \;\; \epsilon I \,]\begin{bmatrix} u \\ v \end{bmatrix} = 0. \qquad (15.1)$$
We begin with $u^0 = b$ and $v^0 = 0$. Then, the lower component of the limit
vector is $v^\infty = -\epsilon\hat{x}_\epsilon$.
The method of Eggermont et al. is similar. In their method we use
ART to solve the system of equations given in matrix form by
$$[\, A \;\; \epsilon I \,]\begin{bmatrix} x \\ v \end{bmatrix} = b. \qquad (15.2)$$
We begin at $x^0 = 0$ and $v^0 = 0$. Then, the limit vector has for its upper
component $x^\infty = \hat{x}_\epsilon$, and $\epsilon v^\infty = b - A\hat{x}_\epsilon$.

15.2 Exercises

15.1 Show that the two algorithms associated with Equations (15.1) and
(15.2), respectively, do actually perform as claimed.

We begin by recalling that, in the under-determined case, the minimum
two-norm solution of a system of equations Ax = b has the form $x = A^T z$,
for some z, so that the minimum two-norm solution is $x = A^T(AA^T)^{-1}b$,
provided that the matrix $AA^T$ is invertible, which it usually is in the
under-determined case.
The solution of Ax = b for which $\|x - p\|$ is minimized can be found in
a similar way. We let z = x - p, so that x = z + p and Az = b - Ap. Now
we find the minimum two-norm solution of the system Az = b - Ap; our
final solution is x = z + p.
The regularized solution $\hat{x}_\epsilon$ that we seek minimizes the function
$$f(x) = \frac{1}{2}\big[\|Ax - b\|^2 + \epsilon^2\|x\|^2\big],$$
and therefore can be written explicitly as
$$\hat{x}_\epsilon = (A^T A + \epsilon^2 I)^{-1} A^T b.$$
For large systems, it is too expensive and time-consuming to calculate $\hat{x}_\epsilon$
this way; therefore, we seek iterative methods, in particular, ones that do
not require the calculation of the matrix $A^T A$.
The first of the two methods offered in the chapter has us solve the
system
$$[\, A^T \;\; \epsilon I \,]\begin{bmatrix} u \\ v \end{bmatrix} = 0. \qquad (15.3)$$
When we use the ART algorithm, we find the solution closest to where we
began the iteration. We begin at
$$\begin{bmatrix} u^0 \\ v^0 \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix},$$
so we are finding the solution of Equation (15.1) for which
$$\|u - b\|^2 + \|v\|^2$$
is minimized. From our previous discussion, we see that we need to find
the solution of
$$[\, A^T \;\; \epsilon I \,]\begin{bmatrix} s \\ t \end{bmatrix} = -A^T b$$
for which
$$\|s\|^2 + \|t\|^2$$
is minimized. This minimum two-norm solution must have the form
$$\begin{bmatrix} s \\ t \end{bmatrix} = \begin{bmatrix} A \\ \epsilon I \end{bmatrix} z = \begin{bmatrix} Az \\ \epsilon z \end{bmatrix}.$$
This tells us that
$$(A^T A + \epsilon^2 I)z = -A^T b,$$
or that $z = -\hat{x}_\epsilon$. Since the lower part of the minimum two-norm solution
is $t = \epsilon z$, the assertion concerning this algorithm is established.
The second method has us solve the system
$$[\, A \;\; \epsilon I \,]\begin{bmatrix} x \\ v \end{bmatrix} = b, \qquad (15.4)$$
using the ART algorithm and beginning at
$$\begin{bmatrix} 0 \\ 0 \end{bmatrix},$$
so we are seeking a minimum two-norm solution of the system in Equation
(15.2). We know that this minimum two-norm solution must have the form
$$\begin{bmatrix} x \\ v \end{bmatrix} = \begin{bmatrix} A^T \\ \epsilon I \end{bmatrix} z = \begin{bmatrix} A^T z \\ \epsilon z \end{bmatrix}.$$
Therefore,
$$(AA^T + \epsilon^2 I)z = b,$$
and
$$A^T(AA^T + \epsilon^2 I)z = (A^T A + \epsilon^2 I)A^T z = (A^T A + \epsilon^2 I)x = A^T b.$$
Consequently, $x = \hat{x}_\epsilon$, and this x is the upper part of the limit vector of
the ART iteration.
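The claim about the second method can be illustrated numerically; in the sketch below (a randomly generated under-determined system, with a minimum-norm least-squares solver standing in for the ART limit) the upper part of the minimum two-norm solution of the augmented system equals the regularized solution:

```python
import numpy as np

rng = np.random.default_rng(2)
I, J, eps = 3, 6, 0.5                       # under-determined system, epsilon > 0
A = rng.standard_normal((I, J))
b = rng.standard_normal(I)

# Regularized solution x_eps = (A^T A + eps^2 I)^{-1} A^T b
x_eps = np.linalg.solve(A.T @ A + eps**2 * np.eye(J), A.T @ b)

# Minimum two-norm solution of [A  eps*I][x; v] = b (lstsq returns it directly)
M = np.hstack([A, eps * np.eye(I)])
w, *_ = np.linalg.lstsq(M, b, rcond=None)
print(np.allclose(w[:J], x_eps))            # True: the upper part equals x_eps
```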


Chapter 16

Conjugate-Direction Methods

16.1 Proofs of Lemmas

Lemma 16.1 When $x^k$ is constructed using the optimal $\alpha$, we have
$$\nabla f(x^k) \cdot d^k = 0. \qquad (16.1)$$
Proof: Differentiate the function $f(x^{k-1} + \alpha d^k)$ with respect to the variable
$\alpha$, and then set it to zero. According to the Chain Rule, we have
$$\nabla f(x^k) \cdot d^k = 0.$$

Lemma 16.2 The optimal $\alpha_k$ is
$$\alpha_k = \frac{r^k \cdot d^k}{d^k \cdot Q d^k}, \qquad (16.2)$$
where $r^k = c - Qx^{k-1}$.
Proof: We have
$$\nabla f(x^k) = Qx^k - c = Q(x^{k-1} + \alpha_k d^k) - c = Qx^{k-1} - c + \alpha_k Q d^k,$$
so that
$$0 = \nabla f(x^k) \cdot d^k = -r^k \cdot d^k + \alpha_k\, d^k \cdot Q d^k.$$


Lemma 16.3 Let $\|x\|_Q^2 = x \cdot Qx$ denote the square of the Q-norm of x.
Then
$$\|\hat{x} - x^{k-1}\|_Q^2 - \|\hat{x} - x^k\|_Q^2 = (r^k \cdot d^k)^2 / d^k \cdot Q d^k \ge 0$$
for any direction vectors $d^k$.
Proof: We use $c = Q\hat{x}$. Then we have
$$(\hat{x} - x^k) \cdot Q(\hat{x} - x^k) = (\hat{x} - x^{k-1}) \cdot Q(\hat{x} - x^{k-1}) - 2\alpha_k\, d^k \cdot Q(\hat{x} - x^{k-1}) + \alpha_k^2\, d^k \cdot Q d^k,$$
so that
$$\|\hat{x} - x^{k-1}\|_Q^2 - \|\hat{x} - x^k\|_Q^2 = 2\alpha_k\, d^k \cdot (c - Qx^{k-1}) - \alpha_k^2\, d^k \cdot Q d^k.$$
Now use $r^k = c - Qx^{k-1}$ and the value of $\alpha_k$.
Lemma 16.4 A conjugate set that does not contain zero is linearly independent.
If $p^n \ne 0$ for n = 1, ..., J, then the least-squares vector $\hat{x}$ can be
written as
$$\hat{x} = a_1 p^1 + ... + a_J p^J,$$
with $a_j = c \cdot p^j / p^j \cdot Q p^j$ for each j.
Proof: Suppose that we have
$$0 = c_1 p^1 + ... + c_n p^n,$$
for some constants $c_1, ..., c_n$. Then, for each m = 1, ..., n we have
$$0 = c_1\, p^m \cdot Q p^1 + ... + c_n\, p^m \cdot Q p^n = c_m\, p^m \cdot Q p^m,$$
from which it follows that $c_m = 0$ or $p^m = 0$.
Now suppose that the set $\{p^1, ..., p^J\}$ is a conjugate basis for $R^J$. Then
we can write
$$\hat{x} = a_1 p^1 + ... + a_J p^J,$$
for some $a_j$. Then for each m we have
$$p^m \cdot c = p^m \cdot Q\hat{x} = a_m\, p^m \cdot Q p^m,$$
so that
$$a_m = \frac{p^m \cdot c}{p^m \cdot Q p^m}.$$

Lemma 16.5 Whenever $p^{n+1} = 0$, we also have $r^{n+1} = 0$, in which case
we have $c = Qx^n$, so that $x^n$ is the least-squares solution.


Proof: If $p^{n+1} = 0$, then $r^{n+1}$ is a multiple of $p^n$. But $r^{n+1}$ is orthogonal
to $p^n$, so $r^{n+1} = 0$.

Theorem 16.1 For n = 1, 2, ..., J and j = 1, ..., n - 1 we have
a) $r^n \cdot r^j = 0$;
b) $r^n \cdot p^j = 0$; and
c) $p^n \cdot Q p^j = 0$.

16.2 Exercises

16.1 There are several lemmas in this chapter whose proofs are only
sketched. Complete the proofs of these lemmas.

The proof of Theorem 16.1 uses induction on the number n. Throughout
the following exercises assume that the statements in the theorem hold for
some n < J. We prove that they hold also for n + 1.

16.2 Use the fact that
$$r^{j+1} = r^j - \alpha_j Q p^j,$$
to show that $Qp^j$ is in the span of the vectors $r^j$ and $r^{j+1}$.

We have
$$r^{j+1} = c - Qx^j = c - Qx^{j-1} - \alpha_j Qp^j = r^j - \alpha_j Qp^j,$$
so that
$$r^j - r^{j+1} = \alpha_j Qp^j.$$

16.3 Show that $r^{n+1} \cdot r^n = 0$. Hints: establish that
$$\alpha_n = \frac{r^n \cdot r^n}{p^n \cdot Qp^n},$$
by showing that
$$r^n \cdot p^n = r^n \cdot r^n,$$
and then use the induction hypothesis to show that
$$r^n \cdot Qp^n = p^n \cdot Qp^n.$$



We know that
$$r^n = p^n + \beta_{n-1} p^{n-1},$$
where
$$\beta_{n-1} = \frac{r^n \cdot Qp^{n-1}}{p^{n-1} \cdot Qp^{n-1}}.$$
Since $r^n \cdot p^{n-1} = 0$, it follows that
$$r^n \cdot p^n = r^n \cdot r^n.$$
Therefore, we have
$$\alpha_n = \frac{r^n \cdot r^n}{p^n \cdot Qp^n}.$$
From the induction hypothesis, we have
$$(r^n - p^n) \cdot Qp^n = \beta_{n-1}\, p^{n-1} \cdot Qp^n = 0,$$
so that
$$r^n \cdot Qp^n = p^n \cdot Qp^n.$$
Using
$$r^{n+1} = r^n - \alpha_n Qp^n,$$
we find that
$$r^{n+1} \cdot r^n = r^n \cdot r^n - \alpha_n\, r^n \cdot Qp^n = 0.$$
16.4 Show that $r^{n+1} \cdot r^j = 0$, for j = 1, ..., n - 1. Hints: 1. show that $r^{n+1}$
is in the span of the vectors $\{r^{j+1}, Qp^{j+1}, Qp^{j+2}, ..., Qp^n\}$; and 2. show that
$r^j$ is in the span of $\{p^j, p^{j-1}\}$. Then use the induction hypothesis.

We begin with
$$r^{n+1} = r^n - \alpha_n Qp^n = r^{n-1} - \alpha_{n-1} Qp^{n-1} - \alpha_n Qp^n.$$
We then use
$$r^{n-1} = r^{n-2} - \alpha_{n-2} Qp^{n-2},$$
and so on, to get that $r^{n+1}$ is in the span of the vectors $\{r^{j+1}, Qp^{j+1}, Qp^{j+2}, ..., Qp^n\}$.
We then use
$$r^j \cdot r^{j+1} = 0,$$
and
$$r^j = p^j + \beta_{j-1} p^{j-1},$$
along with the induction hypothesis, to get
$$r^j \cdot Qp^m = 0,$$
for m = j + 1, ..., n.


16.5 Show that $r^{n+1} \cdot p^j = 0$, for j = 1, ..., n. Hint: show that $p^j$ is
in the span of the vectors $\{r^j, r^{j-1}, ..., r^1\}$, and then use the previous two
exercises.

Write
$$p^j = r^j - \beta_{j-1} p^{j-1}$$
and repeat this for $p^{j-1}$, $p^{j-2}$, and so on, and use $p^1 = r^1$ to show that $p^j$
is in the span of $\{r^j, r^{j-1}, ..., r^1\}$. Then use the previous exercises.

16.6 Show that $p^{n+1} \cdot Qp^j = 0$, for j = 1, ..., n - 1. Hint: use
$$Qp^j = \alpha_j^{-1}(r^j - r^{j+1}).$$

We have
$$p^{n+1} \cdot Qp^j = r^{n+1} \cdot Qp^j = r^{n+1} \cdot \big(\alpha_j^{-1}(r^j - r^{j+1})\big) = 0.$$
The final step in the proof is contained in the following exercise.

16.7 Show that $p^{n+1} \cdot Qp^n = 0$. Hints: write
$$p^{n+1} = r^{n+1} - \beta_n p^n,$$
where
$$\beta_n = \frac{r^{n+1} \cdot Qp^n}{p^n \cdot Qp^n}.$$

We have
$$p^{n+1} \cdot Qp^n = r^{n+1} \cdot Qp^n - \beta_n\, p^n \cdot Qp^n = 0,$$
from the definition of $\beta_n$.
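The orthogonality and conjugacy relations established in these exercises are what drive the conjugate-gradient iteration; the sketch below (an illustration using this chapter's update rules, with a small example matrix chosen arbitrarily) puts them together for minimizing $f(x) = \frac{1}{2}x \cdot Qx - c \cdot x$:

```python
import numpy as np

def conjugate_gradient(Q, c, x0=None, tol=1e-10, max_iter=None):
    """Minimize f(x) = (1/2) x.Qx - c.x for symmetric positive-definite Q."""
    J = len(c)
    x = np.zeros(J) if x0 is None else x0.astype(float)
    r = c - Q @ x                      # residual r^1 = c - Q x^0
    p = r.copy()                       # first direction p^1 = r^1
    for _ in range(max_iter or J):
        Qp = Q @ p
        alpha = (r @ p) / (p @ Qp)     # optimal step (Lemma 16.2)
        x = x + alpha * p
        r_new = r - alpha * Qp
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ Qp) / (p @ Qp)
        p = r_new - beta * p           # next direction, Q-conjugate to p
        r = r_new
    return x

# Example: Q = [[4,1],[1,3]], c = [1,2]; the answer agrees with Q^{-1} c
Q = np.array([[4., 1.], [1., 3.]])
c = np.array([1., 2.])
print(conjugate_gradient(Q, c), np.linalg.solve(Q, c))
```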


Chapter 17

Sequential Unconstrained Minimization Algorithms

Lemma 17.1 For any non-negative vectors x and z, with $z_+ = \sum_{j=1}^{J} z_j > 0$,
we have
$$KL(x, z) = KL(x_+, z_+) + KL\Big(x, \frac{x_+}{z_+}\, z\Big). \qquad (17.1)$$

17.1 Exercises

17.1 Prove Lemma 17.1.

This is easy.

17.2 Use the logarithmic barrier method to minimize the function
$$f(x, y) = x - 2y,$$
subject to the constraints
$$1 + x - y^2 \ge 0,$$
and
$$y \ge 0.$$

For k > 0, we minimize
$$x - 2y - k\log(1 + x - y^2) - k\log(y).$$
Setting the gradient to zero, we get
$$k = 1 + x - y^2,$$
and
$$2 = k\Big(\frac{2y}{1 + x - y^2} - \frac{1}{y}\Big).$$
Therefore,
$$2y^2 - 2y - k = 0,$$
so that
$$y = \frac{1}{2} + \frac{1}{2}\sqrt{1 + 2k}.$$
As $k \to 0$, we have $y \to 1$ and $x \to 0$, so the optimal solution is x = 0 and
y = 1, which can be checked using the KKT Theorem.
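The barrier path can also be followed numerically; the sketch below (using scipy's general-purpose minimizer as a stand-in for an exact inner minimization, with a starting point chosen arbitrarily) approaches the same point (0, 1):

```python
import numpy as np
from scipy.optimize import minimize

# Log-barrier path for Exercise 17.2: minimize x - 2y - k*log(1+x-y^2) - k*log(y)
def barrier_obj(v, k):
    x, y = v
    if 1 + x - y**2 <= 0 or y <= 0:
        return np.inf                    # outside the interior of the feasible set
    return x - 2*y - k*np.log(1 + x - y**2) - k*np.log(y)

v = np.array([0.5, 0.5])                 # strictly feasible starting point
for k in [1.0, 0.1, 0.01, 0.001]:
    v = minimize(barrier_obj, v, args=(k,), method='Nelder-Mead').x
print(v)   # approaches (0, 1), the constrained minimizer found above
```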

17.3 Use the quadratic-loss penalty method to minimize the function
$$f(x, y) = -xy,$$
subject to the equality constraint
$$x + 2y - 4 = 0.$$

For each k > 0 we minimize the function
$$-xy + k(x + 2y - 4)^2.$$
Setting the gradient equal to zero, we obtain
$$x = 4k(x + 2y - 4),$$
$$y = 2k(x + 2y - 4),$$
and so x = 2y and
$$y = \frac{2x - 8}{\frac{1}{k} - 4}.$$
Solving for x, we get
$$x = \frac{16}{8 - \frac{1}{k}},$$
and
$$y = \frac{8}{8 - \frac{1}{k}}.$$
As $k \to +\infty$, x approaches 2 and y approaches 1. We can check this result
by substituting x = 4 - 2y into f(x, y) and minimizing it as a function of
y alone.

Chapter 18

Likelihood Maximization
There are no exercises in this chapter.


Chapter 19

Operators

19.1 Referenced Results

Suppose that B is a diagonalizable matrix, that is, there is a basis for $R^J$
consisting of eigenvectors of B. Let $\{u^1, ..., u^J\}$ be such a basis, and let
$Bu^j = \lambda_j u^j$, for each j = 1, ..., J. For each x in $R^J$, there are unique
coefficients $a_j$ so that
$$x = \sum_{j=1}^{J} a_j u^j. \qquad (19.1)$$
Then let
$$\|x\| = \sum_{j=1}^{J} |a_j|. \qquad (19.2)$$

Lemma 19.1 The expression $\|\cdot\|$ in Equation (19.2) defines a norm on
$R^J$. If $\rho(B) < 1$, then the affine operator T is sc, with respect to this norm.

Lemma 19.2 Let T be an arbitrary operator T on $R^J$ and G = I - T.
Then
$$\|x - y\|_2^2 - \|Tx - Ty\|_2^2 = 2\langle Gx - Gy, x - y \rangle - \|Gx - Gy\|_2^2. \qquad (19.3)$$

Lemma 19.3 Let T be an arbitrary operator T on $R^J$ and G = I - T.
Then
$$\langle Tx - Ty, x - y \rangle - \|Tx - Ty\|_2^2 = \langle Gx - Gy, x - y \rangle - \|Gx - Gy\|_2^2. \qquad (19.4)$$

Lemma 19.4 An operator T is ne if and only if its complement G = I - T
is $\frac{1}{2}$-ism, and T is fne if and only if G is 1-ism, and if and only if G is
fne. Also, T is ne if and only if F = (I + T)/2 is fne. If G is $\nu$-ism and
$\gamma > 0$ then the operator $\gamma G$ is $\frac{\nu}{\gamma}$-ism.

Proposition 19.1 An operator F is firmly non-expansive if and only if
$F = \frac{1}{2}(I + N)$, for some non-expansive operator N.

Proposition 19.2 Let T be an affine linear operator whose linear part B
is diagonalizable, and $|\lambda| < 1$ for all eigenvalues $\lambda$ of B that are not equal to
one. Then the operator T is pc, with respect to the norm given by Equation
(19.2).

19.2 Exercises

19.1 Show that a strict contraction can have at most one fixed point.

Suppose that T is a strict contraction, with
$$\|Tx - Ty\| \le r\|x - y\|,$$
for some $r \in (0, 1)$ and for all x and y. If Tx = x and Ty = y, then
$$\|x - y\| = \|Tx - Ty\| \le r\|x - y\|,$$
which implies that $\|x - y\| = 0$.
19.2 Let T be sc. Show that the sequence $\{T^k x^0\}$ is a Cauchy sequence.

From
$$\|x^k - x^{k+n}\| \le \|x^k - x^{k+1}\| + ... + \|x^{k+n-1} - x^{k+n}\| \qquad (19.5)$$
and
$$\|x^{k+m} - x^{k+m+1}\| \le r^m\|x^k - x^{k+1}\| \qquad (19.6)$$
we have
$$\|x^k - x^{k+n}\| \le (1 + r + r^2 + ... + r^{n-1})\|x^k - x^{k+1}\|.$$
From
$$\|x^k - x^{k+1}\| \le r^k\|x^0 - x^1\|$$
we conclude that, given any $\epsilon > 0$, we can find k > 0 so that, for any n > 0,
$$\|x^k - x^{k+n}\| \le \epsilon,$$
so that the sequence $\{T^k x^0\}$ is a Cauchy sequence.
Since $\{x^k\}$ is a Cauchy sequence, it has a limit, say $\hat{x}$. Let $e^k = \hat{x} - x^k$.
From
$$e^k = T^k\hat{x} - T^k x^0$$
we have
$$\|e^k\| \le r^k\|\hat{x} - x^0\|,$$
so that $\{e^k\}$ converges to zero, and $\{x^k\}$ converges to $\hat{x}$. Since the sequence
$\{x^{k+1}\}$ is just the sequence $\{x^k\}$, but starting at $x^1$ instead of $x^0$, the
sequence $\{x^{k+1}\}$ also converges to $\hat{x}$. But we have $\{x^{k+1}\} = \{Tx^k\}$, which
converges to $T\hat{x}$, by the continuity of T. Therefore, $T\hat{x} = \hat{x}$.
19.3 Suppose that we want to solve the equation
$$x = \frac{1}{2} e^{-x}.$$
Let $Tx = \frac{1}{2}e^{-x}$ for x in R. Show that T is a strict contraction, when
restricted to non-negative values of x, so that, provided we begin with $x^0 > 0$,
the sequence $\{x^k = Tx^{k-1}\}$ converges to the unique solution of the equation.

Let $0 < x < z$. From the Mean Value Theorem we know that, for t > 0,
$$e^{-t} = e^0 - e^{-s} t,$$
for some s in the interval (0, t). Therefore,
$$1 - e^{-(z-x)} = e^{-c}(z - x),$$
for some c in the interval (0, z - x). Then
$$|Tx - Tz| = \frac{1}{2}e^{-x}\big(1 - e^{-(z-x)}\big) = \frac{1}{2}e^{-x}e^{-c}(z - x) \le \frac{1}{2}(z - x).$$
Therefore, T is a strict contraction, with $r = \frac{1}{2}$.
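A few lines of Python illustrate the iteration (the starting point is chosen arbitrarily):

```python
import math

# Fixed-point iteration x_{k+1} = T(x_k) = (1/2) * exp(-x_k), starting from x0 > 0
x = 1.0
for _ in range(50):
    x = 0.5 * math.exp(-x)
print(x, 0.5 * math.exp(-x))   # both ~ 0.3517, the unique fixed point
```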


19.4 Prove Lemma 19.1.

Showing that it is a norm is easy.
Since $Tx - Ty = Bx - By$, we know that $\|Tx - Ty\| = \|B(x - y)\|$.
Suppose that
$$x - y = \sum_{j=1}^{J} c_j u^j.$$
Then
$$B(x - y) = \sum_{j=1}^{J} \lambda_j c_j u^j,$$
and
$$\|B(x - y)\| = \sum_{j=1}^{J} |\lambda_j||c_j| \le \rho(B)\sum_{j=1}^{J} |c_j| = \rho(B)\|x - y\|.$$
Therefore, T is a strict contraction in this norm.


19.5 Show that, if the operator T is $\alpha$-av and $1 > \beta > \alpha$, then T is $\beta$-av.

Clearly, if
$$\langle Gx - Gy, x - y \rangle \ge \frac{1}{2\alpha}\|Gx - Gy\|_2^2, \qquad (19.7)$$
and $1 > \beta > \alpha$, then
$$\langle Gx - Gy, x - y \rangle \ge \frac{1}{2\beta}\|Gx - Gy\|_2^2.$$

19.6 Prove Lemma 19.4.

Clearly, the left side of the equation
$$\|x - y\|_2^2 - \|Tx - Ty\|_2^2 = 2\langle Gx - Gy, x - y \rangle - \|Gx - Gy\|_2^2$$
is non-negative if and only if the right side is non-negative, which is
equivalent to
$$\langle Gx - Gy, x - y \rangle \ge \frac{1}{2}\|Gx - Gy\|_2^2,$$
which means that G is $\frac{1}{2}$-ism. Similarly, the left side of the equation
$$\langle Tx - Ty, x - y \rangle - \|Tx - Ty\|_2^2 = \langle Gx - Gy, x - y \rangle - \|Gx - Gy\|_2^2$$
is non-negative if and only if the right side is, which says that T is fne if
and only if G is fne if and only if G is 1-ism.
19.7 Prove Proposition 19.1.

Note that $F = \frac{1}{2}(I + N)$ is equivalent to $F = I - \frac{1}{2}G$, for G = I - N.
Therefore, if N is ne, then G is $\frac{1}{2}$-ism and $\frac{1}{2}G$ is 1-ism, so that its
complement F is fne. Thus N is ne if and only if F is the complement of
a 1-ism operator, that is, if and only if F is fne.
19.8 Prove Proposition 19.2.

Suppose that Ty = y. We need to show that
$$\|Tx - y\| < \|x - y\|,$$
unless Tx = x. Since Ty = y, we can write
$$\|Tx - y\| = \|Tx - Ty\| = \|B(x - y)\|.$$
Suppose that
$$x - y = \sum_{j=1}^{J} a_j u^j,$$
and suppose that $Tx \ne x$. Then
$$\|x - y\| = \sum_{j=1}^{J} |a_j|,$$
while
$$\|B(x - y)\| = \sum_{j=1}^{J} |\lambda_j||a_j|.$$
If $\lambda_j = 1$ for all j such that $a_j \ne 0$, then B(x - y) = x - y, which is not
the case, since $Tx - Ty \ne x - y$. Therefore, at least one $|\lambda_j|$ is less than
one, and so
$$\|Tx - y\| < \|x - y\|.$$
19.9 Show that, if B is a linear av operator, then $|\lambda| < 1$ for all eigenvalues
$\lambda$ of B that are not equal to one.

We know that B is av if and only if there is $\alpha \in (0, 1)$ such that
$$B = (1 - \alpha)I + \alpha N.$$
From $Bu = \lambda u$ it follows that
$$Nu = \frac{\lambda + \alpha - 1}{\alpha} u.$$
Since N is ne, we must have
$$\Big|\frac{\lambda + \alpha - 1}{\alpha}\Big| \le 1,$$
or
$$|\lambda + \alpha - 1| \le \alpha.$$
Since B is ne, all its eigenvalues $\lambda$ must have $|\lambda| \le 1$. We consider the
cases of $\lambda$ real and $\lambda$ complex and not real separately.
If $\lambda$ is real, then we need only show that we cannot have $\lambda = -1$. But
if this were the case, we would have
$$|\lambda + \alpha - 1| = |\alpha - 2| = 2 - \alpha \le \alpha,$$
or $\alpha \ge 1$, which is false.
If $\lambda = a + bi$, with $a^2 + b^2 = 1$ and $b \ne 0$, then we have
$$|(a + \alpha - 1) + bi|^2 \le \alpha^2,$$
or
$$(a + (\alpha - 1))^2 + b^2 \le \alpha^2.$$
Then
$$a^2 + b^2 + 2a(\alpha - 1) + (\alpha - 1)^2 \le \alpha^2.$$
Suppose that $a^2 + b^2 = 1$ and $b \ne 0$. Then we have
$$1 \le \alpha^2 + 2a(1 - \alpha) - (1 - \alpha)^2 = 2a(1 - \alpha) - 1 + 2\alpha,$$
or
$$2 \le 2(1 - \alpha)a + 2\alpha,$$
so that
$$1 \le (1 - \alpha)a + \alpha(1),$$
which is a convex combination of the real numbers a and 1. Since $\alpha \in (0, 1)$,
we can say that, unless a = 1, 1 is strictly less than the maximum of 1
and a, implying that a > 1, which is false. Therefore, we conclude that
$a^2 + b^2 < 1$.

Chapter 20

Convex Feasibility and Related Problems

20.1 A Lemma

For i = 1, ..., I let $C_i$ be a non-empty, closed convex set in $R^J$. Let
$C = \cap_{i=1}^{I} C_i$ be the non-empty intersection of the $C_i$.

Lemma 20.1 If $c \in C$ and $x = c + \sum_{i=1}^{I} p^i$, where, for each i, $c =
P_{C_i}(c + p^i)$, then $c = P_C x$.

20.2 Exercises

20.1 Prove Lemma 20.1.

For each i we have
$$\langle c - (c + p^i), c_i - c \rangle = \langle -p^i, c_i - c \rangle \ge 0,$$
for all $c_i$ in $C_i$. Then, for any d in $C = \cap_{i=1}^{I} C_i$, we have
$$\langle p^i, d - c \rangle \le 0.$$
Therefore,
$$\langle c - x, d - c \rangle = \Big\langle -\sum_{i=1}^{I} p^i, d - c \Big\rangle = -\sum_{i=1}^{I} \langle p^i, d - c \rangle \ge 0,$$
from which we conclude that $c = P_C x$.
