
Ch02. Projection

Ping Yu
HKU Business School
The University of Hong Kong


1 Hilbert Space and Projection Theorem
2 Projection in the $L_2$ Space
3 Projection in $\mathbb{R}^n$
  - Projection Matrices
4 Partitioned Fit and Residual Regression
  - Projection along a Subspace
Overview

- Whenever we discuss projection, there must be an underlying Hilbert space, since we must define "orthogonality". [Figure here]
- We explain projection in two Hilbert spaces ($L_2$ and $\mathbb{R}^n$) and integrate many estimators into one framework.
  - Projection in the $L_2$ space: linear projection and regression (linear regression is a special case).
  - Projection in $\mathbb{R}^n$: ordinary least squares (OLS) and generalized least squares (GLS).
- One main topic of this course is the (ordinary) least squares estimator (LSE). Although the LSE has many interpretations, e.g., as an MLE or an MoM estimator, the most intuitive interpretation is that it is a projection estimator.


History of the Hilbert Space

David Hilbert (1862-1943), Göttingen



Hilbert Space and Projection Theorem


Hilbert Space

Definition (Hilbert Space)
A complete inner product space is called a Hilbert space.[a] An inner product is a bilinear operator $\langle \cdot,\cdot\rangle : H \times H \to \mathbb{R}$, where $H$ is a real vector space,[b] satisfying, for any $x, y, z \in H$ and $\alpha \in \mathbb{R}$,
(i) $\langle x+y, z\rangle = \langle x, z\rangle + \langle y, z\rangle$;
(ii) $\langle \alpha x, z\rangle = \alpha \langle x, z\rangle$;
(iii) $\langle x, z\rangle = \langle z, x\rangle$;
(iv) $\langle x, x\rangle \ge 0$, with equality if and only if $x = 0$.
We denote this Hilbert space as $(H, \langle \cdot,\cdot\rangle)$.

[a] A metric space $(H, d)$ is complete if every Cauchy sequence in $H$ converges in $H$, where $d$ is a metric on $H$. A sequence $\{x_n\}$ in a metric space is called a Cauchy sequence if for any $\varepsilon > 0$ there is a positive integer $N$ such that $d(x_m, x_n) < \varepsilon$ for all natural numbers $m, n > N$.
[b] A vector space (also called a linear space) is a collection of objects called vectors, which may be added together and multiplied ("scaled") by numbers.


Angle and Orthogonality

- An important inequality in an inner product space is the Cauchy–Schwarz inequality:
  $$|\langle x, y\rangle| \le \|x\|\,\|y\|,$$
  where $\|\cdot\| \equiv \sqrt{\langle \cdot,\cdot\rangle}$ is the norm induced by $\langle \cdot,\cdot\rangle$.
- Due to this inequality, we can define
  $$\mathrm{angle}(x, y) = \arccos \frac{\langle x, y\rangle}{\|x\|\,\|y\|}.$$
  We assume the value of the angle is chosen to be in the interval $[0, \pi]$. [Figure here]
- If $\langle x, y\rangle = 0$, then $\mathrm{angle}(x, y) = \frac{\pi}{2}$; we say $x$ is orthogonal to $y$ and denote it as $x \perp y$.


Figure: Angle in Two-dimensional Euclidean Space


Projection and Projector

- The ingredients of a projection are $\{y, M, (H, \langle \cdot,\cdot\rangle)\}$, where $M$ is a subspace of $H$.
- Note that the same $H$ endowed with different inner products gives different Hilbert spaces, so the Hilbert space is denoted as $(H, \langle \cdot,\cdot\rangle)$ rather than $H$.
- Our objective is to find some $\Pi(y) \in M$ such that
  $$\Pi(y) = \arg\min_{h \in M} \|y - h\|^2. \tag{1}$$
- $\Pi(\cdot): H \to M$ is called a projector, and $\Pi(y)$ is called a projection of $y$.
- To characterize $\Pi(y)$, we first introduce three concepts.


Direct Sum, Orthogonal Space and Orthogonal Projector

Definition
Let $M_1$ and $M_2$ be two disjoint subspaces of $H$, so that $M_1 \cap M_2 = \{0\}$. The space
$$V = \{h \in H \mid h = h_1 + h_2,\ h_1 \in M_1,\ h_2 \in M_2\}$$
is called the direct sum of $M_1$ and $M_2$, and it is denoted by $V = M_1 \oplus M_2$.

Definition
Let $M$ be a subspace of $H$. The space
$$M^{\perp} \equiv \{h \in H \mid \langle h, M\rangle = 0\}$$
is called the orthogonal space or orthogonal complement of $M$, where $\langle h, M\rangle = 0$ means $h$ is orthogonal to every element in $M$.

Definition
Suppose $H = M_1 \oplus M_2$. Let $h \in H$, so that $h = h_1 + h_2$ for unique $h_i \in M_i$, $i = 1, 2$. Then $P$ is a projector onto $M_1$ along $M_2$ if $Ph = h_1$ for all $h$. In other words, $PM_1 = M_1$ and $PM_2 = 0$. When $M_2 = M_1^{\perp}$, we call $P$ an orthogonal projector. [Figure here]


Figure: Projector and Orthogonal Projector

What is $M_2$?
[Back to Lemma 9]


Hilbert Projection Theorem

Theorem (Hilbert Projection Theorem)
If $M$ is a closed subspace of a Hilbert space $H$, then for each $y \in H$ there exists a unique point $x \in M$ for which $\|y - x\|$ is minimized over $M$. Moreover, $x$ is the closest element in $M$ to $y$ if and only if $\langle y - x, M\rangle = 0$.

- The first part of the theorem states the existence and uniqueness of the projection.
- The second part of the theorem is related to the first-order conditions (FOCs) of (1) or, simply, the orthogonality conditions. [Figure here]
- From the theorem, $\Pi(\cdot)$ is the orthogonal projector onto $M$, where "orthogonality" is defined by $\langle \cdot,\cdot\rangle$ and need not be the intuitive orthogonality in the Euclidean inner product.[1]
  - In other words, given any closed subspace $M$ of $H$, $H = M \oplus M^{\perp}$.
- Also, the closest element in $M$ to $y$ is determined by $M$ itself, not by the vectors generating $M$, since there may be some redundancy in these vectors.

[1] If we insist on using the Euclidean inner product, then $\Pi(\cdot)$ need not be an orthogonal projector but may be a projector along a subspace; see GLS below.
Figure: Projection


Sequential Projection

Theorem (Law of Iterated Projections, or LIP)
If $M_1$ and $M_2$ are closed subspaces of a Hilbert space $H$, and $M_1 \subset M_2$, then $\Pi_1(y) = \Pi_1(\Pi_2(y))$, where $\Pi_j(\cdot)$, $j = 1, 2$, is the orthogonal projector onto $M_j$.

Proof.
Write $y = \Pi_2(y) + \Pi_2^{\perp}(y)$. Then
$$\Pi_1(y) = \Pi_1\!\left(\Pi_2(y) + \Pi_2^{\perp}(y)\right) = \Pi_1(\Pi_2(y)) + \Pi_1(\Pi_2^{\perp}(y)) = \Pi_1(\Pi_2(y)),$$
where the last equality holds because $\langle \Pi_2^{\perp}(y), x\rangle = 0$ for any $x \in M_2$, and $M_1 \subset M_2$.

- We first project $y$ onto the larger space $M_2$, and then project the projection of $y$ (from the first step) onto the smaller space $M_1$.
- The theorem shows that such a sequential procedure is equivalent to projecting $y$ onto $M_1$ directly.
- We will see some applications of this theorem below.


Projection in the $L_2$ Space


Linear Projection

- A random variable $x \in L_2(P)$ if $E[x^2] < \infty$.
- $L_2(P)$ endowed with a suitable inner product is a Hilbert space.
- $y \in L_2(P)$; $x_1, \ldots, x_k \in L_2(P)$; $M = \mathrm{span}(x_1, \ldots, x_k) \equiv \mathrm{span}(x)$;[2] $H = L_2(P)$ with $\langle \cdot,\cdot\rangle$ defined as $\langle x, y\rangle = E[xy]$.
  $$\Pi(y) = \arg\min_{h \in M} E\left[(y - h)^2\right] = x'\arg\min_{\beta \in \mathbb{R}^k} E\left[(y - x'\beta)^2\right]. \tag{2}$$
- $\Pi(\cdot)$ is called the best linear predictor (BLP) onto $\mathrm{span}(x)$.
- $\Pi(y)$ is called the linear projection of $y$ onto $x$.

[2] $\mathrm{span}(x) = \{z \in L_2(P) \mid z = x'\alpha,\ \alpha \in \mathbb{R}^k\}$.
continue...

- Since this is a convex minimization problem, the FOCs are sufficient:[3]
  $$-2E\left[x\left(y - x'\beta_0\right)\right] = 0 \ \Rightarrow\ E[xu] = 0, \tag{3}$$
  where $u = y - \Pi(y)$ is the error, and $\beta_0 = \arg\min_{\beta \in \mathbb{R}^k} E\left[(y - x'\beta)^2\right]$.
- $\Pi(y)$ always exists and is unique, but $\beta_0$ need not be unique unless $x_1, \ldots, x_k$ are linearly independent, that is, unless there is no nonzero vector $a \in \mathbb{R}^k$ such that $a'x = 0$ almost surely (a.s.).
- Why? If $a'x \ne 0$ for all $a \ne 0$, then $E\left[(a'x)^2\right] > 0$ and $a'E[xx']a > 0$, thus $E[xx'] > 0$.[4] So from (3),
  $$\beta_0 = E\left[xx'\right]^{-1}E[xy] \quad \text{(why?)} \tag{4}$$
  and $\Pi(y) = x'\left(E[xx']\right)^{-1}E[xy]$.
- In the literature, $\beta$ with a subscript 0 usually represents the true value of $\beta$.

[3] $\frac{\partial}{\partial x}(a'x) = \frac{\partial}{\partial x}(x'a) = a$.
[4] For a matrix $A$, $A > 0$ means it is positive definite.
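A minimal numerical sketch (not part of the original slides), assuming Python with NumPy: it approximates the population projection coefficient $\beta_0 = E[xx']^{-1}E[xy]$ by its sample analogue on simulated data. The data-generating process is illustrative only.

```python
import numpy as np

# Sketch: approximate beta_0 = E[xx']^{-1} E[xy] by sample analogues on simulated data.
# The DGP below is purely illustrative; E[y|x] is deliberately nonlinear, yet the BLP exists.
rng = np.random.default_rng(0)
n = 100_000
x1 = rng.normal(size=n)
x = np.column_stack([np.ones(n), x1])                    # x = (1, x1)'
y = 1.0 + 2.0 * x1 + 0.5 * x1**2 + rng.normal(size=n)

Exx = x.T @ x / n                                        # sample analogue of E[xx']
Exy = x.T @ y / n                                        # sample analogue of E[xy]
beta0_hat = np.linalg.solve(Exx, Exy)                    # approximates E[xx']^{-1} E[xy]
print(beta0_hat)                                         # BLP coefficients of y on (1, x1)
```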
Regression

- The setup is the same as in linear projection except that $M = L_2(P, \sigma(x))$, where $L_2(P, \sigma(x))$ is the space spanned by any function of $x$ (not only linear functions of $x$) as long as it is in $L_2(P)$:
  $$\Pi(y) = \arg\min_{h \in M} E\left[(y - h)^2\right]. \tag{5}$$
- Note that
  $$\begin{aligned}
  E\left[(y - h)^2\right] &= E\left[(y - E[y|x] + E[y|x] - h)^2\right] \\
  &= E\left[(y - E[y|x])^2\right] + 2E\left[(y - E[y|x])(E[y|x] - h)\right] + E\left[(E[y|x] - h)^2\right] \\
  &= E\left[(y - E[y|x])^2\right] + E\left[(E[y|x] - h)^2\right] \ \ge\ E\left[(y - E[y|x])^2\right] \equiv E[u^2],
  \end{aligned}$$
  where the cross term vanishes by the law of iterated expectations, so $\Pi(y) = E[y|x]$, which is called the population regression function (PRF), where the error $u$ satisfies $E[u|x] = 0$ (why?).
- We can use a variational argument to characterize the FOCs:
  $$0 = \arg\min_{\varepsilon \in \mathbb{R}} E\left[\left(y - (\Pi(y) + \varepsilon h(x))\right)^2\right],\qquad
  -2\,E\left[h(x)\left(y - (\Pi(y) + \varepsilon h(x))\right)\right]\Big|_{\varepsilon = 0} = 0
  \ \Rightarrow\ E[h(x)u] = 0,\ \forall\, h(x) \in L_2(P, \sigma(x)). \tag{6}$$
[Figure: Conditional Mean of ln(WAGE) Given Education; the vertical axis shows the conditional density of ln(WAGE) at education levels 6, 9, 12, 16, 21.]

- $E[y|x]$ is a very nonlinear function, but over some range of education levels, e.g., $[6, 16]$, it can be approximated by a linear function quite well.


Relationship Between the Two Projections

- $x'(E[xx'])^{-1}E[xy]$ is the BLP of $E[y|x]$, i.e., the BLPs of $y$ and $E[y|x]$ are the same. [Figure here]
- This is a straightforward application of the law of iterated projections.
- Explicitly, define
  $$\beta_o = \arg\min_{\beta \in \mathbb{R}^k} E\left[\left(E[y|x] - x'\beta\right)^2\right] = \arg\min_{\beta \in \mathbb{R}^k} \int \left(E[y|x] - x'\beta\right)^2 dF(x).$$
- The FOCs for this minimization problem are
  $$E\left[-2x\left(E[y|x] - x'\beta_o\right)\right] = 0
  \ \Rightarrow\ E[xx']\beta_o = E\left[xE[y|x]\right] = E[xy]
  \ \Rightarrow\ \beta_o = (E[xx'])^{-1}E[xy] = \beta_0.$$
- In other words, $\beta_0$ is a (weighted) least squares approximation to the true model. [Figure here]
- If $E[y|x]$ is not linear in $x$, $\beta_o$ depends crucially on the weighting function $F(x)$, i.e., the distribution of $x$.
- The weighting function ensures that frequently drawn $x_i$ yield small approximation errors, at the cost of larger approximation errors for less frequently drawn $x_i$.
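A minimal simulation sketch (an addition, not from the original slides), assuming Python with NumPy and a toy DGP in which $E[y|x]$ is known in closed form: it illustrates that the BLP of $y$ and the BLP of $E[y|x]$ on the same regressors coincide, as the LIP implies.

```python
import numpy as np

# Sketch: the BLP coefficients of y and of E[y|x] on (1, x1) agree (up to sampling noise).
rng = np.random.default_rng(1)
n = 200_000
x1 = rng.uniform(0, 2, size=n)
m = np.exp(x1)                                   # E[y|x] = exp(x1), nonlinear in x1
y = m + rng.normal(size=n)                       # y = E[y|x] + u with E[u|x] = 0
X = np.column_stack([np.ones(n), x1])

blp_y = np.linalg.lstsq(X, y, rcond=None)[0]     # BLP coefficients of y on (1, x1)
blp_m = np.linalg.lstsq(X, m, rcond=None)[0]     # BLP coefficients of E[y|x] on (1, x1)
print(blp_y, blp_m)                              # the two should be close
```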
Figure: Linear Approximation of Conditional Expectation (I)

Figure: Linear Approximation of Conditional Expectation (II)


Linear Regression

- Linear regression is a special case of regression with $E[y|x] = x'\beta$.
- Regression and linear projection are implied by the definition of projection, but linear regression is a "model" where some structure (or restriction) is imposed.
- In the following figure, when we project $y$ onto the larger space $M_2 = L_2(P, \sigma(x))$, $\Pi(y)$ falls into the smaller space $M_1 = \mathrm{span}(x)$ by coincidence, so there must be a restriction on the joint distribution of $(y, x)$ (what kind of restriction?).
- In summary, the linear regression model is
  $$y = x'\beta + u,\qquad E[u|x] = 0.$$
- $E[u|x] = 0$ is necessary for a causal interpretation of $\beta$.


Figure: Linear Regression


Projection in $\mathbb{R}^n$


The LSE

- The projection in the $L_2$ space is treated as the population version.
- The projection in $\mathbb{R}^n$ is treated as the sample counterpart of the population version.
- The LSE is defined as
  $$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^k} SSR(\beta) = \arg\min_{\beta \in \mathbb{R}^k} \sum_{i=1}^{n} \left(y_i - x_i'\beta\right)^2 = \arg\min_{\beta \in \mathbb{R}^k} E_n\left[\left(y - x'\beta\right)^2\right],$$
  where $E_n[\cdot]$ is the expectation under the empirical distribution of the data, and
  $$SSR(\beta) \equiv \sum_{i=1}^{n} \left(y_i - x_i'\beta\right)^2 = \sum_{i=1}^{n} y_i^2 - 2\beta'\sum_{i=1}^{n} x_i y_i + \beta'\sum_{i=1}^{n} x_i x_i'\beta = \mathbf{y}'\mathbf{y} - 2\beta'\mathbf{X}'\mathbf{y} + \beta'\mathbf{X}'\mathbf{X}\beta$$
  is the sum of squared residuals as a function of $\beta$.[5] [Figure here]

[5] $\mathbf{X}$ and $\mathbf{y}$ will be defined on the following slide.
Figure: Objective Functions of OLS Estimation: k = 1, 2


Normal Equations

- $SSR(\beta)$ is a quadratic function of $\beta$, so the FOCs are also sufficient to determine the LSE.
- Matrix calculus[6] gives the FOCs for $\hat{\beta}$:
  $$0 = \frac{\partial}{\partial \beta} SSR(\hat{\beta}) = -2\sum_{i=1}^{n} x_i y_i + 2\sum_{i=1}^{n} x_i x_i'\hat{\beta} = -2\mathbf{X}'\mathbf{y} + 2\mathbf{X}'\mathbf{X}\hat{\beta},$$
  which is equivalent to the normal equations
  $$\mathbf{X}'\mathbf{X}\hat{\beta} = \mathbf{X}'\mathbf{y}.$$
- So
  $$\hat{\beta} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}.$$

[6] $\frac{\partial}{\partial x}(a'x) = \frac{\partial}{\partial x}(x'a) = a$, and $\frac{\partial}{\partial x}(x'Ax) = (A + A')x$.
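A minimal numerical sketch (an addition, not from the original slides), assuming Python with NumPy and simulated data: it computes the LSE by solving the normal equations $\mathbf{X}'\mathbf{X}\hat{\beta} = \mathbf{X}'\mathbf{y}$ and checks it against a least squares solver.

```python
import numpy as np

# Sketch: solve the normal equations X'X b = X'y on simulated data.
# np.linalg.lstsq is the numerically preferred route; both agree when X has full column rank.
rng = np.random.default_rng(2)
n, k = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)

beta_normal = np.linalg.solve(X.T @ X, X.T @ y)      # (X'X)^{-1} X'y via the normal equations
beta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]    # least squares solver
print(np.allclose(beta_normal, beta_lstsq))          # True
```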
Notation

- Matrices are represented using uppercase bold. In matrix notation the sample (data, or dataset) is $(\mathbf{y}, \mathbf{X})$, where $\mathbf{y}$ is an $n \times 1$ vector with $i$th entry $y_i$ and $\mathbf{X}$ is an $n \times k$ matrix with $i$th row $x_i'$, i.e.,
  $$\underset{(n \times 1)}{\mathbf{y}} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} \quad \text{and} \quad \underset{(n \times k)}{\mathbf{X}} = \begin{pmatrix} x_1' \\ \vdots \\ x_n' \end{pmatrix}.$$
- The first column of $\mathbf{X}$ is assumed to be ones unless specified otherwise, i.e., the first column of $\mathbf{X}$ is $\mathbf{1} = (1, \ldots, 1)'$.
- The bold zero, $\mathbf{0}$, denotes a vector or matrix of zeros.
- Re-express $\mathbf{X}$ as
  $$\mathbf{X} = \begin{pmatrix} \mathbf{X}_1 & \cdots & \mathbf{X}_k \end{pmatrix},$$
  where, unlike $x_i$, $\mathbf{X}_j$, $j = 1, \ldots, k$, represents the $j$th column of $\mathbf{X}$, i.e., all the observations on the $j$th variable.
- The linear regression model, upon stacking all $n$ observations, is then
  $$\mathbf{y} = \mathbf{X}\beta + \mathbf{u},$$
  where $\mathbf{u}$ is an $n \times 1$ column vector with $i$th entry $u_i$.
LSE as a Projection

- The above derivation of $\hat{\beta}$ expresses the LSE using rows of the data matrices $\mathbf{y}$ and $\mathbf{X}$. The following expresses the LSE using columns of $\mathbf{y}$ and $\mathbf{X}$.
- $\mathbf{y} \in \mathbb{R}^n$; $\mathbf{X}_1, \ldots, \mathbf{X}_k \in \mathbb{R}^n$ are linearly independent; $M = \mathrm{span}(\mathbf{X}_1, \ldots, \mathbf{X}_k) \equiv \mathrm{span}(\mathbf{X})$;[7] $H = \mathbb{R}^n$ with the Euclidean inner product.[8]
  $$\Pi(\mathbf{y}) = \arg\min_{h \in M} \|\mathbf{y} - h\|^2 = \mathbf{X}\arg\min_{\beta \in \mathbb{R}^k} \|\mathbf{y} - \mathbf{X}\beta\|^2 = \mathbf{X}\arg\min_{\beta \in \mathbb{R}^k} \sum_{i=1}^{n} \left(y_i - x_i'\beta\right)^2, \tag{7}$$
  where $\sum_{i=1}^{n} (y_i - x_i'\beta)^2$ is exactly the objective function of OLS.

[7] $\mathrm{span}(\mathbf{X}) = \{\mathbf{z} \in \mathbb{R}^n \mid \mathbf{z} = \mathbf{X}\alpha,\ \alpha \in \mathbb{R}^k\}$ is called the column space of $\mathbf{X}$.
[8] Recall that for $\mathbf{x} = (x_1, \ldots, x_n)'$ and $\mathbf{z} = (z_1, \ldots, z_n)'$, the Euclidean inner product of $\mathbf{x}$ and $\mathbf{z}$ is $\langle \mathbf{x}, \mathbf{z}\rangle = \sum_{i=1}^{n} x_i z_i$, so $\|\mathbf{x}\|^2 = \langle \mathbf{x}, \mathbf{x}\rangle = \sum_{i=1}^{n} x_i^2$.
Solving $\hat{\beta}$

- As $\Pi(\mathbf{y}) = \mathbf{X}\hat{\beta}$, we can solve for $\hat{\beta}$ by premultiplying both sides by $\mathbf{X}'$, that is,
  $$\mathbf{X}'\Pi(\mathbf{y}) = \mathbf{X}'\mathbf{X}\hat{\beta} \ \Rightarrow\ \hat{\beta} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\Pi(\mathbf{y}),$$
  where $(\mathbf{X}'\mathbf{X})^{-1}$ exists because $\mathbf{X}$ has full column rank.
- On the other hand, the orthogonality conditions for this optimization problem are
  $$\mathbf{X}'\hat{\mathbf{u}} = 0,$$
  where $\hat{\mathbf{u}} = \mathbf{y} - \Pi(\mathbf{y})$. Since these orthogonality conditions are equivalent to the normal equations (or the FOCs),
  $$\hat{\beta} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}.$$
- These two $\hat{\beta}$'s are the same since
  $$(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y} - (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\Pi(\mathbf{y}) = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\hat{\mathbf{u}} = 0.$$
- Finally,
  $$\Pi(\mathbf{y}) = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y} = P_{\mathbf{X}}\mathbf{y},$$
  where $P_{\mathbf{X}}$ is called the projection matrix.


Multicollinearity

- In the calculation on the above two slides, we first project $\mathbf{y}$ onto $\mathrm{span}(\mathbf{X})$ to get $\Pi(\mathbf{y})$ and then find $\hat{\beta}$ such that $\Pi(\mathbf{y}) = \mathbf{X}\hat{\beta}$.
- The two steps involve very different operations: optimization versus solving linear equations.
- Furthermore, although $\Pi(\mathbf{y})$ is unique, $\hat{\beta}$ may not be. When $\mathrm{rank}(\mathbf{X}) < k$, i.e., $\mathbf{X}$ is rank deficient, there is more than one (actually, infinitely many) $\hat{\beta}$ such that $\mathbf{X}\hat{\beta} = \Pi(\mathbf{y})$. This is called multicollinearity and will be discussed in more detail in the next chapter (see the sketch after this list).
- In the following discussion, we always assume $\mathrm{rank}(\mathbf{X}) = k$, i.e., $\mathbf{X}$ has full column rank; otherwise, some columns of $\mathbf{X}$ can be deleted to make it so.
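A minimal numerical sketch (an addition, not from the original slides), assuming Python with NumPy: with a rank-deficient $\mathbf{X}$, the fitted values $\Pi(\mathbf{y}) = \mathbf{X}\hat{\beta}$ are unique even though $\hat{\beta}$ is not.

```python
import numpy as np

# Sketch: the third column duplicates the second, so rank(X) = 2 < k = 3.
rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x1])
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

b_minnorm = np.linalg.pinv(X) @ y                    # one solution (minimum norm)
b_other = b_minnorm + np.array([0.0, 1.0, -1.0])     # another solution: adds a null-space vector
print(np.allclose(X @ b_minnorm, X @ b_other))       # True: same projection of y onto span(X)
```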


Generalized Least Squares

- Everything is the same as in the last example except that $\langle \mathbf{x}, \mathbf{z}\rangle_{\mathbf{W}} = \mathbf{x}'\mathbf{W}\mathbf{z}$, where the weight matrix $\mathbf{W} > 0$.
- The projection is
  $$\Pi(\mathbf{y}) = \mathbf{X}\arg\min_{\beta \in \mathbb{R}^k} \|\mathbf{y} - \mathbf{X}\beta\|_{\mathbf{W}}^2. \tag{8}$$
- The FOCs are
  $$\langle \mathbf{X}, \tilde{\mathbf{u}}\rangle_{\mathbf{W}} = 0 \quad \text{(orthogonality conditions)},$$
  where $\tilde{\mathbf{u}} = \mathbf{y} - \mathbf{X}\tilde{\beta}$, that is,
  $$\langle \mathbf{X}, \mathbf{X}\rangle_{\mathbf{W}}\,\tilde{\beta} = \langle \mathbf{X}, \mathbf{y}\rangle_{\mathbf{W}} \ \Rightarrow\ \tilde{\beta} = (\mathbf{X}'\mathbf{W}\mathbf{X})^{-1}\mathbf{X}'\mathbf{W}\mathbf{y}.$$
- Thus
  $$\Pi(\mathbf{y}) = \mathbf{X}(\mathbf{X}'\mathbf{W}\mathbf{X})^{-1}\mathbf{X}'\mathbf{W}\mathbf{y} = P_{\mathbf{X}\perp\mathbf{W}\mathbf{X}}\,\mathbf{y},$$
  where the notation $P_{\mathbf{X}\perp\mathbf{W}\mathbf{X}}$ will be explained later.
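A minimal numerical sketch (an addition, not from the original slides), assuming Python with NumPy and a purely illustrative heteroskedastic DGP: it computes the GLS estimator $(\mathbf{X}'\mathbf{W}\mathbf{X})^{-1}\mathbf{X}'\mathbf{W}\mathbf{y}$ and the associated projector $\mathbf{X}(\mathbf{X}'\mathbf{W}\mathbf{X})^{-1}\mathbf{X}'\mathbf{W}$, which is idempotent but not symmetric.

```python
import numpy as np

# Sketch: GLS on simulated heteroskedastic data; the projector is non-symmetric but idempotent.
rng = np.random.default_rng(4)
n = 300
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])
sigma2 = np.exp(x1)                                   # heteroskedastic error variances
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * np.sqrt(sigma2)

W = np.diag(1.0 / sigma2)                             # weight matrix W > 0
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # (X'WX)^{-1} X'Wy
P = X @ np.linalg.solve(X.T @ W @ X, X.T @ W)         # projector onto span(X) along a subspace
print(beta_gls)
print(np.allclose(P @ P, P), np.allclose(P, P.T))     # idempotent: True; symmetric: False
```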


Projection Matrices

- Since $\Pi(\mathbf{y}) = P_{\mathbf{X}}\mathbf{y}$ is the orthogonal projection onto $\mathrm{span}(\mathbf{X})$, $P_{\mathbf{X}}$ is the orthogonal projector onto $\mathrm{span}(\mathbf{X})$.
- Similarly, $\hat{\mathbf{u}} = \mathbf{y} - \Pi(\mathbf{y}) = (I_n - P_{\mathbf{X}})\mathbf{y} \equiv M_{\mathbf{X}}\mathbf{y}$ is the orthogonal projection onto $\mathrm{span}^{\perp}(\mathbf{X})$, so $M_{\mathbf{X}}$ is the orthogonal projector onto $\mathrm{span}^{\perp}(\mathbf{X})$, where $I_n$ is the $n \times n$ identity matrix.
- Since
  $$P_{\mathbf{X}}\mathbf{X} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{X} = \mathbf{X}, \qquad M_{\mathbf{X}}\mathbf{X} = (I_n - P_{\mathbf{X}})\mathbf{X} = 0,$$
  we say $P_{\mathbf{X}}$ preserves $\mathrm{span}(\mathbf{X})$, $M_{\mathbf{X}}$ annihilates $\mathrm{span}(\mathbf{X})$, and $M_{\mathbf{X}}$ is called the annihilator.
- This implies another way to express $\hat{\mathbf{u}}$:
  $$\hat{\mathbf{u}} = M_{\mathbf{X}}\mathbf{y} = M_{\mathbf{X}}(\mathbf{X}\beta + \mathbf{u}) = M_{\mathbf{X}}\mathbf{u}.$$
- Also, it is easy to check that $M_{\mathbf{X}}P_{\mathbf{X}} = 0$, so $M_{\mathbf{X}}$ and $P_{\mathbf{X}}$ are orthogonal.


continue...

- $P_{\mathbf{X}}$ is symmetric: $P_{\mathbf{X}}' = \left(\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\right)' = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}' = P_{\mathbf{X}}$.
- $P_{\mathbf{X}}$ is idempotent[9] (intuition?): $P_{\mathbf{X}}^2 = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}' = P_{\mathbf{X}}$.
- $P_{\mathbf{X}}$ is positive semidefinite: for any $\alpha \in \mathbb{R}^n$, $\alpha'P_{\mathbf{X}}\alpha = (\mathbf{X}'\alpha)'(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\alpha \ge 0$, where $(\mathbf{X}'\mathbf{X})^{-1} > 0$ but $\mathbf{X}'\alpha$ may be $0$ (why?), so $\ge$ cannot be changed to $>$.
- "Positive semidefinite" cannot be strengthened to "positive definite". Why? An idempotent matrix is always diagonalizable and its eigenvalues are either 0 or 1, so its rank equals its trace:[10]
  $$\mathrm{tr}(P_{\mathbf{X}}) = \mathrm{tr}\left(\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\right) = \mathrm{tr}\left((\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{X}\right) = \mathrm{tr}(I_k) = k < n,$$
  and
  $$\mathrm{tr}(M_{\mathbf{X}}) = \mathrm{tr}(I_n - P_{\mathbf{X}}) = \mathrm{tr}(I_n) - \mathrm{tr}(P_{\mathbf{X}}) = n - k < n.$$
- For a general "nonorthogonal" projector $P$, it is still unique and idempotent, but need not be symmetric (let alone positive semidefinite).
- For example, $P_{\mathbf{X}\perp\mathbf{W}\mathbf{X}}$ in the GLS estimation is not symmetric.

[9] A square matrix $A$ is idempotent if $A^2 = AA = A$. An idempotent matrix need not be symmetric.
[10] The trace of a square matrix is the sum of its diagonal elements. $\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$ and $\mathrm{tr}(AB) = \mathrm{tr}(BA)$.
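A minimal numerical sketch (an addition, not from the original slides), assuming Python with NumPy and a simulated full-column-rank $\mathbf{X}$: it checks the properties of $P_{\mathbf{X}}$ and $M_{\mathbf{X}}$ listed above.

```python
import numpy as np

# Sketch: verify symmetry, idempotency, trace = rank, and M_X P_X = 0 for a random X.
rng = np.random.default_rng(5)
n, k = 50, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

P = X @ np.linalg.solve(X.T @ X, X.T)            # P_X = X (X'X)^{-1} X'
M = np.eye(n) - P                                # M_X = I_n - P_X

print(np.allclose(P, P.T))                       # symmetric: True
print(np.allclose(P @ P, P))                     # idempotent: True
print(np.isclose(np.trace(P), k), np.isclose(np.trace(M), n - k))  # trace = k and n - k
print(np.allclose(M @ P, 0))                     # M_X P_X = 0
```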
Partitioned Fit and Residual Regression


Partitioned Fit

- It is of interest to understand the meaning of part of $\hat{\beta}$, say, $\hat{\beta}_1$ in the partition $\hat{\beta} = (\hat{\beta}_1', \hat{\beta}_2')'$, where we partition
  $$\mathbf{X}\beta = \begin{bmatrix} \mathbf{X}_1 & \mathbf{X}_2 \end{bmatrix}\begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix}$$
  with $\mathrm{rank}(\mathbf{X}) = k$.
- We will show that $\hat{\beta}_1$ is the "net" effect of $\mathbf{X}_1$ on $\mathbf{y}$ when the effect of $\mathbf{X}_2$ is removed from the system. This result is called the Frisch-Waugh-Lovell (FWL) theorem, due to Frisch and Waugh (1933) and Lovell (1963). [Figure here]
- The FWL theorem is an excellent implication of the projection property of least squares.
- To simplify notation, $P_j \equiv P_{\mathbf{X}_j}$, $M_j \equiv M_{\mathbf{X}_j}$, and $\Pi_j(\mathbf{y}) = \mathbf{X}_j\hat{\beta}_j$, $j = 1, 2$.


History of FWL

R. Frisch (1895-1973), Oslo, 1969 Nobel Prize; F.V. Waugh (1898-1974), USDA; M.C. Lovell (1930-), Wesleyan
The FWL Theorem

Theorem
$\hat{\beta}_1$ can be obtained by regressing the residuals from a regression of $\mathbf{y}$ on $\mathbf{X}_2$ alone on the set of residuals obtained when each column of $\mathbf{X}_1$ is regressed on $\mathbf{X}_2$. In mathematical notation,
$$\hat{\beta}_1 = \left(\mathbf{X}_{1\perp 2}'\mathbf{X}_{1\perp 2}\right)^{-1}\mathbf{X}_{1\perp 2}'\mathbf{y}_{\perp 2} = \left(\mathbf{X}_1'M_2\mathbf{X}_1\right)^{-1}\mathbf{X}_1'M_2\mathbf{y},$$
where $\mathbf{X}_{1\perp 2} = (I - P_2)\mathbf{X}_1 = M_2\mathbf{X}_1$ and $\mathbf{y}_{\perp 2} = (I - P_2)\mathbf{y} = M_2\mathbf{y}$.

- This theorem states that $\hat{\beta}_1$ can be calculated by the OLS regression of $\tilde{\mathbf{y}} = M_2\mathbf{y}$ on $\tilde{\mathbf{X}}_1 = M_2\mathbf{X}_1$. This technique is called residual regression (a numerical check follows the corollary below). [Figure here]

Corollary
$$\Pi_1(\mathbf{y}) \equiv \mathbf{X}_1\hat{\beta}_1 = \mathbf{X}_1\left(\mathbf{X}_{1\perp 2}'\mathbf{X}_1\right)^{-1}\mathbf{X}_{1\perp 2}'\mathbf{y} \equiv P_{12}\,\mathbf{y} = P_{12}(\Pi(\mathbf{y})).$$ [a]

[a] Will be explained below.
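A minimal numerical sketch (an addition, not from the original slides), assuming Python with NumPy and simulated data: it checks the FWL theorem by comparing $\hat{\beta}_1$ from the full regression with the coefficient from the residual regression of $M_2\mathbf{y}$ on $M_2\mathbf{X}_1$.

```python
import numpy as np

# Sketch: beta1_hat from the full regression equals the residual-regression coefficient.
rng = np.random.default_rng(6)
n = 400
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])   # includes the constant
X1 = rng.normal(size=(n, 2)) + X2[:, [1]]                # correlated with X2
X = np.column_stack([X1, X2])
y = X @ np.array([1.0, -1.0, 0.5, 2.0]) + rng.normal(size=n)

beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
beta1_full = beta_full[:2]                               # coefficients on X1

M2 = np.eye(n) - X2 @ np.linalg.solve(X2.T @ X2, X2.T)   # annihilator of span(X2)
beta1_fwl = np.linalg.lstsq(M2 @ X1, M2 @ y, rcond=None)[0]
print(np.allclose(beta1_full, beta1_fwl))                # True
```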


Figure: The FWL Theorem


P12

$$P_{12} = \underbrace{\mathbf{X}_1}_{\text{trailing term}}\left[\mathbf{X}_1'(I - P_2)\mathbf{X}_1\right]^{-1}\underbrace{\mathbf{X}_1'(I - P_2)}_{\text{leading term}}.$$

- $I - P_2$ in the leading term annihilates $\mathrm{span}(\mathbf{X}_2)$, so that $P_{12}(\Pi_2(\mathbf{y})) = 0$. The leading term sends $\Pi(\mathbf{y})$ toward $\mathrm{span}^{\perp}(\mathbf{X}_2)$.
- But the trailing $\mathbf{X}_1$ ensures that the final result will lie in $\mathrm{span}(\mathbf{X}_1)$.
- The rest of the expression for $P_{12}$ ensures that $\mathbf{X}_1$ is preserved under the transformation: $P_{12}\mathbf{X}_1 = \mathbf{X}_1$.
- Why is $P_{12}\mathbf{y} = P_{12}(\Pi(\mathbf{y}))$? We can treat the projector $P_{12}$ as a sequential projector: first project $\mathbf{y}$ onto $\mathrm{span}(\mathbf{X})$ to get $\Pi(\mathbf{y})$, and then project $\Pi(\mathbf{y})$ onto $\mathrm{span}(\mathbf{X}_1)$ along $\mathrm{span}(\mathbf{X}_2)$ to get $\Pi_1(\mathbf{y})$. [Figure here]
- $\hat{\beta}_1$ is calculated from $\Pi_1(\mathbf{y})$ by
  $$\hat{\beta}_1 = (\mathbf{X}_1'\mathbf{X}_1)^{-1}\mathbf{X}_1'\Pi_1(\mathbf{y}).$$


Figure: Projection by $P_{12}$


Proof I of the FWL Theorem (brute force)

- Calculate $\hat{\beta}_1$ explicitly in the residual regression and check whether it equals the LSE of $\beta_1$.
- Residual regression consists of the following three steps.
- Step 1: Projecting $\mathbf{y}$ on $\mathbf{X}_2$, we have the residuals
  $$\hat{\mathbf{u}}_y = \mathbf{y} - \mathbf{X}_2(\mathbf{X}_2'\mathbf{X}_2)^{-1}\mathbf{X}_2'\mathbf{y} = M_2\mathbf{y}.$$
- Step 2: Projecting $\mathbf{X}_1$ on $\mathbf{X}_2$, we have the residuals
  $$\hat{\mathbf{U}}_{x_1} = \mathbf{X}_1 - \mathbf{X}_2(\mathbf{X}_2'\mathbf{X}_2)^{-1}\mathbf{X}_2'\mathbf{X}_1 = M_2\mathbf{X}_1.$$
- Step 3: Projecting $\hat{\mathbf{u}}_y$ on $\hat{\mathbf{U}}_{x_1}$, we get the residual regression estimator of $\beta_1$:
  $$\begin{aligned}
  \tilde{\beta}_1 &= \left(\hat{\mathbf{U}}_{x_1}'\hat{\mathbf{U}}_{x_1}\right)^{-1}\hat{\mathbf{U}}_{x_1}'\hat{\mathbf{u}}_y = \left(\mathbf{X}_1'M_2\mathbf{X}_1\right)^{-1}\mathbf{X}_1'M_2\mathbf{y} \\
  &= \left[\mathbf{X}_1'\mathbf{X}_1 - \mathbf{X}_1'\mathbf{X}_2(\mathbf{X}_2'\mathbf{X}_2)^{-1}\mathbf{X}_2'\mathbf{X}_1\right]^{-1}\left[\mathbf{X}_1'\mathbf{y} - \mathbf{X}_1'\mathbf{X}_2\left(\mathbf{X}_2'\mathbf{X}_2\right)^{-1}\mathbf{X}_2'\mathbf{y}\right] \\
  &\equiv B^{-1}\left[\mathbf{X}_1'\mathbf{y} - \mathbf{X}_1'\mathbf{X}_2\left(\mathbf{X}_2'\mathbf{X}_2\right)^{-1}\mathbf{X}_2'\mathbf{y}\right].
  \end{aligned}$$


continue...

- On the other hand,
  $$\hat{\beta} = \begin{bmatrix} \mathbf{X}_1'\mathbf{X}_1 & \mathbf{X}_1'\mathbf{X}_2 \\ \mathbf{X}_2'\mathbf{X}_1 & \mathbf{X}_2'\mathbf{X}_2 \end{bmatrix}^{-1}\begin{bmatrix} \mathbf{X}_1'\mathbf{y} \\ \mathbf{X}_2'\mathbf{y} \end{bmatrix} = \begin{bmatrix} B^{-1} & -B^{-1}\mathbf{X}_1'\mathbf{X}_2(\mathbf{X}_2'\mathbf{X}_2)^{-1} \\ * & * \end{bmatrix}\begin{bmatrix} \mathbf{X}_1'\mathbf{y} \\ \mathbf{X}_2'\mathbf{y} \end{bmatrix},$$
  and
  $$\hat{\beta}_1 = B^{-1}\mathbf{X}_1'\mathbf{y} - B^{-1}\mathbf{X}_1'\mathbf{X}_2(\mathbf{X}_2'\mathbf{X}_2)^{-1}\mathbf{X}_2'\mathbf{y} = B^{-1}\left[\mathbf{X}_1'\mathbf{y} - \mathbf{X}_1'\mathbf{X}_2\left(\mathbf{X}_2'\mathbf{X}_2\right)^{-1}\mathbf{X}_2'\mathbf{y}\right] = \tilde{\beta}_1.$$
- The partitioned inverse formula:
  $$\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}^{-1} = \begin{pmatrix} A_{11.2}^{-1} & -A_{11.2}^{-1}A_{12}A_{22}^{-1} \\ -A_{22.1}^{-1}A_{21}A_{11}^{-1} & A_{22.1}^{-1} \end{pmatrix}, \tag{9}$$
  where $A_{11.2} = A_{11} - A_{12}A_{22}^{-1}A_{21}$ and $A_{22.1}$ is similarly defined.
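A minimal numerical sketch (an addition, not from the original slides), assuming Python with NumPy and a random positive definite matrix: it checks the partitioned inverse formula (9).

```python
import numpy as np

# Sketch: verify the partitioned inverse formula on a 5x5 positive definite matrix.
rng = np.random.default_rng(7)
G = rng.normal(size=(5, 5))
A = G @ G.T + 5 * np.eye(5)                      # positive definite, hence invertible
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

A11_2 = A11 - A12 @ np.linalg.solve(A22, A21)    # A11.2 = A11 - A12 A22^{-1} A21
A22_1 = A22 - A21 @ np.linalg.solve(A11, A12)    # A22.1 = A22 - A21 A11^{-1} A12
top = np.hstack([np.linalg.inv(A11_2), -np.linalg.inv(A11_2) @ A12 @ np.linalg.inv(A22)])
bot = np.hstack([-np.linalg.inv(A22_1) @ A21 @ np.linalg.inv(A11), np.linalg.inv(A22_1)])
print(np.allclose(np.vstack([top, bot]), np.linalg.inv(A)))   # True
```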


Proof II of the FWL Theorem

- To show $\hat{\beta}_1 = \left(\mathbf{X}_{1\perp 2}'\mathbf{X}_{1\perp 2}\right)^{-1}\mathbf{X}_{1\perp 2}'\mathbf{y}_{\perp 2}$, we need only show that
  $$\mathbf{X}_1'M_2\mathbf{y} = \mathbf{X}_1'M_2\mathbf{X}_1\hat{\beta}_1.$$
- Multiplying $\mathbf{y} = \mathbf{X}_1\hat{\beta}_1 + \mathbf{X}_2\hat{\beta}_2 + \hat{\mathbf{u}}$ by $\mathbf{X}_1'M_2$ on both sides, we have
  $$\mathbf{X}_1'M_2\mathbf{y} = \mathbf{X}_1'M_2\mathbf{X}_1\hat{\beta}_1 + \mathbf{X}_1'M_2\mathbf{X}_2\hat{\beta}_2 + \mathbf{X}_1'M_2\hat{\mathbf{u}} = \mathbf{X}_1'M_2\mathbf{X}_1\hat{\beta}_1,$$
  where the last equality follows from $M_2\mathbf{X}_2 = 0$ and $\mathbf{X}_1'M_2\hat{\mathbf{u}} = \mathbf{X}_1'\hat{\mathbf{u}} = 0$ (why does the first equality hold? $\hat{\mathbf{u}} = M_{\mathbf{X}}\mathbf{u}$ and $M_2M_{\mathbf{X}} = M_{\mathbf{X}}$).


P12 as a Projector along a Subspace

Lemma
Define $P_{\mathbf{X}\perp\mathbf{Z}}$ as the projector onto $\mathrm{span}(\mathbf{X})$ along $\mathrm{span}^{\perp}(\mathbf{Z})$, where $\mathbf{X}$ and $\mathbf{Z}$ are $n \times k$ matrices and $\mathbf{Z}'\mathbf{X}$ is nonsingular. Then $P_{\mathbf{X}\perp\mathbf{Z}}$ is idempotent, and
$$P_{\mathbf{X}\perp\mathbf{Z}} = \mathbf{X}(\mathbf{Z}'\mathbf{X})^{-1}\mathbf{Z}'.$$

- For orthogonal projectors, $P_{\mathbf{X}} = P_{\mathbf{X}\perp\mathbf{X}}$.
- To see the difference between $P_{\mathbf{X}}$ and $P_{\mathbf{X}\perp\mathbf{Z}}$, we check Figure 2 again.
- In the left panel, $\mathbf{X} = (1, 0)'$ and $\mathbf{Z} = (1, 1)'$; in the right panel, $\mathbf{X} = (1, 0)'$. (Why?) It is easy to check that
  $$P_{\mathbf{X}} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad P_{\mathbf{X}\perp\mathbf{Z}} = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}.$$
- So an orthogonal projector must be symmetric, while a general projector need not be.
- $P_{12} = P_{\mathbf{X}_1\perp\mathbf{X}_{1\perp 2}}$.
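A minimal numerical sketch (an addition, not from the original slides), assuming Python with NumPy and simulated data: it builds $P_{12} = \mathbf{X}_1\left[\mathbf{X}_1'(I - P_2)\mathbf{X}_1\right]^{-1}\mathbf{X}_1'(I - P_2)$ and checks the properties claimed above.

```python
import numpy as np

# Sketch: P12 preserves X1, annihilates X2, is idempotent but not symmetric,
# and P12 @ y equals the partitioned fit X1 @ beta1_hat.
rng = np.random.default_rng(8)
n = 300
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
X1 = rng.normal(size=(n, 2)) + X2[:, [1]]
X = np.column_stack([X1, X2])
y = X @ np.array([1.0, -1.0, 0.5, 2.0]) + rng.normal(size=n)

P2 = X2 @ np.linalg.solve(X2.T @ X2, X2.T)
M2 = np.eye(n) - P2
P12 = X1 @ np.linalg.solve(X1.T @ M2 @ X1, X1.T @ M2)

beta1_hat = np.linalg.lstsq(X, y, rcond=None)[0][:2]
print(np.allclose(P12 @ X1, X1), np.allclose(P12 @ X2, 0))   # preserves X1, annihilates X2
print(np.allclose(P12 @ P12, P12), np.allclose(P12, P12.T))  # idempotent: True; symmetric: False
print(np.allclose(P12 @ y, X1 @ beta1_hat))                  # P12 y = Pi_1(y) = X1 beta1_hat
```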
