Matrix Analysis: Lecture 8
Feng Wei
daoshuo@bit.edu.cn
(Beijing Institute of Technology) Matrix Analysis: Lecture 8 1 / 15
Kernel and Image of Linear Mappings
Let f : V −→ W be a linear mapping. We define the null space or kernel of
f to be the set
N ( f ) := { v ∈ V : f (v) = 0 } or Ker f := { v ∈ V : f (v) = 0 }.
One can define the range or image of f to be the set
R( f ) := { w ∈ W : w = f (v) for some v ∈ V }
or
Im f := { w ∈ W : w = f (v) for some v ∈ V }.
It is not difficult to see that Ker f is a subspace of V , while Im f is a
subspace of W .
Kernel and Image of Linear Mappings
If the matrix representation of f with respect to the bases v1 , · · · , vn and
w1 , · · · , wm is A,
f [v1 , · · · , vn ] = [w1 , · · · , wm ]A,
then
R( f ) ≅ Span {α1 , · · · , αn } := R(A),
where α1 , · · · , αn are the column vectors of A. Up to isomorphism, we also
have
N ( f ) ≅ { x ∈ Rn : Ax = 0 } := N (A).
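As a quick numerical illustration (the matrix below is a hypothetical example, not taken from the lecture), NumPy can exhibit both R(A) and N (A):

```python
import numpy as np

# An illustrative 3x3 matrix of rank 2, so its kernel is nontrivial.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

# R(A) is spanned by the columns of A; its dimension is rank A.
rank = np.linalg.matrix_rank(A)

# N(A): right-singular vectors of A with (numerically) zero singular value.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[s < 1e-10]          # rows span N(A)

assert rank == 2
assert null_basis.shape[0] == 1     # dim N(A) = 3 - rank = 1
assert np.allclose(A @ null_basis[0], 0)   # a kernel vector maps to 0
```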
Orthogonal and Orthonormal Sets of Vectors
Let {v1 , · · · , vk } be a collection of nonzero vectors vi ∈ Rn . This collection
is said to be orthogonal if vTi v j = 0 for all i ̸= j and orthonormal if
vTi v j = δi j , where δi j is the Kronecker delta defined by
δi j = 1 if i = j and δi j = 0 if i ̸= j.
The word “orthonormal” stands for “orthogonal + normalized”; that is, an
orthonormal set is an orthogonal set of unit vectors.
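Numerically, orthonormality of vectors v1 , · · · , vk collected as the columns of a matrix Q is equivalent to the single identity QT Q = Ik . A small sketch (the random matrix is an illustrative choice):

```python
import numpy as np

# Columns of Q form an orthonormal set iff Q^T Q = I_k
# (entry (i, j) of Q^T Q is exactly v_i^T v_j = delta_ij).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))  # QR yields orthonormal columns

assert np.allclose(Q.T @ Q, np.eye(3))
```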
Orthogonal and Orthonormal Sets of Vectors
Let us construct two orthonormal (hence also orthogonal) sets of vectors.
Example 1
{ (2/3, 2/3, −1/3)T , (2/3, −1/3, 2/3)T , (−1/3, 2/3, 2/3)T }
Example 2
{ (1/√3, 1/√3, 1/√3)T , (1/√2, −1/√2, 0)T , (1/√6, 1/√6, −2/√6)T }
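Both example sets can be checked numerically by stacking each set as the columns of a matrix Q and verifying QT Q = I:

```python
import numpy as np

# Example 1 and Example 2, one vector per column.
Q1 = np.array([[ 2/3,  2/3, -1/3],
               [ 2/3, -1/3,  2/3],
               [-1/3,  2/3,  2/3]])
Q2 = np.column_stack([np.array([1,  1,  1]) / np.sqrt(3),
                      np.array([1, -1,  0]) / np.sqrt(2),
                      np.array([1,  1, -2]) / np.sqrt(6)])

for Q in (Q1, Q2):
    assert np.allclose(Q.T @ Q, np.eye(3))   # v_i^T v_j = delta_ij
```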
Orthogonal Complement
Definition 1
Let S ⊆ Rn . The orthogonal complement of S is defined as the set
S ⊥ = { v ∈ Rn : vT s = 0 for all s ∈ S }.
It is easy to see that S ⊥ is a subspace of Rn .
Example 3
Let
S = Span{ (1, 0, 2, 1)T , (−1, 2, 1, 3)T }.
Orthogonal Complement
How do we find its orthogonal complement S ⊥ ? The computation amounts
to finding all solutions of the system of linear equations
x1 + 0x2 + 2x3 + x4 = 0
−x1 + 2x2 + x3 + 3x4 = 0.
Answer:
S ⊥ = Span{ (−4, −3, 2, 0)T , (−1, −2, 0, 1)T }.
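The hand computation can be verified numerically: S ⊥ is the null space of the matrix whose rows are the spanning vectors of S . A sketch using NumPy's SVD:

```python
import numpy as np

# Rows of B span S; S_perp = N(B), the vectors orthogonal to every row.
B = np.array([[ 1, 0, 2, 1],
              [-1, 2, 1, 3]], dtype=float)

_, s, Vt = np.linalg.svd(B)
rank = int(np.sum(s > 1e-10))
S_perp = Vt[rank:]            # rows form an orthonormal basis of S_perp

# The basis computed by hand in the lecture.
hand = np.array([[-4, -3, 2, 0],
                 [-1, -2, 0, 1]], dtype=float)

assert np.allclose(B @ S_perp.T, 0)   # every SVD basis vector solves Bx = 0
assert np.allclose(B @ hand.T, 0)     # and so does the hand-computed basis
assert S_perp.shape[0] == 2           # dim S_perp = 4 - dim S = 2
```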
Basic Properties of Orthogonal Complements
Theorem 1
Let R, S ⊆ Rn be subspaces. Then
1 S ⊥ is a subspace of Rn .
2 S ⊕ S ⊥ = Rn .
3 (S ⊥ )⊥ = S .
4 R ⊆ S if and only if S ⊥ ⊆ R ⊥ .
5 (R + S )⊥ = R ⊥ ∩ S ⊥ .
6 (R ∩ S )⊥ = R ⊥ + S ⊥ .
Basic Properties of Orthogonal Complements
We only discuss and prove the second item in class; the proofs of the other
results are left as exercises. Let {v1 , · · · , vk } be an orthonormal basis for S
and let x ∈ Rn be an arbitrary vector. We construct the vector
x1 = ∑_{i=1}^{k} (xT vi ) vi
and set
x2 = x − x1 .
Then x1 ∈ S . It suffices to show that x2 ∈ S ⊥ . Indeed, for each j,
x2T v j = xT v j − ∑_{i=1}^{k} (xT vi )(viT v j ) = xT v j − xT v j = 0,
using viT v j = δi j . Thus we have S + S ⊥ = Rn . It remains to prove that
S ∩ S ⊥ = {0}: if v lies in both, then vT v = 0, which forces v = 0.
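The construction in the proof is easy to carry out numerically; the subspace and the vector x below are hypothetical choices for illustration:

```python
import numpy as np

# Orthonormal basis of a subspace S of R^3 (columns of V), arbitrary x.
V = np.column_stack([np.array([1,  1, 1]) / np.sqrt(3),
                     np.array([1, -1, 0]) / np.sqrt(2)])
x = np.array([3.0, 1.0, 2.0])

x1 = V @ (V.T @ x)   # x1 = sum_i (x^T v_i) v_i  lies in S
x2 = x - x1          # the proof shows x2 is orthogonal to S

assert np.allclose(V.T @ x2, 0)   # x2 in S_perp
assert np.allclose(x1 + x2, x)    # x = x1 + x2, so S + S_perp = R^n
```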
Structure of Linear Mappings
For the remaining items of Theorem 1: (2) implies (3); the proofs of (4)
and (5) are straightforward; and (5) implies (6).
Theorem 2
Let f : Rn −→ Rm be a linear mapping whose matrix representation with
respect to the bases v1 , · · · , vn and w1 , · · · , wm is A, that is,
f [v1 , · · · , vn ] = [w1 , · · · , wm ]A.
Then
1 N (A)⊥ = R(AT ).
2 R(A)⊥ = N (AT ).
Structure of Linear Mappings
Let us complete the proof of the second statement.
v ∈ R(A)⊥ ⇐⇒ vT x = 0 for all x ∈ R(A)
⇐⇒ vT Ay = 0 for all y ∈ Rn
⇐⇒ yT AT v = 0 for all y ∈ Rn
⇐⇒ AT v = 0
⇐⇒ v ∈ N (AT ).
As an exercise, prove the first statement in an analogous manner.
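A numerical spot-check of the second statement, with a randomly generated A standing in for a generic matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))   # generic 4x3, rank 3 almost surely

# N(A^T): right-singular vectors of A^T beyond its rank.
_, s, Vt = np.linalg.svd(A.T)
rank = int(np.sum(s > 1e-10))
NAt = Vt[rank:]                   # orthonormal basis of N(A^T)

# Every vector of N(A^T) is orthogonal to every column of A, i.e. to R(A).
assert np.allclose(NAt @ A, 0)
assert NAt.shape[0] == 4 - rank   # dim N(A^T) = m - rank A
```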
Decomposition Theorem of Linear Mappings
Theorem 3
Let f : Rn −→ Rm be a linear mapping whose matrix representation with
respect to the bases v1 , · · · , vn and w1 , · · · , wm is A, that is,
f [v1 , · · · , vn ] = [w1 , · · · , wm ]A.
Then
1 Rn = N (A) ⊕ R(AT ).
2 Rm = R(A) ⊕ N (AT ).
Let us consider a general matrix A ∈ Rm×n of rank r. When thought of as
a linear mapping from Rn to Rm , many properties of A can be developed in
terms of the following four fundamental subspaces: R(A), R(A)⊥ , N (A),
and N (A)⊥ .
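The decomposition Rn = N (A) ⊕ R(AT ) can be computed explicitly; one convenient route (an implementation choice, not the lecture's method) uses the pseudoinverse, since A⁺A is the orthogonal projector onto R(AT ):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1: dim R(A^T) = 1, dim N(A) = 2

x = np.array([1.0, 1.0, 1.0])

# P = A^+ A is the orthogonal projector onto the row space R(A^T).
P = np.linalg.pinv(A) @ A
x_row = P @ x            # component of x in R(A^T)
x_null = x - x_row       # component of x in N(A)

assert np.allclose(A @ x_null, 0)       # x_null lies in N(A)
assert np.allclose(x_row + x_null, x)   # R^n = N(A) + R(A^T)
assert np.isclose(x_row @ x_null, 0)    # and the sum is orthogonal (direct)
```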
Four Fundamental Subspaces
[Figure: diagram of the four fundamental subspaces theorem]
Four Fundamental Subspaces
The previous figure makes many key properties seem almost obvious and
we return to this figure frequently both in the context of linear mappings
and in illustrating concepts such as singular value decompositions of
matrices, controllability and observability of linear systems.
Definition 2
Let V and W be linear spaces over a field F and f : V −→ W be a linear
mapping.
1 f is called surjective if R( f ) = W .
2 f is said to be injective if N ( f ) = {0}.
Furthermore, if f is both surjective and injective, then we say f is
bijective. In this case, we call f a linear isomorphism (and a linear
automorphism when W = V ).
Rank-nullity Theorem
Let f : Rn −→ Rm be a linear mapping whose matrix representation with
respect to the bases v1 , · · · , vn and w1 , · · · , wm is A, that is,
f [v1 , · · · , vn ] = [w1 , · · · , wm ]A.
Then we have the following rank-nullity theorem.
Theorem 4
dim N (A) + dim R(A) = n,
or equivalently
dim N ( f ) + dim R( f ) = n.
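The identity is easy to confirm numerically for a randomly chosen matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 6
A = rng.standard_normal((m, n))

rank = np.linalg.matrix_rank(A)          # dim R(A)
_, s, Vt = np.linalg.svd(A)
nullity = n - int(np.sum(s > 1e-10))     # dim N(A)

assert rank + nullity == n               # the rank-nullity theorem
```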