
Lecture Notes

MTH 1102 LINEAR ALGEBRA I

M.K. Nganda and I. Ndikubwayo

Department of Mathematics
Makerere University
Kampala, Uganda

August 26, 2016



Course Description

The course introduces students to vectors, vector spaces, linear transformations, and systems of
linear equations. Using systems of linear equations, the course explores mathematical properties of
a vector such as linear independence, bases, and dimension. Linear transformations are studied
as relationships between vector spaces leading to the Rank-Nullity Theorem. The course also
introduces students to eigenspaces and diagonalization.

Course Objectives

By the end of this course, students should be able to:

• define vector spaces.

• solve systems of linear equations.

• find eigenvalues and eigenvectors of square matrices, and diagonalize square matrices.

• define linear transformations and find the associated matrices.

• state and prove the Rank-Nullity Theorem.

Detailed Course Outline

Solutions of Systems of Linear Equations

Matrices, Gaussian and Gauss-Jordan elimination, elementary matrices, echelon forms, determinants, Cramer's Rule (15 hours).

General Vector Spaces

Euclidean n-spaces, matrix spaces, polynomial spaces, and function spaces; subspaces (12 hours).

Properties of Vector Spaces

Linear independence, spanning sets, basis, dimension, row and column spaces, rank (9 hours).

Linear Transformation

Kernel, range, matrix of a linear transformation. Eigenvalues and eigenvectors, diagonalization


(9 hours).

Reading List
1. Text recommended by the course lecturer.

2. Notes prepared by the lecturer.

3. Elementary Linear Algebra with Applications, Howard Anton and Chris Rorres, Eleventh
Edition, John Wiley & Sons, Inc., 2014.

4. Schaum’s Outline of Theory and Problems of Linear Algebra, Seymour Lipschutz and Marc
Lars Lipson, Fourth Edition, McGraw-Hill Companies, 2009.

5. Introduction to Linear Algebra and its Applications, Gilbert Strang, Fourth Edition, Wellesley-Cambridge Press, USA, 2009.

6. Basic Linear Algebra, T.S. Blyth and E.F. Robertson, Second Edition, Springer-Verlag
London Limited, 2007.

“Success is a lousy teacher. It seduces smart people into thinking they can’t lose.”
Bill Gates.
Contents

1 Matrices
  1.1 Introduction to Matrices

2 Determinants
  2.1 Combinatorial Method
  2.2 Cofactor Expansion Method

3 Systems of Linear Equations
  3.1 Introduction
  3.2 Solving a Linear System of Equations
    3.2.1 Row Echelon Method
    3.2.2 Cramer's Rule Method

4 Inverses of Square Matrices
  4.1 Introduction
  4.2 The Method of Elementary Row Operations
  4.3 The Adjoint Method

5 Vector Spaces and Vector Subspaces
  5.1 Vectors in n-spaces
  5.2 Vector Spaces
  5.3 Vector Subspaces

6 Linear Independence
  6.1 Linear Combinations and Spanning Sets
  6.2 Linear Independence and Dependence of Vectors

7 Basis and Dimension
  7.1 Bases
  7.2 Dimension

8 Rank and Nullity
  8.1 Rank of a Matrix

9 Eigenvectors and Diagonalization
  9.1 Eigenvalues and Eigenvectors
  9.2 Similarity and Diagonalization

10 Linear Transformations and Matrix Representations
  10.1 Linear Transformations
  10.2 Matrix Representations of Linear Transformations

Chapter 1

Matrices

1.1 Introduction to Matrices

Definition 1.1.1. A matrix is a rectangular array of numbers. An m×n (read as m by n) matrix


is a rectangular array of m × n numbers arranged in m horizontal rows and n vertical columns.
The numbers in the array are called the entries of the matrix. A matrix is generally written as
\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
\]
For brevity we write A = (aij ) (or [aij ]), where aij (or [A]ij ) is an entry in the ith row and j th
column.
Definition 1.1.2. A matrix A is said to be a square matrix if it has the same number of rows
and columns. That is an (m × n) matrix is a square matrix if m = n where m is the number of
rows and n is the number of columns. An m × n matrix is said to be of size or order m × n. If
m = n, then it is of size (or order) n or m.
Definition 1.1.3. Two matrices A and B are defined to be equal if they have the same size and
their corresponding entries are equal. In other words, the matrices A = (aij ) and B = (bij ), each
of order m × n, are said to be equal if their corresponding entries are the same, i.e., aij = bij ∀ i, j.
Definition 1.1.4. Given an m × n matrix A = (aij), the transpose of A, denoted by A^t,
A^T or A′, is the n × m matrix that results from interchanging the rows and columns of A. So
A^T = (a_{ij})^T = (a_{ji}).
Example 1.1.1. If
\[
A = \begin{pmatrix} 2 & 3 & 7 \\ 5 & 6 & 8 \\ 0 & 1 & -1 \end{pmatrix},
\quad \text{then} \quad
A^T = \begin{pmatrix} 2 & 5 & 0 \\ 3 & 6 & 1 \\ 7 & 8 & -1 \end{pmatrix}.
\]
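The transpose operation translates directly into code. The sketch below (illustrative Python, not part of the original notes; the function name is my own) stores a matrix as a list of rows, so entry (i, j) of A becomes entry (j, i) of the result:

```python
# Sketch of Definition 1.1.4: the transpose of an m x n matrix,
# stored as a list of rows, swaps the roles of rows and columns.

def transpose(A):
    """Return A^T: entry (i, j) of A becomes entry (j, i) of A^T."""
    m, n = len(A), len(A[0])
    return [[A[i][j] for i in range(m)] for j in range(n)]

A = [[2, 3, 7],
     [5, 6, 8],
     [0, 1, -1]]

print(transpose(A))  # [[2, 5, 0], [3, 6, 1], [7, 8, -1]]
```

Applying the function twice returns the original matrix, which is Theorem 1.1.1(i) below.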
Definition 1.1.5. An m × n matrix A = (aij ) is said to be a zero matrix if aij = 0, ∀ i, j. That
is to say, all its entries are zero. It is denoted by Omn or simply O. The zero matrix has the
property that for every matrix A of the same size as O it is true that A + O = O + A = A.


Definition 1.1.6. A square matrix A is said to be symmetric if A^T = A. A matrix A is anti-symmetric or skew-symmetric if A^T = −A.
Example 1.1.2. If
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{pmatrix},
\quad \text{then} \quad
A^T = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{pmatrix}.
\]

That is, AT = A; so A is symmetric.


Example 1.1.3. If
\[
B = \begin{pmatrix} 0 & -2 & -3 \\ 2 & 0 & -5 \\ 3 & 5 & 0 \end{pmatrix},
\quad \text{then} \quad
B^T = \begin{pmatrix} 0 & 2 & 3 \\ -2 & 0 & 5 \\ -3 & -5 & 0 \end{pmatrix}.
\]

That is, B T = −B; thus B is anti-symmetric.
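The two symmetry conditions can be checked mechanically; this sketch (illustrative only, names my own) tests the matrices of the two examples above:

```python
# Definition 1.1.6 in code: A is symmetric when A^T = A,
# and skew-symmetric (anti-symmetric) when A^T = -A.

def transpose(M):
    return [list(row) for row in zip(*M)]

def is_symmetric(M):
    return transpose(M) == M

def is_skew_symmetric(M):
    return transpose(M) == [[-x for x in row] for row in M]

A = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]      # Example 1.1.2
B = [[0, -2, -3], [2, 0, -5], [3, 5, 0]]   # Example 1.1.3

print(is_symmetric(A), is_skew_symmetric(B))  # True True
```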


Theorem 1.1.1. Let A and B be matrices and α be a scalar in a field F. Then

(i) (At )t = A.

(ii) (A + B)t = At + B t .

(iii) (AB)t = B t At .

(iv) (αA)t = αAt .


Corollary 1.1.1. Let A and B be matrices of the same size in a field F and α and β be scalars.
Then (αA + βB)t = αAt + βB t .

Proof: It is left as an exercise to the reader. 2


Definition 1.1.7. A square matrix A = (aij ) is said to be a diagonal matrix if aij = 0 ∀ i 6= j.
In other words, it is a square matrix in which all the entries off the main diagonal elements are
zero. If A = (aij ) is diagonal and of order n and aii = 1, ∀ i, then A is called an identity matrix
usually denoted by I or I_n. That is
\[
I = \begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}
\]
The identity matrix has the property that for every square matrix A of the same order as I, it is
true that AI = IA = A.
Example 1.1.4. The matrices
\[
\begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & -1 \end{pmatrix}
\quad \text{and} \quad
\begin{pmatrix} 6 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\]
are diagonal matrices.

Definition 1.1.8. A square matrix A = (aij) is said to be a tri-diagonal matrix if aij = 0 for all
|i − j| ≥ 2, i.e., if every element in the ith row and jth column is zero when the absolute difference
between i and j is greater than or equal to two. A tri-diagonal matrix of order 3 can be written as
\[
\begin{pmatrix} a_{11} & a_{12} & 0 \\ a_{21} & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{pmatrix}.
\]

Example 1.1.5. The matrix
\[
A = \begin{pmatrix} 1 & 3 & 0 \\ 2 & 1 & 5 \\ 0 & 1 & -1 \end{pmatrix}
\]
is a tri-diagonal matrix.
Definition 1.1.9. A square matrix is upper triangular if all the entries below the main diagonal
are zero. In other words, a matrix A = (aij ) is upper triangular if aij = 0, ∀ i > j. It is of the
form
\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & a_{nn}
\end{pmatrix}.
\]
Definition 1.1.10. A square matrix is lower triangular if all the entries above the main diagonal
are zero. In other words, a matrix A = (aij ) is lower triangular if aij = 0, ∀ i < j. It is of the
form
\[
A = \begin{pmatrix}
a_{11} & 0 & \cdots & 0 \\
a_{21} & a_{22} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}.
\]
Definition 1.1.11. A square matrix A = (aij) is triangular if it is either upper or lower triangular.
Example 1.1.6. The matrices
\[
A = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 3 & 0 \\ 4 & 2 & -1 \end{pmatrix}
\quad \text{and} \quad
B = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 3 & 2 \\ 0 & 0 & -1 \end{pmatrix}
\]
are triangular; A is lower triangular and B is upper triangular.
Definition 1.1.12. A matrix A = (aij ) is idempotent if A2 = AA = A.
Example 1.1.7. The matrices
\[
A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad
B = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}, \quad
C = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
are idempotent.
Definition 1.1.13. A square matrix A = (aij ) of order n is non-singular or invertible if there
exists a matrix of order n, denoted by A−1 , such that

AA−1 = A−1 A = I.

(The matrix I is the identity matrix, and A−1 is the inverse of A). If A−1 does not exist we say
that A is singular or non invertible.

Definition 1.1.14.

(a) Let A = (aij ) and B = (bij ) be matrices of the same size. The sum of A and B, written
A + B, is the matrix whose ij-th entry is aij + bij . That is if A = (aij ), B = (bij ) then

C = A + B = (cij = aij + bij ),

where C = (cij ). Note that since cij = bij + aij then A + B = B + A.

(b) Let k be a scalar (or a constant). Then kA is a matrix whose ij-th entry is kaij , i.e.,

kA = (kaij ).

Thus, for example, (−1)A = (−aij ) and

A + (−1)A = A − A = O.

(c) Let A be an m × r matrix and B be an r × n matrix in a field F. The product of A and B is
the matrix whose ij-th entry is the dot product of the ith row of A and the jth column of B. Let
C = AB = (cij). Then
\[
c_{ij} = \sum_{k=1}^{r} a_{ik} b_{kj},
\]
where cij ∈ F. Thus matrix multiplication requires that the number of columns of A be the same
as the number of rows of B. Note that C is an m × n matrix.
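Definition 1.1.14 translates directly into code. The following sketch (illustrative Python, not part of the notes; the helper names are my own) implements entrywise addition, scalar multiplication, and the row-by-column product:

```python
def add(A, B):
    """Entrywise sum of two matrices of the same size (part (a))."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(k, A):
    """Multiply every entry of A by the scalar k (part (b))."""
    return [[k * a for a in row] for row in A]

def matmul(A, B):
    """Product of an m x r matrix A and an r x n matrix B: c_ij = sum_k a_ik b_kj."""
    r = len(B)
    assert len(A[0]) == r, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(r))
             for j in range(len(B[0]))] for i in range(len(A))]

# The matrices of Example 1.1.8 below: A + 2B
A = [[-1, 2, 3], [-1, 0, 2]]
B = [[-1, 5, -2], [2, 2, -1]]
print(add(A, scale(2, B)))  # [[-3, 12, -1], [3, 4, 0]]
```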
   
Example 1.1.8. Let
\[
A = \begin{pmatrix} -1 & 2 & 3 \\ -1 & 0 & 2 \end{pmatrix}
\quad \text{and} \quad
B = \begin{pmatrix} -1 & 5 & -2 \\ 2 & 2 & -1 \end{pmatrix}.
\]
Find (i) A + 2B, (ii) 3A′ − B′, (iii) (3A − B)′.

Solution:

(i)
\[
A + 2B = \begin{pmatrix} -1 & 2 & 3 \\ -1 & 0 & 2 \end{pmatrix} + \begin{pmatrix} -2 & 10 & -4 \\ 4 & 4 & -2 \end{pmatrix} = \begin{pmatrix} -3 & 12 & -1 \\ 3 & 4 & 0 \end{pmatrix}
\]

(ii)
\[
3A' - B' = \begin{pmatrix} -3 & -3 \\ 6 & 0 \\ 9 & 6 \end{pmatrix} - \begin{pmatrix} -1 & 2 \\ 5 & 2 \\ -2 & -1 \end{pmatrix} = \begin{pmatrix} -2 & -5 \\ 1 & -2 \\ 11 & 7 \end{pmatrix}
\]

(iii)
\[
(3A - B)' = \left[ \begin{pmatrix} -3 & 6 & 9 \\ -3 & 0 & 6 \end{pmatrix} - \begin{pmatrix} -1 & 5 & -2 \\ 2 & 2 & -1 \end{pmatrix} \right]'
= \begin{pmatrix} -2 & 1 & 11 \\ -5 & -2 & 7 \end{pmatrix}'
= \begin{pmatrix} -2 & -5 \\ 1 & -2 \\ 11 & 7 \end{pmatrix}
\]

Note that 3A′ − B′ = (3A − B)′. 2


Example 1.1.9. Solve the following matrix equation for a, b, c and d:
\[
\begin{pmatrix} a-b & b+c \\ 3d+c & 2a-4d \end{pmatrix} = \begin{pmatrix} 8 & 1 \\ 7 & 6 \end{pmatrix}
\]

Solution: a = 5, b = −3, c = 4, d = 1. 2
   
Example 1.1.10. Let A = \begin{pmatrix} 1 & -3 \\ 2 & 6 \end{pmatrix} and B = \begin{pmatrix} 7 & 4 \\ -5 & 8 \end{pmatrix}. Find A + B + (A + B)′.

Solution:
Note that A + A′ = \begin{pmatrix} 2 & -1 \\ -1 & 12 \end{pmatrix} and B + B′ = \begin{pmatrix} 14 & -1 \\ -1 & 16 \end{pmatrix}. So
\[
A + B + (A + B)' = (A + A') + (B + B') = \begin{pmatrix} 16 & -2 \\ -2 & 28 \end{pmatrix}.
\]
Alternatively, use A + B = \begin{pmatrix} 8 & 1 \\ -3 & 14 \end{pmatrix} so that
\[
(A + B) + (A' + B') = \begin{pmatrix} 8 & 1 \\ -3 & 14 \end{pmatrix} + \begin{pmatrix} 8 & -3 \\ 1 & 14 \end{pmatrix} = \begin{pmatrix} 16 & -2 \\ -2 & 28 \end{pmatrix}.
\]
Note that A + A′, B + B′, and their sum are symmetric matrices. 2
Example 1.1.11. Let A = \begin{pmatrix} 4 & -1 \\ 0 & 2 \end{pmatrix} and B = \begin{pmatrix} 1 & 4 & 2 \\ 3 & 1 & 5 \end{pmatrix}. Then
\[
AB = \begin{pmatrix} 4 & -1 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 1 & 4 & 2 \\ 3 & 1 & 5 \end{pmatrix} = \begin{pmatrix} 1 & 15 & 3 \\ 6 & 2 & 10 \end{pmatrix}
\]
and
\[
BB' = \begin{pmatrix} 1 & 4 & 2 \\ 3 & 1 & 5 \end{pmatrix}\begin{pmatrix} 1 & 3 \\ 4 & 1 \\ 2 & 5 \end{pmatrix} = \begin{pmatrix} 21 & 17 \\ 17 & 35 \end{pmatrix}.
\]
Example 1.1.12. Let
\[
A = \begin{pmatrix} 3+2i & 0 \\ -i & 2 \\ 1+i & 1-i \end{pmatrix}, \quad
B = \begin{pmatrix} -i & 2 & -1-i \\ 0 & i & 3 \end{pmatrix}, \quad
C = \begin{pmatrix} 0 & -i \\ 2i & -5 \end{pmatrix}.
\]
Then
\[
BC = \begin{pmatrix} 5+i & 4i-11 \\ 3i-2 & -5 \end{pmatrix}, \quad
CA = \begin{pmatrix} -23-25i & -1-i \\ 69-5i & -5+9i \end{pmatrix},
\]
and
\[
(1+i)AB + (3-4i)C' = \begin{pmatrix} 25-7i & 57+36i \\ -1-i & -8-6i \\ 6+3i & -15+26i \end{pmatrix}.
\]
Theorem 1.1.2. Let A be a square matrix. Then A + A′ is symmetric.

Proof: Using Theorem 1.1.1, (A + A′)′ = A′ + (A′)′ = A′ + A = A + A′. Thus
A + A′ is symmetric. [Note that A + A′ is defined and square because A is a square matrix.] 2
Theorem 1.1.3. Let A, B and C be matrices and s and t scalars in a field F. Then

(i) A + B = B + A (addition commutative law)

(ii) (A + B) + C = A + (B + C) (addition associative law)

(iii) A + O = O + A = A (additive identity)

(iv) A + (−A) = O, where −A = (−1)A (additive inverse)

(v) (st)A = s(tA) (associative law for scalar multiplication)

(vi) (s + t)A = sA + tA (distributive law)

(vii) t(A + B) = tA + tB (distributive law)

(viii) IA = AI = A (multiplicative identity)

Proof: We prove parts (ii) and (vii) and leave the rest as exercises.

(ii) Let A = (aij ) and B = (bij ), C = (cij ). Then A + B = (aij + bij ) by definition. Also
(A + B) + C = ((aij + bij ) + cij ); but (aij + bij ) + cij = aij + (bij + cij ) using associativity
of scalars. So (A + B) + C = (aij + (bij + cij )) = (aij ) + (bij + cij ) = A + (B + C).
(vii) Suppose that A = (aij ) and B = (bij ). Then, aij + bij is the ij-th entry of A + B while
t(aij + bij ) is the ij-th entry of t(A + B). Note that taij and tbij are the ij-th entries of tA
and tB respectively. So, taij + tbij is the ij-th entry of tA + tB, where t, aij , bij are scalars
in a field F. Observe that t(aij + bij ) = (t(aij ) + t(bij )) ∀ i, j. Thus t(A + B) = tA + tB
since the corresponding entries are equal.

2
Definition 1.1.15. The matrices A and B are conformable if either AB or BA is defined.
Theorem 1.1.4. Let A, B, and C be conformable matrices in a field F and k be a scalar. Then

(i) (AB)C = A(BC)


(ii) A(B + C) = AB + AC
(iii) (B + C)A = BA + CA
(iv) k(AB) = (kA)B = A(kB)

Proof: We prove parts (i) and (ii) and leave the rest as exercises.

(i) We show that (AB)C = A(BC). Let A = (aij)r×m, B = (bjk)m×n, C = (ckl)n×p, and write
AB = S = (sik)r×n and BC = T = (tjl)m×p. Then
\[
s_{ik} = a_{i1}b_{1k} + a_{i2}b_{2k} + \cdots + a_{im}b_{mk} = \sum_{j=1}^{m} a_{ij}b_{jk}
\quad \text{and} \quad
t_{jl} = b_{j1}c_{1l} + b_{j2}c_{2l} + \cdots + b_{jn}c_{nl} = \sum_{k=1}^{n} b_{jk}c_{kl}.
\]
Multiplying S by C, i.e., (AB) by C, the il-th element of (AB)C is
\[
s_{i1}c_{1l} + s_{i2}c_{2l} + \cdots + s_{in}c_{nl} = \sum_{k=1}^{n} s_{ik}c_{kl} = \sum_{k=1}^{n}\sum_{j=1}^{m} (a_{ij}b_{jk})c_{kl}. \tag{1.1}
\]

On the other hand, multiplying A by T, the il-th element of A(BC) is
\[
a_{i1}t_{1l} + a_{i2}t_{2l} + \cdots + a_{im}t_{ml} = \sum_{j=1}^{m} a_{ij}t_{jl} = \sum_{k=1}^{n}\sum_{j=1}^{m} a_{ij}(b_{jk}c_{kl}). \tag{1.2}
\]

Since the sums (1.1) and (1.2) are equal, the corresponding entries are equal and
thus the theorem is proved.

(ii) Let A = (aij ), B = (bij ), and C = (cij ) with respective sizes r × m, m × n, m × n; we show
that A(B + C) = AB + AC, that is

[A(B + C)]ij = [AB + AC]ij , ∀ i, j


[A(B + C)]ij = ai1 (b1j + c1j ) + ai2 (b2j + c2j ) + · · · + aim (bmj + cmj )
= (ai1 b1j + ai2 b2j + · · · + aim bmj ) + ai1 c1j + ai2 c2j + · · · + aim cmj
= [AB]ij + [AC]ij .

Thus A(B + C) = AB + AC.

Remark 1.1.1. If AB = O it does not mean that A = O or B = O or both. For example, if
A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} and B = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}, then AB = O but A ≠ O and B ≠ O.
Definition 1.1.16. Let A be a square matrix. The powers of A are defined as

A0 = I, A1 = A, A2 = AA, A3 = A2 A, . . .

If f(x) is a polynomial,
\[
f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n,
\]
then f(A) = a_0 I + a_1 A + a_2 A^2 + \cdots + a_n A^n.

If f(A) = O, then A is called a root (or zero) of the polynomial f(x).


   
Example 1.1.13. Let A = \begin{pmatrix} 2 & 2 \\ 3 & -1 \end{pmatrix} and f(x) = x^2 − x − 8. Then
A^2 = \begin{pmatrix} 10 & 2 \\ 3 & 7 \end{pmatrix}, A^3 = \begin{pmatrix} 26 & 18 \\ 27 & -1 \end{pmatrix}, and
\[
f(A) = A^2 - A - 8I
= \begin{pmatrix} 10 & 2 \\ 3 & 7 \end{pmatrix} - \begin{pmatrix} 2 & 2 \\ 3 & -1 \end{pmatrix} - \begin{pmatrix} 8 & 0 \\ 0 & 8 \end{pmatrix}
= \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
\]
Therefore A is a zero of f(x).
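Evaluating a polynomial at a matrix can be sketched as follows (illustrative code, not from the notes; `poly_at` and the other names are my own):

```python
def matmul(A, B):
    """Row-by-column product of conformable matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def poly_at(coeffs, A):
    """f(A) = a0*I + a1*A + a2*A^2 + ... for f(x) = a0 + a1*x + a2*x^2 + ..."""
    n = len(A)
    power = identity(n)                  # current power of A, starting at A^0 = I
    result = [[0] * n for _ in range(n)]
    for a in coeffs:
        result = [[result[i][j] + a * power[i][j] for j in range(n)] for i in range(n)]
        power = matmul(power, A)         # advance to the next power of A
    return result

A = [[2, 2], [3, -1]]
# f(x) = x^2 - x - 8, written low degree first: coefficients (-8, -1, 1)
print(poly_at([-8, -1, 1], A))  # [[0, 0], [0, 0]] -- A is a zero of f
```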

Definition 1.1.17. Let A = (aij) be a square matrix. The trace of A, denoted by tr(A), is the
sum of all diagonal elements of A. That is, tr(A) = \sum_{i} a_{ii}.
   
Example 1.1.14. Let A = \begin{pmatrix} 1 & 2 & 0 \\ 3 & -1 & 4 \end{pmatrix}. Then
AA′ = \begin{pmatrix} 5 & 1 \\ 1 & 26 \end{pmatrix} and tr(AA′) = 31. Also
A′A = \begin{pmatrix} 10 & -1 & 12 \\ -1 & 5 & -4 \\ 12 & -4 & 16 \end{pmatrix} and tr(A′A) = 31.
Theorem 1.1.5. Let A and B be square matrices of order n and k be a scalar, then

(a) tr(At ) = tr(A)

(b) tr(A + B) = tr(A) + tr(B)

(c) tr(kA)=k tr(A)

(d) tr(AB) = tr(BA)

Proof: We prove part (d) and leave the rest as exercises. Let AB = C and BA = D; then C and
D are square matrices of order n. The ii-th element of C is the ith row of A times the ith column of B, i.e.,
\[
c_{ii} = (a_{i1}\; a_{i2}\; \cdots\; a_{in}) \begin{pmatrix} b_{1i} \\ b_{2i} \\ \vdots \\ b_{ni} \end{pmatrix}
= a_{i1}b_{1i} + a_{i2}b_{2i} + \cdots + a_{in}b_{ni}
= \sum_{k=1}^{n} a_{ik}b_{ki}.
\]
Thus
\[
tr(C) = \sum_{i=1}^{n} c_{ii} = \sum_{i=1}^{n}\sum_{k=1}^{n} a_{ik}b_{ki} = \sum_{k=1}^{n}\sum_{i=1}^{n} b_{ki}a_{ik} = \sum_{k=1}^{n} d_{kk} = tr(D).
\]
Hence tr(AB) = tr(BA).
2
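Theorem 1.1.5(d) is easy to spot-check numerically. This sketch (illustrative only, not part of the notes) verifies it on the matrix of Example 1.1.14, where AA′ is 2 × 2 and A′A is 3 × 3, yet the traces agree:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    """tr(A): sum of the diagonal entries of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2, 0], [3, -1, 4]]
At = [list(row) for row in zip(*A)]   # transpose of A

print(trace(matmul(A, At)), trace(matmul(At, A)))  # 31 31
```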
Assignment 1.1.1.
   
1. Let A = \begin{pmatrix} 1 & 1 \\ -2 & -1 \\ 1 & 2 \end{pmatrix}, B = \begin{pmatrix} 1 & -1 & -2 \\ 2 & 1 & -2 \end{pmatrix} and C = \begin{pmatrix} 1 & 1 & -3 \\ -1 & 2 & 0 \\ -3 & -1 & 0 \end{pmatrix}. Find, if
possible, AB, BA, AC, CA, B^tC, C^tB^t, CC^t, 3A − 3B^t, −2A^t. If it is not possible, explain why.

2. Indicate which of the following matrices are square, diagonal, upper or lower triangular,
symmetric, or skew-symmetric. Calculate the transpose of each matrix.
\[
A = \begin{pmatrix} -1 & 4 \\ 0 & 1 \\ 6 & 0 \end{pmatrix}, \quad
B = \begin{pmatrix} 1 & -1 & -2 \\ 2 & 1 & -2 \end{pmatrix}, \quad
C = \begin{pmatrix} -2 & 0 & 0 \\ 4 & 0 & 0 \\ -1 & 2 & 3 \end{pmatrix}, \quad
D = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}, \quad
E = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 6 \end{pmatrix}
\]
3. State whether the following statements are True or False. Justify your answer: if true, provide an explanation, and if false, give a counterexample.

(a) The sum of an upper triangular matrix and a lower triangular matrix is a symmetric
matrix.
(b) The trace of a skew-symmetric matrix must equal zero.
(c) If A and B are matrices such that AB and BA are both defined, then A and B are both
square.
(d) The product of two skew-symmetric matrices of the same size is skew-symmetric.
(e) Let A be any matrix. Show that AAt and At A are both symmetric.

4. (a) Find a 2 × 2 idempotent matrix besides the trivial matrices.


(b) If A is an n × n idempotent matrix, show that In − A is also idempotent.
(c) Use the above example to construct more examples of idempotent matrices.
5. (a) Prove that any symmetric or skew-symmetric matrix is square.
(b) Prove that every diagonal matrix is symmetric.
(c) Describe completely every matrix that is both diagonal and skew- symmetric.
6. Let A be a square matrix of order n.
(a) Prove that \frac{1}{2}(A + A^t) is a symmetric matrix.
(b) Prove that \frac{1}{2}(A − A^t) is a skew-symmetric matrix.
(c) Show that A = \frac{1}{2}(A + A^t) + \frac{1}{2}(A − A^t).
(d) Decompose the matrix A as the sum of a symmetric and a skew-symmetric matrix, where
A = \begin{pmatrix} 3 & -1 & 4 \\ 0 & 2 & 5 \\ 1 & -3 & 2 \end{pmatrix}
   
7. Let A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} and B = \begin{pmatrix} 1 & 2 \\ 1 & k \end{pmatrix}.
Is it possible to choose k such that AB = BA? If so, what should k equal?
 
8. Let A = \begin{pmatrix} -1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}. Show that if B is a 3 × 2 matrix such that AB = I_2, then
\[
B = \begin{pmatrix} a & b \\ -a-1 & 1-b \\ a+1 & b \end{pmatrix}
\]
for suitable numbers a and b. Use the associative law to show that (BA)^2 B = B.
 
9. Let A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}; prove that A^2 − (a + d)A + (ad − bc)I_2 = O.
Exercise 1.1.1.
     
1. Let A = \begin{pmatrix} 1 & 2 \\ 3 & -1 \end{pmatrix} and B = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}. Show that AB = \begin{pmatrix} 4 & 2 \\ 5 & -1 \end{pmatrix} and BA = \begin{pmatrix} 2 & 4 \\ 4 & 1 \end{pmatrix}. Note that AB ≠ BA. In general, matrices A and B are said to commute if
AB = BA.
2. Suppose that A and B are conformable matrices and consider
(A − B)(A + B) = A2 − B 2 + AB − BA
Prove that (A − B)(A + B) = A2 − B 2 if and only if A and B commute (i.e., AB = BA).
3. Let A and B be non-commutative conformable matrices. Prove that AB ≠ BA.
 
4. Let C = \begin{pmatrix} 1 & 4 & 2 \\ 3 & 1 & 5 \end{pmatrix} and D = \begin{pmatrix} 1 & 5 & 2 \\ -1 & 0 & 1 \\ 3 & 2 & 4 \end{pmatrix}. Show that CD′ = \begin{pmatrix} 25 & 1 & 19 \\ 18 & 2 & 31 \end{pmatrix} and find
DC′ directly.
5. Prove that (AB)′ = B′A′ for A, B conformable matrices.
Chapter 2

Determinants

We are all familiar with functions such as f (x) = cos x and f (x) = x2 , which associate a real
number f (x) with a real number x. Since both x and f (x) assume only real values, such functions
are described as real-valued functions of a real variable. In this section we shall study the deter-
minant function, which is a real-valued function of a matrix variable in the sense that it associates
a real number f (A) with a square matrix A (that is, f (A) = |A|, which is the determinant of the
matrix A). Our work on determinant functions will have important applications to the theory
of systems of linear equations and will also lead us to an explicit formula for the inverse of an
invertible matrix. To compute the determinant of any matrix, we can use two methods, namely

• Combinatorial method.
• Cofactor expansion method.

2.1 Combinatorial Method

Definition 2.1.1. A permutation of a set of integers {1, 2, ..., n} is the arrangement of these
integers in some order without omissions or repetitions.

We shall denote a general permutation of the set {1, 2, ..., n} by < i1 , i2 , ..., in > where i1 is the
first integer in the permutation, i2 is the second integer in the permutation, etc. For example
there are six different permutations of the set of integers {1, 2, 3}. These are
< 1, 2, 3 >, < 1, 3, 2 >, < 2, 1, 3 >, < 2, 3, 1 >, < 3, 1, 2 >, < 3, 2, 1 > .
A set of n elements has n! permutations. We denote the set of all permutations of set S by Sn
where n is the number of elements of S, and thus
S1 = {< 1 >}
S2 = {< 1, 2 >, < 2, 1 >}
S3 = {< 1, 2, 3 >, < 1, 3, 2 >, < 2, 1, 3 >, < 2, 3, 1 >, < 3, 1, 2 >, < 3, 2, 1 >}
Exercise 2.1.1. Find all permutations of the given set of integers:


(i) {4, 7} (ii) {1, 2, ..., n}

Definition 2.1.2. A permutation < i1, i2, . . . , in > of set S is said to have an inversion if a larger
integer precedes (comes before) a smaller integer.

For example < 1, 2, 3, 4 > has no inversion, < 4, 1, 3 > has two inversions, < 6, 1 > has one
inversion etc.
Exercise 2.1.2. Find the total number of inversions in the following permutations:

(i) < 4, 1, 3, 5, 2 > (ii) < 5, 4, 3, 2, 1 >

Definition 2.1.3. A permutation is even if the total number of inversions is an even integer. A
permutation is odd if the total number of inversions is an odd integer.

The following table classifies the various permutations of S = {5, 6, 7} as even or odd.

Table 2.1:

Permutation Number of inversions Classification


< 5, 6, 7 > 0 even
< 5, 7, 6 > 1 odd
< 6, 5, 7 > 1 odd
< 6, 7, 5 > 2 even
< 7, 5, 6 > 2 even
< 7, 6, 5 > 3 odd
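Counting inversions, as in the table above, is easy to mechanize. This sketch (illustrative code, not part of the notes) reproduces the classification of the permutations of S = {5, 6, 7}:

```python
from itertools import permutations

def inversions(p):
    """Number of pairs (i, j) with i < j but p[i] > p[j]."""
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j])

def classify(p):
    return "even" if inversions(p) % 2 == 0 else "odd"

# Reproduce Table 2.1.
for p in permutations((5, 6, 7)):
    print(p, inversions(p), classify(p))
```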

Definition 2.1.4. An elementary product from an n × n matrix A is any product of n entries


from A, no two of which come from the same row or the same column.

For example, the 3 × 3 matrix
\[
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\]
has 3! = 6 elementary products. These are a11a22a33, a12a21a33, a13a21a32, a11a23a32, a12a23a31,
a13a22a31. Hence the elementary products are of the form a1i1 a2i2 · · · anin where < i1, i2, · · · , in >
is a permutation of the set {1, 2, ..., n}.
Definition 2.1.5. A signed elementary product from matrix A is an elementary product a1i1 a2i2 · · · anin
multiplied by +1 or −1.

An elementary product is multiplied by +1 if the permutation < i1 , i2 , · · · , in > is even and is


multiplied by −1 if the permutation < i1 , i2 , · · · , in > is an odd permutation.
Definition 2.1.6. Let A = (aij ) be an n × n matrix. The determinant of A denoted as |A| or
det(A) is defined as the sum of all signed elementary products from A.
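The definition can be implemented directly (a sketch with my own function name, not from the notes): enumerate all n! permutations, attach the sign from the inversion count, and sum the signed elementary products.

```python
from itertools import permutations

def det_combinatorial(A):
    """|A| as the sum of all signed elementary products a_{1 i1} a_{2 i2} ... a_{n in}."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # sign: +1 for an even permutation, -1 for an odd one
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        sign = -1 if inv % 2 else 1
        prod = 1
        for row, col in enumerate(perm):
            prod *= A[row][col]
        total += sign * prod
    return total

A = [[1, 2, 0], [3, 0, 1], [3, 3, 4]]
print(det_combinatorial(A))  # -21
```

This is O(n · n!) work, which is why the cofactor and row-reduction methods later in the chapter matter in practice.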

Example 2.1.1. Find the determinant of the following matrix;


 
1 2 0
A= 3 0 1 
3 3 4

Solution:

Table 2.2:

Elementary product Associated permutation Classification signed elementary product


(1)(0)(4) < 1, 2, 3 > even +(1)(0)(4)
(2)(1)(3) < 2, 3, 1 > even +(2)(1)(3)
(0)(3)(3) < 3, 1, 2 > even +(0)(3)(3)
(1)(1)(3) < 1, 3, 2 > odd −(1)(1)(3)
(2)(3)(4) < 2, 1, 3 > odd −(2)(3)(4)
(0)(0)(3) < 3, 2, 1 > odd −(0)(0)(3)

|A| = (1)(0)(4) + (2)(1)(3) + (0)(3)(3) − (1)(1)(3) − (2)(3)(4) − (0)(0)(3)
= 0 + 6 + 0 − 3 − 24 − 0 = −21.

2
 
Example 2.1.2. Use the method of permutations to compute |A| for A = \begin{pmatrix} -2 & 1 & 4 \\ 3 & 5 & -7 \\ 1 & 6 & 2 \end{pmatrix}.

Solution:

|A| = a11a22a33 + a12a23a31 + a13a21a32 − a11a23a32 − a12a21a33 − a13a22a31
= (−2)(5)(2) + (1)(−7)(1) + (4)(3)(6) − (−2)(−7)(6) − (1)(3)(2) − (4)(5)(1)
= −20 − 7 + 72 − 84 − 6 − 20 = −137 + 72 = −65.

Exercise 2.1.3. Use the method of permutations to evaluate the determinant of each matrix A given below.
\[
(a)\; A = \begin{pmatrix} -5 & 6 \\ -7 & -2 \end{pmatrix} \quad
(b)\; A = \begin{pmatrix} -1 & 1 & 2 \\ 3 & 0 & -5 \\ 1 & 7 & 2 \end{pmatrix} \quad
(c)\; A = \begin{pmatrix} 3 & 0 & 0 \\ 2 & -1 & 5 \\ 1 & 9 & -4 \end{pmatrix} \quad
(d)\; A = \begin{pmatrix} c & -4 & 3 \\ 2 & 1 & c^2 \\ 4 & c-1 & 2 \end{pmatrix} \quad
(e)\; A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 1 & 5 & 7 \end{pmatrix}
\]

Remark 2.1.1.
 
(i) When A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, S_2 = {< 1, 2 >, < 2, 1 >}, and |A| = \sum \pm a_{1i_1}a_{2i_2} = a_{11}a_{22} − a_{12}a_{21}.

(ii) When A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}, S_3 = {< 1, 2, 3 >, < 1, 3, 2 >, < 2, 1, 3 >, < 2, 3, 1 >, < 3, 1, 2 >, < 3, 2, 1 >}, and
|A| = \sum \pm a_{1i_1}a_{2i_2}a_{3i_3} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} − a_{12}a_{21}a_{33} − a_{13}a_{22}a_{31} − a_{11}a_{23}a_{32}.
 
Example 2.1.3. Find the determinant of A = \begin{pmatrix} 1 & 2 \\ 3 & 7 \end{pmatrix}.

Solution:
|A| = +a11 a22 − a12 a21 = 1(7) − 2(3) = 1.
2
 
Example 2.1.4. Given that A = \begin{pmatrix} 1 & 2 & 0 \\ 3 & 0 & 1 \\ 3 & 3 & 4 \end{pmatrix}, compute |A|.

Solution:

|A| = \sum \pm a_{1i_1}a_{2i_2}a_{3i_3} = a11a22a33 − a11a23a32 − a12a21a33 + a12a23a31 + a13a21a32 − a13a22a31
⇒ |A| = (1)(0)(4) − (1)(1)(3) − (2)(3)(4) + (2)(1)(3) + (0)(3)(3) − (0)(0)(3) = −21.

2.2 Cofactor Expansion Method

Definition 2.2.1. If A is a square matrix, the minor of entry aij , denoted by |Mij |, is the
determinant of the submatrix Mij that remains after the ith row and the j th column are deleted
from A.

Alternatively, if A = (aij) is an n × n matrix, then Mij will denote the (n − 1) × (n − 1) matrix obtained
from A by deleting its ith row and jth column. The determinant of Mij, denoted by |Mij|, is called
the minor of the element aij of A.

Definition 2.2.2. The cofactor of aij , denoted by Aij , is given by

Aij = (−1)i+j |Mij |.

Example 2.2.1. Find all the cofactors of the entries of the matrix
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 2 & 0 & 2 \end{pmatrix}.
\]

Solution:
\[
A_{11} = (-1)^{1+1}\begin{vmatrix} 2 & 1 \\ 0 & 2 \end{vmatrix} = 4, \quad
A_{12} = (-1)^{1+2}\begin{vmatrix} 3 & 1 \\ 2 & 2 \end{vmatrix} = -4, \quad
A_{13} = (-1)^{1+3}\begin{vmatrix} 3 & 2 \\ 2 & 0 \end{vmatrix} = -4,
\]
\[
A_{21} = (-1)^{2+1}\begin{vmatrix} 2 & 3 \\ 0 & 2 \end{vmatrix} = -4, \quad
A_{22} = (-1)^{2+2}\begin{vmatrix} 1 & 3 \\ 2 & 2 \end{vmatrix} = -4, \quad
A_{23} = (-1)^{2+3}\begin{vmatrix} 1 & 2 \\ 2 & 0 \end{vmatrix} = 4,
\]
\[
A_{31} = (-1)^{3+1}\begin{vmatrix} 2 & 3 \\ 2 & 1 \end{vmatrix} = -4, \quad
A_{32} = (-1)^{3+2}\begin{vmatrix} 1 & 3 \\ 3 & 1 \end{vmatrix} = 8, \quad
A_{33} = (-1)^{3+3}\begin{vmatrix} 1 & 2 \\ 3 & 2 \end{vmatrix} = -4.
\]
So the cofactor matrix of A is
\[
(A_{ij}) = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} = \begin{pmatrix} 4 & -4 & -4 \\ -4 & -4 & 4 \\ -4 & 8 & -4 \end{pmatrix}
\]
2
Definition 2.2.3. Suppose that A = (aij) is an n × n matrix, and let Aij denote the cofactor of
the element aij with i, j = 1, 2, . . . , n. The determinant of matrix A is equal to the sum of the
products of the entries of any row or column and the corresponding cofactors. That is,

(i) summing along the ith row,
\[
|A| = a_{i1}A_{i1} + a_{i2}A_{i2} + \cdots + a_{in}A_{in} = \sum_{k=1}^{n} a_{ik}A_{ik};
\]

(ii) summing along the jth column,
\[
|A| = a_{1j}A_{1j} + a_{2j}A_{2j} + \cdots + a_{nj}A_{nj} = \sum_{k=1}^{n} a_{kj}A_{kj}.
\]
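Cofactor expansion along the first row leads naturally to a recursive sketch (illustrative code, names my own; the base case is a 1 × 1 matrix):

```python
def minor(A, i, j):
    """Submatrix of A with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det_cofactor(A):
    """Expand |A| along the first row: |A| = sum_j a_{1j} (-1)^{1+j} |M_{1j}|."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det_cofactor(minor(A, 0, j)) for j in range(n))

A = [[1, 2, 3], [3, 2, 1], [2, 0, 2]]
print(det_cofactor(A))  # -16
```

In hand computation one would instead expand along the row or column with the most zeros, as the example below notes.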

Example 2.2.2. Compute the determinant of the matrix
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 2 & 0 & 2 \end{pmatrix}.
\]

Solution: We know (from the last example) that the cofactor matrix of A is
\[
\begin{pmatrix} 4 & -4 & -4 \\ -4 & -4 & 4 \\ -4 & 8 & -4 \end{pmatrix}
\]
Summing along the
1st column ⇒ |A| = (1)(4) + (3)(−4) + (2)(−4) = −16
2nd column ⇒ |A| = (2)(−4) + (2)(−4) + (0)(8) = −16
3rd column ⇒ |A| = (3)(−4) + (1)(4) + (2)(−4) = −16
1st row ⇒ |A| = (1)(4) + (2)(−4) + (3)(−4) = −16
2nd row ⇒ |A| = (3)(−4) + (2)(−4) + (1)(4) = −16
3rd row ⇒ |A| = (2)(−4) + (0)(8) + (2)(−4) = −16

We get the same value of the determinant no matter which row or column we use. A row or
column with the most zeros always leads to a faster computation of the determinant. 2
Definition 2.2.4. The transpose of the cofactor matrix is the adjoint of the matrix. It is usually
denoted by adj(A).

We observe that if
\[
A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}
\]
the cofactor matrix of A is given by
\[
(A_{ij}) = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix}
\]
and the adjoint of A is given by
\[
adj(A) = \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix}
\]
 
Example 2.2.3. Find the adjoint of the matrix A = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 2 & 0 & 2 \end{pmatrix}.

Solution: The cofactor matrix of A is
\[
(A_{ij}) = \begin{pmatrix} 4 & -4 & -4 \\ -4 & -4 & 4 \\ -4 & 8 & -4 \end{pmatrix}
\quad \Rightarrow \quad
adj(A) = \begin{pmatrix} 4 & -4 & -4 \\ -4 & -4 & 8 \\ -4 & 4 & -4 \end{pmatrix}
\]
2
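The adjoint can be sketched in code as the transpose of the cofactor matrix (illustrative Python, names my own). A standard identity behind the adjoint method of Section 4.3 is A · adj(A) = |A| I, which makes a good numerical check:

```python
def minor(A, i, j):
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    # cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjoint(A):
    """Transpose of the cofactor matrix: adj(A)[j][i] = (-1)^{i+j} |M_ij|."""
    n = len(A)
    cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)] for i in range(n)]
    return [list(row) for row in zip(*cof)]   # transpose

A = [[1, 2, 3], [3, 2, 1], [2, 0, 2]]
print(adjoint(A))  # [[4, -4, -4], [-4, -4, 8], [-4, 4, -4]]
```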

Properties of Determinants

(i) For any square matrix A, |A| = |AT |.

(ii) If a matrix B is obtained from a square matrix A by interchanging the position of two rows
in A (first elementary row operation), then |B| = −|A|.
Compute the determinants of the following matrices:
\[
\begin{pmatrix} 1 & 2 & 0 \\ 3 & 0 & 1 \\ 3 & 3 & 4 \end{pmatrix}, \quad
\begin{pmatrix} 3 & 0 & 1 \\ 1 & 2 & 0 \\ 3 & 3 & 4 \end{pmatrix}, \quad
\begin{pmatrix} 0 & 3 & 1 \\ 2 & 1 & 0 \\ 3 & 3 & 4 \end{pmatrix}.
\]

(iii) If two rows or columns of a square matrix are identical (the same), then its determinant is
equal to zero.

(iv) If a square matrix A has a zero row or a zero column, then |A| = 0.

(v) If a matrix B is obtained from a square matrix A by multiplying every element in one row
of A by a scalar α (second elementary row operation), then |B| = α|A|.

(vi) If matrix B is obtained from a square matrix A by adding to one row of A a scalar times
another row of A (third elementary row operation) then |B| = |A|.

(vii) The determinant of a triangular or diagonal matrix is given by the product of the elements
in the main diagonal.

(viii) If A and B are square matrices of the same order, then |AB| = |A||B|.
(ix) If |A| ≠ 0, then |A^{-1}| = \frac{1}{|A|}, where A^{-1} is the inverse of A.
(x) The determinant of any identity matrix is 1.

(xi) If A is an n × n matrix and λ is a scalar, then |λA| = λn |A|.
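Properties (viii) and (xi) can be spot-checked numerically; the sketch below (illustrative only, not part of the notes) uses a small permutation-based determinant:

```python
from itertools import permutations

def det(A):
    """Combinatorial determinant (Definition 2.1.6)."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        prod = 1
        for row, col in enumerate(p):
            prod *= A[row][col]
        total += (-1) ** inv * prod
    return total

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 7]]
B = [[2, 0], [1, 1]]

# Property (viii): |AB| = |A||B|;  property (xi): |lambda A| = lambda^n |A|.
print(det(matmul(A, B)) == det(A) * det(B))                          # True
print(det([[3 * x for x in row] for row in A]) == 3 ** 2 * det(A))   # True
```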

Exercise 2.2.1. Using a general form of a 3 × 3 matrix, demonstrate the validity of properties
(v) and (vi) of determinants mentioned above. Using the properties of determinants, find the
determinants of the matrices below.
\[
(i)\; \begin{pmatrix} 0 & 0 & 0 \\ 2 & 3 & 0 \\ -1 & -2 & 4 \end{pmatrix} \quad
(ii)\; \begin{pmatrix} 3 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 4 & -1 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix} \quad
(iii)\; \begin{pmatrix} 3 & 1 & 4 \\ 0 & 3 & 4 \\ 0 & 0 & 3 \end{pmatrix} \quad
(iv)\; \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \end{pmatrix}
\]

Exercise 2.2.2. Prove that if Ai , i = 1, 2, . . . , n are square matrices of the same order, then

|A1 A2 · · · An | = |A1 ||A2 | · · · |An |.

Deduce that if A is a square matrix then |A^n| = |A|^n.

Example 2.2.4. Use the properties of determinants to find the determinant of the matrix
\[
A = \begin{pmatrix} 2 & 1 & 3 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 2 & 1 & 0 \\ 0 & 1 & 2 & 3 \end{pmatrix}
\]

Solution: Interchanging the first two rows, we obtain
\[
\begin{vmatrix} 2 & 1 & 3 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 2 & 1 & 0 \\ 0 & 1 & 2 & 3 \end{vmatrix}
= -\begin{vmatrix} 1 & 0 & 1 & 1 \\ 2 & 1 & 3 & 1 \\ 0 & 2 & 1 & 0 \\ 0 & 1 & 2 & 3 \end{vmatrix}.
\]
Applying the elementary row operation R2 − 2R1 → R2 and then expanding along the first column, we get
\[
-\begin{vmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & -1 \\ 0 & 2 & 1 & 0 \\ 0 & 1 & 2 & 3 \end{vmatrix}
= -\begin{vmatrix} 1 & 1 & -1 \\ 2 & 1 & 0 \\ 1 & 2 & 3 \end{vmatrix}.
\]
Finally, the operations R2 − 2R1 → R2 and R3 − R1 → R3 lead to
\[
-\begin{vmatrix} 1 & 1 & -1 \\ 0 & -1 & 2 \\ 0 & 1 & 4 \end{vmatrix}
= -\begin{vmatrix} -1 & 2 \\ 1 & 4 \end{vmatrix} = 6.
\]
2
 
Example 2.2.5. Find the determinant of the matrix
\[
A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ -2 & 1 & -4 & 3 \\ 3 & -4 & -1 & 2 \\ 4 & 3 & -2 & -1 \end{pmatrix}.
\]

Solution: Using the elementary row operations R2 + 2R1 → R2, R3 − 3R1 → R3, R4 − 4R1 → R4, expanding along the first column, and then factoring −10 out of the second row, we have
\[
|A| = \begin{vmatrix} 1 & 2 & 3 & 4 \\ 0 & 5 & 2 & 11 \\ 0 & -10 & -10 & -10 \\ 0 & -5 & -14 & -17 \end{vmatrix}
= \begin{vmatrix} 5 & 2 & 11 \\ -10 & -10 & -10 \\ -5 & -14 & -17 \end{vmatrix}
= -10 \begin{vmatrix} 5 & 2 & 11 \\ 1 & 1 & 1 \\ -5 & -14 & -17 \end{vmatrix}.
\]
Applying the elementary column operations C2 − C1 → C2 and C3 − C1 → C3 and expanding along the second row, we get
\[
-10 \begin{vmatrix} 5 & -3 & 6 \\ 1 & 0 & 0 \\ -5 & -9 & -12 \end{vmatrix}
= -10(-1)\begin{vmatrix} -3 & 6 \\ -9 & -12 \end{vmatrix} = 900.
\]
2
Remark 2.2.1. We can combine row reduction and cofactor expansion to calculate determinants
of large matrices.
Example 2.2.6. Expanding along the first column,
\[
\begin{vmatrix} 2 & 1 & 3 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 2 & 1 & 0 \\ 0 & 1 & 2 & 3 \end{vmatrix}
= 2\begin{vmatrix} 0 & 1 & 1 \\ 2 & 1 & 0 \\ 1 & 2 & 3 \end{vmatrix} - \begin{vmatrix} 1 & 3 & 1 \\ 2 & 1 & 0 \\ 1 & 2 & 3 \end{vmatrix}.
\]
Next,
\[
2\begin{vmatrix} 0 & 1 & 1 \\ 2 & 1 & 0 \\ 1 & 2 & 3 \end{vmatrix}
= 2\left( -\begin{vmatrix} 2 & 0 \\ 1 & 3 \end{vmatrix} + \begin{vmatrix} 2 & 1 \\ 1 & 2 \end{vmatrix} \right) = 2(3 - 6) = -6,
\]
and
\[
\begin{vmatrix} 1 & 3 & 1 \\ 2 & 1 & 0 \\ 1 & 2 & 3 \end{vmatrix}
= \begin{vmatrix} 1 & 3 & 1 \\ 0 & -5 & -2 \\ 0 & -1 & 2 \end{vmatrix}
= \begin{vmatrix} -5 & -2 \\ -1 & 2 \end{vmatrix} = -12.
\]
Therefore the required determinant is −6 + 12 = 6.
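The row-reduction strategy of the last two examples can be automated. This sketch (my own code, not from the notes) uses exact fractions to avoid rounding, flips the sign on each row interchange (property (ii)), leaves the determinant unchanged when adding a multiple of one row to another (property (vi)), and finishes with the product of the pivots of the resulting triangular matrix (property (vii)):

```python
from fractions import Fraction

def det_row_reduce(A):
    """|A| by Gaussian elimination with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in A]
    n = len(M)
    sign = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return 0                      # no pivot in this column => determinant 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign                  # row interchange flips the sign
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    result = Fraction(sign)
    for i in range(n):                    # triangular: product of diagonal entries
        result *= M[i][i]
    return int(result) if result.denominator == 1 else result

A = [[2, 1, 3, 1], [1, 0, 1, 1], [0, 2, 1, 0], [0, 1, 2, 3]]
print(det_row_reduce(A))  # 6
```

For large matrices this O(n³) approach is far faster than full cofactor expansion.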
 
1 0 −2
Example 2.2.7. Given that A =  3 1 4  , find adj(A).
5 2 −3

Solution: Since the cofactors of A are


1 4 3 4 3 1
A11 = (−1)1+1 = −11, A12 = (−1)1+2 = 29, A13 = (−1)1+3 = 1
2 −3 5 −3 5 2
0 −2 1 −2 1 0
A21 = (−1)2+1 = −4, A22 = (−1)2+2 = 7, A23 = (−1)2+3 = −2
2 −3 5 −3 5 2
0 −2 1 −2 1 0
A31 = (−1)3+1 = 2, A32 = (−1)3+2 = −10, A33 = (−1)3+3 = 1
4 1 3 4 3 1
The Matrix of cofactors is  
−11 29 1
 −4 7 −2 
2 −10 1
and thus the adjoint of A is  
−11 −4 2
adj(A) =  29 7 −10 
1 −2 1
The Matrix adjoint is useful in finding the inverse of a non-singular matrix. 2
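The two-step construction used above — build the matrix of cofactors, then transpose — can be sketched in NumPy. The function name `adjugate` is our own label, not a library routine:

```python
import numpy as np

def adjugate(A):
    """Adjoint of A: the transpose of the matrix of cofactors."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adj(A) is the transpose of the cofactor matrix

A = [[1, 0, -2],
     [3, 1,  4],
     [5, 2, -3]]
print(adjugate(A).round().astype(int))
# rows: [-11 -4 2], [29 7 -10], [1 -2 1], as in Example 2.2.7
```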
Example 2.2.8. Given the matrix
 
4 3 2
A= 1 3 1 
2 0 2
compute the determinant of A using the cofactor expansion technique.

Solution: The cofactors of A are


A11 = (-1)^2 | 3 1 ; 0 2 | = 6,    A12 = (-1)^3 | 1 1 ; 2 2 | = 0,    A13 = (-1)^4 | 1 3 ; 2 0 | = -6
A21 = (-1)^3 | 3 2 ; 0 2 | = -6,   A22 = (-1)^4 | 4 2 ; 2 2 | = 4,    A23 = (-1)^5 | 4 3 ; 2 0 | = 6
A31 = (-1)^4 | 3 2 ; 3 1 | = -3,   A32 = (-1)^5 | 4 2 ; 1 1 | = -2,   A33 = (-1)^6 | 4 3 ; 1 3 | = 9
The Matrix of cofactors is  
6 0 −6
 −6 4 6 
−3 −2 9
and |A| = 4(6) + 3(0) + 2(−6) = 12. 2
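Cofactor expansion along the first row generalizes to any order and can be written as a short recursive routine. This is an illustrative sketch only — the method is exponential in cost and practical only for small matrices:

```python
def det_cofactor(M):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor of entry (0, j): drop row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_cofactor(minor)
    return total

A = [[4, 3, 2],
     [1, 3, 1],
     [2, 0, 2]]
print(det_cofactor(A))  # 12, matching Example 2.2.8
```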
Exercise 2.2.3.

Using properties of determinants, find the determinants of the following matrices if possible.
(a) A = ( 2 1 6 )
        ( 7 4 2 )

(b) A = ( 1 2 3 )
        ( 4 5 6 )
        ( 8 8 9 )

(c) A = ( 2 0 0 )
        ( 0 4 0 )
        ( 0 0 3 )

(d) A = ( 9 8 6 )
        ( 7 4 0 )
        ( 9 8 6 )

(e) A = ( 1 1 2 1 )
        ( 3 1 4 5 )
        ( 7 6 1 2 )
        ( 1 1 3 4 )

Theorem 2.2.1. A square matrix has an inverse if and only if its determinant is non zero.
Theorem 2.2.2. If a matrix A is invertible, then det(A⁻¹) = 1/det(A).

Proof: A is invertible implies that det(A) ≠ 0 and AA⁻¹ = I. Therefore det(AA⁻¹) = det(I) = 1, and det(A)det(A⁻¹) = 1 ⇒ det(A⁻¹) = 1/det(A). 2
Theorem 2.2.3. Similar matrices have the same determinant.

Proof: If A and B are similar, then there exists an invertible matrix P such that A = P⁻¹BP. Now det(A) = det(P⁻¹BP) = det(P⁻¹)det(B)det(P) = (1/det(P)) det(B) det(P) = det(B). 2
Assignment 2.2.1.

1. Using properties of determinants, find the determinants of the following matrices if possible
     
(a) A = ( 2 1 6 )
        ( 7 4 2 ) ,

(b) B = ( 1 2 3 )
        ( 4 5 6 )
        ( 8 8 9 ) ,

(c) C = ( 2 0 0 )
        ( 0 4 0 )
        ( 0 0 3 ) ,

(d) D = ( 9 8 6 )
        ( 7 4 0 )
        ( 9 8 6 ) ,

(e) E = ( 1 1 2 1 )
        ( 3 1 4 5 )
        ( 7 6 1 2 )
        ( 1 1 3 4 ) .

2. Let A and B be square matrices of order 3 such that |A²B⁻¹| = −8. Use properties of determinants to compute

(a) |2A|,   (b) |B|,   (c) |AB|.

3. Let A and B be square matrices of order 4 such that |A| = 4 and |B| = 2. Find

(a) |BA|,   (b) |B²|,   (c) |2A|,   (d) |(AB)ᵗ|,   (e) |B⁻¹|.

4. Let A and B be square matrices of order 3 such that |A| = −2 and |B| = 5. Find

(a) |BA|,   (b) |B⁴|,   (c) |−2A|,   (d) |(AB)ᵗ|,   (e) |B⁻¹|.

5. An operation is done to get from the first matrix to the second. Identify what was done
and tell how it will affect the value of the determinant.
   
(a) A = ( a b )   and   B = ( a c )
        ( c d )             ( b d ) ,

(b) A = ( a b )   and   B = (  a    b  )
        ( c d )             ( a+b  c+d ) .
6. Determine whether the matrix has an inverse by checking whether the determinant is nonzero. If the determinant is nonzero, find the inverse using the formula for the inverse which involves the cofactor matrix.

    A = ( 1 3 3 )
        ( 2 4 1 )
        ( 0 1 1 )
 
7. Find the determinant of

    A = ( 2  0 -3  3 )
        ( 4 -2 -1  3 )
        ( 1 -1  0 -2 )
        ( 2  1 -2  1 )

by

(a) row reducing A to upper triangular form,
(b) the permutation inversion technique,
(c) the cofactor expansion technique.
 
8. Find the determinant of

    A = (  4 -5  2 -3 )
        ( -6  1  2 -4 )
        (  3 -8  5  2 )
        ( -7  0 -1  9 )

by

(a) row reducing A to upper triangular form,
(b) the permutation inversion technique,
(c) the cofactor expansion technique.
Chapter 3

Systems of Linear Equations

3.1 Introduction

Systems of simultaneous linear equations appear frequently in engineering and scientific problems.
The need for efficient methods that solve such systems was one of the historical forces behind the
introduction of matrices, and that need continues today, especially for solution techniques that
are applicable to large systems containing hundreds of equations and hundreds of variables. A
system of m linear equations in n variables x1 , x2 , . . . , xn has the general form
a11 x1 + a12 x2 + a13 x3 + ··· + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ··· + a2n xn = b2
.. (3.1)
.
am1 x1 + am2 x2 + am3 x3 + · · · + amn xn = bm
where the coefficients aij (i = 1, 2, . . . m; j = 1, 2, . . . n) and the quantities bi are all known scalars.
The variables in a linear equation appear only to the first power and are multiplied only by known
scalars. Linear equations do not involve products of variables, variables raised to powers other
than one, or variables appearing as arguments of transcendental functions. For systems containing
a few variables, it is common to denote the variables by distinct letters such as x, y, and z. Such
labeling is impractical for systems involving hundreds of variables; instead a single letter identifies
all variables with different numerical subscripts used to distinguish different variables, such as
x1 , x2 , . . . , xn .
Example 3.1.1. The system
2x + 3y − z = 60
4x − 5y + 6z = 35
of two equations in the variables x, y, and z is linear.
Example 3.1.2. The system
20x1 + 80x2 + 35x3 + 40x4 + 55x5 = −0.005
90x1 − 15x2 − 70x3 + 25x4 + 55x5 = 0.015
30x1 + 35x2 − 35x3 + 10x4 − 65x5 = −0.015
of three equations with five variables x1 , x2 , . . . , x5 is also linear.


Example 3.1.3. The system

2x + 3xy = 25

4√x + sin y = 50

is not linear for many reasons: it contains a product xy of variables; it contains the variable x
raised to the one-half power; and it contains the variable y as the argument of a transcendental
sine function.
Remark 3.1.1. Any linear system of the form (3.1) can be rewritten in the matrix form Ax = b with

    A = ( a11 a12 · · · a1n )        ( x1 )        ( b1 )
        ( a21 a22 · · · a2n )        ( x2 )        ( b2 )
        (  :    :         :  ) , x = (  :  ) , b = (  :  )
        ( am1 am2 · · · amn )        ( xn )        ( bm )

If m ≠ n, then A is not square and the dimensions of x and b will be different.
Definition 3.1.1. A solution to linear system (3.1) is a set of scalar values for the variables
x1 , x2 , . . . , xn that when substituted into each equation of the system makes each equation true.
The set of all solutions to the system is called its solution set.
Example 3.1.4. The scalar values x = 2 and y = 3 are a solution to the system

3x + 2y = 12
6x + 4y = 24

A second solution is x = −4 and y = 12. In contrast, the scalar values x = 1, y = 2, and z = 3


are not a solution to the system

2x + 3y + 4z = 20
4x + 5y + 6z = 32
7x + 8y + 9z = 40

because these values do not make the third equation true, even though they do satisfy the first
two equations of the system.
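Checking whether given scalars form a solution is just substitution: the residual Ax − b is zero exactly in the equations that are satisfied. A small NumPy sketch for the 3 × 3 system of Example 3.1.4:

```python
import numpy as np

# The three-equation system from Example 3.1.4.
A = np.array([[2, 3, 4],
              [4, 5, 6],
              [7, 8, 9]])
b = np.array([20, 32, 40])

x = np.array([1, 2, 3])      # candidate values x = 1, y = 2, z = 3
residual = A @ x - b         # zero entries mark satisfied equations
print(residual)              # [ 0  0 10]: the third equation fails
```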

From the linear system (3.1), one can obtain a matrix


 
    A = ( a11 a12 · · · a1n )
        ( a21 a22 · · · a2n )
        (  :    :         :  )
        ( am1 am2 · · · amn )

called the coefficient matrix, and

    Ā = (A, b) = ( a11 a12 · · · a1n  b1 )
                 ( a21 a22 · · · a2n  b2 )
                 (  :    :         :   :  )
                 ( am1 am2 · · · amn  bm )

called the augmented matrix of the system.

3.2 Solving a Linear System of Equations

When constructing an augmented matrix, we must write the unknowns in the same order in each
equation, and the constants must be on the right. The basic method for solving a system of
linear equations is to replace the given system by a new system that has the same solution set
but is easier to solve. This new system is generally obtained in a series of steps by applying the
elementary row operations:

1. Interchange two equations.

2. Multiply an equation through by a nonzero constant.

3. Add a multiple of one equation to another.

to eliminate the unknowns systematically. Since the rows (horizontal entries) of an augmented
matrix correspond to the equations in the associated system, these three operations correspond
to the following operations on the rows of the augmented matrix:

1. Interchange two rows; Ri ←→ Rj interchanges row i and row j

2. Multiply a row through by a nonzero constant; Ri −→ tRi multiplies row i by a non-zero


scalar t.

3. Add a multiple of one row to another row; Rj −→ Rj + tRi adds t times row i to row j.
Definition 3.2.1. Matrix A is row equivalent to matrix B, written A ∼ B, if B is obtained from
A by a sequence of elementary row operations.
Theorem 3.2.1. Row equivalence is an equivalence relation. That is:

(1) A ∼ A for any matrix A.

(2) If A ∼ B, then B ∼ A.

(3) If A ∼ B and B ∼ C, then A ∼ C.


Example 3.2.1.
   
    A = ( 1  2 0 )                  ( 1  2 0 )             ( 1  2 0 )
        ( 2  1 1 )  R2 → R2 + 2R3   ( 4 -1 5 )  R2 ↔ R3    ( 1 -1 2 )
        ( 1 -1 2 )                  ( 1 -1 2 )             ( 4 -1 5 )

                    R1 → 2R1        ( 2  4 0 )
                                    ( 1 -1 2 ) = B
                                    ( 4 -1 5 )

Thus A is row-equivalent to B. Clearly, B is also row equivalent to A by performing the inverse row operations R1 → (1/2)R1, R2 ↔ R3, R2 → R2 − 2R3.
Definition 3.2.2. A matrix A = (aij ) is said to be in row echelon form if it satisfies the following
conditions:

1. If a row does not consist entirely of zeros, then the first nonzero entry in the row is a 1; we
call this a leading 1.

2. All rows of A containing only zeros appear below the rows whose entries are not all zeros.

3. In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the row above it.

Remark 3.2.1. If a matrix satisfies conditions 2 and 3 of Definition 3.2.2, and the first nonzero
element is not 1 for every row that does not consist entirely of zeros, then it is called an echelon
matrix, or it is said to be in echelon form. These leading nonzero elements in their respective
rows, are called the pivots of the echelon matrix.

Definition 3.2.3. A matrix A = (aij ) is said to be in reduced row echelon form (or row canonical
form) if it satisfies the following conditions:

1. It is in row echelon form.

2. Each column that contains a leading 1 has zeros everywhere else in that column.

Example 3.2.2. The matrices


     
1 2 0 1 0 0 1 1 −1 1
 0 1 −2  ,  0 1 0 ,  0 1 −2 1 
0 0 0 0 0 1 0 0 0 1

are in row echelon form.

Example 3.2.3. The matrices


     
1 0 0 1 0 0 1 0 −1 0
 0 1 −9  ,  0 1 0 ,  0 1 −2 0 
0 0 0 0 0 1 0 0 0 1

are in reduced row-echelon form.

Remark 3.2.2.

(i) An m×n matrix can be changed to row echelon form by applying elementary row operations.

(ii) Two methods can be used to solve a linear system of equations Ax = b. These are

• Row echelon.
• Cramer's rule.

Exercise 3.2.1. Determine whether the following matrices are in reduced row echelon form or
not.        
    ( 0 0 0 )   ( 1 7 -1 1 )   ( 6 0 0 )   ( 1  0 -1 0 )
    ( 0 0 0 ) , ( 0 1 -2 1 ) , ( 0 1 0 ) , ( 9  1 -2 0 ) .
    ( 0 0 0 )   ( 0 0  0 2 )   ( 0 0 1 )   ( 0 -7  0 1 )

3.2.1 Row Echelon Method

To solve a linear system of equations Ax = b,

(i) Form the augmented matrix Ā = (A, b).

(ii) Reduce Ā = (A, b) to row echelon form.

(iii) Use back substitution to get the solution.

Remark 3.2.3. A linear system Ax = b may have

(i) no solution; this occurs when the row echelon form of Ā has a row whose only nonzero entry lies in the last column. The system is said to be inconsistent.

(ii) a unique solution; this occurs when, after reduction to row echelon form, every unknown corresponds to a pivot and no row signals inconsistency. The system is said to be consistent.

(iii) infinitely many solutions; this occurs when the system is consistent but the row echelon form of Ā has fewer pivots than unknowns, so at least one variable is free. The system is again said to be consistent.

Example 3.2.4. Solve the linear system of equations below using the method of reduction of a
matrix to row echelon form.

x1 − 3x2 + 5x3 = 3
x2 + 3x3 = 2
x2 + 3x3 = 5

Solution:    
    Ā = ( 1 -3 5 3 )   ( 1 -3 5 3 )
        ( 0  1 3 2 ) ~ ( 0  1 3 2 )
        ( 0  1 3 5 )   ( 0  0 0 3 )

The last row corresponds to 0x1 + 0x2 + 0x3 = 3, which is impossible, so the linear system is inconsistent. 2

Exercise 3.2.2. Solve the linear system of equations below

x + 2y + z = 3
2x + 3y + 3z = 3
−3x − 4y − 5z = −5

Example 3.2.5. Solve the linear system of equations below using the method of reduction of a
matrix to row-echelon form.
x 1 + x2 + x3 = 6
3x1 − 2x2 + 3x3 = 8
−2x1 + 4x2 − 3x3 = −3

Solution:
 
    Ā = (  1  1  1  6 )
        (  3 -2  3  8 )   R2 - 3R1 → R2, R3 + 2R1 → R3
        ( -2  4 -3 -3 )

      ~ ( 1  1  1   6 )
        ( 0 -5  0 -10 )   5R3 + 6R2 → R3
        ( 0  6 -1   9 )

      ~ ( 1  1  1   6 )
        ( 0 -5  0 -10 )   R2 → -(1/5)R2, R3 → -(1/5)R3
        ( 0  0 -5 -15 )

      ~ ( 1 1 1 6 )
        ( 0 1 0 2 )
        ( 0 0 1 3 ) .

By back substitution, we have x3 = 3, x2 = 2, and x1 + x2 + x3 = 6, so (x1, x2, x3) = (1, 2, 3). The solution is unique. 2
Remark 3.2.4. The method used to solve Example 3.2.5, where the final form of the augmented
matrix is in row echelon form, is known as Gaussian elimination method.
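The elimination-plus-back-substitution procedure can be sketched as one routine. A minimal NumPy illustration for square nonsingular systems; the partial-pivoting row swap is an added numerical safeguard, not part of the hand computation above:

```python
import numpy as np

def gaussian_solve(A, b):
    """Gaussian elimination to row echelon form, then back substitution.
    A sketch for systems with a unique solution (square, nonsingular A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    Ab = np.hstack([A, b.reshape(-1, 1)])  # augmented matrix (A | b)

    # Forward elimination.
    for k in range(n):
        p = k + np.argmax(np.abs(Ab[k:, k]))  # partial pivoting
        Ab[[k, p]] = Ab[[p, k]]
        for i in range(k + 1, n):
            Ab[i] -= (Ab[i, k] / Ab[k, k]) * Ab[k]

    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (Ab[i, -1] - Ab[i, i+1:n] @ x[i+1:]) / Ab[i, i]
    return x

# The system of Example 3.2.5.
A = [[1, 1, 1], [3, -2, 3], [-2, 4, -3]]
b = [6, 8, -3]
print(gaussian_solve(A, b))  # [1. 2. 3.]
```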
Example 3.2.6. Solve the linear system
x1 − 3x2 + 5x4 = 4
−x1 + 3x2 + x3 − 3x4 = −11
−x1 + 3x2 − 5x3 + x5 = −3
3x1 − 9x2 + 15x4 = 12

Solution:
 
    Ā = (  1 -3  0  5 0   4 )
        ( -1  3  1 -3 0 -11 )   R2 + R1 → R2, R3 + R1 → R3, R4 - 3R1 → R4
        ( -1  3 -5  0 1  -3 )
        (  3 -9  0 15 0  12 )

      ~ ( 1 -3  0 5 0  4 )
        ( 0  0  1 2 0 -7 )   R3 + 5R2 → R3
        ( 0  0 -5 5 1  1 )
        ( 0  0  0 0 0  0 )

      ~ ( 1 -3 0  5 0   4 )
        ( 0  0 1  2 0  -7 )
        ( 0  0 0 15 1 -34 ) .
Note that the number of unknowns (5) exceeds the number of nonzero rows (3) after reducing to row echelon form, so there are free variables. Let x5 = s and x2 = t. Then 15x4 + x5 = −34 gives x4 = (−34 − s)/15; x3 + 2x4 = −7 gives x3 = −7 − 2x4; and x1 − 3x2 + 5x4 = 4 gives x1 = 4 + 3t − 5x4. To get a particular solution, we assign values to s and t. The variables x2 and x5 are called free variables. Observe that s and t can take on many values, thus giving infinitely many solutions. 2

Example 3.2.7. Solve the linear system

x1 + x2 + x3 = 2
2x1 + 3x2 − x3 = 8
x1 − x2 − x3 = −8

Solution:
 
    Ā = ( 1  1  1  2 )
        ( 2  3 -1  8 )   R2 - 2R1 → R2, R3 - R1 → R3
        ( 1 -1 -1 -8 )

      ~ ( 1  1  1   2 )
        ( 0  1 -3   4 )   R1 → R1 - R2, R3 + 2R2 → R3
        ( 0 -2 -2 -10 )

      ~ ( 1 0  4 -2 )
        ( 0 1 -3  4 )   R3 → -(1/8)R3
        ( 0 0 -8 -2 )

      ~ ( 1 0  4  -2 )
        ( 0 1 -3   4 )   R1 - 4R3 → R1, R2 + 3R3 → R2
        ( 0 0  1 1/4 )

      ~ ( 1 0 0   -3 )
        ( 0 1 0 19/4 )
        ( 0 0 1  1/4 ) .

By back substitution, we have x3 = 1/4, x2 = 19/4, x1 = −3. 2

Remark 3.2.5. The method used to solve Example 3.2.7, where the final form of the augmented
matrix is in reduced row echelon form, is known as Gauss-Jordan elimination method.
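The Gauss-Jordan reduction to reduced row echelon form can also be sketched as a small routine (our own illustrative implementation, not a library call):

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce M to reduced row echelon form by Gauss-Jordan elimination."""
    M = np.asarray(M, dtype=float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        p = r + np.argmax(np.abs(M[r:, c]))   # best pivot in column c
        if abs(M[p, c]) < tol:
            continue                           # no pivot in this column
        M[[r, p]] = M[[p, r]]                  # move pivot row up
        M[r] /= M[r, c]                        # make the leading entry a 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]         # clear the rest of column c
        r += 1
    return M

# Augmented matrix of Example 3.2.7.
Ab = [[1,  1,  1,  2],
      [2,  3, -1,  8],
      [1, -1, -1, -8]]
print(rref(Ab))  # last column gives x1 = -3, x2 = 19/4, x3 = 1/4
```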

Exercise 3.2.3. Solve the linear system using the reduction of matrix into row echelon form

x1 − x2 + x3 = 1
4x1 − 2x2 + 8x3 = 1
3x1 − x2 + 7x3 = 0
6x1 − 4x2 + 10x3 = 3

Example 3.2.8. Solve the linear system using Gauss-Jordan elimination method

x1 + 2x2 − 3x3 − 2x4 + 4x5 = 1


2x1 + 5x2 − 8x3 − x4 + 6x5 = 4
x1 + 4x2 − 7x3 + 5x4 + 2x5 = 8

Solution:
 
    Ā = ( 1 2 -3 -2 4 1 )
        ( 2 5 -8 -1 6 4 )   R2 - 2R1 → R2, R3 - R1 → R3
        ( 1 4 -7  5 2 8 )

      ~ ( 1 2 -3 -2  4 1 )
        ( 0 1 -2  3 -2 2 )   R3 - 2R2 → R3
        ( 0 2 -4  7 -2 7 )

      ~ ( 1 2 -3 -2  4 1 )
        ( 0 1 -2  3 -2 2 )   R1 + 2R3 → R1, R2 - 3R3 → R2
        ( 0 0  0  1  2 3 )

      ~ ( 1 2 -3 0  8  7 )
        ( 0 1 -2 0 -8 -7 )   R1 - 2R2 → R1
        ( 0 0  0 1  2  3 )

      ~ ( 1 0  1 0 24 21 )
        ( 0 1 -2 0 -8 -7 )
        ( 0 0  0 1  2  3 ) .

Using back substitution we obtain

x1 + x3 + 24x5 = 21
x2 − 2x3 − 8x5 = −7
x4 + 2x5 = 3

which is equivalent to

x1 = 21 − x3 − 24x5
x2 = −7 + 2x3 + 8x5
x4 = 3 − 2x5

Here x1 , x2 , x4 are the dependent (or pivotal or leading) variables while x3 and x5 are the free
variables. We can set free variables equal to parameters, say x3 = s, x5 = t. This process yields

x1 = 21 − s − 24t, x2 = −7 + 2s + 8t, x3 = s, x4 = 3 − 2t, x5 = t.

Example 3.2.9. Solve the linear system below

3x − y + 7z = 0
2x − y + 4z = 1/2
x − y + z = 1
6x − 4y + 10z = 3

Solution:  
    Ā = ( 3 -1  7   0 )
        ( 2 -1  4 1/2 )   R1 ↔ R3
        ( 1 -1  1   1 )
        ( 6 -4 10   3 )

      ~ ( 1 -1  1   1 )
        ( 2 -1  4 1/2 )   R2 - 2R1 → R2, R3 - 3R1 → R3, R4 - 6R1 → R4
        ( 3 -1  7   0 )
        ( 6 -4 10   3 )

      ~ ( 1 -1 1    1 )
        ( 0  1 2 -3/2 )   R1 + R2 → R1, R3 - 2R2 → R3, R4 - R3 → R4
        ( 0  2 4   -3 )
        ( 0  2 4   -3 )

      ~ ( 1 0 3 -1/2 )
        ( 0 1 2 -3/2 )
        ( 0 0 0    0 )
        ( 0 0 0    0 ) .
The augmented matrix has been converted to reduced row echelon form. Let z = t; then the
complete solution is
x = −1/2 − 3t,   y = −3/2 − 2t,   z = t.
2
Exercise 3.2.4. Solve the following linear systems of equations.

(i)  3x + y − z = 1
     5x − y + z = −1

(ii) 2x + y + 3z = 4
     4x + y + 4z = 1
     3x + y + 2z = 0

(iii) x1 + x2 − x3 + 2x4 = 1
      x1 − x2 + 7x3 + 4x4 = 1
      −5x1 + 3x2 − 15x3 − 6x4 = 9

(iv) 2x1 + 2x2 + x3 − 4x4 = 1
     2x1 + 3x2 − x4 = −1
     x1 − 6x2 + 3x3 − 8x4 = 7
Example 3.2.10. Find conditions that must be satisfied by b1 , b2 , and b3 for the system to be
consistent
x1 − 2x2 + 5x3 = b1
4x1 − 5x2 + 8x3 = b2
−3x1 + 3x2 − 3x3 = b3

Solution: We form the augmented matrix and reduce it to row echelon form:
   
1 −2 5 b1 1 −2 5 b1
 4 −5 8 b2  ∼  0 3 −12 b2 − 4b1 
−3 3 −3 b3 0 −3 12 b3 + 3b1
   
1 −2 5 b1 1 −2 5 b1
∼  0 3 −12 b2 − 4b1 ∼ 0 3 −12 b2 − 4b1 
0 0 0 b3 + 3b1 + b2 − 4b1 0 0 0 b3 + b2 − b1
 
1 −2 5 b1
b2 −4b1
∼  0 1 −4 3
.
0 0 0 b3 + b2 − b1

Thus the system is consistent if b1 = b2 + b3. In this case the solution is

    x2 = (b2 - 4b1 + 12s)/3,  s ∈ R
    x1 = b1 - 5s + 2x2
       = b1 - 5s + (2/3)b2 - (8/3)b1 + 8s
       = -(5/3)b1 + (2/3)b2 + 3s
    x3 = s

or equivalently,

    ( x1 )   ( -5/3 )      ( 2/3 )      ( 3 )
    ( x2 ) = ( -4/3 ) b1 + ( 1/3 ) b2 + ( 4 ) s ,  s ∈ R.
    ( x3 )   (   0  )      (  0  )      ( 1 )

2

Example 3.2.11. Show that the following system is consistent if and only if c = 2a − 3b and
solve the system in this case.

2x − y + 3z = a
3x + y − 5z = b
−5x − 5y + 21z = c

Solution: We form the augmented matrix and reduce it to row echelon form:
   
    ( 2 -1  3 | a )   (  1  2 -8 | b - a )
    ( 3  1 -5 | b ) ~ (  2 -1  3 | a     )
    (-5 -5 21 | c )   ( -5 -5 21 | c     )

  ~ ( 1  2  -8 | b - a       )   ( 1  2  -8 | b - a        )
    ( 0 -5  19 | 3a - 2b     ) ~ ( 0  5 -19 | 5b - 5a + c  )
    ( 0  5 -19 | 5b - 5a + c )   ( 0  0   0 | 3b - 2a + c  )

  ~ ( 1 2    -8 | b - a        )   ( 1 0  -2/5 | (a + b)/5   )
    ( 0 1 -19/5 | (2b - 3a)/5  ) ~ ( 0 1 -19/5 | (2b - 3a)/5 ) .
    ( 0 0     0 | 3b - 2a + c  )   ( 0 0     0 | 3b - 2a + c )

From the last matrix, we see that the original system is consistent if 3b − 2a + c = 0 and the
solution is
x = (b + a)/5 + (2/5)t,   y = (2b − 3a)/5 + (19/5)t,   z = t,
where z is the free variable. 2

Definition 3.2.4. A linear system Ax = b is homogeneous if b = 0. The system Ax = 0 is


always consistent as x1 = 0, . . . , xn = 0 is a solution. This solution is called the trivial solution.
Any other solution is called a non-trivial solution.

Theorem 3.2.2. A homogeneous system of m linear equations in n unknowns always has a


non-trivial solution if m < n. In fact, it has infinitely many solutions.

Proof: It is left as an exercise to the reader. 2



Remark 3.2.6. Recall that a homogeneous system of linear equations is of the form Ax = 0;

(i) if, after reducing to row echelon form, there are as many pivots as unknowns, then the system has only the trivial solution.

(ii) if there are more unknowns than nonzero rows in the row echelon form, then the system has non-trivial solutions.

(iii) if the coefficient matrix A is a square matrix, and det(A) = 0, the system has a non-trivial solution.

(iv) if the coefficient matrix A is a square matrix, and det(A) ≠ 0, the system has only the trivial solution.
Example 3.2.12. Solve the linear system
x1 + 2x2 + x3 = 0
3x1 + x2 + x3 = 0
2x1 + 2x2 − 3x3 = 0

Solution: Note that n = m = 3 before row echelon reduction. The augmented matrix is
     
    Ā = ( 1 2  1 0 )   ( 1  2  1 0 )   ( 1  2     1 0 )
        ( 3 1  1 0 ) ~ ( 0 -5 -2 0 ) ~ ( 0 -5    -2 0 ) .
        ( 2 2 -3 0 )   ( 0 -2 -5 0 )   ( 0  0 -21/5 0 )

Since there are three pivots for the three unknowns after row echelon reduction, the system has only the trivial solution (x1, x2, x3) = (0, 0, 0). 2
Example 3.2.13. Solve the linear system
x1 + x2 − 2x3 + x4 = 0
x1 + 2x2 + x3 + 3x4 = 0

Solution: Note that n = 4, m = 2; the augmented matrix is


   
    Ā = ( 1 1 -2 1 0 ) ~ ( 1 1 -2 1 0 ) .
        ( 1 2  1 3 0 )   ( 0 1  3 2 0 )

Since n = 4 > m = 2 after reducing to row echelon form, the system has non-trivial solutions. Let x3 = p, x4 = s; then x2 = −3p − 2s and x1 = 5p + s. The solution is

    ( x1 )   ( 5p + s   )     (  5 )     (  1 )
    ( x2 ) = ( -3p - 2s ) = p ( -3 ) + s ( -2 ) ,
    ( x3 )   (    p     )     (  1 )     (  0 )
    ( x4 )   (    s     )     (  0 )     (  1 )

which is a non trivial solution. 2


Exercise 3.2.5. Show that the following linear system has no trivial solution.
x1 − 4x2 + 2x3 = 0
x1 + x2 + 2x3 = 0
2x1 − 3x2 + 4x3 = 0

3.2.2 Cramer’s Rule Method

If Ax = b is a system of n linear equations in n unknowns such that det(A) ≠ 0, then the system has a unique solution. This solution is

    x1 = det(A1)/det(A),  x2 = det(A2)/det(A),  . . . ,  xn = det(An)/det(A),

where Aj is the matrix obtained by replacing the entries in the jth column of A by the entries in the matrix b = ( b1, b2, . . . , bn )ᵗ.
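The rule translates directly into code. A hedged sketch in NumPy; the function name `cramer_solve` is our own:

```python
import numpy as np

def cramer_solve(A, b):
    """Cramer's rule: x_j = det(A_j)/det(A), where A_j is A with
    column j replaced by b. Valid only when det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                      # replace column j with b
        x[j] = np.linalg.det(Aj) / d
    return x

# The system of Example 3.2.14.
A = [[1, 1, 1], [2, 3, 4], [1, 2, 1]]
b = [1, 1, 2]
print(cramer_solve(A, b))  # x1 = 1, x2 = 1, x3 = -1
```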
Example 3.2.14. Solve the linear system using Cramer’s rule.
x1 + x2 + x3 = 1
2x1 + 3x2 + 4x3 = 1
x1 + 2x2 + x3 = 2

  

Solution: Let

    A = ( 1 1 1 )        ( 1 )
        ( 2 3 4 ) ,  b = ( 1 ) .
        ( 1 2 1 )        ( 2 )

Then

    A1 = ( 1 1 1 ; 1 3 4 ; 2 2 1 )  ⇒  |A1| = −2,
    A2 = ( 1 1 1 ; 2 1 4 ; 1 2 1 )  ⇒  |A2| = −2,
    A3 = ( 1 1 1 ; 2 3 1 ; 1 2 2 )  ⇒  |A3| = 2.

By the formula xj = |Aj|/|A| with |A| = −2,

    x1 = |A1|/|A| = −2/−2 = 1,
    x2 = |A2|/|A| = −2/−2 = 1,
    x3 = |A3|/|A| = 2/−2 = −1.
2
Assignment 3.2.1.

1. Use the Gaussian elimination method to solve each of the following systems of linear equa-
tions. In each case, indicate whether the system is consistent or inconsistent. Give the
complete solution set, and if the solution set is infinite, specify three particular solutions.
−5α1 − 2α2 + 2α3 = 14
(a) 3α1 + α2 − α3 = −8
2α1 + 2α2 − α3 = −3
3α1 − 2α2 + 4α3 = −54
(b) −α1 + α2 − α3 = 20
5α1 − 4α2 + 8α3 = −83
6x1 − 12x2 − 5x3 + 16x4 − 2x5 = −53
(c) −3x1 + 6x2 + 3x3 − 9x4 + x5 = 29
−4x1 + 8x2 + 3x3 − 10x4 + x5 = 33
2. Suppose that each of the following is the final augmented matrix obtained after Gaussian
elimination. In each case, give the complete solution set for the corresponding system of
linear equations.
   
(a) ( 1 -5  2  3 -2 -4 )
    ( 0  1 -1 -3 -7 -2 )
    ( 0  0  0  1  2  5 )
    ( 0  0  0  0  0  0 )

(b) ( 1 -3 6  0 -2  4 -3 )
    ( 0  0 1 -2  8 -1  5 )
    ( 0  0 0  0  0  1 -2 )
    ( 0  0 0  0  0  0  0 )

(c) ( 1 4 -8 -1  2 -3 -4 )
    ( 0 1 -7  2 -9 -1 -3 )
    ( 0 0  0  0  1 -4  2 )
    ( 0 0  0  0  0  0  0 )

(d) ( 1 -7 -3 -2 -1 -5 )
    ( 0  0  1  2  3  1 )
    ( 0  0  0  1 -1  4 )
    ( 0  0  0  0  0 -2 )

3. Determine whether the following matrices A and B are row equivalent. [Hint: Do they have
the same row reduced echelon form matrix?]
   
    A = (   6  1 -16 -2   2 )      B = (  9 -2 -31   2   32 )
        ( -27  5  91 -8 -98 )          ( -3 20  49 -22 -152 )
        (  21 -4 -71  6  76 )          ( -3 39  69 -43 -279 )

4. A certain number of nickels, dimes, and quarters totals $16.50. There are twice as many
dimes as quarters, and the total number of nickels and quarters is 20 more than the number
of dimes. Find the correct number of each type of coin.

5. A mixture of 6 gallons of chemical A, 8 gallons of chemical B, and 13 gallons of chemical


C is required to kill a destructive crop insect. Commercial spray X contains 1, 2, and
2 parts, respectively, of these chemicals. Commercial spray Y contains only chemical C.
Commercial spray Z contains chemicals A, B, and C in equal amounts. How much of each
type of commercial spray is needed to get the desired mixture?

6. Determine if the system is consistent. If so, is the solution unique?

x + 2y + z − w = 2 x + 2y + z − w = 2
x − y + z + w = 1 x − y + z + w = 0
(a) (b)
2x + y − z = 1 2x + y − z = 1
4x + 2y + z = 5 4x + 2y + z = 3

7. Find a homogeneous linear system of two equations such that

x1 = 1, x2 = −1, x3 = 1, x4 = 2

and
x1 = 2, x2 = 0, x3 = 3, x4 = −1
are solutions of the system.

8. Consider the homogeneous system of equations, Ax = 0. Show that, if x1 and x2 are


solutions of this system and is a real number, then x1 + x2 and λx1 are also solutions of the
system. Does the result hold for inhomogeneous systems?

9. Let A be an m × n matrix. If b is a non-zero m-vector, explain why the systems Ax = b


and Ax = 0 are not equivalent.

10. You have a system of k equations in two variables, k ≥ 2. Explain the geometric significance
of

(a) No solution,
(b) A unique solution,
(c) An infinite number of solutions.

11. Do the three lines, x + 2y = 1, 2x − y = 1, and 4x + 3y = 3 have a common point of


intersection? If so, find the point and if not, tell why they don’t have such a common point
of intersection.

12. True or False: Explain if true or give a counterexample if false.

(a) The augmented matrix for a linear system contains all the essential information from
the system.
(b) It is possible for a linear system of equations to have exactly three solutions.
(c) A consistent system must have exactly one solution.
(d) A homogeneous system of linear equations must have at least one solution.
(e) Multiplying matrices and then performing a row operation on the product has the
same effect as performing the row operation on the first matrix and then calculating
the product.

13. Suppose a system of equations has fewer equations than variables. Must such a system be
consistent? If so, explain why and if not, give an example which is not consistent.

14. If a system of equations has more equations than variables, can it have a solution? If so,
give an example and if not, tell why not.

15. Choose h and k such that the augmented matrix shown has one solution. Then choose h and k such that the system has no solutions. Finally, choose h and k such that the system has infinitely many solutions:

    ( 1 h 2 )
    ( 2 4 k ) .

Repeat the above steps for

    ( 1 2 2 )
    ( 2 h k ) .

Similarly do the same for

    ( 1 2  3 )
    ( h k -9 ) .

16. Determine the values of k such that the system of linear equations

x − y + 2z = 0
−x + y − z = 0
x + ky + z = 0

has exactly one solution.

17. Find all values of λ for which the homogeneous system of linear equations has nontrivial solutions.

    (λ + 2)α1 − 2α2 + 3α3 = 0
    −2α1 + (λ − 1)α2 + 6α3 = 0
    α1 + 2α2 + λα3 = 0

18. Find (if possible) values of a, b and c such that the system of linear equations

2x − y + z = a
x + y + 2z = b
3y + 3z = c

has

(a) no solution,
(b) exactly one solution,
(c) infinitely many solutions.

19. Find the values of a and b for which the linear system

x + y + z + w = 1
ax + y + z + w = b
3x + 2y + aw = 1 + a

is consistent. Also find the complete solution when a = b = 2.

Exercise 3.2.6. Solve the following linear equations by the use of Cramer’s Rule where applicable.

(a) x1 − 4x2 + x3 = 6          (b) 3x1 − x2 + x3 = 4
    4x1 − x2 + 2x3 = −1            −x1 + 7x2 − 2x3 = 1
    2x1 + 2x2 − 3x3 = −20          2x1 + 6x2 − x3 = 5
Chapter 4

Inverses of Square Matrices

4.1 Introduction

Definition 4.1.1. A square matrix B is said to be an inverse of a square matrix A if

AB = BA = I.

This matrix is usually denoted by A−1 and it is of the same order as A, since the two products
are conformable.
Remark 4.1.1. A square matrix has an inverse iff it can be transformed into an identity matrix
with elementary row operations.

An inverse of a matrix can be obtained by the

(a) Method of elementary row operations.

(b) Adjoint method.

(c) Direct method.

4.2 The Method of Elementary Row Operations


Step 1: Create the augmented matrix [A|I], where A is an n × n matrix to be inverted and I is
an n × n identity matrix.

Step 2: Apply elementary row operations on [A|I] to transform A to row echelon form.

Step 3: If the row echelon form of A has zero elements along the main diagonal, stop: A does
not have an inverse. Otherwise continue.

Step 4: Use the elementary row operations on the augmented matrix to transform the left par-
tition to an n × n identity matrix.


Step 5: The right partition of the final augmented matrix is the inverse of A.
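Steps 1-5 can be sketched as one routine. A minimal NumPy illustration; the partial-pivoting row swap is an added numerical safeguard not listed in the steps:

```python
import numpy as np

def inverse_by_row_ops(A, tol=1e-12):
    """Invert A by Gauss-Jordan reduction of the augmented matrix [A | I]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # Step 1: form [A | I]
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))
        if abs(M[p, k]) < tol:             # Step 3: no pivot -> singular
            raise ValueError("matrix is singular")
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]     # Step 4: clear column k
    return M[:, n:]                        # Step 5: right partition is A^-1

A = [[1, 2], [3, 4]]
print(inverse_by_row_ops(A))  # [[-2, 1], [1.5, -0.5]], as in Example 4.2.1
```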

Example 4.2.1. Find the inverse of the matrix


 
1 2
A=
3 4

Solution:
     
    [A|I] = ( 1 2 | 1 0 ) ~ ( 1  2 |  1 0 ) ~ ( 1 2 |   1    0 )
            ( 3 4 | 0 1 )   ( 0 -2 | -3 1 )   ( 0 1 | 3/2 -1/2 )

          ~ ( 1 0 |  -2    1 )
            ( 0 1 | 3/2 -1/2 ) .  2
Example 4.2.2. Find the inverse of the matrix
 
0 1 1
A= 1 1 1 
1 1 3

Solution:    
    [A|I] = ( 0 1 1 | 1 0 0 )   ( 1 1 1 | 0 1 0 )
            ( 1 1 1 | 0 1 0 ) ~ ( 0 1 1 | 1 0 0 )
            ( 1 1 3 | 0 0 1 )   ( 1 1 3 | 0 0 1 )

          ~ ( 1 1 1 | 0  1 0 )   ( 1 1 1 | 0    1    0 )
            ( 0 1 1 | 1  0 0 ) ~ ( 0 1 1 | 1    0    0 )
            ( 0 0 2 | 0 -1 1 )   ( 0 0 1 | 0 -1/2  1/2 )

          ~ ( 1 0 0 | -1    1    0 )   ( 1 0 0 | -1    1    0 )
            ( 0 1 1 |  1    0    0 ) ~ ( 0 1 0 |  1  1/2 -1/2 )
            ( 0 0 1 |  0 -1/2  1/2 )   ( 0 0 1 |  0 -1/2  1/2 )
2
2
 
1 6 4
Example 4.2.3. Show that the matrix C =  2 4 −1  has no inverse.
−1 2 5

Solution:    
1 6 4 1 0 0 1 6 4 1 0 0
[C|I] =  2 4 −1 0 1 0  ∼  0 −8 −9 −2 1 0 
−1 2 5 0 0 1 0 8 9 1 0 1
 
1 6 4 1 0 0
∼ 0 −8
 −9 −2 1 0 
0 0 0 −1 1 1
Since there is a zero along the main diagonal, the matrix C has no inverse. 2
Theorem 4.2.1. The inverse of a matrix is unique.

Proof: Let B and C be inverses of a matrix A. Then AB = BA = I and AC = CA = I. Thus

C = CI = C(AB) = (CA)B = IB = B. 2

Theorem 4.2.2. If A and B are square nonsingular matrices, then

(a) (A−1 )−1 = A.

(b) (AB)−1 = B −1 A−1 .

(c) (At )−1 = (A−1 )t .


 
(d) (kA)⁻¹ = (1/k)A⁻¹, k ≠ 0.

Proof:

(a) Since AA−1 = A−1 A = I, then A−1 is invertible and (A−1 )−1 = A.

(b) We note that (B −1 A−1 )(AB) = B −1 (A−1 A)B = B −1 IB = B −1 B = I. Thus AB is invertible,


and since the inverse is unique, it follows that (AB)−1 = B −1 A−1 .

(c) We note that (At )(A−1 )t = (A−1 A)t = I t = I. So At is invertible, and since the inverse is
unique, then (At )−1 = (A−1 )t .
 
(d) Note that (kA)((1/k)A⁻¹) = (k · (1/k))AA⁻¹ = AA⁻¹ = I. So kA is invertible and its inverse is (1/k)A⁻¹.

Theorem 4.2.3. The inverse of a nonsingular symmetric matrix is symmetric.

Proof: By definition, if A is symmetric then Aᵗ = A. Thus (A⁻¹)ᵗ = (Aᵗ)⁻¹ = A⁻¹, so A⁻¹ is symmetric. 2

Theorem 4.2.4. If the coefficient matrix A is invertible, then the linear system Ax = b has a
solution x = A−1 b.

Proof: Since A is invertible we have A−1 (Ax) = A−1 b ⇔ Ix = A−1 b ⇔ x = A−1 b. Hence with
this formula we can solve for all variables in the linear system at the same time. 2

Theorem 4.2.5. If A is invertible, then the linear system Ax = b has a unique solution.

Proof: Let w = A−1 b. Then Aw = A(A−1 b) = Ib = b; so w is one solution to the linear


system Ax = b. If y is another solution to the system, then Ay = b = Aw. Multiplying either
side of this equation by A−1 , we obtain A−1 (Ay) = A−1 (w) ⇔ Iy = Iw ⇔ y = w. 2

4.3 The Adjoint Method

Theorem 4.3.1. Let A be a square matrix and I the identity matrix of the same order as A. Then adj(A)·A = det(A)·I.

Corollary 4.3.1. If A is a square matrix and |A| ≠ 0, then A⁻¹ = (1/det(A))·adj(A).

Proof: Note that adj(A)·A = det(A)·I ⇔ (1/det(A))·adj(A)·A = I ⇔ (1/det(A))·adj(A)·AA⁻¹ = IA⁻¹ ⇔ (1/det(A))·adj(A) = A⁻¹. 2
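A minimal NumPy sketch of the adjoint method; the helper name `inverse_by_adjoint` is our own:

```python
import numpy as np

def inverse_by_adjoint(A):
    """A^-1 = adj(A)/det(A) for a nonsingular square matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("singular matrix: no inverse")
    C = np.empty((n, n))                   # matrix of cofactors
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / d                         # adj(A) = C^t

A = [[1, 1, -1], [0, 0, 1], [2, 1, 2]]
Ainv = inverse_by_adjoint(A)
print(Ainv)  # equals adj(A) here since det(A) = 1 (see Example 4.3.1)
```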
Example 4.3.1. Find the inverse of the matrix
 
    A = ( 1 1 -1 )
        ( 0 0  1 )
        ( 2 1  2 )

Solution: We construct the cofactor matrix:


A11 = (-1)^(1+1) | 0 1 ; 1 2 | = -1,   A12 = (-1)^(1+2) | 0 1 ; 2 2 | = 2,    A13 = (-1)^(1+3) | 0 0 ; 2 1 | = 0
A21 = (-1)^(2+1) | 1 -1 ; 1 2 | = -3,  A22 = (-1)^(2+2) | 1 -1 ; 2 2 | = 4,   A23 = (-1)^(2+3) | 1 1 ; 2 1 | = 1
A31 = (-1)^(3+1) | 1 -1 ; 0 1 | = 1,   A32 = (-1)^(3+2) | 1 -1 ; 0 1 | = -1,  A33 = (-1)^(3+3) | 1 1 ; 0 0 | = 0
0 1 0 1 0 0

Thus the cofactor matrix is given by


     
A11 A12 A13 −1 2 0 −1 −3 1
 A21 A22 A23  =  −3 4 1  =⇒ adj(A) =  2 4 −1 
A31 A32 A33 1 −1 0 0 1 0

Now |A| = A11 a11 + A12 a12 + A13 a13 = (−1)(1) + (2)(1) + (0)(−1) = −1 + 2 + 0 = 1. Therefore
 
−1 −3 1
adj(A) 
A−1 = = 2 4 −1 
|A|
0 1 0
2

Example 4.3.2. Solve the linear system

x + 2y = 11
x + y = 7

for the values of x and y.


Solution: Now with A = ( 1 2 ; 1 1 ),

    ( x ) = A⁻¹b = ( -1  2 )( 11 ) = ( 3 )
    ( y )          (  1 -1 )(  7 )   ( 4 )

Thus x = 3 and y = 4. 2

Assignment 4.3.1.

1. If the matrix product A−1 B is known, how could you calculate B −1 A without necessarily
knowing what A and B are?

2. Let A be a symmetric nonsingular matrix. Prove that A−1 is symmetric.

3. (a) You have already seen that every square matrix containing a row of zeroes must be
singular. Why must every square matrix containing a column of zeroes be singular?
(b) Why must every diagonal matrix with at least one zero main diagonal entry be singular?
(c) Why must every upper triangular matrix with no zero entries on the main diagonal be
nonsingular?

4. If A−1 exists, does Ax = 0 have a nontrivial solution? Why or why not?

5. Explain why Ax = 0 always has a solution even when A−1 does not exist.

(a) What can you conclude about A if the solution is unique?


(b) What can you conclude about A if the solution is not unique?

6. Find the solution set for the linear system

4α1 − 6α2 + α3 = 17
−α1 + 2α2 − α3 = −14
3α1 − 5α2 + α3 = 23

by calculating the inverse of the coefficient matrix and then using matrix multiplication.

7. Suppose AB = AC and A is an invertible n×n matrix. Does it follow that B = C? Explain


why or why not. What if A were a non-invertible n × n matrix?

8. What form does a 2 × 2 matrix have if it commutes with every other 2 × 2 matrix?

Exercise 4.3.1.

1. Find A−1 using

(a) the adjoint method,


(b) the method of elementary row operations,

where
Inverses of Square Matrices 41
   
(i) A = [2 0 1; 3 2 4; 6 3 7].      (iv) A = [2 2 4; 1 0 1; 0 1 0].

(ii) A = [1 0 2; −1 3 0; 1 0 1].    (v) A = [1 2 −3; 2 5 −4; −3 −4 8].

(iii) A = [1 0 1; 1 2 −1; 4 −4 5].  (vi) A = [1 2 4 6; 0 1 2 0; 0 0 1 2; 0 0 0 2].

2. Solve the linear system


2x + z = 7
3x + 2y + 4z = 10
6x + 3y + 7z = −1

3. Let A be a square matrix of order n. Show that det[adj(A)] = [det(A)]n−1 .


 
4. If A = [1 4; −3 1], verify that A^2 − 2A + 13I2 = 0 and deduce that A^(−1) = −(1/13)(A − 2I2 ).
 
1 1 −1
5. Let A =  0 0 1 
2 1 2
(a) Verify that A3 = 3A2 − 3A + I3 .
(b) Express A4 in terms of A2 , A, and I3 and hence calculate A4 explicitly.
(c) Use part (a) to prove that A is non-singular and find A−1 explicitly.

6. (a) Let B be a square matrix of order n such that B 3 = 0. If A = In − B, prove that A


is non-singular and that A−1 = In + B + B 2 . Show that the system of linear equations
Ax = b has the solution x = b + Bb + B 2 b.
 
0 r s
(b) If B =  0 0 t , verify that B 3 = 0; use part(a) to determine (I3 − B)−1 .
0 0 0
7. Let A be an n × n matrix.

(a) If A2 = 0, prove that A is singular.


(b) If A^2 = A and A ≠ In , prove that A is singular.
 
1 a b
8. Prove that the real matrix A =  −a 1 c  is non-singular by proving that A is row-
−b −c 1
equivalent to I3 .

9. (a) If A is m × n and B is n × m, prove that AB is singular if m > n.


 
1 2
(b) Prove that A = is singular and find a non-singular matrix P such that P A
−2 −4
has a zero last row.
Inverses of Square Matrices 42
 
1 2 k
10. Find a rational number k for which the matrix A =  3 −1 1  is singular.
5 3 −5
 
a b
11. Prove that A = has no inverse if ad − bc = 0. [Hint: Use the equation A2 − (a +
c d
d)A + (ad − bc)I2 = 0.]
 
1 a b
12. Prove that the real matrix A =  −a 1 c  is non-singular.
−b −c 1
Semester I Makerere University Examinations 2013/2014 43

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

Name:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stud No:. . . . . . . . . . . . . . . . . . Reg No:. . . . . . . . . . . . . . . . . .

School:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Course:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MTH 1102: LINEAR ALGEBRA I TEST 1 MONDAY NOVEMBER 04, 2013


1:00 PM - 2:00 PM

Instructions to Candidates:

(i) Attempt ALL questions.

(ii) Each correct answer will earn 2 marks. Each wrong answer will earn -2 marks. A question
not attempted will earn 0 marks.

(iii) In Questions 1–5, answer True or False in the space provided. In Questions 6–9, circle the
correct answer. In Questions 10–15, write the final answer in the space provided.

1. If A and B are matrices such that AB and BA are both defined, then AB and BA are
square matrices. .........

2. If A is a square matrix and |A| = 0, then A is nonsingular. .........

3. If A is an m × n matrix such that m < n, then the linear system Ax = b is inconsistent.


.........

4. If A = (aij ) is an m × n matrix, then the matrix B = (aji ) is an n × m matrix. .........

5. If λ is a scalar and A is a square matrix of size n, then |λA| = λn |A|. .........

6. The matrices A, B, and C are respectively of sizes 4 × 5, 4 × 3, and 5 × 4. What is the


size of the matrix [(A + C t )B]t ? .........

(a) 4 × 3 (b) 3 × 5 (c) 3 × 4 (d) none of these.

7. A is a 4 × 4 square matrix such that |A| = −2. If B is a square matrix obtained from A by
interchanging the first and fourth row, then |B| is

(a) -8 (b) -2 (c) 2 (d) none of these.




1 2 3
8. If A =  3 2 1  , then the cofactor matrix of A is
2 0 2
Semester I Makerere University Examinations 2013/2014 44

(a) [−4 −4 −4; −4 −4 4; 4 8 −4]    (b) [4 −4 −4; −4 −4 4; −4 8 −4]
(c) [4 −4 −4; −4 −4 8; −4 4 −4]    (d) none of these.
 
1 1 0
9. If A =  0 1 1 , then tr(A2 ) is
−1 1 2

(a) 7 (b) 0 (c) 8 (d) none of these.


 
2 3
10. The inverse of the matrix A = is
1 5
 
(a) [−5/7 −3/7; −1/7 2/7]    (b) [5/7 −3/7; −1/7 2/7]    (c) [5/7 3/7; −1/7 2/7]    (d) none of these.

.................
 
11. The determinant of the matrix A = [1 0 0 0; 3 1 2 2; 1 0 −2 1; 2 0 0 1] is .................

12. Find (5A)^(−1) if A = [1 2 1; 0 1 2; 0 0 1].

13. Construct a 3 × 3 matrix A = (aij ) if aij = i^(j−1). ................................

 
1 3
14. Find the values of a scalar k for which the matrix A = is singular. .................
2 k2
15. Suppose that a and b are scalars. Determine the conditions a and b must satisfy for the
linear system

x + 2y + z = 2
y + 2z = 5
az = 4 − b2

to be inconsistent. .................

Good Luck
Semester I Makerere University Examinations 2014/2015 45

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

Name:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stud No:. . . . . . . . . . . . . . . . . . Reg No:. . . . . . . . . . . . . . . . . .

School:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Course:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MTH 1102: LINEAR ALGEBRA I TEST 1 WEDNESDAY OCTOBER 15, 2014


1:00 PM - 2:00 PM

Instructions to Candidates:

(i) Attempt ALL questions.

(ii) Each correct answer will earn 2 marks. A question failed or not attempted will earn 0
marks.

(iii) In Questions 1–5, answer True or False in the space provided. In Questions 6–10, circle the
correct answer. In Questions 11–15, write the final answer in the space provided.

1. If A and B are square matrices of the same order then (A + B)2 = A2 + 2AB + B 2 . .........

2. If A is a square matrix of order n then det(At ) = (−1)n det(A). .........

3. Two matrices are row equivalent if they have the same number of rows. .........

4. Suppose that O is a zero matrix of the same size as the square matrices A and B. If
AB = O, then A = O or B = O. .........

5. If x1 and x2 are solutions of the linear system Ax = 0, then x1 + x2 and λx1 (λ ∈ R) are
also solutions of Ax = 0. .........

6. Assume that B is a square matrix of order 3 with the property that B 2 = B. Which of the
following statements about B must be true?

(a) B = I3 .
(b) (B^t)^2 = B^t .
(c) det(B) = 0.
(d) det(B) = 1.
(e) None of these.

7. Let I3 and O3 respectively, be the identity matrix and the zero matrix of order 3. If A is a
4 × 3 matrix and B is a 3 × 1 matrix, what will A(I3 + O3 )(I3 − O3 )B − AB be?

(a) It is AB.
(b) It is a 4 × 1 zero matrix.
(c) It is −2AB.
(d) These matrix operations are not possible as stated.
Semester I Makerere University Examinations 2014/2015 46

(e) None of these.


8. When performing Gauss-Jordan elimination on a matrix with 3 rows, which of the
following is not an acceptable row operation?
(a) Swap row 2 and row 3.
(b) Multiply row 3 by 3/2.
(c) Add -3/2 to each entry in row 3.
(d) Multiply row 2 by 3/2 and add the answer to row 3.
(e) None of these.
9. If A is an upper triangular with all diagonal entries zero, then I + A is
(a) invertible.
(b) idempotent.
(c) singular.
(d) nilpotent.
(e) None of these.
 
k+1 3 7
10. Consider a matrix A =  2 k+1 1 . If tr(A) = 1 then the value of the scalar k
0 4 −5
is

(a) 4.    (b) 3/2.    (c) 2.    (d) 3.    (e) None of these.
 
11. Find the values of a scalar k for which the matrix A = [1 k^2; 2 3] is symmetric. .................
12. Suppose that A is a square matrix of order 3 such that |A| = 4. Then the determinant of
2A is equal to .................
 
13. The determinant of the matrix B = [0 0 0 1; 1 2 2 3; 0 −2 1 1; 0 0 1 2] is .................
 
3 0 0
14. Find (7A)−1 if A =  0 −4 0  . .................
0 0 2

15. Determine all values of the scalar a for which the following linear system has infinitely
many solutions:
x1 + x2 − x3 = 2
x1 + 2x2 + x3 = 3
x1 + x2 + (a2 − 5)x3 = a
.................

Good Luck
Semester I Makerere University Examinations 2015/2016 47

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

Name:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. Course:. . . . . . . . . . . . . . . School:. . . . . . . . . . . . . . . . . .

Stud No:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Reg No:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MTH 1102: LINEAR ALGEBRA 1 TEST 1 THURSDAY OCTOBER 08, 2015


1:00 PM - 2:00 PM

Instructions to Candidates:

Answer ALL questions in the provided space.

 
1 −2 −3
1. True or False; the matrix B =  2 0 5  is symmetric. Justify your answer.
−3 11 7

 
1 0 −1 0
2. True or False; the matrix A =  0 1 −2 0  is in row echelon form. Justify your
0 1 0 0
answer

3. Elementary row operations have been applied to a square matrix A of order 10 to obtain
an equivalent matrix B. Which of the following operations is not an acceptable
elementary row operation?

(a) The first row exchanged with the ninth row.


(b) One of the entries in A multiplied by the third row and then added to the fourth row.
(c) The fifth row multiplied by 1/4.
(d) None of these.

4. Give an example of a square matrix A = (aij ) of size 3 satisfying

       aij = 1 if |i − j| > 1,   aij = −1 if |i − j| ≤ 1.
Semester I Makerere University Examinations 2015/2016 48

5. Determine the order of the matrix Dt (BE) if the respective sizes of the matrices B, D, E,
are 4 × 5, 4 × 2, 5 × 4.

6. Show that if A and B are skew symmetric matrices of the same order, then A + B is also
skew symmetric.

7. If A is an invertible matrix show that |A−1 | = 1/|A|.

8. If A is an invertible square matrix of size 4 such that |A| = 5, find |(2A)−1 |.

9. The augmented matrix of a linear system Ax = b has been reduced to the form
 
[1 2 1 | 1; 0 1 4 | 5; 0 0 b−1 | a^2 − 4]
State the conditions on a and b such that the system

(i) has infinitely many solutions.


(ii) is inconsistent.

10. Suppose that A and B are matrices such that the products B^(−1)A and A^(−1)B are defined.
    Find B^(−1)A if A^(−1)B = [2 0 0; 0 2 0; 0 0 2].
Semester I Makerere University Examinations 2015/2016 49
 
1 −1 3
11. If the trace of the matrix B =  2 t2 2  is 3, what is the value of t?
1 1 −t

 
1 0 0
12. For what value of a is the matrix C =  2 2 + a 0  nonsingular?
7 1 3−a


1 1 −1
13. Find the determinant of the matrix D =  0 0 1 
2 1 2

   
1 1 −1 −1 2 0
14. If the cofactor matrix of A =  0 0 1  is Cof(A) =  −3 4 1 , find A−1 .
2 1 2 1 −1 0

15. Is the system of equations


2x1 + 3x2 + 4x3 = 20
4x1 + 5x2 + 6x3 = 32
7x1 + 8x2 + x1 x3 = 40
linear? Justify your answer.
Chapter 5

Vector Spaces and Vector Sub-Spaces

5.1 Vectors in n-spaces

Definition 5.1.1. The set of all n-tuples of real numbers, denoted by Rn , is called n-space. A
particular n-tuple in Rn , say u = (u1 , u2 , u3 , . . . , un ) is called a vector. The numbers ui are
called the coordinates, components, entries, or elements of vector u. The number, n is the
dimension of the vector u. We sometimes say that u is an n-dimensional vector in Rn or Cn and
write u ∈ Rn or u ∈ Cn . Let v ∈ Rn and u ∈ Rn . Then v is said to be equal to u iff
ui = vi , ∀ i = 1, 2, . . . , n, and we write v = u.

Example 5.1.1. Let u = (2, −7, 1), v = (−3, 0, 4), and w = (0, 5, −8). Find the components of
the vector x that satisfy 2u − v + x = 7x + w.

Solution: Let x = (x1 , x2 , x3 ). Then

2u − v + x = 2(2, −7, 1) − (−3, 0, 4) + (x1 , x2 , x3 ),


= (7 + x1 , − 14 + x2 , − 2 + x3 ).
Also 7x + w = (7x1 , 7x2 + 5, 7x3 − 8).
So 2u − v + x = 7x + w.
⇒ 7 + x1 = 7x1 , giving x1 = 7/6;
   −14 + x2 = 7x2 + 5, giving x2 = −19/6;
   −2 + x3 = 7x3 − 8, giving x3 = 1.

Therefore x = (7/6, −19/6, 1). 2
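Rearranging 2u − v + x = 7x + w gives 6x = 2u − v − w, so x can be computed componentwise; a quick Python check (illustrative, not part of the notes):

```python
u = (2, -7, 1)
v = (-3, 0, 4)
w = (0, 5, -8)

# 2u - v + x = 7x + w  rearranges to  6x = 2u - v - w, componentwise.
x = tuple((2 * ui - vi - wi) / 6 for ui, vi, wi in zip(u, v, w))
```

This yields x = (7/6, −19/6, 1).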

Definition 5.1.2. Let v = (v1 , v2 , . . . , vn ) and u = (u1 , u2 , . . . , un ) be vectors defined over a field
F. For our purpose, F is either R or C. The sum of v and u is the vector obtained by adding
corresponding components and is denoted by v + u, i.e., v + u = (v1 + u1 , v2 + u2 , . . . , vn + un ).
This is sometimes called coordinate-wise addition. The symbol 0 will denote a zero vector. For

50
Vector Spaces 51

example in R3 , 0 = (0, 0, 0). Let k ∈ F (k is a scalar); we define the scalar product kv as the
vector obtained by multiplying each component of v by k, i.e., kv = (kv1 , kv2 , . . . , kvn ). Thus

−v = (−v1 , −v2 , . . . , −vn ) and v − v = 0.

Properties of n-spaces

Let s, t be scalars in F and u, v, w be vectors in Fn . Then

1. v + u ∈ Fn (closure).

2. v + u = u + v (commutative law).

3. (v + u) + w = v + (u + w) (associative law).

4. v + 0 = v (existence of additive identity 0).

5. v + (−v) = 0 (existence of additive inverse).

6. sv ∈ Fn (closure property).

7. (st)v = s(tv) (associative law).

8. (s + t)v = sv + tv and s(v + u) = sv + su (distributive law).

9. 1v = v (existence of multiplicative identity 1).


Example 5.1.2. Prove that v + u = u + v for u, v ∈ Fn .

Solution: Let v = (v1 , v2 , . . . , vn ) and u = (u1 , u2 , . . . , un ).

Then v + u = (v1 + u1 , v2 + u2 , . . . , vn + un )
= (u1 + v1 , u2 + v2 , . . . , un + vn ) (since vi + ui = ui + vi for ui , vi ∈ F a field)
= u + v (By definition)

2
Example 5.1.3. Let u = (2, −7, 1), v = (−3, 0, 4), w = (0, 5, −8). Find (i) 3u − 4v, (ii)
2u + 3v − 5w.

Solution:

(i) 3u − 4v = 3(2, −7, 1) − 4(−3, 0, 4) = (18, −21, −13).

(ii) It is left as an exercise to the reader.

2
Example 5.1.4. Let u = (−1, 3, 2, 0), v = (2, 0, 4, −1), w = (7, 1, 1, 4) and x = (6, 3, 1, 2).
Find scalars a, b, c, d such that au + bv + cw + dx = (0, 5, 6, −3).
Vector Spaces 52

Solution:

a(−1, 3, 2, 0) + b(2, 0, 4, −1) + c(7, 1, 1, 4) + d(6, 3, 1, 2) = (0, 5, 6, −3)

Equating components of LHS to those of RHS we get the system of equations:

−a + 2b + 7c + 6d = 0
3a + c + 3d = 5
2a + 4b + c + d = 6
−b + 4c + 2d = −3

The values of a, b, c and d are a solution to the above system and can be obtained by using any
of the methods discussed in Chapter 3 (finding the values of a, b, c, d is left as an exercise). 2

Definition 5.1.3. Let u = (u1 , u2 , . . . , un ) and v = (v1 , v2 , . . . , vn ) be vectors. The dot product
or inner product of u and v, denoted by u · v, is defined as
u · v = u1 v1 + u2 v2 + · · · + un vn = Σ_{i=1}^{n} ui vi   if u, v ∈ R^n,

and

u · v = u1 v̄1 + u2 v̄2 + · · · + un v̄n = Σ_{i=1}^{n} ui v̄i   if u, v ∈ C^n,

where v̄i is the conjugate of vi . Note that the dot product is a scalar in the field over which the
vectors are defined.

Definition 5.1.4. Let u, v ∈ Fn . Then u and v are said to be orthogonal or perpendicular if


u · v = 0, and we write u⊥v.

Example 5.1.5. Find u · v for

(i) u = (1, 2), v = (6, −8).

(ii) u = (1, −3, 7), v = (8, −2, −2).

Solution:

(i) u · v = (1)(6) + (2)(−8) = −10.

(ii) u · v = (1)(8) + (−3)(−2) + (7)(−2) = 0 and so u⊥v.

Example 5.1.6. Let u = (1 + 7i, 2 − 6i) and v = (5 − 2i, 3 − 4i). Find

(i) u + v, (ii) (3 + i)u, (iii) 2iu + (4 − 7i)v, (iv) u · v and v · u.

Solution:
Vector Spaces 53

(i) u + v = (1 + 7i, 2 − 6i) + (5 − 2i, 3 − 4i) = (6 + 5i, 5(1 − 2i)).

(ii) It is an exercise.

(iii) It is an exercise.

(iv) u · v = (1 + 7i)(5 + 2i) + (2 − 6i)(3 + 4i) = 21 + 27i (remember to conjugate the components of v). Find v · u.
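The real and complex dot products can be treated uniformly in code, since conjugation is a no-op on real numbers. A small Python sketch (illustrative; `dot` is our own helper) reproducing the computations of Examples 5.1.5(ii) and 5.1.6(iv):

```python
def dot(u, v):
    # u . v = sum of u_i * conj(v_i); conjugation leaves real entries unchanged.
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

orthogonal_pair = dot((1, -3, 7), (8, -2, -2))           # Example 5.1.5(ii)
complex_case = dot((1 + 7j, 2 - 6j), (5 - 2j, 3 - 4j))   # Example 5.1.6(iv)
```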

Theorem 5.1.1. Let u, v and w be vectors in Rn and k be a scalar. Then

(a) u · v = v · u.

(b) u · (v + w) = u · v + u · w.

(c) k(u · v) = (ku) · v = u · (kv).

(d) v · v > 0 if v 6= 0 and v · v = 0 if v = 0.

Proof:

(a) By definition, u · v = Σ_i ui vi = Σ_i vi ui = v · u.

(b)

u · (v + w) = u1 (v1 + w1 ) + u2 (v2 + w2 ) + · · · + un (vn + wn )


= u1 v1 + u1 w1 + u2 v2 + u2 w2 + · · · + un vn + un wn
= (u1 v1 + u2 v2 + · · · + un vn ) + (u1 w1 + u2 w2 + · · · + un wn )
= u·v+u·w

(c) It is left as an exercise.

(d) If v 6= 0, i.e., at least one component of v is not zero, then v · v = v12 + v22 + · · · + vn2 > 0. If
v = 0, i.e., all components of v are zero, then v · v = 0.

Theorem 5.1.2. Let u, v, w be vectors in Cn and k be any complex number. Then

(a) u · v = v · u.

(b) (u + v) · w = u · w + v · w.

(c) (ku) · v = u · (kv) = k(u · v).

(d) v · v ≥ 0. Further v · v = 0 if and only if v = 0.


Vector Spaces 54

Definition 5.1.5. Let u be a vector in Rn or Cn . The norm or length of the vector u, denoted
by ||u||, is defined to be the non-negative square root of u · u, i.e.,

||u|| = √(u · u), so that

    ||u|| = ( Σ_{i=1}^{n} ui^2 )^(1/2)    for u ∈ R^n,

    ||u|| = ( Σ_{i=1}^{n} |ui|^2 )^(1/2)  for u ∈ C^n.

Let u and v be vectors in R^n. The distance between u and v, denoted by d(u, v), is defined as

d(u, v) = ||u − v||.

Example 5.1.7. Let u = (1, −3, 2), v = (1, 1, 0), and w = (2, 2, −4). Find

(i) ||u + v||, (ii) ||u|| + ||v||, (iii) ||3u − 5v + w||, (iv) d(u, w).

Solution:

(i) ||u + v|| = ||(1, −3, 2) + (1, 1, 0)|| = ||(2, −2, 2)|| = √(2^2 + 2^2 + 2^2) = √12 = 2√3.

(ii) ||u|| = √14 and ||v|| = √2, and so ||u|| + ||v|| = (1 + √7)√2.

(iii) It is left as an exercise.

(iv) d(u, w) = ||u − w|| = ||(−1, −5, 6)|| = √62.

2
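The norm and distance formulas translate directly into code; a Python sketch (illustrative, with `norm` and `dist` as our own helpers) that reproduces parts (i) and (iv) of Example 5.1.7:

```python
def norm(u):
    # ||u|| = sqrt(sum |u_i|^2); abs() covers real and complex entries alike.
    return sum(abs(x) ** 2 for x in u) ** 0.5

def dist(u, v):
    # d(u, v) = ||u - v||
    return norm(tuple(a - b for a, b in zip(u, v)))

u, v, w = (1, -3, 2), (1, 1, 0), (2, 2, -4)
sum_norm = norm(tuple(a + b for a, b in zip(u, v)))  # ||u + v|| = 2*sqrt(3)
d_uw = dist(u, w)                                    # sqrt(62)
```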
Example 5.1.8. Let u = (3i, 0, −i), v = (0, 3 + 4i, −2i) and w = (1 + i, 2i, 0). Find

(i) u · v, (ii) v · w, (iii) ||3u − 5v + w||.

Solution:

(i) u · v = 2.
(ii) v · w = 8 − 6i.
(iii) ||3u − 5v + w|| = ||(1 + 10i, −(15 + 18i), 7i)|| = (1 + 100 + 225 + 324 + 49)^(1/2) = √699.
(Recall that |z|^2 = a^2 + b^2 for z = a + bi.)

2
Definition 5.1.6. Let u, v ∈ Rn and θ be the angle between u and v. Then
cos θ = (u · v)/(||u|| ||v||),   0 ≤ θ ≤ π.
For n = 2 or 3, this definition coincides with the usual geometric notion of an angle.
Vector Spaces 55

Remark 5.1.1.

(i) Note that if u and v are orthogonal then u · v = 0, so θ = π/2.

(ii) From Definition 5.1.6 we can easily see that

|u · v| ≤ ||u|| ||v||. (5.1)

This is called the Cauchy-Schwarz Inequality (prove it).


Theorem 5.1.3. (Minkowski) For all u, v ∈ Rn ,

||u + v|| ≤ ||u|| + ||v||. (5.2)

This is called the triangle inequality.

Proof: Consider

||u + v||2 = (u + v) · (u + v)
= u · u + 2u · v + v · v
≤ ||u||2 + 2||u||||v|| + ||v||2 (using Cauchy-Schwarz Inequality)
= (||u|| + ||v||)2
∴ ||u + v|| ≤ ||u|| + ||v||.

2
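Both inequalities are easy to spot-check numerically. The Python sketch below (illustrative only) tests Cauchy-Schwarz and the triangle inequality on random 4-vectors, counting violations up to a small floating-point tolerance:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

random.seed(0)
violations = 0
for _ in range(1000):
    u = [random.uniform(-10, 10) for _ in range(4)]
    v = [random.uniform(-10, 10) for _ in range(4)]
    s = [a + b for a, b in zip(u, v)]
    if abs(dot(u, v)) > norm(u) * norm(v) + 1e-9:   # Cauchy-Schwarz
        violations += 1
    if norm(s) > norm(u) + norm(v) + 1e-9:          # triangle inequality
        violations += 1
```

A numeric check is not a proof, of course, but it is a useful sanity test of the two statements.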
Example 5.1.9. Find the cosine of the angle θ between u and v for

(i) u = (1, 2), v = (6, −8).

(ii) u = (−3, 1, 2), v = (8, −2, −2).

Solution:

(i) cos θ = (u · v)/(||u|| ||v||) = −10/(√5 · 10) = −√5/5.

(ii) cos θ = (u · v)/(||u|| ||v||) = −30/(√14 · √72) = −5√7/14.

2
Example 5.1.10. Let a = (k, 1) and b = (4, 3). Find k so that

(i) a and b are orthogonal.

(ii) the angle between a and b is π4 .

Solution:

(i) a ⊥ b ⇒ a · b = 0, i.e., 4k + 3 = 0. Therefore k = −3/4.
Vector Spaces 56

(ii)

        √2/2 = cos(π/4) = (a · b)/(||a|| ||b||) = (4k + 3)/(5√(k^2 + 1))
      ⇔ 5√2/2 = (4k + 3)/√(k^2 + 1)
      ⇔ 7k^2 + 48k − 7 = 0
      ∴ k = 1/7 or k = −7.

However, for θ = π/4 we need a · b > 0. Thus the required value is k = 1/7.
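The angle formula of Definition 5.1.6 can be checked against Example 5.1.10(ii); a Python sketch (illustrative; `angle` is a helper defined here, not a library routine):

```python
import math

def angle(u, v):
    # theta = arccos( u.v / (||u|| ||v||) ), with 0 <= theta <= pi.
    d = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return math.acos(d / (nu * nv))

theta = angle((1 / 7, 1), (4, 3))   # k = 1/7 should give an angle of pi/4
```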

Exercise 5.1.1. Prove the Cauchy-Schwartz inequality.

5.2 Vector Spaces

Definition 5.2.1. A vector space over a field F is a set V , whose elements are called vectors
together with two operations (+) and (·) satisfying the properties under Section 5.1. That is, if
u, v, w ∈ V and k, m ∈ F then

1. u + v ∈ V (closure).

2. u + v = v + u (commutativity under addition).

3. u + (v + w) = (u + v) + w (associativity of addition).

4. There exists a zero vector, i.e., 0 ∈ V such that 0 + u = u + 0 = u for all u ∈ V .

5. For each u ∈ V , there exists a unique -u ∈ V such that u + (-u) = (-u) + u = 0.

6. ku ∈ V (closure of multiplication).

7. k(u + v) = ku + kv (distributivity of multiplication).

8. (k + m)u = ku + mu (distributivity of multiplication).

9. k(mu) = (km)u (associative law).

10. 1u = u (existence of multiplicative identity).

Remark 5.2.1.

1. Scalars may be real numbers or complex numbers, i.e., F = R or F = C. Vector spaces in


which scalars are complex numbers are called complex vector spaces while those in which
scalars are real numbers are called real vector spaces.
Vector Spaces 57

2. To check whether a given space is a vector space, we have to check all properties (or
axioms). If at least one of them is violated (that is, not satisfied), the space is not a vector
space.
3. The signs “+”, and “·” are not necessarily ordinary addition and multiplication.
Example 5.2.1. Consider the n-dimensional space Rn , with usual addition and multiplication.
Show that Rn is a vector space.

Solution: Let u = (u1 , u2 , u3 , . . . , un ), v = (v1 , v2 , v3 , . . . , vn ), w = (w1 , w2 , w3 , . . . , wn ) ∈ Rn ,


and α, β ∈ F. Then

1. u + v = (u1 + v1 , u2 + v2 , u3 + v3 , . . . , un + vn ) ∈ Rn .
2. u + v = (u1 , u2 , u3 , . . . , un ) + (v1 , v2 , v3 , . . . , vn )
= (u1 + v1 , u2 + v2 , u3 + v3 , . . . , un + vn )
= (v1 + u1 , v2 + u2 , v3 + u3 , . . . , vn + un )
= (v1 , v2 , v3 , . . . , vn ) + (u1 , u2 , u3 , . . . , un )
= v + u. Thus commutativity holds.
3. u + (v + w) = (u1 , u2 , u3 , . . . , un ) + (v1 + w1 , v2 + w2 , v3 + w3 , . . . , vn + wn )
= (u1 + (v1 + w1 ), u2 + (v2 + w2 ), u3 + (v3 + w3 ), . . . , un + (vn + wn ))
= (u1 + v1 + w1 , u2 + v2 + w2 , u3 + v3 + w3 , . . . , un + vn + wn )
= ((u1 + v1 ) + w1 , (u2 + v2 ) + w2 , (u3 + v3 ) + w3 , . . . , (un + vn ) + wn )
= (u1 + v1 , u2 + v2 , u3 + v3 , . . . , un + vn ) + (w1 , w2 , w3 , . . . , wn )
= (u + v) + w. Thus associativity holds.
4. In Rn , ∃ 0 = (0, 0, 0, . . . , 0) such that for any u ∈ V = Rn
u + 0 = (u1 , u2 , u3 , . . . , un ) + (0, 0, 0, . . . , 0)
=(u1 + 0, u2 + 0, u3 + 0, . . . , un + 0)
=(u1 , u2 , u3 , . . . , un )
= u.
5. For each u ∈ Rn ∃! − u = (−u1 , −u2 , −u3 , . . . , −un ) ∈ Rn such that
u + −u = (u1 , u2 , u3 , . . . , un ) + (−u1 , −u2 , −u3 , . . . , −un )
= (u1 − u1 , u2 − u2 , u3 − u3 , . . . , un − un )
= (−u1 + u1 , −u2 + u2 , −u3 + u3 , . . . , −un + un )
=−u + u
= 0.
6. Also α · u = (αu1 , αu2 , αu3 , . . . , αun ) ∈ Rn .
7. α · (u + v) = α · ((u1 , u2 , u3 , . . . , un ) + (v1 , v2 , v3 , . . . , vn ))
= α · (u1 + v1 , u2 + v2 , u3 + v3 , . . . , un + vn )
= α · (u1 , u2 , u3 , . . . , un ) + α · (v1 , v2 , v3 , . . . , vn )
= α·u+α·v
8. (α + β) · u = (α + β)(u1 , u2 , u3 , . . . , un )
= ((α + β)u1 , (α + β)u2 , (α + β)u3 , . . . , (α + β)un )
= (αu1 + βu1 , αu2 + βu2 , αu3 + βu3 , . . . , αun + βun )
= α(u1 , u2 , u3 , . . . , un ) + β(u1 , u2 , u3 , . . . , un )
= α · u + β · u.
Vector Spaces 58

9. α · (β · u) = α · (β · (u1 , u2 , u3 , . . . , un ))
= α · (βu1 , βu2 , βu3 , . . . , βun )
=αβ(u1 , u2 , u3 , . . . , un )
=αβu.
10. 1 · u = 1(u1 , u2 , u3 , . . . , un ) = (u1 , u2 , u3 , . . . , un ) = u.

Hence Rn is a vector space. 2


Example 5.2.2. Let V be the set of ordered pairs of real numbers R2 with (+) and (·) defined
by x + y = x − y and α · x = αx. Is V a vector space?

Solution: No; V is not a vector space since the commutative law fails: x + y = x − y while y + x = y − x, and these differ whenever x ≠ y. 2
Example 5.2.3. Let V = R3 , the set of all triples of real numbers (x, y, z) with the operations
defined by (x1 , x2 , x3 ) + (y1 , y2 , y3 ) = (x1 + y1 , x2 + y2 , x3 + y3 ) and α · (x1 , x2 , x3 ) = (0, 0, 0).
Show that V is not a vector space.

Solution: Axiom 10 is not satisfied, so V is not a vector space:
1 · (x1 , x2 , x3 ) = (0, 0, 0) ≠ (x1 , x2 , x3 ) in general. 2
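Axiom failures like these are concrete enough to test on specific vectors. The Python sketch below (illustrative; the function names are ours) encodes the modified operations of Examples 5.2.2 and 5.2.3 and evaluates the failing axioms:

```python
def add_522(x, y):
    # "Addition" of Example 5.2.2: x + y := x - y, componentwise.
    return tuple(a - b for a, b in zip(x, y))

def smul_523(alpha, x):
    # "Scalar multiplication" of Example 5.2.3: alpha . x := (0, 0, 0).
    return (0, 0, 0)

# Commutativity fails for Example 5.2.2: (1,2)+(3,4) = (-2,-2) but (3,4)+(1,2) = (2,2).
commutative = add_522((1, 2), (3, 4)) == add_522((3, 4), (1, 2))

# Axiom 10 fails for Example 5.2.3: 1 . u is the zero vector, not u.
identity_law = smul_523(1, (1, 2, 3)) == (1, 2, 3)
```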
Exercise 5.2.1.

1. Let V be a set of all m × n matrices with real entries with usual addition and usual scalar
multiplication. Show that V is a vector space.
2. Let V be a set of real-valued functions defined on R. Let f, g ∈ V and define “+” and “·”
as (f + g)(x) = f (x) + g(x) and (α · f )(x) = αf (x). Prove that V is a vector space.
3. Let V be a set of all polynomials Pn of degree less or equal to n, with ordinary addition
and multiplication. Show that V is a vector space.
4. Let V be the set of all triples of real numbers (x, y, z) ∈ R3 with “+” and “.” defined by
     
x1 y1 x1 + y 1
 x2  +  y 2  =  x2 + y 2  ,
x3 y3 x3 + y 3
   
x1 0
α x2
  =  0 .
x3 0
Is V a vector space?
5. Let V be a set of all real valued functions f defined everywhere on R such that f (1) = 0
with operations described in Qn 2. Show that V is a vector space.
6. Let V be a set of all positive real numbers with operations x + y = xy and kx = xk . Show
that V is a vector space.
7. The set of all triples of real numbers with addition defined by
(x, y, z) + (u, v, w) = (z + w, y + v, x + u) and standard scalar multiplication. Determine
whether V is a vector space or not.
Vector Spaces 59

5.3 Vector Subspaces

Definition 5.3.1. A subset W of a vector space V is a subspace of V if W is a vector space


itself with respect to the same operations on V .
Theorem 5.3.1. Let W be a non empty subset of a vector space V with operations “+” and
“.”. Then W is a subspace of V if and only if

(a) if u, v ∈ W , then u + v ∈ W ;
(b) if u ∈ W and α is any scalar, then αu ∈ W .

Proof: If W is a subspace of V , then all the vector space axioms are satisfied; in particular,
Axioms 1 and 6 hold. But these are precisely conditions (a) and (b).

Conversely, assume conditions (a) and (b) hold. Since these conditions are vector space Axioms
1 and 6, we need only show that W satisfies the remaining eight axioms. Axioms 2, 3, 7, 8, 9,
and 10 are automatically satisfied by the vectors in W since they are satisfied by all vectors in
V . Therefore, to complete the proof, we need only verify that Axioms 4 and 5 are satisfied by
vectors in W .

Let u be any vector in W . By condition (b), ku is in W for every scalar k. Setting k = 0, it
follows that 0u = 0 is in W , and setting k = −1, it follows that (−1)u = −u is in W . 2
Remark 5.3.1.

(i) If at least one of the conditions in Theorem 5.3.1 is violated, W is not a subspace of V .
(ii) For every u, v ∈ W and α, β ∈ F, the conditions in Theorem 5.3.1 are equivalent to
αu + βv ∈ W .
Example 5.3.1. Let V be a vector space consisting of all 2 × 3 matrices together with the
usual addition and scalar multiplication. Let W be a set defined by
   
W = { u : u = [a b 0; c 0 d], a, b, c, d ∈ R } .

Is W a subspace of V ?
 
Solution: Clearly W is a subset of V and W ≠ ∅ since [1 1 0; 1 0 1] ∈ W . Let u, v ∈ W such
that u = [a1 b1 0; c1 0 d1] and v = [a2 b2 0; c2 0 d2]. Then

u + v = [a1 b1 0; c1 0 d1] + [a2 b2 0; c2 0 d2] = [a1+a2 b1+b2 0; c1+c2 0 d1+d2] ∈ W.

Let α ∈ R. Then

αu = α[a1 b1 0; c1 0 d1] = [αa1 αb1 0; αc1 0 αd1] ∈ W.
Thus, W is a subspace of V . 2
Vector Spaces 60

Example 5.3.2. Let W be a set of all ordered triples of the form (a, b, 1) with a, b ∈ R. Is W a
vector subspace of V = R3 ?

Solution: Clearly W ≠ ∅ since (0, 0, 1) ∈ W . Let u, v ∈ W with u = (a1 , b1 , 1) and
v = (a2 , b2 , 1). Then u + v = (a1 + a2 , b1 + b2 , 2) ∉ W because it is not of the form (a, b, 1).
Also, if α is a scalar with α ≠ 1, then αu = α(a1 , b1 , 1) = (αa1 , αb1 , α) ∉ W .
Hence W is not a subspace of V . 2

Example 5.3.3. Let V = R3 together with the usual operations of addition and scalar
multiplication. If S = {(x, y, z) : x^2 + y = z}, is S a subspace of V ?

Solution: Clearly S ≠ ∅ since (0, 1, 1) ∈ S. Let x, y ∈ S with x = (x1 , y1 , z1 ) and
y = (x2 , y2 , z2 ). Then x + y = (x1 + x2 , y1 + y2 , z1 + z2 ), and x + y ∈ S only if
(x1 + x2 )^2 + y1 + y2 = z1 + z2 . But x ∈ S ⇒ x1^2 + y1 = z1 and y ∈ S ⇒ x2^2 + y2 = z2 . Adding, we
obtain z1 + z2 = x1^2 + x2^2 + y1 + y2 , which in general differs from (x1 + x2 )^2 + y1 + y2 .
Hence S is not a subspace of R3. Similarly, for a scalar α, αx = (αx1 , αy1 , αz1 ) need not lie in
S, since (αx1 )^2 + αy1 = αz1 fails whenever (αx1 )^2 ≠ αx1^2 . 2
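A single counterexample suffices to refute closure. The Python sketch below (illustrative; the membership functions are ours) checks the sets behind Examples 5.3.2 and 5.3.3 on concrete vectors:

```python
def in_W(t):
    # W = {(a, b, 1)}: third component must be exactly 1.
    return t[2] == 1

def in_S(t):
    # S = {(x, y, z) : x^2 + y = z}.
    return t[0] ** 2 + t[1] == t[2]

u, v = (0, 0, 1), (1, 2, 1)                      # both lie in W
sum_closed_W = in_W(tuple(a + b for a, b in zip(u, v)))

x, y = (1, 0, 1), (1, 1, 2)                      # both satisfy x^2 + y = z
sum_closed_S = in_S(tuple(a + b for a, b in zip(x, y)))
```

Both sums fall outside the respective sets, so neither W nor S is closed under addition.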

Assignment 5.3.1.

1. Prove that the set of all scalar multiples of the vector (1, 3, 2) in R3 forms a vector space
with the usual operations on 3-vectors.
2. Prove that R is a vector space using the operations ⊕ and ⊗ given by x ⊕ y = (x^3 + y^3)^(1/3)
   and a ⊗ x = (a^(1/3))x, where a is any real number.

3. Show that R, with ordinary addition but with scalar multiplication replaced by a ⊙ x = 0
   for every real number a, is not a vector space.

4. Show that the set of singular 2 × 2 matrices under the usual operations is not a vector
space.

5. Show that the set R, with the usual scalar multiplication but with addition given by
x ⊕ y = 2(x + y), is not a vector space.

6. Prove or disprove that each given subset of R2 is a subspace of R2 under the usual vector
operations. (In these problems, a and b represent arbitrary real numbers.)

(a) The set of unit 2-vectors.


(b) The set of 2-vectors of the form (a, 2a).
(c) The set {(1, 2)}.
(d) The set of 2-vectors of the form (a, b), where |a| = |b|.
(e) The set of points lying on the parabola y = x2 .
(f) The set of points in the plane lying inside the circle of radius 1 centered at the origin.

7. Show that:

(a) The set of vectors of the form (a, 0, 0) is a subspace of R3 .


(b) The set of vectors of the form (a, 3a, 2a) is a subspace of R3 .
Vector Spaces 61

(c) {(x, y, z) : x2 + y 2 + z 2 ≤ 1} is not a subspace of R3 .

8. Prove or disprove that each given subset of M2×2 is a subspace of M2×2 under the usual
vector operations (In these problems, a and b represent arbitrary real numbers):
 
(a) The set of matrices of the form [a −a; b 0].
(b) The set of 2 × 2 matrices having the sum of all entries zero.
(c) The set of 2 × 2 matrices having the product of all entries zero.
   
(d) The set of 2 × 2 matrices A such that A [1 3; −2 −6] = [0 0; 0 0].
9. Determine which of the following are subspaces of V , the vector space of all real valued
functions defined on R:

(a) The family of all functions f such that f (0) = 1,


(b) The family of all continuous functions f .

10. Prove or disprove that each given subset of P5 is a subspace of P5 under the usual
operations:

(a) {p ∈ P5 : the coefficient of the first degree term of p equals the coefficient of the
fifth-degree term of p}.
(b) {p ∈ P5 : p′(4) = 0}, where p′ is the derivative of p.

11. Show that the set of vectors of the form (a, b, 0, c, a − 2b + c) in R5 forms a subspace of R5
under the usual operations.

12. (a) Prove that the set of all 3-vectors orthogonal to (1, −1, 4) forms a subspace of R3 .
(b) Is the subspace from part (a) all of R3 , a plane passing through the origin in R3 , or a
line passing through the origin in R3 ?

13. Suppose V and W are subspaces of a vector space H. Show that V ∩ W defined to be a
set of all vectors which are in both V and W , is also a subspace.

Exercise 5.3.1.

1. Suppose that W is a subset of a vector space V and u, v ∈ W . If α, β ∈ F, show that W


is a vector subspace of V if and only if αu + βv ∈ W .

2. Suppose that ui ∈ V (i = 1, 2, . . . , n), where V is a vector space. If αi ∈ F, is it true that
   α1 u1 + α2 u2 + · · · + αn un ∈ V ? Justify your answer.

3. Show that W is a subspace of R3 where W = {(a, b, c) : a + 2b + 3c = 0}.

4. Show that W is a subspace of R3 where W = {ax + by : x, y ∈ R3 ; a, b ∈ R}.

5. Prove that the intersection of any number of subspaces of vector space V is a vector
subspace of V .
Vector Spaces 62

6. Let P2 be the set of all polynomials of degree less than or equal to 2 over R. Show that P2 is a
vector subspace of P3 .

7. If A is an m × n matrix, show that the set of all solutions to the system Ax = 0 is a


subspace of Rn .

8. Prove that the set of all real valued continuous functions is a subspace of V , the space of
all real valued functions.

9. Let A be an n × n matrix with real entries and suppose λ is any scalar. Let M =
{x : Ax = λx} be a set of vectors associated with λ. Prove that M ∪ {0} is a subspace of
Rn .
Chapter 6

Linear Independence

6.1 Linear Combinations and Spanning Sets

Definition 6.1.1. We define a vector x ∈ V to be a linear combination of vectors


x1 , x2 , x3 , . . . , xn in V if and only if x can be expressed in the form,

x = α1 x1 + α2 x2 + α3 x3 + · · · + αn xn

where αi (i = 1, 2, . . . , n) are scalars.

Example 6.1.1. In R4 can we write the vector (2, 1, 5, −5) as a linear combination of the
vectors (1, 2, 1, −1), (1, 0, 2, −3), (1, 1, 0, −2)?

Solution: We look for the constants α1 , α2 , α3 such that

(2, 1, 5, −5) = α1 (1, 2, 1, −1) + α2 (1, 0, 2, −3) + α3 (1, 1, 0, −2).

So
α1 + α2 + α3 = 2
2α1 + α3 = 1
α1 + 2α2 = 5
−α1 − 3α2 − 2α3 = −5
   
⇔
    [  1  1  1 ]            [  2 ]
    [  2  0  1 ]   [ α1 ]   [  1 ]
    [  1  2  0 ] × [ α2 ] = [  5 ]
    [ −1 −3 −2 ]   [ α3 ]   [ −5 ]

By using the Gaussian elimination method, the augmented matrix reduces to

    [ 1  1   1  |  2  ]     [ 1  1   1  |  2  ]
    [ 0  1  1/2 | 3/2 ]  ∼  [ 0  1  1/2 | 3/2 ]
    [ 0  0   1  | −1  ]     [ 0  0   1  | −1  ]
    [ 0  0   0  |  0  ]

On back substitution we get (α1 , α2 , α3 ) = (1, 2, −1). Therefore
(2, 1, 5, −5) = 1(1, 2, 1, −1) + 2(1, 0, 2, −3) − 1(1, 1, 0, −2). So it can be expressed in terms of
those vectors; thus it is a linear combination of those vectors. 2
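The back-substitution result can be checked mechanically. The following plain-Python sketch (our own illustration, not part of the notes) verifies that the coefficients (1, 2, −1) really reproduce the target vector:

```python
# Vectors from Example 6.1.1.
v1 = (1, 2, 1, -1)
v2 = (1, 0, 2, -3)
v3 = (1, 1, 0, -2)

# Coefficients obtained by Gaussian elimination and back substitution.
a1, a2, a3 = 1, 2, -1

# Form the linear combination componentwise.
combo = tuple(a1 * x + a2 * y + a3 * z for x, y, z in zip(v1, v2, v3))
print(combo)  # → (2, 1, 5, -5)
```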


Example 6.1.2. Write a polynomial P = t2 + t + 2 over R as a linear combination of the


polynomials P1 = 2t2 + t, P2 = 3t2 + 2t + 2.

Solution: We search for the constants α1 , α2 such that P = α1 P1 + α2 P2 . So we get

t2 + t + 2 = α1 (2t2 + t) + α2 (3t2 + 2t + 2)

which implies that

2α1 t2 + 3α2 t2 = t2
α1 t + 2α2 t = t
0 + 2α2 = 2

By equating coefficients we get

2α1 + 3α2 = 1
α1 + 2α2 = 1
2α2 = 2

⇒ (α1 , α2 ) = (−1, 1). Thus t2 + t + 2 = −1(2t2 + t) + 1(3t2 + 2t + 2). So P can be expressed in
terms of P1 and P2 ; thus it is a linear combination of those polynomials. 2

Example 6.1.3. In R3 can we write the vector v = (2, -5, 3) as a linear combination of the
vectors v1 = (1, -3, 2), v2 = (2, -4, -1), v3 = (1, -5, 7)?

Solution: We look for the constants α1 , α2 , α3 such that

(2, −5, 3) = α1 (1, −3, 2) + α2 (2, −4, −1) + α3 (1, −5, 7).

Thus
α1 + 2α2 + α3 = 2
−3α1 − 4α2 − 5α3 = −5
2α1 − α2 + 7α3 = 3
    
which is equivalent to

    [  1  2  1 ]   [ α1 ]   [  2 ]
    [ −3 −4 −5 ] × [ α2 ] = [ −5 ] .
    [  2 −1  7 ]   [ α3 ]   [  3 ]

The augmented matrix is thus

    [  1  2  1 |  2 ]                    [ 1 2  1 | 2 ]
    [ −3 −4 −5 | −5 ]  which reduces to  [ 0 2 −2 | 1 ] .
    [  2 −1  7 |  3 ]                    [ 0 0  0 | 3 ]

The last row reads 0 = 3, so this is an inconsistent system and no scalars α1 , α2 , α3 satisfy the
equation. Hence, v cannot be expressed as a linear combination of these vectors. 2
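The inconsistency can also be detected without row-reducing by hand, by comparing ranks: Ax = b is consistent exactly when rank(A) = rank([A|b]) (compare the Consistency Theorem in Chapter 8). Below is a sketch with a hand-rolled exact-arithmetic rank helper of our own; it is an illustration, not the method used in the notes:

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gauss-Jordan elimination using exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2, 1], [-3, -4, -5], [2, -1, 7]]
b = [2, -5, 3]
Ab = [row + [bi] for row, bi in zip(A, b)]

print(rank(A), rank(Ab))  # → 2 3: the ranks differ, so the system is inconsistent
```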

Exercise 6.1.1. Determine if the vector w is a linear combination of vectors v1 and v2 where

(a) w = (-12, 20), v1 = (-1, 2) and v2 = (4, -6).

(b) w = (4, 20), v1 = (2, 10) and v2 = (-3, -15).



(c) w = (1, -4), v1 = (2, 10) and v2 = (-3, -15).

Definition 6.1.2. Let S = {x1 , x2 , . . . , xn } be a subset of a vector space V . Then S spans V or


V is spanned or generated by the set S if every vector in V is a linear combination of the vectors
in the set S. We write V = Span(S) or V = Span{x1 , x2 , . . . , xn }.

Example 6.1.4. Show that the set S = {(1, 2, 1), (1, 0, 2), (1, 1, 0)} spans V = R3 .

Solution: Let (a, b, c) ∈ V = R3 be an arbitrary vector in V . To show that S spans V , we


need to find constants ν1 , ν2 , ν3 such that

(a, b, c) = ν1 (1, 2, 1) + ν2 (1, 0, 2) + ν3 (1, 1, 0).

This implies that


ν1 + ν2 + ν3 = a
2ν1 + ν3 = b
ν1 + 2ν2 = c
   
So the augmented form is

    [ 1 1 1 | a ]                    [ 1  1   1  | a              ]
    [ 2 0 1 | b ]  which reduces to  [ 0  1  1/2 | (2a − b)/2     ] .
    [ 1 2 0 | c ]                    [ 0  0   1  | (4a − b − 2c)/3 ]

The system is consistent for every choice of a, b, c, and therefore the values of ν1 , ν2 , ν3 are
defined; thus S spans V . 2
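Since the coefficient matrix in this example is square, spanning can also be settled with a determinant: the set spans R3 exactly when the matrix whose columns are the given vectors is invertible. A small sketch (the det3 helper is our own):

```python
# The vectors of S as the columns of a 3x3 matrix.
M = [[1, 1, 1],
     [2, 0, 1],
     [1, 2, 0]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(M))  # → 3, which is nonzero, so S spans R3
```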

Example 6.1.5. Let V = P2 be a vector space of all polynomials of degree less or equal to 2,
and let S = {t2 + 2t + 1, t2 + 2}. Does S span P2 ?

Solution: Let at2 + bt + c be an arbitrary polynomial in P2 . Note that any polynomial may be
chosen provided it is a polynomial of degree less or equal to 2. We try to find the constants α, β
such that at2 + bt + c = α(t2 + 2t + 1) + β(t2 + 2). Equating coefficients

α+β = a
2α = b
α + 2β = c

Solving the system, the second equation gives α = b/2, the first then gives β = a − b/2, and the
third forces α = c − 2β = c − 2a + b. For a general choice of a, b, c these two expressions for α
disagree (for example, a = 1, b = c = 0 gives α = 0 and α = −2), so no α, β exist to make the
equations true. Hence S does not span or generate V . 2
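The failure to span can also be seen from the shape of the coefficient matrix: it is 3 × 2, so its rank is at most 2 and the system cannot be solvable for every choice of (a, b, c). A sketch with a hand-rolled rank helper of our own:

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gauss-Jordan elimination using exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Coefficient matrix of the system for alpha, beta above
# (one column per polynomial in S).
A = [[1, 1],
     [2, 0],
     [1, 2]]

print(rank(A))  # → 2: with only two columns the rank stays below 3,
                #   so not every polynomial at^2 + bt + c is reachable
```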

Exercise 6.1.2.

1. Write the polynomial t2 + 1 over R as a linear combination of polynomials


t2 + 2t + 1, t2 − 1, t + 2.

2. Determine whether the following polynomials span P2 i.e., the space of all polynomials of
degree less or equal to 2:
p1 = 1 − x − 2x2 , p2 = 3 + x, p3 = 5 − x + 4x2 , p4 = −2 − 2x + 2x2 .
 
3. Check whether the matrix

       A = [ 2 3 ]
           [ 1 2 ]

   can be written as a linear combination of the matrices

       B1 = [ 1 0 ] ,  B2 = [ 1 −1 ] ,  B3 = [ 2 1 ] ,  B4 = [ −1 0 ] .
            [ 1 2 ]         [ 0  0 ]         [ 1 2 ]         [  0 0 ]

   (The solution to the respective scalars is 2, −2, 1, −2.)

4. Let V = P3 be a vector space of polynomials of degree less or equal to 3. Check whether


or not the set S = {t3 , t3 + 1, t3 + 2t2 + 1} spans V .

5. Let V = R3 . Which of the following sets spans V ?

(i) {(1, 0, 2), (1, 1, 0), (1, 1, 1)}.


(ii) {(1, 2, 3), (1, 1, 1)}.
(iii) {(1, 1, 0), (2, 0, 1), (2, 1, 1), (0, 0, 1)}.

6.2 Linear independence and dependence of vectors

Definition 6.2.1. Let V be a vector space over a field K. The vectors v1 , v2 , . . . , vn ∈ V are
said to be linearly independent if whenever α1 v1 + α2 v2 + · · · + αn vn = 0, then
α1 = α2 = · · · = αn = 0.
Definition 6.2.2. The vectors v1 , v2 , · · · , vn ∈ V are said to be linearly dependent if there
exist scalars α1 , α2 , . . . , αn , not all zero, such that α1 v1 + α2 v2 + · · · + αn vn = 0.
Example 6.2.1. Determine if the set {(1, 0, 1, 2), (0, 1, 1, 2), (1, 1, 1, 3)} is linearly independent.

Solution: We search for scalars α1 , α2 , α3 such that

α1 (1, 0, 1, 2) + α2 (0, 1, 1, 2) + α3 (1, 1, 1, 3) = (0, 0, 0, 0). So

α1 + α3 = 0
α2 + α3 = 0
α1 + α2 + α3 = 0
2α1 + 2α2 + 3α3 = 0
   
This is equivalent to

    [ 1  0  1 ]            [ 0 ]
    [ 0  1  1 ]   [ α1 ]   [ 0 ]
    [ 1  1  1 ] × [ α2 ] = [ 0 ] .
    [ 2  2  3 ]   [ α3 ]   [ 0 ]

The augmented form is

    [ 1 0 1 | 0 ]                    [ 1 0 1 | 0 ]
    [ 0 1 1 | 0 ]  which reduces to  [ 0 1 1 | 0 ]  ⇒ α1 = α2 = α3 = 0.
    [ 1 1 1 | 0 ]                    [ 0 0 1 | 0 ]
    [ 2 2 3 | 0 ]                    [ 0 0 0 | 0 ]

Thus the set is linearly independent since all the α are zero. 2
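In code, the test amounts to checking that the matrix whose columns are the given vectors has rank equal to the number of vectors, so that Ax = 0 forces x = 0. A sketch (the rank helper is our own):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gauss-Jordan elimination using exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Columns are the vectors (1,0,1,2), (0,1,1,2), (1,1,1,3) of Example 6.2.1.
A = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 1],
     [2, 2, 3]]

print(rank(A))  # → 3, equal to the number of vectors, so the set is independent
```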
Example 6.2.2. Determine whether the set {(1, 2, −1), (1, −2, 1), (−3, 2, −1), (2, 0, 0)} is
linearly dependent or not.

Solution: We look for scalars α1 , α2 , α3 , α4 such that


α1 (1, 2, −1) + α2 (1, −2, 1) + α3 (−3, 2, −1) + α4 (2, 0, 0) = 0. Thus
α1 + α2 − 3α3 + 2α4 = 0
2α1 − 2α2 + 2α3 = 0
−α1 + α2 − α3 = 0
    
which is equivalent to

    [  1  1 −3  2 ]   [ α1 ]   [ 0 ]
    [  2 −2  2  0 ] × [ α2 ] = [ 0 ]
    [ −1  1 −1  0 ]   [ α3 ]   [ 0 ]
                      [ α4 ]

from which the augmented matrix reduces to

    [ 1 1 −3 2 | 0 ]
    [ 0 1 −2 1 | 0 ]
    [ 0 0  0 0 | 0 ]

in row echelon form. There are more unknowns than nonzero rows, so we get infinitely many
solutions, not all zero; hence the set is linearly dependent. 2
Example 6.2.3. Determine whether the set {(1, −2, 3), (5, 6, −1), (3, 2, 1)} is linearly
dependent or not.

Solution: We need to find scalars α1 , α2 , α3 such that


α1 (1, −2, 3) + α2 (5, 6, −1) + α3 (3, 2, 1) = 0.
α1 + 5α2 + 3α3 = 0
−2α1 + 6α2 + 2α3 = 0
3α1 − α2 + α3 = 0
Thus α1 = −t/2, α2 = −t/2, α3 = t (check). So the linear system has nontrivial solutions and
hence the set is linearly dependent. 2
Remark 6.2.1. We can show that the system has a nontrivial solution without solving the
system, by showing that the coefficient matrix has determinant zero and is consequently not
invertible. However, if the coefficient matrix has a nonzero determinant, then it is invertible,
and the homogeneous system we are solving has only the trivial solution.
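Applying the remark to Example 6.2.3, whose coefficient matrix is square: the sketch below (our own cofactor-expansion helper, with v1 = (1, −2, 3) as used in the worked solution) finds a zero determinant, confirming dependence:

```python
# Columns are v1 = (1,-2,3), v2 = (5,6,-1), v3 = (3,2,1).
A = [[1, 5, 3],
     [-2, 6, 2],
     [3, -1, 1]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(A))  # → 0: the matrix is singular, so the set is linearly dependent
```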
Theorem 6.2.1. A set S with two or more vectors is

(a) Linearly dependent if and only if at least one of the vectors in S is expressible as a linear
combination of the other vectors in S.
(b) Linearly independent if and only if no vector in S is expressible as a linear combination of
the other vectors in S.

Proof:

(a) Let S = {v1 , v2 , . . . , vr } be a set with two or more vectors. If we assume that S is linearly
dependent, then there are scalars k1 , k2 , . . . , kr , not all zero, such that
k1 v1 + k2 v2 + · · · + kr vr = 0 (6.1)
To be specific, suppose that k1 ≠ 0. Then (6.1) can be rewritten as

    v1 = −(k2 /k1 )v2 − · · · − (kr /k1 )vr

which expresses v1 as a linear combination of the other vectors in S. Similarly, if kj ≠ 0 in


(6.1) for some j = 2, 3, . . . , r, then vj is expressible as a linear combination of the other
vectors in S. Conversely, let us assume that at least one of the vectors in S is expressible as
a linear combination of the other vectors. To be specific, suppose that

v1 = c2 v2 + c3 v3 + · · · + cr vr

so
v1 − c2 v2 − c3 v3 − · · · − cr vr = 0
It follows that S is linearly dependent since the equation

k1 v1 + k2 v2 + · · · + kr vr = 0

is satisfied by
k1 = 1, k2 = −c2 , . . . , kr = −cr
which are not all zero. The proof in the case where some vector other than v1 is expressible
as a linear combination of the other vectors in S is similar.
(b) It is left as an exercise to the reader.

2
Theorem 6.2.2.

(a) A finite set of vectors that contains the zero vector is linearly dependent.
(b) A set with exactly two vectors is linearly independent if and only if neither vector is a scalar
multiple of the other.

Proof:

(a) For any vectors v1 , v2 , . . . , vr , the set S = {v1 , v2 , . . . , vr , 0} is linearly dependent since the
equation
0v1 + 0v2 + · · · + 0vr + 1(0) = 0
expresses 0 as a linear combination of the vectors in S with coefficients that are not all zero.
(b) It is left as an exercise to the reader.

2
Theorem 6.2.3. If S = {v1 , v2 , . . . , vr } ⊂ Rn and r > n, then the set is linearly
dependent (l.d.).

Proof: Suppose that


v1 = (v11 , v12 , . . . , v1n )
v2 = (v21 , v22 , . . . , v2n )
...
vr = (vr1 , vr2 , . . . , vrn )

Consider the equation

k1 v1 + k2 v2 + · · · + kr vr = 0

Expressing both sides of this equation in terms of components and then equating corresponding
components, we obtain the system

v11 k1 + v21 k2 + · · · + vr1 kr = 0


v12 k1 + v22 k2 + · · · + vr2 kr = 0
.. .. .. ..
. . . .
v1n k1 + v2n k2 + · · · + vrn kr = 0

This is a homogeneous system of n equations in the r unknowns k1 , k2 , . . . , kr . Since r > n, it


follows from Theorem 3.2.2 that the system has nontrivial solutions. Therefore,
S = {v1 , v2 , . . . , vr } is a linearly dependent set. 2

Remark 6.2.2. Theorem 6.2.3 tells us that a set in R2 with more than two vectors is linearly
dependent and a set in R3 with more than three vectors is linearly dependent and so on.

Assignment 6.2.1.

1. Determine if the set S = {(1, 3, −1), (2, 7, −3), (4, 8, −7)} spans R3 .

2. Prove that the set S = {(1, −2, 2), (3, −4, −1), (1, −4, 9), (0, 2, −7)} does not span R3 .

3. Consider the subset


S = {x3 − 2x2 + x − 3, 2x3 − 3x2 + 2x + 5, 4x2 + x − 3, 4x3 − 7x2 + 4x − 1} of P3 . Show
that 3x3 − 8x2 + 2x + 16 is in span(S) by expressing it as a linear combination of the
elements of S.

4. Determine if (3, 4, 1, 6) lies in span {(1, 2, −1, 2), (−2, 3, 1, −1), (−1, 3, 2, 1)} in R4 .

5. Determine whether the given set of vectors spans the vector space M2×2 :
       
    [ 2 1 ] , [ 0 0 ] , [ 3 −1 ] , [ 0 0 ] .
    [ 0 0 ]   [ 2 1 ]   [ 0  0 ]   [ 3 1 ]

6. Let V be a vector space and suppose u1 , u2 , u3 span V . Let


v1 = u1 , v2 = u2 − u1 , v3 = u3 . Prove that v1 , v2 , v3 also span V .

7. In each part, determine by quick inspection whether the given set of vectors is linearly
independent. State a reason for your conclusion.

(a) {(0, 1, 1)}


(b) {(1, 2, −1), (3, 1, −1)}.
(c) {(1, 2, −5), (−2, −4, 10)}.
(d) {(4, 2, 1), (−1, 3, 7), (0, 0, 0)}.
(e) {(2, −5, 1), (1, 1, −1), (0, 2, −3), (2, 2, 6)}.

8. Determine whether the vectors (1, 2, 3), (1, −1, 2), (1, −4, 2) in R3 are linearly independent.

9. Show that the vectors (0, 3, 1, −1), (6, 0, 5, 1), (4, −7, 1, 3) form a linearly dependent set in
R4 . Express each vector as a linear combination of the other two.

10. Determine which of the following sets of vectors are linearly independent.

(a) {(1, 9, −2), (3, 4, 5), (−2, 5, −7)}


(b) {(2, −1, 3), (4, −1, 6), (−2, 0, 2)}.
(c) {(2, 5, −1, 6), (4, 3, 1, 4), (1, −1, 1, −1)}.

11. Determine a value for q such that the following vectors are linearly independent.
(1, 1, 2, 1), (2, 1, 2, 3), (1, 4, 2, 1), (−1, 3, 5, q).

12. Determine which of the following subsets of P2 are linearly independent.

(a) {x2 + x + 1, x2 − 1, x2 + 1}.


(b) {2x − 6, 7x + 2, 12x − 7}.

13. If you have 5 vectors in R5 and the vectors are linearly independent, can it always be
concluded they span R5 ? Explain.

14. Let V be a vector space and suppose that u1 , u2 , u3 are linearly independent vectors in V.
Prove that u1 + u2 , u2 + u3 , u3 are also linearly independent.

15. Let V be a vector space and let u, v, w ∈ V . Show that the vectors u − v, v − w and
w − u are linearly dependent.

Exercise 6.2.1.

1. Which of the following belong to span S if S = {t2 + 2t + 1, t2 + 3, t − 1}?

(a) −t2 + t − 4, (b) 2t2 + 2t + 2, (c) t2 + t + 1, (d) t2 + t + 2.

2. Determine if the following sets are linearly independent or not.

(a) {(1, 0, 1), (2, 1, 4), (3, 2, 0)},


(b) {(1, 1, 1), (0, 0, 0), (1, 2, 1)},
(c) {(1, 2, 5), (3, 6, 15), (2, 5, 12), (1, 1, 1), (−1, −1, −3)}.

3. Redo Examples 6.2.1, 6.2.2, 6.2.3 by evaluating the determinants of the coefficient
matrices formed.
Chapter 7

Basis and Dimension

7.1 Bases

Definition 7.1.1. A set of vectors S = {v1 , v2 , v3 , . . . , vn }, where vi ∈ V (i = 1, 2, . . . , n), is
called a basis for a vector space V if

(a) S spans V .

(b) S is linearly independent.


Example 7.1.1. The set {e1 = (1, 0, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en = (0, 0, . . . , 1)} is a
basis for Rn . This set is called a natural basis (or standard basis) for Rn . In particular, for
n = 2, the set {e1 = (1, 0), e2 = (0, 1)} is a basis for R2 . For n = 3, the set
{e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)} is a basis for R3 .
Example 7.1.2. Determine whether any of the following sets of vectors will form a basis for R3 .

(a) v1 = (1, −1, 1), v2 = (0, 1, 2), v3 = (3, 0, −1).

(b) v1 = (1, 0, 0), v2 = (0, 1, 0), v3 = (0, 0, 1).

(c) v1 = (1, −1, 1), v2 = (−1, 2, −2), v3 = (−1, 4, −4).

Solution:

(a) We check whether the given vectors span R3 and are linearly independent.
Span: We find scalars α1 , α2 , α3 such that

(a, b, c) = α1 (1, −1, 1) + α2 (0, 1, 2) + α3 (3, 0, −1).

This is equivalent to
α1 + 0α2 + 3α3 = a
−α1 + α2 + 0α3 = b
α1 + 2α2 − α3 = c

   
The matrix form of the system is

    [  1  0  3 ]   [ α1 ]   [ a ]
    [ −1  1  0 ] × [ α2 ] = [ b ] .
    [  1  2 −1 ]   [ α3 ]   [ c ]

Linear independence: we determine whether the system

    (0, 0, 0) = β1 (1, −1, 1) + β2 (0, 1, 2) + β3 (3, 0, −1)

has only the trivial solution, i.e., β1 = 0, β2 = 0, β3 = 0. This reduces to solving

    [  1  0  3 ]   [ β1 ]   [ 0 ]
    [ −1  1  0 ] × [ β2 ] = [ 0 ] .
    [  1  2 −1 ]   [ β3 ]   [ 0 ]

Observe that the above two systems have the same coefficient matrix A. We can
demonstrate that the vectors span R3 and that they are linearly independent by showing
that det(A) ≠ 0. Here

    A = [  1  0  3 ]
        [ −1  1  0 ]
        [  1  2 −1 ]

Since det(A) = −10 ≠ 0, the vectors v1 , v2 , and v3 form a basis for R3 . We note that
if the determinant of the coefficient matrix were zero, then the vectors would not span R3
and would not be linearly independent, and thus would not form a basis for R3 .

(b) It is left as an exercise.

(c) We look for scalars α1 , α2 , α3 such that

(a, b, c) = α1 (1, −1, 1) + α2 (−1, 2, −2) + α3 (−1, 4, −4)

and
(0, 0, 0) = β1 (1, −1, 1) + β2 (−1, 2, −2) + β3 (−1, 4, −4)
The coefficient matrix is

    A = [  1 −1 −1 ]
        [ −1  2  4 ]
        [  1 −2 −4 ]

Since det(A) = 0, the vectors v1 , v2 , and v3 do not span R3 and are not linearly
independent. Thus they do not form a basis for R3 .

2
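The determinant test used in this example is easy to mechanize. A sketch checking parts (a) and (c) with our own cofactor-expansion helper:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Part (a): columns v1 = (1,-1,1), v2 = (0,1,2), v3 = (3,0,-1).
Aa = [[1, 0, 3],
      [-1, 1, 0],
      [1, 2, -1]]

# Part (c): columns v1 = (1,-1,1), v2 = (-1,2,-2), v3 = (-1,4,-4).
Ac = [[1, -1, -1],
      [-1, 2, 4],
      [1, -2, -4]]

print(det3(Aa), det3(Ac))  # → -10 0: (a) is a basis, (c) is not
```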
Exercise 7.1.1. Show that

(a) {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis for R3 .

(b) {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)} is a basis for R4 .

(c) {t, 1} is a basis for P1 where P1 is a set of polynomials of degree less or equal to 1.

(d) {t2 , t, 1} is a basis for P2 where P2 is a set of polynomials of degree less or equal to 2.
       
(e)  [ 1 0 ] , [ 0 1 ] , [ 0 0 ] , [ 0 0 ]  is a basis for M2 , the space of 2 × 2 matrices.
     [ 0 0 ]   [ 0 0 ]   [ 1 0 ]   [ 0 1 ]
Theorem 7.1.1. Suppose that the set S = {x1 , x2 , x3 , . . . , xn } is a basis for the vector space V .
Then every vector x in V can be written in only one way as a linear combination of vectors in S.

Proof: Suppose there are scalars αi , βi , i = 1, 2, . . . , n such that


x = α1 x1 + α2 x2 + α3 x3 + · · · + αn xn and x = β1 x1 + β2 x2 + β3 x3 + · · · + βn xn . Then
x − x = 0 ⇔ (α1 − β1 )x1 + (α2 − β2 )x2 + (α3 − β3 )x3 + · · · + (αn − βn )xn = 0. Since S is
linearly independent, this implies that α1 − β1 = 0, α2 − β2 = 0, . . . , αn − βn = 0, i.e., α1 = β1 ,
α2 = β2 , . . . , αn = βn . Hence every x ∈ V can be expressed in one and only one way as a linear
combination of vectors in S. 2
Note 7.1.1. A nonzero vector space generally contains infinitely many elements. However, a
vector space with a basis is completely described by the finite number of elements in the basis S.

7.2 Dimension

Definition 7.2.1. The dimension of a vector space V is the number of vectors in a basis for
V . It is denoted by dim(V ). A vector space with a finite dimension is said to be finite
dimensional. If V does not have a basis with a finite number of elements, it is said to be
infinite-dimensional. In addition, we shall regard the zero vector space to be finite dimensional.
Example 7.2.1.

(a) dim(Rn ) = n.

(b) dim(R2 ) = 2.

(c) dim (R9 ) = 9.

(d) dim(R5 ) = 5.

(e) Pn = Space of all polynomials of degree ≤ n; dim(Pn ) = n + 1.

(f) M2,2 (R): the space of 2 × 2 matrices; dim(M2,2 (R)) = 4.

(g) Mm,n (R): the space of m × n matrices; dim(Mm,n (R)) = mn.

(h) P = Space of all polynomials; dim(P ) = ∞.

(i) {0} = trivial vector space; dim({0}) = 0.


Note 7.2.1. All finite dimensional vector spaces of the same dimension differ only in the nature
of their elements, but the algebraic properties are the same.
Theorem 7.2.1. Let V be a finite-dimensional vector space, and let S = {x1 , x2 , x3 , . . . , xn } be
any basis for V .

(a) If a set has more than n vectors, then it is linearly dependent.

(b) If a set has fewer than n vectors, then it does not span V .
Corollary 7.2.1. If S = {x1 , x2 , x3 , . . . , xn } and T = {y1 , y2 , y3 , . . . , ym } are bases for V , then
m = n.

Proof: It is left as an exercise to the reader. 2


Theorem 7.2.2. (Plus-Minus Theorem) Let S be a nonempty set of vectors in a vector space V .

(a) If S is a linearly independent set, and if v is a vector in V that is outside of Span(S), then
the set S ∪ {v} that results by inserting v into S is still linearly independent.

(b) If v is a vector in S that is expressible as a linear combination of other vectors in S, and if


S − {v} denotes the set obtained by removing v from S, then S and S − {v} span the same
space; that is,
Span(S) = Span(S − {v}).

Proof:

(a) Assume that S = {v1 , v2 , . . . , vr } is a linearly independent set of vectors in V , and v is a
vector in V outside of Span(S). To show that S′ = {v1 , v2 , . . . , vr , v} is a linearly
independent set, we must show that the only scalars that satisfy

k1 v1 + k2 v2 + · · · + kr vr + kr+1 v = 0 (7.1)

are k1 = k2 = · · · = kr = kr+1 = 0. But we must have kr+1 = 0; otherwise, we could solve
(7.1) for v as a linear combination of v1 , v2 , . . . , vr , contradicting the assumption that v is
outside of Span(S). Thus (7.1) simplifies to

k1 v1 + k2 v2 + · · · + kr vr = 0 (7.2)

which, by the linear independence of {v1 , v2 , . . . , vr }, implies that k1 = k2 = · · · = kr = 0.

(b) Assume that S = {v1 , v2 , . . . , vr } is a set of vectors in V , and to be specific, suppose that
vr is a linear combination of v1 , v2 , . . . , vr−1 , say

vr = c1 v1 + c2 v2 + · · · + cr−1 vr−1 (7.3)

We show that if vr is removed from S, then the remaining set of vectors {v1 , v2 , . . . , vr−1 }
still spans Span(S); that is, we must show that every vector w in Span(S) is expressible as a
linear combination of v1 , v2 , . . . , vr−1 . But if w is in Span(S), then w is expressible in the
form
w = k1 v1 + k2 v2 + · · · + kr−1 vr−1 + kr vr
or, on substituting (7.3),

w = k1 v1 + k2 v2 + · · · + kr−1 vr−1 + kr (c1 v1 + c2 v2 + · · · + cr−1 vr−1 )

which expresses w as a linear combination of v1 , v2 , . . . , vr−1 .



Theorem 7.2.3. If V is an n-dimensional vector space, and if S is a set in V with exactly n
vectors, then

(a) if S spans V , then S is a basis for V .

(b) if S is linearly independent, then S is a basis for V .

Proof:

(a) Assume that S has exactly n vectors and spans V . To prove that S is a basis, we must show
that S is a linearly independent set. But if this is not so, then some vector v in S is a linear
combination of the remaining vectors. If we remove this vector from S, then it follows from
the Plus-Minus Theorem (Theorem 7.2.2(b)) that the remaining set of n − 1 vectors still
spans V . But this is impossible, since it follows from Theorem 7.2.1(b) that no set with
fewer than n vectors can span an n-dimensional vector space. Thus S is linearly
independent.

(b) Assume that S has exactly n vectors and is a linearly independent set. To prove that S is a
basis, we must show that S spans V . But if this is not so, then there is some vector v in V
that is not in Span(S). If we insert this vector into S, then it follows from the Plus-Minus
Theorem (Theorem 7.2.2(a)) that this set of n + 1 vectors is still linearly independent. But
this is impossible, since it follows from Theorem 7.2.1(a) that no set with more than n
vectors in an n-dimensional vector space can be linearly independent. Thus S spans V .

Theorem 7.2.4. Let S be a finite set of vectors in a finite-dimensional vector space V .

(a) If S spans V but is not a basis for V , then S can be reduced to a basis for V by removing
appropriate vectors from S.

(b) If S is a linearly independent set that is not already a basis for V , then S can be enlarged
to a basis for V by inserting appropriate vectors into S.

Proof:

(a) If S is a set of vectors that spans V but is not a basis for V , then S is a linearly dependent
set. Thus some vector v in S is expressible as a linear combination of the other vectors in S.
By the Plus-Minus Theorem (Theorem 7.2.2(b)), we can remove v from S, and the resulting
set S′ will still span V . If S′ is linearly independent, then S′ is a basis for V , and we are
done. If S′ is linearly dependent, then we can remove some appropriate vector from S′ to
produce a set S″ that still spans V . We can continue removing vectors in this way until we
finally arrive at a set of vectors in S that is linearly independent and spans V . This subset
of S is a basis for V by Theorem 7.2.3.

(b) Suppose that dim(V ) = n. If S is a linearly independent set that is not already a basis for
V , then S fails to span V , and there is some vector v in V that is not in Span(S). By the
Plus-Minus Theorem (Theorem 7.2.2(a)), we can insert v into S, and the resulting set S′
will still be linearly independent. If S′ spans V , then S′ is a basis for V , and we are finished.
If S′ does not span V , then we can insert an appropriate vector into S′ to produce a set S″
that is still linearly independent. We can continue inserting vectors in this way until we
reach a set with n linearly independent vectors in V . This set will be a basis for V by
Theorem 7.2.3.

2
Theorem 7.2.5. Let V be a finite-dimensional vector space. Every non-zero subspace W of V
has a finite basis and dim(W ) ≤ dim(V ); moreover, if dim(W ) = dim(V ), then W = V .

Proof: Since V is finite-dimensional, so is W (prove it). Accordingly, suppose that


S = {w1 , w2 . . . , wm } is a basis for W . Either S is also a basis for V or it is not. If it is, then
dim(W ) = dim(V ) = m. If it is not, then by Theorem 7.2.4(b), vectors can be added to the
linearly independent set S to make it into a basis for V , so dim(W ) < dim(V ). Thus
dim(W ) ≤ dim(V ) in all cases. If dim(W ) = dim(V ), then S is a set of m linearly independent
vectors in the m-dimensional vector space V ; hence S is a basis for V by Theorem 7.2.3. This
implies that W = V (why?). 2
Example 7.2.2. Find the dimension for the solution space of W = {x : Ax = 0}:
x1 + 2x2 + 3x4 + x5 = 0
2x1 + 3x2 + 3x4 + x5 = 0
x1 + x2 + 2x3 + 2x4 + x5 = 0
3x1 + 5x2 + 6x4 + 2x5 = 0
2x1 + 3x2 + 2x3 + 5x4 + 2x5 = 0

Solution: The augmented matrix is

              [ 1 2 0 3 1 | 0 ]     [ 1 2 0 3  1  | 0 ]
              [ 2 3 0 3 1 | 0 ]     [ 0 1 0 3  1  | 0 ]
    [A|0]  =  [ 1 1 2 2 1 | 0 ]  ∼  [ 0 0 1 1 1/2 | 0 ] .
              [ 3 5 0 6 2 | 0 ]     [ 0 0 0 0  0  | 0 ]
              [ 2 3 2 5 2 | 0 ]     [ 0 0 0 0  0  | 0 ]

If we let x4 = s, x5 = t then

    (x1 , x2 , x3 , x4 , x5 ) = (3s + t, −3s − t, −s − t/2, s, t)
                              = s(3, −3, −1, 1, 0) + t(1, −1, −1/2, 0, 1).

Thus the set

    {(3, −3, −1, 1, 0), (1, −1, −1/2, 0, 1)}

generates W and hence dim(W ) = 2. 2
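The two generators found for W can be checked directly: each must satisfy all five homogeneous equations. A plain-Python sketch with exact fractions (our own verification, not part of the notes):

```python
from fractions import Fraction

# Coefficient matrix of the homogeneous system in Example 7.2.2.
A = [[1, 2, 0, 3, 1],
     [2, 3, 0, 3, 1],
     [1, 1, 2, 2, 1],
     [3, 5, 0, 6, 2],
     [2, 3, 2, 5, 2]]

# The two generators of W read off from the general solution.
u = [3, -3, -1, 1, 0]                  # s = 1, t = 0
v = [1, -1, Fraction(-1, 2), 0, 1]     # s = 0, t = 1

for x in (u, v):
    residual = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    assert residual == [0, 0, 0, 0, 0]

print("both generators satisfy Ax = 0")
```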



Assignment 7.2.1.

1. Determine whether the following sets of vectors are bases for R3


(a) {(1, 0, 1), (1, 1, 0), (0, 1, 1), (2, 1, 1)}.
(b) {(1, 0, 1), (0, 1, 0), (1, 2, −1)}.
(c) {(1, 0, 1), (0, 1, 1), (1, 2, −1)}.
2. Determine the dimensions of the following subspaces of R4
(a) all vectors of the form (a, b, c, 0).
(b) {(a, b, c, d) : a + b + c + d = 0}.
3. Find a spanning family for the subspace S of R3 defined by the equation 2x − 3y + 5z = 0.
4. Prove that the set S = {(1, −2, 2), (3, −4, −1), (1, −4, 9), (0, 2, −7)} does not span R3 .
5. Consider the subset
S = {x3 − 2x2 + x − 3, 2x3 − 3x2 + 2x + 5, 4x2 + x − 3, 4x3 − 7x2 + 4x − 1} of P3 . Show
that 3x3 − 8x2 + 2x + 16 is in span(S) by expressing it as a linear combination of the
elements of S.
6. Let H = Span {v1 , v2 , v3 , . . . , v6 } be a subspace of R5 . Is the set {v1 , v2 , v3 , . . . , v6 } a
basis for H? Explain.
7. Determine if the following set is a subspace of the appropriate space. If the set is a
subspace, find a basis and the dimension of the subspace. If the set is not a subspace,
provide a counterexample to illustrate that one of the conditions in the definition of
subspace does not hold. W = {(a, b): a, b are non-negative real numbers}.
8. Find a basis for the subspace of M2×2 consisting of all 2 × 2 matrices A such that A = At .
What is the dimension of the subspace of Mn×n consisting of all n × n matrices A such
that A = At ?
9. Find a basis for the solution space of the system of equations Ax = 0 when
 
(a) A = [ 1  1 2 ]
        [ 2 −1 2 ] .
        [ 0  3 3 ]

(b) A = [ −1  0 1  2 ]
        [ −1  1 0 −1 ]
        [  0 −1 1  3 ] .
        [  1 −2 1  4 ]
What is the dimension of the solution space in each case?
10. In R3 let u1 = (1, 3, 4), u2 = (4, 0, 1) and u3 = (3, 1, 2). Determine the dimension of span
{u1 , u2 , u3 } and determine if (1, 1, 1) lies in span {u1 , u2 , u3 }.
11. In R4 let u1 = (2, 1, 0, −1), u2 = (4, 8, −4, −3), u3 = (1, −3, 2, 0) and u4 = (1, 10, −6, −2).
Determine the dimension of span {u1 , u2 , u3 , u4 } and find a nonzero linear combination of
{u1 , u2 , u3 , u4 } which equals the zero vector.
Chapter 8

Rank and Nullity

8.1 Rank of a Matrix

In this section we define the rank and nullity of a matrix. We further prove the Rank and
Nullity Theorem for matrices.

We note that an m × n matrix


 
    A = [ a11  a12  . . .  a1n ]
        [ a21  a22  . . .  a2n ]
        [ ...  ...  . . .  ... ]
        [ am1  am2  . . .  amn ]

contains m rows and n columns. Each row considered as a row matrix on its own is an element
of Rn .

Definition 8.1.1.

1. The vectors r1 = (a11 , a12 , . . . , a1n ), r2 = (a21 , a22 , . . . , a2n ), . . ., rm = (am1 , am2 , . . . , amn ) in
Rn formed from rows of A are called row vectors of A.
     
2. The vectors

       c1 = (a11 , a21 , . . . , am1 ), c2 = (a12 , a22 , . . . , am2 ), . . . , cn = (a1n , a2n , . . . , amn )

   in Rm formed from the columns of A are called column vectors of A.

3. If A is an m × n matrix, then the subspace of Rn spanned by the row vectors of A is called


the row space of A.

4. The subspace of Rm spanned by the column vectors of A is called the column space of A.

5. The set of all x ∈ Rn such that Ax = 0, is called the null space of A or solution space of A.

6. The dimension of the null space of A is called the nullity of A.


Theorem 8.1.1. A system of linear equations Ax = b is consistent if and only if b is in the


column space of A.

Example 8.1.1. Let Ax = b be the linear system

    [ −1 3  2 ]   [ x1 ]   [  1 ]
    [  1 2 −3 ] × [ x2 ] = [ −9 ] .
    [  2 1 −2 ]   [ x3 ]   [ −3 ]

Then b is in the column space of A since solving this system yields

    x1 = 2, x2 = −1, x3 = 3.
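The membership claim can be confirmed by a direct multiplication. In the sketch below (our own check) the (3, 3) entry of A is taken to be −2, which is what the stated solution x = (2, −1, 3) requires:

```python
A = [[-1, 3, 2],
     [1, 2, -3],
     [2, 1, -2]]   # (3,3) entry taken as -2 so that x = (2,-1,3) solves the system
x = [2, -1, 3]

# Compute Ax row by row; this equals 2*c1 + (-1)*c2 + 3*c3 in terms of columns.
b = [sum(a * xi for a, xi in zip(row, x)) for row in A]
print(b)  # → [1, -9, -3], so b lies in the column space of A
```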

Theorem 8.1.2. Elementary row operations do not change the nullspace of a matrix.

Theorem 8.1.3. Elementary row operations do not change the row space of a matrix.

Theorem 8.1.4. If A and B are row equivalent matrices, then

(a) A given set of column vectors of A is linearly independent if and only if the corresponding
column vectors of B are linearly independent.

(b) A given set of column vectors of A forms a basis for the column space of A if and only if the
corresponding column vectors of B form a basis for the column space of B.

Theorem 8.1.5. If a matrix B is in row echelon form, then the row vectors with the leading 1’s
(the nonzero row vectors) form a basis for the row space of B, and the column vectors with the
leading 1’s of the row vectors form a basis for the column space of B.

Note 8.1.1. Suppose that a matrix A is equivalent to a matrix B, which is in row-echelon form.

(i) Since elementary row operations do not change the row space of a matrix, we can find a
basis for the row space of A by finding a basis for the row space of B (see Theorems 8.1.3
and 8.1.5).

(ii) Since A and B may have different column spaces, we cannot find a basis for the column
space of A directly from the column vectors of B. However, it follows from Theorem
8.1.4(b) that if we can find a set of column vectors of B that forms a basis for the column
space of B, then the corresponding column vectors of A will form a basis for the column
space of A.
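These two observations give a concrete procedure: row-reduce A, take the nonzero rows of the reduced matrix as a basis for the row space, and take the columns of the original A in the pivot positions as a basis for the column space. The sketch below uses a small rank-2 matrix of our own choosing and a hand-rolled rref helper:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form (exact arithmetic); returns (matrix, pivot columns)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

A = [[1, 2, 0],
     [2, 4, 1],
     [3, 6, 1]]   # a small rank-2 example of our own choosing

R, pivots = rref(A)
row_basis = [[int(x) for x in row] for row in R if any(row)]   # nonzero rows of rref(A)
col_basis = [[row[c] for row in A] for c in pivots]            # pivot columns of the ORIGINAL A

print(row_basis)  # → [[1, 2, 0], [0, 0, 1]]
print(col_basis)  # → [[1, 2, 3], [0, 1, 1]]
```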

Definition 8.1.2.

1. The dimension of the row space of a matrix A is called the row rank of A.

2. The dimension of the column space of a matrix A is called the column rank of A.

Theorem 8.1.6. The column rank and the row rank of any matrix A are equal.

Proof: Let B be any row-echelon form of A. It follows from Theorem 8.1.3 that

dim(row space of A) = dim(row space of B)



and it follows from Theorem 8.1.4 (b) that


dim(column space of A) = dim(column space of B)
We next show that the row space and column space of B have the same dimension. We note
that the dimension of the row space of B is the number of nonzero rows, and the dimension of
the column space of B is the number of columns that contain leading 1’s (Theorem 8.1.5). But,
the nonzero rows are precisely the rows in which the leading 1’s occur, so the number of leading
1’s and the number of nonzero rows are the same. Thus the row space and column space of B
have the same dimension. 2
Definition 8.1.3. The dimension of the row or column space of a matrix A is the rank of the
matrix A.
Theorem 8.1.7. If A is any matrix, then rank(A) = rank(At ).

Proof: rank(A) = dim(row space of A) = dim(column space of At ) = rank(At ). 2


Theorem 8.1.8. (Rank-Nullity Theorem) If A is a matrix with n columns, then
n = rank(A) + nullity(A).

Proof: Since A has n columns, the homogeneous linear system Ax = 0 has n unknowns
(variables). These fall into two categories: the leading variables and the free variables. Thus
   
    (number of leading variables) + (number of free variables) = n.

But the number of leading variables is the same as the number of leading 1’s in the reduced
row-echelon form of A, and this is the rank of A. Thus

    rank(A) + (number of free variables) = n.
The number of free variables is equal to the nullity of A. This is so because the nullity of A is
the dimension of the solution space of Ax = 0, which is the same as the number of parameters
in the general solution, which is the same as the number of free variables. Thus
rank(A) + nullity(A) = n.
2
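The theorem is easy to verify on a concrete matrix. The sketch below reuses the coefficient matrix from Example 7.2.2 (n = 5 columns) together with a hand-rolled exact-arithmetic rank helper; it confirms rank(A) + nullity(A) = 5:

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gauss-Jordan elimination using exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Coefficient matrix from Example 7.2.2.
A = [[1, 2, 0, 3, 1],
     [2, 3, 0, 3, 1],
     [1, 1, 2, 2, 1],
     [3, 5, 0, 6, 2],
     [2, 3, 2, 5, 2]]

r = rank(A)
print(r, len(A[0]) - r)  # → 3 2: rank 3 plus nullity 2 equals n = 5
```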
Remark 8.1.1. Theorem 8.1.8

(i) is also referred to as the Dimension Theorem for matrices.


(ii) also contains two results that are of importance in their own right (see Theorem 8.1.10).
Theorem 8.1.9. If x0 denotes any single solution of a consistent linear system Ax = b, and if
x1 , x2 , . . . , xk form a basis for the nullspace of A (that is, the solution space of the
homogeneous system Ax = 0), then every solution of Ax = b can be expressed in the form
x = x0 + c 1 x1 + c 2 x2 + · · · + c k xk (8.1)
and, conversely, for all choices of scalars c1 , c2 , . . . , ck , the vector x in this formula is a solution
of Ax = b.
Rank and Nullity 81

Proof: Let x be an arbitrary solution of the system Ax = b. Then Ax0 = b and Ax = b so that
A(x − x0 ) = 0. So x − x0 is in the null space of A; hence there are scalars c1 , c2 , . . . , ck such that

x − x0 = c1 x1 + c2 x2 + · · · + ck xk , i.e., x = x0 + c1 x1 + c2 x2 + · · · + ck xk .

Conversely, if x = x0 + c1 x1 + c2 x2 + · · · + ck xk then

Ax = Ax0 + c1 Ax1 + c2 Ax2 + · · · + ck Axk = b + c1 0 + c2 0 + · · · + ck 0 = b

which completes the proof. 2
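Theorem 8.1.9 can be checked numerically. In the sketch below (NumPy assumed; the particular matrix, particular solution x0 and nullspace vector v1 are illustrative choices, not from the notes), every vector x0 + c·v1 solves Ax = b:

```python
import numpy as np

A = np.array([[1., 0., 2.],
              [0., 1., 3.]])
b = np.array([1., 2.])

x0 = np.array([1., 2., 0.])    # one particular solution: A @ x0 == b
v1 = np.array([-2., -3., 1.])  # spans the nullspace:      A @ v1 == 0

# By Theorem 8.1.9, x0 + c*v1 solves A x = b for every scalar c.
for c in (-2.0, 0.0, 3.5):
    assert np.allclose(A @ (x0 + c * v1), b)
```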

Theorem 8.1.10. If A is an m × n matrix, then

(i) rank(A) = number of leading variables in the solution of Ax = 0.

(ii) nullity(A) = number of parameters in the general solution of Ax = 0.

Theorem 8.1.11. (The Consistency Theorem)

If Ax = b is a linear system of m equations in n unknowns, then the following are equivalent:

(i) Ax = b is consistent.

(ii) b is in the column space of A.

(iii) The coefficient matrix A and the augmented matrix [A|b] have the same rank.
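The Consistency Theorem gives a mechanical test: compare rank(A) with rank([A|b]). A NumPy sketch (the matrix is the one that appears in Example 8.1.4 later in this chapter, where the system is consistent exactly when c = a + b):

```python
import numpy as np

A = np.array([[1, 1, 1],
              [0, -1, 1],
              [1, 0, 2]])
rank_A = np.linalg.matrix_rank(A)

# b = (a, b, c)^t lies in the column space iff c = a + b,
# i.e. iff appending b to A does not increase the rank.
for b, expected in (([1, 2, 3], True), ([1, 2, 4], False)):
    rank_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    assert (rank_aug == rank_A) == expected
```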

Theorem 8.1.12. Let A be an n × n matrix. The following statements are equivalent:

(a) A is invertible.

(b) The null space of A is {0}, i.e., just a zero vector.

(c) nullity(A) = 0.

(d) rank(A) = n.

(e) The column vectors of A form a basis for Rn .

(f) The row vectors of A form a basis for Rn .

Theorem 8.1.13. Let A be an n × n matrix. The following statements are equivalent:

(a) A is invertible.

(b) The only solution to the system Ax = 0 is the trivial solution.

(c) A is row equivalent to In .

(d) Ax = b has exactly one solution for every n × 1 matrix b.

(e) Ax = b is consistent for every n × 1 matrix b.


Rank and Nullity 82

(f) det(A) ≠ 0.
Example 8.1.2. Consider a linear system of equations where the coefficient matrix is
$$A = \begin{pmatrix} 1 & 1 & 5 & 1 & 4 \\ 2 & -1 & 1 & 2 & 2 \\ 3 & 0 & 6 & 0 & -3 \end{pmatrix}.$$
Find the

(i) basis for the row space of A.

(ii) dimension of the row space of A.

(iii) basis for the column space of A.

(iv) dimension of the column space of A.

(v) rank of A.

(vi) basis for the solution space of Ax = 0.

(vii) nullity of A.
   
Solution: Row reducing A to reduced row-echelon form gives
$$A = \begin{pmatrix} 1 & 1 & 5 & 1 & 4 \\ 2 & -1 & 1 & 2 & 2 \\ 3 & 0 & 6 & 0 & -3 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 2 & 0 & -1 \\ 0 & 1 & 3 & 0 & 2 \\ 0 & 0 & 0 & 1 & 3 \end{pmatrix}.$$

(i) The basis for the row space of A is {(1, 0, 2, 0, −1), (0, 1, 3, 0, 2), (0, 0, 0, 1, 3)}.
(ii) dim(the row space of A) = 3.
     
 1 1 1 
(iii) The basis for the column space of A is  2 , −1 , 2  .
   
3 0 0
 

(iv) dim(the column space of A) = 3.


(v) rank(A) = 3.
(vi) For the solution space, we have to solve Ax = 0. Taking the free variables x3 = s and x5 = t, this gives
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} -2s+t \\ -3s-2t \\ s \\ -3t \\ t \end{pmatrix} = s\begin{pmatrix} -2 \\ -3 \\ 1 \\ 0 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ -2 \\ 0 \\ -3 \\ 1 \end{pmatrix},$$
where s and t are arbitrary scalars. Hence the basis for the solution space is
$$\left\{ \begin{pmatrix} -2 \\ -3 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ -2 \\ 0 \\ -3 \\ 1 \end{pmatrix} \right\}.$$

(vii) nullity(A) = dimension of the solution space of A. Hence nullity(A) = 2.
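The answers in Example 8.1.2 can be double-checked numerically (a NumPy sketch, not part of the original notes):

```python
import numpy as np

A = np.array([[1, 1, 5, 1, 4],
              [2, -1, 1, 2, 2],
              [3, 0, 6, 0, -3]])

rank = np.linalg.matrix_rank(A)
assert rank == 3                 # parts (ii), (iv) and (v)
assert A.shape[1] - rank == 2    # part (vii): nullity(A) = 5 - 3 = 2

# Part (vi): the two claimed basis vectors of the solution space satisfy Av = 0.
v1 = np.array([-2, -3, 1, 0, 0])
v2 = np.array([1, -2, 0, -3, 1])
assert np.allclose(A @ v1, 0) and np.allclose(A @ v2, 0)
```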

Example 8.1.3. Consider a linear system of equations where the coefficient matrix is
$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}.$$

Find the

(i) basis for the row space of A.

(ii) dimension of the row space of A.

(iii) basis for the column space of A.

(iv) dimension of the column space of A.

(v) rank of A.

(vi) basis for the solution space of A.

(vii) nullity of A.

   
Solution: Row reducing, $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$

(i) The basis for the row space of A is {(1, 0), (0, 1)}

(ii) dim(the row space of A) = 2.


   
(iii) The basis for the column space of A is $\left\{ \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \end{pmatrix} \right\}$.
(iv) dim(the column space of A) = 2.

(v) rank(A) = 2.

(vi) For the solution space, we have to solve Ax = 0. Solving this system gives only the trivial
solution x1 = 0 and x2 = 0. Thus the solution space of A is {0}.

(vii) nullity(A) = 0.
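Example 8.1.3 illustrates Theorem 8.1.12: full rank, zero nullity and invertibility go together. A NumPy check (an illustration, not part of the notes):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

assert np.linalg.matrix_rank(A) == 2          # rank(A) = n, Theorem 8.1.12(d)
assert not np.isclose(np.linalg.det(A), 0.0)  # det(A) != 0, so A is invertible
# With A invertible, Ax = 0 has only the trivial solution:
assert np.allclose(np.linalg.solve(A, np.zeros(2)), np.zeros(2))
```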

Example 8.1.4. Consider the matrix
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & -1 & 1 \\ 1 & 0 & 2 \end{pmatrix}.$$

(a) Explain when the linear system Ax = b for b ∈ R3 would have

(i) no solution.
(ii) infinitely many solutions.
(iii) a unique solution.

(b) Show that x = (3, 1, 4)t is in the column space of A.

(c) Find a basis of the column space of A and hence state its dimension.

(d) Explain whether every vector x ∈ R3 is in the column space of A.

(e) State the rank and nullity of A.

(f) Does Ax = 0 have a nontrivial solution? Justify your answer.

Solution:

(a) We consider the augmented matrix for any (a, b, c) ∈ R3 :
$$\left(\begin{array}{ccc|c} 1 & 1 & 1 & a \\ 0 & -1 & 1 & b \\ 1 & 0 & 2 & c \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & 1 & 1 & a \\ 0 & -1 & 1 & b \\ 0 & -1 & 1 & c-a \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & 0 & 2 & a+b \\ 0 & 1 & -1 & -b \\ 0 & 0 & 0 & c-a-b \end{array}\right).$$

(i) There is no solution if c ≠ a + b.

(ii) There are infinitely many solutions if c − a − b = 0, i.e., if c = a + b.

(iii) A unique solution is not possible for any (a, b, c) ∈ R3 , since the echelon form of the
3 × 3 matrix A has only 2 nonzero rows, so every consistent system has a free variable.

(b) To be in the column space of A, (a, b, c) must satisfy c = a + b (in which case the
system is consistent). For x = (3, 1, 4)t we have 4 = 3 + 1, so x is in the column space of A.
   
 1 1 
(c) The basis for the column space is  0  ,  −1  .
1 0
 

(d) dim(column space) = 2.

(e) Not all x ∈ R3 are in the column space of A since dim(col space A) = 2 and dim(R3 ) = 3.

(f) dim(col space) = rank(A) = 2.

(g) nullity(A) = 3 - rank(A) = 3 - 2 = 1.

(h) Ax = 0 has a non trivial solution since we have two equations and 3 unknowns in the
echelon form.

2

Assignment 8.1.1.

1. A matrix is called full rank when its rank is as large as possible. Let A be an m × n
matrix of full rank. Justify whether or not Ax = b will be consistent if:

(a) m = n,
(b) m > n,
(c) m < n.
 
1 −1 2 1
 2 1 −1 2 
2. Let A = 
 2 −1

3 −1 
1 1 −2 4
(a) Find a basis for the row space of A. What is the rank of A? Use the rank to
determine the dimension of the set of solutions of the system of equations Ax = 0.
(b) Find a basis for the set of solutions of Ax = 0 and observe that the dimension agrees
with that predicted in (a).
 
3. Let $A = \begin{pmatrix} 1 & 2 & -1 & 4 \\ 2 & -1 & 3 & 3 \\ -1 & 3 & -4 & 1 \\ 0 & 1 & -1 & 1 \end{pmatrix}$. Find bases for the row and column spaces of A.
4. Give examples of nonzero m × n matrices A such that

(a) rank(A) = min{m, n},


(b) rank(A) < min{m, n}.

5. Prove that if A is a matrix which is not square, then either the row vectors of A or the
column vectors of A are linearly dependent.

6. Let A be a 3 × 4 matrix with rank(A) = 3.

(a) What is the dimension of {x : Ax = 0}?


(b) Is Ax = b consistent for all b?
(c) If Ax = b is consistent, how many solutions does it have?
(d) If A is a 4 × 3 matrix with rank(A) = 3, what are the answers to (a), (b) and (c)?

Exercise 8.1.1.

Consider a matrix
$$A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 2 \\ 1 & 3 & 2 \end{pmatrix}.$$

(a) Solve Ax = b for b = (4, −1, 3)t .

(b) Is b in the column space of A? Justify your answer.



(c) Explain whether every vector b ∈ R3 is in the column space of A. Justify your answer.

(d) Find the basis for the column space of A.

(e) Find the basis for the row space of A.

(f) Find the basis for the solution space of A.


Semester I Makerere University Examinations 2013/2014 87

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

Name:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stud No:. . . . . . . . . . . . . . . . . . Reg No:. . . . . . . . . . . . . . . . . .

School:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Course:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MTH 1102: LINEAR ALGEBRA I TEST 2 THURSDAY DECEMBER 12, 2013


1:00 PM - 2:00 PM

Instructions to Candidates:

(i) Attempt ALL questions.

(ii) Each correct answer will earn 2 marks. A question failed or not attempted will earn 0
marks.

(iii) In Questions 1–5, answer True or False in the space provided. In Questions 6–10, circle the
correct answer. In Questions 11–15, write the final answer in the space provided.

1. Let T : R2 → R3 be a linear transformation such that T (x, y) = (x − y, y − x, 2x − 2y).


Then T is 1-1. ................

2. A finite set of vectors that contains the zero vector is always linearly independent.
................

3. If A and B are similar matrices, then A−1 and B −1 are also similar. .................

4. Let V = R3 , with the operations in V defined by


(x1 , x2 , x3 ) + (y1 , y2 , y3 ) = (x1 + y1 , x2 + y2 , x3 + y3 ) and α.(x1 , x2 , x3 ) = (0, 0, 0). Then V is
a vector space ................

5. A system of linear equations Ax = b is inconsistent if and only if b is in the column space


of A. ................

6. The characteristic polynomial of a matrix B is P (λ) = λ2 − 4λ + 3. The determinant of B


is

(a) 1. (b) -3. (c) 3. (d) none of these.

7. Suppose that T : V → W is a linear transformation from a 5 dimensional vector space V


to a 4 dimensional vector space W . If the nullity of T is 2, then the rank of T is

(a) 2. (b) 3. (c) 1. (d) none of these.

8. Let S = {(1, −1, 0), (0, 1, 1), (1, −1, 1)} be a basis for a vector space V = R3 . Let
x = (1, 2, 3). Then [x]S is

(a) (−1, 3, 0)t . (b) (1, −3, 0)t . (c) (1, 3, 0)t . (d) none of these.

9. Let T : R3 → R2 be a linear transformation defined by T (x, y, z) = (x + y, y − z). If B is
the natural basis for R3 and B′ is the natural basis for R2 , then the matrix A for T with
respect to B and B′ is
(a) $\begin{pmatrix} -1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$  (b) $\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & -1 \end{pmatrix}$  (c) $\begin{pmatrix} 1 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}$  (d) none of these.
 
10. Let $A = \begin{pmatrix} 1 & 1 & 5 \\ 2 & -1 & 1 \end{pmatrix}$. A basis for the solution space of Ax = 0 is
(a) {(−2, −3, 1)t }. (b) {(2, 3, 1)t }. (c) {(−2, 3, −1)t }. (d) none of these.

11. Let u1 = (1, 2, 1, 1), u2 = (3, 5, 4, 2), u3 = (−1, 0, −3, 1) and u4 = (2, 2, 4, 0). A basis for
U = span{u1 , u2 , u3 , u4 } is ...............................................

 
12. Let $A = \begin{pmatrix} 1 & 1 & 2 \\ 0 & 1 & 0 \\ 0 & 1 & 3 \end{pmatrix}$. A diagonal matrix D that is similar to A is ............................

 
13. Let $A = \begin{pmatrix} 1 & 1 & 5 & 1 & 4 \\ 2 & -1 & 1 & 2 & 2 \\ 3 & 0 & 6 & 0 & -3 \end{pmatrix}$. The basis for the column space of A
is ................................

14. Let T : R4 → R2 be a linear transformation defined by T (x, y, z, w) = (x + y, z + w). The


nullity of T is .................

 
15. Let $B = \begin{pmatrix} 3 & 1 \\ 2 & 2 \end{pmatrix}$. The matrix P that diagonalizes B is ................................

Good Luck
Semester I Makerere University Examinations 2014/2015 89

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

Name:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stud No:. . . . . . . . . . . . . . . . . . Reg No:. . . . . . . . . . . . . . . . . .

School:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Course:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MTH 1102: LINEAR ALGEBRA I TEST 2 WEDNESDAY NOVEMBER 12, 2014


1:00 PM - 2:00 PM

Instructions to Candidates:

(i) Attempt ALL questions.

(ii) Every correct answer will earn 2 marks.

(iii) In Questions 1–5, answer True or False in the space provided, justifying your answer. In
Questions 6–10 circle the correct answer. In Questions 11–15, write the final answer in the
space provided.

1. If S = {v1 , v2 , v3 , v4 } is a set of vectors in R4 and v4 = 0, then S is linearly independent.

2. If the dimension of the column space of an 8 × 6 matrix A is 4, then the dimension of the
null space of A is 4.

3. Every linearly independent set of vectors in a vector space V forms a basis of V .

4. Let S = {v1 , v2 , . . . , vn } and let H = Span{v1 , v2 , . . . , vn }. If one of the vectors in S say,


vk , is a linear combination of the remaining vectors in S, then the set formed from S by
removing vk still spans H.

5. The set of ordered pairs of real numbers R2 together with the usual addition and scalar
multiplication defined by α.(x, y) = (x, αy), ∀(x, y) ∈ R2 , is not a vector space.

6. Given two vector subspaces, W1 and W2 of a vector space V , which of the following
statements about W1 and W2 is not always true?

(a) W1 ∪ W2 is a vector subspace of V .


(b) W1 ∩ W2 is a vector subspace of V .
(c) W1 ∪ W2 is a vector subspace of V if and only if W1 ⊆ W2 or W2 ⊆ W1 .
(d) None of these.

7. Let A = {1, t2 , 3 + 3t}, B = {t, t2 , 2t + 3t2 }, C = {1, t2 , 3 + 3t + t2 }. Which of these sets is


linearly independent?

(a) C only. (c) B only. (e) A only.


(b) All of them. (d) A and C. (f) None of these.
 
8. Given a matrix $A = \begin{pmatrix} 0 & 1 & 4 \\ 1 & 2 & -1 \\ 5 & 8 & 0 \end{pmatrix}$, which of the following statements about A is not
true?

(a) The columns of A are linearly independent.


(b) The row rank of A is 3.
(c) The basis for column space of A is {(0, 1, 5), (1, 2, −8), (4, −1, 0)}.
(d) The nullity of A is zero.
(e) All the above are true.
(f) None of the above is true.

9. Which of the following is true?

(a) dim(M2×3 ) = dim(P5 ).


(b) dim(Pn ) = n.
(c) dim(Rn ) = n − 1.
(d) dim(Rn ) = dim(Pn ).
(e) None of these.

10. Which of the following polynomials DOES NOT belong to the span of
{1 + 2t + t2 , 3 + t2 , −1 + t}?

(a) −t2 + t − 4. (b) 2t2 + 2t + 2. (c) t2 + t + 1. (d) None of these.


 
11. Let $A = \begin{pmatrix} 1 & 2 & 3 \\ 1 & -3 & 4 \\ 2 & -1 & 7 \end{pmatrix}$. Write down the basis for the column space of A.

12. Let u = (1, 1, 0, 2), v = (−1, 1, 0, 0) and w = (2, 1, 0, 3). Are the vectors u, v and w linearly
independent? Justify your answer.

13. Write down the set of vectors that will exactly span the vector space M2×2 of all 2 × 2
matrices with real coefficients.

 
14. Find the nullity of A if $A = \begin{pmatrix} 1 & -2 & 3 & -3 & -1 \\ -2 & 5 & -5 & 4 & -4 \\ -1 & 3 & -2 & 1 & -5 \end{pmatrix}$.

15. Let W be a subset of a vector space V over R. State the conditions which W must satisfy
to be a vector subspace of V .
(i)
(ii)
(iii)

Good Luck
Semester I Makerere University Examinations 2015/2016 92

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

Student Number:. . . . . . . . . . . . . . . . . . . . . . . . . . . Registration Number:. . . . . . . . . . . . . . . . . . . . . . . .

School:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Course:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MTH 1102: LINEAR ALGEBRA 1 TEST 2 TUESDAY NOVEMBER 3, 2015


1:00 PM - 2:00 PM

Instructions to Candidates:

(i) Attempt ALL questions.

(ii) Every correct answer will earn 2 marks.

(iii) In Questions 1–5, answer True or False in the space provided. In Questions 6–10 circle the
correct answer. In Questions 11–15, write the final answer in the space provided.

1. If the vectors v1 and v2 lie on a line in R2 then they are linearly independent. .........

2. Suppose a 5 × 7 matrix A has four (4) pivot columns. The dimension of the null space of
A is 1. .........

3. Let S = {v1 , v2 , v3 }. The vectors v1 − v2 , v2 − v3 and v3 − v1 are linearly independent.


.........

4. Let S = {v1 , v2 , . . . , vn } be a basis for Rn . If one of the vectors in S say, vk , is replaced


by a linear combination of all vectors in S, then the new set formed spans Rn . .........

5. The set of ordered triples of real numbers R3 together with the usual addition and scalar
multiplication defined by β.(x, y, z) = (βx, y, βz), ∀(x, y, z) ∈ R3 , is not a vector space. .........

6. A homogeneous system Ax = 0, where A is an n × n matrix, will have a non trivial


solution if

(a) rank(A) < n.
(b) rank(A) > n.
(c) The row space of A contains only linearly independent vectors.
(d) None of these.

7. Let A = {(1, 2, −1), (3, 6, −3)}, B = {1, t, t2 }, C = {1, t2 , 3 + 3t + t2 }. Which of these is


linearly independent?

(a) C only. (c) B only. (e) A only.


(b) all of them. (d) B and C.
 
8. Consider the matrix $A = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & -1 \\ 2 & 1 & 2 \end{pmatrix}$. Using rank(A), decide which of the following
statements about the system Ax = b, where b = (1, 1, 1)t , is true.

(a) The system has a unique solution.


(b) The system has infinitely many solutions.
(c) The system has no solution.
(d) The information given is not enough to determine whether the system has a solution.
       
9. Consider the set $S = \left\{ \begin{pmatrix} 2 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ -3 & -1 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 6 & 2 \end{pmatrix}, \begin{pmatrix} 3 & -1 \\ 0 & 0 \end{pmatrix} \right\}$. All the
following are false except

(a) The set S is linearly independent.


(b) The set S spans M2×2 .
(c) The set S contains linearly dependent matrices.
(d) The set S forms a basis for M2×2 .

10. Which of the following DOES NOT belong to the span of {(2, 3, -5), (-4, -5, 8)}?

(a) (8, 2, 9), (b) (-2, -2, 3), (c) (0, 1, -2), (d) None of these.
 
11. Let $A = \begin{pmatrix} 1 & 1 & 3 & 1 \\ 2 & 1 & 5 & 4 \\ 1 & 2 & 4 & -1 \end{pmatrix}$. Write down the basis for the null space of A. ..................
12. Write down the basis for the column space of A if A is as given in Question 11 above.

13. Determine whether the set of polynomials S = {1, 1 − t, 1 + t + t2 } is a basis for P2 .

14. Write down the set of vectors that will exactly span the vector space P3 of all polynomials
of degree less or equal to 3 with real coefficients.

15. Let W ⊆ R3 be given by W = {(x, y, z) ∈ R3 : x + y = 2z}. Determine whether W is a


vector subspace of R3 .
Chapter 9

Eigenvectors and Diagonalization

9.1 Eigenvalues and Eigenvectors

Almost all vectors change direction when they are multiplied by a matrix A. Certain
exceptional vectors x are in the same direction as Ax and these are the eigenvectors of A. “The
basic equation is Ax = λx, where the scalar λ is an eigenvalue of A.”

The eigenvalue λ tells whether the special vector x is stretched or shrunk or reversed or left
unchanged when it is multiplied by A. We may find λ to be equal to 2 or 1/2 or −1 or 1. The
eigenvalue λ could be zero as well. Then Ax = 0x means that this eigenvector x is in the
nullspace. Most 2 × 2 matrices have two eigenvector directions and two eigenvalues. We will
show that the eigenvalues are precisely the solutions of det(A − λI) = 0.
Note 9.1.1. The eigenvectors corresponding to λ are the nonzero vectors in the solution space
of (A − λIn )x = 0, that is, in the null space of (A − λIn ). We call this solution space the
eigenspace of A corresponding to λ.
Definition 9.1.1. If A is an n × n matrix, then a nonzero vector x in Rn is called an
eigenvector of A if Ax is a scalar multiple of x; that is, Ax = λx. The scalar λ is called an
eigenvalue of A, and x is said to be an eigenvector of A corresponding to λ.
   
Example 9.1.1. The vector $x = \begin{pmatrix} -1 \\ 1 \end{pmatrix}$ is an eigenvector of $A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}$ because
$$Ax = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}\begin{pmatrix} -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix} = (-1)\begin{pmatrix} -1 \\ 1 \end{pmatrix} = (-1)x.$$
The corresponding eigenvalue is λ = −1.
   
Example 9.1.2. The vector $x = \begin{pmatrix} 4 \\ 1 \\ -2 \end{pmatrix}$ is an eigenvector of $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}$ because
$$Ax = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}\begin{pmatrix} 4 \\ 1 \\ -2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} = 0\begin{pmatrix} 4 \\ 1 \\ -2 \end{pmatrix} = 0x.$$
The corresponding eigenvalue is λ = 0.
Example 9.1.3. Let A = In ; then Ax = In x = 1x, ∀x. The corresponding eigenvalue is λ = 1
for every vector x.

Eigenvectors and Diagonalization 95

Let Ax = λx, where A is a square matrix of order n. To find eigenvalues and eigenvectors of A,
we need to solve a homogeneous system given by
Ax = λx ⇔ Ax − λx = 0 ⇔ (A − λIn )x = 0.
For a nontrivial solution to the homogeneous system, we require that
|A − λIn | = 0 or det(A − λIn ) = 0.
Definition 9.1.2. The equation det(A − λIn ) = 0 is called the characteristic equation of A and
the scalars satisfying this equation are called the eigenvalues of A. When expanded, the
determinant det(A − λIn ) is always a polynomial P in λ, called the characteristic polynomial of
A.
Note 9.1.2. If A is an n × n matrix, the characteristic equation of A has exactly n roots.
These roots may be all real or all complex or both. The complex roots occur in pairs, i.e., a
complex root together with its conjugate.
 
Example 9.1.4. Find the eigenvalues and the eigenvectors of $A = \begin{pmatrix} 1 & 2 & -1 \\ 1 & 0 & 1 \\ 4 & -4 & 5 \end{pmatrix}$.

Solution:
   
(a) Characteristic polynomial: expanding along the first row,
$$P(\lambda) = |A - \lambda I_3| = \begin{vmatrix} 1-\lambda & 2 & -1 \\ 1 & -\lambda & 1 \\ 4 & -4 & 5-\lambda \end{vmatrix} = (1-\lambda)\begin{vmatrix} -\lambda & 1 \\ -4 & 5-\lambda \end{vmatrix} - 2\begin{vmatrix} 1 & 1 \\ 4 & 5-\lambda \end{vmatrix} - \begin{vmatrix} 1 & -\lambda \\ 4 & -4 \end{vmatrix} = -(\lambda^3 - 6\lambda^2 + 11\lambda - 6) = -(\lambda-1)(\lambda-2)(\lambda-3).$$
Setting P (λ) = 0 gives λ1 = 1, λ2 = 2, λ3 = 3. Hence the eigenvalues of A are λ1 = 1, λ2 = 2 and λ3 = 3.
(b) Eigenvectors: for each eigenvalue we need to get the corresponding eigenvector by solving
the homogeneous equation (A − λi I3 )x = 0, i = 1, 2, 3.
For λ1 = 1, (A − I3 )x = 0 becomes
$$\begin{pmatrix} 0 & 2 & -1 \\ 1 & -1 & 1 \\ 4 & -4 & 4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Solving the homogeneous system gives x = (x1 , x2 , x3 ) = r(−1/2, 1/2, 1). Hence the
eigenvectors corresponding to λ1 = 1 are the nonzero vectors x = r(−1/2, 1/2, 1).
For λ2 = 2, (A − 2I3 )x = 0 becomes
$$\begin{pmatrix} -1 & 2 & -1 \\ 1 & -2 & 1 \\ 4 & -4 & 3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Solving the homogeneous system gives x = (x1 , x2 , x3 ) = s(−1/2, 1/4, 1). Hence the
eigenvectors corresponding to λ2 = 2 are the nonzero vectors x = s(−1/2, 1/4, 1).
For λ3 = 3, (A − 3I3 )x = 0 becomes
$$\begin{pmatrix} -2 & 2 & -1 \\ 1 & -3 & 1 \\ 4 & -4 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Solving the homogeneous system gives x = (x1 , x2 , x3 ) = t(−1/4, 1/4, 1). Hence the
eigenvectors corresponding to λ3 = 3 are the nonzero vectors x = t(−1/4, 1/4, 1).
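The hand computation in Example 9.1.4 can be confirmed numerically (a NumPy sketch; numpy.linalg.eig returns floating-point approximations and scales the eigenvectors differently from the fractions found above):

```python
import numpy as np

A = np.array([[1., 2., -1.],
              [1., 0., 1.],
              [4., -4., 5.]])

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

# The eigenvalues are 1, 2 and 3, as found above.
assert np.allclose(sorted(eigvals.real), [1.0, 2.0, 3.0])

# Each eigenpair satisfies the defining equation A x = lambda x.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)
```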

Properties of Eigenvalues and Eigenvectors

Theorem 9.1.1. If A is a square triangular matrix (upper triangular, lower triangular, or


diagonal), then the eigenvalues of A are the entries on the main diagonal of A.
Theorem 9.1.2. The sum of the eigenvalues of a square matrix equals the trace of the matrix.
Theorem 9.1.3. If k is a positive integer, λ is an eigenvalue of a square matrix A, and x is a
corresponding eigenvector, then λk is an eigenvalue of Ak and x is a corresponding eigenvector.
Theorem 9.1.4. If x is an eigenvector of A corresponding to the eigenvalue λ, then kλ and x
are a corresponding pair of eigenvalues and eigenvectors of kA, for any nonzero scalar k.
Theorem 9.1.5. A square matrix A is invertible if and only if λ = 0 is not an eigenvalue of A.
Theorem 9.1.6. A square matrix is singular if and only if it has a zero eigenvalue.

Proof: A square matrix A has a zero eigenvalue if and only if det(A − 0I) = 0, or (since
0I = 0) if and only if det(A) = 0, which holds if and only if A is singular. 2
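A small numerical illustration of Theorem 9.1.6 (NumPy assumed; the singular matrix is an illustrative choice):

```python
import numpy as np

# Singular: the second row is twice the first, so det(A) = 0.
A = np.array([[1., 2.],
              [2., 4.]])

eigvals = np.linalg.eig(A)[0]
assert np.isclose(np.linalg.det(A), 0.0)   # A is singular ...
assert np.any(np.isclose(eigvals, 0.0))    # ... so 0 is an eigenvalue
```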
Note 9.1.3.

1. A non singular square matrix and its inverse have reciprocal eigenvalues and identical
eigenvectors.
2. The product of all the eigenvalues of a square matrix (counting multiplicity) equals the
determinant of the matrix.
3. If λ is an eigenvalue of A, then λ is also an eigenvalue of At .
Theorem 9.1.7. Let A be a square matrix of order n and λ be an eigenvalue of A. Let V be the
set of all eigenvectors corresponding to λ, together with the zero vector. Then V is a vector subspace of Rn .

Proof: Let u, v ∈ V , k ∈ R. Then A(u + v) = Au + Av = λu + λv = λ(u + v). Therefore


u + v is also in V . Also A(ku) = k(Au) = k(λu) = λ(ku). Thus ku ∈ V . This means that V is
a subspace of Rn . 2

9.2 Similarity and Diagonalization

Definition 9.2.1. A square matrix A is similar to a square matrix B if there is an invertible


matrix P such that B = P −1 AP .
Theorem 9.2.1. Suppose that A, B, C are square matrices of the same order n. Then

(a) A is similar to A.
(b) If A is similar to B, then B is similar to A.
(c) If A is similar to B and B is similar to C, then A is similar to C.

Proof:

(a) Since A = In−1 AIn , then A is similar to A.


(b) Since A is similar to B, then there is an invertible matrix P such that
B = P −1 AP ⇒ A = P BP −1 = (P −1 )−1 BP −1 = Q−1 BQ, where Q = P −1 . Thus B is similar
to A.
(c) A similar to B ⇒ there is an invertible matrix P such that B = P −1 AP . Also B similar to
C ⇒ there is an invertible matrix Q such that
C = Q−1 BQ = Q−1 P −1 AP Q = (P Q)−1 A(P Q) = R−1 AR, where R = P Q. Hence A is
similar to C.

2
Remark 9.2.1. From Theorem 9.2.1, we see that the property of similarity among square
matrices is an equivalence relation.
Definition 9.2.2. A matrix A is said to be diagonalizable, if it is similar to diagonal matrix.
That is if there exists an invertible matrix P such that P −1 AP is a diagonal matrix. The matrix
P is said to diagonalize A or it is called the diagonalizing matrix of A.
 
Example 9.2.1. $A = \begin{pmatrix} 1 & 1 \\ -2 & 4 \end{pmatrix}$ is diagonalizable, since it is similar to the diagonal matrix
$\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The diagonalizing matrix is $P = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}$.
Theorem 9.2.2. If A is an n × n matrix, then the following are equivalent:

(a) A is diagonalizable.
(b) A has n linearly independent eigenvectors.
Theorem 9.2.3.

(a) A matrix A is diagonalizable if all its eigenvalues are real and distinct. However, if the
eigenvalues of A are real but not distinct (the characteristic polynomial has repeated roots),
then A may or may not be diagonalizable.

(b) If the eigenvalues of A are real but not distinct, A is diagonalizable if and only if for each
eigenvalue of a multiplicity k we can find k linearly independent eigenvectors.
Theorem 9.2.4. Eigenvectors of a matrix corresponding to distinct eigenvalues are linearly
independent.
Remark 9.2.2. The preceding theorem guarantees that an n × n matrix A with n linearly
independent eigenvectors is diagonalizable, and the proof provides the following method for
diagonalizing A.

Thus to diagonalize a square matrix A we follow the steps below:

Step 1. Find n linearly independent eigenvectors of A, say x1 , x2 , . . . , xn .


Step 2. Form the matrix P having x1 , x2 , . . . , xn as its column vectors.
Step 3. The matrix P −1 AP will then be diagonal with λ1 , λ2 , . . . , λn as its successive diagonal
entries, where λi is the eigenvalue corresponding to xi for i = 1, 2, 3, . . . , n.
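The three steps above can be carried out numerically in one go (a NumPy sketch using the matrix of Example 9.1.4; numpy.linalg.eig performs Steps 1 and 2 at once by returning the eigenvectors as the columns of P):

```python
import numpy as np

A = np.array([[1., 2., -1.],
              [1., 0., 1.],
              [4., -4., 5.]])

# Steps 1 and 2: eigenvalues, and a matrix P whose columns are eigenvectors.
eigvals, P = np.linalg.eig(A)

# Step 3: P^{-1} A P is diagonal, with the eigenvalues on its diagonal.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigvals))
```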
 
Example 9.2.2. Refer to the matrix $A = \begin{pmatrix} 1 & 2 & -1 \\ 1 & 0 & 1 \\ 4 & -4 & 5 \end{pmatrix}$ in Example 9.1.4. Since the
eigenvalues λ1 = 1, λ2 = 2 and λ3 = 3 are real and all distinct, A is diagonalizable. The
diagonalizing matrix is $P = \begin{pmatrix} -1 & -2 & -1 \\ 1 & 1 & 1 \\ 2 & 4 & 4 \end{pmatrix}$ and the diagonal matrix is
$D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$. Find P −1 and check that D = P −1 AP .
0 0 3
 
Example 9.2.3. Diagonalize the matrix A if possible, where $A = \begin{pmatrix} 5 & 2 & -2 \\ 2 & 5 & -2 \\ -2 & -2 & 5 \end{pmatrix}$.

Solution: The characteristic equation is P (λ) = (λ − 9)(λ − 3)2 = 0, which gives λ1 = 9 and
λ2 = λ3 = 3 (repeated eigenvalue 3). So
$$(A - \lambda I_3)x = 0 \iff \begin{pmatrix} 5-\lambda & 2 & -2 \\ 2 & 5-\lambda & -2 \\ -2 & -2 & 5-\lambda \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$

When λ1 = 9 we get the linear system
$$\begin{pmatrix} -4 & 2 & -2 \\ 2 & -4 & -2 \\ -2 & -2 & -4 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},$$
with augmented matrix
$$\left(\begin{array}{ccc|c} -4 & 2 & -2 & 0 \\ 2 & -4 & -2 & 0 \\ -2 & -2 & -4 & 0 \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & -2 & -1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right).$$

Letting x3 = t we get x2 = −t and x1 = −t. Thus
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} -t \\ -t \\ t \end{pmatrix} = t\begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}.$$
The eigenvector corresponding to the eigenvalue λ1 = 9 is $\begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}$.

For λ2 = λ3 = 3 we get the linear system
$$\begin{pmatrix} 2 & 2 & -2 \\ 2 & 2 & -2 \\ -2 & -2 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},$$
which leads to the augmented matrix
$$\left(\begin{array}{ccc|c} 2 & 2 & -2 & 0 \\ 2 & 2 & -2 & 0 \\ -2 & -2 & 2 & 0 \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right).$$

Let x2 and x3 be free variables, say x2 = r and x3 = s. Then x1 = −r + s, so the solution is
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} -r+s \\ r \\ s \end{pmatrix} = r\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + s\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.$$
The eigenvectors corresponding to the eigenvalue λ2 = λ3 = 3 are $\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$. Since
these vectors are linearly independent (check this), by Theorem 9.2.3(b) the matrix A is
diagonalizable. The diagonalizing matrix is $P = \begin{pmatrix} -1 & -1 & 1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}$ and the diagonal
matrix is thus $D = \begin{pmatrix} 9 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix}$. Find P −1 and check that D = P −1 AP . 2
 
Example 9.2.4. Show that $A = \begin{pmatrix} 5 & 0 & 0 \\ 1 & 5 & 0 \\ 0 & 1 & 5 \end{pmatrix}$ is not diagonalizable.

Solution: Observe that λ1 = λ2 = λ3 = 5. Now
$$(A - 5I)x = 0 \iff \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},$$
so the augmented matrix is
$$\left(\begin{array}{ccc|c} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right).$$
Letting x3 = s, we see that x2 = x1 = 0. Thus every eigenvector has the form
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ s \end{pmatrix} = s\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$
The eigenvalue 5 has multiplicity 3 but yields only one linearly independent eigenvector, so A
does not have 3 linearly independent eigenvectors and is therefore not diagonalizable. 2
 
Example 9.2.5. Determine if $A = \begin{pmatrix} 3 & -2 & 0 \\ -2 & 3 & 0 \\ 0 & 0 & 5 \end{pmatrix}$ is diagonalizable or not.

Solution: We form the characteristic equation and obtain the eigenvalues λ1 = 1 and
λ2 = λ3 = 5. The eigenvectors corresponding to the eigenvalues λ1 = 1, λ2 = λ3 = 5 are, respectively,
$$\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$
Thus A is diagonalizable since

(a) the eigenvectors corresponding to the eigenvalues λ2 = 5 and λ3 = 5 are linearly
independent;

(b) the diagonalizing matrix is $P = \begin{pmatrix} 1 & -1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$;

(c) the diagonal matrix is $D = P^{-1}AP = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{pmatrix}$.

2
 
Example 9.2.6. Determine whether $A = \begin{pmatrix} -3 & 2 \\ -2 & 1 \end{pmatrix}$ is diagonalizable or not.

Solution: Now |A − λI| = 0 ⇔ λ2 + 2λ + 1 = 0 ⇔ (λ + 1)2 = 0. So the eigenvalues of the matrix
A are λ1 = λ2 = −1, which are real and repeated. The corresponding eigenvectors are
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} t \\ t \end{pmatrix} = t\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Thus A is not diagonalizable since

(a) There are no two linearly independent eigenvectors corresponding to λ1 = −1 and λ2 = −1.

(b) We cannot find a matrix P such that P −1 AP is a diagonal matrix.

2
 
Example 9.2.7. Determine if $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 4 & -17 & 8 \end{pmatrix}$ is diagonalizable.

Solution: The eigenvalues of the matrix are λ1 = 4, λ2 = 2 + √3 and λ3 = 2 − √3. Since the
eigenvalues are real and distinct, the matrix A is diagonalizable. We note that

(i) we cannot use the idea of multiplicity on A since no eigenvalue is repeated;

(ii) we can find a matrix P such that $P^{-1}AP = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 2+\sqrt{3} & 0 \\ 0 & 0 & 2-\sqrt{3} \end{pmatrix}$, a diagonal matrix.

Computing Powers of a Diagonalizable Square Matrix

Let us consider the powers of a matrix A, that is, A2 , A3 , A4 , . . .. Suppose you need the
hundredth power of A, that is, A100 . Multiplying 100 matrices would be tedious, but A100
can instead be found by using the eigenvalues of A. There are
numerous problems in applied mathematics that require the computation of high powers of a
square matrix. We shall show how diagonalization can be used to simplify such computations
for diagonalizable matrices. If A is a square matrix of order n and P is an invertible square
matrix, then
(P −1 AP )2 = P −1 AP P −1 AP = P −1 AIAP = P −1 A2 P.
More generally, for any positive integer k, (P −1 AP )k = P −1 Ak P . It follows that if A is
diagonalizable, and P −1 AP = D is a diagonal matrix, then P −1 Ak P = (P −1 AP )k = Dk .
Solving for Ak yields Ak = P Dk P −1 . This last equation expresses the k th power of A in terms of
the k th power of the diagonal matrix D. The good news is that Dk is easy to compute, for if
$$D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix} \quad\text{then}\quad D^k = \begin{pmatrix} d_1^k & 0 & \cdots & 0 \\ 0 & d_2^k & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & d_n^k \end{pmatrix}.$$
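A numerical sketch of the formula Ak = P Dk P −1 (NumPy assumed; the matrix is the one used in Example 9.2.9 below, for which A4 = (171, 85; 170, 86)):

```python
import numpy as np

A = np.array([[3., 1.],
              [2., 2.]])

eigvals, P = np.linalg.eig(A)   # A is diagonalizable (eigenvalues 1 and 4)
D = np.diag(eigvals)

# A^4 = P D^4 P^{-1}: only the diagonal entries need to be raised to the power.
A4 = P @ np.linalg.matrix_power(D, 4) @ np.linalg.inv(P)

assert np.allclose(A4, np.linalg.matrix_power(A, 4))
assert np.allclose(A4, [[171., 85.], [170., 86.]])
```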
 
Example 9.2.8. Let $A = \begin{pmatrix} 1 & 1 & 2 \\ 0 & 1 & 0 \\ 0 & 1 & 3 \end{pmatrix}$.

(a) Find the eigenvalues and their corresponding vectors.

(b) Determine the diagonal matrix that is similar to A.



(c) Determine the invertible matrix P which diagonalizes A.

(d) Use the result to compute A2 .

Solution:
       
(a) λ1 = λ2 = 1 and λ3 = 3. For λ1 = λ2 = 1,
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} s \\ -2r \\ r \end{pmatrix} = r\begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix} + s\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},$$
and for λ3 = 3,
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} s \\ 0 \\ s \end{pmatrix} = s\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.$$
Thus (0, −2, 1)t , (1, 0, 0)t and (1, 0, 1)t are the eigenvectors of A.

(b) $D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{pmatrix}$.

(c) $P = \begin{pmatrix} 1 & 0 & 1 \\ 0 & -2 & 0 \\ 0 & 1 & 1 \end{pmatrix} \Rightarrow P^{-1} = \begin{pmatrix} 1 & -\tfrac12 & -1 \\ 0 & -\tfrac12 & 0 \\ 0 & \tfrac12 & 1 \end{pmatrix}$.

(d) $A^2 = P D^2 P^{-1} = \begin{pmatrix} 1 & 4 & 8 \\ 0 & 1 & 0 \\ 0 & 4 & 9 \end{pmatrix}$.

2
 
Example 9.2.9. Let $A = \begin{pmatrix} 3 & 1 \\ 2 & 2 \end{pmatrix}$.

(a) Find a matrix B similar to A.

(b) Compute A4 .

Solution:

(a) The diagonalizing matrix is $P = \begin{pmatrix} 1 & 1 \\ -2 & 1 \end{pmatrix}$, with $P^{-1} = \begin{pmatrix} \tfrac13 & -\tfrac13 \\ \tfrac23 & \tfrac13 \end{pmatrix}$, which implies
that $B = P^{-1}AP = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$ is a diagonal matrix similar to A.

(b) $A^4 = P B^4 P^{-1} = \begin{pmatrix} 1 & 1 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 256 \end{pmatrix}\begin{pmatrix} \tfrac13 & -\tfrac13 \\ \tfrac23 & \tfrac13 \end{pmatrix} = \begin{pmatrix} 171 & 85 \\ 170 & 86 \end{pmatrix}$.

2
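The power computed in Example 9.2.9 can likewise be confirmed numerically (an addition to the notes, assuming NumPy):

```python
import numpy as np

# A, P and B from Example 9.2.9
A = np.array([[3.0, 1.0],
              [2.0, 2.0]])
P = np.array([[1.0, 1.0],
              [-2.0, 1.0]])
B = np.diag([1.0, 4.0])

print(np.allclose(np.linalg.inv(P) @ A @ P, B))  # True: B = P^-1 A P

A4 = P @ np.linalg.matrix_power(B, 4) @ np.linalg.inv(P)
print(np.allclose(A4, [[171.0, 85.0], [170.0, 86.0]]))  # True
```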
 
Example 9.2.10. Let

    A = [ 4   2 ]
        [ 3  −1 ] .

Find a matrix B similar to A.

Solution: Since

    P = [ 2  −1 ]        and        P^-1 = [  3/7  1/7 ]
        [ 1   3 ]                          [ −1/7  2/7 ] ,

then

    B = P^-1 AP = [ 5   0 ]
                  [ 0  −2 ] .

2
Assignment 9.2.1.

1. Suppose u is an eigenvector of a 4 × 4 matrix A corresponding to the eigenvalue −3.

(a) Is A + 3I an invertible matrix? Explain.


(b) Show that u is an eigenvector of A3 and find the corresponding eigenvalue.

2. Suppose a 6 × 6 matrix A has only three distinct eigenvalues, 1, 0 and −1. Suppose
{v1 , v2 , v3 } is a basis for the eigenspace corresponding to the eigenvalue 1 and {w1 , w2 } is
a basis for the eigenspace corresponding to the eigenvalue 0.

(a) Is A diagonalizable? Explain.


(b) Let b = 2v1 − 5v2 be a vector in R6 . Is b an eigenvector of A? Explain. If it is an
eigenvector, find the corresponding eigenvalue.
(c) What are the nullity and the rank of A? Explain.
 
3. Let A be an invertible diagonalizable matrix, where

       A = [ a  b ]
           [ c  d ]     with ad − bc = 1,     and     D = [ λ1   0 ]
                                                          [ 0   λ2 ] .

   Find the general form for A^-k, that is, (A^-1)^k.
4. (a) A is a 6 × 6 matrix with 4 eigenvalues. Two of the eigenspaces are two-dimensional.
Is A always diagonalizable? Why or why not?
(b) Let A be an n × n matrix and λ ∈ R. Let E(λ) = {x ∈ Rn : Ax = λx}. Show that
E(λ) is a subspace of Rn . What is the dimension of E(λ) when λ is not an eigenvalue
of A ?

5. Find the eigenvalues and eigenvectors of the following matrices:

   (a) A = [ 2  −2   3 ]
           [ 1   1   1 ] ,
           [ 1   3  −1 ]

   (b) B = [  7   4  −1 ]
           [  4   7  −1 ] ,
           [ −4  −4   4 ]

   (c) C = [  2  −2  3 ]
           [ 10  −4  5 ] .
           [  5  −4  6 ]

In each case state whether or not there is a basis for R3 consisting of eigenvectors of the
matrix.

6. Let A be an n × n diagonal matrix. Determine the eigenvalues of A.


 
7. Let

       A = [  1   1  −1 ]
           [ −1  −1   2 ] .
           [ −5  −2   3 ]

   Find an invertible matrix P and a diagonal matrix D such that P^-1 AP = D.

8. Let A be a 4 × 4 matrix with eigenvalues λ1 = 1, λ2 = −1, λ3 = λ4 = 2. Find A^-1 in terms
   of the positive powers of A.
 
9. Let

       A = [ 3  −4 ]
           [ 2  −3 ] .

   Find an invertible matrix P and a diagonal matrix D such that P^-1 AP = D. Hence compute A^20.

10. Let A be a diagonalizable n × n matrix with eigenvalues λ1 , λ2 , . . . , λn . Prove that
    det(A) = λ1 λ2 · · · λn , the product of the eigenvalues.

Exercise 9.2.1.

1. Let

       A = [ 0  0  0 ]
           [ 0  0  1 ] .
           [ 3  0  1 ]

   (a) Find the eigenvalues and their corresponding eigenvectors.
   (b) Determine the diagonal matrix that is similar to matrix A.
   (c) Determine the invertible matrix P which diagonalizes A.
   (d) Use the result to compute A^4.

2. Show that the following matrices are not diagonalizable:

   (a) [ 2  −3 ]
       [ 1  −1 ] ,

   (b) [ 3  0  0 ]
       [ 0  2  0 ] ,
       [ 0  1  2 ]

   (c) [ 2  0 ]
       [ 1  2 ] .

3. Find which of the matrices below are diagonalizable. For those which are diagonalizable,
   find P and D such that P^-1 AP = D (D is a diagonal matrix).

   (a) [ 1   1 ]
       [ 1  −2 ] ,

   (c) [ 1   2   3 ]
       [ 0  −1  −2 ] ,
       [ 0   0   2 ]
 
   (b) [ 1   1  −2 ]
       [ 4   0   4 ] ,
       [ 1  −1   4 ]

   (d) [ 3  1  0 ]
       [ 0  3  1 ] .
       [ 0  0  3 ]

4. Find bases for the eigenspaces associated with each eigenvalue of the matrices below:

   (a) [ 2  3  0 ]
       [ 0  1  0 ] ,
       [ 0  0  2 ]

   (b) [ 2  2  3  4 ]
       [ 0  2  3  2 ]
       [ 0  0  1  0 ] .
       [ 0  0  0  1 ]
 
5. Let

       A = [ 3  −5 ]
           [ 1  −3 ] .

   Find A^9.
     
6. Let

       A = [ 2  0  −2 ]
           [ 0  3   0 ] .
           [ 0  0   3 ]

   Show that B = P^-1 AP, where

       B = [ 3  0  0 ]
           [ 0  3  0 ]
           [ 0  0  2 ]

   and

       P = [ −2  0  1 ]
           [  0  1  0 ] .
           [  1  0  0 ]
Chapter 10

Linear Transformations and Matrix Representations

10.1 Linear Transformations

In this section we define a linear transformation, its rank, kernel, and nullity. We prove the
Rank-Nullity Theorem for linear transformations.
Definition 10.1.1. A function T : V −→ W from a vector space V into a vector space W is
called a linear transformation from V to W if, for all vectors u and v in V and all scalars c,

(i) T (u + v) = T (u) + T (v).

(ii) T (cu) = cT (u).

If V = W then T is called a linear operator on V .


Definition 10.1.2. Let V and W be any two vector spaces. The mapping T : V −→ W such
that T (v) = 0 for every v ∈ V is a linear transformation called the zero transformation.
Definition 10.1.3. Let V be a vector space. The mapping T : V −→ V such that T (v) = v is
a linear transformation called the identity operator.
Example 10.1.1. Let T : R3 −→ R2 be defined by T (u1 , u2 , u3 ) = (u1 , u2 ). Is T a linear
transformation?

Solution: Let u = (u1 , u2 , u3 ) and v = (v1 , v2 , v3 ). Then


T (u + v) = T ((u1 , u2 , u3 ) + (v1 , v2 , v3 )) = T (u1 + v1 , u2 + v2 , u3 + v3 ) = (u1 + v1 , u2 + v2 ) =
(u1 , u2 ) + (v1 , v2 ) = T (u) + T (v). Also,
T (cu) = T (c(u1 , u2 , u3 )) = T (cu1 , cu2 , cu3 ) = (cu1 , cu2 ) = c(u1 , u2 ) = cT (u1 , u2 , u3 ) = cT (u).
Thus T is a linear transformation. 2
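The algebraic verification above can also be spot-checked numerically on sample vectors (this is evidence, not a proof); a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def T(u):
    # The projection R^3 -> R^2 of Example 10.1.1: keep the first two coordinates
    return u[:2]

u = rng.standard_normal(3)
v = rng.standard_normal(3)
c = 2.5

# The two defining conditions of Definition 10.1.1, checked on the samples
print(np.allclose(T(u + v), T(u) + T(v)))  # True
print(np.allclose(T(c * u), c * T(u)))     # True
```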
Example 10.1.2. Let T : R3 −→ R3 be defined by T (u1 , u2 , u3 ) = (u1 + 1, 2u2 , u3 ). Is T
linear?


Solution: Let u = (u1 , u2 , u3 ) and v = (v1 , v2 , v3 ). Then


T (u + v) = T ((u1 , u2 , u3 ) + (v1 , v2 , v3 )) = T (u1 + v1 , u2 + v2 , u3 + v3 ) =
(u1 + v1 + 1, 2(u2 + v2 ), u3 + v3 ) = (u1 + v1 + 1, 2u2 + 2v2 , u3 + v3 ), and T (u) + T (v) =
T (u1 , u2 , u3 ) + T (v1 , v2 , v3 ) = (u1 + 1, 2u2 , u3 ) + (v1 + 1, 2v2 , v3 ) = (u1 + v1 + 2, 2u2 + 2v2 , u3 + v3 ).
Since T (u + v) ≠ T (u) + T (v), T is not a linear transformation. 2
Example 10.1.3. Determine whether the transformation T is linear if T : P3 −→ P2 is
defined by T (a3 t3 + a2 t2 + a1 t + a0 ) = 3a3 t2 + 2a2 t + a1 where ai (i = 0, 1, 2, 3) is a real number.

Solution: Let u = a3 t3 + a2 t2 + a1 t + a0 and v = b3 t3 + b2 t2 + b1 t + b0 . Then,


T (u) = 3a3 t2 + 2a2 t + a1 and T (v) = 3b3 t2 + 2b2 t + b1 .
T (u+v) = T (a3 t3 +a2 t2 +a1 t+a0 +b3 t3 +b2 t2 +b1 t+b0 ) = T ((a3 +b3 )t3 +(a2 +b2 )t2 +(a1 +b1 )t+(a0 +
b0 )) = 3(a3 + b3 )t2 + 2(a2 + b2 )t + (a1 + b1 ) = (3a3 t2 + 2a2 t + a1 ) + (3b3 t2 + 2b2 t + b1 ) = T (u) + T (v).
Let c be a scalar; then T (cu) = T (c(a3 t3 + a2 t2 + a1 t + a0 )) = T (a3 ct3 + a2 ct2 + a1 ct + a0 c) =
3(a3 c)t2 + 2(a2 c)t + a1 c = c(3a3 t2 + 2a2 t + a1 ) = cT (u). So T (cu) = cT (u). Hence T is a linear
transformation. 2
Exercise 10.1.1.

1. Show that T : R2 −→ R2 defined by T (u1 , u2 ) = (u1 , −u2 ) is a linear transformation.


 
2. If T : R2 −→ R3 is defined by

       T (x1 , x2 ) = [ 6   7 ] [ x1 ]
                      [ 0   1 ] [ x2 ] ,
                      [ 7  −5 ]

   show it is a linear transformation.
3. Show that T : V −→ W defined by T (f (x)) = f ′(x) is a linear transformation, where V
   is the space of all differentiable functions and W the space of all real-valued functions.
4. Determine whether the transformation T is linear if T : R2 −→ R defined by T (a, b) = ab
for all real numbers a and b.
5. Use the definition of a linear operator that was given in this section to show that the
function T : R2 −→ R2 given by the formula T (x1 , x2 ) = (x1 + 2x2 , 3x1 − x2 ) is a linear
operator.
6. Use the definition of a linear transformation given in this section to show that the function
T : R3 −→ R2 given by the formula T (x1 , x2 , x3 ) = (2x1 − x2 + x3 , x2 − 4x3 ) is a linear
transformation.
7. Let V and W be any vector spaces and c be any scalar. Show that the following
statements are equivalent:
(a) T : V −→ W is a linear transformation.
(b) T (αu + βv) = αT (u) + βT (v), ∀u, v ∈ V and any scalars α, β.
Theorem 10.1.1. If T : Rn −→ Rm is defined as T (u) = Au for an m × n matrix A, then T is
linear.

Proof: Let u and v ∈ Rn and c be any scalar. Then


T (u + v) = A(u + v) = A(u) + A(v) = T (u) + T (v). Also, T (cu) = A(cu) = cA(u) = cT (u).
Thus T is a linear transformation. 2

Theorem 10.1.2. If T : V −→ W is a linear transformation, then

(a) T (0) = 0.

(b) T (−v) = −T (v) for all v ∈ V

(c) T (v − w) = T (v) − T (w) for all v, w ∈ V .

Proof:

(a) Note that for every v ∈ V, 0v = 0. So T (0) = T (0v) = 0T (v) = 0.

(b) T (−v) = T ((−1)v) = (−1)T (v) = −T (v).

(c) T (v − w) = T (v + (−1)w) = T (v) + (−1)T (w) = T (v) − T (w).

Definition 10.1.4. If T : V −→ W is a linear transformation, we define kernel of T , denoted


by Ker(T )(or Ker T ), as Ker(T ) = {u ∈ V : T (u) = 0}. That is, the subset of V consisting of
all vectors in V that T maps to 0. The range of T , denoted by R(T )(or Range T ), is defined as
R(T ) = {T (v) : v ∈ V } = {w ∈ W : w = T (v) for some v ∈ V }. That is, the set of all vectors
in W that are images under T of at least one vector in V .

Example 10.1.4. Let T : R3 −→ R2 be defined by T (x, y, z) = (x, y). Find

(i) Ker(T ).

(ii) the basis of Ker T .

(iii) dim(Ker T ).

Solution:

(i) T (x, y, z) = (x, y) = (0, 0) ⇔ x = 0, y = 0 and z can take on any arbitrary value. Thus
Ker(T ) = {(0, 0, k); k ∈ R}.

(ii) Since (0, 0, k) = k(0, 0, 1), the basis of kernel T is {(0, 0, 1)}.

(iii) dim(Ker(T )) = 1.

Example 10.1.5. Let T : R2 −→ R2 be defined by T (x, y) = (x + y, x − y). Find Ker(T ).

Solution: T (x, y) = (x + y, x − y) = (0, 0) ⇒ x + y = 0, and x − y = 0 ⇒ x = 0, y = 0.


Thus Ker(T ) = {(0, 0)}. 2

Example 10.1.6. Let T : R4 −→ R2 be defined by T (x, y, z, w) = (x + y, z + w).

(i) Find Ker(T ).

(ii) Find the basis of Ker T .

(iii) Find dim(Ker T ).

Solution:

(i) T (x, y, z, w) = (x + y, z + w) = (0, 0) ⇒ x + y = 0, z + w = 0. Let
    w = r, y = s ⇒ z = −r, x = −s. Thus Ker(T ) = {(−s, s, −r, r) : r, s ∈ R}.

(ii) Since (−s, s, −r, r) = s(−1, 1, 0, 0) + r(0, 0, −1, 1), the basis of Ker(T ) is
{(−1, 1, 0, 0), (0, 0, −1, 1)}.

(iii) dim(Ker T ) = 2.
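A basis for the kernel can also be found numerically. Since T (x, y, z, w) = (x + y, z + w) is multiplication by a 2 × 4 matrix, its kernel is the null space of that matrix, which the SVD exposes; a sketch assuming NumPy:

```python
import numpy as np

# Matrix of T(x, y, z, w) = (x + y, z + w) from Example 10.1.6
M = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])

# Null space from the SVD: rows of Vt beyond the rank span Ker T
_, s, Vt = np.linalg.svd(M)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]

print(null_basis.shape[0])               # 2, so dim(Ker T) = 2
print(np.allclose(M @ null_basis.T, 0))  # True: each basis vector maps to 0
```

The SVD basis is orthonormal, so it differs from the hand-computed basis {(−1, 1, 0, 0), (0, 0, −1, 1)} only by a change of basis within the same subspace.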

Example 10.1.7. Let T : R2 −→ R2 be the linear operator given by the formula


T (x, y) = (2x − y, −8x + 4y). Which of the following vectors are in R(T )?

(a) (1, -4), (b) (5, 0), (c) (-3, 4).

Solution:

(a) If (1, −4) is in R(T ), then there is a vector (x, y) ∈ R2 such that
T (x, y) = (2x − y, −8x + 4y) = (1, −4). Equating components gives 2x − y = 1 and
−8x + 4y = −4; the second equation is −4 times the first, so letting y = t gives x = (t + 1)/2.
Thus T maps infinitely many vectors onto (1, −4). Hence (1, −4) ∈ R(T ).

(b) If (5, 0) is in R(T ), then there is a vector (x, y) ∈ R2 such that
T (x, y) = (2x − y, −8x + 4y) = (5, 0). Equating components gives 2x − y = 5 and
−8x + 4y = 0. Letting y = t, the first equation gives x = (t + 5)/2 while the second gives
x = t/2. Thus the system is inconsistent, and (5, 0) is not in R(T ).

(c) If (−3, 4) is in R(T ), then there is a vector (x, y) ∈ R2 such that
T (x, y) = (2x − y, −8x + 4y) = (−3, 4). Equating components gives 2x − y = −3 and
−8x + 4y = 4. Letting y = t, the first equation gives x = (t − 3)/2 while the second gives
x = (t − 1)/2. Thus the system is inconsistent, and (−3, 4) is not in R(T ).

Exercise 10.1.2. Let T : R4 −→ R3 be the linear transformation given by the formula
T (x1 , x2 , x3 , x4 ) = (4x1 + x2 − 2x3 − 3x4 , 2x1 + x2 + x3 − 4x4 , 6x1 − 9x3 + 9x4 ). Which of the
following vectors are in R(T )?

(a) (0, 0, 6) (b) (1, 3, 0) (c) (2, 4, 1)

Theorem 10.1.3. If T : V −→ W is a linear transformation, then

(a) The kernel of T is a subspace of V .

(b) The range of T is a subspace of W .

Proof:

(a) We note that 0 ∈ Ker T ⇒ Ker T ≠ ∅. Let u, v ∈ Ker T and c be a scalar. Then
T (u + v) = T (u) + T (v) = 0 + 0 = 0; so u + v ∈ Ker T . Also T (cu) = cT (u) = c · 0 = 0; so
cu ∈ Ker T . Thus Ker T is a subspace of V .

(b) Observe that R(T ) ≠ ∅ since T (0) = 0. Let w1 , w2 ∈ range T and c be a scalar. Then there
are vectors u1 , u2 ∈ V such that T (u1 ) = w1 and T (u2 ) = w2 . Now
w1 + w2 = T (u1 ) + T (u2 ) = T (u1 + u2 ). Since u1 + u2 ∈ V , it follows that w1 + w2 ∈ range
T . Also cw1 = cT (u1 ) = T (cu1 ). Since cu1 ∈ V , it follows that cw1 ∈ range T . Thus range
T is a subspace of W .

Definition 10.1.5. A linear transformation T : V −→ W is said to be one-to-one (1 - 1) if


∀ u, v ∈ V, T (u) = T (v) ⇒ u = v. So T maps distinct vectors in V to distinct vectors in W .
Alternatively, T : V −→ W is said to be 1 - 1 if ∀ u, v ∈ V, u ≠ v ⇒ T (u) ≠ T (v).

Example 10.1.8. L : R2 −→ R2 defined by L(x, y) = (x + y, x − y) is a 1 − 1 transformation.

Solution: Suppose that L(x1 , y1 ) = L(x2 , y2 ); then


(x1 + y1 , x1 − y1 ) = (x2 + y2 , x2 − y2 ) ⇔ x1 + y1 = x2 + y2 and x1 − y1 = x2 − y2 . Thus x1 = x2
and y1 = y2 ; so (x1 , y1 ) = (x2 , y2 ) and hence L is 1-1. 2

Example 10.1.9. T : R3 −→ R2 defined by T (x, y, z) = (x, y) is not a 1 − 1 transformation


since T (3, 7, −1) = (3, 7) and T (3, 7, 12) = (3, 7); two different objects are taken to one (the
same) image.

Definition 10.1.6. Let T : V −→ W be a linear transformation from a finite-dimensional
vector space V to a vector space W . Then

rank T = dim(range T ), nullity T = dim(Ker T ).

Theorem 10.1.4. If T : V −→ W is a linear transformation, then the following are equivalent:

(a) T is one-to-one.

(b) Ker T = {0}.

(c) Nullity T = 0.

Proof:

(i) (a) ⇔ (b): Suppose that T is 1-1 and let u ∈ Ker T ; then T (u) = 0. Since T (0) = 0, then
u = 0. Thus Ker T = {0}. Now suppose that Ker T = {0} and T (u) = T (v). Now
T (u − v) = 0. So u − v ∈ Ker T , and u − v = 0 ⇒ u = v. Hence T is 1-1.
(ii) (b) ⇔ (c): The proof follows from the definition of nullity T .

2
Theorem 10.1.5. (Rank-Nullity Theorem) Let T : V −→ W be a linear transformation from a
finite dimensional vector space V to a vector space W . Then
dim(V ) = nullity T + rank T.

Proof: We must show that if dim(V ) = n then


n = nullity T + rank T.
We shall give the proof for the case where 1 ≤ nullity T < n. The cases where nullity T = 0 and
nullity T = n are left as exercises. Assume nullity T = r, and let {v1 , . . . , vr } be a basis for Ker
T . Since {v1 , . . . , vr } is linearly independent, Theorem 7.2.4(b) states that there are n − r
vectors, vr+1 , . . . , vn , such that the extended set {v1 , . . . , vr , vr+1 , . . . , vn } is a basis for V . To
complete the proof, we shall show that the n − r vectors in the set S = {T (vr+1 ), . . . , T (vn )}
form a basis for R(T ). It will then follow that
rank T + nullity T = (n − r) + r = n.
First we show that S spans R(T ). If w is any vector in R(T ), then w = T (v) for some vector v
in V . Since {v1 , . . . , vr , vr+1 , . . . , vn } is a basis for V , the vector v can be written in the form
v = c1 v1 + · · · + cr vr + cr+1 vr+1 + · · · + cn vn .

Since {v1 , . . . , vr } are in Ker T , we have T (v1 ) = · · · = T (vr ) = 0, so


w = T (v) = cr+1 T (vr+1 ) + · · · + cn T (vn ).
Thus S spans R(T ).
Finally, we show that S is a linearly independent set and consequently forms a basis for the
range of T . Suppose that some linear combination of the vectors in S is zero; that is,
kr+1 T (vr+1 ) + · · · + kn T (vn ) = 0. (10.1)
We must show that kr+1 = · · · = kn = 0. Since T is linear, Equation (10.1) can be rewritten as
T (kr+1 vr+1 + · · · + kn vn ) = 0.
which says that kr+1 vr+1 + · · · + kn vn ∈ Ker T . This vector can therefore be written as a linear
combination of the basis vectors {v1 , . . . , vr }, say
kr+1 vr+1 + · · · + kn vn = k1 v1 + · · · + kr vr
Thus,
k1 v1 + · · · + kr vr − kr+1 vr+1 − · · · − kn vn = 0.
Since {v1 , . . . , vn } is linearly independent, all of the k’s are zero; in particular,
kr+1 = · · · = kn = 0, which completes the proof. 2

Remark 10.1.1. Theorem 10.1.5 is also referred to as the Dimension Theorem for linear
transformations.
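For a matrix transformation T (x) = Ax, the theorem can be illustrated numerically by computing the rank and the nullity by separate means and checking that they add up to the dimension of the domain; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

# T(x) = Ax with A of size 5 x 7, so the domain has dimension n = 7.
# A is built as a product with an inner dimension of 3, forcing rank T <= 3.
n = 7
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, n))

rank = np.linalg.matrix_rank(A)

# Nullity counted independently: rows of Vt mapped (numerically) to 0 by A
_, _, Vt = np.linalg.svd(A)
nullity = int(np.sum(np.linalg.norm(A @ Vt.T, axis=0) < 1e-8))

print(rank, nullity, rank + nullity == n)  # 3 4 True
```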

Theorem 10.1.6. If V is a finite-dimensional vector space, and T : V −→ V is a linear


operator, then the following are equivalent:

(a) T is 1-1.

(b) Ker T = {0}.

(c) Nullity T = 0.

(d) The range of T is V ; that is, R(T ) = V .

Proof: The equivalence of (a), (b), and (c) follows from Theorem 10.1.4. We now prove that
(c)⇔ (d).
(c)⇒ (d): Suppose that dim(V) = n and nullity T = 0. It follows from the Rank-Nullity
Theorem that
rank T = n − nullity T = n.
By definition, dim(R(T )) = rank T = n. It now follows from Theorem 7.2.5 that R(T ) = V .
(d)⇒ (c): Suppose that dim(V ) = n and R(T ) = V , then dim(R(T )) = n, so rank T = n. Thus
it follows from the Rank-Nullity Theorem that

nullity T = n − rank T = n − n = 0.

Definition 10.1.7. Let T : V −→ W be a linear transformation. Then, R(T ) ⊆ W and if


R(T ) = W , then the linear transformation is said to be onto. In other words, T is onto if for
every w ∈ W there is v ∈ V such that T (v) = w.

Example 10.1.10. A linear transformation T : R3 −→ R2 defined by T (x, y, z) = (x, y) is


onto since for all (x, y) ∈ R2 , we can find its corresponding (x, y, z) ∈ R3 ; R(T ) = R2 .
  
Example 10.1.11. Show that T : R3 −→ R3 defined by

    T (x1 , x2 , x3 ) = [ 1  0  1 ] [ x1 ]
                        [ 1  1  2 ] [ x2 ]
                        [ 2  1  3 ] [ x3 ]

is not onto.

Solution: Let (y1 , y2 , y3 ) ∈ R3 be such that

    [ 1  0  1 ] [ x1 ]   [ y1 ]
    [ 1  1  2 ] [ x2 ] = [ y2 ] .
    [ 2  1  3 ] [ x3 ]   [ y3 ]

The augmented matrix

    [ 1  0  1 | y1 ]                    [ 1  0  1 | y1           ]
    [ 1  1  2 | y2 ]    reduces to      [ 0  1  1 | y2 − y1      ] ,
    [ 2  1  3 | y3 ]                    [ 0  0  0 | y3 − y2 − y1 ]

which is inconsistent unless y3 − y2 − y1 = 0. This means not all the members of R3 are images
under T . Hence T is not onto. 2
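For a matrix transformation T (x) = Ax with A an n × n matrix, T is onto Rn if and only if rank A = n, so the conclusion of Example 10.1.11 follows from a rank computation (a check assuming NumPy):

```python
import numpy as np

# The matrix of T from Example 10.1.11
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])

# T is onto R^3 iff rank A = 3; here the rank is only 2, so T is not onto
print(np.linalg.matrix_rank(A))  # 2
```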

Example 10.1.12. In each part find Ker T , and determine whether the linear transformation
T is one-to-one:

(a) T : R2 −→ R2 , where T (x, y) = (x, y).

(b) T : R2 −→ R2 , where T (x, y) = (x + y, x − y).

(c) T : R2 −→ R3 , where T (x, y) = (x − y, y − x, 2x − 2y).

Solution:

(a) Clearly Ker T = {(0, 0)}, so T is 1 - 1.

(b) Since T (x, y) = (0, 0) if and only if x = y and x = −y, the kernel is {(0, 0)} and T is 1 - 1.

(c) Here T (x, y) = (0, 0, 0) if and only if x and y satisfy the equations
    x − y = 0, −x + y = 0, and 2x − 2y = 0, i.e., (x, y) is in Ker T if and only if x = y. Thus
    the kernel of T is the line y = x and T is not 1 - 1. In fact Ker T = {t(1, 1) : t ∈ R}; so the
    basis of Ker T is {(1, 1)} and dim(Ker T ) = 1.

2
Theorem 10.1.7. Given a linear transformation T : V −→ W and a basis {v1 , v2 , . . . , vn } for
V , then R(T ) = Span{T (v1 ), T (v2 ), . . . , T (vn )}.

Proof: Suppose y ∈ R(T ); so y = T (x) for some x ∈ V . Now there are scalars k1 , k2 , . . . , kn
such that x = k1 v1 + k2 v2 + · · · + kn vn . Thus, y = T (x) = k1 T (v1 ) + k2 T (v2 ) + · · · + kn T (vn ),
so y ∈ Span{T (v1 ), T (v2 ), . . . , T (vn )}. Conversely, each T (vj ) lies in R(T ), and R(T ) is a
subspace of W , so Span{T (v1 ), T (v2 ), . . . , T (vn )} ⊆ R(T ), which completes the proof. 2
Remark 10.1.2. This theorem does not say {T (v1 ), T (v2 ), . . . , T (vn )} is a basis for R(T ),
because the set could be linearly dependent. However, it does give a way of finding a basis for
the range: remove dependent vectors from {T (v1 ), T (v2 ), . . . , T (vn )} until the set becomes
linearly independent.
Theorem 10.1.8. Let V and W be vector spaces, and let {v1 , v2 , . . . , vn } be a basis for V .
Given any n vectors {w1 , w2 , . . . , wn }, in W , there is a unique linear transformation
T : V −→ W for which
T (v1 ) = w1 , T (v2 ) = w2 , . . . , T (vn ) = wn .

Proof: Existence: Given any vector u ∈ V , we can write u = c1 v1 + c2 v2 + · · · + cn vn for some


scalars c1 , c2 , . . . , cn . We now define T this way: T (u) = c1 w1 + c2 w2 + · · · + cn wn . We next
show that T is linear. Let u, v ∈ V and k be a scalar. Let u = c1 v1 + c2 v2 + · · · + cn vn and
v = d1 v1 + d2 v2 + · · · + dn vn . Then u + v = (c1 + d1 )v1 + (c2 + d2 )v2 + · · · + (cn + dn )vn , so
T (u + v) = (c1 + d1 )w1 + (c2 + d2 )w2 + · · · + (cn + dn )wn
= c1 w1 + c2 w2 + · · · + cn wn + d1 w1 + d2 w2 + · · · + dn wn
= T (u) + T (v).
Since ku = kc1 v1 + kc2 v2 + · · · + kcn vn , then
T (ku) = kc1 w1 + kc2 w2 + · · · + kcn wn
= k(c1 w1 + c2 w2 + · · · + cn wn )
= kT (u),

as desired.
Uniqueness: Suppose that S : V −→ W is linear and S(v1 ) = w1 , S(v2 ) = w2 , . . . , S(vn ) = wn .
We show that S(u) = T (u), ∀u ∈ V . Given u in V , there are scalars c1 , c2 , . . . , cn such that
u = c1 v1 + c2 v2 + · · · + cn vn . So

S(u) = S(c1 v1 + c2 v2 + · · · + cn vn )
= c1 S(v1 ) + c2 S(v2 ) + · · · + cn S(vn )
= c1 w1 + c2 w2 + · · · + cn wn
= T (u),

as desired. 2
Remark 10.1.3. Theorem 10.1.8 is sometimes expressed this way: A linear transformation is
uniquely determined by its action on a basis. That is, if you know what T does to each of the
vectors in some basis, then in principle, you know T .
Example 10.1.13. Find a linear transformation T : M2×2 −→ P2 that maps the basis vectors

    [ 1  0 ]    [ 1  1 ]    [ 1  1 ]    [ 1  1 ]
    [ 0  0 ] ,  [ 0  0 ] ,  [ 1  0 ] ,  [ 1  1 ]

to the polynomials 1 + t + t^2, 1 + t, t + t^2, t, respectively.

Solution: Theorem 10.1.8 says that there is a unique linear transformation that does this. For

    [ a  b ]
    [ c  d ] ∈ M2×2 ,

we have (check this)

    [ a  b ]           [ 1  0 ]           [ 1  1 ]           [ 1  1 ]     [ 1  1 ]
    [ c  d ] = (a − b) [ 0  0 ] + (b − c) [ 0  0 ] + (c − d) [ 1  0 ] + d [ 1  1 ] .

Thus,

    T [ a  b ] = (a − b)(1 + t + t^2) + (b − c)(1 + t) + (c − d)(t + t^2) + dt
      [ c  d ]
               = (a − c) + at + (a − b + c − d)t^2.

2
Example 10.1.14. Find a linear transformation T : M2×2 −→ P2 with kernel equal to the span of

    [ 1  2 ]        [ 0  0 ]
    [ 0  0 ]  and   [ 2  1 ] ,

and range equal to the span of {1 + t, t + t^2}.

Solution: We extend these two matrices to a basis of M2×2. Here is one such basis (check
this):

    [ 1  2 ]    [ 0  0 ]    [ 1  1 ]    [ 0  0 ]
    [ 0  0 ] ,  [ 2  1 ] ,  [ 0  0 ] ,  [ 1  1 ] .

Theorem 10.1.8 says there is a (unique) linear transformation that maps a basis onto any given
set of vectors of the same size. We want to map the first two vectors to 0 (that is, the zero
polynomial) in P2, since they must be in the kernel. We map the other two vectors to the two
basis vectors for the range. That is, we look for a transformation that does this:

    T [ 1  2 ] = 0 ,   T [ 0  0 ] = 0 ,   T [ 1  1 ] = 1 + t ,   T [ 0  0 ] = t + t^2.
      [ 0  0 ]           [ 2  1 ]           [ 0  0 ]               [ 1  1 ]

Then for

    [ a  b ]
    [ c  d ] ∈ M2×2 ,

we have (check this)

    [ a  b ]           [ 1  2 ]           [ 0  0 ]            [ 1  1 ]            [ 0  0 ]
    [ c  d ] = (b − a) [ 0  0 ] + (c − d) [ 2  1 ] + (2a − b) [ 0  0 ] + (2d − c) [ 1  1 ] ,

so

    T [ a  b ] = (b − a)(0) + (c − d)(0) + (2a − b)(1 + t) + (2d − c)(t + t^2)
      [ c  d ]
               = (2a − b) + (2a − b − c + 2d)t + (2d − c)t^2.

2

10.2 Matrix Representations of Linear Transformations

In this section we show that every linear transformation from an n-dimensional vector space
into an m-dimensional vector space can be represented by an m × n matrix. Thus we can reduce
the study of linear transformations on finite-dimensional vector spaces to the study of matrices.
We begin with transformations from Rn to Rm , and then extend this to transformations from n
dimensional vector spaces to m dimensional vector spaces.
Theorem 10.2.1. If T : Rn −→ Rm is a linear transformation and B = {e1 , e2 , . . . , en },
B′ = {e′1 , e′2 , . . . , e′m } are the standard bases for Rn and Rm , respectively, then there is a
unique m × n matrix A for which

    T (x) = Ax                                                          (10.2)

for every x ∈ Rn .

Proof: Existence: Suppose that T (e1 ) = w1 , T (e2 ) = w2 , . . . , T (en ) = wn . By Theorem 10.1.8,
once we know this information, we essentially know T . Let x = (x1 , x2 , . . . , xn )^t ∈ Rn . Since

    x = x1 e1 + x2 e2 + · · · + xn en ,

we have

    T (x) = T (x1 e1 + x2 e2 + · · · + xn en )
          = x1 T (e1 ) + x2 T (e2 ) + · · · + xn T (en )
          = x1 w1 + x2 w2 + · · · + xn wn .

Now let A be the matrix that has the w’s as its columns: A = (w1 | w2 | · · · | wn ). We have

    Ax = (w1 | w2 | · · · | wn )(x1 , x2 , . . . , xn )^t = x1 w1 + x2 w2 + · · · + xn wn = T (x).

Uniqueness: Suppose that there is another m × n matrix B such that T (x) = Bx for every
x ∈ Rn . Then Ax = Bx, i.e., (A − B)x = 0, for every x ∈ Rn . Taking x = e1 , e2 , . . . , en shows
that each column of A − B is zero, so A − B is the zero m × n matrix, which completes the
proof. 2
Definition 10.2.1. The matrix A mentioned in Theorem 10.2.1 is called the standard matrix
for T . It is the matrix for T w.r.t. the bases B and B′, respectively.
Remark 10.2.1. Theorem 10.2.1 says that the only linear transformations from Rn −→ Rm are
the matrix transformations. A transformation may be defined differently, but in the end, we can
find an A to describe it. Finding this matrix A is our next task.
Example 10.2.1. Suppose that T : R3 −→ R2 is a linear transformation defined by
T (x, y, z) = (y − z, 2x + 3y + 4z). Find the standard matrix for T .

Solution: Since T (e1 ) = (0, 2), T (e2 ) = (1, 3), T (e3 ) = (−1, 4), the standard matrix is

    A = [ 0  1  −1 ]
        [ 2  3   4 ] .

It is easy to see that T (x, y, z) = A(x, y, z)^t = (y − z, 2x + 3y + 4z)^t. 2
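The construction in this example, in which the j-th column of the standard matrix is T (e_j), translates directly into code (a sketch assuming NumPy):

```python
import numpy as np

def T(v):
    # T(x, y, z) = (y - z, 2x + 3y + 4z) from Example 10.2.1
    x, y, z = v
    return np.array([y - z, 2*x + 3*y + 4*z])

# The j-th column of the standard matrix is T(e_j);
# iterating over np.eye(3) yields the standard basis vectors
A = np.column_stack([T(e) for e in np.eye(3)])
print(np.allclose(A, [[0, 1, -1], [2, 3, 4]]))  # True
```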
Theorem 10.2.2. If A is the standard matrix for the linear transformation T : Rn −→ Rm , then

(i) nullity T = nullity A.

(ii) rank T = rank A.


Definition 10.2.2. Let S = {v1 , v2 , . . . , vn } be a basis of a vector space V and x ∈ V . The
coordinate vector of x with respect to the basis S, denoted by [x]S , is the column vector

    [x]S = (a1 , a2 , . . . , an )^t ∈ Rn ,

where x = a1 v1 + a2 v2 + · · · + an vn for unique scalars a1 , a2 , . . . , an . The components of [x]S are
called the coordinates of x with respect to the basis S.
Example 10.2.2. In R3 , consider the basis S = {(1, −1, 0), (0, 1, 1), (1, −1, 1)}. Let
x = (1, 2, 3). Find the coordinate vector [x]S .

Solution: We find the scalars a1 , a2 , a3 such that
(1, 2, 3) = a1 (1, −1, 0) + a2 (0, 1, 1) + a3 (1, −1, 1). Solving gives (a1 , a2 , a3 ) = (1, 3, 0), so
[x]S = (1, 3, 0)^t . 2
Example 10.2.3. In R4 , consider the natural (standard) basis S = {(1, 0, 0, 0), (0, 1, 0, 0),
(0, 0, 1, 0), (0, 0, 0, 1)}. Let x = (6, 7, −2, 8) ∈ R4 . Find the coordinate vector [x]S .

Solution: We find the scalars a1 , a2 , a3 , a4 such that
(6, 7, −2, 8) = a1 (1, 0, 0, 0) + a2 (0, 1, 0, 0) + a3 (0, 0, 1, 0) + a4 (0, 0, 0, 1). Solving gives
(a1 , a2 , a3 , a4 ) = (6, 7, −2, 8), so [x]S = (6, 7, −2, 8)^t . 2
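Finding [x]S amounts to solving the linear system whose coefficient matrix has the basis vectors as its columns; a sketch assuming NumPy, using the data of Example 10.2.2:

```python
import numpy as np

# Basis vectors of S from Example 10.2.2, placed as the columns of a matrix
S = np.array([[1.0, 0.0, 1.0],
              [-1.0, 1.0, -1.0],
              [0.0, 1.0, 1.0]])
x = np.array([1.0, 2.0, 3.0])

# [x]_S is the solution a of S a = x
coords = np.linalg.solve(S, x)
print(np.allclose(coords, [1.0, 3.0, 0.0]))  # True, matching [x]_S = (1, 3, 0)^t
```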
Remark 10.2.2. Suppose that V and W are vector spaces with dimensions n and m,
respectively. Let B = {v1 , v2 , . . . , vn } and B′ = {w1 , w2 , . . . , wm } be bases for V and W ,
respectively. Then for each x in V , the coordinate vector [x]B will be a vector in Rn , and the
coordinate vector [T (x)]B′ will be a vector in Rm . The map [x]B ↦ [T (x)]B′ is a linear
transformation from Rn to Rm , and Theorem 10.2.1 tells us that there is a unique m × n
matrix A for this transformation such that A[x]B = [T (x)]B′ . In other words, for every linear
transformation T : V −→ W there is an m × n matrix A such that A[x]B = [T (x)]B′ for every
x ∈ V . This matrix can be found as follows: A = ([T (v1 )]B′ | [T (v2 )]B′ | · · · | [T (vn )]B′ ) for
vj ∈ B ⊆ V and T (vj ) ∈ W , for j = 1, 2, . . . , n. Here [T (vj )]B′ is the j-th column of A.

Theorem 10.2.3. If T : V −→ W is a linear transformation from an n-dimensional vector
space V to an m-dimensional vector space W , with bases B = {v1 , v2 , . . . , vn } and
B′ = {w1 , w2 , . . . , wm }, respectively, then there is a unique m × n matrix A for which

    [T (x)]B′ = A[x]B                                                   (10.3)

for all x in V , where A = ([T (v1 )]B′ | [T (v2 )]B′ | · · · | [T (vn )]B′ ). Thus, if
A[x]B = (k1 , k2 , . . . , km )^t , then T (x) = k1 w1 + k2 w2 + · · · + km wm .

Proof: We are looking for an m × n matrix A = (aij ) such that Equation (10.3) holds; the
existence and uniqueness of A is established by Theorem 10.2.1. Since Equation (10.3) holds in
particular for the basis vectors v1 , v2 , . . . , vn , we have

    A[v1 ]B = [T (v1 )]B′ , A[v2 ]B = [T (v2 )]B′ , . . . , A[vn ]B = [T (vn )]B′ .      (10.4)

But [v1 ]B = (1, 0, . . . , 0)^t , [v2 ]B = (0, 1, . . . , 0)^t , . . . , [vn ]B = (0, 0, . . . , 1)^t , so A[vj ]B is
exactly the j-th column (a1j , a2j , . . . , amj )^t of A. Substituting these results into Equation
(10.4) yields

    (a11 , a21 , . . . , am1 )^t = [T (v1 )]B′ , (a12 , a22 , . . . , am2 )^t = [T (v2 )]B′ , . . . ,
    (a1n , a2n , . . . , amn )^t = [T (vn )]B′ .

Thus the matrix for T with respect to the bases B and B′ is

    A = ([T (v1 )]B′ | [T (v2 )]B′ | · · · | [T (vn )]B′ ) ,

which completes the proof. 2


Remark 10.2.3. Note that if V = Rn , W = Rm , and B, B′ are the respective standard bases,
then Theorem 10.2.3 becomes Theorem 10.2.1.
Definition 10.2.3. The matrix A stated in Theorem 10.2.3 is called the matrix for T with
respect to the bases B and B′ (or the matrix associated with T with respect to the bases B and
B′), and it is sometimes denoted by [T ]B′,B .
Example 10.2.4. Let T : R3 −→ R2 be defined by T (x, y, z) = (x + y, y − z). Let B be the
standard basis for R3 and B′ the standard basis for R2 . Find the matrix for T .

Solution: Let B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} and B′ = {(1, 0), (0, 1)}.

(i) T (1, 0, 0) = (1, 0) ⇒ (1, 0) = α1 (1, 0) + α2 (0, 1) ⇒ α1 = 1, α2 = 0 ⇒ [T (1, 0, 0)]B′ = (1, 0)^t .

(ii) T (0, 1, 0) = (1, 1) ⇒ (1, 1) = α1 (1, 0) + α2 (0, 1) ⇒ α1 = 1, α2 = 1 ⇒ [T (0, 1, 0)]B′ = (1, 1)^t .

(iii) T (0, 0, 1) = (0, −1) ⇒ (0, −1) = α1 (1, 0) + α2 (0, 1) ⇒ α1 = 0, α2 = −1 ⇒ [T (0, 0, 1)]B′ = (0, −1)^t .

Thus the matrix for T is

    A = [ 1  1   0 ]
        [ 0  1  −1 ] .

2
Remark 10.2.4. Notice that Example 10.2.4 can easily be handled using Theorem 10.2.1 (try
it).

Exercise 10.2.1. Let T : R3 −→ R2 be defined by T (x, y, z) = (x + y, y − z). Let


B = {(1, 0, 1), (0, 1, 1), (1, 1, 1)} be the basis for R3 and B 0 = {(1, 2), (−1, 1)} be the basis for R2 .
Find the matrix A associated with T .

Example 10.2.5. Let T : P1 −→ P2 be defined by T (p(t)) = tp(t). Let B be the standard basis
for P1 and B′ the standard basis for P2 . Find the matrix associated with T w.r.t. the bases
B = {1, t} and B′ = {1, t, t^2}.

Solution:

(i) T (1) = t ⇒ t = α1 · 1 + α2 t + α3 t^2 ⇒ α1 = 0, α2 = 1, α3 = 0 ⇒ [T (1)]B′ = (0, 1, 0)^t .

(ii) T (t) = t^2 ⇒ t^2 = α1 · 1 + α2 t + α3 t^2 ⇒ α1 = 0, α2 = 0, α3 = 1 ⇒ [T (t)]B′ = (0, 0, 1)^t .

Thus the matrix associated with T is

    A = [ 0  0 ]
        [ 1  0 ] .
        [ 0  1 ]

2
Exercise 10.2.2. Repeat Example 10.2.5 with B = {1, t} and B′ = {1 + t, t − 1, t^2}.
 
Example 10.2.6. Let T : R2 −→ R3 be defined by T (x, y)^t = (y, −5x + 13y, −7x + 16y)^t .
Find the matrix for T with respect to the bases B = (u1 , u2 ) and B′ = (v1 , v2 , v3 ), where

    u1 = (3, 1)^t , u2 = (5, 2)^t , v1 = (1, 0, −1)^t , v2 = (−1, 2, 2)^t , v3 = (0, 1, 2)^t ,

and use the matrix to obtain T (2, 1)^t .

Solution:

(i) T (u1 ) = (1, −2, −5)^t , and (1, −2, −5)^t = α1 v1 + α2 v2 + α3 v3 gives α1 = 1, α2 = 0,
    α3 = −2, so [T (u1 )]B′ = (1, 0, −2)^t .

(ii) T (u2 ) = (2, 1, −3)^t , and (2, 1, −3)^t = α1 v1 + α2 v2 + α3 v3 gives α1 = 3, α2 = 1,
    α3 = −1, so [T (u2 )]B′ = (3, 1, −1)^t .

Then the matrix for T is

    A = [  1   3 ]
        [  0   1 ] .
        [ −2  −1 ]

Since A[x]B = [T (x)]B′ , we need to find [x]B , i.e., the scalars a1 and a2 that make x a linear
combination of the vectors in B. From the question, x = (2, 1)^t , so we write x as a linear
combination of the basis vectors (3, 1)^t and (5, 2)^t :

    (2, 1)^t = a1 (3, 1)^t + a2 (5, 2)^t ⇒ a1 = −1, a2 = 1 ⇒ [x]B = (−1, 1)^t .

Then A[x]B = (2, 1, 1)^t , which means [T (x)]B′ = (2, 1, 1)^t ; these are the scalars for T (x) as a
linear combination of the vectors in B′. That is to say,

    T (x) = 2v1 + 1v2 + 1v3 = 2(1, 0, −1)^t + (−1, 2, 2)^t + (0, 1, 2)^t = (1, 3, 2)^t ,

i.e., T (2, 1)^t = (1, 3, 2)^t . 2
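The whole procedure of Example 10.2.6 (build A column by column from the coordinate vectors [T(u_j)] in the basis B′, then evaluate T through A) can be verified numerically; a sketch assuming NumPy:

```python
import numpy as np

def T(v):
    # T(x, y) = (y, -5x + 13y, -7x + 16y) from Example 10.2.6
    x, y = v
    return np.array([y, -5*x + 13*y, -7*x + 16*y])

B = np.array([[3.0, 5.0],
              [1.0, 2.0]])          # u1, u2 as columns
Bp = np.array([[1.0, -1.0, 0.0],
               [0.0, 2.0, 1.0],
               [-1.0, 2.0, 2.0]])   # v1, v2, v3 as columns

# Column j of A is [T(u_j)]_{B'}, found by solving Bp c = T(u_j)
A = np.column_stack([np.linalg.solve(Bp, T(B[:, j])) for j in range(2)])
print(np.allclose(A, [[1, 3], [0, 1], [-2, -1]]))  # True

# Evaluate T(2, 1) through A: find [x]_B, then A [x]_B = [T(x)]_{B'}, then back to R^3
xB = np.linalg.solve(B, np.array([2.0, 1.0]))
print(np.allclose(Bp @ (A @ xB), T(np.array([2.0, 1.0]))))  # True
```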
 
  x1 + 2x2
x1
Example 10.2.7. Let T : R2 −→ R3 defined by T =  −x1 . Find the matrix
x2
0  
0 1
for T with respect to the bases B = (u1 , u2 ) and B = (v1 , v2 ), where u1 = , u2 =
      3
  1 2 3  
−2 8
, v1 =  1  , v2 =  2  , v3 =  0 , and use the matrix to obtain T .
4 3
1 0 0

Solution:
         
  7 7 1 2 3
1
(i) T =  −1  ⇒  −1  = α1  1  + α2  2  + α3  0  ⇒ α1 =
3
0 0  1  0 0
   0
1
0, α2 = − 21 , α3 = 38 ⇒ T = − 12 .
3 B0 8
3
(ii) T(−2, 4)t = (6, 2, 0)t, and writing (6, 2, 0)t = α1(1, 1, 1)t + α2(2, 2, 0)t + α3(3, 0, 0)t gives α1 = 0, α2 = 1, α3 = 4/3, so [T(u2)]B′ = (0, 1, 4/3)t.

Then the matrix associated with T is
    A = [ 0     0   ]
        [ −1/2  1   ]
        [ 8/3   4/3 ].
Since A[x]B = [T(x)]B′, we need to find [x]B, i.e., the scalars that make x a linear combination of the vectors in B. Observe that x = (8, 3)t; then (8, 3)t = a1(1, 3)t + a2(−2, 4)t gives a1 = 19/5, a2 = −21/10. Thus [x]B = (19/5, −21/10)t and A[x]B = (0, −4, 22/3)t. Therefore [T(x)]B′ = (0, −4, 22/3)t; these are the scalars that express T(x) as a linear combination of the vectors in B′. Thus
    T(x) = 0(1, 1, 1)t − 4(2, 2, 0)t + (22/3)(3, 0, 0)t = (14, −8, 0)t, i.e., T(8, 3)t = (14, −8, 0)t.
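The fractions in this example are easy to get wrong by hand; a small exact-arithmetic sketch (not part of the original notes, using Python's Fraction) reproduces the coordinates computed above:

```python
from fractions import Fraction as F

# Cross-check of Example 10.2.7 in exact arithmetic.
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[F(0), F(0)], [F(-1, 2), F(1)], [F(8, 3), F(4, 3)]]
x_B = [F(19, 5), F(-21, 10)]     # coordinates of x = (8, 3) w.r.t. B

Tx_Bprime = mat_vec(A, x_B)
print(Tx_Bprime)                 # the B'-coordinates: 0, -4, 22/3

# Reassemble T(x) from the basis B' = {v1, v2, v3}.
v1, v2, v3 = [1, 1, 1], [2, 2, 0], [3, 0, 0]
c1, c2, c3 = Tx_Bprime
Tx = [c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3)]
print(Tx)                        # T(8, 3) = (14, -8, 0)
```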
Example 10.2.8. Let T : P2 → P4 be a linear transformation defined by T(p(x)) = x²p(x). Verify that T is linear. Find the matrix for T with respect to the bases B and B′, where B = {p1(x), p2(x), p3(x)} with p1(x) = 1 + x², p2(x) = 1 + 2x + 3x², p3(x) = 4 + 5x + x², and B′ is the standard basis for P4.

Solution: B′ = {1, x, x², x³, x⁴}. Now T(p1(x)) = x²(1 + x²) = x² + x⁴ = 0 · 1 + 0 · x + 1 · x² + 0 · x³ + 1 · x⁴, so [T(p1(x))]B′ = [0, 0, 1, 0, 1]t (column vector); T(p2(x)) = x²(1 + 2x + 3x²) = 0 · 1 + 0 · x + 1 · x² + 2x³ + 3x⁴, so [T(p2(x))]B′ = [0, 0, 1, 2, 3]t; and T(p3(x)) = x²(4 + 5x + x²) = 0 · 1 + 0 · x + 4x² + 5x³ + 1 · x⁴, so [T(p3(x))]B′ = [0, 0, 4, 5, 1]t. Thus the matrix for T with respect to the bases B and B′ is
    A = [ 0 0 0 ]
        [ 0 0 0 ]
        [ 1 1 4 ]
        [ 0 2 5 ]
        [ 1 3 1 ].
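Since multiplying by x² just shifts a coefficient vector two places, the columns of A can be cross-checked mechanically; a small illustrative sketch (not part of the original notes):

```python
# Cross-check of Example 10.2.8: T(p(x)) = x^2 p(x) shifts the coefficient
# vector of p two places, which gives the columns of A directly.

def T_coords(p):
    """Coordinates of x^2 * p(x) in the standard basis {1, x, x^2, x^3, x^4},
    where p is given by its coefficients [a0, a1, a2]."""
    return [0, 0] + p

p1, p2, p3 = [1, 0, 1], [1, 2, 3], [4, 5, 1]
cols = [T_coords(p) for p in (p1, p2, p3)]

# Assemble A column by column (zip transposes the list of columns into rows).
A = [list(row) for row in zip(*cols)]
for row in A:
    print(row)
```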
Exercise 10.2.3.

1. Let T : R2 −→ R2 be defined by T (x1 , x2 ) = (x1 , 0). Find the basis for

(a) Ker T .
(b) R(T ).
  
2. Let T : R3 −→ R3 be defined by T(x1, x2, x3)t = A(x1, x2, x3)t, where
    A = [  4 2  2 ]
        [  2 3 −1 ]
        [ −1 1 −2 ].
(a) Is T 1 - 1?

(b) Is T onto?
(c) Find dim R(T ).

3. Prove Theorem 10.1.5 for the cases where nullity T = 0 and nullity T = n.

4. Prove Theorem 10.1.7.


5. Let T : P3 −→ M2×2 be a map defined by T(p(t)) = p(A), where p(t) = a + bt + ct² + dt³ and
    A = [  1 1 ]
        [ −1 1 ].
(a) Show that T is linear.
(b) Find T (1 − 3t + 2t2 ).
(c) Find Ker T .
(d) Find the basis of Ker T and dim(Ker T ).
(e) Find R(T ).
(f) Find the basis of R(T ) and dim(R(T )).

6. Redo Example 10.2.4 using Theorem 10.2.1.

7. Consider a linear transformation T : R3 −→ R2 defined by T (x, y, z) = (x + y, y − z). Let


B = {(1, 0, 1), (0, 1, 1), (1, 1, 1)} be a basis for R3 and B 0 be a standard basis for R2 .

(a) Find the matrix for T with respect to B and B′.


(b) Find T (2, 1, 0).

8. Prove Theorem 10.2.1 using the ideas applied in the proof of Theorem 10.2.3.

9. If
    A = [ −1  2 0  4  5 −3 ]
        [  3 −7 2  0  1  4 ]
        [  2 −5 2  4  6  1 ]
        [  4 −9 2 −4 −4  7 ]
is the standard matrix for a linear transformation T : R6 −→ R4 , find the rank and nullity
of T .

10. Find the rank and nullity of T in Example 10.2.1.


Semester I Makerere University Examinations 2015/2016 122

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

Name:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. Course:. . . . . . . . . . . . . . . School:. . . . . . . . . . . . . . . . . .


Stud No:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Reg No:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MTH 1102: LINEAR ALGEBRA 1 TEST 3 THURSDAY NOVEMBER 19, 2015


1:00 PM - 1:45 PM

Instructions to Candidates:

Answer ALL questions in the space provided. Each question carries 3 marks.

1. True or False; Suppose that A and B are square matrices of the same order. If A is similar
to B then B is also similar to A. Justify your answer.

2. True or False; A linear transformation T : R3 −→ R2 defined by T (x, y, z) = (y, z) is 1-1.


Defend your answer.

3. Prove that if λ is an eigenvalue of a square matrix A with x as the associated eigenvector,


then kλ (k 6= 0) is an eigenvalue of kA.

4. If the characteristic polynomial of a 3 × 3 matrix A is P(λ) = (λ − 3)²(λ − 9), find det(A).

5. Show that if T : V −→ W is a linear transformation such that Ker T = {0}, then T is 1-1.

6. The characteristic polynomial of a 3 × 3 matrix A is P(λ) = (λ − 1)(λ² − 5λ + 6). Find the


eigenvalues of A and hence show that A is diagonalizable.

7. Show that the transformation T : R2 −→ R2 defined by T (u1 , u2 ) = (u1 , −u2 ) is linear.

   
8. The diagonalizing matrix for the matrix A = [ 1 0 ; 1 2 ] is P = [ −1 0 ; 1 1 ]. Given that
P⁻¹ = [ −1 0 ; 1 1 ], find A⁶.
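As a sketch of the method this question is testing (not part of the exam paper): with A = PDP⁻¹, only the diagonal entries of D need to be raised to the 6th power:

```python
# A^6 = P D^6 P^{-1}, where D = P^{-1} A P is diagonal.

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A     = [[1, 0], [1, 2]]
P     = [[-1, 0], [1, 1]]
P_inv = [[-1, 0], [1, 1]]

D = mat_mul(mat_mul(P_inv, A), P)
print(D)                               # diagonal: eigenvalues 1 and 2

D6 = [[D[0][0] ** 6, 0], [0, D[1][1] ** 6]]
A6 = mat_mul(mat_mul(P, D6), P_inv)
print(A6)                              # [[1, 0], [63, 64]]
```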

9. Let T : R3 −→ R2 be a linear transformation defined by T (x, y, z) = (y, z). Find

(i) Ker T . (ii) rank T .

10. Suppose that T : R3 −→ R2 be a linear transformation defined by


T (x1 , x2 , x3 ) = (x2 − x3 , 2x1 + 3x2 + 4x3 ). Find the standard matrix for T .
Makerere University Semester I Examinations 2011/2012 124

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

MTH 1102: LINEAR ALGEBRA I SATURDAY JANUARY 07, 2012


12:00 Noon - 02:00 PM

Instructions to Candidates:

(i) This examination consists of SECTION A and SECTION B.

(ii) Answer ALL questions in Section A and any TWO questions in Section B.

(iii) If you answer more than TWO questions from SECTION B, ONLY the first consecutive
TWO will be marked.

(iv) Write all your answers in the examination answer book provided.

(v) Please read and follow ALL the instructions on the examination answer book.

Section A: 40 marks

1. (a) Define an invertible (non-singular) matrix A. [1 mark]


(b) Prove that if a matrix B is invertible then so is its inverse (B −1 ). [3 marks]

2. (a) Define the determinant of a matrix A. [2 marks]


(b) Find the determinant of the matrix A where
 
A = [ 0  0 2  0 ]
    [ 1  0 0  1 ]
    [ 0 −1 3  0 ]
    [ 2  1 5 −3 ]

[3 marks]
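The intended technique — cofactor expansion along a row with many zeros — can be sketched in plain Python (an illustration, not part of the exam paper):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        if a == 0:
            continue                       # zero entries contribute nothing
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * a * det(minor)
    return total

A = [[0, 0, 2, 0],
     [1, 0, 0, 1],
     [0, -1, 3, 0],
     [2, 1, 5, -3]]
print(det(A))   # 10
```

The first row has a single nonzero entry, so only one 3 × 3 minor is ever computed.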

3. (a) Define a vector subspace. [2 marks]


(b) State one example of a vector subspace of R3 which is neither R3 nor {0}. [1 mark]
(c) Show that W = {ax + by : a, b ∈ R}, where x and y are fixed vectors in R3, is a subspace of R3. [3 marks]

4. (a) Define the rank of a matrix A. [1 mark]


(b) Given the matrix
    A = [ 1 3 −2 1 ]
        [ 1 4  1 2 ]
        [ 2 6 −4 2 ]
        [ 1 4  2 2 ]
find

(i) rank(A). [2 marks]


(ii) the basis for the nullspace of A. [2 marks]
5. (a) Define an eigenvector of a matrix A. [2 marks]
(b) Show that if x is an eigenvector of A, then rx (r ∈ R, r ≠ 0) is also an eigenvector of A.
[3 marks]
 
6. Determine whether the matrix B = [ 1 1 ; 0 1 ] is diagonalizable. [4 marks]
7. (a) Let P1 be the vector space of all polynomials of degree at most 1. Let T : P1 → P1 be a
linear transformation. Determine whether or not the following are linear
transformations.
(i) T1 (a0 + a1 x) = (a0 + 2) + (a1 + 2)x. [2 marks]
(ii) T2 (a0 + a1 x) = a0 + a1 (x + 1). [2 marks]
(b) Let T : V → W be a linear transformation from the vector space V to W . Let
S = {x1, x2, . . . , xk} be a set of vectors in V and M = {T(x1), T(x2), . . . , T(xk)}.
Show that S is linearly independent if M is linearly independent. [2 marks]
(c) Let T : R2 → R2 be a linear transformation defined by T (x, y) = (−4x + 2y, 16x − 8y).
Determine whether
(i) the vectors (5, −20)t and (2, 3)t are in the range of T .
(ii) the vectors (1, 1)t and (4, 8)t are in the kernel of T . [3 marks]
   
(d) Let T : R2 → R2 be a linear transformation such that T(1, 0)t = (1, 1)t and
T(0, 1)t = (1, −1)t. Compute T(1, 2)t. [2 marks]
1 −1 2

Section B: 30 marks

8. (a) (i) Explain what is meant by the solution of a linear homogeneous system.
(ii) If A is an m × n matrix, show that the set of all solutions to the system Ax = 0
is a subspace of Rn. [5 marks]
 
(b) Let
    A = [ 1 2 0 ]
        [ 0 1 2 ]
        [ 1 3 2 ].
(i) Solve Ax = b for b = (4, −1, 3)t.
(ii) Is b in the column space of A? Justify your answer.
(iii) Is every b = (b1 , b2 , b3 ) in the column space of A? Justify your answer.
(iv) Find a basis for the column space of A.
(v) Find a basis for the row space of A.
(vi) Find a basis for the solution space of A. [10 marks]
9. (a) Define a linear transformation T from a vector space V to a vector space W .
[2 marks]

(b) Let T : R4 → R2 be defined by T(x) = (2x1 + x2, x3 − x4) for x = (x1, x2, x3, x4).
(i) Show that T is a linear transformation.
(ii) Find a basis for Ker(T).
(iii) Find a basis for range(T). [13 marks]

10. (a) Let S = {x1, x2, . . . , xn}; define what is meant by the set S
(i) being linearly independent.
(ii) forming a basis for a vector space V . [3 marks]
(b) Let V be a vector space consisting of all polynomials of degree at most 4.
(i) Write down any basis for V .
(ii) Does the set S = {2 + t − t2 + 4t4 , 1 − t + 3t3 , 7 + t4 } span V ?
[5 marks]
   
(c) (i) Write the matrix [ −7 3 ; 3 20 ] as a linear combination of [ 1 0 ; 3 4 ] and
[ −3 1 ; −1 4 ]. [3 marks]
(ii) Prove that if a set S = {x1, x2, . . . , xn} is a basis for a vector space V , then
every element in V can be written in a unique way as a linear combination of
elements of S. [4 marks]

11. (a) Let A be a square matrix. Explain what it means to say that matrix
(i) A is similar to matrix B.
(ii) A is diagonalizable. [3 marks]
(b) Suppose that A, B, and C are n × n matrices and that A is similar to B, and B is
similar to C. Show that A is similar to C. [4 marks]
 
(c) Let
    A = [ 0 1 1 ]
        [ 1 0 1 ]
        [ 1 1 0 ].
(i) Find the eigenvalues of A.
(ii) Determine whether A is diagonalizable, and if so find the diagonalizing matrix
P. [8 marks]

We wish you a Merry Xmas and a Happy New Year!


Semester I Makerere University Examinations 2012/2013 127

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

Student Number:. . . . . . . . . . . . . . . . . . . . . . . . . . . Registration Number:. . . . . . . . . . . . . . . . . . . . .

School:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Course:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MTH 1102: LINEAR ALGEBRA I MONDAY DECEMBER 10, 2012


8:00 AM - 10:00 AM

Instructions to Candidates:

(i) This examination consists of SECTION A and SECTION B.

(ii) Answer ALL questions in Section A and ANY TWO questions in Section B.

(iii) There is no benefit in attempting more than TWO questions from Section B.

(iv) Please read and follow ALL the instructions on the examination answer book.

Section A: Answer ALL the questions for 40 marks

1. State whether the following statements are TRUE or FALSE.

(a) Every vector space has a basis. [1 mark]


(b) If A contains a row of zeros, then the homogeneous system Ax = 0 has no solutions.
[1 mark]
(c) Let T : V −→ W be a one to one linear transformation. Then the kernel of T contains
only the zero vector. [1 mark]

2. A finite set of vectors that contains the zero vector is always

(a) linearly independent.


(b) linearly dependent. [2 marks]
(c) both linearly dependent and linearly independent.
(d) none of these.

3. The characteristic polynomial of a matrix A is given by ChA (λ) = λ2 − 4λ + 3. What is


the determinant of A?

(a) 2 (c) 3 (e) None of these.


(b) -4 (d) -3

[2 marks]
 
4. Find the determinant of the matrix
    A = [ 1 0  0 −1 ]
        [ 3 1  2  2 ]
        [ 1 0 −2  1 ]
        [ 2 0  0  1 ].   [3 marks]
5. Which of the following statements are true?

I. det(AB) = det(A) + det(B)


II. det(At ) = det(A)
III. det(A + B) = det(A) + det(B)

(a) All but II (b) All but III (c) All (d) None of these.

[2 marks]

6. Let A be an n × n matrix and T (A) = tr(A) be a transformation that maps an n × n


matrix to its trace. Show that T is linear. [3 marks]

7. (a) Define what is meant by a square matrix A being singular. [2 marks]


(b) Show that if a square matrix A is invertible, then the linear system Ax = b has a
unique solution. [4 marks]
 
8. Find the inverse of
    A = [ 2 1 3 ]
        [ 0 1 4 ]
        [ 1 2 1 ].   [3 marks]
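One standard way to answer this — Gauss-Jordan elimination on the augmented matrix [A | I] — can be sketched in exact arithmetic (an illustration, not part of the exam paper; it assumes A is invertible):

```python
from fractions import Fraction as F

def inverse(M):
    """Invert a square matrix by Gauss-Jordan elimination (exact arithmetic)."""
    n = len(M)
    # Augment M with the identity matrix.
    aug = [[F(x) for x in row] + [F(int(i == j)) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        # Find a nonzero pivot and move it into place.
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]      # scale pivot row to 1
        for r in range(n):                        # clear the rest of the column
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

A = [[2, 1, 3], [0, 1, 4], [1, 2, 1]]
A_inv = inverse(A)
for row in A_inv:
    print(row)   # entries are thirteenths, since det(A) = -13
```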
9. Check whether the vectors v1 = (1, 0, −1), v2 = (1, 2, 3) and v3 = (−1, 3, −1) are linearly
independent. [5 marks]
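One quick test (an illustration, not part of the exam paper): three vectors in R3 are linearly independent if and only if the determinant of the matrix having them as rows is nonzero:

```python
# Independence test via a 3x3 determinant (cofactor expansion, first row).
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

V = [[1, 0, -1], [1, 2, 3], [-1, 3, -1]]
d = det3(V)
print(d, d != 0)   # nonzero determinant => linearly independent
```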

10. For the following sets, check whether they are subspaces of the given vector space.

(a) V = {v ∈ R4 : Bv = 4v}, where B is a 4 × 4 matrix. [3 marks]


(b) U = {(x, y, z) ∈ R3 : y = 2x − 3}. [3 marks]

11. Show that if zero is an eigenvalue of a square matrix A, then A is not invertible.

12. Consider the matrices


 
A = [ 1 2 ; 3 0 ]   and   B = [ 1 0 ; 0 3 ; −2 1 ].

Only one of the products AB and BA makes sense. Determine which one and compute
the product. [3 marks]

Section B: Answer ANY TWO questions for 30 marks

13. (a) (i) When is the linear system of equations Ax = b said to be homogeneous? [2 marks]
(ii) How many solutions (if any) does the system Ax = b have given that
    A = [ 1 0 1 0 ]               [ −1 ]
        [ 0 1 2 0 ]   and   b =   [  3 ]?
        [ 0 0 0 1 ]               [  7 ]

Justify your answer. [3 marks]


(b) Determine the values of the number “a” for which the linear system of equations

x1 + 2x2 + x3 = a
x1 + x2 + ax3 = 1
3x1 + 4x2 + (a2 − 2)x3 = 1

has
(i) no solution.
(ii) infinitely many solutions.
(iii) a unique solution. [10 marks]

14. (a) Give a reason why a set of ordered pairs of integers is not a vector space with the
usual operations of addition and scalar multiplication. [2 marks]
(b) (i) Let W be a subset of a vector space V . When is W said to be a vector subspace
of V ? [2 marks]
(ii) Let U and W be two subspaces of a vector space V ; prove that the intersection
of U and W is a subspace of V . [4 marks]
 
(c) Let
    A = [  1 −1  2  4 ]
        [ −2  3 −4 −5 ]
        [  1 −3  2 −2 ].
(i) Find the basis for the null space of A. [5 marks]
(ii) Is every vector b = (b1 , b2 , b3 ) in the column space of A? Justify your answer.
[2 marks]

15. (a) For a square matrix A, what does it mean for λ to be an eigenvalue of A? [1 mark]
(b) Show that if A and B are invertible similar matrices, then A−1 and B −1 are also
similar. [3 marks]
 
(c) Consider the matrix
    A = [ 3 2 1 ]
        [ 0 2 0 ]
        [ 1 2 3 ].
(i) Find the characteristic polynomial of A. [2 marks]
(ii) Find all the eigenvalues of A. [1 mark]
(iii) Find the basis for the eigenspace for each eigenvalue. [5 marks]
(iv) Is A diagonalizable? If so find a matrix D that is similar to matrix A, and hence
find the eigenvalues of A2 . [3 marks]

16. Let T : M2×2 −→ M2×2 be a linear transformation defined by


   
T [ a b ; c d ] = [ a + 2b + 3c   2b − 3c + 4d ; −a − 4b − 5d   0 ],

with respect to the standard basis for M2×2 .

(a) Find the matrix representation for the linear transformation T defined above.[5 marks]
(b) Find the set of all vectors in Ker(T ) and hence the basis for Ker(T ). [4 marks]
(c) Is T one to one? Justify your answer. [2 marks]
(d) Find the basis for Range(T ). [2 marks]
(e) Is T onto? Justify your answer. [2 marks]

Merry Xmas and Happy New Year!


Semester I Makerere University Examinations 2013/2014 131

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

MTH 1102: LINEAR ALGEBRA I TUESDAY JANUARY 14, 2014


8:00 AM - 10:00 AM

Instructions to Candidates:

(i) This examination consists of SECTION A and SECTION B.

(ii) Answer ALL questions in Section A and any TWO questions in Section B.

(iii) If you answer more than TWO questions from SECTION B, ONLY the first consecutive
TWO will be marked.

(iv) Write all your answers in the examination answer book provided.

(v) Please read and follow ALL the instructions on the examination answer book.

Section A: Answer ALL the questions for 40 marks

1. State whether the following statements are TRUE or FALSE.

(a) If 0 is an eigenvalue of an n × n matrix A, then rank(A) < n. [2 marks]


(b) The dimensions of the row space and the column space of A are the same, even if A is
not square. [2 marks]
(c) A homogeneous system of seven equations with seven unknowns will always have
infinitely many solutions. [2 marks]

2. For what value(s) of k does the following linear system

kx + y + z = 1,
x + ky + z = 1,
x + y + kz = 1,

have no solution?

(a) k = 0, 2 (b) k = 0, −2 (c) k = −2, 1 (d) k = −2 (e) None of the


above.

[2 marks]

3. Given T : V → W a linear transformation between vector spaces V and W , suppose that


V is 5-dimensional, W is 4-dimensional and the rank of T is 3. Then the nullity of T is

(a) 2 (b) −2 (c) 1 (d) none of these.

[2 marks]

4. Let A be an m × n matrix such that nullity(A) = n − m + 2. The rank of A is

(a) m + 2 (b) m − 2 (c) −m + 2 (d) −m − 2 (e) none of


these.

[2 marks]
 
5. Let
    A = [ −1  0  0 ]
        [ −1 −1 −2 ]
        [  1  0  1 ].
Find the eigenvalues of A. [4 marks]
6. Show whether the set S = {(2, 0, 1), (−1, 3, 4), (1, 1, −2)} spans R3 . [4 marks]
     
7. For the matrix A = [ 4 5 ; −3 −4 ], given that (−5, 3)t and (−1, 1)t are the eigenvectors
corresponding to the eigenvalues 1 and −1 respectively, compute A¹⁰¹. [4 marks]
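A quick way to see the intended answer (not part of the exam paper): eigenvalues 1 and −1 force A² = I, so every odd power of A is A itself. A brute-force check:

```python
# A has eigenvalues 1 and -1, so A^2 = I and hence A^101 = A.

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[4, 5], [-3, -4]]
A2 = mat_mul(A, A)
print(A2)        # the identity matrix

# Brute-force A^101 by repeated multiplication.
M = [[1, 0], [0, 1]]
for _ in range(101):
    M = mat_mul(M, A)
print(M == A)    # A^101 equals A
```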

8. Consider the matrices  


A = [ 1 0 ; 0 3 ; −2 1 ]   and   B = [ 1 2 ; 3 0 ].
Only one of the products AB and BA makes sense. Determine which one and compute
that product. [4 marks]

9. How many solutions (if any) does the equation Ax = b have, where
    A = [ 1 0 2 ]               [ 1 ]
        [ 0 3 1 ]   and   b =   [ 2 ]?
        [ 0 0 0 ]               [ 3 ]
        [ 0 0 0 ]               [ 0 ]
Justify your answer. [4 marks]
 
10. Let
    A = [ 1 2 1 ]
        [ 0 1 2 ]
        [ 0 0 1 ].
Find the determinant of 5A. [4 marks]
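The property being tested is det(cA) = cⁿ det(A) for an n × n matrix; here A is upper triangular with determinant 1, so det(5A) = 5³ = 125. A quick numeric check (not part of the exam paper):

```python
# Rule of Sarrus for a 3x3 determinant.
def det3(M):
    return (M[0][0] * M[1][1] * M[2][2] + M[0][1] * M[1][2] * M[2][0]
          + M[0][2] * M[1][0] * M[2][1] - M[0][2] * M[1][1] * M[2][0]
          - M[0][0] * M[1][2] * M[2][1] - M[0][1] * M[1][0] * M[2][2])

A = [[1, 2, 1], [0, 1, 2], [0, 0, 1]]
fiveA = [[5 * x for x in row] for row in A]   # scale every entry, n = 3
print(det3(A), det3(fiveA))                   # 1 and 5**3 * 1 = 125
```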
11. Let W be the set of matrices of the form [ 0 a12 ; a21 a22 ]. Is W a subspace of M22? [4 marks]

Section B: Answer ANY TWO questions for 30 marks

12. (a) Consider the homogeneous system Ax = 0. Show that if x1 and x2 are solutions of
this system and λ is a real number, then x1 + x2 and λx1 are also solutions of the
system. Does the result hold for non-homogeneous systems? [5 marks]
Semester I Makerere University Examinations 2013/2014 133

(b) For which value(s) of λ does the system of equations


(λ − 3)x + y = 0,
x + (λ − 3)y = 0,
have nontrivial solutions? [4 marks]
(c) Consider the following system of linear equations:
x1 + 2x3 + 4x4 = −8,
x2 − 3x3 − x4 = 6,
3x1 + 4x2 − 6x3 + 8x4 = 0,
−x2 + 3x3 + 4x4 = −12.
By reducing the augmented matrix to echelon form, find all solutions to the
system. [6 marks]
13. (a) Let V be an n-dimensional vector space and S = {v1, v2, . . . , vn} be a set of vectors
in V . When is S said to be a basis for V ? [2 marks]
(b) Let
    A = [  1 −3  4  5 −2 ]
        [  2 −6  9  8 −1 ]
        [  2 −6  9  9 −1 ]
        [ −1  3 −4 −5  2 ].
(i) Find a basis for the column space of A. [3 marks]
t
(ii) Show whether the vector v = (4, 2, 7, −4) is in the column space of A. [2marks]
(iii) State the rank of A. [1 mark]
(iv) Find the basis for the solution space of A. [3 marks]
(c) Let S = {v1, v2, . . . , vn} be a set of vectors in a vector space V .
(i) When are the vectors in S said to be linearly dependent? [1 mark]
3
(ii) If V = R , determine whether the vectors (1, 2, 3), (1, −1, 2) and (1, −4, 2) are
linearly independent. [3 marks]
14. (a) For a square matrix A, what does it mean for v to be an eigenvector of A? [1 mark]
(b) Let A be an n × n matrix and λ be an eigenvalue of A. Let V be the set of all
eigenvectors corresponding to λ. Prove that V is a vector sub-space of Rn . [3marks]
 
(c) Consider the matrix
    B = [ 1 1 2 ]
        [ 0 1 0 ]
        [ 0 1 3 ].
(i) Find the characteristic polynomial of B. [2 marks]
(ii) Compute all the eigenvalues of B. [1 mark]
(iii) Find the basis for the eigenspace for each eigenvalue. [5 marks]
(iv) Is B diagonalisable? If so, find a matrix D that is similar to matrix B and
hence compute the eigenvalues of B 2 . [3marks]
15. Let T : R3 → R3 be defined by T(x, y, z)t = (x − y − z, x + 3y + z, 3x − y − 2z)t.

(a) Show that the transformation T is linear. [4 marks]


(b) Find the standard matrix representation for the transformation T . [3 marks]
(c) Is T one to one? Justify your answer. [3 marks]
(d) Is T onto? [2 marks]
 
(e) If there is any vector v such that T(v) = (−2, 5, −5)t, find it. [3 marks]

END
Semester I Makerere University Examinations 2014/2015 135

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

Student Number:. . . . . . . . . . . . . . . . . . . . . . . . . . . Registration Number:. . . . . . . . . . . . . . . . . . . . . . . .

School:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Course:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

MTH 1102: LINEAR ALGEBRA I MONDAY DECEMBER 15, 2014


8:00 AM - 10:00 AM

Instructions to Candidates:

(i) This examination consists of SECTION A and SECTION B.

(ii) Answer ALL questions in Section A and any TWO questions in Section B.

(iii) If you answer more than TWO questions from SECTION B, ONLY the first consecutive
TWO will be marked.

(iv) Please read and follow ALL the instructions on the examination answer book.

Section A (40 marks)

For Questions 1–5, answer True or False.

1. If A and B are similar matrices then tr(A) = tr(B). [1 Mark]

2. The diagonal matrix D for A such that D = P −1 AP is always invertible. [1 Mark]

3. Interchanging two rows of a square matrix does not have any effect on the value of its
determinant. [1 Mark]

4. If A is an n × n matrix then A is row equivalent to In . [1 Mark]

5. A system of linear equations has infinitely many solutions if and only if at least one row in
the coefficient matrix does not contain a pivot (non-zero leading element). [1 Mark]

6. The value(s) of the scalar k for which the following linear system

x1 + x2 − x3 = 2
x1 + 2x2 + x3 = 3
2
x1 + x2 + (k − 5)x3 = k

has no solution is (are)



(a) 2. (c) k = ±2. (e) None of these.


(b) -2. (d) k ∈ R\{±2}.

[2 Marks]

7. Which of the following is not a basis for R?

(a) {π}. (c) {-1, 1}. (e) None of these.


(b) {-1}. (d) {1}.

[2 Marks]

8. If the characteristic polynomial of a square matrix A is P (λ) = λ2 − 4λ + 3, then the trace


of A is

(a) 3. (c) 31 . (e) None of these.


(b) 1. (d) 4.

[2 Marks]

9. Let T : R3 −→ R2 be defined by T (x, y, z) = (x, y). Then dim(Ker T ) is

(a) 0. (c) 1. (e) None of these.


(b) 3. (d) 2.

[2 Marks]
 
10. The inverse of the matrix B = [ 2 0 0 ; 1 3 0 ; 0 0 1 ] is
(a) (1/6)[ 3 0 0 ; 6 2 0 ; 0 0 6 ]   (b) (1/6)[ 3 0 0 ; −1 2 0 ; 0 0 6 ]   (c) (1/6)[ 3 0 0 ; −1 0 0 ; 0 0 6 ]   (d) None of these

[2 Marks]

11. Find the eigenvalues for A when


 
A = [ 3 0 0 ]
    [ 0 4 1 ]
    [ 0 2 5 ]

[5 Marks]

12. Is the matrix  


B = [ −3 2  7 ]
    [  0 1  0 ]
    [  0 0 −2 ]
diagonalizable? Justify your answer. [4 Marks]

13. If the column space of an 8 × 4 matrix A is 3-dimensional, find the dimensions of the null
space. [3 Marks]

14. Let W be the set of all points from R3 of the form (x1, 0, x3). Show that W is a subspace of
R3 . [4 Marks]

15. Suppose that a transformation T : Rn −→ Rm is linear and that {v1, v2, v3} is a linearly
dependent set. Show that the set {T(v1), T(v2), T(v3)} is also linearly dependent. [4 Marks]

16. Find the determinant of A where


 
A = [ 3  0 0 ]
    [ 0 −1 1 ]
    [ 1 −2 5 ]

[3 Marks]

17. Write down an example of a 3 × 3 anti-symmetric matrix. [2 Marks]

Section B (30 marks)

18. (a) (i) Define what is meant by a transformation T : V −→ W being linear. [2 Marks]
(ii) Show that the transformation T : P1 −→ R defined by T(f(t)) = ∫₀¹ f(t) dt

is linear. [2 Marks]
(iii) Find the kernel of T for T defined as in part (a)(ii) above. [2 Marks]
(b) Let T : R2 −→ R3 be a linear transformation defined by T (x1 , x2 ) = (x1 + 2x2 , x2 , 6x1 ).
Let B be the standard basis for R2 and B 0 be the basis for R3 given by
      
 1 1 1 
0
B =  0 , 1 , 1 
   
0 0 1
 

(i) Find the matrix of T with respect to the bases B and B 0 . [5 Marks]
(ii) Use the matrix obtained in part (b)(i) above to compute T (1, 2). [4 Marks]

19. (a) (i) Prove that similar matrices have the same characteristic polynomial. [3 Marks]
(ii) Given that an invertible matrix A is similar to an invertible B, prove that their
inverses are also similar. [2 Marks]
(b) Let
    A = [ 1 −1 −1 ]
        [ 0  1  3 ]
        [ 0  3  1 ].
(i) Compute the eigenvalues of A. [4 Marks]
(ii) Determine whether A is diagonalizable and if so diagonalize A. [6 Marks]

20. (a) (i) Let V (+, ·) be a vector space and W be a non-empty subset of V . State what is
meant by W (+, ·) being a vector subspace of V . [2 Marks]

(ii) Show whether the subset B = { [ a b ; c d ] : a, b, c, d ∈ Z } is a vector subspace
of the vector space of all 2 × 2 matrices. [2 Marks]


(b) Suppose that λ is an eigenvalue of A. Prove that the eigenspace
Eλ = {x ∈ Rn : Ax = λx} together with the zero vector is a subspace of Rn. [3 Marks]
   
(c) Let
    A = [ 1 1 −β ]               [   1 ]
        [ 3 2 −2 ]   and   b =   [  −1 ].
        [ 2 1  1 ]               [ −2α ]
Find the values of β and α for the linear system Ax = b to have
(i) a unique solution. [4 Marks]
(ii) no solution. [2 Marks]
(iii) infinitely many solutions. [2 Marks]
 
21. (a) Consider the matrix
    A = [ 1 1 3  1 ]
        [ 2 1 5  4 ]
        [ 1 2 4 −1 ].
Find a basis for the column space of A.
[3 Marks]
     
(b) Let v1 = (2, 3, −5)t and v2 = (−4, −5, 8)t. Is the vector v3 = (8, 2, −9)t in Span{v1, v2}?
[4 Marks]
(c) Show that the set of vectors B = {1, 1 − t, 1 + t + t2 } is a basis for the vector space P2 .
[5 Marks]
(d) Let S = {x1, x2, . . . , xn} be a basis for the vector space V . Show that every element in
V can be written in one and only one way as a linear combination of the vectors in S.
[3 Marks]

Merry Xmas and Happy New Year!


Semester I Makerere University Examinations 2015/2016 139

MAKERERE UNIVERSITY School of Physical Sciences


College of Natural Sciences Department of Mathematics

MTH 1102: LINEAR ALGEBRA I MONDAY DECEMBER 14, 2015


8:00 AM - 10:00 AM

Instructions to Candidates:

(i) This examination consists of SECTION A and SECTION B.

(ii) Answer ALL questions in Section A and any TWO questions in Section B.

(iii) If you answer more than TWO questions from SECTION B, ONLY the first consecutive
TWO will be marked.

(iv) Write all your answers in the examination answer book provided.

(v) Please read and follow ALL the instructions on the examination answer book.

Section A: 40 marks

1. If A and B are matrices of the same size, is det(A + B) = det(A) + det(B)? Justify your
answer. [2 marks]

2. Check whether the set S = {(1, 1, 1), (1, 0, 1)} is a basis for R3 . [2 marks]
   
3. The cofactor matrix for the matrix
    A = [ 1 0 −2 ]                 [ −11  29   1 ]
        [ 3 1  4 ]   is Cof(A) =   [  −4   7  −2 ].
        [ 5 2 −3 ]                 [   2 −10   1 ]
Use Cof(A) to find |A|. [1 mark]
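The intended expansion is |A| = a11·C11 + a12·C12 + a13·C13 along the first row; a quick check (not part of the exam paper) using the given cofactors:

```python
# |A| from the cofactor matrix: expand along any one row.
A   = [[1, 0, -2], [3, 1, 4], [5, 2, -3]]
Cof = [[-11, 29, 1], [-4, 7, -2], [2, -10, 1]]

det_A = sum(a * c for a, c in zip(A[0], Cof[0]))
print(det_A)   # -13

# Expanding along any other row must give the same value.
print([sum(a * c for a, c in zip(A[i], Cof[i])) for i in range(3)])
```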

4. Let A and B be matrices of the same size in a field F. Prove that


tr(A + B) = tr(A) + tr(B). [2 marks]

5. Prove that if A is a square matrix such that A2 = A and A 6= I, then A is singular.


[3 marks]

6. State the conditions on the number “a” such that the linear system

x + 4y − 2z = 3
2y + 5z = 7
(a + 4)z = a2 − a

(a) has a unique solution. [1 mark]


(b) is inconsistent. [1 mark]

7. Consider R2 together with usual addition and scalar multiplication defined by


α.(x, y) = (0, 0) for every (x, y) ∈ R2 . Show that R2 together with these operations is not
a vector space. [1 mark]

8. Show that W = {(a, b, c) : a + 2b + 3c = 0} is a subspace of R3 . [3 marks]

9. (a) Define a basis for a vector space V . [2 marks]


(b) Suppose that S = {v1, v2, . . . , vn} is a basis for a vector space V . Show that
S ∪ {u}, u ∈ V , is linearly dependent. [2 marks]
 
10. Find the rank of the matrix
    A = [ −1 2  0 ]
        [  2 1  3 ]
        [ −1 0 −2 ].   [4 marks]
11. The characteristic polynomial of a square matrix A is P (λ) = λ2 − 9λ + 8. Find

(a) the eigenvalues of A. [2 marks]


t
(b) tr(A ). [1 mark]
t
(c) det(A ). [1 mark]
   
12. A matrix P = [ 1 2 ; 0 1 ] diagonalizes the matrix A = [ 1 4 ; 0 3 ]. Use P to compute A³.
[4 marks]
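The intended route (sketched here in plain Python, not part of the exam paper): compute D = P⁻¹AP, cube the diagonal entries, and reassemble A³ = PD³P⁻¹:

```python
# A^3 via diagonalization: A^3 = P D^3 P^{-1}.

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A, P, P_inv = [[1, 4], [0, 3]], [[1, 2], [0, 1]], [[1, -2], [0, 1]]

D = mat_mul(mat_mul(P_inv, A), P)
print(D)    # diagonal: eigenvalues 1 and 3

D3 = [[D[0][0] ** 3, 0], [0, D[1][1] ** 3]]
A3 = mat_mul(mat_mul(P, D3), P_inv)
print(A3)   # should equal A multiplied by itself three times
```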

13. Let T : R3 −→ R3 be a linear transformation defined by Tx = Ax, where the matrix
    A = [ −1 2  0 ]
        [  2 1  3 ].
        [ −1 0 −2 ]
Show that T is
(a) one to one. [2 marks]
(b) onto. [2 marks]

14. Show that the transformation T : R2 −→ R3 , defined by T (x, y) = (x + 3, 2y, x + y) is not


linear. [2 marks]

15. Find the standard matrix A for the linear transformation T : R2 −→ R2 , defined by
T (x, y) = (2x + 3y, 4x − 5y). [2 marks]

Section B: 30 marks

16. (a) Let V be an n dimensional vector space and T : V −→ V be a linear transformation.


Prove that if {x1, x2, . . . , xn} is a basis for V , then the set {T(x1), T(x2), . . . , T(xn)} is
a basis for T(V ). [3 marks]
(b) Let T : R3 −→ R2 be defined by T (x, y, z) = (x + 2y + z, −x + 3y + z). Find a basis
for
(i) Ker T ,
(ii) range T . [4 marks]
(c) Let T : P2 −→ P2 be a linear transformation on the set of polynomials of degree less
than or equal to 2, defined by T(f(t)) = tf′(t).
(i) Determine whether T is onto.
(ii) Find the coordinate vector [T (t2 + 3)]B , where B is a standard basis for P2 .
(iii) Verify that dim(P2 ) = Nullity T + dim(range T ). [8 marks]
 
17. Let A = [ 2 −1 ; −1 2 ].
(a) (i) Find the eigenvalues of A. [2 marks]
(ii) State the eigenvalues of A3 . [1 mark]
(b) Compute the eigenvectors of A. [4 marks]
(c) Determine whether the matrix B = A − 3I is non singular. [1 mark]
k
(d) State the general formula for finding A , where k is a positive integer, and hence
compute A6 . [7 marks]

18. (a) Let V (+, ·) be a vector space and W be a non-empty subset of V . When is W (+, ·)
said to be a vector subspace of V? [2 marks]
(b) Consider the vector space M2×2 of all 2 × 2 matrices, and let
    W = { [ 2 a1 ; 0 a2 ] : a1, a2 ∈ R }

be a subset of M2×2 . Is W a vector subspace of M2×2 ? Support your answer.[3 marks]


(c) Using the Rank Theorem or otherwise, show whether or not the linear system

x + 2y + z = 2
x − 2y + 2z = 4
−3x − 4y − 5z = −5

is consistent. [3 marks]
(d) Find conditions on α, β, γ, θ for the linear system

x − 4y + 7z = α
3y − 5z = β
−2x + 5y − γz = θ

to have

(i) no solution, [4 marks]


(ii) a unique solution, [1 mark]
(iii) infinitely many solutions. [2 marks]

19. (a) Let
    A = [ 1  0 1 ]
        [ 2  1 1 ]
        [ 3  0 1 ].
        [ 0 −3 3 ]
Find a basis for each of the following fundamental subspaces associated with A:
(i) Row space of A, [3 marks]
(ii) Column space of A, [2 marks]
(iii) Solution space of A. [2 marks]
(b) Decide whether the matrix
    A = [  1 0 −1 ]
        [  0 2  5 ]
        [ −2 0 −2 ]
is invertible or not. If it is, compute A−1 . [6 marks]
(c) If a 6 × 3 matrix A has rank 3, find the
(i) nullity of A, [1 mark]
(ii) dimension of the row space of A, [1 mark]

We wish you a Merry Xmas and a Happy New Year!
