
MA 110:

Lecture 05

Saurav Bhaumik
Department of Mathematics
IIT Bombay

Spring 2025



Recall:
Proposition
An n×n matrix is invertible if and only if it can be transformed to the n×n identity matrix by EROs.

Proof. Suppose A ∈ R^{n×n} is invertible. Using EROs, transform A to a matrix A′ ∈ R^{n×n} such that A′ is in RCF. Since A is invertible, the linear system Ax = 0 has only the zero solution. Hence A′ has n nonzero rows, and so each of the n columns of A′ is pivotal. Also, the number of rows of A′ is equal to the number of its columns. Therefore A′ = I.
Conversely, suppose A ∈ R^{n×n} can be transformed to the n×n identity matrix I by EROs. Since Ix = 0 =⇒ x = 0 for x ∈ R^{n×1}, we see that the linear system Ax = 0 has only the zero solution. Hence A is invertible.



Remark.
Suppose an n × n square matrix A is invertible. In order to solve the linear system Ax = b for a given b ∈ R^{n×1}, we may transform the augmented matrix [A | b] to [I | c] by EROs. Now Ax = b ⇐⇒ Ix = c for x ∈ R^{n×1}. Hence Ac = b. Thus c is the unique solution of Ax = b. This observation is the basis of an important method to find the inverse of a square matrix.
Gauss-Jordan Method for Finding the Inverse of a Matrix
Let A ∈ R^{n×n} be an invertible matrix. Consider the basic column vectors e1, . . . , en ∈ R^{n×1}. Then [e1 · · · en] = I.
Let x1, . . . , xn be the unique elements of R^{n×1} such that Ax1 = e1, . . . , Axn = en, and define X := [x1 · · · xn]. Then

  AX = A [x1 · · · xn] = [Ax1 · · · Axn] = [e1 · · · en] = I.

By an earlier result, it follows that X = A^{-1}.


Hence to find A^{-1}, we may solve the n linear systems Ax1 = e1, . . . , Axn = en simultaneously by considering the n × 2n augmented matrix

  [A | e1 · · · en] = [A | I]

and transforming A to its RCF, namely to I, by EROs. Thus if [A | I] is transformed to [I | X], then X is the inverse of A.
Remark. To carry out the above process, we need not know beforehand that the matrix A is invertible. This follows by noting that A can be transformed to the identity matrix by EROs if and only if A is invertible. Hence the process itself reveals whether A is invertible or not.
Example
Let
       [ −1   1   2 ]
A :=   [  3  −1   1 ] .
       [ −1   3   4 ]
We use EROs to transform [A | I] to [I | X], where X ∈ R^{3×3}.

   [ −1   1   2 |  1   0   0 ]
   [  3  −1   1 |  0   1   0 ]
   [ −1   3   4 |  0   0   1 ]

→  [ −1   1   2 |  1   0   0 ]
   [  0   2   7 |  3   1   0 ]
   [  0   2   2 | −1   0   1 ]

→  [ −1   1   2 |  1   0   0 ]
   [  0   2   7 |  3   1   0 ]
   [  0   0  −5 | −4  −1   1 ]

→  [  1  −1  −2 | −1    0     0   ]
   [  0   1  3.5 | 1.5   0.5   0  ]
   [  0   0   1 |  0.8   0.2  −0.2 ]

→  [  1  −1   0 |  0.6   0.4  −0.4 ]
   [  0   1   0 | −1.3  −0.2   0.7 ]
   [  0   0   1 |  0.8   0.2  −0.2 ]

→  [  1   0   0 | −0.7   0.2   0.3 ]
   [  0   1   0 | −1.3  −0.2   0.7 ]  =  [I | X].
   [  0   0   1 |  0.8   0.2  −0.2 ]
Thus A is invertible and

                      [  −7   2   3 ]
  A^{-1} = X = (1/10) [ −13  −2   7 ] .
                      [   8   2  −2 ]
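The elimination above can be carried out mechanically. Below is a minimal NumPy sketch of the Gauss-Jordan method (with partial pivoting added for numerical stability, which the hand computation did not need); `gauss_jordan_inverse` is an illustrative name, not a library routine.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] to [I | X] by EROs and return X = A^{-1}.
    Raises ValueError if A is not invertible."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # the n x 2n augmented matrix [A | I]
    for j in range(n):
        # Swap up the row with the largest entry in column j (ERO of type I).
        p = j + np.argmax(np.abs(M[j:, j]))
        if np.isclose(M[p, j], 0.0):
            raise ValueError("matrix is not invertible")
        M[[j, p]] = M[[p, j]]
        M[j] /= M[j, j]                 # scale the pivot row so the pivot is 1 (type III)
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]  # clear column j in every other row (type II)
    return M[:, n:]

A = np.array([[-1.0,  1.0, 2.0],
              [ 3.0, -1.0, 1.0],
              [-1.0,  3.0, 4.0]])
X = gauss_jordan_inverse(A)
print(np.round(10 * X))                # [[-7. 2. 3.], [-13. -2. 7.], [8. 2. -2.]]
print(np.allclose(A @ X, np.eye(3)))   # True
```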



Linear Dependence
Let n ∈ N. We shall work entirely with row vectors in R^{1×n} (of length n), or entirely with column vectors in R^{n×1} (of length n), both of which will be referred to as ‘vectors’.
We have already considered a linear combination

  α1 a1 + · · · + αm am

of vectors a1, . . . , am, where α1, . . . , αm are scalars.

A set S of vectors is called linearly dependent if there exist m ∈ N, (distinct) vectors a1, . . . , am in S, and scalars α1, . . . , αm, not all zero, such that

  α1 a1 + · · · + αm am = 0.

It can be seen that S is linearly dependent ⇐⇒ either 0 ∈ S or a vector in S is a linear combination of other vectors in S.
Examples
(i) Let S := { [1 2]^T, [2 1]^T, [1 −1]^T } ⊂ R^{2×1}. Then the set S is linearly dependent since

  [2 1]^T = [1 2]^T + [1 −1]^T.   Clearly,   −[2 1]^T + [1 2]^T + [1 −1]^T = [0 0]^T.

(ii) Let S := { [1 2 3], [2 3 1], [3 1 2], [0 −3 3] } ⊂ R^{1×3}. Then the set S is linearly dependent since

  [0 −3 3] = [1 2 3] − 2 [2 3 1] + [3 1 2].   Clearly,
  [1 2 3] − 2 [2 3 1] + [3 1 2] − [0 −3 3] = [0 0 0].
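Both relations are easy to confirm on a machine; the following sketch (assuming NumPy) simply recomputes the two linear combinations.

```python
import numpy as np

# Example (i): three vectors in R^{2x1}
u, v, w = np.array([1, 2]), np.array([2, 1]), np.array([1, -1])
print(np.array_equal(-v + u + w, np.zeros(2)))       # True: -[2 1]^T + [1 2]^T + [1 -1]^T = 0

# Example (ii): four vectors in R^{1x3}
a, b, c, d = (np.array([1, 2, 3]), np.array([2, 3, 1]),
              np.array([3, 1, 2]), np.array([0, -3, 3]))
print(np.array_equal(a - 2*b + c - d, np.zeros(3)))  # True: the stated dependence relation
```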

In (i) above, S is a set of 3 vectors in R^{2×1}, and in (ii) above, S is a set of 4 vectors in R^{1×3}. These examples illustrate an important phenomenon to which we now turn. First we prove the following crucial result.
Proposition
Let S be a set of s vectors, each of which is a linear combination of elements of a (fixed) set of r vectors. If s > r, then the set S is linearly dependent.

Proof. Let S := {x1, . . . , xs}, and suppose each vector in S is a linear combination of elements of the set {y1, . . . , yr} of r vectors, where s > r. Then

  xj = Σ_{k=1}^{r} ajk yk   for j = 1, . . . , s, where ajk ∈ R.

Let A := [ajk] ∈ R^{s×r}. Then A^T ∈ R^{r×s}. Since r < s, the linear system A^T x = 0 has a nonzero solution, that is, there are α1, . . . , αs, not all zero, such that

        [ α1 ]   [ a11  · · ·  as1 ] [ α1 ]   [ 0 ]
  A^T   [ ⋮  ] = [  ⋮           ⋮  ] [ ⋮  ] = [ ⋮ ]  ∈ R^{r×1},
        [ αs ]   [ a1r  · · ·  asr ] [ αs ]   [ 0 ]

that is, Σ_{j=1}^{s} ajk αj = 0 for k = 1, . . . , r. It follows that

  Σ_{j=1}^{s} αj xj = Σ_{j=1}^{s} αj ( Σ_{k=1}^{r} ajk yk ) = Σ_{k=1}^{r} ( Σ_{j=1}^{s} ajk αj ) yk = 0.

Since not all α1, . . . , αs are zero, S is linearly dependent.
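The proof is constructive: any nonzero null vector of A^T supplies the dependence coefficients. A small sketch (assuming NumPy; the 3×2 coefficient matrix is hypothetical) extracts such a vector from the singular value decomposition.

```python
import numpy as np

# s = 3 vectors expressed in terms of r = 2 vectors y1, y2:  x_j = a_j1 y1 + a_j2 y2.
A = np.array([[1.0, 2.0],   # hypothetical coefficients a_jk: s = 3 rows, r = 2 columns
              [3.0, 4.0],
              [5.0, 6.0]])

# Since r < s, the system A^T x = 0 has a nonzero solution; here the last right
# singular vector of A^T spans its null space.
_, sing, Vt = np.linalg.svd(A.T)
alpha = Vt[-1]                       # nonzero vector with A^T alpha = 0
print(np.allclose(A.T @ alpha, 0))   # True: alpha_1 x_1 + alpha_2 x_2 + alpha_3 x_3 = 0
```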


Corollary
Let n ∈ N and let S be a set of vectors of length n. If S has more than n elements, then S is linearly dependent.

Proof. If S is a set of column vectors of length n, then each element of S is a linear combination of the n column vectors e1, . . . , en. Similarly, if S is a set of row vectors of length n, then each element of S is a linear combination of the n row vectors e1^T, . . . , en^T. Hence the desired result follows from the crucial result we just proved.
Linear Independence
A set S of vectors is called linearly independent if it is not linearly dependent, that is,

  α1 a1 + · · · + αm am = 0 =⇒ α1 = · · · = αm = 0

whenever a1, . . . , am are (distinct) vectors in S and α1, . . . , αm are scalars. We may also say that the vectors in S are linearly independent.
Linearly independent sets are important because each one of them gives us data that we cannot obtain from any linear combination of the others. In this sense, each element of a linearly independent set is indispensable!
Examples
(i) The empty set is linearly independent vacuously.



(ii) Let S be the subset of R^{n×1} consisting of the basic column vectors e1, . . . , en. Then S is linearly independent. To see this, let α1, . . . , αn ∈ R and α1 e1 + · · · + αn en = 0. Then the jth entry αj of α1 e1 + · · · + αn en is equal to 0 for j = 1, . . . , n.
(iii) Let S denote the subset of R^{1×4} consisting of the vectors [1 0 0 0], [1 1 0 0], [1 1 1 0] and [1 1 1 1]. Then S is linearly independent. To see this, let α1, α2, α3, α4 ∈ R be such that

  α1 [1 0 0 0] + α2 [1 1 0 0] + α3 [1 1 1 0] + α4 [1 1 1 1] = [0 0 0 0].

Then α1 + α2 + α3 + α4 = 0, α2 + α3 + α4 = 0, α3 + α4 = 0 and α4 = 0, that is, α4 = α3 = α2 = α1 = 0.
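One way to check example (iii) numerically (a sketch assuming NumPy) is to stack the four vectors as rows and confirm that the rank is 4.

```python
import numpy as np

S = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 1, 0],
              [1, 1, 1, 1]])
# Rank 4 means the only solution of a1 s1 + a2 s2 + a3 s3 + a4 s4 = 0 is a = 0.
print(np.linalg.matrix_rank(S))  # 4, so the four vectors are linearly independent
```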



How to Decide Linear Independence of Column Vectors?
Suppose c1, . . . , cn are column vectors, each of length m. Let A ∈ R^{m×n} be the matrix having c1, . . . , cn as its n columns. Then for x1, . . . , xn ∈ R,

                                        [ x1 ]
  x1 c1 + · · · + xn cn = [c1 · · · cn] [ ⋮  ] = Ax.
                                        [ xn ]

Hence the subset S := {c1, . . . , cn} of R^{m×1} is linearly independent if and only if the linear system Ax = 0 has only the zero solution. This is the case if and only if any REF A′ of A has n nonzero rows, as we have seen in Lecture 3.
Hence if A ∈ R^{m×n} is transformed to a REF A′, and A′ has r nonzero rows, then the columns of A form a linearly independent subset of R^{m×1} if r = n, and they form a linearly dependent subset of R^{m×1} if r < n.
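In NumPy terms the test is one line (a sketch; `matrix_rank` computes r by a singular-value criterion rather than by row reduction, but the conclusion is the same):

```python
import numpy as np

def columns_independent(A):
    """Columns of A are linearly independent iff Ax = 0 has only x = 0,
    i.e. iff the rank r equals the number n of columns."""
    return np.linalg.matrix_rank(A) == A.shape[1]

# The 3 vectors of example (i) as columns of a 2x3 matrix: more columns than rows,
# so they must be dependent.
A = np.array([[1, 2,  1],
              [2, 1, -1]])
print(columns_independent(A))  # False
```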
Row Rank of a Matrix
Let A ∈ R^{m×n}. The row rank of A is the maximum number of linearly independent row vectors of A. Thus the row rank of A is equal to r if and only if there is a linearly independent set of r rows of A and any set of r + 1 rows of A is linearly dependent.
Let r be the row rank of A. Then r = 0 if and only if A = O. Since the total number of rows of A is m, we see that r ≤ m. Also, since the row vectors of A form a subset of R^{1×n}, no more than n of them can be linearly independent. Thus r ≤ n.
Let a1, . . . , am be any m vectors in R^{1×n}. Clearly, they are linearly independent if and only if the matrix A formed with these vectors as row vectors has row rank m, and they are linearly dependent if the row rank of A is less than m.



Examples
(i) Let
       [   3   0   2   2 ]
A :=   [  −3   0  12  27 ] .
       [ −21  21   0  15 ]

The row vectors of A are a1 := [3 0 2 2], a2 := [−3 0 12 27] and a3 := [−21 21 0 15].
Let α1, α2, α3 ∈ R be such that α1 a1 + α2 a2 + α3 a3 = 0. This means

               [   3   0   2   2 ]
  [α1 α2 α3]   [  −3   0  12  27 ]  =  [0 0 0 0],
               [ −21  21   0  15 ]

that is, 3α1 − 3α2 − 21α3 = 0, 21α3 = 0, 2α1 + 12α2 = 0 and 2α1 + 27α2 + 15α3 = 0. Clearly, α3 = 0, and the two equations 3α1 − 3α2 = 0, 2α1 + 12α2 = 0 show that α1 = α2 = 0 as well. Thus the 3 rows of A are linearly independent. Hence the row rank of A is 3.
(ii) Let
       [   3   0   2   2 ]
A :=   [  −3  21  12  27 ] .
       [ −21  21   0  15 ]

The row vectors of A are a1 := [3 0 2 2], a2 := [−3 21 12 27] and a3 := [−21 21 0 15].
Observe that 6a1 − a2 + a3 = 0. Hence the three row vectors a1, a2, a3 are not linearly independent. But the set {a1, a2} is linearly independent since a1 ≠ 0, a2 ≠ 0, and they are not scalar multiples of each other. (The same holds for the sets {a2, a3} and {a3, a1}.) Hence the row rank of A is 2.
We used the relation 6a1 − a2 + a3 = 0 above to determine the row rank of A. It is difficult to think of such a relation out of nowhere. We shall develop a systematic approach to find the row rank of a matrix.
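Numerically (a NumPy sketch), both the relation 6a1 − a2 + a3 = 0 and the row rank are immediate:

```python
import numpy as np

A = np.array([[  3,  0,  2,  2],
              [ -3, 21, 12, 27],
              [-21, 21,  0, 15]])
a1, a2, a3 = A
print(np.array_equal(6*a1 - a2 + a3, np.zeros(4)))  # True: the dependence relation
print(np.linalg.matrix_rank(A))                     # 2, the row rank of A
```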



First we prove the following preliminary results.
Proposition
If a matrix A is transformed to a matrix A′ by elementary row operations, then the row ranks of A and A′ are equal, that is, EROs do not alter the row rank of a matrix.

Proof. ERO of type I: Ri ←→ Rj with i ≠ j: A and A′ have the same set of row vectors. So there is nothing to prove.
ERO of type II: Ri + αRj with i ≠ j: Suppose the set {a1, . . . , ai, . . . , aj, . . . , am} of all row vectors of A contains a linearly independent subset S := {aj1, . . . , ajs} having s elements. We claim that the set {a1, . . . , ai + α aj, . . . , aj, . . . , am} of all row vectors of A′ also contains a linearly independent set S′ containing s elements.
If ai ∉ S, then we let S′ := S. Next, suppose ai ∈ S. Then we may replace ai suitably either by ai + α aj or by aj in the set S to form a linearly independent set S′. The last statement follows by considering the cases ai + α aj = 0, aj = 0, and by observing that if ai + α aj as well as aj were linear combinations of vectors in S \ {ai}, then so would be ai = (ai + α aj) − α aj, and this would contradict the linear independence of S. Note that the converse claim also holds.
ERO of type III: α Rj with α ≠ 0: {aj, aj1, . . . , ajs} is linearly independent ⇐⇒ {α aj, aj1, . . . , ajs} is linearly independent.
Thus the maximum number of linearly independent rows of A is the same as the maximum number of linearly independent rows of A′, that is, the row ranks of A and A′ are equal.
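A quick numerical sanity check of this proposition (a sketch assuming NumPy): apply one ERO of each type to a matrix and confirm the rank is unchanged.

```python
import numpy as np

A = np.array([[  3.0,  0.0,  2.0,  2.0],
              [ -3.0, 21.0, 12.0, 27.0],
              [-21.0, 21.0,  0.0, 15.0]])
r = np.linalg.matrix_rank(A)

B = A.copy(); B[[0, 2]] = B[[2, 0]]   # type I:   R1 <-> R3
C = A.copy(); C[1] += 5 * C[0]        # type II:  R2 + 5 R1
D = A.copy(); D[2] *= -2              # type III: -2 R3
print(all(np.linalg.matrix_rank(M) == r for M in (B, C, D)))  # True
```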
Proposition
Let a matrix A′ be in REF. Then the nonzero rows of A′ are
linearly independent, and so the row rank of A′ is equal to the
number of nonzero rows of A′ .
Proof. Let the number of nonzero rows of A′ be r, and denote these rows by a1, . . . , ar. Let the pivots p1, . . . , pr in these rows be in columns k1, . . . , kr, where 1 ≤ k1 < · · · < kr ≤ n. Suppose α1 a1 + · · · + αr ar = 0, where α1, . . . , αr ∈ R.
Assume for a moment that not all α1, . . . , αr are zero, and let αj be the first nonzero number among them. Now the kj-th component of α1 a1 + · · · + αr ar = αj aj + · · · + αr ar is equal to αj pj, since all entries in the kj-th column below the pivot pj are equal to 0. Hence αj pj = 0. But pj ≠ 0, and so αj = 0, contrary to our assumption. Thus α1 = · · · = αr = 0. This shows that the first r rows of A′ are linearly independent.
Also, since the last m − r rows of A′ are zero rows, any r + 1 row vectors of A′ will contain the vector 0, and so they will not be linearly independent. Hence the row rank of A′ is r.



We have now obtained an important result which tells us how to find the row rank of a matrix.
Proposition
The row rank of a matrix is equal to the number of nonzero rows in any row echelon form of the matrix.

Proof. Let A be an m × n matrix. By using EROs of types I and II, we transform A to a row echelon form A′. Then the row rank of A is equal to the row rank of A′, which is equal to the number of nonzero rows of A′.
The above proposition implies that if A′ and A′′ are two row echelon forms of a matrix A, then they have the same number of nonzero rows; this number is equal to the row rank of A.
Since each nonzero row of a matrix in REF has exactly one pivot, we see that the row rank of a matrix is equal to the number of pivots in any row echelon form of the matrix.
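The proposition translates directly into an algorithm. Below is a minimal row-reduction sketch (assuming NumPy; `row_rank_by_ref` is an illustrative name, not a library routine) that transforms the matrix to a REF and counts the nonzero rows.

```python
import numpy as np

def row_rank_by_ref(A, tol=1e-12):
    """Transform A to a row echelon form by EROs of types I and II,
    then count the nonzero rows (= number of pivots = row rank)."""
    M = A.astype(float).copy()
    m, n = M.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        p = row + np.argmax(np.abs(M[row:, col]))      # choose a pivot row (type I swap)
        if abs(M[p, col]) < tol:
            continue                                   # no pivot in this column
        M[[row, p]] = M[[p, row]]
        for i in range(row + 1, m):                    # clear entries below the pivot (type II)
            M[i] -= (M[i, col] / M[row, col]) * M[row]
        row += 1
    return row

A = np.array([[  3,  0,  2,  2],
              [ -3, 21, 12, 27],
              [-21, 21,  0, 15]])
print(row_rank_by_ref(A))  # 2, matching example (ii) above
```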
