Maths Sy Paper2
Unit 5 : Determinants
Recommended Books :
1. SERGE LANG : Introduction to Linear Algebra, Springer Verlag.
2. S. KUMARESAN : Linear Algebra A Geometric approach, Prentice
Hall of India Private Limited.
1.0 Objectives
1.1 Introduction
1.2 Systems of Linear equations and matrices
1.3 Equivalent Systems
1.4 Exercise
1.5 Unit End Exercise
1.0 OBJECTIVES
Our aim in this section is to study rectangular arrays of numbers, the corresponding linear equations, and some related concepts, and to develop an elementary theory of the same. The object of this section is to provide different and simple methods of solving for the unknowns, so that students can follow the material easily.
1.1 INTRODUCTION
a11x1 + a12x2 + … + a1nxn = b, where a11, a12, …, a1n and b are
real numbers.
A collection of equations
a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
…
am1x1 + am2x2 + … + amnxn = bm
where aij, bi ∈ ℝ, 1 ≤ i ≤ m, 1 ≤ j ≤ n,
is called a system of m linear equations in n unknowns.
If b1 = b2 = … = bm = 0, the system is called homogeneous,
i.e.
a11x1 + a12x2 + … + a1nxn = 0
a21x1 + a22x2 + … + a2nxn = 0
…
am1x1 + am2x2 + … + amnxn = 0
In short,
Simple example
x + 2y + 3z = 0
2x + 3y + 4z = 0        (homogeneous system)
x + y + 5z = 0
If
x + 2y + 3z = 5
2x + 3y + z = 3         (non-homogeneous system)
x + y + 5z = 0
1. x + 2y = 4   (E1)
   x + 2y = 6   (E2)   (inconsistent)
2. x + 2y = 4   (E1)
   x − y = 1    (E2)
3. s + 2t = 4
   2s + 4t = 8
Geometrically, the first system consists of two parallel lines which do not intersect. The second system consists of two lines meeting in exactly one point. The third system consists of a single line, and every point on the line is a solution; thus the system has infinitely many solutions.
In general, the system is written as
Σ_{j=1}^{n} aij xj = bi ;  for 1 ≤ i ≤ m.
A system of equations obtained by
i) multiplying an equation in the system by a non-zero scalar, or
ii) adding a scalar multiple of an equation in the system to another equation,
is called a system equivalent to the given system.
i) m = 1, n = 2
a11x + a12y = 0
If a11 ≠ 0 and (s, t) is a solution, then a11s + a12t = 0, so
s = −(a12/a11)t, t ∈ ℝ.
The solution set is
{ (−(a12/a11)t, t) : t ∈ ℝ } = { t(−a12/a11, 1) : t ∈ ℝ }.
If a11 = 0, then a12 ≠ 0, and multiplying the equation by 1/a12 we
get the equation y = 0.
ii) m = n = 2
a11x + a12y = 0   (E1)
a21x + a22y = 0   (E2)
If a11/a21 = a12/a22 = λ, λ ≠ 0, then multiplying equation E2 by λ we
get that the two equations are the same, and the system reduces to
a11x + a12y = 0, which we discussed in (i). If a11/a21 ≠ a12/a22, then
a11a22 − a12a21 ≠ 0. In this case, if (s, t) is a solution, eliminating
y from E1 and E2 gives (a11a22 − a12a21)s = 0, so s = 0; then
a12t = 0 and a22t = 0, and since a12 and a22 are not both zero, t = 0.
Thus the system has only the trivial solution.
iii) m = 1, n = 3
a11x + a12y + a13z = 0
If a11 ≠ 0 and (r, s, t) is a solution of the above system, then
a11r + a12s + a13t = 0,
so r = −(a12/a11)s − (a13/a11)t, s, t ∈ ℝ.
The solution set is
{ (−(a12/a11)s − (a13/a11)t, s, t) : s, t ∈ ℝ }
= { s(−a12/a11, 1, 0) + t(−a13/a11, 0, 1) : s, t ∈ ℝ }.
Thus, the system has infinitely many solutions. Geometrically the system
represents a plane passing through origin and any point on the plane is a
solution of the system.
iv) m = 2, n = 3
a11x + a12y + a13z = 0   (E1)
a21x + a22y + a23z = 0   (E2)
v) m = 3, n = 3
a11x + a12y + a13z = 0   (E1)
a21x + a22y + a23z = 0   (E2)
a31x + a32y + a33z = 0   (E3)
Substituting in E1 leads to
λ [ a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31) ] = 0,
i.e. λ times the determinant
| a11 a12 a13 |
| a21 a22 a23 |
| a31 a32 a33 |
is 0; when this determinant is non-zero, λ = 0.
Hence (r, s, t) = (0, 0, 0).
Geometrically, the three equations represent three planes passing through the origin, and they intersect in a unique point, namely the origin.
We observe that, in the above systems, the system has infinitely many solutions if m < n. But for m = n the system has only the trivial solution, provided certain conditions are satisfied by the coefficients aij.
NOTE : The system Σ_{j=1}^{n} aij xj = 0, 1 ≤ i ≤ m, of m homogeneous linear equations in n unknowns has a non-trivial solution if m < n.
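The NOTE can be illustrated computationally: for m < n the coefficient matrix has rank at most m < n, so its null space is non-trivial, and the last right-singular vector from an SVD gives a non-trivial solution. The 2×3 coefficients below are one plausible reading of the worked example in this unit, used here purely as an illustration:

```python
import numpy as np

# m = 2 equations in n = 3 unknowns (m < n): a non-trivial
# solution must exist. The last row of Vt spans the null space
# whenever rank(A) < n.
A = np.array([[2.0, -3.0, 5.0],
              [1.0, -1.0, 1.0]])
_, _, Vt = np.linalg.svd(A)
x = Vt[-1]                        # unit-norm null-space vector
residual = np.linalg.norm(A @ x)
print(x, residual)
```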
Solved example :
Solution : We take y = 1, z = 1 and get x = 2/3; then
(2/3, 1, 1) is a non-trivial solution of the system.
We observe that :
1. Proposition : For a homogeneous system of m linear equations in n
unknowns, the sum of two solutions and a scalar multiple of a solution
is again a solution of the same system.
2. Proposition : A necessary and sufficient condition for the sum of two solutions, or any scalar multiple of a solution, of the system of linear equations Σ_{j=1}^{n} aij xj = bi, 1 ≤ i ≤ m, to again be a solution of the same system is that bi = 0 for 1 ≤ i ≤ m.
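Proposition 1 is easy to spot-check by machine. The homogeneous system and the solution below are illustrative choices consistent with Example 1 later in this unit, not a prescribed computation from the text:

```python
import numpy as np

# For a homogeneous system Ax = 0, sums and scalar multiples of
# solutions are again solutions (Proposition 1).
A = np.array([[2.0, -3.0, 5.0],
              [1.0, -1.0, 1.0]])
u = np.array([2.0, 3.0, 1.0])   # A @ u = 0
v = 4.0 * u                     # another solution
ok_sum = np.allclose(A @ (u + v), 0)
ok_scale = np.allclose(A @ (2.5 * u), 0)
print(ok_sum, ok_scale)
```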
Examples
2x + 5y + 7z + w = 7
3y + 4w = 8
3x + 5z + 9w = 0
In general, consider the system
Σ_{j=1}^{n} aij xj = bi , 1 ≤ i ≤ m        (A)
The system (A) can be written in the matrix form as
Proof (a) : Since c = (c1, c2, …, cn) is a solution of the system, we have
Σ_{j=1}^{n} aij cj = 0, 1 ≤ i ≤ m        (1)
Consider L.H.S. = Σ_{j=1}^{n} aij (αcj) = α Σ_{j=1}^{n} aij cj
= α · 0   [from equation (1)]
= 0 = R.H.S.
∴ αc = (αc1, αc2, …, αcn) is a solution of the given system.
b) Since c1 = (c11, c12, …, c1n) is a solution of the given system.
Substituting the values of x1, x2, …, x_{p−1}, x_{p+1}, …, xn in (II), we get the value of xp. Therefore, we get the values of x1, x2, …, xn, which are not all zero.
Note :
i) If m < n, then the system has a non-trivial solution, but the converse may not be true; i.e. if the system has a non-trivial solution, it does not follow that m < n.
Examples :
1) Find the solution set of the following problems.
a) 2x − 3y + 5z = 0   (i)
   x − y + z = 0      (ii)
Solution :
(ii) ⟹ y = x + z
Substituting y in (i), we get 2x − 3(x + z) + 5z = 0
⟹ x = 2z
Again,
y = x + z = 2z + z = 3z
⟹ y = 3z
Solution set is { (x, y, z) : (x, y, z) = (2z, 3z, z), z ∈ ℝ }
= { z(2, 3, 1) : z ∈ ℝ }
EXERCISE: Find the solution sets of the following systems.
i) 2x + 4z + 3y = 0
   3x + z + y = 0
ii) 2x − y + 4z − w = 0
    3x + 2y + 3z + w = 0
    x − y + z = 0
iii) x − y + 2z + 3w = 0
     2x + 4z + 4w = 0
     x + y + 2z + w = 0
     x + 2y + 2z = 0
iv) 7x − 2y + 5z + w = 0
    x − y + z = 0
    x + z + w = 0
    y − 2z + w = 0
Example 2: 2x − 3y + 4z = 0   (1)
           3x − y + z = 0     (2)
Solution : (2) ⟹ y = 3x + z
Substituting y in (1), we get z = 7x
Since y = 3x + z = 3x + 7x = 10x,
y = 10x
Solution set is S = { (x, y, z) : (x, y, z) = (x, 10x, 7x), x ∈ ℝ }
= { x(1, 10, 7) : x ∈ ℝ }
Example 3: 2x − y + 4z − w = 0   … (1)
           3x + 2y + 3z + w = 0  … (2)
           x − y + z = 0         … (3)
Solution : (3) ⟹ y = x + z
Substituting y = x + z in (2), we get w = −5x − 5z
Substituting the values of y and w in (1), we get x = −(4/3)z
S = { z(−4/3, −1/3, 1, 5/3) : z ∈ ℝ }
Example 4:
7x − 2y + 5z + w = 0   (1)
x − y + z = 0          (2)
x + z + w = 0          (3)
y − 2z + w = 0         (4)
(2) ⟹ y = x + z        (5)
By substituting (5) in (1), we get
w = −5x − 3z           (6)
Substituting (6) in (3) gives −4x − 2z = 0, i.e. z = −2x; then (4) gives 4x = 0, so x = 0 and z = 0.
(5) ⟹ y = 0
(6) ⟹ w = 0
Solution set is S = { (x, y, z, w) } with x = y = z = w = 0, i.e.
S = { (0, 0, 0, 0) }.
1.4 EXERCISE :
v) x1 + 3x2 + x4 = 0
   x1 + 4x2 + 2x3 = 0
   2x2 + 2x3 + x4 = 0
   x1 + 2x2 + x3 + x4 = 0
2
MATRICES
2.0 Objectives
2.1 Introduction
2.2 Multiplication of Matrices
2.3 Matrices and Linear Equations
2.4 Transpose of a Matrix
2.5 Diagonal
2.0 OBJECTIVES :
2.1 INTRODUCTION :
element aij occurs in the ith row and jth column. The matrix is denoted by
A = [aij].
Transpose of a matrix : If A = [aij]m×n, then the transpose of A is the
n×m matrix B = [bji]n×m, where bji = aij for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
Symmetric Matrix :
An n×n matrix A over ℝ is said to be symmetric if Aᵀ = A, i.e.
aij = aji for 1 ≤ i ≤ n, 1 ≤ j ≤ n.
i) A + Aᵀ is symmetric.
Let A = [aij] be an m×n matrix over ℝ and B = [bjk] be an n×p matrix. We define the product AB = [cik], where cik = Σ_{j=1}^{n} aij bjk.
Note 1 : The product AB may be defined while the product BA is not defined. However, if A, B are n×n matrices over ℝ, AB and BA are both defined; they may still fail to be equal.
For example, let
A = [0 1]      B = [0 0]
    [0 0]          [1 0]
then
AB = [1 0]     BA = [0 0]
     [0 0]          [0 1]
so AB ≠ BA.
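The non-commutativity example is quick to reproduce, with A having its single 1 in the (1, 2) position and B in the (2, 1) position, as in the text:

```python
import numpy as np

# AB and BA are both defined (2x2) but unequal.
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])
AB = A @ B
BA = B @ A
print(AB)   # [[1 0], [0 0]]
print(BA)   # [[0 0], [0 1]]
```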
Identity Matrix :
In = [1 0 … 0]
     [0 1 … 0]
     [… … … …]
     [0 0 … 1]
is called an identity n×n matrix.
Kronecker’s delta :
In = [δij], where δij = 1 if i = j and δij = 0 if i ≠ j.
Lower triangular matrix :
A = [a11 0   …  0 ]
    [a21 a22 …  0 ]
    [ …   …  …  … ]
    [an1 an2 … ann]
Diagonal matrix :
A = [aij]n×n is called a diagonal matrix if aij = 0 for i ≠ j :
A = [a11 0   …  0 ]
    [0   a22 …  0 ]
    [ …   …  …  … ]
    [0   0   … ann]
Scalar Matrix :
A = [aij]n×n is called a scalar matrix if it is diagonal and each diagonal entry is c for some c ∈ ℝ :
A = [c 0 … 0]
    [0 c … 0]
    [… … … …]
    [0 0 … c]
Invertible Matrix :
A = [aij]n×n is said to be an invertible matrix if there exists B ∈ Mn(ℝ) such that
AB = BA = In.
An invertible matrix is also called a non-singular matrix. The matrix B is called an inverse of A.
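A quick numerical check of this definition, using `numpy.linalg.inv` to produce a candidate B (the 2×2 matrix A here is an arbitrary invertible example, not one prescribed by the text):

```python
import numpy as np

# B is an inverse of A when AB = BA = I.
A = np.array([[2.0, 5.0],
              [1.0, 3.0]])
B = np.linalg.inv(A)
I = np.eye(2)
print(np.allclose(A @ B, I), np.allclose(B @ A, I))
```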
Exercises 1.2 :
1) Let A = [1 1 1]
           [0 1 1]   Find A², A³, A⁴.
           [0 0 1]
2) Let a, b ∈ ℝ and let A = [1 a]  ,  B = [1 b]
                            [0 1]         [0 1]
   Find AB, A², A³. Show that A is invertible and find A⁻¹.
3) i) Find a 2×2 matrix A ≠ [1 0; 0 1] such that A² = I. Can you find all such matrices?
   ii) Find a 2×2 non-zero matrix A such that A² = 0.
4) Let A ∈ Mn(ℝ). If A³ = 0, show that I − A is invertible.
5) Let A ∈ Mn(ℝ). If A³ + A + I = 0, show that A is invertible.
6) Let A, B, P ∈ Mn(ℝ). If P is invertible and B = P⁻¹AP, then show that Bⁿ = P⁻¹AⁿP for n ∈ ℕ.
7) If A, B ∈ Mn(ℝ) are upper triangular matrices, determine whether AB is upper triangular.
8) Let R(θ) = [cos θ  −sin θ]  , θ ∈ ℝ. Show that
              [sin θ   cos θ]
   R(θ1) R(θ2) = R(θ1 + θ2) for θ1, θ2 ∈ ℝ.
9) Let A = [1 2 3 4]
           [0 2 3 4]   Then find A⁻¹.
           [0 0 3 4]
           [0 0 0 4]
10) Let A = [1 1]  ,  B = [3 1]   Is there a matrix C such that
            [2 2]         [4 4]
            [1 0]
    CA = B? Justify your answer.
Summary :
1) The m×n zero matrix, denoted by 0mn or simply 0, is the matrix whose elements are all zero. Find x, y, z, w
if [x + y  z + 3] = 0
   [y − 4  z + w]
Solution :
x + y = 0, z + 3 = 0
y − 4 = 0, z + w = 0
⟹ x = −4, y = 4, z = −3, w = 3
2) Show that for any matrix A, we have −(−A) = A.
Solution :
A = [aij]m×n
−A = [−aij]m×n
−(−A) = [−(−aij)]m×n = [aij]m×n = A
3) Find x, y, z, w if
3 [x y] = [ x  6 ] + [  4    x + y]
  [z w]   [−1  2w]   [z + w    3  ]
Solution : 3x = x + 4
3y = x + y + 6
3z = z + w − 1
3w = 2w + 3
⟹ x = 2, y = 4, z = 1, w = 3
vi) (K1 + K2)A = K1A + K2A
vii) (K1K2)A = K1(K2A)
viii) 1·A = A
Problem
1. Show that (−1)A = −A.
Answer : Consider
A + (−1)A = 1A + (−1)A
= (1 + (−1))A
= 0·A
= 0
Thus, A + (−1)A = 0
Or −A + A + (−1)A = 0 − A
Or (−1)A = −A    (∵ −A + A = 0)
2. Show that A + A = 2A and A + A + A = 3A.
2A = (1 + 1)A
= 1A + 1A
= A + A
Thus 2A = A + A.
3A = (2 + 1)A
= 2A + 1A
= A + A + A
Thus 3A = A + A + A.
Matrix Multiplication :
The product of a row matrix and a column matrix with the same number of elements is their inner product, defined as :
(a1, …, an) [b1]
            [⋮ ]  = a1b1 + … + anbn = Σ_{k=1}^{n} ak bk
            [bn]
For example,
(8, −4, 5) [ 3]
           [ 2]  = (8)(3) + (−4)(2) + (5)(−1) = 24 + (−8) − 5 = 11
           [−1]
Example 1 : A = [1  3]  and  B = [2  0 −4]
                [2 −1]           [3 −2  6]
AB = [(1)(2) + (3)(3)    (1)(0) + (3)(−2)    (1)(−4) + (3)(6) ]
     [(2)(2) + (−1)(3)   (2)(0) + (−1)(−2)   (2)(−4) + (−1)(6)]
AB = [11 −6  14]
     [ 1  2 −14]
2. Find AB if A = [2 1]  and  B = [1 2 5]
                  [1 0]           [0 3 4]
                  [3 4]
If A = [1 4]               Aᵀ = [1 2 3]
       [2 5]  ,  then           [4 5 6]
       [3 6]
Theorem : The transpose operation on matrices satisfies :
i) (A + B)ᵀ = Aᵀ + Bᵀ
ii) (Aᵀ)ᵀ = A
iii) (kA)ᵀ = kAᵀ
iv) (AB)ᵀ = Bᵀ · Aᵀ
2.5 DIAGONAL :
The diagonal of A = [aij] consists of the elements a11, a22, …, ann, where A is an n-square matrix.
The trace of an n-square matrix A = [aij] is the sum of its diagonal elements,
i.e. tr A = a11 + a22 + … + ann
viz. the trace of the matrix
A = [1 2 3]
    [4 5 6]   is 1 + 5 + 9 = 15.
    [7 8 9]
Property : Suppose A = [aij] and B = [bij] are n-square matrices and k is any scalar. Then
i) tr (A + B) = tr A + tr B ,
Examples :
35
I2 = [1 0]   is the identity matrix of order 2.
     [0 1]
I3 = [1 0 0]
     [0 1 0]  is the identity matrix of order 3, etc.
     [0 0 1]
Kronecker delta : The Kronecker delta is defined as
δij = 0 if i ≠ j
      1 if i = j
Accordingly, I = [δij].
Note: Trace of In = n.
Scalar Matrix Dk : Dk = k · I
Example – Show that A = [ 1  1  3]
                        [ 5  2  6]   is nilpotent of class 3.
                        [−2 −1 −3]
Answer : A² = [ 0  0  0]
              [ 3  3  9]
              [−1 −1 −3]
A³ = A² · A = 0
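The nilpotency claim is easy to verify by machine. The sign pattern of A below restores the minus signs that the printed display lost, chosen so that A² matches the matrix given in the answer:

```python
import numpy as np

# A is nilpotent of class 3: A^2 != 0 but A^3 = 0.
A = np.array([[ 1,  1,  3],
              [ 5,  2,  6],
              [-2, -1, -3]])
A2 = A @ A
A3 = A2 @ A
print(A2)
print(A3)
```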
Inverse Matrix : A square matrix A is invertible if there exists a square
matrix B such that AB BA I ; where I is identity matrix.
Note : (A⁻¹)⁻¹ = A
Example 1 : Show that A = [2 5]  and  B = [ 3 −5]  are inverses.
                          [1 3]           [−1  2]
Answer : AB = [1 0] = I
              [0 1]
BA = [1 0] = I
     [0 1]
Example 2 : When is the general 2×2 matrix A = [a b]  invertible?
                                               [c d]
What then is its inverse?
Answer : Take scalars x, y, z, t such that
[a b] [x y] = [1 0]   or  [ax + bz  ay + bt] = [1 0]  ,
[c d] [z t]   [0 1]       [cx + dz  cy + dt]   [0 1]
i.e.
ax + bz = 1    ay + bt = 0
cx + dz = 0    cy + dt = 1
both of which have coefficient matrix A. Set |A| = ad − bc. We know
that A is invertible if |A| ≠ 0. In that case the first and second
systems have the unique solutions
x = d/|A| ,  z = −c/|A| ,
y = −b/|A| ,  t = a/|A| .
Thus A⁻¹ = [ d/|A|  −b/|A|] = (1/|A|) [ d −b]
           [−c/|A|   a/|A|]           [−c  a]
In other words, when |A| ≠ 0, the inverse of the 2×2 matrix A is obtained by
i) interchanging the elements of the main diagonal,
ii) taking the negative of the other elements, and
iii) multiplying the matrix by 1/|A|.
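The swap/negate/divide recipe translates directly into a few lines of code (a minimal sketch; `inv2` is an assumed helper name):

```python
import numpy as np

def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] by the swap/negate/divide rule."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return np.array([[d, -b], [-c, a]]) / det

A_inv = inv2(2, 5, 1, 3)   # det = 2*3 - 5*1 = 1
print(A_inv)               # [[ 3. -5.] [-1.  2.]]
```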
Example 1 : Find the inverse of A = [3 5]
                                    [2 3]
Answer : |A| = −1 ≠ 0, so A⁻¹ exists. Next, interchange the diagonal
elements, take the negative of the other elements and multiply by 1/|A| :
A⁻¹ = (1/−1) [ 3 −5] = [−3  5]
             [−2  3]   [ 2 −3]
Example 2 : Find the inverse of A = [5 3]
                                    [4 2]
Answer : |A| = −2, so A⁻¹ exists.
A⁻¹ = (1/−2) [ 2 −3] = [−1   3/2]
             [−4  5]   [ 2  −5/2]
Example 3 : Find the inverse of A = [ 2 3]
                                    [−1 3]
|A| = 9, so
A⁻¹ = (1/9) [3 −3] = [1/3  −1/3]
            [1  2]   [1/9   2/9]
Definition : Let A be a square matrix of order n with |A| ≠ 0. The inverse of A is denoted and defined by
A⁻¹ = (Adjoint of A) / |A|
Here Aᵀ denotes the transpose of A.
Example :
i) Find the inverse of the matrix A = [1  1]
                                      [2 −1]
Solution : |A| = −3 ≠ 0
A⁻¹ = [1/3   1/3]
      [2/3  −1/3]
ii) If A = [3  1]
           [2 −2]
A⁻¹ = [1/4   1/8]
      [1/4  −3/8]
iii) If A = [−1 5 2]
            [ 2 0 1]  ,  |A| = 0, so A is not invertible.
            [ 3 1 2]
iv) If A = [−1 5 2]
           [ 2 1 1]
           [ 3 1 2]
Answer : |A| = −8 ≠ 0, so A⁻¹ exists. Take B = the matrix of cofactors of A; its transpose Bᵀ is the adjoint of A.
B = [ 1  −1  −1]
    [−8  −8  16]
    [ 3   5 −11]
Bᵀ = [ 1  −8    3]
     [−1  −8    5]
     [−1  16  −11]
A⁻¹ = (1/|A|) Bᵀ = −(1/8) [ 1  −8    3]
                          [−1  −8    5]
                          [−1  16  −11]
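The adjoint-based formula works for any order: build the cofactor matrix entry by entry, transpose it, and divide by the determinant. A sketch below (`adjugate_inverse` is an assumed helper name); the 3×3 matrix used to exercise it is the one whose inverse is tabulated in Exercises 1.3 later in this unit:

```python
import numpy as np

def adjugate_inverse(A):
    """A^{-1} = adj(A)/|A|, where adj(A) is the transpose of
    the matrix of cofactors."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty_like(A)                 # cofactor matrix
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    det = np.linalg.det(A)
    if np.isclose(det, 0):
        raise ValueError("matrix is not invertible")
    return C.T / det

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 3.0],
              [1.0, 0.0, 8.0]])
print(adjugate_inverse(A))
```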
3
ELEMENTARY ROW OPERATIONS AND
GAUSS-ELIMINATION METHOD, ROW–
ECHELON FORM
Unit structure:
3.0 Objectives
3.1 Elementary row operations
3.2 Gauss Elimination Method to Solve AX=B
3.3 Matrix units and Elementary Matrices
3.4 Elementary Matrices
3.5 Linear Algebra System of Linear Equations
3.0 OBJECTIVES :
ii) For each i, 1 ≤ i ≤ r, a_{i,k_i} ≠ 0 ; a_{i,k_i} is called the pivot element of the ith row.
iii) For each i, 1 ≤ i ≤ r, a_{s,k_i} = 0 for s > i.
R3 → R3 − 2R1 gives
[2 1 4 3]
[0 1 3 2]
[0 2 6 4]
(making entries below 1st pivot 0)
R3 → R3 − 2R2 gives
[2 1 4 3]
[0 1 3 2]
[0 0 0 0]
(making entries below 2nd pivot 0)
i) [6 3 4]      ii) [1 0 2]
   [4 1 6]          [2 1 3]
   [1 2 5]          [4 1 8]
Step II : Find the first column from the left containing a non-zero entry. If this entry is in the ith row, and not in the 1st row, perform R1 ↔ Ri.
Step III : Multiply that row by suitable numbers and subtract from the rows below it to make all entries below it 0.
Call the leading non-zero entries of each row pivots. If there are columns corresponding to non-pivot elements, we assign arbitrary values to the unknowns corresponding to them and solve the system by back substitution. The method breaks down when a pivot appears in the last column.
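The procedure of Steps I–III followed by back substitution can be sketched in code. This is a minimal implementation for square invertible systems, with partial pivoting; `gauss_solve` and the 2×2 example are assumptions for illustration, not taken from the text:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by forward elimination with partial
    pivoting, then back substitution (a sketch)."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))        # pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                  # multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

x = gauss_solve([[2, 1], [1, 3]], [5, 10])
print(x)   # [1. 3.]
```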
Exercises 1.1 :
ii) x1 + 2x2 + x3 + 2x4 = 1
    x1 + x2 + x3 + x4 = 2
    x1 + 7x2 + 5x3 + x4 = 3
i)  [1 1 2]     ii) [2 1 1 3]
    [0 0 0]         [0 0 0 0]
    [0 1 0]
iii) [1 1]      iv) [1 0 0 3 1]
     [0 1]          [0 0 0 1 1]
                    [0 0 0 0 0]
v)  [0 0 1]
    [0 0 1]
    [0 0 1]
i)  [0 1 2 1 2 1 1]
    [0 1 2 2 7 2 4]
    [0 2 4 3 7 1 0]
    [0 3 6 1 6 4 1]
ii) [1 5 2 1 4 0]
    [3 0 4 6 2 1]
    [1 2 1 2 3 1]
I_rs denotes the matrix unit whose (r, s) entry is 1 and all of whose other entries are 0 :
I_rs = [0 … 0 … 0]
       [0 … 1 … 0]
       [0 … 0 … 0]
We have (I_11 + I_22 + … + I_nn) A = A, and
I_rs · I_rs = 0 ,  I_rr · I_rr = I_rr   if r ≠ s.
I_rs A has the sth row (a_s1, …, a_sn) of A as its rth row, and I_sr A has the rth row (a_r1, …, a_rn) of A as its sth row; all other rows are zero.
We shall denote –
i) The matrix obtained by exchanging the ith row and jth row of the identity matrix by Eij.
iii) Eij(λ)A is the matrix obtained by adding λ times the jth row to the ith row of A.
C = [1  c12 … c1n]
    [0  1   … c2n]
    […  …   … …  ]
    [0  0   … 1  ]
Ri → Ri − c_in Rn (for i < n) gives
[1  c12 … 0]
[0  1   … 0]
[…  …   … …]
[0  0   … 1]
Similarly, Ri → Ri − c_{i,n−1} R_{n−1} makes the (n−1)th column agree with the identity, and so on, and finally we get the identity matrix. Thus, an invertible matrix is row equivalent to the identity matrix. Further, Ek … E1 A = In.
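The relation Ek … E1 A = In also gives a way to compute A⁻¹: the same row operations applied to the identity produce the inverse. A sketch on the augmented matrix [A | I] (the function name and the 2×2 example are assumptions for illustration):

```python
import numpy as np

def inverse_by_row_ops(A):
    """Row-reduce [A | I] to [I | A^{-1}]: the elementary row
    operations that turn A into I turn I into A^{-1}.
    Only a zero-pivot check, no further error handling."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))
        if np.isclose(M[p, k], 0):
            raise ValueError("matrix is singular")
        M[[k, p]] = M[[p, k]]            # row interchange
        M[k] /= M[k, k]                  # scale pivot row to 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]   # clear rest of column k
    return M[:, n:]

B = inverse_by_row_ops([[1, 2], [3, 4]])
print(B)
```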
Exercises 1.2 :
i) E = [1 0 3]   ii) E = [0 0 1]   iii) E = [1 0 0]
       [0 1 0]           [0 1 0]            [2 1 0]
       [0 0 1]           [1 0 0]            [0 1 1]
ii) A = [1 2]  ,  B = [1 2]
        [0 1]         [0 1]
iii) A = [2 1]  ,  B = [1 3]
         [1 3]         [2 1]
iii) [0 1 0]     iv) [5 1]
     [0 0 1]         [0 1]
     [1 0 0]
v)  [1 0 0 0]    vi) [2 0 0 2]
    [0 1 1 0]        [1 0 0 0]
    [0 0 1 0]        [0 0 1 0]
    [0 0 0 0]        [0 0 0 1]
i) A = [2 0 0 0]    ii) A = [0 0 0 1]
       [0 4 0 0]            [0 0 5 0]
       [0 0 5 0]            [0 2 0 0]
       [0 0 0 2]            [0 0 0 1]
6. Let A = [1 0]  ; find elementary matrices E1, E2 such that
           [5 2]
E2 E1 A = I.
2s t1,2 s t,s,t,
1 t 1,1,0,2,0 s,t
Row Echelon Form : A matrix in row echelon form has zeros below each
leading 1.
Example :
i)  [1 4 3 7]    ii) [1 1 0]
    [0 1 6 2]        [0 1 0]
    [0 0 1 5]        [0 0 0]
iii) [0 1 2 6 0]   iv) [1 1 3 4]
     [0 0 1 1 0]       [0 1 3 4]
     [0 0 0 0 1]       [0 0 1 5]
                       [0 0 0 2]
Reduced row echelon form : A matrix in reduced row echelon form has
zeros below and above each leading 1.
Examples :
i)  [1 0 0 4]    ii) [1 0 0]
    [0 1 0 7]        [0 1 0]
    [0 0 1 1]        [0 0 1]
iii) [0 1 2 0 1]   iv) [0 0]
     [0 0 0 1 3]       [0 0]
     [0 0 0 0 0]       [0 0]
v)  [1 0 2 3]    vi) [1 0 0 0]
    [0 1 1 2]        [0 1 0 0]
    [0 0 0 0]        [0 0 1 0]
    [0 0 0 0]        [0 0 0 1]
Solve the systems whose augmented matrices are :
a) [1 0 0 | 5]    b) [1 0 0 4 | 1]
   [0 1 0 | 2]       [0 1 0 2 | 6]
   [0 0 1 | 4]       [0 0 1 3 | 2]
1 6 0 0 4 2
1 0 0 0
0 0 1 0 3 1
c) d) 0 1 2 0
0 0 0 1 5 2
0 0 0 1
0 0 0 0 0 0
Solution :
x15
x2 2 By inspection
x3 4
x1 1 4x4
x2 6 2x4
x3 2 3x4
Elementary matrices :
i.e. I = [1 0 0 0]
         [0 1 0 0]
         [0 0 1 0]
         [0 0 0 1]
i)  [0 0 1 0]
    [0 1 0 0]   , by interchanging R1 and R3.
    [1 0 0 0]
    [0 0 0 1]
ii) [1 0 0 0]
    [0 1 0 0]   , by 5R3.
    [0 0 5 0]
    [0 0 0 1]
iii) [1 0 0 0]
     [0 1 3 0]  , by R2 + 3R3.
     [0 0 1 0]
     [0 0 0 1]
Exercises 1.3 :
i) A = [1 4 3]
       [1 2 0]
       [2 2 3]
Ans : A⁻¹ does not exist.
ii) A = [1 2 3]          [−40  16   9]
        [2 5 3] ,  A⁻¹ = [ 13  −5  −3]
        [1 0 8]          [  5  −2  −1]
iii) A = [ 1 6  4]
         [ 2 4 −1]  ,  A⁻¹ does not exist.
         [−1 2  5]
iv) A = [1 0 2]          [1/5  1/5  1/5 ]
        [1 1 4] ,  A⁻¹ = [3/5  1/5  2/5 ]
        [1 1 0]          [2/5  1/10 1/10]
Inverse of a matrix A :
A⁻¹ = (Adjoint of A) / |A| = Adj A / |A|
4
VECTOR SPACES AND SUBSPACES
Unit Structure:
4.0 Objectives
4.1 Introduction
4.1.1 Addition of vectors
4.1.2 Scalar multiplication
4.2 Vector spaces
4.3 Subspace of a vector space
4.4 Summary
4.5 Unit End Exercise
4.0 OBJECTIVES
This unit will help you understand the following concepts :
Vectors and Scalars in plane and space
Addition and scalar multiplication of vectors
Various properties regarding addition and scalar multiplication of
vectors
Idea of vector space
Definition of vector space
Various examples of vector space
Definition of subspace
Examples of subspace
Results related to union and intersection of subspace
Linear Span
4.1 INTRODUCTION
The significant purpose of this unit is to study a vector space. A
vector space is a collection of vectors that satisfies a set of conditions.
We‟ll look at many of the important ideas that come with vector spaces
once we get the general definition of a vector and a vector space.
Remark : 1) Vectors are denoted with a boldface lower case letter. For
instance we could represent the vector above by v, w, a, b, etc. Also when
we‟ve explicitly given the initial and terminal points we will often
represent the vector as, v = AB
3) In plane i.e. in IR2 we write vector v with initial point at origin and
terminal point at (x, y) as v = (x, y) and a vector w with initial point at (x1,
y1) and terminal point at (x2, y2) as w = (x2 – x1, y2 – y1). Similarly we
can express vectors in IR3 (space).
(Figures : the parallelogram law for u + v, and the scalar multiples v, 2v, −v, −2v of a vector v.)
4. 2 VECTOR SPACES
Now we will generalize this concept to any set (not necessarily set
of traditional vectors) with two operations satisfying all the conditions
which are satisfied by addition and scalar multiplication in normal vectors.
So we can call members of this set also as vectors and set as vector space.
If the following axioms are true for all objects u, v, and w in V and
all scalars c and k then V is called a vector space and the objects in V are
called vectors.
7) c(u + v) = cu + cv
8) (c + k)u = cu + ku
9) c(ku) = (ck )u
10) 1u = u
(c) u + w = v + w ⟹ u = v :
(u + w) + (−w) = (v + w) + (−w)
u + (w + (−w)) = v + (w + (−w))   (associative property)
u + 0 = v + 0   (−w is the additive inverse of w)
u = v   (0 is the additive identity)
Hence u + w = v + w ⟹ u = v.
Similarly we can prove that w + u = w + v ⟹ u = v.
Hence the first six axioms for a vector space are satisfied; the remaining 4 axioms can be easily verified. Hence ℝⁿ is a vector space. It is known as the Euclidean space. Members of ℝⁿ are the vectors of this vector space.
Example 5: Let X be a non-empty set. Let V = the set of all functions from X to ℝ.
If f, g ∈ V and α ∈ ℝ, then f + g and αf are also functions from X to ℝ, defined by
(f + g)(x) = f(x) + g(x) and (αf)(x) = αf(x) for all x in X.
We have learned about vector spaces. We know that any set can behave as a set of vectors if it satisfies certain axioms. Now the question is: if we take a subset of a vector space, is it a set of vectors? i.e., is it a vector space?
The only ones that we really need to worry about are the remaining
four, all of which require something to be in the subset W. The first two
(1 and 2) are the closure axioms that require that the sum of any two
elements from W is also in W and that the scalar multiple of any element
from W will be also in W. Note that the sum and scalar multiple will be in
V we just don‟t know if it will be in W. We also need to verify that the
zero vector (axiom 5) is in W and that each element of W has a negative
that is also in W (axiom 6).
W is a subspace of V.
Remark : A set {0} consisting of zero vector of any vector space V and V
itself are subspaces of V.
We can now state whether the given subset is a subspace of a vector space.
Following are some interesting examples
Now TST a, b ∈ L, α, β ∈ ℝ ⟹ αa + βb ∈ L.
Hence αa + βb ∈ L, and L is a subspace of ℝ².
Example 8: Let L’ = {(x, y, z) / x = kx0, y = ky0, z = kz0 ; (x0, y0, z0) is a fixed vector in ℝ³ and k ∈ ℝ},
i.e. L’ = {(kx0, ky0, kz0) / (x0, y0, z0) is a fixed vector in ℝ³ and k ∈ ℝ}.
Now TST a, b ∈ P, α, β ∈ ℝ ⟹ αa + βb ∈ P.
Let a = (x1, y1, z1), b = (x2, y2, z2), so 2x1 + 3y1 + z1 = 0 and 2x2 + 3y2 + z2 = 0.
αa + βb = α(x1, y1, z1) + β(x2, y2, z2) = (αx1 + βx2, αy1 + βy2, αz1 + βz2),
and 2(αx1 + βx2) + 3(αy1 + βy2) + (αz1 + βz2) = α(2x1 + 3y1 + z1) + β(2x2 + 3y2 + z2) = 0.
Hence αa + βb ∈ P, and P is a subspace of ℝ³.
[0 0] ∈ W, so W is non-empty.
[0 0]
Let A, B ∈ W and α, β ∈ ℝ,
A = [a1 0 ]  ,  B = [b1 0 ]
    [0  a2]         [0  b2]
αA + βB = α[a1 0 ] + β[b1 0 ] = [αa1 + βb1      0    ] ∈ W
           [0  a2]    [0  b2]   [    0     αa2 + βb2]
Example 12: Let Pn[x] be the set of all polynomials of degree at most n with real coefficients; then Pn[x] is a subspace of ℝ[x].
Example 13: Let W be set of all continuous real valued functions defined
on [a, b] then W is a subspace of vector space of all real valued functions
on [a, b]
O = [0]
    [⋮]  ∈ S,
    [0]
so S is non-empty.
For X, Y ∈ S and α, β ∈ ℝ,
AX = O and AY = O
A(αX + βY) = A(αX) + A(βY) = α(AX) + β(AY) = α·O + β·O = O
⟹ αX + βY ∈ S
Hence S is a subspace of ℝⁿ.
W3 = { [a1 a2]  /  a1, a2, a3 are real numbers }. Show that W3 is a
       [0  a3]
subspace of M2×2.
Let a, b ∈ W1 ∩ W2 and α, β ∈ ℝ.
⟹ αa + βb ∈ W1 and αa + βb ∈ W2   (∵ W1 and W2 are subspaces, by thm 3)
⟹ αa + βb ∈ W1 ∩ W2
⟹ W1 ∩ W2 is a subspace.
Note that W1 ∪ W2 need not be a subspace : x + y ∈ W1 ∪ W2 would require x + y ∈ W1 or x + y ∈ W2.
Let a, b ∈ W1 + W2 ; α, β ∈ ℝ.
a ∈ W1 + W2 ⟹ a = x1 + y1
b ∈ W1 + W2 ⟹ b = x2 + y2,
where x1, x2 ∈ W1 and y1, y2 ∈ W2 (by definition of W1 + W2)
αa + βb = αx1 + αy1 + βx2 + βy2
= (αx1 + βx2) + (αy1 + βy2) ∈ W1 + W2
Hence W1 + W2 is a subspace of V.
4. 4 SUMMARY :
Theory:
1) Define a vector space.
2) Define a subspace of a vector space
3) State the condition for a subset W to be a subspace of a vector
space V.
Problems:
1) Let V = {(x, y) / x, y are real numbers}. Addition and scalar multiplication in V are defined as (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and k(x, y) = (kx, 0) respectively. Show that V is not a vector space. Which axiom is not satisfied by V?
2) Show that following are subspaces of 2
5
LINEAR SPAN, LINEARLY DEPENDENT
AND INDEPENDENT SETS
Unit Structure:
5.0 Objectives
5.1 Introduction
5.2 Linear combination of vectors
5.3 Linear span
5.4 Convex sets
5.5 Linearly dependent and linearly independent sets
5.6 Summary
5.7 Unit End Exercise
5. 0 OBJECTIVES
This chapter will help you understand the following concepts:
Linear combination of vectors
Linear span of a set
Spanning set or generating set of a vector space
Linearly dependent set
Linearly independent set
5. 1 INTRODUCTION
In the vector space there are two operations addition and scalar
multiplication. Operating these operations on elements of vector space and
scalars (real numbers) we get an element of a vector space known as a
linear combination of vectors.
Set of all linear combination of elements of some subset of vector
space V has various properties. It is a subspace of V. The most important
is that it can cover V. Hence the concept of generators is introduced.
Suppose S is a subset of a vector space such that no element of S is a linear combination of the other elements of S. We can say that the elements of S do not depend on the other elements of S. We call S a linearly independent set. Subsets of V which are not like S are linearly dependent.
In this unit we define and elaborate all these concepts by studying various examples and properties.
5. 2 LINEAR COMBINATION IN A VECTOR SPACE
Example 2: u = (−1, 2) and v = (4, −6) are vectors in ℝ².
Is (−12, 20) a linear combination of u and v?
If yes, then there exist real numbers α1 and α2 such that w = α1u + α2v.
Suppose w = α1u + α2v.
Then (−12, 20) = w = α1(−1, 2) + α2(4, −6)
(−12, 20) = (−α1 + 4α2, 2α1 − 6α2)
−12 = −α1 + 4α2 and 20 = 2α1 − 6α2
By solving these equations simultaneously we get α1 = 4 and α2 = −2,
which are real numbers.
Example 3: u = (2, 10) and v = (−3, −15) are vectors in ℝ².
Is w = (1, −4) a linear combination of u and v?
Suppose w = α1u + α2v.
Then (1, −4) = α1(2, 10) + α2(−3, −15)
1 = 2α1 − 3α2 and −4 = 10α1 − 15α2
Multiplying the first equation by 5 we get
5 = 10α1 − 15α2
By comparing with the second equation we get 5 = −4, which is not true.
∴ we cannot find α1 and α2 such that w = α1u + α2v.
Hence (1, −4) is not a linear combination of (2, 10) and (−3, −15).
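Deciding whether w is a linear combination of u and v is exactly solving a linear system for the coefficients. Here is the computation of Example 2 done with NumPy:

```python
import numpy as np

# Solve [u v] c = w for the coefficients c = (c1, c2).
u = np.array([-1.0, 2.0])
v = np.array([4.0, -6.0])
w = np.array([-12.0, 20.0])
M = np.column_stack([u, v])   # columns are u and v
c = np.linalg.solve(M, w)
print(c)    # [ 4. -2.]  ->  w = 4u - 2v
```

For the inconsistent case of Example 3 the column matrix is singular, and `np.linalg.solve` raises `LinAlgError` instead of returning coefficients.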
Answers
1) Yes, (2, 3) = -2(-1, 0) + (3/4)(0, 4)
2) No, We cannot find real numbers a and b such that
2a + 8b = 5 and 3a + 12b = 6
3) Yes, (1, 2, 3) = 1(0, 2, 0) + (1/4)(4, 0, 0) + 3(0, 0, 1)
4) (4, 5) = 1(1, 0) + 1(3, 5)
(-3, 7) = (-36/5)(1, 0) + (7/5)(3, 5)
5) 3x2 - 2x + 4 = -2(x – 2) + 3 (x2 + 1) + (-3/5)(5)
5. 3 LINEAR SPAN
Example 4: Consider the vector space ℝ². Let S = {(1, 2), (2, 3)}. Then
L(S) = { α1(1, 2) + α2(2, 3) / α1 and α2 are real numbers }.
Let S be non-empty.
Then S has at least one element, say x. αx ∈ L(S) for any real number α.
0·x = 0 ∈ L(S). Hence L(S) is non-empty.
Let S = { [1 0] , [0 0] }
          [0 0]   [0 1]
L(S) = { α[1 0] + β[0 0]  /  α, β ∈ ℝ }
          [0 0]    [0 1]
L(S) = { [α 0]  /  α, β ∈ ℝ }
         [0 β]
Example 7: Let V = ℝ³ and S = {(1, 1, 0), (2, 0, 2)}. Let us check whether (5, 2, 3) and (4, 1, 5) are in L(S).
Answers
5. 4 CONVEX SETS
We have defined lines in ℝ² and ℝ³. We now define a line in a vector space.
(Figure : points u + t1v, u + t2v of the line through u in the direction v.)
Definition : A subset S of a vector space V is said to be convex if
P, Q ∈ S ⟹ (1 − t)P + tQ ∈ S for 0 ≤ t ≤ 1.
(Figure : a convex set S1 and a non-convex set S2.)
S1 is convex; S2 is not convex.
Since 0 ≤ t ≤ 1, 0 ≤ 1 − t ≤ 1.
Also 0 ≤ t1, t1′ ≤ 1
⟹ 0 ≤ (1 − t)t1 + t t1′ ≤ (1 − t) + t = 1
⟹ 0 ≤ (1 − t)t1 + t t1′ ≤ 1
Similarly 0 ≤ (1 − t)t2 + t t2′ ≤ 1.
From (*), (1 − t)P + tQ ∈ S.
Hence S is convex.
∴ S1 ∩ S2 is convex.
Let S = {(1, 1, 0), (2, 1, 1), (1, 0, 1)} be a subset of ℝ³. We will write a vector
(4, 2, 2) as a linear combination of elements of S. By observing elements
of S, we get
(4, 2, 2) = 2(1, 1, 0) + 0(2, 1, 1) + 2(1, 0, 1). But one can also write,
(4, 2, 2) = 1(1, 1, 0) + 1(2, 1, 1) + 1(1, 0, 1) or
(4, 2, 2) = 0(1, 1, 0) + 2(2, 1, 1) + 0(1, 0, 1)
Example 9: Let S = {(1, 0), (−1, 2), (2, −4)} be a subset of ℝ².
Let v1 = (1, 0), v2 = (-1, 2) and v3 = (2, -4)
Then, for a1 = 0, a2 = 2 and a3 = 1, not all zero, we have a1v1 + a2v2 + a3v3 = 0.
Hence S is linearly dependent.
Example 10: The subset S = {(1, 0), (0, 1)} of ℝ² is linearly independent.
Since a1(1, 0) + a2(0, 1) = (0, 0)
(a1, a2) = (0, 0)
a1 = 0 and a2 = 0.
Example 11: Let S = {(1, 7, -4), (1, -3, 2), (2, 1, 1)}.
To find whether S is linearly dependent or independent, consider
a(1, 7, -4) + b(1, -3, 2) + c(2, 1, 1) = (0, 0, 0), where a, b, c ∈ ℝ
(a + b + 2c, 7a - 3b + c, -4a + 2b + c) = (0, 0, 0)
a + b + 2c = 0 …….(i)
7a - 3b + c = 0 …….(ii)
-4a + 2b + c = 0 …….(iii)
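The homogeneous system (i)–(iii) has only the trivial solution exactly when the determinant of its coefficient matrix is non-zero; equivalently, the vectors of S are linearly independent when the matrix with those vectors as rows has non-zero determinant. A quick check of Example 11:

```python
import numpy as np

# Rows are the vectors of S; non-zero determinant means the only
# solution of a*v1 + b*v2 + c*v3 = 0 is a = b = c = 0.
S = np.array([[1.0,  7.0, -4.0],
              [1.0, -3.0,  2.0],
              [2.0,  1.0,  1.0]])
d = np.linalg.det(S)
print(d)            # -12.0, so S is linearly independent
```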
Consider av = 0 where a ∈ ℝ.
Since v ≠ 0, av = 0 ⟹ a = 0.
Because if a ≠ 0, then av = 0 ⟹ a⁻¹(av) = a⁻¹·0
⟹ (a⁻¹a)v = 0
⟹ 1v = 0
⟹ v = 0. But v ≠ 0.
Hence av = 0 ⟹ a = 0
∴ { v } is linearly independent.
{x + y, y + z, z + x} is linearly independent.
Note : Let v1 and v2 be two non-zero vectors in ℝ². If v1 and v2 are linearly dependent, i.e. {v1, v2} is linearly dependent, then there exist real numbers a and b, both non-zero, such that av1 + bv2 = 0. (Because if a = 0, then bv2 = 0 gives b = 0.)
∴ v1 = (−b/a)v2.
(Figure : v1 and v2 lie on the same line through 0.)
Note : Let v1, v2, v3 be three non-zero vectors in ℝ³. If v1, v2, v3 are linearly dependent, then there exist real numbers a, b and c, not all zero, such that av1 + bv2 + cv3 = 0.
(Figure : v1, v2, v3 through 0.)
If v2 and v3 are linearly independent then they are not on same line
through origin.
v1 – k1v2 – k2v3 = 0 v1 lies on the plane passing through v2, v3 and
origin.
i.e. v1, v2, v3 are coplanar.
(Figure : the coplanar vectors v1, v2, v3 through 0.)
1) Show that {(1, 2), (3, 4)} is linearly independent in ℝ².
2) Show that {(1, 1, 2, 0), (0, 1, 4, 9)} is linearly independent in ℝ⁴.
3) If {x, y} is linearly independent in a vector space V, then show that {x + ay, x + by} is linearly independent, where a and b are real numbers which are not the same.
Suppose S is finite.
Let S = { v1, v2, …, vn }.
Without loss of generality assume that T = { v1, v2, …, vk }, k ≤ n.
T is linearly dependent ⟹ there exist real numbers a1, a2, …, ak, not all zero, such that a1v1 + a2v2 + … + akvk = 0.
Proof: S is finite.
Let S = { v1, v2, …, vn } and let S be linearly independent.
Suppose x ∈ L(S) ⟹ x = a1v1 + a2v2 + … + anvn, where a1, a2, …, an ∈ ℝ
⟹ a1v1 + a2v2 + … + anvn − x = 0
⟹ a1v1 + a2v2 + … + anvn + (−1)x = 0, where −1 ≠ 0
⟹ { v1, v2, …, vn, x } = S ∪ { x } is linearly dependent.
Proof: Since S is linearly dependent, there exist real numbers a1, a2, …, an, not all zero, such that a1v1 + a2v2 + … + anvn = 0. ……….(i)
5. 6 SUMMARY
In this unit we have defined span of a set, linearly independent and
linearly dependent set in a vector space.
The major results for a vector space V, we have proved are:
If S is a subset of V then L(S) is the smallest subspace containing S
Subset of a linearly independent set is linearly independent
Superset of linearly dependent set is linearly dependent
For x V, S {x} is linearly dependent if and only if x L(S)
Every element of linearly dependent set can be expressed as a
linear combination of other elements of the set
If S is linearly independent then every element of L(S) has a unique expression
If L(S) = V then S is a generating set of V
Theory:
Problems:
6
BASIS AND DIMENSION
Unit Structure:
6.0 Objectives
6.1 Introduction
6.2 Basis of a vector space
6.3 Dimension of a vector space
6.4 Rank of a matrix
6.5 Summary
6.6 Unit End Exercise
6.0 OBJECTIVES
6. 1 INTRODUCTION
L(S) = ℝ³.
Let T = {(1, 0, 0), (0, 1, 0)}.
Clearly (x, y, z) ∉ L(T) when z ≠ 0,
so L(T) ≠ ℝ³.
(a, b, c) = (0, 0, 0)
a = 0, b = 0, c = 0
B is linearly independent.
Hence B is a basis of 3 .
Hence B‟ is a basis of 3 .
Consider a1[1 0] + a2[0 1] + a3[0 0] + a4[0 0] = [0 0]
           [0 0]     [0 0]     [1 0]     [0 1]   [0 0]
[a1 0] + [0 a2] + [0  0] + [0  0] = [0 0]
[0  0]   [0  0]   [a3 0]   [0 a4]   [0 0]
⟹ [a1 a2] = [0 0]
   [a3 a4]   [0 0]
⟹ a1 = 0, a2 = 0, a3 = 0, a4 = 0
B is linearly independent.
Now let [x y] ∈ M2×2.
        [z w]
Since [x y] = x[1 0] + y[0 1] + z[0 0] + w[0 0]  ,
      [z w]    [0 0]    [0 0]    [1 0]    [0 1]
L(B) = M2×2
∴ B is a basis of M2×2.
(1, 1, 1) ∈ ℝ³.
If there exist a, b ∈ ℝ such that (1, 1, 1) = a(1, 1, 0) + b(−1, 0, 0),
then (1, 1, 1) = (a − b, a, 0)
⟹ 1 = a − b, 1 = a, 1 = 0.
But 1 ≠ 0.
∴ (1, 1, 1) ∉ L(B)
∴ L(B) ≠ ℝ³
B is not a basis of ℝ³.
Note : In example 1 and example 2 we have seen that B and B’ are two different bases of ℝ³.
Which shows that a basis of a vector space is not unique.
Infact a vector space has infinitely many bases.
x = Σ_{j=1}^{n} xj uj = Σ_{j=1}^{n} xj ( Σ_{i=1}^{m} aij vi )
= Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij xj ) vi .
If x = 0, then Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij xj ) vi = 0
⟹ Σ_{j=1}^{n} aij xj = 0 , i = 1, 2, …, m …………..……(ii)
Since m < n, the homogeneous system (ii) has a non-trivial solution (c1, …, cn) :
Σ_{j=1}^{n} aij cj = 0, where at least one cj is non-zero.
Then Σ_{j=1}^{n} cj uj = Σ_{i=1}^{m} (0) vi = 0.
Example 7: Since { [1 0] , [0 1] , [0 0] , [0 0] } is a basis of M2×2,
                   [0 0]   [0 0]   [1 0]   [0 1]
dim M2×2 = 4.
Proof: W is a subspace of V.
If W = { 0 }, then dim W = 0 ≤ dim V.
Let B = {w1, w2, …., wr, u1, u2,…, um, v1, v2,…vs}
101
Claim: B is a basis of W1 + W2
W1 + W2 = { x + y/ x W1, y W2}
Let w W1 + W2
w = x + y where x W1, y W2
x = a1w1 + a2w2 + … + arwr + b1u1 + b2u2 + … + bmum
y = c1w1 + c2w2 + … + crwr + d1v1 + d2v2 + … + dsvs
x + y = a1w1 + a2w2 + … + arwr + b1u1 + b2u2 + … + bmum + c1w1 + c2w2 + … + crwr + d1v1 + d2v2 + … + dsvs
x + y = (a1 + c1)w1 + (a2 + c2)w2 + … + (ar + cr)wr + b1u1 + b2u2 + … + bmum + d1v1 + d2v2 + … + dsvs
x + y = w L(B)
L(B) = W1 + W2
Now to show that B is linearly independent
Consider a1w1 + a2w2 + … + arwr + b1u1 + b2u2 + … + bmum + c1v1 + c2v2 + … + csvs = 0 ………(i)
⟹ c1v1 + c2v2 + … + csvs = −a1w1 − a2w2 − … − arwr − b1u1 − b2u2 − … − bmum
Now −a1w1 − a2w2 − … − arwr − b1u1 − b2u2 − … − bmum ∈ W1
⟹ c1v1 + c2v2 + … + csvs ∈ W1
Since {w1, w2, …, wr, v1, v2, …, vs} is a basis of W2, c1v1 + c2v2 + … + csvs ∈ W2
∴ c1v1 + c2v2 + … + csvs ∈ W1 ∩ W2.
c1v1 + c2v2 + ….+ csvs = d1w1 + d2w2 + ….+ drwr ({w1, w2, …., wr}
is a basis of W1 W2)
c1v1 + c2v2 + ….+ csvs – (d1w1 + d2w2 + ….+ drwr) = 0
c1v1 + c2v2 + ….+ csvs – d1w1 –d2w2 - ….. – drwr = 0
{w1, w2, …, wr, v1, v2, …, vs} is a basis of W2, hence linearly independent
⟹ c1 = c2 = … = cs = d1 = d2 = … = dr = 0
Hence a1w1 + a2w2 + … + arwr + b1u1 + b2u2 + … + bmum + c1v1 + c2v2 + … + csvs = 0 ⟹ a1 = a2 = … = ar = b1 = b2 = … = bm = c1 = c2 = … = cs = 0
B is linearly independent.
B is a basis of W1 + W2
dim (W1 + W2) = r + m + s = (r + m) + (r + s) – r
= dim W1 + dim W2 – dim (W1 W2).
Example 8: Let W be the subspace of ℝ³ given by
W = {(x, y, z)/ x + y + z = 0 }
W = {(x, y, -x – y)/ x, y }
W = {(x, 0, -x) + (0, y, -y)/ x, y }
W = {x(1, 0, -1) + y(0, 1, -1)/ x, y }
W = L({(1, 0, -1), (0, 1, -1)})
{(1, 0, -1), (0, 1, -1)} generates W ………….(i)
Example 9: Let U and W be subspaces of ℝ⁴ given by
U = {(x, y, z, w)/ x + y + z = 0} and
W = {(x, y, z, w)/ x + w = 0, y = 2z}
We find dim W, dim U, dim (U W) and dim (U+W)
U = {(x, y, z, w)/ x + y + z = 0}
U = {(x, y, -x – y, w)/ x, y, w }
U = {(x, 0, -x, 0) + (0, y, -y, 0) + (0, 0, 0, w)/ x, y, w }
U = {x(1, 0, -1, 0) + y(0, 1, -1, 0) + w(0, 0, 0, 1)/ x, y, w }
U = L({(1, 0, -1, 0), (0, 1, -1, 0), (0, 0, 0, 1)})
{(1, 0, -1, 0), (0, 1, -1, 0), (0, 0, 0, 1)} generates U ………….(i)
If a(1, 0, −1, 0) + b(0, 1, −1, 0) + c(0, 0, 0, 1) = (0, 0, 0, 0), then
(a, b, −a − b, c) = (0, 0, 0, 0), so a = 0, b = 0, c = 0.
Hence {(1, 0, −1, 0), (0, 1, −1, 0), (0, 0, 0, 1)} is linearly independent. ………….(ii)
From (i) and (ii), {(1, 0, −1, 0), (0, 1, −1, 0), (0, 0, 0, 1)} is a basis of U,
so dim U = 3. …………(iii)
Next, W = {(x, 2z, z, −x) : x, z ∈ ℝ} = {x(1, 0, 0, −1) + z(0, 2, 1, 0) : x, z ∈ ℝ},
and it is easily checked that {(1, 0, 0, −1), (0, 2, 1, 0)} is linearly independent.
Hence {(1, 0, 0, −1), (0, 2, 1, 0)} is a basis of W,
so dim W = 2. …………….(iv)
Now (x, y, z, w) ∈ U ∩ W if and only if (x, y, z, w) ∈ U and (x, y, z, w) ∈ W:
U ∩ W = {(x, y, z, w) : x + y + z = 0, x + w = 0, y = 2z}
= {(x, y, z, w) : x + 2z + z = 0, w = −x, y = 2z}
= {(x, y, z, w) : x = −3z, y = 2z, w = 3z}
= {z(−3, 2, 1, 3) : z ∈ ℝ}.
Hence {(−3, 2, 1, 3)} generates U ∩ W.
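The dimensions in Example 9 can be confirmed with rank computations; the dimension formula then forces dim(U ∩ W) = 3 + 2 − 4 = 1, matching the one-dimensional intersection found above.

```python
import numpy as np

# Spanning sets for U and W from Example 9 (rows are vectors in R^4).
U = np.array([[1., 0., -1., 0.],
              [0., 1., -1., 0.],
              [0., 0., 0., 1.]])
W = np.array([[1., 0., 0., -1.],
              [0., 2., 1., 0.]])

dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.vstack([U, W]))
dim_int = dim_U + dim_W - dim_sum            # dimension formula

assert (dim_U, dim_W, dim_sum, dim_int) == (3, 2, 4, 1)

# The generator (-3, 2, 1, 3) of U n W satisfies all defining equations.
x, y, z, w = -3., 2., 1., 3.
assert x + y + z == 0 and x + w == 0 and y == 2 * z
```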
Consider (0, 1, 0, 0) in ℝ⁴.
If (0, 1, 0, 0) ∈ L(S ∪ {(1, 0, 0, 0)}),
then there exist real numbers a, b and c such that
(0, 1, 0, 0) = a(1, 2, 1, 0) + b(0, 0, 1, 1) + c(1, 0, 0, 0)
⇒ a + c = 0, 2a = 1, a + b = 0, b = 0
⇒ a = 1/2 and a = 0, which is not possible.
Hence (0, 1, 0, 0) ∉ L(S ∪ {(1, 0, 0, 0)}),
so S ∪ {(1, 0, 0, 0)} ∪ {(0, 1, 0, 0)} is linearly independent,
i.e. {(1, 2, 1, 0), (0, 0, 1, 1), (1, 0, 0, 0), (0, 1, 0, 0)} is linearly independent.
It is a basis of ℝ⁴ since it has 4 elements.
Answers:
1) (i) 1 (ii) 1 (iii) 1 (iv) 1 (v) 1 (vi) 1 (vii) 2
2) {(1, −1), (1, 0)}, or any set containing (1, −1) and an element of ℝ² which is not a multiple of (1, −1)
3) {(1, 0, 2), (0, 1, 2), (0, 0, 1)}, or any set containing (1, 0, 2), (0, 1, 2) and an element of ℝ³ which is not a linear combination of (1, 0, 2) and (0, 1, 2)
Let B be the matrix obtained from A by multiplying the i-th row of A by a non-zero scalar λ:
B = [ a11 … a1n ; … ; λ·ai1 … λ·ain ; … ; am1 … amn ],  λ ≠ 0.
The columns of B are B1, B2, …, Bn. Suppose α1B1 + α2B2 + … + αnBn = 0. Looking at each row, this says
α1 aj1 + α2 aj2 + … + αn ajn = 0 for j ≠ i, and λ(α1 ai1 + α2 ai2 + … + αn ain) = 0.
Since λ ≠ 0,
α1 ai1 + α2 ai2 + … + αn ain = 0 as well, so
α1 aj1 + α2 aj2 + … + αn ajn = 0 for j = 1, 2, …, m,
i.e. α1A1 + α2A2 + … + αnAn = 0, where A1, A2, …, An are the columns of A.
Thus the columns of B satisfy exactly the same linear relations as the columns of A, and
column rank of B = column rank of A.
By elementary operations, A is equivalent to a matrix of the form
[ 1 0 … 0 … 0 ]
[ 0 1 … 0 … 0 ]
[ 0 0 … 1 … 0 ]   (r ones on the diagonal, r ≥ 0)
[ 0 0 … 0 … 0 ]
If a11 ≠ 0, then A is equivalent to a matrix of the form
[ a11 0 0 … 0 ]
[ 0 a22 a23 … a2n ]
[ … ]
[ 0 am2 am3 … amn ]
Let A* = [ a22 a23 … a2n ; … ; am2 am3 … amn ]; A* is an (m − 1) × (n − 1) matrix.
By repeating this process we get A equivalent to
[ a11 0 … 0 … 0 ]
[ 0 a22 … 0 … 0 ]
[ 0 0 … arr … 0 ]
[ 0 0 … 0 … 0 ]
Example 11: Let A = [ 1 2 −1 3 ; −3 −6 3 −2 ].
Column space of A = L({(1, −3), (2, −6), (−1, 3), (3, −2)}) ⊆ ℝ².
Consider the set {(1, −3), (2, −6), (−1, 3), (3, −2)}. Since (2, −6) = 2(1, −3) and (−1, 3) = −(1, −3),
L({(1, −3), (3, −2)}) = L({(1, −3), (2, −6), (−1, 3), (3, −2)}), and {(1, −3), (3, −2)} is linearly independent.
Column rank of A = 2,
so rank A = 2.
Example 12: We reduce A = [ 1 1 0 −1 ; 3 2 1 1 ; 1 0 1 3 ] to echelon form by column operations.
By A2 + (−1)A1 and A4 + A1 on A, we get
A ~ [ 1 0 0 0 ; 3 −1 1 4 ; 1 −1 1 4 ]
By (−1)A2, we get
A ~ [ 1 0 0 0 ; 3 1 1 4 ; 1 1 1 4 ]
By A3 + (−1)A2 and A4 + (−4)A2, we get
A ~ [ 1 0 0 0 ; 3 1 0 0 ; 1 1 0 0 ]
Hence rank A = 2.
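The echelon computation can be double-checked with a library rank routine (using the signs of A as recovered from the column operations above):

```python
import numpy as np

# Matrix from Example 12; its rank agrees with the echelon computation.
A = np.array([[1., 1., 0., -1.],
              [3., 2., 1., 1.],
              [1., 0., 1., 3.]])
assert np.linalg.matrix_rank(A) == 2
```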
Find the rank of each of the following matrices:
(i) A = [ 2 1 ; 7 3 ; 6 1 ]
(ii) A = [ 1 3 ; 0 2 ; 5 1 ; 2 3 ]
(iii) A = [ 1 2 3 ; 2 1 0 ; 2 1 3 ; 1 4 2 ]
Reduce the following to echelon form:
(i) [ 1 1 2 ; 0 1 5 ; 2 1 5 ; 1 1 3 ]
(ii) [ 2 1 10 ; 3 51 1 ]
Answers:
6.5 SUMMARY
In this unit we have defined the row rank, the column rank, and the rank of a matrix. The rank of the matrix is the common value of the row rank, which is the same as the column rank.
Theory:
Problems:
2 3 1 1 0 1 1 1
2) = (-7) + 11 + 14 + (-5)
4 7 1 1 1 0 0 0
3) (i) 1 (ii) 1 (iii) 1 (iv) 2 (v) 2
4) W1 ∩ W2 = {(x, y, z) : x + z = 0, y = 0}, so dim(W1 ∩ W2) = 1
5) dim W1 = 2, dim W2 = 2, dim(W1 ∩ W2) = 1, dim(W1 + W2) = 3
7) A basis of ℝ³ containing (1, 0, 2) is {(1, 0, 0), (0, 1, 0), (1, 0, 2)}
Unit Structure:
7.0 Objectives
7.1 Introduction
7.2 Inner product
7.3 Norm of a vector
7.4 Summary
7.5 Unit End Exercise
7.0 OBJECTIVES
7. 1 INTRODUCTION
iii) For x = (x1, x2, …, xn), y = (y1, y2, …, yn) ∈ ℝⁿ, the dot product is
x · y = x1y1 + x2y2 + … + xnyn.
This example motivates the following definition. An inner product on a real vector space V is a function < , > : V × V → ℝ such that, for x, y, z ∈ V and α ∈ ℝ:
(i) <x, x> ≥ 0, and <x, x> = 0 if and only if x = 0
(ii) <x, y> = <y, x>
(iii) <αx, y> = α<x, y>
(iv) <x + y, z> = <x, z> + <y, z>
Example 1: Let V = ℝ².
For x = (x1, x2), y = (y1, y2), define <x, y> = 2x1y1 + 5x2y2 ……………(*)
Let x, y, z ∈ ℝ², x = (x1, x2), y = (y1, y2), z = (z1, z2).
(i) <x, x> = 2x1x1 + 5x2x2 = 2x1² + 5x2² ≥ 0,
and <x, x> = 0 ⟺ x1 = 0, x2 = 0 ⟺ x = 0.
The remaining axioms are verified similarly.
Hence ℝ² is an inner product space under the inner product < , > defined in (*).
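The four axioms can be spot-checked numerically for the weighted inner product of Example 1 (sample vectors chosen for illustration):

```python
import numpy as np

# The inner product of Example 1: <x, y> = 2*x1*y1 + 5*x2*y2.
def ip(x, y):
    return 2 * x[0] * y[0] + 5 * x[1] * y[1]

x = np.array([3., -1.])
y = np.array([2., 4.])
z = np.array([1., 1.])
a = 2.5

assert ip(x, x) > 0                                   # (i) positivity
assert ip(x, y) == ip(y, x)                           # (ii) symmetry
assert np.isclose(ip(a * x, y), a * ip(x, y))         # (iii) homogeneity
assert np.isclose(ip(x + z, y), ip(x, y) + ip(z, y))  # (iv) additivity
```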
Example 2: Let V = C[a, b], the vector space of continuous real-valued functions on [a, b]. For f, g ∈ C[a, b], define <f, g> = ∫_a^b f(t) g(t) dt. For f, g, h ∈ C[a, b] and α ∈ ℝ:
(i) <f, f> = ∫_a^b f(t)² dt ≥ 0. If f ≠ 0, then f(t0) ≠ 0 for some t0, and by continuity f(t)² > 0 on a subinterval around t0, so
∫_a^b f(t)² dt > 0,
a contradiction to <f, f> = 0.
Hence <f, f> = 0 ⇒ f = 0, and clearly f = 0 ⇒ <f, f> = 0.
(ii) <f, g> = ∫_a^b f(t) g(t) dt = ∫_a^b g(t) f(t) dt = <g, f>
(iii) <αf, g> = ∫_a^b α f(t) g(t) dt = α ∫_a^b f(t) g(t) dt = α<f, g>
(iv) <f + g, h> = ∫_a^b (f(t) + g(t)) h(t) dt
= ∫_a^b f(t) h(t) dt + ∫_a^b g(t) h(t) dt
= <f, h> + <g, h>
Thus C[a, b] is an inner product space.
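The integral inner product can be approximated numerically. This is only a sketch (a trapezoidal-rule approximation on a fine grid, not the exact integral), using sample functions f(t) = t and g(t) = t²:

```python
import numpy as np

# Approximate <f, g> = integral_a^b f(t) g(t) dt by the trapezoidal rule.
def ip(f, g, a=0.0, b=1.0, n=100_001):
    t = np.linspace(a, b, n)
    h = f(t) * g(t)
    return float(np.sum((h[:-1] + h[1:]) * (t[1] - t[0]) / 2))

f = lambda t: t
g = lambda t: t ** 2

# integral_0^1 t^3 dt = 1/4, and symmetry <f, g> = <g, f> holds.
assert np.isclose(ip(f, g), 0.25, atol=1e-8)
assert np.isclose(ip(f, g), ip(g, f))
```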
Example 3: Let V = ℂ, the vector space of complex numbers regarded as a real vector space.
Let z, w ∈ ℂ. Define <z, w> = Re(z·w̄) (the real part of the complex number z·w̄).
For z, w, t ∈ ℂ:
(i) If z = a + ib, then z·z̄ = a² + b², so
<z, z> = Re(z·z̄) = a² + b² ≥ 0, and
<z, z> = 0 ⇒ Re(z·z̄) = 0
⇒ a² + b² = 0
⇒ a = 0, b = 0
⇒ z = 0.
(ii) If z = a + ib and w = c + id, then z̄ = a − ib, w̄ = c − id, and
<z, w> = Re(z·w̄) = ac + bd,
<w, z> = Re(w·z̄) = ac + bd,
so <z, w> = <w, z>.
(iv) <z + w, t> = Re((z + w)·t̄)
= Re(z·t̄) + Re(w·t̄)
= <z, t> + <w, t>
Example 4: Let V = M2(ℝ), with <A, B> = tr(A·Bᵗ). Let A = [ a1 a2 ; a3 a4 ] and B = [ b1 b2 ; b3 b4 ]. Then
Bᵗ = [ b1 b3 ; b2 b4 ] and A·Bᵗ = [ a1b1 + a2b2   a1b3 + a2b4 ; a3b1 + a4b2   a3b3 + a4b4 ],
so tr(A·Bᵗ) = a1b1 + a2b2 + a3b3 + a4b4, i.e.
<A, B> = a1b1 + a2b2 + a3b3 + a4b4.
For A, B, C ∈ M2 the axioms follow; in particular
<A, A> = a1² + a2² + a3² + a4² ≥ 0, and <A, A> = 0 ⇒ a1² + a2² + a3² + a4² = 0 ⇒ A = 0.
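The trace inner product on M2 is just the sum of entrywise products, which is easy to verify numerically:

```python
import numpy as np

# Trace inner product on 2x2 matrices: <A, B> = tr(A B^t).
def ip(A, B):
    return float(np.trace(A @ B.T))

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])

assert ip(A, B) == float(np.sum(A * B))   # a1b1 + a2b2 + a3b3 + a4b4
assert ip(A, B) == ip(B, A)               # symmetry
assert ip(A, A) > 0                       # positivity for A != 0
```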
Example 5: Suppose on P2[x] we define <p, q> = p(0)q(0) + p(1)q(1). Does this definition give an inner product, i.e. is P2[x] an inner product space under it?
Let p(x) = x − x². Clearly p(x) ∈ P2[x] and p ≠ 0,
yet <p, p> = p(0)² + p(1)² = 0, so axiom (i) fails and this is not an inner product.
7.3 NORM OF A VECTOR
Definition: Let V be an inner product space. For x ∈ V, the norm of x is ‖x‖ = √<x, x>.
Example 6: We know that ℝⁿ is an inner product space with the usual dot product.
(i) The usual inner product in ℝ² is <x, y> = x1y1 + x2y2, for x = (x1, x2), y = (y1, y2).
‖x‖ = √<x, x> = √(x1² + x2²)
‖(3, 4)‖ = √(3² + 4²) = 5
(ii) Under the inner product given in Example 1, <x, y> = 2x1y1 + 5x2y2,
‖x‖ = √<x, x> = √(2x1² + 5x2²)
‖(3, 4)‖ = √(2(3²) + 5(4²)) = √98
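The two norms of Example 6, evaluated on the same vector (3, 4):

```python
import numpy as np

# Norms induced by the two inner products of Example 6.
def norm_usual(x):
    return float(np.sqrt(x[0] ** 2 + x[1] ** 2))

def norm_weighted(x):          # from <x, y> = 2*x1*y1 + 5*x2*y2
    return float(np.sqrt(2 * x[0] ** 2 + 5 * x[1] ** 2))

x = (3.0, 4.0)
assert norm_usual(x) == 5.0
assert np.isclose(norm_weighted(x), np.sqrt(98))
```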
Theorem 1: Let V be an inner product space. The norm function has the following properties. For x ∈ V and α ∈ ℝ:
(i) ‖x‖ ≥ 0, and ‖x‖ = 0 ⟺ x = 0
(ii) ‖αx‖ = |α| ‖x‖
Proof: For x ∈ V,
(ii) ‖αx‖ = √<αx, αx> = √(α² <x, x>) = |α| √<x, x> = |α| ‖x‖.
If x ≠ 0, then since ‖x/‖x‖‖ = ‖x‖/‖x‖ = 1, the vector x/‖x‖ is said to be a unit vector in the direction of x.
Theorem 2 (Cauchy–Schwarz inequality): For x, y ∈ V, |<x, y>| ≤ ‖x‖ ‖y‖.
Proof: Let x, y ∈ V.
If y = 0, then <x, y> = <x, 0> = 0 and ‖x‖ ‖y‖ = ‖x‖ · 0 = 0, so equality holds.
Suppose y ≠ 0. Define f : ℝ → ℝ by f(t) = ‖x − ty‖².
Then f(t) ≥ 0 for all t ∈ ℝ, and expanding,
f(t) = ‖x‖² − 2t<x, y> + t²‖y‖².
Taking t = <x, y>/‖y‖² gives 0 ≤ ‖x‖² − <x, y>²/‖y‖², i.e.
|<x, y>| ≤ ‖x‖ ‖y‖.
Theorem 3 (Triangle inequality): For x, y ∈ V, ‖x + y‖ ≤ ‖x‖ + ‖y‖.
Proof: Consider ‖x + y‖² = <x + y, x + y>
= <x, x> + <y, x> + <x, y> + <y, y>
= ‖x‖² + 2<x, y> + ‖y‖²
≤ ‖x‖² + 2|<x, y>| + ‖y‖²
≤ ‖x‖² + 2‖x‖ ‖y‖ + ‖y‖²   (by the Cauchy–Schwarz inequality)
= (‖x‖ + ‖y‖)²
Hence ‖x + y‖ ≤ ‖x‖ + ‖y‖.
Corollary: For x, y ∈ V, | ‖x‖ − ‖y‖ | ≤ ‖x − y‖.
Proof: For x, y ∈ V, x = (x − y) + y.
By the triangle inequality,
‖x‖ = ‖(x − y) + y‖ ≤ ‖x − y‖ + ‖y‖
⇒ ‖x‖ − ‖y‖ ≤ ‖x − y‖ ……………(i)
Similarly, y = (y − x) + x, so
‖y‖ = ‖(y − x) + x‖ ≤ ‖y − x‖ + ‖x‖
⇒ ‖y‖ − ‖x‖ ≤ ‖y − x‖ = ‖x − y‖   (since ‖y − x‖ = ‖−(x − y)‖ = ‖x − y‖)
⇒ −(‖x‖ − ‖y‖) ≤ ‖x − y‖ ………….(ii)
From (i) and (ii), | ‖x‖ − ‖y‖ | ≤ ‖x − y‖.
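The three inequalities just proved can be spot-checked on random vectors under the usual dot product (a numerical check, not a proof):

```python
import numpy as np

# Cauchy-Schwarz, triangle, and reverse-triangle inequalities on random data.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=5)
    y = rng.normal(size=5)
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    assert abs(np.dot(x, y)) <= nx * ny + 1e-12            # Cauchy-Schwarz
    assert np.linalg.norm(x + y) <= nx + ny + 1e-12        # triangle
    assert abs(nx - ny) <= np.linalg.norm(x - y) + 1e-12   # reverse triangle
```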
Answers:
1) 7
2) 6/5
7.4 SUMMARY
(ii) (ℝ², < , >), where <x, y> = x1y1 − x1y2 − x2y1 + 3x2y2, x = (x1, x2), y = (y1, y2)
Answers:
1) 17
8
ORTHOGONALITY
Unit Structure:
8. 0 Objectives
8.1 Introduction
8.2 Angle between non-zero vectors
8.3 Orthogonal projection onto a line
8.4 Orthogonal vectors
8.5 Orthogonal and Orthonormal sets
8.6 Gram Schmidt Orthogonalisation Process
8.7 Orthogonal Complement of a set
8.8 Summary
8.9 Unit End Exercise
8. 0 OBJECTIVES
8. 1 INTRODUCTION
In this unit we see how to define the angle between two non-zero vectors. Once the angle is defined, we are able to study perpendicular vectors.
8.2 ANGLE BETWEEN NON-ZERO VECTORS
By the Cauchy–Schwarz inequality, for non-zero x, y ∈ V,
−1 ≤ <x, y>/(‖x‖ ‖y‖) ≤ 1.
Hence there exists a unique θ in [0, π] such that cos θ = <x, y>/(‖x‖ ‖y‖).
This θ is known as the angle between the vectors x and y.
Example 1: The angle between the vectors (1, 1, −1) and (0, −1, −1) in ℝ³:
<x, y> = 1·0 + 1·(−1) + (−1)(−1) = 0, so cos θ = 0 and θ = π/2.
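The angle formula translates directly into code (clipping the cosine guards against floating-point round-off):

```python
import numpy as np

# Angle between non-zero vectors: cos(theta) = <x, y> / (||x|| ||y||).
def angle(x, y):
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

x = np.array([1., 1., -1.])
y = np.array([0., -1., -1.])
assert np.isclose(angle(x, y), np.pi / 2)   # Example 1: orthogonal vectors
```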
8.3 ORTHOGONAL VECTORS
Definition: Vectors x and y in an inner product space V are said to be orthogonal if <x, y> = 0.
Note: We can verify that the vectors (1, 1, −1) and (0, −1, −1) in ℝ³ are orthogonal with respect to the usual inner product, i.e. the dot product.
Theorem (Pythagoras): For x, y ∈ V, ‖x + y‖² = ‖x‖² + ‖y‖² if and only if x and y are orthogonal.
Proof: ‖x + y‖² = <x + y, x + y>
= <x, x> + <y, x> + <x, y> + <y, y>
= ‖x‖² + 2<x, y> + ‖y‖²
Thus ‖x + y‖² = ‖x‖² + ‖y‖² if and only if <x, y> = 0.
Hence ‖x + y‖² = ‖x‖² + ‖y‖² if and only if x and y are orthogonal vectors in V.
[Figure: parallelogram OACB with sides x and y and diagonal x + y]
So, by vector addition, the fourth vertex is the end point of the vector x + y.
Now ‖x + y‖² = ‖x‖² + 2<x, y> + ‖y‖², and
‖x − y‖² = ‖x‖² − 2<x, y> + ‖y‖².
Adding, ‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖² (the parallelogram law).
[Figure: the orthogonal projection (x · u)u of x on the unit vector u]
While studying vectors in the plane ℝ², we have seen that if x is any vector and u is a unit vector, then the orthogonal projection of x on u is (x · u)u = (|x| cos θ)u, where θ is the angle between x and u.
Now we generalize this to vectors of an inner product space V: for a unit vector u and v ∈ V, the orthogonal projection of v on u is Pu(v) = <v, u>u, and it is the point of the line L({u}) nearest to v:
‖v − αu‖ ≥ ‖v − Pu(v)‖ for all α in ℝ, i.e.
‖v − Pu(v)‖ ≤ ‖v − αu‖ for all α in ℝ.
Note: For any non-zero vector w in V, Pw(v) = (<v, w>/<w, w>) w.
1) Find the orthogonal projection of (1, 1) along (1, −2) with respect to the usual inner product. Ans: (−1/5, 2/5)
2) Find the shortest distance of the point (1, 1, 1) from (3, 0, 0).
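Exercise 1 can be checked with a two-line projection routine; the residual v − Pw(v) is always orthogonal to w:

```python
import numpy as np

# Orthogonal projection of v along w: P_w(v) = (<v, w> / <w, w>) * w.
def proj(v, w):
    return (np.dot(v, w) / np.dot(w, w)) * w

v = np.array([1., 1.])
w = np.array([1., -2.])

assert np.allclose(proj(v, w), [-0.2, 0.4])         # exercise 1: (-1/5, 2/5)
assert np.isclose(np.dot(v - proj(v, w), w), 0.0)   # residual is orthogonal to w
```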
Since <(1, 0, 0), (0, 0, 1)> = <(1, 0, 0), (0, 1, 0)> = <(0, 1, 0), (0, 0, 1)> = 0, the set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is an orthogonal set and an orthogonal basis of V = ℝ³; since each of its vectors also has norm 1, S is an orthonormal set and an orthonormal basis of V.
Theorem: Let V be a finite dimensional inner product space and {v1, v2, …, vn} be an orthonormal basis of V. Then, for x ∈ V,
x = <x, v1>v1 + <x, v2>v2 + … + <x, vn>vn = Σ_{i=1}^n <x, vi> vi,
and ‖x‖² = <x, v1>² + <x, v2>² + … + <x, vn>² = Σ_{i=1}^n <x, vi>².
Proof of the second identity:
‖x‖² = <x, x> = <Σ_i <x, vi> vi, Σ_j <x, vj> vj>
= Σ_i Σ_j <x, vi><x, vj><vi, vj>
= Σ_i <x, vi><x, vi><vi, vi>   (since <vi, vj> = 0 if i ≠ j)
= Σ_i <x, vi>²   (since <vi, vi> = 1)
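Both identities can be checked with a non-trivial orthonormal basis of ℝ³ (a rotated basis chosen for illustration):

```python
import numpy as np

# Coordinates w.r.t. an orthonormal basis are the inner products <x, v_i>;
# ||x||^2 is the sum of their squares.
s = 1 / np.sqrt(2)
V = np.array([[s, s, 0.],
              [s, -s, 0.],
              [0., 0., 1.]])          # rows form an orthonormal basis
x = np.array([2., -1., 3.])

coords = V @ x                        # <x, v_i> for each i
assert np.allclose(V.T @ coords, x)                  # x = sum <x, v_i> v_i
assert np.isclose(np.sum(coords ** 2), np.dot(x, x)) # ||x||^2 = sum <x, v_i>^2
```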
Theorem: An orthogonal set S = {v1, v2, …, vn} of non-zero vectors in V is linearly independent.
Proof: Suppose Σ_{i=1}^n ai vi = 0. For each j,
<Σ_i ai vi, vj> = Σ_i ai <vi, vj> = aj <vj, vj>   (since <vi, vj> = 0 for i ≠ j)
But Σ_i ai vi = 0, so
<0, vj> = 0, giving aj <vj, vj> = 0.
Since vj ≠ 0, <vj, vj> ≠ 0, so aj = 0 for each j.
S is linearly independent.
Note: We know that if u ∈ V, u ≠ 0, then u/‖u‖ is the unit vector along u.
Let {u1, u2, …, uk} be an orthogonal set of non-zero vectors in V. Then
‖u1/‖u1‖‖ = ‖u2/‖u2‖‖ = … = ‖uk/‖uk‖‖ = 1.
Also, <ui/‖ui‖, uj/‖uj‖> = <ui, uj>/(‖ui‖ ‖uj‖) = 0 if i ≠ j,
and <ui/‖ui‖, ui/‖ui‖> = <ui, ui>/‖ui‖² = 1.
Hence {u1/‖u1‖, u2/‖u2‖, …, uk/‖uk‖} is an orthonormal set.
For k = 2, we take u2 = v2 − (<v2, u1>/<u1, u1>) u1.
Then <u1, u2> = <u1, v2 − (<v2, u1>/<u1, u1>) u1>
= <u1, v2> − (<v2, u1>/<u1, u1>) <u1, u1>
= <u1, v2> − <v2, u1>
= 0.
Now if u2 = 0, then v2 − (<v2, u1>/<u1, u1>) u1 = 0, so
v2 = (<v2, u1>/<u1, u1>) u1 = c·u1 = c·v1   (since u1 = v1),
and {v1, v2} is linearly dependent.
Since {v1, v2, …, vk} is linearly independent, its subset {v1, v2} is linearly independent, a contradiction. Hence u2 ≠ 0.
{u1, u2} is an orthogonal set.
Also, L({u1, u2}) = L({v1, v2}).
In general, u_k = v_k − Σ_{i=1}^{k−1} (<v_k, ui>/<ui, ui>) ui, and for j < k,
<u_k, uj> = <v_k − Σ_{i=1}^{k−1} (<v_k, ui>/<ui, ui>) ui, uj>
= <v_k, uj> − Σ_{i=1}^{k−1} (<v_k, ui>/<ui, ui>) <ui, uj>
= <v_k, uj> − (<v_k, uj>/<uj, uj>) <uj, uj>
= 0.
Note: The Gram–Schmidt process produces an orthogonal set {u1, u2, …, uk} from a linearly independent set {v1, v2, …, vk}, where u1 = v1 and
u_k = v_k − Σ_{i=1}^{k−1} (<v_k, ui>/<ui, ui>) ui   for k = 2, 3, …, n.
From Note 8.4.1, {u1/‖u1‖, u2/‖u2‖, …, un/‖un‖} is an orthonormal basis of V.
Let {v1, v2, v3} = {(0, 1, −1), (1, 2, 1), (1, 0, 1)} …………(i)
u1 = v1 = (0, 1, −1)
u2 = v2 − (<v2, u1>/<u1, u1>) u1 = (1, 2, 1) − (1/2)(0, 1, −1) = (1, 3/2, 3/2)
u3 = v3 − (<v3, u1>/<u1, u1>) u1 − (<v3, u2>/<u2, u2>) u2
= (1, 0, 1) − (−1/2)(0, 1, −1) − (5/11)(1, 3/2, 3/2)
= (1, 1/2, 1/2) − (5/11, 15/22, 15/22)
= (6/11, −2/11, −2/11)
{u1, u2, u3} = {(0, 1, −1), (1, 3/2, 3/2), (6/11, −2/11, −2/11)} is an orthogonal set.
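The worked example above can be reproduced with a short classical Gram–Schmidt routine:

```python
import numpy as np

# Gram-Schmidt: u_k = v_k - sum_i (<v_k, u_i>/<u_i, u_i>) u_i.
def gram_schmidt(vs):
    us = []
    for v in vs:
        u = v - sum((np.dot(v, w) / np.dot(w, w)) * w for w in us)
        us.append(u)
    return us

vs = [np.array([0., 1., -1.]),
      np.array([1., 2., 1.]),
      np.array([1., 0., 1.])]
u1, u2, u3 = gram_schmidt(vs)

assert np.allclose(u2, [1.0, 1.5, 1.5])
assert np.allclose(u3, [6/11, -2/11, -2/11])
assert abs(np.dot(u1, u2)) < 1e-12 and abs(np.dot(u2, u3)) < 1e-12
```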
Let W = {(x, y, z) ∈ ℝ³ : 3x − 2y + z = 0}, with basis S = {(1, 0, −3), (0, 1, 2)}.
Now we find an orthonormal basis of W from S by the Gram–Schmidt process.
u1 = v1 = (1, 0, −3)
u2 = v2 − (<v2, u1>/<u1, u1>) u1
= (0, 1, 2) − (<(0, 1, 2), (1, 0, −3)>/<(1, 0, −3), (1, 0, −3)>) (1, 0, −3)
= (0, 1, 2) − (−6/10)(1, 0, −3)
= (3/5, 1, 1/5)
{(1, 0, −3), (3/5, 1, 1/5)} is an orthogonal basis of W.
Normalizing, ‖(1, 0, −3)‖ = √10 and ‖(3/5, 1, 1/5)‖ = √35/5, so
{ (1, 0, −3)/√10, (3/5, 1, 1/5)/(√35/5) }
= { (1/√10, 0, −3/√10), (3/√35, 5/√35, 1/√35) } is an orthonormal basis of W.
Definition: Let W be a subspace of V. The orthogonal complement of W is W⊥ = {x ∈ V : <x, w> = 0 for all w ∈ W}. W⊥ is a subspace of V:
Let x, y ∈ W⊥ and a, b ∈ ℝ.
Then <x, w> = 0 and <y, w> = 0 for all w in W,
so <ax + by, w> = a<x, w> + b<y, w> = 0 for all w ∈ W, i.e. ax + by ∈ W⊥.
Now let {e1, …, ek} be an orthonormal basis of W and v ∈ V. Put w = Σ_i <v, ei> ei, so w ∈ W.
Let w′ = v − w;
then v = w + w′ with w ∈ W ………….(i)
Claim: w′ ∈ W⊥. For each j,
<w′, ej> = <v, ej> − <v, ej><ej, ej>   (<ei, ej> = 0 for i ≠ j)
= <v, ej> − <v, ej> = 0.
Hence w′ ∈ W⊥,
and v = w + w′ where w ∈ W and w′ ∈ W⊥,
so V = W + W⊥.
If x ∈ W ∩ W⊥, then <x, x> = 0, so x = 0,
hence W ∩ W⊥ = {0}
and V = W ⊕ W⊥.
8. 8 SUMMARY
In this unit we have defined the angle between two vectors using
inner product.
Theory:
1) Define orthogonal set and orthonormal set in an inner product space
2) Define orthogonal basis and orthonormal basis of an inner product
space.
3) Define the orthogonal projection of a vector along a unit vector.
4) Define the orthogonal projection of a vector along any vector.
5) How to obtain an orthogonal set from a linearly independent set in an
inner product space?
6) Define an orthogonal complement of a set.
Problems:
1) In ℝ³, with respect to the usual inner product, convert the linearly independent set {(1, 5, 7), (−1, 0, 2)} to an orthogonal set using the Gram–Schmidt process.
Ans : , 62 75 ,65 75 ,58 75
Ans : 0, 1
2
, 1
2 , 2 1 1
3 1, 2 , 2 ,
1
2 3 3
1 , 1 , 4
3
4) Consider the inner product space M2 with respect to inner product
< A, B > = tr( ABt). Transform the following linearly independent set in to
orthogonal basis using Gram Schmidt process.
1 1 1 0 0 1 1 0
i) {
, , 0 1 , 0 1 }
0 0 1 0
1 0 1 2 1 2 13 1 1 2 1 2
Ans : , , 1
3
, 1
1 0 1 0 3 1 2 1 2
1 1 1 0 1 0 1 0
ii) { , , , }
0 1 1 1 0 1 0 0
1 0 13 2 3 15 2 5 1 2 0
Ans : , 1 , 2 1 , 1
1 1 1
3 5 5 0 2
1 2 0 1
5) Find the projection of along
2
in the inner product
1 3 1
0 1
space given in problem 4) Ans :
1 2
6) Find the cosine of the angle between (1, −3, 2) and (2, 1, 5) in ℝ³ with respect to the usual inner product. Ans: 9/(√14 √30)
2 1 1 1
7) Find the cosine of angle between and in M2
3 1 2 3
with respect to inner product given in problem 4). Ans :
9
LINEAR TRANSFORMATIONS
Unit Structure :
9.0 Objectives
9.1 Introduction
9.2 Rank Nullity theorem
9.3 The space L(U,V) of all linear transformations from U to V
9.4 Summary
9.0 OBJECTIVES :
9.1 INTRODUCTION :
Say we have the vector (1, 0)ᵗ in ℝ², and we rotate it through 90 degrees (anticlockwise) to obtain the vector (0, 1)ᵗ. We can also stretch a vector U to make it 2U; for example, (2, 3)ᵗ becomes (4, 6)ᵗ. Or, if we look at the projection of a vector onto the x-axis, we extract its x-component, e.g. (2, 3)ᵗ goes to (2, 0)ᵗ. These are all examples of mappings between vectors, and all are linear transformations. A linear transformation is an important concept in mathematics, because many real-world phenomena can be approximated by linear models.
Examples :
[Figure: the projection of (x, y) onto the X-axis, giving (x, 0)]
i) Let T(x, y) = (x, 0). For (x, y), (x1, y1) ∈ ℝ²,
T((x, y) + (x1, y1)) = T(x + x1, y + y1) = (x + x1, 0)
= (x, 0) + (x1, 0)
= T(x, y) + T(x1, y1)
ii) T(α(x, y)) = T(αx, αy) = (αx, 0)
= α(x, 0) = α·T(x, y)
Clearly, T preserves the null vector:
T(0, 0) = (0, 0).
T f T g
(3) Let v1, v2, ..., vn be a basis of a finite n – dimensional vector
space V over . Define a map T : V n by associating to each
element α ∈ V, its unique coordinate vector relative to this basis of
V.
T Ou Ou T Ou T Ou
T Ou Ov T Ou T Ou
Ou Ou Ou
Exercises :
ii) T : ℝ² → ℝ²
T(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ)
T u1 . T u1
Similarly for 1 ,
T 1. u 1. u1 1. T u 1. T u1
T u u1 T u T u1
this implies that T is a linear transformation. The next theorem gives you
the important properties of a linear transformation.
(ii) T(−u) = −T(u)
(iii) T(u − u1) = T(u) − T(u1)
Proof:
(i) 0ᵤ + 0ᵤ = 0ᵤ
⇒ T(0ᵤ) = T(0ᵤ + 0ᵤ) = T(0ᵤ) + T(0ᵤ) ⇒ T(0ᵤ) = 0ᵥ
(ii) 0ᵤ = u + (−u)
⇒ T(0ᵤ) = T(u) + T(−u)
⇒ 0ᵥ = T(u) + T(−u)
⇒ T(−u) = −T(u)
(iii) T(u − u1) = T(u + (−u1)) = T(u) + T(−u1)
= T(u) − T(u1)   [By (ii)]
Examples :
(1) The projection of a vector (x, y)ᵗ in a plane on the x-axis, given by
T(x, y) = (x, 0).
Ker T = {(x, y) ∈ ℝ² : T(x, y) = (0, 0)}
= {(x, y) ∈ ℝ² : (x, 0) = (0, 0)}
= {(x, y) ∈ ℝ² : x = 0}
= {(0, y) : y ∈ ℝ}
= the Y-axis.
In this example, ker T is a line passing through the origin in the plane.
[Figure: the line Ker T through the origin O in the XY-plane]
Proof:
(i) Let α, β ∈ ℝ and u, u1 ∈ ker T; we need to show that αu + βu1 ∈ ker T.
T(αu + βu1) = αT(u) + βT(u1) = 0, so αu + βu1 ∈ ker T.
(ii) Similarly, Im T = {T(u) : u ∈ U} is a subspace of V.
Given a linear transformation T : U → V, where U and V are real vector spaces, ker T helps us to check whether T is a one-to-one linear transformation or not, as stated in the following theorem.
Theorem: T is injective if and only if ker T = {0}.
Proof: Suppose T is injective. Let u ∈ ker T, so T(u) = 0.
Then T(u) = T(0), and injectivity of T implies that u = 0.
Hence ker T = {0}.
Conversely, suppose ker T = {0}, and let T(u) = T(u1) for u, u1 ∈ U.
Then T(u) − T(u1) = 0, so T(u − u1) = 0,
i.e. u − u1 ∈ ker T = {0}, so u − u1 = 0,
and u = u1. This shows that T is an injective map.
Suppose that B = {u1, u2, …, um} is a basis for U and we know the
values of T on u1, u2, …, um; then we can determine the action of the linear
transformation T on any vector u ∈ U.
T(x, y, z) = xT(1, 0, 0) + yT(0, 1, 0) + zT(0, 0, 1)
= x(0, 0, 1) + y(1, 0, 0) + z(0, 1, 0)
= (0, 0, x) + (y, 0, 0) + (0, z, 0) = (y, z, x)
Thus T(x, y, z) = (y, z, x) for (x, y, z) ∈ ℝ³.
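The same principle in matrix form: the images of the basis vectors become the columns of the matrix of T, and the map (x, y, z) → (y, z, x) falls out automatically.

```python
import numpy as np

# Columns of M are T(e1) = (0,0,1), T(e2) = (1,0,0), T(e3) = (0,1,0).
M = np.column_stack([[0., 0., 1.], [1., 0., 0.], [0., 1., 0.]])

v = np.array([2., 5., -7.])
assert np.allclose(M @ v, [5., -7., 2.])   # (x, y, z) -> (y, z, x)
```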
Theorem: Let U and V be vector spaces, u1, u2, …, un a basis of U, and let v1, v2, …, vn be any vectors in V. Then there exists a unique linear transformation T : U → V such that T(ui) = vi for all i, 1 ≤ i ≤ n.
Proof: For u = α1u1 + α2u2 + … + αnun, define
T(u) = α1v1 + α2v2 + … + αnvn.
For c, d ∈ ℝ and u, u′ ∈ U, with u = α1u1 + … + αnun and u′ = β1u1 + … + βnun,
cu + du′ = (cα1 + dβ1)u1 + … + (cαn + dβn)un,
so T(cu + du′) = (cα1 + dβ1)v1 + … + (cαn + dβn)vn
= c(α1v1 + … + αnvn) + d(β1v1 + … + βnvn)
⇒ T(cu + du′) = cT(u) + dT(u′).
T is a linear transformation. Also,
T(ui) = T(0·u1 + … + 1·ui + … + 0·un)
= 0·v1 + … + 1·vi + … + 0·vn = vi,
so T(ui) = vi for all i, 1 ≤ i ≤ n.
T u
M T on U
Hint: To find ker T, solve the system of equations
x − y + 2z = 0
2x + y = 0
−x − 2y + 2z = 0
Using matrix form, the matrix [ 1 −1 2 ; 2 1 0 ; −1 −2 2 ] reduces to the row reduced echelon form
[ 1 −1 2 ; 0 1 −4/3 ; 0 0 0 ],
giving y = (4/3)z and x = −(2/3)z.
ker T = ⟨(−2, 4, 3)⟩
To find Im T, see that
T(x, y, z) = (2x + y)(−1, 1, −2) + (3x/2 + z)(2, 0, 2),
because (1, 2, −1) = 2(−1, 1, −2) + (3/2)(2, 0, 2).
Im T = ⟨(−1, 1, −2), (2, 0, 2)⟩;
in this case Im T is a two-dimensional subspace of ℝ³ generated by the
vectors (−1, 1, −2) and (2, 0, 2).
5. Let T : ℝ⁴ → ℝ³ be defined by
T(x, y, z, w) = (x − y + z + w, x + 2z − w, x + z + w). Find the dimensions
of Im T and ker T.
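Exercise 5 can be answered by a rank computation: dim Im T is the rank of the matrix of T, and by the rank-nullity theorem dim ker T = 4 − rank.

```python
import numpy as np

# Matrix of T(x, y, z, w) = (x - y + z + w, x + 2z - w, x + z + w).
A = np.array([[1., -1., 1., 1.],
              [1., 0., 2., -1.],
              [1., 0., 1., 1.]])

rank = np.linalg.matrix_rank(A)
assert rank == 3          # dim Im T
assert 4 - rank == 1      # dim ker T (rank-nullity)
```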
Proof: If Im T = {0}, then ker T = U and dim U = dim ker T + dim Im T.
Otherwise, let {v1, …, vm} be a basis of Im T, with vi = T(ui), and {k1, …, kp} a basis of ker T.
Let u ∈ U. Then T(u) ∈ Im T, hence there are scalars α1, α2, …, αm such that
T(u) = α1v1 + α2v2 + … + αmvm
= α1T(u1) + α2T(u2) + … + αmT(um)
= T(α1u1 + α2u2 + … + αmum)
So u − (α1u1 + α2u2 + … + αmum) ∈ ker T. This means that there are scalars β1, β2, …, βp such that
u − (α1u1 + α2u2 + … + αmum) = β1k1 + β2k2 + … + βpkp.
In other words,
u = α1u1 + α2u2 + … + αmum + β1k1 + β2k2 + … + βpkp,
so {u1, …, um, k1, …, kp} spans U. If
α1u1 + α2u2 + … + αmum + β1k1 + β2k2 + … + βpkp = 0, then
T(α1u1 + α2u2 + … + αmum + β1k1 + β2k2 + … + βpkp) = T(0) = 0
⇒ α1v1 + … + αmvm = 0   (since T(kj) = 0)
⇒ α1 = α2 = … = αm = 0,
and then β1k1 + … + βpkp = 0 gives β1 = … = βp = 0.
Hence {u1, …, um, k1, …, kp} is a basis of U, and dim U = m + p = dim Im T + dim ker T.
Corollary: If dim U = dim V, then T is surjective (Im T = V) if and only if T is injective (ker T = {0}).
Note: If dim U = dim V, the above corollary implies that a linear
transformation T : U → V is bijective as soon as it is injective (equivalently, surjective).
For S, T ∈ L(U, V) and α ∈ ℝ, define (S + T)(u) = S(u) + T(u) and (αS)(u) = αS(u) for u ∈ U.
Then S + T is linear:
(S + T)(u + u1) = S(u + u1) + T(u + u1)
= S(u) + S(u1) + T(u) + T(u1)
= (S + T)(u) + (S + T)(u1) for any u, u1 ∈ U.
Similarly, (S + T)(αu) = α(S + T)(u) for u ∈ U, α ∈ ℝ.
αS is linear:
(αS)(u + u1) = αS(u + u1) = α(S(u) + S(u1))
= αS(u) + αS(u1)
= (αS)(u) + (αS)(u1).
Similarly,
(αS)(cu) = c(αS)(u) for u ∈ U, c ∈ ℝ.
The composite S∘T is linear:
(S∘T)(u + u1) = S(T(u + u1))
= S(T(u) + T(u1))   (T is linear)
= S(T(u)) + S(T(u1))   (S is linear)
= (S∘T)(u) + (S∘T)(u1)
Example :
S a 1 b 2 S a.Tu1 b.Tu2
S T au1 bu2
S is a linear transformation..
Example: F : ℝ³ → ℝ³ defined by the system
x + y + 2z = 0
x + 2y + z = 0
2x + 2y + 3z = 0
This system can be written as [ 1 1 2 ; 1 2 1 ; 2 2 3 ] (x, y, z)ᵗ = (0, 0, 0)ᵗ.
The coefficient matrix [ 1 1 2 ; 1 2 1 ; 2 2 3 ] has its row reduced echelon form
[ 1 0 0 ; 0 1 0 ; 0 0 1 ].
The system is equivalent to [ 1 0 0 ; 0 1 0 ; 0 0 1 ] (x, y, z)ᵗ = (0, 0, 0)ᵗ,
i.e. x = y = z = 0.
(Hint: Solve the system
x + y − z = 0
x + 2y + z = 0
3x + 5y + z = 0
for (x, y, z) ∈ ker T; it gives x = 3z, y = −2z.)
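The kernel in the hint can be computed numerically as a null space (using the system as reconstructed above; the right singular vectors of the SVD with zero singular value span ker A):

```python
import numpy as np

# Null space of the coefficient matrix via the SVD.
A = np.array([[1., 1., -1.],
              [1., 2., 1.],
              [3., 5., 1.]])

_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
v = Vt[rank]                       # a basis vector of ker A (here rank = 2)

assert rank == 2
assert np.allclose(A @ v, 0.0, atol=1e-10)
# v is parallel to (3, -2, 1), i.e. x = 3z, y = -2z.
assert np.allclose(np.cross(v, np.array([3., -2., 1.])), 0.0, atol=1e-10)
```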
9) Let A,B be linear maps of V into itself. If ker A 0 ker B , show
that ker AOB 0 .
Define E^(p,q) : U → V, for 1 ≤ p ≤ m, 1 ≤ q ≤ n, as follows:
E^(p,q)(uj) = βp if j = q, 0 if j ≠ q, i.e. E^(p,q)(uj) = δ_jq βp.
We claim that {E^(p,q) : 1 ≤ p ≤ m, 1 ≤ q ≤ n} forms a basis for L(U, V).
Let T ∈ L(U, V) with T(uj) = Σ_{p=1}^m a_pj βp. We show that T = Σ_{p=1}^m Σ_{q=1}^n a_pq E^(p,q).
Consider (Σ_p Σ_q a_pq E^(p,q))(uj)
= Σ_p Σ_q a_pq E^(p,q)(uj)
= Σ_p Σ_q a_pq δ_jq βp
= Σ_p a_pj βp
= T(uj).
Hence T = Σ_p Σ_q a_pq E^(p,q) ∈ Span{E^(p,q) : 1 ≤ p ≤ m, 1 ≤ q ≤ n}, so these transformations span L(U, V).
If the transformation Σ_p Σ_q a_pq E^(p,q) is the zero transformation for scalars a_pq, then (Σ_p Σ_q a_pq E^(p,q))(uj) = 0 for each j, 1 ≤ j ≤ n,
⇒ Σ_{p=1}^m a_pj βp = 0.
But {βp : 1 ≤ p ≤ m} is a linearly independent set,
so a_pj = 0 for all p, j.
This shows that {E^(p,q) : 1 ≤ p ≤ m, 1 ≤ q ≤ n} is a basis for L(U, V), and
dim L(U, V) = mn.
from V into ℝ such that f(aα + bα′) = a f(α) + b f(α′) for all vectors
α, α′ ∈ V and scalars a, b ∈ ℝ.
Examples:
Definition: Let V be a vector space over ℝ. The space L(V, ℝ) (the set of
all linear functionals on V) is called the dual space of V, denoted by V*:
V* = L(V, ℝ).
Theorem: If B = {α1, …, αn} is an ordered basis of V, there is a unique basis
B* = {f1, …, fn} of V* such that fi(αj) = δij (δij = 0 if i ≠ j, 1 if i = j). For each linear functional
f : V → ℝ,
f = Σ_{i=1}^n f(αi) fi,
and for each α ∈ V,
α = Σ_{i=1}^n fi(α) αi.
Proof: We have shown above that there is a unique basis {f1, …, fn} of
V* dual to the basis {α1, …, αn} of V.
If f = Σ_j cj fj, then f(αj) = cj.
Similarly, if α = Σ_{i=1}^n xi αi is a vector in V, then
fj(α) = Σ_{i=1}^n xi fj(αi) = Σ_{i=1}^n xi δij = xj.
Note: The expression α = Σ_{i=1}^n fi(α) αi provides a nice way of
describing what the dual basis is. If B = {α1, …, αn} is an ordered basis
for V and B* = {f1, f2, …, fn} is the dual basis, then fi is precisely the
function assigning to each α ∈ V its i-th coordinate relative to B.
a + b + c = 0
t1·a + t2·b + t3·c = 0
t1²·a + t2²·b + t3²·c = 0
i.e. [ 1 1 1 ; t1 t2 t3 ; t1² t2² t3² ] (a, b, c)ᵗ = (0, 0, 0)ᵗ.
But the matrix [ 1 1 1 ; t1 t2 t3 ; t1² t2² t3² ] is invertible, because t1, t2, t3 are all distinct.
We would like to find the basis of V whose dual is {L1, L2, L3}, i.e. polynomials pj with
Li(pj(x)) = δij,
OR
pj(ti) = δij.
These are the Lagrange polynomials:
p1(x) = (x − t2)(x − t3) / ((t1 − t2)(t1 − t3))
p2(x) = (x − t1)(x − t3) / ((t2 − t1)(t2 − t3))
p3(x) = (x − t1)(x − t2) / ((t3 − t1)(t3 − t2))
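The defining property pj(ti) = δij of the Lagrange polynomials can be verified directly (sample nodes chosen for illustration):

```python
# Lagrange basis polynomials for distinct nodes: p_j(t_i) = delta_ij.
def lagrange_basis(ts):
    def make(j):
        def pj(x):
            val = 1.0
            for i, t in enumerate(ts):
                if i != j:
                    val *= (x - t) / (ts[j] - t)
            return val
        return pj
    return [make(j) for j in range(len(ts))]

ts = [0.0, 1.0, 2.0]
p1, p2, p3 = lagrange_basis(ts)

for j, pj in enumerate([p1, p2, p3]):
    for i, t in enumerate(ts):
        assert abs(pj(t) - (1.0 if i == j else 0.0)) < 1e-12
```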
nullity f = dimV 1
x1 x2 3x3 )
f2 1 0; f2 2 1; f2 3 0;
f3 1 0; f3 2 0; f3 3 0.
x, y,z 1 2 3 . Find ,, in terms of x, y,z .
f x, y,z f 1 f 2 f 3
Let p(x) = c0 + c1x + c2x². Define
f1(p(x)) = ∫_0^1 p(x) dx;  f2(p(x)) = ∫_0^2 p(x) dx;  f3(p(x)) = ∫_0^(−1) p(x) dx.
Show that {f1, f2, f3} is a basis for V* by exhibiting the basis {p1(x), p2(x), p3(x)} of V of which it is the dual.
(Take pj(x) = c0 + c1x + c2x² and use the fact that fi(pj(x)) = δij.)
9.4 SUMMARY :
10
DETERMINANTS
Unit Structure :
10.0 Objectives
10.1 Introduction
10.2 Existence and Uniqueness of determinant function
10.3 Laplace expansion of a determinant
10.4 Summary
10.0 OBJECTIVES :
This chapter would help you understand the following terms and
topics.
To each n n matrix A over , we associate a real no. called as the
determinant of matrix A.
Determinant as an n-linear skew-symmetric function from
ℝⁿ × ℝⁿ × … × ℝⁿ to ℝ, which is equal to 1 on (E¹, E², …, Eⁿ).
Here Eʲ = (0, …, 0, 1, 0, …, 0)ᵗ, with 1 in the j-th place;
Eʲ is the j-th column of the n × n identity matrix Iₙ.
Determinant of an n × n matrix A, with columns C1, C2, …, Cn and rows R1, …, Rn, as a function
of its column vectors C1, C2, …, Cn or of its row vectors
R1, …, Rn.
We shall see the existence and uniqueness of determinant function
using permutations.
For an n × n matrix A, det(Aᵗ) = det A and det(AB) = det A · det B
for any square matrix B of the same size.
Laplace expansion of a determinant, Vandermonde determinant,
determinant of upper triangular and lower triangular matrices.
10.1 INTRODUCTION :
It‟s value is 0 on any matrix having two equal rows or two equal
columns and it‟s value on the n n identity matrix is equal to 1. We shall
see that such a function exists and then that it is unique with it‟s useful
properties.
D(R1, R2, …, Ri + R′i, …, Rn) = D(R1, …, Ri, …, Rn) + D(R1, …, R′i, …, Rn), i being fixed, and
D(R1, …, λRi, …, Rn) = λ D(R1, …, Ri, …, Rn).
For a 2 × 2 matrix
A = [ A11 A12 ; A21 A22 ],
write the rows as R1 = A11E¹ + A12E², R2 = A21E¹ + A22E². Bilinearity gives
D(A) = A11A21 D(E¹, E¹) + A11A22 D(E¹, E²) + A12A21 D(E², E¹) + A12A22 D(E², E²).
D must be skew-symmetric, so
D(E¹, E¹) = 0 = D(E², E²), and
D(E², E¹) = −D(E¹, E²) = −D(I).
D also satisfies D(I) = 1, hence
D(A) = A11A22 − A12A21.
equal, and D(A′) = −D(A) if A′ is a matrix obtained from A by
interchanging two rows of A:
0 = D(R1, …, Rj + Rj+1, Rj + Rj+1, …, Rn)
= D(R1, …, Rj, Rj+1, …, Rn) + D(R1, …, Rj+1, Rj, …, Rn)   (D is n-linear)
⇒ D(A′) = −D(A).
Interchanging rows i and j (i < j) can be done by 2(j − i) − 1 adjacent interchanges, so
D(B) = (−1)^(2(j−i)−1) D(A) = −D(A).
Eⱼ(A) = Σ_{i=1}^n (−1)^(i+j) Aij Dij(A) is an n-linear skew-symmetric
function on n × n matrices over ℝ, where Dij(A) is the determinant of the (n − 1) × (n − 1) matrix obtained by deleting the i-th row and j-th column of A. If D is a determinant function, so is
Eⱼ.
Proof: When two adjacent rows k and k + 1 of A are interchanged, the terms
(−1)^(k+j) Akj Dkj(A) and (−1)^(k+1+j) A(k+1)j D(k+1)j(A)
exchange roles with opposite signs, so Eⱼ changes sign; the other properties are checked similarly.
Example 2: Let A be the 3 × 3 matrix A = [ 0 1 0 ; 0 0 1 ; 1 0 0 ].
Then E1(A) = (−1)^(3+1) · 1 · det[ 1 0 ; 0 1 ] = 1,
E2(A) = (−1)^(1+2) · 1 · det[ 0 1 ; 1 0 ] = 1,
E3(A) = (−1)^(2+3) · 1 · det[ 0 1 ; 1 0 ] = 1.
The rows of A are R1 = A11E¹ + A12E² + A13E³, R2 = A21E¹ + A22E² + A23E³,
R3 = A31E¹ + A32E² + A33E³.
If σ interchanges two rows, then D(R_σ(1), R_σ(2), …, R_σ(n)) = −D(R1, R2, …, Rn):
D(R1, R2, …, Ri, …, Rj, …, Rn)
↦ D(R1, R2, …, Rj, …, Ri, …, Rn)
= −D(R1, R2, …, Ri, …, Rj, …, Rn)
= −D(R1, R2, …, Rn).
The same holds for the columns
C1, …, Cn, say:
D(C_σ(1), C_σ(2), …, C_σ(n)) = −D(C1, C2, …, Cn)
= −D(R1, R2, …, Rn).
In general, for a product of transpositions σ = σ1 σ2 … σk,
D(R_σ(1), R_σ(2), …, R_σ(n))
= (sgn σ1)(sgn σ2)…(sgn σk) D(R1, R2, …, Rn) = (sgn σ) D(R1, R2, …, Rn).
Write Ai = Σ_{j=1}^n Aij Eʲ for 1 ≤ i ≤ n. Then
D(A) = D(Σ_{j=1}^n A1j Eʲ, A2, A3, …, An)
= Σ_{j=1}^n A1j D(Eʲ, A2, A3, …, An).
Now replace A2 by Σ_{k=1}^n A2k Eᵏ:
D(Eʲ, A2, …, An) = Σ_{k=1}^n A2k D(Eʲ, Eᵏ, …, An).
Continuing in this way,
D(A) = Σ_{k1=1}^n Σ_{k2=1}^n … Σ_{kn=1}^n A1k1 A2k2 … Ankn D(E^(k1), E^(k2), …, E^(kn))
(using the n-linearity of D).
Also, D(E^(k1), E^(k2), …, E^(kn)) = 0 whenever two indices ki are equal. So
D(A) = Σ A1k1 A2k2 … Ankn D(E^(k1), E^(k2), …, E^(kn)),
where the sum on the R.H.S. is extended over all sequences (k1, k2, …, kn) of distinct positive
integers not exceeding n. Since a finite sequence or n-tuple is a function
defined on the first n positive integers, such a sequence corresponds to
a permutation σ of {1, 2, …, n} with σ(i) = ki.
D(A) = Σ_{σ ∈ Sn} A1σ(1) A2σ(2) … Anσ(n) D(E^(σ(1)), …, E^(σ(n))).
But we know that D(E^(σ(1)), …, E^(σ(n))) = (sgn σ) D(E¹, E², …, Eⁿ) = (sgn σ) D(I), so
D(A) = det A · D(I), where det A = Σ_{σ ∈ Sn} (sgn σ) A1σ(1) … Anσ(n).
Remarks:
(1) The expression Σ_{σ ∈ Sn} (sgn σ) A1σ(1) A2σ(2) … Anσ(n) depends only
on the matrix A and thus uniquely determines D(A).
For example, S3 consists of the six permutations
Id = (1 2 3 → 1 2 3), (1 2 3 → 2 3 1), (1 2 3 → 3 1 2),
(1 2 3 → 1 3 2), (1 2 3 → 3 2 1), (1 2 3 → 2 1 3),
and the above expression becomes
A11A22A33 + A12A23A31 + A13A21A32
− A11A23A32 − A13A22A31 − A12A21A33,
which is the same as the determinant
of a 3 × 3 matrix.
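The permutation-sum formula can be implemented directly and checked against a library determinant (it is exponential in n, so this is only a sketch for small matrices):

```python
import numpy as np
from itertools import permutations

# det A = sum over sigma of sgn(sigma) * prod_i A[i, sigma(i)].
def det_perm(A):
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])        # count inversions for the sign
        sgn = -1.0 if inv % 2 else 1.0
        prod = 1.0
        for i in range(n):
            prod *= A[i, perm[i]]
        total += sgn * prod
    return total

A = np.array([[2., 1.], [7., 4.]])
assert det_perm(A) == 2 * 4 - 1 * 7
B = np.array([[1., 2., 3.], [0., 1., 4.], [5., 6., 0.]])
assert np.isclose(det_perm(B), np.linalg.det(B))
```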
D(A1, …, An) = 0 whenever Ai = Aj, i ≠ j.
D(A) = det A · D(I) (by *)
Theorem: For an n × n matrix A over ℝ, det(Aᵗ) = det A, where Aᵗ
denotes the transpose of A.
Proof: If σ is a permutation in Sn,
det(Aᵗ) = Σ_{σ ∈ Sn} (sgn σ) A_σ(1)1 … A_σ(n)n.
Since sgn σ = sgn σ⁻¹, and as σ varies over Sn, σ⁻¹ also varies
over Sn,
det(Aᵗ) = Σ_{σ ∈ Sn} (sgn σ⁻¹) A_1σ⁻¹(1) … A_nσ⁻¹(n)
= det A.
Corollary: If A is invertible, det(A⁻¹) = (det A)⁻¹.
Proof: Since A·A⁻¹ = I, det A · det(A⁻¹) = det(A·A⁻¹) = det I = 1.
1) If A is the matrix over ℝ given by A = [ 0 a b ; −a 0 c ; −b −c 0 ], then show that
det A = 0.
4) Prove that
det [ 1 x1 x1² … x1^(n−1) ; 1 x2 x2² … x2^(n−1) ; … ; 1 xn xn² … xn^(n−1) ]
= Π_{1 ≤ i < j ≤ n} (xj − xi).
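The Vandermonde identity of exercise 4 is easy to confirm numerically for sample nodes:

```python
import numpy as np
from itertools import combinations

# det of the Vandermonde matrix V[i, k] = x_i**k equals prod_{i<j} (x_j - x_i).
x = np.array([1., 2., 4., 7.])
V = np.vander(x, increasing=True)

prod = 1.0
for i, j in combinations(range(len(x)), 2):
    prod *= x[j] - x[i]

assert np.isclose(np.linalg.det(V), prod)
```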
Then
det A = Σ_J e_J · det A[{1, …, r} | J] · det A[{r+1, …, n} | J′],
where the sum is over all r-element subsets J = {j1 < j2 < … < jr} of {1, …, n},
e_J is the sign (−1)^((j1 + … + jr) + (1 + 2 + … + r)),
A[{1, …, r} | J] is the submatrix of A formed by rows 1, …, r and the columns in J, and
A[{r+1, …, n} | J′] is the submatrix formed by rows r + 1, …, n and the complementary columns J′.
2) Prove that the area of the triangle in the plane with vertices
(x1, x2), (y1, y2), (z1, z2) is the absolute value of
(1/2) det [ x1 x2 1 ; y1 y2 1 ; z1 z2 1 ].
3) Suppose we have an n × n matrix A of the block form A = [ P Q ; O R ], where P
is an r × r matrix, R is an s × s matrix, Q is an r × s matrix and O denotes the s × r
null matrix. Prove that det A = det P · det R.
(Hint: Define D(P, Q, R) = det [ P Q ; O R ] and consider D as an s-linear
function of the rows of R.)
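The block identity of exercise 3 can be spot-checked on sample blocks:

```python
import numpy as np

# det of a block upper-triangular matrix equals det(P) * det(R).
P = np.array([[1., 2.], [3., 5.]])     # det P = -1
R = np.array([[2., 0.], [7., 3.]])     # det R = 6
Q = np.array([[4., 4.], [4., 4.]])     # arbitrary off-diagonal block

A = np.block([[P, Q], [np.zeros((2, 2)), R]])
assert np.isclose(np.linalg.det(A), np.linalg.det(P) * np.linalg.det(R))
```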
10.4 SUMMARY :
det : ℝⁿ × ℝⁿ × … × ℝⁿ (n times) → ℝ
det A = Σ_{σ ∈ Sn} (sgn σ) A1σ(1) … Anσ(n), where the sum on the R.H.S. is taken
over all permutations σ in Sn.
det(Aᵗ) = det A;  det(A⁻¹) = (det A)⁻¹.
4) The Vandermonde matrix
[ 1 x1 x1² … x1^(n−1) ; 1 x2 x2² … x2^(n−1) ; … ; 1 xn xn² … xn^(n−1) ]
and its determinant Π_{1 ≤ i < j ≤ n} (xj − xi).
The general Laplace expansion
det A = Σ_J e_J · det A[{1, …, r} | J] · det A[{r+1, …, n} | J′],
where the sum is taken over all r-tuples J = (j1, j2, …, jr) such that
j1 < j2 < … < jr.
11
11.0 Objectives
11.1 Introduction
11.2 Properties of determinants and Cramer‟s Rule
11.3 Determinant as area and volume
11.4 Summary
11.0 OBJECTIVES :
11.1 INTRODUCTION :
For example, if you consider the two vectors C1 = (1, 2)ᵗ and C2 = (2, 4)ᵗ in the plane
ℝ², then the determinant of the matrix A = [ C1 C2 ]
tells you whether C1 and C2 are linearly
independent or linearly dependent vectors, in terms of det A,
which is equal to
det [ 1 2 ; 2 4 ] = 1·4 − 2·2 = 0.
This implies that C1 = (1, 2)ᵗ and C2 = (2, 4)ᵗ are linearly dependent,
which can be seen clearly because 2C1 = C2, in other words 2C1 − C2 = 0.
We try to generalize this result in the following theorem.
We try to generalize this result in the following theorem.
Theorem: If the columns C1, C2, …, Cn of A are linearly dependent, then D(C1, C2, …, Cn) = 0.
Proof: Suppose Cj = Σ_{k≠j} λk Ck. Then
D(C1, C2, …, Cj, …, Cn) = D(C1, C2, …, Σ_{k≠j} λk Ck, …, Cn)
= Σ_{k≠j} λk D(C1, C2, …, Ck, …, Cn)   (Ck in the j-th place)
= λ1 D(C1, …, C1, …, Cn) + λ2 D(C1, C2, …, C2, …, Cn) + …
+ λn D(C1, …, Cn, …, Cn)
= 0,
since each determinant on the right has two equal columns.
We shall also see that the converse of the above theorem is also
true; in other words, the determinant is non-zero exactly when the columns are linearly independent. For example,
B = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] has rank equal to 3 (E¹, E², E³ are linearly independent) and det B = 1 ≠ 0.
For example,
Proof : To prove this theorem use induction on n and consider two cases.
1) All elements in the first row of A are 0.
2) Some elements in the first row of A are not 0.
Assume that n > 1 and let A = (aij), i = 1, 2, …, n, j = 1, 2, …, n.
Case (1): A = [ 0 0 … 0 ; a21 a22 … a2n ; … ; an1 an2 … ann ].
Consider the (n − 1) × (n − 1) matrix A* = [ a22 … a2n ; … ; an2 … ann ].
By the induction hypothesis A* is equivalent to a lower triangular matrix, so A is equivalent to a lower triangular
matrix
[ 0 0 0 … 0 ; a21 C22 0 … 0 ; a31 C32 C33 … 0 ; … ; an1 Cn2 Cn3 … Cnn ].
Case (2): some a1j ≠ 0. After a column interchange we may assume a11 ≠ 0; subtracting suitable multiples of the first column from the others,
A is column equivalent to M =
[ a11 0 0 … 0 ; a21 p22 p23 … p2n ; … ; an1 pn2 pn3 … pnn ].
Now, consider the (n − 1) × (n − 1) matrix P = [ p22 … p2n ; … ; pn2 … pnn ].
By the induction hypothesis, the matrix P is row equivalent to
[ q22 0 … 0 ; q32 q33 … 0 ; … ; qn2 qn3 … qnn ],
and hence A is equivalent to a lower triangular matrix.
We have seen that if the column vectors C1, C2, …, Cn are linearly
dependent then D(C1, C2, …, Cn) = 0. Here we prove its converse.
Suppose det A = 0. A is equivalent to a lower triangular matrix B with
det B = b11 b22 … bnn = 0,
so bkk = 0 for some k, 1 ≤ k ≤ n.
Then A is equivalent to a matrix D of the form
D = [ d11 0 … 0 ; … d(k−1)(k−1) … ; … ; dn1 dn2 … 0 ]
in which one column consists of zeros, so
rank A = rank D,
rank D ≤ n − 1,
rank A ≤ n − 1 ⇒ dim L({C1, C2, …, Cn}) ≤ n − 1
⇒ C1, C2, …, Cn are linearly dependent.
A⁻¹(AX) = A⁻¹B
⇒ (A⁻¹A)X = A⁻¹B
⇒ IX = A⁻¹B
⇒ X = A⁻¹B
Definition: Let A = (aij), 1 ≤ i, j ≤ n. For 1 ≤ i, j ≤ n, let A(i | j) denote the (n − 1) × (n − 1) matrix obtained from A by deleting the i-th row and the j-th column.
The cofactor of aij is the scalar given by (−1)^(i+j) det A(i | j).
Example:
1) Consider the 3 × 3 matrix A = [ 1 1 0 ; 0 2 2 ; 1 3 1 ].
The cofactor of a11 is (−1)^(1+1) det A(1 | 1) = det [ 2 2 ; 3 1 ] = 2 − 6 = −4.
Here A(1 | 1) = [ 2 2 ; 3 1 ].
The cofactor of a21 is (−1)^(2+1) det A(2 | 1) = −det [ 1 0 ; 3 1 ] = −1.
Here A(2 | 1) = [ 1 0 ; 3 1 ].
1 1 2 0
0 1 1 2
2) Consider a 4 4 matrix A
3 2 1 0
1 0 0 3
1 2 0
1 2 0 2
0 1 2 1 2 0
1 0 3 0
3 1 0
2 2 0 6 2 12 10
Example:
1) Consider the 3 × 3 matrix A = [ 3 1 0 ; 1 2 5 ; 2 1 0 ].
The (1, 1) minor of the matrix A is
A(1 | 1) = [ 2 5 ; 1 0 ].
Definition: Let A be an n × n matrix, A = (aij), 1 ≤ i, j ≤ n. Let Cij
denote the cofactor of aij for all i, j, 1 ≤ i, j ≤ n:
Cij = (−1)^(i+j) det A(i | j) for 1 ≤ i, j ≤ n. Write C = (Cij), 1 ≤ i, j ≤ n; then
C is an n × n matrix called the matrix of cofactors of A.
Example:
1) For the 3 × 3 matrix A = [ 2 1 3 ; 0 1 −1 ; −1 2 0 ]:
C11 = (−1)^(1+1) det A(1 | 1) = det [ 1 −1 ; 2 0 ] = 2
C12 = (−1)^(1+2) det A(1 | 2) = −det [ 0 −1 ; −1 0 ] = 1
C13 = (−1)^(1+3) det A(1 | 3) = det [ 0 1 ; −1 2 ] = 1
Computing the remaining cofactors similarly,
the matrix of cofactors of A is C = [ 2 1 1 ; 6 3 −5 ; −4 2 2 ].
Example:
1) In the above example, adj A = Cᵗ = [ 2 6 −4 ; 1 3 2 ; 1 −5 2 ]. Our aim behind
finding the adjoint of the matrix A is to simplify the steps in obtaining the
inverse of the matrix A, whenever det A ≠ 0.
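The cofactor matrix, the adjoint, and the resulting inverse can all be computed mechanically and checked against the example above:

```python
import numpy as np

# adj(A) is the transpose of the cofactor matrix; A^{-1} = adj(A) / det(A).
def adjugate(A):
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2., 1., 3.],
              [0., 1., -1.],
              [-1., 2., 0.]])        # matrix from the example above

adjA = adjugate(A)
assert np.allclose(adjA, [[2, 6, -4], [1, 3, 2], [1, -5, 2]])
assert np.allclose(A @ adjA, np.linalg.det(A) * np.eye(3))
assert np.allclose(np.linalg.inv(A), adjA / np.linalg.det(A))
```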
2) For the 4 × 4 matrix A given by A = [ 1 1 2 0 ; 2 3 3 1 ; 4 5 0 3 ; 2 1 3 2 ], find
i) the (i, j) minor of A for all i, j, 1 ≤ i, j ≤ 4
ii) the cofactor of aij for all i, j, 1 ≤ i, j ≤ 4
Theorem (expansion by the j-th column):
det A = Σ_{i=1}^n (−1)^(i+j) aij det A(i | j) (the expansion by minors of the j-th
column),
i.e.
det A = Σ_{i=1}^n aij Cij.
If we replace the j-th column of A by its k-th column (j ≠ k), the resulting matrix has two equal columns, so its determinant is 0; expanding it by the j-th column gives
Σ_{i=1}^n (−1)^(i+j) aik det A(i | j)
= Σ_{i=1}^n aik Cij
= 0 for j ≠ k.
This means that
Σ_{i=1}^n aik Cij = δjk det A   (= 0 if j ≠ k, det A if j = k),
i.e.
Σ_{i=1}^n Cij aik = δjk det A,
i.e.
Σ_{i=1}^n (adj A)ji aik = δjk det A.
For an n × n matrix A,
(adj A)·A = det A · I.
Applying this to Aᵗ,
(adj Aᵗ)·Aᵗ = det Aᵗ · I = det A · I.
But adj(Aᵗ) = (adj A)ᵗ, so taking transposes,
A·(adj A) = det A · I.
If det A ≠ 0, then
A · ((1/det A) adj A) = Iₙ
and also
((1/det A) adj A) · A = Iₙ,
so A⁻¹ exists and
A⁻¹ = (1/det A) adj A.
Theorem (Cramer's Rule): If det A ≠ 0, the system AX = B has the unique solution
xj = det Mj / det A, for 1 ≤ j ≤ n,
where Mj = (A¹, A², …, B, …, Aⁿ) is the matrix obtained from A by replacing its
j-th column by B.
Proof sketch: If X = (x1, …, xn)ᵗ solves AX = B, then B = Σ_k xk Aᵏ, so
det Mj = D(A¹, …, Σ_k xk Aᵏ, …, Aⁿ) = Σ_k xk D(A¹, …, Aᵏ, …, Aⁿ)
= xj D(A¹, A², …, Aⁿ),
since every term in this sum except the j-th one is 0, because two
columns are equal. Hence
xj = det Mj / det A.
Note: The above theorem gives us a way of obtaining the unique
solution of the system AX = B, where A is an n × n matrix,
A = (aij), 1 ≤ i, j ≤ n, X = (x1, x2, …, xn)ᵗ, B = (b1, b2, …, bn)ᵗ.
Example: Solve by Cramer's rule the system
3x + 2y + 4z = 1
2x + y + z = 0
x + 2y + 3z = 1
Solution: Let A = [ 3 2 4 ; 2 1 1 ; 1 2 3 ]; det A = 5.
x = (1/5) det [ 1 2 4 ; 0 1 1 ; 1 2 3 ] = −1/5
y = (1/5) det [ 3 1 4 ; 2 0 1 ; 1 1 3 ] = 0
z = (1/5) det [ 3 2 1 ; 2 1 0 ; 1 2 1 ] = 2/5
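The worked example translates into a short Cramer's-rule routine; the computed solution also satisfies the original system:

```python
import numpy as np

# Cramer's rule: x_j = det(M_j) / det(A), M_j = A with column j replaced by b.
def cramer(A, b):
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        M = A.copy()
        M[:, j] = b
        x[j] = np.linalg.det(M) / d
    return x

A = np.array([[3., 2., 4.],
              [2., 1., 1.],
              [1., 2., 3.]])
b = np.array([1., 0., 1.])

x = cramer(A, b)
assert np.allclose(x, [-0.2, 0.0, 0.4])   # (-1/5, 0, 2/5)
assert np.allclose(A @ x, b)
```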
1) Find the inverse, if it exists, of:
i) [ a b ; c d ], given that ad − bc ≠ 0
ii) [ 1 2 0 ; 1 1 1 ; 1 2 1 ]
3) Compute det [ 5 0 0 0 ; 7 2 0 0 ; 9 4 1 0 ; 9 2 3 1 ].
Solve by Cramer's rule:
iii) x + y + 2z = 1, x + y + z = 2, 2x + y + z = 5
iv) 2x + 3y + 4z = a, 5x + 6y + 7z = b, 8x + 9y + 9z = c
u+v
o u
P u, u 0 , 1
A u, 0
u 1
A u, A u, if 1 0
u2 2
u 1
A u, if 1 0
u2 2
u 1
A u, has same sign as 1
u2 2
The main goal in this section is to show that the oriented area of the
parallelogram spanned by vectors u, v in a plane is the same as D(u, v):
A(u, v) = D(u, v).
1 1
ii) A u, A u,. n
n n
iii) For C ,C 0 A cu, CA u,
iv) For any C , A cu, CA u, & A0 u,c CA u,
v) A u , A u,
u
We extend the postulates we made about areas to volumes.
3) If G1 and G2 are two regions which are disjoint, or such that their
intersection has volume zero, then V(G1 ∪ G2) = V(G1) + V(G2).
Also, V(e1, e2, e3) = 1.
1 m
ii) For n 3 , V u,, V u,,
n n
V u1 u2 ,, V a u b c ,,
a V u,,
V au b c,, V u ,,
V u1,, V u2 ,,
1
Then m T w.r.t. the standard basis on both sides is 1 , denote
2 2
m T by det T .
det T
S
u Su
Hence S P u, S T C S T C .
A S P u, det S T
det S det T
det S A u,
u1 1 1
m T u2 2 2
u 3 3
3
det m T det T
det S V P u,,
11.4 SUMMARY :
The system AX = B has a unique solution, namely X = A⁻¹B, provided that the
coefficient matrix A has non-vanishing determinant, and by Cramer's rule
xj = det Mj / det A for 1 ≤ j ≤ n.
12
RELATION BETWEEN MATRICES AND
LINEAR TRANSFORMATIONS
Unit Structure:
12.0 Objectives
12.1 Introduction
12.2 Representation of a linear transformation by matrix
12.3 The matrices associated with composite, inverse, sum of linear
transformation.
12.4 The connection between the matrices of a linear transformation
with respect to different bases.
12.5 Summary
12.0 OBJECTIVES :
This chapter would help you understand the following concepts and
topics.
Representation of linear transformation from U to V. Where U and
V are finite dimensional real vector spaces by matrices with
respect to the given ordered bases of U and V.
The relation between the matrices of linear transformation from U
to V with respect to different bases of U and V.
Matrix of sum of linear transformations and scalar multiple of a
linear transformation.
Matrices of composite linear transformation and inverse of a linear
transformation.
12.1 INTRODUCTION :
T : 2 3 defined by
214
We found that
a11 1 a12 0
a21 0 and a22 1
a31 1 a32 1
1 j 2
A aij 1 i 3
1 0
A 0 1
1 1
32
x
If we consider AX, where X 1
x2 21
1 0
x
then we get 0 1 1 x1, x2 ,x1 x2
1 1 x2
Recall that the set {(1, 0), (0, 1)} forms a basis for the vector space R². As a set it is not different from the set {(0, 1), (1, 0)}. We shall hereafter distinguish between such sets by using the term "ordered basis".
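The construction in the introduction, where the columns of A are the images of the ordered basis vectors, can be sketched with NumPy:

```python
import numpy as np

# The map from the introduction: T(x1, x2) = (x1, x2, x1 + x2).
def T(x):
    x1, x2 = x
    return np.array([x1, x2, x1 + x2])

# The j-th column of the matrix of T is T applied to the j-th
# standard basis vector of R^2.
A = np.column_stack([T(e) for e in np.eye(2)])
print(A)          # the 3x2 matrix [[1, 0], [0, 1], [1, 1]]

x = np.array([3.0, 5.0])
print(A @ x)      # equals T(x)
```

Multiplying A by a coordinate column reproduces the action of T, exactly as the text computes.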
12.2 REPRESENTATION OF A LINEAR
TRANSFORMATION BY MATRIX :
Write T(αj) = Σ_{i=1}^m aij βi for 1 ≤ j ≤ n.
This gives rise to an m × n matrix A = (aij), 1 ≤ i ≤ m, 1 ≤ j ≤ n, whose jth column is (a1j, a2j, ..., amj)ᵗ.
Thus (a11, a21, ..., am1)ᵗ is the first column of A,
(a12, a22, ..., am2)ᵗ is the second column of A, and
(a1n, a2n, ..., amn)ᵗ is the nth column of A.
Call this matrix A the matrix of T with respect to the ordered bases B, B' of V, V' respectively; denote this matrix by m(T)_B^{B'}.
Note: m(T)_B^{B'} is uniquely determined for given ordered bases B and B' of V and V' respectively.
We shall use this procedure of finding m(T)_B^{B'} to establish a one-to-one correspondence between L(V, V') and M_{m×n}.
Theorem:
Let V be an n-dimensional vector space with ordered basis B = {α1, α2, ..., αn} and let V' be an m-dimensional vector space with ordered basis B' = {β1, β2, ..., βm}. Then there is a one-to-one correspondence between L(V, V') and M_{m×n}.
Proof:
Let T ∈ L(V, V'), so T : V → V' is a linear transformation. Write
T(αj) = Σ_{i=1}^m aij βi for each j, 1 ≤ j ≤ n.
This gives rise to the matrix m(T)_B^{B'}.
Conversely, we show that for each A ∈ M_{m×n} there is a unique linear map from V to V' whose matrix is precisely A.
Let A = (aij), 1 ≤ i ≤ m, 1 ≤ j ≤ n.
We show that
X_{B'}(T(α)) = m(T)_B^{B'} · X_B(α).
Let α ∈ V, α = Σ_{j=1}^n xj αj, so that X_B(α) = (x1, x2, ..., xn)ᵗ.
Let Ai = the ith row of the matrix A, i.e. Ai = (ai1, ai2, ..., ain).
Then
Ai X_B(α) = (ai1, ai2, ..., ain)(x1, ..., xn)ᵗ = Σ_{j=1}^n aij xj.
Hence
A X_B(α) = (Σ_{j=1}^n a1j xj, Σ_{j=1}^n a2j xj, ..., Σ_{j=1}^n amj xj)ᵗ.
Also,
T(α) = T(Σ_{j=1}^n xj αj) = Σ_{j=1}^n xj T(αj) = Σ_{j=1}^n xj Σ_{i=1}^m aij βi = Σ_{i=1}^m (Σ_{j=1}^n aij xj) βi.
Hence
X_{B'}(T(α)) = (Σ_j a1j xj, ..., Σ_j amj xj)ᵗ = m(T)_B^{B'} X_B(α),
i.e. X_{B'}(T(α)) = m(T)_B^{B'} X_B(α).
Similarly,
X_{B'}(T1(α)) = m(T1)_B^{B'} X_B(α) and m(T2)_B^{B'} X_B(α) = X_{B'}(T2(α)).
Proof:
Consider
m(S ∘ T)_B^{B''} X_B(α) = X_{B''}((S ∘ T)(α))
= X_{B''}(S(T(α)))
= m(S)_{B'}^{B''} X_{B'}(T(α))
= m(S)_{B'}^{B''} m(T)_B^{B'} X_B(α).
Using comparison, m(S ∘ T)_B^{B''} = m(S)_{B'}^{B''} m(T)_B^{B'}.
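A quick numeric illustration of the rule that the matrix of a composite is the product of the matrices; the two maps below are illustrative choices, not taken from the text:

```python
import numpy as np

# T(x, y) = (x + y, y) on R^2, and S(u, v) = (u, u + v, v) from R^2 to R^3
# (assumed example maps).
mT = np.array([[1, 1],
               [0, 1]])
mS = np.array([[1, 0],
               [1, 1],
               [0, 1]])
v = np.array([2, 5])
# Applying T and then S agrees with multiplying by the product m(S) m(T):
print(mS @ (mT @ v))
print((mS @ mT) @ v)   # same vector
```

Note the order: m(S ∘ T) = m(S)·m(T), matching "apply T first, then S".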
e.g.
Consider the linear transformation T = Id : V → V (V a finite-dimensional vector space). Then, for 1 ≤ j ≤ n,
αj = Id(αj) = Σ_{i=1}^n aij αi.
Since {α1, α2, ..., αn} is a basis of V, this forces aij = δij (1 if i = j, 0 otherwise), so
m(Id) = (δij), 1 ≤ i, j ≤ n, = I_n (the n × n identity matrix).
Corollary:
T is invertible iff A = m(T)_B^{B'} is an invertible matrix; in that case m(T⁻¹)_{B'}^B = A⁻¹.
Proof:
Suppose that T is non-singular. Then T ∘ T⁻¹ = Id = T⁻¹ ∘ T.
Hence
m(T ∘ T⁻¹) = m(Id) = m(T⁻¹ ∘ T),
i.e. m(T) m(T⁻¹) = I = m(T⁻¹) m(T).
Writing A = m(T), this reads A m(T⁻¹) = I = m(T⁻¹) A.
Hence m(T) = A is invertible and A⁻¹ = m(T⁻¹).
Conversely, suppose that A is invertible. Then A⁻¹ exists, and there exists a linear transformation
S : V' → V such that m(S)_{B'}^B = A⁻¹.
Then m(S ∘ T) = m(S) m(T) = A⁻¹A = I = m(Id), so S ∘ T = Id; similarly T ∘ S = Id, and hence T is invertible.
In particular, for the identity map with two ordered bases B, B' of V,
m(Id)_B^{B'} m(Id)_{B'}^B = m(Id)_{B'}^{B'} = I = m(Id)_B^B = m(Id)_{B'}^B m(Id)_B^{B'},
so the change-of-basis matrix m(Id)_B^{B'} is invertible with inverse m(Id)_{B'}^B.
Proof:
Write T : V → V as follows: T = Id ∘ T ∘ Id. Then
m(T)_{B'}^{B'} = m(Id ∘ T ∘ Id)_{B'}^{B'}
= m(Id ∘ T)_B^{B'} m(Id)_{B'}^B
= m(Id)_B^{B'} m(T)_B^B m(Id)_{B'}^B.
Put N = m(Id)_{B'}^B.
Since m(Id)_B^{B'} m(Id)_{B'}^B = I, we have m(Id)_B^{B'} = N⁻¹, and hence
m(T)_{B'}^{B'} = N⁻¹ m(T)_B^B N.
Examples:
1) Show that m(S + T) = m(S) + m(T).
Solution:
Let B = {α1, ..., αn} and B' = {β1, ..., βm} be bases for V, V' respectively.
Let m(S) = (aij), m(T) = (bij) and m(S + T) = (cij), each of size m × n.
Then (S + T)(αj) = Σ_{i=1}^m cij βi for each j = 1, 2, ..., n.
Also
(S + T)(αj) = S(αj) + T(αj) = Σ_{i=1}^m aij βi + Σ_{i=1}^m bij βi = Σ_{i=1}^m (aij + bij) βi.
Comparing coefficients, cij = aij + bij. Hence m(S + T) = m(S) + m(T).
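The entrywise identity cij = aij + bij is immediate to check numerically; the matrices below are illustrative, not taken from the text:

```python
import numpy as np

# Two assumed 3x2 matrices standing in for m(S) and m(T).
mS = np.array([[1, 2],
               [0, 1],
               [3, 0]])
mT = np.array([[0, 1],
               [1, 1],
               [2, 2]])
v = np.array([1, 4])
# (S + T)(v) computed two ways gives the same answer:
print((mS + mT) @ v)
print(mS @ v + mT @ v)
```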
2) Let T(p(x)) = (1 + 2x + x²) p(x). Find the matrix of T relative to the given monomial bases. We compute:
T(1) = 1 + 2x + x²
T(x) = x + 2x² + x³
T(x²) = x² + 2x³ + x⁴
T(x³) = x³ + 2x⁴ + x⁵
3) Find the matrix of T with respect to the basis
B = {α1 = (1,1,1,1), α2 = (1,1,1,0), α3 = (1,1,0,0), α4 = (1,0,0,0)}
of R⁴ and the basis B1 = {β1 = 1 + x, β2 = 1 − x} of P1.
Solution:
T(α1) = T(1,1,1,1) = 2 + 2x = 2β1 + 0β2
T(α2) = T(1,1,1,0) = 2 + x = (3/2)β1 + (1/2)β2
T(α3) = T(1,1,0,0) = 1 + x = 1β1 + 0β2
T(α4) = T(1,0,0,0) = 1 = (1/2)β1 + (1/2)β2
Hence
M(T)_B^{B1} =  2  3/2  1  1/2
               0  1/2  0  1/2
Solution:
i) Te1 = T(1,0,0) = (1,1,0) = 1·e1 + 1·e2 + 0·e3
Te2 = T(0,1,0) = (0,1,0) = 0·e1 + 1·e2 + 0·e3
Te3 = T(0,0,1) = (0,0,0) = 0·e1 + 0·e2 + 0·e3
         1 0 0
m(T) =   1 1 0    w.r.t. the natural basis for R³ on both sides.
         0 0 0
ii) Let u1 = (1,1,0); u2 = (0,1,1); u3 = (1,1,1).
Note Tu1 = T(1,1,0) = (1,2,0), Tu2 = T(0,1,1) = (0,1,0), Tu3 = T(1,1,1) = (1,2,0); each can be written as λ1 u1 + λ2 u2 + λ3 u3.
For instance,
Te1 = (1,1,0) = λ1(1,1,0) + λ2(0,1,1) + λ3(1,1,1) = (λ1 + λ3, λ1 + λ2 + λ3, λ2 + λ3),
and comparing components gives λ1 = 1, λ2 = 0, λ3 = 0, i.e.
Te1 = (1,1,0) = 1·u1 + 0·u2 + 0·u3.
Similarly,
Te2 = (0,1,0) = 1·u1 + 1·u2 − 1·u3
Te3 = (0,0,0) = 0·u1 + 0·u2 + 0·u3
Hence
         1  1  0
m(T) =   0  1  0
         0 −1  0
w.r.t. the basis {e1, e2, e3} of R³ (domain) and the basis B = {u1, u2, u3} of R³ (codomain).
Solution:
T(E1) = [1 1; 1 1][1 0; 0 0] = [1 0; 1 0] = 1·E1 + 0·E2 + 1·E3 + 0·E4
T(E2) = [1 1; 1 1][0 1; 0 0] = [0 1; 0 1] = 0·E1 + 1·E2 + 0·E3 + 1·E4
T(E3) = [1 1; 1 1][0 0; 1 0] = [1 0; 1 0] = 1·E1 + 0·E2 + 1·E3 + 0·E4
T(E4) = [1 1; 1 1][0 0; 0 1] = [0 1; 0 1] = 0·E1 + 1·E2 + 0·E3 + 1·E4
The matrix of the transformation is
1 0 1 0
0 1 0 1
1 0 1 0
0 1 0 1
Now it's your turn to solve certain exercises.
2) Let M = [1 2; 3 4] and let T be the linear operator defined on the space of 2 × 2 matrices, with basis E1 = [1 0; 0 0], E2 = [0 1; 0 0], E3 = [0 0; 1 0], E4 = [0 0; 0 1], by T(A) = M·A. Find the matrix of T.
4) Consider the set {e^{3t}, t e^{3t}, t² e^{3t}}, a basis of a vector space V of functions f : R → R. Let D be the differential operator on V.
(Hint: find D(e^{3t}), D(t e^{3t}), D(t² e^{3t}) and write each of these as a linear
Theorem: Let m(T)_B^B = A = (aij), 1 ≤ i, j ≤ n. If B = (bij) is another n × n matrix such that B = m(T)_{B1}^{B1} for another ordered basis B1 of V, then B = N⁻¹AN for an invertible matrix N.
Proof: Since A = m(T)_B^B,
T(αj) = Σ_{i=1}^n aij αi.
Let βj = S(αj); since {α1, ..., αn} is a basis of V and S is non-singular, {β1, ..., βn} is the basis B1, and
βj = Σ_{k=1}^n nkj αk, where N = (nij) is the transition matrix.
Let m(T)_{B1}^{B1} = B = (bij), 1 ≤ i, j ≤ n. Then
T(βj) = Σ_{i=1}^n bij βi = Σ_{i=1}^n bij S(αi)
= Σ_{i=1}^n bij Σ_{k=1}^n nki αk
= Σ_{k=1}^n (Σ_{i=1}^n nki bij) αk   (i)
Also,
T(βj) = T(Σ_{i=1}^n nij αi) = Σ_{i=1}^n nij T(αi) = Σ_{i=1}^n nij Σ_{k=1}^n aki αk
= Σ_{k=1}^n (Σ_{i=1}^n aki nij) αk   (ii)
Comparing (i) and (ii),
Σ_{i=1}^n nki bij = Σ_{i=1}^n aki nij for 1 ≤ k, j ≤ n,
i.e. NB = AN, and hence B = N⁻¹AN.
In the above theorem we saw how one can connect the two matrices A, B of T relative to bases B and B1 of V, in terms of an invertible matrix N.
Here we can observe that B = C⁻¹AC implies A = CBC⁻¹ = (C⁻¹)⁻¹ B (C⁻¹), so the relation of similarity is symmetric.
Example: Let
B = {α1 = 1, α2 = x, α3 = x², α4 = x³}
and B1 = {β1 = 1, β2 = 1 + x, β3 = x + x², β4 = x² + x³}.
Find A = m(D)_B^B and B = m(D)_{B1}^{B1}, find the transition matrix N, and verify that B = N⁻¹AN, where D : P3 → P3 is the differential operator.
Solution:
D(1) = 0, D(x) = 1, D(x²) = 2x = 0·1 + 2·x + 0·x² + 0·x³, D(x³) = 3x², so
                 0 1 0 0
A = m(D)_B^B =   0 0 2 0
                 0 0 0 3
                 0 0 0 0
For the basis B1:
D(1) = 0 = 0·1 + 0·(1 + x) + 0·(x + x²) + 0·(x² + x³)
D(1 + x) = 1 = 1·1 + 0·(1 + x) + 0·(x + x²) + 0·(x² + x³)
D(x + x²) = 1 + 2x = −1·1 + 2·(1 + x) + 0·(x + x²) + 0·(x² + x³)
D(x² + x³) = 2x + 3x² = 1·1 − 1·(1 + x) + 3·(x + x²) + 0·(x² + x³)
so
                     0 1 −1  1
B = m(D)_{B1}^{B1} = 0 0  2 −1
                     0 0  0  3
                     0 0  0  0
Also the transition matrix is
     1 1 0 0
N =  0 1 1 0
     0 0 1 1
     0 0 0 1
and a direct computation confirms B = N⁻¹AN.
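The similarity verification in this example is quick to check numerically; a short NumPy sketch:

```python
import numpy as np

# A = m(D) in the basis {1, x, x^2, x^3}; N is the transition matrix whose
# columns express 1, 1+x, x+x^2, x^2+x^3 in that basis.
A = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)
N = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)
B = np.linalg.inv(N) @ A @ N   # matrix of D in the second basis
print(np.round(B).astype(int))
```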
representing D with respect to the basis B = {1, x, x²}, and find the matrix A
representing D w.r.t. B1 = {1, 2x, 4x² − 2}.
Solution:
D(1) = 0 = 0·1 + 0·x + 0·x²
D(x) = 1 = 1·1 + 0·x + 0·x²
D(x²) = 2x = 0·1 + 2·x + 0·x²
so
     0 1 0
B =  0 0 2
     0 0 0
Now,
D(1) = 0 = 0·1 + 0·(2x) + 0·(4x² − 2)
D(2x) = 2 = 2·1 + 0·(2x) + 0·(4x² − 2)
D(4x² − 2) = 8x = 0·1 + 4·(2x) + 0·(4x² − 2)
so
                     0 2 0
A = m(D)_{B1}^{B1} = 0 0 4
                     0 0 0
Also, the transition matrix and its inverse are
     1 0 −2            1  0   1/2
N =  0 2  0  ,  N⁻¹ =  0 1/2  0
     0 0  4            0  0   1/4
and one verifies that N⁻¹BN = A.
3) Find the matrix of the rotation of the plane through an angle θ (anticlockwise), R_θ : R² → R².
12.5 SUMMARY :
assuming given ordered bases B = {α1, ..., αn} and B1 = {β1, ..., βm} of V and V1 respectively.
3) The matrix of the sum of linear transformations S and T is the sum of the matrices of S and T, with respect to the given bases on both sides:
m(S + T)_B^{B1} = m(S)_B^{B1} + m(T)_B^{B1}.
Similarly, m(λT)_B^{B1} = λ · m(T)_B^{B1}, where λT is the linear transformation defined by (λT)(α) = λ · T(α) for all α ∈ V.
13
LINEAR EQUATIONS AND MATRICES
Unit Structure:
13.0 Objectives
13.1 Introduction
13.2 Solutions of the homogeneous system AX = 0
13.3 Solutions of the non-homogeneous system AX = B, where B ≠ 0
13.4 Summary
13.0 OBJECTIVES :
This chapter studies systems of m linear equations in n unknowns,
ai1 x1 + ai2 x2 + ... + ain xn = bi, 1 ≤ i ≤ m,
where aij and bi are real numbers and x1, x2, ..., xn are unknowns.
13.1 INTRODUCTION :
Consider such a system, written AX = B, where aij and bi are real numbers and x1, ..., xn are the n unknowns. The m × n matrix
                            a11 a12 ... a1n
A = (aij), 1 ≤ i ≤ m,   =   a21 a22 ... a2n
    1 ≤ j ≤ n               ...
                            am1 am2 ... amn
is the coefficient matrix of the system. A solution to the system is a vector Q = (q1, ..., qn)ᵗ in Rⁿ such that
AQ = B.
The set of all solutions to the system is called the solution set of the
system. When B = 0 the system is called homogeneous.
The homogeneous system corresponding to AX = B is AX = 0.
Proof: Let S be the solution set of the system AX = 0, and let P, Q be any two solutions of AX = 0. Let a, b ∈ R. Then
A(aP + bQ) = aAP + bAQ = a·0 + b·0 = 0,
so aP + bQ is a solution of AX = 0.
Hence aP + bQ ∈ S, and thus S is a subspace of Rⁿ.
Now we turn to the most important result in this chapter, which gives the connection between the dimension of the solution space of the system AX = 0 and the rank of the coefficient matrix A.
Theorem:
The dimension of the solution space of the system AX = 0 is n − rank A.
Define T : Rⁿ → Rᵐ by
T(Q) = AQ = (A1 Q, ..., Am Q)ᵗ = (Σ_{j=1}^n a1j qj, ..., Σ_{j=1}^n amj qj)ᵗ,
where A1, ..., Am are the rows of the matrix A.
Then
T(aP + bQ) = (A1(aP + bQ), ..., Am(aP + bQ))ᵗ
= (aA1P + bA1Q, ..., aAmP + bAmQ)ᵗ
= a(A1P, ..., AmP)ᵗ + b(A1Q, ..., AmQ)ᵗ = aT(P) + bT(Q),
so T is a linear transformation.
Also, Q ∈ ker T iff T(Q) = 0, in other words iff
a11 q1 + a12 q2 + ... + a1n qn = 0
a21 q1 + a22 q2 + ... + a2n qn = 0
...
am1 q1 + am2 q2 + ... + amn qn = 0,
i.e. iff Q is a solution of AX = 0.
e.g. Consider the system
x + y = 0
x + y + z = 0
Here m = 2 and n = 3, and
A = 1 1 0
    1 1 1    (2 × 3).
(1, −1, 0) is a non-trivial solution of this system.
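The theorem can be illustrated numerically with the 2 × 3 example just given, where dim S = n − rank A = 3 − 2 = 1:

```python
import numpy as np

# A from the example above; the solution space of AX = 0 has
# dimension n - rank(A).
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
n = A.shape[1]
print(n - np.linalg.matrix_rank(A))        # dimension of the solution space
# (1, -1, 0) is a non-trivial solution, as claimed:
print(A @ np.array([1.0, -1.0, 0.0]))      # the zero vector
```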
Definition:
If A = (aij), 1 ≤ i ≤ m, 1 ≤ j ≤ n, is the matrix associated with the system of equations AX = B, the m × (n + 1) matrix
a11 a12 ... a1n b1
...
am1 am2 ... amn bm
is called the augmented matrix of the system AX = B, denoted by
(A, B) or [A | B].
Suppose B = λ1 A¹ + λ2 A² + ... + λn Aⁿ, where A¹, ..., Aⁿ are the columns of A. Let P = (λ1, ..., λn)ᵗ. Then AP = B:
the system AX = B has a solution, namely P.
Theorem:
Suppose the system AX = B has a particular solution X0. Then a vector X in Rⁿ is also a solution iff X = X0 + Y, where Y is a solution of the corresponding homogeneous system AX = 0.
Here Y satisfies AY = 0.
Note: When we are asked to find all solutions of the non-homogeneous system AX = B, B ≠ 0, we first need to find one particular solution of the non-homogeneous system, and secondly to solve the corresponding homogeneous system AX = 0.
Given a system of n non-homogeneous linear equations in n
unknowns, the system possesses a unique solution provided that the
associated n n matrix is invertible. We prove this result in the following
theorem.
Examples:
1) Solve
x + 2y + 3z = 0
3x + 4y + 4z = 0
7x + 10y + 12z = 0
Solution:
The given system can be represented in the following form:
1  2  3   x   0
3  4  4   y = 0
7 10 12   z   0
R2 − 3R1 and R3 − 7R1 give
1  2  3
0 −2 −5
0 −4 −9
and R3 − 2R2 gives
1  2  3
0 −2 −5
0  0  1
The rank of this last matrix is 3. Since A is row equivalent to it,
rank(A) = 3. Hence the only solution is the trivial solution, namely 0.
2) Solve
2x + 2y + 5z + 3w = 0
4x + y + z + w = 0
3x + 2y + 3z + 4w = 0
x + 3y + 7z + 6w = 0
Solution:
The system can be written as follows:
2 2 5 3   x   0
4 1 1 1   y = 0
3 2 3 4   z   0
1 3 7 6   w   0
R1 ↔ R4, then R2 − 4R1, R3 − 3R1, R4 − 2R1 give
1   3    7    6
0 −11  −27  −23
0  −7  −18  −14
0  −4   −9   −9
Next, R2 − 3R4 and R3 − 2R4 give
1  3  7  6
0  1  0  4
0  1  0  4
0 −4 −9 −9
and R3 − R2, R4 + 4R2 give
1 3  7 6
0 1  0 4
0 0  0 0
0 0 −9 7
Hence rank A = 3 and
dim S = 4 − 3 = 1:
all the solutions of the system are scalar multiples of one single non-zero vector.
From the reduced system, y + 4w = 0 and −9z + 7w = 0, so
y = −4w, z = (7/9)w, and x = −3y − 7z − 6w
= 12w − (49/9)w − 6w = (5/9)w.
∴ (x, y, z, w) = ((5/9)w, −4w, (7/9)w, w)
= (w/9)(5, −36, 7, 9).
Thus the solution space is S = {λ(5, −36, 7, 9) : λ ∈ R}.
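The answer to example 2 is easy to verify mechanically; a NumPy check, with the coefficient signs as read from the reduction steps:

```python
import numpy as np

A = np.array([[2, 2, 5, 3],
              [4, 1, 1, 1],
              [3, 2, 3, 4],
              [1, 3, 7, 6]], dtype=float)
v = np.array([5.0, -36.0, 7.0, 9.0])
print(A @ v)                                   # the zero vector
print(A.shape[1] - np.linalg.matrix_rank(A))   # dim S
```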
3) Show that the only real value λ for which the following equations
have a nonzero solution is λ = 6:
x + 2y + 3z = λx;
3x + y + 2z = λy;
2x + 3y + z = λz.
Solution:
The system can be rewritten as
1−λ  2   3    x   0
 3  1−λ  2    y = 0
 2   3  1−λ   z   0
R1 + R2 gives first row (4−λ, 3−λ, 5), and then
R1 + R3 gives first row (6−λ, 6−λ, 6−λ).
If λ = 6, the first row becomes (0, 0, 0) and the matrix
becomes
0  0  0
3 −5  2
2  3 −5
whose rank is 2.
∴ dim S = 3 − 2 = 1, and the system has nonzero solutions.
If λ ≠ 6, divide the first row by 6 − λ to get R1 = (1, 1, 1); then R2 − 3R1, R3 − 2R1 give
1    1      1
0 −(2+λ)   −1      If λ = −2, this matrix is
0    1   −(1+λ)
1 1  1
0 0 −1    which has rank 3.
0 1  1
In this case there is only one solution, namely the trivial (zero) solution.
If λ ≠ −2, then R2 + (2+λ)R3 turns
1    1      1            1 1        1
0 −(2+λ)   −1     into   0 0 −(λ² + 3λ + 3)
0    1   −(1+λ)          0 1     −(1+λ)
Since λ² + 3λ + 3 = 0 has no real roots, we may multiply
R2 by −1/(λ² + 3λ + 3) to get
1 1    1
0 0    1
0 1 −(1+λ)
and R1 − R2, R3 + (1+λ)R2 give a matrix of rank 3, so again
only the trivial solution. Hence λ = 6 is the only real value giving nonzero solutions.
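In matrix form, example 3 asks for the real λ with Mv = λv for some v ≠ 0. Each row of M sums to 6, so (1, 1, 1) is an eigenvector with eigenvalue 6; a NumPy check:

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [3.0, 1.0, 2.0],
              [2.0, 3.0, 1.0]])
v = np.ones(3)
print(M @ v)                  # equals 6 * v
ev = np.linalg.eigvals(M)
# exactly one eigenvalue is real (namely 6); the other two are complex:
print(sum(abs(e.imag) < 1e-9 for e in ev))
```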
Solution:
The augmented matrix (A | B) is
2  6   0  11
6 20  −6   3
0  6 −18   1
R2 − 3R1 gives
2  6   0  11
0  2  −6 −30
0  6 −18   1
and R3 − 3R2 gives
2  6   0  11
0  2  −6 −30
0  0   0  91
Hence
       2  6   0                    2 6  0  11
rank   6 20  −6   = 2  and  rank   0 2 −6 −30  = 3.
       0  6 −18                    0 0  0  91
Since the two ranks differ, the system is inconsistent.
2x − 2y + 3z = 2;
x − y + z = −1 are consistent, and solve them.
Solution:
                              1  2 −1  3
The augmented matrix (A|B) is 3 −1  2  1
                              2 −2  3  2
                              1 −1  1 −1
R2 − R4 and R3 − 2R4 give
1  2 −1  3
2  0  1  2
0  0  1  4
1 −1  1 −1
R2 − R3 and R4 − R3 give
1  2 −1  3
2  0  0 −2
0  0  1  4
1 −1  0 −5
R1 + R3 and R2 × (1/2) give
1  2  0  7
1  0  0 −1
0  0  1  4
1 −1  0 −5
R1 − R2 and R4 − R2 give
0  2  0  8
1  0  0 −1
0  0  1  4
0 −1  0 −4
and R1 + 2R4 gives
0  0  0  0
1  0  0 −1
0  0  1  4
0 −1  0 −4
The system has precisely one solution, which is the same as the solution of the system
0  0  0        0
1  0  0   x   −1
0  0  1   y =  4
0 −1  0   z   −4
i.e. x = −1, z = 4, y = 4.
Solution:
Writing the remaining entries of the third row as λ and μ, the augmented matrix is
        1 1 1  6
(A|B) = 1 2 3 10
        1 2 λ  μ
R3 − R2 gives
1 1  1     6
1 2  3    10
0 0 λ−3  μ−10
and R2 − R1 gives
1 1  1     6
0 1  2     4
0 0 λ−3  μ−10
When λ = 3 and μ = 10 the last row vanishes; taking z = t we get y = 4 − 2t and x = 6 − y − z = 2 + t.
∴ The solution space is {(2, 4, 0) + t(1, −2, 1) : t ∈ R}.
7) Discuss the system
x + y + 4z = 6;
x + 2y − 2z = 6;
λx + y + z = 6 for different values of λ.
Solution:
        1 1  4 6
(A|B) = 1 2 −2 6
        λ 1  1 6
R2 − R1 gives
1 1  4 6
0 1 −6 0
λ 1  1 6
R1 − R2 gives
1 0 10 6
0 1 −6 0
λ 1  1 6
R3 − R2 gives
1 0 10 6
0 1 −6 0
λ 0  7 6
and R3 − λR1 gives
1 0   10      6
0 1   −6      0
0 0 7 − 10λ 6 − 6λ
If λ = 7/10 the last row is (0, 0, 0 | 9/5), so
the system is consistent iff λ ≠ 7/10.
In case λ ≠ 7/10, rank A = rank (A | B) = 3 and the system has a unique
solution.
Check your progress
1) Show that the following system of equations is not consistent.
x 4 y 7 z 14;
3x 8 y 2 z 13;
7 x 8 y 26 z 5
2) Solve the following systems
(i) x y z 6;
x 2 y 3z 10;
x 2 y 4z 1
(ii) 2 x 3 y z 9;
x 2 y 3z 6;
3x y 2 z 8
(iii) x 2 y 5z 9;
3x y 2 z 5;
2 x 3 y z 3;
4 x 5 y z 3
13.4 SUMMARY :
14
SIMILARITY OF MATRICES
(Characteristic Polynomial of A Linear
Transformation)
Unit Structure :
14.0 Objectives
14.1 Similarity of Matrices
14.2 The Characteristic Polynomial of a Square Matrix
14.3 Characteristic Polynomial of a Linear Transformation
14.4 Summary
14.0 OBJECTIVES
Now note that EAE⁻¹ is invertible, its inverse being EA⁻¹E⁻¹. But this
implies that B is invertible, which in fact it is not. Consequently
such an invertible E does not exist; that is, A and B are not similar.
Proof: Recall the identity matrix I = diag(1, ..., 1), with ones on the diagonal and zeros elsewhere.
Also, since B ∼ C, there exists an invertible F such that C = FBF⁻¹. (**)
Combining this with B = EAE⁻¹ gives C = F(EAE⁻¹)F⁻¹ = (FE)A(FE)⁻¹, so
A ∼ C, proving (3).
Let GL(n, R) = { E ∈ M(n, R) : E is invertible }.
Also, for A ∈ M(n, R) we put
[A] = { EAE⁻¹ : E ∈ GL(n, R) },
the similarity class of A. We prove below that two similarity classes are either the same or
disjoint subsets of M(n, R).
Proof: Suppose [A] ∩ [B] is not empty. We have to prove [A] = [B].
We accomplish this by choosing a matrix C from the non-empty set
[A] ∩ [B] and verifying the equalities [A] = [C] = [B].
We prove [A] = [C] only (the other equality, namely [C] = [B], is proved
by the same method).
Now C ∈ [A] and therefore there exists G ∈ GL(n, R)
such that C = GAG⁻¹. (*)
Multiplying on the left by G⁻¹ and on the right by G, the equality (*)
gives G⁻¹CG = A. (**)
Now, let X ∈ [A]. Then there exists E ∈ GL(n, R) such that X = EAE⁻¹. (***)
Substituting (**) in (***), X = E(G⁻¹CG)E⁻¹ = (EG⁻¹)C(EG⁻¹)⁻¹, so X ∈ [C], giving [A] ⊆ [C]; interchanging the roles of A and C gives [C] ⊆ [A].
Now we have both [A] ⊆ [C] and [C] ⊆ [A], and therefore the
equality [A] = [C]. As remarked above, we prove [B] = [C] similarly, and
therefore [A] = [B].
B − tI = EAE⁻¹ − tI
= EAE⁻¹ − E(tI)E⁻¹
= E(A − tI)E⁻¹
For a 2 × 2 matrix A = [ a b ; c d ],
A − tI = [ a−t  b ; c  d−t ],
so p_A(t) = det(A − tI) = (a − t)(d − t) − bc.
Also, for a 3 × 3 matrix A = [ a b c ; d e f ; g h i ],
             a−t  b   c
p_A(t) = det  d  e−t  f
              g   h  i−t
= −t³ + (a + e + i)t² − ... ,
the coefficient of t² being the trace of A.
Let T(fj) = Σ_{i=1}^n bij fi (1 ≤ j ≤ n).
We claim that the matrices A and B are similar. For, let C = (cij), 1 ≤ i, j ≤ n,
be the matrix given by
fj = Σ_{k=1}^n ckj ek (1 ≤ j ≤ n).
Apply T to fj = Σ_{k=1}^n ckj ek
to get T(fj) = Σ_{k=1}^n ckj T(ek).
Now
T(fj) = Σ_{k=1}^n bkj fk = Σ_{k=1}^n bkj Σ_{ℓ=1}^n cℓk eℓ = Σ_{ℓ=1}^n (Σ_{k=1}^n cℓk bkj) eℓ
and
Σ_{k=1}^n ckj T(ek) = Σ_{k=1}^n ckj Σ_{ℓ=1}^n aℓk eℓ = Σ_{ℓ=1}^n (Σ_{k=1}^n aℓk ckj) eℓ.
∴ Σ_{ℓ=1}^n (Σ_{k=1}^n cℓk bkj) eℓ = Σ_{ℓ=1}^n (Σ_{k=1}^n aℓk ckj) eℓ
for the range 1 ≤ j ≤ n. Comparing the coefficients of each eℓ we get
Σ_{k=1}^n cℓk bkj = Σ_{k=1}^n aℓk ckj (1 ≤ ℓ, j ≤ n),
i.e. CB = AC, and hence B = C⁻¹AC: the matrices A and B are similar.
Now for any t ∈ R and for any two matrices A, B with A ∼ B we
have B − tI = E(A − tI)E⁻¹, so det(B − tI) = det(A − tI).
Consequently, we get p_B(t) = p_A(t): similar matrices have the same characteristic polynomial.
Choose a vector basis B = {e1, e2, ..., en} and let A be the matrix of T with
respect to the basis B. Then we consider p_A(t).
For example:
             4−t  3
p_T(t) = det          = (4 − t)² − 3 = t² − 8t + 13.
              1  4−t
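NumPy's `poly` returns the coefficients of det(tI − A), which for the matrix of this example reproduces t² − 8t + 13:

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [1.0, 4.0]])
print(np.poly(A))   # coefficients of t^2 - 8t + 13
```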
14.4 SUMMARY
EXERCISES
15
EIGENVALUES AND EIGENVECTORS
Unit Structure
15.0 Objectives
15.1 Eigenvalues and Eigenvectors
15.2 Finding eigenvalues and eigenvectors of a linear transformation
15.3 Summary
15.0 OBJECTIVES
a b a 2a
Thus a = 2a, and a ≠ 0 implies 2 = 1, which is not true.
Therefore such a λ does not exist; that is, S has no eigenvalues (and
eigenvectors).
To see this, choose a vector basis E = {e1, e2, ..., en} of V and consider the
linear transformation T : V → V whose matrix is the diagonal matrix
[T] = diag(λ1, λ2, ..., λn),
and consequently, the matrix (with respect to the same vector basis of V)
of any power Tᵏ of T is
[Tᵏ] = [T]ᵏ = diag(λ1ᵏ, λ2ᵏ, ..., λnᵏ).
Suppose v ∈ E_λ ∩ E_μ with λ ≠ μ, so that T(v) = λv = μv,
and therefore (λ − μ)v = 0 with λ − μ ≠ 0, which implies
v = 0, a contradiction. Therefore E_λ ∩ E_μ = {0}.
Using this result, it can be proved that T can have at most n distinct
eigenvalues. Below, we give an alternate proof.
T(v) = λv.
Suppose [T] = (aij) is the matrix of T with respect to the chosen vector
basis E. (Thus we have T(ej) = Σ_{i=1}^n aij ei.)
Therefore T(v) = T(Σ_{j=1}^n xj ej), x1, ..., xn being the components of
v with respect to E.
T(v) = Σ_{j=1}^n xj T(ej) = Σ_{j=1}^n xj Σ_{i=1}^n aij ei = Σ_{i=1}^n (Σ_{j=1}^n aij xj) ei.
Also, v = Σ_{i=1}^n xi ei. Therefore the equation T(v) = λv gives
Σ_{i=1}^n (Σ_{j=1}^n aij xj) ei = Σ_{i=1}^n λxi ei,
i.e.
Σ_{j=1}^n aij xj = λxi, 1 ≤ i ≤ n. (*)
But we can write xi = Σ_{j=1}^n δij xj, and therefore the above equations (*)
can be rewritten in the form:
Σ_{j=1}^n (aij − λδij) xj = 0, 1 ≤ i ≤ n.
Now this system has a "non-trivial" solution x1, ..., xn [non-trivial in the
sense that (x1, x2, ..., xn) ≠ (0, 0, ..., 0), remembering v should not be the zero
vector] if and only if
det(aij − λδij) = 0.
Example 1: Obtain the eigenvalues, and an eigenvector for each eigenvalue,
of the linear transformation T : R² → R² given by
T(x, y) = (2x + 3y, x + 4y), (x, y) ∈ R².
Solution:
Clearly the matrix [T] of T with respect to the standard basis
e1 = (1, 0), e2 = (0, 1) of R² is [T] = [ 2 3 ; 1 4 ].
Therefore the characteristic polynomial p_T of T is given by
    2−λ  3
det          = 0
     1  4−λ
i.e. p_T(λ) = (2 − λ)(4 − λ) − 3 = 0
p_T(λ) = λ² − 6λ + 5 = 0,
so the eigenvalues are λ = 1 and λ = 5. For λ = 5 the equations are
2x + 3y = 5x
x + 4y = 5y.
It has v = (1, 1) as a non-zero solution.
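Example 1 can be double-checked with NumPy's eigen-solver:

```python
import numpy as np

# Matrix of T(x, y) = (2x + 3y, x + 4y) in the standard basis.
A = np.array([[2.0, 3.0],
              [1.0, 4.0]])
vals, vecs = np.linalg.eig(A)
print(sorted(vals.real))            # the eigenvalues 1 and 5
v = np.array([1.0, 1.0])
print(A @ v)                        # equals 5 * v
```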
x + 0·y + 0·z = 3x
0 + 2y + z = 3y
0 + 2y + 5z = 3z
15.3 SUMMARY
Exercises:
2) In what way are the eigenspaces E_λ, E_{λ²}, E_{λ³} related?
(ix) T : R³ → R³, T(x, y, z) = (2x + z, y/2, x + 4z)
(x) T : R³ → R³, T(x, y, z) = (ax + y, x + ay + z, y + az)