
UNIT 1 SIMILARITY

Structure

1.1 Introduction
    Objectives
1.2 Matrix of a Linear Transformation
1.3 Similar Matrices
1.4 Diagonalisability
1.5 Summary
1.6 Solutions/Answers

1.1 INTRODUCTION
In your undergraduate course on linear algebra you would have studied the basics of
vector spaces: linear independence, bases, linear transformations, eigenvalues and
eigenvectors, characteristic and minimal polynomials. In this unit, we start by recalling
how a linear operator T on an n-dimensional vector space V can be seen as an n x n
matrix w.r.t. a fixed ordered basis. This gives a one-one onto correspondence between
the set of linear transformations and the set of n x n matrices. This is actually not a
canonical map, that is, the matrix of T changes if the ordered basis changes.

Interestingly, the matrices of linear transformations w.r.t. the different ordered bases
will be similar, as you will see in Sec. 1.3. Similarity is important for us to study,
chiefly because two similar matrices will have the same eigenvalues. Translating this
to linear operators, it means that a change of basis will not alter the set of eigenvalues
of the linear operator T .

Similarity leads us very naturally to the process of diagonalising a matrix or a linear transformation, which we shall discuss in Sec. 1.4. Apart from diagonalisable operators, we will also discuss nilpotent operators here.

We emphasise here, as we did in the course introduction, that you must try every exercise as you get to it. This will help you check whether you have understood the concepts and results discussed up to that point. Further, after studying the unit, you need to re-check whether you have achieved the following unit objectives.

Objectives

After going through this unit, you should be able to

- explain, and give examples of, similar matrices;
- prove, and apply, the result that similarity preserves trace, determinant, eigenvalues, and hence the minimal polynomial;
- define the algebraic and geometric multiplicity of eigenvalues;
- obtain, and apply, a characterisation of diagonalisable operators.

1.2 MATRIX OF A LINEAR TRANSFORMATION

The purpose of this section is to review some results from linear algebra that are needed in this course, and to establish certain notations, which we will use throughout this text.
Jordan Canonical Form
Let V be a vector space of dimension n over a field F, and let B = {v_1, ..., v_n} be a fixed ordered basis of V. For each vector v in V, there are unique a_1, ..., a_n in F such that v = a_1 v_1 + ... + a_n v_n. We write these scalars in the form of a column matrix:

[v]_B = (a_1, a_2, ..., a_n)^t.

Since, for a fixed B, these scalars are uniquely determined by v ∈ V, this defines a mapping [ ]_B : V → F^n. Verify that the mapping [ ]_B is actually an isomorphism of vector spaces.

Let us look at some examples.

Example 1: Let V be the vector space of all polynomials of degree at most 2, with coefficients from the field of rational numbers, Q. Let p(t) ∈ V be given by p(t) = t^2 - 4t + 3.

i) Find [p(t)]_B, where B = {1, t, t^2}.

ii) Find [p(t)]_B', where B' = {t^2, t, 1}.

Solution: i) Since p(t) = 3 - 4t + t^2, [p(t)]_B = (3, -4, 1)^t.

ii) Similarly, [p(t)]_B' = (1, -4, 3)^t.

Note: From Example 1, you can see that the change in the order of the elements of the
basis changes the matrix representation of p(t) . This is why we usually fix the
ordering of a basis for a given situation.
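If you want to experiment with coordinate vectors on a computer, the calculation of Example 1 can be set up as a linear system. This is a NumPy sketch of our own (the helper name `coords` and the representation of polynomials by coefficient vectors are illustrative choices, not part of the unit):

```python
import numpy as np

def coords(v, basis):
    """Coordinate vector [v]_B: solve a1*b1 + ... + an*bn = v
    for the unique scalars a1, ..., an."""
    # Columns of M are the basis vectors, written in a reference basis.
    M = np.column_stack(basis)
    return np.linalg.solve(M, v)

# p(t) = 3 - 4t + t^2, stored as its coefficients in the monomial basis.
p = np.array([3.0, -4.0, 1.0])

B  = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]  # {1, t, t^2}
B2 = [np.array([0, 0, 1.0]), np.array([0, 1.0, 0]), np.array([1.0, 0, 0])]  # {t^2, t, 1}

print(coords(p, B))   # [ 3. -4.  1.]
print(coords(p, B2))  # [ 1. -4.  3.]
```

Note how reversing the order of the basis reverses the coordinate column, exactly as in the example.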

Example 2: Find [x]_B, where x = (α, β, γ)^t ∈ Q^3 and B = {e_1, e_2, e_3}, where e_i is the column matrix with 1 in the ith place and 0 elsewhere.

Solution: Since x = αe_1 + βe_2 + γe_3, [x]_B = (α, β, γ)^t = x.

Since [x]_B = x ∀ x ∈ Q^3, {e_1, e_2, e_3} is called the standard basis of Q^3.

Remark: If F is a field and e_i is a column matrix of size n whose ith entry is 1 and other entries are 0, then the ordered basis {e_1, ..., e_n} is called the standard basis of F^n. The reason for this is the same as given in Example 2.
Let us now go further into what happens with a change of basis. Let B and B' be bases of an n-dimensional vector space V over F, and let T ∈ L(V). Let B = {v_1, ..., v_n}. Then, for any v ∈ V,

v = a_1 v_1 + ... + a_n v_n for some a_1, ..., a_n in F.

So, Tv = a_1 Tv_1 + ... + a_n Tv_n.

Since the mapping [ ]_B' is linear, we have

[Tv]_B' = a_1 [Tv_1]_B' + ... + a_n [Tv_n]_B'.

So, [Tv]_B' = B'[T]_B [v]_B, where B'[T]_B denotes the n x n matrix whose jth column is [Tv_j]_B'.

Hence, for T ∈ L(V), v ∈ V and bases B and B' of V,

[Tv]_B' = B'[T]_B [v]_B.   ... (1)

Next, if S, T ∈ L(V) and B, B', B'' are ordered bases of V, then for v ∈ V, using (1), we have

[(S∘T)(v)]_B'' = B''[S∘T]_B [v]_B.

Also, (S∘T)(v) = S(Tv), and so, again by (1),

[(S∘T)(v)]_B'' = B''[S]_B' [Tv]_B' = B''[S]_B' B'[T]_B [v]_B.

Thus,

B''[S∘T]_B [v]_B = B''[S]_B' B'[T]_B [v]_B.

Since the identity above holds for every v ∈ V, and [ ]_B is one-one, we have the following identity:

B''[S∘T]_B = B''[S]_B' B'[T]_B.   ... (2)

The next theorem states how a change of basis changes the matrix of the linear
transformation.

Theorem 1: Let V be an n-dimensional vector space over a field F and let T ∈ L(V). If B and B' are ordered bases of V, then there is an invertible matrix P such that [T]_B' = P^-1 [T]_B P.

Proof: The proof is actually a repeated use of the identity (2):

[T]_B' = B'[I∘T∘I]_B' = B'[I]_B B[T]_B B[I]_B' = B'[I]_B [T]_B B[I]_B'.

Also, I_n = [I]_B = B[I]_B' B'[I]_B, and I_n = [I]_B' = B'[I]_B B[I]_B'.

Therefore, if P = B[I]_B', then P^-1 = B'[I]_B. This proves the theorem.

Theorem 1 can be restated as follows.

Theorem 1 (restated): Let B and B' be ordered bases of V and let T ∈ L(V). Then

[T]_B' = B'[I]_B [T]_B B[I]_B'.   ... (3)

(3) is called the change of basis formula.
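A quick numerical illustration of Theorem 1 may help. The following NumPy sketch uses random matrices of our own choosing (they are not from the text); the columns of P play the role of B[I]_B':

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

T_B = rng.standard_normal((n, n))  # [T]_B, the matrix of T in a basis B

# P = B[I]_B': its columns express the vectors of B' in terms of B.
# A random Gaussian matrix is invertible with probability 1.
P = rng.standard_normal((n, n))

T_B2 = np.linalg.inv(P) @ T_B @ P  # [T]_B' = P^-1 [T]_B P

# Consistency check: computing [Tv]_B' through either basis agrees.
v_B = rng.standard_normal(n)       # [v]_B
v_B2 = np.linalg.inv(P) @ v_B      # [v]_B'
print(np.allclose(np.linalg.inv(P) @ (T_B @ v_B), T_B2 @ v_B2))  # True
```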

We illustrate this with an example.

Example 4: Consider the vector space P_3(R) of polynomials with real coefficients and having degree at most 3. Let B = {1, t, t^2, t^3} and

B' = {q_1(t) = 1 - t, q_2(t) = 1 + t, q_3(t) = t^2 - t^3, q_4(t) = t^2 + t^3}

be ordered bases of P_3(R).

Let D be the differential operator on P_3(R). Find

i) [D]_B,

ii) [D]_B',

iii) an invertible matrix P such that [D]_B' = P^-1 [D]_B P.

Solution: i) Since D1 = 0, Dt = 1, Dt^2 = 2t and Dt^3 = 3t^2,

        [0 1 0 0]
[D]_B = [0 0 2 0]
        [0 0 0 3]
        [0 0 0 0]

ii) You can see that

Dq_1(t) = -1 = -(1/2)q_1(t) - (1/2)q_2(t),
Dq_2(t) = 1 = (1/2)q_1(t) + (1/2)q_2(t),
Dq_3(t) = 2t - 3t^2 = -q_1(t) + q_2(t) - (3/2)q_3(t) - (3/2)q_4(t),
Dq_4(t) = 2t + 3t^2 = -q_1(t) + q_2(t) + (3/2)q_3(t) + (3/2)q_4(t).

Thus,

         [-1/2  1/2  -1   -1 ]
[D]_B' = [-1/2  1/2   1    1 ]
         [  0    0  -3/2  3/2]
         [  0    0  -3/2  3/2]

iii) P = B[I]_B', whose columns express q_1, ..., q_4 in terms of B:

    [ 1  1  0  0]
P = [-1  1  0  0]
    [ 0  0  1  1]
    [ 0  0 -1  1]

Check that [D]_B' = P^-1 [D]_B P.
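The matrices of Example 4 can be verified numerically. In this NumPy sketch, the columns of P hold the coordinates of q_1, ..., q_4 in the monomial basis:

```python
import numpy as np

# [D]_B for B = {1, t, t^2, t^3}: column j holds the coordinates of
# the derivative of the jth basis polynomial.
D_B = np.array([[0., 1, 0, 0],
                [0, 0, 2, 0],
                [0, 0, 0, 3],
                [0, 0, 0, 0]])

# P = B[I]_B' for B' = {1 - t, 1 + t, t^2 - t^3, t^2 + t^3}.
P = np.array([[1., 1, 0, 0],
              [-1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, -1, 1]])

D_B2 = np.linalg.inv(P) @ D_B @ P  # [D]_B' by the change of basis formula
print(D_B2)
```

The printed matrix has columns (-1/2, -1/2, 0, 0), (1/2, 1/2, 0, 0), (-1, 1, -3/2, -3/2) and (-1, 1, 3/2, 3/2), matching the coordinates of Dq_1, ..., Dq_4 computed above.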
1.3 SIMILAR MATRICES
In Theorem 1, you saw that if B and B' are two bases of a vector space V, and if T ∈ L(V), then [T]_B' = P^-1 [T]_B P for some non-singular matrix P. In fact, this relationship between [T]_B and [T]_B' shows that they are similar to each other, as the following definition tells us.

Definition: Let A and B be n x n matrices. We say that B is similar to A if there is an n x n invertible matrix P such that B = P^-1 A P.

Now, if B is similar to A, is A similar to B? Note that, if B = P^-1 A P, then A = Q^-1 B Q, where Q = P^-1. So A is also similar to B. In other words, B is similar to A if and only if A is similar to B. Therefore, we also say that A and B are similar matrices.

A few short exercises here.

E4) Check whether "is similar to" is an equivalence relation on M_n(R).

E5) Show that the set S = {P^-1 A P | P is invertible} is the set of all those matrices which are similar to A ∈ M_n(R).

E6) Find all the matrices similar to the identity matrix I_n, and all the matrices similar to 0 ∈ M_n(R).

Now some examples showing how we can check whether two given matrices are similar or not.

Example 5: Check whether the matrices [1 0; 0 2] and [2 0; 0 1] are similar.

Solution: Looking at the elements of both the matrices, and their positions, we see that the second is obtained from the first by interchanging the two basis vectors. Indeed, with P = [0 1; 1 0], we have P^-1 = P and

P^-1 [1 0; 0 2] P = [2 0; 0 1].

So, the given matrices are similar.

Example 6: Show that the matrices [1 0; 0 2] and [2 0; 0 2] are not similar.

Solution: If these matrices were similar, then for some invertible matrix P = [p_11 p_12; p_21 p_22] we would have

P [1 0; 0 2] = [2 0; 0 2] P.

Now, the first column of P [1 0; 0 2] is P(1, 0)^t = (p_11, p_21)^t. Also, the first column of [2 0; 0 2] P is (2p_11, 2p_21)^t. Thus, on equating these columns, we get p_11 = 2p_11 and p_21 = 2p_21.

So, we have p_11 = 0 = p_21. An invertible matrix cannot have a zero column, so there is no such invertible matrix P.

From the examples above, you can see that showing that two matrices are not similar could be tedious. There are some properties that similar matrices share, which can help us cut short this tedious process. In the following theorem, we discuss some of them. But first, some remarks.

Remarks: 1) Recall that if X, Y ∈ M_n(F), then det(XY) = det(X) det(Y). So, if X is invertible, det(X^-1) = 1/det(X).

2) The trace of the matrix X, written as tr X, is the sum of the diagonal entries of X, that is, tr X = Σ_{i=1}^{n} x_{ii}, where X = [x_{ij}]. You can verify that tr(XY) = tr(YX).
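The identity tr(XY) = tr(YX) is worth checking for yourself; here is a NumPy spot-check with random matrices (an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 5))
Y = rng.standard_normal((5, 5))

# tr(XY) = sum over i, k of x_ik * y_ki, which is symmetric in X and Y.
print(np.isclose(np.trace(X @ Y), np.trace(Y @ X)))  # True
```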

Now, let us state the theorem.

Theorem 2: Let A, B ∈ M_n(F) be similar matrices. Then

i) det A = det B,

ii) tr A = tr B,

iii) A and B have the same characteristic polynomial.

Proof: i) Let P be an invertible matrix so that B = P^-1 A P. Then

det B = det(P^-1 A P) = det P^-1 det A det P = (1/det P) det P det A = det A.

ii) tr B = tr(P^-1 A P) = tr(P^-1 (AP)) = tr((AP) P^-1) = tr(A (P P^-1)) = tr A.

iii) If C_A(t) and C_B(t) are the characteristic polynomials of A and B, then

C_B(t) = det(t I_n - B) = det(t I_n - P^-1 A P) = det(P^-1 (t I_n - A) P) = det(t I_n - A) = C_A(t).
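All three invariants of Theorem 2 can be checked numerically. In this NumPy sketch the matrices are arbitrary choices of ours; `np.poly` returns the coefficients of the characteristic polynomial:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-3, 4, (3, 3)).astype(float)
P = np.array([[1., 2, 0],
              [0, 1, 1],
              [1, 0, 1]])  # det P = 3, so P is invertible

B = np.linalg.inv(P) @ A @ P  # B is similar to A

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True
print(np.isclose(np.trace(A), np.trace(B)))            # True
print(np.allclose(np.poly(A), np.poly(B)))             # True
```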

Using Theorem 2, we are now in a position to make the following definitions.

Definition: Let V be an n-dimensional vector space over a field F, and let T ∈ L(V). Let B be an ordered basis of V. We define the determinant and the trace of T as det T = det [T]_B and tr T = tr [T]_B.

Now, how does Theorem 2 help us in ensuring that these terms are well-defined? We need to know that the definition does not depend on the basis we choose. So, if we take two different bases B and B' of V, is det [T]_B = det [T]_B' and tr [T]_B = tr [T]_B'?

Since the change of basis is a similarity transformation, and since similar matrices have the same determinant and the same trace, it follows that these definitions of the determinant and the trace of a linear transformation are independent of the choice of basis, and so are well-defined.

Here are some exercises now.

E7) Check whether [2 5; 0 2] and [3 2; 1 3] are similar.

E8) Give an example of two matrices which have the same determinant and trace, but are not similar.

E9) Find the determinant and the trace of the linear operator T : R^3 → R^3 defined by ...

E10) Let V be a finite-dimensional vector space over a field F. Let S and T be linear operators on V. Show that if a ∈ F and B is an ordered basis of V, then [aS + T]_B = a[S]_B + [T]_B.

E11) Show that the matrices ... are not similar.

Now that you have studied similar matrices, let us see when a matrix is similar to a diagonal matrix.

1.4 DIAGONALISABILITY

In your undergraduate studies you must have already studied a bit about diagonalisable matrices. You can also refer to Block 3 of MTE-02, the IGNOU course "Linear Algebra".

Let V be an n-dimensional vector space over F, and let T ∈ L(V). Then, as you know, V cannot have more than n linearly independent eigenvectors. If T has n linearly independent eigenvectors v_1, ..., v_n, then B = {v_1, ..., v_n} is a basis of V. Suppose Tv_i = λ_i v_i for i = 1, ..., n, λ_i ∈ F, where not all the λ_i's need be distinct. Then

[T]_B = diag(λ_1, ..., λ_n), a diagonal matrix.

This leads us to the following definitions.

Definitions: A linear operator T on a vector space V is called diagonalisable if V has a basis consisting of eigenvectors of T. An n x n matrix is called diagonalisable if it is similar to a diagonal matrix.
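Diagonalising a matrix numerically amounts to assembling its eigenvectors into the columns of P. A NumPy sketch with a matrix of our own choosing (eigenvalues 5 and 2):

```python
import numpy as np

A = np.array([[4., 1],
              [2, 3]])  # characteristic polynomial (t - 5)(t - 2)

# np.linalg.eig returns the eigenvalues and a matrix whose columns are
# eigenvectors; when they are linearly independent, P^-1 A P is diagonal.
eigvals, P = np.linalg.eig(A)
D = np.linalg.inv(P) @ A @ P

print(np.allclose(D, np.diag(eigvals)))  # True
```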

Now, how do we find out if a linear operator has enough linearly independent eigenvectors to be diagonalisable? The following theorem helps us in this.

Theorem 3: Eigenvectors corresponding to distinct eigenvalues are linearly independent.

Proof: Let λ_1, ..., λ_k be distinct eigenvalues of T, and let u_1, ..., u_k be corresponding eigenvectors, i.e., Tu_i = λ_i u_i, i = 1, ..., k.

For each i, define

S_i = [(T - λ_1 I) ··· (T - λ_{i-1} I)(T - λ_{i+1} I) ··· (T - λ_k I)] / [(λ_i - λ_1) ··· (λ_i - λ_{i-1})(λ_i - λ_{i+1}) ··· (λ_i - λ_k)].

Then S_i is a linear operator on V, S_i u_j = 0 for j ≠ i, and S_i u_i = u_i.

Now suppose that a_1 u_1 + ··· + a_k u_k = 0, a_i ∈ F ∀ i. Then 0 = S_1(a_1 u_1 + ··· + a_k u_k) = a_1 u_1, and so a_1 = 0. Applying S_2, ..., S_k in the same way shows that a_2 = ··· = a_k = 0. Hence u_1, ..., u_k are linearly independent.

It is immediate from the theorem above that if T has n (= dim V) distinct eigenvalues, then T is diagonalisable. Let us use this fact in an example.

Example 7: Let A ∈ M_3(R) be a matrix whose characteristic polynomial is (x - 2)(x - 1)(x + 1). Show that A is diagonalisable.

Solution: A has 3 distinct eigenvalues, namely 2, 1 and -1, and hence is diagonalisable.

Example 8: Show that the matrix A = [1 1; 0 1] is not diagonalisable.

Solution: Both eigenvalues of A equal 1, so if A were diagonalisable, A would have to be similar to I_2 (why?). But if there is an invertible matrix P such that P^-1 A P = I_2, then A = P I_2 P^-1 = I_2, a contradiction. So, A is not diagonalisable.

Note: You know that if a matrix has all its eigenvalues distinct, then it is diagonalisable. However, even if it doesn't have all its eigenvalues distinct, it can still be diagonalisable. The only condition is to find enough linearly independent eigenvectors. Let us consider an example of this situation.

Example 9: Show that a 3 x 3 matrix A with eigenvalues 1 and 2, the eigenvalue 2 repeated, can still be diagonalisable.

Solution: A has eigenvalues 1 and 2. Find bases for the eigenspaces W_1 and W_2 of A; if, between them, they supply three linearly independent eigenvectors, then A is diagonalisable.

Finally, we give a necessary and sufficient condition for T to be a diagonalisable operator.

Theorem 5: Let T be a linear operator on an n-dimensional vector space over a field F. Assume that the characteristic polynomial of T has all its roots in F. Then T is diagonalisable if and only if, for each eigenvalue λ ∈ F, its algebraic multiplicity is equal to its geometric multiplicity.

Proof: Assume first that the algebraic multiplicity of each eigenvalue is equal to its geometric multiplicity. Let λ_1, ..., λ_k be the distinct eigenvalues of T, with geometric multiplicities n_1, ..., n_k, respectively. Since the geometric multiplicity of each eigenvalue is equal to its algebraic multiplicity, the characteristic polynomial of T is (t - λ_1)^{n_1} ··· (t - λ_k)^{n_k}, and n_1 + ··· + n_k = n. Let B_i = {u_{i1}, ..., u_{in_i}} be a set of linearly independent eigenvectors corresponding to the eigenvalue λ_i. Then, to show that T is diagonalisable, we need to verify that B = B_1 ∪ ··· ∪ B_k is a basis of V.

Since B has n elements, all we need to check is that B is a linearly independent set. For this purpose, let S_i be the linear operator defined in the proof of Theorem 3. Then S_i u_{rq} = 0 if i ≠ r, and S_i u_{iq} = u_{iq}. To check the linear independence of B, assume that

Σ_{i=1}^{k} Σ_{q=1}^{n_i} a_{iq} u_{iq} = 0.

Then, operating S_i on both sides, you can see that

Σ_{q=1}^{n_i} a_{iq} u_{iq} = 0.

Since the elements of B_i are linearly independent, a_{iq} = 0 for every q, and this holds for each i. Hence B is linearly independent, and the result is proved.

Conversely, assume that T is diagonalisable. Then V has a basis consisting of eigenvectors of T. Let B = B_1 ∪ ··· ∪ B_k, where λ_1, ..., λ_k are the distinct eigenvalues of T and B_i = {u_{i1}, ..., u_{in_i}} is the set of linearly independent eigenvectors of T corresponding to λ_i. Then Tu_{ij} = λ_i u_{ij} for j = 1, ..., n_i, i = 1, ..., k. Thus, [T]_B is a diagonal matrix in which λ_i appears n_i times.

Therefore, the characteristic polynomial of T is (t - λ_1)^{n_1} ··· (t - λ_k)^{n_k}. Hence, the algebraic multiplicity of each λ_i is n_i. Since u_{i1}, ..., u_{in_i} are linearly independent eigenvectors corresponding to the eigenvalue λ_i, it follows that the geometric multiplicity of λ_i is at least n_i. Since the geometric multiplicity cannot exceed the algebraic multiplicity, it follows that both are equal for each eigenvalue λ_i.

Now, an example to show how Theorem 5 can be useful.

Example 10: Show that a matrix in M_3(R) whose eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity one is not diagonalisable.

Solution: The algebraic multiplicity of 1 is 2, and the geometric multiplicity of 1 is one. So, by Theorem 5, the matrix cannot be diagonalisable.
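The criterion of Theorem 5 can be checked numerically: the geometric multiplicity of λ is dim ker(A - λI) = n - rank(A - λI). A NumPy sketch (the helper name and the example matrix are our own choices):

```python
import numpy as np

def multiplicities(A, tol=1e-8):
    """Return {eigenvalue: (algebraic, geometric)} for a square matrix A.
    Geometric multiplicity = dim ker(A - lam*I) = n - rank(A - lam*I)."""
    n = A.shape[0]
    out = {}
    for lam in np.linalg.eigvals(A):
        lam = complex(np.round(lam, 6))  # group nearly-equal eigenvalues
        if lam in out:
            continue
        alg = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))
        geo = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        out[lam] = (alg, geo)
    return out

# Eigenvalue 1 has algebraic multiplicity 2 but geometric multiplicity 1,
# so by Theorem 5 this matrix is not diagonalisable.
A = np.array([[1., 1, 0],
              [0, 1, 0],
              [0, 0, 2]])

print(multiplicities(A))  # eigenvalue 1 -> (2, 1); eigenvalue 2 -> (1, 1)
```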

Try some exercises now.

E14) Let T : R^3 → R^3 be defined by T(x, y, z) = ... Find the eigenvalues and eigenvectors of T. Is T a diagonalisable linear operator? Why?

E15) Give an example of two matrices which have the same characteristic polynomial, but one is diagonalisable and the other is not.

In Example 4, Sec. 1.2, we discussed the differential operator D. You know that D^4 = 0 and D^3 ≠ 0 on P_3(R). In fact, this kind of property is found in many operators of the kind we define below.

Definition: A non-zero linear operator T on a vector space V is called nilpotent if, for some positive integer r, T^r v = 0 for all v ∈ V. The positive integer k such that T^k = 0 and T^{k-1} ≠ 0 is called the nilpotency index of T.

Note: From the definition above, it follows that the minimal polynomial of a nilpotent operator T is t^k, where k is the nilpotency index of T.

Corresponding to the definition for operators, we have the following definition for matrices.

Definition: A non-zero matrix A is nilpotent if A^k = 0 for some k > 0. The least positive integer k such that A^{k-1} ≠ 0 and A^k = 0 is called the nilpotency index of A.

Let us consider some examples.

Example 11: Find the nilpotency index of the following matrices: ...

Solution: i) You should verify that A^2 = 0. Since A ≠ 0, the nilpotency index of A is 2.

ii) Compute successive powers of B until you reach the zero matrix; the least power for which this happens is the nilpotency index of B.

iii) You can verify that the nilpotency index of C is 4.

Remarks: A nilpotent operator is never diagonalisable. Can you see why? Indeed, if T is nilpotent and diagonalisable, then for any ordered basis B of V, the matrix [T]_B is similar to the zero matrix (why?). But we have seen that the only matrix similar to the zero matrix is the zero matrix itself, contradicting the assumption that T is non-zero.
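Nilpotency indices like those of Example 11 are easy to compute by taking successive powers. A NumPy sketch (the function name and the shift matrix N are our own illustrations; N acts on coordinates the way D acts on P_3(R)):

```python
import numpy as np

def nilpotency_index(A, max_power=None):
    """Least k with A^k = 0, or None if A is not nilpotent.
    For an n x n nilpotent matrix, A^n = 0, so n powers suffice."""
    n = A.shape[0]
    max_power = max_power or n
    M = np.eye(n)
    for k in range(1, max_power + 1):
        M = M @ A
        if np.allclose(M, 0):
            return k
    return None

# Shift matrix: sends e1 -> 0, e2 -> e1, e3 -> e2, e4 -> e3.
N = np.diag([1., 1, 1], k=1)

print(nilpotency_index(N))  # 4
```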

Here's an exercise now.

E16) Let A = ... Check whether or not A - I_3 is a nilpotent matrix.

We have now come to the end of this unit. Let us look at a brief overview of what we have covered in it.

1.5 SUMMARY

In this unit, we have covered the following points.

1. Change of basis is a similarity transformation.

2. Similar matrices have the same eigenvalues.

3. T is diagonalisable if and only if the algebraic multiplicity of every eigenvalue λ of T equals its geometric multiplicity (provided the characteristic polynomial of T has all its roots in F).

4. A nilpotent matrix (operator) is not diagonalisable.


1.6 SOLUTIONS/ANSWERS

∴ the required matrix is ...

Then, we get the equations ... On solving these equations, we get b = c = 1, a = -1.

Similarly, you can check that ...

You should verify that ...

Now, P = B[I]_B'. To find this, we write the columns of B' as a linear combination of the columns of B, as follows: ...

Check that [T]_B' = P^-1 [T]_B P.

-
E4) Let us say A B iff A is similar to B for A, B E M, (R).

NOW,A - A .

Also, - is a symmetric and transitive relation.


Hence - is an equivalence relation on M, (R) .
E5) By definition, any element of S is similar to A

-
Conversely, if B A , then 3 an invertible matrix P such that B = P-'AP, i.e.,
BE S .

E6) From E5 you can see that the only matrix similar to I, is I,. And the only
matrix similar to 0 is 0 .

gg: E7) If they were similar, they would have had the same determinant, which is not
SO.

E8) e . . [1 2
and [:y] (refer to E6).

d, E9) For this we find A =[TI, ,where B is the standard basis of R~. Then det A
and tr A are what we require.
IP.'

TrA=-1, det A = 4 .
E10) ... where (b_{1j}, ..., b_{nj})^t and (c_{1j}, ..., c_{nj})^t are the jth columns of [S]_B and [T]_B, respectively.

Hence the result.

E11) The reason is as in E7.

E12) i) You can check that its eigenvalues are -1, -1 and 2. So, to see if it is diagonalisable, we need to see if it has 2 linearly independent eigenvectors corresponding to -1.

Writing out the eigenvalue equations for λ = -1, you will find that the eigenspace W_{-1} contains two linearly independent eigenvectors. Therefore, the given matrix is diagonalisable.

ii) The eigenvalues are 2, 1, 1. Check that the eigenspace W_1 is spanned by a single eigenvector. So, there is only one linearly independent eigenvector corresponding to λ = 1. Therefore, the matrix is not diagonalisable.

E13) i) In E12 (i), you have shown that the geometric multiplicity of λ = -1 is 2, the same as its algebraic multiplicity. Again, both the multiplicities of λ = 2 are 1.

ii) The geometric multiplicity of λ = 1 is 1, and its algebraic multiplicity is 2. The geometric and algebraic multiplicities of λ = 2 are 1.

Regarding the relationship, see Theorem 4.

E14) Since the eigenvalues of this matrix are all distinct, i.e., 0 and (-3 ± √2)/2, T is diagonalisable. (You can also apply Theorem 5 to show this.)

E15) For example, both [1 1; 0 1] and I_2 have the same characteristic polynomial, (t - 1)^2. However, [1 1; 0 1] is not diagonalisable.

E16) Check that A - I_3 ≠ 0, but (A - I_3)^2 = 0.
