Partial Solution Set, Leon §4.
4.3.2 Let [u1 , u2 ] and [v1 , v2 ] be ordered bases for R2 , where u1 = (1, 1)T , u2 = (−1, 1)T , v1 = (2, 1)T , and
v2 = (1, 0)T . Let L be the linear transformation defined by L(x) = (−x1 , x2 )T , and let B be the matrix
representing L with respect to [u1, u2]. {Note: B was actually part of problem 1 in this chapter. As
usual, the first column of B is [L(u1)]_U = (0, 1)^T, and the second column of B is [L(u2)]_U = (1, 0)^T.}
(a) Find the transition matrix S corresponding to the change of basis from [u1 , u2 ] to [v1 , v2 ].
Solution: The transition matrix in question is the one I’ve been calling TUV , i.e.,
S = V^{-1}U = \begin{pmatrix} 0 & 1 \\ 1 & -2 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ -1 & -3 \end{pmatrix}.
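For a quick numerical check of this product, a short NumPy sketch:

    import numpy as np

    U = np.array([[1, -1],
                  [1,  1]])        # columns are u1, u2
    V = np.array([[2, 1],
                  [1, 0]])         # columns are v1, v2
    S = np.linalg.inv(V) @ U       # transition from [u1, u2] to [v1, v2]
    print(S)                       # [[ 1.  1.]  [-1. -3.]]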
(b) Find the matrix A representing L with respect to [v1 , v2 ] by computing A = SBS −1 .
Solution: First we find S^{-1} = \frac{1}{2}\begin{pmatrix} 3 & 1 \\ -1 & -1 \end{pmatrix}. Then it is a simple matter to determine that
A = SBS^{-1} = \begin{pmatrix} 1 & 0 \\ -4 & -1 \end{pmatrix}.
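Likewise, an optional NumPy sketch checks part (b), assuming the S and B above:

    import numpy as np

    S = np.array([[ 1,  1],
                  [-1, -3]])
    B = np.array([[0, 1],          # L with respect to [u1, u2]
                  [1, 0]])
    A = S @ B @ np.linalg.inv(S)
    print(A)                       # [[ 1.  0.]  [-4. -1.]]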
4.3.3 Let L be the linear transformation on R3 given by
L(x) = (2x1 − x2 − x3 , 2x2 − x1 − x3 , 2x3 − x1 − x2 )T ,
and let A be the matrix representing L with respect to the standard basis for R3 . If u1 = (1, 1, 0)T , u2 =
(1, 0, 1)T , and u3 = (0, 1, 1)T , then [u1 , u2 , u3 ] is an ordered basis for R3 .
(a) Find the transition matrix U corresponding to the change of basis from [u1 , u2 , u3 ] to the standard
basis.
(b) Determine the matrix B representing L with respect to [u1 , u2 , u3 ].
Solution:
(a) This is simply U = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}.
(b) Somewhat surprisingly, B = U −1 AU = A. An interesting sidelight: this means that U A = AU , i.e.,
we have an instance of a commuting pair of matrices.
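An optional NumPy sketch confirms both B = A and the commuting claim:

    import numpy as np

    A = np.array([[ 2, -1, -1],
                  [-1,  2, -1],
                  [-1, -1,  2]])       # L with respect to the standard basis
    U = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1]])          # columns are u1, u2, u3
    B = np.linalg.inv(U) @ A @ U
    print(np.allclose(B, A))           # True
    print(np.allclose(U @ A, A @ U))   # True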
4.3.4 Let L be the linear operator mapping R^3 into R^3 defined by L(x) = Ax, where A = \begin{pmatrix} 3 & -1 & -2 \\ 2 & 0 & -2 \\ 2 & -1 & -1 \end{pmatrix}.
Let v1 = (1, 1, 1)T , v2 = (1, 2, 0)T , and v3 = (0, −2, 1)T . Find the transition matrix V corresponding to
a change of basis from [v1 , v2 , v3 ] to the standard basis, and use it to determine the matrix B representing
L with respect to [v1 , v2 , v3 ].
Solution: The transition matrix is V = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & -2 \\ 1 & 0 & 1 \end{pmatrix}. We want
B = V^{-1}AV = \begin{pmatrix} -2 & 1 & 2 \\ 3 & -1 & -2 \\ 2 & -1 & -1 \end{pmatrix}\begin{pmatrix} 3 & -1 & -2 \\ 2 & 0 & -2 \\ 2 & -1 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & -2 \\ 1 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
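The hand arithmetic here is easy to botch, so an optional NumPy sketch to double-check:

    import numpy as np

    A = np.array([[3, -1, -2],
                  [2,  0, -2],
                  [2, -1, -1]])
    V = np.array([[1, 1,  0],
                  [1, 2, -2],
                  [1, 0,  1]])         # columns are v1, v2, v3
    B = np.linalg.inv(V) @ A @ V
    print(np.round(B))                 # diag(0, 1, 1)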
4.3.5 Let L be the linear operator on P3 defined by
L(p(x)) = xp'(x) + p''(x).
(a) Find the matrix A representing L with respect to [1, x, x2 ].
(b) Find the matrix B representing L with respect to [1, x, 1 + x2 ].
(c) Find the matrix S such that B = S −1 AS.
(d) Given p(x) = a0 + a1 x + a2(1 + x^2), find L^n(p(x)).
Solution:
(a) We start by applying L to the basis vectors: L(1) = 0, L(x) = x, and L(x^2) = 2x^2 + 2. The corresponding coordinate vectors become the columns of A = \begin{pmatrix} 0 & 0 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}.
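The images of the basis vectors can be double-checked symbolically; the SymPy sketch below simply applies L(p) = xp'(x) + p''(x) to each:

    import sympy as sp

    x = sp.symbols('x')
    L = lambda p: sp.expand(x*sp.diff(p, x) + sp.diff(p, x, 2))
    for p in (sp.Integer(1), x, x**2):
        print(p, '->', L(p))           # 1 -> 0, x -> x, x**2 -> 2*x**2 + 2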
(b) The coordinate vectors for 1 and x are unchanged, but the coordinate vector for 2x^2 + 2 = 2(1 + x^2) is now (0, 0, 2)^T, so B = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}.
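For instance, a SymPy one-liner (same operator as above) confirms that L(1 + x^2) is exactly twice the third basis vector:

    import sympy as sp

    x = sp.symbols('x')
    L = lambda p: sp.expand(x*sp.diff(p, x) + sp.diff(p, x, 2))
    print(sp.simplify(L(1 + x**2) - 2*(1 + x**2)))   # 0, i.e. L(1 + x^2) = 2(1 + x^2)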
(c) The change-of-basis matrix has for its columns the coordinate vectors of the basis from part (b) with respect to [1, x, x^2]:
S = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
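An optional NumPy sketch verifies that this S really does conjugate A to B:

    import numpy as np

    A = np.array([[0, 0, 2],
                  [0, 1, 0],
                  [0, 0, 2]])
    B = np.array([[0, 0, 0],
                  [0, 1, 0],
                  [0, 0, 2]])
    S = np.array([[1, 0, 1],
                  [0, 1, 0],
                  [0, 0, 1]])
    print(np.allclose(np.linalg.inv(S) @ A @ S, B))   # True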
(d) The coordinate vector of p(x) with respect to [1, x, 1 + x^2] is (a0, a1, a2)^T. The nth power of B is simple to compute because of the diagonal structure of B:
B^n = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2^n \end{pmatrix}.
It follows that the coordinate vector for L^n(p(x)) is B^n(a0, a1, a2)^T = (0, a1, 2^n a2)^T, so L^n(p(x)) = a1 x + 2^n a2 (1 + x^2).
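As an illustration only, with hypothetical values a0 = 1, a1 = 2, a2 = 3 and n = 4, the coordinate computation looks like this:

    import numpy as np

    a0, a1, a2, n = 1, 2, 3, 4           # hypothetical coefficients and power
    Bn = np.diag([0, 1, 2**n])           # B^n in the basis [1, x, 1 + x^2]
    print(Bn @ np.array([a0, a1, a2]))   # [ 0  2 48], i.e. (0, a1, 2^n a2)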
4.3.8 Suppose that A = SΛS −1 , where Λ is a diagonal matrix with main diagonal λ1 , λ2 , . . . , λn .
(a) Show that Asi = λi si for each 1 ≤ i ≤ n.
(b) Show that if x = \sum_{i=1}^{n} \alpha_i s_i, then A^k x = \sum_{i=1}^{n} \alpha_i \lambda_i^k s_i.
(c) Suppose that |λi | < 1 for each 1 ≤ i ≤ n. What happens to Ak x as k → ∞?
Solution:
(a) For any choice of i, 1 ≤ i ≤ n, we have
\begin{align*}
A s_i &= S\Lambda S^{-1} s_i \\
      &= S\Lambda (S^{-1} s_i) \\
      &= S\Lambda e_i \\
      &= S(\Lambda e_i) \\
      &= S(\lambda_i e_i) \\
      &= \lambda_i S e_i \\
      &= \lambda_i s_i.
\end{align*}
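A concrete illustration, using an arbitrarily chosen S and Λ rather than anything from the problem:

    import numpy as np

    S = np.array([[1.0, 1.0],
                  [0.0, 1.0]])                 # an arbitrary nonsingular S
    Lam = np.diag([0.5, -0.25])                # Λ with diagonal λ1, λ2
    A = S @ Lam @ np.linalg.inv(S)
    for i in range(2):
        s_i, lam_i = S[:, i], Lam[i, i]
        print(np.allclose(A @ s_i, lam_i * s_i))   # True, True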
(b) This is easily proven by induction: A^0 x = x = \sum_{i=1}^{n} \alpha_i s_i. Assume that A^k x = \sum_{i=1}^{n} \alpha_i \lambda_i^k s_i for some k ∈ N. Then
\begin{align*}
A^{k+1} x &= A A^k x \\
          &= A \sum_{i=1}^{n} \alpha_i \lambda_i^k s_i \\
          &= \sum_{i=1}^{n} A \alpha_i \lambda_i^k s_i \\
          &= \sum_{i=1}^{n} \alpha_i \lambda_i^k A s_i \\
          &= \sum_{i=1}^{n} \alpha_i \lambda_i^k \lambda_i s_i \\
          &= \sum_{i=1}^{n} \alpha_i \lambda_i^{k+1} s_i,
\end{align*}
and the result follows by induction. ✷
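A numerical spot-check of the identity, reusing the same arbitrarily chosen S and Λ together with a hypothetical coefficient vector α:

    import numpy as np

    S = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    lam = np.array([0.5, -0.25])
    A = S @ np.diag(lam) @ np.linalg.inv(S)
    alpha = np.array([2.0, -1.0])                  # x = alpha_1 s_1 + alpha_2 s_2
    x = S @ alpha
    k = 5
    print(np.allclose(np.linalg.matrix_power(A, k) @ x,
                      S @ (alpha * lam**k)))       # True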
(c) Each term in the preceding sum vanishes, since if |\lambda_i| < 1 then \lim_{k\to\infty} \lambda_i^k = 0. Hence A^k x \to 0 (the zero vector) as k \to \infty.
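And the decay itself for that same example, where every |λ_i| < 1:

    import numpy as np

    S = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    A = S @ np.diag([0.5, -0.25]) @ np.linalg.inv(S)
    x = np.array([3.0, 1.0])
    for k in (1, 5, 20, 50):
        print(k, np.linalg.norm(np.linalg.matrix_power(A, k) @ x))
    # the norms shrink toward 0, so A^k x -> 0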
4.3.9 Suppose that A = ST , where S is nonsingular. Let B = T S. Show that B is similar to A.
Proof: Assume that A is as described, i.e., that A = ST and that S is nonsingular. Then
B = TS = (S^{-1}S)TS = S^{-1}(ST)S = S^{-1}AS,
so B is similar to A. ✷
What’s the point? Given any square S and T , with at least one of the two nonsingular, we know that
it’s unlikely that ST = T S. But at least ST and T S are similar. And that (as we shall see) means that
they have much in common (eigenvalues, for example).
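For what it's worth, a quick random experiment illustrates the shared eigenvalues (S and T drawn at random, S almost surely nonsingular):

    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.standard_normal((3, 3))    # almost surely nonsingular
    T = rng.standard_normal((3, 3))
    ev_ST = np.sort_complex(np.linalg.eigvals(S @ T))
    ev_TS = np.sort_complex(np.linalg.eigvals(T @ S))
    print(np.allclose(ev_ST, ev_TS))   # True (up to roundoff)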
4.3.10 Let A and B be n × n matrices. Show that if A is similar to B, then there exist n × n matrices S and T ,
with S nonsingular, such that A = ST and B = T S.
Solution: Well, at least a hint. Note that we are proving the converse of 4.3.9. This is perhaps easier
than it initially seems. Assume that A is similar to B. You may then write B in terms of A and another
(nonsingular) matrix S, right? Do so. Now what?
MA/Ra, October 24, 2002