
Homework 7 Solutions

Joshua Hernandez
November 16, 2009
5.2 - Diagonalizability
2. For each of the following matrices $A \in M_{n\times n}(\mathbb{R})$, test $A$ for diagonalizability, and if $A$ is diagonalizable, find an invertible matrix $Q$ and a diagonal matrix $D$ such that $Q^{-1}AQ = D$.

b. $A = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix}$
Solution: Computing eigenvalues:

$\chi_A(\lambda) = \det\begin{pmatrix} 1-\lambda & 3 \\ 3 & 1-\lambda \end{pmatrix} = (1-\lambda)^2 - 3^2.$

This polynomial has roots $1 \mp 3 = 4, -2$. Two distinct eigenvalues mean that $A$ is diagonalizable. Then

$E_4 = N\begin{pmatrix} 1-4 & 3 \\ 3 & 1-4 \end{pmatrix} = N\begin{pmatrix} -3 & 3 \\ 3 & -3 \end{pmatrix} = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\}$

$E_{-2} = N\begin{pmatrix} 1-(-2) & 3 \\ 3 & 1-(-2) \end{pmatrix} = N\begin{pmatrix} 3 & 3 \\ 3 & 3 \end{pmatrix} = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\}.$

Our diagonalization is therefore

$A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 4 & 0 \\ 0 & -2 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}^{-1} =: QDQ^{-1}.$
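As an optional sanity check (not part of the assigned solution), the factorization can be verified symbolically; the sketch below assumes SymPy is available and uses the eigenvectors found above as the columns of Q.

    import sympy as sp

    A = sp.Matrix([[1, 3], [3, 1]])
    Q = sp.Matrix([[1, 1], [1, -1]])   # columns: eigenvectors for 4 and -2
    D = sp.diag(4, -2)

    print(A.eigenvals())               # {4: 1, -2: 1}
    print(Q.inv() * A * Q == D)        # True: Q^{-1} A Q = D
    print(Q * D * Q.inv() == A)        # True: A = Q D Q^{-1}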
d. $A = \begin{pmatrix} 7 & -4 & 0 \\ 8 & -5 & 0 \\ 6 & -6 & 3 \end{pmatrix}$

Solution: Computing eigenvalues:

$\chi_A(\lambda) = \det\begin{pmatrix} 7-\lambda & -4 & 0 \\ 8 & -5-\lambda & 0 \\ 6 & -6 & 3-\lambda \end{pmatrix} = (7-\lambda)(-5-\lambda)(3-\lambda) - \bigl(-4 \cdot 8 \cdot (3-\lambda)\bigr)$

$= (3-\lambda)(\lambda^2 - 2\lambda - 3) = (3-\lambda)(\lambda - 3)(\lambda + 1).$

This polynomial has roots 3 and $-1$ (a repeated root means that $A$ might not be diagonalizable).

$E_3 = N\begin{pmatrix} 7-3 & -4 & 0 \\ 8 & -5-3 & 0 \\ 6 & -6 & 3-3 \end{pmatrix} = N\begin{pmatrix} 4 & -4 & 0 \\ 8 & -8 & 0 \\ 6 & -6 & 0 \end{pmatrix} = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \right\}$

$E_{-1} = N\begin{pmatrix} 7-(-1) & -4 & 0 \\ 8 & -5-(-1) & 0 \\ 6 & -6 & 3-(-1) \end{pmatrix} = N\begin{pmatrix} 8 & -4 & 0 \\ 8 & -4 & 0 \\ 6 & -6 & 4 \end{pmatrix} = \operatorname{span}\left\{ \begin{pmatrix} 2 \\ 4 \\ 3 \end{pmatrix} \right\}.$

The two eigenspaces have a total dimension of 3, so $A$ is diagonalizable. Our diagonalization is therefore

$A = \begin{pmatrix} 1 & 0 & 2 \\ 1 & 0 & 4 \\ 0 & 1 & 3 \end{pmatrix}\begin{pmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 2 \\ 1 & 0 & 4 \\ 0 & 1 & 3 \end{pmatrix}^{-1} =: QDQ^{-1}.$
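The same kind of check works in the 3-by-3 case; a SymPy routine such as eigenvects or diagonalize may scale or reorder the eigenvectors differently from the hand computation, so the sketch below (again assuming SymPy) verifies the product directly.

    import sympy as sp

    A = sp.Matrix([[7, -4, 0], [8, -5, 0], [6, -6, 3]])
    Q = sp.Matrix([[1, 0, 2], [1, 0, 4], [0, 1, 3]])   # eigenvectors for 3, 3, -1
    D = sp.diag(3, 3, -1)

    print(A.eigenvals())            # {3: 2, -1: 1}
    print(Q * D * Q.inv() == A)     # True: A = Q D Q^{-1}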
f. $A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 3 \end{pmatrix}$

Solution: Since $A$ is an upper-triangular matrix, we can read its eigenvalues off the diagonal: $\lambda = 1, 3$. Then

$E_1 = N\begin{pmatrix} 1-1 & 1 & 0 \\ 0 & 1-1 & 2 \\ 0 & 0 & 3-1 \end{pmatrix} = N\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 2 \end{pmatrix} = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right\}.$

We needn't bother to compute $E_3$. The dimension of $E_1$ is 1, although the root $\lambda = 1$ has multiplicity 2. Therefore $A$ is not diagonalizable.
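This conclusion can also be cross-checked computationally (an illustration only, assuming SymPy): the geometric multiplicity of the eigenvalue 1 comes out as 1, and is_diagonalizable reports False.

    import sympy as sp

    A = sp.Matrix([[1, 1, 0], [0, 1, 2], [0, 0, 3]])

    # eigenvects() yields (eigenvalue, algebraic multiplicity, eigenspace basis)
    for lam, alg_mult, basis in A.eigenvects():
        print(lam, alg_mult, len(basis))   # for lam = 1: algebraic 2, geometric 1
    print(A.is_diagonalizable())           # False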
3b. Let $V = P_2(\mathbb{R})$. Define $T : V \to V$ by the mapping $T(ax^2 + bx + c) = cx^2 + bx + a$. If $T$ is diagonalizable, find a basis for $V$ relative to which the matrix of $T$ is diagonal.

Solution: If $\beta = \{1, x, x^2\}$ is the standard basis on $P_2(\mathbb{R})$, then

$A := [T]_\beta = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.$
Computing eigenvalues of $A$:

$\chi_A(\lambda) = \det\begin{pmatrix} 0-\lambda & 0 & 1 \\ 0 & 1-\lambda & 0 \\ 1 & 0 & 0-\lambda \end{pmatrix} = (0-\lambda)^2(1-\lambda) - (1-\lambda) = (1-\lambda)(\lambda^2 - 1) = -(1-\lambda)^2(1+\lambda).$

The roots of this polynomial are $\lambda = \pm 1$. Now,

$E_1 = N\begin{pmatrix} 0-1 & 0 & 1 \\ 0 & 1-1 & 0 \\ 1 & 0 & 0-1 \end{pmatrix} = N\begin{pmatrix} -1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & -1 \end{pmatrix} = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \right\}$

$E_{-1} = N\begin{pmatrix} 0-(-1) & 0 & 1 \\ 0 & 1-(-1) & 0 \\ 1 & 0 & 0-(-1) \end{pmatrix} = N\begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 1 \end{pmatrix} = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} \right\}.$
Thus $\gamma = \{(1, 0, 1), (0, 1, 0), (1, 0, -1)\}$ is a diagonalizing basis of $L_A$. Noting that

$[L_A]_\gamma = [L_{[T]_\beta}]_\gamma = [\phi_\beta\, T\, \phi_\beta^{-1}]_\gamma = [T]_{\phi_\beta^{-1}(\gamma)}, \quad (1)$

we know that $\phi_\beta^{-1}(\gamma) = \{1 + x^2,\ x,\ 1 - x^2\}$ is a diagonalizing basis of $T$ (equation (1) justifies the obvious final step of converting the vectors of $\gamma$ into their corresponding polynomials).
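To see the change of basis concretely, one can redo the computation on coordinate vectors. The sketch below (assuming SymPy; the variable names are illustrative) takes the coordinate vectors of 1 + x^2, x, 1 - x^2 relative to the standard basis as columns of a matrix S and checks that conjugating [T]_beta by S produces a diagonal matrix.

    import sympy as sp

    # [T]_beta for T(ax^2 + bx + c) = cx^2 + bx + a, with beta = {1, x, x^2}
    T_beta = sp.Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])

    # columns: coordinates of 1 + x^2, x, 1 - x^2 relative to beta
    S = sp.Matrix([[1, 0, 1], [0, 1, 0], [1, 0, -1]])

    print(S.inv() * T_beta * S)   # diag(1, 1, -1)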
8. Suppose that $A \in M_{n\times n}(F)$ has two distinct eigenvalues, $\lambda_1$ and $\lambda_2$, and that $\dim(E_{\lambda_1}) = n - 1$. Prove that $A$ is diagonalizable.

Solution: Distinct eigenspaces intersect trivially, and any eigenspace has dimension at least 1, so

$\dim(E_{\lambda_1} + E_{\lambda_2}) = \dim(E_{\lambda_1}) + \dim(E_{\lambda_2}) - \dim(E_{\lambda_1} \cap E_{\lambda_2}) = \dim(E_{\lambda_1}) + \dim(E_{\lambda_2}) \geq (n - 1) + 1 = n.$

The eigenspaces of $A$ span $F^n$, and so $A$ is diagonalizable.
11. Let $A$ be an $n \times n$ matrix that is similar to an upper triangular matrix, and has the distinct eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_k$ with corresponding multiplicities $m_1, m_2, \dots, m_k$. Prove the following statements.

Lemma: If $A$ and $B$ are similar matrices, then $\chi_A(\lambda) = \chi_B(\lambda)$.

Let $A = Q^{-1}BQ$ for some invertible matrix $Q \in M_{n\times n}(F)$. By the multiplicative property of determinants,

$\chi_A(\lambda) = \det(A - \lambda I) = \det(Q^{-1}BQ - \lambda I) = \det\bigl(Q^{-1}(B - \lambda I)Q\bigr) = \det(B - \lambda I) = \chi_B(\lambda).$

Solution: Let $M$ be an upper-triangular matrix such that $A = QMQ^{-1}$. It was proved (5.4:9) that the eigenvalues of $M$ coincide with its diagonal entries $M_{ii}$. Since similar matrices have the same characteristic polynomial (lemma, above), $A$ and $M$ share the eigenvalues $\lambda_i$ and multiplicities $m_i$.

a. $\operatorname{trace}(A) = \sum_{i=1}^k m_i \lambda_i$

Solution: By (2.5:10),

$\operatorname{trace}(A) = \operatorname{trace}(M) = \sum_{i=1}^n M_{ii} = \sum_{i=1}^k m_i \lambda_i. \quad (2)$

b. $\det(A) = (\lambda_1)^{m_1} (\lambda_2)^{m_2} \cdots (\lambda_k)^{m_k}$.

Solution: The determinant of an upper-triangular matrix is the product of its diagonal entries $M_{ii}$ (determinant property 4). By the multiplicative property of determinants,

$\det(A) = \det(QMQ^{-1}) = \det(M) = \prod_{i=1}^n M_{ii} = \prod_{i=1}^k \lambda_i^{m_i}. \quad (3)$

Finally, one can show (the proof is a little too complicated to give here, but see pp. 370 and 385 in the text) that every matrix is similar, over some field, to an upper-triangular matrix. The identities (2) and (3) above are therefore universal properties of matrices.
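As a concrete illustration of identities (2) and (3), the matrix from part 2(d) has eigenvalues 3 (multiplicity 2) and -1 (multiplicity 1), so the identities predict trace 3 + 3 - 1 = 5 and determinant 3 * 3 * (-1) = -9. A short SymPy sketch (an optional check, not part of the solution) confirms this.

    import sympy as sp

    A = sp.Matrix([[7, -4, 0], [8, -5, 0], [6, -6, 3]])
    eigs = A.eigenvals()                        # {3: 2, -1: 1}

    trace_from_eigs = sum(m * lam for lam, m in eigs.items())
    det_from_eigs = sp.prod(lam**m for lam, m in eigs.items())

    print(A.trace(), trace_from_eigs)           # 5 5
    print(A.det(), det_from_eigs)               # -9 -9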
14a. Find the general solution to the system of differential equations

$x' = x + y, \qquad y' = 3x - y. \quad (4)$

Solution: Let $V = \mathcal{C}^\infty(\mathbb{R}, \mathbb{R}^2)$ be the space of smooth curves in $\mathbb{R}^2$. We can consider the derivative as a linear operator $D : V \to V$. Then

$D\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} x + y \\ 3x - y \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 3 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} =: A\begin{pmatrix} x \\ y \end{pmatrix}.$

We diagonalize $A$ in the usual fashion:

$\chi_A(\lambda) = \det\begin{pmatrix} 1-\lambda & 1 \\ 3 & -1-\lambda \end{pmatrix} = (1-\lambda)(-1-\lambda) - 3 = \lambda^2 - 4.$

This has roots $\lambda = \pm 2$. Computing eigenspaces:

$E_2 = N\begin{pmatrix} 1-2 & 1 \\ 3 & -1-2 \end{pmatrix} = N\begin{pmatrix} -1 & 1 \\ 3 & -3 \end{pmatrix} = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\}$

$E_{-2} = N\begin{pmatrix} 1-(-2) & 1 \\ 3 & -1-(-2) \end{pmatrix} = N\begin{pmatrix} 3 & 1 \\ 3 & 1 \end{pmatrix} = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ -3 \end{pmatrix} \right\}.$

We have a basis of eigenvectors $\gamma = \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -3 \end{pmatrix} \right\}$. Let $\beta$ be the standard basis of $\mathbb{R}^2$. Changing basis,

$\begin{pmatrix} x \\ y \end{pmatrix} = [\mathrm{I}]_\gamma^\beta \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -3 \end{pmatrix}\begin{pmatrix} f_1 \\ f_2 \end{pmatrix},$

where $f_1(t)$ and $f_2(t)$ satisfy

$f_1'(t) = 2 f_1(t) \qquad \text{and} \qquad f_2'(t) = -2 f_2(t).$

These differential equations have solutions $f_1(t) = c_1 e^{2t}$ and $f_2(t) = c_2 e^{-2t}$. Observe, then,

$\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -3 \end{pmatrix}\begin{pmatrix} f_1(t) \\ f_2(t) \end{pmatrix} = \begin{pmatrix} c_1 e^{2t} + c_2 e^{-2t} \\ c_1 e^{2t} - 3 c_2 e^{-2t} \end{pmatrix}.$
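The general solution can be checked by direct substitution; the SymPy sketch below (an optional check, assuming SymPy) differentiates the claimed x(t) and y(t) and confirms that system (4) holds for arbitrary constants c1, c2.

    import sympy as sp

    t, c1, c2 = sp.symbols('t c1 c2')
    x = c1*sp.exp(2*t) + c2*sp.exp(-2*t)
    y = c1*sp.exp(2*t) - 3*c2*sp.exp(-2*t)

    print(sp.simplify(sp.diff(x, t) - (x + y)))      # 0, so x' = x + y
    print(sp.simplify(sp.diff(y, t) - (3*x - y)))    # 0, so y' = 3x - y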
18. Two linear operators $T$ and $U$ on an $n$-dimensional vector space $V$ are called simultaneously diagonalizable if there exists some basis $\beta$ of $V$ such that $[T]_\beta$ and $[U]_\beta$ are diagonal matrices.

a. Prove that if $T$ and $U$ are simultaneously diagonalizable operators, then $T$ and $U$ commute.

Lemma: If $D_1, D_2 \in M_{n\times n}$ are two diagonal matrices, then $D_1 D_2 = D_2 D_1$.

$(D_1 D_2)_{ij} = \sum_{k=1}^n (D_1)_{ik}(D_2)_{kj} = \sum_{k=1}^n \delta_{ik}(D_1)_{ik}\, \delta_{kj}(D_2)_{kj} = \delta_{ij}(D_1)_{ii}(D_2)_{ii} = \delta_{ij}(D_2)_{ii}(D_1)_{ii} = (D_2 D_1)_{ij}.$

Solution: Let $\beta$ be a basis of $V$ that diagonalizes both $T$ and $U$. Since diagonal matrices commute with each other (lemma, above),

$[TU]_\beta = [T]_\beta [U]_\beta = [U]_\beta [T]_\beta = [UT]_\beta.$

Now we can relate the two operators in the same way:

$TU = \phi_\beta^{-1}\, L_{[TU]_\beta}\, \phi_\beta = \phi_\beta^{-1}\, L_{[UT]_\beta}\, \phi_\beta = UT.$
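A small numerical illustration of the result (a sketch only, not a proof; the matrices below are arbitrary choices and SymPy is assumed): conjugating two diagonal matrices by the same invertible Q yields matrices that commute.

    import sympy as sp

    Q = sp.Matrix([[1, 2, 0], [0, 1, 1], [1, 0, 1]])   # any invertible matrix
    D1, D2 = sp.diag(1, 2, 3), sp.diag(-1, 5, 0)

    S = Q * D1 * Q.inv()   # matrix of an operator diagonalized by the columns of Q
    U = Q * D2 * Q.inv()   # matrix of a second operator diagonalized by the same basis

    print(D1 * D2 == D2 * D1)   # True (the lemma)
    print(S * U == U * S)       # True (the operators commute)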
6.1 - Inner Products and Norms
5. In $\mathbb{C}^2$, show that $\langle x, y \rangle = xAy^*$ is an inner product, where

$A = \begin{pmatrix} 1 & i \\ -i & 2 \end{pmatrix}.$
Solution: We test $\langle \cdot, \cdot \rangle$ for the various properties of an inner product.

1. Linearity (in the first position) follows from linearity of matrix multiplication:

$\langle x_1 + cx_2, y \rangle = (x_1 + cx_2)Ay^* = (x_1 A + cx_2 A)y^* = x_1 A y^* + c\, x_2 A y^* = \langle x_1, y \rangle + c\langle x_2, y \rangle.$
2. Conjugate symmetry: Observe that

$A^* = \begin{pmatrix} 1 & i \\ -i & 2 \end{pmatrix}^* = \begin{pmatrix} \overline{1} & \overline{-i} \\ \overline{i} & \overline{2} \end{pmatrix} = \begin{pmatrix} 1 & i \\ -i & 2 \end{pmatrix} = A.$

Thus (observing that the Hermitian adjoint of a scalar is just its complex conjugate),

$\langle y, x \rangle = yAx^* = \bigl((yAx^*)^*\bigr)^* = (xA^*y^*)^* = (xAy^*)^* = \overline{\langle x, y \rangle}.$
3. Coercivity:

$\langle (x_1, x_2), (x_1, x_2) \rangle = \begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} 1 & i \\ -i & 2 \end{pmatrix}\begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} \bar{x}_1 + i\bar{x}_2 \\ -i\bar{x}_1 + 2\bar{x}_2 \end{pmatrix} = |x_1|^2 + i x_1\bar{x}_2 - i\bar{x}_1 x_2 + 2|x_2|^2 = |x_1 - ix_2|^2 + |x_2|^2.$

If $(x_1, x_2) \neq (0, 0)$, then the right-hand side is a positive real number: it vanishes only when $x_2 = 0$ and $x_1 = ix_2 = 0$.
Compute $\langle x, y \rangle$ for $x = (1 - i, 2 + 3i)$ and $y = (2 + i, 3 - 2i)$.

Solution:

$\langle x, y \rangle = \begin{pmatrix} 1-i & 2+3i \end{pmatrix}\begin{pmatrix} 1 & i \\ -i & 2 \end{pmatrix}\begin{pmatrix} \overline{2+i} \\ \overline{3-2i} \end{pmatrix} = \begin{pmatrix} 1-i & 2+3i \end{pmatrix}\begin{pmatrix} 1 & i \\ -i & 2 \end{pmatrix}\begin{pmatrix} 2-i \\ 3+2i \end{pmatrix}$

$= \begin{pmatrix} 1-i & 2+3i \end{pmatrix}\begin{pmatrix} (2-i) + i(3+2i) \\ -i(2-i) + 2(3+2i) \end{pmatrix} = \begin{pmatrix} 1-i & 2+3i \end{pmatrix}\begin{pmatrix} 2i \\ 5+2i \end{pmatrix}$

$= (1-i)(2i) + (2+3i)(5+2i) = (2+2i) + (4+19i) = 6+21i.$
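Both parts of this problem can be cross-checked numerically; the NumPy sketch below (an optional illustration, assuming NumPy) evaluates x A y* for the given vectors and also confirms that A is Hermitian with positive eigenvalues, consistent with the coercivity argument above.

    import numpy as np

    A = np.array([[1, 1j], [-1j, 2]])
    x = np.array([1 - 1j, 2 + 3j])
    y = np.array([2 + 1j, 3 - 2j])

    # <x, y> = x A y*, with y* the entrywise conjugate of y
    print(x @ A @ y.conj())             # (6+21j)

    print(np.allclose(A, A.conj().T))   # True: A is Hermitian
    print(np.linalg.eigvalsh(A))        # both eigenvalues positive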
9. Let $\beta$ be a basis for a finite-dimensional inner product space.

a. Prove that if $\langle x, z \rangle = 0$ for all $z \in \beta$, then $x = 0$.

Solution: Since $\beta$ is a spanning set, we may write $x = \sum_{i=1}^n c_i z_i$, with all $z_i \in \beta$. Then

$\langle x, x \rangle = \left\langle x, \sum_{i=1}^n c_i z_i \right\rangle = \sum_{i=1}^n \overline{c_i}\, \langle x, z_i \rangle = 0.$

Coercivity of the inner product implies that $x = 0$.

b. Prove that if $\langle x, z \rangle = \langle y, z \rangle$ for all $z \in \beta$, then $x = y$.

Solution: If $\langle x, z \rangle = \langle y, z \rangle$ for all $z \in \beta$, then $\langle x - y, z \rangle = \langle x, z \rangle - \langle y, z \rangle = 0$ for all $z \in \beta$. By part (a), $x - y = 0$, so $x = y$.
10. Let $V$ be an inner product space, and suppose that $x$ and $y$ are orthogonal vectors in $V$. Prove that $\|x + y\|^2 = \|x\|^2 + \|y\|^2$. Deduce the Pythagorean theorem in $\mathbb{R}^2$.

Solution:

$\|x + y\|^2 = \langle x + y, x + y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle.$

By orthogonality of $x$ and $y$, this reduces to

$\langle x, x \rangle + \langle y, y \rangle = \|x\|^2 + \|y\|^2.$

In $\mathbb{R}^2$, if two orthogonal vectors $x$ and $y$ are laid head-to-tail, they form two edges of a right triangle, of which $x + y$ forms the third edge (the hypotenuse). If we denote the lengths of these edges by $a$, $b$, and $c$, respectively, then

$c^2 = \|x + y\|^2 = \|x\|^2 + \|y\|^2 = a^2 + b^2.$

This proves the Pythagorean Theorem.
12. Let $v_1, v_2, \dots, v_k$ be an orthogonal set in $V$, and let $a_1, a_2, \dots, a_k$ be scalars. Prove that

$\left\| \sum_{i=1}^k a_i v_i \right\|^2 = \sum_{i=1}^k |a_i|^2\, \|v_i\|^2.$

Solution: Clearly, in the case that $k = 1$,

$\left\| \sum_{i=1}^k a_i v_i \right\|^2 = \|a_1 v_1\|^2 = |a_1|^2\, \|v_1\|^2 = \sum_{i=1}^k |a_i|^2\, \|v_i\|^2.$

Now, assume the result is proven for sets of $k - 1$ or fewer orthogonal vectors. Suppose $v_1, \dots, v_k$ are orthogonal, and define $b_{k-1} = \sum_{i=1}^{k-1} a_i v_i$. Observe that

$\langle b_{k-1}, a_k v_k \rangle = \left\langle \sum_{i=1}^{k-1} a_i v_i,\ a_k v_k \right\rangle = \sum_{i=1}^{k-1} a_i \overline{a_k}\, \langle v_i, v_k \rangle = 0,$

so $a_k v_k$ and $b_{k-1}$ are orthogonal. Applying the previous problem,

$\left\| \sum_{i=1}^k a_i v_i \right\|^2 = \|a_k v_k + b_{k-1}\|^2 = \|a_k v_k\|^2 + \|b_{k-1}\|^2.$

By the inductive hypothesis,

$\|a_k v_k\|^2 + \|b_{k-1}\|^2 = |a_k|^2\, \|v_k\|^2 + \sum_{i=1}^{k-1} |a_i|^2\, \|v_i\|^2 = \sum_{i=1}^k |a_i|^2\, \|v_i\|^2.$
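A quick numerical spot-check of this identity (a sketch only; the orthogonal set and the complex scalars below are arbitrary choices, and NumPy is assumed):

    import numpy as np

    # an orthogonal (not orthonormal) set in C^3 and some complex scalars
    v = [np.array([1, 1, 0]), np.array([1, -1, 0]), np.array([0, 0, 2])]
    a = [2 - 1j, 3j, -1 + 1j]

    s = sum(ai * vi for ai, vi in zip(a, v))
    lhs = np.vdot(s, s).real                                     # ||sum a_i v_i||^2
    rhs = sum(abs(ai)**2 * np.vdot(vi, vi).real for ai, vi in zip(a, v))

    print(np.isclose(lhs, rhs))   # True (both sides equal 36 here)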
17. Let $T$ be a linear operator on an inner product space $V$, and suppose that $\|T(x)\| = \|x\|$ for all $x$. Prove that $T$ is one-to-one.

Solution: If $v \in N(T)$, then $\|v\| = \|T(v)\| = \|0\| = 0$. By coercivity of the norm, $v = 0$. Thus $T$ is one-to-one.