
Math 217: Eigenspaces and Characteristic Polynomials

Professor Karen Smith (c)2015 UM Math Dept
Licensed under a Creative Commons By-NC-SA 4.0 International License.

Definition. Let T : V → V be a linear transformation. An eigenvector of T is a non-zero vector ~v ∈ V such that T(~v) = λ~v for some scalar λ. The scalar λ is the eigenvalue of the eigenvector ~v.

     
A. Warm-up: Show that [1 −1]T and [7 −1]T are eigenvectors of A = $\begin{bmatrix} 2 & 7 \\ -1 & -6 \end{bmatrix}$. What are the corresponding eigenvalues? Find an eigenbasis B for A. Find [A]B .

Solution note: Multiplying, we get A [1 −1]T = [−5 5]T = −5 [1 −1]T, so we have an eigenvector of eigenvalue −5. Multiplying, we get A [7 −1]T = [7 −1]T, so we have an eigenvector of eigenvalue 1. Since [1 −1]T and [7 −1]T are linearly independent, they form an eigenbasis for T, and the matrix in this basis is $\begin{bmatrix} -5 & 0 \\ 0 & 1 \end{bmatrix}$.
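This computation can be double-checked numerically. A minimal sketch (illustrative only, assuming NumPy is available):

    import numpy as np

    A = np.array([[2, 7],
                  [-1, -6]])
    v1 = np.array([1, -1])
    v2 = np.array([7, -1])

    # A v1 should equal -5*v1 and A v2 should equal 1*v2.
    print(A @ v1)   # [-5  5]
    print(A @ v2)   # [ 7 -1]

    # In the eigenbasis B = (v1, v2), the matrix of the map is S^{-1} A S.
    S = np.column_stack([v1, v2])
    print(np.linalg.inv(S) @ A @ S)   # approximately [[-5, 0], [0, 1]]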

B. Definition. Let λ be an eigenvalue of a linear transformation T : V → V. The λ-eigenspace of T is the subspace

Vλ = {~v ∈ V | T(~v) = λ~v} = {~v ∈ V | ~v is an eigenvector with eigenvalue λ} ∪ {~0}.

1. Prove that Vλ is a subspace of V .


 
2. Find the eigenvalues for the map given by multiplication by $\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$. For each, compute the eigenspace.

Solution note: 1). Check that ~0 ∈ Vλ . If ~v and ~w ∈ Vλ , then T(~v) = λ~v and T(~w) = λ~w, so adding these we have T(~v + ~w) = T(~v) + T(~w) = λ~v + λ~w = λ(~v + ~w), so Vλ is closed under addition. Also: if ~v ∈ Vλ , then T(k~v) = kT(~v) = kλ~v = λ(k~v), so k~v ∈ Vλ , and Vλ is closed under scalar multiplication. So Vλ is a subspace.

2). Looking at the matrix, we see ~e1 is in the kernel, and in fact spans the whole kernel (by rank-nullity, the kernel has dimension 1). So zero is an eigenvalue and V0 = span(~e1 ). Also we see ~e3 is taken to itself, so ~e3 is an eigenvector of eigenvalue 1. Solving A [x y z]T = [x y z]T, we see that the only 1-eigenvectors are scalar multiples of ~e3 . So V1 = span(~e3 ). Finally, if k ≠ 0, 1, we see that A [x y z]T = k [x y z]T has no non-zero solutions. So there are no other eigenvalues, eigenvectors, or eigenspaces.
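The eigenspaces in part 2) can also be checked symbolically, using the description of the λ-eigenspace as ker(A − λI3) proved in D.1 below. A minimal sketch (illustrative only, assuming SymPy is available):

    import sympy as sp

    A = sp.Matrix([[0, 1, 0],
                   [0, 0, 0],
                   [0, 0, 1]])

    # The lambda-eigenspace is ker(A - lambda*I); nullspace() returns a basis for it.
    for lam in [0, 1]:
        basis = (A - lam * sp.eye(3)).nullspace()
        print(lam, [list(v) for v in basis])
    # 0 [[1, 0, 0]]   i.e. V_0 = span(e1)
    # 1 [[0, 0, 1]]   i.e. V_1 = span(e3)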
C. Definition: The dimension of the λ-eigenspace of T is called the geometric multiplicity of
λ.

Compute the eigenspaces and geometric multiplicities of each of the following transformations. Use geometric intuition and the definitions.

1. The map R3 → R3 scaling by 3.

2. The map R3 → R3 rotation by π around the line spanned by ~v = [1 1 1]T .


 
3. The map R2 → R2 given by multiplication by $\begin{bmatrix} 2 & 0 \\ 0 & 5 \end{bmatrix}$.

Solution note: 1). There is one eigenvalue, 3. The 3-eigenspace is all of R3, so the geometric multiplicity of 3 is 3.

2). There are two eigenvalues, 1 and −1. We have V1 = span[1 1 1]T, the axis of rotation L, so 1 has geometric multiplicity 1. Also V−1 is L⊥ (rotation by π sends each vector perpendicular to the axis to its negative), so −1 has geometric multiplicity 2.

3). There are two eigenvalues: 2 and 5. The eigenspaces are V2 = span(~e1 ) and V5 = span(~e2 ), each of geometric multiplicity 1.
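For part 2), a numerical check is possible using the standard fact that rotation by π about the line spanned by ~u fixes ~u and negates every vector perpendicular to ~u, so its matrix is 2P − I, where P is orthogonal projection onto span(~u). A minimal sketch (illustrative only, assuming NumPy is available):

    import numpy as np

    u = np.array([1.0, 1.0, 1.0])
    P = np.outer(u, u) / (u @ u)     # orthogonal projection onto span(u)
    R = 2 * P - np.eye(3)            # rotation by pi about the line spanned by u

    vals, vecs = np.linalg.eigh(R)   # R is symmetric, so eigh applies
    print(np.round(vals, 6))         # [-1. -1.  1.]: eigenvalue 1 once, -1 twice
    print(np.round(vecs[:, 2], 6))   # the 1-eigenvector points along [1, 1, 1]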

D. Let T : R3 → R3 be a linear transformation given by multiplication by the matrix A.

1. Prove that the λ-eigenspace is the kernel of the matrix A − λI3 .

2. Prove that λ is an eigenvalue if and only if the matrix A − λI3 has a non-zero kernel.

3. Explain why λ is an eigenvalue if and only if the matrix A − λI3 has rank less than 3.

4. Explain why λ is an eigenvalue if and only if the matrix A − λI3 has determinant zero.

5. Explain why det(A − xI3 ) is a polynomial of degree three in x. Explain the significance of
the roots of this polynomial.

6. Explain why T has at most three distinct eigenvalues. Find examples of 3 × 3 matrices A
with one, two and three eigenvalues.

SOLUTIONS NEXT PAGE.


Solution note:

1. Fix an eigenvalue λ. Then

Vλ = {~v ∈ R3 | A~v = λ~v} = {~v ∈ R3 | A~v − λ~v = ~0} = {~v ∈ R3 | (A − λI3)~v = ~0} = ker(A − λI3).

2. This follows from (1), since λ is an eigenvalue if and only if Vλ contains a non-zero element.

3. We know ker(A − λI3 ) is non-zero if and only if (A − λI3 ) is not injective. Since this is a
square matrix, this is the same as not being surjective (rank nullity), which is the same as
the rank being less than the size of the square matrix.

4. Same idea as (3): a square matrix has determinant zero if and only if it is not invertible, that is, if and only if its rank is less than its size. Combined with (3), this shows λ is an eigenvalue if and only if det(A − λI3) = 0.

5. Expanding out the determinant of the 3 × 3 matrix A − xI3 using Laplace expansion along
the first row, we see that we get a polynomial of degree 3 in x. Its roots are the eigenvalues
from (4).

6. A polynomial of degree n can have at most n (real) roots. So a 3 × 3 matrix can have at most
3 eigenvalues.
 
For the examples: the zero matrix has only the eigenvalue 0, since its characteristic polynomial is (−x)^3. The matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ has only the eigenvalues 0 and 1, since its characteristic polynomial is −x(x − 1)^2. The matrix $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$ has three eigenvalues: 1, 2, and 3.
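The characteristic polynomial in the second example can be verified symbolically. A minimal sketch (illustrative only, assuming SymPy is available):

    import sympy as sp

    x = sp.symbols('x')
    A = sp.Matrix([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 0]])

    # det(A - x*I) is the degree-3 polynomial whose roots are the eigenvalues.
    p = (A - x * sp.eye(3)).det()
    print(sp.factor(p))     # -x*(x - 1)**2, so the only eigenvalues are 0 and 1
    print(sp.solve(p, x))   # [0, 1]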

E. Definition: The characteristic polynomial of an n × n matrix A is the polynomial


χA (x) = det(A − xIn ).

Theorem: The eigenvalues of a matrix A are the roots of its characteristic polynomial.
Definition: The algebraic multiplicity of an eigenvalue λ of A is the largest k such that (x − λ)^k divides χA (x).
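To see the two notions of multiplicity side by side, here is a small example with a matrix that does not appear in this worksheet. A minimal sketch (illustrative only, assuming SymPy is available):

    import sympy as sp

    x = sp.symbols('x')
    A = sp.Matrix([[2, 1],
                   [0, 2]])                  # repeated eigenvalue 2

    chi = (A - x * sp.eye(2)).det()          # det(A - x*I), as defined above
    print(sp.factor(chi))                    # (x - 2)**2: algebraic multiplicity of 2 is 2
    print((A - 2 * sp.eye(2)).nullspace())   # one basis vector: geometric multiplicity is 1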

1. Find the characteristic polynomial of the matrices in Problem A on the previous page. Does
this agree with your previous computation?
2. How might you define the characteristic polynomial of a linear transformation T : V → V of some finite-dimensional vector space? Is your definition well-defined? That is, is it independent of any choices you made to define it?
3. Find the characteristic polynomial for the differentiation transformation of P4 . What are the
eigenvalues of d/dx on P4 ? Compute the eigenspace of each. What are the algebraic and
geometric multiplicities of each eigenvalue? Does d/dx have an eigenbasis? Is there a basis
in which the matrix of d/dx is diagonal?
4. Find the characteristic polynomial for the identity transformation of R6 . Compute the al-
gebraic and geometric multiplicity for each eigenvalue, as well as the eigenspaces. Does this
map have an eigenbasis?
 
5. Find the characteristic polynomial for the map given by $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$. Does this map have an eigenbasis? Is this matrix similar to a diagonal matrix?
 
6. Find the characteristic polynomial for $\begin{bmatrix} 1 & 2 \\ 4 & -1 \end{bmatrix}$. Can this matrix be diagonalized?

F. Theorem: Eigenvectors of distinct eigenvalues are linearly independent. That is, if {~v1 , . . . , ~vn }
are eigenvectors with different eigenvalues, then {~v1 , . . . , ~vn } is linearly independent.
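The theorem can be illustrated (though of course not proved) numerically: for a matrix with distinct eigenvalues, the computed eigenvectors form an invertible, hence linearly independent, set. A minimal sketch (illustrative only, assuming NumPy is available):

    import numpy as np

    rng = np.random.default_rng(0)
    D = np.diag([1.0, 2.0, 3.0])            # distinct eigenvalues 1, 2, 3
    S = rng.standard_normal((3, 3))
    A = S @ D @ np.linalg.inv(S)            # a matrix similar to D, same eigenvalues

    vals, vecs = np.linalg.eig(A)
    print(np.round(np.sort(vals.real), 6))  # [1. 2. 3.]
    print(np.linalg.matrix_rank(vecs))      # 3: the eigenvectors are linearly independent
    # With the eigenvectors as columns, vecs^{-1} A vecs is (approximately) diagonal.
    print(np.round(np.linalg.inv(vecs) @ A @ vecs, 6))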

1. Prove the Corollary: If A is an n × n matrix with n different eigenvalues, then A is similar to a diagonal matrix.

2. Prove the Corollary: If T is a linear transformation of an n-dimensional space with n different eigenvalues, then T has an eigenbasis.

3. Prove the theorem in the case of two eigenvectors. Do you see how to generalize your argument to any number of vectors?
 
4. Find the eigenvalues of $\begin{bmatrix} 1 & 2 & 3 \\ 0 & -1 & 7 \\ 0 & 0 & -5 \end{bmatrix}$. Does this matrix have an eigenbasis? Can it be diagonalized?
