Linear Algebra- Final Exam Review
1. Show that Row(A) ⊥ Null(A).
SOLUTION: We can write matrix-vector multiplication in terms of the rows of the
matrix A. If A is m × n, then:
         [ Row1(A) ]       [ Row1(A)x ]
         [ Row2(A) ]       [ Row2(A)x ]
    Ax = [   ...   ] x  =  [    ...   ]
         [ Rowm(A) ]       [ Rowm(A)x ]
Each of these products is the “dot product” of a row of A with the vector x.
To show the desired result, let x ∈ Null(A). Then each of the products shown in the
equation above must be zero, since Ax = 0, so that x is orthogonal to each row of A.
Since the rows form a spanning set for the row space, x is orthogonal to every vector in
the row space.
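As a quick numerical sanity check (not part of the required proof), we can verify the orthogonality in NumPy with a small example matrix of my choosing:

```python
import numpy as np

# An illustrative matrix (not from the problem).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# x = [1, 1, -1]^T is in Null(A), since Ax = 0.
x = np.array([1.0, 1.0, -1.0])
assert np.allclose(A @ x, 0)

# Each row of A is orthogonal to x, so x is orthogonal to Row(A).
for row in A:
    assert abs(row @ x) < 1e-12
```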
2. Let A be invertible. Show that, if v1 , v2 , v3 are linearly independent vectors, so are
Av1 , Av2 , Av3 . NOTE: It should be clear from your answer that you know the definition.
SOLUTION: We need to show that the only solution to:
c1 Av1 + c2 Av2 + c3 Av3 = 0
is the trivial solution. Factoring out the matrix A,
A(c1 v1 + c2 v2 + c3 v3 ) = 0
Think of the form Ax̂ = 0. Since A is invertible, the only solution to this is x̂ = 0,
which implies that the only solution to the equation above is the solution to
c1 v1 + c2 v2 + c3 v3 = 0
which has only the trivial solution, since the vectors are linearly independent. (NOTE:
Notice that if the original vectors had been linearly dependent, this last equation would
have non-trivial solutions).
3. Find the line of best fit for the data:
x 0 1 2 3
y 1 1 2 2
Let A be the matrix whose first column is x and whose second column is all ones; then
we form the normal equations A^T A ĉ = A^T y and solve:
    [ 14  6 ] ĉ = [ 11 ]
    [  6  4 ]     [  6 ]
The solution is ĉ = (A^T A)−1 A^T y = (1/10)[4, 9]T , so the slope is 2/5 and the
intercept is 9/10.
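We can check the arithmetic with a short NumPy sketch (the setup mirrors the solution above):

```python
import numpy as np

# Data from the problem.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 1.0, 2.0, 2.0])

# First column is x, second column is all ones.
A = np.column_stack([x, np.ones_like(x)])

# Solve the normal equations A^T A c = A^T y.
c = np.linalg.solve(A.T @ A, A.T @ y)
assert np.allclose(c, [0.4, 0.9])  # slope 2/5, intercept 9/10
```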
4. Let A = [ −3 0 ; 0 0 ]. (a) Is A orthogonally diagonalizable? If so, orthogonally
diagonalize it! (b) Find the SVD of A.
SOLUTION: For part (a), the matrix A is symmetric, so it is orthogonally diagonalizable.
It is also a diagonal matrix, so the eigenvalues are λ = −3 and λ = 0, and the
eigenvectors are the standard basis vectors, so P = I, and D = A.
For the SVD, the eigenvalues of AT A are 9 and 0, so the singular values are 3 and 0.
The column space is spanned by [1, 0]T , as is the row space. We also see that
    Av1 = σ1 u1 :   [ −3, 0 ]T = 3 [ −1, 0 ]T
This brings up a good point- You may use either:
    U = [ −1  0 ]    V = [ 1  0 ]
        [  0  1 ]        [ 0  1 ]
or the reverse:
    U = [ 1  0 ]    V = [ −1  0 ]
        [ 0  1 ]        [  0  1 ]
This problem with the ±1 is something we don’t run into with the usual diagonalization.
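The sign ambiguity can be seen numerically; NumPy's svd returns one valid sign choice, and flipping the signs of a matched u, v pair gives another:

```python
import numpy as np

A = np.array([[-3.0, 0.0],
              [ 0.0, 0.0]])

U, s, Vt = np.linalg.svd(A)
assert np.allclose(s, [3.0, 0.0])            # singular values 3 and 0
assert np.allclose(U @ np.diag(s) @ Vt, A)   # A = U Sigma V^T

# Flip the signs of u1 and v1 together: still a valid SVD of A.
U2, Vt2 = U.copy(), Vt.copy()
U2[:, 0] *= -1
Vt2[0, :] *= -1
assert np.allclose(U2 @ np.diag(s) @ Vt2, A)
```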
5. Let V be the vector space spanned by the functions 1, t, t² on the interval [−1, 1].
Use Gram-Schmidt to find an orthonormal basis, if we define the inner product:
    ⟨f(t), g(t)⟩ = ∫_{−1}^{1} 2 f(t) g(t) dt
SOLUTION: Let v1 = 1 (which is not normalized- We’ll normalize later). Then
    v2 = t − Proj_{v1}(t) = t − ( ∫_{−1}^{1} 2t dt / ∫_{−1}^{1} 2 dt ) · 1 = t − 0 = t

    v3 = t² − Proj_{v1}(t²) − Proj_{v2}(t²)
       = t² − ( ∫_{−1}^{1} 2t² dt / ∫_{−1}^{1} 2 dt ) · 1 − ( ∫_{−1}^{1} 2t³ dt / ∫_{−1}^{1} 2t² dt ) · t
We note that the integral of any odd function over [−1, 1] will be zero, so that last
term drops:
    v3 = t² − (4/3)/4 · 1 = t² − 1/3
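The orthogonality of 1, t, t² − 1/3 under this weighted inner product can be verified numerically; this is a rough sketch using a trapezoid-rule approximation (the helper name `inner` is mine):

```python
import numpy as np

def inner(f, g, n=200000):
    # Approximate <f, g> = integral of 2 f(t) g(t) over [-1, 1] (trapezoid rule).
    t = np.linspace(-1.0, 1.0, n + 1)
    vals = 2.0 * f(t) * g(t)
    dt = t[1] - t[0]
    return dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

v1 = lambda t: np.ones_like(t)
v2 = lambda t: t
v3 = lambda t: t**2 - 1.0 / 3.0

# The three functions are (numerically) mutually orthogonal.
assert abs(inner(v1, v2)) < 1e-6
assert abs(inner(v1, v3)) < 1e-6
assert abs(inner(v2, v3)) < 1e-6
```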
6. Suppose that vectors u, v, w are linearly independent vectors in IRn . Determine whether
the set {u + v, u − v, u − 2v + w} is also linearly independent.
SOLUTION: We want to show that the only solution to the following equation is the
trivial solution:
C1 (u + v) + C2 (u − v) + C3 (u − 2v + w) = 0
Regrouping this using the original vectors, we have:
(C1 + C2 + C3 )u + (C1 − C2 − 2C3 )v + C3 w = 0
Since these vectors are linearly independent, each coefficient must be zero:
C1 + C2 + C3 = 0
C1 − C2 − 2C3 = 0
C3 = 0
From which we get that C1 = C2 = C3 = 0 is the only solution.
7. Let v1 , . . . , vp be orthonormal. If
x = c1 v1 + c2 v2 + · · · + cp vp
then show that ‖x‖² = |c1|² + · · · + |cp|². (Hint: Write the norm squared as the dot
product).
SOLUTION: Compute x · x, and use the property that the vectors vi are orthonormal:
    (c1 v1 + c2 v2 + · · · + cp vp) · (c1 v1 + c2 v2 + · · · + cp vp)
Since all dot products of the form ci ck vi · vk vanish for i ≠ k, the dot product simplifies
to:
    c1² v1 · v1 + c2² v2 · v2 + · · · + cp² vp · vp
And since the vectors are normalized, this gives the result:
    ‖x‖² = c1² + · · · + cp²
(We don’t need the magnitudes here, since we’re working with real numbers).
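This Parseval-type identity is easy to check numerically; here the orthonormal set comes from a QR factorization of a random matrix, which is just one convenient way to manufacture one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of Q form an orthonormal set v1, v2, v3 in R^5.
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))
c = np.array([2.0, -1.0, 3.0])

x = Q @ c   # x = c1 v1 + c2 v2 + c3 v3
assert np.isclose(x @ x, np.sum(c**2))   # ||x||^2 = c1^2 + c2^2 + c3^2
```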
8. Short answer:
(a) If ‖u + v‖² = ‖u‖² + ‖v‖², then u, v are orthogonal.
SOLUTION: True- This is the Pythagorean Theorem.
(b) Let H be the subset of vectors in IR3 consisting of those vectors whose first element
is the sum of the second and third elements. Is H a subspace?
SOLUTION: One way of showing that a subset is a subspace is to show that the
subspace can be represented by the span of some set of vectors. In this case,
    [ a + b ]       [ 1 ]       [ 1 ]
    [   a   ]  =  a [ 1 ]  +  b [ 0 ]
    [   b   ]       [ 0 ]       [ 1 ]
Because H is the span of the given vectors, it is a subspace.
3
(c) Explain why the image of a linear transformation T : V → W is a subspace of W .
SOLUTION: Maybe “Prove” would have been better than “Explain”, since we
want to go through the three parts:
i. 0 ∈ T (V ) since 0 ∈ V and T (0) = 0.
ii. Let u, v be in T (V ). Then there is an x, y in V so that T (x) = u and T (y) = v.
Since V is a subspace, x + y ∈ V , and therefore T (x + y) = T (x) + T (y) = u + v
so that u + v ∈ T (V ).
iii. Let u ∈ T (V ). Show that cu ∈ T (V ) for all scalars c. If u ∈ T (V ), there is an
x in V so that T (x) = u. Since V is a subspace, cx ∈ V , so T (cx) ∈ T (V ).
By linearity, T (cx) = cT (x) = cu, so cu ∈ T (V ).
(OK, that probably should not have been in the short answer section)
(d) Is the following matrix diagonalizable? Explain. A = [ 1 2 3 ; 0 5 8 ; 0 0 13 ]
SOLUTION: Yes. The eigenvalues are all distinct, so the corresponding eigenvec-
tors are linearly independent.
(e) If the column space of an 8 × 4 matrix A is 3 dimensional, give the dimensions of
the other three fundamental subspaces. Given these numbers, is it possible that
the mapping x → Ax is one to one? onto?
SOLUTION: If the column space is 3-d, so is the row space. Therefore the null space
(as a subspace of IR4 ) is 1 dimensional and the null space of AT is 5 dimensional.
Since the null space has more than the zero vector, Ax = 0 has non-trivial solutions,
so the matrix mapping will not be 1-1. Since the column space is a three dimensional
subspace of IR8 , the mapping cannot be onto.
(f) i. Suppose matrix Q has orthonormal columns. Must QT Q = I?
SOLUTION: Yes, QT Q = I.
ii. True or False: If Q is m × n with m > n, then QQT = I.
SOLUTION: False- If m > n, then QQT cannot equal the identity; instead, QQT is
the projection matrix that takes a vector x and projects it to the column space of Q.
iii. Suppose Q is an orthogonal matrix. Prove that det(Q) = ±1.
SOLUTION: If Q is orthogonal, then QT Q = I, and if we take determinants
of both sides, we get:
(det(Q))2 = 1
Therefore, the determinant of Q is ±1.
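All three parts of (f) can be illustrated in a few lines of NumPy (the matrices here are randomly generated examples, not canonical ones):

```python
import numpy as np

rng = np.random.default_rng(1)

# A 4x2 matrix with orthonormal columns, via QR.
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))

# (i) Q^T Q is always the identity...
assert np.allclose(Q.T @ Q, np.eye(2))

# (ii) ...but QQ^T is not the identity: it is the projection onto Col(Q).
P = Q @ Q.T
assert not np.allclose(P, np.eye(4))
assert np.allclose(P @ P, P)   # projections satisfy P^2 = P

# (iii) For a square orthogonal matrix, det = +/- 1.
Qsq, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.isclose(abs(np.linalg.det(Qsq)), 1.0)
```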
9. Find a basis for the null space, row space and column space of A, if
A = [ 1 1 2 2 ; 2 2 5 5 ; 0 0 3 3 ]
The basis for the column space is the set containing the first and third columns of A. A
basis for the row space is the set of vectors [1, 1, 0, 0]T , [0, 0, 1, 1]T . A basis for the null
space of A is [−1, 1, 0, 0]T , [0, 0, −1, 1]T .
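A quick check of the claimed dimensions and null space basis (a sketch, using NumPy's rank computation):

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0, 2.0],
              [2.0, 2.0, 5.0, 5.0],
              [0.0, 0.0, 3.0, 3.0]])

# Rank 2: dim Row(A) = dim Col(A) = 2, so dim Null(A) = 4 - 2 = 2.
assert np.linalg.matrix_rank(A) == 2

# The claimed null space basis vectors are killed by A.
n1 = np.array([-1.0, 1.0, 0.0, 0.0])
n2 = np.array([0.0, 0.0, -1.0, 1.0])
assert np.allclose(A @ n1, 0)
assert np.allclose(A @ n2, 0)
```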
10. Find an orthonormal basis for W = Span {x1 , x2 , x3 } using Gram-Schmidt (you might
wait until the very end to normalize all vectors at once):
    x1 = [0, 0, 1, 1]T ,   x2 = [0, 1, 1, 2]T ,   x3 = [1, 1, 1, 1]T
SOLUTION: Using Gram Schmidt (before normalization, which is OK if doing by hand),
we get
    v1 = [0, 0, 1, 1]T ,   v2 = [0, 2, −1, 1]T ,   v3 = [3, 1, 1, −1]T
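Reading the starting vectors as x1 = [0, 0, 1, 1]T, x2 = [0, 1, 1, 2]T, x3 = [1, 1, 1, 1]T, the hand computation can be replayed in NumPy; the results agree with the answer up to the scaling we postponed:

```python
import numpy as np

x1 = np.array([0.0, 0.0, 1.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0, 2.0])
x3 = np.array([1.0, 1.0, 1.0, 1.0])

def proj(u, x):
    # Projection of x onto the line spanned by u.
    return (x @ u) / (u @ u) * u

v1 = x1
v2 = x2 - proj(v1, x2)
v3 = x3 - proj(v1, x3) - proj(v2, x3)

# Same directions as the answer: v2 ~ [0, 2, -1, 1], v3 ~ [3, 1, 1, -1].
assert np.allclose(2 * v2, [0.0, 2.0, -1.0, 1.0])
assert np.allclose(3 * v3, [3.0, 1.0, 1.0, -1.0])
assert abs(v1 @ v2) < 1e-12 and abs(v1 @ v3) < 1e-12 and abs(v2 @ v3) < 1e-12
```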
11. Let Pn be the vector space of polynomials of degree n or less. Let W1 be the subset of
Pn consisting of p(t) so that p(0)p(1) = 0. Let W2 be the subset of Pn consisting of
p(t) so that p(2) = 0. Which of the two is a subspace of Pn ?
SOLUTION: We check the properties-
• Is 0 in the subspace?
The zero polynomial satisfies the defining condition of both W1 and W2 .
• Is the subspace closed under addition?
For W1 , let p(t) be a polynomial such that p(0)p(1) = 0, and let h(t) be another
polynomial with that property, h(0)h(1) = 0.
Does that imply that g(t) = p(t) + h(t) has the desired property?
g(0)g(1) = (p(0)+h(0))(p(1)+h(1)) = p(0)p(1)+p(0)h(1)+h(0)p(1)+h(0)h(1) = p(0)h(1)+h(0)p(1)
For example, if h(0) = 1, h(1) = 0, p(0) = 0, p(1) = 1, then this quantity is not
zero. Therefore, W1 is not closed under addition.
Alternate explanation: You can show that it doesn’t work by providing a specific
example: p(t) = t has the property, and h(t) = 1 − t also has the property (since it
is zero at t = 1). When you add them, g(t) = t + (1 − t) = 1, which does not have
the property (it is never zero).
Going back to the rest: We can check that W2 is closed under addition: Let
p(t), h(t) be two functions in W2 . Then g(t) = p(t) + h(t) satisfies the property
that
g(2) = p(2) + h(2) = 0 + 0 = 0
• Similarly, W2 is closed under scalar multiplication- If g(t) = cp(t), then g(2) =
cp(2) = c · 0 = 0.
Therefore, W2 is a subspace, and W1 is not.
12. For each of the following matrices, find the characteristic equation, the eigenvalues and
a basis for each eigenspace:
    A = [ 7 2 ; −4 1 ]     B = [ 3 −1 ; 1 3 ]     C = [ 1 0 1 ; 0 2 0 ; 1 0 1 ]
SOLUTION: For matrix A, λ = 3, 5. Eigenvectors are [−1, 2]T and [−1, 1]T , respectively.
For matrix B, for λ = 3 + i, an eigenvector is [1, −i]T . The other eigenvalue and
eigenvector are the complex conjugates: λ = 3 − i with eigenvector [1, i]T .
For matrix C, expand along the second row. λ = 2 is a double eigenvalue with eigenvectors
[0, 1, 0]T and [1, 0, 1]T . The third eigenvalue is λ = 0 with eigenvector [−1, 0, 1]T .
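Reading the three matrices as A = [7 2; −4 1], B = [3 −1; 1 3], C = [1 0 1; 0 2 0; 1 0 1], the eigenpairs are easy to confirm numerically:

```python
import numpy as np

A = np.array([[7.0, 2.0], [-4.0, 1.0]])
B = np.array([[3.0, -1.0], [1.0, 3.0]])
C = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 1.0]])

# A: eigenvalues 3 and 5 with the stated eigenvectors (Av = lambda v).
assert np.allclose(A @ [-1.0, 2.0], 3.0 * np.array([-1.0, 2.0]))
assert np.allclose(A @ [-1.0, 1.0], 5.0 * np.array([-1.0, 1.0]))

# B: complex eigenvalues 3 +/- i.
assert np.allclose(np.sort_complex(np.linalg.eigvals(B)), [3 - 1j, 3 + 1j])

# C: lambda = 2 (double) and lambda = 0.
assert np.allclose(np.sort(np.linalg.eigvals(C).real), [0.0, 2.0, 2.0])
```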
13. Define T : P2 → IR3 by: T (p) = [p(−1), p(0), p(1)]T
(a) Find the image under T of p(t) = 5 + 3t.
SOLUTION: [2, 5, 8]T
(b) Show that T is a linear transformation.
SOLUTION: We show it using the definition.
i. Show that T (p + q) = T (p) + T (q):
T (p + q) = [p(−1) + q(−1), p(0) + q(0), p(1) + q(1)]T
          = [p(−1), p(0), p(1)]T + [q(−1), q(0), q(1)]T = T (p) + T (q)
ii. Show that T (cp) = cT (p) for all scalars c.
T (cp) = [cp(−1), cp(0), cp(1)]T = c [p(−1), p(0), p(1)]T = cT (p)
(c) Find the kernel of T . Does your answer imply that T is 1 − 1? Onto? (Review the
meaning of these words: kernel, one-to-one, onto)
SOLUTION:
Since the kernel is the set of elements in the domain that map to zero, let’s see
what the action of T is on an arbitrary polynomial. An arbitrary vector in
P2 is: p(t) = at² + bt + c, and:
    T (at² + bt + c) = [a − b + c, c, a + b + c]T
For this to be the zero vector, c = 0. Then a − b = 0 and a + b = 0, so a = 0, b = 0.
Therefore, the only vector mapped to zero is the zero vector.
Side Remark: Recall that for any linear function T , if we are solving T (x) = y,
then the solution can be written as x = xp + xh , where xp is the particular solution
(it solves T (xp ) = y), and T (xh ) = 0 (we said xh is the homogeneous part of the
solution). So the equation T (x) = y has at most one solution iff the kernel is only
the zero vector (if T was realized as a matrix, we get our familiar setting).
Therefore, T is 1 − 1. Since the kernel is trivial and dim P2 = dim IR3 = 3, the
mapping T is also onto.
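One way to see all of this at once (a sketch, with coordinates of my choosing): relative to the coefficient coordinates (a, b, c) of p(t) = at² + bt + c, T is represented by a 3 × 3 matrix, and its invertibility gives both one-to-one and onto.

```python
import numpy as np

# Matrix of T relative to coefficients (a, b, c) of p(t) = a t^2 + b t + c.
M = np.array([[1.0, -1.0, 1.0],   # p(-1) = a - b + c
              [0.0,  0.0, 1.0],   # p(0)  = c
              [1.0,  1.0, 1.0]])  # p(1)  = a + b + c

# p(t) = 5 + 3t corresponds to (a, b, c) = (0, 3, 5); T(p) = [2, 5, 8]^T.
assert np.allclose(M @ [0.0, 3.0, 5.0], [2.0, 5.0, 8.0])

# M is invertible (det = -2), so the kernel is trivial: T is 1-1 and onto.
assert np.isclose(np.linalg.det(M), -2.0)
```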
14. Let v be a vector in IRn so that kvk = 1, and let Q = I − 2vvT . Show (by direct
computation) that Q2 = I.
SOLUTION: This problem is to practice matrix algebra:
Q² = (I − 2vvT)(I − 2vvT) = I − 2vvT − 2vvT + 4vvT vvT = I − 4vvT + 4v(vT v)vT
   = I − 4vvT + 4v(1)vT = I
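This Q is a Householder reflection; the identity is easy to confirm numerically with a random unit vector (chosen here just for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# A unit vector v (random, for illustration).
v = rng.standard_normal(5)
v /= np.linalg.norm(v)

Q = np.eye(5) - 2.0 * np.outer(v, v)
assert np.allclose(Q @ Q, np.eye(5))   # Q^2 = I
assert np.allclose(Q, Q.T)             # Q is also symmetric (and orthogonal)
```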
15. Let A be m × n and suppose there is a matrix C so that AC = Im . Show that the
equation Ax = b is consistent for every b. Hint: Consider ACb.
SOLUTION: Using the hint, we see that ACb = b. Therefore, given an arbitrary vector
b, the solution to Ax = b is x = Cb.
16. If B has linearly dependent columns, show that AB has linearly dependent columns.
Hint: Consider the null space.
SOLUTION: If B has linearly dependent columns, then the equation Bx = 0 has non-
trivial solutions. Therefore, the equation ABx = 0 has (the same) non-trivial solutions,
and the columns of AB must be linearly dependent.
17. If λ is an eigenvalue of A, then show that it is an eigenvalue of AT .
SOLUTION: Use the properties of determinants. Given
|A − λI| = |(A − λI)T | = |AT − λI T | = |AT − λI|
the solutions to |A − λI| = 0 and |AT − λI| = 0 are exactly the same.
18. Let u = [2, 1]T and v = [−2, 1]T . Let S be the parallelogram with vertices at 0, u, v,
and u + v. Compute the area of S.
SOLUTION: The area of the parallelogram formed by two vectors in IR2 is the absolute
value of the determinant of the matrix whose columns are those vectors. In this case,
that is |det[ 2 −2 ; 1 1 ]| = |2 + 2| = 4.
19. Let A = [ a b c ; d e f ; g h i ], B = [ a+2g b+2h c+2i ; d+3g e+3h f+3i ; g h i ],
and C = [ g h i ; 2d 2e 2f ; a b c ].
If det(A) = 5, find det(B), det(C), det(BC).
SOLUTION: This question reviews the relationship between the determinant and row
operations. B is obtained from A by adding multiples of row 3 to rows 1 and 2, which
leaves the determinant unchanged, so det(B) = 5. C is obtained by swapping rows 1
and 3 (which flips the sign) and scaling row 2 by 2, so det(C) = −2 · 5 = −10. Finally,
det(BC) = det(B) det(C) = −50.
20. Let 1, t be two vectors in C[−1, 1]. Find the distance between the two vectors and the
cosine of the angle between them using the standard inner product (the integral). Find
the orthogonal projection of t² onto the set spanned by {1, t}.
SOLUTION:
• The distance between the vectors is:
    ‖1 − t‖ = √⟨(1 − t), (1 − t)⟩ = √( ∫_{−1}^{1} (1 − t)² dt ) = √( [ −(1 − t)³/3 ]_{−1}^{1} ) = √(8/3)
• Since ⟨1, t⟩ = ∫_{−1}^{1} t dt = 0, the cosine of the angle between them is 0: the two
functions are orthogonal.
• Because 1 and t are orthogonal, the projection of t² onto Span{1, t} is:
    Proj(t²) = ( ⟨t², 1⟩/⟨1, 1⟩ ) · 1 + ( ⟨t², t⟩/⟨t, t⟩ ) · t = ( (2/3)/2 ) · 1 + 0 · t = 1/3
21. Define an isomorphism: A one-to-one and onto linear transformation between vector
spaces (see p. 251)
NOTE: An isomorphism was the critical piece to understanding when two vector spaces
had the same “form”- For example, a plane through the origin in IR3 and the plane IR2
are not equal, but they are isomorphic; the isomorphism takes a point of the plane and
returns its coordinates- That is, the plane in IR3 is the span of two vectors in IR3 , so
every point on the plane is a linear combination of those two. The point in IR2 that we
refer to is the ordered pair of weights from the linear combination.
As another example, if the plane is spanned by u and v in vector space V , and x is on the
plane so that x = c1 u + c2 v, then the isomorphism takes x ∈ V and gives (c1 , c2 ) ∈ IR2 .
22. Let
    B = { [1, −3]T , [2, −8]T , [−3, 7]T }
Find at least two B−coordinate vectors for x = [1, 1]T .
SOLUTION: Row reduce to find x as a linear combination of the vectors in B:
    [  1   2  −3 | 1 ]   →   [ 1  0  −5 |  5 ]
    [ −3  −8   7 | 1 ]       [ 0  1   1 | −2 ]
If we label the weights as c1 , c2 and c3 , then
c1 = 5 + 5t
c2 = −2 − t
c3 = t
Therefore, we could form x using the weights (5, −2, 0), or (10, −3, 1), or the weights
given by any other choice of t.
NOTE: The columns do NOT form a basis for IR2 , but they do form a spanning set
for IR2 . If the columns formed a basis, the weights for the linear combination would
be unique (no free variables), but in this case, the expansion of x in this spanning set
was not unique.
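Both weight choices can be verified directly:

```python
import numpy as np

b1 = np.array([1.0, -3.0])
b2 = np.array([2.0, -8.0])
b3 = np.array([-3.0, 7.0])
x = np.array([1.0, 1.0])

# Weights for t = 0 and t = 1; any other t works as well.
for c in ([5.0, -2.0, 0.0], [10.0, -3.0, 1.0]):
    assert np.allclose(c[0]*b1 + c[1]*b2 + c[2]*b3, x)
```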
23. Let U , V be orthogonal matrices. Show that U V is an orthogonal matrix.
SOLUTION: This question deals with the definition of an orthogonal matrix: A square
matrix such that U T = U −1 . First, if the product U V is defined, then U and V are both
n × n for some n.
Secondly, since U, V are each invertible, then so is U V . Furthermore,
(U V )−1 = V −1 U −1 = V T U T = (U V )T
Therefore, U V is an orthogonal matrix.
24. In terms of the four fundamental subspaces for a matrix A, what does it mean to say
that:
• Ax = b has exactly one solution.
For this to be true, we know that b ∈ Col(A) (to be consistent), and that
Null(A) = {0} (for the solution to be unique).
• Ax = b has no solution.
For this to be true, b cannot be an element of the column space of A.
• In the previous case, what is the “least squares” solution? What quantity is being
minimized?
The least squares solution is the vector x̂ where the magnitude of the difference
between the given b and b̂ = Ax̂ is as small as possible. Therefore, we are
minimizing the following, over all vectors x:
    ‖b − Ax‖
• Ax = b has an infinite number of solutions.
For the system to be consistent, b ∈ Col(A). For us to have an infinite number of
solutions, the dimension of the null space must be at least 1.
25. Let T be a one-to-one linear transformation from a vector space V into IRn . Show that
for u, v in V , the formula:
    ⟨u, v⟩ = T (u) · T (v)
defines an inner product on V .
SOLUTION: This was a homework problem from 6.7. We want to check the properties
of the inner product, which are: (i) Symmetry, (ii) and (iii) Linear in the first coordinate,
and (iv) Inner product of a vector with itself is non-negative (and the special case of 0).
(a) ⟨u, v⟩ = T (u) · T (v) = T (v) · T (u) = ⟨v, u⟩, where the second equality is true
because the regular dot product is symmetric.
(b)
⟨u + w, v⟩ = T (u + w) · T (v) = (T (u) + T (w)) · T (v) = T (u) · T (v) + T (w) · T (v)
9
(c)
⟨cu, v⟩ = T (cu) · T (v) = cT (u) · T (v) = c⟨u, v⟩
(d)
⟨u, u⟩ = T (u) · T (u) = ‖T (u)‖²
The dot product of a vector with itself is always non-negative. Furthermore, if
⟨u, u⟩ = 0, then T (u) = 0; since T is one-to-one (and T (0) = 0), u must be the
zero vector.
26. Describe all least squares solutions to the system:
    x + y = 2
    x + y = 4
SOLUTION: Interesting to think about- In the plane, these are two parallel lines (each
has a slope of −1, one has an intercept at 2, the other at 4).
Using linear algebra, we have Ax = b with
    A = [ 1 1 ; 1 1 ] ,   b = [2, 4]T
We cannot solve the normal equations directly, because A does not have full rank (its
rank is 1, not 2), so AT A is not invertible. However, if we project b into the column
space of A (which is the span of [1, 1]T ), then we can solve the system (and in fact,
we’ll have an infinite number of solutions since there will be a free variable):
    Proj_{Col(A)}(b) = ( (2 + 4)/(1 + 1) ) [1, 1]T = [3, 3]T
Now solve the system Ax̂ = [3, 3]T , which is:
    [ 1 1 | 3 ]   →   [ 1 1 | 3 ]
    [ 1 1 | 3 ]       [ 0 0 | 0 ]
If we let the free variable be ŷ = t, then x̂ = 3 − t. Notice that this set of points
represents the line ŷ = −x̂ + 3, which is the line right down the middle between the
other two lines!
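NumPy's lstsq picks out one particular least squares solution, the one of minimum norm, and it lands on that same middle line:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 4.0])

# lstsq returns the minimum-norm least squares solution.
xhat, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(xhat, [1.5, 1.5])       # lies on the line x + y = 3
assert np.allclose(A @ xhat, [3.0, 3.0])   # A xhat is the projection of b
```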
27. Let u = [5, −6, 7]T . Let W be the set of all vectors orthogonal to u. (i) Geometrically,
what is W ? (ii) Find the projection of x = [1, 2, 3]T onto W . (iii) Find the distance
from the vector x = [1, 2, 3]T to the subspace W .
SOLUTIONS:
• W is the plane in IR3 going through the origin whose normal vector (in the sense
of Calc 3) is u.
• We can write x = x̂ + z, where x̂ is the projection onto u; then z will be the desired
vector in W :
    x̂ = ( x · u / u · u ) u = ( (5 − 12 + 21)/(25 + 36 + 49) ) [5, −6, 7]T
       = [7/11, −42/55, 49/55]T ≈ [0.6364, −0.7636, 0.8909]T
so that
    z = x − x̂ ≈ [0.36, 2.76, 2.11]T
(NOTE: I’ll try to make the numbers work out nicely on the exam).
• The distance is then ‖x − z‖ = ‖x̂‖ ≈ 1.33
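The projection arithmetic checks out numerically:

```python
import numpy as np

u = np.array([5.0, -6.0, 7.0])
x = np.array([1.0, 2.0, 3.0])

xhat = (x @ u) / (u @ u) * u   # projection of x onto u
z = x - xhat                   # projection of x onto W

assert np.isclose(z @ u, 0.0)                                  # z lies in W
assert np.isclose(np.linalg.norm(x - z), np.linalg.norm(xhat))
assert np.isclose(np.linalg.norm(xhat), 14 / np.sqrt(110))     # about 1.33
```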
28. The SVD can be used to determine whether a matrix is invertible, and can provide a
formula for the inverse. The matrix A is invertible if it is square and all singular values
are positive (not zero).
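As a sketch of that last remark: when all singular values are positive, the SVD factors give the inverse as A−1 = V Σ−1 UT . The test matrix here is random, so it is (almost surely) invertible:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

U, s, Vt = np.linalg.svd(A)
assert np.all(s > 1e-12)   # all singular values positive: A is invertible

# Inverse from the SVD: A^{-1} = V Sigma^{-1} U^T.
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T
assert np.allclose(A_inv @ A, np.eye(4))
```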