
Linear Algebra 2S2

PRACTICE EXAMINATION SOLUTIONS

1. Find a basis for the row space, the column space, and the nullspace of the following matrix
A. Find rank A and nullity A. Verify that every vector in the row space of A is orthogonal to
every vector in the nullspace of A.
 
$$
A = \begin{pmatrix}
1 & -1 & 7 & 3 & 4 \\
1 & -1 & 2 & 3 & 1 \\
-2 & 2 & 1 & -6 & 1 \\
0 & 4 & 16 & 0 & 8
\end{pmatrix}.
$$

Reduce A, getting the 4 × 5 matrix

$$
\begin{pmatrix}
1 & 0 & 0 & 3 & -3/5 \\
0 & 1 & 0 & 0 & -2/5 \\
0 & 0 & 1 & 0 & 3/5 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}.
$$
The row space has as basis the three non-zero rows of the reduced matrix, i.e. {(1, 0, 0, 3, −3/5), (0, 1, 0, 0, −2/5), (0, 0, 1, 0, 3/5)}. The column space has as basis the first three columns of the original matrix A, since the leading 1's of the reduced matrix appear in these columns: {(1, 1, −2, 0), (−1, −1, 2, 4), (7, 2, 1, 16)}. For the nullspace, using the reduced matrix, we see that x4 and x5 are arbitrary, and that x1 = −3x4 + (3/5)x5, x2 = (2/5)x5, and x3 = −(3/5)x5. Thus, the nullspace consists of all vectors in R^5 of the form (−3x4 + (3/5)x5, (2/5)x5, −(3/5)x5, x4, x5), which we write as all vectors of the form x4(−3, 0, 0, 1, 0) + x5(3/5, 2/5, −3/5, 0, 1). In other words, a basis for the nullspace is {(−3, 0, 0, 1, 0), (3/5, 2/5, −3/5, 0, 1)}. The rank of A is 3 and the nullity of A is 2; note that rank A + nullity A = 5, the number of columns of A. Finally, to verify that every vector in the row space is orthogonal to every vector in the nullspace, it is enough to check the basis vectors. So, there are 6 things to check, since there are 3 vectors in the basis of the row space and 2 vectors in the basis for the nullspace. Checking the first pair: (1, 0, 0, 3, −3/5) · (−3, 0, 0, 1, 0) = −3 + 3 = 0. The other 5 verifications are left to you.
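These computations are easy to double-check by machine. Here is a small sketch using SymPy (an assumption; the original solution is purely by hand) that recomputes the reduced matrix, rank, and nullspace, and verifies the orthogonality claim for all six pairs at once:

```python
# Sketch: verifying question 1 with SymPy (assumed available).
from sympy import Matrix

A = Matrix([
    [ 1, -1,  7,  3, 4],
    [ 1, -1,  2,  3, 1],
    [-2,  2,  1, -6, 1],
    [ 0,  4, 16,  0, 8],
])

R, pivots = A.rref()
print(R)          # the reduced matrix found above
print(pivots)     # (0, 1, 2): leading 1's in the first three columns
print(A.rank())   # 3

null_basis = A.nullspace()
print(len(null_basis))  # 2, so rank A + nullity A = 3 + 2 = 5

# every row-space basis vector is orthogonal to every nullspace basis vector
for i in range(3):
    for n in null_basis:
        assert R.row(i).dot(n) == 0
```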

 
2.(a). Let

$$
A = \begin{pmatrix}
2 & 1 & 1 \\
1 & 2 & 1 \\
1 & 1 & 2
\end{pmatrix}.
$$

Find an orthogonal matrix P which diagonalizes A. Using this or otherwise, calculate A^8.
(b). Let A be a symmetric matrix, and let λ and µ be two distinct eigenvalues of A. Let x be an eigenvector of A corresponding to λ and let y be an eigenvector of A corresponding to µ. Prove that x ⊥ y.

(a). The first step is to find the eigenvalues of A. So, solving

$$
\det(A - \lambda I) = \det\begin{pmatrix}
2-\lambda & 1 & 1 \\
1 & 2-\lambda & 1 \\
1 & 1 & 2-\lambda
\end{pmatrix} = 0,
$$

we get three roots λ = 1, λ = 1, λ = 4 (i.e. 1 is a double root).
The second step is to find the corresponding eigenvectors. Our goal is to find two perpendicular eigenvectors, each of length 1, corresponding to λ = 1, and a third eigenvector, of length 1, corresponding to λ = 4, which is to be perpendicular to the first two. For λ = 1, we therefore consider

$$
A - I = \begin{pmatrix}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{pmatrix},
$$

obtaining the reduced form

$$
\begin{pmatrix}
1 & 1 & 1 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix},
$$

which yields the general solution (−x2 − x3, x2, x3), where x2 and x3 are arbitrary. Letting x2 = 1 and x3 = 0, we get u1 = (−1, 1, 0) as an eigenvector. Then, letting x2 = 0 and x3 = 1, we get u2 = (−1, 0, 1) as another eigenvector. Note, however, that neither u1 nor u2 has length 1; moreover, u1 is not orthogonal to u2 (indeed, u1 · u2 = 1). So, we must apply the Gram–Schmidt process:

Let v1 = u1/‖u1‖, obtaining v1 = (−1/√2, 1/√2, 0). Then, take

$$
v_2 = \frac{u_2 - \langle u_2, v_1 \rangle v_1}{\| u_2 - \langle u_2, v_1 \rangle v_1 \|},
$$

obtaining v2 = (−1/√6, −1/√6, 2/√6). (Verify that both v1 and v2 are unit vectors, that they are both eigenvectors corresponding to λ = 1, and finally that v1 ⊥ v2.)

 Now, we findthe third eigenvector corresponding


 toλ = 4. So, to reduce the matrix
−2 1 1 1 0 −1
 1 −2 1  . We obtain the matrix  0 1 −1  . Solving, we get x3 is arbitrary,
1 1 −2 0 0 0
x1 = x2 = x3 . Thus, for example, u3 = (1, 1, 1) is an eigenvector. However, we need to normalise
√ √ √
u3 , i.e. make it have length 1. So, the vector we seek is u3 /||u3 || = (1/ 3, 1/ 3, 1/ 3).
Therefore the orthogonal matrix we seek is

$$
P = \begin{pmatrix}
-1/\sqrt{2} & -1/\sqrt{6} & 1/\sqrt{3} \\
1/\sqrt{2} & -1/\sqrt{6} & 1/\sqrt{3} \\
0 & 2/\sqrt{6} & 1/\sqrt{3}
\end{pmatrix}.
$$
The point of all of this: for this P, we have

$$
P^{T} A P = D = \begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 4
\end{pmatrix}.
$$

Thus, A = P D P^T, so that

$$
A^{8} = P D^{8} P^{T} = P \begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 4^{8}
\end{pmatrix} P^{T}.
$$
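As a numerical cross-check, here is a short NumPy sketch (NumPy is an assumption, and `eigh` may pick a different, equally valid, orthonormal basis for the λ = 1 eigenspace than the hand computation):

```python
# Sketch: verifying 2(a) numerically with NumPy (assumed available).
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

# For a symmetric matrix, eigh returns real eigenvalues in ascending order
# and an orthogonal matrix whose columns are eigenvectors.
w, P = np.linalg.eigh(A)
print(w)                                     # [1. 1. 4.]
print(np.allclose(P.T @ A @ P, np.diag(w)))  # True: P^T A P = D

# A^8 via the diagonalization agrees with the direct matrix power.
A8 = P @ np.diag(w**8) @ P.T
print(np.allclose(A8, np.linalg.matrix_power(A, 8)))  # True
```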

(b). We have

$$
\lambda \langle x, y \rangle = \langle \lambda x, y \rangle = \langle Ax, y \rangle = \langle x, A^{T} y \rangle = \langle x, Ay \rangle = \langle x, \mu y \rangle = \mu \langle x, y \rangle,
$$

where the equality ⟨x, A^T y⟩ = ⟨x, Ay⟩ holds since A is symmetric. Therefore, (λ − µ)⟨x, y⟩ = 0. Now, we are given that λ ≠ µ. Thus, ⟨x, y⟩ must be 0, i.e. x ⊥ y.

3. In each case, either explain why the set S in question is a vector space and find a basis and
dimension of S, or explain why S is not a vector space.

(a). S = {(x1, x2, x3, x4) ∈ R^4 : 2x1 + x3 − x4 = 0 and x1 − x3 − 2x4 = 0}.


(b). S = all 4 × 4 matrices which are anti-symmetric. (Recall that A is anti-symmetric means that A^T = −A.)
(c). S = all functions f : R → R such that f′(0) = 1.

Part (a). Since S is a subset of the vector space R^4, we need only show that if (x1, x2, x3, x4) and (y1, y2, y3, y4) are both in S and c is a scalar, then their sum (x1 + y1, x2 + y2, x3 + y3, x4 + y4) and the scalar product (cx1, cx2, cx3, cx4) are both in S as well. For example, to verify the second condition: 2(cx1) + (cx3) − (cx4) = c(2x1 + x3 − x4) = c · 0 = 0 and (cx1) − (cx3) − 2(cx4) = c(x1 − x3 − 2x4) = c · 0 = 0. The verification of the first condition is just as simple.
 
In matrix form, S consists of all (x1, x2, x3, x4) in R^4 with

$$
\begin{pmatrix}
2 & 0 & 1 & -1 \\
1 & 0 & -1 & -2
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$
 
Reducing, we get

$$
\begin{pmatrix}
1 & 0 & 0 & -1 \\
0 & 0 & 1 & 1
\end{pmatrix},
$$

from which the solution (x4, x2, −x4, x4) = x2(0, 1, 0, 0) + x4(1, 0, −1, 1) is obtained. Thus, a basis for S consists of the two vectors (0, 1, 0, 0) and (1, 0, −1, 1), and dim S = 2.

Part (b). We argue as in part (a). Here, S is a subset of the vector space of all 4 × 4 matrices, and so we need only verify that if A and B are anti-symmetric and c is a scalar, then A + B and cA are also anti-symmetric. For example, (A + B)^T = A^T + B^T = (−A) + (−B) = −(A + B). Similarly, one shows that (cA)^T = −(cA).
Any 4 × 4 anti-symmetric matrix A is of the form

$$
A = \begin{pmatrix}
0 & a_{12} & a_{13} & a_{14} \\
-a_{12} & 0 & a_{23} & a_{24} \\
-a_{13} & -a_{23} & 0 & a_{34} \\
-a_{14} & -a_{24} & -a_{34} & 0
\end{pmatrix},
$$

which we can write as a sum of 6 matrices:

$$
A = a_{12} \begin{pmatrix}
0 & 1 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}
+ a_{13} \begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}
+ \cdots + a_{34} \begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & -1 & 0
\end{pmatrix}.
$$

Since it is easy to see that these six matrices are linearly independent, they form a basis for S,
whose dimension is 6.
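The dimension count can also be checked by machine; here is a sketch (NumPy assumed) that builds the six basis matrices described above and confirms that they are linearly independent:

```python
# Sketch: checking 3(b) with NumPy (assumed available).
import numpy as np

# Build the six matrices E_ij - E_ji for i < j: the basis described above.
basis = []
for i in range(4):
    for j in range(i + 1, 4):
        E = np.zeros((4, 4))
        E[i, j], E[j, i] = 1.0, -1.0
        basis.append(E)

print(len(basis))  # 6

# Flatten each matrix to a vector of length 16; the rank of the resulting
# 6 x 16 array is 6, so the six matrices are linearly independent.
M = np.stack([B.ravel() for B in basis])
print(np.linalg.matrix_rank(M))  # 6 = dim S
```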

Part (c). Note that if f and g are in S, then f′(0) = g′(0) = 1. However, (f + g)′(0) = 2 ≠ 1, so f + g is not in S. Hence S is not closed under addition, and is not a vector space.

4. Let V be a vector space, and let {v1, ..., vk} ⊂ V.


(a). Define the term "{v1, ..., vk} is linearly independent".
(b). Prove: If {v1, v2, v3, v4, v5} is a linearly independent set, then so is {v1, v2, v3}.
(c). Find all k ∈ R such that the set {(1, 2, 3), (0, −2, 1), (1, k^2, 3k − 1)} is linearly independent. Interpret geometrically.

Part (a). The set of vectors {v1, ..., vk} is linearly independent means that the only possible solution c1, ..., ck to the equation c1v1 + c2v2 + ... + ckvk = 0 is c1 = 0, c2 = 0, ..., ck = 0.

Part (b). Suppose that {v1, v2, v3, v4, v5} is a linearly independent set, and let us show that the subset {v1, v2, v3} is also linearly independent. So, consider the equation c1v1 + c2v2 + c3v3 = 0. If {v1, v2, v3} were not linearly independent (i.e. if it were linearly dependent), then there would be a solution in which not all of c1, c2, c3 are 0. Setting c4 = 0 and c5 = 0, we would then have a solution to the equation c1v1 + ... + c5v5 = 0 in which not all the c's are zero. But this contradicts our assumption that {v1, v2, v3, v4, v5} is a linearly independent set.

Part (c). The set of k ∈ R such that {(1, 2, 3), (0, −2, 1), (1, k^2, 3k − 1)} is linearly independent is the same as the set of k ∈ R such that

$$
\det \begin{pmatrix}
1 & 2 & 3 \\
0 & -2 & 1 \\
1 & k^2 & 3k-1
\end{pmatrix} \neq 0.
$$

Expanding, the determinant is −(k^2 + 6k − 10). Solving k^2 + 6k − 10 = 0, we get k = −3 ± √19. Thus, for all other k, {(1, 2, 3), (0, −2, 1), (1, k^2, 3k − 1)} is linearly independent. The geometric interpretation is that if k = −3 ± √19, then the three vectors lie in the same plane (and hence are not linearly independent). For all other values of k, the three vectors are not coplanar.
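The determinant computation is easy to verify with a computer algebra system; a sketch using SymPy (assumed available):

```python
# Sketch: checking 4(c) with SymPy (assumed available).
from sympy import Matrix, symbols, solve, expand

k = symbols('k')
M = Matrix([[1,    2,       3],
            [0,   -2,       1],
            [1, k**2, 3*k - 1]])

d = expand(M.det())
print(d)            # -k**2 - 6*k + 10, i.e. -(k**2 + 6*k - 10)
print(solve(d, k))  # -3 + sqrt(19) and -3 - sqrt(19)
```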

5. In each case, either diagonalise the matrix or explain why the matrix cannot be diagonalised.

(a).

$$
A = \begin{pmatrix}
-3 & 2 \\
0 & -3
\end{pmatrix}.
$$

(b).

$$
A = \begin{pmatrix}
9 & -9 & 0 \\
8 & -8 & 0 \\
-14 & 14 & 0
\end{pmatrix}.
$$

Part (a). It is easy to see that the only eigenvalue of A is the double root λ = −3. Now, reducing the matrix A − λI with λ = −3, we get the matrix

$$
\begin{pmatrix}
0 & 1 \\
0 & 0
\end{pmatrix}.
$$

Thus, the eigenvectors are exactly the vectors of the form (x1, 0) with x1 ≠ 0. Hence there is no set of two linearly independent eigenvectors corresponding to this double root, and the matrix is not diagonalisable.
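SymPy reaches the same conclusion; a sketch (SymPy assumed):

```python
# Sketch: checking 5(a) with SymPy (assumed available).
from sympy import Matrix

A = Matrix([[-3,  2],
            [ 0, -3]])
print(A.eigenvals())          # {-3: 2}: a double root
print(A.eigenvects())         # only one independent eigenvector, (1, 0)
print(A.is_diagonalizable())  # False
```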

Part (b). First, the eigenvalues of A are obtained, as usual, by solving det(A − λI) = 0. That is, one must solve the cubic λ^2(λ − 1) = 0, so λ = 0 is a double root and λ = 1 is a simple root.
Second, let us find two (if possible) linearly independent eigenvectors corresponding to the double root λ = 0: we get that x2 and x3 are arbitrary and x1 = x2. Thus, there are indeed 2 linearly independent eigenvectors, (1, 1, 0) and (0, 0, 1). Corresponding to λ = 1, we find a third eigenvector, (−9, −8, 14). Thus, if we take

$$
P = \begin{pmatrix}
1 & 0 & -9 \\
1 & 0 & -8 \\
0 & 1 & 14
\end{pmatrix},
$$

then P^{-1}AP will be the diagonal matrix

$$
D = \begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix}.
$$
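Again this can be confirmed by machine; a sketch using SymPy (assumed available; `diagonalize` may order and scale the eigenvector columns differently from the P above, which is harmless):

```python
# Sketch: checking 5(b) with SymPy (assumed available).
from sympy import Matrix

A = Matrix([[  9, -9, 0],
            [  8, -8, 0],
            [-14, 14, 0]])

P, D = A.diagonalize()
print(D)                     # diag(0, 0, 1), up to the order of eigenvalues
print(P.inv() * A * P == D)  # True
```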
6. Let S : R^2 → R^2 be defined as rotation by an angle θ and let T : R^2 → R^2 be the projection onto the x-axis.
(a). Find the standard matrices corresponding to S and T.
(b). Find the real eigenvalues, if any, of S and of T. Interpret your answers geometrically.
(c). Determine whether S ∘ T = T ∘ S.

Part (a).

$$
S \leftrightarrow \begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{pmatrix},
\qquad
T \leftrightarrow \begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix}.
$$
Part (b). S has no real eigenvalues, unless θ is a multiple of π. The geometric reason is that a rotation will not take a vector to a multiple of itself (unless the rotation is through a very special angle of 0 or ±π or ±2π or ...).
T has eigenvalues 1 and 0, with corresponding eigenvectors (1, 0) and (0, 1), respectively. Note that T takes (1, 0) to 1 times itself, while T takes (0, 1) to 0 times itself.
Part (c). S ∘ T is (almost) never equal to T ∘ S. One way to see this is simply to multiply the two matrices S · T and T · S and verify that the resulting products are different. Another way is to realise that, geometrically, T ∘ S(x1, x2) is always a vector on the x-axis, whereas S ∘ T(x1, x2) in general does not lie on the x-axis (unless the angle of rotation θ is a multiple of π). So the two compositions, S ∘ T and T ∘ S, are different.
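A quick numerical illustration of part (c), using NumPy and an arbitrarily chosen angle θ = 0.7 (any θ that is not a multiple of π would do):

```python
# Sketch: checking 6(c) numerically with NumPy (assumed available).
import numpy as np

theta = 0.7  # arbitrary angle, not a multiple of pi
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation by theta
T = np.array([[1.0, 0.0],
              [0.0, 0.0]])                       # projection onto the x-axis

# The matrix of S ∘ T is S @ T; the matrix of T ∘ S is T @ S.
print(np.allclose(S @ T, T @ S))  # False: the compositions differ
```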
