
Prof. Anchordoqui

Problem set # 1, Physics 400, January 31 and February 7, 2019

1. Let V be a vector space over F and let T be a linear transformation of the vector space V
to itself. A nonzero element x ∈ V satisfying T (x) = λx for some λ ∈ F is called an eigenvector
of T , with eigenvalue λ. Prove that for any fixed λ ∈ F the collection of eigenvectors of T with
eigenvalue λ together with 0 forms a subspace of V , that is, a subset of the vector space V that is
closed under addition and scalar multiplication.

2. (i) Show that {t, sin t, cos 2t, sin t cos t} is a linearly independent set of functions.
(ii) Find all unit vectors lying in span{(3, 4)}.

3. Consider the following matrices:
$$A = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 3 & 1 \\ 2 & 0 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 1 & 0 \\ 0 & -1 & 2 \\ 1 & 1 & 3 \end{pmatrix}, \qquad C = \begin{pmatrix} 2 & 1 \\ 4 & 3 \\ 1 & 0 \end{pmatrix};$$
find the following: (i) $\det(AB) = |AB|$, (ii) $AC$, (iii) $ABC$, and (iv) $AB - B^T A^T$.

4. A matrix is orthogonal if its transpose equals its inverse: $A^T = A^{-1}$. Show that the product of two orthogonal matrices is also an orthogonal matrix.

5. A matrix $A \in \mathbb{C}^{n \times n}$ is nilpotent if $A^k = 0$ for some integer $k > 0$. Prove that the only eigenvalue of a nilpotent matrix is zero.

6. (i) Determine whether the function $T: \mathbb{R}^2 \to \mathbb{R}^2$ given by $T(x, y) = (x^2, y)$ is linear.
(ii) Let T : R3 → R3 be a linear transformation such that

T (1, 0, 0) = (2, 4, −1), T (0, 1, 0) = (1, 3, −2), T (0, 0, 1) = (0, −2, 2);

compute T (−2, 4, −1).


(iii) Let T : R3 → R3 be a linear transformation such that

T (x1 , x2 , x3 ) = (2x1 + x2 , 2x2 − 3x1 , x1 − x3 ), x = (x1 , x2 , x3 ) ∈ R3 ;


compute T (−4, −5, 1).
(iv) Let $T: \mathbb{R}^5 \to \mathbb{R}^2$ be a linear transformation $T(x) = Ax$, with
$$A = \begin{pmatrix} -1 & 2 & 1 & 3 & 4 \\ 0 & 0 & 2 & -1 & 0 \end{pmatrix};$$
compute $T(1, 0, -1, 3, 0)$.


(v) Let T (x, y, z) = (3x − 2y + z, 2x − 3y, y − 4z). Write down the matrix representation of T in
the standard basis and use it to find T (2, −1, −1).
(vi) Let T : R3 → R3 be given by T (a1 , a2 , a3 ) = (3a1 − 2a3 , a2 , 3a1 + 4a2 ). Prove that T is an
isomorphism and find T −1 .

7. (i) Show that if $T: \mathbb{R}^2 \to \mathbb{R}^2$ is the counterclockwise rotation by a fixed angle $\theta$, then
$$T(x, y) = Ax = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
(ii) Let $T$ be the counterclockwise rotation in $\mathbb{R}^2$ by an angle of $120^\circ$; write down the matrix of $T$ and compute $T(2, 2)$.
(iii) Prove that if $\theta$ is not an integer multiple of $\pi$, there does not exist a real-valued matrix $B$ such that $B^{-1}AB$ is a diagonal matrix.

8. Let $x \in \mathbb{R}^n$ be a vector. Then, for $y \in \mathbb{R}^n$, define $\mathrm{proj}_x(y) = \frac{x \cdot y}{\|x\|^2}\, x$. The point of such projections is that any vector $y \in \mathbb{R}^n$ can be written uniquely as a sum of a vector along $x$ and another one perpendicular to $x$: $y = \mathrm{proj}_x(y) + [y - \mathrm{proj}_x(y)]$. It is easy to check that $[y - \mathrm{proj}_x(y)] \perp \mathrm{proj}_x(y)$.
(i) Show that $\mathrm{proj}_x : \mathbb{R}^n \to \mathbb{R}^n$ is a linear transformation.
(ii) Let $T$ be the projection onto the vector $x = (1, -5) \in \mathbb{R}^2$: $T(y) = \mathrm{proj}_x(y)$; find the matrix representation in the standard basis and compute $T(2, 3)$.

9. (i) Show that the eigenvalues of a symmetric linear operator A are real.
(ii) Prove that the eigenvectors of a symmetric linear operator A associated to different eigenvalues
are mutually orthogonal.

10. (i) Show that Hermitian matrices satisfy the following property: $(AB)^\dagger = B^\dagger A^\dagger$.
(ii) Prove that the inverse of a Hermitian matrix is again a Hermitian matrix.

11. Find the eigenvalues and normalized eigenvectors of the Pauli matrices:
$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

12. Show that the Pauli matrices obey the following commutation and anticommutation relations: $[\sigma_i, \sigma_j] = 2i \sum_{k=1}^{3} \epsilon_{ijk}\, \sigma_k$ and $\{\sigma_i, \sigma_j\} = 2\delta_{ij}\, \mathbb{1}_2$.

13. Show that $\{\mathbb{1}, \sigma_1, \sigma_2, \sigma_3\}$ is an appropriate basis to describe the space of operators on a two-dimensional Hilbert space. (i) Show that $\{\mathbb{1}, \sigma_1, \sigma_2, \sigma_3\}$ are linearly independent. (ii) Prove that $\{\mathbb{1}, \sigma_1, \sigma_2, \sigma_3\}$ form a basis of the space of $2 \times 2$ matrices, by showing that any arbitrary matrix
$$M = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix}$$
can be written in the form $M = a_0 \mathbb{1} + \vec{a} \cdot \vec{\sigma}$, where $a_0 = \frac{1}{2}\mathrm{Tr}(M)$, $\vec{a} = \frac{1}{2}\mathrm{Tr}(M\vec{\sigma})$, and $\vec{\sigma} = (\sigma_1, \sigma_2, \sigma_3)$
is the Pauli vector.

14. Evaluate (i) $\int_{-\infty}^{\infty} [f(x)\delta(x-1) + f(x)\delta(x+2)]\, dx$; (ii) $\int_{-\infty}^{\infty} f(x)\, \delta'(x)\, dx$ (to do this integral use integration by parts); (iii) $\int_{-\infty}^{\infty} [f(x)\delta(x-a) - f(x)\delta''(x)]\, dx$; (iv) $\int_{-\infty}^{\infty} \Theta(x)\, \Theta(1-x)\, f(x)\, dx$; (v) $\int_{-\infty}^{\infty} \Theta(x)\, \Theta(b-x)\, x\, f(x)\, dx$; (vi) $\int_{-\infty}^{\infty} [f(x)\, \delta(x-\pi) - f(x)\delta'(x-2\pi) + f(x)\delta''(x-b)]\, dx$.
15. Show that: (i) $\frac{d}{dx}|x| = \mathrm{sgn}\, x = \Theta(x) - \Theta(-x)$, where $|x| = \begin{cases} x & \text{if } x > 0 \\ -x & \text{if } x < 0 \end{cases}$; (ii) $\frac{d^2}{dx^2}|x| = \frac{d}{dx}\,\mathrm{sgn}\, x = 2\delta(x)$.
SOLUTIONS

1. Let λ ∈ F , and let Vλ denote the set of eigenvectors for λ, together with 0. You have to
show that Vλ is a subspace of V . By construction, 0 ∈ Vλ . Suppose x, y ∈ Vλ , then T (x) = λx
and T (y) = λy. Hence T (x + y) = T (x) + T (y) = λx + λy = λ(x + y), so x + y ∈ Vλ . Similarly,
if c ∈ F, then T (cx) = cT (x) = cλx = λ(cx), so cx ∈ Vλ. Therefore Vλ is a subspace.

2 (i) In order to prove this collection is linearly independent, you need to show that if c1 t +
c2 sin t + c3 cos 2t + c4 sin t cos t = 0 for all t, then c1 = c2 = c3 = c4 = 0. Firstly, plug in t = 0, and
find that 0 + 0 + c3 + 0 = 0, so that c3 = 0. Plugging this into the original equation, you now have
c1 t + c2 sin t + c4 sin t cos t = 0, for all t. Secondly, plug in t = π, and find that c1 π + 0 + 0 = 0,
so that c1 = 0. Plugging this into the original equation, you now have c2 sin t + c4 sin t cos t = 0
for all t. Thirdly, plug in t = π/2, and find that c2 + 0 = 0, so that c2 = 0. Plugging this into
our original equation, you now have c4 sin t cos t = 0 for all t. Finally, plug in t = π/4 to find
c4 (√2/2)(√2/2) = 0, or c4 = 0. So, overall, you have proven that c1 = c2 = c3 = c4 = 0, and thus
that the given collection of functions is linearly independent. (ii) Every element of span {(3, 4)}
has the form (3t, 4t), where t ∈ R. An element of span {(3, 4)} would then be a unit vector if
and only if ‖(3t, 4t)‖ = 1; in other words, if (3t)² + (4t)² = 1. This equation has two solutions,
t = ±1/5. Therefore, span {(3, 4)} has two unit vectors, (3/5, 4/5) and (−3/5, −4/5).
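
As a quick numerical cross-check of (i) (a sketch, not a substitute for the proof, assuming a standard Python environment with numpy): sampling the four functions at a few generic points and finding a full-rank matrix is consistent with linear independence.

```python
import numpy as np

# Sample t, sin t, cos 2t, sin t cos t at four generic points; full rank of the
# resulting 4x4 matrix means no nontrivial linear relation holds at these points.
ts = np.array([0.5, 1.0, 2.0, 3.0])
F = np.column_stack([ts, np.sin(ts), np.cos(2 * ts), np.sin(ts) * np.cos(ts)])
print(np.linalg.matrix_rank(F))  # 4
```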

3. (i)
$$AB = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 3 & 1 \\ 2 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 & 0 \\ 0 & -1 & 2 \\ 1 & 1 & 3 \end{pmatrix} = \begin{pmatrix} 1 & -2 & 1 \\ 1 & -2 & 9 \\ 5 & 3 & 3 \end{pmatrix}.$$
Expanding by the first row,
$$|AB| = 1 \begin{vmatrix} -2 & 9 \\ 3 & 3 \end{vmatrix} + 2 \begin{vmatrix} 1 & 9 \\ 5 & 3 \end{vmatrix} + 1 \begin{vmatrix} 1 & -2 \\ 5 & 3 \end{vmatrix} = -33 - 84 + 13 = -104.$$
(ii)
$$AC = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 3 & 1 \\ 2 & 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 4 & 3 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 9 & 7 \\ 13 & 9 \\ 5 & 2 \end{pmatrix}.$$
(iii)
$$ABC = A(BC) = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 3 & 1 \\ 2 & 0 & 1 \end{pmatrix} \begin{pmatrix} 8 & 5 \\ -2 & -3 \\ 9 & 4 \end{pmatrix} = \begin{pmatrix} -5 & -5 \\ 3 & -5 \\ 25 & 14 \end{pmatrix}.$$
(iv)
$$B^T A^T = \begin{pmatrix} 2 & 0 & 1 \\ 1 & -1 & 1 \\ 0 & 2 & 3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 2 \\ 2 & 3 & 0 \\ -1 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 5 \\ -2 & -2 & 3 \\ 1 & 9 & 3 \end{pmatrix},$$
$$AB - B^T A^T = \begin{pmatrix} 0 & -3 & -4 \\ 3 & 0 & 6 \\ 4 & -6 & 0 \end{pmatrix}.$$
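
The matrix arithmetic above is easy to cross-check numerically; the following sketch (not part of the assigned solution) assumes numpy is available.

```python
import numpy as np

A = np.array([[1, 2, -1], [0, 3, 1], [2, 0, 1]])
B = np.array([[2, 1, 0], [0, -1, 2], [1, 1, 3]])
C = np.array([[2, 1], [4, 3], [1, 0]])

print(np.linalg.det(A @ B))   # -104.0 (up to floating-point rounding)
print(A @ C)                  # [[ 9  7] [13  9] [ 5  2]]
print(A @ B @ C)              # [[-5 -5] [ 3 -5] [25 14]]
print(A @ B - B.T @ A.T)      # [[ 0 -3 -4] [ 3  0  6] [ 4 -6  0]]
```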
4. The product of two orthogonal matrices $AB$ will be an orthogonal matrix $C \Leftrightarrow C^T = C^{-1}$. Now, $C_{ij} = \sum_k A_{ik} B_{kj}$ and $(C^T)_{ij} = C_{ji} = \sum_k A_{jk} B_{ki} = \sum_k B_{ki} A_{jk}$. Identifying $B_{ki} = (B^T)_{ik}$ and $A_{jk} = (A^T)_{kj}$ we have $(C^T)_{ij} = \sum_k (B^T)_{ik} (A^T)_{kj}$, or equivalently $C^T = (AB)^T = B^T A^T$. Because $A$ and $B$ are orthogonal, $A^T = A^{-1}$ and $B^T = B^{-1}$. Multiplying $(AB)^T = B^T A^T$ by $AB$ from the right we have $(AB)^T AB = B^T A^T A B = B^T B = \mathbb{1}$. Then $\mathbb{1} = (AB)^{-1} AB$, therefore $(AB)^T = (AB)^{-1}$ and the matrix $C$ is orthogonal.
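
A quick numerical illustration (a sketch assuming numpy): QR factorization of a random matrix produces an orthogonal factor, and the product of two such factors again satisfies $C^T C = \mathbb{1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # Q1 is orthogonal
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # Q2 is orthogonal
C = Q1 @ Q2
print(np.allclose(C.T @ C, np.eye(4)))  # True: C^T = C^{-1}
```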

5. Note that a matrix $A \in \mathbb{C}^{n\times n}$ is nilpotent of degree $k$ if $k$ is a positive integer such that $A^p = 0_{n\times n}$ for $p \geq k$, and $A^p \neq 0_{n\times n}$ for $0 < p < k$. Suppose $\lambda \neq 0$ is an eigenvalue corresponding to the eigenvector $x \neq 0$. It follows that $Ax = \lambda x$ and $A^k x = \lambda^k x$. However, by the nilpotency assumption $A^k = 0_{n\times n}$ and therefore $\lambda^k x = A^k x = 0_{n\times n}\, x = 0$. Since $x \neq 0$, it follows that $\lambda^k = 0$ and hence $\lambda = 0$, which is a contradiction. Therefore every eigenvalue $\lambda$ must be zero.
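
For a concrete instance (a sketch assuming numpy): any strictly upper-triangular matrix is nilpotent, and its numerically computed eigenvalues are all zero.

```python
import numpy as np

N = np.array([[0., 1., 2.], [0., 0., 3.], [0., 0., 0.]])  # strictly upper triangular
print(np.allclose(np.linalg.matrix_power(N, 3), 0))       # True: N^3 = 0
print(np.linalg.eigvals(N))                               # [0. 0. 0.]
```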

6. (i) Note that $T((x, y) + (z, w)) = T(x + z, y + w) = ((x + z)^2, y + w) \neq (x^2, y) + (z^2, w) = T(x, y) + T(z, w)$, so $T$ does not preserve additivity and hence is not linear. (ii) Note that $(-2, 4, -1) = -2(1, 0, 0) + 4(0, 1, 0) - (0, 0, 1)$, so $T(-2, 4, -1) = -2T(1, 0, 0) + 4T(0, 1, 0) - T(0, 0, 1) = (-4, -8, 2) + (4, 12, -8) + (0, 2, -2) = (0, 6, -8)$. (iii) $T(-4, -5, 1) = (2 \times (-4) - 5,\ 2 \times (-5) - 3 \times (-4),\ -4 - 1) = (-13, 2, -5)$. (iv)
$$T(1, 0, -1, 3, 0) = \begin{pmatrix} -1 & 2 & 1 & 3 & 4 \\ 0 & 0 & 2 & -1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ -1 \\ 3 \\ 0 \end{pmatrix} = \begin{pmatrix} 7 \\ -5 \end{pmatrix}.$$
(v) With $e_1 = (1, 0, 0)^T$, $e_2 = (0, 1, 0)^T$, $e_3 = (0, 0, 1)^T$ it follows that
$$T(e_1) = \begin{pmatrix} 3 \\ 2 \\ 0 \end{pmatrix}, \quad T(e_2) = \begin{pmatrix} -2 \\ -3 \\ 1 \end{pmatrix}, \quad T(e_3) = \begin{pmatrix} 1 \\ 0 \\ -4 \end{pmatrix},$$
so the matrix representation in the standard basis is $\begin{pmatrix} 3 & -2 & 1 \\ 2 & -3 & 0 \\ 0 & 1 & -4 \end{pmatrix}$, and
$$T(2, -1, -1) = \begin{pmatrix} 3 & -2 & 1 \\ 2 & -3 & 0 \\ 0 & 1 & -4 \end{pmatrix}\begin{pmatrix} 2 \\ -1 \\ -1 \end{pmatrix} = \begin{pmatrix} 7 \\ 7 \\ 3 \end{pmatrix}.$$
(vi) Relative to the standard basis, the matrix of $T$ is $\begin{pmatrix} 3 & 0 & -2 \\ 0 & 1 & 0 \\ 3 & 4 & 0 \end{pmatrix}$. It is sufficient to prove that this matrix is invertible. Its determinant, computed by expansion along the last column, is $-2 \times (0 \times 4 - 1 \times 3) = 6 \neq 0$; therefore the matrix is invertible because its column vectors are linearly independent. The inverse matrix is $\begin{pmatrix} 0 & -\frac{4}{3} & \frac{1}{3} \\ 0 & 1 & 0 \\ -\frac{1}{2} & -2 & \frac{1}{2} \end{pmatrix}$, so $T^{-1}$ is given by $T^{-1}(a_1, a_2, a_3) = \left(-\frac{4}{3}a_2 + \frac{1}{3}a_3,\ a_2,\ -\frac{1}{2}a_1 - 2a_2 + \frac{1}{2}a_3\right)$.
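
Parts (v) and (vi) can be cross-checked numerically (a sketch assuming numpy):

```python
import numpy as np

# (v) matrix of T(x, y, z) = (3x - 2y + z, 2x - 3y, y - 4z)
M = np.array([[3, -2, 1], [2, -3, 0], [0, 1, -4]])
print(M @ np.array([2, -1, -1]))      # [7 7 3]

# (vi) matrix of T(a1, a2, a3) = (3a1 - 2a3, a2, 3a1 + 4a2)
N = np.array([[3., 0., -2.], [0., 1., 0.], [3., 4., 0.]])
print(np.linalg.det(N))               # 6.0, so T is invertible
print(np.linalg.inv(N))               # rows [0, -4/3, 1/3], [0, 1, 0], [-1/2, -2, 1/2]
```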

7. (i) Write $x = r\cos\phi$ and $y = r\sin\phi$, where $r = \sqrt{x^2 + y^2}$ and $\tan\phi = y/x$. By definition $T(x, y) = (r\cos(\phi + \theta),\ r\sin(\phi + \theta))$. Using trigonometric formulas, $r\cos(\phi + \theta) = r\cos\phi\cos\theta - r\sin\phi\sin\theta = x\cos\theta - y\sin\theta$ and $r\sin(\phi + \theta) = r\sin\phi\cos\theta + r\cos\phi\sin\theta = y\cos\theta + x\sin\theta$. Thus,
$$T(x, y) = A\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
(ii) The matrix representation in the standard basis is
$$A = \begin{pmatrix} -\frac{1}{2} & -\frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & -\frac{1}{2} \end{pmatrix}, \qquad \text{and therefore} \qquad T(2, 2) = \begin{pmatrix} -\frac{1}{2} & -\frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & -\frac{1}{2} \end{pmatrix}\begin{pmatrix} 2 \\ 2 \end{pmatrix} = \begin{pmatrix} -1 - \sqrt{3} \\ -1 + \sqrt{3} \end{pmatrix}.$$
(iii) $\det(A - \lambda\mathbb{1}) = \begin{vmatrix} \cos\theta - \lambda & -\sin\theta \\ \sin\theta & \cos\theta - \lambda \end{vmatrix} = \lambda^2 - 2\lambda\cos\theta + 1$, so $\lambda_{1,2} = \cos\theta \pm \sqrt{\cos^2\theta - 1}$, i.e. $\lambda_{1,2} = a \pm ib$ with $b \neq 0$, so $\lambda_{1,2} \notin \mathbb{R}$. Hence the eigenvalues are not real, and there is no $B \in M(\mathbb{R})$ such that $B^{-1}AB$ is diagonal.
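
A numerical check of (ii) and (iii) (a sketch assuming numpy): the $120^\circ$ rotation sends $(2, 2)$ to $(-1-\sqrt{3},\ -1+\sqrt{3})$, and the eigenvalues come out as a complex conjugate pair.

```python
import numpy as np

theta = np.deg2rad(120)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(A @ np.array([2., 2.]))   # [-2.732..., 0.732...] = [-1 - sqrt(3), -1 + sqrt(3)]
print(np.linalg.eigvals(A))     # cos(theta) +/- i sin(theta): not real for theta != n*pi
```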

8. (i) Let $w \in \mathbb{R}^n$ and $\mu \in \mathbb{R}$; then you can easily check by direct substitution that $\mathrm{proj}_x(y + w) = \mathrm{proj}_x(y) + \mathrm{proj}_x(w)$ and $\mathrm{proj}_x(\mu y) = \mu\, \mathrm{proj}_x(y)$. (ii)
$$T(y_1, y_2) = \mathrm{proj}_x(y_1, y_2) = \frac{x \cdot (y_1, y_2)}{\|x\|^2}\, x = \frac{(1, -5) \cdot (y_1, y_2)}{26}\,(1, -5) = \left(\frac{y_1 - 5y_2}{26},\ \frac{-5y_1 + 25y_2}{26}\right).$$
Thus, with $e_1 = (1, 0)^T$, $e_2 = (0, 1)^T$ you obtain $T(e_1) = \begin{pmatrix} \frac{1}{26} \\ -\frac{5}{26} \end{pmatrix}$, $T(e_2) = \begin{pmatrix} -\frac{5}{26} \\ \frac{25}{26} \end{pmatrix}$, so the "standard matrix" is
$$A = \begin{pmatrix} \frac{1}{26} & -\frac{5}{26} \\ -\frac{5}{26} & \frac{25}{26} \end{pmatrix}, \qquad \text{and therefore} \qquad T(2, 3) = \begin{pmatrix} \frac{1}{26} & -\frac{5}{26} \\ -\frac{5}{26} & \frac{25}{26} \end{pmatrix}\begin{pmatrix} 2 \\ 3 \end{pmatrix} = \begin{pmatrix} -\frac{1}{2} \\ \frac{5}{2} \end{pmatrix}.$$
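
Equivalently, $\mathrm{proj}_x(y) = \frac{xx^T}{\|x\|^2}\, y$, so the standard matrix is the rank-one matrix $xx^T/\|x\|^2$; a quick check (a sketch assuming numpy):

```python
import numpy as np

x = np.array([1., -5.])
P = np.outer(x, x) / (x @ x)                 # standard matrix of proj_x
print(P * 26)                                # [[ 1 -5] [-5 25]]: P = (1/26)[[1,-5],[-5,25]]
y = np.array([2., 3.])
print(P @ y)                                 # [-0.5  2.5]
print(np.isclose((y - P @ y) @ (P @ y), 0))  # True: the residual is perpendicular
```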

9. (i) Assume $Ax = \lambda x$; then it follows that
$$\lambda\|x\|^2 = \langle x, Ax\rangle = \langle Ax, x\rangle = \lambda^*\|x\|^2 \ \Rightarrow\ \lambda^* = \lambda.$$
(ii) Assume $Ax = \lambda x$ and $Ay = \mu y$, with $\lambda \neq \mu$. It follows that
$$(\lambda - \mu)\langle y, x\rangle = \langle y, Ax\rangle - \langle Ay, x\rangle = 0 \ \Rightarrow\ \langle y, x\rangle = 0.$$
Therefore, $x \perp y$ if $\lambda \neq \mu$.
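
Both statements are easy to illustrate for a random symmetric matrix (a sketch assuming numpy; a generic draw has distinct eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((4, 4))
S = S + S.T                                # random symmetric matrix
w, V = np.linalg.eig(S)
print(np.allclose(np.imag(w), 0))          # True: the spectrum is real
print(np.allclose(V.T @ V, np.eye(4)))     # True: eigenvectors mutually orthogonal
```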

10. (i) Derive this using matrix multiplication $(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$, where $(AB)_{ij}$ denotes the $(i, j)$th entry of $AB$, and likewise for $A$ and $B$. Then
$$[(AB)^\dagger]_{ji} = [(AB)^*]_{ij} = \sum_{k=1}^{n} A^*_{ik} B^*_{kj} = \sum_{k=1}^{n} (B^\dagger)_{jk} (A^\dagger)_{ki}.$$
The sum on the right is the $(j, i)$th entry of $B^\dagger A^\dagger$, while $[(AB)^\dagger]_{ji}$ is the $(j, i)$th entry of $(AB)^\dagger$. Therefore, $(AB)^\dagger = B^\dagger A^\dagger$. (ii) If $A$ is Hermitian, then $A = UDU^\dagger$, where $U$ is unitary and $D$ is a real diagonal matrix. Therefore, $A^{-1} = (UDU^\dagger)^{-1} = (U^\dagger)^{-1} D^{-1} U^{-1} = U D^{-1} U^\dagger$, because $U^{-1} = U^\dagger$. Note that $D^{-1}$ is just the diagonal matrix with entries $\lambda_i^{-1}$ (where the $\lambda_i$ are the entries of $D$). Hence, $(A^{-1})^\dagger = (U D^{-1} U^\dagger)^\dagger = U (D^{-1})^\dagger U^\dagger = U D^{-1} U^\dagger = A^{-1}$, because $D^{-1}$ is a real matrix, so that $A^{-1}$ is Hermitian.
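
Both properties can be spot-checked for random complex matrices (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
# (i) the dagger reverses products
print(np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T))  # True

# (ii) the inverse of a Hermitian matrix is Hermitian
H = A + A.conj().T                       # Hermitian, generically invertible
Hinv = np.linalg.inv(H)
print(np.allclose(Hinv, Hinv.conj().T))  # True
```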

11. For $\sigma_1$ the eigenvalue relation
$$\det\begin{pmatrix} -\lambda & 1 \\ 1 & -\lambda \end{pmatrix} = 0$$
leads to $\lambda^2 = 1$, which implies $\lambda = \pm 1$. For $\lambda = 1$, we have
$$\begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \ \Rightarrow\ \begin{cases} -x_1 + x_2 = 0 \\ x_1 - x_2 = 0 \end{cases},$$
yielding $x_1 = x_2 = 1/\sqrt{2}$. For $\lambda = -1$, we have
$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \ \Rightarrow\ \begin{cases} x_1 + x_2 = 0 \\ x_1 + x_2 = 0 \end{cases},$$
yielding $x_1 = -x_2 = 1/\sqrt{2}$.
For $\sigma_2$ the eigenvalue relation
$$\det\begin{pmatrix} -\lambda & -i \\ i & -\lambda \end{pmatrix} = 0$$
leads to $\lambda^2 = 1$, which implies $\lambda = \pm 1$. For $\lambda = 1$, we have
$$\begin{pmatrix} -1 & -i \\ i & -1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \ \Rightarrow\ \begin{cases} x_1 + ix_2 = 0 \\ ix_1 - x_2 = 0 \end{cases},$$
yielding $x_1 = i/\sqrt{2}$ and $x_2 = -1/\sqrt{2}$. For $\lambda = -1$, we have
$$\begin{pmatrix} 1 & -i \\ i & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \ \Rightarrow\ \begin{cases} x_1 - ix_2 = 0 \\ ix_1 + x_2 = 0 \end{cases},$$
yielding $x_1 = i/\sqrt{2}$ and $x_2 = 1/\sqrt{2}$.
For $\sigma_3$, the eigenvalue relation
$$\det\begin{pmatrix} 1 - \lambda & 0 \\ 0 & -1 - \lambda \end{pmatrix} = 0$$
leads to $-(1 - \lambda)(1 + \lambda) = 0$, which implies $\lambda = \pm 1$. For $\lambda = 1$, we have
$$\begin{pmatrix} 0 & 0 \\ 0 & -2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \ \Rightarrow\ \begin{cases} 0 = 0 \\ -2x_2 = 0 \end{cases},$$
yielding $x_1 = 1$ and $x_2 = 0$. For $\lambda = -1$, we have
$$\begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \ \Rightarrow\ \begin{cases} 2x_1 = 0 \\ 0 = 0 \end{cases},$$
yielding $x_1 = 0$ and $x_2 = 1$.
All in all, each Pauli matrix has eigenvalues $1$ and $-1$. The normalized eigenvectors are
$$\sigma_1 \Rightarrow \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix},\ \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}; \qquad \sigma_2 \Rightarrow \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix},\ \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix}; \qquad \sigma_3 \Rightarrow \begin{pmatrix} 1 \\ 0 \end{pmatrix},\ \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
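
The eigenpairs can be confirmed with numpy's Hermitian eigensolver (a sketch; eigenvectors agree with the above up to an overall phase):

```python
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

for s in paulis:
    w, V = np.linalg.eigh(s)   # Hermitian eigenproblem
    print(w)                   # [-1.  1.] for every Pauli matrix
    print(V)                   # columns: normalized eigenvectors, up to phase
```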
12. It is straightforward to check that the multiplication of two different Pauli matrices yields the third one multiplied by the (positive or negative) imaginary unit, i.e., $\sigma_1\sigma_2 = i\sigma_3$, $\sigma_1\sigma_3 = -i\sigma_2$, $\sigma_2\sigma_3 = i\sigma_1$, $\sigma_2\sigma_1 = -i\sigma_3$, $\sigma_3\sigma_1 = i\sigma_2$, $\sigma_3\sigma_2 = -i\sigma_1$. This may be expressed in compact form, for all $i, j \in \{1, 2, 3\}$, as
$$\sigma_i \sigma_j = \delta_{ij}\,\mathbb{1}_2 + i\sum_{k=1}^{3} \epsilon_{ijk}\,\sigma_k.$$
As a direct consequence of this last relation, the commutation and anticommutation relations for the Pauli spin matrices are
$$[\sigma_i, \sigma_j] = 2i\sum_{k=1}^{3} \epsilon_{ijk}\,\sigma_k \qquad \text{and} \qquad \{\sigma_i, \sigma_j\} = 2\delta_{ij}\,\mathbb{1}_2.$$
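
The full set of relations can be verified in a few lines by looping over all index pairs (a sketch assuming numpy):

```python
import numpy as np
from itertools import product

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
eps = np.zeros((3, 3, 3))                       # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

ok = True
for i, j in product(range(3), repeat=2):
    comm = s[i] @ s[j] - s[j] @ s[i]
    anti = s[i] @ s[j] + s[j] @ s[i]
    ok &= np.allclose(comm, 2j * sum(eps[i, j, k] * s[k] for k in range(3)))
    ok &= np.allclose(anti, 2 * (i == j) * np.eye(2))
print(ok)  # True
```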

13. (i) Suppose
$$\alpha\mathbb{1} + \beta\sigma_1 + \zeta\sigma_2 + \xi\sigma_3 = \begin{pmatrix} \alpha + \xi & \beta - i\zeta \\ \beta + i\zeta & \alpha - \xi \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}. \qquad (1)$$
Then $\alpha = -\xi$ and $\alpha = \xi$, so $\alpha = \xi = 0$. Similarly, $\beta = -i\zeta$ and $\beta = i\zeta$, which implies $\beta = \zeta = 0$. Hence $\{\mathbb{1}, \sigma_1, \sigma_2, \sigma_3\}$ are linearly independent.


(ii) Now we show that $\{\mathbb{1}, \sigma_1, \sigma_2, \sigma_3\}$ span the space of $2 \times 2$ matrices. Let
$$M = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} = \frac{1}{2}(m_{11} + m_{22})\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \frac{1}{2}(m_{11} - m_{22})\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + \frac{1}{2}(m_{12} + m_{21})\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + \frac{i}{2}(m_{12} - m_{21})\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$$
$$= \frac{1}{2}(m_{11} + m_{22})\,\mathbb{1} + \frac{1}{2}(m_{12} + m_{21})\,\sigma_1 + \frac{i}{2}(m_{12} - m_{21})\,\sigma_2 + \frac{1}{2}(m_{11} - m_{22})\,\sigma_3. \qquad (2)$$
Note that
$$\frac{1}{2}\mathrm{Tr}[M] = \frac{1}{2}(m_{11} + m_{22}), \qquad (3)$$
so the first term in (2) can be written as $\frac{1}{2}\mathrm{Tr}[M]\,\mathbb{1}$. Now,
$$\frac{1}{2}\mathrm{Tr}[M\sigma_1] = \frac{1}{2}\mathrm{Tr}\begin{pmatrix} m_{12} & m_{11} \\ m_{22} & m_{21} \end{pmatrix} = \frac{1}{2}(m_{12} + m_{21}),$$
$$\frac{1}{2}\mathrm{Tr}[M\sigma_2] = \frac{1}{2}\mathrm{Tr}\begin{pmatrix} i\,m_{12} & -i\,m_{11} \\ i\,m_{22} & -i\,m_{21} \end{pmatrix} = \frac{i}{2}(m_{12} - m_{21}),$$
$$\frac{1}{2}\mathrm{Tr}[M\sigma_3] = \frac{1}{2}\mathrm{Tr}\begin{pmatrix} m_{11} & -m_{12} \\ m_{21} & -m_{22} \end{pmatrix} = \frac{1}{2}(m_{11} - m_{22}).$$
Defining $M\vec{\sigma} = (M\sigma_1, M\sigma_2, M\sigma_3)$, the last three terms in (2) can be written as $\frac{1}{2}\mathrm{Tr}[M\vec{\sigma}] \cdot \vec{\sigma}$. Therefore, any $2 \times 2$ matrix can be written as $M = a_0\mathbb{1} + \vec{a}\cdot\vec{\sigma}$, where $a_0 = \frac{1}{2}\mathrm{Tr}[M]$ and $\vec{a} = \frac{1}{2}\mathrm{Tr}[M\vec{\sigma}]$.
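
The decomposition can be sanity-checked for a random complex matrix (a sketch assuming numpy):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(3)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
a0 = 0.5 * np.trace(M)
a = [0.5 * np.trace(M @ sk) for sk in s]
rebuilt = a0 * np.eye(2) + sum(ak * sk for ak, sk in zip(a, s))
print(np.allclose(rebuilt, M))  # True: {1, sigma_1, sigma_2, sigma_3} spans 2x2 matrices
```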

14. (i) $\int_{-\infty}^{\infty} [f(x)\delta(x-1) + f(x)\delta(x+2)]\, dx = f(1) + f(-2)$; (ii) $\int_{-\infty}^{\infty} f(x)\,\delta'(x)\, dx = -f'(0)$; (iii) $\int_{-\infty}^{\infty} [f(x)\delta(x-a) - f(x)\delta''(x)]\, dx = f(a) - f''(0)$; (iv) $\int_{-\infty}^{\infty} \Theta(x)\,\Theta(1-x)\, f(x)\, dx = \int_{0}^{1} f(x)\, dx$; (v) $\int_{-\infty}^{\infty} \Theta(x)\,\Theta(b-x)\, x\, f(x)\, dx = \int_{0}^{b} x\, f(x)\, dx$; (vi) $\int_{-\infty}^{\infty} [f(x)\,\delta(x-\pi) - f(x)\delta'(x-2\pi) + f(x)\delta''(x-b)]\, dx = f(\pi) + f'(2\pi) + f''(b)$.
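
Representative cases can also be checked symbolically; the sketch below assumes sympy, which writes the $n$th delta derivative as DiracDelta(x, n) (the exact printed form may vary with the sympy version).

```python
from sympy import symbols, Function, integrate, oo, DiracDelta, Heaviside

x = symbols('x', real=True)
f = Function('f')

# (i) sifting property
print(integrate(f(x) * (DiracDelta(x - 1) + DiracDelta(x + 2)), (x, -oo, oo)))
# f(1) + f(-2)

# (ii) first derivative of the delta
print(integrate(f(x) * DiracDelta(x, 1), (x, -oo, oo)))
# -f'(0), printed as -Subs(Derivative(f(x), x), x, 0)

# (iv) the step functions cut the range down to [0, 1]; concrete f(x) = x**2
print(integrate(Heaviside(x) * Heaviside(1 - x) * x**2, (x, -oo, oo)))  # 1/3
```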

15. (i) The signum function is the derivative of the absolute value function (up to the indeterminacy at zero):
$$|x| = \begin{cases} x & \text{if } x > 0 \\ -x & \text{if } x < 0 \end{cases} \ \Rightarrow\ |x|' = \begin{cases} 1 & \text{if } x > 0 \\ -1 & \text{if } x < 0 \end{cases} = \mathrm{sgn}(x);$$
on the other hand, $\Theta(x) = \begin{cases} 0 & \text{if } x < 0 \\ 1 & \text{if } x > 0 \end{cases}$, hence
$$\Theta(x) - \Theta(-x) = \begin{cases} 1 & \text{if } x > 0 \\ -1 & \text{if } x < 0 \end{cases} = \mathrm{sgn}(x) = |x|'.$$
(ii) The signum function is differentiable with derivative zero everywhere except at zero. It is not differentiable at zero in the ordinary sense, but under the generalised notion of differentiation in distribution theory you may write $[\mathrm{sgn}(x)]' = [\Theta(x) - \Theta(-x)]' = \delta(x) + \delta(-x) = 2\delta(x)$. Then $|x|'' = [\mathrm{sgn}(x)]' = 2\delta(x)$.
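
sympy's distributional derivatives reproduce both identities (a sketch; in sympy the derivative of Heaviside is DiracDelta, and DiracDelta(-x) auto-simplifies to DiracDelta(x)):

```python
from sympy import symbols, Abs, Heaviside, diff

x = symbols('x', real=True)
print(diff(Abs(x), x))                        # sign(x)
print(diff(Heaviside(x) - Heaviside(-x), x))  # 2*DiracDelta(x)
```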
