Linear Systems and Matrices
  Which of the following matrices are in row echelon form? If not, explain why.

  a) $\begin{pmatrix} 1 & 5 & 12 \\ 0 & 1 & 5 \\ 0 & 0 & 1 \end{pmatrix}$    b) $\begin{pmatrix} 1 & 3 & 2 & 7 \\ 0 & 0 & 1 & 7 \\ 0 & 1 & 9 & -1 \end{pmatrix}$    c) $\begin{pmatrix} 1 & 3 & -5 & 17 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 0 & 1 \end{pmatrix}$    d) $\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
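Not part of the original sheet: a short SymPy check (assuming SymPy is available) of the four matrices above; is_echelon tests exactly the row echelon conditions the question asks about.

    from sympy import Matrix

    # Matrices a)-d) copied from the problem above.
    candidates = {
        "a": Matrix([[1, 5, 12], [0, 1, 5], [0, 0, 1]]),
        "b": Matrix([[1, 3, 2, 7], [0, 0, 1, 7], [0, 1, 9, -1]]),
        "c": Matrix([[1, 3, -5, 17], [0, 1, 1, 7], [0, 0, 0, 1]]),
        "d": Matrix([[1, 0, 1], [0, 0, 0], [0, 0, 1]]),
    }
    for name, M in candidates.items():
        # is_echelon: each pivot lies to the right of the one above, zero rows at the bottom
        print(name, M.is_echelon)   # expected: a True, b False, c True, d False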
  Find the reduced echelon form of the matrix $\begin{pmatrix} 1 & 2 & 3 \\ 1 & 4 & 1 \\ 2 & 1 & 9 \end{pmatrix}$.                    ($\begin{pmatrix} 1 & 0 & 5 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix}$)
  Find the reduced echelon form of the matrix $\begin{pmatrix} 1 & 3 & 2 & 5 \\ 2 & 5 & 2 & 3 \\ 2 & 7 & 7 & 22 \end{pmatrix}$.                    ($\begin{pmatrix} 1 & 0 & 0 & 4 \\ 0 & 1 & 0 & -3 \\ 0 & 0 & 1 & 5 \end{pmatrix}$)
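As a check on the two reduced forms above (not part of the original sheet), SymPy's rref() returns the reduced echelon form together with the pivot columns, assuming SymPy is installed:

    from sympy import Matrix

    M1 = Matrix([[1, 2, 3], [1, 4, 1], [2, 1, 9]])
    M2 = Matrix([[1, 3, 2, 5], [2, 5, 2, 3], [2, 7, 7, 22]])
    print(M1.rref()[0])   # expected: [1 0 5; 0 1 -1; 0 0 0]
    print(M2.rref()[0])   # expected: [1 0 0 4; 0 1 0 -3; 0 0 1 5]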
  Use elementary row operations to transform each augmented coefficient matrix to echelon form.
  Then solve the system by back substitution.
  2x − 4y + z = 3
   x − 3y + z = 5                                                   (inconsistent , no solution)
  3x − 7y + 2z = 12
  −3x − 2y + 4z = 9
        3y − 2z = 5                                                      (x = 3 , y = 7 , z = 8)
   4x − 3y + 2z = 7
  Use elementary row operations to transform each augmented coefficient matrix to echelon form.
  Then solve the system by back substitution.
  2x1 + 8x2 + 3x3 = 2
   x1 + 3x2 + 2x3 = 5                                               (x1 = 3 , x2 = −2 , x3 = 4)
  2x1 + 7x2 + 4x3 = 8
  3x1 + x2 − 3x3 = −4
   x1 + x2 + x3 = 1                                                      (inconsistent , no solution)
  5x1 + 6x2 + 8x3 = 8
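A minimal NumPy sketch of the procedure these problems ask for (forward elimination to echelon form, then back substitution), applied to the system whose listed answer is x1 = 3, x2 = −2, x3 = 4; it is not part of the original sheet, and the partial-pivoting step is only a numerical safeguard, not something the exercise requires.

    import numpy as np

    # augmented matrix of: 2x1 + 8x2 + 3x3 = 2, x1 + 3x2 + 2x3 = 5, 2x1 + 7x2 + 4x3 = 8
    aug = np.array([[2., 8., 3., 2.],
                    [1., 3., 2., 5.],
                    [2., 7., 4., 8.]])
    n = 3
    for i in range(n):                                   # forward elimination
        p = i + np.argmax(np.abs(aug[i:, i]))            # partial pivoting
        aug[[i, p]] = aug[[p, i]]
        for j in range(i + 1, n):
            aug[j] -= (aug[j, i] / aug[i, i]) * aug[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                       # back substitution
        x[i] = (aug[i, n] - aug[i, i + 1:n] @ x[i + 1:]) / aug[i, i]
    print(x)   # expected: approximately [ 3. -2.  4.]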
  Determine for what values of k each system has a unique solution, no solution or infinitely many
  solutions.
   3x + 2y = 11
                                                                              (k ≠ 4 , k = 4 , none)
   6x + ky = 21
   3x + 2y = 1
                                                                              (all k , none , none)
   7x + 5y = k
  Determine for what values of k each system has a unique solution, no solution or infinitely many
  solutions.
    x + 2y + z = 3
   2x − y − 3z = 5                                                        (none , k ≠ 11 , k = 11)
   4x + 3y − z = k
    x + 2y + 2z = 4
   2x + ky + 4z = 8                                                  (k ≠ 0 and k ≠ 4 , k = 0 , k = 4)
             kz = 4
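Not from the original sheet: a SymPy sketch of one way to locate the critical parameter values, shown for the system 3x + 2y = 11, 6x + ky = 21. The coefficient determinant vanishes exactly where the unique solution is lost, and row reduction of the augmented matrix then decides between "no solution" and "infinitely many".

    from sympy import Matrix, symbols, solve

    k = symbols('k')
    print(solve(Matrix([[3, 2], [6, k]]).det(), k))      # [4]: unique solution iff k != 4
    # at k = 4 the augmented matrix reduces to an inconsistent row 0 = 1:
    print(Matrix([[3, 2, 11], [6, 4, 21]]).rref()[0])    # last row is [0, 0, 1]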
  Use elementary row operations to transform each augmented coefficient matrix to reduced echelon
  form. Then solve the system by back substitution.
   x + 2y − z = 1
  2x + y + 4z = 2                                                       (x = 7 , y = −4 , z = −2)
  3x + 3y + 4z = 1
    x1     − x2    + x3 − x4       =  2
    x1     − x2    + x3 + x4       =  0
                                                      (x1 = 1 + t − s , x2 = t , x3 = s , x4 = −1)
   4x1     − 4x2   + 4x3           =  4
  −2x1     + 2x2   − 2x3 + x4      = −3
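Not part of the original sheet: SymPy's linsolve reports the parametric solution of the 4 × 4 system above directly, with the free unknowns (here x2 and x3) appearing inside the solution tuple.

    from sympy import Matrix, linsolve, symbols

    x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
    A = Matrix([[ 1, -1,  1, -1],
                [ 1, -1,  1,  1],
                [ 4, -4,  4,  0],
                [-2,  2, -2,  1]])
    b = Matrix([2, 0, 4, -3])
    print(linsolve((A, b), [x1, x2, x3, x4]))
    # expected: {(x2 - x3 + 1, x2, x3, -1)}, i.e. x1 = 1 + t - s and x4 = -1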
  Use elementary row operations to transform each augmented coefficient matrix to reduced echelon
  form. Then solve the system by back substitution.
  3x + 9y + z = 16
  2x + 6y + 7z =  3                                                      (x = 5−t , y = t , z = 1)
   x + 3y − 6z = −1
  Determine for what values of α each system has a unique solution, no solution or infinitely many
  solutions. In the case of infinitely many solutions, find the solution set.
    x + y − z = 1
   2x + 3y + αz = 3                              (α ≠ 2 and α ≠ −3 , α = −3 , α = 2 , (5t, 1 − 4t, t))
    x + αy + 3z = 2
    x + 4y − 7z = 1
   −x − 3y + 5z = 3                               (α ≠ 3 and α ≠ −3 , α = −3 , α = 3 , (−t, 2 + 2t, t))
   2x + 5y + (α² − 17)z = α + 7
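A supplementary SymPy check (not in the original sheet) for the first of the two systems above: the coefficient determinant, written as a polynomial in α, factors so that its roots are exactly the values where the unique solution disappears.

    from sympy import Matrix, symbols, factor

    alpha = symbols('alpha')
    A = Matrix([[1, 1, -1],
                [2, 3, alpha],
                [1, alpha, 3]])
    print(factor(A.det()))   # -(alpha - 2)*(alpha + 3): unique solution iff alpha != 2, -3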
  Determine for what values of λ and β each system has a unique solution, no solution or infinitely
  many solutions. In the case of unique and infinitely many solutions, find the solution sets.
    x      − βz =  1
  −2x + λy − βz = −1
   −x + λy − βz = 0
                           (λ ≠ 0 and β ≠ 0 , λ = 0 , λ ≠ 0 and β = 0 , (1, 1/a, 0) , (1, 1/(2a − 1), t))
    x + 2y       +       5z     =   6
         y       +       2z     =   2
  −2x + 2y       + (β − 1)z     =   0
   2x + 2y       +       6z     = λ+2
                                 (λ = 6 and β ≠ 3 , λ ≠ 6 , λ = 6 and β = 3 , (2, 2, 0) , (2−t, 2−2t, t))
  Determine for what values of λ and β each system has a unique solution, no solution or infinitely
  many solutions. In the case of infinitely many solutions, find the solution set.

  $\begin{pmatrix} 2 & -1 & 2\lambda & 1 \\ 2 & 0 & 2\lambda & 1 \\ 2 & -1 & 2\lambda+1 & \lambda+1 \\ -2 & 1 & 1-2\lambda & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} \beta \\ 1 \\ 0 \\ -\beta-2 \end{pmatrix}$
                            (λ ≠ −1 , λ = −1 and β ≠ 2 , λ = −1 and β = 2 , ((t − 3)/2, −1, t − 2, t))
  $\begin{pmatrix} 1 & 0 & 1 & -1 \\ -2 & -1 & -4 & 2 \\ -1 & -2 & -3 & \lambda-1 \\ -1 & 1 & 2\lambda-3 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} \beta \\ -2 \\ 1 \\ \beta-2 \end{pmatrix}$
                                 (λ ≠ 2 , λ = 2 and β ≠ 5 , λ = 2 and β = 5 , (2 − r + s, 1 − 2r, r, s))
  Show that the 2 × 2 matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is row equivalent to the 2 × 2 identity matrix provided
  that ad − bc ≠ 0.
  If $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then show that A² = (trace A)A − (ad − bc)I, where I is the 2 × 2 identity
  matrix.
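Not part of the original sheet: the 2 × 2 identity above can be confirmed symbolically in SymPy before proving it by hand.

    from sympy import Matrix, symbols, eye, simplify

    a, b, c, d = symbols('a b c d')
    A = Matrix([[a, b], [c, d]])
    # A^2 - ((trace A) A - (ad - bc) I) should be the zero matrix
    print(simplify(A*A - (A.trace()*A - (a*d - b*c)*eye(2))))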
  Let $A = \begin{pmatrix} 2 & -1 & 0 \\ 4 & 0 & -3 \\ 5 & -2 & 7 \end{pmatrix}$, $B = \begin{pmatrix} 6 & -3 & -4 \\ 5 & 2 & -1 \\ 0 & 7 & 9 \end{pmatrix}$, c = 7, d = 5. Compute cA + dB.                    ($\begin{pmatrix} 44 & -22 & -20 \\ 53 & 10 & -26 \\ 35 & 21 & 94 \end{pmatrix}$)
  Let $A = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}$ and $B = \begin{pmatrix} -1 & 0 & 4 \\ 3 & -2 & 5 \end{pmatrix}$. Compute AB and BA (where defined).                    (AB = $\begin{pmatrix} 1 & -2 & 13 \\ 5 & -6 & 31 \end{pmatrix}$ , BA is not defined)
  Let $A = \begin{pmatrix} 3 \\ -2 \end{pmatrix}$ and $B = \begin{pmatrix} 0 & -2 \\ 3 & 1 \\ -4 & 5 \end{pmatrix}$. Compute AB and BA (where defined).                    (AB is not defined , BA = $\begin{pmatrix} 4 \\ 7 \\ -22 \end{pmatrix}$)
  Let $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \\ 1 & 4 \end{pmatrix}$, $B = \begin{pmatrix} 1 & -1 \\ 3 & -2 \end{pmatrix}$, $C = \begin{pmatrix} 1 & 0 & -1 & 2 \\ 3 & 2 & 0 & 1 \end{pmatrix}$. Compute the product ABC.                    ($\begin{pmatrix} -4 & -4 & -2 & 2 \\ -9 & -12 & -9 & 12 \\ -14 & -18 & -13 & 17 \end{pmatrix}$)
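A quick NumPy check (not in the original sheet) of two of the answers above: the combination cA + dB with c = 7, d = 5, and the triple product ABC.

    import numpy as np

    A1 = np.array([[2, -1, 0], [4, 0, -3], [5, -2, 7]])
    B1 = np.array([[6, -3, -4], [5, 2, -1], [0, 7, 9]])
    print(7 * A1 + 5 * B1)    # expected: [[44 -22 -20], [53 10 -26], [35 21 94]]

    A2 = np.array([[2, 0], [0, 3], [1, 4]])
    B2 = np.array([[1, -1], [3, -2]])
    C2 = np.array([[1, 0, -1, 2], [3, 2, 0, 1]])
    print(A2 @ B2 @ C2)       # expected: [[-4 -4 -2 2], [-9 -12 -9 12], [-14 -18 -13 17]]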
  Write each given homogeneous system in the matrix form Ax = 0. Then find the solution in
  vector form.
  x1        − 5x3 + 4x4 = 0
                                                              (x = s(5, −2, 1, 0) + t(−4, 7, 0, 1))
         x2 + 2x3 − 7x4 = 0
  x1                  + 3x4 − x5 = 0
         x2           − 2x4 + 6x5 = 0                    (x = s(−3, 2, −1, 1, 0)+t(1, −6, 8, 0, 1))
                 x3   + x4 − 8x5 = 0
  Write each given homogeneous system in the matrix form Ax = 0. Then find the solution in
  vector form.
  x1 − 3x2            + 6x4 = 0
                                                                  (x = s(3, 1, 0, 0) + t(−6, 0, −9, 1))
                   x3 + 9x4 = 0
  x1 − x2            + 7x4 + 3x5 = 0
                                               (x = r(1, 1, 0, 0, 0)+s(−7, 0, 1, 1, 0)+t(−3, 0, 2, 0, 1))
                  x3 − x4 − 2x5 = 0
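Not part of the original sheet: the vector-form solutions asked for above are a basis of the null space of the coefficient matrix, which SymPy returns directly; shown for the last system.

    from sympy import Matrix

    A = Matrix([[1, -1, 0,  7,  3],
                [0,  0, 1, -1, -2]])
    for v in A.nullspace():
        print(v.T)   # expected: (1, 1, 0, 0, 0), (-7, 0, 1, 1, 0), (-3, 0, 2, 0, 1)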
                                                                   
  Let $B = \begin{pmatrix} 1 & 2 & -3 \\ 2 & 3 & 3 \\ 5 & 9 & -6 \end{pmatrix}$. Find an elementary matrix E so that $EB = \begin{pmatrix} 1 & 2 & -3 \\ 2 & 3 & 3 \\ 3 & 5 & 0 \end{pmatrix}$.                    ($E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix}$)
                                                                                     
  Let $A = \begin{pmatrix} 2 & 4 & -2 \\ 4 & 5 & 1 \\ 0 & 3 & 7 \end{pmatrix}$, $B_1 = \begin{pmatrix} 0 & 3 & 7 \\ 4 & 5 & 1 \\ 2 & 4 & -2 \end{pmatrix}$, $B_2 = \begin{pmatrix} 1 & 2 & -1 \\ 4 & 5 & 1 \\ 0 & 3 & 7 \end{pmatrix}$, $B_3 = \begin{pmatrix} 2 & 4 & -2 \\ 0 & -3 & 5 \\ 0 & 3 & 7 \end{pmatrix}$.
  Find elementary matrices E1 , E2 and E3 such that E1 A = B1 , E2 A = B2 and E3 A = B3 .
  Determine the inverses of E1 , E2 and E3 .
                                                                            
  ($E_1 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$ , $E_2 = \begin{pmatrix} 1/2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ , $E_3 = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ , $E_1^{-1} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$ , $E_2^{-1} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ , $E_3^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$)
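Not in the original sheet: a NumPy illustration of the idea behind these answers; an elementary matrix is the identity with the row operation applied to it, and left-multiplication then performs that operation on A (shown for E1 and E3).

    import numpy as np

    A = np.array([[2, 4, -2], [4, 5, 1], [0, 3, 7]])
    E1 = np.eye(3)[[2, 1, 0]]          # swap rows 1 and 3 of the identity
    E3 = np.eye(3); E3[1, 0] = -2      # add (-2) x row 1 to row 2
    print(E1 @ A)                      # = B1 (rows of A in the order 3, 2, 1)
    print(E3 @ A)                      # = B3 (second row becomes [0, -3, 5])
    print(np.linalg.inv(E3))           # the inverse operation: entry (2,1) is +2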
Find a 2 × 2 matrix A with each main diagonal element zero such that A² = I.
Find a 2 × 2 matrix A with each main diagonal element zero such that A² = −I.
                                                                                   
  Show that there is no 3 × 3 matrix A such that A³ = $\begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$ and A⁷ = $\begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$.
  Use matrix multiplication to show that if x1 and x2 are two solutions of the homogeneous system
  Ax = 0 and c1 and c2 are real numbers, then c1 x1 + c2 x2 is also a solution.
  Use matrix multiplication to show that if x1 and x2 are solutions of the nonhomogeneous system
  Ax = b, then x1 − x2 is a solution of the homogeneous system Ax = 0.
  Find the inverse of each of the following matrices.
  $\begin{pmatrix} 1 & 5 & 1 \\ 2 & 5 & 0 \\ 2 & 7 & 1 \end{pmatrix}$                                                      ($\begin{pmatrix} -5 & -2 & 5 \\ 2 & 1 & -2 \\ -4 & -3 & 5 \end{pmatrix}$)
  $\begin{pmatrix} 2 & 0 & -1 \\ 1 & 0 & 3 \\ 1 & 1 & 1 \end{pmatrix}$                                                      ($\frac{1}{7}\begin{pmatrix} 3 & 1 & 0 \\ -2 & -3 & 7 \\ -1 & 2 & 0 \end{pmatrix}$)
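A supplementary SymPy sketch (not from the original sheet): row-reducing the block matrix [A | I] to [I | A⁻¹] reproduces the first inverse listed above.

    from sympy import Matrix, eye

    A = Matrix([[1, 5, 1], [2, 5, 0], [2, 7, 1]])
    R, _ = A.row_join(eye(3)).rref()
    print(R[:, 3:])   # expected: [-5 -2 5; 2 1 -2; -4 -3 5]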
                                                                                               
  Let $A = \begin{pmatrix} 1 & 4 & 1 \\ 2 & 8 & 3 \\ 2 & 7 & 4 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 0 & 3 \\ 0 & 2 & 2 \\ -1 & 1 & 0 \end{pmatrix}$. Find A⁻¹ and solve AX = B for X.
                    (A⁻¹ = $\begin{pmatrix} 11 & -9 & 4 \\ -2 & 2 & -1 \\ -2 & 1 & 0 \end{pmatrix}$ , X = $\begin{pmatrix} 7 & -14 & 15 \\ -1 & 3 & -2 \\ -2 & 2 & -4 \end{pmatrix}$)
  Let $A = \begin{pmatrix} 1 & 5 & 1 \\ 2 & 1 & -2 \\ 1 & 7 & 2 \end{pmatrix}$ and $B = \begin{pmatrix} 2 & 0 & 1 \\ 0 & 3 & 0 \\ 1 & 0 & 2 \end{pmatrix}$. Find A⁻¹ and solve AX = B for X.
                    (A⁻¹ = $\begin{pmatrix} -16 & 3 & 11 \\ 6 & -1 & -4 \\ -13 & 2 & 9 \end{pmatrix}$ , X = $\begin{pmatrix} -21 & 9 & 6 \\ 8 & -3 & -2 \\ -17 & 6 & 5 \end{pmatrix}$)
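Not part of the original sheet: checking the first A⁻¹ / X pair above in SymPy; X = A⁻¹B is the solution of AX = B.

    from sympy import Matrix

    A = Matrix([[1, 4, 1], [2, 8, 3], [2, 7, 4]])
    B = Matrix([[1, 0, 3], [0, 2, 2], [-1, 1, 0]])
    Ainv = A.inv()
    print(Ainv)       # expected: [11 -9 4; -2 2 -1; -2 1 0]
    print(Ainv * B)   # expected: [7 -14 15; -1 3 -2; -2 2 -4]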
  Find a 2 × 2 matrix A such that $\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} A \begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$.                               ($A = \begin{pmatrix} -1 & 2 \\ 1 & -2 \end{pmatrix}$)
  Let A⁻¹ = $\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 2 \\ 1 & -1 & 1 \end{pmatrix}$. Use elementary row operations to find A. Use A to solve the linear
  system A⁻¹x = b where b = $\begin{pmatrix} 4 \\ 7 \\ -2 \end{pmatrix}$.                        ($A = \begin{pmatrix} 3/2 & -1 & 1/2 \\ 1/2 & 0 & -1/2 \\ -1 & 1 & 0 \end{pmatrix}$ , $x = \begin{pmatrix} -2 \\ 3 \\ 3 \end{pmatrix}$)
  Suppose that A, B, and C are invertible matrices of the same size. Show that the product ABC
  is invertible and that (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹.
  Let $A = \begin{pmatrix} 3 & 4 & 1 \\ 1 & -2 & 0 \\ 5 & 3 & 6 \end{pmatrix}$. Find the determinant of A by cofactor expansion along the second column.                 (−47)
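Not in the original sheet: the cofactor expansion along the second column can be written out term by term in SymPy, where cofactor(i, j) is the signed minor.

    from sympy import Matrix

    A = Matrix([[3, 4, 1], [1, -2, 0], [5, 3, 6]])
    det_by_col2 = sum(A[i, 1] * A.cofactor(i, 1) for i in range(3))
    print(det_by_col2, A.det())   # expected: -47 -47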
  Use cofactor expansions to evaluate the following determinants. Expand along the row or column
  that minimizes the amount of computation that is required.

  $\begin{vmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{vmatrix}$                                                                                           (4)

  $\begin{vmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 5 & 0 \\ 3 & 6 & 9 & 8 \\ 4 & 0 & 10 & 7 \end{vmatrix}$                                                                                           (−210)
  Evaluate the determinants after first simplifying the computation by adding an appropriate
  multiple of some row or column to another.

  $\begin{vmatrix} 2 & 3 & 4 \\ -2 & -3 & 1 \\ 3 & 2 & 7 \end{vmatrix}$                                                                                           (25)

  $\begin{vmatrix} 3 & -2 & 5 \\ 0 & 5 & 17 \\ 6 & -4 & 12 \end{vmatrix}$                                                                                           (30)

  $\begin{vmatrix} 1 & 4 & 4 & 1 \\ 0 & 1 & -2 & 2 \\ 3 & 3 & 1 & 4 \\ 0 & 1 & -3 & -2 \end{vmatrix}$                                                                                           (135)

  $\begin{vmatrix} a_1 & 0 & 0 & b_1 \\ 0 & a_2 & b_2 & 0 \\ 0 & b_3 & a_3 & 0 \\ b_4 & 0 & 0 & a_4 \end{vmatrix}$                                               ($(a_1 a_4 - b_1 b_4)(a_2 a_3 - b_2 b_3)$)
  Let $A = \begin{pmatrix} 2 & 5 & 3 & 4 \\ -1 & -2 & -2 & -3 \\ 2 & 6 & 4 & 4 \\ 1 & 3 & 8 & 9 \end{pmatrix}$.
  Evaluate |A|.                                                             (2)
  Use properties of determinants to show the following. Do not use the cofactor expansion.

  $\begin{vmatrix} x & y+z & y^2+z^2 \\ y & x+z & x^2+z^2 \\ z & x+y & x^2+y^2 \end{vmatrix} = (x + y + z)(x - y)(x - z)(z - y)$.

  $\begin{vmatrix} a & b & b & b \\ b & a & b & b \\ b & b & a & b \\ b & b & b & a \end{vmatrix} = (a - b)^3 (a + 3b)$.
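Not part of the original sheet: a symbolic check of the second identity above (the proof itself should still use row and column operations, as the problem requires).

    from sympy import Matrix, symbols, factor

    a, b = symbols('a b')
    M = Matrix(4, 4, lambda i, j: a if i == j else b)
    print(factor(M.det()))   # expected: (a - b)**3*(a + 3*b)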
  Use Cramer's rule to solve the following systems.
  3x1 − x2 − 5x3 =   3
  4x1 − 4x2 − 3x3 = −4                                         (x1 = 2 , x2 = 3 , x3 = 0)
   x1       − 5x3 =  2
  3x1 + 2x2 − x3 =   1
   x1 − 5x2 + 5x3 = −2                                (x1 = 12 , x2 = −21 , x3 = −7)
  2x1 + x2        =  3
    x    + 2y + z + w            =   8
   −x    − y + 2z                =   3
                                                                    (y = 2)
   2x    + 3y                    =   0
  −2x    + y − 2z − w            =   0
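Not in the original sheet: Cramer's rule applied to the 4 × 4 system directly above; since only y is required, a single determinant ratio suffices.

    from sympy import Matrix

    A = Matrix([[ 1,  2,  1,  1],
                [-1, -1,  2,  0],
                [ 2,  3,  0,  0],
                [-2,  1, -2, -1]])
    b = Matrix([8, 3, 0, 0])
    Ay = A.copy()
    Ay[:, 1] = b                 # replace the y-column by the right-hand side
    print(Ay.det() / A.det())    # expected: 2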
                    
  Let $A = \begin{pmatrix} 1 & 4 & 1 \\ 2 & 7 & 4 \\ 2 & 8 & 3 \end{pmatrix}$, $b = \begin{bmatrix} 1 & 1 & 2 \end{bmatrix}^T$, $x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}^T$.
  Show that A is an invertible matrix.                                        (|A| = −1 ≠ 0)
  Find A⁻¹ by using elementary row operations.                           ($\begin{pmatrix} 11 & 4 & -9 \\ -2 & -1 & 2 \\ -2 & 0 & 1 \end{pmatrix}$)
  For the linear equation system Ax = b, find x2 using Cramer’s rule.              (x2 = 1)
  Find A⁻¹ for the following matrices A by using the adjoint matrix.
  $A = \begin{pmatrix} -5 & -2 & 2 \\ 1 & 5 & -3 \\ 5 & -3 & 1 \end{pmatrix}$                                                      ($\frac{1}{4}\begin{pmatrix} 4 & 4 & 4 \\ 16 & 15 & 13 \\ 28 & 25 & 23 \end{pmatrix}$)
  $A = \begin{pmatrix} 2 & 4 & -3 \\ 2 & -3 & -1 \\ -5 & 0 & -3 \end{pmatrix}$                                                      ($\frac{1}{107}\begin{pmatrix} 9 & 12 & -13 \\ 11 & -21 & -4 \\ -15 & -20 & -14 \end{pmatrix}$)
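A supplementary SymPy check (not from the original sheet) of the adjoint-matrix formula used here, A⁻¹ = adj(A)/det(A); SymPy calls the classical adjoint adjugate.

    from sympy import Matrix

    A = Matrix([[-5, -2, 2], [1, 5, -3], [5, -3, 1]])
    print(A.det())                    # expected: -4
    print(A.adjugate() / A.det())     # expected: (1/4) * [4 4 4; 16 15 13; 28 25 23]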
                        
  Let $A = \begin{pmatrix} 3 & 0 & 1 \\ -2 & 1 & 0 \\ 0 & 1 & -2 \end{pmatrix}$.
  Find the adjoint matrix of A.                                             ($\begin{pmatrix} -2 & 1 & -1 \\ -4 & -6 & -2 \\ -2 & -3 & 3 \end{pmatrix}$)
  Find A⁻¹.                                                                 ($\begin{pmatrix} 1/4 & -1/8 & 1/8 \\ 1/2 & 3/4 & 1/4 \\ 1/4 & 3/8 & -3/8 \end{pmatrix}$)
  Let $A = \begin{pmatrix} 2 & 2 & 2 \\ 1 & x & x^2 \\ 1 & y & y^2 \end{pmatrix}$. Find |A| and the values of x and y for which A is invertible.
                                                   (2(x − 1)(y − 1)(y − x) , x ≠ 1 , y ≠ 1 , x ≠ y)
  Show that the homogeneous system Ax = 0 has only the trivial solution for $A = \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 2 & 0 & 0 \\ 1 & 0 & 1 & 1 \end{pmatrix}$.
                                                                                         (|A| = 2 ≠ 0)
  Show that $\begin{vmatrix} a_{11}+ka_{12} & a_{12} & a_{13} \\ a_{21}+ka_{22} & a_{22} & a_{23} \\ a_{31}+ka_{32} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$.
  Let A = [a_ij] be a 3 × 3 matrix. Show that |Aᵀ| = |A| by expanding |A| along its first row and
  |Aᵀ| along its first column.
  The square matrix A is called orthogonal provided that Aᵀ = A⁻¹. Show that the determinant
  of such a matrix must be either 1 or −1.
  The matrices A and B are said to be similar provided that A = P⁻¹BP for some invertible matrix
  P. Show that if A and B are similar, then |A| = |B|.