Lecture Notes 1: Matrix Algebra
Part B: Determinants and Inverses
                                      Peter J. Hammond
                              revised 2020 September 14th
University of Warwick, EC9A0 Maths for Economists   Peter J. Hammond   1 of 87
Outline
   Special Matrices
      Square, Symmetric, and Diagonal Matrices
      The Identity Matrix
      The Inverse Matrix
      Partitioned Matrices
   Permutations and Their Signs
      Permutations
      Transpositions
      Signs of Permutations
      The Product Rule for the Signs of Permutations
   Determinants: Introduction
      Determinants of Order 2
      Determinants of Order 3
      The Determinant Function
      Permutation and Transposition Matrices
      Triangular Matrices
Square Matrices
   A square matrix has an equal number of rows and columns,
   this number being called its dimension.
   The (principal, or main) diagonal
   of a square matrix A = (a_{ij})_{n×n} of dimension n
   is the list (a_{ii})_{i=1}^{n} = (a_{11}, a_{22}, . . . , a_{nn}) of its n diagonal elements.
   The other elements a_{ij} with i ≠ j are the off-diagonal elements.
   A square matrix is often expressed in the form

      A = \begin{pmatrix}
            a_{11} & a_{12} & \cdots & a_{1n} \\
            a_{21} & a_{22} & \cdots & a_{2n} \\
            \vdots & \vdots & \ddots & \vdots \\
            a_{n1} & a_{n2} & \cdots & a_{nn}
          \end{pmatrix}
   with some extra dots along the diagonal.
Symmetric Matrices
   Definition
   A square matrix A is symmetric just in case
   it is equal to its transpose — i.e., if A^⊤ = A.
   Example
   The product of two symmetric matrices need not be symmetric.
   Using again our example of non-commuting 2 × 2 matrices,
   here are two examples
   where the product of two symmetric matrices is asymmetric:

      \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
      \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
      = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix};
      \qquad
      \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
      \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
      = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
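   As a quick numerical check, a minimal sketch (assuming the NumPy library) computes both products above and tests symmetry:

```python
import numpy as np

# The two symmetric matrices from the example above.
A = np.array([[0, 1], [1, 0]])
B = np.array([[0, 0], [0, 1]])

P = A @ B
Q = B @ A

# Both factors are symmetric...
assert np.array_equal(A, A.T) and np.array_equal(B, B.T)
# ...but neither product is.
assert not np.array_equal(P, P.T)
assert not np.array_equal(Q, Q.T)
```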
Two Exercises with Symmetric Matrices
   Exercise
   Let x be a column n-vector.
     1. Find the dimensions of x^⊤x and of xx^⊤.
     2. Show that one is a non-negative number
        which is positive unless x = 0,
        and that the other is an n × n symmetric matrix.
   Exercise
   Let A be an m × n-matrix.
     1. Find the dimensions of A^⊤A and of AA^⊤.
     2. Show that both A^⊤A and AA^⊤ are symmetric matrices.
     3. Show that m = n is a necessary condition for A^⊤A = AA^⊤.
     4. Show that m = n with A symmetric
        is a sufficient condition for A^⊤A = AA^⊤.
Diagonal Matrices
   A square matrix A = (a_{ij})_{n×n} is diagonal
   just in case all of its off-diagonal elements are 0
   — i.e., i ≠ j ⟹ a_{ij} = 0.
   A diagonal matrix of dimension n can be written in the form

      D = \begin{pmatrix}
            d_1    & 0      & 0      & \cdots & 0 \\
            0      & d_2    & 0      & \cdots & 0 \\
            0      & 0      & d_3    & \cdots & 0 \\
            \vdots & \vdots & \vdots & \ddots & \vdots \\
            0      & 0      & 0      & \cdots & d_n
          \end{pmatrix}
        = diag(d_1, d_2, d_3, . . . , d_n) = diag d

   where the n-vector d = (d_1, d_2, d_3, . . . , d_n) = (d_i)_{i=1}^{n}
   consists of the diagonal elements of D.
   Note that diag d = (d_{ij})_{n×n} where each d_{ij} = δ_{ij} d_{ii} = δ_{ij} d_{jj}.
   Obviously, any diagonal matrix is symmetric.
Multiplying by Diagonal Matrices
   Example
   Let D be a diagonal matrix of dimension n.
   Suppose that A and B are m × n and n × m matrices, respectively.
   Then E := AD and F := DB are well defined matrices
   of dimensions m × n and n × m, respectively.
   By the law of matrix multiplication, their elements are

      e_{ij} = \sum_{k=1}^{n} a_{ik} δ_{kj} d_{jj} = a_{ij} d_{jj}
      \quad and \quad
      f_{ij} = \sum_{k=1}^{n} δ_{ik} d_{ii} b_{kj} = d_{ii} b_{ij}

   Thus, post-multiplying A by D is the column operation
   of simultaneously multiplying every column a_j of A
   by its matching diagonal element d_{jj}.
   Similarly, pre-multiplying B by D is the row operation
   of simultaneously multiplying every row b_i^⊤ of B
   by its matching diagonal element d_{ii}.
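   These column and row operations can be checked numerically; the sketch below assumes NumPy, with arbitrary 2 × 3 and 3 × 2 example matrices A and B:

```python
import numpy as np

# Post-multiplying by D = diag(d) scales column j by d_j;
# pre-multiplying scales row i by d_i.
d = np.array([2.0, 3.0, 5.0])
D = np.diag(d)
A = np.arange(6.0).reshape(2, 3)   # a 2 x 3 example matrix
B = np.arange(6.0).reshape(3, 2)   # a 3 x 2 example matrix

E = A @ D   # e_ij = a_ij d_jj
F = D @ B   # f_ij = d_ii b_ij

assert np.allclose(E, A * d)             # columns of A scaled
assert np.allclose(F, B * d[:, None])    # rows of B scaled
```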
Two Exercises with Diagonal Matrices
   Exercise
   Let D be a diagonal matrix of dimension n.
   Give conditions that are both necessary and sufficient
   for each of the following:
     1. AD = A for every m × n matrix A;
     2. DB = B for every n × m matrix B.
   Exercise
   Let D be a diagonal matrix of dimension n,
   and C any n × n matrix.
   An earlier example shows that
   one can have CD ≠ DC even if n = 2.
     1. Show that C being diagonal
        is a sufficient condition for CD = DC.
     2. Is this condition necessary?
The Identity Matrix
   The identity matrix of dimension n is the diagonal matrix
                                       In = diag(1, 1, . . . , 1)
   whose n diagonal elements are all equal to 1.
   Equivalently, it is the n × n-matrix A = (a_{ij})_{n×n}
   whose elements are all given by a_{ij} = δ_{ij}
   for the Kronecker delta function (i, j) ↦ δ_{ij}
   defined on {1, 2, . . . , n}².
   Exercise
   Given any m × n matrix A, verify that Im A = AIn = A.
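   A concrete instance of this exercise can be sketched numerically (assuming NumPy, with an arbitrary 2 × 3 example matrix):

```python
import numpy as np

# Verify I_m A = A I_n = A for one m x n example (m = 2, n = 3).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
I2, I3 = np.eye(2), np.eye(3)

assert np.array_equal(I2 @ A, A)
assert np.array_equal(A @ I3, A)
```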
Uniqueness of the Identity Matrix
   Exercise
   Suppose that the two n × n matrices X and Y respectively satisfy:
     1. AX = A for every m × n matrix A;
     2. YB = B for every n × m matrix B.
   Prove that X = Y = In .
   (Hint: Consider each of the mn different cases where A (resp. B)
   has exactly one non-zero element that is equal to 1.)
   The results of the last two exercises together serve to prove:
   Theorem
   The identity matrix In is the unique n × n-matrix such that:
    ▶ In B = B for each n × m matrix B;
    ▶ A In = A for each m × n matrix A.
How the Identity Matrix Earns its Name
   Remark
   The identity matrix In earns its name because it represents
   a multiplicative identity on the “algebra” of all n × n matrices.
   That is, In is the unique n × n-matrix with the property
   that In A = AIn = A for every n × n-matrix A.
   Typical notation suppresses the subscript n in In
   that indicates the dimension of the identity matrix.
Left and Right Inverse Matrices
   Definition
   Let A denote any n × n matrix.
     1. The n × n matrix X is a left inverse of A
        just in case XA = In .
     2. The n × n matrix Y is a right inverse of A
        just in case AY = In .
     3. The n × n matrix Z is an inverse of A
        just in case it is both a left and a right inverse
        — i.e., ZA = AZ = In .
The Unique Inverse Matrix
   Theorem
   Suppose that the n × n matrix A has both a left and a right inverse.
   Then both left and right inverses are unique,
   and both are equal to a unique inverse matrix denoted by A−1 .
   Proof.
   If XA = AY = I, then XAY = XI = X and XAY = IY = Y,
   implying that X = XAY = Y.
   Now, if X̃ is any alternative left inverse,
   then X̃A = I and so X̃ = X̃AY = Y = X.
   Similarly, if Ỹ is any alternative right inverse,
   then AỸ = I and so Ỹ = XAỸ = X = Y.
   It follows that X̃ = X = Y = Ỹ, so we can define A−1
   as the unique common value of all these four matrices.
   Big question: when does the inverse exist?
   Answer: if and only if the determinant is non-zero.
Rule for Inverting Products
   Theorem
   Suppose that A and B are two invertible n × n matrices.
   Then the inverse of the matrix product AB exists,
   and is the reverse product B−1 A−1 of the inverses.
   Proof.
   Using the associative law for matrix multiplication repeatedly gives:
     (B−1 A−1 )(AB) = B−1 (A−1 A)B = B−1 (I)B = B−1 (IB) = B−1 B = I
   and
     (AB)(B−1 A−1 ) = A(BB−1 )A−1 = A(I)A−1 = (AI)A−1 = AA−1 = I.
   These equations confirm that X := B−1 A−1 is the unique matrix
   satisfying the double equality (AB)X = X(AB) = I.
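   The reversal rule can be illustrated numerically; this is a sketch assuming NumPy, with two arbitrary invertible 2 × 2 example matrices:

```python
import numpy as np

# (AB)^{-1} = B^{-1} A^{-1} for invertible A, B.
A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det = 1, so invertible
B = np.array([[1.0, 3.0], [0.0, 1.0]])   # det = 1, so invertible

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)

assert np.allclose(lhs, rhs)
# The order matters: inv(A) @ inv(B) differs in general.
assert not np.allclose(lhs, np.linalg.inv(A) @ np.linalg.inv(B))
```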
Rule for Inverting Chain Products and Transposes
   Exercise
   Prove that, if A, B and C are three invertible n × n matrices,
   then (ABC)−1 = C−1 B−1 A−1 .
   Then use mathematical induction
   to extend the rule for inverting any product BC
   in order to find the inverse of the product A1 A2 · · · Ak
   of any finite chain of invertible n × n matrices.
   Theorem
   Suppose that A is an invertible n × n matrix.
   Then the inverse (A^⊤)^{-1} of its transpose
   is (A^{-1})^⊤, the transpose of its inverse.
   Proof.
   By the rule for transposing products, one has

      A^⊤ (A^{-1})^⊤ = (A^{-1} A)^⊤ = I^⊤ = I

   and likewise (A^{-1})^⊤ A^⊤ = (A A^{-1})^⊤ = I^⊤ = I,
   so (A^{-1})^⊤ is both a right and a left inverse of A^⊤.
Orthogonal and Orthonormal Sets of Vectors
   Definition
   A set of k vectors {x_1, x_2, . . . , x_k} ⊂ R^n is said to be:
    ▶ pairwise orthogonal just in case x_i · x_j = 0 whenever j ≠ i;
    ▶ orthonormal just in case, in addition, each ‖x_i‖ = 1
       — i.e., all k elements of the set are vectors of unit length.
   The set of k vectors {x_1, x_2, . . . , x_k} ⊂ R^n is orthonormal
   just in case x_i · x_j = δ_{ij} for all pairs i, j ∈ {1, 2, . . . , k}.
Orthogonal Matrices
   Definition
   Any n × n matrix is orthogonal
   just in case its n columns form an orthonormal set.
   Theorem
   Given any n × n matrix P, the following are equivalent:
     1. P is orthogonal;
     2. P P^⊤ = P^⊤ P = I;
     3. P^{-1} = P^⊤;
     4. P^⊤ is orthogonal.
   The proof follows from the definitions, and is left as an exercise.
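   A rotation matrix gives a concrete instance of the theorem; the sketch below assumes NumPy, with an arbitrary rotation angle:

```python
import numpy as np

# A 2 x 2 rotation matrix is orthogonal: its columns are orthonormal,
# so its transpose is its inverse.
t = 0.7
P = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

assert np.allclose(P @ P.T, np.eye(2))
assert np.allclose(P.T @ P, np.eye(2))
assert np.allclose(np.linalg.inv(P), P.T)
```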
Partitioned Matrices: Definition
   A partitioned matrix is a rectangular array of different matrices.
   Example
   Consider the (m + ℓ) × (n + k) matrix

      \begin{pmatrix} A & B \\ C & D \end{pmatrix}
      = \begin{pmatrix} A_{m×n} & B_{m×k} \\ C_{ℓ×n} & D_{ℓ×k} \end{pmatrix}

   where, as indicated, the four submatrices A, B, C, D
   are of dimension m × n, m × k, ℓ × n and ℓ × k respectively.
   Note: Here matrix D may not be diagonal, or even square.
   For any scalar α ∈ R,
   the scalar multiple of a partitioned matrix is

      α \begin{pmatrix} A & B \\ C & D \end{pmatrix}
      = \begin{pmatrix} αA & αB \\ αC & αD \end{pmatrix}
Partitioned Matrices: Addition
   Suppose the two partitioned matrices

      \begin{pmatrix} A & B \\ C & D \end{pmatrix}
      \quad and \quad
      \begin{pmatrix} E & F \\ G & H \end{pmatrix}

   have the property that the following four pairs
   of corresponding matrices have equal dimensions:
   (i) A and E; (ii) B and F; (iii) C and G; (iv) D and H.
   Then the sum of the two matrices is

      \begin{pmatrix} A & B \\ C & D \end{pmatrix}
      + \begin{pmatrix} E & F \\ G & H \end{pmatrix}
      = \begin{pmatrix} A+E & B+F \\ C+G & D+H \end{pmatrix}
Partitioned Matrices: Multiplication
   Suppose that the two partitioned matrices

      \begin{pmatrix} A & B \\ C & D \end{pmatrix}
      \quad and \quad
      \begin{pmatrix} E & F \\ G & H \end{pmatrix}

   along with all the relevant pairs of their sub-matrices,
   are compatible for multiplication.
   Then their product is defined as

      \begin{pmatrix} A & B \\ C & D \end{pmatrix}
      \begin{pmatrix} E & F \\ G & H \end{pmatrix}
      = \begin{pmatrix} AE + BG & AF + BH \\ CE + DG & CF + DH \end{pmatrix}
   This extends the usual multiplication rule for matrices:
   multiply the rows of sub-matrices in the first partitioned matrix
   by the columns of sub-matrices in the second partitioned matrix.
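   The block rule can be checked numerically; the sketch below (assuming NumPy, with randomly chosen compatible example blocks) compares the block formula with ordinary multiplication of the assembled matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Compatible example blocks:
# A (2x3), B (2x2), C (1x3), D (1x2); E (3x2), F (3x1), G (2x2), H (2x1).
A, B = rng.standard_normal((2, 3)), rng.standard_normal((2, 2))
C, D = rng.standard_normal((1, 3)), rng.standard_normal((1, 2))
E, F = rng.standard_normal((3, 2)), rng.standard_normal((3, 1))
G, H = rng.standard_normal((2, 2)), rng.standard_normal((2, 1))

M = np.block([[A, B], [C, D]])   # 3 x 5
N = np.block([[E, F], [G, H]])   # 5 x 3
blockwise = np.block([[A @ E + B @ G, A @ F + B @ H],
                      [C @ E + D @ G, C @ F + D @ H]])

assert np.allclose(M @ N, blockwise)
```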
Transposes and Some Special Matrices
   The rule for transposing a partitioned matrix is

      \begin{pmatrix} A & B \\ C & D \end{pmatrix}^⊤
      = \begin{pmatrix} A^⊤ & C^⊤ \\ B^⊤ & D^⊤ \end{pmatrix}

   So the original matrix is symmetric
   iff A = A^⊤, D = D^⊤, and B = C^⊤ ⟺ C = B^⊤.
   It is diagonal iff A, D are both diagonal,
   while also B = 0 and C = 0.
   The identity matrix is diagonal with A = I, D = I,
   possibly identity matrices of different dimensions.
Partitioned Matrices: Inverses, I
   For an (m + n) × (m + n) partitioned matrix to have an inverse,
   the equation

      \begin{pmatrix} A & B \\ C & D \end{pmatrix}
      \begin{pmatrix} E & F \\ G & H \end{pmatrix}
      = \begin{pmatrix} AE + BG & AF + BH \\ CE + DG & CF + DH \end{pmatrix}
      = \begin{pmatrix} I_m & 0_{m×n} \\ 0_{n×m} & I_n \end{pmatrix}

   should have a solution for the matrices E, F, G, H, given A, B, C, D.
   Assuming that the m × m matrix A has an inverse, we can:
    1. construct new first m equations
       by premultiplying the old ones by A^{-1};
    2. construct new second n equations by:
        ▶ premultiplying the new first m equations
          by the n × m matrix C;
        ▶ then subtracting this product from the old second n equations.
   The result is

      \begin{pmatrix} I_m & A^{-1}B \\ 0_{n×m} & D - CA^{-1}B \end{pmatrix}
      \begin{pmatrix} E & F \\ G & H \end{pmatrix}
      = \begin{pmatrix} A^{-1} & 0_{m×n} \\ -CA^{-1} & I_n \end{pmatrix}
Partitioned Matrices: Inverses, II
   For the next step,
   assume the n × n matrix X := D - CA^{-1}B
   also has an inverse X^{-1} = (D - CA^{-1}B)^{-1}.
   Given

      \begin{pmatrix} I_m & A^{-1}B \\ 0_{n×m} & D - CA^{-1}B \end{pmatrix}
      \begin{pmatrix} E & F \\ G & H \end{pmatrix}
      = \begin{pmatrix} A^{-1} & 0_{m×n} \\ -CA^{-1} & I_n \end{pmatrix},

   we first premultiply the last n equations by X^{-1} to get

      \begin{pmatrix} I_m & A^{-1}B \\ 0_{n×m} & I_n \end{pmatrix}
      \begin{pmatrix} E & F \\ G & H \end{pmatrix}
      = \begin{pmatrix} A^{-1} & 0_{m×n} \\ -X^{-1}CA^{-1} & X^{-1} \end{pmatrix}

   Next, we subtract A^{-1}B times the last n equations
   from the first m equations to obtain

      \begin{pmatrix} E & F \\ G & H \end{pmatrix}
      = \begin{pmatrix} I_m & 0_{m×n} \\ 0_{n×m} & I_n \end{pmatrix}
        \begin{pmatrix} E & F \\ G & H \end{pmatrix}
      = \begin{pmatrix} A^{-1} + A^{-1}BX^{-1}CA^{-1} & -A^{-1}BX^{-1} \\
                        -X^{-1}CA^{-1} & X^{-1} \end{pmatrix}
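   The final formula can be verified numerically; this sketch assumes NumPy and uses a random partitioned example whose leading block A and the matrix X = D - CA^{-1}B (often called the Schur complement of A) are invertible:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 2
# Shift by 3I to keep A, D (and almost surely X) invertible.
A = rng.standard_normal((m, m)) + 3 * np.eye(m)
B = rng.standard_normal((m, n))
C = rng.standard_normal((n, m))
D = rng.standard_normal((n, n)) + 3 * np.eye(n)

Ai = np.linalg.inv(A)
X = D - C @ Ai @ B
Xi = np.linalg.inv(X)

block_inv = np.block([[Ai + Ai @ B @ Xi @ C @ Ai, -Ai @ B @ Xi],
                      [-Xi @ C @ Ai,              Xi]])

M = np.block([[A, B], [C, D]])
assert np.allclose(block_inv, np.linalg.inv(M))
```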
Final Exercises
   Exercise
     1. Assume that A^{-1} and X^{-1} = (D - CA^{-1}B)^{-1} exist.
        Given

           Z := \begin{pmatrix} A^{-1} + A^{-1}BX^{-1}CA^{-1} & -A^{-1}BX^{-1} \\
                                -X^{-1}CA^{-1} & X^{-1} \end{pmatrix},

        use direct multiplication twice in order to verify that

           \begin{pmatrix} A & B \\ C & D \end{pmatrix} Z
           = Z \begin{pmatrix} A & B \\ C & D \end{pmatrix}
           = \begin{pmatrix} I_m & 0_{m×n} \\ 0_{n×m} & I_n \end{pmatrix}

     2. Let A be any invertible m × m matrix.

        Show that the bordered (m + 1) × (m + 1) matrix
        \begin{pmatrix} A & b \\ c^⊤ & d \end{pmatrix}
        is invertible provided that d ≠ c^⊤A^{-1}b,
        and find its inverse in this case.
Partitioned Matrices: Extension
   Exercise
   Suppose that the two partitioned matrices
                        A = (A_{ij})_{k×ℓ}   and   B = (B_{ij})_{k×ℓ}
   are both k × ℓ arrays of respective m_i × n_j matrices A_{ij}, B_{ij},
   for i = 1, 2, . . . , k and j = 1, 2, . . . , ℓ.
     1. Under what conditions can the product AB
        be defined as a k × ` array of matrices?
     2. Under what conditions can the product BA
        be defined as a k × ` array of matrices?
     3. When either AB or BA can be so defined,
        give a formula for its product, using summation notation.
     4. Express A> as a partitioned matrix.
     5. Under what conditions is the matrix A symmetric?
Permutations
   Definition
   Given N_n = {1, . . . , n} for any n ∈ N with n ≥ 2,
   a permutation of N_n is a bijective mapping N_n ∋ i ↦ π(i) ∈ N_n.
   That is, the mapping N_n ∋ i ↦ π(i) ∈ N_n is both:
     1. a surjection, or mapping of N_n onto N_n,
        in the sense that the range set satisfies
        π(N_n) := {j ∈ N_n | ∃i ∈ N_n : j = π(i)} = N_n;
     2. an injection, or a one-to-one mapping,
        in the sense that π(i) = π(j) ⟹ i = j or,
        equivalently, i ≠ j ⟹ π(i) ≠ π(j).
   Exercise
   Prove that the mapping N_n ∋ i ↦ f(i) ∈ N_n is a bijection,
   and so a permutation, if and only if
   its range set f(N_n) := {j ∈ N_n | ∃i ∈ N_n : j = f(i)}
   has cardinality #f(N_n) = #N_n = n.
Products of Permutations
   Definition
   The product π ◦ ρ of two permutations π, ρ ∈ Πn
   is the composition mapping N_n ∋ i ↦ (π ◦ ρ)(i) := π[ρ(i)] ∈ N_n.
   Exercise
   Prove that the product π ◦ ρ of any two permutations π, ρ ∈ Πn
   is a permutation.
   Hint: Show that #(π ◦ ρ)(N_n) = #ρ(N_n) = #N_n = n.
   Example
     1. If you shuffle a pack of 52 playing cards once,
        without dropping any on the floor,
        the result will be a permutation π of the cards.
     2. If you shuffle the same pack a second time,
        the result will be a new permutation ρ of the shuffled cards.
     3. Overall, the result of shuffling the cards twice
        will be the single permutation ρ ◦ π.
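   Composition of permutations can be sketched in pure Python, representing a permutation of N_n as a list p with p[i] = π(i) (0-based indexing here for convenience):

```python
def compose(pi, rho):
    """Return the product pi ∘ rho, i.e. the map i -> pi[rho[i]]."""
    return [pi[r] for r in rho]

# Two example permutations of {0, 1, 2}:
pi  = [1, 2, 0]
rho = [0, 2, 1]

assert compose(pi, rho) == [1, 0, 2]
# The composition of two bijections is again a bijection:
assert sorted(compose(pi, rho)) == [0, 1, 2]
```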
Finite Permutation Groups
   Definition
   Given any n ∈ N, the family Πn of all permutations of Nn includes:
    ▶ the identity permutation ι defined by ι(h) = h for all h ∈ N_n;
    ▶ for each π ∈ Πn, because the mapping N_n ∋ i ↦ π(i) ∈ N_n is bijective,
       a unique inverse permutation π^{-1} ∈ Πn
       satisfying π^{-1} ◦ π = π ◦ π^{-1} = ι.
   Definition
   The associative law for functions says that,
   given any three functions h : X → Y , g : Y → Z and f : Z → W ,
   the composite function f ◦ g ◦ h : X → W satisfies
      (f ◦ g ◦ h)(x) ≡ f(g(h(x))) ≡ [(f ◦ g) ◦ h](x) ≡ [f ◦ (g ◦ h)](x)
   Exercise
   Given any n ∈ N, show that (Πn, ◦, ι) is an algebraic group
   — i.e., the group operation (π, ρ) ↦ π ◦ ρ is well-defined,
   associative, with ι as the unit, and an inverse π^{-1} for every π ∈ Πn.
Transpositions
   Definition
   For each distinct pair k, ℓ ∈ {1, 2, . . . , n},
   the transposition mapping i ↦ τ_{kℓ}(i) on {1, 2, . . . , n}
   is the permutation defined by

      τ_{kℓ}(i) := \begin{cases}
        ℓ & if i = k; \\
        k & if i = ℓ; \\
        i & otherwise.
      \end{cases}

   That is, τ_{kℓ} transposes the order of k and ℓ,
   leaving all i ∉ {k, ℓ} unchanged.
   Evidently τ_{kℓ} = τ_{ℓk} and τ_{kℓ} ◦ τ_{ℓk} = ι, the identity permutation,
   and so τ ◦ τ = ι for every transposition τ.
Transposition is Not Commutative
   Any (j_1, j_2, . . . , j_n) ∈ N_n^n whose components are all different
   corresponds to a unique permutation, denoted by π^{j_1 j_2 ... j_n} ∈ Πn,
   that satisfies π(i) = j_i for all i ∈ N_n.
   Example
   Two transpositions defined
   on a set containing more than two elements
   may not commute because, for example,

      τ_{12} ◦ τ_{23} = π^{231} ≠ τ_{23} ◦ τ_{12} = π^{312}
Permutations are Products of Transpositions
   Theorem
   Any permutation π ∈ Πn on Nn := {1, 2, . . . , n}
   is the product of at most n − 1 transpositions.
   We will prove the result by induction on n.
   For the base case, note that any permutation π on N_2 := {1, 2}
   is either the identity, or the transposition τ_{12},
   so the result holds for n = 2.
   As the induction hypothesis,
   suppose the result holds for permutations on N_{n-1}.
Proof of Induction Step
   For general n, let j := π^{-1}(n) denote the element
   that π moves to the end.
   By construction, the permutation π ◦ τ_{jn}
   must satisfy (π ◦ τ_{jn})(n) = π(τ_{jn}(n)) = π(j) = n.
   So the restriction π̃ of π ◦ τ_{jn} to N_{n-1} is a permutation on N_{n-1}.
   By the induction hypothesis,
   there exist transpositions τ^1, τ^2, . . . , τ^q,
   where q ≤ n - 2 is the number of transpositions in the product,
   such that π̃(k) = (π ◦ τ_{jn})(k) = (τ^1 ◦ τ^2 ◦ . . . ◦ τ^q)(k)
   for all k ∈ N_{n-1}.
   For p = 1, . . . , q, because τ^p interchanges only elements of N_{n-1},
   one can extend its domain to include n by letting τ^p(n) = n.
   Then (π ◦ τ_{jn})(k) = (τ^1 ◦ τ^2 ◦ . . . ◦ τ^q)(k) for k = n as well,
   so π = (π ◦ τ_{jn}) ◦ τ_{jn}^{-1} = τ^1 ◦ τ^2 ◦ . . . ◦ τ^q ◦ τ_{jn},
   since τ_{jn}^{-1} = τ_{jn}.
   Hence π is the product of at most q + 1 ≤ n - 1 transpositions.
   This completes the proof by induction on n.
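   The induction argument is constructive, and can be sketched in pure Python (0-based here, so it factors a permutation of {0, . . . , n-1} into at most n - 1 transpositions):

```python
def decompose(perm):
    """Factor perm into transpositions, following the proof:
    repeatedly compose with tau_{j,m} so that the largest remaining
    element m lands in its own place, then recurse on the rest."""
    p = list(perm)
    swaps = []
    for m in range(len(p) - 1, 0, -1):
        j = p.index(m)                 # j = p^{-1}(m)
        if j != m:
            p[j], p[m] = p[m], p[j]    # p becomes p ∘ tau_{jm}
            swaps.append((j, m))
    return swaps                       # perm = tau_q ∘ ... ∘ tau_1

def apply_swaps(n, swaps):
    """Evaluate tau_q ∘ ... ∘ tau_1 pointwise (tau_1 applied first)."""
    out = []
    for i in range(n):
        x = i
        for j, m in swaps:
            x = m if x == j else j if x == m else x
        out.append(x)
    return out

perm = [2, 0, 3, 1]
swaps = decompose(perm)
assert len(swaps) <= len(perm) - 1
assert apply_swaps(len(perm), swaps) == perm
```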
Adjacency Transpositions and Their Products, I
   Definition
   For each k ∈ {1, 2, . . . , n - 1}, the transposition τ_{k,k+1}
   of element k with its successor is an adjacency transposition.
   Definition
   For each pair k, ℓ ∈ N_n with k < ℓ, define:
     1. π^{k↗ℓ} := τ_{ℓ-1,ℓ} ◦ τ_{ℓ-2,ℓ-1} ◦ . . . ◦ τ_{k,k+1} ∈ Πn
        as the composition of ℓ - k
        successive adjacency transpositions in order,
        starting with τ_{k,k+1} and ending with τ_{ℓ-1,ℓ};
     2. π^{ℓ↘k} := τ_{k,k+1} ◦ τ_{k+1,k+2} ◦ . . . ◦ τ_{ℓ-1,ℓ} ∈ Πn
        as the composition of the same ℓ - k
        successive adjacency transpositions in reverse order.
Adjacency Transpositions and Their Products, II
   Exercise
   For each pair k, ℓ ∈ N_n with k < ℓ, prove that:
    ▶ π^{k↗ℓ}(i) := \begin{cases}
        i & if i < k or i > ℓ; \\
        i - 1 & if k < i ≤ ℓ; \\
        ℓ & if i = k.
      \end{cases}
    ▶ π^{k↗k} = π^{k↘k} = ι
    ▶ π^{k↗ℓ} and π^{ℓ↘k} are inverses
    ▶ π^{k↗ℓ} = π^{1,2,...,k-1,ℓ,k,k+1,...,ℓ-1,ℓ+1,...,n}
    ▶ π^{ℓ↘k} = π^{1,2,...,k-1,k+1,...,ℓ-1,ℓ,k,ℓ+1,...,n}
     1. Note that π^{k↗ℓ} moves k up to the ℓth position,
        while moving each element between k + 1 and ℓ down by one.
     2. By contrast, π^{ℓ↘k} moves ℓ down to the kth position,
        while moving each element between k and ℓ - 1 up by one.
Reduction to the Product of Adjacency Transpositions
   Lemma
   For each pair k, ℓ ∈ N_n with k < ℓ, the transposition τ_{kℓ}
   equals both π^{ℓ-1↘k} ◦ π^{k↗ℓ} and π^{k+1↗ℓ} ◦ π^{ℓ↘k},
   the compositions of 2(ℓ - k) - 1 adjacency transpositions.
   Proof.
     1. As noted, π^{k↗ℓ} moves k up to the ℓth position,
        while moving each element between k + 1 and ℓ down by one.
        Then π^{ℓ-1↘k} moves ℓ, which π^{k↗ℓ} left in position ℓ - 1,
        down to the kth position, and moves k + 1, k + 2, . . . , ℓ - 1
        up by one, back to their original positions.
        This proves that π^{ℓ-1↘k} ◦ π^{k↗ℓ} = τ_{kℓ}.
        It also expresses τ_{kℓ} as the composition of
        (ℓ - k) + (ℓ - 1 - k) = 2(ℓ - k) - 1 adjacency transpositions.
     2. The proof that π^{k+1↗ℓ} ◦ π^{ℓ↘k} = τ_{kℓ} is similar;
        details are left as an exercise.
The Inversions of a Permutation
   Definition
     1. Let Nn,2 = {S ⊆ Nn | #S = 2} denote
        the set of all (unordered) pair subsets of Nn .
     2. Obviously, if {i, j} ∈ N_{n,2}, then i ≠ j.
     3. Given any pair {i, j} ∈ N_{n,2}, define
                          i ∨ j := max{i, j} and i ∧ j := min{i, j}
        For all {i, j} ∈ N_{n,2}, because i ≠ j, one has i ∨ j > i ∧ j.
     4. Given any permutation π ∈ Πn ,
        the pair {i, j} ∈ Nn,2 is an inversion of π
        just in case π “reorders” {i, j}
        in the sense that π(i ∨ j) < π(i ∧ j).
     5. Denote the set of inversions of π by
                        N(π) := {{i, j} ∈ Nn,2 | π(i ∨ j) < π(i ∧ j)}
The Sign of a Permutation
   Definition
     1. Given any permutation π : Nn → Nn ,
        let n(π) := #N(π) ∈ N ∪ {0}
        denote the number of its inversions.
     2. A permutation π : Nn → Nn is either even or odd
        according as n(π) is an even or odd number.
     3. The sign or signature of a permutation π
        is defined as sgn(π) := (-1)^{n(π)}, which is:
        (i) +1 if π is even; (ii) -1 if π is odd.
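   The definition translates directly into a pure-Python sketch that counts inversions and returns the sign (0-based positions here):

```python
from itertools import combinations

def sign(perm):
    """sgn(pi) = (-1)^{n(pi)}, where n(pi) counts the inversions:
    pairs of positions i < j with perm[i] > perm[j]."""
    n_inv = sum(1 for i, j in combinations(range(len(perm)), 2)
                if perm[i] > perm[j])
    return -1 if n_inv % 2 else 1

assert sign([0, 1, 2]) == 1      # identity: no inversions, so even
assert sign([0, 2, 1]) == -1     # one adjacency transposition, so odd
assert sign([2, 1, 0]) == -1     # the reversal of N_3 has 3 inversions
```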
The Sign of an Adjacency Transposition
   Theorem
   For each k ∈ Nn−1 , if π is the adjacency transposition τk,k+1 ,
   then N(π) = {{k, k + 1}}, so n(π) = 1 and sgn(π) = −1.
   Proof.
   If π is the adjacency transposition τ_{k,k+1}, then

      π(i) = \begin{cases}
        i & if i ∉ {k, k + 1} \\
        k + 1 & if i = k \\
        k & if i = k + 1
      \end{cases}

   It is evident that {k, k + 1} is an inversion.
   Also π(i) ≤ i for all i ≠ k, and π(j) ≥ j for all j ≠ k + 1.
   So if i < j with i ≠ k and j ≠ k + 1, then π(i) ≤ i < j ≤ π(j);
   if i = k but j > k + 1, then π(i) = k + 1 < j ≤ π(j);
   and if j = k + 1 but i < k, then π(i) = i < k = π(j).
   So π(i) > π(j) only if (i, j) = (k, k + 1).
   Hence N(π) = {{k, k + 1}}, implying that n(π) = 1.
A Multi-Part Exercise
   Exercise
   Show that:
     1. For each permutation π ∈ Πn , one has
                  N(π) := {{i, j} ∈ Nn,2 | (i − j)[π(i) − π(j)] < 0}
                        = {{i, j} ∈ Nn,2 | [π(i) − π(j)]/(i − j) < 0}
     2. n(π) = 0 ⇐⇒ π = ι, the identity permutation;
     3. n(π) ≤ ½ n(n − 1), with equality
        if and only if π is the reversal permutation
        defined by π(i) = n − i + 1 for all i ∈ Nn — i.e.,
                 (π(1), π(2), . . . , π(n − 1), π(n)) = (n, n − 1, . . . , 2, 1)
          Hint: Consider the number
          of ordered pairs (i, j) ∈ Nn × Nn that satisfy i < j.
Double Products
   Let X = ⟨xij⟩(i,j)∈Nn×Nn denote an n × n matrix.
   We introduce the notation

           ∏_{i>j} xij := ∏_{i=1}^{n} ∏_{j=1}^{i−1} xij := ∏_{j=1}^{n−1} ∏_{i=j+1}^{n} xij

   for the product of all the elements in the lower triangular matrix L

                         ⎧ xij   if i > j
   with elements ℓij :=  ⎨
                         ⎩ 0     if i ≤ j

   In case the matrix X is symmetric, one has

           ∏_{i>j} xij = ∏_{i>j} xji = ∏_{i<j} xij

   This can be rewritten as ∏_{i>j} xij = ∏_{{i,j}∈Nn,2} xij ,
   which is the product over all unordered pairs of elements in Nn .
Preliminary Example and Definition
   Example
   For every n ∈ N, define the double product

           Pn,2 := ∏_{{i,j}∈Nn,2} |i − j| = ∏_{i>j} |i − j| = ∏_{i<j} |i − j|

   Then one has

           Pn,2 = (n − 1) (n − 2)^2 (n − 3)^3 · · · 3^{n−3} 2^{n−2} 1^{n−1}
                = ∏_{k=1}^{n−1} k^{n−k}
                = (n − 1)! (n − 2)! (n − 3)! · · · 3! 2! = ∏_{k=1}^{n−1} k!
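The closed forms for Pn,2 can be confirmed numerically; here is a small Python check (assuming Python 3.8+ for `math.prod`; the name `P` is mine):

```python
from math import prod, factorial

def P(n):
    """The double product of |i - j| over all unordered pairs {i, j} in Nn."""
    return prod(abs(i - j) for i in range(1, n + 1) for j in range(1, i))

# Compare with the two closed forms derived above.
for n in range(2, 9):
    assert P(n) == prod(k ** (n - k) for k in range(1, n))
    assert P(n) == prod(factorial(k) for k in range(1, n))
```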
   Definition
   For every permutation π ∈ Πn , define the symmetric matrix Xπ so that

               ⎧ [π(i) − π(j)]/(i − j)   if i ≠ j
       xπij =  ⎨
               ⎩ 1                       if i = j
Basic Lemma
  Lemma
   For every permutation π ∈ Πn , one has sgn(π) = ∏_{{i,j}∈Nn,2} xπij .
   Proof.
    ▶ Because π is a permutation,
      the mapping Nn,2 ∋ {i, j} ↦ {π(i), π(j)} ∈ Nn,2
      has inverse Nn,2 ∋ {i, j} ↦ {π⁻¹(i), π⁻¹(j)} ∈ Nn,2 .
      In fact it is a bijection between Nn,2 and itself.
    ▶ Hence Pn,2 := ∏_{{i,j}∈Nn,2} |i − j| = ∏_{{i,j}∈Nn,2} |π(i) − π(j)| .
    ▶ So ∏_{{i,j}∈Nn,2} |π(i) − π(j)| / |i − j| = ∏_{{i,j}∈Nn,2} |xπij | = 1.
    ▶ Also sgn(xπij ) = ∓1 according as {i, j} is or is not an inversion of π.
    ▶ It follows that
           ∏_{{i,j}∈Nn,2} xπij = (−1)^{n(π)} ∏_{{i,j}∈Nn,2} |xπij | = (−1)^{n(π)} = sgn(π)
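The lemma can be verified by brute force for small n. The sketch below (helper names are my own) compares the product formula with the inversion count, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations

def sgn_product(pi):
    """sgn(pi) as the product of (pi(i) - pi(j)) / (i - j) over pairs i < j."""
    n = len(pi)
    p = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            p *= Fraction(pi[i] - pi[j], i - j)
    return p

def sgn_inversions(pi):
    """sgn(pi) = (-1)**(number of inversions)."""
    n = len(pi)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])
    return (-1) ** inv

# The two definitions agree on every permutation in Π2, ..., Π5.
for n in range(2, 6):
    for pi in permutations(range(1, n + 1)):
        assert sgn_product(pi) == sgn_inversions(pi)
```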
The Product Rule for Signs of Permutations
   Theorem
   For all permutations ρ, π ∈ Πn one has sgn(ρ ◦ π) = sgn(ρ) sgn(π).
   Proof.
    The basic lemma implies that

      sgn(ρ ◦ π)/sgn(π) = ∏_{{i,j}∈Nn,2} [ρ(π(i)) − ρ(π(j))]/(i − j) · ∏_{{k,ℓ}∈Nn,2} (k − ℓ)/[π(k) − π(ℓ)]
                        = ∏_{{i,j}∈Nn,2} [ρ(π(i)) − ρ(π(j))]/(i − j) · ∏_{{i,j}∈Nn,2} (i − j)/[π(i) − π(j)]

    After cancelling the product ∏_{{i,j}∈Nn,2} (i − j)
    and then replacing π(i) by k and π(j) by ℓ,
    because π and ρ are permutations, one obtains

      sgn(ρ ◦ π)/sgn(π) = ∏_{{k,ℓ}∈Nn,2} [ρ(k) − ρ(ℓ)]/(k − ℓ) = sgn(ρ)
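As a finite check of the product rule, the sketch below (helper names mine) verifies sgn(ρ ∘ π) = sgn(ρ) sgn(π) over all pairs in Π3 and Π4, using the convention (ρ ∘ π)(i) = ρ(π(i)):

```python
from itertools import permutations

def sgn(pi):
    """Sign of pi via its inversion count."""
    n = len(pi)
    return (-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])

def compose(rho, pi):
    """One-line notation for rho ∘ pi, i.e. the map i ↦ rho(pi(i))."""
    return tuple(rho[pi[i] - 1] for i in range(len(pi)))

for n in (3, 4):
    for rho in permutations(range(1, n + 1)):
        for pi in permutations(range(1, n + 1)):
            assert sgn(compose(rho, pi)) == sgn(rho) * sgn(pi)
```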
The Sign of the Inverse Permutation
   Corollary
   Given any permutation π ∈ Πn , one has sgn(π −1 ) = sgn(π).
   Proof.
   Because the identity permutation satisfies ι = π ◦ π −1 ,
   the product rule implies that
                   1 = sgn(ι) = sgn(π ◦ π −1 ) = sgn(π) sgn(π −1 )
   Because sgn(π), sgn(π −1 ) ∈ {−1, 1},
   they must both have the same sign, and the result follows.
Determinants of Order 2: Definition
   Consider again the pair of linear equations
                                     a11 x1 + a12 x2 = b1
                                     a21 x1 + a22 x2 = b2
   with its associated coefficient matrix

                                A = ( a11  a12 )
                                    ( a21  a22 )

   Let us define the number D := a11 a22 − a21 a12 .
   We saw earlier that, provided that D ≠ 0,
   the two simultaneous equations have a unique solution given by

                 x1 = (1/D)(b1 a22 − b2 a12),    x2 = (1/D)(b2 a11 − b1 a21)

   The number D is called the determinant of the matrix A.
   It is denoted either by det(A) or, more concisely, by |A|.
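The solution formula translates into a two-line function. A minimal Python sketch (the name `solve2` is mine), assuming D ≠ 0:

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a11 x1 + a12 x2 = b1 and a21 x1 + a22 x2 = b2,
    assuming D = a11 a22 - a21 a12 is nonzero."""
    D = a11 * a22 - a21 * a12
    x1 = (b1 * a22 - b2 * a12) / D
    x2 = (b2 * a11 - b1 * a21) / D
    return x1, x2

# 2 x1 + x2 = 5 and x1 + 3 x2 = 5 has the unique solution (2, 1).
assert solve2(2, 1, 1, 3, 5, 5) == (2.0, 1.0)
```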
Determinants of Order 2: Simple Rule
   Thus, for any 2 × 2 matrix A, its determinant D is

                 | a11  a12 |
          |A| =  |          |  = a11 a22 − a21 a12
                 | a21  a22 |
   For this special case of order 2 determinants, a simple rule is:
     1. multiply the diagonal elements together;
     2. multiply the off-diagonal elements together;
     3. subtract the product of the off-diagonal elements
        from the product of the diagonal elements.
   Exercise
   Show that the determinant satisfies
                         | 1  0 |             | 0  1 |
           |A| = a11 a22 |      |  + a21 a12  |      |
                         | 0  1 |             | 1  0 |
Transposing the Rows or Columns
   Example
   Consider the two 2 × 2 matrices

           A = ( a  b ) ,    T = ( 0  1 ) .
               ( c  d )          ( 1  0 )

   Note that T is orthogonal. Also, one has

           AT = ( b  a )   and   TA = ( c  d ) .
                ( d  c )              ( a  b )

   Here T is a transposition matrix which interchanges:
   (i) the columns of A in AT; (ii) the rows of A in TA.
   Evidently |T| = −1 and |TA| = |AT| = (bc − ad) = −|A|.
   So interchanging the two rows or columns of A
   changes the sign of |A|.
Sign Adjusted Transpositions
   Example
   Next, consider the following three 2 × 2 matrices:

           A = ( a  b ) ,    T = ( 0  1 ) ,    T̂ = ( 0  −1 )
               ( c  d )          ( 1  0 )          ( 1   0 )

   Note that, like T, the matrix T̂ is orthogonal. Here one has

           AT̂ = ( b  −a )   and   T̂A = ( −c  −d ) .
                ( d  −c )               (  a   b )

   Evidently |T̂| = 1 and |T̂A| = |AT̂| = (ad − bc) = |A|.

   The same is true of its transpose (and inverse) T̂⊤ = (  0  1 ) .
                                                        ( −1  0 )

   This key property makes both T̂ and T̂⊤
   sign adjusted versions of the transposition matrix T.
Cramer’s Rule in the 2 × 2 Case
   Using determinant notation, the solution to the equations
                                     a11 x1 + a12 x2 = b1
                                     a21 x1 + a22 x2 = b2
    can be written in the alternative form

                       | b1  a12 |                  | a11  b1 |
           x1 = (1/D)  |         | ,    x2 = (1/D)  |         |
                       | b2  a22 |                  | a21  b2 |
   This accords with Cramer’s rule,
   which says that the solution to Ax = b is the vector x = (xi )ni=1
   each of whose components xi is the fraction with:
     1. denominator equal to the determinant D
         of the coefficient matrix A (provided, of course, that D ≠ 0);
     2. numerator equal to the determinant of the matrix [A−i /b]
        formed from A by excluding its ith column,
        then replacing it with the b vector of right-hand side elements,
        while keeping all the columns in their original order.
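Cramer's rule translates directly into code. The sketch below (helper names mine) computes determinants by the permutation expansion, so it is only practical for small n:

```python
from itertools import permutations

def det(A):
    """Determinant via the sum over permutations (fine for small matrices)."""
    n = len(A)
    total = 0
    for pi in permutations(range(n)):
        sign = (-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])
        term = sign
        for i in range(n):
            term *= A[i][pi[i]]
        total += term
    return total

def cramer(A, b):
    """Solve A x = b by Cramer's rule, assuming det(A) != 0."""
    D = det(A)
    # Replace column i of A by b, keeping all other columns in order.
    return [det([row[:i] + [b[r]] + row[i + 1:] for r, row in enumerate(A)]) / D
            for i in range(len(A))]

assert cramer([[2.0, 1.0], [1.0, 3.0]], [5.0, 5.0]) == [2.0, 1.0]
```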
Determinants of Order 3: Definition
   Determinants of order 3 can be calculated
   from those of order 2 according to the formula
                     | a22  a23 |        | a21  a23 |        | a21  a22 |
        |A| = a11    |          |  − a12 |          |  + a13 |          |
                     | a32  a33 |        | a31  a33 |        | a31  a32 |

            = ∑_{j=1}^{3} (−1)^{1+j} a1j |C1j |
   where, for j = 1, 2, 3, the 2 × 2 matrix C1j is the (1, j)-cofactor
   obtained by removing both row 1 and column j from the matrix A.
   The result is the following sum
      |A| = a11 a22 a33 − a11 a23 a32 + a12 a23 a31
                                                − a12 a21 a33 + a13 a21 a32 − a13 a22 a31
   of 3! = 6 terms, each the product of 3 elements chosen
   so that each row and each column is represented just once.
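A sketch of the first-row cofactor expansion for 3 × 3 matrices (function names are my own):

```python
def det2(M):
    """Determinant of a 2 x 2 matrix."""
    return M[0][0] * M[1][1] - M[1][0] * M[0][1]

def det3(A):
    """Cofactor expansion of |A| along the first row of a 3 x 3 matrix."""
    total = 0
    for j in range(3):
        # C_{1j}: delete row 1 and column j + 1 of A (0-based j here).
        C = [[A[i][k] for k in range(3) if k != j] for i in (1, 2)]
        total += (-1) ** j * A[0][j] * det2(C)
    return total

assert det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```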
Determinants of Order 3: Cofactor Expansion
   The determinant expansion
      |A| = a11 a22 a33 − a11 a23 a32 + a12 a23 a31
                                                − a12 a21 a33 + a13 a21 a32 − a13 a22 a31
   is very symmetric, suggesting (correctly)
   that the cofactor expansion along the first row (a11 , a12 , a13 )
                         |A| = ∑_{j=1}^{3} (−1)^{1+j} a1j |C1j |
    gives the same answer as the other cofactor expansions
               |A| = ∑_{j=1}^{3} (−1)^{r+j} arj |Crj | = ∑_{i=1}^{3} (−1)^{i+s} ais |Cis |
   along, respectively:
      ▶ the rth row (ar1 , ar2 , ar3 )
      ▶ the sth column (a1s , a2s , a3s )
Determinants of Order 3: Alternative Expressions
   One way of condensing the notation
      |A| = a11 a22 a33 − a11 a23 a32 + a12 a23 a31
                                − a12 a21 a33 + a13 a21 a32 − a13 a22 a31
    is to reduce it to

            |A| = ∑_{π∈Π3} sgn(π) ∏_{i=1}^{3} aiπ(i)

    for the sign function Π3 ∋ π ↦ sgn(π) ∈ {−1, +1}.
    The six values of sgn(π) can be read off as
     sgn(π123) = +1;    sgn(π132) = −1;    sgn(π231) = +1;
     sgn(π213) = −1;    sgn(π312) = +1;    sgn(π321) = −1.
   Exercise
   Verify these values for each of the six π ∈ Π3 by:
     1. calculating the number of inversions directly;
     2. expressing each π as the product of transpositions,
        and then counting these.
Sarrus’s Rule: Diagram
   An alternative way to evaluate determinants only of order 3
   is to add two new columns
   that repeat the first and second columns:
                                     a11 a12 a13 a11 a12
                                     a21 a22 a23 a21 a22
                                     a31 a32 a33 a31 a32
    Then add lines/arrows going up to the right or down to the right,
    as shown below:
    [diagram: three arrows fall to the right, through a11–a22–a33,
    a12–a23–a31, and a13–a21–a32; three arrows rise to the right,
    through a31–a22–a13, a32–a23–a11, and a33–a21–a12.]
   Note that some pairs of arrows in the middle cross each other.
Sarrus’s Rule Defined
   Now:
     1. multiply along the three lines falling to the right,
        then sum these three products, to obtain
                               a11 a22 a33 + a12 a23 a31 + a13 a21 a32
     2. multiply along the three lines rising to the right,
        then sum these three products, giving the sum a minus sign,
        to obtain
                             −a11 a23 a32 − a12 a21 a33 − a13 a22 a31
   The sum of all six terms exactly equals the earlier formula for |A|.
   Note that this method, known as Sarrus’s rule,
   does not generalize to determinants of order higher than 3.
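Sarrus's rule as code, checked against the earlier cofactor answer on one example (a sketch; by design it works only for 3 × 3 matrices):

```python
def sarrus(a):
    """Sarrus's rule: the three 'falling' products minus the three 'rising' ones."""
    falling = (a[0][0] * a[1][1] * a[2][2]
               + a[0][1] * a[1][2] * a[2][0]
               + a[0][2] * a[1][0] * a[2][1])
    rising = (a[0][2] * a[1][1] * a[2][0]
              + a[0][0] * a[1][2] * a[2][1]
              + a[0][1] * a[1][0] * a[2][2])
    return falling - rising

assert sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
assert sarrus([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
```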
The Determinant Mapping
   Let Dn denote the domain Rn×n of n × n matrices.
   Definition
   For all n ∈ N, the determinant mapping

           Dn ∋ A ↦ |A| := ∑_{π∈Πn} sgn(π) ∏_{i=1}^{n} aiπ(i) ∈ R

   specifies the determinant |A| of each n × n matrix A
   as a function of its n row vectors (a⊤i)_{i=1}^{n} .
   Here the multiplier sgn(π) attached to each product of n terms
   can be regarded as the sign adjustment
   associated with the permutation π ∈ Πn .
Row Mappings
   For a general natural number n ∈ N, consider any row mapping

           Dn ∋ A ↦ D(A) = D(⟨a⊤i⟩_{i=1}^{n}) ∈ R

   defined on the domain Dn of n × n matrices A
   with row vectors ⟨a⊤i⟩_{i=1}^{n} .
   Notation: For each fixed r ∈ Nn , let D(A/b⊤r)
   denote the new value D(a⊤1 , . . . , a⊤r−1 , b⊤r , a⊤r+1 , . . . , a⊤n)
   of the row mapping D after the rth row a⊤r of the matrix A
   has been replaced by the new row vector b⊤r ∈ Rn .
Row Multilinearity
   Definition
    The function Dn ∋ A ↦ D(A) of the n rows ⟨a⊤i⟩_{i=1}^{n} of A
    is (row) multilinear just in case,
    for each row number i ∈ {1, 2, . . . , n},
    each pair b⊤i , c⊤i ∈ Rn of new versions of row i,
    and each pair of scalars λ, µ ∈ R, one has

            D(A−i /λb⊤i + µc⊤i ) = λ D(A−i /b⊤i ) + µ D(A−i /c⊤i )

    Formally, the mapping Rn ∋ a⊤i ↦ D(A−i /a⊤i ) ∈ R
    is required to be linear, for each fixed row i ∈ Nn .
    That is, D is a linear function of the ith row vector a⊤i on its own,
    when all the other rows a⊤h (h ≠ i) are fixed.
Determinants are Row Multilinear
   Theorem
    For all n ∈ N, the determinant mapping

             Dn ∋ A ↦ |A| := ∑_{π∈Πn} sgn(π) ∏_{i=1}^{n} aiπ(i) ∈ R

    is a row multilinear function of its n row vectors (a⊤i)_{i=1}^{n} .
    Proof.
    For each fixed row r ∈ Nn , we have
       det(A−r /λb⊤r + µc⊤r )
           = ∑_{π∈Πn} sgn(π) (λ brπ(r) + µ crπ(r)) ∏_{i≠r} aiπ(i)
           = ∑_{π∈Πn} sgn(π) [λ brπ(r) ∏_{i≠r} aiπ(i) + µ crπ(r) ∏_{i≠r} aiπ(i)]
           = λ det(A−r /b⊤r ) + µ det(A−r /c⊤r )
   as required for multilinearity.
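Row multilinearity can be spot-checked numerically. In this sketch (helper names mine), `det` is the permutation-expansion determinant, and row r is replaced by the combination λb + µc:

```python
from itertools import permutations

def det(A):
    """Determinant via the sum over permutations."""
    n = len(A)
    total = 0
    for pi in permutations(range(n)):
        sign = (-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])
        term = sign
        for i in range(n):
            term *= A[i][pi[i]]
        total += term
    return total

def replace_row(A, r, row):
    """A with its row r (0-based) replaced by the given row."""
    return [row if i == r else A[i] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
b, c = [2, -1, 0], [3, 5, -2]
lam, mu = 2, -3
r = 1
combo = [lam * bi + mu * ci for bi, ci in zip(b, c)]
# det is linear in row r when the other rows are held fixed.
assert det(replace_row(A, r, combo)) == lam * det(replace_row(A, r, b)) + mu * det(replace_row(A, r, c))
```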
Permutation Matrices: Definition
    Definition
    Given any permutation π ∈ Πn on {1, 2, . . . , n},
    define Pπ as the n × n permutation matrix
    whose elements satisfy p^π_{π(i),j} = δij , or equivalently p^π_{i,j} = δ_{π⁻¹(i),j} .
    That is, the rows of the identity matrix In are permuted
    so that for each i = 1, 2, . . . , n,
    its ith row vector is moved to become row π(i) of Pπ .
    Lemma
    For each permutation matrix Pπ one has (Pπ)⊤ = Pπ⁻¹ .
    Proof.
    Because π is a permutation, i = π⁻¹(j) ⇐⇒ j = π(i).
    Then the definitions imply that for all (i, j) ∈ Nn × Nn one has

            (Pπ)⊤_{i,j} = p^π_{j,i} = δ_{π⁻¹(j),i} = δ_{π(i),j} = p^{π⁻¹}_{i,j}
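Both the definition and the lemma can be checked in a few lines (helper names mine); `perm_matrix` places row i of the identity as row π(i) of the result:

```python
def perm_matrix(pi):
    """P_pi: row pi(i) of the result equals e_i (pi in 1-based one-line notation)."""
    n = len(pi)
    P = [[0] * n for _ in range(n)]
    for i in range(n):  # 0-based index i stands for the element i + 1
        P[pi[i] - 1][i] = 1
    return P

def transpose(M):
    return [list(col) for col in zip(*M)]

def inverse_perm(pi):
    """One-line notation for the inverse permutation."""
    inv = [0] * len(pi)
    for i, v in enumerate(pi):
        inv[v - 1] = i + 1
    return tuple(inv)

# (P_pi)^T = P_{pi^{-1}}, illustrated for a 3-cycle.
pi = (2, 3, 1)
assert transpose(perm_matrix(pi)) == perm_matrix(inverse_perm(pi))
```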
Permutation Matrices: Examples
    Example
    There are two 2 × 2 permutation matrices, which are given by:

            P12 = I2 ;    P21 = ( 0  1 )
                                ( 1  0 )

    Their signs are respectively +1 and −1.
    There are 3! = 6 permutation matrices in 3 dimensions, given by:

                  ( 1 0 0 )          ( 1 0 0 )          ( 0 1 0 )
          P123 =  ( 0 1 0 ) ; P132 = ( 0 0 1 ) ; P213 = ( 1 0 0 ) ;
                  ( 0 0 1 )          ( 0 1 0 )          ( 0 0 1 )

                  ( 0 0 1 )          ( 0 1 0 )          ( 0 0 1 )
          P231 =  ( 1 0 0 ) ; P312 = ( 0 0 1 ) ; P321 = ( 0 1 0 ) .
                  ( 0 1 0 )          ( 1 0 0 )          ( 1 0 0 )

    Their signs are respectively +1, −1, −1, +1, +1 and −1.
Multiplying a Matrix by a Permutation Matrix
    Lemma
    Given any n × n matrix A, for each permutation π ∈ Πn
    the corresponding permutation matrix Pπ satisfies

            (PπA)_{π(i),j} = aij = (APπ)_{i,π⁻¹(j)}

    Proof.
    For each pair (i, j) ∈ Nn × Nn , one has

            (PπA)_{π(i),j} = ∑_{k=1}^{n} p^π_{π(i),k} akj = ∑_{k=1}^{n} δik akj = aij

    and also, because p^π_{k,π⁻¹(j)} = δ_{π⁻¹(k),π⁻¹(j)} = δkj ,

            (APπ)_{i,π⁻¹(j)} = ∑_{k=1}^{n} aik p^π_{k,π⁻¹(j)} = ∑_{k=1}^{n} aik δkj = aij

    So premultiplying A by Pπ applies π to A's rows,
    whereas postmultiplying A by Pπ applies π⁻¹ to A's columns.
Multiplying Permutation Matrices
   Theorem
   Given the composition π ◦ ρ of two permutations π, ρ ∈ Πn ,
   the associated permutation matrices satisfy Pπ Pρ = Pπ◦ρ .
   Proof.
    For each pair (i, j) ∈ Nn × Nn , one has
       (PπPρ)ij = ∑_{k=1}^{n} p^π_{ik} p^ρ_{kj} = ∑_{k=1}^{n} δ_{π⁻¹(i),k} δ_{ρ⁻¹(k),j}
                = δ_{ρ⁻¹(π⁻¹(i)),j} = δ_{(π◦ρ)⁻¹(i),j} = p^{π◦ρ}_{ij}
   Corollary
    If π = π1 ◦ π2 ◦ · · · ◦ πq , then Pπ = P^{π1} P^{π2} · · · P^{πq} .
   Proof.
   By induction on q, using the result of the Theorem.
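A brute-force check of Pπ Pρ = Pπ◦ρ over all of Π3 (a sketch with my own helper names; `compose` follows the convention (π ∘ ρ)(i) = π(ρ(i))):

```python
from itertools import permutations

def perm_matrix(pi):
    """P_pi: row pi(i) equals e_i (pi in 1-based one-line notation)."""
    n = len(pi)
    P = [[0] * n for _ in range(n)]
    for i in range(n):
        P[pi[i] - 1][i] = 1
    return P

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def compose(pi, rho):
    """One-line notation for pi ∘ rho, i.e. i ↦ pi(rho(i))."""
    return tuple(pi[rho[i] - 1] for i in range(len(pi)))

for pi in permutations((1, 2, 3)):
    for rho in permutations((1, 2, 3)):
        assert matmul(perm_matrix(pi), perm_matrix(rho)) == perm_matrix(compose(pi, rho))
```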
Any Permutation Matrix Is Orthogonal
   Proposition
   Any permutation matrix Pπ satisfies Pπ (Pπ )> = (Pπ )> Pπ = In ,
   so is orthogonal.
   Proof.
    For each pair (i, j) ∈ Nn × Nn , one has
       [Pπ(Pπ)⊤]ij = ∑_{k=1}^{n} p^π_{ik} p^π_{jk} = ∑_{k=1}^{n} δ_{π⁻¹(i),k} δ_{π⁻¹(j),k}
                   = δ_{π⁻¹(i),π⁻¹(j)} = δij
    and also
       [(Pπ)⊤Pπ]ij = ∑_{k=1}^{n} p^π_{ki} p^π_{kj} = ∑_{k=1}^{n} δ_{π⁻¹(k),i} δ_{π⁻¹(k),j}
                   = ∑_{ℓ=1}^{n} δℓi δℓj = δij
Transposition Matrices
   A special case of a permutation matrix
   is a transposition Trs of rows r and s.
    As the matrix I with rows r and s transposed, it satisfies

                      ⎧ δij   if i ∉ {r, s}
            (Trs)ij = ⎨ δsj   if i = r
                      ⎩ δrj   if i = s
   Exercise
   Let A be any n × n matrix. Prove that:
   1) any transposition matrix Trs is symmetric and orthogonal;
   2) Trs = Tsr ; 3) Trs Tsr = Tsr Trs = I;
   4) Trs A is A with rows r and s interchanged;
   5) ATrs is A with columns r and s interchanged.
Determinants with Permuted Rows: Theorem
  Theorem
   Given any n × n matrix A and any permutation π ∈ Πn ,
  one has |Pπ A| = |APπ | = sgn(π) |A|.
Determinants with Permuted Rows: Proof
   Proof.
    The expansion formula for determinants gives

            |PπA| = ∑_{ρ∈Πn} sgn(ρ) ∏_{i=1}^{n} (PπA)_{i,ρ(i)}

    But for each i ∈ Nn , ρ ∈ Πn , one has (PπA)_{i,ρ(i)} = a_{π⁻¹(i),ρ(i)} , so

       |PπA| = ∑_{ρ∈Πn} sgn(ρ) ∏_{i=1}^{n} a_{π⁻¹(i),ρ(i)}
             = [1/sgn(π)] ∑_{ρ∈Πn} sgn(ρ ◦ π) ∏_{i=1}^{n} a_{i,(ρ◦π)(i)}
             = sgn(π) ∑_{σ∈Πn} sgn(σ) ∏_{i=1}^{n} a_{i,σ(i)} = sgn(π) |A|

    because sgn(ρ ◦ π) = sgn(ρ) sgn(π) and 1/sgn(π) = sgn(π),
    whereas there is an obvious bijection Πn ∋ ρ ↔ ρ ◦ π = σ ∈ Πn
    on the set of permutations Πn .
   The proof that |APπ | = sgn(π) |A| is sufficiently similar
   to be left as an exercise.
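The theorem is again easy to test numerically, reusing the permutation-expansion determinant (a sketch; helper names are mine):

```python
from itertools import permutations

def sgn(pi):
    n = len(pi)
    return (-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])

def prod_entries(A, pi):
    """The product a_{1,pi(1)} ... a_{n,pi(n)} (pi 1-based)."""
    p = 1
    for i in range(len(A)):
        p *= A[i][pi[i] - 1]
    return p

def det(A):
    n = len(A)
    return sum(sgn(pi) * prod_entries(A, pi) for pi in permutations(range(1, n + 1)))

def perm_matrix(pi):
    n = len(pi)
    P = [[0] * n for _ in range(n)]
    for i in range(n):
        P[pi[i] - 1][i] = 1
    return P

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
for pi in permutations((1, 2, 3)):
    P = perm_matrix(pi)
    assert det(matmul(P, A)) == sgn(pi) * det(A)   # permuted rows
    assert det(matmul(A, P)) == sgn(pi) * det(A)   # permuted columns
```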
The Alternation Rule for Determinants
   Corollary
   Given any n × n matrix A
   and any transposition τrs with associated transposition matrix Trs ,
   one has |Trs A| = |ATrs | = −|A|.
   Proof.
   Apply the previous theorem in the special case
   when π = τrs and so Pπ = Trs .
   Then, because sgn(π) = sgn(τrs ) = −1,
   the equality |Pπ A| = sgn(π) |A| implies that |Trs A| = −|A|.
   We have shown that, for any n × n matrix A, given any:
      1. permutation π ∈ Πn , one has |Pπ A| = |APπ | = sgn(π) |A|;
     2. transposition τrs , one has |Trs A| = |ATrs | = −|A|.
Sign Adjusted Transpositions
   We define the sign adjusted transposition matrix T̂rs
   as either one of the two matrices that:
   (i) swaps rows or columns r and s;
   (ii) then multiplies one, but only one,
   of the two swapped rows or columns by −1.
    As the matrix I with rows r and s transposed,
    and then one sign changed, it satisfies

                        ⎧ δij      if i ∉ {r, s}
            (T̂rs)ij =  ⎨ αs δsj   if i = r
                        ⎩ αr δrj   if i = s

    where αr , αs ∈ {−1, +1} with αr = −αs .
   It evidently satisfies |T̂rs A| = |AT̂rs | = |A|.
Sign Adjusted Permutations
   Given any permutation matrix P,
   there is a unique permutation π such that P = Pπ .
    Suppose that π = τ_{r1 s1} ◦ · · · ◦ τ_{rℓ sℓ} is any one of the several ways
    in which the permutation π can be decomposed
    into a composition of transpositions.
    Then P = ∏_{k=1}^{ℓ} T_{rk sk} and |PA| = (−1)^ℓ |A| for any A.
    Definition
    Say that P̂ is a sign adjusted version of P = Pπ
    just in case it can be expressed as the product P̂ = ∏_{k=1}^{ℓ} T̂_{rk sk}
    of sign adjusted transpositions satisfying P = ∏_{k=1}^{ℓ} T_{rk sk} .
    Then it is easy to prove by induction on ℓ
    that for every n × n matrix A one has |P̂A| = |AP̂| = |A|.
   Recall that all the elements of a permutation matrix P are 0 or 1.
   A sign adjustment of P involves changing some of the 1 elements
   into −1 elements, while leaving all the 0 elements unchanged.
Triangular Matrices: Definition
   Definition
   A square matrix is upper (resp. lower) triangular
   if all its non-zero off-diagonal elements lie above and to the right
   (resp. below and to the left) of the diagonal
   — i.e., in the upper (resp. lower) triangle
   bounded by the principal diagonal.
   ▶ The elements of an upper triangular matrix U
     satisfy (U)ij = 0 whenever i > j.
   ▶ The elements of a lower triangular matrix L
     satisfy (L)ij = 0 whenever i < j.
Products of Upper Triangular Matrices
   Theorem
   The product W = UV of any two upper triangular matrices U, V
   is upper triangular,
   with diagonal elements wii = uii vii (i = 1, . . . , n) equal
   to the product of the corresponding diagonal elements of U, V.
   Proof.
   Given any two upper triangular n × n matrices U and V,
   one has uik vkj = 0 unless both i ≤ k and k ≤ j.
   So the elements (wij )n×n of their product W = UV satisfy
   \[
   w_{ij} =
   \begin{cases}
   \sum_{k=i}^{j} u_{ik} v_{kj} & \text{if } i \le j \\
   0                            & \text{if } i > j
   \end{cases}
   \]
   Hence W = UV is upper triangular.
   Finally, when j = i the above sum collapses to just one term,
   and wii = uii vii for i = 1, . . . , n.
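The theorem above can be checked numerically. A minimal pure-Python sketch, where the example matrices U and V are illustrative assumptions:

```python
# Sketch: the product of two upper triangular matrices is upper triangular,
# with diagonal entries w_ii = u_ii * v_ii.  Example matrices are assumptions.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

U = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
V = [[7, 8, 9],
     [0, 1, 2],
     [0, 0, 3]]
W = matmul(U, V)
# All elements below the diagonal vanish ...
assert all(W[i][j] == 0 for i in range(3) for j in range(3) if i > j)
# ... and the diagonal entries are products of corresponding diagonals.
assert all(W[i][i] == U[i][i] * V[i][i] for i in range(3))
```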
Triangular Matrices: Exercises
   Exercise
   Prove that the transpose:
     1. U> of any upper triangular matrix U is lower triangular;
     2. L> of any lower triangular matrix L is upper triangular.
   Exercise
   Consider the matrix Er +αq
   that represents the elementary row operation
   of adding α times row q to row r , with r ≠ q.
   Under what conditions is Er +αq
   (i) upper triangular? (ii) lower triangular?
   Hint: Apply the row operation to the identity matrix I,
   which leaves I unchanged except for the element α in position (r , q).
   Answer: (i) iff q > r ; (ii) iff q < r .
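The exercise's answer follows from where the extra element α lands. A pure-Python sketch, with function names that are illustrative assumptions:

```python
# Sketch: E_{r+αq} is the identity with an extra α in position (r, q), so it
# is upper triangular iff q > r and lower triangular iff q < r.
def elementary_row_add(n, r, q, alpha):
    # Apply "row r += alpha * row q" to the n x n identity matrix.
    E = [[int(i == j) for j in range(n)] for i in range(n)]
    E[r][q] = alpha
    return E

def is_upper(A):
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(n) if i > j)

def is_lower(A):
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(n) if i < j)

E1 = elementary_row_add(3, 0, 2, 5.0)    # q = 2 > r = 0: alpha above diagonal
assert is_upper(E1) and not is_lower(E1)
E2 = elementary_row_add(3, 2, 0, 5.0)    # q = 0 < r = 2: alpha below diagonal
assert is_lower(E2) and not is_upper(E2)
```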
Products of Lower Triangular Matrices
   Theorem
   The product of any two lower triangular matrices
   is lower triangular.
   Proof.
   Given any two lower triangular matrices L, M,
   taking transposes shows that (LM)> = M> L> = U,
   where the product U is upper triangular,
   as the product of upper triangular matrices.
   Hence LM = U> is lower triangular,
   as the transpose of an upper triangular matrix.
Determinants of Triangular Matrices
   Theorem
   The determinant of any n × n upper triangular matrix U
   equals the product of all the elements on its principal diagonal.
   Proof.
   Recall the expansion formula
   \[ |U| = \sum_{\pi \in \Pi} \operatorname{sgn}(\pi) \prod_{i=1}^{n} u_{i\pi(i)} \]
   where Π denotes the set of permutations on {1, 2, . . . , n}.
   Because U is upper triangular, one has uiπ(i) = 0 unless i ≤ π(i).
   So \(\prod_{i=1}^{n} u_{i\pi(i)} = 0\) unless i ≤ π(i) for all i = 1, 2, . . . , n.
   But the identity ι is the only permutation π ∈ Π
   that satisfies i ≤ π(i) for all i ∈ Nn .
   Because sgn(ι) = +1, the expansion reduces to the single term
   \[ |U| = \operatorname{sgn}(\iota) \prod_{i=1}^{n} u_{i\iota(i)} = \prod_{i=1}^{n} u_{ii} \]
   This is the product of the n diagonal elements, as claimed.
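The collapse of the permutation expansion to a single term can be illustrated directly. A pure-Python sketch, where the example matrix U is an assumption:

```python
# Sketch: the Leibniz expansion of |U| for an upper triangular U collapses
# to the product of its diagonal entries.  The example matrix is an assumption.
from itertools import permutations

def det(A):
    # Determinant via the permutation (Leibniz) expansion.
    n = len(A)
    total = 0
    for pi in permutations(range(n)):
        inv = sum(1 for i in range(n)
                    for j in range(i + 1, n) if pi[i] > pi[j])
        term = -1 if inv % 2 else +1   # sgn(π) via inversion count
        for i in range(n):
            term *= A[i][pi[i]]
        total += term
    return total

U = [[2, 7, 1],
     [0, 3, 4],
     [0, 0, 5]]
assert det(U) == 2 * 3 * 5   # only the identity permutation ι contributes
```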
Invertible Triangular Matrices
   Similarly \(|L| = \prod_{i=1}^{n} \ell_{ii}\) for any lower triangular matrix L.
   Evidently:
   Corollary
   A triangular matrix (upper or lower) is invertible
   if and only if no element on its principal diagonal is 0.