
Mauricio A. Elzo, University of Florida, 1996, 2005, 2006, 2010, 2014.


ANIMAL BREEDING NOTES

CHAPTER 4

DEFINITE, ORTHOGONAL, AND IDEMPOTENT MATRICES

Definitions

Definite matrices are defined for symmetric matrices only. Let A be an n×n symmetric matrix and x′Ax be a quadratic form. Then, the symmetric matrix A and the quadratic form x′Ax are said to be:

a) positive definite (p.d.), if x′Ax > 0 for all x ≠ 0,

b) positive semi-definite (p.s.d.), if x′Ax ≥ 0 for all x ≠ 0, with x′Ax = 0 for at least one x ≠ 0,

c) non-negative definite (n.n.d.), if x′Ax ≥ 0 for all x ≠ 0,

d) negative definite (n.d.), if x′Ax < 0 for all x ≠ 0,

e) negative semi-definite (n.s.d.), if x′Ax ≤ 0 for all x ≠ 0, with x′Ax = 0 for at least one x ≠ 0, and

f) non-positive definite (n.p.d.), if x′Ax ≤ 0 for all x ≠ 0.



Properties of positive definite (p.d.) matrices

(1) A symmetric matrix A is p.d. if and only if all the characteristic roots of A are positive.

Proof: (by contradiction)

(⇐) {λi > 0} ⇒ A p.d.

Let P be an orthogonal matrix that diagonalizes A, i.e.,

P′AP = D = diag{λi},

where {λi} are the latent roots of A.

Let y = P′x ⇒ x = (P′)⁻¹y = Py.

Thus,

x′Ax = y′P′APy = y′Dy = Σi=1..n λiyi²

If all λi > 0, then x′Ax = y′Dy ≥ 0 for all y, with equality only when y = 0, i.e., when x = Py = P0 = 0 ⇒ A is p.d.

(⇒) A p.d. ⇒ {λi > 0}

Assume a characteristic root of A, e.g., λ1, is not positive.

Let y* be the n×1 vector with the first element equal to 1 and the rest zeroes, and let x* = Py*; then x* ≠ 0 because y* ≠ 0 (see 4.28, pg. 23, Goldberger, 1964).

Then,

x*′Ax* = y*′P′APy* = y*′Dy* = Σi=1..n λiy*i² = λ1 ≤ 0

which contradicts the assumption that A is p.d. ⇒ λ1 > 0, and the same argument applied to each root gives {λi > 0}.
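A minimal numerical check of property (1), using a hypothetical symmetric matrix and NumPy's symmetric eigensolver:

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])               # symmetric

    lam, P = np.linalg.eigh(A)                    # A = P diag(lam) P'
    print(np.allclose(P.T @ A @ P, np.diag(lam))) # P'AP = D = diag{lam_i}
    print("eigenvalues:", lam)
    print("p.d.:", bool(np.all(lam > 0)))         # property (1) criterion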

(2) If An×n is p.d., then

(a) │A│ > 0,

(b) rank (A) = n, and

(c) A is non-singular.

Proof:

(a) │A│ = │P′AP│ = │D│ = λ1λ2 ... λn, where {λi > 0} by property (1) of p.d. matrices; thus, │D│ > 0 ⇒ │A│ > 0,

(b) rank (A) = rank (P′AP) = rank (D) = n because λi > 0, i = 1, ..., n,

(c) A is nonsingular because │A│ > 0, as proven in (a).

(3) If An×n is p.d. and P is an n×m matrix with rank (P) = m, then P′AP is p.d.

Proof: P′AP is an m×m symmetric matrix. Consider ym×1, y ≠ 0, and let x = Py. Then y′(P′AP)y = x′Ax, and x = Py ≠ 0 because P has full column rank and y ≠ 0. Because A is p.d. and x ≠ 0, x′Ax > 0. But y′(P′AP)y = x′Ax, thus y′(P′AP)y > 0 for all y ≠ 0, so, by definition, P′AP is p.d.

Specializations of property (3)

(3.1) If A is p.d. and P is nonsingular, then P′AP is p.d.

Proof: same as for property (3) above.



(3.2) If A is p.d., then A⁻¹ is p.d.

Proof: A⁻¹ exists by property (2). Apply (3.1) with P = A⁻¹:

P′AP = (A⁻¹)′AA⁻¹

= (A⁻¹)′

= A⁻¹ because A is symmetric

⇒ A⁻¹ is p.d.

(3.3) If P is an n×m matrix with rank (P) = m, then P′P is p.d.

Proof: Consider A = I in (3) above. The identity matrix I is p.d. because

x′Ix = Σi=1..n xi² > 0 for all x ≠ 0.

So, we have:

P′AP = P′IP = P′P ⇒ P′P is p.d., by property (3) above.

(4) A principal submatrix of a square matrix A is a submatrix whose diagonal elements coincide with diagonal elements of A. A principal submatrix is obtained by deleting the appropriate rows and columns of A. If A is p.d., then every principal submatrix of A is p.d.

Proof: Without loss of generality, let B be the principal submatrix of A obtained by deleting the last n−m rows and columns of A. Then,

B = [ Im  0m,n−m ] [ A11   A12 ] [ Im     ]
                   [ A12′  A22 ] [ 0n−m,m ]

Because the last factor (the n×m matrix with Im stacked over 0n−m,m) has rank equal to m, it qualifies as the P of property (3) above, and the first factor is its transpose P′. Thus, by property (3), B is p.d.

(5) A principal minor is the determinant of a principal submatrix. If A is p.d., then every principal minor of A is positive.

Proof: Let │B│, where B comes from (4) above, be a principal minor. Since B is p.d. by property

(4), │B│ > 0 by property (2).

A particular case of (5) is:

If A is p.d., then

(a) aii > 0 for all i, and

(b) aiiajj − aij² > 0 for all i ≠ j.

Proof:

(a) Without loss of generality, choose Bn×1 with a 1 in the first element and zeroes elsewhere. Hence, rank (B) = 1. Thus, by property (4), B′AB = [a11] is p.d., and by property (2) its determinant is positive, i.e.,

│B′AB│ = │a11│ = a11 > 0

(b) Without loss of generality, choose Bn×2 with 1s in positions (1,1) and (2,2), and zeroes elsewhere. Hence, rank (B) = 2.

By property (4),

B′AB = [ a11  a12 ]
       [ a12  a22 ]   is p.d.

By property (2),

│B′AB│ = a11a22 − a12² > 0

(6) If A is p.d., there exists a nonsingular matrix P such that PAP′ = I and P′P = A⁻¹.

Proof: Let E be the orthogonal matrix such that

EAE′ = D = diag{λi}

and let

T = diag{1/√λi}.

Define:

P = TE, where P is nonsingular because it is the product of nonsingular matrices.

Thus,

PAP′ = TEAE′T′

PAP′ = TDT

PAP′ = diag{1/√λi} diag{λi} diag{1/√λi}

PAP′ = I

Furthermore, from PAP′ = I we get:

P′(PAP′)P = P′IP

P′PAP′P = P′P

Because P is nonsingular, P′P is also nonsingular, hence (P′P)⁻¹ exists. Thus,

(P′P)⁻¹P′PAP′P = (P′P)⁻¹P′P

AP′P = I

A⁻¹AP′P = A⁻¹I

P′P = A⁻¹

(7) If A is p.d. of order n, there is a full rank n×n matrix L such that A = LL′.

Proof: P′AP = D for P orthogonal, where D is the diagonal matrix of order n whose elements are the eigenvalues of A (and of D). Because P is orthogonal, PP′ = P′P = I. Thus,

PP′APP′ = PDP′.

But since A is p.d., the elements of D = diag{λi} are all positive, thus

A = PDP′

A = (PD^½)(D^½P′)

A = LL′, where L = PD^½.

Also, note that

L′L = D^½P′PD^½

= D
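A sketch of this factorization (hypothetical example matrix):

    import numpy as np

    A = np.array([[4.0, 1.0], [1.0, 3.0]])       # symmetric p.d.
    lam, P = np.linalg.eigh(A)                   # A = P diag(lam) P'
    L = P @ np.diag(np.sqrt(lam))                # L = P D^{1/2}, full rank

    print(np.allclose(L @ L.T, A))               # A = LL'
    print(np.allclose(L.T @ L, np.diag(lam)))    # L'L = D

Note that np.linalg.cholesky(A) would return a different, lower triangular L with the same property A = LL′; the factor in property (7) is not unique.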

(8) A symmetric matrix is p.d. if and only if it can be written as PP′ for a nonsingular P.

Proof:

(a) Necessary condition: existence of P.

Because A is symmetric, there is an orthogonal matrix Q such that

QAQ′ = D = diag{λi}

QAQ′ = D^½ID^½

⇒ D^(−½)QAQ′D^(−½) = D^(−½)D^½ID^½D^(−½)

TAT′ = I for T = D^(−½)Q

Note: T is nonsingular because D^(−½) and Q are, which implies that (D^(−½))⁻¹ = D^½ and Q⁻¹ exist. Hence T⁻¹ = Q⁻¹D^½ exists, i.e., T is nonsingular.

However, T is not orthogonal, even if Q is, because each element of each column of Q is divided by the square root of the corresponding eigenvalue; e.g., for the jth column qj of Q, the product D^(−½)qj = tj is:

tj = D^(−½)qj = [ q1j/√λ1 ]
                [ q2j/√λ2 ]
                [    ⋮    ]
                [ qnj/√λn ]

Thus,

tj′tj = Σi=1..n qij²/λi ≠ 1 in general,

and, for two different columns qj and qk (j ≠ k),

tj′tk = Σi=1..n qijqik/λi ≠ 0 in general.

Thus, A = T⁻¹(T′)⁻¹ = PP′ for P = T⁻¹ = Q⁻¹D^½.

(b) If A = PP for P nonsingular, then A is symmetric and

xAx = xPPx

which is the sum of squares of Px. Thus,

xAx > 0 for all Px ≠ 0

and

xAx = 0 for all Px = 0.

But Px = 0 only when x = 0 because P is non-singular, which implies that P1 exists. Thus,

xAx > 0 for all x ≠ 0

and

xAx = 0 only for x = 0


[4-9]

 by definition A is p.d.

(9) If Am×n has full column rank, i.e., rank (A) = n, then A′A is positive definite.

Proof: x′A′Ax is the sum of squares of the elements of Ax. If A has full column rank, then Ax = 0 only when x = 0. Thus,

x′A′Ax > 0 for all x ≠ 0

⇒ A′A is p.d.

Corollary: If Am×n has full row rank, i.e., rank (A) = m, then AA′ is p.d.
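A numerical check of property (9) and its corollary, with a hypothetical random A (note that AA′ is only p.s.d. here, because this A does not have full row rank):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 3))              # full column rank (w.p. 1)

    print(np.linalg.matrix_rank(A))              # n = 3
    print(np.linalg.eigvalsh(A.T @ A))           # all positive: A'A is p.d.
    print(np.linalg.eigvalsh(A @ A.T))           # 5×5 of rank 3: two eigenvalues
                                                 # are ~0, so AA' is p.s.d. only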

(10) The sum of p.d. matrices is also p.d.

Proof: Let Ai, i = 1, ..., p, be a set of p.d. matrices. Then, consider:

x′(Σi=1..p Ai)x = x′A1x + ... + x′Apx

Each one of the quadratics x′Aix, i = 1, ..., p, is positive for x ≠ 0 ⇒ their sum is positive ⇒ the sum of p.d. matrices is also p.d.

Properties of positive semi-definite (p.s.d.) matrices

(1) A symmetric matrix A is p.s.d. if and only if all the eigenvalues are either zero or positive

with at least one of them equal to zero.

(2) If An×n is p.s.d., then,

(a) │A│ = 0,

(b) rank (A) = r < n,



(c) A is singular.

(3) If An×n is p.s.d. and P is an n×m matrix with rank (P) = m, then P′AP is p.s.d.

Specializations of property (3):

(3.1) If A is p.s.d. and P is nonsingular, then P′AP is p.s.d.

(3.2) If A is p.s.d. then A is p.s.d.

(3.3) If P is an n×m matrix with rank (P) = r < m, then P′P is p.s.d.

(4) If A is p.s.d., then some principal submatrices of A are p.s.d. while others are p.d.

(5) If A is p.s.d., then some principal minors of A are positive while others are zero. In particular,

(a) aii ≥ 0 for all i, and

(b) aiiajj − aij² ≥ 0 for all i and j.

(6) If An×n is p.s.d. of rank r, there exists a singular matrix Pn×n of rank r such that

(a) PAP′ = [ Ir  0 ]
           [ 0   0 ] , and

(b) P′P = A⁻, a generalized inverse of A.

Proof:

(a) EAE′ = [ Dr  0 ]
           [ 0   0 ]  ≡ Dn for E orthogonal.

Define:

T = [ Dr^(−½)  0 ]
    [ 0        0 ]

Then,

P = TE ⇒ P is singular because T is singular.

Thus,

PAP′ = TEAE′T′

PAP′ = T [ Dr  0 ] T
         [ 0   0 ]

PAP′ = [ Dr^(−½)  0 ] [ Dr  0 ] [ Dr^(−½)  0 ]
       [ 0        0 ] [ 0   0 ] [ 0        0 ]

PAP′ = [ Dr^(−½)DrDr^(−½)  0 ]
       [ 0                 0 ]

PAP′ = [ Ir  0 ]
       [ 0   0 ]

(b) A g-inverse A⁻ of A must satisfy AA⁻A = A, where A = E′DnE, for E orthogonal (from (a)).

Proof: Consider

A⁻ = E′Dn⁻E, where Dn⁻ = [ Dr⁻¹  0 ]
                         [ 0     0 ]

Thus,

AA⁻A = (E′DnE)(E′Dn⁻E)(E′DnE)

= E′DnIDn⁻IDnE

= E′DnDn⁻DnE

= E′DnE

⇒ A⁻ = E′Dn⁻E is a g-inverse of A.

But

Dn⁻ = TT = T′T,

⇒ A⁻ = E′T′TE

A⁻ = P′P

⇒ P′P is a g-inverse of A.

(7) If An×n is p.s.d. of rank r, there is a full column rank n×r matrix L such that A = LL′.

Proof:

PAP′ = [ Dr  0 ]
       [ 0   0 ]   for P orthogonal

PAP′ = [ Dr^½ ] [ Dr^½  0 ]
       [ 0    ]

Thus,

A = P′ [ Dr^½ ] [ Dr^½  0 ] P
       [ 0    ]

A = LL′

where

L = P′ [ Dr^½ ]   is n×r of full column rank,
       [ 0    ]

and

L′ = [ Dr^½  0 ] P   is r×n of full row rank.

Also, note that

L′L = [ Dr^½  0 ] PP′ [ Dr^½ ]
                      [ 0    ]

L′L = Dr^½Dr^½

L′L = Dr

(8) A symmetric matrix is p.s.d. if and only if it can be written as P′P for a singular matrix P.

Proof:

(a) Necessary condition: existence of P.

Because A is symmetric,

QAQ′ = [ Dr  0 ]
       [ 0   0 ]  ≡ Dn for Q orthogonal

⇒ A = Q′DnQ

A = Q′DDQ

where

D = [ Dr^½  0 ]
    [ 0     0 ]

⇒ A = P′P for P = DQ

(b) Sufficient condition: if A = P′P for P singular, then A is symmetric and x′Ax = x′P′Px, which is the sum of squares of the elements of Px. Thus, x′Ax ≥ 0 for all x, with x′Ax = 0 exactly when Px = 0. But Px = 0 for at least one x ≠ 0 because P is singular. Hence, x′Ax ≥ 0 for all x ≠ 0 with at least one x ≠ 0 for which x′Ax = 0. So, by definition, A is p.s.d.



(9) If Am×n does not have full column rank, i.e., rank (A) = r < n, then A′A is p.s.d.

(10) The sum of p.s.d. matrices is also p.s.d.

Similar theorems to those described above can also be stated for n.n.d., n.d., n.s.d. and n.p.d. matrices. In particular, note that if A is n.d., the "nested" principal minors of A alternate in sign, i.e., aii < 0, aiiajj − aij² > 0, ...

Orthogonal matrices

A matrix A is orthogonal if A′A = I, which implies that A′ = A⁻¹ and that AA′ = I.

Properties of orthogonal matrices:

(1) The inner product of any row (column) with itself is 1, and with any other row (column) is zero.

Proof: This is a consequence of A′A = I (columns) and AA′ = I (rows).
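A quick check of the definition and of property (1), using an orthogonal Q obtained from the QR factorization of a hypothetical random matrix:

    import numpy as np

    rng = np.random.default_rng(3)
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # Q is orthogonal

    print(np.allclose(Q.T @ Q, np.eye(4)))       # columns orthonormal
    print(np.allclose(Q @ Q.T, np.eye(4)))       # rows orthonormal
    print(np.allclose(np.linalg.inv(Q), Q.T))    # Q^{-1} = Q'
    print(round(abs(np.linalg.det(Q)), 6))       # |det Q| = 1, cf. property (3)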

(2) A product of orthogonal matrices is itself orthogonal.

Proof: Let A and B be two orthogonal matrices. Then,

(AB)(AB)′ = ABB′A′

= AIA′

= AA′

= I

(3) The determinant of an orthogonal matrix is either 1 or 1.

Proof: For A orthogonal,

│A′A│ = │I│

│A′││A│ = │I│

Thus, because │A′│ = │A│,

│A││A│ = 1

But (1)(1) = 1 and (−1)(−1) = 1

⇒ │A│ = 1 or −1

(4) If λ is a latent root of an orthogonal matrix A, then so is 1/λ.

Proof: Note first that λ ≠ 0, because │A│ = ±1 ≠ 0. Then,

│A − λI│ = │AA′ − λA′│ = 0 for AA′ = I, │A′│ ≠ 0

⇒ │I − λA′│ = 0

⇒ │(1/λ)I − A′│ = 0

⇒ │((1/λ)I − A)′│ = 0

⇒ │(1/λ)I − A│ = 0

⇒ │A − (1/λ)I│ = 0

⇒ 1/λ is a latent root of A.
λ

Idempotent Matrices

A matrix A is idempotent if A² = A. For instance, for G a generalized inverse of A (so that AGA = A), the matrix H = GA is idempotent because

(GA)(GA) = G(AGA) = GA.
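A small check of this example (hypothetical A; the Moore-Penrose inverse is used here as one valid choice of G):

    import numpy as np

    A = np.array([[1.0, 1.0], [1.0, 1.0], [0.0, 1.0]])
    G = np.linalg.pinv(A)                        # satisfies AGA = A
    H = G @ A

    print(np.allclose(A @ G @ A, A))             # G is a g-inverse of A
    print(np.allclose(H @ H, H))                 # H = GA is idempotent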

Properties of Idempotent Matrices

(1) Idempotent matrices are square.

Proof: A idempotent ⇒ A² = AA exists, and the product AA is defined only if A is square.

(2) The only nonsingular idempotent matrix is I.

Proof: Consider a nonsingular idempotent A; then

A² = A

A⁻¹AA = A⁻¹A

A = I

(3) If A and B are idempotent, so is AB, provided that AB = BA.

Proof:

(AB)² = ABAB

= A(BA)B

= A(AB)B if BA = AB

= AABB

= A²B²

= AB because A² = A and B² = B

(4) If P is orthogonal and A is idempotent, P′AP is idempotent.

Proof:

(P′AP)(P′AP) = P′A(PP′)AP

= P′AIAP

= P′A²P

= P′AP

(5) The latent roots of an idempotent matrix are either 0 or 1.

Proof: Let A be an idempotent matrix with an eigenvalue λ and corresponding eigenvector u.

Thus,

Au = λu

A²u = λ²u

But

A²u = Au

⇒ λ²u = λu

⇒ (λ² − λ)u = 0

Also, because u ≠ 0,

λ² − λ = 0

λ(λ − 1) = 0

⇒ λ = 0 or λ = 1

(6) The number of unit eigenvalues of an idempotent matrix is the same as its rank.

Proof: Let matrix A be idempotent with rank (A) = r. Let D be the equivalent diagonal form of A, whose diagonal elements are the eigenvalues of A. Thus, rank (D) = rank (A) = r ⇒ by property (5) above, the only nonzero diagonal elements of D are 1's, and there must be r of them.

(7) The trace of an idempotent matrix is equal to its rank.

Proof: Trace (A) = Trace (D), because the trace is invariant under the similarity transformation that produces D, and Trace (D) = r by property (6).
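A numerical check of properties (5), (6) and (7), with a hypothetical non-symmetric idempotent matrix:

    import numpy as np

    A = np.array([[1.0, 1.0], [0.0, 0.0]])       # A^2 = A: idempotent

    print(np.allclose(A @ A, A))                 # idempotent
    print(np.linalg.eigvals(A))                  # latent roots 1 and 0, property (5)
    print(np.linalg.matrix_rank(A), np.trace(A)) # rank = trace = 1, properties (6)-(7)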

(8) A general form for an idempotent matrix is A = X(Y′X)⁻¹Y′, provided that (Y′X)⁻¹ exists.

Proof:

A² = (X(Y′X)⁻¹Y′)(X(Y′X)⁻¹Y′)

= X(Y′X)⁻¹(Y′X)(Y′X)⁻¹Y′

= X(Y′X)⁻¹IY′

= X(Y′X)⁻¹Y′

(9) A general form for an idempotent symmetric matrix is A = X(X′X)⁻¹X′, provided that (X′X)⁻¹ exists.

Proof:

A² = X(X′X)⁻¹X′X(X′X)⁻¹X′

= X(X′X)⁻¹IX′

= X(X′X)⁻¹X′

References

Goldberger, A. S. 1964. Econometric Theory. John Wiley and Sons, Inc., NY.

Searle, S. R. 1982. Matrix Algebra Useful for Statistics. John Wiley and Sons, Inc., NY.

Searle, S. R. 1971. Linear Models. John Wiley and Sons, Inc., NY.
