Introduction to Diagonalisation of Matrices
Supervised By
Sikandar Mehmood
Submitted By
Faiza Rafique, Awais Ahmad and Aleena Maheen
DEPARTMENT OF MATHEMATICS
BARANI INSTITUTE OF SCIENCES
SAHIWAL CAMPUS
DEDICATED
We dedicate our project to our respected parents, who brought us from the skies to this world, and to our honorable teachers, who took us from the earth to the skies. Undoubtedly, they have enabled us to achieve our goals and to get success in life. Moreover, they have always been role models for us. We believe that success is at their feet. They have not only taught us but also inspired us.
FINAL APPROVAL
This is to certify that we have read with full attention this report, which is submitted by Faiza Rafique, Awais Ahmad and Aleena Maheen. It is our judgment that this report is of a sufficient standard to warrant its acceptance by the Barani Institute of Sciences Sahiwal for the degree of MSc (Mathematics).
Committee
1. Supervisor
(Sikandar Mehmood)
2. External Examiner
ACKNOWLEDGMENT
We would like to express our gratitude to our supervisor, Sikandar Mehmood, for his useful comments, remarks and engagement throughout the learning process of this master's thesis. Furthermore, we would like to thank the members of the Barani Institute of Sciences Sahiwal for introducing us to the topic as well as for their support along the way. We would also like to thank the participants in our project, who willingly shared their precious time during the process of development. We would like to thank all others who have supported us throughout the entire process, both by keeping us in harmony and by helping us put the pieces together. We will be grateful forever for your guidance.
DECLARATION
This project, Introduction to Diagonalisation of Matrices, has not, either as a whole or in part, been previously developed by any other person. It is further declared that we have developed this project entirely on the basis of our personal effort, made under the guidance of our project supervisor. No portion of the work presented in this report has been submitted in support of any application for any other degree or qualification of this or any other university or institute of learning.
It is further stated that the project and all its associated documents and records are submitted in partial fulfillment of the requirements for the degree of MSc Mathematics. We understand and transfer the copyright for this material to the Department of Mathematics, Barani Institute of Sciences Sahiwal.
2015-2017
Contents
1. Fundamentals                                Page No
   1.2 History                                 2
   1.5 Characterization                        7
2. Process of Diagonalisation
   2.1 Diagonalisation                         9
   2.3 Diagonalizable matrices                 10
   2.4 Diagonalization of matrices             10
   2.5 Non-diagonalizable matrices             16
   2.6 Applications                            17
Conclusion                                     22
References                                     23
Chapter 1
Fundamentals
1.1 General Introduction
Matrix diagonalization is the process of taking a square matrix and converting it into a special type of matrix, called a diagonal matrix, that shares the same fundamental properties as the underlying matrix. Matrix diagonalization is equivalent to transforming the underlying system of equations into a special set of coordinate axes in which the matrix takes this canonical form. Diagonalizing a matrix is equivalent to finding the matrix's eigenvalues, which turn out to be precisely the entries of the diagonalized matrix. Similarly, the eigenvectors make up the new set of axes corresponding to the diagonal matrix. The remarkable relationship between a diagonalized matrix, its eigenvalues and its eigenvectors follows from the eigendecomposition, a beautiful mathematical identity stating that a square matrix A can be decomposed into the very special form
A = PDP⁻¹    (1)

where P is an invertible matrix whose columns are eigenvectors of A and D is a diagonal matrix of eigenvalues. A linear system

AX = Y    (2)

then becomes

PDP⁻¹X = Y    (3)

and, as long as P is invertible, pre-multiplying both sides by P⁻¹ gives

DP⁻¹X = P⁻¹Y    (4)

Since the same linear transformation P⁻¹ is being applied to both X and Y, solving the original system is equivalent to solving the transformed system

DX′ = Y′    (5)

where X′ = P⁻¹X and Y′ = P⁻¹Y.
This provides a way to reduce a system to its canonical form, cutting the number of parameters from n×n for an arbitrary matrix to n for a diagonal matrix, and to obtain the characteristic properties of the initial matrix. This approach arises frequently in physics and engineering, where the technique is often used and is extremely powerful. The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, it is crucial in solving systems of differential equations, analyzing population growth models and calculating powers of matrices (e.g., the matrix exponential). Other areas such as physics, sociology, biology, economics and statistics have focused considerable attention on "eigenvalues" and "eigenvectors", their applications and their computation. A vector equation is equivalent to a matrix equation of the form

AX = b

where A is an m×n matrix, X is a column vector with n entries, and b is a column vector with m entries. The number of vectors in a basis for the span is then expressed as the rank of the matrix.
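As a numerical illustration of the identity A = PDP⁻¹, the following Python (NumPy) sketch, with a small matrix chosen only for demonstration, builds P from the computed eigenvectors and D from the eigenvalues, then reconstructs A:

import numpy as np

# A small diagonalizable matrix, chosen only for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are eigenvectors of A; D carries the eigenvalues.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Reconstruct A from its eigendecomposition A = P D P^(-1).
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True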
1.2 History
The history of matrix diagonalization goes back to ancient times, but the term "matrix" was not applied to the concept until 1850. "Matrix" is the Latin word for womb, and it retains that sense in English. It can also mean, more generally, any place in which something is formed or produced.
The origins of mathematical matrices lie in the study of systems of simultaneous linear equations. An important Chinese text from between 300 BC and AD 200, the Nine Chapters on the Mathematical Art (Chiu Chang Suan Shu), gives the first known example of the use of matrix methods to solve simultaneous equations. In the treatise's seventh chapter, "Too much and not enough", the concept of a determinant first appears, nearly two millennia before its supposed invention by the Japanese mathematician Seki Kowa in 1683 or his German contemporary Gottfried Leibniz (who is also credited with the invention of differential calculus, separately from but simultaneously with Isaac Newton). More uses of matrix-like arrangements of numbers appear in chapter eight, "Methods of rectangular arrays", in which a method is given for solving simultaneous equations using a counting board that is mathematically identical to the modern method of solution outlined by Carl Friedrich Gauss (1777-1855), also known as Gaussian elimination. The term "matrix" for such arrangements was introduced in 1850 by James Joseph Sylvester. Sylvester, incidentally, had a (very) brief career at the University of Virginia, which came to an abrupt end after an enraged Sylvester hit a newspaper-reading student with a sword stick and fled the country, believing he had killed the student.
In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n×n matrix is defective if and only if it does not have n linearly independent eigenvectors. A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems.
In linear algebra, an orthogonal matrix is a square matrix with real entries whose columns and rows are orthonormal vectors, i.e.,

QᵗQ = QQᵗ = I

This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to its inverse, i.e.,

Qᵗ = Q⁻¹

An orthogonal matrix Q is necessarily invertible.
1.3.5 Hermitian matrix
In mathematics, a Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries that is equal to its own conjugate transpose; that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

aᵢⱼ = āⱼᵢ

Hermitian matrices can be understood as the complex extension of real symmetric matrices. If the conjugate transpose of a matrix A is denoted by Aᵗ, then the Hermitian property can be written concisely as

A = Aᵗ

Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share with real symmetric matrices the property of always having real eigenvalues.
Example

[ 2     2+i   4
  2−i   3     i
  4     −i    1 ]
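As a quick check (a NumPy sketch, with the library assumed available), one can confirm that the example matrix equals its own conjugate transpose and, as Hermite's result predicts, has real eigenvalues:

import numpy as np

# The Hermitian example matrix above.
A = np.array([[2, 2 + 1j, 4],
              [2 - 1j, 3, 1j],
              [4, -1j, 1]])

# A equals its own conjugate transpose ...
print(np.allclose(A, A.conj().T))  # True

# ... and its eigenvalues are real.
print(np.linalg.eigvalsh(A))       # three real numbers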
1.3.6 Nilpotent matrix
A nilpotent matrix is a square matrix N such that

Nᵏ = 0

for some positive integer k. The smallest such k is sometimes called the degree of N.
The matrix

M = [ 0 1
      0 0 ]

is nilpotent, since M² = 0. More generally, any triangular matrix with 0s along the main diagonal is nilpotent. For example, the matrix
N = [ 0 2 1 6
      0 0 1 2
      0 0 0 3
      0 0 0 0 ]

is nilpotent, with

N² = [ 0 0 2 7
       0 0 0 3
       0 0 0 0
       0 0 0 0 ]

N³ = [ 0 0 0 6
       0 0 0 0
       0 0 0 0
       0 0 0 0 ]

N⁴ = 0
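These powers are easy to confirm numerically; a short NumPy sketch using the matrix N above:

import numpy as np
from numpy.linalg import matrix_power

# The strictly upper triangular matrix N from the example above.
N = np.array([[0, 2, 1, 6],
              [0, 0, 1, 2],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

# Each power pushes the nonzero entries one diagonal further out;
# N^4 is the zero matrix.
for k in range(2, 5):
    print(matrix_power(N, k))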
A homogeneous linear system X′ = AX, where X = (x₁, x₂, x₃, ⋯, xₙ)ᵗ and each xᵢ′ is expressed as a linear combination of x₁, x₂, x₃, ⋯, xₙ, is said to be coupled. If the coefficient matrix A is diagonalizable, then the system can be uncoupled, in that each xᵢ′ can be expressed solely in terms of xᵢ.
The matrix exponential of a square matrix X is defined by the power series

e^X = ∑_{k=0}^∞ (1/k!) Xᵏ

The above series always converges, so the exponential of X is well-defined. If X is a 1×1 matrix, the matrix exponential of X is a 1×1 matrix whose single element is the ordinary exponential of the single element of X.
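For a diagonalizable X = PDP⁻¹ the series collapses to e^X = P e^D P⁻¹, where e^D simply exponentiates the diagonal entries. The following NumPy sketch (the example matrix is our own choice, purely for illustration) compares this shortcut against a direct partial sum of the defining series:

import numpy as np

# An arbitrary diagonalizable matrix, chosen only for illustration.
X = np.array([[1.0, 1.0],
              [2.0, 2.0]])

# If X = P D P^(-1), then e^X = P e^D P^(-1).
w, P = np.linalg.eig(X)
expX = P @ np.diag(np.exp(w)) @ np.linalg.inv(P)

# Compare against a partial sum of the defining power series.
series, term = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ X / k
    series = series + term
print(np.allclose(expX, series))  # True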
A diagonal matrix is a square matrix with all non-diagonal elements equal to 0. A diagonal matrix is completely defined by its diagonal elements.
Example:
The matrix

[ 9 0 0
  0 8 0
  0 0 6 ]

is denoted by diag(9, 8, 6).
Example:
Let

A = [ 1 1
      2 2 ]

Then

A − λI = [1 1; 2 2] − λ[1 0; 0 1]

and the characteristic polynomial is

|A − λI| = | 1−λ   1  |
           |  2   2−λ |
         = (1−λ)(2−λ) − 2
         = λ² − 3λ + 2 − 2
         = λ² − 3λ

So the eigenvalues are the solutions of λ² − 3λ = 0. To solve this, simply observe that the equation is λ(λ − 3) = 0, with solutions λ = 0 and λ = 3. Hence the eigenvalues of A are 0 and 3.
To find an eigenvector for the eigenvalue λ, we have to find a solution to (A − λI)x = 0 other than the zero vector. This is easy, since for a particular value of λ all we need to do is solve a simple linear system. We illustrate by finding the eigenvectors for the matrix of the example just given.
Example
We find the eigenvectors of

A = [ 1 1
      2 2 ]

We have seen that the eigenvalues are 0 and 3. To find an eigenvector for the eigenvalue 0 we solve the system (A − 0I)x = 0, that is, Ax = 0, or

[1 1; 2 2][x₁; x₂] = [0; 0]

i.e., x₁ + x₂ = 0 and 2x₁ + 2x₂ = 0, so we may take

x = [ 1
     −1 ]

Similarly, for the eigenvalue 3 we solve (A − 3I)x = 0, that is, −2x₁ + x₂ = 0, which gives the eigenvector

x = [ 1
      2 ]
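The hand computation can be checked with NumPy; note that numerical routines return unit-norm eigenvectors, so each column must be rescaled to compare with the vectors found above:

import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

# Columns of V are unit-norm eigenvectors for the eigenvalues in w.
w, V = np.linalg.eig(A)
print(w)  # 3 and 0, possibly in the other order

# Rescaling each column so its first entry is 1 recovers the
# hand-computed eigenvectors: (1, 2) for 3 and (1, -1) for 0.
for lam, v in zip(w, V.T):
    print(lam, v / v[0])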
1.5 Characterization
The fundamental fact about diagonalizable maps and matrices is expressed by the following: an n×n matrix A over the field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to n, which is the case if and only if there exists a basis of Fⁿ consisting of eigenvectors of A. If such a basis has been found, one can form the matrix P having these basis vectors as columns, and P⁻¹AP will be a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of A. A linear map T: V → V is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to dim(V), which is the case if and only if there exists a basis of V consisting of eigenvectors of T; with respect to such a basis, T will be represented by a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of T. Another characterization: a matrix or linear map is diagonalizable over the field F if and only if its minimal polynomial is the product of distinct linear factors over F. (Put another way, a matrix is diagonalizable if and only if all of its elementary divisors are linear.) The following sufficient (but not necessary) condition is often useful. A matrix of order n×n is diagonalizable over a field F if it has n distinct eigenvalues in F, i.e., if its characteristic polynomial has n distinct roots in F; however, the converse may be false. A linear map T: V → V with n = dim(V) is diagonalizable if it has n distinct eigenvalues, i.e., if its characteristic polynomial has n distinct roots in F.
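These criteria can be explored with SymPy (assumed available). The sketch below factors the characteristic polynomial of the earlier 2×2 example and tests diagonalizability, contrasting it with a defective matrix:

from sympy import Matrix, symbols, factor

lam = symbols('lam')

# The 2x2 example from Chapter 1: distinct roots, hence diagonalizable.
A = Matrix([[1, 1], [2, 2]])
print(factor(A.charpoly(lam).as_expr()))  # lam*(lam - 3)
print(A.is_diagonalizable())              # True

# A defective matrix: its minimal polynomial is lam**2, which is not
# a product of distinct linear factors, so it is not diagonalizable.
C = Matrix([[0, 1], [0, 0]])
print(C.is_diagonalizable())              # False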
Chapter 2
Process of Diagonalisation
2.1 Diagonalisation
A square matrix A is diagonalizable if there exists an invertible matrix P such that

P⁻¹AP = [ λ₁ 0  ⋯ 0
          0  λ₂ ⋯ 0
          ⋮       ⋱
          0  0  ⋯ λₙ ]

Then

AP = P [ λ₁ 0  ⋯ 0
         0  λ₂ ⋯ 0
         ⋮       ⋱
         0  0  ⋯ λₙ ]

Writing P = (v₁, v₂, ⋯, vₙ) in terms of its column vectors, the above equation can be rewritten as

Avᵢ = λᵢvᵢ, where i = 1, 2, 3, ⋯, n

So the column vectors of P are right eigenvectors of A, and the corresponding diagonal entries are the corresponding eigenvalues. The invertibility of P also implies that the eigenvectors are linearly independent and form a basis of Fⁿ. This is the necessary and sufficient condition for diagonalizability and the canonical approach of diagonalization. The row vectors of P⁻¹ are the left eigenvectors of A.
Involutions are diagonalizable over the reals (and indeed over any field of characteristic not 2), with ±1 on the diagonal. Finite-order endomorphisms are diagonalizable over ℂ (or any algebraically closed field where the characteristic of the field does not divide the order of the endomorphism), with roots of unity on the diagonal. This follows since the minimal polynomial is separable, because the roots of unity are distinct. Projections are diagonalizable, with 0s and 1s on the diagonal.
Real symmetric matrices are diagonalizable by orthogonal matrices; i.e., given a real symmetric matrix A, QᵗAQ is diagonal for some orthogonal matrix Q. More generally, matrices are diagonalizable by unitary matrices if and only if they are normal. In the case of a real symmetric matrix, we see that A = Aᵗ, so clearly AAᵗ = AᵗA holds. Examples of normal matrices are the real symmetric (or skew-symmetric) matrices and the Hermitian (or skew-Hermitian) matrices.
Consider the matrix

A = [ 1  2 0
      0  3 0
      2 −4 2 ]

This matrix has eigenvalues λ₁ = 3, λ₂ = 2 and λ₃ = 1, the roots of its characteristic polynomial (1−λ)(2−λ)(3−λ). Solving Avₖ = λₖvₖ for the corresponding eigenvectors vₖ and taking them as the columns of P gives

P = [ −1 0 −1
      −1 0  0
       2 1  2 ]

Note there is no preferred order of the eigenvectors in P; changing the order of the eigenvectors in P just changes the order of the eigenvalues in the diagonalized form of A [3]. Then A is diagonalizable, as a simple computation confirms, having calculated P⁻¹ using any suitable method:

P⁻¹AP = [ 3 0 0
          0 2 0
          0 0 1 ]
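A short NumPy check of this computation:

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 0.0],
              [2.0, -4.0, 2.0]])
P = np.array([[-1.0, 0.0, -1.0],
              [-1.0, 0.0, 0.0],
              [2.0, 1.0, 2.0]])

# P^(-1) A P recovers diag(3, 2, 1), matching the column order of P.
print(np.round(np.linalg.inv(P) @ A @ P, 10))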
Example 1
Consider the matrix

A = [ −1 −1  1
       0 −2  1
       0  0 −1 ]

whose eigenvalues are −1 (with algebraic multiplicity 2) and −2. In order to find out whether A is diagonalizable, we only need to concentrate our attention on the eigenvalue −1. Indeed, the eigenvectors associated to −1 are given by the system

(A + I)X = [ 0 −1 1
             0 −1 1
             0  0 0 ] X = 0

which reduces to y = z with x free, so we obtain two linearly independent eigenvectors, (1, 0, 0) and (0, 1, 1). The eigenvectors associated to −2 are given by the system

(A + 2I)X = [ 1 −1 1
              0  0 1
              0  0 1 ] X = 0

that is,

{ x − y = 0
  z = 0

Set x = α; then we have

X = [ x     [ α       [ 1
      y   =   α   = α   1
      z ]     0 ]       0 ]
Set

P = [ 1 0 1
      0 1 1
      0 1 0 ]

Then

P⁻¹AP = [ −1  0  0
           0 −1  0
           0  0 −2 ]

But if we set

P = [ 1 0 1
      1 1 0
      0 1 0 ]

then

P⁻¹AP = [ −2  0  0
           0 −1  0
           0  0 −1 ]
❑
We have seen that if A and B are similar,then ❑ can be expressed easily in term of B. indeed,if we have
−1 n −1 n n
A=P BP ,,then we have A =P B 8. In particular, if D is a diagonal matrix, D is easy to evaluate. This is
an application of diagonalization. In fact, above procedure may be used to find the square root and cubic
root of a matrix. Indeed, consider the matrix above
A=¿
Set

P = [ 1 0 1
      1 1 0
      0 1 0 ]

Then

P⁻¹AP = D = [ −2  0  0
               0 −1  0
               0  0 −1 ]

Hence A = PDP⁻¹. Set

B = P [ −∛2  0  0
         0  −1  0
         0   0 −1 ] P⁻¹

Then we have

B³ = PDP⁻¹ = A
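The cube-root construction is easy to verify numerically. The following NumPy sketch recomputes D = P⁻¹AP, takes real cube roots of its diagonal entries, and confirms that the resulting B satisfies B³ = A:

import numpy as np

A = np.array([[-1.0, -1.0, 1.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -1.0]])
P = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0]])

# D = P^(-1) A P = diag(-2, -1, -1); take real cube roots of its
# diagonal and transform back to obtain a cube root B of A.
D = np.linalg.inv(P) @ A @ P
B = P @ np.diag(np.cbrt(np.diag(D))) @ np.linalg.inv(P)

print(np.allclose(B @ B @ B, A))  # True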
Example 2
The matrix A has eigenvalues 1, 2 and −1 with corresponding eigenvectors (1, 1, 0), (1, 2, 1) and (0, 1, 2). Find A and compute A⁵.
Solution:
Since A has three distinct eigenvalues, it is diagonalizable, and thus A = TDT⁻¹, where D is a diagonal matrix having the eigenvalues of A on the main diagonal and T has the eigenvectors of A as its columns:

D = [ 1 0  0
      0 2  0
      0 0 −1 ]

T = [ 1 1 0
      1 2 1
      0 1 2 ]

We compute T⁻¹ by row-reducing the augmented matrix (T | I): subtract the 1st row from the 2nd row and the 2nd row from the 3rd row; then subtract the 3rd row from the 2nd row and the 2nd row from the 1st row. This yields

T⁻¹ = [ 3 −2  1
       −2  2 −1
        1 −1  1 ]

Hence

A = TDT⁻¹ = [ −1 2 −1
              −6 7 −4
              −6 6 −4 ]

Now

A⁵ = TD⁵T⁻¹ = [ −61   62 −31
                −126 127 −64
                −66   66 −34 ]
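A NumPy verification of both results:

import numpy as np
from numpy.linalg import inv, matrix_power

T = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
D = np.diag([1.0, 2.0, -1.0])

A = T @ D @ inv(T)
print(np.round(A))   # [[-1, 2, -1], [-6, 7, -4], [-6, 6, -4]]

# T D^5 T^(-1) agrees with multiplying A out five times.
A5 = T @ matrix_power(D, 5) @ inv(T)
print(np.allclose(A5, matrix_power(A, 5)))  # True
print(np.round(A5))  # [[-61, 62, -31], [-126, 127, -64], [-66, 66, -34]]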
In general, a rotation matrix is not diagonalizable over the reals, but all rotation matrices are diagonalizable over the complex field. Even if a matrix is not diagonalizable, it is always possible to "do the best one can" and find a matrix with the same properties, consisting of the eigenvalues on the leading diagonal and either ones or zeroes on the superdiagonal; this is known as the Jordan normal form.
Some matrices are not diagonalizable over any field, most notably nonzero nilpotent matrices. This happens more generally if the algebraic and geometric multiplicities of an eigenvalue do not coincide. For instance, consider

C = [ 0 1
      0 0 ]

This matrix is not diagonalizable: its only eigenvalue, 0, has algebraic multiplicity 2 but geometric multiplicity 1.
Some real matrices are not diagonalizable over the reals. Consider, for instance, the rotation matrix

B = [ 0 −1
      1  0 ]

The matrix B does not have any real eigenvalues, so there is no real matrix Q such that Q⁻¹BQ is diagonal. However, we can diagonalize B if we allow complex numbers. Indeed, if we take

Q = [ 1 i
      i 1 ]

then Q⁻¹BQ is diagonal.
Note that the above examples show that the sum of diagonalizable matrices need not be diagonalizable.
Diagonalization can be used to compute the powers of a matrix A efficiently, provided the matrix is diagonalizable: if A = PDP⁻¹, then

Aᵏ = (PDP⁻¹)ᵏ = PD(P⁻¹P)D(P⁻¹P)⋯(P⁻¹P)DP⁻¹ = PDᵏP⁻¹

and the latter is easy to calculate, since it involves only the powers of a diagonal matrix. This approach can be generalized to the matrix exponential and other matrix functions, since they can be defined as power series. This is particularly useful in finding closed-form expressions for the terms of linear recursive sequences.
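As one illustration of the remark on linear recursive sequences (the Fibonacci recurrence is our own choice, for demonstration), the companion matrix M = [1 1; 1 0] satisfies Mⁿ = [F(n+1) F(n); F(n) F(n−1)], so F(n) can be read off after diagonalizing M once:

import numpy as np

# Companion matrix of the Fibonacci recurrence F(n) = F(n-1) + F(n-2).
M = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Diagonalize once; then M^n = P D^n P^(-1) for every n.
w, P = np.linalg.eig(M)
P_inv = np.linalg.inv(P)

def fib(n):
    # The (0, 1) entry of M^n is the n-th Fibonacci number.
    Mn = P @ np.diag(w ** n) @ P_inv
    return round(Mn[0, 1])

print([fib(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]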
Consider, for instance, the matrix

M = [ a  b−a
      0  b   ]

Let u = (1, 0)ᵗ and v = (1, 1)ᵗ, so that e₁ = u and e₂ = v − u. Then

Mu = au,   Mv = bv

Thus, a and b are the eigenvalues corresponding to u and v respectively. By linearity of matrix multiplication, we have

Mⁿu = aⁿu,   Mⁿv = bⁿv

and therefore

Mⁿe₁ = aⁿe₁,   Mⁿe₂ = Mⁿ(v − u) = bⁿv − aⁿu = (bⁿ − aⁿ)e₁ + bⁿe₂

The preceding relations, expressed in matrix form, are

Mⁿ = [ aⁿ  bⁿ−aⁿ
       0   bⁿ    ]
2.6.1 Application to differential equations
Consider the system of differential equations

Y₁′ = Y₁ − Y₂ + 2Y₃
Y₂′ = 3Y₁ + 4Y₃
Y₃′ = 2Y₁ + Y₂

with coefficient matrix

[A] = [ 1 −1 2
        3  0 4
        2  1 0 ]

The eigenvalues of [A] are 1.303, −2.303 and 2. The corresponding eigenvectors are the columns of the modal matrix

[M] = [ −0.172 −0.557 0
         0.891 −0.467 0.894
         0.42   0.687 0.447 ]
Substituting [Y] = [M][Z] uncouples the system into [Ż] = [λ][Z], where

[λ] = [M]⁻¹[A][M] = [ 1.303    0     0
                      0     −2.303   0
                      0        0     2 ]

Therefore [Ż] = [λ][Z] can be written as

ż₁ = 1.303z₁,   ż₂ = −2.303z₂,   ż₃ = 2z₃
These equations can be solved easily:

z₁ = a e^(1.303t),   z₂ = b e^(−2.303t),   z₃ = c e^(2t)

Therefore,

[Z] = [ a e^(1.303t)
        b e^(−2.303t)
        c e^(2t)      ]

Since [Y] = [M][Z],

[Y] = [ −0.172 −0.557 0        [ a e^(1.303t)
         0.891 −0.467 0.894      b e^(−2.303t)
         0.42   0.687 0.447 ]    c e^(2t)      ]
In terms of components, the general solution is

y₁ = −0.172a e^(1.303t) − 0.557b e^(−2.303t)
y₂ = 0.891a e^(1.303t) − 0.467b e^(−2.303t) + 0.894c e^(2t)
y₃ = 0.42a e^(1.303t) + 0.687b e^(−2.303t) + 0.447c e^(2t)

The arbitrary constants a, b and c can be found if an initial condition is given. For the initial condition

Y(0) = [0 0 1]ᵗ

we obtain

−0.172a − 0.557b = 0
0.891a − 0.467b + 0.894c = 0
0.42a + 0.687b + 0.447c = 1

Therefore, the constants a, b and c are −3.228, 0.997 and 3.738 respectively.
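This application can be reproduced numerically. The sketch below recomputes the eigenvalues and modal matrix with NumPy and solves [M](a, b, c)ᵗ = Y(0) for the constants; the values agree with those above up to the ordering and sign conventions the solver uses for eigenvectors:

import numpy as np

A = np.array([[1.0, -1.0, 2.0],
              [3.0, 0.0, 4.0],
              [2.0, 1.0, 0.0]])

# Eigenvalues and modal matrix of unit-norm eigenvectors.
w, M = np.linalg.eig(A)
print(np.round(w, 3))  # 1.303, -2.303 and 2, in some order

# Y(0) = (0, 0, 1)^T fixes the constants via [M](a, b, c)^T = Y(0).
Y0 = np.array([0.0, 0.0, 1.0])
print(np.round(np.linalg.solve(M, Y0), 3))
# Roughly -3.228, 0.997 and 3.738, up to eigenvector ordering and sign.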
2.6.2 Quantum mechanical application
References
[5] E. Artin, Geometric Algebra, Interscience, New York, 1957.
[6] http://www.maths.ed.ac.uk/~imf/teaching/MT3/diagonalization of matrices.pdf