
PIR MEHAR ALI SHAH
ARID AGRICULTURE UNIVERSITY
RAWALPINDI

Introduction to Diagonalisation of Matrices

Final Project (MSc Math Fall 2015)

Supervised By
Sikandar Mehmood

Submitted By

15-ARID-4087 Faiza Rafique
15-ARID-4085 Awais Ahmad
15-ARID-4082 Aleena Maheen

DEPARTMENT OF MATHEMATICS
BARANI INSTITUTE OF SCIENCES
SAHIWAL CAMPUS

DEDICATION
We dedicate our project to our respected parents, who brought us from the skies to this
world, and to our honorable teachers, who took us from the earth to the skies. Undoubtedly,
they enable us to achieve any goal and to succeed in life. Moreover, they have always been
role models for us. We believe that success lies at their feet. They have not only taught us but
also inspired us.
FINAL APPROVAL

This is to certify that we have read this report, submitted by Faiza Rafique, Awais
Ahmad and Aleena Maheen, with full attention. It is our judgment that this report is of a
sufficient standard to warrant its acceptance by Barani Institute of Sciences Sahiwal for
the degree of MSc (Math).

Committee

1. Supervisor
(Sikandar Mehmood)

2. External Examiner
ACKNOWLEDGMENT
We would like to express our gratitude to our supervisor, Sikandar Mehmood,
for his useful comments, remarks and engagement throughout the learning process of this
master's thesis. Furthermore, we would like to thank the members of Barani Institute of
Sciences Sahiwal for introducing us to the topic as well as for their support along the way.
We would also like to thank the participants in our project, who willingly shared their
precious time during the development process. We thank all others who supported us
throughout the entire process, both by keeping us harmonious and by helping us put the
pieces together. We will be forever grateful for your guidance.
DECLARATION
This project, Introduction to Diagonalisation of Matrices, has not, either as a
whole or in part, previously been developed by any person. It is further declared
that we have developed this project and its documentation entirely on the basis
of our personal effort, made under the guidance of our project supervisor. No
portion of the work presented in this report has been submitted in support of any
application for any other degree or qualification at this or any other university or
institute of learning.
It is further stated that the project and all its associated documents and
records are submitted in partial fulfillment of the MSc Mathematics degree. We
understand and transfer the copyright for this material to the Department of
Mathematics, Barani Institute of Sciences Sahiwal.

Faiza Rafique 15-ARID-4087

Awais Ahmad 15-ARID-4085
Aleena Maheen 15-ARID-4082

2015-2017
Contents

1. Fundamentals

1.1 General introduction

1.2 History

1.3 Basic definitions

1.4 Eigenvalues and eigenvectors

1.5 Characterization

2. Process of diagonalization

2.1 Diagonalization

2.2 Simultaneous diagonalization

2.3 Diagonalizable matrices

2.4 Process of diagonalization of a matrix

2.5 Non-diagonalizable matrices

2.6 Applications

Conclusion

References
Chapter 1

Fundamentals
1.1 General Introduction
Matrix diagonalization is the process of taking a square matrix and converting it into
a special type of matrix, called a diagonal matrix, that shares the same fundamental properties as
the underlying matrix. Matrix diagonalization is equivalent to transforming the underlying system
of equations into a special set of coordinate axes in which the matrix takes this canonical form.
Diagonalizing a matrix is equivalent to finding the matrix's eigenvalues, which turn out to be precisely
the entries of the diagonalized matrix. Similarly, the eigenvectors make up the new set of axes
corresponding to the diagonal matrix. The remarkable relationship between a diagonalized
matrix, its eigenvalues and its eigenvectors follows from a beautiful mathematical identity, the
eigendecomposition, which states that a square matrix A can be decomposed into the very special form

A = PDP⁻¹ (1)

where P is a matrix composed of the eigenvectors of A, D is the diagonal matrix constructed
from the corresponding eigenvalues, and P⁻¹ is the matrix inverse of P. According to the
eigendecomposition theorem, an initial matrix equation

AX = Y (2)

can always be written as

PDP⁻¹X = Y (3)

at least as long as P is a square matrix. Pre-multiplying both sides by P⁻¹, we get

DP⁻¹X = P⁻¹Y (4)

Since the same linear transformation P⁻¹ is being applied to both X and Y, solving the original
system is equivalent to solving the transformed system

DX′ = Y′ (5)

where X′ = P⁻¹X and Y′ = P⁻¹Y.

This provides a way to reduce a system to its simplest canonical form, to reduce the number of
parameters from n×n for an arbitrary matrix to n for a diagonal matrix, and to obtain the
characteristic properties of the initial matrix. This approach arises frequently in physics and
engineering, where the technique is often used and extremely powerful. The eigenvalue problem is a
problem of considerable theoretical interest and wide-ranging application. For example, it is crucial
in solving systems of differential equations, analyzing population growth models and calculating
powers of matrices (e.g., the matrix exponential). Other areas such as physics, sociology, biology,
economics and statistics have focused considerable attention on eigenvalues and eigenvectors,
their applications and their computation. The vector equation is equivalent to a matrix equation of
the form

AX = b

where A is an m×n matrix, X is a column vector with n entries, and b is a column vector with m
entries.
The number of vectors in a basis for the span is then expressed as the rank of the matrix.
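
As a quick numerical illustration of equation (1), the following Python sketch (using NumPy; the matrix here is an arbitrary illustrative choice, not one taken from this report) computes an eigendecomposition and checks that A = PDP⁻¹:

import numpy as np

# An arbitrary diagonalizable matrix (illustrative choice).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose
# columns are the corresponding eigenvectors.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Verify the identity A = P D P^(-1) from equation (1).
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True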

1.2 History

The history of matrix diagonalization goes back to ancient times, but the term "matrix"
was not applied to the concept until 1850. "Matrix" is the Latin word for womb, and it retains that
sense in English. It can also mean, more generally, any place in which something is formed or
produced.

The origins of mathematical matrices lie with the study of systems of simultaneous linear equations. An
important Chinese text from between 300 BC and AD 200, the Nine Chapters on the Mathematical Art
(Chiu Chang Suan Shu), gives the first known example of the use of matrix methods to solve
simultaneous equations. In the treatise's seventh chapter, "Too much and not enough", the concept
of a determinant first appears, nearly two millennia before its supposed invention by the Japanese
mathematician Seki Kowa in 1683 or his German contemporary Gottfried Leibniz (who is also
credited with the invention of differential calculus, separately from but simultaneously with Isaac
Newton). More uses of matrix-like arrangements of numbers appear in chapter eight, "Methods of
rectangular arrays", in which a method is given for solving simultaneous equations using a counting
board that is mathematically identical to the modern method of solution outlined by Carl Friedrich
Gauss (1777-1855), also known as Gaussian elimination. The term "matrix" for such arrangements
was introduced in 1850 by James Joseph Sylvester. Sylvester, incidentally, had a (very) brief career at
the University of Virginia, which came to an abrupt end after an enraged Sylvester hit a
newspaper-reading student with a sword stick and fled the country, believing he had killed the student.

1.3 Basic Definitions

1.3.1 Defective matrix

In linear algebra, a defective matrix is a square matrix that does not have a complete
basis of eigenvectors, and is therefore not diagonalizable. In particular, an n×n matrix is defective
if and only if it does not have n linearly independent eigenvectors. A complete basis is formed by
augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving
defective systems of ordinary differential equations and other problems.

1.3.2 Scaling (geometry)

Scaling is a linear transformation that enlarges (increases) or shrinks (diminishes) objects
by a scale factor that is the same in all directions. The result of uniform scaling is similar (in
the geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent
shapes are also classed as similar. Uniform scaling happens, for example, when enlarging or
reducing a photograph, or when creating a scale model of a building, car, airplane, etc.

1.3.3 Triangular matrix


A triangular matrix is a special kind of square matrix. A square matrix is called lower
triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called
upper triangular if all the entries below the main diagonal are zero. A triangular matrix is one
that is either lower triangular or upper triangular. A matrix that is both upper and lower
triangular is called a diagonal matrix.

1.3.4 Orthogonal matrix

In linear algebra, an orthogonal matrix is a square matrix with real entries whose
columns and rows are orthogonal unit vectors (i.e., orthonormal vectors), i.e.,

QᵀQ = QQᵀ = I

where I is the identity matrix.

This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to
its inverse, i.e., Qᵀ = Q⁻¹.
An orthogonal matrix Q is necessarily invertible.

1.3.5 Hermitian matrix

In mathematics, a Hermitian matrix (or self-adjoint matrix) is a square matrix with
complex entries that is equal to its own conjugate transpose; that is, the element in the i-th row
and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column,
for all indices i and j:

aᵢⱼ = a̅ⱼᵢ or A = Aᴴ

Hermitian matrices can be understood as the complex extension of real symmetric matrices. If
the conjugate transpose of a matrix A is denoted by Aᴴ, then the Hermitian property can be
written concisely as

A = Aᴴ

Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of
this form share a property with real symmetric matrices of always having real eigenvalues.

Example

[ 2     2+i   4 ]
[ 2−i   3     i ]
[ 4     −i    1 ]

1.3.6 Nilpotent matrix

In linear algebra, a nilpotent matrix is a square matrix N such that

Nᵏ = 0

for some positive integer k. The smallest such k is sometimes called the degree of N.

The matrix

M = [ 0  1 ]
    [ 0  0 ]

is nilpotent, since M² = 0. More generally, any triangular matrix with 0s along the main diagonal is
nilpotent. For example, the matrix

N = [ 0  2  1  6 ]
    [ 0  0  1  2 ]
    [ 0  0  0  3 ]
    [ 0  0  0  0 ]

is nilpotent, with

N² = [ 0  0  2  7 ]     N³ = [ 0  0  0  6 ]     N⁴ = 0
     [ 0  0  0  3 ]          [ 0  0  0  0 ]
     [ 0  0  0  0 ]          [ 0  0  0  0 ]
     [ 0  0  0  0 ]          [ 0  0  0  0 ]
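
These powers are easy to check numerically (a sketch; NumPy is an assumption of the example, not a tool used in this report):

import numpy as np

N = np.array([[0, 2, 1, 6],
              [0, 0, 1, 2],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

# Each successive power pushes the nonzero entries one diagonal
# further up, so this 4x4 matrix vanishes at the fourth power.
for k in (2, 3, 4):
    print(k, np.linalg.matrix_power(N, k).tolist())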

1.3.7 Coupled system

A homogeneous linear system X′ = AX, where X = (x₁, x₂, x₃, ⋯, xₙ)ᵀ, in which each xᵢ′ is expressed
as a linear combination of x₁, x₂, x₃, ⋯, xₙ, is said to be coupled. If the coefficient matrix A is
diagonalizable, then the system can be uncoupled, so that each equation involves only one of the
unknown functions.

1.3.8 Matrix exponential

In mathematics, the matrix exponential is a matrix function on square matrices analogous to
the ordinary exponential function. Abstractly, the matrix exponential gives the connection between
a matrix Lie algebra and the corresponding Lie group.
Suppose that X is an n×n real or complex matrix. The exponential of X, denoted by eˣ or exp(X), is the
n×n matrix given by the power series

eˣ = Σₖ₌₀^∞ (1/k!) Xᵏ

The above series always converges, so the exponential of X is well-defined. If X is a 1×1 matrix, the
matrix exponential of X is a 1×1 matrix whose single element is the ordinary exponential of the single
element of X.
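
As a numerical illustration (a sketch; SciPy's expm routine is an assumption, not something used in this report), for a diagonal matrix the series reduces to exponentiating the diagonal entries:

import numpy as np
from scipy.linalg import expm

# exp(diag(d1, ..., dn)) = diag(e^d1, ..., e^dn), which makes a
# diagonal matrix a convenient test case for the series definition.
X = np.diag([1.0, 2.0])
print(expm(X))             # [[e, 0], [0, e^2]]
print(np.exp([1.0, 2.0]))  # matches the diagonal entries above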

1.3.9 Diagonal matrix

A diagonal matrix is a square matrix with all the non-diagonal elements 0. A diagonal matrix is
completely defined by its diagonal elements.

Example:

The matrix

[ 9  0  0 ]
[ 0  8  0 ]
[ 0  0  6 ]

is denoted by diag(9, 8, 6).

1.3.10 Diagonalizable matrix

The matrix A is diagonalizable if it is similar to a diagonal matrix, in other words: if there is a
diagonal matrix D and an invertible matrix P such that P⁻¹AP = D. If V is a finite-dimensional vector
space, then a linear mapping T: V → V is called diagonalizable if there exists an ordered basis of V with
respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a
corresponding diagonal matrix for a diagonalizable matrix or linear mapping. A square matrix that is
not diagonalizable is called defective. Diagonalizable matrices and maps are of interest because
diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known, and
one can raise a diagonal matrix to a power by simply raising the diagonal entries to that same power.
1.4 Eigenvalues and eigenvectors

Suppose that A is a square matrix. The number λ is said to be an eigenvalue of the matrix A if,
for some non-zero vector X, AX = λX. Any non-zero vector X for which this equation holds is called an
eigenvector for eigenvalue λ, or an eigenvector of the matrix A corresponding to eigenvalue λ.

1.4.1 Process of finding eigenvalues and eigenvectors

To determine whether λ is an eigenvalue of A, we need to determine whether there exist
any non-zero solutions to the matrix equation AX = λX. Note that the matrix equation AX = λX is not of
the standard form, since the right-hand side is not a fixed vector b, but depends explicitly on X.
However, we can rewrite it in standard form. Note that λX = λIX, where I is, as usual, the identity matrix.
So the equation AX = λX is equivalent to AX − λIX = 0, which is equivalent to (A − λI)X = 0.

Now, a square linear system BX = 0 has solutions other than X = 0 precisely when |B| = 0.
Therefore, taking B = A − λI, λ is an eigenvalue if and only if the determinant of the matrix A − λI is
zero. This determinant, p(λ) = |A − λI|, is known as the characteristic polynomial of A, since it is a
polynomial in the variable λ. To find the eigenvalues, we solve the equation |A − λI| = 0.

Example:

Let

A = [ 1  1 ]
    [ 2  2 ]

Then

A − λI = [ 1  1 ] − λ [ 1  0 ] = [ 1−λ   1  ]
         [ 2  2 ]     [ 0  1 ]   [ 2    2−λ ]

and the characteristic polynomial is

|A − λI| = (1−λ)(2−λ) − 2
         = λ² − 3λ + 2 − 2
         = λ² − 3λ

So the eigenvalues are the solutions of λ² − 3λ = 0. To solve this, simply observe that the
equation is λ(λ − 3) = 0, with solutions λ = 0 and λ = 3. Hence the eigenvalues of A are 0 and 3.
To find an eigenvector for the eigenvalue λ, we have to find a solution to (A − λI)X = 0 other than the
zero vector. This is easy, since for a particular value of λ all we need to do is solve a simple linear
system. We illustrate by finding the eigenvectors for the matrix of the example just given.

Example

We find the eigenvectors of

A = [ 1  1 ]
    [ 2  2 ]

We have seen that the eigenvalues are 0 and 3. To find an eigenvector for eigenvalue 0 we solve the
system (A − 0I)X = 0, that is, AX = 0, or

[ 1  1 ] [ x₁ ] = [ 0 ]
[ 2  2 ] [ x₂ ]   [ 0 ]

x₁ + x₂ = 0, 2x₁ + 2x₂ = 0

Clearly both equations are equivalent. We obtain x₁ = −x₂.

Let us take x₂ = 1; then x₁ = −x₂ = −1.
The eigenvectors for eigenvalue 0 are then the non-zero multiples of

X = [ −1 ]
    [  1 ]

Similarly, an eigenvector for eigenvalue λ = 3 is

X = [ 1 ]
    [ 2 ]
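
These hand computations can be checked numerically (a sketch; NumPy is an assumption of the example, not a tool used in this report):

import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

# The eigenvalues should be 0 and 3, as derived above
# (NumPy may return them in either order).
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)

# Each column is a normalized eigenvector, i.e. a scalar multiple
# of the hand-computed vectors (-1, 1) and (1, 2).
print(eigenvectors)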
1.5 Characterization

The fundamental fact about diagonalizable maps and matrices is expressed by the following. An
n×n matrix A over the field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces
is equal to n, which is the case if and only if there exists a basis of Fⁿ consisting
of eigenvectors of A. If such a basis has been found, one can form the matrix P having these basis vectors
as columns, and P⁻¹AP will be a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of
A. A linear map T: V → V is diagonalizable if and only if the sum of the dimensions of its eigenspaces is
equal to dim(V), which is the case if and only if there exists a basis of V consisting of eigenvectors of T;
with respect to such a basis, T is represented by a diagonal matrix. The diagonal entries of this matrix are
the eigenvalues of T. Another characterization: a matrix or linear map is diagonalizable over the field F if
and only if its minimal polynomial is the product of distinct linear factors over F. (Put another way, a
matrix is diagonalizable if and only if all of its elementary divisors are linear.) The following sufficient (but
not necessary) condition is often useful: a matrix of order n×n is diagonalizable over a field F if it has n
distinct eigenvalues in F, i.e., if its characteristic polynomial has n distinct roots in F; however, the
converse may be false. A linear map T: V → V with n = dim(V) is diagonalizable if it has n distinct
eigenvalues, i.e., if its characteristic polynomial has n distinct roots in F.
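
These criteria can be tested in software; SymPy, for instance, exposes a diagonalizability check (a sketch; the use of SymPy is an assumption, not part of this report):

from sympy import Matrix

# Two distinct eigenvalues (0 and 3), so the sufficient condition
# above applies and the matrix is diagonalizable.
print(Matrix([[1, 1], [2, 2]]).is_diagonalizable())  # True

# A nonzero nilpotent matrix: its minimal polynomial is λ², which is
# not a product of distinct linear factors, so it is defective.
print(Matrix([[0, 1], [0, 0]]).is_diagonalizable())  # False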
Chapter 2

Process of Diagonalisation

2.1 Diagonalisation

If a matrix A is diagonalizable, that is,

P⁻¹AP = diag(λ₁, λ₂, ⋯, λₙ)

then

AP = P diag(λ₁, λ₂, ⋯, λₙ)

Writing P as a block matrix of its column vectors α₁, α₂, ⋯, αₙ,

P = (α₁, α₂, ⋯, αₙ)

the above equation can be rewritten as

Aαᵢ = λᵢαᵢ (i = 1, 2, 3, ⋯, n)

So the column vectors of P are right eigenvectors of A, and the corresponding diagonal entries
are the corresponding eigenvalues. The invertibility of P also implies that the eigenvectors are
linearly independent and form a basis of Fⁿ. This is the necessary and sufficient condition for
diagonalizability and the canonical approach of diagonalization. The row vectors of P⁻¹ are the left
eigenvectors of A.

2.2 Simultaneous diagonalisation


A set of matrices is said to be simultaneously diagonalizable if there exists a single
invertible matrix P such that P⁻¹AP is a diagonal matrix for every A in the set. The following theorem
characterizes simultaneously diagonalizable matrices: a set of diagonalizable matrices commutes if
and only if the set is simultaneously diagonalizable. The set of all n×n diagonalizable matrices (over ℂ)
with n > 1 is not simultaneously diagonalizable. For instance, the matrices

[ 1  0 ] , [ 1  1 ]
[ 0  0 ]   [ 0  0 ]

are diagonalizable but not simultaneously diagonalizable, because they do not commute. A set
consists of commuting normal matrices if and only if it is simultaneously diagonalizable by a unitary
matrix, i.e., there exists a unitary matrix U such that U⁻¹AU is diagonal for every A in the set.
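
The failure to commute is easy to verify numerically (a sketch; NumPy is an assumption of the example):

import numpy as np

A = np.array([[1, 0],
              [0, 0]])
B = np.array([[1, 1],
              [0, 0]])

# Each matrix alone is diagonalizable (eigenvalues 1 and 0), but
# AB != BA, so no single invertible P can diagonalize both.
print(np.array_equal(A @ B, B @ A))  # False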

2.3 Diagonalizable matrices

Involutions are diagonalizable over the reals (and indeed over any field of characteristic not
2), with ±1 on the diagonal. Finite-order endomorphisms are diagonalizable over ℂ (or any
algebraically closed field where the characteristic of the field does not divide the order of the
endomorphism) with roots of unity on the diagonal. This follows since the minimal polynomial is
separable, because the roots of unity are distinct. Projections are diagonalizable, with 0's and 1's on
the diagonal.

Real symmetric matrices are diagonalizable by orthogonal matrices, i.e., given a real symmetric
matrix A, QᵀAQ is diagonal for some orthogonal matrix Q. More generally, matrices are
diagonalizable by unitary matrices if and only if they are normal. In the case of a real symmetric
matrix, we see that A = Aᵀ, so clearly AAᵀ = AᵀA holds. Examples of normal matrices are the
real symmetric (or skew-symmetric) matrices and Hermitian (or skew-Hermitian) matrices.

2.4 Process of diagonalization of a matrix

Consider the matrix

A = [ 1   2  0 ]
    [ 0   3  0 ]
    [ 2  −4  2 ]

This matrix has eigenvalues

λ₁ = 3, λ₂ = 2, λ₃ = 1

A is a 3×3 matrix with 3 different eigenvalues; therefore, it is diagonalizable. Note that if an
n×n matrix has exactly n distinct eigenvalues, then it is diagonalizable. These
eigenvalues are the values that will appear in the diagonalized form of matrix A, so by finding the
eigenvalues of A we have essentially diagonalized it. We could stop here, but it is a good check to use
the eigenvectors to diagonalize A.

The eigenvectors of A are

v₁ = [ −1 ] , v₂ = [ 0 ] , v₃ = [ −1 ]
     [ −1 ]        [ 0 ]        [  0 ]
     [  2 ]        [ 1 ]        [  2 ]

One can easily check that

Avₖ = λₖvₖ

Now, let P be the matrix with these eigenvectors as its columns:

P = [ −1  0  −1 ]
    [ −1  0   0 ]
    [  2  1   2 ]

Note there is no preferred order of the eigenvectors in P; changing the order of the
eigenvectors in P just changes the order of the eigenvalues in the diagonalized form of A.
[3] Then A is diagonalizable, as a simple computation confirms, having calculated P⁻¹ using any
suitable method:

P⁻¹AP = [ 3  0  0 ]
        [ 0  2  0 ]
        [ 0  0  1 ]
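
This diagonalization can be confirmed numerically (a sketch; NumPy is an assumption of the example):

import numpy as np

A = np.array([[1, 2, 0],
              [0, 3, 0],
              [2, -4, 2]])
P = np.array([[-1, 0, -1],
              [-1, 0, 0],
              [2, 1, 2]])

# P^(-1) A P recovers the eigenvalues 3, 2, 1 on the diagonal.
print(np.round(np.linalg.inv(P) @ A @ P))  # diag(3, 2, 1)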

Example 1

Consider the matrix

A = [ −1  −1   1 ]
    [  0  −2   1 ]
    [  0   0  −1 ]

In order to find out whether A is diagonalizable, let us follow these steps.

1. The characteristic polynomial of A is

p(λ) = |A − λI| = −(λ + 1)²(λ + 2)

So −1 is an eigenvalue with multiplicity 2 and −2 with multiplicity 1.

2. In order to find out whether A is diagonalizable, we only need to concentrate our attention on the
eigenvalue −1. Indeed, the eigenvectors associated to −1 are given by the system

(A + I)X = [ 0  −1  1 ] X = 0
           [ 0  −1  1 ]
           [ 0   0  0 ]

This system reduces to the single equation −y + z = 0. Setting x = α and y = β, we have

X = [ x ] = α [ 1 ] + β [ 0 ]
    [ y ]     [ 0 ]     [ 1 ]
    [ z ]     [ 0 ]     [ 1 ]

So the geometric multiplicity of −1 is 2, the same as its algebraic multiplicity. Therefore, the matrix A
is diagonalizable. In order to find the matrix P we need to find an eigenvector associated to −2.
The associated system is

(A + 2I)X = [ 1  −1  1 ] X = 0
            [ 0   0  1 ]
            [ 0   0  1 ]

It reduces to the system

{ x − y = 0
  z = 0

Set x = α; then we have

X = [ x ] = [ α ] = α [ 1 ]
    [ y ]   [ α ]     [ 1 ]
    [ z ]   [ 0 ]     [ 0 ]

Set

P = [ 1  0  1 ]
    [ 0  1  1 ]
    [ 0  1  0 ]

Then

P⁻¹AP = [ −1   0   0 ]
        [  0  −1   0 ]
        [  0   0  −2 ]

But if we set

P = [ 1  0  1 ]
    [ 1  1  0 ]
    [ 0  1  0 ]

then

P⁻¹AP = [ −2   0   0 ]
        [  0  −1   0 ]
        [  0   0  −1 ]

We have seen that if A and B are similar, then Aⁿ can be expressed easily in terms of Bⁿ. Indeed, if we
have A = P⁻¹BP, then Aⁿ = P⁻¹BⁿP. In particular, if D is a diagonal matrix, Dⁿ is easy to evaluate. This is
an application of diagonalization. In fact, the above procedure may be used to find the square root and
cube root of a matrix. Indeed, consider the matrix above,

A = [ −1  −1   1 ]
    [  0  −2   1 ]
    [  0   0  −1 ]

Set

P = [ 1  0  1 ]
    [ 1  1  0 ]
    [ 0  1  0 ]

Then

P⁻¹AP = D = [ −2   0   0 ]
            [  0  −1   0 ]
            [  0   0  −1 ]

Hence A = PDP⁻¹.

Set

B = P [ −∛2   0   0 ] P⁻¹
      [  0   −1   0 ]
      [  0    0  −1 ]

Then we have

B³ = P [ −2   0   0 ] P⁻¹ = PDP⁻¹ = A
       [  0  −1   0 ]
       [  0   0  −1 ]

Example 2

The matrix A has eigenvalues 1, 2, and −1 with corresponding eigenvectors (1,1,0)ᵀ, (1,2,1)ᵀ and
(0,1,2)ᵀ. Find A and compute A⁵.

Solution:

Since A has three distinct eigenvalues, it is diagonalizable and thus A = TDT⁻¹, where D is a diagonal
matrix having the eigenvalues of A on the main diagonal and T has the eigenvectors of A as its columns:

D = [ 1  0   0 ] , T = [ 1  1  0 ]
    [ 0  2   0 ]       [ 1  2  1 ]
    [ 0  0  −1 ]       [ 0  1  2 ]

We compute T⁻¹ by row-reducing the augmented matrix (T | I): subtract the 1st row from the 2nd row
and the 2nd row from the 3rd row, then subtract the 3rd row from the 2nd row and the 2nd row from
the 1st row. This gives

T⁻¹ = [  3  −2   1 ]
      [ −2   2  −1 ]
      [  1  −1   1 ]

so that

A = TDT⁻¹ = [ −1  2  −1 ]
            [ −6  7  −4 ]
            [ −6  6  −4 ]

Now,

A⁵ = TD⁵T⁻¹ = [ 1  1  0 ] [ 1   0   0 ] [  3  −2   1 ]   [  −61   62  −31 ]
              [ 1  2  1 ] [ 0  32   0 ] [ −2   2  −1 ] = [ −126  127  −64 ]
              [ 0  1  2 ] [ 0   0  −1 ] [  1  −1   1 ]   [  −66   66  −34 ]
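
The computation can be verified with NumPy (a sketch; NumPy is an assumption of the example):

import numpy as np

T = np.array([[1, 1, 0],
              [1, 2, 1],
              [0, 1, 2]])
D = np.diag([1, 2, -1])
T_inv = np.linalg.inv(T)

print(np.round(T @ D @ T_inv))                     # A
print(np.round(T @ np.diag([1, 32, -1]) @ T_inv))  # A^5 via D^5
print(np.round(np.linalg.matrix_power(T @ D @ T_inv, 5)))  # agrees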

2.5 Non-diagonalizable matrices

In general, a rotation matrix is not diagonalizable over the reals, but all rotation
matrices are diagonalizable over the complex field. Even if a matrix is not diagonalizable, it is
always possible to "do the best one can" and find a matrix with the same properties,
consisting of the eigenvalues on the leading diagonal and either ones or zeroes on the
superdiagonal (known as the Jordan normal form).

Some matrices are not diagonalizable over any field, most notably nonzero nilpotent
matrices. This happens more generally if the algebraic and geometric multiplicities of an
eigenvalue do not coincide. For instance, consider

C = [ 0  1 ]
    [ 0  0 ]

This matrix is not diagonalizable: there is no matrix U such that U⁻¹CU is a diagonal
matrix. Indeed, C has one eigenvalue (namely zero), and this eigenvalue has algebraic
multiplicity 2 and geometric multiplicity 1. Some real matrices are not diagonalizable over
the reals. Consider for instance the matrix

B = [  0  1 ]
    [ −1  0 ]

The matrix B does not have any real eigenvalues, so there is no real matrix Q such that Q⁻¹BQ
is diagonal. However, we can diagonalize B if we allow complex numbers. Indeed, if we take

Q = [ 1  i ]
    [ i  1 ]

then Q⁻¹BQ is diagonal.

Note that the above examples show that the sum of diagonalizable matrices need not be
diagonalizable.
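
Numerically this is easy to see: over the complex numbers B has the eigenvalues ±i (a sketch; NumPy is an assumption of the example):

import numpy as np

B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# np.linalg.eig works over the complex numbers, so it finds the
# purely imaginary eigenvalues +i and -i of the rotation matrix B.
eigenvalues, Q = np.linalg.eig(B)
print(eigenvalues)                             # [0.+1.j, 0.-1.j]
print(np.round(np.linalg.inv(Q) @ B @ Q, 12))  # diag(i, -i)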

2.6 Applications of diagonalization


Diagonalization can be used to compute the powers of a matrix A efficiently, provided
the matrix is diagonalizable. Suppose we have found that

P⁻¹AP = D

is a diagonal matrix. Then, as the matrix product is associative,

Aᵏ = (PDP⁻¹)ᵏ
   = (PDP⁻¹)·(PDP⁻¹) ⋯ (PDP⁻¹)
   = PD(P⁻¹P)D(P⁻¹P) ⋯ (P⁻¹P)DP⁻¹
   = PDᵏP⁻¹

and the latter is easy to calculate, since it involves only the powers of a diagonal matrix. This approach
can be generalized to the matrix exponential and other matrix functions, since they can be defined as
power series. This is particularly useful in finding closed-form expressions for the terms of linear
recursive sequences.
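
As an illustration of the last point (a sketch, not an example from this report; the Fibonacci recurrence and its companion matrix are assumptions introduced here), diagonalizing the matrix [[1, 1], [1, 0]] once makes every power, and hence every Fibonacci number, cheap to compute:

import numpy as np

# Fibonacci companion matrix: M^n has F(n) in its (0, 1) entry.
M = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Diagonalize once; afterwards M^n = P D^n P^(-1) costs only a
# scalar power per eigenvalue (the golden ratio and its conjugate).
eigenvalues, P = np.linalg.eig(M)
P_inv = np.linalg.inv(P)

def fib(n):
    Dn = np.diag(eigenvalues ** n)
    return round((P @ Dn @ P_inv)[0, 1])

print([fib(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]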

2.6.1 Particular application


For example, consider the following matrix:

M = [ a  b−a ]
    [ 0   b  ]

Calculating the various powers of M reveals a surprising pattern:

M² = [ a²  b²−a² ] , M³ = [ a³  b³−a³ ] , ⋯
     [ 0    b²   ]        [ 0    b³   ]

The above phenomenon can be explained by diagonalizing M. To accomplish this, we need a
basis of ℝ² consisting of eigenvectors of M. One such eigenvector basis is given by

u = [ 1 ] = e₁ , v = [ 1 ] = e₁ + e₂
    [ 0 ]            [ 1 ]

where eᵢ denotes the standard basis of ℝⁿ. The reverse change of basis is given by

e₁ = u, e₂ = v − u

Straightforward calculations show that

Mu = au, Mv = bv

Thus, a and b are the eigenvalues corresponding to u and v respectively. By linearity of matrix
multiplication, we have

Mⁿu = aⁿu, Mⁿv = bⁿv

Switching back to the standard basis, we have

Mⁿe₁ = Mⁿu = aⁿe₁

Mⁿe₂ = Mⁿ(v − u) = bⁿv − aⁿu = (bⁿ − aⁿ)e₁ + bⁿe₂

The preceding relations, expressed in matrix form, are

Mⁿ = [ aⁿ  bⁿ−aⁿ ]
     [ 0    bⁿ   ]

thereby explaining the above phenomenon.
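
The same change of basis can be carried out symbolically (a sketch; SymPy is an assumption of the example):

from sympy import symbols, Matrix, simplify

a, b, n = symbols('a b n')

# Columns of P are the eigenvectors u = e1 and v = e1 + e2,
# so M = P diag(a, b) P^(-1) and M^n = P diag(a^n, b^n) P^(-1).
P = Matrix([[1, 1],
            [0, 1]])
Dn = Matrix([[a**n, 0],
             [0, b**n]])

print(simplify(P * Dn * P.inv()))  # Matrix([[a**n, -a**n + b**n], [0, b**n]])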

Application of Diagonalization of the Coefficient Matrices to Differential Equations

Consider the first-order differential equations

y₁′ = y₁ − y₂ + 2y₃
y₂′ = 3y₁ + 4y₃
y₃′ = 2y₁ + y₂

We can write these equations as

[Y′] = [A][Y]

where

[A] = [ 1  −1  2 ]
      [ 3   0  4 ]
      [ 2   1  0 ]

The eigenvalues of [A] are 1.303, −2.303 and 2, and the corresponding (normalized) eigenvectors are
the columns of the modal transformation matrix

[M] = [ −0.172  −0.557  0     ]
      [  0.891  −0.467  0.894 ]
      [  0.42    0.687  0.447 ]

[M] diagonalizes [A], and substituting [Y] = [M][Z] into [Y′] = [A][Y] transforms the system into

[Z′] = [λ][Z]

where

[λ] = [M]⁻¹[A][M] = [ 1.303   0      0 ]
                    [ 0      −2.303  0 ]
                    [ 0       0      2 ]

Therefore [Z′] = [λ][Z] can be written as

ż₁ = 1.303z₁, ż₂ = −2.303z₂, ż₃ = 2z₃

These equations can be solved easily:

z₁ = a e^(1.303t), z₂ = b e^(−2.303t), z₃ = c e^(2t)

Therefore,

[Z] = [ a e^(1.303t)  ]
      [ b e^(−2.303t) ]
      [ c e^(2t)      ]

Since [Y] = [M][Z], the general solution in terms of components is

y₁ = −0.172a e^(1.303t) − 0.557b e^(−2.303t)

y₂ = 0.891a e^(1.303t) − 0.467b e^(−2.303t) + 0.894c e^(2t)

y₃ = 0.42a e^(1.303t) + 0.687b e^(−2.303t) + 0.447c e^(2t)

The arbitrary constants a, b and c can be found if an initial condition is given. For the
initial condition

Y(0) = [0 0 1]ᵀ

we have the following:

−0.172a − 0.557b = 0

0.891a − 0.467b + 0.894c = 0

0.42a + 0.687b + 0.447c = 1

Therefore the constants a, b and c are −3.228, 0.997 and 3.738 respectively.
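
This solution can be cross-checked numerically (a sketch; NumPy and SciPy are assumptions of the example, and the three-digit constants above limit the agreement to roughly three significant figures):

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, -1.0, 2.0],
              [3.0, 0.0, 4.0],
              [2.0, 1.0, 0.0]])
print(np.linalg.eigvals(A))  # approximately [1.303, -2.303, 2.]

# Integrate Y' = AY from Y(0) = (0, 0, 1) and compare y1(1) with
# the closed-form solution built from the eigenvalues above.
sol = solve_ivp(lambda t, y: A @ y, (0.0, 1.0), [0.0, 0.0, 1.0], t_eval=[1.0])
a, b = -3.228, 0.997
y1_closed = -0.172 * a * np.exp(1.303) - 0.557 * b * np.exp(-2.303)
print(sol.y[0, -1], y1_closed)  # the two values agree closely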
2.6.2 Quantum mechanical application

In quantum mechanical and quantum chemical computations, matrix
diagonalization is one of the most frequently applied numerical processes. The basic reason is
that the time-independent Schrödinger wave equation is an eigenvalue equation, albeit, in
most physical situations, on an infinite-dimensional space (a Hilbert space).

A very common approximation is to truncate the Hilbert space to a finite dimension, after
which the Schrödinger equation can be formulated as an eigenvalue problem of a real
symmetric, or complex Hermitian, matrix. Formally, this approximation is founded on the
variational principle, valid for Hamiltonians that are bounded from below. First-order
perturbation theory for degenerate states also leads to a matrix eigenvalue problem.
Conclusion

We have concluded that solution by diagonalization will always work provided
we can find n linearly independent eigenvectors of the n×n matrix A; the eigenvalues of A
may be real and distinct, complex, or repeated. The method fails when A has a repeated
eigenvalue and n linearly independent eigenvectors cannot be found. Of course, in that
situation A is not diagonalizable, so one cannot use the diagonalization
process for the ultimate solution of such matrices. Diagonalization can also be used to solve
non-homogeneous systems of linear equations.

References

[1] I. N. Herstein, Topics in Algebra, 2nd ed., John Wiley & Sons.

[2] Howard Anton (Professor Emeritus, Drexel University), Elementary Linear Algebra, John Wiley & Sons.

[3] David Cherney and Tom Denton, Linear Algebra.

[4] MathPages, "Eigenvalue Problems and Matrix Invariants", http://www.mathpages.

[5] E. Artin, Geometric Algebra, Interscience, New York, 1957.

[6] http://www.maths.ed.ac.uk/~imf/teaching/MT3/diagonalization of matrices.pdf

[7] Khalid Latif Mir, Linear Algebra.
