
MATH 219

Fall 2023
Lecture 6
Lecture notes by Özgür Kişisel

Content: Introduction to systems of first order equations. Review of matrices.


Suggested Problems: (Boyce, Di Prima, 10th edition)
§7.1: 19, 23
§7.2: 1, 2, 9, 11, 16, 21, 24, 25
So far, we have explored solution methods and qualitative issues for a single first
order differential equation in one variable. Our next goal is to consider the multi-
variable version of this problem: n first order differential equations in n variables.
Just as in algebra, the essential difficulty is that the equations are almost always
coupled to one another. In order to simplify and decouple these equations so that
each variable has its own equation, one must do a fair amount of preparatory work.
In algebra, we often do this by eliminating variables: for example, given two linear
equations in two variables x and y, we look for a combination of the equations that
eliminates either x or y. In differential equations, the situation is similar but more
intricate. The elimination step is significantly more complicated, so it pays off to
have a systematic approach to the problem.

1 Systems of first order ODE’s


Suppose that x1 , x2 , . . . , xn are functions of t. A set of n differential equations of
the form

x′1 = f1 (x1 , x2 , . . . , xn , t)
x′2 = f2 (x1 , x2 , . . . , xn , t)
...
x′n = fn (x1 , x2 , . . . , xn , t)

is called a system of first order ODE’s. In these equations all derivatives are
with respect to the independent variable t.
Example: Consider the system
x′ = x − 0.5xy
y ′ = −0.75y + 0.25xy
This system could be a model for the population dynamics of two species, a so-called
"predator-prey system". For example, x(t) and y(t) could be the populations of
rabbits and foxes in a particular region. The more rabbits there are, the more they
reproduce in the absence of other factors; this explains the +x term on the right
hand side of the first equation. The more foxes and rabbits there are, the more
interactions happen, namely more rabbits are devoured by foxes. Therefore it is
natural to have a negative xy term on the right hand side of the first equation.
Since interactions mean that the foxes find food, the coefficient of xy on the right
hand side of the second equation is positive. In the absence of rabbits, the fox
population dies out for lack of food, therefore the y term on the right hand side of
the second equation has a negative coefficient. □
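The qualitative behavior of such a system can be explored numerically. Below is a minimal sketch using Euler's method; the step size, number of steps, and initial populations are illustrative choices, not part of the notes.

```python
# Euler's method for the predator-prey system
#   x' = x - 0.5xy,   y' = -0.75y + 0.25xy
# The step size dt and the starting point (4, 2) are assumptions
# made purely for illustration.

def simulate(x0, y0, dt=0.001, steps=5000):
    x, y = x0, y0
    for _ in range(steps):
        dx = x - 0.5 * x * y            # rabbit equation
        dy = -0.75 * y + 0.25 * x * y   # fox equation
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Start near the equilibrium (x, y) = (3, 2); the populations
# oscillate around it and remain positive.
x, y = simulate(4.0, 2.0)
```

With these parameters the trajectory cycles around the equilibrium point (3, 2), which is the characteristic behavior of this predator-prey model.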
A solution of a system of first order ODE's as above is an n-tuple of functions
(x1(t), x2(t), . . . , xn(t)). It does not make much sense to consider one of the
functions in isolation; one should consider the whole n-tuple simultaneously as a
solution. An initial condition is a specification of the n values x1(t0) = a1,
x2(t0) = a2, . . . , xn(t0) = an. Note that all of these values are specified at a
single time instant t0, not at different instants. (One could consider other
possibilities, such as specifying different variables at different time instants, but
we do not call such problems initial value problems.)
Example: Consider the initial value problem
x′1 = 2x1 + x2
x′2 = x1 + 2x2
with x1 (0) = 5, x2 (0) = 1. Verify that x1 (t) = 3e3t + 2et , x2 (t) = 3e3t − 2et is a
solution of this initial value problem.
Solution: First, let us check that the two ODE’s in the system are satisfied by this
solution:
(3e3t + 2et )′ = 9e3t + 2et = 2(3e3t + 2et ) + (3e3t − 2et )
(3e3t − 2et )′ = 9e3t − 2et = (3e3t + 2et ) + 2(3e3t − 2et )

Next, let us check that the initial condition is satisfied:

x1 (0) = 3e0 + 2e0 = 5


x2 (0) = 3e0 − 2e0 = 1

Since all conditions of the initial value problem are satisfied, the proposed set of
functions is actually a solution. □
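The verification above can also be spot-checked numerically. The sketch below evaluates the proposed solution and both sides of each equation at a few sample times; the sample times are an arbitrary choice.

```python
# Numerical check of the initial value problem
#   x1' = 2x1 + x2,  x2' = x1 + 2x2,  x1(0) = 5, x2(0) = 1
# against the proposed solution x1 = 3e^{3t} + 2e^t, x2 = 3e^{3t} - 2e^t.
import math

def x1(t):  return 3 * math.exp(3 * t) + 2 * math.exp(t)
def x2(t):  return 3 * math.exp(3 * t) - 2 * math.exp(t)
def x1p(t): return 9 * math.exp(3 * t) + 2 * math.exp(t)  # derivative of x1
def x2p(t): return 9 * math.exp(3 * t) - 2 * math.exp(t)  # derivative of x2

# Both equations should hold (up to floating-point rounding) at every t.
ok = all(
    abs(x1p(t) - (2 * x1(t) + x2(t))) < 1e-9 * abs(x1p(t))
    and abs(x2p(t) - (x1(t) + 2 * x2(t))) < 1e-9 * abs(x2p(t))
    for t in (0.0, 0.5, 1.0)
)
```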
The existence and uniqueness of solutions of a first order system of ODE’s can be
guaranteed in the presence of similar conditions to those that we saw in the case of
a single first order ODE.

Theorem 1.1 (Existence-uniqueness theorem) Consider the system of first order
ODE's

x′1 = f1 (x1 , x2 , . . . , xn , t)
x′2 = f2 (x1 , x2 , . . . , xn , t)
...
x′n = fn (x1 , x2 , . . . , xn , t)

together with the initial condition x1(t0) = a1, x2(t0) = a2, . . . , xn(t0) = an.
Assume that each fi(x1, x2, . . . , xn, t) and each ∂fi/∂xj(x1, x2, . . . , xn, t) is
continuous on an open rectangular box containing (a1, a2, . . . , an, t0). Then there
is a unique solution of this initial value problem in some open interval containing t0.

As in the case of a single equation, we will be unable to provide a proof of this
theorem in these notes.

1.1 Linear systems

Recall that a first order ODE is said to be linear if it can be written in the form
y ′ + p(t)y = q(t). By analogy, we define the concept of a system of first order linear
differential equations:

Definition 1.1 A system of first order ODE’s is said to be linear if it can be

written in the form

x′1 = a11 (t)x1 + a12 (t)x2 + . . . + a1n (t)xn + b1 (t)


x′2 = a21 (t)x1 + a22 (t)x2 + . . . + a2n (t)xn + b2 (t)
...
x′n = an1 (t)x1 + an2 (t)x2 + . . . + ann (t)xn + bn (t)

for some functions aij (t) and bi (t).


If furthermore bi(t) = 0 for all i, then the system is said to be a homogeneous
system.
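Anticipating the matrix notation reviewed in the next section, such a system can be written compactly: collecting the unknowns into a column vector x, the coefficients into A(t) = (aij(t)), and the bi(t) into a column vector b(t), the system reads

```latex
\mathbf{x}' = A(t)\,\mathbf{x} + \mathbf{b}(t),
\qquad
\mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix},
\quad
\mathbf{b}(t) = \begin{pmatrix} b_1(t) \\ \vdots \\ b_n(t) \end{pmatrix}.
```

In this notation, a homogeneous system is simply x' = A(t)x.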

We will now state the existence-uniqueness theorem for first order linear systems.
This theorem can be viewed as a special case of the existence-uniqueness theorem
for general first order systems, stated above. However, the conclusion is slightly
stronger in the linear case: the solution exists throughout the entire interval on
which the coefficient functions are continuous.

Theorem 1.2 (Existence-uniqueness theorem, linear case) Consider the linear sys-
tem of ODE’s

x′1 = a11 (t)x1 + a12 (t)x2 + . . . + a1n (t)xn + b1 (t)


x′2 = a21 (t)x1 + a22 (t)x2 + . . . + a2n (t)xn + b2 (t)
...
x′n = an1 (t)x1 + an2 (t)x2 + . . . + ann (t)xn + bn (t)

together with the initial condition x1(t0) = d1, x2(t0) = d2, . . . , xn(t0) = dn.
Assume that each aij(t) and each bi(t) is continuous for t in an open interval (α, β).
Then there is a unique solution of this initial value problem, valid over the whole
interval (α, β).

The aim of this part of the course is to develop the theory of systems of first order
linear ODE's and to find solutions systematically when all the aij(t)'s are constant.
Matrix algebra will be an essential tool for solving linear systems of ODE's, so we
will spend some time reviewing basic concepts about matrices.

2 Review of Matrices
Definition 2.1 An m × n matrix A (of real numbers) is a rectangular array of
real numbers:

        [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [ ...  ...  ...  ... ]
        [ am1  am2  ...  amn ]

The numbers aij are called the entries of the matrix. The first index tells us which
row the entry is in, and the second index tells us which column the entry is in. The
vector [ai1, ai2, . . . , ain] is said to be the i-th row vector of the matrix A.
Similarly, the vector

        [ a1j ]
        [ a2j ]
        [ ... ]
        [ amj ]

is said to be the j-th column vector of the matrix A. As a shorthand notation,
we will sometimes write

        A = (aij)

for the whole matrix, which is meant to indicate that the ij-entry of the matrix A
is aij.

2.1 Complex numbers

Recall the set C of complex numbers: it is the set of numbers of the form a + bi
where a, b are real numbers. Complex numbers are subject to the usual rules of
arithmetic, together with i² = −1. Therefore,

        (a + bi) + (c + di) = (a + c) + (b + d)i
        (a + bi)(c + di) = (ac − bd) + (bc + ad)i

The conjugate of a complex number z = a + bi is z̄ = a − bi. The modulus of
a complex number z = a + bi is |z| = √(a² + b²). It is easy to check that
z z̄ = |z|². The only complex number with modulus zero is z = 0.

We can divide by any non-zero complex number. Indeed, if z2 ≠ 0 then
1/z2 = z̄2/|z2|². Therefore

        z1/z2 = z1 z̄2 / |z2|².
We can also consider matrices over complex numbers C, by just taking the entries
aij from C.
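Python's built-in complex type obeys these same rules, so the identities above can be spot-checked; the sample values below are arbitrary.

```python
# Checking z * conj(z) = |z|^2 and the division formula
# z1/z2 = z1 * conj(z2) / |z2|^2 on arbitrary sample values.
z1 = complex(3, 4)   # 3 + 4i
z2 = complex(1, -2)  # 1 - 2i

modulus_sq = z2.real ** 2 + z2.imag ** 2   # |z2|^2 = a^2 + b^2
lhs = z1 / z2                              # built-in complex division
rhs = z1 * z2.conjugate() / modulus_sq     # z1 * conj(z2) / |z2|^2
```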

2.2 Transpose, conjugate and adjoint

Definition 2.2 Let A = (aij) be an m × n matrix. Then its transpose is the
n × m matrix A^T = (aji). The conjugate of A is Ā = (āij). The adjoint of A
is A* = (Ā)^T.

Example: Say

        A = [  1   2+9i    0   ]
            [ -1    3    7-4i  ]

Then,

        A^T = [  1     -1   ]
              [ 2+9i    3   ]
              [  0    7-4i  ]

        Ā   = [  1   2-9i    0   ]
              [ -1    3    7+4i  ]

        A*  = [  1     -1   ]
              [ 2-9i    3   ]
              [  0    7+4i  ]
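As a quick illustration, the sketch below computes the transpose, conjugate and adjoint of the example matrix, representing a matrix as a list of rows (a representation chosen here purely for illustration).

```python
# Transpose, conjugate and adjoint for matrices stored as lists of rows.

def transpose(A):
    # Entry (j, i) of the result is entry (i, j) of A.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def conjugate(A):
    # Conjugate every entry; int/float entries are their own conjugates.
    return [[entry.conjugate() for entry in row] for row in A]

def adjoint(A):
    return conjugate(transpose(A))  # A* = conjugate of the transpose

A = [[1, 2 + 9j, 0],
     [-1, 3, 7 - 4j]]
```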

2.3 Matrix Operations

Definition 2.3 Suppose that A = (aij ) and B = (bij ) are two m×n matrices. Then
their sum is the m × n matrix A + B = (aij + bij ).

Definition 2.4 Say A = (aij ) is an m × n matrix and c is a number. Then the


scalar multiplication of A by c is the matrix cA = (caij ).

Subtraction can be defined by combining these two operations; namely,
A − B = A + (−1)B.

Definition 2.5 Say A = (aij ) is an m × n matrix and B = (bij ) is an n × k matrix.
Then the matrix product of A and B is the m × k matrix AB = (cij ) where
        cij = Σ (l = 1 to n) ail blj = ai1 b1j + ai2 b2j + . . . + ain bnj

Remark 2.1 The entry cij in the definition above is equal to the dot product of the
ith row vector of A and the jth column vector of B. The condition on the sizes of
the matrices guarantees that these two vectors have the same length n.
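The definition translates directly into code. The sketch below computes the product entry by entry, as the dot product of a row of A with a column of B; the sample matrices are arbitrary.

```python
# Matrix product straight from Definition 2.5: entry c_ij is the
# dot product of row i of A with column j of B.

def matmul(A, B):
    m, n, k = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner sizes must agree"
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(k)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[1, 1], [0, 1]]
```

Checking `matmul(matmul(A, B), C) == matmul(A, matmul(B, C))` illustrates associativity, while `matmul(A, B) != matmul(B, A)` illustrates Remark 2.2 below: matrix multiplication is not commutative.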

Theorem 2.1 Matrix operations have the properties below. Let 0 denote a matrix
with all entries 0, and c a constant.

• A + B = B + A
• A + 0 = 0 + A = A
• A + (−A) = 0
• A + (B + C) = (A + B) + C
• c(A + B) = cA + cB
• A(BC) = (AB)C
• A(B + C) = AB + AC
• A(cB) = c(AB)

Proof: Exercise.

Remark 2.2 In general, AB ≠ BA unless A and B are chosen in a special way.
Matrix multiplication is not commutative.

Remark 2.3 We defined the matrix operations quite mechanically, not conceptually.
From this viewpoint, the rule A(BC) = (AB)C (associativity of matrix multipli-
cation) seems like a miraculous fact. It is indeed remarkable, but it sounds more
natural and less like a miracle once one uncovers the conceptual meaning of matrix
multiplication: multiplication of a matrix by a column vector describes a multi-
variable function that takes vectors to vectors, and matrix multiplication corre-
sponds to composition of such functions. Since composition of functions is an
associative operation, so is matrix multiplication.

2.4 Invertibility and Determinants

We have four basic arithmetic operations on numbers. Of these four, we have
defined three for matrices. What about matrix division? First of all, we have to
be careful about the order of division, since AB ≠ BA in general; it is much better
to write AB−1 or B−1A rather than A/B. The definition of division therefore
hinges on deciding what B−1 means. In the case of numbers, if a ≠ 0, then a−1 is
the number such that aa−1 = 1. First we should determine which matrix plays the
role of the number 1, namely the multiplicative identity element.

Definition 2.6 Suppose that I is the n × n matrix

        [ 1  0  0  ...  0 ]
        [ 0  1  0  ...  0 ]
        [ 0  0  1  ...  0 ]
        [       ...       ]
        [ 0  0  0  ...  1 ]

namely, I = (aij) where aij = 0 if i ≠ j and aij = 1 if i = j. Then I is called
the identity matrix of order n.

It can be easily checked that for any n × m matrix A or for any k × n matrix B,

IA = A, BI = B.

Therefore the matrix I plays the role of the multiplicative identity for matrix multi-
plication, which is the role of the number 1 in the case of real or complex numbers.

Definition 2.7 Suppose that A is an n × n matrix. We say that the n × n matrix


B is an inverse for A if
AB = BA = I

The inverse of a matrix, if it exists, must be unique. Indeed, suppose that both B1
and B2 are inverses of A. Then B1A = I and AB2 = I. But then

        B1 = B1 I = B1 (AB2) = (B1 A) B2 = I B2 = B2.

Hence there can be at most one inverse of a matrix A. We denote the inverse of A
by A−1 if it exists.

How do we decide whether A−1 exists or not? In the case of numbers, this question
has an easy answer: a has an inverse if and only if a ≠ 0. For matrices, this is a
nontrivial matter. A quick answer can be given using the concept of the determinant,
which we discuss next.

2.5 Determinants

Let A be an n × n matrix of real or complex numbers. The determinant of A,
denoted by det(A), is a number computed in terms of the entries of A. It can be
computed inductively as follows:

n = 1: In this case A = [a] and det(A) = a.

n = 2: In this case
        A = [ a  b ]
            [ c  d ]
and det(A) = ad − bc.

Let Aij denote the matrix A with its ith row and jth column deleted; therefore Aij
is an (n − 1) × (n − 1) matrix.

n × n: Suppose that A = (aij). Then we define the determinant of A using smaller
determinants, by the so-called "expansion with respect to the first row":

        det(A) = Σ (j = 1 to n) (−1)^(j−1) a1j det(A1j)

Example: Compute the determinant of

        A = [ 2  0  -2 ]
            [ 3  4   5 ]
            [ 1  1  -8 ]

Solution:

        det A = a11 det(A11) − a12 det(A12) + a13 det(A13)
              = 2 det [ 4   5 ] − 0 det [ 3   5 ] + (−2) det [ 3  4 ]
                      [ 1  -8 ]         [ 1  -8 ]            [ 1  1 ]
              = 2(−37) − 0 + (−2)(−1)
              = −72
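The inductive definition translates directly into a recursive procedure. The sketch below expands along the first row, exactly as above; it is meant for small matrices only, since this expansion takes on the order of n! steps.

```python
# Determinant by expansion along the first row, following the
# inductive definition.

def minor(A, i, j):
    """A with row i and column j deleted (the matrix A_ij)."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]                      # base case: det [a] = a
    # (-1)**j matches the sign (-1)^(j-1) of the notes, since the
    # index j here starts at 0 rather than 1.
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

A = [[2, 0, -2], [3, 4, 5], [1, 1, -8]]     # the example above
```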

Remark 2.4 In order to avoid the awkward notation

        det [ a  b ]
            [ c  d ]

we write

        | a  b |
        | c  d |

for this determinant instead. A similar notation is used for larger determinants.

Now, we can state the fundamental result that relates invertibility of a matrix and
the value of its determinant:

Theorem 2.2 Let A be an n × n matrix of real or complex numbers. Then A−1
exists if and only if det(A) ≠ 0.
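For 2 × 2 matrices, the theorem can be made completely explicit via the standard adjugate formula (a known fact, not derived in these notes): the inverse exists precisely when ad − bc ≠ 0.

```python
# Theorem 2.2 in the 2x2 case: the adjugate formula
#   A^{-1} = (1/(ad - bc)) [ d  -b ]
#                          [ -c  a ]
# works exactly when the determinant ad - bc is nonzero.

def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None  # not invertible, consistent with Theorem 2.2
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(A, B):  # 2x2 matrix product, per Definition 2.5
    return [[sum(A[i][l] * B[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [5, 3]]   # det = 2*3 - 1*5 = 1, so A is invertible
B = inverse_2x2(A)
```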

The proof of this theorem will be omitted for the moment. There are alternative
ways to compute the determinant of a matrix, which we will explain and use in
future lectures; some of these will be used without proof. The right place for the
complete story of matrices and determinants is a course in linear algebra.
