Nonlinear Dynamical Systems
Prof. Madhu N. Belur and Prof. Harish K. Pillai
Department of Electrical Engineering
Indian Institute of Technology, Bombay
Lecture 1
Introduction
Welcome everyone. This is a course on non-linear dynamical systems taught by Madhu N. Belur, that is me, and my colleague Harish K. Pillai. We are both in the Control and Computing group in the Department of Electrical Engineering, IIT Bombay.
(Refer Slide Time: 00:41)
So, this course is primarily relevant for postgraduate students who are interested in non-linear dynamical systems; senior undergraduate students are also eligible for this course. The more important prerequisite for this course is some amount of linear algebra: essentially eigenvalues, the concept of null space, and positive definite matrices.
We will also require some basics about differential equations, in particular linear differential equations and the homogeneous and particular solutions of linear differential equations. From control, in particular linear systems, we will require transfer functions, some state space concepts, and the Nyquist criterion for stability.
(Refer Slide Time: 01:31)
Some useful books for this course are the book by Hassan Khalil on Nonlinear Systems, the book by M. Vidyasagar on Nonlinear Systems Analysis, and the book by Shankar Sastry called Nonlinear Systems: Analysis, Stability, and Control. These books will be very useful. The outline of this course is as follows. We will first begin with some properties of linear systems, both input-output systems and autonomous systems. Then we will see some features that are present only in non-linear systems. Then we will move on to existence and uniqueness of solutions to non-linear differential equations. We will also see the notions of stability and linearization, the Lyapunov theorem for stability, and the LaSalle invariance principle.
(Refer Slide Time: 02:27)
Then we will see input-output systems, in particular L2 stability. We will also see sector-bounded nonlinearities, in particular the Lur'e problem. We will review the Nyquist criterion for stability: even though it is applicable only to linear systems, it will play an extremely important role even for non-linear systems. We will begin with passivity and the small gain theorem as the main results for sector-bounded nonlinearities, and then we will see, more generally, the circle and Popov criteria. We will also see the describing function method in this course.
(Refer Slide Time: 03:07)
The outline of today's lecture is as follows: the definition of linear systems, a review of the principle of superposition, some examples of non-linear systems, some features that characterize only non-linear systems, the definition of autonomous systems, and then the notion of an equilibrium point, or equilibrium position.
(Refer Slide Time: 03:48)
So, we will begin with the definition of a linear system. When do we call a system linear? A system with input u and output y is called linear if it satisfies the principle of superposition. What is the principle of superposition? Suppose inputs u1 and u2 give outputs y1 and y2 respectively. Then we can ask: what does the input u1 + u2 give as the output? In this context, when we say the input u1, we mean the trajectory u1.
So, I would like to spend a few minutes on the notation we will use in this course. One should also keep looking back at this lecture 1 regularly during the course, because it contains important notational aspects. When we say the input u1, we mean the entire trajectory, the complete function u1 as a function of time; that is, the values of u1 at all time instants. On the other hand, suppose we are interested in the value of u1 at a particular time instant t0, where t0 is some real number, some time value.
Then we denote that as u1(t0); this will be the notation throughout this course. So, coming back to the principle of superposition, we can ask the question: can the output of the system for the input u1 + u2 be obtained by superimposing y1 on y2, or in other words, superimposing y2 on y1?
(Refer Slide Time: 05:24)
So, we will use this ability to superimpose as the definition of a linear system: is the output of the system for input u1 + u2 precisely equal to y1 + y2? This equality should be understood in the sense that it holds at every time instant. Why? Because u1 and u2 are complete trajectories. So, at every time instant we want the output to equal the trajectory y1 + y2, the sum of the corresponding outputs for the inputs u1 and u2. We would also like the output to equal y1 + y2 for arbitrary inputs u1 and u2, and not just for some carefully chosen inputs u1 and u2. So, it is important that this ability to superimpose works for arbitrary inputs u1 and u2.
Moreover, we would also like that if we scale the input by a real number alpha1, then the output is the same output scaled by the same amount alpha1. In other words, for any real number alpha1, the output of the system for input alpha1 u1 is precisely equal to alpha1 y1. These two properties, the sum of the outputs and the scaling of the output, can both be captured in just one sentence. In short, a system is said to be linear if the output of the system for the input alpha1 u1 + alpha2 u2 is precisely equal to alpha1 y1 + alpha2 y2.
So, again coming back to the notation: we say a system "is said to be linear if" such-and-such property holds; in this definition, the "if" is really an if-and-only-if statement. So, we can rewrite the same definition in a few other equivalent statements.
(Refer Slide Time: 07:28)
So, a system with input u and output y is linear if and only if the output for alpha1 u1 + alpha2 u2 is equal to alpha1 y1 + alpha2 y2; this is the definition of linearity. We can also state this as: the system is linear if and only if the output for alpha1 u1 + alpha2 u2 equals alpha1 y1 + alpha2 y2. Please note that we have this if-and-only-if implication; in particular, we put a colon on the left side, which means the left side of the statement is being defined by the right-hand side of the if-and-only-if sign. And, as I said, this should be true for all functions u1 and u2 and for all real numbers alpha1 and alpha2.
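To make this concrete, here is a minimal Python sketch (not from the lecture) that checks the superposition property numerically for a hypothetical discrete-time linear system y[k+1] = 0.5 y[k] + u[k] with zero initial state; the system, the random inputs and the scalars alpha1, alpha2 are all assumptions made only for illustration.

```python
import numpy as np

def respond(u, a=0.5):
    """Response of the hypothetical linear system y[k+1] = a*y[k] + u[k], with y[0] = 0."""
    y = np.zeros(len(u) + 1)
    for k in range(len(u)):
        y[k + 1] = a * y[k] + u[k]
    return y

rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(50), rng.standard_normal(50)   # arbitrary input trajectories
alpha1, alpha2 = 2.3, -1.7                                   # arbitrary real scalars

y1, y2 = respond(u1), respond(u2)
y_combined = respond(alpha1 * u1 + alpha2 * u2)

# Superposition: output for alpha1*u1 + alpha2*u2 equals alpha1*y1 + alpha2*y2
print(np.allclose(y_combined, alpha1 * y1 + alpha2 * y2))    # True
```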
(Refer Slide Time: 08:21)
For a linear system we can ask: what happens when we give the input 0? The zero function, when given as the input, is the function u(t) = 0 for all time t, that is, u(t) identically equal to 0. This is different from a function that is equal to 0 only at a few time instants. For example, the functions u(t) = sin(t) or u(t) = t^2 - 3 become equal to 0 only at specific time instants, but not at all time instants, unlike the zero function, which is equal to 0 for all time t.
So, what happens to the output when we give the system the zero input? We would like to say that the output is equal to 0. How do we obtain this as a consequence of the definition? Take the definition of linearity with alpha1 equal to 0 and any input u1. Recall that we had the sentence: for any real number alpha1, the output of the system for input alpha1 u1 is equal to alpha1 y1. Here, if we substitute alpha1 = 0, then we obtain that a linear system always gives the output 0 when the zero function is given as the input.
(Refer Slide Time: 09:52)
Let us see some examples of linear and non-linear systems. Consider a static input-output system. A static system means that the output depends only on the value of the input, and not on its derivative or integral. In such a situation we can draw a graph of the output versus the input. In the first situation there are 3 examples, where the force f is plotted against the input v: v is the input, f is the output. The first one is clearly non-linear; the last one, which corresponds to a saturation nonlinearity, is also non-linear. The middle one is also non-linear, but that requires a slightly more careful look. Please note that just because the graph is a line, it does not mean that the system is a linear system. That is written here: a line is not equivalent to a linear system.
(Refer Slide Time: 10:59)
But consider this system with input x and output F; both x and F are functions of time. This is another example of a static system: the input-output relation does not involve derivatives or integrals of the variables x and F, hence we can plot the output variable as a function of the input variable at any time instant. Does this line pass through the origin? When we ask this question, we see that for this system the input 0, the zero function, gives output 0; that important property has to be satisfied for a line as well, and only then can we call the system linear.
So, this system with input x and output F is a linear system. This is not related to the graph of the variable x as a function of time t being a line. Please note here that F and x are variables of the system, one is the input and one is the output, and it is this graph of F versus x which happens to be a line. If this line passes through the origin, then the system is linear. More generally, for systems which do not have a static relation between the inputs and outputs, we do not draw such a graph; in that situation we have to go back to the principle of superposition and check that for deciding whether the system is linear.
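A small numerical sketch of this point, contrasting a hypothetical static map F(x) = 2x (a line through the origin) with F(x) = 2x + 1 (a line not through the origin); the slope and offset are illustrative assumptions, not values from the slide.

```python
import numpy as np

def check_superposition(F, trials=100, seed=0):
    """Return True if F(a1*x1 + a2*x2) == a1*F(x1) + a2*F(x2) on random samples."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x1, x2, a1, a2 = rng.standard_normal(4)
        if not np.isclose(F(a1 * x1 + a2 * x2), a1 * F(x1) + a2 * F(x2)):
            return False
    return True

through_origin = lambda x: 2.0 * x        # a line through the origin
offset_line    = lambda x: 2.0 * x + 1.0  # a line NOT through the origin

print(check_superposition(through_origin))  # True: this static map is linear
print(check_superposition(offset_line))     # False: being a line is not enough
```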
(Refer Slide Time: 12:35)
We now come to autonomous systems. When do we call an autonomous system linear? For that question, we will quickly see the definition of an autonomous system. A system is autonomous if there are no inputs. In other words, once the initial conditions are specified, there is a trajectory that evolves, a unique trajectory. There is no room for shaping or controlling it, because there are no inputs to the system. To what extent is the trajectory unique? These are important questions we will analyze in detail. So, consider the differential equation x dot = 5x, and suppose the value of x at time t = 6 is specified.
Suppose it is specified as x(6) = -3.4. Then we see that the solution looks like x(t) = a e^(5t); this is how the solution to this differential equation looks, and if we use the initial condition, we obtain a unique value of a; this value of a corresponds to that initial condition. So, we see that this is an autonomous system. Another important example is dx/dt = -3x + 2; this is another autonomous system. These are systems for which, once the initial condition is specified, there are no inputs and hence the trajectory is fully determined.
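To make the "unique value of a" concrete, here is a minimal sketch, assuming (as the solution form a e^(5t) suggests) that the equation on the slide is x dot = 5x; it computes a from the initial condition x(6) = -3.4 and cross-checks it against a numerical integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# x_dot = 5x with x(6) = -3.4; general solution x(t) = a * exp(5 t)
t0, x0 = 6.0, -3.4
a = x0 / np.exp(5 * t0)          # the unique 'a' fixed by the initial condition
print(a)                          # roughly -3.18e-13

# Cross-check: integrate from t = 6 to t = 7 and compare with a*exp(5t)
sol = solve_ivp(lambda t, x: 5 * x, (t0, 7.0), [x0], rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[0], a * np.exp(5 * sol.t), rtol=1e-6))   # True
```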
(Refer Slide Time: 14:02)
More generally, suppose x is a map from R to R^n, in which the domain R we interpret as time and the codomain R^n is a vector space with n components. At each time instant, x(t) is an element of R^n. Suppose the differential equation is x dot(t) = f(x(t)), or in short, suppressing the variable t, x dot = f(x). The dot means derivative with respect to time. So, here at every time instant x(t) is an element of R^n, and this shorthand notation x dot = f(x) actually contains n equations within it.
So, the first equation is x1 dot = f1(x), the second one is x2 dot = f2(x), and so on. Thus f also is a map from R^n to R^n: f takes a value of x, which is an element of R^n, and gives out another vector, again in R^n. Now, for each initial condition, to what extent can we say that there exists a solution x(t) satisfying the above differential equation, and to what extent is it unique? These are important questions we will address. For the time being, please assume that for each initial condition there is a trajectory that evolves from that initial condition. This trajectory itself is a vector-valued function x(t), a function of time.
(Refer Slide Time: 15:50)
Then we can ask: is this map linear? Which map? The map that sends each initial condition to a trajectory, is that map linear? In other words, suppose b1 and b2 are two vectors in R^n, and with the initial conditions b1 and b2 we have, respectively, solutions x1 and x2 as functions of time. Suppose the initial condition alpha1 b1 + alpha2 b2 results in the trajectory alpha1 x1 + alpha2 x2. If this property is true, then we will say that the map is linear, and in such a situation we will also say that the autonomous system is a linear autonomous system.
So, again as I said, this is required to be true for any real numbers alpha1 and alpha2 and for any two vectors b1 and b2 in R^n. This is also equivalent to saying that the set of solutions to the differential equation forms a vector space over R; that is, if x1 and x2 are two solutions, then alpha1 x1 + alpha2 x2 also satisfies the differential equation, in other words alpha1 x1 + alpha2 x2 is also a solution of that differential equation. We can ask: is the trajectory 0 a solution of the differential equation? The trajectory 0 again means the zero function. So, we can now take the system dx/dt = -3x + 2, substitute the trajectory x(t) identically equal to 0, and check whether the zero function satisfies the differential equation. We will find that 0 is not a solution of this differential equation, and hence this autonomous system is not a linear autonomous system.
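A minimal numerical sketch of this check: substitute the zero trajectory into dx/dt = -3x + 2 and look at the residual, and compare the sum of two solutions with the solution started from the sum of their initial conditions; the particular initial conditions used are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: -3 * x + 2            # the autonomous system dx/dt = -3x + 2
t_eval = np.linspace(0, 2, 201)

# 1) The zero trajectory: x(t) = 0 gives x_dot = 0, but f(0) = 2, so the residual is not 0.
print(f(0.0, 0.0))                      # 2.0 -> the zero function is NOT a solution

# 2) Superposition of solutions also fails for this system.
x1 = solve_ivp(f, (0, 2), [1.0], t_eval=t_eval).y[0]
x2 = solve_ivp(f, (0, 2), [-0.5], t_eval=t_eval).y[0]
x_sum = solve_ivp(f, (0, 2), [1.0 + (-0.5)], t_eval=t_eval).y[0]
print(np.allclose(x_sum, x1 + x2))      # False -> not a linear autonomous system
```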
(Refer Slide Time: 17:47)
So, we come to some features that are present only in non-linear systems. This makes the study of non-linear systems extremely interesting, and also challenging. What are some features that are present only in non-linear systems? The first important feature is what we call finite escape time. Let me explain these terms one by one: escape in this situation means escape to infinity.
So, escape to infinity means: does the solution x(t) become unbounded in R^n? Can a solution approach infinity, can it become unbounded, at a finite time instant? That is the question. Escape here means escape to infinity, that the solution becomes unbounded, and can this happen when t itself is finite? Or is it that the solution x(t) can become unbounded only as t tends to infinity? This is the important question that we are going to address. Consider a linear unstable system; the exact definition of unstable we will see later, but for now consider the differential equation x dot = x, in which at any time instant x has only one component.
So, x(t) is an element of R. Solving this differential equation, we get x(t) = x(0) e^t. We see that x(t) becomes unbounded as t increases for a non-zero initial condition x(0), but we also see that x(t) becomes unbounded only as t tends to infinity. It does not become unbounded at any finite time t: if anybody gives us a finite value of time t, we can evaluate x(0) e^t and see that it is again a finite number. But for non-linear systems the escape time can be finite, which is not possible for linear systems.
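As an illustration of finite escape time, consider the standard scalar example x dot = x^2 (the function x^2 also appears in this lecture's list of examples): with the assumed initial condition x(0) = 1, the exact solution is x(t) = 1/(1 - t), which blows up at t = 1. The sketch below contrasts it with the linear system x dot = x.

```python
from scipy.integrate import solve_ivp

linear    = lambda t, x: x          # x(t) = x(0) e^t: unbounded only as t -> infinity
nonlinear = lambda t, x: x**2       # x(t) = x(0)/(1 - x(0) t): escapes at t = 1/x(0)

# Linear system stays finite on any finite interval.
sol_lin = solve_ivp(linear, (0, 10), [1.0])
print(sol_lin.y[0, -1])             # about e^10: large but finite

# Non-linear system: the integrator cannot get past the escape time t = 1.
sol_nl = solve_ivp(nonlinear, (0, 10), [1.0], rtol=1e-8)
print(sol_nl.status, sol_nl.t[-1])  # typically status -1 (step failure) with t[-1] just below 1
```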
(Refer Slide Time: 19:54)
Another important feature: for a linear system we can ask, if initially the system is at equilibrium (what is equilibrium? all the forces acting on the trajectory are in balance), is the system going to remain in equilibrium? If we are at an initial condition such that the system is at equilibrium, does it mean that the system will remain at equilibrium for all future time? This is the question of uniqueness and non-uniqueness of trajectories, and we are going to address this situation for linear systems. For linear systems it turns out that if the system is initially at equilibrium, then it cannot emanate out of equilibrium without a perturbation.
But a non-linear system can exhibit this phenomenon even without perturbations: the trajectory can emanate out of equilibrium in the absence of perturbations, and in that sense we see that non-linear differential equations can possibly have non-uniqueness of solutions. The analogue of this situation is the question: can we reach an equilibrium point in finite time under some suitable nonlinearity? It turns out that we can reach the equilibrium point in finite time, and this aspect too is not present in linear systems. What is possible for linear systems? For linear systems the solution can reach the equilibrium only asymptotically, not in finite time, but only as t tends to infinity.
(Refer Slide Time: 21:43)
Another important feature of non-linear systems concerns the notion of an equilibrium point. While equilibrium points are also present in linear systems, we will see that the equilibrium points of a linear system are all connected, unlike non-linear systems, where we could have multiple equilibrium points which are not connected, in which case we will call them isolated. We will see this in detail now. An equilibrium point is a point b such that if the trajectory is initially at b, then it remains at b for all future time; this definition we will see more carefully very soon. An equilibrium point we understand as an initial condition such that if we start there, we remain there for all future time. So, if b is an equilibrium point, are there other equilibrium points? This is a question we can ask.
The next question we can ask is: if there are other equilibrium points, are they close by, and if they are close by, can they be arbitrarily close, in other words, can they be connected? Suppose the closest other equilibrium point is at least some non-zero distance away; in other words, there is some small distance within which there is no equilibrium point other than the point b. In such a situation we will call b isolated. So when will we call b isolated? If, in the situation that there are other equilibrium points, every other equilibrium point is at least some non-zero distance away from the point b. In such a situation we will say that there are multiple equilibrium points, but b is an isolated equilibrium point.
For linear systems, in case there are multiple equilibrium points, they are all non-isolated; in other words, they are all connected to each other. Take any equilibrium point of a linear system: within any small enough distance, we will find another equilibrium point in the vicinity.
(Refer Slide Time: 23:48)
Another important feature of non-linear systems is that we can have periodic orbits which are isolated; even periodic orbits can be isolated, just like equilibrium points can be isolated. This is relevant in the context of robust sustained oscillations; we will see these terms carefully now. Why are robust sustained oscillations, with a fixed amplitude and a fixed frequency, important? They are very relevant for building oscillators in a laboratory, and it turns out that such a situation is not possible for linear systems.
In linear systems, if we get periodic orbits for a certain linear system, then the amplitude will depend very crucially on the initial conditions: if the initial conditions are different, the amplitude will no longer be the same, and it is very unlikely that by changing the initial condition we will get the same amplitude. The frequency of the periodic orbit also depends crucially on the system parameters; small perturbations in the system parameters can change the frequency, and in fact the system could also lose the property of having periodic orbits altogether.
Why is this not acceptable in laboratory conditions? We would like oscillators that are just switched on and give fairly reliable amplitude and frequency of oscillation, so that we are able to build an oscillator using that particular differential equation. This is possible only using non-linear systems.
(Refer Slide Time: 25:24)
Now, we will see what an equilibrium point is. Consider the differential equation x dot = f(x), in which at any time t, x(t) is an element of R^n; there are n components in the vector x. A point a is said to be an equilibrium point if x(t) identically equal to a is a solution of the differential equation. So take the trajectory x(t) identically equal to a: if we see that this is a solution of the differential equation, then the point a is said to be an equilibrium point. What does this require from f? If the trajectory is to remain at a, the rate of change of x with respect to time should be equal to 0 when evaluated at the point x = a.
At the point x = a, x dot is nothing but f(a); in other words, when f is evaluated at a we get 0, the zero vector. Now we can ask: is the converse true? What is the converse of the statement? Suppose a is a vector in R^n such that f evaluated at a is equal to 0; does that mean that x(t) identically equal to a is a solution of the differential equation? This really suggests that the converse should also be true, and we will see that under some fairly mild assumptions this is indeed true, and it is also the only solution. What are these mild conditions? We will see that there is an important condition called the Lipschitz condition, and the Lipschitz condition we would like to call mild because this is how most functions f really look; but this is a topic that we will cover in detail later.
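As a small sketch of this definition, take the earlier scalar example dx/dt = -3x + 2: f vanishes at a = 2/3, so a is an equilibrium point, and a numerical integration started exactly at a stays there (up to solver tolerance).

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: -3 * x + 2       # scalar system dx/dt = -3x + 2
a = 2.0 / 3.0                     # f(a) = 0, so a is an equilibrium point
print(f(0.0, a))                  # essentially 0 (up to floating point)

# Start exactly at the equilibrium: the trajectory stays at a for all future time.
sol = solve_ivp(f, (0, 5), [a], rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[0], a))   # True (up to solver tolerance)
```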
(Refer Slide Time: 27:30)
Now, we will quickly see another interpretation of a differential equation. Consider the differential equation x dot = f(x), with x(t) an element of R^n. At each point a in R^n, f(a) is a vector starting from a: at each point a we evaluate f at a, which is also an element of R^n, and we place this vector so that it starts from a. What does this vector denote? It denotes where the point a is evolving towards, both in direction and in magnitude.
So, f(a) = 0 means the arrow there has length 0; in other words, there is no evolution from that point, the rate of change at that point is equal to 0. This is what we would also like to call stationary: if we start at that place, then everything is stationary and the system does not evolve. This is also what we call an equilibrium point. So, what is a vector field about?
At each point in R^n we are attaching a vector; this is unlike a scalar field, where at each point we would specify a scalar value. For example, the temperature at every point in a room would be a scalar field, but in our situation, at every point a in R^n we have a vector with an equal number of components. Hence we call this a vector field, and moreover this vector at every point is precisely the rate of evolution when we are at that point. This is what a differential equation is: a first order differential equation is exactly this notion, where at every point a we attach a vector, and this vector denotes the rate of change of that particular point under the action of the differential equation.
(Refer Slide Time: 29:37)
So, we will end today's lecture by beginning the topic of scalar systems. Consider the scalar differential equation x dot(t) = f(x(t)). What is scalar about it? x(t) has only one component; it is a real number. This is also called a one-dimensional system, and in such a situation f is a map from R to R. For example, f(x) = 3x - 2, or f(x) = (x - 3)(x - 9), or x^2, or sin(x); these are the examples of f that we will see today. This particular situation is best seen using a figure.
(Refer Slide Time: 30:27)
So, we are going to attach a vector at each point. Consider f(x) = 3x - 2 and suppose we take the point x = 4. For this point x = 4 we evaluate f at 4 and obtain 10. What this means is that at this particular point there is an arrow which starts from the point 4 and points to the right. Why to the right? Because the number 10 that we have obtained is positive; moreover, in addition to pointing to the right, towards the direction of increasing x, it is a vector of length 10. At another point, for example x = 0, we can check what f evaluates to at x = 0, and for that we get -2.
So it means that at 0 we draw a vector which points in the negative direction of x and has length equal to 2. For a scalar differential equation x dot = f(x), at each point we can draw a vector to the right or to the left depending on whether f at that point is positive or negative. In this particular situation we see that f(x) = 0 precisely at x = 2/3. So suppose 2/3 is a point here, and f(x) is a line.
So, now we are going to plot a graph of f versus x. Even though x itself is a function of time, we are plotting f as a function of x, and we see that we get this line, which passes through the point x = 2/3; at this point f becomes equal to 0, so at this point the vector has length 0. Everywhere to the right of this point the vector points to the right. Why is it to the right? Because f is positive everywhere to the right of this particular point. So, for this particular situation, let me draw a slightly bigger figure.
(Refer Slide Time: 33:04)
We are considering the differential equation x dot = 3x - 2 and we are going to plot f versus x, even though x itself is a function of time. We will also later plot x as a function of time, but for now we are interested in drawing the vector field. We took a point 4, there is a point 2/3 here, and there is a point 0. At the point 4 we already saw that the vector is directed to the right; at the point 2/3 the vector has length 0; and at the point 0 the vector is directed to the left, towards the decreasing direction of x.
So what does this mean? When we plot f(x) versus x, we see that to the right of the point 2/3 the arrow is marked to the right. Why is it to the right? Because at this particular point, say x = 1, we see that f is positive; at x = 1, because f is positive, x dot is positive, in other words x is increasing. More generally, if we are given a function f and f is scalar, we can draw its graph and decide at which points x is increasing and at which points x is decreasing, by just seeing whether f is positive or negative at that particular value of x.
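A minimal sketch of this sign test for f(x) = 3x - 2, printing for a few sample points whether the arrow points right (x increasing), left (x decreasing), or has length zero (equilibrium); the sample points are chosen only for illustration.

```python
f = lambda x: 3 * x - 2          # scalar vector field for x_dot = 3x - 2

for x in [-1.0, 0.0, 2.0 / 3.0, 1.0, 4.0]:
    fx = f(x)
    if abs(fx) < 1e-12:          # numerically zero: equilibrium point
        direction = "length-zero arrow: equilibrium point"
    elif fx > 0:
        direction = "arrow to the right (x increasing)"
    else:
        direction = "arrow to the left (x decreasing)"
    print(f"x = {x:5.2f}, f(x) = {fx:6.2f} -> {direction}")
```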
(Refer Slide Time: 34:51)
This is the way we will analyze the other example that we saw. Consider the differential equation x dot = (x - 3)(x - 9), which we want to call f(x). So, this point here is 3, this one is 9, and the graph of this function looks roughly like this. This function has roots at x = 3 and x = 9, and hence it passes through 0 precisely at x = 3 and x = 9. If we take the point 3, we see that the vector at the point 3 has length 0, and hence we plot it neither to the right nor to the left.
On the other hand, consider the point 6. At x = 6 we expect f to be negative; we can check that f(6) = 3 times (-3), which is -9. Since it is negative, and we can also see this from the graph, the vector here points to the left, towards the decreasing direction of x. We can also see this because x dot = f(x): at x = 6, with x dot negative, x would start decreasing, and that is precisely what this arrow shows; the arrow shows the direction in which x will evolve. On the other hand, suppose we take x = 11. At x = 11 we can easily draw the arrow to the right, because at x = 11 the function f takes a positive value.
So, we are able to draw all the arrows for this particular example. All we have to do is see where the equilibrium points are, and to the left and to the right of the equilibrium points we draw the arrows towards the increasing or decreasing direction of x, depending on whether f takes positive or negative values there. Another important point to note is that if we start at the equilibrium point x = 9, we will of course remain at 9, because x dot evaluated at x = 9 is equal to 0, and hence x does not change at all; it will remain at the point 9. Similarly, x = 3 is also an equilibrium point.
So, as we can see, we have made a mistake here: all these arrows for x less than 3, where f is positive, cannot point towards the decreasing direction of x. All these directions have to be reversed; they should all point towards the increasing direction of x. So, this particular figure I will quickly draw again.
(Refer Slide Time: 38:17)
So, this is the differential equation x dot = (x - 3)(x - 9). Another important feature we can see here is that if we start slightly to the right of the equilibrium point 9, then x is going to increase and it will go farther from 9. So, a very small perturbation to the positive side of 9 takes that initial condition further away from 9, even though we noted that at the point 9 itself the trajectory remains at 9 for all future time; slightly to the right, for a very small perturbation, the trajectory goes away from 9.
Also, slightly to the left of the point 9, we see that the trajectories are again directed away from 9. Why? Slightly to the left of 9 the function f is negative, so x will move further away from 9: it will decrease further. So we would like to say that this particular point is an equilibrium point, but it is an unstable equilibrium point: for very small perturbations, both to the right and to the left, the trajectories move away from this equilibrium point.
On the other hand, please note that 3 is also an equilibrium point, but for very small perturbations to the right, all the arrows point towards the point 3: we expect that for a small perturbation towards the positive direction of 3, the trajectories move back towards 3. On the other hand, if we move slightly to the left of 3, meaning we start from an initial condition, for example x = 2.9 at t = 0, we see that x dot is greater than 0, which is why the arrow is marked to the right, and hence x will increase and approach 3 again.
So, this equilibrium point we would like to call a stable equilibrium point. In the context of Lyapunov stability, we will see more precise definitions of stable, unstable and asymptotically stable equilibrium points. For now, for a scalar system, by looking at the graph of f versus x we are able to decide which are the equilibrium points, and we are also able to decide whether these equilibrium points are stable or unstable.
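A minimal sketch of this graphical test applied to x dot = (x - 3)(x - 9): each root of f is an equilibrium point, and checking the sign of f slightly to its left and right classifies it (f positive on the left and negative on the right means trajectories move towards the point, i.e. stable; otherwise unstable). The probe distance 0.01 is an arbitrary illustrative choice.

```python
f = lambda x: (x - 3) * (x - 9)      # scalar vector field x_dot = (x - 3)(x - 9)
equilibria = [3.0, 9.0]              # roots of f
eps = 0.01                           # small probe distance (illustrative choice)

for a in equilibria:
    left, right = f(a - eps), f(a + eps)
    # Stable if trajectories move towards a on both sides: f > 0 on the left, f < 0 on the right.
    verdict = "stable" if (left > 0 and right < 0) else "unstable"
    print(f"x = {a}: f({a - eps:.2f}) = {left:+.4f}, f({a + eps:.2f}) = {right:+.4f} -> {verdict}")
```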
So, now we will see another example from the list of examples we saw.
(Refer Slide Time: 41:21)
Consider this graph; this is f(x), the example f(x) = x^2, and f at 0 is equal to 0 because 0 is a root of this function. So we see that the point 0 itself is an equilibrium point: if we start there, we are going to remain there. Slightly to the right we see that f is positive, and hence the arrows are directed towards the right. On the other hand, slightly to the left of the point, we see that f is again positive, and hence the arrows are again directed towards the right. In other words, the value of x is going to keep increasing whether it is to the right or to the left of the point 0; only at the point 0 itself, the value of f being 0, x does not change: x dot is equal to 0.
So, now we would like to ask: is this equilibrium point stable or unstable? The property of being stable or unstable we attribute only to equilibrium points, and x = 0 is an equilibrium point. We see that slightly to the left, when x is slightly negative, for example x = -0.1, the value of x dot is positive, and hence x is going to increase and approach 0. So can we call 0 a stable equilibrium point? We can answer only after we analyze what happens to the right of the point 0: to the right, x dot is again positive, and hence x is going to increase further and move away from 0.
So we see that for certain perturbations the trajectory comes back to 0, and for certain other perturbations it goes away from 0. That is, there exist some perturbations such that the corresponding initial conditions do not come back to the equilibrium point. In such a situation we are going to say that this equilibrium point is unstable. When will we call it unstable? When there are some bad perturbations: there exist some perturbations such that those initial conditions do not come back to the equilibrium point but go away; in such a situation that equilibrium point is unstable.
We are not going to be satisfied with just some perturbations that come back to the equilibrium point; we are unhappy that there are some perturbations that go away from that equilibrium point, and hence that equilibrium point is classified as unstable. We will see more about stability, instability and asymptotic stability in the following lectures, but we will end this lecture by looking at a similar graph, which we would like to be done as homework. For which example?
(Refer Slide Time: 45:26)
We have already seen 3x - 2, and we have also seen x^2. Now we will quickly decide what the equilibrium points are for the function sin(x), and whether they are stable or unstable. There are several equilibrium points for the differential equation x dot = sin(x). Please note that x itself is not a sinusoidal trajectory; it is a differential equation in which sin appears. Here we see that all the zero crossings of sin(x) are equilibrium points, and depending on whether sin(x) is positive or negative before and after each equilibrium point, we are able to classify these equilibrium points as stable or unstable.
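As a small check the viewer can use when verifying this as homework: the equilibrium points of x dot = sin(x) are the integer multiples of pi, and the same left/right sign test classifies them; which multiples to print and the probe distance are illustrative choices.

```python
import numpy as np

f = np.sin                          # scalar vector field x_dot = sin(x)
eps = 0.01                          # small probe distance (illustrative choice)

for k in range(-2, 3):              # a few equilibrium points x = k*pi
    a = k * np.pi
    left, right = f(a - eps), f(a + eps)
    # Stable if trajectories move towards a from both sides: f > 0 on the left, f < 0 on the right.
    verdict = "stable" if (left > 0 and right < 0) else "unstable"
    print(f"equilibrium at x = {k}*pi: {verdict}")
```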
So, here we can draw these arrows to the right, and here to the left; similarly here again we can draw. We see that this equilibrium point is unstable, this one is stable, this is unstable, this is stable. So, for this particular differential equation, we are able to see that there are several equilibrium points, and they are alternately stable or unstable. This is something that we expect the viewer to carefully verify. With this we end today's lecture; we will continue with these aspects in more detail in the next lecture.
Thank you.