
Nonlinear Control

Lecture # 1
Introduction

Nonlinear Control Lecture # 1 Introduction


Nonlinear State Model

ẋ1 = f1 (t, x1 , . . . , xn , u1 , . . . , um )
ẋ2 = f2 (t, x1 , . . . , xn , u1 , . . . , um )
⋮
ẋn = fn (t, x1 , . . . , xn , u1 , . . . , um )

ẋi denotes the derivative of xi with respect to the time
variable t
u1 , u2 , . . . , um are the input variables
x1 , x2 , . . . , xn are the state variables



x = [x1 , x2 , . . . , xn ]T , u = [u1 , u2 , . . . , um ]T ,
f (t, x, u) = [f1 (t, x, u), f2 (t, x, u), . . . , fn (t, x, u)]T

ẋ = f (t, x, u)



ẋ = f (t, x, u)
y = h(t, x, u)

x is the state, u is the input


y is the output (q-dimensional vector)

Special Cases:
Linear systems:
ẋ = A(t)x + B(t)u
y = C(t)x + D(t)u

Unforced state equation:


ẋ = f (t, x)

Results from ẋ = f (t, x, u) with u = γ(t, x)



Autonomous System:
ẋ = f (x)
Time-Invariant System:

ẋ = f (x, u)
y = h(x, u)

A time-invariant state model has a time-invariance property


with respect to shifting the initial time from t0 to t0 + a,
provided the input waveform is applied from t0 + a rather than
t0



Existence and Uniqueness of Solutions
ẋ = f (t, x)
f (t, x) is piecewise continuous in t and locally Lipschitz in x
over the domain of interest
f (t, x) is piecewise continuous in t on an interval J ⊂ R if for
every bounded subinterval J0 ⊂ J, f is continuous in t for all
t ∈ J0 , except, possibly, at a finite number of points where f
may have finite-jump discontinuities
f (t, x) is locally Lipschitz in x at a point x0 if there is a
neighborhood N(x0 , r) = {x ∈ Rn | ‖x − x0‖ < r} where
f (t, x) satisfies the Lipschitz condition
‖f (t, x) − f (t, y)‖ ≤ L‖x − y‖, L > 0



A function f (t, x) is locally Lipschitz in x on a domain (open
and connected set) D ⊂ Rn if it is locally Lipschitz at every
point x0 ∈ D
When n = 1 and f depends only on x

|f (y) − f (x)| / |y − x| ≤ L

On a plot of f (x) versus x, a straight line joining any two


points of f (x) cannot have a slope whose absolute value is
greater than L
Any function f (x) that has infinite slope at some point is not
locally Lipschitz at that point



A discontinuous function is not locally Lipschitz at the points
of discontinuity
The function f (x) = x1/3 is not locally Lipschitz at x = 0
since
f ′ (x) = (1/3)x−2/3 → ∞ as x → 0
On the other hand, if f ′ (x) is continuous at a point x0 then
f (x) is locally Lipschitz at the same point because |f ′ (x)| is
bounded by a constant k in a neighborhood of x0 , which
implies that f (x) satisfies the Lipschitz condition with L = k
More generally, if for t ∈ J ⊂ R and x in a domain D ⊂ Rn ,
f (t, x) and its partial derivatives ∂fi /∂xj are continuous, then
f (t, x) is locally Lipschitz in x on D
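The two facts above can be probed numerically. This is an illustrative pure-Python sketch (not part of the lecture): the difference quotient for x^(1/3) blows up near the origin, while for sin x, whose derivative is bounded by 1, it never exceeds 1.

```python
import math

def lipschitz_ratio(f, x, y):
    # difference quotient |f(x) - f(y)| / |x - y|
    return abs(f(x) - f(y)) / abs(x - y)

# cube root, defined for negative arguments as well
cube_root = lambda x: math.copysign(abs(x) ** (1 / 3), x)

# Near x = 0 the quotient for x**(1/3) grows without bound:
r_small = lipschitz_ratio(cube_root, 1e-6, -1e-6)
r_tiny = lipschitz_ratio(cube_root, 1e-12, -1e-12)

# sin(x) has |f'(x)| <= 1 everywhere, so the quotient stays below 1:
ratios = [lipschitz_ratio(math.sin, a / 10, b / 10)
          for a in range(-20, 21) for b in range(-20, 21) if a != b]
```

Shrinking the test points toward zero makes the cube-root quotient arbitrarily large, which is exactly why no single constant L works at x = 0.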



Lemma 1.1
Let f (t, x) be piecewise continuous in t and locally Lipschitz
in x at x0 , for all t ∈ [t0 , t1 ]. Then, there is δ > 0 such that
the state equation ẋ = f (t, x), with x(t0 ) = x0 , has a unique
solution over [t0 , t0 + δ]

Without the local Lipschitz condition, we cannot ensure


uniqueness of the solution. For example, ẋ = x1/3 has
x(t) = (2t/3)3/2 and x(t) ≡ 0 as two different solutions when
the initial state is x(0) = 0
The lemma is a local result because it guarantees existence
and uniqueness of the solution over an interval [t0 , t0 + δ], but
this interval might not include a given interval [t0 , t1 ]. Indeed
the solution may cease to exist after some time



Example 1.3

ẋ = −x²

f (x) = −x² is locally Lipschitz for all x

x(0) = −1 ⇒ x(t) = 1/(t − 1)
x(t) → −∞ as t → 1
The solution has a finite escape time at t = 1
In general, if f (t, x) is locally Lipschitz over a domain D and
the solution of ẋ = f (t, x) has a finite escape time te , then
the solution x(t) must leave every compact (closed and
bounded) subset of D as t → te
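A crude forward-Euler integration (an illustrative sketch, not from the lecture) makes the finite escape time visible: the numerical solution tracks the exact x(t) = 1/(t − 1) and grows without bound as t approaches 1.

```python
def euler(f, x0, t_end, dt=1e-5):
    # forward-Euler integration from t = 0 to t_end
    x = x0
    for _ in range(int(round(t_end / dt))):
        x += dt * f(x)
    return x

f = lambda x: -x * x            # right-hand side of Example 1.3
x_half = euler(f, -1.0, 0.5)    # exact value: 1/(0.5 - 1) = -2
x_near = euler(f, -1.0, 0.999)  # exact value: 1/(0.999 - 1) = -1000
```

At t = 0.5 the numerical and exact values agree closely; by t = 0.999 the state has already grown by three orders of magnitude.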



Global Existence and Uniqueness
A function f (t, x) is globally Lipschitz in x if

‖f (t, x) − f (t, y)‖ ≤ L‖x − y‖

for all x, y ∈ Rn with the same Lipschitz constant L


If f (t, x) and its partial derivatives ∂fi /∂xj are continuous for
all x ∈ Rn , then f (t, x) is globally Lipschitz in x if and only if
the partial derivatives ∂fi /∂xj are globally bounded, uniformly
in t
f (x) = −x² is locally Lipschitz for all x but not globally
Lipschitz because f ′ (x) = −2x is not globally bounded



Lemma 1.2
Let f (t, x) be piecewise continuous in t and globally Lipschitz
in x for all t ∈ [t0 , t1 ]. Then, the state equation ẋ = f (t, x),
with x(t0 ) = x0 , has a unique solution over [t0 , t1 ]

The global Lipschitz condition is satisfied for linear systems of


the form
ẋ = A(t)x + g(t)
but it is a restrictive condition for general nonlinear systems



Lemma 1.3
Let f (t, x) be piecewise continuous in t and locally Lipschitz
in x for all t ≥ t0 and all x in a domain D ⊂ Rn . Let W be a
compact subset of D, and suppose that every solution of

ẋ = f (t, x), x(t0 ) = x0

with x0 ∈ W lies entirely in W . Then, there is a unique


solution that is defined for all t ≥ t0



Example 1.4

ẋ = −x³ = f (x)
f (x) is locally Lipschitz on R, but not globally Lipschitz
because f ′ (x) = −3x² is not globally bounded

If, at any instant of time, x(t) is positive, the derivative ẋ(t)


will be negative. Similarly, if x(t) is negative, the derivative
ẋ(t) will be positive
Therefore, starting from any initial condition x(0) = a, the
solution cannot leave the compact set {x ∈ R | |x| ≤ |a|}
Thus, the equation has a unique solution for all t ≥ 0
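The invariance argument can be checked numerically. This pure-Python sketch (an illustration, not from the lecture) integrates ẋ = −x³ from x(0) = 2 and records the largest |x| seen: the solution never leaves {|x| ≤ 2} and decays toward the origin.

```python
def simulate(x0, t_end=100.0, dt=1e-3):
    # forward-Euler integration of xdot = -x**3, tracking the largest |x|
    x, peak = x0, abs(x0)
    for _ in range(int(t_end / dt)):
        x += dt * (-x ** 3)
        peak = max(peak, abs(x))
    return x, peak

x_final, peak = simulate(2.0)
```

The exact solution is x(t) = x(0)/√(1 + 2t x²(0)), so after 100 time units x is near 2/√801 ≈ 0.07, consistent with the simulation.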



Change of Variables
Map: z = T (x), Inverse map: x = T −1 (z)
Definitions
A map T (x) is invertible over its domain D if there is a
map T −1 (·) such that x = T −1 (z) for all z ∈ T (D)
A map T (x) is a diffeomorphism if T (x) and T −1 (x) are
continuously differentiable
T (x) is a local diffeomorphism at x0 if there is a
neighborhood N of x0 such that T restricted to N is a
diffeomorphism on N
T (x) is a global diffeomorphism if it is a diffeomorphism
on Rn and T (Rn ) = Rn



Jacobian matrix

∂T /∂x = [∂Ti /∂xj ], the n × n matrix whose (i, j) entry is
∂Ti /∂xj ; its first row is [∂T1 /∂x1 , ∂T1 /∂x2 , . . . , ∂T1 /∂xn ]
and its last row is [∂Tn /∂x1 , ∂Tn /∂x2 , . . . , ∂Tn /∂xn ]

Lemma 1.4
The continuously differentiable map z = T (x) is a local
diffeomorphism at x0 if the Jacobian matrix [∂T /∂x] is
nonsingular at x0 . It is a global diffeomorphism if and only if
[∂T /∂x] is nonsingular for all x ∈ Rn and T is proper; that is,
lim‖x‖→∞ ‖T (x)‖ = ∞



Example 1.5
Negative Resistance Oscillator

ẋ1 = x2 , ẋ2 = −x1 − εh′ (x1 )x2 ;  ż1 = z2 /ε, ż2 = ε[−z1 − h(z2 )]

z = T (x) = [−h(x1 ) − x2 /ε, x1 ]T , ∂T /∂x = [ −h′ (x1 ) −1/ε ; 1 0 ]

det(∂T /∂x) = 1/ε is positive for all x

‖T (x)‖² = [h(x1 ) + x2 /ε]² + x1² → ∞ as ‖x‖ → ∞



Equilibrium Points
A point x = x∗ in the state space is said to be an equilibrium
point of ẋ = f (t, x) if

x(t0 ) = x∗ ⇒ x(t) ≡ x∗ , ∀ t ≥ t0

For the autonomous system ẋ = f (x), the equilibrium points


are the real solutions of the equation

f (x) = 0

An equilibrium point could be isolated; that is, there are no


other equilibrium points in its vicinity, or there could be a
continuum of equilibrium points



A linear system ẋ = Ax can have an isolated equilibrium point
at x = 0 (if A is nonsingular) or a continuum of equilibrium
points in the null space of A (if A is singular)
It cannot have multiple isolated equilibrium points,
for if xa and xb are two equilibrium points, then by linearity
any point on the line αxa + (1 − α)xb connecting xa and xb
will be an equilibrium point
A nonlinear state equation can have multiple isolated
equilibrium points. For example, the state equation

ẋ1 = x2 , ẋ2 = −a sin x1 − bx2

has equilibrium points at (x1 = nπ, x2 = 0) for


n = 0, ±1, ±2, · · ·



Linearization
A common engineering practice in analyzing a nonlinear
system is to linearize it about some nominal operating point
and analyze the resulting linear model
What are the limitations of linearization?
Since linearization is an approximation in the
neighborhood of an operating point, it can only predict
the “local” behavior of the nonlinear system in the
vicinity of that point. It cannot predict the “nonlocal” or
“global” behavior
There are “essentially nonlinear phenomena” that can
take place only in the presence of nonlinearity



Nonlinear Phenomena

Finite escape time


Multiple isolated equilibrium points
Limit cycles
Subharmonic, harmonic, or almost-periodic oscillations
Chaos
Multiple modes of behavior



Approaches to Nonlinear Control

Approximate nonlinearity
Compensate for nonlinearity
Dominate nonlinearity
Use intrinsic properties
Divide and conquer



Nonlinear Control
Lecture # 2
Two-Dimensional Systems

Nonlinear Control Lecture # 2 Two-Dimensional Systems


ẋ1 = f1 (x1 , x2 ) = f1 (x)
ẋ2 = f2 (x1 , x2 ) = f2 (x)

Let x(t) = (x1 (t), x2 (t)) be a solution that starts at initial


state x0 = (x10 , x20 ). The locus in the x1 –x2 plane of the
solution x(t) for all t ≥ 0 is a curve that passes through the
point x0 . This curve is called a trajectory or orbit
The x1 –x2 plane is called the state plane or phase plane
The family of all trajectories is called the phase portrait
The vector field f (x) = (f1 (x), f2 (x)) is tangent to the
trajectory at point x because

dx2 /dx1 = f2 (x)/f1 (x)



Vector Field diagram
Represent f (x) as a vector based at x; that is, assign to x the
directed line segment from x to x + f (x)

[Figure: f (x) drawn as an arrow in the x1 –x2 plane from the point x = (1, 1) to x + f (x) = (3, 2)]
Repeat at every point in a grid covering the plane



[Figure: vector field diagram over the square −2 ≤ x1 ≤ 2, −2 ≤ x2 ≤ 2]

ẋ1 = x2 , ẋ2 = − sin x1



Numerical Construction of the Phase Portrait

Select a bounding box in the state plane


Select an initial point x0 and calculate the trajectory
through it by solving

ẋ = f (x), x(0) = x0

in forward time (with positive t) and in reverse time (with


negative t)
ẋ = −f (x), x(0) = x0
Repeat the process interactively
Use Simulink or pplane
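The lecture suggests Simulink or pplane; as an illustrative stand-in (an assumption, not the course's tool), the same procedure can be sketched in pure Python with a classical RK4 integrator, using the pendulum vector field shown earlier.

```python
import math

def rk4_step(f, x, dt):
    # one classical Runge-Kutta step for a 2-D system
    k1 = f(x)
    k2 = f([x[i] + 0.5 * dt * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * dt * k2[i] for i in range(2)])
    k4 = f([x[i] + dt * k3[i] for i in range(2)])
    return [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

def trajectory(f, x0, t_end, dt=0.01):
    # integrate xdot = f(x) from x0 and return the sampled trajectory
    x, traj = list(x0), [list(x0)]
    for _ in range(int(t_end / dt)):
        x = rk4_step(f, x, dt)
        traj.append(list(x))
    return traj

pend = lambda x: [x[1], -math.sin(x[0])]   # pendulum vector field from above
fwd = trajectory(pend, [1.0, 0.0], 10.0)                             # forward time
rev = trajectory(lambda x: [-v for v in pend(x)], [1.0, 0.0], 10.0)  # reverse time

# For this conservative system E = 0.5*x2**2 + 1 - cos(x1) is constant
# along trajectories, which gives a cheap accuracy check:
energy = lambda p: 0.5 * p[1] ** 2 + 1 - math.cos(p[0])
```

Plotting `fwd` and `rev` for a grid of initial points reproduces the phase portrait; the conserved energy verifies that the integrator is accurate enough for that purpose.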



Qualitative Behavior of Linear Systems

ẋ = Ax, A is a 2 × 2 real matrix

x(t) = M exp(Jr t)M −1 x0


When A has distinct eigenvalues,

Jr = [ λ1 0 ; 0 λ2 ]  or  Jr = [ α −β ; β α ]

x(t) = Mz(t)

ż = Jr z(t)



Case 1. Both eigenvalues are real:

M = [v1 , v2 ]

v1 & v2 are the real eigenvectors associated with λ1 & λ2

ż1 = λ1 z1 , ż2 = λ2 z2

z1 (t) = z10 eλ1 t , z2 (t) = z20 eλ2 t


z2 = c z1^(λ2 /λ1 ) , c = z20 /(z10 )^(λ2 /λ1 )
The shape of the phase portrait depends on the signs of λ1
and λ2



λ2 < λ1 < 0
eλ1 t and eλ2 t tend to zero as t → ∞
eλ2 t tends to zero faster than eλ1 t
Call λ2 the fast eigenvalue (v2 the fast eigenvector) and λ1 the
slow eigenvalue (v1 the slow eigenvector)
The trajectory tends to the origin along the curve z2 = c z1^(λ2 /λ1 )
with λ2 /λ1 > 1

dz2 /dz1 = c (λ2 /λ1 ) z1^[(λ2 /λ1 )−1]



[Figure: trajectories z2 = c z1^(λ2 /λ1 ) in the z1 –z2 plane tending to the origin]
Stable Node
λ2 > λ1 > 0
Reverse arrowheads ⇒ Unstable Node



[Figure: (a) Stable Node and (b) Unstable Node in the x1 –x2 plane, with slow eigenvector v1 and fast eigenvector v2 ]



λ2 < 0 < λ1

eλ1 t → ∞, while eλ2 t → 0 as t → ∞


Call λ2 the stable eigenvalue (v2 the stable eigenvector) and
λ1 the unstable eigenvalue (v1 the unstable eigenvector)
z2 = c z1^(λ2 /λ1 ) , λ2 /λ1 < 0

Saddle



[Figure: Phase Portrait of a Saddle Point — (a) z1 –z2 plane, (b) x1 –x2 plane with eigenvectors v1 and v2 ]



Case 2. Complex eigenvalues: λ1,2 = α ± jβ

ż1 = αz1 − βz2 , ż2 = βz1 + αz2

 
r = √(z1² + z2²) , θ = tan−1 (z2 /z1 )

r(t) = r0 eαt and θ(t) = θ0 + βt

α < 0 ⇒ r(t) → 0 as t → ∞

α > 0 ⇒ r(t) → ∞ as t → ∞

α = 0 ⇒ r(t) ≡ r0 ∀ t



[Figure: trajectories for (a) α < 0 (Stable Focus), (b) α > 0 (Unstable Focus), (c) α = 0 (Center), shown in both the z1 –z2 and x1 –x2 planes]



Effect of Perturbations
A → A + δA (δA arbitrarily small)
The eigenvalues of a matrix depend continuously on its
parameters
A node (with distinct eigenvalues), a saddle or a focus is
structurally stable because the qualitative behavior remains
the same under arbitrarily small perturbations in A
A center is not structurally stable
 
[ µ 1 ; −1 µ ] , Eigenvalues = µ ± j

µ < 0 ⇒ Stable Focus, µ > 0 ⇒ Unstable Focus



Qualitative Behavior Near Equilibrium Points

The qualitative behavior of a nonlinear system near an


equilibrium point can take one of the patterns we have seen
with linear systems. Correspondingly the equilibrium points are
classified as stable node, unstable node, saddle, stable focus,
unstable focus, or center
Can we determine the type of the equilibrium point of a
nonlinear system by linearization?



Let p = (p1 , p2 ) be an equilibrium point of the system

ẋ1 = f1 (x1 , x2 ), ẋ2 = f2 (x1 , x2 )

where f1 and f2 are continuously differentiable


Expand f1 and f2 in Taylor series about (p1 , p2 )

ẋ1 = f1 (p1 , p2 ) + a11 (x1 − p1 ) + a12 (x2 − p2 ) + H.O.T.


ẋ2 = f2 (p1 , p2 ) + a21 (x1 − p1 ) + a22 (x2 − p2 ) + H.O.T.

a11 = ∂f1 /∂x1 |x=p , a12 = ∂f1 /∂x2 |x=p
a21 = ∂f2 /∂x1 |x=p , a22 = ∂f2 /∂x2 |x=p



f1 (p1 , p2 ) = f2 (p1 , p2 ) = 0

y1 = x1 − p1 y2 = x2 − p2

ẏ1 = ẋ1 = a11 y1 + a12 y2 + H.O.T.


ẏ2 = ẋ2 = a21 y1 + a22 y2 + H.O.T.

ẏ ≈ Ay

A = [ a11 a12 ; a21 a22 ] = [ ∂f /∂x ]x=p



Eigenvalues of A Type of equilibrium point
of the nonlinear system
λ2 < λ1 < 0 Stable Node
λ2 > λ1 > 0 Unstable Node
λ2 < 0 < λ1 Saddle
α ± jβ, α < 0 Stable Focus
α ± jβ, α > 0 Unstable Focus
±jβ Linearization Fails
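The table can be turned into a small classifier. This is an illustrative pure-Python sketch (a hypothetical helper, not part of the lecture) that computes the eigenvalues of a 2 × 2 linearization from its trace and determinant and returns the label from the table; the sample calls use the pendulum-type matrices (a = 1, b = 0.3) that appear in the examples.

```python
import cmath

def classify(a11, a12, a21, a22):
    # eigenvalues of a 2x2 matrix via lambda**2 - tr*lambda + det = 0
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if abs(l1.imag) > 1e-12:              # complex pair alpha +/- j*beta
        if l1.real < 0:
            return "stable focus"
        if l1.real > 0:
            return "unstable focus"
        return "linearization fails"      # eigenvalues +/- j*beta
    r1, r2 = l1.real, l2.real
    if r1 * r2 < 0:
        return "saddle"
    if r1 < 0 and r2 < 0:
        return "stable node"
    if r1 > 0 and r2 > 0:
        return "unstable node"
    return "linearization fails"          # eigenvalue on the imaginary axis

print(classify(0, 1, -1, -0.3))   # linearization at (0, 0) -> stable focus
print(classify(0, 1, 1, -0.3))    # linearization at (pi, 0) -> saddle
```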



Example 2.1

ẋ1 = −x2 − µx1 (x1² + x2²)

ẋ2 = x1 − µx2 (x1² + x2²)

x = 0 is an equilibrium point

∂f /∂x = [ −µ(3x1² + x2²)  −(1 + 2µx1 x2 ) ; (1 − 2µx1 x2 )  −µ(x1² + 3x2²) ]

A = ∂f /∂x |x=0 = [ 0 −1 ; 1 0 ]

x1 = r cos θ and x2 = r sin θ ⇒ ṙ = −µr³ and θ̇ = 1


Stable focus when µ > 0 and Unstable focus when µ < 0



For a saddle point, we can use linearization to generate the
stable and unstable trajectories
Let the eigenvalues of the linearization be λ1 > 0 > λ2 and
the corresponding eigenvectors be v1 and v2
The stable and unstable trajectories will be tangent to the
stable and unstable eigenvectors, respectively, as they
approach the equilibrium point p
For the unstable trajectories use x0 = p ± αv1
For the stable trajectories use x0 = p ± αv2
α is a small positive number



Nonlinear Control
Lecture # 3
Two-Dimensional Systems

Nonlinear Control Lecture # 3 Two-Dimensional Systems



Multiple Equilibria
Example 2.2: Tunnel-diode circuit

[Figure: (a) tunnel-diode circuit with source E, resistor R, inductor L (current iL ), capacitor C (voltage vC ), and diode current iR = h(vR ); (b) the diode characteristic i = h(v), i in mA versus v in V]

x1 = vC , x2 = iL



ẋ1 = 0.5[−h(x1 ) + x2 ]
ẋ2 = 0.2(−x1 − 1.5x2 + 1.2)

h(x1 ) = 17.76x1 − 103.79x1² + 229.62x1³ − 226.31x1⁴ + 83.72x1⁵


[Figure: load line and diode characteristic in the vR –iR plane intersecting at three equilibrium points]
Q1 = (0.063, 0.758), Q2 = (0.285, 0.61), Q3 = (0.884, 0.21)



 
∂f /∂x = [ −0.5h′ (x1 ) 0.5 ; −0.2 −0.3 ]

A1 = [ −3.598 0.5 ; −0.2 −0.3 ] , Eigenvalues: −3.57, −0.33

A2 = [ 1.82 0.5 ; −0.2 −0.3 ] , Eigenvalues: 1.77, −0.25

A3 = [ −1.427 0.5 ; −0.2 −0.3 ] , Eigenvalues: −1.33, −0.4

Q1 is a stable node; Q2 is a saddle; Q3 is a stable node
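The quoted eigenvalues are easy to cross-check. This pure-Python sketch (an illustration, not from the lecture) solves the 2 × 2 characteristic polynomial λ² − tr(A)λ + det(A) = 0 for each matrix.

```python
import math

def eig2(a11, a12, a21, a22):
    # eigenvalues of a 2x2 matrix with real eigenvalues
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    d = math.sqrt(tr * tr - 4 * det)   # discriminant is positive for A1, A2, A3
    return (tr + d) / 2, (tr - d) / 2

l1, l2 = eig2(-3.598, 0.5, -0.2, -0.3)   # Q1: both negative -> stable node
m1, m2 = eig2(1.82, 0.5, -0.2, -0.3)     # Q2: opposite signs -> saddle
n1, n2 = eig2(-1.427, 0.5, -0.2, -0.3)   # Q3: both negative -> stable node
```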



[Figure: phase portrait of the tunnel-diode circuit in the x1 –x2 plane with equilibrium points Q1 , Q2 , Q3 ]



Example 2.3: Pendulum

ẋ1 = x2 , ẋ2 = − sin x1 − 0.3x2


Equilibrium points at (nπ, 0) for n = 0, ±1, ±2, . . .
   
f (x) = [ x2 ; − sin x1 − 0.3x2 ] , ∂f /∂x = [ 0 1 ; − cos x1 −0.3 ]



 
∂f /∂x = [ 0 1 ; − cos x1 −0.3 ]

Linearization at (0, 0) and (π, 0):

A1 = [ 0 1 ; −1 −0.3 ] ; Eigenvalues: −0.15 ± j0.9887

A2 = [ 0 1 ; 1 −0.3 ] ; Eigenvalues: −1.1612, 0.8612

(0, 0) is a stable focus and (π, 0) is a saddle



[Figure: phase portrait of the pendulum with friction over −8 ≤ x1 ≤ 8, −4 ≤ x2 ≤ 4, with two trajectories labeled A and B]



Oscillation
A system oscillates when it has a nontrivial periodic solution

x(t + T ) = x(t), ∀ t ≥ 0

Linear (Harmonic) Oscillator:

ż = [ 0 −β ; β 0 ] z

z1 (t) = r0 cos(βt + θ0 ), z2 (t) = r0 sin(βt + θ0 )

r0 = √(z1²(0) + z2²(0)), θ0 = tan−1 [z2 (0)/z1 (0)]



The linear oscillation is not practical because
It is not structurally stable. Infinitesimally small
perturbations may change the type of the equilibrium
point to a stable focus (decaying oscillation) or unstable
focus (growing oscillation)
The amplitude of oscillation depends on the initial
conditions
(The same problems exist with oscillation of nonlinear
systems due to a center equilibrium point, e.g., pendulum
without friction)



Limit Cycles
Example: Negative Resistance Oscillator





[Figure: (a) a resistive element with characteristic i = h(v); (b) the negative resistance oscillator: the element in parallel with a capacitor C and an inductor L]



ẋ1 = x2
ẋ2 = −x1 − εh′ (x1 )x2

There is a unique equilibrium point at the origin


 
A = ∂f /∂x |x=0 = [ 0 1 ; −1 −εh′ (0) ]

λ² + εh′ (0)λ + 1 = 0
h′ (0) < 0 ⇒ Unstable Focus or Unstable Node



Energy Analysis:
E = ½CvC² + ½LiL²

vC = x1 and iL = −h(x1 ) − (1/ε)x2

E = ½C{x1² + [εh(x1 ) + x2 ]²}

Ė = C{x1 ẋ1 + [εh(x1 ) + x2 ][εh′ (x1 )ẋ1 + ẋ2 ]}
  = C{x1 x2 + [εh(x1 ) + x2 ][εh′ (x1 )x2 − x1 − εh′ (x1 )x2 ]}
  = C[x1 x2 − εx1 h(x1 ) − x1 x2 ]
  = −εCx1 h(x1 )



[Figure: sketch of h(x1 ) with zero crossings at x1 = −a and x1 = b]

Ė = −εCx1 h(x1 )



Example 2.4: Van der Pol Oscillator

ẋ1 = x2
ẋ2 = −x1 + ε(1 − x1²)x2

[Figure: phase portraits of the Van der Pol oscillator for (a) ε = 0.2 and (b) ε = 1; both show a stable limit cycle]
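The limit cycle can be reproduced numerically. This pure-Python RK4 sketch (an illustration, not from the lecture) starts near the unstable origin with ε = 0.2 and settles onto a cycle of amplitude close to 2, a well-known property of the Van der Pol equation for small ε.

```python
def vdp(x, eps=0.2):
    # Van der Pol vector field
    return [x[1], -x[0] + eps * (1 - x[0] ** 2) * x[1]]

def rk4_step(f, x, dt):
    # one classical Runge-Kutta step for a 2-D system
    k1 = f(x)
    k2 = f([x[i] + 0.5 * dt * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * dt * k2[i] for i in range(2)])
    k4 = f([x[i] + dt * k3[i] for i in range(2)])
    return [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

x = [0.1, 0.0]              # start near the unstable focus at the origin
for _ in range(20000):      # integrate 200 time units with dt = 0.01
    x = rk4_step(vdp, x, 0.01)
amplitude = (x[0] ** 2 + x[1] ** 2) ** 0.5
```

Starting instead from a point far outside the cycle gives the same steady amplitude: unlike the harmonic oscillator, the amplitude does not depend on the initial condition.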



ż1 = (1/ε)z2
ż2 = −ε(z1 − z2 + (1/3)z2³)
[Figure: phase portrait for ε = 5 in (a) the x1 –x2 plane and (b) the z1 –z2 plane]



[Figure: (a) Stable Limit Cycle and (b) Unstable Limit Cycle in the x1 –x2 plane]



Nonlinear Control
Lecture # 4
Stability of Equilibrium Points

Nonlinear Control Lecture # 4 Stability of Equilibrium Points


Basic Concepts
ẋ = f (x)
f is locally Lipschitz over a domain D ⊂ Rn
Suppose x̄ ∈ D is an equilibrium point; that is, f (x̄) = 0
Characterize and study the stability of x̄
For convenience, we state all definitions and theorems for the
case when the equilibrium point is at the origin of Rn ; that is,
x̄ = 0. No loss of generality
y = x − x̄

ẏ = ẋ = f (x) = f (y + x̄) ≜ g(y), where g(0) = 0



Definition 3.1
The equilibrium point x = 0 of ẋ = f (x) is
stable if for each ε > 0 there is δ > 0 (dependent on ε)
such that

‖x(0)‖ < δ ⇒ ‖x(t)‖ < ε, ∀ t ≥ 0

unstable if it is not stable


asymptotically stable if it is stable and δ can be chosen
such that
‖x(0)‖ < δ ⇒ lim t→∞ x(t) = 0



Scalar Systems (n = 1)
The behavior of x(t) in the neighborhood of the origin can be
determined by examining the sign of f (x)
The ε–δ requirement for stability is violated if xf (x) > 0 on
either side of the origin

[Figure: three sign patterns of f (x) with xf (x) > 0 on one side of the origin; in all three cases the origin is Unstable]



The origin is stable if and only if xf (x) ≤ 0 in some
neighborhood of the origin

[Figure: three sign patterns of f (x) with xf (x) ≤ 0 near the origin; in all three cases the origin is Stable]



The origin is asymptotically stable if and only if xf (x) < 0 in
some neighborhood of the origin

[Figure: (a) xf (x) < 0 near the origin — Asymptotically Stable; (b) xf (x) < 0 for all x ≠ 0 — Globally Asymptotically Stable]



Definition 3.2
Let the origin be an asymptotically stable equilibrium point of
the system ẋ = f (x), where f is a locally Lipschitz function
defined over a domain D ⊂ Rn ( 0 ∈ D)
The region of attraction (also called region of asymptotic
stability, domain of attraction, or basin) is the set of all
points x0 in D such that the solution of

ẋ = f (x), x(0) = x0

is defined for all t ≥ 0 and converges to the origin as t


tends to infinity
The origin is globally asymptotically stable if the region of
attraction is the whole space Rn



Two-dimensional Systems(n = 2)

Type of equilibrium point    Stability Property

Center                       Stable
Stable Node                  Asymptotically Stable
Stable Focus                 Asymptotically Stable
Unstable Node                Unstable
Unstable Focus               Unstable
Saddle                       Unstable



Example: Tunnel Diode Circuit

[Figure: phase portrait of the tunnel-diode circuit with equilibrium points Q1 , Q2 , Q3 ]



Example: Pendulum Without Friction

[Figure: phase portrait of the pendulum without friction, ẋ = y, ẏ = − sin x]



Example: Pendulum With Friction

[Figure: phase portrait of the pendulum with friction, with trajectories A and B]



Linear Time-Invariant Systems
ẋ = Ax
x(t) = exp(At)x(0)
P −1AP = J = block diag[J1 , J2 , . . . , Jr ]
 
Ji = [ λi 1 0 . . . 0 ; 0 λi 1 . . . 0 ; . . . ; 0 . . . 0 λi ]  (m × m)



exp(At) = P exp(Jt)P −1 = Σ_{i=1}^{r} Σ_{k=1}^{mi} t^(k−1) exp(λi t) Rik

mi is the order of the Jordan block Ji

Re[λi ] < 0 ∀ i ⇔ Asymptotically Stable

Re[λi ] > 0 for some i ⇒ Unstable

Re[λi ] ≤ 0 ∀ i & mi > 1 for Re[λi ] = 0 ⇒ Unstable

Re[λi ] ≤ 0 ∀ i & mi = 1 for Re[λi ] = 0 ⇒ Stable


If an n × n matrix A has a repeated eigenvalue λi of algebraic
multiplicity qi , then the Jordan blocks of λi have order one if
and only if rank(A − λi I) = n − qi
Theorem 3.1
The equilibrium point x = 0 of ẋ = Ax is stable if and only if
all eigenvalues of A satisfy Re[λi ] ≤ 0 and for every eigenvalue
with Re[λi ] = 0 and algebraic multiplicity qi ≥ 2,
rank(A − λi I) = n − qi , where n is the dimension of x. The
equilibrium point x = 0 is globally asymptotically stable if and
only if all eigenvalues of A satisfy Re[λi ] < 0

When all eigenvalues of A satisfy Re[λi ] < 0, A is called a


Hurwitz matrix
When the origin of a linear system is asymptotically stable, its
solution satisfies the inequality

‖x(t)‖ ≤ k‖x(0)‖e−λt , ∀ t ≥ 0, for some k ≥ 1, λ > 0



Exponential Stability

Definition 3.3
The equilibrium point x = 0 of ẋ = f (x) is exponentially
stable if
‖x(t)‖ ≤ k‖x(0)‖e−λt , ∀ t ≥ 0
k ≥ 1, λ > 0, for all ‖x(0)‖ < c
It is globally exponentially stable if the inequality is satisfied
for any initial state x(0)

Exponential Stability ⇒ Asymptotic Stability



Example 3.2

ẋ = −x³
The origin is asymptotically stable

x(t) = x(0)/√(1 + 2tx²(0))

x(t) does not satisfy |x(t)| ≤ ke−λt |x(0)| because

|x(t)| ≤ ke−λt |x(0)| ⇒ e2λt /[1 + 2tx²(0)] ≤ k²

Impossible because lim t→∞ e2λt /[1 + 2tx²(0)] = ∞
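The argument can be seen numerically. This pure-Python sketch (an illustration, not from the lecture) evaluates the closed-form solution, which decays only like t^(−1/2), against one candidate exponential envelope k·e^(−λt); at large t the envelope is far below the solution, so the exponential bound fails.

```python
import math

def x_exact(t, x0=1.0):
    # closed-form solution of xdot = -x**3 quoted above
    return x0 / math.sqrt(1 + 2 * t * x0 ** 2)

# any fixed candidate k >= 1, lambda > 0 eventually fails; these are samples
k, lam = 10.0, 0.1
t = 200.0
lhs = x_exact(t)              # decays only like t**(-1/2)
rhs = k * math.exp(-lam * t)  # decays exponentially
```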



Linearization
ẋ = f (x), f (0) = 0
f is continuously differentiable over D = {‖x‖ < r}

∂f
J(x) = (x)
∂x

h(σ) = f (σx) for 0 ≤ σ ≤ 1, h′ (σ) = J(σx)x


h(1) − h(0) = ∫₀¹ h′ (σ) dσ, h(0) = f (0) = 0

f (x) = [∫₀¹ J(σx) dσ] x



f (x) = [∫₀¹ J(σx) dσ] x

Set A = J(0) and add and subtract Ax

f (x) = [A + G(x)]x, where G(x) = ∫₀¹ [J(σx) − J(0)] dσ

G(x) → 0 as x → 0
This suggests that in a small neighborhood of the origin we
can approximate the nonlinear system ẋ = f (x) by its
linearization about the origin ẋ = Ax



Theorem 3.2

The origin is exponentially stable if and only if Re[λi ] < 0


for all eigenvalues of A
The origin is unstable if Re[λi ] > 0 for some i

Linearization fails when Re[λi ] ≤ 0 for all i, with Re[λi ] = 0


for some i



Example 3.3

ẋ = ax³

A = ∂f /∂x |x=0 = 3ax² |x=0 = 0
Stable if a = 0; Asymptotically stable if a < 0; Unstable if a > 0
When a < 0, the origin is not exponentially stable



Nonlinear Control
Lecture # 5
Stability of Equilibrium Points

Nonlinear Control Lecture # 5 Stability of Equilibrium Points


Lyapunov’s Method
Let V (x) be a continuously differentiable function defined in a
domain D ⊂ Rn ; 0 ∈ D. The derivative of V along the
trajectories of ẋ = f (x) is
V̇ (x) = Σ_{i=1}^{n} (∂V /∂xi )ẋi = Σ_{i=1}^{n} (∂V /∂xi )fi (x)
      = [∂V /∂x1 , ∂V /∂x2 , . . . , ∂V /∂xn ] [f1 (x); f2 (x); . . . ; fn (x)]
      = (∂V /∂x) f (x)



If φ(t; x) is the solution of ẋ = f (x) that starts at initial state
x at time t = 0, then
V̇ (x) = (d/dt) V (φ(t; x)) |t=0

If V̇ (x) is negative, V will decrease along the solution of


ẋ = f (x)
If V̇ (x) is positive, V will increase along the solution of
ẋ = f (x)



Lyapunov’s Theorem (3.3)

If there is V (x) such that

V (0) = 0 and V (x) > 0, ∀ x ∈ D with x ≠ 0

V̇ (x) ≤ 0, ∀x∈D
then the origin is stable
Moreover, if

V̇ (x) < 0, ∀ x ∈ D with x ≠ 0

then the origin is asymptotically stable



Furthermore, if V (x) > 0, ∀ x ≠ 0,

‖x‖ → ∞ ⇒ V (x) → ∞

and V̇ (x) < 0, ∀ x ≠ 0, then the origin is globally
asymptotically stable



Proof

[Figure: nested sets Bδ ⊂ Ωβ ⊂ Br ⊂ D]

0 < r ≤ ε, Br = {‖x‖ ≤ r}

α = min‖x‖=r V (x) > 0

0 < β < α, Ωβ = {x ∈ Br | V (x) ≤ β}

‖x‖ ≤ δ ⇒ V (x) < β



Solutions starting in Ωβ stay in Ωβ because V̇ (x) ≤ 0 in Ωβ

x(0) ∈ Bδ ⇒ x(0) ∈ Ωβ ⇒ x(t) ∈ Ωβ ⇒ x(t) ∈ Br

‖x(0)‖ < δ ⇒ ‖x(t)‖ < r ≤ ε, ∀ t ≥ 0

⇒ The origin is stable

Now suppose V̇ (x) < 0 ∀ x ∈ D, x ≠ 0. V (x(t)) is
monotonically decreasing and V (x(t)) ≥ 0
lim t→∞ V (x(t)) = c ≥ 0. Show that c = 0

Suppose c > 0. By continuity of V (x), there is d > 0 such


that Bd ⊂ Ωc . Then, x(t) lies outside Bd for all t ≥ 0



γ = − max d≤‖x‖≤r V̇ (x)

V (x(t)) = V (x(0)) + ∫₀ᵗ V̇ (x(τ )) dτ ≤ V (x(0)) − γt
This inequality contradicts the assumption c > 0
⇒ The origin is asymptotically stable

The condition ‖x‖ → ∞ ⇒ V (x) → ∞ implies that the set
Ωc = {x ∈ Rn | V (x) ≤ c} is compact for every c > 0. This is
so because for any c > 0, there is r > 0 such that V (x) > c
whenever ‖x‖ > r. Thus, Ωc ⊂ Br . All solutions starting in Ωc
will converge to the origin. For any point p ∈ Rn , choosing
c = V (p) ensures that p ∈ Ωc
⇒ The origin is globally asymptotically stable



Terminology
V (0) = 0, V (x) ≥ 0 for x ≠ 0    Positive semidefinite
V (0) = 0, V (x) > 0 for x ≠ 0    Positive definite
V (0) = 0, V (x) ≤ 0 for x ≠ 0    Negative semidefinite
V (0) = 0, V (x) < 0 for x ≠ 0    Negative definite
‖x‖ → ∞ ⇒ V (x) → ∞           Radially unbounded

Lyapunov’s Theorem
The origin is stable if there is a continuously differentiable
positive definite function V (x) so that V̇ (x) is negative
semidefinite, and it is asymptotically stable if V̇ (x) is negative
definite. It is globally asymptotically stable if the conditions
for asymptotic stability hold globally and V (x) is radially
unbounded



A continuously differentiable function V (x) satisfying the
conditions for stability is called a Lyapunov function. The
surface V (x) = c, for some c > 0, is called a Lyapunov surface
or a level surface

[Figure: nested Lyapunov surfaces V (x) = c1 < c2 < c3 ]



Why do we need the radial unboundedness condition to show
global asymptotic stability?
It ensures that Ωc = {V (x) ≤ c} is bounded for every c > 0.
Without it, Ωc might not be bounded for large c
Example

V (x) = x1²/(1 + x1²) + x2²

[Figure: level curves V (x) = c; for large c the curves are open, so Ωc is unbounded]


Example: Pendulum equation without friction

ẋ1 = x2 , ẋ2 = − a sin x1

V (x) = a(1 − cos x1 ) + ½x2²


V (0) = 0 and V (x) is positive definite over the domain
−2π < x1 < 2π

V̇ (x) = aẋ1 sin x1 + x2 ẋ2 = ax2 sin x1 − ax2 sin x1 = 0

The origin is stable


Since V̇ (x) ≡ 0, the origin is not asymptotically stable

Nonlinear Control Lecture # 5 Stability of Equilibrium Points


Example: Pendulum equation with friction

ẋ1 = x2 , ẋ2 = − a sin x1 − bx2


1
V (x) = a(1 − cos x1 ) + x22
2
V̇ (x) = aẋ1 sin x1 + x2 ẋ2 = − bx22
The origin is stable
V̇ (x) is not negative definite because V̇ (x) = 0 for x2 = 0
irrespective of the value of x1

Nonlinear Control Lecture # 5 Stability of Equilibrium Points


The conditions of Lyapunov’s theorem are only sufficient.
Failure of a Lyapunov function candidate to satisfy the
conditions for stability or asymptotic stability does not mean
that the equilibrium point is not stable or asymptotically
stable. It only means that such stability property cannot be
established by using this Lyapunov function candidate

Try
1 T
V (x) = 2
x Px
+ a(1 − cos x1) 
1 p11 p12 x1
= 2 [x1 x2 ] + a(1 − cos x1 )
p12 p22 x2

p11 > 0, p11 p22 − p212 > 0

Nonlinear Control Lecture # 5 Stability of Equilibrium Points


V̇ (x) = (p11 x1 + p12 x2 + a sin x1 ) x2
+ (p12 x1 + p22 x2 ) (−a sin x1 − bx2 )
= a(1 − p22 )x2 sin x1 − ap12 x1 sin x1
+ (p11 − p12 b) x1 x2 + (p12 − p22 b) x22

p22 = 1, p11 = bp12 ⇒ 0 < p12 < b, Take p12 = b/2

V̇ (x) = − 21 abx1 sin x1 − 21 bx22

D = {|x1 | < π}
V (x) is positive definite and V̇ (x) is negative definite over D.
The origin is asymptotically stable

Nonlinear Control Lecture # 5 Stability of Equilibrium Points
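The two conditions can also be checked numerically. The sketch below (assuming a = b = 1; any positive values work, these are not fixed by the slides) integrates the pendulum with a simple Euler scheme and verifies that V (x(t)) decreases monotonically to zero along the trajectory:

```python
import math

a, b = 1.0, 1.0                       # assumed pendulum parameters
# P from the slides: p22 = 1, p12 = b/2, p11 = b*p12
p11, p12, p22 = b * (b / 2.0), b / 2.0, 1.0

def V(x1, x2):
    """V(x) = 1/2 x'Px + a(1 - cos x1)."""
    return 0.5 * (p11 * x1**2 + 2.0 * p12 * x1 * x2 + p22 * x2**2) \
        + a * (1.0 - math.cos(x1))

# Euler integration of x1' = x2, x2' = -a sin x1 - b x2
x1, x2, dt = 1.0, 0.0, 1e-3
history = [V(x1, x2)]
for _ in range(40000):                # integrate to t = 40
    x1, x2 = x1 + dt * x2, x2 + dt * (-a * math.sin(x1) - b * x2)
    history.append(V(x1, x2))
```

With this P the decrease of V is strict away from the origin, so the recorded values form a decreasing sequence and the state converges.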


Variable Gradient Method
V̇ (x) = (∂V /∂x) f (x) = g T (x)f (x)
g(x) = ∇V = (∂V /∂x)T
Choose g(x) as the gradient of a positive definite function
V (x) that would make V̇ (x) negative definite
g(x) is the gradient of a scalar function if and only if

∂gi ∂gj
= , ∀ i, j = 1, . . . , n
∂xj ∂xi

Choose g(x) such that g T (x)f (x) is negative definite

Nonlinear Control Lecture # 5 Stability of Equilibrium Points


Compute the integral

V (x) = ∫₀ˣ g T (y) dy = ∫₀ˣ Σni=1 gi (y) dyi

over any path joining the origin to x; for example

V (x) = ∫₀^{x1} g1 (y1 , 0, . . . , 0) dy1 + ∫₀^{x2} g2 (x1 , y2 , 0, . . . , 0) dy2
        + · · · + ∫₀^{xn} gn (x1 , x2 , . . . , xn−1 , yn ) dyn

Leave some parameters of g(x) undetermined and choose
them to make V (x) positive definite

Nonlinear Control Lecture # 5 Stability of Equilibrium Points


Example 3.7

ẋ1 = x2 , ẋ2 = −h(x1 ) − ax2


a > 0, h(·) is locally Lipschitz,

h(0) = 0; yh(y) > 0 ∀ y ≠ 0, y ∈ (−b, c), b > 0, c > 0

∂g1 /∂x2 = ∂g2 /∂x1

V̇ (x) = g1 (x)x2 − g2 (x)[h(x1 ) + ax2 ] < 0, for x ≠ 0

V (x) = ∫₀ˣ g T (y) dy > 0, for x ≠ 0

Nonlinear Control Lecture # 5 Stability of Equilibrium Points


 
Try g(x) = [φ1 (x1 ) + ψ1 (x2 ) ; φ2 (x1 ) + ψ2 (x2 )]

To satisfy the symmetry requirement, we must have

∂ψ1 /∂x2 = ∂φ2 /∂x1

Take ψ1 (x2 ) = γx2 and φ2 (x1 ) = γx1

V̇ (x) = −γx1 h(x1 ) − ax2 ψ2 (x2 ) + γx2²
        + x2 φ1 (x1 ) − aγx1 x2 − ψ2 (x2 )h(x1 )

Nonlinear Control Lecture # 5 Stability of Equilibrium Points


To cancel the cross-product terms, take

ψ2 (x2 ) = δx2 and φ1 (x1 ) = aγx1 + δh(x1 )

g(x) = [aγx1 + δh(x1 ) + γx2 ; γx1 + δx2 ]

V (x) = ∫₀^{x1} [aγy1 + δh(y1 )] dy1 + ∫₀^{x2} (γx1 + δy2 ) dy2
      = ½ aγx1² + δ ∫₀^{x1} h(y) dy + γx1 x2 + ½ δx2²
      = ½ xT P x + δ ∫₀^{x1} h(y) dy,   P = [aγ γ ; γ δ]

Nonlinear Control Lecture # 5 Stability of Equilibrium Points


V (x) = ½ xT P x + δ ∫₀^{x1} h(y) dy,   P = [aγ γ ; γ δ]

V̇ (x) = −γx1 h(x1 ) − (aδ − γ)x2²

Choose δ > 0 and 0 < γ < aδ

If yh(y) > 0 holds for all y 6= 0, the conditions of Lyapunov’s


theorem hold globally and V (x) is radially unbounded

Nonlinear Control Lecture # 5 Stability of Equilibrium Points
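A quick numerical sanity check of the final identity. The snippet below picks the illustrative choices h(y) = y³, a = 2, γ = 1, δ = 1 (so δ > 0 and 0 < γ < aδ; none of these values come from the slides) and confirms that g T (x)f (x) agrees with −γx1 h(x1 ) − (aδ − γ)x2² at sample points:

```python
# Check the variable-gradient result numerically, assuming h(y) = y^3
# (h need only satisfy y*h(y) > 0; the cubic is an illustration)
a, gamma, delta = 2.0, 1.0, 1.0       # delta > 0 and 0 < gamma < a*delta

def h(y):
    return y**3

def g(x1, x2):
    # gradient produced by the variable gradient method
    return (a * gamma * x1 + delta * h(x1) + gamma * x2,
            gamma * x1 + delta * x2)

def f(x1, x2):
    return (x2, -h(x1) - a * x2)

def Vdot(x1, x2):                      # g(x)' f(x)
    g1, g2 = g(x1, x2)
    f1, f2 = f(x1, x2)
    return g1 * f1 + g2 * f2

def Vdot_formula(x1, x2):              # -gamma x1 h(x1) - (a delta - gamma) x2^2
    return -gamma * x1 * h(x1) - (a * delta - gamma) * x2**2

samples = [(0.7, -0.3), (-1.2, 0.4), (0.0, 2.0), (3.0, -1.5)]
errors = [abs(Vdot(p, q) - Vdot_formula(p, q)) for p, q in samples]
```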


Nonlinear Control
Lecture # 6
Stability of Equilibrium Points

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


The Invariance Principle
Example: Pendulum equation with friction

ẋ1 = x2 , ẋ2 = − a sin x1 − bx2


1
V (x) = a(1 − cos x1 ) + x22
2
V̇ (x) = aẋ1 sin x1 + x2 ẋ2 = − bx22
The origin is stable. V̇ (x) is not negative definite because
V̇ (x) = 0 for x2 = 0 irrespective of the value of x1
However, near the origin, the solution cannot stay identically
in the set {x2 = 0}

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Definitions
Let x(t) be a solution of ẋ = f (x)
A point p is a positive limit point of x(t) if there is a sequence
{tn }, with limn→∞ tn = ∞, such that x(tn ) → p as n → ∞
The set of all positive limit points of x(t) is called the positive
limit set of x(t); denoted by L+
If x(t) approaches an asymptotically stable equilibrium point
x̄, then x̄ is the positive limit point of x(t) and L+ = {x̄}
A stable limit cycle is the positive limit set of every solution
starting sufficiently near the limit cycle

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


A set M is an invariant set with respect to ẋ = f (x) if

x(0) ∈ M ⇒ x(t) ∈ M, ∀ t ∈ R

Examples:
Equilibrium points
Limit Cycles
A set M is a positively invariant set with respect to ẋ = f (x)
if
x(0) ∈ M ⇒ x(t) ∈ M, ∀ t ≥ 0
Example: The set Ωc = {V (x) ≤ c} with V̇ (x) ≤ 0 in Ωc

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


The distance from a point p to a set M is defined by

dist(p, M) = min_{x∈M} kp − xk, i.e., inf_{x∈M} kp − xk

x(t) approaches a set M as t approaches infinity, if for each


ε > 0 there is T > 0 such that

dist(x(t), M) < ε, ∀ t > T

Example: every solution x(t) starting sufficiently near a stable


limit cycle approaches the limit cycle as t → ∞
Notice, however, that x(t) does not converge to any specific
point on the limit cycle

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Lemma 3.1
If a solution x(t) of ẋ = f (x) is bounded and belongs to D for
t ≥ 0, then its positive limit set L+ is a nonempty, compact,
invariant set. Moreover, x(t) approaches L+ as t → ∞

LaSalle’s Theorem (3.4)


Let f (x) be a locally Lipschitz function defined over a domain
D ⊂ Rn and Ω ⊂ D be a compact set that is positively
invariant with respect to ẋ = f (x). Let V (x) be a
continuously differentiable function defined over D such that
V̇ (x) ≤ 0 in Ω. Let E be the set of all points in Ω where
V̇ (x) = 0, and M be the largest invariant set in E. Then
every solution starting in Ω approaches M as t → ∞

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Proof

V̇ (x) ≤ 0 in Ω ⇒ V (x(t)) is decreasing

V (x) is continuous on the compact set Ω ⇒ V (x(t)) ≥ b = min_{x∈Ω} V (x)

A decreasing function that is bounded from below has a limit:

⇒ lim_{t→∞} V (x(t)) = a

x(t) ∈ Ω ⇒ x(t) is bounded ⇒ L+ exists

Moreover, L+ ⊂ Ω and x(t) approaches L+ as t → ∞


For any p ∈ L+ , there is {tn } with limn→∞ tn = ∞ such that
x(tn ) → p as n → ∞

V (x) is continuous ⇒ V (p) = lim V (x(tn )) = a


n→∞

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


V (x) = a on L+ and L+ invariant ⇒ V̇ (x) = 0, ∀ x ∈ L+

L+ ⊂ M ⊂ E ⊂ Ω

x(t) approaches L+ ⇒ x(t) approaches M (as t → ∞)

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Theorem 3.5
Let f (x) be a locally Lipschitz function defined over a domain
D ⊂ Rn ; 0 ∈ D. Let V (x) be a continuously differentiable
positive definite function defined over D such that V̇ (x) ≤ 0
in D. Let S = {x ∈ D | V̇ (x) = 0}
If no solution can stay identically in S, other than the
trivial solution x(t) ≡ 0, then the origin is asymptotically
stable
Moreover, if Γ ⊂ D is compact and positively invariant,
then it is a subset of the region of attraction
Furthermore, if D = Rn and V (x) is radially unbounded,
then the origin is globally asymptotically stable

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Example 3.8
ẋ1 = x2 , ẋ2 = −h1 (x1 ) − h2 (x2 )
hi (0) = 0, yhi (y) > 0, for 0 < |y| < a
V (x) = ∫₀^{x1} h1 (y) dy + ½ x2²

D = {−a < x1 < a, −a < x2 < a}

V̇ (x) = h1 (x1 )x2 + x2 [−h1 (x1 ) − h2 (x2 )] = −x2 h2 (x2 ) ≤ 0

V̇ (x) = 0 ⇒ x2 h2 (x2 ) = 0 ⇒ x2 = 0

S = {x ∈ D | x2 = 0}

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


ẋ1 = x2 , ẋ2 = −h1 (x1 ) − h2 (x2 )

x2 (t) ≡ 0 ⇒ ẋ2 (t) ≡ 0 ⇒ h1 (x1 (t)) ≡ 0 ⇒ x1 (t) ≡ 0


The only solution that can stay identically in S is x(t) ≡ 0
Thus, the origin is asymptotically stable
Suppose a = ∞ and ∫₀ʸ h1 (z) dz → ∞ as |y| → ∞
Then, D = R² and V (x) = ∫₀^{x1} h1 (y) dy + ½ x2² is radially
unbounded. S = {x ∈ R² | x2 = 0} and the only solution that
can stay identically in S is x(t) ≡ 0
The origin is globally asymptotically stable

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Exponential Stability
The origin of ẋ = f (x) is exponentially stable if and only if the
linearization of f (x) at the origin is Hurwitz

Theorem 3.6
Let f (x) be a locally Lipschitz function defined over a domain
D ⊂ Rn ; 0 ∈ D. Let V (x) be a continuously differentiable
function such that

k1 kxka ≤ V (x) ≤ k2 kxka , V̇ (x) ≤ −k3 kxka

for all x ∈ D, where k1 , k2 , k3 , and a are positive constants.


Then, the origin is an exponentially stable equilibrium point of
ẋ = f (x). If the assumptions hold globally, the origin will be
globally exponentially stable

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Proof

Choose c > 0 small enough that {k1kxka ≤ c} ⊂ D

V (x) ≤ c ⇒ k1 kxka ≤ c

Ωc = {V (x) ≤ c} ⊂ { k1 kxka ≤ c} ⊂ D
Ωc is compact and positively invariant; ∀ x(0) ∈ Ωc

k3
V̇ ≤ −k3 kxka ≤ − V
k2
dV k3
≤− dt
V k2

V (x(t)) ≤ V (x(0))e−(k3 /k2 )t

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


kx(t)k ≤ [V (x(t))/k1 ]^{1/a}
       ≤ [V (x(0)) e^{−(k3 /k2 )t} /k1 ]^{1/a}
       ≤ [k2 kx(0)kᵃ e^{−(k3 /k2 )t} /k1 ]^{1/a}
       = (k2 /k1 )^{1/a} e^{−γt} kx(0)k,   γ = k3 /(k2 a)

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Example 3.10

ẋ1 = x2 , ẋ2 = −h(x1 ) − x2


c1 y² ≤ yh(y) ≤ c2 y²,  ∀ y,  c1 > 0, c2 > 0

V (x) = ½ xT [1 1 ; 1 2] x + 2 ∫₀^{x1} h(y) dy

c1 x1² ≤ 2 ∫₀^{x1} h(y) dy ≤ c2 x1²

V̇ = [x1 + x2 + 2h(x1 )]x2 + [x1 + 2x2 ][−h(x1 ) − x2 ]
  = −x1 h(x1 ) − x2² ≤ −c1 x1² − x2²

The origin is globally exponentially stable

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Quadratic Forms
V (x) = xT P x = Σni=1 Σnj=1 pij xi xj ,   P = P T

λmin (P )kxk2 ≤ xT P x ≤ λmax (P )kxk2

P ≥ 0 (Positive semidefinite) if and only if λi (P ) ≥ 0 ∀i

P > 0 (Positive definite) if and only if λi (P ) > 0 ∀i


P > 0 if and only if all the leading principal minors of P are
positive

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Linear Systems
ẋ = Ax

V (x) = xT P x, P = PT > 0
V̇ (x) = xT P ẋ + ẋT P x = xT (P A + AT P )x =: −xT Qx
If Q > 0, then A is Hurwitz
Or choose Q > 0 and solve the Lyapunov equation

P A + AT P = −Q

If P > 0, then A is Hurwitz


MATLAB: P = lyap(A′ , Q)

Nonlinear Control Lecture # 6 Stability of Equilibrium Points
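For the 2×2 case the Lyapunov equation reduces to three linear equations in (p11 , p12 , p22 ), which can be solved directly. The sketch below is a minimal stand-in for MATLAB's lyap, verified on the Hurwitz matrix A = [0 −1 ; 1 −1] with Q = I (the data of Example 3.14, where P = [1.5 −0.5 ; −0.5 1]):

```python
def lyap2(A, Q):
    """Solve P A + A' P = -Q for symmetric P, with 2x2 A and Q."""
    (a11, a12), (a21, a22) = A
    (q11, q12), (_, q22) = Q
    # Unknowns (p11, p12, p22); three linear equations from the
    # (1,1), (1,2), (2,2) entries of P A + A' P = -Q:
    M = [[2.0 * a11, 2.0 * a21, 0.0],
         [a12, a11 + a22, a21],
         [0.0, 2.0 * a12, 2.0 * a22]]
    rhs = [-q11, -q12, -q22]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for r in range(i + 1, 3):
            m = M[r][i] / M[i][i]
            for c in range(i, 3):
                M[r][c] -= m * M[i][c]
            rhs[r] -= m * rhs[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    p11, p12, p22 = x
    return [[p11, p12], [p12, p22]]

# A from Example 3.14 (Hurwitz), Q = I
P = lyap2([[0.0, -1.0], [1.0, -1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

If the returned P is positive definite (checked via its leading principal minors), A is Hurwitz.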


Theorem 3.7
A matrix A is Hurwitz if and only if for every Q = QT > 0
there is P = P T > 0 that satisfies the Lyapunov equation

P A + AT P = −Q

Moreover, if A is Hurwitz, then P is the unique solution

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Linearization
ẋ = f (x) = [A + G(x)]x
G(x) → 0 as x → 0
Suppose A is Hurwitz. Choose Q = QT > 0 and solve
P A + AT P = −Q for P . Use V (x) = xT P x as a Lyapunov
function candidate for ẋ = f (x)

V̇ (x) = xT P f (x) + f T (x)P x


= xT P [A + G(x)]x + xT [AT + GT (x)]P x
= xT (P A + AT P )x + 2xT P G(x)x
= −xT Qx + 2xT P G(x)x

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


V̇ (x) ≤ −xT Qx + 2kP G(x)k kxk2
Given any positive constant k < 1, we can find r > 0 such that

2kP G(x)k < kλmin (Q), ∀ kxk < r

xT Qx ≥ λmin(Q)kxk2 ⇐⇒ −xT Qx ≤ −λmin (Q)kxk2

V̇ (x) ≤ −(1 − k)λmin (Q)kxk2 , ∀ kxk < r


V (x) = xT P x is a Lyapunov function for ẋ = f (x)

Nonlinear Control Lecture # 6 Stability of Equilibrium Points


Nonlinear Control
Lecture # 7
Stability of Equilibrium Points

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Region of Attraction

Lemma 3.2
The region of attraction of an asymptotically stable
equilibrium point is an open, connected, invariant set, and its
boundary is formed by trajectories

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Example 3.11

ẋ1 = −x2 , ẋ2 = x1 + (x21 − 1)x2

[Figure: phase portrait in the (x1 , x2 )-plane; the region of attraction of the origin is bounded by an unstable limit cycle]

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Example 3.12

ẋ1 = x2 , ẋ2 = −x1 + 31 x31 − x2

[Figure: phase portrait in the (x1 , x2 )-plane; the region of attraction of the origin is bounded by the stable trajectories of the saddle points at (±√3, 0)]

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Estimates of the Region of Attraction: Find a subset of the
region of attraction
Warning: Let D be a domain with 0 ∈ D such that for all
x ∈ D, V (x) is positive definite and V̇ (x) is negative definite

Is D a subset of the region of attraction?

NO

Why?

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Example 3.13
Reconsider

ẋ1 = x2 , ẋ2 = −x1 + 31 x31 − x2


 
V (x) = ½ xT [1 1 ; 1 2] x + 2 ∫₀^{x1} (y − ⅓ y³ ) dy
      = (3/2) x1² − (1/6) x1⁴ + x1 x2 + x2²

V̇ (x) = −x1² (1 − ⅓ x1² ) − x2²

D = {−√3 < x1 < √3}

Is D a subset of the region of attraction?

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


By Theorem 3.5, if D is a domain that contains the origin
such that V̇ (x) ≤ 0 in D, then the region of attraction can be
estimated by a compact positively invariant set Γ ⊂ D if
V̇ (x) < 0 for all x ∈ Γ, x ≠ 0, or
no solution can stay identically in {x ∈ D | V̇ (x) = 0}
other than the zero solution.
The simplest such estimate is the set Ωc = {V (x) ≤ c} when
Ωc is bounded and contained in D

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


V (x) = xT P x,  P = P T > 0,  Ωc = {V (x) ≤ c}
If D = {kxk < r}, then Ωc ⊂ D if

c < min_{kxk=r} xT P x = λmin (P )r²

If D = {|bT x| < r}, where b ∈ Rn , then

min_{|bT x|=r} xT P x = r² /(bT P −1 b)

Therefore, Ωc ⊂ D = {|bTi x| < ri , i = 1, . . . , p}, if

c < min_{1≤i≤p} ri² /(bTi P −1 bi )

Nonlinear Control Lecture # 7 Stability of Equilibrium Points
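The formula min over |bT x| = r of xT P x = r²/(bT P −1 b) can be checked by brute force. The sketch below uses the sample data P = [2 1 ; 1 1], b = [1 1]T , r = 1 (the same P and b reappear in Example 3.15) and compares the closed form with a grid search over the line bT x = r; by the symmetry x → −x, the line bT x = −r gives the same minimum:

```python
# Closed form vs brute force for min_{|b'x| = r} x'Px = r^2 / (b' P^{-1} b)
P = [[2.0, 1.0], [1.0, 1.0]]           # sample data (also used in Example 3.15)
b = [1.0, 1.0]
r = 1.0

det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / det, -P[0][1] / det],
        [-P[1][0] / det, P[0][0] / det]]
bPib = sum(b[i] * Pinv[i][j] * b[j] for i in range(2) for j in range(2))
c_formula = r**2 / bPib

def quad(x1, x2):                      # x'Px for symmetric 2x2 P
    return P[0][0] * x1**2 + 2.0 * P[0][1] * x1 * x2 + P[1][1] * x2**2

# Parameterize b'x = r as x = (t, (r - b[0] t) / b[1]) and scan t;
# b'x = -r gives the same minimum by the symmetry x -> -x
c_brute = min(quad(t, (r - b[0] * t) / b[1])
              for t in (k / 1000.0 for k in range(-5000, 5001)))
```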


Example 3.14

ẋ1 = −x2 , ẋ2 = x1 + (x21 − 1)x2


 
A = [∂f /∂x]x=0 = [0 −1 ; 1 −1]

has eigenvalues (−1 ± j√3)/2. Hence the origin is
asymptotically stable

Take Q = I,  P A + AT P = −I ⇒ P = [1.5 −0.5 ; −0.5 1]

λmin (P ) = 0.691

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


V (x) = 1.5x1² − x1 x2 + x2²

V̇ (x) = −(x1² + x2² ) − x1² x2 (x1 − 2x2 )

|x1 | ≤ kxk,  |x1 x2 | ≤ ½ kxk²,  |x1 − 2x2 | ≤ √5 kxk

V̇ (x) ≤ −kxk² + (√5/2) kxk⁴ < 0 for 0 < kxk² < 2/√5 =: r²

Take c < λmin (P )r² = 0.691 × (2/√5) = 0.618
{V (x) ≤ c} is an estimate of the region of attraction

Nonlinear Control Lecture # 7 Stability of Equilibrium Points
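A simulation check of the estimate (a sketch, using Euler integration and a starting point chosen on the level surface V (x) = 0.618): the trajectory should never leave {V (x) ≤ 0.618} and should converge to the origin:

```python
import math

def V(x1, x2):                        # V(x) = 1.5 x1^2 - x1 x2 + x2^2
    return 1.5 * x1**2 - x1 * x2 + x2**2

c = 0.618
x1, x2 = 0.0, math.sqrt(c)            # a point with V(x) = c
dt, vmax = 1e-3, V(x1, x2)
for _ in range(40000):                # integrate to t = 40
    x1, x2 = x1 + dt * (-x2), x2 + dt * (x1 + (x1**2 - 1.0) * x2)
    vmax = max(vmax, V(x1, x2))       # track the largest V seen
```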


[Figure: (a) contours of V̇ (x) = 0 (dashed), V (x) = 0.618 (dash-dot), and V (x) = 2.25 (solid); (b) comparison of the region of attraction with its estimate]

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Remark 3.1
If Ω1 , Ω2 , . . . , Ωm are positively invariant subsets of the region
of attraction, then their union ∪m i=1 Ωi is also a positively
invariant subset of the region of attraction. Therefore, if we
have multiple Lyapunov functions for the same system and
each function is used to estimate the region of attraction, we
can enlarge the estimate by taking the union of all the
estimates

Remark 3.2
We can work with any compact set Γ ⊂ D provided we can
show that Γ is positively invariant. This typically requires
investigating the vector field at the boundary of Γ to ensure
that trajectories starting in Γ cannot leave it

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Example 3.15

ẋ1 = x2 , ẋ2 = −4(x1 + x2 ) − h(x1 + x2 )


h(0) = 0;  uh(u) ≥ 0, ∀ |u| ≤ 1

V (x) = xT P x = xT [2 1 ; 1 1] x = 2x1² + 2x1 x2 + x2²

V̇ (x) = (4x1 + 2x2 )ẋ1 + 2(x1 + x2 )ẋ2
      = −2x1² − 6(x1 + x2 )² − 2(x1 + x2 )h(x1 + x2 )
      ≤ −2x1² − 6(x1 + x2 )² = −xT [8 6 ; 6 6] x,  ∀ |x1 + x2 | ≤ 1

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


 
V (x) = xT P x = xT [2 1 ; 1 1] x

V̇ (x) is negative definite in {|x1 + x2 | ≤ 1}

bT = [1 1],  c = min_{|x1 +x2 |=1} xT P x = 1/(bT P −1 b) = 1

The region of attraction is estimated by {V (x) ≤ 1}

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


σ = x1 + x2

(d/dt) σ² = 2σx2 − 8σ² − 2σh(σ) ≤ 2σx2 − 8σ²,  ∀ |σ| ≤ 1

On σ = 1:  (d/dt) σ² ≤ 2x2 − 8 ≤ 0,  ∀ x2 ≤ 4
On σ = −1:  (d/dt) σ² ≤ −2x2 − 8 ≤ 0,  ∀ x2 ≥ −4
c1 = V (x)|x1 =−3,x2 =4 = 10, c2 = V (x)|x1 =3,x2 =−4 = 10

Γ = {V (x) ≤ 10 and |x1 + x2 | ≤ 1}


is a subset of the region of attraction

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


[Figure: the estimate Γ = {V (x) ≤ 10, |x1 + x2 | ≤ 1} with the level sets V (x) = 1 and V (x) = 10 and the corner points (−3, 4) and (3, −4)]

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Converse Lyapunov Theorems

Theorem 3.8 (Exponential Stability)


Let x = 0 be an exponentially stable equilibrium point for the
system ẋ = f (x), where f is continuously differentiable on
D = {kxk < r}. Let k, λ, and r0 be positive constants with
r0 < r/k such that

kx(t)k ≤ kkx(0)ke−λt , ∀ x(0) ∈ D0 , ∀ t ≥ 0

where D0 = {kxk < r0 }. Then, there is a continuously


differentiable function V (x) that satisfies the inequalities

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


c1 kxk² ≤ V (x) ≤ c2 kxk²

(∂V /∂x) f (x) ≤ −c3 kxk²

k∂V /∂xk ≤ c4 kxk
for all x ∈ D0 , with positive constants c1 , c2 , c3 , and c4
Moreover, if f is continuously differentiable for all x, globally
Lipschitz, and the origin is globally exponentially stable, then
V (x) is defined and satisfies the aforementioned inequalities
for all x ∈ Rn

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Example 3.16
Consider the system ẋ = f (x) where f is continuously
differentiable in the neighborhood of the origin and f (0) = 0.
Show that the origin is exponentially stable only if
A = [∂f /∂x](0) is Hurwitz
f (x) = Ax + G(x)x, G(x) → 0 as x → 0

Given any L > 0, there is r1 > 0 such that


kG(x)k ≤ L, ∀ kxk < r1

Because the origin of ẋ = f (x) is exponentially stable, let


V (x) be the function provided by the converse Lyapunov
theorem over the domain {kxk < r0 }. Use V (x) as a
Lyapunov function candidate for ẋ = Ax

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


(∂V /∂x) Ax = (∂V /∂x) f (x) − (∂V /∂x) G(x)x
            ≤ −c3 kxk² + c4 Lkxk²
            = −(c3 − c4 L)kxk²

Take L < c3 /c4 ,  γ := c3 − c4 L > 0 ⇒

(∂V /∂x) Ax ≤ −γkxk²,  ∀ kxk < min{r0 , r1 }
The origin of ẋ = Ax is exponentially stable

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Theorem 3.9 (Asymptotic Stability)
Let x = 0 be an asymptotically stable equilibrium point for
ẋ = f (x), where f is locally Lipschitz on a domain D ⊂ Rn
that contains the origin. Let RA ⊂ D be the region of
attraction of x = 0. Then, there is a smooth, positive definite
function V (x) and a continuous, positive definite function
W (x), both defined for all x ∈ RA , such that

V (x) → ∞ as x → ∂RA

∂V
f (x) ≤ −W (x), ∀ x ∈ RA
∂x
and for any c > 0, {V (x) ≤ c} is a compact subset of RA
When RA = Rn , V (x) is radially unbounded

Nonlinear Control Lecture # 7 Stability of Equilibrium Points


Nonlinear Control
Lecture # 8
Time Varying
and
Perturbed Systems

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Time-varying Systems
ẋ = f (t, x)
f (t, x) is piecewise continuous in t and locally Lipschitz in x
for all t ≥ 0 and all x ∈ D, (0 ∈ D). The origin is an
equilibrium point at t = 0 if
f (t, 0) = 0, ∀ t ≥ 0
While the solution of the time-invariant system
ẋ = f (x), x(t0 ) = x0
depends only on (t − t0 ), the solution of
ẋ = f (t, x), x(t0 ) = x0
may depend on both t and t0
Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems
Comparison Functions
A scalar continuous function α(r), defined for r ∈ [0, a),
belongs to class K if it is strictly increasing and α(0) = 0.
It belongs to class K∞ if it is defined for all r ≥ 0 and
α(r) → ∞ as r → ∞
A scalar continuous function β(r, s), defined for r ∈ [0, a)
and s ∈ [0, ∞), belongs to class KL if, for each fixed s,
the mapping β(r, s) belongs to class K with respect to r
and, for each fixed r, the mapping β(r, s) is decreasing
with respect to s and β(r, s) → 0 as s → ∞

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Example 4.1

α(r) = tan−1 (r) is strictly increasing since


α′ (r) = 1/(1 + r 2 ) > 0. It belongs to class K, but not to
class K∞ since limr→∞ α(r) = π/2 < ∞
α(r) = r c , c > 0, is strictly increasing since
α′ (r) = cr c−1 > 0. Moreover, limr→∞ α(r) = ∞; thus, it
belongs to class K∞
α(r) = min{r, r 2 } is continuous, strictly increasing, and
limr→∞ α(r) = ∞. Hence, it belongs to class K∞ . It is
not continuously differentiable at r = 1. Continuous
differentiability is not required for a class K function

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


β(r, s) = r/(ksr + 1), for any positive constant k, is
strictly increasing in r since
∂β/∂r = 1/(ksr + 1)² > 0

and strictly decreasing in s since

∂β/∂s = −kr² /(ksr + 1)² < 0

β(r, s) → 0 as s → ∞. It belongs to class KL


β(r, s) = r c e−as , with positive constants a and c, belongs
to class KL

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems
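The KL properties of β(r, s) = r/(ksr + 1) can be spot-checked on a grid (here with the arbitrary choice k = 2):

```python
k = 2.0                                # any positive constant

def beta(r, s):
    return r / (k * s * r + 1.0)

rs = [0.1 * i for i in range(1, 50)]   # r-grid (r > 0)
ss = [0.5 * i for i in range(0, 40)]   # s-grid (s >= 0)

# class K in r for each fixed s: strictly increasing, beta(0, s) = 0
inc_in_r = all(beta(r1, s) < beta(r2, s)
               for s in ss for r1, r2 in zip(rs, rs[1:]))
# strictly decreasing in s for each fixed r, with beta(r, s) -> 0
dec_in_s = all(beta(r, s2) < beta(r, s1)
               for r in rs for s1, s2 in zip(ss, ss[1:]))
vanishes = beta(1.0, 1e6) < 1e-5
```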


Lemma 4.1
Let α1 and α2 be class K functions on [0, a1 ) and [0, a2 ),
respectively, with a1 ≥ limr→a2 α2 (r), and β be a class KL
function defined on [0, limr→a2 α2 (r)) × [0, ∞) with
a1 ≥ limr→a2 β(α2 (r), 0). Let α3 and α4 be class K∞
functions. Denote the inverse of αi by αi−1 . Then,
α1−1 is defined on [0, limr→a1 α1 (r)) and belongs to class
K
α3−1 is defined on [0, ∞) and belongs to class K∞
α1 ◦ α2 is defined on [0, a2 ) and belongs to class K
α3 ◦ α4 is defined on [0, ∞) and belongs to class K∞
σ(r, s) = α1 (β(α2 (r), s)) is defined on [0, a2 ) × [0, ∞)
and belongs to class KL

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Lemma 4.2
Let V : D → R be a continuous positive definite function
defined on a domain D ⊂ Rn that contains the origin. Let
Br ⊂ D for some r > 0. Then, there exist class K functions
α1 and α2 , defined on [0, r], such that

α1 (kxk) ≤ V (x) ≤ α2 (kxk)

for all x ∈ Br . If D = Rn and V (x) is radially unbounded,


then there exist class K∞ functions α1 and α2 such that the
foregoing inequality holds for all x ∈ Rn

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Definition 4.2
The equilibrium point x = 0 of ẋ = f (t, x) is
uniformly stable if there exist a class K function α and a
positive constant c, independent of t0 , such that

kx(t)k ≤ α(kx(t0 )k), ∀ t ≥ t0 ≥ 0, ∀ kx(t0 )k < c

uniformly asymptotically stable if there exist a class KL


function β and a positive constant c, independent of t0 ,
such that

kx(t)k ≤ β(kx(t0 )k, t − t0 ), ∀ t ≥ t0 ≥ 0, ∀ kx(t0 )k < c

globally uniformly asymptotically stable if the foregoing


inequality is satisfied for any initial state x(t0 )

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


exponentially stable if there exist positive constants c, k,
and λ such that

kx(t)k ≤ kkx(t0 )ke−λ(t−t0 ) , ∀ kx(t0 )k < c

globally exponentially stable if the foregoing inequality is


satisfied for any initial state x(t0 )

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Theorem 4.1
Let the origin x = 0 be an equilibrium point of ẋ = f (t, x)
and D ⊂ Rn be a domain containing x = 0. Suppose f (t, x)
is piecewise continuous in t and locally Lipschitz in x for all
t ≥ 0 and x ∈ D. Let V (t, x) be a continuously differentiable
function such that

W1 (x) ≤ V (t, x) ≤ W2 (x)

∂V ∂V
+ f (t, x) ≤ 0
∂t ∂x
for all t ≥ 0 and x ∈ D, where W1 (x) and W2 (x) are
continuous positive definite functions on D. Then, the origin
is uniformly stable

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Theorem 4.2
Suppose the assumptions of the previous theorem are satisfied
with
∂V ∂V
+ f (t, x) ≤ −W3 (x)
∂t ∂x
for all t ≥ 0 and x ∈ D, where W3 (x) is a continuous positive
definite function on D. Then, the origin is uniformly
asymptotically stable. Moreover, if r and c are chosen such
that Br = {kxk ≤ r} ⊂ D and c < minkxk=r W1 (x), then
every trajectory starting in {W2 (x) ≤ c} satisfies
kx(t)k ≤ β(kx(t0 )k, t − t0 ), ∀ t ≥ t0 ≥ 0

for some class KL function β. Finally, if D = Rn and W1 (x)


is radially unbounded, then the origin is globally uniformly
asymptotically stable

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Theorem 4.3
Suppose the assumptions of the previous theorem are satisfied
with
k1 kxka ≤ V (t, x) ≤ k2 kxka
∂V ∂V
+ f (t, x) ≤ −k3 kxka
∂t ∂x
for all t ≥ 0 and x ∈ D, where k1 , k2 , k3 , and a are positive
constants. Then, the origin is exponentially stable. If the
assumptions hold globally, the origin will be globally
exponentially stable

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Terminology: A function V (t, x) is said to be
positive semidefinite if V (t, x) ≥ 0
positive definite if V (t, x) ≥ W1 (x) for some positive
definite function W1 (x)
radially unbounded if V (t, x) ≥ W1 (x) and W1 (x) is
radially unbounded
decrescent if V (t, x) ≤ W2 (x)
negative definite (semidefinite) if −V (t, x) is positive
definite (semidefinite)

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Theorems 4.1 and 4.2 say that the origin is uniformly stable if
there is a continuously differentiable, positive definite,
decrescent function V (t, x), whose derivative along the
trajectories of the system is negative semidefinite. It is
uniformly asymptotically stable if the derivative is negative
definite, and globally uniformly asymptotically stable if the
conditions for uniform asymptotic stability hold globally with a
radially unbounded V (t, x)

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


Example 4.2

ẋ = −[1 + g(t)]x3 , g(t) ≥ 0, ∀ t ≥ 0

V (x) = ½ x²

V̇ (t, x) = −[1 + g(t)]x4 ≤ −x4 , ∀ x ∈ R, ∀ t ≥ 0


The origin is globally uniformly asymptotically stable

Example 4.3

ẋ1 = −x1 − g(t)x2 , ẋ2 = x1 − x2

0 ≤ g(t) ≤ k and ġ(t) ≤ g(t), ∀ t ≥ 0

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems


V (t, x) = x21 + [1 + g(t)]x22

x21 + x22 ≤ V (t, x) ≤ x21 + (1 + k)x22 , ∀ x ∈ R2

V̇ (t, x) = −2x21 + 2x1 x2 − [2 + 2g(t) − ġ(t)]x22

2 + 2g(t) − ġ(t) ≥ 2 + 2g(t) − g(t) ≥ 2


 
V̇ (t, x) ≤ −2x1² + 2x1 x2 − 2x2² = −xT [2 −1 ; −1 2] x
The origin is globally exponentially stable

Nonlinear Control Lecture # 8 Time Varying and Perturbed Systems
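A simulation sketch for Example 4.3 with the admissible choice g(t) = e⁻ᵗ (which satisfies 0 ≤ g(t) ≤ 1 and ġ(t) = −e⁻ᵗ ≤ g(t); this particular g is an assumption, not from the slides). The function V (t, x) = x1² + (1 + g(t))x2² should decay exponentially along the solution:

```python
import math

x1, x2, t, dt = 1.0, 1.0, 0.0, 1e-3

def V(t, x1, x2):
    return x1**2 + (1.0 + math.exp(-t)) * x2**2

v0 = V(t, x1, x2)
for _ in range(20000):                 # Euler integration to t = 20
    g = math.exp(-t)                   # 0 <= g <= 1 and g' = -g <= g
    x1, x2 = x1 + dt * (-x1 - g * x2), x2 + dt * (x1 - x2)
    t += dt
vT = V(t, x1, x2)
```

Since V̇ ≤ −kxk² ≤ −V/2, the value at t = 20 should be below V (0)e⁻¹⁰.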


Nonlinear Control
Lecture # 9
Time Varying
and
Perturbed Systems

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


Perturbed Systems
Nominal System:
ẋ = f (x), f (0) = 0
Perturbed System:
ẋ = f (x) + g(t, x), g(t, 0) = 0
Case 1: The origin of the nominal system is exponentially
stable
c1 kxk² ≤ V (x) ≤ c2 kxk²

(∂V /∂x) f (x) ≤ −c3 kxk²

k∂V /∂xk ≤ c4 kxk

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


Use V (x) as a Lyapunov function candidate for the perturbed
system
∂V ∂V
V̇ (t, x) = f (x) + g(t, x)
∂x ∂x
Assume that
kg(t, x)k ≤ γkxk, γ ≥ 0

V̇ (t, x) ≤ −c3 kxk² + k∂V /∂xk kg(t, x)k ≤ −c3 kxk² + c4 γkxk²

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


c3
γ<
c4

V̇ (t, x) ≤ −(c3 − γc4 )kxk2


The origin is an exponentially stable equilibrium point of the
perturbed system

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


Example 4.4

ẋ = Ax + g(t, x); A is Hurwitz; kg(t, x)k ≤ γkxk

Q = QT > 0; P A + AT P = −Q; V (x) = xT P x


λmin(P )kxk2 ≤ V (x) ≤ λmax (P )kxk2

(∂V /∂x) Ax = −xT Qx ≤ −λmin (Q)kxk²

(∂V /∂x) g = 2xT P g ≤ 2kP k kxk kgk ≤ 2kP kγkxk²

V̇ (t, x) ≤ −λmin (Q)kxk² + 2λmax (P )γkxk²

The origin is globally exponentially stable if γ < λmin (Q)/(2λmax (P ))

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


Example 4.5

ẋ1 = x2 ,  ẋ2 = −4x1 − 2x2 + βx2³ ,  β ≥ 0

ẋ = Ax + g(x)

A = [0 1 ; −4 −2],  g(x) = [0 ; βx2³ ]

The eigenvalues of A are −1 ± j√3

P A + AT P = −I ⇒ P = [3/2 1/8 ; 1/8 5/16]

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


V (x) = xT P x,  (∂V /∂x) Ax = −xT x

c3 = 1,  c4 = 2kP k = 2λmax (P ) = 2 × 1.513 = 3.026

kg(x)k = β|x2 |³
g(x) satisfies the bound kg(x)k ≤ γkxk over compact sets of
x. Consider the compact set

Ωc = {V (x) ≤ c} = {xT P x ≤ c},  c > 0

k2 = max_{xT P x≤c} |x2 | = max_{xT P x≤c} |[0 1] x|
   = √c k[0 1] P^{−1/2} k = 1.8194 √c

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems



k2 = max |[0 1]x| = 1.8194 c
xT P x≤c

kg(x)k ≤ β c (1.8194)2 kxk, ∀ x ∈ Ωc

kg(x)k ≤ γkxk, ∀ x ∈ Ωc , γ = β c (1.8194)2

c3 1 0.1
γ< ⇔ β< 2

c4 3.026 × (1.8194) c c

β < 0.1/c ⇒ V̇ (x) ≤ −(1 − 10βc)kxk2


Hence, the origin is exponentially stable and Ωc is an estimate
of the region of attraction

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems
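The constant k2 = max over xT P x ≤ c of |x2 |, equal to the closed form √(c (P −1 )22 ) ≈ 1.8194 √c, can be reproduced by brute force on the boundary of the ellipse (shown for c = 1):

```python
import math

P = [[1.5, 0.125], [0.125, 0.3125]]    # P of Example 4.5
c = 1.0

det = P[0][0] * P[1][1] - P[0][1] ** 2
k2_formula = math.sqrt(c * P[0][0] / det)   # sqrt(c * (P^{-1})_{22})

# Brute force on the boundary x'Px = c: for each x1, solve the quadratic
# p22 x2^2 + 2 p12 x1 x2 + (p11 x1^2 - c) = 0 for x2
best = 0.0
n = 20000
x1max = math.sqrt(c * P[1][1] / det)        # |x1| range with real roots
for i in range(-n, n + 1):
    x1 = x1max * i / n
    disc = (P[0][1] * x1) ** 2 - P[1][1] * (P[0][0] * x1**2 - c)
    if disc >= 0.0:
        root = math.sqrt(disc)
        for x2 in ((-P[0][1] * x1 + root) / P[1][1],
                   (-P[0][1] * x1 - root) / P[1][1]):
            best = max(best, abs(x2))
```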


Alternative Bound on β:

V̇ (x) = −kxk² + 2xT P g(x) ≤ −kxk² + (β/8) x2³ ([2 5] x)
      ≤ −kxk² + (√29/8) β x2² kxk²

Over Ωc ,  x2² ≤ (1.8194)² c

V̇ (x) ≤ −[1 − (√29/8) β (1.8194)² c] kxk² = −[1 − βc/0.448] kxk²

If β < 0.448/c, the origin will be exponentially stable and Ωc
will be an estimate of the region of attraction

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


Remark
The inequality β < 0.448/c shows a tradeoff between the
estimate of the region of attraction and the estimate of the
upper bound on β

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


Case 2: The origin of the nominal system is asymptotically
stable
V̇ (t, x) = (∂V /∂x) f (x) + (∂V /∂x) g(t, x) ≤ −W3 (x) + (∂V /∂x) g(t, x)

Under what condition will the following inequality hold?

(∂V /∂x) g(t, x) < W3 (x)

Special Case: Quadratic-Type Lyapunov function

(∂V /∂x) f (x) ≤ −c3 φ² (x),  k∂V /∂xk ≤ c4 φ(x)

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


V̇ (t, x) ≤ −c3 φ2 (x) + c4 φ(x)kg(t, x)k

c3
If kg(t, x)k ≤ γφ(x), with γ <
c4

V̇ (t, x) ≤ −(c3 − c4 γ)φ2 (x)

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


Example 4.6

ẋ = −x3 + g(t, x)
V (x) = x4 is a quadratic-type Lyapunov function for ẋ = −x3
(∂V /∂x)(−x³ ) = −4x⁶ ,  |∂V /∂x| = 4|x|³

φ(x) = |x|3 , c3 = 4, c4 = 4

Suppose |g(t, x)| ≤ γ|x|3 , ∀ x, with γ < 1

V̇ (t, x) ≤ −4(1 − γ)φ2 (x)


Hence, the origin is globally uniformly asymptotically stable

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems
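A simulation sketch: the perturbation g(t, x) = γ x³ sin t (an illustrative choice, not from the slides) satisfies |g(t, x)| ≤ γ|x|³ with γ = 0.5 < 1, so the solution should still converge to the origin, though only at the slow rate of ẋ = −x³:

```python
import math

gamma = 0.5                            # perturbation gain, gamma < 1
x, t, dt = 1.0, 0.0, 1e-3
for _ in range(100000):                # Euler integration to t = 100
    g = gamma * math.sin(t) * x**3     # |g(t, x)| <= gamma |x|^3
    x += dt * (-x**3 + g)
    t += dt
```

The exact solution of ẋ = −c x³ decays like 1/√(2ct), so by t = 100 the state is small but far from machine zero; this is the slow, non-exponential convergence typical of a merely asymptotically stable origin.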


Remark
A nominal system with asymptotically, but not exponentially,
stable origin is not robust to smooth perturbations with
arbitrarily small linear growth bounds

Example 4.7

ẋ = −x3 + γx
The origin is unstable for any γ > 0

Nonlinear Control Lecture # 9 Time Varying and Perturbed Systems


Nonlinear Control
Lecture # 10
Time Varying
and
Perturbed Systems

Nonlinear Control Lecture # 10 Time Varying and Perturbed Systems


Boundedness and Ultimate Boundedness
Definition 4.3
The solutions of ẋ = f (t, x) are
uniformly bounded if there exists c > 0, independent of
t0 , and for every a ∈ (0, c), there is β > 0, dependent on
a but independent of t0 , such that

kx(t0 )k ≤ a ⇒ kx(t)k ≤ β, ∀ t ≥ t0

uniformly ultimately bounded with ultimate bound b if


there exists a positive constant c, independent of t0 , and
for every a ∈ (0, c), there is T ≥ 0, dependent on a and b
but independent of t0 , such that

kx(t0 )k ≤ a ⇒ kx(t)k ≤ b, ∀ t ≥ t0 + T

Nonlinear Control Lecture # 10 Time Varying and Perturbed Systems


Add “Globally” if a can be arbitrarily large
Drop “uniformly” if ẋ = f (x)

Nonlinear Control Lecture # 10 Time Varying and Perturbed Systems


Lyapunov Analysis: Let V (x) be a cont. diff. positive definite
function and suppose the sets

Ωc = {V (x) ≤ c}, Ωε = {V (x) ≤ ε}, Λ = {ε ≤ V (x) ≤ c}

are compact for some c > ε > 0


[Figure: the nested level sets Ωε ⊂ Ωc with the annulus Λ = {ε ≤ V (x) ≤ c} between them]

Nonlinear Control Lecture # 10 Time Varying and Perturbed Systems


Suppose
∂V
V̇ (t, x) = f (t, x) ≤ −W3 (x), ∀ x ∈ Λ, ∀ t ≥ 0
∂x
W3 (x) is continuous and positive definite

Ωc and Ωε are positively invariant

k = min_{x∈Λ} W3 (x) > 0

V̇ (t, x) ≤ −k, ∀ x ∈ Λ, ∀ t ≥ t0 ≥ 0

V (x(t)) ≤ V (x(t0 )) − k(t − t0 ) ≤ c − k(t − t0 )


x(t) enters the set Ωε within the interval [t0 , t0 + (c − ε)/k]

Nonlinear Control Lecture # 10 Time Varying and Perturbed Systems


Suppose

V̇ (t, x) ≤ −W3 (x), ∀ x ∈ D with kxk ≥ µ, ∀ t ≥ 0

Choose c and ε such that Λ ⊂ D ∩ {kxk ≥ µ}


[Figure: Bµ ⊂ Ωε ⊂ Ωc ⊂ D; trajectories enter Ωε and remain in the ball Bb of radius b = α1−1 (α2 (µ))]

Nonlinear Control Lecture # 10 Time Varying and Perturbed Systems


Let α1 and α2 be class K functions such that

α1 (kxk) ≤ V (x) ≤ α2 (kxk)

V(x) ≤ c ⇒ α1(‖x‖) ≤ c ⇔ ‖x‖ ≤ α1^{-1}(c)

If Br ⊂ D, c = α1(r) ⇒ Ωc ⊂ Br ⊂ D

‖x‖ ≤ µ ⇒ V(x) ≤ α2(µ)

ε = α2(µ) ⇒ Bµ ⊂ Ωε

What is the ultimate bound?

V(x) ≤ ε ⇒ α1(‖x‖) ≤ ε ⇔ ‖x‖ ≤ α1^{-1}(ε) = α1^{-1}(α2(µ))



Theorem 4.4
Suppose Bµ ⊂ D ⊂ Rn and

α1 (kxk) ≤ V (x) ≤ α2 (kxk)

(∂V/∂x) f(t, x) ≤ −W3(x), ∀ x ∈ D with ‖x‖ ≥ µ, ∀ t ≥ 0
where α1 and α2 are class K functions and W3 (x) is a
continuous positive definite function. Choose c > 0 such that
Ωc = {V (x) ≤ c} is compact and contained in D and suppose
µ < α2^{-1}(c). Then, Ωc is positively invariant and there exists a
class KL function β such that for every x(t0 ) ∈ Ωc ,

‖x(t)‖ ≤ max{β(‖x(t0)‖, t − t0), α1^{-1}(α2(µ))}, ∀ t ≥ t0




If D = Rn and α1 ∈ K∞ , the inequality holds ∀x(t0 ), ∀µ



Remarks
The ultimate bound is independent of the initial state
The ultimate bound is a class K function of µ; hence, the
smaller the value of µ, the smaller the ultimate bound.
As µ → 0, the ultimate bound approaches zero



Example 4.8

ẋ1 = x2,   ẋ2 = −(1 + x1^2)x1 − x2 + M cos ωt,   M ≥ 0

With M = 0,   ẋ2 = −(1 + x1^2)x1 − x2 = −h(x1) − x2

V(x) = x^T [1/2  1/2; 1/2  1] x + 2∫_0^{x1} (y + y^3) dy   (Example 3.7)

V(x) = x^T [3/2  1/2; 1/2  1] x + (1/2)x1^4 =: x^T P x + (1/2)x1^4



λmin(P)‖x‖^2 ≤ V(x) ≤ λmax(P)‖x‖^2 + (1/2)‖x‖^4

α1(r) = λmin(P)r^2,   α2(r) = λmax(P)r^2 + (1/2)r^4

V̇ = −x1^2 − x1^4 − x2^2 + (x1 + 2x2)M cos ωt
  ≤ −‖x‖^2 − x1^4 + M√5 ‖x‖
  = −(1 − θ)‖x‖^2 − x1^4 − θ‖x‖^2 + M√5 ‖x‖   (0 < θ < 1)
  ≤ −(1 − θ)‖x‖^2 − x1^4, ∀ ‖x‖ ≥ M√5/θ =: µ

The solutions are GUUB by

b = α1^{-1}(α2(µ)) = √[(λmax(P)µ^2 + µ^4/2)/λmin(P)]
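As a numerical sanity check of the bound just derived (not part of the original slides), the system can be integrated with RK4 and the trajectory compared against b; M = 0.1, θ = 0.5, the initial state (0.5, 0.5), the horizon, and the step size are illustrative choices.

```python
import math

# RK4 sketch of Example 4.8: x1' = x2, x2' = -(1 + x1^2)*x1 - x2 + M*cos(t).
# We check that the trajectory eventually stays inside the ultimate bound
# b = sqrt((lmax*mu^2 + mu^4/2)/lmin) with mu = M*sqrt(5)/theta.
M, THETA = 0.1, 0.5

def f(t, x):
    x1, x2 = x
    return (x2, -(1.0 + x1*x1)*x1 - x2 + M*math.cos(t))

def rk4(x, T=40.0, h=0.01):
    t, traj = 0.0, []
    while t < T:
        k1 = f(t, x)
        k2 = f(t + h/2, (x[0] + h/2*k1[0], x[1] + h/2*k1[1]))
        k3 = f(t + h/2, (x[0] + h/2*k2[0], x[1] + h/2*k2[1]))
        k4 = f(t + h, (x[0] + h*k3[0], x[1] + h*k3[1]))
        x = (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        t += h
        traj.append((t, x))
    return traj

# closed-form eigenvalues of the symmetric matrix P = [3/2 1/2; 1/2 1]
lmin = 1.25 - math.sqrt(0.0625 + 0.25)
lmax = 1.25 + math.sqrt(0.0625 + 0.25)
mu = M*math.sqrt(5)/THETA
b = math.sqrt((lmax*mu**2 + mu**4/2)/lmin)
tail = [math.hypot(*x) for t, x in rk4((0.5, 0.5)) if t > 20.0]
print(max(tail) <= b)
```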



Theorem 4.5
Suppose
c1‖x‖^2 ≤ V(x) ≤ c2‖x‖^2

(∂V/∂x) f(t, x) ≤ −c3‖x‖^2, ∀ x ∈ D with ‖x‖ ≥ µ, ∀ t ≥ 0

for some positive constants c1 to c3, and µ < √(c/c2). Then,
Ωc = {V(x) ≤ c} is positively invariant and ∀ x(t0) ∈ Ωc

V(x(t)) ≤ max{V(x(t0)) e^{−(c3/c2)(t−t0)}, c2µ^2}, ∀ t ≥ t0

‖x(t)‖ ≤ √(c2/c1) max{‖x(t0)‖ e^{−(c3/c2)(t−t0)/2}, µ}, ∀ t ≥ t0

If D = Rn, the inequalities hold ∀ x(t0), ∀ µ



Example 4.9

ẋ1 = x2,   ẋ2 = −h(x1) − x2 + u(t),   h(x1) = x1 − (1/3)x1^3

|u(t)| ≤ d
V(x) = (1/2) x^T [k  k; k  1] x + ∫_0^{x1} h(y) dy,   0 < k < 1

(2/3)x1^2 ≤ x1 h(x1) ≤ x1^2,   (5/12)x1^2 ≤ ∫_0^{x1} h(y) dy ≤ (1/2)x1^2,   ∀ |x1| ≤ 1

λmin(P1)‖x‖^2 ≤ x^T P1 x ≤ V(x) ≤ x^T P2 x ≤ λmax(P2)‖x‖^2

P1 = (1/2)[k + 5/6  k; k  1],   P2 = (1/2)[k + 1  k; k  1]



V̇ = −k x1 h(x1) − (1 − k)x2^2 + (k x1 + x2)u(t)
  ≤ −(2/3)k x1^2 − (1 − k)x2^2 + |k x1 + x2| d

k = 3/5 ⇒ c1 = λmin(P1) = 0.2894, c2 = λmax(P2) = 0.9854

V̇ ≤ −(2/5)‖x‖^2 + √(1 + (3/5)^2) ‖x‖ d
  = −(0.9×2/5)‖x‖^2 − (0.1×2/5)‖x‖^2 + √(1 + (3/5)^2) ‖x‖ d
  ≤ −(0.1×2/5)‖x‖^2, ∀ ‖x‖ ≥ 3.2394 d =: µ

c = min_{|x1|=1} V(x) = 0.5367 ⇒ Ωc = {V(x) ≤ c} ⊂ {|x1| ≤ 1}

For µ < √(c/c2) we need d < 0.2278. Theorem 4.5 holds and
b = µ√(c2/c1) = 5.9775 d
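The constants c1 = 0.2894 and c2 = 0.9854 quoted in this example can be reproduced from the closed-form eigenvalues of a symmetric 2×2 matrix; the entries below are those of P1 = (1/2)[k + 5/6, k; k, 1] and P2 = (1/2)[k + 1, k; k, 1] with k = 3/5.

```python
import math

# Closed-form eigenvalues of a symmetric 2x2 matrix [[a, b], [b, d]]:
# lambda = (a + d)/2 -/+ sqrt(((a - d)/2)^2 + b^2)
def eig_sym2(a, b, d):
    m, r = (a + d)/2, math.sqrt(((a - d)/2)**2 + b**2)
    return m - r, m + r

k = 3/5
c1 = eig_sym2((k + 5/6)/2, k/2, 1/2)[0]   # lambda_min(P1)
c2 = eig_sym2((k + 1)/2, k/2, 1/2)[1]     # lambda_max(P2)
print(round(c1, 4), round(c2, 4))   # 0.2894 0.9854
```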



Perturbed Systems: Nonvanishing Perturbation
Nominal System:

ẋ = f (x), f (0) = 0

Perturbed System:

ẋ = f(x) + g(t, x),   g(t, 0) ≠ 0

Case 1 (Lemma 4.3): The origin of ẋ = f(x) is exponentially stable
Case 2 (Lemma 4.4): The origin of ẋ = f(x) is asymptotically stable



Lemma 4.3
Suppose that ∀ x ∈ Br, ∀ t ≥ 0

c1‖x‖^2 ≤ V(x) ≤ c2‖x‖^2

(∂V/∂x) f(x) ≤ −c3‖x‖^2,   ‖∂V/∂x‖ ≤ c4‖x‖

‖g(t, x)‖ ≤ δ < (c3/c4)√(c1/c2) θr,   0 < θ < 1

Then, for all x(t0) ∈ {V(x) ≤ c1 r^2}

‖x(t)‖ ≤ max{k exp[−γ(t − t0)] ‖x(t0)‖, b}, ∀ t ≥ t0

k = √(c2/c1),   γ = (1 − θ)c3/(2c2),   b = (δc4/(θc3))√(c2/c1)



Proof
Apply Theorem 4.5

V̇(t, x) = (∂V/∂x) f(x) + (∂V/∂x) g(t, x)
  ≤ −c3‖x‖^2 + ‖∂V/∂x‖ ‖g(t, x)‖
  ≤ −c3‖x‖^2 + c4 δ‖x‖
  = −(1 − θ)c3‖x‖^2 − θc3‖x‖^2 + c4 δ‖x‖
  ≤ −(1 − θ)c3‖x‖^2, ∀ ‖x‖ ≥ δc4/(θc3) =: µ

x(t0) ∈ Ω = {V(x) ≤ c1 r^2}

µ < r√(c1/c2) ⇔ δ < (c3/c4)√(c1/c2) θr,   b = µ√(c2/c1) ⇔ b = (δc4/(θc3))√(c2/c1)



Example 4.10

ẋ1 = x2,   ẋ2 = −4x1 − 2x2 + βx2^3 + d(t)


β ≥ 0, |d(t)| ≤ δ, ∀ t ≥ 0

V(x) = x^T P x = x^T [3/2  1/8; 1/8  5/16] x   (Example 4.5)

V̇(t, x) = −‖x‖^2 + 2βx2^3 ((1/8)x1 + (5/16)x2) + 2d(t) ((1/8)x1 + (5/16)x2)
  ≤ −‖x‖^2 + (√29/8) β k2^2 ‖x‖^2 + (√29/8) δ ‖x‖




k2 = max_{x^T P x ≤ c} |x2| = 1.8194 √c

Suppose β ≤ 8(1 − ζ)/(√29 k2^2)   (0 < ζ < 1)

V̇(t, x) ≤ −ζ‖x‖^2 + (√29/8) δ‖x‖
  ≤ −(1 − θ)ζ‖x‖^2, ∀ ‖x‖ ≥ √29 δ/(8ζθ) =: µ   (0 < θ < 1)

If µ^2 λmax(P) < c, then all solutions of the perturbed system,
starting in Ωc, are uniformly ultimately bounded by

b = (√29 δ/(8ζθ)) √(λmax(P)/λmin(P))



Lemma 4.4
Suppose that ∀ x ∈ Br, ∀ t ≥ 0

α1(‖x‖) ≤ V(x) ≤ α2(‖x‖),   (∂V/∂x) f(x) ≤ −α3(‖x‖)

‖(∂V/∂x)(x)‖ ≤ k,   ‖g(t, x)‖ ≤ δ < θα3(α2^{-1}(α1(r)))/k

αi ∈ K, 0 < θ < 1. Then, ∀ x(t0) ∈ {V(x) ≤ α1(r)}

‖x(t)‖ ≤ max{β(‖x(t0)‖, t − t0), ρ(δ)}, ∀ t ≥ t0,   β ∈ KL

ρ(δ) = α1^{-1}(α2(α3^{-1}(δk/θ)))

Proof: Apply Theorem 4.4



Compare
Case 1: δ < (c3/c4)√(c1/c2) θr
Case 2: δ < θα3(α2^{-1}(α1(r)))/k



Example 4.11

ẋ = −x/(1 + x^2)   (Globally asymptotically stable)

V(x) = x^4 ⇒ (∂V/∂x)(−x/(1 + x^2)) = −4x^4/(1 + x^2)

α1(|x|) = α2(|x|) = |x|^4;   α3(|x|) = 4|x|^4/(1 + |x|^2);   k = 4r^3

θα3(α2^{-1}(α1(r)))/k = θα3(r)/k = rθ/(1 + r^2) < 1/2

ẋ = −x/(1 + x^2) + δ,   δ > 1/2 ⇒ lim_{t→∞} x(t) = ∞
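The blow-up claim is easy to see numerically: since max_x x/(1 + x^2) = 1/2, any constant δ > 1/2 keeps ẋ ≥ δ − 1/2 > 0. A minimal Euler sketch (δ = 0.6, the horizon, and the step size are illustrative choices):

```python
# Euler simulation of x' = -x/(1+x^2) + delta from x(0) = 0. With delta = 0.6
# the right-hand side is at least 0.1 everywhere, so x(T) >= 0.1*T: the state
# escapes even though the unperturbed origin is globally asymptotically stable.
def simulate(delta, T=100.0, h=0.01):
    x, t = 0.0, 0.0
    while t < T:
        x += h*(-x/(1.0 + x*x) + delta)
        t += h
    return x

print(simulate(0.6) > 9.0)   # grew at least linearly, no ultimate bound
```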



Nonlinear Control
Lecture # 11
Time Varying
and
Perturbed Systems



Input-to-State Stability (ISS)
Definition 4.4
The system ẋ = f (x, u) is input-to-state stable if there exist
β ∈ KL and γ ∈ K such that for any initial state x(t0 ) and
any bounded input u(t)
  
‖x(t)‖ ≤ max{β(‖x(t0)‖, t − t0), γ(sup_{t0≤τ≤t} ‖u(τ)‖)}

for all t ≥ t0
ISS of ẋ = f (x, u) implies
BIBS stability 
x(t) is ultimately bounded by γ(sup_{t0≤τ≤t} ‖u(τ)‖)
limt→∞ u(t) = 0 ⇒ limt→∞ x(t) = 0
The origin of ẋ = f (x, 0) is GAS



Theorem 4.6
Let V (x) be a continuously differentiable function

α1 (kxk) ≤ V (x) ≤ α2 (kxk)

(∂V/∂x) f(x, u) ≤ −W3(x), ∀ ‖x‖ ≥ ρ(‖u‖) > 0
∀ x ∈ Rn , u ∈ Rm , where α1 , α2 ∈ K∞ , ρ ∈ K, and W3 (x) is
a continuous positive definite function. Then, the system
ẋ = f(x, u) is ISS with γ = α1^{-1} ∘ α2 ∘ ρ

Proof
Let µ = ρ(sup_{τ≥t0} ‖u(τ)‖); then

(∂V/∂x) f(x, u) ≤ −W3(x), ∀ ‖x‖ ≥ µ



Apply Theorem 4.4

‖x(t)‖ ≤ max{β(‖x(t0)‖, t − t0), α1^{-1}(α2(µ))}

‖x(t)‖ ≤ max{β(‖x(t0)‖, t − t0), γ(sup_{τ≥t0} ‖u(τ)‖)}

Since x(t) depends only on u(τ) for t0 ≤ τ ≤ t, the supremum
on the right-hand side can be taken over [t0, t]



Lemma 4.5
Suppose f (x, u) is continuously differentiable and globally
Lipschitz in (x, u). If ẋ = f (x, 0) has a globally exponentially
stable equilibrium point at the origin, then the system
ẋ = f (x, u) is input-to-state stable

Proof: Apply (the converse Lyapunov) Theorem 3.8



Example 4.12

ẋ = −x^3 + u

The origin of ẋ = −x^3 is globally asymptotically stable

V = (1/2)x^2

V̇ = −x^4 + xu
  = −(1 − θ)x^4 − θx^4 + xu
  ≤ −(1 − θ)x^4, ∀ |x| ≥ (|u|/θ)^{1/3}   (0 < θ < 1)

The system is ISS with γ(r) = (r/θ)^{1/3}
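A short simulation illustrates the ISS estimate: with a constant input of magnitude r, the state should end up inside γ(r) = (r/θ)^{1/3}. The values θ = 0.5, r = 0.8, the initial state, and the step size are illustrative choices.

```python
# Euler simulation of Example 4.12: xdot = -x^3 + u with a constant input.
# The ISS gain gamma(r) = (r/theta)^(1/3) should ultimately bound |x(t)|.
def simulate(u, x0, T=30.0, h=1e-3):
    x, t = x0, 0.0
    while t < T:
        x += h*(-x**3 + u)
        t += h
    return x

theta, r = 0.5, 0.8
gamma = (r/theta)**(1/3)        # ISS gain evaluated at the input bound
x_end = simulate(u=r, x0=2.0)   # state settles near r**(1/3), inside gamma
print(abs(x_end) <= gamma)
```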



Example 4.13

ẋ = −x − 2x^3 + (1 + x^2)u^2

The origin of ẋ = −x − 2x^3 is globally exponentially stable

V = (1/2)x^2

V̇ = −x^2 − 2x^4 + x(1 + x^2)u^2
  = −x^4 − x^2(1 + x^2) + x(1 + x^2)u^2
  ≤ −x^4, ∀ |x| ≥ u^2

The system is ISS with γ(r) = r^2



Example 4.14

ẋ1 = −x1 + x2,   ẋ2 = −x1^3 − x2 + u

Investigate GAS of ẋ1 = −x1 + x2, ẋ2 = −x1^3 − x2

V(x) = (1/4)x1^4 + (1/2)x2^2 ⇒ V̇ = −x1^4 − x2^2

Now u ≠ 0

V̇ = −x1^4 − x2^2 + x2 u ≤ −x1^4 − x2^2 + |x2| |u|

V̇ ≤ −(1 − θ)[x1^4 + x2^2] − θx1^4 − θx2^2 + |x2| |u|   (0 < θ < 1)



−θx2^2 + |x2| |u| ≤ 0 for |x2| ≥ |u|/θ and has a maximum
value of u^2/(4θ) for |x2| < |u|/θ

x1^2 ≥ |u|/(2θ) or x2^2 ≥ u^2/θ^2 ⇒ −θx1^4 − θx2^2 + |x2| |u| ≤ 0

‖x‖^2 ≥ |u|/(2θ) + u^2/θ^2 ⇒ −θx1^4 − θx2^2 + |x2| |u| ≤ 0

ρ(r) = √(r/(2θ) + r^2/θ^2)

V̇ ≤ −(1 − θ)[x1^4 + x2^2], ∀ ‖x‖ ≥ ρ(|u|)

The system is ISS



Lemma 4.6
If the systems η̇ = f1 (η, ξ) and ξ˙ = f2 (ξ, u) are input-to-state
stable, then the cascade connection

η̇ = f1 (η, ξ), ξ˙ = f2 (ξ, u)

is input-to-state stable. Consequently, if η̇ = f1 (η, ξ) is


input-to-state stable and the origin of ξ˙ = f2 (ξ) is globally
asymptotically stable, then the origin of the cascade
connection
η̇ = f1 (η, ξ), ξ˙ = f2 (ξ)
is globally asymptotically stable



Example 4.15

ẋ1 = −x1 + x2^2,   ẋ2 = −x2 + u

The system ẋ1 = −x1 + x2^2 is input-to-state stable, as seen
from Theorem 4.6 with V(x1) = (1/2)x1^2

V̇ = −x1^2 + x1 x2^2 ≤ −(1 − θ)x1^2, for |x1| ≥ x2^2/θ, 0 < θ < 1

The linear system ẋ2 = −x2 + u is input-to-state stable by Lemma 4.5

The cascade connection is input-to-state stable
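A minimal sketch of the cascade with a bounded input, checking that both states remain bounded after the transient; the input u(t) = sin t, the initial state, and the step size are illustrative choices.

```python
import math

# Euler simulation of the cascade of Example 4.15: x1' = -x1 + x2^2,
# x2' = -x2 + u. ISS of the cascade predicts bounded states for any bounded
# input; we record the peak norm components after the transient has died out.
def simulate(T=50.0, h=1e-3):
    x1, x2, t, peak = 5.0, -3.0, 0.0, 0.0
    while t < T:
        u = math.sin(t)
        x1, x2 = x1 + h*(-x1 + x2*x2), x2 + h*(-x2 + u)
        t += h
        if t > 20.0:
            peak = max(peak, abs(x1), abs(x2))
    return peak

print(simulate() < 2.0)   # both states settle to a small bounded regime
```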



Definition 4.5
Let X ⊂ Rn and U ⊂ Rm be bounded sets containing their
respective origins as interior points. The system ẋ = f (x, u) is
regionally input-to-state stable with respect to X × U if there
exist β ∈ KL and γ ∈ K such that for any initial state
x(t0 ) ∈ X and any input u with u(t) ∈ U for all t ≥ t0 , the
solution x(t) belongs to X for all t ≥ t0 and satisfies
  
‖x(t)‖ ≤ max{β(‖x(t0)‖, t − t0), γ(sup_{t0≤τ≤t} ‖u(τ)‖)}

The system ẋ = f (x, u) is locally input-to-state stable if it is


regionally input-to-state stable with respect to some
neighborhood of the origin (x = 0, u = 0)



Theorem 4.7
Suppose f (x, u) is locally Lipschitz in (x, u) for all x ∈ Br and
u ∈ Bλ . Let V (x) be a continuously differentiable function
that satisfies

α1 (kxk) ≤ V (x) ≤ α2 (kxk)

(∂V/∂x) f(x, u) ≤ −W3(x), ∀ ‖x‖ ≥ ρ(‖u‖) > 0
for all x ∈ Br and u ∈ Bλ , where α1 , α2 , ρ ∈ K and W3 (x) is
a continuous positive definite function. Suppose
α1 (r) > α2 (ρ(λ)) and let Ω = {V (x) ≤ α1 (r)}. Then, the
system ẋ = f (x, u) is regionally input-to-state stable with
respect to Ω × Bλ and γ = α1^{-1} ∘ α2 ∘ ρ



Local input-to-state stability of ẋ = f (x, u) is equivalent to
asymptotic stability of the origin of ẋ = f (x, 0)
Lemma 4.7
Suppose f (x, u) is locally Lipschitz in (x, u) in some
neighborhood of (x = 0, u = 0). Then, the system
ẋ = f (x, u) is locally input-to-state stable if and only if the
unforced system ẋ = f (x, 0) has an asymptotically stable
equilibrium point at the origin
The proof uses (converse Lyapunov) Theorem 3.9



Nonlinear Control
Lecture # 12
Passivity



Memoryless Functions

[Figure: (a) one-port resistive element with voltage u across it and current y into it; (b) its u-y characteristic]
power inflow = uy

Resistor is passive if uy ≥ 0



[Figure: three u-y characteristics: (a) passive, (b) passive, (c) not passive]

y = h(t, u),   h ∈ [0, ∞]

Vector case: y = h(t, u),   h^T = [h1, h2, · · · , hp]

power inflow = Σ_{i=1}^p ui yi = u^T y



Definition 5.1
y = h(t, u) is
passive if uT y ≥ 0
lossless if uT y = 0
input strictly passive if uT y ≥ uT ϕ(u) for some function
ϕ where uT ϕ(u) > 0, ∀ u ≠ 0
output strictly passive if uT y ≥ y T ρ(y) for some function
ρ where y T ρ(y) > 0, ∀ y ≠ 0



Sector Nonlinearity: h belongs to the sector [α, β]
(h ∈ [α, β]) if
αu^2 ≤ uh(t, u) ≤ βu^2

[Figure: the sector [α, β] between the lines y = αu and y = βu: (a) α > 0, (b) α < 0]

Also, h ∈ (α, β], h ∈ [α, β), h ∈ (α, β)



αu^2 ≤ uh(t, u) ≤ βu^2 ⇔ [h(t, u) − αu][h(t, u) − βu] ≤ 0

Definition 5.2
A memoryless function h(t, u) is said to belong to the sector
[0, ∞] if uT h(t, u) ≥ 0
[K1 , ∞] if uT [h(t, u) − K1 u] ≥ 0
[0, K2 ] with K2 = K2T > 0 if hT (t, u)[h(t, u) − K2 u] ≤ 0
[K1 , K2 ] with K = K2 − K1 = K T > 0 if

[h(t, u) − K1 u]T [h(t, u) − K2 u] ≤ 0



Example

 
h(u) = [h1(u1); h2(u2)],   hi ∈ [αi, βi], βi > αi, i = 1, 2

K1 = [α1  0; 0  α2],   K2 = [β1  0; 0  β2]

h ∈ [K1, K2]

K = K2 − K1 = [β1 − α1  0; 0  β2 − α2]
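Sector membership can be checked numerically through the equivalent quadratic inequality [h(u) − αu][h(u) − βu] ≤ 0. A sketch for the scalar nonlinearity h = tanh (an illustrative choice, which lies in the sector [0, 1]):

```python
import math

# Check the scalar sector condition (h(u) - a*u)*(h(u) - b*u) <= 0 on a grid.
# A small tolerance absorbs floating-point round-off at u = 0.
def in_sector(h, a, b, grid):
    return all((h(u) - a*u)*(h(u) - b*u) <= 1e-12 for u in grid)

grid = [i/100 for i in range(-500, 501)]
print(in_sector(math.tanh, 0.0, 1.0, grid))   # tanh is in [0, 1]
print(in_sector(math.tanh, 0.5, 1.0, grid))   # fails: tanh(u)/u -> 0 as |u| grows
```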



Example

‖h(u) − Lu‖ ≤ γ‖u‖

K1 = L − γI, K2 = L + γI

[h(u) − K1 u]^T [h(u) − K2 u] = ‖h(u) − Lu‖^2 − γ^2 ‖u‖^2 ≤ 0

K = K2 − K1 = 2γI



A function in the sector [K1 , K2 ] can be transformed into a
function in the sector [0, ∞] by input feedforward followed by
output feedback

[Figure: block diagram of y = h(t, u) with input feedforward −K1, output scaling K^{-1}, and output feedback]

[K1, K2] →(Feedforward) [0, K] →(K^{-1}) [0, I] →(Feedback) [0, ∞]



State Models
Example 5.1

[Figure: nonlinear RLC circuit: source u with resistor i1 = h1(v1) across it, inductor L (current x1 = iL) in series with resistor v2 = h2(i2), and capacitor C (voltage x2 = vC) in parallel with resistor i3 = h3(v3)]

Lẋ1 = u − h2 (x1 ) − x2
C ẋ2 = x1 − h3 (x2 )
y = x1 + h1 (u)



V(x) = (1/2)Lx1^2 + (1/2)Cx2^2

∫_0^t u(s)y(s) ds ≥ V(x(t)) − V(x(0))

u(t)y(t) ≥ V̇(x(t), u(t))

V̇ = Lx1 ẋ1 + Cx2 ẋ2
  = x1[u − h2(x1) − x2] + x2[x1 − h3(x2)]
  = x1[u − h2(x1)] − x2 h3(x2)
  = [x1 + h1(u)]u − u h1(u) − x1 h2(x1) − x2 h3(x2)
  = uy − u h1(u) − x1 h2(x1) − x2 h3(x2)



uy = V̇ + uh1 (u) + x1 h2 (x1 ) + x2 h3 (x2 )
If h1 , h2 , and h3 are passive, uy ≥ V̇ and the system is passive
Case 1: If h1 = h2 = h3 = 0, then uy = V̇ ; no energy
dissipation; the system is lossless
Case 2: If h1 ∈ (0, ∞] (uh1(u) > 0 for u ≠ 0), then

uy ≥ V̇ + uh1 (u)

The energy absorbed over [0, t] will be greater than the


increase in the stored energy, unless the input u(t) is
identically zero. This is a case of input strict passivity



Case 3: If h1 = 0 and h2 ∈ (0, ∞], then

y = x1 and uy ≥ V̇ + yh2 (y)

The energy absorbed over [0, t] will be greater than the


increase in the stored energy, unless the output y is identically
zero. This is a case of output strict passivity
Case 4: If h2 ∈ (0, ∞) and h3 ∈ (0, ∞), then

uy ≥ V̇ + x1 h2 (x1 ) + x2 h3 (x2 )

x1 h2 (x1 ) + x2 h3 (x2 ) is a positive definite function of x. This


is a case of state strict passivity because the energy absorbed
over [0, t] will be greater than the increase in the stored
energy, unless the state x is identically zero



Definition 5.3
The system
ẋ = f (x, u), y = h(x, u)
is passive if there is a continuously differentiable positive
semidefinite function V (x) (the storage function) such that

uT y ≥ V̇ = (∂V/∂x) f(x, u), ∀ (x, u)
Moreover, it is
lossless if uT y = V̇
input strictly passive if uT y ≥ V̇ + uT ϕ(u) for some
function ϕ such that uT ϕ(u) > 0, ∀ u ≠ 0
output strictly passive if uT y ≥ V̇ + y T ρ(y) for some
function ρ such that y T ρ(y) > 0, ∀ y ≠ 0
strictly passive if uT y ≥ V̇ + ψ(x) for some positive
definite function ψ



Example 5.2

ẋ = u,   y = x

V(x) = (1/2)x^2 ⇒ uy = V̇ ⇒ Lossless

ẋ = u,   y = x + h(u),   h ∈ [0, ∞]

V(x) = (1/2)x^2 ⇒ uy = V̇ + uh(u) ⇒ Passive

h ∈ (0, ∞] ⇒ uh(u) > 0 ∀ u ≠ 0 ⇒ Input strictly passive

ẋ = −h(x) + u,   y = x,   h ∈ [0, ∞]

V(x) = (1/2)x^2 ⇒ uy = V̇ + yh(y) ⇒ Passive

h ∈ (0, ∞] ⇒ Output strictly passive



Example 5.3

ẋ = u,   y = h(x),   h ∈ [0, ∞]

V(x) = ∫_0^x h(σ) dσ ⇒ V̇ = h(x)ẋ = yu ⇒ Lossless

aẋ = −x + u,   y = h(x),   h ∈ [0, ∞]

V(x) = a∫_0^x h(σ) dσ ⇒ V̇ = h(x)(−x + u) = yu − xh(x)

yu = V̇ + xh(x) ⇒ Passive

h ∈ (0, ∞] ⇒ Strictly passive



Example 5.4

ẋ1 = x2 , ẋ2 = −h(x1 ) − ax2 + u, y = bx2 + u

h ∈ [α1 , ∞], a > 0, b > 0, α1 > 0

V(x) = α∫_0^{x1} h(σ) dσ + (1/2)α x^T P x
     = α∫_0^{x1} h(σ) dσ + (1/2)α(p11 x1^2 + 2p12 x1 x2 + p22 x2^2)

α > 0,   p11 > 0,   p11 p22 − p12^2 > 0



uy − V̇ = u(bx2 + u) − α[h(x1) + p11 x1 + p12 x2]x2
        − α(p12 x1 + p22 x2)[−h(x1) − ax2 + u]

Take p22 = 1, p11 = ap12, and α = b to cancel the cross-product terms

uy − V̇ ≥ bp12 (α1 − (1/4)bp12) x1^2 + b(a − p12) x2^2

p12 = ak,   0 < k < min{1, 4α1/(ab)}

⇒ p11 > 0, p11 p22 − p12^2 > 0

⇒ bp12 (α1 − (1/4)bp12) > 0,   b(a − p12) > 0

⇒ Strictly passive



Example 5.5

ẋ1 = x2 , ẋ2 = − sin x1 − bx2 + cu, y = x2


b ≥ 0, c>0

V(x) = α[(1 − cos x1) + (1/2)x2^2],   α > 0

V(x) is positive semidefinite but not positive definite ∀ x

uy − V̇ = ux2 − α[x2 sin x1 − x2 sin x1 − bx2^2 + cx2 u]

α = 1/c ⇒ uy − V̇ = (b/c)x2^2 ≥ 0

Lossless when b = 0

Output strictly passive when b > 0



Nonlinear Control
Lecture # 13
Passivity



Positive Real Transfer Functions
Definition 5.4
An m × m proper rational transfer function matrix G(s) is
positive real if
poles of all elements of G(s) are in Re[s] ≤ 0
for all real ω for which jω is not a pole of any element of
G(s), the matrix G(jω) + GT (−jω) is positive
semidefinite
any pure imaginary pole jω of any element of G(s) is a
simple pole and the residue matrix lims→jω (s − jω)G(s)
is positive semidefinite Hermitian
G(s) is strictly positive real if G(s − ε) is positive real for
some ε > 0



Scalar Case (m = 1):

G(jω) + GT (−jω) = 2Re[G(jω)]

Re[G(jω)] is an even function of ω. The second condition of


the definition reduces to

Re[G(jω)] ≥ 0, ∀ ω ∈ [0, ∞)

which holds when the Nyquist plot of G(jω) lies in the


closed right-half complex plane
This is true only if the relative degree of the transfer function
is zero or one



Lemma 5.1
An m × m proper rational transfer function matrix G(s) is
strictly positive real if and only if
G(s) is Hurwitz
G(jω) + GT (−jω) > 0, ∀ ω ∈ R
G(∞) + GT (∞) > 0 or

lim_{ω→∞} ω^{2(m−q)} det[G(jω) + GT (−jω)] > 0

where q = rank[G(∞) + GT (∞)]



Scalar Case (m = 1): G(s) is strictly positive real if and only if

G(s) is Hurwitz
Re[G(jω)] > 0, ∀ ω ∈ [0, ∞)
G(∞) > 0 or

lim_{ω→∞} ω^2 Re[G(jω)] > 0



Example 5.6

G(s) = 1/s

has a simple pole at s = 0 whose residue is 1

Re[G(jω)] = Re[1/(jω)] = 0, ∀ ω ≠ 0

Hence, G is positive real. It is not strictly positive real since

1/(s − ε)

has a pole in Re[s] > 0 for any ε > 0



G(s) = 1/(s + a), a > 0, is Hurwitz

Re[G(jω)] = a/(ω^2 + a^2) > 0, ∀ ω ∈ [0, ∞)

lim_{ω→∞} ω^2 Re[G(jω)] = lim_{ω→∞} ω^2 a/(ω^2 + a^2) = a > 0 ⇒ G is SPR

G(s) = 1/(s^2 + s + 1),   Re[G(jω)] = (1 − ω^2)/[(1 − ω^2)^2 + ω^2]
G is not PR



G(s) = [(s+2)/(s+1)   1/(s+2);  −1/(s+2)   2/(s+1)]  is Hurwitz

G(jω) + GT (−jω) = [2(2+ω^2)/(1+ω^2)   −2jω/(4+ω^2);  2jω/(4+ω^2)   4/(1+ω^2)] > 0, ∀ ω ∈ R

G(∞) + GT (∞) = [2  0; 0  0],   q = 1

lim_{ω→∞} ω^2 det[G(jω) + GT (−jω)] = 4 ⇒ G is SPR



Positive Real Lemma (5.2)
Let
G(s) = C(sI − A)−1 B + D
where (A, B) is controllable and (A, C) is observable. G(s) is
positive real if and only if there exist matrices P = P T > 0, L,
and W such that

P A + AT P = −LT L
P B = C T − LT W
W T W = D + DT
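A scalar illustration of the lemma (not from the slides): for G(s) = 1/(s + 1) take A = −1, B = 1, C = 1, D = 0. Then W = 0 (since D + DT = 0), PB = CT forces P = 1, and PA + AT P = −LT L gives L = √2.

```python
import math

# All three Positive Real Lemma equations, checked for the scalar realization
# A = -1, B = 1, C = 1, D = 0 of G(s) = 1/(s+1), with P = 1, L = sqrt(2), W = 0.
A, B, C, D = -1.0, 1.0, 1.0, 0.0
P, W = 1.0, 0.0
L = math.sqrt(2.0)
print(abs(P*A + A*P + L*L) < 1e-12)   # P A + A^T P = -L^T L
print(abs(P*B - (C - L*W)) < 1e-12)   # P B = C^T - L^T W
print(abs(W*W - (D + D)) < 1e-12)     # W^T W = D + D^T
```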



Kalman–Yakubovich–Popov Lemma (5.3)
Let
G(s) = C(sI − A)−1 B + D
where (A, B) is controllable and (A, C) is observable. G(s) is
strictly positive real if and only if there exist matrices
P = P T > 0, L, and W , and a positive constant ε such that

P A + AT P = −LT L − εP
P B = C T − LT W
W T W = D + DT



Lemma 5.4
The linear time-invariant minimal realization

ẋ = Ax + Bu, y = Cx + Du

with
G(s) = C(sI − A)−1 B + D
is
passive if G(s) is positive real
strictly passive if G(s) is strictly positive real

Proof
Apply the PR and KYP Lemmas, respectively, and use
V(x) = (1/2)xT P x as the storage function



uT y − (∂V/∂x)(Ax + Bu)
  = uT (Cx + Du) − xT P (Ax + Bu)
  = uT Cx + (1/2)uT (D + DT )u − (1/2)xT (P A + AT P )x − xT P Bu
  = uT (B T P + W T L)x + (1/2)uT W T W u + (1/2)xT LT Lx + (1/2)εxT P x − xT P Bu
  = (1/2)(Lx + W u)T (Lx + W u) + (1/2)εxT P x ≥ (1/2)εxT P x

In the case of the PR Lemma, ε = 0, and we conclude that


the system is passive; in the case of the KYP Lemma, ε > 0,
and we conclude that the system is strictly passive



Connection with Stability
Lemma 5.5
If the system

ẋ = f (x, u), y = h(x, u)

is passive with a positive definite storage function V (x), then


the origin of ẋ = f (x, 0) is stable

Proof

uT y ≥ (∂V/∂x) f(x, u) ⇒ (∂V/∂x) f(x, 0) ≤ 0



Lemma 5.6
If the system

ẋ = f (x, u), y = h(x, u)

is strictly passive, then the origin of ẋ = f (x, 0) is


asymptotically stable. Furthermore, if the storage function is
radially unbounded, the origin will be globally asymptotically
stable

Proof
The storage function V (x) is positive definite

uT y ≥ (∂V/∂x) f(x, u) + ψ(x) ⇒ (∂V/∂x) f(x, 0) ≤ −ψ(x)
Why is V (x) positive definite? Let φ(t; x) be the solution of
ż = f (z, 0), z(0) = x



V̇ ≤ −ψ(x)

V(φ(τ; x)) − V(x) ≤ −∫_0^τ ψ(φ(t; x)) dt, ∀ τ ∈ [0, δ]

V(φ(τ; x)) ≥ 0 ⇒ V(x) ≥ ∫_0^τ ψ(φ(t; x)) dt

V(x̄) = 0 ⇒ ∫_0^τ ψ(φ(t; x̄)) dt = 0, ∀ τ ∈ [0, δ]

⇒ ψ(φ(t; x̄)) ≡ 0 ⇒ φ(t; x̄) ≡ 0 ⇒ x̄ = 0



Definition 5.5
The system
ẋ = f (x, u), y = h(x, u)
is zero-state observable if no solution of ẋ = f (x, 0) can stay
identically in S = {h(x, 0) = 0}, other than the zero solution
x(t) ≡ 0

Linear Systems
ẋ = Ax, y = Cx
Observability of (A, C) is equivalent to

y(t) = C e^{At} x(0) ≡ 0 ⇔ x(0) = 0 ⇔ x(t) ≡ 0



Lemma 5.7
If the system

ẋ = f (x, u), y = h(x, u)

is output strictly passive and zero-state observable, then the


origin of ẋ = f (x, 0) is asymptotically stable. Furthermore, if
the storage function is radially unbounded, the origin will be
globally asymptotically stable

Proof
The storage function V (x) is positive definite

uT y ≥ (∂V/∂x) f(x, u) + y T ρ(y) ⇒ (∂V/∂x) f(x, 0) ≤ −y T ρ(y)

V̇ (x(t)) ≡ 0 ⇒ y(t) ≡ 0 ⇒ x(t) ≡ 0


Apply the invariance principle



Example 5.7

ẋ = f (x) + G(x)u, y = h(x), dim (u) = dim (y)

Suppose there is V (x) such that

(∂V/∂x) f(x) ≤ 0,   (∂V/∂x) G(x) = hT (x)

uT y − V̇ = uT h(x) − (∂V/∂x) f(x) − hT (x)u = −(∂V/∂x) f(x) ≥ 0
If V (x) is positive definite, the origin of ẋ = f (x) is stable



If we have the stronger condition
(∂V/∂x) f(x) ≤ −k hT (x)h(x),   (∂V/∂x) G(x) = hT (x),   k > 0

uT y − V̇ ≥ k y T y
The system is output strictly passive. If, in addition, it is
zero-state observable, then the origin of ẋ = f (x) is
asymptotically stable



Example 5.8

ẋ1 = x2,   ẋ2 = −a x1^3 − kx2 + u,   y = x2,   a, k > 0

V(x) = (1/4)a x1^4 + (1/2)x2^2

V̇ = a x1^3 x2 + x2(−a x1^3 − kx2 + u) = −ky^2 + yu

The system is output strictly passive

y(t) ≡ 0 ⇔ x2(t) ≡ 0 ⇒ a x1^3(t) ≡ 0 ⇒ x1(t) ≡ 0

The system is zero-state observable. V is radially unbounded.


Hence, the origin of the unforced system is globally
asymptotically stable
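A quick numerical check (illustrative a, k, initial state, horizon, and step size): along trajectories of the unforced system the storage function V = (a/4)x1^4 + (1/2)x2^2 satisfies V̇ = −kx2^2 ≤ 0, so it should be (up to discretization error) nonincreasing and should decay well below its initial value.

```python
# Euler simulation of Example 5.8 with u = 0: x1' = x2, x2' = -a*x1^3 - k*x2.
# We track the storage function along the discrete trajectory; a small
# tolerance absorbs the O(h^2) error of the explicit Euler step.
def simulate(a=1.0, k=1.0, x1=1.5, x2=-1.0, T=60.0, h=1e-3):
    V = lambda p, q: 0.25*a*p**4 + 0.5*q**2
    t, V0 = 0.0, V(x1, x2)
    Vp, nearly_monotone = V0, True
    while t < T:
        x1, x2 = x1 + h*x2, x2 + h*(-a*x1**3 - k*x2)
        t += h
        Vn = V(x1, x2)
        nearly_monotone = nearly_monotone and Vn <= Vp + 1e-4
        Vp = Vn
    return nearly_monotone, Vp/V0

mono, ratio = simulate()
print(mono and ratio < 0.5)   # storage decays, consistent with GAS
```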



Nonlinear Control
Lecture # 14
Input-Output Stability



L Stability

Input-Output Models: y = Hu
u(t) is a piecewise continuous function of t and belongs to a
linear space of signals
The space of bounded functions: sup_{t≥0} ‖u(t)‖ < ∞
The space of square-integrable functions: ∫_0^∞ uT (t)u(t) dt < ∞
Norm of a signal kuk:
kuk ≥ 0 and kuk = 0 ⇔ u = 0
kauk = akuk for any a > 0
Triangle Inequality: ku1 + u2 k ≤ ku1 k + ku2 k



Lp spaces:

L∞ : ‖u‖_{L∞} = sup_{t≥0} ‖u(t)‖ < ∞

L2 : ‖u‖_{L2} = √(∫_0^∞ uT (t)u(t) dt) < ∞

Lp : ‖u‖_{Lp} = (∫_0^∞ ‖u(t)‖^p dt)^{1/p} < ∞,   1 ≤ p < ∞

Notation L_p^m: p is the type of p-norm used to define the space
and m is the dimension of u



Extended Space: Le = {u | uτ ∈ L, ∀ τ ∈ [0, ∞)}

uτ is a truncation of u:  uτ(t) = u(t) for 0 ≤ t ≤ τ,  uτ(t) = 0 for t > τ

Le is a linear space and L ⊂ Le

Example

u(t) = t,   uτ(t) = t for 0 ≤ t ≤ τ,  uτ(t) = 0 for t > τ

u ∉ L∞, but uτ ∈ L∞ for every finite τ; hence u ∈ L∞e



Causality: A mapping H : L^m_e → L^q_e is causal if the value of
the output (Hu)(t) at any time t depends only on the values
of the input up to time t

(Hu)τ = (Huτ )τ

Definition 6.1
A scalar continuous function g(r), defined for r ∈ [0, a), is a
gain function if it is nondecreasing and g(0) = 0

A class K function is a gain function but not the other way


around. By not requiring the gain function to be strictly
increasing we can have g = 0 or g(r) = sat(r)



Definition 6.2
A mapping H : L^m_e → L^q_e is L stable if there exist a gain
function g, defined on [0, ∞), and a nonnegative constant β
such that

‖(Hu)τ‖_L ≤ g(‖uτ‖_L) + β, ∀ u ∈ L^m_e and τ ∈ [0, ∞)

It is finite-gain L stable if there exist nonnegative constants γ


and β such that

‖(Hu)τ‖_L ≤ γ‖uτ‖_L + β, ∀ u ∈ L^m_e and τ ∈ [0, ∞)

In this case, we say that the system has L gain ≤ γ. The bias
term β is included in the definition to allow for systems where
Hu does not vanish at u = 0.



Example 6.1: Memoryless function y = h(u)

Suppose |h(u)| ≤ a + b|u|, ∀ u ∈ R

Finite-gain L∞ stable with β = a and γ = b

If a = 0, then for each p ∈ [1, ∞)

∫_0^∞ |h(u(t))|^p dt ≤ b^p ∫_0^∞ |u(t)|^p dt

Finite-gain Lp stable with β = 0 and γ = b

For h(u) = u^2, H is L∞ stable with zero bias and g(r) = r^2.
It is not finite-gain L∞ stable because |h(u)| = u^2 cannot be
bounded by γ|u| + β for all u ∈ R



Example 6.2: SISO causal convolution operator
y(t) = ∫_0^t h(t − σ)u(σ) dσ,   h(t) = 0 for t < 0

Suppose h ∈ L1 ⇔ ‖h‖_{L1} = ∫_0^∞ |h(σ)| dσ < ∞

|y(t)| ≤ ∫_0^t |h(t − σ)| |u(σ)| dσ
       ≤ (∫_0^t |h(t − σ)| dσ) sup_{0≤σ≤τ} |u(σ)|
       = (∫_0^t |h(s)| ds) sup_{0≤σ≤τ} |u(σ)|

‖yτ‖_{L∞} ≤ ‖h‖_{L1} ‖uτ‖_{L∞}, ∀ τ ∈ [0, ∞)

Finite-gain L∞ stable

Also, finite-gain Lp stable for p ∈ [1, ∞) (see textbook)
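A discrete Riemann-sum version of the estimate, with the illustrative kernel h(t) = e^{−t} (so ‖h‖_{L1} = 1) and a unit-amplitude square-wave input:

```python
import math

# Riemann-sum convolution y(t) = sum_j h(j*dt)*u(t - j*dt)*dt. The discrete
# peak-to-peak bound max|y| <= (sum |h|*dt) * max|u| mirrors the L-infinity
# gain estimate of Example 6.2. Kernel, input, and step size are illustrative.
dt, N = 0.01, 5000
t = [i*dt for i in range(N)]
h = [math.exp(-s) for s in t]                           # ||h||_L1 ~= 1
u = [math.copysign(1.0, math.sin(3*s)) for s in t]      # |u(t)| <= 1
h_L1 = sum(abs(v) for v in h)*dt
# evaluate y on a subsampled time grid to keep the double loop cheap
y = [sum(h[j]*u[i - j] for j in range(i + 1))*dt for i in range(0, N, 50)]
print(max(abs(v) for v in y) <= h_L1 + 1e-9)
```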



Small-signal L Stability
Example 6.3

y = tan u
The output y(t) is defined only when the input signal is
restricted to |u(t)| < π/2 for all t ≥ 0

u(t) ∈ {|u| ≤ r < π/2} ⇒ |y| ≤ (tan r / r)|u|

‖y‖_{Lp} ≤ (tan r / r)‖u‖_{Lp},   p ∈ [1, ∞]



Definition 6.3
A mapping H : L^m_e → L^q_e is small-signal L stable
(respectively, small-signal finite-gain L stable) if there is a
positive constant r such that the condition for L stability
(respectively, finite-gain L stability) is satisfied for all u ∈ L^m_e
with sup0≤t≤τ ku(t)k ≤ r



L Stability of State Models
ẋ = f (x, u), y = h(x, u), 0 = f (0, 0), 0 = h(0, 0)

Case 1: The origin of ẋ = f (x, 0) is exponentially stable


Theorem 6.1
Suppose, ∀ kxk ≤ r, ∀ kuk ≤ ru ,

c1‖x‖^2 ≤ V(x) ≤ c2‖x‖^2

(∂V/∂x) f(x, 0) ≤ −c3‖x‖^2,   ‖∂V/∂x‖ ≤ c4‖x‖

‖f(x, u) − f(x, 0)‖ ≤ L‖u‖,   ‖h(x, u)‖ ≤ η1‖x‖ + η2‖u‖

Then, for each x0 with ‖x0‖ ≤ r√(c1/c2), the system is
small-signal finite-gain Lp stable for each p ∈ [1, ∞]. It is
finite-gain Lp stable ∀ x0 ∈ Rn if the assumptions hold
globally [see the textbook for β and γ]



Proof

V̇ = (∂V/∂x) f(x, 0) + (∂V/∂x)[f(x, u) − f(x, 0)]

V̇ ≤ −c3‖x‖^2 + c4 L‖x‖ ‖u‖ ≤ −(c3/c2)V + (c4 L/√c1)‖u‖ √V

W(x) = √V(x)

Ẇ ≤ −aW + b‖u(t)‖,   a = c3/(2c2),   b = c4 L/(2√c1)

U(t) = e^{at} W(x(t)) ⇒ U̇ = e^{at} Ẇ + a e^{at} W ≤ b e^{at} ‖u‖

U(t) ≤ U(0) + ∫_0^t b e^{aτ} ‖u(τ)‖ dτ



W(x(t)) ≤ e^{−at} W(x(0)) + ∫_0^t e^{−a(t−τ)} b‖u(τ)‖ dτ

√c1 ‖x‖ ≤ W(x) ≤ √c2 ‖x‖

‖x(t)‖ ≤ √(c2/c1) ‖x(0)‖ e^{−at} + (c4 L/(2c1)) ∫_0^t e^{−a(t−τ)} ‖u(τ)‖ dτ

‖y(t)‖ ≤ η1‖x(t)‖ + η2‖u(t)‖

‖y(t)‖ ≤ k0‖x(0)‖ e^{−at} + k2 ∫_0^t e^{−a(t−τ)} ‖u(τ)‖ dτ + k3‖u(t)‖



Example 6.4

ẋ = −x − x^3 + u,   y = tanh x + u

V = (1/2)x^2 ⇒ V̇ = x(−x − x^3) ≤ −x^2   (at u = 0)

c1 = c2 = 1/2, c3 = c4 = 1, L = η1 = η2 = 1

Finite-gain Lp stable for each x(0) ∈ R and each p ∈ [1, ∞]

Example 6.5

ẋ1 = x2,   ẋ2 = −x1 − x2 − a tanh x1 + u,   y = x1,   a ≥ 0

V(x) = xT P x = p11 x1^2 + 2p12 x1 x2 + p22 x2^2



V̇ = −2p12 (x1^2 + a x1 tanh x1) + 2(p11 − p12 − p22)x1 x2
   − 2a p22 x2 tanh x1 − 2(p22 − p12)x2^2

p11 = p12 + p22 ⇒ the term x1 x2 is canceled

p22 = 2p12 = 1 ⇒ P is positive definite

V̇ = −x1^2 − x2^2 − a x1 tanh x1 − 2a x2 tanh x1

V̇ ≤ −‖x‖^2 + 2a|x1| |x2| ≤ −(1 − a)‖x‖^2

a < 1 ⇒ c1 = λmin(P), c2 = λmax(P), c3 = 1 − a, c4 = 2c2

L = η1 = 1, η2 = 0

For each x(0) ∈ R2, p ∈ [1, ∞], the system is finite-gain Lp stable

γ = 2[λmax(P)]^2/[(1 − a)λmin(P)]



Case 2: The origin of ẋ = f (x, 0) is asymptotically stable
Theorem 6.2
Suppose that, for all (x, u), f is locally Lipschitz and h is
continuous and satisfies

kh(x, u)k ≤ g1 (kxk) + g2 (kuk) + η, η≥0

for some gain functions g1 , g2 . If ẋ = f (x, u) is ISS, then, for


each x(0) ∈ Rn , the system

ẋ = f (x, u), y = h(x, u)

is L∞ stable



Proof
  
‖x(t)‖ ≤ max{β(‖x0‖, t), γ(sup_{0≤t≤τ} ‖u(t)‖)}

‖y(t)‖ ≤ g1(max{β(‖x0‖, t), γ(sup_{0≤t≤τ} ‖u(t)‖)}) + g2(‖u(t)‖) + η

g1(max{a, b}) ≤ g1(a) + g1(b)

‖yτ‖_{L∞} ≤ g(‖uτ‖_{L∞}) + β0

g = g1 ∘ γ + g2 and β0 = g1(β(‖x0‖, 0)) + η



Theorem 6.3
Suppose f is locally Lipschitz and h is continuous in some
neighborhood of (x = 0, u = 0). If the origin of ẋ = f (x, 0)
is asymptotically stable, then there is a constant k1 > 0 such
that for each x(0) with kx(0)k < k1 , the system

ẋ = f (x, u), y = h(x, u)

is small-signal L∞ stable

Proof
Use Lemma 4.7 (asymptotic stability is equivalent to local ISS)



Example 6.6

ẋ = −x − 2x^3 + (1 + x^2)u^2,   y = x^2 + u

ISS from Example 4.13

g1(r) = r^2,   g2(r) = r,   η = 0

L∞ stable



Example 6.7

ẋ1 = −x1^3 + x2,   ẋ2 = −x1 − x2^3 + u,   y = x1 + x2

V = x1^2 + x2^2 ⇒ V̇ = −2x1^4 − 2x2^4 + 2x2 u

x1^4 + x2^4 ≥ (1/2)‖x‖^4

V̇ ≤ −‖x‖^4 + 2‖x‖|u|
  = −(1 − θ)‖x‖^4 − θ‖x‖^4 + 2‖x‖|u|,   0 < θ < 1
  ≤ −(1 − θ)‖x‖^4, ∀ ‖x‖ ≥ (2|u|/θ)^{1/3}
⇒ ISS

g1(r) = 2r,   g2 = 0,   η = 0

L∞ stable



Nonlinear Control
Lecture # 15
Input-Output Stability



L2 Gain
Theorem 6.4
Consider the linear time-invariant system

ẋ = Ax + Bu, y = Cx + Du

where A is Hurwitz. Let G(s) = C(sI − A)−1 B + D

The L2 gain ≤ supω∈R kG(jω)k

Actually, L2 gain = supω∈R kG(jω)k
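For a scalar illustration (not from the slides), take ẋ = −x + u, y = x, so G(s) = 1/(s + 1); the L2 gain is sup_ω |G(jω)| = 1, attained at ω = 0, which a frequency grid confirms.

```python
import math

# |G(jw)| for G(s) = 1/(s+1): the magnitude 1/sqrt(1 + w^2) is monotonically
# decreasing in |w|, so the supremum over the grid should be reached at w = 0.
def gain(w):
    return 1.0/math.sqrt(1.0 + w*w)

grid = [i/100 for i in range(0, 10001)]   # w from 0 to 100
sup_gain = max(gain(w) for w in grid)
print(abs(sup_gain - 1.0) < 1e-12)        # L2 gain = sup_w |G(jw)| = 1
```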



Proof

U(jω) = ∫_0^∞ u(t)e^{−jωt} dt,   Y(jω) = G(jω)U(jω)

By Parseval's theorem

‖y‖²_{L2} = ∫_0^∞ yT (t)y(t) dt = (1/2π) ∫_{−∞}^∞ Y*(jω)Y(jω) dω
          = (1/2π) ∫_{−∞}^∞ U*(jω)GT (−jω)G(jω)U(jω) dω
          ≤ (sup_{ω∈R} ‖G(jω)‖)^2 (1/2π) ∫_{−∞}^∞ U*(jω)U(jω) dω
          = (sup_{ω∈R} ‖G(jω)‖)^2 ‖u‖²_{L2}

Nonlinear Control Lecture # 15 Input-Output Stability


Lemma 6.1
Consider the time-invariant system

ẋ = f (x, u), y = h(x, u)

where f is locally Lipschitz and h is continuous for all x ∈ Rn


and u ∈ Rm . Let V (x) be a positive semidefinite function
such that
∂V
V̇ = f (x, u) ≤ k(γ 2 kuk2 − kyk2 ), k, γ > 0
∂x
Then, for each x(0) ∈ Rn , the system is finite-gain L2 stable
and its L2 gain is less than or equal to γ. In particular
r
V (x(0))
kyτ kL2 ≤ γkuτ kL2 +
k

Nonlinear Control Lecture # 15 Input-Output Stability


Proof

V (x(τ )) − V (x(0)) ≤ kγ² ∫₀^τ ku(t)k² dt − k ∫₀^τ ky(t)k² dt

Since V (x) ≥ 0,

∫₀^τ ky(t)k² dt ≤ γ² ∫₀^τ ku(t)k² dt + V (x(0))/k

kyτ kL2 ≤ γkuτ kL2 + √(V (x(0))/k)

Nonlinear Control Lecture # 15 Input-Output Stability


Theorem 6.5
If the system

ẋ = f (x, u), y = h(x, u)

is output strictly passive with

uT y ≥ V̇ + δy T y, δ>0

then it is finite-gain L2 stable and its L2 gain is less than or


equal to 1/δ

Proof

V̇ ≤ uT y − δy T y
  = −(1/(2δ))(u − δy)T (u − δy) + (1/(2δ))uT u − (δ/2)y T y
  ≤ (δ/2)[(1/δ²)uT u − y T y]

Apply Lemma 6.1 with k = δ/2 and γ = 1/δ

Nonlinear Control Lecture # 15 Input-Output Stability


Theorem 6.6
Consider the time-invariant system

ẋ = f (x) + G(x)u, y = h(x)

f (0) = 0, h(0) = 0
where f and G are locally Lipschitz and h is continuous over
Rn . Suppose ∃ γ > 0 and a continuously differentiable,
positive semidefinite function V (x) that satisfies the
Hamilton–Jacobi inequality
 T
∂V 1 ∂V ∂V 1
f (x) + 2 G(x)GT (x) + hT (x)h(x) ≤ 0
∂x 2γ ∂x ∂x 2

∀ x ∈ Rn . Then, for each x(0) ∈ Rn , the system is finite-gain


L2 stable and its L2 gain ≤ γ

Nonlinear Control Lecture # 15 Input-Output Stability


Proof

V̇ = (∂V/∂x)f (x) + (∂V/∂x)G(x)u
  = (∂V/∂x)f (x) − (γ²/2)ku − (1/γ²)GT (x)(∂V/∂x)T k²
    + (1/(2γ²))(∂V/∂x)G(x)GT (x)(∂V/∂x)T + (γ²/2)kuk²

Using the Hamilton–Jacobi inequality,

V̇ ≤ (1/2)γ²kuk² − (1/2)kyk²

Nonlinear Control Lecture # 15 Input-Output Stability


Example 6.8

ẋ1 = x2 , ẋ2 = −ax31 − kx2 + u, y = x2 , a, k > 0

V (x) = (a/4)x1⁴ + (1/2)x2²

V̇ = ax1³ x2 + x2 (−ax1³ − kx2 + u) = −kx2² + x2 u = −ky² + yu

The system is finite-gain L2 stable and its L2 gain is less than


or equal to 1/k
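A simulation-based check of this bound (an illustrative sketch with a = k = 1, x(0) = 0, and the arbitrary input u(t) = e^(−t); with V (x(0)) = 0 the lemma predicts kyτ kL2 ≤ kuτ kL2):

```python
import numpy as np

# Forward-Euler simulation of Example 6.8 and comparison of L2 norms.
dt, T = 1e-3, 30.0
t = np.arange(0.0, T, dt)
u = np.exp(-t)
x1, x2 = 0.0, 0.0
y = np.empty_like(t)
for i, ui in enumerate(u):
    y[i] = x2
    # simultaneous update: the right-hand sides use the old (x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * (-x1**3 - x2 + ui)
y_l2 = np.sqrt(np.sum(y**2) * dt)
u_l2 = np.sqrt(np.sum(u**2) * dt)
print(y_l2, u_l2)  # y_l2 <= u_l2, consistent with the gain bound 1/k = 1
```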

Nonlinear Control Lecture # 15 Input-Output Stability


Example 6.9

ẋ = Ax + Bu, y = Cx
Suppose there is P = P T ≥ 0 that satisfies the Riccati
equation
1
P A + AT P + P BB T P + C T C = 0
γ2

for some γ > 0. Verify that V (x) = 12 xT P x satisfies the


Hamilton-Jacobi equation
The system is finite-gain L2 stable and its L2 gain is less than
or equal to γ
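For a scalar instance (A = −1, B = C = 1, γ = 1; these numbers are chosen here for illustration, not from the lecture), the Riccati equation reduces to a quadratic that can be checked directly:

```python
import numpy as np

# PA + A'P + (1/gamma^2) P B B' P + C'C = 0 with scalar data becomes
# (B^2/gamma^2) p^2 + 2 A p + C^2 = 0; here p^2 - 2p + 1 = 0, so p = 1 >= 0.
A, B, C, gamma = -1.0, 1.0, 1.0, 1.0
p = np.roots([B**2 / gamma**2, 2.0 * A, C**2]).real.max()
residual = 2.0 * A * p + (B**2 / gamma**2) * p**2 + C**2
print(p, residual)
```

Consistently, G(s) = 1/(s + 1) has sup over ω of |G(jω)| equal to 1 = γ, so the L2-gain bound is tight in this instance.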

Nonlinear Control Lecture # 15 Input-Output Stability


Local Versions
Lemma 6.2
Suppose V (x) satisfies

∂V
V̇ = f (x, u) ≤ k(γ 2 kuk2 − kyk2 ), k, γ > 0
∂x
for x ∈ D ⊂ Rn and u ∈ Du ⊂ Rm , where D and Du are
domains that contain x = 0 and u = 0, respectively. Suppose
further that x = 0 is an asymptotically stable equilibrium point
of ẋ = f (x, 0). Then, there is r > 0 such that for each x(0)
with kx(0)k ≤ r, the system

ẋ = f (x, u), y = h(x, u)

is small-signal finite-gain L2 stable with L2 gain less than or


equal to γ

Nonlinear Control Lecture # 15 Input-Output Stability


Theorem 6.7
Consider the system

ẋ = f (x, u), y = h(x, u)

Assume
uT y ≥ V̇ + δy T y, δ>0
is satisfied for V (x) ≥ 0 in some neighborhood of
(x = 0, u = 0) and the origin is an asymptotically stable
equilibrium point of ẋ = f (x, 0). Then, the system is
small-signal finite-gain L2 stable and its L2 gain is less than or
equal to 1/δ

Nonlinear Control Lecture # 15 Input-Output Stability


Theorem 6.8
Consider the system

ẋ = f (x) + G(x)u, y = h(x)

Assume
 T
∂V 1 ∂V ∂V 1
f (x) + 2 G(x)GT (x) + hT (x)h(x) ≤ 0
∂x 2γ ∂x ∂x 2

is satisfied for V (x) ≥ 0 in some neighborhood of


(x = 0, u = 0) and the origin is an asymptotically stable
equilibrium point of ẋ = f (x). Then, the system is small-signal
finite-gain L2 stable and its L2 gain is less than or equal to γ

Nonlinear Control Lecture # 15 Input-Output Stability


Example 6.10

ẋ1 = x2 , ẋ2 = −a(x1 − (1/3)x1³) − kx2 + u, y = x2 , a, k > 0

V (x) = a((1/2)x1² − (1/12)x1⁴) + (1/2)x2² ≥ 0 for |x1 | ≤ √6

V̇ = −kx2² + x2 u = −ky² + yu

u = 0 ⇒ V̇ = −kx2² ≤ 0

x2 (t) ≡ 0 ⇒ x1 (t)[3 − x1²(t)] ≡ 0 ⇒ x1 (t) ≡ 0 for |x1 | < √3
By the invariance principle, the origin is asymptotically stable
when u = 0. By Theorem 6.7, the system is small-signal
finite-gain L2 stable and its L2 gain is ≤ 1/k
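A quick numerical check of the domain on which this storage function is nonnegative (with a = 1; the grid and the test point outside the interval are arbitrary):

```python
import numpy as np

# V1(x1) = a*(x1^2/2 - x1^4/12) is nonnegative exactly for |x1| <= sqrt(6).
a = 1.0
x1 = np.linspace(-np.sqrt(6.0), np.sqrt(6.0), 2001)
V1 = a * (x1**2 / 2.0 - x1**4 / 12.0)
outside = np.sqrt(6.0) + 0.1
V_out = a * (outside**2 / 2.0 - outside**4 / 12.0)
print(V1.min(), V_out)  # >= 0 inside the interval, < 0 just outside
```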

Nonlinear Control Lecture # 15 Input-Output Stability


Nonlinear Control
Lecture # 16
Stability of Feedback Systems

Nonlinear Control Lecture # 16 Stability of Feedback Systems


(Feedback connection: H1 in the forward path with input e1 = u1 − y2 and output y1 ; H2 in the feedback path with input e2 = u2 + y1 and output y2 )

ẋi = fi (xi , ei ), yi = hi (xi , ei )

yi = hi (t, ei )

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Passivity Theorems
Theorem 7.1
The feedback connection of two passive systems is passive

Proof
Let V1 (x1 ) and V2 (x2 ) be the storage functions for H1 and H2
(Vi = 0 if Hi is memoryless )

eTi yi ≥ V̇i , V (x) = V1 (x1 ) + V2 (x2 )

eT1 y1 + eT2 y2 = (u1 − y2 )T y1 + (u2 + y1 )T y2 = uT1 y1 + uT2 y2

u = col(u1 , u2 ), y = col(y1 , y2 )

uT y = uT1 y1 + uT2 y2 ≥ V̇1 + V̇2 = V̇

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Asymptotic Stability
Theorem 7.2
Consider the feedback connection of two dynamical systems.
When u = 0, the origin of the closed-loop system is
asymptotically stable if one of the following conditions is
satisfied:
both feedback components are strictly passive;
both feedback components are output strictly passive and
zero-state observable;
one component is strictly passive and the other one is
output strictly passive and zero-state observable.
If the storage function for each component is radially
unbounded, the origin is globally asymptotically stable

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Proof
H1 is SP; H2 is OSP & ZSO

eT1 y1 ≥ V̇1 + ψ1 (x1 ), ψ1 (x1 ) > 0, ∀ x1 6= 0

eT2 y2 ≥ V̇2 + y2T ρ2 (y2 ), y2T ρ2 (y2 ) > 0, ∀ y2 6= 0

eT1 y1 + eT2 y2 = (u1 − y2 )T y1 + (u2 + y1 )T y2 = uT1 y1 + uT2 y2

V (x) = V1 (x1 ) + V2 (x2 )

V̇ ≤ uT y − ψ1 (x1 ) − y2T ρ2 (y2 )

u = 0 ⇒ V̇ ≤ −ψ1 (x1 ) − y2T ρ2 (y2 )

Nonlinear Control Lecture # 16 Stability of Feedback Systems


V̇ ≤ −ψ1 (x1 ) − y2T ρ2 (y2 )
V̇ = 0 ⇒ x1 = 0 and y2 = 0
y2 (t) ≡ 0 ⇒ e1 (t) ≡ 0 ( & x1 (t) ≡ 0) ⇒ y1 (t) ≡ 0
y1 (t) ≡ 0 ⇒ e2 (t) ≡ 0
By zero-state observability of H2 : y2 (t) ≡ 0 ⇒ x2 (t) ≡ 0
Apply the invariance principle

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Example 7.1

H1 : ẋ1 = x2 , ẋ2 = −ax1³ − kx2 + e1 , y1 = x2
H2 : ẋ3 = x4 , ẋ4 = −bx3 − x4³ + e2 , y2 = x4

a, b, k > 0

V1 = (a/4)x1⁴ + (1/2)x2²

V̇1 = ax1³ x2 + x2 (−ax1³ − kx2 + e1 ) = −ky1² + y1 e1
With e1 = 0, y1 (t) ≡ 0 ⇔ x2 (t) ≡ 0 ⇒ x1 (t) ≡ 0
H1 is output strictly passive and zero-state observable

Nonlinear Control Lecture # 16 Stability of Feedback Systems


V2 = (1/2)bx3² + (1/2)x4²

V̇2 = bx3 x4 + x4 (−bx3 − x4³ + e2 ) = −y2⁴ + y2 e2

With e2 = 0, y2 (t) ≡ 0 ⇔ x4 (t) ≡ 0 ⇒ x3 (t) ≡ 0


H2 is output strictly passive and zero-state observable

V1 and V2 are radially unbounded

The origin is globally asymptotically stable
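The conclusion can be illustrated by simulating the closed loop with u = 0, taking a = b = k = 1 and an arbitrary initial state (these numbers are not from the lecture). With u = 0, e1 = −y2 = −x4 and e2 = y1 = x2, and the composite storage function V = V1 + V2 should be nonincreasing along the trajectory:

```python
import numpy as np

# Forward-Euler simulation of the closed loop of Example 7.1 (u = 0).
dt, T = 1e-3, 50.0
x = np.array([1.0, 1.0, 1.0, 1.0])
V = lambda z: z[0]**4/4 + z[1]**2/2 + z[2]**2/2 + z[3]**2/2
V0, Vprev, monotone = V(x), V(x), True
for _ in range(int(T / dt)):
    x1, x2, x3, x4 = x
    x = x + dt * np.array([x2, -x1**3 - x2 - x4, x4, -x3 - x4**3 + x2])
    Vnow = V(x)
    monotone = monotone and Vnow <= Vprev + 1e-5   # tolerance for Euler error
    Vprev = Vnow
print(V0, Vprev, monotone)  # V decreases along the trajectory
```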

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Example 7.2
Reconsider the previous example, but change the output of H1
to
y1 = x2 + e1

V̇1 = −kx2² + x2 e1 = −k(y1 − e1 )² − e1² + y1 e1


H1 is passive, but we cannot conclude strict passivity or
output strict passivity. We cannot apply Theorem 7.2

V = V1 + V2 = (a/4)x1⁴ + (1/2)x2² + (1/2)bx3² + (1/2)x4²

V̇ = −kx2² + x2 e1 − x4⁴ + x4 e2
  = −kx2² − x2 x4 − x4⁴ + x4 (x2 − x4 )
  = −kx2² − x4⁴ − x4² ≤ 0

Nonlinear Control Lecture # 16 Stability of Feedback Systems


V̇ = −kx2² − x4⁴ − x4² ≤ 0
V̇ = 0 ⇒ x2 = x4 = 0
x2 (t) ≡ 0 ⇒ ax1³(t) + x4 (t) ≡ 0 ⇒ x1 (t) ≡ 0
x4 (t) ≡ 0 ⇒ −bx3 (t) + x2 (t) − x4 (t) ≡ 0 ⇒ x3 (t) ≡ 0
By the invariance principle and the fact that V is radially
unbounded, we conclude that the origin is globally
asymptotically stable

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Example 7.3
Reconsider

ẋ1 = x2 , ẋ2 = −h1 (x1 ) − h2 (x2 ), hi ∈ (0, ∞)

from Example 3.8.

(Block diagram: H1 is the forward path, an integrator with state x2 and output y1 = x2 closed through −h2 (·); H2 is the feedback path, an integrator with state x1 followed by h1 (·))

Nonlinear Control Lecture # 16 Stability of Feedback Systems


H1 : ẋ2 = −h2 (x2 ) + e1 , y1 = x2

V1 = (1/2)x2², V̇1 = −x2 h2 (x2 ) + x2 e1 = −y1 h2 (y1 ) + y1 e1

Output Strictly Passive (Strictly Passive)

H2 : ẋ1 = e2 , y2 = h1 (x1 )

V2 = ∫₀^{x1} h1 (σ) dσ, V̇2 = h1 (x1 )e2 = y2 e2 (Lossless)

We cannot apply Theorem 7.2, but use

V = V1 + V2 = ∫₀^{x1} h1 (σ) dσ + (1/2)x2²

as a Lyapunov function candidate (Examples 3.8 and 3.9)

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Theorem 7.3
Consider the feedback connection of a strictly passive
dynamical system with a passive time-varying memoryless
function. When u = 0, the origin of the closed-loop system is
uniformly asymptotically stable. if the storage function for the
dynamical system is radially unbounded, the origin will be
globally uniformly asymptotically stable

Proof
Let V1 (x1 ) be (positive definite) storage function of H1 .

∂V1
V̇1 = f1 (x1 , e1 ) ≤ eT1 y1 − ψ1 (x1 ) = −eT2 y2 − ψ1 (x1 )
∂x1

eT2 y2 ≥ 0 ⇒ V̇1 ≤ −ψ1 (x1 )

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Example 7.4
Consider the feedback connection of a strictly positive real
transfer function and a passive time-varying memoryless
function
From Lemma 5.4, we know that the dynamical system is
strictly passive with a positive definite storage function
V (x) = 21 xT P x
From Theorem 7.3, the origin of the closed-loop system is
globally uniformly asymptotically stable

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Theorem 7.4
Consider the feedback connection of a time-invariant
dynamical system H1 with a time-invariant memoryless
function H2 . Suppose H1 is zero-state observable, V1 (x1 ) is
positive definite

eT1 y1 ≥ V̇1 + y1T ρ1 (y1 ), eT2 y2 ≥ eT2 ϕ2 (e2 )

Then, the origin of the closed-loop system (when u = 0) is


asymptotically stable if

v T [ρ1 (v) + ϕ2 (v)] > 0, ∀ v 6= 0

Furthermore, if V1 is radially unbounded, the origin will be


globally asymptotically stable

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Example 7.5

H1 : ẋ = f (x) + G(x)e1 , y1 = h(x)
H2 : y2 = σ(e2 )

σ(0) = 0, eT2 σ(e2 ) > 0, ∀ e2 6= 0

Suppose H1 is zero-state observable and there is a radially
unbounded positive definite function V1 (x) such that

(∂V1 /∂x)f (x) ≤ 0, (∂V1 /∂x)G(x) = hT (x), ∀ x ∈ Rn

V̇1 = (∂V1 /∂x)f (x) + (∂V1 /∂x)G(x)e1 ≤ y1T e1

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Apply Theorem 7.4:
V̇1 ≤ eT1 y1
eT1 y1 ≥ V̇1 + y1T ρ1 (y1 ) is satisfied with ρ1 = 0

eT2 y2 = eT2 σ(e2 )


eT2 y2 ≥ eT2 ϕ2 (e2 ) is satisfied with ϕ2 = σ

v T [ρ1 (v) + ϕ2 (v)] = v T σ(v) > 0, ∀ v 6= 0

The origin is globally asymptotically stable

Nonlinear Control Lecture # 16 Stability of Feedback Systems


Nonlinear Control
Lecture # 17
Stability of Feedback Systems

Nonlinear Control Lecture # 17 Stability of Feedback Systems


(Feedback connection: H1 in the forward path with input e1 = u1 − y2 and output y1 ; H2 in the feedback path with input e2 = u2 + y1 and output y2 )
H2

ẋi = fi (xi , ei ), yi = hi (xi , ei )

yi = hi (t, ei )

Nonlinear Control Lecture # 17 Stability of Feedback Systems


Loop Transformations
Recall that a memoryless function in the sector [K1 , K2 ] can
be transformed into a function in the sector [0, ∞] by input
feedforward followed by output feedback

(Block diagram of this transformation: y = h(t, u) with input feedforward through K −1 and output feedback through K1 )
Nonlinear Control Lecture # 17 Stability of Feedback Systems


(Feedback connection of H1 and H2 )

H1 is a dynamical system
H2 is a memoryless function in the sector [K1 , K2 ]

(Block diagrams: the loop is transformed in steps; output feedback K1 around H2 is compensated by feedback K1 around H1 , and input feedforward K = K2 − K1 on the H2 side is compensated by K −1 and a unity feedforward on the H1 side, yielding the feedback connection of H̃1 and H̃2 , where H̃2 belongs to the sector [0, ∞])

Nonlinear Control Lecture # 17 Stability of Feedback Systems
Nonlinear Control Lecture # 17 Stability of Feedback Systems


Example 7.6
H1 : ẋ1 = x2 , ẋ2 = −h(x1 ) + cx2 + e1 , y1 = x2
H2 : y2 = σ(e2 )

σ ∈ [α, β], h ∈ [α1 , ∞], c > 0, α1 > 0, b = β − α > 0

H̃1 : ẋ1 = x2 , ẋ2 = −h(x1 ) − ax2 + ẽ1 , ỹ1 = bx2 + ẽ1
H̃2 : ỹ2 = σ̃(ẽ2 )

σ̃ ∈ [0, ∞], a = α − c

Nonlinear Control Lecture # 17 Stability of Feedback Systems


It is shown in Example 5.4 that when a = α − c > 0, H̃1 is
strictly passive with a radially unbounded storage function.
Thus, we conclude from Theorem 7.3 that the origin of the
feedback connection is globally asymptotically stable (when
u = 0)

Nonlinear Control Lecture # 17 Stability of Feedback Systems


Finite-Gain L2 Stability
Theorem 7.5
The feedback connection of two output strictly passive
systems with

eTi yi ≥ V̇i + δi yiT yi , δi > 0

is finite-gain L2 stable and its L2 gain is less than or equal to


1/ min{δ1 , δ2 }

Proof
V = V1 + V2 , δ = min{δ1 , δ2 }

uT y = eT1 y1 + eT2 y2 ≥ V̇1 + δ1 y1T y1 + V̇2 + δ2 y2T y2


≥ V̇ + δ(y1T y1 + y2T y2 ) = V̇ + δy T y

Apply Theorem 6.5

Nonlinear Control Lecture # 17 Stability of Feedback Systems


Theorem 7.6
Consider a feedback connection where

eTi yi ≥ V̇i + εi eTi ei + δi yiT yi , for i = 1, 2

Then, the closed-loop map from u to y is finite-gain L2 stable


if
ε1 + δ2 > 0 and ε2 + δ1 > 0

Special cases:

ε1 = ε2 = 0, δ1 > 0, δ2 > 0 Both are OSP

ε1 > 0, ε2 > 0, δ1 = δ2 = 0 Both are ISP

ε1 = δ1 = 0, ε2 > 0, δ2 > 0 H1 P, H2 OSP & ISP

ε1 < 0 can be compensated for by δ2 > 0

Nonlinear Control Lecture # 17 Stability of Feedback Systems


Proof
V (x) = V1 (x1 ) + V2 (x2 )

eT1 y1 + eT2 y2 = uT1 y1 + uT2 y2

eT1 e1 = uT1 u1 − 2uT1 y2 + y2T y2 , eT2 e2 = uT2 u2 + 2uT2 y1 + y1T y1

V̇ ≤ −y T Ly − uT Mu + uT Ny, L = blockdiag[(ε2 + δ1 )I, (ε1 + δ2 )I]

a = min{ε2 + δ1 , ε1 + δ2 }, b = kNk, c = kMk

V̇ ≤ −akyk² + bkukkyk + ckuk²
  = −(1/(2a))(bkuk − akyk)² + (b²/(2a))kuk² − (a/2)kyk² + ckuk²
  ≤ (k²/(2a))kuk² − (a/2)kyk², where k² = b² + 2ac

Apply Lemma 6.1

Nonlinear Control Lecture # 17 Stability of Feedback Systems


Example 7.7

H1 : ẋ = f (x) + G(x)e1 , y1 = h(x)
H2 : y2 = σ(e2 )

σi ∈ [α, β], β > α > 0

(∂V1 /∂x)f (x) ≤ 0, (∂V1 /∂x)G(x) = hT (x) ⇒ eT1 y1 ≥ V̇1

αeT2 e2 ≤ eT2 y2 ≤ βeT2 e2 and (1/β)y2T y2 ≤ eT2 y2 ≤ (1/α)y2T y2

eT2 y2 = γeT2 y2 + (1 − γ)eT2 y2 ≥ γαeT2 e2 + (1 − γ)(1/β)y2T y2 , 0 < γ < 1

ε1 = δ1 = 0, ε2 = γα, δ2 = (1 − γ)/β
The closed-loop map from u to y is finite-gain L2 stable

Nonlinear Control Lecture # 17 Stability of Feedback Systems


The Small-Gain Theorem

(Feedback connection: e1 = u1 − y2 , e2 = u2 + y1 )

ky1τ kL ≤ γ1 ke1τ kL + β1 , ∀ e1 ∈ L^m_e , ∀ τ ∈ [0, ∞)

ky2τ kL ≤ γ2 ke2τ kL + β2 , ∀ e2 ∈ L^q_e , ∀ τ ∈ [0, ∞)

Nonlinear Control Lecture # 17 Stability of Feedback Systems


     
u = col(u1 , u2 ), y = col(y1 , y2 ), e = col(e1 , e2 )

Theorem 7.7
The feedback connection is finite-gain L stable if γ1 γ2 < 1

Proof

e1τ = u1τ − (H2 e2 )τ , e2τ = u2τ + (H1 e1 )τ

ke1τ kL ≤ ku1τ kL + k(H2 e2 )τ kL


≤ ku1τ kL + γ2 ke2τ kL + β2

Nonlinear Control Lecture # 17 Stability of Feedback Systems


ke1τ kL ≤ ku1τ kL + γ2 (ku2τ kL + γ1 ke1τ kL + β1 ) + β2
  = γ1 γ2 ke1τ kL + (ku1τ kL + γ2 ku2τ kL + β2 + γ2 β1 )

ke1τ kL ≤ (1/(1 − γ1 γ2 ))(ku1τ kL + γ2 ku2τ kL + β2 + γ2 β1 )

ke2τ kL ≤ (1/(1 − γ1 γ2 ))(ku2τ kL + γ1 ku1τ kL + β1 + γ1 β2 )

keτ kL ≤ ke1τ kL + ke2τ kL

Nonlinear Control Lecture # 17 Stability of Feedback Systems


Example 7.9

H1 : Hurwitz G(s)
H2 : y2 = ψ(t, e2 ), kψ(t, y)k ≤ γ2 kyk, ∀ t, ∀ y
H1 is finite-gain L2 stable; L2 gain ≤ supω∈R kG(jω)k
H2 is finite-gain L2 stable; L2 gain ≤ γ2
The feedback connection is finite-gain L2 stable if

γ2 supω∈R kG(jω)k < 1

Nonlinear Control Lecture # 17 Stability of Feedback Systems


Example 7.10

ẋ = f (t, x, v + d1 (t)), εż = Az + B[u + d2 (t)], v = Cz

A is Hurwitz, −CA−1 B = I, ε ≪ 1, di ∈ L
Model order reduction: ε = 0

v = −CA−1 B(u + d2 ) = u + d2

ẋ = f (t, x, u + d), d = d1 + d2
Disturbance attenuation: Design u = φ(t, x) such that

kxkL ≤ γkdkL + β, γ<δ

What happens when u is applied to the actual system?

Nonlinear Control Lecture # 17 Stability of Feedback Systems


ẋ = f (t, x, Cz + d1 (t)), εż = Az + B[φ(t, x) + d2 (t)]

Assume d˙2 ∈ L

η = z + A−1 B[φ(t, x) + d2 (t)]

ẋ = f (t, x, φ(t, x) + d(t) + Cη), εη̇ = Aη + εA−1 B[φ̇ + d˙2 (t)]

φ̇ = ∂φ/∂t + (∂φ/∂x)f (t, x, φ(t, x) + d(t) + Cη)
The system can be represented as a feedback connection

Nonlinear Control Lecture # 17 Stability of Feedback Systems


H1 : ẋ = f (t, x, φ(t, x) + e1 ), y1 = φ̇
H2 : η̇ = (1/ε)Aη + A−1 Be2 , y2 = −Cη

u1 = d1 + d2 = d, u2 = d˙2

Assume k∂φ/∂t + (∂φ/∂x)f (t, x, φ(t, x) + e1 )k ≤ c1 kxk + c2 ke1 k

ky1 kL ≤ γ1 ke1 kL + β1 , γ1 = c1 γ + c2 , β1 = c1 β

ky2 kL ≤ εγf ke2 kL + β2

Nonlinear Control Lecture # 17 Stability of Feedback Systems


By the small-gain theorem,

ke1 kL ≤ (1/(1 − εγ1 γf ))[ku1 kL + εγf ku2 kL + εγf β1 + β2 ]

By design
kxkL ≤ γke1 kL + β
Hence

kxkL ≤ (γ/(1 − εγ1 γf ))[kdkL + εγf kd˙2 kL + εγf β1 + β2 ] + β
     → γkdkL + β + γβ2 as ε → 0

Nonlinear Control Lecture # 17 Stability of Feedback Systems


Nonlinear Control
Lecture # 18
Stability of Feedback Systems

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Absolute Stability
(Feedback connection: r = 0, u = −ψ(y), y = G(s)u, with the nonlinearity ψ(·) in the feedback path)

Definition 7.1
The system is absolutely stable if the origin is globally
uniformly asymptotically stable for any nonlinearity in a given
sector. It is absolutely stable with finite domain if the origin is
uniformly asymptotically stable

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Circle Criterion
Suppose G(s) = C(sI − A)−1 B + D is SPR, ψ ∈ [0, ∞]

ẋ = Ax + Bu
y = Cx + Du
u = −ψ(t, y)

By the KYP Lemma, ∃ P = P T > 0, L, W, ε > 0

P A + AT P = −LT L − εP
P B = C T − LT W
W T W = D + DT

V (x) = 12 xT P x

Nonlinear Control Lecture # 18 Stability of Feedback Systems


V̇ = (1/2)xT P ẋ + (1/2)ẋT P x
  = (1/2)xT (P A + AT P )x + xT P Bu
  = −(1/2)xT LT Lx − (1/2)εxT P x + xT (C T − LT W )u
  = −(1/2)xT LT Lx − (1/2)εxT P x + (Cx + Du)T u − uT Du − xT LT W u

uT Du = (1/2)uT (D + D T )u = (1/2)uT W T W u

V̇ = −(1/2)εxT P x − (1/2)(Lx + W u)T (Lx + W u) − y T ψ(t, y)

y T ψ(t, y) ≥ 0 ⇒ V̇ ≤ −(1/2)εxT P x

The origin is globally exponentially stable

Nonlinear Control Lecture # 18 Stability of Feedback Systems


What if ψ ∈ [K1 , ∞]?

(Block diagram: the feedback K1 is moved from the nonlinearity to the linear part, replacing ψ by ψ̃(y) = ψ(y) − K1 y and G(s) by G(s)[I + K1 G(s)]−1 )

ψ̃ ∈ [0, ∞]; hence the origin is globally exponentially stable if

G(s)[I + K1 G(s)]−1 is SPR

Nonlinear Control Lecture # 18 Stability of Feedback Systems


What if ψ ∈ [K1 , K2 ]?

(Block diagram: in addition to the feedback K1 , the transformed nonlinearity is wrapped by K = K2 − K1 , K −1 , and a unity feedforward)

ψ̃ ∈ [0, ∞]; hence the origin is globally exponentially stable if

I + KG(s)[I + K1 G(s)]−1 is SPR
Nonlinear Control Lecture # 18 Stability of Feedback Systems
I + KG(s)[I + K1 G(s)]−1 = [I + K2 G(s)][I + K1 G(s)]−1

Theorem 7.8 (Circle Criterion)


The system is absolutely stable if
ψ ∈ [K1 , ∞] and G(s)[I + K1 G(s)]−1 is SPR, or
ψ ∈ [K1 , K2 ] and [I + K2 G(s)][I + K1 G(s)]−1 is SPR
If the sector condition is satisfied only on a set Y ⊂ Rm , then
the foregoing conditions ensure absolute stability with finite
domain

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Example 7.11
G(s) is Hurwitz, G(∞) = 0

kψ(t, y)k ≤ γ2 kyk, ∀ t, y

ψ ∈ [K1 , K2 ], K1 = −γ2 I, K2 = γ2 I
By the circle criterion, the system is absolutely stable if

Z(s) = [I + γ2 G(s)][I − γ2 G(s)]−1 is SPR

Z(∞) + Z T (∞) = 2I
By Lemma 5.1,

Z(s) SPR ⇔ Z(s) Hurwitz & Z(jω) + Z T (−jω) > 0 ∀ ω

Z(s) Hurwitz ⇔ [I − γ2 G(s)]−1 Hurwitz

Nonlinear Control Lecture # 18 Stability of Feedback Systems


From the multivariable Nyquist criterion, [I − γ2 G(s)]−1 is
Hurwitz if the plot of det[I − γ2 G(jω)] does not go through
nor encircle the origin. This will be the case if

σmin [I − γ2 G(jω)] > 0

σmin [I − γ2 G(jω)] ≥ 1 − γ1 γ2 , γ1 = supω∈R kG(jω)k

γ1 γ2 < 1 ⇒ Z(s) Hurwitz

Z(jω) + Z T (−jω) = 2H T (−jω)[I − γ2² GT (−jω)G(jω)]H(jω)

H(jω) = [I − γ2 G(jω)]−1

Z(jω) + Z T (−jω) > 0 ⇔ I − γ2² GT (−jω)G(jω) > 0

γ1 γ2 < 1 ⇒ Z(s) SPR

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Scalar Case: ψ ∈ [α, β], β > α
The system is absolutely stable if
1 + βG(s)
is Hurwitz and
1 + αG(s)
 
1 + βG(jω)
Re > 0, ∀ ω ∈ [0, ∞]
1 + αG(jω)

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Case 1: α > 0
By the Nyquist criterion

1 + βG(s) 1 βG(s)
= +
1 + αG(s) 1 + αG(s) 1 + αG(s)

is Hurwitz if the Nyquist plot of G(jω) does not intersect the


point −(1/α) + j0 and encircles it p times in the
counterclockwise direction, where p is the number of poles of
G(s) in the open right-half complex plane
Re[(1 + βG(jω))/(1 + αG(jω))] > 0 ⇔ Re[(1/β + G(jω))/(1/α + G(jω))] > 0

Nonlinear Control Lecture # 18 Stability of Feedback Systems


"1 #
β
+ G(jω)
Re 1 > 0, ∀ ω ∈ [0, ∞]
α
+ G(jω)

D(α,β) q

θ θ
2 1

−1/α −1/β

The system is absolutely stable if the Nyquist plot of G(jω)


does not enter the disk D(α, β) and encircles it m times in the
counterclockwise direction
Nonlinear Control Lecture # 18 Stability of Feedback Systems
Case 2: α = 0

1 + βG(s) is Hurwitz ⇔ G(s) is Hurwitz, and

Re[1 + βG(jω)] > 0, ∀ ω ∈ [0, ∞] ⇔ Re[G(jω)] > −1/β, ∀ ω ∈ [0, ∞]

The system is absolutely stable if G(s) is Hurwitz and the
Nyquist plot of G(jω) lies to the right of the vertical line
defined by Re[s] = −1/β

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Case 3: α < 0 < β
"1 #
+ G(jω)
 
1 + βG(jω) β
Re > 0 ⇔ Re 1 <0
1 + αG(jω) α
+ G(jω)

The Nyquist plot of G(jω) must lie inside the disk D(α, β).
The Nyquist plot cannot encircle the point −(1/α) + j0.
From the Nyquist criterion, G(s) must be Hurwitz
The system is absolutely stable if G(s) is Hurwitz and the
Nyquist plot of G(jω) lies in the interior of the disk D(α, β)

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Theorem 7.9
Consider an SISO G(s) and ψ ∈ [α, β]. Then, the system is
absolutely stable if one of the following conditions is satisfied.
1 0 < α < β, the Nyquist plot of G(s) does not enter the
disk D(α, β) and encircles it p times in the
counterclockwise direction, where p is the number of
poles of G(s) with positive real parts
2 0 = α < β, G(s) is Hurwitz and the Nyquist plot of G(s)
lies to the right of the vertical line Re[s] = −1/β.
3 α < 0 < β, G(s) is Hurwitz and the Nyquist plot of G(s)
lies in the interior of the disk D(α, β).
If the sector condition is satisfied only on an interval [a, b],
then the foregoing conditions ensure absolute stability with
finite domain

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Example 7.12

24
G(s) =
(s + 1)(s + 2)(s + 3)

(Figure: Nyquist plot of G(jω))
Nonlinear Control Lecture # 18 Stability of Feedback Systems


Apply Case 3 with center (0, 0) and radius = 4

Sector is (−0.25, 0.25)

Apply Case 3 with center (1.5, 0) and radius = 2.9

Sector is [−0.227, 0.714]

Apply Case 2
The Nyquist plot is to the right of Re[s] = −0.857

Sector is [0, 1.166]

[0, 1.166] includes the saturation nonlinearity
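The Case-2 numbers can be reproduced numerically. A sketch (the frequency grid is an arbitrary choice):

```python
import numpy as np

# The Nyquist plot of G(s) = 24/((s+1)(s+2)(s+3)) stays to the right of
# Re[s] ≈ -0.857, so the circle criterion gives the sector [0, 1/0.857] ≈ [0, 1.166].
w = np.linspace(1e-4, 100.0, 400000)
G = 24.0 / ((1j*w + 1.0) * (1j*w + 2.0) * (1j*w + 3.0))
min_re = G.real.min()
beta = -1.0 / min_re
print(min_re, beta)  # ≈ -0.857 and ≈ 1.166
```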

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Example 7.13
24
G(s) =
(s − 1)(s + 2)(s + 3)

(Figure: Nyquist plot of G(jω))

G is not Hurwitz; apply Case 1

Center = (−3.2, 0), Radius = 0.1688 ⇒ Sector [0.2969, 0.3298]

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Example 7.14

s+2
G(s) = , ψ(y) = sat(y) ∈ [0, 1]
(s + 1)(s − 1)
We cannot conclude absolute stability because case 1 requires
α>0

(Figure: the saturation nonlinearity ψ(y) = sat(y) together with the line y/a; on the interval −a ≤ y ≤ a the nonlinearity lies in the sector [1/a, 1])

−a ≤ y ≤ a ⇒ ψ ∈ [α, 1], α = 1/a

Nonlinear Control Lecture # 18 Stability of Feedback Systems


(Figure: Nyquist plot of G(jω) and the disk D(1/a, 1))

The Nyquist plot must encircle the disk D(1/a, 1) once in the
counterclockwise direction, which is satisfied for a = 1.818

The system is absolutely stable with finite domain

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Estimate the region of attraction:

ẋ1 = x2 , ẋ2 = x1 + u, y = 2x1 + x2

Loop transformation:
u = −αy + ũ, ỹ = (β − α)y + ũ, α = 1/a = 0.55, β = 1
a

ẋ = Ax + B ũ, ỹ = Cx + Dũ
where

A = [0 1; −0.1 −0.55], B = [0; 1], C = [0.9 0.45], D = 1

Nonlinear Control Lecture # 18 Stability of Feedback Systems


The KYP equations have two solutions:

P1 = [0.4946 0.4834; 0.4834 1.0774], P2 = [0.7595 0.4920; 0.4920 1.9426]

V1 (x) = xT P1 x, V2 (x) = xT P2 x

min{|y|=1.818} V1 (x) = 0.3445, min{|y|=1.818} V2 (x) = 0.6212

{V1 (x) ≤ 0.34}, {V2 (x) ≤ 0.62}
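The level constants can be checked with the closed-form minimum of a quadratic on a line: min{xT P x : |Cx| = c} = c²/(C P −1 C T ), here with C = [2 1] (since y = 2x1 + x2 ) and c = 1.818:

```python
import numpy as np

# Verify the level-set constants 0.3445 and 0.6212 of Example 7.14.
C = np.array([2.0, 1.0])
c = 1.818
P1 = np.array([[0.4946, 0.4834], [0.4834, 1.0774]])
P2 = np.array([[0.7595, 0.4920], [0.4920, 1.9426]])
levels = [c**2 / (C @ np.linalg.solve(P, C)) for P in (P1, P2)]
print(levels)  # ≈ [0.3445, 0.6212]
```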

Nonlinear Control Lecture # 18 Stability of Feedback Systems


(Figure: the region-of-attraction estimates {V1 (x) ≤ 0.34} and {V2 (x) ≤ 0.62}, bounded by the lines y = ±1.818)

Nonlinear Control Lecture # 18 Stability of Feedback Systems


Nonlinear Control
Lecture # 19
Stability of Feedback Systems

Nonlinear Control Lecture # 19 Stability of Feedback Systems


Popov Criterion
(Feedback connection of G(s) with the nonlinearity −ψ(·))

ẋ = Ax + Bu, y = Cx
(A, B) controllable, (A, C) observable
ui = −ψi (yi ), ψi ∈ [0, ki ], 1 ≤ i ≤ m, (0 < ki ≤ ∞)

G(s) = C(sI − A)−1 B

Γ = diag(γ1 , . . . , γm), M = diag(1/k1 , · · · , 1/km)

Nonlinear Control Lecture # 19 Stability of Feedback Systems


Theorem 7.10
The system is absolutely stable if for 1 ≤ i ≤ m,

ψi ∈ [0, ki ], 0 < ki ≤ ∞

and there is γi ≥ 0, with (1 + λk γi ) 6= 0 for every eigenvalue


λk of A, such that

M + (I + sΓ)G(s) is SPR

If the sector condition ψi ∈ [0, ki ] is satisfied only on a set


Y ⊂ Rm , then the system is absolutely stable with finite
domain

Nonlinear Control Lecture # 19 Stability of Feedback Systems


Proof

(Loop transformation: H̃1 = M + (I + sΓ)G(s); H̃2 consists of ψ(·) preceded by (I + sΓ)−1 and with feedforward M)
Nonlinear Control Lecture # 19 Stability of Feedback Systems


H̃1 :

M + (I + sΓ)G(s)
  = M + (I + sΓ)C(sI − A)−1 B
  = M + C(sI − A)−1 B + ΓCs(sI − A)−1 B
  = M + C(sI − A)−1 B + ΓC(sI − A + A)(sI − A)−1 B
  = (C + ΓCA)(sI − A)−1 B + M + ΓCB
  = C̄(sI − Ā)−1 B̄ + D̄

Ā = A, B̄ = B, C̄ = C + ΓCA, D̄ = M + ΓCB

Avk = λk vk ⇒ (C + ΓCA)vk = (I + λk Γ)Cvk

Nonlinear Control Lecture # 19 Stability of Feedback Systems


(1 + λk γi ) 6= 0 ⇒ (A, C + ΓCA) observable
If M + (I + sΓ)G(s) is SPR, by the KYP lemma, there are
P = P T > 0, L, and W , and ε > 0 that satisfy

P A + AT P = −LT L − εP
P B = (C + ΓCA)T − LT W
W T W = 2M + ΓCB + B T C T Γ

and V1 = (1/2)xT P x is a storage function for H̃1

Nonlinear Control Lecture # 19 Stability of Feedback Systems


H̃2 consists of m decoupled components:

γi ẏi = −yi + (1/ki )ψi (yi ) + ẽ2i , ỹ2i = ψi (yi )

V2i = γi ∫₀^{yi} ψi (σ) dσ

V̇2i = γi ψi (yi )ẏi = ψi (yi )[−yi + (1/ki )ψi (yi ) + ẽ2i ]
    = ỹ2i ẽ2i + (1/ki )ψi (yi )[ψi (yi ) − ki yi ]

ψi ∈ [0, ki ] ⇒ ψi (ψi − ki yi ) ≤ 0 ⇒ V̇2i ≤ ỹ2i ẽ2i

H̃2 is passive with the storage function

V2 = Σ_{i=1}^{m} γi ∫₀^{yi} ψi (σ) dσ

Nonlinear Control Lecture # 19 Stability of Feedback Systems


Use V = (1/2)xT P x + Σ_{i=1}^{m} γi ∫₀^{yi} ψi (σ) dσ

as a Lyapunov function candidate for the original feedback connection

ẋ = Ax + Bu, y = Cx, u = −ψ(y)

V̇ = (1/2)xT P ẋ + (1/2)ẋT P x + ψ T (y)Γẏ
  = (1/2)xT (P A + AT P )x + xT P Bu + ψ T (y)ΓC(Ax + Bu)
  = −(1/2)xT LT Lx − (1/2)εxT P x + xT (C T + AT C T Γ − LT W )u
    + ψ T (y)ΓCAx + ψ T (y)ΓCBu
Nonlinear Control Lecture # 19 Stability of Feedback Systems


V̇ = −(1/2)εxT P x − (1/2)(Lx + W u)T (Lx + W u) − ψ T (y)[y − Mψ(y)]
  ≤ −(1/2)εxT P x − ψ T (y)[y − Mψ(y)]

ψi ∈ [0, ki ] ⇒ ψ T (y)[y − Mψ(y)] ≥ 0 ⇒ V̇ ≤ −(1/2)εxT P x

The origin is globally asymptotically stable

Nonlinear Control Lecture # 19 Stability of Feedback Systems


Scalar case

(1/k) + (1 + sγ)G(s)

is SPR if G(s) is Hurwitz and

(1/k) + Re[G(jω)] − γω Im[G(jω)] > 0, ∀ ω ∈ [0, ∞)

If

limω→∞ {(1/k) + Re[G(jω)] − γω Im[G(jω)]} = 0

we also need

limω→∞ ω²{(1/k) + Re[G(jω)] − γω Im[G(jω)]} > 0
Nonlinear Control Lecture # 19 Stability of Feedback Systems


(1/k) + Re[G(jω)] − γω Im[G(jω)] > 0, ∀ ω ∈ [0, ∞)

(Popov plot: ωIm[G(jω)] versus Re[G(jω)]; the plot must lie to the right of the line through −1/k + j0 with slope 1/γ)

Nonlinear Control Lecture # 19 Stability of Feedback Systems


Example

ẋ1 = x2 , ẋ2 = −x2 − h(y), y = x1

ẋ2 = −αx1 − x2 − h(y) + αx1 , α > 0

G(s) = 1/(s² + s + α), ψ(y) = h(y) − αy

h ∈ [α, β] ⇒ ψ ∈ [0, k] (k = β − α > 0)

γ > 1 ⇒ (α − ω² + γω²)/[(α − ω²)² + ω²] > 0, ∀ ω ∈ [0, ∞)

and limω→∞ ω²(α − ω² + γω²)/[(α − ω²)² + ω²] = γ − 1 > 0

Nonlinear Control Lecture # 19 Stability of Feedback Systems


The system is absolutely stable for ψ ∈ [0, ∞] (h ∈ [α, ∞])

(Figure: Popov plot of G(jω) with a line of slope 1)

Compare with the circle criterion (γ = 0):

(1/k) + (α − ω²)/[(α − ω²)² + ω²] > 0, ∀ ω ∈ [0, ∞], for k < 1 + 2√α
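Both parts of the Popov condition in this example can be verified numerically. A sketch with the arbitrary choices α = 1 and γ = 2:

```python
import numpy as np

# Popov quantity for G(s) = 1/(s^2 + s + alpha):
# Re[G(jw)] - gamma*w*Im[G(jw)] = (alpha - w^2 + gamma*w^2)/((alpha - w^2)^2 + w^2)
alpha, gamma = 1.0, 2.0
w = np.linspace(1e-3, 1e3, 200000)
G = 1.0 / ((1j*w)**2 + 1j*w + alpha)
popov = G.real - gamma * w * G.imag
limit = (w**2 * popov)[-1]          # scaled tail value, should approach gamma - 1
print(popov.min() > 0, limit)
```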

Nonlinear Control Lecture # 19 Stability of Feedback Systems


Nonlinear Control
Lecture # 20
Special nonlinear Forms

Nonlinear Control Lecture # 20 Special nonlinear Forms


Normal Form
Relative Degree

ẋ = f (x) + g(x)u, y = h(x)

where f , g, and h are sufficiently smooth in a domain D


f : D → Rn and g : D → Rn are called vector fields on D
∂h def
ẏ = [f (x) + g(x)u] = Lf h(x) + Lg h(x) u
∂x
∂h
f (x)
Lf h(x) =
∂x
is the Lie Derivative of h with respect to f or along f

Nonlinear Control Lecture # 20 Special nonlinear Forms


∂(Lf h)
Lg Lf h(x) = g(x)
∂x
∂(Lf h)
L2f h(x) = Lf Lf h(x) = f (x)
∂x

∂(Lk−1
f h)
Lkf h(x) = Lf Lk−1
f h(x) = f (x)
∂x

L0f h(x) = h(x)

ẏ = Lf h(x) + Lg h(x) u

Lg h(x) = 0 ⇒ ẏ = Lf h(x)

∂(Lf h)
y (2) = [f (x) + g(x)u] = L2f h(x) + Lg Lf h(x) u
∂x

Nonlinear Control Lecture # 20 Special nonlinear Forms


Lg Lf h(x) = 0 ⇒ y (2) = L2f h(x)

y (3) = L3f h(x) + Lg L2f h(x) u

Lg Li−1
f h(x) = 0, i = 1, 2, . . . , ρ − 1; Lg Lρ−1
f h(x) 6= 0

y (ρ) = Lρf h(x) + Lg Lρ−1


f h(x) u

Definition 8.1
The system

ẋ = f (x) + g(x)u, y = h(x)

has relative degree ρ, 1 ≤ ρ ≤ n, in R ⊂ D if ∀ x ∈ R

Lg Li−1
f h(x) = 0, i = 1, 2, . . . , ρ − 1; Lg Lρ−1
f h(x) 6= 0

Nonlinear Control Lecture # 20 Special nonlinear Forms


Example 8.1
Controlled van der Pol equation

ẋ1 = x2 /ε, ẋ2 = ε[−x1 + x2 − (1/3)x2³ + u], y = x1

ẏ = ẋ1 = x2 /ε, ÿ = ẋ2 /ε = −x1 + x2 − (1/3)x2³ + u

Relative degree two over R²

ẋ1 = x2 /ε, ẋ2 = ε[−x1 + x2 − (1/3)x2³ + u], y = x2

ẏ = ε[−x1 + x2 − (1/3)x2³ + u], Relative degree one over R²

ẋ1 = x2 /ε, ẋ2 = ε[−x1 + x2 − (1/3)x2³ + u], y = (1/2)(ε²x1² + x2²)

ẏ = ε²x1 ẋ1 + x2 ẋ2 = εx2² − (ε/3)x2⁴ + εx2 u

Relative degree one in {x2 6= 0}
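The Lie-derivative tests can be automated numerically. A sketch (not from the lecture) using central finite differences, applied to the first case above (y = x1 ; ε = 0.5 and the test point are arbitrary):

```python
import numpy as np

# Finite-difference Lie derivatives for the controlled van der Pol equation:
# L_g h = 0 while L_g L_f h = 1, so the relative degree is two.
eps = 0.5
f = lambda x: np.array([x[1] / eps, eps * (-x[0] + x[1] - x[1]**3 / 3.0)])
g = lambda x: np.array([0.0, eps])
h = lambda x: x[0]

def lie(F, phi, x, d=1e-5):
    # directional derivative of the scalar function phi along the vector field F
    grad = np.array([(phi(x + d*e) - phi(x - d*e)) / (2.0*d)
                     for e in np.eye(len(x))])
    return grad @ F(x)

x0 = np.array([0.7, -1.3])                       # arbitrary test point
Lg_h = lie(g, h, x0)
LgLf_h = lie(g, lambda x: lie(f, h, x), x0)      # nested Lie derivative
print(Lg_h, LgLf_h)  # ≈ 0 and ≈ 1
```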

Nonlinear Control Lecture # 20 Special nonlinear Forms


Example 8.2 (Field-controlled DC motor)

ẋ1 = d1 (−x1 − x2 x3 + Va )
ẋ2 = d2 [−fe (x2 ) + u]
ẋ3 = d3 (x1 x2 − bx3 )
y = x3

ẏ = ẋ3 = d3 (x1 x2 − bx3 )


ÿ = d3 (x1 ẋ2 + ẋ1 x2 − bẋ3 ) = (· · · ) + d2 d3 x1 u
Relative degree two in {x1 6= 0}

Nonlinear Control Lecture # 20 Special nonlinear Forms


Example 8.3
bm sm + bm−1 sm−1 + · · · + b0
H(s) =
sn + an−1 sn−1 + · · · + a0

ẋ = Ax + Bu, y = Cx

where A is the n × n companion matrix with ones on the superdiagonal and last row [−a0 −a1 · · · −an−1 ], B = [0 · · · 0 1]T , and C = [b0 b1 · · · bm 0 · · · 0]

Nonlinear Control Lecture # 20 Special nonlinear Forms


ẏ = CAx + CBu, If m = n − 1, CB = bn−1 6= 0 ⇒ ρ = 1

CAi−1 B = 0, i = 1, . . . , n − m − 1, CAn−m−1 B = bm 6= 0
y (n−m) = CAn−m x + CAn−m−1 Bu ⇒ ρ = n − m
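A numeric illustration with hypothetical numbers: take H(s) = (s + 2)/(s³ + 3s² + 3s + 1), so n = 3, m = 1 and the relative degree should be n − m = 2.

```python
import numpy as np

# Markov parameters of the companion-form realization:
# CB = 0 and CAB = b_m = 1, so rho = n - m = 2.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -3.0, -3.0]])
B = np.array([0.0, 0.0, 1.0])
C = np.array([2.0, 1.0, 0.0])
CB, CAB = C @ B, C @ (A @ B)
print(CB, CAB)  # 0.0 1.0
```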

H(s) = N(s)/D(s) = N(s)/[Q(s)N(s) + R(s)] = (1/Q(s)) / [1 + (R(s)/N(s))(1/Q(s))]

(Feedback connection: forward path 1/Q(s) with input e = u − w and output y; feedback path w = [R(s)/N(s)]y)

Nonlinear Control Lecture # 20 Special nonlinear Forms


State model of 1/Q(s): ξ = col(y, ẏ, . . . , y(ρ−1) )

ξ̇ = (Ac + Bc λT )ξ + Bc bm e, y = Cc ξ

where Ac is the ρ × ρ shift matrix (ones on the superdiagonal, zeros elsewhere), Bc = [0 · · · 0 1]T , and Cc = [1 0 · · · 0]

State model of R(s)/N(s)

η̇ = A0 η + B0 y, w = C0 η

Nonlinear Control Lecture # 20 Special nonlinear Forms


State model of H(s)

η̇ = A0 η + B0 Cc ξ
ξ˙ = Ac ξ + Bc (λT ξ − bm C0 η + bm u)
y = Cc ξ

The eigenvalues of A0 are the zeros of H(s)

Nonlinear Control Lecture # 20 Special nonlinear Forms


Change of variables:

z = T (x) = col(φ(x), ψ(x)) = col(η, ξ)

where φ(x) = col(φ1 (x), . . . , φn−ρ (x)) and
ψ(x) = col(h(x), Lf h(x), . . . , Lfρ−1 h(x))

φ1 to φn−ρ are chosen such that T (x) is a diffeomorphism on
a domain Dx ⊂ R

When ρ = n, z = T (x) = ψ(x) = ξ



η̇ = (∂φ/∂x)[f(x) + g(x)u] = f0(η, ξ) + g0(η, ξ)u
ξ̇i = ξ_{i+1},   1 ≤ i ≤ ρ − 1
ξ̇ρ = Lf^ρ h(x) + Lg Lf^{ρ−1} h(x) u
y = ξ1

Choose φ(x) such that T(x) is a diffeomorphism and

(∂φi/∂x) g(x) = 0,  for 1 ≤ i ≤ n − ρ,  ∀ x ∈ Dx

Always possible (at least locally)

η̇ = f0(η, ξ)



Theorem 8.1
Suppose the system

ẋ = f(x) + g(x)u,   y = h(x)

has relative degree ρ (≤ n) in R. If ρ = n, then for every x0 ∈ R, a neighborhood N of x0 exists such that the map T(x) = ψ(x), restricted to N, is a diffeomorphism on N. If ρ < n, then, for every x0 ∈ R, a neighborhood N of x0 and smooth functions φ1(x), . . . , φ_{n−ρ}(x) exist such that

(∂φi/∂x) g(x) = 0,  for 1 ≤ i ≤ n − ρ

is satisfied for all x ∈ N and the map T(x) = col(φ(x), ψ(x)), restricted to N, is a diffeomorphism on N



Normal Form:
η̇ = f0(η, ξ)
ξ̇i = ξ_{i+1},   1 ≤ i ≤ ρ − 1
ξ̇ρ = Lf^ρ h(x) + Lg Lf^{ρ−1} h(x) u
y = ξ1

Ac = [ 0  1  0  · · ·  0
       0  0  1  · · ·  0
       ⋮               ⋮
       ⋮           0   1
       0  · · · · · · 0  0 ],   Bc = [0  0  · · ·  0  1]^T,   Cc = [1  0  · · ·  0  0]



η̇ = f0(η, ξ)
ξ̇ = Ac ξ + Bc [Lf^ρ h(x) + Lg Lf^{ρ−1} h(x) u]
y = Cc ξ

ψ̃(η, ξ) = Lf^ρ h(x)|_{x=T^{−1}(z)},   γ̃(η, ξ) = Lg Lf^{ρ−1} h(x)|_{x=T^{−1}(z)}

ξ̇ = Ac ξ + Bc [ψ̃(η, ξ) + γ̃(η, ξ)u]

If x* is an open-loop equilibrium point at which y = 0; i.e., f(x*) = 0 and h(x*) = 0, then ψ(x*) = 0. Take φ(x*) = 0 so that z = 0 is an open-loop equilibrium point.



Zero Dynamics
η̇ = f0(η, ξ)
ξ̇ = Ac ξ + Bc [Lf^ρ h(x) + Lg Lf^{ρ−1} h(x) u]
y = Cc ξ

y(t) ≡ 0 ⇒ ξ(t) ≡ 0 ⇒ u(t) ≡ − Lf^ρ h(x(t)) / [Lg Lf^{ρ−1} h(x(t))]

⇒ η̇ = f0(η, 0)

Definition
The equation η̇ = f0(η, 0) is called the zero dynamics of the system. The system is said to be minimum phase if the zero dynamics have an asymptotically stable equilibrium point in the domain of interest (at the origin if T(0) = 0)



Z* = {x ∈ R | h(x) = Lf h(x) = · · · = Lf^{ρ−1} h(x) = 0}

y(t) ≡ 0 ⇒ x(t) ∈ Z*

⇒ u = u*(x) def= − [ Lf^ρ h(x) / (Lg Lf^{ρ−1} h(x)) ]|_{x∈Z*}

The restricted motion of the system is described by

ẋ = f*(x) def= [ f(x) − g(x) Lf^ρ h(x) / (Lg Lf^{ρ−1} h(x)) ]|_{x∈Z*}



Example 8.4

ẋ1 = x2/ε,   ẋ2 = ε[−x1 + x2 − (1/3)x2^3 + u],   y = x2

ẏ = ẋ2 = ε[−x1 + x2 − (1/3)x2^3 + u] ⇒ ρ = 1

The system is in the normal form with η = x1 and ξ = x2

y(t) ≡ 0 ⇒ x2(t) ≡ 0 ⇒ ẋ1 = 0

Non-minimum phase



Example 8.5

ẋ1 = −x1 + ((2 + x3^2)/(1 + x3^2)) u,   ẋ2 = x3,   ẋ3 = x1 x3 + u,   y = x2

ẏ = ẋ2 = x3
ÿ = ẋ3 = x1 x3 + u ⇒ ρ = 2

Z* = {x2 = x3 = 0}

u = u*(x) = 0 ⇒ ẋ1 = −x1

Minimum phase



Find φ(x) such that

φ(0) = 0,   (∂φ/∂x) g(x) = [∂φ/∂x1  ∂φ/∂x2  ∂φ/∂x3] [ (2 + x3^2)/(1 + x3^2)
                                                       0
                                                       1 ] = 0
and
T(x) = [φ(x)  x2  x3]^T
is a diffeomorphism

(∂φ/∂x1)·(2 + x3^2)/(1 + x3^2) + ∂φ/∂x3 = 0

φ(x) = x1 − x3 − tan^{−1} x3



   
T(x) = [x1 − x3 − tan^{−1} x3,  x2,  x3]^T,   ∂T/∂x = [ 1  0  ⋆
                                                        0  1  0
                                                        0  0  1 ]
T(x) is a global diffeomorphism

η̇ = −(η + ξ2 + tan^{−1} ξ2)[1 + ξ2 (2 + ξ2^2)/(1 + ξ2^2)]
ξ̇1 = ξ2
ξ̇2 = (η + ξ2 + tan^{−1} ξ2) ξ2 + u
y = ξ1



Nonlinear Control
Lecture # 21
Special nonlinear Forms

Nonlinear Control Lecture # 21 Special nonlinear Forms


Controller Form
Definition
A nonlinear system is in the controller form if

ẋ = Ax + B[ψ(x) + γ(x)u]

where (A, B) is controllable and γ(x) is a nonsingular matrix


for all x in the domain of interest

u = γ −1 (x)[−ψ(x) + v] ⇒ ẋ = Ax + Bv
Any system that can be represented in the controller form is
said to be feedback linearizable



Example 8.7 (m-link robot)

M(q)q̈ + C(q, q̇)q̇ + D q̇ + g(q) = u


q is an m-dimensional vector of joint positions and M(q) is a
nonsingular inertial matrix
     
x = [ q,  q̇ ]^T,   A = [ 0  Im
                         0  0  ],   B = [ 0
                                          Im ]

ψ = −M^{−1}(C q̇ + D q̇ + g),   γ = M^{−1}



An n-dimensional single-input system

ẋ = f(x) + g(x)u

is transformable into the controller form if and only if ∃ h(x) such that

ẋ = f(x) + g(x)u,   y = h(x)

has relative degree n
Search for a smooth function h(x) such that

Lg Lf^{i−1} h(x) = 0,  i = 1, 2, . . . , n − 1,  and  Lg Lf^{n−1} h(x) ≠ 0

T(x) = col( h(x), Lf h(x), · · · , Lf^{n−1} h(x) )


The Lie Bracket: For two vector fields f and g, the Lie bracket [f, g] is a third vector field defined by

[f, g](x) = (∂g/∂x) f(x) − (∂f/∂x) g(x)

Notation:

ad_f^0 g(x) = g(x),   ad_f g(x) = [f, g](x)
ad_f^k g(x) = [f, ad_f^{k−1} g](x),  k ≥ 1

Properties:
[f, g] = −[g, f]
For constant vector fields f and g, [f, g] = 0



Example 8.8

f = [ x2             ],   g = [ 0  ]
    [ −sin x1 − x2   ]        [ x1 ]

[f, g] = [ 0  0 ] [ x2           ]  −  [ 0        1  ] [ 0  ]
         [ 1  0 ] [ −sin x1 − x2 ]     [ −cos x1  −1 ] [ x1 ]

ad_f g = [f, g] = [ −x1     ]
                  [ x1 + x2 ]
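These bracket computations are easy to reproduce symbolically; the sketch below (using sympy, not lecture code) implements the definition [f, g] = (∂g/∂x)f − (∂f/∂x)g directly and reproduces ad_f g and ad_f^2 g for this example.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1) - x2])
g = sp.Matrix([0, x1])

def lie_bracket(f, g, x):
    # [f, g] = (dg/dx) f - (df/dx) g
    return g.jacobian(x) * f - f.jacobian(x) * g

adf_g = sp.expand(lie_bracket(f, g, x))
print(list(adf_g))   # [-x1, x1 + x2]

ad2f_g = sp.expand(lie_bracket(f, adf_g, x))
print(list(ad2f_g))  # matches ad_f^2 g computed on the next slide
```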



   
f = [ x2           ],   ad_f g = [ −x1     ]
    [ −sin x1 − x2 ]             [ x1 + x2 ]

ad_f^2 g = [f, ad_f g]
         = [ −1  0 ] [ x2           ]  −  [ 0        1  ] [ −x1     ]
           [  1  1 ] [ −sin x1 − x2 ]     [ −cos x1  −1 ] [ x1 + x2 ]
         = [ −x1 − 2x2                       ]
           [ x1 + x2 − sin x1 − x1 cos x1    ]



Example 8.9
f (x) = Ax, g is a constant vector field

ad_f g = [f, g] = −Ag,   ad_f^2 g = [f, ad_f g] = −A(−Ag) = A^2 g

ad_f^k g = (−1)^k A^k g

Distribution: For vector fields f1 , f2 , . . . , fk on D ⊂ Rn , let

∆(x) = span{f1 (x), f2 (x), . . . , fk (x)}

The collection of all vector spaces ∆(x) for x ∈ D is called a


distribution and referred to by

∆ = span{f1 , f2 , . . . , fk }



If dim(∆(x)) = k for all x ∈ D, we say that ∆ is a
nonsingular distribution on D, generated by f1 , . . . , fk
A distribution ∆ is involutive if

g1 ∈ ∆ and g2 ∈ ∆ ⇒ [g1 , g2 ] ∈ ∆

If ∆ is a nonsingular distribution, generated by


f1 , . . . , fk , then it is involutive if and only if

[fi , fj ] ∈ ∆, ∀ 1 ≤ i, j ≤ k



Example 8.10
D = R^3;  ∆ = span{f1, f2}

f1 = [ 2x2 ],   f2 = [ 1  ],   dim(∆(x)) = 2, ∀ x ∈ D
     [ 1   ]         [ 0  ]
     [ 0   ]         [ x2 ]

[f1, f2] = (∂f2/∂x) f1 − (∂f1/∂x) f2 = [ 0 ]
                                       [ 0 ]
                                       [ 1 ]

rank [f1(x), f2(x), [f1, f2](x)] = rank [ 2x2  1   0
                                          1    0   0
                                          0    x2  1 ] = 3,  ∀ x ∈ D

∆ is not involutive
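This rank test can be automated; a small sympy check for this example (following the definitions, not lecture code):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f1 = sp.Matrix([2*x2, 1, 0])
f2 = sp.Matrix([1, 0, x2])

# [f1, f2] = (df2/dx) f1 - (df1/dx) f2
bracket = f2.jacobian(x) * f1 - f1.jacobian(x) * f2
print(list(bracket))  # [0, 0, 1]

# generic rank 3 means [f1, f2] is not in span{f1, f2}: Delta is not involutive
M = sp.Matrix.hstack(f1, f2, bracket)
print(M.rank())  # 3
```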



Example 8.11
D = {x ∈ R^3 | x1^2 + x3^2 ≠ 0};  ∆ = span{f1, f2}

f1 = [ 2x3 ],   f2 = [ −x1  ],   dim(∆(x)) = 2, ∀ x ∈ D
     [ −1  ]         [ −2x2 ]
     [ 0   ]         [ x3   ]

[f1, f2] = (∂f2/∂x) f1 − (∂f1/∂x) f2 = [ −4x3 ]
                                       [ 2    ]
                                       [ 0    ]

rank [ 2x3  −x1   −4x3
       −1   −2x2   2
       0     x3    0  ] = 2,  ∀ x ∈ D

∆ is involutive



Theorem 8.2
The n-dimensional single-input system

ẋ = f(x) + g(x)u

is feedback linearizable in a neighborhood of x0 ∈ D if and only if there is a domain Dx ⊂ D, with x0 ∈ Dx, such that
1. the matrix G(x) = [g(x), ad_f g(x), . . . , ad_f^{n−1} g(x)] has rank n for all x ∈ Dx;
2. the distribution D = span{g, ad_f g, . . . , ad_f^{n−2} g} is involutive in Dx.



Example 8.12

ẋ = [ a sin x2 ] + [ 0 ] u
    [ −x1^2   ]    [ 1 ]

ad_f g = [f, g] = −(∂f/∂x) g = [ −a cos x2 ]
                               [ 0         ]

[g(x), ad_f g(x)] = [ 0  −a cos x2 ]
                    [ 1   0        ]

rank [g(x), ad_f g(x)] = 2,  ∀ x such that cos x2 ≠ 0

span{g} is involutive

Find h such that Lg h(x) = 0 and Lg Lf h(x) ≠ 0



(∂h/∂x) g = ∂h/∂x2 = 0 ⇒ h is independent of x2

Lf h(x) = (∂h/∂x1) a sin x2

Lg Lf h(x) = (∂(Lf h)/∂x) g = ∂(Lf h)/∂x2 = (∂h/∂x1) a cos x2

Lg Lf h(x) ≠ 0 in D0 = {x ∈ R^2 | cos x2 ≠ 0} if ∂h/∂x1 ≠ 0

Take h(x) = x1 ⇒ T(x) = [ h,  Lf h ]^T = [ x1,  a sin x2 ]^T
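A symbolic check of both the rank condition and the choice h = x1 for this example (a sketch using sympy):

```python
import sympy as sp

a = sp.symbols('a', positive=True)
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([a*sp.sin(x2), -x1**2])
g = sp.Matrix([0, 1])

adf_g = g.jacobian(x)*f - f.jacobian(x)*g   # = [-a*cos(x2), 0]
G = sp.Matrix.hstack(g, adf_g)
print(sp.simplify(G.det()))                 # a*cos(x2): rank 2 iff cos(x2) != 0

h = sp.Matrix([x1])
Lf_h = h.jacobian(x) * f                    # [a*sin(x2)]
Lg_h = (h.jacobian(x) * g)[0]
LgLf_h = (Lf_h.jacobian(x) * g)[0]
print(Lg_h, LgLf_h)                         # 0 a*cos(x2)
```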
Lf h a sin x2



Example 8.13 (A single link manipulator with flexible joints)

f(x) = [ x2,  −a sin x1 − b(x1 − x3),  x4,  c(x1 − x3) ]^T,   g = [ 0, 0, 0, d ]^T

ad_f g = [f, g] = −(∂f/∂x) g = [ 0,  0,  −d,  0 ]^T



 
ad_f^2 g = [f, ad_f g] = −(∂f/∂x) ad_f g = [ 0,  bd,  0,  −cd ]^T

ad_f^3 g = [f, ad_f^2 g] = −(∂f/∂x) ad_f^2 g = [ −bd,  0,  cd,  0 ]^T

rank [ 0    0    0   −bd
       0    0    bd   0
       0   −d    0    cd
       d    0   −cd   0  ] = 4



span{g, ad_f g, ad_f^2 g} is involutive. Why?
The system is feedback linearizable. Find h(x) such that

(∂(Lf^{i−1} h)/∂x) g = 0,  i = 1, 2, 3,   (∂(Lf^3 h)/∂x) g ≠ 0,   h(0) = 0

(∂h/∂x) g = 0 ⇒ ∂h/∂x4 = 0

Lf h(x) = (∂h/∂x1) x2 + (∂h/∂x2)[−a sin x1 − b(x1 − x3)] + (∂h/∂x3) x4

(∂(Lf h)/∂x) g = 0 ⇒ ∂(Lf h)/∂x4 = 0 ⇒ ∂h/∂x3 = 0



Lf h(x) = (∂h/∂x1) x2 + (∂h/∂x2)[−a sin x1 − b(x1 − x3)]

Lf^2 h(x) = (∂(Lf h)/∂x1) x2 + (∂(Lf h)/∂x2)[−a sin x1 − b(x1 − x3)] + (∂(Lf h)/∂x3) x4

∂(Lf^2 h)/∂x4 = 0 ⇒ ∂(Lf h)/∂x3 = 0 ⇒ ∂h/∂x2 = 0

(∂(Lf^3 h)/∂x) g ≠ 0 ⇔ ∂h/∂x1 ≠ 0

h(x) = x1,   T(x) = [ x1,  x2,  −a sin x1 − b(x1 − x3),  −a x2 cos x1 − b(x2 − x4) ]^T



Example 8.14 (Field-Controlled DC Motor)

f = [ d1(−x1 − x2 x3 + Va),  −d2 fe(x2),  d3(x1 x2 − b x3) ]^T,   g = [ 0, d2, 0 ]^T,   fe ∈ C^2 for x2 ∈ J

ad_f g = [ d1 d2 x3,  d2^2 fe′(x2),  −d2 d3 x1 ]^T

ad_f^2 g = [ d1 d2 x3 (d1 + d2 fe′(x2) − b d3)
             d2^3 (fe′(x2))^2 − d2^3 fe(x2) fe″(x2)
             d1 d2 d3 (x1 − Va) − d2^2 d3 x1 fe′(x2) − b d2 d3^2 x1 ]



det G = −2 d1^2 d2^3 d3 x3 (x1 − a)(1 − b d3/d1)
a = (1/2) Va/(1 − b d3/d1) > 0
x1 ≠ a and x3 ≠ 0 ⇒ rank(G) = 3

[g, ad_f g] = d2^2 fe″(x2) g ⇒ span{g, ad_f g} is involutive

The conditions of Theorem 8.2 are satisfied in the domain

Dx = {x1 > a, x2 ∈ J, x3 > 0}

Find h(x) such that

(∂h/∂x) g = 0;   (∂(Lf h)/∂x) g = 0;   (∂(Lf^2 h)/∂x) g ≠ 0



(∂h/∂x) g = 0 ⇒ ∂h/∂x2 = 0

Lf h(x) = (∂h/∂x1) d1(−x1 − x2 x3 + Va) + (∂h/∂x3) d3(x1 x2 − b x3)

(∂(Lf h)/∂x) g = 0 ⇒ ∂(Lf h)/∂x2 = 0 ⇒ −d1 x3 (∂h/∂x1) + d3 x1 (∂h/∂x3) = 0

Take h = d3 x1^2 + d1 x3^2 + c

Lf h(x) = 2 d1 d3 x1 (Va − x1) − 2 b d1 d3 x3^2
Lf^2 h(x) = 2 d1^2 d3 (Va − 2x1)(−x1 − x2 x3 + Va) − 4 b d1 d3^2 x3 (x1 x2 − b x3)

(∂(Lf^2 h)/∂x) g = d2 ∂(Lf^2 h)/∂x2 = 4 d1^2 d2 d3 (1 − b d3/d1) x3 (x1 − a) ≠ 0



Nonlinear Control
Lecture # 22
Special nonlinear Forms

Nonlinear Control Lecture # 22 Special nonlinear Forms


Observer Form
Definition
A nonlinear system is in the observer form if

ẋ = Ax + ψ(y, u), y = Cx

where (A, C) is observable

Observer:
x̂˙ = Ax̂ + ψ(y, u) + H(y − C x̂)

x̃ = x − x̂

x̃˙ = (A − HC)x̃
Design H such that (A − HC) is Hurwitz
Example 8.15 (A single link manipulator with flexible joints)

   
ẋ = [ x2,  −a sin x1 − b(x1 − x3),  x4,  c(x1 − x3) ]^T + [ 0, 0, 0, d ]^T u,   y = x1

ẋ = Ax + ψ(u, y),   y = Cx

A = [ 0   1   0   0
      −b  0   b   0
      0   0   0   1
      c   0  −c   0 ],   ψ = [ 0,  −a sin y,  0,  d u ]^T

C = [ 1  0  0  0 ],   (A, C) is observable
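Since (A, C) is observable, a gain H making A − HC Hurwitz can be computed by pole placement on the dual pair (Aᵀ, Cᵀ); the values of b, c and the pole locations below are illustrative assumptions, not from the lecture.

```python
import numpy as np
from scipy.signal import place_poles

b, c = 1.0, 2.0  # assumed numerical values
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [ -b, 0.0,   b, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [  c, 0.0,  -c, 0.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])

# observer design is dual to state feedback: place eigenvalues of (A - H C)
H = place_poles(A.T, C.T, [-2, -3, -4, -5]).gain_matrix.T
print(np.sort(np.linalg.eigvals(A - H @ C).real))  # ≈ [-5, -4, -3, -2]
```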



Example 8.16 (Inverted pendulum)

ẋ1 = x2 , ẋ2 = a(sin x1 + u cos x1 ), y = x1

ẋ = Ax + ψ(u, y), y = Cx
   
0 1 0
A= , ψ=
0 0 a(sin y + u cos y)
 
C= 1 0



ẋ = f(x) + Σ_{i=1}^m gi(x) ui,   y = h(x)

Is there z = T(x) such that

ż = Ac z + φ(y) + Σ_{i=1}^m γi(y) ui,   y = Cc z

Ac = [ 0  1  0  · · ·  0
       0  0  1  · · ·  0
       ⋮               ⋮
       ⋮           0   1
       0  · · · · · · 0  0 ],   Cc = [ 1  0  · · ·  0  0 ]



ẋ = f(x),   y = h(x)

Φ(x) = col( h(x), Lf h(x), . . . , Lf^{n−1} h(x) ) = col( y, ẏ, . . . , y^(n−1) )

Φ̃(z) = Φ(x)|_{x=T^{−1}(z)} = col( z1, z2 + F1(z1), . . . , zn + F_{n−1}(z1, . . . , z_{n−1}) )



∂Φ̃/∂z = (∂Φ/∂x)(∂T^{−1}/∂z)

∂Φ̃/∂z = [ 1  0  · · ·  0
           ∗  1         ⋮
           ⋮      ⋱     0
           ∗  · · ·  ∗  1 ]   (lower triangular, unit diagonal)

∂Φ/∂x must be nonsingular



(∂Φ/∂x) τ = b,   b = col(0, · · · , 0, 1)

Lτ Lf^k h(x) = 0,  0 ≤ k ≤ n − 2,   Lτ Lf^{n−1} h(x) = 1

Equivalently

L_{ad_f^k τ} h(x) = 0,  0 ≤ k ≤ n − 2,   L_{ad_f^{n−1} τ} h(x) = (−1)^{n−1}

Define τk = (−1)^{n−k} ad_f^{n−k} τ,   1 ≤ k ≤ n

(∂T/∂x) [ τ1  τ2  · · ·  τn ] = I



(∂T/∂x) τk = ek def= col(0, . . . , 0, 1, 0, . . . , 0)   (1 in the kth row)

ek = (∂T/∂x)(−1)^{n−k} ad_f^{n−k} τ = (∂T/∂x)(−1)^{n−k} [f, ad_f^{n−k−1} τ]
   = (−1)^{n−k} [f̃(z), (−1)^{n−k−1} e_{k+1}] = (∂f̃/∂z) e_{k+1}



∂f̃/∂z = [ ∗  1  0  · · ·  0
           ∗  0  1  · · ·  0
           ⋮               ⋮
           ⋮           0   1
           ∗  · · · · · · 0  0 ]

By integration
f̃(z) = Ac z + φ(z1)



h̃(z) = h(T^{−1}(z)),   ∂h̃/∂z = (∂h/∂x)(∂T^{−1}/∂z)

∂T^{−1}/∂z = [ τ1  τ2  · · ·  τn ]|_{x=T^{−1}(z)}

∂h̃/∂z = [ (−1)^{n−1} L_{ad_f^{n−1} τ} h,  (−1)^{n−2} L_{ad_f^{n−2} τ} h,  · · · ,  Lτ h ]

∂h̃/∂z = [ 1, 0, · · · , 0 ] ⇒ h̃ = z1



Theorem 8.3
An n-dimensional single-output (SO) system

ẋ = f(x),   y = h(x)

is transformable into the observer form if and only if there is a domain D0 such that ∀ x ∈ D0

rank (∂Φ/∂x)(x) = n,   Φ = col( h, Lf h, · · · , Lf^{n−1} h )

and the unique vector field solution τ of

(∂Φ/∂x) τ = b,   b = col(0, · · · , 0, 1)

satisfies [ad_f^i τ, ad_f^j τ] = 0,  0 ≤ i, j ≤ n − 1



ẋ = f(x) + Σ_{i=1}^m gi(x) ui,   y = h(x)

When will g̃i(z) = (∂T/∂x) gi(x)|_{x=T^{−1}(z)} be independent of z2 to zn?

(∂T/∂x) [gi, ad_f^{n−k−1} τ] = [g̃i, (−1)^{n−k−1} e_{k+1}] = (−1)^{n−k} ∂g̃i/∂z_{k+1}

∂g̃i/∂z_{k+1} = 0 ⇔ [gi, ad_f^{n−k−1} τ] = 0



Corollary 8.1
Suppose the assumptions of Theorem 8.3 are satisfied. Then, the change of variables z = T(x) transforms the system into the observer form if and only if

[gi, ad_f^k τ] = 0,  for 0 ≤ k ≤ n − 2 and 1 ≤ i ≤ m

Moreover, if for some i the foregoing condition is strengthened to

[gi, ad_f^k τ] = 0,  for 0 ≤ k ≤ n − 1

then the vector field γi is constant



Example 8.17

ẋ = [ β1(x1) + x2 ] + [ b1 ] u,   y = x1
    [ f2(x)       ]   [ b2 ]

Φ(x) = [ h(x)    ] = [ x1          ]
       [ Lf h(x) ]   [ β1(x1) + x2 ]

∂Φ/∂x = [ 1         0
          ∂β1/∂x1   1 ];   rank (∂Φ/∂x)(x) = 2, ∀ x

(∂Φ/∂x) τ = [ 0 ]  ⇒  τ = [ 0 ]
            [ 1 ]         [ 1 ]


    
ad_f τ = [f, τ] = −(∂f/∂x) τ = −[ 1,  ∂f2/∂x2 ]^T

[τ, ad_f τ] = (∂(ad_f τ)/∂x) τ = −[ 0,  ∂^2 f2/∂x2^2 ]^T

[τ, ad_f τ] = 0 ⇔ ∂^2 f2/∂x2^2 = 0 ⇔ f2(x) = β2(x1) + x2 β3(x1)

[g, τ] = 0   (g and τ are constant vector fields)

[g, ad_f τ] = [ 0          0 ] [ b1 ] = 0  if ∂β3/∂x1 = 0 or b1 = 0
              [ −∂β3/∂x1   0 ] [ b2 ]



 
τ1 = (−1)^1 ad_f^1 τ = −ad_f τ = [ 1,  β3(x1) ]^T
τ2 = (−1)^0 ad_f^0 τ = τ = [ 0,  1 ]^T

(∂T/∂x) [ τ1  τ2 ] = I

[ ∂T1/∂x1  ∂T1/∂x2 ] [ 1        0 ]   [ 1  0 ]
[ ∂T2/∂x1  ∂T2/∂x2 ] [ β3(x1)   1 ] = [ 0  1 ]

∂T1/∂x2 = 0 and ∂T1/∂x1 = 1 ⇒ T1 = x1



∂T2/∂x2 = 1 and ∂T2/∂x1 + β3(x1) = 0

⇒ T2(x) = x2 − ∫_0^{x1} β3(σ) dσ

ż = Az + φ(y) + γ(y)u,   y = Cz

A = [ 0  1 ],   C = [ 1  0 ]
    [ 0  0 ]

φ = [ β1(y) + ∫_0^y β3(σ) dσ ],   γ = [ b1             ]
    [ β2(y) − β1(y) β3(y)    ]        [ b2 − b1 β3(y)  ]



Special Case: SISO system

ẋ = f(x) + g(x)u,   y = h(x)

Suppose the assumptions of Corollary 8.1 hold with

[g, ad_f^k τ] = 0,  for 0 ≤ k ≤ n − 1

z = T(x) → ż = Ac z + φ(y) + γ u,   y = Cc z

Rel deg = ρ ⇔ γ = [ 0, . . . , 0, γρ, . . . , γn ]^T,  γρ ≠ 0

Minimum Phase ⇔ γρ s^{n−ρ} + · · · + γ_{n−1} s + γn Hurwitz



Nonlinear Control
Lecture # 23
State Feedback Stabilization

Nonlinear Control Lecture # 23 State Feedback Stabilization


Basic Concepts
We want to stabilize the system

ẋ = f (x, u)

at the equilibrium point x = xss


Steady-State Problem: Find steady-state control uss s.t.

0 = f (xss , uss )

xδ = x − xss , uδ = u − uss
def
ẋδ = f (xss + xδ , uss + uδ ) = fδ (xδ , uδ )
fδ (0, 0) = 0

uδ = φ(xδ ) ⇒ u = uss + φ(x − xss )



State Feedback Stabilization: Given

ẋ = f (x, u) [f (0, 0) = 0]

find
u = φ(x) [φ(0) = 0]
s.t. the origin is an asymptotically stable equilibrium point of

ẋ = f (x, φ(x))

f and φ are locally Lipschitz functions



Notions of Stabilization

ẋ = f (x, u), u = φ(x)


Local Stabilization: The origin of ẋ = f (x, φ(x)) is
asymptotically stable (e.g., linearization)
Regional Stabilization: The origin of ẋ = f (x, φ(x)) is
asymptotically stable and a given region G is a subset of the
region of attraction (for all x(0) ∈ G, limt→∞ x(t) = 0) (e.g.,
G ⊂ Ωc = {V (x) ≤ c} where Ωc is an estimate of the region
of attraction)
Global Stabilization: The origin of ẋ = f (x, φ(x)) is globally
asymptotically stable



Semiglobal Stabilization: The origin of ẋ = f (x, φ(x)) is
asymptotically stable and φ(x) can be designed such that any
given compact set (no matter how large) can be included in
the region of attraction (Typically u = φp (x) is dependent on a
parameter p such that for any compact set G, p can be chosen
to ensure that G is a subset of the region of attraction )
What is the difference between global stabilization and
semiglobal stabilization?



Example 9.1

ẋ = x2 + u
Linearization:
ẋ = u, u = −kx, k > 0

Closed-loop system:
ẋ = −kx + x2

Linearization of the closed-loop system yields ẋ = −kx. Thus,


u = −kx achieves local stabilization
The region of attraction is {x < k}. Thus, for any set
{−a ≤ x ≤ b} with b < k, the control u = −kx achieves
regional stabilization



The control u = −kx does not achieve global stabilization
But it achieves semiglobal stabilization because any compact
set {|x| ≤ r} can be included in the region of attraction by
choosing k > r
The control
u = −x2 − kx
achieves global stabilization because it yields the linear
closed-loop system ẋ = −kx whose origin is globally
exponentially stable
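The difference is easy to see numerically (a sketch; the value of k and the initial conditions are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 2.0

# u = -k x: region of attraction of x_dot = -k x + x^2 is {x < k}
inside = solve_ivp(lambda t, x: -k*x + x**2, (0, 10), [1.9], rtol=1e-8).y[0, -1]
diverges = (-k*2.5 + 2.5**2) > 0        # starting above x = k, x_dot > 0

# u = -x^2 - k x: closed loop is x_dot = -k x, globally exponentially stable
far = solve_ivp(lambda t, x: -k*x, (0, 10), [100.0], rtol=1e-8).y[0, -1]
print(inside, diverges, far)
```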



Linear Systems
ẋ = Ax + Bu
(A, B) is stabilizable (controllable or every uncontrollable
eigenvalue has a negative real part)
Find K such that (A − BK) is Hurwitz

u = −Kx

Typical methods:
Eigenvalue Placement
Eigenvalue-Eigenvector Placement
LQR



Linearization
ẋ = f (x, u)
f(0, 0) = 0 and f is continuously differentiable in a domain Dx × Du that contains the origin (x = 0, u = 0) (Dx ⊂ R^n, Du ⊂ R^m)

ẋ = Ax + Bu

A = (∂f/∂x)(x, u)|_{x=0,u=0};   B = (∂f/∂u)(x, u)|_{x=0,u=0}

Assume (A, B) is stabilizable. Design a matrix K such that


(A − BK) is Hurwitz

u = −Kx



Closed-loop system:

ẋ = f (x, −Kx)

Linearization:

ẋ = [ (∂f/∂x)(x, −Kx) + (∂f/∂u)(x, −Kx)(−K) ]|_{x=0} x = (A − BK)x

Since (A − BK) is Hurwitz, the origin is an exponentially


stable equilibrium point of the closed-loop system



Example 9.2 (Pendulum Equation)

θ̈ = −sin θ − b θ̇ + c u
Stabilize the pendulum at θ = δ1

0 = −sin δ1 + c uss

x1 = θ − δ1,   x2 = θ̇,   uδ = u − uss

ẋ1 = x2
ẋ2 = −[sin(x1 + δ1) − sin δ1] − b x2 + c uδ

A = [ 0                 1   ]|       = [ 0         1  ]
    [ −cos(x1 + δ1)    −b   ]|x1=0     [ −cos δ1  −b  ]

B = [ 0,  c ]^T,   K = [ k1  k2 ]

A − BK = [ 0                    1          ]
         [ −(cos δ1 + c k1)    −(b + c k2) ]

k1 > −cos δ1 / c,   k2 > −b / c

u = sin δ1 / c − Kx = sin δ1 / c − k1 (θ − δ1) − k2 θ̇
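A quick simulation of this design (the values of b, c, δ1 and the gains below are illustrative assumptions satisfying the stability conditions):

```python
import numpy as np
from scipy.integrate import solve_ivp

b, c, delta1 = 0.1, 1.0, np.pi/4     # assumed parameters
k1, k2 = 2.0, 1.0                    # satisfy k1 > -cos(delta1)/c, k2 > -b/c

def pendulum(t, x):
    theta, omega = x
    u = np.sin(delta1)/c - k1*(theta - delta1) - k2*omega
    return [omega, -np.sin(theta) - b*omega + c*u]

sol = solve_ivp(pendulum, (0, 30), [0.0, 0.0], rtol=1e-9, atol=1e-9)
print(sol.y[0, -1])  # settles at theta = delta1 ≈ 0.785
```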
c c



Feedback Linearization
Consider the nonlinear system

ẋ = f (x) + G(x)u

f (0) = 0, x ∈ Rn , u ∈ Rm
Suppose there is a change of variables z = T (x), defined for
all x ∈ D ⊂ Rn , that transforms the system into the controller
form
ż = Az + B[ψ(x) + γ(x)u]
where (A, B) is controllable and γ(x) is nonsingular for all
x∈D

u = γ −1 (x)[−ψ(x) + v] ⇒ ż = Az + Bv



v = −Kz
Design K such that (A − BK) is Hurwitz
The origin z = 0 of the closed-loop system

ż = (A − BK)z

is globally exponentially stable

u = γ −1 (x)[−ψ(x) − KT (x)]

Closed-loop system in the x-coordinates:


def
ẋ = f (x) + G(x)γ −1 (x)[−ψ(x) − KT (x)] = fc (x)



What can we say about the stability of x = 0 as an equilibrium
point of ẋ = fc (x)?

z = T(x) ⇒ (∂T/∂x)(x) fc(x) = (A − BK) T(x)

(∂fc/∂x)(0) = J^{−1}(A − BK) J,   J = (∂T/∂x)(0)   (nonsingular)
The origin of ẋ = fc (x) is exponentially stable
Is x = 0 globally asymptotically stable? In general No
It is globally asymptotically stable if T (x) is a global
diffeomorphism



Estimate of the region of attraction: If T (x) is a
diffeomorphism on a domain D ⊂ Rn , the equation
ż = (A − BK)z is valid in the domain T (D)

P (A − BK) + (A − BK)T P = −Q, Q = QT > 0

Estimate in the z-coordinates:

Ωc = {z T P z ≤ c} ⊂ T (D)

Estimate in the x-coordinates:

T −1 (Ωc ) = {T T (x)P T (x) ≤ c}



Example 9.3 (Recall Example 8.12)

ẋ1 = a sin x2,   ẋ2 = −x1^2 + u

z = T(x) = [ x1       ]  ⇒  ż = [ z2                         ]
           [ a sin x2 ]         [ √(a^2 − z2^2) (−z1^2 + u)  ]

D = {|x2| < π/2},   T(D) = {|z2| < a}

K = [ σ^2  2σ ],  σ > 0  ⇒  λ(A − BK) = −σ, −σ

P = [ 3σ^2  σ ],   Q = [ 2σ^3  0  ],   c < min_{|z2|=a} z^T P z = 2a^2/3
    [ σ     1 ]        [ 0     2σ ]

T^{−1}(Ωc) = {3σ^2 x1^2 + 2σ a x1 sin x2 + a^2 sin^2 x2 ≤ c}
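The matrices P, Q and the bound c < 2a²/3 are easy to verify numerically (σ = a = 1 is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

sigma, a = 1.0, 1.0                                  # illustrative values
Acl = np.array([[0.0, 1.0], [-sigma**2, -2*sigma]])  # A - BK
Q = np.diag([2*sigma**3, 2*sigma])

# P Acl + Acl^T P = -Q  <=>  (Acl^T) P + P (Acl^T)^T = -Q
P = solve_continuous_lyapunov(Acl.T, -Q)
print(P)  # [[3 sigma^2, sigma], [sigma, 1]]

# smallest z'Pz on the boundary |z2| = a of T(D)
z1 = np.linspace(-5, 5, 100001)
vmin = (P[0, 0]*z1**2 + 2*P[0, 1]*z1*a + P[1, 1]*a**2).min()
print(vmin)  # ≈ 2/3 = 2 a^2 / 3
```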



Nonlinear Control
Lecture # 24
State Feedback Stabilization

Nonlinear Control Lecture # 24 State Feedback Stabilization


Feedback Linearization
What information do we need to implement the control

u = γ −1 (x)[−ψ(x) − KT (x)] ?

What is the effect of uncertainty in ψ, γ, and T ?


Let ψ̂(x), γ̂(x), and T̂ (x) be nominal models of ψ(x), γ(x),
and T (x)
u = γ̂ −1 (x)[−ψ̂(x) − K T̂ (x)]
Closed-loop system:

ż = (A − BK)z + B∆(z)



ż = (A − BK)z + B∆(z) (∗)

V (z) = z T P z, P (A − BK) + (A − BK)T P = −I

Lemma 9.1
Suppose (*) is defined in Dz ⊂ Rn
If k∆(z)k ≤ kkzk ∀ z ∈ Dz , k < 1/(2kP Bk), then the
origin of (*) is exponentially stable. It is globally
exponentially stable if Dz = Rn
If k∆(z)k ≤ kkzk + δ ∀ z ∈ Dz and Br ⊂ Dz , then there
exist positive constants c1 and c2 such that if δ < c1 r and
z(0) ∈ {z T P z ≤ λmin (P )r 2}, kz(t)k will be ultimately
bounded by δc2 . If Dz = Rn , kz(t)k will be globally
ultimately bounded by δc2 for any δ > 0



Example 9.4 (Pendulum Equation)

ẋ1 = x2,   ẋ2 = −sin(x1 + δ1) − b x2 + c u

u = (1/c)[sin(x1 + δ1) − (k1 x1 + k2 x2)]

A − BK = [ 0     1         ]
         [ −k1  −(k2 + b)  ]

With the nominal coefficient ĉ:

u = (1/ĉ)[sin(x1 + δ1) − (k1 x1 + k2 x2)]

ẋ1 = x2,   ẋ2 = −k1 x1 − (k2 + b) x2 + ∆(x)

∆(x) = ((c − ĉ)/ĉ)[sin(x1 + δ1) − (k1 x1 + k2 x2)]



|∆(x)| ≤ k‖x‖ + δ,  ∀ x

k = |(c − ĉ)/ĉ| √(1 + k1^2 + k2^2),   δ = |(c − ĉ)/ĉ| |sin δ1|

P(A − BK) + (A − BK)^T P = −I,   P = [ p11  p12 ]
                                     [ p12  p22 ]

k < 1/(2√(p12^2 + p22^2)) ⇒ GUB

sin δ1 = 0 & k < 1/(2√(p12^2 + p22^2)) ⇒ GES



b = 0,  ĉ = mo/m̂

k = ∆m √(1 + k1^2 + k2^2),   δ = ∆m |sin δ1|,   ∆m = (m̂ − m)/m

K = [ σ^2  2σ ],  σ > 0  ⇒  λ(A − BK) = −σ, −σ

k < 1/(2√(p12^2 + p22^2)) ⇔ ∆m < 2σ^3 / ( √(1 + σ^2(σ^2 + 4)) √(4σ^2 + (σ^2 + 1)^2) )

(Plot: the right-hand side as a function of σ ∈ [0, 10]; its maximum value is about 0.3951.)



Is feedback linearization a good idea?

Example 9.5

ẋ = a x − b x^3 + u,   a, b > 0

u = −(k + a)x + b x^3,  k > 0  ⇒  ẋ = −k x

−b x^3 is a damping term. Why cancel it?

u = −(k + a)x,  k > 0  ⇒  ẋ = −k x − b x^3

Which design is better?



Example 9.6

ẋ1 = x2,   ẋ2 = −h(x1) + u

h(0) = 0 and x1 h(x1) > 0, ∀ x1 ≠ 0
Feedback Linearization:

u = h(x1) − (k1 x1 + k2 x2)

With y = x2, the system is passive with

V = ∫_0^{x1} h(z) dz + (1/2) x2^2

V̇ = h(x1) ẋ1 + x2 ẋ2 = y u



The control

u = −σ(x2),   σ(0) = 0,   x2 σ(x2) > 0 ∀ x2 ≠ 0

creates a feedback connection of two passive systems with storage function V

V̇ = −x2 σ(x2)

x2(t) ≡ 0 ⇒ ẋ2(t) ≡ 0 ⇒ h(x1(t)) ≡ 0 ⇒ x1(t) ≡ 0

Asymptotic stability of the origin follows from the invariance principle
Which design is better?



The control u = −σ(x2) has two advantages:
It does not use a model of h
The flexibility in choosing σ can be used to reduce |u|

However, u = −σ(x2) cannot arbitrarily assign the rate of decay of x(t). Linearization of the closed-loop system at the origin yields the characteristic equation

s^2 + σ′(0) s + h′(0) = 0

One of the two roots cannot be moved to the left of Re[s] = −√(h′(0))



Partial Feedback Linearization
Consider the nonlinear system

ẋ = f (x) + G(x)u [f (0) = 0]

Suppose there is a change of variables

z = [ η ] = T(x) = [ T1(x) ]
    [ ξ ]          [ T2(x) ]

defined for all x ∈ D ⊂ R^n, that transforms the system into

η̇ = f0(η, ξ),   ξ̇ = Aξ + B[ψ(x) + γ(x)u]

(A, B) is controllable and γ(x) is nonsingular for all x ∈ D



u = γ −1 (x)[−ψ(x) + v]
η̇ = f0 (η, ξ), ξ˙ = Aξ + Bv

v = −Kξ, where (A − BK) is Hurwitz



Lemma 9.2
The origin of the cascade connection

η̇ = f0(η, ξ),   ξ̇ = (A − BK)ξ

is asymptotically (exponentially) stable if the origin of η̇ = f0(η, 0) is asymptotically (exponentially) stable

Proof
With b > 0 sufficiently small,

V(η, ξ) = b V1(η) + √(ξ^T P ξ)   (asymptotic)
V(η, ξ) = b V1(η) + ξ^T P ξ   (exponential)



If the origin of η̇ = f0 (η, 0) is globally asymptotically stable,
will the origin of

η̇ = f0 (η, ξ), ξ˙ = (A − BK)ξ

be globally asymptotically stable?


In general No
Example 9.7
The origin of η̇ = −η is globally exponentially stable, but
η̇ = −η + η 2 ξ, ξ˙ = −kξ, k > 0

has a finite region of attraction {ηξ < 1 + k}



Example 9.8

η̇ = −(1/2)(1 + ξ2) η^3,   ξ̇1 = ξ2,   ξ̇2 = v

The origin of η̇ = −(1/2) η^3 is globally asymptotically stable

v = −k^2 ξ1 − 2k ξ2 = −Kξ  ⇒  A − BK = [ 0     1   ]
                                        [ −k^2  −2k ]

The eigenvalues of (A − BK) are −k and −k

e^{(A−BK)t} = [ (1 + kt) e^{−kt}     t e^{−kt}        ]
              [ −k^2 t e^{−kt}       (1 − kt) e^{−kt} ]



Peaking Phenomenon:

max_t {k^2 t e^{−kt}} = k/e → ∞ as k → ∞

ξ1(0) = 1, ξ2(0) = 0 ⇒ ξ2(t) = −k^2 t e^{−kt}

η̇ = −(1/2)(1 − k^2 t e^{−kt}) η^3,   η(0) = η0

η^2(t) = η0^2 / (1 + η0^2 [t + (1 + kt) e^{−kt} − 1])

If η0^2 > 1, the system will have a finite escape time if k is chosen large enough
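Both the peak growth and the finite escape time are easy to check numerically (the values of k and η0 below are illustrative choices):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 100001)

# peak of |xi2(t)| = k^2 t e^{-kt} is k/e, attained at t = 1/k
peaks = [(k**2 * t * np.exp(-k*t)).max() for k in (1.0, 5.0, 25.0)]
print(peaks)  # grows like k/e: ≈ [0.368, 1.839, 9.197]

# denominator of eta^2(t) for eta0^2 = 4, k = 100: it crosses zero
k, eta0sq = 100.0, 4.0
denom = 1 + eta0sq*(t + (1 + k*t)*np.exp(-k*t) - 1)
print(denom.min() < 0)  # True -> finite escape time
```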



Lemma 9.3
The origin of

η̇ = f0 (η, ξ), ξ˙ = (A − BK)ξ

is globally asymptotically stable if the system η̇ = f0 (η, ξ) is


input-to-state stable

Proof
Apply Lemma 4.6



u = γ −1 (x)[−ψ(x) − KT2 (x)]
What is the effect of uncertainty in ψ, γ, and T2 ?
Let ψ̂(x), γ̂(x), and T̂2 (x) be nominal models of ψ(x), γ(x),
and T2 (x)
u = γ̂ −1 (x)[−ψ̂(x) − K T̂2 (x)]

η̇ = f0 (η, ξ), ξ˙ = (A − BK)ξ + B∆(z) (∗)


∆ = ψ + γγ̂ −1 [−ψ̂ − K T̂2 ] + KT2



η̇ = f0 (η, ξ), ξ˙ = (A − BK)ξ + B∆(z) (∗)

Lemma 9.4
If η̇ = f0 (η, ξ) is input-to-state stable and k∆(z)k ≤ δ for all
z, for some δ > 0, then the solution of (*) is globally
ultimately bounded by a class K function of δ

Lemma 9.5
If the origin of η̇ = f0 (η, 0) is exponentially stable, (*) is
defined in Dz ⊂ Rn , and k∆(z)k ≤ kkzk + δ ∀ z ∈ Dz , then
there exist a neighborhood Nz of z = 0 and positive constants
k ∗ , δ ∗ , and c such that for k < k ∗ , δ < δ ∗ , and z(0) ∈ Nz ,
kz(t)k will be ultimately bounded by cδ. If δ = 0, the origin of
(*) will be exponentially stable



Nonlinear Control
Lecture # 25
State Feedback Stabilization

Nonlinear Control Lecture # 25 State Feedback Stabilization


Backstepping

η̇ = fa(η) + ga(η)ξ
ξ̇ = fb(η, ξ) + gb(η, ξ)u,   gb ≠ 0,   η ∈ R^n,  ξ, u ∈ R

Stabilize the origin using state feedback

View ξ as a "virtual" control input to the system

η̇ = fa(η) + ga(η)ξ

Suppose there is ξ = φ(η) that stabilizes the origin of

η̇ = fa(η) + ga(η)φ(η)

(∂Va/∂η)[fa(η) + ga(η)φ(η)] ≤ −W(η)



z = ξ − φ(η)

η̇ = [fa(η) + ga(η)φ(η)] + ga(η) z
ż = F(η, ξ) + gb(η, ξ) u

V(η, ξ) = Va(η) + (1/2) z^2 = Va(η) + (1/2)[ξ − φ(η)]^2

V̇ = (∂Va/∂η)[fa(η) + ga(η)φ(η)] + (∂Va/∂η) ga(η) z + z F(η, ξ) + z gb(η, ξ) u
  ≤ −W(η) + z [ (∂Va/∂η) ga(η) + F(η, ξ) + gb(η, ξ) u ]

u = −(1/gb(η, ξ)) [ (∂Va/∂η) ga(η) + F(η, ξ) + k z ],   k > 0

V̇ ≤ −W(η) − k z^2



Example 9.9

ẋ1 = x1^2 − x1^3 + x2,   ẋ2 = u

ẋ1 = x1^2 − x1^3 + x2

x2 = φ(x1) = −x1^2 − x1  ⇒  ẋ1 = −x1 − x1^3

Va(x1) = (1/2) x1^2 ⇒ V̇a = −x1^2 − x1^4,  ∀ x1 ∈ R

z2 = x2 − φ(x1) = x2 + x1 + x1^2

ẋ1 = −x1 − x1^3 + z2
ż2 = u + (1 + 2x1)(−x1 − x1^3 + z2)

V(x) = (1/2) x1^2 + (1/2) z2^2

V̇ = x1(−x1 − x1^3 + z2) + z2 [u + (1 + 2x1)(−x1 − x1^3 + z2)]
  = −x1^2 − x1^4 + z2 [x1 + (1 + 2x1)(−x1 − x1^3 + z2) + u]

u = −x1 − (1 + 2x1)(−x1 − x1^3 + z2) − z2

V̇ = −x1^2 − x1^4 − z2^2

The origin is globally asymptotically stable
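Simulating the closed loop confirms the design (the initial condition is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

def closed_loop(t, s):
    # backstepping controller from Example 9.9
    x1, x2 = s
    z2 = x2 + x1 + x1**2
    u = -x1 - (1 + 2*x1)*(-x1 - x1**3 + z2) - z2
    return [x1**2 - x1**3 + x2, u]

sol = solve_ivp(closed_loop, (0, 20), [2.0, -3.0], rtol=1e-10, atol=1e-10)
print(np.abs(sol.y[:, -1]).max())  # ≈ 0: the origin attracts (2, -3)
```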



Example 9.10

ẋ1 = x1^2 − x1^3 + x2,   ẋ2 = x3,   ẋ3 = u

ẋ1 = x1^2 − x1^3 + x2,   ẋ2 = x3

x3 = −x1 − (1 + 2x1)(−x1 − x1^3 + z2) − z2 def= φ(x1, x2)

Va(x) = (1/2) x1^2 + (1/2) z2^2,   V̇a = −x1^2 − x1^4 − z2^2

z3 = x3 − φ(x1, x2)

ẋ1 = x1^2 − x1^3 + x2,   ẋ2 = φ(x1, x2) + z3

ż3 = u − (∂φ/∂x1)(x1^2 − x1^3 + x2) − (∂φ/∂x2)(φ + z3)

V = Va + (1/2) z3^2

V̇ = (∂Va/∂x1)(x1^2 − x1^3 + x2) + (∂Va/∂x2)(z3 + φ)
    + z3 [u − (∂φ/∂x1)(x1^2 − x1^3 + x2) − (∂φ/∂x2)(z3 + φ)]

V̇ = −x1^2 − x1^4 − (x2 + x1 + x1^2)^2
    + z3 [ (∂Va/∂x2) − (∂φ/∂x1)(x1^2 − x1^3 + x2) − (∂φ/∂x2)(z3 + φ) + u ]

u = −(∂Va/∂x2) + (∂φ/∂x1)(x1^2 − x1^3 + x2) + (∂φ/∂x2)(z3 + φ) − z3

The origin is globally asymptotically stable



Strict-Feedback Form

ẋ = f0 (x) + g0 (x)z1
ż1 = f1 (x, z1 ) + g1 (x, z1 )z2
ż2 = f2 (x, z1 , z2 ) + g2 (x, z1 , z2 )z3
..
.
żk−1 = fk−1 (x, z1 , . . . , zk−1 ) + gk−1(x, z1 , . . . , zk−1 )zk
żk = fk (x, z1 , . . . , zk ) + gk (x, z1 , . . . , zk )u

gi(x, z1, . . . , zi) ≠ 0 for 1 ≤ i ≤ k



Example 9.12

ẋ = −x + x^2 z,   ż = u

ẋ = −x + x^2 z

z = 0 ⇒ ẋ = −x,   Va = (1/2) x^2 ⇒ V̇a = −x^2

V = (1/2)(x^2 + z^2)

V̇ = x(−x + x^2 z) + z u = −x^2 + z(x^3 + u)

u = −x^3 − k z,  k > 0  ⇒  V̇ = −x^2 − k z^2

Global stabilization

Compare with semiglobal stabilization in Example 9.7
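Here V̇ = −x² − kz², so V decays exponentially from any initial condition; a quick check with k = 1 and a large initial state (arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0
# closed loop with u = -x^3 - k z
rhs = lambda t, s: [-s[0] + s[0]**2*s[1], -s[0]**3 - k*s[1]]
sol = solve_ivp(rhs, (0, 40), [5.0, 5.0], rtol=1e-10, atol=1e-10)
print(np.abs(sol.y[:, -1]).max())  # ≈ 0 even from the large initial state (5, 5)
```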



Example 9.13

ẋ = x^2 − x z,   ż = u

ẋ = x^2 − x z

z = x + x^2 ⇒ ẋ = −x^3,   V0(x) = (1/2) x^2 ⇒ V̇0 = −x^4

V = V0 + (1/2)(z − x − x^2)^2

V̇ = −x^4 + (z − x − x^2)[−x^2 + u − (1 + 2x)(x^2 − x z)]

u = (1 + 2x)(x^2 − x z) + x^2 − k(z − x − x^2),  k > 0

V̇ = −x^4 − k(z − x − x^2)^2   Global stabilization



Passivity-Based Control
ẋ = f (x, u), y = h(x), f (0, 0) = 0

u^T y ≥ V̇ = (∂V/∂x) f(x, u)

Theorem 9.1
If the system is
(1) passive with a radially unbounded positive definite
storage function and
(2) zero-state observable,
then the origin can be globally stabilized by

u = −φ(y) ,   φ(0) = 0 ,   y^T φ(y) > 0  ∀ y ≠ 0



Proof

V̇ = (∂V/∂x) f(x, −φ(y)) ≤ −y^T φ(y) ≤ 0

V̇ (x(t)) ≡ 0 ⇒ y(t) ≡ 0 ⇒ u(t) ≡ 0 ⇒ x(t) ≡ 0


Apply the invariance principle
A given system may be made passive by
(1) Choice of output,
(2) Feedback,
or both



Choice of Output
ẋ = f(x) + G(x)u ,   (∂V/∂x) f(x) ≤ 0 ,  ∀x

No output is defined. Choose the output as

y = h(x) := [(∂V/∂x) G(x)]^T

V̇ = (∂V/∂x) f(x) + (∂V/∂x) G(x)u ≤ y^T u
Check zero-state observability



Example 9.14

ẋ1 = x2 ,   ẋ2 = −x1^3 + u

V(x) = (1/4)x1^4 + (1/2)x2^2

With u = 0:   V̇ = x1^3 x2 − x2 x1^3 = 0

Take  y = (∂V/∂x) G = ∂V/∂x2 = x2

Is it zero-state observable?

with u = 0,  y(t) ≡ 0 ⇒ x(t) ≡ 0

u = −kx2   or   u = −(2k/π) tan^−1(x2)   (k > 0)
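The damping-injection control u = −kx2 can be checked in a simulation sketch; convergence is asymptotic but slow near the origin (the closed-loop linearization has a zero eigenvalue), so the tolerances below are loose. Step size, horizon, and initial state are hypothetical:

```python
def simulate(x1=1.0, x2=0.0, k=1.0, dt=1e-3, T=100.0):
    # Euler simulation of x1' = x2, x2' = -x1^3 + u with u = -k*x2;
    # asymptotic stability follows from the invariance principle
    for _ in range(int(T / dt)):
        x1, x2 = x1 + dt*x2, x2 + dt*(-x1**3 - k*x2)
    return x1, x2

x1f, x2f = simulate()
```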



Feedback Passivation
Definition
The system

ẋ = f(x) + G(x)u ,   y = h(x)        (∗)

is equivalent to a passive system if ∃ u = α(x) + β(x)v such that

ẋ = f(x) + G(x)α(x) + G(x)β(x)v ,   y = h(x)

is passive

Theorem [20]
The system (*) is locally equivalent to a passive system (with
a positive definite storage function) if it has relative degree
one at x = 0 and the zero dynamics have a stable equilibrium
point at the origin with a positive definite Lyapunov function



Example 9.15 (m-link Robot Manipulator)

M(q)q̈ + C(q, q̇)q̇ + D q̇ + g(q) = u

M = M^T > 0 ,   (Ṁ − 2C)^T = −(Ṁ − 2C) ,   D = D^T ≥ 0


Stabilize the system at q = qr
e = q − qr , ė = q̇

M(q)ë + C(q, q̇)ė + D ė + g(q) = u

(e = 0, ė = 0) is not an open-loop equilibrium point

u = g(q) − Kp e + v ,   (Kp = Kp^T > 0)

M(q)ë + C(q, q̇)ė + D ė + Kp e = v



M(q)ë + C(q, q̇)ė + D ė + Kp e = v

V = (1/2)ė^T M(q)ė + (1/2)e^T Kp e

V̇ = (1/2)ė^T(Ṁ − 2C)ė − ė^T D ė − ė^T Kp e + ė^T v + e^T Kp ė ≤ ė^T v

y = ė
Is it zero-state observable? Set v = 0
ė(t) ≡ 0 ⇒ ë(t) ≡ 0 ⇒ Kp e(t) ≡ 0 ⇒ e(t) ≡ 0

v = −φ(ė) ,   [φ(0) = 0 ,  ė^T φ(ė) > 0 ,  ∀ ė ≠ 0]

u = g(q) − Kp e − φ(ė)

Special case:  u = g(q) − Kp e − Kd ė ,   Kd = Kd^T > 0



Nonlinear Control
Lecture # 26
State Feedback Stabilization



Passivity-Based Control: Cascade Connection

ẋ = fa (x) + F (x, y)y, ż = f (z) + G(z)u, y = h(z)

fa (0) = 0, f (0) = 0, h(0) = 0

(∂V/∂z)[f(z) + G(z)u] ≤ y^T u ,    (∂W/∂x) fa(x) ≤ 0

U(x, z) = W(x) + V(z)

U̇ ≤ (∂W/∂x) F(x, y)y + y^T u = y^T [u + ((∂W/∂x) F(x, y))^T]

u = −((∂W/∂x) F(x, y))^T + v   ⇒   U̇ ≤ y^T v



The system

ẋ = fa (x) + F (x, y)y


ż = f(z) − G(z)[(∂W/∂x) F(x, y)]^T + G(z)v
y = h(z)

with input v and output y is passive with storage function U

v = −φ(y) ,   [φ(0) = 0 ,  y^T φ(y) > 0  ∀ y ≠ 0]

U̇ ≤ (∂W/∂x) fa(x) − y^T φ(y) ≤ 0 ,   U̇ = 0 ⇒ x = 0 & y = 0 ⇒ u = 0

ZSO of driving system:  U̇(t) ≡ 0 ⇒ z(t) ≡ 0



Theorem 9.2
Suppose
the system

ż = f (z) + G(z)u, y = h(z)

is zero-state observable and passive with a radially
unbounded, positive definite storage function;
the origin of ẋ = fa(x) is globally asymptotically stable
and W(x) is a radially unbounded, positive definite
Lyapunov function

Then,  u = −[(∂W/∂x) F(x, y)]^T − φ(y),
globally stabilizes the origin (x = 0, z = 0)



Example 9.16 (see Examples 9.7 and 9.12)

ẋ = −x + x^2 z ,   ż = u
With y = z as the output, the system takes the form of the
cascade connection

ż = u ,   y = z

is passive with V(z) = (1/2)z^2 and zero-state observable

ẋ = −x ,   W(x) = (1/2)x^2 ⇒ Ẇ = −x^2

u = −x^3 − kz ,   k > 0



Control Lyapunov Functions
ẋ = f(x) + g(x)u ,   f(0) = 0 ,   x ∈ R^n ,   u ∈ R
Suppose there is a continuous stabilizing state feedback
control u = χ(x) such that the origin of

ẋ = f (x) + g(x)χ(x)

is asymptotically stable
By the converse Lyapunov theorem, there is V (x) such that

(∂V/∂x)[f(x) + g(x)χ(x)] < 0 ,   ∀ x ∈ D ,  x ≠ 0

If u = χ(x) is globally stabilizing, then D = R^n and V(x) is
radially unbounded
(∂V/∂x)[f(x) + g(x)χ(x)] < 0 ,   ∀ x ∈ D ,  x ≠ 0

(∂V/∂x) g(x) = 0 for x ∈ D, x ≠ 0   ⇒   (∂V/∂x) f(x) < 0

Definition
A continuously differentiable positive definite function V(x) is
a Control Lyapunov Function (CLF) for the system
ẋ = f(x) + g(x)u if

(∂V/∂x) g(x) = 0 for x ∈ D, x ≠ 0  ⇒  (∂V/∂x) f(x) < 0    (∗)

It is a Global Control Lyapunov Function if it is radially
unbounded and (∗) holds with D = R^n



The system ẋ = f (x) + g(x)u is stabilizable by a state
feedback control only if it has a CLF
Is it sufficient? Yes
Sontag’s Formula:

φ(x) = −[ (∂V/∂x)f + √( ((∂V/∂x)f)^2 + ((∂V/∂x)g)^4 ) ] / ((∂V/∂x)g) ,
           if (∂V/∂x)g ≠ 0

φ(x) = 0 ,   if (∂V/∂x)g = 0



ẋ = f (x) + g(x)φ(x)

V̇ = (∂V/∂x)[f(x) + g(x)φ(x)]

If x ≠ 0 and (∂V/∂x)g(x) = 0 ,   V̇ = (∂V/∂x)f(x) < 0

If x ≠ 0 and (∂V/∂x)g(x) ≠ 0:

V̇ = (∂V/∂x)f − [ (∂V/∂x)f + √( ((∂V/∂x)f)^2 + ((∂V/∂x)g)^4 ) ]

  = −√( ((∂V/∂x)f)^2 + ((∂V/∂x)g)^4 ) < 0



Lemma 9.6
If f(x), g(x) and V(x) are smooth then φ(x) will be smooth
for x ≠ 0. If they are of class C^(ℓ+1) for ℓ ≥ 1, then φ(x) will be
of class C^ℓ. Continuity at x = 0:
φ(x) is continuous at x = 0 if V(x) has the small control
property; namely, given any ε > 0 there is δ > 0 such that if
x ≠ 0 and ‖x‖ < δ, then there is u with ‖u‖ < ε such
that (∂V/∂x)[f(x) + g(x)u] < 0
φ(x) is locally Lipschitz at x = 0 if there is a locally
Lipschitz function χ(x), with χ(0) = 0, such that
(∂V/∂x)[f(x) + g(x)χ(x)] < 0 ,  for x ≠ 0



How can we find a CLF?
If we know of any stabilizing control with a corresponding
Lyapunov function V , then V is a CLF
Feedback Linearization

ẋ = f (x) + G(x)u, z = T (x), ż = (A − BK)z

P(A − BK) + (A − BK)^T P = −Q ,   Q = Q^T > 0

V = z^T P z = T^T(x) P T(x)  is a CLF
Backstepping



Example 9.17

ẋ = x − x^3 + u
Feedback Linearization:

u = χ(x) = −x + x^3 − αx   (α > 0)

ẋ = −αx

V(x) = (1/2)x^2 is a CLF

(∂V/∂x) g = x ,   (∂V/∂x) f = x(x − x^3)
∂x ∂x



−[ (∂V/∂x)f + √( ((∂V/∂x)f)^2 + ((∂V/∂x)g)^4 ) ] / ((∂V/∂x)g)

  = −[ x(x − x^3) + √( x^2 (x − x^3)^2 + x^4 ) ] / x

  = −x + x^3 − x √( (1 − x^2)^2 + 1 )

φ(x) = −x + x^3 − x √( (1 − x^2)^2 + 1 )
Compare with
χ(x) = −x + x^3 − αx
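Sontag's formula and the closed-form expression above can be cross-checked numerically; this sketch (not from the slides) evaluates both for the scalar example:

```python
import math

def sontag(LfV, LgV):
    # Sontag's formula for single-input systems
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

def phi(x):
    # xdot = x - x^3 + u with the CLF V = x^2/2:
    # LfV = x*(x - x^3), LgV = x
    return sontag(x*(x - x**3), x)

def phi_closed(x):
    # closed form derived on this slide
    return -x + x**3 - x*math.sqrt((1 - x**2)**2 + 1)
```

Both expressions agree for positive and negative x, and φ(0) = 0 by the second branch of the formula.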



[Figure: the feedback-linearizing control χ(x) (FL) and Sontag's control φ(x) (CLF) plotted versus x, with α = 2]



Robustness Property

Lemma 9.7
Suppose f , g, and V satisfy the conditions of Lemma 9.6 and
φ is given by Sontag’s formula. Then, the origin of
ẋ = f(x) + g(x)kφ(x) is asymptotically stable for all k ≥ 1/2. If
V is a global control Lyapunov function, then the origin is
globally asymptotically stable



Proof
Let

q(x) = (1/2)[ −(∂V/∂x)f + √( ((∂V/∂x)f)^2 + ((∂V/∂x)g)^4 ) ]

Because V(x) is positive definite and smooth,

(∂V/∂x)(0) = 0 ⇒ q(0) = 0

For x ≠ 0

(∂V/∂x)g ≠ 0 ⇒ q > 0   &   (∂V/∂x)g = 0 ⇒ q = −(∂V/∂x)f > 0
q(x) is positive definite



u = kφ(x) ⇒ ẋ = f (x) + g(x)kφ(x)

V̇ = (∂V/∂x)f + (∂V/∂x)g kφ

For x ≠ 0 ,  (∂V/∂x)g = 0 ⇒ V̇ = (∂V/∂x)f < 0

For (∂V/∂x)g ≠ 0:

V̇ = −q + q + (∂V/∂x)f + (∂V/∂x)g kφ

q + (∂V/∂x)f + (∂V/∂x)g kφ
  = −(k − 1/2)[ (∂V/∂x)f + √( ((∂V/∂x)f)^2 + ((∂V/∂x)g)^4 ) ] ≤ 0



Example 9.18
Reconsider ẋ = x − x3 + u. Compare u = χ(x) with u = φ(x)
By Lemma 9.7 the origin of ẋ = x − x^3 + kφ(x) is globally
asymptotically stable for all k ≥ 1/2

ẋ = x − x^3 + kχ(x) = −[k(1 + α) − 1]x + (k − 1)x^3

The origin is not globally asymptotically stable for any k > 1

It is exponentially stable for k > 1/(1 + α)

Region of attraction:

{ |x| < √( 1 + kα/(k − 1) ) }  →  { |x| < √(1 + α) }  as k → ∞



Nonlinear Control
Lecture # 27
Robust State Feedback Stabilization



Definition 10.1
The system
ẋ = f (x, u) + δ(t, x, u)
is said to be practically stabilizable by the feedback control
u = φ(x) if for every b > 0, the control can be designed such
that the solutions are ultimately bounded by b; that is

‖x(t)‖ ≤ b ,   ∀ t ≥ T ,  for some T > 0

local practical stabilization


regional practical stabilization
global practical stabilization
semiglobal practical stabilization



Sliding Mode Control
Example 10.1

ẋ1 = x2 ,   ẋ2 = h(x) + g(x)u ,   g(x) ≥ g0 > 0


Sliding Manifold (Surface):

s = ax1 + x2 = 0

s(t) ≡ 0 ⇒ ẋ1 = −ax1

a > 0  ⇒  lim x1(t) = 0  as t → ∞

How can we bring the trajectory to the manifold s = 0?


How can we maintain it there?



ṡ = aẋ1 + ẋ2 = ax2 + h(x) + g(x)u
Suppose
| (ax2 + h(x)) / g(x) | ≤ ̺(x)

V = (1/2)s^2

V̇ = sṡ = s[ax2 + h(x)] + g(x)su ≤ g(x)|s|̺(x) + g(x)su

β(x) ≥ ̺(x) + β0 , β0 > 0

s > 0, u = −β(x)

V̇ ≤ g(x)|s|̺(x) − g(x)β(x)|s| ≤ −g(x)β0 |s|



s < 0, u = β(x)

V̇ ≤ g(x)|s|̺(x) + g(x)su = g(x)|s|̺(x) − g(x)β(x)|s|

V̇ ≤ g(x)|s|̺(x) − g(x)(̺(x) + β0 )|s| = −g(x)β0 |s|



sgn(s) = 1 if s > 0 ,   −1 if s < 0

u = −β(x) sgn(s)  ⇒  V̇ ≤ −g(x)β0 |s| ≤ −g0 β0 |s|

V̇ ≤ −g0 β0 √(2V)

dV/√V ≤ −g0 β0 √2 dt   ⇒   √V(s(t)) ≤ √V(s(0)) − g0 β0 t/√2



|s(t)| ≤ |s(0)| − g0 β0 t

s(t) reaches zero in finite time

The trajectory cannot leave the surface s = 0




Pendulum Equation

θ̈ + sin θ + bθ̇ = cu
0 ≤ b ≤ 0.2, 0.5 ≤ c ≤ 2
Stabilize the pendulum at θ = π/2
x1 = θ − π/2 ,   x2 = θ̇   ⇒   ẋ1 = x2 ,   ẋ2 = −cos x1 − bx2 + cu

s = x1 + x2 ,   ṡ = x2 − cos x1 − bx2 + cu

| (x2 − cos x1 − bx2) / c | = | ((1 − b)x2 − cos x1) / c | ≤ 2(|x2| + 1)
u = −(2.5 + 2|x2 |) sgn(s)
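A discrete-time approximation of this sliding mode controller can be sketched as below (Euler integration; sgn is applied once per step, so chattering at the step-size scale is expected; plant parameters and initial state match the simulation described on the next slide):

```python
import math

def simulate(theta=0.0, omega=0.0, b=0.01, c=0.5, dt=1e-4, T=15.0):
    # Pendulum theta'' + sin(theta) + b*theta' = c*u, stabilized at pi/2
    # with x1 = theta - pi/2, x2 = theta', s = x1 + x2, and
    # u = -(2.5 + 2|x2|) sgn(s)
    for _ in range(int(T / dt)):
        x1, x2 = theta - math.pi/2, omega
        s = x1 + x2
        sgn = 1.0 if s > 0 else (-1.0 if s < 0 else 0.0)
        u = -(2.5 + 2*abs(x2)) * sgn
        theta, omega = theta + dt*omega, omega + dt*(-math.sin(theta) - b*omega + c*u)
    return theta, omega

thetaf, omegaf = simulate()
```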



[Figure: θ, s, u, and filtered u versus time for b = 0.01, c = 0.5, θ(0) = θ̇(0) = 0]



Estimate the region of attraction when

| (ax2 + h(x)) / g(x) | ≤ ̺(x) ,   ∀ x ∈ D ⊂ R^2

ẋ1 = −ax1 + s ,   ṡ = ax2 + h(x) − g(x)β(x) sgn(s)

sṡ ≤ −g0 β0 |s|  ⇒  {|s| ≤ c} is positively invariant

V0 = (1/2)x1^2  ⇒  V̇0 = x1 ẋ1 = −ax1^2 + x1 s ≤ −ax1^2 + |x1|c

V̇0 ≤ 0 ,   ∀ |s| ≤ c and |x1| ≥ c/a

Ω = { |x1| ≤ c/a ,  |s| ≤ c } ⊂ D is positively invariant



Ω = { |x1| ≤ c/a ,  |s| ≤ c }

[Figure: the set Ω in the (x1, x2)-plane, bounded by the lines |s| = c (parallel to s = 0) and |x1| = c/a]

| (ax2 + h(x)) / g(x) | ≤ k1 < k ,   ∀ x ∈ Ω

u = −k sgn(s)



Chattering

[Figure: chattering — a trajectory zigzagging back and forth across the sliding manifold s = 0 due to delays in control switching]

How can we reduce or eliminate chattering?



Reduce the amplitude of the signum function

ṡ = ax2 + h(x) + g(x)u

u = −[ax2 + ĥ(x)] / ĝ(x) + v

ṡ = δ(x) + g(x)v

δ(x) = a[1 − g(x)/ĝ(x)] x2 + h(x) − [g(x)/ĝ(x)] ĥ(x)

| δ(x)/g(x) | ≤ ̺(x) ,   β(x) ≥ ̺(x) + β0

v = −β(x) sgn(s)



Back to the pendulum equation

ẋ1 = x2 , ẋ2 = − cos x1 − bx2 + cu, b̂ = 0, ĉ

u = (cos x1 − x2)/ĉ + v   ⇒   ṡ = δ + cv

δ/c = [ (1 − b)/c − 1/ĉ ] x2 − [ 1/c − 1/ĉ ] cos x1

Take ĉ = 1/1.2 to minimize | (1 − b)/c − 1/ĉ |

| δ/c | ≤ 0.8|x2| + 0.8

u = 1.2 cos x1 − 1.2x2 − (1 + 0.8|x2 |) sgn(s)



Simulation with unmodeled actuator dynamics 1/(0.01s + 1)^2

Dashed lines:
u = −(2.5 + 2|x2|) sgn(s)
Solid lines:

u = 1.2 cos x1 − 1.2x2 − (1 + 0.8|x2|) sgn(s)



[Figure: θ and s versus time, with close-ups of θ near π/2, for b = 0.01, c = 0.5, θ(0) = θ̇(0) = 0]



Replace the signum function by a high-slope saturation
function

u = −β(x) sat(s/µ)

sat(y) = y if |y| ≤ 1 ,   sgn(y) if |y| > 1

[Figure: graphs of sgn(y) and sat(y/µ)]



How can we analyze the system?

For |s| ≥ µ ,   u = −β(x) sgn(s)

With c ≥ µ

Ω = { |x1| ≤ c/a ,  |s| ≤ c } is positively invariant
The trajectory reaches the boundary layer {|s| ≤ µ} in
finite time
The boundary layer is positively invariant



Inside the boundary layer:

ẋ1 = −ax1 + s ,   ṡ = ax2 + h(x) − g(x)β(x) s/µ

x1 ẋ1 ≤ −ax1^2 + |x1|µ

x1 ẋ1 ≤ −(1 − θ1)a x1^2 ,   ∀ |x1| ≥ µ/(θ1 a) ,   0 < θ1 < 1
The trajectories reach the positively invariant set

Ωµ = { |x1| ≤ µ/(θ1 a) ,  |s| ≤ µ }

in finite time
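Applied to the pendulum example above (a = 1), this predicts |x1| ultimately of order µ/θ1. A simulation sketch with a hypothetical µ = 0.1:

```python
import math

def simulate(theta=0.0, omega=0.0, b=0.01, c=0.5, mu=0.1, dt=1e-4, T=15.0):
    # Pendulum stabilized at pi/2 as before, with sgn replaced by sat(s/mu)
    for _ in range(int(T / dt)):
        x1, x2 = theta - math.pi/2, omega
        s = x1 + x2
        u = -(2.5 + 2*abs(x2)) * max(-1.0, min(1.0, s/mu))
        theta, omega = theta + dt*omega, omega + dt*(-math.sin(theta) - b*omega + c*u)
    return theta - math.pi/2, omega

x1f, x2f = simulate()
```

The state settles inside the boundary layer, near a small residual offset in x1 rather than at the origin.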



[Figure: θ versus time and close-ups of s for µ = 0.1 and µ = 0.001; (a) & (b) without and (c) & (d) with actuator dynamics]



Special case h(0) = 0
Inside the boundary layer, the system

ẋ1 = x2 , ẋ2 = h(x) − [g(x)β(x)/µ] (ax1 + x2 )

has an equilibrium point at the origin. We can stabilize the
origin by choosing µ small enough
Pendulum equation: Stabilize the pendulum at θ = π

x1 = θ − π, x2 = θ̇, s = x1 + x2

ṡ = x2 + sin x1 − bx2 + cu



0 ≤ b ≤ 0.2, 0.5 ≤ c ≤ 2
| ((1 − b)x2 + sin x1) / c | ≤ 2(|x1| + |x2|)

u = −2(|x1| + |x2| + 1) sat(s/µ)
Inside the boundary layer

ẋ1 = −x1 + s ,   ṡ = (1 − b)x2 + sin x1 − 2c(|x1| + |x2| + 1) s/µ

V1 = (1/2)x1^2 + (1/2)s^2

V̇1 ≤ − [ |x1|  |s| ] [ 1         −3/2      ] [ |x1| ]
                      [ −3/2   (1/µ − 1) ] [ |s|  ]



Nonlinear Control
Lecture # 28
Robust State Feedback Stabilization



Sliding Mode Control
ẋ = f (x) + B(x)[G(x)u + δ(t, x, u)]
x ∈ R^n, u ∈ R^m, f and B are known, while G and δ could be
uncertain, f(0) = 0, G(x) is a positive definite symmetric
matrix with
λmin(G(x)) ≥ λ0 > 0
Regular Form:

[η; ξ] = T(x) ,   (∂T/∂x) B(x) = [0; I]

η̇ = fa(η, ξ) ,   ξ̇ = fb(η, ξ) + G(x)u + δ(t, x, u)



η̇ = fa(η, ξ) ,   ξ̇ = fb(η, ξ) + G(x)u + δ(t, x, u)
Sliding Manifold:

s = ξ − φ(η) = 0, φ(0) = 0

s(t) ≡ 0 ⇒ η̇ = fa (η, φ(η))


Design φ s.t. the origin of η̇ = fa (η, φ(η)) is asymp. stable



ṡ = fb(η, ξ) − (∂φ/∂η) fa(η, ξ) + G(x)u + δ(t, x, u)

u = ψ(η, ξ) + v
Typical choices of ψ:

ψ = 0 ,   ψ = −Ĝ^−1 [fb − (∂φ/∂η) fa]

ṡ = G(x)v + ∆(t, x, v)

‖∆(t, x, v)‖ / λmin(G(x)) ≤ ̺(x) + κ0 ‖v‖ ,   ∀ (t, x, v) ∈ [0, ∞) × D × R^m

̺(x) ≥ 0 ,   0 ≤ κ0 < 1   (Known)



V = (1/2) s^T s  ⇒  V̇ = s^T ṡ = s^T G(x)v + s^T ∆(t, x, v)

v = −β(x) s/‖s‖ ,   β(x) ≥ ̺(x)/(1 − κ0) + β0 ,   β0 > 0

V̇ = −β(x) s^T G(x) s/‖s‖ + s^T ∆(t, x, v)
  ≤ λmin(G(x)) [−β(x) + ̺(x) + κ0 β(x)] ‖s‖
  = λmin(G(x)) [−(1 − κ0)β(x) + ̺(x)] ‖s‖
  ≤ −λmin(G(x)) β0 (1 − κ0) ‖s‖
  ≤ −λ0 β0 (1 − κ0) ‖s‖ = −λ0 β0 (1 − κ0) √(2V)

Trajectories reach the manifold s = 0 in finite time and cannot
leave it



Continuous Implementation

Sat(y) = y if ‖y‖ ≤ 1 ,   y/‖y‖ if ‖y‖ > 1

v = −β(x) Sat(s/µ)

‖s‖ ≥ µ  ⇒  Sat(s/µ) = s/‖s‖  ⇒  s^T ṡ ≤ −λ0 β0 (1 − κ0) ‖s‖

Trajectories reach the boundary layer {‖s‖ ≤ µ} in finite time
and remain inside thereafter

Study the behavior of η:   η̇ = fa(η, φ(η) + s)



α1(‖η‖) ≤ V0(η) ≤ α2(‖η‖)

(∂V0/∂η) fa(η, φ(η) + s) ≤ −α3(‖η‖) ,   ∀ ‖η‖ ≥ α4(‖s‖)

‖s‖ ≤ c  ⇒  V̇0 ≤ −α3(‖η‖) ,  for ‖η‖ ≥ α4(c)

α(r) = α2(α4(r))

V0(η) ≥ α(c) ⇔ V0(η) ≥ α2(α4(c)) ⇒ α2(‖η‖) ≥ α2(α4(c))
  ⇒ ‖η‖ ≥ α4(c)
  ⇒ V̇0 ≤ −α3(‖η‖) ≤ −α3(α4(c))

Ω = {V0(η) ≤ c0} × {‖s‖ ≤ c} ,   c0 ≥ α(c) ,   Ω ⊂ T(D)



V0(η) ≥ α(µ)  ⇒  V̇0 ≤ −α3(α4(µ))
⇒ Ωµ = {V0(η) ≤ α(µ)} × {‖s‖ ≤ µ} is positively invariant
In summary, all trajectories starting in Ω remain in Ω and
reach Ωµ in finite time and remain inside thereafter

[Figure: sketch of V0 versus |s| showing the function α(·) and the levels c0 ≥ α(c) and α(µ)]



Theorem 10.1
Suppose all the assumptions hold over Ω. Then, for all
(η(0), ξ(0)) ∈ Ω, the trajectory (η(t), ξ(t)) is bounded for all
t ≥ 0 and reaches the positively invariant set Ωµ in finite time.
If the assumptions hold globally and V0(η) is radially
unbounded, the foregoing conclusion holds for any initial state



Example 10.2 (Magnetic levitation - friction neglected)

ẋ1 = x2 ,   ẋ2 = 1 + (mo/m) u ,   x1 ≥ 0 ,   −2 ≤ u ≤ 0

We want to stabilize the system at x1 = 1. Nominal
steady-state control is uss = −1

Shift the equilibrium point to the origin: x1 → x1 − 1, u → u + 1

ẋ1 = x2 ,   ẋ2 = (m − mo)/m + (mo/m) u

x1 ≥ −1 ,   |u| ≤ 1

Assume  | (m − mo)/mo | ≤ 1/3



s = x1 + x2  ⇒  ẋ1 = −x1 + s

V0 = (1/2)x1^2

V̇0 = −x1^2 + x1 s ≤ −(1 − θ)x1^2 ,   ∀ |x1| ≥ |s|/θ ,   0 < θ < 1

α1(r) = α2(r) = (1/2)r^2 ,   α3(r) = (1 − θ)r^2 ,   α4(r) = r/θ

α(r) = α2(α4(r)) = (1/2)(r/θ)^2

With c0 = α(c) ,   Ω = {|x1| ≤ c/θ} × {|s| ≤ c}
Ωµ = {|x1| ≤ µ/θ} × {|s| ≤ µ}



Ω = {|x1| ≤ c/θ} × {|s| ≤ c}
Take c ≤ θ to meet the constraint x1 ≥ −1

ṡ = x2 + (m − mo)/m + (mo/m) u

| (x2 + (m − mo)/m) / (mo/m) | = | (m/mo) x2 + (m − mo)/mo | ≤ (1/3)(4|x2| + 1)

In Ω ,   |x2| ≤ |x1| + |s| ≤ c(1 + 1/θ)

with 1/θ = 1.1 ,   | (m/mo) x2 + (m − mo)/mo | ≤ (8.4c + 1)/3



To meet the constraint |u| ≤ 1 limit c to

(8.4c + 1)/3 < 1  ⇔  c < 0.238   and take   u = −sat(s/µ)

With c = 0.23, Theorem 10.1 ensures that all trajectories
starting in Ω stay in Ω and enter Ωµ in finite time

Inside Ωµ ,   |x1| ≤ µ/θ = 1.1µ

µ can be chosen small enough to meet any specified ultimate
bound on x1

For |x1 | ≤ 0.01, take µ = 0.01/1.1 ≈ 0.009
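A simulation sketch of this design (Euler integration; the realization m = 1.2, mo = 1 is a hypothetical choice within the assumed range, and the initial state is taken inside Ω):

```python
def simulate(x1=0.1, x2=0.1, m=1.2, mo=1.0, mu=0.009, dt=1e-4, T=20.0):
    # x1' = x2, x2' = (m - mo)/m + (mo/m)*u, with u = -sat(s/mu),
    # s = x1 + x2; note (m - mo)/mo = 0.2 <= 1/3 and |u| <= 1 by sat
    for _ in range(int(T / dt)):
        s = x1 + x2
        u = -max(-1.0, min(1.0, s/mu))
        x1, x2 = x1 + dt*x2, x2 + dt*((m - mo)/m + (mo/m)*u)
    return x1, x2

x1f, x2f = simulate()
```

The later analysis in the slides predicts x1 → µ(m − mo)/mo, well inside the |x1| ≤ 0.01 target.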



With further analysis inside Ωµ we can derive a less
conservative estimate of the ultimate bound of |x1|. In Ωµ, the
closed-loop system is represented by

ẋ1 = x2 ,   ẋ2 = (m − mo)/m − mo(x1 + x2)/(mµ)

which has a unique equilibrium point at

( x1 = µ(m − mo)/mo ,  x2 = 0 )

and its state matrix is Hurwitz

lim x1(t) = µ(m − mo)/mo ,   lim x2(t) = 0   as t → ∞



| (m − mo)/mo | ≤ 1/3  ⇒  |x1| ≤ 0.34µ

For |x1| ≤ 0.01, take µ = 0.029
We can also obtain a less conservative estimate of the region
of attraction

V1 = (1/2)(x1^2 + s^2)

V̇1 ≤ −x1^2 + s^2 − (mo/m)[1 − (m − mo)/mo] |s| ≤ −x1^2 + s^2 − (1/2)|s|

for |s| ≥ µ



V̇1 ≤ −x1^2 + s^2 + |(m − mo)/m| |s| − (mo/m) s^2/µ ≤ −x1^2 + s^2 + (1/2)|s| − 3s^2/(4µ)

for |s| ≤ µ
With µ = 0.029, it can be verified that V̇1 is less than a
negative number in the set {0.0012 ≤ V1 ≤ 0.12}. Therefore,
all trajectories starting in Ω1 = {V1 ≤ 0.12} enter
Ω2 = {V1 ≤ 0.0012} in finite time. Since Ω2 ⊂ Ω, our earlier
analysis holds and the ultimate bound of |x1 | is 0.01. The new
estimate of the region of attraction, Ω1 , is larger than Ω



[Figure: the estimates Ω1 = {V1 ≤ 0.12} and Ω2 = {V1 ≤ 0.0012} in the (x1, s)-plane]


Nonlinear Control
Lecture # 29
Robust State Feedback Stabilization



Sliding Mode Control
Theorem 10.2
Suppose all the assumptions of Theorem 10.1 hold over Ω
with
̺(0) = 0
The origin of η̇ = fa(η, φ(η)) is exponentially stable
Then there exists µ∗ > 0 such that for all 0 < µ < µ∗, the
origin of the closed-loop system is exponentially stable and Ω
is a subset of its region of attraction. If the assumptions hold
globally, the origin will be globally uniformly asymptotically
stable



Proof
By Theorem 10.1, all trajectories starting in Ω enter Ωµ in
finite time. Inside Ωµ

η̇ = fa(η, φ(η) + s) ,   µṡ = −β(x)G(x)s + µ∆(t, x, v)

By the converse Lyapunov theorem, there is V1 (η) that satisfies

c1‖η‖^2 ≤ V1(η) ≤ c2‖η‖^2

(∂V1/∂η) fa(η, φ(η)) ≤ −c3‖η‖^2

‖∂V1/∂η‖ ≤ c4‖η‖
in some neighborhood Nη of η = 0



By the smoothness of fa we have

‖fa(η, φ(η) + s) − fa(η, φ(η))‖ ≤ k1 ‖s‖

in some neighborhood N of (η, ξ) = (0, 0)


Choose µ small enough that Ωµ ⊂ Nη ∩ N. Inside Ωµ

s^T ṡ = −(β(x)/µ) s^T G(x) s + s^T ∆(t, x, v)
      ≤ −(β λmin(G)/µ) ‖s‖^2 + λmin(G) [̺ + κ0 β ‖s‖/µ] ‖s‖
      ≤ −(λ0 β0 (1 − κ0)/µ) ‖s‖^2 + λmin(G) ̺ ‖s‖



Since G is continuous and ̺ is locally Lipschitz with ̺(0) = 0,
we arrive at
s^T ṡ ≤ −(λ0 β0 (1 − κ0)/µ) ‖s‖^2 + k2 ‖η‖ ‖s‖ + k3 ‖s‖^2

W = V1(η) + (1/2) s^T s

Ẇ ≤ −c3‖η‖^2 + c4 k1 ‖η‖ ‖s‖ + k2 ‖η‖ ‖s‖
   + k3 ‖s‖^2 − (λ0 β0 (1 − κ0)/µ) ‖s‖^2



Ẇ ≤ − [ ‖η‖  ‖s‖ ] [ c3                      −(c4 k1 + k2)/2           ] [ ‖η‖ ]
                    [ −(c4 k1 + k2)/2   λ0 β0 (1 − κ0)/µ − k3 ] [ ‖s‖ ]

µ < 4c3 λ0 β0 (1 − κ0) / [4c3 k3 + (c4 k1 + k2)^2]

The basic idea of the foregoing proof is that, inside the
boundary layer, the control

v = −β(x) s/µ

acts as high-gain feedback for small µ. By choosing µ small
enough, the high-gain feedback stabilizes the origin



Unmatched Uncertainty

ẋ = f (x) + B(x)[G(x)u + δ(t, x, u)] + δ1 (x)

η̇ = fa(η, ξ) + δa(η, ξ)
ξ̇ = fb(η, ξ) + G(x)u + δ(t, x, u) + δb(η, ξ)

s = ξ − φ(η)
Reduced-order model on the sliding manifold:

η̇ = fa (η, φ(η)) + δa (η, φ(η))

Design of φ to stabilize η = 0 in the presence of δa



Example 10.3

ẋ1 = x2 + θ1 x1 sin x2 ,   ẋ2 = θ2 x2^2 + x1 + u

|θ1| ≤ a ,   |θ2| ≤ b

x2 = −kx1  ⇒  ẋ1 = −kx1 + θ1 x1 sin x2

V1 = (1/2)x1^2  ⇒  x1 ẋ1 ≤ −kx1^2 + ax1^2

s = x2 + kx1 ,   k > a

ṡ = θ2 x2^2 + x1 + u + k(x2 + θ1 x1 sin x2)

u = −x1 − kx2 + v  ⇒  ṡ = v + ∆(x)

∆(x) = θ2 x2^2 + k θ1 x1 sin x2



∆(x) = θ2 x2^2 + k θ1 x1 sin x2
|∆(x)| ≤ ak|x1| + bx2^2

β(x) = ak|x1| + bx2^2 + β0 ,   β0 > 0

By Theorem 10.2,

u = −x1 − kx2 − β(x) sat(s/µ)

with sufficiently small µ, globally stabilizes the origin



Example 10.4

ẋ1 = x1 + (1 − θ1)x2 ,   ẋ2 = θ2 x2^2 + x1 + u

|θ1| ≤ a ,   |θ2| ≤ b

ẋ1 = x1 + (1 − θ1)x2
Design x2 to robustly stabilize the origin x1 = 0

We must have a < 1

x2 = −kx1  ⇒  x1 ẋ1 = x1^2 − k(1 − θ1)x1^2 ≤ −[k(1 − a) − 1]x1^2

k > 1/(1 − a)



s = x2 + kx1
Proceeding as in the previous example, we end up with

u = −(1 + k)x1 − kx2 − β(x) sat(s/µ)

β(x) = bx2^2 + ak|x2| + β0 ,   β0 > 0
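A sketch simulating this design for one admissible realization of the uncertainty; θ1, θ2, the gains, and µ below are hypothetical choices satisfying the stated conditions (in particular k = 3 > 1/(1 − a) = 2 for a = 0.5):

```python
def simulate(x1=0.5, x2=-0.5, th1=0.3, th2=-0.8, a=0.5, b=1.0,
             k=3.0, beta0=0.5, mu=0.01, dt=1e-4, T=15.0):
    # x1' = x1 + (1 - th1) x2,  x2' = th2 x2^2 + x1 + u, with
    # u = -(1 + k) x1 - k x2 - beta(x) sat(s/mu),  s = x2 + k x1,
    # beta(x) = b x2^2 + a k |x2| + beta0
    for _ in range(int(T / dt)):
        s = x2 + k*x1
        beta = b*x2**2 + a*k*abs(x2) + beta0
        u = -(1 + k)*x1 - k*x2 - beta*max(-1.0, min(1.0, s/mu))
        x1, x2 = x1 + dt*(x1 + (1 - th1)*x2), x2 + dt*(th2*x2**2 + x1 + u)
    return x1, x2

x1f, x2f = simulate()
```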



Alternative Approach: Suppose G(x) is a diagonal matrix with
positive elements

ṡ = G(x)v + ∆(t, x, v)

ṡi = gi(x)vi + ∆i(t, x, v) ,   1 ≤ i ≤ m

gi(x) ≥ g0 > 0 ,   | ∆i(t, x, v)/gi(x) | ≤ ̺(x) + κ0 max_{1≤i≤m} |vi|

vi = −β(x) sat(si/µ) ,   1 ≤ i ≤ m

Vi = (1/2) si^2



V̇i = si gi(x)vi + si ∆i(t, x, v)
   ≤ gi(x){ si vi + |si| [̺(x) + κ0 max_{1≤i≤m} |vi|] }
   ≤ gi(x) [−β(x) + ̺(x) + κ0 β(x)] |si|
   = gi(x) [−(1 − κ0)β(x) + ̺(x)] |si|
   ≤ gi(x) [−̺(x) − (1 − κ0)β0 + ̺(x)] |si|
   ≤ −g0 β0 (1 − κ0) |si|

V̇i ≤ −g0 β0 (1 − κ0) √(2Vi)
ensures that all trajectories reach the boundary layer

{|si | ≤ µ, 1 ≤ i ≤ m}

in finite time
Results similar to Theorems 10.1 and 10.2 can be proved with

Ω = {V0(η) ≤ c0} × {|si| ≤ c, 1 ≤ i ≤ m}

c0 ≥ α(c) ,   α(r) = α2(α4(r√m))
Ωµ = {V0(η) ≤ α(µ)} × {|si| ≤ µ, 1 ≤ i ≤ m}



Nonlinear Control
Lecture # 30
Robust State Feedback Stabilization



Lyapunov Redesign
ẋ = f(x) + G(x)[u + δ(t, x, u)] ,   x ∈ R^n ,  u ∈ R^m

Nominal Model: ẋ = f (x) + G(x)u

Stabilizing Control: u = φ(x)


(∂V/∂x)[f(x) + G(x)φ(x)] ≤ −W(x) ,   ∀ x ∈ D ,  W is p.d.

u = φ(x) + v

‖δ(t, x, φ(x) + v)‖ ≤ ̺(x) + κ0 ‖v‖ ,   0 ≤ κ0 < 1

ẋ = f(x) + G(x)φ(x) + G(x)[v + δ(t, x, φ(x) + v)]

V̇ = (∂V/∂x)(f + Gφ) + (∂V/∂x) G(v + δ)



w^T = (∂V/∂x) G

V̇ ≤ −W(x) + w^T v + w^T δ

w^T v + w^T δ ≤ w^T v + ‖w‖ ‖δ‖ ≤ w^T v + ‖w‖[̺(x) + κ0 ‖v‖]

v = −β(x) w/‖w‖   ( = −β(x) sgn(w) for m = 1 )

w^T v + w^T δ ≤ −β‖w‖ + ̺‖w‖ + κ0 β‖w‖
             = −β(1 − κ0)‖w‖ + ̺‖w‖

β(x) ≥ ̺(x)/(1 − κ0)  ⇒  w^T v + w^T δ ≤ 0  ⇒  V̇ ≤ −W(x)



Continuous Implementation
v = −β(x) Sat(β(x)w/µ) = −β(x) w/‖w‖   if β(x)‖w‖ ≥ µ ,
                          −β^2(x) w/µ   if β(x)‖w‖ < µ

β(x)‖w‖ ≥ µ  ⇒  V̇ ≤ −W(x)
For β(x)‖w‖ < µ

V̇ ≤ −W(x) + w^T [ −β^2 w/µ + δ ]
  ≤ −W(x) − (β^2/µ)‖w‖^2 + ̺‖w‖ + κ0 ‖w‖ ‖v‖
  = −W(x) − (β^2/µ)‖w‖^2 + ̺‖w‖ + (κ0 β^2/µ)‖w‖^2



V̇ ≤ −W(x) + (1 − κ0)[ −(β^2/µ)‖w‖^2 + β‖w‖ ]

−y^2/µ + y ≤ µ/4 ,  for y ≥ 0

V̇ ≤ −W(x) + µ(1 − κ0)/4

α1(‖x‖) ≤ V(x) ≤ α2(‖x‖) ,   W(x) ≥ α3(‖x‖)
For 0 < θ < 1,

V̇ ≤ −(1 − θ)α3(‖x‖) − θα3(‖x‖) + µ(1 − κ0)/4
  ≤ −(1 − θ)α3(‖x‖) ,   ∀ ‖x‖ ≥ α3^−1( µ(1 − κ0)/(4θ) ) =: µ0



Br ⊂ D ,   µ < 4θ α3(α2^−1(α1(r)))/(1 − κ0)   ⇒   µ0 < α2^−1(α1(r))

Theorem 10.3
Under the foregoing assumptions, for any
x(t0) ∈ {V(x) ≤ α1(r)}, the solution of the closed-loop
system satisfies

‖x(t)‖ ≤ max { β1(‖x(t0)‖, t − t0) ,  b(µ) }

where β1 is a class KL function and

b(µ) = α1^−1(α2(µ0)) = α1^−1(α2(α3^−1(0.25µ(1 − κ0)/θ)))

If all the assumptions hold globally and α1 ∈ K∞, then the
above inequality holds for any initial state x(t0)



Under what conditions will the origin be uniformly
asymptotically stable?
W(x) ≥ ϕ^2(x) ,   β(x) ≥ β0 > 0 ,   ̺(x) ≤ ̺1 ϕ(x)

∀ x ∈ Ba = {‖x‖ ≤ a} ;  ϕ(x) is positive definite

ϕ(0) = 0 ⇒ ̺(0) = 0
When β(x)‖w‖ < µ

V̇ ≤ −W(x) − (β^2(x)(1 − κ0)/µ)‖w‖^2 + ̺(x)‖w‖

−W(x) = −(1 − θ)W(x) − θW(x) ≤ −(1 − θ)W(x) − θϕ^2(x)

0 < θ < 1



V̇ ≤ −(1 − θ)W(x)
   − (1/2) [ ϕ(x)  ‖w‖ ] [ 2θ                    −̺1          ] [ ϕ(x) ]
                          [ −̺1   2β0^2(1 − κ0)/µ ] [ ‖w‖ ]

For sufficiently small µ ,   V̇ ≤ −(1 − θ)W(x)


Theorem 10.4
For sufficiently small µ, the origin is uniformly asymptotically
stable. If
c1‖x‖^2 ≤ V(x) ≤ c2‖x‖^2 ,   W(x) ≥ c3‖x‖^2

for ci > 0, then the origin is exponentially stable



Example 10.5 (Pendulum equation)

ẋ1 = x2 , ẋ2 = − sin(x1 + δ1 ) − b0 x2 + cu

0 ≤ b0 ≤ 0.2, 0.5 ≤ c ≤ 2
The system is feedback linearizable
ẋ = Ax + B[− sin(x1 + δ1 ) − b0 x2 + cu]

Nominal parameters: b̂0 = 0, ĉ


 
φ(x) = (1/ĉ)[sin(x1 + δ1) − k1 x1 − k2 x2]

K = [k1  k2] is chosen such that A − BK is Hurwitz



u = φ(x) + v
   
δ = ((c − ĉ)/ĉ^2)[sin(x1 + δ1) − k1 x1 − k2 x2] − (b0/ĉ) x2 + ((c − ĉ)/ĉ) v

|δ| ≤ ̺0 + ̺1 |x1| + ̺2 |x2| + κ0 |v|

̺0 ≥ | (c − ĉ) sin δ1 / ĉ^2 | ,   ̺1 ≥ ((c − ĉ)/ĉ^2)(1 + k1)

̺2 ≥ b0/ĉ + ((c − ĉ)/ĉ^2) k2 ,   κ0 ≥ (c − ĉ)/ĉ

Assume κ0 < 1 ,   β(x) ≥ β0 + (̺0 + ̺1 |x1| + ̺2 |x2|)/(1 − κ0) ,   β0 > 0



V(x) = x^T P x ,   P(A − BK) + (A − BK)^T P = −I

w = 2x^T P B = 2(p12 x1 + p22 x2)


The control
u = φ(x) − β(x) sat(β(x)w/µ)

achieves global ultimate boundedness with ultimate bound
proportional to µ. If sin δ1 = 0, we take ̺0 = 0 and the
origin of the closed-loop system will be globally exponentially
stable



Compare with Example 9.4: There

((c − ĉ)/ĉ) √(1 + k1^2 + k2^2) ≤ 1 / (2√(p12^2 + p22^2))

Ultimate bound is proportional to | sin(δ1)(c − ĉ)/ĉ |

Here  |(c − ĉ)/ĉ| ≤ κ0 < 1

Ultimate bound is proportional to µ



Calculation of the ultimate bound: Let δ1 = π/2
Take ĉ = (2 + 0.5)/2 = 1.25 to minimize |(c − ĉ)/ĉ|

|(c − ĉ)/ĉ| ≤ κ0 = 0.6   Compare with 0.3951 in Example 9.4

K = [1  2]  ⇒  λ(A − BK) = {−1, −1}  ⇒  w = x1 + x2

β(x) = 2.4|x1| + 2.8|x2| + 1.5

u = 0.8(cos x1 − x1 − 2x2) − β(x) sat(β(x)w/µ)

α1(r) = λmin(P) r^2 ,   α2(r) = λmax(P) r^2 ,   α3(r) = r^2

With θ = 0.9,

b(µ) = α1^−1(α2(α3^−1(0.25µ(1 − κ0)/θ))) ≈ 0.8µ
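These numbers can be exercised in a simulation sketch of the Lyapunov-redesign controller (true plant values b0 = 0.01, c = 0.5 as in the slides; the Euler step and tolerance are arbitrary choices of mine, and the asserted bound is looser than b(µ)):

```python
import math

def simulate(b0=0.01, c=0.5, mu=0.01, dt=1e-4, T=20.0):
    # Plant: x1' = x2, x2' = -sin(x1 + pi/2) - b0 x2 + c u  (delta1 = pi/2)
    # Control: u = 0.8 (cos x1 - x1 - 2 x2) - beta(x) sat(beta(x) w / mu),
    # with w = x1 + x2 and beta(x) = 2.4|x1| + 2.8|x2| + 1.5
    x1, x2 = -math.pi/2, 0.0          # corresponds to theta(0) = 0
    for _ in range(int(T / dt)):
        w = x1 + x2
        beta = 2.4*abs(x1) + 2.8*abs(x2) + 1.5
        u = 0.8*(math.cos(x1) - x1 - 2*x2) - beta*max(-1.0, min(1.0, beta*w/mu))
        x1, x2 = x1 + dt*x2, x2 + dt*(-math.cos(x1) - b0*x2 + c*u)
    return x1, x2

x1f, x2f = simulate()
```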



For x(t) to be ultimately in {|x1| ≤ 0.01, |x2| ≤ 0.01}

µ < 0.01/0.8  ⇒  Bb ⊂ {|x1| ≤ 0.01, |x2| ≤ 0.01}

Analysis inside Bb yields a less conservative estimate of µ.
Trajectories inside Bb converge to the equilibrium point
( x̄1 ≈ µ(0.8c − 1)/(2.25c) ,  x̄2 = 0 )

c ∈ [0.5, 2]  ⇒  |µ(0.8c − 1)/(2.25c)| ≤ 0.53µ

It is sufficient to choose µ < 0.01/0.53


Simulation: b0 = 0.01, c = 0.5, and µ = 0.01



[Figure: (a) θ versus time; (b) closed-loop phase portrait in the (x1, x2)-plane]

when β|w| ≥ µ ,   wẇ ≤ −w^2



High-Gain Feedback
The sliding mode and Lyapunov redesign controllers can be
replaced by high-gain feedback controllers
 
Sliding mode control: Replace β(x) Sat(s/µ) by β(x) s/µ

V = (1/2) s^T s

V̇ = −(β/µ) s^T G s + s^T ∆
  ≤ −(β/µ) λmin(G) ‖s‖^2 + λmin(G) ̺ ‖s‖ + (β/µ) λmin(G) κ0 ‖s‖^2
  = λmin(G) [ −(‖s‖/µ − 1) β(1 − κ0) ‖s‖ − β0 (1 − κ0) ‖s‖ ]
  ≤ −λ0 β0 (1 − κ0) ‖s‖ ,   for ‖s‖ ≥ µ



The trajectories reach {‖s‖ ≤ µ} in finite time
What is the difference from sliding mode control?
The trajectories reach the boundary layer faster because

µṡ = −β(x)G(x)s + µ∆

Example 10.6
Recall Example 10.1 where the pendulum is stabilized at
θ = π/2 and s = θ − π/2 + θ̇
Sliding Mode: u = −(2.5 + 2|θ̇|) sat(s/µ)

High-gain Feedback: u = −(2.5 + 2|θ̇|)(s/µ)

Simulation: b = 0.01, c = 0.5, µ = 0.1
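The two controllers can also be compared in a quick Euler sketch (same model as above; step size and horizon are my choices). Both settle near θ = π/2, but the high-gain version applies a much larger initial control because s/µ is not saturated:

```python
import math

def run(high_gain, b=0.01, c=0.5, mu=0.1, dt=1e-4, T=15.0):
    # Pendulum stabilized at pi/2 with s = (theta - pi/2) + theta'
    theta, omega, peak = 0.0, 0.0, 0.0
    sat = lambda y: max(-1.0, min(1.0, y))
    for _ in range(int(T / dt)):
        s = (theta - math.pi/2) + omega
        g = s/mu if high_gain else sat(s/mu)
        u = -(2.5 + 2*abs(omega)) * g
        peak = max(peak, abs(u))
        theta, omega = theta + dt*omega, omega + dt*(-math.sin(theta) - b*omega + c*u)
    return theta, peak

theta_sm, peak_sm = run(False)   # sliding mode (saturated)
theta_hg, peak_hg = run(True)    # high-gain feedback
```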



[Figure: (a)–(c): s, θ, and u versus time. Sliding mode (solid); High-gain (dashed)]



Lyapunov redesign: Replace

β(x) sat(β(x)w/µ)   by   β²(x)w/µ



Nonlinear Control
Lecture # 31
Nonlinear Observers

Nonlinear Control Lecture # 31 Nonlinear Observers


Local Observers

ẋ = f (x, u), y = h(x)

x̂˙ = f (x̂, u) + H[y − h(x̂)]

x̃ = x − x̂

x̃˙ = f (x, u) − f (x̂, u) − H[h(x) − h(x̂)]


We seek a local solution for sufficiently small kx̃(0)k
Linearization at x̃ = 0:

x̃˙ = [∂f/∂x(x(t), u(t)) − H ∂h/∂x(x(t))] x̃



Steady-state solution:
0 = f (xss , uss ), 0 = h(xss )

Assumption: given ε > 0, there exist δ1 > 0 and δ2 > 0 such


that

kx(0) − xss k ≤ δ1 and ku(t) − uss k ≤ δ2 ∀ t ≥ 0

⇒ kx(t) − xss k ≤ ε ∀ t ≥ 0

A = ∂f/∂x(xss, uss),  C = ∂h/∂x(xss)

Assume that (A, C) is detectable. Design H such that
A − HC is Hurwitz
Assume that (A, C) is detectable. Design H such that
A − HC is Hurwitz



Lemma 11.1
For sufficiently small ‖x̃(0)‖, ‖x(0) − xss‖, and supt≥0 ‖u(t) − uss‖,

limt→∞ x̃(t) = 0

Proof
f(x, u) − f(x̂, u) = ∫₀¹ ∂f/∂x(x − σx̃, u) dσ x̃

‖f(x, u) − f(x̂, u) − Ax̃‖
= ‖∫₀¹ [∂f/∂x(x − σx̃, u) − ∂f/∂x(x, u) + ∂f/∂x(x, u) − ∂f/∂x(xss, uss)] dσ x̃‖
≤ L1(½‖x̃‖ + ‖x − xss‖ + ‖u − uss‖)‖x̃‖



‖h(x) − h(x̂) − Cx̃‖ ≤ L2(½‖x̃‖ + ‖x − xss‖)‖x̃‖

x̃˙ = (A − HC)x̃ + ∆(x, u, x̃)

k∆(x, u, x̃)k ≤ k1 kx̃k2 + k2 (ε + δ2 )kx̃k

P (A − HC) + (A − HC)T P = −I
V = x̃T P x̃

V̇ ≤ −‖x̃‖² + c4k1‖x̃‖³ + c4k2(ε + δ2)‖x̃‖²

V̇ ≤ −⅓‖x̃‖²,  for c4k1‖x̃‖ ≤ ⅓ and c4k2(ε + δ2) ≤ ⅓

For sufficiently small kx̃(0)k, ε, and δ2 , the estimation error


converges to zero as t tends to infinity



The Extended Kalman Filter

ẋ = f (x, u), y = h(x)

x̂˙ = f (x̂, u) + H(t)[y − h(x̂)]

x̃ = x − x̂

x̃˙ = f (x, u) − f (x̂, u) − H(t)[h(x) − h(x̂)]


Expand the right-hand side in a Taylor series about x̃ = 0 and
evaluate the Jacobian matrices along x̂
x̃˙ = [A(t) − H(t)C(t)]x̃ + ∆(x̃, x, u)

∂f ∂h
A(t) = (x̂(t), u(t)), C(t) = (x̂(t))
∂x ∂x



Kalman Filter Gain:
H(t) = P (t)C T (t)R−1

Ṗ = AP + P AT + Q − P C T R−1 CP, P (t0 ) = P0

P0 , Q and R are symmetric, positive definite matrices


Assumption 11.1: P (t) exists for all t ≥ t0 and satisfies
α1 I ≤ P (t) ≤ α2 I, αi > 0

Remarks:
Assumption 11.1 cannot be checked offline
The observer and Riccati equations are solved
simultaneously in real time



Lemma 11.2
There exist positive constants c, k, and λ such that
‖x̃(0)‖ ≤ c ⇒ ‖x̃(t)‖ ≤ k e^(−λ(t−t0)),  ∀ t ≥ t0 ≥ 0

Proof

‖f(x, u) − f(x̂, u) − A(t)x̃‖
= ‖∫₀¹ [∂f/∂x(σx̃ + x̂, u) − ∂f/∂x(x̂, u)] dσ x̃‖ ≤ ½ L1‖x̃‖²

‖h(x) − h(x̂) − C(t)x̃‖
= ‖∫₀¹ [∂h/∂x(σx̃ + x̂) − ∂h/∂x(x̂)] dσ x̃‖ ≤ ½ L2‖x̃‖²



‖C(t)‖ = ‖∂h/∂x(x − x̃)‖ ≤ ‖∂h/∂x(0)‖ + L2(‖x‖ + ‖x̃‖)

‖∆(x̃, x, u)‖ ≤ k1‖x̃‖² + k2‖x̃‖³

α1I ≤ P(t) ≤ α2I  ⇔  α3I ≤ P⁻¹(t) ≤ α4I,  αi > 0

V = x̃ᵀP⁻¹x̃,   (d/dt)P⁻¹ = −P⁻¹ṖP⁻¹



V̇ = x̃ᵀP⁻¹x̃˙ + x̃˙ᵀP⁻¹x̃ + x̃ᵀ[(d/dt)P⁻¹]x̃
  = x̃ᵀP⁻¹(A − PCᵀR⁻¹C)x̃ + x̃ᵀ(Aᵀ − CᵀR⁻¹CP)P⁻¹x̃
    − x̃ᵀP⁻¹ṖP⁻¹x̃ + 2x̃ᵀP⁻¹∆
  = x̃ᵀP⁻¹(AP + PAᵀ − PCᵀR⁻¹CP − Ṗ)P⁻¹x̃ − x̃ᵀCᵀR⁻¹Cx̃ + 2x̃ᵀP⁻¹∆
  = −x̃ᵀ(P⁻¹QP⁻¹ + CᵀR⁻¹C)x̃ + 2x̃ᵀP⁻¹∆

V̇ ≤ −c1‖x̃‖² + c2k1‖x̃‖³ + c2k2‖x̃‖⁴

V̇ ≤ −½c1‖x̃‖²,  for ‖x̃‖ ≤ r



Example 11.1

ẋ = A1x + B1[0.25x1²x2 + 0.2 sin 2t],  y = C1x

A1 = [0 1; −1 −2],  B1 = [0; 1],  C1 = [1 0]

Investigate boundedness of x(t)

P1A1 + A1ᵀP1 = −I  ⇒  P1 = ½[3 1; 1 1]

V(x) = xᵀP1x


V̇ = −xᵀx + 2xᵀP1B1[0.25x1²x2 + 0.2 sin 2t]
  ≤ −‖x‖² + 0.5‖P1B1‖x1²‖x‖² + 0.4‖P1B1‖‖x‖
  = −‖x‖² + (x1²/(2√2))‖x‖² + (0.4/√2)‖x‖
  ≤ −0.5‖x‖² + (0.4/√2)‖x‖,  for x1² ≤ √2

√2 [1 0]P1⁻¹[1 0]ᵀ = √2  ⇒  Ω = {V(x) ≤ √2} ⊂ {x1² ≤ √2}



Inside Ω,

V̇ ≤ −0.5‖x‖² + (0.4/√2)‖x‖ ≤ −0.15‖x‖²,  ∀ ‖x‖ ≥ 0.4/(0.35√2) = 0.8081

λmax(P1) = 1.7071 ⇒ (0.8081)²λmax(P1) < √2

⇒ {‖x‖ ≤ 0.8081} ⊂ Ω  ⇒  Ω is positively invariant

Design EKF to estimate x(t) for x(0) ∈ Ω



 
A(t) = [0, 1; −1 + 0.5x̂1(t)x̂2(t), −2 + 0.25x̂1²(t)],   C = [1 0]

Q = R = P(0) = I

Ṗ = AP + PAᵀ + I − PCᵀCP,  P(0) = I,   P = [p11 p12; p12 p22]



x̂˙1 = x̂2 + p11(y − x̂1)
x̂˙2 = −x̂1 − 2x̂2 + 0.25x̂1²x̂2 + 0.2 sin 2t + p12(y − x̂1)
ṗ11 = 2p12 + 1 − p11²
ṗ12 = p11(−1 + 0.5x̂1x̂2) + p12(−2 + 0.25x̂1²) + p22 − p11p12
ṗ22 = 2p12(−1 + 0.5x̂1x̂2) + 2p22(−2 + 0.25x̂1²) + 1 − p12²
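The observer and Riccati equations can be integrated together in real time; a minimal forward-Euler sketch of this EKF (the initial conditions, step size, and final time are assumptions, with Q = R = P(0) = I as above):

```python
import numpy as np

def f(x, t):
    """Plant of Example 11.1."""
    return np.array([x[1], -x[0] - 2*x[1] + 0.25*x[0]**2*x[1] + 0.2*np.sin(2*t)])

dt, T = 1e-3, 6.0
x = np.array([0.5, -0.5])          # assumed plant initial state, inside Omega
xh = np.zeros(2)                   # observer state
P = np.eye(2)                      # Riccati variable
C = np.array([[1.0, 0.0]])
for k in range(int(T / dt)):
    t = k * dt
    A = np.array([[0.0, 1.0],
                  [-1.0 + 0.5*xh[0]*xh[1], -2.0 + 0.25*xh[0]**2]])
    H = P @ C.T                    # Kalman gain with R = I
    innov = x[0] - xh[0]           # y - xhat1
    x, xh = x + dt*f(x, t), xh + dt*(f(xh, t) + H[:, 0]*innov)
    P = P + dt*(A @ P + P @ A.T + np.eye(2) - P @ C.T @ C @ P)

err = float(np.linalg.norm(x - xh))
```

The estimation error decays to near zero within a few seconds, consistent with the simulation results shown in the lecture.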



[Figure: (a) estimation errors of x1 and x2; (b) components p11, p12, p22 of P(t)]



Nonlinear Control
Lecture # 32
Nonlinear Observers

Nonlinear Control Lecture # 32 Nonlinear Observers


Global Observers
Observer Form:
ẋ = Ax + ψ(u, y), y = Cx

(A, C) observable
x̂˙ = Ax̂ + ψ(u, y) + H(y − C x̂)

x̃ = x − x̂

x̃˙ = (A − HC)x̃
Design H such that A − HC is Hurwitz
limt→∞ x̃(t) = 0, ∀ x̃(0)



ẋ = Ax + ψ(u, y) + φ(x, u), y = Cx

kφ(x, u) − φ(z, u)k ≤ Lkx − zk

x̂˙ = Ax̂ + ψ(u, y) + φ(x̂, u) + H(y − C x̂)

x̃˙ = (A − HC)x̃ + φ(x, u) − φ(x̂, u)

P (A − HC) + (A − HC)T P = −I, V = x̃T P x̃

V̇ = −x̃ᵀx̃ + 2x̃ᵀP[φ(x, u) − φ(x̂, u)] ≤ −‖x̃‖² + 2L‖P‖‖x̃‖²

L < 1/(2‖P‖)  ⇒  limt→∞ x̃(t) = 0, ∀ x̃(0)
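The bound 1/(2‖P‖) is easy to compute for a concrete design. The sketch below (the pair (A, C) and the gain H are assumptions) solves the Lyapunov equation by Kronecker vectorization and evaluates the largest admissible Lipschitz constant of φ:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
H = np.array([[2.0], [1.0]])           # places both observer poles at -1 (assumed choice)
F = A - H @ C                          # A - HC, Hurwitz

# Solve F^T P + P F = -I via vec(F^T P + P F) = (I (x) F^T + F^T (x) I) vec(P)
n = A.shape[0]
M = np.kron(np.eye(n), F.T) + np.kron(F.T, np.eye(n))
P = np.linalg.solve(M, (-np.eye(n)).reshape(-1, order='F')).reshape(n, n, order='F')

L_max = 1.0 / (2.0 * np.linalg.norm(P, 2))   # admissible Lipschitz constant of phi
```

Any globally Lipschitz φ with constant L < L_max then gives V̇ < 0 for x̃ ≠ 0.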



High-Gain Observers
Example 11.2

ẋ1 = x2 , ẋ2 = φ(x, u), y = x1

x̂˙ 1 = x̂2 + h1 (y − x̂1 ), x̂˙ 2 = φ0 (x̂, u) + h2 (y − x̂1 )

|φ0 (z, u) − φ(x, u)| ≤ Lkx − zk + M

x̃˙ = Ao x̃ + Bδ(x, x̃, u)

Ao = [−h1 1; −h2 0],  B = [0; 1],  δ = φ(x, u) − φ0(x̂, u)

Design h1 and h2 such that Ao is Hurwitz



Transfer function from δ to x̃:

Go(s) = 1/(s² + h1s + h2) [1; s + h1]

We can make supω∈R ‖Go(jω)‖ arbitrarily small by choosing h2 ≫ h1 ≫ 1

h1 = α1/ε,  h2 = α2/ε²,  ε ≪ 1

Go(s) = ε/((εs)² + α1εs + α2) [ε; εs + α1]

limε→0 Go(s) = 0



η1 = x̃1/ε,  η2 = x̃2

εη̇ = Fη + εBδ,  where F = [−α1 1; −α2 0]

PF + FᵀP = −I,  V = ηᵀPη

|δ| ≤ L‖x̃‖ + M ≤ L‖η‖ + M

εV̇ = −ηᵀη + 2εηᵀPBδ ≤ −‖η‖² + 2εL‖PB‖‖η‖² + 2εM‖PB‖‖η‖



εL‖PB‖ ≤ ¼  ⇒  εV̇ ≤ −½‖η‖² + 2εM‖PB‖‖η‖

‖η(t)‖ ≤ max{k e^(−at/ε)‖η(0)‖, εcM},  ∀ t ≥ 0
Peaking Phenomenon:
x1(0) ≠ x̂1(0) ⇒ η1(0) = O(1/ε)

The solution will contain a term of the form (1/ε)e^(−at/ε)
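The 1/ε scaling of the transient can be checked numerically. This sketch integrates the error dynamics x̃˙ = Ao x̃ with δ neglected (the choices α1 = 2, α2 = 1, placing both observer poles at −1/ε, and the initial error x̃(0) = (1, 0) are assumptions) and records the peak of |x̃2| for two values of ε:

```python
import numpy as np

def peak_x2(eps, dt=1e-5, T=0.5):
    """Max |x2-error| of the high-gain observer error dynamics (delta ignored)."""
    a1, a2 = 2.0, 1.0                   # assumed alpha_1, alpha_2
    Ao = np.array([[-a1/eps, 1.0], [-a2/eps**2, 0.0]])
    x = np.array([1.0, 0.0])            # x1(0) != x1hat(0)
    peak = 0.0
    for _ in range(int(T / dt)):
        x = x + dt * (Ao @ x)
        peak = max(peak, abs(x[1]))
    return peak

p_big, p_small = peak_x2(0.1), peak_x2(0.01)   # peak grows roughly like 1/eps
```

Shrinking ε by a factor of 10 raises the transient peak by roughly the same factor, which is exactly the (1/ε)e^(−at/ε) term.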



Example 11.1 (revisited)

ẋ1 = x2,  ẋ2 = −x1 − 2x2 + ax1²x2 + b sin 2t,  y = x1

a = 0.25, b = 0.2, and Ω = {1.5x1² + x1x2 + 0.5x2² ≤ √2} is positively invariant

x̂˙1 = x̂2 + (2/ε)(y − x̂1)
x̂˙2 = −x̂1 − 2x̂2 + âx̂1²x̂2 + b̂ sin 2t + (1/ε²)(y − x̂1)

Case 1: â = 0.25 and b̂ = 0.2 (Figures (a) and (b))

Case 2: â = b̂ = 0 (Figures (c) and (d))



[Figure: (a), (b) x1, x2 and their estimates for ε = 0.1 and ε = 0.01 in Case 1; (c), (d) estimation error of x2 in Case 2, showing the transient peak and the steady-state error]



Measurement Noise

y = x1 + v,  |v(t)| ≤ N

εη̇ = Fη + εBδ − (1/ε)Ev,  where E = [α1; α2]

εV̇ ≤ −½‖η‖² + 2εM‖PB‖‖η‖ + (2N/ε)‖PE‖‖η‖

Ultimate bound:  ‖x̃‖ ≤ c1Mε + c2N/ε



[Figure: the ultimate bound c1Mε + c2N/ε as a function of ε; it is minimized at an intermediate value of ε]


General case:
ẇ = f0 (w, x, u)
ẋi = xi+1 + ψi (x1 , . . . , xi , u), for 1 ≤ i ≤ ρ − 1
ẋρ = φ(w, x, u)
y = x1

|ψi(x1, . . . , xi, u) − ψi(z1, . . . , zi, u)| ≤ Li Σ_{k=1}^{i} |xk − zk|

x̂˙i = x̂i+1 + ψi(x̂1, . . . , x̂i, u) + (αi/ε^i)(y − x̂1),  1 ≤ i ≤ ρ − 1

x̂˙ρ = φ0(x̂, u) + (αρ/ε^ρ)(y − x̂1)



α1 to αρ are chosen such that the roots of
sρ + α1 sρ−1 + · · · + αρ−1 s + αρ = 0

have negative real parts
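This condition is easy to verify for a given choice of the αi; for instance, the coefficients below (an assumed choice, taken from (s + 1)^ρ with ρ = 3) can be checked numerically:

```python
import numpy as np

alphas = [3.0, 3.0, 1.0]               # coefficients of (s + 1)^3 (assumed choice)
roots = np.roots([1.0] + alphas)       # roots of s^3 + 3 s^2 + 3 s + 1
hurwitz = bool(np.all(roots.real < 0))
```

All three roots sit at s = −1, so the polynomial is Hurwitz and the observer poles are at −1/ε.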


|φ(w, x, u) − φ0(x, u)| ≤ M

|φ0(x, u) − φ0(x̂, u)| ≤ L‖x − x̂‖

|φ(w, x, u) − φ0(x̂, u)| ≤ L‖x − x̂‖ + M

Lemma 11.3
For sufficiently small ε,

|x̃i| ≤ max{(b/ε^(i−1)) e^(−at/ε), ε^(ρ+1−i) cM}



Proof

η1 = (x1 − x̂1)/ε^(ρ−1), . . . , ηρ−1 = (xρ−1 − x̂ρ−1)/ε,  ηρ = xρ − x̂ρ

εη̇ = Fη + εδ(w, x, x̃, u)

δ = col(δ1, δ2, · · · , δρ),  δρ = φ(w, x, u) − φ0(x̂, u)

|δi| ≤ Li Σ_{k=1}^{i} ε^(i−k)|ηk|,  1 ≤ i ≤ ρ − 1

F = [−α1 1 0 ··· 0; −α2 0 1 ··· 0; ⋮ ⋱; −αρ−1 0 ··· 0 1; −αρ 0 ··· ··· 0]



For ε ≤ ε∗,  ‖δ‖ ≤ Lδ‖η‖ + M

V = ηᵀPη,  PF + FᵀP = −I

εV̇ = −ηᵀη + 2εηᵀPδ ≤ −‖η‖² + 2ε‖P‖Lδ‖η‖² + 2ε‖P‖M‖η‖

For ε‖P‖Lδ ≤ ¼,

εV̇ ≤ −½‖η‖² + 2ε‖P‖M‖η‖ ≤ −¼‖η‖²,  ∀ ‖η‖ ≥ 8ε‖P‖M

By Theorem 4.5, ‖η(t)‖ ≤ max{k e^(−at/ε)‖η(0)‖, εcM}

|x̃i| ≤ ε^(ρ−i)|ηi|



Nonlinear Control
Lecture # 33
Output Feedback Stabilization

Nonlinear Control Lecture # 33 Output Feedback Stabilization


Linearization
ẋ = f (x, u), y = h(x), f (0, 0) = 0, h(0) = 0

ẋ = Ax + Bu y = Cx

A = ∂f/∂x|_(x=0, u=0),  B = ∂f/∂u|_(x=0, u=0),  C = ∂h/∂x|_(x=0)

Design a linear output feedback controller:


ż = F z + Gy, u = Lz + My

such that the closed-loop matrix

[A + BMC  BL; GC  F]

is Hurwitz



Examples

Static output feedback controller


u = My

where A + BMC is Hurwitz


Observer-based controller

x̂˙ = Ax̂ + Bu + H(y − Cx̂),  u = −Kx̂

The closed-loop matrix is Hurwitz if A − BK and A − HC are Hurwitz
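This separation can be checked on a small example (the matrices A, B, C and the gains K, H below are assumptions): writing the observer-based controller as a dynamic output feedback with F = A − BK − HC, G = H, L = −K, M = 0, the closed-loop eigenvalues are those of A − BK together with those of A − HC.

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # assumed open-loop plant (unstable)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 2.0]])               # eig(A - BK) = -1, -1
H = np.array([[4.0], [5.0]])             # eig(A - HC) = -2, -2

# Observer-based controller in the (F, G, L, M) form used above
F = A - B @ K - H @ C
G, L, M = H, -K, np.zeros((1, 1))
Acl = np.block([[A + B @ M @ C, B @ L], [G @ C, F]])
eigs = np.linalg.eigvals(Acl)            # union of eig(A - BK) and eig(A - HC)
```

The 4x4 closed-loop matrix has exactly the state-feedback and observer eigenvalues, so it is Hurwitz whenever both designs are.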



Closed-loop system:
ẋ = f (x, Lz + Mh(x)), ż = F z + Gh(x)

Linearization at the origin (x = 0, z = 0) results in the Hurwitz matrix

[A + BMC  BL; GC  F]

By Theorem 3.2, the origin is exponentially stable



Passivity-Based Control
In Section 9.6 we saw that if the system
ẋ = f (x, u), y = h(x)

is passive (with a positive definite storage function) and


zero-state observable, it can be stabilized by
u = −φ(y),  φ(0) = 0,  yᵀφ(y) > 0, ∀ y ≠ 0

Suppose the system

ẋ = f(x, u),  ẏ = (∂h/∂x) f(x, u) ≝ h̃(x, u)

is passive (with a positive definite storage function V(x)) and zero-state observable
[Block diagrams: the plant in negative feedback with φ(·) applied to z, where z is the output y passed through the filter s/(τs + 1); equivalently, ẏ passed through 1/(τs + 1) before φ(·)]


The filter s/(τs + 1) is realized as

τẇ = −w + y,  z = (−w + y)/τ

MIMO systems:

τiẇi = −wi + yi,  zi = (−wi + yi)/τi,  for 1 ≤ i ≤ m

Note that τiżi = −zi + ẏi


Lemma 12.1
Consider the system
ẋ = f (x, u), y = h(x)

and the output feedback controller


ui = −φi(zi),  τiẇi = −wi + yi,  zi = (−wi + yi)/τi

τi > 0,  φi(0) = 0,  ziφi(zi) > 0 ∀ zi ≠ 0


Suppose the auxiliary system
ẋ = f (x, u), ẏ = h̃(x, u)

is



passive with a positive definite storage function V(x):

uᵀẏ ≥ V̇ = (∂V/∂x) f(x, u),  ∀ (x, u)

zero-state observable: with u = 0,  ẏ(t) ≡ 0 ⇒ x(t) ≡ 0

Then the origin of the closed-loop system is asymptotically stable. It is globally asymptotically stable if V(x) is radially unbounded and ∫₀^zi φi(σ) dσ → ∞ as |zi| → ∞



Proof

W(x, z) = V(x) + Σ_{i=1}^{m} τi ∫₀^zi φi(σ) dσ

Ẇ = V̇ + Σ_{i=1}^{m} τiφi(zi)żi ≤ uᵀẏ − Σ_{i=1}^{m} ziφi(zi) − uᵀẏ

Ẇ ≤ −Σ_{i=1}^{m} ziφi(zi)

Ẇ ≡ 0 ⇒ z(t) ≡ 0 ⇒ u(t) ≡ 0 and ẏ(t) ≡ 0

Apply the invariance principle



Example 12.2 (m-link Robot Manipulator)

M(q)q̈ + C(q, q̇)q̇ + D q̇ + g(q) = u

M = M T > 0, (Ṁ − 2C)T = −(Ṁ − 2C), D = D T ≥ 0


Stabilize the system at q = qr , e = q − qr , ė = q̇
M(q)ë + C(q, q̇)ė + D ė + g(q) = u

u = g(q) − Kp e + v, [Kp = Kp > 0]

M(q)ë + C(q, q̇)ė + D ė + Kp e = v, y=e

V = 12 ėT M(q)ė + 21 eT Kp e



V = 12 ėT M(q)ė + 12 eT Kp e

V̇ = 21 ėT (Ṁ − 2C)ė − ėT D ė − ėT Kp e + ėT v + eT Kp ė ≤ ėT v


Is it zero-state observable? Set v = 0
ė(t) ≡ 0 ⇒ ë(t) ≡ 0 ⇒ Kp e(t) ≡ 0 ⇒ e(t) ≡ 0

τi ẇi = −wi + ei , zi = (−ai wi + ei )/τi , for 1 ≤ i ≤ m

u = g(q) − Kp (q − qr ) − Kd z
Kd is positive diagonal matrix. Compare with state feedback
u = g(q) − Kp (q − qr ) − Kd q̇
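For a single link, this controller is gravity compensation plus PD action on e and the filtered signal z, with no velocity measurement. A minimal sketch (all physical parameters, gains, filter constant, and set point below are assumptions):

```python
import numpy as np

m, l, g0, d = 1.0, 1.0, 9.81, 0.1        # assumed 1-link arm parameters
qr = np.pi / 4                            # assumed set point
Kp, Kd, tau = 20.0, 5.0, 0.05             # assumed gains and filter constant

q, dq, w = 0.0, 0.0, 0.0                  # w: dirty-derivative filter state
dt = 1e-4
for _ in range(int(8.0 / dt)):
    e = q - qr
    z = (-w + e) / tau                    # filtered estimate of de/dt; uses only e
    u = m*g0*l*np.sin(q) - Kp*e - Kd*z    # gravity compensation + PD on (e, z)
    ddq = (u - d*dq - m*g0*l*np.sin(q)) / (m*l*l)
    q, dq = q + dt*dq, dq + dt*ddq
    w += dt * (-w + e) / tau              # tau*w' = -w + e
```

The link settles at qr without ever measuring q̇, which is the point of the dirty-derivative output feedback.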



Observer-Based Control
ẋ = f (x, u), y = h(x)
State Feedback Controller: Design a locally Lipschitz u = γ(x)
to stabilize the origin of
ẋ = f (x, γ(x))

Observer:
x̂˙ = f (x̂, u) + H[y − h(x̂)]

x̃ = x − x̂
x̃˙ = f(x, u) − f(x̂, u) − H[h(x) − h(x̂)] ≝ g(x, x̃)

g(x, 0) = 0



Design H such that x̃˙ = g(x, x̃) has an exponentially stable equilibrium point at x̃ = 0 and there is a Lyapunov function V1(x̃) such that

c1‖x̃‖² ≤ V1 ≤ c2‖x̃‖²,  (∂V1/∂x̃)g ≤ −c3‖x̃‖²,  ‖∂V1/∂x̃‖ ≤ c4‖x̃‖

u = γ(x̂)
Closed-loop system:
ẋ = f (x, γ(x − x̃)), x̃˙ = g(x, x̃) (⋆)



Theorem 12.1
If the origin of ẋ = f (x, γ(x)) is asymptotically stable, so
is the origin of (⋆)
If the origin of ẋ = f (x, γ(x)) is exponentially stable, so
is the origin of (⋆)
If the assumptions hold globally and the system
ẋ = f (x, γ(x − x̃)), with input x̃, is ISS, then the origin
of (⋆) is globally asymptotically stable



Proof
If the origin of ẋ = f(x, γ(x)) is asymptotically stable, then by the (converse Lyapunov) Theorem 3.9

(∂V0/∂x) f(x, γ(x)) ≤ −W0(x)

(∂V0/∂x) [f(x, γ(x)) − f(x, γ(x − x̃))] ≤ L‖x̃‖

V(x, x̃) = bV0(x) + √(V1(x̃)),  b > 0

V̇ = b(∂V0/∂x) f(x, γ(x − x̃)) + (1/(2√V1)) (∂V1/∂x̃) g(x, x̃)

f(x, γ(x − x̃)) = f(x, γ(x)) + [f(x, γ(x − x̃)) − f(x, γ(x))]


V̇ ≤ −bW0(x) + bL‖x̃‖ − c3‖x̃‖²/(2√V1)

V1 ≤ c2‖x̃‖²  ⇒  −1/√V1 ≤ −1/(√c2 ‖x̃‖)

V̇ ≤ −bW0(x) + bL‖x̃‖ − (c3/(2√c2))‖x̃‖

b < c3/(2L√c2) ensures that V̇ is negative definite

If the origin of ẋ = f(x, γ(x)) is exponentially stable, then by (the converse Lyapunov) Theorem 3.8

a1‖x‖² ≤ V0(x) ≤ a2‖x‖²

(∂V0/∂x) f(x, γ(x)) ≤ −a3‖x‖²,  ‖∂V0/∂x‖ ≤ a4‖x‖



V(x, x̃) = bV0(x) + V1(x̃),  b > 0

V̇ = b(∂V0/∂x) f(x, γ(x − x̃)) + (∂V1/∂x̃) g(x, x̃)
  ≤ −ba3‖x‖² + ba4L1‖x‖‖x̃‖ − c3‖x̃‖²
  = −[‖x‖ ‖x̃‖] Q [‖x‖; ‖x̃‖],   Q = [ba3  −ba4L1/2; −ba4L1/2  c3]

b < 4a3c3/(a4L1)² ensures that Q is positive definite

The proof of the third bullet follows from Lemma 4.6



Nonlinear Control
Lecture # 34
Output Feedback Stabilization

Nonlinear Control Lecture # 34 Output Feedback Stabilization


High-Gain Observers
Example 12.3

ẋ1 = x2 , ẋ2 = φ(x, u), y = x1


State feedback control: u = γ(x) stabilizes the origin of
ẋ1 = x2 , ẋ2 = φ(x, γ(x))

High-gain observer:

x̂˙1 = x̂2 + (α1/ε)(y − x̂1),  x̂˙2 = φ0(x̂, u) + (α2/ε²)(y − x̂1)

φ0 is a nominal model of φ,  αi > 0,  0 < ε ≪ 1

|x̃1| ≤ max{b e^(−at/ε), ε²cM},  |x̃2| ≤ max{(b/ε) e^(−at/ε), εcM}



The bound on x̃2 demonstrates the peaking phenomenon,
which might destabilize the closed-loop system
Example:
ẋ1 = x2,  ẋ2 = x2³ + u,  y = x1

State feedback control:


u = −x2³ − x1 − x2

Output feedback control:


u = −x̂2³ − x̂1 − x̂2

x̂˙1 = x̂2 + (2/ε)(y − x̂1),  x̂˙2 = (1/ε²)(y − x̂1)



[Figure: x1, x2, and u under state feedback (SFB) and output feedback (OFB) for ε = 0.1, 0.01, 0.005; the control peaks to O(100) during the initial transient as ε decreases]



[Figure: for ε = 0.004 the peaking destabilizes the closed loop: x1 and x2 diverge while u peaks to O(1000)]



Closed-loop system under state feedback:

ẋ = Ax,  A = [0 1; −1 −1]

PA + AᵀP = −I  ⇒  P = [1.5 0.5; 0.5 1]

Suppose x(0) belongs to the positively invariant set Ω = {V(x) ≤ 0.3}

|u| ≤ |x2|³ + |x1 + x2| ≤ 0.816,  ∀ x ∈ Ω

Saturate u at ±1



u = sat(−x̂2³ − x̂1 − x̂2)

[Figure: x1, x2, and u under SFB and OFB with saturated control for ε = 0.1, 0.01, 0.001; peaking is eliminated and the OFB response approaches the SFB response as ε decreases]



Region of attraction under state feedback:

[Figure: region of attraction in the (x1, x2) plane]



Region of attraction under output feedback:

[Figure: region of attraction in the (x1, x2) plane under output feedback]

ε = 0.08 (dashed) and ε = 0.01 (dash-dot)



Analysis of the closed-loop system:

ẋ1 = x2,  ẋ2 = φ(x, γ(x − x̃))
εη̇1 = −α1η1 + η2,  εη̇2 = −α2η1 + εδ(x, x̃)

[Sketch: in the (x, η) plane, with nested sets Ωb ⊂ Ωc for x, the observer error η decays from O(1/ε) to O(ε) before x can leave Ωc]



General case
ẇ = ψ(w, x, u)
ẋi = xi+1 + ψi (x1 , . . . , xi , u), 1≤i≤ρ−1
ẋρ = φ(w, x, u)
y = x1
z = q(w, x)

φ(0, 0, 0) = 0, ψ(0, 0, 0) = 0, q(0, 0) = 0


ψi satisfies a global Lipschitz condition. The normal form and
models of mechanical and electromechanical systems take this
form with
ψ1 = · · · = ψρ = 0
Why the extra measurement z?



In many problems, we can measure some state variables in
addition to y
Magnetic levitation system:

ẋ1 = x2

ẋ2 = −bx2 + 1 − 4cx3²/(1 + x1)²

ẋ3 = (1/T(x1)) [−x3 + u + βx2x3/(1 + x1)²]

Typical measurements are the ball position x1 and the current x3



Stabilizing state feedback controller:
ϑ̇ = Γ(ϑ, x, z), u = γ(ϑ, x, z)

γ and Γ are globally bounded functions of x


Closed-loop system
Ẋ = f (X ), X = col(w, x, ϑ)

Output feedback controller


ϑ̇ = Γ(ϑ, x̂, z), u = γ(ϑ, x̂, z)



Observer:

x̂˙i = x̂i+1 + ψi(x̂1, . . . , x̂i, u) + (αi/ε^i)(y − x̂1),  1 ≤ i ≤ ρ − 1

x̂˙ρ = φ0(z, x̂, u) + (αρ/ε^ρ)(y − x̂1)

ε > 0 and α1 to αρ are chosen such that the roots of

s^ρ + α1s^(ρ−1) + · · · + αρ−1s + αρ = 0

have negative real parts



Separation Principle
Theorem 12.2
Suppose the origin of Ẋ = f (X ) is asymptotically stable and
R is its region of attraction. Let S be any compact set in the
interior of R and Q be any compact subset of Rρ . Then,
given any µ > 0 there exist ε∗ > 0 and T ∗ > 0, dependent on
µ, such that for every 0 < ε ≤ ε∗ , the solutions (X (t), x̂(t)) of
the closed-loop system, starting in S × Q, are bounded for all
t ≥ 0 and satisfy
kX (t)k ≤ µ and kx̂(t)k ≤ µ, ∀ t ≥ T∗

kX (t) − Xr (t)k ≤ µ, ∀t≥0


where Xr is the solution of Ẋ = f (X ), starting at X (0)



If the origin of Ẋ = f (X ) is exponentially stable, then the
origin of the closed-loop system is exponentially stable and
S × Q is a subset of its region of attraction



Nonlinear Control
Lecture # 35
Output Feedback Stabilization

Nonlinear Control Lecture # 35 Output Feedback Stabilization


Robust Stabilization of Minimum Phase Systems
Relative Degree One
η̇ = f0 (η, y), ẏ = a(η, y) + b(η, y)u + δ(t, η, y, u)

f0 (0, 0) = 0, a(0, 0) = 0, b(η, y) ≥ b0 > 0

The origin of η̇ = f0 (η, 0) is asymptotically stable

α1(‖η‖) ≤ V(η) ≤ α2(‖η‖)

(∂V/∂η) f0(η, y) ≤ −α3(‖η‖),  ∀ ‖η‖ ≥ α4(|y|)
Sliding Mode Control: Sliding surface y = 0
u = ψ(y) + v



|[a(η, y) + b(η, y)ψ(y) + δ(t, η, y, ψ(y) + v)] / b(η, y)| ≤ ̺(y) + κ0|v|,  0 ≤ κ0 < 1

β(y) ≥ ̺(y)/(1 − κ0) + β0

v = −β(y) sat(y/µ)

u = ψ(y) − β(y) sat(y/µ)
All the assumptions hold in a domain D



Theorem 12.3
Define the class K function α by α(r) = α2 (α4 (r)) and
suppose µ, c > µ, and c0 ≥ α(c) are chosen such that the set

Ω = {V(η) ≤ c0} × {|y| ≤ c}

is compact and contained in D. Then, Ω is positively invariant


and for any initial state in Ω, the state is bounded for all t ≥ 0
and reaches the positively invariant set
Ωµ = {V (η) ≤ α(µ)} × {|y| ≤ µ}

in finite time. Moreover, if the assumptions hold globally and


V (η) is radially unbounded, the foregoing conclusion holds for
any initial state



Theorem 12.4
Suppose ̺(0) = 0 and the origin of η̇ = f0 (η, 0) is
exponentially stable. Then, there exists µ∗ > 0 such that for
all 0 < µ < µ∗ , the origin of the closed-loop system is
exponentially stable and Ω is a subset of its region of
attraction. Moreover, if the assumptions hold globally and
V (η) is radially unbounded, the origin will be globally
uniformly asymptotically stable



Relative Degree Higher Than One
η̇ = f0 (η, ξ)
ξ˙i = ξi+1 , for 1 ≤ i ≤ ρ − 1
ξ˙ρ = a(η, ξ) + b(η, ξ)u + δ(t, η, ξ, u)
y = ξ1

f0 (0, 0) = 0, a(0, 0) = 0, b(η, ξ) ≥ b0 > 0

The origin of η̇ = f0 (η, 0) is asymptotically stable


Partial State Feedback: Assume ξ is available for feedback
s = k1 ξ1 + k2 ξ2 + · · · + kρ−1 ξρ−1 + ξρ



With s as the output, the system has relative degree one and
the normal form is given by
ż = f¯0 (z, s), ṡ = ā(z, s) + b̄(z, s)u + δ̄(t, z, s, u)


z = col(η, ξ1, . . . , ξρ−2, ξρ−1)
Zero Dynamics (s = 0):
ż = f¯0 (z, 0)



ż = f̄0(z, 0)  ⇔  η̇ = f0(η, ξ)|_(ξρ = −Σ_{i=1}^{ρ−1} ki ξi),   ζ̇ = Fζ

ζ = col(ξ1, . . . , ξρ−1),   F = [0 1 0 ··· 0; 0 0 1 ··· 0; ⋮ ⋱; 0 ··· 0 1; −k1 −k2 ··· −kρ−2 −kρ−1]

When ρ = n, the zero dynamics are ζ̇ = Fζ



k1 to kρ−1 are chosen such that the polynomial
λρ−1 + kρ−1 λρ−2 + · · · + k2 λ + k1

is Hurwitz
α1(‖z‖) ≤ V(z) ≤ α2(‖z‖)

(∂V/∂z) f̄0(z, s) ≤ −α3(‖z‖),  ∀ ‖z‖ ≥ α4(|s|)
We have converted the relative degree ρ system into a relative
degree one system that satisfies the earlier assumptions
u = ψ(ξ) + v



|[ā(z, s) + b̄(z, s)ψ(ξ) + δ̄(t, z, s, ψ(ξ) + v)] / b̄(z, s)| ≤ ̺(ξ) + κ0|v|

The left-hand side equals

|[Σ_{i=1}^{ρ−1} ki ξi+1 + a(η, ξ) + b(η, ξ)ψ(ξ) + δ(t, η, ξ, ψ(ξ) + v)] / b(η, ξ)|

β(ξ) ≥ ̺(ξ)/(1 − κ0) + β0,  β0 > 0

u = ψ(ξ) − β(ξ) sat(s/µ) = γ(ξ)



Saturation Scheme 1:

Mi = max_Ω {|ξi|},  1 ≤ i ≤ ρ

ψs(ξ) = ψ(ξ)|_(ξi = Mi sat(ξi/Mi)),   βs(ξ) = β(ξ)|_(ξi = Mi sat(ξi/Mi))

Scheme 2:

Mψ = max_Ω {|ψ(ξ)|},  Mβ = max_Ω {|β(ξ)|}

ψs(ξ) = Mψ sat(ψ(ξ)/Mψ),   βs(ξ) = Mβ sat(β(ξ)/Mβ)

u = ψs(ξ) − βs(ξ) sat(s/µ)

βs and ψs are globally bounded functions of ξ

Scheme 3:

Mu = max_Ω {|ψ(ξ) − β(ξ) sat(s/µ)|}

u = Mu sat([ψ(ξ) − β(ξ) sat(s/µ)]/Mu)



Output Feedback Controller
ξ̂˙i = ξ̂i+1 + (αi/ε^i)(y − ξ̂1),  1 ≤ i ≤ ρ − 1

ξ̂˙ρ = a0(ξ̂) + b0(ξ̂)u + (αρ/ε^ρ)(y − ξ̂1)

s^ρ + α1s^(ρ−1) + · · · + αρ−1s + αρ is Hurwitz

u = γs(ξ̂), where γs(ξ̂) is given by

ψs(ξ̂) − βs(ξ̂) sat(ŝ/µ)   or   Mu sat([ψ(ξ̂) − β(ξ̂) sat(ŝ/µ)]/Mu)

ŝ = Σ_{i=1}^{ρ−1} ki ξ̂i + ξ̂ρ



Theorem 12.5
Let Ω0 be a compact set in the interior of Ω, X a compact subset of Rρ, (η(0), ξ(0)) ∈ Ω0, and ξ̂(0) ∈ X. Then, there exists ε∗ > 0, dependent on µ, such that for all ε ∈ (0, ε∗) the states (η(t), ξ(t), ξ̂(t)) of the closed-loop system are bounded


for all t ≥ 0 and there is a finite time T , dependent on µ, such
that (η(t), ξ(t)) ∈ Ωµ for all t ≥ T . Moreover, given any
λ > 0, there exists ε∗∗ > 0, dependent on µ and λ, such that
for all ε ∈ (0, ε∗∗ ),
kη(t) − ηr (t)k ≤ λ and kξ(t) − ξr (t)k ≤ λ, ∀ t ∈ [0, T ]

where (ηr (t), ξr (t)) is the state of the closed-loop system


under state feedback with initial conditions ηr (0) = η(0) and
ξr (0) = ξ(0)



Theorem 12.6
Suppose all the assumptions of Theorem 12.5 are satisfied.
Then, there exists µ∗ > 0 and for each µ ∈ (0, µ∗) there exists
ε∗ > 0, dependent on µ, such that for all ε ∈ (0, ε∗ ), the
origin of the closed-loop system under output feedback is
exponentially stable and Ω0 × X is a subset of its region of
attraction



Example 12.4 (Pendulum Equation)

θ̈ + sin θ + bθ̇ = cu

0 ≤ b ≤ 0.2, 0.5 ≤ c ≤ 2

From Example 10.1: u = −2(|x1 | + |x2 | + 1) sat (s/µ)


stabilizes the pendulum at (θ = π, θ̇ = 0)
Suppose now that we only measure θ. In preparation for using
a high-gain observer, we will saturate the state feedback
control outside a compact set
Ω = {|x1| ≤ c/θ1} × {|s| ≤ c},  c > 0,  0 < θ1 < 1

Take c = 2π and θ1 = 0.8



|x1| ≤ 2.5π,  |x2| ≤ 4.5π,  ∀ x ∈ Ω
Output Feedback Controller:
       
u = −2 [2.5π sat(|x̂1|/(2.5π)) + 4.5π sat(|x̂2|/(4.5π)) + 1] sat(ŝ/µ)

ŝ = x̂1 + x̂2

x̂˙1 = x̂2 + (2/ε)(x1 − x̂1),  x̂˙2 = φ0(x̂, u) + (1/ε²)(x1 − x̂1)

Simulation:
b = 0.01, c = 0.5, x1 (0) = −π, x2 (0) = x̂i (0) = 0

φ0(x̂, u) = 0 (Figures (a) & (b))   or   φ0(x̂, u) = −sin(x̂1 + π) − 0.1x̂2 + 1.25u (Figures (c) & (d))



[Figure: θ and ω under state feedback (SF) and output feedback (OF) with ε = 0.05 and ε = 0.01; (a), (b) use φ0 = 0, while (c), (d) use the nominal φ0]



Nonlinear Control
Lecture # 36
Tracking & Regulation

Nonlinear Control Lecture # 36 Tracking & Regulation


Normal form:
η̇ = f0 (η, ξ)
ξ˙i = ξi+1 , for 1 ≤ i ≤ ρ − 1
ξ˙ρ = a(η, ξ) + b(η, ξ)u
y = ξ1

η ∈ Dη ⊂ Rn−ρ , ξ = col(ξ1 , . . . , ξρ ) ∈ Dξ ⊂ Rρ
Tracking Problem: Design a feedback controller such that
lim [y(t) − r(t)] = 0
t→∞

while ensuring boundedness of all state variables


Regulation Problem: r is constant



Assumption 13.1

b(η, ξ) ≥ b0 > 0, ∀ η ∈ Dη , ξ ∈ D ξ

Assumption 13.2
η̇ = f0 (η, ξ) is bounded-input–bounded-state stable over
Dη × Dξ

Assumption 13.2 holds locally if the system is minimum phase


and globally if η̇ = f0 (η, ξ) is ISS
Assumption 13.3
r(t) and its derivatives up to r (ρ) (t) are bounded for all t ≥ 0
and the ρth derivative r (ρ) (t) is a piecewise continuous
function of t. Moreover, R = col(r, ṙ, . . . , r (ρ−1) ) ∈ Dξ for all
t≥0



The reference signal r(t) could be specified as given functions
of time, or it could be the output of a reference model
Example: For ρ = 2, the reference model

ωn²/(s² + 2ζωns + ωn²),  ζ > 0,  ωn > 0

ẏ1 = y2,  ẏ2 = −ωn²y1 − 2ζωny2 + ωn²uc,  r = y1

ṙ = y2,  r̈ = ẏ2
Assumption 13.3 is satisfied when uc (t) is piecewise
continuous and bounded
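A sketch of this reference generator for a step command (the values of ζ, ωn, and the integration step are assumptions); r, ṙ, and r̈ = ẏ2 are then available to the tracking controller:

```python
import numpy as np

zeta, wn = 0.9, 2.0              # assumed damping ratio and natural frequency
uc = 1.0                         # step command
y1, y2 = 0.0, 0.0
dt = 1e-3
for _ in range(int(10.0 / dt)):
    dy2 = -wn**2 * y1 - 2*zeta*wn * y2 + wn**2 * uc
    y1, y2 = y1 + dt*y2, y2 + dt*dy2

r, rdot = y1, y2                                      # r and r'
rddot = -wn**2 * y1 - 2*zeta*wn * y2 + wn**2 * uc     # r'' from the model
```

Since uc is bounded and the model is a stable linear filter, r, ṙ, and r̈ are bounded, so Assumption 13.3 holds.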



Change of variables:
e1 = ξ1 − r,  e2 = ξ2 − r^(1),  . . . ,  eρ = ξρ − r^(ρ−1)

η̇ = f0 (η, ξ)
ėi = ei+1 , for 1 ≤ i ≤ ρ − 1
ėρ = a(η, ξ) + b(η, ξ)u − r (ρ)

Goal: Ensure e = col(e1 , . . . , eρ ) = ξ − R is bounded for all


t ≥ 0 and converges to zero as t tends to infinity
Assumption 13.4
r, r (1) , . . . , r (ρ) are available to the controller (needed in state
feedback control)



Feedback controllers for tracking and regulation are classified
as in stabilization
State versus output feedback
Static versus dynamic controllers
Region of validity
local tracking
regional tracking
semiglobal tracking
global tracking
Local tracking is achieved for sufficiently small initial states
and sufficiently small kRk, while global tracking is achieved
for any initial state and any bounded R.



Practical tracking: The tracking error is ultimately bounded
and the ultimate bound can be made arbitrarily small by
choice of design parameters
local practical tracking
regional practical tracking
semiglobal practical tracking
global practical tracking



Tracking
 
η̇ = f0(η, ξ),  ė = Ac e + Bc [a(η, ξ) + b(η, ξ)u − r^(ρ)]

Feedback linearization:

u = [−a(η, ξ) + r^(ρ) + v] / b(η, ξ)

η̇ = f0(η, ξ),  ė = Ac e + Bc v

v = −Ke,  Ac − Bc K Hurwitz

η̇ = f0(η, ξ),  ė = (Ac − Bc K)e

Ac − Bc K Hurwitz ⇒ e(t) is bounded and limt→∞ e(t) = 0

⇒ ξ = e + R is bounded ⇒ η is bounded



Example 13.1 (Pendulum equation)

ẋ1 = x2 , ẋ2 = − sin x1 − bx2 + cu, y = x1

We want the output y to track a reference signal r(t)

e1 = x1 − r, e2 = x2 − ṙ

ė1 = e2 , ė2 = − sin x1 − bx2 + cu − r̈

u = (1/c)[sin x1 + bx2 + r̈ − k1e1 − k2e2]
K = [k1 , k2 ] assigns the eigenvalues of Ac − Bc K at desired
locations in the open left-half complex plane



Simulation
r = sin(t/3), x(0) = col(π/2, 0)

Nominal: b = 0.03, c = 1 Figures (a) and (b)

Perturbed: b = 0.015, c = 0.5 Figure (c)

Reference (dashed)
Low gain: K = [1 1],  λ = −0.5 ± j0.5√3 (solid)

High gain: K = [9 3],  λ = −1.5 ± j1.5√3 (dash-dot)



[Figure: (a), (b) output tracking for the nominal system under low and high gain; (c) output for the perturbed system; (d) the control]



Robust Tracking

η̇ = f0 (η, ξ)
ėi = ei+1 , 1≤i≤ρ−1
ėρ = a(η, ξ) + b(η, ξ)u + δ(t, η, ξ, u) − r (ρ) (t)

Sliding mode control: Design the sliding surface


ėi = ei+1 , 1≤ i≤ρ−1

View eρ as the control input and design it to stabilize the


origin
eρ = −(k1 e1 + · · · + kρ−1 eρ−1 )

λρ−1 + kρ−1 λρ−2 + · · · + k1 is Hurwitz



s = (k1e1 + · · · + kρ−1eρ−1) + eρ = 0

ṡ = Σ_{i=1}^{ρ−1} ki ei+1 + a(η, ξ) + b(η, ξ)u + δ(t, η, ξ, u) − r^(ρ)(t)

u = v   or   u = −(1/b̂(η, ξ)) [Σ_{i=1}^{ρ−1} ki ei+1 + â(η, ξ) − r^(ρ)(t)] + v

ṡ = b(η, ξ)v + ∆(t, η, ξ, v)

Suppose |∆(t, η, ξ, v)/b(η, ξ)| ≤ ̺(η, ξ) + κ0|v|,  0 ≤ κ0 < 1

v = −β(η, ξ) sat(s/µ),  β(η, ξ) ≥ ̺(η, ξ)/(1 − κ0) + β0,  β0 > 0



sṡ ≤ −β0b0(1 − κ0)|s|,  for |s| ≥ µ

ζ = col(e1, . . . , eρ−1),  ζ̇ = (Ac − Bc K)ζ + Bc s,  with Ac − Bc K Hurwitz

V0 = ζᵀPζ,  P(Ac − Bc K) + (Ac − Bc K)ᵀP = −I

V̇0 = −ζᵀζ + 2ζᵀPBc s ≤ −(1 − θ)‖ζ‖²,  ∀ ‖ζ‖ ≥ 2‖PBc‖|s|/θ,  0 < θ < 1

For σ ≥ µ,

{‖ζ‖ ≤ 2‖PBc‖σ/θ} ⊂ {ζᵀPζ ≤ λmax(P)(2‖PBc‖/θ)²σ²}

ρ1 = λmax(P)(2‖PBc‖/θ)²,  c > µ

Ω = {ζᵀPζ ≤ ρ1c²} × {|s| ≤ c} is positively invariant

Nonlinear Control Lecture # 36 Tracking & Regulation


For all e(0) ∈ Ω, e(t) enters the positively invariant set
Ωµ = {ζ^T P ζ ≤ ρ1 µ²} × {|s| ≤ µ}

Inside Ωµ ,
|e1 | ≤ kµ,   k = ‖LP^(−1/2) ‖ √ρ1 ,   L = [1 0 . . . 0]


Example 13.2 (Reconsider Example 13.1)

ė1 = e2 ,   ė2 = − sin x1 − bx2 + cu − r̈

r(t) = sin(t/3),   0 ≤ b ≤ 0.1,   0.5 ≤ c ≤ 2

s = e1 + e2

ṡ = e2 − sin x1 − bx2 + cu − r̈
  = (1 − b)e2 − sin x1 − bṙ − r̈ + cu

|((1 − b)e2 − sin x1 − bṙ − r̈)/c| ≤ (|e2 | + 1 + 0.1/3 + 1/9)/0.5

u = −(2|e2 | + 3) sat((e1 + e2 )/µ)
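A quick numerical check of this controller (a sketch only: forward-Euler integration with an assumed step size and horizon; the perturbed values b = 0.015, c = 0.5 come from the simulation slide). Since the design uses only the bounds on b and c, the tracking error should settle to O(µ) without knowledge of the true parameters.

```python
import math

def sat(y):
    """Saturation nonlinearity sat(y) = min(max(y, -1), 1)."""
    return max(-1.0, min(1.0, y))

def simulate_smc_tracking(b=0.015, c=0.5, mu=0.1, T=20.0, dt=1e-4):
    """Pendulum tracking r = sin(t/3) under
    u = -(2|e2| + 3) sat((e1 + e2)/mu).  Returns |e1(T)|."""
    x1, x2, t = math.pi / 2, 0.0, 0.0
    for _ in range(int(T / dt)):
        r, rd = math.sin(t / 3), math.cos(t / 3) / 3
        e1, e2 = x1 - r, x2 - rd
        u = -(2 * abs(e2) + 3) * sat((e1 + e2) / mu)
        dx1, dx2 = x2, -math.sin(x1) - b * x2 + c * u
        x1, x2, t = x1 + dt * dx1, x2 + dt * dx2, t + dt
    return abs(x1 - math.sin(t / 3))
```

The same call with the nominal b = 0.03, c = 1 gives a comparably small error, illustrating the robustness.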



Simulation:
µ = 0.1, x(0) = col(π/2, 0)

b = 0.03, c = 1 (solid)

b = 0.015, c = 0.5 (dash-dot)

Reference (dashed)



[Figure: (a) output vs. time against the reference; (b) s vs. time over 0 ≤ t ≤ 1]


Nonlinear Control
Lecture # 37
Tracking & Regulation



Transition Between Set Points

η̇ = f0 (η, ξ)
ξ̇i = ξi+1 ,   1 ≤ i ≤ ρ − 1
ξ̇ρ = a(η, ξ) + b(η, ξ)u
y = ξ1

Equilibrium point:
0 = f0 (η̄, ξ̄)
0 = ξ̄i+1 ,   1 ≤ i ≤ ρ − 1
0 = a(η̄, ξ̄) + b(η̄, ξ̄)ū
ȳ = ξ̄1



ξ̄1 = ȳ,   ξ̄i = 0 for 2 ≤ i ≤ ρ

ξ̄ = col(ȳ, 0, · · · , 0)

0 = f0 (η̄, ξ̄),   ū = − a(η̄, ξ̄)/b(η̄, ξ̄)

Assume 0 = f0 (η̄, ξ̄) has a unique solution η̄ in the domain of
interest
η̄ = φη (ȳ),   ū = φu (ȳ)
By assumption (and without loss of generality)
φη (0) = 0,   φu (0) = 0


Goal: Move the system from equilibrium at y = 0 to
equilibrium at y = y ∗ , either asymptotically or over a finite
time period
First Approach: Apply a step command

r(t) = uc (t) = 0 for t < 0,   y∗ for t ≥ 0

Is this allowed?
Take r = y∗ for t ≥ 0

r^(i) = 0 for i ≥ 1

η(0) = 0,   e1 (0) = −y∗ ,   ei (0) = 0 for i ≥ 2



The shape of the transient response depends on the solution
of
ė = (Ac − Bc K)e
in feedback linearization, or the solution of

ζ̇ = (Ac − Bc K)ζ + Bc s,   ζ = col(e1 , . . . , eρ−1 )

in sliding mode control, where Ac − Bc K is the
(ρ − 1) × (ρ − 1) companion matrix with ones on the
superdiagonal and last row [−k1 · · · −kρ−1 ]

What is the impact of the reaching phase?


Second Approach: Take r(t) as the zero-state response of a
Hurwitz transfer function driven by uc
Typical Choice:

aρ /(s^ρ + a1 s^(ρ−1) + · · · + aρ−1 s + aρ )

Choose the parameters a1 to aρ to shape the response of r

r(0) = 0 ⇒ e1 (0) = 0 ⇒ e(0) = 0

Feedback Linearization:
e(0) = 0 ⇒ e(t) ≡ 0

Sliding Mode Control:

e(0) = 0 ⇒ e(0) ∈ Ωµ ⇒ e(t) ∈ Ωµ , ∀ t ≥ 0


The derivatives of r are generated by the pre-filter

ż = A z + B uc ,   r = C z

where A is the companion matrix with ones on the
superdiagonal and last row [−aρ · · · −a1 ],
B = col(0, . . . , 0, aρ ), and C = [1 0 · · · 0]

r = z1 ,   ṙ = z2 ,   . . . ,   r^(ρ−1) = zρ

r^(ρ) = − Σ_{i=1}^{ρ} aρ−i+1 zi + aρ uc

Does r(t) satisfy the assumptions imposed last lecture?
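The claim that r settles at uc follows from the state-space model: at equilibrium z2 = · · · = zρ = 0 and aρ z1 = aρ uc , so r = z1 = uc (unit DC gain). A minimal sketch for ρ = 2, with hypothetical coefficients a1 = 3, a2 = 2 (poles at −1, −2), checks this numerically:

```python
def prefilter_steady_state(a1=3.0, a2=2.0, uc=1.0, T=30.0, dt=1e-3):
    """rho = 2 pre-filter: z1' = z2, z2' = -a2 z1 - a1 z2 + a2 uc, r = z1.
    With s^2 + a1 s + a2 Hurwitz, r should settle at uc (unit DC gain)."""
    z1 = z2 = 0.0
    for _ in range(int(T / dt)):
        dz1, dz2 = z2, -a2 * z1 - a1 * z2 + a2 * uc
        z1, z2 = z1 + dt * dz1, z2 + dt * dz2
    return z1  # r at time T
```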



Third Approach: Plan a trajectory (r(t), ṙ(t), . . . , r^(ρ) (t)) to
move from (0, 0, . . . , 0) to (y∗ , 0, . . . , 0) in finite time T
Example: ρ = 2

r(t) = at²/2                        for 0 ≤ t ≤ T/2
r(t) = −aT²/4 + aT t − at²/2        for T/2 ≤ t ≤ T
r(t) = aT²/4                        for t ≥ T

a = 4y∗ /T² ⇒ r(t) = y∗ for t ≥ T
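The planned trajectory can be written as a small function (an illustrative sketch; the name and default values are assumptions). Note that r and ṙ are continuous at t = T/2 and t = T, r̈ = ±a is piecewise constant, and a = 4y∗/T² makes r(T) = aT²/4 = y∗:

```python
def plan_r(t, y_star=1.0, T=2.0):
    """Piecewise-quadratic reference moving from 0 to y_star over [0, T]
    with bang-bang acceleration +-a, where a = 4*y_star/T**2."""
    a = 4.0 * y_star / T ** 2
    if t <= 0.0:
        return 0.0
    if t <= T / 2:
        return 0.5 * a * t * t                               # accelerate
    if t <= T:
        return -a * T * T / 4 + a * T * t - 0.5 * a * t * t  # decelerate
    return y_star                                            # hold at aT^2/4
```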



[Figure: r̈ switching between a and −a at t = T/2; ṙ rising as at then falling as a(T − t); r rising from at²/2 through −aT²/4 + aT t − at²/2 to the constant aT²/4]


Example 13.3 (Reconsider Example 13.1)

ẋ1 = x2 ,   ẋ2 = − sin x1 − 0.03x2 + u,   y = x1

Move the pendulum from equilibrium at x = 0 to equilibrium
at x = col(π/2, 0)
Pre-Filter:
1/(τ s + 1)² ,   uc = 0 for t < 0,   π/2 for t ≥ 0

u = sin x1 + 0.03x2 + r̈ − 9e1 − 3e2

Constraint: |u(t)| ≤ 2


[Figure: output and reference (top) and control (bottom) vs. time for τ = 0.2 and τ = 0.8]


Robust Regulation via Integral Action
η̇ = f0 (η, ξ, w)
ξ̇i = ξi+1 ,   1 ≤ i ≤ ρ − 1
ξ̇ρ = a(η, ξ, w) + b(η, ξ, w)u
y = ξ1

Disturbance w and reference r are constant

Equilibrium point:
0 = f0 (η̄, ξ̄, w)
0 = ξ̄i+1 ,   1 ≤ i ≤ ρ − 1
0 = a(η̄, ξ̄, w) + b(η̄, ξ̄, w)ū
r = ξ̄1


Assumption 13.5
0 = f0 (η̄, ξ̄, w) has a unique solution η̄ = φη (r, w)

ū = − a(η̄, ξ̄, w)/b(η̄, ξ̄, w) =: φu (r, w)

Augment the integrator ė0 = y − r

z = η − η̄ ,   e = col(e1 , e2 , . . . , eρ ) = col(ξ1 − r, ξ2 , . . . , ξρ )


ż = f0 (z + η̄, ξ, w) =: f̃0 (z, e, r, w)
ėi = ei+1 ,   for 0 ≤ i ≤ ρ − 1
ėρ = a(η, ξ, w) + b(η, ξ, w)u

Sliding mode control:

s = k0 e0 + k1 e1 + · · · + kρ−1 eρ−1 + eρ

λ^ρ + kρ−1 λ^(ρ−1) + · · · + k1 λ + k0 is Hurwitz

ṡ = Σ_{i=0}^{ρ−1} ki ei+1 + a(η, ξ, w) + b(η, ξ, w)u

u = v   or   u = −(1/b̂(η, ξ)) [ Σ_{i=0}^{ρ−1} ki ei+1 + â(η, ξ) ] + v


ṡ = b(η, ξ, w)v + ∆(η, ξ, r, w)

|∆(η, ξ, r, w)/b(η, ξ, w)| ≤ ̺(η, ξ)

v = −β(η, ξ) sat(s/µ),   β(η, ξ) ≥ ̺(η, ξ) + β0 ,   β0 > 0

Assumption 13.6

α1 (‖z‖) ≤ V1 (z, r, w) ≤ α2 (‖z‖)

(∂V1 /∂z) f̃0 (z, e, r, w) ≤ −α3 (‖z‖),   ∀ ‖z‖ ≥ α4 (‖e‖)

Assumption 13.7
z = 0 is an exponentially stable equilibrium point of
ż = f̃0 (z, 0, r, w)


Theorem 13.1
Under the stated assumptions, there are positive constants c,
ρ1 and ρ2 and a positive definite matrix P such that the set

Ω = {V1 (z) ≤ α2 (α4 (cρ2 ))} × {ζ^T P ζ ≤ ρ1 c²} × {|s| ≤ c}

where ζ = col(e0 , e1 , . . . , eρ−1 ), is compact and positively
invariant, and for all initial states in Ω

lim_{t→∞} |y(t) − r| = 0

Special case: β = k (a constant) and u = v

u = −k sat((k0 e0 + k1 e1 + · · · + kρ−1 eρ−1 + eρ )/µ)


Example 13.4 (Pendulum with horizontal acceleration)

ẋ1 = x2 , ẋ2 = − sin x1 − bx2 + cu + d cos x1 , y = x1

d is constant. Regulate y to a constant reference r


0 ≤ b ≤ 0.1, 0.5 ≤ c ≤ 2, 0 ≤ d ≤ 0.5

e1 = x1 − r, e2 = x2

ė0 = e1 , ė1 = e2 , ė2 = − sin x1 − bx2 + cu + d cos x1

s = e0 + 2e1 + e2

ṡ = e1 + (2 − b)e2 − sin x1 + cu + d cos x1



|(e1 + (2 − b)e2 − sin x1 + d cos x1 )/c| ≤ (|e1 | + 2|e2 | + 1 + 0.5)/0.5

u = −(2|e1 | + 4|e2 | + 4) sat((e0 + 2e1 + e2 )/µ)

For comparison, SMC without integrator
s = e1 + e2 ,   ṡ = (1 − b)e2 − sin x1 + cu + d cos x1

u = −(2|e2 | + 4) sat((e1 + e2 )/µ)

Simulation: With integrator (dashed), without (solid)
µ = 0.1,   x(0) = 0,   r = π/2,   b = 0.03,   c = 1,   d = 0.3
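The effect of the integrator can be checked numerically. The sketch below (forward Euler; step size and horizon are arbitrary choices, and the parameter values are those of the simulation slide) runs the integral sliding mode controller on the pendulum with disturbance d = 0.3. The integrator state settles where u cancels the disturbance-induced bias, so the regulation error should go to zero rather than to an O(µ) offset:

```python
import math

def sat(y):
    return max(-1.0, min(1.0, y))

def simulate_integral_smc(b=0.03, c=1.0, d=0.3, mu=0.1, T=30.0, dt=1e-4):
    """Regulate the pendulum with disturbance d*cos(x1) to r = pi/2 via
    u = -(2|e1| + 4|e2| + 4) sat((e0 + 2 e1 + e2)/mu), with e0' = e1.
    Returns |x1(T) - r|."""
    r = math.pi / 2
    x1 = x2 = e0 = 0.0
    for _ in range(int(T / dt)):
        e1, e2 = x1 - r, x2
        u = -(2 * abs(e1) + 4 * abs(e2) + 4) * sat((e0 + 2 * e1 + e2) / mu)
        dx1 = x2
        dx2 = -math.sin(x1) - b * x2 + c * u + d * math.cos(x1)
        e0, x1, x2 = e0 + dt * e1, x1 + dt * dx1, x2 + dt * dx2
    return abs(x1 - r)
```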



[Figure: output vs. time, with a zoom over 9 ≤ t ≤ 10 showing the steady-state offset removed by the integrator]


Nonlinear Control
Lecture # 38
Tracking & Regulation



Output Feedback
Tracking:
η̇ = f0 (η, ξ)
ėi = ei+1 ,   1 ≤ i ≤ ρ − 1
ėρ = a(η, ξ) + b(η, ξ)u + δ(t, η, ξ, u) − r^(ρ) (t)

Regulation:
η̇ = f0 (η, ξ, w)
ξ̇i = ξi+1 ,   1 ≤ i ≤ ρ − 1
ξ̇ρ = a(η, ξ, w) + b(η, ξ, w)u
y = ξ1

Design partial state feedback control that uses ξ

Use a high-gain observer


Tracking sliding mode controller:

u = −β(ξ) sat((k1 e1 + · · · + kρ−1 eρ−1 + eρ )/µ)

Regulation sliding mode controller:

u = −β(ξ) sat((k0 e0 + k1 e1 + · · · + kρ−1 eρ−1 + eρ )/µ)

ė0 = e1 = y − r
β is allowed to depend only on ξ rather than the full state
vector. On compact sets, the η-dependent part of ̺(η, ξ) can
be bounded by a constant


High-gain observer:

ê̇i = êi+1 + (αi /ε^i )(y − r − ê1 ),   1 ≤ i ≤ ρ − 1
ê̇ρ = (αρ /ε^ρ )(y − r − ê1 )

λ^ρ + α1 λ^(ρ−1) + · · · + αρ−1 λ + αρ Hurwitz

e → ê

ξ → ξ̂ = ê + R,   where R = col(r, ṙ, . . . , r^(ρ−1) )

β(ξ̂) → βs (ξ̂) (saturated)


Tracking:

u = −βs (ξ̂) sat((k1 ê1 + · · · + kρ−1 êρ−1 + êρ )/µ)

Regulation:

u = −βs (ξ̂) sat((k0 e0 + k1 ê1 + · · · + kρ−1 êρ−1 + êρ )/µ)

We can replace ê1 by e1

Special case: When βs is constant or a function of ê rather
than ξ̂, we do not need the derivatives of r, as required by
Assumption 13.4


The output feedback controllers recover the performance of
the partial state feedback controllers for sufficiently small ε. In
the regulation case, the regulation error converges to zero
Relative degree one systems: No observer

u = −β(y) sat((y − r)/µ),   u = −β(y) sat((k0 e0 + y − r)/µ)


Example 13.5 (Revisit Examples 13.2 and 13.4)
Use the high-gain observer

ê̇1 = ê2 + (2/ε)(e1 − ê1 ),   ê̇2 = (1/ε²)(e1 − ê1 )

to implement the tracking controller

u = −(2|e2 | + 3) sat((e1 + e2 )/µ)   (Example 13.2)

and the regulating controller

u = −(2|e1 | + 4|e2 | + 4) sat((e0 + 2e1 + e2 )/µ)   (Example 13.4)

Replace e2 by ê2 but keep e1


Saturate |ê2 | in the β function over a compact set of interest.
There is no need to saturate ê2 inside the sat(·/µ) argument
For Example 13.2,
Ω = {|e1 | ≤ c/θ} × {|s| ≤ c},   c > 0,   0 < θ < 1

is positively invariant. Take c = 2 and 1/θ = 1.1

Ω = {|e1 | ≤ 2.2} × {|s| ≤ 2}

Over Ω, |e2 | ≤ |e1 | + |s| ≤ 4.2. Saturate |ê2 | at 4.5

u = −(2 × 4.5 sat(|ê2 |/4.5) + 3) sat((e1 + ê2 )/µ)
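A sketch of the complete output feedback loop for the tracking case (forward Euler; the values ε = 0.01, the step size, and the horizon are illustrative choices). The observer runs on the measured e1 = y − r only, ê2 replaces e2, and the control stays bounded during the observer's peaking transient because both saturations are active:

```python
import math

def sat(y):
    return max(-1.0, min(1.0, y))

def simulate_output_feedback(b=0.03, c=1.0, mu=0.1, eps=0.01,
                             T=20.0, dt=1e-4):
    """Tracking controller of Example 13.2 with e2 replaced by the
    high-gain-observer estimate eh2; |eh2| is saturated at 4.5 only
    inside the gain beta.  Returns |e1(T)|."""
    x1, x2, t = math.pi / 2, 0.0, 0.0
    eh1 = eh2 = 0.0                       # observer state
    for _ in range(int(T / dt)):
        r, rd = math.sin(t / 3), math.cos(t / 3) / 3
        e1 = x1 - r                       # measured tracking error y - r
        beta = 2 * 4.5 * sat(abs(eh2) / 4.5) + 3
        u = -beta * sat((e1 + eh2) / mu)
        # plant
        dx1, dx2 = x2, -math.sin(x1) - b * x2 + c * u
        # high-gain observer driven by e1 - eh1
        deh1 = eh2 + (2 / eps) * (e1 - eh1)
        deh2 = (e1 - eh1) / eps ** 2
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        eh1, eh2 = eh1 + dt * deh1, eh2 + dt * deh2
        t += dt
    return abs(x1 - math.sin(t / 3))
```

After the fast observer transient (on the order of ε) the loop behaves like the state feedback design, so the terminal error is again O(µ).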



For Example 13.4,

ζ̇ = Aζ + Bs,   ζ = col(e0 , e1 ),   A = [0 1; −1 −2],   B = col(0, 1)

P A + A^T P = −I,   0 < θ < 1,   ρ1 = λmax (P )(2‖P B‖/θ)² ,   c > 0

Ω = {ζ^T P ζ ≤ ρ1 c²} × {|s| ≤ c}
is positively invariant. Take c = 4 and 1/θ = 1.003
Ω = {ζ^T P ζ ≤ 55} × {|s| ≤ 4}

Over Ω,
|e0 + 2e1 | ≤ 22.25 ⇒ |e2 | ≤ |e0 + 2e1 | + |s| ≤ 26.25


Saturate |ê2 | at 27

u = −(2|e1 | + 4 × 27 sat(|ê2 |/27) + 4) sat((e0 + 2e1 + ê2 )/µ)

Simulation: (a) Tracking, (b) regulation

ε = 0.05 (dashed), ε = 0.01 (dash-dot)

State feedback (solid)


[Figure: output vs. time for (a) tracking and (b) regulation]
