
Optimal control
University of Strasbourg
Telecom Physique Strasbourg, ISAV option
Master IRIV, AR track
Part 2 – Predictive control
jacques.gangloff@unistra.fr — 18/11/16

Outline
1.  Introduction
2.  System modelling
3.  Cost function
4.  Prediction equation
5.  Optimal control
6.  Examples
7.  Tuning of the GPC
8.  Nonlinear predictive control
9.  References

1. Introduction
1.1. Definition of MPC

—  Model Predictive Control (MPC)
  –  Use of a model to predict the behaviour of the system.
  –  Compute a sequence of future control inputs that minimize the quadratic error over a receding horizon of time.
  –  Only the first sample of the sequence is applied to the system. The whole sequence is re-evaluated at each sampling time (a minimal loop sketch follows below).

1. Introduction
1.2. Principle of MPC

[Block diagram: the N2 future references $[r(t+1) \dots r(t+N_2)]^T$ enter the optimization block, which computes the Nu future control signals $[u(t) \dots u(t+N_u-1)]^T$; only $u(t)$ is applied to the system, whose output $y(t)$ feeds the prediction block, producing the N2 predicted outputs $[\hat y(t+1) \dots \hat y(t+N_2)]^T$ that are fed back to the optimization.]
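The receding-horizon principle above can be summarized in a few lines of code. This is a minimal sketch, assuming hypothetical `optimize_sequence` and `plant_step` callbacks that stand in for the GPC optimization and the real system (neither name comes from the slides):

```python
import numpy as np

def receding_horizon_loop(optimize_sequence, plant_step, y0, n_steps):
    """Generic MPC loop: at each sampling time, an optimizer returns a whole
    sequence of future control moves, only the first one is applied, and the
    sequence is recomputed at the next sampling time."""
    y_log, u_log = [y0], []
    for t in range(n_steps):
        u_future = optimize_sequence(y_log, u_log)   # Nu future control signals
        u_log.append(u_future[0])                    # apply only the first sample
        y_log.append(plant_step(u_log[-1]))          # measure the new output
    return np.array(y_log), np.array(u_log)
```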



1. Introduction
1.2. Principle of MPC

[Figure: predicted output y and reference r plotted over the receding horizon, from t + N1 to t + N2. Goal of the optimization: minimizing the quadratic error between the predicted output and the reference over this horizon.]

1. Introduction
1.3. Various flavours of MPC

—  DMC (Dynamic Matrix Control)
  –  Uses the system’s step response.
  –  The system must be stable and without integrator.
—  MAC (Model Algorithmic Control)
  –  Uses the system’s impulse response.
—  PFC (Predictive Functional Control)
  –  Uses a state space representation of the system.
  –  Can apply to nonlinear systems.
—  GPC (Generalized Predictive Control)
  –  Uses a CARMA model of the system.
  –  The most commonly used.


1. Introduction
1.4. Advantages / drawbacks of MPC

—  Advantages
  –  Simple principle, easy and quick tuning.
  –  Applies to every kind of system (non-minimum phase, unstable, MIMO, nonlinear, time-varying).
  –  If the reference or the disturbance is known in advance, it can drastically improve the reference tracking accuracy.
  –  Numerically stable.
—  Drawbacks
  –  Computationally demanding.
  –  Requires a good knowledge of the system model.

2. Modelling
2.1. Example of MAC

—  Input-output relationship:
$$ y(t) = \sum_{i=1}^{\infty} h_i\, u(t-i) $$
—  Truncation of the response (see the sketch below):
$$ \hat y(t+k\,|\,t) = \sum_{i=1}^{N} h_i\, u(t+k-i\,|\,t) $$
—  Drawback:
  –  Model is not in its minimal form.
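A minimal numerical sketch of the truncated impulse-response predictor above. The function name and the way the input history is stored are illustrative choices, not part of the slides; the impulse-response samples h_1 … h_N and the past/assumed-future inputs are plain NumPy arrays:

```python
import numpy as np

def mac_prediction(h, u_past, u_future):
    """k-step-ahead MAC prediction  y_hat(t+k|t) = sum_{i=1..N} h_i * u(t+k-i|t).

    h        : truncated impulse response [h_1, ..., h_N]
    u_past   : known inputs   [..., u(t-1), u(t)]      (most recent last)
    u_future : assumed inputs [u(t+1), ..., u(t+k)]    (k = len(u_future))
    The concatenated input history must contain at least N + 1 samples."""
    u_all = np.concatenate([u_past, u_future])      # ..., u(t+k-1), u(t+k)
    N = len(h)
    window = u_all[-(N + 1):-1]                     # u(t+k-N), ..., u(t+k-1)
    return float(np.dot(h[::-1], window))           # sum_i h_i * u(t+k-i)
```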



2. Modelling
2.2. The case of the GPC

—  CARMA modelling (Controlled Auto-Regressive Moving Average):
$$ A(q^{-1})\, y(t) = B(q^{-1})\, u(t-1) + \frac{C(q^{-1})}{D(q^{-1})}\, e(t) $$
—  With:
$$ \begin{cases} A(q^{-1}) = 1 + a_1 q^{-1} + a_2 q^{-2} + \dots + a_{n_a} q^{-n_a} \\ B(q^{-1}) = b_0 + b_1 q^{-1} + b_2 q^{-2} + \dots + b_{n_b} q^{-n_b} \\ C(q^{-1}) = 1 + c_1 q^{-1} + c_2 q^{-2} + \dots + c_{n_c} q^{-n_c} \end{cases} $$
—  Usually: $D(q^{-1}) = \Delta(q^{-1}) = 1 - q^{-1}$

3. GPC cost function

—  For the GPC (see the numerical sketch below):
$$ J = \sum_{j=N_1}^{N_2} \bigl[\hat y(t+j\,|\,t) - r(t+j)\bigr]^2 + \sum_{j=1}^{N_u} \lambda\, \bigl[\Delta u(t+j-1)\bigr]^2 $$
  –  First term: quadratic error. Second term: energy of the control signal.
—  Tuning parameters: N1, N2, Nu, λ
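A minimal sketch of how the CARMA polynomials and the GPC cost can be represented numerically, using the first-order example of Section 6 for the coefficients; the helper name `gpc_cost` is illustrative:

```python
import numpy as np

# CARMA polynomials as coefficient arrays in increasing powers of q^-1
# (values taken from the first-order example of Section 6).
A = np.array([1.0, -0.7])        # A(q^-1) = 1 - 0.7 q^-1
B = np.array([0.9, -0.6])        # B(q^-1) = 0.9 - 0.6 q^-1
C = np.array([1.0])              # C(q^-1) = 1
Delta = np.array([1.0, -1.0])    # Delta(q^-1) = 1 - q^-1 (the usual choice for D)

def gpc_cost(y_hat, r, du, lam, N1=1):
    """GPC cost: sum_{j=N1..N2} (y_hat(t+j|t) - r(t+j))^2 + lam * sum_{j=1..Nu} du(t+j-1)^2.

    y_hat, r : predictions and references for t+1 ... t+N2
    du       : control increments Delta u(t) ... Delta u(t+Nu-1)"""
    tracking = np.sum((np.asarray(y_hat)[N1 - 1:] - np.asarray(r)[N1 - 1:]) ** 2)
    effort = lam * np.sum(np.asarray(du) ** 2)
    return tracking + effort
```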

4. GPC prediction equations

—  First Diophantine equation (solved numerically in the sketch below):
$$ C = E_j\, \Delta A + q^{-j} F_j $$
—  With C = 1:
$$ 1 = E_j\, \Delta A + q^{-j} F_j \quad \text{with} \quad \begin{cases} \deg(E_j) = j - 1 \\ \deg(F_j) = n_a \end{cases} $$
—  Let:
$$ \left[\, A\, y(t) = B\, u(t-1) + \frac{e(t)}{\Delta} \,\right] \times \Delta E_j q^{\,j} \;\Rightarrow\; \Delta A E_j\, y(t+j) = E_j B\, \Delta u(t+j-1) + E_j\, e(t+j) $$

4. GPC prediction equations

—  Using the Diophantine equation ($E_j \Delta A = 1 - q^{-j} F_j$):
$$ \bigl(1 - q^{-j} F_j\bigr)\, y(t+j) = E_j B\, \Delta u(t+j-1) + E_j\, e(t+j) $$
—  Which yields:
$$ y(t+j) = F_j\, y(t) + E_j B\, \Delta u(t+j-1) + E_j\, e(t+j) $$
—  Thus, since deg(E_j) = j − 1, the term E_j e(t+j) contains only future noise samples and the best prediction is:
$$ \hat y(t+j\,|\,t) = E_j B\, \Delta u(t+j-1) + F_j\, y(t) $$
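The first Diophantine equation can be solved by long division of 1 by ΔA, truncated after j terms. A minimal sketch (the function name is illustrative), checked against the first-order example of Section 6:

```python
import numpy as np

def diophantine_1(A_tilde, j):
    """Solve 1 = E_j * A_tilde + q^-j * F_j for a monic A_tilde = Delta * A.

    A_tilde : coefficients of Delta*A in increasing powers of q^-1
    Returns (E_j, F_j) as coefficient arrays, deg(E_j) = j-1, deg(F_j) = n_a."""
    n = len(A_tilde) - 1
    E = np.zeros(j)
    rem = np.zeros(j + n)
    rem[0] = 1.0                              # numerator: the polynomial "1"
    for i in range(j):                        # one division step per power of q^-1
        E[i] = rem[i]                         # A_tilde[0] = 1, so no division needed
        rem[i:i + n + 1] -= E[i] * np.asarray(A_tilde)
    return E, rem[j:].copy()                  # remainder = q^-j * F_j

# First-order example of Section 6: A = 1 - 0.7 q^-1
A_tilde = np.convolve([1.0, -1.0], [1.0, -0.7])   # Delta*A = 1 - 1.7 q^-1 + 0.7 q^-2
E3, F3 = diophantine_1(A_tilde, 3)
# E3 = [1, 1.7, 2.19] and F3 = [2.533, -1.533]
```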
4. GPC prediction equations

—  Second Diophantine equation:
$$ E_j B = G_j + q^{-j} \Gamma_j \quad \text{with} \quad \deg(G_j) = j - 1 $$
—  Separation of control inputs:
$$ \hat y(t+j\,|\,t) = \underbrace{G_j\, \Delta u(t+j-1)}_{\text{Forced response}} + \underbrace{\Gamma_j\, \Delta u(t-1) + F_j\, y(t)}_{\text{Free response}} $$
—  Prediction equation: $\hat y = G \tilde u + \hat f$
—  With:
$$ \begin{cases} \hat y = \bigl[\hat y(t+1\,|\,t) \dots \hat y(t+N_2\,|\,t)\bigr]^T \\ \tilde u = \bigl[\Delta u(t\,|\,t) \dots \Delta u(t+N_u-1\,|\,t)\bigr]^T \\ \hat f = \bigl[\hat f(t+1\,|\,t) \dots \hat f(t+N_2\,|\,t)\bigr]^T \end{cases} $$

4. GPC prediction equations

—  And, with G of dimension N2 × Nu:
$$ G = \begin{bmatrix} g_0 & 0 & \cdots & 0 \\ g_1 & g_0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ g_{N_u-1} & g_{N_u-2} & \cdots & g_0 \\ \vdots & \vdots & & \vdots \\ g_{N_2-1} & g_{N_2-2} & \cdots & g_{N_2-N_u} \end{bmatrix} $$
—  With g0 … gN2−1 the samples of the system’s step response (see the sketch below).
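A minimal sketch of how G can be built from the step-response samples of the CARMA plant; the helper names are illustrative and the example values are those of the first-order system of Section 6:

```python
import numpy as np

def step_response(A, B, n):
    """First n step-response samples g_0..g_{n-1} of A(q^-1) y(t) = B(q^-1) u(t-1),
    i.e. the response to a unit step u(t) = 1 for t >= 0 (A monic, coefficients
    in increasing powers of q^-1)."""
    y = np.zeros(n + 1)                       # y[k] = output at time k, y[0] = 0
    for k in range(1, n + 1):
        acc = sum(B[i] for i in range(len(B)) if k - 1 - i >= 0)        # step input
        acc -= sum(A[i] * y[k - i] for i in range(1, len(A)) if k - i >= 0)
        y[k] = acc                            # A[0] = 1
    return y[1:]                              # g_{k-1} = y(k)

def build_G(g, N2, Nu):
    """N2 x Nu lower-banded GPC matrix: G[row, col] = g[row - col] for col <= row."""
    G = np.zeros((N2, Nu))
    for row in range(N2):
        for col in range(min(row + 1, Nu)):
            G[row, col] = g[row - col]
    return G

# First-order example of Section 6: expect g = [0.9, 0.93, 0.951]
g = step_response(np.array([1.0, -0.7]), np.array([0.9, -0.6]), 3)
G = build_G(g, N2=3, Nu=3)
```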

5. Optimal control

—  Cost function: $J = (\hat y - r)^T (\hat y - r) + \lambda\, \tilde u^T \tilde u$
—  Let $\tilde u_{opt}$ such that $\dfrac{dJ}{d\tilde u} = 0$:
$$ \Rightarrow\; \tilde u_{opt} = \bigl(G^T G + \lambda I\bigr)^{-1} G^T \bigl(r - \hat f\bigr) $$
—  With $r = \bigl[r(t+1) \dots r(t+N_2)\bigr]^T$ the future references.
—  Only the first optimal control sample is applied to the system (see the sketch below).

6. Examples
6.1. First order system

—  A system in the CARMA form has the following parameters:
$$ \begin{cases} A = 1 - 0.7 q^{-1} \\ B = 0.9 - 0.6 q^{-1} \\ C = 1 \end{cases} $$
—  Compute the system’s prediction equations 3 steps ahead.
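A minimal sketch of the optimal solution above: the gain matrix is computed once, and only its first row is used at each sampling time (function names are illustrative; G comes from the previous sketch and f_hat is the free-response vector):

```python
import numpy as np

def gpc_gain(G, lam):
    """K = (G^T G + lam*I)^(-1) G^T, so that u_tilde_opt = K @ (r - f_hat)."""
    Nu = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T)

def apply_first_move(K, r, f_hat, u_prev):
    """Receding-horizon step: use only the first row of K to get Delta u(t),
    and return the absolute control u(t) = u(t-1) + Delta u(t)."""
    du = float(K[0] @ (np.asarray(r) - np.asarray(f_hat)))
    return u_prev + du
```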



6. Examples
6.1. First order system

—  Using the CARMA model three times gives the 1-, 2- and 3-step-ahead prediction equations (recomputed numerically in the sketch at the end of this example).
—  Putting everything in matrix form:
$$ \bigl(G^T G + \lambda I_3\bigr)^{-1} G^T = \begin{pmatrix} 0.8947 & 0.0929 & 0.0095 \\ -0.8316 & 0.8091 & 0.0929 \\ -0.0766 & -0.8316 & 0.8947 \end{pmatrix} $$


6. Examples
6.1. First order system

—  Optimal control (differential form):
$$ \Delta u(t) = 0.644\, \Delta u(t-1) - 1.7483\, y(t) + 0.7513\, y(t-1) + 0.8947\, r(t+1) + 0.0929\, r(t+2) + 0.0095\, r(t+3) $$
—  Optimal control (absolute form):
$$ u(t) - u(t-1) = 0.644 \bigl[u(t-1) - u(t-2)\bigr] - 1.7483\, y(t) + 0.7513\, y(t-1) + 0.8947\, r(t+1) + 0.0929\, r(t+2) + 0.0095\, r(t+3) $$
$$ u(t) = 1.644\, u(t-1) - 0.644\, u(t-2) - 1.7483\, y(t) + 0.7513\, y(t-1) + 0.8947\, r(t+1) + 0.0929\, r(t+2) + 0.0095\, r(t+3) $$
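The gains above can be reproduced end-to-end with the pieces sketched earlier. One caveat: the slides do not restate the control weighting used in this example; λ = 0.1 (with N1 = 1, N2 = Nu = 3) reproduces the printed matrix and control-law coefficients, so that value is assumed in the sketch below.

```python
import numpy as np

A = np.array([1.0, -0.7]); B = np.array([0.9, -0.6])        # first-order example
N1, N2, Nu = 1, 3, 3
lam = 0.1                     # assumed: reproduces the numbers printed on the slides
A_tilde = np.convolve([1.0, -1.0], A)                       # Delta * A

def diophantine(A_tilde, B, j):
    """1 = E_j*A_tilde + q^-j*F_j, then E_j*B = G_j + q^-j*Gamma_j (A_tilde monic)."""
    n = len(A_tilde) - 1
    E, rem = np.zeros(j), np.zeros(j + n)
    rem[0] = 1.0
    for i in range(j):
        E[i] = rem[i]
        rem[i:i + n + 1] -= E[i] * A_tilde
    EB = np.convolve(E, B)
    return EB[:j], EB[j:], rem[j:].copy()                   # G_j, Gamma_j, F_j

G = np.zeros((N2, Nu)); Gammas, Fs = [], []
for j in range(N1, N2 + 1):
    Gj, Gamma, F = diophantine(A_tilde, B, j)
    G[j - 1, :min(j, Nu)] = Gj[::-1][:Nu]                   # row j: g_{j-1} ... g_0
    Gammas.append(Gamma); Fs.append(F)

K = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T)        # (G^T G + lam I)^-1 G^T
k1 = K[0]                                                   # only this row is applied

print("first row of K:", np.round(k1, 4))                   #  0.8947  0.0929  0.0095
print("du(t-1) coeff :", round(float(-k1 @ [g[0] for g in Gammas]), 4))   #  0.644
print("y(t)    coeff :", round(float(-k1 @ [f[0] for f in Fs]), 4))       # -1.7483
print("y(t-1)  coeff :", round(float(-k1 @ [f[1] for f in Fs]), 4))       #  0.7513
```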



6. Examples
6.2. Simulation results

[Figures: closed-loop simulation results for the first-order example — plots not reproduced in this extraction.]

7. Tuning the GPC

—  Parameter λ:
  –  Increase: the response slows down.
  –  Decrease: more energy in the control signal, thus a faster response.
—  Parameter N2:
  –  At least the size of the step response of the system.
—  Parameter N1:
  –  Greater than the system’s delay.
—  Parameter Nu:
  –  Tends toward dead-beat control when Nu tends toward zero.


8. Nonlinear predictive control

—  The system can be nonlinear.
—  The optimal solution is computed using an iterative optimization algorithm.
—  The optimization is performed at each sampling time.
—  Additional constraints can be added.
—  The cost function can be more complex.
—  Main drawback: very computationally intensive.

9. References

—  R. Bitmead, M. Gevers and V. Wertz, « Adaptive Optimal Control – The Thinking Man's GPC », Prentice Hall International, 1990.
—  E. F. Camacho and C. Bordons, « Model Predictive Control », Springer Verlag, 1999.
—  J.-M. Dion and D. Popescu, « Commande optimale, conception optimisée des systèmes », Diderot, 1996.
—  P. Boucher and D. Dumur, « La commande prédictive », Technip, 1996.
