Internship Report - Final
This study presents a comprehensive and technically rigorous investigation into the design,
formulation, and implementation of Model Predictive Control (MPC) for both Single-Input
Single-Output (SISO) and Multi-Input Multi-Output (MIMO) dynamic systems. Motivated
by the limitations of classical control strategies in handling multivariable coupling, input/state
constraints, and time-varying dynamics, the study begins with a historical and conceptual
overview of MPC, tracing its evolution from heuristic industrial applications to its modern
optimization-based framework.
Chapter 1 develops the discrete-time MPC formulation for SISO systems using state-space
modelling and augmented representations to incorporate integral action. The predictive
control law is derived analytically through matrix-based horizon prediction and quadratic cost
minimization, culminating in a receding horizon strategy. The novelty lies in the symbolic
derivation of the control law and its integration with real-time feedback mechanisms.
Chapter 2 extends the MPC framework to MIMO systems, introducing disturbance modeling
and augmented state-space representations. It rigorously addresses controllability and
observability of the extended system and derives the optimal control law under multivariable
constraints. Eigenvalue analysis and noise modeling are incorporated to enhance robustness
and predictive fidelity, showcasing MPC’s scalability and constraint-handling capabilities in
high-dimensional environments.
Chapter 3 presents three diverse case studies — a mass-spring-damper system, lateral control
of an autonomous vehicle, and an inverted pendulum on a cart — each serving as a
benchmark for validating MPC’s performance. Both GUI-based (mpcDesigner) and script-
based implementations are explored, with comparative insights into tuning flexibility,
constraint enforcement, and sampling time effects. The study includes a detailed analysis of
how sampling time influences prediction depth, control aggressiveness, and transient shaping,
supported by simulation results across multiple regimes.
CONTENTS
List of Figures
Introduction
1. Predictive Control for Single-Input Single-Output (SISO) Systems
   1.3 Optimization
   1.4 Receding Horizon Control
2. Predictive Control of MIMO Systems
   2.1 General Formulation of the Model
3. Case Studies
   a. System Description
   b. Control Objective
3.2 Lateral Control of a Car Model
   a. System Description
   b. Model Formulation
   c. Control Objective
   e. Simulation Results
3.3 MPC Control of an Inverted Pendulum System
   a. System Description
   d. Simulation Results
   e. Effect of Sampling Time
Conclusion
References
Appendix
LIST OF FIGURES
3.4 MPC Controlled Cart’s Position and Applied Control Input (i.e., Force) against Time (in
seconds)
3.7 Geometric diagram of a vehicle’s lateral displacement and heading control vectors
3.8 Evolution of heading angle a(t), lateral position y(t) and steering input u(t) over time
3.12 Input Signal applied to the plant during closed loop MPC simulation
3.13 Closed-loop response of the inverted pendulum system under MPC regulation
3.14 Step Response of the linearized pendulum system under LQR regulation
3.15 Plant input signal u(t) under MPC regulation at Sampling Time 𝑇𝑠 = 0.01 𝑠
3.16 Output responses of the Inverted Pendulum System at Sampling Time 𝑇𝑠 = 0.01 𝑠
INTRODUCTION
Model Predictive Control (MPC) is an advanced control strategy that computes a trajectory of future control inputs to optimize the future behaviour of the plant output, where the optimization is carried out within a limited time window. Two needs motivated its development:
   ●   There was a strong need for a control strategy that could ensure constraint satisfaction
       at all times.
   ●   MPC allows optimization under constraints, which was not feasible with classical
       methods.
Historical Background
Model Predictive Control (MPC) originated in the 1970s within the process industries,
particularly in chemical and oil refining, where traditional control methods struggled with
multivariable systems and operational constraints. Early implementations like IDCOM (by
Richalet et al., 1978) and DMC (Dynamic Matrix Control, by Cutler and Ramaker, 1979 at
Shell) [7] were primarily heuristic, using step-response models and matrix-based predictions
to optimize control actions over a finite horizon. In the 1980s, MPC began to gain a more
formal theoretical foundation with developments such as Generalized Predictive Control
(GPC) and QDMC, which introduced quadratic cost functions and explicit optimization
frameworks. During the 1990s, research focused on ensuring closed-loop stability and
robustness, leading to the development of robust and nonlinear MPC variants. With
advancements in computing during the 2000s, MPC became more practical for real-time
applications and was integrated into commercial software tools, expanding its reach into
automotive, aerospace, and robotics. In recent years, MPC has continued to evolve with
trends such as economic MPC, distributed MPC, and learning-based approaches, making it a
central technique in modern control systems across a wide range of applications.
1. PREDICTIVE CONTROL FOR SINGLE-INPUT SINGLE-OUTPUT (SISO) SYSTEMS
MPC uses a dynamic model of the process to predict future outputs of the system over a prediction horizon based on the current state and/or output. The popular approaches to system modelling for predictive control design are finite impulse response (step-response) models, transfer function models, and state-space models. Here, we consider only the state-space representation for further discussion because it provides a comprehensive, flexible, and efficient framework for prediction, optimization, and control of dynamic systems, especially multivariable and constrained ones. [1]
$$x_m(k+1) = A_m x_m(k) + B_m u(k), \qquad y(k) = C_m x_m(k)$$
In order to eliminate steady-state error from a control system we need to include integral
action. For model predictive controllers, a convenient way to introduce integral action is by
modifying the state-space model.
Taking the difference of the state equation between two consecutive samples gives
$$x_m(k+1) - x_m(k) = A_m\big(x_m(k) - x_m(k-1)\big) + B_m\big(u(k) - u(k-1)\big).$$
Defining the increments
$$\Delta x_m(k) = x_m(k) - x_m(k-1), \qquad \Delta u(k) = u(k) - u(k-1),$$
the state difference equation becomes
$$\Delta x_m(k+1) = A_m \Delta x_m(k) + B_m \Delta u(k), \qquad (1.3)$$
and, for the output,
$$y(k+1) - y(k) = C_m\big(x_m(k+1) - x_m(k)\big) = C_m A_m \Delta x_m(k) + C_m B_m \Delta u(k). \qquad (1.4)$$
Putting together (1.3) with (1.4) leads to the following state-space model:
$$\begin{bmatrix} \Delta x_m(k+1) \\ y(k+1) \end{bmatrix} = \begin{bmatrix} A_m & o_m^T \\ C_m A_m & 1 \end{bmatrix} \begin{bmatrix} \Delta x_m(k) \\ y(k) \end{bmatrix} + \begin{bmatrix} B_m \\ C_m B_m \end{bmatrix} \Delta u(k)$$
$$y(k) = \begin{bmatrix} o_m & 1 \end{bmatrix} \begin{bmatrix} \Delta x_m(k) \\ y(k) \end{bmatrix} = C x(k) \qquad (1.5)$$
where
$$A = \begin{bmatrix} A_m & o_m^T \\ C_m A_m & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} B_m \\ C_m B_m \end{bmatrix}, \qquad C = \begin{bmatrix} o_m & 1 \end{bmatrix},$$
and x(k) = [Δx_m(k)^T  y(k)]^T is the augmented state vector.
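As a small illustration (the plant numbers below are arbitrary and not taken from the report), the augmented matrices of (1.5) can be assembled in MATLAB as follows:

% Sketch: build the augmented (A, B, C) from a plant (Am, Bm, Cm).
% The numeric plant below is an arbitrary second-order example.
Am = [1.0 0.1; 0 0.9];        % n1 x n1 plant state matrix
Bm = [0; 0.1];                % n1 x 1 plant input matrix
Cm = [1 0];                   % 1 x n1 plant output matrix (single output)

n1 = size(Am, 1);
om = zeros(1, n1);            % 1 x n1 zero row vector

A = [Am, om'; Cm*Am, 1];      % augmented state matrix
B = [Bm; Cm*Bm];              % augmented input matrix
C = [om, 1];                  % augmented output matrix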
Assuming that at the sampling instant k_i, k_i > 0, the state vector x(k_i) is available through measurement, x(k_i) provides the current plant information. The future control trajectory is denoted by
$$\Delta u(k_i),\ \Delta u(k_i+1),\ \ldots,\ \Delta u(k_i+N_c-1),$$
where N_c is the control horizon, dictating the number of parameters used to capture the future control trajectory. The future state variables are predicted for N_p samples, where N_p is called the prediction horizon:
$$x(k_i+1|k_i),\ x(k_i+2|k_i),\ x(k_i+3|k_i),\ \ldots,\ x(k_i+m|k_i),\ \ldots,\ x(k_i+N_p|k_i),$$
where x(k_i+m|k_i) is the predicted state variable at k_i+m given the current plant information x(k_i). The control horizon N_c is chosen to be less than or equal to the prediction horizon N_p.
Based on the state-space model (A, B, C), the future state variables are calculated sequentially using the set of future control parameters:
$$x(k_i+1|k_i) = A x(k_i) + B\Delta u(k_i)$$
$$x(k_i+2|k_i) = A x(k_i+1|k_i) + B\Delta u(k_i+1) = A^2 x(k_i) + AB\Delta u(k_i) + B\Delta u(k_i+1)$$
$$\vdots$$
$$x(k_i+N_p|k_i) = A^{N_p} x(k_i) + A^{N_p-1}B\Delta u(k_i) + A^{N_p-2}B\Delta u(k_i+1) + \cdots + A^{N_p-N_c}B\Delta u(k_i+N_c-1)$$
From the predicted state variables, the predicted output variables are, by substitution,
$$y(k_i+1|k_i) = CA\, x(k_i) + CB\Delta u(k_i)$$
$$y(k_i+2|k_i) = CA^2 x(k_i) + CAB\Delta u(k_i) + CB\Delta u(k_i+1)$$
$$\vdots$$
$$y(k_i+N_p|k_i) = CA^{N_p} x(k_i) + CA^{N_p-1}B\Delta u(k_i) + CA^{N_p-2}B\Delta u(k_i+1) + \cdots + CA^{N_p-N_c}B\Delta u(k_i+N_c-1)$$
Defining the vectors
$$Y = \begin{bmatrix} y(k_i+1|k_i) \\ y(k_i+2|k_i) \\ y(k_i+3|k_i) \\ \vdots \\ y(k_i+N_p|k_i) \end{bmatrix}, \qquad \Delta U = \begin{bmatrix} \Delta u(k_i) \\ \Delta u(k_i+1) \\ \Delta u(k_i+2) \\ \vdots \\ \Delta u(k_i+N_c-1) \end{bmatrix},$$
where, in the single-input single-output (SISO) case, the dimensions of Y and ΔU are N_p and N_c, respectively.
$$Y = F x(k_i) + \Phi \Delta U,$$
where
$$F = \begin{bmatrix} CA \\ CA^2 \\ CA^3 \\ \vdots \\ CA^{N_p} \end{bmatrix}; \qquad \Phi = \begin{bmatrix} CB & 0 & 0 & \cdots & 0 \\ CAB & CB & 0 & \cdots & 0 \\ CA^2B & CAB & CB & \cdots & 0 \\ \vdots & & & & \vdots \\ CA^{N_p-1}B & CA^{N_p-2}B & CA^{N_p-3}B & \cdots & CA^{N_p-N_c}B \end{bmatrix} \quad [1]$$
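The prediction matrices F and Φ follow directly from (A, B, C) and the two horizons. A minimal sketch for the SISO case (the function name and loop structure are mine, not taken from the report):

function [F, Phi] = prediction_matrices(A, B, C, Np, Nc)
% Builds F (Np x n) and Phi (Np x Nc) such that Y = F*x(ki) + Phi*DeltaU.
n   = size(A, 1);
F   = zeros(Np, n);
Phi = zeros(Np, Nc);
for i = 1:Np
    F(i, :) = C * A^i;                   % i-th row of F is C*A^i
    for j = 1:min(i, Nc)
        Phi(i, j) = C * A^(i-j) * B;     % C*A^(i-j)*B, lower-triangular band
    end
end
end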
1.3 Optimization
Given a set-point signal r(k_i) at a specific sample time k_i, the goal of the predictive control
system within a defined prediction horizon is to drive the predicted output as close as
possible to the set-point. It is assumed that the set-point remains constant throughout the
optimization window. This goal is formulated as an optimization problem, where the aim is to
determine the optimal control parameter vector ∆𝑈 that minimizes the error between the set-
point and the predicted output.
Assuming that the data vector that contains the set-point information is
$$R_s = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \end{bmatrix}^T r(k_i), \qquad R_s \in \mathbb{R}^{N_p \times 1},$$
we define the cost function J that reflects the control objective as
$$J = (R_s - Y)^T (R_s - Y) + \Delta U^T \bar{R}\, \Delta U, \qquad (1.7)$$
where the first term corresponds to the objective of minimizing the error between the
predicted output and the set-point signal, while the second term accounts for the magnitude of
∆𝑈 when aiming to minimize the objective function 𝐽. The matrix 𝑅̅ is a diagonal matrix
defined as 𝑅̅ = 𝑟𝑤 𝐼𝑁𝑐×𝑁𝑐 where 𝑟𝑤 ≥ 0 serves as a tuning parameter to adjust the desired
closed-loop performance and 𝐼𝑁𝑐×𝑁𝑐 is a 𝑁𝑐 × 𝑁𝑐 identity matrix.
Now, to find the optimal ∆𝑈 which minimizes the cost function 𝐽, we differentiate 𝐽 partially
w.r.t ∆𝑈 and equate it to 0.
$$\frac{\partial J}{\partial \Delta U} = 0$$
Expanding J:
$$J = \big(R_s - Fx(k_i)\big)^T \big(R_s - Fx(k_i)\big) - 2\Delta U^T \Phi^T \big(R_s - Fx(k_i)\big) + \Delta U^T \big(\Phi^T\Phi + \bar{R}\big)\Delta U \qquad (1.8)$$
where
$$F = \begin{bmatrix} CA \\ CA^2 \\ CA^3 \\ \vdots \\ CA^{N_p} \end{bmatrix}, \quad F \in \mathbb{R}^{N_p \times n}; \qquad \Phi = \begin{bmatrix} CB & 0 & 0 & \cdots & 0 \\ CAB & CB & 0 & \cdots & 0 \\ CA^2B & CAB & CB & \cdots & 0 \\ \vdots & & & & \vdots \\ CA^{N_p-1}B & CA^{N_p-2}B & CA^{N_p-3}B & \cdots & CA^{N_p-N_c}B \end{bmatrix}, \quad \Phi \in \mathbb{R}^{N_p \times N_c};$$
$$\Delta U = \begin{bmatrix} \Delta u(k_i) & \Delta u(k_i+1) & \Delta u(k_i+2) & \cdots & \Delta u(k_i+N_c-1) \end{bmatrix}^T, \quad \Delta U \in \mathbb{R}^{N_c \times 1};$$
$$R_s = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \end{bmatrix}^T r(k_i), \quad R_s \in \mathbb{R}^{N_p \times 1};$$
here n is the dimension of the augmented state x(k).
$$\frac{\partial J}{\partial \Delta U} = -2\Phi^T \big(R_s - Fx(k_i)\big) + 2\big(\Phi^T\Phi + \bar{R}\big)\Delta U$$
$$\frac{\partial^2 J}{\partial (\Delta U)^2} = 2\big(\Phi^T\Phi + \bar{R}\big) > 0,$$
so the necessary condition for a minimum, ∂J/∂ΔU = 0, yields the optimal control parameter vector
$$\Delta U = \big(\Phi^T\Phi + \bar{R}\big)^{-1}\Phi^T \big(R_s - Fx(k_i)\big),$$
with the assumption that (Φ^TΦ + R̄)^{-1} exists. The matrix Φ^TΦ + R̄ is called the Hessian matrix. [1]
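Given F and Φ from the sketch above, the unconstrained optimum is a single linear solve; the weight value and the variable names below (xk for x(k_i), r for the scalar set-point) are illustrative assumptions:

% Sketch: unconstrained optimal DeltaU, using F and Phi from the sketch above.
rw   = 0.5;                           % tuning weight rw >= 0 (illustrative value)
Rbar = rw * eye(Nc);                  % Rbar = rw * I, Nc x Nc
Rs   = ones(Np, 1) * r;               % set-point vector for a scalar set-point r
H    = Phi' * Phi + Rbar;             % Hessian of the cost J
DeltaU = H \ (Phi' * (Rs - F * xk));  % optimal control increments; xk = x(ki)
du = DeltaU(1);                       % receding horizon: apply only the first move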
1.4. Receding Horizon Control
Receding Horizon Control, also known as Finite Horizon Model Predictive Control(MPC),
relies on optimization to make decisions for a system over a moving time horizon. The
fundamental idea is to compute the control inputs over a finite prediction horizon, apply only
the first control input, and then shift the horizon forward as time progresses, recalculating the
control sequence at each step. This approach allows the system to continually adapt to
changes and uncertainties in the environment, while ensuring optimal performance in real-
time.
Although the optimal parameter vector ∆𝑈 contains the controls ∆𝑢(𝑘𝑖 ), ∆𝑢(𝑘𝑖 + 1),
∆𝑢(𝑘𝑖 + 2), ∆𝑢(𝑘𝑖 + 3), … … . , ∆𝑢(𝑘𝑖 + 𝑁𝑐 − 1), with the receding horizon control principle,
we only implement the first sample of the sequence, i.e., ∆𝑢(𝑘𝑖 ), while ignoring the rest of
the sequence. When the next sampling period arrives, the latest measurement is used to form the state vector x(k_i+1), and a new sequence of control moves is computed. Repeating this procedure in real time gives the receding horizon control law.
Consider the system
$$x(k_i+1) = A x(k_i) + B u(k_i), \qquad y(k_i) = C x(k_i),$$
where x(k_i) ∈ ℝ^n is the system state at time step k_i, u(k_i) ∈ ℝ^m is the control input, and y(k_i) is the output. At each time instant k_i, MPC solves an optimal control problem over a prediction horizon N_p, aiming to minimize a cost function of the form
$$J = \sum_{j=0}^{N_p-1} \Big[\big(x_{k_i+j} - r\big)^T Q \big(x_{k_i+j} - r\big) + u_{k_i+j}^T R\, u_{k_i+j}\Big], \qquad x \in \mathcal{X},\quad u \in \mathcal{U}.$$
Here, Q ≥ 0 and R ≥ 0 are the weighting matrices penalizing the state deviation and the control effort, while 𝒳 and 𝒰 denote the admissible sets for state and input, respectively. Once this finite-horizon optimization is solved, only the first control input is applied. At the next time step k_i+1, the entire process is repeated with the updated state x(k_i+1), hence the term "receding horizon".
This approach ensures feedback by continuously updating the control sequence based on the
latest state information, while inherently handling constraints and anticipating future
disturbances. The iterative resolution of the optimization problem enables MPC to maintain
optimal performance over time despite uncertainties, delays, or nonlinearities, making it
particularly powerful for real-time control in dynamic systems.[1]
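A minimal receding-horizon loop built from the preceding sketches might look as follows; the horizons, weight, set-point, and simulation length are illustrative assumptions, and the augmented matrices (A, B, C) are assumed to be available, for example from the earlier augmentation sketch:

% Sketch: receding horizon control loop using the closed-form DeltaU.
Np = 10; Nc = 4; rw = 0.5; r = 1;        % illustrative horizons, weight, set-point
[F, Phi] = prediction_matrices(A, B, C, Np, Nc);
H  = Phi' * Phi + rw * eye(Nc);
Rs = ones(Np, 1) * r;

x = zeros(size(A, 1), 1);                 % augmented state [Delta xm; y]
u = 0;                                    % actual control signal u(k-1)
for k = 1:50
    DeltaU = H \ (Phi' * (Rs - F * x));   % solve the finite-horizon problem
    du = DeltaU(1);                       % keep only the first increment
    u  = u + du;                          % u(k) = u(k-1) + Delta u(k)
    x  = A * x + B * du;                  % propagate the augmented model
end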
2. PREDICTIVE CONTROL OF MIMO SYSTEMS
In MIMO systems, the inputs and outputs are often coupled, meaning that a change in one
input can affect multiple outputs. MPC provides a natural framework for handling such
interactions, as it uses an internal model of the system to predict future behaviour and
optimize control actions accordingly. By solving a constrained optimization problem at each
time step over a finite prediction horizon, MPC ensures coordinated control of all inputs
while satisfying constraints on both states and control variables.
Model Predictive Control is particularly well-suited for handling MIMO systems because of
its ability to incorporate a dynamic model of the plant into the control design process. At
every sampling instant, MPC predicts the future evolution of the system over a finite
prediction horizon using the current state and an internal model of the process. Based on
these predictions, it solves an optimization problem that minimizes a predefined cost
function—typically a weighted sum of deviations from reference trajectories and control
effort—subject to constraints on inputs, outputs, and states. Only the first control input in the
optimal sequence is applied, and the procedure is repeated at the next time step, a strategy
known as receding horizon control.
In MIMO systems, predictive control offers several critical advantages. First, it naturally
handles multivariable interactions by optimizing all control inputs simultaneously while
accounting for their coupling. Second, it enforces hard constraints on inputs and outputs—
such as actuator limits, safety requirements, or system operating bounds—directly within the
optimization framework. This is especially important in practical systems where violating
physical or safety constraints can lead to instability or damage. Third, MPC can incorporate
future reference trajectories and disturbance predictions, making it inherently anticipative
rather than purely reactive. This feature is particularly advantageous in MIMO contexts
where a change in one output may require coordinated changes across multiple inputs. [1]
2.1 General Formulation of the Model
Assume that the plant has m inputs, q outputs and n_1 states. We also assume that the number of outputs is less than or equal to the number of inputs (i.e., q ≤ m). If the number of outputs were greater than the number of inputs, we could not hope to control each of the measured outputs independently with zero steady-state error. In the general formulation of the predictive control problem, we also take the plant noise and disturbance into consideration:
$$x_m(k+1) = A_m x_m(k) + B_m u(k) + B_d w(k), \qquad y(k) = C_m x_m(k), \qquad (2.11)$$
where w(k) is the input disturbance, assumed to be a sequence of integrated white noise. This means that the input disturbance w(k) is related to a zero-mean white noise sequence ε(k) by the difference equation
$$w(k) - w(k-1) = \epsilon(k).$$
It is also worth noting that from (2.11), the following equation is true:
$$x_m(k) = A_m x_m(k-1) + B_m u(k-1) + B_d w(k-1). \qquad (2.14)$$
By defining Δx_m(k) = x_m(k) − x_m(k−1) and Δu(k) = u(k) − u(k−1), subtracting (2.14) from (2.11) leads to
$$\Delta x_m(k+1) = A_m \Delta x_m(k) + B_m \Delta u(k) + B_d \epsilon(k).$$
In order to relate the output y(k) to the state variable Δx_m(k), we deduce that
$$y(k+1) - y(k) = C_m \Delta x_m(k+1) = C_m A_m \Delta x_m(k) + C_m B_m \Delta u(k) + C_m B_d \epsilon(k).$$
Choosing a new state variable vector x(k) = [Δx_m(k)^T  y(k)^T]^T, we have:
$$\begin{bmatrix} \Delta x_m(k+1) \\ y(k+1) \end{bmatrix} = \begin{bmatrix} A_m & o_m^T \\ C_m A_m & I_{q\times q} \end{bmatrix} \begin{bmatrix} \Delta x_m(k) \\ y(k) \end{bmatrix} + \begin{bmatrix} B_m \\ C_m B_m \end{bmatrix} \Delta u(k) + \begin{bmatrix} B_d \\ C_m B_d \end{bmatrix} \epsilon(k)$$
$$y(k) = \begin{bmatrix} o_m & I_{q\times q} \end{bmatrix} \begin{bmatrix} \Delta x_m(k) \\ y(k) \end{bmatrix} \qquad (2.16)$$
where 𝐼𝑞×𝑞 is the identity matrix with dimension 𝑞 × 𝑞, which is the number of outputs; and
𝑜𝑚 is a 𝑞 × 𝑛1 zero matrix . In (2.16), 𝐴𝑚 , 𝐵𝑚 and 𝐶𝑚 have dimension 𝑛1 × 𝑛1 , 𝑛1 × 𝑚 and
𝑞 × 𝑛1 , respectively.
Note that the characteristic polynomial of the augmented model is
$$p(\lambda) = \det \begin{bmatrix} \lambda I - A_m & o_m^T \\ -C_m A_m & (\lambda - 1) I_{q\times q} \end{bmatrix} = (\lambda - 1)^q \det(\lambda I - A_m),$$
where we used the property that the determinant of a block lower triangular matrix equals the
product of the determinants of the matrices on the diagonal. Hence, the eigenvalues of the
augmented model are the union of the eigenvalues of the plant model and the q eigenvalues,
λ=1. This means that there are q integrators embedded into the augmented design model.[1]
Since the original plant model is extended by incorporating integrators and the MPC design is
based on this augmented state-space representation, it is crucial to ensure that the augmented
model remains both controllable and observable—especially in relation to the system’s
unstable dynamics. Controllability is essential to enable the predictive controller to deliver
the desired closed-loop performance, while observability is necessary for the successful
implementation of a state observer. Nonetheless, if the primary objective is to guarantee
closed-loop stability rather than full control performance, these requirements can be relaxed
to the weaker conditions of stabilizability and detectability.
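These properties can be checked numerically before the design proceeds. The sketch below augments an arbitrary single-output example plant (q = 1, values not taken from the report) and inspects its eigenvalues and the standard rank conditions with stock MATLAB calls:

% Sketch: verify the integrator eigenvalues and the rank conditions of an
% augmented model (arbitrary example plant, single output, q = 1).
Am = [0.9 0.1; 0 0.8];  Bm = [0; 1];  Cm = [1 0];
n1 = size(Am, 1);  q = size(Cm, 1);

A = [Am, zeros(n1, q); Cm*Am, eye(q)];   % augmented state matrix
B = [Bm; Cm*Bm];                         % augmented input matrix
C = [zeros(q, n1), eye(q)];              % augmented output matrix

eig(A)                  % contains eig(Am) plus q eigenvalues at 1 (the integrators)
rank(ctrb(A, B))        % = n1 + q if the augmented pair (A, B) is controllable
rank(obsv(A, C))        % = n1 + q if the augmented pair (A, C) is observable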
Controllability
A state x of a plant is said to be ‘controllable’ if there exists a control signal 𝑢1 (𝑡) defined
over a finite interval 0 ≤ 𝑡 ≤ 𝑡1 such that 𝜙(𝑡1 ; 𝑥, 0) = 0. In general, the time 𝑡1 will depend
on x. If every state is controllable, the plant is said to be ‘completely controllable’.[2]
A necessary and sufficient condition for complete controllability in the discrete time case is
the following:
A discrete-time plant is completely controllable if and only if the vectors a, φ^{-1}a, …, φ^{-n+1}a are linearly independent. [3]
Observability
Let 𝑋 ∗ be the dual vector space of the state space 𝑋, i.e. the space of all linear functions on 𝑋.
An element 𝒛∗ or 𝒙∗ of 𝑋 ∗ is called a costate.
A costate z* of a plant is said to be 'observable' if its exact value [z*, x] at any state x at time 0 can be determined from measurements of the output signal y_1(t) = [b*, φ(t; x, 0)] over the finite interval 0 ≤ t ≤ t_2. The time t_2 will in general depend on z*. If every costate is observable, we say that the plant is 'completely observable'. [4]
The extension of the predictive control solution to the multivariable case is quite straightforward, but we need to pay attention to the dimensions of the state, control and output vectors in a multi-input, multi-output environment. Define the vectors Y and ΔU as in the SISO case, now with dimensions qN_p × 1 and mN_c × 1, respectively. Based on the state-space model (A, B, C), the future state variables are calculated sequentially using the set of future control parameters:
$$x(k_i+1|k_i) = A x(k_i) + B\Delta u(k_i) + B_d \epsilon(k_i)$$
$$x(k_i+2|k_i) = A x(k_i+1|k_i) + B\Delta u(k_i+1) + B_d \epsilon(k_i+1|k_i)$$
$$\vdots$$
With the assumption that ε(k) is a zero-mean white noise sequence, the predicted value of ε(k_i+i|k_i) at a future sample i is taken to be zero. The predictions of the state and output variables are calculated as the expected values of the respective variables; hence, the noise contribution to the predicted values is zero. For notational simplicity, the expectation operator is omitted without confusion.
Effectively, we have
$$Y = F x(k_i) + \Phi \Delta U,$$
where
$$F = \begin{bmatrix} CA \\ CA^2 \\ CA^3 \\ \vdots \\ CA^{N_p} \end{bmatrix}; \qquad \Phi = \begin{bmatrix} CB & 0 & 0 & \cdots & 0 \\ CAB & CB & 0 & \cdots & 0 \\ CA^2B & CAB & CB & \cdots & 0 \\ \vdots & & & & \vdots \\ CA^{N_p-1}B & CA^{N_p-2}B & CA^{N_p-3}B & \cdots & CA^{N_p-N_c}B \end{bmatrix}.$$
Minimizing the multivariable cost function as in the SISO case gives the optimal solution
$$\Delta U = \big(\Phi^T\Phi + \bar{R}\big)^{-1}\Phi^T \big(\bar{R}_s\, r(k_i) - F x(k_i)\big), \qquad \bar{R}_s = \begin{bmatrix} I_{q\times q} & I_{q\times q} & \cdots & I_{q\times q} \end{bmatrix}^T,$$
where matrix 𝜙 𝑇 𝜙 has dimension 𝑚𝑁𝑐 × 𝑚𝑁𝑐 and 𝜙 𝑇 𝐹 has dimension 𝑚𝑁𝑐 × 𝑛, and 𝜙 𝑇 𝑅̅𝑠
equals the last 𝑞 columns of 𝜙 𝑇 𝐹. The weight matrix 𝑅̅ is a block matrix with 𝑚 blocks and
has a dimension equal to the dimension of 𝜙 𝑇 𝜙. The set-point signal is
$$r(k_i) = \begin{bmatrix} r_1(k_i) & r_2(k_i) & r_3(k_i) & \cdots & r_q(k_i) \end{bmatrix}^T.$$
Applying the receding horizon control principle, the first m elements in ΔU are taken to form the incremental optimal control
$$\Delta u(k_i) = \begin{bmatrix} I_m & o_m & \cdots & o_m \end{bmatrix} \Delta U,$$
where I_m and o_m are, respectively, the identity and zero matrices with dimension m × m. [1]
3. CASE STUDIES
a. System Description
We consider the classic mass-spring-damper system as the plant for deploying Model Predictive Control. An external force u(t) is applied to the mass, which serves as the control input.
Fig. 3.1. Mass-Spring-Damper System representation with force F as the control input 𝑢(𝑡)
Governing Dynamics
The equation of motion of the mass-spring-damper system is
$$m\ddot{x}(t) + c\dot{x}(t) + kx(t) = u(t).$$
Rearranging:
$$\ddot{x}(t) = -\frac{c}{m}\dot{x}(t) - \frac{k}{m}x(t) + \frac{1}{m}u(t)$$
Let
$$x_1(t) = x(t), \qquad x_2(t) = \dot{x}(t).$$
Therefore, with the state variable vector x(t) = [x_1(t)  x_2(t)]^T,
$$\dot{x}(t) = \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{c}{m} \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} u(t) \qquad (3.15)$$
$$y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + [0]\,u(t) \qquad (3.16)$$
where
$$A = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{c}{m} \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad D = [0].$$
b. Control Objective
The control objective is to regulate the position of the cart in a one-degree-of-freedom mass-spring-damper system subject to external actuation. The mechanical setup consists of a cart of mass m = 1 kg, coupled to a fixed wall via a linear spring of stiffness k = 5 N/m and a viscous damper characterized by damping coefficient c = 2 Ns/m. The cart is actuated by a time-varying control force u(t), applied directly along the direction of motion.
To govern the system dynamics and ensure precise setpoint regulation, a discrete-time Model
Predictive Controller (MPC) is employed. The controller is designed with a predictive
horizon 𝑁𝑝 = 10 and a control horizon 𝑁𝑐 = 5, operating at a sampling period of 𝑇𝑠 = 0.1s.
The control force is subject to saturation constraints, bounded within u(t) ∈ [−10, 10] N. The controller aims to steer the cart's position x(t) from rest to a desired setpoint r = 1 m while satisfying performance goals on tracking accuracy and control effort.
These objectives are encoded in a quadratic cost function penalizing deviations from the
reference and excessive control effort:
$$J = \sum_{k=1}^{N_p} \big[\, w_T (x_k - r)^2 + w_u u_k^2 \,\big] \qquad (3.17)$$
with w_T = 1, w_u = 0.01.
mpcDesigner(plant)
This command launches MATLAB's interactive MPC design environment. Within the GUI, users can configure the following settings:
      Prediction Horizon: Defines the time window over which the future output trajectory
       is predicted.
      Control Horizon: Determines how many future control moves are independently
       optimized.
      Sampling Time: Governs the discrete update frequency for the controller.
      Constraints: The applied force 𝑢(𝑡) is bounded within the physical range of
       [−10,10], to simulate actuator saturation limits.
Simulation Results
Fig. 3.2. Control Force 𝑢(𝑡) over the course of the simulation
Position Response Analysis
      Monotonic Rise Without Overshoot: The system reaches the desired position
       smoothly, without exceeding the reference value. This indicates that the controller
       accurately predicts future behavior and adjusts input force conservatively to avoid
       excessive momentum or elastic recoil.
      Settling Behavior: The displacement stabilizes at 𝑥 = 1 m within approximately 10
       time steps (i.e., 1s given a sampling time of 0.1 s), demonstrating rapid convergence.
       This confirms proper tuning of the prediction and control horizons of the system’s
       natural dynamics.
       Steady-State Accuracy: Once the setpoint is reached, the output remains locked at
       the reference with zero steady-state error. This showcases the inherent feedback and
       receding-horizon nature of MPC, which compensates for model inaccuracies and
       guarantees closed-loop stability.
      Initial Actuation Peak: At the onset, the controller applies a non-zero force to
       overcome spring resistance and initiate motion. The magnitude is within allowable
       bounds (𝑢(𝑡) ∈ [−10,10]) and represents the system’s transient push toward the
       target.
       Rapid Decay and Stabilization: Following the initial peak, the force decreases
        progressively and approaches zero as the system nears the setpoint. This behavior
        arises from the MPC's cost function minimization: once the displacement error is
        negligible, the controller reduces input effort to avoid unnecessary actuation.
      Smooth Transitions: The absence of abrupt jumps or oscillations in the force signal
       is attributable to the weight on the manipulated variable rate (𝑤∆𝑢 = 0.01). This
       penalizes aggressive changes in control and promotes actuator-friendly behavior.
       Input Saturation Avoidance: Throughout the simulation, the force remains well
        within the defined limits. This validates that the controller honors constraints and
        avoids nonlinear effects such as saturation-induced delay or instability.
This section presents a script-based implementation of Model Predictive Control (MPC) for the mass-spring-damper system. The mass-spring-damper is modeled and simulated in MATLAB without using the inbuilt MATLAB function mpcDesigner().
Algorithmic Description
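A condensed sketch of the script-based algorithm is given below; it mirrors the full listing in the Appendix (discretize the plant, build the MPC object with the horizons, weights, and force limits stated above, then run the receding-horizon loop with mpcmove), written here with the nominal sampling time Ts = 0.1 s:

% Condensed sketch of the script-based MPC loop (full listing in the Appendix).
m = 1; c = 2; k = 5;                          % mass-spring-damper parameters
plant = ss([0 1; -k/m -c/m], [0; 1/m], [1 0], 0);
Ts = 0.1;                                     % sampling time [s]
sys_d = c2d(plant, Ts);                       % discrete-time prediction model

mpcobj = mpc(sys_d, Ts, 10, 5);               % Np = 10, Nc = 5
mpcobj.Weights.OutputVariables = 1;           % tracking weight wT
mpcobj.Weights.ManipulatedVariablesRate = 0.01;  % input-rate penalty
mpcobj.MV.Min = -10;  mpcobj.MV.Max = 10;     % force constraints [N]

x = [0; 0];  r = 1;                           % initial state and setpoint [m]
mpcState = mpcstate(mpcobj);                  % internal controller state
for kk = 1:50
    y = sys_d.C * x;                          % measured position
    u = mpcmove(mpcobj, mpcState, y, r);      % optimal constrained input
    x = sys_d.A * x + sys_d.B * u;            % plant update
end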
Simulation Results
Fig. 3.4. MPC Controlled Cart’s Position and Applied Control Input (i.e., Force) against Time (in seconds)
Position Response
      Rapid Rise to Setpoint: The cart accelerates from the initial rest position x=0 and
       reaches the target position x=1 m within the first 10 steps (i.e.,1 second). This
       demonstrates fast convergence governed by the predictive nature of the controller and
       the optimal balancing of effort and response.
      Zero Overshoot & Stability: The output stabilizes precisely at the reference value
       without exceeding it, indicating critically damped behavior. This is a direct outcome
        of the tuned cost function and the system damping (c = 2 Ns/m), which prevent
        oscillations and allow smooth settling.
       Steady-State Accuracy: Beyond the transient phase, the position remains constant at
        the 1 m reference, validating the controller's ability to eliminate steady-state error. This confirms that the
       control law properly compensates for the passive system forces and maintains
       regulation under zero-disturbance conditions.
       Initial Saturation: The controller applies the maximum permissible force of 10 N for the
        first two steps to overcome the resisting spring and damper forces. This aggressive
       push is a deliberate move by the MPC to reduce settling time while obeying input
       bounds.
       Dynamic Modulation: As the cart gains momentum, the control input begins to taper
        smoothly. It decreases from the peak force to intermediate values, actively
       shaping the position response while minimizing control effort.
      Final Stabilization: After approximately 20 steps, the force converges to zero, as the
       cart reaches and holds its reference position. This reflects energy-efficient behavior
        and validates the output tracking weight and input rate penalty chosen in the design.
       No Constraint Violation: Throughout the simulation, the force stays strictly within
        the ±10 N bounds, demonstrating the controller's compliance with actuator limits and
       robustness against saturation.
Simulation via mpcDesigner() produced a well-damped output trajectory that smoothly
converged to the reference position without overshoot. The control input demonstrated
moderate initial actuation followed by a gradual decline, indicative of conservative tuning
and GUI-default regularization. However, design flexibility was somewhat limited by
interface constraints; for example, fine-grained logging, state access, and post-processing
were less accessible.
In contrast, the scripted implementation provided full programmatic control over model
definition, simulation logic, and data extraction. The corresponding output achieved accurate
reference tracking with slightly faster transient response, attributed to optimized initial force
application (reaching saturation) and precise state propagation. The control input exhibited
sharper dynamics early in the trajectory, followed by smooth decay—reflecting finely tuned
weight parameters and discrete-time responsiveness.
While both methods achieved successful closed-loop regulation, the script-based approach
offered superior configurability, simulation transparency, and reproducibility for academic
analysis. The GUI tool facilitated intuitive controller setup, making it well-suited for rapid
prototyping and conceptual validation.
In discrete-time Model Predictive Control (MPC), the selection of sampling time significantly
affects control resolution, prediction depth, and system responsiveness. This influence was
evaluated using three distinct sampling intervals—0.01 s, 0.1 s, and 1 s—while maintaining a
fixed prediction horizon of 10 steps.
For 𝑇𝑠 = 0.01𝑠 (Fig. 3.4), the controller operated at a high update frequency of 100 Hz,
offering fine temporal granularity. However, with only 0.1 s of predictive coverage, the MPC
lacked sufficient foresight to drive the system to the reference. As a result, the control input
saturated at its maximum value throughout the simulation, and the position failed to reach the
target within 50 steps. This demonstrates that ultra-short sampling intervals require
proportionally longer horizons to be effective in setpoint regulation.
At 𝑇𝑠 = 0.1𝑠 (Fig. 3.5), a balanced interaction between update rate and prediction span
emerged. The horizon extended over 1 s, allowing the controller to anticipate system
dynamics effectively. The position reached the reference smoothly within 0.7 s, with zero
overshoot and critically damped behavior. The control input modulated efficiently, respecting
input bounds while converging to zero. This case reflects optimal tuning for systems with
moderate dynamic speeds.
With 𝑇𝑠 = 1𝑠 (Fig. 3.6), the controller performed updates less frequently, but its prediction
horizon spanned 10 s of physical time. The position settled quickly within five time steps
(i.e., 5 s), demonstrating fast convergence over coarse time increments. The control input
stabilized early, though temporal resolution was lower, resulting in broader corrections and
slightly reduced smoothness. This configuration favors long-horizon planning but may
obscure short-term dynamics or fast disturbances.
In conclusion, selecting an appropriate sampling time demands a careful balance between
prediction coverage and update precision. While smaller 𝑇𝑠 enhances numerical fidelity, it
reduces predictive reach unless the horizon length is scaled accordingly. Conversely, larger
𝑇𝑠 expands prediction span but compromises real-time responsiveness. Hence, the choice of
𝑇𝑠 should align with system time constants and control performance requirements.
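One illustrative way to keep the prediction coverage fixed when the sampling time changes is to scale the horizon with Ts; this simple rule is offered only as a sketch of the trade-off discussed above and was not used in the case study itself (there the horizon was fixed at 10 steps):

% Illustrative rule: scale Np so the prediction window covers a fixed span
% of physical time (here 1 s), regardless of the sampling time chosen.
T_window = 1.0;                      % desired prediction coverage [s]
for Ts = [0.01 0.1 1]
    Np = max(1, round(T_window / Ts));
    fprintf('Ts = %.2f s  ->  Np = %d steps (%.2f s of coverage)\n', ...
            Ts, Np, Np * Ts);
end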
3.2 LATERAL CONTROL OF A CAR MODEL
a. System Description
The system under study represents the lateral motion of a vehicle travelling at a constant
forward velocity along a straight or gently curving road. In autonomous driving applications,
maintaining accurate lateral positioning within the lane is critical for safety and comfort. The
control task involves minimizing lateral deviation from the road centreline using steering
inputs, which influence the vehicle’s heading and displacement over time.
Fig. 3.7. Geometric diagram of a vehicle lateral displacement and heading control vectors
[5]
b. Model Formulation
The system models a vehicle's lateral motion and heading dynamics using a discrete-time,
linear kinematic framework. The formulation assumes a constant longitudinal velocity and
small-angle steering behavior over each sampling interval. The states of the system are
defined as:
      Heading angle deviation 𝑎(𝑡): the angular displacement of the vehicle’s nose
       relative to the road centerline.
      Lateral position 𝑦(𝑡): the transverse distance between the vehicle’s center and the
       desired lane center.
The control input is the steering rate 𝑢(𝑡), assumed to affect heading angle incrementally at
each time step. Given a constant forward velocity V = 15 m/s and a sampling period Ts = 0.1 s, the discrete-time evolution of the system is captured using a second-order approximation derived from basic kinematics.
The heading angle a(t) changes over time as a result of the steering input u(t), modeled as
$$a(t+1) = a(t) + T_s\, u(t). \qquad (3.21)$$
The lateral position y(t) evolves due to both the current heading and the steering-induced acceleration, given by
$$y(t+1) = y(t) + V T_s\, a(t) + \tfrac{1}{2} V T_s^2\, u(t) \qquad (3.22)$$
We now express these equations in matrix form. Defining the state vector as
$$x(t) = \begin{bmatrix} a(t) \\ y(t) \end{bmatrix}, \qquad u(t) = \text{steering input (rad/s)},$$
$$x(t+1) = \underbrace{\begin{bmatrix} 1 & 0 \\ V T_s & 1 \end{bmatrix}}_{A} x(t) + \underbrace{\begin{bmatrix} T_s \\ \tfrac{1}{2} V T_s^2 \end{bmatrix}}_{B} u(t) \qquad (3.23)$$
$$y(t) = \underbrace{\begin{bmatrix} 0 & 1 \end{bmatrix}}_{C} x(t) \qquad (3.24)$$
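Substituting V = 15 m/s and Ts = 0.1 s into (3.23)-(3.24) gives the numerical prediction model used in this case study; a short sketch of that substitution:

% Numerical model of (3.23)-(3.24) for V = 15 m/s, Ts = 0.1 s.
V = 15;  Ts = 0.1;
A = [1      0;                 % heading angle carries over between samples
     V*Ts   1];                % lateral position accumulates V*Ts*a(t)
B = [Ts;                       % steering rate acts on the heading over one sample
     0.5*V*Ts^2];              % second-order effect on the lateral position
C = [0 1];                     % only the lateral position y(t) is measured
sys = ss(A, B, C, 0, Ts);      % discrete-time state-space model
% Numerically: A = [1 0; 1.5 1], B = [0.1; 0.075]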
c. Control Objective
The goal of the controller is to regulate the lateral position y(t) to zero (i.e., centered within the lane) while maintaining smooth steering actions. The initial condition x_0 = [0  2]^T represents a vehicle aligned with the road but offset by 2 meters. A prediction horizon of Np = 10 and a control horizon of Nc = 3 were selected to balance responsiveness with computation.
The system models vehicle heading and lateral displacement based on constant forward
velocity. Steering inputs affect both states over time, and only lateral position is measured for
feedback. A discrete-time state-space model is constructed to serve as the prediction model
within the MPC framework.
Initialization and Setup: The vehicle’s discrete-time dynamics are defined under constant
longitudinal velocity 𝑉 = 15 𝑚/𝑠 with a sampling interval 𝑇𝑠 = 0.1𝑠. State-space matrices
𝐴, 𝐵 𝑎𝑛𝑑 𝐶 are derived to capture the kinematic relationships between heading angle, lateral
drift, and steering input. An MPC object is constructed with the following configuration:
      Prediction horizon: Np = 10; control horizon: Nc = 3
      Weights: output (lateral position) weight 1; manipulated-variable-rate weight 0.1
      Input constraints: u(t) ∈ [−0.5, +0.5] rad/s
Closed-Loop Execution: A discrete-time simulation runs over 50 steps. At each time instant:
      The current lateral position y(t) is measured using the output matrix.
      The MPC controller predicts system evolution and computes an optimal steering rate
       u(t) via the mpcmove() function, ensuring the input respects defined bounds.
      The plant state is updated using the equation x(t + 1) = Ax(t) + Bu(t).
      The resulting state and input are logged for post-analysis and performance evaluation.
Result Interpretation: The heading angle, lateral position trajectory, and steering profile are plotted to verify convergence to the lane center, bounded steering inputs, and smooth heading correction.
e. Simulation Results
 Fig. 3.8. Evolution of heading angle 𝑎(𝑡), lateral position 𝑦(𝑡) and steering input 𝑢(𝑡) over
                                             time
The simulation results illustrate the closed-loop performance of the Model Predictive
Controller (MPC) designed to regulate a vehicle's lateral deviation from the road centerline.
The system starts with an initial lateral offset of 2 m and zero heading angle. Over 50 discrete
time steps, the controller computes bounded steering inputs to bring the vehicle back to the
lane center smoothly and efficiently.
The first subplot (Fig. 3.8) displays the evolution of the heading angle 𝑎(𝑡). Initially, the
controller applies negative steering to correct the lateral drift, resulting in a leftward
orientation (around -0.35 radians). This is followed by a gradual return toward zero heading
as the vehicle reorients. By approximately the 20𝑡ℎ time step, the heading stabilizes,
indicating directional convergence.
The second subplot (Fig. 3.8) shows the lateral position 𝑦(𝑡) response. The vehicle begins at
2 m offset and traverses across the lane, reaching a minimum of roughly -2 m before settling
precisely at the center. The symmetric trajectory and eventual stabilization highlight the
controller's ability to correct overshoot and achieve accurate centering.
The third subplot (Fig. 3.8) captures the applied control input 𝑢(𝑡), interpreted as the steering
rate. The input initially varies aggressively to counteract the lateral error, peaking at the
bounds of ±0.5 rad/s. As the error diminishes, the input tapers toward zero, ensuring smooth
actuation without chattering.
Overall, the results confirm that the scripted MPC maintains lateral stability, honors steering
constraints, and guides the system toward zero offset within the defined prediction horizon.
These behaviors validate the correctness of the model formulation and efficacy of the MPC
synthesis.
At a sampling time 𝑇𝑠 = 0.1 𝑠 (Fig. 3.8), the MPC exhibits balanced responsiveness and
smooth control behavior. Starting from a 2 m lateral offset, the controller actively
compensates by steering leftward, inducing a negative heading angle that gradually returns to
zero as alignment is restored. The lateral position overshoots slightly, reaching approximately
−2m, then smoothly converges to the lane center. The control input modulates within bounds,
initially aggressive but tapering off as the error decreases. This setup allows for moderate
horizon coverage (1 s in total) and sufficient temporal resolution, leading to stable correction
without constraint saturation or oscillation. The trajectory is symmetric and well-regulated,
confirming that the system dynamics are adequately captured at this update rate.
Under 𝑇𝑠 = 0.01 𝑠 sampling (Fig. 3.9), the controller operates at high frequency (100 Hz),
but the predictive horizon spans just 0.1 s in physical time, significantly limiting lookahead.
The simulation shows the steering input saturated at −0.5 rad/s for the entire duration, without
any modulation or adaptation. Consequently, the heading angle continuously declines, and
the lateral position reduces only marginally — settling at roughly 1.5 m after 50 steps. The
controller appears locked in a constraint-driven regime, unable to vary its input due to
insufficient predictive depth. This highlights a key limitation of overly fine sampling: without
proportionally scaling 𝑁𝑝 , the horizon becomes too short for meaningful dynamic shaping,
leading to conservative or ineffective control.
With 𝑇𝑠 = 1 𝑠 sampling (Fig. 3.10), the controller updates infrequently but enjoys a long
predictive horizon of 10 s. The result is a fast and aggressive correction, where the heading
angle shifts sharply, reverses direction, and undergoes transient oscillation before settling.
The lateral position dips well below the center (as far as −4 m) before stabilizing near −2 m,
suggesting that the vehicle overcorrected its offset. The control input alternates between
saturation bounds, indicating that large corrective actions were triggered at each time step to
handle the coarse system transitions. This behavior demonstrates that longer 𝑇𝑠 introduces
resolution loss — steering actions become abrupt, and state evolution happens in larger leaps,
potentially causing overshoot or oscillation if weight tuning and horizon calibration are not
carefully adjusted.
     Fig. 3.10. Simulation Results with Sampling Time 𝑇𝑠 = 1 𝑠
3.3 MPC CONTROL OF AN INVERTED PENDULUM SYSTEM
In this section, we develop and analyze a control strategy for an inverted pendulum system
mounted on a motorized cart as described in [6] — a widely studied benchmark for nonlinear
and underactuated control problems. The goal is to design a stabilizing controller using a
discrete-time linearized model derived via symbolic computation in MATLAB. Both Linear
Quadratic Regulator (LQR) and Model Predictive Control (MPC) techniques are employed to
regulate the pendulum around its unstable upright position. We begin with a description of
the physical system, followed by its linearized model formulation.
a. System Description
The inverted pendulum-cart setup involves a pendulum that pivots freely atop a cart
constrained to horizontal motion. The pendulum’s natural stable configuration is downward;
however, the control challenge lies in maintaining it in the upright position through
coordinated movement of the cart. The system is underactuated, featuring four state variables
but only one control input — the horizontal force applied to the cart.
The control input 𝑢(𝑡) represents the horizontal force applied to the cart, which indirectly
affects both the translational and rotational motion due to dynamic coupling. The system
parameters used in simulation are:
             Parameter                                 Value
           Cart Mass (M)                               0.5 kg
        Pendulum Mass (m)                              0.2 kg
      Damping Coefficient (b)                          0.1 N/m/sec
        Moment of Inertia (I)                          0.006 kg·m²
    Gravitational Acceleration (g)                     9.8 m/s²
      Length to Centre of Mass (l)                     0.3 m
In this section, we present the dynamic modelling of the inverted pendulum on a cart system,
beginning with its nonlinear equations of motion derived from Newtonian mechanics. This
formulation captures the essential physics of the system, including the coupling between the
cart’s translational motion and the pendulum’s rotational dynamics. From these equations, we
derive the state-space representation and proceed to linearize the system about its unstable
equilibrium point to facilitate controller design.
The system consists of a pendulum with mass 𝑚, length to its center of mass 𝑙, and moment
of inertia 𝐼, pivoting freely atop a cart of mass 𝑀. The cart moves along a horizontal track
and is subject to a control input force 𝐹. The pendulum angle θ is measured from the vertical
(with θ = 0 representing the upright position), and the cart position is denoted by 𝑥. Friction
between the cart and the track is modeled by a damping coefficient 𝑏, and gravitational
acceleration is taken as 𝑔.
Applying Newton’s second law to both the cart and the pendulum yields a coupled set of nonlinear differential equations. For the cart, the total horizontal force includes friction, inertia, and the reactive force from the pendulum’s motion. This leads to
$$(M + m)\ddot{x} + b\dot{x} + ml\ddot{\theta}\cos\theta - ml\dot{\theta}^2\sin\theta = F.$$
For the pendulum, torque equilibrium about its center of mass accounts for gravitational torque and the influence of the cart’s acceleration projected along the pendulum axis:
$$(I + ml^2)\ddot{\theta} + mgl\sin\theta = -ml\ddot{x}\cos\theta.$$
These equations exhibit nonlinearities arising from the trigonometric functions and the
product of states, such as cos 𝜃 . 𝜃̈ and sin 𝜃 . 𝜃̇ 2 . The dynamics are also highly coupled: cart
acceleration affects pendulum rotation and vice versa. Together, these equations form the
foundation for further modeling, analysis, and controller synthesis.
State-Space Representation
$$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} x \\ \dot{x} \\ \theta \\ \dot{\theta} \end{bmatrix}$$
$$\dot{x}_1 = x_2 \qquad (3.33)$$
$$\dot{x}_2 = \frac{1}{D}\big(-m^2 l^2 g \cos x_3 \sin x_3 + m l\, x_4^2 \sin x_3 - b x_2 + m l\, u\big) \qquad (3.34)$$
$$\dot{x}_3 = x_4 \qquad (3.35)$$
$$\dot{x}_4 = \frac{1}{D}\Big((M+m)\, m g l \sin x_3 - m l \cos x_3 \big(m l\, x_4^2 \sin x_3 - b x_2 + u\big)\Big) \qquad (3.36)$$
where
$$D = I(M+m) + M m l^2.$$
The operating point for linearization is
$$x_{op} = \begin{bmatrix} 0 & 0 & \pi & 0 \end{bmatrix}^T, \qquad u_{op} = 0.$$
The system is linearized about the upright equilibrium point, where the pendulum is perfectly
balanced in the vertical position. This corresponds to θ=π in the coordinate system used, with
all velocities and applied force set to zero. Although this equilibrium configuration is not
explicitly defined in the referenced paper, it is standard practice in inverted pendulum control
design to linearize around this inherently unstable point, as it represents the desired operating
condition in most stabilization tasks. Linearizing around this point simplifies controller
synthesis and allows for effective implementation of both LQR and MPC schemes under
small-angle assumptions and near-equilibrium deviations.
To linearize the system, we apply Jacobian linearization, which involves computing the first-
order Taylor expansion of the nonlinear system dynamics around the equilibrium point. Let
the nonlinear system be represented as:
$$\dot{x} = f(x, u)$$
Its first-order approximation about the operating point is
$$\dot{x} \approx A x + B u,$$
where the matrices A and B are the Jacobians of the system dynamics with respect to the state and the input, respectively:
$$A = \left.\frac{\partial f}{\partial x}\right|_{x_{op},\, u_{op}}, \qquad B = \left.\frac{\partial f}{\partial u}\right|_{x_{op},\, u_{op}}.$$
In the MATLAB implementation, these derivatives are computed symbolically using the
jacobian() function applied to the vector of nonlinear differential equations. The resulting
Jacobian matrices capture the local sensitivity of the system to small perturbations in state
and input near the operating point.
Evaluating the Jacobians at 𝑥𝑜𝑝 and 𝑢𝑜𝑝 yields the numerical matrices A and B which define
the linearized state-space model. This model retains the essential dynamics of the system near
the upright position and is suitable for designing stabilizing controllers that operate within a
small neighborhood of the equilibrium.
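The excerpt below sketches this symbolic step with the parameter values from the Appendix listing; it is a condensed version of that code, not an independent derivation.

% Excerpt of the symbolic Jacobian linearization (parameters as in the Appendix).
M = 0.5; m = 0.2; b = 0.1; I = 0.006; g = 9.8; l = 0.3;
syms x1 x2 x3 x4 u_sym real
D = I*(M + m) + M*m*l^2;
f = [x2;
     (1/D)*(-m^2*l^2*g*cos(x3)*sin(x3) + m*l*x4^2*sin(x3) - b*x2 + m*l*u_sym);
     x4;
     (1/D)*((M + m)*m*g*l*sin(x3) - m*l*cos(x3)*(m*l*x4^2*sin(x3) - b*x2 + u_sym))];
x_op = [0 0 pi 0];  u_op = 0;                 % upright equilibrium
A = double(subs(jacobian(f, [x1 x2 x3 x4]), [x1 x2 x3 x4 u_sym], [x_op u_op]));
B = double(subs(jacobian(f, u_sym),         [x1 x2 x3 x4 u_sym], [x_op u_op]));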
The output matrix C is chosen as the identity matrix to ensure full-state observability, and
the feedthrough matrix D is set to zero, indicating no direct influence of the input on the
output.
This linearization process transforms the complex nonlinear system into a tractable linear
form, enabling the application of optimal control techniques while preserving the physical
interpretability of the original dynamics.
The resulting linear model has output equation y(t) = Cx(t), where x ∈ ℝ⁴ represents the state vector (cart position, velocity, pendulum angle, and angular velocity), and u(t) ∈ ℝ is the scalar control input, the force applied to the cart.
The MPC algorithm minimizes a finite-horizon quadratic cost function of the form
$$J = \sum_{k=0}^{N_p - 1}\Big[(x_k - r)^T Q_x (x_k - r) + R_u\, \Delta u_k^2\Big],$$
where N_p is the prediction horizon, Q_x ∈ ℝ⁴ˣ⁴ is the state weight matrix, R_u ∈ ℝ is the input rate weight, r is the reference state, and Δu_k = u_k − u_{k−1} is the control rate.
Bounds on the control input reflect physical actuator limits and ensure control feasibility within the optimization problem.
This design approach enables predictive closed-loop control of the nonlinear pendulum system subject to input constraints. It forms the foundation for robust and optimally constrained trajectory regulation in real-time applications.
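As an illustration of how such weights and bounds might be attached to a MATLAB MPC object for the linearized model sys (with C = eye(4), so the outputs equal the states), consider the sketch below; the horizons, weight values, and the ±10 N force limit are assumptions for illustration, not the settings used to produce the figures that follow.

% Sketch: attaching state weights and input limits to an MPC object for the
% linearized pendulum model 'sys' (outputs equal states since C = eye(4)).
Ts = 0.1;
mpcobj = mpc(sys, Ts, 20, 5);                   % Np = 20, Nc = 5 (illustrative)
mpcobj.Weights.OutputVariables = [1.5 1 1 1];   % diagonal of Qx (illustrative)
mpcobj.Weights.ManipulatedVariablesRate = 1;    % Ru, penalty on Delta u
mpcobj.MV.Min = -10;  mpcobj.MV.Max = 10;       % assumed actuator force limits [N]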
d. Simulation Results
The simulation results clearly illustrate the dynamic behavior of the inverted pendulum
system under both MPC and LQR regulation strategies as implemented in the provided
MATLAB code. Under MPC control (Figs. 3.12–3.13), the input signal initially spikes to
around 6 units due to the controller's effort to quickly reject the pendulum's large initial
deviation from the upright position. This is followed by bounded oscillations, indicating that
the MPC efficiently respects the imposed input constraints while progressively damping out
system disturbances. The measured outputs exhibit characteristic nonlinear transient
behavior: the cart position and velocity gradually converge to zero, while the pendulum angle
and angular velocity display sinusoidal patterns that decay over time. These oscillations
reflect both the system's underactuated nature and the predictive controller's effort to correct
multi-variable coupling while honoring finite control horizons.
In contrast, the LQR simulation (Fig. 3.14) shows a markedly different transient profile. The
system’s response to a unit step input reveals smooth settling with minimal overshoot. Cart
dynamics (position and velocity) stabilize rapidly, whereas the pendulum angle exhibits
sharper but still bounded excursions before achieving equilibrium. This behavior stems from
the infinite-horizon quadratic cost minimized by LQR, which assumes full-state feedback and
unconstrained control. The absence of input bounds allows LQR to apply more aggressive or
continuous corrections, thereby reducing the settling time and peak deviations at the cost of
potentially larger control energy.
Comparatively, MPC offers superior constraint handling and tunable responsiveness through
horizon selection and weight prioritization. It is particularly well-suited for systems with
actuation limits, as seen in the bounded input signal of the MPC simulation. LQR, on the
other hand, provides elegant optimal control in idealized settings without input constraints.
Thus, the MPC scheme demonstrates robust and constraint-aware stabilization, whereas LQR
yields faster transients but lacks bounded control enforcement. The choice between them
ultimately depends on the application’s tolerance for control effort, physical limitations, and
desired performance metrics.
Fig. 3.12. Input signal applied to the plant during closed-loop MPC simulation
 Fig. 3.13. Closed-loop response of the inverted pendulum system under MPC regulation
Fig. 3.14. Step response of the linearized inverted pendulum system under LQR regulation
e. Effect of Sampling Time
The simulation results across sampling times of 𝑇𝑠 =0.01 s (Figs. 3.15, 3.16 and 3.17),
𝑇𝑠 =0.1s (Figs. 3.12, 3.13 and 3.14) and 𝑇𝑠 =1 s (Figs. 3.18 and 3.19) reveal distinct control
characteristics shaped by temporal resolution. At 𝑇𝑠 =0.1 s, the MPC controller achieves
balanced responsiveness and stability, benefiting from adequate horizon depth and frequent
updates. Reducing the sampling time to 0.01 s increases update frequency but compresses the
prediction horizon in physical time, causing constraint-driven input saturation, poor foresight,
and aggressive oscillations. Conversely, at 1 s, the extended horizon enables strategic long-
term planning, yet sparse updates introduce coarse control and delayed transients. Thus,
sampling time critically influences MPC performance — too fine hampers anticipation, too
coarse delays correction — and optimal tuning requires harmonizing horizon length, weight
regularization, and system dynamics.
Fig. 3.15. Plant input signal u(t) under MPC regulation at Sampling Time 𝑇𝑠 = 0.01 𝑠
Fig. 3.16. Output responses of the Inverted Pendulum System at Sampling Time 𝑇𝑠 = 0.01 s
             Fig. 3.18. Input Signal applied to the plant at Sampling Time 𝑇𝑠 = 1 𝑠
Fig. 3.19. Closed-loop response of the inverted pendulum system at Sampling Time 𝑇𝑠 = 1s
CONCLUSION
The thesis further demonstrates MPC’s versatility through detailed case studies, including the
mass-spring-damper system, lateral control of an autonomous vehicle, and the inverted
pendulum on a cart. Each system is modeled, linearized where necessary, and regulated using
both GUI-based (mpcDesigner) and script-based implementations. Simulation results
validate MPC’s ability to enforce constraints, minimize transient errors, and adapt to varying
sampling times. Comparative analysis with LQR control underscores MPC’s strength in
constraint handling and anticipative behavior, while revealing trade-offs in responsiveness
and computational complexity.
Overall, the thesis substantiates MPC as a robust, scalable, and constraint-aware control
strategy, capable of delivering high-performance regulation in both SISO and MIMO
environments. It bridges theoretical rigor with practical insight, offering a valuable reference
for advanced control system design in academic and engineering contexts.
REFERENCES
[1] L. Wang, Model Predictive Control System Design and Implementation Using MATLAB®, 1st ed. London, U.K.: Springer-Verlag, 2009. doi: 10.1007/978-1-84882-331-0
[2] R. E. Bellman, Dynamic Programming. Princeton, NJ, USA: Princeton University Press, 1957.
[3] R. E. Kalman and J. E. Bertram, "Control system analysis and design via the 'second method' of Lyapunov," J. Basic Eng. (Trans. ASME), vol. 82D, pp. 371–394, 1960.
[4] R. E. Kalman, "On the general theory of control systems," IFAC Proceedings Volumes, vol. 1, no. 1, pp. 491–502, 1960. ISSN 1474-6670. doi: 10.1016/S1474-6670(17)70094-8
[5] S. Gorinevsky, "Lecture 14 – Model Predictive Control Part 1: The Concept," EE392m: Control Engineering, Stanford University, Spring 2005. [Online]. Available: https://web.stanford.edu/class/archive/ee/ee392m/ee392m.1056/Lecture14_MPC.pdf
APPENDIX
In this section, we present the MATLAB programs used to obtain the simulation results of the
case studies. These implementations encompass both GUI-based and script-driven
approaches, tailored to the dynamic models discussed in earlier chapters — including the
mass-spring-damper system, lateral vehicle control, and the inverted pendulum. Each code
segment reflects the corresponding control architecture, prediction horizon configuration, and
constraint handling strategy, thereby reinforcing the theoretical constructs with reproducible
computational validation.
MATLAB program:
m = 1;                      % cart mass [kg]
c = 2;                      % damping coefficient [Ns/m]
k = 5;                      % spring stiffness [N/m]
A = [0 1; -k/m -c/m];       % state matrix of (3.15)
B = [0; 1/m];               % input matrix
C = [1 0];                  % position is the measured output
D = 0;
plant = ss(A, B, C, D);     % continuous-time plant model
mpcDesigner(plant)
MATLAB program:
%% Continuous-time plant model (uses A, B, C, D defined in the previous listing)
Ts = 1; % Sampling time [s]; set to 0.01, 0.1 or 1 for the sampling-time study
plant = ss(A, B, C, D); % Continuous-time system
sys_d = c2d(plant, Ts); % Discrete-time system
%% Design MPC Controller
Np = 10; Nc = 5;
mpcobj = mpc(sys_d, Ts, Np, Nc);
% Tuning weights
mpcobj.Weights.ManipulatedVariablesRate = 0.01;
mpcobj.Weights.OutputVariables = 1;
% Input constraints (optional)
mpcobj.MV.Min = -10;
mpcobj.MV.Max = 10;
%% Closed-Loop Simulation
simSteps = 50;
x = [0; 0]; % Initial state
r = 1; % Desired position (setpoint)
xlog = zeros(simSteps+1, 2);
ulog = zeros(simSteps+1, 1);
xlog(1,:) = x';
mpcState = mpcstate(mpcobj); % MPC controller internal state
for k = 1:simSteps
y_meas = C * x; % Output measurement
u = mpcmove(mpcobj, mpcState, y_meas, r); % Compute control input
x = sys_d.A * x + sys_d.B * u; % State update
xlog(k+1,:) = x';
ulog(k+1) = u;
end
%% Plotting
time = 0:simSteps;
figure;
subplot(2,1,1);
plot(time, xlog(:,1), 'b-', 'LineWidth', 2);
ylabel('Position (m)');
title('MPC-Controlled Position Response');
grid on;
subplot(2,1,2);
stairs(time, ulog, 'r-', 'LineWidth', 2);
ylabel('Control Input (Force)');
xlabel('Time Steps');
title('Applied Control Input');
grid on;
MATLAB program:
clc; clear; close all;
%% System Parameters
Ts = 1; % Sampling time [s] (the nominal case in Section 3.2 uses Ts = 0.1 s)
V = 15; % Constant forward velocity [m/s]
A = [1 0;
V*Ts 1];
B = [Ts;
0.5*V*Ts^2];
C = [0 1]; % Only lateral position is measured
D = 0;
%% Create Discrete-Time State-Space System
sys = ss(A, B, C, D, Ts);
%% MPC Controller Design
Np = 10; % Prediction horizon
Nc = 3; % Control horizon
mpcobj = mpc(sys, Ts, Np, Nc);
% Weighting: prioritize lateral position tracking, smooth inputs
mpcobj.Weights.OutputVariables = 1;
mpcobj.Weights.ManipulatedVariablesRate = 0.1;
% Input constraints (e.g., steering saturation)
mpcobj.MV.Min = -0.5;
mpcobj.MV.Max = 0.5;
%% Simulation Setup
simSteps = 50;
x = [0; 2]; % Initial state: zero angle, 2m lateral offset
r = 0; % Desired lateral position (lane center)
xlog = x'; % State history
ulog = 0; % Control input history
% Create MPC state tracker
mpcState = mpcstate(mpcobj);
for k = 1:simSteps
y_meas = C * x; % Measured lateral position
u = mpcmove(mpcobj, mpcState, y_meas, r); % Compute optimal control
x = A * x + B * u; % Update plant state
xlog = [xlog; x'];
ulog = [ulog; u];
end
%% Plot Results
time = 0:simSteps;
figure('Name','Lateral MPC Controller','NumberTitle','off');
subplot(3,1,1);
plot(time, xlog(:,1), 'b', 'LineWidth', 2);
ylabel('Heading Angle a(t)'); grid on;
subplot(3,1,2);
plot(time, xlog(:,2), 'r', 'LineWidth', 2);
ylabel('Lateral Position y(t)');
title('MPC Control of Lateral Position'); grid on;
subplot(3,1,3);
stairs(time, ulog, 'k', 'LineWidth', 2);
xlabel('Time Step'); ylabel('Control Input u(t)');
title('Control Input (e.g., Steering Rate)'); grid on;
MATLAB program:
% Parameters of the inverted pendulum on a cart
M = 0.5; % Mass of the cart (kg)
m = 0.2; % Mass of the pendulum (kg)
b = 0.1; % Coefficient of friction (N/m/sec)
I = 0.006; % Moment of inertia of the pendulum (kg*m^2)
g = 9.8; % Acceleration due to gravity (m/s^2)
l = 0.3; % Length to pendulum center of mass (m)
% Symbolic linearization variables
syms x1 x2 x3 x4 u_sym real
X = [x1; x2; x3; x4];
% Define the symbolic dynamics
Sy = sin(x3);
Cy = cos(x3);
D = I*(M + m) + M*m*l^2;
dx_sym = sym(zeros(4,1));
dx_sym(1) = x2;
dx_sym(2) = (1/D)*(-m^2*l^2*g*Cy*Sy + m*l*x4^2*Sy - b*x2 + m*l*u_sym);
dx_sym(3) = x4;
dx_sym(4) = (1/D)*((M + m)*m*g*l*Sy - m*l*Cy*(m*l*x4^2*Sy - b*x2 + u_sym));
% Operating point
x_op = [0; 0; pi; 0];
u_op = 0;
% Linearize
A = double(subs(jacobian(dx_sym, X), [x1 x2 x3 x4 u_sym], [x_op(:)' u_op]));
B = double(subs(jacobian(dx_sym, u_sym), [x1 x2 x3 x4 u_sym], [x_op(:)' u_op]));
% Output and feedthrough matrices
C = eye(4);
D_mat = zeros(4,1);
% Create state-space model
sys = ss(A, B, C, D_mat);
% Define weighting matrices for LQR
Q = [1.5 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]; % State weighting matrix
R = 1; % Control weighting matrix
% Create MPC controller
Ts = 0.1;
mpcobj = mpc(sys, Ts);
% Launch MPC Designer
mpcDesigner(mpcobj);
[K1,S1,P1] = lqr(A,B,Q,R);
sys1 = ss(A-B*K1,B,C,D_mat);
step(sys1)