Modern Control Theory
Lecture 7
Optimal Control
INTRODUCTION
The design of a control system is an attempt to meet a set of specifications
that define the overall performance of the system in terms of certain
measurable quantities.
In the classical design method for control systems, the designer is given a
set of specifications in the time domain or in the frequency domain, together
with the system configuration. Peak overshoot, settling time, gain margin,
phase margin, steady-state error, etc., are among the most commonly used
specifications. All of these specifications have to be satisfied simultaneously
in the design.
In practice, it may not be possible to satisfy all the desired specifications
simultaneously, and hence the design necessarily becomes a trial-and-error
procedure. This trial-and-error procedure works satisfactorily for
single-input single-output systems. However, for a multi-input multi-output
system with a high degree of complexity, the trial-and-error approach may not
lead to a satisfactory design.
Optimal Control Cont’d
The optimal control design aims at obtaining the best possible system of a
particular type with respect to a certain performance index or design
criterion; hence the word 'optimal'. In the optimal control design, the
performance index replaces the conventional design criteria, such as peak
overshoot, settling time, gain margin, phase margin, steady-state error, etc.
Of course, the designer must be able to select the performance index properly,
so that the goodness of the system response can be judged on the basis of this
performance index.
Optimal Control Cont’d
For a continuous system, the state equations are a set of first-order
differential equations, i.e.,
    ẋ(t) = f(x(t), u(t), t)
where the time t belongs to (t0, tf); t0 is the initial time and tf the final
time. In an optimal control problem, we must also specify properly the
objective function (performance index) which is to be optimized.
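The state equation above can be simulated numerically once f is given. A minimal sketch, assuming an illustrative first-order plant ẋ = −2x + u with a constant input (the plant, the input, and all numbers here are demonstration choices, not from the lecture):

```python
# Simulating the state equation x'(t) = f(x, u, t) for an assumed
# example plant: x' = -2x + u with constant input u = 1.
import numpy as np
from scipy.integrate import solve_ivp

def u(t):
    return 1.0                      # example control input (assumption)

def f(t, x):
    return -2.0 * x + u(t)          # example dynamics x' = -2x + u

t0, tf = 0.0, 3.0                   # initial and final times
sol = solve_ivp(f, (t0, tf), [0.0]) # integrate from x(t0) = 0
print(sol.y[0, -1])                 # state at tf; tends toward 0.5
```

The analytic solution here is x(t) = 0.5(1 − e^(−2t)), so the simulated trajectory approaches the equilibrium 0.5 as t grows.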
Optimal Control Cont’d
We know that a control system is usually described by a block diagram of the
plant and the controller.
(Figure: block diagram of the control system.)
Optimal Control Cont’d
An optimal control problem is formulated in terms of:
1. Characteristics of the plant
2. Requirements of the plant
3. Data of the plant received by the controller
Optimal Control Cont’d
1. Characteristics of the plant
Optimal Control Cont’d
2. Requirements of the plant
Optimal Control Cont’d
a. Minimum Time Problem
Here the performance index is the elapsed time itself,
    PI = ∫_{t0}^{t1} dt = t1 − t0,
where PI is the performance index, t0 is the initial time, and t1 is the first
instant of time at which the state x(t) and the target set intersect. The
final state may lie in a specified region of the n-dimensional state space.
Optimal Control Cont’d
b. Minimum Energy Problem
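A standard textbook form of the performance index for the minimum-energy problem, assuming the usual quadratic setup (R a positive-definite control weighting matrix; these symbols are conventions, not taken from the slide), is:

```latex
PI = \int_{t_0}^{t_f} u^{T}(t)\, R\, u(t)\, dt
```

Minimizing this index penalizes the total control effort (e.g., electrical energy) expended over the interval.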
Optimal Control Cont’d
c. Minimum Fuel Problem
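A standard form of the performance index for the minimum-fuel problem, assuming fuel consumption proportional to the magnitude of each control component (the usual textbook convention, not taken from the slide), is:

```latex
PI = \int_{t_0}^{t_f} \sum_{i=1}^{m} \lvert u_i(t) \rvert \, dt
```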
Optimal Control Cont’d
d. State Regulator Problem
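A standard quadratic form of the performance index for the state-regulator problem, assuming the usual textbook setup (H and Q positive semidefinite, R positive definite; these symbols are conventions, not taken from the slide), is:

```latex
PI = \tfrac{1}{2}\, x^{T}(t_f)\, H\, x(t_f)
   + \tfrac{1}{2} \int_{t_0}^{t_f}
     \left( x^{T}(t)\, Q\, x(t) + u^{T}(t)\, R\, u(t) \right) dt
```

The goal is to drive the state x(t) toward the origin while penalizing excessive control effort.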
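For linear dynamics ẋ = Ax + Bu with a quadratic performance index, the state-regulator problem is the classical LQR problem. A minimal numerical sketch, assuming an illustrative double-integrator plant with identity weighting matrices (all matrices and values here are demonstration choices, not from the lecture):

```python
# State-regulator (LQR) sketch for an assumed double-integrator plant.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])           # double-integrator dynamics x' = Ax + Bu
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                        # state weighting (demonstration choice)
R = np.array([[1.0]])                # control weighting (demonstration choice)

P = solve_continuous_are(A, B, Q, R) # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)      # optimal state-feedback gain, u = -Kx

print(K)                             # gain [[1, sqrt(3)]] for this plant
print(np.linalg.eigvals(A - B @ K))  # closed-loop poles, all in left half-plane
```

The Riccati solution gives the feedback law u = −Kx; for this particular plant the closed-loop characteristic polynomial is s² + √3 s + 1, so the regulator is stable.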
Optimal Control Cont’d
e. Output Regulator Problem
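A standard form of the performance index for the output-regulator problem, assuming the usual setup with output y(t) = Cx(t) (symbols are textbook conventions, not taken from the slide), is:

```latex
PI = \tfrac{1}{2}\, y^{T}(t_f)\, H\, y(t_f)
   + \tfrac{1}{2} \int_{t_0}^{t_f}
     \left( y^{T}(t)\, Q\, y(t) + u^{T}(t)\, R\, u(t) \right) dt
```

This parallels the state-regulator index, but penalizes deviations of the output rather than of the full state.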
Optimal Control Cont’d
f. Servomechanism or Tracking Problem
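In the standard textbook formulation of the tracking problem, the output y(t) is required to follow a desired (reference) output z(t); with the tracking error defined as e(t) = z(t) − y(t), a commonly used performance index (symbols are conventions, not taken from the slide) is:

```latex
e(t) = z(t) - y(t), \qquad
PI = \tfrac{1}{2} \int_{t_0}^{t_f}
     \left( e^{T}(t)\, Q\, e(t) + u^{T}(t)\, R\, u(t) \right) dt
```

The regulator problems above are special cases in which the reference z(t) is identically zero.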
Optimal Control Cont’d
3. Plant Data Supplied to the Controller