This MATLAB function calculates new PID controller gains using iterative feedback tuning (IFT) to minimize the error between the actual and desired closed-loop system response. It takes in the current PID gains, desired and actual system outputs and inputs from experiments, and performs gradient descent on a cost function to calculate the gains that will move the actual response closer to the desired one. It constructs gradient controller transfer functions, calculates gradient signals, and uses these to determine the gradient of the cost function with respect to each gain. The new gains are then calculated by taking a step in the negative gradient direction.
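
As a rough illustration of how pidift might be used in an IFT loop: the sketch below assumes a hypothetical helper simulate_plant (not part of this file) that runs one closed-loop experiment with gains K and reference r and returns the measured output and control input, and assumes yd (the desired closed-loop response to r) is already available; the reference, iteration count, and step size 0.1 are arbitrary choices.

    r = ones(100,1);              % example reference: a unit step
    K = [1; 0.1; 0.01];           % initial [Kp; Ki; Kd] guess
    for iter = 1:20
        % Three closed-loop experiments per IFT iteration
        [y1, u1] = simulate_plant(K, r);        % experiment 1: reference r
        [y2, u2] = simulate_plant(K, r - y1);   % experiment 2: "gradient" experiment
        [y3, u3] = simulate_plant(K, r);        % experiment 3: reference r again
        % One gradient-descent step of size 0.1 toward the desired response yd
        K = pidift(K, yd, [y1 y2 y3], [u1 u2 u3], 0.1);
    end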


function K = pidift(K, yd, y, u, gamma, lambda, beta)

% K = pidift - Calculate new PID gains from existing gains of a discrete
% PID controller.
%
% K = pidift(K, yd, y, u)
%
% Calculate the new PID gains that will move the current system
% response closer to the desired system response. K is a vector of
% the current PID gains, [Kp, Ki, Kd], where the controller structure
% is of the following form:
%
%                  ----------
%                 |    Ki    |
%              -->| -------- |----
%              |  | 1 - z^-1 |   |
%              |   ----------    |
%              |                 |+
%         +     |     ----      + v
%   r ----> ------->| Kp |------> ------> u
%          ^         ----        ^
%         -|                    -|
%          |                     |
%          y                   ----
%                             | Kd |
%                              ----
%                                ^
%                                |
%                              dy/dt
%
% yd is the desired system output. y is an Nx3 matrix whose columns
% result from the following experiments:
%
% Start with a reference signal r. yd should be the desired
% closed-loop response of the system to the reference input r.
%
% y1,u1 = generated by system with the reference input r
% y2,u2 = generated by system with the reference input r - y1
% y3,u3 = generated by system with the reference input r
%
% Note that y1,u1 and y3,u3 must be generated from _separate_
% experiments.
%
% Once complete, y(:,1) = y1, etc.
%
% K = pidift(K, yd, y, u, gamma)
%
% Generate new gains with a step-size of gamma used in the
% gradient-descent of the cost function. gamma=0.1 when omitted.
%
% K = pidift(K, yd, y, u, gamma, lambda)
%
% Add input cost to the IFT cost-function, weighted by lambda. u is an
% Nx3 matrix of input signals generated from the same experiments as y.
% lambda=0 when omitted.
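%
% K = pidift(K, yd, y, u, gamma, lambda, beta)
%
% beta is a 3x1 vector of weights applied to the output gradient
% signals for [Kp, Ki, Kd] respectively. beta=ones(3,1) when omitted.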

%
% Source:
% H. Hjalmarsson et al., Iterative feedback tuning: theory and
% applications, Control Systems Magazine, IEEE, vol. 18, 1998, pp.
% 26-41.
%
% (C) 2008 Dan Miller danielmiller@ucsd.edu

% Force K into column form.
K = K(:);

% Fill in defaults for any optional arguments that were not supplied.
if nargin < 5
    gamma = 0.1;
end
if nargin < 6
    lambda = 0;
end
if nargin < 7
    beta = ones(3,1);
end

% Extract Kp and Ki; Kd is not used directly in the cost calculations.
Kp = K(1);
Ki = K(2);

% Extract the test signals.
y1 = y(:,1);
y2 = y(:,2);
y3 = y(:,3);
u1 = u(:,1);
u2 = u(:,2);
u3 = u(:,3);

% Construct the gradient PID controller functions. These are the partial
% derivatives of the PID controller TF.
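% With sample time 1, these filters work out to (in terms of z^-1):
%   dKp: (1 - z^-1)   / ((Kp + Ki) - Kp*z^-1)
%   dKi:      1       / ((Kp + Ki) - Kp*z^-1)
%   dKd: (1 - z^-1)^2 / ((Kp + Ki) - Kp*z^-1)
% Applied below to the signals from the second ("gradient") experiment
% (reference r - y1), they give estimates of the partial derivatives of
% the closed-loop output and input with respect to each gain.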
dKp = tf([1, -1], [Kp + Ki, -Kp], 1);
dKi = tf([1, 0], [Kp + Ki, -Kp], 1);
dKd = tf([1, -2, 1], [Kp + Ki, -Kp, 0], 1);

% Calculate the gradient signals.
dydKp = beta(1)*lsim(dKp, y2);
dydKi = beta(2)*lsim(dKi, y2);
dydKd = beta(3)*lsim(dKd, y2 - y3);

dudKp = lsim(dKp, u2);
dudKi = lsim(dKi, u2);
dudKd = lsim(dKd, u2 - u3);

% Calculate the gradient of the cost function for each gain.
dJ = 0;
for i = 1:length(y1)
    dJ = dJ + (y1(i) - yd(i)) * [dydKp(i); dydKi(i); dydKd(i)] ...
        + lambda * u1(i) * [dudKp(i); dudKi(i); dudKd(i)];
end
dJ = dJ/length(y1);
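% dJ now approximates
%   (1/N) * sum_i [ (y1(i) - yd(i))*dy/dK(i) + lambda*u1(i)*du/dK(i) ]
% for K = [Kp; Ki; Kd].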

% Calculate the Gauss-Newton-ish gradient matrix (approximate Hessian).
R = 0;
for i = 1:length(y1)
    dydrho = [dydKp(i); dydKi(i); dydKd(i)];
    dudrho = [dudKp(i); dudKi(i); dudKd(i)];
    R = R + dydrho*dydrho' + dudrho*dudrho';
end
R = R/length(y1);
% Calculate the new gains by taking a step in the negative gradient direction.
K = K - gamma*(R\dJ);
end
