Policy Iterations on the Hamilton–Jacobi–Isaacs Equation for H∞ State Feedback Control With Input Saturation
Abstract—An H∞ suboptimal state feedback controller for constrained input systems is derived using the Hamilton–Jacobi–Isaacs (HJI) equation of a corresponding zero-sum game that uses a special quasi-norm to encode the constraints on the input. The unique saddle point in feedback strategy form is derived. Using policy iterations on both players, the HJI equation is broken into a sequence of differential equations linear in the cost for which closed-form solutions are easier to obtain. Policy iterations on the disturbance are shown to converge to the available storage function of the associated L2-gain dissipative dynamics. The resulting constrained optimal control feedback strategy has the largest domain of validity within which L2-performance for a given γ is guaranteed.

Index Terms—Controller saturation, H∞ control, policy iterations, zero-sum games.

Manuscript received May 25, 2005; revised December 4, 2005, May 2, 2006, and June 2, 2006. Research supported by the National Science Foundation under Grant ECS-0501451 and by the Army Research Office under Grant W91NF-05-1-0314.
M. Abu-Khalaf and F. L. Lewis are with the Automation and Robotics Research Institute, The University of Texas at Arlington, Fort Worth, TX 76118 USA (e-mail: abukhalaf@arri.uta.edu; lewis@uta.edu).
J. Huang is with the Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong (e-mail: jhuang@acae.cuhk.edu.hk).
Digital Object Identifier 10.1109/TAC.2006.884959

I. INTRODUCTION

In this note, we derive the Hamilton–Jacobi–Isaacs (HJI) equation for systems with input constraints and then develop an algorithm based on policy iterations to solve the obtained HJI equation. Although the formulation of the nonlinear H∞ control theory has been well developed [4], [5], [7], [11], [17], [19], solving the corresponding HJI equation remains a challenge. Several methods have been proposed to solve the HJI equation.
When its solution is smooth, it can be determined directly by solving for the coefficients of the Taylor series expansion of the value function, as has been proposed in [10]. In [17], it was proven that there exists a sequence of policy iterations on the control input that pursues the smooth solution of the HJI equation. Later, in [8], policy iterations on the disturbance input were suggested in addition to policy iterations on the control input. However, the existence and stability of the disturbance policy iterations were not proven.

In this note, we have three objectives. First, prove the existence of policy iterations on the disturbance input under certain assumptions [...] problem in [4]. The results in this note are obtained under regularity assumptions, as done in [11], [17], and [1] for the HJB case.

II. POLICY ITERATIONS AND THE AVAILABLE STORAGE Va(x)

Consider the system described by

ẋ = f(x) + k(x)d
z = h(x)   (1)

where f(0) = 0, d(t) is a disturbance, and z(t) is a fictitious output. x = 0 is assumed to be an equilibrium point of the system. It is said that (1) has an L2-gain ≤ γ, γ ≥ 0, if

∫_0^T ‖z(t)‖² dt ≤ γ² ∫_0^T ‖d(t)‖² dt

for all T ≥ 0 and all d ∈ L2(0, T). The existence of the so-called available storage function is essential in determining whether or not a system is dissipative.

Definition 1: The available storage Va, when it exists, is the solution of the optimal control problem

Va(x₀) = sup_{d(·), T ≥ 0} ∫_0^T ( ‖z(t)‖² − γ² ‖d(t)‖² ) dt.

When the available storage Va ≥ 0 is smooth, Va ∈ C¹, and T → ∞, [...] For a linear system ẋ = Ax + Kd, z = Hx with Va(x) = x'Px, the corresponding equation is the Riccati equation

A'P + PA + (1/γ²) P K K' P + H'H = 0   (6)

that appears in the bounded real lemma problem for linear systems [15], [20].
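As a quick numerical illustration of the linear special case, the following sketch computes a stabilizing solution of (6), which plays the role of the available storage, through the stable invariant subspace of the associated Hamiltonian matrix. The matrices A, K, H and the value of γ are arbitrary illustrative choices, not data from this note.

```python
# Illustrative sketch: stabilizing solution of the bounded real lemma ARE (6)
#   A'P + P A + (1/gamma^2) P K K' P + H'H = 0
# via the stable invariant subspace of the Hamiltonian matrix
#   M = [[A, K K'/gamma^2], [-H'H, -A']].
# The data below are arbitrary illustrative choices (gamma above the H-infinity norm).
import numpy as np

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
K = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])
gamma = 2.0

n = A.shape[0]
M = np.block([[A, K @ K.T / gamma**2],
              [-H.T @ H, -A.T]])

# Eigenvectors spanning the stable (Re < 0) invariant subspace.
eigvals, eigvecs = np.linalg.eig(M)
stable = eigvecs[:, eigvals.real < 0]
X, Y = stable[:n, :], stable[n:, :]
P = np.real(Y @ np.linalg.inv(X))

residual = A.T @ P + P @ A + P @ K @ K.T @ P / gamma**2 + H.T @ H
print("P =\n", P)
print("ARE residual norm:", np.linalg.norm(residual))
```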
Theorem 1: Let the system (1) be zero-state observable, locally asymptotically stable with d = 0, and in addition have an L2-gain < γ. Assume that the available storage is a smooth function V* ≥ 0, V* ∈ C¹, with a domain of validity (DOV) Ω*. Then, starting with d⁰ = 0 and assuming that ∀i, Vⁱ ∈ C¹, there exists a sequence of policies resulting from iterations between (7) and (8)

V_x^i' ( f + k dⁱ ) + h'h − γ² ‖dⁱ‖² = 0   (7)
dⁱ = (1/2γ²) k' V_x^{i−1}   (8)

with 0 ≤ Vⁱ(x) ≤ V^{i+1}(x) for all x ∈ Ω* and Ωⁱ ⊆ Ω^{i−1}.
Proof: Assume that there is dⁱ such that ẋ = f + kdⁱ is asymptotically stable. Then

Vⁱ(x₀) = ∫_0^∞ ( h'h − (1/4γ²) V_x^{i−1}' k k' V_x^{i−1} ) dt   (9)

is well defined, and its infinitesimal version is

V_x^i' ( f + (1/2γ²) k k' V_x^{i−1} ) = −h'h + (1/4γ²) V_x^{i−1}' k k' V_x^{i−1}.   (10)

[...]

(P_x − V_x^i)' ( f + (1/2γ²) k k' V_x^i )
  = −ε(x) − (1/4γ²) (P_x − V_x^i)' k k' (P_x − V_x^i) − (1/4γ²) (V_x^i − V_x^{i−1})' k k' (V_x^i − V_x^{i−1})
  < 0.

Hence, ẋ = f + kd^{i+1} is locally asymptotically stable. Starting with d⁰ ≡ 0, and by asymptotic stability of ẋ = f, it follows by induction that ẋ = f + kdⁱ is stable ∀i. To show uniform convergence of Vⁱ to V*, note that

V_x^{i+1}' ( f + kd^{i+1} ) = −h'h + γ² ‖d^{i+1}‖²
V_x^i' f = −V_x^i' k dⁱ − h'h + γ² ‖dⁱ‖²
V_x^i' k = 2γ² d^{i+1}'.

By integrating Vⁱ and V^{i+1} over the state trajectory of ẋ = f + kd^{i+1} for x₀ ∈ Ωⁱ ∩ Ω^{i+1}, it follows that [...]
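For a scalar system, the iteration (7)–(8) can be carried out pointwise on a grid, since (7) determines V_x^i directly along the closed-loop vector field. The following minimal sketch illustrates this for the cubic system used in Example 1 below (f = −x³, k = 1, h = x³); the choice γ = 2 and the grid are illustrative assumptions, not the computation reported in the note.

```python
# Minimal sketch of the disturbance policy iteration (7)-(8) for a scalar
# system x' = f(x) + k(x) d, z = h(x).  In one dimension (7) gives
#   Vx_i(x) = (gamma^2 d_i(x)^2 - h(x)^2) / (f(x) + k(x) d_i(x)),
# and (8) updates d_{i+1} = k(x) Vx_i(x) / (2 gamma^2).
# The cubic system of Example 1 and gamma = 2 are illustrative choices.
import numpy as np

f = lambda x: -x**3
k = lambda x: np.ones_like(x)
h = lambda x: x**3
gamma = 2.0

x = np.linspace(0.05, 1.0, 200)   # grid away from the origin (0/0 there)
d = np.zeros_like(x)              # d^0 = 0

for i in range(30):
    Vx = (gamma**2 * d**2 - h(x)**2) / (f(x) + k(x) * d)   # from (7)
    d_new = k(x) * Vx / (2 * gamma**2)                      # from (8)
    if np.max(np.abs(d_new - d)) < 1e-10:
        break
    d = d_new

# Closed form of the available storage gradient for Example 1:
Vx_exact = 2 * gamma**2 * (1 - np.sqrt(1 - gamma**-2)) * x**3
print("iterations:", i, " max |Vx - Vx_exact| =", np.max(np.abs(Vx - Vx_exact)))
```

The iterates increase monotonically toward the available storage, as asserted by Theorem 1.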
Example 1: Consider the system

ẋ = −x³ + d,   z = x³.   (15)

The corresponding HJ equation is

V_x(−x³) + (1/4γ²) V_x² + x⁶ = 0.   (16)

The available storage is V(x) = 2γ²(1 − (1 − γ^{−2})^{1/2}) x⁴/4. Note that the available storage ceases to exist for γ < 1. Hence, the L2-gain is equal to 1. Note that the closed-loop dynamics with d = (1 − (1 − γ^{−2})^{1/2}) x³ is

ẋ = −(1 − γ^{−2})^{1/2} x³.   (17)
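The closed-form expressions of Example 1 are easy to confirm symbolically. The following sketch, which is purely illustrative, checks that the stated available storage satisfies (16) and reproduces the worst-case disturbance and the closed loop (17).

```python
# Symbolic check of Example 1: x' = -x^3 + d, z = x^3, HJ equation (16)
#   Vx*(-x^3) + Vx^2/(4*gamma^2) + x^6 = 0.
import sympy as sp

x, gamma = sp.symbols('x gamma', positive=True)

# Candidate available storage from the text: V(x) = 2*gamma^2*(1 - sqrt(1 - 1/gamma^2))*x^4/4.
V = 2 * gamma**2 * (1 - sp.sqrt(1 - 1/gamma**2)) * x**4 / 4
Vx = sp.diff(V, x)

hj = Vx * (-x**3) + Vx**2 / (4 * gamma**2) + x**6
assert sp.expand(hj) == 0            # V satisfies (16)

# Worst-case disturbance d = Vx/(2*gamma^2) and the closed loop (17):
d_star = sp.simplify(Vx / (2 * gamma**2))
closed_loop = sp.simplify(-x**3 + d_star)
print(d_star)        # (1 - sqrt(1 - 1/gamma^2))*x^3
print(closed_loop)   # expected: -sqrt(1 - 1/gamma^2)*x^3
```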
[...] The control input is constrained to the set

U = { u(t) ∈ L2[0, ∞) : |u_i(t)| ≤ ū_i, i = 1, ..., m }.

[...]

∫_0^∞ ( h'h + ‖u‖² − γ² ‖d‖² ) dt.   (21)

[...] Note that this is a challenging constrained optimization since the minimization of the Hamiltonian with respect to u is constrained, u ∈ U. To confront this constrained optimization problem, we propose the use of a quasi-norm to transform the constrained optimization problem (23) into [...]. An example is the use of φ(·) = tanh(·) when |u| ≤ 1. In this case, the range of φ(·) and the domain of φ^{−1}(u) is (−1, 1) and, therefore,

∫_0^∞ ‖z(t)‖² dt = ∫_0^∞ ( h'h + 2 ∫_0^u φ^{−1}(v)' dv ) dt ≤ γ² ∫_0^∞ ‖d(t)‖² dt.   (26)
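For φ(·) = tanh(·), the input penalty 2∫_0^u φ^{−1}(v) dv appearing in (26) has the closed form 2u·tanh^{−1}(u) + ln(1 − u²), which is nonnegative and finite for |u| < 1. The small sketch below (illustrative only) evaluates this closed form and checks it against numerical quadrature.

```python
# Quasi-norm input penalty for phi = tanh:  q(u) = 2*int_0^u atanh(v) dv
#                                                = 2*u*atanh(u) + ln(1 - u^2).
# Illustrative sketch; compares the closed form with numerical quadrature.
import numpy as np
from scipy.integrate import quad

def quasi_norm_sq(u):
    """Penalty 2*int_0^u atanh(v) dv for |u| < 1."""
    return 2.0 * u * np.arctanh(u) + np.log(1.0 - u**2)

for u in (0.0, 0.3, -0.5, 0.9):
    numeric, _ = quad(lambda v: 2.0 * np.arctanh(v), 0.0, u)
    print(u, quasi_norm_sq(u), numeric)
```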
The Hamiltonian of the game (27) is

H(x, p, u, d) = p'(f + gu + kd) + h'h + 2 ∫_0^u φ^{−1}(v)' dv − γ² ‖d‖².   (28)

From Lemma 3, Isaacs's condition follows, as shown in the next lemma.

Lemma 4: For the Hamiltonian (28), Isaacs's condition is satisfied: min_u max_d H = max_d min_u H.

Proof: Applying the stationarity conditions ∂H/∂u = 0, ∂H/∂d = 0 to (28) gives (29)

2 φ^{−1}(u*) + g(x)'p = 0  ⇒  u*(x) = −φ( (1/2) g(x)' p )
d*(x) = (1/2γ²) k(x)' p.   (29)

Defining

H*(x, p, u*, d*) = p'f − 2 φ^{−1}(u*)' u* + h'h + 2 ∫_0^{u*} φ^{−1}(v)' dv + (1/4γ²) p' k k' p   (30)

and rewriting (28) in terms of (30) gives

H(x, p, u, d) = H*(x, p, u*, d*) − γ² ‖d − d*‖² + 2 [ ∫_{u*}^{u} φ^{−1}(v)' dv − φ^{−1}(u*)'(u − u*) ]

which is a valid expression for all d and all u ∈ U. From Lemma 3, one has

H(x, p, u*, d) ≤ H(x, p, u*, d*) ≤ H(x, p, u, d*)   (31)

and Isaacs's condition follows.
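The completing-the-squares identity behind (31) can be spot-checked numerically: for fixed (x, p), the map d ↦ H is maximized at d*, and, for φ = tanh with ū = 1, the map u ↦ H is minimized over U at u*. The sketch below does this for scalar x, u, d with arbitrary illustrative data.

```python
# Numerical spot-check of the saddle point property (31) of the Hamiltonian (28)
# for scalar x, u, d with phi = tanh and illustrative data f, g, k, h, gamma, p.
import numpy as np

gamma = 2.0
f, g, k, h = -1.0, 1.0, 1.0, 0.5       # values of f(x), g(x), k(x), h(x) at a fixed x
p = 0.8                                 # a fixed costate value

def H(u, d):
    # H = p*(f + g*u + k*d) + h^2 + 2*int_0^u atanh(v) dv - gamma^2*d^2
    penalty = 2.0 * u * np.arctanh(u) + np.log(1.0 - u**2)
    return p * (f + g * u + k * d) + h**2 + penalty - gamma**2 * d**2

u_star = -np.tanh(0.5 * g * p)          # (29)
d_star = k * p / (2.0 * gamma**2)       # (29)

u_grid = np.linspace(-0.99, 0.99, 399)
d_grid = np.linspace(-2.0, 2.0, 399)

assert np.all(H(u_star, d_grid) <= H(u_star, d_star) + 1e-12)   # max over d at d*
assert np.all(H(u_grid, d_star) >= H(u_star, d_star) - 1e-12)   # min over u at u*
print("saddle value H(u*, d*) =", H(u_star, d_star))
```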
Under regularity assumptions, from [7, Th. 2.6], there exists V*(x₀) ∈ C¹ solving the HJI; then V(x₀; u*, d) ≤ V(x₀; u*, d*) ≤ V(x₀; u, d*), the zero-sum game has a value, and the pair of policies (29) is in saddle point equilibrium.

For the infinite horizon game, as T → ∞ in (27), one obtains the following Isaacs equation:

H*(x, V_x, u*, d*) = V_x'(f + g u* + k d*) + h'h + 2 ∫_0^{u*} φ^{−1}(v)' dv − γ² ‖d*‖² = 0,   V(0) = 0.   (32)

On substitution of (29) in (32), the HJI equation for constrained input systems is obtained

V_x' f − V_x' g φ( (1/2) g' V_x ) + h'h + 2 ∫_0^{−φ((1/2) g' V_x)} φ^{−1}(v)' dv + (1/4γ²) V_x' k k' V_x = 0,   V(0) = 0.   (33)

Next, it is shown that (29) remains in saddle point equilibrium as T → ∞ if the policies are sought among finite energy strategies. See [6] and [12] for unconstrained policies.

Theorem 2: Suppose that there exists a V(x) ∈ C¹ satisfying the HJI (33) and that

ẋ = f − g φ( (1/2) g' V_x ) + (1/2γ²) k k' V_x   (34)

is locally asymptotically stable; then

u*(x) = −φ( (1/2) g' V_x ),   d*(x) = (1/2γ²) k' V_x   (35)

are in saddle point equilibrium for the infinite horizon game among strategies u ∈ U, d ∈ L2[0, ∞).

Proof: The proof is made by completing the squares

J_T(u, d; x₀) = ∫_0^T ( h'h + ‖u(t)‖_q² − γ² ‖d‖² ) dt
  = ∫_0^T ( h'h + ‖u(t)‖_q² − γ² ‖d‖² ) dt + V*(x₀) − V*(x_T) + ∫_0^T V̇* dt
  = ∫_0^T ( h'h + ‖u(t)‖_q² − γ² ‖d‖² ) dt + V*(x₀) − V*(x_T) + ∫_0^T V_x*'(f + gu + kd) dt
  = ∫_0^T ( 2 ∫_{u*}^{u} φ^{−1}(v)' dv − 2 φ^{−1}(u*)'(u − u*) − γ² ‖d − d*‖² ) dt + V*(x₀) − V*(x_T)   (36)

where ‖u‖_q² = 2 ∫_0^u φ^{−1}(v)' dv and V* solves (33). Since u(t), d(t) ∈ L2[0, ∞), and since the game has a finite value as T → ∞, this implies that x(t) ∈ L2[0, ∞); therefore x(t) → 0, V*(x(∞)) = 0, and

J_∞(u, d; x₀) = V*(x₀) + ∫_0^∞ ( 2 ∫_{u*}^{u} φ^{−1}(v)' dv − 2 φ^{−1}(u*)'(u − u*) − γ² ‖d − d*‖² ) dt.   (37)

Using Lemma 3, u* and d* are in saddle point equilibrium in the class of finite energy strategies.

Since (35) satisfies the Isaacs equation, it can be shown that the feedback saddle point is unique in the sense that it is strongly time consistent and noise insensitive [6].
Example 2: Consider the following nonlinear system

ẋ = −x³ + u + d,   −1 ≤ u ≤ 1
‖z‖² = −ln( 1 − tanh²(2x³) ) + 2 ∫_0^u tanh^{−1}(v) dv.   (38)
Note that h'(x)h(x) = −ln[1 − tanh²(2x³)] ≥ 0 and is monotonically increasing in x. It follows that the HJI (33) in this case is given by

0 = V_x(−x³) + V_x tanh(−0.5 V_x) + 2 ∫_0^{tanh(−0.5 V_x)} tanh^{−1}(v) dv + (1/4γ²) V_x² − ln[ 1 − tanh²(2x³) ]
0 = V_x(−x³) + V_x tanh(−0.5 V_x) + 2 tanh(−0.5 V_x) tanh^{−1}( tanh(−0.5 V_x) ) + ln[ 1 − tanh²(−0.5 V_x) ] + (1/4γ²) V_x² − ln[ 1 − tanh²(2x³) ]
0 = V_x(−x³) + ln[ 1 − tanh²(−0.5 V_x) ] + (1/4γ²) V_x² − ln[ 1 − tanh²(2x³) ].   (39)

Assume that γ = 1; then the available storage of the HJI equation exists and is given by V(x) = x⁴, and the closed-loop dynamics

ẋ = f − g φ( (1/2) g' V_x ) + (1/2γ²) k k' V_x = x³ − tanh(2x³)   (40)

is locally asymptotically stable and, hence, the L2-gain < 1.
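The claim that V(x) = x⁴ satisfies the simplified HJI (39) for γ = 1 is easy to confirm symbolically; the illustrative sketch below also reproduces the closed-loop vector field (40).

```python
# Symbolic check that V(x) = x^4 satisfies the simplified HJI (39) of Example 2 with gamma = 1.
import sympy as sp

x = sp.symbols('x', real=True)
gamma = 1
V = x**4
Vx = sp.diff(V, x)

residual = (Vx * (-x**3)
            + sp.log(1 - sp.tanh(-Vx / 2)**2)
            + Vx**2 / (4 * gamma**2)
            - sp.log(1 - sp.tanh(2 * x**3)**2))
assert sp.simplify(residual) == 0

# Closed-loop dynamics (40): x' = f - g*tanh(g*Vx/2) + k*k*Vx/(2*gamma^2)
closed_loop = sp.simplify(-x**3 - sp.tanh(Vx / 2) + Vx / (2 * gamma**2))
print(closed_loop)   # expected: x**3 - tanh(2*x**3)
```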
Note that, for arbitrary f(x), g(x), k(x), and h(x), obtaining the analytical solution to the HJI (33) is not possible in general. In the next section, a policy iterations technique as done in Section II is proposed that reduces the solution of the HJI equation to an easier-to-solve iterative equation similar to (7).

V. SOLVING THE HJI USING POLICY ITERATIONS

To solve (33) by policy iterations, we start by showing the existence and convergence of control policy iterations on the constrained input, similar to work done on systems with no input constraints in [17]. Then policy iterations on both players are performed on the constrained control policy and the disturbance policy.

Lemma 5: Assume that the closed-loop dynamics for the constrained stabilizing controller u_j

ẋ = f(x) + g(x) u_j + k(x) d ≜ f_j(x) + k(x) d

has an L2-gain < γ with the associated available storage V_j ∈ C¹ solving

V_xj' f_j + h'h + 2 ∫_0^{u_j} φ^{−1}(v)' dv + (1/4γ²) V_xj' k k' V_xj = 0.   (41)

Furthermore, assume that (20) is zero-state observable. Then, the updated control policy u_{j+1} = −φ( (1/2) g' V_xj ) guarantees that the closed-loop dynamics ẋ = f_{j+1} + kd will have an L2-gain ≤ γ and that ẋ = f_{j+1} is asymptotically stable. It also implies that if V_{j+1} ∈ C¹, then V_{j+1} ≤ V_j.

Proof: Note that

V_xj' f_{j+1} = −h'h − 2 ∫_0^{u_j} φ^{−1}(v)' dv − (1/4γ²) V_xj' k k' V_xj + V_xj' g (u_{j+1} − u_j).

From Lemma 3, it follows that

V_xj' f_{j+1} + h'h + 2 ∫_0^{u_{j+1}} φ^{−1}(v)' dv + (1/4γ²) V_xj' k k' V_xj ≤ 0

with V_j a possible storage function for ẋ = f_{j+1}, which by zero-state observability is asymptotically stable, and the available storage for ẋ = f_{j+1} is such that V_{j+1} ≤ V_j.

Theorem 3: Assume that the value function of the game is smooth, V* ∈ C¹, and solves (33) with the property that ẋ = f − g φ((1/2) g' V_x*) + (1/2γ²) k k' V_x* is asymptotically stable. Assume also that ∀j, ẋ = f_j is asymptotically stable with V_j ∈ C¹ solving (41) and that ẋ = f + g u_j + (1/2γ²) k k' V_xj is asymptotically stable. Then j → ∞ ⇒ sup_{x∈Ω*} |V_j − V*| → 0. Moreover, V* has the largest DOV of any other constrained controller that has an L2-gain < γ.

Proof: From Lemma 5, V_{j+1} ≤ V_j. Hence, V_j converges pointwise to V*, and since Ω* is compact, uniform convergence of V_j to V* on Ω* follows by Dini's theorem [3]. Since V_{j+1} is valid on Ω_j and, hence, valid on Ω_0, V* is therefore valid for any Ω_0.

The last part of Theorem 3 implies that u* has the largest region of asymptotic stability of any other constrained controller that is finite L2-gain stable for a prescribed γ.
Combining Theorem 1 with Theorem 3, one obtains a two-loop policy iterations solution method for the HJI (33). Specifically, select u_j, and find V_j that solves (41) by inner-loop policy iterations on the disturbance as in Theorem 1, until V_j^i → V_j, by solving

V_xj^i' ( f_j + k dⁱ ) + h'h + 2 ∫_0^{u_j} φ^{−1}(v)' dv − γ² ‖dⁱ‖² = 0.   (42)

Then, by Theorem 3, use u_{j+1} = −φ( (1/2) g' V_xj ) in outer-loop policy iterations on the constrained control.
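A minimal one-dimensional sketch of this two-loop scheme is given below for a scalar saturated system: the inner loop iterates (42) and (8) on the disturbance for a fixed u_j, and the outer loop then updates the control as in Lemma 5. The system, γ, grid, and initial policy are illustrative assumptions, and the simple pointwise solve of (42) exploits the one-dimensional setting; it is not the general computational scheme of the note.

```python
# Illustrative 1-D sketch of the two-loop policy iteration for the HJI (33):
#   inner loop (on d):  Vx(x) = (gamma^2 d^2 - h'h - q(u_j)) / (f + g*u_j + k*d)   from (42)
#                       d <- k*Vx/(2*gamma^2)                                       from (8)
#   outer loop (on u):  u_{j+1} = -tanh(0.5*g*Vx)                                   as in Lemma 5
# The scalar system, gamma, grid and initial policy below are illustrative choices.
import numpy as np

gamma = 3.0
x = np.linspace(0.05, 0.8, 150)          # grid away from the origin
f, g, k = -x**3, 1.0, 1.0
hh = -np.log(1.0 - np.tanh(2.0 * x**3)**2)                    # h'h as in Example 2
q = lambda u: 2.0 * u * np.arctanh(u) + np.log(1.0 - u**2)    # input quasi-norm penalty

u = np.zeros_like(x)                      # initial admissible policy u_0 = 0
for j in range(10):                       # outer loop on the control
    fj = f + g * u
    d = np.zeros_like(x)                  # d^0 = 0
    for i in range(100):                  # inner loop on the disturbance
        Vx = (gamma**2 * d**2 - hh - q(u)) / (fj + k * d)
        d_new = k * Vx / (2.0 * gamma**2)
        if np.max(np.abs(d_new - d)) < 1e-9:
            break
        d = d_new
    u_new = -np.tanh(0.5 * g * Vx)        # outer update
    if np.max(np.abs(u_new - u)) < 1e-9:
        break
    u = u_new

print("outer iterations:", j + 1)
print("max |HJI residual| on the grid:",
      np.max(np.abs(Vx * f - Vx * g * np.tanh(0.5 * g * Vx) + hh
                    + q(-np.tanh(0.5 * g * Vx)) + Vx**2 * k**2 / (4.0 * gamma**2))))
```

In higher dimensions the pointwise division is not available, and each inner step requires solving the linear first-order equation (42), for example by the least squares approximation sketched next.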
It is important to note that one may use techniques such as neural networks to obtain a closed-form approximation of the exact solution to (42) over a domain of the state-space. See [2] for a successful implementation on the nonlinear benchmark problem.
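Because (42) is linear in V_xj^i, a linear-in-the-parameters approximator turns each inner step into a linear least squares problem over a set of collocation points. The sketch below uses a polynomial basis standing in for the neural network of [2]; the basis, data, and sample policies are illustrative assumptions.

```python
# Least squares solution of one inner step (42) with a polynomial approximator
#   V(x) ~ sum_l w_l * x^(2l),  so Vx(x) = sum_l 2l * w_l * x^(2l-1),
# imposed at collocation points x_m.  The residual of (42) is linear in w:
#   Vx(x_m)*(f_j(x_m) + k*d_i(x_m)) = gamma^2*d_i(x_m)^2 - h'h(x_m) - q(u_j(x_m)).
# The basis, data and sample policies below are illustrative choices.
import numpy as np

gamma = 3.0
xm = np.linspace(0.05, 0.8, 60)                        # collocation points
f, g, k = -xm**3, 1.0, 1.0
hh = -np.log(1.0 - np.tanh(2.0 * xm**3)**2)
q = lambda u: 2.0 * u * np.arctanh(u) + np.log(1.0 - u**2)

u_j = np.zeros_like(xm)                                # current control policy
d_i = np.zeros_like(xm)                                # current disturbance policy

powers = np.array([2, 4, 6, 8])                        # even polynomial basis x^2, ..., x^8
Phi_x = powers * xm[:, None]**(powers - 1)             # gradients of the basis at x_m

A = Phi_x * (f + g * u_j + k * d_i)[:, None]
b = gamma**2 * d_i**2 - hh - q(u_j)
w, *_ = np.linalg.lstsq(A, b, rcond=None)

Vx = Phi_x @ w
print("weights:", w)
print("max residual of (42) at collocation points:", np.max(np.abs(A @ w - b)))
```

Iterating this step as in (7)–(8), and then updating u_j as in Lemma 5, reproduces the two-loop scheme in approximate form.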
Controllers derived using (33) for a fixed γ are suboptimal H∞ controllers. Optimal H∞ control is achieved for the lowest possible γ* for which the HJI is solvable. It is straightforward to show that the DOVs of the game value functions V*_{γ1} and V*_{γ2} are such that Ω*_{γ1} ⊇ Ω*_{γ2} for γ1 ≥ γ2 > γ*, with γ* being the smallest gain for which a stabilizing solution of the HJI (33) exists.

VI. CONCLUSION

The constrained input HJI equation, along with two-player policy iterations, provides a sequence of differential equations for which approximate closed-form solutions are easier to obtain. The presented method can be combined with neural networks to obtain a least squares solution of the HJI equation, therefore obtaining a practical method to derive L2-gain optimal, or suboptimal H∞, controllers for nonlinear systems that are affine in the input and with actuator saturation. The method requires the problem to possess a smooth solution of the HJI equation. This is an extension to our earlier work on HJB equations [1].

REFERENCES
[2] M. Abu-Khalaf, F. L. Lewis, and J. Huang, "Neural network H∞ state feedback control with actuator saturation: The nonlinear benchmark problem," in Proc. 5th Int. Conf. Control and Automation, Budapest, Hungary, Jun. 2005, pp. 1–9.
[3] T. Apostol, Mathematical Analysis. Reading, MA: Addison-Wesley, 1974.
[4] J. Ball and W. Helton, "Viscosity solutions of Hamilton-Jacobi equations arising in nonlinear H∞-control," J. Math. Syst., Estimat., Control, vol. 6, no. 1, pp. 1–22, 1996.
[5] M. Bardi and I. Capuzzo-Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Boston, MA: Birkhäuser, 1997.
[6] T. Başar and G. J. Olsder, Dynamic Noncooperative Game Theory, 2nd ed. Philadelphia, PA: SIAM, 1999, vol. 23, SIAM's Classics in Applied Mathematics.
[7] T. Başar and P. Bernhard, H∞ Optimal Control and Related Minimax Design Problems. Boston, MA: Birkhäuser, 1995.
[8] R. Beard and T. McLain, "Successive Galerkin approximation algorithms for nonlinear optimal and robust control," Int. J. Control, vol. 71, no. 5, pp. 717–743, 1998.
[9] G. Bianchini, R. Genesio, A. Parenti, and A. Tesi, "Global H∞ controllers for a class of nonlinear systems," IEEE Trans. Autom. Control, vol. 49, no. 2, pp. 244–249, Feb. 2004.
[10] J. Huang and C. F. Lin, "Numerical approach to computing nonlinear H∞ control laws," J. Guid., Control, Dyna., vol. 18, no. 5, pp. 989–994, 1995.
[11] A. Isidori and A. Astolfi, "Disturbance attenuation and H∞-control via measurement feedback in nonlinear systems," IEEE Trans. Autom. Control, vol. 37, no. 9, pp. 1283–1293, Sep. 1992.
[12] D. Jacobson, "On values and strategies for infinite-time linear quadratic games," IEEE Trans. Autom. Control, vol. 22, no. 3, pp. 490–491, 1977.
[13] J. Si, A. Barto, W. Powell, and D. Wunsch, Handbook of Learning and Approximate Dynamic Programming. New York: Wiley-IEEE Press, 2004.
[14] H. Knobloch, A. Isidori, and D. Flockerzi, Topics in Control Theory. Boston, MA: Springer-Verlag, 1993.
[15] P. Lancaster and L. Rodman, Algebraic Riccati Equations. New York: Oxford Univ. Press, 1995.
[16] S. E. Lyshevski, "Role of performance functionals in control laws design," in Proc. Amer. Control Conf., 2001, pp. 2400–2405.
[17] A. J. van der Schaft, "L2-gain analysis of nonlinear systems and nonlinear state feedback H∞ control," IEEE Trans. Autom. Control, vol. 37, no. 6, pp. 770–784, Jun. 1992.
[18] ——, L2-Gain and Passivity Techniques in Nonlinear Control. London, U.K.: Springer-Verlag, 1999.
[19] J. C. Willems, "Dissipative dynamical systems part I-II: Linear systems with quadratic supply rates," Arch. Rational Mech. Anal., vol. 45, no. 1, pp. 321–393, 1972.
[20] K. Zhou and J. Doyle, Essentials of Robust Control. New York: Prentice-Hall, 1997.