State Estimation For Discrete-Time Complex Networks With Randomly Occurring Sensor Saturations and Randomly Varying Sensor Delays
Abstract: In this paper, the state estimation problem is investigated for a class of discrete time-delay nonlinear complex networks with randomly occurring phenomena from sensor measurements. The randomly occurring phenomena include randomly occurring sensor saturations (ROSSs) and randomly varying sensor delays (RVSDs) that result typically from networked environments. A novel sensor model is proposed to describe the ROSSs and the RVSDs within a unified framework via two sets of Bernoulli-distributed white sequences with known conditional probabilities. Rather than employing the commonly used Lipschitz-type function, a more general sector-like nonlinear function is used to describe the nonlinearities existing in the network. The purpose of the addressed problem is to design a state estimator that estimates the network states through the available output measurements such that, for all probabilistic sensor saturations and sensor delays, the dynamics of the estimation error is guaranteed to be exponentially mean-square stable and the effect of the exogenous disturbances on the estimation accuracy is attenuated at a given level by means of an H∞-norm. In terms of a novel Lyapunov-Krasovskii functional and the Kronecker product, sufficient conditions are established under which the addressed state estimation problem is recast as a convex optimization problem solvable via the semidefinite programming method. A simulation example is provided to show the usefulness of the proposed state estimation conditions.
Index Terms: Complex networks, randomly occurring sensor saturations, randomly varying sensor delays, state estimation.
I. INTRODUCTION
Manuscript received July 25, 2011; revised December 19, 2011; accepted
January 30, 2012. Date of publication March 1, 2012; date of current version
May 2, 2012. This work was supported in part by the Engineering and
Physical Sciences Research Council of the U.K. under Grant GR/S27658/01,
the Royal Society of the U.K., the National Natural Science Foundation
of China under Grant 61028008, Grant 61134009, Grant 61104125, and
Grant 60974030, the Natural Science Foundation of Universities in Anhui
Province of China under Grant KJ2011B030, and the Alexander von Humboldt
Foundation, Germany.
D. Ding, B. Shen, and H. Shu are with the School of Information Science and Technology, Donghua University, Shanghai 200051, China (e-mail: deruiding2010@gmail.com; shenbodh@gmail.com; hsshu@dhu.edu.cn).
Z. Wang is with the Department of Information Systems and Computing,
Brunel University, Uxbridge UB8 3PH, U.K., and also with the School of
Information Science and Technology, Donghua University, Shanghai 200051,
China (e-mail: zidong.wang@brunel.ac.uk).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNNLS.2012.2187926
In addition to the appearance of ROSSs, sensor measurement delay is another common phenomenon that occurs in a random way, especially when the sensors are connected via communication networks. Sensor delays may be induced by a variety of reasons such as an asynchronous time-division-multiplexed network, intermittent sensor failures, random congestion of packet transmissions, etc. Such phenomena are customarily referred to as randomly varying sensor delays (RVSDs); see [31] and [32] for more details. In many cases, the RVSDs are a source of instability and performance deterioration of a complex network equipped with a large number of sensors. One of the most popular ways to describe the RVSDs is to use a Bernoulli-distributed (binary switching) white sequence specified by a conditional probability distribution in the sensor output. This approach was first proposed in [33] to deal with the optimal recursive estimation problem, and it has recently been used in [32] for filtering problems and in [34] for control designs. Obviously, to reflect the network reality, it makes practical sense to consider both the ROSSs and the RVSDs, whose occurrence probabilities can be estimated via statistical tests. To date, to the best of our knowledge, the estimation problem for complex networks with both ROSSs and RVSDs remains an open yet challenging issue, and the main purpose of this paper is to narrow such a gap. It is worth pointing out that the main difficulty lies in how to establish a unified framework to account for the two phenomena of ROSSs and RVSDs.
Summarizing the above discussions, the focus of this paper is on the state estimation problem for a class of discrete time-delay complex networks with randomly occurring phenomena, namely ROSSs and RVSDs, that result typically from networked environments. Two sets of Bernoulli-distributed white sequences with known conditional probabilities are introduced to describe the ROSSs and the RVSDs within a unified framework. A general sector-like nonlinear function is employed to describe the inherently nonlinear nature of the complex networks. By employing the Lyapunov stability theory combined with the stochastic analysis approach, a delay-dependent criterion is established that guarantees the existence of the desired estimator gains, and the explicit expression of such estimator gains is then characterized in terms of the solution to a convex optimization problem via the semidefinite programming method. Moreover, a simulation example is provided to show the effectiveness of the proposed estimator design scheme. The main contribution of this paper is twofold: 1) a novel sensor model is established to account for both the ROSSs and the RVSDs in a unified framework; and 2) based on this sensor model, an estimator design approach is proposed that ensures the error dynamics is exponentially mean-square stable and the H∞ performance constraint is satisfied.
The rest of this paper is organized as follows. In Section II, a class of discrete time-delay complex networks with both ROSSs and RVSDs is presented. In Section III, by employing the Lyapunov stability theory, sufficient conditions are established in the form of linear matrix inequalities (LMIs), and the explicit expression of the estimator gains is then given. In Section IV, an example is presented to demonstrate the effectiveness of the proposed estimator design scheme. Finally, conclusions are drawn in Section V.
II. PROBLEM FORMULATION

Consider the following discrete time-delay complex network consisting of N coupled nodes:

x_i(k+1) = f(x_i(k)) + g(x_i(k − τ(k))) + Σ_{j=1}^{N} w_{ij} Γ x_j(k) + L_i v_1(k),   (1)

z_i(k) = M x_i(k),   i = 1, 2, ..., N,   (2)

where x_i(k) is the state vector of the i-th node, z_i(k) is the output to be estimated, f(·) and g(·) are nonlinear vector-valued functions, τ(k) is a time-varying delay satisfying τ_m ≤ τ(k) ≤ τ_M, W = [w_{ij}]_{N×N} is the coupling configuration matrix, Γ is the inner-coupling matrix, v_1(k) is an exogenous disturbance, and L_i and M are known matrices of appropriate dimensions.
For sensor i (i = 1, 2, ..., N), the measurement subject to both ROSSs and RVSDs is described by

y_i(k) = ξ_i(k) δ_i(k) σ(C x_i(k)) + (1 − ξ_i(k)) δ_i(k) C x_i(k) + ξ_i(k)(1 − δ_i(k)) σ(C x_i(k − d)) + (1 − ξ_i(k))(1 − δ_i(k)) C x_i(k − d) + G_i v_2(k),   (3)

where σ(·) is the saturation function, C and G_i are known matrices, d is a constant measurement delay, v_2(k) is the measurement noise, and ξ_i(k) and δ_i(k) are Bernoulli-distributed white sequences taking values in {0, 1} with known probabilities Prob{ξ_i(k) = 1} = ξ̄_i and Prob{δ_i(k) = 1} = δ̄_i. Here, ξ_i(k) governs whether the output of sensor i saturates, and δ_i(k) governs whether the current or the d-step-delayed measurement is transmitted.
Remark 3: The measurement model proposed in (3) provides a novel unified framework that accounts for both ROSSs and RVSDs. The stochastic variable ξ_i(k) characterizes the random nature of the sensor saturation, whereas the stochastic variable δ_i(k) describes the probabilistic sensor delay. By combining these two stochastic variables, model (3) covers four different situations: 1) when δ_i(k) = 1 and ξ_i(k) = 0, sensor i works normally; 2) when δ_i(k) = 1 and ξ_i(k) = 1, model (3) reduces to y_i(k) = σ(C x_i(k)) + G_i v_2(k), which means that the measurement received from sensor i is saturated; 3) when δ_i(k) = 0 and ξ_i(k) = 0, the measurement taken d time instants earlier is employed by estimator i instead of the current one; and 4) when δ_i(k) = 0 and ξ_i(k) = 1, the measurement is not only delayed but also saturated before it enters estimator i. In addition, it is easy to observe that the time delay in the measurement takes the random values 0 (when δ_i(k) = 1) and d (when δ_i(k) = 0). Exactly this kind of phenomenon is referred to as randomly varying sensor delays.
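To make the four situations above concrete, the following minimal Python sketch simulates the measurement of a single sensor under the unified model. The probabilities p_sat and p_delay, the saturation level, the output matrix, and the delay value are illustrative assumptions chosen for the sketch, not parameters taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholders (not parameters from the paper).
C = np.array([[0.8, 0.6]])     # output matrix of one sensor, 1 x 2
sat_level = 1.0                # assumed saturation level of sigma(.)
p_sat = 0.3                    # assumed probability that the output saturates
p_delay = 0.2                  # assumed probability that the measurement is delayed
d = 2                          # fixed measurement delay (steps)

def saturate(u, level):
    """Component-wise saturation function sigma(u)."""
    return np.clip(u, -level, level)

def sensor_output(x_hist, k):
    """Measurement at time k, given the stored state history x_hist[0..k]."""
    saturated = rng.random() < p_sat      # Bernoulli indicator: saturation occurs
    delayed = rng.random() < p_delay      # Bernoulli indicator: d-step delay occurs
    x_used = x_hist[max(k - d, 0)] if delayed else x_hist[k]
    y = C @ x_used
    return saturate(y, sat_level) if saturated else y

x_hist = [rng.standard_normal(2) for _ in range(10)]   # toy state history
print([sensor_output(x_hist, k) for k in range(10)])

Over many calls, the four combinations of the two indicators occur with the corresponding product probabilities, exactly as enumerated in Remark 3.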
Based on the measurement y_i(k), we construct the following state estimator for node i:

x̂_i(k+1) = f(x̂_i(k)) + g(x̂_i(k − τ(k))) + K_i [ y_i(k) − C x̂_i(k) ],   (4)

ẑ_i(k) = M x̂_i(k),   (5)

where x̂_i(k) is the estimate of the state x_i(k), ẑ_i(k) is the estimate of the output z_i(k), and K_i is the estimator gain matrix to be designed.
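Written procedurally, one update of the node estimator (4) amounts to the short routine below. This is only a schematic sketch under assumed shapes: the callables f and g and the matrices K and C are passed in, the delay value is treated as known at run time, and the history is clamped at its first entry.

import numpy as np

def estimator_step(x_hat_hist, y_k, K, C, f, g, tau, k):
    """One update of (4): x_hat(k+1) = f(x_hat(k)) + g(x_hat(k - tau)) + K (y(k) - C x_hat(k)).

    x_hat_hist : list of past estimates x_hat(0), ..., x_hat(k)
    """
    x_hat_k = x_hat_hist[k]
    x_hat_delayed = x_hat_hist[max(k - tau, 0)]   # clamp at the initial estimate
    innovation = y_k - C @ x_hat_k
    return f(x_hat_k) + g(x_hat_delayed) + K @ innovation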
For notational simplicity, set x_k = [x_1^T(k), x_2^T(k), ..., x_N^T(k)]^T, x̂_k = [x̂_1^T(k), ..., x̂_N^T(k)]^T, e_k = x_k − x̂_k, z̃_k = z_k − ẑ_k, f̃_k = [(f(x_1(k)) − f(x̂_1(k)))^T, ..., (f(x_N(k)) − f(x̂_N(k)))^T]^T, and define g̃_k and σ̄(C̄x_k) analogously. Furthermore, let K̄ = diag{K_1, K_2, ..., K_N}, C̄ = I_N ⊗ C, M̄ = I_N ⊗ M, L̄ = [L_1^T, L_2^T, ..., L_N^T]^T, Ḡ = [G_1^T, G_2^T, ..., G_N^T]^T, and E_i = diag{0, ..., 0, I, 0, ..., 0}, where the identity block occupies the i-th diagonal position. Subtracting (4) from (1) and substituting the measurement model (3), the estimation error dynamics are obtained as follows:

e_{k+1} = f̃_k + g̃_{k−τ(k)} − K̄ C̄ e_k + (W ⊗ Γ + K̄ C̄) x_k
          − K̄ Σ_{i=1}^{N} [ ξ_i(k) δ_i(k) E_i σ̄(C̄ x_k) + (1 − ξ_i(k)) δ_i(k) E_i C̄ x_k
          + ξ_i(k)(1 − δ_i(k)) E_i σ̄(C̄ x_{k−d}) + (1 − ξ_i(k))(1 − δ_i(k)) E_i C̄ x_{k−d} ]
          + L̄ v_1(k) − K̄ Ḡ v_2(k),   (6)

and the output estimation error satisfies z̃_k = M̄ e_k.
Setting η_k = [x_k^T  e_k^T]^T, f̄_k = [f^T(x_k)  f̃_k^T]^T, ḡ_k = [g^T(x_k)  g̃_k^T]^T, v_k = [v_1^T(k)  v_2^T(k)]^T, and S = [I  0] (so that x_k = S η_k), and splitting each product of the Bernoulli variables into its known mean and a zero-mean fluctuation, the network (1)-(2), the measurements (3), and the estimator (4)-(5) can be combined into the augmented system

η_{k+1} = W_1 η_k + W_2 η_{k−d} + f̄_k + ḡ_{k−τ(k)} + H̄ Ā σ̄(C̄ S η_k) + H̄ Ū σ̄(C̄ S η_{k−d})
          + Σ_{i=1}^{N} (ξ_i(k)δ_i(k) − ᾱ_i) G_i σ̄(C̄ S η_k) + Σ_{i=1}^{N} ((1 − ξ_i(k))δ_i(k) − β̄_i) G_i C̄ S η_k
          + Σ_{i=1}^{N} (ξ_i(k)(1 − δ_i(k)) − μ̄_i) G_i σ̄(C̄ S η_{k−d}) + Σ_{i=1}^{N} ((1 − ξ_i(k))(1 − δ_i(k)) − ν̄_i) G_i C̄ S η_{k−d}
          + L̃ v_k,   (7)

z̃_k = M̃ η_k,

with the initial condition η_i = φ_i for i ∈ [−τ_M, 0], where

ᾱ_i = ξ̄_i δ̄_i,   β̄_i = (1 − ξ̄_i) δ̄_i,   μ̄_i = ξ̄_i (1 − δ̄_i),   ν̄_i = (1 − ξ̄_i)(1 − δ̄_i),
Ā = diag{ᾱ_1 I, ..., ᾱ_N I},   B̄ = diag{β̄_1 I, ..., β̄_N I},   Ū = diag{μ̄_1 I, ..., μ̄_N I},   V̄ = diag{ν̄_1 I, ..., ν̄_N I},
H̄ = [0; −K̄],   G_i = [0; −K̄ E_i],   M̃ = [0  M̄],
W_1 = [ W ⊗ Γ,  0;  W ⊗ Γ + K̄(I − B̄)C̄,  −K̄ C̄ ],
W_2 = [ 0,  0;  −K̄ V̄ C̄,  0 ],
L̃ = [ L̄,  0;  L̄,  −K̄ Ḡ ].
As analyzed in [30] and [35], there exists a diagonal matrix Θ satisfying 0 < Θ ≤ I such that the saturation function in (7) obeys the sector-like condition

[σ̄(C̄ S η_k) − Θ̄ C̄ S η_k]^T [σ̄(C̄ S η_k) − C̄ S η_k] ≤ 0,   Θ̄ = I_N ⊗ Θ,   (8)

and the same condition holds with η_k replaced by η_{k−d}. Similarly, the sector-bounded nonlinearity ḡ_k satisfies

[ḡ_k − (I ⊗ Φ_1^g) η_k]^T [ḡ_k − (I ⊗ Φ_2^g) η_k] ≤ 0,   (9)

and an analogous condition holds for f̄_k with the bound matrices Φ_1^f and Φ_2^f.
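As a quick numerical illustration of what such a sector-type bound requires, the sketch below samples random points and checks the node-level condition [f(x) − f(y) − Φ1(x − y)]^T [f(x) − f(y) − Φ2(x − y)] ≤ 0 for a toy nonlinearity. The function f and the matrices Phi1 and Phi2 here are assumptions chosen only for the demonstration, not the quantities used in the paper.

import numpy as np

rng = np.random.default_rng(1)

Phi1 = np.array([[0.5, 0.0], [0.0, 0.3]])   # assumed lower sector bound
Phi2 = np.array([[0.9, 0.0], [0.0, 0.6]])   # assumed upper sector bound

def f(x):
    # Toy nonlinearity whose slopes stay between Phi1 and Phi2 componentwise.
    return np.array([0.7 * x[0] + 0.2 * np.tanh(x[0]),
                     0.45 * x[1] + 0.15 * np.tanh(x[1])])

def sector_ok(x, y):
    dxy = x - y
    df = f(x) - f(y)
    return float((df - Phi1 @ dxy) @ (df - Phi2 @ dxy)) <= 1e-12

samples = [(rng.standard_normal(2), rng.standard_normal(2)) for _ in range(1000)]
print(all(sector_ok(x, y) for x, y in samples))   # expected: True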
Definition 1: The augmented system (7) with v_k = 0 is said to be exponentially mean-square stable if there exist constants α > 0 and 0 < β < 1 such that

E{||η_k||^2} ≤ α β^k  max_{i ∈ [−τ_M, 0]} E{||φ_i||^2},   for all k ∈ N.

The purpose of this paper is to design the state estimator (4) such that, for all admissible ROSSs, RVSDs, and sector-bounded nonlinearities, the following two requirements are met: 1) the augmented system (7) with v_k = 0 is exponentially mean-square stable; and 2) under zero initial conditions, for a given disturbance attenuation level γ > 0 and all nonzero v_k,

(1/N) Σ_{k=0}^{∞} E{||z̃_k||^2} ≤ γ^2 Σ_{k=0}^{∞} ||v_k||^2.   (10)
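For intuition, the H∞ requirement (10) can be checked empirically on simulated data by comparing the two energies. The sketch below assumes the sequences z_tilde and v have already been produced by a simulation; the sequences used here are synthetic placeholders, while gamma = 0.92 and three nodes match the example given later in the paper.

import numpy as np

def hinf_ratio(z_tilde, v, n_nodes):
    """Empirical counterpart of (10): (1/N) sum ||z_tilde_k||^2 over sum ||v_k||^2."""
    num = sum(float(zk @ zk) for zk in z_tilde) / n_nodes
    den = sum(float(vk @ vk) for vk in v)
    return num / den

# Hypothetical simulated sequences (placeholders for actual simulation output).
rng = np.random.default_rng(2)
v = [rng.standard_normal(2) * 0.5 ** k for k in range(50)]          # decaying disturbance
z_tilde = [0.3 * vk + 0.05 * rng.standard_normal(2) for vk in v]    # stand-in error output

gamma = 0.92
print(hinf_ratio(z_tilde, v, n_nodes=3) <= gamma ** 2)   # expected: True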
III. MAIN RESULTS

In this section, a sufficient condition is first established under which the estimation error dynamics is exponentially mean-square stable; the H∞ performance is then analyzed, and finally the estimator gains are designed.

Theorem 1: Let the estimator gain matrices K_i (i = 1, 2, ..., N) be given. The augmented system (7) with v_k = 0 is exponentially mean-square stable if there exist positive definite matrices P_1 = diag{I ⊗ Q_1, I ⊗ Q_2}, P_2 = diag{I ⊗ Q_3, I ⊗ Q_4}, P_3 = diag{I ⊗ Q_5, I ⊗ Q_6} and positive scalars ε_1, ε_2, ε_3, ε_4 such that the 7 × 7 block linear matrix inequality Π_1 < 0 in (11) is satisfied. Its leading diagonal block takes the form

Π_11 = W_1^T P_1 W_1 − P_1 + P_2 + (τ_M − τ_m + 1) P_3 + Σ_{i=1}^{N} (·) S^T C̄^T G_i^T P_1 G_i C̄ S + (terms from the sector conditions on f̄_k and the saturation),

and the remaining blocks, defined in (12), are built from W_1, W_2, H̄, G_i, C̄, S, the sector-bound matrices Φ_1^f, Φ_2^f, Φ_1^g, Φ_2^g, the saturation matrix Θ, and the means, variances, and cross-covariances of the Bernoulli indicator products ξ_i(k)δ_i(k), (1 − ξ_i(k))δ_i(k), ξ_i(k)(1 − δ_i(k)), and (1 − ξ_i(k))(1 − δ_i(k)).
Proof: Choose the following Lyapunov-Krasovskii functional for system (7):

V(k) = V_1(k) + V_2(k) + V_3(k),

where

V_1(k) = η_k^T P_1 η_k + Σ_{i=k−d}^{k−1} η_i^T P_2 η_i,
V_2(k) = Σ_{i=k−τ(k)}^{k−1} η_i^T P_3 η_i,
V_3(k) = Σ_{j=k−τ_M+1}^{k−τ_m} Σ_{i=j}^{k−1} η_i^T P_3 η_i.
Calculating the difference of V(k) along the trajectory of (7) and taking the mathematical expectation, the increments E{ΔV_1(k)}, E{ΔV_2(k)}, and E{ΔV_3(k)} are evaluated term by term. In particular, the zero-mean fluctuation terms in (7) vanish in expectation while their second-order moments produce the variance-weighted terms of the form G_i^T P_1 G_i appearing in the LMI blocks, and the double-sum structure of V_3(k) yields the factor (τ_M − τ_m + 1) multiplying P_3. Combining the resulting bound with the sector conditions (8) and (9) (together with their counterpart for f̄_k) through the scalars ε_1, ..., ε_4 (the S-procedure), one arrives at

E{ΔV(k)} ≤ E{ζ_k^T Π_1 ζ_k},

where ζ_k = [η_k^T  η_{k−d}^T  η_{k−τ(k)}^T  f̄_k^T  ḡ_{k−τ(k)}^T  σ̄^T(C̄ S η_k)  σ̄^T(C̄ S η_{k−d})]^T. Hence, if (11) holds, then E{ΔV(k)} ≤ −λ_0 E{||η_k||^2} for some λ_0 > 0. By a standard argument based on the structure of the functional V(k), this inequality implies the existence of scalars α > 0 and 0 < β < 1 such that E{||η_k||^2} ≤ α β^k max_{i ∈ [−τ_M, 0]} E{||φ_i||^2}. According to Definition 1, the augmented system (7) with v_k = 0 is exponentially mean-square stable, which completes the proof.
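Each of the conditions in this section is a semidefinite feasibility problem and can be handled by any SDP solver. Purely as an illustration of the mechanics (this is not the LMI (11) of Theorem 1, whose blocks are far larger), the sketch below checks a generic discrete-time Lyapunov inequality A^T P A − P < 0 with the cvxpy package; the matrix A is an arbitrary Schur-stable stand-in.

import cvxpy as cp
import numpy as np

# Stand-in system matrix (not from the paper), Schur-stable by construction.
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
n = A.shape[0]

# Decision variable: P = P^T > 0 such that A^T P A - P < 0.
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P @ A - P << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)        # SCS handles semidefinite cones
print(prob.status)               # "optimal" signals feasibility
print(P.value)

The paper itself uses YALMIP with SeDuMi under MATLAB for the same purpose, as reported in the example section below.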
Theorem 2: Let the estimator gain matrices K_i (i = 1, 2, ..., N) and the disturbance attenuation level γ > 0 be given. The augmented system (7) is exponentially mean-square stable and satisfies the H∞ performance requirement (10) for all nonzero v_k if there exist positive definite matrices P_1, P_2, P_3 (structured as in Theorem 1) and positive scalars ε_1, ε_2, ε_3, ε_4 such that the LMI Π_2 < 0 in (34) holds, where Π_2 is obtained from Π_1 by adding the term (1/N) M̃^T M̃ to the block Π_11 and by appending one further block row and column associated with the disturbance v_k, whose entries involve L̃^T P_1 W_1, L̃^T P_1 W_2, L̃^T P_1 H̄, and the diagonal block L̃^T P_1 L̃ − γ^2 I.

Proof: The exponential mean-square stability for v_k = 0 follows directly from Theorem 1. To analyze the H∞ performance of (7), introduce the index

J(s) = E{ Σ_{k=0}^{s} [ (1/N) ||z̃_k||^2 − γ^2 ||v_k||^2 ] }.   (37)

Under the zero initial condition, adding and subtracting E{ΔV(k)} and proceeding as in the proof of Theorem 1 gives

J(s) ≤ E{ Σ_{k=0}^{s} ζ̄_k^T Π_2 ζ̄_k } < 0,

where ζ̄_k = [ζ_k^T  v_k^T]^T. Letting s → ∞ yields (10), which completes the proof.
Theorem 3: Let the disturbance attenuation level γ > 0 be given. The estimation error dynamics of the complex network (1) with the measurement model (3) and the estimator (4) is exponentially mean-square stable and satisfies the H∞ performance requirement (10) if there exist positive definite matrices Q_1, Q_2, ..., Q_6, matrices Y_i (i = 1, 2, ..., N), and positive scalars ε_1, ε_2, ε_3, ε_4 such that

[ Ω,     R^T,    F_1^T,  F_2^T,  F_3^T,  F_4^T;
  *,    −Q̄_2,    0,      0,      0,      0;
  *,     *,     −Q̄_2,    0,      0,      0;
  *,     *,      *,     −Q̄_2,    0,      0;
  *,     *,      *,      *,     −Q̄_2,    0;
  *,     *,      *,      *,      *,     −Q̄_2 ]  < 0,   (39)

where Y = diag{Y_1, Y_2, ..., Y_N}, Q̄_2 = I ⊗ Q_2, Ω = [Ω_{pq}] is a symmetric 10 × 10 block matrix obtained from Π_2 through Schur complement operations and the change of variables Y_i = Q_2 K_i, R collects the row blocks that contain Y, and F_1, ..., F_4 (built from the matrices Z_1, ..., Z_4, which stack terms of the form (variance parameter)^{1/2} E_i Y^T over the nodes i = 1, ..., N) account for the four zero-mean fluctuation sums in (7); the other parameters are defined as in Theorem 1. Moreover, if the above inequality is feasible, the desired state estimator gains can be determined by

K_i = Q_2^{-1} Y_i,   i = 1, 2, ..., N.   (40)
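Once the LMI (39) has been solved (the example below uses YALMIP and SeDuMi under MATLAB), the gain-recovery step (40) is a plain linear solve. The following sketch shows that step in Python with NumPy; the numerical values of Q2 and Y below are hypothetical placeholders, not the solutions reported in the example.

import numpy as np

# Hypothetical stand-ins for the LMI solution (placeholders only).
Q2 = np.array([[5.0, 2.0],
               [2.0, 3.0]])               # Q_2 > 0 returned by the SDP solver
Y = [np.array([[0.00, 0.35],
               [0.30, 0.04]]),            # Y_1
     np.array([[0.05, 0.33],
               [0.34, 0.06]])]            # Y_2 (one matrix per node)

# Gain recovery (40): K_i = Q_2^{-1} Y_i, via a linear solve instead of an explicit inverse.
K = [np.linalg.solve(Q2, Yi) for Yi in Y]
for i, Ki in enumerate(K, start=1):
    print(f"K_{i} =\n{Ki}")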
Fig. 1. [Simulation result plotted versus Time (k).]
IV. AN ILLUSTRATIVE EXAMPLE

In this section, a simulation example is presented to illustrate the effectiveness of the proposed state estimation scheme. Consider the complex network (1) with three nodes, where the coupling configuration matrix is

W = [ −0.6  0.6  0;  0.6  −1.1  0.5;  0  0.5  −0.5 ]

and the inner-coupling matrix is given as Γ = diag{0.1, 0.1}. The disturbance matrices and the output matrix are

L_1 = [0.04  0.03]^T,   L_2 = [0.02  0.04]^T,   L_3 = [0.02  0.03]^T,   M = [0.70  0.65].
Fig. 2. [Simulation result plotted versus Time (k); curve labels: Sensor 1, Sensor 2.]
The nonlinear vector-valued functions f(x_i(k)) and g(x_i(k)) are chosen as

f(x_i(k)) = [ 0.6 x_{i1}(k) + 0.3 x_{i2}(k) + tanh(0.3 x_{i1}(k));   0.6 x_{i2}(k) − tanh(0.2 x_{i2}(k)) ],

g(x_i(k)) = [ 0.02 x_{i1}(k) + 0.06 x_{i2}(k);   0.03 x_{i1}(k) + 0.02 x_{i2}(k) + tanh(0.01 x_{i1}(k)) ].

Then it is easy to see that the sector-bounded constraint on f and g can be met with

Φ_1^f = [0.6  0.3;  0  0.4],   Φ_2^f = [0.3  0.3;  0  0.6],
Φ_1^g = [0.02  0.06;  0.03  0.02],   Φ_2^g = [0.02  0.06;  0.02  0.02].
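For readers who wish to reproduce the simulation, the two vector fields above translate directly into code. The following minimal per-node sketch simply implements the expressions displayed above, with x = (x_{i1}, x_{i2}).

import numpy as np

def f(x):
    """Node nonlinearity f(x_i) of the example."""
    return np.array([0.6 * x[0] + 0.3 * x[1] + np.tanh(0.3 * x[0]),
                     0.6 * x[1] - np.tanh(0.2 * x[1])])

def g(x):
    """Delayed node nonlinearity g(x_i) of the example."""
    return np.array([0.02 * x[0] + 0.06 * x[1],
                     0.03 * x[0] + 0.02 * x[1] + np.tanh(0.01 * x[0])])

print(f(np.array([0.5, -0.2])), g(np.array([0.5, -0.2])))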
Consider the sensors with both the ROSSs and the RVSDs described by (3), with the following parameters:

G_1 = [0.03  0.02]^T,   G_2 = [0.02  0.04]^T,   G_3 = [0.06  0.02]^T,   C = [0.8  0.6;  0.9  0.4].

Furthermore, take τ_M = 3, let the disturbance attenuation level be γ = 0.92, and take the diagonal matrix appearing in the saturation condition as diag{0.7, 0.7}. By using MATLAB (with YALMIP 3.0 and SeDuMi 1.1), we solve LMI (39) and obtain a set of feasible solutions as follows:

ε_1 = 14.6065,   Q_1 = [6.6408  2.5172;  2.5172  1.3932],
ε_2 = 53.3398,   Q_2 = [5.4508  1.9621;  1.9621  3.0666],
ε_3 = 0.3624,    Q_3 = [0.2706  0.0581;  0.0581  0.1037],
ε_4 = 2.1997,    Q_4 = [0.1516  0.1421;  0.1421  0.3701],
Q_5 = [0.5182  0.1538;  0.1538  0.0527],
Q_6 = [0.4129  0.1257;  0.1257  0.0975],
Y_1 = [0.0023  0.3538;  0.3023  0.0403],
Y_2 = [0.0467  0.3267;  0.3426  0.0630],
Y_3 = [0.0162  0.3243;  0.2813  0.0882].

According to (40), the desired estimator gain matrices are then computed as K_i = Q_2^{-1} Y_i for i = 1, 2, 3, and the simulation results are shown in Figs. 1-4.

Fig. 3. [Simulation result plotted versus Time (k); curve label: Sensor 3.]

Fig. 4. [Simulation result plotted versus Time (k).]

V. CONCLUSION

In this paper, the state estimation problem has been investigated for a class of discrete time-delay nonlinear complex networks with randomly occurring sensor saturations and randomly varying sensor delays. A unified measurement model has been proposed to describe the ROSSs and the RVSDs by means of two sets of Bernoulli-distributed white sequences with known conditional probabilities. By constructing a novel Lyapunov-Krasovskii functional and employing the stochastic analysis approach, sufficient conditions have been derived under which the estimation error dynamics is exponentially mean-square stable while satisfying a prescribed H∞ performance requirement, and the desired estimator gains have been obtained by solving a convex optimization problem via the semidefinite programming method. A simulation example has been given to demonstrate the effectiveness of the proposed estimator design scheme.

REFERENCES
[1] P. Erdős and A. Rényi, "On random graphs," Publ. Math., vol. 6, pp. 290-297, Jan. 1959.
[2] P. Erdős and A. Rényi, "On the evolution of random graphs," Publ. Math. Inst. Hung. Acad. Sci., vol. 5, pp. 17-61, Jan. 1960.
[3] A. L. Barabási and R. Albert, "Emergence of scaling in random networks," Science, vol. 286, no. 5439, pp. 509-512, Oct. 1999.
[4] D. J. Watts and S. H. Strogatz, "Collective dynamics of small-world networks," Nature, vol. 393, no. 6684, pp. 440-442, Jun. 1998.
[5] R. Albert and A. L. Barabási, "Statistical mechanics of complex networks," Rev. Modern Phys., vol. 74, no. 1, pp. 47-97, Jan. 2002.
[6] A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno, and C. S. Zhou, "Synchronization in complex networks," Phys. Rep., vol. 469, no. 3, pp. 93-153, Dec. 2008.
[7] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D. U. Hwang, "Complex networks: Structure and dynamics," Phys. Rep., vol. 424, nos. 4-5, pp. 175-308, Feb. 2006.
[8] H. Gao, J. Lam, and G. Chen, "New criteria for synchronization stability of general complex dynamical networks with coupling delays," Phys. Lett. A, vol. 360, no. 2, pp. 263-273, Dec. 2006.
[9] W. He and J. Cao, "Exponential synchronization of hybrid coupled networks with delayed coupling," IEEE Trans. Neural Netw., vol. 21, no. 4, pp. 571-583, Apr. 2010.
[10] H. R. Karimi and H. Gao, "New delay-dependent exponential H∞ synchronization for uncertain neural networks with mixed time delays," IEEE Trans. Syst., Man, Cybern. B: Cybern., vol. 40, no. 1, pp. 173-185, Feb. 2010.
[11] X. Li, X. F. Wang, and G. R. Chen, "Pinning a complex dynamical network to its equilibrium," IEEE Trans. Circuits Syst. I, vol. 51, no. 10, pp. 2074-2087, Oct. 2004.
[12] J. Lu and D. W. C. Ho, "Globally exponential synchronization and synchronizability for general dynamical networks," IEEE Trans. Syst., Man, Cybern. B: Cybern., vol. 40, no. 2, pp. 350-361, Apr. 2010.
[13] J. Lu, D. W. C. Ho, J. Cao, and J. Kurths, "Exponential synchronization of linearly coupled neural networks with impulsive disturbances," IEEE Trans. Neural Netw., vol. 22, no. 2, pp. 329-335, Feb. 2011.
[14] M. Porfiri and M. Bernardo, "Criteria for global pinning-controllability of complex networks," Automatica, vol. 44, no. 12, pp. 3100-3106, Dec. 2008.
[15] Q. Song and J. Cao, "On pinning synchronization of directed and undirected complex dynamical networks," IEEE Trans. Circuits Syst. I, vol. 57, no. 3, pp. 672-680, Mar. 2010.
[16] Y. Tang, J. Fang, M. Xia, and X. Gu, "Synchronization of Takagi-Sugeno fuzzy stochastic discrete-time complex networks with mixed time-varying delays," Appl. Math. Modell., vol. 34, no. 4, pp. 843-855, Apr. 2010.
[17] M. Ursino, E. Magosso, and C. Cuppini, "Recognition of abstract objects via neural oscillators: Interaction among topological organization, associative memory and gamma band synchronization," IEEE Trans. Neural Netw., vol. 20, no. 2, pp. 316-335, Feb. 2009.
[18] X. Wang, X. Li, and J. Lu, "Control and flocking of networked systems via pinning," IEEE Circuits Syst. Mag., vol. 10, no. 3, pp. 83-91, Sep. 2010.
[19] Y. Wang, M. Yang, H. O. Wang, and Z. Guan, "Robust stabilization of complex switched networks with parametric uncertainties and delays via impulsive control," IEEE Trans. Circuits Syst. I, vol. 56, no. 9, pp. 2100-2108, Sep. 2009.
[20] L. Wu, Z. Feng, and W. X. Zheng, "Exponential stability analysis for delayed neural networks with switching parameters: Average dwell time approach," IEEE Trans. Neural Netw., vol. 21, no. 9, pp. 1396-1407, Sep. 2010.
[21] J. Xiang and G. Chen, "On the V-stability of complex dynamical networks," Automatica, vol. 43, no. 6, pp. 1049-1057, Jun. 2007.
[22] W. Yu, G. Chen, and J. Lü, "On pinning synchronization of complex dynamical networks," Automatica, vol. 45, no. 2, pp. 429-435, Feb. 2009.
[23] J. Zhou and T. Chen, "Synchronization in general complex delayed dynamical networks," IEEE Trans. Circuits Syst. I, vol. 53, no. 4, pp. 733-744, Mar. 2006.
[24] Z. Duan, J. Wang, G. Chen, and L. Huang, "Stability analysis and decentralized control of a class of complex dynamical networks," Automatica, vol. 44, no. 4, pp. 1028-1035, Apr. 2008.
[25] H. Li, H. Gao, and P. Shi, "New passivity analysis for neural networks with discrete and distributed delays," IEEE Trans. Neural Netw., vol. 21, no. 11, pp. 1842-1847, Nov. 2010.
[26] Y. Liu, Z. Wang, J. Liang, and X. Liu, "Synchronization and state estimation for discrete-time complex networks with distributed delays," IEEE Trans. Syst., Man, Cybern. B: Cybern., vol. 38, no. 5, pp. 1314-1325, Oct. 2008.
[27] Y. He, Q. G. Wang, M. Wu, and C. Lin, "Delay-dependent state estimation for delayed neural networks," IEEE Trans. Neural Netw., vol. 17, no. 4, pp. 1077-1081, Jul. 2006.
[28] B. Shen, Z. Wang, and X. Liu, "Bounded H∞ synchronization and state estimation for discrete time-varying stochastic complex networks over a finite-horizon," IEEE Trans. Neural Netw., vol. 22, no. 1, pp. 145-157, Jan. 2011.
[29] G. Garcia, S. Tarbouriech, J. M. G. Silva, and D. Eckhard, "Finite L2 gain and internal stabilisation of linear systems subject to actuator and sensor saturations," IET Control Theory Appl., vol. 3, no. 7, pp. 799-812, Jul. 2009.
[30] F. Yang and Y. Li, "Set-membership filtering for systems with sensor saturation," Automatica, vol. 45, no. 8, pp. 1896-1902, Aug. 2009.
[31] E. Yaz and A. Ray, "Linear unbiased state estimation for random models with sensor delay," in Proc. IEEE Conf. Decis. Control, vol. 1, Kobe, Japan, Dec. 1996, pp. 47-52.
[32] Z. Wang, D. W. C. Ho, and X. Liu, "Robust filtering under randomly varying sensor delay with variance constraints," IEEE Trans. Circuits Syst. II, vol. 51, no. 6, pp. 320-326, Jun. 2004.
[33] N. Nahi, "Optimal recursive estimation with uncertain observation," IEEE Trans. Inf. Theory, vol. 15, no. 4, pp. 457-462, Jul. 1969.
[34] C. Lin, Z. Wang, and F. Yang, "Observer-based networked control for continuous-time systems with random sensor delays," Automatica, vol. 45, no. 2, pp. 578-584, Feb. 2009.
[35] Z. Q. Zuo, D. W. C. Ho, and Y. J. Wang, "Fault tolerant control for singular systems with actuator saturation and nonlinear perturbation," Automatica, vol. 46, no. 3, pp. 569-576, Mar. 2010.
[36] Z. Wang, Y. Liu, X. Liu, and Y. Shi, "Robust state estimation for discrete-time stochastic neural networks with probabilistic measurement delays," Neurocomputing, vol. 74, nos. 1-3, pp. 256-264, Dec. 2010.
Derui Ding received the B.Sc. degree in industrial engineering and the M.Sc. degree in detection technology and automation equipment from Anhui
Polytechnic University, Wuhu, China, in 2004 and
2007, respectively. He is currently pursuing the
Ph.D. degree in control science and engineering with
Donghua University, Shanghai, China.
He is a Lecturer with Anhui Polytechnic University. He is an active reviewer for many international
journals. His current research interests include nonlinear stochastic control and filtering, as well as
complex networks and sensor networks.
Huisheng Shu received the B.Sc. degree in mathematics from Anhui Normal University, Wuhu, China,
in 1984, the M.Sc. degree in applied mathematics
and the Ph.D. degree in control theory and control
engineering from Donghua University, Shanghai,
China, in 1990 and 2005, respectively.
He is currently a Professor with Donghua University. He has published 16 papers in refereed
international journals. His current research interests
include mathematical theory of stochastic systems,
robust controls, and robust filtering.