IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 26, NO. 6, JUNE 2015


Further Result on Guaranteed H∞ Performance State Estimation of Delayed Static Neural Networks

He Huang, Tingwen Huang, and Xiaoping Chen

Abstract— This brief considers the guaranteed H∞ performance state estimation problem of delayed static neural networks. An Arcak-type state estimator, which is more general than the widely adopted Luenberger-type one, is chosen to tackle this issue. A delay-dependent criterion is derived under which the estimation error system is globally asymptotically stable with a prescribed H∞ performance. It is shown that the design of suitable gain matrices and the optimal performance index are accomplished by solving a convex optimization problem subject to two linear matrix inequalities. Compared with some previous results, much better performance is achieved by our approach, which benefits greatly from introducing an additional gain matrix in the domain of the activation function. An example is finally given to demonstrate the advantage of the developed result.

Index Terms— Activation function, Arcak-type state estimator, performance analysis, static neural networks, time delay.

Manuscript received June 20, 2013; accepted June 21, 2014. Date of publication July 23, 2014; date of current version May 15, 2015. This work was jointly supported by the National Natural Science Foundation of China under Grant 61005047 and the Natural Science Foundation of Jiangsu Province of China under Grant BK2010214. This publication was made possible by NPRP under Grant 4-1162-1-181 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
H. Huang and X. Chen are with the School of Electronics and Information Engineering, Soochow University, Suzhou 215006, China (e-mail: [email protected]; [email protected]).
T. Huang is with Texas A&M University, Doha 5825, Qatar (e-mail: [email protected]).
Digital Object Identifier 10.1109/TNNLS.2014.2334511

I. INTRODUCTION

Depending on the basic variables used in modeling, recurrent neural networks fall into two classes. One is local field neural networks, in which the local field states of the neurons are taken as the basic variables. The other is static neural networks, with the neuron states as the basic variables [1]. The two classes are equivalent only when some critical conditions are satisfied (see [2] and [3] for details). In [4], a practical example was presented to show that these conditions cannot easily be met; that is, static neural networks are generally distinct from local field neural networks. Typical examples of static neural networks include brain-state-in-a-box neural networks and projection neural networks [5], [6]. Over the past decades, static neural networks have found many exciting applications in fields such as pattern recognition, classification, and combinatorial optimization.

In VLSI implementations of recurrent neural networks, time delay is inevitably introduced. It is well recognized that time delay is one of the main sources of instability and/or poor performance. As a result, stability analysis of delayed recurrent neural networks has been extensively investigated, and a great number of interesting results have been published [7]–[19].

In practice, it may be very expensive or even impossible to acquire complete state information for all neurons in a large-scale recurrent neural network. However, some engineering applications (e.g., state feedback control and signal processing [20], [21]) require this information in advance so that it can be used to achieve specific objectives. Since the seminal work of [22], the study of state estimation of delayed recurrent neural networks has gained considerable interest [23]–[29]. Huang and Feng [30] studied the guaranteed performance state estimation problem for a class of delayed neural networks, where some delay-independent and delay-dependent design criteria were derived by means of linear matrix inequalities (LMIs). The free-weighting matrices technique was employed in [23] to establish a less conservative delay-dependent condition, which was solved by the cone complementary linearization algorithm. In [24], the state estimation problem was studied for a class of discrete-time stochastic neural networks with random delay, and a delay-distribution-dependent condition was proposed. All of these results concern delayed local field neural networks. As discussed above, local field neural networks and static neural networks are two different classes, so the design criteria proposed in the above literature cannot be directly applied to delayed static neural networks. The study of state estimation of delayed static neural networks is therefore of practical significance. Comparatively little effort, however, has been devoted to this problem. In [31], a delay partition approach was applied to handle it; some other interesting results are available in [3] and [32].

It is worth noting that the above-mentioned works considered only Luenberger-type state estimators for delayed recurrent neural networks. Recently, another type of observer was proposed in [33] and later named Arcak's observer in [34]. This kind of observer was successfully adopted to study the synchronization problem of chaotic systems in [35] and [36]. For example, Liu et al. [36] discussed the synchronization and state estimation problems for chaotic systems via a unified model and obtained a delay-independent result. On the other hand, as suggested in [37]–[40], some parameters of an activation function (such as its gain and amplitude) play a remarkable role in the learning process and generalization capability of the underlying neural network. From this point of view, it is of great interest to investigate the effect of the parameters of the activation function on the performance analysis in the state estimation theory of delayed static neural networks. To the best of our knowledge, no reported result has taken this issue into account. Inspired by the works in [33], [34], and [36], one way to change the structure (or shape) of an activation function is to introduce an additional control term in its domain. Take the widely employed sigmoid function as an example: its activation area can be displaced by adjusting its argument, which is exactly what such an additional control term realizes. These observations motivate this brief.
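To make this displacement concrete, here is a minimal numerical sketch (not part of the original brief); the constant k stands in for the additional control term in the sigmoid's argument:

```python
import numpy as np

def sigmoid(v):
    """Logistic activation; its slope is bounded by 1/4."""
    return 1.0 / (1.0 + np.exp(-v))

k = 2.0                         # illustrative additive control term
shifted = lambda v: sigmoid(v + k)

# The additive term displaces the activation region: the transition of
# the shifted curve occurs around v = -k instead of v = 0.
print(sigmoid(0.0))             # 0.5, midpoint of the original curve at v = 0
print(shifted(-k))              # 0.5, midpoint moved to v = -k
```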




In this brief, our attention focuses on the guaranteed H∞ performance state estimation problem of delayed static neural networks. Based on the above observations, an Arcak-type state estimator, which is more general than the Luenberger-type one, is constructed to investigate this problem. By choosing a suitable Lyapunov functional with a third-order integral term, and employing the reciprocally convex combination technique [41] and a double-integral inequality, a delay-dependent design criterion is presented such that the estimation error system is globally asymptotically stable with a prescribed H∞ performance. It is shown that the two gain matrices and the optimal H∞ performance index can be obtained simultaneously by solving a convex optimization problem subject to two LMI-based constraints, which is readily handled by standard algorithms [42]. Since an additional gain matrix is involved in the domain of the activation function and has a positive effect there, much better performance can be achieved by our approach than by some previous results, especially when the slope constants of the activation function are large. In this sense, the choice of activation functions becomes more flexible in our framework. The main contributions of this brief are threefold: 1) a suitable Arcak-type state estimator with two gain matrices is designed for delayed static neural networks; 2) this is the first attempt to analyze the effect of the parameters of the activation function on the H∞ performance; and 3) the optimal performance index achieved by our approach is much better than some existing ones.

II. NOTATIONS AND PROBLEM DESCRIPTION

Throughout this brief, for a real matrix $X$, $X > 0$ ($X < 0$) means that $X$ is symmetric and positive definite (negative definite). $\mathrm{diag}\{\ldots\}$ denotes a block diagonal matrix. $L_2[0,\infty)$ denotes the space of square-integrable vector functions defined on $[0,\infty)$. $I$ is an identity matrix with appropriate dimension. The superscripts $T$ and $-1$ stand for the transpose and the inverse of a matrix, respectively. The asterisk $*$ represents the symmetric block in a symmetric matrix. Matrices, if not explicitly stated, are assumed to have compatible dimensions.

Consider a static neural network with time delay and noise disturbance, described by
$$
\begin{cases}
\dot{x}(t) = -Ax(t) + f\bigl(Wx(t-\tau(t)) + J\bigr) + B_1 w(t)\\
y(t) = Cx(t) + Dx(t-\tau(t)) + B_2 w(t)\\
z(t) = Hx(t)\\
x(t) = \varphi(t), \quad t \in [-\tau, 0]
\end{cases}
\qquad (1)
$$
where $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$ is the state vector of the model, $n$ is the number of neurons, $w(t) \in \mathbb{R}^q$ is a noise disturbance belonging to $L_2[0,\infty)$, $y(t) \in \mathbb{R}^m$ is the network measurement, and $z(t) \in \mathbb{R}^p$, to be estimated, is a linear combination of the states. $A$ is a diagonal matrix with positive entries, $W$ is a delayed connection weight matrix, and $B_1$, $B_2$, $C$, $D$, and $H$ are real known constant matrices with appropriate dimensions. $f(x(t)) = [f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t))]^T$ is a continuous activation function, $J = [J_1, J_2, \ldots, J_n]^T$ is an external input vector, $\tau(t)$ is a time-varying delay with upper bound $\tau$, and $\varphi(t)$ is an initial condition defined on $[-\tau, 0]$.

As mentioned before, the major objective of this brief is to present an efficient algorithm for the guaranteed H∞ performance state estimation problem of the static neural network (1). In order to better discuss the effect of the activation function on the performance analysis, an Arcak-type state estimator is constructed for (1):
$$
\begin{cases}
\dot{\hat{x}}(t) = -A\hat{x}(t) + f\bigl(W\hat{x}(t-\tau(t)) + J + K_1(y(t) - \hat{y}(t))\bigr) + K_2(y(t) - \hat{y}(t))\\
\hat{y}(t) = C\hat{x}(t) + D\hat{x}(t-\tau(t))\\
\hat{z}(t) = H\hat{x}(t)\\
\hat{x}(t) = 0, \quad t \in [-\tau, 0]
\end{cases}
\qquad (2)
$$
where $\hat{x}(t) \in \mathbb{R}^n$, $\hat{y}(t) \in \mathbb{R}^m$, $\hat{z}(t) \in \mathbb{R}^p$, and $K_1$ and $K_2$ are the gain matrices to be determined.
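Before proceeding, a minimal forward-Euler simulation sketch of (1) and (2) may help fix ideas. All numerical data below (matrices, the constant delay, tanh as the activation, and the gains) are illustrative assumptions, not values from the brief:

```python
import numpy as np

# Illustrative 2-neuron data (assumptions for demonstration only)
n, dt, tau = 2, 1e-3, 0.1
A = np.diag([1.0, 1.5]); W = np.array([[0.3, -0.2], [0.1, 0.4]])
B1 = np.array([[0.1], [0.1]]); B2 = np.array([[0.05]])
C = np.array([[1.0, 0.0]]); D = np.array([[0.1, 0.0]])
J = np.zeros(n)
K1 = np.array([[0.2], [0.2]])  # gain acting inside the activation argument
K2 = np.array([[0.5], [0.5]])  # Luenberger-type output-injection gain

f = np.tanh                    # slope-bounded activation, l_i = 1
steps, d = int(5.0 / dt), int(tau / dt)
x = np.zeros((steps + 1, n)); xh = np.zeros((steps + 1, n))
x[0] = [0.5, -0.3]             # plant initial condition; estimator starts at 0 per (2)

for t in range(steps):
    # Constant-initial-history approximation of the delayed states
    xd, xhd = x[max(t - d, 0)], xh[max(t - d, 0)]
    w = np.array([np.exp(-t * dt)])            # decaying L2 disturbance
    y = C @ x[t] + D @ xd + B2 @ w             # measurement of (1)
    yh = C @ xh[t] + D @ xhd                   # estimator output of (2)
    x[t + 1] = x[t] + dt * (-A @ x[t] + f(W @ xd + J) + B1 @ w)
    # Arcak-type estimator: K1 acts inside f, K2 acts outside
    xh[t + 1] = xh[t] + dt * (-A @ xh[t]
                              + f(W @ xhd + J + K1 @ (y - yh))
                              + K2 @ (y - yh))

print("final estimation error norm:", np.linalg.norm(x[-1] - xh[-1]))
```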
Remark 1: Since two gain matrices are involved, the Arcak-type state estimator (2) is more general than, and contains as special cases, the Luenberger-type estimators in [3], [22]–[28], [31], and [32]. In particular, it is worth emphasizing that the gain matrix $K_1$ enters the domain of the activation function $f$. It can be effectively used to adjust the parameters of the activation function, and it brings a positive effect on the performance analysis. It is therefore expected that much better H∞ performance can be achieved by our result.

Let the error signals be $e(t) = x(t) - \hat{x}(t)$ and $\bar{z}(t) = z(t) - \hat{z}(t)$. The estimation error system is then immediately obtained from (1) and (2):
$$
\begin{cases}
\dot{e}(t) = -(A + K_2 C)e(t) - K_2 D e(t-\tau(t)) + g(t, \tau(t), x(t), e(t)) + (B_1 - K_2 B_2)w(t)\\
\bar{z}(t) = He(t)
\end{cases}
\qquad (3)
$$
where $g(t, \tau(t), x(t), e(t)) = f(Wx(t-\tau(t)) + J) - f(W\hat{x}(t-\tau(t)) + J + K_1(y(t) - \hat{y}(t)))$.

The guaranteed H∞ performance state estimation problem is stated as follows. For a prescribed noise attenuation level $\gamma > 0$, find a suitable state estimator (2) such that:
1) the estimation error system (3) with $w(t) \equiv 0$ is globally asymptotically stable;
2) under the zero-initial condition,
$$
\|\bar{z}(t)\|_2 < \gamma \|w(t)\|_2 \qquad (4)
$$
holds for all nonzero $w(t) \in L_2[0,\infty)$, where $\|\psi(t)\|_2 = \bigl(\int_0^\infty \psi^T(t)\psi(t)\,dt\bigr)^{1/2}$.

Assumption 1: For each $i = 1, 2, \ldots, n$, the activation function $f_i(\cdot)$ satisfies
$$
0 \le \frac{f_i(a) - f_i(b)}{a - b} \le l_i, \qquad a \ne b \in \mathbb{R} \qquad (5)
$$
with $l_i$ a known real scalar. Denote $L = \mathrm{diag}(l_1, l_2, \ldots, l_n)$.

Assumption 2: There exist scalars $\tau > 0$ and $\mu$ such that
$$
0 \le \tau(t) \le \tau \quad \text{and} \quad \dot{\tau}(t) \le \mu. \qquad (6)
$$
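Assumption 1 is the standard sector condition. As a quick numerical illustration (not in the original brief), tanh satisfies (5) with $l_i = 1$, since its difference quotients lie in $[0, 1]$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=1000), rng.normal(size=1000)
mask = a != b
q = (np.tanh(a[mask]) - np.tanh(b[mask])) / (a[mask] - b[mask])

# tanh is nondecreasing with derivative in (0, 1], so l_i = 1 works
assert np.all(q >= 0.0) and np.all(q <= 1.0)
print(q.min(), q.max())  # both inside [0, 1]
```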

III. MAIN RESULT

This section presents a delay-dependent approach to the guaranteed H∞ performance state estimation problem of the neural network (1) via (2). The following propositions are essential to the derivation of the main result.

Proposition 1: For any $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) > 0$,
$$
0 \le -2g^T(t, \tau(t), x(t), e(t))\,\Lambda\, g(t, \tau(t), x(t), e(t)) + 2g^T(t, \tau(t), x(t), e(t))\,L\Lambda\bigl[-K_1 C e(t) + (W - K_1 D)e(t-\tau(t)) - K_1 B_2 w(t)\bigr]. \qquad (7)
$$
Proof: See Appendix A.

Proposition 2: Let $\eta(t) = [e^T(t-\tau(t)) - e^T(t-\tau),\; e^T(t) - e^T(t-\tau(t))]^T$. If $\begin{bmatrix} S & Z \\ * & S \end{bmatrix} \ge 0$, one has
$$
-\tau \int_{t-\tau}^{t} \dot{e}^T(s) S \dot{e}(s)\,ds \le -\eta^T(t) \begin{bmatrix} S & Z \\ * & S \end{bmatrix} \eta(t). \qquad (8)
$$
Proof: See Appendix B.

Proposition 3: For a given scalar $\tau > 0$, real matrices $T > 0$ and $X$ with compatible dimensions, and any vector function $\pi(t)$, the following condition holds:
$$
-\int_{-\tau}^{0}\!\!\int_{t+\theta}^{t} \dot{e}^T(s) T \dot{e}(s)\,ds\,d\theta \le \frac{\tau^2}{2}\pi^T(t) X^T T^{-1} X \pi(t) + 2\pi^T(t) X^T\Bigl[\tau e(t) - \int_{t-\tau}^{t} e(s)\,ds\Bigr]. \qquad (9)
$$
Proof: See Appendix C.

Remark 2: No restriction is imposed on $X$ in (9). It can easily be seen that the double-integral inequality in [44, Lemma 1]


is a special case of (9), obtained by taking $\pi(t) = \bigl[e^T(t), \int_{t-\tau}^{t} e^T(s)\,ds\bigr]^T$ and $X = \bigl[-\frac{2}{\tau}T,\; \frac{2}{\tau^2}T\bigr]$. In addition, (9) can also be regarded as an extension of the free-weighting matrix technique in [23] to the double-integral case.

Now, we have the following theorem.

Theorem 1: For given scalars $\tau > 0$, $\mu$, and $\gamma > 0$, the guaranteed H∞ performance state estimation problem is solvable if there exist real matrices $P > 0$, $Q_1 > 0$, $Q_2 > 0$, $R > 0$, $S > 0$, $T > 0$, $X_i$ $(i = 1, 2, \ldots, 6)$, $Z$, $G_1$, $G_2$, and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) > 0$ such that the following two LMIs are satisfied:
$$
\begin{bmatrix} S & Z \\ * & S \end{bmatrix} \ge 0 \qquad (10)
$$
$$
\begin{bmatrix}
\Xi_{11} & \Xi_{12} & Z^T + \tau X_3 & -X_1^T + \tau X_4 & \Xi_{15} & \Xi_{16} & \tau X_1^T & \Xi_{18} & \Xi_{18}\\
* & \Xi_{22} & S - Z^T & -X_2^T & \Xi_{25} & 0 & \tau X_2^T & -\tau D^T G_2^T & -\tau D^T G_2^T\\
* & * & -Q_2 - S & -X_3^T & 0 & 0 & \tau X_3^T & 0 & 0\\
* & * & * & \Xi_{44} & -X_5 & -X_6 & \tau X_4^T & 0 & 0\\
* & * & * & * & -2\Lambda & -G_1^T B_2 & \tau X_5^T & \tau P & \tau P\\
* & * & * & * & * & -\gamma^2 I & \tau X_6^T & \Xi_{68} & \Xi_{68}\\
* & * & * & * & * & * & -2T & 0 & 0\\
* & * & * & * & * & * & * & -2P + S & 0\\
* & * & * & * & * & * & * & * & -4P + 2T
\end{bmatrix} < 0 \qquad (11)
$$
where
$$
\begin{aligned}
\Xi_{11} &= -PA - A^T P - G_2 C - C^T G_2^T + Q_1 + Q_2 + \tau^2 R - S + \tau X_1 + \tau X_1^T + H^T H\\
\Xi_{12} &= -G_2 D + S - Z^T + \tau X_2\\
\Xi_{15} &= P + \tau X_5 - C^T G_1\\
\Xi_{16} &= P B_1 - G_2 B_2 + \tau X_6\\
\Xi_{18} &= -\tau A^T P - \tau C^T G_2^T\\
\Xi_{22} &= -(1 - \mu) Q_1 - 2S + Z + Z^T\\
\Xi_{25} &= W^T L \Lambda - D^T G_1\\
\Xi_{44} &= -R - X_4 - X_4^T\\
\Xi_{68} &= \tau B_1^T P - \tau B_2^T G_2^T.
\end{aligned}
$$
Furthermore, the gain matrices $K_1$ and $K_2$ can be designed, respectively, as
$$
K_1 = (L\Lambda)^{-1} G_1^T, \qquad K_2 = P^{-1} G_2. \qquad (12)
$$
Proof: The proof is divided into two parts. We first show that (4) holds for all nonzero $w(t)$ under zero-initial conditions; then, the global asymptotic stability of the error system (3) with $w(t) \equiv 0$ is proven.

1) To verify (4), a Lyapunov functional candidate is constructed as
$$
\begin{aligned}
V(t) = {}& e^T(t) P e(t) + \int_{t-\tau(t)}^{t} e^T(s) Q_1 e(s)\,ds + \int_{t-\tau}^{t} e^T(s) Q_2 e(s)\,ds\\
&+ \tau \int_{-\tau}^{0}\!\!\int_{t+\theta}^{t} e^T(s) R e(s)\,ds\,d\theta + \tau \int_{-\tau}^{0}\!\!\int_{t+\theta}^{t} \dot{e}^T(s) S \dot{e}(s)\,ds\,d\theta\\
&+ \int_{-\tau}^{0}\!\!\int_{\theta}^{0}\!\!\int_{t+\alpha}^{t} \dot{e}^T(s) T \dot{e}(s)\,ds\,d\alpha\,d\theta. \qquad (13)
\end{aligned}
$$
Under the zero-initial conditions, $V(t)|_{t=0} = 0$ and $V(t) \ge 0$ for $t > 0$. Define
$$
I = \int_{0}^{\infty} \bigl[\bar{z}^T(t)\bar{z}(t) - \gamma^2 w^T(t) w(t)\bigr]\,dt. \qquad (14)
$$
Then, since $V(t)|_{t\to\infty} \ge 0$ and $V(t)|_{t=0} = 0$, one has
$$
I \le \int_{0}^{\infty} \bigl[\bar{z}^T(t)\bar{z}(t) - \gamma^2 w^T(t)w(t)\bigr]\,dt + V(t)|_{t\to\infty} - V(t)|_{t=0} = \int_{0}^{\infty} \bigl[\bar{z}^T(t)\bar{z}(t) - \gamma^2 w^T(t)w(t) + \dot{V}(t)\bigr]\,dt. \qquad (15)
$$
Let $\pi(t) = \bigl[e^T(t), e^T(t-\tau(t)), e^T(t-\tau), \int_{t-\tau}^{t} e^T(s)\,ds, g^T(t, \tau(t), x(t), e(t)), w^T(t)\bigr]^T$. By employing Propositions 1–3 and noting $\bar{z}(t) = He(t)$, it can be deduced that
$$
\begin{aligned}
&\bar{z}^T(t)\bar{z}(t) - \gamma^2 w^T(t)w(t) + \dot{V}(t)\\
&= 2e^T(t) P \dot{e}(t) + e^T(t)\bigl(Q_1 + Q_2 + \tau^2 R + H^T H\bigr)e(t) - (1 - \dot{\tau}(t))\, e^T(t-\tau(t)) Q_1 e(t-\tau(t))\\
&\quad - e^T(t-\tau) Q_2 e(t-\tau) - \tau \int_{t-\tau}^{t} e^T(s) R e(s)\,ds + \tau^2 \dot{e}^T(t) S \dot{e}(t) - \tau \int_{t-\tau}^{t} \dot{e}^T(s) S \dot{e}(s)\,ds\\
&\quad + \frac{\tau^2}{2} \dot{e}^T(t) T \dot{e}(t) - \int_{-\tau}^{0}\!\!\int_{t+\theta}^{t} \dot{e}^T(s) T \dot{e}(s)\,ds\,d\theta - \gamma^2 w^T(t) w(t)\\
&\le \pi^T(t)\Bigl[\Pi_1 + \frac{\tau^2}{2} X^T T^{-1} X + \tau^2 \Pi_2^T S \Pi_2 + \frac{\tau^2}{2} \Pi_2^T T \Pi_2\Bigr]\pi(t) \qquad (16)
\end{aligned}
$$
where $X = [X_1\;\; X_2\;\; X_3\;\; X_4\;\; X_5\;\; X_6]$, $\Pi_2 = [-(A + K_2 C)\;\; {-K_2 D}\;\; 0\;\; 0\;\; I\;\; B_1 - K_2 B_2]$, and
$$
\Pi_1 = \begin{bmatrix}
\Pi_{11} & \Pi_{12} & Z^T + \tau X_3 & \Pi_{14} & \Pi_{15} & \Pi_{16}\\
* & \Pi_{22} & S - Z^T & -X_2^T & \Pi_{25} & 0\\
* & * & -Q_2 - S & -X_3^T & 0 & 0\\
* & * & * & \Pi_{44} & -X_5 & -X_6\\
* & * & * & * & -2\Lambda & -L\Lambda K_1 B_2\\
* & * & * & * & * & -\gamma^2 I
\end{bmatrix}
$$
with $\Pi_{22}$, $\Pi_{44}$ the same as $\Xi_{22}$, $\Xi_{44}$ in Theorem 1 and
$$
\begin{aligned}
\Pi_{11} &= -PA - A^T P - P K_2 C - C^T K_2^T P + Q_1 + Q_2 + \tau^2 R - S + \tau X_1 + \tau X_1^T + H^T H\\
\Pi_{12} &= -P K_2 D + S - Z^T + \tau X_2, \qquad \Pi_{14} = -X_1^T + \tau X_4\\
\Pi_{15} &= P + \tau X_5 - C^T K_1^T L \Lambda, \qquad \Pi_{25} = W^T L \Lambda - D^T K_1^T L \Lambda\\
\Pi_{16} &= P B_1 - P K_2 B_2 + \tau X_6.
\end{aligned}
$$
By the well-known Schur complement [42], $\Pi_1 + \frac{\tau^2}{2} X^T T^{-1} X + \tau^2 \Pi_2^T S \Pi_2 + \frac{\tau^2}{2} \Pi_2^T T \Pi_2 < 0$ is equivalent to
$$
\begin{bmatrix}
\Pi_1 & \tau X^T & \tau \Pi_2^T S & \tau \Pi_2^T T\\
* & -2T & 0 & 0\\
* & * & -S & 0\\
* & * & * & -2T
\end{bmatrix} < 0. \qquad (17)
$$
By pre- and postmultiplying (17) by $\mathrm{diag}\{I, I, P S^{-1}, P T^{-1}\}$ and its transpose, respectively, and noting (12) and the facts that $-P S^{-1} P \le -2P + S$ and $-2P T^{-1} P \le -4P + 2T$, one can deduce after some manipulations that (17) is ensured by (11), and thus
$$
\Pi_1 + \frac{\tau^2}{2} X^T T^{-1} X + \tau^2 \Pi_2^T S \Pi_2 + \frac{\tau^2}{2} \Pi_2^T T \Pi_2 < 0.
$$
It then follows immediately from (16) that, for any nonzero $\pi(t)$,
$$
\bar{z}^T(t)\bar{z}(t) - \gamma^2 w^T(t)w(t) + \dot{V}(t) < 0.
$$
Therefore, $I < 0$, which by (15) implies $\|\bar{z}(t)\|_2 < \gamma \|w(t)\|_2$ for any nonzero $w(t) \in L_2[0,\infty)$.

2) Next, we show the global asymptotic stability of the estimation error system (3) with $w(t) \equiv 0$. For convenience, it is rewritten as
$$
\dot{e}(t) = -(A + K_2 C)e(t) - K_2 D e(t-\tau(t)) + g(t, \tau(t), x(t), e(t)). \qquad (18)
$$

We still consider the Lyapunov functional (13) and calculate its time derivative along the solutions of (18). Similar to the derivation of (16), one can show that
$$
\dot{V}(t) \le \zeta^T(t)\Bigl[\Upsilon_1 + \frac{\tau^2}{2} \Upsilon_2^T T^{-1} \Upsilon_2 + \tau^2 \Upsilon_3^T S \Upsilon_3 + \frac{\tau^2}{2} \Upsilon_3^T T \Upsilon_3\Bigr]\zeta(t) < 0 \qquad (19)
$$
is still guaranteed by (10) and (11) for all nonzero $\zeta(t)$, where
$$
\zeta(t) = \Bigl[e^T(t),\; e^T(t-\tau(t)),\; e^T(t-\tau),\; \int_{t-\tau}^{t} e^T(s)\,ds,\; g^T(t, \tau(t), x(t), e(t))\Bigr]^T
$$
$$
\Upsilon_1 = \begin{bmatrix}
\Upsilon_{11} & \Pi_{12} & Z^T + \tau X_3 & \Pi_{14} & \Pi_{15}\\
* & \Pi_{22} & S - Z^T & -X_2^T & \Pi_{25}\\
* & * & -Q_2 - S & -X_3^T & 0\\
* & * & * & \Pi_{44} & -X_5\\
* & * & * & * & -2\Lambda
\end{bmatrix}
$$
$$
\begin{aligned}
\Upsilon_{11} &= -PA - A^T P - P K_2 C - C^T K_2^T P + Q_1 + Q_2 + \tau^2 R - S + \tau X_1 + \tau X_1^T\\
\Upsilon_2 &= [X_1\;\; X_2\;\; X_3\;\; X_4\;\; X_5]\\
\Upsilon_3 &= [-(A + K_2 C)\;\; {-K_2 D}\;\; 0\;\; 0\;\; I].
\end{aligned}
$$
According to the Lyapunov stability theory, the error system (3) with $w(t) \equiv 0$ is globally asymptotically stable. This completes the proof.

Remark 3: Theorem 1 presents a delay-dependent condition for the guaranteed H∞ performance state estimation of delayed static neural networks. Two gain matrices are to be determined: besides $K_2$, the gain $K_1$ is intentionally placed in the domain of the activation function. This gain matrix is introduced because it has a positive effect on adjusting the parameters of the considered activation function, and it is precisely the introduction of $K_1$ that allows our approach to achieve much better H∞ performance.

Remark 4: The stability analysis of delayed static neural networks has been studied in the open literature (see [7], [13], [14]). In contrast to a pure stability proof, the proof of Theorem 1 is composed of two parts: 1) the validity of (4) under zero-initial conditions and 2) the global asymptotic stability of the resulting estimation error system (3) with $w(t) \equiv 0$. That is, to establish a design criterion for a guaranteed H∞ performance state estimator, it is further required to ensure that (4) holds; as illustrated in the proof of Theorem 1, this is verified by means of the index $I$ defined in (14). This is the major difference between a stability proof for delayed static neural networks and the proof of Theorem 1.

Remark 5: Inspired by [44], a third-order integral term is introduced in the Lyapunov functional (13), which was not taken into account in [3], [30], and [31]. Meanwhile, as seen from the proof of Theorem 1, the double-integral inequality of Proposition 3 is then employed to handle the double-integral term produced by this third-order integral.
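Theorem 1 reduces the design to a convex program: declare the unknown matrices as semidefinite variables, impose the LMIs, and minimize γ². As a hedged illustration of this workflow (not of the 9-block LMI (11) itself), the following sketch computes the optimal H∞ level of a simple delay-free system via the standard bounded-real-lemma LMI; CVXPY and all system data here are assumptions introduced for illustration:

```python
import cvxpy as cp
import numpy as np

# Illustrative stable system (not the neural network of Section II)
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
Dm = np.array([[0.0]])          # feedthrough term of this toy system
n, q, p = 2, 1, 1

P = cp.Variable((n, n), symmetric=True)
g2 = cp.Variable(nonneg=True)   # gamma^2 enters the LMI linearly, hence convex

# Bounded-real lemma: the H-infinity norm is below gamma iff this LMI holds
M = cp.bmat([
    [A.T @ P + P @ A, P @ B,           C.T],
    [B.T @ P,         -g2 * np.eye(q), Dm.T],
    [C,               Dm,              -np.eye(p)],
])
eps = 1e-6
prob = cp.Problem(cp.Minimize(g2),
                  [P >> eps * np.eye(n), M << -eps * np.eye(n + q + p)])
prob.solve()
print("optimal H-infinity level:", float(np.sqrt(g2.value)))
```

Under the same pattern, one would declare P, Q1, Q2, R, S, T, the Xi, Z, G1, G2, and Λ as variables, impose (10) and (11), minimize γ², and then recover the gains from (12), e.g., K1 = np.linalg.inv(L @ Lam) @ G1.T and K2 = np.linalg.inv(P) @ G2 evaluated at the solution (variable names here are hypothetical).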

APPENDIX A
PROOF OF PROPOSITION 1

Write $g = g(t, \tau(t), x(t), e(t)) = f(\theta_1) - f(\theta_2)$ with $\theta_1 = Wx(t-\tau(t)) + J$ and $\theta_2 = W\hat{x}(t-\tau(t)) + J + K_1(y(t) - \hat{y}(t))$, so that $\theta_1 - \theta_2 = -K_1 C e(t) + (W - K_1 D)e(t-\tau(t)) - K_1 B_2 w(t)$. By Assumption 1, $g_i[g_i - l_i(\theta_{1i} - \theta_{2i})] \le 0$ for each $i$. Hence, for any $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) > 0$, one has
$$
\begin{aligned}
0 &\le -2\sum_{i=1}^{n} \lambda_i g_i \bigl[g_i - l_i(\theta_{1i} - \theta_{2i})\bigr] = -2\sum_{i=1}^{n} \bigl[\lambda_i g_i^2 - \lambda_i l_i g_i(\theta_{1i} - \theta_{2i})\bigr]\\
&= -2 g^T(t, \tau(t), x(t), e(t))\,\Lambda\, g(t, \tau(t), x(t), e(t))\\
&\quad + 2 g^T(t, \tau(t), x(t), e(t))\,L\Lambda\bigl[-K_1 C e(t) + (W - K_1 D)e(t-\tau(t)) - K_1 B_2 w(t)\bigr].
\end{aligned}
$$
This completes the proof.

APPENDIX B
PROOF OF PROPOSITION 2

It is known that
$$
-\tau \int_{t-\tau}^{t} \dot{e}^T(s) S \dot{e}(s)\,ds = -\tau \int_{t-\tau}^{t-\tau(t)} \dot{e}^T(s) S \dot{e}(s)\,ds - \tau \int_{t-\tau(t)}^{t} \dot{e}^T(s) S \dot{e}(s)\,ds.
$$
Then, (8) can easily be verified by employing Jensen's inequality [43] and the reciprocally convex combination technique [41]. This completes the proof.



TABLE II: Comparison of the optimal H∞ performance index γ for different τ.

APPENDIX C
PROOF OF PROPOSITION 3

Since
$$
\begin{bmatrix} X^T T^{-1} X & X^T \\ X & T \end{bmatrix}
= \begin{bmatrix} T^{-\frac{1}{2}} X & T^{\frac{1}{2}} \end{bmatrix}^T
\begin{bmatrix} T^{-\frac{1}{2}} X & T^{\frac{1}{2}} \end{bmatrix} \ge 0
$$
one has
$$
0 \le \int_{-\tau}^{0}\!\!\int_{t+\theta}^{t}
\begin{bmatrix} \pi(t) \\ \dot{e}(s) \end{bmatrix}^T
\begin{bmatrix} X^T T^{-1} X & X^T \\ X & T \end{bmatrix}
\begin{bmatrix} \pi(t) \\ \dot{e}(s) \end{bmatrix} ds\,d\theta.
$$
By noting $\int_{-\tau}^{0}\int_{t+\theta}^{t} \dot{e}(s)\,ds\,d\theta = \tau e(t) - \int_{t-\tau}^{t} e(s)\,ds$ and $\int_{-\tau}^{0}\int_{t+\theta}^{t} ds\,d\theta = \frac{\tau^2}{2}$, together with some simple manipulations, one arrives at (9). This completes the proof.
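The factorization above is the heart of the argument: the block matrix is a Gram matrix, hence positive semidefinite for any X. A small numerical check (illustrative dimensions and random data, not from the brief) confirms this:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 5
X = rng.normal(size=(n, m))          # the unrestricted free matrix X
Q = rng.normal(size=(n, n))
T = Q @ Q.T + n * np.eye(n)          # an arbitrary T > 0

# M = [[X^T T^{-1} X, X^T], [X, T]] is a Gram matrix, hence PSD
top = np.hstack([X.T @ np.linalg.solve(T, X), X.T])
bot = np.hstack([X, T])
M = np.vstack([top, bot])
print(np.linalg.eigvalsh(M).min() >= -1e-9)  # True: no restriction on X needed
```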

REFERENCES

[1] M. M. Gupta, L. Jin, and N. Homma, Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory. New York, NY, USA: Wiley, 2003.
[2] H. Qiao, J. Peng, Z.-B. Xu, and B. Zhang, "A reference model approach to stability analysis of neural networks," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 33, no. 6, pp. 925–936, Dec. 2003.
[3] H. Huang, G. Feng, and J. Cao, "Guaranteed performance state estimation of static neural networks with time-varying delay," Neurocomputing, vol. 74, no. 4, pp. 606–616, 2011.
[4] H. S. Seung, "How the brain keeps the eye still," Proc. Nat. Acad. Sci. USA, vol. 93, pp. 13339–13344, 1996.
[5] Y. Xia, "An extended projection neural network for constrained optimization," Neural Comput., vol. 16, no. 4, pp. 863–883, 2004.
[6] I. Varga, G. Elek, and H. Zak, "On the brain-state-in-a-convex-domain neural models," Neural Netw., vol. 9, no. 7, pp. 1173–1184, 1996.
[7] C.-D. Zheng, H. Zhang, and Z. Wang, "Delay-dependent globally exponential stability criteria for static neural networks: An LMI approach," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 56, no. 7, pp. 605–609, Jul. 2009.
[8] B. Liu, W. Lu, and T. Chen, "Generalized Halanay inequalities and their applications to neural networks with unbounded time-varying delays," IEEE Trans. Neural Netw., vol. 22, no. 9, pp. 1508–1513, Sep. 2011.
[9] Z.-G. Wu, P. Shi, H. Su, and J. Chu, "Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data," IEEE Trans. Cybern., vol. 43, no. 6, pp. 1796–1806, Dec. 2013.
[10] H. Zhang, F. Yang, X. Liu, and Q. Zhang, "Stability analysis for neural networks with time-varying delay based on quadratic convex combination," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 4, pp. 513–521, Apr. 2013.
[11] M. D. Marco, M. Forti, M. Grazzini, and L. Pancioni, "Limit set dichotomy and multistability for a class of cooperative neural networks with delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 9, pp. 1473–1485, Sep. 2012.
[12] Z. Wang, H. Zhang, and B. Jiang, "LMI-based approach for global asymptotic stability analysis of recurrent neural networks with various delays and structures," IEEE Trans. Neural Netw., vol. 22, no. 7, pp. 1032–1045, Jul. 2010.
[13] X. Li, H. Gao, and X. Yu, "A unified approach to the stability of generalized static neural networks with linear fractional uncertainties and delays," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 5, pp. 1275–1286, Oct. 2011.
[14] P. Li and J. Cao, "Stability in static delayed neural networks: A nonlinear measure approach," Neurocomputing, vol. 69, nos. 13–15, pp. 1776–1781, 2006.

[15] Z.-G. Wu, P. Shi, H. Su, and J. Chu, "Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays," IEEE Trans. Neural Netw., vol. 22, no. 10, pp. 1566–1575, Oct. 2011.
[16] Z.-G. Wu, J. Lam, H. Su, and J. Chu, "Stability and dissipativity analysis of static neural networks with time delay," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 2, pp. 199–210, Feb. 2012.
[17] H. Zhang, Z. Wang, and D. Liu, "Global asymptotic stability and robust stability of a class of Cohen–Grossberg neural networks with mixed delays," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 56, no. 3, pp. 616–629, Mar. 2009.
[18] S. Arik, "Global robust stability of delayed neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 50, no. 1, pp. 156–160, Jan. 2003.
[19] Z. Zeng and W. X. Zheng, "Multistability of neural networks with time-varying delays and concave-convex characteristics," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 2, pp. 293–305, Feb. 2012.
[20] L. Jin, P. N. Nikiforuk, and M. M. Gupta, "Adaptive control of discrete-time nonlinear systems using recurrent neural networks," IEE Proc. Control Theory Appl., vol. 141, no. 3, pp. 169–176, May 1994.
[21] A. Wu and Z. Zeng, "Exponential stabilization of memristive neural networks with time delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 12, pp. 1919–1929, Dec. 2012.
[22] Z. Wang, D. W. C. Ho, and X. Liu, "State estimation for delayed neural networks," IEEE Trans. Neural Netw., vol. 16, no. 1, pp. 279–284, Jan. 2005.
[23] Y. He, Q.-G. Wang, M. Wu, and C. Lin, "Delay-dependent state estimation for delayed neural networks," IEEE Trans. Neural Netw., vol. 17, no. 4, pp. 1077–1081, Jul. 2006.
[24] H. Bao and J. Cao, "Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay," Neural Netw., vol. 24, no. 1, pp. 19–28, 2011.
[25] Z. Wang, Y. Liu, and X. Liu, "State estimation for jumping recurrent neural networks with discrete and distributed delays," Neural Netw., vol. 22, no. 1, pp. 41–48, 2009.
[26] C.-D. Zheng, M. Ma, and Z. Wang, "Less conservative results of state estimation for delayed neural networks with fewer LMI variables," Neurocomputing, vol. 74, no. 6, pp. 974–982, 2011.
[27] Y. Liu, Z. Wang, and X. Liu, "Design of exponential state estimators for neural networks with mixed time delays," Phys. Lett. A, vol. 364, no. 5, pp. 401–412, 2007.
[28] D. Zhang, L. Yu, Q.-G. Wang, and C.-J. Ong, "Estimator design for discrete-time switched neural networks with asynchronous switching and time-varying delay," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 5, pp. 827–834, May 2012.
[29] J. Lian, Z. Feng, and P. Shi, "Observer design for switched recurrent neural networks: An average dwell time approach," IEEE Trans. Neural Netw., vol. 22, no. 10, pp. 1547–1556, Oct. 2011.
[30] H. Huang and G. Feng, "Delay-dependent H∞ and generalized H2 filtering for delayed neural networks," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 56, no. 4, pp. 846–857, Apr. 2009.
[31] H. Huang, G. Feng, and J. Cao, "State estimation for static neural networks with time-varying delay," Neural Netw., vol. 23, no. 10, pp. 1202–1207, 2010.
[32] H. Huang, T. Huang, and X. Chen, "Guaranteed H∞ performance state estimation of delayed static neural networks," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 60, no. 6, pp. 371–375, Jun. 2013.
[33] M. Arcak and P. Kokotović, "Nonlinear observers: A circle criterion design and robustness analysis," Automatica, vol. 37, no. 12, pp. 1923–1930, 2001.
[34] A. Zemouche and M. Boutayeb, "Comments on 'A note on observers for discrete-time Lipschitz nonlinear systems'," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 60, no. 1, pp. 56–60, Jan. 2013.
[35] A. Zemouche and M. Boutayeb, "Nonlinear-observer-based H∞ synchronization and unknown input recovery," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 56, no. 8, pp. 1720–1731, Aug. 2009.

[36] M. Liu, S. Zhang, Z. Fan, S. Zheng, and W. Sheng, "Exponential H∞ synchronization and state estimation for chaotic systems via a unified model," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 7, pp. 1114–1126, Jul. 2013.
[37] E. Trentin, "Networks with trainable amplitude of activation functions," Neural Netw., vol. 14, nos. 4–5, pp. 471–493, 2001.
[38] D. P. Mandic and J. Chambers, "Relating the slope of the activation function and the learning rate within a recurrent neural network," Neural Comput., vol. 11, no. 5, pp. 1069–1077, Jul. 1999.
[39] S. L. Goh and D. P. Mandic, "Recurrent neural networks with trainable amplitude of activation functions," Neural Netw., vol. 16, no. 8, pp. 1095–1100, 2003.
[40] I. Castelli and E. Trentin, "Combination of supervised and unsupervised learning for training the activation functions of neural networks," Pattern Recognit. Lett., vol. 37, pp. 178–191, Feb. 2014.
[41] P. Park, J. W. Ko, and C. Jeong, "Reciprocally convex approach to stability of systems with time-varying delays," Automatica, vol. 47, no. 1, pp. 235–238, 2011.
[42] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory. Philadelphia, PA, USA: SIAM, 1994.
[43] K. Gu, V. L. Kharitonov, and J. Chen, Stability of Time-Delay Systems. Boston, MA, USA: Birkhäuser, 2003.
[44] J. Sun, G. P. Liu, J. Chen, and D. Rees, "Improved delay-range-dependent stability criteria for linear systems with time-varying delays," Automatica, vol. 46, no. 2, pp. 466–470, 2010.
