

Dissipativity Analysis for Discrete-Time Stochastic Neural Networks With Time-Varying Delays

Zheng-Guang Wu, Peng Shi, Senior Member, IEEE, Hongye Su, and Jian Chu

Abstract— In this paper, the problem of dissipativity analysis is discussed for discrete-time stochastic neural networks with time-varying discrete and finite-distributed delays. The discretized Jensen inequality and the lower bounds lemma are adopted to deal with the involved finite-sum quadratic terms, and a sufficient condition is derived to ensure that the considered neural networks are globally asymptotically stable in the mean square and strictly (Q, S, R)-γ-dissipative. The condition is delay-dependent in the sense that it depends not only on the discrete delay but also on the finite-distributed delay. Based on the dissipativity criterion, some special cases are also discussed. Compared with the existing ones, the merit of the proposed results lies in their reduced conservatism and fewer decision variables. Three examples are given to illustrate the effectiveness and benefits of our theoretical results.

Index Terms— Delay-dependent, dissipativity, neural networks, stochastic systems, time-delays.

I. INTRODUCTION

For more than a decade, many researchers have paid attention to neural networks. This stems from the fact that neural networks have been successfully applied in a variety of areas, such as signal processing, pattern recognition, and combinatorial optimization [1]–[4]. In recent years, neural networks with time-delay have also been studied extensively, because time delays occur in the electronic implementation of analog neural networks due to signal transmission and the finite switching speed of amplifiers, and may lead to instability and poor performance of systems [5]–[7]. A great number of important and interesting results have been obtained on the analysis and synthesis of time-delay neural networks, including stability analysis, state estimation, passivity, etc. For example, the stability problem has been investigated for continuous-time neural networks with time-delay in [8]–[11], and several stability conditions have been proposed based on the linear matrix inequality (LMI) approach.

Manuscript received April 22, 2012; revised August 27, 2012; accepted December 2, 2012. Date of publication January 1, 2013; date of current version January 30, 2013. This work was supported in part by the National Natural Science Foundation of China, under Grant 61174029 and Grant 61174058, the National Key Basic Research Program, China, under Grant 2012CB215202, and the 111 Project, under Grant B12018. Z.-G. Wu, H. Su, and J. Chu are with the National Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China (e-mail: [email protected]; [email protected]; [email protected]). P. Shi is with the School of Engineering and Science, Victoria University, Melbourne, 8001, Australia, and also with the School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide 5005, Australia (e-mail: [email protected]). Digital Object Identifier 10.1109/TNNLS.2012.2232938

In the discrete-time setting, some sufficient criteria have been established in [12] to ensure the delay-dependent stability of neural networks with time-varying delay. In [13], the state estimation problem has been investigated for continuous-time neural networks with time-delay, and some algorithms have been presented to compute the desired state estimators. Results on the state estimation problem of discrete-time neural networks with time-delay can be found in [14]. The problem of passivity analysis for time-delay neural networks has been considered in [15]–[17], and several types of delay-dependent passivity conditions have been derived. It should be pointed out that all the time-delays considered in the above-mentioned references are of the discrete nature. It is well known that neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, so that there will be a distribution of propagation delays. In this case, the signal propagation is no longer instantaneous and cannot be modeled with discrete delays [18], [19]. Generally speaking, the distributed delays in neural networks can be classified into two types: finite-distributed delays and infinite-distributed delays. Recent work has advanced the research on neural networks with distributed delays. Continuous-time neural networks with infinite-distributed delays have been investigated in [19]. Based on the LMI method and other approaches, continuous-time neural networks with finite-distributed delays have also been discussed in [20] and [21]. It seems that the corresponding results for discrete-time neural networks with distributed delays are relatively few. However, it has been shown that discrete-time neural networks are even more important than continuous-time neural networks in today's digital world. In [22], the authors made the first attempt to discuss the problem of stability analysis for discrete-time neural networks with infinite-distributed delays. The results obtained in [22] include both the deterministic and stochastic cases, and thus are very general and powerful. The problem of passivity analysis for a class of uncertain discrete-time stochastic neural networks with infinite-distributed delays has been investigated in [23], where some delay-dependent criteria have been established to ensure the passivity of the considered neural networks. In [24], the authors introduced the finite-distributed delay in the discrete-time setting for the first time, and further investigated the state estimation problem for discrete-time neural networks with Markov jumping parameters as well as mode-dependent mixed time-delays, and sufficient conditions have been established that guarantee the existence of the state estimators.



In addition, the past decade has witnessed the rapid development of dissipativity theory in the systems and control area. The reason is mainly twofold: 1) dissipativity theory gives a framework for the design and analysis of control systems using an input–output description based on energy-related considerations [14], [25], and 2) dissipativity theory serves as a powerful or even indispensable tool in characterizing important system behaviors, such as stability and passivity, and has close connections with the passivity theorem, bounded real lemma, Kalman–Yakubovich lemma, and the circle criterion [26]. Very recently, in [27], the robust reliable dissipative filtering problem has been investigated for uncertain discrete-time singular systems with interval time-varying delays and sensor failures. The problem of static output-feedback dissipative control has been studied for linear continuous-time systems based on an augmented system approach in [28], where a necessary and sufficient condition for the existence of a desired controller has been given, and a corresponding iterative algorithm has also been developed to solve the condition. Note that the problem of dissipativity analysis has also been investigated for neural networks with time-delay in [29] and [30], and some delay-dependent sufficient conditions have been given to guarantee the dissipativity of the considered neural networks. However, the neural networks considered in [29] and [30] are of the continuous-time nature. Unfortunately, up to now, there is little information in the published literature about the dissipativity analysis problem for discrete-time stochastic neural networks with discrete and finite-distributed delays. It is, therefore, the main purpose of the present research to bridge such a gap by making the first attempt to deal with the dissipativity analysis problem for discrete-time stochastic neural networks with both time-varying discrete and finite-distributed delays.

In this paper, we consider the problem of dissipativity analysis for discrete-time stochastic neural networks with both time-varying discrete and finite-distributed delays. Based on the discretized Jensen inequality and the lower bounds lemma proposed in [31], a condition is established ensuring the stability and strict (Q, S, R)-γ-dissipativity of the considered neural networks, which depends not only on the discrete delay but also on the finite-distributed delay. Based on the derived condition, we also develop some results on several special cases. The obtained results have advantages over the existing ones because they not only are less conservative but also require fewer decision variables. Three numerical examples are given to illustrate the effectiveness and superiority of the proposed methods.

Notation: The notation used throughout this paper is fairly standard. R^n and R^{m×n} denote the n-dimensional Euclidean space and the set of all m × n real matrices, respectively. The notation X > Y (X ≥ Y), where X and Y are symmetric matrices, means that X − Y is positive definite (positive semidefinite). I and 0 represent the identity matrix and a zero matrix, respectively. (Ω, F, P) is a probability space, Ω is the sample space, F is the σ-algebra of subsets of the sample space, and P is the probability measure on F. E[·] denotes the expectation operator with respect to the probability measure P. For integers a and b with a < b,

N[a, b] = {a, a + 1, . . . , b − 1, b}. The superscript "T" represents the transpose, and diag{· · ·} stands for a block-diagonal matrix. ||·|| denotes the Euclidean norm of a vector and its induced norm of a matrix, and l2[0, +∞) is the space of square-summable infinite sequences. For an arbitrary matrix B and two symmetric matrices A and C,

$$\begin{bmatrix} A & B \\ * & C \end{bmatrix}$$

denotes a symmetric matrix, where "∗" denotes the term that is induced by symmetry. Matrices, if they are not explicitly specified, are assumed to have compatible dimensions.

II. PRELIMINARIES

Consider the following discrete-time stochastic neural network with mixed time-delays:

$$
\begin{cases}
x(k+1) = Dx(k) + Ag(x(k)) + Bg(x(k-d(k))) \\
\qquad\qquad + u(k) + C\displaystyle\sum_{v=1}^{\tau(k)} g(x(k-v)) \\
\qquad\qquad + \hat\sigma(k, x(k), x(k-d(k)))\,\omega(k) \\
y(k) = g(x(k)) \\
x(k) = \phi(k), \quad k \in N[-\max\{d_2, \tau_2\}, 0]
\end{cases} \tag{1}
$$

where $x(k) = [x_1(k)\; x_2(k)\; \cdots\; x_n(k)]^T$, $g(x(k)) = [g_1(x_1(k))\; g_2(x_2(k))\; \cdots\; g_n(x_n(k))]^T$, $x_i(k)$ is the state of the i-th neuron at time k, $g_i(x_i(k))$ denotes the activation function of the i-th neuron at time k, y(k) is the output of the neural network, u(k) ∈ l2[0, +∞) is the input, and the function φ(k) is the initial condition; $D = \mathrm{diag}\{\bar d_1, \bar d_2, \ldots, \bar d_n\}$ describes the rate with which each neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; $A = (a_{ij})_{n\times n}$, $B = (b_{ij})_{n\times n}$, and $C = (c_{ij})_{n\times n}$ are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix; ω(k) is a scalar Wiener process (Brownian motion) on (Ω, F, P) with

$$E[\omega(k)] = 0, \quad E[\omega(k)^2] = 1, \quad E[\omega(i)\omega(j)] = 0 \;\; (i \neq j) \tag{2}$$

and d(k) and τ(k) denote the discrete delay and the finite-distributed delay, respectively, and satisfy $d_1 \le d(k) \le d_2$ and $\tau_1 \le \tau(k) \le \tau_2$, where $d_2 \ge d_1 > 0$ and $\tau_2 \ge \tau_1 > 0$ are prescribed integers.

The following assumptions are imposed on neural network (1), which will be needed to develop our main results in this paper.

Assumption 1 [20]: Each activation function $g_i(\cdot)$ in (1) is continuous and bounded, $g_i(0) = 0$, and there exist constants $\delta_i$ and $\rho_i$ such that

$$\delta_i \le \frac{g_i(\alpha_1) - g_i(\alpha_2)}{\alpha_1 - \alpha_2} \le \rho_i, \quad i = 1, 2, \ldots, n \tag{3}$$

where $\alpha_1, \alpha_2 \in R$ and $\alpha_1 \neq \alpha_2$.
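To make the sector condition (3) concrete, the short script below numerically estimates admissible bounds δ_i and ρ_i for a sample activation; the choice g(s) = tanh(s) is only an illustrative assumption, not an activation used in this paper.

```python
import numpy as np

# Hypothetical activation used only to illustrate Assumption 1.
def g(s):
    return np.tanh(s)

# Difference quotients (g(a1) - g(a2)) / (a1 - a2) over many pairs a1 != a2.
rng = np.random.default_rng(0)
a1 = rng.uniform(-5.0, 5.0, 100000)
a2 = rng.uniform(-5.0, 5.0, 100000)
mask = np.abs(a1 - a2) > 1e-9
q = (g(a1[mask]) - g(a2[mask])) / (a1[mask] - a2[mask])

# Empirical sector bounds: any delta_i <= q.min() and rho_i >= q.max() satisfy (3).
print("estimated delta_i:", q.min())   # close to 0 for tanh
print("estimated rho_i:  ", q.max())   # close to 1 for tanh
```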

Assumption 2 [17]: $\hat\sigma(k) = \hat\sigma(k, x(k), x(k-d(k))): N \times R^n \times R^n \to R^n$ is the diffusion coefficient vector and is assumed to satisfy

$$
\hat\sigma(k)^T \hat\sigma(k) \le
\begin{bmatrix} x(k) \\ x(k-d(k)) \end{bmatrix}^T
\begin{bmatrix} G_1 & G_2 \\ * & G_3 \end{bmatrix}
\begin{bmatrix} x(k) \\ x(k-d(k)) \end{bmatrix} \tag{4}
$$

where $\begin{bmatrix} G_1 & G_2 \\ * & G_3 \end{bmatrix} \ge 0$ is a known constant matrix.
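For readers who want to experiment with model (1), the following sketch simulates one sample path under assumed data; the system matrices, the activation g(s) = tanh(s), the zero input u(k), and the diffusion term chosen here are illustrative assumptions only, not the examples of Section IV.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
d1, d2, tau1, tau2 = 1, 3, 1, 2

# Illustrative (assumed) system data.
D = np.diag([0.3, 0.2])
A = 0.1 * rng.standard_normal((n, n))
B = 0.1 * rng.standard_normal((n, n))
C = 0.05 * rng.standard_normal((n, n))
g = np.tanh                                   # sector-bounded activation (assumption)
sigma_hat = lambda x, xd: 0.05 * (x + xd)     # one admissible diffusion choice (assumption)

T = 200
hist = max(d2, tau2)
x = np.zeros((T + hist + 1, n))
x[:hist + 1] = rng.uniform(-0.5, 0.5, (hist + 1, n))   # initial condition phi(k)

for k in range(hist, T + hist):
    dk = rng.integers(d1, d2 + 1)          # time-varying discrete delay d(k)
    tk = rng.integers(tau1, tau2 + 1)      # time-varying distributed delay tau(k)
    dist = sum(g(x[k - v]) for v in range(1, tk + 1))
    u = 0.0                                # zero input; replace with an l2 input if desired
    w = rng.standard_normal()              # scalar noise, E[w] = 0, E[w^2] = 1
    x[k + 1] = (D @ x[k] + A @ g(x[k]) + B @ g(x[k - dk])
                + C @ dist + u + sigma_hat(x[k], x[k - dk]) * w)

y = g(x)                                   # output y(k) = g(x(k))
print("final ||x||:", np.linalg.norm(x[-1]))
```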

We now introduce the following definitions for neural network (1).

Definition 1: The neural network (1) with u(k) = 0 is said to be globally asymptotically stable in the mean square if, for each solution x(k) of (1) with u(k) = 0, the following holds:

$$\lim_{k \to +\infty} E[\|x(k)\|^2] = 0. \tag{5}$$

Before giving the definition of dissipativity for neural network (1), we introduce a function $r(u, y): R^m \times R^l \to R$ satisfying r(0, 0) = 0, associated with neural network (1), which is called a supply rate if, for all input–output pairs (u, y) satisfying (1), r(u, y) satisfies $\sum_{\alpha=k_1}^{k_2} |E[r(u(\alpha), y(\alpha))]| < +\infty$, $k_2 \ge k_1 \ge 0$. The classical form of the dissipation inequality [25] can be extended to the neural network (1) as follows:

$$\sum_{\alpha=0}^{\tau} E[r(u(\alpha), y(\alpha))] \ge 0 \quad \forall \tau \ge 0. \tag{6}$$

The neural network (1) is dissipative with respect to the supply rate r(u, y) if the dissipation inequality (6) holds under zero initial condition for any nonzero input u ∈ l2[0, +∞). It is easy to see that if there exists a nonnegative-definite function $V(x): R^n \to R$ satisfying V(0) = 0 and

$$E[V(x(\tau+1))] - E[V(x(0))] \le \sum_{\alpha=0}^{\tau} E[r(u(\alpha), y(\alpha))], \quad \tau \ge 0 \tag{7}$$

then the neural network (1) is dissipative with respect to the supply rate r(u, y). In this case, the function V(x) is called a storage function for the neural network (1). Motivated by [32], in this paper we consider a quadratic supply rate r(u, y). Specifically, the supply rate r(u, y) associated with neural network (1) is defined by

$$r(u, y) = y^T Q y + 2y^T S u + u^T R u \tag{8}$$

where Q, S, and R are real matrices with Q and R symmetric. In this paper, it is assumed that Q ≤ 0.

Definition 2: The neural network (1) is said to be (Q, S, R)-dissipative if the dissipation inequality (6) holds under zero initial condition for any nonzero input u ∈ l2[0, +∞). Furthermore, if for some scalar γ > 0 the dissipation inequality

$$\sum_{\alpha=0}^{\tau} E[r(u(\alpha), y(\alpha))] \ge \gamma \sum_{\alpha=0}^{\tau} E[u(\alpha)^T u(\alpha)] \quad \forall \tau \ge 0 \tag{9}$$

holds under zero initial condition for any nonzero input u ∈ l2[0, +∞), then the neural network (1) is said to be strictly (Q, S, R)-γ-dissipative.
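As a quick illustration of the quadratic supply rate (8) and of the strict dissipativity inequality (9), the helper below accumulates both sides of (9) along recorded input/output sequences; the sequences themselves are placeholders to be supplied by the user, and the weighting matrices shown are the ones later used in Example 1.

```python
import numpy as np

def supply_rate(u, y, Q, S, R):
    """Quadratic supply rate r(u, y) = y'Qy + 2y'Su + u'Ru from (8)."""
    return y @ Q @ y + 2.0 * y @ S @ u + u @ R @ u

def strict_dissipativity_gap(us, ys, Q, S, R, gamma):
    """
    One-realization value of sum_k r(u(k), y(k)) - gamma * sum_k u(k)'u(k).
    For a strictly (Q, S, R)-gamma-dissipative system, the expectation of this
    quantity over noise realizations (under zero initial condition) is
    nonnegative for every horizon, which is what (9) states.
    """
    lhs = sum(supply_rate(u, y, Q, S, R) for u, y in zip(us, ys))
    rhs = gamma * sum(u @ u for u in us)
    return lhs - rhs

# Weighting matrices of Example 1 (n = 2).
Q = -np.eye(2)
S = np.array([[1.0, 0.0], [1.0, 1.0]])
R = 3.0 * np.eye(2)
u0, y0 = np.array([0.1, -0.2]), np.array([0.05, 0.0])
print(supply_rate(u0, y0, Q, S, R))
```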

In this paper, we focus on the problem of dissipativity analysis for neural network (1). More specifically, by utilizing the LMI approach, we will derive a delay-dependent sufficient condition under which the following two requirements are met simultaneously: 1) the neural network (1) with u(k) = 0 is globally asymptotically stable in the mean square; 2) the neural network (1) is strictly (Q, S, R)-γ-dissipative. In this case, the neural network (1) is said to be globally asymptotically stable in the mean square and strictly (Q, S, R)-γ-dissipative.

In order to solve the above-mentioned problem, we present the following results.

Lemma 1 (Discretized Jensen Inequality) [33]: For any matrix M > 0, integers γ1 and γ2 satisfying γ2 > γ1, and vector function ω: N[γ1, γ2] → R^n such that the sums concerned are well defined,

$$
(\gamma_2 - \gamma_1 + 1) \sum_{\alpha=\gamma_1}^{\gamma_2} \omega(\alpha)^T M \omega(\alpha)
\ge \Big(\sum_{\alpha=\gamma_1}^{\gamma_2} \omega(\alpha)\Big)^T M \Big(\sum_{\alpha=\gamma_1}^{\gamma_2} \omega(\alpha)\Big). \tag{10}
$$
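A direct numerical sanity check of the discretized Jensen inequality (10) on random data (a random positive definite M and a random sequence ω) is sketched below.

```python
import numpy as np

rng = np.random.default_rng(2)
n, g1, g2 = 3, 2, 9                    # gamma1 < gamma2

Mh = rng.standard_normal((n, n))
M = Mh @ Mh.T + n * np.eye(n)          # M > 0
w = rng.standard_normal((g2 - g1 + 1, n))

lhs = (g2 - g1 + 1) * sum(wi @ M @ wi for wi in w)
s = w.sum(axis=0)
rhs = s @ M @ s
print(lhs >= rhs - 1e-9)               # True: (10) holds
```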



Lemma 2 [31], [34]: For any matrix $\begin{bmatrix} M & S \\ * & M \end{bmatrix} > 0$, integers $d_1$, $d_2$, $d(k)$ satisfying $d_1 \le d(k) \le d_2$, and vector function $x(k+\cdot): N[-d_2, -d_1] \to R^n$ such that the sums concerned are well defined,

$$
-(d_2 - d_1) \sum_{\alpha=k-d_2}^{k-d_1-1} \zeta(\alpha)^T M \zeta(\alpha) \le \varpi(k)^T \Pi\, \varpi(k) \tag{11}
$$

where $\zeta(\alpha) = x(\alpha+1) - x(\alpha)$ and

$$
\varpi(k) = \begin{bmatrix} x(k-d_1)^T & x(k-d(k))^T & x(k-d_2)^T \end{bmatrix}^T, \qquad
\Pi = \begin{bmatrix} -M & M-S & S \\ * & -2M + S + S^T & -S+M \\ * & * & -M \end{bmatrix}.
$$
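Similarly, inequality (11) can be spot-checked numerically: pick M and S such that the block matrix in the lemma is positive definite, generate a random sequence x(k+·), and compare both sides. The construction S = 0.2·M below is just one convenient way to satisfy the block condition.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d1, d2, dk, k = 2, 2, 6, 4, 10        # d1 <= d(k) <= d2

Mh = rng.standard_normal((n, n))
M = Mh @ Mh.T + n * np.eye(n)
S = 0.2 * M                               # makes [[M, S], [S.T, M]] > 0

x = rng.standard_normal((k + 1, n))       # x(k-d2), ..., x(k)
zeta = np.diff(x, axis=0)                 # zeta(a) = x(a+1) - x(a)

lhs = -(d2 - d1) * sum(zeta[a] @ M @ zeta[a] for a in range(k - d2, k - d1))
varpi = np.concatenate([x[k - d1], x[k - dk], x[k - d2]])
Pi = np.block([[-M, M - S, S],
               [(M - S).T, -2 * M + S + S.T, -S + M],
               [S.T, (-S + M).T, -M]])
print(lhs <= varpi @ Pi @ varpi + 1e-9)   # True: (11) holds
```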

III. MAIN RESULTS

In this section, we will focus our study on the dissipativity analysis of neural network (1). For simplicity, in the following we denote

$$
F_1 = \mathrm{diag}\{\delta_1\rho_1, \delta_2\rho_2, \ldots, \delta_n\rho_n\}, \quad
F_2 = \mathrm{diag}\Big\{\frac{\delta_1+\rho_1}{2}, \frac{\delta_2+\rho_2}{2}, \ldots, \frac{\delta_n+\rho_n}{2}\Big\}
$$

$$
d_{12} = d_2 - d_1, \quad \vartheta = \frac{\tau_2(\tau_2+\tau_1)(\tau_2-\tau_1+1)}{2}.
$$

Theorem 1: Under Assumptions 1 and 2, neural network (1) is globally asymptotically stable in the mean square and strictly (Q, S, R)-γ-dissipative, if there exist P > 0, Q1 > 0, Q2 > 0, Q3 > 0, Z1 > 0, Z2 > 0, R > 0, S, diagonal matrices Y > 0, H > 0, and scalars ρ > 0 and γ > 0 such that

$$P + d_1^2 Z_1 + d_{12}^2 Z_2 \le \rho I \tag{12a}$$




$$
\Xi =
\begin{bmatrix}
\Xi_{11} & Z_1 & \rho G_2 & 0 & \Xi_{15} & \Xi_{16} & \Xi_{17} & \Xi_{18} \\
* & \Xi_{22} & \Xi_{23} & S & 0 & 0 & 0 & 0 \\
* & * & \Xi_{33} & \Xi_{34} & 0 & F_2 H & 0 & 0 \\
* & * & * & \Xi_{44} & 0 & 0 & 0 & 0 \\
* & * & * & * & \Xi_{55} & \Xi_{56} & \Xi_{57} & \Xi_{58} \\
* & * & * & * & * & \Xi_{66} & \Xi_{67} & \Xi_{68} \\
* & * & * & * & * & * & \Xi_{77} & \Xi_{78} \\
* & * & * & * & * & * & * & \Xi_{88}
\end{bmatrix} < 0 \tag{12b}
$$

$$
\begin{bmatrix} Z_2 & S \\ * & Z_2 \end{bmatrix} > 0 \tag{12c}
$$

where

$$
\begin{aligned}
\Xi_{11} &= D P D - P + Q_1 + (d_{12}+1)Q_2 + Q_3 + d_1^2 (D-I) Z_1 (D-I) \\
&\quad + d_{12}^2 (D-I) Z_2 (D-I) - Z_1 + \rho G_1 - F_1 Y \\
\Xi_{15} &= D P A + d_1^2 (D-I) Z_1 A + d_{12}^2 (D-I) Z_2 A + F_2 Y \\
\Xi_{16} &= D P B + d_1^2 (D-I) Z_1 B + d_{12}^2 (D-I) Z_2 B \\
\Xi_{17} &= D P C + d_1^2 (D-I) Z_1 C + d_{12}^2 (D-I) Z_2 C \\
\Xi_{18} &= D P + d_1^2 (D-I) Z_1 + d_{12}^2 (D-I) Z_2 \\
\Xi_{22} &= -Q_1 - Z_1 - Z_2, \quad \Xi_{23} = Z_2 - S \\
\Xi_{33} &= -Q_2 - 2Z_2 + S + S^T + \rho G_3 - F_1 H \\
\Xi_{34} &= -S + Z_2, \quad \Xi_{44} = -Q_3 - Z_2 \\
\Xi_{55} &= A^T P A + d_1^2 A^T Z_1 A + d_{12}^2 A^T Z_2 A + \vartheta R - Y - Q \\
\Xi_{56} &= A^T P B + d_1^2 A^T Z_1 B + d_{12}^2 A^T Z_2 B \\
\Xi_{57} &= A^T P C + d_1^2 A^T Z_1 C + d_{12}^2 A^T Z_2 C \\
\Xi_{58} &= -S + A^T P + d_1^2 A^T Z_1 + d_{12}^2 A^T Z_2 \\
\Xi_{66} &= B^T P B + d_1^2 B^T Z_1 B + d_{12}^2 B^T Z_2 B - H \\
\Xi_{67} &= B^T P C + d_1^2 B^T Z_1 C + d_{12}^2 B^T Z_2 C \\
\Xi_{68} &= B^T P + d_1^2 B^T Z_1 + d_{12}^2 B^T Z_2 \\
\Xi_{77} &= C^T P C + d_1^2 C^T Z_1 C + d_{12}^2 C^T Z_2 C - R \\
\Xi_{78} &= C^T P + d_1^2 C^T Z_1 + d_{12}^2 C^T Z_2 \\
\Xi_{88} &= P + d_1^2 Z_1 + d_{12}^2 Z_2 - R + \gamma I.
\end{aligned}
$$

Proof: First, we prove the stability of neural network (1). To this end, we define η(k) = x(k+1) − x(k) and consider the following Lyapunov functional for neural network (1) with u(k) = 0:

$$V(k, x(k)) = \sum_{s=1}^{7} V_s(k, x(k)) \tag{13}$$

where

$$
\begin{aligned}
V_1(k, x(k)) &= x(k)^T P x(k) \\
V_2(k, x(k)) &= \sum_{\alpha=k-d_1}^{k-1} x(\alpha)^T Q_1 x(\alpha) \\
V_3(k, x(k)) &= \sum_{\beta=-d_2+1}^{-d_1+1} \sum_{\alpha=k-1+\beta}^{k-1} x(\alpha)^T Q_2 x(\alpha) \\
V_4(k, x(k)) &= \sum_{\alpha=k-d_2}^{k-1} x(\alpha)^T Q_3 x(\alpha) \\
V_5(k, x(k)) &= d_1 \sum_{\beta=-d_1}^{-1} \sum_{\alpha=k+\beta}^{k-1} \eta(\alpha)^T Z_1 \eta(\alpha) \\
V_6(k, x(k)) &= d_{12} \sum_{\beta=-d_2}^{-d_1-1} \sum_{\alpha=k+\beta}^{k-1} \eta(\alpha)^T Z_2 \eta(\alpha) \\
V_7(k, x(k)) &= \tau_2 \sum_{\beta=\tau_1}^{\tau_2} \sum_{v=1}^{\beta} \sum_{\alpha=k-v}^{k-1} g(x(\alpha))^T R g(x(\alpha)).
\end{aligned}
$$

Letting E[ΔV(k)] = E[V(k+1, x(k+1)) − V(k, x(k))], along the solution of neural network (1) with u(k) = 0 we have

$$
\begin{aligned}
E[\Delta V_1(k)] &= E\big[x(k+1)^T P x(k+1) - x(k)^T P x(k)\big] \\
&= E\Big[x(k)^T D P D x(k) + 2x(k)^T D P A g(x(k)) + 2x(k)^T D P B g(x(k-d(k))) \\
&\quad + 2x(k)^T D P C \sum_{v=1}^{\tau(k)} g(x(k-v)) + g(x(k))^T A^T P A g(x(k)) \\
&\quad + 2 g(x(k))^T A^T P B g(x(k-d(k))) + 2 g(x(k))^T A^T P C \sum_{v=1}^{\tau(k)} g(x(k-v)) \\
&\quad + g(x(k-d(k)))^T B^T P B g(x(k-d(k))) + 2 g(x(k-d(k)))^T B^T P C \sum_{v=1}^{\tau(k)} g(x(k-v)) \\
&\quad + \sum_{v=1}^{\tau(k)} g(x(k-v))^T C^T P C \sum_{v=1}^{\tau(k)} g(x(k-v)) + \hat\sigma(k)^T P \hat\sigma(k) - x(k)^T P x(k)\Big] \tag{14}
\end{aligned}
$$

$$E[\Delta V_2(k)] = E\big[x(k)^T Q_1 x(k) - x(k-d_1)^T Q_1 x(k-d_1)\big] \tag{15}$$

$$
\begin{aligned}
E[\Delta V_3(k)] &= E\Big[(d_2-d_1+1)x(k)^T Q_2 x(k) - \sum_{\alpha=k-d_2}^{k-d_1} x(\alpha)^T Q_2 x(\alpha)\Big] \\
&\le E\big[(d_{12}+1)x(k)^T Q_2 x(k) - x(k-d(k))^T Q_2 x(k-d(k))\big] \tag{16}
\end{aligned}
$$

$$E[\Delta V_4(k)] = E\big[x(k)^T Q_3 x(k) - x(k-d_2)^T Q_3 x(k-d_2)\big] \tag{17}$$

$$E[\Delta V_5(k)] = E\Big[d_1^2\,\eta(k)^T Z_1 \eta(k) - d_1 \sum_{\alpha=k-d_1}^{k-1} \eta(\alpha)^T Z_1 \eta(\alpha)\Big] \tag{18}$$

$$E[\Delta V_6(k)] = E\Big[d_{12}^2\,\eta(k)^T Z_2 \eta(k) - d_{12} \sum_{\alpha=k-d_2}^{k-d_1-1} \eta(\alpha)^T Z_2 \eta(\alpha)\Big]. \tag{19}$$


$$
E[\Delta V_7(k)] \le E\big[\vartheta\, g(x(k))^T R g(x(k))\big] - E\Big[\tau_2 \sum_{v=1}^{\tau(k)} g(x(k-v))^T R g(x(k-v))\Big]. \tag{20}
$$

On the other hand, applying Lemma 1, we have

$$
-d_1 \sum_{\alpha=k-d_1}^{k-1} \eta(\alpha)^T Z_1 \eta(\alpha)
\le -\Big(\sum_{\alpha=k-d_1}^{k-1} \eta(\alpha)\Big)^T Z_1 \Big(\sum_{\alpha=k-d_1}^{k-1} \eta(\alpha)\Big)
= -x(k)^T Z_1 x(k) + 2x(k)^T Z_1 x(k-d_1) - x(k-d_1)^T Z_1 x(k-d_1). \tag{21}
$$

Note that η(k) = x(k+1) − x(k), and thus in the case of u(k) = 0

$$
\eta(k) = (D-I)x(k) + A g(x(k)) + B g(x(k-d(k))) + C\sum_{v=1}^{\tau(k)} g(x(k-v)) + \hat\sigma(k, x(k), x(k-d(k)))\,\omega(k). \tag{22}
$$

Based on (18), (21), and (22), we have

$$
\begin{aligned}
E[\Delta V_5(k)] \le E\Big[ & d_1^2 x(k)^T (D-I) Z_1 (D-I) x(k) + 2d_1^2 x(k)^T (D-I) Z_1 A g(x(k)) \\
& + 2d_1^2 x(k)^T (D-I) Z_1 B g(x(k-d(k))) + 2d_1^2 x(k)^T (D-I) Z_1 C \sum_{v=1}^{\tau(k)} g(x(k-v)) \\
& + d_1^2 g(x(k))^T A^T Z_1 A g(x(k)) + 2d_1^2 g(x(k))^T A^T Z_1 B g(x(k-d(k))) \\
& + 2d_1^2 g(x(k))^T A^T Z_1 C \sum_{v=1}^{\tau(k)} g(x(k-v)) + d_1^2 g(x(k-d(k)))^T B^T Z_1 B g(x(k-d(k))) \\
& + 2d_1^2 g(x(k-d(k)))^T B^T Z_1 C \sum_{v=1}^{\tau(k)} g(x(k-v)) \\
& + d_1^2 \sum_{v=1}^{\tau(k)} g(x(k-v))^T C^T Z_1 C \sum_{v=1}^{\tau(k)} g(x(k-v)) + d_1^2 \hat\sigma(k)^T Z_1 \hat\sigma(k) \\
& - x(k)^T Z_1 x(k) + 2x(k)^T Z_1 x(k-d_1) - x(k-d_1)^T Z_1 x(k-d_1)\Big]. \tag{23}
\end{aligned}
$$

Furthermore, according to Lemma 2, we have

$$
\begin{aligned}
E[\Delta V_6(k)] \le E\Big[ & d_{12}^2 x(k)^T (D-I) Z_2 (D-I) x(k) + 2d_{12}^2 x(k)^T (D-I) Z_2 A g(x(k)) \\
& + 2d_{12}^2 x(k)^T (D-I) Z_2 B g(x(k-d(k))) + 2d_{12}^2 x(k)^T (D-I) Z_2 C \sum_{v=1}^{\tau(k)} g(x(k-v)) \\
& + d_{12}^2 g(x(k))^T A^T Z_2 A g(x(k)) + 2d_{12}^2 g(x(k))^T A^T Z_2 B g(x(k-d(k))) \\
& + 2d_{12}^2 g(x(k))^T A^T Z_2 C \sum_{v=1}^{\tau(k)} g(x(k-v)) + d_{12}^2 g(x(k-d(k)))^T B^T Z_2 B g(x(k-d(k))) \\
& + 2d_{12}^2 g(x(k-d(k)))^T B^T Z_2 C \sum_{v=1}^{\tau(k)} g(x(k-v)) \\
& + d_{12}^2 \sum_{v=1}^{\tau(k)} g(x(k-v))^T C^T Z_2 C \sum_{v=1}^{\tau(k)} g(x(k-v)) + d_{12}^2 \hat\sigma(k)^T Z_2 \hat\sigma(k) \\
& - x(k-d_1)^T Z_2 x(k-d_1) + 2x(k-d_1)^T (Z_2 - S) x(k-d(k)) + 2x(k-d_1)^T S x(k-d_2) \\
& + x(k-d(k))^T (-2Z_2 + S + S^T) x(k-d(k)) + 2x(k-d(k))^T (-S + Z_2) x(k-d_2) \\
& - x(k-d_2)^T Z_2 x(k-d_2)\Big]. \tag{24}
\end{aligned}
$$

Using Lemma 1 again, we have

$$
-\tau_2 \sum_{v=1}^{\tau(k)} g(x(k-v))^T R\, g(x(k-v)) \le -\Big(\sum_{v=1}^{\tau(k)} g(x(k-v))\Big)^T R \Big(\sum_{v=1}^{\tau(k)} g(x(k-v))\Big). \tag{25}
$$

Thus

$$
E[\Delta V_7(k)] \le E\big[\vartheta\, g(x(k))^T R g(x(k))\big] - E\Big[\Big(\sum_{v=1}^{\tau(k)} g(x(k-v))\Big)^T R \Big(\sum_{v=1}^{\tau(k)} g(x(k-v))\Big)\Big]. \tag{26}
$$

On the other hand, from Assumption 2 and (12a), we have

$$
\hat\sigma(k)^T \big(P + d_1^2 Z_1 + d_{12}^2 Z_2\big) \hat\sigma(k) \le \rho\, x(k)^T G_1 x(k) + 2\rho\, x(k)^T G_2 x(k-d(k)) + \rho\, x(k-d(k))^T G_3 x(k-d(k)). \tag{27}
$$

From Assumption 1 and [20], we have that for any i = 1, 2, ..., n

$$
\big(g_i(x_i(k)) - \rho_i x_i(k)\big)\big(g_i(x_i(k)) - \delta_i x_i(k)\big) \le 0 \tag{28}
$$

which is equivalent to

$$
\begin{bmatrix} x(k) \\ g(x(k)) \end{bmatrix}^T
\begin{bmatrix} \delta_i\rho_i e_i e_i^T & -\frac{\delta_i+\rho_i}{2} e_i e_i^T \\ -\frac{\delta_i+\rho_i}{2} e_i e_i^T & e_i e_i^T \end{bmatrix}
\begin{bmatrix} x(k) \\ g(x(k)) \end{bmatrix} \le 0 \tag{29}
$$

where e_i denotes the unit column vector having a 1 in its i-th row and zeros elsewhere.


Thus, for any appropriately dimensioned diagonal matrix Y > 0, the following inequality holds:

$$
\begin{bmatrix} x(k) \\ g(x(k)) \end{bmatrix}^T
\begin{bmatrix} F_1 Y & -F_2 Y \\ * & Y \end{bmatrix}
\begin{bmatrix} x(k) \\ g(x(k)) \end{bmatrix} \le 0 \tag{30}
$$

that is,

$$
-x(k)^T F_1 Y x(k) + 2x(k)^T F_2 Y g(x(k)) - g(x(k))^T Y g(x(k)) \ge 0. \tag{31}
$$

Similarly, for any appropriately dimensioned diagonal matrix H > 0, the following inequality also holds:

$$
-x(k-d(k))^T F_1 H x(k-d(k)) + 2x(k-d(k))^T F_2 H g(x(k-d(k))) - g(x(k-d(k)))^T H g(x(k-d(k))) \ge 0. \tag{32}
$$

Hence, adding the left-hand sides of (31) and (32), we can obtain from (14)–(17) and (23)–(27) that

$$
E[\Delta V(k)] = E\Big[\sum_{s=1}^{7} \Delta V_s(k, x(k))\Big] \le E\big[\theta(k)^T \hat\Xi\, \theta(k)\big] \tag{33}
$$

where

$$
\theta(k) = \begin{bmatrix} \theta_1(k)^T & \theta_2(k)^T \end{bmatrix}^T, \quad
\theta_1(k) = \begin{bmatrix} x(k)^T & x(k-d_1)^T & x(k-d(k))^T & x(k-d_2)^T \end{bmatrix}^T
$$

$$
\theta_2(k) = \begin{bmatrix} g(x(k))^T & g(x(k-d(k)))^T & \Big(\sum_{v=1}^{\tau(k)} g(x(k-v))\Big)^T \end{bmatrix}^T
$$

$$
\hat\Xi =
\begin{bmatrix}
\Xi_{11} & Z_1 & \rho G_2 & 0 & \Xi_{15} & \Xi_{16} & \Xi_{17} \\
* & \Xi_{22} & \Xi_{23} & S & 0 & 0 & 0 \\
* & * & \Xi_{33} & \Xi_{34} & 0 & F_2 H & 0 \\
* & * & * & \Xi_{44} & 0 & 0 & 0 \\
* & * & * & * & \Xi_{55} & \Xi_{56} & \Xi_{57} \\
* & * & * & * & * & \Xi_{66} & \Xi_{67} \\
* & * & * & * & * & * & \Xi_{77}
\end{bmatrix}.
$$

It is clear that we can conclude from (12b) that there exists a scalar ρ̄ > 0 such that

$$
E[\Delta V(k)] < -\bar\rho\, E[\|x(k)\|^2] \tag{34}
$$

which implies that for any k ≥ 0

$$
-E[V(0, x(0))] \le E[V(k+1, x(k+1))] - E[V(0, x(0))] = \sum_{\alpha=0}^{k} E[\Delta V(\alpha)] \le -\bar\rho \sum_{\alpha=0}^{k} E[\|x(\alpha)\|^2]. \tag{35}
$$

So the following inequality holds:

$$
E\Big[\sum_{\alpha=0}^{k} \|x(\alpha)\|^2\Big] \le \frac{1}{\bar\rho}\, E[V(0, x(0))] < \infty \tag{36}
$$

which in turn implies $\lim_{k\to+\infty} E[\|x(k)\|^2] = 0$. According to Definition 1, neural network (1) is globally asymptotically stable in the mean square.

Next, we study the dissipativity of neural network (1). To this end, we consider the Lyapunov functional (13) and the following index for neural network (1):

$$
J_{\tau, u} = \sum_{k=0}^{\tau} E\big[y(k)^T Q y(k) + 2y(k)^T S u(k) + u(k)^T (R - \gamma I) u(k)\big]. \tag{37}
$$

Then, we can find that

$$
\sum_{k=0}^{\tau} E[\Delta V(k)] - J_{\tau, u} \le \sum_{k=0}^{\tau} E\big[\bar\theta(k)^T \Xi\, \bar\theta(k)\big] \tag{38}
$$

where

$$
\bar\theta(k) = \begin{bmatrix} \theta(k) \\ u(k) \end{bmatrix}.
$$

We can get from (12b) and (38) that

$$
\sum_{k=0}^{\tau} E[\Delta V(k)] \le J_{\tau, u} \tag{39}
$$

which implies

$$
E[V(x(\tau+1))] - E[V(x(0))] \le J_{\tau, u}. \tag{40}
$$

Thus, (9) holds under zero initial condition. Therefore, according to Definition 2, neural network (1) is strictly (Q, S, R)-γ-dissipative. This completes the proof.

Remark 1: It is noted that a sufficient condition is established in Theorem 1 to ensure that the considered neural network is globally asymptotically stable in the mean square and strictly (Q, S, R)-γ-dissipative. It is clear that the given condition is delay-dependent in the sense that it depends not only on the discrete delay but also on the finite-distributed delay. Thanks to the usage of inequalities (10) and (11), only one matrix, including four decision variables, is introduced in the theoretical derivation. Thus, the expression of the LMIs in Theorem 1 is much simpler than the corresponding results based on the free-weighting matrix method [10].

Remark 2: It is worth pointing out that the LMIs in (12) are not only over the matrix variables, but also over the scalar γ. This implies that by setting δ = −γ and minimizing δ subject to (12), we can obtain the optimal dissipativity performance γ (by γ = −δ). Also, it is worth mentioning that for different d1, d2, τ1, and τ2, the optimal dissipativity performance γ achieved will in general be different, which will be illustrated via a numerical example in the next section.
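Remark 2's recipe (treat γ as a decision variable and minimize δ = −γ subject to the LMIs) is the standard LMI-optimization pattern. The sketch below shows that pattern with cvxpy on a deliberately simplified toy LMI, not the actual condition (12); the matrix A and the normalization P ≼ I used here are illustrative assumptions only.

```python
import cvxpy as cp
import numpy as np

n = 2
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])          # toy data, not from Theorem 1

P = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable()

# Minimize delta = -gamma (i.e., maximize gamma) subject to LMIs, as in Remark 2.
constraints = [
    P >> 1e-6 * np.eye(n),
    P << np.eye(n),                                   # normalization so gamma stays bounded
    A.T @ P @ A - P + gamma * np.eye(n) << 0,         # toy Lyapunov-type LMI
    gamma >= 0,
]
prob = cp.Problem(cp.Minimize(-gamma), constraints)
prob.solve(solver=cp.SCS)
print("optimal gamma:", gamma.value)
```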

We now give the characterization of dissipativity, namely passivity, of neural network (1) by choosing Q = 0, S = I, and R = 2γI, as follows.

Corollary 1: Under Assumptions 1 and 2, neural network (1) is passive, if there exist P > 0, Q1 > 0, Q2 > 0, Q3 > 0, Z1 > 0, Z2 > 0, R > 0, S, diagonal matrices Y > 0, H > 0, and scalars ρ > 0 and γ > 0 such that (12a), (12c), and (41) hold

$$
\begin{bmatrix}
\Xi_{11} & Z_1 & \rho G_2 & 0 & \Xi_{15} & \Xi_{16} & \Xi_{17} & \Xi_{18} \\
* & \Xi_{22} & \Xi_{23} & S & 0 & 0 & 0 & 0 \\
* & * & \Xi_{33} & \Xi_{34} & 0 & F_2 H & 0 & 0 \\
* & * & * & \Xi_{44} & 0 & 0 & 0 & 0 \\
* & * & * & * & \hat\Xi_{55} & \Xi_{56} & \Xi_{57} & \hat\Xi_{58} \\
* & * & * & * & * & \Xi_{66} & \Xi_{67} & \Xi_{68} \\
* & * & * & * & * & * & \Xi_{77} & \Xi_{78} \\
* & * & * & * & * & * & * & \hat\Xi_{88}
\end{bmatrix} < 0 \tag{41}
$$

where $\hat\Xi_{55}$, $\hat\Xi_{58}$, and $\hat\Xi_{88}$ are obtained from $\Xi_{55}$, $\Xi_{58}$, and $\Xi_{88}$ of Theorem 1 by setting Q = 0, S = I, and R = 2γI, and the remaining entries follow the same definitions as those in Theorem 1.

Theorem 2: Under Assumptions 1 and 2, neural network (42) is globally asymptotically stable in the mean square and strictly (Q, S, R)-γ-dissipative, if there exist P > 0, Q1 > 0, Q2 > 0, Q3 > 0, Z1 > 0, Z2 > 0, S, diagonal matrices Y > 0, H > 0, and scalars ρ > 0 and γ > 0 such that (12a), (12c), and (44) hold, where

$$
\tilde\Xi_{55} = A^T P A + d_1^2 A^T Z_1 A + d_{12}^2 A^T Z_2 A - Y - Q.
$$

Similarly, we can also specialize Theorem 2 to the passivity analysis of neural network (42) by choosing Q = 0, S = I, and R = 2γI.

Corollary 2: Under Assumptions 1 and 2, neural network (42) is passive, if there exist P > 0, Q1 > 0, Q2 > 0, Q3 > 0, Z1 > 0, Z2 > 0, S, diagonal matrices Y > 0, H > 0, and scalars ρ > 0 and γ > 0 such that (12a), (12c), and (45) hold

$$
\begin{bmatrix}
\Xi_{11} & Z_1 & \rho G_2 & 0 & \Xi_{15} & \Xi_{16} & \Xi_{18} \\
* & \Xi_{22} & Z_2 - S & S & 0 & 0 & 0 \\
* & * & \Xi_{33} & -S + Z_2 & 0 & F_2 H & 0 \\
* & * & * & -Q_3 - Z_2 & 0 & 0 & 0 \\
* & * & * & * & \check\Xi_{55} & \Xi_{56} & \hat\Xi_{58} \\
* & * & * & * & * & \Xi_{66} & \Xi_{68} \\
* & * & * & * & * & * & \hat\Xi_{88}
\end{bmatrix} < 0 \tag{45}
$$

where $\Xi_{11}$, $\Xi_{15}$, $\Xi_{16}$, $\Xi_{18}$, $\Xi_{22}$, $\Xi_{33}$, $\Xi_{56}$, $\Xi_{66}$, and $\Xi_{68}$ follow the same definitions as those in Theorem 1, $\hat\Xi_{58}$ and $\hat\Xi_{88}$ follow the same definitions as those in Corollary 1, and

$$
\check\Xi_{55} = A^T P A + d_1^2 A^T Z_1 A + d_{12}^2 A^T Z_2 A - Y.
$$

Remark 5: In [16], a delay-dependent criterion has also been established for the passivity analysis of neural network (42). It is noted that when $Z_1 = d_1^{-1} Z$ and $Z_2 = d_{12}^{-1} Z$, the Lyapunov functional (43) reduces to the Lyapunov functional applied in [16]. Thus, our Lyapunov functional is much more general than that of [16]. On the other hand, in [16] the delay term d2 − d(k) is enlarged to d2 − d1 and the term d(k) − d1 is also enlarged to d2 − d1, that is, d2 − d1 = d2 − d(k) + d(k) − d1 is enlarged to 2(d2 − d1). It is clear that this treatment may neglect some information on the time-varying delay d(k) and lead to conservatism to some extent. Owing to inequality (11), the condition in Corollary 2 takes full advantage of the information on the time-varying delay d(k). Considering the above, the result derived here is less conservative than that of [16].

Remark 6: It should be pointed out that, in order to obtain a less conservative condition, the free-weighting matrix method is used in [16], and fifteen additional matrices, including sixty decision variables, are introduced there. However, only one additional matrix, including four decision variables, is introduced in Corollary 2 of this paper. Therefore, the criterion derived here has far fewer decision variables than that of [16]. More specifically, the number of decision variables involved in [16] is 17.5n² + 4.5n + 2, whereas the number of decision variables involved in (12a), (12c), and (45) is only 4n² + 5n + 2. It is well known that more decision variables imply an increase in the computational burden, and thus our result has clear computational advantages over [16].
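The two decision-variable counts quoted in Remark 6 can be tabulated directly; the snippet below simply evaluates the stated formulas for a few network sizes n.

```python
# Decision-variable counts from Remark 6.
def count_ref16(n):      # criterion of [16]
    return 17.5 * n**2 + 4.5 * n + 2

def count_corollary2(n): # (12a), (12c), and (45)
    return 4 * n**2 + 5 * n + 2

for n in (2, 5, 10, 20):
    print(n, count_ref16(n), count_corollary2(n))
```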


IV. NUMERICAL EXAMPLES

In this section, three numerical examples are adopted to demonstrate the effectiveness and advantage of the results proposed in the above section. The first example is utilized to demonstrate the validity of the given dissipativity condition. The second example shows the reduced conservatism of the proposed passivity criterion. In the third example, a genetic regulatory network is applied to validate the proposed stability criterion.

Example 1: Consider neural network (1) with

$$
D = \begin{bmatrix} 0.04 & 0 \\ 0 & 0.01 \end{bmatrix}, \quad
A = \begin{bmatrix} 0.04 & 0.02 \\ -0.09 & 0.01 \end{bmatrix}, \quad
B = \begin{bmatrix} -0.05 & 0.07 \\ -0.02 & 0.07 \end{bmatrix}, \quad
C = \begin{bmatrix} -0.02 & 0.09 \\ -0.03 & 0.09 \end{bmatrix}
$$

and the activation functions taken as

$$
f_1(\alpha) = \frac{1}{20}\big(|\alpha+1| + |\alpha-1|\big), \quad
f_2(\alpha) = \frac{1}{10}\big(|\alpha+1| + |\alpha-1|\big).
$$

It can be verified that Assumption 1 is satisfied with δ1 = −0.1, ρ1 = 0.1, δ2 = −0.2, and ρ2 = 0.2. Thus

$$
F_1 = \begin{bmatrix} -0.01 & 0 \\ 0 & -0.04 \end{bmatrix}, \quad
F_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.
$$

The noise diffusion coefficient vector σ̂(k) satisfies Assumption 2 with

$$
\begin{bmatrix} G_1 & G_2 \\ * & G_3 \end{bmatrix} =
\begin{bmatrix}
0.01 & 0.01 & 0 & 0 \\
0.01 & 0.02 & 0.01 & 0 \\
0 & 0.01 & 0.03 & 0.01 \\
0 & 0 & 0.01 & 0.01
\end{bmatrix}.
$$

In this example, we choose

$$
Q = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}, \quad
S = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, \quad
R = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}.
$$

1) It is assumed that the distributed delay τ(k) satisfies 5 ≤ τ(k) ≤ 9, that is, τ1 = 5 and τ2 = 9. Our purpose here is to discuss the relationship between the optimal dissipativity performance γ and the discrete delay d(k). We first assume that the lower bound of the discrete delay d(k) is fixed at 2. When the upper bound of the discrete delay d(k) is fixed at 8, by using Theorem 1, the optimal dissipativity performance obtained is γ = 1.7915. However, if we assume d2 = 9, the optimal dissipativity performance obtained is γ = 1.7284. A more detailed comparison for different values of d2 is provided in Table I, which shows that when d1 is fixed, a larger d2 corresponds to a smaller optimal dissipativity performance γ. Next, the upper bound of the discrete delay d(k) is assumed to be 14. By Theorem 1, the optimal dissipativity performance obtained is γ = 2.0478 when the lower bound of the discrete delay d(k) is 13.

TABLE I
OPTIMAL DISSIPATIVITY PERFORMANCE γ FOR DIFFERENT d2

d2          8        9        10       11       12
Theorem 1   1.7915   1.7284   1.6591   1.5819   1.4951

TABLE II
OPTIMAL DISSIPATIVITY PERFORMANCE γ FOR DIFFERENT d1

d1          9        10       11       12       13
Theorem 1   1.8496   1.9034   1.9539   2.0017   2.0478

TABLE III
OPTIMAL DISSIPATIVITY PERFORMANCE γ FOR DIFFERENT τ2

τ2          8        9        10       11       12
Theorem 1   2.0269   1.9042   1.7337   1.3837   0.6501

TABLE IV
OPTIMAL DISSIPATIVITY PERFORMANCE γ FOR DIFFERENT τ1

τ1          1        2        3        4        5
Theorem 1   1.8825   1.8921   1.9114   1.9403   1.9786

When d1 = 12, the optimal dissipativity performance obtained is γ = 2.0017. This shows that when the upper bound of the discrete delay d(k) is fixed, a smaller d1 usually yields a smaller optimal dissipativity performance γ. A more detailed comparison for different values of d1 is provided in Table II.

2) The second task in this example is to show the relationship between the optimal dissipativity performance γ and the distributed delay τ(k). To this end, we assume d1 = 3 and d2 = 8, that is, the discrete delay d(k) satisfies 3 ≤ d(k) ≤ 8. Now, we choose τ1 = 6. When τ2 = 10 (corresponding to 6 ≤ τ(k) ≤ 10), by Theorem 1, the optimal dissipativity performance obtained is γ = 1.7337. When τ2 = 12 (corresponding to 6 ≤ τ(k) ≤ 12), by Theorem 1, the optimal dissipativity performance obtained is γ = 0.6501. This shows that for the same τ1, a larger τ2 corresponds to a smaller optimal dissipativity performance γ. A more detailed comparison for different values of τ2 is provided in Table III. Next, we choose τ2 = 8. According to Theorem 1, Table IV gives the optimal dissipativity performance γ for different τ1. We can see from Table IV that for the same τ2, a larger τ1 corresponds to a larger optimal dissipativity performance γ.

Example 2: Consider neural network (42) with

$$
D = \begin{bmatrix} 0.8 & 0 \\ 0 & 0.9 \end{bmatrix}, \quad
A = \begin{bmatrix} 0.001 & 0 \\ 0 & 0.005 \end{bmatrix}, \quad
B = \begin{bmatrix} -0.1 & 0.01 \\ -0.2 & -0.1 \end{bmatrix}
$$

and suppose the activation functions satisfy Assumption 1 with δ1 = δ2 = 0 and ρ1 = ρ2 = 0.5. A straightforward calculation gives

$$
F_1 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad
F_2 = \begin{bmatrix} 0.25 & 0 \\ 0 & 0.25 \end{bmatrix}.
$$
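The F1 and F2 matrices quoted in Examples 1 and 2 follow directly from the definitions F1 = diag{δiρi} and F2 = diag{(δi+ρi)/2}; the snippet below reproduces this small calculation.

```python
import numpy as np

def sector_matrices(delta, rho):
    """F1 = diag{delta_i * rho_i}, F2 = diag{(delta_i + rho_i) / 2}."""
    delta, rho = np.asarray(delta), np.asarray(rho)
    return np.diag(delta * rho), np.diag((delta + rho) / 2.0)

print(sector_matrices([-0.1, -0.2], [0.1, 0.2]))   # Example 1: diag(-0.01, -0.04), diag(0, 0)
print(sector_matrices([0.0, 0.0], [0.5, 0.5]))     # Example 2: diag(0, 0), diag(0.25, 0.25)
```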


TABLE V
OPTIMAL PASSIVITY PERFORMANCE γ FOR DIFFERENT d2

d2            8         9         10        11        12
[16]          3.9708    4.8961    6.2501    8.3991    12.2439
Corollary 2   2.9660    3.3483    3.8428    4.4968    5.4007

TABLE VI
OPTIMAL PASSIVITY PERFORMANCE γ FOR DIFFERENT d1

d1            4         5         6         7         8
[16]          16.1853   13.2125   11.0130   9.2978    7.9134
Corollary 2   6.1649    5.6743    5.2451    4.8664    4.5300

The noise diffusion coefficient vector satisfies Assumption 2 with

$$
\begin{bmatrix} G_1 & G_2 \\ * & G_3 \end{bmatrix} =
\begin{bmatrix}
0.003 & 0 & 0 & 0 \\
0 & 0.003 & 0 & 0 \\
0 & 0 & 0.002 & 0 \\
0 & 0 & 0 & 0.002
\end{bmatrix}.
$$

In this example, we are interested in comparing our result with that of [16]. To this end, we first assume that the time-varying delay d(k) has lower bound d1 = 3. When the upper bound of the time-varying delay is d2 = 8, the optimal passivity performance obtained is γ = 3.9708 by the method in [16], while by Corollary 2 proposed in this paper the optimal passivity performance obtained is γ = 2.9660, which is 25.3% smaller than that in [16]. Table V also lists the comparisons of γ when the upper bound is d2 = 9, 10, 11, and 12. Next, we assume that the time-varying delay d(k) has upper bound d2 = 13. The optimal passivity performance γ obtained by Corollary 2 and by [16] for different d1 can be found in Table VI. It can be seen from Remark 6 and Tables V and VI that our result uses fewer decision variables, but this is not at the expense of degrading the passivity performance; that is, compared with the condition in [16], Corollary 2 derived here not only has fewer decision variables but also gives less conservative results. It can also be found from Tables V and VI that for the same d1, a larger d2 corresponds to a larger optimal passivity performance γ, and when the upper bound d2 is fixed, a larger d1 corresponds to a smaller optimal passivity performance γ.

Example 3: In this example, we show the application of the proposed result to a biological network, which has been presented as the mathematical model of the repressilator and experimentally studied in Escherichia coli [35], [36]. Here, we take the time-varying delay and noise perturbation into account and consider the following discrete-time genetic regulatory network:

$$
\begin{cases}
m(k+1) = \hat A m(k) + \hat B f(p(k-d(k))) + \bar\sigma(k, p(k), p(k-d(k)))\,\omega(k) \\
p(k+1) = \hat C p(k) + \hat D m(k-d(k))
\end{cases} \tag{46}
$$

where $m(k) = [m_1(k)\; m_2(k)\; \cdots\; m_n(k)]^T$, $p(k) = [p_1(k)\; p_2(k)\; \cdots\; p_n(k)]^T$, and $m_i(k)$ and $p_i(k)$ can be viewed as the concentrations of mRNA and protein of the i-th node. $\hat A = \mathrm{diag}\{\hat a_1, \hat a_2, \ldots, \hat a_n\}$ with $|\hat a_i| < 1$ and $\hat C = \mathrm{diag}\{\hat c_1, \hat c_2, \ldots, \hat c_n\}$ with $|\hat c_i| < 1$, where $\hat a_i$ and $\hat c_i$ are the decay rates of mRNA and protein. $\hat D = \mathrm{diag}\{d_1, d_2, \ldots, d_n\}$. $f(p(k)) = [f_1(p_1(k))\; f_2(p_2(k))\; \cdots\; f_n(p_n(k))]^T$, and $f_i(\cdot)$ represents the feedback regulation of the protein on the transcription, which is generally a nonlinear function but is monotone in each variable. $\hat B = (\hat b_{ij}) \in R^{n\times n}$ is the coupling matrix of the genetic network, which is defined as follows: if there is no link from node j to node i, $\hat b_{ij} = 0$; if transcription factor j is an activator of gene i, $\hat b_{ij} = \alpha_{ij}$; if transcription factor j is a repressor of gene i, $\hat b_{ij} = -\alpha_{ij}$, where $\alpha_{ij}$ is the dimensionless transcriptional rate, a bounded constant.

In this example, we consider a three-gene network, that is, n = 3. Choose $f_i(s) = s^2/(1+s^2)$ for any i [35], [36], which implies δi = 0 and ρi = 0.65 for any i, and

$$
\hat A = \begin{bmatrix} 0.1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1 \end{bmatrix}, \quad
\hat B = \begin{bmatrix} 0 & 0 & -0.5 \\ -0.5 & 0 & 0 \\ 0 & -0.5 & 0 \end{bmatrix}, \quad
\hat C = \begin{bmatrix} 0.08 & 0 & 0 \\ 0 & 0.08 & 0 \\ 0 & 0 & 0.08 \end{bmatrix}, \quad
\hat D = \begin{bmatrix} 0.08 & 0 & 0 \\ 0 & 0.08 & 0 \\ 0 & 0 & 0.08 \end{bmatrix}
$$

and

$$
\bar\sigma(k)^T \bar\sigma(k) \le
\begin{bmatrix} p(k) \\ p(k-d(k)) \end{bmatrix}^T
\begin{bmatrix} H_1 & H_2 \\ * & H_3 \end{bmatrix}
\begin{bmatrix} p(k) \\ p(k-d(k)) \end{bmatrix}
$$

where H2 = 0 and

$$
H_1 = \begin{bmatrix} 0.1 & 0 & 0 \\ * & 0.1 & 0 \\ * & * & 0.1 \end{bmatrix}, \quad
H_3 = \begin{bmatrix} 0.1 & 0 & 0 \\ * & 0.1 & 0 \\ * & * & 0.1 \end{bmatrix}.
$$

Letting

$$
x(k) = \begin{bmatrix} m(k) \\ p(k) \end{bmatrix}, \quad
g(x(k-d(k))) = \begin{bmatrix} m(k-d(k)) \\ f(p(k-d(k))) \end{bmatrix}, \quad
D = \begin{bmatrix} \hat A & 0 \\ 0 & \hat C \end{bmatrix}, \quad
B = \begin{bmatrix} 0 & \hat B \\ \hat D & 0 \end{bmatrix}, \quad
\hat\sigma(k) = \begin{bmatrix} \bar\sigma(k) \\ 0 \end{bmatrix}
$$

and A = 0, we can find that genetic regulatory network (46) can be transformed into neural network (42). It is assumed that

$$
d(k) = 3 + 2\sin\Big(\frac{k\pi}{2}\Big).
$$

A straightforward calculation gives d1 = 1 and d2 = 5. Applying Theorem 2, it can be checked that neural network (42) is globally asymptotically stable, which implies that the genetic regulatory network (46) is stable.
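The block construction and the delay bounds stated in Example 3 are easy to reproduce; the snippet below assembles D and B from Â, B̂, Ĉ, D̂ and evaluates d(k) = 3 + 2 sin(kπ/2) over one period to confirm d1 = 1 and d2 = 5.

```python
import numpy as np

A_hat = 0.1 * np.eye(3)
B_hat = np.array([[0.0, 0.0, -0.5],
                  [-0.5, 0.0, 0.0],
                  [0.0, -0.5, 0.0]])
C_hat = 0.08 * np.eye(3)
D_hat = 0.08 * np.eye(3)

# Block matrices of the equivalent neural network (42).
D = np.block([[A_hat, np.zeros((3, 3))],
              [np.zeros((3, 3)), C_hat]])
B = np.block([[np.zeros((3, 3)), B_hat],
              [D_hat, np.zeros((3, 3))]])

# Time-varying delay d(k) = 3 + 2*sin(k*pi/2) takes the integer values 3, 5, 3, 1, ...
d = [int(round(3 + 2 * np.sin(k * np.pi / 2))) for k in range(8)]
print(d, "d1 =", min(d), "d2 =", max(d))   # d1 = 1, d2 = 5
```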


V. CONCLUSION

In this paper, the problem of dissipativity analysis for discrete-time stochastic neural networks with time-varying discrete and finite-distributed delays was studied. The discretized Jensen inequality and the lower bounds lemma were used to deal with the involved finite-sum quadratic terms, and a delay-dependent condition was given to ensure that the considered neural network is globally asymptotically stable in the mean square and strictly (Q, S, R)-γ-dissipative. The obtained condition depends not only on the discrete delay but also on the finite-distributed delay. Some special cases were also investigated. Thanks to the usage of the discretized Jensen inequality and the lower bounds lemma, the results derived here not only involve fewer decision variables but also are less conservative than the existing ones, and so can be easily used in practice. Numerical examples were given to show the effectiveness and advantage of the proposed methods. It is worth mentioning that the results proposed in this paper can be extended to more complex neural networks, such as nonhomogeneous Markov jump neural networks, switched neural networks, and Markov jump neural networks with incomplete transition descriptions, which deserves further investigation.

REFERENCES

[1] S. Arik, "Global asymptotic stability of a class of dynamical neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 47, no. 4, pp. 568–571, Apr. 2000.
[2] Y. Fu and T. Chai, "Nonlinear adaptive decoupling control based on neural networks and multiple models," Int. J. Innov. Comp. Inf. Control, vol. 8, no. 3, pp. 1867–1878, Mar. 2012.
[3] X. Luan, F. Liu, and P. Shi, "Neural network based stochastic optimal control for nonlinear Markov jump systems," Int. J. Innov. Comp. Inf. Control, vol. 6, no. 8, pp. 3715–3723, Aug. 2010.
[4] Y. Zhao, L. Zhang, S. Shen, and H. Gao, "Robust stability criterion for discrete-time uncertain Markovian jumping neural networks with defective statistics of modes transitions," IEEE Trans. Neural Netw., vol. 22, no. 1, pp. 164–170, Jan. 2011.
[5] X. Sun, G. Liu, and W. Wang, "Stability analysis for systems with large delay period: A switching method," Int. J. Innov. Comp. Inf. Control, vol. 8, no. 6, pp. 4235–4247, Jun. 2012.
[6] Z. Sun, Y. Liu, and X. Xie, "Global stabilization for a class of high-order time-delay nonlinear systems," Int. J. Innov. Comp. Inf. Control, vol. 7, no. 12, pp. 7119–7130, Dec. 2011.
[7] Z. Lin, Y. Xia, P. Shi, and H. Wu, "Robust sliding mode control for uncertain linear discrete systems independent of time-delay," Int. J. Innov. Comp. Inf. Control, vol. 7, no. 2, pp. 869–880, Feb. 2011.
[8] J. Lam, S. Xu, D. W. C. Ho, and Y. Zou, "On global asymptotic stability for a class of delayed neural networks," Int. J. Circuit Theory Appl., vol. 40, no. 11, pp. 1165–1174, Nov. 2012.
[9] H. Zhang, Z. Liu, and G. Huang, "Novel delay-dependent robust stability analysis for switched neutral-type neural networks with time-varying delays via SC technique," IEEE Trans. Syst., Man, Cybern., Part B, Cybern., vol. 40, no. 6, pp. 1480–1491, Dec. 2010.
[10] Y. He, G. Liu, and D. Rees, "New delay-dependent stability criteria for neural networks with time-varying delay," IEEE Trans. Neural Netw., vol. 18, no. 1, pp. 310–314, Jan. 2007.
[11] Z. Wu, P. Shi, H. Su, and J. Chu, "Delay-dependent stability analysis for switched neural networks with time-varying delay," IEEE Trans. Syst., Man, Cybern., Part B, Cybern., vol. 41, no. 6, pp. 1522–1530, Dec. 2011.
[12] Y. Zhao, H. Gao, J. Lam, and K. Che, "Stability analysis of discrete-time recurrent neural networks with stochastic delay," IEEE Trans. Neural Netw., vol. 20, no. 8, pp. 1330–1339, Aug. 2009.

[13] Z. Wang, D. W. C. Ho, and X. Liu, "State estimation for delayed neural networks," IEEE Trans. Neural Netw., vol. 16, no. 1, pp. 279–284, Jan. 2005.
[14] Z. Wu, P. Shi, H. Su, and J. Chu, "State estimation for discrete-time neural networks with time-varying delay," Int. J. Syst. Sci., vol. 43, no. 4, pp. 647–655, 2012.
[15] C. Li and X. Liao, "Passivity analysis of neural networks with time delay," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 8, pp. 471–475, Aug. 2005.
[16] Q. Song, J. Liang, and Z. Wang, "Passivity analysis of discrete-time stochastic neural networks with time-varying delays," Neurocomputing, vol. 72, nos. 7–9, pp. 1782–1788, Mar. 2009.
[17] Z. Wu, P. Shi, H. Su, and J. Chu, "Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays," IEEE Trans. Neural Netw., vol. 22, no. 10, pp. 1566–1575, Oct. 2011.
[18] Z. Wang, Y. Liu, G. Wei, and X. Liu, "A note on control of a class of discrete-time stochastic systems with distributed delays and nonlinear disturbances," Automatica, vol. 46, no. 3, pp. 543–548, Mar. 2010.
[19] Z. Wang and H. Zhang, "Global asymptotic stability of reaction-diffusion Cohen-Grossberg neural networks with continuously distributed delays," IEEE Trans. Neural Netw., vol. 20, no. 1, pp. 39–49, Jan. 2010.
[20] Y. Liu, Z. Wang, and X. Liu, "Global exponential stability of generalized recurrent neural networks with discrete and distributed delays," Neural Netw., vol. 19, no. 5, pp. 667–675, Jun. 2006.
[21] Z. Wang, Y. Liu, M. Li, and X. Liu, "Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays," IEEE Trans. Neural Netw., vol. 17, no. 3, pp. 814–820, May 2006.
[22] Y. Liu, Z. Wang, and X. Liu, "Asymptotic stability for neural networks with mixed time-delays: The discrete-time case," Neural Netw., vol. 22, no. 1, pp. 67–74, Jan. 2009.
[23] H. Li, C. Wang, P. Shi, and H. Gao, "New passivity results for uncertain discrete-time stochastic neural networks with mixed time delays," Neurocomputing, vol. 73, nos. 16–18, pp. 3291–3299, Oct. 2010.
[24] Y. Liu, Z. Wang, and X. Liu, "State estimation for discrete-time Markovian jumping neural networks with mixed mode-dependent delays," Phys. Lett. A, vol. 372, no. 48, pp. 7147–7155, Dec. 2008.
[25] W. M. Haddad and V. Chellaboina, Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach. Princeton, NJ: Princeton Univ. Press, 2008.
[26] D. J. Hill and P. J. Moylan, "The stability of nonlinear dissipative systems," IEEE Trans. Autom. Control, vol. 21, no. 5, pp. 708–711, Oct. 1976.
[27] Z. Feng and J. Lam, "Robust reliable dissipative filtering for discrete delay singular systems," Signal Process., vol. 92, no. 12, pp. 3010–3025, Dec. 2012.
[28] Z. Feng, J. Lam, and Z. Shu, "Dissipative control for linear systems by static output feedback," Int. J. Syst. Sci., 2012, to be published.
[29] Z. Feng and J. Lam, "Stability and dissipativity analysis of distributed delay cellular neural networks," IEEE Trans. Neural Netw., vol. 22, no. 6, pp. 976–981, Jun. 2011.
[30] Z. Wu, J. Lam, H. Su, and J. Chu, "Stability and dissipativity analysis of static neural networks with time delay," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 2, pp. 199–210, Feb. 2012.
[31] P. Park, J. W. Ko, and C. Jeong, "Reciprocally convex approach to stability of systems with time-varying delays," Automatica, vol. 47, no. 1, pp. 235–238, Jan. 2011.
[32] J. C. Willems, "Dissipative dynamical systems, part I: General theory," Arch. Ration. Mech. Anal., vol. 45, no. 5, pp. 321–351, 1972.
[33] X. Zhu, Y. Wang, and G. Yang, "New delay-dependent stability results for discrete-time recurrent neural networks with time-varying delay," Neurocomputing, vol. 72, nos. 13–15, pp. 3376–3383, Aug. 2009.
[34] Z. Wu, Ju H. Park, H. Su, and J. Chu, "Admissibility and dissipativity analysis for discrete-time singular systems with mixed time-varying delays," Appl. Math. Comput., vol. 47, no. 13, pp. 199–210, Mar. 2012.
[35] P. Li, J. Lam, and Z. Shu, "On the transient and steady-state estimates of interval genetic regulatory networks," IEEE Trans. Syst., Man, Cybern., Part B, Cybern., vol. 40, no. 2, pp. 336–349, Apr. 2010.
[36] Z. Wang, H. Gao, J. Cao, and X. Liu, "On delayed genetic regulatory networks with polytopic uncertainties: Robust stability analysis," IEEE Trans. Nanobiosci., vol. 7, no. 2, pp. 154–163, Jun. 2008.


Zheng-Guang Wu was born in 1982. He received the B.S. and M.S. degrees from Zhejiang Normal University, Jinhua, China, in 2004 and 2007, respectively, and the Ph.D. degree from Zhejiang University, Hangzhou, China, in 2011. He was a Research Assistant with the Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, from February 2010 to June 2010, and a Research Associate with the Department of Mechanical Engineering, University of Hong Kong, Hong Kong, from December 2010 to February 2011. He is currently with the Institute of Cyber-Systems and Control, Zhejiang University. His current research interests include robust control and systems theory.

Hongye Su was born in 1969. He received the B.S. degree in industrial automation from the Nanjing University of Chemical Technology, Jiangsu, China, in 1990, and the M.S. and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1993 and 1995, respectively. He was a Lecturer with the Department of Chemical Engineering, Zhejiang University, from 1995 to 1997. From 1998 to 2000, he was an Associate Professor with the Institute of Advanced Process Control, Zhejiang University. Currently, he is a Professor with the Institute of Cyber-Systems and Control, Zhejiang University. His current research interests include robust controls, time-delay systems, and advanced process control theory and applications.

Peng Shi (M'95–SM'98) received the B.Sc. degree in mathematics from the Harbin Institute of Technology, Harbin, China, the M.E. degree in systems engineering from Harbin Engineering University, Harbin, the Ph.D. degree in electrical engineering from the University of Newcastle, Newcastle, Australia, the Ph.D. degree in mathematics from the University of South Australia, Mawson Lakes, Australia, and the D.Sc. degree from the University of Glamorgan, Glamorgan, U.K. He was a Lecturer with Heilongjiang University, Harbin, a Post-Doctorate and Lecturer with the University of South Australia, a Senior Scientist with the Defense Science and Technology Organization, Canberra, Australia, and a Professor with the University of Glamorgan. He is currently a Professor with Victoria University, Melbourne, Australia, and The University of Adelaide, Adelaide, Australia. His current research interests include system and control theory, computational and intelligent systems, and operational research. Dr. Shi is a Fellow of the Institution of Engineering and Technology, U.K., and the Institute of Mathematics and its Applications, U.K. He has been on the editorial board of a number of international journals, including Automatica, the IEEE Transactions on Automatic Control, the IEEE Transactions on Fuzzy Systems, the IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, and the IEEE Transactions on Circuits and Systems I: Regular Papers.

Jian Chu was born in 1963. He received the B.S., M.S., and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1982, 1984, and 1989, respectively, with the Ph.D. degree completed in a joint education program between Zhejiang University and Kyoto University, Kyoto, Japan. He joined the Institute of Advanced Process Control, Zhejiang University, as a Post-Doctoral Researcher. He was an Associate Professor in 1991, a Full Professor in 1993, and became a Doctoral Advisor in 1994. His current research interests include control theory and applications, research and development of computer control systems, and advanced process control software.

319KB Sizes 0 Downloads 3 Views