

Exponential Synchronization of Neural Networks With Discrete and Distributed Delays Under Time-Varying Sampling

Zheng-Guang Wu, Peng Shi, Senior Member, IEEE, Hongye Su, and Jian Chu

Abstract— This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristic of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions not only are less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

Index Terms— Exponential synchronization, linear matrix inequality (LMI), neural networks, sampled-data control.

Manuscript received September 18, 2011; revised February 15, 2012; accepted May 23, 2012. Date of publication June 28, 2012; date of current version August 1, 2012. This work was supported in part by the National Natural Science Foundation of China under Grant 61174029.
Z.-G. Wu, H. Su, and J. Chu are with the National Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China (e-mail: nashwzhg@gmail.com; [email protected]; [email protected]).
P. Shi is with the Faculty of Advanced Technology, University of Glamorgan, Pontypridd CF37 1DL, U.K., also with the School of Engineering and Science, Victoria University, Melbourne 8001, Australia, and also with the School of Mathematics and Statistics, University of South Australia, Mawson Lakes 5095, Australia (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNNLS.2012.2202687

I. INTRODUCTION

Over the past few decades, a great deal of effort has been devoted to neural networks because of their wide applications in a variety of areas, such as signal processing, pattern recognition, static image processing, associative memory, and combinatorial optimization [1]. On the other hand, the finite switching speed of amplifiers and the communication time of neurons may induce time delays in the interaction between neurons when neural networks are implemented by very large-scale integrated electronic circuits. It has been shown that time delays may be an important source of oscillation, divergence, and instability in systems [2]–[4]. Thus, time-delay neural networks have been widely studied in recent years, and a large variety of important and interesting results have been reported via all kinds of methods, among which the linear matrix inequality (LMI) method appears to be very popular.

For example, the stability problem has been extensively investigated for all sorts of time-delay neural networks, and many stability criteria have been obtained using the LMI approach ([4]–[9] and the references therein). The state estimation problem has been investigated for neural networks with time delay in [10]–[12], where an explicit expression of the desired state estimator can be obtained via the LMI approach. Results on passivity analysis for time-delay neural networks can be found in [13] and [14]. As a generalization of the passivity property, dissipativity has also been considered for distributed-delay cellular neural networks in [15].

It is also well known that chaotic systems exhibit complex dynamical behaviors with special features, such as extreme sensitivity to tiny variations of initial conditions and bounded trajectories in phase space [16], [17]. Over the past two decades, the synchronization problem of chaotic systems has been widely investigated due to its potential applications in biology, chemistry, secure communication and cryptography, nonlinear oscillation synchronization, and other nonlinear fields. Up to now, many useful approaches have been established for the synchronization of chaotic systems, including time-delay feedback control, adaptive control, fuzzy control, impulsive control, sampled-data control, and manifold-based methods ([16], [18] and the references therein). Since the master-slave (drive-response) concept was introduced by Pecora and Carroll in their pioneering work [18], increasing interest has been devoted to the master-slave synchronization of chaotic neural networks with delay. For example, based on Lyapunov stability theory and the master-slave synchronization concept, the synchronization problem of chaotic neural networks with time delay has been investigated in [19], where controllers have been designed to achieve synchronization of the considered neural networks. However, an obvious shortcoming of the method adopted in [19] is that the proposed conditions, due to their complicated representations, are difficult to check. In [16], the synchronization problem of stochastic neural networks with time-varying delay has been considered by using the LMI method, and sufficient conditions have been derived to ensure that the master systems synchronize with the slave systems. One advantage of the results in [16] is that they are formulated by LMIs, which can be easily verified with existing powerful tools, such as the LMI toolbox of MATLAB. But it should be pointed out



that the given results of [16] are delay-independent and are no longer valid when the derivative of the time-varying delay equals or exceeds one; thus, they have a narrower application scope and more conservatism. Some sufficient conditions for the exponential H∞ synchronization of neural networks with time-varying delays have been proposed in [20]. The essential virtues of [20] are that the supplementary requirement that the time derivative of the time-varying delay be smaller than one is removed, and that all the proposed results are delay-dependent. Therefore, the results in [20] are less conservative and more desirable than those in [16]. Results on exponential synchronization of discrete-time chaotic neural networks with time delay can be found in [21].

It should be pointed out that all the above-mentioned time delays are of a discrete nature. It is well known that neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, so there will be a distribution of propagation delays. In this case, the signal propagation is no longer instantaneous and cannot be modeled with discrete delays [22]. Recently, neural networks with distributed delays have been extensively discussed ([14], [23], [24] and the references therein). The master-slave synchronization problem has been investigated for neural networks with discrete and distributed delays in [25], where several sufficient conditions have been established to guarantee the exponential synchronization of the considered chaotic neural networks. For the case where external disturbances appear, [26] established an exponential H∞ synchronization method for a class of uncertain master and slave neural networks with mixed time delays. It should be noted that the mixed time delays considered in [26] comprise neutral, discrete, and distributed delays simultaneously, and thus the results proposed in [26] are very general and powerful. Reference [27] has investigated the problem of sampled-data synchronization for neural networks with discrete and distributed delays under variable sampling in the framework of the input delay approach [28], [29]. The key point of [27] lies in the use of the free-weighting matrix approach. It is worth pointing out that the free-weighting matrix approach, which originated in [30], is an effective way to reduce conservatism for all sorts of time-delay systems and has been used extensively. However, some of the free-weighting matrices introduced in [27] have no effect on reducing the conservatism of the obtained results. On the contrary, they enlarge the number of decision variables and consequently lead to a significant increase in the computational demand. On the other hand, the Lyapunov functional proposed in [27] ignores the characteristic of sampled-data systems, which makes the given results conservative. Naturally, it is meaningful and interesting to further study the problem of master-slave synchronization for neural networks with discrete and distributed delays under sampled-data control, with the aim of obtaining less conservative delay-dependent conditions with fewer decision variables that guarantee the master systems synchronize with the slave systems.

In this paper, we are concerned with the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper


bound on the sampling intervals. Inspired by [29], a novel Lyapunov functional is proposed, which captures the characteristic of sampled-data systems. Some new criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be easily obtained because all the involved conditions are formulated by LMIs, which depend upon the maximum sampling interval and the decay rate. It is shown that the newly proposed methods lead to less conservative and less complex results than the existing ones. An example is given to illustrate the effectiveness and reduced conservatism of the proposed methods.

Notation: The notation used throughout this paper is fairly standard. R^n and R^{m×n} denote the n-dimensional Euclidean space and the set of all m × n real matrices, respectively. The notation X > Y (X ≥ Y), where X and Y are symmetric matrices, means that X − Y is positive definite (positive semidefinite). I and 0 represent the identity matrix and a zero matrix, respectively. The superscript "T" represents the transpose, and diag{· · ·} stands for a block-diagonal matrix. ||·|| denotes the Euclidean norm of a vector and its induced norm of a matrix. For an arbitrary matrix B and two symmetric matrices A and C,
$$\begin{bmatrix} A & B \\ * & C \end{bmatrix}$$
denotes a symmetric matrix, where "∗" denotes the term that is induced by symmetry. Matrices, if their dimensions are not explicitly stated, are assumed to have compatible dimensions for algebraic operations.

II. PRELIMINARIES

Consider the following neural network with mixed time delays:
$$\dot{x}(t) = -Cx(t) + Ag(x(t)) + Bg(x(t-d(t))) + D\int_{t-\tau(t)}^{t} g(x(s))\,ds + V(t) \qquad (1)$$
where x(t) = [x_1(t) x_2(t) · · · x_n(t)]^T, g(x(t)) = [g_1(x_1(t)) g_2(x_2(t)) · · · g_n(x_n(t))]^T, x_i(t) is the state of the i-th neuron at time t, and g(x(t)) denotes the neuron activation function; C = diag{c_1, c_2, . . . , c_n} is a diagonal matrix with positive entries; A = (a_{ij})_{n×n}, B = (b_{ij})_{n×n}, and D = (d_{ij})_{n×n} are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix; V(t) = [V_1(t) V_2(t) · · · V_n(t)]^T is an external input vector; d(t) and τ(t) denote the discrete delay and the distributed delay, respectively, and satisfy
$$0 \le d(t) \le d, \qquad \dot{d}(t) \le \mu \qquad (2)$$
and
$$0 \le \tau(t) \le \tau \qquad (3)$$
where d, μ, and τ are constants. Throughout this paper, we make the following general assumption.
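To make the model class (1)–(3) concrete, the following sketch integrates a small network of the form (1) with a forward-Euler scheme. It is only an illustration: the matrices, the activation g(·) = tanh(·) (which satisfies the sector condition assumed below with bounds 0 and 1), the delay profiles, and the step size are hypothetical placeholders, not the example data used in this paper.

```python
import numpy as np

# Hypothetical 2-neuron instance of (1); these matrices are illustrative only.
C = np.diag([1.0, 1.0])
A = np.array([[2.0, -0.1], [-5.0, 3.2]])
B = np.array([[-1.5, -0.1], [-0.2, -2.5]])
D = np.array([[0.1, 0.0], [0.0, 0.1]])
g = np.tanh                      # placeholder activation function
d_max, tau_max = 1.0, 0.5        # bounds d and tau in (2)-(3)
dt, T = 1e-3, 10.0               # Euler step and horizon
N = int(T / dt)
hist = int(d_max / dt)           # history length needed for the delayed terms

x = np.zeros((N + hist, 2))
x[:hist + 1] = 0.4               # constant initial function on [-d, 0]

for k in range(hist, N + hist - 1):
    t = (k - hist) * dt
    d_t = 0.5 + 0.5 * np.sin(t) ** 2          # time-varying discrete delay, 0 <= d(t) <= d
    tau_t = 0.25 + 0.25 * np.cos(t) ** 2      # time-varying distributed delay, 0 <= tau(t) <= tau
    xd = x[k - int(d_t / dt)]                 # x(t - d(t))
    # distributed-delay term: integral of g(x(s)) over [t - tau(t), t], rectangle rule
    m = int(tau_t / dt)
    dist = g(x[k - m:k + 1]).sum(axis=0) * dt
    V_t = np.zeros(2)                         # external input, taken as zero here
    dx = -C @ x[k] + A @ g(x[k]) + B @ g(xd) + D @ dist + V_t
    x[k + 1] = x[k] + dt * dx
```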



Assumption 1 [31]: Each activation function g_i(·) in (1) is continuous and bounded, and there exist constants F_i^− and F_i^+ such that
$$F_i^- \le \frac{g_i(\alpha_1) - g_i(\alpha_2)}{\alpha_1 - \alpha_2} \le F_i^+, \quad i = 1, 2, \ldots, n \qquad (4)$$
where α_1, α_2 ∈ R and α_1 ≠ α_2.

Remark 1: Assumption 1 was first introduced in [31]. The constants F_i^− and F_i^+ in Assumption 1 are allowed to be positive, negative, or zero. Hence, the resulting activation functions may be nonmonotonic and are more general than the usual sigmoid functions and Lipschitz-type conditions.

In this paper, we consider system (1) as the master system, and a slave system for (1) can be described by the following equation:
$$\dot{y}(t) = -Cy(t) + Ag(y(t)) + Bg(y(t-d(t))) + D\int_{t-\tau(t)}^{t} g(y(s))\,ds + V(t) + u(t) \qquad (5)$$
where D, A, B, and C are the same matrices as in (1), and u(t) ∈ R^n is the control input to be designed in order to achieve a certain control objective. By defining the error signal as r(t) = y(t) − x(t), the error system can be represented as follows:
$$\dot{r}(t) = -Cr(t) + Af(t) + Bf(t-d(t)) + D\int_{t-\tau(t)}^{t} f(s)\,ds + u(t) \qquad (6)$$
where f(t) := g(y(t)) − g(x(t)). It should be pointed out that f(t) depends on x(t) and y(t). However, in order to avoid cumbersome notation, we use the simpler symbol f(t) instead of f(x(t), y(t)) [10].

The control signal is assumed to be generated by a zero-order hold with a sequence of hold times 0 ≤ t_0 < t_1 < · · · < t_k < · · ·. Therefore, the state-feedback controller takes the following form:
$$u(t) = Kr(t_k), \quad t_k \le t < t_{k+1} \qquad (7)$$
where K is the sampled-data feedback controller gain matrix to be determined, r(t_k) is the discrete measurement of r(t) at the sampling instant t_k, and lim_{k→+∞} t_k = +∞. Similar to [32], the sampling is not required to be periodic in this paper; the only assumption is that the distance between any two consecutive sampling instants is less than a given bound. Specifically, it is assumed that
$$t_{k+1} - t_k = h_k \le h \qquad (8)$$
for all k ≥ 0, where h is a positive scalar and represents the largest sampling interval. By substituting (7) into (6), we obtain
$$\dot{r}(t) = -Cr(t) + Af(t) + Bf(t-d(t)) + Kr(t_k) + D\int_{t-\tau(t)}^{t} f(s)\,ds, \quad t_k \le t < t_{k+1}. \qquad (9)$$

We are now in a position to introduce the notion of exponential synchronization.

Definition 1: The master system (1) and slave system (5) are said to be exponentially synchronous if the error system (9) is exponentially stable, that is, there exist two constants κ > 0 and α > 0 such that
$$\|r(t)\| \le \kappa e^{-\alpha(t - t_0)} \|r_{t_0}\|_c \qquad \forall t \ge t_0 \qquad (10)$$
where $\|r_t\|_c = \sup_{-\max\{d,h,\tau\} \le \theta \le 0}\{\|r(t+\theta)\|, \|\dot{r}(t+\theta)\|\}$.

The main purpose of this paper is to design a controller of the form (7) to achieve exponential synchronization of the master system (1) and slave system (5). In other words, we are interested in finding a feedback gain matrix K such that the error system (9) is exponentially stable.

Throughout this paper, the following lemma is needed to develop our main results.

Lemma 1 (Jensen Inequality [33]): For any matrix W > 0, scalars γ_1 and γ_2 satisfying γ_2 > γ_1, and a vector function ω : [γ_1, γ_2] → R^n such that the integrations concerned are well defined,
$$(\gamma_2 - \gamma_1)\int_{\gamma_1}^{\gamma_2} \omega(\alpha)^T W \omega(\alpha)\,d\alpha \ge \left[\int_{\gamma_1}^{\gamma_2} \omega(\alpha)\,d\alpha\right]^T W \left[\int_{\gamma_1}^{\gamma_2} \omega(\alpha)\,d\alpha\right]. \qquad (11)$$
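As a quick numerical sanity check of Lemma 1 (not part of the original development), the snippet below evaluates both sides of (11) on a grid for an arbitrary positive definite W and a smooth ω(·); the particular W, ω, limits, and grid size are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, g1, g2, m = 3, 0.0, 2.0, 20000          # dimension, limits gamma1 < gamma2, grid size
M = rng.standard_normal((n, n))
W = M @ M.T + n * np.eye(n)                 # a positive definite weight matrix
alphas = np.linspace(g1, g2, m)
omega = np.vstack([np.sin(alphas), np.cos(2 * alphas), alphas ** 2]).T  # omega(alpha) in R^3

dx = alphas[1] - alphas[0]
lhs = (g2 - g1) * np.einsum('ij,jk,ik->', omega, W, omega) * dx   # (g2-g1) * int w^T W w
v = omega.sum(axis=0) * dx                                        # int w dalpha
rhs = v @ W @ v
print(lhs >= rhs)   # Jensen's inequality (11): expected True
```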

III. MAIN RESULTS

In this section, sufficient conditions will be established for the control synthesis ensuring synchronization between the master system (1) and slave system (5) by employing a new Lyapunov functional, which captures the characteristic of sampled-data systems. To present the main results of this section, we first denote
$$F_1 = \mathrm{diag}\{F_1^- F_1^+, F_2^- F_2^+, \ldots, F_n^- F_n^+\}, \quad F_2 = \mathrm{diag}\left\{\frac{F_1^- + F_1^+}{2}, \frac{F_2^- + F_2^+}{2}, \ldots, \frac{F_n^- + F_n^+}{2}\right\}$$
$$g_1 = \begin{bmatrix} I & 0 & 0 \end{bmatrix}, \quad g_2 = \begin{bmatrix} 0 & I & 0 \end{bmatrix}, \quad g_3 = \begin{bmatrix} 0 & 0 & I \end{bmatrix}.$$

Theorem 1: Given scalars α > 0 and γ, if there exist $P > 0$, $Q = \begin{bmatrix} Q_1 & Q_2 \\ * & Q_3 \end{bmatrix} > 0$, $R > 0$, $Z_1 > 0$, $Z_2 > 0$, $U > 0$, X, X_1, G, L, H, S, and diagonal matrices $V_1 > 0$, $V_2 > 0$ such that
$$\Gamma(h) = \begin{bmatrix} P + h\dfrac{X + X^T}{2} & -hX + hX_1 \\ * & -hX_1 - hX_1^T + h\dfrac{X + X^T}{2} \end{bmatrix} > 0 \qquad (12)$$
$$\Xi_1(h) = \begin{bmatrix} \Xi_{11} + \Lambda(h) & \Xi_{12} & \Xi_{13} & \Xi_{14} & \Xi_{15} & \Xi_{16} \\ * & \Xi_{22} & \Xi_{23} & 0 & 0 & \Xi_{26} \\ * & * & \Xi_{33} & 0 & 0 & 0 \\ * & * & * & \Xi_{44} & 0 & 0 \\ * & * & * & * & \Xi_{55} & 0 \\ * & * & * & * & * & \Xi_{66} \end{bmatrix} < 0 \qquad (13)$$
$$\Xi_2 = \begin{bmatrix} \Xi_{11} & \Xi_{12} & \Xi_{13} & \Xi_{14} & \Xi_{15} & \Xi_{16} & hH^T \\ * & \Xi_{22} & \Xi_{23} & 0 & 0 & \Xi_{26} & 0 \\ * & * & \Xi_{33} & 0 & 0 & 0 & 0 \\ * & * & * & \Xi_{44} & 0 & 0 & 0 \\ * & * & * & * & \Xi_{55} & 0 & 0 \\ * & * & * & * & * & \Xi_{66} & 0 \\ * & * & * & * & * & * & -he^{-2\alpha h}U \end{bmatrix} < 0 \qquad (14)$$
$$\begin{bmatrix} e^{-2\alpha d}Z_1 & S \\ * & e^{-2\alpha d}Z_1 \end{bmatrix} > 0 \qquad (15)$$
where
$$\begin{aligned} \Xi_{11} = {} & 2\alpha g_1^T P g_1 + g_1^T P g_2 + g_2^T P g_1 + g_1^T Q_1 g_1 + g_1^T R g_1 + d^2 g_2^T Z_1 g_2 - e^{-2\alpha d} g_1^T Z_1 g_1 - g_1^T G g_2 - g_2^T G^T g_1 \\ & - \begin{bmatrix} g_1 \\ g_3 \end{bmatrix}^T \bar{H} \begin{bmatrix} g_1 \\ g_3 \end{bmatrix} + H^T(g_1 - g_3) + (g_1 - g_3)^T H - g_1^T F_1 V_1 g_1 - g_1^T G C g_1 - g_1^T C^T G^T g_1 + g_1^T L g_3 + g_3^T L^T g_1 \\ & - \gamma g_2^T G C g_1 - \gamma g_1^T C^T G^T g_2 - \gamma g_2^T G g_2 - \gamma g_2^T G^T g_2 + \gamma g_2^T L g_3 + \gamma g_3^T L^T g_2 \end{aligned}$$
$$\Xi_{12} = g_1^T (e^{-2\alpha d} Z_1 - S), \quad \Xi_{13} = g_1^T S, \quad \Xi_{14} = g_1^T G D + \gamma g_2^T G D$$
$$\Xi_{15} = g_1^T Q_2 + g_1^T F_2 V_1 + g_1^T G A + \gamma g_2^T G A, \quad \Xi_{16} = g_1^T G B + \gamma g_2^T G B$$
$$\Xi_{22} = -(1-\mu)e^{-2\alpha d} Q_1 - 2e^{-2\alpha d} Z_1 + S + S^T - F_1 V_2, \quad \Xi_{23} = -S + e^{-2\alpha d} Z_1, \quad \Xi_{26} = -(1-\mu)e^{-2\alpha d} Q_2 + F_2 V_2$$
$$\Xi_{33} = -e^{-2\alpha d} R - e^{-2\alpha d} Z_1, \quad \Xi_{44} = -e^{-2\alpha \tau} Z_2, \quad \Xi_{55} = Q_3 + \tau^2 Z_2 - V_1, \quad \Xi_{66} = -(1-\mu)e^{-2\alpha d} Q_3 - V_2$$
$$\Lambda(h) = 2\alpha h \begin{bmatrix} g_1 \\ g_3 \end{bmatrix}^T \bar{H} \begin{bmatrix} g_1 \\ g_3 \end{bmatrix} + h\left( \begin{bmatrix} g_2 \\ 0 \end{bmatrix}^T \bar{H} \begin{bmatrix} g_1 \\ g_3 \end{bmatrix} + \begin{bmatrix} g_1 \\ g_3 \end{bmatrix}^T \bar{H} \begin{bmatrix} g_2 \\ 0 \end{bmatrix} \right) + h\, g_2^T U g_2$$
$$\bar{H} = \begin{bmatrix} \dfrac{X + X^T}{2} & -X + X_1 \\ * & -X_1 - X_1^T + \dfrac{X + X^T}{2} \end{bmatrix}$$
then the master system (1) and slave system (5) are exponentially synchronous with decay rate α. Moreover, the desired controller gain matrix in (7) can be given by K = G^{−1}L.

Proof: Denote η(t) = [r(t)^T  f(t)^T]^T and consider the following Lyapunov functional for the error system (9):
$$V(t) = \sum_{i=1}^{7} V_i(t), \quad t \in [t_k, t_{k+1}) \qquad (16)$$
where
$$V_1(t) = e^{2\alpha t} r(t)^T P r(t), \qquad V_2(t) = \int_{t-d(t)}^{t} e^{2\alpha s} \eta(s)^T Q \eta(s)\,ds, \qquad V_3(t) = \int_{t-d}^{t} e^{2\alpha s} r(s)^T R r(s)\,ds$$
$$V_4(t) = d\int_{-d}^{0}\int_{t+\theta}^{t} e^{2\alpha s} \dot{r}(s)^T Z_1 \dot{r}(s)\,ds\,d\theta, \qquad V_5(t) = \tau\int_{-\tau}^{0}\int_{t+\theta}^{t} e^{2\alpha s} f(s)^T Z_2 f(s)\,ds\,d\theta$$
$$V_6(t) = (t_{k+1} - t)\int_{t_k}^{t} e^{2\alpha s} \dot{r}(s)^T U \dot{r}(s)\,ds, \qquad V_7(t) = (t_{k+1} - t)e^{2\alpha t} \begin{bmatrix} r(t) \\ r(t_k) \end{bmatrix}^T \bar{H} \begin{bmatrix} r(t) \\ r(t_k) \end{bmatrix}.$$
Its derivative along the solutions of system (9) reads
$$\dot{V}(t) = \sum_{i=1}^{7} \dot{V}_i(t), \quad t \in [t_k, t_{k+1}) \qquad (17)$$
where the $\dot{V}_i(t)$ are given in the Appendix. On the other hand, according to the error system (9), for any appropriately dimensioned matrix G and scalar γ, the following equation holds:
$$0 = 2e^{2\alpha t}\left[ r(t)^T G + \gamma \dot{r}(t)^T G \right]\left[ -\dot{r}(t) - Cr(t) + Af(t) + Bf(t-d(t)) + Kr(t_k) + D\int_{t-\tau(t)}^{t} f(s)\,ds \right]. \qquad (18)$$
Furthermore, we have from (4) that, for any i = 1, 2, . . . , n,
$$(f_i(t) - F_i^- r_i(t))(f_i(t) - F_i^+ r_i(t)) \le 0 \qquad (19)$$
which is equivalent to
$$\eta(t)^T \begin{bmatrix} F_i^- F_i^+ \hat{e}_i \hat{e}_i^T & -\dfrac{F_i^- + F_i^+}{2}\hat{e}_i \hat{e}_i^T \\ * & \hat{e}_i \hat{e}_i^T \end{bmatrix} \eta(t) \le 0 \qquad (20)$$
where $\hat{e}_i$ denotes the unit column vector having 1 as its i-th element and zeros elsewhere. Thus, for any appropriately dimensioned diagonal matrix V_1 > 0, the following inequality holds [4]:
$$0 \le e^{2\alpha t}\, \eta(t)^T \begin{bmatrix} -F_1 V_1 & F_2 V_1 \\ * & -V_1 \end{bmatrix} \eta(t). \qquad (21)$$
Similarly, for any appropriately dimensioned diagonal matrix V_2 > 0, the following inequality also holds:
$$0 \le e^{2\alpha t}\, \eta(t-d(t))^T \begin{bmatrix} -F_1 V_2 & F_2 V_2 \\ * & -V_2 \end{bmatrix} \eta(t-d(t)). \qquad (22)$$
Then, adding the right-hand sides of (18), (21), and (22) to $\dot{V}(t)$ and letting L = GK, we have that, for t ∈ [t_k, t_{k+1}),
$$\dot{V}(t) \le e^{2\alpha t}\, \chi(t)^T \left[ \frac{t_{k+1} - t}{h_k}\,\Xi_1(h_k) + \frac{t - t_k}{h_k}\,\Delta(h_k) \right] \chi(t) \qquad (23)$$



where
$$\chi(t) = \begin{bmatrix} \chi_1(t)^T & \chi_2(t)^T \end{bmatrix}^T, \quad \chi_1(t) = \begin{bmatrix} \phi(t)^T & r(t-d(t))^T & r(t-d)^T \end{bmatrix}^T, \quad \chi_2(t) = \begin{bmatrix} \left(\int_{t-\tau(t)}^{t} f(s)\,ds\right)^T & f(t)^T & f(t-d(t))^T \end{bmatrix}^T$$
$$\Delta(h_k) = \Xi_1(0) + \begin{bmatrix} h_k H^T e^{2\alpha h} U^{-1} H & 0 \\ * & 0 \end{bmatrix}.$$
According to the Schur complement, (14) is equivalent to
$$\Xi_1(0) + \begin{bmatrix} h H^T e^{2\alpha h} U^{-1} H & 0 \\ * & 0 \end{bmatrix} < 0$$
which, together with (13), implies that $\Xi_1(0) < 0$, $\Xi_1(h_k) < 0$, and $\Delta(h_k) < 0$ for every $h_k \le h$. Hence $\dot{V}(t) \le 0$ for $t \in [t_k, t_{k+1})$, and thus
$$V(t) \le V(t_0), \quad \forall t \ge t_0. \qquad (27)$$
Noting P > 0 and (12), we can obtain that there exists a sufficiently small scalar δ > 0 such that P > δI and Γ(h) > δI. Thus, recalling the definitions of V_1(t) and V_7(t),
$$V(t) \ge \delta e^{2\alpha t}\|r(t)\|^2. \qquad (30)$$
Combining (16), (27), and (30), we can get that there exists a scalar ρ > 0 such that
$$\delta e^{2\alpha t}\|r(t)\|^2 \le V(t) \le V(t_0) \le \rho e^{2\alpha t_0}\|r_{t_0}\|_c^2 \qquad (31)$$
which implies
$$\|r(t)\| \le e^{-\alpha(t-t_0)}\sqrt{\frac{\rho}{\delta}}\,\|r_{t_0}\|_c. \qquad (32)$$
Therefore, system (9) is exponentially stable with decay rate α. According to Definition 1, the master system (1) and slave system (5) are exponentially synchronous, and the exponential synchronization rate is α. This completes the proof.

Remark 2: Theorem 1 provides a new synchronization criterion for the master system (1) and slave system (5). It is noted that the given result is formulated by LMIs, which can be checked efficiently using standard LMI solvers such as the MATLAB LMI toolbox.

For neural networks without distributed delays, the corresponding Lyapunov functional can be constructed in the same way, where V_1(t), V_2(t), V_3(t), V_4(t), V_6(t), and V_7(t) follow the same definitions as those in (16). By using a method similar to that employed in the proof of Theorem 1, we can easily obtain the following result.

Theorem 2: Given scalars α > 0 and γ, if there exist $P > 0$, $Q = \begin{bmatrix} Q_1 & Q_2 \\ * & Q_3 \end{bmatrix} > 0$, $R > 0$, $Z_1 > 0$, $U > 0$, X, X_1, G, L, H, S, and diagonal matrices $V_1 > 0$, $V_2 > 0$ such that (12), (15), (37), and (38) hold, where (37) is given by
$$\begin{bmatrix} \Xi_{11} + \Lambda(h) & \Xi_{12} & \Xi_{13} & \Xi_{15} & \Xi_{16} \\ * & \Xi_{22} & \Xi_{23} & 0 & \Xi_{26} \\ * & * & \Xi_{33} & 0 & 0 \\ * & * & * & \Xi_{55} & 0 \\ * & * & * & * & \Xi_{66} \end{bmatrix} < 0 \qquad (37)$$
and (38) denotes its counterpart corresponding to (14), then the master system (33) and slave system (34) are exponentially synchronous with decay rate α. Moreover, the desired controller gain matrix in (7) can be given by K = G^{−1}L.

Corollary 1: Given scalars α > 0 and γ, if there exist P > 0, R > 0, Z_1 > 0, Z_2 > 0, U > 0, X, X_1, G, L, H, S, and diagonal matrices V_1, V_2 such that (12), (13)|_{Q=0}, (14)|_{Q=0}, and (15) hold, then the master system (1) and slave system (5) are exponentially synchronous with decay rate α. Moreover, the desired controller gain matrix in (7) can be given by K = G^{−1}L.

Corollary 2: Given scalars α > 0 and γ, if there exist P > 0, R > 0, Z_1 > 0, U > 0, X, X_1, G, L, H, S, and diagonal matrices V_1, V_2 such that (12), (15), (37)|_{Q=0}, and (38)|_{Q=0} hold, then the master system (33) and slave system (34) are exponentially synchronous with decay rate α. Moreover, the desired controller gain matrix in (7) can be given by K = G^{−1}L.


Fig. 2. Chaotic behavior of the slave system (5) with u(t) = 0.

TABLE I
MAXIMUM ALLOWED α FOR DIFFERENT h

h            0.03   0.05   0.07   0.09   0.1
[27]         1.2    0.7    0.4    0.1    –
Theorem 1    1.6    1.2    1.0    0.7    0.6
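Rows such as "Theorem 1" in Table I (and similarly in Table II below) are presumably obtained by sweeping the sampling bound h and searching for the largest decay rate α for which the LMIs remain feasible. A minimal sketch of such a search is given below; it assumes a hypothetical feasibility oracle theorem1_feasible(h, alpha) that assembles conditions (12)–(15) and calls an SDP solver, and it relies on feasibility being monotone in α.

```python
def max_alpha(h, feasible, lo=0.0, hi=5.0, tol=1e-2):
    """Largest decay rate alpha for which the LMIs are feasible at sampling bound h,
    found by bisection. `feasible(h, alpha)` is an assumed LMI-feasibility oracle."""
    if not feasible(h, lo):
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(h, mid):
            lo = mid
        else:
            hi = mid
    return lo

# Usage sketch for the "Theorem 1" row of Table I (theorem1_feasible is hypothetical):
# for h in [0.03, 0.05, 0.07, 0.09, 0.1]:
#     print(h, max_alpha(h, theorem1_feasible))
```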

While utilizing Theorem 1 proposed here, we can get that the maximum allowed exponential synchronization rate α is 1.0, which is 150% larger than that in [27]. Table I gives detailed results on the maximum allowed exponential synchronization rate α for different largest sampling intervals h obtained by Theorem 1 and by the method of [27]. It can be found from Table I that a larger sampling interval h corresponds to a smaller exponential synchronization rate α. The results on the largest sampling interval h for different exponential synchronization rates α obtained by Theorem 1 and by [27] are listed in Table II, from which we can find that a larger exponential synchronization rate α corresponds to a smaller sampling interval h. Moreover, from Tables I and II, we find that our results are better than those of [27]. In particular, when the largest sampling interval is h = 0.1 and the exponential synchronization rate is α = 0.6, the method of [27] is not valid anymore, but by using the MATLAB LMI Control Toolbox to solve (12)–(15), we can obtain the corresponding gain matrix in (7) as follows:
$$K = \begin{bmatrix} -9.2187 & 0.0340 \\ 1.8405 & -10.9114 \end{bmatrix}.$$
Under the above gain matrix, the response curves of the control input (7) and the error system (9) are exhibited in Figs. 3 and 4, respectively. It is obvious from Fig. 4 that the master system (1) synchronizes with the slave system (5).

TABLE II
MAXIMUM ALLOWED h FOR DIFFERENT α

α            1.15   0.95   0.75   0.55   0.35
[27]         0.03   0.04   0.05   0.06   0.07
Theorem 1    0.05   0.07   0.09   0.11   0.13

Fig. 3. Control input u(t).

Fig. 4. State response of error system (9).
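To reproduce qualitatively what Figs. 3 and 4 show, one can drive a discretized copy of the error system (9) with the zero-order-hold law (7) and the gain K given above. The sketch below does this with forward Euler; since the example's system matrices are not reproduced in this text, the matrices C, A, B, D, the delay profiles, the stand-in f(r) = tanh(r), and the initial condition are hypothetical placeholders, so the resulting trajectory is only indicative.

```python
import numpy as np

# Hypothetical error-system data; only K is taken from the example above.
C = np.diag([1.0, 1.0])
A = np.array([[2.0, -0.1], [-5.0, 3.2]])
B = np.array([[-1.5, -0.1], [-0.2, -2.5]])
D = np.array([[0.1, 0.0], [0.0, 0.1]])
K = np.array([[-9.2187, 0.0340], [1.8405, -10.9114]])
f = np.tanh                       # stand-in for f(t) = g(y(t)) - g(x(t)) acting on r
d_max, h = 1.0, 0.1               # discrete-delay bound and largest sampling interval

dt, T = 1e-3, 3.5
N, hist = int(T / dt), int(d_max / dt)
r = np.zeros((N + hist, 2))
r[:hist + 1] = np.array([0.3, -0.1])          # initial error over [-d, 0]

rng = np.random.default_rng(1)
t_sample = 0.0                                # next sampling instant t_k
r_k = r[hist].copy()                          # latest sampled error r(t_k)

for k in range(hist, N + hist - 1):
    t = (k - hist) * dt
    if t >= t_sample:                         # time-varying sampling with h_k <= h
        r_k = r[k].copy()
        t_sample = t + rng.uniform(0.5 * h, h)
    u = K @ r_k                               # zero-order-hold control (7)
    d_t = 0.5 + 0.5 * np.sin(t) ** 2          # time-varying discrete delay
    m = int((0.25 + 0.25 * np.cos(t) ** 2) / dt)
    dist = f(r[k - m:k + 1]).sum(axis=0) * dt # integral of f over [t - tau(t), t]
    dr = -C @ r[k] + A @ f(r[k]) + B @ f(r[k - int(d_t / dt)]) + D @ dist + u
    r[k + 1] = r[k] + dt * dr
# r[hist:] approximates the error trajectory; it decays as in Fig. 4 only if the
# hypothetical matrices happen to satisfy the conditions of Theorem 1.
```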

V. CONCLUSION

In this paper, the problem of exponential synchronization has been investigated for neural networks with mixed time delays under variable sampling. A novel Lyapunov functional has been introduced for the error systems, which makes full use of the available information about the actual sampling pattern. Thanks to the new Lyapunov functional, a less conservative condition has been established such that the master systems synchronize with the slave systems, and the desired sampled-data controller can also be obtained by solving a set of LMIs. Moreover, the obtained condition has fewer decision variables than the existing one. Finally, a numerical example has been given to show the validity and potential of the developed theoretical result.

APPENDIX

The $\dot{V}_i(t)$ involved in (17) are given as follows:
$$\dot{V}_1(t) = 2e^{2\alpha t} r(t)^T P \dot{r}(t) + 2\alpha e^{2\alpha t} r(t)^T P r(t) \qquad (39)$$
$$\begin{aligned} \dot{V}_2(t) &= e^{2\alpha t}\eta(t)^T Q \eta(t) - (1-\dot{d}(t))e^{2\alpha(t-d(t))}\eta(t-d(t))^T Q \eta(t-d(t)) \\ &\le e^{2\alpha t}\eta(t)^T Q \eta(t) - e^{2\alpha t}(1-\mu)e^{-2\alpha d}\eta(t-d(t))^T Q \eta(t-d(t)) \end{aligned} \qquad (40)$$
$$\dot{V}_3(t) = e^{2\alpha t}\left[ r(t)^T R r(t) - e^{-2\alpha d} r(t-d)^T R r(t-d) \right] \qquad (41)$$
$$\begin{aligned} \dot{V}_4(t) &= d^2 e^{2\alpha t}\dot{r}(t)^T Z_1 \dot{r}(t) - d\int_{t-d}^{t} e^{2\alpha s}\dot{r}(s)^T Z_1 \dot{r}(s)\,ds \\ &\le e^{2\alpha t} d^2 \dot{r}(t)^T Z_1 \dot{r}(t) - e^{2\alpha t} d\int_{t-d}^{t} e^{-2\alpha d}\dot{r}(s)^T Z_1 \dot{r}(s)\,ds \end{aligned} \qquad (42)$$
$$\begin{aligned} \dot{V}_5(t) &= \tau^2 e^{2\alpha t} f(t)^T Z_2 f(t) - \tau\int_{t-\tau}^{t} e^{2\alpha s} f(s)^T Z_2 f(s)\,ds \\ &\le e^{2\alpha t}\tau^2 f(t)^T Z_2 f(t) - e^{2\alpha t}\tau\int_{t-\tau(t)}^{t} e^{-2\alpha\tau} f(s)^T Z_2 f(s)\,ds \\ &\le e^{2\alpha t}\left[ \tau^2 f(t)^T Z_2 f(t) - e^{-2\alpha\tau}\left(\int_{t-\tau(t)}^{t} f(s)\,ds\right)^T Z_2 \left(\int_{t-\tau(t)}^{t} f(s)\,ds\right) \right] \end{aligned} \qquad (43)$$
where Lemma 1 is applied
$$\begin{aligned} \dot{V}_6(t) &= -\int_{t_k}^{t} e^{2\alpha s}\dot{r}(s)^T U \dot{r}(s)\,ds + (t_{k+1}-t)e^{2\alpha t}\dot{r}(t)^T U \dot{r}(t) \\ &\le -e^{2\alpha t}\int_{t_k}^{t} e^{-2\alpha h}\dot{r}(s)^T U \dot{r}(s)\,ds + (t_{k+1}-t)e^{2\alpha t}\dot{r}(t)^T U \dot{r}(t) \end{aligned} \qquad (44)$$
$$\begin{aligned} \dot{V}_7(t) = {} & -e^{2\alpha t}\begin{bmatrix} r(t) \\ r(t_k) \end{bmatrix}^T \bar{H} \begin{bmatrix} r(t) \\ r(t_k) \end{bmatrix} + 2\alpha(t_{k+1}-t)e^{2\alpha t}\begin{bmatrix} r(t) \\ r(t_k) \end{bmatrix}^T \bar{H} \begin{bmatrix} r(t) \\ r(t_k) \end{bmatrix} \\ & + 2(t_{k+1}-t)e^{2\alpha t}\begin{bmatrix} \dot{r}(t) \\ 0 \end{bmatrix}^T \bar{H} \begin{bmatrix} r(t) \\ r(t_k) \end{bmatrix}. \end{aligned} \qquad (45)$$

Now, denote
$$\eta_1(t) = \int_{t-d(t)}^{t} \dot{r}(s)\,ds, \qquad \eta_2(t) = \int_{t-d}^{t-d(t)} \dot{r}(s)\,ds.$$
When 0 < d(t) < d, according to Lemma 1, we have that
$$\begin{aligned} d\int_{t-d}^{t} e^{-2\alpha d}\dot{r}(s)^T Z_1 \dot{r}(s)\,ds &= d\int_{t-d(t)}^{t} e^{-2\alpha d}\dot{r}(s)^T Z_1 \dot{r}(s)\,ds + d\int_{t-d}^{t-d(t)} e^{-2\alpha d}\dot{r}(s)^T Z_1 \dot{r}(s)\,ds \\ &\ge \frac{d}{d(t)}\eta_1(t)^T e^{-2\alpha d} Z_1 \eta_1(t) + \frac{d}{d-d(t)}\eta_2(t)^T e^{-2\alpha d} Z_1 \eta_2(t) \\ &= \eta_1(t)^T e^{-2\alpha d} Z_1 \eta_1(t) + \frac{d-d(t)}{d(t)}\eta_1(t)^T e^{-2\alpha d} Z_1 \eta_1(t) \\ &\quad + \eta_2(t)^T e^{-2\alpha d} Z_1 \eta_2(t) + \frac{d(t)}{d-d(t)}\eta_2(t)^T e^{-2\alpha d} Z_1 \eta_2(t). \end{aligned} \qquad (46)$$
Based on the lower bounds lemma of [34], we have
$$\begin{bmatrix} \sqrt{\dfrac{d-d(t)}{d(t)}}\,\eta_1(t) \\ -\sqrt{\dfrac{d(t)}{d-d(t)}}\,\eta_2(t) \end{bmatrix}^T \begin{bmatrix} e^{-2\alpha d}Z_1 & S \\ * & e^{-2\alpha d}Z_1 \end{bmatrix} \begin{bmatrix} \sqrt{\dfrac{d-d(t)}{d(t)}}\,\eta_1(t) \\ -\sqrt{\dfrac{d(t)}{d-d(t)}}\,\eta_2(t) \end{bmatrix} \ge 0 \qquad (47)$$
which implies
$$\frac{d-d(t)}{d(t)}\eta_1(t)^T e^{-2\alpha d} Z_1 \eta_1(t) + \frac{d(t)}{d-d(t)}\eta_2(t)^T e^{-2\alpha d} Z_1 \eta_2(t) \ge \eta_1(t)^T S \eta_2(t) + \eta_2(t)^T S^T \eta_1(t). \qquad (48)$$
Then, we can get from (46) and (48) that
$$\begin{aligned} -d\int_{t-d}^{t} e^{-2\alpha d}\dot{r}(s)^T Z_1 \dot{r}(s)\,ds &\le -\eta_1(t)^T e^{-2\alpha d} Z_1 \eta_1(t) - \eta_2(t)^T e^{-2\alpha d} Z_1 \eta_2(t) - \eta_1(t)^T S \eta_2(t) - \eta_2(t)^T S^T \eta_1(t) \\ &= -\begin{bmatrix} \eta_1(t) \\ \eta_2(t) \end{bmatrix}^T \begin{bmatrix} e^{-2\alpha d}Z_1 & S \\ * & e^{-2\alpha d}Z_1 \end{bmatrix} \begin{bmatrix} \eta_1(t) \\ \eta_2(t) \end{bmatrix}. \end{aligned} \qquad (49)$$
Note that when d(t) = 0 or d(t) = d, we have η_1(t) = 0 or η_2(t) = 0, respectively. Thus, (49) still holds. Therefore
$$\begin{aligned} \dot{V}_4(t) \le e^{2\alpha t}\big[ & d^2\dot{r}(t)^T Z_1 \dot{r}(t) - r(t)^T(e^{-2\alpha d}Z_1)r(t) + 2r(t)^T(e^{-2\alpha d}Z_1 - S)r(t-d(t)) + 2r(t)^T S r(t-d) \\ & + r(t-d(t))^T(-2e^{-2\alpha d}Z_1 + S + S^T)r(t-d(t)) + 2r(t-d(t))^T(-S + e^{-2\alpha d}Z_1)r(t-d) \\ & - r(t-d)^T(e^{-2\alpha d}Z_1)r(t-d) \big]. \end{aligned} \qquad (50)$$
On the other hand, inspired by the free-weighting matrix approach, we can find that, for any appropriately dimensioned matrix H, the following inequality holds:
$$\int_{t_k}^{t} \begin{bmatrix} \phi(t) \\ \dot{r}(s) \end{bmatrix}^T \begin{bmatrix} H^T e^{2\alpha h} U^{-1} H & H^T \\ * & e^{-2\alpha h} U \end{bmatrix} \begin{bmatrix} \phi(t) \\ \dot{r}(s) \end{bmatrix} ds \ge 0 \qquad (51)$$
where φ(t) = [r(t)^T  ṙ(t)^T  r(t_k)^T]^T, because
$$\begin{bmatrix} H^T e^{2\alpha h} U^{-1} H & H^T \\ * & e^{-2\alpha h} U \end{bmatrix} \ge 0.$$
From (51), we can immediately get that
$$-\int_{t_k}^{t} e^{-2\alpha h}\dot{r}(s)^T U \dot{r}(s)\,ds \le (t - t_k)\,\phi(t)^T H^T e^{2\alpha h} U^{-1} H \phi(t) + 2\phi(t)^T H^T (r(t) - r(t_k)). \qquad (52)$$
Thus
$$\dot{V}_6(t) \le e^{2\alpha t}(t - t_k)\,\phi(t)^T H^T e^{2\alpha h} U^{-1} H \phi(t) + 2e^{2\alpha t}\phi(t)^T H^T (r(t) - r(t_k)) + (t_{k+1} - t)e^{2\alpha t}\dot{r}(t)^T U \dot{r}(t). \qquad (53)$$

REFERENCES

[1] M. M. Gupta, L. Jin, and N. Homma, Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory. New York: Wiley, 2003. [2] Z. Wu, H. Su, and J. Chu, “Delay-dependent robust exponential stability of uncertain singular systems with time delays,” Int. J. Innov. Comp. Inf. Control, vol. 6, no. 5, pp. 2275–2284, May 2010. [3] H. Li, “Synchronization stability for discrete-time stochastic complex networks with probabilistic interval time-varying delays,” Int. J. Innov. Comp. Inf. Control, vol. 7, no. 2, pp. 697–708, Feb. 2011. [4] Z. Wang, Y. Liu, L. Yu, and X. Liu, “Exponential stability of delayed recurrent neural networks with Markovian jumping parameters,” Phys. Lett. A, vol. 356, no. 4, pp. 346–352, May 2006. [5] H. Zhang, Z. Liu, G. Huang, and Z. Wang, “Novel weighting-delaybased stability criteria for recurrent neural networks with time-varying delay,” IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 91–106, Jan. 2010. [6] L. Wu, Z. Feng, and W. Zheng, “Exponential stability analysis for delayed neural networks with switching parameters: Average dwell time approach,” IEEE Trans. Neural Netw., vol. 21, no. 9, pp. 1396–1407, Sep. 2010. [7] Z. Wu, P. Shi, H. Su, and J. Chu, “Delay-dependent stability analysis for switched neural networks with time-varying delay,” IEEE Trans. Syst., Man, Cybern., Part B, Cybern., vol. 41, no. 6, pp. 1522–1530, Dec. 2011. [8] R. Yang, H. Gao, and P. Shi, “Novel robust stability criteria for stochastic Hopfield neural networks with time delays,” IEEE Trans. Syst., Man, Cybern., Part B, Cybern., vol. 39, no. 2, pp. 467–474, Apr. 2009. [9] R. Yang, Z. Zhang, and P. Shi, “Exponential stability on stochastic neural networks with discrete interval and distributed delays,” IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 169–175, Jan. 2010. [10] Z. Wang, D. W. C. Ho, and X. Liu, “State estimation for delayed neural networks,” IEEE Trans. Neural Netw., vol. 16, no. 1, pp. 279–284, Jan. 2005. [11] J. Lian, Z. Feng, and P. Shi, “Observer design for switched recurrent neural networks: An average dwell time approach,” IEEE Trans. Neural Netw., vol. 22, no. 10, pp. 1547–1556, Oct. 2011. [12] Z. Wu, P. Shi, H. Su, and J. Chu, “State estimation for discrete-time neural networks with time-varying delay,” Int. J. Syst. Sci., vol. 43, no. 4, pp. 647–655, Apr. 2012. [13] H. Li, C. Wang, H. Gao, and P. Shi, “New passivity results for uncertain discrete-time stochastic neural networks with mixed time delays,” Neurocomputing, vol. 73, nos. 16–18, pp. 3291–3299, Oct. 2010. [14] Z. Wu, P. Shi, H. Su, and J. Chu, “Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays,” IEEE Trans. Neural Netw., vol. 22, no. 10, pp. 1566–1575, Oct. 2011.



[15] Z. Feng and J. Lam, “Stability and dissipativity analysis of distributed delay cellular neural networks,” IEEE Trans. Neural Netw., vol. 22, no. 6, pp. 976–981, Jun. 2011. [16] W. Yu and J. Cao, “Synchronization control of stochastic delayed neural networks,” Phys. A, vol. 373, pp. 252–260, Jan. 2007. [17] L. Wang, Q. Wang, Y. Jing, and Z. Kong, “Local and global exponential synchronization in uncertain complex dynamical networks,” Int. J. Innov. Comp. Inf. Control, vol. 6, no. 2, pp. 701–710, Feb. 2010. [18] L. M. Pecora and T. L. Carroll, “Synchronization in chaotic systems,” Phys. Rev. Lett., vol. 64, no. 8, pp. 821–824, Feb. 1990. [19] C. Cheng, T. Liao, J. Yan, and C. Hwang, “Synchronization of neural networks by decentralized feedback control,” Phys. Lett. A, vol. 338, no. 1, pp. 28–35, Apr. 2005. [20] H. R. Karimi and P. Maass, “Delay-range-dependent exponential H∞ synchronization of a class of delayed neural networks,” Chaos Solitons Fractals, vol. 41, no. 3, pp. 1125–1135, Aug. 2009. [21] D. Qi, M. Liu, M. Qiu, and S. Zhang, “Exponential H∞ synchronization of general discrete-time chaotic neural networks with or without time delays,” IEEE Trans. Neural Netw., vol. 21, no. 8, pp. 1358–1365, Aug. 2010. [22] Z. Wang, Y. Liu, G. Wei, and X. Liu, “A note on control of a class of discrete-time stochastic systems with distributed delays and nonlinear disturbances,” Automatica, vol. 46, no. 3, pp. 543–548, Mar. 2010. [23] Z. Wang and H. Zhang, “Global asymptotic stability of reactiondiffusion Cohen-Grossberg neural networks with continuously distributed delays,” IEEE Trans. Neural Netw., vol. 20, no. 1, pp. 39–49, Jan. 2010. [24] Z. Wang, Y. Liu, M. Li, and X. Liu, “Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays,” IEEE Trans. Neural Netw., vol. 17, no. 3, pp. 814–820, Mar. 2006. [25] T. Li, S. Fei, and K. Zhang, “Synchronization control of recurrent neural networks with distributed delays,” Phys. A, vol. 387, no. 4, pp. 982–996, Feb. 2008. [26] H. R. Karimi and H. Gao, “New delay-dependent exponential H∞ synchronization for uncertain neural networks with mixed time delays,” IEEE Trans. Syst., Man, Cybern., Part B, Cybern., vol. 40, no. 1, pp. 173–185, Feb. 2010. [27] C. Zhang, Y. He, and M. Wu, “Exponential synchronization of neural networks with time-varying mixed delays and sampled-data,” Neurocomputing, vol. 74, nos. 1–3, pp. 265–273, Dec. 2010. [28] E. Fridman, A. Seuret, and J. P. Richard, “Robust sampled-data stabilization of linear systems: An input delay approach,” Automatica, vol. 40, no. 8, pp. 1441–1446, Aug. 2004. [29] E. Fridman, “A refined input delay approach to sampled-data control,” Automatica, vol. 46, no. 2, pp. 421–427, Feb. 2010. [30] M. Wu, Y. He, J. She, and G. Liu, “Delay-dependent criteria for robust stability of time-varying delay systems,” Automatica, vol. 40, no. 8, pp. 1435–1439, Aug. 2004. [31] Y. Liu, Z. Wang, and X. Liu, “Global exponential stability of generalized recurrent neural networks with discrete and distributed delays,” Neural Netw., vol. 19, no. 5, pp. 667–675, Jun. 2006. [32] H. Gao and T. Chen, “Stabilization of nonlinear systems under variable sampling: A fuzzy control approach,” IEEE Trans. Fuzzy Syst., vol. 15, no. 5, pp. 972–983, Oct. 2007. [33] K. Gu, V. K. Kharitonov, and J. Chen, Stability of Time-Delay Systems. Boston, MA: Birkhauser, 2003. [34] P. Park, J. W. Ko, and C. Jeong, “Reciprocally convex approach to stability of systems with time-varying delays,” Automatica, vol. 47, no. 1, pp. 
235–238, Jan. 2011.

Zheng-Guang Wu was born in 1982. He received the B.S. and M.S. degrees from Zhejiang Normal University, Jinhua, China, in 2004 and 2007, respectively, and the Ph.D. degree from Zhejiang University, Hangzhou, China, in 2011. He was a Research Assistant with the Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, from February 2010 to June 2010, and a Research Associate with the Department of Mechanical Engineering, the University of Hong Kong, Hong Kong, from December 2010 to February 2011. He is currently with the Institute of Cyber-Systems and Control, Zhejiang University. His current research interests include robust control and systems theory.

Peng Shi (SM'98) received the B.Sc. degree in mathematics and the M.E. degree in systems engineering from the Harbin Institute of Technology, Harbin, China, in 1982 and 1985, respectively, the Ph.D. degree in electrical engineering from the University of Newcastle, Newcastle, Australia, in 1994, the Ph.D. degree in mathematics from the University of South Australia, Mawson Lakes, Australia, in 1998, and the Doctor of Science degree from the University of Glamorgan, Pontypridd, U.K., in 2006. He was a Lecturer with Heilongjiang University, China, from 1985 to 1989, and with the University of South Australia from 1997 to 1999, and a Senior Scientist with the Department of Defense, Defense Science and Technology Organization, Australia, from 1999 to 2005. He joined the University of Glamorgan as a Professor in 2004. He has been a Professor with Victoria University, Melbourne, Australia, since 2008. His current research interests include system and control theory, computational and intelligent systems, and operational research. He has published five monographs. Dr. Shi is a fellow of the Institution of Engineering and Technology, U.K., and the Institute of Mathematics and its Applications, U.K. He is currently serving as the Editor-in-Chief of the International Journal of Innovative Computing, Information and Control, and an Associate Editor of the IEEE TRANSACTIONS ON AUTOMATIC CONTROL, the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B, and the IEEE TRANSACTIONS ON FUZZY SYSTEMS. He is a recipient of the Most Cited Paper Award of Signal Processing in 2009, and the Vice-Chancellor's Award for Research and Research Training from Victoria University in 2010.

Hongye Su was born in 1969. He received the B.S. degree in industrial automation from the Nanjing University of Chemical Technology, Jiangsu, China, in 1990, and the M.S. and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1993 and 1995, respectively. He was a Lecturer with the Department of Chemical Engineering, Zhejiang University, from 1995 to 1997. From 1998 to 2000, he was an Associate Professor with the Institute of Advanced Process Control, Zhejiang University. Currently, he is a Professor with the Institute of Cyber-Systems and Control, Zhejiang University. His current research interests include robust controls, time-delay systems, and advanced process control theory and applications.

Jian Chu was born in 1963. He received the B.S., M.S., and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 1982, 1984, and 1989, respectively, the Ph.D. degree being obtained through a joint education program between Zhejiang University and Kyoto University, Kyoto, Japan. He joined the Institute of Advanced Process Control, Zhejiang University, as a Post-Doctoral Researcher, became an Associate Professor in 1991 and a Full Professor in 1993, and has been a doctoral advisor since 1994. His current research interests include control theory and applications, research and development of computer control systems, and advanced process control software.

