Cogn Neurodyn (2016) 10:85–98 DOI 10.1007/s11571-015-9356-y
RESEARCH ARTICLE
New stability criterion of neural networks with leakage delays and impulses: a piecewise delay method

R. Suresh Kumar^1 · G. Sugumaran^2 · R. Raja^3 · Quanxin Zhu^4 · U. Karthik Raja^5
Received: 26 May 2015 / Revised: 6 September 2015 / Accepted: 15 September 2015 / Published online: 29 September 2015
© Springer Science+Business Media Dordrecht 2015
Abstract  This paper analyzes the global asymptotic stability of a class of neural networks with a time delay in the leakage term and time-varying delays under impulsive perturbations. The time-varying delay is treated piecewise: the interval of its variation is divided into two subintervals by its central point. By developing a new Lyapunov–Krasovskii functional and checking the variation of its derivative on each of the two subintervals, we present sufficient conditions that guarantee the global asymptotic stability of the equilibrium point of the considered neural network. The proposed results, which do not require boundedness, differentiability or monotonicity of the activation functions, can be verified easily via the linear matrix inequality (LMI) toolbox in MATLAB. Finally, a numerical example and its simulation are given to show that the conditions obtained are new and less conservative than some existing ones in the literature.

Keywords  Asymptotic stability · Time-varying delay · Lyapunov–Krasovskii functional · Leakage delay · Impulse

Corresponding author: Quanxin Zhu ([email protected])
R. Raja ([email protected])

1 Department of Electrical and Electronic Engineering, Anna University Regional Centre, Coimbatore 641 047, India
2 Department of Electrical and Electronic Engineering, Sri Krishna College of Engineering and Technology, Coimbatore 641 008, India
3 Ramanujan Centre for Higher Mathematics, Alagappa University, Karaikudi 630 004, India
4 School of Mathematical Sciences and Institute of Finance and Statistics, Nanjing Normal University, Nanjing 210 023, China
5 Department of Mathematics, K.S.R College of Arts and Science, Thiruchengodu 637 215, India
Introduction

Neural networks have attracted much attention due to their applications to many real-world problems, such as optimization, associative memory and pattern classification. So far, various types of neural networks, such as cellular neural networks (CNNs), bidirectional associative memory neural networks (BAMNNs), Hopfield neural networks (HNNs), chaotic neural networks and Cohen–Grossberg neural networks (CGNNs), have been studied by many researchers because of their numerous applications; see (Cao and Wang 2003; Cho and Park 2007; Haykin 1999; Kosko 1992; Liu 1997; Meng and Wang 2007; Mou et al. 2008; Senan and Arik 2007; Tan et al. 2015; Wang et al. 2008; Yang et al. 2014). These applications depend heavily on the stability of the equilibrium point of the neural network, so stability analysis is essential for the design and application of neural networks. As is well known, time delays are a natural phenomenon frequently encountered in various engineering systems, automatic control systems, population models, inferred grinding models, the AIDS epidemic and so on (Arik 2004; Gopalsamy 1992; Gu et al. 2003). Moreover, the existence of time delays in a network may lead to instability or poor performance of the system. Recently, neural networks with various types of delay have been widely investigated by many authors; see (Cao and Li 2005; Liu et al. 2006; Meng and Wang 2007; Wang et al.
2006; Yang and Xu 2005) and the references therein. However, so far there has been very little work on neural networks with time delay in the leakage (or forgetting) term (see Gopalsamy 2007; Peng 2010), even though a delay in the leakage term can also destroy the dynamic behavior of a system, causing instability or poor performance. Hence the leakage delay in dynamical neural networks is considered an important research topic in the field of stability analysis of neural networks (Li and Huang 2009; Li et al. 2010; Song and Cao 2012). Furthermore, impulsive effects are found in a wide range of evolutionary processes, especially in biological systems such as biological neural networks, bursting rhythm models in pathology, optimal control in economics, frequency-modulated signal processing systems and flying object motions, in which many sudden and sharp changes occur simultaneously in the form of impulses. It has also been shown that, in the implementation of neural networks, the presence of impulsive perturbations is likewise unavoidable (Fu et al. 2005; Ignatyev 2008; Lakshmikantham et al. 1989). Thus the combination of impulsive perturbations and time delay in the leakage term can change the dynamic behavior of a neural network. Many existing results on delayed neural networks without impulsive perturbations can therefore only be regarded as treating an idealized situation, and a few of them contain errors. Very recently, the authors of Akca et al. (2004), Liu et al. (2005) and Xu and Yang (2005) established novel methods that reflect the more realistic dynamics of delayed neural networks in the presence of impulsive perturbations, in which the perturbations depend not only on the current state of the neurons at the impulse times $t_k$ but also on the states of the neurons in the recent history.
Motivated by the above discussion, this paper considers a class of neural networks with time delay in the leakage term and impulsive effects, and establishes the global asymptotic stability of the addressed network. By constructing a suitable Lyapunov–Krasovskii functional and combining the LMI technique with the free-weighting-matrix method, we obtain new sufficient conditions ensuring the global asymptotic stability of the neural network with time delay in the leakage term and impulsive perturbations. Moreover, restrictions on the activation functions such as boundedness, monotonicity and differentiability are not required in the proposed work, and the results obtained can be verified easily using the MATLAB LMI Control toolbox. As a novelty of this paper, we treat the time-varying delay piecewise and measure its variation on the two subintervals determined by the central point of the delay interval. Finally, a numerical example is given to demonstrate the effectiveness of the proposed results in this
paper, and we also compare our results with some existing ones in the literature.

Notations: The notation is standard throughout this paper. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space; $\mathbb{R}^{n\times m}$ is the set of real $n\times m$ matrices; $I$ is the identity matrix of appropriate dimension; $\|\cdot\|$ stands for the Euclidean vector norm or the spectral norm, as appropriate. The notation $X>0$ (respectively $X<0$), for $X\in\mathbb{R}^{n\times n}$, means that the matrix $X$ is real symmetric positive definite (respectively, negative definite). $A^T$ and $A^{-1}$ denote the transpose and the inverse of the square matrix $A$. For any interval $J\subseteq\mathbb{R}$ and set $V\subseteq\mathbb{R}^k$ $(1\le k\le n)$, $C(J,V)=\{\varphi:J\to V \mid \varphi \text{ is continuous}\}$ and $PC^1(J,V)=\{\varphi:J\to V \mid \varphi \text{ is continuously differentiable everywhere except at a finite number of points } t, \text{ at which } \varphi(t^+), \varphi(t^-), \dot\varphi(t^+) \text{ and } \dot\varphi(t^-) \text{ exist and } \varphi(t^+)=\varphi(t),\ \dot\varphi(t^+)=\dot\varphi(t)\}$, where $\dot\varphi$ denotes the derivative of $\varphi$. The symbol $*$ in a matrix denotes a term induced by symmetry.
Model description and preliminaries

Consider a continuous-time neural network model with time delay in the leakage term and impulses as follows:
$$\begin{aligned}
\dot{x}(t) &= -Cx(t-\sigma) + Af(x(t)) + Bf(x(t-\tau(t))) + D\int_{t-\tau(t)}^{t} f(x(s))\,ds + I, && t\neq t_k,\\
\Delta x(t_k) &= x(t_k^+) - x(t_k^-) = J_k(x(t_k^-),\, x_{t_k}), && t=t_k,\\
x(s) &= \phi(s), && s\in[-\varrho,\,0],
\end{aligned}\tag{1}$$
where $x(t) = (x_1, x_2, \dots, x_n)^T$ is the neuron state vector of the considered network; $C = \mathrm{diag}(c_1, c_2, \dots, c_n)$ is a diagonal matrix with $c_i > 0$ $(i=1,2,\dots,n)$; $A$, $B$ and $D$ are the connection weight matrix, the discretely delayed connection weight matrix and the distributedly delayed connection weight matrix, respectively; $I$ is a constant external input vector; $f(x(\cdot)) = (f_1(x_1(\cdot)), f_2(x_2(\cdot)), \dots, f_n(x_n(\cdot)))^T$ represents the neuron activation functions; $\tau(t)$ is the transmission time-varying delay, and $\sigma \ge 0$ denotes the leakage delay. $\Delta x(t_k)$ describes the abrupt change of the state at the moment $t_k$, where $J_k(x(t_k^-), x_{t_k})$ is the incremental change of the state at $t_k$ and $J_k(0,0)=0$. The fixed impulse times $t_k$ satisfy $t_1 < t_2 < \cdots$, $\lim_{k\to+\infty} t_k = +\infty$, and $x(t^-) = \lim_{s\to t^-} x(s)$. Here $\phi(s)$ is the initial condition of the considered neural network (1), where $\varrho = \max\{\sigma, \tau_M\}$ and $\phi(\cdot) = (\phi_1, \phi_2, \dots, \phi_n)^T \in PC^1([-\varrho,0], \mathbb{R}^n)$. In this paper, we make the following assumptions.
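To make the continuous-time model concrete, the dynamics (1) can be integrated numerically with a forward-Euler scheme that keeps a history buffer for the delayed state, the distributed-delay integral and the leakage delay, and applies the state jump at the impulse instants. This is an illustrative sketch only; the parameters in the demo below are hypothetical and are not those of the paper's numerical example.

```python
import numpy as np

def simulate(C, A, B, D, I_ext, f, tau, sigma, tau_max, phi,
             T=10.0, dt=1e-3, impulse_times=(), E=None):
    """Forward-Euler sketch of model (1):
    x'(t) = -C x(t-sigma) + A f(x(t)) + B f(x(t-tau(t)))
            + D * int_{t-tau(t)}^{t} f(x(s)) ds + I,   t != t_k,
    with jumps Delta x(t_k) = -E x(t_k) at the impulse instants."""
    n = C.shape[0]
    q = int(round(max(sigma, tau_max) / dt))   # history length in steps
    steps = int(round(T / dt))
    x = np.zeros((q + steps + 1, n))
    for i in range(q + 1):                     # initial history on [-q*dt, 0]
        x[i] = phi((i - q) * dt)
    imp = {int(round(tk / dt)) for tk in impulse_times}
    s_lag = int(round(sigma / dt))
    for k in range(steps):
        cur = q + k
        d = int(round(tau(k * dt) / dt))       # current discrete delay
        # rectangle rule for the distributed term int_{t-tau}^{t} f(x(s)) ds
        dist = dt * f(x[cur - d:cur]).sum(axis=0) if d > 0 else np.zeros(n)
        dx = (-C @ x[cur - s_lag] + A @ f(x[cur]) + B @ f(x[cur - d])
              + D @ dist + I_ext)
        nxt = x[cur] + dt * dx
        if (k + 1) in imp and E is not None:   # impulsive perturbation
            nxt = nxt - E @ nxt
        x[cur + 1] = nxt
    return x[q:]

# hypothetical demo parameters: strong leakage, weak couplings, zero input
C_demo = np.diag([7.0, 6.0])
traj = simulate(C_demo, 0.1 * np.eye(2), 0.1 * np.eye(2), 0.1 * np.eye(2),
                np.zeros(2), np.tanh, lambda t: 0.3, 0.1, 0.3,
                lambda s: np.array([1.0, -1.0]),
                impulse_times=(2.0, 4.0), E=0.3 * np.eye(2))
```

With the strong diagonal leakage matrix and small couplings chosen here, the simulated trajectory decays toward the origin even under the mild impulses.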
Assumption 1  For each $i\in\{1,2,\dots,n\}$, the neuron activation function $f_i(\cdot)$ is continuous, bounded, and satisfies the condition
$$\sigma_i^- \le \frac{f_i(s_1)-f_i(s_2)}{s_1-s_2} \le \sigma_i^+, \qquad \forall\, s_1\neq s_2 \in \mathbb{R}, \tag{2}$$
where $\sigma_i^+$ and $\sigma_i^-$ are known constants.
Remark 1  Assumption 1 was proposed in Liu et al. (2006) and Wang et al. (2006), respectively. Here the constants $\sigma_i^-$, $\sigma_i^+$ in Assumption 1 are allowed to be positive, negative or zero. Hence the resulting activation functions may be non-monotonic, and the assumption is less conservative than the usual descriptions of both sigmoid activation functions and Lipschitz-type activation functions.
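As a quick illustration of Assumption 1, the non-monotonic activation $f(s)=\tanh(0.7s)-0.1\sin s$ (which also appears in the numerical example later in the paper) satisfies (2) with $\sigma^- = -0.1$ and $\sigma^+ = 0.8$; a numeric spot-check of the difference quotients:

```python
import numpy as np

# Sector condition (2): sigma^- <= (f(s1)-f(s2))/(s1-s2) <= sigma^+
# for the non-monotonic activation f(s) = tanh(0.7 s) - 0.1 sin(s).
f = lambda s: np.tanh(0.7 * s) - 0.1 * np.sin(s)
sig_lo, sig_hi = -0.1, 0.8

rng = np.random.default_rng(0)
s1 = rng.uniform(-20, 20, 10000)
s2 = rng.uniform(-20, 20, 10000)
mask = np.abs(s1 - s2) > 1e-8          # require s1 != s2
q = (f(s1[mask]) - f(s2[mask])) / (s1[mask] - s2[mask])
assert q.min() >= sig_lo - 1e-9 and q.max() <= sig_hi + 1e-9
```

The quotients stay inside the sector even though $f$ is not monotonic, since its derivative $0.7\,\mathrm{sech}^2(s) - 0.1\cos s$ ranges over a subset of $[-0.1,\,0.8]$.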
Assumption 2  The time delay $\tau(t)$ is a time-varying differentiable function that satisfies
$$\tau_m \le \tau(t) \le \tau_M, \qquad \dot\tau(t) \le \mu < 1, \tag{3}$$
where $\tau_m$, $\tau_M$ and $\mu$ are known constants.

For convenience, we shift the equilibrium point $x^* = (x_1^*, x_2^*, \dots, x_n^*)^T$ to the origin by the translation $u(t) = x(t) - x^*$, which yields the following system:
$$\begin{aligned}
\dot u(t) &= -Cu(t-\sigma) + Ag(u(t)) + Bg(u(t-\tau(t))) + D\int_{t-\tau(t)}^{t} g(u(s))\,ds, && t\neq t_k,\\
\Delta u(t_k) &= u(t_k^+) - u(t_k^-) = J_k(u(t_k^-), u_{t_k}) = -E_k\Big(u(t_k^-) - C\int_{t_k-\sigma}^{t_k} u(h)\,dh\Big), && t=t_k,\\
u(s) &= \phi(s) - x^*, && s\in[-\varrho,\,0],
\end{aligned}\tag{4}$$
where $u(t)=(u_1,u_2,\dots,u_n)^T$, $g(u(\cdot)) = (g_1(u_1(\cdot)), g_2(u_2(\cdot)), \dots, g_n(u_n(\cdot)))^T$ with $g_i(u_i(t)) = f_i(x_i(t)+x_i^*) - f_i(x_i^*)$, and the $E_k$ $(k\in\mathbb{Z}^+)$ are $n\times n$ real matrices. By Assumption 1, it can be verified that
$$\sigma_i^- \le \frac{g_i(s_1)-g_i(s_2)}{s_1-s_2} \le \sigma_i^+,\qquad \forall\, s_1\neq s_2\in\mathbb{R}, \tag{5}$$
$$g_i(0)=0,\qquad i=1,2,\dots,n. \tag{6}$$

In the following, we define two scalars $\tau_0$ and $\delta$ related to the variation range of the time delay:
$$\tau_0 = \frac{\tau_M+\tau_m}{2},\qquad \delta = \frac{\tau_M-\tau_m}{2}.$$

Remark 2  Here $\tau_0$ is the central point of the interval of the time-varying delay. The method of constructing a Lyapunov functional by utilizing the central point is often named the delay center point (DCP) method, which was first proposed in Yue (2004) to study the stabilization of systems with interval time-varying delay. In this paper, however, the DCP method is improved by introducing a piecewise analysis with respect to the time delay. It is easy to see from Assumption 2 that for all $t\in\mathbb{R}^+$ we have $\tau(t)\in[\tau_m,\tau_0]$ or $\tau(t)\in(\tau_0,\tau_M]$. Consequently, we define the two sets
$$\aleph_1 = \{t \mid t\in\mathbb{R}^+,\ \tau(t)\in[\tau_m,\tau_0]\},\qquad \aleph_2 = \{t \mid t\in\mathbb{R}^+,\ \tau(t)\in(\tau_0,\tau_M]\}.$$
Obviously $\aleph_1\cap\aleph_2=\emptyset$ (an empty set) and $\aleph_1\cup\aleph_2=\mathbb{R}^+$. In the proof of our main results, we check the variation of the derivative of the Lyapunov functional on $\aleph_1$ and $\aleph_2$, respectively.

Now we need the following lemmas to prove our asymptotic stability results for the addressed network (1).

Lemma 1 (Sanchez and Perez 1999)  Let $x\in\mathbb{R}^n$, $y\in\mathbb{R}^n$ and let $\epsilon>0$ be a scalar. Then
$$x^Ty + y^Tx \le \epsilon\, x^Tx + \epsilon^{-1} y^Ty.$$

Lemma 2 (Han and Yue 2007)  For any symmetric constant matrix $R\in\mathbb{R}^{n\times n}$, $R\ge 0$, scalars $\tau_m \le \tau_M$, and a vector-valued function $\dot x:[-\tau_M,-\tau_m]\to\mathbb{R}^n$, $t\in\mathbb{R}^+$, such that the following integration is well defined,
$$-(\tau_M-\tau_m)\int_{t-\tau_M}^{t-\tau_m} \dot x^T(s)R\dot x(s)\,ds \le \begin{pmatrix} x(t-\tau_m)\\ x(t-\tau_M)\end{pmatrix}^T \begin{pmatrix} -R & R\\ * & -R\end{pmatrix} \begin{pmatrix} x(t-\tau_m)\\ x(t-\tau_M)\end{pmatrix}.$$

Lemma 3 (Zhang et al. 2009)  For any constant matrices $W_1$ and $W_2$ of appropriate dimensions, a symmetric matrix $X<0$, scalars $\tau_m\le\tau_M$ and a function $\tau(t):\mathbb{R}^+\to[\tau_m,\tau_M]$,
$$(\tau(t)-\tau_m)W_1 + (\tau_M-\tau(t))W_2 + X < 0 \tag{7}$$
holds if and only if
$$(\tau_M-\tau_m)W_1 + X < 0, \tag{8}$$
$$(\tau_M-\tau_m)W_2 + X < 0 \tag{9}$$
hold.

Proof  (Necessity) Letting $\tau(t)=\tau_M$ in (7) yields (8); similarly, letting $\tau(t)=\tau_m$ in (7) yields (9). (Sufficiency) Define the function
$$\Lambda(\tau(t)) = (\tau(t)-\tau_m)W_1 + (\tau_M-\tau(t))W_2 + X, \tag{10}$$
which can be further rewritten as
$$\Lambda(\tau(t)) = \frac{\tau(t)-\tau_m}{\tau_M-\tau_m}\big[(\tau_M-\tau_m)W_1 + X\big] + \frac{\tau_M-\tau(t)}{\tau_M-\tau_m}\big[(\tau_M-\tau_m)W_2 + X\big]. \tag{11}$$
From (8) and (9) we conclude that $\Lambda(\tau(t)) < 0$ for all $\tau(t)\in[\tau_m,\tau_M]$, that is, $(\tau(t)-\tau_m)W_1 + (\tau_M-\tau(t))W_2 + X < 0$. $\square$

Lemma 4  For any constant symmetric positive-definite matrix $J\in\mathbb{R}^{m\times m}$, a scalar $\eta>0$, and a vector function $v:[0,\eta]\to\mathbb{R}^m$, the following inequality holds:
$$\eta\int_0^{\eta} v^T(s)Jv(s)\,ds \ \ge\ \Big(\int_0^{\eta} v(s)\,ds\Big)^T J\,\Big(\int_0^{\eta} v(s)\,ds\Big).$$

Lemma 5 (Schur complement; Boyd et al. 1994)  For a given matrix
$$R = \begin{pmatrix} R_{11} & R_{12}\\ R_{12}^T & R_{22}\end{pmatrix},\qquad R_{11}=R_{11}^T,\ R_{22}=R_{22}^T,$$
the condition $R>0$ is equivalent to either of the following:
(i) $R_{22}>0$ and $R_{11} - R_{12}R_{22}^{-1}R_{12}^T > 0$;
(ii) $R_{11}>0$ and $R_{22} - R_{12}^TR_{11}^{-1}R_{12} > 0$.

For convenience, we use the following notation throughout this paper:
$$\Sigma_1 = \mathrm{diag}\big(\sigma_1^-\sigma_1^+,\ \sigma_2^-\sigma_2^+,\ \dots,\ \sigma_n^-\sigma_n^+\big),\qquad \Sigma_2 = \mathrm{diag}\Big(\frac{\sigma_1^-+\sigma_1^+}{2},\ \frac{\sigma_2^-+\sigma_2^+}{2},\ \dots,\ \frac{\sigma_n^-+\sigma_n^+}{2}\Big),$$
$$N^T = \big(N_1^T\ N_2^T\ \cdots\ N_9^T\big),\quad M^T = \big(M_1^T\ \cdots\ M_9^T\big),\quad U^T = \big(U_1^T\ \cdots\ U_9^T\big),\quad V^T = \big(V_1^T\ \cdots\ V_9^T\big),$$
$$\xi^T(t) = \bigg(u^T(t)\ \ u^T(t-\tau_m)\ \ u^T(t-\tau_M)\ \ u^T(t-\tau_0)\ \ u^T(t-\tau(t))\ \ \Big(\int_{t-\sigma}^{t}u(h)\,dh\Big)^T\ \ g^T(u(t))\ \ g^T(u(t-\tau(t)))\ \ \Big(\int_{t-\tau(t)}^{t}g(u(s))\,ds\Big)^T\bigg).$$

Main results

In this section we investigate the global asymptotic stability of the continuous-time neural network with time delay in the leakage term and impulsive effects. Based on the Lyapunov–Krasovskii functional and the piecewise delay method, we obtain the following delay-dependent asymptotic stability conditions.

Theorem 1  Suppose that Assumptions 1 and 2 hold. For given scalars $0\le\mu<1$, $\tau_m$ and $\tau_M$ with $0\le\tau_m\le\tau_M$, the system (4) is globally asymptotically stable if there exist positive symmetric matrices $P$, $Q_i$ $(i=1,2,3)$, $R_i$ $(i=1,2,3)$, $S$, positive diagonal matrices $K$, $\Gamma_i$ $(i=1,2)$ and matrices $N_l, M_l, U_l, V_l$ $(l=1,2,\dots,9)$ of appropriate dimensions such that, for all $i=1,2$ and $j=1,2$, the following LMIs hold:
$$\begin{pmatrix} \Pi + \Xi_i & H_{ij}\\ * & -R_{i+1}\end{pmatrix} < 0\qquad (i,j=1,2), \tag{12}$$
$$\begin{pmatrix} P & (I-E_k)^TP\\ * & P\end{pmatrix} \ge 0\quad\text{and}\quad \begin{pmatrix} X_1 & X_2\\ * & X_3\end{pmatrix} \ge 0,\qquad k\in\mathbb{Z}^+, \tag{13}$$
where
$$H_{11} = \delta N,\quad H_{12} = \delta M,\quad H_{21} = \delta U,\quad H_{22} = \delta V,$$
$$\Xi_1 = \delta\,\mathrm{He}\big(N(e_2-e_5)^T + M(e_5-e_4)^T\big),\qquad \Xi_2 = \delta\,\mathrm{He}\big(U(e_4-e_5)^T + V(e_5-e_3)^T\big),$$
with $\mathrm{He}(Y) = Y + Y^T$ and $e_l$ $(l=1,\dots,9)$ the $l$-th block column of the $9n\times 9n$ identity matrix, and $\Pi = [\Pi_{lm}]_{9\times 9}$ is the symmetric matrix whose nonzero blocks are
$$\begin{aligned}
\Pi_{11} &= -C^TP - PC + Q_1 + R_1 + \delta^2 C^T(R_2+R_3)C + \sigma^2 S + \tau_m^2 C^TQ_3C - 2\Gamma_1\Sigma_1 - R_2,\\
\Pi_{12} &= R_2,\qquad \Pi_{16} = C^TPC,\\
\Pi_{17} &= PA - C^TK - \delta^2 C^T(R_2+R_3)A - \tau_m^2 C^TQ_3A + \Gamma_1\Sigma_2,\\
\Pi_{18} &= PB - \delta^2 C^T(R_2+R_3)B - \tau_m^2 C^TQ_3B + \Gamma_2\Sigma_2,\\
\Pi_{19} &= PD - \delta^2 C^T(R_2+R_3)D - \tau_m^2 C^TQ_3D,\\
\Pi_{22} &= -R_1 + X_1 - R_2,\qquad \Pi_{24} = X_2,\qquad \Pi_{33} = -X_3,\qquad \Pi_{34} = -X_2^T,\qquad \Pi_{44} = X_3 - X_1,\\
\Pi_{55} &= -(1-\mu)Q_1 - 2\Gamma_2\Sigma_1,\qquad \Pi_{58} = \Gamma_2\Sigma_2,\\
\Pi_{66} &= -S,\qquad \Pi_{67} = -C^TPA,\qquad \Pi_{68} = -C^TPB,\qquad \Pi_{69} = -C^TPD,\\
\Pi_{77} &= A^TK + KA + Q_2 + \delta^2 A^T(R_2+R_3)A + \tau_m^2 A^TQ_3A - 2\Gamma_1,\\
\Pi_{78} &= KB + \delta^2 A^T(R_2+R_3)B + \tau_m^2 A^TQ_3B,\qquad \Pi_{79} = KD + \delta^2 A^T(R_2+R_3)D + \tau_m^2 A^TQ_3D,\\
\Pi_{88} &= -(1-\mu)Q_2 + \delta^2 B^T(R_2+R_3)B + \tau_m^2 B^TQ_3B - 2\Gamma_2,\qquad \Pi_{89} = \delta^2 B^T(R_2+R_3)D + \tau_m^2 B^TQ_3D,\\
\Pi_{99} &= \delta^2 D^T(R_2+R_3)D + \tau_m^2 D^TQ_3D.
\end{aligned}$$
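The LMIs (12) and (13) are checked in practice with the MATLAB LMI toolbox. The Schur-complement step (Lemma 5), which converts the quadratic $\delta^2 NR_2^{-1}N^T$-type terms into the block form of (12), can be sanity-checked numerically; the matrices below are random test data, not a feasible solution of (12):

```python
import numpy as np

def is_pd(M, tol=1e-10):
    """Positive definiteness via the symmetric eigensolver."""
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) > tol))

def schur_pd(R11, R12, R22):
    """Lemma 5: [[R11, R12], [R12^T, R22]] > 0 iff R22 > 0 and
    R11 - R12 R22^{-1} R12^T > 0."""
    return is_pd(R22) and is_pd(R11 - R12 @ np.linalg.inv(R22) @ R12.T)

rng = np.random.default_rng(1)
n = 3
X = rng.normal(size=(2 * n, 2 * n))
S = X @ X.T + 0.1 * np.eye(2 * n)      # positive definite by construction
R11, R12, R22 = S[:n, :n], S[:n, n:], S[n:, n:]
assert is_pd(S) and schur_pd(R11, R12, R22)

# a negative definite matrix fails both characterizations
assert not is_pd(-S) and not schur_pd(-R11, -R12, -R22)
```

The `and` short-circuit mirrors the logical structure of Lemma 5: the Schur complement is only formed once the lower-right block is known to be positive definite.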
Proof  In order to proceed with the stability analysis of system (4), we construct the following Lyapunov–Krasovskii functional candidate:
$$V(t,u(t)) = \sum_{i=1}^{7} V_i(t,u(t)), \tag{14}$$
where
$$\begin{aligned}
V_1(t,u(t)) &= \Big[u(t) - C\int_{t-\sigma}^{t}u(h)\,dh\Big]^T P\,\Big[u(t) - C\int_{t-\sigma}^{t}u(h)\,dh\Big],\\
V_2(t,u(t)) &= 2\sum_{i=1}^{n} k_i \int_0^{u_i(t)} g_i(s)\,ds,\\
V_3(t,u(t)) &= \int_{t-\tau(t)}^{t} u^T(s)Q_1u(s)\,ds + \int_{t-\tau(t)}^{t} g^T(u(s))Q_2\,g(u(s))\,ds,\\
V_4(t,u(t)) &= \int_{t-\tau_m}^{t} u^T(s)R_1u(s)\,ds + \int_{t-\tau_0}^{t-\tau_m} \begin{pmatrix} u(s)\\ u(s-\tau(s))\end{pmatrix}^T \begin{pmatrix} X_1 & X_2\\ * & X_3\end{pmatrix} \begin{pmatrix} u(s)\\ u(s-\tau(s))\end{pmatrix} ds,\\
V_5(t,u(t)) &= \delta\int_{-\tau_0}^{-\tau_m}\!\!\int_{t+s}^{t} \dot u_1^T(h)R_2\dot u_1(h)\,dh\,ds + \delta\int_{-\tau_M}^{-\tau_0}\!\!\int_{t+s}^{t} \dot u_1^T(h)R_3\dot u_1(h)\,dh\,ds,\\
V_6(t,u(t)) &= \sigma\int_{-\sigma}^{0}\!\!\int_{t+s}^{t} u^T(h)Su(h)\,dh\,ds,\\
V_7(t,u(t)) &= \tau_m\int_{-\tau_m}^{0}\!\!\int_{t+s}^{t} \dot u_1^T(h)Q_3\dot u_1(h)\,dh\,ds,
\end{aligned}\tag{15}$$
with $u_1(t) = u(t) - C\int_{t-\sigma}^{t}u(h)\,dh$.

Calculating the time derivative of $V(t,u(t))$ along the solutions of (4), and in view of (5) and (6), it is easy to obtain
$$\big[g_i(u_i(t)) - \sigma_i^-u_i(t)\big]\big[g_i(u_i(t)) - \sigma_i^+u_i(t)\big] \le 0,\qquad \big[g_i(u_i(t-\tau(t))) - \sigma_i^-u_i(t-\tau(t))\big]\big[g_i(u_i(t-\tau(t))) - \sigma_i^+u_i(t-\tau(t))\big] \le 0. \tag{16}$$
It can then be deduced that there exist positive diagonal matrices $\Gamma_1 = \mathrm{diag}\{\gamma_{1,1},\dots,\gamma_{1,n}\}>0$ and $\Gamma_2 = \mathrm{diag}\{\gamma_{2,1},\dots,\gamma_{2,n}\}>0$ such that
$$0 \le -2\sum_{i=1}^{n}\gamma_{1,i}\big[g_i(u_i(t)) - \sigma_i^-u_i(t)\big]\big[g_i(u_i(t)) - \sigma_i^+u_i(t)\big] = \begin{pmatrix} u(t)\\ g(u(t))\end{pmatrix}^T \begin{pmatrix} -2\Gamma_1\Sigma_1 & 2\Gamma_1\Sigma_2\\ * & -2\Gamma_1\end{pmatrix}\begin{pmatrix} u(t)\\ g(u(t))\end{pmatrix} \tag{17}$$
and, similarly,
$$0 \le -2\sum_{i=1}^{n}\gamma_{2,i}\big[g_i(u_i(t-\tau(t))) - \sigma_i^-u_i(t-\tau(t))\big]\big[g_i(u_i(t-\tau(t))) - \sigma_i^+u_i(t-\tau(t))\big] = \begin{pmatrix} u(t-\tau(t))\\ g(u(t-\tau(t)))\end{pmatrix}^T \begin{pmatrix} -2\Gamma_2\Sigma_1 & 2\Gamma_2\Sigma_2\\ * & -2\Gamma_2\end{pmatrix}\begin{pmatrix} u(t-\tau(t))\\ g(u(t-\tau(t)))\end{pmatrix}. \tag{18}$$
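The derivative estimates that follow lean repeatedly on integral inequalities of Jensen type (Lemmas 2 and 4). A quick numerical check of Lemma 4 with midpoint quadrature; the vector function $v$ below is an arbitrary smooth test choice, not taken from the paper:

```python
import numpy as np

# Lemma 4 (Jensen):  eta * int_0^eta v^T(s) J v(s) ds
#                    >= (int_0^eta v(s) ds)^T J (int_0^eta v(s) ds)
rng = np.random.default_rng(2)
eta, m, N = 2.0, 3, 4000
ds = eta / N
X = rng.normal(size=(m, m))
J = X @ X.T + 0.1 * np.eye(m)                    # symmetric positive definite
s = (np.arange(N) + 0.5) * ds                    # midpoint quadrature nodes
v = np.stack([np.sin(s + k) for k in range(m)])  # test function v: [0,eta] -> R^m
lhs = eta * ds * np.einsum('it,ij,jt->', v, J, v)
iv = ds * v.sum(axis=1)                          # int_0^eta v(s) ds
rhs = iv @ J @ iv
assert lhs >= rhs - 1e-8
```

Since $v$ is non-constant, the inequality holds with a strict margin that easily dominates the quadrature error.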
Calculating the time derivative of $V(t,u(t))$ along the solutions of (4), bounding the $S$-term of $\dot V_6$ by Lemma 4 and the $Q_3$-term of $\dot V_7$ by Lemma 2 (the estimates (19)-(21)), and adding the sector inequalities (17) and (18), we obtain
$$\dot V(t,u(t)) \le \xi^T(t)\Pi\xi(t) - \delta\int_{t-\tau_0}^{t-\tau_m}\dot u_1^T(h)R_2\dot u_1(h)\,dh - \delta\int_{t-\tau_M}^{t-\tau_0}\dot u_1^T(h)R_3\dot u_1(h)\,dh. \tag{22}$$
Next we discuss the variation of the derivative of $V(t,u(t))$ in the two cases $t\in\aleph_1$ and $t\in\aleph_2$, respectively.

Case I. For $t\in\aleph_1$, i.e. $\tau(t)\in[\tau_m,\tau_0]$, it can be deduced from Lemma 2 that
$$-\delta\int_{t-\tau_M}^{t-\tau_0}\dot u_1^T(h)R_3\dot u_1(h)\,dh \le \begin{pmatrix} u(t-\tau_0)\\ u(t-\tau_M)\end{pmatrix}^T\begin{pmatrix} -R_3 & R_3\\ * & -R_3\end{pmatrix}\begin{pmatrix} u(t-\tau_0)\\ u(t-\tau_M)\end{pmatrix}. \tag{23}$$
We further introduce two free-weighting matrices $N$ and $M$ of appropriate dimensions. Combining (22) and (23) and using the Leibniz-Newton formula, we get
$$\begin{aligned}\dot V(t,u(t)) \le\ & \xi^T(t)\Pi\xi(t) - \delta\int_{t-\tau_0}^{t-\tau_m}\dot u_1^T(h)R_2\dot u_1(h)\,dh + \begin{pmatrix} u(t-\tau_0)\\ u(t-\tau_M)\end{pmatrix}^T\begin{pmatrix} -R_3 & R_3\\ * & -R_3\end{pmatrix}\begin{pmatrix} u(t-\tau_0)\\ u(t-\tau_M)\end{pmatrix}\\
&+ 2\delta\,\xi^T(t)N\Big[u(t-\tau_m) - u(t-\tau(t)) - \int_{t-\tau(t)}^{t-\tau_m}\dot u_1(h)\,dh\Big] + 2\delta\,\xi^T(t)M\Big[u(t-\tau(t)) - u(t-\tau_0) - \int_{t-\tau_0}^{t-\tau(t)}\dot u_1(h)\,dh\Big].\end{aligned}\tag{24}$$
Now it is easy to obtain the following inequalities by completing the square (cf. Lemma 1):
$$-2\delta\,\xi^T(t)N\int_{t-\tau(t)}^{t-\tau_m}\dot u_1(h)\,dh \le (\tau(t)-\tau_m)\,\delta\,\xi^T(t)NR_2^{-1}N^T\xi(t) + \delta\int_{t-\tau(t)}^{t-\tau_m}\dot u_1^T(h)R_2\dot u_1(h)\,dh, \tag{25}$$
$$-2\delta\,\xi^T(t)M\int_{t-\tau_0}^{t-\tau(t)}\dot u_1(h)\,dh \le (\tau_0-\tau(t))\,\delta\,\xi^T(t)MR_2^{-1}M^T\xi(t) + \delta\int_{t-\tau_0}^{t-\tau(t)}\dot u_1^T(h)R_2\dot u_1(h)\,dh. \tag{26}$$
Combining (24)-(26), we get
$$\dot V(t,u(t)) \le \xi^T(t)\big[\Pi + \Xi_1 + (\tau(t)-\tau_m)\,\delta\,NR_2^{-1}N^T + (\tau_0-\tau(t))\,\delta\,MR_2^{-1}M^T\big]\xi(t). \tag{27}$$
When $i=j=1$, it can be deduced from the Schur complement (Lemma 5) that (12) is equivalent to
$$\Pi + \Xi_1 + \delta^2 NR_2^{-1}N^T < 0. \tag{28}$$
Similarly, when $i=1$ and $j=2$, (12) is equivalent to
$$\Pi + \Xi_1 + \delta^2 MR_2^{-1}M^T < 0. \tag{29}$$
It can be seen from (28) and (29) that there exists a scalar $\epsilon_1 > 0$ such that
$$\Pi + \Xi_1 + \delta^2 NR_2^{-1}N^T < -\epsilon_1 I,\qquad \Pi + \Xi_1 + \delta^2 MR_2^{-1}M^T < -\epsilon_1 I. \tag{30}$$
By Lemma 3, (30) is equivalent to
$$\Pi + \Xi_1 + (\tau(t)-\tau_m)\,\delta\,NR_2^{-1}N^T + (\tau_0-\tau(t))\,\delta\,MR_2^{-1}M^T < -\epsilon_1 I\qquad (\tau_m \le \tau(t) \le \tau_0). \tag{31}$$
Combining (27) and (31), we conclude that
$$\dot V(t,u(t)) \le -\epsilon_1\|u(t)\|^2. \tag{32}$$

Case II. For $t\in\aleph_2$, i.e. $\tau(t)\in(\tau_0,\tau_M]$, it follows from Lemma 2 that
$$-\delta\int_{t-\tau_0}^{t-\tau_m}\dot u_1^T(h)R_2\dot u_1(h)\,dh \le \begin{pmatrix} u(t-\tau_m)\\ u(t-\tau_0)\end{pmatrix}^T\begin{pmatrix} -R_2 & R_2\\ * & -R_2\end{pmatrix}\begin{pmatrix} u(t-\tau_m)\\ u(t-\tau_0)\end{pmatrix}. \tag{33}$$
Combining (33) and (22), introducing two free-weighting matrices $U$ and $V$, and using the Leibniz-Newton formula as in (24), we get
$$\begin{aligned}\dot V(t,u(t)) \le\ & \xi^T(t)\Pi\xi(t) - \delta\int_{t-\tau_M}^{t-\tau_0}\dot u_1^T(h)R_3\dot u_1(h)\,dh + \begin{pmatrix} u(t-\tau_m)\\ u(t-\tau_0)\end{pmatrix}^T\begin{pmatrix} -R_2 & R_2\\ * & -R_2\end{pmatrix}\begin{pmatrix} u(t-\tau_m)\\ u(t-\tau_0)\end{pmatrix}\\
&+ 2\delta\,\xi^T(t)U\Big[u(t-\tau_0) - u(t-\tau(t)) - \int_{t-\tau(t)}^{t-\tau_0}\dot u_1(h)\,dh\Big] + 2\delta\,\xi^T(t)V\Big[u(t-\tau(t)) - u(t-\tau_M) - \int_{t-\tau_M}^{t-\tau(t)}\dot u_1(h)\,dh\Big].\end{aligned}\tag{34}$$
It is easy to verify that
$$-2\delta\,\xi^T(t)U\int_{t-\tau(t)}^{t-\tau_0}\dot u_1(h)\,dh \le (\tau(t)-\tau_0)\,\delta\,\xi^T(t)UR_3^{-1}U^T\xi(t) + \delta\int_{t-\tau(t)}^{t-\tau_0}\dot u_1^T(h)R_3\dot u_1(h)\,dh, \tag{35}$$
$$-2\delta\,\xi^T(t)V\int_{t-\tau_M}^{t-\tau(t)}\dot u_1(h)\,dh \le (\tau_M-\tau(t))\,\delta\,\xi^T(t)VR_3^{-1}V^T\xi(t) + \delta\int_{t-\tau_M}^{t-\tau(t)}\dot u_1^T(h)R_3\dot u_1(h)\,dh. \tag{36}$$
Combining (34)-(36), we get
$$\dot V(t,u(t)) \le \xi^T(t)\big[\Pi + \Xi_2 + (\tau(t)-\tau_0)\,\delta\,UR_3^{-1}U^T + (\tau_M-\tau(t))\,\delta\,VR_3^{-1}V^T\big]\xi(t). \tag{37}$$
When $i=2$ and $j=1$, by using the Schur complement, (12) can be simplified to
$$\Pi + \Xi_2 + \delta^2 UR_3^{-1}U^T < 0. \tag{38}$$
Similarly, when $i=2$ and $j=2$, (12) is equivalent to
$$\Pi + \Xi_2 + \delta^2 VR_3^{-1}V^T < 0. \tag{39}$$
Then there exists a small scalar $\epsilon_2 > 0$ such that
$$\Pi + \Xi_2 + \delta^2 UR_3^{-1}U^T < -\epsilon_2 I,\qquad \Pi + \Xi_2 + \delta^2 VR_3^{-1}V^T < -\epsilon_2 I. \tag{40}$$
Now, using Lemma 3, (40) is equivalent to
$$\Pi + \Xi_2 + (\tau(t)-\tau_0)\,\delta\,UR_3^{-1}U^T + (\tau_M-\tau(t))\,\delta\,VR_3^{-1}V^T < -\epsilon_2 I\qquad (\tau_0 < \tau(t) \le \tau_M). \tag{41}$$
Combining (37) and (41), we conclude that
$$\dot V(t,u(t)) \le -\epsilon_2\|u(t)\|^2. \tag{42}$$
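The free-weighting-matrix bounds (25), (26), (35) and (36) are all instances of the completion-of-squares estimate $-2a^TWb \le a^TWR^{-1}W^Ta + b^TRb$ for $R>0$, which a short numeric check confirms on random data:

```python
import numpy as np

# For R > 0 and any W:  -2 a^T W b <= a^T W R^{-1} W^T a + b^T R b,
# since (W^T a + R b)^T R^{-1} (W^T a + R b) >= 0.
rng = np.random.default_rng(3)
n = 5
X = rng.normal(size=(n, n))
R = X @ X.T + 0.1 * np.eye(n)                  # positive definite weight
ok = True
for _ in range(200):
    a, b = rng.normal(size=n), rng.normal(size=n)
    W = rng.normal(size=(n, n))
    lhs = -2.0 * (a @ W @ b)
    rhs = a @ W @ np.linalg.inv(R) @ W.T @ a + b @ R @ b
    ok = ok and (lhs <= rhs + 1e-8)
assert ok
```

Expanding the nonnegative quadratic form named in the comment gives exactly the cross-term bound, which is how the integral cross-terms in (25)-(26) and (35)-(36) are absorbed into $\xi^T(t)(\cdot)\xi(t)$ and the $R_2$-, $R_3$-integrals.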
From Case I and Case II it can be seen that, whenever the LMIs (12) hold for all $i=1,2$ and $j=1,2$, we have $\dot V(t,u(t)) \le -\min\{\epsilon_1,\epsilon_2\}\|u(t)\|^2$ for all $t\in\mathbb{R}^+$, $t\neq t_k$. Now we examine the change of $V$ at the impulse times. Firstly, it follows from (13) that
$$\begin{pmatrix} P & (I-E_k)^TP\\ * & P\end{pmatrix} \ge 0 \iff \begin{pmatrix} I & 0\\ 0 & P^{-1}\end{pmatrix}\begin{pmatrix} P & (I-E_k)^TP\\ * & P\end{pmatrix}\begin{pmatrix} I & 0\\ 0 & P^{-1}\end{pmatrix} = \begin{pmatrix} P & (I-E_k)^T\\ * & P^{-1}\end{pmatrix} \ge 0 \iff P - (I-E_k)^TP(I-E_k) \ge 0, \tag{43}$$
in which the last equivalence is obtained by Lemma 5. Secondly, from model (4) it can be obtained that
$$u(t_k^+) - C\int_{t_k-\sigma}^{t_k}u(h)\,dh = (I-E_k)\Big[u(t_k^-) - C\int_{t_k-\sigma}^{t_k}u(h)\,dh\Big]. \tag{44}$$
Therefore, evaluating the functional (15) at $t=t_k$ and applying (43) and (44) to the $V_1$-term,
$$V_1(t_k^+,u(t_k^+)) = \Big[u(t_k^-) - C\int_{t_k-\sigma}^{t_k}u(h)\,dh\Big]^T(I-E_k)^TP(I-E_k)\Big[u(t_k^-) - C\int_{t_k-\sigma}^{t_k}u(h)\,dh\Big] \le V_1(t_k^-,u(t_k^-)),$$
and carrying out the same estimate over the remaining terms of (15) yields
$$V(t_k^+,u(t_k^+)) \le V(t_k^-,u(t_k^-)).$$
Therefore, by the Lyapunov stability theorem, the network model (4) is globally asymptotically stable. This completes the proof of the theorem. $\square$

When there is no time delay in the leakage term in system (4), that is $\sigma = 0$, we get the following system:
$$\begin{aligned}\dot u(t) &= -Cu(t) + Ag(u(t)) + Bg(u(t-\tau(t))) + D\int_{t-\tau(t)}^{t}g(u(s))\,ds, && t\neq t_k,\\ \Delta u(t_k) &= u(t_k^+) - u(t_k^-) = J_k(u(t_k^-),u_{t_k}) = -E_k\,u(t_k^-), && t = t_k. \end{aligned}\tag{45}$$
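The impulse step (43)-(44) has a simple numeric reading: whenever $P-(I-E_k)^TP(I-E_k)\ge 0$, the jump shrinks the $V_1$-quadratic form. The sample impulse gain below is an illustrative choice, not a value taken from the paper:

```python
import numpy as np

# If P - (I-E_k)^T P (I-E_k) >= 0, then for any vector
# y = u(t_k^-) - C * int_{t_k-sigma}^{t_k} u(h) dh, the jumped vector
# (I-E_k) y satisfies ((I-E_k) y)^T P ((I-E_k) y) <= y^T P y,
# so V_1 does not increase at the impulse instant.
rng = np.random.default_rng(4)
n = 3
X = rng.normal(size=(n, n))
P = X @ X.T + np.eye(n)                  # P > 0
E = 0.5 * np.eye(n)                      # hypothetical impulse gain, I-E = 0.5 I
IE = np.eye(n) - E
M = P - IE.T @ P @ IE                    # congruence form of LMI (43)
assert np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -1e-10)   # (43) holds
for _ in range(100):
    y = rng.normal(size=n)
    assert (IE @ y) @ P @ (IE @ y) <= y @ P @ y + 1e-9
```

With this choice $M = 0.75\,P > 0$, so the check passes for every $y$; an $E_k$ violating (43) would allow $V_1$ to grow across an impulse.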
Then the following corollary is derived by replacing $V_1(t,u(t))$ with $u^T(t)Pu(t)$ and setting $S=0$ in the proof of Theorem 1.

Corollary 2  Suppose that Assumptions 1 and 2 hold. For given scalars $0\le\mu<1$, $\tau_m$ and $\tau_M$ with $0\le\tau_m\le\tau_M$, the system (45) is globally asymptotically stable if there exist positive symmetric matrices $P$, $Q_i$ $(i=1,2,3)$, $R_i$ $(i=1,2,3)$, positive diagonal matrices $K$, $\Gamma_i$ $(i=1,2)$ and matrices $N_l, M_l, U_l, V_l$ $(l=1,2,\dots,9)$ of appropriate dimensions such that, for all $i=1,2$ and $j=1,2$, the following LMIs hold:
$$\begin{pmatrix} \hat\Pi + \Xi_i & H_{ij}\\ * & -R_{i+1}\end{pmatrix} < 0\quad (i,j=1,2),\qquad \begin{pmatrix} P & (I-E_k)^TP\\ * & P\end{pmatrix} \ge 0,\qquad \begin{pmatrix} X_1 & X_2\\ * & X_3\end{pmatrix} \ge 0,\quad k\in\mathbb{Z}^+, \tag{46}$$
where $\hat\Pi$ coincides with $\Pi$ except that
$$\hat\Pi_{11} = -C^TP - PC + Q_1 + R_1 + \delta^2 C^T(R_2+R_3)C + \tau_m^2 C^TQ_3C - 2\Gamma_1\Sigma_1 - R_2,\qquad \hat\Pi_{16} = \hat\Pi_{66} = \hat\Pi_{67} = \hat\Pi_{68} = \hat\Pi_{69} = 0,$$
and the blocks $\Pi_{17}, \Pi_{18}, \Pi_{19}, \Pi_{22}, \Pi_{44}, \Pi_{55}, \Pi_{58}, \Pi_{77}, \Pi_{78}, \Pi_{79}, \Pi_{88}, \Pi_{89}, \Pi_{99}$, $\Xi_1$, $\Xi_2$, $H_{11}$, $H_{12}$, $H_{21}$ and $H_{22}$ are defined as in Theorem 1.

Proof  The proof of this corollary is similar to that of Theorem 1 and is therefore omitted.

Further, when there are no impulsive perturbations in system (4), it can be rewritten in the following form:
$$\dot u(t) = -Cu(t-\sigma) + Ag(u(t)) + Bg(u(t-\tau(t))) + D\int_{t-\tau(t)}^{t}g(u(s))\,ds. \tag{47}$$

Corollary 3  Suppose that Assumptions 1 and 2 hold. For given scalars $0\le\mu<1$, $\tau_m$ and $\tau_M$ with $0\le\tau_m\le\tau_M$, the system (47) is globally asymptotically stable if there exist positive symmetric matrices $P$, $Q_i$ $(i=1,2,3)$, $R_i$ $(i=1,2,3)$, $S$, positive diagonal matrices $K$, $\Gamma_i$ $(i=1,2)$ and matrices $N_l, M_l, U_l, V_l$ $(l=1,2,\dots,9)$ of appropriate dimensions such that the LMIs (12) in Theorem 1 hold for all $i=1,2$ and $j=1,2$.

Proof  The proof of this corollary is similar to that of Theorem 1 and is therefore omitted.

Remark 3  He et al. (2007) studied the stability problem for neural networks with interval time-varying delay. Qiu et al. (2009) investigated a new robust stability criterion for uncertain neural networks with interval time-varying delays. Kwon et al. (2008) established robust stability conditions for uncertain neural networks with interval time-varying delays. Recently, Zhang et al. (2009) proposed a new delay-dependent stability criterion for neural networks with interval time-varying delay by using a piecewise delay method. In this paper, by contrast, we provide a new set of delay-dependent stability conditions ensuring the global asymptotic stability of the considered neural network (4) with time delay in the leakage term, interval time-varying delays and impulsive perturbations. The stability criterion is derived by using an appropriate model transformation that shifts the equilibrium point to the origin, a suitable Lyapunov–Krasovskii functional, and some inequality techniques. In contrast to the literature mentioned above, the derived stability criteria depend on both the upper bound of the leakage delay and the interval time-varying delays.

Remark 4  Our main results deal with the asymptotic stability problem for a class of NNs with interval time-varying delay. To obtain the stability criteria, we first construct a Lyapunov functional $V(t,u(t))$ as shown in (15). Then, by checking the variation of the derivative of $V(t,u(t))$ in the two cases $\tau(t)\in[\tau_m,\tau_0]$ and $\tau(t)\in(\tau_0,\tau_M]$, respectively, a new set of delay-dependent stability criteria is derived which guarantees $\dot V(t,u(t)) < 0$. The obtained stability criterion can be readily checked by passing the set of LMIs to the MATLAB LMI Control toolbox.
Numerical examples

In this section, we give a numerical example and its simulations to demonstrate the effectiveness and applicability of the developed method.

Example 1  Consider a second-order delayed neural network (4) with the following parameters:
$$C = \begin{pmatrix} 7 & 0\\ 0 & 6\end{pmatrix},\quad A = \begin{pmatrix} 0.5 & 0\\ 0 & 0.5\end{pmatrix},\quad B = \begin{pmatrix} 0.6 & 0.1\\ 1.2 & 0.8\end{pmatrix},\quad D = \begin{pmatrix} 0.4 & 0.3\\ 0.8 & 0.2\end{pmatrix}.$$
Here the time-varying delay and the activation functions are taken to be $\tau(t) = 0.5$,
$$g_1(u) = \tanh(0.7u) - 0.1\sin u,\qquad g_2(u) = \tanh(0.4u) + 0.2\cos u.$$
These satisfy Assumption 1 with $\sigma_1^- = -0.1$, $\sigma_1^+ = 0.8$, $\sigma_2^- = -0.2$, $\sigma_2^+ = 0.6$, and hence
$$\Sigma_1 = \begin{pmatrix} -0.08 & 0\\ 0 & -0.12\end{pmatrix},\qquad \Sigma_2 = \begin{pmatrix} 0.35 & 0\\ 0 & 0.2\end{pmatrix}.$$
By solving the LMIs in Theorem 1 via the MATLAB LMI Control toolbox, we obtain a set of feasible solutions; owing to the limited length of this paper, these solutions are not reported here. This shows that all the conditions stated in Theorem 1 are satisfied, and hence system (4) with the above parameters is globally asymptotically stable. In addition, we have calculated the upper bounds of the interval time-varying delays as shown in Table 1, which lists the allowable upper bounds of $\tau_M$ for different values of $\sigma$, $\mu$ and $\tau_m$. From this table it is evident that the delay-dependent stability criterion obtained in our paper is less conservative than some existing results in the sense of the admissible upper bound. Furthermore, from He et al. (2007) and Zhang et al. (2009) it can be seen that the system is stable if the difference between $\tau_M$ and $\tau_m$ is less than 1.3606 and 1.7532, respectively ($\tau_M - \tau_m \le 1.3606$, $\tau_M - \tau_m \le 1.7532$), whereas by Theorem 1 of this paper the allowable value of $\tau_M - \tau_m$ is improved to 3.0035. For $\sigma = 0.2$, $\tau_m = 1$ and $\mu = 0.95$, the upper bound of the time delay in He et al. (2007) which ensures that the system is globally asymptotically stable is 6.5227; in Zhang et al. (2009) this bound is improved to 8.4119.

Table 2  Upper bounds of $\tau_M - \tau_m$ for various $\mu$

| Method              | $\mu=0.8$ | $\mu=0.9$ | Unknown $\mu$ |
|---------------------|-----------|-----------|---------------|
| He et al. (2007)    | 2.2552    | 1.4769    | 1.3606        |
| Zhang et al. (2009) | 2.8335    | 1.9234    | 1.7532        |
| This paper          | 4.8735    | 3.1845    | 3.0035        |
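The matrices $\Sigma_1$ and $\Sigma_2$ of Example 1 follow directly from the sector bounds via the definitions in Theorem 1; a two-line arithmetic check (the negative signs of $\sigma_i^-$ are reconstructed from the activation derivatives):

```python
import numpy as np

# Example 1 sector bounds (sigma_i^-, sigma_i^+) and the induced matrices
# Sigma_1 = diag(sigma_i^- sigma_i^+), Sigma_2 = diag((sigma_i^- + sigma_i^+)/2).
sig_lo = np.array([-0.1, -0.2])
sig_hi = np.array([0.8, 0.6])
Sigma1 = np.diag(sig_lo * sig_hi)
Sigma2 = np.diag((sig_lo + sig_hi) / 2.0)
assert np.allclose(np.diag(Sigma1), [-0.08, -0.12])
assert np.allclose(np.diag(Sigma2), [0.35, 0.2])
```

The diagonal values reproduce the $\Sigma_1$, $\Sigma_2$ matrices stated in the example.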
Table 1  Maximum upper bounds of $\tau_M$ for various $\tau_m$ and $\mu$

| $\tau_m$       | Method                                                                   | $\mu=0.8$ | $\mu=0.9$ | Unknown $\mu$ |
|----------------|--------------------------------------------------------------------------|-----------|-----------|---------------|
| $\tau_m=0$     | Liu and Chen (2007), Hua et al. (2006), He et al. (2005), He et al. (2006) | 1.2281    | 0.8636    | 0.8298        |
|                | Cho and Park (2007)                                                      | 1.2459    | 0.8827    | 0.8259        |
|                | Kwon et al. (2008)                                                       | 1.6831    | 1.1493    | 1.0880        |
|                | He et al. (2007)                                                         | 2.3534    | 1.6050    | 1.5103        |
|                | Zhang et al. (2009)                                                      | 2.8654    | 1.9508    | 1.7809        |
|                | This paper                                                               | 6.0124    | 5.2873    | 5.0010        |
| $\tau_m=1$     | Kwon et al. (2008)                                                       | 2.5967    | 2.0443    | 1.9621        |
|                | He et al. (2007)                                                         | 3.2575    | 2.4769    | 2.3606        |
|                | Zhang et al. (2009)                                                      | 3.8359    | 2.9234    | 2.7532        |
|                | This paper                                                               | 7.3243    | 6.2903    | 6.0025        |
| $\tau_m=100$   | Kwon et al. (2008)                                                       | 101.5946  | 101.0443  | 100.9621      |
|                | He et al. (2007)                                                         | 102.2552  | 101.4769  | 101.3606      |
|                | Zhang et al. (2009)                                                      | 102.8335  | 101.9234  | 101.7532      |
|                | This paper                                                               | 106.0002  | 105.7932  | 105.2252      |
Table 3 Maximum upper bounds of τM for various τm and μ

τm      Methods               μ = 0.95   μ = 0.99   Unknown μ
τm = 1  Qiu et al. (2009)     –          –          3.0465
        He et al. (2007)      6.5227     4.3522     3.9112
        Zhang et al. (2009)   8.4119     5.4834     4.9471
        This paper            12.0025    9.6557     9.0757
τm = 2  Qiu et al. (2009)     –          –          4.0324
        He et al. (2007)      7.5227     5.3135     4.8847
        Zhang et al. (2009)   9.4119     6.4377     5.9198
        This paper            13.6442    10.3366    7.7683
Fig. 1 State variable x1(t) of the network (4) with non-impulsive and impulsive effects

Fig. 2 State variable x2(t) of the network (4) with non-impulsive and impulsive effects
By using Theorem 3.1 of this paper, we obtain a maximum allowable upper bound of 12.0025. Furthermore, the comparisons of the upper bounds between the criterion in this paper and those in He et al. (2007), Qiu et al. (2009) and Zhang et al. (2009) are listed in Table 3. From Tables 1, 2 and 3, it is clear that the stability criteria proposed in this paper are less conservative than the existing ones in the literature. Taking the initial condition [φ1(s), φ2(s)] = [3, 4], s ∈ [−0.2, 0], Figs. 1, 2 and 3 show that the considered network (4), both with and without impulsive effects, settles to a stable position. However, if we take the leakage delay σ > 0.2 for the
Fig. 3 State variables x1(t), x2(t) of the network (4) with non-impulsive and impulsive effects
network (4), one may find that the conditions (LMIs) in Theorem 3.1 are no longer satisfied and admit no feasible solution. Figures 4 and 5 show the unstable behavior, and Fig. 6 the chaotic behavior, of the neural network (4). Therefore, our proposed method cannot guarantee the stability of network (4) when σ > 0.2.
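The qualitative behavior reported in Figs. 1, 2 and 3 can be reproduced with a simple fixed-step Euler integration of a leakage-delayed network. The sketch below is illustrative only: the matrices A and W, the activation f and the delay profile τ(t) are placeholder assumptions, not the actual parameters of network (4) given earlier in the paper.

```python
import numpy as np

# Illustrative placeholders only -- NOT the paper's matrices for network (4);
# they merely satisfy the same structural assumptions (leakage delay sigma,
# bounded time-varying delay tau(t), Lipschitz activation).
A = np.diag([2.0, 2.0])                    # leakage (self-feedback) rates
W = np.array([[0.5, -0.3],
              [0.2,  0.4]])                # delayed connection weights
sigma = 0.2                                # leakage delay
f = np.tanh                                # activation function

dt, T = 0.001, 50.0
steps = int(round(T / dt))
tau = lambda t: 1.0 + 0.5 * np.sin(t)      # time-varying delay in [0.5, 1.5]

hist = int(round(1.5 / dt))                # history buffer covers the max delay
x = np.zeros((steps + hist, 2))
x[:hist + 1] = [3.0, 4.0]                  # constant initial history, as in the example

# Forward Euler for x'(t) = -A x(t - sigma) + W f(x(t - tau(t)))
for i in range(hist, steps + hist - 1):
    t = (i - hist) * dt
    x_leak = x[i - int(round(sigma / dt))]   # x(t - sigma)
    x_dly = x[i - int(round(tau(t) / dt))]   # x(t - tau(t))
    x[i + 1] = x[i] + dt * (-A @ x_leak + W @ f(x_dly))

print(np.linalg.norm(x[-1]))               # small => trajectory has settled
```

With a dominant leakage term (here A = 2I against row sums of |W| below 1) the trajectory started from the constant history [3, 4] decays toward the origin, mirroring the stable case; enlarging the leakage delay in such simulations eventually destroys this convergence, as in Figs. 4, 5 and 6.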
Fig. 4 State variable x1(t) of the network (4) with σ = 0.3, non-impulsive and impulsive effects
Conclusions

In this paper, we have dealt with stability criteria for a class of neural networks with interval time-varying delays, a time delay in the leakage term and impulses. By using model
Fig. 5 State variable x2(t) of the network (4) with σ = 0.3, non-impulsive and impulsive effects
transformation, constructing an appropriate Lyapunov–Krasovskii functional, employing the piecewise delay method and some known inequality techniques, several improved delay-dependent stability criteria for the considered neural networks have been derived. The derived criteria have been obtained in LMI form and can be solved with the MATLAB LMI Control toolbox. Finally, a numerical example has been provided to show the effectiveness and superiority of the proposed stability results. Further, we would like to point out that the considered model can be generalized to discrete-time neural networks or to more complex neural networks, such as Cohen–Grossberg NNs, BAM NNs, and NNs with stochastic perturbations and Markovian jumping parameters. The corresponding results will appear in the near future.
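For readers unfamiliar with LMI-based verification, the sketch below illustrates, in Python rather than MATLAB, the kind of matrix-definiteness test such a toolbox automates. For simplicity it solves a classical Lyapunov equation for a delay-free system with a hypothetical matrix A; it is not the actual LMI of Theorem 3.1.

```python
import numpy as np

# Hypothetical example matrix: x'(t) = -A x(t) is stable iff there exists
# P > 0 with A^T P + P A = Q for some Q > 0. An LMI solver searches for
# such P directly; here the equation is small enough to solve exactly.
A = np.array([[2.0, 0.1],
              [-0.2, 2.0]])
Q = np.eye(2)

# Vectorize the Lyapunov equation: vec(A^T P + P A) = M vec(P)
n = A.shape[0]
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, Q.flatten()).reshape(n, n)
P = 0.5 * (P + P.T)                       # symmetrize against round-off

# Feasibility check: positive definiteness of P certifies stability
feasible = np.linalg.eigvalsh(P).min() > 0
print(feasible)
```

An LMI toolbox performs the same kind of search over structured matrix variables subject to many coupled inequality constraints, which is why the criteria of Theorem 3.1 are straightforward to check numerically once stated in LMI form.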
Fig. 6 State variables x1(t), x2(t) of the network (4) with σ = 0.3, non-impulsive and impulsive effects

Funding Quanxin Zhu's work was jointly supported by the National Natural Science Foundation of China (61374080) and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.
References

Akca H, Alassar R, Covachev V, Covacheva Z, Al-Zahrani E (2004) Continuous-time additive Hopfield-type neural networks with impulses. J Math Anal Appl 290:436–451
Arik S (2004) An analysis of exponential stability of delayed neural networks with time-varying delays. Neural Netw 17:1027–1031
Boyd S, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. SIAM, Philadelphia
Cao J, Wang J (2003) Global asymptotic stability of a general class of recurrent neural networks with time-varying delays. IEEE Trans Circuits Syst I: Fundam Theory Appl 50:34–44
Cao J, Li X (2005) Stability in delayed Cohen–Grossberg neural networks: LMI optimization approach. Phys D 212:54–65
Cho HJ, Park JH (2007) Novel delay-dependent robust stability criterion of delayed cellular neural networks. Chaos Solitons Fractals 32(3):1194–1200
Fu X, Yan B, Liu Y (2005) Introduction of impulsive differential systems. Science Press, Beijing
Gopalsamy K (1992) Stability and oscillations in delay differential equations of population dynamics. Kluwer Academic Publishers, Dordrecht
Gopalsamy K (2007) Leakage delays in BAM. J Math Anal Appl 325:1117–1132
Gu K, Kharitonov V, Chen J (2003) Stability of time-delay systems. Birkhäuser, Boston
Han QL, Yue D (2007) Absolute stability of Lur'e systems with time-varying delay. IET Control Theory Appl 1(3):854–859
Haykin S (1999) Neural networks. Prentice Hall, New Jersey
He Y, Liu G, Rees D (2007) New delay-dependent stability criteria for neural networks with time-varying delay. IEEE Trans Neural Netw 18(1):310–314
He Y, Liu GP, Rees D, Wu M (2007) Stability analysis for neural networks with time-varying interval delay. IEEE Trans Neural Netw 18(6):1850–1854
He Y, Wang Q, Wu M (2005) LMI-based stability criteria for neural networks with multiple time-varying delays. Phys D: Nonlinear Phenom 212(1–2):126–136
He Y, Wu M, She J (2006) Delay-dependent exponential stability of delayed neural networks with time-varying delay. IEEE Trans Circuits Syst II: Express Briefs 53(7):553–557
Hua C, Long C, Guan X (2006) New results on stability analysis of neural networks with time-varying delays. Phys Lett A 352(4–5):335–340
Ignatyev A (2008) On the stability of invariant sets of systems with impulse effect. Nonlinear Anal 69:53–72
Kosko B (1992) Neural networks and fuzzy systems. Prentice Hall, New Delhi
Kwon O, Park J, Lee S (2008) On robust stability for uncertain neural networks with interval time-varying delays. IET Control Theory Appl 2(7):625–634
Lakshmikantham V, Bainov D, Simenov P (1989) Theory of impulsive differential equations. World Scientific, Singapore
Li C, Huang T (2009) On the stability of nonlinear systems with leakage delay. J Franklin Inst 346:366–377
Li X, Cao J (2010) Delay-dependent stability of neural networks of neutral-type with time delay in the leakage term. Nonlinearity 23:1709–1726
Li X, Fu X, Balasubramanium P, Rakkiyappan R (2010) Existence, uniqueness and stability analysis of recurrent neural networks with time delay in the leakage term under impulsive perturbations. Nonlinear Anal Real World Appl 11:4092–4108
Liu DR (1997) Cloning template design of cellular neural networks for associative memories. IEEE Trans Circuits Syst I 44:645–650
Liu H, Chen G (2007) Delay-dependent stability for neural networks with time-varying delay. Chaos Solitons Fractals 33(1):171–177
Liu X, Teo K, Xu B (2005) Exponential stability of impulsive high-order Hopfield-type neural networks with time-varying delays. IEEE Trans Neural Netw 16:1329–1339
Liu Y, Wang Z, Liu X (2006) Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Netw 19:667–675
Meng J, Wang XY (2007) Robust anti-synchronization of a class of delayed chaotic neural networks. Chaos 17(2):023113
Mou SS, Gao HJ, Lam J, Qiang WY (2008) A new criterion of delay-dependent asymptotic stability for Hopfield neural networks with time delay. IEEE Trans Circuits Syst 19(3):532–535
Peng S (2010) Global attractive periodic solutions of BAM neural networks with continuously distributed delays in the leakage terms. Nonlinear Anal Real World Appl 11:2141–2151
Qiu J, Yang H, Zhang J, Gao Z (2009) New robust stability criteria for uncertain neural networks with interval time-varying delays. Chaos Solitons Fractals 39(2):579–585
Sanchez EN, Perez JP (1999) Input-to-state stability (ISS) analysis for dynamic neural networks. IEEE Trans Circuits Syst I: Fundam Theory Appl 46(11):1395–1398
Senan S, Arik S (2007) Global robust stability of bidirectional associative memory neural networks with multiple time delays. IEEE Trans Syst Man Cybern Part B 37:1375–1381
Song Q, Cao J (2012) Passivity of uncertain neural networks with both leakage delay and time-varying delay. Nonlinear Dyn 67:1695–1707
Tan J, Li C, Huang T (2015) The stability of impulsive stochastic Cohen–Grossberg neural networks with mixed delays and reaction-diffusion terms. Cogn Neurodyn 9:213–220
Wang XY, Zhao Q, Wang MJ, Yang YH (2008) Generalized synchronization of communication. Mod Phys Lett B 22(2):2077–2084
Wang Z, Shu H, Liu Y, Ho DWC, Liu X (2006) Robust stability analysis of generalized neural networks with discrete and distributed time delays. Chaos Solitons Fractals 30:886–896
Xu D, Yang Z (2005) Impulsive delay differential inequality and stability of neural networks. J Math Anal Appl 305:107–120
Yang Z, Xu D (2005) Stability analysis of delay neural networks with impulsive effects. IEEE Trans Circuits Syst II 52:517–521
Yang Z, Zhou W, Huang T (2014) Exponential input-to-state stability of recurrent neural networks with multiple time-varying delays. Cogn Neurodyn 8:47–54
Yue D (2004) Robust stabilization of uncertain systems with unknown input delay. Automatica 41:331–336
Zhang Y, Yue D, Tian E (2009) New stability criteria of neural networks with interval time-varying delay: a piecewise delay method. Appl Math Comput 208:249–259