IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 23, NO. 3, MARCH 2012


Stability Analysis of Markovian Jump Stochastic BAM Neural Networks With Impulse Control and Mixed Time Delays

Quanxin Zhu and Jinde Cao, Senior Member, IEEE

Abstract— This paper discusses the issue of stability analysis for a class of impulsive stochastic bidirectional associative memory (BAM) neural networks with both Markovian jump parameters and mixed time delays. The jumping parameters are modeled as a continuous-time, discrete-state Markov chain. Based on a novel Lyapunov–Krasovskii functional, the generalized Itô formula, mathematical induction, and stochastic analysis theory, a linear matrix inequality (LMI) approach is developed to derive some novel sufficient conditions that guarantee the exponential stability in the mean square of the equilibrium point. At the same time, we also investigate the robust exponential stability in the mean square of the corresponding system with unknown parameters. It should be mentioned that our stability results are delay-dependent: they depend not only on the upper bounds of the time delays but also on their lower bounds. Moreover, the derivatives of the time delays are not required to be zero or smaller than one, since several free matrices are introduced in our results. Consequently, the results obtained in this paper are not only less conservative but also generalize and improve many earlier results. Finally, two numerical examples and their simulations are provided to show the effectiveness of the theoretical results.

Index Terms— Exponential stability, impulsive control, linear matrix inequality, Lyapunov functional, Markovian jump parameter, mixed time delay, stochastic bidirectional associative memory neural network.

I. INTRODUCTION

AS IS well known, bidirectional associative memory (BAM) neural networks were originally introduced and investigated by Kosko [1]–[3]. Since then, there has been increasing research interest in this class of BAM neural networks because of its many important applications, such as pattern recognition, associative memory, and combinatorial optimization. Such applications heavily depend on dynamical behaviors such as stability properties, periodic oscillatory behavior, almost periodic oscillatory properties,

Manuscript received May 9, 2011; revised December 30, 2011; accepted December 30, 2011. Date of publication January 16, 2012; date of current version February 29, 2012. This work was supported in part by the National Natural Science Foundation of China under Grant 10801056, Grant 60874088, and Grant 11072059, the Natural Science Foundation of Ningbo under Grant 2010A610094, the K. C. Wong Magna Fund in Ningbo University, and the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20110092110017. Q. Zhu is with the Department of Mathematics, Ningbo University, Ningbo 315211, China (e-mail: [email protected]). J. Cao is with the Department of Mathematics, Southeast University, Nanjing 210096, China (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2011.2182659

chaos, bifurcation, and so on. In particular, the stability of BAM neural networks has been one of the most active research areas, since stability is the most fundamental concept in modern control theory. Therefore, it is important both in theory and in applications to study the issue of stability analysis for this class of BAM neural networks.

However, most BAM neural network models discussed in the literature focus on the deterministic case. In fact, a real system is usually affected by external perturbations, which in many cases are highly uncertain and hence may be treated as random. As pointed out by Haykin [4], the synaptic transmission in real neural networks can be viewed as a noisy process introduced by random fluctuations from the release of neurotransmitters and other probabilistic causes. Consequently, a BAM neural network can be stabilized or destabilized by certain stochastic inputs. Hence, it is necessary to consider the effect of noise perturbations when studying the stability of a BAM neural network. In particular, a class of stochastic BAM neural networks with Markovian jump parameters has recently received much attention because it can model the phenomenon of information latching and other abrupt phenomena such as random failures or repairs of components, sudden environmental changes, and changing subsystem interconnections. Moreover, this class is quite general and includes the common class of stochastic BAM neural networks without Markovian jump parameters as a special case. Owing to the complexity of this system, only a few reports on its stability analysis have appeared [5]–[7]. Thus, there is considerable room for developing improved stability conditions.

On the other hand, time delays often occur in the process of information storage and transmission in real neural networks because of the finite switching speed of amplifiers. Unfortunately, the existence of time delays can cause instability, oscillation, or poor system performance, which is very harmful to the applications of neural networks. Moreover, neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, and hence there is a distribution of propagation delays over a period of time. So it is reasonable that the neural network model possesses mixed time delays, which comprise both discrete and distributed delays. In recent years, a large number of works on mixed time delays have been reported in the literature; among them, we mention [5]–[27] and the references therein.




Besides noise perturbations and delay effects, impulsive effects are also likely to exist in neural network systems. Generally speaking, the states of real neural networks are often subject to instantaneous perturbations and experience abrupt changes at certain moments of time, which can be caused by switching phenomena, frequency changes, or other sudden noise, i.e., impulsive effects. Such impulsive perturbations in turn affect the dynamical behaviors of the system. Therefore, impulsive perturbations should be taken into account when studying the stability of neural networks. It is inspiring that in recent years considerable attention has been paid to the stability analysis of impulsive neural networks. The reason is twofold: one is that impulsive perturbations occur in many important fields such as medicine, biology, economics, mechanics, electronics, and telecommunications; the other is that an impulsive neural network model belongs to a new category of dynamical systems, which are neither purely continuous-time nor purely discrete-time ones. Since this model displays a combination of the characteristics of both continuous-time and discrete-time systems, it is difficult and challenging to discuss the stability analysis of an impulsive neural network.

Inspired by the above discussion, this paper investigates the issue of stability analysis for a class of impulsive stochastic BAM neural networks with both Markovian jump parameters and mixed time delays. It is worth pointing out that the suggested BAM neural network model is very comprehensive because it simultaneously considers noise perturbations, Markovian jumping parameters, impulsive control, and mixed time delays. To the best of our knowledge, the stability analysis issue for this class of generalized neural networks has not yet been solved, and the main goal of this paper is to fill this gap. By employing a new Lyapunov–Krasovskii functional, the generalized Itô formula, mathematical induction, and stochastic analysis theory, as well as a linear matrix inequality technique, we establish some novel sufficient conditions to ensure the exponential stability in the mean square of the equilibrium point. Furthermore, we also investigate the robust exponential stability in the mean square of the corresponding system with unknown parameters, since the parameters of a neural network cannot be exactly known a priori in many applications. Different from previous works in the literature, our stability results depend not only on the upper bounds of the time delays but also on their lower bounds. Moreover, the derivatives of the time delays are not required to be zero or smaller than one because we introduce several free matrices in our results. Therefore, the results obtained in this paper are not only less conservative but also generalize and improve many earlier results (for instance, see [5]–[7], [24]–[31] and the references therein). Finally, two numerical examples and their simulations are given to illustrate the effectiveness of the theoretical results.

The remainder of this paper is organized as follows. In Section II, we introduce a new class of impulsive stochastic BAM neural networks with both Markovian jump parameters and mixed time delays, and provide some necessary assumptions and preliminary lemmas. After our main results and proofs

are presented in Section III, we use two numerical examples to demonstrate the effectiveness of the obtained results in Section IV. Finally, in Section V, we conclude this paper with some general remarks.

II. MODEL, ASSUMPTIONS, AND PRELIMINARIES

The notations used in this paper are quite standard, and the reader may refer to [5], [8]–[10]. Consider the following class of Markovian jump stochastic BAM neural networks with both impulsive perturbations and mixed time delays:

\begin{cases}
dx(t) = \big[-C(r(t))x(t) + A(r(t))f_1(y(t-\tau_1(t))) + B(r(t))\int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds\big]dt \\
\qquad\qquad +\, \sigma_1(x(t),\, y(t-\tau_1(t)),\, y(t-\tau_2(t)),\, t,\, r(t))\,dw_1(t), & t \neq t_k \\
x(t_k) = D_k(r(t))\,x(t_k^-), & t = t_k \\
dy(t) = \big[-E(r(t))y(t) + F(r(t))g_1(x(t-\delta_1(t))) + G(r(t))\int_{t-\delta_2(t)}^{t} g_2(x(s))\,ds\big]dt \\
\qquad\qquad +\, \sigma_2(y(t),\, x(t-\delta_1(t)),\, x(t-\delta_2(t)),\, t,\, r(t))\,dw_2(t), & t \neq t_k \\
y(t_k) = H_k(r(t))\,y(t_k^-), & t = t_k
\end{cases} \tag{1}

for t > 0 and k = 1, 2, \ldots, where x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T and y(t) = [y_1(t), y_2(t), \ldots, y_n(t)]^T are the state vectors associated with the n neurons.

The first part is the continuous part of (1), which describes the continuous evolution of the neural network. The diagonal matrices C(r(t)) = \mathrm{diag}(c_1(r(t)), c_2(r(t)), \ldots, c_n(r(t))) and E(r(t)) = \mathrm{diag}(e_1(r(t)), e_2(r(t)), \ldots, e_n(r(t))) have positive entries c_i(r(t)) > 0 and e_i(r(t)) > 0 (i = 1, 2, \ldots, n). The matrices A(r(t)) = (a_{ij}(r(t)))_{n\times n} and F(r(t)) = (f_{ij}(r(t)))_{n\times n} are the time-varying delay connection weight matrices, while B(r(t)) = (b_{ij}(r(t)))_{n\times n} and G(r(t)) = (g_{ij}(r(t)))_{n\times n} are the distributed delay connection weight matrices. f_i(x(t)) = [f_{i1}(x_1(t)), f_{i2}(x_2(t)), \ldots, f_{in}(x_n(t))]^T and g_i(x(t)) = [g_{i1}(x_1(t)), g_{i2}(x_2(t)), \ldots, g_{in}(x_n(t))]^T (i = 1, 2) are the neuron activation functions. The noise perturbations \sigma_i : R^n \times R^n \times R^n \times R_+ \times S \to R^{n\times m} (i = 1, 2) are Borel measurable functions, and \tau_1(t), \tau_2(t), \delta_1(t), and \delta_2(t) are time-varying delays. \{r(t), t \ge 0\} is a right-continuous Markov chain on a complete probability space (\Omega, \mathcal{F}, P) taking values in a finite state space S = \{1, 2, \ldots, N\} with generator Q = (q_{ij})_{N\times N} given by

P\{r(t+\Delta t) = j \mid r(t) = i\} =
\begin{cases}
q_{ij}\Delta t + o(\Delta t), & \text{if } i \neq j \\
1 + q_{ii}\Delta t + o(\Delta t), & \text{if } i = j
\end{cases}

where \Delta t > 0 and \lim_{\Delta t \to 0} o(\Delta t)/\Delta t = 0. Here q_{ij} \ge 0 is the transition rate from i to j if i \neq j, while q_{ii} = -\sum_{j \neq i} q_{ij}. w_i(t) = (w_{i1}(t), \ldots, w_{im}(t))^T (i = 1, 2) is an m-dimensional Brownian motion, which is assumed to be independent of r(t).

The second part is the discrete part of (1), which describes abrupt changes of the state at the instants t_k: x(t_k) = D_k(r(t))x(t_k^-) and y(t_k) = H_k(r(t))y(t_k^-) are the impulses at the instant t_k, and D_k(r(t)) and H_k(r(t)) are the impulse gain matrices at t_k. The sequence \{t_k\} satisfies 0 = t_0 < t_1 < \cdots < t_k < \cdots and \lim_{k\to\infty} t_k = \infty. Here x(t_k^-) and y(t_k^-) denote the left-hand limits at t_k, and x(t_k^+) and y(t_k^+) denote the right-hand limits at t_k. In this paper, we assume that the processes x(t) and y(t) are right-continuous, i.e., x(t_k^+) = x(t_k) and y(t_k^+) = y(t_k).
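Before the assumptions are stated, it may help to see how a trajectory of (1) can be generated numerically. The sketch below (not part of the original paper) simulates a two-neuron instance of (1) with the Euler–Maruyama scheme: the Markov chain r(t) is sampled from an assumed generator Q, the distributed-delay integrals are approximated by Riemann sums over the stored history, and the impulses x(t_k) = D_k x(t_k^-), y(t_k) = H_k y(t_k^-) are applied at assumed instants. All matrices, delays, noise intensities, and impulse times are illustrative placeholders, and tanh stands in for every activation function.

```python
# Hedged sketch: Euler-Maruyama simulation of a two-neuron instance of system (1).
# All numerical values below are illustrative assumptions, not the paper's examples.
import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 2, 1e-3, 5.0
steps = int(T / dt)

Q = np.array([[-3.0, 3.0], [4.0, -4.0]])            # generator of r(t) on S = {0, 1}
C = [np.diag([3.0, 3.2]), np.diag([2.8, 3.0])]
E = [np.diag([3.1, 2.9]), np.diag([3.0, 3.3])]
A = [0.2 * np.array([[1.0, -0.5], [0.3, 0.8]]), 0.2 * np.array([[0.6, 0.4], [-0.2, 0.9]])]
F = [0.2 * np.array([[0.7, 0.1], [-0.4, 0.5]]), 0.2 * np.array([[0.9, -0.3], [0.2, 0.6]])]
B = [0.1 * np.eye(n), 0.15 * np.eye(n)]
G = [0.1 * np.eye(n), 0.12 * np.eye(n)]
Dk = [0.5 * np.eye(n), 0.6 * np.eye(n)]             # impulse gains for x, per mode
Hk = [0.5 * np.eye(n), 0.4 * np.eye(n)]             # impulse gains for y, per mode
tau1, tau2, delta1, delta2 = 0.3, 0.2, 0.25, 0.2    # constant delays for the sketch
impulse_times = {1.0, 2.0, 3.0, 4.0}
act = np.tanh                                       # stands in for f1, f2, g1, g2

def sigma1(x, y1, y2, r): return 0.1 * np.diag(x)   # assumed noise intensities
def sigma2(y, x1, x2, r): return 0.1 * np.diag(y)

d1, d2, e1, e2 = (int(v / dt) for v in (tau1, tau2, delta1, delta2))
x = np.zeros((steps + 1, n)); y = np.zeros((steps + 1, n))
x[0], y[0] = [0.8, -0.6], [0.5, 0.7]                # constant initial functions
r = 0
for k in range(steps):
    if rng.random() < -Q[r, r] * dt:                # mode switch with rate -q_rr
        r = 1 - r
    y_t1, y_t2 = y[max(k - d1, 0)], y[max(k - d2, 0)]
    x_d1, x_d2 = x[max(k - e1, 0)], x[max(k - e2, 0)]
    int_f2 = dt * act(y[max(k - d2, 0):k + 1]).sum(axis=0)   # ~ int_{t-tau2}^t f2(y(s)) ds
    int_g2 = dt * act(x[max(k - e2, 0):k + 1]).sum(axis=0)   # ~ int_{t-delta2}^t g2(x(s)) ds
    dw1, dw2 = rng.normal(0, np.sqrt(dt), n), rng.normal(0, np.sqrt(dt), n)
    xn = x[k] + (-C[r] @ x[k] + A[r] @ act(y_t1) + B[r] @ int_f2) * dt + sigma1(x[k], y_t1, y_t2, r) @ dw1
    yn = y[k] + (-E[r] @ y[k] + F[r] @ act(x_d1) + G[r] @ int_g2) * dt + sigma2(y[k], x_d1, x_d2, r) @ dw2
    if round((k + 1) * dt, 6) in impulse_times:     # x(t_k) = D_k x(t_k^-), y(t_k) = H_k y(t_k^-)
        xn, yn = Dk[r] @ xn, Hk[r] @ yn
    x[k + 1], y[k + 1] = xn, yn

print("final norms:", np.linalg.norm(x[-1]), np.linalg.norm(y[-1]))
```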


Moreover, we assume that (1) has a unique equilibrium point. In addition, to study the stability, we also need the following assumptions.

Assumption 1: There exist diagonal matrices U_i^- = \mathrm{diag}(u_{i1}^-, u_{i2}^-, \ldots, u_{in}^-), U_i^+ = \mathrm{diag}(u_{i1}^+, u_{i2}^+, \ldots, u_{in}^+), V_i^- = \mathrm{diag}(v_{i1}^-, v_{i2}^-, \ldots, v_{in}^-), and V_i^+ = \mathrm{diag}(v_{i1}^+, v_{i2}^+, \ldots, v_{in}^+), i = 1, 2, satisfying

u_{ij}^- \le \frac{f_{ij}(\alpha_1) - f_{ij}(\alpha_2)}{\alpha_1 - \alpha_2} \le u_{ij}^+, \qquad
v_{ij}^- \le \frac{g_{ij}(\alpha_1) - g_{ij}(\alpha_2)}{\alpha_1 - \alpha_2} \le v_{ij}^+ \qquad (i = 1, 2)

for all \alpha_1, \alpha_2 \in R, \alpha_1 \neq \alpha_2, and j = 1, 2, \ldots, n.

Assumption 2: There exist constants \underline{\tau}_1, \bar{\tau}_1, \underline{\tau}_2, \bar{\tau}_2, \underline{\delta}_1, \bar{\delta}_1, \underline{\delta}_2, \bar{\delta}_2, \mu_1, \mu_2, \nu_1, \nu_2, \tau, \delta such that

0 \le \underline{\tau}_1 \le \tau_1(t) \le \bar{\tau}_1, \quad 0 \le \underline{\tau}_2 \le \tau_2(t) \le \bar{\tau}_2, \quad
0 \le \underline{\delta}_1 \le \delta_1(t) \le \bar{\delta}_1, \quad 0 \le \underline{\delta}_2 \le \delta_2(t) \le \bar{\delta}_2,
\dot{\tau}_1(t) \le \mu_1, \quad \dot{\tau}_2(t) \le \mu_2, \quad \dot{\delta}_1(t) \le \nu_1, \quad \dot{\delta}_2(t) \le \nu_2,
\tau = \max\{\bar{\tau}_1, \bar{\tau}_2\}, \quad \delta = \max\{\bar{\delta}_1, \bar{\delta}_2\}.

Assumption 3: There exist positive definite matrices R_{1i}, R_{2i}, R_{3i}, T_{1i}, T_{2i}, T_{3i} such that for all i \in S

\mathrm{trace}[\sigma_1^T(t,i)\sigma_1(t,i)] \le x^T(t)R_{1i}x(t) + y^T(t-\tau_1(t))R_{2i}y(t-\tau_1(t)) + y^T(t-\tau_2(t))R_{3i}y(t-\tau_2(t)),
\mathrm{trace}[\sigma_2^T(t,i)\sigma_2(t,i)] \le y^T(t)T_{1i}y(t) + x^T(t-\delta_1(t))T_{2i}x(t-\delta_1(t)) + x^T(t-\delta_2(t))T_{3i}x(t-\delta_2(t)),

where \sigma_1(t,i) = \sigma_1(x(t), y(t-\tau_1(t)), y(t-\tau_2(t)), t, i) and \sigma_2(t,i) = \sigma_2(y(t), x(t-\delta_1(t)), x(t-\delta_2(t)), t, i).
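For instance, if every activation is the hyperbolic tangent, then Assumption 1 holds with u_{ij}^- = v_{ij}^- = 0 and u_{ij}^+ = v_{ij}^+ = 1. The short sketch below (an illustration, not from the paper; the helper name and sampling grid are assumptions) estimates such sector bounds numerically and forms the diagonal majorant that reappears in Theorem 1.

```python
# Hedged sketch: empirical sector bounds (Assumption 1) for a scalar activation.
import numpy as np

def sector_bounds(phi, lo=-10.0, hi=10.0, num=2001):
    """Estimate inf/sup of (phi(a) - phi(b)) / (a - b) over a grid of pairs a != b."""
    a = np.linspace(lo, hi, num)
    with np.errstate(divide="ignore", invalid="ignore"):
        q = (phi(a) - phi(a)[:, None]) / (a - a[:, None])
    q = q[np.isfinite(q)]                    # drop the a == b diagonal (0/0)
    return q.min(), q.max()

u_minus, u_plus = sector_bounds(np.tanh)     # approximately 0 and 1 for tanh
U1 = np.diag([max(abs(u_minus), abs(u_plus))] * 2)   # diagonal majorant U_j for n = 2 neurons
print(u_minus, u_plus)
```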

Assumption 4: f_1(0) = f_2(0) = g_1(0) = g_2(0) \equiv 0 and \sigma_1(0,0,0,t,r(t)) = \sigma_2(0,0,0,t,r(t)) \equiv 0.

Let (x(t,\phi), y(t,\psi)) denote the state trajectory of the neural network (1) from the initial data \phi \in C^2_{\mathcal{F}_0}([-\delta,0]), \psi \in C^2_{\mathcal{F}_0}([-\tau,0]). Clearly, (1) admits a trivial solution (x(t,0), y(t,0)) \equiv 0 corresponding to the initial data \phi = 0, \psi = 0. For simplicity, we write (x(t,\phi), y(t,\psi)) = (x(t), y(t)).

Now we give the concept of exponential stability in the mean square for (1).

Definition 1: The trivial solution of (1) is said to be exponentially stable in the mean square if for every \phi \in C^2_{\mathcal{F}_0}([-\delta,0]; R^n), \psi \in C^2_{\mathcal{F}_0}([-\tau,0]; R^n), there exist scalars \alpha_1 > 0, \alpha_2 > 0, \beta > 0, and \gamma > 0 such that

E[|x(t,\phi)|^2 + |y(t,\psi)|^2] \le \gamma e^{-\beta t}\Big[\alpha_1 \sup_{-\delta \le \theta \le 0} E|\phi(\theta)|^2 + \alpha_2 \sup_{-\tau \le \theta \le 0} E|\psi(\theta)|^2\Big].

To develop our results, we need to introduce some more notation. First of all, we give the following useful definition.

Definition 2: The function V : [t_0,\infty) \times R^n \times R^n \times S \to R_+ belongs to class \Psi_0 if:
1) the function V is continuous on each of the sets [t_{k-1}, t_k) \times R^n \times R^n \times S and, for all t \ge t_0, V(t,0,0,i) \equiv 0, i \in S;
2) V(t,x,y,i) is locally Lipschitzian in x, y \in R^n, i \in S;
3) for each k = 1, 2, \ldots, there exist finite limits

\lim_{(t,z_1,z_2,j)\to(t_k^-,x,y,i_1)} V(t,z_1,z_2,j) = V(t_k^-,x,y,i_1) \quad \text{and} \quad \lim_{(t,z_1,z_2,j)\to(t_k^+,x,y,i_2)} V(t,z_1,z_2,j) = V(t_k^+,x,y,i_2),

with V(t_k^+,x,y,i_2) = V(t_k,x,y,i_2) satisfied.

From now on, for simplicity, when r(t) = i, the matrices C(r(t)), A(r(t)), B(r(t)), F(r(t)), G(r(t)), D_k(r(t)), and H_k(r(t)) will be written as C_i, A_i, B_i, F_i, G_i, D_{ik}, and H_{ik}, respectively. Let C^{2,1}(R_+ \times R^n \times R^n \times S; R_+) denote the family of all nonnegative functions V(t,x,y,i) \in \Psi_0 on R_+ \times R^n \times R^n \times S that are continuously twice differentiable in x, y and differentiable in t.

Finally, we present some important facts that are needed to prove our main results.

Lemma 1: For any real matrices X, Y and any positive-definite matrix G, the following matrix inequality holds:

X^T Y + Y^T X \le X^T G^{-1} X + Y^T G Y.

Lemma 2: For any positive-definite matrix G > 0, scalar \tau > 0, and vector function \Phi : [0,\tau] \to R^n such that the integrations concerned are well defined, the following inequality holds:

\Big(\int_0^{\tau} \Phi(t)\,dt\Big)^T G \Big(\int_0^{\tau} \Phi(t)\,dt\Big) \le \tau \int_0^{\tau} \Phi^T(t)\, G\, \Phi(t)\,dt.

Lemma 3 (Schur complement): Given a positive-definite matrix G_2 > 0 and constant matrices G_1, G_3 with G_1 = G_1^T, then G_1 + G_3^T G_2^{-1} G_3 < 0 if and only if

\begin{bmatrix} G_1 & G_3^T \\ G_3 & -G_2 \end{bmatrix} < 0.
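Lemma 2 is a Jensen-type integral inequality and can be sanity-checked numerically by replacing the integrals with Riemann sums. The sketch below does this for an arbitrary positive-definite G and a sampled vector function \Phi; the particular G, \Phi, and grid are assumptions made for illustration.

```python
# Hedged sketch: numerical check of the Jensen-type inequality in Lemma 2.
import numpy as np

rng = np.random.default_rng(1)
n, tau, m = 3, 2.0, 4000
dt = tau / m
W = rng.normal(size=(n, n))
G = W @ W.T + n * np.eye(n)                              # a positive-definite weight
t = np.linspace(0.0, tau, m, endpoint=False)
Phi = np.stack([np.sin(t + k) for k in range(n)], axis=1)  # samples of Phi : [0, tau] -> R^n

v = Phi.sum(axis=0) * dt                                 # ~ int_0^tau Phi(t) dt
lhs = v @ G @ v
rhs = tau * dt * np.einsum("ti,ij,tj->", Phi, G, Phi)    # ~ tau * int_0^tau Phi^T G Phi dt
print(lhs, "<=", rhs, bool(lhs <= rhs))
```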


In addition, for a given positive constant \alpha and i = 1, 2, define

\rho_i(x) = \begin{cases} e^{-\alpha\bar{\delta}_i}, & \text{if } 0 < x \le 1 \\ 1, & \text{if } x > 1 \end{cases}
\qquad
\varrho_i(x) = \begin{cases} e^{-\alpha\bar{\tau}_i}, & \text{if } 0 < x \le 1 \\ 1, & \text{if } x > 1. \end{cases}

Theorem 1: Let \alpha be a fixed positive constant and suppose that Assumptions 1–4 hold. Then the trivial solution of (1) is exponentially stable in the mean square if there exist positive scalars \lambda_{1i}, \lambda_{2i} (i \in S), positive diagonal matrices Q_1, Q_2, X_1, X_2, X_3, X_4, positive-definite matrices L_1, L_2, M_1, M_2, P_{1i}, P_{2i}, H_i, K_i (i \in S), and any matrices N_1, N_2, N_3, N_1^*, N_2^*, N_3^*, Y_1, Y_2, Y_3, Y_1^*, Y_2^*, Y_3^* such that the following linear matrix inequalities (LMIs) hold:

P_{1i} \le \lambda_{1i} I \tag{2}
P_{2i} \le \lambda_{2i} I \tag{3}
D_{ik}^T P_{1j} D_{ik} - P_{1i} \le 0, \quad [\text{here } r(t_k) = j] \tag{4}
H_{ik}^T P_{2j} H_{ik} - P_{2i} \le 0, \quad [\text{here } r(t_k) = j] \tag{5}
\bar{\tau}_2 H_i \le e^{-\alpha\bar{\tau}_2} Q_1 \tag{6}
\bar{\delta}_2 K_i \le e^{-\alpha\bar{\delta}_2} Q_2 \tag{7}

\Xi_i =
\begin{bmatrix}
\tilde{M}_{11} & \tilde{M}_{12} & \tilde{M}_{13} & \tilde{M}_{14} & \tilde{M}_{15} \\
* & \tilde{M}_{22} & \tilde{M}_{23} & \tilde{M}_{24} & \tilde{M}_{25} \\
* & * & \tilde{M}_{33} & \tilde{M}_{34} & \tilde{M}_{35} \\
* & * & * & -H_i & 0 \\
* & * & * & * & -K_i
\end{bmatrix} < 0 \tag{8}

where

\tilde{M}_{11} =
\begin{bmatrix}
\Omega_{11} & 0 & \Omega_{13} & 0 & \Omega_{15} & 0 \\
* & \Omega_{22} & 0 & \Omega_{24} & 0 & \Omega_{26} \\
* & * & \Omega_{33} & 0 & 0 & 0 \\
* & * & * & \Omega_{44} & 0 & 0 \\
* & * & * & * & \Omega_{55} & 0 \\
* & * & * & * & * & \Omega_{66}
\end{bmatrix},
\qquad
\tilde{M}_{12} =
\begin{bmatrix}
P_{1i}A_i & 0 & 0 & 0 \\
0 & P_{2i}F_i & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},

\tilde{M}_{13} =
\begin{bmatrix}
\Omega_{1,11} & 0 & \Omega_{1,13} & 0 \\
0 & \Omega_{2,12} & 0 & \Omega_{2,14} \\
\Omega_{3,11} & 0 & 0 & 0 \\
0 & \Omega_{4,12} & 0 & 0 \\
0 & 0 & \Omega_{5,13} & 0 \\
0 & 0 & 0 & \Omega_{6,14}
\end{bmatrix},
\qquad
\tilde{M}_{22} = \mathrm{diag}(\Omega_{77}, \Omega_{88}, \Omega_{99}, \Omega_{10,10}),
\qquad
\tilde{M}_{33} = \mathrm{diag}(\Omega_{11,11}, \Omega_{12,12}, \Omega_{13,13}, \Omega_{14,14}),

\tilde{M}_{14} = [P_{1i}B_i \ 0 \ 0 \ 0 \ 0 \ 0]^T, \quad \tilde{M}_{15} = [0 \ P_{2i}G_i \ 0 \ 0 \ 0 \ 0]^T, \quad
\tilde{M}_{23} = 0, \quad \tilde{M}_{24} = \tilde{M}_{25} = \tilde{M}_{34} = \tilde{M}_{35} = [0 \ 0 \ 0 \ 0]^T,

with

\Omega_{11} = -2P_{1i}C_i + \alpha P_{1i} + \lambda_{1i}R_{1i} + L_1 + L_2 + V_2 X_4 V_2 + N_1 + N_1^T + Y_1 + Y_1^T + \sum_{j=1}^{N} q_{ij}P_{1j},
\Omega_{22} = -2P_{2i}E_i + \alpha P_{2i} + \lambda_{2i}T_{1i} + M_1 + M_2 + U_2 X_3 U_2 + N_1^* + N_1^{*T} + Y_1^* + Y_1^{*T} + \sum_{j=1}^{N} q_{ij}P_{2j},
\Omega_{13} = -N_1 + N_2^T, \quad \Omega_{15} = -Y_1 + Y_2^T, \quad \Omega_{24} = -N_1^* + N_2^{*T}, \quad \Omega_{26} = -Y_1^* + Y_2^{*T},
\Omega_{33} = -(1-\nu_1)\rho_1(\nu_1)L_1 + \lambda_{2i}T_{2i} + V_1 X_2 V_1 - N_2 - N_2^T,
\Omega_{44} = -(1-\mu_1)\varrho_1(\mu_1)M_1 + \lambda_{1i}R_{2i} + U_1 X_1 U_1 - N_2^* - N_2^{*T},
\Omega_{55} = -(1-\nu_2)\rho_2(\nu_2)L_2 + \lambda_{2i}T_{3i} - Y_2 - Y_2^T,
\Omega_{66} = -(1-\mu_2)\varrho_2(\mu_2)M_2 + \lambda_{1i}R_{3i} - Y_2^* - Y_2^{*T},
\Omega_{77} = -X_1, \quad \Omega_{88} = -X_2, \quad \Omega_{99} = -X_3 + \bar{\tau}_2 Q_1, \quad \Omega_{10,10} = -X_4 + \bar{\delta}_2 Q_2,
\Omega_{1,11} = -N_1 + N_3^T, \quad \Omega_{1,13} = -Y_1 + Y_3^T, \quad \Omega_{2,12} = -N_1^* + N_3^{*T}, \quad \Omega_{2,14} = -Y_1^* + Y_3^{*T},
\Omega_{3,11} = -N_2 - N_3^T, \quad \Omega_{4,12} = -N_2^* - N_3^{*T}, \quad \Omega_{5,13} = -Y_2 - Y_3^T, \quad \Omega_{6,14} = -Y_2^* - Y_3^{*T},
\Omega_{11,11} = -N_3 - N_3^T, \quad \Omega_{12,12} = -N_3^* - N_3^{*T}, \quad \Omega_{13,13} = -Y_3 - Y_3^T, \quad \Omega_{14,14} = -Y_3^* - Y_3^{*T},
U_j = \mathrm{diag}(u_{j1}, u_{j2}, \ldots, u_{jn}), \quad u_{ji} = \max\{|u_{ji}^-|, |u_{ji}^+|\},
V_j = \mathrm{diag}(v_{j1}, v_{j2}, \ldots, v_{jn}), \quad v_{ji} = \max\{|v_{ji}^-|, |v_{ji}^+|\},
for j = 1, 2 and i = 1, 2, \ldots, n.
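Conditions (2)–(7) (and, with more bookkeeping, the block LMI (8)) can be checked numerically with a semidefinite-programming tool. The sketch below encodes (2)–(7) in CVXPY for assumed two-mode data; it only illustrates how the feasibility problem is posed and is not one of the paper's numerical examples: the mode matrices, \alpha, and delay bounds are placeholders. The paper additionally requires Q_1, Q_2 to be diagonal, which could be enforced with cp.diag of a nonnegative vector.

```python
# Hedged sketch: posing conditions (2)-(7) as an SDP feasibility problem in CVXPY.
import cvxpy as cp
import numpy as np

n, N, alpha = 2, 2, 0.5
tau2_bar, delta2_bar = 0.2, 0.2
D = [0.5 * np.eye(n), 0.6 * np.eye(n)]       # assumed impulse gains D_ik (taken k-independent)
H = [0.5 * np.eye(n), 0.4 * np.eye(n)]       # assumed impulse gains H_ik

P1 = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
P2 = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
Hi = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
Ki = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
Q1 = cp.Variable((n, n), symmetric=True)
Q2 = cp.Variable((n, n), symmetric=True)
lam1 = cp.Variable(N, pos=True)
lam2 = cp.Variable(N, pos=True)

eps = 1e-6 * np.eye(n)
cons = [Q1 >> eps, Q2 >> eps]
for i in range(N):
    cons += [P1[i] >> eps, P2[i] >> eps, Hi[i] >> eps, Ki[i] >> eps]
    cons += [P1[i] << lam1[i] * np.eye(n), P2[i] << lam2[i] * np.eye(n)]            # (2), (3)
    cons += [tau2_bar * Hi[i] << np.exp(-alpha * tau2_bar) * Q1,                    # (6)
             delta2_bar * Ki[i] << np.exp(-alpha * delta2_bar) * Q2]                # (7)
    for j in range(N):
        cons += [D[i].T @ P1[j] @ D[i] - P1[i] << 0,                                # (4)
                 H[i].T @ P2[j] @ H[i] - P2[i] << 0]                                # (5)

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status)                            # "optimal" here means the LMIs are feasible
```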

Proof: Consider the following Lyapunov–Krasovskii functional:

V(t, x(t), y(t), i) = \sum_{j=1}^{3} V_j(t, x(t), y(t), i),

where

V_1(t, x(t), y(t), i) = e^{\alpha t} x^T(t) P_{1i} x(t) + e^{\alpha t} y^T(t) P_{2i} y(t),

V_2(t, x(t), y(t), i) = \int_{t-\delta_1(t)}^{t} e^{\alpha s} x^T(s) L_1 x(s)\,ds + \int_{t-\delta_2(t)}^{t} e^{\alpha s} x^T(s) L_2 x(s)\,ds
\quad + \int_{t-\tau_1(t)}^{t} e^{\alpha s} y^T(s) M_1 y(s)\,ds + \int_{t-\tau_2(t)}^{t} e^{\alpha s} y^T(s) M_2 y(s)\,ds,

V_3(t, x(t), y(t), i) = \int_{-\bar{\tau}_2}^{0}\int_{t+\theta}^{t} e^{\alpha s} f_2^T(y(s)) Q_1 f_2(y(s))\,ds\,d\theta
\quad + \int_{-\bar{\delta}_2}^{0}\int_{t+\theta}^{t} e^{\alpha s} g_2^T(x(s)) Q_2 g_2(x(s))\,ds\,d\theta.
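For intuition about the structure of V = V_1 + V_2 + V_3 (a quadratic term, four single-integral terms over the delay intervals, and two double-integral terms), the sketch below evaluates a discretized version of the functional along a stored trajectory. It assumes constant delays and a uniform time grid; the function name, its arguments, and the Riemann-sum discretization are illustrative choices, not the paper's.

```python
# Hedged sketch: discretized evaluation of V1 + V2 + V3 along a stored trajectory.
# x_hist, y_hist: arrays of shape (K+1, n) on a uniform grid of step dt; t_idx must be
# at least as large as every delay measured in steps (constant delays assumed).
import numpy as np

def lyapunov_value(t_idx, x_hist, y_hist, dt, alpha, P1, P2, L1, L2, M1, M2,
                   Q1, Q2, f2, g2, d1_steps, d2_steps, t1_steps, t2_steps):
    t = t_idx * dt
    x, y = x_hist[t_idx], y_hist[t_idx]
    V1 = np.exp(alpha * t) * (x @ P1 @ x + y @ P2 @ y)

    def weighted_seg(z_hist, W, steps, phi=lambda z: z):
        # ~ int_{t - steps*dt}^{t} e^{alpha s} phi(z(s))^T W phi(z(s)) ds
        idx = np.arange(t_idx - steps, t_idx)
        w = np.exp(alpha * idx * dt)
        z = phi(z_hist[idx])
        return dt * np.sum(w * np.einsum("ki,ij,kj->k", z, W, z))

    V2 = (weighted_seg(x_hist, L1, d1_steps) + weighted_seg(x_hist, L2, d2_steps)
          + weighted_seg(y_hist, M1, t1_steps) + weighted_seg(y_hist, M2, t2_steps))

    # ~ int_{-tau2}^{0} int_{t+theta}^{t} ... ds dtheta  (outer integral as a Riemann sum)
    V3 = (dt * sum(weighted_seg(y_hist, Q1, m, f2) for m in range(1, t2_steps + 1))
          + dt * sum(weighted_seg(x_hist, Q2, m, g2) for m in range(1, d2_steps + 1)))
    return V1 + V2 + V3
```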

Obviously, V(t, x(t), y(t), i) \in C^{2,1}(R_+ \times R^n \times R^n \times S; R_+). For t = t_k, combining (4) and (5) and noting that the integral terms of V_2 and V_3 are continuous in t, we can easily obtain

V(t_k, x(t_k), y(t_k), j) - V(t_k^-, x(t_k^-), y(t_k^-), i)
= e^{\alpha t_k} x^T(t_k) P_{1j} x(t_k) - e^{\alpha t_k} x^T(t_k^-) P_{1i} x(t_k^-) + e^{\alpha t_k} y^T(t_k) P_{2j} y(t_k) - e^{\alpha t_k} y^T(t_k^-) P_{2i} y(t_k^-)
= e^{\alpha t_k} x^T(t_k^-)\big[D_{ik}^T P_{1j} D_{ik} - P_{1i}\big] x(t_k^-) + e^{\alpha t_k} y^T(t_k^-)\big[H_{ik}^T P_{2j} H_{ik} - P_{2i}\big] y(t_k^-) \le 0 \tag{9}

which implies

V(t_k, x(t_k), y(t_k), j) \le V(t_k^-, x(t_k^-), y(t_k^-), i).

For t \in [t_{k-1}, t_k), we have

\mathcal{L}V_1(t, x(t), y(t), i) = \alpha e^{\alpha t} x^T(t)P_{1i}x(t) + 2e^{\alpha t} x^T(t)P_{1i}\Big[-C_i x(t) + A_i f_1(y(t-\tau_1(t))) + B_i \int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds\Big]
+ \alpha e^{\alpha t} y^T(t)P_{2i}y(t) + 2e^{\alpha t} y^T(t)P_{2i}\Big[-E_i y(t) + F_i g_1(x(t-\delta_1(t))) + G_i \int_{t-\delta_2(t)}^{t} g_2(x(s))\,ds\Big]
+ e^{\alpha t} \sum_{j=1}^{N} q_{ij}\big[x^T(t)P_{1j}x(t) + y^T(t)P_{2j}y(t)\big]
+ e^{\alpha t}\,\mathrm{trace}[\sigma_1^T(t,i)P_{1i}\sigma_1(t,i)] + e^{\alpha t}\,\mathrm{trace}[\sigma_2^T(t,i)P_{2i}\sigma_2(t,i)] \tag{10}

where \sigma_1(t,i) and \sigma_2(t,i) denote \sigma_1(x(t), y(t-\tau_1(t)), y(t-\tau_2(t)), t, i) and \sigma_2(y(t), x(t-\delta_1(t)), x(t-\delta_2(t)), t, i), respectively. Similarly, it is easy to compute \mathcal{L}V_2(t, x(t), y(t), i) and \mathcal{L}V_3(t, x(t), y(t), i) as follows:

\mathcal{L}V_2(t, x(t), y(t), i) = e^{\alpha t} x^T(t)(L_1 + L_2)x(t) + e^{\alpha t} y^T(t)(M_1 + M_2)y(t)
- (1 - \dot{\delta}_1(t))\, e^{\alpha(t-\delta_1(t))} x^T(t-\delta_1(t)) L_1 x(t-\delta_1(t)) - (1 - \dot{\delta}_2(t))\, e^{\alpha(t-\delta_2(t))} x^T(t-\delta_2(t)) L_2 x(t-\delta_2(t))
- (1 - \dot{\tau}_1(t))\, e^{\alpha(t-\tau_1(t))} y^T(t-\tau_1(t)) M_1 y(t-\tau_1(t)) - (1 - \dot{\tau}_2(t))\, e^{\alpha(t-\tau_2(t))} y^T(t-\tau_2(t)) M_2 y(t-\tau_2(t))
\le e^{\alpha t} x^T(t)(L_1 + L_2)x(t) + e^{\alpha t} y^T(t)(M_1 + M_2)y(t)
- (1-\nu_1)\rho_1(\nu_1)\, e^{\alpha t} x^T(t-\delta_1(t)) L_1 x(t-\delta_1(t)) - (1-\nu_2)\rho_2(\nu_2)\, e^{\alpha t} x^T(t-\delta_2(t)) L_2 x(t-\delta_2(t))
- (1-\mu_1)\varrho_1(\mu_1)\, e^{\alpha t} y^T(t-\tau_1(t)) M_1 y(t-\tau_1(t)) - (1-\mu_2)\varrho_2(\mu_2)\, e^{\alpha t} y^T(t-\tau_2(t)) M_2 y(t-\tau_2(t)) \tag{11}

\mathcal{L}V_3(t, x(t), y(t), i) = \bar{\tau}_2\, e^{\alpha t} f_2^T(y(t)) Q_1 f_2(y(t)) + \bar{\delta}_2\, e^{\alpha t} g_2^T(x(t)) Q_2 g_2(x(t))
- \int_{t-\bar{\tau}_2}^{t} e^{\alpha s} f_2^T(y(s)) Q_1 f_2(y(s))\,ds - \int_{t-\bar{\delta}_2}^{t} e^{\alpha s} g_2^T(x(s)) Q_2 g_2(x(s))\,ds
\le \bar{\tau}_2\, e^{\alpha t} f_2^T(y(t)) Q_1 f_2(y(t)) + \bar{\delta}_2\, e^{\alpha t} g_2^T(x(t)) Q_2 g_2(x(t))
- e^{\alpha(t-\bar{\tau}_2)} \int_{t-\bar{\tau}_2}^{t} f_2^T(y(s)) Q_1 f_2(y(s))\,ds - e^{\alpha(t-\bar{\delta}_2)} \int_{t-\bar{\delta}_2}^{t} g_2^T(x(s)) Q_2 g_2(x(s))\,ds. \tag{12}

Using Lemmas 1 and 2, we have

2x^T(t) P_{1i} B_i \int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds
\le x^T(t) P_{1i} B_i H_i^{-1} B_i^T P_{1i} x(t) + \Big(\int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds\Big)^T H_i \Big(\int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds\Big)
\le x^T(t) P_{1i} B_i H_i^{-1} B_i^T P_{1i} x(t) + \tau_2(t) \int_{t-\tau_2(t)}^{t} f_2^T(y(s)) H_i f_2(y(s))\,ds
\le x^T(t) P_{1i} B_i H_i^{-1} B_i^T P_{1i} x(t) + \bar{\tau}_2 \int_{t-\bar{\tau}_2}^{t} f_2^T(y(s)) H_i f_2(y(s))\,ds \tag{13}

2y^T(t) P_{2i} G_i \int_{t-\delta_2(t)}^{t} g_2(x(s))\,ds
\le y^T(t) P_{2i} G_i K_i^{-1} G_i^T P_{2i} y(t) + \delta_2(t) \int_{t-\delta_2(t)}^{t} g_2^T(x(s)) K_i g_2(x(s))\,ds
\le y^T(t) P_{2i} G_i K_i^{-1} G_i^T P_{2i} y(t) + \bar{\delta}_2 \int_{t-\bar{\delta}_2}^{t} g_2^T(x(s)) K_i g_2(x(s))\,ds. \tag{14}

Also, by Assumption 3 and the conditions (2) and (3), we get

\mathrm{trace}[\sigma_1^T(t,i) P_{1i} \sigma_1(t,i)] \le \lambda_{1i}\,\mathrm{trace}[\sigma_1^T(t,i) \sigma_1(t,i)]
\le \lambda_{1i} x^T(t) R_{1i} x(t) + \lambda_{1i} y^T(t-\tau_1(t)) R_{2i} y(t-\tau_1(t)) + \lambda_{1i} y^T(t-\tau_2(t)) R_{3i} y(t-\tau_2(t)) \tag{15}

\mathrm{trace}[\sigma_2^T(t,i) P_{2i} \sigma_2(t,i)] \le \lambda_{2i}\,\mathrm{trace}[\sigma_2^T(t,i) \sigma_2(t,i)]
\le \lambda_{2i} y^T(t) T_{1i} y(t) + \lambda_{2i} x^T(t-\delta_1(t)) T_{2i} x(t-\delta_1(t)) + \lambda_{2i} x^T(t-\delta_2(t)) T_{3i} x(t-\delta_2(t)). \tag{16}

On the other hand, the conditions (6) and (7) yield

\bar{\tau}_2 \int_{t-\bar{\tau}_2}^{t} f_2^T(y(s)) H_i f_2(y(s))\,ds - e^{-\alpha\bar{\tau}_2} \int_{t-\bar{\tau}_2}^{t} f_2^T(y(s)) Q_1 f_2(y(s))\,ds \le 0 \tag{17}

\bar{\delta}_2 \int_{t-\bar{\delta}_2}^{t} g_2^T(x(s)) K_i g_2(x(s))\,ds - e^{-\alpha\bar{\delta}_2} \int_{t-\bar{\delta}_2}^{t} g_2^T(x(s)) Q_2 g_2(x(s))\,ds \le 0. \tag{18}

Moreover, it follows from Assumption 1 (together with Assumption 4) that

f_1^T(y(t-\tau_1(t))) X_1 f_1(y(t-\tau_1(t))) \le y^T(t-\tau_1(t)) U_1 X_1 U_1 y(t-\tau_1(t)) \tag{19}
g_1^T(x(t-\delta_1(t))) X_2 g_1(x(t-\delta_1(t))) \le x^T(t-\delta_1(t)) V_1 X_2 V_1 x(t-\delta_1(t)) \tag{20}
f_2^T(y(t)) X_3 f_2(y(t)) \le y^T(t) U_2 X_3 U_2 y(t) \tag{21}
g_2^T(x(t)) X_4 g_2(x(t)) \le x^T(t) V_2 X_4 V_2 x(t). \tag{22}

By (1), on [t_{k-1}, t_k) we have dx(t) = Z_1(t)\,dt + \sigma_1(t)\,dw_1(t) and dy(t) = Z_2(t)\,dt + \sigma_2(t)\,dw_2(t), where

Z_1(t) = -C(r(t))\, x(t) + A(r(t))\, f_1(y(t-\tau_1(t))) + B(r(t)) \int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds,
Z_2(t) = -E(r(t))\, y(t) + F(r(t))\, g_1(x(t-\delta_1(t))) + G(r(t)) \int_{t-\delta_2(t)}^{t} g_2(x(s))\,ds,

which yields

x(t) - x(t-\delta_1(t)) - \int_{t-\delta_1(t)}^{t} Z_1(s)\,ds - \int_{t-\delta_1(t)}^{t} \sigma_1(s)\,dw_1(s) = 0,
y(t) - y(t-\tau_1(t)) - \int_{t-\tau_1(t)}^{t} Z_2(s)\,ds - \int_{t-\tau_1(t)}^{t} \sigma_2(s)\,dw_2(s) = 0.

Hence, for any matrices N_1, N_2, N_3, N_1^*, N_2^*, N_3^* with appropriate dimensions, we get

\Big[2x^T(t)N_1 + 2x^T(t-\delta_1(t))N_2 + 2\Big(\int_{t-\delta_1(t)}^{t} Z_1(s)\,ds\Big)^T N_3\Big]\Big[x(t) - x(t-\delta_1(t)) - \int_{t-\delta_1(t)}^{t} Z_1(s)\,ds - \int_{t-\delta_1(t)}^{t} \sigma_1(s)\,dw_1(s)\Big] = 0 \tag{23}

\Big[2y^T(t)N_1^* + 2y^T(t-\tau_1(t))N_2^* + 2\Big(\int_{t-\tau_1(t)}^{t} Z_2(s)\,ds\Big)^T N_3^*\Big]\Big[y(t) - y(t-\tau_1(t)) - \int_{t-\tau_1(t)}^{t} Z_2(s)\,ds - \int_{t-\tau_1(t)}^{t} \sigma_2(s)\,dw_2(s)\Big] = 0. \tag{24}

Similarly, for any matrices Y_1, Y_2, Y_3, Y_1^*, Y_2^*, Y_3^* with appropriate dimensions, we have

\Big[2x^T(t)Y_1 + 2x^T(t-\delta_2(t))Y_2 + 2\Big(\int_{t-\delta_2(t)}^{t} Z_1(s)\,ds\Big)^T Y_3\Big]\Big[x(t) - x(t-\delta_2(t)) - \int_{t-\delta_2(t)}^{t} Z_1(s)\,ds - \int_{t-\delta_2(t)}^{t} \sigma_1(s)\,dw_1(s)\Big] = 0 \tag{25}

\Big[2y^T(t)Y_1^* + 2y^T(t-\tau_2(t))Y_2^* + 2\Big(\int_{t-\tau_2(t)}^{t} Z_2(s)\,ds\Big)^T Y_3^*\Big]\Big[y(t) - y(t-\tau_2(t)) - \int_{t-\tau_2(t)}^{t} Z_2(s)\,ds - \int_{t-\tau_2(t)}^{t} \sigma_2(s)\,dw_2(s)\Big] = 0. \tag{26}

Hence, by (8) and (11)–(26), we obtain

\mathcal{L}V(t, x(t), y(t), i) \le e^{\alpha t}\, \eta^T(t)\, \Theta_i\, \eta(t), \qquad t \in [t_{k-1}, t_k) \tag{27}

where

\eta(t) = \big[x^T(t),\ y^T(t),\ x^T(t-\delta_1(t)),\ y^T(t-\tau_1(t)),\ x^T(t-\delta_2(t)),\ y^T(t-\tau_2(t)),\ f_1^T(y(t-\tau_1(t))),\ g_1^T(x(t-\delta_1(t))),\ f_2^T(y(t)),\ g_2^T(x(t)),
\Big(\int_{t-\delta_1(t)}^{t} Z_1(s)\,ds\Big)^T,\ \Big(\int_{t-\tau_1(t)}^{t} Z_2(s)\,ds\Big)^T,\ \Big(\int_{t-\delta_2(t)}^{t} Z_1(s)\,ds\Big)^T,\ \Big(\int_{t-\tau_2(t)}^{t} Z_2(s)\,ds\Big)^T\big]^T,

\Theta_i =
\begin{bmatrix}
\tilde{M}_{11}^* & \tilde{M}_{12} & \tilde{M}_{13} \\
* & \tilde{M}_{22} & \tilde{M}_{23} \\
* & * & \tilde{M}_{33}
\end{bmatrix},

and \tilde{M}_{11}^* is obtained from \tilde{M}_{11} by replacing \Omega_{11} and \Omega_{22} with

\Omega_{11}^* = \Omega_{11} + P_{1i} B_i H_i^{-1} B_i^T P_{1i}, \qquad \Omega_{22}^* = \Omega_{22} + P_{2i} G_i K_i^{-1} G_i^T P_{2i}.

By Lemma 3 and the condition (8), we see that \Theta_i < 0 for all i \in S. Then, it follows from (27) that

\mathcal{L}V(t, x(t), y(t), i) \le 0, \qquad t \in [t_{k-1}, t_k). \tag{28}

For any t, t' \in [t_{k-1}, t_k) with t > t', applying the generalized Itô formula and (28), we get

EV(t, x(t), y(t), i) - EV(t', x(t'), y(t'), r(t')) = E\int_{t'}^{t} \mathcal{L}V(s, x(s), y(s), r(s))\,ds \le 0,

which yields

EV(t, x(t), y(t), i) \le EV(t', x(t'), y(t'), r(t')). \tag{29}

By (9), (29), and mathematical induction, we have that for all i, j \in S and k \ge 1

EV(t_k, x(t_k), y(t_k), j) \le EV(t_k^-, x(t_k^-), y(t_k^-), i) \le EV(t_{k-1}, x(t_{k-1}), y(t_{k-1}), r(t_{k-1}))
\le EV(t_{k-1}^-, x(t_{k-1}^-), y(t_{k-1}^-), r(t_{k-1}^-)) \le \cdots \le EV(t_0, x(t_0), y(t_0), r(t_0)). \tag{30}

Noting t_0 = 0, it follows from (29) and (30) that

EV(t, x(t), y(t), i) \le EV(0, x(0), y(0), r(0))
\le E\big[x^T(0) P_{1r(0)} x(0)\big] + E\big[y^T(0) P_{2r(0)} y(0)\big]
+ \int_{-\delta_1(0)}^{0} e^{\alpha s} E\big[x^T(s) L_1 x(s)\big]\,ds + \int_{-\delta_2(0)}^{0} e^{\alpha s} E\big[x^T(s) L_2 x(s)\big]\,ds
+ \int_{-\tau_1(0)}^{0} e^{\alpha s} E\big[y^T(s) M_1 y(s)\big]\,ds + \int_{-\tau_2(0)}^{0} e^{\alpha s} E\big[y^T(s) M_2 y(s)\big]\,ds
+ \int_{-\bar{\tau}_2}^{0}\int_{\theta}^{0} e^{\alpha s} E\big[f_2^T(y(s)) Q_1 f_2(y(s))\big]\,ds\,d\theta + \int_{-\bar{\delta}_2}^{0}\int_{\theta}^{0} e^{\alpha s} E\big[g_2^T(x(s)) Q_2 g_2(x(s))\big]\,ds\,d\theta
\le \lambda_{1r(0)} E|x(0)|^2 + \lambda_{2r(0)} E|y(0)|^2
+ \lambda_{\max}(L_1) \int_{-\bar{\delta}_1}^{0} E\big[x^T(s)x(s)\big]\,ds + \lambda_{\max}(L_2) \int_{-\bar{\delta}_2}^{0} E\big[x^T(s)x(s)\big]\,ds
+ \lambda_{\max}(M_1) \int_{-\bar{\tau}_1}^{0} E\big[y^T(s)y(s)\big]\,ds + \lambda_{\max}(M_2) \int_{-\bar{\tau}_2}^{0} E\big[y^T(s)y(s)\big]\,ds
+ \bar{\tau}_2\, \lambda_{\max}(U_2 Q_1 U_2) \int_{-\bar{\tau}_2}^{0} E\big[y^T(s)y(s)\big]\,ds + \bar{\delta}_2\, \lambda_{\max}(V_2 Q_2 V_2) \int_{-\bar{\delta}_2}^{0} E\big[x^T(s)x(s)\big]\,ds
\le \Big[\max_{i\in S} \lambda_{1i} + \bar{\delta}_1 \lambda_{\max}(L_1) + \bar{\delta}_2 \lambda_{\max}(L_2) + \bar{\delta}_2^2 \lambda_{\max}(V_2 Q_2 V_2)\Big] \sup_{-\delta\le\theta\le 0} E|\phi(\theta)|^2
+ \Big[\max_{i\in S} \lambda_{2i} + \bar{\tau}_1 \lambda_{\max}(M_1) + \bar{\tau}_2 \lambda_{\max}(M_2) + \bar{\tau}_2^2 \lambda_{\max}(U_2 Q_1 U_2)\Big] \sup_{-\tau\le\theta\le 0} E|\psi(\theta)|^2. \tag{31}

On the other hand, by the definition of V(t, x(t), y(t), i), we get

V(t, x(t), y(t), i) \ge e^{\alpha t} x^T(t) P_{1i} x(t) + e^{\alpha t} y^T(t) P_{2i} y(t)
\ge e^{\alpha t} \big[\lambda_{\min}(P_{1i})|x(t)|^2 + \lambda_{\min}(P_{2i})|y(t)|^2\big]
\ge e^{\alpha t} \min_{i\in S}\{\lambda_{\min}(P_{1i}), \lambda_{\min}(P_{2i})\}\,\big[|x(t)|^2 + |y(t)|^2\big]. \tag{32}

Let

\alpha_1 = \max_{i\in S} \lambda_{1i} + \bar{\delta}_1 \lambda_{\max}(L_1) + \bar{\delta}_2 \lambda_{\max}(L_2) + \bar{\delta}_2^2 \lambda_{\max}(V_2 Q_2 V_2),
\alpha_2 = \max_{i\in S} \lambda_{2i} + \bar{\tau}_1 \lambda_{\max}(M_1) + \bar{\tau}_2 \lambda_{\max}(M_2) + \bar{\tau}_2^2 \lambda_{\max}(U_2 Q_1 U_2).

Then, combining (31) and (32), we obtain

E\big[|x(t)|^2 + |y(t)|^2\big] \le \frac{e^{-\alpha t}\big[\alpha_1 \sup_{-\delta\le\theta\le 0} E|\phi(\theta)|^2 + \alpha_2 \sup_{-\tau\le\theta\le 0} E|\psi(\theta)|^2\big]}{\min_{i\in S}\{\lambda_{\min}(P_{1i}), \lambda_{\min}(P_{2i})\}},

which together with Definition 1 verifies that the trivial solution of (1) is exponentially stable in the mean square. This completes the proof of Theorem 1. □
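The conclusion of Theorem 1 is a mean-square bound of the form required by Definition 1 and can be examined empirically. The sketch below estimates E[|x(t)|^2 + |y(t)|^2] over independent sample paths and fits an exponential decay rate on the tail; it assumes a path simulator with the interface of the simulation sketch given after system (1) (the callable and its return format are assumptions).

```python
# Hedged sketch: Monte Carlo estimate of E[|x(t)|^2 + |y(t)|^2] and an empirical decay rate.
import numpy as np

def mean_square_decay(simulate, n_paths=200, dt=1e-3, fit_from=0.5):
    acc = None
    for _ in range(n_paths):
        x, y = simulate()                         # each of shape (steps+1, n)
        e = np.sum(x ** 2, axis=1) + np.sum(y ** 2, axis=1)
        acc = e if acc is None else acc + e
    mean_e = acc / n_paths
    t = np.arange(len(mean_e)) * dt
    tail = t >= fit_from
    slope, _ = np.polyfit(t[tail], np.log(mean_e[tail] + 1e-300), 1)
    return mean_e, -slope                         # -slope estimates the exponent beta
```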



Theorem 2: Let \alpha be a fixed positive constant and suppose that Assumptions 1–4 hold. Then the trivial solution of (1) is exponentially stable in the mean square if there exist positive scalars \lambda_{1i}, \lambda_{2i} (i \in S), positive diagonal matrices Q_1, Q_2, X_1, X_2, X_3, X_4, positive-definite matrices L_1, L_2, L_3, L_4, M_1, M_2, M_3, M_4, P_{1i}, P_{2i}, H_i, K_i (i \in S), and any matrices N_1, N_2, N_3, N_1^*, N_2^*, N_3^*, Y_1, Y_2, Y_3, Y_1^*, Y_2^*, Y_3^* such that (2)–(7) and the following LMI hold:

\begin{bmatrix}
\Xi_i'' & \Pi_{12} & \Pi_{13} & \Pi_{14} & \Pi_{15} \\
* & \Pi_{22} & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 \\
* & * & * & \Pi_{44} & 0 \\
* & * & * & * & \Pi_{55}
\end{bmatrix} < 0 \tag{33}

where \Xi_i'' is obtained from \Xi_i in (8) by replacing \Omega_{11} and \Omega_{22} with

\Omega_{11}'' = \Omega_{11} + L_3 + L_4, \qquad \Omega_{22}'' = \Omega_{22} + M_3 + M_4,

and

\Pi_{12} = \Pi_{13} = \Pi_{14} = \Pi_{15} = [0 \ 0 \ \cdots \ 0]^T,
\Pi_{22} = -e^{-\alpha\underline{\delta}_1} L_3, \quad \Pi_{33} = -e^{-\alpha\underline{\delta}_2} L_4, \quad \Pi_{44} = -e^{-\alpha\underline{\tau}_1} M_3, \quad \Pi_{55} = -e^{-\alpha\underline{\tau}_2} M_4.

Proof: Consider the following Lyapunov–Krasovskii functional:

\hat{V}(t, x(t), y(t), i) = V(t, x(t), y(t), i) + \bar{V}(t, x(t), y(t), i),

where V(t, x(t), y(t), i) is the same as in Theorem 1 and

\bar{V}(t, x(t), y(t), i) = \int_{t-\underline{\delta}_1}^{t} e^{\alpha s} x^T(s) L_3 x(s)\,ds + \int_{t-\underline{\delta}_2}^{t} e^{\alpha s} x^T(s) L_4 x(s)\,ds
+ \int_{t-\underline{\tau}_1}^{t} e^{\alpha s} y^T(s) M_3 y(s)\,ds + \int_{t-\underline{\tau}_2}^{t} e^{\alpha s} y^T(s) M_4 y(s)\,ds.

A direct computation yields

\mathcal{L}\bar{V}(t, x(t), y(t), i) = e^{\alpha t} x^T(t)(L_3 + L_4)x(t) + e^{\alpha t} y^T(t)(M_3 + M_4)y(t)
- e^{\alpha(t-\underline{\delta}_1)} x^T(t-\underline{\delta}_1) L_3 x(t-\underline{\delta}_1) - e^{\alpha(t-\underline{\delta}_2)} x^T(t-\underline{\delta}_2) L_4 x(t-\underline{\delta}_2)
- e^{\alpha(t-\underline{\tau}_1)} y^T(t-\underline{\tau}_1) M_3 y(t-\underline{\tau}_1) - e^{\alpha(t-\underline{\tau}_2)} y^T(t-\underline{\tau}_2) M_4 y(t-\underline{\tau}_2). \tag{34}

Then, along the same line as for Theorem 1, we can obtain the desired result. This completes the proof of Theorem 2. □

In what follows, we investigate the exponential stability in the mean square of the following delayed Markovian jump stochastic BAM neural network with impulse control and unknown parameters:

\begin{cases}
dx(t) = \big[-(C_i + \Delta C(t))x(t) + (A_i + \Delta A(t)) f_1(y(t-\tau_1(t))) + (B_i + \Delta B(t)) \int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds\big]dt \\
\qquad\qquad +\, \sigma_1(x(t),\, y(t-\tau_1(t)),\, y(t-\tau_2(t)),\, t,\, i)\,dw_1(t), & t \neq t_k \\
x(t_k) = D_{ik}\, x(t_k^-), & t = t_k \\
dy(t) = \big[-(E_i + \Delta E(t))y(t) + (F_i + \Delta F(t)) g_1(x(t-\delta_1(t))) + (G_i + \Delta G(t)) \int_{t-\delta_2(t)}^{t} g_2(x(s))\,ds\big]dt \\
\qquad\qquad +\, \sigma_2(y(t),\, x(t-\delta_1(t)),\, x(t-\delta_2(t)),\, t,\, i)\,dw_2(t), & t \neq t_k \\
y(t_k) = H_{ik}\, y(t_k^-), & t = t_k
\end{cases} \tag{35}

where \Delta A(t), \Delta B(t), \Delta C(t), \Delta E(t), \Delta F(t), and \Delta G(t) are unknown matrices denoting time-varying parameter uncertainties and satisfying the following conditions:

[\Delta A(t)\ \ \Delta B(t)\ \ \Delta C(t)] = M H(t)[S_1,\ S_2,\ S_3], \tag{36}
[\Delta G(t)\ \ \Delta F(t)\ \ \Delta E(t)] = M' H(t)[S_1',\ S_2',\ S_3'], \tag{37}

where M, M', and S_k, S_k' (k = 1, 2, 3) are known real constant matrices and H(t) is an unknown time-varying matrix-valued function satisfying

H^T(t)H(t) \le I \quad \forall\, t \ge 0. \tag{38}

Definition 3: The trivial solution of (35) is said to be robustly exponentially stable in the mean square if it is exponentially stable in the mean square for all admissible unknown parameters, i.e., for all uncertainties satisfying (36)–(38).
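The admissible uncertainties in (36)–(38) are norm-bounded: any realization has the form M H(t) S with a contraction H(t). The sketch below builds one such H(t) (fixed orthogonal factors with time-varying singular values in [0, 1]); the shapes and the particular parameterization are illustrative assumptions only.

```python
# Hedged sketch: one admissible uncertainty realization Delta A(t) = M H(t) S1 with H^T H <= I.
import numpy as np

rng = np.random.default_rng(2)
n, p = 2, 2
M = 0.1 * rng.normal(size=(n, p))                 # assumed known structure matrices
S1 = 0.2 * rng.normal(size=(p, n))
U, _ = np.linalg.qr(rng.normal(size=(p, p)))      # fixed orthogonal factors
V, _ = np.linalg.qr(rng.normal(size=(p, p)))

def H_of_t(t):
    s = 0.5 * (1.0 + np.sin(t + np.arange(p)))    # singular values in [0, 1]
    return U @ np.diag(s) @ V.T                   # spectral norm <= 1, hence H^T H <= I

dA = M @ H_of_t(0.7) @ S1                         # a realization of Delta A(0.7)
print(np.linalg.norm(H_of_t(0.7), 2) <= 1.0 + 1e-12)
```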

Theorem 3: Let \alpha be a fixed positive constant and suppose that Assumptions 1–4 hold. Then the trivial solution of (35) is robustly exponentially stable in the mean square if there exist positive scalars \lambda_{1i}, \lambda_{2i} (i \in S), positive diagonal matrices Q_1, Q_2, X_1, X_2, X_3, X_4, positive definite matrices L_1, L_2, M_1, M_2, P_{1i}, P_{2i}, H_i, K_i (i \in S), and any matrices N_1, N_2, N_3, N_1^*, N_2^*, N_3^*, Y_1, Y_2, Y_3, Y_1^*, Y_2^*, Y_3^* such that (2)–(5) and the following LMIs hold:

\bar{\tau}_2 (H_i + S_2^T S_2) \le e^{-\alpha\bar{\tau}_2} Q_1, \tag{39}
\bar{\delta}_2 (K_i + S_2'^T S_2') \le e^{-\alpha\bar{\delta}_2} Q_2, \tag{40}

\begin{bmatrix}
\bar{\Xi}_i & X & Y \\
* & -\tfrac{1}{3} I & 0 \\
* & * & -\tfrac{1}{3} I
\end{bmatrix} < 0 \tag{41}

where X and Y are block-column vectors conformable with \Xi_i whose only nonzero blocks are P_{1i}M in the first block row and P_{2i}M' in the second block row, i.e.,

X = [(P_{1i}M)^T\ 0\ \cdots\ 0]^T, \qquad Y = [0\ (P_{2i}M')^T\ 0\ \cdots\ 0]^T,

and \bar{\Xi}_i is obtained from \Xi_i in (8) by replacing \Omega_{11}, \Omega_{22}, \Omega_{77}, and \Omega_{88} with

\bar{\Omega}_{11} = \Omega_{11} + S_3^T S_3, \qquad \bar{\Omega}_{22} = \Omega_{22} + S_3'^T S_3',
\bar{\Omega}_{77} = -X_1 + S_1^T S_1, \qquad \bar{\Omega}_{88} = -X_2 + S_1'^T S_1'.

Proof: Consider the same Lyapunov–Krasovskii functional as in Theorem 1, and use (35) to compute \mathcal{L}V(t, x(t), y(t), i). Compared with the proof of Theorem 1, we only need to estimate the following additional terms:

-2e^{\alpha t} x^T(t) P_{1i} \Delta C(t) x(t) = -2e^{\alpha t} x^T(t) P_{1i} M H(t) S_3 x(t)
\le e^{\alpha t}\big[x^T(t) P_{1i} M H(t) H^T(t) M^T P_{1i} x(t) + x^T(t) S_3^T S_3 x(t)\big]
\le e^{\alpha t}\big[x^T(t) P_{1i} M M^T P_{1i} x(t) + x^T(t) S_3^T S_3 x(t)\big]; \tag{42}

2e^{\alpha t} x^T(t) P_{1i} \Delta A(t) f_1(y(t-\tau_1(t))) = 2e^{\alpha t} x^T(t) P_{1i} M H(t) S_1 f_1(y(t-\tau_1(t)))
\le e^{\alpha t}\big[x^T(t) P_{1i} M M^T P_{1i} x(t) + f_1^T(y(t-\tau_1(t))) S_1^T S_1 f_1(y(t-\tau_1(t)))\big]; \tag{43}

2e^{\alpha t} x^T(t) P_{1i} \Delta B(t) \int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds = 2e^{\alpha t} x^T(t) P_{1i} M H(t) S_2 \int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds
\le e^{\alpha t}\Big[x^T(t) P_{1i} M M^T P_{1i} x(t) + \Big(\int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds\Big)^T S_2^T S_2 \Big(\int_{t-\tau_2(t)}^{t} f_2(y(s))\,ds\Big)\Big]
\le e^{\alpha t}\Big[x^T(t) P_{1i} M M^T P_{1i} x(t) + \bar{\tau}_2 \int_{t-\bar{\tau}_2}^{t} f_2^T(y(s)) S_2^T S_2 f_2(y(s))\,ds\Big]; \tag{44}

-2e^{\alpha t} y^T(t) P_{2i} \Delta E(t) y(t) \le e^{\alpha t}\big[y^T(t) P_{2i} M' M'^T P_{2i} y(t) + y^T(t) S_3'^T S_3' y(t)\big]; \tag{45}

2e^{\alpha t} y^T(t) P_{2i} \Delta F(t) g_1(x(t-\delta_1(t))) \le e^{\alpha t}\big[y^T(t) P_{2i} M' M'^T P_{2i} y(t) + g_1^T(x(t-\delta_1(t))) S_1'^T S_1' g_1(x(t-\delta_1(t)))\big]; \tag{46}

2e^{\alpha t} y^T(t) P_{2i} \Delta G(t) \int_{t-\delta_2(t)}^{t} g_2(x(s))\,ds \le e^{\alpha t}\Big[y^T(t) P_{2i} M' M'^T P_{2i} y(t) + \bar{\delta}_2 \int_{t-\bar{\delta}_2}^{t} g_2^T(x(s)) S_2'^T S_2' g_2(x(s))\,ds\Big]. \tag{47}

Then, along the same line as for Theorem 1, we can obtain the desired result by applying Lemma 3 and (42)–(47). This completes the proof of Theorem 3. □

By Theorems 2 and 3, we have the following result.

Theorem 4: Let \alpha be a fixed positive constant and suppose that Assumptions 1–4 hold. Then the trivial solution of (35) is robustly exponentially stable in the mean square if there exist positive scalars \lambda_{1i}, \lambda_{2i} (i \in S), positive diagonal matrices Q_1, Q_2, X_1, X_2, X_3, X_4, positive-definite matrices L_1, L_2, L_3, L_4, M_1, M_2, M_3, M_4, P_{1i}, P_{2i}, H_i, K_i (i \in S), and



any matrices N_1, N_2, N_3, N_1^*, N_2^*, N_3^*, Y_1, Y_2, Y_3, Y_1^*, Y_2^*, Y_3^* such that (2)–(5), (39), (40), and the following LMI hold:

\begin{bmatrix}
\bar{\Xi}_i'' & \Pi_{12} & \Pi_{13} & \Pi_{14} & \Pi_{15} & X & Y \\
* & \Pi_{22} & 0 & 0 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 & 0 & 0 \\
* & * & * & \Pi_{44} & 0 & 0 & 0 \\
* & * & * & * & \Pi_{55} & 0 & 0 \\
* & * & * & * & * & -\tfrac{1}{3} I & 0 \\
* & * & * & * & * & * & -\tfrac{1}{3} I
\end{bmatrix} < 0 \tag{48}
