LETTER

Communicated by Anke Meyer-Bäse

Synchronization of Stochastic Competitive Neural Networks with Different Timescales and Reaction-Diffusion Terms

Yanchao Shi [email protected]
Peiyong Zhu [email protected]
School of Applied Mathematics, University of Electronic Science and Technology of China, Chengdu 611731, China

We propose a feedback controller for the synchronization of stochastic competitive neural networks with different timescales and reaction-diffusion terms. By constructing a proper Lyapunov-Krasovskii functional and employing stochastic analysis theory, the LaSalle-type invariance principle for stochastic differential delay equations, and a linear matrix inequality (LMI) technique, a feedback controller is designed that achieves asymptotic synchronization of the coupled stochastic competitive neural networks. A simulation example is given to show the effectiveness of the theoretical results.

1 Introduction

Meyer-Bäse, Ohl, and Scheich (1996) proposed competitive neural networks with different timescales. In this model there are two types of state variables: a short-term-memory (STM) variable describing fast neural activity and a long-term-memory (LTM) variable describing slow, unsupervised synaptic modifications. There are thus two timescales in competitive neural networks, one corresponding to the fast change of the state and the other to the slow change of the synapses driven by external stimuli. Meyer-Bäse et al. (1996; Meyer-Bäse, Botella, & Rybarska-Rusinek, 2013; Meyer-Bäse, Pilyugin, & Chen, 2003; Meyer-Bäse, Pilyugin, Wismler, & Foo, 2004) studied the stability of competitive neural networks with different timescales without delays. However, time delays exist in real neural networks and influence the information processing of neurons for various reasons; they may, for example, cause periodic oscillations, bifurcation, or chaotic attractors. Since time delays appear in many neural networks, the stability of delayed neural networks has received considerable attention (Wan, Qiao, Peng, & Wang, 2006; Gu, 2009; Wang, Kao, & Yang, 2012). Since 1990, when the concept of drive-response synchronization for coupled chaotic systems was proposed in Pecora and Carroll (1990), much attention has been paid to control and chaos synchronization due to its


potential applications in secure communication, biological systems, and information science (Blythe, Mao, & Liao, 2001; Mao, 2002; Lou & Cui, 2007; Gu, 2009; Yang, Cao, Long, & Rui, 2010; Oliveira, 2011; Wang et al., 2012). Researchers have synchronized two chaotic systems using various strategies, for example, the adaptive control method, the feedback control method, and the active control method. Yang et al. (2010) investigated the exponential synchronization problem for a class of competitive neural networks using Lyapunov functions and the LMI method. Blythe et al. (2001) examined the synchronization of unknown chaotic delayed competitive neural networks with different timescales based on adaptive control and parameter identification. As Haykin (1994) pointed out, in real nervous systems, synaptic transmission is a noisy process brought on by random fluctuations from the release of neurotransmitters and other probabilistic causes (Gu, 2009). Therefore, the effect of noise should be taken into account when studying the synchronization of chaotic systems. Gu (2009) and, more recently, Wang et al. (2012) proposed some stochastic synchronization results. Reaction-diffusion neural networks were introduced by Chua in Liao, Fu, Gao, and Zhao (2000). Strictly speaking, diffusion effects cannot be ignored in neural networks when electrons move in a nonuniform electromagnetic field (Wang et al., 2012). Therefore, it is necessary to consider both the stochastic effect and the diffusion effect on the synchronization of neural networks. Mao (2002) studied the problem of the synchronization of stochastic chaotic neural networks with reaction-diffusion terms. However, to the best of our knowledge, the synchronization of delayed stochastic competitive neural networks with different timescales and reaction-diffusion terms has seldom been considered. Inspired by these discussions, we propose in this letter a feedback controller for the synchronization of coupled delayed stochastic competitive neural networks with different timescales and reaction-diffusion terms, based on the LaSalle-type invariance principle for stochastic differential delay equations. The results we obtain show that synchronization between the coupled delayed competitive neural networks with different timescales can almost surely be achieved even if the networks are subjected to stochastic perturbation. The proposed model of delayed stochastic competitive neural networks with different timescales and reaction-diffusion terms is relevant to emerging research on gene regulatory networks as well as to brain research; synchronization in cortical networks relates to many neurodegenerative diseases.

Throughout this letter, $\mathbb{R}^n$ and $\mathbb{R}^{n\times n}$ denote the $n$-dimensional Euclidean space and the set of all $n \times n$ real matrices, respectively. The superscript $T$ denotes matrix transposition; $\mathrm{tr}(\cdot)$ denotes the trace of the corresponding matrix, and $I$ denotes the identity matrix. $\lambda_{\max}$ and $\lambda_{\min}$ denote the maximum and minimum eigenvalues of a real symmetric matrix. $\|\cdot\|$


stands for the Euclidean norm. $\mathrm{diag}(\cdot)$ stands for a block diagonal matrix. $*$ represents the elements below the main diagonal of a symmetric matrix. $P > 0$ means that $P$ is a real symmetric positive-definite matrix. $(\Omega, \mathcal{F}, \mathcal{P})$ is a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., the filtration contains all $\mathcal{P}$-null sets and is right continuous).

2 Problem Formulation and Preliminaries

Consider the neural network with time-varying delays described by the following differential equations:

$$
\begin{aligned}
\mathrm{STM}:\ \epsilon\,\dot{y}_i(t,x) &= \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_{ik}\frac{\partial y_i(t,x)}{\partial x_k}\right) - c_i y_i(t,x) + \sum_{j=1}^{n} a_{ij} f_j(y_j(t,x)) \\
&\quad + \sum_{j=1}^{n} b_{ij} f_j(y_j(t-\tau(t),x)) + H_i \sum_{l=1}^{p} m_{il}(t,x)\,\alpha_l, \\
\mathrm{LTM}:\ \dot{m}_{il}(t,x) &= -\beta_i m_{il}(t,x) + \alpha_l f_i(y_i(t,x)), \qquad i = 1,2,\dots,n,\ \ l = 1,2,\dots,p,
\end{aligned} \tag{2.1}
$$

where $\epsilon > 0$ is the timescale of the STM state, $n$ denotes the number of neurons, and $p$ denotes the number of constant external stimuli. $C = \mathrm{diag}(c_1, c_2, \dots, c_n)$, and $A = (a_{ij})_{n\times n}$ and $B = (b_{ij})_{n\times n}$ are the connection weight matrices. $x = (x_1, x_2, \dots, x_m)^T \in S \subset \mathbb{R}^m$, where $S = \{x \mid |x_k| \le L_k\}$ and $L_k$ is a constant ($k = 1, 2, \dots, m$). $y(t,x) = (y_1(t,x), y_2(t,x), \dots, y_n(t,x))^T$, where $y_i(t,x)$ is the current activity level of neuron $i$; $f_j(y_j(t,x))$ is the output of neuron $j$; $m_{il}(t,x)$ is the synaptic efficiency; $H_i$ is the strength of the external stimulus; $f(y(t,x)) = (f_1(y_1(t,x)), f_2(y_2(t,x)), \dots, f_n(y_n(t,x)))^T$; and $\tau(t)$ is the time-varying delay. Set $s_i(t,x) = \sum_{l=1}^{p} m_{il}(t,x)\,\alpha_l = m_i^T(t,x)\,\alpha$, where $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_p)^T$ and $m_i(t,x) = (m_{i1}(t,x), m_{i2}(t,x), \dots, m_{ip}(t,x))^T$. The neural network, equation 2.1, can then be rewritten as
$$
\begin{aligned}
\mathrm{STM}:\ \epsilon\,\dot{y}_i(t,x) &= \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_{ik}\frac{\partial y_i(t,x)}{\partial x_k}\right) - c_i y_i(t,x) + \sum_{j=1}^{n} a_{ij} f_j(y_j(t,x)) \\
&\quad + \sum_{j=1}^{n} b_{ij} f_j(y_j(t-\tau(t),x)) + H_i s_i(t,x), \\
\mathrm{LTM}:\ \dot{s}_i(t,x) &= -\beta_i s_i(t,x) + |\alpha|^2 f_i(y_i(t,x)), \qquad i = 1,2,\dots,n.
\end{aligned} \tag{2.2}
$$
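The LTM line of equation 2.2 follows from the LTM equation of 2.1 by multiplying by $\alpha_l$ and summing over $l$ (a brief intermediate step):
$$
\dot{s}_i(t,x) = \sum_{l=1}^{p}\alpha_l\,\dot{m}_{il}(t,x) = -\beta_i\sum_{l=1}^{p}\alpha_l m_{il}(t,x) + f_i(y_i(t,x))\sum_{l=1}^{p}\alpha_l^{2} = -\beta_i s_i(t,x) + |\alpha|^{2} f_i(y_i(t,x)).
$$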


Here $|\alpha|^2 = \alpha_1^2 + \alpha_2^2 + \cdots + \alpha_p^2$ is a constant. Without loss of generality, the input stimulus vector $\alpha$ is assumed to be normalized to unit magnitude, $|\alpha|^2 = 1$. Then equation 2.2 simplifies to
$$
\begin{aligned}
\mathrm{STM}:\ \epsilon\,\dot{y}(t,x) &= \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_k\frac{\partial y(t,x)}{\partial x_k}\right) - C y(t,x) + A f(y(t,x)) \\
&\quad + B f(y(t-\tau(t),x)) + H s(t,x), \\
\mathrm{LTM}:\ \dot{s}(t,x) &= -\beta s(t,x) + f(y(t,x)).
\end{aligned} \tag{2.3}
$$
Consider system 2.3 as the drive system. The response system is
$$
\begin{aligned}
\mathrm{STM}:\ dz(t,x) &= \frac{1}{\epsilon}\Biggl[\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_k\frac{\partial z(t,x)}{\partial x_k}\right) - C z(t,x) + A f(z(t,x)) \\
&\qquad\quad + B f(z(t-\tau(t),x)) + H r(t,x) + u(t)\Biggr] dt + \sigma(t, e(t,x), e(t-\tau(t),x))\,d\omega(t), \\
\mathrm{LTM}:\ \dot{r}(t,x) &= -\beta r(t,x) + f(z(t,x)),
\end{aligned} \tag{2.4}
$$
where $z(t,x) \in \mathbb{R}^n$ is the state vector of the response system and $u(t)$ is the control input to be designed. The noise perturbation $\sigma : [0,\infty)\times\mathbb{R}^n\times\mathbb{R}^n \to \mathbb{R}^{n\times n}$ is the noise intensity matrix, and $\omega(t) = (\omega_1(t), \dots, \omega_n(t))^T$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, \mathcal{P})$ with the natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$ ($\mathcal{F}_t = \sigma\{\omega(s) : 0 \le s \le t\}$). $C([-\tau, 0]; \mathbb{R}^n)$ denotes the family of continuous functions $\phi$ from $[-\tau, 0]$ to $\mathbb{R}^n$ with the uniform norm $\|\phi\| = \sup_{-\tau\le s\le 0}|\phi(s)|$. Denote by $C^2_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$ the family of all $\mathcal{F}_0$-measurable, $C([-\tau, 0]; \mathbb{R}^n)$-valued stochastic variables $\phi = \{\phi(s,x) : -\tau \le s \le 0\}$ such that $\int_{-\tau}^{0} E|\phi(s)|^2\,ds < \infty$, where $E[\cdot]$ stands for the expectation operator with respect to the given probability measure $\mathcal{P}$. Let the errors be $e(t,x) = z(t,x) - y(t,x)$ and $h(t,x) = r(t,x) - s(t,x)$. Then the error system is given as follows:
$$
\begin{aligned}
\mathrm{STM}:\ de(t,x) &= \frac{1}{\epsilon}\Biggl[\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_k\frac{\partial e(t,x)}{\partial x_k}\right) - C e(t,x) + A g(e(t,x)) \\
&\qquad\quad + B g(e(t-\tau(t),x)) + H h(t,x) + u(t)\Biggr] dt + \sigma(t, e(t,x), e(t-\tau(t),x))\,d\omega(t), \\
\mathrm{LTM}:\ \dot{h}(t,x) &= -\beta h(t,x) + g(e(t,x)),
\end{aligned} \tag{2.5}
$$


where $g(e(t,x)) = f(z(t,x)) - f(y(t,x))$ and $u(t) = K_1 e(t,x) + K_2 e(t-\tau(t),x)$, with $K_1$ and $K_2$ the controller gains to be determined.

Definition 1. The two coupled neural networks, equations 2.3 and 2.4, are said to be stochastically synchronized for almost every initial datum if for every $\xi \in C^2_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$,
$$\lim_{t\to\infty} e(t;\xi) = 0 \quad \text{and} \quad \lim_{t\to\infty} h(t;\xi) = 0.$$

Remark 1. The models of this letter are general; the models in Yang et al. (2010) and Wang et al. (2012) are special cases.

Throughout this letter, we make the following assumptions.

Assumption 1: There exists a diagonal matrix $F = \mathrm{diag}(F_1, F_2, \dots, F_n)$ satisfying
$$0 \le \frac{f_j(u) - f_j(v)}{u - v} \le F_j$$
for all $u, v \in \mathbb{R}$, $u \ne v$, $j = 1, 2, \dots, n$ (e.g., $f_j = \tanh$ satisfies this condition with $F_j = 1$).

Assumption 2: There exist positive constants $\tau$, $\gamma$ such that $0 < \tau(t) \le \tau$ and $\dot{\tau}(t) \le \gamma < 1$.

Assumption 3: There exist positive-definite matrices $Q_1$ and $Q_2$ such that $\mathrm{tr}(\sigma^T(t,x,y)\sigma(t,x,y)) \le x^T Q_1 x + y^T Q_2 y$.

To prove the main result, some preliminary lemmas are presented.

Lemma 1 (Mao, 2002). Consider the system $dx(t) = f(t, x(t), x(t-\tau))\,dt + \sigma(t, x(t), x(t-\tau))\,dB(t)$, which has a unique solution $x(t;\xi)$ on $t \ge 0$ for any given initial data $\{x(\theta) : -\tau \le \theta \le 0\} = \xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$; moreover, both $f(x, y, t)$ and $\sigma(x, y, t)$ are locally bounded in $(x, y)$ and uniformly bounded in $t$. If there are a function $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+; \mathbb{R}_+)$, $\beta \in L^1(\mathbb{R}_+; \mathbb{R}_+)$, and $\omega_1, \omega_2 \in C(\mathbb{R}^n; \mathbb{R}_+)$ such that
$$\mathcal{L}V(x, y, t) \le \beta(t) - \omega_1(x) + \omega_2(y), \qquad (x, y, t) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}_+,$$
$\omega_1(x) > \omega_2(x)$ for all $x \ne 0$, and $\lim_{\|x\|\to\infty} \inf_{0 \le t < \infty} V(t, x) = \infty$, then $\lim_{t\to\infty} x(t;\xi) = 0$ almost surely for every $\xi \in C^b_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$.


This lemma is also called the LaSalle-type invariance principle.

Lemma 2 (Friedrichs inequality; Evans, 1998). For $u \in C^1_0(S)$ and $S \subset S_1 \subset \mathbb{R}^n$, $S_1 = \{x \mid |x_k| \le \delta,\ k = 1, 2, \dots, n\}$, we have
$$\int_S u^2(x)\,dx \le \frac{\delta^2}{n}\int_S \sum_{k=1}^{n}\left(\frac{\partial u}{\partial x_k}\right)^{2} dx.$$

Lemma 3 (Meyer-Bäse et al., 2003). For any vectors $x, y \in \mathbb{R}^n$ and positive-definite matrix $G$, the following matrix inequality holds:
$$2x^T y \le x^T G x + y^T G^{-1} y.$$

Lemma 4 (Liao et al., 2000). For any positive-definite matrix $D \in \mathbb{R}^{n\times n}$, scalar $\rho > 0$, and vector function $\omega : [0, \rho] \to \mathbb{R}^n$ such that the integrations concerned are well defined,
$$\left(\int_0^{\rho}\omega(x)\,dx\right)^{T} D \left(\int_0^{\rho}\omega(x)\,dx\right) \le \rho \int_0^{\rho} \omega^T(x)\,D\,\omega(x)\,dx.$$

Throughout this letter, we assume that $f(\cdot)$, $g(\cdot)$, $h(\cdot)$, and $\sigma(\cdot)$ satisfy the usual local Lipschitz condition and linear growth condition. It is known from Gu (2009) that for any given initial datum $e(\theta) = \xi(\theta)$ on $-\tau \le \theta \le 0$ with $\xi \in C^2_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$, the error system 2.5 has a unique global solution on $t \ge 0$, denoted by $e(t;\xi)$. Let $C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+; \mathbb{R}_+)$ be the family of all nonnegative functions $V(t, e(t))$ on $\mathbb{R}^n \times \mathbb{R}_+$ that are continuously twice differentiable in $e(t)$ and once differentiable in $t$. For the stochastic differential equation
$$dx(t) = f(t, x(t))\,dt + \sigma(t, x(t))\,d\omega(t), \tag{2.6}$$
and for each $V \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+; \mathbb{R}_+)$, we define along the trajectories of system 2.6 an operator $\mathcal{L}V$ from $\mathbb{R}_+ \times \mathbb{R}^n$ to $\mathbb{R}$ by
$$\mathcal{L}V(t, x) = V_t(t, x) + V_x(t, x) f(t, x) + \frac{1}{2}\,\mathrm{tr}\bigl[\sigma^T(t, x)\,V_{xx}(t, x)\,\sigma(t, x)\bigr], \tag{2.7}$$
where $V_x(t, x) = \left(\frac{\partial V(t,x)}{\partial x_1}, \frac{\partial V(t,x)}{\partial x_2}, \dots, \frac{\partial V(t,x)}{\partial x_n}\right)$ and $V_{xx}(t, x) = \left(\frac{\partial^2 V(t,x)}{\partial x_i \partial x_j}\right)_{n\times n}$.

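For instance, for a quadratic function $V(x) = x^T P x$ with a constant symmetric matrix $P$ (the form used for $V_1$ and $V_2$ below), $V_t = 0$, $V_x = 2x^T P$, and $V_{xx} = 2P$, so that
$$\mathcal{L}V(t, x) = 2x^T P f(t, x) + \mathrm{tr}\bigl[\sigma^T(t, x)\,P\,\sigma(t, x)\bigr],$$
which is exactly the pattern that appears in equations 3.4 to 3.7 below.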

3 Main Results

In this section, the stochastic synchronization of the two coupled neural networks, 2.3 and 2.4, is investigated under assumptions 1 to 3.

Theorem 1. Under assumptions 1 to 3, the two coupled delayed neural networks, 2.3 and 2.4, can be synchronized for almost all initial data if there exist a scalar $\lambda > 0$, diagonal matrices $P > 0$ and $Q > 0$, and positive-definite matrices $R$, $T$, and $W$ such that the following matrix inequalities hold:
$$P - \lambda I \le 0, \tag{3.1}$$
$$
\Pi = \begin{bmatrix}
\Pi_{11} & 0 & \frac{1}{\epsilon}PA & \frac{1}{\epsilon}PB & \frac{1}{\epsilon}PH \\
* & \Pi_{22} & 0 & 0 & 0 \\
* & * & R & 0 & Q \\
* & * & * & -(1-\gamma)R & 0 \\
* & * & * & * & \Pi_{55}
\end{bmatrix} < 0, \tag{3.2}
$$
where
$$\Pi_{11} = -\frac{2}{\epsilon}PD - \frac{2}{\epsilon}PC + \frac{2}{\epsilon}PK_1 + \lambda Q_1 + T, \qquad \Pi_{22} = \lambda Q_2 + \frac{2}{\epsilon}PK_2 - T, \qquad \Pi_{55} = -2Q\beta + W,$$
$$D = \mathrm{diag}\!\left(\sum_{k=1}^{m}\frac{D_{1k}}{L_k^2},\ \sum_{k=1}^{m}\frac{D_{2k}}{L_k^2},\ \dots,\ \sum_{k=1}^{m}\frac{D_{nk}}{L_k^2}\right).$$

Proof. Consider the following Lyapunov-Krasovskii functional for system 2.5:
$$V(t, e(t,x)) = \sum_{i=1}^{3} V_i(t, e(t,x)), \tag{3.3}$$
where
$$
\begin{aligned}
V_1(t, e(t,x)) &= \int_S e^T(t,x)\,P\,e(t,x)\,dx, \\
V_2(t, e(t,x)) &= \int_S h^T(t,x)\,Q\,h(t,x)\,dx, \\
V_3(t, e(t,x)) &= \int_S \int_{t-\tau(t)}^{t} g^T(e(s,x))\,R\,g(e(s,x))\,ds\,dx.
\end{aligned}
$$


Then it follows from equations 2.6 and 2.7 and assumption 2 that
$$
\begin{aligned}
\mathcal{L}V_1(t, e(t,x)) ={}& 2\int_S e^T(t,x)\,P\,\frac{1}{\epsilon}\Biggl[\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_k\frac{\partial e(t,x)}{\partial x_k}\right) - C e(t,x) + A g(e(t,x)) \\
&\quad + B g(e(t-\tau(t),x)) + H h(t,x) + K_1 e(t,x) + K_2 e(t-\tau(t),x)\Biggr] dx \\
&+ \int_S \mathrm{tr}\bigl(\sigma^T(t,x)\,P\,\sigma(t,x)\bigr)\,dx,
\end{aligned} \tag{3.4}
$$
$$\mathcal{L}V_2(t, e(t,x)) = 2\int_S h^T(t,x)\,Q\,[-\beta h(t,x) + g(e(t,x))]\,dx, \tag{3.5}$$
$$
\begin{aligned}
\mathcal{L}V_3(t, e(t,x)) ={}& \int_S g^T(e(t,x))\,R\,g(e(t,x))\,dx \\
&- (1 - \dot{\tau}(t))\int_S g^T(e(t-\tau(t),x))\,R\,g(e(t-\tau(t),x))\,dx,
\end{aligned} \tag{3.6}
$$
and hence
$$
\begin{aligned}
\mathcal{L}V(t, e(t,x)) \le{}& 2\int_S e^T(t,x)\,P\,\frac{1}{\epsilon}\Biggl[\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_k\frac{\partial e(t,x)}{\partial x_k}\right) - C e(t,x) + A g(e(t,x)) \\
&\quad + B g(e(t-\tau(t),x)) + H h(t,x) + K_1 e(t,x) + K_2 e(t-\tau(t),x)\Biggr] dx \\
&+ \int_S \mathrm{tr}\bigl(\sigma^T(t,x)\,P\,\sigma(t,x)\bigr)\,dx + 2\int_S h^T(t,x)\,Q\,[-\beta h(t,x) + g(e(t,x))]\,dx \\
&+ \int_S g^T(e(t,x))\,R\,g(e(t,x))\,dx - (1-\gamma)\int_S g^T(e(t-\tau(t),x))\,R\,g(e(t-\tau(t),x))\,dx.
\end{aligned} \tag{3.7}
$$


From the boundary condition and the Green formula, we have
$$
\begin{aligned}
\int_S e_i(t,x)\,P_i \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_{ik}\frac{\partial e_i(t,x)}{\partial x_k}\right) dx
&= \int_S \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(e_i(t,x)\,P_i\,D_{ik}\frac{\partial e_i(t,x)}{\partial x_k}\right) dx - \int_S P_i\sum_{k=1}^{m} D_{ik}\left(\frac{\partial e_i(t,x)}{\partial x_k}\right)^{2} dx \\
&= \int_{\partial S} \sum_{k=1}^{m} e_i(t,x)\,P_i\,D_{ik}\frac{\partial e_i(t,x)}{\partial x_k}\,n_k\,ds - \int_S P_i\sum_{k=1}^{m} D_{ik}\left(\frac{\partial e_i(t,x)}{\partial x_k}\right)^{2} dx \\
&= -\int_S P_i \sum_{k=1}^{m} D_{ik}\left(\frac{\partial e_i(t,x)}{\partial x_k}\right)^{2} dx,
\end{aligned} \tag{3.8}
$$
where $n = (n_1, n_2, \dots, n_m)^T$ is the outward unit normal on $\partial S$ and the boundary integral vanishes by the boundary condition. By lemma 2, it can be seen that
$$-\int_S P_i \sum_{k=1}^{m} D_{ik}\left(\frac{\partial e_i(t,x)}{\partial x_k}\right)^{2} dx \le -\int_S P_i \sum_{k=1}^{m} \frac{D_{ik}}{L_k^2}\,e_i^2(t,x)\,dx.$$
Thus,
$$
\begin{aligned}
2\int_S e^T(t,x)\,P \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_k\frac{\partial e(t,x)}{\partial x_k}\right) dx
&= 2\int_S \sum_{i=1}^{n} e_i(t,x)\,P_i \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_{ik}\frac{\partial e_i(t,x)}{\partial x_k}\right) dx \\
&\le -2\int_S \sum_{i=1}^{n} P_i \sum_{k=1}^{m}\frac{D_{ik}}{L_k^2}\,e_i^2(t,x)\,dx = -2\int_S e^T(t,x)\,P D\,e(t,x)\,dx.
\end{aligned} \tag{3.9}
$$


It follows from assumption 3 and condition 3.1 that
$$\int_S \mathrm{tr}\bigl(\sigma^T(t,x)\,P\,\sigma(t,x)\bigr)\,dx \le \lambda\int_S \bigl[e^T(t,x)\,Q_1\,e(t,x) + e^T(t-\tau(t),x)\,Q_2\,e(t-\tau(t),x)\bigr]\,dx. \tag{3.10}$$

It follows from equations 3.7 to 3.10 that
$$
\begin{aligned}
\mathcal{L}V(t, e(t,x)) \le{}& -\frac{2}{\epsilon}\int_S e^T(t,x)\,PD\,e(t,x)\,dx - \frac{2}{\epsilon}\int_S e^T(t,x)\,PC\,e(t,x)\,dx \\
&+ \frac{2}{\epsilon}\int_S e^T(t,x)\,PA\,g(e(t,x))\,dx + \frac{2}{\epsilon}\int_S e^T(t,x)\,PB\,g(e(t-\tau(t),x))\,dx \\
&+ \frac{2}{\epsilon}\int_S e^T(t,x)\,PH\,h(t,x)\,dx + \frac{2}{\epsilon}\int_S e^T(t,x)\,PK_1\,e(t,x)\,dx \\
&+ \frac{2}{\epsilon}\int_S e^T(t,x)\,PK_2\,e(t-\tau(t),x)\,dx + \lambda\int_S e^T(t,x)\,Q_1\,e(t,x)\,dx \\
&+ \lambda\int_S e^T(t-\tau(t),x)\,Q_2\,e(t-\tau(t),x)\,dx + 2\int_S h^T(t,x)\,Q\,[-\beta h(t,x) + g(e(t,x))]\,dx \\
&+ \int_S g^T(e(t,x))\,R\,g(e(t,x))\,dx - (1-\gamma)\int_S g^T(e(t-\tau(t),x))\,R\,g(e(t-\tau(t),x))\,dx \\
&- \int_S \bigl[e^T(t,x)\,T\,e(t,x) + h^T(t,x)\,W\,h(t,x)\bigr]\,dx - \int_S e^T(t-\tau(t),x)\,T\,e(t-\tau(t),x)\,dx \\
&+ \int_S \bigl[e^T(t,x)\,T\,e(t,x) + h^T(t,x)\,W\,h(t,x)\bigr]\,dx + \int_S e^T(t-\tau(t),x)\,T\,e(t-\tau(t),x)\,dx \\
\le{}& \int_S e^T(t,x)\Bigl[-\frac{2}{\epsilon}PD - \frac{2}{\epsilon}PC + \frac{2}{\epsilon}PK_1 + \lambda Q_1 + T\Bigr]e(t,x)\,dx \\
&+ \int_S e^T(t-\tau(t),x)\Bigl[\lambda Q_2 + \frac{2}{\epsilon}PK_2 - T\Bigr]e(t-\tau(t),x)\,dx \\
&+ \int_S g^T(e(t,x))\,R\,g(e(t,x))\,dx - (1-\gamma)\int_S g^T(e(t-\tau(t),x))\,R\,g(e(t-\tau(t),x))\,dx \\
&+ \int_S h^T(t,x)\,[-2Q\beta + W]\,h(t,x)\,dx + \frac{2}{\epsilon}\int_S e^T(t,x)\,PA\,g(e(t,x))\,dx \\
&+ \frac{2}{\epsilon}\int_S e^T(t,x)\,PB\,g(e(t-\tau(t),x))\,dx + \frac{2}{\epsilon}\int_S e^T(t,x)\,PH\,h(t,x)\,dx \\
&+ 2\int_S h^T(t,x)\,Q\,g(e(t,x))\,dx - \int_S e^T(t,x)\,T\,e(t,x)\,dx - \int_S h^T(t,x)\,W\,h(t,x)\,dx \\
&+ \int_S e^T(t-\tau(t),x)\,T\,e(t-\tau(t),x)\,dx \\
={}& \int_S \eta^T\,\Pi\,\eta\,dx - \int_S e^T(t,x)\,T\,e(t,x)\,dx - \int_S h^T(t,x)\,W\,h(t,x)\,dx \\
&+ \int_S e^T(t-\tau(t),x)\,T\,e(t-\tau(t),x)\,dx,
\end{aligned} \tag{3.11}
$$

where $\eta = \bigl[e^T(t,x),\ e^T(t-\tau(t),x),\ g^T(e(t,x)),\ g^T(e(t-\tau(t),x)),\ h^T(t,x)\bigr]^T$. Let $-\lambda_1 = \lambda_{\max}(\Pi)$ denote the largest eigenvalue of $\Pi$, so that $\lambda_1 > 0$ by condition 3.2. From equation 3.11, we have
$$
\begin{aligned}
\mathcal{L}V(t, e(t,x)) \le{}& -\int_S e^T(t,x)\,(T + \lambda_1 I)\,e(t,x)\,dx - \int_S h^T(t,x)\,(W + \lambda_1 I)\,h(t,x)\,dx \\
&+ \int_S e^T(t-\tau(t),x)\,(T - \lambda_1 I)\,e(t-\tau(t),x)\,dx \\
={}& -\omega_1(e(t,x), h(t,x)) + \omega_2(e(t-\tau(t),x)).
\end{aligned}
$$
It can be seen that $\omega_1(e(t,x), h(t,x)) > \omega_2(e(t-\tau(t),x))$ for any $(e(t,x), h(t,x)) \ne 0$. Therefore, by lemma 1, the LaSalle-type invariance principle for stochastic differential delay equations, we conclude that the neural networks 2.3 and 2.4 are synchronized for almost every initial datum. The proof is complete.


Remark 2. Theorem 1 gives a sufficient condition under which the two coupled neural networks, 2.3 and 2.4, can be synchronized for almost all initial data. The condition is expressed by a set of LMIs, which can be solved readily using the Matlab LMI Toolbox. It is worth pointing out that theorem 1 depends not only on the timescale $\epsilon$ and the delay bound $\tau$ but also on the reaction-diffusion coefficients $D_k$. If we ignore the effect of reaction-diffusion, we obtain the following result.

Theorem 2. Under assumptions 1 to 3, the two coupled delayed neural networks, 2.3 and 2.4, can be synchronized for almost all initial data if there exist a scalar $\lambda > 0$, positive-definite diagonal matrices $P$ and $Q$, and positive-definite matrices $R$, $T$, and $W$ such that the following matrix inequalities hold:
$$P - \lambda I \le 0, \tag{3.12}$$
$$
\Pi = \begin{bmatrix}
\Pi_{11} & 0 & \frac{1}{\epsilon}PA & \frac{1}{\epsilon}PB & \frac{1}{\epsilon}PH \\
* & \Pi_{22} & 0 & 0 & 0 \\
* & * & R & 0 & Q \\
* & * & * & -(1-\gamma)R & 0 \\
* & * & * & * & \Pi_{55}
\end{bmatrix} < 0, \tag{3.13}
$$
where
$$\Pi_{11} = -\frac{2}{\epsilon}PC + \frac{2}{\epsilon}PK_1 + \lambda Q_1 + T, \qquad \Pi_{22} = \lambda Q_2 + \frac{2}{\epsilon}PK_2 - T, \qquad \Pi_{55} = -2Q\beta + W.$$

Proof. By applying the Green formula and the boundary condition, we get
$$
\begin{aligned}
\int_S y_i(t,x)\,P_i \sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_{ik}\frac{\partial y_i(t,x)}{\partial x_k}\right) dx
&= \int_{\partial S} \sum_{k=1}^{m} y_i(t,x)\,P_i\,D_{ik}\frac{\partial y_i(t,x)}{\partial x_k}\,n_k\,ds - \int_S P_i \sum_{k=1}^{m} D_{ik}\left(\frac{\partial y_i(t,x)}{\partial x_k}\right)^{2} dx \\
&= -\int_S P_i \sum_{k=1}^{m} D_{ik}\left(\frac{\partial y_i(t,x)}{\partial x_k}\right)^{2} dx \le 0.
\end{aligned}
$$
Then, following a similar line of argument as in the proof of theorem 1, the desired result can be obtained readily.


Then, following a similar line as in the proof of theorem 1, the desired result can be obtained readily. Remark 3. When Dik = 0, systems 2.1 and 2.3 are stochastic competitive neural networks. So theorem 2 is also applicable to the feedback synchronization of stochastic competitive neural networks with different timescales (Wang et al., 2012; Chua, 1999). 4 Numerical Example In this section, we give some numerical simulations to illustrate the results. Consider the following competitive neural networks with different timescales and reaction-diffusion terms:

˙ x) = STM :  y(t,

  m  ∂ ∂y(t, x) Dk − Cy(t, x) + A f (y(t, x)) ∂xk ∂xk k=1

+ B f (y(t − τ (t), x)) + Hs(t, x),

(4.1)

˙ x) = −βs(t, x) + f (y(t, x)). LTM : s(t, Let  = 1,  Dk =1(k = 1, 2),  τ (t) = 0.5 sin t, f (y(x, t))  = tanh(y(x,  t)), 10 1 0.5 −2.0 −0.1 1.6 0 C= , A= , B= , H= , β= 01 −0.5 1.5 −0.2 −2.7 0 −0.7   2 0 . with initial values with initial values y(t, x) = [−0.5, 2.1]T . 0 1.5 Assume the response neural network

$$
\begin{aligned}
\mathrm{STM}:\ dz(t,x) &= \frac{1}{\epsilon}\Biggl[\sum_{k=1}^{m}\frac{\partial}{\partial x_k}\!\left(D_k\frac{\partial z(t,x)}{\partial x_k}\right) - C z(t,x) + A f(z(t,x)) \\
&\qquad\quad + B f(z(t-\tau(t),x)) + H r(t,x) + u(t)\Biggr] dt + \sigma(t, e(t,x), e(t-\tau(t),x))\,d\omega(t), \\
\mathrm{LTM}:\ \dot{r}(t,x) &= -\beta r(t,x) + f(z(t,x)),
\end{aligned} \tag{4.2}
$$
where
$$\mathrm{tr}\bigl(\sigma^T(e(t,x), e(t-\tau(t),x))\,\sigma(e(t,x), e(t-\tau(t),x))\bigr) \le e^T(t,x)\,Q_1\,e(t,x) + e^T(t-\tau(t),x)\,Q_2\,e(t-\tau(t),x), \tag{4.3}$$


Figure 1: The evolution of variable y1(t, x) of the drive system.

Figure 2: The evolution of variable y2(t, x) of the drive system.

Here
$$Q_1 = \begin{bmatrix} 0.01 & 0 \\ 0 & 0.25 \end{bmatrix}, \qquad Q_2 = \begin{bmatrix} 0.05 & 0 \\ 0 & 0.1 \end{bmatrix},$$
the initial values are $z(t,x) = [0.3, -0.5]^T$, and the boundary conditions are set as Dirichlet boundary conditions. The simulation results of the drive system 4.1 are provided in Figures 1 to 4.

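For readers who wish to reproduce a simulation of this kind, the following is a minimal Python sketch (not the code used for the figures). It discretizes 4.1 and 4.2 on an assumed one-dimensional interval with explicit finite differences and an Euler-Maruyama step. The spatial domain, grid, controller gains K1 and K2, and the diagonal noise intensity sigma(t, e, e_tau) = 0.1 e are illustrative assumptions (for this choice tr(sigma^T sigma) = 0.01 e^T e, consistent with the bound in assumption 3); the delay tau(t) = 0.5 sin t is clipped to be nonnegative for simplicity.

import numpy as np

np.random.seed(0)

# model data from section 4 (eps = 1, D_k = 1, f = tanh)
eps = 1.0
C = np.diag([1.0, 1.0])
A = np.array([[1.0, 0.5], [-0.5, 1.5]])
B = np.array([[-2.0, -0.1], [-0.2, -2.7]])
H = np.array([[1.6, 0.0], [0.0, -0.7]])
beta = np.diag([2.0, 1.5])
f = np.tanh

# assumed discretization, controller gains, and noise level (hypothetical)
L, Nx, dt, Tend = 1.0, 41, 1e-3, 5.0
x = np.linspace(-L, L, Nx)
dx = x[1] - x[0]
K1, K2 = -5.0 * np.eye(2), -0.5 * np.eye(2)
sig = 0.1                       # sigma(t, e, e_tau) = sig * e (elementwise)

def lap(u):
    # second difference with Dirichlet boundary (u = 0 at the endpoints)
    out = np.zeros_like(u)
    out[:, 1:-1] = (u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]) / dx**2
    return out

steps = int(Tend / dt)
hist = int(0.5 / dt) + 1        # delay buffer length for tau <= 0.5

# states: 2 neurons x Nx grid points; initial profiles vanish at the boundary
y = np.outer([-0.5, 2.1], np.sin(np.pi * (x + L) / (2 * L)))
z = np.outer([0.3, -0.5], np.sin(np.pi * (x + L) / (2 * L)))
s, r = np.zeros_like(y), np.zeros_like(z)
ybuf, zbuf = [y.copy()] * hist, [z.copy()] * hist

for n in range(steps):
    t = n * dt
    tau = 0.5 * abs(np.sin(t))              # delay, clipped to be nonnegative
    d = min(int(tau / dt), hist - 1)
    yd, zd = ybuf[-1 - d], zbuf[-1 - d]
    e, ed = z - y, zd - yd
    u = K1 @ e + K2 @ ed                    # feedback controller u(t)
    dW = np.sqrt(dt) * np.random.randn(*e.shape)
    fy, fz = f(y), f(z)
    y_new = y + dt / eps * (lap(y) - C @ y + A @ fy + B @ f(yd) + H @ s)
    z_new = (z + dt / eps * (lap(z) - C @ z + A @ fz + B @ f(zd) + H @ r + u)
             + sig * e * dW)                # Euler-Maruyama noise step
    s = s + dt * (-beta @ s + fy)
    r = r + dt * (-beta @ r + fz)
    y, z = y_new, z_new
    y[:, 0] = y[:, -1] = 0.0                # Dirichlet boundary conditions
    z[:, 0] = z[:, -1] = 0.0
    ybuf.append(y.copy()); zbuf.append(z.copy())
    ybuf.pop(0); zbuf.pop(0)

print("final max synchronization error:", np.max(np.abs(z - y)))

Under these (assumed) gains the synchronization error shrinks over time, mirroring the behavior shown in Figures 5 to 8.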

Figure 3: The evolution of variable s1(t, x) of the drive system.

Figure 4: The evolution of variable s2 (t, x) of the drive system.

Using the LMI Toolbox in Matlab, we obtain the following feasible solution to LMIs 3.1 and 3.2: $\lambda = 3.8451$,


Figure 5: Dynamical behavior of synchronization error e1(t, x).

$$
P = \begin{bmatrix} 0.1136 & 0 \\ 0 & 0.0856 \end{bmatrix}, \quad
Q = \begin{bmatrix} 4.4152 & 0 \\ 0 & 9.0041 \end{bmatrix}, \quad
R = \begin{bmatrix} 0.5751 & 0.0122 \\ 0.0122 & 0.6913 \end{bmatrix},
$$
$$
T = \begin{bmatrix} 0.5450 & -0.0052 \\ -0.0052 & -0.0192 \end{bmatrix}, \quad
W = \begin{bmatrix} 0.6973 & -0.0082 \\ -0.0082 & 2.7631 \end{bmatrix}.
$$

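As a complement to the Matlab LMI Toolbox computation above, the block structure of Pi in 3.2 can also be assembled and inspected numerically. The Python/NumPy sketch below is illustrative only: the controller gains K1, K2 and the diffusion matrix D (which depends on the bounds L_k) are not reported in this letter, so the values passed in the example call are hypothetical placeholders rather than the ones used for the figures; the reported P, Q, R, T, W, and lambda, or any other candidate, can be substituted to inspect feasibility.

import numpy as np

def check_lmis(P, Q, R, T, W, lam, K1, K2, A, B, C, H, beta,
               Q1, Q2, D_bar, eps=1.0, gamma=0.5):
    # Assemble Pi from 3.2 and return the largest eigenvalues of Pi and of P - lam*I.
    n = P.shape[0]
    Z = np.zeros((n, n))
    Pi11 = -2/eps*(P @ D_bar) - 2/eps*(P @ C) + 2/eps*(P @ K1) + lam*Q1 + T
    Pi22 = lam*Q2 + 2/eps*(P @ K2) - T
    Pi55 = -2*(Q @ beta) + W
    Pi = np.block([
        [Pi11,             Z,    1/eps*(P @ A),  1/eps*(P @ B),  1/eps*(P @ H)],
        [Z,                Pi22, Z,              Z,              Z            ],
        [1/eps*(P @ A).T,  Z,    R,              Z,              Q            ],
        [1/eps*(P @ B).T,  Z,    Z,              -(1-gamma)*R,   Z            ],
        [1/eps*(P @ H).T,  Z,    Q.T,            Z,              Pi55         ],
    ])
    Pi = 0.5 * (Pi + Pi.T)                   # symmetrize against round-off
    return (np.linalg.eigvalsh(Pi).max(),
            np.linalg.eigvalsh(P - lam*np.eye(n)).max())

# Example call with 2x2 placeholder values (illustrative only); the first returned
# number must be negative and the second nonpositive for 3.2 and 3.1 to hold.
I2 = np.eye(2)
print(check_lmis(P=0.1*I2, Q=4.0*I2, R=0.6*I2, T=0.5*I2, W=0.7*I2, lam=4.0,
                 K1=-5.0*I2, K2=-0.5*I2, A=I2, B=I2, C=I2, H=I2, beta=2.0*I2,
                 Q1=0.01*I2, Q2=0.05*I2, D_bar=I2))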


Figure 6: Dynamical behavior of synchronization error e2 (t, x).

Figure 7: Dynamical behavior of synchronization error h1(t, x).

Figure 8: Dynamical behavior of synchronization error h2(t, x).

Figures 5 to 8 depict the synchronization errors of the state variables between the drive and response systems. According to theorem 1, the response system and the drive system with the controller u(t) are synchronized for almost every initial datum. Moreover, according to theorem 2, systems 2.3 and 2.4 without reaction-diffusion terms are also synchronized.

5 Conclusion

In this letter, we have proposed a feedback controller for the synchronization of stochastic competitive neural networks with different timescales


and reaction-diffusion terms. To guarantee that the response system can be synchronized with the drive system, we use Lyapunov stability theory, stochastic analysis theory, and a LaSalle-type invariance principle for stochastic differential delay equations. The synchronization criteria are easily verified. The numerical simulations illustrate the effectiveness of the obtained results.

Acknowledgments

We thank the editor and the referees for their helpful comments and suggestions.

References

Blythe, S., Mao, X. R., & Liao, X. X. (2001). Stability of stochastic delay neural networks. J. Franklin Inst., 338, 481–495.
Chua, L. O. (1999). Passivity and complexity. IEEE Trans. Circ. Syst. I Fundam. Theory Appl., 46, 71–82.
Evans, L. C. (1998). Partial differential equations. Providence, RI: American Mathematical Society.
Gu, H. (2009). Adaptive synchronization for competitive neural networks with different time scales and stochastic perturbation. Neurocomput., 73, 350–356.
Haykin, S. (1994). Neural networks. Upper Saddle River, NJ: Prentice Hall.
Liao, X. X., Fu, Y. L., Gao, J. D., & Zhao, X. Q. (2000). Stability of Hopfield neural networks with reaction-diffusion terms. Acta Electron. Sin., 28, 78–81.
Lou, X. Y., & Cui, B. T. (2007). Synchronization of competitive neural networks with different time scales. Physica A, 380, 563–576.
Mao, X. R. (2002). A note on the LaSalle-type theorems for stochastic differential delay equations. Journal of Mathematical Analysis and Applications, 268, 125–142.
Meyer-Bäse, A., Botella, G., & Rybarska-Rusinek, L. (2013). Stochastic stability analysis of competitive neural networks with different time-scales. Neurocomputing, 118, 115–118.
Meyer-Bäse, A., Ohl, F., & Scheich, H. (1996). Singular perturbation analysis of competitive neural networks with different time scales. Neural Comput., 8, 1731–1742.
Meyer-Bäse, A., Pilyugin, S. S., & Chen, Y. (2003). Global exponential stability of competitive neural networks with different time scales. IEEE Trans. Neural Netw., 14, 716–719.
Meyer-Bäse, A., Pilyugin, S. S., Wismler, A., & Foo, S. (2004). Local exponential stability of competitive neural networks with different time scales. Eng. Appl. Artif. Intell., 17, 227–232.
Oliveira, J. J. (2011). Global stability of a Cohen-Grossberg neural network with both time-varying and continuous distributed delays. Nonlinear Anal., Real World Appl., 1, 2861–2870.
Pecora, L. M., & Carroll, T. L. (1990). Synchronization in chaotic systems. Phys. Rev. Lett., 64, 821–824.


Wan, A. H., Qiao, H., Peng, J. G., & Wang, M. S. (2006). Delay-independent criteria for exponential stability of generalized Cohen-Grossberg neural networks with discrete delays. Phys. Lett. A, 353, 151–157.
Wang, C. H., Kao, Y. G., & Yang, G. W. (2012). Exponential stability of impulsive stochastic fuzzy reaction-diffusion Cohen-Grossberg neural networks with mixed delays. Neurocomputing, 89, 55–63.
Yang, X. S., Cao, J. D., Long, Y., & Rui, W. G. (2010). Adaptive lag synchronization for competitive neural networks with mixed delays and uncertain hybrid perturbations. IEEE Transactions on Neural Networks, 21, 1656–1667.

Received November 19, 2013; accepted March 23, 2014.


741KB Sizes 1 Downloads 3 Views