Neural Networks 60 (2014) 67–73


Global robust asymptotic stability of variable-time impulsive BAM neural networks

Mustafa Şaylı a, Enes Yılmaz b,c,∗

a Department of Mathematics, Middle East Technical University, 06531 Ankara, Turkey
b Gazi University, Polatlı Faculty of Science and Arts, Department of Mathematics, 06900 Ankara, Turkey
c Institute of Applied Mathematics, Middle East Technical University, 06531 Ankara, Turkey

Article history: Received 19 June 2014; Accepted 31 July 2014; Available online 8 August 2014

Keywords: Global robust asymptotic stability; Impulsive BAM neural networks; Asymptotic stability; Linear matrix inequality

Abstract

In this paper, the global robust asymptotic stability of the equilibrium point for a more general class of bidirectional associative memory (BAM) neural networks with variable time of impulses is addressed. Unlike most existing studies, the present study focuses on the case of non-fixed impulse times. By means of the B-equivalence method, which was introduced in Akhmet (2003, 2005, 2009, 2010), Akhmet and Perestyuk (1990) and Akhmet and Turan (2009), we reduce these networks to a fixed-time impulsive neural network system. Sufficient conditions ensuring the existence, uniqueness and global robust asymptotic stability of the equilibrium point are obtained by employing an appropriate Lyapunov function and a linear matrix inequality (LMI). Finally, we give one illustrative example to show the effectiveness of the theoretical results.

1. Introduction

A new model of neural networks, named bidirectional associative memory (BAM) neural networks, was introduced by Kosko in Kosko (1988, 1989). Basically, this model consists of two layers and has many important applications in pattern recognition, signal processing, associative memory, and optimization problems. All of these applications depend crucially on the dynamical behavior of the network and require that the equilibrium point of the model be globally asymptotically stable. There are many studies on the global stability analysis and other dynamical behaviors, such as periodicity/almost periodicity, of BAM neural networks (Cao, Liang, & Lam, 2004; Chen, Huang, Liu, & Cao, 2006; Jalab & Ibrahim, 2009; Song, Han, & Wei, 2005; Song & Wang, 2007; Wang, 2014; Wang & Zou, 2005; Yang, Liao, Hu, & Wang, 2009; Zhang & Liu, 2011; Zhang, Qiu, & She, 2014; Zhang & Si, 2007) and the references therein. In addition, the instantaneous perturbations and abrupt changes in the voltages at certain instants, which are produced by circuit elements, are examples of impulsive phenomena that can affect the transient behavior of neural networks.

∗ Corresponding author at: Gazi University, Polatlı Faculty of Science and Arts, Department of Mathematics, 06900 Ankara, Turkey. Tel.: +90 5323723885. E-mail addresses: [email protected] (M. Şaylı), [email protected] (E. Yılmaz). http://dx.doi.org/10.1016/j.neunet.2014.07.016

Therefore, impulsive BAM neural networks (see, for example, Gu, Jiang, & Teng, 2009; Ho, Liang, & Lam, 2006; Huang, Luo, & Yang, 2007; Li, 2009; Li, Li, Liao, & Huang, 2011; Li & Yang, 2006; Li & Zeng, 2010; Li, Zhang, Zhang, & Li, 2010; Mohamad & Gopalsamy, 2008, 2009; Yang, Zhang, & Wu, 2007; Zhou & Wan, 2009), which are neither purely continuous nor purely discrete, have been widely considered. Moreover, in practical implementations of neural networks, the stability of a network can often be destroyed by unavoidable uncertainty arising from modeling errors, external disturbances and parameter fluctuations; hence, the robustness of the designed network should also be considered. Several studies with interesting results on the robust stability analysis of BAM neural networks were published in Arık (2014), Cao, Ho, and Huang (2007), Li, Cao, and Wang (2007), Li and Jia (2013), Liu, Yi, Guo, and Wu (2008), Senan, Arık, and Liu (2012), Sheng and Yang (2009) and Zhou and Wan (2010). In the light of the above discussion, it is necessary to consider both impulsive phenomena and robustness of neural networks (see, for example, Li et al., 2007; Zhang, 2011; Zhou & Wan, 2010). In the present paper, differently from most existing results, we introduce a more general class of BAM neural networks in which the impulses occur at non-prescribed moments of time. The point of defining this new class is that the moments of impulses θ_k are arbitrary in R. To the best of our knowledge, these types of impulsive neural networks were considered as models of BAM neural networks only in Liu, Li, and Liao


(2011), where a narrower class of variable-time impulsive BAM neural networks is treated and some inequality techniques are used to solve the problem. Additionally, in Yılmaz (2014) almost periodic solutions of recurrent neural networks with variable moments of impulse are studied by means of the B-equivalence method. We emphasize that there are no results on the global robust asymptotic stability of impulsive neural networks with variable moments of time in the literature. The main purpose of the present study is to find sufficient conditions which guarantee the global robust asymptotic stability of equilibrium points for neural networks with such discontinuities. In a real-world problem, a system with non-fixed moments of impulses is more general than one with fixed-time impulses, which creates more theoretical challenges. To solve the problem, we reduce the examined system to a system with fixed moments of impulses by using the B-equivalence method, which was studied widely in Akalın and Akhmet (2005), Akhmet (2003, 2005, 2009, 2010), Akhmet and Perestyuk (1990) and Akhmet and Turan (2009). Then we use an appropriate Lyapunov function and some LMI techniques to prove the robustness of the system.
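To make the distinction between fixed and variable impulse moments concrete before the formal treatment in Section 2, the following sketch (not from the paper; the scalar equation and all parameter values are purely illustrative) simulates an impulsive equation whose jump surface t = θ_k + τ(x) depends on the state, so that different initial values hit the surface at different times.

```python
import numpy as np

# Illustrative scalar impulsive system (toy example, not the BAM model of the paper):
#   x'(t) = -a*x(t) + c          between impulses,
#   x(t+) = (1 + d)*x(t)         whenever t = theta_k + tau(x(t))  (state-dependent moment).
a, c, d = 1.0, 0.5, -0.3
theta = np.arange(1.0, 10.0, 1.0)          # base impulse moments theta_k
tau = lambda x: 0.2 * np.tanh(x)           # state-dependent deviation tau_k(x)

def simulate(x0, dt=1e-3, T=10.0):
    t, x, k = 0.0, x0, 0
    ts, xs = [t], [x]
    while t < T:
        # detect crossing of the surface Gamma_k : t = theta_k + tau(x)
        if k < len(theta) and t >= theta[k] + tau(x):
            x = (1 + d) * x                # impulsive jump
            k += 1
        x += dt * (-a * x + c)             # Euler step of the continuous part
        t += dt
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

# Two solutions with different initial data meet the surfaces at different times,
# which is exactly the difficulty removed by the B-equivalence reduction.
for x0 in (2.0, -1.0):
    ts, xs = simulate(x0)
    print(f"x0 = {x0:+.1f} -> x(10) ~ {xs[-1]:.4f}")
```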

2. B-equivalence systems

Let Z and R be the sets of integers and real numbers. Consider the following impulsive BAM neural network with variable moments of time:

x_i'(t) = -a_i(t) x_i(t) + \sum_{j=1}^{m} w_{ij}(t) f_j(y_j(t)) + c_i(t),
\Delta x_i |_{t = \theta_k + \tau_k(x,y)} = d_{ik} x_i + I_{ik}(x),
y_j'(t) = -b_j(t) y_j(t) + \sum_{i=1}^{n} h_{ji}(t) g_i(x_i(t)) + d_j(t),
\Delta y_j |_{t = \theta_k + \tau_k(x,y)} = e_{jk} y_j + J_{jk}(y),    (2.1)

where a_i(t) > 0, i = 1, 2, ..., n, and b_j(t) > 0, j = 1, 2, ..., m, k ∈ Z, x ∈ R^n, y ∈ R^m, t ∈ R; {d_{ik}} and {e_{jk}} are bounded sequences such that (1 + d_{ik}) ≠ 0, i = 1, 2, ..., n, and (1 + e_{jk}) ≠ 0, j = 1, 2, ..., m, k ∈ Z; the τ_k(x, y) are positive real-valued continuous functions defined on R^{n+m}, k ∈ Z. Moreover, the sequence θ_k satisfies θ_k < θ_{k+1} and |θ_k| → +∞ as |k| → ∞.

The system (2.1) is composed of two layers, the X-layer and the Y-layer. Here x_i(t), i = 1, 2, ..., n, denotes the membrane potential of the ith of the n neurons in the X-layer and y_j(t), j = 1, 2, ..., m, denotes the membrane potential of the jth of the m neurons in the Y-layer at time t; the continuous, bounded functions f_j(·) and g_i(·) represent the measures of activation to the incoming potentials of unit j from the Y-layer and of unit i from the X-layer, respectively, at time t; w_{ij} corresponds to the synaptic connection weight of unit j on unit i and h_{ji} corresponds to the synaptic connection weight of unit i on unit j; c_i and d_j signify the bounded external bias or input from outside the network to units i and j, respectively; a_i and b_j denote the rates with which the ith and jth units reset their potentials to the resting state in isolation when disconnected from the network and external inputs. It will be assumed that a_i, w_{ij}, c_i, b_j, h_{ji}, d_j, I_{ik}: R^n → R^n and J_{jk}: R^m → R^m, i = 1, 2, ..., n, j = 1, 2, ..., m, k ∈ Z, are continuous functions. In the present study, we do not require smoothness or monotonicity of the activation functions f_j(·) and g_i(·), i = 1, 2, ..., n, j = 1, 2, ..., m.

We can rewrite the system (2.1) in vector state space form as

x'(t) = -A(t) x(t) + W(t) f(y(t)) + C(t),
\Delta x |_{t = \theta_k + \tau_k(x,y)} = d_k x + I_k(x),
y'(t) = -B(t) y(t) + H(t) g(x(t)) + D(t),
\Delta y |_{t = \theta_k + \tau_k(x,y)} = e_k y + J_k(y),    (2.2)

where x(t) = (x_1(t), x_2(t), ..., x_n(t))^T, y(t) = (y_1(t), y_2(t), ..., y_m(t))^T, A(t) = diag(a_1(t), a_2(t), ..., a_n(t)), B(t) = diag(b_1(t), b_2(t), ..., b_m(t)), W(t) = (w_{ij}(t))_{n×m}, H(t) = (h_{ji}(t))_{m×n}, C(t) = (c_1(t), c_2(t), ..., c_n(t))^T, D(t) = (d_1(t), d_2(t), ..., d_m(t))^T, f(y(t)) = (f_1(y_1(t)), f_2(y_2(t)), ..., f_m(y_m(t)))^T, g(x(t)) = (g_1(x_1(t)), g_2(x_2(t)), ..., g_n(x_n(t)))^T, \Delta x = (\Delta x_1(t), \Delta x_2(t), ..., \Delta x_n(t))^T, \Delta y = (\Delta y_1(t), \Delta y_2(t), ..., \Delta y_m(t))^T, d_k = diag(d_{1k}, d_{2k}, ..., d_{nk}), I_k = (I_{1k}, I_{2k}, ..., I_{nk})^T, e_k = diag(e_{1k}, e_{2k}, ..., e_{mk}), J_k = (J_{1k}, J_{2k}, ..., J_{mk})^T.

From now on, we define a new dependent variable z(t) = (x(t), y(t))^T ∈ R^{n+m}, where x(t) ∈ R^n, y(t) ∈ R^m. Then the system (2.2) can be written in the form

z'(t) = -K(t) z(t) + F(t, z(t)),
\Delta z |_{t = \theta_k + \tau_k(z)} = m_k z + S_k(z),    (2.3)

where

K(t) = \begin{pmatrix} A_{n×n}(t) & 0_{n×m} \\ 0_{m×n} & B_{m×m}(t) \end{pmatrix},
F(t, z(t)) = \begin{pmatrix} W_{n×m}(t) f_{m×1}(y(t)) + C_{n×1}(t) \\ H_{m×n}(t) g_{n×1}(x(t)) + D_{m×1}(t) \end{pmatrix},
\Delta z(t) = \begin{pmatrix} \Delta x(t) \\ \Delta y(t) \end{pmatrix},
m_k = \begin{pmatrix} (d_k)_{n×n} & 0_{n×m} \\ 0_{m×n} & (e_k)_{m×m} \end{pmatrix},
S_k(z) = \begin{pmatrix} S_k^1(x) \\ S_k^2(y) \end{pmatrix} = \begin{pmatrix} I_k(x) \\ J_k(y) \end{pmatrix},

where 0_{n×m} and 0_{m×n} are zero matrices. Since the system (2.3) and the system (2.1) are equivalent, we use system (2.3) to perform our analysis.

The following assumptions will be needed throughout the paper:

(A1) there exists a Lipschitz constant ℓ > 0 such that
|τ_k(z_1) − τ_k(z_2)| + |F(t, z_1) − F(t, z_2)| + |S_k(z_1) − S_k(z_2)| ≤ ℓ |z_1 − z_2|
and |τ_k(z_1)| ≤ ℓ for all z_1, z_2 ∈ R^{n+m}, k ∈ Z;

(A2) there exists a positive number θ ∈ R such that θ_{k+1} − θ_k ≥ θ holds for all k ∈ Z, and the surfaces of discontinuity Γ_k: t = θ_k + τ_k(z), k ∈ Z, satisfy the conditions
θ_k + τ_k(z) < θ_{k+1} + τ_{k+1}(z), |θ_k| → +∞ as |k| → ∞,
τ_k((E + m_k) z + S_k(z)) ≤ τ_k(z), z ∈ R^{n+m},
where E is the (n + m) × (n + m) identity matrix;

(A3) ℓ(Nh + M) < 1, where N = sup_{t∈R} ∥K(t)∥ < +∞, ∥z(t)∥ ≤ h, and M = sup_{t∈R} ∥F(t, z(t))∥ < +∞.

Using the local existence theorem (Theorem 5.2.1 in Akhmet (2010)), a solution of (2.3) exists. By virtue of Theorem 5.3.1 in Akhmet (2010) and assumptions (A2)–(A3), every solution z(t), ∥z(t)∥ ≤ h, of (2.3) intersects each surface of discontinuity Γ_k: t = θ_k + τ_k(z), k ∈ Z, at most once. Furthermore, by the proof of Theorem 5.2.4 in Akhmet (2010), the continuation of solutions of the ordinary differential equation

z'(t) = −K(t) z(t) + F(t, z(t))    (2.4)


and the condition |θ_k| → +∞ as |k| → ∞, one can find that every solution z(t) = z(t, t_0, z^0), (t_0, z^0) ∈ R × R^{n+m}, of (2.3) is continuable on R.

For a fixed k ∈ Z, let z^0(t) = z(t, θ_k, z^0) be a solution of the system of ordinary differential equations (2.4). Denote by t = ξ_k the time when this solution intersects the surface of discontinuity Γ_k: t = θ_k + τ_k(z(ξ_k)), k ∈ Z. Suppose that z^1(t) = z(t, ξ_k, (E + m_k) z^0(ξ_k) + S_k(z^0(ξ_k))) is also a solution of (2.4). Next, we define a mapping U_k(z): R^{n+m} → R^{n+m} by U_k(z) = z^1(θ_k) − (E + m_k) z (Fig. 1 illustrates the procedure of the construction of the map U_k) and construct a system of impulsive differential equations with fixed moments of impulses, which has the form

v'(t) = −K(t) v(t) + F(t, v(t)),
\Delta v |_{t = \theta_k} = m_k v + U_k(v),    (2.5)

where v(t) = (x(t), y(t))^T, U_k(v) = (U_k^1(x), U_k^2(y))^T, U_k^1: R^n → R^n, U_k^1 = (U_{1k}^1, U_{2k}^1, ..., U_{nk}^1)^T, U_k^2: R^m → R^m, U_k^2 = (U_{1k}^2, U_{2k}^2, ..., U_{mk}^2)^T, k ∈ Z, are continuous functions.

Fig. 1. The procedure of the construction of the map U_k.

The most important difficulty in examining the system (2.1) is that the moments of impulses are not fixed; that is, solutions with different initial values have, in general, different discontinuity times. To investigate the asymptotic properties of solutions of Eq. (2.1), we introduce the following concepts. Hereinafter, we use the B-topology and the B-equivalence method, which were introduced and developed in Akhmet (2010) for systems of differential equations with variable moments of time. We refer the reader to the book Akhmet (2010) for a more detailed discussion.

We denote by PC(J; R^{n+m}), J ⊂ R, the space of all piecewise continuous functions φ: J → R^{n+m} with points of discontinuity of the first kind θ_k, k ∈ Z, which are continuous from the left. Let z(t) be a solution of Eq. (2.3) on U (U can be an interval, a real half-line, or the real line R).

Definition 2.1. A solution v(t) of (2.5) is said to be in the ε-neighborhood of a solution z(t) if: (i) the measure of the symmetric difference between the domains of existence of these solutions does not exceed ε; (ii) the discontinuity points of v(t) are in ε-neighborhoods of the discontinuity points of z(t); (iii) for all t ∈ U outside of the ε-neighborhoods of the discontinuity points of z(t), the inequality ∥z(t) − v(t)∥ < ε holds.

The topology defined by ε-neighborhoods of piecewise continuous solutions will be called the B-topology. One can easily check that it is a Hausdorff topology. Topologies and metrics for spaces of discontinuous functions were introduced and developed in Akhmet (2005) and Akhmet and Perestyuk (1990).

For any α, β ∈ R we define the oriented interval \widehat{[α, β]} as

\widehat{[α, β]} = [α, β] if α ≤ β, and \widehat{[α, β]} = [β, α] otherwise.    (2.6)

Definition 2.2. Systems (2.3) and (2.5) are said to be B-equivalent if for any solution z(t) of (2.3) defined on an interval U with the discontinuity points ξ_k, k ∈ Z, there exists a solution v(t) of system (2.5) satisfying

z(t) = v(t), t ∈ R \ ⋃_{k∈Z} \widehat{(ξ_k, θ_k]}.    (2.7)

In particular,

z(θ_k) = v(θ_k+), z(ξ_k) = v(ξ_k) if θ_k > ξ_k,    (2.8)
z(θ_k) = v(θ_k), z(ξ_k+) = v(ξ_k) if θ_k < ξ_k.    (2.9)

Conversely, for each solution v(t) of Eq. (2.5) there exists a solution z(t) of system (2.3) which satisfies the conditions (2.7)–(2.9).

The proof of the following lemma is quite similar to that of Theorem 5.8.1 in Akhmet (2010), so we omit it here.

Lemma 2.1. Assume that conditions (A1)–(A3) are satisfied by (2.3). Then there exist mappings U_k(v): R^{n+m} → R^{n+m}, k ∈ Z, such that corresponding to each solution z(t) of Eq. (2.3) there is a solution v(t) of the system (2.5) satisfying z(t) = v(t) if t ∈ R \ ⋃_{k∈Z} \widehat{(ξ_k, θ_k]}. Moreover, the functions U_k satisfy the inequality

∥U_k(z) − U_k(v)∥ ≤ ℓ k(ℓ) ∥z − v∥,

where k(ℓ) = k(ℓ, h) is a bounded function, uniformly with respect to k ∈ Z, for all z, v ∈ R^{n+m} such that ∥z∥ ≤ h and ∥v∥ ≤ h.

3. Equilibrium for variable-time impulsive BAM neural networks

In this section, we investigate the existence of the equilibrium point of the system (2.5). The system (2.5) can be written in matrix–vector form as follows:

x'(t) = −A(t) x(t) + W(t) f(y(t)) + C(t),
\Delta x |_{t = \theta_k} = d_k x + U_k^1(x),
y'(t) = −B(t) y(t) + H(t) g(x(t)) + D(t),
\Delta y |_{t = \theta_k} = e_k y + U_k^2(y).    (3.10)

Suppose that v^* = [x^*, y^*]^T is the equilibrium point of the system (3.10), that is,

A(t) x^*(t) = W(t) f(y^*(t)) + C(t),
\Delta x^* |_{t = \theta_k} = d_k x^* + U_k^1(x^*),
B(t) y^*(t) = H(t) g(x^*(t)) + D(t),
\Delta y^* |_{t = \theta_k} = e_k y^* + U_k^2(y^*).    (3.11)

The assumptions on the activation functions g_i(·), f_j(·), together with the conditions d_k x^* + U_k^1(x^*) = 0 and e_k y^* + U_k^2(y^*) = 0, guarantee the existence of an equilibrium point of the system (3.10) by the well-known Brouwer fixed point theorem. Therefore, one can say that v^* = [x^*, y^*]^T is an equilibrium point of the system (3.10). We now also have to show that v^* = [x^*, y^*]^T is an equilibrium point of the system (2.3). Denote the set of all zeros of the impulse functions of the system (2.3) by

Υ = { z ∈ R^{n+m} | m_k z + S_k(z) = 0, k ∈ Z }.

Here, we need to show that v^* ∈ Υ.
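Before the lemma below, we note as an aside (not part of the paper's argument) that for concrete constant parameter values the equilibrium of the continuous part of (3.10) can be approximated by a simple fixed-point iteration. The sketch assumes constant matrices A, B, W, H, C, D and Lipschitz activations; all numerical values are illustrative, not taken from the paper.

```python
import numpy as np

def bam_equilibrium(A, B, W, H, C, D, f, g, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration for A x* = W f(y*) + C, B y* = H g(x*) + D (sketch).
    A, B are positive diagonal matrices; f, g act componentwise."""
    n, m = A.shape[0], B.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, W @ f(y) + C)
        y_new = np.linalg.solve(B, H @ g(x) + D)
        if max(np.max(np.abs(x_new - x)), np.max(np.abs(y_new - y))) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    raise RuntimeError("iteration did not converge")

# Illustrative 2x2 data (values chosen for demonstration only).
A = np.diag([0.9, 0.8]); B = np.diag([0.65, 0.6])
W = np.array([[0.33, 0.12], [0.18, 0.11]])
H = np.array([[0.23, 0.13], [0.09, 0.16]])
C = np.array([0.1, -0.2]); D = np.array([0.05, 0.0])
f = lambda y: np.tanh(y) / 5.0
g = lambda x: np.tanh(x) / 5.0

x_star, y_star = bam_equilibrium(A, B, W, H, C, D, f, g)
print("x* =", x_star, " y* =", y_star)
```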


Lemma 3.1. If v ∗ = [x∗ , y∗ ]T is an equilibrium point of the system (3.10), then v ∗ is an equilibrium point of the system (2.3).

Proof. To prove the lemma, it is enough to show that S_k(v^*) = U_k(v^*). We know that our mapping U_k(v) = (U_k^1(x), U_k^2(y))^T is given by

U_{ik}^1(x) = (1 + d_{ik}) \int_{\theta_k}^{\xi_k} \Big[ -a_i(s) x_i^0(s) + \sum_{j=1}^{m} w_{ij}(s) f_j(y_j^0(s)) + c_i(s) \Big] ds + S_{ik}^1(x(\xi_k)) + \int_{\xi_k}^{\theta_k} \Big[ -a_i(s) x_i^1(s) + \sum_{j=1}^{m} w_{ij}(s) f_j(y_j^1(s)) + c_i(s) \Big] ds,

U_{jk}^2(y) = (1 + e_{jk}) \int_{\theta_k}^{\xi_k} \Big[ -b_j(s) y_j^0(s) + \sum_{i=1}^{n} h_{ji}(s) g_i(x_i^0(s)) + d_j(s) \Big] ds + S_{jk}^2(y(\xi_k)) + \int_{\xi_k}^{\theta_k} \Big[ -b_j(s) y_j^1(s) + \sum_{i=1}^{n} h_{ji}(s) g_i(x_i^1(s)) + d_j(s) \Big] ds,

where

\begin{pmatrix} x(\xi_k) \\ y(\xi_k) \end{pmatrix} = \begin{pmatrix}
x_1 + \int_{\theta_k}^{\xi_k} \big[ -a_1(s) x_1^0(s) + \sum_{j=1}^{m} w_{1j}(s) f_j(y_j^0(s)) + c_1(s) \big] ds \\
x_2 + \int_{\theta_k}^{\xi_k} \big[ -a_2(s) x_2^0(s) + \sum_{j=1}^{m} w_{2j}(s) f_j(y_j^0(s)) + c_2(s) \big] ds \\
\vdots \\
x_n + \int_{\theta_k}^{\xi_k} \big[ -a_n(s) x_n^0(s) + \sum_{j=1}^{m} w_{nj}(s) f_j(y_j^0(s)) + c_n(s) \big] ds \\
y_1 + \int_{\theta_k}^{\xi_k} \big[ -b_1(s) y_1^0(s) + \sum_{i=1}^{n} h_{1i}(s) g_i(x_i^0(s)) + d_1(s) \big] ds \\
y_2 + \int_{\theta_k}^{\xi_k} \big[ -b_2(s) y_2^0(s) + \sum_{i=1}^{n} h_{2i}(s) g_i(x_i^0(s)) + d_2(s) \big] ds \\
\vdots \\
y_m + \int_{\theta_k}^{\xi_k} \big[ -b_m(s) y_m^0(s) + \sum_{i=1}^{n} h_{mi}(s) g_i(x_i^0(s)) + d_m(s) \big] ds
\end{pmatrix}.

Thus, by using Eq. (3.11), one can easily see that S_k(v^*) = U_k(v^*). This completes the proof. □

The uniqueness of the equilibrium point follows from the global asymptotic stability established in the next section.

4. Global robust asymptotic stability

In this part of the paper, we obtain sufficient conditions for the system (3.10) to be globally robustly asymptotically stable. For notational convenience, we shift the equilibrium point v^* = [x^*, y^*] of the system (3.10) to the origin by the substitution \bar{x}(t) = x(t) − x^*, \bar{y}(t) = y(t) − y^*. The system (3.10) is then transformed into the following form:

\bar{x}'(t) = −A(t) \bar{x}(t) + W(t) \bar{f}(\bar{y}(t)),
\Delta \bar{x} |_{t = \theta_k} = d_k \bar{x} + \bar{U}_k^1(\bar{x}),
\bar{y}'(t) = −B(t) \bar{y}(t) + H(t) \bar{g}(\bar{x}(t)),
\Delta \bar{y} |_{t = \theta_k} = e_k \bar{y} + \bar{U}_k^2(\bar{y}),    (4.12)

where \bar{f}(\bar{y}) = [\bar{f}_1(\bar{y}_1), \bar{f}_2(\bar{y}_2), ..., \bar{f}_m(\bar{y}_m)] and \bar{g}(\bar{x}) = [\bar{g}_1(\bar{x}_1), \bar{g}_2(\bar{x}_2), ..., \bar{g}_n(\bar{x}_n)], with

\bar{f}_j(\bar{y}_j(t)) = f_j(\bar{y}_j(t) + y_j^*) − f_j(y_j^*), \bar{g}_i(\bar{x}_i(t)) = g_i(\bar{x}_i(t) + x_i^*) − g_i(x_i^*),

\bar{U}_k^1(\bar{x}(t)) = [\bar{U}_{1k}^1(\bar{x}_1(t)), \bar{U}_{2k}^1(\bar{x}_2(t)), ..., \bar{U}_{nk}^1(\bar{x}_n(t))], \bar{U}_k^2(\bar{y}(t)) = [\bar{U}_{1k}^2(\bar{y}_1(t)), \bar{U}_{2k}^2(\bar{y}_2(t)), ..., \bar{U}_{mk}^2(\bar{y}_m(t))],

where \bar{U}_{ik}^1(\bar{x}_i(t)) = U_{ik}^1(x_i(t)) − U_{ik}^1(x_i^*(t)) and \bar{U}_{jk}^2(\bar{y}_j(t)) = U_{jk}^2(y_j(t)) − U_{jk}^2(y_j^*(t)), i = 1, 2, ..., n, j = 1, 2, ..., m, k ∈ Z, respectively.

Clearly, \bar{f}_j(0) = 0 and \bar{g}_i(0) = 0 for i = 1, 2, ..., n, j = 1, 2, ..., m. In assumption (A1) we assumed that |F(t, z_1) − F(t, z_2)| ≤ ℓ |z_1 − z_2|, which implies that there exist positive numbers ℓ_{1j}, ℓ_{2i} such that |f_j(κ_1) − f_j(κ_2)| ≤ ℓ_{1j} |κ_1 − κ_2| and |g_i(κ_1) − g_i(κ_2)| ≤ ℓ_{2i} |κ_1 − κ_2| for all κ_1, κ_2 ∈ R, i = 1, 2, ..., n, j = 1, 2, ..., m. Also, we assumed that the activation functions are bounded, that is, for any κ_1, κ_2 ∈ R there exist ℓ_{3j}, ℓ_{4i} such that |f_j(κ_1)| ≤ ℓ_{3j} and |g_i(κ_2)| ≤ ℓ_{4i}. Using these properties of the activation functions with the Euclidean norm ∥·∥, it follows that

∥\bar{f}(\bar{y})∥ ≤ ∥Π^{σ_1}∥ ∥\bar{y}∥ and ∥\bar{g}(\bar{x})∥ ≤ ∥Π^{σ_2}∥ ∥\bar{x}∥ for all t ≠ θ_k, t ∈ R,

where Π^{σ_1} = diag(σ_1^1, σ_2^1, ..., σ_m^1) and Π^{σ_2} = diag(σ_1^2, σ_2^2, ..., σ_n^2).

In order to prove the global asymptotic stability of the equilibrium point of the system (3.10), it is enough to prove the global asymptotic stability of the origin of the system (4.12). As we stated in Section 2, there are some deviations in the values of the parameters of the system (3.10). Since these deviations are bounded in practical experiments, the quantities a_i(t), b_j(t), w_{ij}(t), h_{ji}(t) can be intervalized as follows. For i = 1, 2, ..., n, j = 1, 2, ..., m,

A_I := { A = diag(a_i(t)) : 0 < \underline{A} ≤ A ≤ \bar{A}, i.e., \underline{a}_i ≤ a_i(t) ≤ \bar{a}_i },
B_I := { B = diag(b_j(t)) : 0 < \underline{B} ≤ B ≤ \bar{B}, i.e., \underline{b}_j ≤ b_j(t) ≤ \bar{b}_j },
W_I := { W = (w_{ij}(t))_{n×m} : \underline{W} ≤ W ≤ \bar{W}, i.e., \underline{w}_{ij} ≤ w_{ij}(t) ≤ \bar{w}_{ij} },
H_I := { H = (h_{ji}(t))_{m×n} : \underline{H} ≤ H ≤ \bar{H}, i.e., \underline{h}_{ji} ≤ h_{ji}(t) ≤ \bar{h}_{ji} }.

Now, we will need the following definition and lemmas to investigate our main results.

Definition 4.1. The impulsive BAM neural network (3.10) is called globally robustly asymptotically stable if there is a unique equilibrium point (x^*, y^*) = (x_1^*, x_2^*, ..., x_n^*, y_1^*, y_2^*, ..., y_m^*) and it is globally asymptotically stable for all A ∈ A_I, B ∈ B_I, W ∈ W_I, H ∈ H_I.

Lemma 4.1 (Zhou & Wan, 2010). Given any real matrices A, B, C of appropriate dimensions and a scalar ε > 0 such that 0 < C = C^T, the following inequality holds: A^T B + B^T A ≤ ε A^T C A + ε^{-1} B^T C^{-1} B, where the superscript T means the transpose of a matrix.

Lemma 4.2 (Zhou & Wan, 2010, Schur Complement). The linear matrix inequality

\begin{pmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{pmatrix} > 0, with Q(x) = Q^T(x), R(x) = R^T(x),

is equivalent to R(x) > 0, Q(x) − S(x) R^{-1}(x) S^T(x) > 0.
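The Schur complement equivalence in Lemma 4.2 is easy to sanity-check numerically; the following sketch (illustrative, not from the paper) verifies both characterizations of positive definiteness on a randomly generated symmetric block matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_pos_def(M):
    # eigenvalue test for a symmetric matrix
    return np.all(np.linalg.eigvalsh(M) > 0)

n = 4
# Build a random symmetric positive definite block matrix [[Q, S], [S.T, R]].
L = rng.standard_normal((2 * n, 2 * n))
P = L @ L.T + 2 * np.eye(2 * n)
Q, S, R = P[:n, :n], P[:n, n:], P[n:, n:]

lhs = is_pos_def(P)                                                   # block matrix > 0
rhs = is_pos_def(R) and is_pos_def(Q - S @ np.linalg.solve(R, S.T))   # Schur complement test
print(lhs, rhs)   # both True, as Lemma 4.2 asserts
```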


From now on, we need the following assumptions:

(A4) There exist diagonal matrices Ψ_α = diag(α_1, α_2, ..., α_n) > 0 and Ψ_β = diag(β_1, β_2, ..., β_m) > 0 such that the following linear matrix inequalities hold:

\begin{pmatrix} 2 Ψ_α \underline{A} − Π^{σ_2} (H^*)^T (H^*) Π^{σ_2} & Ψ_α \\ Ψ_α & I \end{pmatrix} > 0

and

\begin{pmatrix} 2 Ψ_β \underline{B} − Π^{σ_1} (W^*)^T (W^*) Π^{σ_1} & Ψ_β \\ Ψ_β & I \end{pmatrix} > 0.

(A5) The impulsive operators satisfy the conditions

d_{ik} x_i + \bar{U}_{ik}^1(x_i) = −γ_{ik} x_i(θ_k), 0 < γ_{ik} < 2,
e_{jk} y_j + \bar{U}_{jk}^2(y_j) = −δ_{jk} y_j(θ_k), 0 < δ_{jk} < 2.

For the sake of convenience, we define W^* = max{ |\underline{W}|, |\bar{W}| } and H^* = max{ |\underline{H}|, |\bar{H}| }.

Lemma 4.3. Assume that conditions (A4) and (A5) are valid. Then the system (4.12) is globally robustly asymptotically stable.

Proof. We choose the following Lyapunov function:

V(t, \bar{x}(t), \bar{y}(t)) = \bar{x}^T(t) Ψ_α \bar{x}(t) + \bar{y}^T(t) Ψ_β \bar{y}(t).

Evaluating the time derivative of V along the trajectories of the system (4.12), we have, for t ≠ θ_k, k ∈ Z,

V^+(t) ≤ −\bar{x}^T(t)(A(t) Ψ_α + Ψ_α A(t)) \bar{x}(t) − \bar{y}^T(t)(B(t) Ψ_β + Ψ_β B(t)) \bar{y}(t) + \bar{x}^T(t) Ψ_α W(t) \bar{f}(\bar{y}(t)) + \bar{f}^T(\bar{y}(t)) W^T(t) Ψ_α \bar{x}(t) + \bar{g}^T(\bar{x}(t)) H^T(t) Ψ_β \bar{y}(t) + \bar{y}^T(t) Ψ_β H(t) \bar{g}(\bar{x}(t)).

Using Lemma 4.1 with ε = 1 and C = I, we get

V^+(t) ≤ −\bar{x}^T(t)(A(t) Ψ_α + Ψ_α A(t)) \bar{x}(t) − \bar{y}^T(t)(B(t) Ψ_β + Ψ_β B(t)) \bar{y}(t) + \bar{f}^T(\bar{y}(t)) W^T(t) W(t) \bar{f}(\bar{y}(t)) + \bar{x}^T(t) Ψ_α Ψ_α \bar{x}(t) + \bar{g}^T(\bar{x}(t)) H^T(t) H(t) \bar{g}(\bar{x}(t)) + \bar{y}^T(t) Ψ_β Ψ_β \bar{y}(t)
≤ \bar{x}^T(t)(−A(t) Ψ_α − Ψ_α A(t) + Π^{σ_2} H^T(t) H(t) Π^{σ_2} + Ψ_α Ψ_α) \bar{x}(t) + \bar{y}^T(t)(−B(t) Ψ_β − Ψ_β B(t) + Π^{σ_1} W^T(t) W(t) Π^{σ_1} + Ψ_β Ψ_β) \bar{y}(t)
≤ \bar{x}^T(t)(−2 Ψ_α \underline{A} + Ψ_α Ψ_α + Π^{σ_2} (H^*)^T H^* Π^{σ_2}) \bar{x}(t) + \bar{y}^T(t)(−2 Ψ_β \underline{B} + Ψ_β Ψ_β + Π^{σ_1} (W^*)^T W^* Π^{σ_1}) \bar{y}(t).

Thus, using Lemma 4.2 and (A4), we conclude that V^+(t) < 0 for t ≠ θ_k, k ∈ Z.

When t = θ_k, k ∈ Z,

V(θ_k+) − V(θ_k) = \bar{x}^T(θ_k+) Ψ_α \bar{x}(θ_k+) − \bar{x}^T(θ_k) Ψ_α \bar{x}(θ_k) + \bar{y}^T(θ_k+) Ψ_β \bar{y}(θ_k+) − \bar{y}^T(θ_k) Ψ_β \bar{y}(θ_k)
= γ_k(γ_k − 2I) \bar{x}^T(θ_k) Ψ_α \bar{x}(θ_k) + δ_k(δ_k − 2I) \bar{y}^T(θ_k) Ψ_β \bar{y}(θ_k),

where γ_k = diag(γ_{1k}, ..., γ_{nk}) and δ_k = diag(δ_{1k}, ..., δ_{mk}). Using (A5), we get V(θ_k+) − V(θ_k) < 0. Hence V decreases along the trajectories for \bar{x}(t) ≠ 0, \bar{y}(t) ≠ 0, and clearly V(·) is radially unbounded, so the origin of (4.12) is globally robustly asymptotically stable by the standard Lyapunov theorem. □

Using the B-equivalence method, which was explained in detail in Akhmet (2010), the solution v(t) of (2.5) coincides with the solution z(t) of Eq. (2.3) between the discontinuity points, t ∈ (θ_k, θ_{k+1}). The continuous dependence, in the B-topology, of solutions of Eq. (2.3) on the initial data and the right-hand side implies that z(t), equivalently the solution of the system (2.1), is also globally robustly asymptotically stable. Therefore, we have the following result.

Theorem 4.1. Assume that conditions (A1)–(A5) are valid. Then (2.1) has a unique globally robustly asymptotically stable equilibrium point.

Proof. To prove this assertion, we need to show that ∥z(t) − v^*∥ → 0 as t → ∞. For t ∉ \widehat{[θ_i, ξ_i)}, it is clear that z(t) = v(t). However, for t ∈ \widehat{[θ_i, ξ_i)}, we have

∥z(t) − v^*∥ ≤ ∥z(t) − v(t)∥ + ∥v(t) − v^*∥.    (4.13)

By Lemma 4.3 we know that ∥v(t) − v^*∥ → 0 as t → ∞. Thus, it is enough to show that ∥z(t) − v(t)∥ → 0 as t → ∞. We obtain

v(t) − z(t) ≤ v(θ_k+) + \int_{θ_k}^{ξ_k} [−K(s) v(s) + F(s, v(s))] ds − z(θ_k) − \int_{θ_k}^{ξ_k} [−K(s) z(s) + F(s, z(s))] ds
= v(θ_k+) − z(θ_k) + \int_{θ_k}^{ξ_k} [−K(s)(v(s) − z(s)) + F(s, v(s)) − F(s, z(s))] ds
= m_k v(θ_k) + U_k(v(θ_k)) − m_k v^* − U_k(v^*) + \int_{θ_k}^{ξ_k} [−K(s)(v(s) − z(s)) + F(s, v(s)) − F(s, z(s))] ds.

Taking the norm of both sides and using condition (A1), N = sup_{t∈R} ∥K(t)∥ < +∞ and m_k v^* + U_k(v^*) = 0, we obtain

∥z(t) − v(t)∥ ≤ ∥U_k(v(θ_k)) − U_k(v^*)∥ + ∥m_k(v(θ_k)) − m_k(v^*)∥ + (N + ℓ) \int_{θ_k}^{ξ_k} ∥z(s) − v(s)∥ ds.

It follows from Lemma 2.1 and the boundedness of m_k that

∥z(t) − v(t)∥ ≤ (ℓ \tilde{k}(ℓ) + ∥m_k∥) ∥v(θ_k) − v^*∥ + (N + ℓ) \int_{θ_k}^{ξ_k} ∥z(s) − v(s)∥ ds.

By the Gronwall–Bellman lemma (Akhmet, 2010), we find that

∥z(t) − v(t)∥ ≤ (ℓ \tilde{k}(ℓ) + ∥m_k∥) e^{(N+ℓ)|ξ_k − θ_k|} ∥v(θ_k) − v^*∥ ≤ (ℓ \tilde{k}(ℓ) + ∥m_k∥) e^{(N+ℓ)ℓ} ∥v(θ_k) − v^*∥.

Next, using θ_k < θ_{k+1}, |θ_k| → +∞ as |k| → ∞, together with the continuation property of solutions, one can obtain

∥z(t) − v(t)∥ ≤ (ℓ \tilde{k}(ℓ) + ∥m_k∥) e^{(N+ℓ)ℓ} ∥v(t) − v^*∥.

Thus, Lemma 4.3 implies ∥z(t) − v^*∥ → 0 as t → ∞. That is, the equilibrium of (2.1) is globally robustly asymptotically stable. □
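In practice, condition (A4) can be checked numerically once the interval bounds are fixed: pick Ψ_α, Ψ_β, form W^*, H^* and Π^{σ_1}, Π^{σ_2}, and test positive definiteness of the two block matrices. The sketch below is illustrative only (function names and numerical values are placeholders, not the paper's data) and uses a plain eigenvalue test.

```python
import numpy as np

def a4_holds(A_low, B_low, W_star, H_star, Pi1, Pi2, Psi_a=None, Psi_b=None):
    """Check the two LMIs of assumption (A4) for given interval bounds (sketch)."""
    n, m = A_low.shape[0], B_low.shape[0]
    Psi_a = np.eye(n) if Psi_a is None else Psi_a
    Psi_b = np.eye(m) if Psi_b is None else Psi_b
    M1 = np.block([[2 * Psi_a @ A_low - Pi2 @ H_star.T @ H_star @ Pi2, Psi_a],
                   [Psi_a, np.eye(n)]])
    M2 = np.block([[2 * Psi_b @ B_low - Pi1 @ W_star.T @ W_star @ Pi1, Psi_b],
                   [Psi_b, np.eye(m)]])
    pd = lambda M: np.all(np.linalg.eigvalsh((M + M.T) / 2) > 0)  # positive definiteness
    return pd(M1) and pd(M2)

# Placeholder 2x2 data in the spirit of Section 5 (values illustrative only).
A_low = np.diag([0.8, 0.6]); B_low = np.diag([0.63, 0.57])
W_star = np.array([[0.38, 0.6], [0.16, 0.13]])
H_star = np.array([[0.25, 0.16], [0.12, 0.24]])
Pi1 = Pi2 = 0.2 * np.eye(2)
print(a4_holds(A_low, B_low, W_star, H_star, Pi1, Pi2))
```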


5. An illustrative example

Because of the B-equivalence method, the theoretical results guarantee that simulating the reduced system with fixed moments of impulses fully reflects the behavior of the original system. Now, let us consider the following neural network system with fixed moments of impulses. In what follows, let θ_k = 0.9k + (−1)^k/24, k ∈ Z, be the sequence of impulse moments, and let the network parameters be defined as

\underline{A} = \begin{pmatrix} 0.8 & 0 \\ 0 & 0.6 \end{pmatrix}, \bar{A} = \begin{pmatrix} 1 & 0 \\ 0 & 1.2 \end{pmatrix}, \underline{W} = \begin{pmatrix} 0.25 & −0.6 \\ −0.16 & 0.09 \end{pmatrix}, \bar{W} = \begin{pmatrix} 0.38 & 0.14 \\ 0.15 & 0.13 \end{pmatrix},
\underline{B} = \begin{pmatrix} 0.63 & 0 \\ 0 & 0.57 \end{pmatrix}, \bar{B} = \begin{pmatrix} 1.3 & 0 \\ 0 & 1 \end{pmatrix}, \underline{H} = \begin{pmatrix} −0.13 & −0.16 \\ 0.07 & 0.08 \end{pmatrix}, \bar{H} = \begin{pmatrix} 0.25 & 0.08 \\ 0.12 & 0.24 \end{pmatrix},

with the activation functions f_j(y_j(t)) = (1/5) tanh(y_j(t)), g_i(x_i(t)) = (1/5) tanh(x_i(t)), and

γ_{ik} = 1/100 + (2/(3i)) cos^2(1 + k), δ_{jk} = 1/100 + (3/(4j)) sin^2((3/5)k), i, j = 1, 2.

So we can take Π^{σ_1} = Π^{σ_2} = \begin{pmatrix} 0.2 & 0 \\ 0 & 0.2 \end{pmatrix} and Ψ_α = Ψ_β = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

Clearly, 0 < γ_{ik}, δ_{jk} < 2 (i, j = 1, 2), and the LMIs

−2\underline{A} + I + Π^{σ_2} (H^*)^T (H^*) Π^{σ_2} = \begin{pmatrix} −0.5969 & 0.0028 \\ 0.0028 & −0.1967 \end{pmatrix} < 0,
−2\underline{B} + I + Π^{σ_1} (W^*)^T (W^*) Π^{σ_1} = \begin{pmatrix} −0.2532 & 0.0030 \\ 0.0030 & −0.1385 \end{pmatrix} < 0

hold. Thus, the conditions of Theorem 4.1 are satisfied, so the origin of the specified network is globally robustly asymptotically stable.

Fig. 2. State trajectory of the system (5.14).

To perform a numerical simulation, let us choose A, B, W, H from the indicated intervals above and obtain the following system:

dx(t)/dt = −\begin{pmatrix} 0.9 & 0 \\ 0 & 0.8 \end{pmatrix} \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} + \begin{pmatrix} 0.33 & 0.12 \\ 0.18 & 0.11 \end{pmatrix} \begin{pmatrix} (1/5)\tanh(y_1(t)) \\ (1/5)\tanh(y_2(t)) \end{pmatrix}, t ≠ θ_k,

\Delta x(t) = \begin{pmatrix} d_{1k} x_1(θ_k^-) + \bar{U}_{1k}^1(x_1(θ_k^-)) \\ d_{2k} x_2(θ_k^-) + \bar{U}_{2k}^1(x_2(θ_k^-)) \end{pmatrix} = \begin{pmatrix} −(1/100 + (2/3)\cos^2(1 + k)) x_1(θ_k^-) \\ −(1/100 + (1/3)\cos^2(1 + k)) x_2(θ_k^-) \end{pmatrix}, t = θ_k,

dy(t)/dt = −\begin{pmatrix} 0.65 & 0 \\ 0 & 0.6 \end{pmatrix} \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} + \begin{pmatrix} 0.23 & 0.13 \\ 0.9 & 0.16 \end{pmatrix} \begin{pmatrix} (1/5)\tanh(x_1(t)) \\ (1/5)\tanh(x_2(t)) \end{pmatrix}, t ≠ θ_k,

\Delta y(t) = \begin{pmatrix} e_{1k} y_1(θ_k^-) + \bar{U}_{1k}^2(y_1(θ_k^-)) \\ e_{2k} y_2(θ_k^-) + \bar{U}_{2k}^2(y_2(θ_k^-)) \end{pmatrix} = \begin{pmatrix} −(1/100 + (3/4)\sin^2((3/5)k)) y_1(θ_k^-) \\ −(1/100 + (3/8)\sin^2((3/5)k)) y_2(θ_k^-) \end{pmatrix}, t = θ_k.    (5.14)

Fig. 3. State trajectories x_1(t), x_2(t), y_1(t), y_2(t) of the system (5.14).
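A minimal simulation sketch for (5.14) is given below before verifying the LMIs; it is not the authors' code, and the step size, horizon and initial values are illustrative assumptions. It integrates the continuous part with Euler steps and applies the impulsive jumps at the fixed moments θ_k = 0.9k + (−1)^k/24.

```python
import numpy as np

A = np.diag([0.9, 0.8]);  B = np.diag([0.65, 0.6])
W = np.array([[0.33, 0.12], [0.18, 0.11]])
H = np.array([[0.23, 0.13], [0.9, 0.16]])
f = lambda v: np.tanh(v) / 5.0

def gamma(k):  # jump factors for the x-layer at impulse index k
    return np.array([0.01 + (2/3) * np.cos(1 + k)**2,
                     0.01 + (1/3) * np.cos(1 + k)**2])

def delta(k):  # jump factors for the y-layer at impulse index k
    return np.array([0.01 + (3/4) * np.sin((3/5) * k)**2,
                     0.01 + (3/8) * np.sin((3/5) * k)**2])

theta = [0.9 * k + (-1)**k / 24 for k in range(1, 40)]  # theta[j] is impulse index j+1

def simulate(x0, y0, T=30.0, dt=1e-3):
    x, y, t, j = np.array(x0, float), np.array(y0, float), 0.0, 0
    traj = [(t, *x, *y)]
    while t < T:
        if j < len(theta) and t >= theta[j]:
            x = x - gamma(j + 1) * x      # impulsive jump at theta_k
            y = y - delta(j + 1) * y
            j += 1
        x = x + dt * (-A @ x + W @ f(y))  # Euler step of the continuous part
        y = y + dt * (-B @ y + H @ f(x))
        t += dt
        traj.append((t, *x, *y))
    return np.array(traj)

traj = simulate([1.5, -1.0], [-0.5, 1.0])
print("final state:", traj[-1][1:])       # states decay toward the origin (cf. Figs. 2-3)
```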

By simple calculation, we can see that the LMI conditions of Theorem 4.1 are satisfied; that is,

−2A + I + Π^{σ_2} (H^*)^T (H^*) Π^{σ_2} = \begin{pmatrix} −0.7943 & 0.0024 \\ 0.0024 & −0.5989 \end{pmatrix} < 0,
−2B + I + Π^{σ_1} (W^*)^T (W^*) Π^{σ_1} = \begin{pmatrix} −0.2655 & 0.0070 \\ 0.0070 & −0.1983 \end{pmatrix} < 0.

Thus, the origin of the system (5.14) is globally asymptotically stable. This can also be seen from the simulations in Figs. 2 and 3.

6. Conclusion

In this paper, the global robust asymptotic stability of BAM neural networks with variable-time impulses is considered for the first time in the literature. Systems with non-fixed impulse times are difficult to analyze, even though this situation occurs commonly in both biological and artificial neural networks. To overcome this difficulty, we first reduced the system to a fixed-time impulsive system by using the B-equivalence method, and then we used an appropriate Lyapunov function and LMIs. In the end, we obtained easily verifiable sufficient conditions for the robustness of the considered system. Finally, we gave one example to show the effectiveness and applicability of our results.


References

Akalın, E., & Akhmet, M. U. (2005). The principles of B-smooth discontinuous flows. Computers and Mathematics with Applications, 49, 981–995.
Akhmet, M. U. (2003). On the general problem of stability for impulsive differential equations. Journal of Mathematical Analysis and Applications, 288, 182–196.
Akhmet, M. U. (2005). Perturbations and Hopf bifurcation of the planar discontinuous dynamical system. Nonlinear Analysis, 60, 163–178.
Akhmet, M. U. (2009). Li–Yorke chaos in the system with impacts. Journal of Mathematical Analysis and Applications, 351, 804–810.
Akhmet, M. (2010). Principles of discontinuous dynamical systems. New York: Springer.
Akhmet, M. U., & Perestyuk, N. A. (1990). The comparison method for differential equations with impulse action. Differential Equations, 26(9), 1079–1086.
Akhmet, M. U., & Turan, M. (2009). Differential equations on variable time scales. Nonlinear Analysis, 70, 1175–1192.
Arık, S. (2014). An improved robust stability result for uncertain neural networks with multiple time delays. Neural Networks, 54, 1–10.
Cao, J., Ho, D. W. C., & Huang, X. (2007). LMI-based criteria for global robust stability of bidirectional associative memory neural networks with time delay. Nonlinear Analysis, Series A, 66, 215–223.
Cao, J., Liang, J., & Lam, J. (2004). Exponential stability of high-order bidirectional associative memory neural networks with time delays. Physica D: Nonlinear Phenomena, 199, 425–436.
Chen, A., Huang, L., Liu, Z., & Cao, J. (2006). Periodic bidirectional associative memory neural networks with distributed delays. Journal of Mathematical Analysis and Applications, 317, 80–102.
Gu, H., Jiang, H., & Teng, Z. (2009). BAM-type impulsive neural networks with time-varying delays. Nonlinear Analysis: Real World Applications, 10, 3059–3072.
Ho, D. W. C., Liang, J., & Lam, J. (2006). Global exponential stability of impulsive high-order BAM neural networks with time-varying delays. Neural Networks, 19, 1581–1590.
Huang, Z., Luo, X., & Yang, Q. (2007). Global asymptotic stability analysis of bidirectional associative memory neural networks with distributed delay and impulse. Chaos, Solitons and Fractals, 34, 878–885.
Jalab, H. A., & Ibrahim, R. W. (2009). Almost-periodic solution for BAM neural networks. Surveys in Mathematics and its Applications, 4, 53–63.
Kosko, B. (1988). Bidirectional associative memories. IEEE Transactions on Systems, Man and Cybernetics, 18, 49–60.
Kosko, B. (1989). Adaptive bidirectional associative memories. Applied Optimization, 26, 4947–4960.
Li, X. (2009). Exponential stability of Cohen–Grossberg-type BAM neural networks with time-varying delay via impulsive control. Neurocomputing, 73, 525–530.
Li, P., Cao, J., & Wang, Z. (2007). Robust impulsive synchronization of coupled delayed neural networks with uncertainties. Physica A, 373, 261–272.
Li, X., & Jia, J. (2013). Global robust stability analysis for BAM neural networks with time-varying delays. Neurocomputing, 120, 499–503.
Li, C., Li, C., Liao, X., & Huang, T. (2011). Impulsive effects on stability of high-order BAM neural networks with time delays. Neurocomputing, 74, 1541–1550.
Li, Y., & Yang, C. (2006). Global exponential stability analysis on impulsive BAM neural networks with distributed delays. Journal of Mathematical Analysis and Applications, 324, 1125–1139.
Li, K., & Zeng, H. (2010). Stability in impulsive Cohen–Grossberg-type BAM neural networks with time-varying delays: a general analysis. Mathematics and Computers in Simulation, 80, 2329–2349.
Li, K., Zhang, L., Zhang, X., & Li, Z. (2010). Stability in impulsive Cohen–Grossberg-type BAM neural networks with distributed delays. Applied Mathematics and Computation, 215, 3970–3984.
Liu, C., Li, C., & Liao, X. (2011). Variable-time impulses in BAM neural networks with delay. Neurocomputing, 74, 3286–3295.
Liu, P., Yi, F., Guo, Q., & Wu, W. (2008). Analysis on global exponential robust stability of reaction–diffusion neural networks with S-type distributed delays. Physica D: Nonlinear Phenomena, 237, 475–485.
Mohamad, S., & Gopalsamy, K. (2008). A unified treatment for stability preservation in computer simulations of impulsive BAM networks. Computers and Mathematics with Applications, 55, 2043–2063.
Mohamad, S., & Gopalsamy, K. (2009). Exponential stability preservation in semi-discretisations of BAM networks with nonlinear impulses. Communications in Nonlinear Science and Numerical Simulation, 14, 27–50.
Senan, S., Arık, S., & Liu, D. (2012). New robust stability results for bidirectional associative memory neural networks with multiple time delays. Applied Mathematics and Computation, 218, 11472–11482.
Sheng, L., & Yang, H. (2009). Novel global robust exponential stability criterion for uncertain BAM neural networks with time-varying delays. Chaos, Solitons & Fractals, 40, 2102–2113.
Song, Y., Han, M., & Wei, J. (2005). Stability and Hopf bifurcation analysis on a simplified BAM neural network with delays. Physica D: Nonlinear Phenomena, 200, 185–204.
Song, Q., & Wang, Z. (2007). An analysis on existence and global exponential stability of periodic solutions for BAM neural networks with time-varying delays. Nonlinear Analysis: Real World Applications, 8, 1224–1234.
Wang, C. (2014). Almost periodic solutions of impulsive BAM neural networks with variable delays on time scales. Communications in Nonlinear Science and Numerical Simulation, 19, 2828–2842.
Wang, L., & Zou, X. (2005). Stability and bifurcation of bidirectional associative memory neural networks with delayed self-feedback. International Journal of Bifurcation and Chaos, 15, 2145–2159.
Yang, D., Liao, X., Hu, C., & Wang, Y. (2009). New delay-dependent exponential stability criteria of BAM neural networks with time delays. Mathematics and Computers in Simulation, 79, 1679–1697.
Yang, F., Zhang, C., & Wu, D. (2007). Global stability analysis of impulsive BAM type Cohen–Grossberg neural networks with delay. Applied Mathematics and Computation, 186, 932–940.
Yılmaz, E. (2014). Almost periodic solutions of impulsive neural networks at non-prescribed moments of time. Neurocomputing, 141, 148–152.
Zhang, Y. (2011). Robust exponential stability of uncertain impulsive neural networks with time-varying delays and delayed impulses. Neurocomputing, 74, 3268–3276.
Zhang, Z., & Liu, K. (2011). Existence and global exponential stability of a periodic solution to interval general bidirectional associative memory (BAM) neural networks with multiple delays on time scales. Neural Networks, 24, 427–439.
Zhang, A., Qiu, J., & She, J. (2014). Existence and global exponential stability of periodic solution for high-order discrete-time BAM neural networks. Neural Networks, 50, 98–109.
Zhang, L., & Si, L. (2007). Existence and exponential stability of almost periodic solution for BAM neural networks with variable coefficients and delays. Applied Mathematics and Computation, 194, 215–223.
Zhou, Q., & Wan, L. (2009). Impulsive effects on stability of Cohen–Grossberg-type bidirectional associative memory neural networks with delays. Nonlinear Analysis: Real World Applications, 10, 2531–2540.
Zhou, Q., & Wan, L. (2010). Global robust asymptotic stability analysis of BAM neural networks with time delay and impulse: an LMI approach. Applied Mathematics and Computation, 216, 1538–1545.
