Neural Networks 54 (2014) 112–122


A systematic method for analyzing robust stability of interval neural networks with time-delays based on stability criteria✩

Zhenyuan Guo a,b, Jun Wang b,∗, Zheng Yan b

a College of Mathematics and Econometrics, Hunan University, Changsha 410082, China
b Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong

Article history: Received 11 January 2014; Received in revised form 28 February 2014; Accepted 6 March 2014.

Keywords: Robust stability; Interval neural network; Time delay; LMI

Abstract

This paper presents a systematic method for analyzing the robust stability of a class of interval neural networks with uncertain parameters and time delays. The neural networks are affected by uncertain parameters whose values are time-invariant and unknown, but bounded in given compact sets. Several new sufficient conditions for the global asymptotic/exponential robust stability of the interval delayed neural networks are derived. The results can be cast as linear matrix inequalities (LMIs), which are shown to be generalizations of some existing conditions. Compared with most existing results, the presented conditions are less conservative and easier to check. Two illustrative numerical examples are given to substantiate the effectiveness and applicability of the proposed robust stability analysis method.

© 2014 Elsevier Ltd. All rights reserved.

✩ The work described in the paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CUHK416811E), the Hong Kong Scholars Program, and the National Natural Science Foundation of China (11101133). http://dx.doi.org/10.1016/j.neunet.2014.03.002
∗ Corresponding author. Tel.: +852 39438472. E-mail addresses: [email protected] (Z. Guo), [email protected], [email protected] (J. Wang), [email protected] (Z. Yan).

1. Introduction

Recurrent neural networks have been applied in many fields, such as image and signal processing, pattern recognition, optimization, associative memory, control, and modelling. In such applications, it is crucial to ensure the stability of the designed neural networks. For example, when a neural network is designed as an optimization solver, the foremost prerequisite is to guarantee that the neural network is globally asymptotically stable. In recent years, the dynamics of recurrent neural networks have been widely studied (Arik, 2002; Cao & Wang, 2003, 2005a, 2005b; Cao & Zhou, 1998; Forti & Tesi, 1995; Gao, Song, Ding, Liu, & Hao, 2013; He, Liu, & Rees, 2007; Liao, Chen, & Sanchez, 2002; Mahmouda & Xia, 2011; Shen & Wang, 2007, 2008, 2012; Wang, Liu, & Liu, 2005; Wu, Lam, Su, & Chu, 2012; Yu, Cao, & Wang, 2007; Zeng & Wang, 2006a, 2006b; Zeng, Wang, & Liao, 2003, 2004, 2005; Zhang, Liu, Huang, & Wang, 2010; Zhang, Wang, & Liu, 2008).

To analyze the dynamical properties of recurrent neural networks, it is sometimes necessary to take time delays into account. Due to the finite switching speed of amplifiers and the finite communication speed between neurons, time delays are likely to be present in electronic implementations of neural networks. Time delays can change the dynamics of a network, for instance by inducing oscillation or other unstable behaviors (Baldi & Atiya, 1994). Moreover, in deterministic neural networks, vital data such as the neuron firing rate and the synaptic interconnection weights are usually measured, acquired, and processed by means of statistical estimation. As a result, estimation errors are unavoidable. Furthermore, inevitable external disturbances and parameter perturbations result in additional uncertainties for neural network models. In light of the above discussion, it is important to study the robust stability of recurrent neural networks with time delays in the presence of uncertainties.

There are mainly two forms of uncertainty, namely interval uncertainty and norm-bounded uncertainty. For interval neural networks with time delays, many delay-dependent or delay-independent robust stability criteria exist (Arik, 2003; Bao, Wen, & Zeng, 2012; Cao & Chen, 2004; Cao, Huang, & Qu, 2005; Chen, Cao, & Huang, 2005; Ensari & Arik, 2010; Faydasicok & Arik, 2012a, 2012b, 2013; Guo & Huang, 2009; Liao & Yu, 1998; Li, Liao, & Zhang, 2004; Ozcan & Arik, 2006; Qi, 2007; Shao, Huang, & Zhou, 2010; Singh, 2005; Wu, Tao, Qin, Shi, & He, 2011; Zhang, Wang, & Liu, 2007). A brief review of these works reveals two main types of methods. One method is to find an upper bound for the norm of the interval matrices and then apply it to the robust stability analysis of neural networks with time delays; e.g., Ensari and Arik (2010), Faydasicok and Arik (2012a, 2012b, 2013), Guo and Huang (2009), Ozcan and


Arik (2006), Qi (2007), Shao et al. (2010) and Singh (2007). In particular, Faydasicok and Arik (2013) recently proposed a new upper bound for the norm of interval matrices and derived robust stability criteria which are shown to be weaker (i.e., less restrictive) than the previous results in Ensari and Arik (2010), Faydasicok and Arik (2012a, 2012b), Ozcan and Arik (2006), Qi (2007), Shao et al. (2010) and Singh (2007) in some cases. Moreover, they summarized the results in Ensari and Arik (2010), Faydasicok and Arik (2012a, 2012b, 2013), Ozcan and Arik (2006), Qi (2007), Shao et al. (2010) and Singh (2007), and obtained unified results on robust stability. Another method is to use the absolute values of the upper or lower bounds of the network parameters to ascertain the robust stability of interval delayed neural networks; e.g., Cao and Chen (2004) and Chen et al. (2005). A disadvantage of these two methods is that the differences between the neuronal excitatory and inhibitory effects are not fully taken into account.

In this paper, we propose a new method for analyzing the robust stability of a general class of interval neural networks with time delays and uncertain parameters whose values are unknown but bounded in given compact sets. Based on existing stability criteria for delayed neural networks with deterministic parameters, this method can be used to derive new sufficient conditions for the global robust asymptotic/exponential stability of uncertain interval delayed neural networks. More importantly, the robust stability criteria herein are less conservative or restrictive than the ones given in Ensari and Arik (2010), Faydasicok and Arik (2012a, 2012b, 2013), Guo and Huang (2009), Ozcan and Arik (2006), Qi (2007), Shao et al. (2010) and Singh (2007). Moreover, they take into account the differences between the neuronal excitatory and inhibitory effects.

The remainder of this paper is organized as follows. In Section 2, interval neural networks with time delays are described. In Section 3, a new method for analyzing the robust stability of interval delayed neural networks is proposed. In Section 4, one more set of criteria for determining the global asymptotic/exponential robust stability of interval neural networks with time delays is derived by applying this method to existing stability criteria of delayed neural networks with deterministic parameters. In Section 5, a comparison with previous robust stability results is made, and two illustrative and comparative numerical examples are given to show the effectiveness and applicability of the proposed robust stability conditions. Finally, concluding remarks are made in Section 6.

2. Preliminaries

In this section, we formulate the problem discussed in the paper and introduce some notations and definitions.

Notations: Let x = (x1, x2, . . . , xn)^T ∈ R^n and A = (aij)n×n ∈ R^{n×n}. ∥x∥2 is the 2-norm of the column vector x, defined as

∥x∥2 = (Σ_{i=1}^n xi^2)^{1/2}.

∥A∥2 is the matrix norm induced by the vector 2-norm, given by ∥A∥2 = [λmax(A^T A)]^{1/2}. |A| denotes the matrix |A| = (|aij|)n×n. If A is a symmetric matrix, then A > 0 (≥ 0, < 0) means that A is positive definite (positive semidefinite, negative definite), i.e., all eigenvalues of A are positive (nonnegative, negative). Denote K = diag{k1, k2, . . . , kn} and L = diag{l1, l2, . . . , ln}.
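These notations are easy to check numerically. The following is a minimal sketch (Python with NumPy is assumed here and in the sketches below; all variable names are illustrative):

```python
import numpy as np

def matrix_2norm(A: np.ndarray) -> float:
    # ||A||_2 = [lambda_max(A^T A)]^{1/2}, i.e., the largest singular value of A.
    return float(np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A))))

A = np.array([[1.0, -2.0],
              [0.5,  3.0]])
print(matrix_2norm(A), np.linalg.norm(A, 2))  # the two values agree
print(np.abs(A))                              # |A| = (|a_ij|)_{n x n}
```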


In this paper, we consider the following delayed recurrent neural network (DRNN) model described by state equations of the form

dxi(t)/dt = −di xi(t) + Σ_{j=1}^n aij fj(xj(t)) + Σ_{j=1}^n bij gj(xj(t − τj(t))) + ui,  i = 1, 2, . . . , n,

or equivalently

dx(t)/dt = −Dx(t) + Af(x(t)) + Bg(x(t − τ(t))) + u,   (1)

where x(t) = (x1(t), x2(t), . . . , xn(t))^T is the state vector; D = diag{d1, d2, . . . , dn} is a real diagonal matrix whose entries di > 0, i = 1, 2, . . . , n, are the neuron self-inhibitions; τj(t) is the transmission delay; A = [aij]n×n and B = [bij]n×n are the feedback connection weight matrix and the delayed feedback connection weight matrix, respectively; f(x(t)) = (f1(x1(t)), f2(x2(t)), . . . , fn(xn(t)))^T and g(x(t − τ(t))) = (g1(x1(t − τ1(t))), g2(x2(t − τ2(t))), . . . , gn(xn(t − τn(t))))^T are the activation functions of the network; and u = (u1, u2, . . . , un)^T ∈ R^n is the constant external input vector.

Since the estimation errors and parameter perturbations in electronic implementations of neural networks are bounded in general, the parameters of DRNN (1) are assumed to lie in the following ranges:

𝒟 := [D̲, D̄] = {D = diag(di) : D̲ ≤ D ≤ D̄, i.e., d̲i ≤ di ≤ d̄i, i = 1, 2, . . . , n},
𝒜 := [A̲, Ā] = {A = (aij)n×n : A̲ ≤ A ≤ Ā, i.e., a̲ij ≤ aij ≤ āij, i, j = 1, 2, . . . , n},   (2)
ℬ := [B̲, B̄] = {B = (bij)n×n : B̲ ≤ B ≤ B̄, i.e., b̲ij ≤ bij ≤ b̄ij, i, j = 1, 2, . . . , n}.

Since each di has the two boundary points d̲i and d̄i, each aij has the two boundary points a̲ij and āij, and each bij has the two boundary points b̲ij and b̄ij, the number of combinations of boundary points of the components of D, A and B is 2^{2n²+n}.

We order the 2^{2n²+n} cases in an arbitrary way as

(D1, A1, B1), (D2, A2, B2), . . . , (D_{2^{2n²+n}}, A_{2^{2n²+n}}, B_{2^{2n²+n}}).

In this paper, the following assumptions on the activation functions and delays will be used:

A1. The delay τj(t) (j = 1, 2, . . . , n) is a differentiable and bounded function with 0 ≤ τj(t) ≤ τ and dτj(t)/dt ≤ 0.

A2. The delay τj(t) (j = 1, 2, . . . , n) is a differentiable and bounded function with 0 ≤ τj(t) ≤ τ and dτj(t)/dt ≤ µj ≤ µ < 1.

A3. The activation functions fi(·) (i = 1, 2, . . . , n) are monotone nondecreasing.

A4. The activation functions fi(·) and gi(·) (i = 1, 2, . . . , n) satisfy a Lipschitz condition; i.e., there exist positive constants ki, li, i = 1, 2, . . . , n, such that

|fi(x) − fi(y)| ≤ ki |x − y|, |gi(x) − gi(y)| ≤ li |x − y|, ∀ x, y ∈ R.

The activation functions in this paper are not required to be bounded or differentiable.


Let C([−τ, 0], R^n) be the space of continuous functions mapping [−τ, 0] into R^n with the norm defined by ∥φ∥ = max_{1≤i≤n} {sup_{s∈[−τ,0]} |φi(s)|^r}^{1/r}, where r ≥ 1 is a constant. Obviously, C([−τ, 0], R^n) is a Banach space. The initial condition of DRNN (1) is x(s) = φ(s), s ∈ [−τ, 0], where φ ∈ C([−τ, 0], R^n). Denote by x(t; 0, φ) the solution of DRNN (1) with initial condition (0, φ); that is, x(t; 0, φ) is continuous, satisfies DRNN (1), and x(s; 0, φ) = φ(s), s ∈ [−τ, 0].

Next, we formulate the robust stability problem based on the following definitions.

Definition 1. An equilibrium point of DRNN (1) is a constant vector x∗ = (x∗1, x∗2, . . . , x∗n)^T such that

−Dx∗ + Af(x∗) + Bg(x∗) + u = 0.

Definition 2. The equilibrium point x∗ = (x∗1, x∗2, . . . , x∗n)^T is said to be globally asymptotically stable if it is locally stable in the sense of Lyapunov and globally attractive, where global attractivity means that every trajectory tends to x∗ as t → +∞. Furthermore, the equilibrium x∗ is said to be globally exponentially stable if there exist constants δ > 0, T > 0, and M > 0 such that

∥x(t) − x∗∥ ≤ M e^{−δt}, t ≥ T,

where δ is called the exponential convergence rate.

Definition 3. DRNN (1) with the parameter ranges defined by (2) is globally asymptotically (or exponentially) robustly stable if the unique equilibrium point x∗ = (x∗1, x∗2, . . . , x∗n)^T of the model is globally asymptotically (or exponentially) stable for all D ∈ 𝒟, A ∈ 𝒜, B ∈ ℬ. In the sequel, as in the literature, the stability of the equilibrium point of a DRNN is also referred to as the stability of the DRNN for simplicity.

Definition 4 (Berman & Plemmons, 1979). Suppose that A = (aij)n×n has nonpositive off-diagonal elements. A is a nonsingular M-matrix if one of the following conditions holds:
(i) all principal minors of A are positive;
(ii) A has all positive diagonal elements and there exists a positive vector λ = (λ1, λ2, . . . , λn)^T such that λ^T A > 0, that is,

λi aii > Σ_{j≠i} λj |aji|, i = 1, 2, . . . , n,

which can be rewritten as

Σ_{j=1}^n λj aji > 0, i = 1, 2, . . . , n.

3. A method for analyzing robust stability

In this section, we shall address a new method for analyzing the robust stability of interval delayed neural networks based on the stability criteria of the corresponding deterministic delayed neural networks.

For convenience, we rewrite the parameters of DRNN (1), i.e., d1, d2, . . . , dn, a11, a12, . . . , a1n, . . . , an1, an2, . . . , ann, b11, b12, . . . , b1n, . . . , bn1, bn2, . . . , bnn, in the form

α1, α2, . . . , αm, m = 2n² + n,

which satisfy

αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m.   (3)

As stated in the introduction, there exist many criteria for the stability of DRNN (1) with time delays and deterministic parameters. These criteria take many forms, such as M-matrix, H-matrix, Lyapunov stability matrix, LMI, and other algebraic forms. Therefore, we assume that these criteria have the following general form:

gj(α1, α2, . . . , αm) < (≤, >, ≥, or =) 0, j = 1, 2, . . . , r.   (4)

Remark 1. If gj(α1, α2, . . . , αm) is a scalar function of α1, α2, . . . , αm, (4) means that gj(α1, α2, . . . , αm) is less than (less than or equal to, greater than, greater than or equal to, or equal to) zero. If gj(α1, α2, . . . , αm) is a matrix function of α1, α2, . . . , αm, (4) means that gj(α1, α2, . . . , αm) is negative definite (negative semidefinite, positive definite, positive semidefinite, or a zero matrix). If the stability criteria state that gj(α1, α2, . . . , αm) is an M-matrix, H-matrix, or a Lyapunov stability matrix, j = 1, 2, . . . , r, we can transform these stability criteria into algebraic scalar or matrix forms based on the definition of the M-matrix, H-matrix, or Lyapunov stability matrix.

Lemma 1. Assume that h(α1, α2, . . . , αm), with its variables in (3), is a linear scalar function. If

h(α1, α2, . . . , αm) < (≤, >, ≥, or =) 0   (5)

for αi = α̲i or ᾱi, i = 1, 2, . . . , m, then (5) holds for all αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m.

Proof. Since h(α1, α2, . . . , αm) is linear in α1, α2, . . . , αm, ∂h/∂αi is constant for i = 1, 2, . . . , m. It follows that

h_max = max_{αi∈[α̲i,ᾱi], i=1,...,m} h(α1, α2, . . . , αm) and h_min = min_{αi∈[α̲i,ᾱi], i=1,...,m} h(α1, α2, . . . , αm)

are attained when αi = α̲i or ᾱi, i = 1, . . . , m; that is to say, the maximizers and minimizers must be boundary points. If

h(α1, α2, . . . , αm) < 0   (6)

for αi = α̲i or ᾱi, i = 1, 2, . . . , m, then h_max < 0. Therefore, (6) holds for αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m. Similarly, we can prove the conclusion in the cases h(α1, α2, . . . , αm) ≤, >, ≥, or = 0. □

Lemma 2. Assume that h(α1, α2, . . . , αm) = (hij(α1, α2, . . . , αm))n×n, with its variables in (3), is a linear matrix function. If

h(α1, α2, . . . , αm) < (≤, >, ≥, or =) 0   (7)

for αi = α̲i or ᾱi, i = 1, 2, . . . , m, then (7) holds for all αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m.

Proof. For any given nonzero vector x ∈ R^n, since hij(α1, α2, . . . , αm) is linear in α1, α2, . . . , αm, x^T h x is a scalar linear function of α1, α2, . . . , αm. So, it follows that ∂(x^T h x)/∂αi is constant, i = 1, 2, . . . , m. Therefore,

(x^T h x)_max = max_{αi∈[α̲i,ᾱi], i=1,...,m} x^T h(α1, α2, . . . , αm) x and (x^T h x)_min = min_{αi∈[α̲i,ᾱi], i=1,...,m} x^T h(α1, α2, . . . , αm) x

are attained when αi = α̲i or ᾱi, i = 1, . . . , m; that is to say, the maximizers and minimizers must be boundary points. If

h(α1, α2, . . . , αm) < 0   (8)

for αi = α̲i or ᾱi, i = 1, 2, . . . , m, then (x^T h(α1, α2, . . . , αm) x)_max < 0 for any given nonzero vector x. Therefore, x^T h(α1, α2, . . . , αm) x < 0 holds for any given nonzero vector x and all αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m. It means that (8) holds for αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m. Similarly, we can prove the conclusion in the cases h(α1, α2, . . . , αm) ≤, >, ≥, or = 0. □

Based on Lemmas 1 and 2, we have the following theorem.

Theorem 1. For k = 1, 2, . . . , r, assume that gk(α1, α2, . . . , αm) is a linear scalar function, or gk(α1, α2, . . . , αm) = (gkij(α1, α2, . . . , αm))n×n is a linear matrix function. If (4) holds for αi = α̲i or ᾱi, i = 1, 2, . . . , m, then DRNN (1) with its uncertain parameters in (2) is robustly stable.

Proof. According to Lemmas 1 and 2, if (4) holds for αi = α̲i or ᾱi, i = 1, 2, . . . , m, then (4) holds for αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m. Therefore, DRNN (1) is stable when its parameters αi, i = 1, 2, . . . , m, vary in [α̲i, ᾱi]; that is to say, DRNN (1) with its uncertain parameters in (2) is robustly stable. □

Lemma 3. Assume that h(α1, α2, . . . , αm), with its variables in (3), is a nonlinear scalar function. If

h(α1, α2, . . . , αm) < (≤, >, ≥, or =) 0   (9)

for each α = (α1, α2, . . . , αm)^T which satisfies one of the following three conditions:
(i) αi = α̲i or ᾱi (i = 1, 2, . . . , m),
(ii) ∇α h = 0,
(iii) ∇α h does not exist,
then (9) holds for αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m.

Proof. Since h(α1, α2, . . . , αm) is a function of α1, α2, . . . , αm,

h_max = max_{αi∈[α̲i,ᾱi], i=1,...,m} h(α1, α2, . . . , αm) and h_min = min_{αi∈[α̲i,ᾱi], i=1,...,m} h(α1, α2, . . . , αm)

are attained at points α = (α1, α2, . . . , αm)^T which must satisfy one of the following three conditions: (i) αi = α̲i or ᾱi (i = 1, 2, . . . , m), (ii) ∇α h = 0, (iii) ∇α h does not exist. If

h(α1, α2, . . . , αm) < 0   (10)

for each α = (α1, α2, . . . , αm)^T which satisfies one of these three conditions, then h_max < 0. Therefore, (10) holds for αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m. Similarly, we can prove the conclusion in the cases h(α1, α2, . . . , αm) ≤, >, ≥, or = 0. □

Lemma 4. Assume that h(α1, α2, . . . , αm) = (hij(α1, α2, . . . , αm))n×n, with its variables in (3), is a nonlinear matrix function. If

h(α1, α2, . . . , αm) < (≤, >, ≥, or =) 0   (11)

for each α = (α1, α2, . . . , αm)^T which satisfies one of the following three conditions:
(i) αi = α̲i or ᾱi (i = 1, 2, . . . , m),
(ii) ∇α(x^T h x) = 0 for any given nonzero vector x ∈ R^n,
(iii) ∇α(x^T h x) does not exist for any given nonzero vector x ∈ R^n,
then (11) holds for αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m.

Proof. Since hij(α1, α2, . . . , αm) is a nonlinear function of α1, α2, . . . , αm,

(x^T h x)_max = max_{αi∈[α̲i,ᾱi], i=1,...,m} x^T h(α1, α2, . . . , αm) x and (x^T h x)_min = min_{αi∈[α̲i,ᾱi], i=1,...,m} x^T h(α1, α2, . . . , αm) x

are attained at points α = (α1, α2, . . . , αm)^T which must satisfy one of the following three conditions: (i) αi = α̲i or ᾱi (i = 1, 2, . . . , m), (ii) ∇α(x^T h x) = 0 for any given nonzero vector x ∈ R^n, (iii) ∇α(x^T h x) does not exist for any given nonzero vector x ∈ R^n. If

h(α1, α2, . . . , αm) < 0   (12)

for each α = (α1, α2, . . . , αm)^T which satisfies one of these three conditions, then (x^T h(α1, α2, . . . , αm) x)_max < 0 for any given nonzero vector x. Therefore, x^T h(α1, α2, . . . , αm) x < 0 holds for any given nonzero vector x and all αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m. It means that (12) holds for αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m. Similarly, we can prove the conclusion in the cases h(α1, α2, . . . , αm) ≤, >, ≥, or = 0. □

If we consider the scalar function h(α1, α2, . . . , αm) as a 1 × 1 matrix function, then Lemma 3 is a special case of Lemma 4. Next, based on Lemmas 3 and 4, we have the following theorem.

Theorem 2. For k = 1, 2, . . . , r, assume that gk(α1, α2, . . . , αm) is a nonlinear scalar function, or gk(α1, α2, . . . , αm) = (gkij(α1, α2, . . . , αm))n×n is a nonlinear matrix function. If (4) holds for each α = (α1, α2, . . . , αm)^T which satisfies one of the following three conditions:
(i) αi = α̲i or ᾱi (i = 1, 2, . . . , m),
(ii) ∇α(x^T gk x) = 0 for any given nonzero vector x ∈ R^n,
(iii) ∇α(x^T gk x) does not exist for any given nonzero vector x ∈ R^n,
then DRNN (1) with its uncertain parameters in (2) is robustly stable.

Proof. According to Lemmas 3 and 4, if (4) holds for each α = (α1, α2, . . . , αm)^T which satisfies one of the three conditions above, then (4) holds for αi ∈ [α̲i, ᾱi], i = 1, 2, . . . , m. Therefore, DRNN (1) is stable when its parameters αi, i = 1, 2, . . . , m, vary in [α̲i, ᾱi]; that is to say, DRNN (1) with its uncertain parameters in (2) is robustly stable. □

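Theorems 1 and 2 suggest a direct computational procedure: evaluate the stability criterion at every vertex of the parameter box. The sketch below illustrates the linear case of Theorem 1 for a generic criterion g; the function g, the bounds, and the toy usage are illustrative assumptions, and the 2^m vertex count (m = 2n² + n) grows quickly with n:

```python
import itertools
import numpy as np

def holds_on_all_vertices(g, lower, upper) -> bool:
    """Check g(alpha) < 0 at every vertex of the box [lower_i, upper_i]^m.
    For a linear scalar or matrix function g, Theorem 1 then guarantees
    g(alpha) < 0 on the whole box."""
    for vertex in itertools.product(*zip(lower, upper)):   # 2^m vertices
        val = g(np.asarray(vertex, dtype=float))
        if np.ndim(val) == 0:
            ok = val < 0                                   # scalar criterion
        else:                                              # symmetric matrix:
            ok = np.max(np.linalg.eigvalsh(val)) < 0       # negative definite
        if not ok:
            return False
    return True

# Toy usage: h(alpha) = diag(alpha) - 2I is linear in alpha and negative
# definite for all alpha in [0, 1]^2, so checking the 4 vertices suffices.
print(holds_on_all_vertices(lambda a: np.diag(a) - 2.0 * np.eye(2), [0, 0], [1, 1]))
```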

4. Robust stability criteria

In this section, by applying the new method for analyzing robust stability to existing stability criteria of DRNN (1) with deterministic parameters, robust stability criteria for DRNN (1) satisfying (4) are presented in scalar and matrix forms, respectively.

4.1. Criteria in scalar form

First, we restate some important stability results for DRNN (1) from Cao and Wang (2003, 2005b) that are in scalar form or can be changed to scalar form, such as the M-matrix form.

Lemma 5 (Cao & Wang, 2003). Assume that Assumptions A1 and A4 hold. DRNN (1) is globally asymptotically stable if D − |A|K − |B|L is a nonsingular M-matrix.

Lemma 6 (Cao & Wang, 2003). Assume that Assumptions A2 and A4 hold. DRNN (1) is globally asymptotically stable if there exist ri > 0 (i = 1, 2, . . . , n) such that

min_{1≤j≤n} { rj dj − Σ_{i=1}^n ri |aij| kj − Σ_{i=1}^n ri (1/(1−µ)) |bij| lj } > 0.

Lemma 7 (Cao & Wang, 2003). Assume that Assumptions A2 and A4 hold. DRNN (1) is globally asymptotically stable if there exist ri > 0 (i = 1, 2, . . . , n) such that

min_{1≤i≤n} { ri di − (1/2) Σ_{j=1}^n (ri |aij| kj + rj |aji| ki) − (1/2) Σ_{j=1}^n [ ri |bij| lj + rj (1/(1−µ)) |bji| li ] } > 0.

Lemma 8 (Cao & Wang, 2005b). Assume that Assumption A4 holds and τj(t) = τj, j = 1, 2, . . . , n, are constants. DRNN (1) is globally exponentially stable if there exist constants q∗ij, r∗ij, qij, rij ∈ R and ωi > 0, r > 1, i, j = 1, 2, . . . , n, such that

Ψi(di, aij, bij) = −di + Σ_{j=1}^n [ ((r−1)/r) kj^{(r−q∗ij)/(r−1)} |aij|^{(r−r∗ij)/(r−1)} + (ωj/(rωi)) ki^{q∗ji} |aji|^{r∗ji} ] + Σ_{j=1}^n [ ((r−1)/r) lj^{(r−qij)/(r−1)} |bij|^{(r−rij)/(r−1)} + (ωj/(rωi)) li^{qji} |bji|^{rji} ] < 0, i = 1, 2, . . . , n.

Based on Lemmas 5–7, we can obtain the following theorems.

Theorem 3. Assume that Assumptions A1 and A4 hold. DRNN (1) with its uncertain parameters in (2) is globally asymptotically robustly stable if D̲ − |A|max K − |B|max L is a nonsingular M-matrix, where

|A|max = (|aij|max)n×n, |B|max = (|bij|max)n×n, |aij|max = max{|a̲ij|, |āij|}, and |bij|max = max{|b̲ij|, |b̄ij|}.

Proof. According to Definition 4, if D̲ − |A|max K − |B|max L is a nonsingular M-matrix, then there exist constants ri > 0, i = 1, 2, . . . , n, such that

d̲i ri − Σ_{j=1}^n rj |aji|max ki − Σ_{j=1}^n rj |bji|max li > 0, i = 1, 2, . . . , n.

We denote

κi(di, a1i, . . . , ani, b1i, . . . , bni) = ri di − Σ_{j=1}^n rj |aji| ki − Σ_{j=1}^n rj |bji| li

and

κi^min = ri d̲i − Σ_{j=1}^n rj |aji|max ki − Σ_{j=1}^n rj |bji|max li.

Since di ≥ d̲i, −|aji| ≥ −|aji|max and −|bji| ≥ −|bji|max for di ∈ [d̲i, d̄i], aji ∈ [a̲ji, āji] and bji ∈ [b̲ji, b̄ji], i, j = 1, 2, . . . , n, we have κi ≥ κi^min. Moreover, κi is nonlinear in (di, a1i, . . . , ani, b1i, . . . , bni)^T because of the absolute values of aji and bji, i, j = 1, 2, . . . , n. Next, we consider the following three cases.

Case (i): When di = d̲i or d̄i, aji = a̲ji or āji, and bji = b̲ji or b̄ji, it follows that κi ≥ κi^min > 0.

Case (ii): Since ∂κi/∂di = ri > 0, ∂κi/∂aji = rj ki, −rj ki, or does not exist, and ∂κi/∂bji = rj li, −rj li, or does not exist, the equation ∇κi = 0 has no solutions, where ∇κi is the gradient of κi and can be represented as ∇κi = (∂κi/∂di, ∂κi/∂a1i, . . . , ∂κi/∂ani, ∂κi/∂b1i, . . . , ∂κi/∂bni); that is to say, there does not exist (di, a1i, . . . , ani, b1i, . . . , bni)^T such that ∇κi = 0.

Case (iii): When ∇κi does not exist, there must exist j0 ∈ {1, 2, . . . , n} such that aj0i = 0 or bj0i = 0, because of the absolute values of aji and bji, j = 1, 2, . . . , n. Since 0 ≥ −|aji|max and 0 ≥ −|bji|max, we again have κi ≥ κi^min > 0.

In all three cases κi > 0, so by Definition 4, D − |A|K − |B|L is a nonsingular M-matrix. From Theorem 2 and Lemma 5, DRNN (1) with its uncertain parameters in (2) is globally asymptotically robustly stable. □
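The M-matrix test in Theorem 3 is also easy to automate. Below is a minimal sketch based on Definition 4; the numerical data reuse the diagonal interval bounds of Example 1 in Section 5 with a = 0.9, purely as an illustration:

```python
import numpy as np

def is_nonsingular_M_matrix(A: np.ndarray) -> bool:
    # A must have nonpositive off-diagonal entries; for such a matrix,
    # positivity of all leading principal minors is equivalent to
    # Definition 4(i) (all principal minors positive).
    n = A.shape[0]
    if np.any(A - np.diag(np.diag(A)) > 0):
        return False
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

a = 0.9
D_lo  = np.diag([5.0, 8.0])
A_max = np.diag([3 * a, 5 * a])    # (|a_ij|max), diagonal in this example
B_max = np.diag([2 * a, 3 * a])    # (|b_ij|max)
K = L = np.eye(2)                  # Lipschitz constants k_i = l_i = 1
print(is_nonsingular_M_matrix(D_lo - A_max @ K - B_max @ L))  # True for a = 0.9
```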

Similarly to the proof of Theorem 3, we can obtain Theorems 4 and 5.

Theorem 4. Assume that Assumptions A2 and A4 hold. DRNN (1) with its uncertain parameters in (2) is globally asymptotically robustly stable if there exist ri > 0 (i = 1, 2, . . . , n) such that

min_{1≤j≤n} { rj d̲j − Σ_{i=1}^n ri |aij|max kj − Σ_{i=1}^n ri (1/(1−µ)) |bij|max lj } > 0.

Theorem 5. Assume that Assumptions A2 and A4 hold. DRNN (1) with its uncertain parameters in (2) is globally asymptotically robustly stable if there exist ri > 0 (i = 1, 2, . . . , n) such that

min_{1≤i≤n} { ri d̲i − (1/2) Σ_{j=1}^n (ri |aij|max kj + rj |aji|max ki) − (1/2) Σ_{j=1}^n [ ri |bij|max lj + rj (1/(1−µ)) |bji|max li ] } > 0.

Based on Lemma 8, we can obtain the following theorem.

Theorem 6. Assume that Assumption A4 holds and τj(t) = τj, j = 1, 2, . . . , n, are constants. DRNN (1) with its uncertain parameters in (2) is globally exponentially robustly stable if there exist constants q∗ij, r∗ij, qij, rij ∈ R and ωi > 0, r > 1, i, j = 1, 2, . . . , n,

such that

−d̲i + Σ_{j=1}^n [ ((r−1)/r) kj^{(r−q∗ij)/(r−1)} |aij|max^{(r−r∗ij)/(r−1)} + (ωj/(rωi)) ki^{q∗ji} |aji|max^{r∗ji} ] + Σ_{j=1}^n [ ((r−1)/r) lj^{(r−qij)/(r−1)} |bij|max^{(r−rij)/(r−1)} + (ωj/(rωi)) li^{qji} |bji|max^{rji} ] < 0, i = 1, 2, . . . , n.

Proof. By Lemma 8, DRNN (1) with deterministic parameters is globally exponentially stable if there exist constants q∗ij, r∗ij, qij, rij ∈ R and ωi > 0, r > 1, i, j = 1, 2, . . . , n, such that Ψi(di, aij, bij) < 0. Denote

aij^(1) = |aij|^{(r−r∗ij)/(r−1)}, aji^(2) = |aji|^{r∗ji}, bij^(1) = |bij|^{(r−rij)/(r−1)}, bji^(2) = |bji|^{rji}.

Obviously, Ψi(di, aij, bij) is linear in di, aij^(1), aji^(2), bij^(1), and bji^(2). According to the definitions of aij^(1), aji^(2), bij^(1), and bji^(2), we have

a̲ij^(1) = min_{aij∈[a̲ij,āij]} aij^(1) = 0 if a̲ij · āij ≤ 0, and |aij|min^{(r−r∗ij)/(r−1)} if a̲ij · āij > 0;
a̲ji^(2) = min_{aji∈[a̲ji,āji]} aji^(2) = 0 if a̲ji · āji ≤ 0, and |aji|min^{r∗ji} if a̲ji · āji > 0;
b̲ij^(1) = min_{bij∈[b̲ij,b̄ij]} bij^(1) = 0 if b̲ij · b̄ij ≤ 0, and |bij|min^{(r−rij)/(r−1)} if b̲ij · b̄ij > 0;
b̲ji^(2) = min_{bji∈[b̲ji,b̄ji]} bji^(2) = 0 if b̲ji · b̄ji ≤ 0, and |bji|min^{rji} if b̲ji · b̄ji > 0,

where |aij|min = min{|a̲ij|, |āij|} and |bij|min = min{|b̲ij|, |b̄ij|}, and

āij^(1) = max_{aij∈[a̲ij,āij]} aij^(1) = |aij|max^{(r−r∗ij)/(r−1)}, āji^(2) = max_{aji∈[a̲ji,āji]} aji^(2) = |aji|max^{r∗ji},
b̄ij^(1) = max_{bij∈[b̲ij,b̄ij]} bij^(1) = |bij|max^{(r−rij)/(r−1)}, b̄ji^(2) = max_{bji∈[b̲ji,b̄ji]} bji^(2) = |bji|max^{rji}.

Therefore, di, aij^(1), aji^(2), bij^(1), and bji^(2) vary in the intervals [d̲i, d̄i], [a̲ij^(1), āij^(1)], [a̲ji^(2), āji^(2)], [b̲ij^(1), b̄ij^(1)], and [b̲ji^(2), b̄ji^(2)], respectively. If there exist constants q∗ij, r∗ij, qij, rij ∈ R and ωi > 0, r > 1, i, j = 1, 2, . . . , n, such that

−d̲i + Σ_{j=1}^n [ ((r−1)/r) kj^{(r−q∗ij)/(r−1)} |aij|max^{(r−r∗ij)/(r−1)} + (ωj/(rωi)) ki^{q∗ji} |aji|max^{r∗ji} ] + Σ_{j=1}^n [ ((r−1)/r) lj^{(r−qij)/(r−1)} |bij|max^{(r−rij)/(r−1)} + (ωj/(rωi)) li^{qji} |bji|max^{rji} ] < 0, i = 1, 2, . . . , n,

then Ψi(di, aij, bij) < 0, i = 1, 2, . . . , n, when di = d̲i or d̄i, aij^(1) = a̲ij^(1) or āij^(1), aji^(2) = a̲ji^(2) or āji^(2), bij^(1) = b̲ij^(1) or b̄ij^(1), and bji^(2) = b̲ji^(2) or b̄ji^(2). From Theorem 1, it follows that DRNN (1) with its uncertain parameters in (2) is globally exponentially robustly stable. □
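The endpoint computation used in this proof, i.e., the range of |x|^p over an interval, is the only nonstandard step, so a small sketch may help; the function name is illustrative:

```python
def abs_power_range(lo: float, hi: float, p: float):
    """Range of |x|**p over x in [lo, hi], p > 0, as used for the terms
    a_ij^(1), a_ji^(2), b_ij^(1), b_ji^(2) above: the maximum is always
    max(|lo|, |hi|)**p, while the minimum is 0 when the interval contains 0
    (lo*hi <= 0) and min(|lo|, |hi|)**p otherwise."""
    vmax = max(abs(lo), abs(hi)) ** p
    vmin = 0.0 if lo * hi <= 0 else min(abs(lo), abs(hi)) ** p
    return vmin, vmax

print(abs_power_range(-2.0, 3.0, 1.5))  # (0.0, 3.0**1.5): interval contains 0
print(abs_power_range(1.0, 3.0, 1.5))   # (1.0, 3.0**1.5): endpoints only
```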

Remark 2. Many scalar algebraic criteria for the robust stability of interval DRNN (1) with its uncertain parameters in (2), such as those in Cao and Chen (2004), Chen et al. (2005) and Liao and Yu (1998), can be obtained by applying the method introduced in Section 3 to existing stability results.

4.2. Criteria in matrix form

First, we restate some important results in matrix form, such as the LMI form, on the stability of DRNN (1) from Zhang et al. (2008).

Lemma 9 (Zhang et al., 2008). Assume that Assumptions A2, A3 and A4 hold. DRNN (1) is globally asymptotically stable, independent of the magnitude of time delays, if there exist a positive definite symmetric matrix P > 0, positive diagonal matrices H = diag(h1, h2, . . . , hn), M = diag(m1, m2, . . . , mn), Hg, Mf, and Q = diag(q1, q2, . . . , qn), such that the following LMI holds:

Θ = [ −PD − DP, Θ12, Mf L − DM, PB; ∗, Θ22, A^T M, HB; ∗, ∗, Q − 2Mf, MB; ∗, ∗, ∗, −(1 − µ)Q ] < 0,   (13)

where Θ12 = PA + Hg K − HD and Θ22 = HA + A^T H − 2Hg.

In the following, we make a minor modification of Lemma 9 to obtain another stability criterion of DRNN (1). By A3 and A4, we have

0 ≤ [fi(xi(t)) − fi(x∗i)]/(xi(t) − x∗i) ≤ ki,

and then

−[xi(t) − x∗i] ≤ −[fi(xi(t)) − fi(x∗i)]/ki.

So,

−[fi(xi(t)) − fi(x∗i)] hi di (xi(t) − x∗i) ≤ −[fi(xi(t)) − fi(x∗i)] (hi di/ki) [fi(xi(t)) − fi(x∗i)].

If we use this expression in the proof of Lemma 9, then, with Θ12 = PA + Hg K and Θ22 = HA + A^T H − HDK^{−1} − 2Hg, the conclusion of Lemma 9 also holds.

When g(·) = f(·) in DRNN (1), the model becomes

dxi(t)/dt = −di xi(t) + Σ_{j=1}^n aij fj(xj(t)) + Σ_{j=1}^n bij fj(xj(t − τj(t))) + ui,  i = 1, 2, . . . , n,

or equivalently

dx(t)/dt = −Dx(t) + Af(x(t)) + Bf(x(t − τ(t))) + u.   (14)

Lemma 10 (Zhang et al., 2008). Assume that Assumptions A2, A3 and A4 hold. DRNN (14) is globally asymptotically stable, independent of the magnitude of time delays, if there exist a positive diagonal matrix P = diag(p1, p2, . . . , pn) and a positive definite matrix Q > 0, such that one of the following conditions holds:

(i) −2PDK^{−1} + PA + A^T P + (1/(1−µ)) PBQ^{−1}B^T P + Q < 0;   (15)

(ii) Ω = [ Ω11, PB; B^T P, −(1 − µ)Q ] < 0,   (16)

where Ω11 = −2PDK^{−1} + PA + A^T P + Q.

Lemma 11 (Zhang et al., 2008). Assume that Assumptions A2, A3 and A4 hold. DRNN (14) is globally asymptotically stable, independent of the magnitude of time delays, if there exist positive definite symmetric matrices P > 0 and Q > 0, and a positive diagonal matrix R = diag(r1, r2, . . . , rn), such that the following LMI holds:

Ξ = [ −PD − DP, PA, PB; A^T P, Ξ22, RB; B^T P, B^T R, −(1 − µ)Q ] < 0,   (17)

where Ξ22 = −2RDK^{−1} + RA + A^T R + Q.

Letting µj = µ, j = 1, 2, . . . , n, in Corollary 3.1 of Zhang et al. (2008), we can obtain Lemma 9. Letting N = 1 and ηj = µ, j = 1, 2, . . . , n, in Theorem 3.3 and Corollary 3.5 of Zhang et al. (2008), Lemmas 10 and 11 can be derived, respectively.

Based on Lemmas 9–11, we can obtain the following theorems.

Theorem 7. Assume that Assumptions A2, A3 and A4 hold. DRNN (1) with its uncertain parameters in (2) is globally asymptotically robustly stable, independent of the magnitude of time delays, if there exist a positive definite symmetric matrix P > 0, positive diagonal matrices H = diag(h1, h2, . . . , hn), M = diag(m1, m2, . . . , mn), Hg, Mf, and Q = diag(q1, q2, . . . , qn), such that the following LMIs hold:

[ −PDi − Di P, Θ12^i, Mf L − Di M, PBi; ∗, Θ22^i, Ai^T M, HBi; ∗, ∗, Q − 2Mf, MBi; ∗, ∗, ∗, −(1 − µ)Q ] < 0,   (18)

for i = 1, 2, . . . , 2^{2n²+n}, where Θ12^i = PAi + Hg K − HDi and Θ22^i = HAi + Ai^T H − 2Hg, or Θ12^i = PAi + Hg K and Θ22^i = HAi + Ai^T H − HDi K^{−1} − 2Hg.

Proof. From Lemma 9, DRNN (1) with deterministic parameters is globally asymptotically stable, independent of the magnitude of time delays, if the matrix Θ < 0. Since Θ is linear in D, A, and B, each component of Θ is linear in the variables d1, d2, . . . , dn, a11, a12, . . . , a1n, . . . , an1, an2, . . . , ann, b11, b12, . . . , b1n, . . . , bn1, bn2, . . . , bnn. According to Theorem 1, DRNN (1) with its uncertain parameters in (2) is globally asymptotically robustly stable, independent of the magnitude of time delays, if there exist a positive definite symmetric matrix P > 0, positive diagonal matrices H = diag(h1, h2, . . . , hn), M = diag(m1, m2, . . . , mn), Hg, Mf, and Q = diag(q1, q2, . . . , qn), such that Θ < 0 when di = d̲i or d̄i, aij = a̲ij or āij, and bij = b̲ij or b̄ij, i, j = 1, 2, . . . , n, i.e., (18) holds for i = 1, 2, . . . , 2^{2n²+n}. □

Theorem 8. Assume that Assumptions A2, A3 and A4 hold. DRNN (14) with its uncertain parameters in (2) is globally asymptotically robustly stable, independent of the magnitude of time delays, if there exist a positive diagonal matrix P = diag(p1, p2, . . . , pn) and a positive definite matrix Q > 0, such that one of the following conditions holds:

(i) For i = 1, 2, . . . , 2^{2n²+n},

−2PDi K^{−1} + PAi + Ai^T P + (1/(1−µ)) PBi Q^{−1} Bi^T P + Q < 0;   (19)

(ii) For i = 1, 2, . . . , 2^{2n²+n},

[ Ω11^i, PBi; Bi^T P, −(1 − µ)Q ] < 0,   (20)

where Ω11^i = −2PDi K^{−1} + PAi + Ai^T P + Q.

Proof. From Lemma 10, DRNN (14) with deterministic parameters is globally asymptotically stable, independent of the magnitude of time delays, if the matrix Ω < 0. Since Ω is linear in D, A, and B, each component of Ω is linear in the variables d1, d2, . . . , dn, a11, a12, . . . , a1n, . . . , an1, an2, . . . , ann, b11, b12, . . . , b1n, . . . , bn1, bn2, . . . , bnn. According to Theorem 1, DRNN (14) with its uncertain parameters in (2) is globally asymptotically robustly stable, independent of the magnitude of time delays, if there exist a positive diagonal matrix P = diag(p1, p2, . . . , pn) and a positive definite matrix Q > 0 such that Ω < 0 when di = d̲i or d̄i, aij = a̲ij or āij, and bij = b̲ij or b̄ij, i, j = 1, 2, . . . , n, i.e., (20) holds for i = 1, 2, . . . , 2^{2n²+n}. By Schur's complement (Boyd, Ghaoui, Feron, & Balakrishnan, 1994), (20) can be converted into the inequalities (19) for i = 1, 2, . . . , 2^{2n²+n}. □

Similarly to the proofs of Theorems 7 and 8, we can obtain the following theorem.

Theorem 9. Assume that Assumptions A2, A3 and A4 hold. DRNN (14) with its uncertain parameters in (2) is globally asymptotically robustly stable, independent of the magnitude of time delays, if there exist positive definite symmetric matrices P > 0 and Q > 0, and a positive diagonal matrix R = diag(r1, r2, . . . , rn), such that the following LMIs hold:

[ −PDi − Di P, PAi, PBi; Ai^T P, Ξ22^i, RBi; Bi^T P, Bi^T R, −(1 − µ)Q ] < 0,   (21)

for i = 1, 2, . . . , 2^{2n²+n}, where Ξ22^i = −2RDi K^{−1} + RAi + Ai^T R + Q.
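Theorems 7–9 reduce robust stability to a finite family of LMIs, one per vertex (Di, Ai, Bi), which standard semidefinite programming tools can check. The sketch below encodes the LMIs (21) of Theorem 9; it assumes the cvxpy package with an SDP-capable solver, and the interval data are the reconstructed bounds of Example 2 in Section 5:

```python
import itertools
import numpy as np
import cvxpy as cp

n, mu = 2, 0.0                      # constant delays => mu = 0
K_inv = np.eye(n)                   # K = I since k_1 = k_2 = 1

D_lo, D_up = np.diag([13., 15.]), np.diag([17., 21.])
A_lo = np.array([[1., 3.], [-1., 4.]]); A_up = np.array([[5., 3.], [1., 6.]])
B_lo = np.array([[-2., 2.], [-5., 5.]]); B_up = np.array([[4., 2.], [2., 6.]])

def vertices(M_lo, M_up):
    # All matrices whose entries are endpoints of the entrywise intervals,
    # with duplicates (from constant entries) filtered out.
    seen = set()
    for combo in itertools.product(*zip(M_lo.ravel(), M_up.ravel())):
        if combo not in seen:
            seen.add(combo)
            yield np.array(combo).reshape(M_lo.shape)

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
r = cp.Variable(n)                  # diagonal of R
R = cp.diag(r)
eps = 1e-4
cons = [P >> eps * np.eye(n), Q >> eps * np.eye(n), r >= eps]
for D, A, B in itertools.product(vertices(D_lo, D_up),
                                 vertices(A_lo, A_up),
                                 vertices(B_lo, B_up)):
    Xi22 = -2 * R @ D @ K_inv + R @ A + A.T @ R + Q
    Xi = cp.bmat([[-P @ D - D @ P, P @ A, P @ B],
                  [A.T @ P, Xi22, R @ B],
                  [B.T @ P, B.T @ R, -(1 - mu) * Q]])
    cons.append(Xi << -eps * np.eye(3 * n))
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status)                  # 'optimal' indicates the LMIs are feasible
```

Note that the number of vertex LMIs grows as 2^{2n²+n}, so this brute-force check is only practical for small n.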

Remark 3. Many robust stability criteria (e.g., those in Cao and Chen (2004) and Chen et al. (2005)) are in scalar form or can be transformed to scalar form, such as the M-matrix form. Moreover, many robust stability criteria are in matrix norm form; e.g., Arik (2003), Bao et al. (2012), Cao et al. (2005), Ensari and Arik (2010), Faydasicok and Arik (2012a, 2012b, 2013), Guo and Huang (2009), Liao and Yu (1998), Ozcan and Arik (2006), Qi (2007) and Wu et al. (2011). In particular, some of them are in LMI form; e.g., Li et al. (2004), Singh (2005) and Zhang et al. (2007). It is worth noting that the techniques in Arik (2003), Bao et al. (2012), Cao and Chen (2004), Cao et al. (2005), Chen et al. (2005), Ensari and Arik (2010), Faydasicok and Arik (2012a, 2012b, 2013), Guo and Huang (2009),

Liao and Yu (1998), Li et al. (2004), Ozcan and Arik (2006), Qi (2007), Singh (2005), Wu et al. (2011) and Zhang et al. (2007) ignore the neuronal excitatory and inhibitory effects. In contrast, in Theorems 7–9, the stability conditions are derived in the form of LMIs with consideration of the neuronal excitatory and inhibitory effects. There are two advantages: first, there is no need to tune any parameters or matrices to derive robust stability criteria; second, the criteria can be efficiently verified by solving LMIs numerically.

Remark 4. In this paper, we apply the proposed new method to the stability results in Cao and Wang (2003, 2005b) and Zhang et al. (2008) only. As stated in the introduction, there exist numerous results on the stability of neural networks (Arik, 2002; Cao & Wang, 2003, 2005a, 2005b; Cao & Zhou, 1998; Forti & Tesi, 1995; Gao et al., 2013; He et al., 2007; Liao et al., 2002; Mahmouda & Xia, 2011; Wang et al., 2005; Wu et al., 2012; Zhang et al., 2010, 2008). Our results here suggest that it should be worthwhile to reconsider the stability of those neural network models by incorporating the interval uncertainties of parameters.

Remark 5. In Wu and Zeng (2012), Wen, Zeng, and Huang (2012), Zhang, Shen, Quan, and Sun (2012, 2013) and Zhang and Shen (2013), the dynamical behaviors of memristor-based neural networks with time delays, which are discontinuous state-dependent switching systems, were considered. First, the differential equations describing the memristor-based neural network models were transformed into differential inclusions; i.e., the memristor-based neural networks were transformed into interval neural networks. Then, the dynamics of the interval neural networks were studied. Therefore, we can apply the proposed new method to analyze the dynamics of memristor-based neural networks.

5. Comparison and examples

In this section, by further analyzing the results and giving some examples, we compare our results with the existing results in the literature. For convenience, we introduce the following notations:

A* = (1/2)(Ā + A̲), A_* = (1/2)(Ā − A̲), B* = (1/2)(B̄ + B̲), B_* = (1/2)(B̄ − B̲),
σ1(B) = [ ∥ |B*^T B*| + 2|B*^T| B_* + B_*^T B_* ∥2 ]^{1/2},
σ2(B) = ∥B*∥2 + ∥B_*∥2,
σ3(B) = [ ∥B*∥2² + ∥B_*∥2² + 2∥B_*^T |B*|∥2 ]^{1/2},
σ4(B) = ∥|B|max∥2.

Next, we present some important lemmas, which are used to compare our results with those in the previous literature.

Lemma 12 (Faydasicok & Arik, 2013). Let B be any real matrix in ℬ := {B = (bij)n×n : B̲ ≤ B ≤ B̄, i.e., b̲ij ≤ bij ≤ b̄ij, i, j = 1, 2, . . . , n}. Then ∥B∥2 ≤ σ1(B).

Lemma 13 (Cao et al., 2005). Let B be any real matrix in ℬ. Then ∥B∥2 ≤ σ2(B).

Lemma 14 (Ensari & Arik, 2010). Let B be any real matrix in ℬ. Then ∥B∥2 ≤ σ3(B).

Lemma 15 (Qi, 2007). Let B be any real matrix in ℬ. Then ∥B∥2 ≤ σ4(B).

Lemma 16 (Ensari & Arik, 2010). Let x = (x1, x2, . . . , xn)^T ∈ R^n and A ∈ 𝒜 := {A = (aij)n×n : A̲ ≤ A ≤ Ā, i.e., a̲ij ≤ aij ≤ āij, i, j = 1, 2, . . . , n}. Then, for any positive diagonal matrix P, the following inequality holds:

x^T (PA + A^T P) x ≤ −|x^T| S |x|,

where S = (sij)n×n with sii = −2 pi āii and sij = −max{|pi āij + pj āji|, |pi a̲ij + pj a̲ji|} for i ≠ j.

Lemma 17 (Qi, 2007). Let x = (x1, x2, . . . , xn)^T ∈ R^n and A ∈ 𝒜. Then, for any positive diagonal matrix P, the following inequality holds:

x^T (PA + A^T P) x ≤ x^T (PA* + A*^T P + ∥PA_* + A_*^T P∥2 I) x.

In order to make the comparison precise, we restate the previous global robust stability results derived for DRNN (14). In Faydasicok and Arik (2012b, 2013), Qi (2007) and Shao et al. (2010), the authors presented the following results when σm(B) = σ1(B), σ2(B), σ3(B), and σ4(B), respectively. Assume that Assumptions A3 and A4 hold and τj(t) = τj, j = 1, 2, . . . , n, are constant. DRNN (14) with its uncertain parameters in (2) is globally robustly asymptotically stable if there exists a positive diagonal matrix P = diag(p1, p2, . . . , pn) such that

Φ1 = 2PD̲K^{−1} − (PA* + A*^T P + ∥PA_* + A_*^T P∥2 I) − 2∥P∥2 σm(B) I > 0,

where σm(B) = min{σ1(B), σ2(B), σ3(B), σ4(B)}.

In Ensari and Arik (2010), Faydasicok and Arik (2013), Ozcan and Arik (2006) and Singh (2007), the authors presented the following results when σm(B) = σ1(B), σ2(B), σ3(B), and σ4(B), respectively. Assume that Assumptions A3 and A4 hold and τj(t) = τj, j = 1, 2, . . . , n, are constant. DRNN (14) with its uncertain parameters in (2) is globally robustly asymptotically stable if there exists a positive diagonal matrix P = diag(p1, p2, . . . , pn) such that

Φ2 = 2PD̲K^{−1} + S − 2∥P∥2 σm(B) I > 0,

where σm(B) = min{σ1(B), σ2(B), σ3(B), σ4(B)}, and S is defined as in Lemma 16.

By Lemmas 12–17 and Theorem 8, we can obtain the following corollaries of Theorem 8.

Corollary 1. Assume that Assumptions A3 and A4 hold and τj(t) = τj, j = 1, 2, . . . , n, are constant. DRNN (14) with its uncertain parameters in (2) is globally robustly asymptotically stable, independent of the magnitude of time delays, if there exist a positive diagonal matrix

P = diag(p1, p2, . . . , pn) and a positive definite matrix Q > 0 such that the following inequality holds:

Φ3 = 2PD̲K^{−1} − (PA* + A*^T P + ∥PA_* + A_*^T P∥2 I) − ∥P∥2² ∥Q^{−1}∥2 σm²(B) I − Q > 0,

where σm(B) = min{σ1(B), σ2(B), σ3(B), σ4(B)}.

Proof. Let x = (x1, x2, . . . , xn)^T. Since Bi ∈ ℬ = [B̲, B̄], i = 1, 2, . . . , 2^{2n²+n}, according to Lemmas 12–15 we have

∥Bi∥2 ≤ σm(B), i = 1, 2, . . . , 2^{2n²+n}.

Since Ai ∈ 𝒜 = [A̲, Ā], i = 1, 2, . . . , 2^{2n²+n}, according to Lemma 17 we have

x^T (PAi + Ai^T P) x ≤ x^T (PA* + A*^T P + ∥PA_* + A_*^T P∥2 I) x, i = 1, 2, . . . , 2^{2n²+n}.

Then,

x^T [2PDi K^{−1} − (PAi + Ai^T P) − PBi Q^{−1} Bi^T P − Q] x
= x^T (2PDi K^{−1}) x − x^T (PAi + Ai^T P) x − x^T (PBi Q^{−1} Bi^T P) x − x^T Q x
≥ x^T (2PD̲K^{−1}) x − x^T (PA* + A*^T P + ∥PA_* + A_*^T P∥2 I) x − x^T ∥P∥2² ∥Q^{−1}∥2 ∥Bi∥2² x − x^T Q x
≥ x^T (2PD̲K^{−1}) x − x^T (PA* + A*^T P + ∥PA_* + A_*^T P∥2 I) x − x^T ∥P∥2² ∥Q^{−1}∥2 σm²(B) x − x^T Q x
= x^T Φ3 x > 0, i = 1, 2, . . . , 2^{2n²+n}.

Therefore, for i = 1, 2, . . . , 2^{2n²+n},

−2PDi K^{−1} + PAi + Ai^T P + PBi Q^{−1} Bi^T P + Q < 0.

When τj(t) = τj, µ = 0. According to Theorem 8, DRNN (14) with its uncertain parameters in (2) is globally robustly asymptotically stable, independent of the magnitude of time delays. □

Corollary 2. Assume that Assumptions A3 and A4 hold and τj(t) = τj, j = 1, 2, . . . , n, are constant. DRNN (14) with its uncertain parameters in (2) is globally robustly asymptotically stable, independent of the magnitude of time delays, if there exist a positive diagonal matrix P = diag(p1, p2, . . . , pn) and a positive definite matrix Q > 0 such that the following inequality holds:

Φ4 = 2PD̲K^{−1} + S − ∥P∥2² ∥Q^{−1}∥2 σm²(B) I − Q > 0,

where σm(B) = min{σ1(B), σ2(B), σ3(B), σ4(B)} and S is defined as in Lemma 16.

Proof. Let x = (x1, x2, . . . , xn)^T. Since Bi ∈ ℬ = [B̲, B̄], i = 1, 2, . . . , 2^{2n²+n}, according to Lemmas 12–15 we have

∥Bi∥2 ≤ σm(B), i = 1, 2, . . . , 2^{2n²+n}.

Since Ai ∈ 𝒜 = [A̲, Ā], i = 1, 2, . . . , 2^{2n²+n}, according to Lemma 16 we have

x^T (PAi + Ai^T P) x ≤ −|x^T| S |x|, i = 1, 2, . . . , 2^{2n²+n}.

Then,

x^T [2PDi K^{−1} − (PAi + Ai^T P) − PBi Q^{−1} Bi^T P − Q] x
= x^T (2PDi K^{−1}) x − x^T (PAi + Ai^T P) x − x^T (PBi Q^{−1} Bi^T P) x − x^T Q x
≥ |x^T| (2PD̲K^{−1}) |x| + |x^T| S |x| − x^T ∥P∥2² ∥Q^{−1}∥2 ∥Bi∥2² x − |x^T| Q |x|
≥ |x^T| (2PD̲K^{−1}) |x| + |x^T| S |x| − x^T ∥P∥2² ∥Q^{−1}∥2 σm²(B) x − |x^T| Q |x|
= |x^T| Φ4 |x| > 0, i = 1, 2, . . . , 2^{2n²+n}.

Therefore, for i = 1, 2, . . . , 2^{2n²+n},

−2PDi K^{−1} + PAi + Ai^T P + PBi Q^{−1} Bi^T P + Q < 0.

When τj(t) = τj, µ = 0. According to Theorem 8, DRNN (14) with its uncertain parameters in (2) is globally robustly asymptotically stable, independent of the magnitude of time delays. □

Remark 6. Let Q = ∥P∥2 σm(B) I. Then Φ3 > 0 and Φ4 > 0 are equivalent to Φ1 > 0 and Φ2 > 0, respectively. Therefore, the results in Faydasicok and Arik (2012b, 2013), Qi (2007) and Shao et al. (2010) are corollaries of Corollary 1, and the results in Ensari and Arik (2010), Faydasicok and Arik (2013), Ozcan and Arik (2006) and Singh (2007) are corollaries of Corollary 2. So, the conditions of Theorem 8 in this paper are weaker than those in Ensari and Arik (2010), Faydasicok and Arik (2012b, 2013), Ozcan and Arik (2006), Qi (2007), Shao et al. (2010) and Singh (2007). When σm(B) = σ2(B) in Corollary 2, we can obtain the conclusions in Cao, Li, and Han (2006). However, the price we have to pay is to verify up to 2^{2n²+n} LMIs to ascertain robust stability.

Next, we give an example to compare the conditions in Theorem 8 with those in Corollaries 1 and 2 when P = I.

Example 1. Let the network parameters of DRNN (14) be bounded as follows:

D̲ = [5 0; 0 8], D̄ = [15 0; 0 20], A̲ = [−a 0; 0 −a], Ā = [3a 0; 0 5a], B̲ = [−2a 0; 0 −a], B̄ = [2a 0; 0 3a],

k1 = k2 = 1, and τj(t) = τj constant (so µ = 0). According to the given parameters, the number of combinations (Di, Ai, Bi) is 2⁶ = 64, because only the six variables d1, d2, a11, a22, b11, b22 vary in intervals, while a12, a21, b12, b21 are constants equal to zero. Choose P = I and

Q = [2a 0; 0 3a].

From Theorem 8, we need

−2PDi K^{−1} + PAi + Ai^T P + (1/(1−µ)) PBi Q^{−1} Bi^T P + Q = −2Di + Ai + Ai^T + Bi Q^{−1} Bi^T + Q < 0, i = 1, . . . , 64.

It is easy to see that when d1 and d2 take their minimum values and a11, a22, b11, b22 take their maximum values, there exists i0 such that (D, A, B) = (Di0, Ai0, Bi0) and

−2Di0 + Ai0 + Ai0^T + Bi0 Q^{−1} Bi0^T + Q ≥ −2Di + Ai + Ai^T + Bi Q^{−1} Bi^T + Q, i = 1, . . . , 64.

Hence, we only need 0 > −2Di0 + Ai0 + Ai0^T + Bi0 Q^{−1} Bi0^T + Q, i.e.,

2 [5 0; 0 8] − 2 [3a 0; 0 5a] − [2a 0; 0 3a] [2a 0; 0 3a]^{−1} [2a 0; 0 3a]^T − [2a 0; 0 3a] > 0,

which reduces to diag(10 − 10a, 16 − 16a) > 0. Therefore, we only need a < 1.


From the given network parameters, we obtain

A* = [a 0; 0 2a], A_* = [2a 0; 0 3a], B* = [0 0; 0 a], B_* = [2a 0; 0 2a].

Then,

A* + A*^T + ∥A_* + A_*^T∥2 I = [8a 0; 0 10a], S = [−6a 0; 0 −10a],

and

σm(B) = σ1(B) = σ2(B) = σ3(B) = σ4(B) = 3a.

Hence Φ1 > 0 provided that a < 5/7, and Φ2 > 0 provided that a < 5/6. Thus, for this example, the conditions imposed on the network parameters by Theorem 8 (which only requires a < 1) are weaker than those in Ensari and Arik (2010), Faydasicok and Arik (2012b, 2013), Ozcan and Arik (2006), Qi (2007), Shao et al. (2010) and Singh (2007).
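These thresholds can be double-checked numerically. Below is a minimal sketch; the matrices follow the closed forms computed above for P = I, and the test values of a are illustrative:

```python
import numpy as np

def example1_tests(a: float):
    D_lo   = np.diag([5.0, 8.0])
    A_star = np.diag([a, 2 * a]); A_sub = np.diag([2 * a, 3 * a])  # A*, A_*
    S      = np.diag([-6 * a, -10 * a])
    sig    = 3 * a                                                  # sigma_m(B)
    I2     = np.eye(2)
    Phi1 = 2 * D_lo - (2 * A_star + np.linalg.norm(2 * A_sub, 2) * I2) - 2 * sig * I2
    Phi2 = 2 * D_lo + S - 2 * sig * I2
    # Worst-vertex test of Theorem 8 with Q = diag(2a, 3a):
    Q, A_up, B_m = np.diag([2 * a, 3 * a]), np.diag([3 * a, 5 * a]), np.diag([2 * a, 3 * a])
    T8 = -2 * D_lo + 2 * A_up + B_m @ np.linalg.inv(Q) @ B_m.T + Q
    pd = lambda M: bool(np.all(np.linalg.eigvalsh(M) > 0))
    return pd(Phi1), pd(Phi2), pd(-T8)

for a in (0.7, 0.8, 0.95):
    print(a, example1_tests(a))
# -> (True, True, True), (False, True, True), (False, False, True):
#    Phi1 needs a < 5/7, Phi2 needs a < 5/6, Theorem 8 only needs a < 1.
```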

Next, we give an example to verify that the LMI conditions for robust stability are effective and can be computed with the MATLAB LMI toolbox. Let us take Theorem 9 as an example.

Example 2. Let the network parameters of DRNN (14) be bounded as follows:

D̲ = [13 0; 0 15], D̄ = [17 0; 0 21], A̲ = [1 3; −1 4], Ā = [5 3; 1 6], B̲ = [−2 2; −5 5], B̄ = [4 2; 2 6],

f1(s) = f2(s) = 0.5(|s + 1| − |s − 1|), and τj(t) = τj = 0.5, j = 1, 2, are constant. So we have k1 = k2 = 1 and µ = 0. According to Theorem 9 and using the MATLAB LMI toolbox, we obtain the following feasible solutions to the LMIs (21):

P = [12.0051 −2.0842; −2.0842 4.8092] > 0,
Q = [375.2903 2.0214; 2.0214 330.9322] > 0,

and

R = [57.6092 0; 0 40.7902] > 0.

Therefore, DRNN (14) with its uncertain parameters in (2) is globally robustly asymptotically stable, independent of the magnitude of time delays. Using the Monte Carlo method, i.e., selecting 50 arrays of data (D, A, B) of the network parameters randomly from (𝒟, 𝒜, ℬ), the trajectories of the state variables of DRNN (14) with the initial states φ(s) = (−1, −1.5)^T, (2, 3)^T and (sin t, cos t)^T for s ∈ [−1, 0] are drawn in Figs. 1–3, respectively.

Fig. 1. Time responses of the state variables of DRNN (14) with the initial states φ(s) = (−1, −1.5)^T, s ∈ [−0.1, 0] in Example 2.

Fig. 2. Time responses of the state variables of DRNN (14) with the initial states φ(s) = (2, 3)^T, s ∈ [−0.1, 0] in Example 2.

Fig. 3. Time responses of the state variables of DRNN (14) with the initial states φ(s) = (sin t, cos t)^T, s ∈ [−0.1, 0] in Example 2.
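The Monte Carlo experiment behind Figs. 1–3 can be reproduced with a simple fixed-step Euler scheme and a history buffer for the constant delay. This is a sketch under the reconstructed interval data of Example 2; the step size, horizon, and random draw are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                # f_i(s) = 0.5(|s + 1| - |s - 1|)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def simulate(D, A, B, tau=0.5, T=5.0, h=1e-3, phi=lambda s: np.array([-1.0, -1.5])):
    # Euler integration of dx/dt = -Dx + Af(x) + Bf(x(t - tau)) + u with u = 0.
    lag, steps = int(round(tau / h)), int(round(T / h))
    X = np.empty((lag + steps + 1, 2))
    for k in range(lag + 1):             # history segment on [-tau, 0]
        X[k] = phi(-tau + k * h)
    for k in range(lag, lag + steps):
        X[k + 1] = X[k] + h * (-D @ X[k] + A @ f(X[k]) + B @ f(X[k - lag]))
    return X[lag:]

D_lo, D_up = np.diag([13., 15.]), np.diag([17., 21.])
A_lo = np.array([[1., 3.], [-1., 4.]]); A_up = np.array([[5., 3.], [1., 6.]])
B_lo = np.array([[-2., 2.], [-5., 5.]]); B_up = np.array([[4., 2.], [2., 6.]])
for _ in range(50):                      # 50 random parameter arrays, as in the text
    D = np.diag(rng.uniform(np.diag(D_lo), np.diag(D_up)))
    A = rng.uniform(A_lo, A_up)
    B = rng.uniform(B_lo, B_up)
    print(simulate(D, A, B)[-1])         # every trajectory settles near the equilibrium
```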

6. Concluding remarks

This paper presents a systematic method for analyzing the robust stability of interval recurrent neural networks. By utilizing existing algebraic stability criteria for deterministic neural networks, the method provides a general tool to derive robust stability criteria for interval neural networks with uncertain parameters. Specifically, using the proposed method, several new sufficient conditions are obtained for the global asymptotic/exponential robust stability of interval neural networks. The results are compared with the relevant robust stability results published in the literature, and it is theoretically proven and numerically demonstrated that the results herein generalize most of the existing results on the robust stability of neural networks with time delays. Note that the resulting


robust stability criteria are expressed in the form of LMIs, and the neuronal excitatory and inhibitory effects on neural networks are taken into consideration. Therefore, the results are less restrictive than previous ones.

Many avenues are open for further investigation. Future works may aim at the following topics: (i) reduction of the computational complexity of the conditions in the main theorems; (ii) the robust stability of more complicated neural networks, such as neural networks with discontinuous activation functions, neural networks with random disturbances, and coupled neural networks; (iii) other dynamical behaviors of interval neural networks with time delays; (iv) real applications to large-scale neural systems; (v) the dynamical behaviors of memristor-based neural networks with time delays.

References

Arik, S. (2002). An analysis of global asymptotic stability of delayed cellular neural networks. IEEE Transactions on Neural Networks, 13, 1239–1242.
Arik, S. (2003). Global robust stability of delayed neural networks. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 50, 156–160.
Baldi, P., & Atiya, A. F. (1994). How delays affect neural dynamics and learning. IEEE Transactions on Neural Networks, 5, 612–621.
Bao, G., Wen, S., & Zeng, Z. (2012). Robust stability analysis of interval fuzzy Cohen–Grossberg neural networks with piecewise constant argument of generalized type. Neural Networks, 33, 32–41.
Berman, A., & Plemmons, R. J. (1979). Nonnegative matrices in the mathematical sciences. New York: Academic.
Boyd, S., Ghaoui, L. E., Feron, E., & Balakrishnan, V. (1994). Linear matrix inequalities in system and control theory. Philadelphia, PA: SIAM.
Cao, J., & Chen, T. (2004). Globally exponentially robust stability and periodicity of delayed neural networks. Chaos, Solitons and Fractals, 22, 957–963.
Cao, J., Huang, D.-S., & Qu, Y. (2005). Global robust stability of delayed recurrent neural networks. Chaos, Solitons and Fractals, 23, 221–229.
Cao, J., Li, H., & Han, L. (2006). Novel results concerning global robust stability of delayed neural networks. Nonlinear Analysis: Real World Applications, 7, 458–469.
Cao, J., & Wang, J. (2003). Global asymptotic stability of a general class of recurrent neural networks with time-varying delays. IEEE Transactions on Circuits and Systems I, 50, 34–44.
Cao, J., & Wang, J. (2005a). Global asymptotic and robust stability of recurrent neural networks with time delays. IEEE Transactions on Circuits and Systems I, 52, 417–426.
Cao, J., & Wang, J. (2005b). Global exponential stability and periodicity of recurrent neural networks with time delays. IEEE Transactions on Circuits and Systems I, 52, 920–931.
Cao, J., & Zhou, D. (1998). Stability analysis of delayed cellular neural networks. Neural Networks, 11, 1601–1605.
Chen, A., Cao, J., & Huang, L. (2005). Global robust stability of interval cellular neural networks with time-varying delays. Chaos, Solitons and Fractals, 23, 787–799.
Ensari, T., & Arik, S. (2010). New results for robust stability of dynamical neural networks with discrete time delays. Expert Systems with Applications, 37, 5925–5930.
Faydasicok, O., & Arik, S. (2012a). Equilibrium and stability analysis of delayed neural networks under parameter uncertainties. Applied Mathematics and Computation, 218, 6716–6726.
Faydasicok, O., & Arik, S. (2012b). Robust stability analysis of a class of neural networks with discrete time delays. Neural Networks, 29, 52–59.
Faydasicok, O., & Arik, S. (2013). A new upper bound for the norm of interval matrices with application to robust stability analysis of delayed neural networks. Neural Networks, 44, 64–71.
Forti, M., & Tesi, A. (1995). New conditions for global stability of neural networks with application to linear and quadratic programming problems. IEEE Transactions on Circuits and Systems I, 42, 354–366.
Gao, H., Song, X., Ding, L., Liu, D., & Hao, M. (2013). New conditions for global exponential stability of continuous-time neural networks with delays. Neural Computing and Applications, 22, 41–48.
Guo, Z., & Huang, L. (2009). LMI conditions for global robust stability of delayed neural networks with discontinuous neuron activations. Applied Mathematics and Computation, 215, 889–900.
He, Y., Liu, G., & Rees, D. (2007). New delay-dependent stability criteria for neural networks with time-varying delay. IEEE Transactions on Neural Networks, 18, 310–314.
Liao, X., Chen, G., & Sanchez, E. N. (2002). Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach. Neural Networks, 15, 855–866.
Liao, X., & Yu, J. (1998). Robust stability for interval Hopfield neural networks with time delay. IEEE Transactions on Neural Networks, 9, 1042–1045.
Li, C., Liao, X., & Zhang, R. (2004). Global robust asymptotical stability of multi-delayed interval neural networks: an LMI approach. Physics Letters A, 328, 452–462.
Mahmouda, M. S., & Xia, Y. (2011). Improved exponential stability analysis for delayed recurrent neural networks. Journal of the Franklin Institute, 348, 201–211.
Ozcan, N., & Arik, S. (2006). Global robust stability analysis of neural networks with multiple time delays. IEEE Transactions on Circuits and Systems I: Regular Papers, 53, 166–176.
Qi, H. (2007). New sufficient conditions for global robust stability of delayed neural networks. IEEE Transactions on Circuits and Systems I: Regular Papers, 54, 1131–1141.
Shao, J.-L., Huang, T.-Z., & Zhou, S. (2010). Some improved criteria for global robust exponential stability of neural networks with time-varying delays. Communications in Nonlinear Science and Numerical Simulation, 15, 3782–3794.
Shen, Y., & Wang, J. (2007). Noise-induced stabilization of the recurrent neural networks with mixed time-varying delays and Markovian-switching parameters. IEEE Transactions on Neural Networks, 18, 1857–1862.
Shen, Y., & Wang, J. (2008). An improved algebraic criterion for global exponential stability of recurrent neural networks with time-varying delays. IEEE Transactions on Neural Networks, 19, 528–531.
Shen, Y., & Wang, J. (2012). Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances. IEEE Transactions on Neural Networks and Learning Systems, 23, 87–96.
Singh, V. (2005). Global robust stability of delayed neural networks: an LMI approach. IEEE Transactions on Circuits and Systems II: Express Briefs, 52, 33–36.
Singh, V. (2007). Global robust stability of delayed neural networks: estimating upper limit of norm of delayed connection weight matrix. Chaos, Solitons and Fractals, 32, 259–263.
Wang, Z., Liu, Y., & Liu, X. (2005). On global asymptotic stability of neural networks with discrete and distributed delays. Physics Letters A, 345, 299–308.
Wen, S., Zeng, Z., & Huang, T. (2012). Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays. Neurocomputing, 97, 233–240.
Wu, Z., Lam, J., Su, H., & Chu, J. (2012). Stability and dissipativity analysis of static neural networks with time delay. IEEE Transactions on Neural Networks and Learning Systems, 23, 199–209.
Wu, H., Tao, F., Qin, L., Shi, R., & He, L. (2011). Robust exponential stability for interval neural networks with delays and non-Lipschitz activation functions. Nonlinear Dynamics, 66, 479–487.
Wu, A., & Zeng, Z. (2012). Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays. Neural Networks, 36, 1–10.
Yu, W., Cao, J., & Wang, J. (2007). An LMI approach to global asymptotic stability of the delayed Cohen–Grossberg neural network via nonsmooth analysis. Neural Networks, 20, 810–818.
Zeng, Z., & Wang, J. (2006a). Improved conditions for global exponential stability of recurrent neural networks with time-varying delays. IEEE Transactions on Neural Networks, 17, 623–635.
Zeng, Z., & Wang, J. (2006b). Global exponential stability of recurrent neural networks with time-varying delays in the presence of strong external stimuli. Neural Networks, 19, 1528–1537.
Zeng, Z., Wang, J., & Liao, X. (2003). Global exponential stability of a general class of recurrent neural networks with time-varying delays. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 50, 1353–1358.
Zeng, Z., Wang, J., & Liao, X. (2004). Stability analysis of delayed cellular neural networks described using cloning templates. IEEE Transactions on Circuits and Systems I: Regular Papers, 51, 2313–2324.
Zeng, Z., Wang, J., & Liao, X. (2005). Global asymptotic stability and global exponential stability of neural networks with unbounded time-varying delays. IEEE Transactions on Circuits and Systems II: Express Briefs, 52, 168–173.
Zhang, H., Liu, Z., Huang, G., & Wang, Z. (2010). Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay. IEEE Transactions on Neural Networks, 21, 91–106.
Zhang, G., & Shen, Y. (2013). New algebraic criteria for synchronization stability of chaotic memristive neural networks with time-varying delays. IEEE Transactions on Neural Networks and Learning Systems, 24, 1701–1707.
Zhang, G., Shen, Y., Quan, Y., & Sun, J. (2012). Global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays. Neurocomputing, 97, 149–154.
Zhang, G., Shen, Y., Quan, Y., & Sun, J. (2013). Global exponential periodicity and stability of a class of memristor-based recurrent neural networks with multiple delays. Information Sciences, 232, 386–396.
Zhang, H., Wang, Z., & Liu, D. (2007). Robust exponential stability of recurrent neural networks with multiple time-varying delays. IEEE Transactions on Circuits and Systems II: Express Briefs, 54, 730–734.
Zhang, H., Wang, Z., & Liu, D. (2008). Global asymptotic stability of recurrent neural networks with multiple time-varying delays. IEEE Transactions on Neural Networks, 19, 855–873.
