
Global Exponential Stability for Complex-Valued Recurrent Neural Networks With Asynchronous Time Delays

Xiwei Liu, Member, IEEE, and Tianping Chen, Senior Member, IEEE

Abstract— In this paper, we investigate the global exponential stability of complex-valued recurrent neural networks with asynchronous time delays by decomposing the complex-valued networks into their real and imaginary parts and constructing an equivalent real-valued system. The network model is described by a continuous-time equation. There are two main differences between this paper and previous works: 1) time delays can be asynchronous, i.e., the delays between different nodes can be different, which makes our model more general and 2) we prove the exponential convergence directly, so the existence and uniqueness of the equilibrium point is a direct consequence of the exponential convergence. Using three generalized norms, we present some sufficient conditions for the uniqueness and global exponential stability of the equilibrium point of delayed complex-valued neural networks. These conditions are less restrictive because they take into account the excitatory and inhibitory effects between neurons, so previous works of other researchers can be extended. Finally, some numerical simulations are given to demonstrate the correctness of the obtained results.

Index Terms— Asynchronous, complex-valued, global exponential stability, recurrent neural networks, time delays.

I. INTRODUCTION

Recurrently connected neural networks, including Hopfield neural networks [1], Cohen–Grossberg neural networks [2], and cellular neural networks [3], [4], have been extensively studied in the past decades and have found many applications in different areas, such as signal and image processing, pattern recognition, optimization problems, associative memories, and so on. Until now, many criteria for the stability of the equilibrium have been obtained in the literature (see [5]–[20] and references therein).

Manuscript received November 15, 2014; revised March 3, 2015; accepted March 16, 2015. Date of publication April 14, 2015; date of current version February 15, 2016. This work was supported in part by the National Science Foundation of China under Grant 61203149, Grant 61273211, and Grant 61233016, in part by the National Basic Research Program (973 Program) of China under Grant 2010CB328101, in part by the Shanghai Municipal Education Commission and Shanghai Education Development Foundation under Grant 11CG22, in part by the Fundamental Research Funds for the Central Universities, and in part by the Program for Young Excellent Talents in Tongji University. (Corresponding author: Tianping Chen.)
X. Liu is with the Department of Computer Science and Technology, Tongji University, Shanghai 200092, China, and also with the Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai 200092, China (e-mail: [email protected]).
T. Chen is with the School of Computer Science and the School of Mathematical Sciences, Fudan University, Shanghai 200433, China (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNNLS.2015.2415496

It is natural to generalize real-valued systems to complex-valued systems [21], which can be used in nonlinear quantum systems, reaction–advection–diffusion systems, heat equations, Petri nets, chaotic systems, and so on. Many approaches have also been developed; for example, decomposing a complex-valued system into two real-valued systems has been applied to some nonlinear systems and regular networks (see [22], [23] and references therein). Recently, as an important part of nonlinear complex-valued systems, complex-valued neural network (CVNN) models have been proposed and have attracted more and more attention from various areas in science and technology (see [24]–[47] and references therein). A CVNN can be regarded as an extension of real-valued recurrent neural networks, with complex-valued states, outputs, connection weights, and activation functions. For example, CVNNs are suited to deal with complex states composed of amplitude and phase, which is one of the core concepts in physical systems dealing with electromagnetic, light, ultrasonic, and quantum waves. Moreover, many applications depend heavily on the dynamical behaviors of the networks; therefore, analysis of these dynamical behaviors is a necessary step toward the practical design of neural networks. In [35], a CVNN model on time scales is studied based on the delta differential operator. In [36]–[39], discrete-time CVNNs are discussed. The stability of complex-valued impulsive systems is investigated in [40]. Until now, various methods have been used to study the stability of CVNNs, such as the Lyapunov functional method [41], the synthesis method [42], and so on.
In particular, in hardware implementation, time delays inevitably occur due to the finite switching speed of the amplifiers and the communication time. Moreover, to process moving images, one must introduce time delays in the signals transmitted among the cells. Time delay is also frequently a source of oscillation and instability in neural networks. Therefore, neural networks with time delays have much more complicated dynamics, and the stability of delayed neural networks has become a hot topic of great theoretical and practical importance; a great deal of significant results have been reported in the literature. For example, Zhang et al. [43] investigate the stability and synchronization of discrete-time CVNNs with time-varying delays; Fang and Sun [44] study the stability of complex-valued impulsive systems with delay. The global exponential and asymptotic stability of CVNNs with



time delays is studied in [45] under two assumptions on the activation functions, while Zhang et al. [46] and Fang and Sun [47] point out mistakes in the proof of [45] and give some new conditions and criteria to ensure the existence, uniqueness, and global asymptotic stability of the equilibrium point of CVNNs.
In practice, the interconnections are generally asynchronous; that is to say, the inevitable time delays between different nodes are generally different. For example, in modeling vehicular traffic flow [48], [49], the reaction delays of drivers should be considered, and for different drivers, the reaction delays are different, depending on physical conditions, drivers' cognitive and physiological states, and so on. Moreover, in the load balancing problem [50], for a computing network consisting of $n$ computers (also called nodes), besides the different communication delays, the task-transfer delays $\tau_{jk}$ should also be considered, which depend on the number of tasks to be transferred from node $k$ to node $j$. More related examples can be found in [51] and references therein. Hence, based on the above discussions, it is necessary to study the dynamical behavior of neural networks with asynchronous (time-varying) delays.
To the best of our knowledge, there have been few works reporting the stability of CVNNs with asynchronous time delays [52], [53]. For example, Zhang et al. [52] focus on the existence, uniqueness, and global robust stability of the equilibrium point for CVNNs with multiple time delays and parameter uncertainties, with respect to two types of activation functions, while Xu et al. [53] investigate the dynamical behaviors of CVNNs with mixed time delays. However, [45]–[47], [52], and [53] apply the homeomorphism mapping approach proposed in [7] to prove the existence, uniqueness, and global stability of the equilibrium point in two steps: 1) prove the existence of the equilibrium and 2) prove its stability. In [9] and [10], a direct approach to analyze the global and local stability of the networks was first proposed. It was revealed that the finiteness of the trajectory $x(t)$ under some norms, i.e., $\int_0^{\infty}\|\dot{x}(t)\|\,dt < \infty$, is a sufficient condition for the existence and global stability of the equilibrium point. This idea was also used in [13]. In this paper, we will adopt this approach. Moreover, we give several criteria based on three generalized norms: the $L^{\infty}$ norm, the $L^{1}$ norm, and the $L^{2}$ norm, respectively. In particular, based on the $L^{\infty}$-norm, we can discuss networks with time-varying delays.
This paper is organized as follows. In Section II, we give the model description, decompose the complex-valued differential equations into real and imaginary parts, and then recast them into an equivalent real-valued differential system, whose dimension is double that of the original complex-valued system. Some definitions, lemmas, and notations used in this paper are also given. In Section III, we present some criteria for the uniqueness and global exponential stability of the equilibrium point of recurrent neural network models with asynchronous time delays using the generalized ∞-norm, 1-norm, and 2-norm, respectively. Some comparisons with previous M-matrix results are also presented. In Section IV, some numerical simulations under constant and time-varying delays are given to demonstrate the effectiveness of the obtained results. Finally, the conclusion is given and some discussions

about our future investigation of CVNNs are presented in Section V.

II. PRELIMINARIES

In this section, we give some definitions, lemmas, and notations, which will be used throughout this paper. At first, let us give a definition of asynchronous time delays.

Definition 1 (Synchronous and Asynchronous Time Delays): For any node $j$ in a coupled neural network, synchronous time delay means that at time $t$, node $j$ receives the information from the other nodes at the same time $t-\tau_j(t)$, while asynchronous time delays mean that at time $t$, node $j$ receives the information from the other nodes at different times $t-\tau_{jk}(t)$, i.e., for nodes $k_1 \neq k_2$, $\tau_{jk_1}(t)$ and $\tau_{jk_2}(t)$ can be different.

Obviously, network models with asynchronous time delays have a larger scope than those with synchronous time delays. In this paper, we will investigate the following CVNN with asynchronous time delays:

$$\dot{z}_j(t) = -d_j z_j(t) + \sum_{k=1}^{n} a_{jk} f_k(z_k(t)) + \sum_{k=1}^{n} b_{jk} g_k(z_k(t-\tau_{jk})) + u_j, \quad j = 1, \ldots, n \tag{1}$$
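Although the paper's own examples appear in Section IV, a minimal numerical sketch of model (1) may help fix ideas. The following Python fragment integrates (1) by a forward-Euler scheme with per-pair delays $\tau_{jk}$; all parameter values, the tanh-type activation, and the zero initial history are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Minimal Euler simulation of CVNN (1) with asynchronous constant delays.
# All parameter values below are illustrative assumptions.
n, h, T = 2, 0.001, 20.0                       # nodes, step size, horizon
steps = int(T / h)
d = np.array([6.0, 5.0])                       # self-decay rates d_j > 0
A = np.array([[0.3 - 0.2j, 0.1 + 0.1j],
              [-0.2 + 0.1j, 0.2 - 0.3j]])      # instantaneous weights a_jk
B = np.array([[0.1 + 0.1j, -0.1 + 0.2j],
              [0.2 - 0.1j, 0.1 + 0.1j]])       # delayed weights b_jk
tau = np.array([[0.5, 1.0], [0.8, 0.3]])       # asynchronous delays tau_jk
u = np.array([1.0 + 0.5j, -0.5 + 1.0j])        # external inputs u_j
f = g = lambda z: np.tanh(np.real(z)) + 1j * np.tanh(np.imag(z))  # activations

lag = np.ceil(tau / h).astype(int)             # delays measured in steps
hist = np.zeros((steps + lag.max() + 1, n), dtype=complex)  # zero history
for t in range(lag.max(), steps + lag.max()):
    z = hist[t]
    # each node j sees node k's state lag[j, k] steps in the past
    delayed = np.array([sum(B[j, k] * g(hist[t - lag[j, k], k])
                            for k in range(n)) for j in range(n)])
    hist[t + 1] = z + h * (-d * z + A @ f(z) + delayed + u)
print("final state:", hist[-1])                # approaches the equilibrium
```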

where $z_j \in \mathbb{C}$ is the state of the $j$th neuron and $\mathbb{C}$ is the set of complex numbers; $d_j > 0$ represents the positive rate with which the $j$th unit resets its potential to the resting state in isolation when disconnected from the network; $f_j(\cdot): \mathbb{C} \to \mathbb{C}$ and $g_j(\cdot): \mathbb{C} \to \mathbb{C}$ are complex-valued activation functions; $A = (a_{jk})$ and $B = (b_{jk})$ are the complex-valued connection weight matrices without and with time delays, respectively; $\tau_{jk}$ are asynchronous constant time delays; and $u_j \in \mathbb{C}$ is the $j$th external input.

Remark 1: When $\tau_{jk} = \tau$, system (1) becomes the model investigated in [45]; when the activation functions $f_j$ and $g_j$ are real functions, system (1) becomes the model investigated in [10]. Therefore, this model has a larger scope than previous works, and all the results obtained in Section III can be applied to these special cases.

For any complex number $z$, we use $z^R$ and $z^I$ to denote its real and imaginary parts, respectively, so $z = z^R + i z^I$, where $i$ denotes the imaginary unit, that is, $i = \sqrt{-1}$. Now, we introduce some classes of activation functions.

Definition 2: Assume $f_j(z)$ can be decomposed into its real and imaginary parts as $f_j(z) = f_j^R(z^R, z^I) + i f_j^I(z^R, z^I)$, where $z = z^R + i z^I$, $f_j^R(\cdot,\cdot): \mathbb{R}^2 \to \mathbb{R}$, and $f_j^I(\cdot,\cdot): \mathbb{R}^2 \to \mathbb{R}$. Suppose the partial derivatives of $f_j(\cdot,\cdot)$ with respect to $z^R$ and $z^I$: 1) $\partial f_j^R/\partial z^R$; 2) $\partial f_j^R/\partial z^I$; 3) $\partial f_j^I/\partial z^R$; and 4) $\partial f_j^I/\partial z^I$ exist. If these partial derivatives are continuous, positive, and bounded, i.e., there exist positive constants $\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II}$ such that

$$0 < \partial f_j^R/\partial z^R \le \lambda_j^{RR}, \quad 0 < \partial f_j^R/\partial z^I \le \lambda_j^{RI}, \quad 0 < \partial f_j^I/\partial z^R \le \lambda_j^{IR}, \quad 0 < \partial f_j^I/\partial z^I \le \lambda_j^{II} \tag{2}$$

then $f_j(z)$ is said to belong to class $H_1(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$.


Remark 2: If $f_j^R$ and $f_j^I$ are absolutely continuous, then their partial derivatives exist almost everywhere.

Definition 3: Assume $g_j(z)$ can be decomposed into its real and imaginary parts as $g_j(z) = g_j^R(z^R, z^I) + i g_j^I(z^R, z^I)$, where $z = z^R + i z^I$, $g_j^R(\cdot,\cdot): \mathbb{R}^2 \to \mathbb{R}$, and $g_j^I(\cdot,\cdot): \mathbb{R}^2 \to \mathbb{R}$. Suppose the partial derivatives of $g_j(\cdot,\cdot)$ with respect to $z^R$ and $z^I$: 1) $\partial g_j^R/\partial z^R$; 2) $\partial g_j^R/\partial z^I$; 3) $\partial g_j^I/\partial z^R$; and 4) $\partial g_j^I/\partial z^I$ exist. If these partial derivatives are continuous and bounded, i.e., there exist positive constants $\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II}$ such that

$$\big|\partial g_j^R/\partial z^R\big| \le \mu_j^{RR}, \quad \big|\partial g_j^R/\partial z^I\big| \le \mu_j^{RI}, \quad \big|\partial g_j^I/\partial z^R\big| \le \mu_j^{IR}, \quad \big|\partial g_j^I/\partial z^I\big| \le \mu_j^{II} \tag{3}$$

then $g_j(z)$ is said to belong to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$.

Remark 3: Definition 3 is the usual assumption on activation functions in the literature of CVNNs, which can be found in [45], [52], [53] and references therein. The activation functions defined in Definition 2 are more restrictive, but this restriction becomes useful when the signs of the entries in the connection weights are taken into account; that is, there is a tradeoff between the assumption on the activation functions and the final criteria obtained.

By decomposing CVNN (1) into real and imaginary parts, we can get two equivalent real-valued systems

$$\begin{aligned} \dot{z}_j^R(t) = {} & -d_j z_j^R(t) + \sum_{k=1}^{n} a_{jk}^R f_k^R\big(z_k^R(t), z_k^I(t)\big) - \sum_{k=1}^{n} a_{jk}^I f_k^I\big(z_k^R(t), z_k^I(t)\big) \\ & + \sum_{k=1}^{n} b_{jk}^R g_k^R\big(z_k^R(t-\tau_{jk}), z_k^I(t-\tau_{jk})\big) - \sum_{k=1}^{n} b_{jk}^I g_k^I\big(z_k^R(t-\tau_{jk}), z_k^I(t-\tau_{jk})\big) + u_j^R \end{aligned} \tag{4}$$

and

$$\begin{aligned} \dot{z}_j^I(t) = {} & -d_j z_j^I(t) + \sum_{k=1}^{n} a_{jk}^R f_k^I\big(z_k^R(t), z_k^I(t)\big) + \sum_{k=1}^{n} a_{jk}^I f_k^R\big(z_k^R(t), z_k^I(t)\big) \\ & + \sum_{k=1}^{n} b_{jk}^R g_k^I\big(z_k^R(t-\tau_{jk}), z_k^I(t-\tau_{jk})\big) + \sum_{k=1}^{n} b_{jk}^I g_k^R\big(z_k^R(t-\tau_{jk}), z_k^I(t-\tau_{jk})\big) + u_j^I. \end{aligned} \tag{5}$$
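The linear part of this decomposition can be summarized by a block matrix acting on the stacked vector $(z^R, z^I)^T$. Below is a small Python sketch of the recasting; the sample matrix is an assumption chosen for illustration only.

```python
import numpy as np

# Sketch: the 2n x 2n real coupling block induced by a complex matrix A,
# acting on states stacked as (z^R_1..z^R_n, z^I_1..z^I_n).
def real_blocks(A):
    AR, AI = A.real, A.imag
    # The real part picks up  A^R w^R - A^I w^I  and the imaginary part
    # picks up  A^R w^I + A^I w^R, matching the signs in (4) and (5).
    return np.block([[AR, -AI], [AI, AR]])

A = np.array([[0.3 - 0.2j, 0.1 + 0.1j],
              [-0.2 + 0.1j, 0.2 - 0.3j]])      # assumed example matrix
M = real_blocks(A)
# Check: stacking real and imaginary parts commutes with multiplication by A.
w = np.array([0.5 + 0.1j, -0.2 + 0.4j])
stacked = np.concatenate([w.real, w.imag])
assert np.allclose(M @ stacked,
                   np.concatenate([(A @ w).real, (A @ w).imag]))
```

In (4) and (5) the activation outputs $f_k^R, f_k^I$ each depend on both parts of $z_k$, so the block form above applies to the stacked activation vector, as the assertion demonstrates for a linear map.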

Remark 4: The method of decomposing the CVNN into two real-valued networks doubles the network dimension, which may require more computation. However, this expansion of dimension can also bring some benefits; for example, the number (or dimension) of equilibria can be doubled, which enlarges the capacity of the neural network. It is a tradeoff.

The following three generalized norms are used throughout this paper.

Definition 4 (See [10]): For any vector $v(t) \in \mathbb{R}^{m \times 1}$, the following hold.
1) $\{\xi, \infty\}$-norm: $\|v(t)\|_{\{\xi,\infty\}} = \max_j |\xi_j^{-1} v_j(t)|$, where $\xi_j > 0$, $j = 1, \ldots, m$.
2) $\{\xi, 1\}$-norm: $\|v(t)\|_{\{\xi,1\}} = \sum_j |\xi_j v_j(t)|$, where $\xi_j > 0$, $j = 1, \ldots, m$.
3) $\{\xi, 2\}$-norm: $\|v(t)\|_{\{\xi,2\}} = \big\{\sum_j \xi_j |v_j(t)|^2\big\}^{1/2}$, where $\xi_j > 0$, $j = 1, \ldots, m$.

Lemma 1 (See [10]): Let $C = (c_{jk}) \in \mathbb{R}^{m \times m}$ be a nonsingular matrix with $c_{jk} \le 0$, $j, k = 1, \ldots, m$, $j \neq k$. Then, all the following statements are equivalent.
1) $C$ is an M-matrix, i.e., all the successive principal minors of $C$ are positive.
2) $C^T$ is an M-matrix, where $C^T$ is the transpose of $C$.
3) The real parts of all eigenvalues of $C$ are positive.
4) There exists a vector $\xi = (\xi_1, \ldots, \xi_m)^T$ with all $\xi_j > 0$, $j = 1, \ldots, m$, such that $\xi^T C > 0$ or $C\xi > 0$.

Notation 1: For any real scalar $a$, denote $a^+ = \max\{0, a\}$. For any matrix $C = (c_{jk}) \in \mathbb{R}^{n \times n}$, denote $|C| = (|c_{jk}|)$. In the following, we denote the $n \times n$ matrices $A^R = (a_{jk}^R)$, $A^I = (a_{jk}^I)$, $B^R = (b_{jk}^R)$, $B^I = (b_{jk}^I)$, and $F^{RR} = \mathrm{diag}\{\lambda_1^{RR}, \ldots, \lambda_n^{RR}\}$, $F^{RI} = \mathrm{diag}\{\lambda_1^{RI}, \ldots, \lambda_n^{RI}\}$, $F^{IR} = \mathrm{diag}\{\lambda_1^{IR}, \ldots, \lambda_n^{IR}\}$, $F^{II} = \mathrm{diag}\{\lambda_1^{II}, \ldots, \lambda_n^{II}\}$, $G^{RR} = \mathrm{diag}\{\mu_1^{RR}, \ldots, \mu_n^{RR}\}$, $G^{RI} = \mathrm{diag}\{\mu_1^{RI}, \ldots, \mu_n^{RI}\}$, $G^{IR} = \mathrm{diag}\{\mu_1^{IR}, \ldots, \mu_n^{IR}\}$, $G^{II} = \mathrm{diag}\{\mu_1^{II}, \ldots, \mu_n^{II}\}$.

Notation 2: For any two nonnegative functions $f(t), g(t): (-\infty, +\infty) \to [0, +\infty)$, $f(t) = O(g(t))$ means that there is a positive constant $c$ such that $f(t) \le c \cdot g(t)$ for all $t \in \mathbb{R}$. For any symmetric matrix $A$, $\lambda_{\max}(A)$ denotes its largest eigenvalue. An $n$-dimensional vector $p = (p_1, \ldots, p_n)^T$ is called a positive vector if all its elements are positive, i.e., $p_i > 0$, $i = 1, \ldots, n$.
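For concreteness, the three generalized norms of Definition 4 can be computed as follows; this is a straightforward sketch, and the positive weight vector xi is arbitrary.

```python
import numpy as np

# The three generalized norms of Definition 4 for a vector v and weights xi.
def norm_xi_inf(v, xi):
    return np.max(np.abs(v / xi))                 # max_j |xi_j^{-1} v_j|

def norm_xi_1(v, xi):
    return np.sum(np.abs(xi * v))                 # sum_j |xi_j v_j|

def norm_xi_2(v, xi):
    return np.sqrt(np.sum(xi * np.abs(v) ** 2))   # {sum_j xi_j |v_j|^2}^{1/2}
```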

III. MAIN RESULTS

In this section, we prove some criteria for the uniqueness and global exponential stability of the equilibrium.

A. Criteria With $\{\xi, \infty\}$-Norm

Theorem 1: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_1(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. If there exist a positive vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$ and $\varepsilon > 0$, such that, for $j = 1, \ldots, n$

$$\begin{aligned} T1(j) = {} & \xi_j\Big[-d_j + \varepsilon + \big(a_{jj}^R\big)^+ \lambda_j^{RR} + \big({-a_{jj}^I}\big)^+ \lambda_j^{IR}\Big] \\ & + \sum_{k=1,k\neq j}^{n} \xi_k \big|a_{jk}^R\big| \lambda_k^{RR} + \sum_{k=1}^{n} \phi_k \big|a_{jk}^R\big| \lambda_k^{RI} + \sum_{k=1,k\neq j}^{n} \xi_k \big|a_{jk}^I\big| \lambda_k^{IR} + \sum_{k=1}^{n} \phi_k \big|a_{jk}^I\big| \lambda_k^{II} \\ & + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^R\big| \mu_k^{RR} + \phi_k \big|b_{jk}^R\big| \mu_k^{RI} + \xi_k \big|b_{jk}^I\big| \mu_k^{IR} + \phi_k \big|b_{jk}^I\big| \mu_k^{II}\Big) e^{\varepsilon\tau_{jk}} \le 0 \end{aligned} \tag{6}$$

and

$$\begin{aligned} T2(j) = {} & \phi_j\Big[-d_j + \varepsilon + \big(a_{jj}^R\big)^+ \lambda_j^{II} + \big(a_{jj}^I\big)^+ \lambda_j^{RI}\Big] \\ & + \sum_{k=1}^{n} \xi_k \big|a_{jk}^R\big| \lambda_k^{IR} + \sum_{k=1,k\neq j}^{n} \phi_k \big|a_{jk}^R\big| \lambda_k^{II} + \sum_{k=1}^{n} \xi_k \big|a_{jk}^I\big| \lambda_k^{RR} + \sum_{k=1,k\neq j}^{n} \phi_k \big|a_{jk}^I\big| \lambda_k^{RI} \\ & + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^R\big| \mu_k^{IR} + \phi_k \big|b_{jk}^R\big| \mu_k^{II} + \xi_k \big|b_{jk}^I\big| \mu_k^{RR} + \phi_k \big|b_{jk}^I\big| \mu_k^{RI}\Big) e^{\varepsilon\tau_{jk}} \le 0 \end{aligned} \tag{7}$$

then dynamical systems (4) and (5) have a unique equilibrium $\bar{Z}^R = (\bar{z}_1^R, \ldots, \bar{z}_n^R)^T$ and $\bar{Z}^I = (\bar{z}_1^I, \ldots, \bar{z}_n^I)^T$, respectively. Moreover, for any solution

$$Z(t) = \big(z_1^R(t), \ldots, z_n^R(t), z_1^I(t), \ldots, z_n^I(t)\big)^T \tag{8}$$

there hold

$$\|\dot{Z}(t)\|_{\{\xi,\infty\}} = O(e^{-\varepsilon t}) \tag{9}$$

$$\big\|Z(t) - \big((\bar{Z}^R)^T, (\bar{Z}^I)^T\big)^T\big\|_{\{\xi,\infty\}} = O(e^{-\varepsilon t}). \tag{10}$$

Its proof can be found in Appendix A.
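A direct way to use Theorem 1 in practice is to check (6) and (7) numerically for a candidate $(\xi, \phi, \varepsilon)$. The sketch below encodes the reconstructed condition (6); all inputs (the bound arrays lam/mu, weights, and delays) are assumptions supplied by the user, and condition (7) is checked analogously.

```python
import numpy as np

# Numerical check of condition (6) of Theorem 1.
# lam['R']['I'][k] stands for lambda_k^{RI}; mu is indexed the same way.
def condition_6_holds(d, A, B, tau, lam, mu, xi, phi, eps):
    n = len(d)
    AR, AI, BR, BI = abs(A.real), abs(A.imag), abs(B.real), abs(B.imag)
    ok = True
    for j in range(n):
        off = lambda v: sum(v[k] for k in range(n) if k != j)  # skip k = j
        T1 = xi[j] * (-d[j] + eps
                      + max(A.real[j, j], 0) * lam['R']['R'][j]
                      + max(-A.imag[j, j], 0) * lam['I']['R'][j])
        T1 += off(xi * AR[j] * lam['R']['R']) + np.sum(phi * AR[j] * lam['R']['I'])
        T1 += off(xi * AI[j] * lam['I']['R']) + np.sum(phi * AI[j] * lam['I']['I'])
        T1 += np.sum((xi * BR[j] * mu['R']['R'] + phi * BR[j] * mu['R']['I']
                      + xi * BI[j] * mu['I']['R'] + phi * BI[j] * mu['I']['I'])
                     * np.exp(eps * tau[j]))
        ok &= bool(T1 <= 0)
    return ok   # condition (7) is verified with the analogous T2(j) sums
```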

Corollary 1: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_1(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. If there exists a positive vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$, such that, for $j = 1, \ldots, n$

$$\begin{aligned} T3(j) = {} & \xi_j\Big[-d_j + \big(a_{jj}^R\big)^+ \lambda_j^{RR} + \big({-a_{jj}^I}\big)^+ \lambda_j^{IR}\Big] + \sum_{k=1,k\neq j}^{n} \xi_k \big|a_{jk}^R\big| \lambda_k^{RR} + \sum_{k=1}^{n} \phi_k \big|a_{jk}^R\big| \lambda_k^{RI} \\ & + \sum_{k=1,k\neq j}^{n} \xi_k \big|a_{jk}^I\big| \lambda_k^{IR} + \sum_{k=1}^{n} \phi_k \big|a_{jk}^I\big| \lambda_k^{II} + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^R\big| \mu_k^{RR} + \phi_k \big|b_{jk}^R\big| \mu_k^{RI}\Big) \\ & + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^I\big| \mu_k^{IR} + \phi_k \big|b_{jk}^I\big| \mu_k^{II}\Big) < 0 \end{aligned} \tag{11}$$

and

$$\begin{aligned} T4(j) = {} & \phi_j\Big[-d_j + \big(a_{jj}^R\big)^+ \lambda_j^{II} + \big(a_{jj}^I\big)^+ \lambda_j^{RI}\Big] + \sum_{k=1}^{n} \xi_k \big|a_{jk}^R\big| \lambda_k^{IR} + \sum_{k=1,k\neq j}^{n} \phi_k \big|a_{jk}^R\big| \lambda_k^{II} \\ & + \sum_{k=1}^{n} \xi_k \big|a_{jk}^I\big| \lambda_k^{RR} + \sum_{k=1,k\neq j}^{n} \phi_k \big|a_{jk}^I\big| \lambda_k^{RI} + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^R\big| \mu_k^{IR} + \phi_k \big|b_{jk}^R\big| \mu_k^{II}\Big) \\ & + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^I\big| \mu_k^{RR} + \phi_k \big|b_{jk}^I\big| \mu_k^{RI}\Big) < 0 \end{aligned} \tag{12}$$

then any solution of systems (4) and (5), respectively, converges to a unique equilibrium exponentially.

If conditions (11) and (12) hold, then we can find a sufficiently small constant $\varepsilon > 0$ such that inequalities (6) and (7) hold. Therefore, this corollary is a direct consequence of Theorem 1.

Corollary 2: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_2(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. If there exists a positive vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$, such that, for $j = 1, \ldots, n$

$$\begin{aligned} T5(j) = {} & -\xi_j d_j + \sum_{k=1}^{n} \Big(\xi_k \big|a_{jk}^R\big| \lambda_k^{RR} + \phi_k \big|a_{jk}^R\big| \lambda_k^{RI}\Big) + \sum_{k=1}^{n} \Big(\xi_k \big|a_{jk}^I\big| \lambda_k^{IR} + \phi_k \big|a_{jk}^I\big| \lambda_k^{II}\Big) \\ & + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^R\big| \mu_k^{RR} + \phi_k \big|b_{jk}^R\big| \mu_k^{RI}\Big) + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^I\big| \mu_k^{IR} + \phi_k \big|b_{jk}^I\big| \mu_k^{II}\Big) < 0 \end{aligned} \tag{13}$$

and

$$\begin{aligned} T6(j) = {} & -\phi_j d_j + \sum_{k=1}^{n} \Big(\xi_k \big|a_{jk}^R\big| \lambda_k^{IR} + \phi_k \big|a_{jk}^R\big| \lambda_k^{II}\Big) + \sum_{k=1}^{n} \Big(\xi_k \big|a_{jk}^I\big| \lambda_k^{RR} + \phi_k \big|a_{jk}^I\big| \lambda_k^{RI}\Big) \\ & + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^R\big| \mu_k^{IR} + \phi_k \big|b_{jk}^R\big| \mu_k^{II}\Big) + \sum_{k=1}^{n} \Big(\xi_k \big|b_{jk}^I\big| \mu_k^{RR} + \phi_k \big|b_{jk}^I\big| \mu_k^{RI}\Big) < 0 \end{aligned} \tag{14}$$

then any solution of systems (4) and (5), respectively, converges to a unique equilibrium exponentially. This result is a direct consequence of Corollary 1, since $(a)^+ \le |a|$ implies that (13) and (14) imply (11) and (12).

Remark 5: Theorem 1 can be generalized to the system with time-varying delays

$$\dot{z}_j(t) = -d_j z_j(t) + \sum_{k=1}^{n} a_{jk} f_k(z_k(t)) + \sum_{k=1}^{n} b_{jk} g_k(z_k(t-\tau_{jk}(t))) + u_j, \quad j = 1, \ldots, n \tag{15}$$

where $\tau_{jk}(t)$ can be bounded or unbounded. In fact, by Theorem 1, system (15) has an equilibrium $\bar{Z} = (\bar{z}_1, \ldots, \bar{z}_n)^T$, and

$$\frac{d(z_j(t) - \bar{z}_j)}{dt} = -d_j(z_j(t) - \bar{z}_j) + \sum_{k=1}^{n} a_{jk}\big(f_k(z_k(t)) - f_k(\bar{z}_k)\big) + \sum_{k=1}^{n} b_{jk}\big(g_k(z_k(t-\tau_{jk}(t))) - g_k(\bar{z}_k)\big).$$

Replacing $e^{\varepsilon t}\dot{z}(t)$ by $e^{\varepsilon t}(z(t) - \bar{Z})$ in the proof of Theorem 1 and following a similar approach, we can prove that under conditions (6) and (7), system (15) has a unique equilibrium, which is globally μ-stable (the concept of μ-stability was first proposed in [11]; for details, readers can refer to [11] and [12]).

B. Criteria With $\{\xi, 1\}$-Norm

Theorem 2: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_1(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. If there exist a positive vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$ and $\varepsilon > 0$, such that, for $k = 1, \ldots, n$

$$\begin{aligned} T7(k) = {} & \xi_k(-d_k + \varepsilon) + \Bigg[\xi_k a_{kk}^R + \sum_{j=1,j\neq k}^{n} \xi_j \big|a_{jk}^R\big| + \sum_{j=1}^{n} \phi_j \big|a_{jk}^I\big|\Bigg]^+ \lambda_k^{RR} \\ & + \Bigg[{-\xi_k a_{kk}^I} + \sum_{j=1,j\neq k}^{n} \xi_j \big|a_{jk}^I\big| + \sum_{j=1}^{n} \phi_j \big|a_{jk}^R\big|\Bigg]^+ \lambda_k^{IR} \\ & + \sum_{j=1}^{n} \Big[\xi_j \big(\big|b_{jk}^R\big| \mu_k^{RR} + \big|b_{jk}^I\big| \mu_k^{IR}\big) + \phi_j \big(\big|b_{jk}^R\big| \mu_k^{IR} + \big|b_{jk}^I\big| \mu_k^{RR}\big)\Big] e^{\varepsilon\tau_{jk}} \le 0 \end{aligned}$$

and

$$\begin{aligned} T8(k) = {} & \phi_k(-d_k + \varepsilon) + \Bigg[\phi_k a_{kk}^R + \sum_{j=1,j\neq k}^{n} \phi_j \big|a_{jk}^R\big| + \sum_{j=1}^{n} \xi_j \big|a_{jk}^I\big|\Bigg]^+ \lambda_k^{II} \\ & + \Bigg[\phi_k a_{kk}^I + \sum_{j=1,j\neq k}^{n} \phi_j \big|a_{jk}^I\big| + \sum_{j=1}^{n} \xi_j \big|a_{jk}^R\big|\Bigg]^+ \lambda_k^{RI} \\ & + \sum_{j=1}^{n} \Big[\xi_j \big(\big|b_{jk}^R\big| \mu_k^{RI} + \big|b_{jk}^I\big| \mu_k^{II}\big) + \phi_j \big(\big|b_{jk}^R\big| \mu_k^{II} + \big|b_{jk}^I\big| \mu_k^{RI}\big)\Big] e^{\varepsilon\tau_{jk}} \le 0 \end{aligned}$$

then dynamical systems (4) and (5) have a unique equilibrium $\bar{Z}^R = (\bar{z}_1^R, \ldots, \bar{z}_n^R)^T$ and $\bar{Z}^I = (\bar{z}_1^I, \ldots, \bar{z}_n^I)^T$, respectively. Moreover, for any solution $Z(t)$ defined by (8), the estimates (9) and (10) hold, where the norm is the $\{\xi, 1\}$-norm. Its proof can be found in Appendix B.

Corollary 3: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_1(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. If there exists a positive vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$, such that, for $k = 1, \ldots, n$

$$\begin{aligned} T9(k) = {} & -\xi_k d_k + \Bigg[\xi_k a_{kk}^R + \sum_{j=1,j\neq k}^{n} \xi_j \big|a_{jk}^R\big| + \sum_{j=1}^{n} \phi_j \big|a_{jk}^I\big|\Bigg]^+ \lambda_k^{RR} + \Bigg[{-\xi_k a_{kk}^I} + \sum_{j=1,j\neq k}^{n} \xi_j \big|a_{jk}^I\big| + \sum_{j=1}^{n} \phi_j \big|a_{jk}^R\big|\Bigg]^+ \lambda_k^{IR} \\ & + \sum_{j=1}^{n} \Big[\xi_j \big(\big|b_{jk}^R\big| \mu_k^{RR} + \big|b_{jk}^I\big| \mu_k^{IR}\big) + \phi_j \big(\big|b_{jk}^R\big| \mu_k^{IR} + \big|b_{jk}^I\big| \mu_k^{RR}\big)\Big] < 0 \end{aligned}$$

and

$$\begin{aligned} T10(k) = {} & -\phi_k d_k + \Bigg[\phi_k a_{kk}^R + \sum_{j=1,j\neq k}^{n} \phi_j \big|a_{jk}^R\big| + \sum_{j=1}^{n} \xi_j \big|a_{jk}^I\big|\Bigg]^+ \lambda_k^{II} + \Bigg[\phi_k a_{kk}^I + \sum_{j=1,j\neq k}^{n} \phi_j \big|a_{jk}^I\big| + \sum_{j=1}^{n} \xi_j \big|a_{jk}^R\big|\Bigg]^+ \lambda_k^{RI} \\ & + \sum_{j=1}^{n} \Big[\xi_j \big(\big|b_{jk}^R\big| \mu_k^{RI} + \big|b_{jk}^I\big| \mu_k^{II}\big) + \phi_j \big(\big|b_{jk}^R\big| \mu_k^{II} + \big|b_{jk}^I\big| \mu_k^{RI}\big)\Big] < 0 \end{aligned}$$

then any solution of systems (4) and (5), respectively, converges to a unique equilibrium exponentially.

Corollary 4: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_2(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. If there exists a positive vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$, such that, for $k = 1, \ldots, n$

$$\begin{aligned} T11(k) = {} & -\xi_k d_k + \Bigg[\sum_{j=1}^{n} \xi_j \big|a_{jk}^R\big| + \sum_{j=1}^{n} \phi_j \big|a_{jk}^I\big|\Bigg] \lambda_k^{RR} + \Bigg[\sum_{j=1}^{n} \xi_j \big|a_{jk}^I\big| + \sum_{j=1}^{n} \phi_j \big|a_{jk}^R\big|\Bigg] \lambda_k^{IR} \\ & + \sum_{j=1}^{n} \Big[\xi_j \big(\big|b_{jk}^R\big| \mu_k^{RR} + \big|b_{jk}^I\big| \mu_k^{IR}\big) + \phi_j \big(\big|b_{jk}^R\big| \mu_k^{IR} + \big|b_{jk}^I\big| \mu_k^{RR}\big)\Big] < 0 \end{aligned} \tag{16}$$

and

$$\begin{aligned} T12(k) = {} & -\phi_k d_k + \Bigg[\sum_{j=1}^{n} \phi_j \big|a_{jk}^R\big| + \sum_{j=1}^{n} \xi_j \big|a_{jk}^I\big|\Bigg] \lambda_k^{II} + \Bigg[\sum_{j=1}^{n} \phi_j \big|a_{jk}^I\big| + \sum_{j=1}^{n} \xi_j \big|a_{jk}^R\big|\Bigg] \lambda_k^{RI} \\ & + \sum_{j=1}^{n} \Big[\xi_j \big(\big|b_{jk}^R\big| \mu_k^{RI} + \big|b_{jk}^I\big| \mu_k^{II}\big) + \phi_j \big(\big|b_{jk}^R\big| \mu_k^{II} + \big|b_{jk}^I\big| \mu_k^{RI}\big)\Big] < 0 \end{aligned} \tag{17}$$

then any solution of systems (4) and (5), respectively, converges to a unique equilibrium exponentially.

C. Some Comparisons

The following theorem is a direct consequence of Corollaries 2 and 4 and the properties of the M-matrix.


Theorem 3: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_2(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. Denote

$$\bar{D} = \begin{pmatrix} D & 0 \\ 0 & D \end{pmatrix}, \quad \bar{A} = \begin{pmatrix} |A^R| & |A^I| \\ |A^I| & |A^R| \end{pmatrix}, \quad \bar{F} = \begin{pmatrix} F^{RR} & F^{RI} \\ F^{IR} & F^{II} \end{pmatrix}, \quad \bar{B} = \begin{pmatrix} |B^R| & |B^I| \\ |B^I| & |B^R| \end{pmatrix}, \quad \bar{G} = \begin{pmatrix} G^{RR} & G^{RI} \\ G^{IR} & G^{II} \end{pmatrix}. \tag{18}$$

If $\bar{D} - \bar{A}\bar{F} - \bar{B}\bar{G}$ is a nonsingular M-matrix, then any solution of systems (4) and (5), respectively, converges to a unique equilibrium exponentially.

Proof: If $\bar{D} - \bar{A}\bar{F} - \bar{B}\bar{G}$ is a nonsingular M-matrix, according to Lemma 1, there exists a vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$ such that $(\bar{D} - \bar{A}\bar{F} - \bar{B}\bar{G})\xi > 0$, i.e., inequalities (13) and (14) hold. Therefore, the conclusion is a direct consequence of Corollary 2. On the other hand, if $\bar{D} - \bar{A}\bar{F} - \bar{B}\bar{G}$ is a nonsingular M-matrix, according to Lemma 1, there also exists a vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$ such that $\xi^T(\bar{D} - \bar{A}\bar{F} - \bar{B}\bar{G}) > 0$, i.e., inequalities (16) and (17) hold. Therefore, the conclusion is also a direct consequence of Corollary 4.

Remark 6: A criterion based on the M-matrix was also reported in [45]. However, it neglects the signs of the entries in the connection matrices $A$ and $B$, and thus the difference between excitatory and inhibitory effects might be ignored. Comparatively, the criteria given in Theorems 1 and 2 and Corollaries 1 and 3 are more powerful.

In the following, we give a comparison between Corollary 1 and Theorem 3 using matrix theory. Noting that $(a)^+ - a = (-a)^+$ for any real scalar $a$, denote the diagonal matrices

$$P_1 = \mathrm{diag}\big\{\big({-a_{11}^R}\big)^+, \ldots, \big({-a_{nn}^R}\big)^+\big\}, \quad P_2 = \mathrm{diag}\big\{\big(a_{11}^I\big)^+, \ldots, \big(a_{nn}^I\big)^+\big\}, \quad P_3 = \mathrm{diag}\big\{\big({-a_{11}^I}\big)^+, \ldots, \big({-a_{nn}^I}\big)^+\big\}.$$

Obviously, these matrices are all nonnegative definite. Define

$$\bar{\Phi} = \begin{pmatrix} P_1 F^{RR} + P_2 F^{IR} & 0 \\ 0 & P_1 F^{II} + P_3 F^{RI} \end{pmatrix} \tag{19}$$

so it is also nonnegative definite. Using this notation, from Corollary 1, the sufficient condition for global stability is that

$$\bar{D} - \bar{A}\bar{F} - \bar{B}\bar{G} + \bar{\Phi} \tag{20}$$

should be a nonsingular M-matrix. Obviously, if $\bar{D} - \bar{A}\bar{F} - \bar{B}\bar{G}$ is a nonsingular M-matrix, then the matrix (20) is also a nonsingular M-matrix; instead, if the matrix (20) is a nonsingular M-matrix, $\bar{D} - \bar{A}\bar{F} - \bar{B}\bar{G}$ may not be. Therefore, Corollary 1 presents a better criterion than previous works, like [45], because it considers the signs of the entries in the connection matrix $A$, whose positive effect is described by the nonnegative matrix $\bar{\Phi}$ defined in (19). Moreover, from this result, we can also find that in order to make the CVNN have a stable equilibrium, $P_1$, $P_2$, $P_3$ should be as large as possible, so one way is to make all $a_{jj}^R$, $j = 1, \ldots, n$, negative numbers.
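Theorem 3 is easy to automate: build the matrix in (18) and test the nonsingular M-matrix property via item 4 of Lemma 1, e.g., by a small linear program. The sketch below assumes SciPy is available; it is an illustration under these assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import linprog

def is_nonsingular_M_matrix(C):
    """Lemma 1, item 4: search a positive vector xi with C @ xi > 0.
    Valid for matrices with nonpositive off-diagonal entries, as here."""
    m = C.shape[0]
    # Feasibility of C xi >= 1 with xi bounded below by a small positive number.
    res = linprog(c=np.ones(m), A_ub=-C, b_ub=-np.ones(m),
                  bounds=[(1e-9, None)] * m, method="highs")
    return res.success

def theorem3_matrix(D, A, B, F, G):
    """Assemble Dbar - Abar*Fbar - Bbar*Gbar from (18).
    F and G are the assembled 2n x 2n block matrices Fbar, Gbar."""
    Dbar = np.diag(np.concatenate([D, D]))
    Abar = np.block([[abs(A.real), abs(A.imag)], [abs(A.imag), abs(A.real)]])
    Bbar = np.block([[abs(B.real), abs(B.imag)], [abs(B.imag), abs(B.real)]])
    return Dbar - Abar @ F - Bbar @ G
```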

Remark 7: The function $M(t) = \max_{s \le t} \max_{i=1,\ldots,m} |u_i(s)|$ proposed in [10] is a powerful tool in dealing with delayed systems, in particular, with time-varying delays.

Remark 8: It can be seen that in computing the integral $\int_0^{\infty} \|\dot{Z}(t)\|\,dt$, the estimation of $\frac{d}{dt}\|\dot{Z}(t)\|$ plays an important role. Let $A = (a_{ij})_{i,j=1}^{N}$, $\xi_i > 0$, $i = 1, \ldots, N$, and

$$\frac{dw}{dt} = Aw(t).$$

It has been shown in [9] and [10] that

$$\max \frac{\frac{d}{dt}\|w(t)\|_{\{\xi,1\}}}{\|w(t)\|_{\{\xi,1\}}} = \max_j \Bigg[a_{jj} + \sum_{i \neq j} \frac{\xi_i}{\xi_j}|a_{ij}|\Bigg] \tag{21}$$

$$\max \frac{\frac{d}{dt}\|w(t)\|_{\{\xi,\infty\}}}{\|w(t)\|_{\{\xi,\infty\}}} = \max_i \Bigg[a_{ii} + \sum_{j \neq i} \frac{\xi_j}{\xi_i}|a_{ij}|\Bigg]$$

$$\max \frac{\frac{d}{dt}\|w(t)\|_{\{\xi,2\}}^2}{\|w(t)\|_{\{\xi,2\}}^2} = \lambda_{\max}\big(\Xi^{1/2} A \Xi^{-1/2} + \Xi^{-1/2} A^T \Xi^{1/2}\big), \quad \Xi = \mathrm{diag}(\xi).$$

In fact,

$$\begin{aligned} \frac{d}{dt}\|w(t)\|_{\{\xi,1\}} &= \sum_{i=1}^{N} \mathrm{sign}(w_i(t))\, \xi_i \sum_{j=1}^{N} a_{ij} w_j(t) = \sum_{j=1}^{N} \Bigg[\sum_{i=1}^{N} \mathrm{sign}(w_i(t)) \frac{\xi_i}{\xi_j} a_{ij}\Bigg] \xi_j w_j(t) \\ &\le \sum_{j=1}^{N} \Bigg[a_{jj} + \sum_{i \neq j} \frac{\xi_i}{\xi_j}|a_{ij}|\Bigg] |\xi_j w_j(t)| \le \max_j \Bigg[a_{jj} + \sum_{i \neq j} \frac{\xi_i}{\xi_j}|a_{ij}|\Bigg] \|w(t)\|_{\{\xi,1\}}. \end{aligned}$$

Therefore,

$$\max \frac{\frac{d}{dt}\|w(t)\|_{\{\xi,1\}}}{\|w(t)\|_{\{\xi,1\}}} = \max_j \Bigg[a_{jj} + \sum_{i \neq j} \frac{\xi_i}{\xi_j}|a_{ij}|\Bigg].$$

Similarly, we can prove the other two equalities. These three equalities play a very important role in discussing the stability of neural networks and other dynamical systems. For example, if $\max_j [a_{jj} + \sum_{i \neq j} (\xi_i/\xi_j)|a_{ij}|] \le -\alpha < 0$, then $\frac{d}{dt}\|w(t)\|_{\{\xi,1\}} \le -\alpha \|w(t)\|_{\{\xi,1\}}$, which implies $\|w(t)\|_{\{\xi,1\}} = O(e^{-\alpha t})$. It happens that these three equalities are closely related to the matrix measure of $A$ with respect to the three norms.
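The three quantities above can be evaluated directly as matrix measures. A short sketch follows, under the reconstruction of (21) given above; for the $\{\xi,2\}$ case, the measure of the norm itself is half the displayed eigenvalue bound for the squared norm.

```python
import numpy as np

# Weighted matrix measures corresponding to the three norms of Remark 8.
def mu_xi_1(A, xi):
    N = len(xi)
    return max(A[j, j] + sum(xi[i] / xi[j] * abs(A[i, j])
                             for i in range(N) if i != j) for j in range(N))

def mu_xi_inf(A, xi):
    N = len(xi)
    return max(A[i, i] + sum(xi[j] / xi[i] * abs(A[i, j])
                             for j in range(N) if j != i) for i in range(N))

def mu_xi_2(A, xi):
    S, Sinv = np.diag(np.sqrt(xi)), np.diag(1.0 / np.sqrt(xi))
    M = S @ A @ Sinv
    return np.linalg.eigvalsh(M + M.T).max() / 2.0  # measure of the norm itself

# If mu_xi_1(A, xi) <= -alpha < 0, then ||w(t)||_{xi,1} = O(exp(-alpha t)).
```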

D. Criteria With $\{\xi, 2\}$-Norm

Theorem 4: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_1(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. If there exist a positive vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$ and $\varepsilon > 0$, such that, for $j = 1, \ldots, n$

$$\begin{aligned} T13(j) = {} & 2\xi_j\Big[-d_j + \varepsilon + \big(a_{jj}^R\big)^+ \lambda_j^{RR} + \big({-a_{jj}^I}\big)^+ \lambda_j^{IR}\Big] \\ & + \sum_{k=1,k\neq j}^{n} \xi_j \big(\big|a_{jk}^R\big| \lambda_k^{RR} + \big|a_{jk}^I\big| \lambda_k^{IR}\big) \pi_{1jk} + \sum_{k=1,k\neq j}^{n} \xi_k \big(\big|a_{kj}^R\big| \lambda_j^{RR} + \big|a_{kj}^I\big| \lambda_j^{IR}\big) \pi_{1kj}^{-1} \\ & + \sum_{k=1}^{n} \xi_j \big(\big|a_{jk}^R\big| \lambda_k^{RI} + \big|a_{jk}^I\big| \lambda_k^{II}\big) \pi_{2jk} + \sum_{k=1}^{n} \phi_k \big(\big|a_{kj}^R\big| \lambda_j^{IR} + \big|a_{kj}^I\big| \lambda_j^{RR}\big) \omega_{1kj}^{-1} \\ & + \sum_{k=1}^{n} \xi_j \big(\big|b_{jk}^R\big| \mu_k^{RR} + \big|b_{jk}^I\big| \mu_k^{IR}\big) \pi_{3jk} + \sum_{k=1}^{n} \xi_k \big(\big|b_{kj}^R\big| \mu_j^{RR} + \big|b_{kj}^I\big| \mu_j^{IR}\big) \pi_{3kj}^{-1} e^{2\varepsilon\tau_{kj}} \\ & + \sum_{k=1}^{n} \xi_j \big(\big|b_{jk}^R\big| \mu_k^{RI} + \big|b_{jk}^I\big| \mu_k^{II}\big) \pi_{4jk} + \sum_{k=1}^{n} \phi_k \big(\big|b_{kj}^R\big| \mu_j^{IR} + \big|b_{kj}^I\big| \mu_j^{RR}\big) \omega_{3kj}^{-1} e^{2\varepsilon\tau_{kj}} \le 0 \end{aligned}$$

and

$$\begin{aligned} T14(j) = {} & 2\phi_j\Big[-d_j + \varepsilon + \big(a_{jj}^R\big)^+ \lambda_j^{II} + \big(a_{jj}^I\big)^+ \lambda_j^{RI}\Big] \\ & + \sum_{k=1}^{n} \phi_j \big(\big|a_{jk}^R\big| \lambda_k^{IR} + \big|a_{jk}^I\big| \lambda_k^{RR}\big) \omega_{1jk} + \sum_{k=1,k\neq j}^{n} \phi_j \big(\big|a_{jk}^R\big| \lambda_k^{II} + \big|a_{jk}^I\big| \lambda_k^{RI}\big) \omega_{2jk} \\ & + \sum_{k=1,k\neq j}^{n} \phi_k \big(\big|a_{kj}^R\big| \lambda_j^{II} + \big|a_{kj}^I\big| \lambda_j^{RI}\big) \omega_{2kj}^{-1} + \sum_{k=1}^{n} \xi_k \big(\big|a_{kj}^R\big| \lambda_j^{RI} + \big|a_{kj}^I\big| \lambda_j^{II}\big) \pi_{2kj}^{-1} \\ & + \sum_{k=1}^{n} \phi_j \big(\big|b_{jk}^R\big| \mu_k^{IR} + \big|b_{jk}^I\big| \mu_k^{RR}\big) \omega_{3jk} + \sum_{k=1}^{n} \phi_j \big(\big|b_{jk}^R\big| \mu_k^{II} + \big|b_{jk}^I\big| \mu_k^{RI}\big) \omega_{4jk} \\ & + \sum_{k=1}^{n} \xi_k \big(\big|b_{kj}^R\big| \mu_j^{RI} + \big|b_{kj}^I\big| \mu_j^{II}\big) \pi_{4kj}^{-1} e^{2\varepsilon\tau_{kj}} + \sum_{k=1}^{n} \phi_k \big(\big|b_{kj}^R\big| \mu_j^{II} + \big|b_{kj}^I\big| \mu_j^{RI}\big) \omega_{4kj}^{-1} e^{2\varepsilon\tau_{kj}} \le 0 \end{aligned}$$

where $\pi_{1jk}$, $\pi_{2jk}$, $\pi_{3jk}$, $\pi_{4jk}$, $\omega_{1jk}$, $\omega_{2jk}$, $\omega_{3jk}$, and $\omega_{4jk}$ are positive numbers, then dynamical systems (4) and (5) have a unique equilibrium $\bar{Z}^R$ and $\bar{Z}^I$, respectively. Moreover, for any solution $Z(t)$ defined by (8), the estimates (9) and (10) hold, where the norm is the $\{\xi, 2\}$-norm. Its proof can be found in Appendix C.

Corollary 5: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_1(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. If there exists a positive vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$, such that, for $j = 1, \ldots, n$

$$\begin{aligned} T15(j) = {} & 2\xi_j\Big[-d_j + \big(a_{jj}^R\big)^+ \lambda_j^{RR} + \big({-a_{jj}^I}\big)^+ \lambda_j^{IR}\Big] \\ & + \sum_{k=1,k\neq j}^{n} \xi_j \big(\big|a_{jk}^R\big| \lambda_k^{RR} + \big|a_{jk}^I\big| \lambda_k^{IR}\big) \pi_{1jk} + \sum_{k=1,k\neq j}^{n} \xi_k \big(\big|a_{kj}^R\big| \lambda_j^{RR} + \big|a_{kj}^I\big| \lambda_j^{IR}\big) \pi_{1kj}^{-1} \\ & + \sum_{k=1}^{n} \xi_j \big(\big|a_{jk}^R\big| \lambda_k^{RI} + \big|a_{jk}^I\big| \lambda_k^{II}\big) \pi_{2jk} + \sum_{k=1}^{n} \phi_k \big(\big|a_{kj}^R\big| \lambda_j^{IR} + \big|a_{kj}^I\big| \lambda_j^{RR}\big) \omega_{1kj}^{-1} \\ & + \sum_{k=1}^{n} \xi_j \big(\big|b_{jk}^R\big| \mu_k^{RR} + \big|b_{jk}^I\big| \mu_k^{IR}\big) \pi_{3jk} + \sum_{k=1}^{n} \xi_k \big(\big|b_{kj}^R\big| \mu_j^{RR} + \big|b_{kj}^I\big| \mu_j^{IR}\big) \pi_{3kj}^{-1} \\ & + \sum_{k=1}^{n} \xi_j \big(\big|b_{jk}^R\big| \mu_k^{RI} + \big|b_{jk}^I\big| \mu_k^{II}\big) \pi_{4jk} + \sum_{k=1}^{n} \phi_k \big(\big|b_{kj}^R\big| \mu_j^{IR} + \big|b_{kj}^I\big| \mu_j^{RR}\big) \omega_{3kj}^{-1} < 0 \end{aligned}$$

and

$$\begin{aligned} T16(j) = {} & 2\phi_j\Big[-d_j + \big(a_{jj}^R\big)^+ \lambda_j^{II} + \big(a_{jj}^I\big)^+ \lambda_j^{RI}\Big] \\ & + \sum_{k=1}^{n} \phi_j \big(\big|a_{jk}^R\big| \lambda_k^{IR} + \big|a_{jk}^I\big| \lambda_k^{RR}\big) \omega_{1jk} + \sum_{k=1,k\neq j}^{n} \phi_j \big(\big|a_{jk}^R\big| \lambda_k^{II} + \big|a_{jk}^I\big| \lambda_k^{RI}\big) \omega_{2jk} \\ & + \sum_{k=1,k\neq j}^{n} \phi_k \big(\big|a_{kj}^R\big| \lambda_j^{II} + \big|a_{kj}^I\big| \lambda_j^{RI}\big) \omega_{2kj}^{-1} + \sum_{k=1}^{n} \xi_k \big(\big|a_{kj}^R\big| \lambda_j^{RI} + \big|a_{kj}^I\big| \lambda_j^{II}\big) \pi_{2kj}^{-1} \\ & + \sum_{k=1}^{n} \phi_j \big(\big|b_{jk}^R\big| \mu_k^{IR} + \big|b_{jk}^I\big| \mu_k^{RR}\big) \omega_{3jk} + \sum_{k=1}^{n} \phi_j \big(\big|b_{jk}^R\big| \mu_k^{II} + \big|b_{jk}^I\big| \mu_k^{RI}\big) \omega_{4jk} \\ & + \sum_{k=1}^{n} \xi_k \big(\big|b_{kj}^R\big| \mu_j^{RI} + \big|b_{kj}^I\big| \mu_j^{II}\big) \pi_{4kj}^{-1} + \sum_{k=1}^{n} \phi_k \big(\big|b_{kj}^R\big| \mu_j^{II} + \big|b_{kj}^I\big| \mu_j^{RI}\big) \omega_{4kj}^{-1} < 0 \end{aligned}$$

where $\pi_{1jk}$, $\pi_{2jk}$, $\pi_{3jk}$, $\pi_{4jk}$, $\omega_{1jk}$, $\omega_{2jk}$, $\omega_{3jk}$, and $\omega_{4jk}$ are positive numbers, then any solution of systems (4) and (5), respectively, converges to a unique equilibrium exponentially.


Corollary 6: For dynamical systems (4) and (5), suppose the activation function $f_j(z)$ belongs to class $H_2(\lambda_j^{RR}, \lambda_j^{RI}, \lambda_j^{IR}, \lambda_j^{II})$ and $g_j(z)$ belongs to class $H_2(\mu_j^{RR}, \mu_j^{RI}, \mu_j^{IR}, \mu_j^{II})$, $j = 1, \ldots, n$. If there exists a positive vector $\xi = (\xi_1, \ldots, \xi_n, \phi_1, \ldots, \phi_n)^T > 0$, such that, for $j = 1, \ldots, n$

$$\begin{aligned} T17(j) = {} & -2\xi_j d_j + \sum_{k=1}^{n} \xi_j \big(\big|a_{jk}^R\big| \lambda_k^{RR} + \big|a_{jk}^I\big| \lambda_k^{IR}\big) + \sum_{k=1}^{n} \xi_k \big(\big|a_{kj}^R\big| \lambda_j^{RR} + \big|a_{kj}^I\big| \lambda_j^{IR}\big) \\ & + \sum_{k=1}^{n} \xi_j \big(\big|a_{jk}^R\big| \lambda_k^{RI} + \big|a_{jk}^I\big| \lambda_k^{II}\big) + \sum_{k=1}^{n} \phi_k \big(\big|a_{kj}^R\big| \lambda_j^{IR} + \big|a_{kj}^I\big| \lambda_j^{RR}\big) \\ & + \sum_{k=1}^{n} \xi_j \big(\big|b_{jk}^R\big| \mu_k^{RR} + \big|b_{jk}^I\big| \mu_k^{IR}\big) + \sum_{k=1}^{n} \xi_k \big(\big|b_{kj}^R\big| \mu_j^{RR} + \big|b_{kj}^I\big| \mu_j^{IR}\big) \\ & + \sum_{k=1}^{n} \xi_j \big(\big|b_{jk}^R\big| \mu_k^{RI} + \big|b_{jk}^I\big| \mu_k^{II}\big) + \sum_{k=1}^{n} \phi_k \big(\big|b_{kj}^R\big| \mu_j^{IR} + \big|b_{kj}^I\big| \mu_j^{RR}\big) < 0 \end{aligned}$$
