IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 26, NO. 7, JULY 2015


Exponential Stabilization of Memristor-based Chaotic Neural Networks with Time-Varying Delays via Intermittent Control

Guodong Zhang and Yi Shen

Abstract— This paper is concerned with the global exponential stabilization of memristor-based chaotic neural networks with both time-varying delays and general activation functions. We adopt nonsmooth analysis and control theory to handle memristor-based chaotic neural networks, whose right-hand side is discontinuous. In particular, several new sufficient conditions ensuring exponential stabilization of memristor-based chaotic neural networks via periodically intermittent control are obtained. The proposed conditions are easy to verify and extend earlier published results. Finally, numerical simulations illustrate the effectiveness of the obtained results.

Index Terms— Exponential stabilization, intermittent control, memristor-based neural networks, nonsmooth analysis.

I. INTRODUCTION

MEMRISTOR devices have been the focus of much recent attention in the electrical and electronic engineering communities [1]–[3]. Although their existence was postulated by Chua [4], the first prototype of a memristor in nanoscale electronics was reported in [5]. Alongside resistors, inductors, and capacitors, the memristor is considered the fourth basic circuit element and is defined by a nonlinear charge–flux characteristic. Chua [4] showed that the value of a memristor, called the memristance, is a function of the electric charge $q$, given as $M(q) = d\varphi/dq$, where $\varphi$ denotes the magnetic flux. Its current–voltage characteristic, demonstrated experimentally by scientists at the HP laboratory [5], is shown in Fig. 1. In recent years, it has been shown that memristor devices have many promising applications, such as associative memory and signal processing [1]–[3], [6], one of which is to

Manuscript received November 11, 2013; revised April 23, 2014 and July 24, 2014; accepted July 26, 2014. Date of publication August 19, 2014; date of current version June 16, 2015. The work of Y. Shen was supported in part by the National Science Foundation of China under Grant 11271146, in part by the Science and Technology Program of Wuhan under Grant 2013010501010117, in part by the Key Program of the National Natural Science Foundation of China under Grant 61134012, and in part by the Prior Development Field for the Doctoral Program of Higher Education of China under Grant 20130142130012. G. Zhang is with the School of Automation, Huazhong University of Science and Technology, Hubei 430074, China, and also with the College of Mathematics and Statistics, South-Central University for Nationalities, Wuhan 430074, China. Y. Shen is with the School of Automation, Huazhong University of Science and Technology, Hubei 430074, China (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNNLS.2014.2345125

Fig. 1. Current–voltage characteristic of a memristor with a sinusoidal current source.
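As a quick illustration of the defining relation $M(q) = d\varphi/dq$, the following sketch (not from the paper) drives a hypothetical charge-controlled memristor with a sinusoidal current and traces its current–voltage curve; the flux–charge curve $\varphi(q) = q + q^3/3$, i.e., $M(q) = 1 + q^2$, is an assumed toy nonlinearity, not a model used by the authors.

```python
import math

def memristor_iv(steps=4000, dt=2 * math.pi / 1000):
    """Trace (i, v) pairs of a charge-controlled memristor, v = M(q) * i,
    with the assumed memristance M(q) = dphi/dq = 1 + q^2 and i(t) = sin t."""
    q, pts = 0.0, []
    for k in range(steps):
        i = math.sin(k * dt)      # sinusoidal current source
        v = (1.0 + q * q) * i     # v = M(q) i
        pts.append((i, v))
        q += i * dt               # dq/dt = i(t)
    return pts

pts = memristor_iv()
# The v-i loop is "pinched": v vanishes exactly where i vanishes,
# since v = M(q) i with M(q) >= 1 > 0.
```

Because $M(q) \ge 1$, the curve passes through the origin on every cycle; this pinched-hysteresis loop is exactly the signature shown in Fig. 1 that distinguishes a memristor from a plain resistor.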

emulate synaptic behavior [7], [8]. It is therefore natural to replace resistors with memristors to build a new model of neural networks that emulates the human brain, namely, the memristor-based recurrent neural networks (Fig. 2). Recently, the dynamic analysis of memristor-based neural networks has attracted many researchers' attention. In 2010, Hu and Wang [9] studied the global uniform asymptotic stability of memristor-based neural networks by constructing proper Lyapunov functionals and using differential inclusion theory. Subsequently, the stability and synchronization control of memristor-based recurrent neural networks were further investigated in [10]–[17].

It is well known that the stability of neural networks is very important because of its potential applications in many areas, including secure communication and information science [18]–[25]. However, in the implementation of neural networks, time delays, in particular time-varying delays, are unavoidably encountered in the signal transmission among the neurons because of the finite switching speed of neurons and amplifiers, and they can affect the stability of the neural system [26]–[30]. Hence, the control of neural networks with time-varying delays is of both theoretical and practical importance. Many control approaches have been proposed to stabilize chaotic networks and nonlinear systems, such as adaptive control [31], impulsive control [32], and intermittent control [33]–[38]. For memristor-based neural networks, there are also some works on stabilization and synchronization control; e.g., Wu et al. [39] investigated the exponential stabilization

2162-237X © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


Fig. 2. Circuit of memristor-based recurrent neural networks. $R_{ij}^*$ is the memristor between the feedback function $f_j(\cdot)$ and $x_i(t)$, $R_{ij}^{**}$ is the memristor between the feedback function $g_j(\cdot)$ and $x_i(t)$, $I_i(t)$ is the external input, $R_i$ and $C_i$ are the resistor and capacitor, respectively, $v_i$ and $v_i^*$ are outputs, and $i, j = 1, 2, \ldots, n$.

of delayed memristor-based neural networks via continuous linear state feedback control. Wen et al. [40] and Zhang and Shen [41] discussed the exponential synchronization of two delayed memristor-based neural networks, also via continuous state feedback control.

Intermittent control, first introduced to control linear econometric models in [42], has been used in practice for a variety of purposes in manufacturing, transportation, and communication. Moreover, compared with continuous control methods, intermittent control is more economical and efficient. Over the past several decades, numerous studies of intermittent control have been carried out [43]–[46]. However, few results on the stabilization of memristor-based chaotic neural networks via intermittent control can be found in the literature. On the other hand, neuron activation functions play an important role in the dynamical analysis of neurodynamic systems [23], [27], and general activation functions provide flexibility and opportunities in practical applications. In recent years, bounded Lurie-type activation functions have often been used for memristor-based neural networks [11], [14], [40], [41].

Motivated by the above discussion, in this paper we derive several new sufficient conditions ensuring exponential stabilization of memristor-based chaotic neural networks with both time-varying delays and general activation functions via periodically intermittent control. The main contributions of this paper lie in the following aspects. First, the dynamic analysis adopts nonsmooth analysis and control theory to handle memristor-based neural networks with a discontinuous right-hand side. Second, the periodically intermittent control technique, which is totally

different from the techniques employed in [10] and [39]–[41], is used to study the stabilization of the addressed neural networks with time-varying delays. Third, as a generalization of the obtained results, the exponential stabilization of the addressed neural networks under various feedback functions is discussed in detail. Finally, some new criteria are derived to ensure stabilization of the neural networks; the proposed results are easy to verify and extend earlier published results.

II. PRELIMINARIES

A. Model

In this paper, based on the previous works [10], [13], and [41], we consider a class of memristor-based neural networks with time-varying delays as follows:

$$\frac{dx_i(t)}{dt} = -d_i(x_i(t))x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t)) g_j(x_j(t-\tau_{ij}(t))), \quad t \ge 0, \ i = 1, 2, \ldots, n \tag{1}$$

where

$$d_i(x_i(t)) = \frac{1}{C_i}\Big[\sum_{j=1}^{n} (M_{ij} + W_{ij}) \times \delta_{ij} + \tilde{R}_i\Big], \qquad a_{ij}(x_i(t)) = \frac{M_{ij}}{C_i} \times \delta_{ij}, \qquad b_{ij}(x_i(t)) = \frac{W_{ij}}{C_i} \times \delta_{ij}$$

where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = -1$ otherwise; $x_i(t)$ is the state of the $i$th neuron at time $t$; $M_{ij}$ and $W_{ij}$ denote the memductances of the memristors $R_{ij}^*$ and $R_{ij}^{**}$, respectively; $d_i(x_i(t))$ is the

$i$th neuron self-inhibition at time $t$; $a_{ij}(x_i(t))$, $b_{ij}(x_i(t))$ are the memristive synaptic connection weights; $\tau_{ij}(t)$ corresponds to the transmission delay and satisfies $0 \le \tau_{ij}(t) \le \tau$, $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$ ($\tau > 0$ and $\sigma_0$ are constants); $R_i$ and $C_i$ are the resistor and capacitor; $\tilde{R}_i = 1/R_i$, $i, j = 1, 2, \ldots, n$. The memristor-based recurrent neural networks (1) can be implemented by very-large-scale integration circuits, as shown in Fig. 2.

If the parameters or time delays of the memristor-based neural networks (1) are appropriately chosen, the dynamical networks (1) may exhibit chaos [40], [41]; in this case, we call the dynamical networks (1) memristor-based chaotic neural networks. Chua [2] has shown that a memristor needs to exhibit only two sufficiently distinct equilibrium states, since digital computer applications require only two memory states. Based on the analysis in [2], Fig. 1 can be simplified to Fig. 3, which shows the simplified memductance of the memristor.

Fig. 3. Typical current–voltage characteristic of a memristor.

From the typical features of the memristor in Fig. 3, we have

$$d_i(x_i(t)) = \begin{cases} d_i^*, & |x_i(t)| \le T_i \\ d_i^{**}, & |x_i(t)| > T_i \end{cases} \qquad a_{ij}(x_i(t)) = \begin{cases} a_{ij}^*, & |x_i(t)| \le T_i \\ a_{ij}^{**}, & |x_i(t)| > T_i \end{cases} \qquad b_{ij}(x_i(t)) = \begin{cases} b_{ij}^*, & |x_i(t)| \le T_i \\ b_{ij}^{**}, & |x_i(t)| > T_i \end{cases}$$

in which the switching jumps $T_i > 0$, $d_i^* > 0$, $d_i^{**} > 0$, $a_{ij}^*$, $a_{ij}^{**}$, $b_{ij}^*$, $b_{ij}^{**}$, $i, j = 1, 2, \ldots, n$, are all constants. Obviously, the memristor-based neural networks (1) are state-dependent switched systems, which generalize conventional neural networks.

If the dynamical networks (1) are memristor-based chaotic neural networks, those networks are unstable. To stabilize the origin of (1), we introduce the controlled model

$$\frac{dx_i(t)}{dt} = -d_i(x_i(t))x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t)) g_j(x_j(t-\tau_{ij}(t))) + u_i(t), \quad t \ge 0, \ i = 1, 2, \ldots, n \tag{2}$$

where $d_i(x_i(t))$, $a_{ij}(x_i(t))$, $b_{ij}(x_i(t))$ are the same as defined in (1), and $u_i(t)$ is a periodically intermittent controller defined by

$$u_i(t) = \begin{cases} \sum_{j=1}^{n} \omega_{ij} x_j(t), & mT \le t < mT + \delta \\ 0, & mT + \delta \le t < (m+1)T \end{cases} \tag{3}$$

where $m = 1, 2, \ldots$, and the $\omega_{ij}$, $i, j = 1, 2, \ldots, n$, are constants denoting the control gains, $T$ denotes the control period, and $\delta$ ($0 < \delta < T$) is called the control width.

B. Notations

Throughout this paper, solutions of all the systems considered below are in Filippov's sense [47]. We define $\|\phi\| = \sup_{-\tau \le t \le 0} \big[\sum_{i=1}^{n} |\phi_i(t)|^p\big]^{1/p}$, where $p \ge 1$ is a constant, for any $\phi = (\phi_1(t), \phi_2(t), \ldots, \phi_n(t)) \in C([-\tau, 0], \mathbb{R}^n)$, and $\mathrm{co}\{\underline{\xi}_i, \overline{\xi}_i\}$ denotes the convex hull of $\{\underline{\xi}_i, \overline{\xi}_i\}$. Let $\underline{a}_{ij} = \min\{a_{ij}^*, a_{ij}^{**}\}$, $\overline{a}_{ij} = \max\{a_{ij}^*, a_{ij}^{**}\}$, $\underline{b}_{ij} = \min\{b_{ij}^*, b_{ij}^{**}\}$, and $\overline{b}_{ij} = \max\{b_{ij}^*, b_{ij}^{**}\}$. For a continuous function $k(t): \mathbb{R} \to \mathbb{R}$, $D^+ k(t)$ is called the upper right Dini derivative and is defined as $D^+ k(t) = \lim_{h \to 0^+} (1/h)\,(k(t+h) - k(t))$. The initial conditions of (1) are assumed to be $x_i(s) = \psi_i(s)$, $s \in [-\tau, 0]$, with $\psi_i(s) \in C([-\tau, 0], \mathbb{R})$, $i = 1, 2, \ldots, n$.

In this paper, we make the following assumption for (1):

(H1) For $i = 1, 2, \ldots, n$ and all $s_1, s_2 \in \mathbb{R}$ with $s_1 \ne s_2$, the neuron activation functions $f_i$, $g_i$ are bounded, $f_i(0) = g_i(0) = 0$, and satisfy

$$\sigma_i^- \le \frac{f_i(s_1) - f_i(s_2)}{s_1 - s_2} \le \sigma_i^+, \qquad \rho_i^- \le \frac{g_i(s_1) - g_i(s_2)}{s_1 - s_2} \le \rho_i^+$$

where $\sigma_i^-$, $\sigma_i^+$, $\rho_i^-$, $\rho_i^+$ are constants.

Let $\mathbb{R}^n$ be the space of $n$-dimensional real column vectors. For any $h = (h_1, h_2, \ldots, h_n)^T \in \mathbb{R}^n$, the norm is defined by $\|h\|_p = \big(\sum_{i=1}^{n} |h_i|^p\big)^{1/p}$, where $p \ge 1$.

C. Definitions and Lemma

Now, we introduce the following definition of a set-valued map [47]–[49].

Definition 1: Let $E \subset \mathbb{R}^n$. $x \mapsto F(x)$ is called a set-valued map from $E$ to $\mathbb{R}^n$ if to each point $x$ of the set $E$ there corresponds a nonempty set $F(x) \subset \mathbb{R}^n$. A set-valued map $F$ with nonempty values is said to be upper semicontinuous at $x_0 \in E$ if, for any open set $N$ containing $F(x_0)$, there exists a neighborhood $M$ of $x_0$ such that $F(M) \subset N$. $F(x)$ is said to have a closed (convex, compact) image if for each $x \in E$, $F(x)$ is closed (convex, compact).

Now, we introduce the concept of a Filippov solution [47]. Consider the following differential system in vector notation:

$$\frac{dx}{dt} = g(t, x), \qquad g: \mathbb{R}^+ \times \mathbb{R}^n \to \mathbb{R}^n. \tag{4}$$

Definition 2: For system (4) with a discontinuous right-hand side, a set-valued map is defined as

$$\Phi(t, x) = \bigcap_{\epsilon > 0}\ \bigcap_{\mu(N) = 0} \mathrm{co}\big[g\big(t, B(x, \epsilon) \setminus N\big)\big]$$


where $\mathrm{co}[\cdot]$ denotes the closure of the convex hull, $B(x, \epsilon)$ is the ball with center $x$ and radius $\epsilon$, and $\mu(N)$ is the Lebesgue measure of the set $N$. A vector-valued function $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$ defined on a nondegenerate interval $I \subset \mathbb{R}$ is called a Filippov solution of (4) if $x(t)$ is an absolutely continuous function on any subinterval $[l_1, l_2] \subset I$ that satisfies the differential inclusion

$$\frac{dx}{dt} \in \Phi(t, x), \quad \text{for a.a. } t \in I. \tag{5}$$

By the theories of differential inclusions and set-valued maps [47]–[49], it follows from (2) that

$$\frac{dx_i(t)}{dt} \in -\mathrm{co}[d_i(x_i(t))]x_i(t) + \sum_{j=1}^{n} \mathrm{co}[a_{ij}(x_i(t))] f_j(x_j(t)) + \sum_{j=1}^{n} \mathrm{co}[b_{ij}(x_i(t))] g_j(x_j(t-\tau_{ij}(t))) + u_i(t), \quad \text{for a.e. } t \ge 0, \ i = 1, 2, \ldots, n. \tag{6}$$

Clearly, in this paper, for $i, j = 1, 2, \ldots, n$, $\mathrm{co}[d_i(x_i(t))] = [\underline{d}_i, \overline{d}_i]$, $\mathrm{co}[a_{ij}(x_i(t))] = [\underline{a}_{ij}, \overline{a}_{ij}]$, and $\mathrm{co}[b_{ij}(x_i(t))] = [\underline{b}_{ij}, \overline{b}_{ij}]$. Under assumption (H1), it is obvious that the set-valued map

$$x_i(t) \mapsto -\mathrm{co}[d_i(x_i(t))]x_i(t) + \sum_{j=1}^{n} \mathrm{co}[a_{ij}(x_i(t))] f_j(x_j(t)) + \sum_{j=1}^{n} \mathrm{co}[b_{ij}(x_i(t))] g_j(x_j(t-\tau_{ij}(t))), \quad \text{for a.e. } t \ge 0, \ i = 1, 2, \ldots, n \tag{7}$$
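Concretely, each switched coefficient in (1) takes one of two branch values, and the interval coefficients $\mathrm{co}[\cdot]$ appearing in (6) and (7) are simply the convex hulls of those two values. A minimal sketch (the threshold $T_i = 1$ and the branch values are taken from the numerical example in Section IV):

```python
def switched(v_in, v_out, threshold=1.0):
    """State-dependent coefficient: v_in when |x| <= threshold, v_out otherwise."""
    return lambda x: v_in if abs(x) <= threshold else v_out

def co(v_in, v_out):
    """co[.] of a two-valued coefficient: the closed interval [min, max]
    of the two branch values (independent of the state)."""
    return (min(v_in, v_out), max(v_in, v_out))

b11 = switched(-1.5, -1.2)   # b_11^* = -1.5, b_11^** = -1.2
print(b11(0.5), b11(2.0))    # -1.5 -1.2
print(co(-1.5, -1.2))        # (-1.5, -1.2)
```

The convex-hull interval is state-independent, which is what lets the differential inclusion (6) absorb the discontinuity of the switched right-hand side.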

has nonempty compact convex values. Furthermore, it is upper semicontinuous.

Definition 3: A function (in Filippov's sense) $x^*(t) = (x_1^*(t), x_2^*(t), \ldots, x_n^*(t))^T$ is a solution of (1) with the initial conditions $\psi(s) = (\psi_1(s), \psi_2(s), \ldots, \psi_n(s))^T \in C([-\tau, 0], \mathbb{R}^n)$ if $x^*(t)$ is an absolutely continuous function on any compact interval of $[0, +\infty)$ and satisfies the differential inclusion

$$\frac{dx_i^*(t)}{dt} \in -\mathrm{co}[d_i(x_i^*(t))]x_i^*(t) + \sum_{j=1}^{n} \mathrm{co}[a_{ij}(x_i^*(t))] f_j(x_j^*(t)) + \sum_{j=1}^{n} \mathrm{co}[b_{ij}(x_i^*(t))] g_j(x_j^*(t-\tau_{ij}(t))), \quad \text{for a.e. } t \ge 0, \ i = 1, 2, \ldots, n. \tag{8}$$

Definition 4: The equilibrium $x^* = (x_1^*, x_2^*, \ldots, x_n^*)^T$ of (1) is said to be globally exponentially stable if there exist constants $\varepsilon > 0$ and $M \ge 1$ such that

$$\Big(\sum_{i=1}^{n} |x_i(t) - x_i^*|^p\Big)^{1/p} \le M e^{-\varepsilon t}\, \|\psi - x^*\|, \quad \text{for all } t \ge 0$$

where $\psi(s) = (\psi_1(s), \psi_2(s), \ldots, \psi_n(s))^T \in C([-\tau, 0], \mathbb{R}^n)$.

Lemma 1: If assumption (H1) holds, then there is at least a local solution $x(t)$ of (1) with initial condition $\psi(s) = (\psi_1(s), \psi_2(s), \ldots, \psi_n(s))^T \in C([-\tau, 0], \mathbb{R}^n)$, and the local solution $x(t)$ can be extended to the interval $[0, +\infty)$ in the sense of Filippov.

In the following section, the paper aims to find suitable $T$, $\delta$, and $\omega_{ij}$ such that the memristor-based chaotic neural networks (1) achieve exponential stability under the periodically intermittent controller (3).

III. MAIN RESULTS

Under controller (3), system (2) can be rewritten as

$$\frac{dx_i(t)}{dt} = \begin{cases} -d_i(x_i(t))x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t)) g_j(x_j(t-\tau_{ij}(t))) + \sum_{j=1}^{n} \omega_{ij} x_j(t), & mT \le t < mT + \delta \\ -d_i(x_i(t))x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t)) g_j(x_j(t-\tau_{ij}(t))), & mT + \delta \le t < (m+1)T \end{cases} \tag{9}$$

where $m = 1, 2, \ldots$, $i = 1, 2, \ldots, n$, $t \ge 0$. In addition, by the theories of differential inclusions and set-valued maps [47]–[49], it follows from (9) that

$$\frac{dx_i(t)}{dt} \in \begin{cases} -\mathrm{co}[d_i(x_i(t))]x_i(t) + \sum_{j=1}^{n} \mathrm{co}[a_{ij}(x_i(t))] f_j(x_j(t)) + \sum_{j=1}^{n} \mathrm{co}[b_{ij}(x_i(t))] g_j(x_j(t-\tau_{ij}(t))) + \sum_{j=1}^{n} \omega_{ij} x_j(t), & mT \le t < mT + \delta \\ -\mathrm{co}[d_i(x_i(t))]x_i(t) + \sum_{j=1}^{n} \mathrm{co}[a_{ij}(x_i(t))] f_j(x_j(t)) + \sum_{j=1}^{n} \mathrm{co}[b_{ij}(x_i(t))] g_j(x_j(t-\tau_{ij}(t))), & mT + \delta \le t < (m+1)T. \end{cases} \tag{10}$$

Now, before we state the main results, the following notations are introduced. Let

$$\lambda_i = p\underline{d}_i - \sum_{j=1}^{n} \sum_{l=1}^{p-1} A_{ij}^{p\alpha_{lij}} \sigma_j^{p\beta_{lij}} - \sum_{j=1}^{n} \frac{\mu_j}{\mu_i} A_{ji}^{p\alpha_{pji}} \sigma_i^{p\beta_{pji}} - \sum_{j=1}^{n} \sum_{l=1}^{p-1} B_{ij}^{p\gamma_{lij}} \rho_j^{p\xi_{lij}} \tag{11}$$

$$\kappa_i = p\underline{d}_i - \sum_{j=1}^{n} \sum_{l=1}^{p-1} A_{ij}^{p\overline{\alpha}_{lij}} \sigma_j^{p\overline{\beta}_{lij}} - \sum_{j=1}^{n} \frac{\mu_j}{\mu_i} A_{ji}^{p\overline{\alpha}_{pji}} \sigma_i^{p\overline{\beta}_{pji}} - \sum_{j=1}^{n} \sum_{l=1}^{p-1} B_{ij}^{p\overline{\gamma}_{lij}} \rho_j^{p\overline{\xi}_{lij}} \tag{12}$$

$$\nu_i = p\,\omega_{ii} + \sum_{j=1, j \ne i}^{n} \sum_{l=1}^{p-1} |\omega_{ij}|^{p\zeta_{lij}} + \sum_{j=1, j \ne i}^{n} \frac{\mu_j}{\mu_i} |\omega_{ji}|^{p\zeta_{pji}} \tag{13}$$

$$\eta_i = \sum_{j=1}^{n} \frac{\mu_j}{\mu_i} B_{ji}^{p\gamma_{pji}} \rho_i^{p\xi_{pji}} \tag{14}$$

where $A_{ij} = \max\{|a_{ij}^*|, |a_{ij}^{**}|\}$, $B_{ij} = \max\{|b_{ij}^*|, |b_{ij}^{**}|\}$, $\sigma_i = \max\{|\sigma_i^-|, |\sigma_i^+|\}$, $\rho_i = \max\{|\rho_i^-|, |\rho_i^+|\}$, $\mu_i > 0$, and the $\alpha_{lij}$, $\beta_{lij}$, $\gamma_{lij}$, $\xi_{lij}$, $\overline{\alpha}_{lij}$, $\overline{\beta}_{lij}$, $\overline{\gamma}_{lij}$, $\overline{\xi}_{lij}$, $\zeta_{lij}$ are nonnegative real constants satisfying $\sum_{l=1}^{p} \alpha_{lij} = \sum_{l=1}^{p} \beta_{lij} = \sum_{l=1}^{p} \gamma_{lij} = \sum_{l=1}^{p} \xi_{lij} = \sum_{l=1}^{p} \overline{\alpha}_{lij} = \sum_{l=1}^{p} \overline{\beta}_{lij} = \sum_{l=1}^{p} \overline{\gamma}_{lij} = \sum_{l=1}^{p} \overline{\xi}_{lij} = \sum_{l=1}^{p} \zeta_{lij} = 1$.

In the following, we give another assumption:

(H2) $\lambda_i - \nu_i - \eta_i/(1 - \sigma_0) > 0$ and there exist $\theta_i > 0$ such that $\kappa_i + \theta_i - \eta_i/(1 - \sigma_0) > 0$, where $\sigma_0$ satisfies $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$, $i, j = 1, 2, \ldots, n$, $t \ge 0$.

Now, under assumption (H2), we consider the function

$$H_i(\breve{\varepsilon}_i) = \lambda_i - \nu_i - \breve{\varepsilon}_i - \frac{\eta_i}{1 - \sigma_0} e^{\breve{\varepsilon}_i \tau}, \quad i = 1, 2, \ldots, n, \ \breve{\varepsilon}_i \ge 0.$$

Then $H_i(0) = \lambda_i - \nu_i - \eta_i/(1 - \sigma_0) > 0$ and $H_i'(\breve{\varepsilon}_i) = -1 - \frac{\eta_i \tau}{1 - \sigma_0} e^{\breve{\varepsilon}_i \tau} < 0$. On the other hand, $H_i(\breve{\varepsilon}_i)$ is continuous on $[0, +\infty)$ and $H_i(\breve{\varepsilon}_i) \to -\infty$ as $\breve{\varepsilon}_i \to +\infty$; hence there exists a positive number $\breve{\varepsilon}_i^*$ such that $H_i(\breve{\varepsilon}_i^*) = 0$ and $H_i(\breve{\varepsilon}_i) > 0$ for $\breve{\varepsilon}_i \in (0, \breve{\varepsilon}_i^*)$. Denoting $\breve{\varepsilon} = \min_{1 \le i \le n}\{\breve{\varepsilon}_i^*\}$, we have

$$H_i(\breve{\varepsilon}) = \lambda_i - \nu_i - \breve{\varepsilon} - \frac{\eta_i}{1 - \sigma_0} e^{\breve{\varepsilon}\tau} \ge 0, \quad i = 1, 2, \ldots, n. \tag{15}$$

In a similar way, there exists a constant $\hat{\varepsilon} > 0$ such that

$$Q_i(\hat{\varepsilon}) = \kappa_i + \theta_i - \hat{\varepsilon} - \frac{\eta_i}{1 - \sigma_0} e^{\hat{\varepsilon}\tau} \ge 0, \quad i = 1, 2, \ldots, n. \tag{16}$$

Let $\varepsilon = \min\{\breve{\varepsilon}, \hat{\varepsilon}\}$; then $H_i(\varepsilon) \ge 0$ and $Q_i(\varepsilon) \ge 0$, $i = 1, 2, \ldots, n$.

Now, we are in a position to present the following results.

Theorem 1: Under assumptions (H1) and (H2), with $\tau_{ij}(t)$ satisfying $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$, the origin of system (1) is globally exponentially stable under controller (3) if the following condition is also satisfied:

(H3) $\varepsilon - (T - \delta)\theta/T > 0$, where $\theta = \max_{1 \le i \le n}\{\theta_i\}$.

Proof: See the Appendix.

Theorem 2: Under assumption (H1), with $\tau_{ij}(t)$ satisfying $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$, the origin of system (1) is globally exponentially stable under controller (3) if the following conditions are also satisfied.
1) $\nu_i < 0$ and $\lambda_i - \nu_i - \eta_i/(1 - \sigma_0) > 0$, $i = 1, 2, \ldots, n$.
2) $\varepsilon - (T - \delta)\nu/T > 0$, where $\nu = \max_{1 \le i \le n}\{|\nu_i|\}$.

Proof: Let $\overline{\alpha}_{lij} = \alpha_{lij}$, $\overline{\beta}_{lij} = \beta_{lij}$, $\overline{\gamma}_{lij} = \gamma_{lij}$, $\overline{\xi}_{lij} = \xi_{lij}$ for all $l = 1, 2, \ldots, p$ and $i, j = 1, 2, \ldots, n$ in (12); then $\lambda_i = \kappa_i$. Under condition 1), select $\theta_i = -\nu_i$; Theorem 2 then follows immediately from Theorem 1.

In Theorem 1, if we now choose $\alpha_{lij} = \beta_{lij} = \gamma_{lij} = \xi_{lij} = \overline{\alpha}_{lij} = \overline{\beta}_{lij} = \overline{\gamma}_{lij} = \overline{\xi}_{lij} = \zeta_{lij} = 1/p$ for all $l = 1, 2, \ldots, p$ and $i, j = 1, 2, \ldots, n$, then $\lambda_i = \kappa_i$, and $\lambda_i$, $\nu_i$, $\eta_i$ reduce, respectively, to $\overline{\lambda}_i$, $\overline{\nu}_i$, $\overline{\eta}_i$ as follows:

$$\overline{\lambda}_i = p\underline{d}_i - (p-1)\sum_{j=1}^{n} A_{ij}\sigma_j - (p-1)\sum_{j=1}^{n} B_{ij}\rho_j - \sum_{j=1}^{n} \frac{\mu_j}{\mu_i} A_{ji}\sigma_i \tag{17}$$

$$\overline{\nu}_i = p\,\omega_{ii} + (p-1)\sum_{j=1, j \ne i}^{n} |\omega_{ij}| + \sum_{j=1, j \ne i}^{n} \frac{\mu_j}{\mu_i} |\omega_{ji}| \tag{18}$$

$$\overline{\eta}_i = \sum_{j=1}^{n} \frac{\mu_j}{\mu_i} B_{ji}\rho_i. \tag{19}$$

Then, as in the proof of Theorem 1, we have the following results.

Theorem 3: Under assumption (H1), with $\tau_{ij}(t)$ satisfying $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$, the origin of system (1) is globally exponentially stable under controller (3) if the following conditions are also satisfied.
1) $\overline{\lambda}_i - \overline{\nu}_i - \overline{\eta}_i/(1 - \sigma_0) > 0$, $i = 1, 2, \ldots, n$.
2) There exist $\overline{\theta}_i > 0$ such that $\overline{\lambda}_i + \overline{\theta}_i - \overline{\eta}_i/(1 - \sigma_0) > 0$.
3) $\varepsilon - (T - \delta)\overline{\theta}/T > 0$, where $\overline{\theta} = \max_{1 \le i \le n}\{\overline{\theta}_i\}$.

Combining Theorems 2 and 3, we can easily obtain the following result.

Theorem 4: Under assumption (H1), with $\tau_{ij}(t)$ satisfying $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$, the origin of system (1) is globally exponentially stable under controller (3) if the following conditions are also satisfied.
1) $\overline{\nu}_i < 0$ and $\overline{\lambda}_i - \overline{\nu}_i - \overline{\eta}_i/(1 - \sigma_0) > 0$, $i = 1, 2, \ldots, n$.
2) $\varepsilon - (T - \delta)\overline{\nu}/T > 0$, where $\overline{\nu} = \max_{1 \le i \le n}\{|\overline{\nu}_i|\}$.

Now, to show that our activation functions are more general, we denote $G \triangleq \{\ell \in C(\mathbb{R}, \mathbb{R}) \mid s\ell(s) > 0 \text{ for } s \ne 0, \ D^+\ell(s) \ge 0, \ \ell(0) = 0, \ s \in \mathbb{R}\}$. For $i = 1, 2, \ldots, n$, two different classes of activation functions are listed as follows.
1) $\mathcal{H}_1 \triangleq \{h_i(\cdot) \mid h_i(\cdot) \in G, \ \exists k_i > 0, \ s_i h_i(s_i) \le k_i s_i^2, \ s_i \in \mathbb{R}\}$, i.e., $h_i(\cdot)$ is a Lurie-type function.
2) $\mathcal{H}_2 \triangleq \{h_i(\cdot) \mid h_i(\cdot) \in C(\mathbb{R}, \mathbb{R}), \ |h_i(x_i) - h_i(y_i)| \le k_i |x_i - y_i|, \ x_i, y_i \in \mathbb{R}\}$, i.e., $h_i(\cdot)$ is a Lipschitz-type function.

Remark 1: In [11], [37], [40], [41], and [46], the stability of neural networks with Lurie-type activation functions was studied; that is, those works require $\sigma_i^- \ge 0$, $\rho_i^- \ge 0$, $\sigma_i^+ \ge 0$, $\rho_i^+ \ge 0$ in assumption (H1) of this paper. However, the activation functions discussed in this paper need not be monotonically nondecreasing. What is more, the constants $\sigma_i^-$, $\rho_i^-$, $\sigma_i^+$, $\rho_i^+$ in assumption (H1) are allowed to be positive, negative, or zero. Obviously, assumption (H1) of this paper is weaker and more general than $\mathcal{H}_1$ and $\mathcal{H}_2$.

Remark 2: In fact, for a fixed index $i \in N = \{1, 2, \ldots, n\}$, there are four different combinations of the activation functions $f_i(\cdot)$, $g_i(\cdot)$ in the memristor-based neural networks (1): type 1) $f_i(\cdot) \in \mathcal{H}_1$, $g_i(\cdot) \in \mathcal{H}_1$; type 2) $f_i(\cdot) \in \mathcal{H}_1$, $g_i(\cdot) \in \mathcal{H}_2$; type 3) $f_i(\cdot) \in \mathcal{H}_2$, $g_i(\cdot) \in \mathcal{H}_1$; and type 4) $f_i(\cdot) \in \mathcal{H}_2$, $g_i(\cdot) \in \mathcal{H}_2$.

Remark 3: For $i = 1, 2, \ldots, n$, there are $4^n$ different combinations of activation functions for the memristor-based neural networks (1). In [11], [37], [40], [41], and [46], the


scholars only studied the type 1) combination of activation functions, that is, a single combination. In this paper, by contrast, our results cover all the different combinations of activation functions. In addition, whereas [18]–[36] and [43]–[45] concern the stability of neural networks with a continuous right-hand side, our results concern neural networks with a discontinuous right-hand side. Therefore, the results of this paper are less conservative and more general.

Now, we present some corollaries corresponding to the above theorems when the memristor-based neural networks (1) have type 1) activation functions. The other types can be discussed similarly and are omitted here. Let $f_i(\cdot), g_i(\cdot) \in \mathcal{H}_1$; then $\sigma_i^- = \rho_i^- = 0$, $\sigma_i^+ > 0$, $\rho_i^+ > 0$ in assumption (H1). Let

$$\lambda_i^* = p(\underline{d}_i - A_{ii}\sigma_i) - \sum_{j=1, j \ne i}^{n} \sum_{l=1}^{p-1} A_{ij}^{p\alpha_{lij}} \sigma_j^{p\beta_{lij}} - \sum_{j=1}^{n} \sum_{l=1}^{p-1} B_{ij}^{p\gamma_{lij}} \rho_j^{p\xi_{lij}} - \sum_{j=1, j \ne i}^{n} \frac{\mu_j}{\mu_i} A_{ji}^{p\alpha_{pji}} \sigma_i^{p\beta_{pji}} \tag{20}$$

$$\kappa_i^* = p(\underline{d}_i - A_{ii}\sigma_i) - \sum_{j=1, j \ne i}^{n} \sum_{l=1}^{p-1} A_{ij}^{p\overline{\alpha}_{lij}} \sigma_j^{p\overline{\beta}_{lij}} - \sum_{j=1}^{n} \sum_{l=1}^{p-1} B_{ij}^{p\overline{\gamma}_{lij}} \rho_j^{p\overline{\xi}_{lij}} - \sum_{j=1, j \ne i}^{n} \frac{\mu_j}{\mu_i} A_{ji}^{p\overline{\alpha}_{pji}} \sigma_i^{p\overline{\beta}_{pji}} \tag{21}$$

$$\nu_i^* = p\,\omega_{ii} + \sum_{j=1, j \ne i}^{n} \sum_{l=1}^{p-1} |\omega_{ij}|^{p\zeta_{lij}} + \sum_{j=1, j \ne i}^{n} \frac{\mu_j}{\mu_i} |\omega_{ji}|^{p\zeta_{pji}} \tag{22}$$

$$\eta_i^* = \sum_{j=1}^{n} \frac{\mu_j}{\mu_i} B_{ji}^{p\gamma_{pji}} \rho_i^{p\xi_{pji}} \tag{23}$$

where $A_{ii} = \max\{a_{ii}^*, a_{ii}^{**}\}$, $i = 1, 2, \ldots, n$.

Corollary 1: If $f_i(\cdot), g_i(\cdot) \in \mathcal{H}_1$ and $\tau_{ij}(t)$ satisfies $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$, the origin of system (1) is globally exponentially stable under the intermittent controller (3) if the following conditions also hold.
1) $\lambda_i^* - \nu_i^* - \eta_i^*/(1 - \sigma_0) > 0$, $i = 1, 2, \ldots, n$.
2) There exist $\theta_i^* > 0$ such that $\kappa_i^* + \theta_i^* - \eta_i^*/(1 - \sigma_0) > 0$.
3) $\varepsilon^* - (T - \delta)\theta^*/T > 0$, where $\theta^* = \max_{1 \le i \le n}\{\theta_i^*\}$.

Corollary 2: If $f_i(\cdot), g_i(\cdot) \in \mathcal{H}_1$ and $\tau_{ij}(t)$ satisfies $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$, the origin of system (1) is globally exponentially stable under the intermittent controller (3) if the following conditions also hold.
1) $\nu_i^* < 0$ and $\lambda_i^* - \nu_i^* - \eta_i^*/(1 - \sigma_0) > 0$, $i = 1, 2, \ldots, n$.
2) $\varepsilon^* - (T - \delta)\nu^*/T > 0$, where $\nu^* = \max_{1 \le i \le n}\{|\nu_i^*|\}$.

In Corollary 1, if we now choose $\alpha_{lij} = \beta_{lij} = \gamma_{lij} = \xi_{lij} = \overline{\alpha}_{lij} = \overline{\beta}_{lij} = \overline{\gamma}_{lij} = \overline{\xi}_{lij} = \zeta_{lij} = 1/p$ for all $l = 1, 2, \ldots, p$ and $i, j = 1, 2, \ldots, n$, then $\lambda_i^* = \kappa_i^*$, and $\lambda_i^*$, $\nu_i^*$, $\eta_i^*$ reduce, respectively, to $\overline{\lambda}_i^*$, $\overline{\nu}_i^*$, $\overline{\eta}_i^*$ as follows:

$$\overline{\lambda}_i^* = p(\underline{d}_i - A_{ii}\sigma_i) - (p-1)\sum_{j=1, j \ne i}^{n} A_{ij}\sigma_j - (p-1)\sum_{j=1}^{n} B_{ij}\rho_j - \sum_{j=1, j \ne i}^{n} \frac{\mu_j}{\mu_i} A_{ji}\sigma_i \tag{24}$$

$$\overline{\nu}_i^* = p\,\omega_{ii} + (p-1)\sum_{j=1, j \ne i}^{n} |\omega_{ij}| + \sum_{j=1, j \ne i}^{n} \frac{\mu_j}{\mu_i} |\omega_{ji}| \tag{25}$$

$$\overline{\eta}_i^* = \sum_{j=1}^{n} \frac{\mu_j}{\mu_i} B_{ji}\rho_i. \tag{26}$$

Then we have the following results.

Corollary 3: If $f_i(\cdot), g_i(\cdot) \in \mathcal{H}_1$ and $\tau_{ij}(t)$ satisfies $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$, the origin of system (1) is globally exponentially stable under controller (3) if the following conditions are also satisfied.
1) $\overline{\lambda}_i^* - \overline{\nu}_i^* - \overline{\eta}_i^*/(1 - \sigma_0) > 0$, $i = 1, 2, \ldots, n$.
2) There exist $\overline{\theta}_i^* > 0$ such that $\overline{\lambda}_i^* + \overline{\theta}_i^* - \overline{\eta}_i^*/(1 - \sigma_0) > 0$.
3) $\varepsilon^* - (T - \delta)\overline{\theta}^*/T > 0$, where $\overline{\theta}^* = \max_{1 \le i \le n}\{\overline{\theta}_i^*\}$.

Corollary 4: If $f_i(\cdot), g_i(\cdot) \in \mathcal{H}_1$ and $\tau_{ij}(t)$ satisfies $\dot{\tau}_{ij}(t) \le \sigma_0 < 1$, the origin of system (1) is globally exponentially stable under controller (3) if the following conditions are also satisfied.
1) $\overline{\nu}_i^* < 0$ and $\overline{\lambda}_i^* - \overline{\nu}_i^* - \overline{\eta}_i^*/(1 - \sigma_0) > 0$, $i = 1, 2, \ldots, n$.
2) $\varepsilon^* - (T - \delta)\overline{\nu}^*/T > 0$, where $\overline{\nu}^* = \max_{1 \le i \le n}\{|\overline{\nu}_i^*|\}$.

Remark 4: For the neural networks (1), when $d_i^* = d_i^{**}$, $a_{ij}^* = a_{ij}^{**}$, $b_{ij}^* = b_{ij}^{**}$, $\sigma_i^- = \rho_i^- = 0$, $\sigma_i^+ > 0$, $\rho_i^+ > 0$ in assumption (H1), and $\overline{\gamma}_{lij} = \gamma_{lij}$, $\overline{\xi}_{lij} = \xi_{lij}$ in (21), Corollary 1 of this paper reduces to the main result, Theorem 1, of [46]. Hence, our results are more general and extend those in [46]. In this paper, intermittent control is generalized to a more general neural network model, and the traditional restriction that the control width be greater than the time delay [33], [34], [36], [43]–[45] is also removed.

Remark 5: Compared with works that use LMI techniques to obtain exponential stabilization conditions, such as [27] and [35], the conditions in this paper are derived directly from the parameters of the neural networks and are easily verified.

Remark 6: In the existing literature, the exponential stabilization of conventional neural networks via periodically intermittent control has been established [43]–[46]. However, a memristor-based neural network model is a system family, and the existing results do not yield its exponential stabilization via periodically intermittent control. Our criteria guarantee the global exponential stability of memristor-based neural networks via periodically intermittent control with various activation functions. Therefore, our results complement and extend the earlier publications.

Remark 7: As an application of Theorem 4, we outline how to design a suitable intermittent controller that exponentially stabilizes a given chaotic network satisfying (H1).
Step 1) Compute the value of $\tau$ according to its definition, and compute $\overline{\lambda}_i$ and $\overline{\eta}_i$ according to (17) and (19), respectively.
Step 2) Select control strengths $\omega_{ij}$ such that the $\overline{\nu}_i$ satisfy condition 1) of Theorem 4.
Step 3) Compute the values of $\breve{\varepsilon}_i^*$, $\varepsilon$, and $\overline{\nu}$ (e.g., using MATLAB).
Step 4) Choose a control period $T$ and take a control width $\delta$ satisfying $(1 - \varepsilon/\overline{\nu})T < \delta < T$; with the chosen $\omega_{ij}$, $T$, $\delta$, the intermittent controller (3) is designed.
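For $p = 1$, the quantities of Remark 7 reduce to $\overline{\lambda}_i = \underline{d}_i - \sum_j (\mu_j/\mu_i) A_{ji}\sigma_i$, $\overline{\nu}_i = \omega_{ii} + \sum_{j \ne i} (\mu_j/\mu_i)|\omega_{ji}|$, and $\overline{\eta}_i = \sum_j (\mu_j/\mu_i) B_{ji}\rho_i$, and Step 3 amounts to finding the root of $H_i(\varepsilon) = \overline{\lambda}_i - \overline{\nu}_i - \varepsilon - \overline{\eta}_i e^{\varepsilon\tau}/(1 - \sigma_0) = 0$. The sketch below carries out Steps 1–4 by bisection; the numerical values are those of the example in Section IV, while the bisection bracket and tolerance are our choices.

```python
import math

def eps_root(lam, nu, eta, sigma0, tau, hi=50.0, tol=1e-12):
    """Root of H(e) = lam - nu - e - eta*exp(e*tau)/(1-sigma0), which is
    strictly decreasing in e; bisection keeps H(lo) > 0 >= H(hi)."""
    H = lambda e: lam - nu - e - eta * math.exp(e * tau) / (1.0 - sigma0)
    lo = 0.0
    if H(lo) <= 0.0:
        raise ValueError("condition lam - nu - eta/(1-sigma0) > 0 fails")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(mid) > 0.0 else (lo, mid)
    return lo

# Step 1: tau, lambda_bar_i, eta_bar_i for the example (p = mu_1 = mu_2 = 1)
lam, eta, sigma0, tau = [-0.1, -14.8], [1.6, 2.6], 0.25, 1.0
# Step 2: control gains omega_ij chosen so that nu_bar_i < 0
nu = [-4.0, -20.0]
# Step 3: eps = min_i of the roots, and nu_bar = max_i |nu_bar_i|
eps = min(eps_root(lam[i], nu[i], eta[i], sigma0, tau) for i in range(2))
nu_bar = max(abs(v) for v in nu)
# Step 4: for a period T, any width delta with (1 - eps/nu_bar) T < delta < T works
T = 5.0
delta_min = (1.0 - eps / nu_bar) * T
print(round(eps, 4), round(delta_min, 4))
```

With these numbers the computation reproduces $\varepsilon \approx 0.3382$ and the bound $\delta > 4.915$ used in Section IV.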

Fig. 4. Chaotic attractors of the memristor-based neural networks (27).

Fig. 5. Time behavior of the state variable $x_1(t)$ of (27) under periodically intermittent control (3) with $T = 5$ and $\delta = 4.95$, for 100 randomly chosen initial conditions.

IV. NUMERICAL EXAMPLE

Now, we perform some numerical simulations to illustrate our results.

Example: Consider the following two-dimensional memristor-based neural networks:

$$\frac{dx_i(t)}{dt} = -x_i(t) + \sum_{j=1}^{2} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{2} b_{ij}(x_i(t)) g_j(x_j(t-\tau_{ij}(t))), \quad i = 1, 2 \tag{27}$$

where $a_{11}(x_1(t)) = 1$, $a_{22}(x_2(t)) = 1.8$, and

$$a_{12}(x_1(t)) = \begin{cases} 12, & |x_1(t)| \le 1 \\ 14, & |x_1(t)| > 1 \end{cases} \qquad a_{21}(x_2(t)) = \begin{cases} 0.08, & |x_2(t)| \le 1 \\ 0.1, & |x_2(t)| > 1 \end{cases}$$

$$b_{11}(x_1(t)) = \begin{cases} -1.5, & |x_1(t)| \le 1 \\ -1.2, & |x_1(t)| > 1 \end{cases} \qquad b_{12}(x_1(t)) = \begin{cases} 1.0, & |x_1(t)| \le 1 \\ 0.8, & |x_1(t)| > 1 \end{cases}$$

$$b_{21}(x_2(t)) = \begin{cases} 0.08, & |x_2(t)| \le 1 \\ 0.1, & |x_2(t)| > 1 \end{cases} \qquad b_{22}(x_2(t)) = \begin{cases} -1.4, & |x_2(t)| \le 1 \\ -1.6, & |x_2(t)| > 1. \end{cases}$$

In addition, $\tau_{ij}(t) = e^t/(1 + e^t)$, and the activation functions are taken as $f_i(x_i) = \sin(x_i)$, $g_i(x_i) = \frac{1}{2}(|x_i + 1| - |x_i - 1|)$, $i, j = 1, 2$. The chaotic attractor of model (27) with the initial condition $x_1(s) = 0.5$, $x_2(s) = 0.6$, $\forall s \in [-1, 0)$, can be observed in Fig. 4.

From (27), we get $A_{11} = 1$, $A_{12} = 14$, $A_{21} = 0.1$, $A_{22} = 1.8$, $B_{11} = 1.5$, $B_{12} = 1$, $B_{21} = 0.1$, $B_{22} = 1.6$, $\rho_i^- = 0$, $\sigma_i^- = -1$, $\rho_i^+ = \sigma_i^+ = 1$, and $\dot{\tau}_{ij}(t) \le \sigma_0 = 1/4 < 1$, $i, j = 1, 2$. Now, let $p = \mu_1 = \mu_2 = 1$; then, from (17)–(19), we get

$$\overline{\lambda}_1 = 1 - A_{11} - A_{21} = -0.1, \quad \overline{\lambda}_2 = 1 - A_{22} - A_{12} = -14.8, \quad \overline{\eta}_1 = B_{11} + B_{21} = 1.6, \quad \overline{\eta}_2 = B_{22} + B_{12} = 2.6.$$

Now, we take the controller matrix $W = (\omega_{ij})_{n \times n}$ in (3) given by $\omega_{11} = -5$, $\omega_{22} = -21$, and

Fig. 6. Time behavior of the state variable $x_2(t)$ of (27) under periodically intermittent control (3) with $T = 5$ and $\delta = 4.95$, for 100 randomly chosen initial conditions.

$\omega_{12} = \omega_{21} = 1$. Then $\overline{\nu}_1 = \omega_{11} + |\omega_{21}| = -4$ and $\overline{\nu}_2 = \omega_{22} + |\omega_{12}| = -20$, and we obtain $\varepsilon = 0.3382$. Choosing $T = 5$, condition 2) of Theorem 4 gives $\delta > 4.9154$. Selecting $\delta = 4.95$, all the conditions of Theorem 4 are satisfied, which implies that under controller (3) the origin of system (27) is exponentially stable. Under the above conditions and the periodically intermittent control (3), the time behaviors of the state variables $x_1(t)$ and $x_2(t)$ for 100 randomly chosen initial conditions are shown in Figs. 5 and 6, respectively.

Remark 8: Because $d_i(x_i(t))$, $a_{ij}(x_i(t))$, and $b_{ij}(x_i(t))$, $i, j = 1, 2, \ldots, n$, are discontinuous, the results obtained in [33]–[38] and [43]–[45] for neural networks with a continuous right-hand side cannot be used here. Moreover, in (27), $f_i(\cdot) \in \mathcal{H}_2$ with $\sigma_i^- \ne 0$, $i = 1, 2$, so the results obtained in [11], [37], [40], [41], and [46] cannot be used here either. In addition, the memristor-based neural network (27) is a chaotic system, so the results obtained in [9] and [11]–[17] also cannot be used here.

Remark 9: In the Example, one can also choose the time delays $\tau_{ij}(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-s^2/2}\, ds$, $i, j = 1, 2, \ldots, n$. Similar examples can be given for the other theorems and are omitted here. In addition, we believe that our results can be extended to memristor-based neural networks with discontinuous activation functions; we will study this issue in future research.
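The stabilization shown in Figs. 5 and 6 can be reproduced with a simple forward Euler scheme for the delayed switched system (27) under controller (3). This is an illustrative sketch only: the step size, horizon, and nearest-sample lookup for the delayed state are our choices, not the paper's.

```python
import math
from collections import deque

# switched coefficients of (27); threshold T_i = 1
def sw(v_in, v_out):
    return lambda x: v_in if abs(x) <= 1.0 else v_out

a = [[lambda x: 1.0, sw(12.0, 14.0)], [sw(0.08, 0.1), lambda x: 1.8]]
b = [[sw(-1.5, -1.2), sw(1.0, 0.8)], [sw(0.08, 0.1), sw(-1.4, -1.6)]]
W = [[-5.0, 1.0], [1.0, -21.0]]              # control gains of the example
f = math.sin
g = lambda s: 0.5 * (abs(s + 1.0) - abs(s - 1.0))

def simulate(x0=(0.5, 0.6), t_end=30.0, h=0.001, T=5.0, delta=4.95):
    n_hist = int(round(1.0 / h))             # tau_ij(t) < 1, so keep 1 s of history
    hist = deque([list(x0)] * (n_hist + 1), maxlen=n_hist + 1)
    x = list(x0)
    for k in range(int(round(t_end / h))):
        t = k * h
        tau = math.exp(t) / (1.0 + math.exp(t))
        lag = min(int(round(tau / h)), n_hist)
        xd = hist[-1 - lag]                  # nearest stored sample of x(t - tau)
        on = (t % T) < delta                 # controller active on [mT, mT+delta)
        new = []
        for i in range(2):
            dx = -x[i]
            for j in range(2):
                dx += a[i][j](x[i]) * f(x[j]) + b[i][j](x[i]) * g(xd[j])
                if on:
                    dx += W[i][j] * x[j]
            new.append(x[i] + h * dx)
        hist.append(list(x))
        x = new
    return x

x_end = simulate()
print(max(abs(v) for v in x_end))   # small: the origin has been stabilized
```

Setting the gains $W$ to zero instead reproduces the uncontrolled chaotic behavior of Fig. 4.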


Remark 10: Recently, based on the previous work [14], Jun et al. [50] studied the exponential stability of stochastic memristor-based recurrent neural networks with time-varying delays. We believe that our results can also be extended to stochastic memristor-based neural networks.

V. CONCLUSION

In this paper, some sufficient conditions were obtained to ensure exponential stabilization of memristor-based chaotic neural networks via periodically intermittent control. Compared with the continuous feedback control methods used in [10] and [39]–[41], our control method is discontinuous. Because di(xi(t)), aij(xi(t)), and bij(xi(t)), i, j = 1, 2, . . . , n, are discontinuous here, our results extend the works [33]–[38] and [43]–[46] on neural networks with continuous right-hand side. In addition, the intermittent control method can be applied to the antisynchronization of memristor-based chaotic neural networks. Moreover, because memristor-based neural networks have demonstrated high efficiency in emulating the synaptic behavior of the human brain [3], [6]–[8], studying the dynamic behaviors of memristor-based neural networks may help to predict some good or bad behaviors of the human brain. In 2006, Majtanik, Dolan, and Tass published a seminal paper related to the control of epileptic seizures [51]; based on [51], the intermittent control technique discussed in this paper may possibly be applied to the desynchronization of memristor-based neural networks. These issues will be the topic of future research.

APPENDIX
PROOF OF THEOREM 1

Based on the previous works [41] and [46], we consider the following Lyapunov functional:

V(t, x) = Σ_{i=1}^n μ_i e^{εt} |x_i(t)|^p + (e^{ετ}/(1 − σ_0)) Σ_{i=1}^n η_i ∫_{t−τ_ij(t)}^{t} V_i(s, x_i(s)) ds   (28)

where V_i(s, x_i(s)) = μ_i e^{εs} |x_i(s)|^p, s ≥ 0, i = 1, 2, . . . , n. We calculate the upper right derivative of V(t, x) along the solution of system (10). When mT ≤ t < mT + δ, we obtain

D⁺V(t, x) ≤ Σ_{i=1}^n { ε μ_i e^{εt} |x_i(t)|^p + p μ_i e^{εt} |x_i(t)|^{p−1} sgn(x_i(t)) D⁺x_i(t) + η_i e^{ετ} [ V_i(t, x_i(t))/(1 − σ_0) − V_i(t − τ_ij(t), x_i(t − τ_ij(t))) ] }
≤ Σ_{i=1}^n { ε V_i(t, x_i(t)) + μ_i e^{εt} [ p(−d_i + ω_ii)|x_i(t)|^p + p Σ_{j=1}^n A_ij |x_i(t)|^{p−1} |f_j(x_j(t))| + p Σ_{j=1}^n B_ij |x_i(t)|^{p−1} |f_j(x_j(t − τ_ij(t)))| + p Σ_{j=1, j≠i}^n |ω_ij| |x_i(t)|^{p−1} |x_j(t)| ] + η_i e^{ετ} [ V_i(t, x_i(t))/(1 − σ_0) − V_i(t − τ_ij(t), x_i(t − τ_ij(t))) ] }.   (29)

Furthermore, it follows from (H1) and the fact that

ℓ_1 ℓ_2 · · · ℓ_p ≤ ℓ_1^p + ℓ_2^p + · · · + ℓ_p^p,  ℓ_i ≥ 0, i = 1, 2, . . . , p

that we can get

p Σ_{j=1}^n A_ij σ_j |x_i(t)|^{p−1} |x_j(t)| = p Σ_{j=1}^n [ Π_{l=1}^{p−1} A_ij^{α_lij} σ_j^{β_lij} |x_i(t)| ] ( A_ij^{α_pij} σ_j^{β_pij} |x_j(t)| )
≤ Σ_{j=1}^n Σ_{l=1}^{p−1} A_ij^{pα_lij} σ_j^{pβ_lij} |x_i(t)|^p + Σ_{j=1}^n A_ij^{pα_pij} σ_j^{pβ_pij} |x_j(t)|^p   (30)

p Σ_{j=1, j≠i}^n |ω_ij| |x_i(t)|^{p−1} |x_j(t)| = p Σ_{j=1, j≠i}^n [ Π_{l=1}^{p−1} |ω_ij|^{ζ_lij} |x_i(t)| ] ( |ω_ij|^{ζ_pij} |x_j(t)| )
≤ Σ_{j=1, j≠i}^n Σ_{l=1}^{p−1} |ω_ij|^{pζ_lij} |x_i(t)|^p + Σ_{j=1, j≠i}^n |ω_ij|^{pζ_pij} |x_j(t)|^p   (31)

and

p Σ_{j=1}^n B_ij ρ_j |x_i(t)|^{p−1} |x_j(t − τ_ij(t))| = p Σ_{j=1}^n [ Π_{l=1}^{p−1} B_ij^{γ_lij} ρ_j^{ξ_lij} |x_i(t)| ] ( B_ij^{γ_pij} ρ_j^{ξ_pij} |x_j(t − τ_ij(t))| )
≤ Σ_{j=1}^n Σ_{l=1}^{p−1} B_ij^{pγ_lij} ρ_j^{pξ_lij} |x_i(t)|^p + Σ_{j=1}^n B_ij^{pγ_pij} ρ_j^{pξ_pij} |x_j(t − τ_ij(t))|^p.   (32)

Now, from (30)–(32), we obtain

D⁺V(t, x) ≤ Σ_{i=1}^n { ε V_i(t, x_i(t)) + μ_i e^{εt} [ p(−d_i + ω_ii)|x_i(t)|^p + Σ_{j=1}^n Σ_{l=1}^{p−1} A_ij^{pα_lij} σ_j^{pβ_lij} |x_i(t)|^p + Σ_{j=1}^n A_ij^{pα_pij} σ_j^{pβ_pij} |x_j(t)|^p + Σ_{j=1, j≠i}^n Σ_{l=1}^{p−1} |ω_ij|^{pζ_lij} |x_i(t)|^p + Σ_{j=1, j≠i}^n |ω_ij|^{pζ_pij} |x_j(t)|^p + Σ_{j=1}^n Σ_{l=1}^{p−1} B_ij^{pγ_lij} ρ_j^{pξ_lij} |x_i(t)|^p + Σ_{j=1}^n B_ij^{pγ_pij} ρ_j^{pξ_pij} |x_j(t − τ_ij(t))|^p ] + η_i e^{ετ} [ V_i(t, x_i(t))/(1 − σ_0) − V_i(t − τ_ij(t), x_i(t − τ_ij(t))) ] }
= Σ_{i=1}^n { [ ε − p(d_i − ω_ii) + Σ_{j=1}^n Σ_{l=1}^{p−1} A_ij^{pα_lij} σ_j^{pβ_lij} + Σ_{j=1, j≠i}^n Σ_{l=1}^{p−1} |ω_ij|^{pζ_lij} + Σ_{j=1}^n Σ_{l=1}^{p−1} B_ij^{pγ_lij} ρ_j^{pξ_lij} ] V_i(t, x_i(t)) + Σ_{j=1}^n (μ_i/μ_j) A_ij^{pα_pij} σ_j^{pβ_pij} V_j(t, x_j(t)) + Σ_{j=1, j≠i}^n (μ_i/μ_j) |ω_ij|^{pζ_pij} V_j(t, x_j(t)) + Σ_{j=1}^n (μ_i/μ_j) B_ij^{pγ_pij} ρ_j^{pξ_pij} e^{ετ} V_j(t − τ_ij(t), x_j(t − τ_ij(t))) + η_i e^{ετ} [ V_i(t, x_i(t))/(1 − σ_0) − V_i(t − τ_ij(t), x_i(t − τ_ij(t))) ] }
≤ − Σ_{i=1}^n [ λ_i − ν_i − ε − η_i e^{ετ}/(1 − σ_0) ] V_i(t, x_i(t)) ≤ 0   (33)

which implies that

V(t, x) ≤ V(mT, x),  t ∈ [mT, mT + δ).   (34)

Similarly, for t ∈ [mT + δ, (m + 1)T), we can obtain

D⁺V(t, x) ≤ − Σ_{i=1}^n [ κ_i + θ_i − ε − η_i e^{ετ}/(1 − σ_0) ] V_i(t, x_i(t)) ≤ θ V(t, x),  θ = max_{1≤i≤n} θ_i   (35)

which leads to

V(t, x) ≤ V(mT + δ, x) exp{θ(t − mT − δ)}.   (36)

Combining (34) and (36), we summarize the following.
(S1) For t ∈ [0, δ), it follows from (34) that V(t, x) ≤ V(0, x).
(S2) For t ∈ [δ, T), from (36), we have V(t, x) ≤ V(δ, x) exp{θ(t − δ)} ≤ V(0, x) exp{θ(t − δ)}.
(S3) For t ∈ [T, T + δ), we get V(t, x) ≤ V(T, x) ≤ V(0, x) exp{θ(T − δ)}.
(S4) For t ∈ [T + δ, 2T), we obtain V(t, x) ≤ V(T + δ, x) exp{θ(t − T − δ)} ≤ V(0, x) exp{θ(t − 2δ)}.

Repeating the above procedure, we obtain, for t ∈ [mT, mT + δ),

V(t, x) ≤ V(mT, x) ≤ V(0, x) exp{mθ(T − δ)}.   (37)

Moreover, for t ∈ [mT + δ, (m + 1)T),

V(t, x) ≤ V(mT + δ, x) exp{θ(t − mT − δ)} ≤ V(0, x) exp{θ(t − (m + 1)δ)}.   (38)

If t ∈ [mT, mT + δ), we have m ≤ t/T; then it follows from (37) that

V(t, x) ≤ V(0, x) exp{(θ(T − δ)/T) t}.

Similarly, if t ∈ [mT + δ, (m + 1)T), we have m + 1 ≥ t/T; then it follows from (38) that

V(t, x) ≤ V(0, x) exp{(θ(T − δ)/T) t}.

Therefore, for any t ∈ [0, +∞), we always have

V(t, x) ≤ V(0, x) exp{(θ(T − δ)/T) t}.   (39)

It follows from (28) and (39) that

‖x(t)‖_p^p ≤ (1/μ) V(0, x) exp{ −(ε − θ(T − δ)/T) t }
≤ (1/μ) [ Σ_{i=1}^n V_i(0, x_i) + max_{1≤i≤n} { η_i e^{ετ}/(1 − σ_0) } Σ_{i=1}^n ∫_{−τ}^{0} V_i(s, x_i(s)) ds ] exp{ −(ε − θ(T − δ)/T) t }
≤ M ‖ψ‖_p^p exp{ −(ε − θ(T − δ)/T) t },  t ≥ 0   (40)

where

M = (μ_0/μ) ( 1 + max_{1≤i≤n} { η_i e^{ετ}/(1 − σ_0) } ) ≥ 1,  μ = min_{1≤i≤n} {μ_i},  μ_0 = max_{1≤i≤n} {μ_i}.
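The interval-by-interval estimates (S1)–(S4) and the bounds (37)–(39) compose a simple worst-case recursion that can be checked numerically. The sketch below uses assumed illustrative values for θ, T, δ, and a hypothetical strict decay rate during the control window; it is a sanity check of the bookkeeping, not of the paper's theorem conditions:

```python
import math

# Worst case behind (34)/(36): V does not increase on each control window
# [mT, mT + delta) and grows by at most exp(theta*(T - delta)) across each
# rest window [mT + delta, (m + 1)T). All values below are assumptions.
theta, T, delta, V0 = 0.2, 5.0, 4.95, 1.0
decay = 0.5   # hypothetical strict decay rate while the controller is ON

V = V0
for m in range(1, 21):
    V *= math.exp(-decay * delta)        # control window: V shrinks, cf. (34)
    V *= math.exp(theta * (T - delta))   # rest window: bounded growth, cf. (36)
    # recursive bound (37): V(mT) <= V(0) * exp(m * theta * (T - delta))
    assert V <= V0 * math.exp(m * theta * (T - delta)) + 1e-12
    # global bound (39) evaluated at t = mT, using m <= t/T
    t = m * T
    assert V <= V0 * math.exp(theta * (T - delta) * t / T) + 1e-12

print("bounds (37) and (39) hold")
```

The design point this illustrates: the exponent θ(T − δ)/T in (39) only accounts for the fraction of each period without control, which is why (H3) requires ε to dominate it for the final decay rate to be positive.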


Let ℓ = (1/p)(ε − θ(T − δ)/T); under (H3), we know that ℓ > 0. It then follows from (40) that ‖x(t)‖_p ≤ M^{1/p} ‖ψ‖_p e^{−ℓt}, t ≥ 0, which implies that (1) is globally exponentially stable under controller (3) in the p-norm. This completes the proof of Theorem 1.

REFERENCES

[1] X. Wang, Y. Chen, H. Xi, H. Li, and D. Dimitrov, “Spintronic memristor through spin-torque-induced magnetization motion,” IEEE Electron Device Lett., vol. 30, no. 3, pp. 294–297, Mar. 2009. [2] L. Chua, “Resistance switching memories are memristors,” Appl. Phys. A, vol. 102, no. 4, pp. 765–783, Mar. 2011. [3] K. D. Cantley, A. Subramaniam, H. J. Stiegler, R. A. Chapman, and E. M. Vogel, “Neural learning circuits utilizing nano-crystalline silicon transistors and memristors,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 4, pp. 565–573, Apr. 2012. [4] L. O. Chua, “Memristor—The missing circuit element,” IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507–519, Sep. 1971. [5] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, “The missing memristor found,” Nature, vol. 453, pp. 80–83, May 2008. [6] Y. V. Pershin and M. Di Ventra, “Experimental demonstration of associative memory with memristive neural networks,” Neural Netw., vol. 23, no. 7, pp. 881–886, Sep. 2010. [7] M. J. Sharifiy and Y. M. Banadaki, “General SPICE models for memristor and application to circuit simulation of memristor-based synapses and memory cells,” J. Circuit. Syst. Comput., vol. 19, pp. 407–424, Apr. 2010. [8] H. Kim, M. P. Sah, C. J. Yang, T. Roska, and L. O. Chua, “Neural synaptic weighting with a pulse-based memristor circuit,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 59, no. 1, pp. 148–158, Jan. 2012. [9] J. Hu and J. Wang, “Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays,” in Proc. IEEE World Congr. Comput. Intell., Barcelona, Spain, 2010, pp. 2127–2134. [10] A. L. Wu and Z. G.
Zeng, “Exponential stabilization of memristive neural networks with time delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 12, pp. 1919–1929, Dec. 2012. [11] A. Wu and Z. Zeng, “Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays,” Neural Netw., vol. 36, pp. 1–10, Dec. 2012. [12] S. Wen and Z. Zeng, “Dynamics analysis of a class of memristorbased recurrent networks with time-varying delays in the presence of strong external stimuli,” Neural Process. Lett., vol. 35, no. l, pp. 47–59, Feb. 2012. [13] S. Wen, Z. Zeng, and T. Huang, “Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays,” Neurocomputing, vol. 97, pp. 233–240, Nov. 2012. [14] G. Zhang, Y. Shen, and J. Sun, “Global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays,” Neurocomputing, vol. 97, pp. 149–154, Nov. 2012. [15] G. Zhang, Y. Shen, Q. Yin, and J. Sun, “Global exponential periodicity and stability of a class of memristor-based recurrent neural networks with multiple delays,” Inf. Sci., vol. 232, pp. 386–396, May 2013. [16] G. Bao and Z. Zeng, “Multistability of periodic delayed recurrent neural network with memristors,” Neural Comput. Appl., vol. 23, nos. 7–8, pp. 1963–1967, Dec. 2013. [17] Z. Guo, J. Wang, and Z. Yan, “Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with timevarying delays,” Neural Netw., vol. 48, pp. 158–172, Dec. 2013. [18] T. Huang, C. Li, S. Duan, and J. A. Starzyk, “Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 6, pp. 866–875, Jun. 2012. [19] D. Liu, S. Hu, and J. Wang, “Global output convergence of a class of continuous-time recurrent neural networks with time-varying thresholds,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 51, no. 4, pp. 161–167, Apr. 2004. 
[20] J. Cao and J. Wang, “Global asymptotic stability of a general class of recurrent neural networks with time-varying delays,” IEEE Trans. Circuits Syst. I, Fund. Theory Appl., vol. 50, no. 1, pp. 34–44, Jan. 2003.

[21] Y. Liu, Z. Wang, J. Liang, and X. H. Liu, “Stability and synchronization of discrete-time Markovian jumping neural networks with mixed modedependent time delays,” IEEE Trans. Neural Netw., vol. 20, no. 7, pp. 1102–1116, Jul. 2009. [22] H. Zhang, Z. Liu, G.-B. Huang, and Z. Wang, “Novel weighting-delaybased stability criteria for recurrent neural networks with time-varying delay,” IEEE Trans. Neural Netw., vol. 21, no. 1, pp. 91–106, Jan. 2010. [23] V. N. Phat and H. Trinh, “Exponential stabilization of neural networks with various activation functions and mixed time-varying delays,” IEEE Trans. Neural Netw., vol. 21, no. 7, pp. 1180–1184, Jul. 2010. [24] Y. Shen and J. Wang, “Almost sure exponential stability of recurrent neural networks with Markovian switching,” IEEE Trans. Neural Netw., vol. 20, no. 5, pp. 840–855, May 2009. [25] Z. Zeng and W. X. Zheng, “Multistability of neural networks with timevarying delays and concave-convex characteristics,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 2, pp. 293–305, Feb. 2012. [26] S. Xu, J. Lam, and D. W. C. Ho, “New LMI conditions for delaydependent asymptotic stability of delayed Hopfield neural networks,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 53, no. 3, pp. 230–234, Mar. 2006. [27] Q. K. Song, “Exponential stability of recurrent neural networks with both time-varying delays and general activation functions via LMI approach,” Neurocomputing, vol. 71, nos. 13–15, pp. 2823–2830, Aug. 2008. [28] Z. Wang, Y. Liu, and X. Liu, “On global asymptotic stability of neural networks with discrete and distributed delays,” Phys. Lett. A, vol. 345, nos. 4–6, pp. 299–308, Oct. 2005. [29] H. Lu, “Chaotic attractors in delayed neural networks,” Phys. Lett. A, vol. 298, nos. 2–3, pp. 109–116, Jun. 2002. [30] C. J. Cheng, T. L. Liao, J. J. Yan, and C. C. Hwang, “Exponential synchronization of a class of neural networks with time-varying delays,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 36, no. 1, pp. 
209–215, Feb. 2006. [31] Q. Zhu, T. Zhang, S. Fei, K. Zhang, and T. Li, “Adaptive neural control for a class of output feedback time delay nonlinear systems,” Neurocomputing, vol. 72, nos. 7–9, pp. 1985–1992, Mar. 2009. [32] Z.-H. Guan and H. Zhang, “Stabilization of complex network with hybrid impulsive and switching control,” Chaos, Solitons Fractals, vol. 37, no. 5, pp. 1372–1382, Sep. 2008. [33] T. W. Huang, C. D. Li, and X. Z. Liu, “Synchronization of chaotic systems with delay using intermittent linear state feedback,” Chaos, vol. 18, no. 3, p. 033122, Sep. 2008. [34] T. Huang, C. Li, W. Yu, and G. Chen, “Synchronization of delayed chaotic systems with parameter mismatches by using intermittent linear state feedback,” Nonlinearity, vol. 22, no. 3, pp. 569–584, Mar. 2009. [35] T. Huang and C. Li, “Chaotic synchronization by the intermittent feedback method,” J. Comput. Appl. Math., vol. 234, no. 4, pp. 1097–1104, Jun. 2010. [36] S. Cai, Z. Liu, F. Xu, and J. Shen, “Periodically intermittent controlling complex dynamical networks with time-varying delays to a desired orbit,” Phys. Lett. A, vol. 373, no. 42, pp. 3846–3854, Oct. 2009. [37] C. Hu, J. Yu, H. Jiang, and Z. Teng, “Exponential lag synchronization for neural networks with mixed delays via periodically intermittent control,” Chaos, vol. 20, no. 2, p. 023108, Jun. 2010. [38] J. Yu, C. Hu, H. Jiang, and Z. Teng, “Exponential synchronization of Cohen–Grossberg neural networks via periodically intermittent control,” Neurocomputing, vol. 74, no. 10, pp. 1776–1782, May 2011. [39] A. Wu, S. Wen, and Z. Zeng, “Synchronization control of a class of memristor-based recurrent neural networks,” Inf. Sci., vol. 183, no. 1, pp. 106–116, Jan. 2012. [40] S. Wen, G. Bao, Z. Zeng, Y. Chen, and T. Huang, “Global exponential synchronization of memristor-based recurrent neural networks with timevarying delays,” Neural Netw., vol. 48, pp. 195–203, Dec. 2013. [41] G. Zhang and Y. 
Shen, “New algebraic criteria for synchronization stability of chaotic memristive neural networks with time-varying delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 10, pp. 1701–1707, Oct. 2013. [42] C. Deissenberg, “Optimal control of linear econometric models with intermittent controls,” Econ. Planning, vol. 16, no. 1, pp. 49–56, 1980. [43] C. Li, G. Feng, and X. Liao, “Stabilization of nonlinear systems via periodically intermittent control,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 54, no. 11, pp. 1019–1023, Nov. 2007. [44] C. Li, X. Liao, and T. Huang, “Exponential stabilization of chaotic systems with delay by periodically intermittent control,” Chaos, vol. 17, no. 1, p. 013103, Mar. 2007. [45] J. Huang, C. Li, and Q. Han, “Stabilization of delayed chaotic neural networks by periodically intermittent control,” Circuits Syst. Signal Process., vol. 28, no. 4, pp. 567–579, Aug. 2009.


[46] C. Hu, J. Yu, H. Jiang, and Z. Teng, “Exponential stabilization and synchronization of neural networks with time-varying delays via periodically intermittent control,” Nonlinearity, vol. 23, no. 10, pp. 2369–2391, Oct. 2010. [47] A. F. Filippov, Differential Equations With Discontinuous Right-Hand Sides. Dordrecht, The Netherlands: Kluwer, 1988. [48] J. P. Aubin and A. Cellina, Differential Inclusions. Berlin, Germany: Springer-Verlag, 1984. [49] F. H. Clarke, Y. S. Ledyaev, R. J. Stern, and P. R. Wolenski, Nonsmooth Analysis and Control Theory. New York, NY, USA: Springer-Verlag, 1998. [50] L. Jun, M. Hu, and L. Guo, “Exponential stability of stochastic memristor-based recurrent neural networks with time-varying delays,” Neurocomputing, vol. 138, pp. 92–98, Aug. 2014. [51] M. Majtanik, K. Dolan, and P. A. Tass, “Desynchronization in networks of globally coupled neurons with dendritic dynamics,” J. Biol. Phys., vol. 32, nos. 3–4, pp. 307–333, Oct. 2006.

Guodong Zhang received the M.S. degree in applied mathematics from Hubei Normal University, Huangshi, China, and the Ph.D. degree from the School of Automation, Huazhong University of Science and Technology, Wuhan, China, in 2011 and 2014, respectively. He is currently a Teacher with the College of Mathematics and Statistics, South-Central University for Nationalities, Wuhan. He has authored over 15 international journal papers. His current research interests include biological systems, neural networks, and intelligent control.


Yi Shen received the M.S. degree in applied mathematics and the Ph.D. degree in systems engineering from the Huazhong University of Science and Technology, Wuhan, China, in 1995 and 1998, respectively. He was a Post-Doctoral Fellow with the Huazhong University of Science and Technology from 1999 to 2001, where he is currently a Professor with the School of Automation. He has authored over 30 international journal papers. His current research interests include neural networks, nonlinear stochastic systems, and memristors. Prof. Shen serves as an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS.
