Neural Networks 53 (2014) 8–14


Synchronization control of memristor-based recurrent neural networks with perturbations

Weiping Wang (a), Lixiang Li (b,*), Haipeng Peng (b,c), Jinghua Xiao (a), Yixian Yang (b)

(a) School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
(b) Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
(c) Zhejiang Provincial Key Lab of Data Storage and Transmission Technology, Hangzhou Dianzi University, Hangzhou 310018, Zhejiang, China

Article history: Received 21 June 2013; received in revised form 12 January 2014; accepted 21 January 2014.

Keywords: Memristor-based recurrent neural networks; Synchronization control; Impulsive perturbation; Boundary perturbation

Abstract: In this paper, the synchronization control of memristor-based recurrent neural networks with impulsive perturbations or boundary perturbations is studied. We find that the memristive connection weights have a certain relationship with the stability of the system. Criteria are obtained which guarantee that memristive neural networks have a strong noise tolerance capability. Two kinds of controllers are designed so that the memristive neural networks with perturbations converge to the equilibrium points, which evoke human memory patterns. The analysis employs differential inclusion theory and the Lyapunov functional method. Numerical examples are given to show the effectiveness of our results.

1. Introduction

The memristor is considered to be the fourth passive circuit element; its existence was predicted by Chua (1971). The first practical memristor device was built in 2008 (Strukov, Snider, Stewart, & Williams, 2008). A memristor retains its most recent resistance value when the voltage is turned off, and re-expresses that value the next time it is turned on. This feature makes memristors attractive as energy-saving devices that can compete with flash memory and other static memory devices. Some classes of memristors also have nonlinear response characteristics, which makes them doubly suitable as artificial neurons. More and more researchers have been focusing on the memristor because of its potential applications in next-generation computers and powerful brain-like "neural" computers. Over recent decades, in order to process information intelligently, artificial neural networks have been set up to simulate the function of the human brain. Traditional hardware implementations of artificial neural networks have used fixed-value resistors between neural processing units, which are supposed to represent the strength of synaptic connections between neurons in biology. However, the strength of a synapse is variable while the resistance of a resistor is invariable. In order to better simulate the artificial neural network of the human brain, the resistor is replaced by the memristor, which may eventually be used in the hardware and software of artificial neural networks.



Corresponding author. Tel.: +86 010 62282264. E-mail address: [email protected] (L. Li).

Recently, Wu, Wen, and Zeng (2012), Wu and Zeng (2012), Wu and Zhang (2013), Wu, Zhang, and Zeng (2011), and Zhang, Shen, and Wang (2013) have concentrated on the dynamical nature of memristor-based neural networks in order to achieve successful applications in many different fields, such as pattern recognition, associative memories and learning, in a way that mimics the human brain. In the real world, random uncertainties (e.g. instantaneous interference on neural systems) make neural networks change their states suddenly, which leads to impulsive effects. Li and Chen (2009) studied the stability properties of Hopfield neural networks with delays and impulsive perturbations. Besides impulsive perturbations, Gu (2009), Li and Cao (2008), Li, Ding, and Zhu (2010), Zhou, Wang, and Mou (2012), and Zhu, Yang, and Wang (2010) have studied stochastic perturbations on neural networks, since a real system is usually affected by external perturbations of great uncertainty; such perturbations may be treated as fluctuations from the release of neurotransmitters and other probabilistic causes. Furthermore, from the logical analysis of mathematical reasoning, many people have studied noise tolerance in artificial neural networks, and the obtained results have a certain robustness. We then want to know: do memristive neural networks also have a strong noise tolerance capability? Knowing that associative memory can be obtained from artificial neural networks, the memory patterns in Zeng, Huang, and Wang (2005) can be designed as the equilibrium points.

Hu, Duan, and Wang (2012) also studied the memristive synapse with applications in associative memory. Under an information input, the dynamic neural network converges to the equilibrium points, which corresponds to recalling the memory patterns. So how do we design controllers that guarantee that the memristive neural networks converge to the equilibrium points? In this sense, we seek to more closely duplicate the dynamic capabilities and stability of human memory. However, to the authors' best knowledge, the noise tolerance capability of memristive neural networks is unknown, and the synchronization control of memristor-based neural networks with perturbations has received little attention. As we know, memristive recurrent neural networks are network sequences with an infinite number of asymptotically stable equilibrium points, so they can be used for large-capacity storage, associative memory, pattern recognition, and so on. Motivated by these important applications, in this paper we choose the model of memristive recurrent neural networks with perturbations to study the noise tolerance capability, and we design controllers so that the networks converge to the equilibrium points. By utilizing differential inclusion theory and the Lyapunov function method, two theorems are derived to achieve the synchronization of memristor-based recurrent neural networks with impulsive or boundary perturbations.

2. Preliminaries

Many functions of memristor-like artificial neural networks can be simulated successfully; such networks reproduce the characteristic time-hysteresis behavior of memristor devices, can mimic the functionality of the human brain, and can provide an in-depth understanding of memristor-based memories, including learning and anticipating. The following model is usually used for fully connected recurrent memristive neural networks:

$$\frac{dx_i(t)}{dt} = -d_i x_i(t) + \sum_{j=1}^{n} a_{ij}(x_i(t)) f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij}(x_i(t)) f_j(x_j(t-\tau_{ij}(t))) + J_i(t), \quad i = 1, 2, \ldots, n, \tag{1}$$

where $x(t) = (x_1(t), \ldots, x_n(t))^T$ and $J(t) = (J_1(t), \ldots, J_n(t))^T$ are the voltage of the capacitor and the external input, respectively. $A(x_i(t)) = (a_{ij}(x_i(t)))_{n\times n}$ and $B(x_i(t)) = (b_{ij}(x_i(t)))_{n\times n}$ are memristive connection weights, which represent the neuron interconnection matrix and the delayed neuron interconnection matrix, respectively; in these artificial neural networks, the memristors work as synaptic weights. The connection weights $A(x_i(t))$ and $B(x_i(t))$ change according to the state of each subsystem; if they are constant, system (1) reduces to a general neural network. $D = \mathrm{diag}(d_1, \ldots, d_n) > 0$ is a self-feedback connection matrix. $f_j(x_j(t))$ and $f_j(x_j(t-\tau_{ij}(t)))$ are bounded feedback functions without and with time-varying delay between the $j$th dimension of the memristor and $x_i(t)$, and $\tau_{ij}(t)$ is the transmission delay. Let

$$a_{ij}(x_i(t)) = \begin{cases} \hat{a}_{ij}, & \varpi(\dot{x}_i(t)) \le 0, \\ \check{a}_{ij}, & \varpi(\dot{x}_i(t)) > 0, \end{cases} \qquad b_{ij}(x_i(t)) = \begin{cases} \hat{b}_{ij}, & \phi(\dot{x}_i(t)) \le 0, \\ \check{b}_{ij}, & \phi(\dot{x}_i(t)) > 0, \end{cases}$$

where $\varpi(\dot{x}_i(t))$ and $\phi(\dot{x}_i(t))$ are memristive switching rules and $\hat{a}_{ij}, \check{a}_{ij}, \hat{b}_{ij}, \check{b}_{ij}$ are known constants relating to memristances.
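To make the switching behaviour concrete, the following sketch (not part of the original paper) integrates a two-neuron instance of system (1) with the Euler method. It borrows the weight magnitudes and the activation function of the Section 4 example, replaces the abstract switching rules $\varpi$ and $\phi$ with a simple sign test on the previous derivative, and approximates the time-varying delays by a constant lag; all of these are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not from the paper): Euler integration of a
# two-neuron instance of system (1).  The switching rules varpi, phi are
# replaced by a sign test on the previous derivative, and tau_ij(t) is
# approximated by a constant lag -- both simplifying assumptions.

def f(x):
    # activation used in the Section 4 example
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def simulate(x0, T=20.0, dt=1e-3, tau=0.1):
    d = np.array([8.0, 6.0])                       # self-feedback d_i
    a_hat = np.array([[-5.0, 3.0], [-5.0, 9.0]])   # weights on one switching branch
    b_hat = np.array([[9.0, -5.0], [6.0, -5.0]])
    a_check, b_check = -a_hat, -b_hat              # the other branch (example values)
    steps, lag = int(T / dt), int(tau / dt)
    X = np.zeros((steps + 1, len(x0)))
    X[0], dx_prev = x0, np.zeros(len(x0))
    for t in range(steps):
        x, x_del = X[t], X[max(t - lag, 0)]
        # row i of A, B switches with the sign of the surrogate signal
        A = np.where(dx_prev[:, None] <= 0, a_hat, a_check)
        B = np.where(dx_prev[:, None] <= 0, b_hat, b_check)
        dx = -d * x + A @ f(x) + B @ f(x_del)
        X[t + 1] = x + dt * dx
        dx_prev = dx
    return X

traj = simulate(np.array([0.5, -0.8]))
```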


Remark 1. Model (1) has the memory characteristic; Wang and Shen (2013) noted that this memory characteristic is reflected in the switching behavior of model (1). Li and Wang (2013) studied switched Boolean networks, using the semi-tensor product method to obtain an algebraic expression for them, and it would be very interesting to apply these results in memristive systems. The stability of memristive Boolean networks is a challenging topic worthy of further investigation in our future work.

Remark 2. Pershin and Ventra (2010) and Wang et al. (2012) showed that memristive neural networks inherit the advantages of traditional artificial neural networks, such as fast computing, parallel computing, a learning function, and an associative memory storage function. However, the memristive neural network also has its own characteristics; for example, it is adaptive to new information and new patterns, which makes it close to the human brain.

In this paper, solutions of all the systems considered below are understood in Filippov's sense, and $[\cdot,\cdot]$ represents an interval. Let $\bar{a}_{ij} = \max\{\hat{a}_{ij}, \check{a}_{ij}\}$, $\underline{a}_{ij} = \min\{\hat{a}_{ij}, \check{a}_{ij}\}$, $\bar{b}_{ij} = \max\{\hat{b}_{ij}, \check{b}_{ij}\}$, $\underline{b}_{ij} = \min\{\hat{b}_{ij}, \check{b}_{ij}\}$, $\bar{A} = (\bar{a}_{ij})_{n\times n}$, $\bar{B} = (\bar{b}_{ij})_{n\times n}$, $\underline{A} = (\underline{a}_{ij})_{n\times n}$, $\underline{B} = (\underline{b}_{ij})_{n\times n}$, $\hat{A} = (\hat{a}_{ij})_{n\times n}$, $\hat{B} = (\hat{b}_{ij})_{n\times n}$, $\check{A} = (\check{a}_{ij})_{n\times n}$, $\check{B} = (\check{b}_{ij})_{n\times n}$, for $i = 1, 2, \ldots, n$. $\mathrm{co}\{u, v\}$ denotes the closure of the convex hull generated by real numbers $u$ and $v$ or real matrices $u$ and $v$. For matrices $X = (x_{ij})_{n\times n}$, $Y = (y_{ij})_{n\times n}$ and $Z = (z_{ij})_{n\times n}$, the interval matrix $[X, Y]$ means that $x_{ij} \le y_{ij}$, and $Z \in \mathrm{co}\{X, Y\}$ implies $\min(x_{ij}, y_{ij}) \le z_{ij} \le \max(x_{ij}, y_{ij})$, $i, j = 1, 2, \ldots, n$.

Definition 1. Suppose $E \subset \Re^n$. Then $x \mapsto F(x)$ is called a set-valued map defined on $E$ if to each point $x$ of $E$ there corresponds a nonempty set $F(x) \subset \Re^n$. A set-valued map $F$ with nonempty values is said to be upper semicontinuous at $x_0 \in E$ if, for any open set $N$ containing $F(x_0)$, there exists a neighborhood $M$ of $x_0$ such that $F(M) \subset N$. $F(x)$ is said to have a closed image if for each $x \in E$, $F(x)$ is closed.

Based on Definition 1, the set-valued map $x_i(t) \mapsto -Dx_i(t) + \mathrm{co}\{\hat{A}, \check{A}\} f(x_i(t)) + \mathrm{co}\{\hat{B}, \check{B}\} f(x_i(t-\tau_{ij}(t))) + J(t)$ has nonempty compact convex values. By applying the theory of differential inclusions, the memristor-based recurrent neural networks with impulsive perturbations can be written as the following differential inclusion:

$$\begin{cases} \dfrac{dx_i(t)}{dt} \in -d_i x_i(t) + \displaystyle\sum_{j=1}^{n}\mathrm{co}[\underline{a}_{ij},\bar{a}_{ij}]f_j(x_j(t)) + \displaystyle\sum_{j=1}^{n}\mathrm{co}[\underline{b}_{ij},\bar{b}_{ij}]f_j(x_j(t-\tau_{ij}(t))) + J_i(t), & t \ne t_k,\\ \Delta x_i(t_k) = x_i(t_k^+) - x_i(t_k^-) = -r_{ik}x(t_k), & t = t_k,\\ x_i(t) = \phi_i(t), \quad t \in (-\infty, 0], \end{cases} \tag{2}$$

for $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots$,

or equivalently, there exist $\tilde{a}_{ij} \in \mathrm{co}[\underline{a}_{ij}, \bar{a}_{ij}]$ and $\tilde{b}_{ij} \in \mathrm{co}[\underline{b}_{ij}, \bar{b}_{ij}]$ such that

$$\begin{cases} \dfrac{dx_i(t)}{dt} = -d_i x_i(t) + \displaystyle\sum_{j=1}^{n}\tilde{a}_{ij} f_j(x_j(t)) + \displaystyle\sum_{j=1}^{n}\tilde{b}_{ij} f_j(x_j(t-\tau_{ij}(t))) + J_i(t), & t \ne t_k,\\ \Delta x_i(t_k) = x_i(t_k^+) - x_i(t_k^-) = -r_{ik}x(t_k), & t = t_k,\\ x_i(t) = \phi_i(t), \quad t \in (-\infty, 0], \end{cases} \tag{3}$$

for $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots$,

where $r_{ik}$ is the impulsive gain constant; in essence, $r_{ik}$ is the ratio of the jump in $x_i(t)$ at $t = t_k$, and we choose $0 < r_{ik} < 2$. Here $x(t_k^-) = \lim_{t \to t_k^-} x(t) = x(t_k)$, $x(t_k^+) = \lim_{t \to t_k^+} x(t)$, and $\phi_i(t)$ is the initial value of $x_i(t)$.
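In a numerical integration, the impulsive terms in (2) and (3) amount to instantaneous state resets. A minimal sketch follows (our illustration; the impulse instants and gain are placeholder choices, not values from the paper):

```python
import numpy as np

# Minimal sketch of the impulsive update Delta x(t_k) = -r x(t_k) from
# (2)-(3): any impulse instant falling inside an integration step rescales
# the state by (1 - r).  With 0 < r < 2 we have |1 - r| < 1, so each
# impulse contracts the state.  `impulse_times` and `r` are illustrative.

def apply_impulses(x, t, t_next, impulse_times, r=0.5):
    for tk in impulse_times:
        if t < tk <= t_next:
            x = (1.0 - r) * x      # x(t_k^+) = x(t_k) - r x(t_k)
    return x
```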

If the system has an equilibrium point or an orbit of a chaotic attractor, let $s(t)$ denote this equilibrium point or orbit of system (3), and let $e_i(t) = x_i(t) - s(t)$ be the synchronization error. Defining $e_i(t-\tau_{ij}(t)) = x_i(t-\tau_{ij}(t)) - s(t-\tau_{ij}(t))$, we obtain the following synchronization error system:

$$\begin{cases} \dfrac{de_i(t)}{dt} = -d_i e_i(t) + \displaystyle\sum_{j=1}^{n}\tilde{a}_{ij}\big(f_j(e_j(t)+s(t)) - f_j(s(t))\big) + \displaystyle\sum_{j=1}^{n}\tilde{b}_{ij}\big(f_j(e_j(t-\tau_{ij}(t)) + s(t-\tau_{ij}(t))) - f_j(s(t-\tau_{ij}(t)))\big) + u_i(t), & t \ne t_k,\\ \Delta e_i(t_k) = e_i(t_k^+) - e_i(t_k^-) = -r_{ik}e(t_k), & t = t_k,\\ e_i(t) = \psi_i(t), \quad t \in (-\infty, 0], \end{cases} \tag{4}$$

for $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots$, with initial conditions $\psi_i(t) = \phi_i(t) - s(t)$, $i = 1, 2, \ldots, n$. Here $u(t)$ denotes the control actions in the context of this paper, $r_{ik}$ is the ratio of the jump in $e_i(t)$ at $t = t_k$, and we choose $0 < r_{ik} < 2$.

The memristor-based recurrent neural networks with boundary perturbations $h(t)$ can be written as the following differential inclusion:

$$\frac{dx_i(t)}{dt} \in -d_i x_i(t) + \sum_{j=1}^{n}\mathrm{co}[\underline{a}_{ij}, \bar{a}_{ij}] f_j(x_j(t)) + \sum_{j=1}^{n}\mathrm{co}[\underline{b}_{ij}, \bar{b}_{ij}] f_j(x_j(t-\tau_{ij}(t))) + J_i(t) + h(t), \quad i = 1, 2, \ldots, n, \tag{5}$$

or equivalently, there exist $\tilde{a}_{ij} \in \mathrm{co}[\underline{a}_{ij}, \bar{a}_{ij}]$ and $\tilde{b}_{ij} \in \mathrm{co}[\underline{b}_{ij}, \bar{b}_{ij}]$ such that

$$\frac{dx_i(t)}{dt} = -d_i x_i(t) + \sum_{j=1}^{n}\tilde{a}_{ij} f_j(x_j(t)) + \sum_{j=1}^{n}\tilde{b}_{ij} f_j(x_j(t-\tau_{ij}(t))) + J_i(t) + h(t), \quad i = 1, 2, \ldots, n, \tag{6}$$

where $h(t)$ is a vector with $\|h(t)\| < d$ and $d$ is an unknown constant. Similarly, applying the theory of differential inclusions, we obtain the synchronization error system

$$\frac{de_i(t)}{dt} = -d_i e_i(t) + \sum_{j=1}^{n}\tilde{a}_{ij}\big(f_j(e_j(t)+s(t)) - f_j(s(t))\big) + \sum_{j=1}^{n}\tilde{b}_{ij}\big(f_j(e_j(t-\tau_{ij}(t))+s(t-\tau_{ij}(t))) - f_j(s(t-\tau_{ij}(t)))\big) + h(t) + u_i(t), \quad i = 1, 2, \ldots, n. \tag{7}$$

In this paper, we use the following assumptions:

Assumption 1. The function $f$ is bounded and satisfies the Lipschitz condition with Lipschitz constant $k > 0$:
$$\|f(x) - f(y)\| \le k\|x - y\|.$$

Assumption 2. For $i, j \in \{1, 2, \ldots, n\}$, $\tau_{ij}(t)$ is a differentiable function with $|\dot{\tau}_{ij}(t)| < \sigma$ ($\sigma$ a positive constant) and $0 < \tau_{ij}(t) < \tau_{ij}$ ($\tau_{ij}$ a positive constant) for all $t \ge 0$.

3. Main results

In this section, the controller $u_i(t)$ is designed to achieve the synchronization of the memristor-based recurrent neural network.

Theorem 1. Suppose there exist constants $\eta > 0$ and $\lambda_i > 0$ such that

$$\eta(p_i - d_i) + \eta k(q_{ii} + \bar{a}_{ii}) + (\eta-1)k \sum_{j=1, j\ne i}^{n}|\bar{a}_{ij}| + k \sum_{j=1, j\ne i}^{n}\frac{\lambda_j}{\lambda_i}|\bar{a}_{ij}| + (\eta-1)k \sum_{j=1}^{n}|\bar{b}_{ij}| + \frac{k}{1-\sigma}\sum_{j=1}^{n}\frac{\lambda_j}{\lambda_i}|\bar{b}_{ij}| < 0, \quad i = 1, 2, \ldots, n, \tag{8}$$

and the error system (4) is driven by the following controller:

$$u_i(t) = p_i e_i(t) + \sum_{j=1}^{n} q_{ij}\big(f(e_j(t)+s(t)) - f(s(t))\big), \quad i = 1, 2, \ldots, n, \tag{9}$$

where $p_i$ and $q_{ij}$ are constants for $i, j = 1, 2, \ldots, n$. Then system (4) achieves convergence.

Proof. If the assumption of Theorem 1 holds, then there exists a small enough constant $\mu > 0$ such that

$$\eta(p_i - d_i) + \eta k(q_{ii} + \bar{a}_{ii}) + (\eta-1)k \sum_{j=1, j\ne i}^{n}|\bar{a}_{ij}| + k \sum_{j=1, j\ne i}^{n}\frac{\lambda_j}{\lambda_i}|\bar{a}_{ij}| + (\eta-1)k \sum_{j=1}^{n}|\bar{b}_{ij}| + \mu + \frac{k}{1-\sigma}\sum_{j=1}^{n}\frac{\lambda_j}{\lambda_i}|\bar{b}_{ij}| e^{\mu\tau_{ij}} < 0,$$

where $i = 1, 2, \ldots, n$. Construct the following Lyapunov function:

$$V(t) = \sum_{i=1}^{n}\lambda_i\left[|e_i(t)|^{\eta} e^{\mu t} + \frac{k}{1-\sigma}\sum_{j=1}^{n}|\tilde{b}_{ij}|\int_{t-\tau_{ij}(t)}^{t}|e_j(s)|^{\eta} e^{\mu s + \mu\tau_{ij}}\,ds\right].$$

When $t \ne t_k$, we get

$$\begin{aligned} \dot{V}(t) &= \sum_{i=1}^{n}\lambda_i\Bigg\{\eta|e_i(t)|^{\eta-1}\,\mathrm{sgn}(e_i(t))\,e^{\mu t}\Bigg[(p_i - d_i)e_i(t) + \sum_{j=1}^{n}(\tilde{a}_{ij} + q_{ij})\big(f(e_j(t)+s(t)) - f(s(t))\big) \\ &\quad + \sum_{j=1}^{n}\tilde{b}_{ij}\big(f(e_j(t-\tau_{ij}(t)) + s(t-\tau_{ij}(t))) - f(s(t-\tau_{ij}(t)))\big)\Bigg] + \mu e^{\mu t}|e_i(t)|^{\eta} \\ &\quad + \frac{k}{1-\sigma}\sum_{j=1}^{n}|\tilde{b}_{ij}||e_j(t)|^{\eta} e^{\mu t + \mu\tau_{ij}} - \frac{k(1-\dot{\tau}_{ij}(t))}{1-\sigma}\sum_{j=1}^{n}|\tilde{b}_{ij}||e_j(t-\tau_{ij}(t))|^{\eta} e^{\mu t + \mu\tau_{ij} - \mu\tau_{ij}(t)}\Bigg\}, \end{aligned}$$

which, using Assumptions 1 and 2, yields

$$\begin{aligned} \dot{V}(t) &\le \sum_{i=1}^{n}\lambda_i e^{\mu t}\Bigg[\eta(p_i - d_i)|e_i(t)|^{\eta} + \eta k(q_{ii} + \tilde{a}_{ii})|e_i(t)|^{\eta} + \eta k\sum_{j=1, j\ne i}^{n}|\tilde{a}_{ij}||e_i(t)|^{\eta-1}|e_j(t)| \\ &\quad + \eta k\sum_{j=1}^{n}|\tilde{b}_{ij}||e_i(t)|^{\eta-1}|e_j(t-\tau_{ij}(t))| + \mu|e_i(t)|^{\eta} + \frac{k}{1-\sigma}\sum_{j=1}^{n}|\tilde{b}_{ij}||e_j(t)|^{\eta} e^{\mu\tau_{ij}} - k\sum_{j=1}^{n}|\tilde{b}_{ij}||e_j(t-\tau_{ij}(t))|^{\eta}\Bigg]. \end{aligned}$$

According to Young's inequality in Hu (2013), in the form $x^{\eta-1}y \le \frac{\eta-1}{\eta}x^{\eta} + \frac{1}{\eta}y^{\eta}$ for $x, y \ge 0$ and $\eta \ge 1$, we get

$$\eta k\sum_{j=1, j\ne i}^{n}|\tilde{a}_{ij}||e_i(t)|^{\eta-1}|e_j(t)| \le (\eta-1)k\sum_{j=1, j\ne i}^{n}|\tilde{a}_{ij}||e_i(t)|^{\eta} + k\sum_{j=1, j\ne i}^{n}|\tilde{a}_{ij}||e_j(t)|^{\eta}, \tag{10}$$

and

$$\eta k\sum_{j=1}^{n}|\tilde{b}_{ij}||e_i(t)|^{\eta-1}|e_j(t-\tau_{ij}(t))| \le (\eta-1)k\sum_{j=1}^{n}|\tilde{b}_{ij}||e_i(t)|^{\eta} + k\sum_{j=1}^{n}|\tilde{b}_{ij}||e_j(t-\tau_{ij}(t))|^{\eta}. \tag{11}$$

From Eqs. (10) and (11), we have

$$\dot{V}(t) \le \sum_{i=1}^{n}\lambda_i e^{\mu t}|e_i(t)|^{\eta}\Bigg[\eta(p_i - d_i) + \eta k(q_{ii} + \bar{a}_{ii}) + (\eta-1)k\sum_{j=1, j\ne i}^{n}|\bar{a}_{ij}| + k\sum_{j=1, j\ne i}^{n}\frac{\lambda_j}{\lambda_i}|\bar{a}_{ij}| + (\eta-1)k\sum_{j=1}^{n}|\bar{b}_{ij}| + \mu + \frac{k}{1-\sigma}\sum_{j=1}^{n}\frac{\lambda_j}{\lambda_i}|\bar{b}_{ij}| e^{\mu\tau_{ij}}\Bigg] \le 0$$

by the choice of $\mu$. When $t = t_k$, one finds that

$$\begin{aligned} V(t_k^+) &= \sum_{i=1}^{n}\lambda_i|e_i(t_k^+)|^{\eta} e^{\mu t_k} + \sum_{i=1}^{n}\lambda_i\frac{k}{1-\sigma}\sum_{j=1}^{n}|\tilde{b}_{ij}|\int_{t_k-\tau_{ij}(t_k)}^{t_k}|e_j(s)|^{\eta} e^{\mu s + \mu\tau_{ij}}\,ds \\ &= \sum_{i=1}^{n}\lambda_i|1 - r_{ik}|^{\eta}|e_i(t_k)|^{\eta} e^{\mu t_k} + \sum_{i=1}^{n}\lambda_i\frac{k}{1-\sigma}\sum_{j=1}^{n}|\tilde{b}_{ij}|\int_{t_k-\tau_{ij}(t_k)}^{t_k}|e_j(s)|^{\eta} e^{\mu s + \mu\tau_{ij}}\,ds \\ &\le \sum_{i=1}^{n}\lambda_i|e_i(t_k)|^{\eta} e^{\mu t_k} + \sum_{i=1}^{n}\lambda_i\frac{k}{1-\sigma}\sum_{j=1}^{n}|\tilde{b}_{ij}|\int_{t_k-\tau_{ij}(t_k)}^{t_k}|e_j(s)|^{\eta} e^{\mu s + \mu\tau_{ij}}\,ds = V(t_k), \end{aligned}$$

since $0 < r_{ik} < 2$ implies $|1 - r_{ik}| \le 1$. Then we get $V(t) \le V(0)$ for $t \in \Re^+$, and

$$\begin{aligned} V(0) &= \sum_{i=1}^{n}\lambda_i\left[|e_i(0)|^{\eta} + \frac{k e^{\mu\tau_{ij}}}{1-\sigma}\sum_{j=1}^{n}|\tilde{b}_{ij}|\int_{-\tau_{ij}(0)}^{0}|e_j(s)|^{\eta}\,ds\right] \\ &\le \sum_{i=1}^{n}\lambda_i|e_i(0)|^{\eta} + \sum_{i=1}^{n}\frac{\tau_{ij} k e^{\mu\tau_{ij}}}{1-\sigma}\sum_{j=1}^{n}\lambda_j|\bar{b}_{ij}|\max_{-\tau_{ij}\le s\le 0}|e_j(s)|^{\eta} \\ &\le \max_{1\le i\le n}\left(\lambda_i + \frac{\tau_{ij} k e^{\mu\tau_{ij}}}{1-\sigma}\sum_{j=1}^{n}\lambda_j|\bar{b}_{ij}|\right)\|\psi\|_{\eta}^{\eta}. \end{aligned}$$

On the other hand, $V(t) \ge e^{\mu t}\min_{1\le i\le n}\lambda_i\,\|e(t)\|_{\eta}^{\eta}$. So, writing

$$\bar{\lambda} = \max_{1\le i\le n}\left(\lambda_i + \frac{\tau_{ij} k e^{\mu\tau_{ij}}}{1-\sigma}\sum_{j=1}^{n}\lambda_j|\bar{b}_{ij}|\right), \qquad \lambda = \min_{1\le i\le n}\lambda_i,$$

one obtains $\|e(t)\|_{\eta}^{\eta} \le \frac{\bar{\lambda}}{\lambda}\|\psi\|_{\eta}^{\eta} e^{-\mu t}$, and then we can get the following conclusion:

$$\|e(t)\|_{\eta} \le \sqrt[\eta]{\frac{\bar{\lambda}}{\lambda}}\,\|\psi\|_{\eta}\, e^{-\frac{\mu t}{\eta}}.$$

Thus the proof is completed.
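As a practical illustration (not part of the original paper), the following sketch implements controller (9) and numerically checks condition (8); the function names and interface are our own choices, and all parameter values must be supplied by the user.

```python
import numpy as np

# Illustrative sketch: the state-feedback controller (9) and a numerical
# check of condition (8).  A controller satisfying Theorem 1 requires
# parameters p, q, lam for which condition_8_holds returns True.

def controller_9(e, s_t, p, q, f):
    # u_i = p_i e_i + sum_j q_ij (f(e_j + s_j) - f(s_j))
    return p * e + q @ (f(e + s_t) - f(s_t))

def condition_8_holds(d, a_bar, b_bar, p, q, lam, k, eta, sigma):
    n = len(d)
    for i in range(n):
        off = [j for j in range(n) if j != i]
        lhs = (eta * (p[i] - d[i])
               + eta * k * (q[i, i] + a_bar[i, i])
               + (eta - 1) * k * sum(abs(a_bar[i, j]) for j in off)
               + k * sum(lam[j] / lam[i] * abs(a_bar[i, j]) for j in off)
               + (eta - 1) * k * np.abs(b_bar[i]).sum()
               + k / (1 - sigma) * sum(lam[j] / lam[i] * abs(b_bar[i, j])
                                       for j in range(n)))
        if lhs >= 0:
            return False
    return True
```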

Theorem 2. Under Assumption 1, the system (7) is globally asymptotically convergent via the following controller:

$$\begin{cases} u(t) = L(t)e(t) - \mathrm{diag}(\mathrm{sgn}(e(t)))\,d(t),\\ \dot{L}(t) = -\Gamma_1 e^T(t)e(t),\\ \dot{d}(t) = \Gamma_2|e(t)|, \end{cases} \tag{12}$$

where $\Gamma_1, \Gamma_2$ are positive definite matrices, and

$$\mathrm{sgn}(e(t)) = \begin{cases} 1, & e(t) > 0,\\ 0, & e(t) = 0,\\ -1, & e(t) < 0. \end{cases}$$
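A minimal sketch of one discretized update of controller (12) may clarify the adaptation laws. In this sketch, scalar gains stand in for the positive definite matrices $\Gamma_1, \Gamma_2$ and the bound estimate $d(t)$ is kept scalar; both are simplifying assumptions of ours, not choices from the paper.

```python
import numpy as np

# Minimal sketch of one Euler step of the adaptive controller (12).
# Scalar gains gamma1, gamma2 stand in for the positive definite
# matrices Gamma_1, Gamma_2, and d_hat is kept scalar -- simplifications.

def controller_12_step(e, L, d_hat, gamma1=1.0, gamma2=1.0, dt=1e-3):
    u = L * e - np.sign(e) * d_hat                  # u = L(t) e - diag(sgn(e)) d(t)
    L = L - gamma1 * float(e @ e) * dt              # dL/dt = -Gamma_1 e^T e
    d_hat = d_hat + gamma2 * np.abs(e).sum() * dt   # dd/dt = Gamma_2 |e|
    return u, L, d_hat
```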


Proof. We consider the Lyapunov function $V = V_1 + V_2 + V_3$. System (7) can be written in the following vector form:

$$\dot{e}(t) = -De(t) + \tilde{A}\big(f(e(t)+s(t)) - f(s(t))\big) + \tilde{B}\big(f(e(t-\tau(t)) + s(t-\tau(t))) - f(s(t-\tau(t)))\big) + h(t) + u(t). \tag{13}$$

Since

$$V_1 = \frac{1}{2}e^T(t)e(t),$$

we get

$$\dot{V}_1 = e^T(t)\dot{e}(t) = e^T(t)\big[-De(t) + \tilde{A}(f(e(t)+s(t)) - f(s(t))) + \tilde{B}(f(e(t-\tau(t))+s(t-\tau(t))) - f(s(t-\tau(t)))) + h(t) + u(t)\big].$$

Using Assumption 1 and the elementary inequality $x^T y \le \frac{1}{2}x^Tx + \frac{1}{2}y^Ty$, we have

$$\dot{V}_1 \le -e^T(t)De(t) + \frac{1}{2}e^T(t)\bar{A}\bar{A}^Te(t) + \frac{k^2}{2}e^T(t)e(t) + \frac{1}{2}e^T(t)\bar{B}\bar{B}^Te(t) + \frac{k^2}{2}e^T(t-\tau(t))e(t-\tau(t)) + |e^T(t)|d + e^T(t)u(t).$$

And

$$V_2 = \frac{k^2}{2(1-\sigma)}\int_{t-\tau(t)}^{t}e^T(\theta)e(\theta)\,d\theta.$$

One obtains

$$\dot{V}_2 = \frac{k^2}{2(1-\sigma)}e^T(t)e(t) - \frac{k^2(1-\dot{\tau}(t))}{2(1-\sigma)}e^T(t-\tau(t))e(t-\tau(t)) \le \frac{k^2}{2(1-\sigma)}e^T(t)e(t) - \frac{k^2}{2}e^T(t-\tau(t))e(t-\tau(t)).$$

Then we have

$$\dot{V}_1 + \dot{V}_2 \le e^T(t)\left[-D + \frac{1}{2}\bar{A}\bar{A}^T + \frac{1}{2}\bar{B}\bar{B}^T + \frac{k^2}{2}I + \frac{k^2}{2(1-\sigma)}I\right]e(t) + |e^T(t)|d + L(t)e^T(t)e(t) - |e^T(t)|d(t),$$

where $I$ is the identity matrix. Since

$$V_3 = \frac{(L(t)-L)^2}{2\Gamma_1} + \frac{(d(t)-d)^2}{2\Gamma_2},$$

we get

$$\dot{V}_3 = -(L(t)-L)e^T(t)e(t) + (d(t)-d)|e(t)| = -L(t)e^T(t)e(t) + Le^T(t)e(t) + d(t)|e(t)| - d|e(t)|.$$

One finds that

$$\dot{V}_1 + \dot{V}_2 + \dot{V}_3 \le e^T(t)\left[-D + \frac{1}{2}\bar{A}\bar{A}^T + \frac{1}{2}\bar{B}\bar{B}^T + \frac{k^2}{2}I + \frac{k^2}{2(1-\sigma)}I + L\right]e(t).$$

We can select $L < D - \frac{1}{2}\bar{A}\bar{A}^T - \frac{1}{2}\bar{B}\bar{B}^T - \frac{k^2}{2}I - \frac{k^2}{2(1-\sigma)}I$; then $\dot{V} = \dot{V}_1 + \dot{V}_2 + \dot{V}_3 \le 0$, and thus the system (7) achieves convergence.

4. Numerical simulation

In this section, several examples are offered to illustrate the effectiveness of the results obtained above. Consider the following two-dimensional memristor-based recurrent neural network model:

$$\begin{cases} \dfrac{dx_1(t)}{dt} = -d_1 x_1(t) + a_{11}(x_1(t))f(x_1(t)) + a_{12}(x_1(t))f(x_2(t)) + b_{11}(x_1(t))f(x_1(t-\tau_{11}(t))) + b_{12}(x_1(t))f(x_2(t-\tau_{12}(t))) + J_1(t),\\[1mm] \dfrac{dx_2(t)}{dt} = -d_2 x_2(t) + a_{21}(x_2(t))f(x_1(t)) + a_{22}(x_2(t))f(x_2(t)) + b_{21}(x_2(t))f(x_1(t-\tau_{21}(t))) + b_{22}(x_2(t))f(x_2(t-\tau_{22}(t))) + J_2(t), \end{cases} \tag{14}$$

where

$$a_{11}(x_1(t)) = \begin{cases} -5, & -\dot{f}(x_1(t)) - \dot{x}_1(t) \le 0,\\ 5, & -\dot{f}(x_1(t)) - \dot{x}_1(t) > 0, \end{cases} \qquad a_{12}(x_1(t)) = \begin{cases} 3, & \dot{f}(x_2(t)) - \dot{x}_1(t) \le 0,\\ -3, & \dot{f}(x_2(t)) - \dot{x}_1(t) > 0, \end{cases}$$

$$b_{11}(x_1(t)) = \begin{cases} 9, & -\dot{f}(x_1(t-\tau_{11}(t))) - \dot{x}_1(t) \le 0,\\ -9, & -\dot{f}(x_1(t-\tau_{11}(t))) - \dot{x}_1(t) > 0, \end{cases} \qquad b_{12}(x_1(t)) = \begin{cases} -5, & \dot{f}(x_2(t-\tau_{12}(t))) - \dot{x}_1(t) \le 0,\\ 5, & \dot{f}(x_2(t-\tau_{12}(t))) - \dot{x}_1(t) > 0, \end{cases}$$

$$a_{21}(x_2(t)) = \begin{cases} -5, & \dot{f}(x_1(t)) - \dot{x}_2(t) \le 0,\\ 5, & \dot{f}(x_1(t)) - \dot{x}_2(t) > 0, \end{cases} \qquad a_{22}(x_2(t)) = \begin{cases} 9, & -\dot{f}(x_2(t)) - \dot{x}_2(t) \le 0,\\ -9, & -\dot{f}(x_2(t)) - \dot{x}_2(t) > 0, \end{cases}$$

$$b_{21}(x_2(t)) = \begin{cases} 6, & \dot{f}(x_1(t-\tau_{21}(t))) - \dot{x}_2(t) \le 0,\\ -6, & \dot{f}(x_1(t-\tau_{21}(t))) - \dot{x}_2(t) > 0, \end{cases} \qquad b_{22}(x_2(t)) = \begin{cases} -5, & -\dot{f}(x_2(t-\tau_{22}(t))) - \dot{x}_2(t) \le 0,\\ 5, & -\dot{f}(x_2(t-\tau_{22}(t))) - \dot{x}_2(t) > 0. \end{cases}$$

We take $J(t) = (J_1(t), J_2(t))^T = (0, 0)^T$, $\tau_{11}(t) = \tau_{22}(t) = \tau_{12}(t) = \tau_{21}(t) = 2\sin(t)$, and the activation function $f(x) = \frac{1}{2}(|1+x| - |1-x|)$. Obviously, we have $\sigma = 2$, $k = 1$, $d_1 = 8$, $d_2 = 6$, $\bar{a}_{11} = 5$, $\bar{a}_{22} = 9$, $\bar{a}_{12} = 3$, $\bar{a}_{21} = 5$, $\bar{b}_{11} = 9$, $\bar{b}_{12} = 5$, $\bar{b}_{21} = 6$, $\bar{b}_{22} = 5$. The phase trajectories $(x_1(t), x_2(t))$ of network (14) are shown in Fig. 1, and the state trajectories $x_{i1}(t)$, $x_{i2}(t)$ with 15 initial values of network (14) are shown in Fig. 2. The parameters are selected as $\eta = 1$, $p_1 = 1$, $p_2 = -4$, $\lambda_1 = 2$, $\lambda_2 = \frac{1}{2}$, $q_{11} = 1$, $q_{22} = -20$. Fig. 3 shows the error trajectories $e_{i1}(t)$, $e_{i2}(t)$ $(i = 1, 2)$ with 2 initial values of network (14) with impulsive perturbations under controller (9); from Fig. 3, we can see that the above selection of parameters satisfies Theorem 1. Let $h(t) = [\cos(t), -\frac{1}{2}\sin(2t)]$; Fig. 4 shows the state trajectories $x_{i1}(t)$, $x_{i2}(t)$ $(i = 1, 2, \ldots, 15)$ of system (14) with boundary perturbations without the impact of control.
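For reference, a hedged sketch of how this experiment can be approximated numerically: Euler integration of (14) under controller (12) with the perturbation $h(t)$ above, taking $s(t) = 0$ as the target, freezing one admissible realization of the switching weights, replacing the time-varying delays by a constant lag, and setting the adaptation gains to 1. All of these are simplifications of the original setting.

```python
import numpy as np

# Hedged sketch: closed-loop Euler simulation of example (14) with the
# boundary perturbation h(t) = [cos t, -0.5 sin 2t] and controller (12).
# Assumptions: s(t) = 0, one frozen admissible weight realization,
# constant-lag surrogate for tau_ij(t), and unit adaptation gains.

def f(x):
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def simulate_closed_loop(x0, T=10.0, dt=1e-3, tau=0.5):
    d = np.array([8.0, 6.0])
    A = np.array([[-5.0, 3.0], [-5.0, 9.0]])   # one realization of A(x)
    B = np.array([[9.0, -5.0], [6.0, -5.0]])   # one realization of B(x)
    steps, lag = int(T / dt), int(tau / dt)
    E = np.zeros((steps + 1, 2))
    E[0] = x0                                  # error equals state since s(t) = 0
    L, d_hat = 0.0, 0.0
    for i in range(steps):
        t = i * dt
        e, e_del = E[i], E[max(i - lag, 0)]
        h = np.array([np.cos(t), -0.5 * np.sin(2.0 * t)])
        u = L * e - np.sign(e) * d_hat         # controller (12)
        L -= float(e @ e) * dt                 # adaptation with Gamma_1 = 1
        d_hat += np.abs(e).sum() * dt          # adaptation with Gamma_2 = 1
        E[i + 1] = e + dt * (-d * e + A @ f(e) + B @ f(e_del) + h + u)
    return E

errors = simulate_closed_loop(np.array([0.5, -0.8]))
```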

Fig. 5 shows the state trajectories $e_{i1}(t)$ and $e_{i2}(t)$ $(i = 1, 2, \ldots, 15)$ with 15 initial values of network (14) with boundary perturbations under controller (12); here the selected parameters are inappropriate and do not satisfy Theorem 2, so the networks cannot converge to the equilibrium points. Fig. 6 shows $e_{i1}(t)$ and $e_{i2}(t)$ $(i = 1, 2, \ldots, 15)$ of network (14) with boundary perturbations under the effect of controller (12). These simulation results verify Theorem 2.

5. Conclusion

This paper proposed two kinds of controllers to achieve the synchronization of memristor-based neural networks with impulsive perturbations or boundary perturbations. Based on the Lyapunov stability method and differential inclusion theory, these two control strategies ensure that the system converges to the equilibrium points, which allows it to mimic the human brain in many applications, such as pattern recognition, associative memories and learning. Finally, numerical examples are given to illustrate the effectiveness of the proposed results.

Acknowledgments

The authors would like to thank all the anonymous reviewers for their helpful advice. This paper is supported by the National Natural Science Foundation of China (Grant Nos. 61100204, 61170269, 61121061), the China Postdoctoral Science Foundation Funded Project (Grant No. 2013M540070), the Beijing Higher Education Young Elite Teacher Project (Grant No. YETP0449), and the Open Research Fund of Zhejiang Provincial Key Lab of Data Storage and Transmission Technology, Hangzhou Dianzi University (Grant No. 201305).

References

Chua, L. O. (1971). Memristor: the missing circuit element. IEEE Transactions on Circuit Theory, 18, 507–519.
Gu, H. B. (2009). Adaptive synchronization for competitive neural networks with different time scales and stochastic perturbation. Neurocomputing, 73, 350–356.
Hu, X. L. (2013). An extension of Young's inequality and its application. Applied Mathematics and Computation, 219, 6393–6399.
Hu, X. F., Duan, S. K., & Wang, L. D. (2012). A novel chaotic neural network using memristive synapse with applications in associative memory. Abstract and Applied Analysis, http://dx.doi.org/10.1155/2012/405739.
Li, X. L., & Cao, J. D. (2008). Adaptive synchronization for delayed neural networks with stochastic perturbation. Journal of The Franklin Institute, 345, 779–791.
Li, X. D., & Chen, Z. (2009). Stability properties for Hopfield neural networks with delays and impulsive perturbations. Nonlinear Analysis: Real World Applications, 10, 3253–3265.
Li, X. D., Ding, C. M., & Zhu, Q. X. (2010). Synchronization of stochastic perturbed chaotic neural networks with mixed delays. Journal of The Franklin Institute, 347, 1266–1280.
Li, H. T., & Wang, Y. Z. (2013). Consistent stabilizability of switched Boolean networks. Neural Networks, http://dx.doi.org/10.1016/j.neunet.2013.05.012.
Pershin, Y. V., & Ventra, M. D. (2010). Experimental demonstration of associative memory with memristive neural networks. Neural Networks, 23, 881–886.
Strukov, D. B., Snider, G. S., Stewart, D. R., & Williams, R. S. (2008). The missing memristor found. Nature, 453, 80–83.
Wang, G., & Shen, Y. (2013). Exponential synchronization of coupled memristive neural networks with time delays. Neural Computing and Applications, http://dx.doi.org/10.1007/s00521-013-1349-3.
Wang, Z. Q., Xu, H. Y., Li, X. H., Yu, H., Liu, Y. C., & Zhu, X. J. (2012). Synaptic learning and memory functions achieved using oxygen ion migration/diffusion in an amorphous InGaZnO memristor. Advanced Functional Materials, 22, 2759–2765.
Wu, A. L., Wen, S. P., & Zeng, Z. G. (2012). Synchronization control of a class of memristor-based recurrent neural networks. Information Sciences, 183, 106–116.
Wu, A. L., & Zeng, Z. G. (2012). Dynamical behaviors of memristor-based recurrent networks with time-varying delays. Neural Networks, 36, 1–10.
Wu, H. Q., & Zhang, L. Y. (2013). Almost periodic solution for memristive neural networks with time-varying delays. Journal of Applied Mathematics, http://dx.doi.org/10.1155/2013/716172.
Wu, A. L., Zhang, J., & Zeng, Z. G. (2011). Dynamic behaviors of a class of memristor-based Hopfield networks. Physics Letters A, 375, 1661–1665.
Zeng, Z. G., Huang, D. S., & Wang, Z. F. (2005). Memory pattern analysis of cellular neural networks. Physics Letters A, 342, 114–128.
Zhang, G. D., Shen, Y., & Wang, L. M. (2013). Global anti-synchronization of a class of chaotic memristive neural networks with time-varying delays. Neural Networks, 46, 1–8.
Zhou, W. N., Wang, T. B., & Mou, J. P. (2012). Synchronization control for the competitive complex networks with time delay and stochastic effects. Communications in Nonlinear Science and Numerical Simulation, 17, 3417–3426.
Zhu, Q. X., Yang, X. S., & Wang, H. C. (2010). Stochastic asymptotic stability of delayed recurrent neural networks with both Markovian jump parameters and nonlinear disturbances. Journal of The Franklin Institute, 347, 1489–1510.
