Neural Networks 52 (2014) 25–32


Nonsmooth finite-time stabilization of neural networks with discontinuous activations

Xiaoyang Liu a,b, Ju H. Park a,∗, Nan Jiang c, Jinde Cao d,e

a Nonlinear Dynamics Group, Department of Electrical Engineering, Yeungnam University, 214-1 Dae-Dong, Kyongsan 712-749, Republic of Korea
b School of Computer Science & Technology, Jiangsu Normal University, Xuzhou 221116, China
c School of Business, Jiangsu Normal University, Xuzhou 221116, China
d Department of Mathematics, Southeast University, Nanjing 210096, China
e Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia

Article history: Received 25 August 2013; received in revised form 26 November 2013; accepted 3 January 2014.

Keywords: Discontinuous activation; Discontinuous controller; Finite-time stabilization; Neural network

Abstract

This paper is concerned with finite-time stabilization for a class of neural networks (NNs) with discontinuous activations. The aim of the addressed problem is to design a discontinuous controller that stabilizes the states of such neural networks in finite time. Unlike previous works, the stabilization objective is achieved when the activations and the controllers are both discontinuous. Based on the well-known finite-time stability theorem for nonlinear systems and on nonsmooth analysis, sufficient conditions are established to ensure the finite-time stability of the dynamics of NNs. The upper bound of the settling time for stabilization can then be estimated in two forms, corresponding to two different methods of proof. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design method. © 2014 Elsevier Ltd. All rights reserved.

1. Introduction

In the past decades, the dynamic behaviors of artificial neural networks have become popular subjects of research and have received considerable attention. There are many reasons for this, but one of the main ones is the successful application of NNs in various areas such as signal processing, fault diagnosis, pattern recognition, image processing, and optimization problems (Liang, Wang, & Liu, 2008; Lu & Cao, 2008; Wang, Cao, & Liang, 2009). Meanwhile, a great number of research results have been published in the literature; see Huang, Cao, and Ho (2006), Huang, Feng, and Cao (2009), Liu, Wang, and Liu (2007), Liu, Wang, and Liu (2008), Lu and Cao (2008), and the references therein. Most of these papers focus on the stability or periodicity of neural networks. However, since neural network systems may also exhibit unstable behaviors such as oscillation or chaos, a controller should be introduced to stabilize unstable neural networks (Lu, Ho, & Wang, 2009). Since the pioneering work of Ott, Grebogi, and Yorke (1990), in which a chaotic attractor is converted into one of a large number of possible attracting time-periodic motions, many monographs and papers studying the stabilization of nonlinear systems have been published (Li, Liao, & Huang, 2007; Lu, Ho, & Wu, 2009; Sontag, 1989).



∗ Corresponding author. Tel.: +82 53 8102491. E-mail address: [email protected] (J.H. Park).

http://dx.doi.org/10.1016/j.neunet.2014.01.004

These publications have developed many control techniques, including linear feedback, adaptive feedback, and so on. For example, in Lu, Ho, and Wu (2009), exponential mean-square stabilization for switched stochastic dynamical networks was studied by designing a single linear controller, where the coupling matrix of the networks was assumed to be irreducible symmetric or irreducible asymmetric. Unfortunately, to the best of the authors' knowledge, most of the literature on the stabilization of neural networks allows a sufficiently large convergence time, even though in practical applications the network states should be stabilized as quickly as possible. In order to achieve a faster stabilization speed, it is better to shorten the stabilization time; we therefore aim to complete the stabilization objective in finite time rather than merely asymptotically (Bhat & Bernstein, 1997; Haimo, 1986). One effective method is to utilize finite-time stabilization techniques, which have also demonstrated advantages such as better robustness and disturbance-rejection properties (Bhat & Bernstein, 1997; Chen, Lewis, & Xie, 2011; Cortés, 2006; Hong, Wang, & Cheng, 2006; Hui, Haddad, & Bhat, 2008; Khoo, Xie, & Man, 2009; Wang & Xiao, 2010; Xiao, Wang, Chen, & Gao, 2009). On the other hand, the second limitation in the stabilization of nonlinear systems, whether in finite time or not,


is that the nonlinear functions are assumed to be continuous, or even globally Lipschitz. In 2003, Forti and Nistri (2003) noted in a brief review that neural networks with discontinuous activations are important and frequently arise in practice. Taking the classic Hopfield neural networks (Hopfield & Tank, 1984) as an example, the standard assumption is that the activations operate in a high-gain limit, where they closely approach discontinuous comparator functions. Just as Hopfield and Tank (1984) and Hopfield and Tank (1986) pointed out, the high-gain hypothesis is crucial to make negligible the contribution to the neural network energy function of the term depending on neuron self-inhibitions and to favor binary output formation (Lu & Chen, 2006, 2008). It is well known that nonlinear systems with discontinuous dynamic behaviors achieve finite-time stability or convergence more easily. Likewise, discontinuous controllers can be intentionally designed to realize finite-time stabilization or synchronization. In Shen and Cao (2011), the authors investigated finite-time synchronization of an array of coupled neural networks via discontinuous controllers. In Chen et al. (2011), the finite-time distributed consensus problem for multi-agent systems was investigated by using a binary consensus protocol and the pinning control scheme, where the proposed (discontinuous) algorithm only requires sign information of the relative state measurements. However, there are very few papers using discontinuous controllers to stabilize neural networks with discontinuous activations.

Motivated by the above discussions, in this paper we consider finite-time stabilization of neural networks with discontinuous activations via discontinuous controllers. The contribution of this paper is fourfold. First, the previous literature on the stabilization of neural networks mainly focused on asymptotic stabilization over an infinite time horizon; here, finite-time stabilization is realized under the newly proposed control algorithms. Second, in the few papers that discuss finite-time stabilization of neural networks, the activations are almost always restricted to be continuous; we study the finite-time stabilization issue for neural networks with discontinuous activations, which expands the scope of the study. Third, the controllers used to solve the finite-time stabilization problems are also discontinuous. By employing the concept of Filippov solutions, nonsmooth analysis, and the set-valued Lie derivative, the neural networks are stabilized in finite time even though both the activations and the controllers are discontinuous. Fourth, the main results on finite-time stabilization are obtained based on the well-known finite-time stability theorem, and the settling time can be estimated in two forms corresponding to two different methods of proof.

The rest of the paper is organized as follows. In Section 2, some preliminaries about the network model and the definition of Filippov solutions are briefly given. In Section 3, the main theorems and corollaries for finite-time stabilization with discontinuous controllers are discussed. In Section 4, simulation results substantiating the theoretical analysis are presented. The paper is concluded in Section 5.

The notation in this paper is quite standard. $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n \times m$ real matrices.
The superscript ``T'' denotes the transpose, and the notation $X \ge Y$ (respectively, $X > Y$), where $X$ and $Y$ are symmetric matrices, means that $X - Y$ is positive semi-definite (respectively, positive definite). $\lambda_{\max}(M)$ and $\lambda_{\min}(M)$ denote the maximal and minimal eigenvalues of a real matrix $M$, respectively. $I$ and $0$ represent the identity matrix and a zero matrix, respectively; $\mathrm{diag}(\cdots)$ stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

2. Model formulation and preliminaries

Some preliminary knowledge is presented in this section for the derivation of our main results. The deterministic NN can be described by the following differential equation:
$$\dot{x}(t) = -Dx(t) + Af(x(t)) + J, \tag{1}$$
or
$$\dot{x}_i(t) = -d_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + J_i, \quad i = 1, 2, \ldots, n, \tag{2}$$
where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in \mathbb{R}^n$ is the vector of neuron states; $n$ represents the number of neurons in the network; $D = \mathrm{diag}(d_1, d_2, \ldots, d_n)$ is an $n \times n$ constant diagonal matrix with $d_i > 0$, $i = 1, 2, \ldots, n$; $A = (a_{ij})_{n\times n}$ is an $n \times n$ interconnection matrix; $f(x) = (f_1(x_1), f_2(x_2), \ldots, f_n(x_n))^T: \mathbb{R}^n \to \mathbb{R}^n$ is a diagonal mapping where $f_i$, $i = 1, 2, \ldots, n$, represents the neuron input–output activation; and $J = (J_1, J_2, \ldots, J_n)^T$ is a constant external input vector.

Unlike previous studies on the stability or stabilization of neural networks, the nonlinear activation function $f$ in this paper does not satisfy the common Lipschitz condition or QUAD condition (Kwon, Lee, Park, & Cha, 2012; Park, Kwon, Lee, Park, & Cha, 2012; Wu, Park, Su, & Chu, 2012); even the basic continuity assumption is removed. The main difference between this paper and the existing works is that the dynamic behavior in our model is allowed to be discontinuous. From the theoretical point of view, the basic and natural question concerns the solutions of discontinuous dynamical systems. The existence of solutions for such systems is a delicate problem, as can be seen from our previous works (Liu & Cao, 2009, 2011; Liu, Chen, Cao, & Lu, 2011). First, we need some preliminaries to introduce the definition of such solutions.

Definition 1 (Class $\mathcal{F}$ of functions). We call $f \in \mathcal{F}$ if, for all $i = 1, 2, \ldots, n$, $f_i(\cdot)$ is continuously differentiable except on a countable set of isolated points $\{\rho_k^i\}$, where the right and left limits $f_i^+(\rho_k^i)$ and $f_i^-(\rho_k^i)$ exist, $k = 1, 2, \ldots$.

In the following, we apply the framework of Filippov (1988) when discussing the solutions of the neural network (1) with the discontinuous activation $f$.

Definition 2. A set-valued map is defined as

$$\mathcal{F}[f](x) = \bigcap_{\delta > 0}\ \bigcap_{\mu(N) = 0} K\big[f\big(B(x, \delta) \setminus N\big)\big], \tag{3}$$
where $K(E)$ is the closure of the convex hull of the set $E$, $B(x, \delta) = \{y : \|y - x\| \le \delta\}$, and $\mu(N)$ is the Lebesgue measure of the set $N$. A solution in the sense of Filippov of Eq. (1) with the initial condition $x(0) = x_0$ is an absolutely continuous function $x(t)$ which satisfies the differential inclusion
$$\frac{dx}{dt} \in -Dx(t) + A\mathcal{F}[f](x), \quad \text{a.e. } t \in [0, +\infty). \tag{4}$$

Remark 1. Definition 2 can be explained as follows: for $x \in \mathbb{R}^n$, the vector field $f$ is evaluated at the points belonging to $B(x, \delta)$, and the effect of letting $\delta$ approach zero is captured by performing this evaluation for smaller and smaller $\delta$. In order to keep the outcome the same for two vector fields that differ on a set of measure zero, an arbitrary set $N$ of measure zero in $B(x, \delta)$ may be excluded. The above definition of $\mathcal{F}[f]$ is due to Filippov and is called the Filippov regularization.
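To make the regularization concrete, the following illustrative Python sketch (our own, not part of the paper; the function name and the tolerance are our choices) approximates the interval $K[f](x)$ for a scalar piecewise-continuous function by sampling a small punctured ball around $x$, as in Definition 2. It is shown here for the activation $f(x) = x + 0.2\,\mathrm{sign}(x)$ used later in Section 4.

```python
import numpy as np

def filippov_hull(f, x, delta=1e-6):
    """Approximate the Filippov set-valued map F[f](x) for a scalar,
    piecewise-continuous f: evaluate f on a small punctured ball
    B(x, delta) and return the convex hull [min, max] of the values."""
    samples = np.linspace(x - delta, x + delta, 201)
    samples = samples[samples != x]   # a set of measure zero may be excluded
    values = f(samples)
    return values.min(), values.max()

# The discontinuous activation used in Section 4: f(x) = x + 0.2*sign(x).
f = lambda x: x + 0.2 * np.sign(x)

print(filippov_hull(f, 0.0))   # approx (-0.2, 0.2): the jump is convexified
print(filippov_hull(f, 1.0))   # approx (1.2, 1.2): f is continuous at x = 1
```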


In this paper, because $f \in \mathcal{F}$, we have $\mathcal{F}[f](x) = K[f(x(t))] = (K[f_1(x_1)], \ldots, K[f_n(x_n)])$, where $K[f_i(x_i)] = [\min\{f_i(x_i^-), f_i(x_i^+)\}, \max\{f_i(x_i^-), f_i(x_i^+)\}]$, $i = 1, \ldots, n$. In the following, some useful properties of the Filippov set-valued map $\mathcal{F}[f]$ are presented:

(1) Consistency: if $f: \mathbb{R}^d \to \mathbb{R}^m$ is continuous at $x \in \mathbb{R}^d$, then $\mathcal{F}[f](x) = \{f(x)\}$.
(2) Sum rule: if $f_1, f_2: \mathbb{R}^d \to \mathbb{R}^m$ are locally bounded at $x \in \mathbb{R}^d$, then $\mathcal{F}[f_1 + f_2](x) \subseteq \mathcal{F}[f_1](x) + \mathcal{F}[f_2](x)$. Moreover, if either $f_1$ or $f_2$ is continuous at $x$, then equality holds.
(3) Product rule: if $f_1: \mathbb{R}^d \to \mathbb{R}^m$ and $f_2: \mathbb{R}^d \to \mathbb{R}^n$ are locally bounded at $x \in \mathbb{R}^d$, then $\mathcal{F}[(f_1, f_2)^T](x) \subseteq \mathcal{F}[f_1](x) \times \mathcal{F}[f_2](x)$. Moreover, if either $f_1$ or $f_2$ is continuous at $x$, then equality holds.
(4) Matrix transformation rule: if $f: \mathbb{R}^d \to \mathbb{R}^m$ is locally bounded at $x \in \mathbb{R}^d$ and $Z: \mathbb{R}^d \to \mathbb{R}^{d\times m}$ is continuous at $x \in \mathbb{R}^d$, then $\mathcal{F}[Zf](x) = Z(x)\mathcal{F}[f](x)$.


It is obvious that, for all $f \in \mathcal{F}$, the set-valued map $x(t) \mapsto -Dx(t) + A\mathcal{F}[f](x(t))$ has nonempty compact convex values. Hence, it is upper semi-continuous (Aubin & Cellina, 1984) and measurable. By the measurable selection theorem (Aubin & Frankowska, 1990), if $x(t)$ is a solution of (1), then there exists a measurable function $\alpha(t) \in K[f(x(t))]$ such that the following equation holds:
$$\dot{x}(t) = -Dx(t) + A\alpha(t), \quad \text{for a.e. } t \in [0, +\infty), \tag{5}$$
where $\alpha(t) = (\alpha_1(t), \alpha_2(t), \ldots, \alpha_n(t))^T$ and $\alpha_i(t) \in K[f_i(x_i(t))]$. For each state $x(t)$, the single-valued function $\alpha(t)$ is the so-called measurable selection of the set-valued function $\mathcal{F}[f](x(t))$. For the set-valued function obtained by the Filippov regularization, the existence of a continuous approximate selection is guaranteed by Cellina's theorem (Aubin & Frankowska, 1990).

Definition 3. Let $\Omega_V \subset \mathbb{R}^n$ denote the set of points at which the function $V$ fails to be differentiable. The generalized gradient (Clarke, 1983) $\partial V: \mathbb{R}^n \to \mathcal{B}(\mathbb{R}^n)$ of $V$ is defined by $\partial V(x) = \mathrm{co}\{\lim_{i\to\infty} \nabla V(x_i) : x_i \to x,\ x_i \notin S \cup \Omega_V\}$, where $\mathrm{co}(\cdot)$ denotes the convex hull and $S \subset \mathbb{R}^n$ is a set of measure zero. In addition, the set-valued Lie derivative of $V$ with respect to $f$ at $x$ is defined as $\mathcal{L}_f V(x) = \{a \in \mathbb{R} \mid \exists v \in \mathcal{F}[f](x) \text{ with } v^T \xi = a,\ \forall \xi \in \partial V(x)\}$.

To establish our main results, the following assumption on system (1) or (2) is needed.

Assumption 1. For all $i = 1, 2, \ldots, n$, suppose $0 \in K[f_i(0)]$ and there exist constants $l_i$ and $m_i \ge 0$ such that, for all $\iota_i(t) \in K[f_i(x_i(t))]$ and $\kappa_i(t) \in K[f_i(y_i(t))]$, the following holds:
$$\|\iota_i(t) - \kappa_i(t)\| \le l_i \|x_i(t) - y_i(t)\| + m_i, \quad i = 1, 2, \ldots, n. \tag{6}$$

Remark 2. By the definition of the Filippov regularization, a discontinuous function is first transformed into a Filippov set-valued function, which maps a point to a set, and a measurable selection is then chosen. This selection is not unique, which may result in different values at the same discontinuity point. Hence, $\iota_i(t)$ may not equal $\kappa_i(t)$ even when $x_i = y_i$, if $x_i$ is a discontinuity point. That is, for a discontinuous function $f$, the constant $m_i$ in Assumption 1 is in general nonzero; this is the essential difference between this paper and the previous literature, where a Lipschitz condition holds.

Considering the neural network (1) or (2) under Assumption 1, the existence of an equilibrium point is guaranteed (a detailed explanation can be found in our earlier works Liu & Cao, 2009 and Liu et al., 2011). In this paper, our main goal is to drive the NN (1) to the desired state $x^*$, an equilibrium point of NN (1), in finite time. Without loss of generality, one can shift the equilibrium point $x^*$ to the origin via the translation $y(t) = x(t) - x^*$, which yields the following dynamic system:
$$\dot{y}(t) = -Dy(t) + Ag(y(t)), \tag{7}$$
where $g(y(t)) = f(x(t)) - f(x^*)$.

3. Finite-time stabilization via discontinuous controllers

In this section, we establish some basic results on the finite-time stabilization of the neural network (1) or (2) with discontinuous activations. That is, the goal is to stabilize the NN (1) to the equilibrium point $x^*$ or, equivalently, to stabilize the NN (7) to the origin. To this end, in the remainder of this paper, a controller $u(t)$ is designed for the finite-time stabilization of NN (7). The controlled NN (7) can be described by the following differential equation:
$$\dot{y}(t) = -Dy(t) + Ag(y(t)) + u(t). \tag{8}$$

In this paper, the controller $u(t)$ is designed as the discontinuous function
$$u(t) = -k_1 y(t) - k_2\,\mathrm{sign}(y(t)), \tag{9}$$
where $\mathrm{sign}(y(t)) = (\mathrm{sign}(y_1(t)), \mathrm{sign}(y_2(t)), \ldots, \mathrm{sign}(y_n(t)))^T$ and the constants $k_1, k_2$ are two gain coefficients to be determined.

Remark 3. In Hong et al. (2006), Khoo et al. (2009), Wang and Xiao (2010), and Xiao et al. (2009), the finite-time controller is designed as $u_\alpha(t) = -k_1 y(t) - k_2\,\mathrm{sign}(y(t))|y(t)|^\alpha$, which is a continuous function when $0 < \alpha < 1$. In fact, when $\alpha = 1$, such a controller is the typical linear one studied in Li et al. (2007) and Lu, Ho, and Wu (2009), and in this case it solves an asymptotic stabilization problem. In this paper, we show that the discontinuous controller $u(t)$ in (9) can also be used to study the finite-time stabilization issue, which fills the gap of the critical case ($\alpha = 0$ in $u_\alpha(t)$) in designing controllers for the finite-time purpose.

Definition 4. The neural network (8) is said to be stabilized in finite time by the controller (9) if, for any initial state $y(0)$, there exists a time $t^*$ such that
$$\lim_{t \to t^*} \|y(t)\| = 0,$$
and $\|y(t)\| = \|x(t) - x^*\| \equiv 0$ for $t > t^*$.

The following lemmas are needed for the derivation of our main results.

Lemma 1 (Bhat & Bernstein, 2000; Yu, Ma, Zhu, & Wu, 2006). Suppose that $V(x(t)): \mathbb{R}^n \to \mathbb{R}$ is C-regular and that $x(t): [0, +\infty) \to \mathbb{R}^n$ is absolutely continuous on any compact interval of $[0, +\infty)$. If there exists a continuous function $\gamma: (0, +\infty) \to \mathbb{R}$, with $\gamma(\sigma) > 0$ for $\sigma \in (0, +\infty)$, such that $\dot{V}(t) \le -\gamma(V(t))$ and
$$\int_0^{V(0)} \frac{d\sigma}{\gamma(\sigma)} = t^* < +\infty,$$
then we have $V(t) = 0$ for $t \ge t^*$. In particular:


(1) If $\gamma(\sigma) = K_1\sigma + K_2\sigma^\mu$ for all $\sigma \in (0, +\infty)$, where $\mu \in (0, 1)$ and $K_1, K_2 > 0$, then the settling time is estimated by
$$t^* = \frac{1}{K_1(1 - \mu)}\,\ln\frac{K_1 V^{1-\mu}(0) + K_2}{K_2}. \tag{10}$$
(2) If $\gamma(\sigma) = K\sigma^\mu$ and $K > 0$, then the settling time is estimated by
$$t^* = \frac{V^{1-\mu}(0)}{K(1 - \mu)}. \tag{11}$$

Lemma 2 (Hardy, Littlewood, & Pólya, 1988). If $a_1, a_2, \ldots, a_n$ are positive numbers and $0 < r < p$, then
$$\left(\sum_{i=1}^{n} a_i^p\right)^{1/p} \le \left(\sum_{i=1}^{n} a_i^r\right)^{1/r}.$$

For the positive-definite matrix $P \in \mathbb{R}^{n\times n}$, let $\tilde{A} = PA = (\tilde{a}_{ij})_{n\times n}$, $\tilde{a} = \max_{1\le i,j\le n}\{|\tilde{a}_{ij}|\}$, and denote $l = \max_{1\le i\le n}\{l_i\}$, $m = \max_{1\le i\le n}\{m_i\}$. We then have the following theorems.

Theorem 1. For the given control gains $k_1$ and $k_2$, the controlled neural network (8) with (9) is finite-time stable, that is, the neural network (1) or (2) can be stabilized in finite time, if there exists a positive-definite matrix $P \in \mathbb{R}^{n\times n}$ such that
$$k_1 > \frac{\tilde{a}nl}{\lambda_{\min}(P)}, \tag{12}$$
$$k_2 > \frac{\tilde{a}mn}{\lambda_{\min}(P)}. \tag{13}$$
Moreover, the settling time for stabilization is estimated by
$$t_1^* \le \frac{\lambda_{\min}(P)}{k_1\lambda_{\min}(P) - \tilde{a}nl}\,\ln\left(1 + \frac{\left(k_1 - \frac{\tilde{a}nl}{\lambda_{\min}(P)}\right)\lambda_{\max}(P)\,\|y(0)\|}{k_2\lambda_{\min}(P) - \tilde{a}mn}\right).$$
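As a quick numerical sanity check on the settling-time estimate (10) of Lemma 1 (our own verification sketch, not part of the paper; the parameter values are arbitrary choices), one can integrate $\dot{V} = -K_1 V - K_2 V^\mu$ forward in time and compare the instant at which $V$ reaches zero with the closed-form bound:

```python
import numpy as np

# Case (1) of Lemma 1: gamma(sigma) = K1*sigma + K2*sigma**mu, 0 < mu < 1.
K1, K2, mu, V0 = 1.0, 0.5, 0.5, 4.0

# Closed-form settling-time estimate (10).
t_star = 1.0 / (K1 * (1 - mu)) * np.log((K1 * V0 ** (1 - mu) + K2) / K2)

# Forward-Euler integration of dV/dt = -K1*V - K2*V**mu until V hits zero.
V, t, dt = V0, 0.0, 1e-5
while V > 0.0:
    V -= dt * (K1 * V + K2 * V ** mu)
    t += dt

print(f"estimate (10): {t_star:.4f}, simulated: {t:.4f}")  # both ~3.22
```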

Proof. Considering the controlled system (8) with the controller (9), we have
$$\dot{y}(t) = -(D + k_1 I)y(t) + Ag(y(t)) - k_2\,\mathrm{sign}(y(t)). \tag{14}$$
Next, we prove that system (14) is finite-time stable in the sense of Definition 4. To this end, we choose the candidate Lyapunov function $V(y(t)) = y^T(t)Py(t)$ and calculate the set-valued Lie derivative of $V(y(t))$ along the trajectories of (14). Denoting $\delta(t) = -(D + k_1 I)y(t) + Ag(y(t)) - k_2\,\mathrm{sign}(y(t))$ and using Definition 3, we have
$$\begin{aligned}
\mathcal{L}_\delta V(t) &= [\partial V(t)]^T \cdot \mathcal{F}[\delta(t)] \\
&\subseteq 2y^T(t)P\big[(-D - k_1 I)y(t) + A\mathcal{F}(g(t)) - k_2 \mathcal{F}(\mathrm{sign}(y(t)))\big] \\
&= 2y^T(t)P\big[(-D - k_1 I)y(t) + A\mathcal{F}(g(t)) - k_2\,\mathrm{SIGN}(y(t))\big],
\end{aligned} \tag{15}$$
where the set-valued function $\mathrm{SIGN}(y(t)) = (\mathrm{SIGN}(y_1), \mathrm{SIGN}(y_2), \ldots, \mathrm{SIGN}(y_n))^T$ and
$$\mathrm{SIGN}(s) = \begin{cases} -1, & s < 0, \\ [-1, 1], & s = 0, \\ 1, & s > 0. \end{cases} \tag{16}$$
Then,
$$y^T(t)P k_2\,\mathrm{SIGN}(y(t)) \ge k_2\lambda_{\min}(P)\, y^T(t)\,\mathrm{SIGN}(y(t)) = k_2\lambda_{\min}(P)\sum_{i=1}^{n} y_i(t)\,\mathrm{SIGN}(y_i(t)) = k_2\lambda_{\min}(P)\sum_{i=1}^{n} |y_i(t)|. \tag{17}$$
For the set-valued function $\mathcal{F}(g(t))$, by the measurable selection theorem (Aubin & Frankowska, 1990), there exist measurable functions $\alpha_i(t) \in \mathcal{F}(g_i(t))$, $i = 1, 2, \ldots, n$, such that for a.e. $t \in [0, +\infty)$, one has
$$\mathcal{L}_\delta V(t) \le 2y^T(t)P(-D - k_1 I)y(t) + 2y^T(t)PA\alpha(t) - 2k_2\lambda_{\min}(P)\sum_{i=1}^{n} |y_i(t)|, \tag{18}$$
where $\alpha(t) = (\alpha_1(t), \alpha_2(t), \ldots, \alpha_n(t))^T$. Based on Assumption 1, we have
$$\begin{aligned}
y^T(t)PA\alpha(t) &= \sum_{i,j=1}^{n} y_i(t)\tilde{a}_{ij}\alpha_j(t) \le \sum_{i,j=1}^{n} |y_i(t)|\,|\tilde{a}_{ij}|\,|\alpha_j(t)| \le \tilde{a}\sum_{i,j=1}^{n} |y_i(t)|\big(l_j|y_j(t)| + m_j\big) \\
&\le \tilde{a}l\sum_{i,j=1}^{n} |y_i(t)|\,|y_j(t)| + \tilde{a}m\sum_{i,j=1}^{n} |y_i(t)| \le \tilde{a}nl\sum_{i=1}^{n} y_i^2(t) + \tilde{a}mn\sum_{i=1}^{n} |y_i(t)| \\
&\le \frac{\tilde{a}nl}{\lambda_{\min}(P)}V(t) + \tilde{a}mn\sum_{i=1}^{n} |y_i(t)|.
\end{aligned} \tag{19}$$
Based on Lemma 2, we have
$$\sum_{i=1}^{n} |y_i(t)| \ge \left(\sum_{i=1}^{n} |y_i(t)|^2\right)^{1/2} = \big[y^T(t)y(t)\big]^{1/2} \ge \lambda_{\max}^{-1/2}(P)\, V^{1/2}(t). \tag{20}$$
By (18)–(20), we have
$$\begin{aligned}
\mathcal{L}_\delta V(t) &\le y^T(t)(-2PD - 2k_1 P)y(t) + \frac{2\tilde{a}nl}{\lambda_{\min}(P)}V(t) - \big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\sum_{i=1}^{n} |y_i(t)| \\
&\le y^T(t)(-2PD)y(t) - \left(2k_1 - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)V(t) - \big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\sum_{i=1}^{n} |y_i(t)| \\
&\le -\left(2k_1 - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)V(t) - \big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\lambda_{\max}^{-1/2}(P)\, V^{1/2}(t).
\end{aligned} \tag{21}$$
From conditions (12), (13) and the special case (1) in Lemma 1, the finite-time stabilization of neural network (1) is ensured and the settling time is estimated by
$$\begin{aligned}
t_1^* &= \frac{1}{\frac{1}{2}\left(2k_1 - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)}\,\ln\frac{\left(2k_1 - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)V^{1/2}(0) + \big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\lambda_{\max}^{-1/2}(P)}{\big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\lambda_{\max}^{-1/2}(P)} \\
&\le \frac{\lambda_{\min}(P)}{k_1\lambda_{\min}(P) - \tilde{a}nl}\,\ln\left(1 + \frac{\left(k_1 - \frac{\tilde{a}nl}{\lambda_{\min}(P)}\right)\lambda_{\max}(P)\,\|y(0)\|}{k_2\lambda_{\min}(P) - \tilde{a}mn}\right).
\end{aligned} \tag{22}$$
This completes the proof. □
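For reference, a small Python helper (our own construction, not from the paper; the function name and argument conventions are ours) can check the gain conditions (12)–(13) for a candidate positive-definite $P$ and evaluate the settling-time estimate (22):

```python
import numpy as np

def theorem1_bound(P, A, k1, k2, l, m, y0):
    """Check conditions (12)-(13) of Theorem 1 for the given data and,
    if they hold, return the settling-time estimate (22); else None."""
    n = A.shape[0]
    a_tilde = np.abs(P @ A).max()              # a~ = max_ij |(PA)_ij|
    lam_min = np.linalg.eigvalsh(P).min()
    lam_max = np.linalg.eigvalsh(P).max()
    if k1 <= a_tilde * n * l / lam_min or k2 <= a_tilde * m * n / lam_min:
        return None                            # (12) or (13) is violated
    num = (k1 - a_tilde * n * l / lam_min) * lam_max * np.linalg.norm(y0)
    den = k2 * lam_min - a_tilde * m * n
    return lam_min / (k1 * lam_min - a_tilde * n * l) * np.log(1.0 + num / den)
```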

Remark 4. In the past few years, the dynamic behaviors of neural networks with discontinuous activations have been studied extensively, including the stability of equilibrium points, periodic solutions, and almost periodic solutions (Forti & Nistri, 2003; Liu & Cao, 2009; Liu et al., 2011; Lu & Chen, 2006, 2008). On the other hand, in Shen and Cao (2011), the authors considered finite-time synchronization of an array of coupled neural networks via discontinuous controllers, while the activation function satisfied the usual Lipschitz condition. In this paper, we consider finite-time stabilization of neural networks with discontinuous activations via discontinuous controllers, which fills the gap in studying the dynamics when the nonlinear functions and the controllers are both discontinuous.

Remark 5. In this paper, a positive-definite matrix parameter $P$ has been introduced into the conditions (12)–(13), which makes them easier to satisfy and reduces some conservatism. Indeed, the previous results are included in ours if we take $P = pI$, where $p$ is an arbitrary positive constant, as shown in the following.

Corollary 1. For the given control gains $k_1$ and $k_2$, the controlled neural network (8) with (9) is finite-time stable, that is, the neural network (1) or (2) can be stabilized in finite time, if the following two inequalities hold:
$$k_1 > \bar{a}nl, \tag{23}$$
$$k_2 > \bar{a}mn, \tag{24}$$
where $\bar{a} = \max_{1\le i,j\le n}\{|a_{ij}|\}$. Moreover, the settling time for stabilization is estimated by
$$t_1^* \le \frac{1}{k_1 - \bar{a}nl}\,\ln\left(1 + \frac{(k_1 - \bar{a}nl)\,\|y(0)\|}{k_2 - \bar{a}mn}\right). \tag{25}$$

Proof. Let $P = pI$; then $\lambda_{\max}(P) = \lambda_{\min}(P) = p$ and $\tilde{a} = p\bar{a}$, and the result follows from Theorem 1. □

Theorem 2. For the given control gains $k_1$ and $k_2$, the controlled neural network (8) with (9) is finite-time stable, that is, the neural network (1) or (2) can be stabilized in finite time, if there exist a constant $\eta > 0$ and a positive-definite matrix $P \in \mathbb{R}^{n\times n}$ such that the condition (13) and the following two inequalities hold:
$$-2D - 2k_1 I + \eta I < 0, \tag{26}$$
$$\eta > \frac{2\tilde{a}nl}{\lambda_{\min}(P)}. \tag{27}$$
Moreover, the settling time for stabilization is estimated by
$$t_2^* \le \frac{2\lambda_{\min}(P)}{\eta\lambda_{\min}(P) - 2\tilde{a}nl}\,\ln\left(1 + \frac{\left(\eta - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)\lambda_{\max}(P)\,\|y(0)\|}{2k_2\lambda_{\min}(P) - 2\tilde{a}mn}\right).$$

Proof. Choose the same Lyapunov function $V(y(t))$ as in the proof of Theorem 1. By (15) and (18)–(19), we have
$$\mathcal{L}_\delta V(t) \le y^T(t)P(-2D - 2k_1 I)y(t) + \frac{2\tilde{a}nl}{\lambda_{\min}(P)}V(t) - \big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\sum_{i=1}^{n} |y_i(t)|. \tag{28}$$
Combining the conditions (26) and (27), with the positive constant $\eta > \frac{2\tilde{a}nl}{\lambda_{\min}(P)}$, we obtain
$$\begin{aligned}
\mathcal{L}_\delta V(t) &\le y^T(t)P(-2D - 2k_1 I + \eta I)y(t) - \left(\eta - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)V(t) - \big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\sum_{i=1}^{n} |y_i(t)| \\
&\le -\left(\eta - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)V(t) - \big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\lambda_{\max}^{-1/2}(P)\, V^{1/2}(t).
\end{aligned} \tag{29}$$
From the special case (1) in Lemma 1, the finite-time stabilization of neural network (1) is ensured and the settling time is estimated by
$$\begin{aligned}
t_2^* &= \frac{1}{\frac{1}{2}\left(\eta - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)}\,\ln\frac{\left(\eta - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)V^{1/2}(0) + \big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\lambda_{\max}^{-1/2}(P)}{\big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\lambda_{\max}^{-1/2}(P)} \\
&\le \frac{2\lambda_{\min}(P)}{\eta\lambda_{\min}(P) - 2\tilde{a}nl}\,\ln\left(1 + \frac{\left(\eta - \frac{2\tilde{a}nl}{\lambda_{\min}(P)}\right)\lambda_{\max}(P)\,\|y(0)\|}{2k_2\lambda_{\min}(P) - 2\tilde{a}mn}\right).
\end{aligned} \tag{30}$$
This completes the proof. □

Remark 6. It can be seen from conditions (26)–(27) in Theorem 2 that $k_1 > -d + \frac{1}{2}\eta$, where $d = \max_i\{d_i\}$. Then, we have $k_1 > -d + \frac{\tilde{a}nl}{\lambda_{\min}(P)}$. Hence, compared to the condition (12) in Theorem 1, Theorem 2 reduces some conservatism through the introduction of the parameter $\eta$.

Remark 7. From Theorem 2, we can see that the two gain parameters $k_1$ and $k_2$ in the controller $u(t)$ play different roles in the finite-time stabilization problem. Based on the conditions (26) and (27), whether the controlled system (14) achieves finite-time stability depends not only on the value of $k_1$ but also on $k_2$. However, the size of the settling time $t_2^*$ depends only on the value of $k_2$, not on $k_1$.

Similar to Corollary 1, letting the positive-definite matrix be $P = pI$, $p > 0$, we have the following corollary.

Corollary 2. For the given control gains $k_1$ and $k_2$, the controlled neural network (8) with (9) is finite-time stable, that is, the neural network (1) or (2) can be stabilized in finite time, if there exists a constant $\eta > 0$ such that the condition (24) and the following two inequalities hold:
$$2k_1 I > -2D + \eta I, \tag{31}$$
$$\eta > 2\bar{a}nl. \tag{32}$$

Moreover, the settling time for stabilization is estimated by
$$t_2^* \le \frac{2}{\eta - 2\bar{a}nl}\,\ln\left(1 + \frac{(\eta - 2\bar{a}nl)\,\|y(0)\|}{2k_2 - 2\bar{a}mn}\right). \tag{33}$$

Proof. Let $P = pI$, and the result follows from Theorem 2. □

Theorem 3. For the given control gains $k_1$ and $k_2$, the controlled neural network (8) with (9) is finite-time stable, that is, the neural network (1) or (2) can be stabilized in finite time, if there exists a positive-definite matrix $P \in \mathbb{R}^{n\times n}$ such that the condition (13) and the following inequality hold:
$$-PD - k_1 P + \tilde{a}nlI < 0. \tag{34}$$
Moreover, the upper bound of the settling time for stabilization is estimated by $t_3^* \le \frac{\lambda_{\max}(P)\,\|y(0)\|}{k_2\lambda_{\min}(P) - \tilde{a}mn}$.

Proof. Based on the formula (19) in the proof of Theorem 1, we have
$$y^T(t)PA\alpha(t) \le \tilde{a}nl\sum_{i=1}^{n} y_i^2(t) + \tilde{a}mn\sum_{i=1}^{n} |y_i(t)| = \tilde{a}nl\, y^T(t)y(t) + \tilde{a}mn\sum_{i=1}^{n} |y_i(t)|. \tag{35}$$
From (18) and (20), one obtains
$$\mathcal{L}_\delta V(t) \le y^T(t)(-2PD - 2k_1 P + 2\tilde{a}nlI)y(t) - \big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\sum_{i=1}^{n} |y_i(t)|. \tag{36}$$
By the condition (34), we have
$$\mathcal{L}_\delta V(t) \le -\big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\sum_{i=1}^{n} |y_i(t)| \le -\big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\lambda_{\max}^{-1/2}(P)\, V^{1/2}(t). \tag{37}$$
Then, from the special case (2) in Lemma 1, the finite-time stabilization of neural network (1) is ensured and the settling time is estimated by
$$t_3^* = \frac{2\,V^{1/2}(0)}{\big(2k_2\lambda_{\min}(P) - 2\tilde{a}mn\big)\lambda_{\max}^{-1/2}(P)} \le \frac{\lambda_{\max}(P)\,\|y(0)\|}{k_2\lambda_{\min}(P) - \tilde{a}mn}. \tag{38}$$
This completes the proof. □

Remark 8. In the proofs of Theorems 2 and 3, the two special cases of Lemma 1 are utilized, respectively, to ensure finite-time stabilization of the discussed neural network (1). Actually, from the conditions (26)–(27) in Theorem 2, we can see that, for a positive-definite matrix $P$, $2k_1 P > -2PD + \eta P > -2PD + \frac{2\tilde{a}nl}{\lambda_{\min}(P)}P > -2PD + 2\tilde{a}nlI$, which is exactly the condition (34) in Theorem 3. Hence, the special case (2) of Lemma 1 utilized in Theorem 3 reduces some conservatism relative to the special case (1) used in Theorem 2.

Corollary 3. For the given control gains $k_1$ and $k_2$, the controlled neural network (8) with (9) is finite-time stable, that is, the neural network (1) or (2) can be stabilized in finite time, if the condition (24) and the following inequality hold:
$$-D - k_1 I + \bar{a}nlI < 0. \tag{39}$$
Moreover, the upper bound of the settling time for stabilization is estimated by $t_3^* \le \frac{\|y(0)\|}{k_2 - \bar{a}mn}$.

4. Two numerical examples

Example 1. Consider the following second-order neural network:
$$\dot{x}(t) = -Dx(t) + Af(x(t)) + J, \tag{40}$$
where
$$D = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad A = \begin{pmatrix} 1 & -0.2 \\ 0.1 & 1 \end{pmatrix}, \qquad J = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
The discontinuous activation function is taken as $f(x) = x + 0.2\,\mathrm{sign}(x)$. Then, it is obvious that $\bar{a} = 1$, $l = 1$, $m = 0.4$, $n = 2$. The trajectories of NN (40) with the above-given parameters and initial value $x(0) = [-2, 3]^T$ are depicted in Fig. 1.

Fig. 1. Trajectories of NN (40) without any controller in Example 1.

The discontinuous stabilization controller is designed as
$$u(t) = -k_1 x(t) - k_2\,\mathrm{sign}(x(t)), \tag{41}$$

and the initial value is $x(0) = [-2, 3]^T$, so that $\|x(0)\| = 3.606$. Let $\eta = 4.5$. According to Corollary 2 or 3, the neural network (40) can be stabilized in finite time by choosing the control gains $k_1 = 0.8$ and $k_2 = 1$. The simulation result is depicted in Fig. 2, which shows the states $x_1(t)$ and $x_2(t)$ of the controlled NN (40) and confirms the effectiveness of our main results. Meanwhile, based on Corollary 2, we can obtain the upper bound of the settling time
$$t_2^* = \frac{2}{\eta - 2\bar{a}nl}\,\ln\frac{(\eta - 2\bar{a}nl)\,\|x(0)\| + 2k_2 - 2\bar{a}mn}{2k_2 - 2\bar{a}mn} = 6.8244.$$
Based on Corollary 3, the settling-time estimate is $t_3^* = \frac{\|x(0)\|}{k_2 - \bar{a}mn} = 18.03$.

Fig. 2. Trajectories of NN (40) under the controller (41) with $k_1 = 0.8$, $k_2 = 1$ in Example 1.
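The following self-contained Python sketch (our own reconstruction of the experiment, not part of the paper; the Euler step, the stopping threshold, and the time horizon are our choices, and the matrices are as reconstructed above) reproduces the behaviour reported for Example 1 and prints the settling-time bounds of Corollaries 2 and 3:

```python
import numpy as np

# Example 1 data: D, A as in (40), activation f(x) = x + 0.2*sign(x).
D = np.eye(2)
A = np.array([[1.0, -0.2], [0.1, 1.0]])
k1, k2, eta = 0.8, 1.0, 4.5
a_bar, l, m, n = 1.0, 1.0, 0.4, 2
x = np.array([-2.0, 3.0])                      # initial state, ||x(0)|| = 3.606

# Settling-time bounds from Corollaries 2 and 3.
t2 = 2 / (eta - 2 * a_bar * n * l) * np.log(
    1 + (eta - 2 * a_bar * n * l) * np.linalg.norm(x) / (2 * k2 - 2 * a_bar * m * n))
t3 = np.linalg.norm(x) / (k2 - a_bar * m * n)
print(f"t2* = {t2:.4f}, t3* = {t3:.2f}")       # 6.8244 and 18.03

# Explicit Euler simulation of the controlled network (40) with controller (41).
dt = 1e-4
for step in range(int(20 / dt)):
    f = x + 0.2 * np.sign(x)                   # discontinuous activation
    u = -k1 * x - k2 * np.sign(x)              # discontinuous controller (41)
    x = x + dt * (-D @ x + A @ f + u)
    if np.linalg.norm(x) < 1e-3:               # state driven to the origin
        print(f"stabilized at t = {step * dt:.3f} s")
        break
```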

Example 2. Consider again the NN (40), now with the following third-order parameters:
$$D = \begin{pmatrix} 1.7 & 0 & 0 \\ 0 & 1.4 & 0 \\ 0 & 0 & 1.5 \end{pmatrix}, \qquad A = \begin{pmatrix} 1 & -0.2 & 0.2 \\ 0.1 & 1 & 0.2 \\ 0.3 & 0.2 & 1 \end{pmatrix}, \qquad J = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
The discontinuous activation function is again taken as $f(x) = x + 0.2\,\mathrm{sign}(x)$. Then, it is obvious that $\bar{a} = 1$, $l = 1$, $m = 0.4$, $n = 3$. The trajectories of NN (40) with these parameters and initial value $x(0) = [1, -1, 3]^T$ are depicted in Fig. 3.

Fig. 3. Trajectories of NN (40) without any controller in Example 2.

Using the same discontinuous controller (41) with $k_1 = 0.5$ and $k_2 = 1.3$, we can stabilize the third-order NN (40) in finite time based on Corollary 2 or 3. Let $\eta = 3.6$; we obtain the upper bound of the settling time $t_2^* = \frac{2}{\eta - 2\bar{a}nl}\ln\frac{(\eta - 2\bar{a}nl)\|x(0)\| + 2k_2 - 2\bar{a}mn}{2k_2 - 2\bar{a}mn} = 7.9777$ by Corollary 2, and $t_3^* = \frac{\|x(0)\|}{k_2 - \bar{a}mn} = 33.166$ by Corollary 3. The simulation result is depicted in Fig. 4, which confirms the effectiveness of our main results.

Fig. 4. Trajectories of NN (40) under the controller (41) with $k_1 = 0.5$, $k_2 = 1.25$ in Example 2.

5. Conclusions

In this paper, we have investigated the finite-time stabilization problem for neural networks with discontinuous activations by constructing a discontinuous nonlinear controller. Based on the well-known finite-time stability theorem, sufficient criteria are derived for the states of the controlled system to be globally finite-time stable. Finally, two illustrative examples have been used to demonstrate the usefulness of the main results. It is expected that the theory established in this paper can be extended to delayed systems, which will be an interesting topic for our future research.

Acknowledgments

This research was partially supported by the National Natural Science Foundation of China under Grants No. 61272530,

11226146, 61304174, 61104145, and by the Natural Science Foundation of the Jiangsu Higher Education Institutions under Grant No. 12KJB120002. Also, the research of J.H. Park was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2013R1A1A2A10005201).

References

Aubin, J. P., & Cellina, A. (1984). Differential inclusions. Berlin: Springer.
Aubin, J. P., & Frankowska, H. (1990). Set-valued analysis. Boston: Birkhäuser.
Bhat, S., & Bernstein, D. (1997). Finite-time stability of homogeneous systems. In Proc. American control conference, June (pp. 2513–2514).
Bhat, S., & Bernstein, D. (2000). Finite-time stability of continuous autonomous systems. SIAM Journal on Control and Optimization, 38(3), 751–766.
Chen, G., Lewis, F. L., & Xie, L. H. (2011). Finite-time distributed consensus via binary control protocols. Automatica, 47(9), 1962–1968.
Clarke, F. H. (1983). Optimization and nonsmooth analysis. New York: Wiley.
Cortés, J. (2006). Finite-time convergent gradient flows with applications to network consensus. Automatica, 42(11), 1993–2000.
Filippov, A. F. (1988). Differential equations with discontinuous right-hand side. In Soviet series, Mathematics and its applications. Boston: Kluwer Academic Publishers.
Forti, M., & Nistri, P. (2003). Global convergence of neural networks with discontinuous neuron activations. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 50(11), 1421–1435.
Haimo, V. T. (1986). Finite time controllers. SIAM Journal on Control and Optimization, 24(4), 760–770.
Hardy, G. H., Littlewood, J. E., & Pólya, G. (1988). Inequalities. Cambridge: Cambridge University Press.
Hong, Y., Wang, J., & Cheng, D. (2006). Adaptive finite-time control of nonlinear systems with parametric uncertainty. IEEE Transactions on Automatic Control, 51(5), 858–862.
Hopfield, J. J., & Tank, D. W. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 79, 3088–3092.
Hopfield, J. J., & Tank, D. W. (1986). Computing with neural circuits: a model. Science, 233, 625–633.
Huang, X., Cao, J., & Ho, D. W. C. (2006). Existence and attractivity of almost periodic solution for recurrent neural networks with unbounded delays and variable coefficients. Nonlinear Dynamics, 45(3), 337–351.
Huang, H., Feng, G., & Cao, J. (2009). Exponential synchronization of chaotic Lur'e systems with delayed feedback control. Nonlinear Dynamics, 57(3), 441–453.
Hui, Q., Haddad, W. M., & Bhat, S. P. (2008). Finite-time semistability and consensus for nonlinear dynamical networks. IEEE Transactions on Automatic Control, 53(8), 1887–1900.
Khoo, S. Y., Xie, L. H., & Man, Z. H. (2009). Robust finite-time consensus tracking algorithm for multirobot systems. IEEE/ASME Transactions on Mechatronics, 14(2), 219–228.
Kwon, O. M., Lee, S. M., Park, J. H., & Cha, E. J. (2012). New approaches on stability criteria for neural networks with interval time-varying delays. Applied Mathematics and Computation, 218(19), 9953–9964.
Li, C., Liao, X., & Huang, T. (2007). Exponential stabilization of chaotic systems with delay by periodically intermittent control. Chaos, 17, 013103.
Liang, J., Wang, Z., & Liu, X. (2008). Exponential synchronization of stochastic delayed discrete-time complex networks. Nonlinear Dynamics, 53(1), 153–165.
Liu, X., & Cao, J. (2009). On periodic solutions of neural networks via differential inclusions. Neural Networks, 22(4), 329–334.


Liu, X., & Cao, J. (2011). Local synchronization of one-to-one coupled neural networks with discontinuous activations. Cognitive Neurodynamics, 5, 13–20.
Liu, X., Chen, T., Cao, J., & Lu, W. (2011). Dissipativity and quasi-synchronization for neural networks with discontinuous activations and parameter mismatches. Neural Networks, 24(10), 1013–1021.
Liu, Y., Wang, Z., & Liu, X. (2007). Stability criteria for periodic neural networks with discrete and distributed delays. Nonlinear Dynamics, 49(1), 93–103.
Liu, Y., Wang, Z., & Liu, X. (2008). On delay-dependent robust exponential stability of stochastic neural networks with mixed time delays and Markovian switching. Nonlinear Dynamics, 54(3), 199–212.
Lu, J., & Cao, J. (2008). Adaptive synchronization of uncertain dynamical networks with delayed coupling. Nonlinear Dynamics, 53(1), 107–115.
Lu, W., & Chen, T. (2006). Dynamical behaviors of delayed neural networks systems with discontinuous activation functions. Neural Computation, 18, 683–708.
Lu, W., & Chen, T. (2008). Almost periodic dynamics of a class of delayed neural networks with discontinuous activations. Neural Computation, 20, 1065–1090.
Lu, J., Ho, D. W. C., & Wang, Z. (2009). Pinning stabilization of linearly coupled stochastic neural networks via minimum number of controllers. IEEE Transactions on Neural Networks, 20(10), 1617–1629.
Lu, J., Ho, D. W. C., & Wu, L. (2009). Exponential stabilization of switched stochastic dynamical networks. Nonlinearity, 22, 889–911.

Ott, E., Grebogi, C., & Yorke, J. A. (1990). Controlling chaos. Physical Review Letters, 64, 1196–1199.
Park, M. J., Kwon, O. M., Lee, S. M., Park, J. H., & Cha, E. J. (2012). Synchronization criteria for coupled stochastic neural networks with time-varying delays and leakage delay. Journal of the Franklin Institute, 349(5), 1699–1720.
Shen, J., & Cao, J. (2011). Finite-time synchronization of coupled neural networks via discontinuous controllers. Cognitive Neurodynamics, 5(4), 373–385.
Sontag, E. D. (1989). A ``universal'' construction of Artstein's theorem on nonlinear stabilization. Systems & Control Letters, 13, 117–123.
Wang, G., Cao, J., & Liang, J. (2009). Exponential stability in the mean square for stochastic neural networks with mixed time-delays and Markovian jumping parameters. Nonlinear Dynamics, 57(1), 209–218.
Wang, L., & Xiao, F. (2010). Finite-time consensus problems for networks of dynamic agents. IEEE Transactions on Automatic Control, 55(4), 950–955.
Wu, Z., Park, J. H., Su, H., & Chu, J. (2012). New results on exponential passivity of neural networks with time-varying delays. Nonlinear Analysis: Real World Applications, 13(4), 1593–1599.
Xiao, F., Wang, L., Chen, J., & Gao, Y. P. (2009). Finite-time formation control for multi-agent systems. Automatica, 45(11), 2605–2611.
Yu, S., Ma, Z., Zhu, Q., & Wu, D. (2006). Nonsmooth finite-time control of uncertain affine planar systems. In Proceedings of the 6th world congress on intelligent control and automation (pp. 21–23).
