
Mode and Delay-Dependent Adaptive Exponential Synchronization in pth Moment for Stochastic Delayed Neural Networks with Markovian Switching

Wuneng Zhou, Dongbing Tong, Yan Gao, Chuan Ji, and Hongye Su

Manuscript received April 6, 2011; accepted August 26, 2011. Date of publication January 24, 2012; date of current version March 6, 2012. This work was supported in part by the National Natural Science Foundation of China under Grant 61075060, the Innovation Program of Shanghai Municipal Education Commission (12zz064), the National 863 Key Program of China under Grant 2008AA042902, and the Innovation Fund of Donghua University for Doctoral Candidates (BC201136). W. Zhou is with the College of Information Sciences and Technology, Donghua University, Shanghai 200051, China, and also with the Engineering Research Center of Digitized Textile and Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China (e-mail: [email protected]). D. Tong, Y. Gao, and C. Ji are with the College of Information Sciences and Technology, Donghua University, Shanghai 200051, China (e-mail: [email protected]; [email protected]; [email protected]). H. Su is with the National Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Zhejiang 310027, China (e-mail: [email protected]). Digital Object Identifier 10.1109/TNNLS.2011.2179556

Abstract— In this brief, the problem of mode and delay-dependent adaptive exponential synchronization in pth moment is considered for stochastic delayed neural networks with Markovian switching. By utilizing a new nonnegative function and the M-matrix approach, several sufficient conditions ensuring mode and delay-dependent adaptive exponential synchronization in pth moment for stochastic delayed neural networks are derived. Via adaptive feedback control techniques, suitable parameter update laws are found. Finally, a numerical example is provided to illustrate the effectiveness of the M-matrix-based synchronization conditions derived in this brief.

Index Terms— Adaptive exponential synchronization in pth moment, Markovian switching, neural networks, stochastic noise, time-varying delays.

I. INTRODUCTION

In reality, time-delay systems are frequently encountered in many areas, and a time delay is often a source of instability and oscillation. For neural networks with time delays, various sufficient conditions have been proposed to guarantee global asymptotic or exponential stability in the recent literature; see [1]–[4] and the references therein, in which many methods, such as the linear matrix inequality approach, have been exploited. Meanwhile, many neural networks may experience abrupt changes in their structure and parameters caused by phenomena such as component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances. In this situation, the neural network has finitely many modes, and the modes may switch (or jump) from one to another at different times. Such systems have been widely studied; see [5]–[8] and the references therein.

As is well known, synchronization of neural networks means that the states of the drive system and the response system reach accordance; that is, the state of the error system between the drive and response systems tends to zero as time approaches infinity. In particular, adaptive synchronization is a synchronization scheme in which the parameters of the drive system are estimated and the synchronization control law is updated in real time as the neural network evolves. The synchronization problem for neural networks has been extensively investigated over the last decade because of its successful applications in many areas, such as signal processing, combinatorial optimization, and communication. Moreover, adaptive synchronization has been used in real neural network control, such as parameter-estimation adaptive control and model-reference adaptive control.

In the past decade, much attention has been devoted to adaptive synchronization of neural networks (see [9]–[13] and the references therein). In [9], the adaptive lag synchronization of unknown chaotic delayed neural networks with noise perturbation is considered, and suitable parameter update laws together with several sufficient conditions ensuring lag synchronization with or without noise perturbation are derived. In [10], an adaptive feedback controller is designed to achieve complete synchronization of unidirectionally coupled delayed neural networks with stochastic perturbation, and the globally almost-sure asymptotic stability of the error dynamical system is investigated via a LaSalle-type invariance principle. In [11], an adaptive synchronization condition under almost every initial datum is derived for stochastic neural networks with time-varying and distributed delays. In [12], lag synchronization of coupled chaotic delayed neural networks is investigated; using adaptive control with a linear feedback update law, simple yet generic criteria for lag synchronization are derived based on the invariance principle of functional differential equations. In [13], Lu et al. investigated globally exponential synchronization for linearly coupled neural networks with time-varying delay and impulsive disturbances; by means of an impulsive delay differential inequality, a sufficient condition for globally exponential synchronization is derived.

It should be pointed out that, up to now, the problem of adaptive exponential synchronization in pth moment for stochastic delayed neural networks with Markovian switching has received very little research attention. In this brief, we are concerned with the mode and delay-dependent adaptive exponential synchronization of neural networks with stochastic delays and Markovian switching parameters by employing the M-matrix approach.

The main purpose of this brief is to establish M-matrix-based stability criteria for testing whether stochastic delayed neural networks are exponentially synchronized in pth moment. A simple example is used to illustrate the usefulness of the derived M-matrix-based synchronization conditions.

II. PROBLEM FORMULATION AND PRELIMINARIES

Throughout this brief, E stands for the mathematical expectation operator, |x| denotes the Euclidean vector norm defined by $|x|^2 = \sum_{i=1}^{n} x_i^2$, T represents the transpose of a matrix or a vector, and I_n is the n × n identity matrix. Let {r(t)}_{t≥0} be a right-continuous Markovian chain on the probability space taking values in a finite state set S = {1, 2, …, N} with generator Γ = (γ_ij)_{N×N} given by

$$P\{r(t+\delta) = j \mid r(t) = i\} = \begin{cases} \gamma_{ij}\,\delta + o(\delta), & i \neq j \\ 1 + \gamma_{ii}\,\delta + o(\delta), & i = j \end{cases}$$

where δ > 0 and γ_ij ≥ 0 is the transition rate from i to j if i ≠ j, while $\gamma_{ii} = -\sum_{j \neq i} \gamma_{ij}$.
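As a side illustration of this switching model, the chain can be sampled directly from the generator: the process holds in state i for an exponential time with rate −γ_ii and then jumps to j ≠ i with probability γ_ij/(−γ_ii). The following Python sketch is not part of the brief, and the two-state generator used is only a placeholder.

```python
import numpy as np

def simulate_chain(Gamma, r0, T, rng=np.random.default_rng(0)):
    """Sample a right-continuous Markov chain path on [0, T] from generator Gamma."""
    times, states = [0.0], [r0]
    t, i = 0.0, r0
    while t < T:
        rate = -Gamma[i, i]               # total jump rate out of state i
        if rate <= 0:                     # absorbing state: no further jumps
            break
        t += rng.exponential(1.0 / rate)  # exponential holding time
        probs = Gamma[i].copy()
        probs[i] = 0.0
        probs = probs / rate              # jump distribution gamma_ij / (-gamma_ii)
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        states.append(i)
    return np.array(times), np.array(states)

# Hypothetical two-state generator (each row sums to zero)
Gamma = np.array([[-0.12, 0.12],
                  [ 0.11, -0.11]])
jump_times, modes = simulate_chain(Gamma, r0=0, T=50.0)
```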

We denote r(0) = r₀. In this brief, we consider the neural network, called the drive system, represented in the following compact form:

$$dx(t) = \big[-C(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))f(x(t-\tau(t))) + D(r(t))\big]\,dt \tag{1}$$

where t ≥ 0 (or t ∈ R₊, the set of all nonnegative real numbers) is the time variable, x(t) = (x_1(t), x_2(t), …, x_n(t))^T ∈ R^n is the state vector associated with n neurons, f(x(t)) = (f_1(x_1(t)), f_2(x_2(t)), …, f_n(x_n(t)))^T ∈ R^n denotes the activation function of the neurons, and τ(t) is the transmission delay satisfying 0 < τ(t) ≤ τ̄ and τ̇(t) ≤ τ̂ < 1, where τ̄, τ̂ are constants. For convenience, for t ≥ 0, we denote r(t) = i and write A(r(t)) = A^i, B(r(t)) = B^i, C(r(t)) = C^i, and D(r(t)) = D^i. In the drive system (1), furthermore, for every i ∈ S, C^i = diag{c_1^i, c_2^i, …, c_n^i} has positive but unknown entries c_k^i > 0, A^i = (a_jk^i)_{n×n} and B^i = (b_jk^i)_{n×n} are the connection weight and delayed connection weight matrices, respectively, both unknown, and D^i = (d_1^i, d_2^i, …, d_n^i)^T ∈ R^n is the constant external input vector.

For the drive system (1), a response system is constructed as follows:

$$\begin{aligned} dy(t) = {} & \big[-\hat{C}(r(t))y(t) + \hat{A}(r(t))f(y(t)) + \hat{B}(r(t))f(y(t-\tau(t))) + D(r(t)) + U(t)\big]\,dt \\ & + \sigma\big(t, r(t), y(t)-x(t), y(t-\tau(t))-x(t-\tau(t))\big)\,d\omega(t) \end{aligned} \tag{2}$$

where y(t) is the state vector of the response system (2), and Ĉ^i = diag{ĉ_1^i, ĉ_2^i, …, ĉ_n^i}, Â^i = (â_jk^i)_{n×n}, and B̂^i = (b̂_jk^i)_{n×n} are the estimates of the unknown matrices C^i, A^i, and B^i, respectively.
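To make the roles of the terms in (1) concrete, the following sketch performs one forward-Euler step of the drive system, reading the delayed state from a history buffer. The constant-delay simplification, the step size, and all names here are illustrative assumptions, not from the brief.

```python
import numpy as np

def drive_step(x_hist, k, dt, tau_steps, C, A, B, D, f=np.tanh):
    """One Euler step of dx = [-C x + A f(x) + B f(x_tau) + D] dt for the active mode.

    x_hist    : (T, n) array of past states, x_hist[k] = x(t_k)
    tau_steps : delay expressed as an integer number of steps (constant delay)
    C, A, B, D: parameters of the currently active Markovian mode
    """
    x = x_hist[k]
    x_tau = x_hist[max(k - tau_steps, 0)]          # delayed state from the buffer
    dx = -C @ x + A @ f(x) + B @ f(x_tau) + D
    return x + dt * dx
```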


U(t) = (u_1(t), u_2(t), …, u_n(t))^T ∈ R^n is a control input vector of the form

$$U(t) = K(t)\big(y(t) - x(t)\big) = \mathrm{diag}\{k_1(t), k_2(t), \ldots, k_n(t)\}\big(y(t) - x(t)\big) \tag{3}$$

and ω(t) = (ω_1(t), ω_2(t), …, ω_n(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0} (i.e., F_t = σ{ω(s) : 0 ≤ s ≤ t} is a σ-algebra), independent of the Markovian process {r(t)}_{t≥0}. Here σ : R₊ × S × R^n × R^n → R^{n×n} is the noise intensity matrix, which can be regarded as resulting from external random fluctuations and other probabilistic causes.

Let e(t) = y(t) − x(t). For simplicity, we write e(t − τ(t)) = e_τ(t) and f(x(t) + e(t)) − f(x(t)) = g(e(t)). From the drive system (1) and the response system (2), the error system can be represented as follows:

$$\begin{aligned} de(t) = {} & \big[-\tilde{C}(r(t))y(t) - C(r(t))e(t) + \tilde{A}(r(t))f(y(t)) + A(r(t))g(e(t)) + \tilde{B}(r(t))f(y_\tau(t)) \\ & + B(r(t))g(e_\tau(t)) + U(t)\big]\,dt + \sigma\big(t, r(t), e(t), e_\tau(t)\big)\,d\omega(t) \end{aligned} \tag{4}$$

where C̃(r(t)) = Ĉ(r(t)) − C(r(t)), Ã(r(t)) = Â(r(t)) − A(r(t)), and B̃(r(t)) = B̂(r(t)) − B(r(t)). Denote c̃_j^i = ĉ_j^i − c_j^i, ã_jk^i = â_jk^i − a_jk^i, and b̃_jk^i = b̂_jk^i − b_jk^i; then C̃^i = diag{c̃_1^i, c̃_2^i, …, c̃_n^i}, Ã^i = (ã_jk^i)_{n×n}, and B̃^i = (b̃_jk^i)_{n×n}.

The initial condition associated with (4) is given by e(s) = ξ(s), s ∈ [−τ̄, 0], for any ξ(s) ∈ L²_{F₀}([−τ̄, 0]; R^n), where L²_{F₀}([−τ̄, 0]; R^n) is the family of all F₀-measurable C([−τ̄, 0]; R^n)-valued random variables satisfying sup_{−τ̄≤s≤0} E|ξ(s)|² < ∞, and C([−τ̄, 0]; R^n) denotes the family of all continuous R^n-valued functions ξ(s) on [−τ̄, 0] with the norm ‖ξ(s)‖ = sup_{−τ̄≤s≤0} |ξ(s)|.

To obtain the main result, we need the following assumptions.

Assumption 1: The activation function f(x(t)) satisfies the Lipschitz condition; that is, there exists a constant L > 0 such that |f(u) − f(v)| ≤ L|u − v| for all u, v ∈ R^n.

Assumption 2: The noise intensity matrix σ(·, ·, ·, ·) satisfies the linear growth condition; that is, there exist two positive constants H₁ and H₂ such that trace(σ^T(t, r(t), u(t), v(t)) σ(t, r(t), u(t), v(t))) ≤ H₁|u(t)|² + H₂|v(t)|² for all (t, r(t), u(t), v(t)) ∈ R₊ × S × R^n × R^n.

Remark 1: Under Assumptions 1 and 2, the error system (4) admits an equilibrium point (or trivial solution) e(t, ξ(s)), t ≥ 0.

The following stability and synchronization concepts are needed in this brief.

Definition 1: The trivial solution e(t, ξ(s)) of the error system (4) is said to be exponentially stable in pth moment if

$$\limsup_{t \to \infty} \frac{1}{t} \log E|e(t, \xi(s))|^p < 0$$

for any ξ(s) ∈ L^p_{F₀}([−τ̄, 0]; R^n), where p ≥ 2, p ∈ Z. When p = 2, it is said to be exponentially stable in mean square. The drive system (1) and the response system (2) are said to be exponentially synchronized in pth moment if the error system (4) is exponentially stable in pth moment.

The main purpose of the rest of this brief is to establish a criterion for adaptive exponential synchronization in pth moment of the drive system (1) and the response system (2) by using adaptive feedback control and M-matrix techniques. To this end, we introduce some concepts and lemmas that will be used in the proofs of our main results.

Definition 2 [14]: A square matrix M = (m_ij)_{n×n} is called a nonsingular M-matrix if M can be expressed in the form M = sI_n − G with some G ≥ 0 (i.e., each element of G is nonnegative) and s > ρ(G), where ρ(G) is the spectral radius of G.

Lemma 1 [8]: If M = (m_ij)_{n×n} ∈ R^{n×n} with m_ij < 0 (i ≠ j), then the following statements are equivalent.
1) M is a nonsingular M-matrix.
2) Every real eigenvalue of M is positive.
3) M is positive stable; that is, M⁻¹ exists and M⁻¹ > 0 (i.e., M⁻¹ ≥ 0 and at least one element of M⁻¹ is positive).

Lemma 2 (Young Inequality): Let a, b ∈ R and β ∈ [0, 1]. Then |a|^β |b|^{1−β} ≤ β|a| + (1 − β)|b|.

Lemma 3 [2]: Let x ∈ R^n and y ∈ R^n. Then x^T y + y^T x ≤ εx^T x + ε⁻¹y^T y for any ε > 0.

Lemma 4 (Gronwall Inequality) [15]: Let T > 0 and u(·) be a Borel measurable bounded nonnegative function on [0, T]. If

$$u(t) \le c + v \int_0^t u(s)\,ds, \quad \forall\, 0 \le t \le T$$

for some constants c, v, then u(t) ≤ c exp(vt) for all 0 ≤ t ≤ T.
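Definition 2 and Lemma 1 suggest a direct numerical test: with s taken as the largest diagonal entry of M, the matrix G = sI − M must be entrywise nonnegative with ρ(G) < s. A minimal sketch of this check, assuming NumPy (not part of the brief):

```python
import numpy as np

def is_nonsingular_M_matrix(M, tol=1e-12):
    """Check Definition 2: M = s*I - G with G >= 0 elementwise and s > rho(G)."""
    M = np.asarray(M, dtype=float)
    s = M.diagonal().max()                # smallest s keeping diag(G) = s - m_ii >= 0
    G = s * np.eye(M.shape[0]) - M
    if (G < -tol).any():                  # requires nonpositive off-diagonals of M
        return False
    rho = max(abs(np.linalg.eigvals(G)))  # spectral radius of G
    return rho < s - tol
```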

Consider an n-dimensional stochastic delayed differential equation (SDDE) with Markovian switching

$$dx(t) = f(t, r(t), x(t), x_\tau(t))\,dt + g(t, r(t), x(t), x_\tau(t))\,d\omega(t) \tag{5}$$

on t ∈ [0, ∞) with the initial data {x(θ) : −τ̄ ≤ θ ≤ 0} = ξ(θ) ∈ L^p_{F₀}([−τ̄, 0]; R^n).
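For simulation purposes, (5) is typically discretized by the Euler-Maruyama scheme with the switching state held constant over each step. A hedged sketch follows; the drift f, diffusion g, and step size are placeholders to be supplied by the user.

```python
import numpy as np

def em_step(x_hist, k, r_k, dt, tau_steps, f, g, rng):
    """One Euler-Maruyama step of dx = f(t,r,x,x_tau) dt + g(t,r,x,x_tau) dw."""
    t = k * dt
    x = x_hist[k]
    x_tau = x_hist[max(k - tau_steps, 0)]            # delayed state
    dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increment
    return x + f(t, r_k, x, x_tau) * dt + g(t, r_k, x, x_tau) @ dw
```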


For V ∈ C^{2,1}(R₊ × S × R^n; R₊), define an operator L from R₊ × S × R^n × R^n to R by

$$\begin{aligned} \mathcal{L}V(t, i, x(t), x_\tau(t)) = {} & V_t(t, i, x(t)) + V_x(t, i, x(t)) f(t, i, x(t), x_\tau(t)) \\ & + \frac{1}{2}\,\mathrm{trace}\big[g^T(t, i, x(t), x_\tau(t))\, V_{xx}(t, i, x(t))\, g(t, i, x(t), x_\tau(t))\big] + \sum_{j=1}^{N} \gamma_{ij} V(t, j, x(t)) \end{aligned}$$

where

$$V_t(t, i, x(t)) = \frac{\partial V(t, i, x(t))}{\partial t}, \quad V_x(t, i, x(t)) = \left(\frac{\partial V(t, i, x(t))}{\partial x_1}, \ldots, \frac{\partial V(t, i, x(t))}{\partial x_n}\right), \quad V_{xx}(t, i, x(t)) = \left(\frac{\partial^2 V(t, i, x(t))}{\partial x_j \partial x_k}\right)_{n \times n}.$$

For the SDDE with Markovian switching, we have the following Dynkin formula.

Lemma 5 (Dynkin Formula) [8], [15]: Let V ∈ C^{2,1}(R₊ × S × R^n; R₊) and let τ₁, τ₂ be bounded stopping times such that 0 ≤ τ₁ ≤ τ₂ a.s. If V(t, i, x(t)) and LV(t, i, x(t), x_τ(t)) are bounded on t ∈ [τ₁, τ₂] with probability 1, then

$$E V(\tau_2, r(\tau_2), x(\tau_2)) = E V(\tau_1, r(\tau_1), x(\tau_1)) + E \int_{\tau_1}^{\tau_2} \mathcal{L}V(s, r(s), x(s), x_\tau(s))\,ds.$$

III. MAIN RESULTS

In this section, we give a criterion of adaptive exponential synchronization in pth moment for the drive system (1) and the response system (2). First, we establish a general result that can be applied widely.

Theorem 1: Assume that there exist a function V(t, i, x) ∈ C^{2,1}(R₊ × S × R^n; R₊) and positive constants p, c₁, λ₁, and λ₂ such that

$$\lambda_2 < \lambda_1 (1 - \hat{\tau}) \tag{6}$$

$$c_1 |x|^p \le V(t, i, x) \tag{7}$$

and

$$\mathcal{L}V(t, i, x, x_\tau) \le -\lambda_1 |x|^p + \lambda_2 |x_\tau|^p \tag{8}$$

for all t ≥ 0, i ∈ S, and x ∈ R^n (x = x(t) for short). Then the SDDE (5) is exponentially stable in pth moment.

Proof: For the function V(t, i, x), applying Lemma 5 and using the above conditions, we obtain

$$c_1 E|x|^p \le E V(0, r_0, \xi(0)) + E \int_0^t \mathcal{L}V(s, r(s), x(s), x_\tau(s))\,ds \le E V(0, r_0, \xi(0)) + E \int_0^t \big(-\lambda_1 |x|^p + \lambda_2 |x_\tau|^p\big)\,ds.$$

For $\int_0^t |x_\tau|^p\,ds$, let u = s − τ(s); then du = (1 − τ̇(s)) ds and

$$\int_0^t |x_\tau|^p\,ds = \int_{-\tau(0)}^{t-\tau(t)} \frac{1}{1 - \dot{\tau}(s)} |x(u)|^p\,du \le \frac{1}{1 - \hat{\tau}} \int_{-\bar{\tau}}^t |x(s)|^p\,ds \le \frac{\bar{\tau}}{1 - \hat{\tau}} \max_{-\bar{\tau} \le s \le 0} |\xi(s)|^p + \frac{1}{1 - \hat{\tau}} \int_0^t |x(s)|^p\,ds.$$

So

$$E|x|^p \le c + \int_0^t v\, E|x|^p\,ds$$

where

$$c = c_1^{-1} \left[ E V(0, r_0, \xi(0)) + \frac{\lambda_2 \bar{\tau}}{1 - \hat{\tau}} \max_{-\bar{\tau} \le s \le 0} E|\xi(s)|^p \right], \quad v = \frac{-\lambda_1 (1 - \hat{\tau}) + \lambda_2}{c_1 (1 - \hat{\tau})}.$$

It can be seen that c and v are constants with c > 0 and, by (6), v < 0. By Gronwall's inequality (Lemma 4), E|x|^p ≤ c exp(vt). Therefore

$$\limsup_{t \to \infty} \frac{1}{t} \log E|x(t, \xi)|^p \le v < 0.$$

Thus the SDDE (5) is exponentially stable in pth moment. This completes the proof.
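The criterion of Theorem 1 can also be probed empirically: simulate many sample paths (e.g., with an Euler-Maruyama step as sketched above), form the Monte Carlo average of |x(t)|^p, and inspect the asymptotic slope of (1/t) log E|x(t)|^p. The following sketch is an illustration only; a negative fitted slope is merely consistent with, not a proof of, exponential stability in pth moment.

```python
import numpy as np

def moment_lyapunov_estimate(paths, dt, p=2, burn_in=0.5):
    """Estimate the slope of log E|x(t)|^p over the tail of the time horizon.

    paths : (n_paths, n_steps, n) array of simulated sample paths
    """
    norms_p = np.linalg.norm(paths, axis=2) ** p   # |x(t)|^p along each path
    moment = norms_p.mean(axis=0)                  # Monte Carlo estimate of E|x(t)|^p
    t = np.arange(1, paths.shape[1] + 1) * dt
    tail = t >= burn_in * t[-1]                    # discard the transient
    slope = np.polyfit(t[tail], np.log(moment[tail] + 1e-300), 1)[0]
    return slope                                   # negative suggests pth-moment stability
```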

Now we are in a position to set up a criterion of adaptive exponential synchronization in pth moment for the drive system (1) and the response system (2).

Theorem 2: Assume that M := −diag{η, η, …, η} − Γ (with N copies of η) is a nonsingular M-matrix, where

$$\eta = \frac{1}{2} p \big[ -2\gamma + \alpha + L^2 + \beta + (p-1) H_1 \big] + \frac{1}{2} (p-2) \big[ L^2 + (p-1) H_2 \big]$$

$$\gamma = \min_{i \in S} \min_{1 \le j \le n} c_j^i, \quad \alpha = \max_{i \in S} \rho\big(A^i\big)^2, \quad \beta = \max_{i \in S} \rho\big(B^i\big)^2, \quad p \ge 2.$$

Let m > 0 and m⃗ = (m, m, …, m)^T ∈ R^N; in this case, (q₁, q₂, …, q_N)^T := M⁻¹m⃗ ≫ 0, i.e., all elements of M⁻¹m⃗ are positive, by Lemma 1. Assume also that

$$\big[ L^2 + (p-1) H_2 \big] \bar{q} < -\left( \eta q_i + \sum_{k=1}^{N} \gamma_{ik} q_k \right) (1 - \hat{\tau}), \quad \forall\, i \in S \tag{9}$$

where q̄ = max_{i∈S} q_i.

Under Assumptions 1 and 2, the noise-perturbed response system (2) can be adaptively exponentially synchronized in pth moment with the drive neural network (1) if the feedback gain K(t) of the controller (3) and the parameter update laws are chosen as follows.


Specifically, the feedback gain K(t) of the controller (3) obeys the update law

$$\dot{k}_j = -\frac{1}{2} \alpha_j\, p\, q_i |e|^{p-2} e_j^2 \tag{10}$$

and the parameter update laws of the matrices Ĉ^i, Â^i, and B̂^i are chosen as

$$\dot{\hat{c}}_j^i = \frac{\gamma_j}{2}\, p\, q_i |e|^{p-2} e_j y_j, \quad \dot{\hat{a}}_{jl}^i = -\frac{\alpha_{jl}}{2}\, p\, q_i |e|^{p-2} e_j f_l, \quad \dot{\hat{b}}_{jl}^i = -\frac{\beta_{jl}}{2}\, p\, q_i |e|^{p-2} e_j (f_l)_\tau \tag{11}$$

where α_j > 0, γ_j > 0, α_jl > 0, and β_jl > 0 (j, l = 1, 2, …, n) are arbitrary constants.
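Before turning to the proof, note that (10) and (11) are ordinary gradient-like differential equations driven by the synchronization error, so they can be integrated alongside the states. The sketch below performs one Euler step of these laws; using a single scalar gain for every α_j, γ_j, α_jl, and β_jl is an illustrative simplification, not part of the brief.

```python
import numpy as np

def adaptive_update(K, C_hat, A_hat, B_hat, e, y, f_y, f_y_tau,
                    q_i, p, dt, alpha=1.0, gamma=1.0):
    """One Euler step of the update laws (10)-(11) for the active mode i.

    K            : vector of diagonal feedback gains k_j
    C_hat        : diagonal estimate of C^i; A_hat, B_hat : estimates of A^i, B^i
    e, y         : current error and response states
    f_y, f_y_tau : f(y(t)) and f(y(t - tau(t)))
    q_i          : component of q = M^{-1} m for the active mode; p >= 2
    """
    w = np.linalg.norm(e) ** (p - 2)
    K = K + dt * (-0.5 * alpha * p * q_i * w * e**2)                 # law (10)
    C_hat = C_hat + dt * np.diag(0.5 * gamma * p * q_i * w * e * y)  # c-hat law in (11)
    A_hat = A_hat + dt * (-0.5 * alpha * p * q_i * w * np.outer(e, f_y))      # a-hat law
    B_hat = B_hat + dt * (-0.5 * alpha * p * q_i * w * np.outer(e, f_y_tau))  # b-hat law
    return K, C_hat, A_hat, B_hat
```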

Proof: For each i ∈ S, choose a nonnegative function as follows:

$$V(t, i, e) = q_i |e|^p + \sum_{j=1}^{n} \left[ \frac{1}{\alpha_j} k_j^2 + \frac{1}{\gamma_j} \big(\tilde{c}_j^i\big)^2 + \sum_{l=1}^{n} \frac{1}{\alpha_{jl}} \big(\tilde{a}_{jl}^i\big)^2 + \sum_{l=1}^{n} \frac{1}{\beta_{jl}} \big(\tilde{b}_{jl}^i\big)^2 \right].$$

Clearly V(t, i, x) obeys (7) with c₁ = min_{i∈S} q_i. Computing LV(t, i, e, e_τ) along the trajectory of the error system (4), and using (10) and (11), one can obtain

$$\begin{aligned} \mathcal{L}V(t, i, e, e_\tau) = {} & V_t(t, i, e) + V_e(t, i, e)\big[-\tilde{C}^i y - C^i e + \tilde{A}^i f(y) + A^i g(e) + \tilde{B}^i f(y_\tau) + B^i g(e_\tau) + U(t)\big] \\ & + \frac{1}{2}\,\mathrm{trace}\big[\sigma^T(t, i, e, e_\tau)\, V_{ee}(t, i, e)\, \sigma(t, i, e, e_\tau)\big] + \sum_{k=1}^{N} \gamma_{ik} V(t, k, e) \\ = {} & 2 \sum_{j=1}^{n} \left[ \frac{1}{\alpha_j} k_j \dot{k}_j + \frac{1}{\gamma_j} \tilde{c}_j^i \dot{\tilde{c}}_j^i \right] + 2 \sum_{j=1}^{n} \sum_{l=1}^{n} \left[ \frac{1}{\alpha_{jl}} \tilde{a}_{jl}^i \dot{\tilde{a}}_{jl}^i + \frac{1}{\beta_{jl}} \tilde{b}_{jl}^i \dot{\tilde{b}}_{jl}^i \right] \\ & + p q_i |e|^{p-2} e^T \big[-\tilde{C}^i y - C^i e + \tilde{A}^i f(y) + A^i g(e) + \tilde{B}^i f(y_\tau) + B^i g(e_\tau) + U(t)\big] \\ & + \frac{1}{2}\,\mathrm{trace}\big[\sigma^T(t, i, e, e_\tau)\, p(p-1) q_i |e|^{p-2}\, \sigma(t, i, e, e_\tau)\big] + \sum_{k=1}^{N} \gamma_{ik} q_k |e|^p \\ = {} & p q_i |e|^{p-2} e^T \big[-C^i e + A^i g(e) + B^i g(e_\tau)\big] \\ & + \frac{1}{2}\,\mathrm{trace}\big[\sigma^T(t, i, e, e_\tau)\, p(p-1) q_i |e|^{p-2}\, \sigma(t, i, e, e_\tau)\big] + \sum_{k=1}^{N} \gamma_{ik} q_k |e|^p. \end{aligned} \tag{12}$$

Now, using Assumptions 1 and 2 together with Lemma 3 yields

$$-e^T C^i e \le -\gamma |e|^2 \tag{13}$$

$$e^T A^i g(e) \le \frac{1}{2} e^T A^i \big(A^i\big)^T e + \frac{1}{2} g^T(e)\, g(e) \le \frac{1}{2} \big(\alpha + L^2\big) |e|^2 \tag{14}$$

$$e^T B^i g(e_\tau) \le \frac{1}{2} e^T B^i \big(B^i\big)^T e + \frac{1}{2} g^T(e_\tau)\, g(e_\tau) \le \frac{1}{2} \big(\beta |e|^2 + L^2 |e_\tau|^2\big) \tag{15}$$

and

$$\frac{1}{2}\,\mathrm{trace}\big[\sigma^T(t, i, e, e_\tau)\, p(p-1) q_i |e|^{p-2}\, \sigma(t, i, e, e_\tau)\big] \le \frac{1}{2} p(p-1) q_i |e|^{p-2} \big(H_1 |e|^2 + H_2 |e_\tau|^2\big). \tag{16}$$

On the other hand, making use of the Young inequality (Lemma 2), we have

$$|e|^{p-2} |e_\tau|^2 \le \frac{p-2}{p} |e|^p + \frac{2}{p} |e_\tau|^p. \tag{17}$$

Substituting (13)–(17) into (12) yields

$$\begin{aligned} \mathcal{L}V(t, i, e, e_\tau) \le {} & \left\{ \frac{1}{2} p \big[ -2\gamma + \alpha + L^2 + \beta + (p-1) H_1 \big] q_i + \sum_{k=1}^{N} \gamma_{ik} q_k \right\} |e|^p + \frac{1}{2} p \big[ L^2 + (p-1) H_2 \big] q_i |e|^{p-2} |e_\tau|^2 \\ \le {} & \left( \eta q_i + \sum_{k=1}^{N} \gamma_{ik} q_k \right) |e|^p + \big[ L^2 + (p-1) H_2 \big] q_i |e_\tau|^p \\ \le {} & -m |e|^p + \big[ L^2 + (p-1) H_2 \big] \bar{q}\, |e_\tau|^p. \end{aligned} \tag{18}$$

Let λ₁ = m and λ₂ = [L² + (p−1)H₂]q̄. Then (8) and (6) hold. By Theorem 1, the error system (4) is exponentially stable in pth moment, and hence the noise-perturbed response system (2) can be adaptively exponentially synchronized in pth moment with the neural network (1). This completes the proof.

Remark 2: In Theorem 2, condition (9) for adaptive exponential synchronization of neural networks with Markovian switching, obtained by the M-matrix approach, is mode-dependent and quite different from conditions obtained by, for example, the linear matrix inequality method. The condition can be checked once the drive system and the response system are given and the positive constant m is chosen. To the best of our knowledge, the method combining a Lyapunov function with the M-matrix approach in this brief is rarely used in the research on adaptive exponential synchronization in pth moment for stochastic neural networks with Markovian switching.
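Remark 2's checkability claim can be made concrete: given η, the generator Γ, and a chosen m > 0, one forms M, solves for q = M⁻¹m⃗, and tests (9) componentwise. A sketch of such a check (all inputs are placeholders to be supplied by the user, not values from the brief):

```python
import numpy as np

def check_condition_9(eta, Gamma, m, L, p, H2, tau_hat):
    """Verify q = M^{-1} m >> 0 and inequality (9) of Theorem 2."""
    N = Gamma.shape[0]
    M = -eta * np.eye(N) - Gamma
    q = np.linalg.solve(M, np.full(N, float(m)))
    if (q <= 0).any():                            # Lemma 1: q must be entrywise positive
        return False
    lhs = (L**2 + (p - 1) * H2) * q.max()         # left-hand side with q-bar
    rhs = -(eta * q + Gamma @ q) * (1 - tau_hat)  # right-hand side, one entry per mode
    return bool((lhs < rhs).all())
```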


Now we are in a position to consider two special cases of the drive system (1) and the response system (2).

Case 1: The Markovian jumping parameters are removed from the neural networks; that is, N = 1. For this case, one can obtain the following result, analogous to Theorem 2.

Corollary 3: Assume that η < 0 and L² + (p−1)H₂ < −η(1 − τ̂), where

$$\eta = \frac{1}{2} p \big[ -2\gamma + \alpha + L^2 + \beta + (p-1) H_1 \big] + \frac{1}{2} (p-2) \big[ L^2 + (p-1) H_2 \big].$$

Under Assumptions 1 and 2, the noise-perturbed response system can be adaptively exponentially synchronized in pth moment with the drive neural network if the feedback gain K(t) of the controller (3) obeys the update law

$$\dot{k}_j = -\frac{1}{2} \alpha_j\, p |e|^{p-2} e_j^2 \tag{19}$$

and the update laws of the parameters of the matrices Ĉ, Â, and B̂ are chosen as

$$\dot{\hat{c}}_j = \frac{\gamma_j}{2}\, p |e|^{p-2} e_j y_j, \quad \dot{\hat{a}}_{jl} = -\frac{\alpha_{jl}}{2}\, p |e|^{p-2} e_j f_l, \quad \dot{\hat{b}}_{jl} = -\frac{\beta_{jl}}{2}\, p |e|^{p-2} e_j (f_l)_\tau \tag{20}$$

where α_j > 0, γ_j > 0, α_jl > 0, and β_jl > 0 (j, l = 1, 2, …, n) are arbitrary constants.

Proof: Choose the following nonnegative function:

$$V(t, e) = |e|^p + \sum_{j=1}^{n} \left[ \frac{1}{\alpha_j} k_j^2 + \frac{1}{\gamma_j} \tilde{c}_j^2 + \sum_{l=1}^{n} \frac{1}{\alpha_{jl}} \tilde{a}_{jl}^2 + \sum_{l=1}^{n} \frac{1}{\beta_{jl}} \tilde{b}_{jl}^2 \right].$$

The proof is similar to that of Theorem 2 and is hence omitted.

Case 2: When the noise perturbation is removed from the response system (2), we obtain a noiseless response system, which leads to the following result.

Corollary 4: Assume that M := −diag{η, η, …, η} − Γ (with N copies of η) is a nonsingular M-matrix, where

$$\eta = \frac{1}{2} p \big( -2\gamma + \alpha + L^2 + \beta \big) + \frac{1}{2} (p-2) L^2$$

and

$$L^2 \bar{q} < -\left( \eta q_i + \sum_{k=1}^{N} \gamma_{ik} q_k \right) (1 - \hat{\tau}), \quad \forall\, i \in S \tag{21}$$

where q̄ = max_{i∈S} q_i.

Under Assumption 1, the noiseless response system can be adaptively exponentially synchronized in pth moment with the drive neural network if the feedback gain K(t) of the controller (3) with the update law is chosen as (10) and the parameter update laws of the matrices Ĉ^i, Â^i, and B̂^i are chosen as (11).

Proof: The proof is similar to that of Theorem 2 and is hence omitted.

IV. NUMERICAL EXAMPLES

In this section, we present an example to illustrate the usefulness of the main results obtained in this brief. The adaptive exponential stability in pth moment is examined for a given stochastic delayed neural network with Markovian jumping parameters.

Example 1: Consider the delayed neural network (1) with Markovian switching, the response stochastic delayed neural network (2) with Markovian switching, and the error system (4), with the network parameters given as follows:

$$C^1 = \begin{bmatrix} 2.1 & 0 \\ 0 & 2.8 \end{bmatrix}, \quad C^2 = \begin{bmatrix} 2.5 & 0 \\ 0 & 2.2 \end{bmatrix}, \quad A^1 = \begin{bmatrix} 1.2 & -1.5 \\ -1.7 & 1.2 \end{bmatrix}, \quad A^2 = \begin{bmatrix} 1.1 & -1.6 \\ -1.8 & 1.2 \end{bmatrix}$$

$$B^1 = \begin{bmatrix} 0.7 & -0.2 \\ 0 & 0.3 \end{bmatrix}, \quad B^2 = \begin{bmatrix} -0.4 & -0.1 \\ -0.3 & 0.5 \end{bmatrix}, \quad D^1 = \hat{D}^1 = \begin{bmatrix} 0.6 \\ 0.1 \end{bmatrix}, \quad D^2 = \hat{D}^2 = \begin{bmatrix} 0.8 \\ 0.2 \end{bmatrix}, \quad \Gamma = \begin{bmatrix} -0.12 & 0.12 \\ 0.11 & -0.11 \end{bmatrix}$$

$$\alpha_{11} = \alpha_{12} = \alpha_{21} = \alpha_{22} = \beta_{11} = \beta_{12} = \beta_{21} = \beta_{22} = 1$$

$$\sigma(t, e(t), e(t-\tau), 1) = \big(0.4\, e_1(t-\tau),\ 0.5\, e_2(t)\big)^T, \quad \sigma(t, e(t), e(t-\tau), 2) = \big(0.5\, e_1(t),\ 0.3\, e_2(t-\tau)\big)^T$$

$$p = 3, \quad L = 1, \quad f(x(t)) = \tanh(x(t)), \quad \tau = 0.12.$$

It can be checked that Assumptions 1 and 2 and condition (9) are satisfied and that the matrix M is a nonsingular M-matrix. Hence, by Theorem 2, the noise-perturbed response system (2) can be adaptively exponentially synchronized in pth moment with the drive neural network (1). The simulation results are given in Figs. 1–5. Among them, Fig. 1 shows the state response of the error system, e₁(t) and e₂(t). Fig. 2 shows the feedback gains k₁ and k₂. Figs. 3–5 show the parameter update trajectories of the matrices Ĉ, Â, and B̂, namely c₁(t), c₂(t), a₁₁(t), a₁₂(t), a₂₁(t), a₂₂(t), b₁₁(t), b₁₂(t), b₂₁(t), and b₂₂(t). From the simulation figures, one can see that the stochastic delayed neural networks with Markovian switching (1) and (2) achieve adaptive exponential synchronization in pth moment.
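As a rough numerical companion to Example 1, the constants entering Theorem 2 can be computed from the matrices above. This sketch assumes the reading of ρ(·) as the spectral radius and the generator Γ as reconstructed; it illustrates how the check is organized rather than reproducing the paper's own computation.

```python
import numpy as np

rho = lambda X: max(abs(np.linalg.eigvals(X)))    # spectral radius

C = [np.diag([2.1, 2.8]), np.diag([2.5, 2.2])]
A = [np.array([[1.2, -1.5], [-1.7, 1.2]]), np.array([[1.1, -1.6], [-1.8, 1.2]])]
B = [np.array([[0.7, -0.2], [0.0, 0.3]]), np.array([[-0.4, -0.1], [-0.3, 0.5]])]
Gamma = np.array([[-0.12, 0.12], [0.11, -0.11]])  # assumed generator
p, L = 3, 1.0
H1, H2 = 0.25, 0.16                               # read off the sigma terms above

gamma_c = min(Ci.diagonal().min() for Ci in C)    # gamma: min over modes and neurons
alpha = max(rho(Ai) ** 2 for Ai in A)
beta = max(rho(Bi) ** 2 for Bi in B)
eta = 0.5 * p * (-2 * gamma_c + alpha + L**2 + beta + (p - 1) * H1) \
      + 0.5 * (p - 2) * (L**2 + (p - 1) * H2)
M = -eta * np.eye(2) - Gamma
print(eta, M)   # M can then be fed to the M-matrix test sketched in Section II
```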

[Fig. 1. Response curves of e₁(t) and e₂(t) of the error system.]

[Fig. 2. Dynamic curves of the feedback gains k₁(t) and k₂(t).]

[Fig. 3. Dynamic curves of the parameters c₁(t) and c₂(t).]

[Fig. 4. Dynamic curves of the parameters a₁₁(t), a₁₂(t), a₂₁(t), and a₂₂(t).]

[Fig. 5. Dynamic curves of the parameters b₁₁(t), b₁₂(t), b₂₁(t), and b₂₂(t).]

V. CONCLUSION

In this brief, we have dealt with the problem of mode and delay-dependent adaptive exponential synchronization in pth moment for neural networks with stochastic delays and Markovian jumping parameters. We have removed the traditional monotonicity and smoothness assumptions on the activation function. An M-matrix approach has been developed to solve the problem addressed. The conditions for adaptive exponential synchronization in pth moment have been derived in terms of algebraic inequalities, which are quite different from linear matrix inequality conditions. Finally, a simple example has been used to demonstrate the effectiveness of the main results obtained in this brief.

ACKNOWLEDGMENT

The authors would like to thank the reviewers, the Associate Editor, and the Editor-in-Chief for their valuable comments.

REFERENCES

[1] W. Zhou, H. Lu, and C. Duan, “Exponential stability of hybrid stochastic neural networks with mixed time delays and nonlinearity,” Neurocomputing, vol. 72, nos. 13–15, pp. 3357–3365, Aug. 2009.
[2] Z. Wang, Y. Liu, L. Yu, and X. Liu, “Exponential stability of delayed recurrent neural networks with Markovian jumping parameters,” Phys. Lett. A, vol. 356, no. 4, pp. 346–352, 2006.
[3] H. Zhang, Y. Wang, and D. Liu, “Delay-dependent guaranteed cost control for uncertain stochastic fuzzy systems with multiple time delays,” IEEE Trans. Syst., Man, Cybern. B, vol. 38, no. 1, pp. 125–140, Feb. 2008.
[4] T. Chen and L. Wang, “Power-rate global stability of dynamical systems with unbounded time-varying delays,” IEEE Trans. Circuits Syst. II: Exp. Briefs, vol. 54, no. 8, pp. 705–709, Aug. 2007.
[5] Z. Wang, Y. Liu, and X. Liu, “Exponential stabilization of a class of stochastic systems with Markovian jump parameters and mode-dependent mixed time-delays,” IEEE Trans. Autom. Control, vol. 55, no. 7, pp. 1656–1662, Jul. 2010.
[6] L. Zhang and E. Boukas, “Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities,” Automatica, vol. 45, no. 2, pp. 463–468, Feb. 2009.
[7] S. Zhu and Y. Shen, “Passivity analysis of stochastic delayed neural networks with Markovian switching,” Neurocomputing, vol. 74, no. 10, pp. 1754–1761, May 2011.
[8] X. R. Mao and C. G. Yuan, Stochastic Differential Equations with Markovian Switching. London, U.K.: Imperial College Press, 2006.
[9] Y. Sun and J. Cao, “Adaptive lag synchronization of unknown chaotic delayed neural networks with noise perturbation,” Phys. Lett. A, vol. 364, nos. 3–4, pp. 277–285, Apr. 2007.
[10] X. Li and J. Cao, “Adaptive synchronization for delayed neural networks with stochastic perturbation,” J. Franklin Inst., vol. 354, no. 7, pp. 779–791, Oct. 2008.
[11] Q. Zhu and J. Cao, “Adaptive synchronization under almost every initial data for stochastic neural networks with time-varying delays and distributed delays,” Commun. Nonlinear Sci. Numer. Simul., vol. 16, no. 4, pp. 2139–2159, Apr. 2011.
[12] J. Zhou, T. Chen, and L. Xiang, “Chaotic lag synchronization of coupled delayed neural networks and its applications in secure communication,” Circuits Syst. Signal Process., vol. 24, no. 5, pp. 599–613, 2005.
[13] J. Lu, D. W. C. Ho, J. Cao, and J. Kurths, “Exponential synchronization of linearly coupled neural networks with impulsive disturbances,” IEEE Trans. Neural Netw., vol. 22, no. 2, pp. 329–336, Feb. 2011.
[14] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences. Philadelphia, PA: SIAM, 1987.
[15] B. Øksendal, Stochastic Differential Equations: An Introduction with Applications. New York: Springer-Verlag, 2005.
