
Robust Adaptive Neural Tracking Control for a Class of Stochastic Nonlinear Interconnected Systems

Huanqing Wang, Xiaoping Liu, and Kefu Liu

Abstract—In this paper, an adaptive neural decentralized control approach is proposed for a class of multiple-input multiple-output (MIMO) uncertain stochastic nonlinear strongly interconnected systems. Radial basis function neural networks are used to approximate the packaged unknown nonlinearities, and the backstepping technique is utilized to construct an adaptive neural decentralized controller. The proposed control scheme guarantees that all signals of the resulting closed-loop system are semiglobally uniformly ultimately bounded in the sense of the fourth moment and that the tracking errors eventually converge to a small neighborhood of the origin. The main feature of this paper is that the proposed approach is capable of controlling stochastic systems with strong interconnected nonlinearities, in both the drift and diffusion terms, that are functions of all states of the overall system. Simulation results illustrate the effectiveness of the suggested approach.

Index Terms—Adaptive decentralized control, backstepping approach, neural networks, stochastic nonlinear interconnected systems.

I. INTRODUCTION

STOCHASTIC nonlinear dynamical systems modeled by the Itô stochastic differential equation are among the most useful system models in practical applications, e.g., aircraft, chemical or process control systems, and distributed networks [1]. Consequently, the control design of stochastic nonlinear systems has become an active topic, and many results in this field have been reported, including sliding-mode control [1], backstepping control [2], [3], Takagi–Sugeno fuzzy control [4], and so on. Among them, backstepping control, as a recursive design procedure, was initially introduced in [2] to handle the stabilization problem for a class of stochastic nonlinear strict-feedback systems using a quadratic Lyapunov function. Alternatively, by applying a quartic Lyapunov function, Deng and Krstić [3], [5] and Deng et al. [6] proposed a backstepping design for stochastic

Manuscript received June 6, 2014; revised March 2, 2015; accepted March 6, 2015. Date of publication March 24, 2015; date of current version February 15, 2016. This work was supported in part by the National Natural Science Foundation of China under Grant 61304002, Grant 61473160, Grant 61333012, and Grant 61304003, in part by the General Project of Education Department of Liaoning Province under Grant L2013424, and in part by the Program for Liaoning Innovative Research Team in University under Grant LT2013023. H. Wang is with the Department of Mathematics, Bohai University, Jinzhou 121000, China (e-mail: [email protected]). X. Liu and K. Liu are with the Faculty of Engineering, Lakehead University, Thunder Bay, ON P7B 5E1, Canada (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2015.2412035

nonlinear strict-feedback systems, and the results were further extended to the inverse optimal control of the stochastic cases. Since then, controller design for stochastic nonlinear systems based on the quadratic Lyapunov function has received more intensive investigation [7]–[10]. However, the aforementioned results are limited to nonlinear system functions that are assumed to be either known or bounded by known functions multiplied by unknown constant coefficients. Under these presuppositions, the above-mentioned approaches are inapplicable to nonlinear systems with unknown continuous nonlinear functions, which are encountered more often in practice, as most real systems are poorly known. To overcome this restriction, approximation-based adaptive neural or fuzzy control methods, which had been extensively used to control deterministic nonlinear systems [11]–[29], have been developed for stochastic nonlinear systems with unknown smooth nonlinear functions. In general, adaptive neural or fuzzy backstepping control provides a systematic methodology for solving the control problems of unknown nonlinear systems, in which neural networks or fuzzy systems are used to approximate uncertain nonlinear functions. Since Psillakis and Alexandridis [30] first developed an adaptive neural control approach for uncertain stochastic nonlinear strict-feedback systems, approximation-based adaptive control for stochastic nonlinear systems has received much interest from both theoreticians and practitioners in the control community. So far, many interesting control algorithms have been obtained in [31]–[33] for state-feedback control and in [34]–[36] for output-feedback control.

It is well known that large-scale systems composed of interconnected subsystems exist extensively, such as power systems, digital communication networks, aerospace systems, and multiagent systems. Due to the complexity of the control synthesis and physical restrictions on information exchange among subsystems, finding a centralized controller is often a technically challenging design problem [37]. Therefore, a decentralized control approach, in which each subsystem is controlled using only its locally available information, is often required for achieving a control objective of the overall large-scale system. Compared with centralized control, a decentralized control structure naturally alleviates the computational burden and enhances the robustness and reliability against failures of interacting operations [38]. The early results on decentralized control were largely focused on deterministic linear systems or nonlinear systems with the so-called strict matching conditions [39]. The first result


on decentralized adaptive backstepping control of large-scale nonlinear systems without satisfying the matching condition was reported in [40]. Recently, in [41], an adaptive backstepping output-feedback control scheme was proposed for time-delay interconnected systems using neural networks. Correspondingly, there have also been many highly valuable research results on the decentralized backstepping control of stochastic interconnected systems. A backstepping decentralized control approach was first proposed in [42] for a class of stochastic nonlinear interconnected systems. Liu et al. [43] presented a decentralized robust adaptive controller design for a class of large-scale stochastic nonlinear systems with zero dynamics and uncertain parameters. More recently, several approximation-based adaptive control schemes [44]–[46] were developed for controlling stochastic large-scale systems with unknown nonlinear functions. Although a tremendous amount of work has been conducted on the design of backstepping decentralized controllers for large-scale nonlinear systems, most of the existing results require that the interconnected nonlinearities of the system under consideration depend only on the outputs of each subsystem. To make backstepping-based decentralized control applicable to nonlinear large-scale systems with more general forms of interconnections, some efforts have been made. In [37], the problem of decentralized adaptive asymptotic tracking was investigated for a class of large-scale systems with significant nonlinearities in the last equation of each isolated subsystem. Subsequently, Yoo and Park [47] proposed an adaptive neural decentralized control scheme for a class of nonlinear interconnected systems with time delay, where the interconnected terms are functions of the states of the whole system. When the controlled nonlinear large-scale systems are affected by stochastic disturbances, which exist extensively in many practical systems, and especially when the interconnected nonlinearity in the drift terms as well as the diffusion terms depends on the states of the overall system, the existing control strategies become invalid. Therefore, designing an approximation-based adaptive neural backstepping control scheme for such large-scale stochastic nonlinear systems is a more challenging task.

Based on the above observation, in this paper, we consider this more challenging problem, i.e., the decentralized adaptive neural control of a class of large-scale stochastic nonlinear systems with strong interconnections. To overcome the difficulty arising from the strong interconnected nonlinearity, the monotonically increasing property of the bounding functions is used to develop the variable separation technique. In the controller design, radial basis function (RBF) neural networks are used to approximate the packaged unknown nonlinear functions, and backstepping is applied to design a decentralized adaptive controller, which guarantees that all the signals in the closed-loop system remain semiglobally bounded and that the tracking errors eventually converge to a small neighborhood of the origin in the sense of mean quartic value. The main contributions of this paper are summarized as follows. 1) A backstepping-based decentralized adaptive neural control strategy is recursively designed for a class


of large-scale stochastic nonlinear systems with strong interconnections, i.e., the interconnected functions and their bounding functions in both the drift and diffusion terms are all related to the state variables of the overall system. 2) Only one adaptive parameter is involved in the proposed controller for each n-order subsystem; as a result, the computational burden is significantly alleviated.

The remainder of this paper is organized as follows. The preliminaries and problem formulation are given in Section II. A decentralized adaptive neural control scheme is presented in Section III. The simulation results are given in Section IV. Finally, the conclusion is given in Section V.

II. PRELIMINARIES AND PROBLEM FORMULATION

Before presenting the main result, let us first introduce some necessary definitions and lemmas. Consider the following stochastic system:

    dx = f(x)\,dt + h(x)\,dw, \quad \forall x \in R^{n}   (1)

where w is an r-dimensional independent standard Brownian motion defined on the complete probability space (\Omega, F, \{F_t\}_{t\ge 0}, P), with \Omega being a sample space, F a \sigma-field, \{F_t\}_{t\ge 0} a filtration, and P a probability measure; x \in R^{n} is the state; and f: R^{n} \to R^{n} and h: R^{n} \to R^{n\times r} are locally Lipschitz functions in x satisfying f(0) = 0 and h(0) = 0.

Definition 1 [43]: For any given V(x) \in C^{2}, associated with the stochastic differential equation (1), define the differential operator L as

    LV = \frac{\partial V}{\partial x} f + \frac{1}{2}\mathrm{Tr}\Big\{ h^{T} \frac{\partial^{2} V}{\partial x^{2}} h \Big\}   (2)

where Tr(A) is the trace of A.

Remark 1: As shown in [43], the term (1/2)\mathrm{Tr}\{h^{T}(\partial^{2}V/\partial x^{2})h\} in (2) is called the Itô correction term, in which the second-order differential \partial^{2}V/\partial x^{2} makes the controller design much more difficult than that for a deterministic system. In addition, the differential operator L is used repeatedly during the controller design in Section III.

Definition 2 [34]: The trajectory \{x(t), t \ge 0\} of the stochastic system (1) is said to be semiglobally uniformly ultimately bounded in the pth moment if, for some compact set \Omega \subset R^{n} and any initial state x_0 = x(t_0), there exist a constant \varepsilon > 0 and a time constant T = T(\varepsilon, x_0) such that E(|x(t)|^{p}) < \varepsilon for all t > t_0 + T. In particular, when p = 2, it is usually called semiglobally uniformly ultimately bounded in mean square.

Lemma 1 [34]: Suppose that there exist a C^{2} function V(x): R^{n} \to R^{+}, two constants c_1 > 0 and c_2 > 0, and class K_{\infty} functions \bar\alpha_1 and \bar\alpha_2 such that

    \bar\alpha_1(|x|) \le V(x) \le \bar\alpha_2(|x|), \qquad LV(x) \le -c_1 V(x) + c_2

for all x \in R^{n} and t > t_0. Then, there is a unique strong solution of (1) for each x_0 \in R^{n}, and it satisfies

    E[V(x)] \le V(x_0)e^{-c_1 t} + \frac{c_2}{c_1}, \quad \forall t > t_0.


Lemma 2 (Young's Inequality [3]): For any (x, y) \in R^{2}, the following inequality holds:

    xy \le \frac{\varepsilon^{p}}{p}|x|^{p} + \frac{1}{q\varepsilon^{q}}|y|^{q}

where \varepsilon > 0, p > 1, q > 1, and (p - 1)(q - 1) = 1. Young's inequality in Lemma 2 is a key technique for handling some nonlinear terms, such as those in (19) and (21), during the controller design in Section III.

In this paper, we consider a class of multiple-input multiple-output (MIMO) stochastic nonlinear systems composed of N subsystems. The ith (i = 1, 2, ..., N) subsystem is described, for 1 \le j \le n_i - 1, by

    dx_{i,j} = \big(g_{i,j}(\bar x_{i,j})x_{i,j+1} + f_{i,j}(X)\big)dt + \psi_{i,j}^{T}(X)\,dw_i
    dx_{i,n_i} = \big(g_{i,n_i}(\bar x_{i,n_i})u_i + f_{i,n_i}(X)\big)dt + \psi_{i,n_i}^{T}(X)\,dw_i   (3)
    y_i = x_{i,1}

where X = [x_1^{T}, x_2^{T}, \ldots, x_N^{T}]^{T} represents the full state of the overall system, with x_i = \bar x_{i,n_i} = [x_{i,1}, x_{i,2}, \ldots, x_{i,n_i}]^{T} \in R^{n_i} (i = 1, 2, ..., N) being the state vector of the ith subsystem; u_i and y_i are the scalar control input and the scalar output of the ith subsystem, respectively; and w_i is an independent r_i-dimensional standard Brownian motion defined on the complete probability space (\Omega, F, \{F_t\}_{t\ge 0}, P), with \Omega being a sample space, F a \sigma-field, \{F_t\}_{t\ge 0} a filtration, and P a probability measure. f_{i,j}(\cdot), g_{i,j}(\cdot), and \psi_{i,j}(\cdot) are unknown smooth nonlinear functions with f_{i,j}(0) = 0 and \psi_{i,j}(0) = 0, and \bar x_{i,j} = [x_{i,1}, x_{i,2}, \ldots, x_{i,j}]^{T} \in R^{j}. Since the nonlinear functions f_{i,j}(\cdot) and \psi_{i,j}(\cdot) in (3) are functions of the states of the whole system, (3) is called a strongly interconnected stochastic nonlinear system, which differs from the large-scale systems considered in [43]–[46], where the interconnected nonlinearity depends only on the output of each subsystem. Therefore, (3) is in a more general form than those of the existing results.

The control objective is to design adaptive neural controllers using only local information such that all the signals in the closed-loop system remain bounded in the sense of the fourth moment and the outputs y_i of the local subsystems follow the desired reference signals y_{di}. To facilitate the adaptive neural control design, the following assumptions are required for each subsystem.

Assumption 1 [15]: The signs of g_{i,j}(\bar x_{i,j}), 1 \le i \le N, 1 \le j \le n_i, do not change, and without loss of generality it is further assumed that there exists a positive constant b_m such that 0 < b_m \le g_{i,j}(\bar x_{i,j}) < \infty.

Assumption 2 [13]: The desired trajectory y_{dj}(t), j = 1, 2, ..., N, and its time derivatives up to the n_j th order, y_{dj}^{(n_j)}(t), are continuous and bounded. It is further assumed that there exists a positive constant d such that |y_{dj}(t)| \le d.

Assumption 3 [21]: For the functions f_{i,j}(X) in (3), there exist strictly increasing smooth functions \phi_{i,j}(\cdot): R^{+} \to R^{+} with \phi_{i,j}(0) = 0 such that, for i = 1, 2, ..., N, j = 1, 2, ..., n_i,

    |f_{i,j}(X)| \le \phi_{i,j}(\|X\|).

Remark 2: As stated in [21], the increasing property of \phi_{i,j}(\cdot) means that if a_k \ge 0 for k = 1, 2, ..., n, then \phi_{i,j}(\sum_{k=1}^{n} a_k) \le \sum_{k=1}^{n} \phi_{i,j}(n a_k). Note that \phi_{i,j}(s) is a smooth function with \phi_{i,j}(0) = 0, so there exists a smooth function q_{i,j}(s) such that \phi_{i,j}(s) = s\,q_{i,j}(s), which results in

    \phi_{i,j}\Big(\sum_{k=1}^{n} a_k\Big) \le \sum_{k=1}^{n} n a_k\, q_{i,j}(n a_k).   (4)

Assumption 4 [33]: There exist strictly increasing smooth functions \rho_{i,j}(\cdot): R^{+} \to R^{+} with \rho_{i,j}(0) = 0 such that, for i = 1, 2, ..., N, j = 1, 2, ..., n_i,

    \|\psi_{i,j}(X)\| \le \rho_{i,j}(\|X\|).

Similarly to Remark 2, for the smooth function \rho_{i,j}(\cdot) with \rho_{i,j}(0) = 0, there exists a smooth function \eta_{i,j}(\cdot) such that

    \rho_{i,j}\Big(\sum_{k=1}^{n} b_k\Big) \le \sum_{k=1}^{n} n b_k\, \eta_{i,j}(n b_k)   (5)
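As a quick numerical sanity check of the separation inequality (4) — not part of the paper — take the hypothetical choice \phi(s) = s^2, so that q(s) = s:

```python
# Check inequality (4) for a hypothetical phi(s) = s**2 (so q(s) = s):
#   phi(sum_k a_k) <= sum_k n*a_k*q(n*a_k)  for a_k >= 0.
import random

phi = lambda s: s**2
q   = lambda s: s          # phi(s) = s * q(s)

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    a = [random.uniform(0.0, 5.0) for _ in range(n)]
    lhs = phi(sum(a))
    rhs = sum(n * ak * q(n * ak) for ak in a)
    assert lhs <= rhs + 1e-9, (a, lhs, rhs)
print("inequality (4) held on all random trials")
```

The check always passes here because (sum a_k)^2 <= n * sum a_k^2 <= n^2 * sum a_k^2 by the Cauchy–Schwarz inequality.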

where b_k \ge 0.

In the developed control design procedure, RBF neural networks f_{nn}(Z) will be used to approximate any continuous function f(Z): R^{n} \to R. The RBF neural networks are described in the following form:

    f_{nn}(Z) = W^{T}S(Z)   (6)

where Z \in \Omega_Z \subset R^{q} is the input vector with q being the neural network input dimension; W = [w_1, w_2, \ldots, w_l]^{T} \in R^{l} is the weight vector, with l > 1 the number of neural network nodes; and S(Z) = [s_1(Z), s_2(Z), \ldots, s_l(Z)]^{T} is the basis function vector, with s_i(Z) chosen as the commonly used Gaussian function of the form

    s_i(Z) = \exp\Big(-\frac{(Z-\mu_i)^{T}(Z-\mu_i)}{\eta^{2}}\Big), \quad i = 1, 2, \ldots, l   (7)

where \mu_i = [\mu_{i1}, \mu_{i2}, \ldots, \mu_{iq}]^{T} is the center of the receptive field and \eta is the width of the Gaussian function. It has been indicated in [49] that, with a sufficiently large node number l, the RBF neural network (6) can approximate any continuous function f(Z) over a compact set \Omega_Z \subset R^{q} to arbitrary accuracy \varepsilon > 0 as

    f(Z) = W^{*T}S(Z) + \delta(Z), \quad \forall Z \in \Omega_Z \subset R^{q}   (8)

where W^{*} is the ideal constant weight vector, defined as

    W^{*} := \arg\min_{W\in R^{l}}\Big\{\sup_{Z\in\Omega_Z}|f(Z) - W^{T}S(Z)|\Big\}

and \delta(Z) is the approximation error, satisfying |\delta(Z)| \le \varepsilon.

The boundedness of the basis function vector S(Z) in (6) is shown in the following lemma, which will be used to prove the result of Lemma 4.

Lemma 3 [50]: Consider the Gaussian RBF networks (6) and (7). Let \rho := (1/2)\min_{i\ne j}\|\mu_i - \mu_j\|. Then an upper bound of \|S(Z)\| is given by

    \|S(Z)\| \le \sum_{k=0}^{\infty} 3q(k+2)^{q-1}e^{-2\rho^{2}k^{2}/\eta^{2}} := s.   (9)
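For illustration, a minimal Python sketch of evaluating the Gaussian RBF network (6)-(7) follows; the centers, width, and weights below are arbitrary placeholders, not values from the paper.

```python
# Minimal sketch of the Gaussian RBF network (6)-(7): f_nn(Z) = W^T S(Z).
# Centers mu, width eta, and weights W are placeholder assumptions.
import numpy as np

def rbf_basis(Z, centers, eta):
    """S(Z): vector of Gaussian basis functions, one per center, as in (7)."""
    diff = centers - Z                              # shape (l, q)
    return np.exp(-np.sum(diff**2, axis=1) / eta**2)

def rbf_net(Z, W, centers, eta):
    """f_nn(Z) = W^T S(Z) as in (6)."""
    return W @ rbf_basis(Z, centers, eta)

# Example: l = 5 centers on a line in R^2, width eta = 2 (all assumptions).
centers = np.array([[c, c] for c in range(-2, 3)], dtype=float)
W = np.ones(len(centers))
print(rbf_net(np.array([0.5, -0.5]), W, centers, eta=2.0))
```

Note that each basis value lies in (0, 1], which is what makes the uniform bound s in Lemma 3 independent of Z.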


It has been shown in [14] that the constant s in Lemma 3 is a finite value that is independent of the variable Z and of the dimension l of the neural weights.

In Section III, a backstepping-based design procedure will be proposed. In the developed design procedure, for the ith subsystem, the neural network W_{i,j}^{T}S(Z_{i,j}) will be used to model the packaged unknown nonlinear function \bar f_{i,j}(Z_{i,j}) at step j. Both the virtual control signals and the adaptation law will be constructed in the following forms:

    \alpha_{i,j}(Z_{i,j}) = -\lambda_{i,j}\chi_{i,j} - \frac{1}{2a_{i,j}^{2}}\chi_{i,j}^{3}\hat\theta_i S_{i,j}^{T}(Z_{i,j})S_{i,j}(Z_{i,j})   (10)

    \dot{\hat\theta}_i = \sum_{j=1}^{n_i}\frac{\gamma_i}{2a_{i,j}^{2}}\chi_{i,j}^{6}S_{i,j}^{T}(Z_{i,j})S_{i,j}(Z_{i,j}) - \sigma_i\hat\theta_i   (11)

where, for i = 1, 2, ..., N and j = 1, 2, ..., n_i, \lambda_{i,j}, a_{i,j}, \gamma_i, and \sigma_i are positive design parameters; Z_{i,1} = [x_{i,1}, \hat\theta_i, \bar y_{di}^{(1)}]^{T} with \bar y_{di}^{(1)} = [y_{di}, y_{di}^{(1)}]^{T}; Z_{i,j} = [\bar x_{i,j}^{T}, \hat\theta_i, \bar y_{di}^{(j)}]^{T} with \bar y_{di}^{(j)} = [y_{di}, y_{di}^{(1)}, \ldots, y_{di}^{(j)}] (j = 2, ..., n_i); and the error variables \chi_{i,j} satisfy the variable transformation

    \chi_{i,j} = x_{i,j} - \alpha_{i,j-1}   (12)

with \alpha_{i,0} = y_{di}. \hat\theta_i is the estimate of an unknown constant \theta_i, which will be specified as

    \theta_i = \max\Big\{\frac{1}{b_m}\|W_{i,j}\|^{2};\ j = 1, 2, \ldots, n_i\Big\}   (13)

where b_m is defined in Assumption 1 and \|W_{i,j}\| denotes the norm of the ideal weight vector of the neural network, which will be specified at the jth design step. In particular, \alpha_{i,n_i} is the actual control input u_i.

Remark 3: Based on [32, Lemma 1], it is apparent from (11) that, for any nonnegative initial condition \hat\theta_i(t_0) \ge 0, the solution satisfies \hat\theta_i(t) \ge 0 for t \ge t_0. In fact, it is reasonable to choose \hat\theta_i(t_0) \ge 0 in practice, as \hat\theta_i is an estimate of \theta_i. Therefore, throughout this paper, it is assumed that \hat\theta_i(t) \ge 0.

Remark 4: In the following, we give some key lemmas, which will be applied to the control design in Section III. Lemma 4 gives the relationship between \|X\| and the error signals \chi_{i,j} (i = 1, 2, ..., N, j = 1, 2, ..., n_i), which is used to separate all the state functions f_{i,j}(X) and \psi_{i,j}(X) in (19) and (21) with j = 1, in (36)–(39), and so on. Lemma 5 is an important technique for dealing with some nonlinear terms generated during the backstepping-based control design procedure, such as those in (20) and (31).

Lemma 4: Under the coordinate transformations \chi_{i,j} = x_{i,j} - \alpha_{i,j-1}, for i = 1, 2, ..., N, j = 1, 2, ..., n_i, the following result holds:

    \|X\| \le \sum_{i=1}^{N}\sum_{j=1}^{n_i}\varphi_{i,j}(\chi_{i,j},\hat\theta_i)|\chi_{i,j}| + d^{*}   (14)

where d^{*} = Nd with d being the upper bound of y_{di}, \varphi_{i,j}(\chi_{i,j},\hat\theta_i) = (\lambda_{i,j}+1) + (1/2a_{i,j}^{2})\chi_{i,j}^{2}s^{2}\hat\theta_i for i = 1, 2, ..., N, j = 1, 2, ..., n_i - 1, and \varphi_{i,n_i} = 1.

Proof: From \alpha_{i,0} = y_{di}, (10), and (12), we have

    \|X\| \le \sum_{i=1}^{N}\sum_{j=1}^{n_i}|x_{i,j}| \le \sum_{i=1}^{N}\sum_{j=1}^{n_i}(|\chi_{i,j}| + |\alpha_{i,j-1}|)
          \le \sum_{i=1}^{N}\sum_{j=1}^{n_i}|\chi_{i,j}| + \sum_{i=1}^{N}\sum_{j=1}^{n_i-1}\Big(\lambda_{i,j} + \frac{1}{2a_{i,j}^{2}}\chi_{i,j}^{2}s^{2}\hat\theta_i\Big)|\chi_{i,j}| + \sum_{i=1}^{N}|y_{di}|
          \le \sum_{i=1}^{N}\sum_{j=1}^{n_i}\varphi_{i,j}(\chi_{i,j},\hat\theta_i)|\chi_{i,j}| + d^{*}.
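To make the structure of (10)–(12) concrete, here is a minimal sketch (not the authors' code) of one subsystem's virtual controls and the right-hand side of its adaptive law; the basis-evaluation callback S and all numeric parameters are placeholder assumptions.

```python
# Sketch of virtual controls (10), transformation (12), and adaptive law (11)
# for one subsystem i.  S(j, ...) stands for the basis vector S_{i,j}(Z_{i,j});
# how Z_{i,j} is assembled is left to the caller (an assumption of this sketch).
import numpy as np

def subsystem_laws(x, theta_hat, y_d, dy_d, S, lam, a, gamma, sigma):
    """Return virtual controls alpha[0..n-1] (alpha[n-1] = u_i) and theta_hat_dot."""
    n = len(x)
    chi, alpha = np.zeros(n), np.zeros(n)
    alpha_prev = y_d                        # alpha_{i,0} = y_di
    theta_dot = -sigma * theta_hat          # -sigma_i * theta_hat_i term of (11)
    for j in range(n):
        chi[j] = x[j] - alpha_prev          # (12), evaluated at the current state
        Sj = S(j, x, theta_hat, y_d, dy_d)  # S_{i,j}(Z_{i,j})
        SS = Sj @ Sj
        alpha[j] = -lam[j] * chi[j] - chi[j]**3 * theta_hat * SS / (2 * a[j]**2)  # (10)
        theta_dot += gamma * chi[j]**6 * SS / (2 * a[j]**2)                       # (11)
        alpha_prev = alpha[j]
    return alpha, theta_dot
```

Only the scalar theta_hat is adapted per subsystem, which is exactly the source of the "one adaptive parameter per subsystem" computational saving claimed in Section I.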

Lemma 5 [48]: For any \eta \in R and \epsilon > 0, the following inequality holds:

    0 \le |\eta| - \eta\tanh\Big(\frac{\eta}{\epsilon}\Big) \le \delta\epsilon, \qquad \delta = 0.2785.   (15)

III. ADAPTIVE NEURAL CONTROL DESIGN

For notational simplicity, in the following, the time variable t and the state vector \bar x_{i,j} are omitted from the corresponding functions, and we write S_{i,j}(Z_{i,j}) = S_{i,j}.

Step 1: Based on \chi_{i,1} = x_{i,1} - y_{di} and \chi_{i,2} = x_{i,2} - \alpha_{i,1}, it follows from (3) that

    d\chi_{i,1} = (g_{i,1}\chi_{i,2} + g_{i,1}\alpha_{i,1} + f_{i,1}(X) - \dot y_{di})dt + \psi_{i,1}^{T}(X)\,dw_i.   (16)

Choose the Lyapunov function candidate

    V_{i,1} = \frac{1}{4}\chi_{i,1}^{4} + \frac{b_m}{2\gamma_i}\tilde\theta_i^{2}   (17)

where \tilde\theta_i = \theta_i - \hat\theta_i denotes the parameter error. It can easily be verified from (2) that
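A quick numerical check of Lemma 5 (not from the paper) is immediate:

```python
# Numerical check of Lemma 5: 0 <= |eta| - eta*tanh(eta/eps) <= 0.2785*eps.
import numpy as np

eps = 0.1
eta = np.linspace(-10.0, 10.0, 100001)
gap = np.abs(eta) - eta * np.tanh(eta / eps)
print(gap.min() >= 0.0, gap.max() <= 0.2785 * eps)   # both True
```

The gap is largest near |eta| of order eps and vanishes both at eta = 0 and as |eta| grows, which is why the tanh substitution in (20), (30), and (40) below only costs a constant delta*epsilon per term.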

    LV_{i,1} = \chi_{i,1}^{3}(g_{i,1}\chi_{i,2} + g_{i,1}\alpha_{i,1} + f_{i,1}(X) - \dot y_{di}) + \frac{3}{2}\chi_{i,1}^{2}\psi_{i,1}^{T}(X)\psi_{i,1}(X) - \frac{b_m}{\gamma_i}\tilde\theta_i\dot{\hat\theta}_i.   (18)

By applying Assumption 3, (4), (14), and Young's inequality, we have

    |\chi_{i,1}^{3}f_{i,1}(X)| \le |\chi_{i,1}^{3}|\,\phi_{i,1}(\|X\|)
      \le |\chi_{i,1}^{3}|\,\phi_{i,1}\Big(\sum_{m=1}^{N}\sum_{l=1}^{n_m}\varphi_{m,l}(\chi_{m,l},\hat\theta_m)|\chi_{m,l}| + d^{*}\Big)
      \le |\chi_{i,1}^{3}|\sum_{m=1}^{N}\sum_{l=1}^{n_m}\phi_{i,1}\big((M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)\big) + |\chi_{i,1}^{3}|\,\phi_{i,1}((M+1)d^{*})
      = |\chi_{i,1}^{3}|\sum_{m=1}^{N}\sum_{l=1}^{n_m}(M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)\,q_{i,1}\big((M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)\big) + |\chi_{i,1}^{3}|\,\phi_{i,1}((M+1)d^{*})
      \le \frac{3}{4}M\chi_{i,1}^{4} + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,1}^{4}(\chi_{m,l},\hat\theta_m) + |\chi_{i,1}^{3}|\,\phi_{i,1}((M+1)d^{*})   (19)


where M = \sum_{m=1}^{N} n_m and \bar\phi_{i,1}^{4}(\chi_{m,l},\hat\theta_m) = (1/4)(M+1)^{4}\varphi_{m,l}^{4}(\chi_{m,l},\hat\theta_m)\,q_{i,1}^{4}((M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)). Let U_{i,1} = \phi_{i,1}((M+1)d^{*}). Then, applying Lemma 5 to the last term on the right-hand side of (19) yields

    |\chi_{i,1}^{3}|U_{i,1} - \chi_{i,1}^{3}U_{i,1}\tanh\Big(\frac{\chi_{i,1}^{3}U_{i,1}}{\epsilon_{i,1}}\Big) \le \delta\epsilon_{i,1}.   (20)

Similarly, based on Assumption 4, (5), (14), and Young's inequality, the following inequality holds:

    \frac{3}{2}\chi_{i,1}^{2}\psi_{i,1}^{T}(X)\psi_{i,1}(X) \le \frac{3}{2}\chi_{i,1}^{2}\rho_{i,1}^{2}(\|X\|)
      \le \frac{3}{2}\chi_{i,1}^{2}\rho_{i,1}^{2}\Big(\sum_{m=1}^{N}\sum_{l=1}^{n_m}\varphi_{m,l}(\chi_{m,l},\hat\theta_m)|\chi_{m,l}| + d^{*}\Big)
      \le \frac{3}{2}(M+1)\chi_{i,1}^{2}\Big(\rho_{i,1}^{2}((M+1)d^{*}) + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\rho_{i,1}^{2}\big((M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)\big)\Big)
      \le \frac{9}{8}(M+1)^{2}M\chi_{i,1}^{4} + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,1}^{4}(\chi_{m,l},\hat\theta_m) + \frac{9}{8}(M+1)^{2}l_{i,1,1}^{-2}\chi_{i,1}^{4}\rho_{i,1}^{4}((M+1)d^{*}) + \frac{1}{2}l_{i,1,1}^{2}   (21)

where l_{i,1,1} is a positive constant and \bar\rho_{i,1}^{4}(\chi_{m,l},\hat\theta_m) = (1/2)(M+1)^{4}\varphi_{m,l}^{4}(\chi_{m,l},\hat\theta_m)\,\eta_{i,1}^{4}((M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)). Substituting (19) into (18) and using (20) and (21) gives

    LV_{i,1} \le \chi_{i,1}^{3}(g_{i,1}\chi_{i,2} + g_{i,1}\alpha_{i,1} - \dot y_{di}) + \frac{3}{4}M\chi_{i,1}^{4} + \frac{9}{8}(M+1)^{2}M\chi_{i,1}^{4} + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,1}^{4}(\chi_{m,l},\hat\theta_m)
      + \frac{9}{8}(M+1)^{2}l_{i,1,1}^{-2}\chi_{i,1}^{4}\rho_{i,1}^{4}((M+1)d^{*}) + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,1}^{4}(\chi_{m,l},\hat\theta_m) - \frac{b_m}{\gamma_i}\tilde\theta_i\dot{\hat\theta}_i
      + \chi_{i,1}^{3}U_{i,1}\tanh\Big(\frac{\chi_{i,1}^{3}U_{i,1}}{\epsilon_{i,1}}\Big) + \delta\epsilon_{i,1} + \frac{1}{2}l_{i,1,1}^{2}.   (22)

Step 2: Based on (12), one has

    d\chi_{i,2} = (g_{i,2}x_{i,3} + f_{i,2}(X) - \mathcal{L}\alpha_{i,1})dt + \Big(\psi_{i,2}(X) - \frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\psi_{i,1}(X)\Big)^{T}dw_i   (23)

where

    \mathcal{L}\alpha_{i,1} = \frac{\partial\alpha_{i,1}}{\partial x_{i,1}}(g_{i,1}x_{i,2} + f_{i,1}(X)) + \frac{\partial\alpha_{i,1}}{\partial\hat\theta_i}\dot{\hat\theta}_i + \sum_{k=0}^{1}\frac{\partial\alpha_{i,1}}{\partial y_{di}^{(k)}}y_{di}^{(k+1)} + \frac{1}{2}\frac{\partial^{2}\alpha_{i,1}}{\partial x_{i,1}^{2}}\psi_{i,1}^{T}(X)\psi_{i,1}(X).   (24)

Choose the Lyapunov function V_{i,2} = (1/4)\chi_{i,2}^{4}. Then, according to (2), we can obtain

    LV_{i,2} = \chi_{i,2}^{3}(g_{i,2}x_{i,3} + f_{i,2}(X) - \mathcal{L}\alpha_{i,1}) + \frac{3}{2}\chi_{i,2}^{2}\Big(\psi_{i,2}(X) - \frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\psi_{i,1}(X)\Big)^{T}\Big(\psi_{i,2}(X) - \frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\psi_{i,1}(X)\Big).   (25)

By following the same estimation method as in (19), one has

    -\chi_{i,2}^{3}\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}f_{i,1}(X) \le |\chi_{i,2}^{3}|\Big|\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big|\phi_{i,1}(\|X\|)
      \le \frac{3}{4}M\chi_{i,2}^{4}\Big(\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big)^{4/3} + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,1}^{4}(\chi_{m,l},\hat\theta_m) + \Big|\chi_{i,2}^{3}\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big|\phi_{i,1}((M+1)d^{*})   (26)

    \chi_{i,2}^{3}f_{i,2}(X) \le \frac{3}{4}M\chi_{i,2}^{4} + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,2}^{4}(\chi_{m,l},\hat\theta_m) + |\chi_{i,2}^{3}|\,\phi_{i,2}((M+1)d^{*})   (27)

where \bar\phi_{i,k}^{4}(\chi_{m,l},\hat\theta_m) = (1/4)(M+1)^{4}\varphi_{m,l}^{4}(\chi_{m,l},\hat\theta_m)\,q_{i,k}^{4}((M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)), k = 1, 2. Employing the same approach as in (21) produces

    -\frac{1}{2}\chi_{i,2}^{3}\frac{\partial^{2}\alpha_{i,1}}{\partial x_{i,1}^{2}}\psi_{i,1}^{T}(X)\psi_{i,1}(X) \le \frac{1}{8}(M+1)^{2}M\chi_{i,2}^{6}\Big(\frac{\partial^{2}\alpha_{i,1}}{\partial x_{i,1}^{2}}\Big)^{2} + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,1}^{4}(\chi_{m,l},\hat\theta_m)
      + \frac{1}{2}(M+1)\Big|\chi_{i,2}^{3}\frac{\partial^{2}\alpha_{i,1}}{\partial x_{i,1}^{2}}\Big|\rho_{i,1}^{2}((M+1)d^{*}).   (28)

Define a new function

    U_{i,2} = \Big|\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big|\phi_{i,1}((M+1)d^{*}) + \phi_{i,2}((M+1)d^{*}) + \frac{1}{2}(M+1)\Big|\frac{\partial^{2}\alpha_{i,1}}{\partial x_{i,1}^{2}}\Big|\rho_{i,1}^{2}((M+1)d^{*}).   (29)

Then, by applying Lemma 5, one has

    |\chi_{i,2}^{3}|U_{i,2} - \chi_{i,2}^{3}U_{i,2}\tanh\Big(\frac{\chi_{i,2}^{3}U_{i,2}}{\epsilon_{i,2}}\Big) \le \delta\epsilon_{i,2}.   (30)

With the help of the triangle inequality and Assumption 4, it follows that

    \Big\|\psi_{i,2}(X) - \frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\psi_{i,1}(X)\Big\| \le \rho_{i,2}(\|X\|) + \Big|\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big|\rho_{i,1}(\|X\|).


Next, based on the result (a + b)^{2} \le 2a^{2} + 2b^{2}, and then repeating the same derivation process as that used in (21), we have

    \frac{3}{2}\chi_{i,2}^{2}\Big\|\psi_{i,2}(X) - \frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\psi_{i,1}(X)\Big\|^{2} \le 3\chi_{i,2}^{2}\rho_{i,2}^{2}(\|X\|) + 3\chi_{i,2}^{2}\Big(\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big)^{2}\rho_{i,1}^{2}(\|X\|)
      \le \frac{9}{2}(M+1)^{2}M\chi_{i,2}^{4} + \frac{9}{2}(M+1)^{2}M\Big(\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big)^{4}\chi_{i,2}^{4}
      + \frac{9}{2}(M+1)^{2}\chi_{i,2}^{4}\Big(\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big)^{4}l_{i,2,1}^{-2}\rho_{i,1}^{4}((M+1)d^{*}) + \frac{9}{2}(M+1)^{2}\chi_{i,2}^{4}l_{i,2,2}^{-2}\rho_{i,2}^{4}((M+1)d^{*})
      + \frac{1}{2}\sum_{k=1}^{2}l_{i,2,k}^{2} + \sum_{s=1}^{2}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,s}^{4}(\chi_{m,l},\hat\theta_m)   (31)

where, for k = 1, 2, the l_{i,2,k} are positive constants and \bar\rho_{i,k}^{4}(\chi_{m,l},\hat\theta_m) = (1/2)(M+1)^{4}\varphi_{m,l}^{4}(\chi_{m,l},\hat\theta_m)\,\eta_{i,k}^{4}((M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)). Then, by substituting (26)–(31) into (25), the following inequality holds:

    LV_{i,2} \le \chi_{i,2}^{3}\Big(g_{i,2}x_{i,3} - \frac{\partial\alpha_{i,1}}{\partial x_{i,1}}g_{i,1}x_{i,2} - \frac{\partial\alpha_{i,1}}{\partial\hat\theta_i}\dot{\hat\theta}_i - \sum_{k=0}^{1}\frac{\partial\alpha_{i,1}}{\partial y_{di}^{(k)}}y_{di}^{(k+1)}\Big)
      + \frac{3}{4}M\chi_{i,2}^{4} + \frac{3}{4}M\chi_{i,2}^{4}\Big(\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big)^{4/3} + \frac{9}{2}(M+1)^{2}M\chi_{i,2}^{4} + \frac{9}{2}(M+1)^{2}M\Big(\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big)^{4}\chi_{i,2}^{4}
      + \frac{1}{8}(M+1)^{2}M\chi_{i,2}^{6}\Big(\frac{\partial^{2}\alpha_{i,1}}{\partial x_{i,1}^{2}}\Big)^{2} + \frac{9}{2}(M+1)^{2}\chi_{i,2}^{4}\Big(\frac{\partial\alpha_{i,1}}{\partial x_{i,1}}\Big)^{4}l_{i,2,1}^{-2}\rho_{i,1}^{4}((M+1)d^{*})
      + \frac{9}{2}(M+1)^{2}\chi_{i,2}^{4}l_{i,2,2}^{-2}\rho_{i,2}^{4}((M+1)d^{*}) + \chi_{i,2}^{3}U_{i,2}\tanh\Big(\frac{\chi_{i,2}^{3}U_{i,2}}{\epsilon_{i,2}}\Big)
      + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,1}^{4}(\chi_{m,l},\hat\theta_m) + \sum_{s=1}^{2}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,s}^{4}(\chi_{m,l},\hat\theta_m) + \sum_{s=1}^{2}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,s}^{4}(\chi_{m,l},\hat\theta_m)
      + \frac{1}{2}\sum_{k=1}^{2}l_{i,2,k}^{2} + \delta\epsilon_{i,2}.   (32)

Step j (3 \le j \le n_i - 1): Similar to (23), the following result is true:

    d\chi_{i,j} = (g_{i,j}x_{i,j+1} + f_{i,j}(X) - \mathcal{L}\alpha_{i,j-1})dt + \Big(\psi_{i,j}(X) - \sum_{k=1}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\psi_{i,k}(X)\Big)^{T}dw_i   (33)

where

    \mathcal{L}\alpha_{i,j-1} = \sum_{k=1}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}(g_{i,k}x_{i,k+1} + f_{i,k}(X)) + \frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\dot{\hat\theta}_i + \sum_{k=0}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial y_{di}^{(k)}}y_{di}^{(k+1)} + \frac{1}{2}\sum_{p,q=1}^{j-1}\frac{\partial^{2}\alpha_{i,j-1}}{\partial x_{i,p}\partial x_{i,q}}\psi_{i,p}^{T}(X)\psi_{i,q}(X).   (34)

Taking the Lyapunov function V_{i,j} = (1/4)\chi_{i,j}^{4}, we can further obtain

    LV_{i,j} = \chi_{i,j}^{3}\Big(g_{i,j}x_{i,j+1} + f_{i,j}(X) - \sum_{k=1}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}(g_{i,k}x_{i,k+1} + f_{i,k}(X)) - \frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\dot{\hat\theta}_i - \sum_{k=0}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial y_{di}^{(k)}}y_{di}^{(k+1)}
      - \frac{1}{2}\sum_{p,q=1}^{j-1}\frac{\partial^{2}\alpha_{i,j-1}}{\partial x_{i,p}\partial x_{i,q}}\psi_{i,p}^{T}(X)\psi_{i,q}(X)\Big) + \frac{3}{2}\chi_{i,j}^{2}\Big\|\psi_{i,j}(X) - \sum_{k=1}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\psi_{i,k}(X)\Big\|^{2}.   (35)

Furthermore, similar to the derivations from (26) to (27), one has

    -\chi_{i,j}^{3}\sum_{k=1}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}f_{i,k}(X) \le \frac{3}{4}M\chi_{i,j}^{4}\sum_{k=1}^{j-1}\Big(\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\Big)^{4/3} + \sum_{k=1}^{j-1}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,k}^{4}(\chi_{m,l},\hat\theta_m)
      + |\chi_{i,j}^{3}|\sum_{k=1}^{j-1}\Big|\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\Big|\phi_{i,k}((M+1)d^{*})   (36)

    \chi_{i,j}^{3}f_{i,j}(X) \le \frac{3}{4}M\chi_{i,j}^{4} + \sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,j}^{4}(\chi_{m,l},\hat\theta_m) + |\chi_{i,j}^{3}|\,\phi_{i,j}((M+1)d^{*})   (37)


where \bar\phi_{i,k}^{4}(\chi_{m,l},\hat\theta_m) = (1/4)(M+1)^{4}\varphi_{m,l}^{4}(\chi_{m,l},\hat\theta_m)\,q_{i,k}^{4}((M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)), k = 1, 2, ..., j. Moreover,

    -\frac{1}{2}\chi_{i,j}^{3}\sum_{p,q=1}^{j-1}\frac{\partial^{2}\alpha_{i,j-1}}{\partial x_{i,p}\partial x_{i,q}}\psi_{i,p}^{T}(X)\psi_{i,q}(X) \le \frac{1}{2}|\chi_{i,j}|^{3}\sum_{p=1}^{j-1}\sum_{q=1}^{j-1}\Big|\frac{\partial^{2}\alpha_{i,j-1}}{\partial x_{i,p}\partial x_{i,q}}\Big|\,\big|\psi_{i,p}^{T}(X)\psi_{i,q}(X)\big|
      \le \frac{1}{8}(M+1)^{2}M\chi_{i,j}^{6}\Big(\sum_{p=1}^{j-1}\sum_{q=1}^{j-1}\Big|\frac{\partial^{2}\alpha_{i,j-1}}{\partial x_{i,p}\partial x_{i,q}}\Big|\Big)^{2} + (j-1)\sum_{q=1}^{j-1}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,q}^{4}(\chi_{m,l},\hat\theta_m)
      + \frac{1}{2}(M+1)|\chi_{i,j}^{3}|\sum_{p=1}^{j-1}\sum_{q=1}^{j-1}\Big|\frac{\partial^{2}\alpha_{i,j-1}}{\partial x_{i,p}\partial x_{i,q}}\Big|\rho_{i,q}^{2}((M+1)d^{*})   (38)

where, for k = 1, 2, ..., j, the l_{i,j,k} are positive constants and \bar\rho_{i,k}^{4}(\chi_{m,l},\hat\theta_m) = (1/2)(M+1)^{4}\varphi_{m,l}^{4}(\chi_{m,l},\hat\theta_m)\,\eta_{i,k}^{4}((M+1)|\chi_{m,l}|\varphi_{m,l}(\chi_{m,l},\hat\theta_m)). Let

    U_{i,j} = \sum_{k=1}^{j-1}\Big|\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\Big|\phi_{i,k}((M+1)d^{*}) + \phi_{i,j}((M+1)d^{*}) + \frac{1}{2}(M+1)\sum_{p=1}^{j-1}\sum_{q=1}^{j-1}\Big|\frac{\partial^{2}\alpha_{i,j-1}}{\partial x_{i,p}\partial x_{i,q}}\Big|\rho_{i,q}^{2}((M+1)d^{*}).   (39)

By means of Lemma 5, we can obtain

    |\chi_{i,j}^{3}|U_{i,j} - \chi_{i,j}^{3}U_{i,j}\tanh\Big(\frac{\chi_{i,j}^{3}U_{i,j}}{\epsilon_{i,j}}\Big) \le \delta\epsilon_{i,j}.   (40)

It follows from (35)–(40) that

    LV_{i,j} \le \chi_{i,j}^{3}\Big(g_{i,j}x_{i,j+1} - \sum_{k=1}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}g_{i,k}x_{i,k+1} - \frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\dot{\hat\theta}_i - \sum_{k=0}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial y_{di}^{(k)}}y_{di}^{(k+1)}\Big)
      + \frac{3}{4}M\chi_{i,j}^{4} + \frac{3}{4}M\chi_{i,j}^{4}\sum_{k=1}^{j-1}\Big(\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\Big)^{4/3} + \frac{9}{8}j^{2}(M+1)^{2}M\chi_{i,j}^{4}
      + \frac{1}{8}(M+1)^{2}M\chi_{i,j}^{6}\Big(\sum_{p,q=1}^{j-1}\Big|\frac{\partial^{2}\alpha_{i,j-1}}{\partial x_{i,p}\partial x_{i,q}}\Big|\Big)^{2} + \frac{9}{8}j^{2}(M+1)^{2}M\chi_{i,j}^{4}\Big(\sum_{k=1}^{j-1}\Big(\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\Big)^{2}\Big)^{2}
      + \frac{9}{8}j^{2}(M+1)^{2}\chi_{i,j}^{4}\sum_{k=1}^{j-1}\Big(\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\Big)^{4}l_{i,j,k}^{-2}\rho_{i,k}^{4}((M+1)d^{*}) + \frac{9}{8}j^{2}(M+1)^{2}\chi_{i,j}^{4}l_{i,j,j}^{-2}\rho_{i,j}^{4}((M+1)d^{*})
      + \chi_{i,j}^{3}U_{i,j}\tanh\Big(\frac{\chi_{i,j}^{3}U_{i,j}}{\epsilon_{i,j}}\Big) + \sum_{k=1}^{j}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,k}^{4}(\chi_{m,l},\hat\theta_m) + \sum_{k=1}^{j}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,k}^{4}(\chi_{m,l},\hat\theta_m)
      + (j-1)\sum_{q=1}^{j-1}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,q}^{4}(\chi_{m,l},\hat\theta_m) + \frac{1}{2}\sum_{k=1}^{j}l_{i,j,k}^{2} + \delta\epsilon_{i,j}.   (41)

Step n_i: In light of (12), one has

    d\chi_{i,n_i} = (g_{i,n_i}u_i + f_{i,n_i}(X) - \mathcal{L}\alpha_{i,n_i-1})dt + \Big(\psi_{i,n_i}(X) - \sum_{k=1}^{n_i-1}\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}\psi_{i,k}(X)\Big)^{T}dw_i   (42)

where \mathcal{L}\alpha_{i,n_i-1} is defined in (34) with j = n_i. Consider the Lyapunov function

    V_{i,n_i} = \frac{1}{4}\chi_{i,n_i}^{4}.   (43)

Furthermore, the following result holds:

    LV_{i,n_i} = \chi_{i,n_i}^{3}\Big(g_{i,n_i}u_i + f_{i,n_i}(X) - \sum_{k=1}^{n_i-1}\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}(g_{i,k}x_{i,k+1} + f_{i,k}(X)) - \frac{\partial\alpha_{i,n_i-1}}{\partial\hat\theta_i}\dot{\hat\theta}_i - \sum_{k=0}^{n_i-1}\frac{\partial\alpha_{i,n_i-1}}{\partial y_{di}^{(k)}}y_{di}^{(k+1)}
      - \frac{1}{2}\sum_{p,q=1}^{n_i-1}\frac{\partial^{2}\alpha_{i,n_i-1}}{\partial x_{i,p}\partial x_{i,q}}\psi_{i,p}^{T}(X)\psi_{i,q}(X)\Big) + \frac{3}{2}\chi_{i,n_i}^{2}\Big\|\psi_{i,n_i}(X) - \sum_{k=1}^{n_i-1}\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}\psi_{i,k}(X)\Big\|^{2}.   (44)


Repeating the same derivations as in (36)–(41) yields

    LV_{i,n_i} \le \chi_{i,n_i}^{3}\Big(g_{i,n_i}u_i - \sum_{k=1}^{n_i-1}\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}g_{i,k}x_{i,k+1} - \frac{\partial\alpha_{i,n_i-1}}{\partial\hat\theta_i}\dot{\hat\theta}_i - \sum_{k=0}^{n_i-1}\frac{\partial\alpha_{i,n_i-1}}{\partial y_{di}^{(k)}}y_{di}^{(k+1)}\Big)
      + \frac{3}{4}M\chi_{i,n_i}^{4} + \frac{3}{4}M\chi_{i,n_i}^{4}\sum_{k=1}^{n_i-1}\Big(\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}\Big)^{4/3} + \frac{9}{8}n_i^{2}(M+1)^{2}M\chi_{i,n_i}^{4}
      + \frac{1}{8}(M+1)^{2}M\chi_{i,n_i}^{6}\Big(\sum_{p,q=1}^{n_i-1}\Big|\frac{\partial^{2}\alpha_{i,n_i-1}}{\partial x_{i,p}\partial x_{i,q}}\Big|\Big)^{2} + \frac{9}{8}n_i^{2}(M+1)^{2}M\chi_{i,n_i}^{4}\Big(\sum_{k=1}^{n_i-1}\Big(\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}\Big)^{2}\Big)^{2}
      + \frac{9}{8}n_i^{2}(M+1)^{2}\chi_{i,n_i}^{4}\sum_{k=1}^{n_i-1}\Big(\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}\Big)^{4}l_{i,n_i,k}^{-2}\rho_{i,k}^{4}((M+1)d^{*}) + \frac{9}{8}n_i^{2}(M+1)^{2}\chi_{i,n_i}^{4}l_{i,n_i,n_i}^{-2}\rho_{i,n_i}^{4}((M+1)d^{*})
      + \chi_{i,n_i}^{3}U_{i,n_i}\tanh\Big(\frac{\chi_{i,n_i}^{3}U_{i,n_i}}{\epsilon_{i,n_i}}\Big) + \sum_{k=1}^{n_i}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,k}^{4}(\chi_{m,l},\hat\theta_m) + \sum_{k=1}^{n_i}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,k}^{4}(\chi_{m,l},\hat\theta_m)
      + (n_i-1)\sum_{q=1}^{n_i-1}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,q}^{4}(\chi_{m,l},\hat\theta_m) + \frac{1}{2}\sum_{k=1}^{n_i}l_{i,n_i,k}^{2} + \delta\epsilon_{i,n_i}   (45)

where, for k = 1, 2, ..., n_i, the l_{i,n_i,k} are positive constants.

Based on the design procedure from Step 1 to Step n_i, the Lyapunov function for the ith subsystem is defined as

    V_i = \sum_{j=1}^{n_i} V_{i,j} = \sum_{j=1}^{n_i}\frac{1}{4}\chi_{i,j}^{4} + \frac{b_m}{2\gamma_i}\tilde\theta_i^{2}.   (46)

Now, choose the following Lyapunov function for the whole system:

    V = \sum_{i=1}^{N} V_i = \sum_{i=1}^{N}\Big(\sum_{j=1}^{n_i}\frac{1}{4}\chi_{i,j}^{4} + \frac{b_m}{2\gamma_i}\tilde\theta_i^{2}\Big).   (47)

Further, combining (22), (32), (41), and (45), we have

    LV \le \sum_{i=1}^{N}\chi_{i,1}^{3}(g_{i,1}\chi_{i,2} + g_{i,1}\alpha_{i,1} + \Theta_{i,1}) + \sum_{i=1}^{N}\sum_{j=2}^{n_i-1}\chi_{i,j}^{3}(g_{i,j}\chi_{i,j+1} + g_{i,j}\alpha_{i,j} + \Theta_{i,j}) + \sum_{i=1}^{N}\chi_{i,n_i}^{3}(g_{i,n_i}u_i + \Theta_{i,n_i})
      + \sum_{i=1}^{N}\sum_{j=1}^{n_i}\sum_{k=1}^{j}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,k}^{4}(\chi_{m,l},\hat\theta_m) + \sum_{i=1}^{N}\sum_{j=1}^{n_i}\sum_{k=1}^{j}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,k}^{4}(\chi_{m,l},\hat\theta_m)
      + \sum_{i=1}^{N}\sum_{j=1}^{n_i}(j-1)\sum_{q=1}^{j-1}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,q}^{4}(\chi_{m,l},\hat\theta_m) + \sum_{i=1}^{N}\sum_{j=1}^{n_i}\Big(\frac{1}{2}\sum_{k=1}^{j}l_{i,j,k}^{2} + \delta\epsilon_{i,j}\Big)
      - \sum_{i=1}^{N}\sum_{j=2}^{n_i}\chi_{i,j}^{3}\frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\dot{\hat\theta}_i - \sum_{i=1}^{N}\frac{b_m}{\gamma_i}\tilde\theta_i\dot{\hat\theta}_i   (48)

where (with \Theta_{i,j} denoting the packaged known terms pulled out of (22), (41), and (45))

    \Theta_{i,1} = -\dot y_{di} + \frac{3}{4}M\chi_{i,1} + \frac{9}{8}(M+1)^{2}M\chi_{i,1} + \frac{9}{8}(M+1)^{2}l_{i,1,1}^{-2}\chi_{i,1}\rho_{i,1}^{4}((M+1)d^{*}) + U_{i,1}\tanh\Big(\frac{\chi_{i,1}^{3}U_{i,1}}{\epsilon_{i,1}}\Big)   (49)

    \Theta_{i,j} = -\sum_{k=1}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}g_{i,k}x_{i,k+1} - \sum_{k=0}^{j-1}\frac{\partial\alpha_{i,j-1}}{\partial y_{di}^{(k)}}y_{di}^{(k+1)} + \frac{3}{4}M\chi_{i,j} + \frac{3}{4}M\chi_{i,j}\sum_{k=1}^{j-1}\Big(\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\Big)^{4/3}
      + U_{i,j}\tanh\Big(\frac{\chi_{i,j}^{3}U_{i,j}}{\epsilon_{i,j}}\Big) + \frac{9}{8}j^{2}(M+1)^{2}M\chi_{i,j} + \frac{1}{8}(M+1)^{2}M\chi_{i,j}^{3}\Big(\sum_{p,q=1}^{j-1}\Big|\frac{\partial^{2}\alpha_{i,j-1}}{\partial x_{i,p}\partial x_{i,q}}\Big|\Big)^{2}
      + \frac{9}{8}j^{2}(M+1)^{2}M\chi_{i,j}\Big(\sum_{k=1}^{j-1}\Big(\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\Big)^{2}\Big)^{2} + \frac{9}{8}j^{2}(M+1)^{2}\chi_{i,j}\sum_{k=1}^{j-1}\Big(\frac{\partial\alpha_{i,j-1}}{\partial x_{i,k}}\Big)^{4}l_{i,j,k}^{-2}\rho_{i,k}^{4}((M+1)d^{*})
      + \frac{9}{8}j^{2}(M+1)^{2}\chi_{i,j}l_{i,j,j}^{-2}\rho_{i,j}^{4}((M+1)d^{*}), \qquad j = 2, 3, \ldots, n_i - 1   (50)


and \Theta_{i,n_i} is given by (50) with j = n_i, i.e.,

    \Theta_{i,n_i} = -\sum_{k=1}^{n_i-1}\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}g_{i,k}x_{i,k+1} - \sum_{k=0}^{n_i-1}\frac{\partial\alpha_{i,n_i-1}}{\partial y_{di}^{(k)}}y_{di}^{(k+1)} + \frac{3}{4}M\chi_{i,n_i} + \frac{3}{4}M\chi_{i,n_i}\sum_{k=1}^{n_i-1}\Big(\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}\Big)^{4/3}
      + U_{i,n_i}\tanh\Big(\frac{\chi_{i,n_i}^{3}U_{i,n_i}}{\epsilon_{i,n_i}}\Big) + \frac{9}{8}n_i^{2}(M+1)^{2}M\chi_{i,n_i} + \frac{1}{8}(M+1)^{2}M\chi_{i,n_i}^{3}\Big(\sum_{p,q=1}^{n_i-1}\Big|\frac{\partial^{2}\alpha_{i,n_i-1}}{\partial x_{i,p}\partial x_{i,q}}\Big|\Big)^{2}
      + \frac{9}{8}n_i^{2}(M+1)^{2}M\chi_{i,n_i}\Big(\sum_{k=1}^{n_i-1}\Big(\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}\Big)^{2}\Big)^{2} + \frac{9}{8}n_i^{2}(M+1)^{2}\chi_{i,n_i}\sum_{k=1}^{n_i-1}\Big(\frac{\partial\alpha_{i,n_i-1}}{\partial x_{i,k}}\Big)^{4}l_{i,n_i,k}^{-2}\rho_{i,k}^{4}((M+1)d^{*})
      + \frac{9}{8}n_i^{2}(M+1)^{2}\chi_{i,n_i}l_{i,n_i,n_i}^{-2}\rho_{i,n_i}^{4}((M+1)d^{*}).   (51)

By rearranging the order of summation, we have

    \sum_{i=1}^{N}\sum_{j=1}^{n_i}\sum_{k=1}^{j}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\phi_{i,k}^{4}(\chi_{m,l},\hat\theta_m) = \sum_{i=1}^{N}\sum_{j=1}^{n_i}\chi_{i,j}^{4}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\sum_{k=1}^{l}\bar\phi_{m,k}^{4}(\chi_{i,j},\hat\theta_i)   (52)

    \sum_{i=1}^{N}\sum_{j=1}^{n_i}\sum_{k=1}^{j}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,k}^{4}(\chi_{m,l},\hat\theta_m) = \sum_{i=1}^{N}\sum_{j=1}^{n_i}\chi_{i,j}^{4}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\sum_{k=1}^{l}\bar\rho_{m,k}^{4}(\chi_{i,j},\hat\theta_i)   (53)

    \sum_{i=1}^{N}\sum_{j=1}^{n_i}(j-1)\sum_{q=1}^{j-1}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\chi_{m,l}^{4}\bar\rho_{i,q}^{4}(\chi_{m,l},\hat\theta_m) = \sum_{i=1}^{N}\sum_{j=1}^{n_i}\chi_{i,j}^{4}\sum_{m=1}^{N}\sum_{l=2}^{n_m}(l-1)\sum_{q=1}^{l-1}\bar\rho_{m,q}^{4}(\chi_{i,j},\hat\theta_i).   (54)

Furthermore, we deal with the term next to the last one in (48). Using the definition of the adaptive law in (11), Lemma 3, and a rearrangement of the summation, one has

    -\sum_{i=1}^{N}\sum_{j=2}^{n_i}\chi_{i,j}^{3}\frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\dot{\hat\theta}_i = -\sum_{i=1}^{N}\sum_{j=2}^{n_i}\chi_{i,j}^{3}\frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\sum_{k=1}^{j-1}\frac{\gamma_i}{2a_{i,k}^{2}}\chi_{i,k}^{6}S_{i,k}^{T}S_{i,k} - \sum_{i=1}^{N}\sum_{j=2}^{n_i}\chi_{i,j}^{3}\frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\sum_{k=j}^{n_i}\frac{\gamma_i}{2a_{i,k}^{2}}\chi_{i,k}^{6}S_{i,k}^{T}S_{i,k}
      + \sum_{i=1}^{N}\sum_{j=2}^{n_i}\chi_{i,j}^{3}\frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\sigma_i\hat\theta_i
      \le -\sum_{i=1}^{N}\sum_{j=2}^{n_i}\chi_{i,j}^{3}\frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\sum_{k=1}^{j-1}\frac{\gamma_i}{2a_{i,k}^{2}}\chi_{i,k}^{6}S_{i,k}^{T}S_{i,k} + \sum_{i=1}^{N}\sum_{j=2}^{n_i}\chi_{i,j}^{3}\frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\sigma_i\hat\theta_i
      + \sum_{i=1}^{N}\sum_{j=2}^{n_i}\frac{\gamma_i}{2a_{i,j}^{2}}s^{2}\chi_{i,j}^{3}\Big(\sum_{k=2}^{j}\Big|\chi_{i,k}^{3}\frac{\partial\alpha_{i,k-1}}{\partial\hat\theta_i}\Big|\Big)   (55)

where Lemma 3 has been used to bound S_{i,k}^{T}S_{i,k} by s^{2}. In addition, for j = 1, 2, ..., n_i - 1, based on Young's inequality, one has

    \chi_{i,j}^{3}g_{i,j}\chi_{i,j+1} \le \frac{3}{4}g_{i,j}\chi_{i,j}^{4} + \frac{1}{4}g_{i,j}\chi_{i,j+1}^{4}.   (56)

Then, by taking (52)–(56) into account, we can rewrite (48) as

    LV \le \sum_{i=1}^{N}\chi_{i,1}^{3}\big(g_{i,1}\alpha_{i,1} + \bar f_{i,1}(Z_{i,1})\big) + \sum_{i=1}^{N}\sum_{j=2}^{n_i-1}\chi_{i,j}^{3}\big(g_{i,j}\alpha_{i,j} + \bar f_{i,j}(Z_{i,j})\big) + \sum_{i=1}^{N}\chi_{i,n_i}^{3}\big(g_{i,n_i}u_i + \bar f_{i,n_i}(Z_{i,n_i})\big)
      - \frac{3}{4}\sum_{i=1}^{N}\sum_{j=1}^{n_i}\chi_{i,j}^{4} + \sum_{i=1}^{N}\sum_{j=1}^{n_i}\Big(\frac{1}{2}\sum_{k=1}^{j}l_{i,j,k}^{2} + \delta\epsilon_{i,j}\Big) - \sum_{i=1}^{N}\frac{b_m}{\gamma_i}\tilde\theta_i\dot{\hat\theta}_i   (57)

where the functions \bar f_{i,j}(Z_{i,j}), i = 1, 2, ..., N, j = 1, 2, ..., n_i, are defined as

    \bar f_{i,1}(Z_{i,1}) = \frac{3}{4}\chi_{i,1} + \frac{3}{4}g_{i,1}\chi_{i,1} + \Theta_{i,1} + \chi_{i,1}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\sum_{k=1}^{l}\bar\phi_{m,k}^{4}(\chi_{i,1},\hat\theta_i)
      + \chi_{i,1}\sum_{m=1}^{N}\sum_{l=2}^{n_m}(l-1)\sum_{q=1}^{l-1}\bar\rho_{m,q}^{4}(\chi_{i,1},\hat\theta_i) + \chi_{i,1}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\sum_{k=1}^{l}\bar\rho_{m,k}^{4}(\chi_{i,1},\hat\theta_i)   (58)

    \bar f_{i,j}(Z_{i,j}) = \frac{3}{4}\chi_{i,j} + \frac{1}{4}g_{i,j-1}\chi_{i,j} + \frac{3}{4}g_{i,j}\chi_{i,j} + \Theta_{i,j} + \chi_{i,j}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\sum_{k=1}^{l}\bar\phi_{m,k}^{4}(\chi_{i,j},\hat\theta_i)
      + \chi_{i,j}\sum_{m=1}^{N}\sum_{l=2}^{n_m}(l-1)\sum_{q=1}^{l-1}\bar\rho_{m,q}^{4}(\chi_{i,j},\hat\theta_i) + \chi_{i,j}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\sum_{k=1}^{l}\bar\rho_{m,k}^{4}(\chi_{i,j},\hat\theta_i)
      - \frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\sum_{k=1}^{j-1}\frac{\gamma_i}{2a_{i,k}^{2}}\chi_{i,k}^{6}S_{i,k}^{T}S_{i,k} + \frac{\partial\alpha_{i,j-1}}{\partial\hat\theta_i}\sigma_i\hat\theta_i + \frac{\gamma_i}{2a_{i,j}^{2}}s^{2}\chi_{i,j}^{3}\Big(\sum_{k=2}^{j}\Big|\chi_{i,k}^{3}\frac{\partial\alpha_{i,k-1}}{\partial\hat\theta_i}\Big|\Big)   (59)


    \bar f_{i,n_i}(Z_{i,n_i}) = \Theta_{i,n_i} + \frac{3}{4}\chi_{i,n_i} + \frac{1}{4}g_{i,n_i-1}\chi_{i,n_i} + \chi_{i,n_i}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\sum_{k=1}^{l}\bar\phi_{m,k}^{4}(\chi_{i,n_i},\hat\theta_i)
      + \chi_{i,n_i}\sum_{m=1}^{N}\sum_{l=2}^{n_m}(l-1)\sum_{q=1}^{l-1}\bar\rho_{m,q}^{4}(\chi_{i,n_i},\hat\theta_i) + \chi_{i,n_i}\sum_{m=1}^{N}\sum_{l=1}^{n_m}\sum_{k=1}^{l}\bar\rho_{m,k}^{4}(\chi_{i,n_i},\hat\theta_i) + \frac{\partial\alpha_{i,n_i-1}}{\partial\hat\theta_i}\sigma_i\hat\theta_i
      - \frac{\partial\alpha_{i,n_i-1}}{\partial\hat\theta_i}\sum_{k=1}^{n_i-1}\frac{\gamma_i}{2a_{i,k}^{2}}\chi_{i,k}^{6}S_{i,k}^{T}S_{i,k} + \frac{\gamma_i}{2a_{i,n_i}^{2}}s^{2}\chi_{i,n_i}^{3}\Big(\sum_{k=2}^{n_i}\Big|\chi_{i,k}^{3}\frac{\partial\alpha_{i,k-1}}{\partial\hat\theta_i}\Big|\Big).   (60)

Since g_{i,j}, \bar\phi_{m,k}, and \bar\rho_{m,k} are unknown smooth functions, \bar f_{i,j}(Z_{i,j}), i = 1, 2, ..., N, j = 1, 2, ..., n_i, cannot be directly applied to design the virtual control input \alpha_{i,j} or the actual controller u_i. To compensate for the unknown function \bar f_{i,j}(Z_{i,j}), the neural network W_{i,j}^{T}S_{i,j}(Z_{i,j}) can be used to model \bar f_{i,j}(Z_{i,j}) such that, for any given \varepsilon_{i,j} > 0,

    \bar f_{i,j}(Z_{i,j}) = W_{i,j}^{T}S_{i,j}(Z_{i,j}) + \delta_{i,j}(Z_{i,j})   (61)

where \delta_{i,j}(Z_{i,j}) denotes the approximation error and satisfies |\delta_{i,j}(Z_{i,j})| < \varepsilon_{i,j}. Furthermore, by Young's inequality, one has

    \chi_{i,j}^{3}\bar f_{i,j}(Z_{i,j}) = \chi_{i,j}^{3}W_{i,j}^{T}S_{i,j} + \chi_{i,j}^{3}\delta_{i,j}(Z_{i,j}) \le \frac{b_m}{2a_{i,j}^{2}}\chi_{i,j}^{6}\theta_i S_{i,j}^{T}S_{i,j} + \frac{1}{2}a_{i,j}^{2} + \frac{3}{4}\chi_{i,j}^{4} + \frac{1}{4}\varepsilon_{i,j}^{4}   (62)

where i = 1, 2, ..., N, j = 1, 2, ..., n_i, and the unknown constant \theta_i is defined in (13). Substituting (61) into (57) and using (62) gives

    LV \le \sum_{i=1}^{N}\Big(\chi_{i,1}^{3}g_{i,1}\alpha_{i,1} + \frac{b_m}{2a_{i,1}^{2}}\chi_{i,1}^{6}\theta_i S_{i,1}^{T}S_{i,1}\Big) + \sum_{i=1}^{N}\sum_{j=2}^{n_i-1}\Big(\chi_{i,j}^{3}g_{i,j}\alpha_{i,j} + \frac{b_m}{2a_{i,j}^{2}}\chi_{i,j}^{6}\theta_i S_{i,j}^{T}S_{i,j}\Big)
      + \sum_{i=1}^{N}\Big(\chi_{i,n_i}^{3}g_{i,n_i}u_i + \frac{b_m}{2a_{i,n_i}^{2}}\chi_{i,n_i}^{6}\theta_i S_{i,n_i}^{T}S_{i,n_i}\Big) + \sum_{i=1}^{N}\sum_{j=1}^{n_i}\Big(\frac{1}{2}\sum_{k=1}^{j}l_{i,j,k}^{2} + \delta\epsilon_{i,j} + \frac{1}{2}a_{i,j}^{2} + \frac{1}{4}\varepsilon_{i,j}^{4}\Big) - \sum_{i=1}^{N}\frac{b_m}{\gamma_i}\tilde\theta_i\dot{\hat\theta}_i.   (63)

Furthermore, by considering the virtual control signal defined in (10), we have

    \chi_{i,j}^{3}g_{i,j}\alpha_{i,j} \le -\lambda_{i,j}b_m\chi_{i,j}^{4} - \frac{b_m}{2a_{i,j}^{2}}\chi_{i,j}^{6}\hat\theta_i S_{i,j}^{T}S_{i,j}   (64)

where i = 1, 2, ..., N, j = 1, 2, ..., n_i, and \alpha_{i,n_i} = u_i. Subsequently, combining (63) together with (64) and (11) results in

    LV \le \sum_{i=1}^{N}\sum_{j=1}^{n_i}\Big(-\lambda_{i,j}b_m\chi_{i,j}^{4} + \frac{b_m}{2a_{i,j}^{2}}\chi_{i,j}^{6}\tilde\theta_i S_{i,j}^{T}S_{i,j}\Big) - \sum_{i=1}^{N}\frac{b_m}{\gamma_i}\tilde\theta_i\Big(\sum_{j=1}^{n_i}\frac{\gamma_i}{2a_{i,j}^{2}}\chi_{i,j}^{6}S_{i,j}^{T}S_{i,j} - \sigma_i\hat\theta_i\Big)
      + \sum_{i=1}^{N}\sum_{j=1}^{n_i}\Big(\frac{1}{2}\sum_{k=1}^{j}l_{i,j,k}^{2} + \delta\epsilon_{i,j} + \frac{1}{2}a_{i,j}^{2} + \frac{1}{4}\varepsilon_{i,j}^{4}\Big)
      = -\sum_{i=1}^{N}\sum_{j=1}^{n_i}\lambda_{i,j}b_m\chi_{i,j}^{4} + \sum_{i=1}^{N}\frac{\sigma_i b_m}{\gamma_i}\tilde\theta_i\hat\theta_i + \sum_{i=1}^{N}\sum_{j=1}^{n_i}\Big(\frac{1}{2}\sum_{k=1}^{j}l_{i,j,k}^{2} + \delta\epsilon_{i,j} + \frac{1}{2}a_{i,j}^{2} + \frac{1}{4}\varepsilon_{i,j}^{4}\Big)
      \le -\sum_{i=1}^{N}\sum_{j=1}^{n_i}\lambda_{i,j}b_m\chi_{i,j}^{4} - \sum_{i=1}^{N}\frac{\sigma_i b_m}{2\gamma_i}\tilde\theta_i^{2} + \sum_{i=1}^{N}\frac{\sigma_i b_m}{2\gamma_i}\theta_i^{2} + \sum_{i=1}^{N}\sum_{j=1}^{n_i}\Big(\frac{1}{2}\sum_{k=1}^{j}l_{i,j,k}^{2} + \delta\epsilon_{i,j} + \frac{1}{2}a_{i,j}^{2} + \frac{1}{4}\varepsilon_{i,j}^{4}\Big)   (65)

where the inequality \tilde\theta_i\hat\theta_i \le -(1/2)\tilde\theta_i^{2} + (1/2)\theta_i^{2} has been used. At the present stage, the adaptive neural control design has been completed based on the backstepping technique. The main results of this paper are summarized in the following theorem.

Theorem 1: Consider the MIMO stochastic nonlinear interconnected system (3), the controller (10), and the adaptive law (11) under Assumptions 1–4. Assume that, for 1 \le i \le N, 1 \le j \le n_i, the packaged unknown functions \bar f_{i,j}(Z_{i,j}) can be approximated by the neural networks W_{i,j}^{T}S_{i,j}(Z_{i,j}) in the sense that the approximation errors \delta_{i,j}(Z_{i,j}) are bounded. Then, for bounded initial conditions [\bar\chi_i^{T}(0), \hat\theta_i(0)]^{T} \in \Omega_0, all the signals in the closed-loop system remain semiglobally uniformly bounded, and the error signals \chi_{i,j} and \tilde\theta_i eventually converge to the compact set \Omega_s defined by

    \Omega_s = \Big\{\chi_{i,j}, \tilde\theta_i \ \Big|\ \sum_{i=1}^{N}\sum_{j=1}^{n_i}E[|\chi_{i,j}|^{4}] \le \frac{4b_0}{a_0},\ |\tilde\theta_i| \le \sqrt{\frac{2\gamma_i b_0}{b_m a_0}},\ 1 \le i \le N,\ 1 \le j \le n_i\Big\}.   (66)
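The size of \Omega_s is governed by the ratio b_0/a_0 defined in the proof below. As a purely illustrative sketch (the constants a_0, b_0, V(0) here are made up, not from the paper), the exponential envelope of the expected Lyapunov value in (69) can be tabulated directly:

```python
# Illustration of the moment bound (69):
#   E[V(t)] <= (V(0) - b0/a0) * exp(-a0*t) + b0/a0.
# a0, b0, V0 are made-up values used only to show the decay toward b0/a0.
import numpy as np

a0, b0, V0 = 2.0, 0.5, 3.0
t = np.linspace(0.0, 5.0, 6)
bound = (V0 - b0 / a0) * np.exp(-a0 * t) + b0 / a0
for ti, bi in zip(t, bound):
    print(f"t = {ti:.1f}:  E[V] <= {bi:.4f}")   # decays toward b0/a0 = 0.25
```

Increasing the gains \lambda_{i,j} (hence a_0) or decreasing a_{i,j}, \epsilon_{i,j}, l_{i,j,k} (hence b_0) shrinks the residual set, at the usual cost of larger control effort.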


Proof: Let

    a_0 = \min\{4\lambda_{i,j}b_m, \sigma_i : i = 1, 2, \ldots, N,\ j = 1, 2, \ldots, n_i\}
    b_0 = \sum_{i=1}^{N}\sum_{j=1}^{n_i}\Big(\frac{1}{2}\sum_{k=1}^{j}l_{i,j,k}^{2} + \delta\epsilon_{i,j} + \frac{1}{2}a_{i,j}^{2} + \frac{1}{4}\varepsilon_{i,j}^{4}\Big) + \sum_{i=1}^{N}\frac{\sigma_i b_m}{2\gamma_i}\theta_i^{2}.

Then, (65) can be rewritten as

    LV(t) \le -a_0 V(t) + b_0, \quad t \ge 0.   (67)

Furthermore, the following result can be obtained directly from [6, Th. 4.1]:

    \frac{d}{dt}E[V(t)] \le -a_0 E[V(t)] + b_0   (68)

which satisfies

    0 \le E[V(t)] \le \Big(V(0) - \frac{b_0}{a_0}\Big)e^{-a_0 t} + \frac{b_0}{a_0}   (69)

that is,

    E[V(t)] < V(0) + \frac{b_0}{a_0}, \quad \forall t > 0   (70)

where V(0) = \sum_{i=1}^{N}\sum_{j=1}^{n_i}(1/4)\chi_{i,j}^{4}(0) + \sum_{i=1}^{N}(b_m/2\gamma_i)\tilde\theta_i^{2}(0). Utilizing (70) and the definition of V, it can be concluded that all the signals in the closed-loop system are semiglobally uniformly bounded in the sense of the fourth moment and that \tilde\theta_i is semiglobally uniformly bounded in mean square. In addition, from (67) and the definition of V in (47), Lemma 1 is satisfied. Then, we can obtain

    E[V(t)] \le e^{-a_0 t}V(0) + \frac{b_0}{a_0}, \quad \forall t > 0   (71)

which implies that

    E[V(t)] \le \frac{b_0}{a_0}, \quad t \to \infty.   (72)

Therefore, from (47), the error signals \chi_{i,j} and \tilde\theta_i eventually converge to the compact set \Omega_s specified in (66); that is, all the signals in the closed-loop system are semiglobally uniformly ultimately bounded in the sense of the fourth moment. The design procedure of the controller is visualized in the block diagram shown in Fig. 1.

Fig. 1. Block diagram of the control system.

Remark 5: Note that several adaptive neural control schemes have been proposed in [37] and [47] for nonlinear interconnected systems. The main differences between this result and the ones in [37] and [47] are summarized as follows. 1) The results in [37] and [47] are obtained for deterministic interconnected systems, whereas this paper considers a class of stochastic nonlinear interconnected systems, which is in a more general form. 2) The control schemes in [37] and [47] require estimating all the elements of the weight vectors, while the approach proposed in this paper only needs to estimate one parameter for each subsystem.

IV. SIMULATION EXAMPLE

In this section, to show the effectiveness of the proposed control scheme, we consider an interconnected second-order stochastic nonlinear system consisting of three subsystems, described by

    dx_{1,1} = \big((1 + x_{1,1}^{2})x_{1,2} + x_{1,1}x_{2,1}x_{2,2}x_{3,1}\big)dt + \frac{x_{2,2}^{2}}{1 + x_{3,2}^{2} + x_{2,2}^{2}}\,dw_1
    dx_{1,2} = \Big((2 + \cos x_{1,2}^{2})u_1 + \frac{x_{1,2}^{2} + x_{2,2}^{2}}{1 + e^{-x_{3,1}^{2} - x_{2,2}^{2}}}\Big)dt + \frac{x_{2,2}^{2}}{1 + x_{2,2}^{2} + x_{3,1}^{2}}\,dw_1
    y_1 = x_{1,1}
    dx_{2,1} = \big((3 + \sin x_{2,1})x_{2,2} + x_{1,1}x_{1,2}x_{3,2}\big)dt + 0.5x_{2,2}^{2}\cos x_{3,1}^{2}\,dw_2   (73)
    dx_{2,2} = \big((2 + e^{-x_{2,1}x_{2,2}})u_2 + \cos(x_{2,1})x_{2,2} + x_{1,1}x_{3,1}\big)dt + x_{2,1}^{2}\sin(x_{1,1}x_{3,2})\,dw_2
    y_2 = x_{2,1}
    dx_{3,1} = \big((2 + \sin x_{3,1}^{2})x_{3,2} + x_{1,1}x_{2,1}x_{2,2}\big)dt + 0.5x_{3,2}^{2}\sin x_{2,2}^{2}\,dw_3
    dx_{3,2} = \big((5 + x_{3,2}^{2})u_3 + \cos(x_{3,1})x_{3,2} + x_{1,1}x_{2,1}\big)dt + x_{2,2}^{2}\cos(x_{1,1}x_{3,1})\,dw_3
    y_3 = x_{3,1}

where, for i = 1, 2, 3, x_{i,1} and x_{i,2} denote the state variables, u_i are the system control input signals, y_i are the system outputs, and, for simplicity, \dot w_i is chosen as one-dimensional Gaussian white noise with zero mean and unit variance. It can easily be verified that Assumption 1 holds with b_m = 1. The desired tracking trajectories are chosen as y_{d1} = 0.5(\sin(t) + \sin(0.5t)), y_{d2} = 0.5(\sin(1.5t) + \sin(0.5t)), and y_{d3} = 0.5\sin(t) + \sin(0.5t), from which Assumption 2 is satisfied. It is apparent


Fig. 2. System output variable y_1 and the reference signal y_{d1}.


Fig. 3. System output variable y_2 and the reference signal y_{d2}.

Fig. 4. System output variable y_3 and the reference signal y_{d3}.

Fig. 5. Control input signals u_1, u_2, and u_3.

that the above system belongs to the considered class of interconnected nonlinear systems: for instance, the first subsystem contains the terms f_{1,1}(\cdot) = x_{1,1}x_{2,1}x_{2,2}x_{3,1}, \psi_{1,1}(\cdot) = x_{2,2}^{2}/(1 + x_{3,2}^{2} + x_{2,2}^{2}), and f_{1,2}(\cdot) = (x_{1,2}^{2} + x_{2,2}^{2})/(1 + e^{-x_{3,1}^{2} - x_{2,2}^{2}}), which are nonlinear functions of the whole system state. Therefore, the existing control schemes for large-scale nonlinear interconnected systems are not applicable. In addition, taking the functions f_{1,1}(\cdot) and \psi_{1,1}(\cdot) as examples, choosing \phi_{1,1}(s) = s^{4} and \rho_{1,1}(s) = s^{2}, respectively, shows that Assumptions 3 and 4 are satisfied.

The control objective is to design an adaptive neural control scheme such that all the signals remain bounded and the system outputs y_1, y_2, and y_3 follow the desired tracking trajectories y_{d1}, y_{d2}, and y_{d3}, respectively. Based on the control approach developed in Section III, the virtual control signals, actual controllers, and adaptive laws are constructed as

    \alpha_{i,1} = -\lambda_{i,1}\chi_{i,1} - \frac{1}{2a_{i,1}^{2}}\chi_{i,1}^{3}\hat\theta_i S_{i,1}^{T}S_{i,1}
    u_i = -\lambda_{i,2}\chi_{i,2} - \frac{1}{2a_{i,2}^{2}}\chi_{i,2}^{3}\hat\theta_i S_{i,2}^{T}S_{i,2}
    \dot{\hat\theta}_i = \sum_{j=1}^{2}\frac{\gamma_i}{2a_{i,j}^{2}}\chi_{i,j}^{6}S_{i,j}^{T}S_{i,j} - \sigma_i\hat\theta_i

where i = 1, 2, 3, \chi_{i,1} = x_{i,1} - y_{di}, and \chi_{i,2} = x_{i,2} - \alpha_{i,1}.

In the simulation, the initial values are set to [x_{1,1}(0), x_{1,2}(0), x_{2,1}(0), x_{2,2}(0), x_{3,1}(0), x_{3,2}(0)]^{T} = [0.7, 0.5, 0.7, 0.5, 0.7, 0.5]^{T} and [\hat\theta_1(0), \hat\theta_2(0), \hat\theta_3(0)]^{T} = [0, 0, 0]^{T}. The simulation is run with the design parameters \lambda_{1,1} = \lambda_{1,2} = \lambda_{2,1} = \lambda_{2,2} = \lambda_{3,1} = \lambda_{3,2} = 4, a_{1,1} = a_{1,2} = a_{2,1} = a_{2,2} = a_{3,1} = a_{3,2} = 5, \gamma_1 = \gamma_2 = \gamma_3 = 1, and \sigma_1 = \sigma_2 = \sigma_3 = 0.1.

In practice, the chosen centers and widths of the RBFs have significant impacts on the performance of the presented controller. According to [49], Gaussian RBF neural networks arranged on a regular lattice on R^{n} can uniformly approximate sufficiently smooth functions on closed bounded subsets. Therefore, in the following, the centers and widths are chosen on a regular lattice in the respective compact sets. In particular, we apply three nodes for each input dimension of W_{i,1}^{T}S_{i,1}(Z_{i,1}) and W_{i,2}^{T}S_{i,2}(Z_{i,2}), i = 1, 2, 3. Therefore, neural network W_{i,1}^{T}S_{i,1}(Z_{i,1}) contains 81 nodes with centers spaced evenly in the interval [-5, 5] \times [-5, 5] \times [-5, 5] \times [-5, 5] and widths of the Gaussian functions equal to two, and neural network W_{i,2}^{T}S_{i,2}(Z_{i,2}) contains 729 nodes with centers spaced evenly in the interval [-5, 5] \times [-5, 5] \times [-5, 5] \times [-5, 5] \times [-5, 5] \times [-5, 5] and widths equal to two. The simulation results are shown in Figs. 2–6. Figs. 2–4 show the output variables y_1, y_2, y_3 and the desired reference signals y_{d1}, y_{d2}, and y_{d3}, respectively. Fig. 5 shows that the control input signals u_1, u_2, and u_3 are bounded, and Fig. 6 shows the response curves of the state variables x_{1,2}, x_{2,2}, and x_{3,2} and the adaptive parameters \hat\theta_1, \hat\theta_2, and \hat\theta_3.
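For readers who want to reproduce a setup of this kind, the sketch below (an illustrative skeleton, not the authors' simulation code) builds the RBF lattices just described and evaluates the example's controller and adaptive law for one subsystem; the exact composition of Z_{i,2} (in particular the placement of the y_{di} derivatives) is an assumption of the sketch.

```python
# Sketch of the Section IV setup: lattice RBF centers (3 nodes per dimension on
# [-5, 5], width eta = 2) and the controller/adaptation laws of the example.
import itertools
import numpy as np

def lattice_centers(dim, nodes=3, lo=-5.0, hi=5.0):
    """Regular lattice of RBF centers: nodes**dim centers in [lo, hi]**dim."""
    grid = np.linspace(lo, hi, nodes)
    return np.array(list(itertools.product(grid, repeat=dim)))

def S(Z, centers, eta=2.0):
    """Gaussian basis vector (7)."""
    d = centers - Z
    return np.exp(-np.sum(d * d, axis=1) / eta**2)

C1 = lattice_centers(4)   # Z_{i,1} = [x_{i,1}, theta_hat_i, y_di, dy_di]: 3**4 = 81 nodes
C2 = lattice_centers(6)   # Z_{i,2} has six inputs: 3**6 = 729 nodes
lam, a, gamma, sigma = 4.0, 5.0, 1.0, 0.1   # design parameters from the example

def control_and_adaptation(x1, x2, th, yd, dyd, d2yd):
    """One subsystem: alpha_{i,1}, u_i, and theta_hat_dot."""
    chi1 = x1 - yd
    S1 = S(np.array([x1, th, yd, dyd]), C1)
    alpha1 = -lam * chi1 - chi1**3 * th * (S1 @ S1) / (2 * a**2)
    chi2 = x2 - alpha1
    S2 = S(np.array([x1, x2, th, yd, dyd, d2yd]), C2)   # input ordering assumed
    u = -lam * chi2 - chi2**3 * th * (S2 @ S2) / (2 * a**2)
    th_dot = gamma * (chi1**6 * (S1 @ S1) + chi2**6 * (S2 @ S2)) / (2 * a**2) - sigma * th
    return u, th_dot
```

The stochastic states would then be advanced with an Euler–Maruyama step, x <- x + f(x, u) dt + psi(x) sqrt(dt) xi with xi ~ N(0, 1), which is the standard way to realize the Gaussian white noise \dot w_i in discrete time.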


Fig. 6. Other variables x_{1,2}, x_{2,2}, x_{3,2}, \hat\theta_1, \hat\theta_2, and \hat\theta_3.

Apparently, the simulation results show that good tracking performance is achieved and that all the signals in the closed-loop system are bounded.

V. CONCLUSION

In this paper, the problem of decentralized adaptive neural tracking control has been considered for a class of stochastic nonlinear interconnected systems. In the controller design, RBF neural networks are used to approximate the packaged unknown nonlinearities, and an adaptive neural decentralized control scheme is then developed using the backstepping design. It has been proved that the proposed controller guarantees that all the signals in the closed-loop system remain fourth-moment semiglobally uniformly ultimately bounded and that the tracking errors eventually converge to a small neighborhood of the origin. The main contributions of this paper are that the systematic backstepping control algorithm is extended to a class of large-scale stochastic nonlinear strongly interconnected systems and that the developed control law requires only one adaptive parameter for each n-order subsystem. Simulation results have been provided to show the effectiveness of our results.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their constructive comments, which helped to greatly improve the presentation of this paper.

REFERENCES

[1] Y. Niu, D. W. C. Ho, and J. Lam, "Robust integral sliding mode control for uncertain stochastic systems with time-varying delay," Automatica, vol. 41, no. 5, pp. 873–880, 2005.
[2] Z. Pan and T. Basar, "Backstepping controller design for nonlinear stochastic systems under a risk-sensitive cost criterion," SIAM J. Control Optim., vol. 37, no. 3, pp. 957–995, 1999.
[3] H. Deng and M. Krstić, "Stochastic nonlinear stabilization—I: A backstepping design," Syst. Control Lett., vol. 32, no. 3, pp. 143–150, 1997.
[4] L. Wu and W. X. Zheng, "L2–L∞ control of nonlinear fuzzy Itô stochastic delay systems via dynamic output feedback," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 5, pp. 1308–1315, Oct. 2009.
[5] H. Deng and M. Krstić, "Stochastic nonlinear stabilization—II: Inverse optimality," Syst. Control Lett., vol. 32, no. 3, pp. 151–159, 1997.

[6] H. Deng, M. Krstić, and R. J. Williams, "Stabilization of stochastic nonlinear systems driven by noise of unknown covariance," IEEE Trans. Autom. Control, vol. 46, no. 8, pp. 1237–1253, Aug. 2001.
[7] Y. Xia, M. Fu, P. Shi, Z. Wu, and J. Zhang, "Adaptive backstepping controller design for stochastic jump systems," IEEE Trans. Autom. Control, vol. 54, no. 12, pp. 2853–2859, Dec. 2009.
[8] Z.-J. Wu, X.-J. Xie, P. Shi, and Y.-Q. Xia, "Backstepping controller design for a class of stochastic nonlinear systems with Markovian switching," Automatica, vol. 45, no. 4, pp. 997–1004, 2009.
[9] L. Liu and X.-J. Xie, "Output-feedback stabilization for stochastic high-order nonlinear systems with time-varying delay," Automatica, vol. 47, no. 12, pp. 2772–2779, 2011.
[10] X.-J. Xie and J. Tian, "Adaptive state-feedback stabilization of high-order stochastic systems with nonlinear parameterization," Automatica, vol. 45, no. 1, pp. 126–133, 2009.
[11] H. Zhang, M. Li, J. Yang, and D. Yang, "Fuzzy model-based robust networked control for a class of nonlinear systems," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 39, no. 2, pp. 437–447, Mar. 2009.
[12] S. S. Ge and C. Wang, "Direct adaptive NN control of a class of nonlinear systems," IEEE Trans. Neural Netw., vol. 13, no. 1, pp. 214–221, Jan. 2002.
[13] S. S. Ge and K. P. Tee, "Approximation-based control of nonlinear MIMO time-delay systems," Automatica, vol. 43, no. 1, pp. 31–43, 2007.
[14] C. Wang, D. J. Hill, S. S. Ge, and G. R. Chen, "An ISS-modular approach for adaptive neural control of pure-feedback systems," Automatica, vol. 42, no. 5, pp. 723–731, 2006.
[15] C. Wang, M. Wang, T. Liu, and D. J. Hill, "Learning from ISS-modular adaptive NN control of nonlinear strict-feedback systems," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 10, pp. 1539–1550, Oct. 2012.
[16] M. Chen and S. S. Ge, "Direct adaptive neural control for a class of uncertain nonaffine nonlinear systems based on disturbance observer," IEEE Trans. Cybern., vol. 43, no. 4, pp. 1213–1225, Aug. 2013.
[17] T. Zhang and S. S. Ge, "Adaptive neural network tracking control of MIMO nonlinear systems with unknown dead zones and control directions," IEEE Trans. Neural Netw., vol. 20, no. 3, pp. 483–497, Mar. 2009.
[18] D. Wang and J. Huang, "Neural network-based adaptive dynamic surface control for a class of uncertain nonlinear systems in strict-feedback form," IEEE Trans. Neural Netw., vol. 16, no. 1, pp. 195–202, Jan. 2005.
[19] T.-S. Li, D. Wang, G. Feng, and S.-C. Tong, "A DSC approach to robust adaptive NN tracking control for strict-feedback nonlinear systems," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 3, pp. 915–927, Jun. 2010.
[20] B. Chen, X. Liu, K. Liu, and C. Lin, "Direct adaptive fuzzy control of nonlinear strict-feedback systems," Automatica, vol. 45, no. 6, pp. 1530–1535, 2009.
[21] B. Chen, X. P. Liu, S. S. Ge, and C. Lin, "Adaptive fuzzy control of a class of nonlinear systems by fuzzy approximation approach," IEEE Trans. Fuzzy Syst., vol. 20, no. 6, pp. 1012–1021, Dec. 2012.
[22] C. Yang, S. S. Ge, C. Xiang, T. Chai, and T. H. Lee, "Output feedback NN control for two classes of discrete-time systems with unknown control directions in a unified approach," IEEE Trans. Neural Netw., vol. 19, no. 11, pp. 1873–1886, Nov. 2008.
[23] S.-C. Tong, X.-L. He, and H.-G. Zhang, "A combined backstepping and small-gain approach to robust adaptive fuzzy output feedback control," IEEE Trans. Fuzzy Syst., vol. 17, no. 5, pp. 1059–1069, Oct. 2009.
[24] C. L. P. Chen, Y.-J. Liu, and G.-X. Wen, "Fuzzy neural network-based adaptive control for a class of uncertain nonlinear stochastic systems," IEEE Trans. Cybern., vol. 44, no. 5, pp. 583–593, May 2014.
[25] Y.-J. Liu, S. Tong, and C. L. P. Chen, "Adaptive fuzzy control via observer design for uncertain nonlinear systems with unmodeled dynamics," IEEE Trans. Fuzzy Syst., vol. 21, no. 2, pp. 275–288, Apr. 2013.
[26] Y.-J. Liu, C. L. P. Chen, G.-X. Wen, and S. Tong, "Adaptive neural output feedback tracking control for a class of uncertain discrete-time nonlinear systems," IEEE Trans. Neural Netw., vol. 22, no. 7, pp. 1162–1167, Jul. 2011.
[27] S. Tong, S. Sui, and Y. Li, "Fuzzy adaptive output feedback control of MIMO nonlinear systems with partial tracking errors constrained," IEEE Trans. Fuzzy Syst., doi: 10.1109/TFUZZ.2014.2327987, 2014.
[28] Z. Liu, F. Wang, Y. Zhang, X. Chen, and C. L. P. Chen, "Adaptive fuzzy output-feedback controller design for nonlinear systems via backstepping and small-gain approach," IEEE Trans. Cybern., vol. 44, no. 10, pp. 1714–1725, Oct. 2014.
[29] M. Wang, B. Chen, and P. Shi, "Adaptive neural control for a class of perturbed strict-feedback nonlinear time-delay systems," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 3, pp. 721–730, Jun. 2008.


[30] H. E. Psillakis and A. T. Alexandridis, "NN-based adaptive tracking control of uncertain nonlinear systems disturbed by unknown covariance noise," IEEE Trans. Neural Netw., vol. 18, no. 6, pp. 1830–1835, Nov. 2007.
[31] H. Wang, B. Chen, X. Liu, K. Liu, and C. Lin, "Robust adaptive fuzzy tracking control for pure-feedback stochastic nonlinear systems with input constraints," IEEE Trans. Cybern., vol. 43, no. 6, pp. 2093–2104, Dec. 2013.
[32] Z. Yu and H. Du, "Adaptive neural control for uncertain stochastic nonlinear strict-feedback systems with time-varying delays: A Razumikhin functional method," Neurocomputing, vol. 74, nos. 12–13, pp. 2072–2082, 2011.
[33] H. Wang, B. Chen, and C. Lin, "Approximation-based adaptive fuzzy control for a class of non-strict-feedback stochastic nonlinear systems," Sci. China Inf. Sci., vol. 57, no. 3, pp. 1–16, 2014.
[34] Y. Li, S. Tong, and T. Li, "Observer-based adaptive fuzzy tracking control of MIMO stochastic nonlinear systems with unknown control direction and unknown dead-zones," IEEE Trans. Fuzzy Syst., doi: 10.1109/TFUZZ.2014.2348017, 2014.
[35] S. Tong, T. Wang, Y. Li, and B. Chen, "A combined backstepping and stochastic small-gain approach to robust adaptive fuzzy output feedback control," IEEE Trans. Fuzzy Syst., vol. 21, no. 2, pp. 314–321, Apr. 2013.
[36] W. Chen, L. Jiao, J. Li, and R. Li, "Adaptive NN backstepping output-feedback control for stochastic nonlinear strict-feedback systems with time-varying delays," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 3, pp. 939–950, Jun. 2010.
[37] S. N. Huang, K. K. Tan, and T. H. Lee, "Nonlinear adaptive control of interconnected systems using neural networks," IEEE Trans. Neural Netw., vol. 17, no. 1, pp. 243–246, Jan. 2006.
[38] S. Tong, C. Liu, and Y. Li, "Fuzzy-adaptive decentralized output-feedback control for large-scale nonlinear systems with dynamical uncertainties," IEEE Trans. Fuzzy Syst., vol. 18, no. 5, pp. 845–861, Oct. 2010.
[39] D. T. Gavel and D. D. Siljak, "Decentralized adaptive control: Structural conditions for stability," IEEE Trans. Autom. Control, vol. 34, no. 4, pp. 413–426, Apr. 1989.
[40] C. Wen, "Decentralized adaptive regulation," IEEE Trans. Autom. Control, vol. 39, no. 10, pp. 2163–2166, Oct. 1994.
[41] C. Hua and X. Guan, "Output feedback stabilization for time-delay nonlinear interconnected systems using neural networks," IEEE Trans. Neural Netw., vol. 19, no. 4, pp. 637–688, Apr. 2008.
[42] S. Xie and L. Xie, "Decentralized stabilization of a class of interconnected stochastic nonlinear systems," IEEE Trans. Autom. Control, vol. 45, no. 1, pp. 132–137, Jan. 2000.
[43] S.-J. Liu, J.-F. Zhang, and Z.-P. Jiang, "Decentralized adaptive output-feedback stabilization for large-scale stochastic nonlinear systems," Automatica, vol. 43, no. 2, pp. 238–251, 2007.
[44] W. Chen, L. C. Jiao, and J. Wu, "Decentralized backstepping output-feedback control for stochastic interconnected systems with time-varying delays using neural networks," Neural Comput. Appl., vol. 21, no. 6, pp. 1375–1390, 2012.
[45] J. Li, W. Chen, and J.-M. Li, "Adaptive NN output-feedback decentralized stabilization for a class of large-scale stochastic nonlinear strict-feedback systems," Int. J. Robust Nonlinear Control, vol. 21, no. 4, pp. 452–472, 2011.
[46] Q. Zhou, P. Shi, H. Liu, and S. Xu, "Neural-network-based decentralized adaptive output-feedback control for large-scale stochastic nonlinear systems," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 6, pp. 1608–1619, Dec. 2012.
[47] S. J. Yoo and J. B. Park, "Neural-network-based decentralized adaptive control for a class of large-scale nonlinear systems with unknown time-varying delays," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 5, pp. 1316–1323, Oct. 2009.


[48] M. M. Polycarpou and P. A. Ioannou, "A robust adaptive nonlinear control design," Automatica, vol. 32, no. 3, pp. 423–427, Mar. 1996.
[49] R. M. Sanner and J.-J. E. Slotine, "Gaussian networks for direct adaptive control," IEEE Trans. Neural Netw., vol. 3, no. 6, pp. 837–863, Nov. 1992.
[50] A. J. Kurdila, F. J. Narcowich, and J. D. Ward, "Persistency of excitation in identification using radial basis function approximants," SIAM J. Control Optim., vol. 33, no. 2, pp. 625–642, 1995.

Huanqing Wang received the B.Sc. degree in mathematics from Bohai University, Jinzhou, China, in 2003, the M.Sc. degree in mathematics from Inner Mongolia University, Huhhot, China, in 2006, and the Ph.D. degree from the Institute of Complexity Science, Qingdao University, Qingdao, China, in 2013. He is currently an Associate Professor with the School of Mathematics and Physics, Bohai University. His current research interests include nonlinear control, adaptive neural control, and stochastic nonlinear systems.

Xiaoping Liu received the B.Sc., M.Sc., and Ph.D. degrees from Northeastern University, Shenyang, China, in 1984, 1987, and 1989, respectively, all in electrical engineering. He was with the School of Information Science and Engineering, Northeastern University, for ten years. He joined the Department of Electrical Engineering, Lakehead University, Thunder Bay, ON, Canada, in 2001. His current research interests include nonlinear control systems, singular systems, and robust control. Prof. Liu is a member of the Professional Engineers of Ontario.

Kefu Liu received the B.Eng. and M.Sc. degrees from the Central South University of Technology, Changsha, China, in 1981 and 1984, respectively, and the Ph.D. degree from the Technical University of Nova Scotia, Halifax, NS, Canada, in 1992, all in mechanical engineering. He was an Assistant Professor with St. Mary’s University, Halifax, from 1993 to 1995, and Dalhousie University, Halifax, from 1995 to 1998. He joined Lakehead University, Thunder Bay, ON, Canada, in 1998, where he is currently a Full Professor with the Department of Mechanical Engineering. His current research interests include vibration control, control of nonlinear systems, and mechatronics. Prof. Liu is a member of Professional Engineers of Ontario.
