IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 26, NO. 9, SEPTEMBER 2015


Distributed Containment Control for Multiple Unknown Second-Order Nonlinear Systems With Application to Networked Lagrangian Systems

Jie Mei, Wei Ren, Bing Li, and Guangfu Ma

Abstract— In this paper, we consider the distributed containment control problem for multiagent systems with unknown nonlinear dynamics. More specifically, we focus on multiple second-order nonlinear systems and networked Lagrangian systems. We first study the distributed containment control problem for multiple second-order nonlinear systems with multiple dynamic leaders in the presence of unknown nonlinearities and external disturbances, under a general directed graph that characterizes the interaction among the leaders and the followers. A distributed adaptive control algorithm with an adaptive gain design based on the approximation capability of neural networks is proposed. We present a necessary and sufficient condition on the directed graph under which the containment error can be made as small as desired. As a byproduct, the leaderless consensus problem is solved with asymptotic convergence. Because relative velocity measurements between neighbors are generally more difficult to obtain than relative position measurements, we then propose a distributed containment control algorithm that does not use neighbors’ velocity information. A two-step Lyapunov-based method is used to study the convergence of the closed-loop system. Next, we apply these ideas to the containment control problem for networked unknown Lagrangian systems under a general directed graph. All the proposed algorithms are distributed and can be implemented using only local measurements in the absence of communication. Finally, simulation examples are provided to show the effectiveness of the proposed control algorithms.

Index Terms— Consensus, containment control, cooperative control, Lagrangian systems, multiagent systems, nonlinear systems.

I. INTRODUCTION

Distributed coordination of multiagent systems has drawn much attention in the past decade, leading to significant results in consensus, flocking, and formation

Manuscript received May 30, 2013; revised September 16, 2014; accepted September 16, 2014. Date of publication October 15, 2014; date of current version August 17, 2015. This work was supported in part by the National Science Foundation under Grant ECCS-1213295 and Grant ECCS-1307678, in part by the National Natural Science Foundation of China under Grant 61174200, Grant 61403094, and Grant 61120106010, in part by the Natural Science Foundation of Guangdong Province under Grant S2012040007301, in part by the China Post-Doctoral Science Foundation under Grant 2012M520737 and Grant 2014T70338, and in part by the Foundation Research Project of Shenzhen under Grant JCYJ20120613115259889. J. Mei and B. Li are with the School of Mechanical Engineering and Automation, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen 518055, China (e-mail: [email protected]; [email protected]). W. Ren is with the Department of Electrical and Computer Engineering, University of California at Riverside, Riverside, CA 92521 USA, and also with the Center of Excellence in Intelligent Engineering Systems, King Abdulaziz University, Jeddah 22254, Saudi Arabia (e-mail: [email protected]). G. Ma is with the Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2014.2359955

control [1]–[6]. Many existing works on distributed coordination focus on the consensus problem when there is no leader (see the survey papers [7], [8] and the references therein for more details). The main feature of consensus is that every agent updates its own state using the states of its local neighbors so that the group of agents approaches a common value. In practice, a group of agents might be required to track a reference or to move into an area satisfying some special geometric constraints, that is, the cases with a single leader or with multiple leaders. The applications can be broadened when a group objective is encapsulated by the leader or leaders. There are existing results for both the case with a single leader [9], [10] and the case with multiple leaders [11]–[13] for multiagent systems with single-integrator or double-integrator dynamics. Unfortunately, many physical systems are inherently nonlinear in practice. Many commonly observed phenomena cannot be described by linear equations, for example, multiple equilibria, limit cycles, bifurcations, and complex dynamical behavior, to name a few [14]. Thus, it is important and meaningful to describe dynamical systems by nonlinear equations and to consider distributed coordination of multiple nonlinear systems. Recent work on coordination of nonlinear systems often focuses on special models, such as Lagrangian systems, unicycles, attitude dynamics of rigid bodies, and passive systems. For general nonlinear systems, the results focus on coordination of linear multiagent systems with nonlinear protocols [15], [16] and nonlinear multiagent systems with identical dynamics [17]–[19]. There are also some results on multiagent systems with general nonlinear dynamics in the presence of unknown nonlinearities and external disturbances [20]–[25]. The neural-network-based approach to the coordination problem of uncertain multiagent systems was originally proposed in [20] and [21].
The authors of [20] address the leaderless consensus problem for multiple first-order nonlinear systems under an undirected graph, and those of [21] address the coordinated tracking problem under a directed graph that contains a directed spanning tree and has no loop. Das and Lewis [22] study the coordinated tracking problem for multiple first-order nonlinear systems under a strongly connected directed graph. Reference [22] is extended to second-order nonlinear systems in [23] and to high-order nonlinear systems in [24]. A summary of the results in [22]–[24] can be found in [25]. As a special case of second-order nonlinear systems, Lagrangian systems can be used to represent a large class of mechanical systems, including autonomous vehicles, robotic

2162-237X © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.



manipulators, spacecraft, and underwater vehicles, to name a few. Recent work on distributed coordination of networked Lagrangian systems focuses on leaderless consensus [26]–[32], coordinated tracking with one leader [33]–[41], and containment control with multiple leaders [42], [43]. These problems are studied under different interaction graphs, for example, a ring graph [26], [27], [34], an undirected graph [28]–[30], [33], [35]–[38], [42], and a directed graph [32], [39]–[41], [43]. In addition, other issues such as flocking [30], time delay [31], [40], and uncertainties are also addressed. Most of the work on networked Lagrangian systems investigates systems with parametric uncertainties and thus requires the linearity-in-parameters assumption. The control algorithms proposed therein cannot be applied to the case with unknown nonlinearities (unstructured uncertainties) and external disturbances. Recently, the coordination problem for networked unknown Lagrangian systems was first addressed in [44], where the authors study the coordinated tracking problem with a single stationary leader under an undirected graph. Chen and Lewis [45] consider the case with a single dynamic leader under a directed graph, in which the neighbors’ accelerations are used to implement the control algorithm. As a result, the control algorithms for the followers become coupled with each other when there exists a loop in the interaction graph. Thus, the control algorithm proposed in [45] is only applicable to directed graphs with no loop, as stated in [21], which are generally very special. Motivated by [12], [20]–[25], and [42]–[44], in this paper we consider the distributed containment control problem for multiple second-order nonlinear systems and networked Lagrangian systems with multiple dynamic leaders in the presence of unknown nonlinearities and external disturbances under a general directed graph. The problem involves the following five motivations and challenges.
1) Containment Control With Multiple Dynamic Leaders: The objective of containment control is to drive the followers into a special area, namely, the convex hull spanned by the leaders. The problem has many applications, for example, securing a group of followers in the area spanned by the leaders so that they are kept away from dangerous sources outside that area.

2) Moving to Nonlinear Systems and Lagrangian Systems: As stated above, many physical systems are inherently nonlinear in practice, and a large class of mechanical systems are Lagrangian systems. Therefore, distributed coordination of multiple nonlinear systems and networked Lagrangian systems has many applications.

3) Presence of Unknown Nonlinearities and External Disturbances: For real-world applications, uncertainties are inevitable due to unmodeled dynamics, imprecise measurements, and external disturbances. For Lagrangian systems, in addition to parametric uncertainties, there always exist structural uncertainties.

4) Without Using Neighbors’ Velocity Information or Imposing Communication Requirements: Relative velocity measurements between neighbors are generally more difficult to obtain than relative position measurements. In some applications, communication can be unreliable, expensive, or even unavailable, and issues such as time delays arise when communication is used. It is hence valuable to accomplish coordination tasks using information obtained via local sensors, with no communication requirement.

5) Directed Interaction Graph: Because of nonuniform sensing ranges or the limited field of view of sensors, one agent might be able to sense another agent but not vice versa. In general, the interaction graph among the agents is directed. Note that an undirected graph is a special case of a directed graph.

The combination of all these aspects makes our problem more complicated. Although existing results have considered a few of the aforementioned aspects, none of them has addressed the combination of all of them, and addressing such a combination is nontrivial and challenging. In this paper, we first propose a distributed control algorithm with an adaptive gain design for multiple second-order nonlinear systems with unknown nonlinearities and external disturbances. A necessary and sufficient condition on the directed graph is presented under which the containment error can be made as small as desired. We then give a necessary and sufficient condition for the leaderless consensus case, under which the systems achieve consensus asymptotically. Because relative velocity measurements between neighbors are generally more difficult to obtain than relative position measurements, we propose a distributed containment control algorithm with adaptive gain strategies that does not use neighbors’ velocity information. The stability of the closed-loop systems is analyzed via a two-step Lyapunov-based method. Next, the ideas are applied to the containment control problem for networked unknown Lagrangian systems under a general directed graph. Parts of the current paper appeared in our preliminary works [46] and [47].
Compared with the existing results, the results proposed in this paper have the following unique advantages.

1) Our results rely on a mild connectivity assumption on the interaction topology among the agents. In contrast, the results in [21] and [45] are only applicable to directed graphs with no loop, which are very special.

2) Control algorithms that do not use relative velocity measurements are proposed for the containment control problem, and adaptive gain design strategies are explored to compensate for the information deficiency and the external disturbances.

3) The control algorithms proposed in this paper are fully distributed in the sense that each agent can select its control gains according to only local information. In contrast, the control algorithms in [22]–[25] require that every follower know both the (nonsymmetric) Laplacian matrix and the pinning gains in order to derive the control gains for the neural network updating laws. Unfortunately, these are global information, and hence it is impossible to implement the controllers in [22]–[25] in a fully distributed way.

4) Leaderless consensus under a general directed graph is solved for multiple second-order systems with asymptotic convergence even in the presence of


unknown nonlinearities and external disturbances, which, to the best of our knowledge, has not been reported in the literature.

This paper is organized as follows. In Section II, we provide some background on graph theory. Section III contains the main result of this paper, where the distributed containment problem for multiple unknown second-order nonlinear systems is presented. The results in Section III are applied in Section IV to the distributed coordination of networked unknown Lagrangian systems. In Section V, numerical simulations are presented to show the effectiveness of the proposed control algorithms. Finally, concluding remarks are given in Section VI.

Notations: Let 1_m and 0_m denote, respectively, the m × 1 column vectors of all ones and all zeros. Let 0_{m×n} denote the m × n matrix of all zeros and I_m denote the m × m identity matrix. Let λ_max(·) and λ_min(·) denote, respectively, the maximum and minimum eigenvalues of a square real matrix with real eigenvalues. Let diag(z_1, …, z_p) be the diagonal matrix with diagonal entries z_1 to z_p. Let tr(A) denote the trace of a square matrix A. Let σ_max(·) denote the maximum singular value of a matrix. Let ⊗ denote the Kronecker product. For a vector function f(t) : R → R^n, we say that f(t) ∈ L_2 if ∫_0^∞ f(τ)^T f(τ) dτ < ∞, and f(t) ∈ L_∞ if, for each element f_i(t) of f(t), sup_{t≥0} |f_i(t)| < ∞, i = 1, …, n. ‖·‖ is the Euclidean norm of a vector, ‖·‖_1 is the sum norm of a vector, and ‖·‖_F is the Frobenius norm of a matrix. sgn(·) is the sign function defined componentwise.

II. BACKGROUND AND PROBLEM STATEMENT

Suppose that there exist m followers, labeled as agents 1 to m, and n − m (n > m) leaders, labeled as agents m + 1 to n, in a team. We use a directed graph to describe the interaction among the n agents. Let G = (V, E) be a directed graph with the node set V = {1, …, n} and the edge set E ⊆ V × V.
An edge (i, j) ∈ E denotes that agent j can obtain information from agent i, but not vice versa. Here, node i is the parent node, node j is the child node, and node i is a neighbor of node j. A directed path from node i to node j is a sequence of edges in a directed graph. A directed tree is a directed graph in which every node has exactly one parent except for one node, called the root, and the root has directed paths to every other node. A directed spanning tree of a directed graph is a directed tree that contains all nodes of the directed graph. A directed graph has (or contains) a directed spanning tree if there exists a directed spanning tree as a subgraph of the directed graph. The adjacency matrix A = [a_ij] ∈ R^{n×n} associated with G is defined such that a_ij > 0 if (j, i) ∈ E, and a_ij = 0 otherwise. In this paper, self edges are not allowed, that is, a_ii = 0. The (nonsymmetric) Laplacian matrix L_A = [l_ij] ∈ R^{n×n} associated with A, and hence with G, is defined as l_ii = Σ_{j=1, j≠i}^n a_ij and l_ij = −a_ij, i ≠ j.

Lemma 2.1 [3]: Let G be a directed graph of order n and L_A ∈ R^{n×n} be the associated (nonsymmetric) Laplacian matrix. The following three statements are equivalent.


Fig. 1. Some examples of the graphs among two leaders and three followers, where L1 and L2 denote the two leaders and Fi, i = 1, 2, 3, denotes the ith follower. Assumption 2.2 is satisfied in (a) and (b) but not in (c) and (d).

1) The matrix L_A has a simple zero eigenvalue and all other eigenvalues have positive real parts.
2) G has a directed spanning tree.
3) Given the system ż = −L_A z, where z = [z_1, …, z_n]^T, consensus is reached exponentially. In particular, for all i = 1, …, n and all z_i(0), z_i(t) → Σ_{i=1}^n p_i z_i(0) exponentially as t → ∞, where p = [p_1, …, p_n]^T is a nonnegative left eigenvector of L_A associated with the zero eigenvalue satisfying Σ_{i=1}^n p_i = 1.

Definition 2.1: Let C be a set in a real vector space S ⊆ R^n. The set C is convex if, for any x and y in C, the point (1 − t)x + ty ∈ C for any t ∈ [0, 1]. The convex hull of a set of points X = {x_1, …, x_n} in S is the minimal convex set containing all points in X. We use Co(X) to denote the convex hull of X. In particular, Co(X) = {Σ_{i=1}^n α_i x_i | x_i ∈ X, α_i ≥ 0, Σ_{i=1}^n α_i = 1}.
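Lemma 2.1 can be checked numerically. The following sketch (a minimal illustration assuming NumPy; the four-node graph and its unit edge weights are hypothetical) verifies that the Laplacian of a graph with a directed spanning tree has a simple zero eigenvalue and that ż = −L_A z converges to the weighted average Σ_i p_i z_i(0):

```python
import numpy as np

# Hypothetical 4-node directed graph; edge (i, j) means information flows i -> j.
# Edges 0->1, 1->2, 2->3 form a directed spanning tree rooted at node 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 1)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[j, i] = 1.0                    # adjacency: a_ji > 0 iff (i, j) in E

LA = np.diag(A.sum(axis=1)) - A      # (nonsymmetric) Laplacian L_A

# Statement 1): a simple zero eigenvalue, all others with positive real parts.
eigvals = np.sort_complex(np.linalg.eigvals(LA))
has_simple_zero = np.isclose(eigvals[0], 0) and np.all(eigvals[1:].real > 1e-9)

# Statement 3): z_dot = -L_A z reaches consensus on p^T z(0), where p is the
# nonnegative left eigenvector of L_A for the zero eigenvalue with sum one.
w, V = np.linalg.eig(LA.T)
p = np.real(V[:, np.argmin(np.abs(w))])
p = p / p.sum()

z0 = np.array([1.0, -2.0, 0.5, 3.0])
z = z0.copy()
for _ in range(5000):                # crude forward-Euler integration to t = 50
    z = z - 0.01 * (LA @ z)

print(has_simple_zero, np.allclose(z, p @ z0, atol=1e-3))
```

In this example node 0 is the only root and receives no information, so p ≈ [1, 0, 0, 0] and every state converges to z_1(0) in the paper's numbering.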



We use V_F = {1, …, m} and V_L = {m + 1, …, n} to denote, respectively, the follower set and the leader set. In this paper, we assume that the directed graph G satisfies the following assumption.

Assumption 2.2: For each of the m followers, there exists at least one leader that has a directed path to the follower.

Fig. 1 shows some examples of directed graphs among two leaders and three followers. One can see that Assumption 2.2 is satisfied for the graphs in Fig. 1(a) and (b), since in Fig. 1(a) leader L1 has directed paths to F1 and F2 and leader L2 has a directed path to F3, and in Fig. 1(b) leader L1 has directed paths to all three followers. Assumption 2.2 is not satisfied for the graphs in Fig. 1(c) and (d), because in both graphs neither L1 nor L2 has a directed path to follower F3. For the special case with a single leader, Assumption 2.2 reduces to the requirement that the leader has directed paths to all followers.

III. DISTRIBUTED CONTAINMENT CONTROL FOR MULTIPLE UNKNOWN SECOND-ORDER SYSTEMS

In this section, we study the distributed containment control of multiple unknown second-order systems such that all followers converge to the convex hull spanned by the dynamic leaders. Two control algorithms are proposed: the first relies on both relative position measurements and relative velocity measurements, and the second does not use relative velocity measurements.
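Assumption 2.2 is straightforward to verify computationally: run a breadth-first search forward from the leader set and check that every follower is reached. A minimal plain-Python sketch (the example graphs are hypothetical stand-ins for Fig. 1(a) and a Fig. 1(c)-like case with F3 unreachable):

```python
from collections import deque

# Followers 0..2, leaders 3..4 (hypothetical labels). Edge (i, j): information
# flows from agent i to agent j.
edges = [(3, 0), (0, 1), (4, 2)]       # like Fig. 1(a): L1->F1->F2, L2->F3
followers, leaders = {0, 1, 2}, {3, 4}

def satisfies_assumption_2_2(edges, followers, leaders):
    # BFS forward from the set of leaders; every follower must be reachable.
    adj = {}
    for i, j in edges:
        adj.setdefault(i, []).append(j)
    seen, queue = set(leaders), deque(leaders)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return followers <= seen

print(satisfies_assumption_2_2(edges, followers, leaders))             # True
print(satisfies_assumption_2_2([(3, 0), (0, 1)], followers, leaders))  # False: follower 2 unreachable
```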



A. Agent Dynamics and Assumptions

Let the state and state derivative of agent i be, respectively, x_i ∈ R^p and v_i ∈ R^p. The m followers are represented by the second-order nonlinear equations

ẋ_i = v_i,  v̇_i = f_i(x_i, v_i) + ω_i + τ_i,  i ∈ V_F  (1)

where f_i : R^p × R^p → R^p is a vector-valued function, ω_i ∈ R^p is the external disturbance, and τ_i ∈ R^p is the control input of the ith follower. Standard assumptions for the existence of unique solutions are made, for example, f_i(x_i, v_i) being either continuously differentiable or Lipschitz [48].

In this paper, the nonlinearity f_i(x_i, v_i) in (1) is assumed to be unknown. We use neural networks to compensate for the unknown nonlinearities; more details on neural networks can be found in [48]. Because of the approximation property of neural networks, the unknown continuous nonlinearity f_i(x_i, v_i) can be represented as f_i(x_i, v_i) = W_i^T φ_i(x_i, v_i) + ε_i, where W_i is the ideal constant approximation weight matrix, φ_i(·) is a suitable basis set of functions, and ε_i is the approximation error for the ith follower. Typical choices of basis functions are sigmoid, hyperbolic tangent, and Gaussian functions. In general, the nonlinear function cannot be approximated exactly, but over a compact set Ω ⊂ R^{2p}, for any given positive constant ε_Mi, there exist an ideal weight matrix W_i and a number of neurons r_i such that ‖ε_i‖ ≤ ε_Mi on Ω [20]. Thus, for each follower, the estimate of f_i(x_i, v_i) can be written as f̂_i(x_i, v_i) = Ŵ_i^T φ_i(x_i, v_i), where Ŵ_i is the estimate of the ideal constant neural network weight matrix and will be designed later.

Before moving on, we introduce the auxiliary variable for each follower

s_i = v_i + α Σ_{j ∈ V_F ∪ V_L} a_ij (x_i − x_j),  i ∈ V_F  (2)

where α is a positive constant and a_ij is the (i, j)th entry of the adjacency matrix A associated with G. Let s_F, x_F, x_L, and v_L be the column stack vectors of, respectively, s_i, ∀i ∈ V_F, x_i, ∀i ∈ V_F, x_i, ∀i ∈ V_L, and v_i, ∀i ∈ V_L. Note that the (nonsymmetric) Laplacian matrix L_A of G can be written as

L_A = [ L_1            L_2
        0_{(n−m)×m}    0_{(n−m)×(n−m)} ]  (3)

where L_1 ∈ R^{m×m} and L_2 ∈ R^{m×(n−m)}. Then (2) can be written in vector form as

ẋ_F = −α(L_1 ⊗ I_p)x_F − α(L_2 ⊗ I_p)x_L + s_F.  (4)
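The neural-network approximation property invoked above can be illustrated numerically: fitting the output weights of a Gaussian radial-basis expansion by least squares drives the residual ε down over a compact set. This is only a sketch assuming NumPy; the nonlinearity f(x, v) = sin x + 0.2xv, the grid, and the basis width are invented for illustration:

```python
import numpy as np

# Approximate an "unknown" f(x, v) = sin(x) + 0.2*x*v over the compact set
# [-2, 2] x [-2, 2] by W^T phi(x, v) with Gaussian basis functions.
cg = np.linspace(-2.0, 2.0, 8)
CX, CV = np.meshgrid(cg, cg)
centers = np.column_stack([CX.ravel(), CV.ravel()])   # r = 64 neurons
width = 0.8

def phi(points):
    # points: (N, 2) array of (x, v) pairs -> feature matrix (N, r).
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width ** 2)

g = np.linspace(-2.0, 2.0, 25)
X, Vg = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Vg.ravel()])
f = np.sin(pts[:, 0]) + 0.2 * pts[:, 0] * pts[:, 1]

W, *_ = np.linalg.lstsq(phi(pts), f, rcond=None)      # "ideal" weights via fit
eps = np.abs(phi(pts) @ W - f)                        # approximation error
print(eps.max())                                      # worst-case residual on the grid
```

With enough neurons the worst-case residual on the grid can be made small; in the controller designed later, the estimate Ŵ_i is tuned online by an adaptive law rather than fitted offline as here.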

Lemma 3.1 [43, Lemma 2.3]: The matrix L_1 defined in (3) is a nonsingular M-matrix if and only if Assumption 2.2 holds. In addition, if Assumption 2.2 holds, then each entry of −L_1^{-1} L_2 is nonnegative, all row sums of −L_1^{-1} L_2 equal one, and there exists a diagonal matrix D = diag(d_1, …, d_m) with d_i > 0, ∀i = 1, …, m, such that Q = D L_1 + L_1^T D is symmetric positive definite.

Under Assumption 2.2, it follows from Lemma 3.1 and Definition 2.1 that −(L_1^{-1} L_2 ⊗ I_p)x_L is within the convex hull spanned by the leaders, Co({x_i : i ∈ V_L}). In this case, define the containment error vector

x̄_F = x_F + (L_1^{-1} L_2 ⊗ I_p)x_L.  (5)
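Lemma 3.1 and the reference points behind (5) can be sanity-checked numerically. In the sketch below (assuming NumPy; the five-agent graph and all positions are hypothetical), −L_1^{-1} L_2 comes out entrywise nonnegative with unit row sums, so each follower's reference point is a convex combination of the leaders' positions; the same partition also confirms that the elementwise definition (2) matches the stacked form (4):

```python
import numpy as np

# Followers 0-2, leaders 3-4; Assumption 2.2 holds (leader 3 reaches 0, 1, 2
# through 3 -> 0 -> 1 -> 2, and leader 4 reaches follower 2 directly).
n, m, alpha = 5, 3, 2.0
A = np.zeros((n, n))
A[0, 3] = A[1, 0] = A[2, 1] = A[2, 4] = 1.0
LA = np.diag(A.sum(axis=1)) - A
L1, L2 = LA[:m, :m], LA[:m, m:]          # partition (3)

W = -np.linalg.solve(L1, L2)             # -inv(L1) L2
row_stochastic = np.all(W >= -1e-12) and np.allclose(W.sum(axis=1), 1.0)

xL = np.array([[0.0, 0.0], [4.0, 2.0]])  # leader positions in the plane (p = 2)
refs = W @ xL                            # followers' reference points in Co(x_L)
xF = np.array([[1.0, 1.0], [0.5, -1.0], [3.0, 0.0]])
x_bar_F = xF - refs                      # containment error (5)

# Cross-check (2) against (4): s_F = v_F + alpha (L1 xF + L2 xL).
vF = np.array([[0.1, 0.0], [0.0, 0.2], [-0.3, 0.1]])
x = np.vstack([xF, xL])
s_elem = np.array([vF[i] + alpha * sum(A[i, j] * (x[i] - x[j]) for j in range(n))
                   for i in range(m)])
s_stack = vF + alpha * (L1 @ xF + L2 @ xL)
print(row_stochastic, np.allclose(s_elem, s_stack))
```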

Remark 3.1: From the definition of the containment error vector x̄_F, each follower has been assigned a reference point in the convex hull spanned by the leaders. The reference points can be either static or dynamic, depending on the motion of the leaders, and may differ from follower to follower. If every follower is driven to its reference point by a distributed control input, all followers eventually converge to the convex hull spanned by the leaders. In the coordinated tracking problem with a single leader, the reference point (the leader) is the same for all followers. In the containment control problem, however, no follower knows where the convex hull is located, and the followers are driven only by their local neighbors, which makes the containment control problem more challenging than the coordinated tracking problem. In the current paper, we do not consider the more general case where the followers can stay stationary at arbitrary points, or move freely, within the convex hull spanned by the leaders. More general containment-preserving mechanisms might be studied in future work.

Besides Assumption 2.2 on the interaction graph among the agents, the following assumptions are made throughout this paper.

Assumption 3.2: All leaders' states and state derivatives are bounded; that is, there exist constants x_LM and v_LM such that ‖x_L(t)‖ ≤ x_LM and ‖v_L(t)‖ ≤ v_LM.

Remark 3.3: In this paper, we do not consider the explicit dynamics of the leaders. The leaders can have arbitrary dynamics as long as their states and state derivatives are bounded. The assumption on the boundedness of the leaders' states is necessary because the neural network approximation is valid only over a compact set. It is reasonable for most practical applications.

Assumption 3.4: The external disturbance ω_i is bounded; that is, there exists a positive constant ω_Mi such that ‖ω_i‖ ≤ ω_Mi, where ω_Mi is unknown.

B. Control Algorithm Design

We propose the following control algorithm for (1) with unknown nonlinearities:

τ_i = −K_i s_i − α Σ_{j ∈ V_F ∪ V_L} a_ij (v_i − v_j) − Ŵ_i^T φ_i − k̂_i sgn(s_i)  (6a)
dŴ_i/dt = γ_i φ_i s_i^T  (6b)
dk̂_i/dt = δ_i ‖s_i‖_1,  i ∈ V_F  (6c)
where K_i ∈ R^{p×p} is a symmetric positive-definite matrix, and γ_i and δ_i are positive constants.

Remark 3.5: In (6a), −k̂_i sgn(s_i) is used to compensate for the bounded external disturbances and the approximation error, with (6c) being the adaptive updating law of the varying gain k̂_i. For the ith follower, it is easy to observe from (6) that the control input relies only on its own information and that of its neighbors (i.e., its own position and velocity, and its relative positions and velocities with respect to its neighbors). The algorithm can be implemented using local measurements. Thus, the control algorithm is distributed, and communication is not necessarily required.

Theorem 3.6: Under Assumptions 3.2 and 3.4, using (6) for (1), the containment error can be made as small as desired by tuning α for arbitrary bounded initial conditions, if and only if Assumption 2.2 holds. More specifically, with the additional assumption that lim_{t→∞} v_L(t) = 0, the containment error converges to zero asymptotically.

Proof (Sufficiency): The closed-loop system (1) using (6) can be written as

ṡ_i = −K_i s_i + W̃_i^T φ_i + ε_i + ω_i − k̂_i sgn(s_i),  i ∈ V_F  (7)

where W̃_i = W_i − Ŵ_i is the parameter estimation error. Let ε, ω, and φ be, respectively, the column stack vectors of ε_i, ω_i, and φ_i, ∀i ∈ V_F. Also let K, W̃, Ŵ, and W be, respectively, the block diagonal matrices of K_i, W̃_i, Ŵ_i, and W_i, ∀i ∈ V_F. Over a compact set Ω ⊂ R^{2p}, for any constant ε_Mi > 0, there exist W_i and r_i such that ‖ε_i‖ ≤ ε_Mi on Ω. Consider the Lyapunov function candidate

V(t) = (1/2) s_F^T s_F + Σ_{i∈V_F} (1/(2γ_i)) tr(W̃_i^T W̃_i) + Σ_{i∈V_F} (1/(2δ_i)) (k_i − k̂_i)^2  (8)

where k_i = ε_Mi + ω_Mi. Taking the derivative of V(t) along (7) gives

V̇(t) = s_F^T ṡ_F − Σ_{i∈V_F} γ_i^{-1} tr(W̃_i^T dŴ_i/dt) − Σ_{i∈V_F} δ_i^{-1} (k_i − k̂_i) dk̂_i/dt
     = −s_F^T K s_F + s_F^T W̃^T φ − Σ_{i∈V_F} tr(W̃_i^T φ_i s_i^T) + Σ_{i∈V_F} [s_i^T (ε_i + ω_i) − k_i ‖s_i‖_1].  (9)

Note that for two matrices A and B with appropriate dimensions, tr(A + B) = tr(A) + tr(B), tr(AB) = tr(BA), and 2 tr(A^T B) ≤ tr(A^T A) + tr(B^T B). Also note that s_F^T W̃^T φ is a scalar. We can get

s_F^T W̃^T φ = tr(s_F^T W̃^T φ) = tr(W̃^T φ s_F^T) = Σ_{i∈V_F} tr(W̃_i^T φ_i s_i^T).  (10)

From Assumption 3.4, over Ω, we can get

s_i^T (ε_i + ω_i) ≤ ‖s_i‖ ‖ε_i + ω_i‖ ≤ k_i ‖s_i‖_1.  (11)

Applying (10) and (11), we can get

V̇(t) ≤ −s_F^T K s_F.  (12)

Thus, V(t) ≤ V(0). From (8), we have s_F, W̃_i, k̂_i ∈ L_∞. Because W_i is constant and thus bounded, we have Ŵ_i ∈ L_∞. Note from Assumption 3.2 that x_L ∈ L_∞, and L_1 is a nonsingular M-matrix from Lemma 3.1 as Assumption 2.2 holds. We can conclude from (4) that x_F, ẋ_F ∈ L_∞ by input-to-state stability [49]. Therefore, under the control algorithm (6), the solution of (1) always stays in a compact set Ω, that is, Ω exists for any bounded initial conditions; we will give the explicit expression of Ω later.

Note that V(t) ≥ 0 and V̇(t) ≤ 0. We can conclude that lim_{t→∞} V(t) = V_∞ for some V_∞ ∈ [0, V(0)]. Hence, integrating both sides of (12), we can get

λ_min(K) ∫_0^∞ s_F^T(t) s_F(t) dt ≤ V(0) − V(∞)

which means that s_F ∈ L_2. On the other hand, it is easy to get that ṡ_F ∈ L_∞ from (7). Thus, we have s_F ∈ L_2 ∩ L_∞ and ṡ_F ∈ L_∞. Using Barbalat's lemma [49], we can get that lim_{t→∞} s_F(t) = 0_{pm}.

Next, we show that if Assumption 2.2 holds, the containment error ‖x̄_F‖ can be made as small as desired. In this case, using (5), we can rewrite (4) as

dx̄_F/dt = −α(L_1 ⊗ I_p) x̄_F + s_F + (L_1^{-1} L_2 ⊗ I_p) v_L.  (13)

Note that under Assumption 2.2, L_1 is a nonsingular M-matrix. Consider the Lyapunov function candidate V_1(t) = x̄_F^T (D ⊗ I_p) x̄_F, where D is defined as in Lemma 3.1. Taking the derivative of V_1(t) along (13) gives

V̇_1(t) = (dx̄_F/dt)^T (D ⊗ I_p) x̄_F + x̄_F^T (D ⊗ I_p)(dx̄_F/dt)
      = 2[s_F + (L_1^{-1} L_2 ⊗ I_p) v_L]^T (D ⊗ I_p) x̄_F − α x̄_F^T (Q ⊗ I_p) x̄_F
      ≤ (2 max_i d_i ‖s_F + (L_1^{-1} L_2 ⊗ I_p) v_L‖ / √(min_i d_i)) √V_1 − (α λ_min(Q) / max_i d_i) V_1
      ≤ 2(max_i d_i)^3 ‖s_F + (L_1^{-1} L_2 ⊗ I_p) v_L‖^2 / (α λ_min(Q) min_i d_i) − (α λ_min(Q) / (2 max_i d_i)) V_1  (14)

where we have used the fact that min_i d_i ‖x̄_F‖^2 ≤ V_1 ≤ max_i d_i ‖x̄_F‖^2 to obtain the first inequality, and the fact that

(2 max_i d_i ‖s_F + (L_1^{-1} L_2 ⊗ I_p) v_L‖ / √(min_i d_i)) √V_1
  = 2 √(α λ_min(Q) / (2 max_i d_i)) √V_1 · √(2 max_i d_i / (α λ_min(Q))) (max_i d_i ‖s_F + (L_1^{-1} L_2 ⊗ I_p) v_L‖ / √(min_i d_i))
  ≤ (α λ_min(Q) / (2 max_i d_i)) V_1 + 2(max_i d_i)^3 ‖s_F + (L_1^{-1} L_2 ⊗ I_p) v_L‖^2 / (α λ_min(Q) min_i d_i)

to obtain the second inequality. After some manipulation, we can get from (14) that

V_1(t) ≤ V_1(0) e^{−(α λ_min(Q) / (2 max_i d_i)) t}
       + (4(max_i d_i)^4 ‖s_F + (L_1^{-1} L_2 ⊗ I_p) v_L‖^2 / (α^2 min_i d_i [λ_min(Q)]^2)) (1 − e^{−(α λ_min(Q) / (2 max_i d_i)) t}).  (15)

Note that α λ_min(Q) / (2 max_i d_i) > 0 and ‖x̄_F‖ ≤ (V_1 / min_i d_i)^{1/2}. Let C_M = 2(max_i d_i)^2 / (α min_i d_i λ_min(Q)).



We can get

lim sup_{t→∞} ‖x̄_F(t)‖ ≤ C_M lim sup_{t→∞} ‖s_F(t) + (L_1^{-1} L_2 ⊗ I_p) v_L‖
  ≤ C_M [lim sup_{t→∞} ‖s_F(t)‖ + lim sup_{t→∞} ‖−(L_1^{-1} L_2 ⊗ I_p) v_L‖]
  = C_M lim sup_{t→∞} ‖s_F(t)‖ + C_M lim sup_{t→∞} ‖−(L_1^{-1} L_2 ⊗ I_p) v_L‖.  (16)

Note that −(L_1^{-1} L_2 ⊗ I_p) v_L is in the convex hull Co({v_i : i ∈ V_L}). We can get that lim sup_{t→∞} ‖−(L_1^{-1} L_2 ⊗ I_p) v_L‖ is bounded because v_i, ∀i ∈ V_L, is bounded. Thus, we can choose α large enough to make the second term in (16) as small as desired. Equation (16) shows that the containment error ‖x̄_F(t)‖ can be bounded in terms of ‖s_F(t)‖ and ‖−(L_1^{-1} L_2 ⊗ I_p) v_L‖. From the preceding analysis, using (6), lim_{t→∞} s_F(t) = 0. Therefore, it follows from (16) that we can choose α large enough to make the containment error ‖x̄_F‖ as small as desired. This completes the sufficiency part. More specifically, if lim_{t→∞} v_L(t) = 0, we can get from (16) that lim_{t→∞} ‖x̄_F(t)‖ = 0.

We next present the explicit expression of the compact set Ω for the neural network approximation. For any bounded initial conditions and t ≥ 0, because V(t) ≤ V(0), we have

‖s_F(t)‖ ≤ (‖s_F(0)‖^2 + Σ_{i∈V_F} [γ_i^{-1} ‖W̃_i(0)‖_F^2 + δ_i^{-1} k̃_i^2(0)])^{1/2} = s_M.

We then can get from (15) that

‖x̄_F(t)‖ ≤ (V_1(0) / min_i d_i)^{1/2} + C_M ‖s_F + (L_1^{-1} L_2 ⊗ I_p) v_L‖
  ≤ (max_i d_i ‖x̄_F(0)‖^2 / min_i d_i)^{1/2} + C_M (s_M + σ_max(L_1^{-1} L_2) v_LM) = x̄_FM.

From (5) and (4), we have

‖x_F(t)‖ ≤ ‖x̄_F(t)‖ + ‖(L_1^{-1} L_2 ⊗ I_p) x_L(t)‖ ≤ x̄_FM + σ_max(L_1^{-1} L_2) x_LM = x_FM
‖v_F(t)‖ ≤ ‖α(L_1 ⊗ I_p) x_F(t)‖ + ‖α(L_2 ⊗ I_p) x_L(t)‖ + ‖s_F(t)‖ ≤ α σ_max(L_1) x_FM + α σ_max(L_2) x_LM + s_M = v_FM.

Therefore, for any bounded initial states and t ≥ 0, the states of (1) using (6) always lie in the compact set Ω = {(x_i(t), v_i(t)) | ‖x_F(t)‖ ≤ x_FM, ‖v_F(t)‖ ≤ v_FM} ⊂ R^{2p}.

Necessity: We prove the necessity part by contradiction. If Assumption 2.2 does not hold, there exists a subset of followers that cannot receive any information from the leaders, directly or indirectly. In this case, the followers can be divided into two subsets: one set with the followers that can receive information from the leaders directly or indirectly, denoted by V_F1, and the other set with the remaining followers, denoted by V_F2. By rearranging the labels of the followers, the Laplacian matrix can be written as

L_A = [ L_11             L_12             L_13
        0_{m_2×m_1}      L_22             0_{m_2×(n−m)}
        0_{(n−m)×m_1}    0_{(n−m)×m_2}    0_{(n−m)×(n−m)} ]

where m_1 is the number of followers in V_F1, m_2 is the number of followers in V_F2, m_1 + m_2 = m, L_11 ∈ R^{m_1×m_1}, L_12 ∈ R^{m_1×m_2}, L_13 ∈ R^{m_1×(n−m)}, and L_22 ∈ R^{m_2×m_2}. Let x_F1 and x_F2 be, respectively, the column stack vectors of the followers' positions in V_F1 and V_F2. Let s_F1 and s_F2 be, respectively, the column stack vectors of the s_i's associated with x_F1 and x_F2. Then (4) can be rewritten as

ẋ_F1 = −α(L_11 ⊗ I_p) x_F1 − α(L_12 ⊗ I_p) x_F2 − α(L_13 ⊗ I_p) x_L + s_F1  (17)
ẋ_F2 = −α(L_22 ⊗ I_p) x_F2 + s_F2.  (18)

Because s_F1 and s_F2 are proved to converge to zero, the final states of x_F1 and x_F2 depend on the equilibrium points of (17) and (18) with s_F1 = 0 and s_F2 = 0. Clearly, we can see from (18) that the trajectories of the followers in V_F2 are independent of the leaders, and thus these followers cannot always converge to the convex hull spanned by the leaders for arbitrary bounded initial conditions. On the other hand, for the followers in V_F1, Assumption 2.2 holds. From the analysis in the sufficiency part of the proof of Theorem 3.6, one can choose α large enough such that these followers converge to the convex hull spanned by the leaders and the followers in V_F2, denoted by Co(V_L ∪ V_F2). Since Co(V_L) is only a subset of Co(V_L ∪ V_F2), there might exist followers in V_F1 that cannot converge to the convex hull spanned by the leaders.

In Theorem 3.6, the containment control problem with multiple dynamic leaders has been studied. Obviously, the coordinated tracking problem with one single leader is a special case of the containment control problem. Next, we show that the proposed control algorithm (6) can also be used to deal with the leaderless consensus problem when the directed graph characterizing the interaction among the agents has a directed spanning tree. Note that when V_L = ∅ (no leader), (6) becomes a leaderless consensus algorithm accounting for unknown nonlinearities and external disturbances. In this case, (4) can be written as

ẋ = −α(L_A ⊗ I_p) x + s  (19)

where L_A ∈ R^{n×n} is the (nonsymmetric) Laplacian matrix of the directed graph associated with the n agents, and x and s are the column stack vectors of x_i and s_i, i = 1, …, n. For the linear system ẋ = −α(L_A ⊗ I_p)x, if G has a directed spanning tree, it follows from Lemma 2.1 that consensus is reached exponentially. Thus, there exists x_0 = Σ_{i=1}^n p_i x_i(0), where p_i is defined in Lemma 2.1, such that 1_n ⊗ x_0 is a globally exponentially stable equilibrium point of ẋ = −α(L_A ⊗ I_p)x. Define the consensus error vector x̄ = x − 1_n ⊗ x_0. We can conclude from [49, Lemma 4.6] that the system (19) is input-to-state stable with input s and state x̄. On the other hand, following the same steps as in the proof of Theorem 3.6, using (6), lim_{t→∞} s(t) = 0. Thus, we can get


that lim_{t→∞} x̄(t) = 0, which means that consensus is achieved asymptotically. We then have the following result on the leaderless consensus problem for multiple second-order nonlinear systems with unknown nonlinearities and external disturbances.

Corollary 3.7: Suppose that V_L = ∅. Under Assumption 3.4, using (6) for (1), the systems achieve consensus asymptotically for arbitrary bounded initial conditions, if and only if the directed graph G associated with the n agents has a directed spanning tree.

Remark 3.8: By introducing a sgn(·) function and an adaptive gain design in the control algorithm (6), the auxiliary variables s_i defined in (2) are driven to zero asymptotically. As a result, the containment error vector depends only on the gain α and the velocities of the leaders, and the consensus error vector converges to zero asymptotically. Unlike the results in [20]–[25] and [45], the bounds of the approximation error and the external disturbances are not used to determine the control gains in the current paper and thus have no influence on the containment error and the consensus error. Specifically, for the leaderless consensus problem, we extend the results in [20] to a general directed graph and with zero consensus error.

Remark 3.9: Note that the right-hand side of (7) is discontinuous. Therefore, from the theoretical viewpoint, the solutions should be investigated in terms of differential inclusions. Because the signum function is measurable and locally essentially bounded, the Filippov solution for the closed-loop system (7) always exists [50]. It is easy to show that the set-valued Lie derivative of the continuously differentiable Lyapunov function defined in (8) becomes a singleton. Thus, the proofs of the theorems in the current paper still hold. In addition, to avoid the possible chattering caused by the sign function, the hyperbolic tangent function tanh(·) can be used in practice.
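As a concrete illustration of the smoothing suggested at the end of Remark 3.9, the following sketch (in Python; the slope parameter is an illustrative choice of ours, not taken from the analysis) compares the discontinuous term k̂_i sgn(s_i) with its tanh surrogate on a scalar s_i:

```python
import numpy as np

def sgn_term(s, gain):
    """Discontinuous robust term gain * sgn(s) as in (6); it chatters near s = 0."""
    return gain * np.sign(s)

def tanh_term(s, gain, slope=50.0):
    """Smooth surrogate gain * tanh(slope * s) per Remark 3.9.

    As slope -> infinity this recovers sgn; a finite slope trades a small
    boundary-layer error for chattering-free control. The slope value here
    is illustrative only.
    """
    return gain * np.tanh(slope * s)

# Away from the origin the two terms essentially agree; near the origin
# the tanh term is smooth instead of switching between -gain and +gain.
s_far, s_near = 0.5, 1e-4
print(sgn_term(s_far, 1.0), tanh_term(s_far, 1.0))    # both close to 1.0
print(sgn_term(s_near, 1.0), tanh_term(s_near, 1.0))  # 1.0 vs roughly 0.005
```

The price of the smooth surrogate is that exact convergence of s_i to zero degrades to convergence to a small boundary layer around zero.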
Remark 3.10: Note in (6c) that the time-varying gain k̂_i(t) is the time integral of a nonnegative quantity, which can cause some unwanted behavior. For example, the control gain will increase rapidly if the initial state s_i(0) is very large, which may lead to a high-amplitude control input. On the other hand, the control gain may keep growing if there are noises in the relative position measurements or the absolute velocity measurement. An intuitive way to suppress this effect is the σ-modification method introduced in [51]. We are thus motivated to update the gain k̂_i(t) as

  d k̂_i(t)/dt = δ_i(‖s_i‖_1 − σ_i k̂_i)    (20)

where σ_i > 0 is a constant. In this case, as will be shown in the subsequent section, we can only obtain a bounded result on s_i(t), instead of the asymptotic convergence of s_i(t) in Theorem 3.6.

C. Control Algorithm Without Relative Velocity Feedback

Note that the control algorithm (6) relies on both relative position and relative velocity information. In practice, for second-order systems, relative velocity measurements between neighbors are generally more difficult to obtain than relative position measurements. Even if each system can measure


its own absolute velocity, communicating the velocity measurements between neighbors would require the systems to be equipped with communication capability and would increase the communication burden. In this section, we aim to design a distributed containment control algorithm that does not use relative velocity feedback and requires no communication. Because the relative velocity measurements cannot be used, we propose the following control algorithm for (1):

  τ_i = −ĥ_i s_i − Ŵ_i^T φ_i − l̂_i sgn(s_i)    (21a)
  d ĥ_i/dt = γ(s_i^T s_i − ν ĥ_i)    (21b)
  d Ŵ_i/dt = γ(φ_i s_i^T − ν Ŵ_i)    (21c)
  d l̂_i/dt = γ(‖s_i‖_1 − ν l̂_i),  i ∈ V_F    (21d)

where γ and ν are positive constants.¹

Theorem 3.11: Under Assumptions 3.2 and 3.4, using (21) for (1), the containment error can be reduced as small as desired for arbitrary bounded initial conditions if and only if Assumption 2.2 holds.

Proof: The closed-loop system (1) using (21) can be written as

  ṡ_i = −ĥ_i s_i + W̃_i^T φ_i + α Σ_{j ∈ V_F ∪ V_L} a_ij (v_i − v_j) − l̂_i sgn(s_i) + ε_i + ω_i,  i ∈ V_F.    (22)

Let r = α(L_2 ⊗ I_p)v_L = [r_1^T, …, r_m^T]^T with r_i ∈ R^p. We can get from Assumption 3.2 that r is bounded. For any constant ε_Mi > 0, over a compact set Ω, we can get from Assumption 3.4 that there must exist a positive constant l_c such that

  ‖r_i + ω_i + ε_i‖ ≤ l_c,  ∀i ∈ V_F.    (23)
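For concreteness, one per-follower implementation step of (21) can be sketched as follows. This is a minimal sketch: the function and variable names are ours, the forward-Euler discretization of (21b)–(21d) is an illustrative choice (the paper's analysis is in continuous time), and only relative positions and the agent's own velocity are consumed, as the algorithm requires.

```python
import numpy as np

def controller_step(x_i, v_i, nbr_pos, a_ij, alpha, state, phi, gamma, nu, dt):
    """One forward-Euler step of the velocity-free law (21) for follower i.

    x_i, v_i : own position and absolute velocity in R^p
    nbr_pos  : dict j -> x_j with the positions of neighbors (leaders or followers)
    a_ij     : dict j -> adjacency weight
    state    : dict holding the adaptive states 'h' (scalar), 'W' (r x p), 'l' (scalar)
    phi      : neural network basis vector phi_i in R^r
    No relative velocity measurements appear anywhere below.
    """
    # Auxiliary variable s_i = v_i + alpha * sum_j a_ij (x_i - x_j), cf. (2).
    s = v_i + alpha * sum(a_ij[j] * (x_i - xj) for j, xj in nbr_pos.items())
    # Control input (21a).
    tau = -state['h'] * s - state['W'].T @ phi - state['l'] * np.sign(s)
    # Adaptive updates (21b)-(21d), discretized with step dt.
    state['h'] += dt * gamma * (s @ s - nu * state['h'])
    state['W'] += dt * gamma * (np.outer(phi, s) - nu * state['W'])
    state['l'] += dt * gamma * (np.sum(np.abs(s)) - nu * state['l'])
    return tau, s
```

Each follower runs this update with only its own measurements, which is what makes the scheme distributed and communication-free.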

First, we show that, under (22), s_F(t) as defined in (4) and x̄_F(t) as defined in (5) are bounded. Consider the Lyapunov function candidate

  V_c(t) = (1/2) s_F^T s_F + x̄_F^T (D ⊗ I_p) x̄_F + (1/(2γ)) Σ_{i∈V_F} [tr(W̃_i^T W̃_i) + (h_c − ĥ_i)² + (l_c − l̂_i)²]    (24)

where D is defined as in Lemma 3.1, l_c is defined as in (23), and h_c is a positive constant to be determined later. Define h̃_i = h_c − ĥ_i and l̃_i = l_c − l̂_i. Taking the derivative of V_c(t) along (22) gives

  V̇_c(t) = −h_c s_F^T s_F + α s_F^T (L_1 ⊗ I_p) s_F − α² s_F^T (L_1² ⊗ I_p) x̄_F
           + ν Σ_{i∈V_F} [tr(W̃_i^T Ŵ_i) + h̃_i ĥ_i + l̃_i l̂_i]
           − α x̄_F^T ((L_1^T D + D L_1) ⊗ I_p) x̄_F
           + 2 x̄_F^T (D ⊗ I_p)[s_F + (L_1^{-1} L_2 ⊗ I_p) v_L]
           + Σ_{i∈V_F} [s_i^T (r_i + ε_i + ω_i) − l_c ‖s_i‖_1].

¹Here, we use the same gains γ and ν for all followers just for simplicity of the following analysis; they can be different for each agent, as in (6).


Note from Assumption 3.2 that v_L is bounded. There must exist a positive constant d̄ such that ‖(D L_1^{-1} L_2 ⊗ I_p)v_L‖ ≤ d̄. Also note that

  s_F^T (L_1² ⊗ I_p) x̄_F ≤ (σ_max(L_1²)/(2ϵ_1)) s_F^T s_F + (ϵ_1 σ_max(L_1²)/2) x̄_F^T x̄_F
  2 x̄_F^T (D ⊗ I_p) s_F ≤ ϵ_2^{-1} max_i d_i s_F^T s_F + ϵ_2 max_i d_i x̄_F^T x̄_F
  2 x̄_F^T (D L_1^{-1} L_2 ⊗ I_p) v_L ≤ ϵ_3 x̄_F^T x̄_F + ϵ_3^{-1} d̄²

where ϵ_1, ϵ_2, and ϵ_3 are positive constants that can be chosen freely. We can then get

  V̇_c(t) ≤ −(1/2)β_1 s_F^T s_F − β_2 x̄_F^T x̄_F − (ν/2) Σ_{i∈V_F} [tr(W̃_i^T W̃_i) + h̃_i² + l̃_i²] + C_c

where

  β_1 = 2h_c − 2ασ_max(L_1) − α²σ_max(L_1²)/ϵ_1 − 2 max_i d_i/ϵ_2
  β_2 = αλ_min(L_1^T D + D L_1) − α²ϵ_1 σ_max(L_1²)/2 − ϵ_2 max_i d_i − ϵ_3
  C_c = ϵ_3^{-1} d̄² + (ν/2) Σ_{i∈V_F} [tr(W_i^T W_i) + h_c² + l_c²].

Regardless of L_1 and D, we can first choose α large enough and ϵ_i, i = 1, 2, 3, small enough such that β_2 > 0. Then, by choosing h_c large enough, we can make β_1 > 0. We have

  V̇_c(t) ≤ −(β_1/2) s_F^T s_F − (β_2/(2 max_i d_i)) x̄_F^T (D ⊗ I_p) x̄_F − (ν/2) Σ_{i∈V_F} [tr(W̃_i^T W̃_i) + h̃_i² + l̃_i²] + C_c
         ≤ −β_c V_c(t) + C_c    (25)

where β_c = min{β_1, β_2/max_i d_i, γν} > 0. Note that W_i, h_c, and l_c are constant, so C_c > 0 is constant and thus bounded. After some manipulation, we can get from (25) that

  V_c(t) ≤ V_c(0)e^{−β_c t} + (C_c/β_c)(1 − e^{−β_c t}).

We have that lim_{t→∞} V_c(t) ≤ C_c/β_c. Unfortunately, √(C_c/β_c) might not be able to be reduced as small as desired, because ϵ_3^{-1} and h_c are assumed to be chosen large enough. The fortunate part out of all the misfortunes is that we have s_F, x̄_F, W̃_i, h̃_i, l̃_i ∈ L_∞. Following the same statements in the proof of Theorem 3.6, we can conclude that the solution of (22) always stays in a compact set. Let v_ri = α Σ_{j∈V_F∪V_L} a_ij(v_j − v_i) and v_r = [v_r1^T, …, v_rm^T]^T. We can get that v_r = α²(L_1² ⊗ I_p)x̄_F − α(L_1 ⊗ I_p)s_F − α(L_2 ⊗ I_p)v_L is bounded.

We then consider the Lyapunov function candidate

  V(t) = (1/2) s_F^T s_F + (1/(2γ)) Σ_{i∈V_F} [tr(W̃_i^T W̃_i) + (h − ĥ_i)² + (l − l̂_i)²]

where l is chosen such that ‖v_ri + ω_i + ε_i‖ ≤ l, ∀i ∈ V_F, and h is a positive constant.² Define h̃_i = h − ĥ_i and l̃_i = l − l̂_i. Taking the derivative of V along (22) gives

  V̇(t) = −h s_F^T s_F + Σ_{i∈V_F} [s_i^T (v_ri + ε_i + ω_i) − l‖s_i‖_1] + ν Σ_{i∈V_F} [tr(W̃_i^T Ŵ_i) + h̃_i ĥ_i + l̃_i l̂_i]
       ≤ −h s_F^T s_F − (ν/2) Σ_{i∈V_F} tr(W̃_i^T W̃_i) − (ν/2) Σ_{i∈V_F} [(h − ĥ_i)² + (l − l̂_i)²] + C
       ≤ −βV(t) + C

where β = min{2h, γν} > 0 and C = (ν/2) Σ_{i∈V_F} [tr(W_i^T W_i) + h² + l²]. After some manipulation, we can get

  lim sup_{t→∞} ‖s_F(t)‖ ≤ lim sup_{t→∞} √(2V(t)) ≤ √(2C/β).

²Compared with V_c(t) defined in (24), the term x̄_F^T (D ⊗ I_p) x̄_F does not appear in V(t). When calculating V̇(t), the term associated with the relative velocity measurements v_ri can be dominated by the term l̂_i sgn(s_i), since v_ri has already been proved to be bounded in the former analysis, which facilitates the stability analysis of the system (22).

From the definition of C and β, we can choose ν small enough and γ large enough to make lim sup_{t→∞} ‖s_F(t)‖ as small as desired. Consider the Lyapunov function candidate V_1(t) = x̄_F^T (D ⊗ I_p) x̄_F. Following the same steps as in the proof of Theorem 4.1, we can still get (16). Because lim sup_{t→∞} ‖s_F(t)‖ can be reduced as small as desired, we can choose α large enough to reduce the containment error x̄_F(t) as small as desired. This concludes our proof.

Similarly, since V(t) ≤ V(0) + C/β, for any bounded initial conditions and t ≥ 0, we have

  ‖s_F(t)‖ ≤ √( ‖s_F(0)‖² + (1/γ) Σ_{i∈V_F} [‖W̃_i(0)‖_F² + h̃_i²(0) + l̃_i²(0)] + 2C/β ) = s_M.

We then can obtain

  ‖x̄_F(t)‖ ≤ √( (2 max_i d_i ‖x̄_F(0)‖²)/(min_i d_i) ) + C_M (2 s_M + σ_max(L_1^{-1} L_2) v_LM) = x̄_FM

  ‖x_F(t)‖ ≤ x̄_FM + σ_max(L_1^{-1} L_2) x_LM = x_FM
  ‖v_F(t)‖ ≤ α σ_max(L_1) x_FM + α σ_max(L_2) x_LM + s_M = v_FM

where C_M is defined as in (16). Therefore, for any bounded initial states and t ≥ 0, the states of (1) using (21) always lie in the compact set Ω = {(x_i(t), v_i(t)) : ‖x_F(t)‖ ≤ x_FM, ‖v_F(t)‖ ≤ v_FM} ⊂ R^{2p}.

Remark 3.12: Compared with the control algorithm (6), the constant gain K_i is replaced with a time-varying gain ĥ_i(t) with an adaptive updating law. Otherwise, the constant gain K_i would need to be chosen significantly large to ensure the stability of the closed-loop system. This is reasonable, because a higher price might have to be paid when less information is used.

IV. DISTRIBUTED CONTAINMENT CONTROL FOR MULTIPLE UNKNOWN LAGRANGIAN SYSTEMS

Lagrangian systems are second-order nonlinear systems that can be used to model many mechanical systems, including


robotic manipulators, autonomous vehicles, and spacecraft. Unfortunately, Lagrangian systems do not take the form of (1), owing to the uncertain inertia matrix. In this section, we apply the idea in Section III to deal with the distributed containment control problem for networked Lagrangian systems in the presence of unknown nonlinearities and external disturbances.

A. Agent Dynamics

Assume that the m followers are represented by the Lagrangian equations [52]

  M_i(q_i)q̈_i + C_i(q_i, q̇_i)q̇_i + g_i(q_i) = τ_i + ω_i,  i = 1, …, m    (26)

where q_i ∈ R^p is the vector of generalized coordinates, M_i(q_i) ∈ R^{p×p} is the symmetric positive-definite inertia matrix, C_i(q_i, q̇_i)q̇_i ∈ R^p is the vector of Coriolis and centrifugal forces, and g_i(q_i) is the vector of gravitational forces.³ The Lagrangian equation has the following properties [52].

(P1): For any i, M_i(q_i) is symmetric positive definite, and there exist positive constants k_m and k̄_m such that 0 < k_m‖x‖² ≤ x^T M_i(q_i)x ≤ k̄_m‖x‖², ∀x ∈ R^p.

(P2): Ṁ_i(q_i) − 2C_i(q_i, q̇_i) is skew symmetric.

Note that here we do not assume the property of linearity in the parameters, because of the existence of structural uncertainties.

³Note that (26) can be written as q̈_i = −M_i^{-1}(q_i)[C_i(q_i, q̇_i)q̇_i + g_i(q_i)] + M_i^{-1}(q_i)τ_i + M_i^{-1}(q_i)ω_i. Although we could use the results in Section III to derive M_i^{-1}(q_i)τ_i, owing to the uncertainties of M_i(q_i), we still cannot derive τ_i. Hence, the results in Section III cannot be directly applied to networked unknown Lagrangian systems.

B. Control Algorithm Design

Define the auxiliary variables

  q̇_ri = −α Σ_{j ∈ V_L ∪ V_F} a_ij (q_i − q_j)    (27)
  s_i = q̇_i − q̇_ri = q̇_i + α Σ_{j ∈ V_L ∪ V_F} a_ij (q_i − q_j),  i ∈ V_F    (28)

where a_ij is defined as in (2). Then (26) can be written as

  M_i(q_i)ṡ_i + C_i(q_i, q̇_i)s_i = f_i(q_i, q̇_i, q̇_ri, q̈_ri) + ω_i + τ_i    (29)

where f_i(q_i, q̇_i, q̇_ri, q̈_ri) = −M_i(q_i)q̈_ri − C_i(q_i, q̇_i)q̇_ri − g_i(q_i) is unknown. Because of the approximation property of neural networks, the unknown nonlinearity f_i(q_i, q̇_i, q̇_ri, q̈_ri) can be represented as f_i(q_i, q̇_i, q̇_ri, q̈_ri) = W_i^T φ_i(q_i, q̇_i, q̇_ri, q̈_ri) + ε_i, and its estimate can be written as

  f̂_i(q_i, q̇_i, q̇_ri, q̈_ri) = Ŵ_i^T φ_i(q_i, q̇_i, q̇_ri, q̈_ri),  i ∈ V_F.    (30)

Similar to the statements in Section III, over a compact set Ω ⊂ R^{4p}, for any given positive constant ε_Mi, there exist W_i and r_i such that ‖ε_i‖ ≤ ε_Mi in Ω. We propose the control algorithm for (26)

  τ_i = −K_i s_i − Ŵ_i^T φ_i − k̂_i sgn(s_i)    (31a)
  d Ŵ_i/dt = γ_i φ_i s_i^T    (31b)
  d k̂_i/dt = δ_i ‖s_i‖_1,  i ∈ V_F    (31c)

where K_i, γ_i, and δ_i are defined as in (6). Let s_F, ε, ω, φ, K, W, W̃, and Ŵ be defined as in Theorem 3.6. Also let q_F and q_L be, respectively, the column stack vectors of q_i, ∀i ∈ V_F, and q_i, ∀i ∈ V_L. For networked Lagrangian systems, the containment error vector is defined as q̄_F = q_F + (L_1^{-1} L_2 ⊗ I_p)q_L.

Theorem 4.1: Under Assumptions 3.2⁴ and 3.4, using (31) for (26), the containment error can be reduced as small as desired by tuning α for arbitrary bounded initial conditions, if and only if Assumption 2.2 holds. More specifically, with the additional assumption that lim_{t→∞} q̇_L(t) = 0, the containment error converges to zero asymptotically.

Theorem 4.1 can be proved following the same steps as in Theorem 3.6 by considering the following Lyapunov function candidate:

  V(t) = (1/2) s_F^T M(q_F) s_F + Σ_{i∈V_F} (1/(2γ_i)) tr(W̃_i^T W̃_i) + Σ_{i∈V_F} (1/(2δ_i)) (k_i − k̂_i)²

where k_i is defined as in (8). Similar to the discussion in Section III-B, we have the following result on the leaderless consensus problem for networked Lagrangian systems in the presence of uncertainties and external disturbances.

Corollary 4.2: Suppose that V_L = ∅. Under Assumption 3.4, using (31) for (26), the systems achieve consensus asymptotically for arbitrary bounded initial conditions, if and only if the directed graph G associated with the n agents has a directed spanning tree.

C. Control Algorithm Without Relative Velocity Feedback


Note that in (30), q̈_ri and q̇_ri are used as inputs to the neural networks. It follows from (27) that q̈_ri = −α Σ_{j ∈ V_L ∪ V_F} a_ij (q̇_i − q̇_j), which means that the control algorithm (31) relies on both relative position and relative velocity information. In this section, we aim to design a distributed containment control algorithm for networked Lagrangian systems that does not use relative velocity feedback and requires no communication. Because the relative velocity measurements cannot be used, (29) can be rewritten as

  M_i(q_i)ṡ_i + C_i(q_i, q̇_i)s_i = −M_i(q_i)q̈_ri + f_i(q_i, q̇_i, q̇_ri) + ω_i + τ_i    (32)

where f_i(q_i, q̇_i, q̇_ri) = −C_i(q_i, q̇_i)q̇_ri − g_i(q_i) is unknown. Similarly, the unknown nonlinearity f_i(q_i, q̇_i, q̇_ri) can be represented as f_i(q_i, q̇_i, q̇_ri) = W_i^T φ_i(q_i, q̇_i, q̇_ri) + ε_i, and its estimate can be written as f̂_i(q_i, q̇_i, q̇_ri) = Ŵ_i^T φ_i(q_i, q̇_i, q̇_ri), ∀i ∈ V_F. We propose the control algorithm for (26)

  τ_i = −ĥ_i s_i − Ŵ_i^T φ_i − l̂_i sgn(s_i)    (33a)
  d ĥ_i/dt = γ(s_i^T s_i − ν ĥ_i)    (33b)
  d Ŵ_i/dt = γ(φ_i s_i^T − ν Ŵ_i)    (33c)
  d l̂_i/dt = γ(‖s_i‖_1 − ν l̂_i),  i ∈ V_F    (33d)

where γ and ν are positive constants.

⁴Here, for networked Lagrangian systems, x_L(t) and v_L(t) in Assumption 3.2 are replaced with q_L(t) and q̇_L(t), respectively. We still use x_LM and v_LM to represent the upper bounds of the leaders' states and state derivatives, respectively.
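The role of the leakage constant ν in (33b)–(33d) (the same σ-modification idea as in (20)) can be seen from a scalar sketch: for a constant excitation, the leaky update settles near a finite value, whereas a pure integral update such as (31c) would grow without bound under persistent excitation. The numerical values below are illustrative choices of ours.

```python
def gain_trajectory(u, gamma, nu, h0=0.0, dt=1e-3, T=20.0):
    """Integrate h' = gamma * (u - nu * h), the leakage form used in (33b),
    for a constant excitation u >= 0 by forward Euler. With nu > 0 the gain
    settles near u / nu; with nu = 0 it is a pure integral and keeps growing."""
    h = h0
    for _ in range(int(T / dt)):
        h += dt * gamma * (u - nu * h)
    return h

bounded = gain_trajectory(u=1.0, gamma=1.0, nu=0.5)    # settles near u/nu = 2.0
unbounded = gain_trajectory(u=1.0, gamma=1.0, nu=0.0)  # grows linearly, about 20 at T = 20
```

The same leakage is what limits the result of Theorem 4.3 to a bound that can be made small, rather than exact convergence of s_i to zero.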


Theorem 4.3: Under Assumptions 3.2 and 3.4, using (33) for (26), the containment error can be reduced as small as desired for arbitrary bounded initial conditions, if and only if Assumption 2.2 holds.

Theorem 4.3 can be proved following the same steps as in Theorem 3.11 by considering the following two-step Lyapunov function candidates:

  V_c(t) = (1/2) s_F^T M(q_F) s_F + q̄_F^T (D ⊗ I_p) q̄_F + (1/(2γ)) Σ_{i∈V_F} [tr(W̃_i^T W̃_i) + (h_c − ĥ_i)² + (l_c − l̂_i)²]

and

  V(t) = (1/2) s_F^T M(q_F) s_F + (1/(2γ)) Σ_{i∈V_F} [tr(W̃_i^T W̃_i) + (h − ĥ_i)² + (l − l̂_i)²]

where l is chosen such that ‖M_i(q_i)q̈_ri + ω_i + ε_i‖ ≤ l, ∀i ∈ V_F, and h is a positive constant. Details of the proof can be found in [47].

Remark 4.4: Comparison with existing results.

1) Distributed coordination of networked Lagrangian systems has been widely studied in the recent literature (see [28]–[31], [33]–[43], [45]). Most of the previous results focus on parametric uncertainties in Lagrangian systems, except for [45], in which the coordinated tracking problem with a single leader is studied. It is worth mentioning that the control algorithm in [45] can only be implemented under some special graphs, namely, unidirectional graphs with no loop. In [42], the authors studied the distributed containment control problem for multiple Lagrangian systems under an undirected graph. The case with a directed graph is studied in [43]. We have extended this work to the case with unknown uncertainties and external disturbances (Theorem 4.1) and without using neighbors' velocity measurements (Theorem 4.3). Because of the approximation property of neural networks, there is no need to know the values or even the structure of M_i(q_i), C_i(q_i, q̇_i), and g_i(q_i) in the control algorithms (31) and (33). Thus, our proposed algorithms account for both unstructured and parametric uncertainties in the systems.

2) The containment control problem with dynamic leaders is studied in [42] and [43]. Estimators that rely on neighbors' estimates are used in the algorithm design, which means that communication is necessary to implement those algorithms. This constraint is relaxed in this paper, because our proposed algorithms can be implemented using local measurements of physical systems in the absence of communication.

3) References [21]–[25] studied the distributed coordinated tracking problem for multiple agents in the presence of a single leader with uncertain nonlinear dynamics under a directed graph. Note that the case with a single leader is a special case of multiple leaders.
In [21], for each follower modeled by a first-order system, the follower’s own state and its neighbors’ states and state derivatives

are used to implement the control algorithm. It is required that the graph characterizing the interaction among the leader and the followers contain a directed spanning tree, and no loop is allowed in the directed graph. Such a directed graph is not general: the graph must be unidirectional, and even an undirected graph cannot satisfy this condition. Actually, under the framework in [21], because the state derivatives are needed, the closed-loop systems become centralized. The same issue also exists in [41] and [45]. Thus, we have extended [21] and [45] to the case with multiple leaders and a more general directed graph. Moreover, to implement the neural network tuning laws, it is required in [22]–[25] that every follower know both the (nonsymmetric) Laplacian matrix and the pinning gains for all followers, which are actually global information. This requirement is relaxed in the current paper by introducing some auxiliary variables. Furthermore, with the help of the auxiliary variables introduced in the current paper, the distributed leaderless consensus problem, the containment control problem without using neighbors' velocities under a general directed graph, and the coordination of networked unknown Lagrangian systems can be solved, which, to the best of our knowledge, has not been reported in the literature.

V. SIMULATION RESULTS

In this section, numerical simulations are performed to show the effectiveness of the proposed control algorithms.

A. Example One: General Nonlinear Systems

Case 1 (Distributed Containment Control With Multiple Dynamic Leaders): In this case, we consider the distributed containment control problem for ten agents, among which are four leaders and six followers. For simplicity, we assume that the six identical followers are modeled by (1) with p = 2 and

  f_i(x_i, v_i) = [ 4 (sin(4π x_i(1))/(π x_i(1))) (sin(π v_i(1))/(π v_i(1)))²
                    4 (sin(4π x_i(2))/(π x_i(2))) (sin(π v_i(2))/(π v_i(2)))² ]

where i = 1, …, 6 [53]. Here and in what follows, we use a subscript (j) to denote the jth component of a vector. The external disturbances on the followers are given as

  ω_1 = [e^{−t}; sin(t²)],  ω_2 = [e^{−2t}; cos(t²)],  ω_3 = [cos(t²); e^{−3t}]
  ω_4 = [−sin(t²); cos(t)e^{−3t}],  ω_5 = [−cos(t)sin(t²); sin(t)e^{−3t}],  ω_6 = [sin(t)e^{−5t}; sin(t)cos(t²)].

For follower i, the basis function of the neural networks is chosen as φ_i(z) = [φ_i1(z), …, φ_ir(z)]^T, where

  φ_ij(z) = exp(−‖z − c_ij‖²/σ_ij²),  j = 1, …, r
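The Gaussian basis above is straightforward to implement; a minimal sketch follows, with toy centers and widths of our own choosing (here z = [x_i^T, v_i^T]^T ∈ R^4 since p = 2):

```python
import numpy as np

def rbf_basis(z, centers, widths):
    """Gaussian radial basis vector with components
    exp(-||z - c_ij||^2 / sigma_ij^2), as used for each follower.

    z       : input vector (here z = [x_i; v_i] in R^{2p})
    centers : (r, 2p) array of receptive-field centers c_ij
    widths  : (r,) array of widths sigma_ij
    """
    d2 = np.sum((centers - z) ** 2, axis=1)
    return np.exp(-d2 / widths ** 2)

# Toy check: evaluated at a center, the corresponding component equals 1.
centers = np.array([[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]])
widths = np.full(2, 2.0)
phi = rbf_basis(np.zeros(4), centers, widths)
print(phi[0])  # -> 1.0
```

In the simulations the centers are gridded over the operating range and the widths are all equal, so the only follower-specific quantity is the evaluation point z.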


Fig. 2. Directed graph that characterizes the interaction among the four leaders and the six followers, where Li, i = 1, . . . , 4, denotes the ith leader and Fi, i = 1, . . . , 6, denotes the ith follower.

Fig. 4. Containment error of the six followers with four leaders by using (6). Here, x̄_F = [x̄_F1^T, …, x̄_F6^T]^T with x̄_Fi ∈ R².

Fig. 3. Distributed containment control for six followers and four leaders by using the algorithm (6). Blue circles: four leaders. Blue rectangle: convex hull spanned by the four leaders. Black squares: six followers. (a) t = 0 s. (b) t = 1 s. (c) t = 1.75 s. (d) t = 5 s.

where z = [x_i^T, v_i^T]^T ∈ R^{2p}, r is the number of neurons, c_ij ∈ R^{2p} is the center of the receptive field, and σ_ij ∈ R is the width of the Gaussian function [20]. Fig. 2 shows the directed graph that characterizes the interaction among the leaders and the followers. The first six rows of the Laplacian matrix of the directed graph are given as

  ⎛  2  0 −1  0  0  0 −1  0  0  0 ⎞
  ⎜  0  2 −1  0  0  0 −1  0  0  0 ⎟
  ⎜ −1  0  4 −1  0  0  0 −1 −1  0 ⎟
  ⎜  0  0 −1  3 −1 −1  0  0  0  0 ⎟
  ⎜  0  0  0  0  2  0  0 −1 −1  0 ⎟
  ⎝  0  0  0 −1 −1  3  0  0  0 −1 ⎠.

In simulation, we assume that all followers maintain neural networks with the same basis functions. The number of neurons is r = 6. The centers of the receptive fields are chosen to be distributed evenly in the range [−5, 5]² × [−5, 5]², and the width of the Gaussian function is σ_ij = 2, ∀i, j = 1, …, 6. The initial weight matrix is Ŵ_i(0) = 0_{r×p} and the initial gain is k̂_i(0) = 0. Let the initial states of the six followers be [−8, 8]^T, [0, 12]^T, [8, 8]^T, [−8, −8]^T, [0, 12]^T, and [8, −8]^T, and the initial state derivatives of the six followers be [−0.3, 0.3]^T, [−0.2, 0.2]^T, [−0.1, 0.1]^T, [0, 0]^T, [0.1, −0.1]^T, and [0.2, −0.2]^T. Let the initial states of the four leaders be [−2, 2]^T, [2, 2]^T, [−2, −2]^T, and [2, −2]^T, and let the leaders move with varying state derivatives [sin(t), cos(t)]^T,

Fig. 5. Distributed containment control for six followers and four leaders by using the algorithm (21). (a) t = 0 s. (b) t = 0.6 s. (c) t = 1.05 s. (d) t = 3 s.

[sin(t), cos(t)]^T, [cos(t), sin(t)]^T, and [cos(t), sin(t)]^T. The control parameters are chosen as K_i = 0.8I_2, α = 3, γ_i = 0.5, and δ_i = 0.5, for i = 1, …, 6. Fig. 3 shows the states of the six followers and the leaders, respectively, at t = 0 s, t = 1 s, t = 1.75 s, and t = 5 s. Fig. 4 shows the containment error of the six followers. We can see that the six followers converge to the convex hull spanned by the four leaders and that the containment error is reduced to near zero. For the control algorithm (21), which does not use relative velocity measurements, the control parameters are chosen as α = 2, γ = 1, and ν = 0.01, and the initial states of the adaptive updating laws are all set to zero. Fig. 5 shows the states of the six followers and the four leaders, respectively, at t = 0 s, t = 0.6 s, t = 1.05 s, and t = 3 s. Fig. 6 shows the containment error of the six followers.

Case 2 (Distributed Leaderless Consensus): In this case, we consider the distributed leaderless consensus problem for six agents. We assume that the six agents are modeled by the dynamics in Case 1. Fig. 7 shows the directed graph that characterizes the interaction among the six agents. Note that the directed graph has


Fig. 6. Containment error of the six followers with four leaders by using (21).

Fig. 7. Directed graph that characterizes the interaction among six agents, where Ai, i = 1, . . . , 6, denotes the ith agent.

Fig. 8. States of the six agents without leaders by using (6).

a directed spanning tree. The Laplacian matrix of the directed graph is given as

  ⎛  1  0 −1  0  0  0 ⎞
  ⎜  0  1 −1  0  0  0 ⎟
  ⎜ −1  0  2 −1  0  0 ⎟
  ⎜  0  0 −1  3 −1 −1 ⎟
  ⎜  0 −1  0  0  1  0 ⎟
  ⎝  0  0  0 −1 −1  2 ⎠.
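Corollary 3.7 requires this graph to have a directed spanning tree, which is equivalent to the Laplacian having a simple zero eigenvalue, i.e., rank n − 1. The sketch below checks this for the matrix above and then simulates the linear part ẋ = −α(L_A ⊗ I_p)x with scalar agent states for brevity; the gain, step size, and horizon are our illustrative choices.

```python
import numpy as np

# Laplacian of the six-agent digraph used in the leaderless example.
L = np.array([
    [ 1,  0, -1,  0,  0,  0],
    [ 0,  1, -1,  0,  0,  0],
    [-1,  0,  2, -1,  0,  0],
    [ 0,  0, -1,  3, -1, -1],
    [ 0, -1,  0,  0,  1,  0],
    [ 0,  0,  0, -1, -1,  2],
], dtype=float)

# A directed spanning tree exists iff 0 is a simple eigenvalue of L,
# i.e., rank(L) = n - 1 (the setting of Lemma 2.1 / Corollary 3.7).
rank = np.linalg.matrix_rank(L)
print(rank)  # -> 5

# Forward-Euler simulation of the linear part xdot = -alpha * L x:
# the agent states converge to a common value.
alpha, dt = 2.0, 0.01
x = np.array([-8.0, 0.0, 8.0, -8.0, 0.0, 8.0])
for _ in range(20000):
    x = x - dt * alpha * (L @ x)
print(np.ptp(x))  # spread across agents, shrinks toward 0
```

The nonlinear terms and disturbances of (1) are of course absent here; the point is only that the graph condition of Corollary 3.7 is satisfied by this Laplacian.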

In simulation, the parameters of the neural networks and the initial states of the agents are chosen to be the same as in Case 1. The control parameters are chosen as K_i = I_2, α = 2, γ_i = 0.5, and δ_i = 0.4, for i = 1, …, 6. Fig. 8 shows the states of the six agents, without leaders, using (6). We can see that the six agents achieve consensus asymptotically.

B. Example Two: Lagrangian Systems

In this example, we consider the coordination of ten agents with four leaders and six followers, where the six

Fig. 9. Containment error of the six followers with four leaders by using (31). Here, q̄_F = [q̄_F1^T, …, q̄_F6^T]^T with q̄_Fi ∈ R².

followers are represented by Euler–Lagrange equations as in (26). For simplicity, we consider six identical networked two-link revolute joint arms modeled by Euler–Lagrange equations as the followers. The readers are referred to [52, pp. 259–262] for the details of the equations. In particular, the masses of links 1 and 2 are, respectively, m_1 = 1 kg and m_2 = 0.8 kg; the lengths of links 1 and 2 are, respectively, l_1 = 0.8 m and l_2 = 0.6 m; the distances from the previous joint to the center of mass of links 1 and 2 are, respectively, l_c1 = 0.4 m and l_c2 = 0.3 m; and the moments of inertia of links 1 and 2 are, respectively, J_1 = 0.0533 kg m² and J_2 = 0.024 kg m². In simulation, we assume that the interaction among the leaders and the followers is the same as in Case 1 of Example One. All followers maintain neural networks with the same basis functions. The number of neurons is r = 6. The centers of the receptive fields are chosen to be distributed evenly in the range [−5, 5]² × [−5, 5]² × [−5, 5]² × [−5, 5]², and the width of the Gaussian function is σ_ij = 2, ∀i, j = 1, …, 6. The initial weight matrix is set to Ŵ_i(0) = 0_{r×p} and the initial gain to k̂_i(0) = 0. Let the initial angles of the six followers be, respectively, [−1, 1]^T, [0, 1.5]^T, [1, 1]^T, [−1, −1]^T, [0, −1.5]^T, and [1, −1]^T, and let the initial angle derivatives of the six followers be, respectively, [−0.05, 0.15]^T, [−0.25, 0.05]^T, [0.1, 0.05]^T, [−0.1, −0.15]^T, [0, −0.05]^T, and [0.15, 0]^T. Let the initial angles of the four leaders be, respectively, [−0.5, 0.5]^T, [0.5, 0.5]^T, [−0.5, −0.5]^T, and [0.5, −0.5]^T, and let the angle derivatives of the four leaders be, respectively, 0.05[sin(t), cos(t)]^T, 0.05[sin(t), cos(t)]^T, 0.05[cos(t), sin(t)]^T, and 0.05[cos(t), sin(t)]^T. For the containment control algorithm (31), the control parameters are chosen as K_i = 1.5I_2, α = 6, γ_i = 1.2, and δ_i = 0.8, for i = 1, …, 6. Fig. 9 shows the containment error of the six followers.
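As a sanity check on the simulation model, the sketch below implements the inertia and Coriolis matrices of the standard planar two-link arm (we assume the standard model form of [52]; the exact expressions below are our reconstruction of that form, not copied from the paper) with the parameters listed above, and numerically verifies property (P2), the skew symmetry of Ṁ − 2C:

```python
import numpy as np

# Parameters of the identical two-link arms stated in the text.
m1, m2, l1, l2 = 1.0, 0.8, 0.8, 0.6
lc1, lc2, J1, J2 = 0.4, 0.3, 0.0533, 0.024

def M(q):
    """Inertia matrix of the planar two-link arm (standard model, cf. [52])."""
    c2 = np.cos(q[1])
    m11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + J1 + J2
    m12 = m2*(lc2**2 + l1*lc2*c2) + J2
    return np.array([[m11, m12], [m12, m2*lc2**2 + J2]])

def C(q, qd):
    """Coriolis/centrifugal matrix in Christoffel form."""
    h = -m2*l1*lc2*np.sin(q[1])
    return np.array([[h*qd[1], h*(qd[0] + qd[1])], [-h*qd[0], 0.0]])

# Property (P2): Mdot(q) - 2 C(q, qd) is skew symmetric along trajectories.
q, qd = np.array([0.7, -0.4]), np.array([0.3, 1.1])
eps = 1e-6
# Chain rule: dM/dt = (dM/dq) qd, approximated by a central difference.
Mdot = (M(q + eps*qd) - M(q - eps*qd)) / (2*eps)
S = Mdot - 2*C(q, qd)
print(np.allclose(S, -S.T, atol=1e-6))  # -> True
```

The same check can be run at random (q, q̇) pairs; skew symmetry of Ṁ − 2C is exactly the property used to cancel the Coriolis term in the Lyapunov derivatives of Theorems 4.1 and 4.3.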
For the containment control algorithm (33), which does not use relative velocity measurements, the centers of the receptive fields are chosen to be distributed evenly in the range [−5, 5]² × [−5, 5]² × [−5, 5]², and the control parameters are chosen as α = 2, γ = 1.2, and ν = 0.02. Similarly, the initial states of the adaptive updating laws are all set to zero. Fig. 10 shows the containment error of the six followers. For the consensus problem without any leaders, the interaction among the agents is assumed to be the


same as Case 2 in Example One. The parameters of the neural networks and the initial states of the agents are chosen to be the same as those of the six followers in the containment control case for networked Lagrangian systems. The control parameters are chosen as K_i = I_2, α = 2, γ_i = 1, and δ_i = 0.8, for i = 1, …, 6. Fig. 11 shows the joint angles of the six agents.

Fig. 10. Containment error of the six followers with four leaders by using (33).

Fig. 11. Joint angles of the six agents without leaders by using (31).

VI. CONCLUSION

We have studied the distributed containment control problem for multiple second-order nonlinear systems and networked Lagrangian systems with multiple dynamic leaders in the presence of unknown nonlinearities and external disturbances under a general directed graph. We have proposed and analyzed distributed control algorithms, together with adaptive gain design strategies based on the approximation capability of neural networks, and presented a necessary and sufficient condition on the directed graph such that the containment error can be reduced as small as desired. As a byproduct, we have shown necessary and sufficient conditions on leaderless consensus for multiple second-order nonlinear systems and networked Lagrangian systems with unknown nonlinearities and external disturbances. Finally, simulation examples for both the distributed containment control problem with multiple leaders and the distributed leaderless consensus problem have been provided to show the effectiveness of the proposed control algorithms. Future work will consider the containment control problem for uncertain multiagent systems with collision avoidance.

REFERENCES


Jie Mei received the B.S. degree in information and computational science from Jilin University, Changchun, China, in 2007, and the Ph.D. degree in control science and engineering from the Harbin Institute of Technology, Harbin, China, in 2011. He was an exchange Ph.D. student supported by the China Scholarship Council with the Department of Electrical and Computer Engineering, Utah State University, Logan, UT, USA, from 2009 to 2011. He held research positions with the City University of Hong Kong, Hong Kong, and the University of California at Riverside, Riverside, CA, USA, from 2012 to 2014. He has been with the School of Mechanical Engineering and Automation, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China, since 2012. His current research interests include coordination of distributed multiagent systems.

Wei Ren received the B.S. degree in electrical engineering from Hohai University, Nanjing, China, in 1997, the M.S. degree in mechatronics from Tongji University, Shanghai, China, in 2000, and the Ph.D. degree in electrical engineering from Brigham Young University, Provo, UT, USA, in 2004. He was a Post-Doctoral Research Associate with the Department of Aerospace Engineering, University of Maryland at College Park, College Park, MD, USA, from 2004 to 2005. He was an Assistant Professor from 2005 to 2010 and an Associate Professor from 2010 to 2011 with the Department of Electrical and Computer Engineering, Utah State University, Logan, UT, USA. Since 2011, he has been with the Department of Electrical Engineering, University of California at Riverside, Riverside, CA, USA, where he is currently a Professor. He authored two books entitled Distributed Coordination of Multi-agent Networks (Springer-Verlag, 2011) and Distributed Consensus in Multi-vehicle Cooperative Control (Springer-Verlag, 2008). His current research interests include distributed control of multiagent systems and autonomous control of unmanned vehicles. Dr. Ren was a recipient of the National Science Foundation CAREER Award in 2008. He is currently an Associate Editor of Automatica, Systems and Control Letters, and the IEEE T RANSACTIONS ON C ONTROL OF N ETWORK S YSTEMS .


Bing Li received the Ph.D. degree in mechanical engineering from Hong Kong Polytechnic University, Hong Kong, in 2001. He became a Professor of Mechatronics in 2006. He is currently the Head of the School of Mechanical Engineering and Automation with the Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China. His current research interests include parallel manipulators and control, and mechanical vibration and control. Prof. Li is serving as an Associate Editor of the International Journal of Mechanisms and Robotic Systems. Bing Li is the corresponding author of this paper.


Guangfu Ma received the M.S. and Ph.D. degrees in electrical engineering from the Harbin Institute of Technology, Harbin, China, in 1987 and 1993, respectively. He became an Associate Professor and a Professor with the Harbin Institute of Technology, in 1992 and 1997, respectively, where he is currently a Professor with the Department of Control Science and Engineering. His current research interests include nonlinear control, satellite attitude control, and formation flying.
