
Brief Papers

Fast-Convergent Double-Sigmoid Hopfield Neural Network as Applied to Optimization Problems

Zekeriya Uykan, Senior Member, IEEE

Manuscript received March 9, 2012; revised January 25, 2013; accepted January 26, 2013. Date of publication February 21, 2013; date of current version April 5, 2013. The author is with the Control and Automation Engineering Department, Engineering Faculty, Dogus University, Istanbul 34722, Turkey (e-mail: [email protected]). Digital Object Identifier 10.1109/TNNLS.2013.2244099

Abstract—The Hopfield neural network (HNN) has been widely used in numerous optimization problems since the early 1980s. The convergence speed of the HNN (even in high gain) plays a critical role in various real-time applications. In this brief, we propose and analyze a generalized HNN that drastically improves the convergence speed of the network and thus makes it possible to benefit from the HNN's capabilities in solving optimization problems in real time. Computer simulations on the channel allocation problem in cellular radio systems, which is NP-complete and requires fast solutions because of its time-varying link gains, and on the associative memory problem confirm a dramatic improvement in convergence speed, obtained at the expense of a second nonlinear function in the proposed network.

Index Terms—Associative memory, channel allocation problem in cellular radio systems, continuous-time Hopfield neural network, convergence speed, Lyapunov function, optimization.

I. INTRODUCTION

The Hopfield neural network (HNN) [1], [2] has been an important focus of research since the early 1980s. Hopfield and Tank presented the application of the HNN to optimization problems in [2] by solving the well-defined traveling salesman problem. This application made the HNN a very popular model, and since then its applications have ranged from combinatorial optimization to image restoration, from various control engineering optimization problems to robotics, and from radio resource optimization in wireless networks to associative memory systems, among many others. The main idea of using the HNN in difficult optimization problems is relatively straightforward: formulate the cost function of the optimization problem in the form of the Lyapunov function of the HNN [3, pp. 5–18], and then let the HNN automatically find a solution. The Lyapunov function is given by

$$V(t) = -\frac{1}{2}\,\mathbf{f}_1^T(\mathbf{x})\,\mathbf{W}\,\mathbf{f}_1(\mathbf{x}) + \sum_{i=1}^{N} a_{ii}\int_{0}^{y_i} f_{1,i}^{-1}(\rho)\,d\rho - \mathbf{f}_1^T(\mathbf{x})\,\mathbf{B}\,\mathbf{u} - \mathbf{d}^T\,\mathbf{f}_1(\mathbf{x}) \tag{1}$$

where $\mathbf{x} \in \mathbb{R}^{N\times 1}$, $\mathbf{A}, \mathbf{W} \in \mathbb{R}^{N\times N}$, $\mathbf{B} \in \mathbb{R}^{N\times M}$, $\mathbf{u} \in \mathbb{R}^{M\times 1}$, $\mathbf{d} \in \mathbb{R}^{N\times 1}$, and $\mathbf{f}_1(\mathbf{x}) = [f_1(x_1)\; f_1(x_2)\; \cdots\; f_1(x_N)]^T$, in which $f_1(x_i)$ is a sigmoid function ($y_i = f_1(x_i)$ denotes the output of neuron $i$). For a detailed analysis of the Lyapunov function in (1), see [3, pp. 6–18]. For HNNs in high gain, that is, when the slope of the sigmoid function is higher than about 9, the integral term (the second term) is neglected [3, pp. 14–18]. In this brief, we follow the Lyapunov theory that has been widely used for proving the global stability of various recurrent/dynamic neural networks [4]–[6].

Various optimization problems in different disciplines, as mentioned above, are indeed formulated as (1), and the continuous-time HNN in high gain has been successfully applied to finding solutions to difficult real-time optimization problems such as analog-to-digital conversion [7], [8], job scheduling [9], quadratic assignment and other related NP-complete problems [10], dynamic resource allocation in wireless networks [11], mobile ad hoc network routing [12], [13], and image restoration, to name a few.

The convergence speed of the HNN naturally plays a critical role in real-time applications because of delay constraints. Furthermore, it is well known that: 1) the HNN does not guarantee the global solution for a given optimization problem and typically finds a local one, and 2) when the solution found is far from satisfactory, the HNN needs to be run again from other initial conditions in many applications. Therefore, the faster the HNN converges for the same initial conditions and the same weight matrix W, the better the performance the HNN provides in real-time optimization. For these reasons, our main motivation in this brief is to improve the convergence speed of the HNN for the same initial conditions and the same weight matrix W. We present a generalized HNN, called the double-sigmoid HNN, which drastically improves the convergence speed of the HNN.

The brief is arranged as follows. The proposed network and its stability analysis are presented in Section II, followed by computer simulation results in Section III. The conclusion is given in Section IV.
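Before proceeding, the high-gain form of (1) can be made concrete with a minimal numerical sketch (ours, not from the paper); the function names and toy dimensions are our own choices, and the integral term of (1) is dropped as discussed above.

```python
import numpy as np

def f1(x, k1=1.0, sigma1=10.0):
    """Sigmoid f_1(x) = k_1(-1 + 2/(1 + exp(-sigma_1 x))), applied elementwise."""
    return k1 * (-1.0 + 2.0 / (1.0 + np.exp(-sigma1 * x)))

def high_gain_energy(x, W, B, u, d):
    """High-gain approximation of (1): the integral term is neglected, leaving
    V = -0.5 f1(x)^T W f1(x) - f1(x)^T B u - d^T f1(x)."""
    y = f1(x)
    return -0.5 * y @ W @ y - y @ B @ u - d @ y

# Toy usage with a random symmetric W and no external input.
rng = np.random.default_rng(0)
N = 4
W = rng.standard_normal((N, N))
W = (W + W.T) / 2.0                      # symmetric, as (1) requires
B, u, d = np.zeros((N, 1)), np.zeros(1), np.zeros(N)
print(high_gain_energy(rng.standard_normal(N), W, B, u, d))
```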


II. DOUBLE-SIGMOID HNN

As compared to the standard HNN in Fig. 1(a), the proposed generalized HNN is shown in Fig. 1(b), and its corresponding differential equation is given by



$$\dot{\mathbf{x}} = \mathbf{f}_2\left(-\mathbf{A}\mathbf{x} + \mathbf{W}\,\mathbf{f}_1(\mathbf{x}) + \mathbf{B}\mathbf{u} + \mathbf{d}\right) \tag{2}$$

where $\mathbf{x} \in \mathbb{R}^{N\times 1}$, $\mathbf{A}, \mathbf{W} \in \mathbb{R}^{N\times N}$, $\mathbf{B} \in \mathbb{R}^{N\times M}$, $\mathbf{u} \in \mathbb{R}^{M\times 1}$, $\mathbf{d} \in \mathbb{R}^{N\times 1}$, and $\mathbf{f}_j(\mathbf{x}) = [f_j(x_1)\; f_j(x_2)\; \cdots\; f_j(x_N)]^T$, in which $f_j(x_i) = k_j\left(-1 + \frac{2}{1 + e^{-\sigma_j x_i}}\right)$, with $k_j, \sigma_j > 0$, is a sigmoid function ($j = 1, 2$ and $i = 1, 2, \ldots, N$). In (2), if $\mathbf{f}_2(\cdot)$ were the identity function, then (2) would reduce to the standard HNN in [1]. Because we introduce a second sigmoid function $\mathbf{f}_2(\cdot)$ in (2), we call the proposed network the double-sigmoid HNN (DS-HNN).
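To make the dynamics in (2) concrete, here is a minimal forward-Euler sketch (our own illustration, not the author's implementation); the step size, seed, and toy problem are our choices. Setting ds_hnn=False makes f2 the identity and recovers the standard HNN of [1].

```python
import numpy as np

def sig(x, k, sigma):
    """f_j(x) = k_j(-1 + 2/(1 + exp(-sigma_j x))), applied elementwise."""
    return k * (-1.0 + 2.0 / (1.0 + np.exp(-sigma * x)))

def simulate(x0, W, A, bu_d, dt=1e-4, steps=50_000, ds_hnn=True,
             k1=1.0, sigma1=10.0, k2=100.0, sigma2=2.0):
    """Forward-Euler integration of (2): x_dot = f2(-A x + W f1(x) + B u + d).
    bu_d stands for the constant vector B u + d. The step dt must be small
    relative to the gain k2*sigma2 for the discretization to be faithful."""
    x = np.asarray(x0, dtype=float).copy()
    traj = [x.copy()]
    for _ in range(steps):
        phi = -A @ x + W @ sig(x, k1, sigma1) + bu_d   # net input of each neuron
        x = x + dt * (sig(phi, k2, sigma2) if ds_hnn else phi)
        traj.append(x.copy())
    return np.array(traj)

# Toy comparison on a random symmetric W (A = I, B u + d = 0).
rng = np.random.default_rng(1)
N = 10
W = rng.standard_normal((N, N))
W = (W + W.T) / 2.0
x0 = 0.1 * rng.standard_normal(N)
hnn_traj = simulate(x0, W, np.eye(N), np.zeros(N), ds_hnn=False)
ds_traj = simulate(x0, W, np.eye(N), np.zeros(N), ds_hnn=True)
```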



Fig. 1. (a) HNN. (b) Proposed DS-HNN.

Theorem 1: Suppose that W and B are symmetric matrices, A is a positive diagonal matrix, and u and d are constant vectors of appropriate dimensions in (2). If all the equilibrium points are isolated, then the system is stable and every solution of (2) converges to an equilibrium point as time goes to infinity.

Proof: Let us consider the Lyapunov function of the HNN in (1) [2]. For a detailed analysis of the Lyapunov function in (1), see [3, pp. 6–18]. For HNNs in high gain, that is, $\sigma_1 > 9$, the second term is neglected [3, pp. 14–18]. Taking the derivative of $V(t)$ with respect to time and using the symmetry of W and B gives

$$\dot{V}(t) = -\mathbf{f}_1^T(\mathbf{x})\,\mathbf{W}\,\frac{d\mathbf{f}_1}{dt} + \sum_{i=1}^{N} a_{ii}\,\frac{df_{1,i}}{dt}\,\frac{d}{df_{1,i}}\int_{0}^{y_i} f_{1,i}^{-1}(\rho)\,d\rho - \mathbf{u}^T\mathbf{B}^T\,\frac{d\mathbf{f}_1(\mathbf{x})}{dt} - \mathbf{d}^T\,\frac{d\mathbf{f}_1(\mathbf{x})}{dt}$$

which, using (2), can be written as

$$\dot{V}(t) = -\left[\mathbf{f}_2^{-1}(\dot{\mathbf{x}})\right]^T\frac{d\mathbf{f}_1(\mathbf{x})}{dt} = -\sum_{i=1}^{N}\frac{df_1(x_i)}{dx_i}\,\dot{x}_i\,f_2^{-1}(\dot{x}_i) \tag{5}$$

where $f_1(\cdot)$ and $f_2(\cdot)$ are (possibly different) sigmoid functions as defined before. Since, in the work regime, $df_1(x_i)/dx_i > 0$ for a sigmoid function, and the inverse $f_2^{-1}(\dot{x}_i)$ has the same sign as its argument $\dot{x}_i$, every term of the sum in (5) is nonnegative, and we obtain $\dot{V}(t) \le 0$. Because the equilibrium points are isolated, every solution of (2) converges to one of them as time goes to infinity, which completes the proof.

Theorem 2: The convergence speed of the proposed DS-HNN can be drastically faster than that of the standard HNN by choosing the second sigmoid function $f_2(\cdot)$ such that $|f_2(a)| \gg |a|$ for $a \neq 0$ in the work regime, which can easily be met by relatively high $k_2$ and $\sigma_2$.

Proof: It is well known that the derivative of the Lyapunov function in (1) with respect to time for the standard HNN is obtained as [2], [3]

$$\dot{V}(t) = -\sum_{i=1}^{N}\frac{df_1(x_i)}{dx_i}\,\dot{x}_i^2$$

Let $\varphi_i(t)$ denote the net input of neuron $i$, that is, the argument of $\mathbf{f}_2(\cdot)$ in (2). For the standard HNN, $\dot{x}_i = \varphi_i$, whereas for the DS-HNN, $\dot{x}_i = f_2(\varphi_i)$ and $f_2^{-1}(\dot{x}_i) = \varphi_i$ in (5). If the second sigmoid satisfies

$$f_2\big(\varphi_i(t)\big)\,\varphi_i(t) > \varphi_i^2(t) \tag{10}$$

for $\varphi_i(t) \neq 0$ in the work regime, then $V(t)$ decreases faster than in the standard HNN, and the convergence speed is drastically improved (since $V(t)$ itself is lower bounded). There are infinitely many choices of $\sigma_2$ and $k_2$ that satisfy (10) for any limited work regime. This completes the proof.

Equation (10) implies that it is possible to obtain a very high convergence speed with the proposed network, provided that the gain of $f_2(\cdot)$ is very high. Relatively high $\sigma_2$ and $k_2$ satisfy this condition. The simulation results in Section III confirm this observation.
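As a quick numerical sanity check (ours, not the paper's), the parameter pair σ2 = 2, k2 = 100 used later in Section III can be verified to satisfy (10) over a sample work regime |φ| ≤ 5:

```python
import numpy as np

def f2(a, k2=100.0, sigma2=2.0):
    """Second sigmoid f_2(a) = k_2(-1 + 2/(1 + exp(-sigma_2 a)))."""
    return k2 * (-1.0 + 2.0 / (1.0 + np.exp(-sigma2 * a)))

phi = np.linspace(-5.0, 5.0, 1001)
phi = phi[phi != 0.0]                    # (10) is only required for phi != 0
print(np.all(f2(phi) * phi > phi ** 2))  # True: f2(phi)*phi exceeds phi^2
```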

III. SIMULATION RESULTS

As case studies, we examine two optimization problems: the channel allocation problem in cellular radio systems in Example 1 and the associative memory design problem in Example 2.

A. Example 1

The channel allocation problem in cellular radio systems can be modeled as a maxCut graph partitioning problem [14], which is NP-complete. As in [14], we focus on the two-channel case in this brief because: 1) no efficient exact solution is known for the two-channel case, as it is an NP-complete problem [15, p. 359], [16], and 2) if the number of channels is greater than two, the same algorithm can be applied iteratively to find the solution, as explained in [15, p. 378]. We therefore focus on the two-channel case and borrow the code division multiple access system in [14] for the computer simulations. The parameters are chosen as σ1 = 10 and k1 = 1 for the high-gain HNN; moreover, following (10), we set σ2 = 2 and k2 = 100, which ensures |f2(a)| ≫ |a| for the DS-HNN in (2).

1) Example 1.a: In Example 1.a, there are 20 base stations (BSs) located on a straight line. A snapshot of the radio network is shown in Fig. 2, where BS locations are indicated as stars in squares, and mobile station (MS) locations with their channel allocations are shown as blue triangles and red circles. The weight matrix W is the same for both the HNN and the DS-HNN and is obtained from the link gains, with B = 0, d = 0, and A = I. From Fig. 2, we see that the HNN solution is the globally optimum one for this scenario (MSs with even numbers are assigned to one channel and those with odd numbers to the other). The evolutions of the states are given in Figs. 3 and 4 for the standard HNN and the proposed DS-HNN, respectively. Comparing Figs. 3 and 4, we see that the convergence speed of the DS-HNN is drastically faster than that of the standard (high-gain) HNN. The same feature can be seen in Fig. 5, which shows the evolution of the distance between the state vector and the equilibrium point (denoted x*) with respect to time for the HNN and the DS-HNN. The plots in Figs. 3–5 are typical among the 2000 snapshots examined.

Fig. 2. Snapshot of the radio network in Example 1.a. The MS locations are numbered from 1 to 20.

Fig. 3. Evolution of the states of the high-gain HNN for the network in Fig. 2.

Fig. 4. Evolution of the states of the proposed DS-HNN for Fig. 2.

Fig. 5. Evolution of the distance between the state vector and the equilibrium point with respect to time for the HNN and DS-HNN results in Figs. 3 and 4.

2) Example 1.b: In Example 1.b, the cellular radio system is two-dimensional, with 25 BSs located on a 5 × 5 grid, as shown in Fig. 6, where the channel allocation results are also given. All initial state values are random; the globally optimal solution is achieved by the HNN, as seen in Fig. 6. Typical state evolutions of the HNN and the DS-HNN are shown in Figs. 7 and 8, and the evolution of their distances to the equilibrium point (x*) with respect to time is shown in Fig. 9. Comparing Fig. 7 with Fig. 8, we clearly see the dramatic improvement in convergence speed of the DS-HNN over the standard HNN (which is already in high gain): while it takes more than 6 time units for the standard HNN to find the (globally) optimum solution, it takes less than 0.1 time unit for the proposed DS-HNN to reach the same result. The same behavior can be observed in Fig. 9.

3) Example 1.c: In Example 1.c, we examine random BS location scenarios for 10, 16, 22, and 28 BSs, with 2000 snapshots for each. The same drastic convergence speed improvement is obtained by the proposed DS-HNN over the HNN in every case. Furthermore, we also examine the average SINR (signal-to-interference-plus-noise ratio) per MS for the HNN and the DS-HNN. The SINR values normalized with respect to those of the HNN are given in Table I. The results in Table I, all of which are slightly positive, confirm that improving the convergence speed does not deteriorate the HNN's capability in solving the optimization problem; the fast convergence is achieved at the cost of using a second sigmoid function.

Fig. 6. Snapshot of the radio network in Example 1.b. The 25 MS locations are indicated by numbers from 1 to 25.

Fig. 7. Evolution of the states of the high-gain HNN for Fig. 6.

Fig. 8. Evolution of the states of the proposed DS-HNN for Fig. 6.

Fig. 9. Evolution of the distance between the state vector and the equilibrium point with respect to time for the HNN and DS-HNN results in Figs. 7 and 8.

TABLE I
NORMALIZED AVERAGE SINRS IN DECIBELS WITH RESPECT TO THOSE OF THE HNN CASE IN EXAMPLE 1.C (N IS THE NUMBER OF BSS)

  N = 10      N = 16      N = 22      N = 28
 +0.14 dB    +0.06 dB    +0.01 dB    +0.01 dB
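In the spirit of the distance plots in Figs. 5 and 9, though on a toy random problem rather than the paper's CDMA scenario, convergence times can be compared as sketched below; the interference-like weights are hypothetical, and the `simulate` function from the Section II sketch is assumed to be in scope.

```python
import numpy as np

def time_to_converge(traj, dt, eps=1e-2):
    """First time at which ||x(t) - x*|| < eps, taking the final state as x*."""
    dist = np.linalg.norm(traj - traj[-1], axis=1)
    hits = np.nonzero(dist < eps)[0]
    return hits[0] * dt if hits.size else float("inf")

# Hypothetical maxCut-flavored coupling: negative symmetric weights, so
# minimizing (1) favors splitting strongly coupled "MSs" across channels.
rng = np.random.default_rng(2)
N = 20
W = -np.abs(rng.standard_normal((N, N)))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
x0 = 0.05 * rng.standard_normal(N)
dt = 1e-4
hnn_traj = simulate(x0, W, np.eye(N), np.zeros(N), dt=dt, ds_hnn=False)
ds_traj = simulate(x0, W, np.eye(N), np.zeros(N), dt=dt, ds_hnn=True)
print(time_to_converge(hnn_traj, dt), time_to_converge(ds_traj, dt))
```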

4) Comments on the HNN's Use in Wireless Network Resource Optimization: The continuous-time HNN has been successfully applied to various radio resource management problems in cellular radio systems [17]–[19] and mobile ad hoc networks [13], [20], [21], all of which are NP-complete problems. HNNs can find suboptimal solutions in a few microseconds [18], which is fast enough to establish a new resource allocation on a frame-by-frame basis in current wireless communication systems at relatively low mobile speeds. The faster an optimal radio resource allocation is found, the better the performance of the wireless network, especially at relatively high mobile speeds, because the coherence time of the wireless channel is inversely proportional to the mobile speed [22]. Thus, the faster the HNN converges as the resource optimizer, the higher the mobile speeds that can be handled; and the proposed DS-HNN drastically improves the convergence speed of the HNN, as seen from the results above.

B. Example 2

In this example, we compare the convergence speeds of the HNN and the proposed DS-HNN on the associative memory problem. It is well known that the performance index of the associative memory problem can be made equal to the Lyapunov function (1) of the high-gain HNN; for details, see [3, pp. 16–18]. The desired patterns represent the numbers 1–4, as shown in Fig. 10(a) (the number of neurons is 25).

Fig. 10. (a) Desired patterns (numbers 1, 2, 3, and 4). (b) Desired and a distorted pattern (with HD = 5) and the results of the HNN and DS-HNN.

The weight matrix W used in both the HNN and the proposed DS-HNN is found by the standard Hebb rule with the desired patterns; see [3, pp. 18–19] for details on how Hebb learning determines the weight matrix of the HNN as applied to the associative memory problem (here B = 0, A = I, and d = 0). The sigmoid parameters are chosen as in Example 1: σ1 = 10 and k1 = 1 for the high-gain HNN, and σ2 = 2 and k2 = 100 for the DS-HNN. We observe that this parameter selection, made according to (10) after only a few trials, gives excellent results in all our case studies. As stated above, there are infinitely many choices of σ2 and k2 that satisfy (10); optimal parameter selection, however, is beyond the scope of this brief.
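A minimal sketch (ours) of the standard Hebb rule for building W from bipolar patterns, in the spirit of [3, pp. 18–19]; the random 25-pixel patterns are hypothetical stand-ins for the digits of Fig. 10(a).

```python
import numpy as np

def hebb_weights(patterns):
    """Standard Hebb rule W = sum_p p p^T with zero diagonal, for bipolar
    patterns p in {-1, +1}^N."""
    P = np.asarray(patterns, dtype=float)   # shape: (num_patterns, N)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W

rng = np.random.default_rng(3)
patterns = rng.choice([-1.0, 1.0], size=(4, 25))  # stand-ins for digits 1-4
W = hebb_weights(patterns)

# Distort one pattern at Hamming distance 5, to be recalled by the network
# dynamics (e.g., the `simulate` sketch from Section II, with B = 0, A = I,
# and d = 0 as in the text).
x0 = patterns[1].copy()
flip = rng.choice(25, size=5, replace=False)
x0[flip] *= -1.0
```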

A sample randomly distorted version of pattern 2, at Hamming distance (HD) = 5, is given in the upper right corner of Fig. 10(b). The results of the HNN and the DS-HNN are shown in the second row of Fig. 10(b); both networks manage to restore pattern 2. The evolutions of the states of the HNN and the DS-HNN (for the same initial condition) are shown in Fig. 11(a) and (b), respectively. Comparing Fig. 11(a) and (b), we see that while the high-gain HNN reaches its steady-state solution after around 5 s, the proposed DS-HNN reaches the very same steady-state solution in less than 0.2 s.

Fig. 11. Evolution of the states for (a) the HNN and (b) the DS-HNN for the same initial condition.

The evolutions of the distance between the state vector and the equilibrium point (x*) are plotted in Fig. 12 for both the HNN and the DS-HNN. The figure again shows the improved convergence speed of the proposed DS-HNN, due to the second sigmoid, as compared to the HNN. We have examined hundreds of randomly distorted patterns with HD = 5 for each desired pattern and observed results similar to those in Figs. 11 and 12.

Fig. 12. Evolution of the distance between the state vector and the equilibrium point with respect to time for the HNN and DS-HNN results in Fig. 11(a) and (b).

1) Comments on the Basins of Equilibrium Points of the HNN and DS-HNN: In this brief, we introduce a second nonlinear function into the standard HNN and mathematically show its stability by the Lyapunov method. Generally speaking, introducing a new sigmoid into the standard HNN might change the basins of attraction of the equilibrium points. A detailed analysis (as in [23]) of how the basins change is a challenging task because of the second sigmoid and is beyond the scope of this brief. As far as the application of the HNN to optimization problems is concerned, the important point is that both the standard HNN and the proposed DS-HNN minimize the very same energy (cost) function. Because the basins of the equilibrium points might change due to the second sigmoid function, the results might differ as well; we give such an example in the Appendix. A pattern distorted with HD = 7 from pattern 3 is applied to both the standard HNN and the proposed DS-HNN in Fig. 13. As seen in Fig. 13(a), while the HNN fails to recover the pattern, the proposed DS-HNN succeeds. Furthermore, as the energy traces in Fig. 13(d) show, the Lyapunov function (i.e., the energy or cost function) is minimized further by the proposed DS-HNN than by the HNN. In other words, the proposed DS-HNN not only improves the convergence speed but also further minimizes the cost function in this example. In this brief, though, we focus only on the convergence speed of the DS-HNN, in which the second sigmoid is also in high gain.

Fig. 13. (a) Desired and distorted pattern (with HD = 7) and the results of the HNN and DS-HNN. (b)-(c) Evolutions of the states for the HNN and DS-HNN, respectively. (d) Energy function in (1) for the HNN and DS-HNN.

IV. CONCLUSION

We presented and analyzed a generalized HNN that drastically improves the convergence speed of the network (which is already in high gain) for the same weight matrices and the same initial conditions, thus allowing better use of the HNN's capabilities in solving optimization problems in real time. As case studies, simulation results for the channel allocation problem in cellular radio systems (equivalently, the maxCut graph partitioning problem), which is NP-complete, as well as for the associative memory problem, confirmed the drastically improved convergence speed as compared to the high-gain HNN. As a future research direction, we will investigate how the proposed model can be realized in VLSI. Furthermore, examining how to optimally choose the extra parameters σ2 and k2 for general problems, and their effect on convergence, is another interesting direction. In this brief, we fixed these parameters after only a few trials, which resulted in excellent convergence performance in all simulation cases in Section III.

APPENDIX

Results for pattern 3 in Example 2 with HD = 7 are shown in Fig. 13.

REFERENCES

[1] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Nat. Acad. Sci. USA, vol. 79, no. 8, pp. 2554–2558, 1982.
[2] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern., vol. 52, no. 3, pp. 141–152, Jul. 1985.
[3] M. T. Hagan, H. B. Demuth, and M. Beale, Neural Network Design. Boston, MA, USA: PWS Publishing, 1996.
[4] A. Balavoine, J. Romberg, and C. J. Rozell, "Convergence and rate analysis of neural networks for sparse approximation," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 9, pp. 1377–1389, Sep. 2012.
[5] Q. Zhu and J. Cao, "Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 3, pp. 467–479, Mar. 2012.
[6] Y. Shen and J. Wang, "Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 1, pp. 87–96, Jan. 2012.
[7] D. W. Tank and J. J. Hopfield, "Simple 'neural' optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit," IEEE Trans. Circuits Syst., vol. CAS-33, no. 5, pp. 533–541, May 1986.
[8] V. Chande and P. G. Poonacha, "On neural networks for analog to digital conversion," IEEE Trans. Neural Netw., vol. 6, no. 5, pp. 1269–1274, Sep. 1995.
[9] W. Wan-Liang, X. Xin-Li, and W. Qi-Di, "Hopfield neural networks approach for job shop scheduling problems," in Proc. IEEE Int. Symp. Intell. Control, Oct. 2003, pp. 935–940.
[10] C. Bousoño-Calzón and M. R. W. Manning, "The Hopfield neural network applied to the quadratic assignment problem," Neural Comput. Appl., vol. 3, no. 2, pp. 64–72, 1995.
[11] J. F. Monserrat, D. Calabuig, L. Rubio, and N. Cardona, "Hopfield neural-network-based dynamic resource allocation scheme for non-real-time traffic in wireless networks," Int. J. Commun. Syst., vol. 22, no. 2, pp. 135–158, Feb. 2009.
[12] C. W. Ahn, R. S. Ramakrishna, C. G. Kang, and I. C. Choi, "Shortest path routing algorithm using Hopfield neural network," Electron. Lett., vol. 37, no. 19, pp. 1176–1178, 2001.
[13] M. Sheikhan and E. Hemmati, "High reliable disjoint path set selection in mobile ad-hoc network using Hopfield neural network," IET Commun., vol. 5, no. 11, pp. 1566–1576, 2011.

[14] Z. Uykan, "Spectral based solutions for (near) optimum channel/frequency allocation," in Proc. 18th Int. Conf. Syst., Signals Image Process., Sarajevo, Bosnia and Herzegovina, Jun. 2011, pp. 1–4.
[15] M. E. J. Newman, Networks: An Introduction. Oxford, U.K.: Oxford Univ. Press, 2010, pp. 358–380.
[16] I. Stojmenovic, Ed., Handbook of Wireless Networks and Mobile Computing. New York, NY, USA: Wiley, 2002.
[17] O. Lazaro and D. Girma, "Real-time operational aspects of Hopfield neural network based dynamic channel allocation scheme," Electron. Lett., vol. 40, no. 18, pp. 1141–1143, 2004.
[18] C. W. Ahn and R. S. Ramakrishna, "QoS provisioning dynamic connection-admission control for multimedia wireless networks using Hopfield neural networks," IEEE Trans. Veh. Technol., vol. 53, no. 1, pp. 106–117, Jan. 2004.
[19] D. Calabuig, J. F. Monserrat, D. G. Barquero, and N. Cardona, "Hopfield neural network algorithm for dynamic resource allocation in WCDMA systems," in Proc. IEEE 3rd Int. Symp. Wireless Commun. Syst., Valencia, Spain, Sep. 2006, pp. 40–44.
[20] D. Calabuig, J. F. Monserrat, D. G. Barquero, and O. Lazaro, "User bandwidth usage-driven HNN neuron excitation method for maximum resource utilization within packet-switched communication networks," IEEE Commun. Lett., vol. 10, no. 11, pp. 766–768, Nov. 2006.
[21] G. Joya, M. A. Atencia, and F. Sandoval, "Hopfield neural network applied to optimization problems: Some theoretical and simulation results," in Biological and Artificial Computation: From Neuroscience to Technology (Lecture Notes in Computer Science, vol. 1240), 1997, pp. 556–565.
[22] T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd ed. Upper Saddle River, NJ, USA: Prentice-Hall, 2002.
[23] Z. Zeng and W. X. Zheng, "Multistability of neural networks with time-varying delays and concave-convex characteristics," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 2, pp. 293–305, Feb. 2012.

