IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 26, NO. 7, JULY 2015


Phase Oscillatory Network and Visual Pattern Recognition

Rosangela Follmann, Elbert E. N. Macau, Epaminondas Rosa, Jr., and José R. C. Piqueira

Abstract— We explore a properly interconnected set of Kuramoto-type oscillators that results in a new associative-memory network configuration, which includes second- and third-order terms in the Fourier expansion of the network's coupling. Investigation of the response of the network to different external stimuli indicates an increase in the network's capability for coding and information retrieval. Comparison of the network output with that of an equivalent experiment in which human subjects recognize perturbed binary patterns shows comparable results between the two approaches. We also discuss the enhanced storage capacity of the network.

Index Terms— Associative memory, information processing, phase oscillator, synchronization.

I. INTRODUCTION

Understanding the mechanism of information representation and processing in the brain is a central topic in neuroscience. It is known that the self-organization of neuronal networks caused by local oscillatory activity, along with the complexity of synaptic connections, plays a key role during information processing [1]. The remarkable Hopfield network model with associative memory [2] has provided essential insights into neuronal computation. In the Hopfield model, each element, or neuron, is assigned a binary unit, and the strengths of the connections between neurons are quantified in terms of weights. When subjected to an external stimulus, the network retrieves a pattern from a set of predetermined memories, mimicking the visualization of an image, i.e., the pattern that most closely resembles the external stimulus. This is accomplished based on an energy-minimizing criterion: an energy function associated with the network states is introduced so that local minima of this function correspond to stable local states that allow information storage. The associative memory component is inserted with the inclusion of memory vectors, and the capacity of the network is established by the ratio between the number of stored patterns and the number of neurons. It gives a measure, per neuron, of the maximum number of patterns that the network can memorize. The Hopfield network possesses, for an allowed small error, a capacity of 0.138, which means that out of every 1000 neurons the network is capable of retrieving 138 vectors. However, for recovery of patterns with no error, the capacity is reduced to 1/log N, where N is the number of neurons in the network [3]. Since its introduction, the Hopfield network has become of central importance in studies in a wide range of areas. One point of

Manuscript received October 8, 2013; accepted July 30, 2014.
Date of publication August 15, 2014; date of current version June 16, 2015. This work was supported by the São Paulo Research Foundation-Fundação de Amparo à Pesquisa do Estado de São Paulo, under Grant 2011/13871-4, Grant 2012/12555-4, and Grant 2011/50151-0. The work of E. E. N. Macau was supported by the National Council for Scientific and Technological Development. R. Follmann and J. R. C. Piqueira are with the Department of Telecommunications and Control Engineering, Polytechnic School of University of São Paulo, São Paulo 05508-010, Brazil (e-mail: [email protected]; [email protected]). E. E. N. Macau is with the Brazilian National Institute for Space Research, São José dos Campos 12227-010, Brazil (e-mail: [email protected]). E. Rosa is with the Department of Physics, Illinois State University, Normal, IL 61761 USA (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2014.2345572

particular interest concerns the spontaneous capability of large quantities of interacting individual neurons for performing computational work associated with synchronous states. Synchrony is widely observed in nature, particularly phase synchronization [4], as, for example, in the unison flashing of fireflies [5] or the synchronous chirping of crickets [6]. In addition, the phenomenon is essential to the normal functioning of the human organism, as in pacemaker cells in the cardiovascular system [7] and in the insulin-secreting cells of the pancreas [8], and it is also present in pathologies such as Parkinson's disease and epilepsy [9], [10]. Moreover, neuronal synchronization facilitates synaptic plasticity during cognitive processes [11], [12]. The presence of synchronized oscillations in the brain has motivated studies using oscillatory network models with associative memories based on temporal coding of information [13]–[15]. Usually, those models consist of coupled oscillators interacting with each other according to a learning rule, with the information coded as phase-locked oscillations. One of the most widely used mathematical models for oscillatory associative memory is based on the Kuramoto oscillator [16]. The dynamics of the network is described in terms of the phase equations

θ̇_i = ω_i + Σ_{j=1}^N C_ij sin(θ_j − θ_i),  i = 1, …, N   (1)

where θ_i represents the phase of the ith oscillator and C_ij is the coupling matrix. It has been shown [17] that in this kind of network the capacity is 0.042, which is approximately one third of that of the Hopfield network. In the case of stable solutions, the estimated capacity for information recovery with no error is 2/N patterns per neuron [18], [19]. An advantage of the Kuramoto-based network is its implementability in a number of oscillatory devices, such as phase-locked loop circuits [20]–[22], lasers [23], and MEMS resonators [24]. The introduction of a second-order mode of strength ε in the coupling term, for example, has been shown to enhance the network's capacity to at least 2ε²/log N [13], [25].¹ Here, we show that further extending the Kuramoto model, by including second- and third-order modes in the Fourier expansion of the coupling, results in substantial improvement in both the information retrieval and the storage capability of the network. In addition, we show that the outcome of a binary pattern retrieval experiment with 20 volunteer subjects is consistent with the network output.

II. NETWORK MODEL

Exploring the idea of large collections of simple elements being capable of performing complex tasks, we use phase oscillators to represent the processing units (neurons) in the network. Here, we concentrate on the case where the natural frequencies of all oscillators are identical (ω_i = ω) and consider the change of variable

¹The model with a second-order Fourier mode is given by θ̇_i = Σ_{j=1}^N C_ij sin(θ_j − θ_i) + (ε/N) Σ_{j=1}^N sin 2(θ_j − θ_i), where ε is a parameter and C_ij is the strength of the coupling from oscillator j to i, which is given by Hebb's learning rule. For more details, see [13], [25].
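As an aside to (1): its retrieval dynamics can be sketched in a few lines. The snippet below is our own illustration, not code from this brief; the network size, seed, and time step are arbitrary choices. It stores a single pattern via the Hebbian rule, absorbs the identical natural frequencies into a rotating frame, and integrates (1) by the explicit Euler method until the phases lock and the overlap with the stored pattern approaches 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
xi = rng.choice([-1, 1], size=N)         # a single stored binary pattern
C = np.outer(xi, xi) / N                 # Hebbian coupling: C_ij = xi_i * xi_j / N
theta = rng.uniform(0.0, np.pi, size=N)  # random initial phases

dt = 0.05
for _ in range(4000):                    # explicit Euler integration of (1), rotating frame
    d = theta[None, :] - theta[:, None]  # d[i, j] = theta_j - theta_i
    theta = theta + dt * (C * np.sin(d)).sum(axis=1)

# overlap with the stored pattern: |sum_j xi_j exp(i*theta_j)| / N -> 1 at retrieval
m = np.abs(np.sum(xi * np.exp(1j * theta))) / N
print(round(m, 3))
```

After integration the phases split into two groups differing by π, exactly the binary coding described in Section II, and the overlap is close to unity.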

2162-237X © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


θ_i → θ_i + ωt. The dynamics of each neuron is then given by

θ̇_i = Σ_{j=1}^N C_ij sin(θ_j − θ_i) + (1/N) Σ_{j=1}^N [η1 sin 2(θ_j − θ_i) − η2 sin 3(θ_j − θ_i)]   (2)

where C_ij is the coupling strength from oscillator j to i, and η1 and η2 are parameters that set the strength of the second and third terms of (2), respectively. The model presented in (2) adds the term sin 3(θ_j − θ_i) to a previous phase oscillatory model introduced in [13] and [25]. This new term corresponds to a third-order mode of the Fourier components.

The matrix C expresses the plasticity of the network regarding the associative memory given by the Hebbian rule [26]. Each element of the matrix C is then C_ij = (1/N) Σ_{μ=1}^p ξ_i^μ ξ_j^μ, where ξ^μ = (ξ_1^μ, …, ξ_N^μ), with ξ_i^μ = ±1, μ = 1, …, p, and i = 1, …, N, representing a set of p stored patterns. The stored patterns ξ^μ are randomly and independently distributed with equal probabilities P(ξ_i^μ = 1) = P(ξ_i^μ = −1) = 1/2. Overall, the memory matrix C can be understood as a mechanism that increases the synaptic efficacy as a function of the existing correlation between pre- and postsynaptic activities.

Equation (2) has 2^N fixed-point solutions that correspond to all possible binary patterns of size N. Considering v = (v_1, …, v_N)^T to be one of these binary patterns, there is a solution θ(v) corresponding to the pattern v, characterized by

|θ_i − θ_j| = 0 if v_i = v_j,  |θ_i − θ_j| = π if v_i ≠ v_j   (3)

which represents the phase-locked oscillatory solutions. The patterns are coded in the phase deviations of the oscillators. The symmetry of the connection matrix C guarantees that (2) can be written as a gradient system dθ_i/dt = −∂L/∂θ_i with energy function [20], [27]

L(θ; η, C) = −(1/2) Σ_{i,j=1}^N C_ij cos(θ_i − θ_j) − (1/(12N)) Σ_{i,j=1}^N [3η1 cos 2(θ_i − θ_j) − 2η2 cos 3(θ_i − θ_j)].   (4)

The dynamics of the network is such that, starting from an initial state (external stimulus), any solution will eventually evolve toward a fixed point of the system located at a local minimum of the energy function. Therefore, when the evolving oscillators' energies converge to a minimum, a memorized pattern corresponding to a fixed-point solution is retrieved. The energy function can be rewritten as a function of the memorized patterns, and an order parameter can be defined by

m_μ(θ) = (1/N) |Σ_{j=1}^N ξ_j^μ e^{iθ_j}|,  μ = 1, …, p   (5)

as a measure of the collective dynamics of the network. This parameter m_μ is called the superposition, or overlap, and measures the proximity of the solution to the memorized pattern ξ^μ.

Fig. 1. Distribution of maximum eigenvalues, λ_max, for three types of solutions θ(v): (i) memorized patterns; (ii) patterns with a single bit error; (iii) random patterns. λ_max = 3η indicates the borderline of stability.

Fig. 2. Memories visualized in 10 × 10 images.

The stability of a solution θ(v) is determined by the eigenvalues of the Jacobian matrix, which in the case of N oscillators with N variables is given by

J(θ_1, …, θ_N) = [∂y_i/∂θ_k],  i, k = 1, …, N   (6)

where y_1 = f_1(θ_1, …, θ_N) = θ̇_1, …, y_N = f_N(θ_1, …, θ_N) = θ̇_N. The elements J_ik of the Jacobian matrix are given by

J_ik = C_ik cos(θ_k − θ_i) + (1/N) [2η1 cos 2(θ_k − θ_i) − 3η2 cos 3(θ_k − θ_i)], if i ≠ k   (7)

J_ik = −Σ_{j=1}^N C_ij cos(θ_j − θ_i) − (1/N) Σ_{j=1}^N [2η1 cos 2(θ_j − θ_i) − 3η2 cos 3(θ_j − θ_i)], if i = k.   (8)
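A minimal numerical check of this stability structure can be sketched as follows. This is our own illustration, not the authors' code: the network size, number of patterns, seed, and η = 0.6 are arbitrary choices, and the diagonal (8) is implemented with the sum running over all j, as printed. It evaluates the largest eigenvalue of the Jacobian (7)-(8) at a phase-locked solution coding a stored pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 200, 6                                # network size and number of stored patterns
xi = rng.choice([-1, 1], size=(p, N))
C = (xi.T @ xi) / N                          # Hebbian rule: C_ij = (1/N) sum_mu xi_i^mu xi_j^mu

def lambda_max(theta, eta1, eta2):
    """Largest eigenvalue of the Jacobian (7)-(8) at the phase configuration theta."""
    d = theta[None, :] - theta[:, None]      # d[i, k] = theta_k - theta_i
    off = C * np.cos(d) + (2 * eta1 * np.cos(2 * d) - 3 * eta2 * np.cos(3 * d)) / N
    J = off.copy()
    np.fill_diagonal(J, -off.sum(axis=1))    # diagonal elements per (8)
    return np.linalg.eigvalsh(J).max()       # J is symmetric, so eigvalsh applies

theta = np.where(xi[0] == 1, 0.0, np.pi)     # solution coding pattern xi^1, per (3)
lm_plain = lambda_max(theta, 0.0, 0.0)       # pure Kuramoto coupling
lm_modes = lambda_max(theta, 0.6, 0.6)       # with second and third modes included
print(round(lm_plain, 3), round(lm_modes, 3))
```

With the higher-order modes switched off, the stored pattern has a positive largest eigenvalue (unstable); with η1 = η2 = 0.6 it becomes negative, consistent with Fig. 1.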

The eigenvalues λ are obtained by solving |J − λI| = 0, where I is the identity matrix. A solution is said to be stable if all eigenvalues are negative. This stability condition can be expressed in terms of the maximum eigenvalue of the matrix J, denoted by λ_max. For the stability analysis we consider the particular case η1 = η2 = η. Fig. 1 shows the distribution of maximum eigenvalues λ_max for three types of solutions θ(v): 1) memorized patterns; 2) patterns with a single bit error; and 3) random patterns. For this simulation we generated 500 samples, considering a network with N = 1000 oscillators and p = 60 binary patterns chosen randomly with equal probability. We observe that for η = 0 the eigenvalues are always positive, even when the solutions are chosen as the memorized patterns, which corresponds to unstable solutions. This takes place for any combination of p > 2 and N [17], [25]. Similarly to the model presented in footnote 1, the second and third modes in the associative memory network shift the eigenvalues of each solution corresponding to a stored pattern, so that λ_max = 3η indicates the borderline of stability.

III. PATTERN RETRIEVAL

To demonstrate the retrieval capability of the oscillatory network described above, we introduce the letters of the word MIND as


Fig. 3. Evolution of the phase oscillators during the retrieval process of the letter N with m initial = 0.85, shown at every 40 time units. Each cell is shaded in terms of the phase deviation of the corresponding oscillator.

Fig. 5. Evolution of the phase oscillators during the retrieval process of the distorted letter N with m_initial = 0.65, shown at every 40 time units. Each cell is shaded in terms of the phase deviation of the corresponding oscillator.

Fig. 4. Evolution of the oscillators' energies L_i during the recovery process of the letter N with (a) m_initial = 0.85 and (b) m_initial = 0.65.

memorized patterns. Each letter is a 1 × 100 vector formed with 1s and −1s. Fig. 2 shows the memories in image format of 10 × 10 cells. An external stimulus v is presented to the network through the transformation θ_0(v_i) = 0 if v_i = 1 and θ_0(v_i) = π/2 if v_i = −1, where θ_0(v_i) is the initial condition of the ith neuron. A distorted image of one of the memorized patterns is used as the external stimulus, with the percentage of distortion given in terms of the overlap between the distorted and the original pattern. In this section, we consider the particular case of equal influence of the second and third modes in the network model, i.e., η1 = η2 = η. A more quantitative comparison between these two terms is presented in Section IV. We start with the letter N as the initial stimulus with initial overlap m_initial = 0.85, i.e., 15% of the bits scrambled randomly (see t = 0 in Fig. 3). Fig. 3 shows a sequence of graphs representing the phase evolution of the oscillators at every 40 time units. Each cell in the images represents an oscillator and is shaded in terms of the phase deviation intensity in the range [0, π]. The network progresses until the oscillators' energies evolve to a minimum and the final overlap is greater than 0.99 (the solution must be close to one of the memorized patterns), or until 2000 time units. At t = 0 we have the initial condition, and after 760 time units the network recovers the pattern with a final overlap of 0.998. Notice that by direct visual inspection of Fig. 3 we can see the retrieval of the letter N after 440 time units, although at this time the overlap is still 0.982. The evolution of the oscillators' energies is shown in Fig. 4(a). Around t = 600 time units the oscillators' energies evolve to a minimum, indicating that the states of the system have converged to a solution and a pattern is retrieved.
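This retrieval experiment can be reproduced in miniature. The sketch below is our own illustration: it uses random stored patterns in place of the MIND letters, and N, p, the time step, and the seed are arbitrary choices. It scrambles 15% of the bits of a stored pattern, encodes the stimulus with θ_0(v_i) ∈ {0, π/2}, integrates (2) with η1 = η2 = 0.6 by the Euler method, and monitors the overlap (5) with the target.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 100, 4
eta1 = eta2 = 0.6
xi = rng.choice([-1, 1], size=(p, N))
C = (xi.T @ xi) / N                                # Hebbian memory matrix

target = xi[0]
v = target.copy()
idx = rng.choice(N, size=15, replace=False)        # scramble 15% of the bits
v[idx] = rng.choice([-1, 1], size=15)

theta = np.where(v == 1, 0.0, np.pi / 2)           # stimulus encoding theta_0(v_i)

def overlap(theta):
    # order parameter (5) with respect to the target pattern
    return np.abs(np.sum(target * np.exp(1j * theta))) / N

dt = 0.1
for step in range(20000):                          # up to 2000 time units
    d = theta[None, :] - theta[:, None]            # d[i, j] = theta_j - theta_i
    theta = theta + dt * (
        (C * np.sin(d)).sum(axis=1)
        + (eta1 * np.sin(2 * d) - eta2 * np.sin(3 * d)).sum(axis=1) / N)
    if step % 200 == 0 and overlap(theta) > 0.995:
        break                                      # pattern retrieved

print(round(overlap(theta), 3))
```

Starting well inside the basin of attraction (initial overlap ≈ 0.85), the phases converge to the 0/π configuration coding the target and the final overlap is close to unity.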
Considering a distorted version of the letter N as the external stimulus, now with 35% of the bits scrambled randomly, i.e., an initial overlap of m_initial = 0.65, the network is still able to recover the information correctly. These results are shown in Fig. 5, where the letter N is correctly recovered after 1720 time units with m_final = 0.996. For this external stimulus, the network states need more time to evolve to the stable solution. Fig. 4(b) shows the

Fig. 6. Color map of the η1 versus η2 parameter space. Dark red colors correspond to correct recovery of information (m_final ≈ 1), and dark blue colors to cases where the information is not properly recovered (m_final ≈ 0). Markers indicate the parameters used in Figs. 7 and 8.

evolution of the oscillators' energies during the retrieval process. Notice that the oscillators' energies evolve to a minimum around t = 1000 time units.

IV. INFORMATION RETRIEVAL ANALYSIS

We now analyze the influence of the second and third modes in the network model (2). Fig. 6 shows a color map of the η1 versus η2 parameter space. The colors represent the final overlap averaged over 10 random initial conditions with initial overlap 0.7, for a network with N = 200 oscillators and p = 8 randomly created stored patterns. There is a wide region in the parameter space, in dark red, where information can be properly recovered (m_final ≈ 1). For η2 = 0 and increasing values of η1 we observe a decrease of m_final, indicating that information is no longer correctly recovered. When η2 is included we observe an enhancement of correct information retrieval over the range of η1 values. Take, for example, η1 = η2 = 0.6, indicated by the cross marker along the diagonal, corresponding to m_final = 0.996, as opposed to the case of m_final = 0.882 considering


Fig. 7. Final overlap after 2000 time units as a function of the initial overlap for 200 oscillators, eight stored patterns, and different values of η1 and η2, as indicated. The final overlap is averaged over the results from 10 random initial conditions with the same initial overlap. A value of m_initial ≈ 0.6 indicates the threshold of the basin of attraction for recovering the information.

only the influence of η1 = 0.6 (η2 = 0, indicated by the light blue triangle). The inclusion of the third mode in the network model thus expands the region of the parameter space in which information is recovered with no error.

The existence of the energy function guarantees that any solution of the system converges to a phase-locked solution as t → ∞. In addition, the stability of the solutions that represent a memorized pattern implies the existence of a basin of attraction for each of these solutions. Therefore, to quantify the size of these basins, one can investigate how close the initial condition must be to the memorized pattern for the network to evolve to the phase deviations that codify the pattern. To achieve this, we evaluate the relationship between the initial and final overlaps of θ. The initial overlap, m_initial, measures the proximity between the initial state θ(0) and one of the memorized patterns, and is computed using (5). The proximity between the final state θ(t) and one of the stored patterns is called the final overlap, m_final, and is also obtained from (5). If the solution is close to one of the stored patterns, then m_final ≈ 1, i.e., the solution corresponds to a practically identical pattern. Otherwise, if the solution is far from all of the stored patterns, then m_final ≈ 0 and the solution corresponds to a totally different pattern.

Fig. 7 shows the final overlap in terms of the initial overlap after 2000 time units, for a network with N = 200 oscillators, p = 8 randomly created stored patterns, and distinct combinations of the parameters η1 and η2. The value of m_final is an average over the solutions for 10 different initial conditions with the same m_initial. The larger the m_initial, the closer the initial condition is to one of the stored patterns, facilitating the recovery of the information. For values of m_initial ≳ 0.6 the final overlap approaches values close to one, i.e., the network is able to correctly recover the information. This value of m_initial denotes the threshold of the basins of attraction for recovering the information. When there is no influence of the second and third terms (η1 = η2 = 0), the network (under this configuration) recovers the information with approximately 18% error. A similar percentage is obtained when only the third term is present in the model (η1 = 0 and η2 = 0.6). However, with the influence of both the second and third terms (combinations of η1 and η2 as shown in Fig. 7), the network recovers the information free of error. Notice that for equal influence of the second and third terms,

Fig. 8. Final overlap versus storage capacity (p/N) for different values of η1 and η2. As the ratio p/N increases, it becomes more difficult for the network to recover the information free of error.

Fig. 9. Surface image showing the relation between η1 = η2 = η and the storage capacity p/N with recovery of information. The colormap is according to the final overlap, averaged over 10 different initial conditions. Dark blue indicates solutions close to one of the stored patterns.

η1 = η2 = 0.4 and η1 = η2 = 0.6, this threshold shifts to m_initial ≈ 0.5. In addition, the values of the final overlap are approximately unity when the third term is included.

The storage capacity is given by the number of stored patterns per number of neurons for which the network can recover the information. In Fig. 8, we present the final overlap for increasing values of the network capacity. In this computation, we consider N = 200 neurons and initial overlap 0.7. For η1 = η2 = 0 (dark blue line with circles), without the influence of the second and third terms, the capacity is 0.03 patterns per neuron. When both terms are included (η1 = η2 = 0.4, green line with right-pointing triangles, and η1 = η2 = 0.6, yellow line with crosses), the capacity of the network is increased to approximately 0.07 patterns per neuron. While a similar capacity is achieved when η1 = 0.4 and η2 = 0 (blue line with diamonds), the same is not true for η1 = 0.6 and η2 = 0 (light blue line with triangles) or for η1 = 0 and η2 = 0.6 (brown line with stars). We now analyze the relevance of the particular case η1 = η2 in (2) during the information retrieval process. Fig. 9 shows the results for the dependence of the parameter η1 = η2 and the



Fig. 10. Surface image showing the relation between η1 and the storage capacity p/N with recovery of information. Here, we consider η2 = 0, which reduces to the model of footnote 1. The colormap is according to the final overlap, averaged over 10 different initial conditions. Dark blue indicates solutions close to one of the stored patterns.

storage capacity p/N for recovering information. For each combination of the parameters, the final overlap is an average over 10 different initial conditions. The surface is blue shaded according to the final overlap, where dark blue indicates complete recovery of information. We observe a large dark blue region in parameter space, indicating an enhancement of the information recovery of the network when the influence of both terms, η1 = η2, is included.

We now consider a similar configuration, but without the influence of the third mode (η2 = 0). Under this configuration, the neural network model reduces to the model presented in footnote 1. Fig. 10 shows the retrieval of information as the parameter η1 and the storage capacity p/N are changed. Again, the surface is blue shaded according to the final overlap, where dark blue indicates solutions close to one of the stored patterns. The z-axis indicates the final overlap, averaged over 10 different initial conditions, for varying intensity of the parameter η1 and storage capacity p/N. Notice that a wider region of correct information retrieval is observed for the model of (2) (Fig. 9) in comparison with the model of footnote 1 (Fig. 10). This suggests an enhancement of the efficiency of correct information retrieval for the model presented here. Looking at the position p/N = 0.06 and η = 0.3 in Figs. 9 and 10, we have m_final = 0.9981 and m_final = 0.9782, respectively. The change of the coupling function proposed here, by including a third-order mode, improves the overall effectiveness of the network. The numerical results indicate an increase in the error-free recovery of information, as well as a marginal increase in the storage capacity of the network.

V. VISUAL PATTERN RECOGNITION EXPERIMENT

Visual pattern recognition, an innate ability of humans and most animals, involves the identification of shapes, objects, faces, colors, and so on.
The visual system captures the arranged stimuli, processes visual details, and recognizes the pattern, or not, according to data stored in long-term memory. To compare the human capability for recognizing perturbed binary patterns with the outcome of our network model, an experiment was designed using the same initial distortion percentages. The aim is to determine the threshold at which a subject is capable of recognizing a known binary pattern that has been distorted. Basically, we want to find out how much disturbance can be applied to


Fig. 11. Capital letters of the alphabet.

Fig. 12. Snapshot of the interactive interface of the experiment showing the letter H with (a) 10% (m_initial = 0.9) and (b) 30% (m_initial = 0.7) of the bits scrambled randomly, and the five options to be selected.

Fig. 13. Snapshot of the interactive interface of the experiment showing a random letter with (a) 40% (m_initial = 0.6) and (b) 50% (m_initial = 0.5) of the bits scrambled randomly.

the known pattern while still keeping it recognizable. Preparing for the experiment, we first developed an interactive interface using the GUI development tools of MATLAB to facilitate applying the test. First, a figure containing the 26 letters of the alphabet (Fig. 11) was momentarily displayed to the 20 subjects, after which a distorted image of one of the letters was presented, with five options of different original letters shown below the distorted image. The subject was supposed to choose, among the five undistorted options, the one corresponding to the distorted letter. The subjects were instructed to select their answers as quickly as possible, and each of the individual experimental sessions was supposed to last no more than 30 min. Fig. 12(a) exemplifies the display for the letter H with 10% of the bits scrambled randomly and the options D, A, G, P, and H. Likewise, Fig. 12(b) shows the same letter H, but now with 30% of the bits scrambled randomly and options H, J, S, U, and Q. Larger distortions, as shown in Fig. 13(a) and (b), clearly make it more difficult to identify the original letter. For each distorted letter we take the average over three initial overlaps, which are presented randomly during the experiment. In Fig. 14, we show the outcome of the test applied to one subject. Notice that, as the initial overlap increases, the subject becomes capable of selecting the correct choice among the options. For this particular trial, a threshold around m_initial ≈ 0.6 can be inferred from the subject's choices.
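The distortion procedure used for the stimuli can be made precise. The sketch below is our own reading of "x% of the bits scrambled randomly": a fraction f of positions is reassigned a random ±1 value, so only about half of them actually flip. Under this assumption the expected bit overlap between the distorted and original pattern is 1 − f, matching the m_initial values quoted in Figs. 12 and 13 (e.g., 30% scrambled ↔ m_initial ≈ 0.7).

```python
import numpy as np

rng = np.random.default_rng(3)

def scramble(pattern, fraction, rng):
    """Reassign a random subset of bits to random +/-1 values."""
    v = pattern.copy()
    k = int(round(fraction * v.size))
    idx = rng.choice(v.size, size=k, replace=False)
    v[idx] = rng.choice([-1, 1], size=k)
    return v

N = 100
letter = rng.choice([-1, 1], size=N)     # stand-in for a 10 x 10 binary letter image
distorted = scramble(letter, 0.30, rng)  # 30% of the bits scrambled randomly
m_init = np.dot(letter, distorted) / N   # bit overlap; expected value 1 - 0.30 = 0.70
print(round(m_init, 2))
```

Because roughly half of the reassigned bits keep their original value, the realized overlap fluctuates around 1 − f from trial to trial.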


REFERENCES

Fig. 14. Mean of correct answers from the test with one subject. Each point represents an average over three initial overlaps presented in a random sequence during the trial, including the respective error bars.

Fig. 15. Mean of correct answers for all subjects as a function of m_initial (blue line with circles) and the outcome of the associative memory network (red line with squares), including the respective error bars. The dashed line denotes the threshold for correct choices at m_initial = 0.7.

The results for the whole set of 20 subjects are shown in Fig. 15, where the circles (blue line) represent the average number of correct answers in terms of the m_initial values. For comparison, the squares (red line) represent the outcome from the network with 600 oscillators, 26 stored patterns, and the same input stimuli as the distorted patterns given to the 20 subjects. The outcomes for the two cases are similar, with a correct retrieval threshold of m_initial ≈ 0.6 for the network and m_initial ≈ 0.7 for the average of the 20 subjects, with the network performing slightly better.

VI. CONCLUSION

In this brief, we extended the Kuramoto network model for associative memory by expanding the Fourier series of the coupling function to include second- and third-order modes. This approach considerably improves the network's performance, allowing, for example, the correct retrieval of information for patterns with up to 40% of distorted bits in the initial stimuli. The approach also elicits a substantial enhancement in the storage capacity of the network, yielding a storage of 0.07 patterns per neuron with correct information retrieval. These results are compatible with the outcome of a real experiment performed on subjects asked to recognize binary patterns with the same initial distortion as those presented to the network.

[1] R. R. Llinás, I of the Vortex: From Neurons to Self. Cambridge, MA, USA: MIT Press, 2002.
[2] J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proc. Nat. Acad. Sci. United States Amer., vol. 79, no. 8, pp. 2554–2558, 1982.
[3] J. Hertz, R. G. Palmer, and A. S. Krogh, Introduction to the Theory of Neural Computation, 1st ed. Cambridge, MA, USA: Perseus Publishing, 1991.
[4] R. Follmann, E. E. N. Macau, and E. Rosa, Jr., “Detecting phase synchronization between coupled non-phase-coherent oscillators,” Phys. Lett. A, vol. 373, no. 25, pp. 2146–2153, 2009.
[5] J. Buck and E. Buck, “Mechanism of rhythmic synchronous flashing of fireflies. Fireflies of Southeast Asia may use anticipatory time-measuring in synchronizing their flashing,” Science, vol. 159, no. 3821, pp. 1319–1327, 1968.
[6] T. J. Walker, “Acoustic synchrony: Two mechanisms in the snowy tree cricket,” Science, vol. 166, no. 3907, pp. 891–894, Nov. 1969.
[7] V. Torre, “A theory of synchronization of heart pace-maker cells,” J. Theoretical Biol., vol. 61, no. 1, pp. 55–71, 1976.
[8] A. Sherman, J. Rinzel, and J. Keizer, “Emergence of organized bursting in clusters of pancreatic beta-cells by channel sharing,” Biophys. J., vol. 54, no. 3, pp. 411–425, 1988.
[9] C. Hammond, H. Bergman, and P. Brown, “Pathological synchronization in Parkinson’s disease: Networks, models and treatments,” Trends Neurosci., vol. 30, no. 7, pp. 357–364, 2007.
[10] K. Lehnertz et al., “Synchronization phenomena in human epileptic brain networks,” J. Neurosci. Methods, vol. 183, no. 1, pp. 42–48, 2009.
[11] G. Buzsáki and A. Draguhn, “Neuronal oscillations in cortical networks,” Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
[12] R. Q. Quiroga and S. Panzeri, “Extracting information from neuronal populations: Information theory and decoding approaches,” Nature Rev. Neurosci., vol. 10, no. 3, pp. 173–185, 2009.
[13] T. Nishikawa, Y.-C. Lai, and F. C. Hoppensteadt, “Capacity of oscillatory associative-memory networks with error-free retrieval,” Phys. Rev. Lett., vol. 92, no. 10, p. 108101, 2004.
[14] C. Xiu, X. Liu, Y. Tang, and Y. Zhang, “A novel network of chaotic elements and its application in multi-valued associative memory,” Phys. Lett. A, vol. 331, nos. 3–4, pp. 217–224, 2004.
[15] L. Zhao, T. H. Cupertino, and J. R. Bertini, Jr., “Chaotic synchronization in general network topology for scene segmentation,” Neurocomputing, vol. 71, nos. 16–18, pp. 3360–3366, 2008.
[16] Y. Kuramoto, Chemical Oscillations, Waves and Turbulence. Berlin, Germany: Springer-Verlag, 1984.
[17] T. Aonishi, “Phase transitions of an oscillator neural network with a standard Hebb learning rule,” Phys. Rev. E, vol. 58, no. 4, pp. 4865–4871, 1998.
[18] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh, “The capacity of the Hopfield associative memory,” IEEE Trans. Inf. Theory, vol. 33, no. 4, pp. 461–482, Jul. 1987.
[19] J. Cook, “The mean-field theory of a Q-state neural network model,” J. Phys. A, Math. General, vol. 22, no. 12, pp. 2057–2067, 1989.
[20] F. C. Hoppensteadt and E. M. Izhikevich, “Pattern recognition via synchronization in phase-locked loop neural networks,” IEEE Trans. Neural Netw., vol. 11, no. 3, pp. 734–738, May 2000.
[21] J. R. C. Piqueira, F. M. Orsatti, and L. H. A. Monteiro, “Computing with phase locked loops: Choosing gains and delays,” IEEE Trans. Neural Netw., vol. 14, no. 1, pp. 243–247, Jan. 2003.
[22] L. H. A. Monteiro, N. C. F. Canto, J. G. Chaui-Berlinck, F. M. Orsatti, and J. R. C. Piqueira, “Global and partial synchronism in phase-locked loop networks,” IEEE Trans. Neural Netw., vol. 14, no. 6, pp. 1572–1575, Nov. 2003.
[23] F. C. Hoppensteadt and E. M. Izhikevich, “Synchronization of laser oscillators, associative memory, and optical neurocomputing,” Phys. Rev. E, vol. 62, pp. 4710–4713, Sep. 2000.
[24] F. C. Hoppensteadt and E. M. Izhikevich, “Synchronization of MEMS resonators and mechanical neurocomputing,” IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 48, no. 2, pp. 133–138, Feb. 2001.
[25] T. Nishikawa, F. C. Hoppensteadt, and Y.-C. Lai, “Oscillatory associative memory network with perfect retrieval,” Phys. D, Nonlinear Phenomena, vol. 197, nos. 1–2, pp. 134–148, 2004.
[26] D. O. Hebb, The Organization of Behavior, 1st ed. New York, NY, USA: Wiley, 1949.
[27] F. C. Hoppensteadt and E. M. Izhikevich, Weakly Connected Neural Networks. Secaucus, NJ, USA: Springer-Verlag, 1997.
