Self-organization of a recurrent network under ongoing synaptic plasticity

Takaaki Aoki
Faculty of Education, Kagawa University, 1-1 Saiwai-cho, Takamatsu, Kagawa 760-8521, Japan
Email address: [email protected]

Abstract

We investigated the organization of a recurrent network under ongoing synaptic plasticity using a model of neural oscillators coupled by dynamic synapses. In this model, the coupling weights changed dynamically, depending on the timing between the oscillators. We determined the phase coupling function of the oscillator model, Γ(φ), using conductance-based neuron models. Furthermore, we examined the effects of the Fourier zero mode of Γ(φ), which has a critical role in spike-timing-dependent plasticity (STDP)-organized recurrent networks. Heterogeneous layered clusters with different frequencies emerged from homogeneous populations as the Fourier zero mode increased. Our findings may provide new insights into the self-assembly mechanisms of neural networks related to synaptic plasticity.

Keywords: Spike-Timing-Dependent Plasticity; Phase oscillators; Adaptive networks; Synchronization

1. Introduction

Synaptic plasticity plays a vital role in learning in the brain, and it has been intensively investigated to understand the mechanisms underlying learning. It induces changes in the structure of synaptic connections associated with neuronal activity, facilitating the organization of memory-related functional neural assemblies [1].

Recent neurophysiological experiments revealed that changes in synaptic connections depend on the relative spike timing between neurons, a phenomenon known as spike-timing-dependent plasticity (STDP) [2, 3, 4]. This observation implies that the temporal spike patterns of neurons determine synaptic patterns, raising the question of how STDP organizes neural networks into functional neuronal assemblies. This question remains open in theoretical neuroscience, particularly when a network has rich recurrent connections. Several numerical studies reported that STDP-organized recurrent networks exhibit interesting behaviors, including the emergence of clusters of neurons that fire synchronously [5, 6, 7, 8, 9, 10] and feed-forward networks [11, 12, 13, 14, 15].

The interplay between neurons and their synapses makes it difficult to analyze the dynamics of STDP-organized recurrent networks. In the presence of plasticity, the spike pattern alters the structure of the synaptic connections, resulting in the formation of new spike patterns. In other words, the synaptic connections and neuronal activities evolve simultaneously. This co-evolution presents a novel, complicated situation from the perspective of statistical physics. A conventional system is defined on a static substrate, in which the units of the system are coupled through fixed interactions, usually represented by a lattice, all-to-all connections, or a static network. In contrast, in a co-evolving dynamical system, the interactions change over time together with the states of the units, leading to reorganization of the interactions within the system.

To elucidate the essential nature of co-evolving neural network dynamics, we developed a simple, co-evolving dynamical model of neuronal oscillators [16, 17]. In this report, we demonstrate that STDP can lead to the organization of several distinct network types. In section 2, we briefly review the phase description method and describe the synapse dynamics using the phase oscillator model. In section 3, to illustrate the typical behaviors of the model, as presented in our previous papers [16, 17], we summarize the numerical results obtained by approximating Γ(φ) as a sine function. In section 4, we present the main results of this paper. First, we confirm the validity of the sine approximation of Γ(φ) by determining the parameters of the phase oscillator model. These parameters are deduced from conductance-based regular- and fast-spiking neuron models. Second, we show that the determined form of the coupling function exhibits a novel behavior, which we explain via the Fourier zero mode of the phase coupling function of the oscillator. Previous studies on phase oscillators have ignored this mode. However, we find that this constant term is critically important for STDP-organized recurrent networks, as evidenced by the result that heterogeneous layered clusters with different frequencies emerge from homogeneous populations with identical natural frequencies. Finally, we analyze the transition from homogeneous to heterogeneous layered clusters. In section 5, we discuss and summarize our findings.

2. Materials and Methods

2.1. Dynamics of neuronal oscillators

We consider the following equation for a coupled dynamical system:

\[ \frac{dx_i}{dt} = F(x_i) + \sum_j f_{ij}(x_i, x_j), \]

where xi denotes the state of the i-th neuron in a network of N neurons. The first term describes the intrinsic dynamics of the neuron (e.g., several types of ion channels), and the second term describes coupling with other neurons via synapses. The activity of a neuron is assumed to be oscillatory rather than random. Thus, we consider that the neuron model undergoes a limit-cycle oscillation, which is perturbed by synaptic inputs and noise. This assumption enables us to reduce the description of the neuron to a simple form with a single variable, φ. Using a standard reduction technique [18], the coupled limit-cycle system can be described as follows:

\[ \frac{d\phi_i}{dt} = \omega_i + \frac{1}{N}\sum_{j}^{N} k_{ij}\,\Gamma(\phi_i - \phi_j), \tag{1} \]

where φi denotes the phase of the limit-cycle oscillation of the i-th neuron in the network (i = 1, . . . , N), ωi is its natural frequency, and kij is the coupling weight of the connection from the j-th to the i-th neuron. In section 3 and in our previous papers [16, 17], the coupling function Γ(φ) was assumed to take the simple form Γ(φ) = − sin(φ + α), for several reasons. By definition, Γ(φ1 − φ2) is written as

\[ \Gamma(\phi_1 - \phi_2) = \frac{1}{2\pi} \int_0^{2\pi} d\theta\, Z(\theta + \phi_1) \cdot g(\theta + \phi_1,\, \theta + \phi_2), \]

where Z(φ) is the phase sensitivity function [18] and g is the coupling term between the oscillators. Near a Hopf bifurcation point, which is relevant to many biological and chemical oscillators, the limit-cycle oscillation can be described by the Stuart-Landau equation. The phase sensitivity function of this oscillator is derived as Z(φ) = (− sin φ − c2 cos φ, cos φ − c2 sin φ), where c2 is a parameter of the oscillator. Thus, in the case of diffusive coupling between Stuart-Landau oscillators, the coupling function Γ(φ) can be derived analytically in the above form [18]. Although Γ(φ) generally has higher Fourier modes, we approximate the form of the function while neglecting these higher modes. The constant term of the coupling function, Γ0, is neglected because it can be absorbed into the natural frequency term, ωi → ωi + (Γ0/N) Σj kij. Under suitable conditions, the parameter α can be regarded as the phase difference induced by a short transmission delay in the coupling [19]. For example, α can represent an axonal transmission delay in the synaptic connection or a delay in the metabolic reaction path in nerve cells.

The reduced function Γ(φ) can be systematically calculated from the original neuron model [18]. In section 4, we will determine the form of Γ(φ) for two types of conductance-based neuron models and compare the results with those obtained from the approximated form of Γ(φ), in order to check the validity of the approximation.

2.2. Dynamics of synaptic weights

Next, we introduce the dynamics of the synaptic weights due to plasticity.

The evolution of the weights depends on the relative timing between the neurons, similar to the case of STDP:

\[ \frac{dk_{ij}}{dt} = \epsilon\,\Lambda(\phi_i - \phi_j), \qquad |k_{ij}| \le 1. \tag{2} \]

The function Λ(φ), which we refer to as the learning function, determines the evolution of the weights. The learning parameter ǫ has a very small value because the dynamics of the synaptic weights are much slower than those of the neurons. The condition |kij| ≤ 1 means that the synaptic weight is bounded. If the weight takes a value outside [−1, 1], then it is immediately set to the appropriate bounded value (1 or −1). This rule is reasonable because the weight cannot increase indefinitely. The learning function Λ(φ) is periodic. Therefore, for the sake of simplicity, we assume that Λ(φ) takes the form Λ(φ) = − sin(φ + β), where β is the shift parameter that characterizes the learning function (top panels in Fig. 1). For example, when β ∼ −π/2, the weights for a pair of in-phase (or anti-phase) neurons will increase (or decrease). This relationship can be considered a Hebbian-like rule. When β ∼ 0, the dependence on the relative timing becomes similar to a temporally asymmetric Hebbian rule. When β ∼ π/2, the learning function has the opposite form to the Hebbian-like rule.

2.3. A simple model of co-evolving dynamics

Finally, we obtain the following equations for the model:

\[ \frac{d\phi_i}{dt} = \omega_i - \frac{1}{N}\sum_{j} k_{ij} \sin(\phi_i - \phi_j + \alpha), \tag{3} \]
\[ \frac{dk_{ij}}{dt} = -\epsilon \sin(\phi_i - \phi_j + \beta), \qquad |k_{ij}| \le 1. \tag{4} \]

This model is characterized by two main parameters, α and β. We can systematically examine the behavior of this model across the parameter space. We performed numerical investigations of the asymptotic state of this dynamical system, starting from a random initial state with the phases φi selected from a uniform distribution in [0, 2π) and the weights from a uniform distribution in [−1, 1].
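To make the numerical procedure concrete, the following is a minimal sketch, in Python/NumPy, of an Euler integration of Eqs. (3) and (4). It is our illustration rather than the code used for the figures, and the parameter values, step size, and run length are assumptions chosen only for demonstration.

```python
import numpy as np

def simulate_coevolving(N=50, alpha=0.3 * np.pi, beta=-0.3 * np.pi, eps=0.005,
                        omega=1.0, dt=0.01, steps=50_000, seed=0):
    """Euler integration of the co-evolving model, Eqs. (3)-(4).

    phi[i]  : phase of oscillator i
    k[i, j] : coupling weight from oscillator j to oscillator i, bounded in [-1, 1]
    """
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, N)      # random initial phases in [0, 2*pi)
    k = rng.uniform(-1.0, 1.0, (N, N))          # random initial weights in [-1, 1]
    np.fill_diagonal(k, 0.0)                    # no self-coupling

    for _ in range(steps):
        dphi = phi[:, None] - phi[None, :]      # dphi[i, j] = phi_i - phi_j
        # Eq. (3): phase dynamics with Gamma(phi) = -sin(phi + alpha)
        phi = (phi + dt * (omega - (k * np.sin(dphi + alpha)).mean(axis=1))) % (2 * np.pi)
        # Eq. (4): slow weight dynamics with Lambda(phi) = -sin(phi + beta);
        # beta near -pi/2 corresponds to the Hebbian-like regime of Sec. 2.2
        k = np.clip(k - dt * eps * np.sin(dphi + beta), -1.0, 1.0)
        np.fill_diagonal(k, 0.0)

    return phi, k

if __name__ == "__main__":
    phi, k = simulate_coevolving()
    print("final mean coupling weight:", k.mean())
```

Because ǫ ≪ 1, the weights drift slowly relative to the phases, so runs much longer than the phase time scale are needed before kij settles; the clipping step enforces the bound |kij| ≤ 1.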

3. Results: Typical behavior of the model

In this section, we summarize our results to illustrate the typical behavior of the model. The model exhibits three types of self-organization: two-cluster, coherent, and chaotic states [16, 17]. Figure 1 shows the phase diagram of this model and typical raster plots of the three states. In the two-cluster state, the neurons are divided into two synchronized groups. Within a group, the oscillators are synchronized via positive couplings, and the two groups are in an anti-phase relationship with each other. In the coherent state, the oscillators are eventually organized into a sequence with a fixed phase relationship, and a feed-forward type of network is organized along the sequence. In the remaining parameter region, the system does not settle into a fixed state; the weights and phase relationships continue to change. In this state, the system has positive Lyapunov exponents, and we observe a chaotic state. For a detailed analysis of these states, please see our previous study [16].

These states are characterized by three measures: the order parameters,

\[ R_m = \left| \frac{1}{N}\sum_j e^{im\phi_j} \right|; \]

the normalized rate of change in the weights, averaged over all connections, given by

\[ \Delta K(t) = \frac{1}{N(N-1)} \sum_{i \neq j} \frac{|k_{ij}(t) - k_{ij}(t - \Delta)|}{\Delta}, \]

where the sampling interval is ∆ ∼ 2π/ω ≪ 1/ǫ; and the auto-correlation function of the phase pattern,

\[ C(\tau) = \left\langle \left| \frac{1}{N}\sum_j e^{i\phi_j(t)}\, e^{-i\phi_j(t-\tau)} \right| \right\rangle. \]

Figure 2 presents these measures as a function of β for α = 0.1π: the second order parameter R̄2, the long-term correlation C̄ = C(τ*), where τ* = 200, and the rate of change in the weights ∆K̄. Measures denoted by an overbar (X̄) are evaluated by averaging over a long period after removing the transient period. The profiles of these measures indicate that the system converges to one of the three states, depending on β. The second order parameter R̄2 converges to 1 in the two-cluster state. When R̄2 converges to 0 but the long-term correlation C̄ is 1, the system is in the coherent state. When ∆K̄ does not converge to zero, the system does not settle into a fixed state, which indicates the chaotic state.
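For reference, these three measures can be computed from simulation output as in the following sketch (a hedged illustration with our own variable names; the time average in C(τ) and the choice of the sampling interval ∆ follow the definitions above).

```python
import numpy as np

def order_parameter(phi, m=2):
    """R_m = |(1/N) sum_j exp(i m phi_j)|; R_2 approaches 1 in the two-cluster state."""
    return np.abs(np.exp(1j * m * phi).mean())

def weight_change_rate(k_now, k_prev, delta):
    """Delta K(t): rate of change of the weights, averaged over all i != j connections."""
    n = k_now.shape[0]
    off_diagonal = ~np.eye(n, dtype=bool)
    return np.abs(k_now - k_prev)[off_diagonal].sum() / (n * (n - 1) * delta)

def phase_autocorrelation(phi_t, phi_t_minus_tau):
    """One sample of C(tau); average over t (after the transient) to obtain C-bar."""
    return np.abs(np.exp(1j * (phi_t - phi_t_minus_tau)).mean())
```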

4. Results: Emergence of heterogeneous layered clusters from homogeneous populations

In the previous section and in our previous papers [16, 17], the phase coupling function Γ(φ) was approximated by the first Fourier mode alone. In this section, we directly determine the function Γ(φ) using conductance-based neuron models, in order to check the validity of the assumed form of Γ(φ), and we compare the results with those obtained using the determined function. We find that an uncategorized state exists. This state cannot be explained by the results obtained with the simple sine form of Γ(φ), and it is caused by the terms of Γ(φ) that were previously neglected. In this uncategorized state, heterogeneous layered clusters with different frequencies emerge from homogeneous populations with identical natural frequencies. To explain this behavior, we focus on the effects of the Fourier zero mode of the phase coupling function of the oscillator.

4.1. Phase description of conductance-based neuron models

There are various electrophysiological types of neurons, which exhibit different spike responses to external inputs. Using the phase description model, differences in these electrophysiological properties can be mapped onto the phase sensitivity function Z(φ) [20], which describes the phase shift of an oscillation receiving an infinitesimal perturbation input at phase φ.¹ The phase coupling function of the neuron model is determined by

\[ \Gamma(\phi_1 - \phi_2) = \frac{1}{2\pi} \int_0^{2\pi} d\theta\, Z(\theta + \phi_1) \cdot g(\theta + \phi_1,\, \theta + \phi_2), \]

where g represents the interaction between the oscillators via the synapses.

¹ In conductance-based neuron models, interaction terms between the neurons appear only in the equation for the membrane potential. Thus, the phase sensitivity function Z is described by a scalar function of the phase.
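As an illustration of how Γ(φ) can be evaluated numerically from a given phase sensitivity function and synaptic interaction, the sketch below discretizes the averaging integral above, rewritten via the substitution θ → θ − φ1 so that only the phase difference appears. The functions Z and g in the example are placeholders, not the regular- or fast-spiking models used for Fig. 3.

```python
import numpy as np

def phase_coupling_function(Z, g, n_theta=1024, n_phi=256):
    """Discretize Gamma(phi) = (1/2pi) * integral_0^{2pi} Z(theta) g(theta, theta - phi) dtheta.

    Z : callable, scalar phase sensitivity function (see footnote 1)
    g : callable, synaptic interaction g(post_phase, pre_phase)
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    gamma = np.array([(Z(theta) * g(theta, theta - p)).mean() for p in phi])
    return phi, gamma

if __name__ == "__main__":
    # Placeholder functions only (not the conductance-based models of Fig. 3):
    Z = lambda th: 1.0 - np.cos(th)          # nonnegative, PRC type I-like sensitivity
    g = lambda post, pre: np.sin(pre)        # crude periodic synaptic drive
    phi, gamma = phase_coupling_function(Z, g)
    print("Fourier zero mode Gamma_0 =", gamma.mean())
```

The mean of the sampled Γ(φ) gives its Fourier zero mode Γ0, which becomes important in section 4.2.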

Phase sensitivity functions of the regular-spiking [21] and fast-spiking [22] neuron models are shown in Fig. 3 (a) and (c). Given synaptic coupling described by an α-function, the phase coupling function Γ(φ) is derived as shown in Fig. 3 (b) and (d). Details of these neuron models and of the synaptic function are provided in the Appendix. Using these derived forms of the function Γ(φ) (i.e., rather than the sine function), we examine the behavior of a neuronal oscillator network that is undergoing synaptic plasticity. Figure 4 plots several measures as a function of β. These measures are the same as those in Fig. 2, except that Γ(φ) is given by the reduced form of the regular-spiking neuron model in Fig. 3(b). The profiles of these measures indicate that the behaviors are almost the same as those in Fig. 2. The model exhibits the chaotic and two-cluster states (n.b. the single-cluster state in the graph is a specific case of the two-cluster state [16]). However, Fig. 4 shows an uncategorized state, not observed when Γ(φ) = − sin(φ + α), in place of the coherent state. This uncategorized state contains both a large cluster and a very small cluster (Fig. 5). In contrast to the two-cluster state, these two clusters have different frequencies.

4.2. Effects of the Fourier zero mode on the adaptive recurrent network due to synaptic plasticity

The observed absence of the coherent state and the emergence of a cluster state with heterogeneous frequencies originate from the form of the phase coupling function Γ(φ), which has Fourier modes other than the first mode.

Then, what is the critical difference in the form of Γ(φ)? As shown in Fig. 3(b), the first Fourier mode is dominant and the higher modes are relatively weak. Another dominant mode is the constant term (the Fourier zero mode, Γ0). When a neural oscillator receives weak external inputs, the perturbations modulate not only the spike timing of the oscillation but also its frequency. The constant term represents this frequency-modulation effect. The effect can be large for phase response curve (PRC) type I neurons, which are defined by a nonnegative phase sensitivity function Z(φ). A PRC type I response can be seen near a saddle-node on invariant circle bifurcation, as in the leaky integrate-and-fire and θ neuron models [23]. Because of the nonnegative Z(φ) of PRC type I neurons, Γ0 would be a relatively large component of the phase coupling function compared with the other modes. This constant term has been ignored in previous studies of Kuramoto phase oscillators because it can be absorbed into the natural frequency term:

\begin{align}
\frac{d\phi_i}{dt} &= \omega_i + \frac{1}{N}\sum_j k_{ij}\,\Gamma(\phi_i - \phi_j) \tag{5} \\
&= \omega_i + \frac{1}{N}\sum_j k_{ij}\left[\Gamma_0 - \sin(\phi_i - \phi_j + \alpha) + \cdots\right] \tag{6} \\
&= \tilde{\omega}_i - \frac{1}{N}\sum_j k_{ij}\left[\sin(\phi_i - \phi_j + \alpha) + \cdots\right], \tag{7}
\end{align}

where

\[ \tilde{\omega}_i = \omega_i + \frac{\Gamma_0}{N}\sum_j k_{ij}. \tag{8} \]

In an adaptive network with synaptic plasticity, however, the frequency ω̃i depends on the coupling strengths kij, which change dynamically over time. In other words, Γ0 can provide a mechanism for the adaptive control of the frequency under synaptic plasticity. Therefore, we examined the effects of the constant term Γ0 on the network organization due to synaptic plasticity using the following equations:

\[ \frac{d\phi_i}{dt} = \omega + \frac{1}{N}\sum_j k_{ij}\left[\Gamma_0 - \sin(\phi_i - \phi_j + \alpha)\right], \tag{9} \]
\[ \frac{dk_{ij}}{dt} = -\epsilon \sin(\phi_i - \phi_j + \beta), \qquad |k_{ij}| \le 1. \tag{10} \]

The oscillators were assumed to have identical natural frequencies.

4.3. Emergence of heterogeneous layered clusters

When Γ0 ∼ 0, the model given by the above equations exhibits the same behavior as that described in section 3. As Γ0 increases, a novel type of network organization emerges at β ∈ (−0.3π, 0.1π), with a coherent state being observed at Γ0 = 0.
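The sketch below (our illustration, with assumed parameter values) integrates Eqs. (9) and (10) and estimates each oscillator's long-run frequency; several separated peaks in the resulting frequency histogram indicate heterogeneous clusters of the kind described below (cf. Fig. 7(d)).

```python
import numpy as np

def simulate_with_gamma0(N=50, gamma0=0.5, alpha=0.1 * np.pi, beta=-0.1 * np.pi,
                         eps=0.005, omega=1.0, dt=0.01, steps=50_000, seed=1):
    """Euler integration of Eqs. (9)-(10): identical natural frequencies,
    with the constant term Gamma_0 included in the coupling."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, N)
    k = rng.uniform(-1.0, 1.0, (N, N))
    np.fill_diagonal(k, 0.0)
    total_drift = np.zeros(N)                   # accumulated phase advance of each oscillator

    for _ in range(steps):
        dphi = phi[:, None] - phi[None, :]
        drift = omega + (k * (gamma0 - np.sin(dphi + alpha))).mean(axis=1)   # Eq. (9)
        phi = (phi + dt * drift) % (2.0 * np.pi)
        total_drift += dt * drift
        k = np.clip(k - dt * eps * np.sin(dphi + beta), -1.0, 1.0)           # Eq. (10)
        np.fill_diagonal(k, 0.0)

    freqs = total_drift / (steps * dt)          # long-run mean frequency of each oscillator
    return k, freqs

if __name__ == "__main__":
    k, freqs = simulate_with_gamma0(gamma0=0.5)
    counts, _ = np.histogram(freqs, bins=10)
    print(counts)   # well-separated nonzero bins suggest clusters with different frequencies
```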

When Γ0 = 0.3, layered clusters of the same frequency are organized (Fig. 6). As shown in the graphs in Fig. 6, the order parameters converge to nonzero values, and the normalized rate of change of the coupling weights reaches zero, indicating a fixed structure of the phase pattern and the coupling weights. The raster plot and phase distribution demonstrate the organization of several clusters of synchronized oscillators. The phase differences between the clusters are locked, and their frequencies are the same. A type of layered connection is organized throughout this phase pattern, in which neurons within a cluster are symmetrically connected via positive couplings. Connections between the clusters become asymmetric according to their phase differences. Figure 6(g) provides a schematic illustration of this state.

As Γ0 increases (Γ0 = 0.5), the state of layered clusters with the same frequency transitions to another type of layered cluster state (Fig. 7). The order parameters and the normalized rate of change of the coupling weights do not converge to fixed values, and they oscillate quasi-periodically (Fig. 7 (a)). The global synchronization is broken, and multiple clusters with different frequencies emerge. In fact, the raster plot clearly shows that the oscillators are organized into several synchronized clusters, and the clusters are not phase-locked (Fig. 7 (c)). These clusters have different frequencies, as indicated by several peaks in the frequency distribution (Fig. 7 (d)). Moreover, the phase distribution has several peaks, indicating the emergence of multiple synchronized clusters (Fig. 7 (e)). The coupling weights indicate that the layered connection type is the same as when Γ0 = 0.3 (Fig. 7 (f)). Specifically, the intra-cluster connections between oscillators of the same cluster converge to symmetric +1 coupling. Inter-cluster connections show asymmetric ±1 coupling, depending on the order of their frequencies. The scheme of this organized network is illustrated in Fig. 7(g). Note that the oscillators have the same natural frequencies. The heterogeneity of their frequencies is caused by the organization of the neural network as a result of the synaptic plasticity.

We analyzed the transition from homogeneous to heterogeneous layered clusters. Although we considered only the simplest situation of two clusters, this analysis could be extended to cases with many clusters. In this state, as illustrated in Fig. 6, the synaptic weight kij goes to a limiting value of 1 or −1. The dynamics of the synaptic weights, which are controlled by the parameter ǫ ≪ 1, are very slow compared with those of the neurons. Thus, we consider the stability of the phase-locked state of the oscillators with fixed synaptic weights.

If the phase-locked state is stable, then the synaptic weights are kept at the limiting values (1 or −1). The equations for the phase φi of the preceding cluster (labeled 'L') and the next cluster (labeled 'H') are given by:

\begin{align}
\frac{d\phi_i^H}{dt} &= \omega + \frac{1}{N}\sum_{j \neq i}^{N^H}\left[\Gamma_0 - \sin(\phi_i^H - \phi_j^H + \alpha)\right] + \frac{1}{N}\sum_{j}^{N^L}\left[\Gamma_0 - \sin(\phi_i^H - \phi_j^L + \alpha)\right], \tag{11} \\
\frac{d\phi_i^L}{dt} &= \omega + \frac{1}{N}\sum_{j \neq i}^{N^L}\left[\Gamma_0 - \sin(\phi_i^L - \phi_j^L + \alpha)\right] - \frac{1}{N}\sum_{j}^{N^H}\left[\Gamma_0 - \sin(\phi_i^L - \phi_j^H + \alpha)\right], \tag{12}
\end{align}

where N^H and N^L are the numbers of neurons in the two clusters. In the second term, the summation runs over neurons in the same cluster, excluding the neuron itself. In the third term, the summation runs over neurons in the other cluster. We consider α ∈ [0, π/4), owing to the symmetry of the model equations², and reduce these equations to a two-oscillator system based on the assumption that each cluster is stable:

\begin{align}
\frac{d\Phi^H}{dt} &= \omega^H - K^{HL}\sin(\Phi^H - \Phi^L + \alpha), \tag{13} \\
\frac{d\Phi^L}{dt} &= \omega^L + K^{LH}\sin(\Phi^L - \Phi^H + \alpha), \tag{14}
\end{align}

where

\begin{align}
\omega^H &= \omega + \Gamma_0 - \left(\frac{1+\eta}{2} - \frac{1}{N}\right)\sin\alpha, \\
\omega^L &= \omega - \eta\,\Gamma_0 - \left(\frac{1-\eta}{2} - \frac{1}{N}\right)\sin\alpha, \\
K^{HL} &= \frac{1-\eta}{2}, \qquad K^{LH} = \frac{1+\eta}{2}.
\end{align}

The parameter η (= (N^H − N^L)/N) is the ratio of the sizes of the two clusters. If these equations have a stable phase-locked solution, i.e., d(Φ^H − Φ^L)/dt = 0, then the clusters will have the same frequency. Thus, we define Ψ ≡ Φ^H − Φ^L and derive its stable fixed point from the following equation:

\begin{align}
\frac{d\Psi}{dt} &= \omega^H - \omega^L - K^{HL}\sin(\Psi + \alpha) - K^{LH}\sin(-\Psi + \alpha) \tag{15} \\
&= \delta\omega + A\sin(\Psi + \theta) \equiv f(\Psi), \tag{16}
\end{align}

where δω = (1 + η)Γ0 − η sin α, A = √(η² cos² α + sin² α), sin θ = − sin α / A, and cos θ = η cos α / A. The stable fixed point Ψ* satisfies f(Ψ*) = 0 and f′(Ψ*) < 0, and it exists in the following range:

\[ |(1 + \eta)\Gamma_0 - \eta\sin\alpha| < \sqrt{\eta^2\cos^2\alpha + \sin^2\alpha}. \]

Outside this range, the phase-locked solution disappears and the two clusters rotate with different frequencies, which corresponds to the heterogeneous layered clusters described above.

² The system defined by Eqs. (4) and (7) is invariant under the translations (α, β, φi) → (−α, β + π, −φi) and (α, β, φi) → (α + π, β + π, φi). Owing to this symmetry, it is sufficient to check the region α ∈ [0, π/2) and β ∈ [−π, π).
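To make the locking condition concrete, the short sketch below (our illustration; the values of α and η are arbitrary examples, not those underlying Figs. 6 and 7) evaluates δω and A and reports whether a frequency-locked solution exists, together with the critical Γ0 at which it is lost.

```python
import numpy as np

def locking_condition(gamma0, alpha, eta):
    """Evaluate delta_omega and A for the reduced two-cluster system (Eqs. 13-16).
    A stable frequency-locked fixed point of f(Psi) requires |delta_omega| < A."""
    delta_omega = (1.0 + eta) * gamma0 - eta * np.sin(alpha)
    A = np.sqrt(eta**2 * np.cos(alpha) ** 2 + np.sin(alpha) ** 2)
    return delta_omega, A, abs(delta_omega) < A

def critical_gamma0(alpha, eta):
    """Gamma_0 at which the locked solution is lost as Gamma_0 increases from zero."""
    A = np.sqrt(eta**2 * np.cos(alpha) ** 2 + np.sin(alpha) ** 2)
    return (eta * np.sin(alpha) + A) / (1.0 + eta)

if __name__ == "__main__":
    alpha, eta = 0.1 * np.pi, 0.2        # example values only
    for gamma0 in (0.3, 0.5):
        d, A, locked = locking_condition(gamma0, alpha, eta)
        print(f"Gamma_0={gamma0}: delta_omega={d:.3f}, A={A:.3f}, locked={locked}")
    print("critical Gamma_0 ~", round(critical_gamma0(alpha, eta), 3))
```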
