Bull Math Biol
DOI 10.1007/s11538-013-9928-0

ORIGINAL ARTICLE

A Biophysical Model of Adaptive Noise Filtering in the Shark Brain

Peter Bratby · John Montgomery · James Sneyd

Received: 1 September 2013 / Accepted: 10 December 2013 © Society for Mathematical Biology 2013

Abstract Sharks detect their prey using an extremely sensitive electrosensory system that is capable of distinguishing weak external stimuli from a relatively strong background noise generated by the animal itself. Experiments indicate that part of the shark's hindbrain, the dorsal octavolateralis nucleus (DON), is responsible for extracting the external stimulus using an adaptive filter mechanism to suppress signals correlated with the shark's breathing motion. The DON's principal neuron integrates input from afferents as well as many thousands of parallel fibres transmitting, inter alia, breathing-correlated motor command signals. A number of models in the literature study how this adaptive filtering mechanism arises, but most of them are based on a spike-train approach. This paper presents a biophysically based computational simulation which demonstrates a mechanism for adaptive noise filtering in the DON. A spatial model of the neuron uses the Hodgkin–Huxley equations to simulate the propagation of action potentials along the dendrites. Synaptic inputs are modelled by applied currents at various positions along the dendrites, whose input conductances are varied according to a simple learning rule. Simulation results show that the model is able to demonstrate adaptive filtering in agreement with previous experimental and modelling studies. Furthermore, the spatial nature of the model does not greatly affect its learning properties, and in its present form it is effectively equivalent to an isopotential model which does not incorporate a spatial element.

Keywords Adaptive filter · Anti-Hebbian synaptic plasticity · Computer simulation · Cerebellum-like · Neural model


P. Bratby · J. Sneyd
Department of Mathematics, University of Auckland, Auckland, New Zealand
e-mail: [email protected]

J. Montgomery
Institute of Marine Science, University of Auckland, Auckland, New Zealand


1 Introduction

Sensory signals generated by an animal's sense organs contain information which the animal uses to determine its behaviour. Not all of the information is useful, however: in addition to signals generated by behaviourally important stimuli, the incoming signal contains self-generated components caused by the animal's own actions. An example of this reafference is the movement of the visual field across the retina caused by the voluntary movement of an animal's eyeball. The reafferent signal is best suppressed so that the animal may concentrate on the useful component of the signal (von Holst and Mittelstaedt 1950).

Elasmobranchs (sharks, skates and rays) possess an extremely sensitive electrosensory system, used primarily for prey detection, which is capable of detecting minute variations in electric field strength. Electrical activity within the prey's body (largely generated from ion pumping to maintain circulatory homeostasis) results in a weak electric field which extends some distance from its body. The resulting electric field may be detected by the shark's electrosense. Sharks also generate their own electric field. Any motion of the animal's body results in a modulation of this field; in particular, breathing motion produces a large, low-frequency variation. Consequently the electric field detected at the animal's skin is a superposition of behaviourally important but weak external stimuli and strong, slowly varying self-generated noise. In order to respond appropriately, the shark needs to be able to differentiate between the self-generated and external fields.

Electroreceptors are found at the base of tiny jelly-filled tubes opening directly to the surface through a pore, and are distributed across the surface of the shark's body. A difference in electrical potential across the electroreceptor causes a signal to be transmitted via afferent neurons towards the animal's brain, where a structure within the hindbrain, the dorsal octavolateral nucleus (DON), is responsible for its initial processing. Experiments (Montgomery and Bodznick 1994) have shown that secondary neurons within the DON are extremely effective at cancelling the reafferent component of the incoming electrosensory signal.

The modulation in electric field generated by the animal's breathing motion is almost completely uniform across all electroreceptors, regardless of their position and orientation. The resulting reafferent signal can therefore largely be considered common mode, and a simple common-mode rejection mechanism may contribute to its suppression. Indeed, Bodznick et al. (1992) obtained considerable evidence in support of a common-mode noise suppression mechanism based on lateral inhibition, where a neuron's receptive field is composed of a diffuse inhibitory 'surround' subtracted from a discrete excitatory 'centre'.

However, experiments (Montgomery and Bodznick 1994) have shown that this is not the only noise suppression mechanism. An artificial electric stimulus applied in time with the shark's breathing motion initially induces a strong response in the DON's secondary neurons. However, over the course of several minutes, the secondary neurons (ascending efferent neurons, or AENs) learn to suppress the stimulus. When the stimulus is removed, the AEN response contains a 'negative image' mirroring the original response. Notably, this response is specific to the waveform and timing of the stimulus within the breathing cycle.

Fig. 1 Schematic of the neural circuitry of the dorsal octavolateral nucleus (DON). Adapted from Montgomery and Bodznick (1994)

Montgomery and Bodznick (1994), adapting earlier work by Bell et al. (1993) on the suppression of reafference within the electrosensory lateral line lobe, proposed an adaptive filter mechanism based on anti-Hebbian synaptic plasticity within the DON. Within the DON lies an extremely regular arrangement of neurons: AENs have distinct basal and apical dendritic arborisations; the basal dendrites receive synaptic input directly from electrosensory afferent neurons carrying sensory signals from the electroreceptors, while the apical dendrites receive synaptic input from many thousands of parallel fibres. Parallel fibre inputs consist of reference signals with a wide range of temporal relationships to the animal's behaviour, including phase-shifted and filtered copies of motor command signals. In order to assemble a cancellation signal, the AEN must select a suitably weighted combination of parallel fibre inputs which is consistently correlated with the reafference.

Bodznick et al. (1999) describe a model which demonstrates the above mechanism as follows (Fig. 1). Adjustments in the strength of the parallel fibre synapses follow two simple learning rules (a minimal sketch of this sign convention is given below):

(1) Reduce the strength of synapses from parallel fibres which are consistently active when the AEN is active;
(2) Increase the strength of synapses from parallel fibres which are consistently active when the AEN is inactive.
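To make the sign convention of rules (1) and (2) concrete, the following minimal Python sketch applies them to a single time bin. The binary activity variables, the function name and the fixed step size delta are illustrative assumptions on our part, not part of the model described by Bodznick et al. (1999).

```python
import numpy as np

def update_weights(w, pf_active, aen_active, delta=0.01):
    """Qualitative anti-Hebbian update for one time bin (illustrative only).

    w          : array of parallel-fibre synaptic weights
    pf_active  : boolean array, True where a parallel fibre is active in this bin
    aen_active : bool, True if the AEN is active in this bin
    """
    w = w.copy()
    if aen_active:
        w[pf_active] -= delta   # rule (1): weaken synapses active together with the AEN
    else:
        w[pf_active] += delta   # rule (2): strengthen synapses active while the AEN is quiet
    return w

# Example: three parallel fibres, the first two active during a bin in which the AEN is active
w = np.zeros(3)
w = update_weights(w, pf_active=np.array([True, True, False]), aen_active=True)
print(w)   # the two fibres active alongside the AEN are weakened; the third is unchanged
```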

2 Previous Models

The simplicity of the mechanism described above lends itself to investigation through computational simulation. Indeed, Nelson and Paulin (1995) developed a computational simulation based on a network-level model of the DON incorporating both common-mode suppression and adaptive filtering. Nelson and Paulin's (1995) approach was to model classes of neurons as isopotential cells whose membrane potentials evolve according to phenomenologically derived rules. A brief description of Nelson and Paulin's (1995) model follows.


Each neuron is defined by two state variables representing the membrane potential and an adaptive threshold. Action potentials are not explicitly modelled; the output of each neuron is a stochastically generated spike train whose frequency is determined by the continuously varying membrane potential and adaptive threshold. The model contains a number of distinct classes of neurons—primary afferents, AENs and parallel fibres—distinguished by distinct values for various parameters such as input resistance, time constant and maximum firing rate. These parameters are tuned so as to give physiologically realistic behaviour for each class of neuron. Input current to each neuron, either synaptic current from another neuron or—in the case of primary electrosensory afferents—current induced by an applied electric field, drives a change in membrane potential. Synaptic input current is triggered by each presynaptic spike and is weighted by a time-dependent weight variable. Synaptic weights evolve according to an anti-Hebbian learning rule which decreases the weight of a synapse where pre- and postsynaptic firing rates are simultaneously elevated above their baseline levels, and increases the weight otherwise. The primary afferents receive input composed of a sinusoid representing a common-mode ventilatory signal, in addition to an identical but phase-delayed sinusoid representing an external stimulus applied differentially to 'centre' and 'surround' neurons. Nelson and Paulin (1995) found that their model resulted in good suppression of the reafferent signal, and that the common-mode rejection mechanism alone was not able to achieve this.

In a related study, Roberts and Bell (2000) developed a model of another cerebellum-like structure, the mormyrid electrosensory lateral line lobe (ELL), an organ whose function is to suppress unwanted reafference generated by the animal's electric organ discharge. As in the Nelson and Paulin (1995) study, action potential firing rates are modelled stochastically by calculating the probability that the membrane potential is above its threshold. Average membrane potential is calculated by summing afferent and parallel fibre input current, which in turn is composed of EPSPs of varying phase with respect to the start of each cycle. Unlike in the Nelson and Paulin (1995) study, the learning rule is split into two components: associative depression and non-associative enhancement. The associative depression decreases the weight of synapses by an amount functionally related to the time between the start of the EPSP and the time of the spike. Roberts and Bell (2000) found that this model resulted in reliable suppression of the reafference, as long as the associative depression learning rule was coincident with the EPSP. Other forms of the learning rule resulted in suboptimal negative images or instabilities.

3 Method

The previously discussed modelling studies are phenomenologically based. It is natural to ask whether it is possible to replicate these modelling results using a conductance-based neuron model which demonstrates suppression of reafferent signals based only on electrophysiological neuron dynamics. This section sets out in detail the mathematical construction of such a model.

In brief, the model consists of a single neuron, whose membrane potential V is modelled according to the Hodgkin–Huxley equations (see, for example, Nelson 2011).

Table 1 Parameter values for the biophysical neuron model

Parameter   Description        Value
τm          Time constant      1 ms
λm          Length constant    0.5 cm
Rm          Input resistance   1000 Ω cm²
xmax        Neuron length      0.02 cm
L           Synapse length     0.002 cm
S           Soma length        0.002 cm
xs          Soma position      0.01 cm

A single afferent input and a number of parallel fibre inputs consisting of prescribed waveforms Iaff(t) and Ii(t) supply current to the neuron. Each parallel fibre current is weighted by a variable wi representing the synaptic strength. Each synaptic weight wi evolves according to an anti-Hebbian learning rule which correlates the parallel fibre input current and the membrane potential.

3.1 Input Currents

Input currents consist of n parallel fibre currents Ii(t) and one afferent current Iaff(t). Each of the parallel fibre currents is weighted by a time-dependent non-dimensional variable wi(t) representing the synaptic strength.

3.2 Neuron Model

The neuron model governs the dynamics of the membrane potential V. We present two versions of the model: an isopotential model based solely on the Hodgkin–Huxley equations, and a spatial model based on the Hodgkin–Huxley equations in combination with the cable equation.

3.2.1 Isopotential Model

The membrane potential V is governed by the Hodgkin–Huxley equations:

    C_m \frac{dV}{dt} = I_{app}(t) - I_{ion}(t),      (1)

where

    I_{ion}(t) = \bar{g}_K n^4 (V - V_K) + \bar{g}_{Na} m^3 h (V - V_{Na}) + \bar{g}_L (V - V_L)      (2)

and

    I_{app}(t) = I_{aff}(t) + \sum_i w_i(t) I_i(t).      (3)

See Appendix for the equations governing the gating variables n, m and h, and for the values of the reversal potentials VK, VNa and VL and the ionic conductances ḡK, ḡNa and ḡL. The membrane capacitance Cm = τm/Rm (see Table 1).
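As an illustration, the right-hand side of Eqs. (1)–(3) might be coded along the following lines, using the gating equations (Eqs. (23)–(31)) and the parameter values of Tables 1 and 2. The function names and the vectorised treatment of the parallel fibre inputs are our own choices; this is a sketch, not the authors' Matlab code.

```python
import numpy as np

# Reversal potentials (mV) and maximal conductances (mS/cm^2) from Table 2
VK, VNa, VL = -77.0, 50.0, -54.4
gK, gNa, gL = 36.0, 120.0, 0.3
Cm = 1.0  # uF/cm^2, from Cm = tau_m / R_m = 1 ms / 1000 Ohm cm^2 (Table 1)

# Gating rate functions (Appendix, Eqs. (26)-(31)); V in mV, rates in 1/ms
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def ionic_current(V, m, n, h):
    """I_ion of Eq. (2), in uA/cm^2."""
    return gK * n**4 * (V - VK) + gNa * m**3 * h * (V - VNa) + gL * (V - VL)

def isopotential_rhs(V, m, n, h, I_aff, I_pf, w):
    """Time derivatives of V, m, n, h for Eqs. (1) and (23)-(25).

    I_aff : afferent current (uA/cm^2)
    I_pf  : array of parallel fibre currents (uA/cm^2)
    w     : array of synaptic weights
    """
    I_app = I_aff + np.dot(w, I_pf)          # Eq. (3)
    dV = (I_app - ionic_current(V, m, n, h)) / Cm
    dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
    dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
    dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
    return dV, dm, dn, dh
```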


Fig. 2 Structure of the spatial neuron model. Parallel fibre input currents wi Ii are distributed along the neuron at synapses centred at positions xi. Afferent input current Iaff is located at position xaff. Ionic conductances ḡK, ḡNa and ḡL take on distinct values in the somatic and dendritic portions of the neuron

3.2.2 Spatial Model

In order to construct a spatial neuron model (Fig. 2), the cable equation and Hodgkin–Huxley equations are combined, resulting in a non-linear partial differential equation:

    \tau_m \frac{\partial V}{\partial t} = \lambda_m^2 \frac{\partial^2 V}{\partial x^2} + R_m \left( I_{app}(x, t) - I_{ion}(x, t) \right),      (4)

where

    I_{ion}(x, t) = \bar{g}_K(x) n^4 (V - V_K) + \bar{g}_{Na}(x) m^3 h (V - V_{Na}) + \bar{g}_L(x) (V - V_L).      (5)

Iapp(x, t) represents the space- and time-dependent synaptic input current density. Each synapse is assumed to supply current uniformly along a short length of dendrite, with no current supplied at the gaps between synapses. Consequently, at any instant of time, Iapp(x, t) is a piecewise constant function of x. Parallel fibres are assumed to project to synapses on the apical dendrite, whereas the electrosensory afferent projects to a single synapse on the basal dendrite. This tallies with the current understanding of the anatomy of the DON and other cerebellum-like structures (Bell et al. 2008).

Iapp(x, t) is formed from a set of spatially distributed synaptic input currents consisting of afferent input Iaff centred at position xaff and parallel fibre currents Ii(t) centred at positions xi, as measured from the apical end of the neuron:

    I_{app}(x, t) =
    \begin{cases}
      I_{aff}(t)    & \text{if } x_{aff} - L/2 < x \le x_{aff} + L/2, \\
      w_i(t) I_i(t) & \text{if } x_i - L/2 < x \le x_i + L/2, \\
      0             & \text{otherwise},
    \end{cases}      (6)

where L is the length of each synapse (see Fig. 2). The input currents Iaff(t) and Ii(t), and the positions of the synapses xaff and xi, are specified separately for each simulation run. Generally speaking, the afferent input is located at the distal end of the basal dendrite, while the parallel fibre inputs are uniformly distributed along the apical dendrite.


Also, τm and λm are the membrane time and length constants and Rm is the passive membrane resistance; n, m and h are the voltage-dependent gating variables of the Hodgkin–Huxley equations (see Appendix). See Table 1 for the values of parameters used in these equations, which represent typical values across a range of neurons (Keener and Sneyd 1998).

The mechanism for synaptic plasticity requires action potentials to be initiated at the soma and then backpropagated along the dendrites. For this reason, it is necessary for the soma to be more excitable than the dendrites, which is achieved by making the values of the ionic conductances ḡK(x), ḡNa(x) and ḡL(x) dependent on the space variable x. The neuron is divided into three sections, representing an apical dendrite, soma and basal dendrite, with the ionic conductances taking distinct values in each. At the soma, they take the standard Hodgkin–Huxley values (see Appendix), and in the dendrites, 10 % of these values:

    \bar{g}_X(x) =
    \begin{cases}
      \bar{g}_X      & \text{if } x_s - S/2 < x \le x_s + S/2, \\
      0.1\,\bar{g}_X & \text{otherwise},
    \end{cases}      (7)

where xs is the soma position, S is the soma length and X = K, Na, L.

It is assumed that there is zero current across the endpoints of the neuron, resulting in no-flux boundary conditions:

    \frac{\partial V}{\partial x}(0, t) = \frac{\partial V}{\partial x}(x_{max}, t) = 0.      (8)
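The sketch below shows one way the spatial model could be set up numerically: the conductance profile of Eq. (7), the piecewise-constant current density of Eq. (6), and a second-difference operator with ghost-point (no-flux) boundaries for Eq. (8). The grid resolution, helper names and the backward-Euler comment at the end are our own assumptions; the paper does not spell out its discretisation.

```python
import numpy as np

# Geometry (Table 1); lengths in cm, time in ms, resistance in Ohm cm^2
x_max, L_syn, S_soma, x_soma = 0.02, 0.002, 0.002, 0.01
lam, tau_m, R_m = 0.5, 1.0, 1000.0

N = 200                                # number of grid points (our choice)
x = np.linspace(0.0, x_max, N)
dx = x[1] - x[0]

def conductance_profile(g_bar):
    """Eq. (7): full Hodgkin-Huxley conductance at the soma, 10% elsewhere."""
    g = 0.1 * g_bar * np.ones(N)
    soma = (x > x_soma - S_soma / 2) & (x <= x_soma + S_soma / 2)
    g[soma] = g_bar
    return g

def synaptic_current_density(t, x_aff, I_aff, x_pf, I_pf, w):
    """Eq. (6): piecewise-constant applied current density at time t.

    I_aff is a callable; x_pf, I_pf, w are lists of positions, callables and weights.
    """
    I_app = np.zeros(N)
    I_app[np.abs(x - x_aff) <= L_syn / 2] = I_aff(t)
    for xi, Ii, wi in zip(x_pf, I_pf, w):
        I_app[np.abs(x - xi) <= L_syn / 2] = wi * Ii(t)
    return I_app

# Second-difference operator with no-flux (Neumann) boundaries, Eq. (8)
D2 = (np.diag(np.full(N - 1, 1.0), -1)
      - 2.0 * np.eye(N)
      + np.diag(np.full(N - 1, 1.0), 1))
D2[0, 1] = 2.0        # ghost-point treatment of dV/dx = 0 at x = 0
D2[-1, -2] = 2.0      # and at x = x_max
D2 /= dx**2

g_Na_x = conductance_profile(120.0)    # example: Eq. (7) applied to g_Na (mS/cm^2)

# A (semi-)implicit backward-Euler step for the cable term of Eq. (4) would then solve
#   (I - dt * lam**2 / tau_m * D2) V_new = V_old + dt / tau_m * R_m * (I_app - I_ion)
```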

3.3 Learning Rule

Each parallel fibre delivers a weighted current density wi(t)Ii(t). The weights wi are not constrained to be positive; the sign of wi determines the polarity of the input current. Synaptic plasticity is incorporated into the model using the following learning rule:

    \frac{dw_i}{dt} = \varepsilon \left( V(t) - V_{rest} \right) I_i(t),      (9)

where ε is a small parameter representing the learning rate and Vrest = −65 mV is the membrane resting potential. In the case of the spatial model, V(t) is the membrane potential at the midpoint of synapse i. Equation (9) implements an anti-Hebbian learning rule in which the unweighted synaptic current Ii(t) represents presynaptic activity and the elevation of the membrane potential above its resting value, V(t) − Vrest, represents postsynaptic activity. We emphasise here that this rule does not express a specific biophysical mechanism of synaptic plasticity, but represents the simplest possible mathematical formulation of a learning rule which correlates pre- and postsynaptic activity.

3.4 Learning Speed

This section introduces a measurement thalf which characterises how quickly the model learns. thalf measures the time taken for a synaptic weight to reach half of its equilibrium value.

Fig. 3 The learning rate thalf is defined as the time taken for the weight w to reach half its equilibrium value w∞

Considering an arrangement of the model with a single parallel fibre input, the time course of the synaptic weight w typically looks like the trace shown in Fig. 3. Although w may increase or decrease over the course of a single cycle, if we consider only points at the start of each cycle, w(t) is a monotonically increasing function of time, and eventually approaches a static equilibrium w∞. Then thalf is defined as the value of t at which w reaches half its steady-state value; in other words, thalf satisfies

    w(t_{half}) = 0.5\, w_{\infty}.      (10)
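A small helper along the following lines could estimate thalf from a weight trace sampled at cycle boundaries; using the final sample as a stand-in for w∞ is our simplification, not a definition from the paper.

```python
import numpy as np

def t_half(t, w):
    """Return the first time at which w reaches half of its final (equilibrium) value.

    t, w : 1-D arrays sampled at cycle boundaries, with w approaching its
           equilibrium value w_inf monotonically (w_inf approximated by w[-1]).
    """
    w_inf = w[-1]
    idx = np.argmax(np.abs(w) >= 0.5 * np.abs(w_inf))   # first index past the half-way point
    return t[idx]

# Example: a saturating exponential approach to w_inf = 1
t = np.linspace(0.0, 100.0, 1001)
print(t_half(t, 1.0 - np.exp(-t / 20.0)))   # roughly 20 * ln(2), i.e. about 13.9
```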

Note that this definition of thalf is only meaningful in the situation described above, where the whole-cycle values of w increase monotonically to an equilibrium value. It is not immediately clear that this will always be the case, or even whether an equilibrium will be attained at all. Indeed, in situations involving two or more parallel fibres, it was found that synaptic weights often overshoot before settling down to a steady state (for example, see Fig. 8). Simulations involving a single parallel fibre, however, consistently follow the pattern described, in which case thalf is well defined.

3.5 Software and Numerical Methods

The simulations were performed using custom Matlab scripts. In the isopotential case, the built-in ODE solver ode45 was used. In the spatial case, a numerical scheme based on the implicit finite difference method was developed. In each case, numerical stability was tested by comparing the results of each simulation with the same simulation run at a smaller step size.
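For illustration, the sketch below integrates the isopotential model together with the learning rule using scipy's RK45 integrator, the closest scipy analogue of Matlab's ode45. The input waveforms, tolerances and initial gating values are our own choices, loosely modelled on the single-pulse example of Sect. 4.2 below; they are not taken from the authors' scripts.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hodgkin-Huxley parameters (Table 2) and membrane capacitance Cm = tau_m / R_m
VK, VNa, VL = -77.0, 50.0, -54.4
gK, gNa, gL = 36.0, 120.0, 0.3
Cm, V_rest, eps = 1.0, -65.0, 3e-7

a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
b_m = lambda V: 4 * np.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
b_h = lambda V: 1 / (1 + np.exp(-(V + 35) / 10))
a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)

# Illustrative inputs: one parallel fibre pulse and a delayed afferent copy (uA/cm^2)
I_max, t0, alpha, dt_shift = 30.0, 800.0, 90.0, 100.0
I_pf = lambda t: I_max * np.exp(-((t - t0) / alpha) ** 2)
I_aff = lambda t: I_pf(t - dt_shift)

def rhs(t, y):
    """State y = [V, m, n, h, w1]; Eqs. (1)-(3), (9) and (23)-(25)."""
    V, m, n, h, w1 = y
    I_ion = gK * n**4 * (V - VK) + gNa * m**3 * h * (V - VNa) + gL * (V - VL)
    I_app = I_aff(t) + w1 * I_pf(t)
    return [(I_app - I_ion) / Cm,
            a_m(V) * (1 - m) - b_m(V) * m,
            a_n(V) * (1 - n) - b_n(V) * n,
            a_h(V) * (1 - h) - b_h(V) * h,
            eps * (V - V_rest) * I_pf(t)]          # anti-Hebbian rule, Eq. (9)

y0 = [-65.0, 0.05, 0.32, 0.6, 0.1]                  # rest-like initial state, w1(0) = 0.1
sol = solve_ivp(rhs, (0.0, 2000.0), y0, method="RK45", max_step=0.1, rtol=1e-6)
print("final weight w1 =", sol.y[4, -1])

# Repeating the run with a smaller max_step (or tighter tolerances) provides the
# kind of step-size check described above.
```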

4 Results

4.1 Backpropagation of Action Potentials

In order to investigate the electrical dynamics of the spatial neuron model, a simulation was run with a single afferent input located at the distal end of the dendrite (xaff = 0.002 cm) and no parallel fibres. Figure 4 shows the time course of the membrane potential where Iaff is a current increasing uniformly from 0 µA/cm² at t = 0 to 150 µA/cm² at t = 1000 ms. The graph shows the time course of the membrane potential at the soma and synapse between 500 and 800 ms.


Fig. 4 Time course of the membrane potential of a neuron with a single afferent input located at the distal end of the dendrite. The input is a current increasing uniformly from 0 to 150 µA/cm². The upper and lower traces show the membrane potential at the synapse and soma respectively. Also shown is a close-up of a single action potential. The time course of the membrane potential at the distal synapse closely follows the time course of the membrane potential at the soma

Until roughly 650 ms, the membrane potential at the synapse exceeds the membrane potential at the soma by approximately 5 mV due to current leakage. Once the membrane potential reaches a threshold of a little over −50 mV, action potentials are initiated, which quickly backpropagate along the dendrite to the synapse. The inset graph shows that the time taken for the action potential to backpropagate to the synapse is very small, no more than 0.5 ms, which is much shorter than the time course of a single action potential.

4.2 Comparison of Spatial and Isopotential Models

Figures 5 and 6 show a simple example of anti-Hebbian learning in the isopotential and spatial models respectively. In each case, a single parallel fibre supplies a brief pulse of current; the afferent input is an identical waveform shifted by a delay Δt:



    I_1(t) = I_{max} \exp\left( -\left( \frac{t - t_0}{\alpha} \right)^2 \right),      (11)

    I_{aff}(t) = I_1(t - \Delta t),      (12)

where t0 = 800 ms, α = 90 ms and ε = 3 × 10⁻⁷. The synaptic weight is set to an initial value w1(0) = 0.1 and the simulation run until tmax = 2000 ms. The value of Imax is specified separately for the isopotential and spatial cases such that the peak input current is in the correct range to induce a train of action potentials.

As Figs. 5 and 6 show, where the parallel fibre and afferent signals are almost coincident (Δt = 100 ms), the weight increases; where they are not (Δt = 400 ms), the weight decreases. In order to compare the two models quantitatively, the same simulation was run for a range of values of Δt and the resulting change in weight Δw = w1(tmax) − w1(0) was calculated. Plotting Δw against Δt shows the two models' predictions of the dependence of synaptic plasticity on the relative timing of afferent vs parallel fibre input (Fig. 7).


Fig. 5 Comparison of the response of the isopotential model to an afferent stimulus delayed by (a) Δt = 100 ms and (b) Δt = 400 ms with respect to the parallel fibre input

Fig. 6 Comparison of the response of the spatial model to an afferent stimulus delayed by (a) Δt = 100 ms and (b) Δt = 400 ms with respect to the parallel fibre input

In both the isopotential and spatial cases, where the delay is less than about 150 ms, synaptic strength is increased; otherwise it is decreased. Notwithstanding a difference in magnitude (due in part to the difference in input currents), the shape of the resulting curve is almost identical in the spatial and isopotential cases.

As demonstrated by the simulations in this section (Figs. 5, 6 and in particular Fig. 7), the learning behaviour is qualitatively very similar in the isopotential and spatial models. For this reason, the remaining simulations were performed with only the isopotential model.


Fig. 7 Plot of change in synaptic weight Δw vs the delay in stimulus relative to the parallel fibre input Δt for the (a) isopotential and (b) spatial models. Parallel fibre input consists of a single brief pulse; afferent input consists of the same waveform phase-shifted by Δt . The change in synaptic weight Δw is calculated for values of Δt ranging from −400 to 400 ms

4.3 Suppression of a Periodic Stimulus

Next, the model's ability to suppress a periodic stimulus was investigated. The afferent signal consists of a brief pulse applied once each ventilatory cycle:

    I_{aff}(t) = I_{max} \exp\left( -\left( \frac{t - t_0}{\alpha} \right)^2 \right),      (13)

where Imax = 30 µA/cm², α = 90 ms, t0 = 800 ms and t is the time from the start of each 2000 ms cycle. Parallel fibre input is composed of thirty sinusoids of the following form:

    I_i(t) = \frac{I_{max}}{2} \left( 1 - \sin \frac{2\pi i t}{T} \right),      (14)

    I_{i+15}(t) = \frac{I_{max}}{2} \left( 1 - \cos \frac{2\pi i t}{T} \right),      (15)

for i = 1 . . . 15, T = 2000 ms and Imax = 30 µA/cm². The learning rate (Eq. (9)) was ε = 3 × 10⁻⁷ and the synaptic weights were set to an initial value of wi(0) = 0. The simulation was run for 120 s.

The results of the simulation are shown in Fig. 8. Initially, the neuron spikes strongly in time with the stimulus. After roughly 50 s, the neuron stops spiking and the membrane potential shows a subthreshold depolarisation in time with the stimulus which gradually decreases in size during the remainder of the simulation. The time course of a selection of synaptic weights wi is also shown in Fig. 8. During the spiking phase the weights change quickly, but after spiking has stopped, they continue to move gradually towards their equilibrium values. Figure 9 shows the total parallel fibre current \sum_i w_i(t) I_i(t). Over time, it converges to a waveform which closely approximates the waveform of the afferent stimulus Iaff.
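The following sketch constructs the afferent pulse of Eq. (13) and the thirty parallel fibre waveforms of Eqs. (14)–(15), and then fits weights by least squares purely to illustrate that a weighted sum of this basis can closely reproduce the pulse (cf. Fig. 9). The fitted weights are not the weights learned by the model; this is only a check on the expressiveness of the basis.

```python
import numpy as np

T, I_max, alpha, t0 = 2000.0, 30.0, 90.0, 800.0          # ms, uA/cm^2
t = np.linspace(0.0, T, 2001)

# Afferent pulse, Eq. (13)
I_aff = I_max * np.exp(-((t - t0) / alpha) ** 2)

# Thirty parallel fibre inputs, Eqs. (14)-(15): 15 sine and 15 cosine terms
I_pf = np.empty((30, t.size))
for i in range(1, 16):
    I_pf[i - 1] = 0.5 * I_max * (1.0 - np.sin(2.0 * np.pi * i * t / T))
    I_pf[i + 14] = 0.5 * I_max * (1.0 - np.cos(2.0 * np.pi * i * t / T))

# Least-squares weights: the combination sum_i w_i I_i(t) that best matches the pulse
w, *_ = np.linalg.lstsq(I_pf.T, I_aff, rcond=None)
residual = np.max(np.abs(I_pf.T @ w - I_aff))
print("worst-case mismatch over one cycle: %.3f uA/cm^2" % residual)
```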


Fig. 8 (a) Time course of the membrane potential of the isopotential model as it learns to suppress a periodic stimulus consisting of a brief pulse applied at t = 800 ms in each 2000 ms cycle. Parallel fibre input is composed of thirty sine and cosine waveforms of varying periods. (b) Time course of a selection of synaptic weights wi

Fig. 9 The total weighted parallel fibre current \sum_i w_i(t) I_i(t) (solid line) compared to the stimulus Iaff(t) (dashed line) at times t = 0, 30, 60 and 120 s during the simulation shown in Fig. 8. Each plot shows one 2000-ms cycle

4.4 Demonstration of Negative Image

Figure 10 illustrates a slightly more complicated example of adaptive suppression, replicating experiments performed by Bodznick (1993). The afferent signal is composed of a stimulus time-locked to a sinusoidal ventilatory signal:

    I_{aff}(t) = I_{vent}(t) + I_{stim}(t),      (16)

where

    I_{stim}(t) = I_{max} \exp\left( -\left( \frac{t - t_0}{\alpha} \right)^2 \right),      (17)

    I_{vent}(t) = \frac{I_{max}}{2} \left( 1 - \sin \frac{2\pi t}{T} \right).      (18)

Fig. 10 Adaptive suppression of an external stimulus phase-locked to a ventilatory signal. All traces show three ventilatory cycles. The afferent input is composed of the sum of a ventilatory signal and an external stimulus. The ventilatory signal is a sinusoidal waveform of period 2 s (top trace); the external stimulus is a brief pulse coperiodic with the ventilatory signal (2nd trace). Parallel fibre inputs consist of thirty sine and cosine functions of varying periods. Initially, all synaptic weights are set to zero. At t = 0 s the external stimulus is turned off and the AEN responds strongly to the ventilatory signal. By t = 240 s the response has diminished to zero as the synaptic weights adjust. At t = 300 s the external stimulus is turned on and the AEN responds strongly to the stimulus. By t = 600 s the response has again diminished as the synaptic weights adjust to the new stimulus. At t = 900 s the external stimulus is turned off and the time course of the membrane potential contains a 'negative image' of the external stimulus. By t = 1200 s the weights have readjusted and the negative image is no longer apparent

Imax = 30 µA/cm², T = 2000 ms, α = 90 ms and t0 = 800 ms. In Eq. (17), t is the time relative to the start of each ventilatory cycle.
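A sketch of how the afferent input for this protocol could be assembled is given below: the ventilatory sinusoid of Eq. (18) plus the pulse of Eq. (17), gated on between 300 s and 900 s as described in the protocol that follows. The sampling step and array names are our own.

```python
import numpy as np

T, I_max, alpha, t0 = 2000.0, 30.0, 90.0, 800.0     # ms, uA/cm^2
t = np.arange(0.0, 1200e3, 1.0)                     # 1200 s protocol, 1 ms sampling
t_cycle = t % T                                     # time within the current ventilatory cycle

I_vent = 0.5 * I_max * (1.0 - np.sin(2.0 * np.pi * t / T))          # Eq. (18)
I_stim = I_max * np.exp(-((t_cycle - t0) / alpha) ** 2)             # Eq. (17)

stim_on = (t >= 300e3) & (t < 900e3)                # stimulus applied between 300 s and 900 s
I_aff = I_vent + np.where(stim_on, I_stim, 0.0)     # Eq. (16), with the stimulus gated
```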


The simulation begins with zero synaptic weights (wi(0) = 0) and with the external stimulus Istim turned off. The external stimulus is turned on at t = 300 s and off again at t = 900 s.

Initially the neuron spikes strongly in time with the ventilation. By t = 100 s spiking has been eliminated, although subthreshold depolarisation remains. By t = 240 s the depolarisation has been eliminated entirely. At t = 300 s the external stimulus Istim is turned on, and the neuron spikes in time with the stimulus. By t = 442 s the spiking is again eliminated, and by t = 600 s depolarisation is eliminated entirely. At t = 900 s, when the stimulus is turned off, a 'negative image', consisting of hyperpolarisation mirroring the stimulus, is visible. By t = 1200 s the negative image has been eliminated.

4.5 Dependence of Learning on Resting Potential

The learning rule (Eq. (9)) defines postsynaptic activity as the elevation of the membrane potential above the resting potential, Vrest. This section examines a simple generalisation of this learning rule by replacing Vrest with a variable parameter V0. Equations (9) and (3) become

    \frac{dw_i}{dt} = \varepsilon \left( V(t) - V_0 \right) I_i(t),      (19)

    I_{app}(t) = I_0 + I_{aff}(t) + \sum_i w_i(t) I_i(t),      (20)

where ε = 3 × 10⁻⁷ and we have introduced an additional constant current I0 into Eq. (20). I0, which depends on V0, is defined as the value of the applied current which results in an equilibrium resting potential V0. Afferent input and parallel fibre input are identical sinusoids:

    I_{aff}(t) = I_1(t) = \frac{I_{max}}{2} \left( 1 - \sin \frac{2\pi t}{T} \right),      (21)

where Imax = 30 µA/cm² and T = 2000 ms. The simulation was run for values of V0 ranging from −65 to −61 mV. For every value of V0, the value of w1 tends to an equilibrium value w∞ = −1, at which the afferent and weighted parallel fibre currents exactly balance each other.

In order to understand how the learning behaviour depends on the value of V0, we define two measurements, thalf and wspike, which characterise two aspects of the learning behaviour. First, thalf, the time taken for the weight w1 to reach half of its equilibrium value, is defined in Sect. 3.4. Second, wspike, the proportion of learning taking place while the neuron is spiking, is calculated as follows. Firstly, the time tspike at which the neuron stops spiking is defined as the maximum value of t for which V(t) > −50 mV. Then wspike = w1(tspike)/w∞.

Figure 11(a) shows the relationship between thalf and V0. Increasing V0 results in an increase in thalf; in other words, learning is slower for higher values of V0. Figure 11(b) shows the relationship between wspike and V0. Increasing V0 results in an increase in the proportion of learning taking place while the neuron is spiking.
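One way to obtain the holding current I0 for a given V0 is to note that, at equilibrium with no time-varying inputs, I0 must equal the ionic current evaluated at V0 with the gating variables at their steady-state values. The short sketch below computes I0 on this basis; this is our reading of the definition, not a procedure stated in the paper, and a root-finding approach on the full model would serve equally well.

```python
import numpy as np

VK, VNa, VL = -77.0, 50.0, -54.4        # mV (Table 2)
gK, gNa, gL = 36.0, 120.0, 0.3          # mS/cm^2 (Table 2)

a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
b_m = lambda V: 4 * np.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
b_h = lambda V: 1 / (1 + np.exp(-(V + 35) / 10))
a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)

def holding_current(V0):
    """Constant current I0 (uA/cm^2) whose steady state sits at membrane potential V0."""
    m = a_m(V0) / (a_m(V0) + b_m(V0))     # steady-state gating values at V0
    n = a_n(V0) / (a_n(V0) + b_n(V0))
    h = a_h(V0) / (a_h(V0) + b_h(V0))
    return gK * n**4 * (V0 - VK) + gNa * m**3 * h * (V0 - VNa) + gL * (V0 - VL)

for V0 in np.arange(-65.0, -60.5, 1.0):
    print("V0 = %.0f mV  ->  I0 = %.3f uA/cm^2" % (V0, holding_current(V0)))
```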


Fig. 11 (a) Time taken for the synaptic weight w1 to reach half its equilibrium value, thalf , vs V0 for a range of values of V0 . Increasing V0 results in a decrease in the learning speed. (b) Proportion of learning during the spiking phase, wspike , vs V0 . Increasing V0 results in a greater proportion of the learning taking place while the neuron is spiking

5 Discussion and Conclusions

The simulations presented here are based on a greatly simplified model of the principal neuron in the dorsal octavolateral nucleus. Whereas previous modelling studies (Nelson and Paulin 1995; Roberts and Bell 2000) take a network-level, phenomenological approach, this model is biophysically based, and, unlike the previous studies, no attempt is made to compare model predictions with experimental results. However, the qualitative behaviour of the Hodgkin–Huxley equations—subthreshold electrical flow, action potential initiation at the soma, backpropagation of action potentials—is typical of many classes of neurons, including the AEN. In essence, we have traded an empirically accurate description of the DON's known behaviour for realistic electrophysiological dynamics at the level of an individual neuron.

Constructing a model at the biophysical level gives several opportunities not available using the phenomenological approach. For example, the nature of the neuron's learning capabilities can be studied across the full range of membrane potentials, including subthreshold potentials where explicit spiking is not observed. The model of the adaptive filter could in the future be linked to biophysical models of LTP and LTD at the synapses.

In order to isolate the essential features of the neural dynamics which affect the capabilities of the adaptive filter, several assumptions and simplifications were made. A discussion of these follows, as well as an interpretation of some of the results.

One might expect the spatial distribution of synaptic input currents to have a significant effect on the neuron's behaviour. Backpropagating action potentials have further to travel to reach synapses located in the distal region of the dendrite, resulting in a phase delay compared to synapses located more proximally.


Likewise, current leakage causes an attenuation of the signal as it travels along the dendrite, resulting in proximal synaptic currents having a greater effect on the somatic membrane potential than distal ones. We argue, however, that because the timescale involved in the propagation of signals along the dendrites is small compared to the timescale of the anti-Hebbian learning rule (9), the spatial model is in fact equivalent to the isopotential model in terms of its learning behaviour. Figure 4 shows the electrical dynamics of the spatial model in the case of afferent input at a synapse located distally. As the inset trace shows, the time taken for the action potential to travel along the dendrite to the distal synapse is very short compared to the time course of the action potential itself.

Further evidence in favour of the equivalence of the spatial and isopotential models is shown in Fig. 7. According to anti-Hebbian learning rules, the relative timing of pre- and postsynaptic activity is the key factor influencing synaptic plasticity: where afferent and parallel fibre inputs are coincident, the synaptic weight decreases; where afferent and parallel fibre inputs are not coincident, it increases. Both the spatial and isopotential models displayed this characteristic behaviour. Furthermore, by calculating the change in synaptic weight brought about by afferent input of varying phase delays, we obtain a curve which characterises each model's dependence on timing (Fig. 7). The two curves are remarkably similar to each other—if there were phase delays introduced by dendritic conduction in the spatial model, we would expect there to be a noticeable difference between them. We therefore concluded that the spatial effects should be expected to have only a marginal impact on the behaviour of the model, and further simulations were performed with only the isopotential model. The isopotential model has the added advantage of being simpler and quicker to solve numerically.

The synaptic input current was modelled as a continuously varying current, generally of sufficient amplitude that the current from a single synapse can trigger a postsynaptic action potential. In reality, however, a synapse delivers current in a more discrete manner: each presynaptic spike causes a postsynaptic conductance change which results in a single postsynaptic potential (PSP) whose amplitude is much lower than the action potential threshold. The spatial and temporal integration of many PSPs results in a combined current sufficient to trigger action potentials at the soma. Indeed, phenomenological modelling studies such as Nelson and Paulin (1995) typically model a postsynaptic current as the sum of a large number of PSP waveforms. Since no particular physiological mechanism is proposed for synaptic input currents, and we are concerned only with the long-term qualitative behaviour of the model, the waveform of individual PSPs is not considered important. Instead, the simplifying assumption was made that synaptic input current can be reasonably approximated by a continuous waveform representing the incoming sensory signal. That being said, it would be relatively simple to introduce PSPs into this model by forming the synaptic currents from a sum of PSP waveforms.

As already discussed, the mechanism for LTD and LTP requires action potentials to be initiated at the soma and then backpropagated along the dendritic tree, and while the exact mechanism remains unclear, good experimental evidence for such backpropagation exists (Linden 1999).


In the spatial model, this behaviour is induced by making the ionic conductances ḡNa, ḡK and ḡL ten times lower in the dendrites than in the soma, which is in accordance with experimental evidence indicating lower ion channel densities in the dendrites than in the soma (Migliore and Shepherd 2002). We found that the behaviour of the model was not greatly sensitive to this value as long as the dendritic conductances are sufficiently low to prevent dendritic action potential initiation.

Experimental studies indicate that two mechanisms operate in tandem in order to suppress reafferent signals (Montgomery 1984; Montgomery and Bodznick 1999). First, a common-mode suppression mechanism filters out the component of the signal which is uniform across all afferents. Second, any predictable component of the residual signal is filtered out by the adaptive mechanism discussed in this paper. However, as suggested by Nelson and Paulin (1995), since both mechanisms are likely to be mediated by anti-Hebbian plasticity at AEN synapses, the two mechanisms are unified such that the 'surround' common-mode inhibitory signal is considered to be a class of 'parallel fibre' input. Note that we are not suggesting that the same neural pathway is in operation for both mechanisms, but that at the level of abstraction of our model the two do not need to be distinguished. For this reason, a distinct common-mode suppression mechanism is not included in the model.

In this paper we assumed that the breathing-generated reafference is an unchanging, periodic function, simulating the situation where the animal is breathing in a regular, even manner. However, in the situation where the reafference is not periodic, or the period changes—for example, the shark begins swimming and hence breathes more quickly—the reafference is still predictable and therefore we should expect the adaptive filtering mechanism to be capable of its suppression. If we consider an example consisting of two distinct states (say, slow breathing and fast breathing) we might speculate that distinct sets of parallel fibres are responsible for suppressing reafference in each state, and are silent when the animal is in the other state. Clearly, it would be undesirable for the parallel fibres to continue to learn while in the silent state.

As shown in Fig. 1, parallel fibre input to the AEN is of two types. In addition to parallel fibres projecting directly onto the AEN via excitatory synapses, there is a secondary pathway delivering parallel fibre signals via inhibitory stellate cells, and it is from this matrix of positive and negative signals that the cancellation signal is assembled. The model, however, does not distinguish between these two types of input. Instead, positive and negative values for wi simulate excitatory and inhibitory currents respectively, and the learning rule (Eq. (9)) is free to determine which polarity each current delivers. The simulations showed that, indeed, the learning rule does naturally result in both positive and negative weights, despite the input current being wholly excitatory (Fig. 8).

Sejnowski (1977) coined the term 'covariance rule' to describe a learning rule which adapts synaptic strength according to the time-averaged correlation of synaptic input and neuron activity over a discrete time-step. In contrast, the learning rule proposed here (Eq. (9)) relates the instantaneous rate of change in synaptic strength to the instantaneous values of the membrane potential and synaptic input current, a formulation made possible by our model's capability to simulate action potentials directly, rather than relying on average spike rates.
It should be noted, however, that over any discrete time-step, Eq. (9) is exactly equivalent to Sejnowski’s (1977) rule.


Our model is based on an adaptive filter mechanism proposed by Montgomery and Bodznick (1994), which in turn was inspired by analogy to models of the mammalian cerebellum. Typical cerebellar adaptive filter models hypothesise the existence of a 'teaching signal', assembled outside the cerebellum and transmitted via climbing fibres (Porrill et al. 2012). The DON, on the other hand, lacks climbing fibres, and in Montgomery and Bodznick's (1994) mechanism the 'error signal' is sampled directly from backpropagating action potentials.

It is known that, rather than projecting directly onto the dendrite, incoming parallel fibres connect to synapses situated on dendritic spines protruding from the dendrite (Napper and Harvey 1988). We might speculate that one function of these spines is to functionally decouple adjacent parallel fibres, preventing the incoming current from one parallel fibre 'contaminating' the membrane potential measured at closely situated parallel fibre synapses.

Note that, unlike previous models, the learning rule has no explicit dependence on either pre- or postsynaptic spiking. It is only dependent on the instantaneous values of the presynaptic input current and postsynaptic membrane potential, which allows learning to take place across a wide range of input currents and membrane potentials, including in the absence of spiking. This is illustrated in Fig. 11(b), which shows that the majority of the learning takes place after spiking has ceased. This flexibility opens up the possibility of incorporating more complicated learning rules implementing a wide variety of biophysical mechanisms.

Given a sufficiently broad spectrum of parallel fibre inputs, it seems likely that there exists a set of synaptic weights which will cancel an arbitrarily chosen afferent signal. Indeed, the model was able to build a cancellation signal that converges closely to the afferent signal, as illustrated in Fig. 9. It is interesting to note that it is not necessary for action potentials to be present for the learning to take place—as shown in Fig. 8, the synaptic weights continue to adjust (albeit at a slower pace) long after spiking has ceased. The model is able to predict this behaviour because it calculates the depolarisation of the membrane potential across its full range, including subthreshold values.

The original anti-Hebbian learning rule (Eq. (9)) defines postsynaptic activity as the instantaneous elevation of the membrane potential above its resting level, which has the effect of driving the average membrane potential towards the resting potential. One advantage of using a biophysical model is the possibility of testing other types of learning rule. For example, we also considered an alternative version of the rule (Eq. (19)) in which the resting potential Vrest is replaced by a value V0 which may be higher than the resting potential. In order for this to be effective, it was necessary to include an additional constant input current which artificially elevates the resting potential to V0. The value of V0 determines the average membrane potential, so increasing V0 lifts the membrane potential towards the action potential threshold. This provides a natural upper limit for V0, since above a certain level the neuron fires action potentials continuously, even without stimulation. We expect that increasing V0 results in the neuron becoming more sensitive to external stimuli.

The rate of learning showed a negative correlation with V0 (Fig. 11(a)).


Although at first this might seem counterintuitive, it might be explained by the fact that a higher value of V0 results in the membrane potential spending a greater proportion of each action potential below the V0 threshold. We chose to characterise the 'learning rate' by the time taken for the synaptic weight to reach its equilibrium. If we had chosen another measure—perhaps the time taken to suppress spiking—we might well have discovered a different result. Figure 11(b) shows that the proportion of learning taking place while the neuron is spiking increases as we increase V0. For all values of V0, most of the learning takes place after spiking has ceased.

The elasmobranch electrosensory system is extremely sensitive to tiny variations in the electric field as measured at the surface of the skin, and the DON is able to pick out external stimuli much smaller in magnitude than the electric field generated by the animal's own ventilatory motion. However, it was found that, as would be expected from a model based on the Hodgkin–Huxley equations, the range of amplitudes over which the model exhibits spiking is relatively narrow, and this constrains the range of amplitudes of the input currents. For example, as shown in Fig. 10, in the simulations the amplitude of the ventilatory (reafferent) signal is similar to the amplitude of the external stimulus. This behaviour is typical of conductance-based neuron models such as Hodgkin–Huxley, and suggests that the mechanism underlying the extreme sensitivity of the elasmobranch electrosensory system may lie elsewhere in the neural pathway.

The results of the simulations presented here demonstrate that the biophysical model is able to successfully perform adaptive filtering, in agreement with previous experimental and modelling work. For example, Fig. 10 demonstrates several features described in experimental studies by Montgomery and Bodznick (1999) and modelling studies by Nelson and Paulin (1995): suppression of a periodic ventilatory signal while remaining responsive to an external stimulus; suppression of an external stimulus time-linked to ventilation; and the appearance of a 'negative image' after the stimulus is turned off. Furthermore, our results confirm the assumptions made by Nelson and Paulin (1995) and Roberts and Bell (2000) in ignoring the biophysical details of membrane currents.

However, a biophysical model, which describes the electrical dynamics of the neuron according to physical laws, opens up new possibilities for investigating the capabilities of the adaptive filter mechanism. For example, there is an opportunity to understand how the detailed structure of action potentials affects the learning mechanism. Perhaps changing the parameters of the Hodgkin–Huxley equations, or introducing new ionic currents, might affect its learning characteristics.

This paper focussed on the suppression of simple, periodic afferent signals by sinusoidal parallel fibre inputs. However, it is far from clear that such a situation is likely to exist in vivo. The period of ventilatory motion might increase or decrease over time, or the movement of the animal's fins might introduce a non-periodic yet predictable reafference. By introducing a broader variety of parallel fibre and afferent inputs to the model, it might be possible to ascertain whether it is capable of suppressing these kinds of inputs.

The learning rule (Eq. (9)) was introduced without regard to the physiological mechanism that underlies it.
By developing a learning rule based on a biophysical mechanism for synaptic plasticity, it would be possible to investigate the effects of such mechanisms on the adaptive filter.


This spatial model was based on a very simple linear (unbranched) model of a neuron, which was found to be equivalent to a simple isopotential model. On the other hand, a typical neuron has a complex geometry, with synapses supplying current to a large number of branching dendrites. A future study might investigate whether a branching structure (perhaps based on a multi-compartment model) makes a difference to the model's behaviour.

Since its introduction by Fujita (1982), the adaptive filter model has found widespread use in modelling the neural circuitry of the cerebellum, and there is much experimental evidence that such models provide a good description relating brain structure to function. However, a number of alternative models, such as the Bayesian state estimator model (Paulin 2005), have been proposed in order to describe the cerebellum's function. It is hoped that further computational modelling, along the lines of the simulation presented in this paper, might help to discriminate between these competing views.

Appendix: Hodgkin–Huxley Equations

    C_m \frac{dV}{dt} = -\bar{g}_K n^4 (V - V_K) - \bar{g}_{Na} m^3 h (V - V_{Na}) - \bar{g}_L (V - V_L) + I_{app}(t),      (22)

    \frac{dm}{dt} = \alpha_m (1 - m) - \beta_m m,      (23)

    \frac{dn}{dt} = \alpha_n (1 - n) - \beta_n n,      (24)

    \frac{dh}{dt} = \alpha_h (1 - h) - \beta_h h,      (25)

where

    \alpha_m = 0.1 \frac{V + 40}{1 - \exp(-(V + 40)/10)},      (26)

    \beta_m = 4 \exp\left( \frac{-(V + 65)}{18} \right),      (27)

    \alpha_h = 0.07 \exp\left( \frac{-(V + 65)}{20} \right),      (28)

    \beta_h = \frac{1}{1 + \exp(-(V + 35)/10)},      (29)

    \alpha_n = 0.01 \frac{V + 55}{1 - \exp(-(V + 55)/10)},      (30)

    \beta_n = 0.125 \exp\left( \frac{-(V + 65)}{80} \right).      (31)

Table 2 Parameters for the Hodgkin–Huxley equations (Keener and Sneyd 1998)

Parameter   Value
VK          −77 mV
VNa         50 mV
VL          −54.4 mV
ḡK          36 mS/cm²
ḡNa         120 mS/cm²
ḡL          0.3 mS/cm²

References

Bell, C. C., Caputi, A., Grant, K., & Serrier, J. (1993). Storage of a sensory pattern by anti-Hebbian synaptic plasticity in an electric fish. Proc. Natl. Acad. Sci. USA, 90(10), 4650–4654.
Bell, C. C., Han, V., & Sawtell, N. B. (2008). Cerebellum-like structures and their implications for cerebellar function. Annu. Rev. Neurosci., 31, 1–24.
Bodznick, D. (1993). The specificity of an adaptive filter that suppresses unwanted reafference in electrosensory neurons of the skate medulla. Biol. Bull., 185(2), 312–314.
Bodznick, D., Montgomery, J. C., & Bradley, D. J. (1992). Suppression of common mode signals within the electrosensory system of the little skate (Raja erinacea). J. Exp. Biol., 171(1), 107–125.
Bodznick, D., Montgomery, J. C., & Carey, M. (1999). Adaptive mechanisms in the elasmobranch hindbrain. J. Exp. Biol., 202(10), 1357–1364.
Fujita, M. (1982). Adaptive filter model of the cerebellum. Biol. Cybern., 45(3), 195–206.
Keener, J., & Sneyd, J. (1998). Mathematical physiology, I: cellular physiology (Vol. 1). Berlin: Springer.
Linden, D. J. (1999). The return of the spike: postsynaptic action potentials and the induction of LTP and LTD. Neuron, 22, 661–666.
Migliore, M., & Shepherd, G. M. (2002). Emerging rules for the distributions of active dendritic conductances. Nat. Rev. Neurosci., 3(5), 362–370.
Montgomery, J. C. (1984). Noise cancellation in the electrosensory system of the thornback ray; common mode rejection of input produced by the animal's own ventilatory movement. J. Comp. Physiol., A Sens. Neural Behav. Physiol., 155(1), 103–111.
Montgomery, J. C., & Bodznick, D. (1994). An adaptive filter that cancels self-induced noise in the electrosensory and lateral line mechanosensory systems of fish. Neurosci. Lett., 174(2), 145–148.
Montgomery, J. C., & Bodznick, D. (1999). Signals and noise in the elasmobranch electrosensory system. J. Exp. Biol., 202(10), 1349–1355.
Napper, R. M. A., & Harvey, R. J. (1988). Number of parallel fiber synapses on an individual Purkinje cell in the cerebellum of the rat. J. Comp. Neurol., 274(2), 168–177.
Nelson, M. E. (2011). Electrophysiological models of neural processing. Wiley Interdiscip. Rev., Syst. Biol. Med., 3(1), 74–92.
Nelson, M. E., & Paulin, M. G. (1995). Neural simulations of adaptive reafference suppression in the elasmobranch electrosensory system. J. Comp. Physiol., A Sens. Neural Behav. Physiol., 177(6), 723–736.
Paulin, M. G. (2005). Evolution of the cerebellum as a neuronal machine for Bayesian state estimation. J. Neural Eng., 2(3), S219.
Porrill, J., Dean, P., & Anderson, S. R. (2012). Adaptive filters and internal models: multilevel description of cerebellar function. Neural Netw. doi:10.1016/j.neunet.2012.12.005. http://www.sciencedirect.com/science/article/pii/S0893608012003206.
Roberts, P. D., & Bell, C. C. (2000). Computational consequences of temporally asymmetric learning rules, II. Sensory image cancellation. J. Comput. Neurosci., 9(1), 67–83.
Sejnowski, T. J. (1977). Storing covariance with nonlinearly interacting neurons. J. Math. Biol., 4(4), 303–321.
von Holst, E., & Mittelstaedt, H. (1950). Das Reafferenzprinzip: Wechselwirkungen zwischen Zentralnervensystem und Peripherie. Naturwissenschaften, 37, 464–476.
