PHYSIOLOGICAL REVIEWS Vol. 72, No. 2, April 1992
Printed in U.S.A.

White-Noise Analysis in Neurophysiology

HIROKO M. SAKAI

Department of Ophthalmology, New York University Medical Center, New York, New York

I. Introduction
II. Theory
    A. Computation of kernels
III. General Procedures for Application to Neurophysiology
    A. Spike trains
IV. White-Noise Analysis in Studies of Vision
    A. Neuron network of vertebrate retina
    B. Linear modulation response
    C. Generation of nonlinearities in amacrine cells
    D. Signals encoded into spike trains
    E. Spatiotemporal receptive field
V. Application to Other Systems
VI. Analysis of Cascade Structures
VII. Conclusions

I. INTRODUCTION

Physiology is the study of input-output relationships. White-noise analysis applied to neurophysiology is a specific case in which the input is Gaussian white noise and the input-output relationship is described by a series of Wiener kernels. Although the analysis is known by the name of Norbert Wiener (68), the idea of functional identification dates back to Fréchet (11) and Volterra (64). Wiener's contribution was to combine the idea of functional identification with stimulation by a Gaussian white-noise signal, one of many stochastic processes (68). Gaussian white noise is a formal derivative of fundamental chaos, or Brownian motion, discovered by the botanist Robert Brown in 1827. It was Wiener (67) who gave a mathematical foundation to the "chaos" and proposed the use of a Gaussian white-noise signal to test a system because it has 1) a flat spectrum, 2) independent values at each moment, and 3) a Gaussian or normal distribution. A white-noise signal is a continuous analogue of a set of independent and identically distributed random variables with maximum entropy. White-noise analysis, therefore, has a deeply rooted mathematical background, and some notions in the theory are still the subjects of current mathematical studies (17). The first application of white-noise analysis in physiology was made by Sandberg and Stark (56) in a study of human pupillary reflexes. In the early 1970s, Marmarelis and Naka (28) began analyzing neuron networks of the vertebrate retina by developing the white-noise technique. Since then, white-noise analysis has been used increasingly in studies of various physiological systems and problems. In the first part of this review, therefore, a brief survey of the basic ideas of the theory is presented. In the second part, examples from experimental applications of white-noise analysis are discussed so that the advantages as well as the limitations of the analysis can be appreciated. White-noise analysis, despite its awe-inspiring mathematical background, is merely a tool that is used to probe a system. As with any tool, its use does not automatically guarantee successful results; it is our skill in using the tool that gives us deeper insights into the system. Before going further, I mention that many fine results, theoretical as well as experimental, are not cited here because of the limited space available as well as my own limited experience.

0031-9333/92 $2.00 Copyright © 1992 the American Physiological Society

II. THEORY

White-noise analysis is one of many system identification methods. The objective of system identification in both engineering and biology is to determine the system functional F, where y(t) = F[x(t)] and x(t) and y(t) are the stimulus and response at time t, respectively (Fig. 1). By exciting a system with an appropriate input and observing the resulting response, we obtain the system functional F. This is a "black box" approach; a system is defined by its transfer characteristics without specifying its internal mechanism, i.e., a system is replaced by a filter with (exactly) the same transfer characteristics. As in any system identification method, white-noise analysis requires that two essential conditions be met. First, a system under study must be time invariant, i.e., its characteristics should not change appreciably over reasonably long periods of time, and the system

Copyright © 1992 the American Physiological Society. All rights reserved.

FIG. 1. A system is a black box with well-defined input [x(t)] and output [y(t)].

response should remain the same to identical stimuli. The stimulus x(t) and the response y(t) are then functions of time t. Second, the system must obey the principle of causality, i.e., the response cannot precede the stimulus that causes it. The main theoretical problem in nonlinear system identification is to find a proper mathematical representation of the system. It was Fréchet (11) and Volterra (64, 65) who originated the idea of the functional identification of a finite-memory system by a series of functionals of integer order whose convergence was uniform in all compact sets of continuous functions. (A functional is a function whose argument is a function and whose value is a number.) However, there was no practical way to identify a system by Volterra's original representation. A convenient scheme for measuring the system kernels could result from making the terms of the Volterra series orthogonal to each other for a certain specific stimulus x(t). This is exactly what Wiener accomplished. He showed that if the input was a Gaussian white-noise signal, then a simple change in the Volterra series resulted in a series whose terms were mutually orthogonal. If the system is linear, the principle of superposition must hold; if a stimulus x_a produces a response y_a and a stimulus x_b produces a response y_b, then the response y_c produced by x_c, the sum of x_a and x_b, is

y_c(t) = y_a(t) + y_b(t)    (1)

The relationship can be described more generally by the well-known convolution integral, which is based on the principle of superposition for linear systems

y(t) = ∫₀^∞ h(τ)x(t − τ)dτ    (2)
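In discrete time, the convolution of Equation 2 becomes a finite sum over the sampled impulse response. The sketch below is a minimal illustration (the exponential impulse response and the random stimulus are assumptions for the example, not taken from the review); it also checks the causality property, namely that a response cannot precede an impulse input.

```python
import numpy as np

dt = 0.001                       # sampling interval (s)
tau = np.arange(0, 0.2, dt)      # filter memory (s)
h = np.exp(-tau / 0.02)          # hypothetical impulse response h(tau)

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)    # arbitrary stimulus x(t)

# Discretized Eq. 2: y(t) = sum_k h(k*dt) * x(t - k*dt) * dt
y = np.convolve(x, h)[:len(x)] * dt

# Causality check: an impulse at sample 50 produces no response before it.
x_imp = np.zeros(100)
x_imp[50] = 1.0
y_imp = np.convolve(x_imp, h)[:100] * dt
assert np.all(y_imp[:50] == 0) and y_imp[50] > 0
```

Knowing h(τ), the same one-line convolution predicts the linear system's response to any stimulus, which is the practical content of Equation 2.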

where h(τ) is the impulse response, i.e., the system response to a very sharp pulse of unit area, or a delta function. If the system obeys the principle of causality, then h(τ) = 0 for τ < 0. Note that if we know h(τ) for a particular linear system, then we can predict its response to any kind of stimulus by using Equation 2. If a system is nonlinear, then the principle of superposition does not hold. If a stimulus x_a produces a response y_a and a stimulus x_b produces a response y_b, then the response y_c produced by the sum of x_a and x_b is

y_c(t) ≠ y_a(t) + y_b(t)    (3)

Volterra (65) showed that for a time-invariant nonlinear system with a finite memory, the relationship between x(t) and y(t) can be described as

y(t) = k₀ + ∫₀^∞ k₁(τ)x(t − τ)dτ + ∫₀^∞ ∫₀^∞ k₂(τ₁, τ₂)x(t − τ₁)x(t − τ₂)dτ₁dτ₂ + ∫₀^∞ ∫₀^∞ ∫₀^∞ k₃(τ₁, τ₂, τ₃)x(t − τ₁)x(t − τ₂)x(t − τ₃)dτ₁dτ₂dτ₃ + ⋯    (4)

where k₀, k₁(τ₁), k₂(τ₁, τ₂), k₃(τ₁, τ₂, τ₃), ⋯ are the zero-, first-,

second-, third-, ⋯ order kernels. The first-order term is exactly the same as the convolution integral for a linear system (Eq. 2). In fact, if k₂ = k₃ = k₄ = ⋯ = 0, then the system is linear with impulse response h(τ) = k₁(τ). However, the problem with the Volterra series is that the nth-order interaction cannot be described exclusively by the nth-order Volterra kernel but is also described by all kernels of an order higher than nth. From the Volterra series (Eq. 4), the impulse response of a nonlinear system is, in general

y(t) = k₀ + k₁(t) + k₂(t, t) + k₃(t, t, t) + ⋯    (5)

That is, the impulse response of a nonlinear system includes the main diagonal points of all the kernels. Because of the participation of all the kernels in the system response to a set of impulses, the estimation of the Volterra kernels is not practical. It is much more desirable if each kernel can be measured independently. This can be accomplished if each term in the series becomes orthogonal, for a certain stimulus, to all other terms, as is achieved by Wiener's rearrangement of the Volterra series. Wiener constructed a hierarchy of functionals of increasing order that are orthogonal to each other with respect to a Gaussian white-noise input and whose sum characterizes the system. When x(t) is a Gaussian white-noise signal of zero mean, the functional of zero order is h₀. The functional of first order is

∫₀^∞ h₁(τ)x(t − τ)dτ + K₁    (6)

where K₁ is a constant. Wiener then made this functional orthogonal to the zero-order functional, as occurs if K₁ = 0 for x(t), a Gaussian variable of zero mean, since

E[h₀ ∫₀^∞ h₁(τ)x(t − τ)dτ + h₀K₁] = 0    (7)
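This orthogonality can be checked numerically: with a zero-mean Gaussian input and K₁ = 0, the time average of the product of the zero-order functional and the first-order functional vanishes, even though the first-order functional itself fluctuates strongly. A minimal sketch (the kernel h₁ and the constant h₀ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)      # zero-mean Gaussian white noise
h1 = np.exp(-np.arange(50) / 10.0)    # arbitrary first-order kernel
h0 = 2.5                              # arbitrary zero-order functional

G1 = np.convolve(x, h1)[:len(x)]      # first-order functional with K1 = 0

# The time average of h0 * G1 approaches zero (Eq. 7),
# while the product itself has a large spread.
avg = np.mean(h0 * G1)
assert abs(avg) < 0.3
assert np.std(h0 * G1) > 1.0
```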

where E[·] denotes the infinite time average (taken over t). Wiener then made the functional of the second order orthogonal to all functionals of zero order and first order that are homogeneous with respect to x(t), the functional of the third order orthogonal to all homogeneous functionals of the second, first, and zero orders, and so on. Finally, Wiener showed that the relationship between the input x(t) and the output y(t) of a system S can be written as

y(t) = Σ_{m=0}^∞ G_m[h_m; x(t)]    (8)

where the G_m are orthogonal functionals if x(t) is a Gaussian white-noise signal with zero mean and h_m is the mth-order Wiener kernel. Each h_m(τ₁, τ₂, ⋯, τ_m) is a symmetrical function with respect to its arguments. The first four Wiener functionals are

G₀[h₀; x(t)] = h₀    (9)

G₁[h₁; x(t)] = ∫₀^∞ h₁(τ)x(t − τ)dτ    (10)

G₂[h₂; x(t)] = ∫₀^∞ ∫₀^∞ h₂(τ₁, τ₂)x(t − τ₁)x(t − τ₂)dτ₁dτ₂ − P ∫₀^∞ h₂(τ₁, τ₁)dτ₁    (11)

G₃[h₃; x(t)] = ∫₀^∞ ∫₀^∞ ∫₀^∞ h₃(τ₁, τ₂, τ₃)x(t − τ₁)x(t − τ₂)x(t − τ₃)dτ₁dτ₂dτ₃ − 3P ∫₀^∞ ∫₀^∞ h₃(τ₁, τ₂, τ₂)x(t − τ₁)dτ₁dτ₂    (12)

where P is the power density spectrum of the white noise. The set of Wiener kernels h_i characterizes the system, i.e., knowledge of these kernels allows the prediction of the system response to any arbitrary given stimulus. The knowledge of the power level of the white noise is also required, since the Wiener functionals of an order higher than the first depend explicitly on the input power level. Furthermore, the Wiener kernels themselves depend on that power level. This is because the Wiener series is so constructed as to possess the special feature of orthogonality with respect to the Gaussian white noise, and the power level P determines the range of validity of this orthogonality in a functional space, in much the same way that the domain of the independent variable determines the range of validity of the orthogonality of two functions. It should be noted that if the system kernels are zero for an order higher than second order, then the Wiener kernels h₁(τ) and h₂(τ₁, τ₂) are equal to the Volterra kernels k₁(τ) and k₂(τ₁, τ₂). There is a unique analytical relationship between the Volterra and the Wiener kernels of a system because both series are constructed from linear expressions of the same mathematical structures, i.e., the multiple convolution integrals, and they are both expansions of the same function, i.e., the system response y(t). If the Wiener series is truncated after the nth-order term, then the resulting approximation in terms of the kernels h₀, h₁, ⋯, h_n is the best approximation in the mean square error (MSE) sense. That is, the approximation

y_n(t) = Σ_{k=0}^{n} G_k[h_k; x(t)]    (13)

minimizes the MSE between the measured system response y(t) and y_n(t) for a given number of kernels h₀, h₁, ⋯, h_n, compared with any other set of n kernels. The Wiener series gives, because of orthogonality, the best system approximation in the MSE sense at each stage of truncation. For example, {h₀, h₁} is the best linear approximation, {h₀, h₁, h₂} is the best second-order approximation, and so forth. Volterra series, which are not orthogonal with respect to a Gaussian white noise, do not give the best (in the least-MSE sense) system approximation for a Gaussian white-noise stimulus. This is another advantage of measuring the Wiener kernels rather than the Volterra kernels in real systems applications.
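Once kernels are in hand, the truncated series of Equation 13 can be evaluated directly. The sketch below builds the zero-, first-, and second-order functionals (Eqs. 9-11) for assumed kernels; the kernel shapes, the sampling step, and the record length are illustrative assumptions. Note the −P·∫h₂(τ, τ)dτ correction in the second-order functional, which keeps it zero mean and orthogonal to the lower orders.

```python
import numpy as np

dt = 0.01
P = 1.0                                      # power density of the white noise
rng = np.random.default_rng(2)
x = rng.standard_normal(2000) / np.sqrt(dt)  # discrete GWN with density P = 1

lags = np.arange(20)
h0 = 0.1                                     # assumed kernels (illustration only)
h1 = np.exp(-lags / 5.0)
h2 = 0.05 * np.outer(h1, h1)                 # symmetric 2nd-order kernel

G0 = h0
G1 = np.convolve(x, h1)[:len(x)] * dt

G2 = np.empty(len(x))
for t in range(len(x)):
    xs = x[t - lags]                         # x(t - tau); wraps at the start (edge effect)
    # Eq. 11: double convolution minus P times the kernel "trace"
    G2[t] = xs @ h2 @ xs * dt**2 - P * np.trace(h2) * dt

y2 = G0 + G1 + G2                            # best 2nd-order model in the MSE sense (Eq. 13)
assert abs(G2.mean()) < 0.01                 # the trace term removes the mean
```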

A. Computation of Kernels

Ever since Wiener originally proposed the idea of identifying a system by a set of (Wiener) kernels, their computation has been one of the most frequently discussed problems in this field. Wiener (68) expanded the kernels in terms of an orthogonal set of functions. He chose the Laguerre functions as the orthogonal set because there was an electronic analogue capable of behaving as a Laguerre function filter. Lee and Schetzen (23) developed cross-correlation methods whereby the Laguerre filters proposed by Wiener were replaced by time-delay filters. This method, suited to a modern digital computer, appears to be a general, straightforward, and powerful approach. However, because the kernels are multidimensional and therefore require multidimensional cross correlations for their elucidation, computation was time consuming, particularly in the 1970s. In addition, cross correlation could result in inaccurate kernel estimates and could also generate artifacts (41, 42), partly because of the difficulty of experimentally realizing a white-noise input over an infinite time interval. As an alternative means, French and Butz (12) showed that correlation could be achieved via the frequency domain. Victor and Shapley (63) used the "sum of sinusoids" to obtain kernels. Korenberg (18) developed a fast orthogonal search method that estimates kernels without the creation of orthogonal functions at any stage of the process. For further details of the theory, several textbooks are available (4, 17, 26, 45, 58).
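The Lee-Schetzen cross-correlation formula gives the first-order kernel as h₁(τ) = (1/P)·⟨y(t)x(t − τ)⟩, i.e., a time average of the product of the response with the delayed input. A sketch that recovers a known kernel from a simulated linear system (the kernel shape, sampling step, and record length are assumptions for the example):

```python
import numpy as np

dt = 0.005
N = 200_000
P = 1.0
rng = np.random.default_rng(3)
x = rng.standard_normal(N) / np.sqrt(dt)     # approximates GWN of power density P

lags = np.arange(40)
h_true = np.exp(-lags * dt / 0.05) * np.sin(lags * dt / 0.02)  # hypothetical kernel
y = np.convolve(x, h_true)[:N] * dt          # simulated linear response

# Lee-Schetzen estimate: h1(tau) = (1/P) * <y(t) x(t - tau)>
h_est = np.array([np.mean(y[k:] * x[:N - k]) for k in lags]) / P

rel_err = np.abs(h_est - h_true).max() / np.abs(h_true).max()
assert rel_err < 0.1
```

The estimate converges only as the record length grows, which is why kernel computation was so expensive before fast computers; second-order kernels need the analogous two-dimensional average ⟨y(t)x(t − τ₁)x(t − τ₂)⟩ over all lag pairs.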

III. GENERAL PROCEDURES FOR APPLICATION TO NEUROPHYSIOLOGY

A true Gaussian white noise with an infinite frequency bandwidth is not physically realizable. In the


FIG. 2. In actual experiments, band-limited Gaussian white noise (GWN) must be employed. In this example, the input is a GWN-modulated light presented to the retina, and the output is a response recorded from an N-amacrine cell. First-order cross correlation between input and output estimates a 1st-order Wiener kernel that is plotted two-dimensionally by time (ms) and kernel amplitude (mV·μW⁻¹·cm²·s⁻¹). Second-order cross correlation estimates a 2nd-order Wiener kernel, which is a three-dimensional representation with two time axes, τ₁ and τ₂ (x- and y-axes; ms), and amplitude [z-axis; mV·(μW/cm²)⁻²·s⁻²]. In this example, the 2nd-order kernel is plotted with solid lines indicating positive peaks and dashed lines indicating negative valleys.

real world, we are obliged to use a quasi- or band-limited Gaussian white noise whose cutoff frequency is chosen to cover the bandwidth of the system under study, i.e., it is necessary to choose the bandwidth that optimally excites the system to be identified. In the study of vertebrate visual systems, for example, a relatively low cutoff frequency of up to 50 Hz is sufficient, whereas in the auditory system some species are able to respond to extremely high-frequency signals of 100 kHz. It should be noted, however, that Gaussian white noise is not the only signal with respect to which the functional series can be orthogonalized. Any other signal that possesses suitable autocorrelation properties can be used. In fact, various forms of pseudorandom signals have been used in many laboratories (14, 33). Pseudorandom signals, a special stairlike form that remains constant within small finite time intervals and switches abruptly at time instants that are integral multiples of a fundamental time interval Δt, are deterministic periodic signals with autocorrelation properties close to quasi-white ones and of very low redundancy. The kernels estimated by the use of pseudorandom signals are, however, slightly different from the Wiener or Volterra kernels. Another critical factor to be considered is the sampling rate for digitizing signals. This is related to the so-called aliasing problem. If the sampling interval is Δt, the highest frequency that can be described is 1/(2Δt) cycles/s. Any frequencies present in the data that are above 1/(2Δt) cycles/s are not described accurately but are folded into the lower frequency range.


Thus the estimates of the low frequencies are aliased by the presence of these frequencies. This problem can be avoided by choosing a sampling rate at least twice the highest frequency contained in the data signals. The original white-noise analysis anticipated a system with analog input and analog output. One such example is shown in Figure 2, where the input is a light stimulus modulated by a quasi- or band-limited Gaussian white-noise signal (GWN) and the output is the analog modulation response from an N-amacrine cell, a type of amacrine cell in the vertebrate retina. First-order cross correlation between the input white noise and the output response estimates a first-order Wiener kernel, which is plotted on two axes, time (abscissa) and amplitude (ordinate). In this example, the first-order kernel is an initial negative waveform followed by a damped oscillation. A second-order Wiener kernel estimated by second-order cross correlation is plotted, in this case, as a contour map. As discussed in the previous section, the second-order kernel denotes the nonlinear interaction between two portions of the stimulus given in the past; it is a three-dimensional representation with two time axes, τ₁ and τ₂ (x- and y-axes), plus amplitude (z-axis). The solid lines of the contour indicate positive (depolarizing) peaks, whereas the dotted lines indicate negative (hyperpolarizing) valleys. The configuration of the second-order kernel is an initial peak followed by a valley that is elongated perpendicularly to the diagonal. Alternatively, a second-order kernel can be described in a perspective view (see Fig. 1 in Ref. 28 or Fig. 13.6-6 in Ref. 58). Similarly, the third-order cross correlation estimates a third-order Wiener kernel, which, being a four-dimensional representation with three time axes, cannot be displayed graphically on a two-dimensional plane.
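One convenient way to synthesize such a band-limited quasi-white stimulus (a generic software recipe, offered as an assumption, not the hardware method used in the experiments reviewed) is to zero the Fourier components of sampled white noise above the desired cutoff. The sampling rate must still exceed twice the cutoff to satisfy the Nyquist condition discussed above.

```python
import numpy as np

fs = 500.0          # sampling rate (Hz); must exceed 2 x cutoff (Nyquist)
cutoff = 50.0       # cutoff frequency (Hz), as used for the retinal stimuli
N = 4096

rng = np.random.default_rng(4)
white = rng.standard_normal(N)

# Zero all frequency components above the cutoff.
spec = np.fft.rfft(white)
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
spec[freqs > cutoff] = 0.0
gwn = np.fft.irfft(spec, n=N)   # still Gaussian: a linear operation on Gaussians

# The power spectrum is now approximately rectangular up to 50 Hz.
out = np.abs(np.fft.rfft(gwn))
assert out[freqs > cutoff + 1.0].max() < 1e-8 * out.max()
```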
The power spectrum of the original GWN input is approximately rectangular in shape with a cutoff frequency of 50 Hz, whereas that of the responses recorded from an N-amacrine cell has more of a bandpass character with a cutoff frequency of 20 Hz (Fig. 3). (The cutoff frequency of the GWN was chosen to cover the cutoff frequency of the cell response.) If the bandwidth of the white noise greatly exceeds the system bandwidth at the high-frequency end, then statistical estimation errors increase in the computation of the kernels.

FIG. 3. Power spectrum of the band-limited GWN shown in Fig. 2 is almost flat up to 50 Hz, whereas that of the output N-cell's response has a cutoff frequency of ~20 Hz. Band-limited GWN must be designed to cover appropriately the bandwidth of the system under study. Abscissa corresponds to a logarithmic scale.

As described in section II, the first-order kernel is an impulse response of the linear part of a system and is the best (in the sense of MSE) approximation of the linear part of the impulse response of the nonlinear system. The first-order kernel from the N-amacrine cell is, therefore, the best approximation of the linear part of the cell's response evoked by a brief flash of light superimposed on a mean luminance. In actual experiments, the impulse input is approximated by the autocorrelation function of the input white noise, because the white noise used in such experiments is band limited and its autocorrelation function has a finite duration and amplitude. In linear system theory, the Fourier or Laplace transformation of the impulse response is the transfer function, plotted as gain and phase in the frequency domain and known as a Bode plot. The gain and phase of the first-order kernel (shown in Fig. 2) are plotted in Figure 4. The gain is bandpass limited with a cutoff frequency of ~20 Hz, and the phase is negative at low frequencies. Similarly, the second-, third-, and higher order kernels can be transformed into (multidimensional) frequency gain and phase curves (55). The response of a linear system to any arbitrary input, or the response of the linear part of a nonlinear system, can be predicted by convolution of the input signal with the system's first-order kernel. The accuracy of a kernel's prediction in a linear system can best be estimated by convolving the original white-noise input with the system's first-order kernel. There should be no appreciable difference between the system's response and the result of such a convolution; if the difference is small, the system may be regarded as linear.
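The Fourier transform of a measured first-order kernel yields the gain and phase of the linear part directly. A sketch with a hypothetical damped-oscillation kernel resembling the one in Figure 2 (its time constant and oscillation frequency are invented for illustration):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 0.3, dt)
# Hypothetical 1st-order kernel: a damped oscillation near 20 Hz
k1 = np.exp(-t / 0.03) * np.sin(2 * np.pi * 20.0 * t)

H = np.fft.rfft(k1) * dt                 # transfer function of the linear part
freqs = np.fft.rfftfreq(len(k1), d=dt)
gain = np.abs(H)                         # plot 20*log10(gain) vs. log f for a Bode plot
phase = np.unwrap(np.angle(H))           # unwrapped phase (radians)

f_peak = freqs[np.argmax(gain)]          # bandpass: maximal gain near 20 Hz
assert 10.0 < f_peak < 30.0
```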
FIG. 5. Light, white noise-modulated light input presented to the retina. Bar indicates dark level. Mean luminance of the white noise-modulated light was 0.2 μW/cm². Response, actual response of an N-amacrine cell evoked by the white-noise input. First-order model, response predicted by the 1st-order kernel, i.e., convolution of the original white-noise input with its 1st-order kernel provides the 1st-order model. Second-order model, response predicted by the 1st- and 2nd-order kernels. Third-order model, response predicted by the 1st-, 2nd-, and 3rd-order kernels. The predicted response comes closer to the actual response with inclusion of higher order models. [From Sakuranaga and Naka (54).]

In a nonlinear system, the convolution of the white-noise input by the first-order kernel gives the best linear approximation of the linear part of the system's response. In the example of the responses of the N-cell described, the first-order linear model predicts fairly well the low-frequency region of the actual response, but it fails to predict the higher frequency responses (Fig. 5, First-order model). Addition of the second-order nonlinear response, predicted by its second-order kernel, improves the high-frequency response (Second-order model), and further inclusion of the third-order nonlinear response predicted by its third-order kernel (Third-order model) improves the higher frequency regions. The predicted response becomes closer to the actual response with the inclusion of higher order models. It is often desirable to define, in a quantitative fashion, the discrepancy between the actual and predicted responses. One such measure is the MSE. For a first-order prediction, the MSE is obtained by computing

MSE = ∫₀^∞ {y(t) − G₁(h₁, x)}²dt    (14)

For a second-order prediction, the MSE is obtained by computing

MSE = ∫₀^∞ {y(t) − [G₁(h₁, x) + G₂(h₂, x)]}²dt    (15)

where y(t) is the actual response and G₁(·) and G₂(·) are the Wiener functionals in Eqs. 10 and 11, respectively. These MSEs are normalized by the mean square of the response and are given as percentages. Thus the measure by the MSE is the averaged discrepancy over the entire record and does not reflect a point-to-point discrepancy between the actual and predicted responses.

FIG. 4. Fourier transformation of a kernel gives the frequency gain and phase characteristics. In this example, the 1st-order kernel shown in Fig. 2 was transformed by the fast Fourier transform method developed by Cooley and Tukey (3a). The gain curve is maximal at ~20 Hz with a differentiating (or bandpass) characteristic, as seen from the waveform of the kernel. Abscissa corresponds to a logarithmic scale.
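A minimal sketch of such a percentage MSE, normalizing by the mean square of the measured response (a standard choice, assumed here since the normalizing factor is not spelled out in the text):

```python
import numpy as np

def mse_percent(y, y_model):
    """Mean square error of a model prediction, as a percentage of the
    mean square of the measured response (normalization assumed)."""
    y = np.asarray(y, dtype=float)
    y_model = np.asarray(y_model, dtype=float)
    return 100.0 * np.mean((y - y_model) ** 2) / np.mean(y ** 2)

y = np.sin(np.linspace(0.0, 10.0, 1000))
assert mse_percent(y, y) == 0.0                               # perfect prediction: 0%
assert abs(mse_percent(y, np.zeros_like(y)) - 100.0) < 1e-9   # no prediction: 100%
```

With this convention, a first-order model that captures most of a linear cell's response yields a small percentage, and adding second- and third-order terms lowers the figure further, as in the N-cell example below.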



For the N-cell responses shown in Figure 5, the MSEs for the first-, second-, and third-order models were 58.4, 43.2, and X8%, respectively. The responses not predicted by the first- to third-order models were attributed to higher order terms, as well as to the presence of extraneous noise in the input and the output and to the deviation of the signal from true Gaussian white noise. There are several theoretical as well as practical problems associated with error estimation through the MSE. Further research on this problem should prove useful.


A. Spike Trains

Spike discharges are the most ubiquitous means of long-distance communication between cells in the nervous system. Although complex biophysical processes underlie the generation of each spike discharge, only the time of occurrence of a spike carries information from one point to another; spike discharges are a point process in terms of signal transmission. If system input and output are represented as continuous functions, then the original white-noise method can be used, but if the output is a spike train, or a point process, then the original method cannot be applied directly. There are two means of computing kernels from spike trains. One is to produce poststimulus time (PST) histograms of spike discharges by repeating many identical quasi-Gaussian white-noise stimuli to make analog outputs (1, 29). This method is possible because the timing of spike generation has jitter; otherwise the average of many runs would produce the same function as a single run. A PST histogram produced in this way is a spike density function. Wiener kernels can be estimated by a cross correlation between the Gaussian white-noise input and a smoothed PST by a standard white-noise method. Two difficulties are, however, involved in this method: 1) long recording times are required to repeat the same white-noise stimulus, and 2) the timing of the spike occurrence has to be sorted into bins with a finite temporal width, which introduces an artificial high-frequency component. The other method is a reverse correlation pioneered by de Boer and Kuyper (6), which seeks the optimal input waveform to trigger a spike discharge. The procedure consists of averaging an ensemble of signals of a given length that immediately precede a spike. With this method, the computation of the second-order kernel becomes considerably shorter because only one multiplication is required [x(tᵢ − τ₁) × x(tᵢ − τ₂)] instead of two [y(t) × x(tᵢ − τ₁) × x(tᵢ − τ₂)].

Kernels computed from analog input and analog output by means of a standard white-noise method can be interpreted as neuronal filters that transform the original input signals into output responses. The interpretation of kernels computed from spike outputs is not, however, that straightforward. For the PST method, kernels are an approximation of the PST histograms, or the spike density function, produced by an impulse input. For the reverse correlation method, time runs in the reverse direction, and the occurrence of a spike discharge (event) is considered as a sign or an indication that something particular happened in terms of the stimulus before that spike. The first-order spike kernel is the best linear approximation of the stimulus waveform that is optimal for triggering the spike discharge, i.e., the reverse correlation looks retrospectively at the preevent stimulus ensemble (9). As is described in section VI, in neuronal circuitry with an LNL (dynamic-linear, static-nonlinear, dynamic-linear) cascade structure, spike kernels have a dual interpretation: 1) the optimal stimulus to produce a spike and 2) the dynamics of the postsynaptic potentials of a spike-generating neuron (21).

IV. WHITE-NOISE ANALYSIS IN STUDIES OF VISION

White-noise analysis has been applied extensively to the visual system. The first application to a biological system was made by Sandberg and Stark (56) in 1968 in the human pupillary reflex system. Following their attempt, several pioneering papers appeared from the biological system analysis group at Caltech (27, 28) and from The Netherlands (57). The early 1970s saw a great increase in the amount of vision research based on white noise or similar stochastic approaches. More recently, the analysis has been extended to problems of spatiotemporal receptive fields in higher visual centers (10, 24) and to color vision (K.-I. Naka and H. M. Sakai, unpublished observations).

A. Neuron Network of Vertebrate Retina

In this section, I describe results mainly from studies on the catfish retina as examples of the way in which white-noise analysis has advanced our understanding of signal processing in the visual system, in particular in the retinal neuron network. There are several reasons that the vertebrate retina, particularly that of the catfish, is well suited to white-noise analysis: 1) the retina is a neuron network with a well-defined input (light) and output (ganglion cell spike discharges); 2) the retina is a well-layered neuron network that consists of five major cell types, namely, photoreceptor, bipolar, horizontal, amacrine, and ganglion cells; 3) the catfish retina has a simple structure because of its lack of complex color coding (37); 4) intracellular recordings can be made from virtually all the major cell types; and 5) physiological results have been interpreted in theoretical frameworks (21, 22, 30). The unique problem associated with the application of white-noise analysis to visual systems is the fact that a light stimulus can be modulated only around a mean luminance. A white noise-modulated light stimulus contains a mean. The stimulus can, therefore, be decomposed into two components

L(t) = L₀ + l(t)

where L₀ is the (steady) mean luminance and l(t) is the modulation. Under natural conditions, L₀ changes very slowly and can be taken as constant for a short period of time. As there is no such thing as negative light, the maximal excursion of l(t) is limited by the magnitude of L₀. Responses from retinal neurons can also be decomposed into two components

R(t) = R₀ + r(t)

where R₀ is the DC (steady) response produced by L₀ and r(t) is the modulation response produced by l(t). The relationship between L₀ and R₀ is the static response characteristic and is represented by the Michaelis-Menten equation for the receptor and horizontal cells in simple retinas (39). The relationship between l(t) and r(t) is the dynamic response characteristic and depends on L₀. Classically, the relationship between l(t) and r(t) is obtained by finding the l(t) that just evokes a response (threshold measurement) or that produces a unitary response. Such measurements are repeated for various values of L₀ to obtain an incremental sensitivity function. Often this function is the Weber-Fechner relationship. In white-noise analysis, cross correlation is made between l(t) and r(t) to produce a series of kernels. The amplitudes as well as the waveforms of the kernels depend on L₀; the kernels represent the incremental sensitivity (including dynamics) at a given L₀. For various values of L₀ (corresponding to changes in the mean luminance according to the time of day or the season of the year), there is a particular set of kernels. This dependence of the incremental sensitivity on the mean luminance is called field adaptation (46). In the experiments described next, white-noise analysis is used to discover changes in the response dynamics produced by various values of L₀, but they are not a single, all-inclusive, master experiment.

B. Linear Modulation Response

The input to the retina is light, the energy of which is transformed into electrical signals in photoreceptors. Photoreceptors are considered to be relatively independent of one another, like the elements in a photodiode array. Signals from photoreceptors are transmitted to horizontal and bipolar cells. White-noise analysis has revealed that responses of photoreceptor, horizontal, and bipolar cells are linearly related to the input modulation (52). Figure 6A shows responses from an off-center bipolar cell evoked by a flash of light presented in the dark, which was followed by a Gaussian white noise-modulated light. The flash of light produced a response with an initial transient hyperpolarizing peak followed by a sustained membrane hyperpolarization. The onset of the following white-noise stimulus produced an initial transient hyperpolarizing peak, which was followed by a sustained hyperpolarization. Within a few seconds, the cell began to respond to the input modulation. The cross correlation between the input modulation and the resulting output response at a steady state estimates a series of Wiener kernels. Figure 6B shows the input white noise and the resulting responses of the bipolar cell on a fast time scale. The actual response (solid line) is superimposed on the response predicted by the first-order kernel (dashed line), which is produced by convolving the original white-noise input with the first-order kernel. The two traces are nearly indistinguishable, indicating that a large part of the bipolar cell's modulation response can be predicted by its first-order kernel with a fair degree of accuracy, i.e., the bipolar cell modulation response is linearly related to the input modulation. The MSE of the first-order kernel was 7% in this particular example.

FIG. 6. A: bipolar cell responses evoked by a flash of light followed by a white noise-modulated light. B: expanded time scale for white noise-modulated light (top record) and resulting responses from off-center bipolar cell (solid line in bottom record) and for responses predicted by 1st-order kernel (dashed line in bottom record). Two traces match well, except for occasional deviations, showing that modulation response of bipolar cell is linearly related to input modulation. [From Sakai and Naka (48).]

There are two ways to change the modulation of white-noise light: one is to change the depth of the modulation while the mean luminance is kept unchanged, and the other is to change the mean luminance. Figure 7A shows three kernels estimated from inputs of different modulation depth and the resulting responses from an off-center bipolar cell; the depth of the modulation of the white-noise input was varied between 0, -10, and -20 dB, while the mean was kept unchanged at a luminance level of 2 μW/cm². The amplitudes, as well as the waveforms, of the three kernels were almost identical. The MSEs for inputs of 0 and -10 dB were between 5 and 10%, whereas the MSEs for the input at -20 dB (smallest modulation depth) were 40% as a result of a poor signal-to-noise ratio due to the small response amplitude elicited by the small stimulus modulation. In both receptors and horizontal cells, changes in the depth of modulation failed to produce any change either in the amplitude or the waveform of the kernels, as we would expect from a (piecewise) linear response (36, 47).

The waveform and amplitude of a first-order kernel do change, however, with a change in the mean luminance. This is shown in Figure 7B. The numbers in Figure 7B indicate the mean luminance, L0, in log units. Kernels computed from modulation responses of an off-center bipolar cell are plotted on a contrast sensitivity scale (see Ref. 48). Amplitudes of kernels are, therefore, a measure of contrast sensitivity. As the mean luminance increases, the peak time of the kernel becomes shorter and the waveform becomes more differentiated, or the transfer function takes on more of a bandpass character with a higher cutoff frequency. The amplitude of a kernel changes, but only by 50%, with a 100-fold change in the mean luminance, i.e., the cell's contrast sensitivity is roughly the same over a large range of mean luminance. This is the Weber-Fechner relationship. The MSE of the linear model is small, which indicates that the modulation response is linearly related to the input modulations, an example of a piecewise linearization produced by a parametric control, which probably originates in photoreceptors (36).

FIG. 7. A: three 1st-order kernels computed from white-noise input at 3 modulation depths, 0, -10, and -20 dB, and resulting responses recorded from off-center bipolar cell. Mean luminance was kept constant at 2 μW/cm². Waveform and amplitudes of kernels remain unchanged with change of depth of input modulation. B: four 1st-order kernels computed from white-noise input of 4 different levels of mean luminance and resulting responses from off-center bipolar cell. Maximum mean luminance (4 log units) was 20 μW/cm² and was attenuated by inserting a series of neutral density filters. Kernel waveform, as well as amplitude, changes with change in mean luminance.

Although the response dynamics of horizontal and bipolar cells are not much different from those of photoreceptors, their spatial organizations differ. Horizontal cells form a large monophasic receptive field coupled extensively through gap junctions (39), whereas bipolar cells form a center-surround biphasic receptive field whose center component is provided by photoreceptors and whose surround component is provided by horizontal cells (62, 52). The linear signals of bipolar cells are then transmitted to amacrine and ganglion cells.

C. Generation of Nonlinearities in Amacrine Cells

When the signal is transmitted from the outer to the inner retina, characteristic nonlinearities appear in amacrine cells. There are two types of amacrine cell in the vertebrate retina, one referred to as C amacrine and the other as N amacrine, and these types correspond to two distinct morphological classes (25, 38). The C-amacrine cells produce transient depolarizations at the onset and offset of a flash of light given in the dark (Fig. 8A1) and are referred to as transient amacrine cells. Responses in Figure 8A2 were evoked from the C cell by an incremental and a decremental step of light from a mean luminance. Although there are minor differences, the cell produces transient depolarizations at the onset and offset of both the incremental and decremental steps. As can easily be inferred from these step-evoked responses, the C cell produces a nonlinear response. Indeed, white-noise analysis has shown that the linear component comprises 40% of the total response (MSE 90%), the second-order nonlinear component accounts for a large part of the response (50-60%), and the third-order nonlinear components comprise the small remainder.
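The step-evoked behavior of the C cell can be sketched with a toy Wiener cascade: a differentiating linear filter followed by a static square. Everything below (filter shape, step timing, amplitudes) is invented for illustration; only the qualitative point, depolarizing transients at both onset and offset of steps of either sign, is intended.

```python
import numpy as np

# Toy Wiener cascade for the C cell: a biphasic (differentiating) linear
# filter followed by a static squaring nonlinearity.  Filter shape, step
# timing, and amplitudes are invented for illustration.
t = np.arange(60)
g = np.exp(-t / 4.0) * np.sin(t / 2.0)
g -= g.mean()        # remove DC so the filter is purely differentiating

def cascade(stim):
    """Dynamic linear filter, then static (no-memory) square."""
    return np.convolve(stim, g)[:len(stim)] ** 2

n = 400
inc = np.zeros(n); inc[100:300] = +1.0   # incremental step from the mean
dec = np.zeros(n); dec[100:300] = -1.0   # decremental step from the mean
r_inc, r_dec = cascade(inc), cascade(dec)

# Squaring is sign-blind: depolarizing transients appear at both the onset
# and the offset of either step, with no sustained plateau in between.
assert np.allclose(r_inc, r_dec)
assert r_inc[100:130].max() > 1.0 and r_inc[300:330].max() > 1.0
assert r_inc[250:290].max() < 1e-6
```

In this toy model the incremental and decremental responses coincide exactly; in the real cell there are minor differences, as noted above.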

FIG. 12. Block diagram of 3 cascade models. L is a dynamic linear system, and NL is a static nonlinearity. [From Naka et al. (40a).]

mutable, whereas nonlinear operators are not. Three nonlinearity cascade models have been studied (Fig. 12). The Hammerstein structure is a static, or no-memory, nonlinearity followed by a dynamic linear element, whereas the Wiener structure is a dynamic linear element followed by a static nonlinear element. The Korenberg structure, often referred to as the sandwich or LNL structure, consists of a static nonlinearity "sandwiched" between two dynamic linear elements. Second-order nonlinearities of two distinct configurations are obtained experimentally in amacrine cells of the vertebrate retina. Simple analysis shows that the structure leading to the generation of the second-order nonlinearity in C-amacrine cells can be replaced by the Wiener model, whereas that in N-amacrine cells can be replaced by the Korenberg model (48). The second-order kernel of the C-amacrine cell has a characteristic structure, in which two depolarizing peaks and two hyperpolarizing valleys occupy the four corners of a square whose base and height are parallel to the two time axes (see Fig. 9A). This is a structure consistent with, but not conclusive of, the following relationship

h2(τ1, τ2) = h1(τ1)h1(τ2)    (17)

In Equation 17, the value of the second-order kernel h2(τ1, τ2) at each point is given by a product, h1(τ1) × h1(τ2). In addition, along the diagonal where τ1 = τ2, the following relationship is implied

h2(τ1, τ1) = [h1(τ1)]²    (18)
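The separable structure in Equations 17 and 18 is easy to verify numerically; the biphasic first-order kernel below is invented for illustration.

```python
import numpy as np

# Numerical check of Equations 17 and 18 with an invented biphasic
# first-order kernel h1.
tau = np.arange(40)
h1 = np.exp(-tau / 5.0) * np.sin(tau / 3.0)

# Eq. 17: the second-order kernel is the outer product h1(t1) * h1(t2).
h2 = np.outer(h1, h1)

# Eq. 18: along the diagonal t1 == t2 the kernel is the square of h1.
assert np.allclose(np.diag(h2), h1 ** 2)

# The kernel is symmetric, and its deepest valley (one lobe positive, one
# negative) lies off the diagonal, giving a 4-eye contour: same-sign lobes
# multiply into peaks, mixed-sign lobes into valleys.
assert np.allclose(h2, h2.T)
i, j = np.unravel_index(np.argmin(h2), h2.shape)
assert i != j
```

A cut through this h2 parallel to either time axis through a peak recovers a scaled copy of h1, which is the test applied to the measured kernel below.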

In Equation 18, the second-order kernel is produced by two-dimensional multiplication of two cuts that are made through the peaks and valleys parallel to the time axes. Figure 13A shows a second-order kernel of a C-amacrine cell and two cuts made through the peak and valley parallel to the two time axes. The cuts have a differentiating waveform, an initial positive phase followed by a negative phase. The structure shown in Figure 13B is made by two-dimensional multiplication of the cuts. The squared cut is also superimposed on the diagonal cut from the original kernel to show that the two match exactly, as predicted by Equation 18. This configuration is a dynamic linear element followed by a static (no-memory) nonlinear element, or the Wiener structure. Incidentally, the configuration of the dynamic linear element is similar to the configuration of the first-order kernel of an on-center bipolar cell. The dynamic linear element can be an initial hyperpolarization followed by a depolarization, or a configuration similar to that of the first-order kernel of an off-center bipolar cell (as shown in Fig. 7A), because squaring produces the same product. Thus the neuron network leading from the light to the generation of the second-order nonlinearity of the C-amacrine cell is equated with a cascade of the linear system of a bipolar cell followed by a squaring device, a static (no-memory) nonlinear operation. The search for physiological mechanisms that correspond to this squaring device is important but not essential to this analysis.

FIG. 13. A: contour plot of typical 2nd-order kernel from C-amacrine cell in catfish retina. Plot has a 4-eye structure. Cuts through peak and trough made parallel to 2 time axes are shown. B: contour plot of 2nd-order kernel synthesized by 2-dimensional multiplication of cuts in A. Diagonal cut in A is superposed on diagonal cut in B (dashed line). Two cuts match very well. [From Naka et al. (40a).]

The N-amacrine cell yields a second-order kernel of a distinct configuration that is more complex than that of the C-amacrine cell. The configuration of the second-order kernel of the N-amacrine cell is characterized by alternating peaks and valleys elongated perpendicularly to the diagonal (see Fig. 2). Such kernels cannot be reproduced by a simple squaring of dynamic linear elements, as in the case of the C-amacrine cell's second-order nonlinearity, but they can be reproduced by placing a second dynamic linear element after the second-order nonlinear element of the C-amacrine cell. This operation is shown graphically in Figure 14. Second-order cross correlation between GWN and responses of a C-amacrine cell produces a second-order kernel with a four-eye structure. If the C-amacrine cell's responses are passed through a dynamic linear filter, then the second-order cross correlation of the GWN against the resulting synthesized response computes a second-order kernel whose structure is characterized by alternating peaks and valleys elongated perpendicularly to the diagonal. This is an LNL, or Korenberg, structure.

Thus the process leading from the light input to the


generation of the second-order nonlinearity of C-amacrine and N-amacrine cells is consistent with (but not conclusive of) the Wiener structure and the Korenberg structure, respectively. The cross correlation of GWN against the PSPs of retinal ganglion cells yields first- and second-order kernels similar to those of preganglionic cells (21). Moreover, the configuration of the first- and second-order kernels computed from spike trains of a ganglion cell is similar to those computed from its PSPs (21, 51). The final output of the retinal neuron network is the spike train generated in the ganglion cells, which is then transmitted to the brain along the optic nerve fibers. In contrast with the analog potential generated by preganglionic cells, the spike train is a point process. When the amplitude of the PSP reaches a critical depolarizing level, a spike potential fires: it can easily be seen that spike generation is a highly nonlinear process. A great deal is known about the ionic mechanisms that underlie the generation of spikes, but not much is known about the kind of information carried by spike trains. Most often spike trains are analyzed in terms of interspike intervals (43) or by producing PST histograms (15). Korenberg et al. (21) have provided a simple analytical explanation, with experimental evidence, to account for the strong similarities between ganglion cell PSP kernels and spike kernels. The explanation rests on certain assumptions about the functional structure of the retina. The first assumption is that the process involved in generating a spike train is a static function of the ganglion cell PSP, as demonstrated by the experiment briefly described next. Gaussian white noise-modulated current was injected through an intracellular electrode into a ganglion cell, and the resultant spike discharges of the same cell were recorded by an extracellular electrode. A cross correlation was made between the white-noise current and the resultant spike discharges. The first- and second-order kernels computed in this way are very brief in duration and approximate an impulse function: the analog-to-point process transformation in the ganglion cells appears to be approximated by a static nonlinear function.

FIG. 14. A: typical 2nd-order kernel of C-amacrine cell with 4-eye structure. If C-amacrine cell's responses are followed by a dynamic linear filter (B), then 2nd-order cross correlation of GWN against resulting synthesized response computes 2nd-order kernel (C). Structure of 2nd-order kernel in C is similar to that computed from responses of N-amacrine cell.

FIG. 15. Functional structure of cascade models for generating ganglion cell PSP and spike train output. L, dynamic linear filter; N, static nonlinearity; G(τ), impulse response function of filter L1. A: light stimulus to PSP generation is represented by a Wiener model (dynamic-linear followed by static-nonlinear cascade), which is then followed by a static nonlinearity of spike generation (N2). B: light stimulus to PSP generation is represented by LNL structure, which is then followed by a static nonlinearity of spike generation (N2).

Thus for a retinal network that leads from

light to a ganglion cell, spike discharges can be approximated by either an LN structure or an LNL structure followed by a static nonlinearity (Fig. 15). The cascade of the two static nonlinearities is equivalent to a single static nonlinearity, whatever the complexity of the static operation that underlies spike generation. Consequently, the overall structure receiving the light stimulus and producing the spike output is a Wiener model, as is the structure up to the PSP output. The Wiener model is a special case of the LNL cascade structure. If a given LNL structure has a Volterra series representation, then each Wiener kernel is proportional to the Volterra kernel of corresponding order. For the Wiener model having the light stimulus as input and the ganglion PSP as output, the ith-order Wiener kernel has the form

ki(τ1, ..., τi) = Ci g(τ1)g(τ2) ... g(τi)    (19)

Here g(τ) is the impulse response of the dynamic linear system L1 in Figure 15, and Ci is a constant. For the Wiener model having the light stimulus as input and the spike train as output, the ith-order Wiener kernel is

Ki(τ1, ..., τi) = Di g(τ1)g(τ2) ... g(τi)    (20)

where Di is a constant. From Equations 19 and 20, the ith-order Wiener kernel Ki(τ1, ..., τi) computed from the spike discharge is obviously proportional to the ith-order kernel ki(τ1, ..., τi) from the ganglion cell analogue slow potential. Moreover, spike kernels are also proportional to kernels computed from the slow potential of the preganglionic cell when the relationship between the slow potentials of the preganglionic and the ganglion cells is essentially static (linear or nonlinear). The Wiener model having the spike train output differs from the Wiener model having the PSP output only in the static nonlinearity, which merely affects the proportionality constant (e.g., Ci or Di) scaling the Wiener kernel of a given order. The remaining structure of the Wiener kernel is determined by the dynamic linear system in the Wiener model. Because both the "spike" and "PSP" Wiener models share the same dynamic linear system (L1), their corresponding Wiener kernels of any given order will be proportional (21).

A cascade structure similar to that shown in Figure 15 was used by Marmarelis et al. (31) in studying "minimum-order Wiener modeling" of systems having continuous (analog) inputs and point-process (spike train) outputs. In their structure, a Wiener model was followed by a spike generator in the form of a threshold trigger. They pointed out that the cascade of the static nonlinearity of the Wiener model with the threshold trigger was equivalent to a single static nonlinearity. Moreover, the number of intersections of the graph of the first static nonlinearity with the threshold level determined the minimum order of the Wiener model (preceding the threshold trigger) sufficient to yield the same spike timings.

Spekreijse (60) made the first attempt to identify a retinal neuron network by an LNL model. Spekreijse used a sinusoidal stimulus modified by an auxiliary noise signal. Victor and Shapley (63) analyzed a retinal neuron network by an LNL (sandwich) model, using spatial input signals that consisted of the sum of sinusoids. Korenberg (18) and Korenberg et al. (20) suggested a simple method for identifying each component of an LNL structure by a Gaussian white-noise experiment. The LNL models based on white-noise analysis have proven to be useful for other sensory systems as well (9, 20). For comparison of the different methods for dissecting an LNL structure, the reader is referred to the review by Spekreijse and Reits (61).

VII. CONCLUSIONS

There are two significant advantages to the application of white-noise methods. The first advantage is that white-noise analysis provides maximum information with minimum assumptions. That is, the white-noise input utilizes the entire recording time to obtain a great variety of stimulus-response data, compared with a brief test pulse given once every few seconds or a sinusoidal stimulation that is repetitious and redundant. Our experience shows that the dynamics, as well as the linear and nonlinear characteristics, of a fast-reacting physiological system can be defined with recordings of ~30 s. Results are described in general (canonical) terms, which facilitates the comparison of results obtained under different experimental conditions, as well as from different systems. This kind of analysis is not meant to uncover highly specialized functions of a nervous system, such as, for example, recognition of one's grandmother's face, but it does provide basic information on the linear as well as the nonlinear dynamics of a system. The statement "if one believes that one knows very little about a system, then white-noise analysis is the method of choice with which to start" is valid; however, the statement "if one believes that one knows more about the system, then the analysis will not be of much help" is not justified. The neuron network in the catfish retina is one of the most extensively studied networks, but white-noise analysis is still being used there with justifiable results. It seems to me that there are two significant features of white-noise analysis: it is a general probe that is best employed in studying a relatively unknown system, and it is also an efficient probe with which to examine a specific aspect of a system.

The second advantage is that many types of noise, such as electrode noise or interference noise, can be eliminated through the cross-correlation procedures of the method. Although this is purely a technical point, it is especially important when recordings are made intracellularly.

It is relatively easy to use white-noise analysis to study a physiological system, but the interpretation of results can be problematic. This is one of the criticisms from those who prefer not to use white-noise analysis. It is not a problem of the analysis itself but of the investigator, because white-noise analysis is a general tool and its use does not automatically guarantee any deeper insight into the system. Knowledge of the system obtained by other means can be used to assist in the interpretation. Multidisciplinary approaches to a system are often much more effective than the exclusive use of white-noise analysis.

The single most crucial factor that hinders regular use of white-noise analysis in physiological laboratories is computation. In the 1970s and early 1980s, only large mainframe computers with a large memory could compute higher order kernels within a reasonable time. The recent development of computer technology has reduced this problem, and today even small personal computers can efficiently compute second-order kernels. Along with this technological development, the development of new algorithms for computing kernels (20) has enabled routine use of the analysis in physiological laboratories. White-noise analysis is a powerful method for neurophysiologists, and there is no reason not to use it.
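As a rough sketch of how light the computation has become, the following estimates a full second-order kernel of a simulated Wiener cascade (linear filter, then squaring) by second-order Lee-Schetzen cross correlation; the system, sample counts, and lag range are an invented toy, and the double loop runs in seconds on an ordinary personal computer.

```python
import numpy as np

rng = np.random.default_rng(2)
n, taus = 100_000, 25

# Toy Wiener cascade: invented biphasic filter g, then a static square.
t = np.arange(taus)
g = np.exp(-t / 4.0) * np.sin(t / 2.0)

x = rng.normal(0.0, 1.0, n)          # Gaussian white-noise input
u = np.convolve(x, g)[:n]            # dynamic linear stage
y = u ** 2                           # static (no-memory) nonlinearity

# Second-order Lee-Schetzen estimate:
#   h2[i, j] ~ <(y - <y>) x(t - i) x(t - j)> / (2 sigma^4)
y0 = y - y.mean()
s4 = x.var() ** 2
h2 = np.empty((taus, taus))
for i in range(taus):
    for j in range(taus):
        h2[i, j] = np.dot(y0[taus:], x[taus - i:n - i] * x[taus - j:n - j])
h2 /= 2.0 * s4 * (n - taus)

# For this cascade the second-order kernel is separable: h2 = outer(g, g).
err = np.abs(h2 - np.outer(g, g)).max()
assert err < 0.05
```

The recovered kernel shows the separable, four-eye structure discussed above for the C-amacrine cell.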
It should be emphasized that, for physiologists, white-noise analysis is a means to an end and not an end in itself. The results of white-noise analysis should be judged not by the rigor of the theory but by the extent to which the analysis has achieved its goal.

I thank Ken-Ichi Naka for critically reviewing the manuscript.

Preparation of this article was supported by National Eye Institute Grants EY-08848 and EY-07738 and National Science Foundation Grants DIR-8718461 and BNS-891993.

REFERENCES

1. BRYANT, H. L., AND J. P. SEGUNDO. Spike initiation by transmembrane current: a white-noise analysis. J. Physiol. Lond. 260: 279-314, 1976.
2. CITRON, M. C., AND R. C. EMERSON. White noise analysis of cortical directional selectivity in cat. Brain Res. 279: 271-277, 1983.
3. CITRON, M. C., J. P. KROEKER, AND G. D. MCCANN. Nonlinear interactions in ganglion cell receptive fields. J. Neurophysiol. 46: 1161-1176, 1981.
3a. COOLEY, J. W., AND J. W. TUKEY. An algorithm for the machine calculation of complex Fourier series. Math. Comput. 19: 297-301, 1965.
4. COOPER, G. R., AND C. MCGILLEM. Probabilistic Methods of Signal and System Analysis. New York: Holt, Rinehart & Winston, 1986.
5. DAVIS, G. W., AND K.-I. NAKA. Spatial organization of catfish retinal neurons. I. Single- and random-bar stimulation. J. Neurophysiol. 43: 807-831, 1980.
6. DE BOER, E. Correlation studies applied to the frequency resolution of the cochlea. J. Aud. Res. 7: 209-217, 1967.
7. DE BOER, E., AND P. KUYPER. Triggered correlation. IEEE Trans. Biomed. Eng. 15: 169-179, 1968.
8. ECKHORN, R., AND B. POPEL. Responses of cat retinal ganglion cells to the random motion of a spot stimulus. Vision Res. 21: 435-443, 1981.
9. EGGERMONT, J. J., P. I. M. JOHANNESMA, AND A. M. H. J. AERTSEN. Reverse-correlation methods in auditory research. Q. Rev. Biophys. 16: 341-414, 1983.
10. EMERSON, R. C., M. C. CITRON, W. J. VAUGHN, AND S. A. KLEIN. Nonlinear directionally selective subunits in complex cells of cat striate cortex. J. Neurophysiol. 58: 33-65, 1987.
11. FRECHET, M. Sur les fonctionelles continues. Ann. Sci. Ec. Normale Superieure 27: 193-216, 1910.
12. FRENCH, A. S., AND E. G. BUTZ. Measuring the Wiener kernels of a non-linear system using the fast Fourier transform algorithm. Int. J. Control 17: 529-539, 1973.
13. FRENCH, A. S., AND J. E. KUSTER. Nonlinearities in locust photoreceptors during transduction of small numbers of photons. J. Comp. Physiol. 156: 645-652, 1985.
14. FRENCH, A. S., AND R. K. S. WONG. Nonlinear analysis of sensory transduction in an insect mechanoreceptor. Biol. Cybern. 26: 231-240, 1977.
15. GERSTEIN, G. L., AND N. Y. S. KIANG. An approach to the quantitative analysis of electrophysiological data from single neurons. Biophys. J. 1: 15-28, 1960.
16. HIDA, E., AND K.-I. NAKA. Spatio-temporal receptive field as revealed by spatio-temporal random noise. Z. Naturforsch. Sect. C Biosci. 37: 1048-1049, 1982.
17. HIDA, T. Brownian Motion. Berlin: Springer-Verlag, 1980.
18. KORENBERG, M. J. Identification of biological cascades of linear and static nonlinear systems. Proc. Midwest Symp. Circuit Theory 18: 1-9, 1973.
19. KORENBERG, M. J. Fast orthogonal identification of nonlinear difference equation and functional expansion models. Proc. Midwest Symp. Circuit Sys. 1: 270-276, 1987.
20. KORENBERG, M. J., A. S. FRENCH, AND S. K. L. VOO. White-noise analysis of nonlinear behavior in an insect sensory neuron: kernel and cascade approaches. Biol. Cybern. 58: 313-320, 1988.
21. KORENBERG, M. J., H. M. SAKAI, AND K.-I. NAKA. Dissection of the neuron network in the catfish inner retina. III. Interpretation of spike kernels. J. Neurophysiol. 61: 1110-1120, 1989.
22. KRAUSZ, H. I., AND K.-I. NAKA. Spatio-temporal testing and modeling of catfish retinal neurons. Biophys. J. 29: 13-36, 1980.
23. LEE, Y. W., AND M. SCHETZEN. Measurement of the Wiener kernels of a nonlinear system by cross-correlation. Int. J. Control 2: 237-254, 1965.
24. MANCINI, M., B. C. MADDEN, AND R. C. EMERSON. White noise analysis of temporal properties in simple receptive fields of cat cortex. Biol. Cybern. 63: 209-219, 1990.
25. MARCHIAFAVA, P. L., AND R. WEILER. The photoresponses of structurally identified amacrine cells in the turtle retina. Proc. R. Soc. Lond. B Biol. Sci. 214: 403-415, 1982.
26. MARMARELIS, P. Z., AND V. Z. MARMARELIS. Analysis of Physiological Systems. New York: Plenum, 1978.
27. MARMARELIS, P. Z., AND G. D. MCCANN. Development and application of white-noise modeling techniques for studies of insect visual nervous systems. Kybernetik 12: 74-89, 1973.
28. MARMARELIS, P. Z., AND K.-I. NAKA. White-noise analysis of a neuron chain: an application of the Wiener theory. Science Wash. DC 175: 1276-1278, 1972.
29. MARMARELIS, P. Z., AND K.-I. NAKA. Nonlinear analysis and synthesis of receptive-field responses in the catfish retina. III. Two-input white-noise analysis. J. Neurophysiol. 36: 634-648, 1973.
30. MARMARELIS, P. Z., AND K.-I. NAKA. Experimental analysis of a neural system: two modeling approaches. Kybernetik 15: 11-26, 1974.
31. MARMARELIS, V. Z., M. C. CITRON, AND C. VIVO. Minimum-order Wiener modeling of spike-output systems. Biol. Cybern. 54: 115-124, 1986.
32. MIZUNAMI, M., H. TATEDA, AND K.-I. NAKA. Dynamics of cockroach ocellar neurons. J. Gen. Physiol. 88: 275-292, 1986.
33. MOLLER, A. R. Statistical evaluation of the dynamic properties of cochlear nucleus units using stimuli modulated with pseudorandom noise. Brain Res. 57: 443-456, 1973.
34. MOLLER, A. R. Frequency selectivity of phase-locking of complex sounds in the auditory nerve of the rat. Hear. Res. 11: 267-284, 1983.
35. MOORE, L. E., AND B. N. CHRISTENSEN. White noise analysis of cable properties of neuroblastoma cells and lamprey central neurons. J. Neurophysiol. 53: 636-651, 1985.
36. NAKA, K.-I., N.-A. ITOH, AND R. L. CHAPPELL. Dynamics of turtle cones. J. Gen. Physiol. 89: 321-337, 1987.
37. NAKA, K.-I., AND P. W. NYE. Receptive field organization of the catfish retina: are at least two lateral mechanisms involved? J. Neurophysiol. 33: 625-642, 1970.
38. NAKA, K.-I., AND T. OHTSUKA. Morphological and functional identification of catfish retinal neurons. II. Morphological identification. J. Neurophysiol. 38: 79-91, 1975.
39. NAKA, K.-I., AND W. A. H. RUSHTON. The organization and spread of S-potentials in fish (Cyprinidae). J. Physiol. Lond. 192: 437-461, 1967.
40. NAKA, K.-I., AND H. M. SAKAI. The messages in optic nerve fibers and their interpretation. Brain Res. Rev. 16: 135-145, 1991.
40a. NAKA, K.-I., H. M. SAKAI, AND H. ISHII. Generation and transformation of second-order nonlinearity in catfish retina. Ann. Biomed. Eng. 16: 53-64, 1988.
41. PALM, G., AND T. POGGIO. Wiener-like system identification in physiology. J. Math. Biol. 4: 375-381, 1977.
42. PALM, G., AND B. POPEL. Volterra representation and Wiener-like identification of nonlinear systems: scope and limitations. Q. Rev. Biophys. 18: 135-164, 1985.
43. PERKEL, D. H., G. L. GERSTEIN, AND G. P. MOORE. Neuronal spike trains and stochastic point processes. I. The single spike train. Biophys. J. 7: 391-418, 1967.
44. POWERS, R. I., AND D. W. ARNETT. Spatio-temporal cross-correlation analysis of catfish retinal neurons. Biol. Cybern. 41: 179-196, 1981.
45. RUGH, W. J. Nonlinear System Theory. The Volterra/Wiener Approach. Baltimore, MD: Johns Hopkins Univ. Press, 1981.
46. RUSHTON, W. A. H. The Ferrier lecture: visual adaptation. Proc. R. Soc. Lond. B Biol. Sci. 162: 20-46, 1965.
47. SAKAI, H. M., AND K.-I. NAKA. Signal transmission in the catfish retina. IV. Transmission to ganglion cells. J. Neurophysiol. 58: 1307-1328, 1987.
48. SAKAI, H. M., AND K.-I. NAKA. Signal transmission in the catfish retina. V. Sensitivity and circuit. J. Neurophysiol. 58: 1329-1350, 1987.
49. SAKAI, H. M., AND K.-I. NAKA. Dissection of the neuron network in the catfish inner retina. II. Interactions between ganglion cells. J. Neurophysiol. 60: 1568-1583, 1988.
50. SAKAI, H. M., AND K.-I. NAKA. Dissection of the neuron network in the catfish inner retina. IV. Bidirectional interactions between amacrine and ganglion cells. J. Neurophysiol. 63: 105-130, 1990.
51. SAKURANAGA, M., Y.-I. ANDO, AND K.-I. NAKA. Dynamics of the ganglion cell response in the catfish and frog retinas. J. Gen. Physiol. 90: 229-259, 1987.
52. SAKURANAGA, M., AND K.-I. NAKA. Signal transmission in the catfish retina. I. Transmission in the outer retina. J. Neurophysiol. 53: 373-389, 1985.
53. SAKURANAGA, M., AND K.-I. NAKA. Signal transmission in the catfish retina. II. Transmission to type-N cell. J. Neurophysiol. 53: 390-410, 1985.
54. SAKURANAGA, M., AND K.-I. NAKA. Signal transmission in the catfish retina. III. Transmission to type-C cell. J. Neurophysiol. 53: 411-428, 1985.
55. SAKURANAGA, M., N. SATO, E. HIDA, AND K.-I. NAKA. Nonlinear analysis: mathematical theory and biological applications. Crit. Rev. Biomed. Eng. 14: 127-184, 1986.
56. SANDBERG, A., AND L. STARK. Wiener G-function analysis as an approach to nonlinear characteristics of human pupil light reflex. Brain Res. 11: 194-211, 1968.
57. SCHELLART, N. A., AND H. SPEKREIJSE. Dynamic characteristics of retinal ganglion cell responses in goldfish. J. Gen. Physiol. 59: 1-21, 1972.
58. SCHETZEN, M. The Volterra and Wiener Theories of Nonlinear Systems. New York: Wiley, 1980.
59. SCLABASSI, R. J., J. L. ERIKSSON, R. L. PORT, G. B. ROBINSON, AND T. W. BERGER. Nonlinear systems analysis of the hippocampal perforant path-dentate projection. I. Theoretical and interpretational consideration. J. Neurophysiol. 60: 1066-1076, 1988.
60. SPEKREIJSE, H. Rectification in the goldfish retina: analysis by sinusoidal and auxiliary stimulation. Vision Res. 9: 1461-1472, 1969.
61. SPEKREIJSE, H., AND D. REITS. Sequential analysis of the visual evoked potential system in man: nonlinear analysis of a sandwich system. Ann. NY Acad. Sci. 388: 72-97, 1982.
62. TOYODA, J., AND K. TONOSAKI. Effect of polarization of horizontal cells on the on-center bipolar cell of the carp retina. Nature Lond. 276: 399-400, 1978.
63. VICTOR, J. D., AND R. M. SHAPLEY. The nonlinear pathway of Y ganglion cells in the cat retina. J. Gen. Physiol. 74: 671-687, 1979.
64. VOLTERRA, V. Leçons sur les Fonctions de Lignes. Paris: Gauthier-Villars, 1913.
65. VOLTERRA, V. Theory of Functionals and of Integral and Integro-Differential Equations. New York: Dover, 1959.
66. WERBLIN, F. S., AND D. R. COPENHAGEN. Control of retinal sensitivity. III. Lateral interactions at the inner plexiform layer. J. Gen. Physiol. 63: 88-110, 1974.
67. WIENER, N. The homogeneous chaos. Am. J. Math. 60: 897-936, 1938.
68. WIENER, N. Nonlinear Problems in Random Theory. New York: Wiley, 1958.
69. YIN, C. T., C. K. CHAN, AND D. R. F. IRVINE. Effects of interaural time delays of noise stimuli on low-frequency cells in the cat's inferior colliculus. I. Responses to wideband noise. J. Neurophysiol. 55: 280-300, 1986.