Biological Cybernetics

Biol. Cybern. 64, 343-351 (1991)

© Springer-Verlag 1991

Terminal chaos for information processing in neurodynamics

M. Zak

Center for Space Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA

Received July 7, 1990 / Accepted in revised form October 5, 1990

Abstract. A new nonlinear phenomenon, terminal chaos, caused by failure of the Lipschitz condition at equilibrium points of dynamical systems, is introduced. It is shown that terminal chaos has a well-organized probabilistic structure which can be predicted and controlled. This provides an opportunity to exploit the phenomenon for information processing. It appears that chaotic states of neuron activity are associated with higher levels of cognitive processes such as generalization and abstraction.

1 Introduction

Most theoretical studies of dynamical systems have been concerned with modelling energy transformations. However, in recent years several attempts have been made to exploit nonlinear dynamics phenomenology for information processing. Such dynamical systems are associated with artificial neural networks. The biggest promise of these networks as computational tools lies in the hope that they will resemble the information processing in biological systems. Notwithstanding many successes in this direction (Hopfield 1985; Pineda 1987; Guez et al. 1988), it is rapidly becoming evident that current models based upon classical dynamical system theory suffer from some limitations. Firstly, the performance of the current models is fully prescribed by initial conditions. The system never "forgets" these conditions unless some external input is applied. In contrast, biological systems are more flexible: they can forget (if necessary) the past, adapting their behavior to environmental changes. Secondly, the features characterizing the current models are all of the same order: they are insulated from the microworld by a large number of scales. At the same time, biological systems involve mechanisms that span the entire range from the molecular to the macroscopic (Harth et al. 1970). Actually, both of

these limitations are inevitable consequences of the Lipschitz condition (Cronin 1987), which guarantees the uniqueness of the solutions subject to prescribed initial conditions. For a dynamical system $\dot{u}_i = f_i(u_1, u_2, \ldots, u_n)$ this condition requires that

$$\left|\frac{\partial f_i}{\partial u_j}\right| < \infty, \qquad i, j = 1, 2, \ldots, n$$
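The role of the Lipschitz condition can be illustrated with a minimal numerical sketch. The scalar system du/dt = -u^(1/3) below is an illustrative example, not one of the paper's models: its right-hand side has an unbounded derivative at u = 0, and, in contrast to a Lipschitzian system, its trajectory reaches the equilibrium in finite time.

```python
import numpy as np

# Illustrative example (not from the paper): du/dt = -u**(1/3) violates the
# Lipschitz condition at u = 0, since |df/du| = (1/3)*|u|**(-2/3) is unbounded there.
def f(u):
    return -np.sign(u) * np.abs(u) ** (1.0 / 3.0)

def dfdu(u):
    return (1.0 / 3.0) * np.abs(u) ** (-2.0 / 3.0)

# The derivative blows up as u -> 0:
for u in (1e-1, 1e-3, 1e-6):
    print(f"u = {u:8.1e}   |df/du| = {dfdu(u):12.4e}")

# Closed-form solution: u(t) = (u0**(2/3) - (2/3)*t)**(3/2) reaches the
# equilibrium u = 0 in FINITE time t_reach = 1.5 * u0**(2/3), i.e. u = 0 is a
# "terminal" attractor -- impossible for a Lipschitzian right-hand side.
u0 = 1.0
t_reach = 1.5 * u0 ** (2.0 / 3.0)

# Crude Euler check: the trajectory sits at the origin shortly after t_reach.
u, dt = u0, 1e-4
for _ in range(int((t_reach + 0.1) / dt)):
    u = u + dt * f(u)
print(f"t_reach = {t_reach:.3f},  u(t_reach + 0.1) = {u:.2e}")
```

For a Lipschitzian attractor such as du/dt = -u, the approach would instead be exponential and the equilibrium would be reached only asymptotically.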

A new dynamical architecture of neural networks was developed by Zak (1988, 1989a, b, c, 1990a, b). The architecture is based upon some effects of non-Lipschitzian dynamics, Fig. 1. Due to the failure of the Lipschitz condition at certain critical points of phase space, the neural net forgets its past as soon as it approaches these points. In addition, it acquires the ability to be activated not only by external inputs but also by internal periodic rhythms. (Such spontaneous activity resembles brain activity.) Due to the existence of the critical points mentioned above, the neural network becomes a weakly coupled dynamical system (Zak 1990a): its neurons (or groups of neurons) are uncoupled (and therefore can perform parallel tasks) during the periods between the critical points, while the coordination between the independent units (i.e., the collective part of the performance) is carried out at the critical points, at which the neural network is fully coupled. As shown by Zak (1989a), any infinitesimal input applied at a critical point causes a finite response of the neural network. This property appears to be an important tool for creating a chain of coupled subsystems of different scales whose range is theoretically unlimited. A more sophisticated version of the new neural network architecture was introduced by Zak (1990b); there the activation and learning dynamics are coupled in such a way that the system can spontaneously change the locations and types of its attractors. In this work we will introduce a fundamentally new effect of non-Lipschitzian dynamics: terminal chaos, which can be associated with a dynamical model of brain activity.
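The weak-coupling idea can be caricatured in a few lines. This is a toy sketch, not the paper's equations: the decay dynamics, the mixing matrix, and the choice of critical instants t_k = πk/ω are illustrative assumptions. Two units evolve independently (in parallel) between critical instants and interact only at those instants.

```python
import numpy as np

# Toy sketch of weak coupling (illustrative assumptions throughout): two units
# evolve independently between the critical instants t_k = pi*k/omega and are
# coupled only AT those instants, where their states are mixed.
omega = 1.0
critical_times = [np.pi * k / omega for k in range(1, 5)]
mix = np.array([[0.9, 0.1],        # coupling applied only at critical instants
                [0.1, 0.9]])

state = np.array([0.7, -0.3])
history = []
t = 0.0
for tk in critical_times:
    state = state * np.exp(-0.5 * (tk - t))   # uncoupled decay on (t, t_k)
    state = mix @ state                       # instantaneous coupling at t_k
    history.append(state.copy())
    t = tk

for k, s in enumerate(history, 1):
    print(f"state after t_{k}: {s}")
```

Between the critical instants the two units could run on separate processors; only the mixing step requires communication, which is the sense in which the network is "weakly coupled".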

It can be verified that at the equilibrium points

$$u^{(k)} = \frac{\pi k \alpha}{\omega}, \qquad k = \ldots, -2, -1, 0, 1, 2, \ldots, \text{etc.} \tag{2}$$

the Lipschitz condition is violated:

$$\frac{\partial \dot{u}}{\partial u} \to \infty \quad \text{at} \quad u \to u^{(k)} \tag{3}$$

If u = 0 at t = 0, then during the first period 0 < t ≤ π/ω the point u = 0 is a terminal repeller, since sin ωt > 0, and the solution at this point splits into two (positive and negative) branches whose divergence is characterized by an unbounded Lyapunov exponent (Zak 1989a). Consequently, with equal probability u can move in the positive or the negative direction. For the sake of concreteness, we will assume that it moves in the positive direction. Then the solution will approach the second equilibrium point u^(1) = πα/ω at

$$t^* = \frac{1}{\omega} \arccos\left[1 - \frac{\alpha}{\gamma}\, B\!\left(\tfrac{1}{2}, \tfrac{1}{3}\right)\right] \tag{5}$$

in which B is the Beta function. It can be verified that the point u^(1) will be a terminal attractor (Zak 1988) at t = t* if t* ≤ π/ω. Then the solution splits again: one of two possible branches approaches the next equilibrium point u^(2) = 2πα/ω, while the other returns to the point u^(0) = 0, etc. The periods of transition from one equilibrium point to another are all the same and are given by (5), Fig. 2. It is important to notice that these periods t* are bounded only because of the failure of the Lipschitz condition at the equilibrium points (Zak 1989a). Otherwise they would be unbounded, since the time of approaching a regular attractor (as well as the time of escaping a regular repeller) is infinite.
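The transition period can be checked numerically. A sketch under assumed constants: γ, ω, α below are illustrative choices, not values from the paper, and the closed form of (5) is reconstructed from an OCR-damaged passage. Separating variables in (1) over one escape from u = 0 to u = πα/ω gives (α/ω)·∫₀^π sin⁻¹ᐟ³θ dθ = (γ/ω)(1 − cos ωt*), and the integral equals the Beta function B(1/2, 1/3).

```python
import numpy as np
from scipy.special import beta
from scipy.integrate import quad

# Illustrative constants (NOT from the paper); gamma must be large enough that
# the arccos argument stays in [-1, 1].
gamma, omega, alpha = 5.0, 1.0, 1.0

# The separable integral over one escape: Integral_0^pi sin(theta)**(-1/3) dtheta.
# Despite the integrable endpoint singularities, quad handles it adaptively.
lhs, _ = quad(lambda th: np.sin(th) ** (-1.0 / 3.0), 0.0, np.pi, limit=200)
print("integral:", lhs, "  B(1/2, 1/3):", beta(0.5, 1.0 / 3.0))

# Transition period t* of (5); it must not exceed half a forcing period pi/omega
# for u^(1) to act as a terminal attractor before sin(omega*t) changes sign.
t_star = (1.0 / omega) * np.arccos(1.0 - (alpha / gamma) * beta(0.5, 1.0 / 3.0))
print("t* =", t_star, "  pi/omega =", np.pi / omega)
```

The agreement of the quadrature with B(1/2, 1/3) confirms the Beta-function constant; the finite t* reflects the finite-time approach to a terminal attractor discussed above.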

Fig. 1. Convergence to a regular attractor (u = 0) and convergence to a terminal attractor

2 Terminal chaotic attractor

Let us start with the following one-neuron dynamical system:

$$\dot{u} = \gamma \sin^{1/3} \frac{\omega u}{\alpha} \sin \omega t, \qquad \gamma = \text{const}, \quad \omega = \text{const}, \quad \alpha = \text{const} \tag{1}$$

Fig. 2. Oscillations about the attractor u = 0
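A minimal simulation sketch of (1) can make the hopping between equilibria concrete. The parameter values and the tiny noise term, which stands in for the infinitesimal perturbation that selects a branch each time an equilibrium turns into a terminal repeller, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of (1): du/dt = gamma * sin(omega*u/alpha)**(1/3) * sin(omega*t).
# gamma, omega, alpha and the noise amplitude are illustrative assumptions.
gamma, omega, alpha = 5.0, 1.0, 1.0

def cbrt_sin(x):
    s = np.sin(x)
    return np.sign(s) * np.abs(s) ** (1.0 / 3.0)   # real-valued cube root

dt = 1e-3
u, t = 0.0, 0.0
visited = []   # index k of the nearest equilibrium u^(k) = pi*k*alpha/omega
steps_per_half_period = int(np.pi / (omega * dt))
for step in range(6 * steps_per_half_period):
    u += dt * gamma * cbrt_sin(omega * u / alpha) * np.sin(omega * t)
    u += 1e-8 * rng.standard_normal()   # infinitesimal branch-selecting noise
    t += dt
    if step % steps_per_half_period == 0:   # sample once per sign flip of sin(omega*t)
        visited.append(round(u * omega / (np.pi * alpha)))
print("equilibrium indices visited:", visited)
```

During each half-period of sin ωt the trajectory escapes its current (now repelling) equilibrium and is captured by an adjacent one, so the printed indices form a random walk over k with unit steps.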


Thus, the evolution of u prescribed by (1) is totally unpredictable: it has 2^m different scenarios, where m = E(t/t*) (E denotes the integer part), Fig. 3.

Fig. 3. Unpredictable system

Let us assume that the dynamical system (1) is driven by a vanishingly small input ε(t):

$$\dot{u} = \gamma \sin^{1/3} \frac{\omega u}{\alpha} \sin \omega t + \varepsilon(t), \qquad |\varepsilon(t)| \ll \gamma \tag{7}$$

This input can be ignored when u̇ ≠ 0, or when u̇ = 0 but the system is stable, i.e. at u = πα/ω, 3πα/ω, ..., etc. However, it becomes significant during the instants of instability, when u̇ = 0 at u = 0, 2πα/ω, ..., etc. Since a vanishingly small noise is actually always present, one can interpret the unpredictability discussed above as a consequence of the small random inputs to which the one-neuron dynamical system (7) is extremely sensitive. Thus, in contradistinction to classical dynamical systems, the variable u in (7) "forgets" its past motion at each critical point u^(k) (see (2)), while any infinitesimal input applied at these points causes a finite response of the neuron. This property appears to be an important tool for creating a chain of coupled subsystems of different scales whose range is theoretically unlimited. The function ε(t) ≪ γ is not necessarily random: it can be associated with a microsystem which controls the neuron behavior through a string of signs (Zak 1990b):

$$\operatorname{sgn} \varepsilon(t_k) = +, +, -, + \ldots \text{etc.}, \qquad t_k = \frac{\pi k}{\omega} \tag{8}$$

Let us now take a deeper look at the environmental noise ε(t) and try to distinguish its "hidden" microstructure, whose scale is much smaller than the scale of the original dynamical system. We will start with the simplest microdynamical structure, assuming that ε(t) = −δ²u, δ → 0. Then (7) reads:

$$\dot{u} = \gamma \sin^{1/3} \frac{\omega u}{\alpha} \sin \omega t - \delta^2 u, \qquad \delta \to 0 \tag{9}$$

It can be verified that the solution to (9) will oscillate about the point u = 0. Indeed, when the point u = 0 becomes a terminal repeller, i.e. when sin ωt > 0, the solution escapes to the neighboring (right or left) equilibrium point. However, u̇ < 0 at u^(1) = πα/ω > 0, and u̇ > 0 at u^(-1) = −πα/ω. Therefore, in both cases the solution returns to the original point u = 0. The amplitude and the period of the oscillations about u = 0 can be found from (2) and (5), respectively (for simplicity we will normalize them):

$$\frac{\pi \alpha}{\omega} = 1, \qquad \frac{1}{\omega} \arccos\left[1 - \frac{\alpha}{\gamma}\, B\!\left(\tfrac{1}{2}, \tfrac{1}{3}\right)\right] = 1 \tag{10}$$

However, in contrast to a classical version of (9),

$$\dot{u} = -\delta^2 u \tag{11}$$

where u = 0 is a static attractor, the same point u = 0 is not a static, and not even a periodic, but a chaotic attractor. Indeed, there are several equally probable patterns of oscillations:

$$0, 1, 0, -1, 0; \quad 0, -1, 0, 1, 0; \quad 0, -1, 0, -1, 0; \quad 0, 1, 0, 1, 0; \ \ldots \tag{12}$$

which can follow each other in an arbitrary order. In probabilistic terms the oscillations can be characterized as:

$$u = 0 \ \text{at} \ t = \frac{2\pi n}{\omega}, \qquad u = \pm 1 \ \left(\text{with probability } \tfrac{1}{2} \text{ each}\right) \ \text{at} \ t = \frac{(2n+1)\pi}{\omega}, \qquad n = \ldots, -2, -1, 0, 1, 2, \ldots, \text{etc.} \tag{13}$$

so the probability of any combination of the patterns (12) can be found from (13). It is easy to conclude that the chaotic attractor u = 0 of (9) is different from chaotic attractors in classical dynamics. Firstly, here the mechanisms of stability and instability act sequentially: during the first period the neuron is attracted to the point u = 0, then it is repelled from it (in one of two possible directions). Secondly, the time of approaching the center u = 0 is finite (due to failure of the Lipschitz condition at u = 0, see (10)). That is why this chaotic attractor is terminal. Clearly terminal chaos is characterized by a well-organized probabilistic structure (see (13)), which simplifies its prediction and control. The latter property provides compelling reasons for utilizing terminal chaos as a compact memory storage. We recall that a static attractor in current neurodynamical models is successfully exploited for associative memory and pattern recognition: the idea of storing patterns as point attractors implies that initial configurations of neurons in some neighborhood of a memory state will be attracted to it. Let us compare the performance of (11) with the performance of its non-Lipschitzian version (9) in that respect. Obviously (11) can store only one number, u = 0. At the same time, (9) can store, in addition to that, all the probabilistic characteristics of the chaotic oscillations (13). In other words, (9) can store a temporal behavior of the solution.
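The probabilistic structure (13) is simple enough to check with a coin-flip model. This sketch samples the normalized symbol sequence directly rather than integrating (9): the neuron sits at u = 0 at even critical instants and at u = ±1 (probability 1/2 each) at odd ones, so a length-5 pattern contains two independent coin flips and each pattern in (12) has probability 1/4.

```python
import numpy as np

rng = np.random.default_rng(42)

# Coin-flip sketch of the normalized probabilistic structure (13): u = 0 at
# even critical instants, u = +1 or -1 (probability 1/2 each) at odd ones.
def sample_pattern(n_points, rng):
    return tuple(0 if k % 2 == 0 else int(rng.choice([-1, 1]))
                 for k in range(n_points))

trials = 100_000
target = (0, 1, 0, -1, 0)          # one of the patterns listed in (12)
hits = sum(sample_pattern(5, rng) == target for _ in range(trials))
print("estimated P(0,1,0,-1,0):", hits / trials)   # exact probability: 1/4
```

Any other pattern in (12) gives the same estimate, which is the sense in which the patterns are equally probable and the attractor stores a probability distribution rather than a single number.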



3 Terminal chaos in one-neuron dynamics

In this section we will consider more complex models of terminal chaos which can be performed by one-neuron dynamics. We will start with the following microdynamics:

$$\varepsilon(t) = \delta^2 u (0.5 - u)(u - 1), \qquad \delta \to 0 \tag{14}$$

In contrast to the previous case, now the solution to (7) is undetermined not only at u = 0, but also at u = 1. (The root u = 0.5 can be ignored since it does not coincide with any critical point, see (10).) Now when the point u = 0 becomes a terminal repeller, i.e., when sin ωt > 0, the solution can escape to u = −1 or u = 1. In the first case it will return to u = 0, since ε(u = −1) > 0. In the second case, since ε(u = 1) = 0, the solution can split again into two new branches: it can return to u = 0, or it can approach u = 2. In the latter case the solution will come back to u = 1 since ε(u = 2) < 0. Hence, the solution will be bounded: −1 ≤ u ≤ 2.
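The branching just described can be summarized as a small Markov chain over the critical points. This is a sketch: the two-way splits at u = 0 and u = 1 are taken as fair coin flips (an assumption), while the moves away from u = −1 and u = 2 are forced by the signs of ε derived above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Markov-chain sketch of the branching induced by the microdynamics (14).
transitions = {
    0:  lambda r: int(r.choice([-1, 1])),   # terminal repeller: free split
    -1: lambda r: 0,                        # eps(-1) > 0: pushed back to 0
    1:  lambda r: int(r.choice([0, 2])),    # eps(1) = 0: splits again
    2:  lambda r: 1,                        # eps(2) < 0: pushed back to 1
}

u, path = 0, [0]
for _ in range(10_000):
    u = transitions[u](rng)
    path.append(u)

print("states visited:", sorted(set(path)))
occupancy = {s: path.count(s) / len(path) for s in sorted(set(path))}
print("occupancy:", occupancy)
```

For this coin-flip model the chain never leaves the interval −1 ≤ u ≤ 2, matching the boundedness argument above, and in the long run it spends about 1/3 of the time at each of u = 0 and u = 1, and 1/6 at each of u = −1 and u = 2.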
