**Computation in Nervous Systems.**

**Gerhard Werner**

Biomedical Engineering Program

Department of Electrical and Computer Engineering

University of Texas at Austin, TX, USA

last updated: 7/25/2000.

**Index:**

Introduction

The Very Idea of Computation

Turing Machine Computation

The Analog-Digital Distinction

The Origin of the Digital Brain

The Coding Problem

Second Order Cybernetics

Excursus to Nonlinear Science

Neurodynamics

Neural Network Modelling

Dynamical Systems in Cognition and Adaptive Behavior

Chaos, Computation and Physics

Chaos in the Nervous System

Dynamics in Physics and Computation

Lessons for Neuroscience

References

**Introduction: the past informing the present.**

Fulton's "Physiology of the Nervous System" of 1949 (51) contained a comprehensive view of the essential facts available at that time in a mere 650 pages (format 8x5"). Compare this, for instance, with the 1060 text pages (format 11x9") of the 1991 edition of "Principles of Neural Science" edited by Kandel, Schwartz & Jessell (66) as a representative example. Clearly, the growth of knowledge is impressive. In what areas of Neuroscience did it occur? Of the 1060 pages in the Kandel et al. book, almost 600 pages deal with ionic, molecular and electrical events at synapses, receptors and related issues; topics that existed at best in bare outline in the days of Fulton's book. The growth of detailed knowledge is also impressive when considering the turnover in the chapters on central processing. There, entirely new chapters take the place of older accounts, superseding them in intricacy and subtlety of detail.

Despite these extraordinary advances, the "higher functions" of the brain, such as perceiving, thinking, and adaptively interacting with the world, have remained inscrutable achievements, the magnitude of which is most impressive under conditions of breakdown in neurological disease. New experimental methods using combined behavioral, neurophysiological and imaging techniques add to the realization of the intricacy of the functions which brains (in conjunction with the organism as a whole) are capable of performing.

Is there a lesson in the striking discrepancy between the wealth of observations at all operational levels of the Nervous System and the still lacking comprehension of the system's performance as a whole, embedded in its environment? I will argue in this essay that the conceptual and operational implications of the Nervous System's functional and structural complexity have largely eluded mainstream theoretical Neuroscience: notably by disregarding nonlinear system dynamics, of which the generation of information by symmetry breaking is an important consequence. Therein lies the fundamental difference between Physics and Biology (155): the capacity of non-equilibrium systems to follow different trajectories in phase space from different initial states, and in different contexts.

The Neurosciences have traditionally been open to enlisting concepts and methods from other disciplines, if not in a formal way then at least as metaphors or informal models for thinking and talking (94). The history of the past 50 years attests to the pervasiveness an adopted world view can assume: on the one hand creating an extraordinary momentum for research, on the other hand marginalizing the pursuit of alternatives. Thus arises the illusion of a paradigmatic consensus among participants, perpetuated by flattening into a linear, monological narrative what historically was a process of disagreement, qualification, and deliberately selected "roads not taken" (for a penetrating study of this process in the history of Science, see: 23).

My objective in this essay is, first, to retrace the signal events that have shaped what is now a widely accepted conceptual framework in the Neurobehavioral Sciences, and to mark the junctures at which a consensus-shaping dialogue among participants relegated alternatives at best to minority opinions of negligible impact, at worst to outright disregard, despite the absence of rigorous evidential criteria. My second objective is to tell the story of the "losers" in the creation of the "majority view", and of their currently rising credence as some of the arbitrary choices that were welded into the "master narrative" are no longer immune to scrutiny.

A notion shared by "winners" and "losers" in the Neuroscience debate alike is to view the brain as a system that generates an output based on certain transformations applied to an input. In a general sense, it thus performs a function commonly associated with 'computing': computation designates a process that is carried out by a dynamical system moving through its state space from an initial to a final state.

There are many ways in which this can be accomplished, and merely to say "X is a computer" or "Y is a computational process" is ambiguous. Different kinds of computation are conceivable, some differing in very fundamental ways. Analogue computation is based on the execution of symbolic expressions (algorithmic procedures) by physical processes (turning of motors, amplifying electric currents, etc.). The variables in algorithmic expressions are the place holders for the magnitudes of the respective physical processes. The relationship between the abstract instructions and the physical execution of the computation is transparent: Vannevar Bush, known amongst many other achievements for the creation of the 'Differential Analyser', is reported to have cherished observing the actual physical execution of the computational process, i.e. the turning of the motors etc., as the tangible and experiential manifestation of the act of computing (15).

In digital computation, this relation is more convoluted: physical states in the world (or symbolic data) and procedural instructions are represented in the machine as bit strings, according to encoding rules. The latter map one configuration of the former into another until the procedure halts. At this time the bit string in the machine is 'decoded' into - as the case may be - a new physical state in the world or a new symbolic datum. There is, then, a triad of notions involved: a representation of the world in the machine (reminiscent of the Cartesian projection of the world on the screen of the Mind); the encoding and decoding relations for establishing correspondence between physical states and symbolic representations; and, finally, the formal rules for syntactic (algorithmic) transformation of machine states. The basic principle rests on the mapping between the mechanical transformations of machine states and logical implication relations (108).
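The triad can be made concrete with a deliberately toy sketch, assuming the simplest possible 'world state' (an integer) and the simplest possible syntactic rule (binary increment); all names here are illustrative choices of mine, not drawn from the cited sources:

```python
# Toy illustration of the encode -> transform -> decode triad of digital
# computation. All names are illustrative, not from the essay's sources.

def encode(n: int, width: int = 8) -> str:
    """Represent a state of the world (an integer) as a bit string."""
    return format(n, f"0{width}b")

def transform(bits: str) -> str:
    """A purely syntactic rule on bit strings: binary increment.
    The machine manipulates symbols; it knows nothing of 'numbers'."""
    out, carry = [], 1
    for b in reversed(bits):
        s = int(b) + carry
        out.append(str(s % 2))
        carry = s // 2
    return "".join(reversed(out))

def decode(bits: str) -> int:
    """Map the final bit string back onto a state of the world."""
    return int(bits, 2)

# The mechanical symbol manipulation mirrors the arithmetic relation n -> n+1.
assert decode(transform(encode(41))) == 42
```

The point of the sketch is that `transform` never interprets its input: the correspondence with addition exists only through the encoding and decoding relations.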

Analogue and digital computation are both forms of
information processing, using 'information' in an every-day sense rather
than with information theoretic connotations. In one form or another,
information is always instantiated in physical states: *"Information
is Physical"* (74). But with respect to computation,
the essential difference is this: analogue computation __applies__ the
laws of physics, whereas digital computation __simulates__ them. This
distinction matters since the differential equations that constitute the
laws of physics are not (digitally) calculable with the advertised accuracy
in finite memory. But computationally implementable laws of physics require
operations available (in principle) in our actual physical universe
(75). More on this in a later section of this essay.

Finally, there is a third kind of process, also subsumed under the broad notion of 'computation', which is biologically most important, and also most neglected. In digital computation,
symbolic content is assigned to and represented in machine structures which
constrain the number of possible interactions among machine components.
Conventional digital computers and Connectionist architectures (see below)
are programmable at the level of their structural design since the function
of each of the components, and their interrelations (wiring diagram), are
specified by the designer. This form of __structural programmability__
is efficient at the expense of being tied to a static computational medium
(36). The contrast is exemplified by the __structural
non-programmability__ of many natural physical systems. Examples are
conformational protein interactions, allosteric phenomena in enzyme catalysis
or ion channels across neuronal membranes, and second messenger mechanisms.
In these and related natural biomolecular systems, information is processed
quite differently from machines with programmable structures. In structurally
non programmable systems, the physical substrate itself is
the target of transformations by informational 'instructions'
from its environment, which propel it dynamically through a continuous
state space. Therein lies the contrast to structural programmability where
it is the symbolic content of the computation that moves dynamically in
state space, constrained by a static machine structure. Structurally non
programmable systems can be
__simulated__ by digital computation, but
the simulation has no relation whatsoever to the physicality of the simulated
process: any confusion of the two is a category error.

As is well known, Turing conceived a minimalist, abstract conceptual machine, designed according to what he thought constitutes the essence of human thinking. At stake are only the abstract principles of operation. It does not matter how the device is physically implemented (with some reminiscence of Descartes' clockwork constructions of toys). What does matter is that this device consists of a component which can read a segment of a tape and issue instructions according to some built-in rules; and a tape on which symbols from a finite alphabet can be read, written and erased, according to these rules. Turing proved that this abstract conceptual device is functionally equivalent to any algorithmic computing device whatsoever.
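A minimal sketch of such a device, under the assumption of a toy rule table of my own choosing (unary increment: append one more '1' to a string of 1s):

```python
# A minimal Turing-machine sketch: a head reads one tape cell, consults a
# rule table, writes, moves, and changes state. The rule table below is a
# toy example (unary increment), not taken from the essay's sources.

def run_tm(tape, rules, state="start", pos=0, max_steps=1000):
    cells = dict(enumerate(tape))            # sparse tape; blank cell = '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rules: scan right over 1s; at the first blank, write a '1' and halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
assert run_tm("111", rules) == "1111"
```

Everything specific to a given computation lives in the rule table; the machinery of reading, writing, and moving is the same for all of them, which is what makes the universal machine possible.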

Computing with a Turing device is based on the
recursive specification of a procedure by which a given type of problem
can be solved in a finite number of mechanical steps. Such a conceptual
machine permits the precise delineation of the type of computing tasks
it can in principle execute, and what the requirements are. This principle
is now known as the __Church-Turing Thesis (CTT)__ . The thesis formulates
the notion of effective or mechanical method in logic and mathematics:
a method is said to be effective if it consists of a finite number
of exact instructions, executable (in principle) by any physical device
of the type of a Turing machine (TM). This thesis is not formally provable,
and has remained intuitive and empirically contingent: a surprising contrast
to the pervasive influence it has assumed in many disciplines, including
those in which rigor of formal proof is of premium value.

One of the features of the TM consists in exchanging logical transparency (and thus certainty) for an acceptance of limitations (90). These are manifest in a number of ways. When engaged with a problem, a TM may at some point come to a Halt, having arrived at the end of the computation; or it may go on for a long time, in which case it is impossible to know whether the problem is TM-unsolvable, or whether a Halt will still occur sometime in the future. Indeed, it is provably true that there is no effective procedure for determining in advance whether an arbitrary program will halt. There are also classes of problems which, while in principle TM-solvable, would require inordinate computing time (irrespective of the computing speed), and others for which storage requirements increase faster than polynomially with the number of data elements, thus being intractable with finite memory.
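The predicament described above, that watching a running program tells us nothing about whether it will ever halt, can be made vivid with a loop whose termination for every input is a famous open question: the Collatz iteration (the choice of example is mine, not the essay's).

```python
# The halting problem made concrete: this loop halts for every n anyone has
# tried, yet no general proof of termination is known (the Collatz problem).
# Absent a halt, we cannot distinguish "not yet" from "never".
# (Example chosen for illustration; it is not from the essay.)

def collatz_steps(n: int) -> int:
    """Iterate n -> n/2 (even) or 3n+1 (odd); count steps until reaching 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

assert collatz_steps(6) == 8     # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
assert collatz_steps(27) == 111  # a long excursion before halting
```

Each individual run is a finite, checkable fact; the universal claim "this halts for all n" is exactly the kind of question for which no effective procedure is guaranteed to exist.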

The embodiment of the abstract principle of
the TM as physical machine states gave some plausibility to the idea
that the principles of Turing computability also apply to the physical
world which, of course, also includes brains. Thus originated the __Physical
CTT__. This thesis can be formulated in a number of ways,
each emphasizing one or another aspect. Here are two examples: *"any
causally determined sequence of physical events can be represented by a
purely recursive process"*, and *"any causal sequence can be
described by purely syntactic means"*; the latter making explicit
that causal relations among material events in the physical universe can
be brought into congruence with implication relations among propositions
describing those events. This is the basis for modelling physical processes
in formal, syntactic structures, with suitable encoding and decoding of
variables (109).

Like the computational CTT, its physical counterpart is non-provable and contingent. It is surrounded by much controversy. The
following quote from Deutsch (40) reflects one
point of view:* "There is no a priori reason why physical laws should
respect the limitations of the mathematical processes which we call algorithms
... there is nothing paradoxical or inconsistent in postulating physical
systems which compute functions not in [the set of recursive functions]...
Nor, conversely, is it obvious a priori that any of the familiar recursive
functions is in physical reality computable. The reason why we find it
possible to construct, say, electronic calculators, and indeed why we can
perform mental arithmetic, cannot be found in logic or mathematics. The
reason is that the laws of physics 'happen to' permit the existence of
physical models for the operations of arithmetic such as addition, subtraction
and multiplication" *(see also: 90).

The basic question is: can the physical universe be exhaustively described by recursive functions? To be sure, some aspects of neurophysiological processes can be effectively captured by digital computers. But this by no means implies that brains are digital computers, nor does it imply that a neurophysiological observation is the result of some digital computation in neural structures. Many classes of physical systems can be simulated by digital computation without anyone suggesting that these systems achieve a given behaviour because they themselves perform such computations (36).

The real question is whether or not all of Physics
is amenable to __digital modelling__ by a TM process, and whether there
is some component of Physics that cannot, in principle, be modelled by
Turing machine computation. One of the decisive limitations has to
do with finitist restriction on TM computation in the real world: since
the tape cannot be of infinite length (as postulated in the idealized model),
computational errors due to rounding and truncating of real numbers render
certain classes of nonlinear systems, although deterministic, noncomputable.
However, neither this fact, nor the question of the principled soundness
of the CTT deterred experimental and computational neuroscientists from
assimilating Turing machine computation theory into their discursive practices
and theory formation.

The ambiguous meaning of 'computing', referred to
earlier, becomes still more confounding in the relation of TM computation
to brain function. There are three levels at which TM computation
can be engaged:

1) at the level of the **physical CTT**, which entails that the brain is actually a physical instantiation of a Turing machine, in the same sense as the von Neumann architecture is for electronic circuitry;

2) at the level of the **computational CTT**,
with Turing machine computation as a model for the brain's operation in
the sense that execution of brain processes can be mapped on to the execution
of Turing machine states, with the materiality of the computational states
left unspecified;

3) at the level of simulating brain processes, whereby
the criterion for adequacy is merely identity (or similarity) of outcome,
irrespective of the fact that the simulated computation may have arrived
at its results via a route that has no relationship whatsoever to brain
processes, and no interpretation in such terms.

It is now generally assumed, unless otherwise specified, that most ordinary Neuroscience discourse refers to TM computation, although the level of commitment is rarely explicitly enunciated. Nor is there, generally, an indication of explicit awareness that discourse at levels 1 and 2 tacitly imports the entire load of conceptual baggage associated with them. This baggage consists of two essential components: first, representing physical states in the world as symbolic machine (brain) states by means of an encoding relation; and second, transforming these machine states by means of effective and recursive algorithmic procedures, which upon decoding designate states in the world.

Obviously, the notorious problematic of 'cracking the neural code' has a pivotal place in this scheme, to which I will turn in the section on coding. I will then argue that the elusiveness of the coding scheme of the nervous system raises the suspicion that not all, and perhaps none, of the requirements for Turing machine computation in (or by) the nervous system are warranted.

**The Analog-Digital Distinction**

The laws of Physics are generally formulated in terms
of continuum mathematics, requiring unlimited sequences of operations.
Thus, all of the classical continuum mathematics, normally invoked in the
formulations of physical law, is not materially executable: quoting Landauer
(75) *"if we cannot distinguish pi from a very close neighbour, then
all the differential equations that constitute the laws of physics are
only suggestive; they are not really algorithms that allow us to calculate
the advertised arbitrary precision"*. Consider planetary motion, the system that led Poincaré to discover chaos: planets move with high precision according to the gravitational constant G, which we know at best to a few decimal places. But planetary motion is guided by the real value of G, as a constant of nature. In general, rational numbers are associated
with discrete processes, real numbers with analogue processes. Almost all
of physics is framed in Real Numbers, but it is generally assumed that
the set of Rational Numbers is sufficient for modeling and scientific explanation
without jeopardizing the qualitative adequacy of the explanations
and laws (156). But strictly speaking, digital computation
works with abstractions of physical properties, thereby rendering the true
information content of the laws of physics inaccessible. Nancy Cartwright
(31) is, in part, motivated by similar ideas
when she speaks of *"How the laws of Physics lie"*.
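Landauer's point can be seen in a few lines: iterate a chaotic map at two digital precisions, and the trajectories, though governed by the same deterministic law, soon bear no relation to one another. The logistic map is my choice of illustration, not drawn from the cited sources.

```python
# Digital rounding vs. the real-valued process: iterate the chaotic logistic
# map x -> r*x*(1-x) at double precision and at single precision (simulated
# by rounding through IEEE 32-bit floats each step). The two trajectories
# agree at first and then diverge completely -- finite-precision computation
# cannot track the real-valued trajectory indefinitely.
# (Illustrative sketch; the map and parameters are my choices.)
import struct

def f32(x: float) -> float:
    """Round a double-precision float to IEEE single precision (~7 digits)."""
    return struct.unpack("f", struct.pack("f", x))[0]

def trajectory(x0, r=4.0, steps=60, round_step=lambda x: x):
    x = round_step(x0)
    out = []
    for _ in range(steps):
        x = round_step(r * x * (1.0 - x))
        out.append(x)
    return out

hi = trajectory(0.1)                  # double precision throughout
lo = trajectory(0.1, round_step=f32)  # rounded each step to ~7 digits

assert abs(lo[0] - hi[0]) < 1e-6                              # early agreement
assert max(abs(a - b) for a, b in zip(lo[40:], hi[40:])) > 0.1  # later: unrelated
```

The initial rounding error is of order 10⁻⁹, but the map roughly doubles it at every step; after a few dozen iterations the two digital approximations of the same deterministic law no longer approximate each other at all.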

By 'digital' one usually has in mind
a computation performed in several discrete steps by the manipulation of
symbols. Analogue computation is taken to be a smooth physical process
which transforms the initial value of some physical observable to
a final output. In the analogue case, reference is made to the physical
laws that govern the dynamics of the physical machine. The key to the distinction
lies in the role of recursion (98). Being recursive
is an abstract property of a functional relation. The analogue-digital
distinction rests on whether this abstract property is in some way reflected
in the physical process of computation, or completely extraneous to it.
Pitowsky (l.c.) gives an illustrative example: *"Take the motion of
the planet Earth. Its position relative to the sun at time t is a recursive
real function of its position at time t=0. Yet there is nothing in our
description of this motion that in any way refers to this fact. Recursiveness
is in this case a totally foreign and completely irrelevant feature of
the observable"*.

The point here, and in analogous situations, is this: the analogue process of motion was for computational purposes mapped on to a recursive process. In this sense the recursive computation is a convenient artefact which makes us forget that the physical process we are dealing with is in reality an analogue process. This is quite admissible in Physics, but questions arise in Biology, where the mapping between the actual physical events (allosteric transformations, molecular shape configurations with Coulomb forces, etc.) and the digital model involves a much greater "distance" from the physical process in Nature. What then happens, I suggest, is this: we force a biological analogue process into the mould of recursive computation. A natural process is thus artefactually made into an algorithmic one, and thereby made to obey the computational CT Thesis, leaving us with the misleading impression that Nature herself is also algorithmic!

**The Origin of the "Digital Brain"**

For the purpose of exposition,
I cluster the next series of significant events around the Josiah Macy
Foundation Symposia. The symposium series began in 1943 and extended over
a decade. The meetings were a virtually unparalleled undertaking, intended
to explore the interrelations among new inventions and insights that were
forthcoming at rapid rate in various disciplines. The composition of the
meetings varied from time to time, with W. McCulloch being in a leading
role. Participants were drawn from the Sciences of Physics, Mathematics,
Biology, the Humanities and the emerging fields of Computer Science and
Automata theory. Concerning the understanding of brain function, the prevailing
(though by no means unanimous) attitude is reflected by McCarthy's and
Shannon's preface to a 1956 symposium on Automata Theory: "*Currently
it is fashionable to compare the brain with large scale electronic computing
machines. Recent progress in various related fields leads to an optimistic
view toward the eventual and not too remote solution of the analytic and
synthetic problems*".

What then were the essential themes that
constituted the deliberations of the Macy Foundation Symposia that warranted
this optimism? McCulloch and Pitts published in 1943 their seminal work
*"A
logical calculus of the ideas immanent in Nervous Activity"* (86)
which established that stylized, abstract models of neurons (as switching
devices) can be combined into logical networks for representing all of propositional
logic. With this thesis, McCulloch and Pitts set a principal agenda
for the 'digital brain' in Theoretical Neurobiology, established
the basis for the study of finite state automata and, presumably, also
influenced von Neumann's developing concept of the architecture for
digital computers, explicitly set forth by him in a technical report
two years later.

This work was part of McCulloch's life-long commitment to the idea of an 'Experimental Epistemology' as the study of natural phenomena in the brain via the logical operations of artefacts. It was a brilliant synthesis of several disparate influences: Lorente de No (80) had provided exquisite demonstrations of neuronal circuitry in Golgi stains, and had also introduced into neurophysiological studies the idea of neurons acting as 'coincidence detectors' (79); the all-or-none character of neural events; and the edifice of symbolic logic of Carnap, Hilbert, and Russell and Whitehead. Combining these sources led to the powerful idea that the nervous system could be considered as a system of interconnected logical devices, capable of forming relations among symbolic propositions. As is well known, this work initially triggered a series of studies in the field of Neurodynamics and led eventually to the extensive field of Neural Network computation. I will return to both under their respective headings.
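The core of the 1943 construction, neurons as threshold switching devices whose networks realize the propositional connectives, can be sketched in a few lines (the particular weights and thresholds are my illustrative choices, not taken from the paper):

```python
# A McCulloch-Pitts unit: it fires (1) iff the weighted sum of its binary
# inputs reaches threshold. Suitable weights and thresholds realize the
# propositional connectives, so networks of such units can represent
# propositional logic. (Weights/thresholds are illustrative choices.)

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

# XOR is not realizable by a single unit, but a two-layer net suffices:
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

assert [AND(a, b) for a, b in [(0,0),(0,1),(1,0),(1,1)]] == [0, 0, 0, 1]
assert [XOR(a, b) for a, b in [(0,0),(0,1),(1,0),(1,1)]] == [0, 1, 1, 0]
```

The XOR line hints at why network *structure* matters: some propositions require composing units in layers, which is precisely what connects the 1943 result to finite-state automata and later network architectures.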

As another signal event, the year 1948 marked the
publication of Norbert Wiener's book "*Cybernetics*" (139)
with the notable subtitle: *'Control and Communication in the animal
and machine'*. Wiener's innovative approaches to control theory originated
with the wartime effort to develop control systems for missiles and airplane steering. For the development of the mathematical theory of these and related processes, he took some of his cues from the notion of homeostasis in Biology, and from experimenting with tremor and ataxia in animals, with which he familiarized himself in collaboration with the physiologist A. Rosenblueth.
This collaboration led in 1943 to a landmark paper entitled "*Behavior,
Purpose and Teleology" *(110) which
delineated a taxonomy of forms of behavior on the basis of feedback: Teleology
in the sense of Aristotle's "final cause" - could be reconciled with
determinism if governed by negative feedback, regulated by deviation from
a specified goal state. Cybernetics became for Wiener a comprehensive vision:
it applied to formal engineering applications, and to biological
and social systems, as well. And it gave substance to the machine metaphor
of man, and the idea of a man-machine symbiosis. The notion of 'Cyborg'
originating much later in literature and in the Humanities, attests to
the ramifications, some of which were quite disturbing to Wiener's own
liberal humanistic outlook. His book *The Human Use of Human Beings* is a telling testimony (140).

The concept of information introduced by Shannon
in 1948 (114) is complementary to its role in
Wiener's control theory: in the former, it deals with transmitting binary
information over noisy communication channels, and in the latter with recovering
signals from noise. One of Shannon's significant insights established the
formal correspondence of information (as defined in his theory) with Entropy
in closed thermodynamic systems. Since this theory is often inappropriately
interpreted and applied (the latter primarily in our every day language
habits), it warrants quoting verbatim from the introduction to Shannon's original paper, which was entitled "A mathematical theory of communication":
*"The
fundamental problem of communication is that of reproducing at one point
either exactly or approximately a message selected at another point. Frequently
the messages have meaning; that is they refer to or are correlated according
to some system with certain physical or conceptual entities. These semantic
aspects of communication are irrelevant to the engineering problem"*.
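Shannon's point is visible in the formula itself: only the probabilities of selection from the alternative set enter, never the meaning of the symbols. A minimal sketch (the distributions below are arbitrary illustrative choices):

```python
# Shannon entropy H = -sum(p_i * log2(p_i)): the average information, in
# bits, of a selection from a set of alternatives. Only the probabilities
# enter -- the semantics of the symbols is, as Shannon says, irrelevant
# to the engineering problem. (Distributions are illustrative examples.)
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair binary choice carries exactly 1 bit per selection...
assert entropy([0.5, 0.5]) == 1.0
# ...a biased source carries less (redundancy)...
assert round(entropy([0.9, 0.1]), 3) == 0.469
# ...and a certain outcome carries no information at all.
assert entropy([1.0]) == 0.0
```

Relabelling the two alternatives ("yes"/"no", "spike"/"silence") changes nothing in H, which is exactly the property MacKay's 'structural information' was meant to supplement.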

After introducing the idea of the programmable electronic computer, J. von Neumann devoted much attention to the conceptual issues related to the self-reproduction of automata (28): a daring advance into the minefield of reflexivity which eventually became one of the principal issues of the Second Phase of Cybernetics. It also led him to material construction at the edge of the 'computable', as G. Kampis (65) showed much later.

In striking contrast to the prevalence of the digital point of view, R. Ashby (14) introduced theoretical results and practical examples with 'ultrastable systems' which can achieve adaptive behaviour and learning by purely electro-mechanical means, without recourse to digital computation whatsoever. Given some function of the environment, the internal components of this automaton reorganize to a new stable state with adaptive functionality. As an intriguing aside: after 50 years of hibernation, M. Tilden (124) at Los Alamos invented refinements of Ashby's basic idea which enable robots to cope with unstructured environments, presumably by exploiting nonlinearity in analogue circuits. Gordon Pask (93), another core Cybernetician, built a chemical automaton that would adaptively develop new sensing mechanisms for guiding its internal adjustment to new environmental conditions. Despite its far-reaching theoretical and practical implications (30), this went the way of several other brilliant ideas of that era.

These and many other ideas were boiling in the crucible of the Macy Symposia, with intense, often quite emotional controversy abounding. The objective of the conferences was to examine common conceptual grounds for brains and computers, and to generalize from such principles to interpersonal and social processes. Dissension was voiced mostly by those who were unwilling to subscribe to the hegemony of the digital doctrine: notably R. Gerard as Neurophysiologist, and G. Bateson as Ethologist. Efforts to narrow the gap between Shannon's information and semantics were mostly led by D. MacKay (82), who insisted that signals can have a double valence: from one perspective, they relate to the selection from a set of alternatives agreed upon by sender and receiver (i.e. selective information in Shannon's sense); from another perspective, as structural information, they relate to the receiver alone, and its instrument for interpretation. Bateson's thoughts on metacommunication were in line with this idea. But Shannon carried the day in this dispute: the neat quantification of 'selective' information was too seductive to compete successfully with 'muddy' semantics.

Despite the profound impact the neural nets
of McCulloch and Pitts made (and still continue to make), McCulloch himself
remained dissatisfied. Barely four years after the "Logical Calculus" paper, he and W. Pitts devoted another seminal study (99) to assemblies of co-operating neurons that would create abstractions of stimulus representations, applying integro-differential equations (hence: continuum mathematics!). A few years later, still searching for new ways, McCulloch foresaw in his concluding remarks to a meeting on "*Biological Prototypes and Synthetic Systems*" that new forms of logic, and a crucial extension of thermodynamics, would be required.

The formal dissolution of the Macy Conferences in
1953 marked the end of an era with the consolidation of a network of ideas
which, together, compose the framework of the __'Digital Brain'__: the
elegant simplicity of Turing machines computing a logical calculus of discrete,
symbolically represented information reduced the brain's complexity to
an internally consistent and transparent account. Despite McCulloch's own
realization of shortcomings, the 'Digital Brain' assumed a life of its own, warding off the objections from the analogue camp and giving rise in short order to two influential offspring: Artificial Intelligence and the 'Cognitive Revolution' of the 50s and 60s. In the Neurosciences
themselves, the growing emphasis on recording single neuronal activity
with microelectrodes seemed to underscore the merit of digital processing.
The role of the microelectrode technique for defining explanatory models
is explicitly discussed by Uttal (127). As an added bonus, symbolic representation conformed to the influential doctrine of
Cartesian Representationalism that has shaped Western thought for the past
three Centuries.

What had started in the 40s and 50s in the minds
of many as a close analogy between electronic computers and brains turned
in time into a 'poetic metaphor' in habits of thought and discourse (at
least for many Neuroscientists). Metaphors can be helpful with finding
new perspectives of old problems, or propelling thought into new directions.
On the other hand, as "Trojan Horses", they may also canalize thought in some directions and foreclose others. R. Lewontin (77)
attributes a word of caution to Rosenblueth and Wiener: "*the price of
metaphor is eternal vigilance*".

I suggest that the most important among the 'blind
passengers' imported with the metaphor is the __Physical CT Thesis__,
as it seeks to fit Nature into the idealized and unrealistic
constraint of an algorithmic, recursively programmable computer.
This also bears on the question: is Neurocomputing modelling or simulation? Much of current Neuroscience discourse straddles the spectrum between the two, leaving the discussant's commitment to one or the other
often unspoken or ambiguous. Ambiguity shrouds also the use of the term
"information" in Neuroscience discourse, where the alternative contexts
can be communication engineering, control engineering, or its plain,
informal everyday use.

The end of the Cybernetics era also brought forth a new
set of topics. The principal concerns of this new epoch are commonly subsumed
under the term *"Second Order Cybernetics"*, roughly spanning the
period from 1960 to 1980. It developed into an assembly of ideas, less
closely connected than were those of its predecessor, and lacking
its organizational cohesiveness.

Assuming that the Nervous System is a machine for
processing information in the mode of the 'digital brain' mandates
inquiring how the physical events impinging on an organism's receptors
may be represented, transformed and intermingled among various sources.
The *locus classicus* of a comprehensive examination of these issues
is the report by Perkel and Bullock (96), based on
a work session of the Neuroscience Research Program in 1968.

Perkel & Bullock identified some 15 neural 'candidate codes', each of which could in principle serve as a carrier of significant information in the nervous system. The report is a rich source of valuable insights and careful circumspection, and raises tantalizing questions that regrettably often do not receive due attention in current work: why assume a single code for all species and neuron systems, under all conditions and at different developmental stages ? Why insist on a generic form of coding at all levels of the Nervous System, when many other features of neuron interactions may fulfil specific functions in different contexts ? What criteria identify a certain aspect of neural activity as 'code' ? Is there a unique relation between certain attributes of the incoming impulse barrage and the recipient neuron's response ? If so, what are the determining properties of the input ? Strictly speaking, the distinction between functionally relevant and incidental properties of presynaptic impulses separates 'signs' from 'codes' (157).

Twenty-five years after the publication of this report, Bullock (26) returned to the coding problem, reinforcing his call to widen the window of variables that may contribute to neural integration at the system level: variables that may have lost attractiveness because they do not fall readily within conventional neuron modelling tools. A seemingly inexhaustible stream of ingenious 'coding schemes' is forthcoming from inventions with Neural Network computations. However, most of them ignore or, at best, pay lip service to the many possibilities for non-synaptic interactions at presynaptic levels, of which chemical interactions, electrical field effects, and structural-geometric features of neuronal interactions are likely candidates (159). Softky (120) added to these variables the possible effect of transient localized events in fine dendritic branches as a mechanism for making spike generation in the service of coding more efficient. Sad to say for Neurophysiology, the investment of impressive ingenuity in theoretical coding schemes in computer simulations has proven to be more relevant for technological applications and innovations than for Neuroscience: see for instance the balance of emphasis in a recent collection of papers (81).

Despite the extraordinary output of publications, the 'cracking of the neural code' seems as elusive now as it was some 30 years ago. A steady flow of publications seeks to show the potential effectiveness of this or that coding scheme, were it to be used by a nervous system. The favourite candidates among neuron spike codes change, it seems, in cyclic order: rate codes, pulse codes (the pulse response code and the integrate-and-fire code), the analogue match code, each having their own variants and subtypes; ensemble codes; and various combinations of interaction between neuron spikes and slow wave activity. The field has recently been reviewed in great detail by Rieke et al. (107), with emphasis on the single spike as carrier of the code.

Are there deeper reasons for the coding problem
appearing quite intractable ? Some answers are contained in Bullock's admonitions
which suggest reframing the focus of investigations, away from exclusive
attention to neuron spike discharges as the __only__ significant state
variable of the Nervous System. Taking these admonitions seriously has
deeper implications: for bringing non-spike mediated neuronal interactions
into the arena of investigation entails departing from the strict adherence
to the model of the 'digital brain'. It entails returning to the opponents
of the 'digital hegemony' in the forum of the Josiah Macy Symposia,
and beyond. Admitting chemical interactions, electrical field effects and
other local analogue processes to significant roles in neuronal interactions
also avoids the commitment to the __Physical CT Thesis__: this reinstates
the brain as a natural system subject to physical laws, rather than a recursively
computing machine.

To these considerations, I can now add a question of principle: does the notion of 'coding' apply at all to characterizing the dynamics of contextually dependent interactions among pools of neurons ? This is also related to the emerging evidence that the notion of 'representation' does not apply to neural systems, at least not in the form carried over from Cartesian Philosophy to Computer Science and to the 'Digital Brain' (44). These issues arise against the background of several more recent developments in Neurophysiology, as the functional unit of interest shifts from the individual neuron to dynamically linked aggregates of neurons.

Getting (52) surveyed in 1989 some of the lessons learned in invertebrate Neurobiology from attempts to 'crack' the functional organization of (biological) neural networks. 'Network cracking' fared very much like 'neural code cracking': attempts to determine the role of individual neurons in the network function, and their connectivity, failed. Instead, two conclusions emerged: first, networks operate on the basis of interactions among multiple nonlinear processes at cellular, synaptic and network levels; and, second, networks can be multifunctional, subject to context-dependent selection of one or another from among possible performances. Getting's paper is noteworthy for listing an impressive array of elementary neuronal functions that underlie the dynamics of switching among different (biological) network behaviors.

Among recent studies in vertebrates are Aertsen
et al.'s (1) observations of the dynamics of functional
coupling in the cerebral cortex: pairs of cortical neurons can display
rapid modulations of discharge synchronization; they may switch from
mutually incoherent states to joint synchrony, or between different patterns
of mutual coherence. The evidence suggests context-dependent rapid dynamic
association of neurons to functional groups. Von der Malsburg's (132)
idea of *"correlation dynamics"* may account for this phenomenon:
it postulates a dynamic process of self-organization for stabilizing
correlations by some form of self-amplifying synaptic activation in neural
networks which can assemble and disassemble 'internal objects'
at a macro level. Related to this idea is also the *"Dynamical
Cell Assembly Hypothesis"* of Fuji et al. (50):
a kind of resonance between internal system configuration and external
context is thought to emerge from the 'dialogue' among neuron
pools. In this process, attractors and even chaotic behaviour may be established,
governed by the interaction between the context of the neuronal aggregates
and the nature of the external signals received. The appropriate question
is then: what property of the neuronal discharge patterns supports
the 'dialogue' among neurons ? Is it firing rate, is
it timing of spikes for coincidence detection, is it the temporal fine
structure of impulse sequences, or some other not exclusively spike-related
measure ? In any case: it is not appropriate to speak
of 'coding' in the basic meaning of the term, which signifies
essentially invariant rules of mapping between two domains (135).
Instead, the system itself is an active participant in the orchestration
of activity among neuron pools, dynamically changing its contribution to
the dialogue with changes in the situational context and its internal dynamics.
It is then, of course, much more appropriate to speak of complex adaptive
systems than to adhere to the illusion of 'coding', taken in its essential
meaning.

Thinking in terms of attractors has also been influential for the work of other investigators: Tsuda and associates (125) have argued for some time that itinerant attractors could be a mode of cognitive processing in the brain. Recent experimental findings by Miyashita (88) in monkeys engaged in cognitive behavioural tasks suggested to Griniasti et al. (56) a model neural network which converts temporal sequences of patterns presented for discrimination into spatially distributed attractors. Amit (8) extended this idea to stabilizing reverberations in neuronal assemblies, maintained by the synaptic dynamics postulated by Hebb (62). Accordingly, Hebbian assemblies would function as configurations of neurons which collectively maintain each other in elevated states of activity, functioning as content-addressable, associative attractors. On this view, dynamically created and sustained attractors would be the instrument for 'representing' sets of equivalent stimuli. Equating 'assemblies' and 'attractors' is misleading, however, because of the latter's superior informational potential: an attractor refers also to its basin, from which multiple trajectories can originate in different contexts. Applying the notion of 'coding' to the relation between stimulus context and attractor dynamics misleadingly suggests the search for a unique mapping between the external and the internal domain, when the fluidity of the dynamics is, in fact, the antithesis of coding.

An admonition, also hinted at in Bullock's reflections, concerns treating the 'coding problem' generically, as if there were only one type of 'neural coding', everywhere, centrally and peripherally alike. Here, it is necessary to attend to the events at the interface of organism and environment. The organism is embedded in the energy fluxes of its surround, from which it samples according to the spectral selectivity of its specialized receptors. For making choices, decisions and selections on continuously varying variables, man invented the world of tools and artefacts for conducting a measurement. A measurement imposes a discontinuous scale on continuous variables. In informational terms, the precision of the measurement depends on how fine a discontinuous gradation is imposed on the continuous variable. This is the essence of man-made coding systems. Now, apply an analogous consideration to the organism-environment relation: the primary afferent nerve fibres originating from their respective receptors are in fact performing the continuous-discontinuous transformation, thereby generating a neural code as the information-transmitting link between the external world and the central nervous system. They supply information (loosely speaking) on the state of the world. Note, however, that this is a straightforward process of digitizing analogue variables. It is quite unlike the complexity of analog-digital neuronal interactions inside the nervous system: there, 'coding' becomes at best a loose manner of speaking, in tacit reference to the belief that states of the external world are somehow represented in encoded form, much as the 'digital brain' would have it. Peripheral processes assume in this distinction a role similar to the cognitively impenetrable 'functional architecture' which Pylyshyn (104) differentiated from central (cognitive) processing almost 20 years ago.
But this is not to say that there are not some processes inside the nervous system that at some point again create discreteness out of the complexity of its internal hybrid processing, as Anderson (12) conjectures on the basis of a new type of artificial neural network model.
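The continuous-discontinuous transformation described above, and the sense in which precision depends on the fineness of the gradation, can be made concrete in a small sketch (the variable, its range and the number of levels are arbitrary illustrations, not claims about any particular receptor):

```python
import math

def quantize(x, x_min, x_max, levels):
    """Impose a discontinuous scale of `levels` steps on a
    continuous variable confined to [x_min, x_max]."""
    x = min(max(x, x_min), x_max)          # clip to the working range
    step = (x_max - x_min) / levels
    return min(int((x - x_min) / step), levels - 1)

# The finer the gradation, the more information one discrete
# reading can carry: bits per reading = log2(number of levels).
code = quantize(0.73, 0.0, 1.0, 256)   # one of 256 discrete states
bits = math.log2(256)                  # 8.0 bits of precision
```

Doubling the number of levels adds exactly one bit of precision per reading, which is the informational content of "how fine a gradation" in the paragraph above.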

Restricting the 'coding problem' to primary afferent
neuronal channels does not resolve it, but turns it into a well defined
research agenda. Here it is now possible to ask some specific and meaningful
questions: what physical attributes are signalled in a given channel,
and in what quantitative relation ? What are the boundaries of the transduction
process ? How much of the world, and in what gradation, does the brain
get to 'see' ? This, of course, does not necessarily
imply that all of the presented signals are in fact usable or actually
used by the brain. But investigations conducted in this spirit at least
delimit the maximal signal space that is potentially available to
the brain. The idea of this approach originated with a study by MacKay
and McCulloch in 1952 (83), showing on information
theoretical grounds that up to 9 bits per impulse could be transmitted
across a synaptic link, using pulse interval modulation. The work of Bialek
(25) and associates addresses the related
question of coding efficiency in certain sensory channels with the surprising
result that information transmission in the periphery approaches the theoretical
limit in a number of sensory systems studied.
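The logic of such a capacity estimate can be sketched as follows. The numerical values here are illustrative assumptions chosen to reproduce a 9-bit figure, not the parameters of the original 1952 analysis:

```python
import math

def interval_code_bits(t_min_ms, t_max_ms, resolution_ms):
    """Upper bound on bits per impulse for pulse-interval modulation:
    log2 of the number of distinguishable interspike intervals."""
    n_intervals = round((t_max_ms - t_min_ms) / resolution_ms)
    return math.log2(n_intervals)

# Illustrative assumptions: a 1 ms refractory floor, a 52.2 ms
# ceiling and 0.1 ms timing resolution give 512 distinguishable
# interval widths, i.e. 9 bits per impulse.
bits = interval_code_bits(1.0, 52.2, 0.1)   # 9.0
```

The estimate is an upper bound: noise in spike timing and correlations between successive intervals can only reduce the usable capacity, which is why measurements of coding efficiency (as in Bialek's work) are informative.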

The question "How can order and structure originate
in systems, in the face of the Second law of Thermodynamics ?" became
a leading theme of the loosely organized efforts that followed the Cybernetics
movement after an interlude of a few years. v. Foerster took this
line of inquiry to a new direction with a focus on self-organizing systems:
In his characteristic whimsical manner, he introduced a Symposium on self-organizing
systems by saying: *"there is no such thing as a self-organizing
system"*. This statement is a clue to his basic premise: a system cannot
be meaningfully considered if divorced from its environment. Within the
domain of equilibrium thermodynamics, apparent self-organization becomes
possible at the expense of resources from the environment, imported as
order (i.e. negative entropy, as Schroedinger claimed in his book "What
is Life"), or by drawing on energy sources from the environment, as v.
Foerster stipulated in his 'order from noise' principle. In this sense
the system is strictly speaking not self-organizing, as it requires interacting
with the environment. In analogy, the relation between observer and
observed (each an observing system) was construed as a reciprocal and dynamic interaction
between two systems. *"Observing systems"* (133)
is the title of one of v. Foerster's publications which - like a Necker
Cube- invites the dynamic shifting between the perspectives of observed
and observing, neither completely circumscribed in separation from the
other, and in some sense defining each other. These thoughts on 'observing
systems' had considerable influence on motivating new directions in Sociology.
However, the reflexivity inherent in 'Observing systems' is also a source
of trouble. Karl Popper (100), working independently
of the Cybernetics movements, proved in 1950 a theorem according
to which no system can give an up-to-date description of itself.
This raises two issues: one, since humans seem to be able to give instant
self-descriptions of themselves in deliberate reasoning, it conflicts with
the belief in the human brain being an automaton; and, second, it raises
a question of automata being able to give instantaneous description of
other automata operating on identical principles.

Although conceptually related to some extent with Second Order Cybernetics, Autopoiesis should be considered a theory in its own right. Originating in the 1970s with Maturana and Varela (85), it views the Nervous System as a closed, self-contained system, merely coupled (via receptors) to its environment, but otherwise following its own internal dynamics. An external event impinging on the receptors acts as a perturbation that changes the system's internal state according to its own structural and functional dynamics. The external event does not specify the adaptive changes that occur, nor does it become associated with some form of an internal image (representation). It merely triggers the transition to one of the internal states that are accessible to the system's own dynamics. Instead of viewing perception as a matter of information transmission from an external source (the stimulus) to a receiver (the Nervous System), the recipient structure responds according to its own internal laws of transformation to an impinging external event of which it 'knows' nothing, other than that a transition in its internal state was elicited.

The philosophical implications of this point of view are fundamental: the idea of representation is part of the traditional Cartesian world view according to which body and mind are two separate agencies, separated from the external world which is accessible merely as a representation (i.e. a projection on a screen, as it were). This ideology has dominated Western Thought for the past 300 years. In contrast, Autopoiesis (like Second Order Cybernetics) aligns itself with the philosophic doctrine of Constructivism, which posits that knowledge is an active achievement of the cognizing subject. The function of cognition is adaptive and serves the subject's organization of the experiential world, not the discovery of an objective ontological reality. The trend to align Second Order Cybernetics with Constructivism led in the mid 1980s to issuing a "Declaration of the American Society of Cybernetics" under the aegis of v. Glasersfeld, who also published in 1995 the monograph "Radical Constructivism".

Issues related to Autopoiesis troubled me in my own experimental work (136, 137): as observing experimenter, I seek to relate two events: the stimulus as an event I control, and a response in the experimental subject, which I observe (measure); a relationship between stimulus and response can then be constructed. However, as far as the animal is concerned, it has access only to the dependent variable as a change in its neural activity. In the conventional way of thinking, this activity is considered a representation, an image of the external event. But the animal has access only to ONE of the two variables: strictly speaking, the category of representation does not apply. What does it then mean if the experimenter constructs a stimulus-response relation ? Straddling the line between Autopoiesis and 'Observing Systems' seemed to hold a clue: if experimenter and experimental subject are construed as one common system, it is in fact possible to speak of stimulus-response relations in terms of the experimenter's semantics. But this semantics is parasitic on the experimental subject's internal mechanisms, and has no relevance for the observed system per se, as the theory of Autopoiesis would insist.

In retrospect, it now appears that Second Order Cybernetics remained trapped in equilibrium thermodynamics. The decisive advances on self-organization took place outside its own confines, associated with the rise of nonlinear dynamics and attention to systems far from equilibrium and 'on the edge of chaos'. The study of co-operative phenomena in physical systems led H. Haken (59) to develop the conceptual system of Synergetics. The leading idea is the dialectic between a macroscopic "order parameter" and the elementary components of the system, leading in circular interaction to the system's self-organization into qualitatively and quantitatively different configurations. The theoretical framework can be applied to many kinds of systems; in Biology and Psychology, it has been shown to formally account for a variety of perceptual and motor behaviours, and for features of the electroencephalogram. Manifestations of self-organization also became a prominent feature of the work of I. Prigogine and associates (103) on irreversible thermodynamics, where exchanges of energy and matter keep reaction-diffusion systems far from equilibrium. Systems containing large numbers of nonlinear elements which are diffusely coupled and act as a continuum can be driven by energy influx to critical levels of phase transitions into new macroscopic patterns. Katchalsky et al. (72) conjectured that this principle of pattern formation in dissipative systems may give rise to a hierarchy of different levels of cooperativity in ensembles of neurons.

**Excursus to Nonlinear
Science:**

As a body of knowledge with distinct character, Nonlinear Science originated in the past three decades with a series of diverse developments in analytic, numerical and experimental fields, not without some identity crisis (13). It encompasses concepts and techniques that afford a unified characterization of a large class of phenomena in virtually every field of the Natural Sciences, extending also to some areas in the Social Sciences. Nonlinear Science discards the principle of proportionality of classical Physics, whose laws by and large prescribe simple proportionality between causes and effects, that is: increments in the independent variable lead to qualitatively identical changes of proportional magnitude in the dependent variable. Hence, the combined action of two different causes leads to a superposition of the effects of each cause taken separately. In contrast, nonlinearity implies non-additivity of effects, the occurrence of abrupt transitions among qualitatively dissimilar states, and unpredictable evolution in space and time. The stability of a dynamical system depends on the values of certain control parameters: for some of their range, the system may remain stable and settle at a point attractor; for other parameter values, the system may settle to a steady-state periodic orbit (limit cycle) in phase space. Still further changes of control parameters may lead to bifurcations with periodicities at multiple stable states in alternation, and finally to chaos. Evolution to chaos may be intermittent, at times separated by epochs of relative stability. As is the case with dynamical systems generally, chaos is also relative to the level of the system's description. Chaotic dynamics in computational models does not necessarily generalize to the natural system being modeled (157). More on this in Section: Chaos, Computation and Physics.

The distinction between chaos and randomness is significant:
chaotic regimes are deterministic in the sense that their evolution is
governed by fixed rules. Thus, in principle, the future behaviour of a
chaotic system is completely determined by its past, but in practice, slight
differences in initial parameters or randomly occurring errors grow exponentially
with time, thus secondarily introducing unpredictability (the dependence
on initial values *per se* is not a defining criterion for Chaos,
158).
This would correspond to randomly occurring fluctuations in the physical
substrate in nature. In computational simulations, rounding errors and
the finite limit on the precision of real numbers contribute to the unpredictability,
though not necessarily so since chaotic functions restricted to the computable
reals also remain chaotic (160). Deterministic
chaos is one member of a larger family of chaotic regimes for which Freeman
(48) recently suggested the generic designation
*"stochastic
chaos"*.
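The deterministic yet practically unpredictable character of chaos is easily illustrated with the logistic map, a standard textbook example (chosen for exposition, not drawn from neural data):

```python
def logistic(x, r=4.0):
    """Deterministic rule x -> r*x*(1 - x); chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two trajectories from initial states differing by 1e-10: the same
# fixed rule governs both, yet the tiny initial difference grows
# roughly exponentially until it saturates at order one.
a, b = 0.4, 0.4 + 1e-10
gaps = []
for _ in range(60):
    a, b = logistic(a), logistic(b)
    gaps.append(abs(a - b))
# gaps[0] is still ~1e-10; within a few dozen iterations the two
# trajectories are effectively unrelated.
```

Nothing random enters the rule; unpredictability arises solely because no measurement or representation of the initial state is exact.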

Since familiarity with the intricate properties of Chaos is largely based on mathematical and computational models, one may ask: what role, if any, does Chaos play in Nature ? (for Nervous Systems, see Section: 'Chaos in the Nervous System'). A variety of operational tests can suggest chaos in time series measurements of natural phenomena, although caution is in order: in general, it is no more possible to prove from empirical data that a real physical system is chaotic than it is possible to prove that it is governed by a particular set of equations. If a process is chaotic, it eludes TM computation and the CT Thesis, because the exponential sensitivity to initial conditions and the necessary truncation of real constants render chaos TM-intractable beyond, in many instances, merely qualitative assertions.

Nonlinear theory is both phenomenological and explanatory (111): its elegance results from the system's dynamics itself, which supports the reduction of a large number of degrees of freedom to the lower dimensionality of the system's attractor. This is opposite to the essence of the traditional 'Galilean procedure', which aims at isolating one variable from others in the experimental design. Nonlinear theory characterizes systems in a qualitative way by their invariants: the Lyapunov exponent, the dimensions of attractors, and the topological features of the entanglement of their trajectories (92); but the invariants of hypothetical models are subject to validation against the corresponding quantities extracted from data (usually by time series analysis).
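One of these invariants can be computed directly for a model system. The sketch below estimates the Lyapunov exponent of the logistic map at r = 4, where the exact value is known to be ln 2; this is a textbook illustration, not a data-analysis procedure (exponents for measured time series require more elaborate embedding methods):

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=10_000, burn_in=1_000):
    """Estimate the Lyapunov exponent of the logistic map as the
    long-run average of log|f'(x)| = log|r*(1 - 2x)| along an orbit:
    the mean rate of exponential divergence of nearby trajectories
    (a positive value indicates chaos)."""
    x = x0
    for _ in range(burn_in):               # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic()   # close to ln 2 ~ 0.693 at r = 4
```

A Lyapunov exponent of ln 2 has a neat informational reading: the system discards one bit of initial-condition information per iteration, which is the quantitative face of "sensitivity to initial conditions".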

I will now return to the two main offspring of the McCulloch-Pitts work on logical calculus which I introduced before as Neurodynamics and Neural Networks, but whose discussion I postponed. A dividing line between the two approaches can be drawn between continuum and statistical fields on the one hand, and the logic of threshold networks on the other, although the boundaries blur at the edges.

Rashevsky's initial work on a statistical theory of neural fields antedated the birth of the McCulloch-Pitts Neuron by several years. In the decades from 1950 to 1970, Beurle (24), Griffith (148), Wilson and Cowan (141, 142), Amari (5, 6) and others studied collective phenomena exhibited by unstructured populations of neuron-like elements. Despite variations of detail, the original neurodynamic models share some basic design principles: the models emphasize the properties of populations rather than individual elements; the cells comprising the populations, usually variants of McCulloch-Pitts type neurons, are assumed to be in close proximity and randomly connected, and in varying proportions excitatory and inhibitory. Thresholds of excitability may be randomly distributed. The input to the excitable elements is summed over the incoming connections, and the effects of stimulation may decay at some defined rate. Time is treated as a continuous variable. In general, the property of interest is the temporal dynamics of the aggregate, for which nonlinear differential equations are derived in accord with the model's detailed specifications. This work was motivated, in part, by interest in the conditions for maintaining stable activity at intermediate levels, and for displaying oscillatory behavior and wave propagation comparable to activity patterns in Nervous Systems. Continuing the tradition of this work to the present, Ventriglia (129) and Barna et al. (19) described kinetic population models in which communications among aggregates of neurons via packets of particles (impulses) can be subject to statistical description of interactive dynamics.
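A minimal sketch in the spirit of these population models is given below, in the style of Wilson and Cowan's coupled excitatory-inhibitory pair. The coupling weights and sigmoid parameters are illustrative assumptions, and the crude Euler integration is for exposition only:

```python
import math

def sigmoid(x, a, theta):
    """Population response function: fraction of cells whose
    (distributed) thresholds are exceeded by the summed input x."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta)))

def wilson_cowan(steps=4000, dt=0.05, P=1.25):
    """Euler integration of a two-population (excitatory E,
    inhibitory I) mean-field model.  Input is summed over the
    connections; activity decays at unit rate (the -E, -I terms)."""
    E, I = 0.1, 0.1
    w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0   # illustrative
    trace = []
    for _ in range(steps):
        dE = -E + sigmoid(w_ee * E - w_ei * I + P, a=1.3, theta=4.0)
        dI = -I + sigmoid(w_ie * E - w_ii * I, a=2.0, theta=3.7)
        E, I = E + dt * dE, I + dt * dI
        trace.append(E)
    return trace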

Models of this kind display to varying degrees some of the typical activity patterns of nonlinear dynamic systems, i.e.: oscillations, hysteresis, travelling waves, formation of patterns (islands) of active neurons, stationary attractors, limit cycles and even strange attractors (i.e. chaos). The balance between excitatory and inhibitory influences is an important factor in determining the model's performance. The model of Cragg and Temperley (38) suggested to its authors in 1954 that the cooperative properties of its elements are analogous to ferromagnets (spins), and that the model as a whole displays analogies to aspects of physical systems, anticipating insights of Neural Network theory some 30 years later. Speculations as to the functional significance of one or the other of these nonlinear manifestations are inconclusive, though at least one study offers suggestive analogies to certain drug-induced subjective phenomena: Cowan (37) demonstrated in neurodynamic models the occurrence of synchronized bursting activity of neuron groups, maintained oscillations, and excitation flow among modular units. Harth and coworkers (149, 150) extended neurodynamic modeling to Neural Nets in which discrete populations of randomly connected model neurons ('netlets') are coupled to higher-order structures. This principle anticipated by almost three decades the more recent approaches of computing with Cellular Neural Networks and Coupled Map Lattices. The intent was to simulate brain stem regulatory functions (see also: 151).

Some aspects of the work of Pribram, Freeman and MacLennan, thematically more appropriately referred to in later sections, can also be subsumed under Neurodynamics, albeit in a sense more extended than its original one.

**Neural Network (connectionist)
modelling**

The history of Neural Network modelling has been told many times and need not be reiterated. Similarly, an overview of the great number of variations of the basic Neural Network designs would be excessively redundant, notably since a large number of excellent books and collections of publications make the field easily accessible. Therefore, this section will be limited to a few significant events and turning points, to the extent to which they are relevant for the objective of this essay.

Some 20 years after the introduction of the McCulloch-Pitts formal neuron, efforts started to enlist neural networks for modelling perceptual and cognitive processes, at first applying linear algebra concepts (138, 9, 73), although Caianiello (29) had earlier recognized the role of nonlinearity for the self-organizing propensity of neural networks. Rosenblatt proved in 1962 the convergence of a learning algorithm based on iterative synaptic weight changes. The transition to explicit consideration of nonlinearity occurred with the creation of the 'Brain-state-in-a-box' model of Anderson et al. (10). Nonlinearity also became a dominant theme in Grossberg's work (58), beginning with a study of prediction and learning theory in 1967 (57).

The publication of *"Parallel Distributed Processing"
*by
Rumelhart and McClelland in 1986 contributed importantly to the rapid ascendancy
of Neural Network research, although it may have overstated its case as,
for instance, Minsky & Papert contend. In addition to the original
concept of the McCulloch-Pitts neurons, the field is strongly influenced
by ideas on synaptic function which Hebb formulated as early as 1949.
The original notion of the 'Hebbian Synapse' underwent numerous modifications
and additional specifications which, generally, share with the original
some essential principles: activity-induced modification in a synapse
depends on the exact time of occurrence of pre- and postsynaptic events;
the synaptic modification is input specific; and change in a synapse
results from the conjunction of pre- and postsynaptic signal effects.
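These shared principles can be condensed into a deliberately minimal rule (a caricature for exposition, not any specific published model): a weight changes only when its own presynaptic input is active in conjunction with the postsynaptic response, and no other synapse is affected.

```python
def hebbian_update(weights, pre, post, lr=0.01):
    """Minimal Hebbian rule.  Input specificity: each weight sees
    only its own presynaptic input `x`.  Conjunction: the change is
    proportional to the product of pre- and postsynaptic activity.
    Timing enters here only as 'the same update step'; spike-timing-
    dependent variants refine that further."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

# Only synapses whose presynaptic input was active change:
w = hebbian_update([0.0, 0.0, 0.5], pre=[1, 0, 1], post=1.0)
# w is now [0.01, 0.0, 0.51]
```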

Neural Networks are intrinsically nonlinear dynamic systems, but differ from the usual approach in dynamics, where the dynamical system is a fixed object with but a few variable parameters. In contrast, dynamics in Neural Nets can occur at three levels: the states of the network, the values of connection strengths, and the architecture of connectivity itself (121). Unlike the sequential von Neumann computer architecture, memory and processing unit are coupled: each neuron is part of the processing unit, and memory is implicitly encoded in the mutual connections between any pair of neurons. Hopfield's discovery (63) that an energy measure can be defined for neural nets with recurrent connections was an important event. It is then possible to show that the movement of the network activity through phase space follows the direction to energy minima. The energy measure establishes an intrinsic analogy with spin glass models and connected neural network modelling with conceptualizations in statistical mechanics, some of which had been recognized earlier by Little & Shaw (76). Spin glass magnetic models can be conceptualized as networks of McCulloch-Pitts neurons if one interprets the two opposing spin directions of the ferromagnets as binary logical values. The energy approach has been developed for three different cases: discrete transition - discrete time, continuous transition - discrete time, and continuous transition - continuous time. The first case was studied by Hopfield, who showed convergence to stable attractors in asynchronously updated nets with Hebbian interactions. Golden (54) applied the second case to the 'Brain-state-in-a-Box' model of Anderson et al.; and the third case was first studied by Cohen & Grossberg (35), who obtained general results regarding the stability of recurrent nets.

The important point is that attractors are local energy minima for the application of hill-climbing optimization procedures, and can also be regarded as content addressable memories. Which kind of attractor is obtained depends on the network structure. Suitable learning algorithms make it possible to design the desired type of attractor. A learning algorithm then takes the form of a nonlinear dynamical equation that manipulates the location of the attractors for encoding information in the desired form, or for learning temporal structures in event sequences (7). A large proportion of Neural Network research is, in fact, devoted to designing networks for computing with stable attractors. Knowledge representation in the form of stable attractors makes it possible to stay close to the type of cognitive tasks which artificial intelligence - type computationalism addresses in its way. All else aside, Neural Networks have the advantage of avoiding the 'brittleness' of symbolic Artificial Intelligence programs. Modifications of the original unsupervised learning in Perceptrons led to many different classes of computational models: Principal Component Analysis, self-organizing maps, information theoretic models and process control constitute the main areas of activity, sometimes also including stochastic methods rooted in statistical dynamics. Architectural elegance and processing efficiency are in these applications generally given precedence over biological realism.

Attractors, complex adaptive systems, and self-organization
in central neural activity, referred to at the end of the section on coding,
delineate an area in Neurophysiology, to which the study of attractor
dynamics in artificial neural networks can make useful contributions. Distributed
networks, nested in the manner of __Cellular
Neural Networks__ (33) are promising candidates
for new directions in neural computation. In one version, neural
networks composed of arrays of smaller attractor neural networks were shown
to have intriguing properties for modelling cognitive processes (11)
and aspects of neural organization (123). The unit
of computation is in these models not a neuron, but a neuronal net. Thus,
these models match more closely the recent trends in Systems Neuroscience,
referred to in the Section on Coding, than standard neural network modelling
does. In the next section I refer to another method for meaningfully extending
the complexity of Neural Network modeling.

**Dynamical
Systems in Cognition and adaptive Behaviour:**

The *"Dynamical challenge"* (34)
is one aspect of the revisionist cognitive science which subjects
the classical notions of representation and computation by rules to critical
scrutiny (64). Since this ramification of the dynamical
systems approach is tangential, though related, to the Neuroscience
focus of this essay, I will merely point to some of its principal directions.
The introduction to Port and Van Gelder's *"Mind as Motion"*
(101) sets the agenda of the core dynamical hypothesis:
a cognitive process specifies sequences of numerical states
which are subject to mutual and simultaneous transformations by the
mathematical tools of dynamics, unfolding in real time with a changing
environment. This view captures the intrinsic relationship between
the dynamical approach to cognition and learning, and the theory of adaptive
systems (113). As a self-organizing process,
the cognitive system adaptively modifies its parameters (and, thus,
its phase space) in interaction with the environment. Numerous task
domains in perception, regulation of motor behavior and cognition have
been studied on this basis, including also the application of the
formal apparatus of Synergetics (60). Their
discussion is beyond the scope of this article, except for one study which
I single out for introducing a special style of investigating
dynamical systems. It consists of dynamical elements on a lattice which
interact ("couple") with sets of other elements (69).
Such *"Coupled Map Lattices"* (CML) are dynamical
systems with discrete time, discrete space and continuous state variables,
which can display tunable bifurcation behavior extending over the
entire network, or over regions separated by domain boundaries (67,68).
DeMaris(39) interpreted the switching between clusters
of activity centers in a multilayer CML to model the shifting attention
between foreground and background in ambiguous figure perception and, more
generally, to 'choosing' among different perspectives of a
sensory field.
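
A Coupled Map Lattice of this kind can be sketched in a few lines: a ring of logistic maps, each updated in discrete time and diffusively coupled to its neighbours, with the nonlinearity r and the coupling eps as the tunable parameters. The particular values below are illustrative choices, not taken from the cited studies.

```python
import math

def cml_step(x, r=3.9, eps=0.3):
    """One synchronous update of a diffusively coupled logistic lattice (ring)."""
    n = len(x)
    f = [r * v * (1.0 - v) for v in x]          # local nonlinearity
    return [(1.0 - eps) * f[i] + 0.5 * eps * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]                  # diffusive coupling to neighbours

# Drive a small ring from a smooth initial profile.
x = [0.5 + 0.3 * math.sin(2.0 * math.pi * i / 32.0) for i in range(32)]
for _ in range(200):
    x = cml_step(x)
```

Varying r and eps moves the lattice between frozen domains, travelling patterns and fully developed spatiotemporal chaos; the states remain confined to the unit interval because each update is a convex combination of logistic-map values.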

**Chaos, Computation
and Physics**

While working with stable attractors in Neural Networks offered assured approaches to conservative cognitive tasks, possibilities for the informational use of chaotic systems also received attention, albeit from a distinct minority, often from investigators more allied with statistical dynamics than with a primary allegiance to the neurobehavioral sciences (e.g.: 122,128). Current topics under discussion are whether problems generally classified as undecidable can be approached by means of chaotic processors: would computing with chaos be capable of dealing with mathematically undecidable functions, and functions not computable in polynomial time? Is it possible to harness chaos for innovative problem solving in non-algorithmic ways, and for discovering and learning new behaviours? What kind of functionality, if any, might chaotic regimes in Nature have? That chaotic systems can perform pattern classification has been shown, but whether they can outperform non-chaotic systems remains to be determined. The intensity of theoretical studies by several groups of investigators (e.g.: 21, 61, 70, 71; see also the next Section) is also evidence for anticipated technological applications. A distinctive feature of chaotic dynamics is the speed with which interacting neuron clusters can synchronize and desynchronize, thereby enabling virtually instantaneous dynamic modulation of the system by external input: a feature of potential significance for interacting with rapidly changing environments.
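
The speed of synchronization can be illustrated with two diffusively coupled chaotic logistic maps: for coupling strengths in the stable range, the difference between the two trajectories contracts at every step, so the pair locks together within a few dozen iterations even though each unit remains chaotic. The map and parameter values are illustrative assumptions.

```python
def coupled_step(x, y, r=4.0, eps=0.4):
    """One step of two diffusively coupled logistic maps."""
    fx, fy = r * x * (1.0 - x), r * y * (1.0 - y)
    return (1.0 - eps) * fx + eps * fy, (1.0 - eps) * fy + eps * fx

x, y = 0.2, 0.7        # start the two units far apart
diffs = []
for _ in range(60):
    x, y = coupled_step(x, y)
    diffs.append(abs(x - y))
```

With eps = 0.4 the difference variable is multiplied each step by (1 - 2*eps) times the local slope of the map, a factor of magnitude at most 0.8, so the gap shrinks geometrically: fast locking and, if the coupling is switched off, equally fast divergence.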

These and related questions of computational
and physical aspects of Chaos, and their relation to algorithmic complexity
(32) and randomness (22)
await clarification while conflicting views abound. P. Smith's book
*"Explaining Chaos"* (119) puts these issues in perspective.
In the face of measurement uncertainty and computing errors, what inferences
can be drawn from chaos in computational models to the dynamics of the
corresponding physical system? Under certain conditions, a computed
trajectory can 'shadow' the 'true' trajectory within an arbitrarily
small error. Thus, chaotic models are candidates for 'approximate truth',
and can yield some useful predictions.
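
The finite prediction horizon behind these questions is easy to demonstrate: in the logistic map at r = 4, an initial discrepancy of 10^-12, standing in for measurement uncertainty or rounding error, is roughly doubled at each iteration, so the computed and the 'true' trajectory part company after a few dozen steps.

```python
def orbit(x0, n=60, r=4.0):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.123456)
b = orbit(0.123456 + 1e-12)     # a 'measurement error' in the 12th decimal
gaps = [abs(p - q) for p, q in zip(a, b)]
```

The gap stays microscopic for the first handful of steps and reaches macroscopic size well before the sixtieth iteration; only statistical properties of the orbit, not the pointwise trajectory, remain predictable beyond that horizon.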

The principled impossibility of obtaining
definitive certainty, referred to in the foregoing Section and in the
Section on Nonlinear Science, prevents differentiating Chaos at subcellular
levels (e.g. ion channels) from Markovian processes (53),
but squid giant axons can readily be driven into deterministic chaos under
certain conditions (2). Fluctuations of
activity in single cells may display chaos in the spontaneous activity
of mollusc pacemaker neurons (89). Subthreshold
oscillations of membrane potential may give rise to the phenomenon of stochastic
resonance: additive noise keeping the signal-to-noise ratio of certain
sensory transducers at optimal level, whereby the source of the background
perturbation may be deterministic chaos which can synchronize with common
signals to select attractors (95). Elbert
et al. (42) prepared a comprehensive review
of evidence suggestive of deterministic Chaos in excitable cell assemblies,
tracing its role at all organizational levels and systems of Physiology.
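
Stochastic resonance itself is simple to simulate. In the sketch below (a generic threshold detector, not a model of any particular transducer), a subthreshold sinusoid never crosses the detection threshold on its own; moderate additive noise produces crossings that cluster around the signal's peaks, while very strong noise produces crossings everywhere and washes the signal out again.

```python
import math
import random

random.seed(1)

def crossings(noise_sd, n=5000, amp=0.8, threshold=1.0, period=50):
    """Count threshold crossings of signal + noise, and the mean signal value
    at the crossing times (a high mean = crossings track the signal's peaks)."""
    times = []
    for t in range(n):
        s = amp * math.sin(2.0 * math.pi * t / period)
        if s + random.gauss(0.0, noise_sd) > threshold:
            times.append(t)
    if not times:
        return 0, 0.0
    mean_s = sum(amp * math.sin(2.0 * math.pi * t / period) for t in times) / len(times)
    return len(times), mean_s

quiet_count, _ = crossings(0.0)           # subthreshold signal alone: silent
noisy_count, coherence = crossings(0.3)   # moderate noise: crossings at the peaks
_, coherence_loud = crossings(2.0)        # excessive noise: coherence degrades
```

The non-monotonic dependence of output coherence on noise intensity is the signature of the resonance: there is an optimal, nonzero noise level for conveying the subthreshold signal.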

Chaos in a complex neural system was discovered by W. Freeman
and associates, and its role in the learning of stimulus discrimination was
elucidated in systematic studies extending over almost two decades (43, 47).
Selecting the olfactory system because it is well understood in terms of structure
and function, Freeman studied electroencephalographic records of the olfactory
bulb in animals trained to discriminate odours. The discrimination of odours
is associated with the spatial distribution of stimulus-elicited activity
bursts which occur against the spontaneously ongoing background wave forms
of the electroencephalogram. Freeman showed that the latter meet the
criteria of being chaotic.

In the language of dynamics, the following picture emerged in Freeman's studies: during late inhalation and early exhalation of a test odour, a barrage of input from the receptors induces a bifurcation in the olfactory bulb's neural activity from the natural state of chaos to an oscillatory state with characteristics of a limit cycle. The location of the oscillatory pattern in the olfactory bulb is stimulus specific. It is as if the training set of odours in the conditioning paradigm exists as latent attractors in the bulb. The input from activated receptors places the neural response into the particular basin from which the stimulus specific attractor emerges. On exhalation, the system is promptly reset to its chaotic state. The extraordinary novelty of this point of view is the dependency of neural dynamics on chaotic activity: sensitivity to initial conditions and the ability to amplify microscopic events into macroscopic patterns predispose the system for rapid adaptation in changeable environments. The chaotic background activity also assures the system's open-endedness and swift readiness to respond to completely novel as well as familiar input, without need for memory search, obviating the artificial construct of internal representations.
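
This bifurcation picture can be caricatured, with all due caution, by a one-parameter map: treat the receptor barrage as a shift of a control parameter from a chaotic regime into one with a stable limit cycle, and exhalation as the shift back. This is merely an illustrative toy, not Freeman's olfactory model; the logistic map and the parameter values are assumptions.

```python
def logistic(x, r):
    return r * x * (1.0 - x)

def settle(r, x0=0.4, transient=300, keep=8):
    """Discard a transient, then return the next `keep` states (rounded)."""
    x = x0
    for _ in range(transient):
        x = logistic(x, r)
    tail = []
    for _ in range(keep):
        x = logistic(x, r)
        tail.append(round(x, 6))
    return tail

background = settle(3.9)   # 'resting' regime: chaotic, values do not settle
odour = settle(3.2)        # 'input-on' regime: collapses onto a period-2 cycle
```

Switching a single control parameter converts aperiodic background activity into a stimulus-locked oscillation and back again, which is the essential logic of the chaos-to-limit-cycle transition described above.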

The framework of Freeman's findings requires a re-assessment of the balance between digital and analogue processing in the nervous system. The foregoing sections of this essay documented that this balance had been grossly tilted towards the digital mode. The interpretation of cellular and axonal spike discharges as a neural code of sorts is part of this same stance. In contrast, Freeman emphasizes the duality of pulse and wave as two state variables which, together, define activity densities over populations of neurons. Action potentials of neurons (pulses) and dendritic currents (waves) are the microscopic variables, the former subserving transmission, the latter integration of information. Their interactions in large neuron ensembles create continuous macroscopic state variables which can be extracellularly recorded as amplitudes and time series of mean fields of current densities. The synaptic gain in the biological network is subject to modification by external and internal (motivational and attentional) contingencies. It is a principal determinant of the system's bifurcation parameters and potential for self-organization, comparable to phase transitions in physical systems.

The proposed mechanism for learning and pattern recognition in the olfactory system was implemented in computer simulations (145). A point for debate is whether this work has established chaos merely as sufficient, or also as necessary, for learning and pattern recognition in brains. Further evidence seems to be required to establish the latter condition.

In Freeman's work with the olfactory system, Chaos is assigned the roles of a nonlinear pattern classifier (45), novelty filter and catalyst for learning (118). On theoretical grounds, and supported by computer simulations, Tsuda (126) and Aihara et al. (3) developed intriguing network models in which cortical chaos may dynamically link memory traces and support search by transitions among memory representations.

Whether the Electroencephalogram (EEG) globally reflects a chaotic process is still controversial. Nunez (91) secured computational support for viewing the EEG as a linear wave process, subject to mass action of coupled neuron-like elements. In contrast, Babloyantz et al. (16) identified in 1985 chaotic dynamics of brain activity during sleep. E. Basar (20) edited a collection of publications, *"Chaos in Brain Function"*, which contains contributions from his own laboratory and those of a dozen other veterans in the field of attractor dynamics, adducing evidence for global chaotic dynamics in the brain under various conditions. The analysis of spatio-temporal EEG patterns in the framework of Synergetics also established nonlinear coupling as a source of chaotic dynamics (49). The dynamic characterization of brain activity applies also to Event Related Potentials (105). Not surprisingly, nonlinear dynamics also bears on the genesis of epileptic seizures (97) and raises questions of clinical relevance. The balance between excitatory and inhibitory processes is an important control parameter for the bifurcation dynamics leading to seizures (78). Chaos control in biological networks assumes in this case practical significance, in addition to the more general interest in harnessing Chaos for useful computational applications (17). Self-delayed feedback control for stabilizing orbits in the manner of delayed differential equations appears to be among the effective model interventions (18), as do small perturbations (116) and noise from a random number generator (46) in computational models.
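
Self-delayed feedback control can be sketched on a chaotic map in the manner of Pyragas: adding a feedback term proportional to the difference between the previous and the current state stabilizes the otherwise unstable fixed point. The map, the gain and the initial conditions below are illustrative assumptions, not taken from the cited studies.

```python
def run_dfc(k, n=600, r=3.9, x0=0.70, x1=0.75):
    """Iterate x[t+1] = f(x[t]) + k*(x[t-1] - x[t]): delayed-feedback control."""
    xs = [x0, x1]
    for _ in range(n):
        prev, cur = xs[-2], xs[-1]
        xs.append(r * cur * (1.0 - cur) + k * (prev - cur))
    return xs

fixed_point = 1.0 - 1.0 / 3.9   # the map's unstable fixed point
free = run_dfc(0.0)             # no feedback: the orbit remains chaotic
pinned = run_dfc(-0.7)          # feedback on: the orbit settles on the fixed point
```

A linear stability analysis around the fixed point shows that a suitable gain (here k = -0.7) pulls both eigenvalues of the delayed system inside the unit circle; note that the feedback term vanishes on the target orbit, so the control is non-invasive once the orbit is stabilized.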

Several groups of investigators (41,55) have, since the mid 1980s, identified the appearance of 40 to 80 c/sec periodic wave patterns in sensory cortex which seemed to coordinate neural activity across stimulus feature detectors, despite the detectors' disjoint locations on the cortical projection field. These findings were thus far mostly evaluated as temporal correlations between neuronal spike discharges and wave patterns. However, these periodic waves can now also be subsumed under the comprehensive theory of Wright et al. (144). These investigators recently synthesized many seemingly disparate findings and interpretations from the studies of a wide range of investigators during the last 25 years, and their own studies, into a continuum model of the cerebral cortex. It can account for cerebral rhythms and synchronous oscillations by a dynamic process akin to self-organization. Their recommendations are condensed into a set of state equations which, they suggest, reflects the dynamics of cortical neurons at micro-, meso- and macroscopic scales. The model can be subjected to empirical tests in a number of ways as yet unexplored. This work is in an "in progress" state, with several as yet unpublished papers in preparation.
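
Synchronous oscillation across spatially separated units, of the kind these studies report, is commonly caricatured with coupled phase oscillators. The sketch below (a standard Kuramoto model, not Wright's continuum formulation; all parameters are illustrative assumptions) shows the order parameter r rising toward 1 once the coupling exceeds the spread of natural frequencies.

```python
import cmath
import math
import random

def kuramoto(coupling, n=10, steps=3000, dt=0.01, seed=2):
    """Euler-integrated Kuramoto oscillators; returns the order parameter r
    (r = 1 means full synchrony) averaged over the final third of the run."""
    rng = random.Random(seed)
    omega = [1.0 + 0.2 * rng.uniform(-1.0, 1.0) for _ in range(n)]  # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]     # initial phases
    r_trace = []
    for step in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n               # mean field
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
        if step >= 2 * steps // 3:
            r_trace.append(r)
    return sum(r_trace) / len(r_trace)

r_coupled = kuramoto(2.0)     # coupling well above the frequency spread: locking
r_uncoupled = kuramoto(0.0)   # no coupling: phases drift apart
```

Each oscillator interacts only through the mean field, yet above the critical coupling the population phase-locks: a minimal illustration of how globally coherent rhythms can self-organize among disjointly located units.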

**Dynamics in Physics
and Computation**:

This section extends some of the points already adumbrated in the Section 'The very idea of computation'. Certain conditions decisively differentiate physical from computational dynamics: except under special conditions, digital computing can approximate the evolution of chaotic systems at best up to a certain point in time. This restriction is due to the limitation in the precision of the initial state description, which would require real numbers for computation in a continuous domain. Physical dynamics involves real constants that influence the macroscopic behaviour of the system in ways that are not computable. Hence, no effective algorithm is possible. This is another reason why the application of the CTT to the physical world is at best of limited validity.

New perspectives on the relation between physical dynamics and computational dynamics come from two sources: Vergis et al. (130) designed an amazing analogue computer which was, at least for one class of problems, computationally more powerful than a Turing machine. In another remarkable, though as yet controversial, development, Siegelmann (117) introduced in a series of publications since 1991 the theory of analogue recurrent Neural Networks with computational power exceeding that of Turing machines. Her work is predicated on the idea that physical systems are described by differential equations or maps in continuous phase space and are, thus, inherently analogue systems, requiring a theory of computation of their own, distinct from that of discrete systems. The dynamical behavior of natural systems is influenced by real (numerical) values which may describe basic physical constants such as the speed of light, or Planck's constant. Digital computation cannot satisfy the accuracy of prediction stipulated by the laws of Nature (see: 'The very idea of computation'). Computation with (digital) Neural Networks limits the values of connection weights and signals passing among neurons to rational or truncated real numbers; their dynamic evolution is thus artificially constrained. The mathematical model of recurrent analog Neural Network computation suggested by Siegelmann is a dynamical system that evolves in a continuous phase space, in contrast to the discrete space model of digital computers: it is essentially analogue and chaotic, and it may follow phase trajectories which differ qualitatively and quantitatively from those of any Turing-computable rational number approximation. At this point, the theory of analogue recurrent Neural Networks is comparable to the role of the TM for discrete computation: an abstract concept, possibly the theoretical basis for the eventual design of a new type of computing device.
Aihara (4) intimates the connection between dynamical neural network models and analog computing.

In a different approach, Zak (146) has extended the computational range of the laws of Newtonian mechanics to encompass irreversible and chaotic processes. One of the effects of the work of Siegelmann and of Zak is to lessen the constraints imposed by rational numbers in the computational domain. It does not, of course, address the problems related to non-structural computation, adumbrated earlier in the Section 'The very idea of computation'.

**What are the
lessons for Neural Science ?**

Prior to the 'digital revolution', cooperativity between digital and analog processes in the nervous system was informally appreciated, but lacked the conceptual tools for formalizing its precise mode of operation. The achievements of analog computation by V. Bush in the late 1930s and early 1940s, exceptional as they were, could not hold their own against the forces of digital computation, nor could they render conceptual assistance to Neuroscience. There was much heated debate in the Macy Symposia, and the analogue principle did not submit without a fight. Von Neumann acknowledged in his book "The Computer and the Brain" that the real puzzle of the nervous system lies in the functional integration of digital and analog processes. Nevertheless, the seductiveness of the digital route rested on an explicit theory and formalism, ready for operationalizing in any number of ways, and thus carried the day. Furthermore, the community of investigators' adoption, on questionable evidence, of the validity of the CTT silenced the incentive to pay attention to the dynamics of natural processes. This is another fascinating chapter (yet to be written) in the Sociology of Science, which studies the forces that affect the direction of mainstream research.

Redressing the imbalance between digital and analogue
processing that occurred with the 'digital revolution' appears as
the most immediate and pressing step to take. History is, once again, a
revealing guide. Attention turns to a formation in the Nervous System
which attracted, in the early part of the last Century, the interest of Neuroanatomists
such as Ariens Kappers, Crosby, Herrick and others for its abundance and
widespread distribution, difficult to capture adequately
with the erratic Golgi method. As neuropil (in the older literature
'Intercellular Gray of Nissl') it became known for the
intricacy of its texture, which made photographic representations or
pen drawings inadequate renditions of its delicacy. Bullock &
Horridge (27) refer quite extensively to 'neuropile'
in their monumental monograph *"Structure and function in the nervous
systems of invertebrates"* of 1965. Here are some relevant quotations
from this work: *"...seat and secret of many of the most characteristically
nervous achievements, especially integrative events"* and *"...still
largely a terra incognita but deserves concrete attention both anatomically
and physiologically"*.

References to neuropil in the current literature
in Neurophysiology are few and far between. Braitenberg & Schuetz (152)
report quantitative data on synaptic connectivity in the neuropil of the
mouse cortex, suggesting connectivity for dispersing activity onto all
neighbors within reach, and axons making synapses whenever in proximity
of dendrites (153). Freeman refers in "*Societies
of Brains"* (47) to the filamentous texture
of neuropil to which he attributes unique and complex aggregate functions. Pribram's
holonomic brain theory (102) attributes the generation
of dynamical patterns of wave interactions to the densely connected matrix
of dendrites in which axon and cell bodies are embedded. Together with
K. Yasue and M. Jibu, Pribram developed a neural wave equation
for the dynamics of wave interactions in dendritic networks. To facilitate
mathematical tractability, MacLennan (84)
modified this approach by resorting to a dynamic lumped parameter system.
This choice situates his work into linear system theory, despite this approximation
missing some of the subtle interactions that may obtain among dendrites. The
detailed and careful work of W. Rall and coworkers in the decades
since 1970 with modelling dendritic function (112),
and the recent work on the dynamics of spike generation in dendrites view
them largely in isolation, except for their relation to the connected cell
soma and axon (87). In computational models, details
of dendritic geometry markedly influence the firing pattern of the simulated
neurons (154). The theoretical study of Bulsara et
al (161) illuminates the complexity of events due
to coupling neuron cell bodies with a noisy dendritic bath: bifurcation
dynamics and stochastic resonance in the dendritic field ultimately increase
the neuron's signal-information capabilities. The closely packed matrix
of dendrites, axons, cell bodies (and possibly glia) provides conditions
for complex interactions in a system of ion fluxes, extracellular current
fields, action potentials, chemical transmitters and modulators of neuronal
activity. The conditions far from equilibrium can sustain complex self-regulatory
mechanisms.

The neuropil is unlike any physical structure we are familiar with. In the perspective of nonlinear dynamics, we should assume that the neuropil's internal dynamics is played out in physical transactions at a molecular, electrical and perhaps even quantum level. Impinging digital signals can trigger bifurcations, dispatching the complex nonlinear system in the direction of one attractor or another, or harnessing chaotic activity. Emerging digital signals can recurrently enter other neuropils or, eventually, communicate via axonal impulse traffic with the external world. The neuropil's structural and functional complexity provides a substrate for self-organization to attractors of various complexities and phase transitions, from which macroscopic events can result as mean field potentials. The tools of dynamical system theory offer great promise for elucidating the complexity of these events: attractor dynamics may reduce the number of degrees of freedom to a conceptually manageable range and, perhaps, even identify state variables at a macro level or control parameters amenable to functional interpretation closer to organism-environment dynamics. Implicit in this outlook is the requirement to shed the wholesale commitment to TM computation and the physical CT Thesis as a guiding principle, except for domains which rigorously satisfy their assumptions: 'computation' is pluralistic!

Our understanding of complex linear and of nonlinear dynamics of natural phenomena is to a considerable extent based on intuitions gained from computational models. We tend to overlook that Nature does not compute in the sense in which we model or simulate natural processes. Nature performs transformations in the material domain: her tools are not numbers, but matter and energy. Consider as an example the basilar membrane of the inner ear: its function can be described in the computational domain as spectral analysis. But in actuality, it is a physical resonator responding mechanically to different frequencies along its extension, as Helmholtz suggested and von Bekesy worked out in detail. Digital signals carry the truncated message which originates from a physical device. The laws of physics designate the operations available (in principle) in our actual universe. Yet, information (in a broader than the Information theoretic sense) is the currency for the organism's commerce with the environment. The processing of information and the laws of physics seem reciprocally related, one imposing constraints on the other. Ultimately, however, information's currency is in Physics.
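
The contrast between the computational and the material description can be made concrete with a toy resonator bank: a row of damped harmonic oscillators, standing in loosely for places along the basilar membrane, is driven by a two-tone input, and each oscillator accumulates energy only near its own natural frequency; the spectral analysis is performed by mechanics, not by numbers. All parameter values are illustrative assumptions.

```python
import math

def resonator_response(f0, drive, dt, zeta=0.05):
    """RMS displacement of a unit-mass damped oscillator x'' + c x' + k x = s(t)."""
    w0 = 2.0 * math.pi * f0
    x = v = 0.0
    total = 0.0
    for s in drive:
        a = s - 2.0 * zeta * w0 * v - w0 * w0 * x
        v += a * dt              # semi-implicit Euler: update velocity first
        x += v * dt
        total += x * x
    return math.sqrt(total / len(drive))

dt = 0.0005
t = [k * dt for k in range(int(8.0 / dt))]                  # 8 s of input
drive = [math.sin(2.0 * math.pi * 5.0 * tk) + math.sin(2.0 * math.pi * 12.0 * tk)
         for tk in t]                                       # two tones: 5 Hz and 12 Hz

freqs = [3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 14]              # resonator 'places'
response = {f: resonator_response(f, drive, dt) for f in freqs}
```

The profile of responses across the bank peaks at the resonators tuned to the input tones, which is the sense in which a purely mechanical system 'computes' a spectrum.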

Intuition is a fallible and inadequate guide when confronting complexity of this magnitude, and does not at this time seem to offer directions for how to meaningfully approach a detailed experimental analysis of the neuropil's material physical dynamics. However, conceptual guidance may be obtained from building successively more complex computational models, serving as "intuition pumps" (to use the felicitous expression of D. Dennett). The role of such models is to offer guidance for investigating the physics of the processes in experimental work. But as simulacra, they must not be mistaken for ontology.

**References**

__Aertsen, A., M. Erb & G. Palm__: Dynamics of functional coupling in the cerebral cortex: an attempt at a model based interpretation. *Physica D 75:103-128, 1994*.

__Aihara, K. & G. Matsumoto__: Chaotic oscillations
and bifurcations in squid giant axon. p. 257-269, in: A.V. Holden,
(edit.): Chaos.* Princeton University Press, N.J., 1986.*

__Aihara, K., T. Takabe & M. Toyoda:__ Chaotic neural network. *Physics Letters A, 144:333-340, 1990.*

__Aihara, K.__: Chaos in neural response and dynamical network models: toward a new generation of analog computing. in: M. Yamaguchi (edit.): *Towards the harnessing of chaos: a collection of contributions based on lectures presented at the seventh Toyota Conference, Mikkabi. Amsterdam, 1994*.

__Amari,S.__: Dynamics of pattern
formation in lateral inhibition type neural fields. *Biol.Cybernetics
27:77, 1977*.

__Amari, S.__: A method of statistical
neurodynamics.
*Kybernetik 14:201-225, 1974*.

__Amit, D.J.__: Modelling Brain
Function. The world of attractor neural networks. *Cambridge University
Press, 1990*

__Amit, D.J.__: The Hebbian paradigm
reintegrated: local reverberations as internal representations.* Behavioral
and Brain Sciences 18:617-657, 1995.*

__Anderson, J.A.__: A memory
model utilizing spatial correlation functions. *Kybernetik 5:113-119,
1968.*

__Anderson, J.A., J.W. Silverstein,
S.R. Ritz & R.S. Jones__: Distinctive features, categorical perception,
and probability learning: some applications of a neural model. *Psychological
Review 84:413-451, 1977*.

__Anderson, J.A. &
J.P. Sutton__: A network of networks: Computation and neurobiology. *World
Congress on Neural Networks 1:561-568, 1995.*

__Anderson, J.A.__: From discrete
to continuous and back again, in: R. Moreno Diaz & J. Mira-Mira: Brain
processes, Theories and Models, an international conference in honour of
W.S. McCulloch 25 years after his death. *MIT Press, Cambridge MA, 1996*

__Anderson, P.W.__: Is complexity Physics? Is it Science? What is it? *Physics Today, p. 9-11, July 1991.*

__Anninos, P.A., B. Beck, T.J.
Csermely, E.M. Harth & G. Pertile__: Dynamics of neural structures.*
J. Theoret.Biol. 26:121-148.*

__Ashby, W.R__.: Design for a
brain.
*Chapman & Hall, 1952*.

__Aspray, W.__(edit.): Computing
before computers.
*Iowa State Univ. Press, 1990*

__Babloyantz,A., J. Salazar &
C. Nicolis__: Evidence of chaotic dynamics of brain activity during sleep.
*Phys.
Lett. A 111: 152-156, 1985.*

__Babloyantz, A. & C. Lourenco:__
Computation with chaos: a paradigm for cortical activity.
*Proc.Nat.Acad.Scie
USA 91:9027, 1994*

__Babloyantz, A__: Chaos control
in biological networks. p.405-426, in: H.G. Schuster (edit): Handbook of
Chaos control, *Wiley 1998.*

__Barna,G., T. Groebler &
P. Erdi:__ Statistical model of the hippocampal CA3 region. *Biol.
Cybernetics 79:309-321, 1998.*

__Basar, E.__ (edit.): Chaos in brain function. *Springer, NY, 1990.*

__Basti, G. & A.L. Perrone:__
Chaotic neural nets, computability and undecidability: towards a computational
dynamics. *Internat. J. of Intelligent Systems 10:41-69, 1995.*

__Batterman, R.W__.: Defining
Chaos.
*Philosophy of Science 60:43-65, 1993*.

__Beller, M__.: Quantum Dialogue:
the making of a revolution.
*University of Chicago Press, 1999*

__Beurle. R.L__.: Properties
of mass of cells capable of regenerating pulses. *Philosophical Transactions
of the Royal Society of London 240:55, 1956.*

__Bialek, W__.: Theoretical Physics meets experimental neurobiology. p. 513-595, in: E. Jen (edit.): SFI Studies in the Sciences of Complexity, Lect. Vol. II. *Addison-Wesley, 1990.*

__Braitenberg, V. & A.
Schutz__: Cortex: Statistics and Geometry of neural connectivity.*
2nd edit., Springer NY, 1998.*

__Bullock, T.H.__: Integrative
systems research on the brain: resurgence and opportunities.
*Ann.Rev.Neurosci.
16:1-15, 1993*.

__Bullock, T.H. & G.A. Horridge__:
Structure and Function in the nervous system of invertebrates. *
W.H. Freeman, San Francisco, 1965.*

__Bulsara, A.R., A.J. Maren &
G. Schmera__: Single effective neuron:dendritic coupling effects and
stochastic resonance*. Biological Cybernetics 70:145-156, 1993.*

__Burks, A.W.__(edit).: Essays
on Cellular Automata,
*University of Illinois Press, 1970*.

__Caianiello, E.R.__: Outline of a theory of thought-processes and thinking machines. *J. of theoretical Biology 1:204-235, 1961.*

__Cariani,P.__: Epistemic autonomy
through adaptive sensing. in: *Proceedings of the 1998 IEEE ISIC/CRA/ISAS
Joint Conference, P. 718-723, Sept. 1998.*

__Cartwright, N. __: How the
laws of Physics lie. *Oxford University Press, 1983.*

__Chaitin, G.J.__: Algorithmic
Information Theory,
*Cambridge University Press, 1990.*

__Chua, L.O.__: Cellular Neural
Networks: Theory. *IEEE Transactions on Circuits and Systems, 35(10):1257-1272,
1988*

__Clark, A.__: The dynamical challenge.
__Cognitive
Science 21(4):461-481, 1997.__

__Cohen M. & S. Grossberg:__
Absolute stability of global pattern formation and parallel memory storage
by competitive neural networks. *IEEE Transaction Systems, Man
& Cybernetics 13:815-826, 1983*.

__Conrad,M.__: The brain-machine
disanalogy. *BioSystems 22:197-213, 1989*. (see also: M.Conrad:
The price of programmability, in: R. Herken,
The universal Turing machine,
*Oxford University Press, 1988*.)

__Cowan, J.D.:__ Symmetry breaking
in large-scale neural activity. *Internat.J.Quantum Chemistry 22:1959,
1982.*

__Cragg, B.G. & H.N.V. Temperly__:
The organization of neurons: a cooperative analogy.
*EEG.Clin.Neurophysiol.
6:85, 1954.*

__DeMaris, D.:__ Attention,
depth gestalts, and spatially extended chaos in the perception of ambiguous
figures. p. 239-258, in: D.S. Levine, V.R. Brown & V.Timothy
Shirey.: Oscillations in neural systems. *Mahwah, N.J.,L. Erlbaum Associates,
2000.*

__Deutsch, D.:__ Quantum theory,
the Church-Turing principle and the universal quantum computer. *
Proc.Royal.Soc.London A 400:97-117, 1985*

__Devaney, R.L__.: An introduction
to chaotic dynamical systems. *Addison-Wesley, 1987.*

__Eckhorn, R., R. Bauer, W. Jordan,
M. Brosch, W. Kruse, M. Munk & H.J. Reitboeck:__ Coherent oscillations:a
mechanism of feature linking in the visual cortex ? *Biol. Cybernetics
60:121-130, 1988.*

__Elbert, T.__ (et al.): Chaos and Physiology - deterministic Chaos in excitable cell assemblies. *Physiol. Reviews 74(1):1, 1994*.

__Freeman, W.J.__: Mass action in the nervous system: examination of the neurophysiological basis of adaptive behavior through EEG. *Academic Press, New York, 1975*.

__Freeman, W.J. & C.A. Skarda:__ Representations: Who needs them? p. 375-380, in: J.L. McGaugh, N.M. Weinberger & G. Lynch: Brain organization and memory: cells, systems, and circuits. *Oxford University Press, 1990.*

__Freeman,W.J.:__ Chaos in
the Brain: possible roles in biological intelligence. *Internat.J. of
Intelligent Systems 10:71-88, 1995.*

__Freeman, W.J., H.J. Chang,
B.C. Burke, P.A. Rose & J. Badler:__ Taming Chaos: stabilization
of aperiodic attractors by noise.
*IEEE Transactions on Circuits and
Systems, I: Fundamental Theory and Applications,44(10):989-996, 1997.*

__Freeman, W.J.:__ Neurodynamics: an exploration in mesoscopic brain dynamics. *Springer, NY, 2000*. (see also: How the brain makes up its minds, *Weidenfeld & Nicolson, 1999*, and: Societies of brains: a study in the neuroscience of love and hate. *Hillsdale, NJ: L. Erlbaum Associates, 1995*.)

__Freeman, W.J.__:
A proposed name for aperiodic brain activity: stochastic chaos. *Neural
Networks 13:11-13, 2000*.

__Friedrich, R., A. Fuchs &
H. Haken__: Synergetic analysis of spatio-temporal EEG patterns, in:
A.V. Holden, M. Markus & H.G. Othmer (edits.):*Proceedings of a NATO
Advanced Research Workshop on Nonlinear Wave Processes in Excitable Media,
Leeds, UK, 1989*.

__Fujii, H., H. Ito, K. Aihara, N. Ichinose & M. Tsukada:__ Dynamical cell assembly hypothesis - theoretical possibility of spatio-temporal coding in the cortex. *Neural Networks 9(8):1303-1350, 1996*

__Fulton, J.F.__: Physiology of
the Nervous System.
*Oxford University Press, 1949/1970*

__Getting, P.A.__: Emerging principles
governing the operation of neural networks. *Ann.Rev.Neurosci. 12:185-204,
1989*

__Glass, L__.: Chaos in neural
systems, p.186-189, in: M.A. Arbib (edit.): Handbook of Brain Theory and
neural networks, MIT Press, *Cambridge MA, 1995.*

__Golden, R.M.__: The "Brain-state-in-a-box"
neural model is a gradient descent algorithm.
*J. Math. Biol. 30:73-80,
1986*.

__Gray, C.M. & W. Singer__:
Stimulus specific neuronal oscillations in orientation columns of visual
cortex. *Proc.Nat.Acad.Sci. USA. 86:1698, 1989.*

__Griffith, J.S.__: On the stability
of brain-like structures. *Biophysical Journal 3:299-308, 1963.*

__Griniasti, M., M.V. Tsodyks
& D.J. Amit:__ Conversion of temporal correlations between stimuli
to spatial correlations between attractors.
*Neural Computation 5:1-17,
1993*

__Grossberg, S__.: Nonlinear
difference-differential equations in prediction and learning theory. *Proc.Nat.Acad.Sci.
USA 58:1329-1334, 1967*

__Grossberg, S.__: Nonlinear neural networks: principles, mechanisms and architectures. *Neural Networks 1:17-61, 1988*

__Haken, H.__: Synergetics: Cooperative phenomena. *Springer, NY, 1973*.

__Haken, H.__: Synergetic computers
and cognition,
*Springer, NY, 1991*.

__Hansel, D. & H. Sompolinsky:__ Synchronization and computation in a chaotic neural network. *Physical Rev. Lett. 68(5):718-721, 1992.*

__Harth, E.M., T.J. Csermely, B. Beck & R.D. Lindsay__: Brain Functions and Neural Dynamics. *J. Theoret. Biol. 26:93-120, 1970.*

__Harth, E.__: From Brains to Neural
Nets to Brains.* Neural Networks 10(7):1241-1255, 1997.*

__Hebb, D.O.:__ The organization
of behavior. *Wiley, NY, 1949.*

__Hopfield, J.J.__: Neural
networks and physical systems with emergent collective computational abilities.
*Proc.Nat.Acad.Sci.
USA 79:2554-2558, 1982.*

__Hopfield, J.J.:__ Physics, computation, and why Biology looks so different. *J. Theoret. Biol. 171:53-60, 1994.*

__Horgan, T. & J. Tienson__:
A nonclassical framework for cognitive Science. *Synthese 101:305-345,
1994.*

__Kampis, G.:__ Self-modifying
systems in biology and cognitive Science. *Pergamon Press 1991*

__Kandel, E., J.H. Schwartz & T.M. Jessell__: Principles of Neural Science, 3rd Edition, *Elsevier, NY, 1991.*

__Kaneko, K.:__ Clustering,
coding,switching, hierarchical ordering, and control in a network of chaotic
elements. *Physica D 41:137-172,1990.*

__Kaneko, K.:__ Simulating Physics with coupled map lattices, in: K. Kawasaki, A. Onuki & M. Suzuki: Formation, Dynamics and statistics of patterns, *World Scientific, Singapore, 1990.*

__Kaneko, K__.: Overview of
coupled map lattices. *Chaos 2(3): 279-283, 1992*.

__Kaneko, K.:__ Cooperative
behavior in networks of chaotic elements, in: M. Arbib, Handbook of Brain
Theory and Neural Networks, p. 258, 1995, *MIT Press, Cambridge MA, 1995*

__Kapitaniak, T.__: Controlling Chaos: theoretical and practical methods in nonlinear dynamics. *London, 1996.*

__Katchalsky, A.K., V. Rowland
& R. Blumenthal__: Dynamic patterns of brain cell assemblies.
*NRP
Bulletin 12(1), 1974.*

__Kohonen, T.__: Associative
memory - a system theoretical approach, *Springer NY, 1977*

__Landauer, R__.: Information
is Physical. *Physics Today 1991 (May): 23-29.*

__Landauer, R.__: Information is inevitably physical. In: A.J.G. Hey (edit.): Feynman and computation: exploring the limits of computers. *Perseus Books, Reading Mass, 1999.*

__Little, W.A. & G.L. Shaw__: A statistical theory of short and long term memory. *Behav. Biol. 14:115, 1975*.

__Lewontin, R.__: The triple helix: gene, organism, and environment. *Harvard Univ. Press, Cambridge, MA, 2000*

__Lopes da Silva, F.H., H. Kamphuis & W.J. Wadman:__ Epileptogenesis as a plastic phenomenon of the brain. *Acta Neurol. Scand. 86:34-40, 1992.*

__Lorente de No__: Transmission
of impulses through cranial motor nuclei. *J. Neurophysiol. 2:401-64,
1939*

__Lorente de No,__ in:
Ref. (Fulton)

__Maass, W. & C.M. Bishop__ (edits.): Pulsed neural networks. *MIT Press, Cambridge Mass., 1999*

__MacKay, D.__: Information,
Mechanism and Meaning.
*MIT Press, Cambridge MA, 1969.*

__MacKay, D. & W.S. McCulloch__:
The limiting information capacity of a neuronal link. *Bull. Math.Biophysics
14:127-135,1952*

__MacLennan, B.__: Information
processing in the dendritic net. p. 161-197, in: K. Pribram (edit.) Rethinking
Neural Networks: quantum fields and biological data.
*Hillsdale NJ, L.
Erlbaum, 1993. *See also*: Univ. of Tennessee Tech.Report CS-92-180.*

__Mainen, Z.F. & T.J. Sejnowski__: Influence of dendritic structure on firing pattern in model neocortical neurons. *Nature 382:363-366, 1996.*

__Maturana, H. & Varela, F.__: Autopoiesis and Cognition, *Reidel, Dordrecht, 1980*.

__McCulloch, W.S. & W.H. Pitts__: A logical calculus of the ideas immanent in nervous activity. *Bulletin of Mathematical Biophysics 5:115-133, 1943.*

__Mel, B.W.__: Why have dendrites
? - a computational perspective, in: G. Stuart et al.: Dendrites,
*Oxford
Univ.Press, 1999*

__Miyashita, Y.:__ Neuronal
correlate of pictorial short-term memory in the primate temporal cortex.
*Nature
(London) 331:68, 1988.*

__Mpitsos, G.J., A. Bulsara, H.C.
Creech & S.O. Soinila:__ Evidence for chaos in spike trains of neurons
that generate rhythmic motor patterns. *Brain Res.Bull. 21:529-538, 1988.*

__Mulhauser, G.R.__: Mind out of Matter: topics in the physical foundations of consciousness and cognition. *Kluwer, Dordrecht, 1998*

__Nunez, P.L__.: Neocortical dynamics
and human EEG rhythms. *Oxford University Press, 1994.*

__Ott, E.__: Chaos in dynamical systems. *Cambridge University Press, 1993.*

__Pask, G.:__ Physical analogues
to the growth of a concept. p. 765-794, in: Mechanization of Thought Processes,
*National
Physical Laboratory, H.M.S.O.,London, 1958.*

__Paton R__: Towards a metaphorical
Biology. *Biology and Philosophy 7:279-294, 1992*

__Pecora, L.M. & T.L. Carroll__:
Synchronization in chaotic systems. *Physical Rev. Lett. 64(8):821-824,
1990.*

__Perkel, D.H. & T.H. Bullock__:
Neural Coding: a report based on an NRP work session.
*Neurosciences
Research Program Bulletin 6(3): 223-344, 1968*

__Pijn, J.P., J. Van Neerven, A. Noest
& F.H. Lopes da Silva__: Chaos or noise in EEG signals: dependence
on state and brain site. *EEG.Clin.Neurophysiology 79:371-381, 1991.*

__Pitowsky, I.__: The Physical Church-Turing Thesis and physical computational complexity. *Iyyun 39:81-99, 1990.*

__Pitts, W. & W.S. McCulloch:__ How we know universals: the perception of auditory and visual forms. *Bulletin of Mathematical Biophysics 9:127-147, 1947.*

__Popper, K__.: Indeterminism
in quantum physics and in classical physics. *Brit. J. Phil. Sci. 1:117
& 173,1950*

__Port, R.F. & T. van Gelder__: Mind as Motion, *MIT Press, Cambridge MA, 1995.*

__Pribram, K.__: Brain and Perception:
Holonomy and structure in figural processing. *Hillsdale, NJ., Lawrence
Erlbaum, 1991*

__Prigogine, I.__: Introduction
to the thermodynamics of irreversible processes, 3rd edit.
*Interscience,
NY, 1967*

__Pylyshyn, Z.W.__: Computation
and Cognition: toward a foundation of Cognitive Science. *MIT Press,
Cambridge MA, 1984.*

__Rapp, P.E., T.R. Bashore, I.D.
Zimmertman, J.M. Martineri, A.M. Albano & A.I. Mees:__ Dynamical
characterization of brain electrical activity, p. 10-22, in: S. Krasner
(edit): Ubiquity of Chaos. *AAAS, Washington DC, 1990.*

__Rieke, F., D. Warland & W. Bialek__: Coding efficiency
and information rates in sensory neurons. *Europhysics Letters 22(2):151-156,
1993.*

__Rieke, F., D. Warland, R. de Ruyter
van Steveninck & W. Bialek: __Spikes : Exploring the neural
code . *MIT Press, Cambridge Mass., 1997.*

__Rosen, R.__: Causal structures
in brains and machines.
*Int.J.General Systems 12:107, 1986*.

__Rosen, R.:__ Effective Processes
and Natural Law, in: R. Herken, The universal Turing machine,
*Oxford
University Press, 1988.*

__Rosenblueth, A., N. Wiener & J. Bigelow__: Behavior, Purpose and Teleology. *Philosophy of Science 10:18-24, 1943.*

__Rueger, A. & W.D. Sharp:__ Simple theories of a messy world: truth and explanatory power in nonlinear dynamics. *Brit. J. Phil. Sci. 47:93-112, 1996.*

__Schuetz, A.__: Randomness and constraints in the cortical neuropil. in: A. Aertsen & V. Braitenberg: Information processing in the cortex, *Springer, NY, 1992.*

__Segev, I.__: The theoretical foundation of dendritic function: selected papers of Wilfrid Rall with commentaries. *MIT Press, Cambridge, MA, 1995.*

__Serra, R. & G. Zanarini:__
Complex systems and cognitive processes.
*Springer, NY, 1990.*

__Shannon, C.E.__: A mathematical
theory of communication,
*The Bell System Technical Journal 27:379
and 623, 1948.*

__Shannon, C.E. & J. McCarthy__: Automata studies,
*Princeton
University Press, 1956.*

__Shinbrot, T., C. Grebogi, E. Ott & J.A. Yorke__: Using small perturbations to control chaos. *Nature 363:411, 1993*.

__Siegelmann, H__.: Neural
networks and analog computation: beyond the Turing limit. *Birkhauser
Basel, 1999.*

__Skarda, C.A. & W.J. Freeman:__
How brains make chaos in order to make sense of the world. *Behavioral
& Brain Sciences 10:161-195, 1987*

__Smith, P.:__ Explaining Chaos. *Cambridge University Press, 1998.*

__Softky, W.R.__: Simple codes
versus efficient codes.
*Current Opinion in Neurobiology 5:239-247, 1995.*

__Sompolinsky, H__.: Statistical
mechanics of neural networks. *Physics Today 41 (December): 70-80, 1988.*

__Sompolinsky, H., A. Crisanti
& H.J. Sommers:__ Chaos in random neural networks.
*Physical Rev.
Letters 61(3):259-262, 1988.*

__Sutton,J.P.__: Network hierarchies
in Neural Organization, development and pathology. p. 319-363, in: C.L.
Lumsden, W.A. Brandts & L.E.H. Trainor (edits), Physical Theory in
Biology, *World Scientific , Singapore, 1997*.

__Trachtman, P.__: Redefining Robots. *The Smithsonian, February 2000, p. 97-112*.

__Tsuda, I., E. Koerner & H.
Shimizu__: Memory dynamics in asynchronous neural networks.
*Progress
in Theoretical Physics 78(1):51-71, 1987.*

__Tsuda I.__: Dynamic link of
memory-chaotic memory map in nonequilibrium neural networks.
*Neural
Networks 5:313-326, 1992*.

__Uttal, W.R.__: The Psychobiology of sensory coding. *Harper & Row, NY, 1973*.

__Uttal, W.R.__: Toward a new
Behaviorism - the case against perceptual Reductionism. *Mahwah, NJ,
Erlbaum Assoc., 1989*

__van Vreeswijk, C. & H.
Sompolinsky__: Chaos in neural networks with balanced excitatory and
inhibitory activity.
*Science 274:1724-1725, 1996.*

__Ventriglia, F.:__ Towards a kinetic theory of cortical-like neural fields. p. 217-248, in: Neural modelling and neural networks, edit. F. Ventriglia, *New York: Pergamon Press, 1994.*

__Vergis, A., K. Steiglitz & B. Dickinson__: The complexity of analog computation. *Mathematics and Computers in Simulation 28:91-113, 1986.*

__Vizi, E.S. & E. Labos:__
non-synaptic interactions at presynaptic level. *Progress in Neurobiology
37:145-163, 1991.*

__von der Malsburg, C.:__ The correlation theory of brain function. Reprint of a 1981 Technical Report, in: E. Domany, J.L. van Hemmen, K. Schulten (eds.): Models of neural networks II: temporal aspects of coding and information processing in biological systems, *Springer, NY, 1994.*

__von Foerster, H.__: Observing
systems. 2nd edit.,
*Intersystems Publications, CA, 1984*.

__Watanabe, M. & K. Aihara__:
Chaos in neural networks composed of coincidence detector neurons. *Neural
Networks 10(8):1353-1359, 1997.*

__Werner, G.:__ The study of sensation in Physiology, p. 605-628, in: V.B. Mountcastle (edit.): Medical Physiology, 14th edition, *CV Mosby, St. Louis, 1980.*

__Werner, G.__: The many faces
of Neuroreductionism, p.241-257, in: E. Basar (ed.): Dynamics of
sensory and cognitive processing in the brain, *Springer 1988*

__Werner, G.__: Five Decades
on the Path to naturalizing Epistemology, p. 345-359, in: J. Lund,
edit.: Sensory Processing in the mammalian brain, *Oxford University
Press 1989.*

__Widrow, B__.: Generalization
and information storage in networks of Adaline neurons, in: M.C.Yovits,
G.T. Jacobi & D.D. Goldstein (edits): Selforganizing systems. *Spartan
Books, Washington DC.*

__Wiener, N__.: Cybernetics;
or, Control and communication in the animal and the machine. *Wiley,
NY, 1948.*

__Wiener, N.__: The human use
of human beings : cybernetics and society. London : *Eyre and Spottiswoode,
1954*

__Wilson, H.R. & J.D. Cowan__: Excitatory and inhibitory interactions in localized populations of model neurons. *Biophysical Journal 12:1-24, 1972.*

__Wilson, H.R. & J.D. Cowan:__
A mathematical theory of the functional dynamics of cortical and thalamic
nervous tissue. *Kybernetik 13: 55-79, 1973*

__Winnie, J.A.__: Computable
Chaos.
*Philosophy of Science 59:263-275, 1992.*

__Wright, J.J., P.A. Robinson, C.J. Rennie, E. Gordon, P.D. Bourke, C.L. Chapman, N. Hawthorn, G.J. Lees & D. Alexander:__ Toward an integrated continuum model of cerebral dynamics: the cerebral rhythms, synchronous oscillation and cortical stability. *Available as electronic publication:* www.mhri.edu.au/bdl/papers/paper24.

__Yao, Y.Y. & W.J. Freeman:__
Model of biological pattern recognition with spatially chaotic dynamics.
*Neural
Networks 3:153-170, 1990.*

__Zak, M.__: Terminal model of Newtonian Dynamics. *Internat. J. of Theoretical Physics 32(1):159-190, 1993.*