Reductionism & Systems Theory: While the path of any particular system could not be predicted...

Immortalist

Guest
....Science had been founded on the belief that the proper route to
understanding a complex system, such as the movement of the heavens,
the mixing of chemicals, or the emergence of life, was to break it
down into a collection of parts linked by simple mathematical
formulae. You wanted a list of bits and the rules that put them back
together again. And if the essence of a system could be reduced to an
equation that fitted comfortably on the front of a T-shirt - something
like Einstein's famous E = mc^2 - then that was perfect.

But reductionism depends on the assumption that the world is
discontinuous, that it is made of discrete bits. However, real life
does not have sharp boundaries. For instance, even our own bodies are
not cleanly separated from their surroundings. The surface of our skin
may appear to be a perimeter marking 'us' from 'not us' with digital
clarity. It seems a binary distinction. Yet when viewed on a
microscopic scale, when does an oxygen or water molecule stop
belonging to the surrounding air and become part of ourselves? Or when
does a skin flake or spot of grease become sufficiently detached from
our body to count as just a passing speck of dust? From a distance,
things can seem to have sharp boundaries, but get in close and those
boundaries turn soft. The idea of the bounded object is really just a
convenient fiction.

Of course, reductionism has served science well. The reason is that
for most of the time scientists stick to situations, or scales of
magnification, where the simplification does no real harm. When we
talk about having a body, the fuzziness of its actual physical
boundaries is normally quite irrelevant at our level of discussion.
The odd skin flake or water molecule on the wrong side makes little
difference when our use of the concept captures at least 99.99 per
cent of what we mean to say. In the same way, the normal laws of
physics are as accurate as we need for most of the problems we face in
life. When calculating the load forces on a new bridge design, the odd
quantum blip affecting an atom in a steel girder will be lost among
the statistical regularity of zillions of other atomic interactions.

There is a lot of science that can be done by concentrating on
situations so close to being digital as not to make a difference. Yet
there are clearly also a great many areas in life where the blurring
of boundaries and the fluid nature of relationships cannot be ignored.
The classic examples are the weather, economics, social systems,
condensed matter physics, quantum mechanics, fluid dynamics, and
anything to do with biology. Such systems are not just accumulations
of components, bits of clockwork in which every gear is locked into a
fixed relationship with its fellows. Instead, they are restless and
evolving, driven by the pressures of their own internal competition.
If such systems seem to have any stability, it is only because they
have reached a momentary accommodation of tensions. Like soap bubbles,
they have been stretched to some delicately trembling pitch of
organisation. It should not be surprising, then, that attempts to
break them into collections of labelled parts will destroy what seems
most important about them. Reductionism is much too clumsy-fingered to
perform such a task....

....While the path of any particular system could not be predicted,
outcomes had a tendency to group. Certain kinds of outcome would be
far more likely than others....

....Three Types of Attractors:

[1] The simplest type of attractor is a point attractor - a system in
which, no matter where you begin the calculation, you will always end
up at the same spot; water funnelling down a plughole or a pendulum
swinging to rest.

[2] A slightly more interesting class of attractor is the limit cycle,
in which the set of allowed outcomes forms a line rather than a point;
a marble rattling around inside the brim of a bowler hat. The marble
might roll about from side to side a bit, but eventually it will have
to settle somewhere along that closed path.

[3] The third class is the strange, or chaotic, attractor - the kind
made famous by the butterfly effect: the gentle fluttering of a
butterfly's wings could be enough to tip the balance of a developing
weather system and make the difference as to whether or not a
hurricane eventually swept across a country on the far side of the
planet...
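
To make the attractor idea concrete, here is a minimal sketch in plain
Python - my own illustration, not the author's; the damping and
step-size values are arbitrary assumptions. A damped pendulum reaches
the same resting state from any starting point, which is the signature
of a point attractor:

```python
import math

def simulate_pendulum(theta, omega, damping=0.5, g_over_l=9.8,
                      dt=0.01, steps=20000):
    """Euler integration of a damped pendulum; returns the final state."""
    for _ in range(steps):
        alpha = -damping * omega - g_over_l * math.sin(theta)
        omega += alpha * dt
        theta += omega * dt
    return round(theta, 4), round(omega, 4)

# Very different starting points all spiral in to the same fixed point
# (theta = 0, omega = 0) - the signature of a point attractor.
for start in [(0.1, 0.0), (2.0, 1.0), (-1.5, -3.0)]:
    print(start, "->", simulate_pendulum(*start))
```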

....So the concept of the attractor went some way to salvaging the loss
of certainty that came with chaos theory. Even more encouragingly,
there was the promise that science might discover that many quite
different systems actually shared the same kinds of attractors. There
could be a family resemblance linking natural phenomena as diverse as
weather systems, the turbulence of a river, and the firing of a
neuron. A study of attractor mechanics might end up uniting many areas
of science...

....Once their eyes had been opened, scientists began to see the hand
of chaos in all kinds of natural phenomena. Biologists used chaos
theory to explain everything from the growth of patterns on snail
shells to the branching of the body's blood vessels. Physicists saw
chaotic patterning in the shape of clouds or the melting of ice. Earth
scientists found chaos in the frequency of earthquakes and the
tributary patterns of river systems...

....The distinction between chaos and complexity can seem hazy at
times, but, essentially, chaos theory describes how a simple,
repetitive interaction, left alone to rub along, can produce something
of rich structure. It is about the feedback-driven generation of
complication. Genuine complexity is something else, however.
Shorelines, rain puddles and weather patterns have an intricate
structure, but the really interesting things in life - systems like
cells, economies, ecologies, and, of course, human minds - have extra
properties such as an ability to adapt, to self-organise, to maintain
some sort of coherence or internal integrity. These systems are not
slaves to their maths, passively following a trajectory through phase
space. Instead, they have developed some sort of memory or genetic
mechanism which allows them to fine-tune the very feedback processes
that drive them. They can change the attractor landscapes in which
they dwell, and so reshape their own futures. A complex system is one
that has harnessed chaos, rather than one that is merely produced by
it.
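
The "rich structure from a simple repeated rule" point is easy to
demonstrate with the textbook logistic map - a standard illustration,
not an example from the book. One line of feedback, iterated, yields a
point attractor, a limit cycle, or chaos depending on a single
parameter:

```python
# The logistic map x -> r*x*(1-x): one line of feedback, iterated.
def logistic_orbit(r, x=0.2, warmup=500, keep=10):
    for _ in range(warmup):          # let transients die away
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):            # sample the long-run behaviour
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print("r=2.8 (point attractor):", logistic_orbit(2.8))
print("r=3.2 (limit cycle):    ", logistic_orbit(3.2))
print("r=3.9 (chaos):          ", logistic_orbit(3.9))
```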

In its most straightforward guise, complexity theory sounds no more
than a restatement of classical Darwinian evolution, which is based on
the simple statistical fact that what works has a tendency to outlast
what doesn't...

Going Inside - A Tour Round a Single Moment of Consciousness
John McCrone - 1999
http://www.amazon.com/exec/obidos/tg/detail/-/0880642629/qid=1085586459/
http://www.dichotomistic.com/readings_intro.html
 
I actually agree with a lot of your points, but there are a couple of
places where you and that author were saying the same thing, only you
were choosy about the particular terms. But systems theory is bigger
than his ideas, and he was trying to apply it to nerve cells.

The assumption was that brain cells were also basically digital
devices. The brain might be a pink handful of gloopy mush; brain cells
themselves might be rather unsightly tangles of protoplasm, no two
ever shaped the same; but it was believed that information processing
in the brain must somehow rise above this organic squalor. There might
be no engineer to draw neat circuit diagrams, but something about
neurons had to allow them to act together with logic and precision.

- Elaboration Upon Brain Cell Features & Electro-Chemical Properties

Brain cells certainly had a few suggestive features. To start with,
the very fact that they have a separate input and output end says
there is a direction in which information flows. Signals arrive at a
root-like bush of fibres known as the dendrites. Then, sprouting from
the other end of the neuron, is the axon, the long fibre which carries
its output message. It is true that a few synapses are also usually
found on the cell body, and sometimes even on the axon itself, but
generally speaking dendrites collect the information and axons deliver
the response. Even more obviously, all the messages come from
somewhere and go somewhere. Whether two cells are connected is a black
and white issue. Under the microscope, dendrites and axons may look as
though they are forming unruly tangles, but there is an unbiological
precision in the way that a signal can be sent to a fixed destination
- and only that destination - almost instantly anywhere in the brain.

There is a physical logic in the wiring patterns of the brain. Then,
on top of this, there is something quite plainly binary about the all-
or-nothing nature of a neuron's decision to fire. The simple story
about how a cell fires is that incoming messages pool as a series of
small charges in the dendrites. These individual charges creep up the
branches and over the surface of the cell body to converge on a
trigger zone at the base of the axon, known as the axon hillock. The
hillock is delicately balanced so that it will only spark an output
signal if the accumulation of charge exceeds some threshold value. The
pooling of charge can take many different courses. Sometimes a cell
might be triggered by just a few strong impulses arriving at almost
the same instant; at other times, the threshold might be reached more
gradually by the slow addition of many weaker or fading impulses. But
the decision to fire is black and white. The cell convulses and a
message is sent flying down the line to all the other cells to which
it is connected. A bit of information has either been created, or it
hasn't.
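
The firing story above is essentially the classic leaky
integrate-and-fire model. A minimal sketch of that logic - the
threshold and leak values are illustrative assumptions, not
measurements:

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """inputs: charge arriving at each time step; returns spike times."""
    v, spikes = 0.0, []
    for t, charge in enumerate(inputs):
        v = v * leak + charge        # charges pool, but also fade
        if v >= threshold:           # black-and-white firing decision
            spikes.append(t)
            v = 0.0                  # reset after the cell fires
    return spikes

# A few strong, near-simultaneous impulses fire the cell at once;
# many weak ones reach the same threshold only gradually.
print(integrate_and_fire([0.6, 0.6, 0, 0, 0, 0]))   # fires at t=1
print(integrate_and_fire([0.2] * 12))               # fires at t=6
```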

So, despite the brain being made of flesh and blood, the propagation
of signals looks to have a digital clarity. But the question is
whether brains are exclusively digital in their operation. A computer
knows only a world of blacks and whites. It relies on its circuits
being completely insulated from any source of noise which might
interfere with the clockwork progression of 0s and 1s. But it is not
so clear that brain cells are designed to be shielded from the messy
details of their biology. Indeed, a closer look at a neuron soon
suggests the exact opposite: it is alive to every small fluctuation or
nuance in its internal environment. Where a transistor is engineered
for stability, a brain cell trades in the almost overwhelming
sensitivity of its response.

It could hardly be any other way, because the firing of a neuron is
actually an electro-chemical process - and more chemical than
electrical. Nerves do not conduct impulses like wires. Their
electrical activity is based on moving charge-carrying ions, such as
sodium, potassium, chloride, and calcium, across the cell membrane. The
membrane of a neuron is finely covered with pores. Some of these pores
are like pumps which can force ions either in or out of a cell to set
up an imbalance in the concentration of charge. Then other pores are
simply valves which open to let the ions flood back through again,
swiftly righting the balance.
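
The voltage set up by such pumping can be estimated with the standard
Nernst equation, E = (RT/zF) ln([out]/[in]). The ion concentrations
below are typical textbook values for a mammalian neuron - my
assumption, not figures from the book:

```python
import math

R, F, T = 8.314, 96485.0, 310.0     # gas constant, Faraday, body temp (K)

def nernst(z, conc_out, conc_in):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000 * (R * T / (z * F)) * math.log(conc_out / conc_in)

print("K+  ~%.0f mV" % nernst(+1, 5.0, 140.0))    # about -89 mV
print("Na+ ~%.0f mV" % nernst(+1, 145.0, 12.0))   # about +67 mV
print("Cl- ~%.0f mV" % nernst(-1, 110.0, 10.0))   # about -64 mV
```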

The principle of pushing ions back and forth across a membrane to
create a voltage drop is simple enough, but the control of the
channels is an immensely complex business, being both electrical and
chemical. It is electrical because a change in membrane potential can
itself cause a pore to open or shut. Not only does this mean that a
pore can influence its own level of activity, any changes feeding back
either to amplify or stabilise whatever it happens to be doing, but a
drop in voltage in one region of the membrane will tend to spread. The
opening and shutting of a group of pores will create a creeping
electrical potential drop that causes neighbouring pores to follow
suit, setting up a chain reaction that propagates across the surface
of the neuron.

The electrical response is complex enough because there are many
classes of pores, each handling a different kind of ion and reacting
to different voltage levels in different ways. But pores can also be
controlled by a whole range of chemical messengers - neurotransmitters
and neuromodulators - which either bind directly to a channel
to change its shape, or cause it to alter its activity through some
more subtle chain of events. There are hundreds of different
signalling substances that the brain uses to open and shut pores,
from simple amino acids like glutamate right up to hefty peptides
similar in chemical structure to a drug like morphine. Some cause an
instant change, others work over minutes or even days; some affect
just one kind of pore, others affect all. So, depending on what mix of
pores is built into an area of membrane - something which itself can
be changed in minutes or hours - the membrane of a neuron can show a
tremendous variety of responses. A computer is made of standardised
components. One transistor is exactly like the next. But every bit of
membrane in the brain is individual. The blend of pores can be
tailored to do a particular job, and that blend can be fine-tuned at
any time. There is a plasticity that makes the outside of a neuron
itself seem like a learning surface, a landscape of competition and
adaptation.

The electro-chemical properties of a neural membrane are, of course,
put to two general kinds of use: making axons and synapses. An axon is
just a tube of membrane with a fairly simple pore structure. There are
pores which pump out sodium ions and pump in potassium ions to
establish an initial state of electrical tension across the axon
membrane, then another set of pores acts as a valve for the sudden
release of this tension. The trick with the valves is that they are
electrically sensitive. If depolarisation begins in one section of an
axon, the change in potential will open the valves in the next. A
spike or action potential is created as one bit of tubing after
another depolarises in a chain reaction that flies all the way down to
the end of the line. Because, physically, little moves - the ions
simply step sideways across the axon membrane - the process is highly
efficient. Depending on the thickness of the axon, a spike can be sent
a distance of several feet at a rate of several hundred miles an hour.
The speed at which the axon can then be reset means that a cell can
fire as many as a thousand spikes a second.
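
The chain reaction down the axon can be cartooned in a few lines: each
patch of membrane fires when its neighbour does, then goes refractory,
so the wave can only travel one way. A toy sketch of the principle,
not a biophysical model:

```python
def propagate(n_segments=10):
    REST, ACTIVE, REFRACTORY = ".", "*", "o"
    segs = [ACTIVE] + [REST] * (n_segments - 1)   # stimulate one end
    while ACTIVE in segs:
        print("".join(segs))
        nxt = list(segs)
        for i, s in enumerate(segs):
            if s == ACTIVE:
                nxt[i] = REFRACTORY               # just fired; resetting
                for j in (i - 1, i + 1):
                    if 0 <= j < n_segments and segs[j] == REST:
                        nxt[j] = ACTIVE           # neighbour follows suit
        segs = nxt
    print("".join(segs))

propagate()   # the spike sweeps left to right, one segment per step
```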

In keeping with its role as a bit of wire, the axon is the least
plastic part of a neuron - although fatigue and growth changes can
still change its operating characteristics. Where things get fancy is
at the synaptic junction connecting two cells. The membranes on either
side of this cleft are thick with a great many different kinds of
pores and receptor sites, and how they react at any moment can be
finely controlled by a whole range of chemical messengers and self-
tuning feedback loops. The basic story of how a signal crosses a
synapse is that when a spike of depolarisation reaches the tip of an
axon, it causes a set of electrically sensitive calcium channels to
open. The inflow of positive calcium ions triggers an enzyme reaction
that eventually makes the axon tip eject stored bubbles of
neurotransmitter into the junction. These messenger molecules simply
float across - a journey of about a thousandth of a second - and bind
to chemically sensitive sodium channels on the other side. The pores
of the dendrite are forced wide open, so beginning the depolarisation
of the next cell in the line.

But in practice, there is nothing certain about any of the steps in
this chain. An axon may often not even release any neurotransmitter,
despite being hit by a full-strength spike. The amount of
neurotransmitter spilled into the gap can also vary. Plus, there is a
whole cocktail of other substances that may or may not be released at
the same time. Then, what sort of reception the message gets on the
opposite bank can alter from moment to moment. There might be
magnesium ions physically blocking some of the sodium pores, or a
longer-acting brain neurotransmitter may have subtly changed their
response; often, chloride channels may have been opened, letting in a
negative charge that dampens the effect of any new input. So a spike
might seem like a digital event - the all-or-nothing creation of a bit
of information guaranteed to reach a known destination - but the same
signal might one moment be met with an instant and enthusiastic
response, the next only fizzle away into nothing, failing even to stir
a cell's own axon tip.

Some of the variability in the behaviour of a neuron could be just
noise - an unpredictability caused simply by the fact that a brain
cell is an organic system depending on ions and molecules to bump
about and hit the right spot. For example, on one occasion there might
be 10,000 molecules of neurotransmitter secreted into a synaptic
cleft; on another, it could be 9,000 or 11,000 - just enough
sloppiness in the chain of transmission to create the odd glitch. If
this was all that was happening, then a spot of clever design would
always solve the problem. Brain cells might react only to the average
of a train of spikes rather than any individual spike. In this way, a
few stray signals could be ignored. But while there is undoubtedly a
degree of noise in the brain, much of the variability looks
deliberate. Neurons do not even seem to be trying to deliver a digital-
like predictability in their response. Instead, they appear to thrive
on being fluid. By using competition and feedback to fine-tune their
workings, they can adapt their response to meet the needs of the
moment. They can go with the flow.

This shows most clearly when scientists compare the behaviour of a
synapse in an alert brain with that of a resting brain. When the brain
is in a state of high vigilance, or if it is dealing with a stimulus
that is interesting and new, the synapses along the way will respond
with extra vigour. They will trigger easily and continue to buzz for
some time after. But when a synapse is part of a pathway dealing with
something dull, like the never-changing background drone of a fridge,
then the transmission of spikes becomes much more haphazard and
irregular. Experiments show that as few as one in ten of the spikes
will even cause an axon to release its transmitter. It is as if the
synapse knows whether the information it carries is important to the
state of consciousness as a whole. When things do not matter much, its
response is loose; the transmission of spikes can look rather erratic
and noisy. But as soon as the message begins to count, the neural
machinery tightens right up. Suddenly every signal starts leaping the
gap.
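
One way to picture this is to treat transmitter release as a coin flip
per spike, with the odds set by the brain's state. The one-in-ten
figure for a dull pathway and the roughly 10,000-molecule dose come
from the text; the "alert" probability is my own illustrative
assumption:

```python
import random

def transmit(n_spikes, p_release, vesicle_mean=10000, jitter=1000):
    """Return the transmitter released by each spike (0 = failure)."""
    released = []
    for _ in range(n_spikes):
        if random.random() < p_release:
            # the dose itself is sloppy - roughly 10,000 +/- 1,000
            released.append(int(random.gauss(vesicle_mean, jitter)))
        else:
            released.append(0)
    return released

random.seed(1)
print("dull: ", transmit(10, p_release=0.1))   # background fridge drone
print("alert:", transmit(10, p_release=0.9))   # interesting new stimulus
```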

This quite recent discovery is very important. It had always been
assumed that it was the action of billions of synapses that added up
to make a state of consciousness. But consciousness - or at least,
levels of attention and alertness - seems able to influence
the response of individual synapses. In computer terms, the logic
sounds alarmingly backwards. It is as if rather than the circuits
creating the results, the results are creating the circuits. But there
is really no great mystery if the brain is an evolving feedback and
competition-driven system. The whole brain - both the settings of its
circuits and its global state of organisation - would need to develop
in concert during a bout of processing. One would impact the other,
nudging everything along to some final balance of tension. Like tuning
into a distant radio station, the responses of millions of brain cells
would be twiddled a little bit this way, a little bit that way, until
they began to produce a coherent signal.

This was the sort of logic that Hebb had been talking about. The
problem for neuroscientists was that it was not until the 1990s that
they started to get a clear sight of the actual feedback mechanisms by
which a lowly synapse could be tuned. One crucial discovery was that a
cell's output spike actually travels both ways: it runs down the axon,
but also back over the cell itself and through its own dendrites. What
this means is that the synapses are told whether or not they
contributed to the last firing decision. So a synapse which played a
part might be encouraged to react a little more strongly next time,
while another which had not been active might be dampened to keep it
quiet. Such tuning might last a fraction of a second, long enough to
turn up the volume on a faint signal or a useful pattern, or it might
lead to more permanent growth changes. The rebounding spikes could
cause an individual dendrite to sprout extra connection sites, or to
reabsorb redundant synapses. Tuning could become learning.
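
This is the Hebbian tuning rule in miniature: when the cell fires, the
rebounding spike tells each synapse whether it contributed, and its
strength is nudged accordingly. A sketch of the logic, with arbitrary
learning and decay rates of my own choosing:

```python
def tune_synapses(weights, active, fired, lr=0.1, decay=0.02):
    """active[i] is True if synapse i carried input before the spike."""
    if not fired:
        return weights               # no backflow, no tuning signal
    return [w + lr if a else max(0.0, w - decay)
            for w, a in zip(weights, active)]

weights = [0.5, 0.5, 0.5]
# synapses 0 and 2 helped trigger the last firing; synapse 1 was silent
weights = tune_synapses(weights, active=[True, False, True], fired=True)
print(weights)                       # [0.6, 0.48, 0.6]
```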

The backflow of spikes was just one of many feedback mechanisms that
began to be recognised during the early 1990s. A still more surprising
discovery was that a stimulated dendrite releases a small dose of a
usually poisonous gas, nitric oxide (NO), to send a message back
across the synaptic junction. Nitric oxide can be deadly because it
binds to iron atoms and so can destroy the haemoglobin in blood cells.
But it also appears to switch on certain enzymes in an axon tip,
prompting them to ramp up the production of neurotransmitters. Nitric
oxide also has the secondary effect of relaxing the walls of blood
vessels, so its release probably increases the blood flow into an
active area of the brain. Both directly and indirectly, a very simple
messenger system could quickly improve the tone of a connection.

These two examples show the feedback mechanisms that exist even within
a cell or across a junction. But then there are the feedback
connections between cells that put them in touch with the wider
picture. Here, Hebb turned out to be even more right than he imagined.
As researchers found ways to record from cells and follow their
signals, they discovered that feedback connections dominated the
brain. They were everywhere, from short loops linking neighbouring
cells to long chains in which a signal might bounce right around the
brain before feeding back, much modified, to its source. The feedback
connections were not even limited to which part of a neuron they made
contact with. Some came back as part of the wash of input hitting a
cell's dendrites, but others formed synapses on the cell body or close
to the axon hillock where - being closer to the action - they could
exert a much more powerful effect. It was found that some connections
were even made right on the axon tip. If the connection was
inhibitory, one neuron would be able to block another cell's spike
even after it had been fired. So there was no shortage of pathways
through which the activity of the wider network could feed back to
influence the behaviour of its individual components. No part of the
chain of transmission was immune from adjustment.

- Drawing Parallels Between Information Processing in Brain &
Computers

This meant there was a dilemma for anyone trying to draw parallels
between information processing in the brain and in computers. Was the
feedback-driven nature of brain activity merely a complication, or did
it pose a more fundamental problem? It was all getting terribly
confusing. Brain cells looked to have something digital about them.
There was the all-or-nothing fact of a spike. And then each neuron
made a precise set of connections. Signals were delivered to known
locations. Yet how could a spike count as a bit of information if the
next synapse might simply choose to ignore it? And where was the
certainty in a connection pattern if synapses could be switched in and
out of the action, depending on the needs of the moment?

But then again, the transmission of spikes did seem to tighten up when
things began to count. And connection patterns were more stable than
they were fluid. Even though it might be fine-tuning the activity at
the connections, a cell remained joined to the same group of 10,000 or
so other neurons. The growth changes needed to wrench itself away took
hours or days. And even then, there was only limited scope for change.
In a mature brain there was little room for movement by the cell body
or the long filament of the axon, so all that could really happen was
a slight shift in the balance of connections being made with the local
group of cells clustered around either its dendrites or its axon tip.

As a biological organ, the brain could not help being a little noisy
and unpredictable in its workings. But presumably that did not matter
as the brain would have the means to insulate its processing from mere
noise. The brain then used feedback to adjust its circuits and
competition to evolve its answers, which again introduced an element
of unpredictability. But ultimately, all this feedback and competition
appeared to be directed towards producing a well-organised response.
And no one could say that spikes and connection patterns did not
matter. To the computer-minded, the foundations might look soggy, but
there did seem to be something concrete going on. The brain's circuits
offered a processing landscape that might be plastic - it could adapt
to its experiences - but which still had enough structural rigidity to
make things happen.

The trouble with this charitable view was that there remained
something fundamentally different about brains and computers. Any
digitalism in the brain was a weak, blurred-edged, pseudo kind of
digitalism. Spikes and connection patterns emerged out of a sea of
metabolic and growth processes. Behind the scenes, everything had to
be in some kind of dynamic balance to create a particular state of
response. Computers, on the other hand, were digital by nature. They
dealt only in defined bits of data and defined processing paths. There
was no room for unpredictability. A transistor either worked to
specification, or it was broken. So if a computer wanted to behave
like a dynamic, feedback-tuned system, it had to fake it.

Being inherently predictable, a computer can only pretend to be basing
its calculations on unpredictable or continuously varying processes.
It is impossible to disguise the black and white nature of the
computer, even with clever tricks. For example, to make the neurons in
a backprop network seem more realistic, it is possible to program them
so that instead of sending out a simple binary on or off message, they
broadcast an actual value, some figure between the full-off of a 0 and
the full-on of a 1. The nodes can be made to appear to be dealing in
shades of grey rather than the unyielding blacks and whites of a
conventional computer.

The problem is that a digital computer can only specify any given
value to a limited number of decimal places. It does not have an
infinite number of registers to represent a figure, so in practice
every number has to be rounded off at some point. It is tempting to
believe that this does not really matter. After all, even specifying
the strength of its output signals to just a couple of decimal places
would give a backprop network a hundred shades of grey with which to
work. Plenty, it would seem. And computers can easily manage 32-bit or
even 64-bit precision in their calculations, quickly pushing the
available number of values into the millions. Surely, it would not
take too many more decimal places to render the problem of rounding off
completely irrelevant? A simulated neuron should be able to show all
the rich variety of output of a real one.
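
It is worth seeing why extra decimal places never quite settle the
matter in a feedback-driven system. Iterating the same simple feedback
rule at full double precision and again with rounding after every step
- a standard chaos demonstration, not the author's example - the two
trajectories part company within a few dozen steps:

```python
# Same feedback rule, two precisions. Rounding to six decimal places is
# already far finer than the hundred shades a two-decimal network gets,
# yet in a chaotic regime the tiny per-step error compounds.
r = 3.9
x_full, x_rounded = 0.2, 0.2
for step in range(1, 61):
    x_full = r * x_full * (1 - x_full)
    x_rounded = round(r * x_rounded * (1 - x_rounded), 6)
    if step % 10 == 0:
        print(step, round(x_full, 6), x_rounded)
# By step 40-60 the rounded trajectory bears no relation to the full
# one: no fixed number of decimal places makes the difference vanish.
```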

Going Inside - A Tour Round a Single Moment of Consciousness
John McCrone - 1999
http://www.amazon.com/exec/obidos/tg/detail/-/0880642629/qid=1085586459/
http://www.dichotomistic.com/readings_intro.html
 
