Chip with simple program for Toy

Glen M. Sizemore wrote:
"Daryl McCullough" <stevendaryl3016@yahoo.com> wrote in message
news:eueeep0rmt@drn.newsguy.com...
Kevin Aylward says...

I'm saying that you can give an evolutionary explanation for
*behavior*, because behavior affects survival and reproduction.
You can give an evolutionary explanation for brain structures
that give rise to those behaviors. So as long as you are
identifying consciousness with "the brain structures necessary to
produce such and such behavior" then I think it's fine to invoke
evolution as an explanation for consciousness. That's my preferred
approach. Consciousness is just another name for a sophisticated
process of modeling the world and acting on that model.

Consciousness is more. A kick in the balls *hurts*.

But we don't know whether that is "more" or not.

Just how is that accounted for by "a sophisticated process
of modeling the world and acting on that model"?

You haven't really said what it would *mean* to account
for it. We can certainly account for the fact that a kick
in the balls causes a person to grab his crotch, to scream,
to keel over, to strike back, to avoid situations in which
that sort of thing happens. What, exactly, are you thinking
is left to account for?

I don't know what Kevin is thinking, but I'll tell you what is
unaccounted for: the verbal response "That hurts." Accounting for
that is the key to "self-awareness," and it is reasonably well
understood by a few people.
Who would those people be? As I have noted prior, for me it is intrinsically
impossible to reduce "that hurts" to any existing mass-physics. I cover a bit
more on this in another post.


--
Kevin Aylward
ka@anasoftEXTRACT.co.uk
www.anasoft.co.uk
SuperSpice
 
RichD wrote:
On Mar 28, "Kevin Aylward" kevin_aylw...@ntlworld.com> wrote:
Yeah, the speed of light is an invariant in an inertial frame,
well, how do we know what is an inertial frame or not?
Look, mate, these issues are pretty fundamental and unresolved.

An inertial frame is a collection of objects
which are not accelerating with respect to
one another.

Er..how do you define acceleration?

Start with a measuring stick.
Then get a clock... tick, tick, tick...

Measure how far something went, in
a given number of ticks. Repeat,
later. If the distance is different,
for the same number of ticks, its
velocity changed, i.e. it accelerated.
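That stick-and-clock recipe can be caricatured in a few lines of Python (a toy
sketch only; the position readings are invented):

# Toy check of the stick-and-clock procedure described above.
# Positions are in stick-lengths, one reading per clock tick (made-up data).
positions = [0, 1, 2, 4, 7, 11]

# Distance covered during each tick.
distances = [b - a for a, b in zip(positions, positions[1:])]
print(distances)    # [1, 1, 2, 3, 4]

# If the distance per tick changes, the velocity changed, i.e. the object
# accelerated relative to this stick-and-clock frame.
print(any(d1 != d2 for d1, d2 in zip(distances, distances[1:])))    # True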
I don't suppose that you are acquainted with the fact that under General
Relativity, a body free falling in a gravitational field is not
accelerating?

if F=ma

How do you account for the fact that you feel the same in free space when
you are in a fixed position, i.e. not moving, as you do when falling
off a cliff, i.e. "accelerating", i.e. weightless? To wit, you don't feel any
forces, you're floating, so F=ma seems a tad of a problem, don't it.


--
Kevin Aylward
ka@anasoftEXTRACT.co.uk
www.anasoft.co.uk
SuperSpice
 
Daryl McCullough wrote:
Kevin Aylward says...

I'm saying that you can give an evolutionary explanation for
*behavior*, because behavior affects survival and reproduction.
You can give an evolutionary explanation for brain structures
that give rise to those behaviors. So as long as you are identifying
consciousness with "the brain structures necessary to produce
such and such behavior" then I think it's fine to invoke
evolution as an explanation for consciousness. That's my preferred
approach. Consciousness is just another name for a sophisticated
process of modeling the world and acting on that model.

Consciousness is more. A kick in the balls *hurts*.

But we don't know whether that is "more" or not.

Just how is that accounted for by "a sophisticated process
of modeling the world and acting on that model"?

You haven't really said what it would *mean* to account
for it.
I have already explained. It's like the axiom that the speed of light is
invariant. It is quite impossible to explain in any existing way. If you run
toward a light source, no matter what speed you run, the speed of the light
coming toward you will always be c. Why? It is simply a new fact of physics.
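The invariance shows up directly in the relativistic velocity-addition rule
w = (u + v)/(1 + u*v/c^2); a quick numeric check (the 0.9c runner speed is an
arbitrary choice):

# Combining any closing speed v with c still yields c; ordinary speeds
# do not simply add.
c = 299792458.0            # m/s
v = 0.9 * c                # speed of the runner toward the source (arbitrary)

def combine(u, v):
    # Relativistic addition of collinear velocities u and v.
    return (u + v) / (1.0 + u * v / c**2)

print(combine(c, v) / c)          # 1.0 (to rounding) -- light still arrives at c
print(combine(0.5 * c, v) / c)    # ~0.9655 -- not 1.4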

We cannot account for the thing that is pain, because it cannot, in
principle, be derived from existing knowledge. But like light, it is a
property that exists and contains new information.

We can certainly account for the fact that a kick
in the balls causes a person to grab his crotch, to scream,
to keel over, to strike back, to avoid situations in which
that sort of thing happens. What, exactly, are you thinking
is left to account for?
Yeah,...this is all getting pretty pointless.

Err. I can fake all of that, without a kick in the balls. The
bit that is missing in the description is the bit that hurts.

You are simply arguing that because we can't express the actual experience of
pain in inanimate physical terms, there is no new information in pain and,
so you claim, there is no more to it. Machines don't feel pain, we do,
therefore there must be something else. This is trivially obvious. Denying
that there is more to a kick in the balls than the physical expression of
that kick in the balls is de facto denying that consciousness exists. You can
do that if you like.

So, here we go again: Godel tells us that there are new statements, not
derivable from existing knowledge. That is, there are true statements that
are not explainable by prior knowledge. The inability to explain these new
statements does not imply that those statements are meaningless, i.e. that
there is nothing more to know beyond the existing knowledge. These
statements carry new information. Consciousness is such new information. We
have to extend our axiom set to include consciousness, as a new,
unexplainable fact, and accept that it can't be reduced to something else. We
have to explain emotions like pain in others by referencing them to
ourselves, i.e. our own experiences. Sure, it would be nice to have an
independent verification method, but we don't. Conventional physics simply
fails to account for consciousness. Physics is incomplete in the Godel sense.
Physics cannot explain all properties of mass-energy systems, and there is
no reason why it should be able to. However, the fact that physics can't
explain this property does not mean that physics fails to predict the actions
of conscious entities, just as it predicts those of non-conscious entities.
Consciousness cannot physically do anything; it is a VDU by-product of the
electro-chemical machine. Physical action requires physical forces.
Consciousness is not physical. It's a bit like temperature in a sense.
Temperature is just a label for how entities behave when they have different
energies; it is not a physical thing as such.



--
Kevin Aylward
ka@anasoftEXTRACT.co.uk
www.anasoft.co.uk
SuperSpice
 
"Kevin Aylward" <kevin_aylward@ntlworld.com> wrote in message
news:AdTOh.25116$0Z1.4823@newsfe7-win.ntli.net...
Glen M. Sizemore wrote:
"Daryl McCullough" <stevendaryl3016@yahoo.com> wrote in message
news:eueeep0rmt@drn.newsguy.com...
Kevin Aylward says...

I'm saying that you can give an evolutionary explanation for
*behavior*, because behavior affects survival and reproduction.
You can give an evolutionary explanation for brain structures
that give rise to those behaviors. So as long as you are
identifying consciousness with "the brain structures necessary to
produce such and such behavior" then I think it's fine to invoke
evolution as an explanation for consciousness. That's my preferred
approach. Consciousness is just another name for a sophisticated
process of modeling the world and acting on that model.

Consciousness is more. A kick in the balls *hurts*.

But we don't know whether that is "more" or not.

Just how is that accounted for by "a sophisticated process
of modeling the world and acting on that model"?

You haven't really said what it would *mean* to account
for it. We can certainly account for the fact that a kick
in the balls causes a person to grab his crotch, to scream,
to keel over, to strike back, to avoid situations in which
that sort of thing happens. What, exactly, are you thinking
is left to account for?

I don't know what Kevin is thinking, but I'll tell you what is
unaccounted for: the verbal response "That hurts." Accounting for
that is the key to "self-awareness," and it is reasonably well
understood by a few people.

Who would those people be?
Radical behaviorists and, to a lesser extent, some Wittgensteinians. Maybe
some others.

As I have noted prior, for me it is intrinsically impossible to reduce "that
hurts" to any existing mass-physics.
Physics is not yet relevant to much other than what is traditionally
regarded as physics. It is not now, and probably never will be, directly
relevant to the behavior of animals, though animals are made of matter.

I cover a bit
more on this in another post.


--
Kevin Aylward
ka@anasoftEXTRACT.co.uk
www.anasoft.co.uk
SuperSpice
 
partso2@yahoo.com wrote:
On Mar 28, 10:20 pm, Wolf <ElLoboVi...@ruddy.moss> wrote:
part...@yahoo.com wrote:

[...]

Now it's obvious that free will can
force one of these following states to actually happen, preventing all
the rest, without violating any physical law.
It's not obvious at all. Unless of course "free will" is some sort of
physical force that acts at the quantum scale.

No, you don't have to assume that. What I wanted to say is that if
free will exists ('physical' or not), and if it affects the brain
workings by selection of quantum states as described, then there's no
contradiction to known laws of physics.
Oh, but there is. The known laws of physics do not allow for quantum
states to be selected as you describe.

Thus I wanted to refute the
materialist claim that physical laws prevent free will.
Materialism doesn't imply what you are trying to refute.

Now, free will being a 'physical force' or not depends on your
definition of that. If 'physical force' is anything that acts on
physical objects, then free will is physical, by definition, by acting
on the brain. If you want to include in the definition that physical
force must result out of some mass, well, I believe it isn't, and gave
a few hints in the direction in my previous post.
I have no idea where you get the notion that force must result out of
some mass.

Care to describe what such a force would have to be able to do? And how
it could do so?

It should be able to select the following brain (quantum) state out
of the possible states following a given state, as allowed by quantum
mechanics, of course. Maybe more, but at least that.
You've actually described Maxwell's Demon. Look it up.

That's enough to
understand free will.
Oh no it ain't. All you've done is posit a SOO (Something Or Other) that
satisfies your a priori notions about free will. That explains nothing,
not even your notions.

How it can do it? well, that's an illegitimate
question in physics.
On the contrary, that's all physics is about.

Do you know how charges attract/repel each
other? all physics can do is offer descriptions which don't really
explain anything
Ah, I see, you're one of those people who can't or won't accept that
such descriptions are all we can do. There are unanswerable questions,
you know. Read St Augustine on that subject - very instructive.

(virtual photons? well, then I'll ask how exactly can
the charges emit/absorb the virtual photons? and if you answer that,
I'll ask how does THAT work, and so on). Feynman once called gravity
'a mysterious force no one knows how/why it acts', but really, we
don't need Feynman for that.
Be careful of assuming you understand what Feynman meant.

The question of how things are able to
do what they do isn't physical, but philosophical.
That's not how I intended my question. I intended it as a thoroughly
physical one.

Physics finds
enough difficulty in describing how they do it, no more.
The distinction between how things are able to do something, and how
they do it, is lost on me. I have no idea how you got onto it. You seem
to have misunderstood my perfectly idiomatic usage of "what is X able to
do."


--


Wolf

"Don't believe everything you think." (Maxine)
 
On Mar 29, 9:07 pm, Wolf <ElLoboVi...@ruddy.moss> wrote:
part...@yahoo.com wrote:
On Mar 28, 10:20 pm, Wolf <ElLoboVi...@ruddy.moss> wrote:
part...@yahoo.com wrote:

[...]

Now it's obvious that free will can
force one of these following states to actually happen, preventing all
the rest, without violating any physical law.
It's not obvious at all. Unless of course "free will" is some sort of
physical force that acts at the quantum scale.

No, you don't have to assume that. What I wanted to say is that if
free will exists ('physical' or not), and if it affects the brain
workings by selection of quantum states as described, then there's no
contradiction to known laws of physics.

Oh, but there is. The known laws of physics do not allow for quantum
states to be selected as you describe.

Why not? Take a simple example: a photon hits a surface. It can
either be reflected or not. What decides what it does? Nothing
physics knows about - physics knows only the probabilities. But now,
let's assume there's something (physical or not - see below) that
actually decides what the photon is to do. Does this 'something'
violate any known law? Actually, if it keeps working on the 4%
probability, you'll never know if it's there or not. Now, known laws
don't offer such a 'something', of course, but also don't claim there
isn't one, as they say nothing about what actually determines the
photon's behaviour (if indeed anything does). That's what I meant by
free will not contradicting them.
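The "keeps the 4% statistics" point can be illustrated with a toy simulation.
It only compares long-run frequencies; it says nothing about whether such a
selector would be consistent with quantum mechanics in any other respect:

import random
random.seed(0)
N = 100000

# (a) Standard picture: each photon is reflected with probability 0.04.
quantum = [random.random() < 0.04 for _ in range(N)]

# (b) A hidden "selector" deciding each outcome by a rule of its own, while
#     keeping the long-run rate at 4% (here: every 25th photon reflects).
selector = [i % 25 == 0 for i in range(N)]

print(sum(quantum) / N)     # close to 0.04
print(sum(selector) / N)    # exactly 0.04
# The marginal frequencies agree, so frequency counts alone cannot tell
# the two apart (correlations between outcomes are another matter).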

Now, free will being a 'physical force' or not depends on your
definition of that. If 'physical force' is anything that acts on
physical objects, then free will is physical, by definition, by acting
on the brain. If you want to include in the definition that physical
force must result out of some mass, well, I believe it isn't, and gave
a few hints in the direction in my previous post.

I have no idea where you get the notion that force must result out of
some mass.

I didn't. I wasn't sure of your definition of 'physical force', and
am not now, so I tried various options I thought you might have in
mind. Please supply your actual definition, so we can really
discuss whether it's physical or not.

Care to describe what such a force would have to be able to do? And how
it could do so?

It should be able to select the following brain (quantum) state out
of the possible states following a given state, as allowed by quantum
mechanics, of course. Maybe more, but at least that.

You've actually described Maxwell's Demon. Look it up.

Not exactly. The Demon actually forces molecules not to do what
they wanted to - they wanted to pass, and he stopped (some of) them.
Selecting a quantum state doesn't force the system to work in a
previously-impossible way, but rather selects one of the possible ways
of behaving. It's like opening the box in which Schroedinger's cat is
- you don't force it to do something it couldn't do before (like
changing the molecule's momentum). Only in opening the box you don't
have control over the result, and free will does. It's like opening the
box and forcing the cat to be alive. There's no known mechanism to do
it with real cats, but no reason to think there isn't one.

That's enough to
understand free will.

Oh no it ain't. All you've done is posit a SOO (Something Or Other) that
satisfies your a priori notions about free will. That explains nothing,
not even your notions.

There's probably a misunderstanding here. I answered a different
question than what you meant to ask, and you thought I was trying to
explain something I didn't.

How it can do it? well, that's an illegitimate
question in physics.

On the contrary, that's all physics is about.

I take it you only want the physical mechanism by which it selects
these quantum states (like saying that electrons repel by exchanging
virtual photons, without any further understanding). Well, first we
have to clarify whether free will is indeed physical before we know if
such a question is legitimate. Again, please supply your definition for
that.

Do you know how charges attract/repel each
other? all physics can do is offer descriptions which don't really
explain anything

Ah, I see, you're one of those people who can't or won't accept that
such descriptions are all we can do. There are unanswerable questions,
you know. Read St Augustine on that subject - very instructive.

Well, of course, you speak of physics, and I of philosophy. These
descriptions are certainly not all we can do in philosophy, and
philosophers actually try to answer such questions. In general,
there are no unaskable questions in philosophy (I mean different
philosophers may regard different questions as unanswerable, but
there's no general consensus that there are such, as far as I know).
Again, we need to define things in order to know whether our
discussion is in the physical domain or the philosophical.

(virtual photons? well, then I'll ask how exactly can
the charges emit/absorb the virtual photons? and if you answer that,
I'll ask how does THAT work, and so on). Feynman once called gravity
'a mysterious force no one knows how/why it acts', but really, we
don't need Feynman for that.

Be careful of assuming you understand what Feynman meant.

Thanks for the warning. I'll try to keep that in mind. But do you
have a better idea of his meaning?

The question of how things are able to
do what they do isn't physical, but philosophical.

That's not how I intended my question. I intended it as a thoroughly
physical one.

Physics finds
enough difficulty in describing how they do it, no more.

The distinction between how things are able to do something, and how
they do it, is lost on me. I have no idea how you got onto it. You seem
to have misunderstood my perfectly idiomatic usage of "what is X able to
do."

I could clarify my meaning and this obvious distinction, but it
seems to open a side-discussion, not really related to the main topic.
Anyway, I did answer the 'what it's able to do', I think.

--

Wolf

"Don't believe everything you think." (Maxine)-
I think I believe in that quote.
 
Kevin Aylward says...
Daryl McCullough wrote:

We can certainly account for the fact that a kick
in the balls causes a person to grab his crotch, to scream,
to keel over, to strike back, to avoid situations in which
that sort of thing happens. What, exactly, are you thinking
is left to account for?

Yeah,...this is all getting pretty pointless.
I'm trying to understand the sense in which your "hard problem"
is *not* pointless. It's not clear to me that there is any
hard problem at all.

I mean, there is certainly a hard problem of how does the
human brain do what it does.

Err. I can fake all of that, without a kick in the
balls.
So what? I'm suggesting that once you've explained a causal
connection between (A) getting kicked in the crotch, and (B)
the behaviors that I described, then you've explained all there
is to explain about pain. The fact that the same behaviors can
arise in other circumstances doesn't affect this. "Pain" is just
a name that we give for the causal relationship between
environmental stimuli and behaviors.

The bit that is missing in the description is the bit that hurts.

You are simply arguing that because we can't express the actual
experience of pain in inanimate physical terms,
No, I'm saying on the contrary, that pain is the name of
a particular causal relationship between stimuli and behaviors.

It's actually more complicated than that, but not in the
sense that the causal relationships are leaving anything out.

Obviously, for a creature to be able to survive, it needs to
adjust its behavior based on environmental clues. If you touch
something really hot, you should yank away your hand before it
is damaged.

However, an automatic, rule-like response is too inflexible. Sometimes,
the best strategy is to endure a small amount of damage in order to
prevent a much worse fate. For example, if you have to walk barefoot
over broken glass in order to escape from a deadly predator, you'll
do it.

So creatures have a two-step process. Evidence that the body is getting
damaged is noted, and then a *separate* process takes place to decide
what to do about it. In this second process, other factors, such as
the possibility of great danger or great reward, can take priority
over the simple imperative to avoid damaging your body.

Pain is just a memo that the creature writes to itself: Hey, I'm
getting damaged here. Do something about it at the first opportunity.
The intensity of pain is an indication of the priority.

This is obviously just a sketch, and not a full theory of pain,
but I really don't see where you think that it becomes necessary
to introduce qualia.
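That two-step description could be caricatured in code. This is a purely
illustrative toy; the functions, priorities and messages are all invented:

import heapq

memos = []    # priority queue of (negative priority, message)

def note_damage(location, intensity):
    # Step 1: record that damage is occurring, tagged with a priority.
    heapq.heappush(memos, (-intensity, "damage at " + location))

def note_goal(description, urgency):
    # Other drives (escape, reward) compete in the same queue.
    heapq.heappush(memos, (-urgency, description))

def decide():
    # Step 2: act on whatever currently has the highest priority.
    priority, message = heapq.heappop(memos)
    return "acting on: %s (priority %d)" % (message, -priority)

note_damage("foot", intensity=3)          # walking over broken glass
note_goal("flee the predator", urgency=9)
print(decide())    # acting on: flee the predator (priority 9)
print(decide())    # acting on: damage at foot (priority 3)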

Machines don't feel pain, we do, therefore there must be something else.
We can give purely behavioral criteria for saying that the response
of animals to damage is very different from the response of
any existing machines.

So, here we go again, Godel tells us that there are new statements, not
derivable from existing knowledge.
Godel has no relevance here that I can see.

That is, there are true statements that are not explainable by prior
knowledge.
That's not what Godel says. Godel says that for any axiomatizable
theory in the language of arithmetic, there are true statements that
are not provable by that theory.

But the interesting thing about Godel's sentence G which is
"true but unprovable" is that it has *consequences* that are
understandable in terms of our incomplete axiomatizable theory.
These statements are like Popper's falsifiable claims. They can
be *disproved* by a finite observation, but cannot be proved
by any finite observation.

In contrast, the "new physics" that you are proposing to explain
consciousness has *no* observable consequences.

...Consciousness cannot physically do anything; it is a VDU
by-product of the electro-chemical machine. Physical action
requires physical forces. Consciousness is not physical. It's a
bit like temperature in a sense. Temperature is just a label
for how entities behave when they have different energies; it is not a
physical thing as such.
That's an interesting analogy, and I might be willing to go along with
it. However, I would say that temperature is *not* new physics, in
a certain sense. If you have billions and billions of little particles
all interacting via Newtonian mechanics, you can identify statistical
properties of the whole collection and prove things about those properties
using Newtonian mechanics. Temperature is such a statistical property.
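For instance, here is a minimal sketch using the standard kinetic-theory
relation <KE> = (3/2) k_B T; the velocities below are just random numbers of a
plausible size, not a real simulation:

import random
random.seed(1)

k_B = 1.380649e-23    # J/K
m = 6.6e-27           # kg, roughly one helium atom

def kinetic_energy(vx, vy, vz):
    return 0.5 * m * (vx*vx + vy*vy + vz*vz)

# Fake a gas: 100000 particles with random velocity components.
particles = [(random.gauss(0, 1100), random.gauss(0, 1100), random.gauss(0, 1100))
             for _ in range(100000)]

mean_ke = sum(kinetic_energy(*v) for v in particles) / len(particles)
T = 2.0 * mean_ke / (3.0 * k_B)    # invert <KE> = (3/2) k_B T
print(round(T))    # a few hundred kelvin for these made-up numbers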

So would you say that consciousness is just a label for macroscopic
properties of certain living creatures?

--
Daryl McCullough
Ithaca, NY
 
Glen M. Sizemore wrote:
"Kevin Aylward" <kevin_aylward@ntlworld.com> wrote in message
news:AdTOh.25116$0Z1.4823@newsfe7-win.ntli.net...
Glen M. Sizemore wrote:
"Daryl McCullough" <stevendaryl3016@yahoo.com> wrote in message
news:eueeep0rmt@drn.newsguy.com...
Kevin Aylward says...

I'm saying that you can give an evolutionary explanation for
*behavior*, because behavior affects survival and reproduction.
You can give an evolutionary explanation for brain structures
that give rise to those behaviors. So as long as you are
identifying consciousness with "the brain structures necessary to
produce such and such behavior" then I think it's fine to invoke
evolution as an explanation for consciousness. That's my
preferred approach. Consciousness is just another name for a
sophisticated process of modeling the world and acting on that
model.

Consciousness is more. A kick in the balls *hurts*.

But we don't know whether that is "more" or not.

Just how is that accounted for by "a sophisticated process
of modeling the world and acting on that model"?

You haven't really said what it would *mean* to account
for it. We can certainly account for the fact that a kick
in the balls causes a person to grab his crotch, to scream,
to keel over, to strike back, to avoid situations in which
that sort of thing happens. What, exactly, are you thinking
is left to account for?

I don't know what Kevin is thinking, but I'll tell you what is
unaccounted for: the verbal response "That hurts." Accounting for
that is the key to "self-awareness," and it is reasonably well
understood by a few people.

Who would those people be?

Radical behaviorists and, to a lesser extent, some Wittgensteinians.
Maybe some others.
Oh.. You mean you believe that these people actually account for what pain
is by a reductionist argument? Any references?

As I have noted prior, for me it is intrinsically impossible to
reduce "that hurts" to any existing mass-physics.

Physics is not yet relevant to much other than what is traditionally
regarded as physics. It is not now, and probably never will be,
directly relevant to the behavior of animals, though animals are made
of matter.
This really misses my point a bit. Sure, in practice, physics doesn't say a
lot about human behaviour, but in principle, it says everything there is to
know. All mass-energy obeys the laws of physics. Period. In principle, human
behaviour is a solution of the Schrodinger equation, along with all the other
laws of physics too numerous to mention. The fact that the equations are
impractical to solve is not really relevant to the principle.

The point above is that, despite behaviour being the result of the strict
laws of physics, physics can not account for the existence of awareness.
Awareness is a new feature of mass-energy physics.

--
Kevin Aylward
ka@anasoftEXTRACT.co.uk
www.anasoft.co.uk
SuperSpice
 
Daryl McCullough wrote:
Kevin Aylward says...

Daryl McCullough wrote:

We can certainly account for the fact that a kick
in the balls causes a person to grab his crotch, to scream,
to keel over, to strike back, to avoid situations in which
that sort of thing happens. What, exactly, are you thinking
is left to account for?

Yeah,...this is all getting pretty pointless.

I'm trying to understand the sense in which your "hard problem"
is *not* pointless. It's not clear to me that there is any
hard problem at all.
It's not my problem. Google "hard problem" consciousness.

I simply fail to understand why you don't understand that the hard problem
is real. I think you would be better off doing some reading on this to get a
wider view than mine as to why many people think it is important.

I mean, there is certainly a hard problem of how does the
human brain do what it does.
In the consciousness field, this is called the "Easy Problem". That is,
physically how does an eye work, what signals are present, etc.; although
difficult in some contexts, it's all engineering.

Err. I can fake all of that, without a kick in the
balls.

So what? I'm suggesting that once you've explained a causal
connection between (A) getting kicked in the crotch, and (B)
the behaviors that I described, then you've explained all there
is to explain about pain. The fact that the same behaviors can
arise in other circumstances doesn't affect this. "Pain" is just
a name that we give for the causal relationship between
environmental stimuli and behaviours.
No it isn't. It's a name we give to the *feeling* we get that accompanies
certain stimuli. The behaviour that occurs with this feeling is incidental
to the feeling itself.

The bit that is missing in the description is the bit that hurts.

You are simply arguing that because we can't express the actual
experience of pain in inanimate physical terms,

No, I'm saying on the contrary, that pain is the name of
a particular causal relationship between stimuli and behaviours.
In part, pain is associated with this idea, but it is more than this idea,
and words cannot express it.

Again, you are simply denying that there is something else, because it
cannot be reduced to anything less than this: pain is something we feel. What
we feel is what we are aware of; what we are aware of is that thing our
consciousness gives us, etc., etc.

It's actually more complicated than that, but not in the
sense that the causal relationships are leaving anything out.
They do. Pain is something that hurts.

Obviously, for a creature to be able to survive, it needs to
adjust its behavior based on environmental clues. If you touch
something really hot, you should yank away your hand before it
is damaged.
But why should there be the feeling of pain?

The suggestion of Zombies is that a machine could act in the same manner as
you described above, but not actually *feel* the discomfort of pain. So, why
the pain feeling?

Which is actually an argument as to why consciousness evolved. It is quite
reasonable to suggest that this feeling of pain, by a non-Zombie, is more
advantageous to replication than the non-pain of a Zombie. One wants to move
one's hand because it hurts, not because of the damage that is being done.

Why are we *not* Zombies if the results are the same?

However, an automatic, rule-like response is too inflexible.
Sometimes, the best strategy is to endure a small amount of damage in
order to prevent a much worse fate. For example, if you have to walk
barefoot over broken glass in order to escape from a deadly predator,
you'll do it.
All of this is trivial, and ignores the point.

So creatures have a two-step process. Evidence that the body is
getting damaged is noted, and then a *separate* process takes place
to decide what to do about it. In this second process, other factors,
such as the possibility of great danger or great reward, can take
priority over the simple imperative to avoid damaging your body.

Pain is just a memo that the creature writes to itself: Hey, I'm
getting damaged here. Do something about it at the first opportunity.
The intensity of pain is an indication of the priority.
Ho humm... preaching to the converted. Look,

http://www.kevinaylward.co.uk/replicators/emotions.html

I *define* genes and memes. I derive properties of such genes and memes, and
I mathematically define emotions from these definitions and properties, to
wit:

Definition of an Emotion - a conscious experienced trait of a Replicator,
such that that trait attempts to maximize its Replicator numbers.

I state what emotions are, and why they are as they are.

I have been all through your arguments, extensively. None of them explain
conscious experience.

This is obviously just a sketch, and not a full theory of pain,
but I really don't see where you think that it becomes necessary
to introduce qualia.
Because it exists. And no, qualia don't physically do anything, so in one
sense they aren't necessary to explain behaviour. In another sense they are,
because, no matter how much you argue about the physical realities, "I am
aware that I exist" is a fact. A stone makes no such assertion.

Machines don't feel pain, we do, therefore there must be something
else.

We can give purely behavioral criteria for saying that the response
of animals to damage is very different from the response of
any existing machines.
I have already covered all of this in my papers noted above, "The Mean
Meme-Gene Darwinian Machine".

If you have a new argument, please let me know.

So, here we go again, Godel tells us that there are new statements,
not derivable from existing knowledge.

Godel has no relevance here that I can see.
Oh dear...

That is, there are true statements that are not explainable by prior
knowledge.

That's not what Godel says.
Yes he does.

Godel says that for any axiomatizable
theory in the language of arithmetic, there are true statements that
are not provable by that theory.
Just what do you think "provable" means? It means taking some axioms and
trying to *derive*, by logical steps, a new relation. Godel states that some
of these new relations that are true cannot be derived, i.e. proved; they
stand on their own as new information. They are then added to the list
of axioms.

There is an attempt to explain consciousness. That is, to derive the fact,
from logical application of the known mathematical laws of physics, that we
are "aware". Physics is founded on arithmetic, and so is constrained by
Godel. Hint: Physics works by modelling the universe with mathematics, hence
all explanations depend on arithmetic.

There is no Consciousness = formula

We cannot derive the experience of "hurt" from knowledge of elementary
particle properties. All you are doing is saying that consciousness doesn't
exist because you don't believe that there are any observable consequences.
I disagree; there is no evidence that any non-conscious entity exists that
behaves in enough detail like a conscious entity. Show me a non-conscious
entity that behaves sufficiently like a conscious one. The onus is on the
proposer to show that something actually exists. The proof that
consciousness exists is you.


In contrast, the "new physics" that you are proposing to explain
consciousness has *no* observable consequences.
No, you claim it doesn't, but have no evidence to support this assertion.
That is, produce a non-conscious entity that behaves sufficiently like a
conscious entity.


...Consciousness cannot physically do anything; it is a VDU
by-product of the electro-chemical machine. Physical action
requires physical forces. Consciousness is not physical. It's a
bit like temperature in a sense. Temperature is just a label
for how entities behave when they have different energies; it is not
a physical thing as such.

That's an interesting analogy, and I might be willing to go along with
it. However, I would say that temperature is *not* new physics, in
a certain sense. If you have billions and billions of little particles
all interacting via Newtonian mechanics, you can identify statistical
properties of the whole collection and prove things about those
properties using Newtonian mechanics. Temperature is such a
statistical property.
True, but irrelevant to my point.

So would you say that consciousness is just a label for macroscopic
properties of certain living creatures?
No.

It just seems to me that you have great difficulty with the actual principle
of new axioms. Not everything we learn as children as self-evident truths
is enough to explain everything we see.

--
Kevin Aylward
ka@anasoftEXTRACT.co.uk
www.anasoft.co.uk
SuperSpice
 
"Kevin Aylward" <kevin_aylward@ntlworld.com> wrote in message
news:glVOh.8749$F82.6850@newsfe4-win.ntli.net...
Glen M. Sizemore wrote:
"Kevin Aylward" <kevin_aylward@ntlworld.com> wrote in message
news:AdTOh.25116$0Z1.4823@newsfe7-win.ntli.net...
Glen M. Sizemore wrote:
"Daryl McCullough" <stevendaryl3016@yahoo.com> wrote in message
news:eueeep0rmt@drn.newsguy.com...
Kevin Aylward says...

I'm saying that you can give an evolutionary explanation for
*behavior*, because behavior affects survival and reproduction.
You can give an evolutionary explanation for brain structures
that give rise to those behaviors. So as long as you are
identifying consciousness with "the brain structures necessary to
produce such and such behavior" then I think it's fine to invoke
evolution as an explanation for consciousness. That's my
preferred approach. Consciousness is just another name for a
sophisticated process of modeling the world and acting on that
model.

Consciousness is more. A kick in the balls *hurts*.

But we don't know whether that is "more" or not.

Just how is that accounted for by "a sophisticated process
of modeling the world and acting on that model"?

You haven't really said what it would *mean* to account
for it. We can certainly account for the fact that a kick
in the balls causes a person to grab his crotch, to scream,
to keel over, to strike back, to avoid situations in which
that sort of thing happens. What, exactly, are you thinking
is left to account for?

I don't know what Kevin is thinking, but I'll tell you what is
unaccounted for: the verbal response "That hurts." Accounting for
that is the key to "self-awareness," and it is reasonably well
understood by a few people.

Who would those people be?

Radical behaviorists and, to a lesser extent, some Wittgensteinians.
Maybe some others.

Oh.. You mean you believe that these people actually account for what pain
is by a reductionist argument? Any references?
I don't recall saying anything about a "reductionistic argument."
Furthermore the term "reductionistic" is used in two different ways - you
are probably talking about what is sometimes called "constitutive
reductionism." I do have references - or at least partial refs. (you can
track down the full ref. from what I give you).

The Operational Analysis of Psychological Terms (1945) BF Skinner. Science
and Human Behavior (1953) BF Skinner (particularly Chapter XVII; "Private
events in a natural science"). Verbal Behavior (1957) BF Skinner (Chapter 5;
"The Tact", especially under the heading "Verbal behavior under the control
of private stimuli" - begins page 130). That should hold you for a while.

As I have noted prior, for me it is intrinsically impossible to
reduce "that hurts" to any existing mass-physics.

Physics is not yet relevant to much other than what is traditionally
regarded as physics. It is not now, and probably never will be,
directly relevant to the behavior of animals, though animals are made
of matter.

This really misses my point a bit. Sure, in practice, physics doesn't say
a lot about human behaviour, but in principle, it says everything there is
to know.
This is one of my favorite quotes describing science: "Scientific
description must be consistent with the resources available to an observer
who belongs to the world he describes and cannot refer to some being who
contemplates the physical world 'from the outside.'" I interpret it,
perhaps, a bit more broadly than its authors - Ilya Prigogine and Isabelle
Stengers - but it strikes me that you are talking about some outside
"observer."

All mass-energy obeys the laws of physics. Period. In principle, human
behaviour is a solution of the Schrodinger equation, along with all the
other laws of physics too numerous to mention. The fact that the equations
are impractical to solve is not really relevant to the principle.
Ok, God, I'll remember that.

The point above is that, despite behaviour being the result of the strict
laws of physics,
Perhaps for You, errr, Thou.

physics can not account for the existence of awareness. Awareness is a new
feature of mass-energy physics.
You must have meant "[current] physics can not account for the existence of
awareness. Awareness is a new feature of mass-energy physics," otherwise
your statement reduces to not A=A.

Your Humble Supplicant,
Glen





--
Kevin Aylward
ka@anasoftEXTRACT.co.uk
www.anasoft.co.uk
SuperSpice
 
billcalley wrote:
What I gleaned from the excellent answers for the original "VSWR
Doesn't Matter?" thread is that high VSWR doesn't really matter in a
lossless transmission line environment between a transmitter's antenna
tuner and the antenna, since any reflected RF energy will simply
continue to "bounce" back and forth between the tuner's output
impedance and the antenna's input impedance until it is, finally,
completely radiated from the antenna without loss.

But then why does the concept of "mismatch loss" exist in
reference to antennas? I have quickly calculated that if a
transmitter outputs 100 watts, and the TX antenna has an impedance
that will cause a VSWR of 10:1 -- using lossless transmission line --
that the mismatch loss in this "lossless" system would be 4.81dB!
(Reflected power 66.9 watts, RL -1.74 dB).
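(Those figures do follow from the standard definitions for a mismatched load
fed from a matched source -- a quick check, for reference; as discussed below,
a tuned source changes the accounting:)

import math

vswr = 10.0
gamma = (vswr - 1.0) / (vswr + 1.0)                      # |Gamma| ~ 0.818

reflected_fraction = gamma**2                            # ~ 0.669
return_loss_db = -20.0 * math.log10(gamma)               # ~ 1.74 dB
mismatch_loss_db = -10.0 * math.log10(1.0 - gamma**2)    # ~ 4.81 dB

print(round(100 * reflected_fraction, 1))    # 66.9 (watts, for 100 W forward)
print(round(return_loss_db, 2))              # 1.74
print(round(mismatch_loss_db, 2))            # 4.81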

Since mismatch loss is the "amount of power lost due to
reflection", and is as if an "attenuator with a value of the mismatch
loss were placed in series with the transmission line", then I would
think that VSWR would *definitely* matter, and not just for highly
lossy lines either. But here again, I'm probably not seeing the
entire picture here. What am I missing??

Your "amount of power lost due to reflection" statement would be true if
the line were connected to something resistive at the line's
characteristic impedance. With a properly tuned tuner, that's not the
case -- instead, the impedance looking into the tuner will also reflect
power, and in a way that makes it all work out so that the power all
ends up being radiated, which is what you wanted in the first place.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google? See http://cfaj.freeshell.org/google/

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" gives you just what it says.
See details at http://www.wescottdesign.com/actfes/actfes.html
 
On 28 Mar 2007 03:06:06 -0700, partso2@yahoo.com wrote:

Some materialists here seem to claim that consciousness isn't a
separate, non-physical entity, so that brain states = mind states
completely.

It could depend on what one means by a "brain state". While it may be
possible to observe some brain states, there could be many that are
too subtle to be scientifically observed.

So mind states could be equivalent to non-observable brain states.

Further, if mind states and brain states were completely independent,
then how could the mind and brain interact with each other?

For example, how could I see anything if the images transmitted from
my eyes to my brain did not somehow affect my mind states?

And how could I move anything if my mind states could not cause my
brain states to send nerve signals to my muscles?

It seems to me that mind/matter/energy must be based on one substance.

Cheers,
Surfer
http://www.pantheism.net
http://www.pantheist.net
 
partso2@yahoo.com wrote:
On Mar 29, 9:07 pm, Wolf <ElLoboVi...@ruddy.moss> wrote:
part...@yahoo.com wrote:
On Mar 28, 10:20 pm, Wolf <ElLoboVi...@ruddy.moss> wrote:
part...@yahoo.com wrote:
[...]
Now it's obvious that free will can
force one of these following states to actually happen, preventing all
the rest, without violating any physical law.
It's not obvious at all. Unless of course "free will" is some sort of
physical force that acts at the quantum scale.
No, you don't have to assume that. What I wanted to say is that if
free will exists ('physical' or not), and if it affects the brain
workings by selection of quantum states as described, then there's no
contradiction to known laws of physics.
Oh, but there is. The known laws of physics do not allow for quantum
states to be selected as you describe.

Why not? take a simple example: a photon hits a surface. It can
either be reflected or not. What decides what it does? Nothing
physics knows about - physics knows only the probabilities.
Actually, physics knows pretty exactly what determines
reflection/absorption. A couple of the relevant factors are the angle of
approach and the energy of the photon. Heisenberg uncertainty prevents
us from knowing both of these precisely enough to predict the trajectory
of the photon after either of these factors has been measured. IOW, we
cannot precisely predict the trajectory of the photon. But all the same,
it is completely determined.

Partso2, I don't think you know enough physics to argue your point(s).


[...]
 
Kevin Aylward says...

It's not my problem. Google "hard problem" consciousness.
Yes, I know the phrase has been around for a long time.
I used to have email discussions with David Chalmers about it.
But I still am not convinced that there is any "hard problem".

I simply fail to understand why you don't understand that the hard problem
is real.
Not all big name philosophers agree that it's real. For example,
Dennett http://www.imprint.co.uk/online/HP_dennett.html and
"There is no Hard Problem of Consciousness", Kieron O'Hara & Tom Scutt
(not available online).

I mean, there is certainly a hard problem of how does the
human brain do what it does.

In the consciousness field, this is called the "Easy Problem".
Yes, well, the "Easy Problem" is hard enough. There is no
reason to go around making up new problems for oneself.

That is, physically how does an eye work, what signals
are present, etc.; although difficult in some contexts, it's
all engineering.
I suppose so.

...I'm suggesting that once you've explained a causal
connection between (A) getting kicked in the crotch, and (B)
the behaviors that I described, then you've explained all there
is to explain about pain. The fact that the same behaviors can
arise in other circumstances doesn't affect this. "Pain" is just
a name that we give for the causal relationship between
environmental stimuli and behaviours.

No it isn't. It's a name we give to the *feeling* we get that accompanies
certain stimuli.
And a feeling is...? A feeling is a "loose" causal relation
between stimuli and behavior. I say "loose" because, as I
explain below, rather than stimulus directly causing behavior,
the feelings are intermediaries, informational states containing
something of the stimulus that caused them, and something of the
behavior that it predisposes the "feeler" to perform.

The behaviour that occurs with this feeling is incidental
to the feeling itself
I think that's completely wrong. Pain is only pain because
of the sort of behavior that it leads to. If you could completely
disconnect pain from behavior, it wouldn't be pain anymore.

Suppose that some sophisticated brain surgery rewired your
brain so that the sensation of tasting sweetness and
the sensation of pain were switched. So tasting sweetness
causes pain, and stubbing your toe causes you to taste
sweetness. But the rewiring also made the corresponding
change in behavior, so that you tend to flinch when you
taste sweetness, but you tend to seek out pain (especially
painful ice-cream).

I would say that such a rewiring is *meaningless*. There
is no meaning to "the feeling of tasting sweetness" other
than the typical causes of that mental state and the typical
behaviors that result. There *is* no such thing as a feeling
divorced from stimulus and behavior.

Again, you are simply denying that there is something else, because it
cannot be reduced to anything less than this: pain is something we feel.

It's actually more complicated than that, but not in the
sense that the causal relationships are leaving anything out.

They do. Pain is something that hurts.
That's a tautology, and so conveys no information. If
an observation contains no information, then there is
nothing to explain.

The point about *scientific* explanation is that it
has the power to unify diverse observations and to
predict future observations. Newton's law of gravity
unified the motion of the planets and that of a
dropped apple. It allowed people to calculate
and predict trajectories of cannon balls and rocket
ships. In contrast, a theory of "feelings" makes
no testable predictions. If you're wrong, or you're
right, it doesn't make *any* difference.

Obviously, for a creature to be able to survive, it needs to
adjust its behavior based on environmental clues. If you touch
something really hot, you should yank away your hand before it
is damaged.

But why should there be the feeling of pain?
Because "feeling of pain" *is* a loose predisposition
for certain behaviors.

The suggestion of Zombies is that a machine could act
in the same manner as you described above, but not actually
*feel* the discomfort of pain. So, why the pain feeling?
I don't think that such a possibility really makes
any sense.

Which is actually an argument as to why consciousness evolved.
It is quite reasonable to suggest that this feeling of pain,
by a non-Zombie, is more advantageous to replication than the
non-pain of a Zombie. One wants to move one's hand because it
hurts, not because of the damage that is being done.
I agree that you don't move your hand *because* damage is
being done to it. Instead, pain is an intermediary information
state. Damage causes you to move into a state that predisposes
you to move your hand. It's certainly not a logical inference:
"If I want to avoid damage, I should move my hand."

Why are we *not* Zombies if the results are the same?
I don't accept that there is a meaningful distinction
between Zombies and non-Zombies.

However, an automatic, rule-like response is too inflexible.
Sometimes, the best strategy is to endure a small amount of damage in
order to prevent a much worse fate. For example, if you have to walk
barefoot over broken glass in order to escape from a deadly predator,
you'll do it.

All of this is trivial, and ignores the point.
It *is* the point. The word "pain" is simply a name for this
intermediary information state, caused (typically) by bodily
damage and causing (typically) avoidance behavior. The hard
problem (which you call the "Easy Problem") is how this all
works. How do sense organs recognize damage? How is this
recognition represented? How does this representation influence
behavior? Those are the interesting questions, to me. The question
of "why does pain hurt?" is not an interesting, or particularly
meaningful, question.

http://www.kevinaylward.co.uk/replicators/emotions.html

I *define* genes and memes. I derive properties of such genes and memes, and
I mathematically define emotions from these definitions and properties, to
wit:

Definition of an Emotion - a conscious experienced trait of a Replicator,
such that that trait attempts to maximize its Replicator numbers.
I don't agree with that definition. For one thing, the qualification
"...that trait attempts to maximize its Replicator numbers" doesn't
seem to be particularly about "emotion". Nearly everything about
an evolved creature---the hardness of its teeth, the composition of
its blood, the strength of its muscles, the acuity of its vision---can
be said to serve reproduction. On the flip side, when someone is
sad or happy or excited or bored, it very rarely has anything
*directly* to do with reproduction. After a man has a vasectomy,
he continues to have emotions, but they no longer have anything
to do with reproduction. The love of one's children is no less
strong when the children are adopted.

The other thing I don't like is the qualifier "conscious". What
extra information is that conveying?

I think that emotions are, like other feelings such as pain,
causal information representations that serve as an intermediary
between environmental stimulus and behavioral response.

I have been all through your arguments, extensively. None of them explain
conscious experience.
I haven't heard a good definition of what it would *mean* to explain
conscious experience. Note the difference with Newton explaining the motion
of the planets, or Bohr explaining the emission spectrum of the hydrogen
atom. Their explanations took the form of simple laws from which the
observations could be derived and from which new testable predictions could
be extracted. It seems that your notion of explaining conscious
behavior would *not* entail new testable predictions.

This is obviously just a sketch, and not a full theory of pain,
but I really don't see where you think that it becomes necessary
to introduce qualia.

Because it exists.
What evidence is there that qualia exist, above and beyond
the "causal informational intermediate representations" that
I talk about above?

And no, qualia don't physically do anything, so in one
sense they aren't necessary to explain behaviour.
I disagree. If you identify qualia with the information
state that is intermediate between processing sensory
information and producing behavior, then it *is* an
important part of explaining the complexity of animal
behavior.

So, here we go again, Godel tells us that there are new statements,
not derivable from existing knowledge.

Godel has no relevance here that I can see.

Oh dear...
Sorry, but it's a fact. Godel has no relevance here.
As I said, the sorts of statements that Godel proves
are independent still have observable *consequences*.
The Godel statement G is a statement of the logical
form "Forall natural numbers n, Phi(n)". We can test
one by one Phi(0), Phi(1), etc. We can see that the
claim Forall n, Phi(n) is consistent with everything
we have seen so far, and that that statement makes
new predictions about what we are going to see in
the future. So it's like Popper's falsifiable claims.
We can't prove G, but we can disprove it.
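To make that shape concrete, here is a toy stand-in for a statement of the
form "Forall n, Phi(n)"; the Goldbach property is used purely as an example of
such a sentence:

# A universal claim "Forall n, Phi(n)" can be refuted by one counterexample
# but never established by finitely many checks.
# Toy Phi(n): "2n + 4 is a sum of two primes".

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k**0.5) + 1))

def phi(n):
    even = 2 * n + 4
    return any(is_prime(p) and is_prime(even - p)
               for p in range(2, even // 2 + 1))

# Checking Phi(0), Phi(1), ... one by one: consistent so far, never a proof.
print(all(phi(n) for n in range(1000)))    # True -- but only for these cases
# A single n with phi(n) == False would disprove the universal claim.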

In contrast, your claims about qualia and consciousness
have *no* testable consequences.

We cannot derive the experience of "hurt" from knowledge of elementary
particle properties.
If we had a sophisticated enough model of a living creature,
we would be able to investigate, using no new physics, the
manner in which environmental stimulus causes changes to
the creature's brain states, and how those brain states lead
to future behavior (and future brain states).

Studying the behavior long enough, we might see that
it is very convenient to posit certain labelled brain
states: Happy, sad, in pain, in lust, etc. Then the
complex behavior of this creature could possibly be
easier to understand by separating the question of
how such states are produced, and what are the behavioral
predispositions of being in those states.

Introducing the concept of "feelings" helps us to
make *predictions* about the behavior of creatures.
That's the reason we *have* that concept, in my
opinion. The notions of "pleasure, pain, desire,
fear", etc. help give us rules of thumb for predicting
how other creatures act. If the concepts *didn't* help
us in that way, we never would have developed those
concepts.

All you are doing is saying that consciousness doesn't
exist because you don't believe that there are any observable consequences.
No, I'm not agreeing that consciousness has no observable
consequences. I think that consciousness is a sophisticated
type of behavior.

I disagree; there is no evidence that any non-conscious entity exists that
behaves in enough detail like a conscious entity.
I agree with that, because I identify consciousness with
behavior.

In contrast, the "new physics" that you are proposing to explain
consciousness has *no* observable consequences.

No, you claim it doesn't, but have no evidence to support this assertion.
I thought you were claiming that it has no observable consequences.

That is, produce a non-conscious entity that behaves sufficiently like a
conscious entity.
I consider "being conscious" and "behaving like a conscious entity"
to be the same thing.

It just seems to me that you have great difficulty with the actual principle
of new axioms. Not everything we learn as children as self-evident truths
is enough to explain everything we see.
You haven't really explained what needs explaining, or what it would
mean to explain it.

--
Daryl McCullough
Ithaca, NY
 
On Mar 29,"Kevin Aylward" <kevin_aylw...@ntlworld.com> wrote:
Well, how do we know what is an inertial frame or not?
Look, mate, these issues are pretty fundamental
and unresolved.

An inertial frame is a collection of objects
which are not accelerating with respect to
one another.

Er..how do you define acceleration?

Start with a measuring stick.
Then get a clock... tick, tick, tick...

Measure how far something went, in
a given number of ticks. Repeat,
later. If the distance is different,
for the same number of ticks, its
velocity changed, i.e. it accelerated.

I don't suppose that you are acquainted with the fact that under
General Relativity, a body free falling in a gravitational field
is not accelerating?
Is that a question?
I don't know what you suppose.

But if supposing is pertinent, you may suppose that
I am acquainted with the equivalence principle. A
body in free fall feels no forces, and constitutes
an inertial reference frame.

Which was known before Einstein, but he saw deeper
than anyone else, introducing geometry and eliminating
the 'action at a distance'. Space is curved by mass,
and 'acceleration' depends on the co-ordinate system.
With a proper tensor transformation, you get the object
in free fall, moving at constant velocity.

So there aren't any 'issues' here.

if F=ma
How do you account for the fact that you feel the same in free
space when you are in a fixed position, i.e. not moving, as
you do when falling off a cliff, i.e. "accelerating",
i.e. weightless?
Because inertial mass equals gravitational mass.
Gravity = acceleration, as old Al posited; therefore, in
free fall, the effects cancel.

To wit, you don't feel any forces, you're floating, so F=ma
seems a tad of a problem, don't it.
Only to you.

Kevin, you have made claims about inertial frames of
reference, acceleration, and the equivalence principle,
containing 'fundamental, unresolved' logical problems.
None of which stands up.

It's good that you question dogma, think for yourself...
the problem is, at the end of the day, you look like a
snake swallowing its tail...

Now here's a conundrum for you - when standing on flat
ground, you feel your weight, i.e. there is a force
acting on you, yet you are not moving. Hence there
is a positive force, but zero acceleration!

Chew on that one, Grasshopper... while Newton rolls
his eyes...

--
Rich
 
On Mar 29, stevendaryl3...@yahoo.com (Daryl McCullough) wrote:
It a name we give to the *feeling* we get that accompanies
certain stimuli.

And a feeling is...? A feeling is a "loose" causal relation
between stimuli and behavior...

The behaviour that occurs with this feeling is incidental
to the feeling itself

I think that's completely wrong. Pain is only pain because
of the sort of behavior that it leads to. If you could completely
disconnect pain from behavior, it wouldn't be pain anymore.

Suppose that some sophisticated brain surgery rewired your
brain so that the sensation of tasting sweetness and
the sensation of pain were switched. So tasting sweetness
causes pain, and stubbing your toe causes you to taste
sweetness. But the rewiring also made the corresponding
change in behavior, so that you tend to flinch when you
taste sweetness, but you tend to seek out pain (especially
painful ice-cream).
Feelings are separate from physical phenomena... the
feeling of pain, is independent of brain activity.

Suppose one had a substance which could suppress
the brain's pain receptors. This would not affect one's
consciousness, which is inexplicable by neural activity.

Therefore, no such thing as anesthesia is possible...

--
Rich
 
On 28 Mar, 12:06, part...@yahoo.com wrote:
None of you materialists have referred to my previous post on March
25, on free will, so I guess this matter is closed. Moving on.

Some materialists here seem to claim that consciousness isn't a
separate, non-physical entity, so that brain states = mind states
completely. This can be refuted by a (relatively) simple experiment,
which has been done.

Monitoring the brains of moving people can show which electric
currents, and where, correspond to each motor act. E.g., we can see,
brainwise, what it means to 'lift an arm'. More than that, by putting
electrodes in the right places we can artificially create these
currents, and the arm of the patient will really be lifted. Now, had
consciousness been only a state of the brain, and 'a person's will'
no more than currents in the brain, this patient (whose arm had
been lifted) would feel as if s/he WANTED to lift it, as his/her brain
is no different from that of one who really wants to lift it. Actually,
in such experiments, patients report that they feel as if someone has
forced them to lift their arms. IMHO that proves the point.
It's known that the brain is organized into specialized areas. The fact
that we are able to observe and partially reproduce the state
corresponding to the command 'lift right arm' in the motor areas of
the brain doesn't tell us much about what normally originates this
state, which is probably states of the brain areas (the frontal lobes,
perhaps) that are associated with higher functions.

As an analogy, consider an air conditioning system: the speeds of the
motors that pump the coolant and power the fans are controlled by a
control unit which drives them according to signals from the
temperature sensors and the desired temperature set by the user. You
can tap into the control unit and change the signals that drive the
motors, yet other parts of the control unit, such as the ones that
calculate the difference between the sensed and desired temperature,
will still be producing the same output.
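
A minimal Python sketch of that air-conditioning analogy (the class, names, and numbers are made up purely for illustration): forcing the motor drive from outside, like the electrode forcing the motor cortex, leaves the stage that computes the sensed-minus-desired difference producing its own, unchanged output.

class ControlUnit:
    """Toy thermostat controller; purely illustrative."""
    def __init__(self, desired):
        self.desired = desired            # temperature set by the user

    def error(self, sensed):
        # the comparison stage: difference between sensed and desired temperature
        return sensed - self.desired

    def fan_speed(self, sensed):
        # the drive stage: run the fan harder the warmer the room is
        return max(0.0, self.error(sensed))

unit = ControlUnit(desired=21.0)
sensed = 25.0
print(unit.error(sensed))      # 4.0 -> the comparison stage's own output
print(unit.fan_speed(sensed))  # 4.0 -> what the unit itself would command
forced_fan_speed = 9.0         # tapping in and overriding the motor signal...
print(unit.error(sensed))      # 4.0 -> ...leaves the comparison stage unchanged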

A better way of carrying out this experiment would be to completely
measure the whole brain state of someone who wants to lift his arm and
lifts it, then feed this person the same input (both external and
from other organs) and fully reproduce the measured brain state. This
seems out of reach of current technology.
This, of course, assumes determinism, which is not needed by
materialism.

(Note: no definition of consciousness is needed here, and I really
don't know if there is one. The experiment only proves that consciousness
isn't physical, so apparently, even if there is a definition,
physical-scientific ideas aren't going to help much here.)

And a few words on materialism in general, as it seems to be ruling
here. No, I'm not going to refute materialism here, and I don't need
to. Materialists have to prove their case.
Actually, materialism seems to me the proper default position. Dualism
is a more complex hypothesis (it requires at least one additional
entity), and thus shall not be accepted unless the default hypothesis is
contradicted by evidence.

You see, science isn't
about observations. It's about a very limiting way of observing the
universe. E.g., if someone comes and claims he's a prophet, and tells
us a few facts known to him by prophecy - none of us will accept them as
a valid scientific observation. So science is about observations made
with very firm ideas of what is a valid observation and what is not.
That means that all the phenomena not observable by these means are
outside the scientific debate. There's a whole world there, outside
Science. Now the materialist comes and claims that it doesn't exist -
what isn't observed by scientific means doesn't exist. It's like a
Greek geometer claiming that every geometric theorem which isn't
provable with a ruler and a pair of compasses isn't true. Clearly, the
burden of proof lies on the materialist.
The burden of proof lies on whoever proposes that something exists. The
materialist simply says that he will not accept something unless it's
philosophically and scientifically needed to accept it.

But I'll give materialists a small point to ponder, something
that amazed me years ago. You know, the stomach liquids can digest
almost everything. They can even digest another creature's stomach,
should one care to eat it. But the stomach doesn't digest itself.
More than that, when a creature dies, its stomach immediately starts
digesting itself. Well, dear materialists, has something non-material
acted here while the creature was alive? Or do you think the matter of
the stomach changed at the moment of death so as to make it digestible?
A living stomach constantly digests its internal surface, which it
regenerates at the same rate. Once the stomach is dead, there is no
regeneration, but the chemicals that do the digestion continue their
job.

And a last point about leprechauns (for you, RichD). About 400 years
ago, if you came to a scientist (or just a man on the street) and
told him there are a lot of invisible waves, undetectable by all known
means, flowing everywhere, you'd be laughed at. Then came Herschel, who
discovered the infra-red, and now invisible electromagnetic waves are
widely accepted as reality. That gives a knock-out to the old
Aristotelean thesis of what-I-can't-measure-doesn't-exist (and I don't
care whether it was really him or one of his students who put it under
his name). How do you know that no one will come, 200 years from now,
with a leprechaun detector? How can you ever claim they don't exist?
Inability to either prove or refute something should leave a rational
mind in a state of doubt, not choosing one way or the other.

And no, I don't believe leprechauns exist. But I do so for a better
reason than not being able to detect them.
 
<snip>
In the consciousness field, this is called the "Easy Problem".

Yes, well, the "Easy Problem" is hard enough. There is no
reason to go around making up new problems for oneself.
Actually, it is one of the other so-called "easy problems" that is at the
heart of the matter. This is the problem of how we come to talk about
subjective phenomena. This sets the stage for answering questions concerning
the physiological mediation of the behavioral phenomena just described.
Unfortunately, a physiological explanation of this entails pretty much a
physiological explanation of the entirety of behavior. I agree, though, that
the "hard-problem" is a myth. What we feel when we introspect is our own
behavior, and that includes the behavior called "seeing," "hearing,"
"tasting" etc. It simply makes no sense to ask something like: "Why is
'seeing green' the way it is?" What other way could it be? And there is not
really "anything it is like" to "see green" except, perhaps, some other
self-observed perceptual event. But the philosophical import of this is
simply that only one person can make contact of a particular sort with a
part of the world, and we already knew that.


<snip>
 
In article <1174361192.227243.148810@p15g2000hsd.googlegroups.com>,
"RichD" <r_delaney2001@yahoo.com> wrote:

On Mar 19, Richard The Dreaded Libertarian <n...@example.net> wrote:
The truth is Free Will is Good, and anything in
opposition to Free Will is Evil.

Except that, like Never Never Land, it is a
mirage, per a simple reductionist argument...

Holding your own Will in denial doesn't make it not be.

Say what?

OK, I'll try to translate it into little words for you.

Give it your best shot.

You have Free Will. You can't get rid of your Free Will, but
you can deny it.

Bzzzzzzz! I'm sorry, our panel of judges does not
accept this answer.

Correct is: You do not have free will, but you can deny
this reality, and ride with the illusion..

If you had no will, then you would be nothing but a machine that
follows a program, or follows orders.

I am. So are you. As are we all.
***{Yup. We are machines with free will. --MJ}***

But no matter whom you install as the authority over you, or what
program you choose to run, it is you who makes that choice, of
your own free will.

Dude, it's science.

You are composed of cells, billions and billions. Each
cell follows the laws of chemistry, immutably - including
your brain cells. They just run along, minimizing the
Gibbs free energy; that's what molecules do. Your
consciousness/soul/'free will' is nothing but a sequence
of brain states.

Think otherwise? (of course you do) Then please specify
the catalytic reaction inside any neuron or synapse,
which your 'free will' alters contrary to the laws of
thermodynamics and electro-chemistry.

I fully expect you to continue bleating about free will -
"I am not a robot!" - while offering no counter-argument.
***{Here is my counter to your argument: there is no conflict between
free will and determinism. Man has the ability to extrapolate ahead from
his present situation into the future, based on knowledge accumulated in
the past. Such extrapolations are called "expectations." Each
expectation contains an extrapolated series of experiences--what you
will see, hear, feel, etc., if you take a particular action. It's like a
movie of the future that a person plays in his mind. When many plausible
actions may be taken, there will be an extrapolated sensory sequence
associated with each one of which the individual is aware.

Suppose, for example, that you are walking in the African veldt. There
is a tree about 10 feet to your left, and a lion leaps out of a bush
about 200 yards to your right, and charges. You have no weapon. You
anticipate what will happen if the lion catches you, and what will
happen if you run to the tree and climb out of his reach before he gets
to you. The former expectation has lots of pain. The latter has a bit of
exertion, but no pain. Result: you choose to follow the expectation
which has the highest satisfaction total--i.e., the highest total of
pleasure minus pain. That means you run for the tree, and climb as fast
as you can.

Free will is what happens when your actions are controlled by the
expectation with the highest satisfaction total, and lack of free will
is what happens when any other expectation controls your actions.
Therefore your will is free if you run toward the tree as fast as you
can; and it is unfree, if you stand there and wait to be eaten.

Consider another situation. Suppose someone walks out into the African
veldt to commit suicide. His life is miserable. He is dying of cancer,
and in constant, horrific pain. Thus when he sees the lion, he has two
expectations: (1) he can climb the tree to "safety," and continue to
drag through his miserable life, with nothing to look forward to but
more unbearable pain, or (2) he can stand there and wait for the lion to
put him out of his misery. As he sees it, expectation (1) involves weeks
and months of continued unbearable pain, and no pleasure, while
expectation (2) involves a brief moment of pain, and then blackness.
Result: both satisfaction totals--the quantity of pleasure in the
expectation minus the quantity of pain--are negative, but the
satisfaction total of expectation (2) is greater than that of
expectation (1), because it is closer to zero. Result: this man's will
is free if he waits for the lion, and it is unfree if he runs for the
tree.

Free will is the state of the person whose actions are controlled by the
expectation with the highest satisfaction total. A person is unfree if
and only if his actions are controlled by some other expectation, one
that does *not* have the highest satisfaction total.

There is a mechanism of choice in the human brain. It operates by
ensuring that the expectation with the highest satisfaction total
controls the actions we take. When that mechanism controls actions, the
will is free, despite the fact that it is a mechanism, and that we, in
whom it resides, are machines.

That means the supposed conflict between free will and determinism is a
false dichotomy. It is, in fact, the way our choices are determined that
makes us free. If they were not determined--if we want to flee the lion
and climb the tree, but some random force intervenes and causes us to
keep standing there--then our will is not free. Indeterminism, in short,
is by its very nature incompatible with free will. To have free will,
two conditions must be met: (1) the universe must be a deterministic
machine; and (2) the mechanism by which our choices are made must be of
the type described above.

That's all there is to it.

--Mitchell Jones}***
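
Read as an algorithm, Jones's mechanism is simply an argmax over expectations by pleasure-minus-pain. Here is a minimal Python sketch of that reading; the scenario names and the numeric pleasure/pain values are invented for illustration and are not part of his argument.

def satisfaction_total(expectation):
    # pleasure minus pain, as defined above
    return expectation["pleasure"] - expectation["pain"]

def chosen_action(expectations):
    # the will is 'free', on this account, when the action taken is the one
    # whose expectation has the highest satisfaction total
    return max(expectations, key=satisfaction_total)

# The lion on the veldt, with made-up numbers:
veldt = [
    {"name": "run for the tree", "pleasure": 0.0, "pain": 1.0},    # a bit of exertion
    {"name": "stand and wait",   "pleasure": 0.0, "pain": 100.0},  # lots of pain
]
print(chosen_action(veldt)["name"])     # run for the tree

# The suicide variant: both totals are negative, but the second is closer to zero:
terminal = [
    {"name": "run for the tree", "pleasure": 0.0, "pain": 500.0},  # weeks of pain
    {"name": "stand and wait",   "pleasure": 0.0, "pain": 5.0},    # a brief moment
]
print(chosen_action(terminal)["name"])  # stand and wait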

Of course, anyone who would freely choose to be a slave has a
very seriously damaged Will.

What about a damaged brain?

--
Rich
*****************************************************************
If I seem to be ignoring you, consider the possibility
that you are in my killfile. --MJ
 
About 13 years ago, I developed 12 proofs that Muhammad was a false prophet.
I noted brief summaries in the margin of a book I was reading.
Unfortunately, what was preeminently clear at the time has become very
obscure to me now. Can anyone help me reconstruct the proofs?
 
