
FleshFactor: comments



---------------------------------------------------------
A E C  F O R U M - "F L E S H F A C T O R"
(http://www.aec.at/fleshfactor/arch/)
---------------------------------------------------------


(1) In response to Phoebe Sengers, who wrote:

> In this case the rhetorical point of saying that humans are machines is
> to state that nothing human can be beyond the (eventual) reach of
> science - which, while perhaps fitting into a conspiracy theory, is
> actually quite likely exactly what Dennett and many other practitioners
> of AI believe.

Yes, but what does such a belief imply if not this: Everything we
comprehend, we can construct. Everything we construct is a machine. If we
can comprehend ourselves, i.e. construct ourselves, we are machines too.
Now, if these are the propositions, then we can define what we consider to
be the similarity or the difference between humans and machines as the
area of what we know or don't know about ourselves. As this area shifts
all the time - that is, as we come to know more and more about ourselves -
the difference between us and machines appears smaller and smaller, until
the day when we can equate ourselves with machines.

This could come to pass if we were constant and finite, but we're not,
and as you say:

> The best we can do is 'good enough', always, presuming there will be
> something left over that is not explained. 

I don't think it is what's left over, but the NEW, which would elude us.
We 'construct' and 'reconstruct' ourselves all the time; this ability is
the essential property of the human brain. Some do it more, some less -
that's why some people look more like machines than others.

Now, what about machines that we can construct but not comprehend - neural
network machines? The NN systems we are dealing with today are extremely
primitive, but they exhibit properties of the human brain - they can learn
and organize themselves. If one day someone manages to construct a very
complex neural-network machine that can learn and structure/restructure
itself in an unpredictable way, i.e. a machine that in many ways resembles
human behaviour, would we still call it a 'machine'?  Or would we have
invented artificial LIFE? 
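
To make 'learning' concrete, here is a toy sketch (my own illustration in
Python, not a description of any real NN system): a single artificial
neuron that is never told the rule it ends up implementing. It only sees
examples and adjusts its own connection weights - primitive, but the
behaviour is acquired rather than programmed in.

    # toy perceptron: learns logical OR from examples, not from an explicit rule
    import random

    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = random.uniform(-1, 1)

    def output(x):
        s = bias + sum(w * xi for w, xi in zip(weights, x))
        return 1 if s > 0 else 0

    for _ in range(100):                          # repeated exposure to the examples
        for x, target in examples:
            error = target - output(x)            # how wrong was the neuron this time?
            for i in range(2):
                weights[i] += 0.1 * error * x[i]  # strengthen or weaken connections
            bias += 0.1 * error

    print([output(x) for x, _ in examples])       # typically prints [0, 1, 1, 1]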

You say:
 
> Science works by providing us with general explanations. If we
> cannot develop a model of a particular individual, how will we know the
> theory holds?

Why would we want to develop a model of a particular individual? If we
develop a model that provides for the same kind of variability and
self-enhancement that humans exhibit, couldn't we say that we had
developed a model of a human being? Of course a question of that kind
inevitably drags us out of the realm of strict definitions into the
no-man's-land of fuzzy definitions and of, why not, subjective (human)
assessment - "do I feel that this machine is behaving rather like a
person?"

> I'd love to hear some ideas and/or opinions (the wilder the better).

Can't you AI people invent some little things that we can stick into our
heads and transmit our thoughts to each other directly, without having to
struggle with language and concepts and stuff? I mean things that would
transmit our thoughts as they were before we put them into language. Or
are there any thoughts that aren't language? 

__________________________________________________________________________

(2) Re: Humans as Machines (Richard Brown wrote...)

> We (humans) design and build machines - we KNOW how they are made and
> constructed and therefore can understand and predict their behaviour
>
> Can someone please explain to me the benefits of the Human as Machine
> (HaM) ideology?  I can see only negative associations: Humans as
> disposable/recyclable/repairable units measurable as to worth
> (effectiveness/efficiency).  Humans as ultimately understandable (as only
> machines can be)... 

Quite so: we need to try and be clear about what we want to talk about and
why we want to do so. 

I see that many in this forum are AI researchers and technologists, and I
hardly dare say this, but it seems to me that a lot of the confusion in
this discussion comes from the lack of a clear distinction between
algorithmic machines (computers, AI) and neural-network machines. Why
doesn't anybody take this up? Isn't it known by now that our brains don't
work on algorithmic principles, while neural network systems are known to
exhibit properties resembling those of the brain - they can learn and
organize themselves, and some are unpredictable? One could almost call
them organisms, rather than machines.
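
If it helps, here is that contrast in miniature (a toy of my own, in
Python, under my own simplifying assumptions - not a claim about how real
brains or real NN research work). The 'algorithmic machine' is handed its
rule explicitly; the 'network' is handed no rule at all - its units simply
drift toward the inputs they happen to win, and an organization emerges
that nobody wrote down (crude competitive learning).

    import random

    def algorithmic_machine(x):
        # every case anticipated and spelled out by the designer
        return "low" if x < 0.5 else "high"

    # the network: three units with random initial "positions" on [0, 1]
    units = [random.random() for _ in range(3)]

    # unlabelled data clustered around 0.1, 0.5 and 0.9
    data = [random.gauss(mu, 0.05) for mu in (0.1, 0.5, 0.9) for _ in range(200)]
    random.shuffle(data)

    for x in data:
        winner = min(range(len(units)), key=lambda i: abs(units[i] - x))
        units[winner] += 0.1 * (x - units[winner])  # winning unit moves toward the input

    print(sorted(round(u, 2) for u in units))  # units usually settle near 0.1, 0.5, 0.9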

Questions about NN's are still being hotly debated by physicists,
cognitive scientists and neurobiologists (so my son-in-law, who is an NN
expert, tells me), and it remains to be seen whether complex NN's will
ever be able to resemble anything like human beings, but we may imagine
that they could. Or, if the opposite proves to be true, we may come to
the conclusion that it is some kind of 'intelligent life force' (given to
us by Nature, or God, or aliens, or whatever) that we are driven by. I
have a feeling that this is what many of us would like to believe, and
that this may be, when all is said and done, the reason why this debate
sometimes takes emotional turns.

Dj Spooky was right when he said that we need a new vocabulary for this
discussion. 

All organs of the body other than the brain fit pretty well with most
people's concept of machines. Artificial hearts are now going into mass
production. We may still not be able to fathom how the liver manages all
the complicated chemistry it does, but nobody objects that humans cannot
be machines because they have such a complicated liver. Brains, however,
do not fit very well with this paradigm of machines. Claiming that the
brain is a machine is of course tantamount to saying that humans are
machines, and that causes an outcry from many quarters.

If a machine is something which we can manufacture, then indeed there must
be doubts as to whether we could ever build anything resembling a human
brain. These can only be doubts though, not categorical denial. What do we
mean by 'manufacturing'? The old-fashioned meaning, the way steam-engines
or computers are built, is to know what each part does, how they interact
with each other, and to put them together so that they cooperate to
achieve a desired and entirely predictable functionality.

But here we are talking about an entirely different kind of machine, one
which consists of parts that are themselves simple and predictable, but
which, when assembled in a certain way and in sufficient numbers, no
longer has a predictable functionality - maybe it's alive?

Or perhaps even the concept of manufacturing can be taken a step higher -
we don't try to build anything as complex as a brain, but we try to
manufacture something which embodies a principle of self-enhancement and
then carries on building and structuring itself. Or we find out how to
manufacture smaller entities which are constructionally within our reach,
but which embody a principle of "social organization" (the anthill thing)
and are able to come together and create something (a being?) of
comparable complexity. What shall we call these "products", and, more to
the point, how shall we live with them?
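
As a toy illustration only (my own, and not a claim about how such
"products" would actually be built): many identical, trivially simple
parts, each following one local rule about its neighbours - Conway's Game
of Life. No part knows anything about gliders or oscillators, yet such
structures appear at the collective level; the anthill idea in about
twenty lines of Python.

    import random

    SIZE = 20
    # a random initial population of live/dead cells on a wrap-around grid
    grid = {(x, y): random.random() < 0.3 for x in range(SIZE) for y in range(SIZE)}

    def step(grid):
        new = {}
        for (x, y), alive in grid.items():
            neighbours = sum(
                grid[((x + dx) % SIZE, (y + dy) % SIZE)]
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
            )
            # the entire "rule book" each part follows:
            new[(x, y)] = neighbours == 3 or (alive and neighbours == 2)
        return new

    for _ in range(50):
        grid = step(grid)

    print("live cells after 50 generations:", sum(grid.values()))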


Dinka Pignon

Stockholm

--------------------------------------------------------------------
to (un)subscribe  the Forum just mail to
fleshfactor-request@aec.at (message text 'subscribe'/'unsubscribe')
send messages to fleshfactor@aec.at
--------------------------------------------------------------------

