
FleshFactor: good luck to you, Shafer Ramsey...



---------------------------------------------------------
A E C  F O R U M - "F L E S H F A C T O R"
(http://www.aec.at/fleshfactor/arch/)
---------------------------------------------------------


Shafer Ramsey wrote (in response to brad brace's comments on the
impossibility of reproducing human-like intelligence in a computer, due to
the complexity of the human mind/brain):

>Life isn't as complex as we think it is.  We -make- it complex by not
>completely defining the terms.  There is no definition for Intelligence,
>Life, Death, Good or Evil.  Until we define these terms we will always see
>various problems as complex and goals unattainable.  How can you map out
>intelligence/life experiences when you don't have a definition of what
>those concepts are? 


To which I can only remark, good luck to you! As I recall, Socrates had a
bash at defining a number of allied imponderables a long time ago, thereby
imparting a certain wobble to the world whose consequences we yet inherit.
The problem of determining "definitions" -- what things are, what things
mean -- is precisely what makes the damned thing so refractory. If we could
only agree on our terms, then sure, everything would be real simple. But how,
prithee, might one go about enforcing compliance with the definitions?
Perhaps the linguistic philosophy of Joseph Stalin would be helpful in this
regard ... ?

And Brian Molyneux voices a different reason for doubting the attainability
of artificial intelligence:

>To crudely (and badly) paraphrase Wittgenstein: if lions could speak, we
>wouldn't be able to understand them anyway. Different strokes for
>different folks. Context (environment) is everything. If it can't eat,
>shit, go on vacations, love, die, how can it 'be capable of thinking
>thoughts about [its] own thinking in the same way and on the same level' 
>as us? 

Brian's little list isn't so far from something my daughter said to me
recently -- that all that people require for a decent life is to be able to
eat, drink, piss and shit, go to sleep with a modicum of comfort and
security when they're tired, run around and do stuff when they're not
sleeping. Life, she was saying, really isn't very complicated. Yet we
haven't even figured out how to meet the basic requirements of food and
shelter and clean water, for everybody. (When I reflect upon how simple it
*should* be to ensure this minimally adequate basis for everyone on planet
earth, I find myself doubting that there's such a thing as *human*
intelligence, at all.)

Why should this be so? Well, my guess is, it's because we really can't
*trust* other people. And this would also be the reason, I'd say, why we're
so dead-set on building *artificial* intelligence -- if we build it
ourselves, and it ain't human, then maybe we can trust it to behave
reasonably, in a way we can't trust other people to behave, mm? People are,
sadly, such unreliable creatures ...

As for recursiveness, things get real snaky real quickly, as soon as you've
got a bunch of smart 'epistemologizing' critters running around making up
theories about the other critters' internal states / motives / trajectories.
Hell, they don't even need to be programmed to deceive each other on
purpose. It's enough that they will occasionally make errors in their
predictions of other critters' motives and actions, and they'll therefore
make, on occasion, the 'wrong' moves themselves, thus making the other guys'
predictions go wrong, and so on, and so on. And so long as these critters
don't have supernatural access to the actual 'program' directing the
behavior of their fellow critters, anticipatory errors will be inevitable --
and their behavior, as individuals and collectively, will become real
complicated / recursive / chaotic / nondeterministic, real fast --
notwithstanding that the 'program' that's inside the critters telling them
what to do, might well be (and probably is), itself, completely
deterministic, dead simple. 
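
(A toy illustration, for anyone who'd like to watch the snakiness happen: a
minimal Python sketch, where everything -- the one-line "program," the
coupling, every constant -- is invented for the example. Two critters each
run a dead-simple deterministic rule; each predicts the other by simulating
a slightly wrong copy of the other's rule; nobody flips a coin anywhere.)

# Two deterministic "critters": each one's next move is a simple function
# of its own last move and what it *predicts* the other will do. Neither
# has access to the other's actual program, so each predicts by running a
# slightly wrong copy of it. No randomness anywhere; the irregularity
# comes purely from the mutual mis-prediction.

def rule(own, predicted_other, k):
    # a critter's entire "program": one deterministic line
    return (k * own * (1.0 - own) + predicted_other) / 2.0

def simulate(steps=30):
    a, b = 0.3, 0.6          # initial "moves," scaled into [0, 1]
    K_A, K_B = 3.9, 3.7      # each critter's true rule parameter
    A_GUESS_OF_B = 3.6       # A's imperfect model of B's parameter
    B_GUESS_OF_A = 3.8       # B's imperfect model of A's parameter
    for t in range(steps):
        # each critter predicts the other by running its wrong model...
        a_predicts_b = rule(b, a, A_GUESS_OF_B)
        b_predicts_a = rule(a, b, B_GUESS_OF_A)
        # ...then moves according to its own fixed, simple rule
        a, b = rule(a, a_predicts_b, K_A), rule(b, b_predicts_a, K_B)
        print(f"t={t:2d}  A={a:.4f}  B={b:.4f}")

simulate()

Run it and watch the two columns: within a dozen steps the critters are
chasing each other's mispredictions into something that looks, for all the
world, like caprice -- though every line above is as deterministic as a
clock.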

In other words (and pretty much at 180 degrees from the kind of theorizing
that preoccupies people working in cognitive science and AI), it is in the
*social* and *ethical* order that you'll find the reasons for the apparent
complexity of human behavior and (consequently) of human intelligence. Take
a simple learning algorithm (call it the "law of effect" or "animal habit"
or "Bayes' theorem" or what-you-will), and embed it in an environment of ape
sex and power; augment it with a very large capacity associative memory;
then lightly season the mix with a symbolic capability to invoke vivid
memories or 'shared hallucinations' via a repertoire of learned,
conventional, visual and vocal signs or gestures. Simmer gently over low
heat, stirring occasionally, for 4 million years. And there, to an
approximation, you are.
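
(And the "simple learning algorithm" in that recipe really can be dead
simple. Below is a bare law-of-effect learner in Python -- a sketch of my
own, with the actions, payoffs, and constants all invented for the example:
do something, get rewarded or punished, adjust the odds of doing it again.)

import random

random.seed(0)  # reproducible runs

# the learner's repertoire, with equal starting propensities
weights = {"groom": 1.0, "threaten": 1.0, "share_food": 1.0}
LEARNING_RATE = 0.1

def choose():
    # pick an action in proportion to its learned weight
    r = random.uniform(0, sum(weights.values()))
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action

def law_of_effect(action, reward):
    # Thorndike in one line: strengthen what paid off, weaken what didn't
    weights[action] = max(0.01, weights[action] + LEARNING_RATE * reward)

# an invented payoff schedule, standing in for "ape sex and power"
PAYOFF = {"groom": 0.5, "threaten": -0.2, "share_food": 0.8}

for trial in range(100):
    act = choose()
    law_of_effect(act, PAYOFF[act])

print(weights)  # "share_food" has typically pulled well ahead

A hundred trials and the habits are set; the recipe above just asks you to
multiply this by a troop, a memory, a repertoire of signs, and four million
years.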


Derek Robinson    <drdee@interlog.com>

Toronto



