
FleshFactor: some scientific angles



---------------------------------------------------------
A E C  F O R U M - "F L E S H F A C T O R"
(http://www.aec.at/fleshfactor/arch/)
---------------------------------------------------------


This is a mail I received from my son-in-law, a physicist turned
neural-network applications developer, in response to my request for his
thoughts on some of the scientific topics the discussion has touched upon.
I found his remarks enlightening in several respects, so I thought it
would be useful to present them to the rest of the forum.

Dinka Pignon

________________________________________________________________________

Date: Tue, 17 Jun 1997 21:15:37 +0000
From: Dean <dp@maxwell.ph.kcl.ac.uk>
To: 71662.321@COMPUSERVE.COM
Subject: Dinka, here goes...!


There are three issues: the fundamental theory of physics, the fundamental
theory of information, and practical considerations.


THEORY (and I mean VERY fundamental, from a physics perspective!):

1) Some people say that the brain works using tiny neurons, which work
through chemistry and electrical impulses, both of which are governed by
physics. If we could understand exactly how this chemistry works, then we
could, in theory, make something that works exactly like our brains.
Furthermore, since these processes can, in theory, be modelled on a
computer, we could use a vast network of big computers to model the
chemistry of the neurons and of their interconnections, and so make a
brain in a computer.
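
To make "modelling a neuron on a computer" a bit more concrete: below is a
minimal Python sketch of the standard leaky integrate-and-fire neuron, a
far cruder abstraction than the full chemistry imagined above. All
parameter values are illustrative, not biologically calibrated.

    # Leaky integrate-and-fire neuron, integrated with simple Euler steps.
    # Parameters are illustrative, not biologically calibrated.
    def simulate_lif(input_current, dt=0.001, tau=0.02,
                     v_rest=-0.070, v_threshold=-0.050, v_reset=-0.070,
                     resistance=1e8):
        """Return the time steps at which the neuron spikes."""
        v = v_rest
        spikes = []
        for step, current in enumerate(input_current):
            # Membrane potential decays towards rest and is driven by input.
            v += (-(v - v_rest) + resistance * current) * dt / tau
            if v >= v_threshold:      # threshold crossed: emit a spike
                spikes.append(step)
                v = v_reset           # and reset the membrane potential
        return spikes

    # A constant 0.3 nA input for one simulated second (1000 steps of 1 ms).
    print(simulate_lif([0.3e-9] * 1000))

A "brain in a computer" of the kind described would amount to billions of
such units (vastly refined) plus all their interconnections.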

2) Other people say that quantum mechanics plays an integral role in the
neurons (because the processes are so small), and quantum mechanics is
inherently unpredictable and impossible to model exactly on a computer.
This is because a digital computer is completely describable; indeed, that
is why computers are made digital: digitising removes uncertainty. We
would therefore have to build a replica of a brain (i.e. out of real
biological matter or something similar) in order to make something that
resembles our brain. We could not model it on a computer and therefore
could never know exactly what is going on, EVEN IN THEORY. We could make
generalisations and predict how probable it is that it will do this or
that, just like we can with other quantum mechanical systems.
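
The point that digitising removes uncertainty can be shown with a toy
Python example: an analogue value copied through noisy stages drifts away
from its original value, while a digital value is snapped back to 0 or 1
at each stage and survives intact. The noise level and number of stages
are arbitrary choices for the demonstration.

    import random

    def noisy_copy(x, noise=0.02):
        # Every physical copy operation adds a little random error.
        return x + random.gauss(0.0, noise)

    analog = digital = 0.7        # the same starting signal
    for _ in range(1000):         # 1000 noisy copy stages
        analog = noisy_copy(analog)
        # The digital path re-thresholds after every noisy copy.
        digital = 1.0 if noisy_copy(digital) >= 0.5 else 0.0

    print(f"analogue after 1000 copies: {analog:.3f}")  # has drifted
    print(f"digital after 1000 copies: {digital}")      # still exactly 1.0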

These two viewpoints hinge on whether quantum mechanics has anything to do
with it and nobody is even near answering that question as far as I know.
I myself find it hard to believe (2) so I generally go for (1) but that is
just a 'gut feeling'. 


THEORY OF INFORMATION:

OK, so assuming that (1) above is correct, we could build something that
matches our brain's processing paradigm. If we made every neuron exactly
as in my brain, with the same chemical states, electrical impulses, and
connections, we would have another me. But would we understand what I am?

Well, our brain works (roughly!) by constructing simplified internal
representations of the universe which includes the brain itself and the
outside world. When you see a red round stone, patterns in your brain are
fired up, forming an internal state with the label "a red round stone". You
do not have a red round stone in your brain, you have an internal
representation consisting of chemicals and electric impulses which you
have learnt to associate with the object you see. When you notice someone
is sad your brain associates their state with some internal state of your
own brain which you call sadness. It is not their sadness, it is your
version of it which you have learnt to associate with theirs. You can
imagine how they feel because your "sadness" is very similar to theirs
(they have a similar brain and expressions which you recognise in
yourself), but it is not EXACTLY the same. It is a representation. Do you
know how a mouse feels? You would know if you grew an exact replica of a
mouse's brain inside your own.

OK, so if the above is true, then we could never hope to fully understand
the whole universe (in a scientific sense). Imprecisely speaking, we would
need the same amount of matter and energy in our brains to construct an
exact replica of the universe in order to understand it fully. Now comes
the basic theoretical objection: the universe contains our own brain,
which is trying to understand it. So we could understand the universe
OUTSIDE our brain if we made our brain big enough. But to understand our
own brain we would need as much matter and energy as is in our own brain.
But that is our own brain! To understand something in the intellectual
sense you need to stand apart from it. So we'd need an extra bit to think,
"I understand this brain." But this extra bit is then part of our brain
and so must be included in the understanding. So we need yet another extra
bit... and so on, expandum ad absurdum (or something!). It is a
never-ending loop, truly unfathomable in the same way as trying to really
understand infinity.
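
The regress can be rendered as a toy Python calculation: a complete
self-model must contain a model of itself, which must contain a model of
itself, and so on. Truncate the regress at any finite depth and there is
always an unmodelled remainder; the "sizes" below are abstract units of
description, purely for illustration.

    def complete_self_model_size(brain_size, depth):
        # Description needed for a brain that fully models itself,
        # truncated at a finite regress depth.
        if depth == 0:
            return brain_size                 # the brain itself, unmodelled
        # The brain, plus a full model of (the brain plus its self-model).
        return brain_size + complete_self_model_size(brain_size, depth - 1)

    for depth in (1, 2, 3, 10, 100):
        print(depth, complete_self_model_size(1, depth))
    # 2, 3, 4, 11, 101, ... there is no finite depth at which it closes.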

So the brain can at best hope to achieve a good simplified, generalised
model of itself as an internal representation.

It follows from this information-theoretic argument that we can NEVER
fully understand our own brains, even in theory, and even if (1) is true
and we can build a replica of our brains!


PRACTICALLY:

Artificial neural nets (as made in academia and industry) have very little
indeed to do with our brains. They are mathematical constructs based only
very loosely on the parallel structure of our brains. They process
information in ways different from traditional statistics, and they
adaptively change their structure in response to the input information
(i.e. something comparable to "learning"), but they are essentially
another mathematical or statistical technique.
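
As a minimal illustration of what such a construct amounts to, here is a
single sigmoid unit in Python that learns logical OR by gradient descent;
its weights are the "structure" that adapts in response to the input
information. The learning rate and number of passes are arbitrary choices.

    import math

    inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
    targets = [0, 1, 1, 1]                    # logical OR

    w1 = w2 = b = 0.0                         # the adaptable "structure"
    lr = 0.5                                  # learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    for _ in range(5000):
        for (x1, x2), t in zip(inputs, targets):
            y = sigmoid(w1 * x1 + w2 * x2 + b)
            grad = (y - t) * y * (1 - y)      # gradient of squared error
            w1 -= lr * grad * x1              # the weights move in
            w2 -= lr * grad * x2              # response to the input:
            b  -= lr * grad                   # this is the "learning"

    for (x1, x2) in inputs:
        print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b), 2))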

Practically, there is no chance of building anything close to the brain
for eons, even if it is possible in theory (that is, even if (1) is true). 

So here's what I myself think: we may soon (in decades rather than
millennia) see some machines that exhibit signs of some "intelligence" and
which can adapt to do different stuff. We will be able to talk to them,
they will talk to us, etc., but will they have self-awareness? For that
they need to build simplified internal representations of themselves, and
even if we can prove that they do have some form of primitive
self-awareness, we won't be able to say that they are "like us" or even
like a mouse until they are as complex as our own brains. And if, in 1000
years, we make a computer simulation as complex as our brain, its exact
workings will be as incomprehensible to us as another person's are. In
fact, that means we could not make an exact replica. We would have to make
something simple that evolved by itself into something complex.
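
That "something simple that evolves" idea has a classic toy demonstration:
random mutation plus selection of the fittest variant, in the style of
Dawkins's "weasel" program. A Python sketch follows; the target string and
mutation rate are arbitrary, and real open-ended evolution of course has
no fixed target handed down in advance.

    import random, string

    TARGET = "SELF AWARE"
    ALPHABET = string.ascii_uppercase + " "

    def fitness(candidate):
        # Number of characters that already match the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        # Each character has a small chance of being randomised.
        return "".join(random.choice(ALPHABET) if random.random() < rate
                       else c for c in candidate)

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while fitness(parent) < len(TARGET):
        # Variation plus selection: breed variants, keep the best.
        children = [mutate(parent) for _ in range(100)]
        parent = max(children + [parent], key=fitness)
        generation += 1

    print(f"reached {parent!r} after {generation} generations")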

Will they have their own free will that we cannot predict? The last
paragraph applies to that question too. (On the other hand, if (1) is not
true and (2) is, then they will not have unpredictable free will if they
are built on a computer, and they will be very fundamentally different
from us. Still "intelligent" in a sense, but not in the same way we are.)


 Dean Pignon

<dp@maxwell.ph.kcl.ac.uk>



--------------------------------------------------------------------
to (un)subscribe  the Forum just mail to
fleshfactor-request@aec.at (message text 'subscribe'/'unsubscribe')
send messages to fleshfactor@aec.at
--------------------------------------------------------------------

