
FleshFactor: Re: Intelligent Software



---------------------------------------------------------
A E C  F O R U M - "F L E S H F A C T O R"
(http://www.aec.at/fleshfactor/arch/)
---------------------------------------------------------


I would like to reply to some of the things Pattie Maes said in her post
concerning software agents, because her post brings up a host of issues
which I think should not be ignored, or swept under the rug in order to
prepare the world for the technotopia MIT seems to be heralding.

Pattie wrote:

>Computers are as ubiquitous as automobiles and toasters, but exploiting
>their capabilities still seems to require the training of a supersonic
>test pilot. 

Is 'exploitation' really a suitable model for the future (and past, let us
be realistic) of technological innovation?  What does it mean to talk of
exploiting technology in relation to the wholesale exploitation of nature? 
I always get the feeling that underlying technological speculation is a
tacit master/slave dialectic; sure, the roles are allowed to be reversed,
but we are still left with masters and slaves, swapping the roles of
exploiter/exploited as they see fit.  There is no hint of mutual respect,
understanding, or love.  Let us think about the love of technology (and
here I take my thoughts from Bruno Latour), about respect for technology
and what it really does for us, about not allowing ourselves to think of
technology as a slave, but rather, as a gift.  DO NOT THINK that what you
'create' is yours entirely.

Another issue:  Pattie, you talk of the 'millions of untrained users' as a
force that must be (almost militaristically) mobilized and trained in
order for the future of a technological world to be satisfactorily
attained.  Isn't it kind of spooky to picture the rest of the world as a
population of 'untrained' but potential 'users' of technology?  Might it
be that not all of these millions even want to be 'trained'?
Meta-narratives of connectivity and performability abound in technotopian
discussions of technology, the kind of talk that Nicholas Negroponte
(perhaps a colleague of yours?) engages in so abundantly.  My question is
whether these millions of potential users are really as keen as
technotopian thinkers want them to be. 

This of course relates to the popular idea of some future of maximum
connectivity, when the world is finally wired.  I've been having visions
of a wired world; it's not a global village, pulsing with light, energy,
liberty, equality and technological fraternity for all; it's an
armour-plated Earth, a steel ball rocketing through space, blind to everything
but what happens on the network.  Why is it that the Borg conjures up such
primal fears?  It is because we are, indeed, the Borg.  I don't want to
live on a Borg cube; I want a planet.

Another issue: the agent.  Is there a possibility that the agent, in
'knowing' its user, actually shuts down thought rather than enabling it?  

Maes notes:

>Over time, "artificial evolution" can codify and combine the behaviors of
>the most effective agents in a system (as rated by their owners) to breed
>an even fitter population. My colleagues and I have built such a system to
>develop agents to search a database and retrieve articles that might
>interest their users. Each succeeding generation matches its owner's
>interests better. 
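
To be concrete about what is being proposed here: the "artificial
evolution" Maes describes is, in essence, a genetic algorithm run over
agent behaviours, with the owner's ratings standing in for fitness.  A
minimal sketch in Python (the keyword representation, the operators, and
all names are my own assumptions, not a description of the MIT system):

    import random

    # Each toy agent is a set of keyword weights it uses to score
    # articles for retrieval; the owner's ratings supply the fitness.
    VOCAB = ["agents", "evolution", "interface", "email", "network"]

    def random_agent():
        return {w: random.random() for w in VOCAB}

    def retrieve_score(agent, article_words):
        # how an agent would rank an article for its owner
        return sum(agent.get(w, 0.0) for w in article_words)

    def crossover(a, b):
        # "codify and combine the behaviors of the most effective agents"
        return {w: random.choice((a[w], b[w])) for w in VOCAB}

    def mutate(agent, rate=0.1):
        return {w: min(1.0, max(0.0, v + random.uniform(-rate, rate)))
                for w, v in agent.items()}

    def evolve(population, owner_rating, generations=10):
        for _ in range(generations):
            ranked = sorted(population, key=owner_rating, reverse=True)
            parents = ranked[: len(ranked) // 2]  # "rated by their owners"
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(len(population) - len(parents))]
            population = parents + children  # "an even fitter population"
        return population

    # e.g. evolve([random_agent() for _ in range(20)],
    #             owner_rating=lambda a: a["agents"])

Each generation simply keeps the half the owner rated highest and breeds
replacements from them; that is all "matching its owner's interests
better" amounts to.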

What does it mean for a software agent to assume it has the knowledge of
what you want to read, when you want to read it, what you want to find out
about, how you want it packaged, etc?  Who is this 'owner' of which it can
be known what he or she WANTS?  Why the need to simulate ourselves this
way?  The 'will to simulation' that governs most technological innovation
in the area of AI or AL needs to be addressed, I think, because it is a
will that has functioned throughout the long and tortured history of
Western science and technological expansion.  For example, 'time' is
perhaps the first product of this will, inasmuch as it laid a simulated
grid across the earth and in the minds of its inhabitants and had the
temerity to call itself nature. 

Maes also wrote:

>At the Massachusetts Institute of Technology, we have built software
>agents that continuously watch a person's actions and automate any
>regular patterns they detect.  An E-mail agent could learn by observation
>that the user always forwards a copy of a message containing the word
>"meeting" to an administrative assistant and might then offer to do so
>automatically. 
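
The mechanism being described is, again, simple enough to sketch: the
agent tallies which message features co-occur with which user actions,
and past some repetition threshold it offers to automate the pairing.  A
toy version in Python (the threshold, the word-level features, and the
class name are my assumptions, not MIT's implementation):

    from collections import Counter

    class WatcherAgent:
        def __init__(self, confidence=5):
            self.patterns = Counter()    # (message word, action) -> count
            self.confidence = confidence # repetitions before offering

        def observe(self, message_text, user_action):
            # "continuously watch a person's actions"
            for word in set(message_text.lower().split()):
                self.patterns[(word, user_action)] += 1

        def suggest(self, message_text):
            # "automate any regular patterns they detect"
            for word in set(message_text.lower().split()):
                for (w, action), n in self.patterns.items():
                    if w == word and n >= self.confidence:
                        return "Shall I %s? (seen %d times for '%s')" % (
                            action, n, w)
            return None

    # After observing ("meeting" in the text, "forward to assistant")
    # five times, suggest("Re: project meeting") offers to automate it.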

I guess I would want to question whether this is all we are: a set
of regularly performed patterns, freely available for technological
surveillance.  Certainly, you do ask the important question of whether an
agent should replicate a person's BAD habits as well, but nevertheless, I
would argue that once again, the will to simulation attempts to show that
all we ever were was a set of regulated behavioural traits, good, bad or
whatever, just as the mechanistic philosophers of the Enlightenment
attempted to prove that we were no more than advanced clockwork mechanisms
animated from the outside by a divine spark.

So yeah, I'm just wondering, you know, because it is indeed you, Pattie,
who is creating my future, if I am to continue living in the Western world
(and, increasingly, the East as well, sadly enough), and I can't help but
have something of an interest in it.


Grayson Cooke   <gcooke@alcor.concordia.ca>

Doctoral Humanities Programme
Concordia University
Montreal


