FleshFactor: Intelligent Software
A E C F O R U M - "F L E S H F A C T O R"
>Increasingly we find ourselves having to contend with new methodologies
>for interfacing with the physical/real and virtual/digital aggregate
>states of our environment, including the sensuous portrayal of
>information as a strategy for an "expressiveness of the subject" in
>telematic art. In a society defined by the psycho-sociology of
>surveillance, our media are a second skin at the periphery of the body, a
>body whose sentient pores are formed by surveillance cameras, image
>recognition systems, 'eye in the sky' satellites, personal data record
>systems, networked databases and intelligent agents.
--Gerfried Stocker, from his Opening Statement to FleshFactor
Programs that can act independently will ease the burdens that computers
put on people...
Computers are as ubiquitous as automobiles and toasters, but exploiting
their capabilities still seems to require the training of a supersonic
test pilot. VCR displays blinking a constant 12 noon around the world
testify to this conundrum. As interactive television, palmtop diaries and
"smart" credit cards proliferate, the gap between millions of untrained
users and an equal number of sophisticated microprocessors will become
even more sharply apparent. With people spending a growing proportion of
their lives in front of computer screens--informing and entertaining one
another, exchanging correspondence, working, shopping and falling in
love--some accommodation must be found between limited human attention
spans and increasingly complex collections of software and data.
Computers currently respond only to what interface designers call direct
manipulation. Nothing happens unless a person gives commands from a
keyboard, mouse or touch screen. The computer is merely a passive entity
waiting to execute specific, highly detailed instructions; it provides
little help for complex tasks or for carrying out actions (such as
searches for information) that may take an indefinite time.
If untrained consumers are to employ future computers and networks
effectively, direct manipulation will have to give way to some form of
delegation. Researchers and software companies have set high hopes on
so-called software agents, which "know" users' interests and can act
autonomously on their behalf. Instead of exercising complete control (and
taking responsibility for every move the computer makes), people will be
engaged in a cooperative process in which both human and computer agents
initiate communication, monitor events and perform tasks to meet a user's
goals.
The average person will have many alter egos--in effect, digital proxies--
operating simultaneously in different places. Some of these proxies will
simply make the digital world less overwhelming by hiding technical
details of tasks, guiding users through complex on-line spaces or even
teaching them about certain subjects. Others will actively search for
information their owners may be interested in or monitor specified topics
for critical changes. Yet other agents may have the authority to perform
transactions (such as on-line shopping) or to represent people in their
absence. As the proliferation of paper and electronic pocket diaries has
already foreshadowed, software agents will have a particularly helpful
role to play as personal secretaries--extended memories that remind their
bearers where they have put things, whom they have talked to, what tasks
they have already accomplished and which remain to be finished.
This change in functionality will most likely go hand in hand with a
change in the physical ways people interact with computers. Rather than
manipulating a keyboard and mouse, people will speak to agents or gesture
at things that need doing. In response, agents will appear as "living"
entities on the screen, conveying their current state and behavior with
animated facial expressions or body language rather than windows with
text, graphs and figures.
A Formidable Goal
Although the tasks we would like software agents to carry out are fairly
easy to visualize, the construction of the agents themselves is somewhat
more problematic. Agent programs differ from regular software mainly by
what can best be described as a sense of themselves as independent
entities. An ideal agent knows what its goal is and will strive to achieve
it. An agent should also be robust and adaptive, capable of learning from
experience and responding to unforeseen situations with a repertoire of
different methods. Finally, it should be autonomous so that it can sense
the current state of its environment and act independently to make
progress toward its goal.
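The three properties above--goal-directedness, robustness and autonomy--can be caricatured as a sense-act loop. The toy environment and one-dimensional "goal" below are invented purely for illustration, not a description of any real agent system:

```python
# A minimal sketch of an autonomous agent: it knows its goal, senses
# the current state of its environment, and acts to make progress.
class Agent:
    def __init__(self, goal):
        self.goal = goal

    def sense(self, environment):
        # Observe the current state of the (toy) environment.
        return environment["state"]

    def choose_action(self, state):
        # Move one step toward the goal; a more robust agent would
        # have a repertoire of methods for unforeseen situations.
        if state < self.goal:
            return +1
        if state > self.goal:
            return -1
        return 0

    def run(self, environment, max_steps=100):
        # The autonomous loop: sense, decide, act, repeat.
        for _ in range(max_steps):
            state = self.sense(environment)
            action = self.choose_action(state)
            if action == 0:          # goal reached
                break
            environment["state"] = state + action
        return environment["state"]

agent = Agent(goal=5)
final = agent.run({"state": 0})
```

The point of the sketch is only the shape of the loop: the agent, not the user, decides when and how to act.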
Programmers have difficulty crafting even conventional software; how will
they create agents? Indeed, current commercially available agents barely
justify the name. They are not very intelligent; typically, they just
follow a set of rules that a user specifies. Some E-mail packages, for
example, allow a user to create an agent that will sort incoming messages
according to sender, subject or contents. An executive might write a rule
that forwards copies of all messages containing the word "meeting" to an
administrative assistant. The value of such a minimal agent relies
entirely on the initiative and programming ability of its owner.
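The executive's rule can be sketched in a few lines. The `Rule` and `MailAgent` classes here are illustrative, not the API of any real mail package; note that the "intelligence" lives entirely in the rule the owner writes:

```python
# A minimal rule-based mail agent: it mechanically applies
# user-written keyword rules to each incoming message.
class Rule:
    def __init__(self, keyword, action):
        self.keyword = keyword      # word to look for in the body
        self.action = action        # what to do when it is found

    def applies_to(self, message):
        return self.keyword in message["body"].lower()

class MailAgent:
    def __init__(self):
        self.rules = []

    def add_rule(self, rule):
        self.rules.append(rule)

    def handle(self, message):
        # Apply every matching rule and collect the actions taken.
        return [rule.action(message)
                for rule in self.rules if rule.applies_to(message)]

# The executive's rule: forward anything mentioning "meeting".
agent = MailAgent()
agent.add_rule(Rule("meeting",
                    lambda m: ("forward", "assistant", m["subject"])))

actions = agent.handle({"subject": "Budget",
                        "body": "Meeting moved to 3pm"})
```

A message whose body never mentions the keyword simply produces no actions; the agent has no way to generalize beyond the rules it was given.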
Artificial-intelligence researchers have long pursued a vastly more
complex approach to building agents. Knowledge engineers endow programs
with information about the tasks to be performed in a specific domain, and
the program infers the proper response to a given situation. An
artificially intelligent E-mail agent, for example, might know that people
may have administrative assistants, that a particular user has an assistant
named, say, George, that an assistant should know the boss's meeting
schedule and that a message containing the word "meeting" may contain
scheduling information. With this knowledge, the agent would deduce that
it should forward a copy of the message.
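The deduction above can be sketched as simple forward chaining over hand-coded facts. The predicate and constant names (`has_assistant`, `contains_word` and so on) are invented for the example; real knowledge-based systems use far richer representations:

```python
# A toy knowledge base: facts are tuples, and two hard-coded rules
# are applied repeatedly until no new facts can be derived.
facts = {
    ("has_assistant", "user", "george"),
    ("contains_word", "msg1", "meeting"),
}

def infer(known):
    derived = set(known)
    changed = True
    while changed:
        changed = False
        new = set()
        # Rule 1: a message containing "meeting" may carry
        # scheduling information.
        for f in derived:
            if f[0] == "contains_word" and f[2] == "meeting":
                new.add(("schedule_info", f[1]))
        # Rule 2: scheduling information should be forwarded to
        # the user's assistant.
        for f in derived:
            if f[0] == "schedule_info":
                for g in derived:
                    if g[0] == "has_assistant":
                        new.add(("forward", f[1], g[2]))
        if not new <= derived:
            derived |= new
            changed = True
    return derived

conclusions = infer(facts)
```

From the two starting facts the agent derives, in two chained steps, that message `msg1` should be forwarded to George; the knowledge engineer's burden is supplying enough such facts and rules to cover the domain.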
People have been trying to build such knowledge-based agents for 40 years.
Unfortunately, this approach has not yet resulted in any commercially
available agents. Although knowledge engineers have been able to codify
many narrow domains, they have been unable to build a base of all the
common sense information that an agent might need to operate in the world
at large. At present, the only effort to systematize that knowledge is
the CYC project at Cycorp in Austin, Tex. [this text was first published
in September 1995 in Scientific American]. It is too early to tell whether
a CYC-based agent would have all the knowledge it needs to make
appropriate decisions and especially whether it would be able to acquire
idiosyncratic knowledge for a particular user. Even if CYC is successful,
it may prove hard for people to trust an agent instructed by someone else.
Both the limited agents now distributed commercially and the
artificial-intelligence versions under development rely on programming in
one form or another. A third and possibly most promising approach employs
techniques developed in the relatively young field of artificial life,
whose practitioners study mechanisms by which organisms organize
themselves and adapt in response to their environment. Although they are
still primitive, artificial-life agents are truly autonomous: in effect,
they program themselves. Their software is designed to change its behavior
based on experience and on interactions with other agents. At the
Massachusetts Institute of Technology, we have built software agents that
continuously watch a person's actions and automate any regular patterns
they detect. An E-mail agent could learn by observation that the user
always forwards a copy of a message containing the word "meeting" to an
administrative assistant and might then offer to do so automatically.
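The watch-and-automate idea can be sketched as a simple tally of observed (situation, action) pairs; once a pairing recurs often enough, the agent offers to take over. The threshold and the situation labels below are invented for the example, and the real M.I.T. agents used considerably more sophisticated learning:

```python
# A toy learning-by-observation agent: it counts how often the user
# responds to a situation with a given action, and offers to automate
# the action once the pattern has been seen `threshold` times.
from collections import Counter

class LearningAgent:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.history = Counter()    # (situation, action) -> count

    def observe(self, situation, action):
        self.history[(situation, action)] += 1

    def suggestion(self, situation):
        # Return an action worth automating for this situation,
        # or None if no regular pattern has emerged yet.
        for (seen, action), count in self.history.items():
            if seen == situation and count >= self.threshold:
                return action
        return None

agent = LearningAgent()
for _ in range(3):
    agent.observe("contains 'meeting'", "forward to assistant")

offer = agent.suggestion("contains 'meeting'")
```

Unlike the rule-based agent, nothing here was programmed by the user: the rule emerges from the user's own behavior.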
Agents can also learn from agents that perform the same task. An E-mail
agent faced with an unknown message might query its counterparts to find
out, for example, that people typically read E-mail messages addressed to
them personally before they read messages addressed to a mailing list.
Such collaboration can make it possible for collections of agents to act
in sophisticated, apparently intelligent ways even though any single agent
is quite simple.
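The peer-query idea can be sketched as a poll among agents, with the questioner adopting the majority answer. The "protocol" below is invented for illustration; real multi-agent systems negotiate in far more structured ways:

```python
# A toy peer-query protocol: an agent with no experience of a message
# type asks other agents and adopts the most common answer.
from collections import Counter

class PeerAgent:
    def __init__(self, knowledge):
        self.knowledge = knowledge   # message type -> learned priority

    def advise(self, message_type):
        return self.knowledge.get(message_type)

def ask_peers(peers, message_type):
    # Poll the peers and return the most common non-empty answer,
    # or None if nobody has an opinion.
    votes = Counter(a for a in (p.advise(message_type) for p in peers)
                    if a is not None)
    return votes.most_common(1)[0][0] if votes else None

peers = [
    PeerAgent({"personal": "read first"}),
    PeerAgent({"personal": "read first", "mailing list": "read later"}),
    PeerAgent({"mailing list": "read later"}),
]

priority = ask_peers(peers, "personal")
```

Each peer is trivially simple, yet the collection as a whole answers a question none of them was explicitly programmed to settle--which is the point of the paragraph above.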
Turing Meets Darwin
Over time, "artificial evolution" can codify and combine the behaviors of
the most effective agents in a system (as rated by their owners) to breed
an even fitter population. My colleagues and I have built such a system to
develop agents to search a database and retrieve articles that might
interest their users. Each succeeding generation matches its owner's
interests more closely than the last.
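The evolutionary scheme can be caricatured in a few lines: agents are keyword sets, the owner's ratings supply a fitness score, and each generation keeps the fittest agents and breeds mutated offspring. The vocabulary, the fitness function and all parameters below are invented for the example; our actual retrieval system was of course more elaborate:

```python
# A toy "artificial evolution" of retrieval agents. An agent is a set
# of keywords; its fitness is how many of those keywords the owner
# actually cares about (standing in for the owner's ratings).
import random

LIKED = {"agents", "evolution", "software", "networks"}
VOCAB = sorted(LIKED | {"toaster", "vcr", "satellite", "camera"})

def fitness(agent):
    # The owner's rating of this agent's selections.
    return len(agent & LIKED)

def mutate(agent):
    # Offspring: drop one keyword at random, add another.
    child = set(agent)
    child.discard(random.choice(list(child)))
    child.add(random.choice(VOCAB))
    return child

def evolve(population, generations=30):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        population = survivors + [mutate(a) for a in survivors]
    return max(population, key=fitness)

random.seed(0)
population = [set(random.sample(VOCAB, 3)) for _ in range(10)]
best = evolve(population)
```

Because the fittest half always survives, the best agent in each generation is at least as well matched to the owner's interests as the best of the generation before.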
In time, this approach could result in a complete electronic ecosystem
housed in the next century's computer networks. Agents that are of service
to users or to other agents will run more often, survive and reproduce;
those that are not will eventually be purged. Over time, these digital
life-forms will fill different ecological niches: some agents could evolve
to be good indexers of databases, whereas others would use their indices
to find articles of interest to a particular user. There will be examples
of parasitism, symbiosis and many other phenomena familiar from the
biological world. As external demands for information change, the software
ecosystem will continually renew itself.
Obviously the widespread dissemination of agents will have enormous
social, economic and political impact. Agents will bring about a social
revolution: almost anyone will have access to the kind of support staff
that today is the mark of a few privileged people. As a result, they will
be able to digest large amounts of information and engage in several
different activities at once. The ultimate ramifications of this change
are impossible to predict.
The shape of the changes that agents bring will, of course, depend on how
they are employed; many questions have yet to be answered, others even to
be asked. For example, should users be held responsible for the actions of
their agents? How can we ensure that an agent keeps private all the very
personal information it accumulates about its owner?
Should agents automate the bad habits of their owners or try to teach them
better ones (and if so, who defines "better")? As the electronic ecosystem
grows in complexity and sophistication, will it be possible to ensure that
there is still enough computing power and communications bandwidth left
over for the myriad tasks that human beings want to get accomplished? The
limited experiments that researchers have performed thus far only hint at
the possibilities now opening up.
Pattie Maes <email@example.com>