FleshFactor: HI

A E C  F O R U M - "F L E S H F A C T O R"

Humanistic Intelligence (HI)

The goal of Artificial Intelligence (AI) is to produce machines with
human-level intelligence.

The goal of Humanistic Intelligence (HI) is to produce a combined
entity of human+machine inextricably intertwined.  In this sense, the
machine acts as a true extension of the mind and body.

STEPS TOWARD HI: HI is characterized by three aspects:
  (1)   A goal of HI is to create a machine that dramatically assists
        the human, through a form of synergy, the design of which
        recognizes the strengths and weaknesses of each, so that the two
        function in a complementary rather than competitive way.
        (This first goal of HI is closely related to so-called
        'intelligence amplification'.)
        This assistance need not be attained strictly from the machine
        itself, but may also be attained from the intelligence of one
        or more other human beings, through the facility of the
        machine.  For example, in the original photographer's assistant
        application, there was typically another human
        providing information over the wireless communications
        network\protect\footnote {Much of what we will see later pertains
                         to new forms of communication between humans,
                         facilitated by the machine.  These new
                         forms of 'humanistic intelligence' are
                         related to the principle of the so-called
                         cyranoid~\protect\cite{milgram}, but are also quite
                         different.  A person talking
                         to someone equipped with the form of
                         humanistic intelligence I propose
                         experiences a mixture of personalities
                         and opinions (that of the person in their immediate
                         vicinity mixed with that of the one or more remote
                         humans), rather than just the personality of
                         one remote human, as is the case with a
                         cyranoid.}
        This assistance is inextricably intertwined with the
        user most notably through very short latency, such that it appears
        to be an extension of one's own capability.  This short latency
        itself has two facets, which I illustrate by way of example, using
        the way that the human mind communicates with its peripherals (parts
        of the body):
    (1A)        Because of the 'constancy of user-interface' our brain
                has to parts of our own body, we have adapted, over many
                years, to experience these peripherals as very immediate.
                This user-interface is not 'user-friendly' in the
                traditional Macintosh sense; rather, it takes many
                years to learn.  Instead it is 'user-friendly' in a
                different sense: it is consistent, so that one need
                expend very little mental energy, and incur very little
                mental delay, to use it, although this immediacy only
                develops after many years of use.
                I call this subcriterion 'first brain ephemeral'.
    (1B)        We do not experience or perceive
                delays when our brain issues commands to parts of our own
                bodies.  We do not perceive that parts of our own bodies
                have a 'mind of their own' (e.g. are held up 'waiting
                for I/O').  In this context, I will
                propose that the apparatus be thought of as a 'second
                brain', so
                I will call this subcriterion 'second brain ephemeral'.
        Thus because of both constancy of user-interface,
        and through its temporal immediacy/responsiveness,
        the machine appears as a true extension of the user's mind and body.
        I refer to this criterion as the '{\bf ephemeral} criterion'.
  (2)   Physically, the human and machine are inextricably intertwined,
        to seamlessly fit together into a single unit, in order to meet
        the ephemeral criterion stated above.  This inextricable
        intertwining has two purposes, the first social, and the second
        personal:
    (2A)        The social aspect is that others would not perceive the
                machine as a separate entity.  This means, for example, that
                if one enters a department store or the like, where one is
                typically asked to leave one's bag or briefcase at the
                counter, that the apparatus should be so-designed that
                one is not required to leave behind one's 'second brain'
                at the counter, for this would impair constancy of
                user-interface (i.e. the ephemeral criterion).
                This sub-criterion may be achieved either by making the
                apparatus covert, or by situating the apparatus within
                our {\em prosthetic territory}~\protect\cite{gonzalez}.
                I refer to this sub-criterion as the 'social
                eudaemonic criterion'.
    (2B)        The personal aspect also pertains to this long-term
                adaptation.  If we ourselves regard the apparatus as part
                of our own day-to-day lifestyle --- part of our own existence,
                then we will begin to treat it as such, and think of it
                as part of ourselves, which also involves an altering of
                human perception.  This sub-criterion may be achieved by
                making the device {\em comfortable}.  I refer to this
                sub-criterion as the 'personal eudaemonic criterion'.
        I refer to this criterion as the '{\bf eudaemonic} criterion'.
  (3)   The apparatus 'empowers' the user (i.e. puts the user in
        control).  By this, I mean that the user and his/her intellect
        are in the feedback loop of the important high-level processes
        of the combined (human and machine) intelligence.  A very simple
        example of user-empowerment is an automatic camcorder with
        electronic viewfinder and full manual override such that
        the interface to the override is ergonomically well-designed.
        Although much of the processing ('thought'), such as decisions
        regarding exposure settings, is implemented in the 'second brain'
        (the machine) the user is still inside the feedback loop by virtue
        of the fact that the electronic viewfinder mediates his/her
        perception of reality in accordance with decisions that the
        system has made.  Thus just as in the ephemeral criterion where
        the machine does not exhibit a 'mind of its own' through delays
        in responsiveness, here the machine does not exhibit a 'mind
        of its own' through the theft of control from the user.
        By theft of control I mean the taking of control away from a
        user who wants more control.  Indeed, the second brain can and
        should have its own 'intelligence', and this in fact may
        empower the user (e.g.  the fully automated camera frees the
        user to concentrate more on higher level compositional and
        cinematographic aspects), but it should not 'enslave' the
        user by removal of the mechanism for controllability or observability.
        Thus there are two sub-criteria associated with this criterion:
    (3A)         The apparatus should afford as much control to the user
                 as is reasonably possible/practical.
                 I call this sub-criterion
                 the 'existential controllability criterion'.
    (3B)         The apparatus should inform the user of its operation
                 and operational status as much as is reasonably
                 possible/practical.  I call this sub-criterion
                 the 'existential observability criterion'.
        In order to better understand this criterion, I consider some
        counter-examples (i.e. systems that violate it).  An extreme
        example of this violation is the synergy of enslavement arising from
        the remotely-controlled pain-giving device attached to
        prisoners~\protect\cite{hoshen95} to make them into
        obedient 'cyborgs'.  This third goal of HI borrows from
        existential philosophy the principle of self-determination
        and mastery over one's own destiny, as well as from
        humanistic psychology, the principle of self-actualization.
        I refer to this last of the three criteria as the
        '{\bf existential} criterion'.
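The camcorder example above can be sketched as a minimal control loop. The following Python sketch is purely illustrative -- the class, its method names, and the metering formula are my own assumptions, not part of the WearComp apparatus -- but it shows the two sub-criteria concretely: the 'second brain' decides automatically, yet the user retains full override (3A) and can always inspect what the apparatus is doing (3B):

```python
class AutoExposureCamera:
    """Illustrative sketch of the existential criterion.

    The machine operates autonomously, but never steals control:
    the user can override at any time (existential controllability)
    and can always observe the machine's state (existential
    observability).  All names and formulas here are hypothetical.
    """

    def __init__(self):
        self.mode = "auto"        # "auto" or "manual"
        self.exposure = 1 / 60    # shutter time, in seconds

    def meter(self, scene_luminance):
        # The 'second brain' makes a decision: brighter scenes get
        # shorter exposures.  In manual mode it defers to the user.
        if self.mode == "auto":
            self.exposure = max(1 / 8000,
                                min(1 / 4, 1.0 / (60 * scene_luminance)))
        return self.exposure

    def override(self, exposure):
        # Sub-criterion (3A): the user can always take control;
        # the machine must not 'enslave' by removing this mechanism.
        self.mode = "manual"
        self.exposure = exposure

    def release(self):
        # Control is returned to the automatic system by choice.
        self.mode = "auto"

    def status(self):
        # Sub-criterion (3B): the apparatus reports its operation
        # and operational status, keeping the user in the loop.
        return {"mode": self.mode, "exposure": self.exposure}
```

The point of the sketch is the asymmetry: automation assists (freeing the user for compositional decisions), but `override` and `status` guarantee that control and observation remain available to the user, never taken from them.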
The eudaemonic and existential criteria also provide a certain
degree of personal assertiveness, which I discuss in Chapters~7, 8,
and~9 of my PhD thesis.  This talk* ("Humanistic Intelligence"), and the
related performance ("The Personal Safety Device") will discuss
humanistic aspects of HI, in particular, principles of self empowerment,
self-determination, and mastery over our own destiny.  I will also present
an apparatus called "WearComp" that meets these three criteria, and is
therefore an example embodiment of HI.  The example application I will
demonstrate on WearComp is "Personal Imaging" -- re-situating the video
camera in a disturbing and disorienting way in order to challenge our
pre-conceived notions of video surveillance in our society (and similar
forms of "interrogative art").

If anyone's interested in further details on this, please email me and
I'll point you to my recent PhD thesis called "Personal Imaging".


Steve Mann <steve@media.mit.edu>

*[Steve will speak/demonstrate/perform at the live FleshFactor Symposium
  in Linz, September 9-10, 1997, T.S.]

To (un)subscribe to the Forum, mail
fleshfactor-request@aec.at (message text 'subscribe'/'unsubscribe')
send messages to fleshfactor@aec.at
