
FleshFactor: Re: the ultimate interface?



---------------------------------------------------------
A E C  F O R U M - "F L E S H F A C T O R"
(http://www.aec.at/fleshfactor/arch/)
---------------------------------------------------------


Guillermo Cifuentes wrote: "we tend ... to turn ourselves into language: the
keyboard, and not the screen, is the ultimate interface ... "

Puts me in mind of a 'mind reading' program I wrote a couple of years back.
An adaptive interface to help motor-disabled people (e.g. multiple
sclerosis, ALS, ALD, cystic fibrosis, spina bifida, or spinal cord injury)
to communicate via a dynamic 'wordboard' display, to get their rates up from
the 5 words per minute typical of pointer & menu systems, to rates closer to
what most of us take for granted in conversation (100 wpm or better).

The challenge was to display the (content) words a user will *probably* want
to use next, without limiting their vocabulary to the few tens of words that
can be comfortably and rapidly reviewed on a computer screen, and without
forcing them to navigate an onerous nested sequence of menus. The solution
was an inverted index associating with each content word, from an existing
body of text (a general corpus to start, but soon enough supplanted by the
user's own words), the set of 'contexts' in which the word had appeared.
Each context was a few (from 2 to 5) lines of text, and the contexts,
importantly, overlapped. The words in the 'current context' (the text of the
preceding few lines produced by the user) would invoke their associated sets
of past contexts. The word sets making up the selected contexts would then
be merged, the individual word counts tallied and sorted, and the top-ranked
10 or 20 words displayed on the screen. Very quick, real-time, 'on wings of
thought'.

As I say, a 'mind reader', and surprisingly sensitive & adept, given its
utter simplicity, viz. just inverted indexing with multi-key (best matching)
look-up. ('Mind-reading', many would insist, being strictly a *human* preserve.)
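
For the technically curious, here is a minimal sketch of the scheme, in
Python (a language of my choosing; the names, the stopword list, and the
line-window details are illustrative assumptions, not the original
program's):

    from collections import Counter, defaultdict

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in",
                 "is", "it", "that", "this", "for", "on", "with"}

    def content_words(text):
        # a crude notion of 'content word': anything not a function word
        return {w for w in text.lower().split() if w not in STOPWORDS}

    class Wordboard:
        def __init__(self, context_lines=3):
            self.context_lines = context_lines  # 2 to 5 lines per context
            self.contexts = []                  # one word set per context
            self.index = defaultdict(set)       # word -> ids of its contexts

        def add_text(self, lines):
            # index overlapping windows of a few lines each
            n = self.context_lines
            for i in range(max(len(lines) - n + 1, 1)):
                words = content_words(" ".join(lines[i:i + n]))
                cid = len(self.contexts)
                self.contexts.append(words)
                for w in words:
                    self.index[w].add(cid)

        def predict(self, recent_lines, top_n=20):
            # the words of the current context invoke their past contexts;
            # merge those contexts' word sets, tally, and rank the result
            current = content_words(" ".join(recent_lines))
            hits = set()
            for w in current:
                hits |= self.index.get(w, set())
            counts = Counter()
            for cid in hits:
                counts.update(self.contexts[cid])
            for w in current:                   # don't re-offer words just used
                counts.pop(w, None)
            return [w for w, _ in counts.most_common(top_n)]

Each prediction is just a few set unions and a tally over short word lists,
which is where the real-time speed comes from; and since the user's own
output is indexed as it is produced, the board adapts itself to their
vocabulary.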

And here we perhaps begin to make out the shape of a 'dream machine', where
an operator's intent is transduced *autonomically*, from subtle signs not
directly accessible to awareness but known only by their results. By trial &
error, such a machine would eventually create in the user a 'feel' for how
to 'do' what, evidently, must be 'done' to produce the desired effects,
though not in any way that would ever let one tell 'what', in fact, one
'did' to achieve the desired ends. No more than we can tell how we (our
bodies, carrying out our desires effortlessly, without deliberation, without
thinking) are able to *wish* a limb or tongue or eye to move; and it moves
... 'mind over matter', mm?

A kind of 'navigation' through an imaginary 'space' of distinctions or
choices, internalized and made somatic, semi-automatic, the body's memory,
not conscious but obedient to volition. Instrument navigation, flying blind
through dense fog, night wings planing over unseen landscapes, invisible
cities. We can do this, I think. I think we do it all the time. 

Certain primitive proprioceptive parts of ourselves -- body image, the ratio
of the senses, spatial mappings of environmental constancies and differences
upon our sensory surfaces -- are extraordinarily plastic, and can be remapped
with great facility if the means are made available. For example, the TVSS
(tactile vision substitution system) of Paul Bach-y-Rita at the
Smith-Kettlewell Institute of Visual Sciences (I believe it was) back around
1968-70.
This was a two-dimensional array of 'vibrotactors' worn on a blind
subject's skin (chest, thigh, or back) which translated a low-resolution
video image into 'tingles'. What was really extraordinary was that after
just 10 or 15 minutes, the wearer of this apparatus would be 'seeing'
objects located *out there* in the 3D space in front of them, although if
they were asked to introspect about the nature of the 'image', they would
report that they were 'seeing' it as if upon an inner 'screen', viewed by
the 'mind's eye'. (And of course they could also, if they wished, attend to
the tactile stimulation itself, local to the skin surface under the TVSS
display.) 
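
To make the translation concrete: a sketch, in Python with NumPy, of how a
grayscale frame might be pooled down to one intensity per vibrotactor. The
20 x 20 grid matches the figure mentioned below; the eight drive levels are
my assumption, not a spec of the actual apparatus.

    import numpy as np

    def frame_to_tactors(frame, rows=20, cols=20, levels=8):
        # average-pool a grayscale frame (2D uint8 array) down to one
        # value per tactor, then quantize to a few vibration levels
        h, w = frame.shape
        frame = frame[:h - h % rows, :w - w % cols]  # crop to tile evenly
        pooled = frame.reshape(rows, frame.shape[0] // rows,
                               cols, frame.shape[1] // cols).mean(axis=(1, 3))
        return np.round(pooled / 255.0 * (levels - 1)).astype(int)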

This was the experience of sighted (blindfolded) subjects and of subjects
blind since birth alike. Once this 'mapping' had been learned, the
vibrotactile array could be shifted elsewhere on the body (from the back to
the chest, say) and it would make no difference whatsoever to their 'visual'
apprehension of the world transduced by the video camera. The remapping was
virtually instantaneous. (Nor would they be fooled or distracted by 'itches'
or 'twinges' occurring under the TVSS interface; these were correctly
interpreted as pertaining to the skin, not the visual scene displayed
'through' the skin.)

Another remarkable finding of the TVSS team was that subjects seemed to be
able to discriminate subtle visual differences 'beyond' the actual
resolution of the TVSS array, which was very coarse (e.g. 20 x 20 = 400
vibrotactors). This would have been due in part to 'active vision' -- the
fact that we MOVE AROUND in the environment, which, to a far greater
extent than binocular disparity, allows the visual integration of an 'in
depth' knowledge of our surroundings, e.g. by the differential rates at
which objects are occluded by other objects as these relatively homogeneous
patches move across the visual scene. And it would also have been due, in
part, to people's prior knowledge of *what* they are seeing, which allows
(unconscious, undeliberate) 'filling-in' or interpolation of what *should*
be there, if the thing seen is indeed what one believes it to be. (We have
all been fooled, haven't we? A face in the crowd that you're convinced is
someone you know, then when you get closer you see it isn't them at all;
the real face is in fact nothing like the one that you, moments before,
'saw'.)


Derek Robinson

<drdee@interlog.com>

--------------------------------------------------------------------
to (un)subscribe to the Forum, just mail to
fleshfactor-request@aec.at (message text 'subscribe'/'unsubscribe')
send messages to fleshfactor@aec.at
--------------------------------------------------------------------

