

FleshFactor: From the Lost and Found Dept.

A E C  F O R U M - "F L E S H F A C T O R"

One and All --

Debra Solomon has afforded an opening where I can clamber up onto my
favourite hobby-horse and take it for a brisk morning trot --

> Speaking of extended phenotypes, good example, them books.....all of a
> sudden there was so much stuff to remember that folks just had to start
> writin' it down to keep it from disappearing into thin air. 

Yes, and once we'd written down and collected so many words that it had
become impossible to remember where exactly we came across a certain passage
that we now want to re-read or refer to -- why then, folks had to start
inventing ways to file and code texts, by making lists, and giving sections
and chapters their own names and heads, and maybe assigning numbers to
pages, and then organizing the assembled lists of words and 'common places'
according to some agreed-upon but essentially arbitrary convention, as for
example, "A-B-C"; and so invented the symbolic machinery of bibliographic
reference that allows us to manage the 'extended phenotype' in a more
convenient manner than just searching through an entire library every time
you need to find this or that particular item. Which is the
'meta-literature' that makes possible the scholarly and scientific and
commercial bureaucracies of the present global civilization. Computers as
such are secondary to these practices of information management which,
although they are now carried out using computers, in fact predated them by
some 700 years.

In the West, the technologies of reference (alphabetization of word-lists,
page numbering, indexes and union catalogues) appeared in a flurry of
invention over the 50 years from 1230 to 1280, in the monastic libraries and
scriptoria of the late Middle Ages, two centuries before Gutenberg. It's a
story that, incredibly, Marshall McLuhan managed to miss, though the slack
has been taken up in part by Mary Carruthers ("The Book of Memory"), Ivan
Illich ("In the Vineyard of the Text") and Bruno Latour ("Drawing Things
Together"). The story I have to tell is a bit to one side, and more
abstract, though rooted in this 13th century information explosion -- namely
the compilation of the first concordance of the Bible by an ecclesiastical
data processing department of some 500 monks under the direction of one Hugo
de Sancto Caro, in 1247. (A concordance is an 'inverted index' that collects
every instance of a particular word or phrase occurring in a text into 'the
same place', by listing those places in the text -- e.g., page numbers, or
'chapter and verse' citations -- where the 'look-up key' is to be found.)
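The concordance idea can be sketched in a few lines. Here is an inverted index over a handful of 'verses'; the verse texts and their citation labels are invented for illustration only:

```python
# A concordance in miniature: an 'inverted index' mapping each word to
# the 'chapter and verse' places where it occurs. The verses and their
# labels are made up for the sake of the example.
from collections import defaultdict

verses = {
    "1:1": "in the beginning was the word",
    "1:2": "the word was made flesh",
    "2:1": "flesh of my flesh",
}

concordance = defaultdict(list)
for place, text in verses.items():
    for word in set(text.split()):   # each word listed once per verse
        concordance[word].append(place)

print(sorted(concordance["flesh"]))  # every place where 'flesh' occurs
print(sorted(concordance["word"]))
```

Looking up a word returns the list of places where it is to be found, instead of searching the whole 'library' front to back.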

I would draw a parallel between this external, artifactual memory of words,
and the internal biological memory system in the head, and propose that the
'search problem' (namely, how to find something when it is needed) which was
solved for our 'extended phenotype' of written records by the development of
indexing, had been similarly 'solved' a great deal earlier, over the course
of evolution, as a means of managing the private memory of an individual's
lived experience, and by much the same 'method', i.e. indexing: whereby the
sensible signs of objects or situations encountered in the world come to be
associated with those things, so that upon again encountering a certain
sign, there is automatically invoked, or raised to 'preparedness' (if not to
fully conscious awareness) in the mind (or brain, whatever), the congeries
of circumstances where this sign has in the past occurred.

This is a 'best matching' memory, where the greater the number of signs or
cues that a past episode shares with the signs of the present situation, the
greater the epistemic weight accorded it, as being a 'similar' sort of
thing, hence any information associated with that episode will 'probably'
(making the 'inductive bet' that similar causes will, more often than not,
lead to similar effects) be relevant to the present case, and in degree
proportional to the 'closeness' of their resemblance, as estimated by the
number of features they have in common. By taking the lists (inverted files)
of 'contexts' associated with all signs or keys present to the senses,
merging these lists, and counting up the number of times each item is cited,
we get a ranking of the matching candidates, ordered by their similarity
(ditto salience) to the multi-key query. (Essentially this same scheme is at
the heart of the Web's ubiquitous 'search engines'.)
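The merge-and-count scheme just described is a few lines of code. The inverted index below is a toy, with invented symptom and disease names, purely to show the mechanism:

```python
# Multi-key best matching: pool the inverted files of every sign in the
# query and count up the citations. The index is a made-up toy example;
# the key and candidate names are illustrative only.
from collections import Counter

index = {
    "spots":     ["measles", "chickenpox", "acne"],
    "fever":     ["measles", "chickenpox", "flu"],
    "diarrhoea": ["flu", "food poisoning"],
}

def best_match(query):
    votes = Counter()
    for key in query:
        votes.update(index.get(key, []))   # merge this key's inverted file
    return votes.most_common()             # rank by number of shared signs

for candidate, score in best_match(["spots", "fever", "diarrhoea"]):
    print(candidate, score)
```

Candidates sharing more signs with the query float to the top; the less similar are not excluded, only ranked lower.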

The best match algorithm has an interesting history. A case could be made
that, had the AI community been aware of the existence of the algorithm,
there would have been no need for 'AI', as such, at all. But since they did
not know of it, the whole field stumbled along for four decades under the
misapprehension that to implement efficient best matching search (in a way
that wouldn't embroil us in an exhaustive item-by-item examination of all
records
in memory) was not possible on conventional serial processors. Which led to
the development of artificial neural nets, and massively parallel
multiprocessor systems like the Connection Machine (RIP), and the
establishment of computational geometry as a sub-field within computer
science whose fundamental problem is to find the 'nearest point to a point',
viz. the closest / nearest / best match, yet again. (And likewise with
'collision detection' algorithms, or the 'herding / flocking / schooling'
algorithms used by A-life programs like Craig Reynolds' "Boids", which
naively require comparing the location of every object with that of every
other object in order to determine which objects are closest to which
others.)

I won't bore you with the many testimonials from AI's grey eminences about
how best matching look-up, could it be efficiently implemented, would
immediately put paid to all the outstanding problems of the field --
analogical reasoning, large-capacity associative memory, natural language
understanding, recognizing and discovering patterns, machine vision,
learning from experience. Suffice it to say that the crucial importance of
best matching to artificial intelligence has long been recognized, even
though no-one actually knew how to go about it. There have been, withal, a
number of reinventions of the magic algorithm, but far from AI's beaten
track, and its reinventors as a rule haven't noticed that their clever hack
is really just the commonplace, back-of-the-book index; and they failed too
to appreciate its full generality -- that, for all intents and purposes, it
is a one-size-fits-all universal 'algorithm of intelligence'. But the great
jest is that AI's Holy Grail had been hiding in plain view, right at our
fingertips, for the past 750 years. (I believe it's called irony, yes?)

I could go on and on about the coolness of the thing, but I'll leave it to
you to connect the dots. I will however share this one amazingly lovely but
almost wholly forgotten item, the world's first medical expert system, the
'Logoscope' of Dr. F. A. Nash, F.R.C.S., which was invented by him in 1953,
and was in fact a working 'AI' made of cardboard, resembling nothing so much
as a pocket train schedule. Had it been seen for what it was, things might
have turned out rather differently, in virtually any area of human activity
you'd care to mention. (I want the New Jerusalem, and I want it now -- I
most humbly request of everyone concerned, please to very much all stop
dicking around, and just do it okay?)

Here's a picture of the Logoscope in typographic simulation (warning: ASCII
Art Alert!) --

          a   b   c   d   e   f   g   h   i   j   k   l   m   n
    1                 1       1   1                   1        
    2     1               1       1       1               1    
    3         1       1           1           1   1           1
    4         1   1           1       1               1   1    
    5     1               1           1       1               1
    6             1           1   1       1   1           1    
    7         1       1   1               1       1       1    

Columns 'a' to 'n' represent diseases, with the rows corresponding to
symptoms. A patient presents with symptoms [2, 5, 7], e.g. spots, diarrhoea,
fever. You select out the appropriate rows. (In the Logoscope, the
individual 'rows' were long narrow strips of card, to be lined up with a
separate index-card listing close to 400 disease types.)

          a   b   c   d   e   f   g   h   i   j   k   l   m   n
    2     1               1       1       1               1    
    5     1               1           1       1               1
    7         1       1   1               1       1       1    

Letting the 'ticks' tumble down to the bottom yields a histogram, with
column sums (number of 'votes' = matching symptoms per disease) beneath.

          a   b   c   d   e   f   g   h   i   j   k   l   m   n
                          1
          1               1               1               1
          1   1       1   1       1   1   1   1   1       1   1
          2   1   -   1   3   -   1   1   2   1   1   -   2   1

Finally, sorting the sums gives a ranking of diseases ordered by similarity
to the patient's ailment.

          e   a   i   m   b   d   g   h   j   k   n   c   f   l
          3   2   2   2   1   1   1   1   1   1   1   -   -   -  

And that's all there is to it. Notice that in the original graphical
Logoscope the best matching candidates are apparent as soon as the rows are
selected and grouped, while a computer version requires the extra summing
and sorting steps. On the other hand, computers can utilize conditional
probabilities or weights, rather than the bare 0/1 'ticks' of the Logoscope,
for greater sensitivity.
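The computer version of the tally is short enough to show in full. The strips below are transcribed from the figure above, with the column letters standing in for the disease names:

```python
# The Logoscope's arithmetic in code: rows are the symptom strips,
# columns the diseases 'a'..'n', transcribed from the figure.
DISEASES = "abcdefghijklmn"
STRIPS = {
    2: "a e g i m",      # columns ticked on strip 2
    5: "a e h j n",
    7: "b d e i k m",
}

def diagnose(symptoms):
    votes = {d: 0 for d in DISEASES}
    for s in symptoms:
        for d in STRIPS[s].split():
            votes[d] += 1
    # the sort is stable, so ties stay in column order, as in the figure
    return sorted(votes.items(), key=lambda kv: -kv[1])

print(diagnose([2, 5, 7])[:4])   # 'e' leads, with all three symptoms
```

The summing and sorting reproduce the ranking of the final figure: 'e' first with three votes, then 'a', 'i', 'm' with two apiece.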

By showing that diagnosis can be construed formally as multi-key look-up
using inverted files, Nash showed that any cognitive process which can be
regarded as diagnostic can be realized as an index. Sense perception,
pattern recognition, interpreting a poem, identifying a tune, making a pun.
It corresponds to C. S. Peirce's 'abduction' or 'hypothesis', or 'inference
to the best explanation' -- the logical fallacy of 'affirming the
consequent'. (Which Gregory Bateson took as the type of magical or primary
process thinking, of poetic and mystical and visionary ideation. But also,
delusion, paranoia, psychosis: the 'undiscovered country' of the
Unconscious, Wonderland's looking-glass logic, the world under the world.) 

It's just semiotics really, which my old 1935 Concise Oxford defines as
"that branch of pathology concerned with symptoms" -- how to read the signs.
(Vide Carlo Ginzburg's beautiful essay, "Clues: Morelli, Freud, and
Sherlock Holmes".)

(If you take an index (e.g. the Logoscope, above) and represent the
designated entities as two parallel series, drawing lines connecting
'symptoms' and their associated 'diseases', you'll get a picture of a
'neural network'. Or for that matter, anything else which involves numerous
interpenetrating and overlapping many-to-many mappings. E.g. gene maps,
trophic webs, ecosystems, the spread of rumours, contagions, technologies,
memes ... )

What's lovely about the Logoscope's best matching algorithm (which BTW was
also implicit in 'Peek-a-Boo' card indexes, c. 1947) is that -- against the
standard 'all-or-nothing' Boolean operators AND, OR, and NOT -- there is no
penalty for including a greater number of terms in the search query. Such
multi-key queries induce a topographic 'density map' of the multidimensional
neighbourhood of the query, where the more terms an item shares with the
query, the more similar and hence 'closer' it is to the query. But
less-similar ('more distant') items are not excluded, as would happen if we
took the Boolean AND (set-intersection) of the terms. Their presence
(manifesting as 'noise', which neuroscientists would call 'dark current',
beautiful) confers statistical robustness and a power of generalization (or
'smoothing'), ameliorating what the statisticians call the 'bias-variance'
trade-off.

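The contrast can be seen concretely by running the same query both ways, on the same toy strips as in the Logoscope figure:

```python
# Boolean AND vs. graded matching, on the strips from the Logoscope
# figure: the strict intersection keeps only diseases showing *all*
# the presented symptoms, while vote-counting retains the near misses.
STRIPS = {
    2: {"a", "e", "g", "i", "m"},
    5: {"a", "e", "h", "j", "n"},
    7: {"b", "d", "e", "i", "k", "m"},
}
query = [2, 5, 7]

strict = set.intersection(*(STRIPS[s] for s in query))
print(strict)     # only 'e' survives the AND

votes = {}
for s in query:
    for d in STRIPS[s]:
        votes[d] = votes.get(d, 0) + 1
near = sorted(d for d, v in votes.items() if v == 2)
print(near)       # 'a', 'i', 'm' are retained as close seconds
```

The set-intersection throws away everything but the perfect match; the vote count keeps the whole graded neighbourhood of the query.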
I said 'topographic' but maybe 'tomographic' is nearer the mark. Best
matching look-up is a kind of abstract image reconstruction from
projections, analogous to CAT-scan imaging. By superimposing the selected
inverted files of 'contexts' associated with the terms of a query, we are
given an image of the 'probability density function' (a multidimensional
histogram really) in the neighbourhood of the query. But past the technical
language, it's just what everyone has always known, that our 'memories' (as
much as to say 'minds') are stitched together by likeness, or affinity, such
that the things which are 'close' to one another in meaning, will be
('geometrically') close to one another, in the mind. 

As I like to say, "Proximity may not be probability, but it's close."

 Derek Robinson, Toronto


to (un)subscribe to the Forum just mail to
fleshfactor-request@aec.at (message text 'subscribe'/'unsubscribe')
send messages to fleshfactor@aec.at
