Traversing Shamanism, Turing, and ELIZA

Every so often, scholarship returns to the intractably alluring problem of human dialogic interaction with systems designed with capabilities for patterned response. Several ontic questions persist — are such systems truly intelligent? And if not, what of the insightful behavior which they stimulate, not to say awaken, in the users who interact with them? These systems came about as experiments in artificial intelligence revolving around three primary areas of research interest: strategies for knowledge representation, problems in understanding natural language, and methods for search optimization. All research was empirical; software was created and tested against comparative benchmarks appropriate to each of these problem spaces. The outcome of each of these lines of effort led to thinking whose echoing influence we might call small or large.

In knowledge representation, for example, the large problem was that of dynamically mapping schemas in a logically stable framework, while the smaller work involved the creation of semantic networks, frames and scripts, and production systems. All three of these methodologies exercised tremendous impact on research thinking of the 1970s and 1980s, and all are virtually dead today. Likewise, in natural language work, the big problem of parsing engendered the smaller approaches to it involving the investigation of grammars (formal, transformational, systemic, and case grammars); systems to demonstrate real-time machine translation and natural-language understanding (e.g., LUNAR, SHRDLU, MARGIE, LIFER); and state-based parsing solutions like augmented transition networks, and systems like the General Syntactic Processor. In search, too, the big problem was optimization against enormously large symbol spaces, whose smaller problems involved now-defunct research on state-space search, game-tree search, and proving machines like the General Problem Solver, Herbert Gelernter’s geometric theorem-prover, STRIPS, and many others.

It might be obvious that the large in each research area was the lasting question — proof perhaps that many of them remained unsolved — and the small was a compartmented decomposition of it into a hypothesis framework which would be the target of a particular kind of software program. It was the failure ever to bridge this separation, a chasm in which big problems were never resolved by small programs, that led to the extinction of much of the artificial intelligence era. But the few species that survived did so as pedagogy, as examples of the philosophical, or ethical, conundrums that obtain when non-instrumental questions are so intimately addressed by machine-level cognitive processing. MYCIN, for example, the expert system whose rule base would prompt physicians with a series of incrementally specific questions on a patient’s presenting symptomatology and generate both a diagnosis and the rationale for the reasoning, was never widely accepted, unable to conquer another problem space relating to less transparent questions of professional discomfort and liability at the implication of having a program render a medical verdict and treatment instructions. Even after output latency improved so that the time-delay in producing a diagnosis was reduced to several minutes, MYCIN, like CADUCEUS and INTERNIST-I (and unlike systems like DXplain that produce analysis rather than diagnosis), was quietly swept under the production rug, never allowed in a clinical setting, despite systematically outperforming human professionals in diagnostic questionnaires.

Lastingly discussed today, but in a discipline distinct from computer science, in which it was created, or psychology, whose practice it reflected, is another of the survivors of AI lore: ELIZA. This program, which impersonates a Rogerian psychotherapist, conducts a real-time dialogue with a user, who plays the presumed role of the neurotic.

One of the unique and insufficiently discussed realities of this program is how the user complies — or not — with the implicit instigation to “become” a patient. Two worlds simultaneously claim the user’s speech-as-writing: in one, what is typed belongs to a curious “tester” of the system’s ability to understand and respond, to function in a conversation; in another, the utterances are forced into a different discourse as they become interpreted within the scope of a greater malady which ELIZA persists in exploring. There is communication but not dialogue, which, we would agree with Bakhtin, presupposes the position of both speakers in one and the same discursive world, not different ones. The first is the intersubjective world of communicative competence that Habermas has problematized; the second, institutionalizing and projective of a pathogenic paradigm, is that of Foucault.

In its place within multiple intellectual lineages, ELIZA is more than an empirical probe for implementing and studying automatic conversation. Two additional ramifications persist, one involving the foundations of the literary (as beyond the linguistic), the other the foundations of being (as beyond simulation). In the first, Noah Wardrip-Fruin has brought ELIZA into dialogue with story-construction systems, particularly James Meehan’s 1976 work with Tale-Spin, whose interactive output is not diagnostically constraining banter but rather questions about the world which constitute the basis for a new real-time work of fiction.[1] The second, in which dialogic intelligence is bound up with being — or rather the idea of identity as locating one’s condition in multiple comparative worlds — is one that I experienced firsthand in early life.

The experience in question relates, improbably enough, to a very specific moment in my youth in Cuba. A young girl, around seven years of age, the same as I was at the time, took sick at a house party that I was attending. One of the visitors, a tall black Watusi, assumed control, demanding silence and dimmed lights. Shaking a petite bag of rattling pebbles, he gesticulated around the girl in a commanding exertion of energy, whereupon, returning to Occidental reality, he recomposed himself and declared the girl cured. Arising, she briskly ran to the rear of the house where we children were playing; everything resumed as before.

To all witnessing this, analysis seemed unnecessary; Cuban culture has always accepted the workings of obscure causes on faith. But I was left with a why that transcended any possible how, for shortly before witnessing this therapeutic intervention I had been the target of a rather different one. A few months earlier, my parents noticed a persistent lump near my umbilical area and brought me to Havana’s central hospital emergency room, where an aunt, who coincidentally was the duty nurse that day, was preparing the operating room for a patient. Suspecting symptoms of something ominous, she summoned doctors to my torso, who ordered me moved to the operating room for emergency surgery. Several hours later, I regained consciousness to the hovering voices of my aunt, my parents, and the surgeon, explaining the attack of a virulent, Ebola-like staphylococcal infection that had been about to enter my bloodstream fatally. So when, at the party, the girl had risen perkily, like a phoenix, the imprint of Western medicine — my twenty stitches — forced the question of why I couldn’t have had that mystical intervention rather than Western-style surgery.

Decades later, I recall this event as an example of transformation through opacity; the inexplicability of ritual shrouded in the enactment of a healing act was proof of intervention. In the space of the miraculous there is no room for explanation: the performance alone suffices. The logos of Western medicine, on the other hand, depends on a transparent kind of visibility: description, prediction, explanation. Its enemy is murkiness; no intervention is legitimate without explanation of method. Opacity is the wall separating the dialectic of miracle from that of mechanism, the sense of meaning from the structure of language.

There is another domain in which the same tension plays out, concerning itself with the study of intelligence. In the frequent but miraculous performances of learning and deduction that entail human understanding, the shaman is the Every-person whose adaptation to cognitive challenges is both normal and extraordinary – and as opaque as the shaman’s rite. Learning is the greatest mystery for which no explanation has yet proved complete. It is with some irony, then, that, as with medical interventions, there exist in the context of intelligent behavior performance conditions that are recurring and highly formulaic. And these recurrences have produced opportunities for mimicking intelligent behavior in computers through a tradition of experiments in which the challenge is to design the proper recipe capable of straddling the distance between the opaque miracles of understanding and the transparent mechanisms of language. Historically, two cases stand out. One of these, proposed as a Gedankenexperiment, was theoretical computer science’s greatest unrealized challenge; the other, as its inverse, emerged as an actual computer program performing a quasi-farcical play on the opacity of intelligence and our desire for connecting with an Other through the miracle of understanding, even when that Other is a mechanism.

The first example, a theoretical challenge to understanding, was posed by Alan Turing in 1950, near the end of his brief but astonishing career and life, whose professional vector contributed vital chapters to the histories of computer science, artificial intelligence, and mathematics. The miracle under Turing’s scrutiny was framed by the question, “Can a machine communicate like a human being?”, whose underlying problem is whether such processing can ever be indistinguishable from human processing, perhaps locked in powerful opacities similar to those concealed by the “black box” of the brain. To that end, Turing imagined an imitation game consisting of three rooms: room A houses a computer capable of communicating using natural human language, room B accommodates a human being, as does room C, whose inhabitant serves as prompter and judge in the game. The computer and human respondents in rooms A and B would engage in ostensibly convincing dialogue with the judge, who cannot see which of the two locutors is the human one, but who, able to converse openly with each, must attempt to spot the computer. If, pondering the conversation, the judge cannot distinguish which of the two participants is the machine, the machine will have passed the Turing Test. This test is not intended to establish objective definitions of intelligence, but to mark the point of sufficiently flexible processing at which the expressive distinction between machine and human cannot be drawn with certainty. We might note that there is no need to identify how the machine constructs responses. The point is rather whether it can generate communication sufficiently intelligent so as to deceive human understanding, be it akin to the form of the shaman, inspired by the forces of an unseen causality, or of the physician, guided by the transparencies of scientific method.

Until recently, all computer learning followed the latter, procedural model. A set of instructions, explicitly ordered into a software program, was run by a system whose behavior conforms entirely to the logic of the source code, the computer’s recipe. And while by now computer science has developed modes of machine learning through neural networks, whose complex webs of triggering associations agglomerate learning in a manner that is self-organizing and opaque to analytic breakdown, there is one program from the dawn of artificial intelligence’s golden age, designed roughly fifteen years after Turing’s challenge, that explored the minimal feasibility conditions for the Turing Test. Evocatively named ELIZA[2], the program’s ploy was the presumed encapsulation of specific human characteristics, much as Pygmalion’s statue, whose femininity seemed so flawless that he fell in love with it. If Ovid’s poem, recounting that “Art hid with art, so well perform’d the cheat / It caught the carver with his own deceit”, might have produced the earliest reference to an aesthetic Turing Test that history knows, ELIZA was the most trivial yet transparent case of impersonation in dialogue.

Presenting a teletype interface in which a user answers prompts generated by the system, ELIZA was configured as a Socratic therapist using the Rogerian technique of posing open-ended questions to probe for moments of cathexis and then selectively steering the patient’s attention. Even if it could arguably approximate a psychotherapeutic Turing Test, how could such a system be programmed in software? ELIZA’s method, exploiting the fact that intelligence is assumed to resemble understanding, focused on creating the illusion of understanding by drawing from a minimal recipe of syntactic patterns that transformed user input to construct a convincing response. When ELIZA’s rules match words and word groups from the user-patient’s statements, a transformation of the matched input produces a response. One such transformation involved first-person to second-person conversions, so the user typing, “It’s obvious that you must be bored of me by now” would surprisingly encounter, “What makes you think I must be bored of you?”. This riposte was produced by the decomposition template (0 YOU 0 ME), where the first 0 matches anything until the word “YOU”, and the next 0 again captures everything until the word “ME”. Applying the four components of the template matches the input as follows:

[Figure: the decomposition template (0 YOU 0 ME) segments the input into (1) “IT’S OBVIOUS THAT”, (2) “YOU”, (3) “MUST BE BORED OF”, (4) “ME”.]

This decomposition is in turn matched to a reassembly rule: (WHAT MAKES YOU THINK I 3 YOU), in which 3 represents the words matching the third element of the prior rule (“must be bored of”), permitting ELIZA to transform the user’s input into the seemingly conscious reply

[Figure: the reassembly rule inserts constituent 3 into its template, yielding “WHAT MAKES YOU THINK I MUST BE BORED OF YOU”.]
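To make the mechanics concrete, the following is a minimal sketch in Python of how such a decomposition template and reassembly rule might be applied. It is not Weizenbaum’s implementation (ELIZA was written in MAD-SLIP); the regular-expression rendering of the template and the function name are illustrative assumptions.

import re

# Illustrative rendering of the decomposition template (0 YOU 0 ME):
# each 0 captures an arbitrary run of words; YOU and ME match literally.
DECOMPOSITION = re.compile(r"(.*)\byou\b(.*)\bme\b", re.IGNORECASE)

def respond(utterance):
    """Apply the decomposition, then the reassembly rule
    (WHAT MAKES YOU THINK I 3 YOU), where 3 is the third constituent."""
    match = DECOMPOSITION.search(utterance)
    if match is None:
        return None
    third = match.group(2).strip()          # e.g. "must be bored of"
    return f"WHAT MAKES YOU THINK I {third.upper()} YOU"

print(respond("It's obvious that you must be bored of me by now"))
# -> WHAT MAKES YOU THINK I MUST BE BORED OF YOU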

In another kind of transformation rule ELIZA exchanged specific words for the categories within which they can be classified. Thus, if a user mentions the word “sister”, ELIZA, retrieving the family category, would ask, “Tell me more about your family”. Similarly, words like “depressed” promote up to “feelings”, so that if the user complains, “I am often depressed”, ELIZA counters with, “Tell me more about your feelings”. Within what Weizenbaum called the “overwhelmingly psychological orientation” of the pseudo-therapeutic context to which it was meant to be compared, the illusion was absorbing.
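A minimal sketch of this second kind of rule might look as follows; the dictionary is a stand-in invented for illustration, echoing the examples above, not ELIZA’s actual keyword list.

# Hypothetical keyword-to-category table, echoing the essay's examples.
CATEGORIES = {
    "sister": "family", "brother": "family", "mother": "family",
    "depressed": "feelings", "sad": "feelings", "unhappy": "feelings",
}

def promote(utterance):
    """If a listed keyword appears, reply at the level of its category."""
    for word in utterance.lower().split():
        category = CATEGORIES.get(word.strip(".,!?"))
        if category:
            return f"TELL ME MORE ABOUT YOUR {category.upper()}"
    return None

print(promote("I am often depressed"))  # -> TELL ME MORE ABOUT YOUR FEELINGS
print(promote("My sister hates me"))    # -> TELL ME MORE ABOUT YOUR FAMILY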

However, none of ELIZA’s transformations actually preserved knowledge; the program only manipulated linguistic markers via single-sentence interaction. One of the therapist’s strengths is managing some memory of a patient’s statements. ELIZA, however, discards every input after its transformation into a response. It thus has no notion of therapy through the logic of discourse, the perception of consistency or contradiction across a span of utterances[3]. Even so, with prescient anticipation of the fervor that the program would provoke in the coming decades, Joseph Weizenbaum, its creator, was careful from the outset to present the anatomy of ELIZA exclusively as a collection of processing statements amalgamated into a recipe for a specific encounter in discourse, destroying the opacity of its method so as to “rob ELIZA of the aura of magic to which its application to psychological subject matter has to some extent contributed”[4]. For ELIZA, as for the physician and the shaman, an exchange of signs frames an encounter that turns on transformation rules, but in converting opacity into transparency, the visible evidence of these rules eradicates their power, so that “once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible.”[5]

Lest, then, we interpret the notion of “transformation” too ‘opti-mystically’, it is worth noting that the divergence between the transformation of conceptual structures, like machine-based conversation, and the opposite of transformation, the stasis of the institutional structures which utilize those machines, was not lost on Weizenbaum, who much later suspected that

if it had not been for the computer, if the computer had not been invented, what would the banks have had to do? They might have had to decentralize, or they might have had to regionalize in some way. In other words, it might have been necessary to introduce a social invention, as opposed to the technical invention.

What the coming of the computer did, “just in time,” was to make it unnecessary to create social inventions, to change the system in any way. So in that sense, the computer has acted as fundamentally a conservative force, a force which kept power or even solidified power where it already existed.[6]

Given the constraints against hopes for meaningful understanding that ELIZA demonstrated, we could read this as a proclamation of how the virtual symbolic provides a field for encounters involving recipe-like transformations not of knowledge, but rather of doubt itself.

APPENDIX
Selected and Annotated Fragments of ELIZA Script Source Code

Comments on source code functionality appear in italics before the source code.

“Sorry” mentioned in the patient’s input:

(SORRY ((0) (PLEASE DON'T APOLOGIZE)
    (APOLOGIES ARE NOT NECESSARY)
    (WHAT FEELINGS DO YOU HAVE WHEN YOU APOLOGIZE)
    (I'VE TOLD YOU THAT APOLOGIES ARE NOT REQUIRED)))

Transformation rule REMEMBER 5 collects into item 4 everything after the patient’s mention of the word “remember”:

(REMEMBER 5
    ((0 YOU REMEMBER 0) (DO YOU OFTEN THINK OF 4)
        (DOES THINKING OF 4 BRING ANYTHING ELSE TO MIND)
        (WHAT ELSE DO YOU REMEMBER)
        (WHY DO YOU REMEMBER 4 JUST NOW)
        (WHAT IN THE PRESENT SITUATION REMINDS YOU OF 4)
        (WHAT IS THE CONNECTION BETWEEN ME AND 4)))

Any mention of “dream” is met with a general prompt for the patient’s own interpretation:

(DREAM 3 ((0) (WHAT DOES THAT DREAM SUGGEST TO YOU)
    (DO YOU DREAM OFTEN)
    (WHAT PERSONS APPEAR IN YOUR DREAMS)
    (DON'T YOU BELIEVE THAT DREAM HAS SOMETHING TO DO WITH YOUR PROBLEM)))

Nothing matched; prompt for more:

(NONE ((0) (I AM NOT SURE I UNDERSTAND YOU FULLY)
    (PLEASE GO ON)
    (WHAT DOES THAT SUGGEST TO YOU)
    (DO YOU FEEL STRONGLY ABOUT DISCUSSING SUCH THINGS)))

Seize on tentative statements:

(PERHAPS ((0) (YOU DON'T SEEM QUITE CERTAIN)
    (WHY THE UNCERTAIN TONE)
    (CAN'T YOU BE MORE POSITIVE)
    (YOU AREN'T SURE) (DON'T YOU KNOW)))

(MAYBE (=PERHAPS))

“Computer” is another loaded term:

(COMPUTER 50 ((0) (DO COMPUTERS WORRY YOU)
    (WHY DO YOU MENTION COMPUTERS)
    (WHAT DO YOU THINK MACHINES HAVE TO DO WITH YOUR PROBLEM)
    (DON'T YOU THINK COMPUTERS CAN HELP PEOPLE)
    (WHAT ABOUT MACHINES WORRIES YOU)
    (WHAT DO YOU THINK ABOUT MACHINES)))

Echo the patient’s statement by inverting first person into second person:

(AM = ARE ((0 ARE YOU 0) (DO YOU BELIEVE YOU ARE 4)
    (WHAT WOULD IT MEAN IF YOU WERE 4) (=WHAT))
    ((0) (WHY DO YOU SAY 'AM') (I DON'T UNDERSTAND THAT)))

(ARE ((0 ARE I 0)
    (WHY ARE YOU INTERESTED IN WHETHER I AM 4 OR NOT)
    (WOULD YOU PREFER IF I WEREN'T 4)
    (PERHAPS I AM 4 IN YOUR FANTASIES)
    (DO YOU SOMETIMES THINK I AM 4) (=WHAT))
    ((0 ARE 0) (DID YOU THINK THEY MIGHT NOT BE 3)
    (WOULD YOU LIKE IT IF THEY WERE NOT 3)
    (WHAT IF THEY WERE NOT 3)
    (POSSIBLY THEY ARE 3)))
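Read together, fragments such as these already imply a complete control loop. The sketch below, in Python, illustrates one plausible way such a script could drive a session, under two assumptions not stated in the fragments themselves: that the number following a keyword (e.g., COMPUTER 50) is its precedence, and that a keyword’s canned responses are served in rotation. The responses are transcribed from the NONE and COMPUTER entries above; everything else is invented for illustration.

from itertools import cycle

# Transcriptions of two fragments above: keyword -> (precedence, responses).
# Serving responses in rotation and falling back to NONE are assumptions
# about the interpreter, not something the fragments themselves specify.
SCRIPT = {
    "computer": (50, cycle([
        "DO COMPUTERS WORRY YOU",
        "WHY DO YOU MENTION COMPUTERS",
    ])),
    "perhaps": (0, cycle([
        "YOU DON'T SEEM QUITE CERTAIN",
        "WHY THE UNCERTAIN TONE",
    ])),
}
FALLBACK = cycle([
    "I AM NOT SURE I UNDERSTAND YOU FULLY",
    "PLEASE GO ON",
])

def reply(utterance):
    """Answer with the highest-precedence keyword present (naive substring
    matching); otherwise fall back to the NONE responses."""
    hits = [(rank, responses)
            for word, (rank, responses) in SCRIPT.items()
            if word in utterance.lower()]
    if not hits:
        return next(FALLBACK)
    _, responses = max(hits, key=lambda hit: hit[0])
    return next(responses)

print(reply("Perhaps the computer dislikes me"))  # -> DO COMPUTERS WORRY YOU
print(reply("The weather was fine today"))        # -> I AM NOT SURE I UNDERSTAND YOU FULLY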

REFERENCES

ben-Aaron, Diana. “Weizenbaum Examines Computers and Society.” The Tech, April 9, 1985, 2.

Weizenbaum, Joseph. “ELIZA: A Computer Program for the Study of Natural Language Communication between Man and Machine.” Communications of the ACM 9, no. 1 (1966): 36-45.


[1] The primary work in this line of reasoning is Wardrip-Fruin’s lucid text Expressive Processing: Digital Fictions, Computer Games, and Software Studies. For similarly close treatment of the Eliza effect, see another presentation by him. My own thoughts after this introduction were published in Recipes for an Encounter, edited by Marisa Jahn, Candice Hopkins and Berin Golonu. New York: Western Front and Pond: Art, Activism, and Ideas, 2009.


[2] The program is named after Eliza Doolittle, the deprived girl with a Cockney accent and working-class mannerisms in George Bernard Shaw’s 1913 Pygmalion. The play is itself an adaptation of the myth of Pygmalion and the Statue from Ovid’s narrative poem, Metamorphoses.

[3] This was left as a goal for a possible “augmented ELIZA program” that itself was never built.

[4] Joseph Weizenbaum, “ELIZA: A Computer Program for the Study of Natural Language Communication between Man and Machine,” Communications of the ACM 9, no. 1 (1966): 45.

[5] Ibid., 36.

[6] Diana ben-Aaron, “Weizenbaum Examines Computers and Society,” The Tech, April 9, 1985.
