★ AMSTRAD CPC ★ GAMESLIST ★ ELIZA (c) INTERFACE PUBLICATIONS ★

★ This text is presented to you in its original version ★

The Doctor is In

No study of ‘natural language processing' would be complete without a discussion of ELIZA, the most famous program of this type (and, at one stage, the most famous computer program of all).

Joseph Weizenbaum, Professor of Computer Science at the Massachusetts Institute of Technology, created ELIZA in 1964-66. The program was designed to take on the role of a Rogerian psychotherapist (Weizenbaum insists it was intended to parody such a practitioner). A psychotherapist of Carl Rogers' school works by refusing to lead the conversation with a patient, preferring instead to reflect the patient's remarks back at them, so that the direction of the conversation always depends on the patient.

The reaction to the program — once it was made public — at first startled, and then seriously worried Weizenbaum. As a result of the train of thought engendered by the shock of its reception (which we'll discuss in a moment), Weizenbaum left MIT for two years to ponder the wider implications of the ELIZA experience, and during that time wrote the very important work Computer Power and Human Reason (San Francisco: W. H. Freeman and Co., 1976). There is quite a bit of material in this book that I don't agree with, but this in no way diminishes the importance of the work. Regardless of my views, it is a work I urge you to read if you are seriously interested in artificial intelligence, and in the relationships and dependencies which we see developing between man and machine.

You use Weizenbaum's program by typing in your side of a ‘conversation', to which the program replies, using its inner stock of answers. Here's part of one ‘conversation' I had with my Amstrad, running ELIZA:

? I HAVE A SERIOUS PROBLEM
CAN YOU ELABORATE ON THAT?

? I AM AFRAID OF MICE
HOW LONG HAVE YOU BEEN AFRAID OF MICE

? SINCE BEFORE I TOOK UP COMPUTERS
WHY DO YOU MENTION COMPUTERS?

? IT SEEMED RELEVANT AT THE TIME
CAN YOU ELABORATE ON THAT?

? IT SEEMED RELEVANT TO THE FACT THAT I AM AFRAID OF LITTLE SQUEAKY THINGS
HOW LONG HAVE YOU BEEN AFRAID OF LITTLE SQUEAKY THINGS

? I THINK WE HAVE BEEN THROUGH THIS BEFORE
MARTIN, DO YOU DOUBT WE HAVE BEEN THROUGH THIS BEFORE

? I SOMETIMES DOUBT EVERYTHING
WHAT DOES THAT SUGGEST TO YOU?

? THAT I AM LIKE MOST PEOPLE IN HAVING DOUBTS
HOW LONG HAVE YOU BEEN LIKE MOST PEOPLE IN HAVING DOUBTS

? SINCE I STARTED DREAMING OF ROBOTS
WHY DID YOU BRING UP THE SUBJECT OF DREAMS?

? I HAVE THIS THING ABOUT MICE
COULD YOU SAY THAT AGAIN, IN A DIFFERENT WAY?

? I'M AFRAID THEY'LL COME AND JUST STAY FOR TEA
WHY ARE YOU TELLING ME YOU'RE AFRAID THEY'LL COME AND JUST STAY FOR TEA

? I THOUGHT YOU WERE MEANT TO ASSIST IN AN HOUR OF NEED
I'M NOT SURE I UNDERSTAND YOU FULLY.

? HOW CAN I SAY IT MORE CLEARLY
PERHAPS YOU DON'T WANT TO BE ABLE TO SAY IT MORE CLEARLY

? I THINK YOU ARE AN AMSTRAD FRAUD
WHAT MAKES YOU THINK I AM AN AMSTRAD FRAUD

? BECAUSE YOU NEVER GIVE ME A STRAIGHT ANSWER
WHAT DOES THAT SUGGEST TO YOU?

? COME ON TURKEY I WANT A STRAIGHT ANSWER
WHY DO YOU WANT A STRAIGHT ANSWER

? TO HELP ME IN MY HOUR OF NEED
COULD YOU SAY THAT AGAIN, IN A DIFFERENT WAY?

? YOU'RE PRETTY DUMB FOR A MACHINE
WHAT IS YOUR REACTION TO ME BEING PRETTY DUMB FOR A MACHINE

? IT MAKES ME WANT TO DRINK HEAVILY
MODERATION IN ALL THINGS SHOULD BE THE RULE.

? EVEN IN SEX
I FEEL YOU SHOULD REALLY DISCUSS THIS WITH A HUMAN.

? BYE
BYE BYE FOR NOW.
SEE YOU AGAIN SOME TIME.

Weizenbaum chose the name ELIZA for his program because, he said, like the Eliza in Pygmalion, it could be taught to speak increasingly well. The original ELIZA program was written in two parts. The first part analysed the user input, and the second part was a ‘script'. Different scripts were designed for different topics, and DOCTOR was one of the scripts. (Other scripts could well have held discussions on ancient ships, real estate, currency exchange rates or whatever.)

The DOCTOR (Rogerian) script was the first one Weizenbaum tried out. The program became well known around MIT because it was a very effective way to demonstrate the power of a computer (remember, this was all a long, long time ago, in a galaxy far away, when people could not buy computers off the shelf at their local store).

Weizenbaum reported his work on ELIZA to the computer press in due course (“ELIZA — A Computer Program for the Study of Natural Language Communication Between Man and Machine”, Communications of the Association for Computing Machinery, vol. 9, no. 1 [January 1966], pp. 36-45) and soon a number of versions of it — based on his description — were running at other institutions in the United States.

Weizenbaum reports that there were three distinct events which “shocked” him, as ELIZA's use became widespread. Firstly, he was horrified (and I find it hard to appreciate why he was as alarmed as he reports) to discover that people quickly became involved with the program.

Weizenbaum's secretary had worked closely with him over the six months or so it took to produce the program, and she knew as well as he how it worked, scanning a person's input for ‘key words' (such as DREAM or FRIENDS) and then choosing a suitable reply from a bank of such replies.

Other words from the user's sentences could be incorporated, sentences could be ‘turned around' (so “I am happy because of the weather” could be simply fed back as either “Why are you happy because of the weather” or just as a statement of the form “You are happy because of the weather. What does this suggest to you?”). A number of other replies (such as “That is interesting, please go on”) could be used if no key word was recognized.
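
To make that mechanism concrete, here is a minimal keyword-spotting sketch in the Amstrad's own BASIC. It is not the ELIZA listing itself, just an illustration of the technique described above, using a handful of key words and stock replies taken from the conversation printed earlier: the program hunts for a key word or phrase, echoes part of the user's sentence back, and falls back on a stock reply when nothing is recognized.

10 REM minimal ELIZA-style sketch: key words, echo, stock fallback
20 REM (illustration only, not the Interface Publications listing)
30 INPUT i$
40 i$=UPPER$(i$)
50 IF i$="BYE" THEN PRINT "BYE BYE FOR NOW.":END
60 k=INSTR(i$,"I AM ")
70 REM echo the rest of an 'I AM ...' sentence back at the user
80 IF k>0 THEN PRINT "HOW LONG HAVE YOU BEEN ";MID$(i$,k+5):GOTO 30
90 IF INSTR(i$,"DREAM")>0 THEN PRINT "WHY DID YOU BRING UP THE SUBJECT OF DREAMS?":GOTO 30
100 IF INSTR(i$,"COMPUTER")>0 THEN PRINT "WHY DO YOU MENTION COMPUTERS?":GOTO 30
110 REM no key word recognised: fall back on a stock reply
120 PRINT "CAN YOU ELABORATE ON THAT?"
130 GOTO 30

Type in a line such as I AM AFRAID OF MICE and the sketch answers HOW LONG HAVE YOU BEEN AFRAID OF MICE, just as in the conversation above; a full version simply carries many more key words, a larger bank of replies, and the pronoun-reversal rules mentioned a moment ago.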

Despite the secretary's familiarity with the program, Weizenbaum noticed that if he walked into the office when she was accessing the program, she became embarrassed, and refused to let him see the printout. Further, when he suggested it would be interesting to hook up a printer to the main body of the computer to record the late night conversations students were having with the program, the suggestion was greeted with horror. It was as though he had suggested a kind of electronic peeping-Tom activity.

Weizenbaum was bothered by how strongly people identified with the program, giving it a personality and sharing their most intimate thoughts with it. He said he had not realised the “powerful delusional thinking” a fairly simple program could create in normal people.

The Russian Connection

Pamela McCorduck, in her splendid book Machines Who Think (San Francisco: W. H. Freeman and Co., 1979), confirms the effect the program can have. She reports that the first time she saw ELIZA up and talking was at the Stanford Computation Center, where an internationally respected computer scientist from the Soviet Union was being shown around.

He sat down at a computer connected to a version of the program written by one of Weizenbaum's colleagues, Kenneth Colby (whom we'll be meeting again shortly), and started typing. McCorduck reports watching in embarrassment as — triggered by a phrase such as TELL ME ABOUT YOUR FAMILY — the scientist proceeded to discuss some personal worries in some depth, becoming oblivious to those around him.

Weizenbaum found that some accesses to the program, via time-sharing terminals scattered around the university, often went on for an hour or more, late into the night. He received telephone calls from people who desperately wanted access to the program for a short time, in order to sort out their problems.

Colby, whom we mentioned a short time ago, had met Weizenbaum some time earlier, at Stanford. Colby — Professor of Psychiatry at UCLA — was interested in artificial intelligence. He thought its findings might possibly lead to new views on human thinking (and Colby hoped to gain new insights into neurotic behaviour through his studies). Before Weizenbaum's original paper on ELIZA appeared, a short note on it was published by Colby in the Journal of Nervous and Mental Disease.

The two men split shortly after this, primarily because Weizenbaum strongly disagreed with Colby's claims that the program could have genuine therapeutic applications, but also because it seemed that Colby did not properly credit Weizenbaum for the original work on ELIZA.

Colby and two colleagues suggested that an improved version of DOCTOR would have genuine therapeutic use. Colby thought it could be made available to mental hospitals which were short of staff, so patients could access the program (via time-sharing systems) on demand. Weizenbaum was horrified. He said he thought it was vital that there be, as a starting point from which one person could assist another in coping with problems, an empathic, ‘fellow-human' recognition of those problems.

Short, Sharp Shocks

Weizenbaum was shocked that even a single practicing psychiatrist could advance the view that the healing process could be replaced purely by mechanical technique. Such a thought had never crossed his mind. Furthermore, even if it could be done, it should not be done. There are some areas where machines should never be allowed to stray, claimed Weizenbaum, even if they have the ability to do so.

Colby was not chastened by Weizenbaum's response. He was, it seems, perfectly happy to consider the possibility of pure technique proving efficacious. Further, he defended his view, saying that only laymen confused psychotherapy with marriage. A professional working relationship between therapist and patient was what mattered, he said.

More to the point, Colby attacked Weizenbaum for the claim that there were areas in which the computer should never be employed. Why not, asked Colby. Just because Weizenbaum says so? Does Weizenbaum believe that helping people by computer is somehow worse than letting them suffer? And should not a therapist explore every possible tool which is available, just in case one of them proves to be genuinely effective?

Colby's view is more or less supported by Carl Sagan, who is quite at peace with the idea of an ELIZA-like program being available — for a few dollars a session — in specially constructed areas, somewhat like telephone booths (Broca's Brain, London: Coronet Books, 1980, p. 300).

And this is where Weizenbaum's third ‘shock' came in. Remember, he had been startled by the identification with and the unequivocal anthropomorphization of the program. Then he was very alarmed at the suggestions that somehow ELIZA could take the place of, or assist, human therapists. His third ‘shock' came from his observation that many people came to believe that somehow the program was important in demonstrating that a real solution to the problems of a machine understanding human language was at hand. He dismissed this idea out of hand. Indeed, in the original paper on the program, Weizenbaum had been at pains to point out that it was impossible to find a general solution to this problem.

I said earlier that I did not agree with everything in Weizenbaum's book Computer Power and Human Reason. One of the points upon which I disagree is the claim that ‘there are some things which should never be done by machines'. John McCarthy (“An Unreasonable Book”, in Three Reviews of J. Weizenbaum's Computer Power and Human Reason, Memo AIM-291, Stanford AI Laboratory, November 1976) advances the view that if there are functions which a computer should not be taught to carry out, these should not be done at all, by a person or a machine.

Others agree. In the book Artificial Reality (Reading, MA: Addison-Wesley Publishing Co., 1983, p. 168), Myron W. Krueger suggests that even if Weizenbaum's horror at the thought of using his program — or a development of it — for genuine therapy was real, such fear was groundless and misplaced.

However, regardless of my views (or others') of Weizenbaum's thesis, and of the value of the book (I've already said I think you should read it, if only to give your own mental mill grist regarding the debate), there is no doubt that ELIZA has proved an extremely entertaining companion. You will soon prove this assertion for yourself.

As well as enjoying the program, you'll also be in a position to judge whether or not it actually suggests that intelligence resides in the machine which is running it. Adrian Berry (in The Super-Intelligent Machine, London: Jonathan Cape Ltd., 1983, p. 63) concludes that ELIZA (and PARRY, a program which is designed to mimic a paranoid patient) is a pretty poor advocate for the possibilities of true artificial intelligence.

You'll find that your own views will sway back and forth as you use the program. When ELIZA produces a particularly inspired or appropriate remark, you'll feel this is sure evidence of intelligence, on at least some level. At other times, you'll discover that ELIZA is great simply for entertainment. Nobody, you claim, could manage to ask such absurd questions (DID YOU COME TO ME BECAUSE I WAS DEAD? one implementation asked Berry, he reports in the book mentioned above) or innocently create such superb non sequiturs as you will certainly find when you run this program on your Amstrad.

The Program

Of course, despite the impression that the program creates when running on your Amstrad, there is no real intelligence in it. It is, instead, basically a sleight-of-hand trick. As we said earlier, the program scans your input for words it can use, and then reflects your words back to you in a way which makes it seem as if the program really is speaking to you. Once you've run it a few times, you'll learn how to trigger the most effective responses. As well, as I pointed out earlier, it makes a good demonstration to run for those who are not used to computers. (But be careful in case some of that ‘powerful delusional thinking' comes into play, and you find one of your friends confiding his or her darkest thoughts to your machine). Now that you know how it works, it is time to hang up your computer's shingle, and go into practice:

Amstrad Omnibus

ELIZA
(c) INTERFACE PUBLICATIONS

★ Author: Martin FAIRBANKS

★ YEAR: 1985
★ LANGUAGE:
★ GENRE: INGAME MODE 1, OTHER GAME, BASIC
★ LICENCE: ???
★ COLLECTION: AMSTRAD OMNIBUS

 

★ AMSTRAD CPC ★ DOWNLOAD ★

Type-in/Listing:
» Eliza (Interface Publications)    ENGLISH    LISTING    DATE: 2024-06-23
DL: 31
TYPE: text
SIZE: 6KB
NOTE:

Cassette dump (original):
» Eliza (Interface Publications)    ENGLISH    DATE: 2012-09-18
DL: 220
TYPE: ZIP
SIZE: 4KB
NOTE:
.HFE: Χ


