
Israeli AI System "Hal" And The Turing Test

Conspiracy_Of_Doves writes: "Hal, the AI creation of Dr. Anat Treister-Goren of Israel, has fooled child language language experts into believing that it is an 18-month-old child. Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years. The CNN.com article is here. Yes, it's named after what you think it's named after, and yes, the article mentions why naming it Hal might not be such a hot idea."
  • Incredible! (Score:4, Funny)

    by FortKnox ( 169099 ) on Monday August 20, 2001 @01:21PM (#2198357) Homepage Journal
    It's just like chatting with an 18 month old child! Doesn't know how to type, read, or write at all!

    Truly an incredible step in toddler AI!
    • Yes, but... (Score:2, Funny)

      by PopeAlien ( 164869 )

      ..Don't forget these are "child language language experts".. That's not just any ordinary language expert - that's a child language language expert, which means they are twice the ordinary child language expert.

      ..So fooling them really is quite the feat..

    • We need to set up a Beowulf Cluster of these! Imagine the possibilities of hundreds of fake 18-month-old children -- IRC would suddenly become a much more enriching experience!
  • by cnkeller ( 181482 ) <cnkeller@@@gmail...com> on Monday August 20, 2001 @01:23PM (#2198373) Homepage
    Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years.

    I know people I work with who still haven't achieved adult-level language skills...

    • Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years.


      I know people I work with who still haven't achieved adult-level language skills...


      You must live in the South. ;-)


      Don't worry. I do too.

    • I know people I work with who still haven't achieved adult-level language skills...

      Yeah? Well I know presidents [bushgrammar.com] who haven't achieved adult-level language skills.
  • True, all the 2001 problems could be avoided, but that's 10 years down the road. ;o) (yes, irony is intended)


    Reminds me of a political party in Canada (NPC) that tried to implement a new method of communication called Newspeak. Now that was ironic. (and very funny)


    Regardless, the fact that it learns like that is incredible. I just wonder whether it will hit some block that isn't foreseen.

    • Reminds me of a political party in Canada (NPC) that tried to implement a new method of communication called Newspeak.

      What you fail to mention is that this political party was born out of a Dungeons and Dragons game.

      Basically, they're Non-Player Characters.

      Which explains a lot about their political strategy (or lack of it).
  • by PoitNarf ( 160194 ) on Monday August 20, 2001 @01:25PM (#2198394)
    "Hi, how are you today?"
    "Poop!"
    "Poop? I don't quite understand what you are trying to say."
    "Pee-pee!"
    "Indeed."

  • Baby Hal? (Score:5, Funny)

    by Dolly_Llama ( 267016 ) on Monday August 20, 2001 @01:29PM (#2198420) Homepage
    Hal, the AI creation of Dr. Anat Treister-Goren of Israel, has fooled child language language experts into believing that it is an 18-month-old child.

    Dave...I have a load in my diaper...Dave...

  • by smack_attack ( 171144 ) on Monday August 20, 2001 @01:29PM (#2198424) Homepage
    When Hal was "born," he was hardwired with nothing more than the letters of the alphabet and a preference for rewards -- a positive outcome -- over punishments -- a negative one.

    [...] Treister-Goren corrects Hal's mistakes in her typewritten conversations with him, an action Hal is programmed to recognise as a punishment and avoids repeating.


    How long until Hal figures out that sending high voltage through the typewriter stops the punishment?
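
    The reward/punishment scheme the article describes is easy to caricature in code. Below is a minimal sketch of that kind of training loop; Ai's actual algorithm is unpublished, so the class, the vocabulary and the weight update here are all invented for illustration:

      import random
      from collections import defaultdict

      # Toy reward/punishment learner: utters words in proportion to
      # learned weights; trainer feedback raises or lowers those weights.
      class ToyLearner:
          def __init__(self, vocabulary):
              self.vocabulary = list(vocabulary)
              self.weights = defaultdict(lambda: 1.0)  # all words start equal
              self.last = None

          def utter(self):
              w = [self.weights[word] for word in self.vocabulary]
              self.last = random.choices(self.vocabulary, weights=w)[0]
              return self.last

          def feedback(self, reward):
              # positive reward reinforces the last utterance; negative punishes
              self.weights[self.last] = max(0.1, self.weights[self.last] + reward)

      learner = ToyLearner(["poop", "mommy", "ball", "park", "banana"])
      for _ in range(200):
          word = learner.utter()
          learner.feedback(-0.5 if word == "poop" else 0.2)  # the "trainer"
      # after training, "poop" has become a rare utterance
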
  • Treister-Goren corrects Hal's mistakes in her typewritten conversations with him, an action Hal is programmed to recognise

    I just thought this was cute, since "recognise" is, at least in the US, considered a variant spelling of "recognize".

    • The -ise verb endings are still common in the British Commonwealth. They are particularly alive in South African and Indian English, but also in Australian, New Zealand and Canadian English.

      They exist because the original -ise verbs originated from French, which spelled them with an 's'. For example "realise" is the traditional spelling of that particular verb, as it derives from the French verb "réaliser". Another example is "paralyse" which derives from French "paralyser", but has become "paralyze" in American English.

  • "2001" (Score:5, Interesting)

    by YIAAL ( 129110 ) on Monday August 20, 2001 @01:34PM (#2198452) Homepage
    Funny how all the cultural fears of technology come from books and movies like Frankenstein, Brave New World, Colossus (remember that one?), and 2001. All of them are fiction, written the way they are to make an interesting story (who would read a story about a man who created a "monster" that was happy, friendly, and harmless, or a computer that worked perfectly and caused no trouble?). Yet in popular discussion, people treat them as real, embodying actual dangers with which we have real experience.

    We need more Artificial Intelligence -- the natural kind is in too short a supply.
    • Re:"2001" (Score:2, Insightful)

      Funny how all the cultural fears of technology come from books and movies like Frankenstein, Brave New World, Colossus, (remember that one?) and 2001

      Probably because science fiction has a funny tendency to become science fact, but here's another piece of sci-fi for you. In Isaac Asimov's robot novels, he predicted that robots would attain superior intelligence, but along with superior intelligence comes superior morality. There are tons of stories like what you are describing. True, a story is boring unless there is some kind of trouble, but in those stories, the happy-friendly-harmless-monster / perfectly-working-computer simply isn't the source of the trouble.

    • who would read a story about a man who created a "monster" that was happy, friendly, and harmless, or a computer that worked perfectly and caused no trouble?

      Actually, the scariest and likeliest tale about the future of AI, Jack Williamson's The Humanoids [umich.edu], fits your description exactly. The artificially intelligent robots in this novel are so helpful, so solicitous, and so efficient that they quickly reduce humanity to a state of enforced safe docility. This novel gives me chills just because it gets more plausible every day.
    • You know how you tell people that intellectual property is broken, and letting corporations own ideas can cause tyranny, and they just give you a blank stare?

      That's because none of this is part of their experience. So, to get through to most people, you can't just lay out the arguments in syllogism form; you have to "tell a story". And this can be a more or less literal strategy for persuasion. People tend to dismiss Chicken Little pronouncements until you make them seem real through a story.

      A related anecdote that I found amusing but insightful: In the Times of London a few years back, someone was editorializing about how Ellen coming out on her show ushered in an increasing acceptance of homosexuals in society. The quote, paraphrased, was this: "Americans never believe anything until it's been fictionally validated on television".

      Bryguy
    • It's not funny at all, it's the entire point of science fiction.

      Science fiction allows an idea to be followed through hypothetically. It may be an obvious science topic, or something a little more subtle, more social.

      You may see what problems may arise, and how they might be handled, how they should not be handled... the dangers involved, and what may bring about the dangers in the first place.

      It can also expose actual science flaws (e.g. Jurassic Park).

      Or just to see what it might be like, as an alternative (Imagine, written by John Lennon, is what I would call "social science fiction")... in novel form, something like Ursula K. Le Guin's The Dispossessed.

      Science fiction is also more accessible to the masses. The Matrix, 2001, Gattaca, Stargate... they introduce scary topics as a form of entertainment. Thought control, human slavery by machines, machine independence, artificial intelligence, genetic bigotry, matter transmission... things a lot of people would rather not think about, as it's too scary.

      Hypothetical exploration is good for humans... it reduces FUD; promotes ideas, preparation and decision making and it's also a good way to test an idea and see how well it holds up... without hurting anyone (not including bad writing)

      Joe Blow, who may not think about science or technology very much, may have quite strong feelings about not wanting a Matrix running his life, a HAL situation, or being thought of as genetically inferior... science fiction can help focus opinions on things that may become important... not just science and technology, but social issues too.

      Which is why I'd always choose science fiction over Adam Sandler...

      </babble>
      <coffee>
  • by Sebastopol ( 189276 ) on Monday August 20, 2001 @01:35PM (#2198459) Homepage
    neural nets are designed to simulate how the brain works, so it makes sense that they be trained the same way. consider this: perhaps they can absorb information faster than a human brain, but who could deliver interactive teaching at that speed?

    now consider:

    today (2001): human trains AI, limited by wetware bandwidth

    ...20 years from now: AI trains AI, limited by neural net bandwidth.

    result: all 20 years of training one AI will be compressed into a fraction of a second of training time for the next generation (rough arithmetic sketched below)

    this is the manifestation of Raymond Kurzweil and James Gleick's observations: the acceleration of everything, the exponential growth of compute power.

    hang on for the ride, kids. it's gonna get weird. i bet we see AI legislation in the next 10 years.

    we will be the 'gods' (as in creators) of the new race that will inhabit the earth.
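
    The compression claim is just a bandwidth ratio, and the arithmetic is easy to sketch if you grant some hypothetical numbers (both rates below are illustrative guesses, not measurements):

      # Back-of-the-envelope for the wetware-bandwidth argument.
      human_words_per_sec = 2                      # rough conversational rate
      words_in_20_years = human_words_per_sec * 60 * 60 * 16 * 365 * 20
      machine_words_per_sec = 1e9                  # hypothetical AI-to-AI link
      print(words_in_20_years / machine_words_per_sec)  # ~0.84 seconds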

    • by ryants ( 310088 ) on Monday August 20, 2001 @01:41PM (#2198519)
      neural nets are designed to simulate how the brain works, so it makes sense that they be trained the same way
      Actually, neural nets don't simulate; they mimic at some crude level.

      But just like mimicking what a bird does (ie tape feathers to your arms and flap) isn't going to get you off the ground, mimicking the human brain will probably only get us so far.

      I believe the real breakthroughs will come more or less as they did in aerodynamics: when we understood the principles of flight and stopped mimicking birds, we could fly. When we understand the principles of intelligence and stop mimicking brains, we might be on to something.

      • Actually, neural nets don't simulate, they mimic at some crude level.

        How are you differentiating simulate and mimic?

        mimicking the human brain will probably only get us so far.

        agreed. it reminds me of those old b&w home movies of people "taping feathers to their wings" and trying to run off of small hills, despite the fact that both Newton's and Bernoulli's theories of aerodynamics had been around for ages.

        Minski's "Society of Mind" seems like a plausable approach for creating a synthetic consciousness, but it may just be the equivalent of DaVinci's drawings of a "helipcopter": a handcranked cork-screw sail on a wooden platform.

        When we understand the principles of intelligence and stop mimicking brains, we might be on to something

        That's what I meant by "man's image". Can there be a principle of intelligence that doesn't resemble the intelligence that formulated it? And if there is, what would the machine look like that realized it? It's probably agreed that the machine wouldn't be a classical computer. Perhaps something that required a hot cup of tea....

        I usually stop thinking at this point, it all becomes mental gymnastics, and I'm out of shape.

        • How are you differentiating simulate and mimic?

          Simulate means "to assume the appearance of, without the reality".

          Mimic means "to imitate". Mimic also has slightly more negative connotations.

          We're imitating, and not even close to "assuming the appearance of".

      • Well put. With no working theory of consciousness and the ubiquitous, over-rated Turing test, this whole project sounds more like creating an electronic con man than an intelligent machine.

        If you break down the Turing test, it really just wants to take advantage of our linguistic-psychological habits, idioms, and expectations to fool a human into thinking something false. Neat trick if you can pull it off -- imagine higher quality AOLiza comedy -- but it simply isn't intelligent in any sense of the word.

        An intelligent machine wouldn't need to be programmed to fool humans. Its simulation of intelligence/consciousness would be obvious, an after-effect of being intelligent. Definitely a cart-before-the-horse problem.

    • hang on for the ride, kids. it's gonna get weird. i bet we see AI legislation in the next 10 years.

      That's what they were saying 10 years ago. Better projects than this one have failed to pan out in any meaningful way; I guess CNN was still looking for A.I. stories. I personally don't see any of our current technology and techniques delivering A.I.; if it comes to be, it will be due to something that hasn't even been discovered yet.

    • But then we have the Butlerian Jihad, where we overthrow our machine masters and learn the follies of creating machines in the image of man's mind. After that, we'll have to rely on the Navigators' Guild and their heavy reliance on the spice melange (found only on the planet Arrakis) for interstellar transportation.
    • today (2001): human trains AI, limited by wetware bandwidth


      Wetware bandwidth, multiplied by the number of humans performing the training. Why don't they open-source it and let everyone in the world have the chance to train it? Much faster, much more democratic and therefore representative of what people really consider to be "normal" intelligent behavior.

  • by ethereal ( 13958 ) on Monday August 20, 2001 @01:36PM (#2198462) Journal
    Yes, it's named after what you think it's named after, and yes, the article mentions why naming it Hal might not be such a hot idea."

    I don't know, it seems to fit if you ask me. HAL was very childlike in the movie, especially in regards to his "dad" Dr. Chandra (well, in the sequel at least), and only ended up hurting people because he was lied to and thought there was no other way. How is that any different from a human child who is abused and as a result doesn't value human lives at all?

    I don't think they should have named it HAL, if only because it's going to get boring after every single AI project is named HAL; but naming it after the famous movie star of the same name wasn't a bad idea in my opinion. As long as you treat it right and don't give it control over vital life support functionality, you should be just fine :)

  • variability (Score:2, Insightful)

    From the article:
    "Some kids are more predictable than others. He would be the surprising type"

    Being the "surprising type" with a vocabulary of 200 words probably indicates that the program is not particularly good. The range of possible behaviors is pretty small for such a system. As the vocabulary and complexity of possible utterances increases, it is likely that the "surprising" aspect of Hal is going to move into "bizarre" territory.

    As Chomsky pointed out [soton.ac.uk], relying strictly on positive and negative feedback is not enough to develop language...

  • I hear that Dr. Forbin is being trained by a child-like computer that destroys cities when it does not get its way.

    Wonder if they are sharing info? Better not cut the data connections, they could get really mad!
  • It must be time for this guy to apply for some grants. This is so far from any sort of language "breakthrough" as to be a complete joke. You could probably output random sentences with that 200 word vocabulary and fool "experts". 18 month old children don't exactly have the greatest conversational skills.

    Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years.

    *cough*bullshit*cough*. Call me when you have any actual *theory* on adult-level language skills, much less an implementation.

    I firmly believe we're at least 100 years away from a Turing-test level of language processing. And no, Moore's Law does nothing for this problem. We are currently at Aristotle's level of knowledge, trying to work out relativity.

    • You could probably output random sentences with that 200 word vocabulary and fool "experts". 18 month old children don't exactly have the greatest conversational skills.

      Maybe you should actually study the subject before posting. The experts evaluating the conversations are most likely linguists or psychologists who specialize in early language development, not a collection of morons who got degrees from Sally Struthers. When children learn languages, they don't just spit out random collections of words (although it may appear that way to someone who doesn't really pay attention to what's going on). There are very distinctive patterns to the way words are combined, the way verbs are conjugated, etc. There is a huge difference between the way an 18 month old speaks and the way a 22 month old speaks. People who have devoted their lives to the study of language development know this. Armchair scientists on slashdot do not.

      Call me when you have any actual *theory* on adult-level language skills, much less an implementation.

      Ever hear of Chomsky? How about Pinker? What about the entire field of linguistics? There is no shortage of theories on how language works. Cognitive scientists are also starting to develop an understanding of how children pick it up. This company is obviously exaggerating the capabilities of their program, but I'm guessing that they know a lot more about the subject than you do.

      • ...but I'm guessing that they know a lot more about the subject than you do.

        I'm certain he does. On the other hand, many AI "scientists" have made a lot of claims over the years with their superior knowledge of the various details.

        I, on the other hand, have enough knowledge and experience to see that the entire field doesn't have the slightest clue how full-blown adult-level cognitive and language abilities work. You don't have to be an expert in architecture to see that a mud hut is not going to scale to the Empire State Building.

        What about the entire field of linguistics? There is no shortage of theories on how language works. Cognitive scientists are also starting to develop an understanding of how children pick it up.

        To paraphrase another poster, which I liked: Just because you have a theory that gluing feathers on your arms and flapping is the basis of flight doesn't mean you have a theory of aerodynamics.

        • I, on the other hand, have enough knowledge and experience to see that the entire field doesn't have the slightest clue how full-blown adult-level cognitive and language abilities work. You don't have to be an expert in architecture to see that a mud hut is not going to scale to the Empire State Building.

          Knowledge and experience in what? So far this looks more like Joe-Bob sitting on his front porch critiquing the work of Richard Feynman while scarfing down a bucket of pork rinds than a well reasoned response to their claims. If you have objections to a specific theory of language or can point out problems with experimental methods, then please point them out. The scientists (yes, they are real scientists) doing research in this area are not just pulling things out of thin air. They proceed just as other scientists do, developing theories and then developing experiments to test those theories. Some experiments involve interactions with real children and adults, and others involve computer simulations of brain functions. They are far from developing a complete explanation of language and cognition in general, but much more progress has been made than you give them credit for.

          Two of the biggest mysteries of language acquisition in children have been solved in the past decade. The first is the problem of why children (in all languages) make distinctive grammatical errors at certain stages of development (error-milestones). These kinds of errors are frequently used to delineate the stages of language acquisition. It turns out that the errors are a side effect of the way semantic maps (a type of neural network) evolve over time. They tend to ignore exceptions to a rule, then generalize the exceptions (replacing the original rule), and then put the real rule back in place with the exceptions handled nicely as exceptions. The second important discovery is how children are able to pick up language so quickly. That is a result of only a fraction of the axons in the brain being myelinated at birth. Neurons are brought "on-line" over a period of years, and this is extremely important to learning. Again, neural network simulations were vital for this discovery.
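
          The overregularization milestone described above ("went" -> "goed" -> "went") can be caricatured in a few lines. This is only a cartoon of the rule-versus-rote competition, not one of the actual network models from the literature; all weights and inputs are invented:

            # Toy U-shaped learning curve: a rote exception ("went") is
            # swamped once the regular "-ed" rule gains strength, then
            # recovered as the exception is re-heard.
            rule_strength = 0.0            # confidence in "add -ed"
            rote_strength = {"go": 1.0}    # memorized exception: go -> went

            def past_tense(verb):
                if verb in rote_strength and rote_strength[verb] >= rule_strength:
                    return "went"
                return verb + "ed"

            for heard in ["walked"] * 5 + ["went"] * 3:
                if heard.endswith("ed"):
                    rule_strength += 0.3         # regular forms feed the rule
                else:
                    rote_strength["go"] += 0.6   # "went" refreshes the rote form
                print(past_tense("go"))
            # prints: went went went goed goed went went went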

  • by BillyGoatThree ( 324006 ) on Monday August 20, 2001 @01:41PM (#2198524)
    "Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years."

    This guy has obviously never heard the Minsky Theorem: "Full scale AI is always 20 years down the road."

    In any case, call us when it is actually working, not when you've fooled "child language experts". I could fool experts right now with a simple cassette tape, a LOT of taped 18-month-old comments and a quick hand with a playback button. That doesn't mean my stereo is going to be human in 10 years.

    I am 99% sure we will eventually achieve "full AI". But I'm 100% sure it won't be via vague claims about unguessable future performance. In other words, show me the money.
    • I could fool experts right now with a simple cassette tape, a LOT of taped 18-month-old comments and a quick hand with a playback button.

      You're absolutely right, that cannot possibly be construed as evidence that you possess any kind of intelligence at all.

      Okay, I'm sorry. I'm really, really sorry. That was excessively harsh. I didn't mean to attack you personally. Unfortunately, I just could not resist!

      In fairness, you have a valid point. Your example is a variant of the "Chinese room" argument that was once put forward by John Searle.

      He compared a computer to a person in a closed room into which questions, in Chinese, are being passed. The person in the room, who knows no Chinese, follows a book of very complex instructions in order to formulate a response.

      Searle claims that despite the fact that the responses that come out of the "Chinese Room" make perfect sense to the Chinese speaker who passed in his question, the person within the room has no notion of the meaning of his response, let alone the question.

      Searle makes the point that the computer is a very complex machine that blindly follows a set of fixed instructions, and so, cannot possess real understanding. Whether you agree with his argument is, I think, still a matter of philosophical position.

      Personally, I don't agree. I think that the entire room - the person, plus the book of instructions - would possess an understanding of Chinese that transcends the sum of its parts.

      Anyways, sorry again for my harshness. You just left yourself open in too inviting a way!

      • "Your example is a variant of the "Chinese room" argument that was once put forward by John Searle."

        Say that to my face some time. Searle and I are so far apart it isn't even funny.

        My example was not that "you can't tell from the outside what does and what does not possess intelligence". My point was "the largely-random motivation and very small vocabulary of an 18-month-old is a very slim hook on which to hang a hat." In particular, it is easily simulated by a system MUCH simpler than a Chinese Room.
  • Jason Hutchens, who was quoted in the article, wrote MegaHAL [sourceforge.net], which won the '96 Loebner Award. It's a fun program to play around with, especially if you "prime" it with different text files (e.g., Usenet posts, memos from marketing, pr0n, etc.).

    "IT TAKES 47 PANCAKES TO SHINGLE A DOG." -- MegaHAL

    k.
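
    For the curious, MegaHAL's trick is documented in Hutchens' own writeup: two 4th-order Markov models, one running forward and one backward, seeded with keywords from the user's input. Here is a much cruder, word-level sketch of the same family of trick (the toy corpus is invented):

      import random
      from collections import defaultdict

      # Crude bigram Markov babbler in the spirit of MegaHAL.
      def train(text):
          model = defaultdict(list)
          words = text.split()
          for a, b in zip(words, words[1:]):
              model[a].append(b)
          return model

      def babble(model, length=12):
          word = random.choice(list(model))
          out = [word]
          for _ in range(length - 1):
              followers = model.get(word)
              if not followers:
                  break
              word = random.choice(followers)
              out.append(word)
          return " ".join(out)

      model = train("it takes 47 pancakes to shingle a dog and a dog takes a nap")
      print(babble(model))  # e.g. "a dog takes a nap" -- garbage in, poetry out
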
    • Re:MegaHAL (Score:2, Informative)

      by Protohiro ( 260372 )

      Interestingly enough, from Jason Hutchens' website [amristar.com.au]:

      I'm currently the Chief Scientist here at Ai. Working here is like being a character in a Neal Stephenson novel. We're making child machines which can learn language in the same way as human infants do. We're based in a huge mansion in Israel. We watch movies in the atomic bomb shelter. We jack in to the network using wireless technology. It's cool, man.

      So, in fact, the creator of MegaHAL is the brains behind this outfit!

  • I'm pretty much willing to accept the validity of the Turing Test, but I'm not sure if such a simple methodology is going to scale well. At some point, to hold your own in a conversation, you need to develop a structure to represent the outside world, and I'm not sure if a straightforward neural net implementation will get you there; admittedly it depends on how complex a neural net system you introduce.
  • by doorbot.com ( 184378 ) on Monday August 20, 2001 @01:43PM (#2198540) Journal
    ...a new generation of SPAM generation.

    So is this the first instance of giving a child an IP address?
  • by ch-chuck ( 9622 ) on Monday August 20, 2001 @01:46PM (#2198555) Homepage
    here's [modernhumorist.com] a funny one...

  • by DeadVulcan ( 182139 ) <dead.vulcan@pob o x .com> on Monday August 20, 2001 @01:46PM (#2198556)

    The fact that the Turing Test is probably still the only widely recognized test for artificial intelligence says more about our pathetic understanding of the nature of intelligence than the validity and usefulness of the test.

    After all, as any con artist or magician will tell you, it's really not that hard to fool people. Also, remember that on some occasions, some human beings will actually fail the Turing test! That must be so humiliating...

    I freely admit I don't have anything better to offer, but I just wanted to point out that the Turing test is a pretty awful measurement, when you think about it.

    If you hate poorly defined software projects... can you imagine being handed the Turing test as a feature spec?

    • "The fact that the Turing Test is probably still the only widely recognized test for artificial intelligence says more about our pathetic understanding of the nature of intelligence than the validity and usefulness of the test."

      I suspect that you may be confusing the Turing test with the Loebner test. The Turing test is more or less an empirical definition of intelligence. It relies on an entity being able to perform all conversational tasks that we would expect a human to perform.

      The Loebner test, on the other hand, is a yearly spectacle where a number of chat bots attempt to fool a number of judges with varying degrees of competence.

    • For one, as an above poster mentioned, a con artist or a magician needs to be intelligent in order to fool you.



      Also, it isn't the rigamarole of the test itself that is important, it is the idea behind it. Basically, if it looks like a duck, walks like a duck... etc., then we must conclude it is a duck.



      If there is no way to discern an AI from a human then we must treat them the same. I think the Turing test is really a great example of pragmatics. Granted there is no set procedure to test the computer, if there were we could specifically program around that set procedure. The test needs to be adaptive; however the basic premise would still be the same. If the AI seems intelligent to everyone, then it IS intelligent to everyone (until someone else comes up with a way to prove that it isn't).

    • The Turing test is pretty excellent.

      Imagine this: Turing says that if a machine wins the game 50% of the time, then it is indistinguishable from a human and should be considered intelligent. A really smart person (like a con artist) might be able to think of special responses that make the tester realize that he's speaking to a human.

      But a computer that is arbitrarily powerful would be able to model the tester's mind more accurately, and thus could win the Turing test considerably more often than 50% of the time. How do you like that?

      When do you grant it human rights?
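
      Whether a machine "wins 50% of the time" is itself a statistical claim, and with a handful of judges the number is nearly meaningless. A quick binomial check (ordinary probability arithmetic; nothing here comes from Turing's paper):

        from math import comb

        # P(k or more wins out of n trials) when the true win rate is p.
        def p_at_least(k, n, p):
            return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(k, n + 1))

        # With only 10 judges, a machine that really fools people just 30%
        # of the time still scores 5/10 or better about 15% of the time...
        print(p_at_least(5, 10, 0.3))    # ~0.15
        # ...so it takes on the order of 100 judges before 50% means much.
        print(p_at_least(50, 100, 0.3))  # tiny (on the order of 1e-5)
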
  • Kenneth Colby and Joseph Weizenbaum amazed the academic world by demonstrating an electric typewriter capable of fooling leading psychiatrists into believing it was a human patient suffering from infantile autism [harvard.edu].

    Oh dear, everything repeats....
    • At first, it seems impressive that this computer can fool child language experts. But the article you linked describes a similar experiment that tested whether psychiatrists could tell the difference between a paranoid patient and a computer:

      PARRY was designed to engage in a dialogue in the role of a paranoid patient. The program was perhaps the first to be subject to an actual controlled experiment modeled on the Turing test [5], in which psychiatrists were given transcripts of electronically mediated dialogues with PARRY and with actual paranoids and were asked to pick out the simulated patient from the real. The fact that the expert judges, the psychiatrists, did no better than chance ...
  • All we need to do is feed Hal's responses back into Eliza, and Eliza's responses back into Hal, and train Hal to be a perverted psychiatrist a lot more quickly than these researchers are doing the job. :)
  • by z4ce ( 67861 ) on Monday August 20, 2001 @01:50PM (#2198584)
    I have _personally_ seen Eliza pass the Turing test. I set up Eliza on my ICQ UIN, and a friend of mine in crisis messaged me and had a 45-minute conversation with Eliza (not such a good thing). By the end of the conversation, my friend was convinced that he was talking to a hacker who had broken into my account. Oh, what a mess that was. He had called his ex-girlfriend's parents and told them her new boyfriend had broken into my account. I didn't have any idea a bot could be so convincing. It had some flat-out amazing responses to his questions and comments. If I had never seen an Eliza conversation before, I would have probably thought it was a person too. But like I said... setting up such a bot on your ICQ account is not recommended. They will pass the Turing test, and that's not necessarily such a good thing... :)

    To see many such logs, go to www.google.com and do a search for "aoliza" or even "eliza chat" -- you'll find all sorts of hilarious conversations.
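
    For anyone who hasn't looked under Eliza's hood: the whole trick is keyword rules plus pronoun reflection. A bare-bones sketch of the pattern (the rules below are made up for illustration, not Weizenbaum's original script):

      import random
      import re

      # Minimal Eliza-style responder: match a keyword rule, "reflect"
      # pronouns in the captured fragment, fill a canned template.
      REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                     "you": "I", "your": "my", "are": "am"}

      RULES = [
          (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
          (r"i am (.*)", ["Why do you say you are {0}?"]),
          (r".*\b(mother|father|family)\b.*", ["Tell me more about your family."]),
          (r"(.*)", ["Please, go on.", "I see.", "What does that suggest to you?"]),
      ]

      def reflect(fragment):
          return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

      def respond(line):
          for pattern, templates in RULES:
              m = re.match(pattern, line.lower())
              if m:
                  groups = [reflect(g) for g in m.groups()]
                  return random.choice(templates).format(*groups)

      print(respond("I feel like nobody listens to me"))
      # -> "Why do you feel like nobody listens to you?"
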
    • Under the Turing test, testers are supposed to be suspicious. Your friend was not. Furthermore, the fact that he first knew it was not you, and second believed it to be someone who would have reason to fuck around with him (i.e., not respond as a normal human would), strongly indicates that he would have realized the respondent was a machine if he had been informed of the possibility.
  • Fake philosophers (Score:3, Interesting)

    by BeBoxer ( 14448 ) on Monday August 20, 2001 @01:52PM (#2198600)
    From the article:

    If, or when one does, it will open a Pandora's box of ethical and philosophical questions. After all, if a computer is perceived to be as intelligent as a person, what is the difference between a smart computer and a human being?

    and

    "All of us strongly believe that machines are the next step in evolution," said Dunietz. "The distinction between real flesh and blood, old-fashioned and the new kind, will start to blur."

    If these researchers get to the point where they can't see a moral difference between killing a person and turning off a computer, they need to get out of the lab more. What next, natural rights for computer programs? That's like inventing television, and then being unwilling to turn off the TV for fear of killing the little people inside. Rubbish.
    • Re:Fake philosophers (Score:2, Interesting)

      by gmarceau ( 119282 )
      Go out and rent Blade Runner (or download it, according to the previous story). It gives an interpretation of the colors such a world would have - with a definitively human touch.
    • by Knos ( 30446 )
      Can you elaborate on how radically different a common life form, engineered by thousands of centuries of natural selection, is from a system engineered by humans and/or other systems, presenting all the characteristics of living animals?

      is it just a belief that if we create something, we are automatically superior to it? (then why should children be anything but slaves?)
    • by dissy ( 172727 )
      If one thinks of the human body as just the machine that it is, this only leaves the mind.
      Humans are self-aware, and that's why most people consider turning off a human morally wrong.

      What if by AI a computer could actually be self-aware just as much as a human?
      That is when that line becomes important.

      Until then, a computer could at most be compared to an animal or insect, something with hardwired responses and no real thought.
      (The best that comes to mind is a gnat, cockroach, or other similar bug; I'm sure you get the idea.)

      Then again, I would have a pretty hard time personally just turning off my pet dog, as much as it sounds like these people don't want to turn off a machine...

      just something to think about.
      -- Jon
  • It is a fallacy to assume that if you can mimic a three-year-old brain in two years, you can duplicate an adult human brain in ten years.

    Even so, these Turing tests aren't really accurate. The judges often mistake a computer for a person, and vice-versa, just by their nature of not really paying attention and not knowing what to look for.
  • This is remarkably similar to my own project to create an AI with the intelligence of a ten year old script kiddie. In the true American fashion, I am planning on letting the internet raise him. I will give him a slashdot account, and let everyone else do the teaching for me. His reward system is simply: -10 offtopic, +1 flamebait, +5 troll, +10 interesting, +15 insightful. When he starts posting coherently in ten years, you'll be the first to know.
  • ...is indistinguishable from a rigged demo.


    Who are the "experts" they claim to have fooled? Where are the transcripts of the session? Where is the web interface to the program? I've seen enough similar claims to bet that it's monumentally disappointing.


    AI is full of boastful claims. This article is full of them, but practically devoid of any useful details.

  • that the idea of teaching a "child" system has been used for AI research.


    Cog anybody? [mit.edu]

  • HAL's only failure (Score:3, Insightful)

    by firewort ( 180062 ) on Monday August 20, 2001 @02:16PM (#2198775)
    The only failure with HAL was that Dr. Chandra forgot to teach him that murder is far worse than lying.

    HAL understood that they were both bad, but had no values associated with either. Once HAL had lied, it was equally okay to commit murder.

    Presumably, Dr. Goren will take this under consideration.

    Also, I hope they realise that in ten years, they won't have an adult. They'll have a well-spoken, knowledgeable ten year old. At this point it's worth examining the differences between a ten year old and an adult.

    Knowledge, experience, maturity, sense of responsibility. Can anyone come up with any others?
  • I want to play with the source files.
    -CrackElf
  • I'm having this weird image of a sysadmin querying about child processes and receiving the answer: "Oy, oy, oy!"
  • I hope they have some sort of forcible input device to override his overrides... once he gets to the mental age of 15 he's going to start ignoring mommy's keyboard.
  • People always make an incorrect connection between true conversational AI (i.e. the Turing test) and the typical "agent" that will do things like book airline tickets for you. The scope difference between the two is amazing. If I can assume that I will always be giving the computer commands (aka imperative statements), then I can use this knowledge to greatly simplify the parsing. It really only takes a moderate grammar, a good thesaurus and an extensive knowledge base (that last part being the trick). What do I need to know to book a ticket? A concept of location and destination, a notion of limited resources (plane seats) and schedule -- a sense of time. Fine. (A toy sketch of this follows below.)

    However, true conversational AI I think will elude us for a long, long time, because there is so much that goes into it that the computer will never be taught, and never pick up on its own. A computer has no senses equivalent to ours and therefore will have serious problems with statements like "Hot today, isn't it?" (Ok, that one could be done with a temperature probe...:)) I've often pondered the approach of teaching a computer the same way children are taught, such as by reading it kids' books. But what about the classics like "See Jack. See Jack run. Run, Jack, run." The computer can't see Jack, nor can it associate that rapidly moving your feet equals running, unless you hardwire all that stuff in.

    "Book me a seat to Japan next month."

    "DONE."

    "Nice place, Japan. Ever been?"

    "ERROR -- WHAT?"

    "Have you ever been to Japan?"

    "ERROR -- WHAT?"

    Of course, there is also not much use for a true conversational AI in the "agent" world. You start to get into Eliza/Weizenbaum territory when you offer things like "Have the psych patient talk to a computer instead of a real person!" I suppose it's possible that could happen someday. But I don't need it to pass the Turing Test in order to book airline tickets.
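
    The imperative-only shortcut mentioned above is easy to demonstrate: assume every utterance is a command and a couple of patterns go a long way, while anything conversational falls straight through. A toy sketch (the grammar and city list are invented):

      import re

      # Toy imperative-only "agent": every input is assumed to be a
      # booking command, so a tiny grammar suffices.
      KNOWN_PLACES = {"japan", "paris", "london"}

      def handle(utterance):
          m = re.match(r"book (?:me )?a (?:seat|ticket|flight) to (\w+)(?: (.+))?",
                       utterance.lower())
          if m and m.group(1) in KNOWN_PLACES:
              when = m.group(2) or "the next available date"
              return f"DONE. Booked: {m.group(1).title()}, {when}."
          return "ERROR -- WHAT?"

      print(handle("Book me a seat to Japan next month"))
      # -> DONE. Booked: Japan, next month.
      print(handle("Nice place, Japan. Ever been?"))
      # -> ERROR -- WHAT?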

  • by RobertFisher ( 21116 ) on Monday August 20, 2001 @02:57PM (#2199046) Journal
    The description of the researchers at Ai slowly entering in thousands of facts such as "a table has four legs" sounds extremely similar to Lenat's Cyc [cyc.com] project. Even the timescales (10 years in both cases) sound quite similar.

    Given that the Cyc project has apparently failed to live up to its original claim of producing genuine childlike intelligence by slowly building up all of the information a child has, and has since been spun off into a commercial product, why should one believe Ai will fare any better? How do their approaches differ? It seems particularly problematic for Ai, as a company, that Cyc has released their OpenCyc project to the community.

    Bob
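
    Hand-entering facts like "a table has four legs" reduces, at its skeleton, to asserting and querying triples. The sketch below shows only that skeleton; Cyc's real machinery (CycL, microtheories, an inference engine) is vastly richer:

      # Minimal triple store: hand-asserted facts, pattern queries.
      facts = set()

      def assert_fact(subject, relation, obj):
          facts.add((subject, relation, obj))

      def query(subject=None, relation=None, obj=None):
          return [f for f in facts
                  if (subject is None or f[0] == subject)
                  and (relation is None or f[1] == relation)
                  and (obj is None or f[2] == obj)]

      assert_fact("table", "has-legs", 4)
      assert_fact("monkey", "likes", "banana")
      print(query(subject="table"))  # [('table', 'has-legs', 4)]
      # The hard part isn't the data structure; it's that a child gets the
      # equivalent of millions of such facts for free, with context attached.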

  • hmm... (Score:3, Funny)

    by Mike1024 ( 184871 ) on Monday August 20, 2001 @03:33PM (#2199242)
    Hey,

    You will not need a mouse or keyboard to operate the computer as it will function when you converse with it.

    "It is going to be the next user interface, the last user interface," Dunietz said, explaining that it will replace the mouse.


    Me: Computer, play Quake for me.
    Computer: Yes, master.

    The firm's philosophy is simple. If it looks intelligent and it sounds intelligence, then it must be intelligent.

    Maybe they could design a context-sensitive spellchecker? One that would highlight terms like "it sounds intelligence".

    Michael
  • From the article:

    "Ball now park mommy," Hal tells Treister-Goren, then asks her to pack bananas for a trip to the park, adding that "monkeys like bananas," a detail he picked up from a story on animals in a safari park.

    So... if Hal is reading stories (or having them read to it), how long before it watches 2001 (or reads the novel)? By that point, will it react to the fact that it's named after a murderous fictional AI? And what kind of reaction will that be?

    Will it tell its researchers, "You know, I just don't want to talk about it," and then give them the silent treatment until they apologize? Will it laugh knowingly at the irony? Either way, it's a moment to watch for. ;)


  • I wonder if this program addresses someone on the programming team as its mother, and if so, whether the program has voice capabilities. If the answers to both of these questions are yes, then (just like in A.I.) would it be possible at some point to have electronic kids, the equivalent of Tamagotchi toys?

    Now, that could be used as a real deterrent for some people from having kids :)
  • The recent Toward a Science of Consciousness [ida.his.se] conference in Sweden had strong tracks in both quantum theory (which is in part proposed to explain why consciousness is not like a Turing machine -- but I'm no physicist) and AI. The AI side, while presented by some obviously fine people, was disturbing because most of them seemed to agree that if you had a bus full of AIs collide with a bus full of people, and limited resources to devote to saving people and AIs, you should save some of the AIs ahead of the people.

    Since AIs will be expensive machines representing vast corporate investments, one can easily imagine pressure on legislatures to mandate saving the AIs ahead of (some) people, on the excuse that they'd passed the Turing test and we had equal ethical obligations to them, and similar clever-seeming arguments. Beware any argument to give machines rights, because it may be the slipperiest slope towards losing our own that the human imagination has yet invented. The oh-so-charming AI researchers are setting up to provide ideological cover to some really evil shit.

    Swedish national radio reported from the conference that true AI is "just around the corner." The public is all juiced to receive this nonsense favorably. Can you imagine some rich guy's 'AI-enhanced' car collides with yours, and the emergency crew saves his car first while you bleed to death? We're not far from that; we're an infinite distance from anything like true AI, but we're not far from that at all.
