Technology

Israeli AI System "Hal" And The Turing Test 447

Conspiracy_Of_Doves writes: "Hal, the AI creation of Dr. Anat Treister-Goren of Israel, has fooled child language experts into believing that it is an 18-month-old child. Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years. The CNN.com article is here. Yes, it's named after what you think it's named after, and yes, the article mentions why naming it Hal might not be such a hot idea."
  • FP? (Score:0, Interesting)

    by Anonymous Coward on Monday August 20, 2001 @02:21PM (#2198356)
    FP
  • "2001" (Score:5, Interesting)

    by YIAAL ( 129110 ) on Monday August 20, 2001 @02:34PM (#2198452) Homepage
    Funny how all the cultural fears of technology come from books and movies like Frankenstein, Brave New World, Colossus (remember that one?), and 2001. All of them are fiction, written the way they are to make an interesting story (who would read a story about a man who created a "monster" that was happy, friendly, and harmless, or a computer that worked perfectly and caused no trouble?). Yet in popular discussion, people treat them as real, as embodying actual dangers with which we have real experience.

    We need more Artificial Intelligence -- the natural kind is in too short supply.
  • by z4ce ( 67861 ) on Monday August 20, 2001 @02:50PM (#2198584)
    I have _personally_ seen Eliza pass the Turing test. I set up Eliza on my ICQ UIN, and one of my friends, who was in crisis, messaged me and had a 45-minute conversation with Eliza (not such a good thing). By the end of the conversation, my friend was convinced that he was talking to a hacker who had broken into my account. Oh, what a mess that was. He called his ex-girlfriend's parents and told them that her new boyfriend had broken into my account. I had no idea a bot could be so convincing. It had some flat-out amazing responses to his questions and comments. If I had never seen an Eliza conversation before, I would probably have thought it was a person too. But like I said, setting up such a bot on your ICQ account is not recommended. They will pass the Turing test, and that's not necessarily such a good thing... :)

    To see many such logs, go to www.google.com and do a search for "aoliza" or even "eliza chat"; you'll find all sorts of hilarious conversations.
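
    For the curious, the trick behind Eliza-style bots is nothing more than regular-expression pattern matching plus pronoun "reflection." Here is a minimal sketch of that idea; the rules and wording are invented for illustration and aren't taken from any particular Eliza implementation:

        import re
        import random

        # Swap first- and second-person words so the bot can echo the user's phrasing back.
        REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                       "you": "I", "your": "my", "yours": "mine"}

        # (regex, candidate replies); {0} is filled with the reflected capture group.
        RULES = [
            (r"i need (.*)",  ["Why do you need {0}?", "Would it really help you to get {0}?"]),
            (r"i am (.*)",    ["How long have you been {0}?", "Why do you think you are {0}?"]),
            (r"because (.*)", ["Is that the real reason?"]),
            (r"(.*)",         ["Please tell me more.", "How does that make you feel?"]),
        ]

        def reflect(fragment):
            return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

        def respond(message):
            for pattern, replies in RULES:
                match = re.match(pattern, message.strip(), re.IGNORECASE)
                if match:
                    return random.choice(replies).format(reflect(match.group(1)))
            return "Go on."

        print(respond("I need someone to talk to"))
        # -> e.g. "Why do you need someone to talk to?"

    A handful of rules like these turns out to be enough to sustain a surprisingly long conversation.
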
  • Fake philosophers (Score:3, Interesting)

    by BeBoxer ( 14448 ) on Monday August 20, 2001 @02:52PM (#2198600)
    From the article:

    If, or when one does, it will open a Pandora's box of ethical and philosophical questions. After all, if a computer is perceived to be as intelligent as a person, what is the difference between a smart computer and a human being?

    and

    "All of us strongly believe that machines are the next step in evolution," said Dunietz. "The distinction between real flesh and blood, old-fashioned and the new kind, will start to blur."

    If these researchers get to the point where they can't see a moral difference between killing a person and turning off a computer, they need to get out of the lab more. What next, natural rights for computer programs? That's like inventing television, and then being unwilling to turn off the TV for fear of killing the little people inside. Rubbish.
  • Re:Fake philosophers (Score:2, Interesting)

    by gmarceau ( 119282 ) <dnys2v4dq1001@sneakemail.com> on Monday August 20, 2001 @03:04PM (#2198684) Homepage
    Go out and rent Blade Runner (or download it, according to the previous story). It gives an interpretation of the colors such a world would have - with a definitively human touch.
  • Re:Variant Spelling (Score:3, Interesting)

    by screwballicus ( 313964 ) on Monday August 20, 2001 @03:06PM (#2198701)
    The -ise verb endings are still common in the British Commonwealth. They are particularly alive in South African and Indian English, but also in Australian, New Zealand and Canadian English.

    They exist because the original -ise verbs came from French, which spells them with an 's'. For example, "realise" is the traditional spelling of that particular verb, as it derives from the French verb "réaliser". Another example is "paralyse", which derives from the French "paralyser" but has become "paralyze" in American English.

  • Re:What a crock (Score:3, Interesting)

    by Reality Master 101 ( 179095 ) <RealityMaster101@gmail. c o m> on Monday August 20, 2001 @03:26PM (#2198852) Homepage Journal

    "...you assume they have a considerable amount of nonlinguistic cognitive machinery in place before they start" [...] Additionally, the idea that children learn laguage because of rewards or praise is, apparently, inconsistent with studies of human language acquisition.

    Hmm, interesting. To tell you the truth, I have a toddler about 20 months old myself, and it's been fascinating watching him develop cognitive skills. I think there is room for both views. On the one hand, there is no question that a considerable amount of hard-wired machinery is at work. This is immediately apparent when compared to raising a puppy (which I've also done).

    When my child was born, I was interested to see how long it would take before I noticed something "different" compared to the puppy. To my amazement, once an infant starts noticing the world (they are pretty much oblivious for the first three months), the differences are noticeable right away. It's subtle, but you can see them looking at the world and you can see "the little gears turning". I don't know how to define it exactly, but there is no doubt that there is a qualitative difference in how each brain works.

    On the other hand, I don't think you necessarily need to look to direct positive/negative reinforcement from parents or from the world to find feedback at work. There is a tremendous amount of self-motivated feedback at work in a child. In my boy, at least, his biggest motivations are 1) to look at everything and analyze how it interacts with his world, and, more importantly, 2) to be a "big boy" by mimicking the adults around him. If there's something he thinks he can do, he gets pissed if you don't let him try it himself. Much of his positive/negative feedback comes directly from comparing his actions and results to those of the people around him.

    I think hard-wired, self-motivated feedback based on mimicry is going to be shown to be an important factor in child development. That makes it all the harder to make a machine do it, because you have to give it something to mimic in a relatively real-world environment.

  • by RobertFisher ( 21116 ) on Monday August 20, 2001 @03:57PM (#2199046) Journal
    The description of the researchers at AI slowly entering thousands of facts such as "a table has four legs" sounds extremely similar to Lenat's Cyc [cyc.com] project. Even the timescales (10 years in both cases) sound quite similar.

    Given that the Cyc project has apparently failed to live up to its original claim of producing genuine childlike intelligence by slowly building up all of the information a child has, and has since been turned into a commercial product, why should one believe AI will fare any better? How do their approaches differ? It seems particularly problematic for AI, as a company, that Cyc has released their OpenCyc project to the community.
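
    For readers who haven't seen Cyc-style systems, hand-entering facts really does look about this mundane. The following is a toy sketch of a fact base with a single "is-a" inheritance rule; the facts and the representation are invented for illustration and don't reflect how Cyc or Hal actually store knowledge:

        # A toy knowledge base: hand-entered facts plus one "is-a" inheritance rule.
        FACTS = {
            ("table", "has-legs"): 4,
            ("dog", "has-legs"): 4,
            ("bird", "has-legs"): 2,
            ("kitchen table", "is-a"): "table",
        }

        def lookup(thing, attribute):
            """Return an attribute, climbing the is-a chain when there is no direct fact."""
            while thing is not None:
                if (thing, attribute) in FACTS:
                    return FACTS[(thing, attribute)]
                thing = FACTS.get((thing, "is-a"))  # fall back to the parent concept
            return None

        print(lookup("kitchen table", "has-legs"))  # -> 4, inherited from "table"

    The lookup itself is trivial; the decade-long effort is in entering enough facts, and enough exceptions, to cover everyday common sense.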

    Bob

  • by Myco ( 473173 ) on Monday August 20, 2001 @08:38PM (#2200317) Homepage
    today (2001): human trains AI, limited by wetware bandwidth


    Wetware bandwidth, multiplied by the number of humans performing the training. Why don't they open-source it and let everyone in the world have the chance to train it? That would be much faster, and much more democratic, and therefore more representative of what people really consider to be "normal" intelligent behavior.
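
    As a purely hypothetical sketch of what pooling that many trainers could look like (nothing the researchers have actually described), imagine volunteers each voting on whether an utterance sounds like normal language, with the votes averaged into a single reinforcement signal:

        from collections import defaultdict

        # Hypothetical pooled feedback: each volunteer rates an utterance +1 (sounds right) or -1.
        votes = [
            ("ball is red", +1), ("ball is red", +1), ("ball is red", -1),
            ("red is ball", -1), ("red is ball", -1),
        ]

        def pooled_reward(votes):
            """Average the votes per utterance so any single trainer's quirks wash out."""
            totals, counts = defaultdict(int), defaultdict(int)
            for utterance, vote in votes:
                totals[utterance] += vote
                counts[utterance] += 1
            return {u: totals[u] / counts[u] for u in totals}

        print(pooled_reward(votes))
        # -> {'ball is red': 0.33..., 'red is ball': -1.0}

    The averaging step is also where the "democratic" part comes in: the signal reflects the consensus of the crowd rather than any one trainer.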
