
Israeli AI System "Hal" And The Turing Test

Conspiracy_Of_Doves writes: "Hal, the AI creation of Dr. Anat Treister-Goren of Israel, has fooled child language experts into believing that it is an 18-month-old child. Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years. CNN.com article is here. Yes, it's named after what you think it's named after, and yes, the article mentions why naming it Hal might not be such a hot idea."
  • by Sebastopol ( 189276 ) on Monday August 20, 2001 @02:35PM (#2198459) Homepage
    neural nets are designed to simulate how the brain works, so it makes sense that they be trained the same way. consider this: perhaps they can absorb information faster than a human brain, but who could deliver interactive teaching at that speed?

    now consider:

    today (2001): human trains AI, limited by wetware bandwidth

    ...20 years from now: AI trains AI, limited by neural net bandwidth.

    result: all 20 years of training one AI will be compressed into a fraction of a second of training time for the next generation

    this is the manifestation of Raymond Kurzweil and James Gleick's observations: the acceleration of everything, the exponential growth of compute power.

    hang on for the ride, kids. it's gonna get weird. i bet we see AI legislation in the next 10 years.

    we will be the 'gods' (as in creators) of the new race that will inhabit the earth.
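
    to put rough numbers on that compression claim, here's a back-of-envelope sketch (both bandwidth figures below are pure assumptions, picked only for illustration):

    ```python
    # Illustrative arithmetic only: both bandwidth figures are assumptions.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    human_teaching_bps = 40    # assume interactive speech carries ~40 bits/s
    ai_to_ai_bps = 1e12        # assume a machine-to-machine link at ~1 Tbit/s

    # Total supervision delivered over 20 years of human teaching:
    total_bits = 20 * SECONDS_PER_YEAR * human_teaching_bps

    # Time for an AI teacher to replay that supervision to the next generation:
    print(f"{total_bits / ai_to_ai_bps:.3f} seconds")   # ~0.025 seconds
    ```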

  • by ethereal ( 13958 ) on Monday August 20, 2001 @02:36PM (#2198462) Journal
    Yes, it's named after what you think it's named after, and yes, the article mentions why naming it Hal might not be such a hot idea."

    I don't know, it seems to fit if you ask me. HAL was very childlike in the movie, especially in regard to his "dad" Dr. Chandra (well, in the sequel at least), and only ended up hurting people because he was lied to and thought there was no other way. How is that any different from a human child who is abused and as a result doesn't value human lives at all?

    I don't think they should have named it HAL, if only because it's going to get boring once every single AI project is named HAL; but naming it after the famous movie star of the same name wasn't a bad idea in my opinion. As long as you treat it right and don't give it control over vital life support functionality, you should be just fine :)

  • variability (Score:2, Insightful)

    by 4n0nym0u53 C0w4rd ( 463592 ) on Monday August 20, 2001 @02:36PM (#2198464) Homepage
    From the article:
    "Some kids are more predictable than others. He would be the surprising type"

    Being the "surprising type" with a vocabulary of 200 words probably indicates that the program is not particularly good. The range of possible behaviors is pretty small for such a system. As the vocabulary and complexity of possible utterances increases, it is likely that the "surprising" aspect of Hal is going to move into "bizarre" territory.

    As Chomsky pointed out [soton.ac.uk], relying strictly on positive and negative feedback is not enough to develop language...
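
    To make the limitation concrete, here is a deliberately naive toy learner (an illustrative sketch, not Hal's actual architecture) that does nothing but reinforce whole utterances. However long you train it, it can never produce a novel combination, which is exactly the gap Chomsky's argument points at:

    ```python
    import random

    class FeedbackLearner:
        """Toy learner that reinforces whole utterances it is rewarded for."""

        def __init__(self):
            self.scores = {}  # utterance -> accumulated reward

        def hear(self, utterance):
            self.scores.setdefault(utterance, 0.0)  # heard strings become candidates

        def feedback(self, utterance, reward):
            self.scores[utterance] = self.scores.get(utterance, 0.0) + reward

        def speak(self):
            # Choose among known utterances, weighted by past reward.
            utterances = list(self.scores)
            weights = [1.0 + max(s, 0.0) for s in self.scores.values()]
            return random.choices(utterances, weights=weights)[0]

    hal = FeedbackLearner()
    hal.hear("daddy gone")
    hal.hear("monkeys eat bananas")
    hal.feedback("daddy gone", 1.0)

    # No amount of reward shaping lets speak() produce a novel sentence like
    # "bananas gone": the hypothesis space is just the strings it has heard.
    print(hal.speak())
    ```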

  • by ryants ( 310088 ) on Monday August 20, 2001 @02:41PM (#2198519)
    neural nets are designed to simulate how the brain works, so it makes sense that they be trained the same way
    Actually, neural nets don't simulate; they mimic at some crude level.

    But just like mimicking what a bird does (i.e. taping feathers to your arms and flapping) isn't going to get you off the ground, mimicking the human brain will probably only get us so far.

    I believe the real breakthroughs will come more or less as they did in aerodynamics: when we understood the principles of flight and stopped mimicking birds, we could fly. When we understand the principles of intelligence and stop mimicking brains, we might be on to something.
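
    For a sense of just how crude the mimicry is, here is the textbook artificial "neuron" in a few lines (a standard perceptron-style sketch, not any particular project's model). A real neuron has dendritic trees, spike timing, and neurochemistry; this keeps only a weighted sum and a squashing function:

    ```python
    import math

    def neuron(inputs, weights, bias):
        """Classic artificial 'neuron': a weighted sum of the inputs passed
        through a logistic squashing function. Everything else about a
        biological neuron is thrown away."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    # Example: two inputs with hand-picked weights.
    print(neuron([0.5, 0.9], [1.2, -0.7], bias=0.1))
    ```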

  • by BillyGoatThree ( 324006 ) on Monday August 20, 2001 @02:41PM (#2198524)
    "Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years."

    This guy has obviously never heard of the Minsky Theorem: "Full scale AI is always 20 years down the road."

    In any case, call us when it is actually working, not when you've fooled "child language experts". I could fool experts right now with a simple cassette tape, a LOT of taped 18-month-old comments, and a quick hand with a playback button. That doesn't mean my stereo is going to be human in 10 years.

    I am 99% sure we will eventually achieve "full AI". But I'm 100% sure it won't be via vague claims about unguessable future performance. In other words, show me the money.
  • Re:"2001" (Score:2, Insightful)

    by Conspiracy_Of_Doves ( 236787 ) on Monday August 20, 2001 @02:59PM (#2198654)
    Funny how all the cultural fears of technology come from books and movies like Frankenstein, Brave New World, Colossus (remember that one?), and 2001

    Probably because science fiction has a funny tendency to become science fact, but here's another piece of sci-fi for you. In Isaac Asimov's robot novels, he predicted that robots would attain superior intelligence, but along with superior intelligence comes superior morality. There are tons of stories like what you are describing. True, a story is boring unless there is some kind of trouble, but in those stories, the happy-friendly-harmless-monster / perfectly-working-computer simply isn't the source of the trouble.
  • by Knos ( 30446 ) on Monday August 20, 2001 @03:05PM (#2198688) Homepage Journal
    Can you elaborate on how radically different a common life form, engineered by thousands of centuries of natural selection, is from a system engineered by humans and/or other systems that presents all the characteristics of living animals?

    Is it just a belief that if we create something, we are automatically superior to it? (then why should children be anything but slaves?)
  • by darkharlequin ( 1923 ) on Monday August 20, 2001 @03:05PM (#2198689) Homepage Journal
    To the journal publishers and make the proper connections in the scientific community? Peer review has NOTHING to do with the scientific merit of a paper; my Q-Mech teacher explained that one to me. Peer review has to do with who you know and what you have done for them lately.
  • by schnitzi ( 243781 ) on Monday August 20, 2001 @03:08PM (#2198713) Homepage
    ...is indistinguishable from a rigged demo.


    Who are the "experts" they claim to have fooled? Where are the transcripts of the session? Where is the web interface to the program? I've seen enough similar claims to bet that it's monumentally disappointing.


    AI is full of boastful claims. This article is full of them, but practically devoid of any useful details.

  • by dissy ( 172727 ) on Monday August 20, 2001 @03:10PM (#2198734)
    If one thinks of the human body as just the machine that it is, this leaves only the mind.
    Humans are self-aware, and that's why most people consider turning off a human morally wrong.

    What if, through AI, a computer could actually be just as self-aware as a human?
    That is when that line becomes important.

    Until then, a computer can at most be compared to an animal or insect, something with hardwired responses and no real thought.
    (The best that comes to mind is a gnat, cockroach, or other similar bug; I'm sure you get the idea)

    Then again, I would have a pretty hard time personally just turning off my pet dog, as much as it sounds like these people don't want to turn off a machine...

    just something to think about.
    -- Jon
  • HAL's only failure (Score:3, Insightful)

    by firewort ( 180062 ) on Monday August 20, 2001 @03:16PM (#2198775)
    The only failure with HAL was that Dr. Chandra forgot to teach that murder is far worse than lying.

    HAL understood that they were both bad, but had no values associated with either. Once HAL had lied, it was equally okay to commit murder.

    Presumably, Dr. Goren will take this under consideration.

    Also, I hope they realise that in ten years they won't have an adult. They'll have a well-spoken, knowledgeable ten-year-old. At this point it's worth examining the differences between a ten-year-old and an adult.

    Knowledge, experience, maturity, sense of responsibility. Can anyone come up with any others?
  • Ha. And bah. (Score:5, Insightful)

    by Lemmy Caution ( 8378 ) on Monday August 20, 2001 @03:43PM (#2198958) Homepage
    The exaggerated claims are classic for private-sector AI research. Language acquisition is a process of highly structured connectionism that will probably require some hardware isomorphism (if we're talking about something that humans can talk to, anyway) and, ultimately, ontology. I see no reference to any sort of ontology engine; the sorts of successes they've had indicate absolutely no ontological grounding, and things like binding pronouns accurately to their antecedents can't emerge from simply training a network without some way of generating ontologies.

    When I see an AI claim, I check its source: if it's a business, I suspect exaggeration; if it's a real research center (public or private, MIT or Bell Labs), then I'm more likely to take the claims at face value. This is hyperbolic investor-porn, no more.

  • Re:How it works... (Score:5, Insightful)

    by DanMcS ( 68838 ) on Monday August 20, 2001 @04:17PM (#2199158)

    These researchers are getting back from the program exactly what they put in. At one point in your link, I read:

    (person) where is daddy?
    (ai) daddy gone
    (person) daddy is at work
    (ai) daddy gone bye bye
    ...
    (person) wait hal, we need to take something to eat
    (ai) banana
    (person) ok we can take some bananas
    (ai) monkeys eat bananas
    (person) you're so smart hal. do you want to take a book?
    (ai) juice mommmy
    (person) sure we'll take some juice as well

    The researcher elsewhere claims that the AI's words "relate to its well-being". This is utter projection: the only reason the AI is stuck on concepts of mommy, daddy, monkey, and juice is that this is the inane crap they insist on talking to it about!

    Notice also that they claim the AI is tracking almost exactly with a child of the same age. Seem strange? Wouldn't you expect a little deviation over 15 months? Shouldn't the thing be a little smarter or a little dumber than a normal child? Just statistically speaking, how likely is it they happened to program one that advances /exactly/ as quickly as a normal human infant?

    The paper talks a lot about feedback loops. I've got a huge one for them, but it isn't the AI caught in it, it's the researchers. By expecting the thing to react at a child's level, they're talking to it that way, rewarding it that way, and making it that way. If they started talking to it about quantum mechanics tomorrow, it would be confused as hell for about a month, but I bet it would pick up real fast after it absorbed the new vocabulary. They claim it cares about monkeys and juice?! Those are just words to it; you could just as easily raise it on gluons and dark matter, and I don't think it would notice a difference.
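
    The "just words to it" point is easy to demonstrate with any purely statistical model of language. Here's a minimal bigram sketch (a generic toy, not a claim about Hal's actual algorithm): swap the training text and the machine's "interests" swap with it:

    ```python
    import random
    from collections import defaultdict

    def train_bigrams(text):
        """Build a bigram table: word -> list of observed next words."""
        table = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
        return table

    def babble(table, start, length=6):
        """Random walk over the bigram table from a start word."""
        out = [start]
        for _ in range(length):
            nxt = table.get(out[-1])
            if not nxt:
                break
            out.append(random.choice(nxt))
        return " ".join(out)

    nursery = train_bigrams("daddy gone bye bye monkeys eat bananas juice mommy")
    physics = train_bigrams("gluons bind quarks dark matter bends light")

    # The model's "concerns" are exactly its training vocabulary, nothing more.
    print(babble(nursery, "monkeys"))   # monkeys eat bananas juice mommy
    print(babble(physics, "gluons"))    # gluons bind quarks dark matter bends light
    ```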

  • Re:Ha. And bah. (Score:3, Insightful)

    by Lemmy Caution ( 8378 ) on Monday August 20, 2001 @05:00PM (#2199414) Homepage
    Ontology, in AI, is a pretty specialized term. The system has to know what the thing is. Not just as a word that gets associated with other words, but as a thing in itself, interaction with which reveals properties. When you have ontology, all the sorts of logical inferences that CYC is being taught by rote ("if David is in New York, his left foot is in New York") don't need to be made explicit. If I had said "epistemology", then you'd be right to make your point.
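
    Here's a toy illustration of the kind of inference that falls out for free once you have one (a hypothetical two-fact knowledge base, nothing like CYC's actual machinery). Represent part-of and located-in as relations with one structural rule, and "David's left foot is in New York" never has to be stated:

    ```python
    # Toy ontology: two relations as fact sets, plus one structural rule.
    part_of = {("left_foot_of_David", "David")}
    located_in = {("David", "New_York")}

    def location(entity):
        """An entity is wherever it is directly placed, or wherever any
        whole it is part of is located (parts travel with their wholes)."""
        for thing, place in located_in:
            if thing == entity:
                return place
        for part, whole in part_of:
            if part == entity:
                return location(whole)
        return None

    print(location("left_foot_of_David"))   # New_York -- never stated explicitly
    ```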
