Israeli AI System "Hal" And The Turing Test 447
Conspiracy_Of_Doves writes: "Hal, the AI creation of Dr. Anat Treister-Goren of Israel, has fooled child language experts into believing that it is an 18-month-old child. Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years. The CNN.com article is here. Yes, it's named after what you think it's named after, and yes, the article mentions why naming it Hal might not be such a hot idea."
creating computers in man's image, exponentials (Score:5, Insightful)
now consider:
today (2001): human trains AI, limited by wetware bandwidth
...20 years from now: AI trains AI, limited by neural net bandwidth.
result: all 20 years of training one AI will be compressed into a fraction of a second of training time for the next generation
this is the manifestation of Raymond Kurzweil and James Gleick's observations: the acceleration of everything, the exponential growth of compute power.
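To put rough numbers on that exponential, here's a back-of-envelope sketch. The 18-month doubling time is an assumption for illustration (a Moore's-law-style figure, not something from the article or from Kurzweil directly):

```python
# Back-of-envelope sketch of exponential compute growth.
# Assumes an 18-month doubling time, which is an assumption
# for illustration, not a figure from the article.
doubling_time_years = 1.5
years = 20

doublings = years / doubling_time_years   # ~13.3 doublings in 20 years
speedup = 2 ** doublings                  # roughly a 10,000x speedup

print(f"{doublings:.1f} doublings -> about {speedup:,.0f}x more compute")
```

Whether raw compute translates into proportionally faster AI training is, of course, exactly the contested part of the claim.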
hang on for the ride, kids. it's gonna get weird. i bet we see AI legislation in the next 10 years.
we will be the 'gods' (as in creators) of the new race that will inhabit the earth.
Why isn't it a hot idea? (Score:3, Insightful)
I don't know, it seems to fit if you ask me. HAL was very childlike in the movie, especially in regards to his "dad" Dr. Chandra (well, in the sequel at least), and only ended up hurting people because he was lied to and thought there was no other way. How is that any different from a human child who is abused and as a result doesn't value human lives at all?
I don't think they should have named it HAL just because it's going to get boring after every single AI project is named HAL, but naming it after the famous movie star of the same name wasn't a bad idea in my opinion. As long as you treat it right and don't give it control over vital life support functionality, you should be just fine :)
variability (Score:2, Insightful)
"Some kids are more predictable than others. He would be the surprising type"
Being the "surprising type" with a vocabulary of 200 words probably indicates that the program is not particularly good. The range of possible behaviors is pretty small for such a system. As the vocabulary and complexity of possible utterances increases, it is likely that the "surprising" aspect of Hal is going to move into "bizarre" territory.
As Chomsky pointed out [soton.ac.uk], relying strictly on positive and negative feedback is not enough to develop language...
Re:creating computers in man's image, exponentials (Score:5, Insightful)
But just like mimicking what a bird does (i.e. taping feathers to your arms and flapping) isn't going to get you off the ground, mimicking the human brain will probably only get us so far.
I believe the real breakthroughs will come more or less as they did in aerodynamics: when we understood the principles of flight and stopped mimicking birds, we could fly. When we understand the principles of intelligence and stop mimicking brains, we might be on to something.
Must be a misquote or an AI newbie (Score:3, Insightful)
This guy has obviously never heard of the Minsky Theorem: "Full scale AI is always 20 years down the road."
In any case, call us when it is actually working, not when you've fooled "child language experts". I could fool experts right now with a simple cassette tape, a LOT of taped 18-month-old comments and a quick hand with a playback button. That doesn't mean my stereo is going to be human in 10 years.
I am 99% sure we will eventually achieve "full AI". But I'm 100% sure it won't be via vague claims about unguessable future performance. In other words, show me the money.
Re:"2001" (Score:2, Insightful)
Probably because science fiction has a funny tendency to become science fact, but here's another piece of sci-fi for you. In Isaac Asimov's robot novels, he predicted that robots would attain superior intelligence, but along with superior intelligence comes superior morality. There are tons of stories like what you are describing. True, a story is boring unless there is some kind of trouble, but in those stories, the happy-friendly-harmless-monster / perfectly-working-computer simply isn't the source of the trouble.
Re: Fake philosopher (Score:2, Insightful)
Is it just a belief that if we create something, we are automatically superior to it? (Then why should children be anything but slaves?)
Oh, so you mean pay the requisite kickbacks... (Score:2, Insightful)
Any sufficiently advanced technology... (Score:2, Insightful)
Who are the "experts" they claim to have fooled? Where are the transcripts of the session? Where is the web interface to the program? I've seen enough similar claims to bet that it's monumentally disappointing.
AI is full of boastful claims. This article is full of them, but practically devoid of any useful details.
Re:Fake philosophers (Score:2, Insightful)
Humans are self-aware, and that's why most people consider turning off a human morally wrong.
What if, through AI, a computer could actually be as self-aware as a human?
That is when that line becomes important.
Until then, a computer can at most be compared to an animal or insect, something with hardwired responses and no real thought.
(The best comparison that comes to mind is a gnat, cockroach, or other similar bug; I'm sure you get the idea.)
Then again, I would have a pretty hard time personally just turning off my pet dog, as much as it sounds like these people don't want to turn off a machine...
Just something to think about.
-- Jon
HAL's only failure (Score:3, Insightful)
HAL understood that lying and murder were both bad, but had no relative values associated with either. Once HAL had lied, it was equally okay to commit murder.
Presumably, Dr. Goren will take this under consideration.
Also, I hope they realise that in ten years, they won't have an adult. They'll have a well-spoken, knowledgeable ten year old. At this point it's worth examining the differences between a ten year old and an adult.
Knowledge, experience, maturity, sense of responsibility. Can anyone come up with any others?
Ha. And bah. (Score:5, Insightful)
When I see an AI claim, I check its source: if it's a business, I suspect exaggeration; if it's a real research center (public or private, MIT or Bell Labs) then I'm more likely to take the claims at face value. This is hyperbolic investor-porn, no more.
Re:How it works... (Score:5, Insightful)
These researchers are getting back from the program exactly what they put in. At one point in your link, I read:
(person) where is daddy?
...
(ai) daddy gone
(person) daddy is at work
(ai) daddy gone bye bye
(person) wait hal, we need to take something to eat
(ai) banana
(person) ok we can take some bananas
(ai) monkeys eat bananas
(person) you're so smart hal. do you want to take a book?
(ai) juice mommmy
(person) sure we'll take some juice as well
The researcher elsewhere claims that the AI's words "relate to its well-being". This is utter projection: the only reason the AI is stuck on concepts of mommy, daddy, monkey, and juice is that this is the inane crap they insist on talking to it about!
Notice also that they claim the AI is tracking almost exactly with a child of the same age. Seem strange? Wouldn't you expect a little deviation over 15 months? Shouldn't the thing be a little smarter or a little dumber than a normal child? Just statistically speaking, how likely is it they happened to program one that advances /exactly/ as quickly as a normal human infant?
The paper talks a lot about feedback loops. I've got a huge one for them, but it isn't the AI caught in it, it's the researchers. By expecting the thing to react at a child-level, they're talking to it that way, rewarding it that way, and making it that way. If they started talking to it about quantum mechanics tomorrow, it would be confused as hell for about a month, but I bet it would pick up real fast after it absorbed the new vocabulary. They claim it cares about monkeys and juice?! Those are just words to it; you could just as easily raise it on gluons and dark matter, and I don't think it would notice a difference.
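The feedback loop this poster describes, where the trainer rewards the utterances they expect and the system's vocabulary drifts toward whatever the trainer talks about, can be caricatured in a few lines. This is a toy sketch, not Hal's actual mechanism; the word list and reward scheme are invented for illustration:

```python
import random
from collections import Counter

def train(vocab, reward_words, steps=1000, seed=0):
    """Toy reinforcement loop: the learner samples words in proportion
    to their accumulated reward, and the trainer reinforces whichever
    words the trainer happens to care about. (Invented example; not
    the actual Hal training algorithm.)"""
    rng = random.Random(seed)
    weights = Counter({w: 1 for w in vocab})  # every word starts equal
    for _ in range(steps):
        words = list(weights)
        # sample an utterance with probability proportional to its weight
        w = rng.choices(words, weights=[weights[x] for x in words])[0]
        if w in reward_words:  # trainer approves: reinforce the word
            weights[w] += 1
    return weights

vocab = ["mommy", "daddy", "juice", "banana", "monkey", "gluon", "quark"]

# Talk to it about juice and monkeys, and that's what dominates...
toddler = train(vocab, reward_words={"juice", "banana", "monkey"})
# ...swap the reward set, and the same learner "cares about" physics.
physicist = train(vocab, reward_words={"gluon", "quark"})
```

The point matches the poster's: nothing in the learner privileges "juice" over "gluon"; the reward channel does all the work.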
Re:Ha. And bah. (Score:3, Insightful)