AI Researchers Say 'Rascals' Might Pass Turing Test
An anonymous reader writes "Passing the Turing test is the holy grail of artificial intelligence (AI) and now researchers claim it may be possible using the world's fastest supercomputer (IBM's Blue Gene). This version of the Turing test pits a human conversing with a synthetic character powered by Rascals software crafted at Rensselaer Polytechnic Institute. RPI is aiming to pass AI's final exam this fall, by pairing the most powerful university-based supercomputing system in the world with its new multimedia group which is designing a holodeck, a la Star Trek."
Creating a character won't help (Score:5, Insightful)
Re:yes, but is it really intelligent? (Score:2, Insightful)
Re:yes, but is it really intelligent? (Score:5, Insightful)
But it will demonstrate that past a certain point we won't know the difference between real intelligence and something attempting to appear intelligent.
In fact, just what is intelligence or consciousness? If we can't define it, how can we hope to produce it?
If we can't tell the difference, maybe there isn't one. Are you intelligent? Or are you just complex enough that you simulate it well?
Re:yes, but is it really intelligent? (Score:1, Insightful)
With all due respect to Turing, and he was a brilliant man, I don't think that his test is the definition of AI. True AI must be self-programming. Here is the ArcherB test: when you can place a machine in a particular situation with no programming whatsoever, and it figures out how to do something on its own, then you have AI. For example, hook up a computer to an Internet connection. This computer can have no BIOS, no OS, no programming at all. When it learns to use its own hardware, figures out network protocols and starts downloading web pages and porn, you have true AI.
What crap (Score:5, Insightful)
"That's how we plan to pass this limited version of the Turing test."
If it's a limited version of the Turing Test, then it's not the Turing Test. They don't actually define what the limits are. But any open-ended test is doomed to failure, given our state of the art in A.I. (read: there is no science of Artificial Intelligence, in the sense of artificial cognition).
"What do you think a typical mother would say if she found out her daughter was going to enter the porn industry?"
"Why do you think children have emotional attachments to their parents?"
"Which is worse, racism or sexism?"
"Would you rather be a fireman or an astronaut, and why?"
Any open-ended question that requires human cultural knowledge, especially one that asks the machine to support its conclusion, is going to cause it to barf.
Now, if the point of this is whether you can fool someone into thinking the Avatar was human when they didn't know it was a test, well, who cares? Eliza was able to do that back in the 1960s.
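Eliza-style deception really does need very little machinery: keyword patterns plus pronoun reflection. A toy responder along those lines (the rules below are an illustrative sample, not Weizenbaum's original DOCTOR script) might look like:

```python
import re

# Reflect first-person fragments back at the speaker.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A tiny, made-up rule list; the real ELIZA script had many more patterns.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    # Swap pronouns so the echo sounds like a reply, not a parrot.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # content-free default when nothing matches

print(respond("I need my vacation"))
```

No model of meaning anywhere, yet in a sympathetic setting this kind of echoing was enough to convince some users they were talking to a person.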
Lastly, who says the Turing Test (or any A.I. test) needs to take place in real time? I would be impressed if they came back with a human-level answer in a month of processing time. A computer that produced the same answer in one second would be roughly 2.6 million times faster than one that needed a month. That they can't even do that should tell people that speed is not the problem in A.I. research. We have absolutely no fundamental model of how it all works.
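The "millions of times faster" ratio is just the number of seconds in a month:

```python
# One month of processing time vs. a one-second answer:
# the speedup factor is simply the seconds in a 30-day month.
seconds_per_month = 30 * 24 * 60 * 60
print(seconds_per_month)  # 2592000 -- roughly 2.6 million
```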
Ask it the color of a Coke can. (Score:3, Insightful)
Re:yes, but is it really intelligent? (Score:5, Insightful)
Re:Turing Test is Nonsense (Score:4, Insightful)
Sure, writing a bot that does first post is easy.
We are talking about a conversation here, or, even better, a debate over a topic that requires evaluating new concepts on the fly.
We will know we are getting somewhere when we can get a computer to change its mind on something during a conversation.
Real turing test (Score:5, Insightful)
I wonder if it's easier to do this in Japanese than in English. From what I've read, Japanese is easier to text message in because the subject and object are usually inferred and there are no cases or articles. A single sentence can be one character and just a verb. Thus, by constraining the nuance to discrete choices rather than a sparsely populated product space of self-consistent cases, predicates and adjectives, perhaps Japanese would be the easier language in which to generate Turing-worthy text.
Or maybe the reverse is true. But I'd bet one was a lot easier than the other.
Turing tests of various degrees of difficulty (Score:2, Insightful)
There are lots of computers that can pass a 5-minute version of the test.
No, to really pass this test the computer will have to have and display a definite personality that is self-consistent over time. It doesn't matter much what this personality is, as long as it's self-consistent and credible. The lack of a personality will be picked up by an observer over time, especially when compared to the real human the observer is also conversing with.
Re:The Loebner Prize (Score:3, Insightful)
Re:yes, but is it really intelligent? (Score:4, Insightful)
Well, you do have to admit that even humans are born with some very basic instincts, such as the desire to suckle when hungry, closing their hand when something touches their palm, and crying when they're uncomfortable (hungry, wet, tired, in pain), as well as involuntary actions such as cardiopulmonary functions.
That said, I would agree that you shouldn't have to give a machine anything more than basic resources to begin its process of learning, but you do need to give it a rudimentary kernel to kick-start it from the state of being an inanimate pile of silicon. From that kernel, it should be able to learn from its surroundings, build its own OS and begin to interact with its environment.
Re:yes, but is it really intelligent? (Score:4, Insightful)
True, we have to essentially figure out how to USE the signals we get from our senses, but the brain already has the basic structure to interpret your senses and do gross movement. (Or did your baby not move its arms and legs when she was born?)
Therefore, the correct analogy would be the necessary hardware (including BIOS) AND the basic OS. You don't tell your AI how to "read" the Internet, but you do tell it how to interpret the signals. So your AI knows that there is something out there, and then figures out what it means and starts using it productively.
Also remember that the "hardware" for the AI could be entirely software-based...
You have an excellent point but are taking the analogy too far.
Re:yes, but is it really intelligent? (Score:5, Insightful)
Re:The Loebner Prize (Score:2, Insightful)
So, when did you learn how to beat your heart? (Score:4, Insightful)
However, I think I see what you are getting at. This is a programmed system, not one that learned most of its behaviors through trial and error. A system that can't start where a baby starts, and can learn the basics on its own the way a baby does, is still lacking. But the "No BIOS, no OS" thing is going a little too far.
Re:yes, but is it really intelligent? (Score:3, Insightful)
Voight-Kampff was used to determine whether the subject was able to empathize with others. Interesting that the replicants were the ones who actually exhibited the quality (Leon and Rachael keeping photos of their "families", Batty breaking Deckard's fingers for killing Pris, even Deckard lying to Rachael that he was only joking about her being a replicant) while the humans in the movie seemed to lack it. Then again, I guess that was the whole point.
Re:yes, but is it really intelligent? (Score:3, Insightful)
I love the modern hindsight we now have on this retro-futuristic point of view. Modern AIs have shown that emotions are a lot easier to implement than intelligence. We have computer pets now precisely because we have managed to simulate emotions, but not intelligence.
Re:yes, but is it really intelligent? (Score:3, Insightful)
It is fairly trivial even now to develop machines with no or minimal programming that can display emergent behaviors as complex as you are describing.
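A classic illustration of complex behavior emerging from minimal programming is Langton's ant: two rules, no encoding of any large-scale plan, yet after roughly ten thousand steps the ant settles into building an unbounded repeating "highway". A minimal sketch:

```python
# Langton's ant: on a white cell, turn right, flip it black, step forward;
# on a black cell, turn left, flip it white, step forward. That is the
# entire program -- the "highway" phase is not written anywhere in it.
def langtons_ant(steps):
    black = set()       # cells currently flipped to black
    x, y = 0, 0         # ant position
    dx, dy = 0, 1       # facing "up" (y increases upward)
    for _ in range(steps):
        if (x, y) in black:
            black.remove((x, y))
            dx, dy = -dy, dx    # turn left
        else:
            black.add((x, y))
            dx, dy = dy, -dx    # turn right
        x, y = x + dx, y + dy
    return black, (x, y)

cells, pos = langtons_ant(11000)
print(len(cells), pos)
```

Whether such emergent complexity counts as a step toward intelligence is exactly the question the parent comment goes on to raise.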
Which is largely beside the point: your baby, at the stage of development you describe, is not displaying "intelligence" or even personhood (and I use the term specifically in the philosophical sense of an entity that displays complex moral reasoning). Human infants from newborn to several months of age are not even close to displaying personhood. In general, humans show the first signs, which include complex speech (and by "complex" I mean anything more than simple imitation and repetition), the ability to recognize the self in a mirror, etc., at around two years.
I'm not saying these things just to ruffle your feathers, but to make a point. If you were to take a newborn infant, provide it with the bare minimum to keep it alive, but not provide it with sufficient nurturing and social stimulation for a decade, the result wouldn't be a person either. It would be a criminally insane animal.
What I am suggesting is that doing the same thing with a purported AI would probably have the same effect. Even if it managed to develop "true" intelligence, which I very much doubt, how could we expect it to be anything other than dangerously insane from our perspective? How is it going to develop the ability to engage in moral reasoning about the rights of other intelligent entities without direct, and extensive interaction with them? Human Beings can't do that, why should we expect AIs to?
In my opinion it is absolutely necessary that an AI develop complex moral reasoning. Hopefully better than what much of human history suggests the average human manages.
The reasoning behind Turing is broken (Score:3, Insightful)
And this demonstrates what, exactly?
I have always regarded this leap of logic as the biggest problem with the Turing test. Just because you can't tell the difference between two things in particular circumstances doesn't mean they are the same, or functionally the same in all circumstances. An AI could simulate a human perfectly, down to the smallest detail, and still have no actual intelligence whatsoever.
For example, the use of 3D animation to simulate (say) an image of an aeroplane in a film doesn't mean that a 3D animated plane is the same as a real plane. But to an audience in a cinema there is no difference. To me, this is how the Turing test appears to work (or should I say, not work) (footage of real plane = test human; footage of CGI plane = test AI; method of projecting film = Turing's text conversation restriction; audience = tester).
If we can't tell the difference, maybe there isn't one. Are you intelligent? Or are you just complex enough that you simulate it well?
Again, where is the actual reasoning behind this? The above criticism still applies.
Another fundamental problem with Turing is this: why does a computer have to display human intelligence? An intelligent alien lifeform would fail the Turing test too. Expecting a deliberately designed bundle of wires and microchips to exhibit the same variety of intelligence as a highly evolved monkey which is adapted to hunting mammoth, reproducing to make more monkeys and killing other highly evolved monkeys is totally unrealistic.
As others have pointed out, we need a better definition of intelligence. "Able to mimic a human" just doesn't cut it.
Re:Real turing test (Score:5, Insightful)
Re:yes, but is it really intelligent? (Score:3, Insightful)
The original poster raised a good point... passing a Turing test is not the same thing as creating intelligence, artificial or not. But many members of the Slashdot community seemed to think so, because the story was tagged "singularity", a term which, when applied to the field of intelligence research, refers to the creation of an intellect greater than that of humanity.
Passing a Turing test is probably merely a step towards a true AI - and possibly a rather small step. It is not necessarily an AI.
Re:The reasoning behind Turing is broken (Score:4, Insightful)
But that's the point - it's not a leap of logic, it's a sufficient (and necessary?) condition for a proposed equivalence between humans and machines that Alan Turing used. Either you agree with Turing or you don't, but it's not a fallacy unless someone tries to sneak it in as a premise.
> "Just because you can't tell the difference between two things in particular circumstances doesn't mean they are the same, or functionally the same in all circumstances."
Absolutely. Let's assume for the sake of discussion we had some way to guarantee that they are functionally the same.
> "An AI could simulate a human perfectly, down to the smallest detail, and still have no actual intelligence whatsoever."
Well, that obviously depends on your personal beliefs regarding intelligence. Since I'm a Turing Functionalist, I disagree on this point - an identical machine necessarily has an identical intelligence.
> "For example, the use of 3D animation to simulate (say) an image of an aeroplane in a film doesn't mean that a 3D animated plane is the same as a real plane. But to an audience in a cinema there is no difference. To me, this is how the Turing test appears to work (or should I say, not work)"
If the animation was in fact a simulated world where all the other actors functioned as they should, then I'd argue that it is indeed a plane in that world. It's not the Test itself you're arguing with so much as the Functionalism part.
> "Another fundamental problem with Turing is this: why does a computer have to display human intelligence? An intelligent alien lifeform would fail the Turing test too. Expecting a deliberately designed bundle of wires and microchips to exhibit the same variety of intelligence as a highly evolved monkey which is adapted to hunting mammoth, reproducing to make more monkeys and killing other highly evolved monkeys is totally unrealistic."
Sure, sure. It's just that we consider humans to be intelligent (sometimes I wonder why, etc. etc., but in this context we just do), so if we can show equivalence between a machine and a human, that's sufficient to show the machine to be intelligent. Failure of this test does not necessarily mean the machine is not intelligent via equivalence with some alien creature. (I guess that answers my parenthetical question at the top about whether the Test was a necessary condition.)
> "As others have pointed out, we need a better definition of intelligence. "Able to mimic a human" just doesn't cut it."
After the above, will you understand when I say that I think it does?