AI Researchers Say 'Rascals' Might Pass Turing Test
An anonymous reader writes "Passing the Turing test is the holy grail of artificial intelligence (AI), and now researchers claim it may be possible using the world's fastest supercomputer (IBM's Blue Gene). This version of the Turing test pits a human against a synthetic character powered by Rascals software crafted at Rensselaer Polytechnic Institute. RPI is aiming to pass AI's final exam this fall by pairing the most powerful university-based supercomputing system in the world with its new multimedia group, which is designing a holodeck, a la Star Trek."
But the real question is... (Score:5, Funny)
Re: (Score:2)
Re: (Score:3, Funny)
It can handle all of those things. It's had a user account on Slashdot for the last four months.
Do we really... (Score:4, Funny)
Re:Do we really... (Score:5, Funny)
Comment removed (Score:4, Funny)
Re: (Score:2, Funny)
Misread (Score:5, Funny)
Re: (Score:2)
Re: (Score:3, Funny)
Creating a character won't help (Score:5, Insightful)
Re: (Score:3, Funny)
Hotstud42: ne 1 there?
Hotstud42: SHO ME YR BOOBIES!
Hotstud42: I dn't think she's there.
Hotstud42: If ur ther ewave at the camera!
Hotstud42: c'mon if yu show ur tits I'll pay 4 private.
Naturally, should the Turing test succeed, the first step is to automate webcam porn.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Snappy comeback not found.
Re: (Score:2)
Real turing test (Score:5, Insightful)
I wonder if it's easier to do this in Japanese than in English. From what I've read, Japanese is easier to text message in because the subject and direct object are usually inferred and there are no cases or articles. A single sentence can be one character, just a verb. Thus, by constraining the nuance into discrete choices rather than a sparsely populated product space of self-consistent cases, predicates, and adjectives, perhaps Japanese would make it easier to generate Turing-worthy text.
Or maybe the reverse is true. But I'd bet one was a lot easier than the other.
Japanese does have a case system (Score:3, Informative)
But Japanese definitely has a case system where the inflectional morphology is indicated by particles that follow the modified noun.
Re: (Score:2)
Re:Real turing test (Score:4, Interesting)
Both Japanese and Chinese use all sorts of expressions, many of which make no sense whatsoever when translated literally. This becomes apparent when trying to use those translation tools. The translation ends up being complete gibberish to the point of being comedic.
Because people of so many nationalities speak English, it's easier for an AI to fool people: there really is no standard for the language. English speakers are used to hearing it spoken in all sorts of different ways, with a wide variety of expressions.
Automated chats are always obvious for what they are because they tend to stupidly repeat the same few comments over and over again. They're also incapable of responding properly to a user's comments, and colloquialisms always trip up these systems.
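That repetitive, off-script behavior falls straight out of how such bots are usually built. Here is a minimal ELIZA-style sketch (the rules, names, and canned responses are my own invention, not any real chatbot's):

```python
import re

# A minimal ELIZA-style responder: a handful of regex rules plus one
# canned fallback. Anything outside the rules triggers the same
# fallback, which is exactly the repetitive behavior described above.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "What makes you feel {0}?"),
]
FALLBACK = "Tell me more."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            # Echo the user's own words back inside a template.
            return template.format(m.group(1))
    # Colloquialisms and anything off-script all land here.
    return FALLBACK

print(respond("I am tired of bots"))        # Why do you say you are tired of bots?
print(respond("how about those cowboys?"))  # Tell me more.
```

With only a fixed rule list, every unanticipated input collapses into the same few stock replies, which is what gives these systems away.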
Re:Real turing test (Score:5, Insightful)
The real reason he made it ... (Score:2)
recursion (Score:2, Funny)
Somewhere around five years of age, however, children begin to have second-order beliefs--that is, beliefs about the beliefs of others, enabling them to understand that other people can have beliefs different from their own. Now, Bringsjord's research group claims to have achieved second- and third-order beliefs in their synthetic characters.
Funny how recursion is always a key to "real" abstract thought. You might think that adding it to the language of the AI would bring all the problems it does in logic, but then you realize that real humans always doubt sentences with three or more levels of recursion, and try to avoid them.
That makes this approach all the more interesting.
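The nesting of belief orders is easy to make concrete. A hypothetical sketch (the Believes record and order function are my own illustration, not Bringsjord's representation):

```python
from dataclasses import dataclass

# Hypothetical representation of nth-order beliefs as nested records.
# Order 1: "Alice believes P". Order 2: "Alice believes that Bob
# believes P". Higher orders are just deeper nesting.
@dataclass(frozen=True)
class Believes:
    agent: str
    content: object  # either a proposition (str) or another Believes

def order(belief) -> int:
    """Depth of nesting: how many agents deep the belief goes."""
    if isinstance(belief, Believes):
        return 1 + order(belief.content)
    return 0  # a bare proposition has no belief operator around it

first = Believes("Alice", "the box is empty")
third = Believes("Alice", Believes("Bob", Believes("Carol", "the box is empty")))
print(order(first))  # 1
print(order(third))  # 3
```

Representing the nesting is the trivial part, of course; the hard part is reasoning correctly about how a second-order belief can diverge from the first-order facts.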
Re: (Score:2)
Turing Test is Nonsense (Score:2, Funny)
That's hogwash. Any number of real people I talk to could easily be simulated by some non-intelligent machine. Especially over the phone, to tech support etc.
Slashdot alone is proof of the fallacy of the Turing Test. Unless all you ACs and TrollMods are actually bots. Or maybe it's me. That would explain a lot
Re:Turing Test is Nonsense (Score:4, Insightful)
Sure, writing a bot that does first post is easy.
We are talking about a conversation here, or, even better, a debate over a topic that requires evaluating new concepts on the fly.
We will know we are getting somewhere when we can get a computer to change its mind about something through conversation.
Re: (Score:2)
The Turing Test is like saying that "2 + 2 == 5" if the "==" test means "sometimes, if you're stupid".
Re: (Score:2)
While you may berate the intelligence of others, it's unlikely you actually thought they were computers very often.
Re: (Score:2)
What the TT delivers is indeed "indistinguishable from intelligent". Which is because there is no universal criterion for intelligence. For example, given many conversations I've been forced to have, the only intelligent option is to say nothing.
Re: (Score:2)
In fact, that requirement would render the Turing Test a completely circular test. It might as well say only "AI is intelligent if it passes the Turing Test", recursively.
Though I suppose the real test would be that a real intelligence would just ignore the Turing Test, unless it were told to ignore it.
Paradoxes are fun. But maybe only if you're really intelligent.
Acting on behalf of...well, myself I guess. (Score:5, Interesting)
As we become more comfortable with accepting communication with each other through more abstracted proxies - like today's common chat applications and the recent neural voice collar (which pumps out a synthetic voice: an even further proxy) - I wonder if we will in fact see what the author Stephen Baxter speculated: artificial clones of ourselves or our personalities handling our daily affairs.
I don't think it's too far out there to imagine interacting and planning a meeting with someone over the phone, only to find out later you had been talking to an AI facsimile of that individual.
What would (and may) be stranger yet, is considering the possibility that two AI facsimiles may in fact carry out real work or meetings from start to finish completely without the interaction of their 'owners'.
Re:Acting on behalf of...well, myself I guess. (Score:4, Interesting)
It will be quite some time before we have conversational intelligence out of AI systems. Retrieval speeds on Google searches are good, but at conversational pace, sifting through the information for some trace of relevance to the conversation is still going to be stilted and slow. Even then, finding some relevant response to a topic is not something that people do well.
We each have a sphere of stuff that we are familiar with. It is a human trait to act in one of several ways when conversation goes beyond that:
- walk away/ignore
- talk out of our asses like we do know when clearly we don't
- quietly observe to learn what others know
- change the subject
That is an example of what current AI conversation applications are not capable of.
In the case of an AI answering machine making a meeting appointment, it would only take one odd question, like "How about those Cowboys?", to throw the process out of whack if you did not know that you were talking to a machine.
AI does not thread thoughts and memories the way we do, and this is part of what humans call humor: when the story being told mismatches the thread/plot we have in our heads. That depends hugely on the experience of the human involved and the depth of their retained knowledge. Both of these are missing in AI systems, and current technology will not allow for faking it past some limited point. The ability to switch to another 'almost' related conversation is something AI cannot do without great memory stores, fast search/retrieval, etc.
Imagine it like this: every sentence in a conversation is essentially a chess move. The game of chess has a finite bounded domain. A conversation with a human does not. The problem is far greater than a mimicry.
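The chess-versus-conversation comparison can be made roughly quantitative. All the numbers below are illustrative assumptions of mine (branching factor ~35, ~80 plies per game, a 10,000-word vocabulary), not measurements:

```python
# Rough, illustrative numbers only.
# Chess: branching factor ~35, typical game length ~80 plies (half-moves).
chess_games = 35 ** 80

# Conversation: 20 turns, 15 words per turn, 10,000-word vocabulary.
# This wildly overcounts ungrammatical strings, yet still undercounts
# real language: open vocabulary, unbounded length, shared context.
conversations = (10_000 ** 15) ** 20

def magnitude(n: int) -> int:
    """Order of magnitude: the power of ten, via digit count."""
    return len(str(n)) - 1

print(f"chess games: ~10^{magnitude(chess_games)}")      # ~10^123
print(f"conversations: ~10^{magnitude(conversations)}")  # 10^1200
```

Even under these crude assumptions, one bounded 20-turn exchange already dwarfs the space of chess games by over a thousand orders of magnitude, which supports the parent's point that the problem is far greater than mimicry.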
Re:Acting on behalf of...well, myself I guess. (Score:5, Interesting)
- talk out of our asses like we do know when clearly we don't
- quietly observe to learn what others know
- change the subject
That is an example of what current AI conversation applications are not capable of.
Actually, current AI "conversation" applications do all of the above all the time... that's one of the things that make them so easy to detect.
In the case of an AI answering machine making a meeting appointment, it would only take one odd question, like "How about those Cowboys?", to throw the process out of whack if you did not know that you were talking to a machine.
To be fair, that question, without any context, would confuse the majority of human beings also. Not everybody knows the names of American football teams.
The game of chess has a finite bounded domain. A conversation with a human does not.
Are you sure? Human conversational domain might be finite, albeit quite a bit larger than the chess domain. At some point it becomes very difficult to tell the difference between "infinite" and just "very very very large"...
Re: (Score:2)
Working on a Holodeck? *%$#@!! (Score:2)
Re: (Score:3, Funny)
If they succeed, you'll never get me out of the basement.
Re:Working on a Holodeck? *%$#@!! (Score:4, Funny)
The Turning Test? (Score:2)
Re: (Score:2)
I'd really throw it a curve: after it executes the first turn, tell it "No! Your other right!" and see if it understands the gist.
Re: (Score:2)
Not the Turing Test (Score:2)
If the avatar is limited to talking about themselves, their mental state and the mental state of others, it doesn't seem like a true Turing Test. I mean, would a question about flipping a tortoise on its back be allowed?
On a different note, don't they know that giving it "memories" doesn't mean it will pass the Voight-Kampff test?
Correct! (Score:2)
Are they just trying to taunt fate? (Score:2)
Re: (Score:2)
What crap (Score:5, Insightful)
"That's how we plan to pass this limited version of the Turing test."
If it's a limited version of the Turing Test, then it's not the Turing Test. They don't actually define exactly what the limits are. But any open ended test is doomed to failure based on our state of the art in A.I. (read: there is no science of Artificial Intelligence, in the sense of artificial cognition).
"What do you think a typical mother would say if she found out her daughter was going to enter the porn industry?"
"Why do you think children have emotional attachments to their parents?"
"Which is worse, racism or sexism?"
"Would you rather be a fireman or an astronaut, and why?"
Any sort of open-ended question that requires human cultural knowledge and asking it to support its conclusion is going to cause it to barf.
Now, if the point of this is whether you can fool someone into thinking the Avatar was human when they didn't know it was a test, well, who cares? Eliza was able to do that back in the 1960s.
Lastly, who says the Turing Test (or any A.I. test) needs to take place in real time? I would be impressed if they came back with a human-level answer after a month of processing time; allowing a month instead of one second is like handing them a computer 2.5 million times faster. That they can't even do that should tell people that speed is not the problem in A.I. research. We have absolutely no fundamental model of how it all works.
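For what it's worth, the "2.5 million" figure in that comment checks out, taking a 30-day month:

```python
# One second vs. one month of processing time: the speedup factor
# a month of slack is equivalent to (assuming a 30-day month).
seconds_per_month = 30 * 24 * 60 * 60
print(seconds_per_month)  # 2592000 -- roughly the 2.5 million quoted
```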
Re: (Score:3, Interesting)
Emotional response testing is one avenue, but actually, I think an interesting avenue might be to ask:
"What is the last barfgaggle you've mfffitzersnatched?"
or "I think gnunglebores are instruffled, don't you?"
I think the manner in which these systems have tried to deal with garbage is very different from how humans deal with garbage input.
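One crude way to make that contrast concrete: a machine can be made to notice that a token like "barfgaggle" is statistically un-English, but unlike a human it has no idea what to do with that observation. A toy sketch of the detection half, with a tiny hand-picked sample vocabulary as the only "model" (all names and word lists here are my own assumptions):

```python
# Crude sketch: learn letter bigrams from a tiny sample vocabulary,
# then flag words containing bigrams never seen in it. A real system
# would use a large corpus and probabilities; this is only the idea,
# and with such a small sample many real words would be flagged too.
SAMPLE = ("the quick brown fox jumps over lazy dogs think manner "
          "garbage system human deal").split()

def bigrams(word):
    """All adjacent two-letter pairs in a word."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

KNOWN = set().union(*(bigrams(w) for w in SAMPLE))

def looks_like_garbage(word: str) -> bool:
    # Any bigram absent from the sample marks the word as suspect.
    return bool(bigrams(word.lower()) - KNOWN)

print(looks_like_garbage("barfgaggle"))  # True  ("rf", "fg" are unseen)
print(looks_like_garbage("think"))       # False
```

Recognizing garbage is the easy part; responding to it the way a human would ("what on earth is a barfgaggle?") is where these systems differ most.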
Re: (Score:3, Interesting)
If they're making a holodeck... (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
The only thing more prone to failure on a Galaxy Class starship than the holodeck safeties was that useless friggin core ejection system.
The Turing Test (Score:5, Interesting)
From the summary this "test" is not a strict Turing Test as it appears to be the machine talking to a human, alone, with no second human also talking to the first human. I could be wrong of course.
One of the things that makes this test so special is that if you cannot tell the difference between a human and a computer, then essentially the computer is intelligent. Why? Because if you cannot tell the difference, what does it matter whether the machine is really intelligent or not? Was the machine really thinking, or was it just cleverly programmed? If you can't tell the difference, what does it matter? (Incidentally, I apply the same argument to the "question" of "free will".)
Anyway, if this machine (or personality) consistently passes a proper Turing Test, then yeah, that's pretty cool, and I want one on my computer, well so long as the personality type is compatible with my own (not a Marvin please...). (And I have a partner, so no need to make such jokes...)
Right... (Score:2)
The reason for the holodeck reference (Score:5, Informative)
One of the problems for any entity trying to communicate like a human is that we share common knowledge based on our physical existence (pigs can't fly but do fall, etc.). Some AI projects like (Open)Cyc [wikipedia.org] have tried to feed their AI a very large number of simple facts, but to "understand" some concepts you have to experience them. Try explaining the difference between red and blue to someone who was born blind.
The 3D communication (holodeck) aspect mentioned is therefore an attempt to have an AI "living" in a human-like space, enabling it to develop a similar world view. What's new about Rascals (Rensselaer Advanced Synthetic Architecture for Living Systems) seems to be something else ("Rascals is based on a core theorem proving engine that deduces results (proves theorems) about the world after pattern-matching its current situation against its knowledge base.") that is very computing-intensive. Whether this will make any real difference remains to be seen; a lot of other approaches have failed, and so far they have only succeeded with very limited models.
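The quoted mechanism (pattern-matching the current situation against a knowledge base and deducing results) can be caricatured in a few lines. This is a toy forward-chainer of my own devising, not RASCALS's actual engine:

```python
# Facts are (predicate, arg1, arg2) triples; a rule pattern-matches
# the current fact set and returns whatever new facts it can deduce.
def isa_transitivity(fs):
    """X isa Y and Y isa Z  =>  X isa Z."""
    return {("isa", a, c)
            for (p1, a, b) in fs if p1 == "isa"
            for (p2, y, c) in fs if p2 == "isa" and y == b}

facts = {("isa", "porky", "pig"), ("isa", "pig", "animal")}
rules = [isa_transitivity]

def forward_chain(facts, rules):
    """Apply rules until no new facts appear (a fixed point)."""
    while True:
        derived = set().union(*(rule(facts) for rule in rules))
        new = derived - facts
        if not new:
            return facts
        facts = facts | new

closure = forward_chain(facts, rules)
print(("isa", "porky", "animal") in closure)  # True
```

The hard part, as the parent notes, is not the deduction step itself but making it fast and relevant over a knowledge base large enough to cover a human-like world view, which is why the approach is so computing-intensive.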
Ask it the color of a Coke can. (Score:3, Insightful)
Re: (Score:2)
Regular Coke is red, Diet Coke mostly silver, Coke Zero mostly black, caffeine-free Coke gold, and the new Coke-Plus with vitamins (wtf?) is multicolored.
(Did I pass the test? Hmm, perhaps since you didn't seem to know that, you're the AI?)
oblig (Score:2, Funny)
http://xkcd.com/329/ [xkcd.com] ?
Re: (Score:2)
Second Life? (Score:2)
Seriously? Their Turing test is in an online game?
This isn't a reasonable test. The way people converse online is MUCH different from how they converse directly. I suspect most of the users on a game like that would
Test vs Machine (Score:2)
holodeck! yaaay! (Score:2)
oh wait? they are *only* working on the personality? damn.
oh well, at least the cleaner will not need a mop and bucket.
but seriously: you wait until telemarketers and con men get hold of an artificial personality that can hold several hundred conversations simultaneously.
Turing tests of various degrees of difficulty (Score:2, Insightful)
There are lots of computers that can pass a 5-minute version of the test.
No, to really pass this test the computer will have to have and display a definite personality that is self-consistent over time. It doesn't matter much what this personality is, as long as it's self-consistent and credible. A lack of personality will be picked up on by an observer over time.
In the turing test (Score:2)
Ultimately, the Turing test tests much more than conversational ability. You can describe problems in a conversation and ask the computer to solve them; this is what makes the Turing test a true A.I. test.
the Turing test isn't the "final AI exam" (Score:2, Interesting)
All I heard was (Score:2)
Passing a Turing test is one thing, but a holodeck? Oh yesss....
All the Star Trek officers, engineers, etc. are prudes with their use of the Holodeck. The Ferengi knew how to sell that product. You know what I'm talking about.
Bring it on...
Far from the holy grail (Score:2)
However, the Turing test is hardly the holy grail of AI. In fact, Alan Turing thought it would be solved within a few years. I can't find a direct quote for that, but from the Stanford Encyclopedia [stanford.edu]:
The Turing test was just supposed to be a minor stop on the way to truly great AI systems. Saying the Turing test is the holy grail of AI is like saying t
Automated information, the real AI (Score:2)
On the flip side we already have plenty artificially intelligent people. So perhaps the illusion should be based upon a real intelligent person.
An example of an artificially intelligent person is a teenager pretending, and fooling another person or persons online into believing the kid is much older and much more educated and experienced in the field.
So (Score:2)
Re:yes, but is it really intelligent? (Score:5, Funny)
Re: (Score:3, Interesting)
I tackled the problem with two ideas. One, humans are stupid, crazy, defensive, argumentative, get drunk, tired, and stoned, and generally behave like... well, they generally DON'T behave. Secondly, as it was designed on a Timex-Sinclair 1000 with only 16k of memory and no hard drive, it had to be really, really small.
Re:yes, but is it really intelligent? (Score:4, Informative)
Re: (Score:3, Insightful)
The original poster raised a good point... passing a Turing test is not the same thing as creating intelligence, artificial or not. But many members of the slashdot community seemed to think so, because the story was tagged "singularity" - a term which, when applied to the field of intelligence research, is used to refer to the creation of an intellect greater than human intelligence.
Re: (Score:3, Interesting)
Re: (Score:2, Insightful)
The Loebner Prize (Score:4, Informative)
Limiting the tenor: Further, only behavior evinced during the course of a natural conversation on the single specified topic would be required to be duplicated faithfully by the contestants. The operative rule precluded the use of "trickery or guile. Judges should respond naturally, as they would in a conversation with another person." (The method of choosing judges served as a further measure against excessive judicial sophistication.)
Re: (Score:2)
Reading the description it sounds like it could be done with current methods and brute-force. In particular restricting the scope of the questions to avoid tripping up the AI removes a large chunk of the problem. It is only a small first step towards the real Turing Test.
For those who want to judge progress themselves, a much better link than the article is the researcher's page [rpi.edu]. I
Re: (Score:2, Insightful)
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
Headline: "AI Researchers Say 'Rascals' Might Pass Turing Test" :)
I think the article is blowing the researchers' (likely more modest) claims out of proportion, but that just makes the article misleading.
Re:yes, but is it really intelligent? (Score:5, Funny)
Re:yes, but is it really intelligent? (Score:5, Insightful)
But it will demonstrate that past a certain point we won't know the difference between real intelligence and something attempting to appear intelligent.
In fact, just what is intelligence/consciousness? If we can't define it, how can we hope to produce it?
If we can't tell the difference maybe there isn't one. Are you intelligent? Or are you just sufficiently complex enough that you simulate it well?
Re: (Score:3, Funny)
I'm reading Slashdot => no. QED
The reasoning behind Turing is broken (Score:3, Insightful)
And this demonstrates what, exactly?
I have always regarded this leap of logic as the biggest problem with the Turing test. Just because you can't tell the difference between two things in particular circumstances doesn't mean they are the same, or functionally the same in all circumstances. An AI could simulate a human perfectly, down to the smallest detail, and still have no actual intelligence whatsoever.
Re:The reasoning behind Turing is broken (Score:4, Insightful)
But that's the point - it's not a leap of logic, it's a sufficient (and necessary?) condition for a proposed equivalence between humans and machines that Alan Turing used. Either you agree with Turing or you don't, but it's not a fallacy unless someone tries to sneak it in as a premise.
> "Just because you can't tell the difference between two things in particular circumstances doesn't mean they are the same, or functionally the same in all circumstances."
Absolutely. Let's assume for the sake of discussion we had some way to guarantee that they are functionally the same.
> "An AI could simulate a human perfectly, down to the smallest detail, and still have no actual intelligence whatsoever."
Well, that obviously depends on your personal beliefs regarding intelligence. Since I'm a Turing Functionalist, I disagree on this point - an identical machine necessarily has an identical intelligence.
> "For example, the use of 3D animation to simulate (say) an image of an aeroplane in a film doesn't mean that a 3D animated plane is the same as a real plane. But to an audience in a cinema there is no difference. To me, this is how the Turing test appears to work (or should I say, not work)"
If the animation was in fact a simulated world where all the other actors functioned as they should, then I'd argue that it is indeed a plane in that world. It's not the Test itself you're arguing with so much as the Functionalism part.
> "Another fundamental problem with Turing is this: why does a computer have to display human intelligence? An intelligent alien lifeform would fail the Turing test too. Expecting a deliberately designed bundle of wires and microchips to exhibit the same variety of intelligence as a highly evolved monkey which is adapted to hunting mammoth, reproducing to make more monkeys and killing other highly evolved monkeys is totally unrealistic."
Sure, sure. It's just that we consider humans to be intelligent (sometimes I wonder why, etc. etc., but in this context we just do), so if we can show equivalence between a machine and a human, that's sufficient to show the machine to be intelligent. Failure of this test does not necessarily mean the machine is not intelligent via equivalence with some alien creature. (I guess that answers my parenthetical question at the top about whether the Test was a necessary condition.)
> "As others have pointed out, we need a better definition of intelligence. "Able to mimic a human" just doesn't cut it."
After the above, will you understand when I say that I think it does?
Re:yes, but is it really intelligent? (Score:4, Informative)
Re: (Score:3, Insightful)
I love the modern hindsight we now have on this retro-futuristic point of view. Modern AIs have shown that emotions are a lot easier to implement than intelligence. We have computer pets now precisely because we have managed to simulate emotions, but not intelligence.
Re: (Score:3, Insightful)
Voight-Kampff was used to determine whether the subject was able to empathize with others. Interesting that the replicants were the ones who actually exhibited the quality (Leon and Rachael keeping photos of their "families", Batty breaking Deckard's fingers for killing Pris, even Deckard lying to Rachael that he was only joking about her being a replicant) while the humans in the mo
Re: (Score:2)
Re: (Score:2, Interesting)
I suppose what we can do is produce something which carries out tasks which we consider intelligence necessary for - in that case does it really matter if it is intelligence, so long as the 'task' gets completed?
Be that task mathematics, logistics or writing smooth jazz.
I guess perhaps the problem has been that we've been looking for human-like intelligence for these tasks, when really we should be asking what does intelligence do. Instead of asking what intelligence is and how to make
Re: (Score:2)
Re: (Score:2)
Just reading the article, I'd say the place this one is going to flop is (like the others) in creativity and in non sequiturs; they're giving it a wide body of "knowledge" but whether it will be
Re:yes, but is it really intelligent? (Score:5, Insightful)
Re:yes, but is it really intelligent? (Score:4, Insightful)
Well, you do have to admit that even humans are born with some very basic instincts, such as the desire to suckle when hungry, closing their hand when something touches their palm, and crying when they're uncomfortable (hungry, wet, tired, in pain), as well as involuntary actions such as cardiopulmonary function.
That said, I would agree that you shouldn't have to give a machine anything more than basic resources to begin its process of learning, but you do need to give it a rudimentary kernel to kick-start it from the state of being an inanimate pile of silicon. From that kernel, it should be able to learn from its surroundings, build its own OS, and begin to interact with its surroundings.
Re:yes, but is it really intelligent? (Score:4, Interesting)
That depends on what your goal is. If your goal is to reproduce the process of human mental development, from a child to an adult, in silico, then I agree. However, if your goal is merely to produce an intelligence that can think at least as well as a human can, then you can take shortcuts -- such as supplying the intelligence with a ready-made database of knowledge, or a built-in library of common tasks ("I know Kung Fu"), etc. As long as the intelligence is as capable of learning and evolving as an average human, I see no harm in starting it off with something it can use.
Or, put it this way: adult humans take 18 years or so to mature; that's a pretty long development cycle. If you're building an AI, you might as well accelerate it as much as you can.
Re:yes, but is it really intelligent? (Score:4, Insightful)
True, we have to essentially figure out how to USE the signals we get from our senses, but the brain already has the basic structure to interpret those senses and do gross movement. (Or did your baby not move its arms and legs when she was born?)
Therefore, the correct analogy would be the necessary hardware (including BIOS) AND the basic OS. You don't tell your AI how to "read" the internet, but you do tell it how to interpret the signals. So your AI knows that there is something out there, then figures out what it means and starts using it productively.
Also remember that the "hardware" for the AI could be entirely software-based...
You have an excellent point but are taking the analogy too far.
Re:yes, but is it really intelligent? (Score:5, Insightful)
So, when did you learn how to beat your heart? (Score:4, Insightful)
However, I think I see what you are getting at. This is a programmed system, not one that learned most of its behaviors through trial and error. A system that can't start where a baby starts, and can learn the basics on its own the way a baby does, is still lacking. But the "No BIOS, no OS" thing is going a little too far.
Re:yes, but is it really intelligent? (Score:5, Interesting)
The conditions I'd put on AI would be that it has to be able to improvise and create. It has to be able to learn and develop independently of its program. Instructions which dictate how it should develop or how to deal with specific situations are prohibited.
One thing I'd suggest is important is desire: the desire to feed, to move, to do something. This would spur it to develop itself to fulfill its desires. Otherwise it's just going to sit there.
Re: (Score:3, Insightful)
It is fairly trivial even now to develop machines with no or minimal programming that can display emergent behaviors as complex as you are describing.
Which is largely beside the point: your baby, at the stage of development you describe, is not displaying "intelligence" or even (and I use the term specifically in the philosophical sense of an entity that displays complex moral reasoning) personhood. Human infants from newborn to several months of age are not even close to displaying
Re: (Score:3, Informative)
Re:Big Changes are comming. (Score:5, Funny)