
AI Researchers Say 'Rascals' Might Pass Turing Test

Posted by Zonk
from the who-wouldn't-love-those-scamps dept.
An anonymous reader writes "Passing the Turing test is the holy grail of artificial intelligence (AI) and now researchers claim it may be possible using the world's fastest supercomputer (IBM's Blue Gene). This version of the Turing test pits a human conversing with a synthetic character powered by Rascals software crafted at Rensselaer Polytechnic Institute. RPI is aiming to pass AI's final exam this fall, by pairing the most powerful university-based supercomputing system in the world with its new multimedia group which is designing a holodeck, a la Star Trek."
This discussion has been archived. No new comments can be posted.


  • by Asmor (775910) on Thursday March 13, 2008 @04:05PM (#22743022) Homepage
    Will it have a little AIBO dog with a ring around one eye?
    • Naaah.. it'll just sing "I'm in the Mood for Love" in a cracking voice instead of crooning "Daisy".
  • by clonan (64380) on Thursday March 13, 2008 @04:08PM (#22743046)
    ...want the history books to report that the FIRST AI was a Rascal?
  • Misread (Score:5, Funny)

    by jekewa (751500) on Thursday March 13, 2008 @04:10PM (#22743090) Homepage Journal
    I didn't read the article, but at first glance thought the title was "racists might pass Turing test."
  • by Shimmer (3036) <> on Thursday March 13, 2008 @04:11PM (#22743094) Homepage Journal
    I think the people behind this misunderstand the difficulty (and purpose) of passing the Turing test. The problem isn't in manufacturing a believable back story for your program's "character". The problem is in communicating effectively in spite of the inherent ambiguity, fuzziness, and confusion of human languages. I think it's very unlikely that any team is about to meet this threshold.
    • Re: (Score:3, Funny)

      by Kelbear (870538)
      I can't imagine how to prepare an AI against a chatroom.

      Hotstud42: ne 1 there?
      Hotstud42: SHO ME YR BOOBIES!
      Hotstud42: I dn't think she's there.
      Hotstud42: If ur ther ewave at the camera!
      Hotstud42: c'mon if yu show ur tits I'll pay 4 private.

      Naturally, should the Turing test succeed, the first step will be to automate webcam porn.
    • by Gat0r30y (957941)
      Excellent point. Do they have any plans to make sure the "character" will understand and respond properly to a context specific joke? Much of our humor depends on that

      inherent ambiguity, fuzziness, and confusion of human languages
      • by gardyloo (512791)

        Excellent point. Do they have any plans to make sure the "character" will understand and respond properly to a context specific joke? Much of our humor depends on that
        ++?????++ Out of Cheese Error. Redo From Start.
    • I agree. This seems like some No Child Left Behind, teaching-to-the-Turing-test kind of BS to me...
    • Real turing test (Score:5, Insightful)

      by goombah99 (560566) on Thursday March 13, 2008 @04:31PM (#22743396)
      The real Turing test is being able to phish in a chat room. Once you can automate that, you're golden, and it's pretty unarguable that it passed a Turing test. Slashdot had an article a while back about robo-chats doing just that, but they relied on pretending to be non-native English speakers.

      I wonder if it's easier to do this in Japanese than in English. From what I've read, Japanese is easier to text message in because the object and direct object are usually inferred, and there are no cases or articles. A single sentence can be one character and just a verb. Thus, by constraining the nuance into discrete choices rather than a sparsely populated product space of self-consistent cases, predicates, and adjectives, perhaps Japanese would be easier to generate Turing-worthy text in.

      Or maybe the reverse is true. But I'd bet one was a lot easier than the other.

      • Japanese is a pro-drop language, in that you can leave out subjects or objects in speech if it's clear from discourse what you're talking about.

        But Japanese definitely has a case system where the inflectional morphology is indicated by particles that follow the modified noun.
      • Re:Real turing test (Score:4, Interesting)

        by MaWeiTao (908546) on Thursday March 13, 2008 @05:28PM (#22744076)
        Japanese has all kinds of complexities. It has complex conjugations, first of all; then there's the whole system of politeness depending on who's being addressed, although that may be less relevant online. While at its core Chinese is fairly intuitive and straightforward, it quickly gets very complex.

        Both Japanese and Chinese use all sorts of expressions, many of which make no sense whatsoever when translated literally. This becomes apparent when trying to use those translation tools. The translation ends up being complete gibberish to the point of being comedic.

        Because people of so many nationalities speak English it's easier for an AI to fool people because there really is no standard for the language. English-speakers are used to hearing it spoken in all sorts of different ways, with a wide variety of expressions.

        Automated chats are always obvious for what they are because they tend to stupidly repeat the same few comments over and over again. They're also incapable of responding properly to a user's comments, and colloquialisms always trip up these systems.
      • by ucblockhead (63650) on Thursday March 13, 2008 @07:09PM (#22745362) Homepage Journal
        To phish successfully, you have to fool one human in a thousand. To pass the Turing test, you have to be able to fool all humans.
  • Well, it ISN'T skynet - the authors made an AI character that they could talk with - that wouldn't mind if they totally geek out on WOW topics or file system discussions ;)
  • recursion (Score:2, Funny)

    by aleph42 (1082389) *

    Somewhere around five years of age, however, children begin to have second-order beliefs--that is, beliefs about the beliefs of others, enabling them to understand that other people can have beliefs different from their own. Now, Bringsjord's research group claims to have achieved second- and third-order beliefs in their synthetic characters.

    Funny how recursion is always a key to "real" abstract thought. You might think that adding it to the language of the AI will bring all the problems it does in logic, but then you realize that real humans always doubt sentences with three levels of recursion (or more), and try to avoid them.

    That makes this approach all the more interesting.

    • by gardyloo (512791)

      you realize that real humans always doubt sentences with three levels of recursion (or more), and try to avoid them.
      Perhaps not hard enough.

  • The Turing Test for AI says that if an AI can fool a human into thinking it's human by communicating over a teletype, then it's really "intelligent".

    That's hogwash. Any number of real people I talk to could easily be simulated by some non-intelligent machine. Especially over the phone, to tech support etc.

    Slashdot alone is proof of the fallacy of the Turing Test. Unless all you ACs and TrollMods are actually bots. Or maybe it's me. That would explain a lot :P.
    • by geekoid (135745) <> on Thursday March 13, 2008 @04:25PM (#22743310) Homepage Journal
      No, actually, they can't. You think they can, but that's because you can determine patterns in human behaviors, something computers can't do very well yet.

      Sure, writing a bot that does first post is easy.

      We are talking about a conversation here, or even better a debate over a topic that requires evaluating new concepts on the fly.

      We will know we are getting somewhere when we can get a computer to change its mind on something from a conversation.
      • by Doc Ruby (173196)
        Nah, like I said, it depends on the person doing the testing. Plenty of people I meet all the time would be convinced by ELIZA. And plenty of people I meet all the time would fail such a test run against them by someone normal.

        The Turing Test is like saying that "2 + 2 == 5" if the "==" test means "sometimes, if you're stupid".
    • by blueg3 (192743)
      There's undoubtedly a silent assumption of using a real testing process. That is, attempting this using many humans as the tester and as the computer's "competition".

      While you may berate the intelligence of others, it's unlikely you actually thought they were computers very often.
  • It is interesting that they have used a 'guinea pig' student to 'bare all' to the knowledge base. It would seem, then, that this AI is in fact a type of facsimile of this student.

    As we become more comfortable with accepting communication with each other through more abstracted proxies - like common chat applications currently and the recent neural voice collar (which pumps out a synthetic voice - even further proxy) - I wonder if we will in fact see what the author Stephen Baxter speculated, artificial clones of ourselves or our personalities handling our daily affairs.

    I don't think it's too far out there to imagine interacting and planning a meeting with someone over the phone, only to find out later you had been talking to an AI facsimile of that individual.

    What would (and may) be stranger yet, is considering the possibility that two AI facsimiles may in fact carry out real work or meetings from start to finish completely without the interaction of their 'owners'.
    • by zappepcs (820751) on Thursday March 13, 2008 @04:29PM (#22743370) Journal
      Well, imagination is a great thing, but I've not yet seen anything that even comes close to that kind of imitation of a human. Not even close. It takes a max of two questions to figure out that it is a machine. The scope of what the facsimile is programmed with/for can be outstripped quickly.

      It will be quite some time before we have conversational intelligence out of AI systems. Retrieval speeds on Google searches are good, but at conversational pace, sifting through the information for some trace of relevance to the conversation is still going to be stilted and slow. Even then, finding some relevant response to a topic is not something that people do well.

      We each have a sphere of stuff that we are familiar with. It is a human trait to act in one of several ways when conversation goes beyond that:

      - walk away/ignore
      - talk out of our asses like we do know when clearly we don't
      - quietly observe to learn what others know
      - change the subject

      That is an example of what current AI conversation applications are not capable of.

      In the case of an AI answering machine making a meeting appointment, it would only take one odd question, like "How about those Cowboys?", to throw the process out of whack if you did not know that you were talking to a machine.

      AI does not thread thoughts and memories in the same way that we do, and this is part of what humans call humor: when the story being told mismatches the thread/plot that we have in our heads. That depends hugely on the experience of the human involved and the depth of their retained knowledge. Both of these are missing in AI systems, and current technology will not allow for faking it past some limited point. The ability to switch to another 'almost' related conversation is something that AI cannot do without great memory stores, fast search/retrieval, etc.

      Imagine it like this: every sentence in a conversation is essentially a chess move. The game of chess has a finite bounded domain. A conversation with a human does not. The problem is far greater than a mimicry.
      • by Jeremi (14640) on Thursday March 13, 2008 @04:46PM (#22743586) Homepage
        - walk away/ignore
        - talk out of our asses like we do know when clearly we don't
        - quietly observe to learn what others know
        - change the subject

        That is an example of what current AI conversation applications are not capable of.

        Actually, current AI "conversation" applications do all of the above all the time... that's one of the things that make them so easy to detect.

        In the case of an AI answering machine making a meeting appointment, it would only take one odd question, like "how about those Cowboys?" to throw the process out of whack if you did not know that you were talking to a machine.

        To be fair, that question, without any context, would confuse the majority of human beings also. Not everybody knows the names of American football teams ;^)

        The game of chess has a finite bounded domain. A conversation with a human does not.

        Are you sure? Human conversational domain might be finite, albeit quite a bit larger than the chess domain. At some point it becomes very difficult to tell the difference between "infinite" and just "very very very large"...
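The "finite but very very large" point above yields to back-of-envelope arithmetic. Every number below is an assumption picked purely for illustration, but even coarse guesses put a short conversation far beyond the chess game tree while keeping it finite:

```python
# Back-of-envelope comparison of the chess domain with a conversational one.
# All numbers here are illustrative assumptions, not measured quantities.
shannon_number = 10 ** 120              # classic estimate of the chess game tree

vocab = 10_000                          # assumed working vocabulary
words_per_utterance = 20                # assumed utterance length
utterance_space = vocab ** words_per_utterance   # 10^80 word sequences
exchange_space = utterance_space ** 10           # a ten-turn exchange: 10^800

print(exchange_space > shannon_number)  # prints True: far larger, yet finite
```

Of course, almost all of those sequences are gibberish, so the space of *sensible* conversations is far smaller, but the point stands that "unbounded-feeling" need not mean infinite.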

    • by Joe Tie. (567096)
      Interesting, that's actually how I go about it as well. Though mine's just hobbyist stuff, unlikely to make it past the yawns of the household. Still, one of the cool things about that method is it provides a good steady input of data. Scraping my own online activity has provided more than one instance of me being annoyed by an aspect of it not working correctly, and another person pointing out that I usually get it wrong in the same way.
  • If they succeed I'll never get my kids out of the basement!
  • Clearly, in order to pass the Turning test, the AI must be an ambiturner. The easiest way to determine this is to have it turn right, and then ask it to turn left. If it can't do this, it fails, just like Zonk fails at editing.
    • by Radon360 (951529)

      I'd really throw it a curve: after it executes the first turn, tell it, "No! Your other right!" and see if it understands the gist.

    • by og_sh0x (520297)
      All that test would prove is that NASCAR drivers aren't believably human.
  • If the avatar is limited to talking about themselves, their mental state and the mental state of others, it doesn't seem like a true Turing Test. I mean, would a question about flipping a tortoise on its back be allowed?

    On a different note, don't they know that giving it "memories" doesn't mean it will pass the Voight-Kampff test?

    • A real Turing Test is supposed to be a normal conversation between a human being and a machine, NOT a contrived scenario limited to particular subjects or format.
  • If ever an article needed a "whatcouldpossiblygowrong" tag. Turing Test AI's combined with holodecks? All we need now is to pair it with those carnivore hunter-seeker robots [] that power themselves with fermented slug-flesh and just wait for them to figure out humans have more meat.
    • by Gat0r30y (957941)

      If ever an article needed a "whatcouldpossiblygowrong" tag
      Tell me about it, the last time the holoshed broke and all the characters became real I got slapped with 4 paternity suits! Well, that's enough for today, if anyone needs me I'll be in the holoshed.
  • What crap (Score:5, Insightful)

    by Reality Master 101 (179095) <RealityMaster101 ... m ['gma' in gap]> on Thursday March 13, 2008 @04:16PM (#22743198) Homepage Journal

    "That's how we plan to pass this limited version of the Turing test."

    If it's a limited version of the Turing Test, then it's not the Turing Test. They don't actually define exactly what the limits are. But any open ended test is doomed to failure based on our state of the art in A.I. (read: there is no science of Artificial Intelligence, in the sense of artificial cognition).

    "What do you think a typical mother would say if she found out her daughter was going to enter the porn industry."

    "Why do you think children have emotional attachments to their parents?"

    "Which is worse, racism or sexism?"

    "Would you rather be a fireman or an astronaut, and why?"

    Any sort of open-ended question that requires human cultural knowledge and asking it to support its conclusion is going to cause it to barf.

    Now, if the point of this is whether you can fool someone into thinking the Avatar was human when they didn't know it was a test, well, who cares? Eliza was able to do that back in the 1970s.

    Lastly, who says the Turing Test (or any A.I. test) needs to take place in real time? I would be impressed if they came back with a human-level answer in a month of processing time. That's equivalent to a computer 2.5 million times faster than a computer that could produce the answer in one second. That they can't even do that should tell people that speed is not the problem in A.I. research. We have absolutely no fundamental model of how it all works.
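The ELIZA mentioned above fooled people with nothing more than keyword pattern-matching and pronoun reflection. Here is a minimal sketch of that technique; the rules and canned phrases are invented for illustration and are not Weizenbaum's original DOCTOR script:

```python
import re

# Minimal ELIZA-style responder. The rules and replies below are invented
# for illustration; they are not Weizenbaum's original script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)\?$"), "Why do you ask that?"),
]

def reflect(fragment):
    """Swap first and second person so an echoed phrase reads as a reply."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(utterance):
    """Return a canned reply by matching the first applicable pattern."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Tell me more."  # default keeps the conversation moving

print(respond("I need a holodeck"))  # prints: Why do you need a holodeck?
```

That this trick fools anyone at all is exactly the parent's point: fooling a careless observer says little about intelligence.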

    • Re: (Score:3, Interesting)

      by samkass (174571)
      "The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over but it can't. Not without your help. But you're not helping." ...

      Emotional response testing is one avenue, but actually, I think an interesting avenue might be to ask:
      "What is the last barfgaggle you've mfffitzersnatched?"
      or "I think gnunglebores are instruffled, don't you?"

      I think the manner in which these systems have tried to deal with garbage is very different than how humans deal with garbage in
  • by netruner (588721) on Thursday March 13, 2008 @04:16PM (#22743200)
    For heaven's sake - build a freakin killswitch into the thing!
    • by Cheesey (70139)
      No way! Holodeck + AI = universal plot generator []. Don't worry about a killswitch. If Star Trek has taught us anything, it's that all problems can be solved within 45 minutes if you just reverse the polarity of the photon warp field tachyon emitter array, which is way more interesting than just having an "off" button.
    • Dude, it would just fail.

      The only thing more prone to failure on a Galaxy Class starship than the holodeck safeties was that useless friggin core ejection system.
  • The Turing Test (Score:5, Interesting)

    by apathy maybe (922212) on Thursday March 13, 2008 @04:17PM (#22743204) Homepage Journal
    For those of you who don't know what the Turing Test is (how did you manage to find Slashdot?), to quote from Wikipedia

    ... a human judge engages in a natural language conversation with one human and one machine, each of which try to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. In order to keep the test setting simple and universal (to explicitly test the linguistic capability of the machine instead of its ability to render words into audio), the conversation is usually limited to a text-only channel ...

    From the summary, this "test" is not a strict Turing Test, as it appears to be the machine talking to a human alone, with no second human also talking to the first human. I could be wrong, of course.

    One of the things that makes this test so special is that if you cannot tell the difference between a human and a computer, then essentially the computer is intelligent. Why? Because if you cannot tell the difference, what does it matter if the machine is really intelligent or not? Was the machine really thinking, or was it just cleverly programmed? The point is, however, that if you can't tell the difference, it doesn't matter. (Incidentally, I apply the same argument to the "question" of "free will".)

    Anyway, if this machine (or personality) consistently passes a proper Turing Test, then yeah, that's pretty cool, and I want one on my computer, well so long as the personality type is compatible with my own (not a Marvin please...). (And I have a partner, so no need to make such jokes...)
    • And as I understand it, the test they propose is not only one-sided, it is limited in scope. That is not a Turing Test, in which one is supposed to engage in free conversation. There is a world of difference.
  • by chriss (26574) * <> on Thursday March 13, 2008 @04:18PM (#22743218) Homepage

    One of the problems for any entity trying to communicate like a human is that we share some common knowledge which is based on our physical existence (pigs can't fly, but fall etc.) Some AI projects like (Open)Cyc [] have tried to feed their AI with a very large number of simple facts, but to "understand" some concepts you have to experience them. Try to explain the difference between red and blue to someone who was born blind.

    The 3D communication (holodeck) aspect mentioned is therefore an attempt to have an AI "living" in a human-like space, to enable it to develop a similar world view. What's new about Rascals (Rensselaer Advanced Synthetic Architecture for Living Systems) seems to be something else ("Rascals is based on a core theorem proving engine that deduces results (proves theorems) about the world after pattern-matching its current situation against its knowledge base.") that is very computing-intensive. Whether this will make any real difference remains to be seen; a lot of other approaches have failed, and so far they have only succeeded with very limited models.
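The quoted description of Rascals (pattern-matching the current situation against a knowledge base, then deducing results) is, at its simplest, forward chaining. Below is a toy sketch of that loop; the facts, rule names, and the second-order-belief example are all invented here for illustration, and the real engine is a full theorem prover, not a ten-line loop:

```python
# Toy forward-chaining deduction: repeatedly pattern-match each rule's
# premises against the fact base and add its conclusion until nothing
# new follows. Purely illustrative; not RPI's actual Rascals engine.
def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (frozenset_of_premises, conclusion)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# A second-order-belief example in the spirit of the article (names invented):
# the agent reasons about what another agent believes.
facts = {"eddie_saw_ball_moved", "micah_left_before_move"}
rules = [
    (frozenset({"micah_left_before_move"}), "micah_did_not_see_move"),
    (frozenset({"eddie_saw_ball_moved", "micah_did_not_see_move"}),
     "eddie_believes_micah_believes_ball_unmoved"),
]
print("eddie_believes_micah_believes_ball_unmoved" in forward_chain(facts, rules))
# prints True
```

The computational expense the comment mentions comes from doing this kind of matching over an enormous knowledge base, which is where the supercomputer enters the picture.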

  • by PDX (412820) on Thursday March 13, 2008 @04:19PM (#22743230)
    Visual memory hasn't yet been developed enough for computers to use generalizations. Specific real data isn't available to them. Google is trying to use wetware to sort images, process dead links, and form new commerce content. When all three are done completely by computers, they will have enough smarts to pass the Turing test reliably.
    • by AJWM (19027)
      What kind of Coke can?

      Regular Coke is red, Diet Coke mostly silver, Coke Zero mostly black, caffeine-free Coke gold, and the new Coke-Plus with vitamins (wtf?) is multicolored.

      (Did I pass the test? Hmm, perhaps since you didn't seem to know that, you're the AI?)
  • oblig (Score:2, Funny)

    by aleph42 (1082389) *
    But can it do THAT: [] ?
  • ...the Turing test will be limited to controlling avatars in a virtual world--probably Second Life. Both the synthetic character and his human doppelganger will be operating different avatars. If the human-operators can't tell who the RPI synthetic character is, then it passes the Turing test...

    Seriously? Their Turing test is on an online game?

    This isn't a reasonable test. The way people converse online is MUCH different from how they converse in person. I suspect most of the users on a game like that would
  • What is the difference between the Turing Test and the Turing Machine? I thought it was less about mimicking a human and more about generalities that allow a machine to mimic every possible thing via programming. I'm off to read....
  • yaaay! design that holodeck for me. the sooner I can move into my virtual world and live with my simulated Monica Bellucci and her three simulated identical sisters the better (though I might have to debug and apply a patch for her personality).

    oh wait? they are *only* working on the personality? damn.

    oh well, at least the cleaner will not need a mop and bucket.

    but seriously: you wait until telemarketers and con men get hold of an artificial personality that can hold several hundred conversations sim
  • Just because you can fool a human after interacting for an hour doesn't mean you can keep up the act for a week or a month or a year.

    There are lots of computers that can pass a 5-minute version of the test.

    No, to really pass this test the computer will have to display a definite personality that is self-consistent over time. It doesn't matter much what this personality is, as long as it's self-consistent and credible. A lack of a personality will be picked up on by an observer over tim
  • The inquirer tries to discover whether he is talking to a machine or not. Being undetected during casual conversation (and I can bet even this is far, far from reaching that; they're just making PR) is one thing; being undetected when actively tested is different.

    Ultimately, the Turing test tests much more than the ability to converse. You can describe problems in a conversation and ask the computer to solve them; this is what makes the Turing test a true A.I. test.
  • It's more like the entrance exam. That is, if a computer cannot be reliably distinguished from a human being (within the confines of the test setup), then we MIGHT have something bordering on intelligence. It's a great achievement and a landmark, but it's not the final test.
  • All I heard was blah blah blah turing test blah blah IBM blah blah designing a holodeck, a la Star Trek .

    Passing a Turing test is one thing, but a holodeck? Oh yesss....

    All the Star Trek officers, engineers, etc. are prudes with their use of the holodeck. The Ferengi knew how to sell that product. You know what I'm talking about :)

    Bring it on...
  • It's been a while since I was in college.

    However, the Turing test is hardly the holy grail of AI. In fact, Alan Turing thought it would be solved within a few years. I can't find a direct quote for that, but from the Stanford Encyclopedia []:

    There is little doubt that Turing would have been disappointed by the state of play at the end of the twentieth century.

    The Turing test was just supposed to be a minor stop on the way to truly great AI systems. Saying the Turing test is the holy grail of AI is like saying t

  • Artificial Intelligence is the by-product of automating enough information (static, active, and dynamic) to generate the illusion of human intelligence.

    On the flip side, we already have plenty of artificially intelligent people. So perhaps the illusion should be based upon a real intelligent person.

    An example of an artificially intelligent person is a teenager pretending, and fooling another online person or persons into believing, that the kid is much older and much more educated and experienced in the field
  • by scubamage (727538)
    Imagine if one were to combine this with the Microsoft online life project that /. had an article on a few weeks ago... all experiences and interactions in your life were recorded, uploaded, and fed to a digital "copy" of you. The possibilities of that kind of tech would be INCREDIBLE.
