Software Sci-Fi Technology

AI Researchers Say 'Rascals' Might Pass Turing Test 337

An anonymous reader writes "Passing the Turing test is the holy grail of artificial intelligence (AI) and now researchers claim it may be possible using the world's fastest supercomputer (IBM's Blue Gene). This version of the Turing test pits a human conversing with a synthetic character powered by Rascals software crafted at Rensselaer Polytechnic Institute. RPI is aiming to pass AI's final exam this fall, by pairing the most powerful university-based supercomputing system in the world with its new multimedia group which is designing a holodeck, a la Star Trek."
This discussion has been archived. No new comments can be posted.

  • It is interesting that they have used a 'guinea pig' student to 'bare all' to the knowledge base. It would seem, then, that this AI is in fact a type of facsimile of this student.

    As we become more comfortable with accepting communication with each other through increasingly abstracted proxies - like today's common chat applications and the recent neural voice collar (which pumps out a synthetic voice, an even further proxy) - I wonder if we will in fact see what the author Stephen Baxter speculated: artificial clones of ourselves or our personalities handling our daily affairs.

    I don't think it's too far out there to imagine interacting and planning a meeting with someone over the phone, only to find out later you had been talking to an AI facsimile of that individual.

    What would (and may) be stranger yet is the possibility that two AI facsimiles carry out real work or meetings from start to finish entirely without the involvement of their 'owners'.
  • The Turing Test (Score:5, Interesting)

    by apathy maybe ( 922212 ) on Thursday March 13, 2008 @04:17PM (#22743204) Homepage Journal
    For those of you who don't know what the Turing Test is (how did you manage to find Slashdot?), to quote from Wikipedia

    ... a human judge engages in a natural language conversation with one human and one machine, each of which try to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. In order to keep the test setting simple and universal (to explicitly test the linguistic capability of the machine instead of its ability to render words into audio), the conversation is usually limited to a text-only channel ...


    From the summary, this "test" is not a strict Turing Test, as it appears to be the machine talking to a human alone, with no second human also talking to the first. I could be wrong, of course.

    One of the things that makes this test so special is that if you cannot tell the difference between a human and a computer, then essentially the computer is intelligent. Why? Was the machine really thinking, or was it just cleverly programmed? The point is, if you can't tell the difference, what does it matter? (Incidentally, I apply the same argument to the "question" of "free will".)

    Anyway, if this machine (or personality) consistently passes a proper Turing Test, then yeah, that's pretty cool, and I want one on my computer, well so long as the personality type is compatible with my own (not a Marvin please...). (And I have a partner, so no need to make such jokes...)
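The setup quoted from Wikipedia above can be sketched as a bare-bones test harness. Everything here (the class interfaces, the `guess_machine` method name) is invented for illustration; the real test specifies no API, only the text-only channel and the judge's task:

```python
import random

def turing_test(judge, human, machine, rounds=5):
    """One session of the imitation game: the judge exchanges text with
    two unlabeled parties and must say which one is the machine."""
    labels = ["A", "B"]
    random.shuffle(labels)  # hide which label went to which party
    parties = dict(zip(labels, [human, machine]))
    transcript = {"A": [], "B": []}
    for _ in range(rounds):
        for label in ("A", "B"):
            question = judge.ask(label, transcript)
            answer = parties[label].reply(question)
            transcript[label].append((question, answer))
    guess = judge.guess_machine(transcript)  # "A" or "B"
    # The machine passes this session if the judge fingers the human.
    return parties[guess] is human
```

A single session proves little either way; in practice the judge's error rate over many sessions (against many judges) is what matters.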
  • An interesting point.

    I suppose what we can do is produce something which carries out tasks we consider intelligence necessary for; in that case, does it really matter whether it is intelligence, so long as the task gets completed?

    Be that task mathematics, logistics or writing smooth jazz.

    I guess perhaps the problem has been that we've been looking for human-like intelligence for these tasks, when really we should be asking what does intelligence do. Instead of asking what intelligence is and how to make it, perhaps we should just be searching for ways to accomplish the tasks intelligence tackles so well.

    During the early days of powered flight, many found it difficult to give up the notion of flapping wings... after all, since everything that flew under its own power used wings which flapped, flapping must be needed as well as wings. Rocketry might be an example of flying without wings or flapping.

    I guess we can think something along these lines - it doesn't have to flap its wings to fly.
  • by zappepcs ( 820751 ) on Thursday March 13, 2008 @04:29PM (#22743370) Journal
    Well, imagination is a great thing, but I've not yet seen anything that even comes close to that kind of imitation of a human. Not even close. It takes at most two questions to figure out that you are talking to a machine. The scope of what the facsimile is programmed with/for is quickly outstripped.

    It will be quite some time before we have conversational intelligence out of AI systems. Retrieval speeds on Google searches are good, but at conversational pace, sifting through the information for some trace of relevance to the conversation is still going to be stilted and slow. Even then, finding some relevant response to a topic is not something that people do well.

    We each have a sphere of stuff that we are familiar with. It is a human trait to act in one of several ways when conversation goes beyond that:

    - walk away/ignore
    - talk out of our asses like we do know when clearly we don't
    - quietly observe to learn what others know
    - change the subject

    That is an example of what current AI conversation applications are not capable of.

    In the case of an AI answering machine making a meeting appointment, it would only take one odd question, like 'How about those Cowboys?', to throw the process out of whack if you did not know that you were talking to a machine.

    AI does not thread thoughts and memories in the same way that we do, and this is part of what humans call humor: the story being told mismatches the thread/plot that we have in our heads. That depends hugely on the experience of the human involved and the depth of their retained knowledge. Both of these are missing in AI systems, and current technology will not allow for faking it past some limited point. The ability to switch to another 'almost' related conversation is something that AI cannot do without great memory stores, fast search/retrieval, etc.

    Imagine it like this: every sentence in a conversation is essentially a chess move. The game of chess has a finite bounded domain. A conversation with a human does not. The problem is far greater than a mimicry.
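The chess-versus-conversation comparison can be put in rough numbers. The chess figures below are Shannon's classic back-of-envelope estimates; the conversation figures are invented and deliberately conservative:

```python
# Shannon's estimate: ~35 legal moves per position, ~80 plies per game,
# giving roughly 35**80 (~10**123) possible games.
chess_games = 35 ** 80

# A crude lower bound for conversation: pick each sentence from a
# 20,000-word vocabulary, 15 words per sentence, 20 sentences per
# conversation. (These figures are invented for illustration.)
sentences_per_slot = 20_000 ** 15
conversations = sentences_per_slot ** 20

# Even with conservative figures, the conversation space dwarfs chess.
print(conversations > chess_games)  # prints True
```

Of course most of those word sequences are gibberish, but even a tiny grammatical fraction of a ~10^1290 space leaves conversation vastly less enumerable than chess.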
  • by Bryansix ( 761547 ) on Thursday March 13, 2008 @04:35PM (#22743446) Homepage
    This was the premise of Blade Runner. That's why they developed the Voight-Kampff machine [wikipedia.org] to be able to single out replicants.
  • by Jeremi ( 14640 ) on Thursday March 13, 2008 @04:46PM (#22743586) Homepage
    - walk away/ignore
    - talk out of our asses like we do know when clearly we don't
    - quietly observe to learn what others know
    - change the subject

    That is an example of what current AI conversation applications are not capable of.


    Actually, current AI "conversation" applications do all of the above all the time... that's one of the things that make them so easy to detect.


    In the case of an AI answering machine making a meeting appointment, it would only take one odd question, like 'How about those Cowboys?', to throw the process out of whack if you did not know that you were talking to a machine.


    To be fair, that question, without any context, would confuse the majority of human beings also. Not everybody knows the names of American football teams ;^)


    The game of chess has a finite bounded domain. A conversation with a human does not.


    Are you sure? The human conversational domain might be finite, albeit quite a bit larger than the chess domain. At some point it becomes very difficult to tell the difference between "infinite" and just "very, very, very large"...

  • Re:What crap (Score:3, Interesting)

    by samkass ( 174571 ) on Thursday March 13, 2008 @04:46PM (#22743588) Homepage Journal
    "The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over but it can't. Not without your help. But you're not helping." ...

    Emotional response testing is one avenue, but actually, I think an interesting avenue might be to ask:
    "What is the last barfgaggle you've mfffitzersnatched?"
    or "I think gnunglebores are instruffled, don't you?"

    I think the manner in which these systems deal with garbage input is very different from how humans deal with it.
  • by spiffmastercow ( 1001386 ) on Thursday March 13, 2008 @04:50PM (#22743642)
    It's more like the entrance exam. That is, if a computer cannot be reliably distinguished from a human being (within the confines of the test setup), then we MIGHT have something bordering on intelligence. It's a great achievement and a landmark, but it's not the final test.
  • by sm62704 ( 957197 ) on Thursday March 13, 2008 @04:53PM (#22743672) Journal
    My Turing machine was called "Artificial Insanity". I used to have a copy posted on the internet, but I ran out of room. It was so human that it pissed a friend of mine off enough to break his keyboard.

    I tackled the problem with two ideas: one, humans are stupid, crazy, defensive, argumentative, get drunk, tired, and stoned, and generally behave like... well, they generally DON'T behave. Two, as it was designed on a Timex-Sinclair 1000 with only 16K of memory and no hard drive, it had to be really, really simple. So I had to resort to trickery to fool people.

    One of these days I'm going to port it to javascript and post it.

    Once I ran across a Turing machine on the net named "Alice" and had Art have a conversation with it. I think the two machines fell in love with each other! I posted the results at my now-defunct nerdy Quake site, you may still find it at archive.org, even if Google can't.

    -mcgrew
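The 16K-era trickery being described is essentially ELIZA-style keyword matching with canned, belligerent deflections. A toy reconstruction (the rules below are invented, not the actual Artificial Insanity ones):

```python
import random

# Keyword -> canned retorts; anything unmatched gets a hostile
# deflection, which conveniently masks the program's lack of
# understanding while reading as mere rudeness.
RULES = {
    "you": ["We were talking about you, not me.", "Don't change the subject."],
    "why": ["Why not?", "Do I ask you why?"],
    "computer": ["Machines bore me.", "What makes you bring that up?"],
}
DEFLECTIONS = ["That's your problem.", "Who cares?", "You're drunk, aren't you?"]

def artificial_insanity(line, rng=random):
    """Return a retort for the first matched keyword, else a deflection."""
    words = line.lower().split()
    for keyword, retorts in RULES.items():
        if keyword in words:
            return rng.choice(retorts)
    return rng.choice(DEFLECTIONS)
```

The whole table fits in a few hundred bytes, which is why the approach suited a machine with 16K of RAM: the illusion of personality comes from attitude, not comprehension.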
  • by MaWeiTao ( 908546 ) on Thursday March 13, 2008 @05:03PM (#22743780)
    I'd argue our brain and perhaps even our DNA are the equivalent of a BIOS and an OS. Humans are even born with certain instincts amounting to preprogrammed instructions, breast-feeding being one of them. A computer with no BIOS or OS is basically a pile of plastic and silicon. There needs to be some foundation to build upon.

    The conditions I'd put on AI would be that it has to be able to improvise and create. It has to be able to learn and develop independently of its program. Instructions which dictate how it should develop or how to deal with specific situations are prohibited.

    One thing I'd suggest is important is desire: the desire to feed, to move, to do something. This would spur it to develop itself to fulfill its desires. Otherwise it's just going to sit there.
  • Re:What crap (Score:3, Interesting)

    by SatanicPuppy ( 611928 ) * <Satanicpuppy@gma ... minus herbivore> on Thursday March 13, 2008 @05:23PM (#22744006) Journal
    But that is itself revealing; a human would assume it was gibberish and respond accordingly. The above response is exactly what I'd expect to see from a computer...Hell, it looks like a response from Zork.
  • Re:Real turing test (Score:4, Interesting)

    by MaWeiTao ( 908546 ) on Thursday March 13, 2008 @05:28PM (#22744076)
    Japanese has all kinds of complexities. It has complex conjugations, first of all; then there's the whole system of politeness depending on who's being addressed, although that may be less relevant online. While at its core Chinese is fairly intuitive and straightforward, it quickly gets very complex.

    Both Japanese and Chinese use all sorts of expressions, many of which make no sense whatsoever when translated literally. This becomes apparent when trying to use those translation tools. The translation ends up being complete gibberish to the point of being comedic.

    Because people of so many nationalities speak English, it's easier for an AI to fool people, because there really is no standard for the language. English-speakers are used to hearing it spoken in all sorts of different ways, with a wide variety of expressions.

    Automated chats are always obvious for what they are because they tend to stupidly repeat the same few comments over and over again. They're also incapable of responding properly to a user's comments, and colloquialisms always trip up these systems.
  • On the first day of class, my AI Prof in college asked "What is AI? Well, they used to say 'when a computer can win at chess, then we'll have AI'; but we did that and they said that's not it. So they said 'drive a car', and when we did that they said it didn't count... so they said 'play soccer'; done, 'doesn't count'. So what is AI? AI is anything we haven't figured out how to do with a computer. Yet."
  • by Bugmaster ( 227959 ) on Thursday March 13, 2008 @06:21PM (#22744744) Homepage

    That said, I would agree that you shouldn't have to give a machine anything more than basic resources to begin its process of learning...

    That depends on what your goal is. If your goal is to reproduce the process of human mental development, from a child to an adult, in silico, then I agree. However, if your goal is merely to produce an intelligence that can think at least as well as a human can, then you can take shortcuts -- such as supplying the intelligence with a ready-made database of knowledge, or a built-in library of common tasks ("I know Kung Fu"), etc. As long as the intelligence is as capable of learning and evolving as an average human, I see no harm in starting it off with something it can use.

    Or, put it this way: adult humans take 18 years or so to mature; that's a pretty long development cycle. If you're building an AI, you might as well accelerate it as much as you can.

  • by aussie_a ( 778472 ) on Thursday March 13, 2008 @10:46PM (#22747364) Journal
    I disagree. Any sufficiently simulated intelligence will be indistinguishable from true intelligence. Therefore, if it can pass the Turing test (passing means it's impossible to determine whether you're speaking with a machine or a human, correct?), how can we determine whether it's true intelligence or simulated intelligence?
