AI Researchers Say 'Rascals' Might Pass Turing Test 337
An anonymous reader writes "Passing the Turing test is the holy grail of artificial intelligence (AI) and now researchers claim it may be possible using the world's fastest supercomputer (IBM's Blue Gene). This version of the Turing test pits a human conversing with a synthetic character powered by Rascals software crafted at Rensselaer Polytechnic Institute. RPI is aiming to pass AI's final exam this fall, by pairing the most powerful university-based supercomputing system in the world with its new multimedia group which is designing a holodeck, a la Star Trek."
Acting on behalf of...well, myself I guess. (Score:5, Interesting)
As we become more comfortable communicating with each other through increasingly abstracted proxies, such as today's common chat applications and the recent neural voice collar (which outputs a synthetic voice, an even further proxy), I wonder if we will in fact see what the author Stephen Baxter speculated: artificial clones of ourselves, or of our personalities, handling our daily affairs.
I don't think it's too far out there to imagine interacting and planning a meeting with someone over the phone, only to find out later you had been talking to an AI facsimile of that individual.
Stranger yet (and it may well happen) is the possibility that two AI facsimiles carry out real work or meetings from start to finish, entirely without the involvement of their 'owners'.
The Turing Test (Score:5, Interesting)
From the summary, this "test" is not a strict Turing Test: it appears to be the machine talking to a human alone, with no second human also conversing with the judge for comparison. I could be wrong, of course.
One of the things that makes this test so special is that if you cannot tell the difference between a human and a computer, then essentially the computer is intelligent. Why? Because if you cannot tell the difference, what does it matter whether the machine is "really" intelligent or not? Was the machine really thinking, or was it just cleverly programmed? If you can't tell the difference, it doesn't matter. (Incidentally, I apply the same argument to the "question" of "free will".)
Anyway, if this machine (or personality) consistently passes a proper Turing Test, then yeah, that's pretty cool, and I want one on my computer, so long as the personality type is compatible with my own (not a Marvin, please...). (And I have a partner, so no need to make such jokes...)
Re:yes, but is it really intelligent? (Score:2, Interesting)
I suppose what we can do is produce something that carries out tasks we consider to require intelligence. In that case, does it really matter whether it is intelligence, so long as the task gets completed?
Be that task mathematics, logistics, or writing smooth jazz.
I guess perhaps the problem has been that we've been looking for human-like intelligence for these tasks, when really we should be asking what intelligence does. Instead of asking what intelligence is and how to make it, perhaps we should just be searching for ways to accomplish the tasks intelligence tackles so well.
During the early days of powered flight, many found it difficult to give up the notion of flapping wings. After all, since everything that flew under its own power used wings that flapped, flapping must be needed as well as wings. Rocketry might be an example of flying without wings or flapping.
I guess we can think of AI along the same lines: it doesn't have to flap its wings to fly.
Re:Acting on behalf of...well, myself I guess. (Score:4, Interesting)
It will be quite some time before we have conversational intelligence out of AI systems. Retrieval speeds on Google searches are good, but at conversational pace, sifting through the information for some trace of relevance to the conversation is still going to be stilted and slow. Even then, finding some relevant response to a topic is not something that people do well.
We each have a sphere of stuff that we are familiar with. It is a human trait to act in one of several ways when conversation goes beyond that:
- walk away/ignore
- talk out of our asses like we do know when clearly we don't
- quietly observe to learn what others know
- change the subject
That is an example of what current AI conversation applications are not capable of.
In the case of an AI answering machine making a meeting appointment, it would only take one odd question, like "How about those Cowboys?", to throw the process out of whack if you did not know that you were talking to a machine.
AI does not thread thoughts and memories the way we do, and this is part of what humans call humor: when the story being told mismatches the thread or plot we have in our heads. That depends hugely on the experience of the human involved and the depth of their retained knowledge. Both of these are missing in AI systems, and current technology will not allow for faking it past some limited point. The ability to switch to an 'almost' related conversation is something AI cannot do without great memory stores, fast search and retrieval, etc.
Imagine it like this: every sentence in a conversation is essentially a chess move. The game of chess has a finite bounded domain. A conversation with a human does not. The problem is far greater than a mimicry.
Re:Acting on behalf of...well, myself I guess. (Score:5, Interesting)
- talk out of our asses like we do know when clearly we don't
- quietly observe to learn what others know
- change the subject
That is an example of what current AI conversation applications are not capable of.
Actually, current AI "conversation" applications do all of the above all the time... that's one of the things that make them so easy to detect.
In the case of an AI answering machine making a meeting appointment, it would only take one odd question, like "How about those Cowboys?", to throw the process out of whack if you did not know that you were talking to a machine.
To be fair, that question, without any context, would confuse the majority of human beings too. Not everybody knows the names of American football teams.
The game of chess has a finite bounded domain. A conversation with a human does not.
Are you sure? Human conversational domain might be finite, albeit quite a bit larger than the chess domain. At some point it becomes very difficult to tell the difference between "infinite" and just "very very very large"...
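A quick back-of-envelope calculation supports the "very very large but finite" reading. The vocabulary size and utterance-length cap below are my own illustrative assumptions, not figures from the post:

```python
import math

# Illustrative assumptions (not from the post): a modest vocabulary
# and a cap on utterance length still dwarf chess's game tree,
# commonly estimated around 10^120 (the Shannon number).
VOCAB_SIZE = 50_000   # assumed working vocabulary
MAX_WORDS = 26        # assumed cap on words per utterance

# Count every word sequence of length 1..MAX_WORDS: finite, but enormous.
sequences = sum(VOCAB_SIZE ** n for n in range(1, MAX_WORDS + 1))
print(f"finite, but about 10^{int(math.log10(sequences))} sequences")
```

Almost all of those sequences are gibberish, of course, but even a heavily pruned "sensible" subset leaves a domain no lookup table can cover.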
Re:What crap (Score:3, Interesting)
Emotional response testing is one avenue, but actually, I think an interesting avenue might be to ask:
"What is the last barfgaggle you've mfffitzersnatched?"
or "I think gnunglebores are instruffled, don't you?"
I think the manner in which these systems have tried to deal with garbage input is very different from how humans deal with it.
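One crude way to make the contrast concrete: a human instantly notices that a token is not a word at all, while a pattern-matcher happily routes around it. A minimal sketch, with a stand-in word list of my own invention rather than a real lexicon:

```python
# A crude sketch (assumed word list, not any real system's approach):
# flag tokens that appear in no lexicon, the way a human immediately
# notices "barfgaggle" is not a word instead of pattern-matching past it.
KNOWN_WORDS = {"what", "is", "the", "last", "you've", "i", "think",
               "don't", "you"}  # stand-in for a full dictionary

def nonsense_tokens(sentence: str) -> list[str]:
    """Return the tokens a dictionary lookup cannot account for."""
    tokens = [t.strip("?.,!'\"").lower() for t in sentence.split()]
    return [t for t in tokens if t and t not in KNOWN_WORDS]

print(nonsense_tokens("What is the last barfgaggle you've mfffitzersnatched?"))
```

A system that detects the garbage can at least respond the way a human would ("what on earth is a barfgaggle?") rather than deflecting as if the sentence were well-formed.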
Re:yes, but is it really intelligent? (Score:3, Interesting)
I tackled the problem with two ideas. First, humans are stupid, crazy, defensive, argumentative; they get drunk, tired, and stoned, and generally behave like... well, they generally DON'T behave. Second, since it was designed on a Timex-Sinclair 1000 with only 16k of memory and no hard drive, it had to be really, really simple. So I had to resort to trickery to fool people.
One of these days I'm going to port it to javascript and post it.
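For what it's worth, that sort of trickery fits in a few lines. Here's a generic ELIZA-style pattern matcher of my own; the rules and canned lines are illustrative and assume nothing about the original program beyond the general approach:

```python
import random
import re

# Illustrative ELIZA-style rules: no understanding, just regex capture
# plus canned deflections (these patterns are my own, not the original's).
RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"\bi am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?"]),
]
DEFLECTIONS = [
    "I see. Tell me more.",
    "Interesting. Why do you say that?",
    "Let's change the subject. What else is on your mind?",
]

def reply(text: str) -> str:
    """Match the first rule that fires; otherwise deflect like a bluffing human."""
    text = text.lower()
    for pattern, responses in RULES:
        m = re.search(pattern, text)
        if m:
            return random.choice(responses).format(m.group(1))
    # No pattern matched: talk out of our asses, as described upthread.
    return random.choice(DEFLECTIONS)
```

Against a distracted human, deflections like these go a surprisingly long way; against the odd-question probe ("How about those Cowboys?"), they fail on the very first follow-up.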
Once I ran across a chatterbot on the net named "Alice" and had Art have a conversation with it. I think the two machines fell in love with each other! I posted the results at my now-defunct nerdy Quake site; you may still find it at archive.org, even if Google can't.
-mcgrew
Re:yes, but is it really intelligent? (Score:5, Interesting)
The conditions I'd put on AI would be that it has to be able to improvise and create. It has to be able to learn and develop independently of its program. Instructions that dictate how it should develop or how to deal with specific situations are prohibited.
One thing I'd suggest is important is desire: the desire to feed, to move, to do something. This would spur it to develop itself to fulfill its desires. Otherwise it's just going to sit there.
Re:Real turing test (Score:4, Interesting)
Both Japanese and Chinese use all sorts of expressions, many of which make no sense whatsoever when translated literally. This becomes apparent when trying to use those translation tools. The translation ends up being complete gibberish to the point of being comedic.
Because people of so many nationalities speak English, it's easier for an AI to fool people, because there really is no standard for the language. English speakers are used to hearing it spoken in all sorts of different ways, with a wide variety of expressions.
Automated chats are always obvious for what they are because they tend to stupidly repeat the same few comments over and over again. They're also incapable of responding properly to a user's comments, and colloquialisms always trip up these systems.
Re:yes, but is it really intelligent? (Score:4, Interesting)
That depends on what your goal is. If your goal is to reproduce the process of human mental development, from a child to an adult, in silico, then I agree. However, if your goal is merely to produce an intelligence that can think at least as well as a human can, then you can take shortcuts -- such as supplying the intelligence with a ready-made database of knowledge, or a built-in library of common tasks ("I know Kung Fu"), etc. As long as the intelligence is as capable of learning and evolving as an average human, I see no harm in starting it off with something it can use.
Or, put it this way: adult humans take 18 years or so to mature; that's a pretty long development cycle. If you're building an AI, you might as well accelerate it as much as you can.