Was Turing Test Legitimately Beaten, Or Just Cleverly Tricked?

beaker_72 (1845996) writes "On Sunday we saw a story that the Turing Test had finally been passed. The same story was picked up by most of the mainstream media and reported all over the place over the weekend and yesterday. However, today we see an article in TechDirt telling us that in fact the original press release was just a load of hype. So who's right? Have researchers at a well-established university managed to beat this test for the first time, or should we believe TechDirt, which has pointed out some aspects of the story that, if true, are pretty damning?" Kevin Warwick gives the bot a thumbs up, but the TechDirt piece takes heavy issue with Warwick himself on this front.
  • by Anonymous Coward on Tuesday June 10, 2014 @12:53PM (#47204031)

    It has nothing to do with actual artificial intelligence and everything to do with writing deceptive scripts. It's not just this incident; it's a problem with the goal of the Turing test itself. I've always found the Turing test a kind of stupid exercise for this reason.

  • by Trepidity ( 597 ) <delirium-slashdo ... h.org minus city> on Tuesday June 10, 2014 @01:02PM (#47204117)

    Restricted Turing tests, which test only indistinguishability from humans on a more limited range of tasks, can sometimes be useful research benchmarks, so restricting them isn't entirely illegitimate. For example, an annual AI conference runs a "Mario AI Turing test" [marioai.org] where the goal is to enter a bot that plays levels in a "human-like" way, so that judges can't distinguish its play from humans' play. That is a harder task than just beating the levels (speedrunning a Mario level can be done with standard A* search, so it isn't that interesting as an AI benchmark; see the rough sketch at the end of this comment). This is useful as a benchmark for algorithms that try to mimic action styles in general, whether in games or elsewhere.

    However, it would definitely be misleading to claim that passing one of these restricted Turing tests constitutes passing the Turing test in the sense Turing had in mind: obviously, playing Mario levels in a human-like way is not equivalent to full general intelligence, and serious researchers wouldn't claim that it is.
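    A rough sketch of the A* point, purely for illustration: the toy grid level, move set, and cost model below are invented stand-ins, not the actual Mario AI framework, but they show why "just finish the level" falls to standard search and is therefore uninteresting as an AI benchmark.

      import heapq

      # Toy stand-in for a platformer level: 'S' start, 'G' goal, '#' impassable.
      LEVEL = [
          "S..#....",
          ".#.#.##.",
          ".#......",
          "...##.#G",
      ]

      def a_star(level):
          """Return a shortest path of (row, col) cells from 'S' to 'G', or None."""
          rows, cols = len(level), len(level[0])
          start = next((r, c) for r in range(rows) for c in range(cols) if level[r][c] == "S")
          goal = next((r, c) for r in range(rows) for c in range(cols) if level[r][c] == "G")

          def h(cell):
              # Manhattan distance: an admissible heuristic on a 4-connected grid.
              return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

          frontier = [(h(start), 0, start, [start])]   # entries are (f = g + h, g, cell, path)
          best_g = {start: 0}
          while frontier:
              _, g, cell, path = heapq.heappop(frontier)
              if cell == goal:
                  return path
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  r, c = cell[0] + dr, cell[1] + dc
                  if 0 <= r < rows and 0 <= c < cols and level[r][c] != "#":
                      ng = g + 1
                      if ng < best_g.get((r, c), float("inf")):
                          best_g[(r, c)] = ng
                          heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
          return None

      if __name__ == "__main__":
          print(a_star(LEVEL))   # prints the optimal route through the toy level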

  • by i kan reed ( 749298 ) on Tuesday June 10, 2014 @01:11PM (#47204175) Homepage Journal

    Sure it is.

    They convinced a human that they were talking to an unimpressive human. That's definitely a step above "not human at all".

  • by Anonymous Coward on Tuesday June 10, 2014 @01:21PM (#47204271)

    So according to you, I could make a machine that simulates texting with a baby. Every now and then it would randomly pound out gibberish, as if a baby were walking on the keyboard.
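    Something like this trivial mash generator would do it; the key set and timing below are made up for illustration, which is rather the point: there is nothing to get right.

      import random
      import string
      import time

      def baby_keyboard_mash(turns=5, seed=None):
          """Print bursts of random keys with irregular pauses, like a baby on a keyboard."""
          rng = random.Random(seed)
          keys = string.ascii_lowercase + "   ,./;"   # over-weight space and nearby punctuation
          for _ in range(turns):
              burst = "".join(rng.choice(keys) for _ in range(rng.randint(3, 25)))
              print(burst)
              time.sleep(rng.uniform(0.0, 0.5))       # uneven gaps between bursts

      if __name__ == "__main__":
          baby_keyboard_mash(seed=42)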

  • by Spy Handler ( 822350 ) on Tuesday June 10, 2014 @03:15PM (#47205455) Homepage Journal

    The test as specified by Alan Turing involves a human judge sitting in front of two terminals. One is operated by a computer and the other by a human. The judge asks both terminals questions and tries to figure out which one is the computer and which is the human (a rough sketch of that setup is at the end of this comment). It's quite specific.

    It does not involve unsuspecting people in everyday situations who are duped into thinking they're interacting with a human... that would be quite easy. For instance, if somebody asks the TigerDirect customer-service chat window questions about a product and receives a good answer, they might not suspect it's a bot. That doesn't mean the TigerDirect bot passed the Turing test.

    Turing also didn't say anything about crippling the test by having the machine pose as a child who doesn't speak fluent English.
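    A minimal sketch of that setup, assuming stand-in participants (the "human" is whoever runs the script, and the bot is a few canned deflections, not any real chatbot): the point is the protocol (an informed judge interrogating two anonymous terminals side by side, then guessing), not the quality of the machine.

      import random

      def human_terminal(question):
          # Stand-in for the human operator at the other terminal.
          return input(f"[human operator, please answer] {question}\n> ")

      def bot_terminal(question):
          # Stand-in chatbot: canned deflections, roughly how scripted bots evade questions.
          return random.choice([
              "That is an interesting question. What do you think?",
              "I would rather not say. Ask me something else.",
              "Ha! Why do you want to know that?",
          ])

      def run_turing_test(questions):
          # Randomly assign the machine to terminal A or B; the judge knows one of
          # the two terminals is a machine, but not which.
          terminals = {"A": human_terminal, "B": bot_terminal}
          if random.random() < 0.5:
              terminals = {"A": bot_terminal, "B": human_terminal}

          for q in questions:
              print(f"\nJudge asks: {q}")
              for label in ("A", "B"):
                  print(f"  Terminal {label}: {terminals[label](q)}")

          guess = input("\nWhich terminal is the computer, A or B? ").strip().upper()
          actually_bot = "A" if terminals["A"] is bot_terminal else "B"
          print("Correct!" if guess == actually_bot else f"Wrong, it was {actually_bot}.")

      if __name__ == "__main__":
          run_turing_test([
              "What did you have for breakfast this morning?",
              "Explain why 'time flies like an arrow' is funny.",
          ])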
