
The Lovelace Test Is Better Than the Turing Test At Detecting AI

meghan elizabeth writes: If the Turing Test can be fooled by common trickery, it's time to consider whether we need a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program (it could be an idea, a novel, a piece of music, anything) can't be a hardware fluke, and the machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.
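As a purely illustrative sketch (not something from the article), the Lovelace criterion amounts to a conjunction of three conditions; the function below and its boolean inputs are hypothetical placeholders, not an actual evaluation procedure.

    def passes_lovelace_test(originated_by_machine, is_hardware_fluke, designers_can_explain):
        # An artifact counts only if the machine produced it on its own,
        # it wasn't a hardware fluke, and the designers cannot explain how
        # their original code led to it.
        return originated_by_machine and not is_hardware_fluke and not designers_can_explain

    # Example: an original piece of music the designers cannot trace back
    # to their own code passes; one they can fully explain does not.
    print(passes_lovelace_test(True, False, False))  # True
    print(passes_lovelace_test(True, False, True))   # False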


  • by ShanghaiBill ( 739463 ) on Wednesday July 09, 2014 @11:13PM (#47421791)

    That's because they keep shifting the goalposts.

    They are shifting them again. This new test includes the requirement that "the machine's designers must not be able to explain how their original code led to this new program." So now anything we understand is not intelligence? If someone figures out how the brain works and is able to describe its function, will people no longer be intelligent? Intelligence is a characteristic of behavior. If it behaves intelligently, then it is intelligent. The underlying mechanism should be irrelevant.

  • by sjames ( 1099 ) on Wednesday July 09, 2014 @11:34PM (#47421885) Homepage Journal

    Alas, the test that was "passed" was not actually the test Turing proposed.

    So it passed the Turingish test.

  • by AthanasiusKircher ( 1333179 ) on Thursday July 10, 2014 @12:40AM (#47422115)

    It was passed as defined

    The Turing Test was not passed, and the only people who claim it was are ignorant reporters looking for an easy story with a catchy headline.

    Indeed. There's a lot of misinformation out there about what Turing originally specified. The test is NOT simply "Can a computer have a reasonable conversation with an unsuspecting human so that the human will not figure out that the computer is not human?" By that standard, ELIZA passed the Turing test many decades ago.

    The test also doesn't have some sort of magical "fool 30%" threshold -- Turing simply speculated that by the year 2000, AI would have progressed enough that it could fool 30% of "interrogators" (more on that term below). The 30% is NOT a threshold for passing the test -- it was just a statement by Turing about how often AI would pass the test by the year 2000.

    So what was the test?

    The test involves three entities: an "interrogator," a computer, and a normal human responder. The interrogator is assumed to be well-educated and familiar with the nature of the test. The interrogator has five minutes to question both the computer and the normal human in order to determine which is the actual human. The interrogator is assumed to bring an intelligent skepticism to the test -- the standard is not just trying to have a normal conversation, but instead the interrogator would actively probe the intelligence of the AI and the human, designing queries which would find even small flaws or inconsistencies that would suggest the lack of complex cognitive understanding. (A rough code sketch of this three-party setup follows this comment.)

    Turing's article actually gives an example of the type of dialogue the interrogator should try -- it involves a relatively high-level debate about a Shakespearean sonnet. The interrogator questions the AI about the meaning of the sonnet and tries to identify whether the AI can evaluate the interrogator's suggestions for substituting new words or phrases into the poem. The AI is supposed to detect various types of errors requiring considerable fluency in English and creativity -- like recognizing that a suggested change wouldn't fit the meter, wouldn't be idiomatic English, or would make an inappropriate metaphor in the context of the poem.

    THAT'S the sort of "intelligence" Turing was envisioning. The "interrogator" would have these complex discussions with both the AI and the human, and then render a verdict.

    Now, compare that to the situation in TFS, where the claim is that the Turing test was "passed" by a chatbot fooling people. That's crap. The chatbot in question, as the parent noted, was not even fluent in the language of the interrogators; it was deliberately evasive and nonresponsive (instead of Turing's example of AIs and humans willingly debating with the interrogator); there was no human to compare the chatbot to; and the interrogators apparently weren't asking probing questions to determine the nature of the "intelligence" (it's not even clear whether they knew their role, the nature of the test, or that they might be chatting with an AI).

    Thus, Turing's test -- as originally described -- was nowhere close to "passed." Today's chatbots can't even carry on a normal small-talk discussion for 30 seconds with a probing interrogator without sounding stupid, evasive, non-responsive, mentally ill, and/or making incredibly ridiculous errors in common idiomatic English.

    In contrast, Turing was predicting that interrogators would have to be debating artistic substitutions of idiomatic and metaphorical English usage in Shakespeare's sonnets to differentiate a computer from a real (presumably quite intelligent) human by the year 2000. In effect, Turing seemed to assume that he would talk to the AI in the way he might debate things with a rather intelligent peer or colleague.

    Turing was wrong about his predictions. But that doesn't mean his test is invalid -- to the contrary, his standard was so ridiculously high that we are nowhere close to having AI that could pass it.
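As an editor's illustration only (neither Turing's paper nor the comment above gives any code), the three-party protocol described in that comment could be sketched roughly like this in Python; every name below is a hypothetical placeholder rather than a real API.

    import time

    def imitation_game(interrogator, human_respond, machine_respond, minutes=5):
        """Hypothetical sketch of the three-party test described above.

        interrogator.ask(transcript)   -> next probing question, or None to stop early
        interrogator.judge(transcript) -> "A" or "B", naming which respondent seemed human
        human_respond(q), machine_respond(q) -> free-text replies from the hidden parties
        Respondent A is the human, respondent B is the machine; the machine succeeds
        only if the informed, skeptical interrogator judges B to be the human.
        """
        deadline = time.time() + minutes * 60
        transcript = []
        while time.time() < deadline:
            question = interrogator.ask(transcript)
            if question is None:
                break
            # The same probing question goes to both hidden respondents.
            transcript.append({"question": question,
                               "A": human_respond(question),
                               "B": machine_respond(question)})
        return interrogator.judge(transcript) == "B"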
