AI Technology

The Lovelace Test Is Better Than the Turing Test At Detecting AI

Posted by samzenpus
from the why-did-you-program-me-to-feel-pain? dept.
meghan elizabeth writes If the Turing Test can be fooled by common trickery, it's time to consider whether we need a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program (it could be an idea, a novel, a piece of music, anything) can't be a hardware fluke. The machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.
  • dwarf fortress (Score:4, Insightful)

    by Anonymous Coward on Wednesday July 09, 2014 @10:05PM (#47421493)

    That is all.

  • by voss (52565) on Wednesday July 09, 2014 @10:05PM (#47421499)

    When was the last time the average person created something original?

  • by Anonymous Coward on Wednesday July 09, 2014 @10:19PM (#47421565)

    It was passed as defined: 10 out of 30 judges (lay people) thought they were talking with a human when they were in fact talking with a machine in 5-minute chat sessions. Whether passing this is in any way significant is up for debate, but the test was passed.

  • by The Evil Atheist (2484676) on Wednesday July 09, 2014 @10:29PM (#47421615) Homepage
    I do recall reading a while back about experiments with AI in which programs compete for resources by generating programs to do tasks given to them (computing sums, etc.). Some programs generated code that was completely unexpected.

    It raises the question of whether evolved programs are designed by the programmer, by the program, or by the process of evolution. It also raises the philosophical question of whether we should be more humble and accept that the "creativity" we think makes humans intelligent could be nothing more than a process of the evolution of ideas (I hesitate to use the word meme) that we neither originate nor control.

    If we consider programs that can create things through evolution as "intelligent", that would ironically make natural selection intelligent, since DNA is a digital program that is evolved into complex things over time that can't be reduced to first principles.
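
    The "programs generating programs" described above sounds like genetic programming. A toy sketch of the idea follows; the 2x + 3 target task and all names are illustrative, not taken from any specific experiment:

    ```python
    import random

    random.seed(0)

    # Candidate "programs" are expression trees over one variable x,
    # built from these primitive operations.
    OPS = [("add", lambda a, b: a + b),
           ("sub", lambda a, b: a - b),
           ("mul", lambda a, b: a * b)]

    def random_tree(depth=3):
        """Grow a random expression tree: a leaf ('x' or a constant) or an op node."""
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", random.randint(0, 5)])
        name, _ = random.choice(OPS)
        return (name, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        """Recursively evaluate a tree for a given input x."""
        if tree == "x":
            return x
        if isinstance(tree, int):
            return tree
        name, left, right = tree
        return dict(OPS)[name](evaluate(left, x), evaluate(right, x))

    def fitness(tree, cases):
        """Total absolute error against the target input/output pairs (0 is perfect)."""
        return sum(abs(evaluate(tree, x) - y) for x, y in cases)

    def mutate(tree):
        """Usually descend into a random child; otherwise graft in a fresh subtree."""
        if isinstance(tree, tuple) and random.random() < 0.7:
            name, left, right = tree
            if random.random() < 0.5:
                return (name, mutate(left), right)
            return (name, left, mutate(right))
        return random_tree(2)

    # Target task: compute 2x + 3, the "sum" the programs compete to produce.
    cases = [(x, 2 * x + 3) for x in range(10)]
    population = [random_tree() for _ in range(50)]
    for generation in range(100):
        population.sort(key=lambda t: fitness(t, cases))
        if fitness(population[0], cases) == 0:
            break
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

    population.sort(key=lambda t: fitness(t, cases))
    best = population[0]
    ```

    The winning tree often has a shape no one wrote down in advance, which is the sense in which the evolved code is "unexpected": the programmer specified only the primitives, the fitness function, and the selection loop.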
  • by Altanar (56809) on Wednesday July 09, 2014 @10:31PM (#47421621)

    The machine's designers must not be able to explain how their original code led to this new program.

    Whoa, whoa, whoa. I have a severe problem with this. This is like looking at obscurity and declaring it a soul. The measure of intelligence is that we can't understand it? Intelligence through obfuscation? Given enough debugging, there should be no way for a designer to be unable to figure out why their machine produced what it did.

  • by nmb3000 (741169) <nmb3000@that-google-mail-site.com> on Wednesday July 09, 2014 @10:38PM (#47421647) Homepage Journal

    It was passed as defined

    The Turing Test was not passed, and the only people who claim it was are ignorant reporters looking for an easy story with a catchy headline and tech morons who also believe Kevin Warwick is a cyborg.

    The test was rigged in every way possible:

    - judges told they were talking to a child
    - that doesn't speak English as a primary language
    - which was programmed with the express intent of misdirection
    - and only "fooled" 30% of the judges.

    And, even after all that, Cleverbot [cleverbot.com] did a much better job back in 2011 with a 60% success rate.

    This Eugene test outcome was a complete farce -- something to remind everyone that Warwick still exists and to separate the ignorant and sensational tech news trash rags from the more legitimate sources of information.

  • Computer Chess (Score:5, Insightful)

    by Jim Sadler (3430529) on Wednesday July 09, 2014 @10:45PM (#47421683)
    Oddly, computer chess programs may already meet this criterion. The programs usually assign a weight or value to a move, and a weight and value to the consequences downstream of that move. But there are times when the consequences are of equal value at some event horizon, and random choices must be made. As a consequence, sequences of moves may be played that no human has ever made and that the programmer could not really have predicted either. As machines have become more capable, that event horizon has moved deeper. We might even reach the point at which only the player with white can ever hope to win and the player with black must always lose. No human is in danger of ever being able to do that unless we alter his brain.
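    The tie-breaking step described above can be sketched as picking at random among equally valued moves. The scoring table here is a made-up stand-in, not a real chess evaluation function:

    ```python
    import random

    def best_move(moves, score, rng=random.Random(42)):
        """Pick a move with the highest score, breaking ties at random.

        `score` maps a move to its evaluated value. When several moves tie
        at the search's "event horizon", the choice among them is arbitrary,
        which is how identical engines can still diverge into novel lines.
        """
        scored = [(score(m), m) for m in moves]
        top = max(s for s, _ in scored)
        candidates = [m for s, m in scored if s == top]
        return rng.choice(candidates)

    # Toy example: three candidate moves, two of which tie at the top value.
    values = {"e4": 0.3, "d4": 0.3, "a3": -0.1}
    move = best_move(list(values), values.get)
    ```

    With a deterministic evaluation and a deterministic search, this single random draw is the only source of novelty; everything else is reproducible.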
  • by dlingman (1757250) on Wednesday July 09, 2014 @11:51PM (#47421951)
    http://en.wikiquote.org/wiki/I... [wikiquote.org]

    Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a... canvas into a beautiful masterpiece?

    Sonny: [With genuine interest] Can you?
  • by Anonymous Coward on Thursday July 10, 2014 @03:23AM (#47422585)

    One of my friends is a philosophy post-doc and he told me many times that in philosophy the gold standard for intelligence is intelligent behaviour. Of course he has some footnotes to add, notably that intelligent things can appear to be bricks if you cut off all their actuators, but to say that this particular variant of ‘behaviourism’ as you call it is discredited is disingenuous. In particular, if one could hypothetically replace someone's brain with a computer and not know the difference then the computer must necessarily be intelligent, insofar as humans are intelligent. Being a philosopher he has opinions about that too.

    It also isn't true that the Lovelace test is more rigorous. To pass it you must produce something truly original but presumably non-random. I can only say good luck getting any human to pass this test. In practice this means, of course, that the bar must be lowered to some measure of non-rigorous relative originality. The weird use of the word ‘program’ doesn't make the fact that we could actually be talking about a poem any more rigorous either. Then there is the strange notion that the machine's designers must not be able to explain why it works the way it does. Quite apart from the fact that a lot of software already has shades of this as it is, the designers of course wouldn't be able to explain even the most trivial action their machine took. Of course you have to broaden this from ‘the designers’ to ‘anyone’, but then you get back to the problem that if someone ever figures out how the brain works, yours truly will no longer be intelligent according to our dear Lovelace test.

    And as a matter of practical importance, there already exists a lot of machine-generated art, some of it quite beautiful. The programmers can explain the algorithms, of course, but often not how it got the final result just right. These would appear to pass the Lovelace test (you can disagree if you want, but that just shows that the Lovelace test isn't rigorous), yet they are not intelligent to the best of our knowledge. The Turing test, as envisioned by Turing (rather than the garbage that has been in the news lately), is a much more reliable measure of intelligence. Could there be a better test? Probably. Maybe something is only intelligent if it can get a philosophy degree. Too bad that this test would disqualify almost all humans, though.

  • by Anonymous Coward on Thursday July 10, 2014 @04:16AM (#47422711)

    No, he's not describing behaviourism. He's saying this:

    If it behaves intelligently, then it is intelligent.

    That's a reasonable statement to make, and if you're disagreeing with that statement, you need to say why. Converting it to a strawman and then making a bald claim that the strawman is "discredited" is a cheap rhetorical trick.

    And then you go on to talk about free will, which has no direct relationship with intelligence anyway. OK, I get it, you want to turn the conversation around to being about free will, because that's your ax, but telling someone their perfectly reasonable statements are "simply wrong" is a shitty way to do it.

    OP's point, which you're deliberately missing, is that whatever intelligence is, it is not an observer-relative thing which demands that the observer be unaware of the mechanism. If you want to engage in debate with him, try addressing that specific point, rather than a bunch of points he never made about a subject he's not discussing.

    And if you want to talk about free will and about how behaviourism is "discredited", maybe you could at least make a couple of points in favour of that argument, for those of us who might be interested anyway. Maybe then we can see how your belief relates to what is actually being said.

    Anyway, what you're both missing is the practical issue with "The machine's designers must not be able to explain how their original code led to this new program." The machine's designers can lie, or be incapable of coming up with an explanation despite one existing, so this is a completely ill-defined criterion - which is what we're trying to get away from.

  • by AthanasiusKircher (1333179) on Thursday July 10, 2014 @05:20AM (#47422835)

    Turing was wrong about his predictions. But that doesn't mean his test is invalid

    Imho it is.
    Suppose we manage to create a strong AI. It's fully conscious, fully aware, but for some quirk we cannot understand, it's 100% honest.
    Such an AI would never pass the Turing test, because it would never try to pass itself off as human

    That sounds like a legit point at first, but think about it for a sec. Programming a computer to lie and be evasive about its nature is easy, and many chatbots can already do that. Asking a strong AI "are you a computer?" or "what did you have for breakfast?" would not be useful for evaluating intelligence. Getting the AI to debate an intellectual topic, on the other hand, would be less likely to require deception and would be a better measure of intelligence. That's the other fundamental point people miss: the point of the Turing test was to imitate human INTELLIGENCE, NOT to pretend to be a physical human.

    A knowledgeable interrogator trying to evaluate intelligence would thus likely be more interested in asking intellectual questions, rather than queries just designed to test whether the computer can make up some nonsense about itself.
