The Lovelace Test Is Better Than the Turing Test At Detecting AI 285

meghan elizabeth writes: If the Turing Test can be fooled by common trickery, it's time to consider whether we need a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program—it could be an idea, a novel, a piece of music, anything—can't be a hardware fluke. The machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • dwarf fortress (Score:4, Insightful)

    by Anonymous Coward on Wednesday July 09, 2014 @09:05PM (#47421493)

    That is all.

  • by voss ( 52565 ) on Wednesday July 09, 2014 @09:05PM (#47421499)

    When was the last time the average person created something original?

    • And there are quite a few human pairs for which one would not be able to convince the other that they were speaking intelligibly, either.

      It is irrelevant: it is only necessary for one computer (however that's defined) to pass this test. I don't see how it's really any better than Turing, though. It's a nice idea, but it seems even more vague than the Turing test.

      • by TWX ( 665546 )
        Has anyone really been far even as decided to use even go want to do look more like?
    • by Lisias ( 447563 )

      People usually make the big mistake of taking themselves as the measure for everybody else.

      Turing was a hell of a smart guy - I bet my mouse that he had this mindset ("everybody is more or less as smart as me") when he designed that Test.

      By the way, there's a joke around here that states: the sum of all I.Q. on Earth is a constant - and the population is growing...

      There are more educated people nowadays, but smarter? I'm afraid not - Turing didn't live to see what we are nowadays.

    • by mi ( 197448 )
      Well, it does happen every day here and there. But there are a lot of people who never manage to — throughout their whole lives... And I'm not even sure about myself, unfortunately.
    • by sg_oneill ( 159032 ) on Wednesday July 09, 2014 @10:18PM (#47421823)

      When was the last time the average person created something original?

      Probably every day, BUT it does go to the point with this one. We're still trying to recreate an idealized human rather than actually focusing on what intelligence is.

      My cat is undeniably intelligent, almost certainly sentient, although probably not particularly sapient. She works out things for herself and regularly astonishes me with the stuff she works out, and her absolute cunning when she's hunting mice. In fact, having recently worked out that I get unhappy when she brings mice from outside the house and into my room, she now brings them into the back-room and leaves them in her food bowl, despite me never having told her that that would be an acceptable place for her to place her snacks.

      But has she created an original work? Well, no, other than perhaps artfully diabolical new ways to smash mice. But that's something she's programmed to do. She is, after all, a cat.

      She'd fail the test, but she's probably vastly more intelligent in the properly philosophical meaning of the term, than any machine devised to date.

    • You and I are constantly having original thoughts while walking and chewing gum at the same time; the thing is, they're not impressive enough to be called "original". The test in TFA just extends the psychologically comforting idea that intelligence is something unique to higher life forms, yet when I was at school in the 60's intelligence was generally considered to be unique to humans; animals were generally considered to be instinctual automata, which likely explains why Turing defined AI as the ability to hold
    • by lorinc ( 2470890 )

      The most ridiculous part being "must not be able to explain how". That doesn't even make sense for humans! If you ask artists, they'll tell you what their influences are; if you ask critics, they'll tell you why this particular piece of art was made this way and not in a completely different manner.

      Fun fact: any program with as-yet-unseen bugs that make its behavior totally unexplainable to its developers has passed the test. That gives you either an idea of the soundness of this crap, or a deep insight of w

  • Well, no human alive today, in any case. All so-called "original" works produced today are derivatives of older works (Shakespeare, folklore, etc.) or quirks produced by the artist's mental state. Among deceased artists, Van Gogh and Edgar Allan Poe are famous examples. Another reason why we should stop this "all rights reserved" nonsense of the traditional copyright system, where the artist is presumed to be a god that produces unique worlds out of nothing.

  • by mark-t ( 151149 ) <markt AT nerdflat DOT com> on Wednesday July 09, 2014 @09:15PM (#47421547) Journal

    The machine's designers must not be able to explain how their original code led to this new program

    That is a flatly ludicrous requirement, far in excess of what we would ever consider applying to determine whether even a human being is intelligent or not. Hell, if you were to apply that standard to human beings, ironically, many extremely intelligent people would fail that metric, because in hindsight you can very often identify precisely how a particular thought or idea came out of a person.

    • Re:Absurd (Score:5, Funny)

      by Roger W Moore ( 538166 ) on Wednesday July 09, 2014 @09:28PM (#47421605) Journal
      Agreed - there is no reason to require the program be written in perl.
    • by khallow ( 566160 )

      The machine's designers must not be able to explain how their original code led to this new program

      That is a flatly ludicrous requirement

      Why do you think that? I guess we need actual examples.

    • by Livius ( 318358 )

      And if it's declared intelligent, and then someone figures out how to explain how it came up with whatever the original content is, then does it just become less intelligent?

  • by The Evil Atheist ( 2484676 ) on Wednesday July 09, 2014 @09:29PM (#47421615)
    I do recall reading a while back about experiments done with AI in which programs compete for resources by generating programs to do tasks given to them (computing sums, etc.). Some programs did generate code that was completely unexpected.

    It raises the question of whether evolved programs are designed by the programmer, by the program, or by the process of evolution. And it also raises the philosophical question of whether we should be more humble and accept that the "creativity" we think makes humans intelligent could be nothing more than a process of the evolution of ideas (I hesitate to use the word meme) that we don't actually originate or control.

    If we consider programs that can create things through evolution to be "intelligent", that would ironically make natural selection intelligent, since DNA is a digital program that has evolved into complex things over time that can't be reduced to first principles.
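
    For illustration, a crude evolutionary-search sketch in the spirit of the setup described above: tiny arithmetic "programs" are generated at random, scored on the task of computing a + b, and the best ones are kept. The grammar, test cases, and parameters are invented for the example; it is not the experiment recalled in the comment, and it uses selection plus fresh random programs rather than real crossover or mutation.

      import random

      OPS = ['+', '-', '*']
      TERMS = ['a', 'b', '1', '2']
      CASES = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]

      def random_expr(depth=2):
          # Build a small random arithmetic expression over the inputs a and b.
          if depth == 0 or random.random() < 0.3:
              return random.choice(TERMS)
          return '(%s %s %s)' % (random_expr(depth - 1),
                                 random.choice(OPS),
                                 random_expr(depth - 1))

      def fitness(expr):
          # Lower is better: total error against the target task a + b.
          return sum(abs(eval(expr, {'a': a, 'b': b}) - (a + b)) for a, b in CASES)

      def evolve(pop_size=200, generations=40):
          population = [random_expr() for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=fitness)
              survivors = population[:pop_size // 4]    # keep the best quarter
              fresh = [random_expr() for _ in range(pop_size - len(survivors))]
              population = survivors + fresh            # crude variation: new random programs
          return min(population, key=fitness)

      best = evolve()
      print(best, fitness(best))  # with luck, an exact solver such as ((a + 2) + (b - 2))

    Even in a toy like this, the winning expression is often not one the author would have typed in by hand, which is the flavour of "unexpected" the comment describes.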
  • by Altanar ( 56809 ) on Wednesday July 09, 2014 @09:31PM (#47421621)

    The machine's designers must not be able to explain how their original code led to this new program.

    Whoa, whoa, whoa. I have a severe problem with this. This is like looking at obscurity and declaring it a soul. The measure of intelligence is that we can't understand it? Intelligence through obfuscation? There should be no way for a designer to not be able to figure out why their machine produced what it did given enough debugging.

    • by ornil ( 33732 ) on Wednesday July 09, 2014 @09:44PM (#47421673)

      The way I interpret the test is that the output must not be intended to be produced by some pre-programmed process - not that you couldn't debug it, which would obviously be an impossible requirement for anything short of a quantum computer.

      On the other hand, I claim that if I train a neural network on some sheet music, it will be able to produce a new melody. And that melody will not be in any way pre-programmed (just as a child learning from experience is not pre-programmed), and it will be original. Where can I collect my prize?
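
      As an illustration of "learned rather than pre-programmed", here is a minimal sketch. It uses a first-order Markov chain over note names as a much simpler stand-in for the neural network proposed above, and the training tunes are made up:

        import random
        from collections import defaultdict

        training_tunes = [
            ['C', 'D', 'E', 'C', 'E', 'F', 'G'],
            ['G', 'F', 'E', 'D', 'C', 'D', 'E'],
            ['E', 'G', 'G', 'F', 'E', 'D', 'C'],
        ]

        # Learn which note tends to follow which in the training material.
        transitions = defaultdict(list)
        for tune in training_tunes:
            for current, following in zip(tune, tune[1:]):
                transitions[current].append(following)

        def generate(start='C', length=12):
            # Walk the learned transition table to produce a new note sequence.
            melody = [start]
            for _ in range(length - 1):
                options = transitions.get(melody[-1])
                if not options:                 # dead end: fall back to the start note's options
                    options = transitions[start]
                melody.append(random.choice(options))
            return melody

        print(' '.join(generate()))  # a new sequence in the style of, but not copied from, the inputs

      The output is shaped by the examples rather than by any melody written into the code, which is the sense in which it is not pre-programmed.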

      • Unless the panel of judges is a bunch of hipsters who will always say it sounds derivative.
        • Re: (Score:2, Funny)

          by Anonymous Coward

          Not if they heard it before it was cool, then the AI just sold out.

    • There should be no way for a designer to not be able to figure out why their machine produced what it did given enough debugging.

      Well... [slashdot.org]

    • by dbIII ( 701233 )

      This is like looking at obscurity and declaring it a soul

      That's the undergraduate view of AI that gets repeated at times in this place.

      The measure of intelligence is that we can't understand it?

      Not just yet, so instead of waiting until years of work have been done understanding the physical basis of thought, the impatient want some sort of measure now.

  • Most of the programs I write produce stuff I can't explain.
  • ...then all the computer will have to do is string together a series of random English words till it puts together something that sounds like a short story written by a Hungarian first-grader for whom English is a second language.

    I don't care what they call the test. It's useless if the grading rubric is rigged to allow any idiot to write something that passes. Now, if you'll excuse me, I'm going to go see if I can talk ELIZA into writing me something that would function as an epistolary novel.

  • by K. S. Kyosuke ( 729550 ) on Wednesday July 09, 2014 @09:39PM (#47421655)

    The machine's designers must not be able to explain how their original code led to this new program.

    If I'm not mistaken, this has already happened when evolutionary algorithms were applied to hardware design: some slides [www-verimag.imag.fr]. The author of the program had no idea how the resulting circuit worked [bcs.org].

    • by geekoid ( 135745 )

      It's actually happened a lot; it's called 'emergent behavior'. The paper is old, poorly thought out, and written by people who want other people to think they are smart, but aren't actually smart enough to do science, you know: philosophers.

      remember kids: philosophers are to science what homeopaths are to medicine.

      • I know what emergent behavior is; I was merely making the point that it has already been observed in software systems and that it (at least from my POV) satisfies these requirements. (And what exactly is poorly thought out about Thompson's research?)
      • people who want other people to think they are smart, but aren't actually smart enough to do science, you know: philosophers.

        remember kids: philosophers are to science what homeopaths are to medicine.

        And also remember that anyone with a Ph.D. in a science field isn't a scientist. They're a doctor of philosophy. Without philosophy, science doesn't exist.

        • Without science, philosophy is useless. Philosophers have a bad habit of treating things as binary true or false, and statistical answers are not acceptable. No philosopher I know has made any sense of Quantum Mechanics or natural selection so far; they are completely beholden to science in modern times. The only philosophy that's worth pursuing these days is the philosophy of science itself, but even that is hitting its limits. I've been in too many debates where philosophers try to label science as "logical
          • Without science, philosophy is useless.

            Philosophy created science without science's help.

            • Bollocks.

              Science was created because philosophy couldn't cut it. Galileo didn't bother trying to figure out the philosophical underpinnings of things rolling down planks or pendulum swings or the moons of Jupiter. He went straight to observations.
              • The process commonly known as the scientific method is the product of philosophy, and science itself had nothing to do with the scientific method's birth.
                • Again, bollocks.

                  First, there is no "Scientific Method", with capital letters. There have been many philosophical attempts at formally defining science, but none are accurate, and they often fly in the face of how science is actually done.

                  If science didn't exist before attempts to formalize science, then you are saying that Galileo wasn't doing science. The practice came before the theory, and that is a recorded historical fact. You demonstrate precisely the problem with philosophers - the theory override
                  • There's nothing greater than a semantic argument on slashdot.

                    Arguing whether science is a form of philosophy is like arguing whether the Game of Thrones TV show is an example of art. You don't necessarily have any disagreement about what science is (even though that's what everybody is focussing on); you have a disagreement on the definition of philosophy (which, like art, is notoriously hard to pin down).

                    • You mischaracterize the debate. The debate is not about what either of those is, but about whether science comes from philosophy, or developed as a complement/reaction to philosophy that has now far exceeded philosophy's capabilities. The corollary to that debate is the question of whether, if philosophy gave birth to science, philosophy is allowed to "pull rank" on science any time it hits a wall, and to claim credit for things as though science "owes" anything to philosophy for its existence. As though because th
  • A computer infected with a worm and a virus led to them combining into a new program.

    It was better and unique.

  • Computer Chess (Score:5, Insightful)

    by Jim Sadler ( 3430529 ) on Wednesday July 09, 2014 @09:45PM (#47421683)
    Oddly, computer chess programs may already meet this criterion. The programs usually apply a weight or value to a move, and a weight and value to the consequences downstream of the move. But there are times when the consequences are of equal value at some event horizon, and random choices must be applied. As a consequence, sequences of moves may be made that no human has ever made and that the programmer could not really predict either. As machines have gotten more capable, the event horizon is at a deeper level. But we might reach the point at which only the player playing white can ever hope to win and the player with black will always lose. We are not in danger of a human ever being able to do that unless we alter his brain.
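
    A toy sketch of the random tie-breaking described above: score each candidate move, then pick at random among the equally weighted best ones. The moves and scores here are invented; this is not a real engine's evaluation.

      import random

      def choose_move(scored_moves):
          # scored_moves: list of (move, score) pairs from some evaluation or search.
          best = max(score for _, score in scored_moves)
          candidates = [move for move, score in scored_moves if score == best]
          return random.choice(candidates)   # the unpredictable part

      moves = [('e4', 0.31), ('d4', 0.31), ('Nf3', 0.27), ('c4', 0.31)]
      print(choose_move(moves))  # any of e4, d4, or c4; even the author can't say which in advance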
  • What's a "program" ("anything")?

    What does it mean to be "engineered to produce" one?

    What's a "hardware fluke"?

    What constitutes "explanation" of how it was done?

    Not. Even. Wrong.

  • Till it hit me that it was just looking for keywords to continue on - yes, I was new.
    http://en.wikipedia.org/wiki/E... [wikipedia.org] the Doctor is in...

  • "The machine's designers must not be able to explain how their original code led to this new program". I know plenty of programmers that can't explain how the hell their code managed to produce certain results, and trust me it has nothing to do with the servers mysteriously developing AI.

  • The meta-Turing test counts a thing as intelligent if it seeks to devise and apply Turing tests to objects of its own creation.
    --Lew Mammel, Jr.

  • by dlingman ( 1757250 ) on Wednesday July 09, 2014 @10:51PM (#47421951)
    http://en.wikiquote.org/wiki/I... [wikiquote.org] Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a... canvas into a beautiful masterpiece? Sonny: [With genuine interest] Can you?
  • by quantaman ( 517394 ) on Wednesday July 09, 2014 @10:56PM (#47421961)

    Just because someone sets some random people up for a five minute interview with a chatbot doesn't mean they're running a Turing Test.

    Give people enough time to conduct a proper conversation; hell, give them time to ask the chatbot for some original content. Do that and you'll be running a real Turing Test.

    The reason you keep hearing about these simplified Turing Tests is that those are the only tests people run, because those are the only tests computers can pass. But passing a true Turing Test is still a great standard for detecting real AI, and something no one can even approach doing yet.

  • The great thing about the Turing test was that it was a black box. It did not depend on assumptions about what the designers knew, or what hardware was used, or the like. And so far the only test trials I have heard of have been carefully arranged one-on-one. Give us a dozen Ukrainian teenagers, and pick the one (or two) which are non-human - that's a better test run.

    But, of course, the ultimate test of machine intelligence is when the computer can sue your ass off and win in the Supreme Court.

  • Ada Lovelace or Linda Lovelace? I volunteer for the Linda Lovelace test.
    • The Linda Lovelace test is when you make love to a lady and you can't tell if she's a human or a robot. I live in Thailand, and I have been involved in the Linda Lovelace test many times - including with my ex-wife.
  • A guy told me some 20 years ago that he read about an artificial life experiment in which a specially designed operating system was created to allow programs to execute code and, like computer viruses, reproduce themselves while competing for the resources to do so. He said the result was a program that copied itself very efficiently in a manner that the researchers found very hard to understand and was totally unexpected.

    Sadly he couldn't explain the details and didn't know the experiment, but if what
  • This business of the developers not knowing how it works. It reminds me of the question "How can God create a being that sins. Doesn't that make Him responsible?". One way to answer that is that God withdraws his authority within a locus that we call the "soul". What happens there isn't his action. This implies that while knowingly taking actions that lead to wrong is immoral, withdrawing your power from a particular locus and opening things up to potential wrongs is not immoral.

    It has nothing to d

  • I've written music generators that produce "pleasant" music from scratch (by following time-tested harmonic, chord, and rhythm patterns and ratios). The music may pass the Lovelace test, but will probably never win any awards.

    The machine's designers must not be able to explain how their original code led to this new program.

    So if we finally figure out how the human brain works, it will fail the Lovelace test just because we know how it works? A silly rule.
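
    For reference, a minimal sketch of the kind of rule-based generator described in the first paragraph above: pick a stock I-IV-V-I progression and draw each melody note from the current chord. The scale, progression, and rhythm here are simplified assumptions, not the poster's actual generator.

      import random

      SCALE = ['C', 'D', 'E', 'F', 'G', 'A', 'B']
      # Triads as 0-based scale degrees: I, IV, V, I - a time-tested progression.
      PROGRESSION = [[0, 2, 4], [3, 5, 0], [4, 6, 1], [0, 2, 4]]

      def generate_bar(chord, beats=4):
          # One bar: notes drawn from the current chord, simple even rhythm.
          return [SCALE[random.choice(chord)] for _ in range(beats)]

      tune = []
      for chord in PROGRESSION:
          tune.extend(generate_bar(chord))
      print(' '.join(tune))  # e.g. "C E G C F A C F G B D G E C G E"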

  • to pass the Lovelace Test a computer has to create something original, all by itself.

    Are we even sure people can do this?

  • The Lovelace test is not a great test if a machine has to create something original all by itself, as a lot of real humans can't even do that - so a lot of humans wouldn't even pass the Lovelace test...

  • > The machine's designers must not be able to explain how their original code led to this new program

    This happens in my office all the time
