AI Technology

The Lovelace Test Is Better Than the Turing Test At Detecting AI 285

Posted by samzenpus
from the why-did-you-program-me-to-feel-pain? dept.
meghan elizabeth writes: If the Turing Test can be fooled by common trickery, it's time to consider whether we need a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program—it could be an idea, a novel, a piece of music, anything—can't be a hardware fluke. The machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.

Comments Filter:
  • by sg_oneill (159032) on Wednesday July 09, 2014 @11:18PM (#47421823)

    When was the last time the average person created something original?

    Probably every day, BUT it does go to the point with this one. We're still trying to recreate an idealized human rather than actually focusing on what intelligence is.

    My cat is undeniably intelligent, almost certainly sentient although probably not particularly sapient. She works out things for herself and regularly astonishes me with the stuff she works out, and her absolute cunning when she's hunting mice. In fact, having recently worked out that I get unhappy when she brings mice from outside the house into my room, she now brings them into the back room and leaves them in her food bowl, despite my never having told her that that would be an acceptable place for her to leave her snacks.

    But has she created an original work? Well, no, other than perhaps artfully diabolical new ways to smash mice. But that's something she's programmed to do. She is, after all, a cat.

    She'd fail the test, but she's probably vastly more intelligent, in the proper philosophical meaning of the term, than any machine devised to date.

  • by phantomfive (622387) on Thursday July 10, 2014 @12:29AM (#47422057) Journal

    That's because they keep shifting the goalposts.

    I don't think "a chatbot isn't AI and hasn't been since the 1960s when they were invented, whether you call it a doctor or a Ukrainian kid doesn't make any difference" counts as shifting the goalposts.

    Furthermore, reproducible results are an important part of science. Let him release his source code, or explain his algorithm so we can reproduce it. Anything less is not science.
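    For context, the kind of 1960s-era "chatbot" being dismissed here is simple pattern matching with canned responses. A minimal ELIZA-style sketch (illustrative only; this is not the code behind the "Ukrainian kid" bot, which was never released):

    ```python
    import re

    # Minimal ELIZA-style chatbot: match a pattern, reflect the user's
    # own words back in a canned template. No understanding involved.
    RULES = [
        (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
        (re.compile(r"(.*)\?$"), "Why do you ask?"),
    ]
    DEFAULT = "Please, go on."

    def respond(text):
        """Return a canned response from the first matching rule."""
        for pattern, template in RULES:
            m = pattern.search(text.strip())
            if m:
                return template.format(*m.groups())
        return DEFAULT
    ```

    A handful of rules like these is enough to sustain a superficially plausible exchange, which is exactly why fooling a judge for five minutes says so little about intelligence.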

  • So if someone figures out how the brain works, and is about to describe its function, then people will no longer be intelligent? Intelligence is a characteristic of behavior. If it behaves intelligently, then it is intelligent. The underlying mechanism should be irrelevant.

    No.

    you describe "behaviorism" which is a thoroughly discredited and reductive theory

    the ***whole conversation*** is about ***the underlying mechanism***

    the "Lovelace Test" is more rigorous, but how it will affect computing I cannot say, because the Turing Test itself is a time-wasting notion.

    the problem: questions of "what is intelligence" are Philosophy 101 questions...not scientific or computing questions...and we hurt our industry when we overlap the two

    just because we can prod a human to make them do something, or dose them with a chemical or whathaveyou, doesn't mean we have disproven the existence of "free will"

    we will map every neural connection in the human brain soon, this doesn't mean all humans will become remote controlled techno-zombies

    people take others' freedom by many means:
    by gunpoint
    emotional manipulation
    through blackmail
    too much alcohol
    the Frey Effect [slashdot.org]
    threats of loss of work

    so learning how neurons work is just another potential addition to that list

    the point: humans have free will and it can be subverted in many ways, this does not have any implications in computing

  • by TapeCutter (624760) on Thursday July 10, 2014 @01:46AM (#47422323) Journal
    I think Watson would be able to give its real age by finding the information rather than recalling it, although it might get confused by progressive versions. AI can also produce a picture of a generic rabbit, or cat as the case may be [blogspot.com.au].

    The thing that Watson (and AI in general) has difficulty with is imagination; it has no experience of the real world, so if you asked it something like what would happen if you ran down the street with a bucket of water, it would be stuck. Humans who have never run with a bucket of water will automatically visualise the situation and give the right answer, just as everyone who read the question has just done in their mind. OTOH a graphics engine can easily show you what would happen to the bucket of water, because it does have a limited knowledge of the physical world.

    This is the problem with putting AI in a box labeled "Turing test", it (arrogantly) assumes that human conversation is the only definition of intelligence. I'm pretty sure Turing himself would vigorously dispute that assumption if he were alive today.
  • by Anonymous Coward on Thursday July 10, 2014 @04:37AM (#47422765)

    Turing was wrong about his predictions. But that doesn't mean his test is invalid.

    Imho it is.
    Suppose we manage to create a strong AI. It's fully conscious, fully aware, but for some quirk we cannot understand, it's 100% honest.
    Such an AI would never pass the Turing test, because it would never try to pass itself off as human, and any intelligent human could ask it questions that only a machine could answer in limited time.
