The Lovelace Test Is Better Than the Turing Test At Detecting AI
meghan elizabeth writes: If the Turing Test can be fooled by common trickery, it's time to consider that we need a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program—it could be an idea, a novel, a piece of music, anything—can't be a hardware fluke. The machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.
Re:Most humans couldn't pass that test (Score:4, Interesting)
Probably every day. BUT this one does get to the point: we're still trying to recreate an idealized human rather than actually focusing on what intelligence is.
My cat is undeniably intelligent, almost certainly sentient although probably not particularly sapient. She works out things for herself and regularly astonishes me with the stuff she works out, and her absolute cunning when she's hunting mice. In fact, having recently worked out that I get unhappy when she brings mice from outside the house into my room, she now brings them into the back room and leaves them in her food bowl, despite me never having told her that that would be an acceptable place for her to put her snacks.
But has she created an original work? Well, no, other than perhaps artfully diabolical new ways to smash mice. But that's something she's programmed to do. She is, after all, a cat.
She'd fail the test, but she's probably vastly more intelligent, in the properly philosophical meaning of the term, than any machine devised to date.
Re:Turing test not passed. (Score:5, Interesting)
That's because they keep shifting the goalposts.
I don't think "a chatbot isn't AI and hasn't been since the 1960s when they were invented, whether you call it a doctor or a Ukrainian kid doesn't make any difference" counts as shifting the goalposts.
Furthermore, reproducible results are an important part of science. Let him release his source code, or explain his algorithm so we can reproduce it. Anything less is not science.
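The "chatbots since the 1960s" point is easy to make concrete: programs in the ELIZA tradition work by keyword matching and echoing the user's own words back, with no model of meaning at all. A minimal sketch in that style (the rules below are illustrative, not ELIZA's actual script):

```python
import re

# A few illustrative keyword rules: (pattern, response template).
# The captured fragment is reflected and echoed back at the user.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

# Pronoun swaps so the echoed fragment reads naturally.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned response: no cognition, just string surgery."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return FALLBACK

print(respond("I am worried about the Turing Test"))
# -> Why do you say you are worried about the Turing Test?
```

A judge who doesn't probe hard can mistake this echo trick for understanding, which is exactly why passing a five-minute chat proves so little.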
philosophical discussion only not science (Score:2, Interesting)
No.
you describe "behaviorism" which is a thoroughly discredited and reductive theory
the ***whole conversation*** is about ***the underlying mechanism***
the "Lovelace Test" is more rigorous, but how it will affect computing I cannot say, because the Turing Test itself is a time-wasting notion.
the problem: questions of "what is intelligence" are Philosophy 101 questions...not scientific or computing questions...and we hurt our industry when we overlap the two
just because we can prod a human to make them do something, or dose them with a chemical or what have you, doesn't mean we have disproven the existence of "free will"
we will map every neural connection in the human brain soon, this doesn't mean all humans will become remote controlled techno-zombies
people take others' freedom by many means:
by gunpoint
emotional manipulation
through blackmail
too much alcohol
the Frey Effect [slashdot.org]
threats of loss of work
so learning how neurons work is just another potential addition to that list
the point: humans have free will and it can be subverted in many ways, this does not have any implications in computing
Re:Turing test not passed. (Score:4, Interesting)
The thing that Watson (and AI in general) has difficulty with is imagination; it has no experience of the real world, so if you asked it something like what would happen if you ran down the street with a bucket of water, it would be stuck. Humans who have never run with a bucket of water will automatically visualise the situation and give the right answer, just as everyone who read the question has just done in their mind. OTOH a graphics engine can easily show you what would happen to the bucket of water, because it does have a limited knowledge of the physical world.
This is the problem with putting AI in a box labeled "Turing test", it (arrogantly) assumes that human conversation is the only definition of intelligence. I'm pretty sure Turing himself would vigorously dispute that assumption if he were alive today.
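The bucket-of-water point can be sketched with a toy physics model (the rigid flat-surface assumption and all numbers here are mine, not from any real engine): under steady horizontal acceleration a, the water surface tilts so that tan(theta) = a/g, and water spills once the rise at the trailing edge exceeds the freeboard below the rim.

```python
G = 9.81  # gravitational acceleration, m/s^2

def spills(accel: float, bucket_radius: float, freeboard: float) -> bool:
    """Toy model: under steady horizontal acceleration, the free surface
    tilts to tan(theta) = accel/G, so the water climbs bucket_radius *
    (accel/G) at the trailing edge; it spills once that exceeds the
    freeboard (the gap between the resting surface and the rim)."""
    rise = bucket_radius * (accel / G)
    return rise > freeboard

# Walking off gently vs. sprinting off with a nearly full bucket
# (0.15 m radius, 5 cm of freeboard).
print(spills(accel=0.5, bucket_radius=0.15, freeboard=0.05))  # -> False
print(spills(accel=4.0, bucket_radius=0.15, freeboard=0.05))  # -> True
```

Trivial as it is, this little model "knows" something about running with a bucket that a pure conversation engine does not, which is the commenter's point.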
Re:Turing test not passed. (Score:3, Interesting)
Turing was wrong about his predictions. But that doesn't mean his test is invalid.
Imho it is.
Suppose we manage to create a strong AI. It's fully conscious, fully aware, but, owing to some quirk we cannot understand, it's 100% honest.
Such an AI would never pass the Turing test, because it would never try to pass itself off as human, and any intelligent human could ask it questions that only a machine could answer in limited time.