Upgrading the Turing Test: Lovelace 2.0
mrspoonsi tips news of further research into updating the Turing test. As computer scientists have expanded their knowledge about the true domain of artificial intelligence, it has become clear that the Turing test is somewhat lacking. A replacement, the Lovelace test, was proposed in 2001 to strike a clearer line between true AI and an abundance of if-statements. Now, Professor Mark Riedl of Georgia Tech has updated the test further (PDF).
He said, "For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Creativity is not unique to human intelligence, but it is one of the hallmarks of human intelligence."
Re: (Score:2)
I tried tagging the article "not Linda", but since "not" is rendered as an exclamation point, it actually reads "bang Linda."
I don't think it helped.
Turing test is fine (Score:5, Insightful)
Re: (Score:2)
What? It's been controversial since the beginning, and a complete joke after Weizenbaum wrote Eliza.
Re: (Score:2)
Re:Turing test is fine (Score:4, Insightful)
Did you miss the last 64 years of research and philosophy? The last hold-outs, save the most delusional, were knocked out by Searle in 1980.
It's only controversial for those who haven't read Turing's paper, or have completely failed to understand it.
Eliza, for example, highlights the massive failure in Turing's reasoning -- The question "can machines think" is not equivalent to the question "Are there imaginable digital computers which would do well in the imitation game?"
Weizenbaum found the response to his program from non-technical staff disturbing.
Secretaries and nontechnical administrative staff thought the machine was a "real" therapist, and spent hours revealing their personal problems to the program. When Weizenbaum informed his secretary that he, of course, had access to the logs of all the conversations, she reacted with outrage at this invasion of her privacy. Weizenbaum was shocked by this and similar incidents to find that such a simple program could so easily deceive a naive user into revealing personal information.
( From Eliza to A.L.I.C.E. [alicebot.org])
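A program of that sort really is just pattern matching. Here's a minimal sketch of the technique in Python (a toy illustration only - the rules below are made up for this example, not Weizenbaum's actual DOCTOR script):

```python
import re

# Ordered (pattern, response-template) rules, tried top to bottom.
# This is the whole "intelligence": reflect fragments of the user's
# input back at them, with a catch-all to keep the conversation going.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # catch-all when nothing matches

print(respond("I need a holiday"))   # → Why do you need a holiday?
print(respond("my boss hates me"))   # → Tell me more about your boss.
```

A handful of rules like these was enough to convince naive users they were talking to a therapist - which is the point of the anecdote above.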
Further, the so-called "Turing test" hasn't held still. Not even in his 1950 paper! (Turing proposed multiple variations on the test, if you'll recall.) Since then, a number of different versions of the "Turing test" have appeared, none of which (like Turing's variations) are equivalent to one another!
If you need a *really* simple argument: The results of any variation of the "Turing test" are completely subjective. A program that fools 100% of one set of interrogators may completely fail to fool even 10% of another set.
Re: (Score:2)
"In short then, I think that most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position."
If someone creates a human-level AI, no one will give a rat's fucking ass about Searle's semantic wanking ... like all philosophy after Hume, his was just a complete waste of time.
Re: (Score:1)
John Searle's argument was useful, if only to expose the prejudices behind many folks' conception of intelligence. If it doesn't have a neuron, it clearly can't be intelligent, right?
Re: (Score:3)
Maybe you should inform yourself what a Turing test actually is? Eliza didn't pass it nor would a normal chatbot - unless there is true intelligence behind it.
The problem with the Turing test is that it is hard and most ordinary people would probably fail it.
Re: (Score:2)
Maybe you should inform yourself what a Turing test actually is?
Please, enlighten me. There are at least two variations in Turing's 1950 paper, and countless others have appeared since then. (You'll also find tons of research showing that these variations are not equivalent to one another.) Which is the "real" Turing test?
Eliza didn't pass it
Weizenbaum, and countless others, would strongly disagree with you.
Turing's first failure was assuming that the questions "can machines think" and "Are there imaginable digital computers which would do well in the imitation game?" are equivalent. His
Inference is Hard (Score:4, Insightful)
A series of similar and increasingly difficult inference questions like this one can usually knock over an AI pretty easily, while not being too difficult for humans.
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
No, the Turing test is shit. Any AI that passes it would actually be far smarter than us humans since it would have to take into account the experience of all the things that itself wouldn't actually have to deal with--such as eating, pissing, and shitting. Why should an AI have to think about all the things us meatbags have to think about that aren't relevant to it? AIs don't have parents (well, not in the traditional sense anyway) and so won't have a human-like childhood experience to reflect upon, nor
Re: (Score:3)
Re: (Score:3)
Because if it can't model a meatbag, why would it be able to model an electron (so can't do physics), an industrial robot (so can't program them), a car (can't control vehicles), abstract entities (can't do logic or math) or anything else for that matter?
Imagination is not optional for intelligence. Intelligence is the ability to build mental models and manipulate them.
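A toy illustration of "build a model and manipulate it" (my own sketch, nothing from the post): before acting, simulate each candidate action in an internal model and pick the one whose predicted outcome is best.

```python
def simulate_throw(speed: float, gravity: float = 9.81) -> float:
    """Internal model: predicted range of a 45-degree throw (ideal physics)."""
    return speed ** 2 / gravity

def choose_speed(target: float, candidates: list[float]) -> float:
    """Manipulate the model: try each hypothetical speed, keep the
    one whose simulated landing point is closest to the target."""
    return min(candidates, key=lambda s: abs(simulate_throw(s) - target))

# Aiming for a target 10 m away, given three candidate throw speeds:
print(choose_speed(10.0, [5.0, 10.0, 15.0]))  # → 10.0
```

Nothing gets physically thrown during the search - the "imagination" is running the model on actions that never happen, which is exactly the capability the parent is describing.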
Re: (Score:1)
I like this thought. Not quite sure what counts as imagination though. Does the ability of a chess algorithm to model hypothetical future board positions count?
From my experience writing a very simple Rubik's Cube solver as an undergraduate project: I rejected the two simple solutions for a trivial case (requires 1 turn to solve). So it turned the opposite face, then turned the first face, th
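For what it's worth, that kind of lookahead is easy to sketch. Here's a toy example (my own illustration, using single-pile Nim rather than chess) of exhaustively modelling hypothetical future positions before moving:

```python
from functools import lru_cache

# Single-pile Nim: each player takes 1-3 objects; taking the last one wins.
# The "imagination" is the recursion: the program considers every
# hypothetical future position before committing to a move.
@lru_cache(maxsize=None)
def best_move(pile: int):
    """Return a winning move (1-3), or None if every move loses."""
    for take in range(1, min(3, pile) + 1):
        # A move wins if it takes the last object, or if it leaves
        # the opponent in a position with no winning reply.
        if pile - take == 0 or best_move(pile - take) is None:
            return take
    return None

print(best_move(4))  # → None: every move from 4 loses
print(best_move(5))  # → 1: leave the opponent stuck on 4
```

Whether exploring hypothetical positions like this counts as "imagination" is exactly the question the parent is asking.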
dodged the question (Score:1)
There are many criticisms of the Turing test...from many angles.
You address none of them; you simply stated the negative.
That's the problem...supporting the Turing paradigm means constantly avoiding the question (literally and figuratively if you think about it)
moving target (Score:2, Insightful)
This is just making the "Turing test" into a moving target. The Turing test makes sense, and if you have a long enough test you can eventually rule out the "abundance of if statements."
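To make "long enough" concrete, here's a back-of-envelope sketch (my own numbers - the 1000 distinct questions per turn is an arbitrary assumption, not anything from the post):

```python
# If the judge can pick from 1000 distinct questions at each turn,
# the space of possible conversations grows exponentially with length,
# so a fixed table of canned if-statement responses cannot cover it.
QUESTIONS_PER_TURN = 1000

def conversation_count(turns: int) -> int:
    """Number of distinct question sequences of the given length."""
    return QUESTIONS_PER_TURN ** turns

for turns in (1, 5, 10):
    print(turns, conversation_count(turns))
# Ten turns already gives 10^30 sequences - vastly more than any
# precomputed if/else table could enumerate.
```

That's the intuition behind "long enough": sustained, free-ranging interrogation eventually outruns any finite lookup table.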
Re: (Score:2)
By your reasoning, it's been a "moving target" since 1950 as Turing himself offered variations on his test in the original paper!
See, there isn't a single monolithic thing called "The Turing Test". There isn't even widespread agreement on the nature of the tests Turing proposed. When you say "The Turing test makes sense" you're saying that you have some exclusive insight into Turing that no one else has, and that you think that variation "makes sense". So, please, share your divinely revealed interp
Re: (Score:2)
Which makes sense - since the AI it's testing for is itself a moving target.
Re: (Score:2)
No, The Turing test doesn't make sense and nor does the new test.
To test intelligence, how about we set the AI,
AN ACTUAL INTELLIGENCE QUOTIENT TEST, is that not ****ing obvious.
How many AI can pass the same tests that an ape or bird could pass, pretty much none I'd be guessing.
Questions should pass a 'google test' where questions that can be answered by simply googling or using Wolfram Alpha are rejected.
How many meat-people would pass the Lovelace test? (Score:5, Interesting)
There's a Forest Service joke that the problem with designing trash cans is that the smartest bear is smarter than the dumbest tourist.
Re: (Score:2)
Too lazy to RTFP (read the fine PDF), but I assume the point is that some humans can pass the Lovelace test, whereas few or no machines currently can.
Re: (Score:2)
Re: (Score:2)
Human Intelligence (Score:3)
With the many assumptions made about what constitutes 'true' intelligence, how sure are we of the assumption that a human being of at least average intelligence would pass it? What's the research telling us there so far?
Or are human and artificial intelligence somehow considered to be mutually exclusive?
Re: (Score:2)
We will never have "real" AI (Score:3)
We will never have "real" AI because every time we approach it, someone moves the bar as to what is required. It's been happening since the mid-late '80s. We *have* what would have qualified as AI according to the rules of '86-'87.
Re: (Score:2)
Oh baloney. How about listing those rules. I don't ever recall seeing the handbook.
**whoosh** (Score:1)
exactly the point/problem with the Turing and 'teh singularity' paradigms
that's the correct analysis here
Re: (Score:2)
Bullshit. AI is AI, not expert systems as were popular in your time period. The idea that a complex expert system would suddenly become intelligent was a theory that has been thoroughly tested - today there are expert systems with more rules and faster inference processing than even the wildest dreams of those AI researchers.
The workings of human intelligence are still not fully understood, and the definition of intelligence is still not agreed upon. One thing is sure though - expert systems aren't intelligent
Re: (Score:2)
We will never have "real" AI because every time we approach it, someone moves the bar as to what is required.
Artificial bars. The requirement is simple: have a computer that thinks like a human.
You don't even know what algorithm the human brain uses. They didn't in the 80s, either. Figure that out before you complain about bars being moved.
Re: (Score:2)
Artificial bars. The requirement is simple: have a computer that thinks like a human.
Even that bar is way too high for current technology. Give me an AI that can outthink a rat. You can put a pair of glasses on a rat connected to a webcam and a rat can easily find food. Put that same webcam on an RC car and no AI in the world is even close to being able to compete. Based on current technology it would probably be easier to train a rat to drive the RC car to find food than it would be to train a computer.
That's my definition of intelligence. Something that can accurately navigate in the real
Re: (Score:2)
Now you'll say, "that's not what I meant!", and you will be right, but then people will complain that you're moving the goal posts.
The assumptions, they make a whoosh out of you (Score:3)
So yet another article on the Turing test which completely misses the point... First of all, computer scientists never considered the Turing test a valid test of "artificial intelligence". In fact, there's practically no conceivable reason for a computer scientist to test their artificial intelligence in any way other than by making it face problems of its own domain.
Perhaps there will come a day where we really have to ask "is this entertainment droid genuinely intelligent, or is it only pretending", possibly for determining whether it should have rights, but this kind of problem still doesn't lie in the foreseeable future.
On the other hand, as Turing himself put it in the paper where he introduced his thought experiment, in Wikipedia's phrasing: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"
In other words, the Turing test does not seek to answer the question of whether machines can think, because Turing considered the question meaningless, and noted that if a machine's thinking were outwardly indistinguishable from human thinking, then the whole question would become irrelevant.
There is a further erroneous assumption, at least in the summary - as of present times, even the most advanced computers and software are basically an abundance of if-statements, or for the low-level programmers among us, cmp and jmp mnemonics. If, on the other hand, we expand our definition of a "machine" to encompass every conceivable kind, for the materialist pragmatist it becomes easy to answer whether machines can ever think - yes of course, the brain is a machine that can think.
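You can see the cmp/jmp point one level up from machine code: even in a high-level language, a branch compiles down to a comparison followed by a conditional jump. Disassembling a trivial Python function shows it directly:

```python
import dis

def check(x):
    if x > 0:
        return "positive"
    return "non-positive"

# The printed bytecode contains a comparison (COMPARE_OP) followed by
# a conditional jump instruction (the exact jump opcode name varies
# by Python version) - the same cmp/jmp shape described above.
dis.dis(check)
```

The bigger the program, the more of these compare-and-jump pairs - an "abundance of if-statements" all the way down.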
Re: (Score:2)
If, on the other hand, we expand our definition of a "machine" to encompass every conceivable kind, for the materialist pragmatist it becomes easy to answer whether machines can ever think - yes of course, the brain is a machine that can think.
But here, you smuggle your answer in inside of your assumption. You are assuming what you are trying to prove.
Turing test is flawed (Score:2)
The Turing Test has flaws.
Firstly, it requires a human level of communication. One cannot use it to determine whether a crow (for example, or a cat or an octopus) is intelligent, since they cannot communicate at our level, even though these creatures demonstrate a surprising level [scienceblogs.com] of intelligence. Watch this video [youtube.com] and be astonished.
The extended video shows the crow taking the worm to its nest, then returning to grab the hooked wire and taking that back to the nest! Can we use the Turing Test to determine whe
Re: (Score:2)
The Turing test is usually presented as something that a machine either passes or fails, but since no machine has yet passed it, contests have focused on how long a machine can withstand questioning before the interviewer decides it's not human, or what percentage of interviewers it can fool for, say, ten minutes. So you can say one machine is more intelligent than another, even if you don't have a definition of intelligence apart from "intelligence is the ability to convince a human that you are human". To
Re: (Score:1)
well said...this should be the paradigm in computing design
tautology ontology (Score:1)
exactly...it's all based on a tautology...a faulty ontology. The Computability Function is not a computing paradigm, it's reductive.
'AI' is complex machines following instructions. That's what it is. The rest is people projecting their own emotions onto inanimate objects.
When I say "it's a tautology" what I mean is, it's based on linguistic distinctions only. Not actual, functional distinctions.
A tautology says, "If people think a pile of shit is a steak dinner, then it becomes a steak dinner"
That's an extr
Re: (Score:2)
'AI' is complex machines following instructions. That's what it is. The rest is people projecting their own emotions onto inanimate objects.
That is *great* phrasing - thank you. It's going into my notes and will probably make it into my writings (with attribution). Probably as a chapter heading.
The situation is not completely hopeless: there is a small number of people, myself included, who are working on actual AI. Most of the research uses programming to solve a (particular) problem.
Re: (Score:2)
The rest is people projecting their own emotions onto inanimate objects
How do you tell if the object is animate or not? Are you animate? Or am I just projecting my own emotions onto some entity making a post to slashdot? Perhaps 'projection' is the way we understand other humans... Is it ever useful to project onto entities other than humans (animate or not)?
Magic tricks (Score:2)
Lovelace 2.0 is easy to dupe (Score:2)
Lovelace is great, test is dumb (Score:2)
The Turing and Computability Function paradigm for computing is (finally) being rightly and fully criticized (ironically, just as we get a Turing Hollywood movie)
Ada Lovelace's theories ***do indeed*** provide the theoretical groundwork (along with others like Claude Shannon) to cleanse ourselves of Turing Test nonsense
However...this test...in TFA is not the test.
It's just a variation on the Turing test that still has the same tautology...it's a test of fooling a human in an artificial, one-time-only environment
I do wonder why it's taken so seriously (Score:1)
But what he had was a user requirements list. He didn't have a working implementation. He had "computer must be able to respond like a human to questions asked", so we have software that fits thos