The Lovelace Test Is Better Than the Turing Test At Detecting AI 285
meghan elizabeth writes If the Turing Test can be fooled by common trickery, it's time to consider we need a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program—it could be an idea, a novel, a piece of music, anything—can't be a hardware fluke. The machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.
dwarf fortress (Score:4, Insightful)
That is all.
Re: (Score:2)
Most humans couldn't pass that test (Score:5, Insightful)
When was the last time the average person created something original?
Re: (Score:2)
and there are quite a few human pairs for which one would not be able to convince the other that they were speaking intelligibly, either.
it is irrelevant. it is only necessary for one computer (however that's defined) to pass this test. i don't see how it's really any better than Turing though. it's a nice idea, but it seems even more vague than the Turing test.
Re: (Score:3)
Re: (Score:2)
People usually make the big mistake of taking themselves as the measure for everybody else.
Turing was a hell of a smart guy - I bet my mouse that he had this mindset ("everybody is more or less as smart as me") when he designed that Test.
By the way, there's a joke around here that states: the sum of all IQ on Earth is a constant - and the population is growing...
There are more educated people nowadays, but smarter? I'm afraid not - Turing didn't live to see what we are nowadays.
Re: (Score:2)
If it came up with the most efficient / fastest sort and search algorithms I might be impressed. It's still not intelligence.
Re: (Score:2)
Re: (Score:2)
I invented the soleless shoe. To fool the PHB and let me walk around the cube farm, almost barefoot.
My life's ambition (yet unfulfilled) is to invent a new crime. You'll hear about it and say: 'that has to be illegal'...Not as easy as it sounds. Damn 'Computer Fraud and Abuse Act' makes anything remotely related to a computer, that a federal judge doesn't like, a retroactive crime.
So the best I've got is giving dangerous advice on the 'net. BTW did you all know that you can make a miracle cleaning flui
Re: (Score:2)
Re:Most humans couldn't pass that test (Score:4, Interesting)
Probably every day, BUT it does go to the point with this one. We're still trying to recreate an idealized human rather than actually focusing on what intelligence is.
My cat is undeniably intelligent, almost certainly sentient although probably not particularly sapient. She works things out for herself and regularly astonishes me with the stuff she works out, and her absolute cunning when she's hunting mice. In fact, having recently worked out that I get unhappy when she brings mice from outside the house into my room, she now brings them into the back room and leaves them in her food bowl, despite me never having told her that that would be an acceptable place for her to place her snacks.
But has she created an original work? Well, no, other than perhaps artfully diabolical new ways to smash mice. But that's something she's programmed to do. She is, after all, a cat.
She'd fail the test, but she's probably vastly more intelligent, in the properly philosophical meaning of the term, than any machine devised to date.
Re: (Score:2)
Re: (Score:2)
The most ridiculous part being "must not be able to explain how". That doesn't even make sense for humans! If you ask artists, they'll tell you what their influences are; if you ask critics, they'll tell you why this particular piece of art was made this way and not in a completely different manner.
Fun fact: any program with yet-unseen bugs that make its behavior totally unexplainable to its developers has passed the test. That gives you either an idea of the soundness of this crap, or a deep insight of w
No human can pass that test (Score:2)
Well, no human alive today in any case. All so-called "original" works produced today are derivatives of older works (Shakespeare, folklore, etc) or quirks produced by the artist's mental state. Among deceased artists Van Gogh and Edgar Allan Poe are famous examples. Another reason why we should stop this "all rights reserved" nonsense of the traditional copyright system, where the artist is presumed to be a god that produces unique worlds out of nothing.
Absurd (Score:3)
That is a flatly ludicrous requirement, far in excess of what we would ever even consider applying to determine if even a human being is intelligent or not. Hell, if you were to apply that standard to human beings, ironically, many extremely intelligent people would fail that metric, because in hindsight, you can very often identify precisely how a particular thought or idea came out of a person.
Re:Absurd (Score:5, Funny)
Re: (Score:2)
The machine's designers must not be able to explain how their original code led to this new program
That is a flatly ludicrous requirement
Why do you think that? I guess we need actual examples.
Re: (Score:2)
And if it's declared intelligent, and then someone figures out how to explain how it came up with whatever the original content is, then does it just become less intelligent?
Evolutionary algorithms (Score:4, Insightful)
It raises the question of whether programs that are evolved are designed by the programmer, by the program, or by the process of evolution. And it also raises the philosophical question of whether we should be more humble and accept that the "creativity" we think makes humans intelligent could be nothing more than a process of the evolution of ideas (I hesitate to use the word meme) that we don't actually originate or control.
If we consider programs that can create things through evolution as "intelligent", that would ironically make natural selection intelligent, since DNA is a digital program that is evolved into complex things over time that can't be reduced to first principles.
Goal Post: Mysticism (Score:5, Insightful)
The machine's designers must not be able to explain how their original code led to this new program.
Whoa, whoa, whoa. I have a severe problem with this. This is like looking at obscurity and declaring it a soul. The measure of intelligence is that we can't understand it? Intelligence through obfuscation? There should be no way for a designer not to be able to figure out why their machine produced what it did, given enough debugging.
Re:Goal Post: Mysticism (Score:4)
The way I interpret the test is that the output must not be intended to be produced by some pre-programmed process. Not that you couldn't debug it - ruling that out would obviously be impossible on anything short of a quantum computer.
On the other hand, I claim that if I train a neural network on some sheet music, it would be able to produce a new melody. And that melody would not be in any way pre-programmed (like a child learning from experience is not pre-programmed), and it will be original. Where can I collect my prize?
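The melody-from-a-corpus idea can be sketched without a neural network at all. In this sketch a first-order Markov chain stands in for the network, and the training melodies are invented purely for illustration:

```python
import random
from collections import defaultdict

# "Training" corpus: a few made-up melodies, each a list of note names.
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "D", "C"],
    ["G", "E", "C", "D", "E"],
]

# Learn note-to-note transitions from the corpus.
transitions = defaultdict(list)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions[note])
        out.append(note)
    return out

print(generate("C", 8))
```

Every adjacent pair in the output was seen somewhere in the corpus, yet the melody as a whole need not appear in any training piece; whether that counts as "original" is exactly what the Lovelace test leaves vague.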
Re: (Score:3)
Re: (Score:2, Funny)
Not if they heard it before it was cool, then the AI just sold out.
Re: (Score:2)
There should be no way for a designer to not be able to figure out why their machine produced what it did given enough debugging.
Well... [slashdot.org]
Re: (Score:2)
That's the undergraduate view of AI that gets repeated at times in this place.
Not just yet. So instead of waiting until years of work is done understanding the physical basis of thought, the impatient want some sort of measure now.
Most of the programs I write... (Score:2)
If it's as easy as that "Turing Test" was... (Score:2)
...then all the computer will have to do is string together a series of random English words till it puts together something that sounds like a short story written by a Hungarian first-grader for whom English is a second language.
I don't care what they call the test. It's useless if the grading rubric is rigged to allow any idiot to write something that passes. Now, if you'll excuse me, I'm going to go see if I can talk ELIZA into writing me something that would function as an epistolary novel.
Already happened? (Score:3)
The machine's designers must not be able to explain how their original code led to this new program.
If I'm not mistaken, this has already happened when evolutionary algorithms were applied to hardware design: some slides [www-verimag.imag.fr]. The author of the program has no idea how the resulting circuit worked [bcs.org].
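The evolved-hardware setup in those slides can be sketched in miniature: the designer writes only a fitness function, and mutation plus selection finds a bitstring that satisfies it. The target bitstring below is a made-up stand-in for "circuit behaves as desired":

```python
import random

rng = random.Random(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for the desired circuit behavior

def fitness(genome):
    # How many bits of the genome match the desired behavior.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Copy the genome and flip one randomly chosen bit.
    g = genome[:]
    g[rng.randrange(len(g))] ^= 1
    return g

# Random starting population, then 200 rounds of select-and-mutate.
population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # elitism: keep the best half
    population = survivors + [mutate(rng.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))
```

Note the designer never specifies *which* bitstring wins, only how to score one; that gap between the code written and the solution found is what the "can't explain it" claim is pointing at.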
Re: (Score:3)
It's actually happened a lot; it's called 'emergent behavior'. The paper is old, poorly thought out, and written by people who want other people to think they are smart, but aren't actually smart enough to do science, you know: philosophers.
remember kids: philosophers are to science what homeopaths are to medicine.
Re: (Score:2)
Re: (Score:2)
people who want other people to think they are smart, but aren't actually smart enough to do science, you know: philosophers.
remember kids: philosophers are to science what homeopaths are to medicine.
And also remember that anyone with a Ph.D. in a science field isn't a scientist. They're a doctor of philosophy. Without philosophy, science doesn't exist.
Re: (Score:3)
Re: (Score:2)
Without science, philosophy is useless.
Philosophy created science without science's help.
Re: (Score:3)
Science was created because philosophy couldn't cut it. Galileo didn't bother trying to figure out the philosophical underpinnings of things rolling down planks or pendulum swings or the moons of Jupiter. He went straight to observations.
Re: (Score:2)
Re: (Score:2)
First, there is no "Scientific Method", with capital letters. There have been many philosophical attempts at trying to formally define science, but none are accurate and often fly in the face of how science is actually done.
If science doesn't exist before attempts to formalize science, then you are saying that Galileo wasn't doing science. The practice came before the theory and is a recorded historical fact. You demonstrate precisely the problem with philosophers - the theory override
Re: (Score:2)
There's nothing greater than a semantic argument on slashdot.
Arguing whether science is a form of philosophy is like arguing whether the Game of Thrones TV show is an example of art. You don't necessarily have any disagreement about what science is (even though that's what everybody is focussing on); you have a disagreement on the definition of philosophy (which, like art, is notoriously hard to pin down).
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Already been done (Score:2)
A computer infected with a worm and a virus led to them combining into a new program.
It was better and unique.
Computer Chess (Score:5, Insightful)
Re: Computer Chess (Score:2)
Chess algorithms are a measure of the budding ability of programmers.
When you can write a chess algorithm that can beat you at chess, ...
(When you can snatch the pebble from my hand, ... [Kung Fu])
Chess no let's play global thermonuclear war (Score:2)
what side do you want?
Re: (Score:2)
How many questions can YOU beg in one definition? (Score:2)
What's a "program" ("anything")?
What does it mean to be "engineered to produce" one?
What's a "hardware fluke"?
What constitutes "explanation" of how it was done?
Not. Even. Wrong.
Hell, Eliza had me going for a bit. (Score:2)
Till it hit me that it was just looking for keywords to continue on. Yes, I was new.
http://en.wikipedia.org/wiki/E... [wikipedia.org] the Doctor is in...
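The keyword-spotting trick that gives ELIZA away can be sketched in a few lines. The rules below are invented for illustration; they are not Weizenbaum's actual DOCTOR script:

```python
import re

# Toy ELIZA-style responder: scan the input for a keyword pattern and
# emit a canned reflection, falling back to a stock prompt.
RULES = [
    (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bcomputer\b", re.I), "Do computers worry you?"),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            # Reflect any captured fragment back at the user.
            return template.format(*m.groups())
    return "Please go on."  # no keyword matched

print(respond("I am afraid of spiders"))  # How long have you been afraid of spiders?
print(respond("My computer hates me"))    # Do computers worry you?
```

No state, no understanding, just pattern matching; which is why a few probing follow-up questions expose it immediately.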
seriously bad test (Score:2)
"The machine's designers must not be able to explain how their original code led to this new program". I know plenty of programmers that can't explain how the hell their code managed to produce certain results, and trust me it has nothing to do with the servers mysteriously developing AI.
The meta-turing test (Score:2)
The meta-Turing test counts a thing as intelligent if it seeks to devise and apply Turing tests to objects of its own creation.
--Lew Mammel, Jr.
Asimov already covered this... (Score:4, Insightful)
No one is passing the Turing Test (Score:3)
Just because someone sets some random people up for a five minute interview with a chatbot doesn't mean they're running a Turing Test.
Give people enough time to conduct a proper conversation, hell give them time to ask the chatbot for some original content. Do that and you'll be running a real Turing Test.
The reason you keep hearing about these simplified Turing Tests is those are the only tests people run because those are the only tests computers can pass. But passing a true Turing Test is still a great standard for detecting real AI, and something no one can even approach doing yet.
must be a black box! (Score:2)
The great thing about the Turing test was that it was a black box. It did not depend on assumptions about what the designers knew, or what hardware was used, or the like. And so far the only test trials I have heard of have been carefully arranged one-on-one. Give us a dozen Ukrainian teenagers, and pick the one (or two) which are non-human - that's a better test run.
But, of course, the ultimate test of machine intelligence is when the computer can sue your ass off and win in the Supreme Court.
Which Lovelace? (Score:2)
Re: (Score:2)
Well... (Score:2)
Sadly he couldn't explain the details and didn't know the experiment, but if what
Core Wars (Score:3)
You're describing Core War. [wikipedia.org] You can still get the source. [sourceforge.net]
Theology now? (Score:2)
This business of the developers not knowing how it works. It reminds me of the question "How can God create a being that sins? Doesn't that make Him responsible?". One way to answer that is that God withdraws his authority within a locus that we call the "soul". What happens there isn't his action. This implies that while knowingly taking actions that lead to wrong is immoral, withdrawing your power from a particular locus and opening things up to potential wrongs is not immoral.
It has nothing to d
Music Hobby (Score:2)
I've written music generators that produce "pleasant" music from scratch (by following time-tested harmonic, chord, and rhythm patterns and ratios). The music may pass the Lovelace test, but will probably never win any awards.
So if we finally figure out how the human brain works, it will fail the Lovelace test just because we know how it works? A silly rule.
How is that even possible? (Score:2)
Are we even sure people can do this?
not a great test (Score:2)
the lovelace test is not a great test if a machine has to create something original, all by itself. a lot of real humans can't even do that, so a lot of humans wouldn't pass the lovelace test either...
Think I solved it (Score:2)
> The machine's designers must not be able to explain how their original code led to this new program
This happens in my office all the time
Re:Lovelace? (Score:5, Funny)
Maybe they mean the "Linda Lovelace" test?
Re: (Score:2, Funny)
if a human cannot determine if they just got a hummer from a machine or another human?
Re: (Score:2)
Re:Lovelace? (Score:5, Funny)
Gives a whole new meaning to, "My computer went down on me..."
Re: (Score:3)
"the server sucked my job right in"
Re: (Score:3)
Given toys for sale and various videos across the Internet, I don't believe most people care whether it was a human or machine that just got them off.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
...
The less humans on this planet, the better.
Feel free to exit at any time to help mitigate the problem. I plan on staying around as long as possible, if for no other reason than just to piss off misanthropes like you.
Cheers,
Dave
Re: (Score:3)
Why is it called the Lovelace test?
Maybe it's because Ada envisioned that the machines that would become computers would one day be capable of all kinds of useful things [ieee.org], as opposed to Babbage who saw them strictly as number crunchers.
Ada Lovelace was just someone that translated a book for the world's first programmer.
Hardly. She didn't translate the book for a programmer, she translated the book for a machine. She was the programmer.
Re: (Score:2, Insightful)
It was passed as defined: 10 out of 30 judges (lay people) thought they were talking with a human when they were talking with a machine in 5 minute chat sessions. Whether passing this is any way significant is up for debate, but the test was passed.
Re:Turing test not passed. (Score:5, Insightful)
It was passed as defined
The Turing Test was not passed, and the only people who claim it was are ignorant reporters looking for an easy story with a catchy headline and tech morons who also believe Kevin Warwick is a cyborg.
The test was rigged in every way possible:
- judges told they were talking to a child
- that doesn't speak English as a primary language
- which was programmed with the express intent of misdirection
- and only "fooled" 30% of the judges.
And, even after all that, Cleverbot [cleverbot.com] did a much better job back in 2011 with a 60% success rate.
This Eugene test outcome was a complete farce -- something to remind everyone that Warwick still exists and to separate the ignorant and sensational tech news trash rags from the more legitimate sources of information.
Re:Turing test not passed. (Score:5, Informative)
It was passed as defined
The Turing Test was not passed, and the only people who claim it was are ignorant reporters looking for an easy story with a catchy headline
Indeed. There's a lot of misinformation out there about what Turing originally specified. The test is NOT simply "Can a computer have a reasonable conversation with an unsuspecting human so that the human will not figure out that the computer is not human?" By that standard, ELIZA passed the Turing test many decades ago.
The test also doesn't have some sort of magical "fool 30%" threshold -- Turing simply speculated that by the year 2000, AI would have progressed enough that it could fool 30% of "interrogators" (more on that term below). The 30% is NOT a threshold for passing the test -- it was just a statement by Turing about how often AI would pass the test by the year 2000.
So what was the test?
The test involves three entities: an "interrogator," a computer, and a normal human responder. The interrogator is assumed to be well-educated and familiar with the nature of the test. The interrogator has five minutes to question both the computer and the normal human in order to determine which is the actual human. The interrogator is assumed to bring an intelligent skepticism to the test -- the standard is not just trying to have a normal conversation, but instead the interrogator would actively probe the intelligence of the AI and the human, designing queries which would find even small flaws or inconsistencies that would suggest the lack of complex cognitive understanding.
Turing's article actually gives an example of the type of dialogue the interrogator should try -- it involves a relatively high-level debate about a Shakespearean sonnet. The interrogator questions the AI about the meaning of the sonnet and tries to identify whether the AI can evaluate the interrogator's suggestions on substituting new words or phrases into the poem. The AI is supposed to detect various types of errors requiring considerable fluency in English and creativity -- like recognizing that a suggested change in the poem wouldn't fit the meter, or it wouldn't be idiomatic English, or the meaning would make an inappropriate metaphor in the context of the poem.
THAT'S the sort of "intelligence" Turing was envisioning. The "interrogator" would have these complex discussions with both the AI and the human, and then render a verdict.
Now, compare that to the situation in TFS where the claim is that the Turing test was "passed" by a chatbot fooling people. That's crap. The chatbot in question, as parent noted, was not even fluent in the language of the interrogator, it was deliberately evasive and nonresponsive (instead of Turing's example of AI's and humans having willing debates with the interrogator), there was no human to compare the chatbot to, the interrogators were apparently not asking probing questions to determine the nature of the "intelligence" (and it's not even clear whether the interrogators knew what their role was, the nature of the test, whether they might be chatting with AI, etc.).
Thus, Turing's test -- as originally described -- was nowhere close to "passed." Today's chatbots can't even carry on a normal small-talk discussion for 30 seconds with a probing interrogator without sounding stupid, evasive, non-responsive, mentally ill, and/or making incredibly ridiculous errors in common idiomatic English.
In contrast, Turing was predicting that interrogators would have to be debating artistic substitutions of idiomatic and metaphorical English usage in Shakespeare's sonnets to differentiate a computer from a real (presumably quite intelligent) human by the year 2000. In effect, Turing seemed to assume that he would talk to the AI in the way he might debate things with a rather intelligent peer or colleague.
Turing was wrong about his predictions. But that doesn't mean his test is invalid -- to the contrary, his standard was so ridiculously high that we are nowhere close to having AI that could pass it.
Re: (Score:2)
where are the mod points when you need them....
I never cared much about the Turing test, but this explanation makes me want to go read his original papers on it.
Re: (Score:3, Interesting)
Turing was wrong about his predictions. But that doesn't mean his test is invalid
Imho it is.
Suppose we manage to create a strong AI. It's fully conscious, fully aware, but for some quirk we cannot understand, it's 100% honest.
Such an AI would never pass the Turing test, because it would never try to pass off as human, and any intelligent human could ask it questions that only a machine could answer in limited time.
Re:Turing test not passed. (Score:5, Insightful)
Turing was wrong about his predictions. But that doesn't mean his test is invalid
Imho it is.
Suppose we manage to create a strong AI. It's fully conscious, fully aware, but for some quirk we cannot understand, it's 100% honest.
Such an AI would never pass the Turing test, because it would never try to pass off as human
That sounds like a legit point at first, but think about it for a sec. Programming a computer to lie and be evasive about its nature is easy, and many chatbots can already do that. Asking a strong AI "are you a computer?" or "what did you have for breakfast?" would not be useful for evaluating intelligence. Getting the AI to debate an intellectual topic, on the other hand, will be less likely to require deception but would be a better measure of intelligence. That's another fundamental point people miss: The point of the Turing test was to imitate human INTELLIGENCE, NOT to pretend to be a physical human.
A knowledgeable interrogator trying to evaluate intelligence would thus likely be more interested in asking intellectual questions, rather than queries just designed to test whether the computer can make up some nonsense about itself.
Re: (Score:3)
Good points. I would add one more: people lie. There is nothing to stop the human comparison from lying and saying he was a computer as well. If both say they are a computer, that should level the playing field, so that they both need to be judged on the merits of the debate.
Re: (Score:2)
I doubt most humans could pass that test either.
Exactly. That's part of my point. A lot of people are acting like the test was "passed" by an AI pretending to be a Ukrainian teenager conversing in his non-native language and acting like an evasive weirdo. Turing's standard for "intelligence" was obviously much higher. It sounds like his AI would probably be pitted against an adult human from the top 5-10% of intelligence in his test.
And isn't that a potential standard for evaluating when true AI has arrived? No one would have cared about Deep Blue
Re:Turing test not passed. (Score:4, Interesting)
The thing that Watson (and AI in general) has difficulty with is imagination; it has no experience of the real world, so if you asked it something like what would happen if you ran down the street with a bucket of water, it would be stuck. Humans who have never run with a bucket of water will automatically visualise the situation and give the right answer, just as everyone who read the question has just done in their mind. OTOH a graphics engine can easily show you what would happen to the bucket of water, because it does have a limited knowledge of the physical world.
This is the problem with putting AI in a box labeled "Turing test", it (arrogantly) assumes that human conversation is the only definition of intelligence. I'm pretty sure Turing himself would vigorously dispute that assumption if he were alive today.
Re: (Score:2)
No, it says natural language is the best way to measure human intelligence.
http://www.csee.umbc.edu/cours... [umbc.edu] "Computing Machinery and Intelligence":
Turing
Re:Turing test not passed. (Score:5, Informative)
Alas, the test that was "passed" was not actually the test Turing proposed.
So it passed the Turingish test.
Re: (Score:2)
explain to my poor retard self how it has not passed
By definition, one in three means it failed to convince the average layman; when it gets better than one in two I will give it a pass.
Personally I think it's achievable today [youtube.com], but as much as I admire Turing, it's entirely irrelevant to the question of intelligence. It's mostly philosophical masturbation by people who misunderstand the modern definition of intelligent behaviour. For example, I can't get a sensible reply when asking an octopus about its garden, but there is no denying it's a remarkably intell
Re: (Score:2)
The criteria of the test were defined; the criteria of the test were met. Please share your superior intellect and explain to my poor retard self how it has not passed.
Because the results are not reproducible. The logical conclusion is that there was some problem with the experiment.
AC and Turing Test (Score:2)
Re:Turing test not passed. (Score:5, Informative)
That's because they keep shifting the goalposts.
They are shifting them again. This new test includes this requirement: The machine's designers must not be able to explain how their original code led to this new program. So now anything we understand is not intelligence??? So if someone figures out how the brain works, and is able to describe its function, then people will no longer be intelligent? Intelligence is a characteristic of behavior. If it behaves intelligently, then it is intelligent. The underlying mechanism should be irrelevant.
philosophical discussion only not science (Score:2, Interesting)
No.
you describe "behaviorism" which is a thoroughly discredited and reductive theory
the ***whole conversation*** is about ***the underlying mechanism***
the "Lovelace Test" is more rigorous, but how it will affect computing I cannot say, b
fixed link (Score:2)
Sorry about the bad link. [wikipedia.org]
Re: (Score:3, Insightful)
One of my friends is a philosophy post-doc and he told me many times that in philosophy the gold standard for intelligence is intelligent behaviour. Of course he has some footnotes to add, notably that intelligent things can appear to be bricks if you cut off all their actuators, but to say that this particular variant of ‘behaviourism’ as you call it is discredited is disingenuous. In particular, if one could hypothetically replace someone's brain with a computer and not know the difference the
Re:philosophical discussion only not science (Score:5, Insightful)
No, he's not describing behaviourism. He's saying this:
If it behaves intelligently, then it is intelligent.
That's a reasonable statement to make, and if you're disagreeing with that statement, you need to say why. Converting it to a strawman and then making a bald claim that the strawman is "discredited" is a cheap rhetorical trick. And then you go on to talk about free will, which has no direct relationship with intelligence anyway. OK, I get it, you want to turn the conversation around to being about free will, because that's your ax, but telling someone their perfectly reasonable statements are "simply wrong" is a shitty way to do it. OP's point, which you're deliberately missing, is that whatever intelligence is, it is not an observer-relative thing which demands that the observer be unaware of the mechanism. If you want to engage in debate with him, try addressing that specific point, rather than a bunch of points he never made about a subject he's not discussing. And if you want to talk about free will and about how behaviourism is "discredited" maybe you could at least make a couple of points in favour of that argument, for those of us who might be interested anyway. Maybe then we can see how your belief relates to what is actually being said.
Anyway, what you're both missing is the practical issue with "The machine's designers must not be able to explain how their original code led to this new program." The machine's designers can lie, or be incapable of coming up with an explanation despite one existing, so this is a completely ill-defined criterion - which is what we're trying to get away from.
Re: (Score:2)
Just because it acts close enough to human to pass for one doesn't mean it would be the kind of person you'd want to collaborate with or confide in.
Re: (Score:3)
So now anything we understand is not intelligence?
I heard a great anecdote about this from an MIT professor on youtube [youtube.com]. Back in the '80s the professor developed an AI program that could translate equations into the handful of standard forms required by calculus and solve them. A student heard about this and went calling to see the program in action. The professor spent an hour explaining the algorithm; when the student finally understood, he exclaimed, "That's not intelligent, it's doing calculus the same way I do".
It could be argued that neither the st
Re: (Score:3)
Re: 'simply' following rules (Score:3)
One of Feynman's memoirs includes the haha-only-serious observation that mathematical theorems are either unproven or trivial, and this is simply a re-statement of the same principle.
And actually, there's a lot of speculation about whether colonies exhibit intelligence or consciousness (eg Hofstadter's Aunt Hillary, but also Jack Cohen & Ian Stewart's Heaven - they also did the Science of the Discworld series with pterry).
Re: (Score:2)
They are shifting them again.
Read this post [slashdot.org]. Also consider that the test proposed comes from Ada Lovelace, who predated the Turing test by a long way.
Re:Turing test not passed. (Score:5, Interesting)
That's because they keep shifting the goalposts.
I don't think "a chatbot isn't AI and hasn't been since the 1960s when they were invented, whether you call it a doctor or a Ukrainian kid doesn't make any difference" counts as shifting the goalposts.
Furthermore, reproducible results are an important part of science. Let him release his source code, or explain his algorithm so we can reproduce it. Anything less is not science.
Re: (Score:2)
But could an infinite number of ACs?
Re: (Score:2)
How do regurgitated one-liners make you feel?
Re: (Score:2)
One of the things I love about programming is the moment you have to remind yourself that your program is simply executing algorithms that you told it. Depending on how clever the algorithms are it can appear as if the computer is thinking for itself. Programming allows you to encode intelligence in non-thinking machines.
No... programming does not encode intelligence in a machine. Intelligence indicates the ability to think for itself and come up with a creative answer that isn't part of its original programming. When you write a program, all you are doing is telling the computer what to do given a specific input. There is no intelligence involved.