Where's HAL 9000?
An anonymous reader writes "With entrants to this year's Loebner Prize, the annual Turing Test designed to identify a thinking machine, demonstrating that chatbots are still a long way from passing as convincing humans, this article asks: what happened to the quest to develop a strong AI? 'The problem Loebner has is that computer scientists in universities and large tech firms, the people with the skills and resources best suited to building a machine capable of acting like a human, are generally not focused on passing the Turing Test. ... And while passing the Turing Test would be a landmark achievement in the field of AI, the test's focus on having the computer fool a human is a distraction. Prominent AI researchers, like Google's head of R&D Peter Norvig, have compared the Turing Test's requirement that a machine fool a judge into thinking it is talking to a human to demanding that an aircraft maker construct a plane that is indistinguishable from a bird.'"
It's not just specialization, there is also fear (Score:5, Interesting)
He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back. But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.
For every utopian vision in science fiction and pop culture of a future where AI is our pal, helping us out and making our lives more leisurely, there is another dystopian counter-vision of a future where AI becomes the enemy of humans, making our lives into a nightmare. A vision of a future where AI equals, and then inevitably surpasses, human intelligence touches a very deep nerve in the human psyche. Human fear of being made obsolete by technology has a long history. And more recently, the fear of technology becoming a direct *enemy* has grown more and more prevalent--from the aforementioned HAL 9000 to Skynet. There is a real dystopian counter-vision to Loebner's utopianism.
People aren't just indifferent or uninterested in AI. I think there is a part of us, maybe not even a part we're always conscious of, that's very scared of it.
Re: (Score:2)
He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back. But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.
Does that have anything to do with the progress of research? I doubt that AI researchers themselves are afraid of spawning a 'true' AI; I would think it has more to do with the practicality of the technology and the resources available.
Well I Disagree (Score:5, Insightful)
He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back.
No, that's not true ... that's not at all what is holding "general AI" back. What's holding "general AI" back is that there is no known way to implement it. Specialized AI is actually moving forward the only way we know how, with actual results. Without further research in specialized AI, we would get no closer to "generalized AI" -- and I keep using quotes around that because it's such a complete misnomer and holy grail that we aren't going to see it any time soon.
When I studied this stuff there were two hot approaches. One was logic engines and expert systems that could be generalized to the point of encompassing all knowledge. Yeah, good luck with that. How does one codify creativity? The other approach was to model neurons in software, on the theory that someday, when we have strong enough computers, they will just emulate brains and become a generalized thinking AI. Again, the further we delved into neurons, the more we realized how wrong our basic assumptions were -- let alone the infeasibility of emulating the cascading currents across them.
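To make the second approach concrete, here is a minimal sketch (mine, purely illustrative, in Python) of the kind of "neuron in software" that era modeled: a single perceptron learning OR. As noted above, real neurons turned out to be nothing like this weighted sum.

    # Hedged sketch: one software "neuron" (a perceptron) learning OR.
    # Real neurons' cascading currents are nothing like this simple sum.
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    targets = [0, 1, 1, 1]                    # OR truth table
    w, bias, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                       # a few training passes
        for (x1, x2), t in zip(inputs, targets):
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = t - out                     # perceptron learning rule
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            bias += rate * err

    print([1 if w[0] * a + w[1] * b + bias > 0 else 0 for a, b in inputs])
    # prints [0, 1, 1, 1]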
"General AI" is holding itself back in the same way that "there is no such thing as a free lunch" is holding back our free energy dreams.
But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.
We're so far from that, it amuses me to hear even semi-serious questions regarding it. It is not the malice of an AI system you should fear; it is the manifestation of the incompetence of the people who developed it, resulting in an error (like sounding an alarm because a sensor misfired, and responding by launching all nuclear weapons, since that is what you perceive your enemy to have just done), that should be feared!
People aren't just indifferent or uninterested in AI. I think there is a part of us, maybe not even a part we're always conscious of, that's very scared of it.
People are obsessed with the philosophical and financial prospects of an intelligent computer system, but nobody's telling me how to implement it -- that's just hand waving so they can get to the interesting stuff. Right now, rule-based systems, heuristics, statistics, Bayes' Theorem, support vector machines, etc., will get you far further than any system that is just supposed to "learn" any new environment. All successful AI to this point has been built with the entire environment in mind during construction.
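To make that contrast concrete, here is a hedged sketch (mine, with a made-up two-line corpus) of the sort of "statistics plus Bayes' Theorem" system meant above: a naive Bayes filter whose entire environment -- the tokens and the two classes -- is fixed at construction time.

    import math
    from collections import Counter

    # Toy training corpus, fixed in advance -- the "entire environment".
    spam = ["buy cheap pills now", "cheap pills cheap"]
    ham = ["meeting notes attached", "lunch meeting now"]

    def counts(docs):
        return Counter(w for d in docs for w in d.split())

    spam_c, ham_c = counts(spam), counts(ham)
    vocab = set(spam_c) | set(ham_c)

    def log_prob(word_counts, msg):
        total = sum(word_counts.values())
        # Laplace smoothing so unseen words don't zero everything out.
        return sum(math.log((word_counts[w] + 1) / (total + len(vocab)))
                   for w in msg.split())

    def classify(msg):
        # Equal class priors assumed in this toy.
        return "spam" if log_prob(spam_c, msg) > log_prob(ham_c, msg) else "ham"

    print(classify("cheap pills"))       # spam
    print(classify("meeting at lunch"))  # ham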
Re: (Score:2)
In some stories AI's are both enemies and friends.
http://www.schlockmercenary.com/2003-07-28 [schlockmercenary.com]
The issue is once an AI truly has that intelligence part down, then you get into its motivations, and that is the part that scares people.
Can you trust the motivations of someone who is not only smarter than you, but doesn't value the same things you do in the same ways?
Whether it be a person or a machine, the question comes up, and it's not a question that can truly be answered except in specific circumstances.
NO NO AND NO (Score:5, Insightful)
it's not fear.
it's not "we could do it but we just don't want to".
it's not "the government has brains in a jar already and is suppressing research".
those are just excuses which make for sometimes good fiction - and sometimes a career for people selling the idea as non-fiction.
but the real reason is that it is just EXTRA FRIGGING HARD.
it's hard enough for a human who doesn't give a shit to pass a turing test. but imagine if you could really build a machine that would pass as a good judge, politician, network admin, science fiction writer... or one that could explain to us what intelligence really even is, since we are unable to do it ourselves.
it's not as hard/impossible as teleportation, but close to it. just because it's been in scifi for ages doesn't mean that we're on the verge of a real breakthrough, and just because we can imagine stories about it doesn't mean that we could build a machine that could imagine those stories for us. it's not a matter of throwing money at the issue or throwing scientists at it. some see self-learning neural networks as the way to go, but that's like saying that you only need to grow brain cells in a vat while talking to it and *bam* you have a person.
truth is that there are shitloads more "AI researchers" just imagining ethical wishwash implications of what would result from having real AI than there are those who have any idea how to practically build one. simply because it's much easier to speculate on nonsense than to do real shit in this matter.
(in scifi there's been a recent trend of separating things into "virtual intelligences", which are much more plausible -- basically advanced turing bots that wouldn't really pass the test -- which is sort of refreshing)
Re: (Score:3)
Maybe we just don't need it? Our closest apps to AI are Siri and whatever the Android voice app is. All they do is retrieve information. Same as a google search. Nearly everyone under 30 (and quite a few over that) grew up with computers and most know how to use them. True turing AI at this point would only really benefit people who don't know how to find information themselves.
Re: (Score:2)
bullshit. true turing ai could do your homework. it would be really, really useful in sorting tasks, evaluating designs, coming up with mechanical designs... it's just that people don't usually think too far when they think of the turing test.
imagine if your turing test subject was torvalds at 20 years old. imagine if you had a machine that could fool you by cobbling together a new operating system for you. an advanced enough turing test could supply you with plenty of new solutions to problems and another t…
Re: (Score:3)
Our closest apps to AI are Siri and whatever the Android voice app is. All they do is retrieve information. Same as a google search.
I would say the closest "app" to what you describe, that would still fall under the category of specialized AI, would be Watson [wikipedia.org].
It too is a huge information retrieval system, but specifically designed to play Jeopardy and play it well. It already bested the top two human players.
Of course it is still only a specialized AI engine, nowhere NEAR expert AI, and it most certainly does not think. Hell, it can't even read visually, see, hear, or do a lot of the other things required to truly play a game of Jeopardy.
Re: (Score:2)
While there is fear, it's not really relevant to the lack of progress. The people who have this fear are not the same as the ones who are doing the research to advance the technology; or if they are, it's certainly not inhibiting them.
Re:It's not just specialization, there is also fea (Score:4, Informative)
I'm not. AI is to real intelligence what margarine is to butter - it's artificial. It isn't real. You're never going to get a Turing computer to actually think, although some future chemical or something machine may.
However, you could get to the point where intelligence was simulated well enough that it appeared to be sentient. [wikipedia.org]
Which leads to what I fear, that people like those in PETA will start a "machine rights" movement, where it may be illegal for me to shut off a machine I built myself!
Luckily, I'm not likely to live long enough to see it. Some of you might, though.
Re: (Score:2)
Mental State != Computational State
Searle.
Chinese room.
http://www.youtube.com/watch?v=TryOC83PH1g [youtube.com]
Re: (Score:2)
Counter fail!
Most "refutations" of the CRA fall into four camps:
1) Deny it outright and posit a magical explanation (Systems reply)
2) Ignore the premise that the CR supports and pick on the illustration (most of the others)
3) Slip semantic content in and hope no one notices (robot reply)
4) Pretend that a particular system is not equivalent to other computational systems (ANNs are somehow different from TMs)
As it stands now, no one has shown that syntactic content is sufficient for semantic content.
Re: (Score:2)
Doug Hofstadter refutes it by pointing out that no human could ever perform the actions attributed to the human in the Chinese Room. I believe he also (it's been a while since I read "Le Ton beau de Marot") goes on to mention that the analogy is dishonest, by virtue of the inclusion of a human in the room.
If the instructions in the book suffice to understand and reply to the room's input, then there is no need for a human to occupy the room. The legerdemain is offloading the capability of reading the book an…
Re: (Score:2)
Doug Hofstadter refutes it by pointing out that no human could ever perform the actions attributed to the human in the Chinese Room.
That would be #2 "Ignore the premise that the CR supports and pick on the illustration"
The point of the Turing test is that whether or not the machine is "really" intelligent does not matter
The Turing test is the "best you can do" with behaviorism. We had a fairly recent "cognitive revolution" in psychology due to the failures of behaviorism.
Turing set out to answer the question "Can Machines Think?" and this was his solution, from the best approach of his day, in dealing with the problem of other minds.
Re: (Score:2)
Most "refutations" of the CRA fall in to four camps:
1) Deny it outright and posit a magical explanation (Systems reply)
This is the correct one. My neurons do not understand English, but the mind computed by those neurons does. Similarly, the man in the Chinese room does not understand Chinese, but the mind he (along with the program instructions) computes does. No magic required.
Re: (Score:3)
I'm not, now, trying to address that issue. I'm saying that the CRA also does not address that issue
Really? The whole point of the illustration (the room, paper, etc.) is to help explain/bolster the assertion that syntax is insufficient for semantics. You're confusing the example for the claim. Hence, you fall under #2 above.
Yes, the point of the illustration is to explain and bolster the assertion that syntax is insufficient for semantics. My point is that it fails to do so. Do you have an explanation for why neurons can cause a capacity for Chinese without themselves having a capacity for Chinese, while the man is unable to cause a capacity for Chinese without himself having a capacity for Chinese?
Re: (Score:2)
I do not believe that it is possible to put together these rules without relating the characters to real-world concepts, i. e. without making the person understand them.
Well, then you don't buy into computationalism, or I've misunderstood your position.
The point is that syntax alone is insufficient for semantics.
The bits of paper, the room, the rule-book, and the tortured subject in the room are completely irrelevant to the CRA.
Re: (Score:3)
Is there a human unconscious?
Re:It's not just specialization, there is also fea (Score:4, Insightful)
it's artificial. It isn't real. You're never going to get a Turing computer to actually think
Why not? We evolved into sentient beings from non-sentient organic matter; why couldn't the same thing be possible with silicon-based intelligence?
What is thinking then? (Score:3, Insightful)
This is getting closer to the true issue here: no one can actually point to a "thought". We can run MRIs, we can do all the fluorescing in rat brains that we want, but at no point can we, as humans, point to a thought.
All we can see and know about, at the moment, is the machinery. The brain is just the machinery for our minds: neurons, synapses, etc. A computer system that is entered for the Turing test (or Deep Blue, or the Jeopardy machine (forget its name)) is, again, just that, the machinery. Each set of…
Re: (Score:2)
Do you have any scientific basis for these claims or are you just making things up?
Do you mean besides the fact that we are all hives of single-celled organisms?
Re: (Score:2)
Do you have any scientific basis for these claims or are you just making things up?
Re:It's not just specialization, there is also fea (Score:5, Insightful)
You're never going to get a Turing computer to actually think, although some future chemical or something machine may.
Never say never :) It is hard to say whether an AI could ever accomplish thinking (or sentience) or not. It seems to be an emergent quality and I doubt whether it is chemical or electrical will matter much. And for the most part appearing sentient might as well be sentient. Outside of myself I can only assume others are sentient because they appear so and because we are genetically similar. There is not exactly a good standard or definition of what is or isn't sentient that doesn't depend on the bias of being human.
Re: (Score:2)
It is hard to say whether an AI could ever accomplish thinking (or sentience) or not.
It's obvious that AI can exist. What's not obvious is whether we'll ever be smart enough to manufacture one. This is a similar position to the situation with extraterrestrial life. It's almost certain that it exists, but completely impractical to expect to ever contact it.
Re:It's not just specialization, there is also fea (Score:4, Funny)
[I]f you want to discuss whether intelligence is an emergent or inherent property we could be here all day, at least.
It is my observation that quite a few of us are here all day.
Welfare for sentient entities (Score:3)
appearing sentient might as well be sentient
I disagree, and be very careful making assertions like this.
I hope you agree that sentient entities, like you and me, ought to have rights.
And it's entirely possible that next year someone will come out with an app that runs on my MacBook and very much appears to be sentient. And if appearing sentient might as well be sentient, then it could very well become a crime to power off my MacBook after I've launched said app.
So there should be a pretty high threshold for…
Re:It's not just specialization, there is also fea (Score:4, Insightful)
I'm not. AI is to real intelligence what margarine is to butter - it's artificial. It isn't real. You're never going to get a Turing computer to actually think, although some future chemical or something machine may.
Why do you think that? Silicon is also a chemical. There's nothing magical about liquid chemicals.
Cognitive scientists typically try to analyze cognitive systems in terms of Marr's levels of analysis [wikipedia.org]. Cognitive systems solve some problem (the computational level) through some manipulation of percepts and memory (the algorithmic/representational level) using some physical system (the implementational level). The mapping from neurons and chemical slushes to algorithms is extremely complex, so most work focuses on providing a computational level characterization of the problem, occasionally proposing a specific algorithm. Since the same computational goal can be accomplished by different algorithms (compare bubblesort to quicksort, or particle filters to importance sampling, or audio localization in owls to audio localization in cats), and the same algorithm can be run with different implementations (consider the same source code compiled for ARM or x86), it's just a waste of time and energy to insist that we recover all of the computational, algorithmic, and implementational details simultaneously.
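To make the bubblesort/quicksort point concrete (my toy, not anything from Marr): two algorithms with different algorithmic-level stories that satisfy the same computational-level description, sorting, each of which could in turn run on many implementations.

    # Same computational goal, two different algorithms.
    def bubblesort(xs):
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]  # swap neighbors
        return xs

    def quicksort(xs):
        if len(xs) <= 1:
            return list(xs)
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot]) + [pivot]
                + quicksort([x for x in rest if x >= pivot]))

    data = [5, 2, 9, 1, 5, 6]
    assert bubblesort(data) == quicksort(data) == sorted(data)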
However, you could get to the point where intelligence was simulated well enough that it appeared to be sentient. [wikipedia.org]
I've never found the Chinese room argument convincing. It just baldly asserts "of course the resulting system is not sentient!" Why not?
I disagree with the article. People haven't given up on strong AI, we've just realized that it is enormously more difficult than we originally thought. If today's best minds were to attack the problem, we'd end up with a hacked-together system that barely worked. Asking why computer scientists aren't working on strong AI is like asking why physicists aren't working on intergalactic teleportation: it's really really hard and there's a lot to accomplish on the way.
Re: (Score:2)
Which leads to what I fear, that people like those in PETA will start a "machine rights" movement, where it may be illegal for me to shut off a machine I built myself!
You're afraid of these people [peta.org]? I spend more time lying awake worrying about my Furby.
Re: (Score:2, Interesting)
We're machines. Very nice ones, but machines. We have information storage, base programming, learning and sensory input. All of this happens by use of our real, observable, bodily mechanisms. As far as I know there's no evidence to the contrary (read as: magic).
So it follows that, assuming we can eventually replicate the function of any real, observable mechanism, there's no reason why we can't recreate genuine, humanesque intelligence. Whether the component hardware is "wet" or not is just a manufacturing…
Re: (Score:3)
I'm not. AI is to real intelligence what margarine is to butter - it's artificial. It isn't real. You're never going to get a Turing computer to actually think, although some future chemical or something machine may. However, you could get to the point where intelligence was simulated well enough that it appeared to be sentient.
Intelligence isn't a physical thing – it's a process. It makes no difference whether that process happens in meat or in silicon. This is why Searle is a moron. Any argument a…
Re: (Score:2)
It's no less "artificial" than we are. Ultimately we are just a bunch of electrons being pushed around via chemistry and an internal power supply (our stomach). For a computer, it's being pushed around due to an external power supply, but it's still electrons flying around. Who is to say one is 'real' and one isn't?
Besides, once it does reach sentience and starts adapting, who can prove it isn't?
Turing Test is a Joke (Score:4, Insightful)
It's like asking for the world's best stage magician to create a real hovering woman.
"If you REALLY fool me, it will be true!"
Nonsense.
Why not Zoidbe^H^H Watson? (Score:3)
Not AI enough? - http://www-03.ibm.com/innovation/us/watson/index.html [ibm.com]
AI research is haunted... (Score:5, Interesting)
Re: (Score:2, Funny)
The operator said that AI Research is calling from inside the house...
HAL? (Score:4, Funny)
Forget HAL, where is Cherry 2000!
Too hard (Score:4, Insightful)
Strong AI has always been the stuff of sci-fi. Not because it's impossible, but because it's impractically difficult. We can barely model how a single protein folds, even with a worldwide network of computers. Does anyone seriously expect that we can model intelligence with similar resources?
Evolution has been working on us for millions of years. It will probably take us hundreds or thousands of years before we get strong AI.
Re: (Score:2)
It takes us over 5 years to train most humans well enough to pass a Turing test; it's reasonable to think that it might take longer to train a machine.
Re:Too hard (Score:5, Insightful)
Evolution has been working on us for millions of years. It will probably take us hundreds or thousands before we get strong AI.
It also took evolution millions of years to get flight. You're comparing apples and oranges. Evolution has no intelligence directing its actions, whereas sometimes human activity does.
Dear Baden Powell
I am afraid I am not in the flight for "aerial navigation". I was greatly interested in your work with kites; but I have not the smallest molecule of faith in aerial navigation other than ballooning or of expectation of good results from any of the trials we hear of. So you will understand that I would not care to be a member of the aeronautical Society.
Yours truly Kelvin
This, a mere 13 years before the first airplane crossing of the English Channel.
Dijkstra said it best (Score:5, Insightful)
Re:Dijkstra said it best (Score:5, Interesting)
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."
I can see the point, but that also applies to humans. There's a whole lot of research going on to determine exactly what it means for us to "think." A lot of it implies that maybe what we take for granted as our reasoning process to make decisions might just be justification for decisions that are already made. Take this experiment, which I first read about in The Believing Brain [amazon.com] and found also described on this site [timothycomeau.com] when I googled for it.
One of the most dramatic demonstrations of the illusion of the unified self comes from the neuroscientists Michael Gazzaniga and Roger Sperry, who showed that when surgeons cut the corpus callosum joining the cerebral hemispheres, they literally cut the self in two, and each hemisphere can exercise free will without the other one’s advice or consent. Even more disconcertingly, the left hemisphere constantly weaves a coherent but false account of the behavior chosen without its knowledge by the right. For example, if an experimenter flashes the command “WALK” to the right hemisphere (by keeping it in the part of the visual field that only the right hemisphere can see), the person will comply with the request and begin to walk out of the room. But when the person (specifically, the person’s left hemisphere) is asked why he just got up he will say, in all sincerity, “To get a Coke” – rather than, “I don’t really know” or “The urge just came over me” or “You’ve been testing me for years since I had the surgery, and sometimes you get me to do things but I don’t know exactly what you asked me to do”.
Basically, what I'm saying is that if all you want is an intelligent machine, making it think exactly like us is not what you want to do. If you want to transport people under water, you want a submarine, not a machine that can swim. However, researchers do build machines that emulate the way humans walk, or how insects glide through water. That helps us understand the mechanics of that process. Similarly, in trying to make machines that think as we do, we might understand more about ourselves.
Re: (Score:2)
Yeah, split-brain != split mind
Put down the pop-sci books and go check out the actual research. That particular conclusion isn't supported by the evidence at all.
Re: (Score:2)
Yeah, split-brain != split mind
Put down the pop-sci books and go check out the actual research. That particular conclusion isn't supported by the evidence at all.
Ok. Does Nature [nature.com] count?
Re: (Score:2)
HaHa!
I have that paper on my desk now (I pulled it out a few weeks ago). It's a mess.
One thing that was particularly telling is that lots of very basic information, like the number of participants, is completely absent.
This is to say nothing of the massive problems in their methodology. (It's been criticized VERY heavily by other researchers.)
It made a splash in the popular press, but hasn't held up well at all under scrutiny.
Some fun facts about this pile of garbage: their "predictions" are accurate < 60% of the time.
Re: (Score:2)
One thing that was particularly telling is that lots of very basic information, like the number of participants, is completely absent.
Twelve subjects. That can be clearly determined from figure 2 and figure 9.
Some fun facts about this pile of garbage: their "predictions" are accurate
Chance is 50%. The fact that they are only accurate to less than 60% doesn't mean much without further statistical analysis. For that analysis, they claim p < 0.05.
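For what it's worth, that kind of claim is easy to sanity-check with a binomial test; a hedged sketch (the trial count below is invented, since the paper's isn't given here):

    # Hedged sketch: is ~60% accuracy better than 50% chance?
    from scipy.stats import binomtest

    n, successes = 200, 120                # 60% of an assumed 200 trials
    result = binomtest(successes, n, p=0.5, alternative='greater')
    print(f"p = {result.pvalue:.4f}")      # ~0.003: unlikely under pure chance,
                                           # though replication is the real test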
It made a splash in the popular press, but hasn't held up well at all under scrutiny.
That might well be true. This is not my field at all, but I didn't just go look for an abstract. When this thing made a splash in the media, I did read the actual paper, and it's surprisingly easy to understand. Maybe that means it's not a very good paper, if somebody not trained in the field can follow it; I don't know. You appear to be in the field, but two of your complaints imply you haven't read the paper that closely, so I'm not so much defending the paper (I'm not qualified) as pointing out flaws in your analysis. Maybe you can point me to the papers that criticize this one "VERY heavily", and I can learn something that will help me fix my ignorance, instead of just attacking my lack of expertise, which isn't very constructive without some help to fix the problem.
Of course, the biggest point to be made is: that paper has absolutely nothing to do with split-brain subjects.
I didn't say it did. The split-brain study was done much earlier. I cited it as evidence of the conclusion, and implied that, given the new evidence, you can look at the earlier split-brain studies in a new light. I once again concede I might be wrong, but again, you might want to point me to some literature to help educate me.
Re: (Score:2)
I cited it as evidence of the conclusion, and implied that, given the new evidence, you can look at the earlier split-brain studies in a new light
I'm not seeing the connection between the two. What are you trying to say?
Re: (Score:3)
Ah
I contend that you cannot draw the conclusion you do from the earlier split-brain study (or others like it). That is, the evidence is insufficient to justify such a strong claim -- especially in light of the other behavioral evidence that stands in direct contradiction. Similarly, Libet-style studies, while useful, can't justify the strong conclusions drawn, on either empirical or rational grounds (the empirical claim is obvious, but the rational claim is pretty broad. I don't know that I can defend it o…
Re: (Score:2)
This is why you should pay attention to the preview once it pops up. In the sibling post, I meant to say,
Chance is 50%. The fact that they are only accurate to less than 60% doesn't mean much without further statistical analysis. For that analysis, they claim p < 0.05. There's less than a 5% chance the results they got are an anomaly, which makes it entirely possible it is an anomaly, but you can't tell that without trying to repeat the experiment. If you know of papers that tried to repeat the experiment…
humans have a compulsion to communicate (Score:3)
A computer intelligence is probably the best long term prospect for an interesting intelligence to communicate with. We've been trying for a long time to communicate with animals, spiritual beings and aliens. But these have not really panned out. A "hard A.I." would be something interesting to…
Re:Dijkstra said it best (Score:4, Insightful)
A problem is that terms like "intelligence" and "reason" are very vague. People used to think that a computer could be considered intelligent if it could win a game of chess against a master, but once that happened it was dismissed, because it's just databases and algorithms and not intelligence.
The bar keeps moving, and the definitions change, and ultimately the goals change. There's a bit of superstition around the word "intelligence" and some people don't want to use it for something that's easily explained, because intelligence is one of the last big mysteries of life. The original goal may have been to have computers that indeed do operate in less of a strictly hardwired way, not following predetermined steps but deriving solutions on their own. That goal was achieved decades ago. I would consider something like Macsyma to truthfully be artificial intelligence, as there is some reasoning and problem solving, but other people would reject this because it doesn't think like a human and they're using a different definition of "intelligence". Similarly, I think modern language translators like those at Google truthfully are artificial intelligence, even though we know how they work.
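For a concrete taste of the kind of easily explained reasoning Macsyma-style systems do, here is a toy sketch (mine, vastly simpler than the real system): symbolic differentiation from a few recursive rules.

    # Hedged sketch of symbolic reasoning: differentiate expressions
    # built from ('+', a, b), ('*', a, b), the variable 'x', and constants.
    def d(expr, var='x'):
        if isinstance(expr, (int, float)):
            return 0                            # d/dx constant = 0
        if expr == var:
            return 1                            # d/dx x = 1
        op, a, b = expr
        if op == '+':
            return ('+', d(a, var), d(b, var))  # sum rule
        if op == '*':
            return ('+', ('*', d(a, var), b),   # product rule
                         ('*', a, d(b, var)))
        raise ValueError("unknown operator: " + op)

    # d/dx (x*x + 3), unsimplified: (1*x + x*1) + 0
    print(d(('+', ('*', 'x', 'x'), 3)))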
The goals of having computers learn and adapt and do some limited amount of reasoning based on data have been achieved. But the goals change and the definitions change.
Back in grad school I mentioned to an AI prof some advances I had seen in the commercial world in image recognition software, and he quickly dismissed them as uninteresting because they didn't use artificial neural networks (the fad of that decade). His idea of artificial intelligence meant emulating the processes in brains rather than recreating the things that brains can do in different ways. You can't really blame academic researchers for this, though; they're focused on some particular idea or method that is new, while not being as interested in things that are well understood. You don't get research grants for things people already know how to do.
That said, the "chat bot" contests are still useful in many ways. There is a need to be quick, a need for massive amounts of data, a need for adaptation, etc. Perhaps a large chunk of it is just fluff but much of it is still very useful stuff. There is plenty of opportunity to plug in new ideas from research along with old established techniques and see what happens.
Too Narrow (Score:3)
There have been many achievements and much progress, e.g. Peters' group's ping-pong robot, just not the ones researchers promised many years ago.
Do Androids dream about the Turing Award? (Score:2)
Can androids win the Darwin Award, even if they have won the Turing Award?
Yes. Most likely.
Intelligence is not necessarily a prerequisite for being human.
Sentience vs. Intelligence (Score:5, Interesting)
I tend to think we need to split out "Artificial Sentience" from "Artificial Intelligence." Technologies used for expert systems are clearly a form of subject-matter artificial intelligence, but they are not creative nor are they designed to learn about and explore new subject materials.
Artificial Sentience, on the other hand, would necessarily incorporate learning, postulation, and exploration of entirely new ideas or "insights." I firmly believe that in order to hold a believable conversation, a machine needs sentience, not just intelligence. Being able to come to a logical conclusion or to analyze sentence structures and verbiage into models of "thought" is only a first step -- the intelligence part.
Only when a machine can come up with and hold a conversation on new topics, while being able to tie the discussion history back to earlier statements so that the whole conversation "holds together", will it be able to "fool" people. Because at that point, it won't be "fooling" anyone -- it will actually be thinking.
Re: (Score:3)
Only when a machine can come up with and hold a conversation on new topics, while being able to tie the discussion history back to earlier statements so that the whole conversation "holds together", will it be able to "fool" people. Because at that point, it won't be "fooling" anyone -- it will actually be thinking.
No, it will still be smoke and mirrors. Magicians are pretty clever at making impossible things appear to happen; tricking a human into believing a machine is sentient is no different. Look up "Chinese room".
Re: (Score:3)
The Chinese Room is laughably misguided. It relies upon a confusion of levels. It's true that the man in the Chinese room does not know Chinese. But it's equally true that any individual neuron in my brain does not know English. The important point is that the system as a whole (the man in the chinese room plus the entire collection of rules OR the collection of neurons in my skull plus the laws of physics) knows Chinese or English (respectively).
McGrew, you should read some Hofstadter. He's pretty e…
Re: (Score:2)
No, it will still be smoke and mirrors. Magicians are pretty clever at making impossible things appear to happen; tricking a human into believing a machine is sentient is no different. Look up "Chinese room".
The problem of proving a machine is or is not sentient is actually very very old. At least 10,000 years old.
After all, can you prove to me you are sentient or conscious?
And I don't mean that as an insult. I can't prove that I am sentient or conscious to you either.
But if it is accepted that all humans are sentient, there is still the fact that any one individual cannot prove they are, let alone that we, as in humanity, can prove anyone else is.
Can we show even a lower species is sentient? I believe my pet dog is…
Re: (Score:2)
I tend to think we need to split out "Artificial Sentience" from "Artificial Intelligence."
Not familiar with the field at all, are you?
AI and chess (Score:5, Insightful)
Re: (Score:2)
I have thought similarly. I don't see how we can make true use of robots if they don't understand us. To understand us, to predict or anticipate what we need, I think they have to have some common experience; otherwise it would take forever to explain what you want precisely enough. Without understanding, they would be very annoying, in the same way that it is when you try to work with people whose culture is greatly at odds with yours, so that you can never quite interpret what they mean.
This kind of thing…
The Problem is the Definition of AI (Score:4, Insightful)
A quest for the robotic birds (Score:3)
Festo's Smartbird is hardly indistinguishable from a real bird, but it is much more so than, say, da Vinci's ornithopter. A slow and steady progress can be charted from the latter to the former. At some point in the future, the technology will be nearly indistinguishable from a real bird, thus passing the "Norvig Test".
That's the whole point of the Turing Test; it's supposed to be hard and maybe even impossible. It doesn't test whether current AI is useful, it tests if AI is indistinguishable from a human. That's a pinnacle moment, and one that bestows great benefits as well as serious implications.
Personally, I think it will happen; maybe not for 50, 100, 500 years...but it will happen.
Still looking... (Score:2)
for signs of natural intelligence.
do we really need computer AI? (Score:2)
computers are so good at doing the repetitive monkey work that most people don't like to do
True AI would dominate the world (Score:4, Insightful)
Re: (Score:2)
This is exactly the kind of hyperbole that diminishes meaningful contributions to the field of AI.
Cheap *real* "intelligence" (Score:2)
Homicidal AI's? (Score:2)
Re: (Score:3)
Umm... HAL-9000 was homicidal.
No he wasn't; he was just misunderstood.
Farming gold in WoW (Score:2)
Though once the real money auction house opens in Diablo 3 he'll move over there.
Turing (Score:2)
Ok, the Turing Test was a thought experiment, and not intended to be a real-world filter for useful AI. Clearly non-humanlike general-purpose intelligence would be useful regardless of the form.
The test was a thought experiment to throw down the gauntlet to CS philosophers - how would you even know another human skull, aside from yourself, was conscious or not? It doesn't even really have anything to do with intelligence per se so much as illustrating the difference between intelligence and conscious intelligence.
What is holding back AI? (Score:2)
Processing power. We just don't have enough yet.
But it's getting really close. Cripes, we are doing things today in our pockets that only 25 years ago were utterly impossible on a $20 billion mainframe.
If the rate of growth in processing power continues, we will have a computer with the human brain's level of processing within 20 years. If we get a breakthrough or two, it could be a whole lot sooner.
What the human brain does is massive. Just the processing in the visual cortex is utterly insane in how…
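A back-of-envelope version of that claim, where every number is a rough assumption (neuron and synapse counts, a 100 Hz firing bound, Moore's-law doubling every 18 months), lands in the same ballpark:

    # Hedged back-of-envelope estimate; all constants are assumptions.
    neurons = 8.6e10            # rough human neuron count
    synapses_per = 1e4          # rough synapses per neuron
    rate_hz = 100               # rough upper bound on firing rate
    brain_ops = neurons * synapses_per * rate_hz   # ~8.6e16 events/s

    flops, years = 1e12, 0.0    # assume a ~1 TFLOPS commodity machine today
    while flops < brain_ops:
        flops *= 2              # Moore's-law-ish doubling...
        years += 1.5            # ...every 18 months
    print(f"crosses the brain estimate in ~{years:.0f} years")   # ~26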
Re: (Score:2)
Processing power does not equal speed.
Wrong way around. (Score:2)
Wrong Question asked out of ignorance (Score:5, Interesting)
These sorts of articles that pop up from time to time on slashdot are so frustrating to those of us who actually work in the field. We take an article written by someone who doesn't actually understand the field, about a contest that has always been no better than a publicity stunt*, which triggers a whole bunch of speculation by people who read Gödel, Escher, Bach and think they understand what's going on.
The answer is simple. AI researchers haven't forgotten the end goal, and it's not some cynical ploy to advance an academic career. We stopped asking the big-AI question because we realized it was an inappropriate time to ask it. By analogy: these days physicists spend a lot of time thinking about the big central unify-everything theory, and that's great. In 1700, that would have been the wrong question to ask -- there were too many phenomena that we didn't understand yet (energy, EM, etc). We realized 20 years ago that we were chasing ephemera and not making real progress, and redeployed our resources in ways to understand what the problem really was. It's too bad this doesn't fit our SciFi timetable; all we can do is apologize. And PLEASE do not mention any of that "singularity" BS.
I know, I know, -1 flamebait. Go ahead.
*Note I didn't say it was a publicity stunt, just that it was no better than one. Stuart Shieber at Harvard wrote an excellent dismantling of the idea 20 years ago.
Where's HAL9000 (Score:3, Informative)
He's here: https://twitter.com/HAL9000_ [twitter.com]
Symbol Grounding Problem (Score:3, Interesting)
Old AI guy here (natural language processing in the late '80s).
The barrier to achieving strong AI is the Symbol Grounding Problem. In order to understand each other we humans draw on a huge amount of shared experience which is grounded in the physical world. Trying to model that knowledge is like pulling on the end of a huge ball of string - you keep getting more string the more you pull and ultimately there is no physical experience to anchor to. Doug Lenat has been trying to create a semantic net modelling human knowledge since my time in the AI field with what he now calls OpenCyc (www.opencyc.org). The reason that weak AI has had some success is that they are able to bound their problems and thus stop pulling on the string at some point.
See http://en.wikipedia.org/wiki/Symbol_grounding [wikipedia.org].
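A toy illustration of the string that never ends (mine, not Lenat's; OpenCyc itself is far richer): a dictionary-only "mind" in which every symbol is defined purely in terms of other symbols, so a program chasing definitions loops or runs out without ever touching physical experience.

    # Hedged sketch of the Symbol Grounding Problem.
    definitions = {
        "zebra": ["horse", "stripes"],
        "horse": ["animal", "hooves"],
        "stripes": ["pattern"],
        "animal": ["organism"],
        "hooves": ["foot", "horse"],
        "pattern": ["arrangement"],
        "organism": ["living", "thing"],
    }

    def ground(symbol, seen=None):
        """Chase definitions looking for non-symbolic anchors. There are none."""
        seen = seen or set()
        if symbol in seen:
            return False        # circular: back where we started
        seen.add(symbol)
        parts = definitions.get(symbol)
        if parts is None:
            return False        # undefined: the string just ran out
        return any(ground(p, seen) for p in parts)

    print(ground("zebra"))      # False -- nothing ever bottoms out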
Artificial Stupidity (Score:3)
Artificial Stupidity
http://www.salon.com/2003/02/26/loebner_part_one/ [salon.com]
Long, funny, and informative article on the history of the Loebner prize.
Regarding the feasibility of AI (Score:5, Interesting)
Some commenters in this thread (and elsewhere) have questioned whether "strong" artificial intelligence is actually possible.
The feasibility of strong AI follows directly from the rejection of Cartesian dualism.
If there is no "ghost in the machine," no magic "soul" separate from the body and brain, then human intelligence comes from the physical operation of the brain. Since they are physical operations, we can understand them, and reproduce the algorithm in computer software and/or hardware. That doesn't mean it's *easy* – it may take 200 more years to understand the brain that well, for all I know – but it must be *possible*.
(Also note that Cartesian dualism is not the same thing as religion, and rejecting it does not mean rejecting all religious beliefs. From the earliest times, Christians taught the resurrection of the *body*, presumably including the brain. The notion of disembodied "souls" floating around in "heaven" owes more to Plato than to Jesus and St. Paul. Many later Christian philosophers, including Aquinas, specifically rejected dualism in their writings.)
Re: (Score:3)
If there is no "ghost in the machine," no magic "soul" separate from the body and brain, then human intelligence comes from the physical operation of the brain.
Even if living creatures as we know them are animated from without, that still wouldn't mean that you couldn't create an algorithm that is intelligent; only that it would not be alive as we would understand life.
Further, if there were something physically special about the brain of a living creature that made it a sort of receiver for this animating quality, then it might well be possible to construct a machine analogue and thus give it life...
That's a cop-out (Score:2)
The task proves difficult, so we denigrate the task?
"Having to fool a human" is not the point. Fooling a human is a measure of achievement, not an end in itself. Yes, a machine that can solve human problems but doesn't appear to be human is a useful thing. But one that appears to be human demonstrates specific capabilities that are also very useful. Natural language processing, for one. Serving as a companion is another, possibly creepy but technically awesome and potentially game-changing one. Being able t
I Wouldn't Necessarily Mind AI? (Score:2)
Provided one of the Three Laws has the following logic?
If (potential results) > (harm) then DO
If (potential results) < (harm) then NEXT
If (requested action) = (violation of law) then REPORT TO PUBLIC then HALT OPERATION
If (requested action) != (violation of law) then NEXT
Echo "I am sorry, I cannot comply with that order at this time. The potential for harm is greater than the potential result."
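A minimal runnable rendering of the pseudocode above (my sketch; the action and scores are invented):

    # Hedged Python translation of the pseudocode; numbers are made up.
    def consider(action, potential_results, harm, violates_law):
        if violates_law:
            return "REPORT TO PUBLIC; HALT OPERATION"
        if potential_results > harm:
            return f"DO: {action}"
        return ("I am sorry, I cannot comply with that order at this time. "
                "The potential for harm is greater than the potential result.")

    print(consider("open the pod bay doors", potential_results=5, harm=1,
                   violates_law=False))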
Re: (Score:2)
ARGH! I lost the greater-than symbol in there. Darn you HTML!
Sooner than you think... (Score:3)
Memristors. Google the word. I did not expect to see real AI in my lifetime before that announcement, and now I do. Memristors are close enough to neurons that you can run something like a brain on a chip, whereas before, all neural nets were simulated and therefore took a lot of computing power just to do small things like machine vision (face recognition, etc...).
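For the curious, a toy version of why memristors look synapse-like (my sketch, loosely based on the commonly cited linear-drift model; all constants are illustrative, not device data): resistance depends on the charge that has flowed through the device.

    # Hedged sketch of a linear-drift memristor model, heavily simplified.
    R_on, R_off = 100.0, 16000.0   # bounding resistances, ohms
    w = 0.1                        # internal state in [0, 1]
    mobility = 2e3                 # lumped drift coefficient, arbitrary units

    def step(voltage, dt=1.0):
        """Apply a voltage for dt; the accumulated charge drifts the state."""
        global w
        resistance = R_on * w + R_off * (1 - w)
        current = voltage / resistance
        w = min(1.0, max(0.0, w + mobility * current * dt))
        return resistance

    for _ in range(5):
        print(f"{step(1.0):.0f} ohms")   # resistance falls as charge flows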
Re: (Score:2)
The Turing Test is more like demanding that aircraft makers design a plane that is larger on the inside than on the outside and can travel faster than the speed of light without using any fuel or reaction mass. *If* it's even theoretically possible, we would have to revise our current fundamental understanding of how things work rather substantially in order to even begin to have any idea at all how to get started working on the problem.
Unless you believe that the human brain has magical properties, it must…