AI Technology

Where's HAL 9000?

An anonymous reader writes "With entrants to this year's Loebner Prize, the annual Turing Test competition designed to identify a thinking machine, demonstrating that chatbots are still a long way from passing as convincing humans, this article asks: what happened to the quest to develop a strong AI? 'The problem Loebner has is that computer scientists in universities and large tech firms, the people with the skills and resources best suited to building a machine capable of acting like a human, are generally not focused on passing the Turing Test. ... And while passing the Turing Test would be a landmark achievement in the field of AI, the test's focus on having the computer fool a human is a distraction. Prominent AI researchers, like Google's head of R&D Peter Norvig, have compared the Turing Test's requirement that a machine fool a judge into thinking they are talking to a human to demanding that an aircraft maker construct a plane that is indistinguishable from a bird.'"
  • Too hard (Score:4, Insightful)

    by Hatta ( 162192 ) on Friday May 25, 2012 @02:33PM (#40111415) Journal

    Strong AI has always been the stuff of sci-fi. Not because it's impossible, but because it's impractically difficult. We can barely model how a single protein folds, even with a worldwide network of computers. Does anyone seriously expect that we can model intelligence with similar resources?

    Evolution has been working on us for millions of years. It will probably take us hundreds or thousands of years before we get strong AI.

  • by dargaud ( 518470 ) <slashdot2@@@gdargaud...net> on Friday May 25, 2012 @02:36PM (#40111467) Homepage
    "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."
  • AI and chess (Score:5, Insightful)

    by Zontar_Thing_From_Ve ( 949321 ) on Friday May 25, 2012 @02:44PM (#40111593)
    Back in the early 1950s, it was thought that the real prize of AI was to get a computer to beat the best human chess player consistently. The reasoning at the time was that the only way this would be possible was for breakthroughs to happen in AI where a computer could learn to think and could reason better at chess than a human. Fast forward to 10 or so years ago, when IBM realized that just by throwing money at the problem they could get a computer to play chess by brute force and beat the human champion more often than not (a sketch of that kind of brute-force search follows this comment).

    So I'm not surprised that some AI people discount the Turing test. I am not an expert in the field, but it seems to me that AI is a heck of a lot harder than anybody realized in the 1950s, and we may still be decades or even centuries away from the kind of AI that people 60 or so years ago thought we'd have by now. Part of me does wonder if, just like AI research in chess took the easy way out by resorting to brute force, researchers will now simply say the Turing test is not valid rather than actually try to pass it, because passing it would require breakthroughs nobody has thought of yet, and that's hard.
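
    [A minimal, hypothetical sketch of the brute-force idea the parent describes: exhaustive minimax search over a toy game, tic-tac-toe. Deep Blue's real search used alpha-beta pruning, custom hardware, and hand-tuned evaluation; this only illustrates the principle that enumeration can substitute for understanding.]

        # Brute-force game search: full minimax on tic-tac-toe.
        # 'X' maximizes, 'O' minimizes; no evaluation heuristic is
        # needed because the toy game is small enough to enumerate.

        LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

        def winner(board):
            for a, b, c in LINES:
                if board[a] != ' ' and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        def minimax(board, player):
            """Return (score, best_move) for the player to move."""
            w = winner(board)
            if w:
                return (1 if w == 'X' else -1), None
            moves = [i for i, cell in enumerate(board) if cell == ' ']
            if not moves:
                return 0, None  # draw
            best = None
            for m in moves:
                board[m] = player
                score, _ = minimax(board, 'O' if player == 'X' else 'X')
                board[m] = ' '
                better = (best is None
                          or (player == 'X' and score > best[0])
                          or (player == 'O' and score < best[0]))
                if better:
                    best = (score, m)
            return best

        score, move = minimax([' '] * 9, 'X')
        print(score, move)  # 0: perfect play from the empty board is a draw
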
  • Well I Disagree (Score:5, Insightful)

    by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Friday May 25, 2012 @02:46PM (#40111623) Journal

    He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back.

    No, that's not true ... that's not at all what is holding "general AI" back. What's holding "general AI" back is that there is no way at all to implement it. Specialized AI is actually moving forward the only way we know how: with actual results. Without further research in specialized AI, we would get no closer to "generalized AI" -- and I keep using quotes around that because it's such a complete misnomer and holy grail that we aren't going to see it any time soon.

    When I studied this stuff there were two hot approaches. One was logic engines and expert systems that could be generalized to the point of encompassing all knowledge. Yeah, good luck with that. How does one codify creativity? The other approach was to model neurons in software, on the theory that someday, when we have strong enough computers, they will just emulate brains and become a generalized thinking AI (a toy example of such a software "neuron" follows this comment). Again, the further we delved into neurons, the more we realized how wrong our basic assumptions were -- let alone the infeasibility of emulating the cascading currents across them.

    "General AI" is holding itself back in the same way that "there is no such thing as a free lunch" is holding back our free energy dreams.

    But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.

    We're so far from that, it amuses me to hear even semi-serious questions about it. It is not the malice of an AI system you should fear, it is the manifestation of the incompetence of the people who developed it, resulting in an error (like sounding an alarm because a sensor misfired, and responding by launching all nuclear weapons, since that's what you perceive your enemy to have just done), that should be feared!

    People aren't just indifferent or uninterested in AI. I think there is a part of us, maybe not even part of us that we're always conscious of, that's very scared of it.

    People are obsessed by the philosophical and financial prospects of an intelligent computer system, but nobody's telling me how to implement it -- that's just hand waving so they can get to the interesting stuff. Right now, rule-based systems, heuristics, statistics, Bayes' Theorem, Support Vector Machines, etc. will get you far further than any system that is just supposed to "learn" any new environment (see the second sketch after this comment for a statistics-driven example). All successful AI to this point has been built with the entire environment in mind during construction.
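
    [As promised above, a toy version of the "model neurons in software" approach: a single perceptron learning logical AND. This weighted-sum-and-threshold unit is a drastic caricature of a biological neuron, which is exactly the gap the commenter points at.]

        # A single artificial "neuron" (perceptron) trained on logical AND.
        # Real neurons are vastly more complex than this weighted sum.

        def step(x):
            return 1 if x >= 0 else 0

        data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w, b, lr = [0.0, 0.0], 0.0, 0.1

        for epoch in range(20):                 # a few passes suffice here
            for (x1, x2), target in data:
                out = step(w[0] * x1 + w[1] * x2 + b)
                err = target - out
                w[0] += lr * err * x1           # perceptron update rule
                w[1] += lr * err * x2
                b += lr * err

        for (x1, x2), target in data:
            print((x1, x2), step(w[0] * x1 + w[1] * x2 + b), 'expected', target)
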
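    [And the statistics-driven sketch also promised above: a toy naive Bayes text classifier, the kind of specialized technique the commenter means. The training strings and labels are invented for illustration; no real system is being quoted.]

        # Toy naive Bayes spam/ham classifier: statistics over a known,
        # fixed problem domain -- "specialized AI", not a general learner.
        from collections import Counter, defaultdict
        import math

        train = [("win money now", "spam"), ("free prize win", "spam"),
                 ("meeting agenda today", "ham"), ("project status meeting", "ham")]

        word_counts = defaultdict(Counter)
        label_counts = Counter()
        vocab = set()
        for text, label in train:
            label_counts[label] += 1
            for word in text.split():
                word_counts[label][word] += 1
                vocab.add(word)

        def classify(text):
            best = None
            for label in label_counts:
                # log prior + log likelihoods with add-one smoothing
                score = math.log(label_counts[label] / sum(label_counts.values()))
                total = sum(word_counts[label].values())
                for word in text.split():
                    score += math.log((word_counts[label][word] + 1)
                                      / (total + len(vocab)))
                if best is None or score > best[0]:
                    best = (score, label)
            return best[1]

        print(classify("free money prize"))    # -> spam
        print(classify("status meeting now"))  # -> ham
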

  • by medv4380 ( 1604309 ) on Friday May 25, 2012 @02:46PM (#40111625)
    Artificial Intelligence is just that: artificial. Deep Blue has zero actual intelligence, but has plenty of ways of accomplishing a task (chess) that usually requires actual intelligence. The article has confused Machine Intelligence and Machine Learning with Artificial Intelligence. The problem is that in those areas no one is "best suited". If we knew what we needed to do for Machine Intelligence to work, then we'd have a HAL 9000 by now. Instead we have Watson, which, though impressive, is a long way from HAL.
  • by na1led ( 1030470 ) on Friday May 25, 2012 @03:03PM (#40111927)
    If a computer could think for itself and solve problems on its own, it would logically conclude the fate of humans in less than a second. Unless we could confine that intelligence so it can't access the Internet, those who possess the technology would rule the world. Either way, super intelligence is bad for humans.
  • by Jeremiah Cornelius ( 137 ) on Friday May 25, 2012 @03:08PM (#40111981) Homepage Journal

    It's like asking the world's best stage magician to create real hovering women.

    "If you REALLY fool me, it will be true!"

    Nonsense.

  • by Baseclass ( 785652 ) on Friday May 25, 2012 @03:09PM (#40111989)

    it's artificial. It isn't real. You're never going to get a Turing computer to actually think

    Why not? We evolved into sentient beings from non-sentient organic matter; why couldn't the same thing be possible with silicon-based intelligence?

  • NO NO AND NO (Score:5, Insightful)

    by gl4ss ( 559668 ) on Friday May 25, 2012 @03:12PM (#40112031) Homepage Journal

    it's not fear.
    it's not "we could do it but we just don't want to".
    it's not "the government has brains in a jar already and is suppressing research".
    those are just excuses which make for sometimes good fiction - and sometimes a career for people selling the idea as non-fiction.

    but the real reason is that it is just EXTRA FRIGGING HARD.
    it's hard enough for a human who doesn't give a shit to pass a turing test. but imagine if you could really build a machine that would pass as a good judge, politician, network admin, science fiction writer... or one that could explain to us what intelligence really even is, since we are unable to do that ourselves.

    it's not as hard/impossible as teleportation, but close to it. just because it's been in scifi for ages doesn't mean that we're on the verge of a real breakthrough, and just because we can imagine stories about it doesn't mean that we could build a machine that could imagine those stories for us. it's not a matter of throwing money at the issue or throwing scientists at it. some see self-learning neural networks as the way to go, but that's like saying you only need to grow brain cells in a vat while talking to it and *bam* you have a person.

    truth is that there's shitloads more "AI researchers" imagining ethical wishwashshitpaz implications of what would result from having real AI than there are those who have an idea how to practically build one. simply because it's much easier to speculate on nonsense than to do real shit in this matter.
    (in scifi there's been a recent trend to separate things out into "virtual intelligences", which are much more plausible -- basically just advanced turing bots that wouldn't really pass the test -- which is sort of refreshing)

  • by Darinbob ( 1142669 ) on Friday May 25, 2012 @03:15PM (#40112107)

    A problem is that terms like "intelligence" and "reason" are very vague. People used to think that a computer could be considered intelligent if it could win a game of chess against a master, but once that happened it was dismissed, because it's just databases and algorithms and not intelligence.

    The bar keeps moving, the definitions change, and ultimately the goals change. There's a bit of superstition around the word "intelligence", and some people don't want to use it for something that's easily explained, because intelligence is one of the last big mysteries of life. The original goal may have been to have computers that operate in less of a strictly hardwired way, not following predetermined steps but deriving a solution on their own. That goal was achieved decades ago. I would consider something like Macsyma to truthfully be artificial intelligence, as there is some reasoning and problem solving, but other people would reject this because it doesn't think like a human and they're using a different definition of "intelligence". Similarly, I think modern language translators like those at Google truthfully are artificial intelligence, even though we know how they work.

    The goals of having computers learn and adapt and do some limited amount of reasoning based on data have been achieved. But the goals change and the definitions change.

    Back in grad school I mentioned to an AI prof some advances I had seen in commercial image recognition software, and he quickly dismissed them as uninteresting because they didn't use artificial neural networks (the fad of that decade). His idea of artificial intelligence meant emulating the processes in brains rather than recreating the things that brains can do in different ways. You can't really blame academic researchers for this, though; they're focused on some particular idea or method that is new, while not being as interested in things that are well understood. You don't get research grants for things people already know how to do.

    That said, the "chat bot" contests are still useful in many ways. There is a need to be quick, a need for massive amounts of data, a need for adaptation, etc. Perhaps a large chunk of it is just fluff but much of it is still very useful stuff. There is plenty of opportunity to plug in new ideas from research along with old established techniques and see what happens.

  • by Kielistic ( 1273232 ) on Friday May 25, 2012 @03:22PM (#40112201)
    Computers can be used to model and compute chemical reactions. If chemicals can produce "thought", then nothing stops a computer from doing it other than computation power.
  • by similar_name ( 1164087 ) on Friday May 25, 2012 @03:23PM (#40112215)

    You're never going to get a Turing computer to actually think, although some future chemical or something machine may.

    Never say never :) It is hard to say whether an AI could ever accomplish thinking (or sentience) or not. It seems to be an emergent quality and I doubt whether it is chemical or electrical will matter much. And for the most part appearing sentient might as well be sentient. Outside of myself I can only assume others are sentient because they appear so and because we are genetically similar. There is not exactly a good standard or definition of what is or isn't sentient that doesn't depend on the bias of being human.

  • by jpate ( 1356395 ) on Friday May 25, 2012 @03:35PM (#40112339) Homepage

    I'm not. AI is to real intelligence what margarine is to butter - it's artificial. It isn't real. You're never going to get a Turing computer to actually think, although some future chemical or something machine may.

    Why do you think that? Silicon is also a chemical. There's nothing magical about liquid chemicals.

    Cognitive scientists typically try to analyze cognitive systems in terms of Marr's levels of analysis [wikipedia.org]. Cognitive systems solve some problem (the computational level) through some manipulation of percepts and memory (the algorithmic/representational level) using some physical system (the implementational level). The mapping from neurons and chemical slushes to algorithms is extremely complex, so most work focuses on providing a computational-level characterization of the problem, occasionally proposing a specific algorithm. Since the same computational goal can be accomplished by different algorithms (compare bubblesort to quicksort, or particle filters to importance sampling, or audio localization in owls to audio localization in cats), and the same algorithm can be run with different implementations (consider the same source code compiled for ARM or x86), it's just a waste of time and energy to insist that we recover all of the computational, algorithmic, and implementational details simultaneously (the bubblesort/quicksort contrast is spelled out in a short sketch after this comment).

    However, you could get to the point where intelligence was simulated well enough that it appeared to be sentient. [wikipedia.org]

    I've never found the Chinese room argument convincing. It just baldly asserts "of course the resulting system is not sentient!" Why not?

    I disagree with the article. People haven't given up on strong AI, we've just realized that it is enormously more difficult than we originally thought. If today's best minds were to attack the problem, we'd end up with a hacked-together system that barely worked. Asking why computer scientists aren't working on strong AI is like asking why physicists aren't working on intergalactic teleportation: it's really really hard and there's a lot to accomplish on the way.
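
    [The bubblesort/quicksort point from the comment above, in code form: two different algorithms that satisfy the same computational-level specification ("return the elements in ascending order"). A toy sketch, not drawn from any particular source.]

        # Same computational-level goal, two different algorithmic-level
        # realizations -- Marr's point that the computation underdetermines
        # the algorithm (and the algorithm underdetermines the hardware).

        def bubblesort(xs):
            xs = list(xs)
            for i in range(len(xs)):
                for j in range(len(xs) - 1 - i):
                    if xs[j] > xs[j + 1]:
                        xs[j], xs[j + 1] = xs[j + 1], xs[j]
            return xs

        def quicksort(xs):
            if len(xs) <= 1:
                return list(xs)
            pivot, rest = xs[0], xs[1:]
            return (quicksort([x for x in rest if x < pivot])
                    + [pivot]
                    + quicksort([x for x in rest if x >= pivot]))

        data = [5, 3, 8, 1, 9, 2]
        assert bubblesort(data) == quicksort(data) == sorted(data)
        print(bubblesort(data))  # [1, 2, 3, 5, 8, 9]
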

  • by Egor_but_no_hunch ( 2444330 ) on Friday May 25, 2012 @05:48PM (#40114329)

    This is getting closer to the true issue here, no-one can actually point to a "thought". We can run MRIs, we can do all the fluorescing in rat brains that we want, but at no point can we, as humans, point to a thought.

    All we can see and know about, at the moment, is the machinery. The brain is just the machinery for our minds: neurons, synapses, etc. A computer system that is entered for the Turing test (or Deep Blue, or the Jeopardy machine (forget its name)) is again just that, the machinery. Each set of machinery is doing processing of some description that is observable and quantifiable, but as we do not understand the mechanism that turns the processing in the brain into "thoughts", we cannot tell if a computer thinks... Perhaps we are killing many computers each day as they are unable to meaningfully communicate their ability to think to us.

    I'm steering well away from self-awareness here, as this is a misnomer. Sentience is not necessarily about self-awareness, as a computer can be taught to recognise itself, process information about itself, even be selfish (as some have posited is required for sentience); rather, sentience is used as a bucket to separate one set of processing from another. Is a tiger more sentient than a fly? They both have a certain level of information processing, and without the ability to show that one "thinks" while the other does not, we cannot portion out sentience to one or the other.(1)

    So if we cannot show that humans, much less animals, much less computers think, what are we left with? Complexity of processing -- not the amount of processing, but how complicated a process can become. Neuronal structures are excellent at this: thousands of connections per neuron allow for a massive amount of complexity of processing. Each process balances up elements that might not even appear to be relevant to it, such as feedback from the autonomic nervous system, whether you are hungry, or pain from your tooth trying to get your attention (and therefore suppressing other inputs). Add in non-processing factors from external influences: taken any painkillers? How about some opiates?

    Until the complexity of processing that happens in our brains is matched by the machines we build, we are unlikely to see anything that we could identify as "thinking" on a par with ourselves. The Turing test is not a test for an intelligent machine; it is essentially a processing test built around a Markov chain (see the sketch after this comment for what that machinery looks like).

    (1) Behavioural tests here are insufficient as all these prove is that the behaviour of the fly or tiger is unexpected by our own definition of what a sentient creature would do, which makes the whole thing subjective.
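
    [A minimal word-level Markov chain text generator, to make the parent's closing point concrete. The corpus here is invented for illustration; real Loebner entries are far more elaborate, but the machinery is similar in spirit: locally plausible text with no model of meaning.]

        # Word-level Markov chain: learn which words follow which, then
        # babble. Locally plausible output, zero understanding.
        import random
        from collections import defaultdict

        corpus = ("the computer does not think the computer computes "
                  "the brain thinks and the brain computes").split()

        chain = defaultdict(list)              # word -> words seen after it
        for cur, nxt in zip(corpus, corpus[1:]):
            chain[cur].append(nxt)

        random.seed(1)
        word, out = "the", ["the"]
        for _ in range(12):
            followers = chain.get(word)
            if not followers:                  # dead end: no known successor
                break
            word = random.choice(followers)
            out.append(word)
        print(" ".join(out))
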

  • Re:Too hard (Score:5, Insightful)

    by pitchpipe ( 708843 ) on Friday May 25, 2012 @06:04PM (#40114549)

    Evolution has been working on us for millions of years. It will probably take us hundreds or thousands of years before we get strong AI.

    It also took evolution millions of years to get flight. You're comparing apples and oranges. Evolution has no intelligence directing its actions, whereas sometimes human activity does.

    Dear Baden Powell

    I am afraid I am not in the flight for "aerial navigation". I was greatly interested in your work with kites; but I have not the smallest molecule of faith in aerial navigation other than ballooning or of expectation of good results from any of the trials we hear of. So you will understand that I would not care to be a member of the aeronautical Society.

    Yours truly, Kelvin

    This, a mere 13 years before the first airplane crossing of the English Channel.
