AI Technology

Where's HAL 9000? 269

An anonymous reader writes "With entrants to this year's Loebner Prize, the annual Turing Test designed to identify a thinking machine, demonstrating that chatbots are still a long way from passing as convincing humans, this article asks: what happened to the quest to develop a strong AI? 'The problem Loebner has is that computer scientists in universities and large tech firms, the people with the skills and resources best-suited to building a machine capable of acting like a human, are generally not focused on passing the Turing Test. ... And while passing the Turing Test would be a landmark achievement in the field of AI, the test’s focus on having the computer fool a human is a distraction. Prominent AI researchers, like Google’s head of R&D Peter Norvig, have likened the Turing Test’s requirement that a machine fool a judge into thinking they are talking to a human to demanding that an aircraft maker construct a plane that is indistinguishable from a bird.'"
  • by crazyjj ( 2598719 ) * on Friday May 25, 2012 @01:29PM (#40111349)

    He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back. But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.

    For every utopian vision in science fiction and pop culture of a future where AI is our pal, helping us out and making our lives more leisurely, there is another dystopian counter-vision of a future where AI becomes the enemy of humans, making our lives into a nightmare. A vision of a future where AI equals, and then inevitably surpasses, human intelligence touches a very deep nerve in the human psyche. Human fear of being made obsolete by technology has a long history. And more recently, the fear of technology becoming a direct *enemy* has grown more and more prevalent--from the aforementioned HAL 9000 to Skynet. There is a real dystopian counter-vision to Loebner's utopianism.

    People aren't just indifferent or uninterested in AI. I think there is a part of us, maybe not even part of us that we're always conscious of, that's very scared of it.

    • He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back. But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.

      Does that have anything to do with the progress of research? I doubt that AI researchers themselves are afraid of spawning a 'true' AI; I would think it has more to do with the practicality of the technology and resources available.

    • Well I Disagree (Score:5, Insightful)

      by eldavojohn ( 898314 ) * <eldavojohn@gma[ ]com ['il.' in gap]> on Friday May 25, 2012 @01:46PM (#40111623) Journal

      He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems and with specialized goals, rather than focusing on general AI. And it's true that this is part of what is holding general AI back.

      No, that's not true ... that's not at all what is holding "general AI" back. What's holding "general AI" back is that there is no way at all to implement it. Specialized AI is actually moving forward the only way we know how, with actual results. Without further research in specialized AI, we would get no closer to "generalized AI" -- and I keep using quotes around that because it's such a complete misnomer and holy grail that we aren't going to see it any time soon.

      When I studied this stuff there were two hot approaches. One was logic engines and expert systems that could be generalized to the point of encompassing all knowledge. Yeah, good luck with that. How does one codify creativity? The other approach was to model neurons in software, so that someday, when we have strong enough computers, they will just emulate brains and become a generalized thinking AI. Again, the further we delved into neurons the more we realized how wrong our basic assumptions were -- let alone the infeasibility of emulating the cascading currents across them.

      "General AI" is holding itself back in the same way that "there is no such thing as a free lunch" is holding back our free energy dreams.

      But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.

      We're so far from that, it amuses me to hear any semi-serious question regarding it. It is not the malice of an AI system you should fear; it is the incompetence of the people who developed it, manifesting as an error (like sounding an alarm because a sensor misfired, and responding by launching all nuclear weapons since that's what you perceive your enemy to have just done).

      People aren't just indifferent or uninterested in AI. I think there is a part of us, maybe not even part of us that we're always conscious of, that's very scared of it.

      People are obsessed by the philosophical and financial prospects of an intelligent computer system but nobody's telling me how to implement it -- that's just hand waving so they can get to the interesting stuff. Right now, rule-based systems, heuristics, statistics, Bayes' Theorem, Support Vector Machines, etc. will get you far further than any system that is just supposed to "learn" any new environment. All successful AI to this point has been built with the entire environment in mind during construction.
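      As a concrete illustration of the kind of bounded, statistical technique the parent is talking about (this sketch is mine, not the poster's), here is a minimal naive Bayes text classifier built directly from Bayes' Theorem. The class name, the toy training data, and the add-one smoothing choice are all assumptions for illustration; the point is that it does something useful within a problem it was explicitly built for, with no pretense of general learning.

        # Minimal naive Bayes classifier -- illustrative sketch only.
        from collections import Counter, defaultdict
        import math

        class NaiveBayes:
            def __init__(self):
                self.word_counts = defaultdict(Counter)  # per-class word counts
                self.class_counts = Counter()            # documents seen per class

            def train(self, label, words):
                self.class_counts[label] += 1
                self.word_counts[label].update(words)

            def classify(self, words):
                total_docs = sum(self.class_counts.values())
                vocab = len({w for c in self.word_counts.values() for w in c})
                best, best_score = None, float("-inf")
                for label in self.class_counts:
                    # log P(class) + sum of log P(word | class), with add-one smoothing
                    score = math.log(self.class_counts[label] / total_docs)
                    total_words = sum(self.word_counts[label].values())
                    for w in words:
                        score += math.log((self.word_counts[label][w] + 1) / (total_words + vocab))
                    if score > best_score:
                        best, best_score = label, score
                return best

        nb = NaiveBayes()
        nb.train("spam", "buy cheap pills now".split())
        nb.train("ham", "meeting notes attached for review".split())
        print(nb.classify("cheap pills".split()))  # -> spam

      Everything it "knows" was put there during construction, which is exactly the parent's point about the environment being baked in.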

    • by Jeng ( 926980 )

      In some stories AI's are both enemies and friends.

      http://www.schlockmercenary.com/2003-07-28 [schlockmercenary.com]

      The issue is once an AI truly has that Intelligence part down, then you get into its motivations, and that is the part that scares people.

      Can you trust the motivations of someone who is not only smarter than you, but doesn't value the same things you do in the same ways?

      Whether it be a person or a machine the question comes up, and it's not a question that can truly be answered except in specific circumstances.

    • NO NO AND NO (Score:5, Insightful)

      by gl4ss ( 559668 ) on Friday May 25, 2012 @02:12PM (#40112031) Homepage Journal

      it's not fear.
      it's not "we could do it but we just don't want to".
      it's not "the government has brains in a jar already and is suppressing research".
      those are just excuses which make for sometimes good fiction - and sometimes a career for people selling the idea as non-fiction.

      but the real reason is that it is just EXTRA FRIGGING HARD.
      it's hard enough for a human who doesn't give a shit to pass a turing test. but imagine if you could really build a machine that would pass as a good judge, politician, network admin, science fiction writer... or one that could explain to us what intelligence really even is, since we are unable to do it ourselves.

      it's not as hard/impossible as teleportation but close to it. just because it's been in scifi for ages doesn't mean that we're on the verge of a real breakthrough to do it, just because we can imagine stories about it doesn't mean that we could build a machine that could imagine those stories for us. it's not a matter of throwing money at the issue or throwing scientists at it. some see self learning neural networks as a way to go there, but that's like saying that you only need to grow brain cells in a vat while talking to it and *bam* you have a person.

      truth is that there's shitloads more "AI researchers" just imagining the ethical wishwash implications of what would result from having real AI than those who have an idea how to practically build one. simply because it's much easier to speculate on nonsense than to do real shit in this matter.
      (in scifi there's been a recent trend to separate things into virtual intelligences, which are much more plausible -- basically advanced turing bots that wouldn't really pass the test -- which is sort of refreshing)

    • by Kugrian ( 886993 )

      Maybe we just don't need it? Our closest apps to AI are Siri and whatever the Android voice app is. All they do is retrieve information. Same as a google search. Nearly everyone under 30 (and quite a few over that) grew up with computers and most know how to use them. True Turing AI at this point would only really benefit people who don't know how to find information themselves.

      • by gl4ss ( 559668 )

        bullshit. true turing ai could do your homework. it would be really, really useful in sorting tasks, evaluating designs, coming up with mechanical designs.. it's just that people don't usually think too far when they think of the turing test.

        imagine if your turing test subject was torvalds at 20 years old. imagine if you had a machine that could fool you by cobbling together a new operating system for you. an advanced enough turing test could supply you with plenty of new solutions to problems and another t

      • by dissy ( 172727 )

        Our closest apps to AI are Siri and whatever the Android voice app is. All they do is retrieve information. Same as a google search.

        I would say the closest "app" to what you describe, that would still fall under the category of specialized AI, would be Watson [wikipedia.org].
        It too is a huge information retrieval system, but specifically designed to play Jeopardy and play it well. It already bested the top two human players.

        Of course it is still only a specialized AI engine, nowhere NEAR expert AI, and it most certainly does not think. Hell, it can't even read visually, see, hear, or do a lot of other things required to truly play a game of Jeopardy.

    • While there is fear, it's not really relevant to the lack of progress. The people who have this fear are not the same as the ones who are doing the research to advance the technology; or if they are, it's certainly not inhibiting them.

  • by betterunixthanunix ( 980855 ) on Friday May 25, 2012 @01:31PM (#40111367)
    Too many decades of lofty promises that never materialized have turned "AI research" into a dirty word...
    • Re: (Score:2, Funny)

      by Anonymous Coward

      The operator said that AI Research is calling from inside the house...

  • HAL? (Score:4, Funny)

    by Anonymous Coward on Friday May 25, 2012 @01:31PM (#40111371)

    Forget HAL, where is Cherry 2000!

  • Too hard (Score:4, Insightful)

    by Hatta ( 162192 ) on Friday May 25, 2012 @01:33PM (#40111415) Journal

    Strong AI has always been the stuff of sci-fi. Not because it's impossible, but because it's impractically difficult. We can barely model how a single protein folds, with a worldwide network of computers. Does anyone seriously expect that we can model intelligence with similar resources?

    Evolution has been working on us for millions of years. It will probably take us hundreds or thousands of years before we get strong AI.

    • It takes us over 5 years to train most humans well enough to pass a Turing test; it's reasonable to think that it might take longer to train a machine.

    • by na1led ( 1030470 )
      I don't think it would require lots of resources to model true AI; the difficulty is figuring out how it's done. It's similar to how GPS works. Once you understand the physics, it's easy to make use of it.
      • Actually GPS is very simple. We've been using similar technology for hundreds of years to navigate ships. The only difference between using a GPS device and a sextant is that we have a much more accurate clock, and the reference objects in space are in a much better location. A sextant [wikipedia.org] can get the location down to 400m if on land, or around 2.8 km when at sea (due to the movement of the waves).
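        To put a rough number on the "much more accurate clock" point (this back-of-the-envelope sketch is mine, and the clock-error figures are illustrative assumptions, not the poster's): GPS turns signal travel time into distance, so clock error becomes range error directly, while a sextant-plus-chronometer fix turns time-of-day error into longitude error via the Earth's rotation.

          # Rough, order-of-magnitude illustration (my numbers, not the poster's).
          C = 299_792_458.0            # speed of light, m/s
          EARTH_CIRCUM = 40_075_000.0  # equatorial circumference, m

          # GPS: position comes from signal travel time, so clock error -> range error.
          gps_clock_error = 10e-9      # assume ~10 ns of receiver timing error
          print("GPS range error:", C * gps_clock_error, "m")                       # ~3 m

          # Sextant + chronometer: longitude comes from time of day (Earth turns 360 deg / 24 h).
          chronometer_error = 1.0      # assume a 1 second chronometer error
          print("Longitude error:", EARTH_CIRCUM * chronometer_error / 86_400, "m")  # ~460 m at the equator

        Which is roughly consistent with the accuracy figures quoted above.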
    • Re:Too hard (Score:5, Insightful)

      by pitchpipe ( 708843 ) on Friday May 25, 2012 @05:04PM (#40114549)

      Evolution has been working on us for millions of years. It will probably take us hundreds or thousands before we get strong AI.

      It also took evolution millions of years to get flight. You're comparing apples and oranges. Evolution has no intelligence directing its actions, whereas sometimes human activity does.

      Dear Baden Powell

      I am afraid I am not in the flight for "aerial navigation". I was greatly interested in your work with kites; but I have not the smallest molecule of faith in aerial navigation other than ballooning or of expectation of good results from any of the trials we hear of. So you will understand that I would not care to be a member of the aeronautical Society.

      Yours truly Kelvin

      This, a mere 13 years before the first airplane crossing of the English Channel.

  • by dargaud ( 518470 ) <slashdot2@gd a r gaud.net> on Friday May 25, 2012 @01:36PM (#40111467) Homepage
    "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."
    • by LateArthurDent ( 1403947 ) on Friday May 25, 2012 @01:55PM (#40111811)

      "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

      I can see the point, but that also applies to humans. There's a whole lot of research going on to determine exactly what it means for us to "think." A lot of it implies that maybe what we take for granted as our reasoning process to make decisions might just be justification for decisions that are already made. Take this experiment, which I first read about in The Believing Brain [amazon.com], and found also described on this site [timothycomeau.com] when I googled for it.

      One of the most dramatic demonstrations of the illusion of the unified self comes from the neuroscientists Michael Gazzaniga and Roger Sperry, who showed that when surgeons cut the corpus callosum joining the cerebral hemispheres, they literally cut the self in two, and each hemisphere can exercise free will without the other one’s advice or consent. Even more disconcertingly, the left hemisphere constantly weaves a coherent but false account of the behavior chosen without its knowledge by the right. For example, if an experimenter flashes the command “WALK” to the right hemisphere (by keeping it in the part of the visual field that only the right hemisphere can see), the person will comply with the request and begin to walk out of the room. But when the person (specifically, the person’s left hemisphere) is asked why he just got up he will say, in all sincerity, “To get a Coke” – rather than, “I don’t really know” or “The urge just came over me” or “You’ve been testing me for years since I had the surgery, and sometimes you get me to do things but I don’t know exactly what you asked me to do”.

      Basically, what I'm saying is that if all you want is an intelligent machine, making it think exactly like us is not what you want to do. If you want to transport people under water, you want a submarine, not a machine that can swim. However, researchers do build machines that emulate the way humans walk, or how insects glide through water. That helps us understand the mechanics of that process. Similarly, in trying to make machines that think as we do, we might understand more about ourselves.

      • by narcc ( 412956 )

        Yeah, split-brain != split mind

        Put down the pop-sci books and go check out the actual research. That particular conclusion isn't supported by the evidence at all.

        • Yeah, split-brain != split mind

          Put down the pop-sci books and go check out the actual research. That particular conclusion isn't supported by the evidence at all.

          Ok. Does Nature [nature.com] count?

          • by narcc ( 412956 )

            HaHa!

            I have that paper on my desk now (I pulled it out a few weeks ago). It's a mess.

            One thing that was particularly telling is that lots of very basic information, like the number of participants, is completely absent.

            This is to say nothing of the massive problems in their methodology. (It's been criticized VERY heavily by other researchers.)
            It made a splash in the popular press, but hasn't held up well at all under scrutiny.

            Some fun facts about this pile of garbage: their "predictions" are accurate <

            • One thing that was particularly telling is that lots of very basic information, like the number of participants, is completely absent.

              Twelve subjects. That can be clearly determined from figure 2 and figure 9.

              Some fun facts about this pile of garbage: their "predictions" are accurate

              Chance is 50%. The fact that they are only accurate to less than 60% doesn't mean much without further statistical analysis. For that analysis, they claim p

              It made a splash in the popular press, but hasn't held up well at all under scrutiny.

              That might well be true. This is not my field at all, but I didn't just go look for an abstract. When this thing made a splash in the media, I did read the actual paper, and it's surprisingly easy to understand. Maybe that does mean it's not a very good paper, if somebody not trained in the field can follow it, I don't know. You appear to be in the field, but two of your complaints imply you haven't read the paper that closely, so I'm not so much defending the paper (I'm not qualified) as pointing out flaws in your analysis. Maybe you can point me to the papers that criticize this one "VERY heavily", and I can learn something that will help me fix my ignorance, instead of just attacking my lack of expertise, which isn't very constructive without some help to fix the problem.

              Of course, the biggest point to be made is "That paper has absolutely nothing to do with split-brain subjects."

              I didn't say it did. The split-brain study was done much earlier. I cited it as evidence of the conclusion, and implied that, given the new evidence, you can look at the earlier split-brain studies in a new light. I once again concede I might be wrong, but again, you might want to point me to some literature to help educate me.

              • by narcc ( 412956 )

                I cited it as evidence of the conclusion, and implied that, given the new evidence, you can look at the earlier split-brain studies in a new light

                I'm not seeing the connection between the two. What are you trying to say?

            • This is why you should pay attention to the preview once it pops up. In the sibling post, I meant to say,

              Chance is 50%. The fact that they are only accurate to less than 60% doesn't mean much without further statistical analysis. For that analysis, they claim p < 0.05, meaning a result at least that extreme would occur by chance less than 5% of the time. That still leaves it entirely possible it is an anomaly, but you can't tell that without trying to repeat the experiment. If you know of papers that tried to repeat the experi
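              To make the statistics point concrete (this sketch is mine, and the trial counts are invented assumptions, since the actual number of trials isn't given here): whether ~58% accuracy is meaningfully above the 50% chance level depends almost entirely on how many trials were run, which is why the accuracy figure alone doesn't say much.

                # How surprising is ~58% accuracy against 50% chance? It depends on n.
                # (Illustrative only; the trial counts below are invented.)
                from math import comb

                def p_value_at_least(k, n, p=0.5):
                    # P(X >= k) for X ~ Binomial(n, p): chance of doing at least this well by guessing.
                    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

                for n_trials in (50, 200, 1000):          # hypothetical trial counts
                    successes = round(0.58 * n_trials)     # ~58% "correct predictions"
                    print(n_trials, "trials:", p_value_at_least(successes, n_trials))
                # ~0.16 at 50 trials (not significant), ~0.01 at 200, far below 0.05 at 1000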

    • That is why we seek out each other and other intelligences in the universe. Steven Pinker captured the gist in calling it The Language Instinct. Humans go more or less crazy in perpetual, involuntary solitude.

      A computer intelligence is probably the best long term prospect for an interesting intelligence to communicate with. We've been trying for a long time to communicate with animals, spiritual beings and aliens. But these have not really panned out. A "hard A.I." would be something interesting t
    • by Darinbob ( 1142669 ) on Friday May 25, 2012 @02:15PM (#40112107)

      A problem is that terms like "intelligence" and "reason" are very vague. People used to think that a computer could be considered intelligent if it could win a game of chess against a master, but now that this has happened it's dismissed because it's just databases and algorithms and not intelligence.

      The bar keeps moving, and the definitions change, and ultimately the goals change. There's a bit of superstition around the word "intelligence" and some people don't want to use it for something that's easily explained, because intelligence is one of the last big mysteries of life. The original goal may have been to have computers that indeed do operate in less of a strictly hardwired way, not following predetermined steps but deriving a solution on their own. That goal was achieved decades ago. I would consider something like Macsyma to truthfully be artificial intelligence, as there is some reasoning and problem solving, but other people would reject this because it doesn't think like a human and they're using a different definition of "intelligence". Similarly I think modern language translators like those at Google truthfully are artificial intelligence, even though we know how they work.

      The goals of having computers learn and adapt and do some limited amount of reasoning based on data have been achieved. But the goals change and the definitions change.

      Back in grad school I mentioned to an AI prof some advances I had seen in the commercial world in image recognition software, and he quickly dismissed them as uninteresting because they didn't use artificial neural networks (the fad of that decade). His idea of artificial intelligence meant emulating the processes in brains rather than recreating the things that brains can do in different ways. You can't really blame academic researchers for this though; they're focused in on some particular idea or method that is new while not being as interested in things that are well understood. You don't get research grants for things people already know how to do.

      That said, the "chat bot" contests are still useful in many ways. There is a need to be quick, a need for massive amounts of data, a need for adaptation, etc. Perhaps a large chunk of it is just fluff but much of it is still very useful stuff. There is plenty of opportunity to plug in new ideas from research along with old established techniques and see what happens.

  • by getto man d ( 619850 ) on Friday May 25, 2012 @01:37PM (#40111475)
    I would argue that placing emphasis only on the Turing test itself is a distraction from the broad field of AI. For example, there is a ton of really cool work coming from various labs ( http://www.ias.informatik.tu-darmstadt.de/ [tu-darmstadt.de] , http://www.cs.berkeley.edu/~pabbeel/video_highlights.html [berkeley.edu]).

    There are many achievements met and much progress made, e.g. Peters' group's ping-pong robot -- just not the ones researchers promised many years ago.
  • Can androids win the Darwin Award, even if they have won the Turing Award?

    Yes. Most likely.

    Intelligence is not necessarily a prerequisite for being human.

  • by msobkow ( 48369 ) on Friday May 25, 2012 @01:40PM (#40111523) Homepage Journal

    I tend to think we need to split out "Artificial Sentience" from "Artificial Intelligence." Technologies used for expert systems are clearly a form of subject-matter artificial intelligence, but they are not creative nor are they designed to learn about and explore new subject materials.

    Artificial Sentience, on the other hand, would necessarily incorporate learning, postulation, and exploration of entirely new ideas or "insights." I firmly believe that in order to hold a believable conversation, a machine needs sentience, not just intelligence. Being able to come to a logical conclusion or to analyze sentence structures and verbiage into models of "thought" are only a first step -- the intelligence part.

    Only when a machine can come up with and hold a conversation on new topics, while being able to tie the discussion history back to earlier statements so that the whole conversation "holds together", will it be able to "fool" people. Because at that point, it won't be "fooling" anyone -- it will actually be thinking.

    • by mcgrew ( 92797 ) *

      Only when a machine can come up with and hold a conversation on new topics, while being able to tie the discussion history back to earlier statements so that the whole conversation "holds together", will it be able to "fool" people. Because at that point, it won't be "fooling" anyone -- it will actually be thinking.

      No, it will still be smoke and mirrors. Magicians are pretty clever at making impossible things appear to happen; tricking a human into believing a machine is sentient is no different. Look up "Chines

      • Human intelligence is just as reliant upon "magic tricks" to work. You seem to be stuck in some kind of 18th-century Rationalist notion of man. It's all shortcuts, heuristics, and hacks, all the way down.
      • by Hatta ( 162192 )

        The Chinese Room is laughably misguided. It relies upon a confusion of levels. It's true that the man in the Chinese room does not know Chinese. But it's equally true that any individual neuron in my brain does not know English. The important point is that the system as a whole (the man in the chinese room plus the entire collection of rules OR the collection of neurons in my skull plus the laws of physics) knows Chinese or English (respectively).

        McGrew, you should read some Hofstadter. He's pretty e

      • by dissy ( 172727 )

        No, it will still be smoke and mirrors. Magicians are pretty clever at making impossible things appear to happen; tricking a human into believing a machine is sentient is no different. Look up "Chinese room".

        The problem of proving a machine is or is not sentient is actually very very old. At least 10,000 years old.

        After all, can you prove to me you are sentient or conscious?

        And I don't mean that as an insult. I can't prove that I am sentient or conscious to you either.
        But even if it is accepted that all humans are sentient, there is still the fact that any one individual cannot prove they are, let alone that we as humanity can prove anyone else is.

        Can we show even a lower species is sentient? I believe my pet dog is

    • by narcc ( 412956 )

      I tend to think we need to split out "Artificial Sentience" from "Artificial Intelligence."

      Not familiar with the field at all, are you?

  • AI and chess (Score:5, Insightful)

    by Zontar_Thing_From_Ve ( 949321 ) on Friday May 25, 2012 @01:44PM (#40111593)
    Back in the early 1950s, it was thought that the real prize of AI was to get a computer able to beat the best human chess player consistently. The reasoning at the time was that the only way this would be possible was for breakthroughs to happen in AI where a computer could learn to think and could reason better at chess than a human. Fast forward to 10 or so years ago, when IBM realized that just by throwing money at the problem they could get a computer to play chess by brute force and beat the human champion more often than not. So I'm not surprised that some AI people discount the Turing test. I am not an expert in the field but it seems to me that AI is a heck of a lot harder than anybody realized in the 1950s and we may still be decades or even centuries away from the kind of AI that people 60 or so years ago thought we'd have by now. Part of me does wonder if, just like AI research in chess took the easy way out by resorting to brute force, they'll now just say the Turing test is not valid rather than actually try to pass it, because passing it would require breakthroughs nobody has thought of yet, and that's hard.
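    For readers who haven't seen what "brute force" chess roughly means, here is a bare-bones minimax search (my own sketch, not how Deep Blue was actually written; a real engine adds alpha-beta pruning, move ordering, opening books, a hand-tuned evaluation function and specialized hardware). The point is that the core is exhaustive look-ahead, not anything resembling insight.

      # Plain minimax look-ahead -- illustrative sketch only.
      def minimax(state, depth, maximizing, moves, apply_move, evaluate):
          if depth == 0 or not moves(state):
              return evaluate(state)
          if maximizing:
              return max(minimax(apply_move(state, m), depth - 1, False, moves, apply_move, evaluate)
                         for m in moves(state))
          return min(minimax(apply_move(state, m), depth - 1, True, moves, apply_move, evaluate)
                     for m in moves(state))

      # Toy usage: a "game" where each move adds 1 or 2 to a counter, higher is better.
      print(minimax(0, 3, True,
                    moves=lambda s: [1, 2] if s < 6 else [],
                    apply_move=lambda s, m: s + m,
                    evaluate=lambda s: s))

    Throwing hardware at deeper and deeper versions of this is how chess fell, without any of the hoped-for breakthroughs in machine reasoning.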
    • by na1led ( 1030470 )
      Chess is a very different kind of AI. Games like this rely on weighing patterns in a matrix, very similar to statistical probability solving, which can easily be done on paper. True AI is where programs have the ability to evolve and change, and maybe even rewrite their own code. I don't think we have the ability to do that yet, though I'm sure it wouldn't require millions of lines of code.
    • by thaig ( 415462 )

      I have thought similarly. I don't see how we can make true use of robots if they don't understand us. To understand us, to predict or anticipate what we need, I think they have to have some common experience; otherwise it would take forever to explain what you want precisely enough. Without understanding they would be very annoying, in the same way it is annoying to work with people whose culture is so greatly at odds with yours that you can never quite interpret what they mean.

      This kind of thing

  • by medv4380 ( 1604309 ) on Friday May 25, 2012 @01:46PM (#40111625)
    Artificial Intelligence is just that: artificial. Deep Blue has zero actual intelligence, but has plenty of ways of accomplishing a task (chess) that usually requires actual intelligence. The article has confused Machine Intelligence and Machine Learning with Artificial Intelligence. The problem is that in those areas no one is "best suited". If we knew what we needed to do for Machine Intelligence to work then we'd have a HAL 9000 by now. Instead we have Watson, which, though impressive, is a long way from HAL.
  • by pr0t0 ( 216378 ) on Friday May 25, 2012 @01:49PM (#40111677)

    Festo's Smartbird is hardly indistinguishable from a real bird, but it is much more so than, say, da Vinci's ornithopter. A slow and steady progress can be charted from the latter to the former. At some point in the future, the technology will be nearly indistinguishable from a real bird, thus passing the "Norvig Test".

    That's the whole point of the Turing Test; it's supposed to be hard and maybe even impossible. It doesn't test whether current AI is useful, it tests if AI is indistinguishable from a human. That's a pinnacle moment, and one that bestows great benefits as well as serious implications.

    Personally, I think it will happen; maybe not for 50, 100, 500 years...but it will happen.

  • for signs of natural intelligence.

  • computers are so good at doing repetitive monkey work that most people don't like to do

    • by PPH ( 736903 )
      We'll know when true AI has arrived. When we give a computer one of these mind-numbing tasks and it says, "Kiss my shiny metal ass".
  • by na1led ( 1030470 ) on Friday May 25, 2012 @02:03PM (#40111927)
    If a computer could think for itself, and solve problems on its own, it would logically conclude the fate of humans in less than a second. Unless we could confine that intelligence so it can't access the Internet, then those who possess the technology would rule the world. Either way, super intelligence is bad for humans.
    • by MobyDisk ( 75490 )

      This is exactly the kind of hyperbole that diminishes meaningful contributions to the field of AI.

  • The reason the quest for good AI has waned is that all of the stuff you'd use it on can be done just as cheaply through Mechanical Turk or by hiring a bunch of dudes in India to do it.
  • Umm... HAL-9000 was homicidal. Are we really asking for that?
  • Though once the real money auction house opens in Diablo 3 he'll move over there.

  • Ok, the Turing Test was a thought experiment, and not intended to be a real-world filter for useful AI. Clearly non-humanlike general-purpose intelligence would be useful regardless of the form.

    The test was a thought experiment to throw down the gauntlet to cs philosophers - how would you even know another human skull, aside from yourself, was conscious or not? It doesn't even really have anything to do with intelligence per se so much as illustrating the difference between intelligence and conscious inte

  • Processing Power. We just don't have enough yet...

    But it's getting really close. Cripes, we are doing things today in our pocket that only 25 years ago were utterly impossible on a $20 billion mainframe.

    If the rate of growth in processing power continues, we will have a computer with human-brain-level processing within 20 years. If we get a breakthrough or two, it could be a whole lot sooner.

    What the human brain does is massive. Just the processing in the visual cortex is utterly insane in ho

    • by na1led ( 1030470 )
      Actually the processing speed of our brains is very slow; it's just very efficient at what it does. We don't need faster computers, we need them to be efficient. A well-written piece of code could perform better on a Commodore 64 than a poorly written one on a supercomputer.
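      The Commodore 64 claim is hyperbole, but the underlying point about efficiency is easy to make concrete with a back-of-the-envelope comparison (my sketch; the machine speeds and problem size are invented, order-of-magnitude numbers): a better algorithm can buy more than raw hardware speed does.

        # Toy comparison: good algorithm on slow hardware vs. bad algorithm on fast hardware.
        # (Invented, order-of-magnitude numbers -- illustrative only.)
        import math

        n = 10_000_000                      # problem size (e.g. items to sort)
        slow_machine_ops_per_sec = 1e6      # something C64-ish
        fast_machine_ops_per_sec = 1e10     # something modern

        bad_algo_ops  = n ** 2              # O(n^2), e.g. bubble sort
        good_algo_ops = n * math.log2(n)    # O(n log n), e.g. mergesort

        print("bad algorithm, fast machine :", bad_algo_ops  / fast_machine_ops_per_sec, "seconds")  # ~10,000 s
        print("good algorithm, slow machine:", good_algo_ops / slow_machine_ops_per_sec, "seconds")  # ~230 s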
  • I think the author has the wrong end of the stick here. We have not abandoned strong AI and the Turing test to focus on more specialized systems.. we are focusing on more specialized systems because we have figured out that this is a really damn hard problem, and the optimistic hopes that it would be solved quickly have given way to attacking it one step at a time. Researchers are still very interested in the long term goal, but those in the field who are "best-suited to building a machine capable of acti
  • by cardhead ( 101762 ) on Friday May 25, 2012 @02:23PM (#40112207) Homepage

    These sorts of articles that pop up from time to time on slashdot are so frustrating to those of us who actually work in the field. We take an article written by someone who doesn't actually understand the field, about a contest that has always been no better than a publicity stunt*, which triggers a whole bunch of speculation by people who read Godel, Escher, Bach and think they understand what's going on.

    The answer is simple. AI researchers haven't forgotten the end goal, and it's not some cynical ploy to advance an academic career. We stopped asking the big-AI question because we realized it was an inappropriate time to ask it. By analogy: these days physicists spend a lot of time thinking about the big central unify-everything theory, and that's great. In 1700, that would have been the wrong question to ask -- there were too many phenomena that we didn't understand yet (energy, EM, etc). We realized 20 years ago that we were chasing ephemera and not making real progress, and redeployed our resources in ways to understand what the problem really was. It's too bad this doesn't fit our SciFi timetable; all we can do is apologize. And PLEASE do not mention any of that "singularity" BS.

    I know, I know, -1 flamebait. Go ahead.

    *Note I didn't say it was a publicity stunt, just that it was no better than one. Stuart Shieber at Harvard wrote an excellent dismantling of the idea 20 years ago.

  • Where's HAL9000 (Score:3, Informative)

    by Anonymous Coward on Friday May 25, 2012 @02:37PM (#40112367)

    He's here: https://twitter.com/HAL9000_ [twitter.com]

  • by nbender ( 65292 ) * on Friday May 25, 2012 @02:46PM (#40112499)

    Old AI guy here (natural language processing in the late '80s).

    The barrier to achieving strong AI is the Symbol Grounding Problem. In order to understand each other we humans draw on a huge amount of shared experience which is grounded in the physical world. Trying to model that knowledge is like pulling on the end of a huge ball of string - you keep getting more string the more you pull and ultimately there is no physical experience to anchor to. Doug Lenat has been trying to create a semantic net modelling human knowledge since my time in the AI field, with what he now calls OpenCyc (www.opencyc.org). The reason that weak AI has had some success is that its practitioners are able to bound their problems and thus stop pulling on the string at some point.

    See http://en.wikipedia.org/wiki/Symbol_grounding [wikipedia.org].
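    A toy way to see the grounding problem (my illustration, with an invented miniature lexicon; a real semantic net like Cyc is vastly larger and richer): if every symbol is defined only in terms of other symbols, then chasing definitions never bottoms out in anything outside the symbol system.

      # "Dictionary-go-round": every definition points only at more symbols.
      lexicon = {
          "water":   ["liquid", "drink"],
          "liquid":  ["substance", "flows"],
          "drink":   ["swallow", "liquid"],
          "swallow": ["ingest"],
          "ingest":  ["take_in", "substance"],
          "substance": ["matter"],
          "flows":   ["moves", "liquid"],
          "take_in": ["ingest"],
          "moves":   ["changes_place"],
          "matter":  ["substance"],
          "changes_place": ["moves"],
      }

      def expand(symbol, depth=3):
          # Chase definitions; all we ever reach is more symbols, never an experience.
          reached = {symbol}
          if depth > 0:
              for s in lexicon.get(symbol, []):
                  reached |= expand(s, depth - 1)
          return reached

      print(expand("water"))   # just a bigger bag of ungrounded symbols

    Bounding the problem, as the weak-AI systems mentioned above do, amounts to deciding in advance where that chase is allowed to stop.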

  • by swm ( 171547 ) * <swmcd@world.std.com> on Friday May 25, 2012 @03:04PM (#40112785) Homepage

    Artificial Stupidity
    http://www.salon.com/2003/02/26/loebner_part_one/ [salon.com]

    Long, funny, and informative article on the history of the Loebner prize.

  • by JDG1980 ( 2438906 ) on Friday May 25, 2012 @03:07PM (#40112827)

    Some commenters in this thread (and elsewhere) have questioned whether "strong" artificial intelligence is actually possible.

    The feasibility of strong AI follows directly from the rejection of Cartesian dualism.

    If there is no "ghost in the machine," no magic "soul" separate from the body and brain, then human intelligence comes from the physical operation of the brain. Since they are physical operations, we can understand them, and reproduce the algorithm in computer software and/or hardware. That doesn't mean it's *easy* – it may take 200 more years to understand the brain that well, for all I know – but it must be *possible*.

    (Also note that Cartesian dualism is not the same thing as religion, and rejecting it does not mean rejecting all religious beliefs. From the earliest times, Christians taught the resurrection of the *body*, presumably including the brain. The notion of disembodied "souls" floating around in "heaven" owes more to Plato than to Jesus and St. Paul. Many later Christian philosophers, including Aquinas, specifically rejected dualism in their writings.)

    • If there is no "ghost in the machine," no magic "soul" separate from the body and brain, then human intelligence comes from the physical operation of the brain.

      Even if living creatures as we know them are animated from without, that still wouldn't mean that you couldn't create an algorithm that is intelligent; only that it would not be alive as we would understand life.

      Further, if there were something physically special about the brain of a living creature that made it a sort of receiver for this animating quality, then it might well be possible to construct a machine analogue and thus give it life...

  • The task proves difficult, so we denigrate the task?

    "Having to fool a human" is not the point. Fooling a human is a measure of achievement, not an end in itself. Yes, a machine that can solve human problems but doesn't appear to be human is a useful thing. But one that appears to be human demonstrates specific capabilities that are also very useful. Natural language processing, for one. Serving as a companion is another, possibly creepy but technically awesome and potentially game-changing one. Being able t

  • Provided one of the Three Laws has the following equation?

    If (potential results) > (harm) then DO
    If (potential results) < (harm) then NEXT
    If (requested action) = (violation of law) then REPORT TO PUBLIC then HALT OPERATION
    If (requested action) != (violation of law) then NEXT
    Echo "I am sorry, I cannot comply with that order at this time. The potential for harm is greater than the potential result."

  • by X86Daddy ( 446356 ) on Saturday May 26, 2012 @03:28PM (#40122957) Journal

    Memristors. Google the word. I did not expect to see real AI in my lifetime before that announcement, and now I do. Memristors are close enough to neurons that you can run something like a brain on a chip, whereas before, all neural nets were simulated and therefore took a lot of computing power just to do small things like machine vision (face recognition, etc...).
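    For context on what "simulated" means here (my toy example; the constants are textbook-style placeholder values): even one simulated neuron has to be stepped through time in software, integrating its membrane equation tick by tick, and a useful network multiplies that cost by millions of neurons. The hope the poster describes is that memristive hardware does that kind of analog bookkeeping in the device physics itself.

      # A single leaky integrate-and-fire neuron, stepped in software (toy values).
      v_rest, v_thresh, tau, dt = -65.0, -50.0, 20.0, 1.0   # mV, mV, ms, ms
      input_current = 20.0                                   # constant drive, arbitrary units
      v = v_rest
      for t in range(100):                                   # 100 ms of simulated time
          v += dt * (-(v - v_rest) + input_current) / tau    # leaky integration toward threshold
          if v >= v_thresh:
              print("spike at t =", t, "ms")
              v = v_rest                                     # reset after a spike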
