AI Technology

Where's HAL 9000?

An anonymous reader writes "With entrants to this year's Loebner Prize, the annual Turing Test contest designed to identify a thinking machine, demonstrating that chatbots are still a long way from passing as convincing humans, this article asks: what happened to the quest to develop a strong AI? 'The problem Loebner has is that computer scientists in universities and large tech firms, the people with the skills and resources best suited to building a machine capable of acting like a human, are generally not focused on passing the Turing Test. ... And while passing the Turing Test would be a landmark achievement in the field of AI, the test's focus on having the computer fool a human is a distraction.' Prominent AI researchers, like Google's head of R&D Peter Norvig, have compared the Turing Test's requirement that a machine fool a judge into thinking they are talking to a human to demanding that an aircraft maker construct a plane that is indistinguishable from a bird."

  • by crazyjj ( 2598719 ) * on Friday May 25, 2012 @02:29PM (#40111349)

    He talks mostly in this article about how the focus has been on developing specialized software for solving specific problems with specialized goals, rather than on general AI. And it's true that this is part of what is holding general AI back. But there is also something that Loebner is perhaps loath to discuss, and that's the underlying (and often unspoken) matter of the *fear* of AI.

    For every utopian vision in science fiction and pop culture of a future where AI is our pal, helping us out and making our lives more leisurely, there is another dystopian counter-vision of a future where AI becomes the enemy of humans, making our lives into a nightmare. A vision of a future where AI equals, and then inevitably surpasses, human intelligence touches a very deep nerve in the human psyche. Human fear of being made obsolete by technology has a long history. And more recently, the fear of technology turning into a direct *enemy* has grown more and more prevalent--from the aforementioned HAL 9000 to Skynet. There is a real dystopian counter-vision to Loebner's utopianism.

    People aren't just indifferent to or uninterested in AI. I think there is a part of us, maybe not even a part we're always conscious of, that's very scared of it.

  • by betterunixthanunix ( 980855 ) on Friday May 25, 2012 @02:31PM (#40111367)
    Too many decades of lofty promises that never materialized have turned "AI research" into a dirty word...
  • by msobkow ( 48369 ) on Friday May 25, 2012 @02:40PM (#40111523) Homepage Journal

    I tend to think we need to split out "Artificial Sentience" from "Artificial Intelligence." Technologies used for expert systems are clearly a form of subject-matter artificial intelligence, but they are not creative, nor are they designed to learn about and explore new subject matter.

    Artificial Sentience, on the other hand, would necessarily incorporate learning, postulation, and exploration of entirely new ideas or "insights." I firmly believe that in order to hold a believable conversation, a machine needs sentience, not just intelligence. Being able to come to a logical conclusion or to analyze sentence structures and verbiage into models of "thought" is only a first step -- the intelligence part.

    Only when a machine can come up with and hold a conversation on new topics, while tying the discussion history back to earlier statements so that the whole conversation "holds together," will it be able to "fool" people. Because at that point, it won't be "fooling" anyone -- it will actually be thinking.
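
    A crude way to picture that "holds together" requirement is plain bookkeeping: the machine at least has to remember what was said and notice when a new statement clashes with an earlier one. Here is a minimal, hypothetical Python sketch of just that bookkeeping (the class and method names are invented for illustration); it is obviously nowhere near sentience, only the mechanics the comment describes.

    # Hypothetical sketch: an agent that remembers earlier statements and
    # checks new ones against them -- the bare minimum for a conversation
    # that "holds together". Not a real chatbot architecture.
    class ConsistentChatter:
        def __init__(self):
            self.history = []   # every (topic, value) heard so far
            self.facts = {}     # topic -> most recently asserted value

        def hear(self, topic, value):
            self.history.append((topic, value))
            if topic in self.facts and self.facts[topic] != value:
                return (f"Earlier you said {topic} was {self.facts[topic]}; "
                        f"now you say {value}?")
            self.facts[topic] = value
            return f"Noted: {topic} is {value}."

    bot = ConsistentChatter()
    print(bot.hear("the weather", "sunny"))    # Noted: the weather is sunny.
    print(bot.hear("the weather", "raining"))  # ties back to the earlier statement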

  • by LateArthurDent ( 1403947 ) on Friday May 25, 2012 @02:55PM (#40111811)

    "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

    I can see the point, but that also applies to humans. There's a whole lot of research going on to determine exactly what it means for us to "think." A lot of it implies that what we take for granted as our reasoning process for making decisions might just be after-the-fact justification for decisions that have already been made. Take this experiment, which I first read about in The Believing Brain [amazon.com] and later found described on this site [timothycomeau.com] when I googled for it.

    One of the most dramatic demonstrations of the illusion of the unified self comes from the neuroscientists Michael Gazzaniga and Roger Sperry, who showed that when surgeons cut the corpus callosum joining the cerebral hemispheres, they literally cut the self in two, and each hemisphere can exercise free will without the other one’s advice or consent. Even more disconcertingly, the left hemisphere constantly weaves a coherent but false account of the behavior chosen without its knowledge by the right. For example, if an experimenter flashes the command “WALK” to the right hemisphere (by keeping it in the part of the visual field that only the right hemisphere can see), the person will comply with the request and begin to walk out of the room. But when the person (specifically, the person’s left hemisphere) is asked why he just got up he will say, in all sincerity, “To get a Coke” – rather than, “I don’t really know” or “The urge just came over me” or “You’ve been testing me for years since I had the surgery, and sometimes you get me to do things but I don’t know exactly what you asked me to do”.

    Basically, what I'm saying is that if all you want is an intelligent machine, making it think exactly like us is not what you want to do. If you want to transport people under water, you want a submarine, not a machine that can swim. However, researchers do build machines that emulate the way humans walk, or how insects glide through water. That helps us understand the mechanics of that process. Similarly, in trying to make machines that think as we do, we might understand more about ourselves.

  • by Anonymous Coward on Friday May 25, 2012 @03:01PM (#40111887)

    Which leads to what I fear, that people like those in PETA will start a "machine rights" movement, where it may be illegal for me to shut off a machine I built myself!

    Parents of many teenagers share your frustration at being unable to permanently turn off the machines they created.

    Hell, Republicans want to make it illegal to shut one of those machines off before it can even function without support from a host machine.

  • by cardhead ( 101762 ) on Friday May 25, 2012 @03:23PM (#40112207) Homepage

    These sorts of articles that pop up from time to time on Slashdot are so frustrating to those of us who actually work in the field. We get an article written by someone who doesn't actually understand the field, about a contest that has always been no better than a publicity stunt*, and it triggers a whole bunch of speculation by people who read Gödel, Escher, Bach and think they understand what's going on.

    The answer is simple. AI researchers haven't forgotten the end goal, and it's not some cynical ploy to advance an academic career. We stopped asking the big-AI question because we realized it was an inappropriate time to ask it. By analogy: these days physicists spend a lot of time thinking about the big central unify-everything theory, and that's great. In 1700, that would have been the wrong question to ask -- there were too many phenomena we didn't understand yet (energy, EM, etc.). We realized 20 years ago that we were chasing ephemera and not making real progress, and redeployed our resources in ways that would help us understand what the problem really was. It's too bad this doesn't fit the sci-fi timetable; all we can do is apologize. And PLEASE do not mention any of that "singularity" BS.

    I know, I know, -1 flamebait. Go ahead.

    *Note I didn't say it was a publicity stunt, just that it was no better than one. Stuart Shieber at Harvard wrote an excellent dismantling of the idea 20 years ago.

  • by Anonymous Coward on Friday May 25, 2012 @03:39PM (#40112401)

    We're machines. Very nice ones, but machines. We have information storage, base programming, learning and sensory input. All of this happens by use of our real, observable, bodily mechanisms. As far as I know there's no evidence to the contrary (read as: magic).

    So it follows that, assuming we can eventually replicate the function of any real, observable mechanism, there's no reason why we can't recreate genuine, humanesque intelligence. Whether the component hardware is "wet" or not is just a manufacturing detail of meeting specs.

    But yeah, AI work like we're talking about is a magic show. Shortcuts. Simulating the output of a machine that doesn't actually exist. We're faking symptoms, the best ways we know how. A magic trick can only be perfected so much before you've got to actually do the thing you've been pretending to do.

  • by nbender ( 65292 ) * on Friday May 25, 2012 @03:46PM (#40112499)

    Old AI guy here (natural language processing in the late '80s).

    The barrier to achieving strong AI is the Symbol Grounding Problem. In order to understand each other, we humans draw on a huge amount of shared experience which is grounded in the physical world. Trying to model that knowledge is like pulling on the end of a huge ball of string -- you keep getting more string the more you pull, and ultimately there is no physical experience to anchor it to. Doug Lenat has been trying, since my time in the AI field, to create a semantic net modelling human knowledge with what he now calls OpenCyc (www.opencyc.org). The reason weak AI has had some success is that its systems are able to bound their problems and thus stop pulling on the string at some point.

    See http://en.wikipedia.org/wiki/Symbol_grounding [wikipedia.org].
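
    As a toy picture of that ball of string, here is a hypothetical Python sketch (the symbols and structure are invented for illustration and have nothing to do with how Cyc or any real semantic net is built): every symbol is defined only in terms of other symbols, so chasing definitions never bottoms out in anything physical, and a bounded weak-AI system simply chooses where to stop pulling.

    # Toy symbolic "knowledge base": every symbol is defined only in terms
    # of other symbols. Hypothetical illustration, not Cyc's representation.
    KB = {
        "coffee": ["beverage", "bitter", "hot"],
        "beverage": ["liquid", "drinkable"],
        "bitter": ["taste"],
        "hot": ["temperature"],
        "liquid": ["matter"],
        "drinkable": ["liquid", "safe"],
        "taste": ["perception"],
        "temperature": ["perception"],
        "matter": ["physical-thing"],
        # ...and so on: each lookup only ever yields more ungrounded symbols.
    }

    def unpack(symbol, seen=None):
        """Chase definitions; the walk never reaches anything outside the symbols."""
        seen = set() if seen is None else seen
        if symbol not in seen:
            seen.add(symbol)
            for s in KB.get(symbol, []):
                unpack(s, seen)
        return seen

    print(unpack("coffee"))
    # The closure never contains an actual cup of coffee. A bounded (weak-AI)
    # system stops the chase here; grounding would require tying some symbols
    # to sensors and the physical world.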

  • by JDG1980 ( 2438906 ) on Friday May 25, 2012 @04:07PM (#40112827)

    Some commenters in this thread (and elsewhere) have questioned whether "strong" artificial intelligence is actually possible.

    The feasibility of strong AI follows directly from the rejection of Cartesian dualism.

    If there is no "ghost in the machine," no magic "soul" separate from the body and brain, then human intelligence comes from the physical operation of the brain. Since those are physical operations, we can understand them and reproduce the algorithms in computer software and/or hardware. That doesn't mean it's *easy* -- it may take 200 more years to understand the brain that well, for all I know -- but it must be *possible*.

    (Also note that Cartesian dualism is not the same thing as religion, and rejecting it does not mean rejecting all religious beliefs. From the earliest times, Christians taught the resurrection of the *body*, presumably including the brain. The notion of disembodied "souls" floating around in "heaven" owes more to Plato than to Jesus and St. Paul. Many later Christian philosophers, including Aquinas, specifically rejected dualism in their writings.)
