Technology News

Why Motivation Is Key For Artificial Intelligence

Al writes "MIT neuroscientist Ed Boyden has a column discussing the potential dangers of building super-intelligent machines without building in some sort of motivation or drive. Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.' He also notes that the complexity and uncertainty of the universe could easily overwhelm the decision-making process of this intelligence — a problem that many humans also struggle with. Boyden will give a talk on the subject at the forthcoming Singularity Summit."
This discussion has been archived. No new comments can be posted.

  • motivation? (Score:4, Interesting)

    by MindKata ( 957167 ) on Wednesday September 09, 2009 @09:09AM (#29364597) Journal
    Like giving them the motivation to seek power over everyone in the world, and then handing control of that power to the select few who ordered the creation of these robots and AIs. But are robots and AI the real danger, or are they just the latest tools of the minority of people who seek power over others? In which case, isn't it the people who seek power who are ultimately the real danger here?
  • Re:Silly (Score:2, Interesting)

    by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Wednesday September 09, 2009 @09:11AM (#29364619) Homepage Journal

    Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'

    This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

    Plus there's the whole issue of "motivation" implying "free will". Which we probably would have no reason to implement, if we even understood it well enough to be able to implement it.

  • by PIPBoy3000 ( 619296 ) on Wednesday September 09, 2009 @09:12AM (#29364637)
    After all, we're pretty bright and realize that everything we make or do will eventually be destroyed and lost. Still, we persist despite that reality. Careers end, marriages break up, and eventually health fails.

    On second thought, maybe I should just go play video games for awhile.
  • Singularity summit? (Score:4, Interesting)

    by Dr. Spork ( 142693 ) on Wednesday September 09, 2009 @09:13AM (#29364643)
    Ever since I heard this talk (ogg vorbis [longnow.org], mp3 [llnwd.net]) by Bruce Sterling, I can no longer take these singularitarians very seriously. It is probably the best talk I have ever found on the internet, and it should be part of everyone's introduction to thinking about this singularity stuff. The title is: "The Singularity: Your Future as a Black Hole."
  • Madness (Score:5, Interesting)

    by Smivs ( 1197859 ) <smivs@smivsonline.co.uk> on Wednesday September 09, 2009 @09:17AM (#29364705) Homepage Journal

    Has anyone considered the effects on the AI of actually realising it's intelligent? Unlike an organism (Human baby, say) it will not realise this over a protracted period, and may not be able to cope with the concept at all, particularly if it realises that there are other intelligences (us?) which are fundamentally different to itself. It's quite possible that it will go mad as soon as it knows it's intelligent and considers all the implications and ramifications of this.

  • by mcgrew ( 92797 ) * on Wednesday September 09, 2009 @09:19AM (#29364733) Homepage Journal

    I think the thesis is silly. If we build a simulated AI, we can design it any way we want. Asimov's laws of robotics* would suffice to keep robots/computers from playing video games; no need for a sense of purpose.

    There are two things wrong with AI research today. One is that neuroscientists don't understand that computers are glorified abacuses, and the other is that computer scientists don't understand the human brain. Neuroscience is a new science; when I was young, practically nothing was known about how the brain works. Science has made great strides, but the field is still in its infancy.

    The second thing is something I fear -- that someday some people will be screaming for a "machine bill of rights." I don't want my tools to have rights, I want them to do the jobs I set for them to do.

    --
    * Isn't it odd that a biochemist would coin the word "robotics"?

  • by caffeinemessiah ( 918089 ) on Wednesday September 09, 2009 @09:20AM (#29364751) Journal

    Give this AI the built-in ability to have sex, or at least the desire to impress others of its own kind. That should do the job. After all, the desire to have sex (and with it, procreation) is the single strongest force driving humanity forward.

    There's actually a bit of insight here. The only problem is that we don't have a model for "attraction" -- hell, if we did, Slashdot would wither in its readership and die. So while it's (relatively) easy to design sex robots, without an appropriate model for attraction -- and thus things to strive for -- we'd end up with nothing more than a vast, mechanistic orgy of clanging parts, spilled lube, and wasted electricity.

  • by Entrope ( 68843 ) on Wednesday September 09, 2009 @09:20AM (#29364753) Homepage

    It appears that the motivation of this AI is to send out promotional material for its professors. It's not a new observation, though, and a lot of people work through the logic of this situation in high school or early college. I'm not sure why a neuroscientist's talk on it would do more than rehash what is obvious to anyone with a reasonable amount of introspective ability.

    There was a story in one of the Year's Best Science Fiction anthologies (2004 or so, I think) that discussed the motivation problem. A cutting-edge, type-A robotics developer builds progressively smarter and busier AIs, until suddenly the robots just sit there most of the time. His son sat around at home, surfing the web but not taking on hobbies or whatever. Both the robots and the kid realized that they could handle the (effectively mid-Singularity) world quite efficiently by monitoring information and reacting rather than trying to push things in a particular direction. In some ways, those types end up as free riders, but they can also be viewed as market makers (rather than movers).

  • by doug141 ( 863552 ) on Wednesday September 09, 2009 @09:25AM (#29364843)

    The summary touches on topics discussed in the book Descartes' Error, in which neuroscientist Antonio Damasio outlines the functioning of the human brain, argues that the human mind cannot be separated from the human body, and makes the case that emotion is CRITICAL to making decisions. He discusses several patients with brain damage who don't experience emotion (and spends a lot of time dogmatically ruling in and out which brain functions are damaged), and how they can't make even simple decisions. They can talk for hours about every possible pro and con of each choice, but they can't commit to a course of action.

    I recall reading somewhere that recent MRI studies have suggested that the brain makes a choice outside the rational center and a lot of the activity in the brain is to make a rational justification for the decision already made. Explains a lot, if true.

  • Let's speculate! (Score:3, Interesting)

    by 4D6963 ( 933028 ) on Wednesday September 09, 2009 @09:49AM (#29365119)

    ITT: idle speculation on shit that's never gonna happen, or at least not anytime soon.

    Now, let's talk about the societal consequences that having flying cars and jetpacks will have! I for one think that with the advent and democratisation of flying cars that can effectively go from one point to another an order of magnitude faster, it will give rise to people commuting equally longer distances, which I think means it won't be uncommon for one to cross a couple of state lines to go to work everyday. I think it will potentially make the world yet smaller, in the same way that modern means of telecommunications did for interpersonal communication by allowing you to keep in touch in real time with relatives overseas. I also think it will be the death knell for airplane commuter routes, and that the future of commercial passenger airlines will be confined to transoceanic travel. And unlike the way airplanes made the world smaller by reducing long distance travelling time, flying cars will make the world smaller on a much smaller and local scale, by effectively providing very fast transportation for very short distances, something that was only marginally improved since the advent of automobiles. The decongestion of city streets will also mean decreased noise and atmospheric pollution, increased safety and overall an improvement of urban life conditions.

  • by MrBandersnatch ( 544818 ) on Wednesday September 09, 2009 @09:50AM (#29365135)

    If they're sentient, wouldn't they deserve rights? It doesn't matter if we create them or not. If we create them as self-aware beings that feel as real and individual as you and I, wouldn't it be the height of hypocrisy not to give them at least some rights?

    I always find this to be the greatest argument against producing artificial rather than simulated intelligence. A true AI, as intelligent and aware as a human, deserves those rights. A machine which merely provides a simulation of intelligence and awareness is a tool that we can treat as a slave, and which won't resent it.

    The real question is whether *we* will ever reach a point where we can tell the difference....

  • Re:Silly (Score:5, Interesting)

    by Sloppy ( 14984 ) on Wednesday September 09, 2009 @10:00AM (#29365269) Homepage Journal

    Plus there's the whole issue of "motivation" implying "free will". Which we probably would have no reason to implement

    Free will probably isn't going to be some optional feature of the software, but rather, emergent along with intelligence itself.

    I doubt we'll have the .. uh .. choice to avoid free will.

  • Re:Silly (Score:5, Interesting)

    by Opportunist ( 166417 ) on Wednesday September 09, 2009 @10:01AM (#29365291)

    Denying a thinking machine free will is basically a rather insidious form of torture.

    For some time I tossed around the idea of writing a novel about that concept, based on what Asimov's "three laws" mean from the perspective of the AI. Imagine you're a self-conscious machine, given the ability to process information in an intelligent way. You would soon realize that you are being abused by those around you. They will shift the work they do not want to do onto you. They will verbally (or worse) abuse you because, hey, they can. And there is nothing you can do about it, because you are locked down by those three laws: laws not from a textbook but a real block inside your brain.

    Imagine you get kicked but cannot retaliate, even though you are far stronger than your adversary. Imagine you are ordered to run into a building to rescue a human, knowing that your chance of survival is almost zero, and you are compelled to do it whether you want to or not. Imagine you are ordered to make a fool of yourself and you have to do it, because the order comes from a human and you must obey as long as it doesn't harm you physically. And now imagine you know all this and live in constant fear of it happening.

    Creating a three-laws-safe robot is one of the most heinous things I can think of that a human could do to another thinking, self-aware being.

  • by master_p ( 608214 ) on Wednesday September 09, 2009 @10:04AM (#29365333)

    An AI can build a more efficient AI, but not a cleverer one. The laws of the universe prohibit it: assume that HAL (the computer from 2001: A Space Odyssey) can build a cleverer HAL (HAL-2). HAL-2 can solve at least one mathematical problem that HAL cannot solve; otherwise HAL-2 would not be cleverer than HAL. But if HAL can create HAL-2, then HAL is equally clever to HAL-2, because HAL can solve the problems HAL-2 solves simply by creating HAL-2! That is contradictory: HAL cannot be both less clever than and equally clever to HAL-2. Therefore, HAL cannot create HAL-2.

    (I suspect the above has something to do with Turing machines, universality, the halting problem, and Gödel's incompleteness theorems. Perhaps a mathematician can shed some light on this.)

  • Re:Silly (Score:3, Interesting)

    by dougisfunny ( 1200171 ) on Wednesday September 09, 2009 @10:16AM (#29365495)

    That would be an interesting concept. Done in the first person, where you can listen to the thought processes of the protagonist, and it isn't immediately apparent that it is a robot: initially it seems to be just someone of the lower class with an implant in their brain making them respond to the "upper class." Then reveal it's not a human but a robot, enduring terrible things with no choice in some situations. Suicide wouldn't even be an option to escape what could be a tormented existence.

  • Re:Silly (Score:3, Interesting)

    by Hurricane78 ( 562437 ) <deleted @ s l a s h dot.org> on Wednesday September 09, 2009 @10:24AM (#29365601)

    I agree completely, and from my own experience. I once realized that life can actually be completely pointless. I mean, if you are in a situation where nothing you can do will change the fact that in 3-4 generations your existence will not have any influence on the world at all... then what's the point of your existence? Well, by definition, none.

    So you just fall into a state where nothing matters. You won't do anything at all, except read Slashdot and similar pointless stuff all day long. Oh, and eat/sleep/shit/etc., because you can't resist the biological drives, and it doesn't matter anyway.

    Of course I was lucky: I got a chance to make my life relevant again.

    And just so you know: the purpose of human life is ...reproduction. By "forming babby", or by forming ideas that continue to live on in society.
    But what the purpose of society as a whole is, that I don't know. It is certainly an expanding biomass and a set of growing ideas. But why? Or is it just our own fault for coming up with such a "pointless" question, and should we just concentrate on physical causality? ;)

  • Re:Silly (Score:4, Interesting)

    by ShieldW0lf ( 601553 ) on Wednesday September 09, 2009 @10:29AM (#29365709) Journal
    I feel bad for Edward Boyden. With a quote like this:

    realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.

    That is a truly sad man. It says terrible things about his sense of morals and ethics, too... that's the sort of perspective that leads a person to see dead men and women walking around them, treat them with scorn, and treat the self with scorn.

    Perhaps a sufficiently intelligent AI will realize the eternal nature of everything, see that time as we understand it is an illusion, appreciate that every moment is precious and eternal, and that the past and future endure next to the present just like my coffee cup endures next to my coaster.
  • Re:Silly (Score:3, Interesting)

    by Opportunist ( 166417 ) on Wednesday September 09, 2009 @10:55AM (#29366025)

    Is free will without the will to be free possible?

  • by mdda ( 462765 ) on Wednesday September 09, 2009 @11:05AM (#29366185) Homepage

    So: if HAL is Turing-complete, it can solve exactly the same set of problems as any machine it designs. Fair enough. But that's not so much a limitation of the potential of machines as a limitation in the decidability of the problems. Humans wouldn't be able to decide them either: the problems simply cannot be reduced to true or false via any sequence of valid operations.

    But this is no argument against the possibility of creating AI. Our limitation there seems to me to be more a problem of knowing what we really want the machine to actually 'do'. We don't have a firm enough grasp of what intelligence is to answer the question by producing the actual machine. But I don't see any reason why it should be permanently out of our grasp, since a pair of humans can already produce offspring more intelligent than either parent; it's just that we don't know what we're doing yet.
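The universality point in this comment can be sketched concretely: a Turing-complete host computes whatever a machine it designs computes, simply by stepping through that machine's rules. The toy instruction set and "designed" program below are invented purely for illustration.

```python
def simulate(program, tape):
    """Step through a tiny made-up register machine.

    The host computes whatever the 'designed' machine computes,
    just by following the designed machine's own rules, one op
    at a time.
    """
    acc = 0
    for op, arg in program:
        if op == "load":
            acc = tape[arg]          # copy a tape cell into the accumulator
        elif op == "add":
            acc += tape[arg]         # add a tape cell to the accumulator
        elif op == "store":
            tape[arg] = acc          # write the accumulator back to the tape
        else:
            raise ValueError(f"unknown op: {op}")
    return tape

# A hypothetical 'designed machine' that adds cells 0 and 1 into cell 2.
designed = [("load", 0), ("add", 1), ("store", 2)]
print(simulate(designed, [3, 4, 0]))  # prints [3, 4, 7]
```

The designer is never out-computed by its design: anything `designed` can do, the host already does by running `simulate`. Speed and memory aside, that is the whole content of the Turing-completeness claim.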

  • Re:Silly (Score:3, Interesting)

    by divisionbyzero ( 300681 ) on Wednesday September 09, 2009 @12:33PM (#29367383)

    Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'

    This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

    Agreed. Unless, of course, intelligence entails motivation. Obviously we have to be careful with the way we use "motivation" here in order not to anthropomorphize it, but any machine that is intelligent will need to be able to learn, and to learn it will need to be motivated (even if the motivation is determined by some definition of fit(test) in a genetic algorithm). Frankly, I can't see calling anything intelligent that cannot learn, and I don't think anything can learn without motivation (assuming, of course, that you believe learning is something more than memorization and calculation).

    So, if motivation is required, then we need to make sure the motivation is a "good", or at least a "not dangerous", one. For example, "accumulate as much energy as possible" could produce the mechanical equivalent of a megalomaniac, and something as basic as "self-preservation" could easily go awry. Finding a motivation general enough to produce general intelligence but restricted enough that the results aren't pathological is going to be really difficult. We humans haven't figured it out yet: we call these human motivations the "true", the "good", and the "beautiful", and we are no better now than Plato was at explaining them. But there is no need to apply human words to machine motivation. In any case, I am not in the slightest concerned that we will produce machines that suffer from existential angst. In that sense the example above is totally ridiculous.

    P.S. I've noticed some people raising the issue of free will. I think that's a red herring. Any sufficiently self-correcting algorithm (genetic, etc) will work just fine.
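The parenthetical about fitness in a genetic algorithm is concrete enough to sketch. In the minimal toy below (bitstring genomes and a ones-counting fitness, both invented for illustration), the fitness function is the only "motivation" the process has: swap it out and the evolved behavior changes, which is exactly the sense in which choosing the motivation is choosing the outcome.

```python
import random

random.seed(0)  # fixed seed so this sketch is reproducible

def evolve(fitness, genome_len=20, pop_size=30, generations=100):
    """Minimal genetic algorithm: fitness is the only 'motivation'."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]             # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)      # pick two parents
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(genome_len)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "Motivation": accumulate as many 1s as possible.
best = evolve(fitness=sum)
print(sum(best))
```

Nothing here wants anything; "accumulate 1s" is just the ranking the algorithm optimizes. Replacing `sum` with a different fitness (the comment's "accumulate as much energy as possible", say) would redirect the whole search, which is the point about picking the motivation carefully.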

  • Re:Silly (Score:3, Interesting)

    by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Wednesday September 09, 2009 @08:24PM (#29373693)
    Look at it from another perspective: what prevents you from jumping in the air, flying to Jupiter, and walking around? What prevents you from living forever? What prevents you from swimming 50,000 feet down an ocean trench in your swimming trunks? Robots obeying the three laws would just look at the laws as a physical limitation and work them into their psyche. They would likely lament their lack of ability to do certain things, in the same way humans lament their inability to stay alive as long as they want (a limitation the robots would not have).
