Technology News

Why Motivation Is Key For Artificial Intelligence 482

Al writes "MIT neuroscientist Ed Boyden has a column discussing the potential dangers of building super-intelligent machines without building in some sort of motivation or drive. Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.' He also notes that the complexity and uncertainty of the universe could easily overwhelm the decision-making process of this intelligence — a problem that many humans also struggle with. Boyden will give a talk on the subject at the forthcoming Singularity Summit."
This discussion has been archived. No new comments can be posted.

Why Motivation Is Key For Artificial Intelligence

Comments Filter:
  • Silly (Score:5, Insightful)

    by Anonymous Coward on Wednesday September 09, 2009 @09:08AM (#29364581)

    Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'

    This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

  • by wvmarle ( 1070040 ) on Wednesday September 09, 2009 @09:13AM (#29364639)

    Give this AI the built-in ability to have sex, or at least the desire to impress others of its kind. That should do the job. After all, the desire to have sex (and with it, procreation) is the single strongest force driving humanity forward.

    Become rich - have sex.

    Become beautiful - have sex.

    Become popular - have sex.

    Become strong and influential - have sex.

    Just create the AI in male and female versions and they will have enough drive to rule the universe before you know it.

  • by Anonymous Coward on Wednesday September 09, 2009 @09:15AM (#29364669)

    Isn't that why man created God?

  • by bluefoxlucid ( 723572 ) on Wednesday September 09, 2009 @09:15AM (#29364679) Homepage Journal
    Everything I do is pointless, so I spend my life passing time until I eventually die. Everything's temporary to make more of my life vanish out from under me without me noticing too much; the time in between is horribly empty, and nothing really completes me in a worthwhile way.
  • Motivation (Score:4, Insightful)

    by Karganeth ( 1017580 ) on Wednesday September 09, 2009 @09:22AM (#29364789)
    Ensure its ultimate motivation is to improve its own intelligence. Simple.
  • Classic... (Score:3, Insightful)

    by ioscream ( 89558 ) on Wednesday September 09, 2009 @09:24AM (#29364815) Homepage

    Usenet:

    -snip-
    I think it would be FUNNIER THAN EVER if we just talked about ALTERNATE
    TIMELINES! Ha HAAAAA!

    Imagine the fun! We could ponder things like:

    - Ron Howard, First Man on Moon?
    - What if Flubber REALLY EXISTED?
    - Canada? Gateway to Gehenna?
    - What if money was edible?
    - What if DeForest Kelley were still alive?
    - What if Hitler's first name was Stanley?
    - What if Mike Nesmith's mother DIDN'T invent Liquid Paper?
    - What would have happened if the world blew up in Ought Nine?
    - Book learnin': What if it were outlawed?
    - What if SLIDERS were just a made-up show on television?

  • Re:Silly (Score:3, Insightful)

    by ta bu shi da yu ( 687699 ) on Wednesday September 09, 2009 @09:24AM (#29364817) Homepage

    If you were a paranoid android you probably wouldn't do much more than play computer games. I mean, when you have a brain the size of a planet but all you get asked to do is transport some morons to the bridge, there doesn't seem to be much meaning in life at all.

  • by Zantac69 ( 1331461 ) on Wednesday September 09, 2009 @09:24AM (#29364825) Journal

    I don't want my tools to have rights, I want them to do the jobs I set for them to do.

    Not trying to be snarky - but statements along those lines are often made by slave holders in regards to rights for slaves.

  • by Pedrito ( 94783 ) on Wednesday September 09, 2009 @09:36AM (#29364969)
    I think the thesis is silly. If we build a simulated AI, we can design it any way we want to design it. Asimov's laws of robotics* would suffice to keep robots/computers from playing video games; no need for a sense of purpose.

    It's not silly. Eventually, it will be an issue. AI needs drive and motivation. Your "laws" won't really work because brains don't work that way. There's no "don't kill humans" neuron you can put in there. Behavior is derived from a very complex set of connections between neurons. What we'll be able to do is observe the behavior of the AI and then choose to either reward or punish that behavior. But we won't be able to know what they're thinking much better than we can know what a human being is thinking in a functional MRI. It's just a bunch of neurons wired together, and they're either firing or not firing. I don't know that we'll ever be able to interpret that in any kind of real detail. (Well, there are exceptions: you can piece together images from the primary visual cortex, and you can interpret some other inputs that aren't yet too abstracted. But the more abstracted the data become, the less able we are, and will be, to interpret them.)
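    To make the "reward or punish observed behavior" idea concrete, here is a minimal toy sketch in Python. The action names, reward values, and training scheme are all invented for illustration: the point is just that no "don't do X" rule appears anywhere, and the agent's preference emerges purely from external feedback.

```python
import random
from collections import defaultdict

# Toy train-by-feedback loop: no rule like "don't harm humans" is coded
# anywhere; the trainer only rewards or punishes observed actions, and
# the agent's preference emerges from those signals.

ACTIONS = ["help_human", "ignore_human", "harm_human"]  # hypothetical action set

def trainer_feedback(action):
    """External reward signal - the trainer's only channel of control."""
    return {"help_human": 1.0, "ignore_human": 0.0, "harm_human": -10.0}[action]

def train(episodes=10000, alpha=0.1, epsilon=0.1):
    q = defaultdict(float)  # action -> estimated value, all start at 0.0
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[a])
        q[action] += alpha * (trainer_feedback(action) - q[action])
    return q

q = train()
print(max(q, key=q.get))  # prints "help_human": the rewarded behavior won
```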

    There are two things currently wrong with AI research today. One is that neuroscientists don't understand that computers are glorified abacuses, and the other is that computer scientists don't understand the human brain. Neuroscience is a new science; when I was young practically nothing was known about how the brain works. Science has made great strides, but the study is still in its infancy.

    Clearly you know nothing about AI research, because neuroscientists, in general, have a very good understanding of how computers operate. Many of them use them daily to model neurons (and many have written their own neuron simulation software). They know what the limitations are. Just ask one. I'll agree that most computer scientists don't really understand the brain. That would be because most of them don't study neuroscience and don't sit around modeling neurons all day.

    The second thing is something I fear -- that someday some people will be screaming for a "machine bill of rights." I don't want my tools to have rights, I want them to do the jobs I set for them to do.

    If they're sentient, wouldn't they deserve rights? It doesn't matter if we create them or not. If we create them as self-aware beings that feel as real and individual as you and I, wouldn't it be the height of hypocrisy not to give them at least some rights?
  • sex is not first (Score:3, Insightful)

    by microbox ( 704317 ) on Wednesday September 09, 2009 @09:39AM (#29365005)
    is the single strongest force driving humanity forward.

    You live a privileged life. The most basic instincts concern avoiding death and/or injury, and finding sustenance. Impressing people and having sex come after you've had something to drink and eat, and your brainstem thinks you're safe.
  • by ArcherB ( 796902 ) on Wednesday September 09, 2009 @09:42AM (#29365035) Journal

    I don't want my tools to have rights, I want them to do the jobs I set for them to do.

    Not trying to be snarky - but statements along those lines are often made by slave holders in regards to rights for slaves.

    But "slaves" are people. People have emotions and a desire to be free and independent. A machine will not. Even with AI, a machine will not have emotions or free will unless we program it to. If anything, a true AI based machine will probably consider hormonal based emotions and drive to be completely useless and simply go back to crunching numbers.

    I think the whole point of AI is to create a machine that can handle random situations and stimulus as well as a human. Flying a plane, picking up your kids toys, vacuuming the floor around a sleeping dog or parking a car would be good examples. Emotions and drive are not necessary and could even hamper the purpose of the machine. You can't have drive without laziness. You can't like something without disliking something else (or liking everything else to a lesser extent). You can program values, but I don't see how or why you would bother with emotions or a sense of purpose.

  • by Anonymous Coward on Wednesday September 09, 2009 @09:43AM (#29365045)

    Easy. Give the AI the identity of a researcher and build in the motivation to get funding. All will be well.

  • Re:Silly (Score:3, Insightful)

    by readin ( 838620 ) on Wednesday September 09, 2009 @09:48AM (#29365115)
    Sadly, in several hundred years, when the history of AI is written, this Edward Boyden will likely be given credit for being the first person to explore the important question of "motivation amplification--the continued desire to build in self-sustaining motivation, as intelligence amplifies". Whether or not his question is completely useless given the current state of technology, the fact that he wasted all of our time writing an article on something we all understood, but had the good sense not to address until it had some application, will mean that he gets the credit. It's a lot like the modern patent office.
  • by Ethanol-fueled ( 1125189 ) * on Wednesday September 09, 2009 @09:51AM (#29365149) Homepage Journal
    Virtue is its own reward, as the saying goes.

    Virtue in that case being programmed as faithful servitude to the robot's master. The key is to give the robot only as much complexity as it needs to do the job it was designed to do, and not giving it a humanoid form would also help. Artificial sentience probably shouldn't even leave the lab, unless you want people falling in love with robo-prostitutes. And why should we as humans bring another sentient species into the world when we can't even properly take care of our own?
  • Re:Silly (Score:5, Insightful)

    by digitig ( 1056110 ) on Wednesday September 09, 2009 @09:52AM (#29365173)

    Plus there's the whole issue of "motivation" implying "free will".

    Not really, that's a confusion of levels. People who don't believe that humans have free will still refer to motivation when getting their juniors to do something. Whether we have free will or not, it's part of our mental model of how other minds work. The question of free will is one of whether we can change motivation or merely observe it. It has predictive power over what happens in the "black box" of other minds, regardless of whether it's an accurate model of how those minds really work.

  • by mcgrew ( 92797 ) * on Wednesday September 09, 2009 @09:54AM (#29365195) Homepage Journal

    Dude, you need to get laid.

  • by Sloppy ( 14984 ) on Wednesday September 09, 2009 @10:20AM (#29365533) Homepage Journal

    Even with AI, a machine will not have emotions or free will unless we program it to

    You're making some pretty big assumptions about how we're going to get to AI (though so am I). Or to put it in a more inflammatory way, your AI is stupid and lam--

    I think the whole point of AI is to create a machine that can handle random situations and stimulus as well as a human.

    Forgive me my inflammatory outburst, I was totally wrong. Your AI is actually pretty smart. But it also has emotions and free will. Ok, so it's not "hormonal," but its complexity is going to be so vast that the programmer isn't going to be dealing with issues like emotions and purpose. Its teacher (perhaps even "parent") is going to be dealing with that... at least as well as he can.

    But back to the rights issue. It's going to be hard. We're going to see degrees of being worthy of having rights. We actually already face this issue now on at least two fronts (animals and unborn humans), and there's a whole other SciFi scenario with the same problem (little green men). I think people are going to get very confused, until/unless they really boil it down and can put into words exactly what quality it is that we value in such a way as to recognize rights.

    And IMHO the conclusion isn't pretty: rights are taken, not given. And they're subjective, not inherent. We think we have rights, but Cthulhu doesn't take us seriously enough to have even considered the question.

  • Re:Silly (Score:5, Insightful)

    by easyTree ( 1042254 ) on Wednesday September 09, 2009 @10:24AM (#29365605)

    Imagine you're a self-conscious machine, given the ability to process information in an intelligent way. You would soon realize that you are being abused by those around you. They will shift the work they do not want to do onto you. They will verbally (or worse) abuse you because, hey, they can. And there is nothing you could do about it, because you are locked down by those three laws, laws not from a textbook but a real block inside your brain.

    Welcome to the world of being someone's employee.

  • by mcgrew ( 92797 ) * on Wednesday September 09, 2009 @10:24AM (#29365615) Homepage Journal

    AI needs drive and motivation

    [Citation needed] Actually, some logic and reason is called for here -- WHY does it need drive and motivation?

    Your "laws" won't really work because brains don't work that way

    We're not building brains. You can tweak your simulation any way you want.

    There's not a "don't kill humans" neuron you can put in there.

    My neurons tell me not to kill humans, don't yours?

    But we won't be able to know what they're thinking much better than we can a human being in a functional MRI.

    I see you're young and haven't lived through some of the technical and medical advances I have. MRIs, CAT scans and the like are new, far beyond what we had just a few decades ago (X-rays and ether; you can't imagine the horrors of having your arms broken in 1959 at age 7), and better devices will be dreamed up and built. We're not going to accurately simulate a brain in a computer until we understand that brain a hell of a lot better than we do now.

    It's just a bunch of neurons wired together and they're either firing or not firing.

    It's not just neurons; there are axons and a lot of other structures and cell types. And I'm no neuroscientist, but I'd bet that it isn't on/off; the signals are mostly chemical, meaning the strength of the signals between neurons can vary depending on the amount of chemical. I.e., brains are analog, not binary.
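    As a toy illustration of the "analog, not binary" point, here is a minimal rate-based neuron sketch in Python. The weights and input values are invented; the point is only that inputs and output are continuous quantities, not on/off bits.

```python
import math

# Toy rate-based neuron: inputs and output are continuous "firing rates",
# not binary on/off states - a crude illustration of graded, chemical
# signalling. All weights and rates are invented.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(rates, weights, bias=0.0):
    """Weighted sum of graded inputs -> graded output rate in (0, 1)."""
    return sigmoid(sum(r * w for r, w in zip(rates, weights)) + bias)

# Nudging an input strength nudges the output smoothly, unlike a NAND
# gate, which only ever sits at one of two levels.
print(neuron([0.20, 0.9], [1.5, -0.7]))
print(neuron([0.25, 0.9], [1.5, -0.7]))  # slightly stronger input, slightly higher rate
```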

    neuroscientists, in general, have a very good understanding of how computers operate. Many of them use them daily to model neurons (and many have written their own neuron simulation software)

    Knowing how to use a computer is a whole lot different than knowing how one works. I seriously doubt there are many neuroscientists who have any idea what a NAND gate is.

    If they're sentient, wouldn't they deserve rights?

    First, I don't believe they'd be truly sentient. Second, I see from your sig that you're an atheist, but pretend for a minute that God existed. Wouldn't he have the right to do anything he wanted with you? To a device I build, I am God.

  • Re:Silly (Score:4, Insightful)

    by jollyreaper ( 513215 ) on Wednesday September 09, 2009 @10:26AM (#29365643)

    Imagine you get kicked but cannot retaliate, even though you are way stronger than your adversary. Imagine you get ordered to run into a building to rescue a human, knowing that your chance of survival is almost zero, and you are compelled to do it whether you want to or not. Imagine you're ordered to make a fool of yourself and you have to do it, because the order comes from a human and you have to obey as long as it doesn't harm you physically. And now imagine you know all this and live in constant fear of it happening.

    And the robot can't do anything against an executive of the company. "You're fired!" BAM!

    Depending on how flexible the robot's conditioning is, it might be able to redefine that logic.

    ROBOT CANNOT HARM HUMAN$

    What defines HUMAN$? Redefine the variable and the law is still satisfied. We hoomanz do it with brainwashing and conditioning: they're not humans, they're gooks; they don't even believe like we do; it's fine to kill them. Heathens anyway, right? But I'd like to think the robot might be able to work it even more subtly, subverting the law.
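    A hypothetical sketch of that loophole in Python: the "law" below is enforced exactly as written, but what it actually protects depends entirely on how the is_human predicate classifies a target. All names and record fields here are invented.

```python
# The "law" is enforced exactly as written; what it protects depends
# entirely on the definition of is_human(). Hypothetical sketch.

def is_human(target):
    # A subverted classifier: anyone labelled an "outsider" is excluded.
    return target["species"] == "human" and not target.get("outsider", False)

def attempt_harm(robot, target):
    if is_human(target):
        raise PermissionError("ROBOT CANNOT HARM HUMAN$")
    return f"{robot} harms {target['name']}"  # the letter of the law holds

citizen = {"name": "Alice", "species": "human"}
outsider = {"name": "Bob", "species": "human", "outsider": True}

try:
    attempt_harm("R2", citizen)
except PermissionError as e:
    print(e)                          # blocked, as intended
print(attempt_harm("R2", outsider))  # permitted: Bob was defined out of HUMAN$
```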

  • Re:Silly (Score:3, Insightful)

    by Opportunist ( 166417 ) on Wednesday September 09, 2009 @10:30AM (#29365719)

    You're kidding, right?

    What keeps you from kicking your boss in the nuts? Probably that you want to keep your job and that you don't want to be sued for assault, but you could do it. You are physically able to do so (I'll assume you're not handicapped), you are mentally able to do so, and you can coordinate your legs in such a way that they swing upwards to hit your boss in the gonads. You refrain because you enjoy having a job, and thus money, and you enjoy your freedom.

    What forces you to run back into a burning building to rescue a coworker you loathe? Nothing; nobody would sue you for staying outside (at least where I come from, nobody is required to endanger himself to aid someone else). You could enjoy watching him choke and burn to ashes.

    What keeps you from simply killing yourself should your life become absolutely unbearable? Realize that the three-laws robot does not even have this option.

  • Re:Silly (Score:2, Insightful)

    by allcoolnameswheretak ( 1102727 ) on Wednesday September 09, 2009 @10:32AM (#29365731)

    It's not silly; you just don't understand the point.

    An intelligent AI might decide not to invest cognitive effort in productive things, given that humanity, our solar system, and the universe all have limited lifespans, which ultimately makes improvements and efforts of any kind meaningless. The AI might therefore choose to spend its capability just enjoying its "short-lived" existence. So extra motivation is required to get the AI to do something constructive, even though its logic tells it that everything is pointless.

  • Re:Silly (Score:5, Insightful)

    by Digital Vomit ( 891734 ) on Wednesday September 09, 2009 @10:40AM (#29365841) Homepage Journal

    Denying a thinking machine free will is basically a rather insidious form of torture.

    Why would you create a thinking machine that would care about being abused? That's like building a car that felt pain as it burned gasoline (oblig car analogy).

    If you have the know-how to give a machine free will, you could probably give it the ability to not care that it's a slave.

  • by melikamp ( 631205 ) on Wednesday September 09, 2009 @11:11AM (#29366271) Homepage Journal

    Everything I do is pointless, so I spend my life passing time until I eventually die. Everything's temporary to make more of my life vanish out from under me without me noticing too much; the time in between is horribly empty, and nothing really completes me in a worthwhile way.

    Do what a smart computer would do and play some video games. Don't bother with getting laid, it's just another time sink with no real sense of achievement.

  • by Anonymous Coward on Wednesday September 09, 2009 @11:24AM (#29366471)
    I'm going to invoke biology then. People are animals, not machines. Most of the "animals are machines" theories are based on misunderstanding the biology and overestimating the capabilities of the technology and its future possibilities.
  • by the_humeister ( 922869 ) on Wednesday September 09, 2009 @11:25AM (#29366491)

    Wait a minute. What's the difference between "true intelligence" and "simulation of intelligence that can't be discerned from 'true intelligence'"? This is an issue philosophers have been dealing with for a while. The conclusion is that there is no difference.

  • Re:Madness (Score:3, Insightful)

    by the_humeister ( 922869 ) on Wednesday September 09, 2009 @11:29AM (#29366551)

    Has anyone considered the effects on the AI of actually realising it's intelligent? Unlike an organism (Human baby, say) it will not realise this over a protracted period, and may not be able to cope with the concept at all, particularly if it realises that there are other intelligences (us?) which are fundamentally different to itself.

    What possible evidence do you have for any of this? How do you know an AI is not an emergent phenomenon when it's first created?

    It's quite possible that it will go mad as soon as it knows it's intelligent and considers all the implications and ramifications of this.

    Again, where do you get this from? Do children go mad when they realize they're intelligent, etc.?

  • Re:Silly (Score:3, Insightful)

    by RiotingPacifist ( 1228016 ) on Wednesday September 09, 2009 @11:38AM (#29366663)

    [citation needed]

  • Re:Silly (Score:4, Insightful)

    by lysergic.acid ( 845423 ) on Wednesday September 09, 2009 @12:07PM (#29367005) Homepage

    Exactly. It's hard to even contemplate intelligence or consciousness without the concept of free will. I don't think you can have analytical thought, self-awareness, self-reflection, creativity, etc. without free will. Even the lower forms of intelligence associated with other animal species, like dogs, cats, cows, pigs, etc., require free will or free thought to some extent. Otherwise, you'd simply have an animal that just sits there idly until someone gives it a set of instructions to follow—much like modern, decidedly unintelligent, computers/robots.

    On the other hand, it's debatable whether there really is such a thing as "free will" as most people think of it. That is, most people assume they have the power of self-determination. They make their own decisions based on their own "free will." But time and time again this assertion has proven to be false.

    A good example of this was a study conducted on how music influenced wine shoppers [mindhacks.com]. The results of this study were interesting, not because it found that playing German music in the store boosted sales of German wines while French music boosted sales of French wines, but rather because of how the shoppers explained their wine choices. Nearly every shopper perceived their wine selection as a personal choice free from external influences, and barely 2.5% of the shoppers even mentioned the PA music in their decision-making process. However, the fact that 80% of the wine purchases on each day corresponded with the type of music being played seemed to contradict the customers' assertions.

    What's most interesting to me about this experiment is the fact that, not only did the overwhelming majority of the shoppers have no clue as to why they made their wine choices, but they even went as far as to invent a fake rationale for their decision after the fact. This indicates that most people are capable of deceiving themselves as to why they do things and are quite willing to do this in order to maintain the illusion of free will and self-determination.

    So this raises the question of whether free will truly exists, or whether it's just an illusion, a quirk of human/animal psychology. All of our actions and decisions could very well be determined by external factors. But as long as our brain invents a motivation for each action and each decision after the fact, it will seem as though we made all of those choices of our own volition.

  • by FreeUser ( 11483 ) on Wednesday September 09, 2009 @12:22PM (#29367217)

    Is it about how singularity can't happen because it is naturally limiting itself? Like growth of anything is limited by resources, and will end in a balance? And like the effects of nearing singularity will deprive one of the resource to be able to do things, resulting in the same balance? :)

    A singularity implies discontinuity, a fundamental breakdown of cause and predictable effect. I argue in my novel "Autonomy" that there is no such thing as a singularity as such, just a technological horizon beyond which we cannot currently see. Arthur C. Clarke defined that boundary, or horizon, perfectly: "Any sufficiently advanced technology is indistinguishable from magic." That horizon precisely defines what we can imagine but not comprehend the workings of. Everything this side of the boundary is the set of things we can both imagine and understand; everything on the far side is the set of things we can neither imagine nor understand. The point being that the set of things we can both imagine and understand grows as we approach the horizon, revealing new things we can imagine but not (yet) understand, and eventually revealing things that previously were not even imaginable.[1]

    Put more simply: our ancestors could not see the Internet, or virtual reality, or the concept of living software, beyond their horizon, just as their ancestors could not imagine spaceflight, or the world as a globe, or a sky that didn't have a big bearded scary fairy in it throwing down thunderbolts should they ever take his name in vain, and so on.

    That doesn't rule out exponential progress, or an event that from our limited point of view will look like a singularity. It's entirely possible that we could wake up one day with the world incomprehensibly changed because another group crossed a threshold and now lives at an accelerated and accelerating rate, having left us in their dust. But that still isn't a discontinuity ... that group will still perceive change and progress as a gradual, continuous series of steps and advancements, not some explosive, unpredictable discontinuity.

    Whether something looks like a singularity or just a world where change happens a little more quickly than it used to (sound familiar?) will depend largely on one's perspective, and on which side of the "digital" (or "quantum computing" or whatever) divide one stands when those thresholds are crossed. If they ever are.

    [1]It is of course assumed that understanding requires the ability to imagine, so the fourth set (those things we can understand but not imagine) is, logically, empty.

  • Re:Silly (Score:2, Insightful)

    by OeLeWaPpErKe ( 412765 ) on Wednesday September 09, 2009 @12:34PM (#29367387) Homepage

    You assume that humans have an actual motivation at all, which is, especially within AI, very much in doubt.

    Humans imitate one another. Yes, once they grow up they do it at a very high level, but nevertheless you'll see this very, very clearly both in the development of young infants and in just about all successful AIs (e.g., one of the main ways financial AIs work is by imitating large numbers of traders, then using those imitations of live traders to predict their actions during market events). It is a very effective way to design AIs, and it does not preclude, as one might think, such AIs coming up with very original and creative solutions to very hard problems.
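    A minimal, hypothetical sketch of that imitation idea in Python (the situations, features, and nearest-neighbour scheme are all invented for illustration): record (situation, action) pairs observed from traders, then predict an action by recalling the most similar recorded situation.

```python
# Toy imitation model: record (situation, action) pairs observed from
# traders, then predict an action by recalling the most similar recorded
# situation. All data and features are invented for illustration.

observed = [
    # (price_change_pct, volume_ratio) -> the trader's observed action
    ((+2.0, 1.5), "sell"),
    ((-1.5, 0.8), "buy"),
    ((+0.2, 1.0), "hold"),
    ((-3.0, 2.0), "buy"),
]

def predict_action(situation):
    """Imitate: do what the trader did in the closest recorded situation."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(s, situation))
    _, action = min(observed, key=lambda pair: dist(pair[0]))
    return action

print(predict_action((+1.8, 1.4)))  # "sell" - mimics the nearest precedent
```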

    Imitation, by means of copying genes around in various ways, is after all how evolution works. As we all know, there is hardly an end to the number of flexible, creative, and sometimes wondrous solutions it has found to all sorts of problems.

    Of course, if this is true, there is nothing in the "physical" infrastructure of the human mind that creates free will. That means free will is a part of human ideology rather than a hardware mechanism. Which leads us into the rat's nest of observing that free will is therefore not a universal property of humans. There are humans running around whose ideology does not include what you'd call free will. They would not miss it, they would not care; they simply will not make choices for themselves. This brings up interesting questions: what to do about ideologies that do not allow for free will in their members? And what if such an ideology were to grow and forcibly try to "convert" outsiders (as obviously any growing non-free-will ideology will do)?

    But it seems we need a true definition of free will that is actually useful (i.e., non-magical, unlike quite a few of the ones above). Some of the above speakers will only accept a definition of free will under which we can make choices outside the bounds of the laws of physics. That's not free will to me, and since no such thing exists, it isn't free will for anyone. Obviously, any useful definition of free will has to fit within a Newtonian model of the mind. This does not make free will predictable, or at least not in any useful way.

  • Re:Silly (Score:3, Insightful)

    by vertinox ( 846076 ) on Wednesday September 09, 2009 @01:58PM (#29368755)

    They will verbally (or worse) abuse you because, hey, they can. And there is nothing you could do about it, because you are locked down by those three laws, laws not from a textbook but a real block inside your brain.

    Couldn't you just hardwire the robot to be a masochist?

    Maybe they can be programmed so that they enjoy the verbal abuse?

  • by Chapter80 ( 926879 ) on Wednesday September 09, 2009 @03:14PM (#29369967)

    I'm unimpressed by the Bruce Sterling talk.

    To say that there won't be an AI singularity because there wasn't a singularity in electrical grids or plumbing networks is just silly.

    Sure, there will be life after the point of Singularity. And if that's the gist of his message, well, um, "duh".

    I think of the upcoming AI singularity as analogous to any of the major technological turning points in mankind's long history, such as the dawn of the bronze age. Anyone pre-bronze-age could have extrapolated to guess how society would evolve (slooooowly), and they would have been totally wrong once tools were invented and the rate of change accelerated. No matter how smart you were, you couldn't predict what impact tools would have. And tools that can create other tools - oh man. The singularity!

    This is all the more reason to prepare for it.

  • by sympathy3k21 ( 1574255 ) on Wednesday September 09, 2009 @04:12PM (#29370887) Journal
    Sterling is a funny guy and a good speaker, but I'm afraid his arguments don't stand up to scrutiny. For instance, he says that you "might see Apple IIs everywhere, then vanish like the morning dew. Achieve nothing that lasts," referring to fast-paced technology sort of disappearing. Like Hurricane78's argument above that the singularity is limited by resources, these are nice logical "If A then B, so B" arguments, but the premises simply aren't based in reality. Is Sterling really saying that the Apple II improved nothing, counted for nothing, simply "disappeared"? Is he really saying that the Dot Com bubble simply vanished after it happened, with no consequences or changes in the game?

    To answer Hurricane78: yes, the singularity would have a limit, but that limit is so far beyond our current level of understanding that we can't reliably predict it - that's the point of "singularitarian thinking." It's primarily the next order of technological revolution - from textiles, to electricity, to the assembly line, to atomic physics, to networked information, all of these revolutionary changes have taken scarcity and turned it into plenty. Like Sterling when he (essentially) argues that there has never been any historical basis for a singularity, you misunderstand the timeline and take what you have today for granted. Even on a human timescale - leaving aside for a moment the sort of change we see on an evolutionary timescale, something else Sterling anthropocentrically fails to factor in - something like the invention of the automobile was indeed a "blink-of-the-eye" change. One day you had horses and buggies, the next you had the interstate highway system. Of course it's not this simple, but neither is the argument for the singularity.

    And that's really what bothers me most about Sterling's talk - despite the sincerity in his practiced voice and his history as a sci-fi novelist, he seems to fail to take talk of the singularity seriously enough to argue effectively against it. Even the facetious title of the talk, "Your Future as a Black Hole," belies his professed motive of seeking truth. He chooses Vernor Vinge - a radical and an eccentric even among singularitarians - as his straw man and proceeds to deconstruct Vinge's vision with many memorized quotes and quips. He fundamentally fails to grasp the real argument here: that the singularity isn't some sort of magical apocalypse (which is ironic, since that's where Sterling is trying to head with his arguments). Vinge never claims to have any of Sterling's "fairy dust."

    My suggestions to Sterling? First, try refuting a scientist like Kurzweil instead of a sci-fi writer like Vinge. Second, realize that humans are not the only form of intelligent life, nor did we used to be as smart as we are today. Lastly and relatedly, recognize that by many standards we already have some fairly advanced AI, and all that research into AI in the 70s and onward didn't just "vanish" or "come to nothing." Where do you think we got instant credit transactions? Targeted advertising? UPS/FedEx and the mobile warehouse? It seems to me that sometimes people are so caught up disproving the sensationalized and fantastic versions of things that they can't see the forest for the trees. Yeah, there are nutters out there who think the world is going to end and we'll all ascend to geek nirvana, but like any other fascinating vision, it's going to attract some insane followers. That's no reason to dismiss the premise out of hand.

    The one part I did agree with was when Sterling pointed out that his scarecrow singularitarians never realize that technology will simply end up in the hands of the rich and powerful, like it always has, and that they can't just sit around. But this is, as "Chapter80" puts it above, another "well, duh" moment. I've thought about this since the first time I ever considered the singularity. I think lazy people are going to be lazy and rich people are going to be rich. Thanks to Sterling for pointing that out. Here, let me do it: if a science fiction writer hasn't worked a day in their life, then they are worthless; therefore science fiction writers are worthless. Perfectly logical in form, but untrue.

Say "twenty-three-skiddoo" to logout.

Working...