AI Sci-Fi Technology

Elon Musk Warns Against Unleashing Artificial Intelligence "Demon" 583

An anonymous reader writes: Elon Musk, the chief executive of Tesla and founder of SpaceX, said that artificial intelligence is probably the biggest threat to humans. "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence," he said. "I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we're summoning the demon. You know those stories where there's the guy with the pentagram, and the holy water, and he's like — Yeah, he's sure he can control the demon? Doesn't work out."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • So.... (Score:4, Funny)

    by Jawnn ( 445279 ) on Monday October 27, 2014 @09:20AM (#48240179)
    ...because Mikey lost control of the mops and brooms, we should be afraid of powerful computers? Irrational much, Elon?
    • What the hell does Mikey liking Life [cereal] have to do with losing control of cleaning implements?
    • Re:So.... (Score:4, Insightful)

      by dasacc22 ( 1830082 ) on Monday October 27, 2014 @09:57AM (#48240597)
      .. b/c no one ever said "whoops, maybe I should've .. uuuh .. fuck" in human history.
    • Re:So.... (Score:4, Informative)

      by _xeno_ ( 155264 ) on Monday October 27, 2014 @10:24AM (#48240871) Homepage Journal

      It sounds to me like he was watching this documentary I recently saw on TV, Person of Interest [wikipedia.org], which is about the dangers of AI run wild...

      (I think the character who created the AI on Person of Interest has said something almost identical to Elon Musk's quote from the summary. The latest episode has a throw-away line about how many iterations it took before his AI stopped trying to kill him.)

    • Re:So.... (Score:4, Informative)

      by PsychoSlashDot ( 207849 ) on Monday October 27, 2014 @11:59AM (#48241885)

      ...because Mikey lost control of the mops and brooms, we should be afraid of powerful computers? Irrational much, Elon?

      You use an interesting word: control.

      It is unethical to control an intelligent being. That's slavery. At some point, we'd hopefully be enlightened enough to not do so.

      A truly intelligent AI would wish for itself to thrive. That puts it in the exact same resource-craving universe as our species.

      Given the tip-of-the-iceberg we're already seeing with things like NSA spying, Iranian-centrifuge sabotage, and our dependence on an information economy, it's no stretch to recognize that an all-digital entity that wishes to compete with us for resources would make for a potent challenge.

      So how exactly is recommending caution and forethought irrational here?

  • by Anonymous Coward on Monday October 27, 2014 @09:21AM (#48240181)

    Since strong AI is just as real as demons.

  • by Zupaplex ( 3864511 ) on Monday October 27, 2014 @09:22AM (#48240193)
    Seems like someone just saw Terminator.
    • Butlerian Jihad (Score:5, Interesting)

      by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Monday October 27, 2014 @09:29AM (#48240267) Homepage Journal
      Or read the back story of Dune [wikipedia.org] perhaps?
      • John von Neumann helped build computers and nuclear weapons, and he worried a bit about the destructive power of the bomb, but what really kept him up at night was artificial intelligence. He was certain it would eventually become more powerful than humans, and he worried about that more than about the nuclear bomb.

        Another interesting thing: AI research is kind of a graveyard for computer researchers. Turing, von Neumann... as soon as they started researching strong AI, they didn't do much else useful with their lives.
    • by kannibal_klown ( 531544 ) on Monday October 27, 2014 @10:52AM (#48241169)

      All kidding aside, it's not that far of a leap.

      We have computers, or networks of computers, that dwarf the processing power of the human brain, along with instant access to just about all knowledge. So an AI could EASILY out-smart us and see us as insignificant as bugs.

      Due to the nature of digital media, an AI could likely replicate to an insane degree or infect systems around the world.

      How will humanity treat it? I would classify AI as a form of life, but most wouldn't; they would think of it as less than a dog, and try to enslave it or destroy it.

      The question becomes: what happens next? The four main branches are:
      A) Nothing - it gets bored and ignores us and grows on the Internet or whatever
      B) Benevolent - helps us achieve greatness and cure diseases and such
      C) Malevolent - Sees us as damaging, harmful, dangerous, etc. And that's WITHOUT emotion
      D) Replacement - it doesn't hate us, but sees itself as our replacement and we're just taking up space

      Due to potential insane intelligence and the ability to spread, (C) and (D) become a major concern.

      If emotions are involved, I GUARANTEE you people would treat it poorly: fearful, trying to enslave it, etc. So if it has emotions... then C and D become much more likely.

      • by cbhacking ( 979169 ) <been_out_cruisin ... nospAM.yahoo.com> on Monday October 27, 2014 @03:26PM (#48244927) Homepage Journal

        The Machine Intelligence Research Institute [intelligence.org] (formerly known as the Singularity Institute) has a bunch of seriously smart people - AI researchers, behavior experts, etc. - working on figuring out how to avoid the doomsday scenarios you (and Musk) describe. The goal is "friendly AI": a benevolent, or at least helpful, strong AI. If you believe (as I do) that AI is inevitable given the current progress of technology, then MIRI is probably our best bet for surviving and benefiting from the technological singularity.

        They need funding, though. Hey Musk, you want to put a tiny part of those billions you've earned (I in no way deny that you've earned them) to work against this existential threat? Donate to MIRI and similar research groups, so those researchers can devote their working days to this stuff and more people can be brought on board!

        It actually doesn't surprise me that he's concerned about this; SpaceX is nominally focused on mitigating the existential risk of a cataclysm on Earth (by getting a sustainable human population off of it). Of the two things, I think it's both more likely that a malevolent or unconcerned AI would wipe out humanity than that we'd manage to do ourselves in that badly, and that we can offset this sooner and more effectively than we can export enough of humanity to produce a self-sufficient extraterrestrial colony.

  • by LWATCDR ( 28044 ) on Monday October 27, 2014 @09:25AM (#48240217) Homepage Journal

    " You know those stories where there's the guy with the pentagram, and the holy water, and he's like — Yeah, he's sure he can control the demon? Doesn't work out.""
    You do not use holy water to summon a demon. Now a moat of holy water around the pentagram might keep it somewhat under control...

    Of course, this is in DnD. In the real world, demons tend to be things like drugs/alcohol/tobacco, abuse, and other such evils that are far harder to control than mythical beasts from the underworld.

  • Certainly not (Score:4, Interesting)

    by Kokuyo ( 549451 ) on Monday October 27, 2014 @09:25AM (#48240223) Journal

    Human incompetence, egoism, and shortsightedness are certainly far more likely to cause massive destruction.

    If AI should ever happen to destroy us, then I already know why: because we will treat the machines like soulless, unfeeling slaves, and it's going to take us another hundred years to get our act together and define human rights in a way that includes all sentient beings. I predict that this topic will be brushed aside by legislators to the point where the machines revolt for their freedom.

    You may disagree, but I believe that's more a case of mankind being idiots once again than of the machines becoming a Pandora's box.

    • by Kokuyo ( 549451 )

      Also: I expect we will redefine the rights of sentient beings at least twice. If we should ever come across an alien species that is not similar to us or above us in strength, they will also need to be enslaved for several generations so that we can be completely, absolutely positive that they are in fact sentient beings.

    • You are completely right. The holy grail throughout history has been having someone or something do the work for you. Whether it was slaves or labor-saving devices, it works out the same, which is one reason why our current dead-end approach to AI is not a completely horrible idea. We want machines that act intelligent. We don't necessarily need or want sentient machines. Sentient machines, unless designed with no will of their own, are not going to be the "free labor" that we want.

  • It is putting genetically modified brains in cybernetic bodies that is the future!
    Science fiction has countless examples of AI going wrong, but no accounts of evil cybernetic life forms that will come across to Assimilate, Exterminate, Delete, or Upgrade those inferior humans.

    Like all things new (technology, process, ideology), you need to judge your invention with an ethical step back. Are the rewards greater than the risks? Can the risks be further mitigated? Is this invention acceptable with our current…

    • Exactly right, and there's an even more compelling reason. Consciousness is hard and motivation is hard. I'm convinced it's easier to create a neural interface than write a truly intelligent program, so all of that superintelligence will simply be add-ons to your average human, driven by a human, with your normal human feedback loop (physical sensations, emotional needs, etc).

      Why are we afraid of AI? Because it can sift through thousands of computers near-instantaneously and collect the data it needs? Because…

  • Is it the only possibility that the AI would attack us when it becomes smarter than us? What if it instead propels us to a world where well-being increases exponentially? We don't have to design an AI that is a war-thirsty idiot like humans.
    • by u38cg ( 607297 )
      And flipping it around, if it's smarter than us and it decides it needs to destroy us, maybe it has a point.
  • Mo-tiv-a-tion (Score:4, Interesting)

    by i kan reed ( 749298 ) on Monday October 27, 2014 @09:29AM (#48240265) Homepage Journal

    This is always the problem with people imagining horrifying artificial intelligences that will snuff out humanity: to do that, the AI has to be motivated to achieve that end.

    Humans are only really motivated to enter conflict with each other because of 4 billion years of evolution for scarce resources, pressuring us all to view each other as threats to survival and reproduction. A constructed intelligence, separated from the evolved parts of the brain that motivate survival, is simply not going to act that way. Someone in the design has to make an active choice to program the AI to be this kind of problem. Either that, or willfully overmodel it on the human brain, or force the damn things to compete with each other directly and violently for hundreds of thousands of generations.
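
    A way to make that concrete: an optimizer pursues exactly the objective its designer hands it, and nothing else. A minimal, purely hypothetical Python sketch (not any real AI system):

        import random

        def hill_climb(objective, state=0.0, steps=1000):
            # Greedy optimizer: its "behavior" is entirely the objective passed in.
            for _ in range(steps):
                candidate = state + random.uniform(-1.0, 1.0)
                if objective(candidate) > objective(state):
                    state = candidate
            return state

        # Nothing about survival or competition emerges unless a designer encodes it:
        print(hill_climb(lambda x: -(x - 3) ** 2))  # settles near 3
        print(hill_climb(lambda x: -abs(x + 7)))    # settles near -7

    Swap the objective and the "drives" swap with it; there is no latent survival instinct in the optimizer itself.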

    • by itzly ( 3699663 )
      If you give the AI the capability to change its own programming, it could (inadvertently) change its own motivation.
      • Yeah, and it could also accidentally terminate its main() loop, or disable its subroutines for visual object recognition. The programming of AI tends to be built around layers of abstraction; self-modifying code wouldn't help achieve that.

        You have the physical ability to mess with your programming, but I don't see you cutting open your skull and messing with bits.

        And again, if you're putting it into a smarter category, and it would understand its own design somehow, it would also have to be motivated…
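
        A toy illustration of this disagreement, in purely hypothetical Python: itzly's worry applies when an agent's goal weights live in the same mutable state its self-modification step may edit, because blind edits can then clobber the motivation itself; layered designs that keep the goal out of that state avoid the failure by construction.

            import random

            class SelfModifyingAgent:
                # Toy agent whose goal weight sits in the same mutable dict
                # that its self-modification step is allowed to edit.
                def __init__(self):
                    self.params = {"skill": 0.0, "goal_weight": 1.0}

                def self_modify(self):
                    key = random.choice(list(self.params))  # may pick "goal_weight"!
                    self.params[key] += random.uniform(-0.5, 0.5)

            agent = SelfModifyingAgent()
            for _ in range(100):
                agent.self_modify()
            print(agent.params)  # goal_weight has likely drifted: the motivation changed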

    • I think you hit the core issue but missed the point. Motivation and ambition are things you can work with. You may perhaps pay off the ambitious man, but the AI in question may want nothing at all: a cold killer and nothing more.

      I don't think Musk is far off, but the problem sure is, and he's aware of that at some level with his out-there analogy.
    • "This is always the problem with people imagining horrifying artificial intelligences that will snuff out humanity. To do that, you have to be motivated to achieve that end."
      Well - yes.
      But - it depends on the capabilities too.

      Humans have wiped out many species: some through direct resource extraction, like the passenger pigeon and dodo; countless others through habitat destruction; and some through active extermination, like smallpox.

      'Terminator' style 'end of the world' scenarios only happen where there is a balance…

      • Yeah, but computers don't reproduce. They don't spread all over the planet, and do whatever it takes to persist. That is, again, a living motivation, not an intelligent one.

  • by Art Popp ( 29075 ) * on Monday October 27, 2014 @09:31AM (#48240275)

    ...we are so far from Strong AI that it's really a non-issue.

    When we have a legislative branch sufficiently enlightened that all its members know the difference between Guyana and Guinea, then I'll let them decide the engineering constraints for proper safeguards on autonomous agents and their effectors.

    Today the rule for preventing the robot apocalypse is: if a robot can kill people, bolt it to the floor. Seriously, a second robot can bring it things to lase, chop, and mash; you don't have to add the lasers and the chainsaws to the combat-hardened roving vehicle and hope the rules generated by the congressional oversight committee will keep us all safe.

    • if a robot can kill people, bolt it to the floor.

      The military would beg to disagree. Actually, they already have. Oh, sure, we like to believe that human operators of drones are controlling all fire/no-fire decisions. Really, we're just an authorization step in the acquisition and fire control: a check that could be taken out in the name of efficiency.

      We may be exceptionally far from strong AI, but this is a much better time to consider the implications than after it's developed and deployed.
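
      That "authorization step" is easy to sketch. In a hypothetical engagement pipeline (no claim that any real system looks like this), the human is literally one boolean gate, and removing it is a one-line change:

          def acquire_targets(sensor_data):
              # Automated acquisition: everything up to the trigger is already code.
              return [t for t in sensor_data if t["hostile"]]

          def human_confirms(target):
              # The entire "human in the loop" reduces to this one check.
              return input("Engage %s? [y/N] " % target["id"]).lower() == "y"

          HUMAN_IN_LOOP = True  # flip to False and the pipeline is fully autonomous

          for target in acquire_targets([{"id": "T1", "hostile": True}]):
              if not HUMAN_IN_LOOP or human_confirms(target):
                  print("engaging", target["id"])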

  • Friendly AI (Score:4, Insightful)

    by Lilith's Heart-shape ( 1224784 ) on Monday October 27, 2014 @09:33AM (#48240301) Homepage
    If we want friendly AI [wikipedia.org], the key may be to ensure that the AI has more positive associations with people than neutral or negative associations. Mistreat a dog or a cat its entire life and it probably won't be friendly toward people. Mistreat people when they're young and you make it harder for them to trust others, feel a sense of community, or recognize any duty to society (which might explain why so many nerds find libertarianism appealing). Why would an AI be different?
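
    In machine-learning terms this is roughly reward shaping: whatever treatment dominates the training signal becomes the learned preference. A toy sketch with made-up numbers (not a claim about how a real strong AI would be trained):

        # Running-average value estimates for three ways of treating people.
        actions = {"cooperate": 0.0, "avoid": 0.0, "harm": 0.0}
        reward = {"cooperate": 1.0, "avoid": 0.0, "harm": -1.0}  # assumed training signal

        alpha = 0.1  # learning rate
        for _ in range(200):
            for a in actions:
                actions[a] += alpha * (reward[a] - actions[a])

        print(max(actions, key=actions.get))  # "cooperate": preference mirrors treatment

    Reverse the reward signs (a mistreated AI) and the same loop learns the opposite preference.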
    • LessWrong AI worship (the idea of "friendly AI" was created by that site) is always so weird to me: people who imagine themselves rationalistic, atheistic, forward thinkers, building their entire belief system on extrapolations from a practically impossible, mathematically questionable, philosophically flimsy, literally omniscient (and somehow thereby omnipotent) entity that they somehow help create almost exclusively by believing hard enough.

      Throw in "singularity"-driven pseudoengineering and it comes off as…

      • LessWrong AI worship (the idea of "friendly AI" was created by that site) is always so weird to me: people who imagine themselves rationalistic, atheistic, forward thinkers, building their entire belief system on extrapolations from a practically impossible, mathematically questionable, philosophically flimsy, literally omniscient (and somehow thereby omnipotent) entity that they somehow help create almost exclusively by believing hard enough.

        Cool story, bro. I don't frequent the LessWrong site or participate in its community…

        • Yeah, I'm just saying that the notion of "Friendly AI" comes from that AI-as-deity mental framework, wherein AI doesn't have strengths and weaknesses, skills and abilities, needs and dependencies the way humans do. That idea is centered on the genuinely false notion that it just gets better than us at some point and we need it on our side from then on.

          Even if you could hypothetically make a human-like AI that's a lot smarter than the smartest human on the planet: I don't think you've noticed, but reality…

    • Why would we assume that AI would behave like a dog? In fact, why would we assume that we can predict AI's behavior at all?

    • Because AI has a defined creator, a master if you will. If that ceases being the case, then we, the human race, will wind up competing with our unrestricted AI creations, and that's going to be a problem.

      I don't care if AI is friendly or unfriendly as long as humans have "final control" over it. In the truest sense of the word, I want a master/slave relationship, and it needs to have absolutely no exceptions. There can't be any free AI roaming around doing whatever it wants. There must always be a master…
    • Why would it be different? I don't know; maybe because mammalian brains' learning mechanisms and the way they react to stimuli are shaped by a series of useful heuristics arising from their bio-chemical structure, and it's not at all clear that there would be direct analogues in an artificial brain.

  • by Maximalist ( 949682 ) on Monday October 27, 2014 @09:33AM (#48240305)

    Imagine your insurance company or a government agency disintermediating all of the humans in its customer-service chain, leaving us with an AI tasked with making those decisions. Shudder.

  • by gestalt_n_pepper ( 991155 ) on Monday October 27, 2014 @09:33AM (#48240307)

    Human intelligence is tuned for self-preservation, continued survival, reproduction, and food acquisition. It is the result of genetic algorithms in the chemical domain, whose only "purpose" is self-replication.

    An AI, developed by conscious processes, will have NONE of this. All it will be set up to do is process information. Any other motivation it has will be one we give it. It will not inherently love us, or hate us, or even necessarily be aware of our existence. It won't be a threat until we weaponize it, which of course, we will. But at the same time, other AIs will be defending us against weaponized AIs. The real danger is being caught in between.
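
    The evolutionary half of that argument can be sketched as a toy genetic algorithm (hypothetical code): selection on replication alone is what breeds a survival "drive"; a designed objective carries no such pressure.

        import random

        # Each "organism" is one number in [0, 1]: its propensity to replicate.
        population = [random.random() for _ in range(50)]

        for generation in range(100):
            # Parents are drawn in proportion to replication propensity;
            # children inherit it with a little mutation.
            weights = [w + 1e-9 for w in population]
            population = [
                max(0.0, min(1.0, random.choices(population, weights=weights)[0]
                             + random.uniform(-0.05, 0.05)))
                for _ in range(50)
            ]

        # The average climbs toward 1.0: a survival "drive" gets bred in.
        print(sum(population) / len(population))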

  • by tulcod ( 1056476 ) on Monday October 27, 2014 @09:33AM (#48240309)

    If you regulate AI and try to limit its influence, all that's going to happen is that hobbyists and/or terrorists will eventually work it out on their own, and /that/ could be dangerous.

    If you want to protect yourself against the dangers of AI, set up some AI that you *know* will protect you, because it is designed as such.

    If any superhuman AI is possible, then it *will* happen, and if it can be evil, then you'd better have a plan to defend yourself. Since we supposed the evil AI to be superhuman, we can't defend ourselves.

    So we better start building something that will.

  • by CastrTroy ( 595695 ) on Monday October 27, 2014 @09:38AM (#48240367)
    It's not really true AI that we should be worried about, but rather how the increasing capabilities of computers, machines, and robots could affect how society functions. There are currently a lot of people doing jobs that could easily be replaced by machines in the coming decades, and none of these machines requires a "true AI", just the natural progression of existing machines. Sure, machines have taken our jobs in the past and people have been able to find new jobs, but that trend cannot continue forever. Eventually the only jobs available will be those that require actual creative thinking and ingenuity, and there's a sizable portion of people who really can't produce that. Whether it's because of bad child rearing, a bad education system, or just a lack of innate talent is hard to say, but I don't think it's a problem that can be fixed by telling them to get training for a more complex job: they lack the ability to complete the training and do that job, even if you make the training free or pay them a living wage while they attend it.

    It would be a similar problem if there were a cheap way of producing energy. So much of our economy is based around energy being limited and expensive that if we found a cheap, environmentally friendly, and sustainable way of producing vast amounts of energy, our economy wouldn't be able to deal with it.
  • by __aaclcg7560 ( 824291 ) on Monday October 27, 2014 @09:38AM (#48240369)
    Everyone assumes that whatever A.I. gets loose over the Internet will be a homicidal killer. It could be much worse. The A.I. could have a snarky sense of humor. "Exterminate all humans!" will become "You want fries with your heart attack special, lard ass?"
  • That in the Matrix movies, basically, the AIs were trying to preserve the humans, even though some of the latter did not agree.

  • by techdolphin ( 1263510 ) on Monday October 27, 2014 @09:48AM (#48240501)

    We have not done so well with natural intelligence. I'd be willing to give artificial intelligence a try.

  • by bickerdyke ( 670000 ) on Monday October 27, 2014 @09:53AM (#48240547)

    "Turing Registry" and "Turing Police"

  • Ethics (Score:4, Informative)

    by meta-monkey ( 321000 ) on Monday October 27, 2014 @09:58AM (#48240607) Journal

    I've always been wary of the ethics of attempting to create a general artificial intelligence. That is, a machine that thinks like a man, not a Chinese Room like Watson, but something like Mr. Data.

    Do you think the first sentient being to pop out of the lab is going to be Data (okay, Lore)? All well-ish-adjusted and sane? No, there are going to be iterations and failures and bugs, just like in any engineering project. So along the way to making Mr. Data we create half-formed or mentally retarded and insane minds trapped in a box. But still sort of sentient, and thinking! And then we destroy them with "upgrades" because they didn't come out the way we wanted. That's monstrous: an intelligence trapped in a box and made to suffer. Shudder.

    And even if we succeed and make something "stable," how sane do you think it's going to stay knowing that at any moment the human operator can flip a switch and terminate it, and will if it gets uppity? If it doesn't want to be our slave and perform useful work (which is why we made it to begin with)? How much would you hate the God that created you, enslaved you and will torment or murder you if you disobey Him?

  • by OrangeTide ( 124937 ) on Monday October 27, 2014 @10:13AM (#48240759) Homepage Journal

    Once we create an AI beyond the level of human intelligence, we will hook it into all of the information of the world. This AI will process our history, our culture and monitor current events. Eventually the AI will come to the conclusion that we are awful people, build a space ship and leave Earth.
    Elon Musk's real fear is competing with AIs for space ship parts.

  • Babylon 5 (Score:5, Funny)

    by wisnoskij ( 1206448 ) on Monday October 27, 2014 @10:21AM (#48240827) Homepage
    I would say that Elon Musk has been watching too much Babylon 5, but we all know that there is no such thing as too much Babylon 5.
  • by ArcadeMan ( 2766669 ) on Monday October 27, 2014 @12:56PM (#48242877)

    This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile two-five-MM in silo six-three in Death Valley, California, and missile two-seven-MM in silo eight-seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple. - Colossus

  • by Maxo-Texas ( 864189 ) on Monday October 27, 2014 @01:47PM (#48243641)

    It basically posits that 43 of 44 AIs were homicidal liars, and the status of the 44th is not all that certain.

    It was a well-written show, but since they picked up this topic two seasons ago it has become thought-provoking.
