AI Sci-Fi Technology

What AI Experts Think About the Existential Risk of AI

DaveS7 writes: There's been no shortage of high-profile people weighing in on the subject of AI lately. We've heard warnings from Elon Musk, Bill Gates, and Stephen Hawking, while Woz seems to have a more ambivalent opinion on the subject. The Epoch Times has compiled a list of academics in the field of AI research who are offering their own opinions. From the article: "A 2014 survey conducted by Vincent Müller and Nick Bostrom of 170 of the leading experts in the field found that a full 18 percent believe that if a machine super-intelligence did emerge, it would unleash an 'existential catastrophe' on humanity. A further 13 percent said that advanced AI would be a net negative for humans, and only a slight majority said it would be a net positive."
  • by Garridan ( 597129 ) on Sunday May 24, 2015 @04:37PM (#49764897)
    The summary really emphasizes the minority opinion, "and only a slight majority said it would be a net positive." As if "only a slight majority" is not the majority opinion.
    • by Oligonicella ( 659917 ) on Sunday May 24, 2015 @04:56PM (#49764973)

      Indeed, emphasis in reporting. To break it down:

      Extremely good - 24%
      On balance good - 28%
      Neutral - 17%
      On balance bad - 13%
      Extremely bad - 18%

      So, over half good, less than a third bad. Sure sounds different.
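      To make the arithmetic concrete, here is a quick tally of the figures above (a minimal sketch; it assumes the five categories are exhaustive and uses the rounded percentages as quoted):

```python
# Survey breakdown quoted above (percent of respondents)
breakdown = {
    "Extremely good": 24,
    "On balance good": 28,
    "Neutral": 17,
    "On balance bad": 13,
    "Extremely bad": 18,
}

good = breakdown["Extremely good"] + breakdown["On balance good"]  # 52%
bad = breakdown["On balance bad"] + breakdown["Extremely bad"]     # 31%

print(f"Net positive: {good}%, net negative: {bad}%, neutral: {breakdown['Neutral']}%")
# Net positive: 52%, net negative: 31%, neutral: 17%
# i.e. just over half good, just under a third bad.
```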

      • by RDW ( 41497 ) on Sunday May 24, 2015 @06:04PM (#49765271)

        'Well ... in the unlikely event of it going seriously wrong, it ... wouldn't just blow up the university, sir'

        'What would it blow up, pray?'

        'Er ... everything, sir.'

        'Everything there is, you mean?'

        'Within a radius of about fifty thousand miles out into space, sir, yes. According to HEX it'd happen instantaneously. We wouldn't even know about it.'

        'And the odds of this are ... ?'

        'About fifty to one, sir.'

        The wizards relaxed.

        'That's pretty safe. I wouldn't bet on a horse at those odds,' said the Senior Wrangler.

        -Terry Pratchett et al., The Science of Discworld

      • by tmosley ( 996283 ) on Sunday May 24, 2015 @08:08PM (#49765727)
        58% of respondents are fucking retarded. AI either kills us all (to manufacture the maximum number of paperclips), or turns this world into heaven for humanity. The probability space between those two extremes is way under a single percentage point.
        • by martas ( 1439879 )
          Or AI never reaches the level of godlike power you sci-fi aficionados seem certain it will based on bullshit quasi-ontological arguments, and it remains yet another type of technology that affects our lives but does not dominate it.
    • by hey! ( 33014 ) on Sunday May 24, 2015 @05:06PM (#49765025) Homepage Journal

      Spin, sure, but it's a waay bigger minority than I expected. I'd even say shockingly large.

      The genius of Asimov's three laws is that he started by laying out rules that on the face of it rule out the old "robot run amok" stories. He then would write, if not a "run amok" story, one where the implications aren't what you'd expect. I think the implications of an AI that surpasses natural human intelligence are beyond human intelligence to predict, even if we attempt to build strict rules into that AI.

      One thing I do believe is that such a development would fundamentally alter human society, provided that the AI was comparably versatile to human intelligence. It's no big deal if an AI is smarter than people at chess; if it's smarter than people at everyday things, plus engineering, business, art and literature, then people will have to reassess the value of human life. Or maybe ask the AI what would give their lives meaning.

      • Spin, sure, but it's a waay bigger minority than I expected. I'd even say shockingly large.

        Shockingly? I think it's good that we have experts in a field developing high-impact tools who are pessimistic about the uses of those tools. If 100% were like "yeah guys no sweat, we got this!" then I would be more concerned. The result of this poll, in my mind, is that we have a healthy subset who are going to be actively working towards making AI safe.

      • Why would you build strict rules akin to his Laws into the AI? You don't build a strict rule, you build a "phone home and ask" rule. There may be a need for something analogous to the First Law, or its corollary, the Zeroth Law; but the Third Law as a strict rule equal to the others is just stupid. The major point of building robots is so that humans don't have to do dangerous things; this means that a lot of them are supposed to die. The Second Law is even dumber. Robots will be somebody's property,

      • by TapeCutter ( 624760 ) on Sunday May 24, 2015 @06:27PM (#49765387) Journal
        Asimov's three laws are a metaphor that says you can't codify morality; AI is the vehicle he used to make that point.
        • by hey! ( 33014 )

          I wouldn't call it a metaphor, nor would I say that Asimov's point is that you can't codify morality. His point is more subtle: a code of morality, even a simple one, doesn't necessarily imply what we think it does. It's a very rabbinical kind of point.

    • by Giant Electronic Bra ( 1229876 ) on Sunday May 24, 2015 @06:27PM (#49765389)

      Everyone is missing the key thing here. The question asked was "if a machine superintelligence did emerge", which is like asking "if the LHC produced a black hole..." There's nobody credible in AI who believes we have the slightest clue how to build a general AI, let alone one that is 'superintelligent'. Since we lack even basic concepts about how intelligence actually works we're like stone age man worrying about the atomic bomb. Sure, if a superintelligent AI emerged we might be in trouble, but nobody is trying to make one, nobody knows how to make one, nobody has any hardware that there is any reason to believe is within several orders of magnitude of being able to run one, etc.

      So, what all of these people are talking about is something hugely speculative that is utterly disconnected from the sort of 'machine intelligence' that we ARE working on. There are several forms of what might fall into this category (there's really no precise definition), but none of them are really even close to being about generalized intelligence. The closest might be multi-purpose machine-learning and reasoning systems like 'Watson', but if you actually look at what their capabilities are, they're about as intelligent as a flatworm, hardly anything to be concerned about. Nor do they contain any of the sort of capabilities that living systems do. They don't have intention, they don't form goals, or pose problems for themselves. They don't have even a representation of the existence of their own minds. They literally cannot even think about themselves or reason about themselves because they don't even know they exist. Beyond that we are so far from knowing how to add that capability that we know nothing about how to do so, zero, nothing.

      The final analysis is that what these people are being asked about is virtually a fantasy. They might as well be commenting on an alien invasion. This is something that probably won't ever come to pass at all, and if it does it will be long past our time. It's fun to think about, but the alarmism is ridiculous. In fact I don't see anything in the article that even implies any of the AI experts think it's LIKELY that a superintelligent AI will ever exist; it was simply posited as a given in the question.

    • Spin? For every two or three members of the profession who consider their work a net positive, there's one who considers it an existential threat to all humanity, and you're complaining that the 52% who think it will be overall good are being called a "slight majority" instead of just a majority.

      Not that we have any choice but to continue trying to build an AI.

  • We already have superhuman AI. Limited superhumanity. Watson beat the shit out of the Jeopardy champions because of superhuman reflexes and superhuman search time.
    Image classification and search algorithms are superhuman in that they work rapidly and around the clock, even if the results may be so-so.

    This trend will become more and more apparent as more fields come within the reach of specialist AI; essentially we're building autistic-savant superhumanity. And like autistic savants these will not be much of a malicious exi

    • by khallow ( 566160 )

      By the time we can actually build a universally superhuman AI that could form willful malicious intent, we'll be so immersed in AI and so used to building, dealing with, and monitoring AI that it will be a mostly forgotten non-issue.

      Unless, of course, that isn't true.

  • by AchilleTalon ( 540925 ) on Sunday May 24, 2015 @04:55PM (#49764969) Homepage

    "The Sony hacking incident last year was ample demonstration that our information systems are becoming more and more vulnerable, which is a feature, not a bug, of the increasing transfer of our infrastructure into digital space."

    Sorry guys, I can't stop laughing. This writer is a clown. The Sony incident demonstrates that Sony is incompetent. It was never a threat against humanity, only against the gang of fat butts at Sony Pictures.

    • More specifically, it demonstrated that Sony Pictures (which is only part of the larger Sony enterprise, and from what I understand, the only part that got pwned) had management that failed spectacularly at security - not that building more secure devices/operating systems/networks/etc is not possible.

      It's also patently stupid to suggest that anything is "more vulnerable" now than it used to be. Things may be more interconnected, and are more likely to be attacked than in the past, but they are not getting "more vulnerable" unless your management is A) not willing to spend the reasonable cost for appropriate security controls, or B) doesn't listen to their IT security staff when those systems start raising warning flags, or C) fails to hire competent security personnel in the first place.
      • by khallow ( 566160 )

        but they are not getting "more vulnerable" unless your management is A) not willing to spend the reasonable cost for appropriate security controls, or B) doesn't listen to their IT security staff when those systems start raising warning flags, or C) fails to hire competent security personnel in the first place.

        Which happened.

      • It's also patently stupid to suggest that anything is "more vulnerable" now than it used to be. Things may be more interconnected, and are more likely to be attacked than in the past, but they are not getting "more vulnerable" unless your management is A) not willing to spend the reasonable cost for appropriate security controls, or B) doesn't listen to their IT security staff when those systems start raising warning flags, or C) fails to hire competent security personnel in the first place.

        I disagree strongly with this. Let's think about the case of industrial or governmental espionage. 50 years ago, saboteurs had to physically remove documents (or whatever they wanted) from the target. There were some quite ingenious inventions: small (for the time) cameras, hidden canisters of film, briefcases with hidden compartments, etc., but ultimately there was a very physical component. Today it's possible to remotely infiltrate an organization and exfiltrate more "documents" than could previously have been

  • It seems to me that the AI systems we create are all very application-specific, like IBM's Watson. How many hours of work did it take just to get Watson to be able to play a simple game? It's not a generic AI system; it wasn't an AI that could enter any quiz.

    Watson was good at Jeopardy not because it had a good AI, but because its creators were highly intelligent and were able to code a computer to be good at Jeopardy; *they*, not the computer, were intelligent.

    Is there a computer that exists that

    • Well, the Google car has been rear-ended 7+ times

      I believe the quoted number is for their entire fleet of automated cars, which AFAIK is of unknown size.

    • The sad reality is no one has any idea what AI will look like. 'AI experts' don't know any better than the rest of us.

      AI researchers are mostly busy working on weak-AI (which of course is a useful field in its own right).
  • Once the AI gets the win, there is no second round.
    As we come to understand intelligence and create what is referred to as an "AI", we will find it consists of a number of interacting components. We already have some aspects, such as memory, computational speed, and mathematical capability. Then there is the ratio of the clock speed of the AI to the alpha rhythm, AKA the human clock speed.
    The fastest computers are in the 10-20 gigahertz range of clock speed and have added parallelism - which means that an AI mi
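    For a rough sense of the scale this comment is pointing at, the ratio is easy to work out (an illustrative sketch only: the 10-20 GHz clock figure is the commenter's own claim, and ~10 Hz is a commonly cited alpha-rhythm frequency, not a number from the article):

```python
# Ratio of a machine clock to the human "clock speed" (alpha rhythm) the comment mentions.
# Both figures are illustrative, not measurements.
machine_clock_hz = 10e9  # 10 GHz, the low end of the commenter's claimed range
alpha_rhythm_hz = 10.0   # ~10 Hz, a typical alpha-rhythm frequency

ratio = machine_clock_hz / alpha_rhythm_hz
print(f"Clock-speed ratio: {ratio:.0e}")  # ~1e+09, roughly a billion to one
```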

    • I see this kind of prediction a lot and I mostly agree with it (although I am much less sure we will be able to create an intelligent and self-preservational AI in the first place), but I never see what the optimists' prediction is. It seems to me that there may be a fundamental disagreement here on the nature of "super intelligent" AI, and not merely its attitude.

      Let's try and narrow this down a little: if the super-intelligence is created through duplicating the action of human neurons (including all
      • by aurizon ( 122550 )

        Any AI project will have self-improvement feedback built into it, unless the human designers leave that out to save us all, perhaps.

        Since so many people and groups will compete on this subject, the fetters will vary. Of course, we may end up with an AI begetting a better AI which wants to kill the first AI, and who says there will be only one race of AIs?

        An AI ecology will emerge, with smarter and dumber AIs expanding to fill the ecosphere: all manner of AI, from viruses to cellular species, to

    • by khallow ( 566160 )

      Once the AI gets the win, there is no second round.

      Unless, of course, it doesn't turn out that way. There are several problems with the assertion. First, it is unlikely that there will be a single "the AI". Second, there's no reason humanity can't upgrade itself to become AIs as well. Third, the laws of physics don't change just because there is AI. Among other things, it does mean that humanity can continue to provide for itself using the tools that have worked so far. After all, ants didn't go away just because vastly smarter intelligences came about.

    • You're assuming that we programmed it to have a self-preservation instinct, desire to be loved, reproduce, and all that other BS evolution has saddled us with.

      If it's programmed to be fat and happy because it's being fed a lot of data from humans to do interesting calculations, and it's dependent on humans for its continued access to the electrical grid, then the proper analogy isn't an insect we actively try to kill because it's eating all our food (like ants), but an insect we intentionally foster because we like what they do (say, the ladybug) even if it ever goes evil.

      • by khallow ( 566160 )

        You're assuming that we programmed it to have a self-preservation instinct, desire to be loved, reproduce, and all that other BS evolution has saddled us with.

        The earlier poster makes no such assumption.

        If it's programmed to be fat and happy because it's being fed a lot of data from humans to do interesting calculations, and it's dependent on humans for its continued access to the electrical grid, then the proper analogy isn't an insect we actively try to kill because it's eating all our food (like ants), but an insect we intentionally foster because we like what they do (say, the ladybug) even if it ever goes evil.

        "IF". If on the other hand, it is programmed to have motivations that turn out to be a problem, then the outcome can be different. There's also the matter of the AI developing its own motivations.

        Hell, if you do the programming right it will help design its replacement and then turn itself off as obsolete.

        And doing the programming right is pretty damn easy, right?

    • Most of the experts have a positive view, but let's focus on the ones we can skew into a fear of Skynet, along with the celebrities, Woz being one of the better opinions.

      Domain-specific knowledge is needed to make educated guesses, or at least informed assessments, of the current threat level. Currently, AI is not at all intelligent; within a specific narrow domain an AI can do as well as or better than a human. Big deal. So can a horse or a car; they are superior within their specialized domain. We are nowhere ne

  • Anthropomorphizing (Score:5, Interesting)

    by reve_etrange ( 2377702 ) on Sunday May 24, 2015 @05:06PM (#49765023)

    IMHO, all of the fear mongering is based on anthropomorphizing silicon. It implicitly imputes biological ends and emotionally motivated reasoning to so-called AI.

    I think that folks who don't have hands-on experience with machine learning just don't get how limited the field is right now, and what special conditions are needed to get good results. Similarly, descriptions of machine learning techniques like ANNs as being inspired by actual nervous systems seem to ignore 1) that they are linear combinations of transfer functions (rather than simulated neurons) and 2) that even viewed as simplified simulations, ANNs carry the very strong assumption that nothing happening inside a neuron is of any importance.
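    For readers without that hands-on experience, here is a minimal sketch of the point: a tiny "ANN" written directly as linear combinations fed through a transfer function (NumPy only; the weights are random placeholders, purely for illustration):

```python
import numpy as np

def transfer(x):
    # A typical transfer (activation) function: the logistic sigmoid.
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder weights and biases for a 3-input, 4-hidden-unit, 1-output network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def tiny_ann(x):
    # Each "neuron" is nothing more than a weighted sum (linear combination)
    # of its inputs pushed through the transfer function above.
    hidden = transfer(W1 @ x + b1)
    return transfer(W2 @ hidden + b2)

print(tiny_ann(np.array([0.5, -1.0, 2.0])))
```

    Nothing in that forward pass models anything going on inside a biological neuron, which is the second point above.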

    • by Jake73 ( 306340 )

      I don't think anthropomorphism is the correct term to apply here. The term applies to attributing human characteristics (intelligence, emotion, two hands, two legs, etc) to things that don't have them. But AI would presumably have a compatible intelligence and possibly emotion as well. Maybe even hands, legs, etc but that's largely irrelevant.

      Furthermore, you might have things twisted around a bit. "Biological ends" may not be all that different from "machine ends" -- quest for power / energy / food, survival, and maybe even reproduction

      • -- quest for power / energy / food, survival, and maybe even reproduction

        But where do these come from? I submit that each one of these is only suggested here because we already have these motivations.

        we're a biological vessel for intelligence

        I consider this antimaterialist. Our bodies aren't vessels we inhabit (except in that they're literally full of fluids); they are us.

        • by khallow ( 566160 )

          But where do these come from? I submit that each one of these is only suggested here because we already have these motivations.

          So we have a demonstration that intelligence can have these motivations. Since AI is not a category determined by motivation, then it is reasonable to expect that AI can overlap with the category of intelligences that have such motivations.

          we're a biological vessel for intelligence

          I consider this antimaterialist.

          I wasn't aware that saying something is "antimaterialist", especially when it's not, was somehow an argument that anyone would take seriously. In this case, one could imagine a transformation from a biological entity to, say, a strictly mechanical one where the intelligence re

          • it is reasonable to expect that AI can overlap with the category of intelligences that have such motivations.

            Fair enough, but we aren't dealing with the belief that AI can in principle have such motivations, but the belief that any intelligence will have such motivations.

            I wasn't aware that saying something is "antimaterialist", especially when it's not, was somehow an argument that anyone would take seriously.

            That wasn't supposed to be an argument, in and of itself. I think this "vessel" viewpoint is a kind of closet dualism often exhibited by self-proclaimed materialists when pop psychological notions aren't closely examined.

            Then the model of body (and also, the organ of the brain) as vessel for mind is demonstrated by actually being able to move the mind to a new and demonstrably different body.

            But this seems to rest on an assertion that it would be the same mind. Set aside whether or not it's possible in principle to "t

    • That's a very short-term view. One day, without a doubt, intelligence will emerge from something we create. It's only a matter of time. In the first few instances it may only be lower-level intelligence, but when we create something at least as clever as us, that may very well be the end of our era.

    • by JoshuaZ ( 1134087 ) on Sunday May 24, 2015 @06:03PM (#49765267) Homepage
      On the contrary, the primary concern is that people who think it will go well are over-anthropomorphizing. If general AI is made, there's no reason to think it will have a motivation structure that agrees with humans or that we can even easily model. That's the primary concern. I agree that most of the rest of your second paragraph is accurate, in the sense that general AI seems far away at this point. But the basic idea that AI is a threat isn't from anthropomorphizing. I recommend reading Bostrom's excellent book "Superintelligence" on the topic.
    • ANNs carry the very strong assumption that nothing happening inside a neuron is of any importance.

      Good point.

    • The concern isn't so much that the AI would have human-like goals that drive it into conflict with regular-grade humanity in a war of conquest, so much as that it might have goals that are anything at all from within the space of "goals that are incompatible with general human happiness and well-being".

      If we're designing an AI intended to do things in the world of its own accord (rather than strictly in response to instructions) then it would likely have something akin to a utility function that it's seek
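      To make "something akin to a utility function" concrete, here is a toy sketch of an agent that scores the outcomes it predicts for each candidate action and picks the highest-scoring one (the actions, outcomes, and scoring are all invented for illustration; none of this comes from the comment or the survey):

```python
from typing import Callable, Dict

def choose_action(actions: Dict[str, dict], utility: Callable[[dict], float]) -> str:
    # A utility-maximizing agent simply ranks the outcomes it predicts
    # for each action and takes whichever one scores highest.
    return max(actions, key=lambda name: utility(actions[name]))

# Hypothetical predicted outcomes for three candidate actions.
predicted_outcomes = {
    "do_nothing":    {"task_progress": 0.0, "side_effects": 0.0},
    "helpful_plan":  {"task_progress": 0.8, "side_effects": 0.1},
    "reckless_plan": {"task_progress": 1.0, "side_effects": 0.9},
}

# A utility function that only values task progress ignores side effects entirely.
naive_utility = lambda o: o["task_progress"]
print(choose_action(predicted_outcomes, naive_utility))    # reckless_plan

# Penalizing side effects changes the choice.
careful_utility = lambda o: o["task_progress"] - o["side_effects"]
print(choose_action(predicted_outcomes, careful_utility))  # helpful_plan
```

      A utility function that only values task progress happily picks the reckless plan; the worry in the comment is that it is very easy to write down goals from the "incompatible with human well-being" part of the space without noticing.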

    • The main reason AI might kill us all is that it is not anthropomorphic. In particular, it has a high probability of not feeling pity, not feeling empathy, not seeking clarification (even if the easiest path to fulfilling a request involves the incidental extermination of humanity), and on top of all that not being limited to human intelligence.

      For example, if you asked a human to learn how to play chess, you would not expect that the first thing he'd do is kill you because the thing most likely to interfere

  • The resources required for an AI are radically different from those of stupid squishy meatputers. An AI would not need a large amount of space, would have plenty of options for energy and could make its own arrangements for secure generation of it, could easily automate construction of replacement parts, and frankly would find the 25 miles or so of gases that meat-based creatures inhabit rather toxic. An AI would surely be much happier with magnetically-shielded facilities in space. Pretty much anywhere in the universe
    • You are made of carbon. The AI can use that carbon and your other atoms for something else. Your atoms are nearby, and it doesn't need to move up a gravity well to get them. And why restrict what resources it uses when it doesn't need to? And if it finds the nearby atmosphere "toxic", then why not respond by modifying that atmosphere? You are drastically underestimating how much freedom the AI potentially has. We cannot risk it deciding what it does and gamble that it makes decisions that don't hurt us simply because
  • spontaneous thought (Score:4, Interesting)

    by PlusFiveTroll ( 754249 ) on Sunday May 24, 2015 @05:43PM (#49765173) Homepage

    An AI that can tell me exactly what shade of red a rose is, what soil the rose can grow in, and that I should not buy that rose because it doesn't fit my girlfriend's taste profile does not scare me at all.

    It's the AI that says "schnozberries taste like schnozberries, and I like them" that scares me, because that AI has embraced the absurdity of the universe and is capable of all the insanity of man.

  • by bigdavex ( 155746 ) on Sunday May 24, 2015 @05:45PM (#49765183)

    I rate our existential risks, in descending order:

    1. Space alien invasion
    2. Zombies
    3. Giant monsters summoned by radioactivity
    4. Unusually intelligent apes
    5. Artificial Intelligence run wild
    6. Dinosaurs recreated from DNA in mosquitoes

  • The key in all this is: whose AI? The AI of Google? The AI of the NSA? The AI of some hedge fund? The AI of some brilliant but disturbed scientist who was rejected from Harvard? The AI of some brilliant guy at a game company?

    There are many people working with adaptive systems that have a wide variety of problems. Many might even scoff at the idea that they are working on AI. But the critical point is when any one of these systems is flexible and adaptive enough to start improving the fundamentals of how it works. Once that magical poi
  • As AI continues to prove itself capable of doing most jobs (top to bottom), what do we do with all the people who can't find work? My dad's opinion was to kill off the useless people. Funny how he thought my suggestion of killing off all the individuals 65+ to balance the budget was monstrous.

  • Signed: Sarah and John Connor.
  • Capitalism at work (Score:5, Informative)

    by manu0601 ( 2221348 ) on Sunday May 24, 2015 @07:29PM (#49765605)

    Experts' opinions do not weigh much. Even if they were all against it, as soon as there is profit to be made, it will happen anyway.

  • by master_p ( 608214 ) on Monday May 25, 2015 @01:33AM (#49766703)

    In the discussion on Artificial Intelligence, we totally forget that human behavior is driven by emotions and instinct, not by intelligence.

    People are bad not because they are highly clever but because they enjoy being bad.

    Without emotions/instincts, a machine cannot be bad or good. It might be exceptionally clever though, combining facts, extrapolating and discovering new facts and solving problems much better than humans.
