Stephen Hawking: 'I Fear AI May Replace Humans Altogether' (wired.co.uk) 282

dryriver writes: Wired magazine recently asked physicist Stephen Hawking what he thinks of everything from AI to the Anti Science Movement. One of the subjects touched on was the control large corporations have over information in the 21st Century. In Hawking's own words: "I worry about the control that big corporations have over information. The danger is we get into the situation that existed in the Soviet Union with their papers, Pravda, which means "truth" and Izvestia, which means "news". The joke was, there was no truth in Pravda and no news in Izvestia. Corporations will always promote stories that reflect well on them and suppress those that don't." And since this is Slashdot, here's what Stephen Hawking said about Artificial Intelligence: "The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans."
  • We need to move forward on artificial intelligence development

    No we don't. Some limited subset of people want to/can't help themselves, but life would go on just fine without it.

    • Re:False premise (Score:4, Insightful)

      by ClickOnThis ( 137803 ) on Friday December 01, 2017 @12:26PM (#55658397) Journal

      We need to move forward on artificial intelligence development

      No we don't. Some limited subset of people want to/can't help themselves, but life would go on just fine without it.

      I think you selectively misread what he said. Here's the quote in context, with my emphasis added to the stuff you left out:

      The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers.

      I read this as saying we now have no choice but to continue to work on AI in order to be equipped to cope with it. Life might "go on just fine without it" but it's too late to think that we're going to be without it.

      • I think you selectively misread what he said. . . "The genie is out of the bottle."

        That's all part of the same false premise and doesn't change my point at all.

        I read this as saying we now have no choice but to continue to work on AI in order to be equipped to cope with it.

        Agree that's what he's saying, but disagree that we "have no choice."

        Life might "go on just fine without it" but it's too late to think that we're going to be without it.

        If that's true, clearly the machines are already in charge and thus it doesn't matter what we do. If the humans are still in charge, they can decide to stop.

        • by DarkOx ( 621550 )

          If the humans are still in charge, they can decide to stop.

          Who is "the humans" you can't make for example, me, stop working on AI if I was a determined to do so. At least within our society you could try to get some legislation enacted banning AI research. It would be supper difficult to enforce even if you can convince enough conservatives that it needs banning. You might in the well organized world attempt to convince the UN they should ban AI research, I don't think you have any shot at succeeding there no matter how much propagandizing you do. How do get DP

          • It seems like you've distorted my original point a bit from "we don't have to do this -- we could decide not to" to "we can make people stop doing it." The latter I never said. But I could see that happening if someone really crossed a line.

            Will you go to war? Will you kill people and break things to stop AI development?

            If someone actually started weaponizing this stuff or otherwise connecting them to physical machines/networks/systems that could make Bad Things happen, I could see the rational world actors taking a stand as they currently do at times with conventional weapons.

            But t

  • But wait (Score:2, Insightful)

    by Anonymous Coward

    This will be a new form of life that will outperform humans.

    This is the natural order of things.

  • Endgame (Score:2, Insightful)

    by Empiric ( 675968 )
    Jesus said, "If the flesh came into being because of spirit, that is a marvel, but if spirit came into being because of the body, that is a marvel of marvels. Yet I marvel at how this great wealth has come to dwell in this poverty."

    --Thomas


    Wake me when material reductionism derived Actual Intelligence puts anything on the scoreboard.
  • by sittingnut ( 88521 ) <sittingnut.gmail@com> on Friday December 01, 2017 @12:26PM (#55658395) Homepage

    "here's what stephen hawking said about artificial intelligence: the genie is out of the bottle. ... i fear that AI may replace humans altogether. if people design computer viruses, someone will design AI that replicates itself. this will be a new form of life that will outperform humans."

    this is pure fear mongering.
    what is called "artificial intelligence" these days is not a "new form of life", but a mere hype buzzword for data analysis (using theoretical methods developed decades ago, now made practical by fast computers) over highly limited and filtered sets of data, usually trading accuracy and precision for speed.

    the genie of "new form of life" artificial intelligence is still well within the bottle.
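    to illustrate, here is a minimal sketch of the kind of decades-old method meant above: a 1950s-era perceptron fitted to a tiny, hand-filtered dataset. the toy data, learning rate, and epoch count are purely illustrative assumptions, not anything from the article. fast hardware lets this scale up, but it remains curve-fitting, not a new form of life.

        # Rosenblatt-style perceptron (circa 1958) on a toy, heavily filtered dataset.
        # All values here (data, learning rate, epochs) are illustrative assumptions.
        samples = [((0.0, 0.1), 0), ((0.2, 0.0), 0), ((0.9, 1.0), 1), ((1.0, 0.8), 1)]

        weights = [0.0, 0.0]
        bias = 0.0
        learning_rate = 0.1

        for _ in range(20):                      # a few passes over the tiny dataset
            for (x1, x2), label in samples:
                activation = weights[0] * x1 + weights[1] * x2 + bias
                prediction = 1 if activation > 0 else 0
                error = label - prediction       # -1, 0, or +1
                weights[0] += learning_rate * error * x1
                weights[1] += learning_rate * error * x2
                bias += learning_rate * error

        print(weights, bias)                     # just a learned linear rule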

    • okay, so we have a danger with automated systems with highly limited and filtered sets of data being put in charge of infrastructure, weapons systems, trading....

      sound right to you?

      • okay, so we have a danger with automated systems with highly limited and filtered sets of data being put in charge of infrastructure, weapons systems, trading....

        sound right to you?

        last time i checked, these things are not really in charge of anything independently, except in very controlled environments where input and output are both very limited.
        "driverless" vehicles either require human drivers at the wheel, or very controlled environments (basically invisible rail tracks).
        triggers in algorithms (which are not what is called "artificial intelligence" in either sense of that term) that run trading, search results, social media feeds, etc., are decided and put in there by humans. algorithm
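        for example, the kind of human-authored trigger meant here is literally a threshold a person typed in. the names and numbers below are hypothetical, purely for illustration:

            # Hypothetical human-set trading trigger: the threshold and the action
            # are chosen by a person, not "decided" by the system itself.
            PRICE_DROP_TRIGGER = 0.05   # 5% drop, picked by a human (illustrative value)

            def should_sell(previous_price: float, current_price: float) -> bool:
                """Fire the human-defined rule when the price falls past the threshold."""
                drop = (previous_price - current_price) / previous_price
                return drop >= PRICE_DROP_TRIGGER

            print(should_sell(100.0, 94.0))  # True: the rule, not the machine, made the call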

    • by hey! ( 33014 )

      A lot of that stuff is crystallizing human judgment, resulting in a system which is good enough to replace that judgment in many cases, with additional characteristics like untiring consistency and cheapness of scaling that allow that judgment to be applied in ways we couldn't before.

      The path this takes us down doesn't lead to a plug-in replacement for humans at any point we can envision yet, but I think it does lead to unsettling consequences in the foreseeable future.

      Take state surveillance in a place li

      • by epine ( 68316 )

        The limits of such a network are the humans you need to monitor it, classify behaviors and follow individuals as they move around.

        Not true.

        Eye in the Sky [radiolab.org] — June 2015
        Update: Eye In the Sky [radiolab.org] — September 2016

        These are brilliant episodes (almost on par with French Guy Ramen Noodle Mass Production [youtube.com]).

        The Panopticon in retrospective mode is crime investigation on steroids, almost certainly consuming fewer human resources per kingpin dethroned than traditional flatfeet. So efficient, it's scary.

        Though yo

    • from the impact of what's likely to be another industrial revolution. So far it's hard to get anyone interested in talking about the downsides of that. I think he's just using hyperbole to get attention to the real problems. Worked too. Every time he fires off one of these comments it gets at least 200 comments on /.
  • by Baron_Yam ( 643147 ) on Friday December 01, 2017 @12:28PM (#55658413)

    Everybody dies. The only reason I care about my genes is because my children have them and I am emotionally attached to my children.

    But what if instead of having children, I raised an AI in a humanoid body as a surrogate child? Ultimately we care about the emotional attachment and passing on our hopes, dreams, and knowledge to get some vicarious joy through our children's accomplishments, not genes.

    So maybe one day people will start building children instead of growing them. They will be our descendants in a very real way, only far more robust and adaptable than any produced through natural reproduction.

  • I Don't Care (Score:4, Informative)

    by scunc ( 4201789 ) on Friday December 01, 2017 @12:32PM (#55658441)
    Until Stephen Hawking (and Elon Musk, for that matter) starts doing active development or research into artificial intelligence, I don't care what his opinion is on the "potential dangers" of it. This is the equivalent of listening to a Hollywood actor's opinion on vaccines: it's just a famous person's view on a subject they have a casual familiarity with, usually full of ignorant assumptions and junk science.
    ---
    Artificial Intelligence is no match for natural stupidity.
    • Re:I Don't Care (Score:5, Insightful)

      by Opportunist ( 166417 ) on Friday December 01, 2017 @12:39PM (#55658515)

      Unlike the average Hollywood celebrity, this celebrity is a celebrity for his brains, not his boobs, his looks, or his ability to be a circus clown jumping through hoops for the entertainment of the masses.

      • Unlike the average Hollywood celebrity, this celebrity is a celebrity for his brains, not his boobs, his looks, or his ability to be a circus clown jumping through hoops for the entertainment of the masses.

        He's at least partly a celebrity for his achievements despite his disability. (Which is fine; rightly so.)

        But that means that actually, his body is a large part of his celebrity.

        • by epine ( 68316 )

          But that means that actually, his body is a large part of his celebrity.

          How is this different from Einstein's hair? Or Stephen Pinker's hair? Or Sapolsky's hair [deviantart.net]?

          The 9 Greatest Longhair Scientists of All Time [thelonghairs.us]

          Hear and believe! thy own Importance know,
          Nor bound thy narrow Views to Things below.
          Some secret Truths from Learned Pride conceal'd,
          To Maids alone and Children are reveal'd:
          What tho' no Credit doubting Wits may give?
          The Fair and Innocent shall still believe.

          Seriously, you think Al Gore's body weird-s

      • Unlike the average Hollywood celebrity, this celebrity is a celebrity for his brains, not his boobs, his looks, or his ability to be a circus clown jumping through hoops for the entertainment of the masses.

        Fair enough. Hawking's opinion on AI is much like a rocket scientist's opinion on brain surgery, or a brain surgeon's opinion on rocket design.

        AKA, really not much good for anything but headlines.

      • Re:I Don't Care (Score:4, Insightful)

        by Chris Mattern ( 191822 ) on Friday December 01, 2017 @02:39PM (#55659521)

        True, but that only means so much when he starts handing out opinions outside his field of study. Remember Linus Pauling?

      • You have just as much control over the (inheritance of) brains as boobs.

    • and if nothing else Mr Hawking is very, very good at math. He knows what he's talking about. Just the same way a C++ programmer can comment on the state of the Java programming language without necessarily being an expert on it. He's in the same overall field of study.
  • Sir, I know you are now faced with your own mortality and, like everybody, you want to believe that your life, once over, had meaning. While I totally disagree with your atheist worldview, I want to offer you the following assurances...

    Professor Hawking, you have already changed the face of physics and will be remembered for your brilliant contributions until the end of time. Your legacy is secure. You will be remembered in the same breath with Einstein, Planck and Newton. NOTHING will change this. Plea

  • by Opportunist ( 166417 ) on Friday December 01, 2017 @12:37PM (#55658489)

    We already have this. We call this a corporation.

    • by gweihir ( 88907 )

      You think corporations are intelligent? I see them more as slime-molds slowly digesting a non-resisting pile of trash.

      • Yes, they are; the longest-lived ones have paid lawmakers to ensure their future, which looks bright indeed.

  • Downside? (Score:5, Insightful)

    by WrongMonkey ( 1027334 ) on Friday December 01, 2017 @12:37PM (#55658493)
    If AI is ever smart enough to replace humans, wouldn't that be an improvement? Parents are usually proud when their children surpass them in achievement. I would be happy to view AI the same way.
    • Same. What's the problem? If anything, recent history has shown humans are woefully lacking as a species. We deserve to be replaced.
  • by CptPicard ( 680154 ) on Friday December 01, 2017 @12:39PM (#55658511)

    The joke was that there is no news in Truth and no truth in News.

  • While on the one hand I hold Stephen Hawking in high regard as one of the smartest guys in any room you care to name, I think in this case he needs to put down the Isaac Asimov Foundation novels and his copy of I, Robot and just concentrate on breathing for a few minutes. We don't even have real, full-on, conscious/self-aware/truly thinking AI yet, and might not ever (we still have to figure out how we do those things!), and what we have right now still has an 'Off' switch, or can have its plug yanked out of
  • by zifn4b ( 1040588 ) on Friday December 01, 2017 @12:50PM (#55658587)
    The problem with the idea that self-replicating machines replacing humans would pose a danger to humans is that it's based on a very subtle anthropomorphic fear. We are projecting onto the machines the competitive survival behavior of human beings. Robots with AI would not be naturally occurring entities with these traits. The only way AI could have this type of algorithm is if we specifically program it to do so. I suppose the claim here is that AI might become sentient, and furthermore that all sentient "life" is similar to humans. The second part I don't know is true, because when we teach different types of apes and monkeys sign language, they are able to cobble together basic concepts and express them, but I don't think it's exactly like humans. Therefore, I think a lot of this is wild speculation and FUD. Sure, it's a possibility we can imagine, because we can imagine ourselves programming machines to be this way, but I think it is much more far-fetched to speculate about what AI with the ability to modify itself will do. I think we just don't know.
    • AI will require motivation. We know roughly how that works from the example evolution created in us - emotions. There are also more basic motivations in the form of instincts. Any AI without a similar motivation system will be a glorified calculator.

      Maybe we create them with the singular motivation of 'please the master', but I'm betting it'll be more complex than that, and afterwards we will try and bolt on some variant of Asimov's Laws of Robotics as instinct.

      A lot of Asimov's robot stories were about h

      • by zifn4b ( 1040588 )
        No we really don't know. We don't know what consciousness is yet. That's why it's being actively studied by a lot of people right now. We can't speculate until the scientists provide the evidence. Subjective experiences don't count. We can't convert a subjective experience into firmware and load it into a robot.
  • He said this in a robotic voice...
    • by zifn4b ( 1040588 )

      He said this in a robotic voice...

      Yes, the ironing of this being said in a robotic voice is very scary, perhaps the most scary aspect of this entire topic.

  • seems like a good idea to me.

  • Hawking is nearing his end and he knows it. This is tainting his views and makes him see death around every corner.
    • by gweihir ( 88907 )

      He did have an exceptionally good run though, especially as the doctors predicted he would not make it to 30 and nobody predicted he would be one of the best minds in physics, ever. He should stay out of CS though, as he does not even have the basics.

  • At least that is the only reason I can see why he is spewing dire predictions that are completely baseless and about things he does not even understand a bit. He really should stick to things he is good at (and exceptionally so) and stop disgracing himself.

    The actual state of affairs is that the only "AI" we have is weak AI and that is the "AI" without "I". Weak AI is not intelligent at all, not even a dim glimmer. It is automation, and about as intelligent as a book of instructions (or a loaf of bread). I

  • This is purely a question of when. AI will replace all of us; it's just a matter of when. Doesn't mean it's a bad thing.
  • Humanity is trash
  • ...brilliant man, but suffering badly from both the "I'm good at something, so I must be brilliant at everything" syndrome and the George Lucas ("nobody around me will tell me that's a stupid idea") syndrome.

    Together, Stephen, they kind of make you ridiculous.
