AI Technology

Hawking: AI Could Be 'Worst Event in the History of Our Civilization' (usatoday.com) 243

An anonymous reader shares a USA Today report: Elon Musk isn't the only high-profile figure concerned about the rise of artificial intelligence. Scientist Stephen Hawking warned AI could serve as the "worst event in the history of our civilization" unless humanity is prepared for its possible risks. Hawking made the remarks during the opening night of the Web Summit in Lisbon, Portugal. Hawking expects AI to transform every part of our lives, with the potential to undo damage done to the Earth and cure diseases. However, Hawking said AI could also spur the creation of powerful autonomous weapons of terror that could be used as a tool "by the few to oppress the many." "Success in creating effective AI could be the biggest event in the history of our civilization, or the worst," he said. Hawking called for more research in AI on how to best use the technology, as well as implored scientists to think about AI's impact. "Perhaps we should all stop for a moment and focus our thinking on not only making AI more capable and successful, but maximizing its societal benefit," he said.

Comments Filter:
  • Fear mongering (Score:4, Informative)

    by Anonymous Coward on Tuesday November 07, 2017 @01:03PM (#55507173)

AI is nowhere near an existential threat, so let's just stop with this. AI is useful but very primitive compared to anything that could actually pose a threat. Please stop.

    The main threat is developing AI and data mining operations to interpret large amounts of data and build profiles of all of us. It's a privacy issue, and one we are capable of solving by mandating that our privacy is respected. While I'm not confident we'll actually do so, it is definitely in our control.

    • Re:Fear mongering (Score:5, Insightful)

      by Dutchmaan ( 442553 ) on Tuesday November 07, 2017 @01:14PM (#55507261) Homepage
AI is improving every day; it far exceeds human capability at recognizing patterns and responding to them. When you watch what's happening with video games and how AI is beginning to trounce even the best players, you can see how even in its infancy AI needs to be treated with the same caution as you would a dangerous virus. Once the genie is out of the bottle it will be too late to discuss it.
      • With the speed of computers, I'm afraid it will be too late before we even realize the genie is out of the bottle. That's the risk right there, thinking the threat is in the distant future when it might only be a few months or a few years away.

        • Re:Fear mongering (Score:5, Insightful)

          by Oswald McWeany ( 2428506 ) on Tuesday November 07, 2017 @01:40PM (#55507523)

          With the speed of computers, I'm afraid it will be too late before we even realize the genie is out of the bottle.

          There doesn't even need to be a malevolent AI to take over humanity. It could be a benevolent takeover that is prompted by people.

          Forget science fiction movies and books; there doesn't need to be a revolution where an AI is more intelligent than us and we realize too late. It could happen slowly step by step.

To be effective in the stock market now you have to rely on computer-led decisions. That might not be true AI yet, but a computer can respond to news faster than a human, and all the major traders use computer-made decisions now. So there is one industry where computers are already prominent. What if it happens in other industries over time? (It is... and we're gladly and willingly turning over control.)

What if we decide computers, or AI, can control the economy better than a human? If one country does it and it proves to be successful, others will have to do it to keep up. What if AI can handle trials better than a jury? What if AI can produce better military strategies?

There doesn't have to be a revolution; AI will evolve to take over humanity with us willingly handing it the reins. It probably won't happen in our lifetime, but the slow transfer of power has already begun. Right now humans can override computer decisions, but that will eventually disappear when AI is less flawed than people and we realize a human overriding it is usually wrong.

          AI will one day rule and control humanity- and we WILL give it that power over us willingly.

          • by thomst ( 1640045 )

            DontBeAMoran cautioned:

            With the speed of computers, I'm afraid it will be too late before we even realize the genie is out of the bottle.

            Prompting Oswald McWeany to respond:

            There doesn't even need to be a malevolent AI to take over humanity. It could be a benevolent takeover that is prompted by people.

            Forget science fiction movies and books; there doesn't need to be a revolution where an AI is more intelligent than us and we realize too late. It could happen slowly step by step.

There doesn't have to be a revolution; AI will evolve to take over humanity with us willingly handing it the reins. It probably won't happen in our lifetime, but the slow transfer of power has already begun. Right now humans can override computer decisions, but that will eventually disappear when AI is less flawed than people and we realize a human overriding it is usually wrong.

            AI will one day rule and control humanity - and we WILL give it that power over us willingly.

I was going to upmod Oswald's comment, but someone else will put him over the +5 Insightful bar soon enough. So, instead, I'd like to point out that his projection of AIs in charge of all executive decision-making is the exact model of the Culture's society in the late, great Iain M. Banks's visionary novels.

            Yes, there are plenty of drones with human- or more-than-human-level intelligence who have no more executive authority than the average human or alien citizen, but al

      • Limit its use to lecturing and ranting at us about how stupid the human race is.

      • by kiviQr ( 3443687 )
There is no "I"ntelligence in AI: it is pattern recognition and great algorithms. Google "robo soccer" https://www.youtube.com/watch?... [youtube.com]
        • Intelligence is "the ability to acquire and apply knowledge and skills" according to dictionary definition. That could be done with pattern recognition and great algorithms.

          • by kiviQr ( 3443687 )
In its current state, a pattern recognition algorithm needs a human (intelligence) to prepare training data (which includes labeling objects); then a human needs to tweak classification parameters (dimensions, stepping functions) to optimize the result. "AI" is not capable of "acquiring and applying knowledge"; it is we humans who are.
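The human-in-the-loop dependency is visible even in a minimal classifier sketch (Python; the feature vectors and labels below are made up purely for illustration): all the "knowledge" the algorithm applies was encoded by people as labeled training examples.

```python
# Minimal 1-nearest-neighbor classifier. Every label it can ever
# output comes from the human-labeled training data it is given.
def classify(point, training_data):
    # training_data: list of (features, human_assigned_label) pairs
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda pair: dist(pair[0], point))
    return nearest[1]

# Humans supply these labels -- the algorithm never invents them.
labeled = [((0.1, 0.2), "cat"), ((0.9, 0.8), "dog"), ((0.2, 0.1), "cat")]
print(classify((0.15, 0.15), labeled))  # nearest examples are "cat"
```

Swap in different labels and the same code "acquires" entirely different "knowledge", which is the parent's point: the acquiring was done by whoever labeled the data.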
      • The last article I saw said AI sucks at video games.
      • Comment removed based on user account deletion
        • If that is truly the case, then it is the mark of a lousy AI programmer.

          It has been known, and taught, for literally decades that an AI program has to be able to explain to its human programmers HOW or WHY it came to a particular conclusion or chose a particular course of action. This started with medical work: the AI has to be able to tell the doctors WHY it thinks the patient has this particular condition, and not that one, or why it recommends this drug over that one. It also has to tell the doctors wh

          • Please explain exactly how you catch a ball.

            • You've just demonstrated that human intelligence is incompletely understood. Not that self-explaining AI isn't desirable or being pursued. It's not the aim of AI research to emulate human weaknesses. In fact, the ability of AI to explain its thinking is the one area where it sometimes trumps human intelligence. Humans all too often can't explain how they came up with a solution even if it is desirable, i.e., in mathematics, to explain it to others for them to emulate the fruitful mental process. Even if you
      • A truly dangerous threat is only that which can spread exponentially, for example a biological virus (or perhaps a malformed GM organism). AI is not that.

        Unless of course it is programmed to self reproduce in the physical world, e.g. through a combination of living cells with custom DNA and nano assemblers. (That would be Michael Crichton's "Prey".)

      • There is no genie. Only a person who rubs the bottle

    • Just look at the previous article. There's already "AI" building databases of everyone and all the legislation in the world will not stop Facebook from using it on their own servers.

    • You don't need an awesome AI to make it a destabilizing influence. Think of what all you can do with a spreadsheet that is hard to do otherwise. Now instead of making Mr. Data, you make a smarter hybrid of Siri, Wolfram Alpha, and a webpage price adjustment bot and game the stock market with it. It by itself isn't going to go all Skynet, but how long can finance run as-is if people are running up bubbles with automated trading tools? It'll go all "humans with dogs outcompete the Neandertals" all over ag
      • by mysidia ( 191772 )

        I suggest we do what we should have done a long time ago..... pass a law that every stock trade has to be approved by at least 1 human before submitting it to a broker or marketmaker for minimal delay; if submitted electronically without answering a CAPTCHA or providing verbal confirmation then a minimum mandatory delay of 10 minutes shall be implemented before it may be taken to a market, and the pending trade will be announced/made part of public data, otherwise the mandatory delay will be 90 seco
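The parent's proposed rule can be sketched roughly as follows (Python; the 10-minute and 90-second figures are the parent's own numbers, and everything else here is a hypothetical illustration, not any real exchange's API):

```python
# Sketch of the proposed trade-delay rule: a trade confirmed by a
# human (CAPTCHA or verbal confirmation) waits 90 seconds; an
# unconfirmed electronic trade waits 10 minutes and is made public.
def trade_delay_seconds(human_confirmed: bool) -> int:
    if human_confirmed:
        return 90        # minimal mandatory delay
    return 10 * 60       # 10-minute delay; pending trade announced publicly

print(trade_delay_seconds(True))   # 90
print(trade_delay_seconds(False))  # 600
```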

        • Well you could try, but how long until the AI is made able to complete the captchas or uses a randomly tweaked version of Siri's voice or a recording of the operator to answer the verbal confirmation? Then to get around the instantaneousness you run multiple instances, babysat by the sorts of people that work in call centers running scams.
    • Re:Fear mongering (Score:5, Interesting)

      by hey! ( 33014 ) on Tuesday November 07, 2017 @01:34PM (#55507455) Homepage Journal

Like most calamities, it's not an existential threat to the species, but it is an existential threat to populations within the species. And it is potentially a long-term threat to the underlying assumptions on which our civilization rests.

      One of the important things about learning from past experience is understanding the predictive limitations of past experiences. In past technological developments we've been talking about massive productivity improvements. The assumption that there would be no more work stemmed from assuming that the standards of living would remain the same. That assumption was wrong; the average household has as many possessions today as a prince would have had two hundred years ago.

But AI poses a distinctly different possibility: that in the upcoming decades machines may be able to replace people, not just augment them. This could lead to a version of capitalism that entails very rigid hereditary class distinctions; if you have no capital you may find yourself with no means to obtain it, because your labor is now worthless.

That's an excellent observation. The skills will still be needed, but in diminishing quantities. The human touch will be valued in many services for a long time.

Robots will replace humanity not only in material production, but also in services and, most importantly, government; not by a violent takeover, but by gradual sophistication. Think of HMMs that graduated from predicting protein function to making tax-bracket adjustment decisions for the next year.

        It will become so complicated that even the biggest genius

    • Comment removed (Score:4, Insightful)

      by account_deleted ( 4530225 ) on Tuesday November 07, 2017 @01:36PM (#55507477)
      Comment removed based on user account deletion
    • by mspring ( 126862 )
      "our"? You mean "humanity as a whole" or the few in power?
That's it. Those in power are guaranteed to start thinking about how they can weaponize AI (or any modest semblance of it) so they can expand their power at the expense of others. Simple example: the major power centers get together with the surveillance state, which is augmented by AI so it can monitor people better (because currently that's very weak; mostly you get after-the-fact reconstruction of events). You can get an AI 'minder' for everyone.

    • by Kjella ( 173770 )

      The main threat is developing AI and data mining operations to interpret large amounts of data and build profiles of all of us. It's a privacy issue, and one we are capable of solving by mandating that our privacy is respected. While I'm not confident we'll actually do so, it is definitely in our control.

      Our as in our collective control, maybe. Our as in you individually? To some degree I suppose, but I can't stop other people from letting Facebook go through their contact list. I can't stop them from backing up their photos to the cloud, almost everyone I know does. At least they only rarely post them in public or tag me, but still. When I drive to work there's a congestion zone you have to pay to enter, there's no manual booth anymore they just photograph your license plate and you get the bill in the mai

    • I feel like it's incredibly shortsighted to dismiss AI as nothing to worry about just because it's not "general purpose self-aware AI".

      What happens as AIs are increasingly involved with high-level decision making at corporations? Taking it a step further, can you envision a future where AIs are effectively given control of corporations? It seems inevitable that AIs will be able to outperform human CEOs (at many types of companies, particularly financial institutions) at some point. And legally speaking,

Mega data-set analysis is not what keeps people alive and productive every day.

      Most decisions are "either-or", "if-when" or "how-why". I doubt computers can do better when there are only two choices. Just my guess.

    • Comment removed based on user account deletion
    • Say goodbye to it.

      Blame two factors of modern technological cancer:

      Digital. Allows infinite copies of information.

Internet. Provides the medium allowing this info to be copied everywhere.

      It works against content owners and it works against privacy.

Five years ago my car insurance company disrespected me, so I had to pick another one.

I had to enter details of who I am and such.

Now, 5 years later, the situation repeated itself, and I had to pick another company.

All I had to do was enter my name and a couple of other thin

  • by Anonymous Coward on Tuesday November 07, 2017 @01:07PM (#55507205)
The real danger from what we're erroneously calling 'AI' right now is that it's a dead-end approach that will never reach the potential we want it to. It will always fall short because it's not real Artificial Intelligence, any more than a vegan cheeseburger is a real cheeseburger; it's imitation AI, ersatz, not the real thing at all. None of what is being produced right now can actually think; 'learning algorithms' and 'expert systems' are not true minds, and your dog is smarter and more capable of actual cognition than even the best of these machines. So what will happen is that too much trust will be put into them for critical and/or dangerous things, and they will inevitably screw up in spectacular and disastrous ways -- because they cannot think. In order to have true, real AI, we need to understand how an actual brain accomplishes the things it does -- and we're nowhere near understanding that. Maybe in a hundred years, maybe never. In the meantime these over-hyped, half-baked excuses for 'AI' need to not be put in charge of anything that could cause disasters or loss of human life.
    • by Falos ( 2905315 )

      > it's not true-scotsman intelligent

      The question was whether we will be able to use it, TRUEAI or not, to control the unwashed proles.

      Given that various countries are already attempting various retarded maneuvers with their networks, already seek to control and compel-by-force, the answer is "You bet your ass we will."

    • Comment removed based on user account deletion
    • by 110010001000 ( 697113 ) on Tuesday November 07, 2017 @03:34PM (#55508621) Homepage Journal
THIS is finally an insightful post. He is right: the current approach is a dead end, and the end of Moore's Law will kill it even faster.
  • by realmolo ( 574068 ) on Tuesday November 07, 2017 @01:23PM (#55507339)

    Everything that is labeled "AI"...isn't.

    We don't have computers that can think yet. We just don't. We aren't even CLOSE, and it may not be possible at all.

    Hawking doesn't know what he's talking about. Neither does the media.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Hawking doesn't know what he's talking about. Neither does the media.

      Problem is that an AI program doesn't have to qualify for your definition of AI to be incredibly powerful and dangerous.

    • by quantaman ( 517394 ) on Tuesday November 07, 2017 @01:50PM (#55507621)

      Everything that is labeled "AI"...isn't.

      We don't have computers that can think yet. We just don't. We aren't even CLOSE, and it may not be possible at all.

      Hawking doesn't know what he's talking about. Neither does the media.

      Alternately, we might be less than 10 years away. We don't really know how far off we are or what the dangers are because we don't know what a strong general AI will really look like.

Talking about the dangers of strong AI now is a bit like talking about super-weapons in 1920. Sure, they saw how science plus warfare could increase destructiveness, but there was no more reason to anticipate nukes within 20-30 years in 1920 than in 1820.

    • by Maritz ( 1829006 ) on Tuesday November 07, 2017 @01:56PM (#55507681)
Realmolo thinks it might not be possible, guys, so relax, can everybody just relax please? Thanks.
    • Can you define what "to think" means?

    • ...and it may not be possible at all.

      I refer you to the counterexample in your skull.

    • Comment removed based on user account deletion
      • If we did it would have rights just as we do

        Cows have autonomous intelligence. We eat them on a bun with pickles and mustard.

  • Every tool we created can be used to save lives or kill us. Pattern recognition (so called AI) can be used to save people (cancer detection, autonomous cars, etc). As a society we can decide how to use it. Note that at the same time society elects emotionally driven people who have access to a nuclear button...
    • by Maritz ( 1829006 )
      With some technologies, we plan improperly, make mistakes, and learn from those mistakes. With certain AI scenarios, we don't get to make mistakes. It is qualitatively different.
  • People forget... (Score:5, Interesting)

    by wjcofkc ( 964165 ) on Tuesday November 07, 2017 @01:41PM (#55507529)
No one ever said AI has to be sentient, or some facsimile of what we consider intelligent, to be very real. That does not make it less of a potential threat. Even a single-celled organism is capable of responding to its immediate environment for survival. Bacteria behave in intelligent ways and can kill a person with quickness in doing so. Intelligence does not have to equal consciousness. Nature clearly demonstrates that awareness is more complicated - even while being less so - than our human sensibilities care to deal with. For that matter, we don't even know what consciousness really is, so we can't use it as a litmus test. People say it can never be done because they cannot accept the possibility of a true AI in a way that does not offend their fragile sensibilities of what intelligence means. Let's take the anthropomorphic out of this discussion and start over.
    • >Let's take the anthropomorphic out of this discussion and start over.

      If we generalize too much, we might realize that intelligence represents a threat (something with goals that may conflict with ours, and a way of planning how to achieve them despite that conflict). Humans are intelligent. Therefore humans are a threat and we really ought to kill each other for our own protection.

      • by wjcofkc ( 964165 )
        "Therefore humans are a threat and we really ought to kill each other for our own protection"

        We are already doing exactly that and we are doing a damn good job of it.
        • There's 7.5 billion of us, increasing at more than 1% annually, and you think we're doing a good job of killing each other off???

          • by wjcofkc ( 964165 )
            "There's 7.5 billion of us, increasing at more than 1% annually, and you think we're doing a good job of killing each other off???"

            Yes, and we are getting more efficient at it all the time.
      • Therefore humans are a threat and we really ought to kill each other for our own protection.

        The problem is that most humans become a much bigger threat when they figure out you're trying to kill them.

    • Agreed. I think the very example given in the post and the article provide a concrete example: autonomous weapons that allow someone to inflict greater terror or oppression. In the past, you may have needed a larger network, foot soldiers, etc. With AI, you can potentially do greater damage with fewer individuals. Or with automated weapons, you have fewer soldiers to feed and care for in order to impose your will.
Which is worse: a period where AI is controlled and used by a fascist-like government, or a long period, like after the Roman Empire, where religion takes over people's minds and interest in culture and technology vanishes? IMHO any authoritarian regime has built-in instability and self-destructs rapidly, while a religious mindset is much more able to infect everyone's minds and perpetuate itself for centuries.

  • by Drethon ( 1445051 ) on Tuesday November 07, 2017 @02:50PM (#55508185)

When you look only at the bad side, new technology is almost always the worst thing ever to come along. The internet has the potential to be horrifically misused; there is no better portal for spreading misinformation that appears to be truth. At the same time, real knowledge has spread further via the internet than through just about any other invention short of the printing press, maybe even more so.

  • by mopower70 ( 250015 ) on Tuesday November 07, 2017 @03:09PM (#55508391) Homepage
I remember reading Roger Penrose's "The Emperor's New Mind" when I was in grad school. He's a brilliant mathematician and I was excited to read his take on the field I had spent the last two years studying. I was blown away when I realized the gist of his book was that computers could never develop consciousness because of quantum randomness that occurs in the cells of the human brain. In other words, he follows the millennia-old and thoroughly debunked myth that consciousness arises from "brain stuff". I couldn't understand how someone so smart could have devoted so much time to a subject yet be so ludicrously wrong. I realized shortly thereafter that the great minds are usually great in their areas of expertise, but are often just as loony as your drunk Uncle Bob in those that aren't. In other words, don't take relationship advice from Albert Einstein, and don't listen to warnings on the future of AI from a cosmologist, no matter how smart he is.
    • Wise words..

And better expressed than I usually manage: "Everybody is an idiot! Including you and me!"

  • by rleibman ( 622895 ) on Tuesday November 07, 2017 @03:31PM (#55508589) Homepage
Seriously. I have kids, so my genes have been passed on, but what's so special about only my genes passing to the next generation? Why should I have a problem with the products of our minds carrying on our legacy? What if we could create a galactic empire made by our descendants, where our descendants are not biological but the products of our minds? That idea seems to me amazing and worthy!
I doubt that if AI ever took over it would get rid of biological life, but if it does, so what? Other species have gone extinct; we will too. AI may be the worst event in the history of **our** civilization, but the best event in the history of **its** civilization!
We already turned bookkeeping, once a high-paying profession, into $10/hr work with little demand that can be done overseas or by Excel with macros.

    We have sites like Wix [wix.com] that have already lowered web developer salaries.

Once computers can program themselves with a PHB and a template generator, why would anyone need programmers? They cost money and complain all the time about a livable wage. Between that and robots taking the other end of the jobs, we are seeing wages and job openings fall.

  • He's just full of Doom Porn these days. 1. AI will kill us. 2. The planet will become uninhabitable. 3. Aliens are going to find us and snuff us out. It's just one thing after another with him. How about some good, old-fashioned theoretical physics for a change?

Would it hurt not to wait until the last minute to work out the gigantic task of what to do with the non-workers, other than letting them die too? There are a LOT of creative people out there who can make life better, both to a task and to the eye.
  • Is headed towards the lake [slashdot.org].

...yes, this will be abused, and of course marketers will be jumping on it left and right. What would scare me is if they use this to build a "Minority Report"-like database on people so they can better do surveillance on unsuspecting people, or insurance companies use it to better deny people coverage for life-saving medicines. "You don't need that pill, I am the computer, and a computer iz never rong!" The AI does not scare me; it's the people who WILL be (ab)using it that scare me shitless!
