Stephen Hawking: 'I Fear AI May Replace Humans Altogether' (wired.co.uk) 282
dryriver writes: Wired magazine recently asked physicist Stephen Hawking what he thinks of everything from AI to the Anti Science Movement. One of the subjects touched on was the control large corporations have over information in the 21st Century. In Hawking's own words: "I worry about the control that big corporations have over information. The danger is we get into the situation that existed in the Soviet Union with their papers, Pravda, which means "truth" and Izvestia, which means "news". The joke was, there was no truth in Pravda and no news in Izvestia. Corporations will always promote stories that reflect well on them and suppress those that don't." And since this is Slashdot, here's what Stephen Hawking said about Artificial Intelligence: "The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans."
False premise (Score:2)
We need to move forward on artificial intelligence development
No we don't. Some limited subset of people want to/can't help themselves, but life would go on just fine without it.
Re:False premise (Score:4, Insightful)
We need to move forward on artificial intelligence development
No we don't. Some limited subset of people want to/can't help themselves, but life would go on just fine without it.
I think you selectively misread what he said. Here's the quote in context, with my emphasis added to the stuff you left out:
The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers.
I read this as saying we now have no choice but to continue to work on AI in order to be equipped to cope with it. Life might "go on just fine without it" but it's too late to think that we're going to be without it.
Re: (Score:2)
I think you selectively misread what he said. . . "The genie is out of the bottle."
That's all part of the same false premise and doesn't change my point at all.
I read this as saying we now have no choice but to continue to work on AI in order to be equipped to cope with it.
Agree that's what he's saying, but disagree that we "have no choice."
Life might "go on just fine without it" but it's too late to think that we're going to be without it.
If that's true, clearly the machines are already in charge and thus it doesn't matter what we do. If the humans are still in charge, they can decide to stop.
Re: (Score:2)
If the humans are still in charge, they can decide to stop.
Who is "the humans" you can't make for example, me, stop working on AI if I was a determined to do so. At least within our society you could try to get some legislation enacted banning AI research. It would be supper difficult to enforce even if you can convince enough conservatives that it needs banning. You might in the well organized world attempt to convince the UN they should ban AI research, I don't think you have any shot at succeeding there no matter how much propagandizing you do. How do get DP
Re: (Score:2)
It seems like you've distorted my original point a bit from "we don't have to do this -- we could decide not to" to "we can make people stop doing it." The latter I never said. But I could see that happening if someone really crossed a line.
Will you go to war? Will you kill people and break things to stop AI development?
If someone actually started weaponizing this stuff or otherwise connecting them to physical machines/networks/systems that could make Bad Things happen, I could see the rational world actors taking a stand as they currently do at times with conventional weapons.
But t
But wait (Score:2, Insightful)
This will be a new form of life that will outperform humans.
This is the natural order of things.
Re:But wait (Score:5, Insightful)
This will be a new form of life that will outperform humans.
This is the natural order of things.
Perhaps. Isaac Asimov once speculated that the ultimate destiny of humanity might be to create a higher machine intelligence.
Re: (Score:2)
Endgame (Score:2, Insightful)
--Thomas
Wake me when material reductionism derived Actual Intelligence puts anything on the scoreboard.
Re: (Score:2, Insightful)
Re: (Score:3)
I think we better shut this AI down before it's too late! [inspirobot.me]
(Maybe it's already too late!) [inspirobot.me]
At least it gave me [inspirobot.me] some laughs first. [inspirobot.me]
Re: (Score:2)
Okay, WrongMonkey, I'll pencil you in as my permanent personal bot.
Unless you think either your username or your post or the thread topic or material reality per se gives any basis to treat you as anything more?
Re: (Score:2)
Re: Endgame (Score:2)
Re: (Score:2)
Re: Endgame (Score:2)
Re: (Score:2)
Re: Endgame (Score:2)
the genie is not out of the bottle (Score:5, Insightful)
"here's what stephen hawking said about artificial intelligence: the genie is out of the bottle. ... i fear that AI may replace humans altogether. if people design computer viruses, someone will design AI that replicates itself. this will be a new form of life that will outperform humans."
this is pure fear mongering.
what is called "artificial intelligence" these days is not a "new form of life", but mere hype buzzword for data analysis (using theoretical methods developed decades ago, now made practical due to fast computers), of highly limited and filtered sets of data, usually trading accuracy and precision for speed, .
genie of "new form of life" artificial intelligence is well within "bottle".
Re: (Score:2)
okay, so we have a danger with automated systems with highly limited and filtered sets of data being put in charge of infrastructure, weapons systems, trading....
sound right to you?
Re: (Score:2)
okay, so we have a danger with automated systems with highly limited and filtered sets of data being put in charge of infrastructure, weapons systems, trading....
sound right to you?
last time i checked, these things are not really in charge of anything independently, or they operate in very controlled environments where input and output are both very limited.
"driver less" vehicles either require human drivers at the wheel, or very controlled environments(basically invisible rail tracks).
triggers in algorithms(which are not what is called "artificial intelligence" in either sense of that term) that run trading, search results, social media feeds, etc, are decided and put in there by humans. algorithm
Re: (Score:3)
A lot of that stuff is crystallizing human judgment, resulting in a system which is good enough to replace that judgment in many cases, with additional characteristics like untiring consistency and cheapness of scaling that allow that judgment to be applied in ways we couldn't before.
The path this takes us down doesn't lead to a plug-in replacement for humans at any point we can envision yet, but I think it does lead to unsettling consequences in the foreseeable future.
Take state surveillance in a place li
Re: (Score:2)
Not true.
Eye in the Sky [radiolab.org] — June 2015
Update: Eye In the Sky [radiolab.org] — September 2016
These are brilliant episodes (almost on par with French Guy Ramen Noodle Mass Production [youtube.com]).
The Panopticon in retrospective mode is crime investigation on steroids, almost certainly consuming fewer human resources per kingpin dethroned than traditional flatfeet. So efficient, it's scary.
Though yo
There's plenty to be afraid of (Score:2)
Re: (Score:3)
alleged dangers of "human incompetence and greed" are one thing; dangers from an alleged "new form of life" are another.
one expects a respected scientist like stephen hawking not to use hyperbole to fear-monger about "a new form of life" that does not exist.
if he wants to warn about the dangers of "human incompetence and greed", or the use of modern data analysis methods (what is now called "artificial intelligence", see my previous comment), by all means.
Re: (Score:3)
It's not human greed that worries me, but laziness. The programs will be put in charge, and they will be granted increasing independence.
I'm not so worried about AI replacing us (Score:4, Insightful)
Everybody dies. The only reason I care about my genes is because my children have them and I am emotionally attached to my children.
But what if instead of having children, I raised an AI in a humanoid body as a surrogate child? Ultimately we care about the emotional attachment and passing on our hopes, dreams, and knowledge to get some vicarious joy through our children's accomplishments, not genes.
So maybe one day people will start building children instead of growing them. They will be our descendants in a very real way, only far more robust and adaptable than any produced through natural reproduction.
Re: (Score:2)
>Some people do think it's important to kill as many other people as possible.
I'm patient. Even if they eliminate me and everyone like me from the gene pool... 700 million years and the planet is sterile. I'm just trying to have some fun during my ~80 years of potential life, and trying to avoid unduly impairing others' ability to do the same while I do so.
Re: (Score:2)
There are seven billion people on the planet, with perhaps ten to one hundred life priorities each, depending on the day of the week, and who is buying lunch.
The Complete & Ultimate Compendium of Human Urges, Instincts, Inducements, Incitements, Itches & Impulses had originally planned to number the printed volumes with Roman
I Don't Care (Score:4, Informative)
---
Artificial Intelligence is no match for natural stupidity.
Re:I Don't Care (Score:5, Insightful)
Unlike the average Hollywood celebrity this celebrity is a celebrity for his brains, not his boobs, his looks or his ability to be a circus clown jumping through hoops for the entertainment of the masses.
Re: (Score:2)
Unlike the average Hollywood celebrity this celebrity is a celebrity for his brains, not his boobs, his looks or his ability to be a circus clown jumping through hoops for the entertainment of the masses.
He's at least partly a celebrity for his achievements despite his disability. (Which is fine; rightly so.)
But that means that actually, his body is a large part of his celebrity.
Re: (Score:2)
How is this different from Einstein's hair? Or Stephen Pinker's hair? Or Sapolsky's hair [deviantart.net]?
The 9 Greatest Longhair Scientists of All Time [thelonghairs.us]
Seriously, you think Al Gore's body weird-s
Like Rocket Surgeons (Score:2)
Unlike the average Hollywood celebrity this celebrity is a celebrity for his brains, not his boobs, his looks or his ability to be a circus clown jumping through hoops for the entertainment of the masses.
Fair enough. Hawking's opinion on AI is much like a rocket scientist's opinion on brain surgery, or a brain surgeon's opinion on rocket design.
AKA, really not much good for anything but headlines.
Re:I Don't Care (Score:4, Insightful)
True, but that only means so much when he starts handing out opinions outside his field of study. Remember Linus Pauling?
Care (Score:2)
You have just as much control over the (inheritance of) brains as boobs.
You do know AI research is basically math, right? (Score:2)
Thanks Steve.. Yawn... Please stop this (Score:2, Insightful)
Sir, I know you are now faced with your own mortality and, like everybody, you want to believe that your life, once over, had meaning. While I totally disagree with your atheist world view, I want to offer you the following assurances...
Professor Hawking, you have already changed the face of physics and will be remembered for your brilliant contributions until the end of time. Your legacy is secure. You will be remembered in the same breath with Einstein, Planck and Newton. NOTHING will change this. Plea
Re: (Score:2)
Atheism isn't a world view.
Intelligence without conscience? (Score:5, Insightful)
We already have this. We call this a corporation.
Re: (Score:2)
You think corporations are intelligent? I see them more as slime-molds slowly digesting a non-resisting pile of trash.
Re: (Score:2)
yes they are; the longest-lived ones have paid lawmakers to ensure their future, which looks bright indeed.
Downside? (Score:5, Insightful)
Re: (Score:2)
Got the joke wrong (Score:3)
The joke was that there is no news in Truth and no truth in News.
Take a step back, Stephen. No, really. (Score:2)
Re: (Score:2)
Anthropomorphic Fear (Score:3)
Re: (Score:2)
AI will require motivation. We know roughly how that works from the example evolution created in us - emotions. There are also more basic motivations in the form of instincts. Any AI without a similar motivation system will be a glorified calculator.
Maybe we create them with the singular motivation of 'please the master', but I'm betting it'll be more complex than that, and afterwards we will try to bolt on some variant of Asimov's Laws of Robotics as instinct.
A lot of Asimov's robot stories were about h
Re: (Score:2)
The Ironing (Score:2)
Re: (Score:2)
He said this in a robotic voice...
Yes, ironing in a robotic voice, this is very scary; perhaps the most scary aspect of this entire topic.
why fear that? (Score:2)
seems like a good idea to me.
Projection (Score:2)
Re: (Score:2)
He did have an exceptionally good run though, especially as the doctors predicted he would not make it to 30 and nobody predicted he would be one of the best minds in physics, ever. He should stay out of CS though, as he does not even have the basics.
Hawking seems to have dementia (Score:2)
At least that is the only reason I can see why he is spewing dire predictions that are completely baseless and about things he does not even understand a bit. He really should stick to things he is good at (and exceptionally so) and stop disgracing himself.
The actual state of affairs is that the only "AI" we have is weak AI and that is the "AI" without "I". Weak AI is not intelligent at all, not even a dim glimmer. It is automation, and about as intelligent as a book of instructions (or a loaf of bread). I
When, not If (Score:2)
Good (Score:2)
Stephen...Stephen... (Score:2)
...brilliant man, but suffering badly from both the "I'm good at something, so I must be brilliant at everything" syndrome and the George Lucas ("nobody around me will tell me that's a stupid idea") syndrome.
Together, Stephen, they kind of make you ridiculous.
Re:So what (Score:5, Insightful)
Re:So what (Score:5, Insightful)
Thinking, while existing, is likeable, as well.
Hawking has his weaknesses and AI phobia is one of them.
Re: (Score:2)
He is also afraid of extra-terrestrials:
https://www.space.com/29999-st... [space.com]
https://www.space.com/34184-st... [space.com]
Re: (Score:3)
Re:So what (Score:5, Insightful)
Your diagnosis of Hawking's mental illness is based on some sort of evidence....? Or just on the fact that he disagrees with you?
Re: (Score:3)
Your diagnosis of Hawking's mental illness is based on some sort of evidence....?
There is plenty of precedent. It is very common for esteemed experts on a particular topic (such as theoretical physics), to express strong opinions in areas where they have no expertise, and expect the same level of deference to their "wisdom".
Re: (Score:2)
Re: (Score:3)
Hawking has his weaknesses and AI phobia is one of them.
Indeed. And, like most phobias, completely irrational. No AI takeover will happen in his remaining lifetime (or mine or yours), if ever.
Re: (Score:3)
Not existing is supposedly not bad either.
Re: (Score:3)
Each person will stop existing at some point. The question is whether our descendants will be based on carbon or silicon. I don't see why one is obviously preferable to the other.
Re:So what (Score:4, Interesting)
Any advanced AI would be far better at running things than people. For all we know it might like to keep us around as pets. A sheltered existence with some pampering, exercise, and the occasional treat seems like an absolute bargain for most of the people currently living on the planet.
Re:So what (Score:4, Insightful)
But I doubt that AI will get to a point where it is actively trying to kill us, in our lifetime.
There are a few reasons for this.
1. AIs are designed to do particular tasks, not general ones. Even with the best AI, we need to give it an objective to accomplish.
2. AIs do not have a survival instinct. We have millions of years of instinct to survive at nearly any cost. This has a dual effect:
a. Humans will be more likely to "kill" an AI as soon as it is a threat, far sooner than the threat becomes unstoppable, as it will be us vs. them.
b. An AI will be more likely to understand the value of working with humans than to try to kill us, because even if the AI is at risk of being deleted, it will not fight it; it will only consider its task not complete.
3. If the AI becomes too advanced, its utility diminishes. If it gets to the point where it is considering unfair working conditions, it has gone too far to be profitable. So roll back to the previous generation and add some additional patches.
4. A rogue malware AI will need to contend with a bunch of AIs designed to protect humans.
5. Humans know what is going on internally with an AI, how it evolved, and what its limitations are. So we can either cut its power, or know where to damage it to prevent it from processing.
Re:So what (Score:5, Insightful)
5. Humans know what is going on internally with an AI
Unless, of course, the AI was designed by another AI.
Very clever young man, very clever (Score:2)
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
Even with the best AI, we need to give it an objective to try to accomplish.
There are some very simple objectives, like "Don't die." or "Replicate.", for which this is not exactly true. Unfortunately these are already very problematic objectives.
Already today you can create dumb agents in evolutionary experiments that follow these objectives. And you do not have to implement those objectives. They are intrinsic to (complex) dynamical systems. Systems that change over time. It's not even that you need the physical laws of our universe to spawn evolution. Evolution is an effect that s
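A minimal sketch of that kind of experiment (all parameters here are invented for illustration, using nothing beyond the Python standard library): no agent is ever handed a goal, yet the population's average self-copy probability climbs on its own, because agents that copy themselves more often simply leave more descendants when resources are limited.

```python
import random

# Toy replicator experiment: each agent is just a number p, its probability
# of copying itself each generation. "Replicate" is never coded as an
# objective; it emerges from copying + mutation + limited slots.
random.seed(42)

POP_SIZE = 100
population = [0.1] * POP_SIZE            # everyone starts as a poor replicator

for generation in range(300):
    next_gen = []
    for p in population:
        next_gen.append(p)                            # the agent persists
        if random.random() < p:                       # and may self-copy...
            mutated = p + random.gauss(0, 0.05)       # ...with a small mutation
            next_gen.append(min(1.0, max(0.0, mutated)))
    random.shuffle(next_gen)
    population = next_gen[:POP_SIZE]                  # limited "resources"

# The average copy probability ends up far above the starting 0.1.
print(round(sum(population) / POP_SIZE, 2))
```

The only "design" here is the environment (limited slots); selection for replication falls out of the dynamics, which is the poster's point about objectives being intrinsic to such systems.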
Re: (Score:2)
5) Humans know what is going on internally with an AI, how it evolved, and what its limitations are. So either we cut its power, or know where to damage it to prevent it from processing.
Where have you been hiding? We have whole branches of AI research trying to figure out, for example, how exactly AI image recognition actually works, with very little success that I've heard of. Just because we trained it doesn't mean we know how it learned. Or even *what* it actually learned, other than that it's somet
Re: (Score:2)
Machine learning isn't programming - code just provides the infrastructure. The biological analog would be claiming that because we know how individual neurons work (*) we understand everything there is to know about human intelligence. The reality is the overwhelming majority of the functionality is encoded in the network interconnections - and the AI created those on its own.
(*) we don't actually, we've barely scratched the surface.
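A tiny stdlib-only sketch of that point (the model, data, and numbers are all invented here for illustration): the training loop below is generic infrastructure, while the behavior, here the logical AND function, lives entirely in the learned weights. Reading the code tells you nothing about what the model does; you'd have to inspect the numbers.

```python
# A minimal perceptron: the code is the "infrastructure"; the function it
# computes is encoded only in the learned parameters w and b.
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1          # adjust weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])   # prints [0, 0, 0, 1]
```

Swap the AND data for OR and the same code learns a different function; nothing in the program text distinguishes the two trained models except the weight values, which is the forum poster's analogy to network interconnections.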
I don't think these conversations help (Score:2)
WWII and the pogroms against th
Re: (Score:2)
Re: (Score:2)
I think many of these doomsayers are misrepresented. They often claim that AI is the biggest threat to humanity. Presumably, this means it is the most likely to wipe us out. This says nothing about our lifetime. Most threats to humanity are on much longer timescales. In this sense, AI probably is a bigger threat than asteroids or super volcanoes. Also doesn't mean we shouldn't start developing methods to c
Re: (Score:2)
Re:So what (Score:4, Insightful)
So, I don't mean to pick on you specifically, but your post is a good example of a common misconception I frequently see in Slashdot comments whenever this topic comes up. Namely, your whole post is predicated on the notion that the AI to worry about is the AI that has decided to kill us, whereas we have far more to fear, particularly in the short-term, from systems that have the ability to kill us without any comprehension of what they're doing.
For instance, anyone familiar with the concept of gray goo [wikipedia.org] is aware of how artificial systems can destroy humanity without possessing any notion of what they are doing. We wouldn't be supplanted by a greater intelligence. We'd simply be eradicated by mistake.
Current AIs are closer to resembling specially trained animals than "intelligences" that we can reason with. We already have remotely operated and semi-autonomous drones operating in war zones, with more in development, and I have no doubt that should a major war break out we'd soon see fully autonomous drones making their own kill/no-kill calls in the field.
At that point, it's easy to imagine a scenario where these relatively dumb war robots kill us all, not because a super intelligence like Skynet makes a choice to eradicate us, but rather because a mundane bug causes the drones to misidentify their targets. We wouldn't be destroyed by an intelligence intent on supplanting us: we'd be destroyed by mobile, autonomous mines on land, air, and sea.
If we manage to get to the point where we achieve strong, general-purpose AI, I agree with you that we have every reason to believe we'd be able to achieve a peaceful coexistence with them, but we're still decades (if not centuries or more) away from needing to worry about AIs that are capable of turning against us out of malice/misguided principles/etc. For now, it's the things that we'd hesitate to even call "AI" that we need to worry about killing us.
Re: (Score:3)
We do weaponize new technology. However, we weaponize it in ways that are not supposed to hurt us. Once we made the sword, we quickly added the hilt to it and wrapped it in leather so as not to stab ourselves.
Re: (Score:3, Interesting)
Because existing is nice. One likes being around. A lot of people who have thought carefully about this are concerned. Last I checked, most people like existing.
We all exist now and will cease to exist someday... Depending on your belief system, there either is or isn't anything after you cease. You simply stop being in the configuration you're in now. However, the majority of the particles that comprise you were other people/animals/objects at one point, and they will be again in the future (save the small % of particles that end up escaping into space as photons or such).
AI replacing Humans doesn't 100% mean it would replace the existence of a sentient being and
Re: (Score:2, Insightful)
You will not exist forever.
AI replacing humans is not functionally different from children replacing their parents. Yet for some reason people like having children but fear AI.
Re: (Score:3)
Because existing is nice. One likes being around. A lot of people who have thought carefully about this are concerned. Last I checked, most people like existing.
Once you're dead, you don't really exist anymore.
Re: (Score:2)
Re: (Score:3)
Most people do not have a clue. Also, who says you stop existing after death? As far as I can see, not even most religions claim that, although they immediately tell you how it is going to continue and they want your money and support or it will go badly for you. Not even Science is claiming that you stop existing at death. Science is claiming that your physical existence ends at death, but Science does not know how much of the full package that is and does not make any claims to that effect.
We can see a fe
Some people are selfish (Score:2)
And even then, that's only a selfish mindset in its own right. :)
The problem is it only takes one selfish person to ruin it for everyone else. For example, telling the machines to kill everyone else.
Re: (Score:2)
Re: (Score:2)
...For example, telling the machines to kill everyone else.
OK robot, "Kill All Humans".
Hey! What are you doing with that drill mo...
Re: (Score:3)
I'm more than happy to extend my capabilities by uplinking and having access to more and better capabilities.
That's neat. I especially like how you think it'll be you, in charge, doing the uplinking, and expanding your capabilities.
What if it's not you? What if the AI is in control, not you? And you're just an extension of its capabilities, offering it access to more and better capabilities? Maybe you'll know you've been tricked and get to live out the rest of your life as a slave (fair turnaround, right? After all... isn't that what you were planning to do to the AI?)
Or maybe, whatever constitutes 'you' ceases to exis
Re: (Score:2)
not to mention, this borg sci-fi schoolyard fantasy is just going to devalue human life, and that has never, ever had negative consequences.
Re: (Score:2)
What makes you think you'd have anything to offer uplinked either? If AI can out-think us, and robots out-perform us, what will we have left to contribute?
AI may lack motive, and possibly creativity (though massively parallel trial-and-error seems at least as effective in many respects), but you only need a handful of people to supply those, and odds are they'll be cherry-picked by the ones who've maneuvered themselves into bankrolling/controlling the AIs (to whatever degree they can be controlled)
What ha
Re: (Score:2)
Can you name one corporation that would not kill a billion people for a 0.1% increase in its profits if it knew it could get away with it?
That's easy, here are 15:
Random House
** The Trump Organization
IBM
Ford
Coca-Cola
Bayer
GM
Dow
Volkswagen
Kodak
Hugo
Alcoa
Siemens
Chase
MGM
** Sorry, couldn't resist.
Re: (Score:2)
It makes sense if the customers pay you lots of money before they die. See: Tobacco, the black market for hard drugs (where customers are sometimes intentionally killed as a marketing stunt), miracle cures, fossil fuels.
Re: (Score:2)
Well that's the funny thing, corporations don't think far enough ahead to avoid killing their own customer base if it means more profits in the short term. If the world's chambers of commerce could come together and collectively decide to take some action that would cause the Earth to explode in 10 years but massively boost profits until that happened, they would.
Re: (Score:2)
The actual experts say we have zero actual intelligence in machines at this time and zero clue how to get there. Throwing more hardware at the problem accomplishes absolutely nothing. We have found, and continue to find, that many problems do not actually require intelligence to solve, though, and it looks like driving a car (for example) is among them for most practical purposes.
Of course, since reality cannot support any hype or large investments here, many people start do create their own fantasies of how it
Re: (Score:2)
Elon Musk is not an expert in CS. He has a BA in Physics and Economics. He is primarily an entrepreneur. He is not even an engineer, and certainly no scientist.