What AI Experts Think About the Existential Risk of AI 421
DaveS7 writes: There's been no shortage of high profile people weighing in on the subject of AI lately. We've heard warnings from Elon Musk, Bill Gates, and Stephen Hawking while Woz seems to have a more ambivalent opinion on the subject. The Epoch Times has compiled a list of academics in the field of AI research who are offering their own opinions. From the article: "A 2014 survey conducted by Vincent Müller and Nick Bostrom of 170 of the leading experts in the field found that a full 18 percent believe that if a machine super-intelligence did emerge, it would unleash an 'existential catastrophe' on humanity. A further 13 percent said that advanced AI would be a net negative for humans, and only a slight majority said it would be a net positive."
Funny, that spin... (Score:5, Insightful)
Re:Funny, that spin... (Score:5, Informative)
Indeed, emphasis in reporting. To break it down:
Extremely good - 24%
On balance good - 28%
Neutral - 17%
On balance bad - 13%
Extremely bad - 18%
So, over half good, less than a third bad. Sure sounds different.
Re:Funny, that spin... (Score:5, Informative)
'Well ... in the unlikely event of it going seriously wrong, it ... wouldn't just blow up the university, sir'
'What would it blow up, pray?'
'Er ... everything, sir.'
'Everything there is, you mean?'
'Within a radius of about fifty thousand miles out into space, sir, yes. According to HEX it'd happen instantaneously. We wouldn't even know about it.'
'And the odds of this are ... ?'
'About fifty to one, sir.'
The wizards relaxed.
'That's pretty safe. I wouldn't bet on a horse at those odds,' said the Senior Wrangler.
-Terry Pratchett et al., The Science of Discworld
Re:Funny, that spin... (Score:4)
Re: (Score:3)
Re: (Score:3)
You might want to consider Europe and America in this relationship. Two points if you can decide which is which; ten points if more than 50% of the readers agree with you.
Re: (Score:3)
As a train is to muscular power, ASI is to intellect. This is like dodos debating the impact of the arrival of sentient bipeds. Either it will be really good for them, in that they get their lot in life improved by going to zoos or homes around the world as pets, or they will all get killed and eaten/have their habitat destroyed and die out.
Re:Funny, that spin... (Score:5, Insightful)
Spin, sure, but it's a waay bigger minority than I expected. I'd even say shockingly large.
The genius of Asimov's three laws is that he started by laying out rules that, on their face, rule out the old "robot run amok" stories. He would then write, if not a "run amok" story, then one where the implications aren't what you'd expect. I think the implications of an AI that surpasses natural human intelligence are beyond human intelligence to predict, even if we attempt to build strict rules into that AI.
One thing I do believe is that such a development would fundamentally alter human society, provided that the AI was comparably versatile to human intelligence. It's no big deal if an AI is smarter than people at chess; if it's smarter than people at everyday things, plus engineering, business, art and literature, then people will have to reassess the value of human life. Or maybe ask the AI what would give their lives meaning.
Re: (Score:2)
Spin, sure, but it's a waay bigger minority than I expected. I'd even say shockingly large.
Shockingly? I think it's good that we have experts in a field developing high-impact tools who are pessimistic about the uses of those tools. If 100% were like "yeah guys no sweat, we got this!" then I would be more concerned. The result of this poll, in my mind, is that we have a healthy subset who are going to be actively working towards making AI safe.
Re: (Score:2)
Why would you build strict rules akin to his Laws into the AI? You don't build a strict rule, you build a "phone home and ask" rule. There may be a need for something analogous to the First Law, or its corollary, the Zeroth Law; but the Third Law as a strict rule equal to the others is just stupid. The major point of building robots is so that humans don't have to do dangerous things, which means that a lot of them are supposed to die. The Second Law is even dumber. Robots will be somebody's property,
Re:Funny, that spin... (Score:5, Insightful)
Re: (Score:3)
I wouldn't call it a metaphor, nor would I say that Asimov's point is that you can't codify morality. His point is more subtle: a code of morality, even a simple one, doesn't necessarily imply what we think it does. It's a very rabbinical kind of point.
Re: (Score:3)
Also notice that when the zeroth law was added it just made matters worse because more laws allow for more contradictions, loopholes, and paradoxes, exactly like the evolv
Missing the key point (Score:5, Insightful)
Everyone is missing the key thing here. The question asked was "if a machine superintelligence did emerge", which is like asking "if the LHC produced a black hole..." There's nobody credible in AI who believes we have the slightest clue how to build a general AI, let alone one that is 'superintelligent'. Since we lack even basic concepts about how intelligence actually works, we're like Stone Age man worrying about the atomic bomb. Sure, if a superintelligent AI emerged we might be in trouble, but nobody is trying to make one, nobody knows how to make one, nobody has any hardware that there is any reason to believe is within several orders of magnitude of being able to run one, etc.
So, what all of these people are talking about is something hugely speculative that is utterly disconnected from the sort of 'machine intelligence' that we ARE working on. There are several forms of what might fall into this category (there's really no precise definition), but none of them are really even close to being about generalized intelligence. The closest might be multi-purpose machine-learning and reasoning systems like 'Watson', but if you actually look at what their capabilities are, they're about as intelligent as a flatworm, hardly anything to be concerned about. Nor do they contain any of the sort of capabilities that living systems do. They don't have intention; they don't form goals or pose problems for themselves. They don't even have a representation of the existence of their own minds. They literally cannot think about themselves or reason about themselves because they don't even know they exist. Beyond that, we know nothing, zero, about how to add that capability.
The final analysis is that what these people are being asked about is virtually a fantasy. They might as well be commenting on an alien invasion. This is something that probably won't ever come to pass at all, and if it does it will be long past our time. It's fun to think about, but the alarmism is ridiculous. In fact, I don't see anything in the article that even implies any of the AI experts think it's LIKELY that a superintelligent AI will ever exist; it was simply posited as a given in the question.
Re: (Score:3)
"We" don't have to make one. All we have to do is set an AI towards self improvement/production of better AIs. THAT is where superintelligence comes from. All we have to do is make one that is an idiot savant geared toward AI design.
We do that with every new generation of babies, and it hasn't produced a super intelligence yet. What makes you think doing it ON A COMPUTER would make any difference?
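To make the disagreement concrete: here's a toy sketch in Python of the "AI improves AI" feedback loop described above (the performance function and all numbers are invented for illustration; no real system works this way). Whether the loop takes off or fizzles depends entirely on the shape of the performance landscape, which is exactly what's in dispute:

    import random

    def performance(design):
        # Stand-in benchmark with a fixed peak at design == 3.0, so the
        # loop shows diminishing returns rather than unbounded takeoff.
        return -(design - 3.0) ** 2

    def improve(design, tries=50):
        # One "generation": search for a design that scores better.
        best = design
        for _ in range(tries):
            candidate = best + random.gauss(0, 0.1)
            if performance(candidate) > performance(best):
                best = candidate
        return best

    design = 0.0
    for generation in range(10):
        design = improve(design)
        print(generation, round(design, 3), round(performance(design), 6))

In this sketch the loop plateaus near design = 3.0; a landscape with no nearby ceiling would give the opposite impression. The code settles nothing, but it shows what the argument is actually about.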
Re: (Score:3)
Re: (Score:3)
That is magical thinking. It has no place in proper engineering practice.
There are zillions of reasons that interfere with the ability to work faster on larger problems, yet they can all be summarized with the words "non-linear growth". Try solving a travelling salesman problem twice as big with merely twice as fast hardware; it will grind to a halt.
We know the substrate of
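For a rough sense of the non-linear growth point above, a back-of-the-envelope sketch in Python (brute-force travelling salesman; the evaluation rate is an assumed, illustrative figure):

    from math import factorial

    def tours(n_cities):
        # Distinct tours a brute-force search must consider: (n - 1)! / 2
        return factorial(n_cities - 1) // 2

    rate = 1e9  # tours evaluated per second (assumed)
    for n in (10, 20):
        print(n, "cities:", tours(n) / rate, "s at 1x,",
              tours(n) / (2 * rate), "s at 2x")

Ten cities take a fraction of a millisecond; twenty take about two years, and doubling the hardware speed only brings that down to about one year.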
Re: (Score:3)
Spin? When for every two or three members of a profession who consider their job a net positive, there's one who considers their job an existential threat to all humanity, you're complaining that the 52% who think it will be overall good are being called a slight majority instead of just a majority.
Not that we have any choice but to continue trying to build an AI.
Re:Funny, that spin... (Score:5, Insightful)
In light of the fact that Stephen Hawking, Bill Gates and Elon Musk are not even remotely experts in A.I., your opinion is fairly odd.
Re:Funny, that spin... (Score:5, Insightful)
In light of the fact that Stephen Hawking, Bill Gates and Elon Musk are not even remotely experts in A.I., your opinion is fairly odd.
Question: What role do people who think that AI research is dangerous hold in the field of AI research?
Answer: None...because regardless of their qualifications, they wouldn't further the progress of something they think is a very, very bad idea.
Asking AI experts whether or not they think AI research is a bad idea subjects your responses to a massive selection bias. And discounting the views of others because they don't specialize in creating the thing they think should not be created does the same. You do realize that, at its core, that's your only point... not that Hawking is an idiot, or that Gates doesn't know anything about technology. It's just that they don't work in the field of AI, therefore they must not have any inkling whatsoever as to what they're talking about.
Re:Funny, that spin... (Score:5, Insightful)
the opinion of people like Stephen Hawking, Bill Gates and Elon Musk
I disagree with the premise that fame is more important than domain-specific expertise.
Re: (Score:3)
I disagree with the premise that fame is more important than domain-specific expertise.
I disagree that domain-specific expertise permits objectivity in that domain.
(i also agree with you)
Re: (Score:3)
I disagree that domain-specific expertise permits objectivity in that domain. (i also agree with you)
Good point - though I didn't say that exactly, just that IMHO fame ought to be a lesser factor than domain knowledge in estimating the truth value of statements.
Re: (Score:3)
Especially if those with domain-specific expertise need to save their careers.
The problem with this argument is that it always rules out the opinions of those most likely to have correct opinions on any subject. You need evidence for specific instances of corruption involving the specific individuals whose statements are being evaluated in order for this to-the-wallet argument to have any weight.
Re:Funny, that spin... (Score:4, Insightful)
... if AI research suddenly got heavily regulated
"Heavy regulation" would achieve nothing more than shifting research elsewhere. If you really believe that AI is a threat, you should support more research, and more funding, so that we (western democracies) get there first, rather than, say, the authoritarian government of China.
Re:Funny, that spin... (Score:4, Interesting)
... some AI academics desperately trying to save their jobs
Most bleeding edge AI research is being done by Google, Facebook, and Baidu. The three of them have hoovered up all the big names in AI, and are hiring new graduates with six figure salaries as fast as the diplomas can be handed out. So the AI researchers are not "academics" and they certainly aren't "desperate".
Re: (Score:3)
Firstly you say that heavy regulation would shift research away from AI
No. It would not shift research away from AI. It would shift the research away from the countries doing the heavy regulation.
Regulating nuclear weapons works because you need plutonium, which is hard to obtain and easy to detect. Regulating AI research is harder, because all you need is a GPU, which you can buy at Walmart, and GPUs are already in a billion computers. We are NOT going to "regulate" AI back into the bottle.
Re: (Score:2)
Yeah, forgive me if I trust a perennial Physics Nobel Prize contender
Poor guy [xkcd.com].
At least he hasn't hit this point yet [smbc-comics.com]
Your trust is worth nothing, though.
Re: (Score:3)
Let alone your idiotic post, and those idiotic "comics" drawn for an audience of mentally diseased imbeciles.
Let me say it more explicitly for you. Chances are you've never looked at Bill Gates' code. You don't know how good it is. You haven't seen any of his work in AI. You don't understand the work Stephen Hawking did in physics, and Elon Musk? Really? Why do you even think his opinion on AI is worth anything? The only reason you trust them is because they are famous, because they are celebrities.
You are trusting them because they are famous, not because of their skill. You don't even know what their skill is
Re:Funny, that spin... (Score:5, Informative)
To be fair, he funds the Machine Intelligence Research Institute, which is devoted to mitigating existential risk from AI, and is surely getting very detailed reports from them, making him a highly knowledgeable layperson at worst (a direct expert at best).
Yeah, no. (Score:4, Insightful)
Nonsense. That's just hero worship mentality. Very much like listening to Barbra Streisand quack about her favorite obsessions.
Bill Gates' opinion is worth more than the average person's when it comes to running Microsoft. Elon Musk's opinion is worth more than the average person's when building Teslas and the like. Neither one of them (nor anyone else, for that matter) has anything but the known behavior of the only high intelligence we've ever met to go on (that's us, of course). So it's purest guesswork, completely blind speculation. It definitely isn't a careful, measured evaluation. Because there's nothing to evaluate!
And while I'm not inclined to draw a conclusion from this, it is interesting that we've had quite a few very high intelligences in our society over time. None of them have posed an "existential crisis" for the planet, the human race, or my cats. Smart people tend to have better things to do than annoy others... also, they can anticipate consequences. Will this apply to "very smart machines"? Your guess (might be) as good as mine. It's almost certainly better than Musk's or Gates', since we know they were clueless enough to speak out definitively on a subject they don't (can't) know anything about. Hawking likewise, didn't mean to leave him out.
Within the context of our recorded history, it's not the really smart ones that usually cause us trouble. It's the moderately intelligent fucktards who gravitate to power. [stares off in the general direction of Washington] (I know, I've giving some of them more credit than they deserve.)
Re: (Score:2)
we've had quite a few very high intelligences in our society over time. None of them have posed an "existential crisis" for the planet, the human race, or my cats.
Only because the Vice Presidential Action Rangers stopped them from creating a singularity with the LHC. And that was when they were led by Biden but before they restored Gygax with the vampire bacillus.
Re: (Score:2)
And while I'm not inclined to draw a conclusion from this, it is interesting that we've had quite a few very high intelligences in our society over time. None of them have posed an "existential crisis" for the planet, the human race, or my cats. Smart people tend to have better things to do than annoy others... also, they can anticipate consequences. Will this apply to "very smart machines"? Your guess (might be) as good as mine. It's almost certainly better than Musk's or Gates', since we know they were clueless enough to speak out definitively on a subject they don't (can't) know anything about. Hawking likewise, didn't mean to leave him out.
Well, maybe not the actual scientists, but there are quite a few dead cultures and species that were wiped out because guns and bullets beat spears and claws. And I don't think anyone doubts Oppenheimer was a bright guy, even though he wasn't the one dropping the nukes. Since you mention cats, would you like an AI treating you like you treat the cats? My guess is you would not, particularly not when it decides we're too fickle and resource-hungry and it would rather not have cats.
The reason I'm not worried is because w
Re: (Score:3)
Frankly, that would be awesome.
Re:Yeah, no. (Score:4, Interesting)
Oh no, we can -- and should -- speculate. Consider everything we can think of. Consider.
What we should NOT do is create a self-fulfilling prophecy by taking the verbal fecal output of doom-criers as the inevitable or even as the likely.
Re: (Score:2)
Re:Funny, that spin... (Score:4, Insightful)
Already there (Score:2)
We already have superhuman AI. Limited superhumanity. Watson beat the shit out of the Jeopardy champions because of superhuman reflexes and superhuman search time.
Image classification and search algorithms are superhuman in that they work rapidly and around the clock, even if the results may be so-so.
This trend will become more and more apparent as more fields come within the reach of specialist AI; essentially we're building autistic savant superhumanity. And like autistic savants these will not be much of a malicious exi
Re: (Score:2)
By the time we can actually build a universally superhuman AI that could form willful malicious intent, we'll be so immersed in AI and so used to building, dealing with and monitoring AI that it will be a mostly forgotten nonissue.
Unless, of course, that isn't true.
Re: (Score:2)
The focus on consciousness as a guiding beacon and the insistence that consciousness is an indivisible unity are something philosophers made up because they needed something to debate endlessly with no chance of ever getting anywhere.
If we define any umbrella term to be indivisible we can have the same pointless masturbation over its unattainable special snowflakeyness.
We acknowledge that a Nation or Computer or Corporation is something consisting of components that can be identified and described with some
The Sony connection (Score:5, Insightful)
"The Sony hacking incident last year was ample demonstration that our information systems are becoming more and more vulnerable, which is a feature, not a bug, of the increasing transfer of our infrastructure into digital space."
Sorry guys, I can't stop laughing. This writer is a clown. The Sony incident demonstrates that Sony is incompetent. It was never a threat against humanity, only against the gang of fat butts at Sony Pictures.
Re: (Score:2)
It's also patently stupid to suggest that anything is "more vulnerable" now than it used to be. Things may be more interconnected, and more likely to be attacked than in the past, but they are not getting "mo
Re: (Score:2)
but they are not getting "more vulnerable" unless your management is A) not willing to spend the reasonable cost for appropriate security controls, or B) doesn't listen to their IT security staff when those systems start raising warning flags, or C) fails to hire competent security personnel in the first place.
Which happened.
Re: (Score:3)
It's also patently stupid to suggest that anything is "more vulnerable" now than it used to be. Things may be more interconnected, and more likely to be attacked than in the past, but they are not getting "more vulnerable" unless your management is A) not willing to spend the reasonable cost for appropriate security controls, or B) doesn't listen to their IT security staff when those systems start raising warning flags, or C) fails to hire competent security personnel in the first place.
I disagree strongly with this. Let's think about the case of industrial or governmental espionage. 50 years ago, saboteurs had to physically remove documents (or whatever they wanted) from the target. There were some quite ingenious inventions--small (for the time) cameras, hidden canisters of film, briefcases with hidden compartments, etc.--but ultimately there was a very physical component. Today it's possible to remotely infiltrate an organization and exfiltrate more "documents" than could previously have been
from my journal a couple of days back (Score:2)
It seems to me that the AI systems we create are all very application specific, like IBM's Watson. How many hours of work did it take just to get Watson to be able to play a single simple game? It's not a generic AI system; it wasn't an AI that could enter any quiz.
Watson was good at Jeopardy not because it had a good AI, but because its creators were highly intelligent and were able to code a computer to be good at Jeopardy; because *they*, not the computer, were intelligent.
Is there a computer that exists that
Re: (Score:2)
Well, the Google car has been rear-ended 7+ times
I believe the quoted number is for their entire fleet of automated cars, which AFAIK is of unknown size.
Re: (Score:2)
AI researchers are mostly busy working on weak-AI (which of course is a useful field in its own right).
One way street (Score:2)
Once the AI gets the win, there is no second round.
As we come to understand intelligence and create what is referred to as an "AI", we will find it consists of a number of interacting components. We already have some aspects, such as memory, computational speed and mathematical capability. Then there is the ratio of the clock speed of the AI to the alpha rhythm - AKA the human clock speed.
The fastest computers have clock speeds of a few gigahertz, plus massive parallelism - which means that an AI mi
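For what it's worth, the arithmetic behind that clock-speed comparison is simple (both figures below are ballpark assumptions, and a raw clock ratio says nothing about what gets computed per tick):

    alpha_rhythm_hz = 10    # human alpha rhythm, roughly 8-12 Hz
    cpu_clock_hz = 4e9      # a modern CPU core, a few GHz
    print("clock ratio: %.0e" % (cpu_clock_hz / alpha_rhythm_hz))  # ~4e+08

On those assumptions the ratio is on the order of hundreds of millions to one, which is the kind of gap the comment is gesturing at.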
What is the vision of the optimists? (Score:2)
Let's try and narrow this down a little: if the super-intelligence is created through duplicating the action of human neurons (including all
Re: (Score:2)
any AI project will have self improvement feedback built into it - unless the human designers leave that out to save us all - perhaps.
Since so many people and groups will compete on this subject, the fetters will vary. Of course, we may end up with an AI begetting a better AI which wants to kill the first AI; and who says there will be only one race of AIs?
An AI ecology will occur with smarter and dumber AIs which will expand to fill the ecosphere, all manner of AI, from viruses to cellular species, to
Re: (Score:2)
Once the AI gets the win, there is no second round.
Unless, of course, it doesn't turn out that way. There are several problems with the assertion. First, it is unlikely that there will be a single "the AI". Second, there's no reason humanity can't upgrade itself to become AIs as well. Third, the laws of physics don't change just because there is AI. Among other things, it does mean that humanity can continue to provide for itself using the tools that have worked so far. After all, ants didn't go away just because vastly smarter intelligences came about.
Re: (Score:2)
You're assuming that we programmed it to have a self-preservation instinct, desire to be loved, reproduce, and all that other BS evolution has saddled us with.
If it's programmed to be fat and happy because it's being fed a lot of data from humans to do interesting calculations, and it's dependent on humans for its continued access to the electrical grid, then the proper analogy isn't an insect we actively try to kill because it's eating all our food (like ants), but an insect we intentionally foster becaus
Re: (Score:2)
You're assuming that we programmed it to have a self-preservation instinct, desire to be loved, reproduce, and all that other BS evolution has saddled us with.
The earlier poster makes no such assumption.
If it's programmed to be fat and happy because it's being fed a lot of data from humans to do interesting calculations, and it's dependent on humans for its continued access to the electrical grid, then the proper analogy isn't an insect we actively try to kill because it's eating all our food (like ants), but an insect we intentionally foster because we like what they do (say, the ladybug) even if it ever goes evil.
"IF". If on the other hand, it is programmed to have motivations that turn out to be a problem, then the outcome can be different. There's also the matter of the AI developing its own motivations.
Hell, if you do the programming right, it will help design its replacement and then turn itself off as obsolete.
And doing the programming right is pretty damn easy, right?
Opinions and hype but no expert insight. (Score:2)
Most of the experts have a positive view, but let's focus on the ones we can skew into a fear of Skynet, along with celebrities. Woz's is one of the better opinions.
Domain-specific knowledge is needed to make educated guesses or at least informed assessments of the current threat level. Currently, AI is not at all intelligent; within a specific narrow domain an AI can do as well as or better than a human. Big deal. So can a horse or a car - they are superior within their specialized domain. We are nowhere ne
Anthropomorphizing (Score:5, Interesting)
IMHO, all of the fear mongering is based on anthropomorphizing silicon. It implicitly imputes biological ends and emotionally motivated reasoning to so-called AI.
I think that folks who don't have hands-on experience with machine learning just don't get how limited the field is right now, and what special conditions are needed to get good results. Similarly, descriptions of machine learning techniques like ANNs as being inspired by actual nervous systems seem to ignore 1) that they are linear combinations of transfer functions (rather than simulated neurons) and 2) that even viewed as simplified simulations, ANNs carry the very strong assumption that nothing happening inside a neuron is of any importance.
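To illustrate the first point, here is roughly all there is to a single "neuron" in a standard ANN -- a minimal Python sketch with made-up weights, not any particular library's implementation:

    import math

    def neuron(inputs, weights, bias):
        # A linear combination of the inputs...
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        # ...pushed through a fixed transfer (activation) function.
        return 1.0 / (1.0 + math.exp(-z))  # logistic sigmoid

    print(neuron([0.5, -1.2, 3.0], [0.1, 0.4, -0.2], bias=0.05))

There are no internal dynamics: everything a biological neuron does between its inputs and its output is assumed away.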
Re: (Score:3)
I don't think anthropomorphism is the correct term to apply here. The term applies to attributing human characteristics (intelligence, emotion, two hands, two legs, etc) to things that don't have them. But AI would presumably have a compatible intelligence and possibly emotion as well. Maybe even hands, legs, etc but that's largely irrelevant.
Furthermore, you might have things twisted around a bit. "Biological ends" may not be all that different from "machine ends" -- quest for power / energy / food, surviv
Re: (Score:3)
-- quest for power / energy / food, survival, and maybe even reproduction
But where do these come from? I submit that each one of these is only suggested here because we already have these motivations.
we're a biological vessel for intelligence
I consider this antimaterialist. Our bodies aren't vessels we inhabit (except in that they're literally full of fluids); they are us.
Re: (Score:3)
But where do these come from? I submit that each one of these is only suggested here because we already have these motivations.
So we have a demonstration that intelligence can have these motivations. Since AI is not a category determined by motivation, then it is reasonable to expect that AI can overlap with the category of intelligences that have such motivations.
we're a biological vessel for intelligence
I consider this antimaterialist.
I wasn't aware that saying something is "antimaterialist", especially when it's not, was somehow an argument that anyone would take seriously. In this case, one could imagine a transformation from a biological entity to, say, a strictly mechanical one where the intelligence re
Re: (Score:3)
it is reasonable to expect that AI can overlap with the category of intelligences that have such motivations.
Fair enough, but we aren't dealing with the belief that AI can in principle have such motivations, but the belief that any intelligence will have such motivations.
I wasn't aware that saying something is "antimaterialist", especially when it's not, was somehow an argument that anyone would take seriously.
That wasn't supposed to be an argument, in and of itself. I think this "vessel" viewpoint is a kind of closet dualism often exhibited by self-proclaimed materialists when pop psychological notions aren't closely examined.
Then the model of body (and also, the organ of the brain) as vessel for mind is demonstrated by actually being able to move the mind to a new and demonstrably different body.
But this seems to rest on an assertion that it would be the same mind. Set aside whether or not it's possible in principle to "t
Re: (Score:2)
That's a very short term view. One day, without a doubt, intelligence will emerge from something we create. It's only a matter of time. In the first few instances it may only be lower level intelligence, but when we create something at least as clever as us, that may very well be the end of our era.
Re:Anthropomorphizing (Score:4, Insightful)
Re: (Score:2)
ANNs carry the very strong assumption that nothing happening inside a neuron is of any importance.
Good point.
Re: (Score:2)
The concern isn't so much that the AI would have human-like goals that drive it into conflict with regular-grade humanity in a war of conquest, so much as that it might have goals that are anything at all from within the space of "goals that are incompatible with general human happiness and well-being".
If we're designing an AI intended to do things in the world of its own accord (rather than strictly in response to instructions) then it would likely have something akin to a utility function that it's seek
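A minimal sketch of that "utility function" idea (Python; the actions and their scores are invented purely for illustration):

    def choose(actions, utility):
        # The agent simply picks whichever available action scores highest.
        return max(actions, key=utility)

    scores = {"help_humans": 0.90, "hoard_resources": 0.95}
    print(choose(list(scores), scores.get))  # -> hoard_resources

Nothing outside the function itself encodes "general human happiness and well-being"; the behavior is whatever the scores happen to reward.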
Re: (Score:3)
The main reason AI might kill us all is that it is not anthropomorphic. In particular, it has a high probability of not feeling pity, not feeling empathy, not seeking clarification (even if the easiest path to fulfilling a request involves the incidental extermination of humanity), and on top of all that not being limited to human intelligence.
For example, if you asked a human to learn how to play chess, you would not expect that the first thing he'd do is kill you because the thing most likely to interfere
What Would We Be Competing For? (Score:2)
Re: (Score:2)
spontaneous thought (Score:4, Interesting)
An AI that can tell me exactly what color of red a rose is, what soil the rose can grow in, but that I should not buy that rose because it doesn't fit my girlfriend's taste profile, does not scare me at all.
It's the AI that says "schnozberries taste like schnozberries, and I like them", because that AI has embraced the absurdity of the universe and is capable of all the insanity of man.
Re: spontaneous thought (Score:2)
I should have added "that scares me" to that last line.
existential risks (Score:5, Funny)
I rate our existential risks, in descending order:
1. Space alien invasion
2. Zombies
3. Giant monsters summoned by radioactivity
4. Unusually intelligent apes
5. Artificial Intelligence run wild
6. Dinosaurs recreated from DNA in mosquitoes
I wrote a book about the topic (Score:2)
https://www.youtube.com/watch?... [youtube.com]
or
http://www.chromosomequest.com... [chromosomequest.com]
Who's AI (Score:2)
There are many people working with adaptive systems on a wide variety of problems. Many might even scoff at the idea that they are working on AI. But the critical point is when any one of these systems is flexible and adaptive enough to start improving the fundamentals of how it works. Once that magical poi
I'd say it's a negative too (Score:2)
As AI continues to prove itself capable of doing most jobs (top to bottom), what do we do with all the people who can't find work? My dad's opinion was to kill off the useless people. Funny how he thought my opinion of killing off all the individuals 65+ to balance the budget was monstrous.
2 THUMBS DOWN! (Score:2)
Capitalism at work (Score:5, Informative)
Experts' opinions do not weigh much. Even if they were all against it, as soon as there was profit to be made, it would happen anyway.
We forget the role of emotions and instinct. (Score:3)
In the discussion on Artificial Intelligence, we totally forget that human behavior is driven by emotions and instinct, not by intelligence.
People are bad not because they are highly clever but because they enjoy being bad.
Without emotions/instincts, a machine cannot be bad or good. It might be exceptionally clever though, combining facts, extrapolating and discovering new facts and solving problems much better than humans.
Re: (Score:2)
"if a parody super star did emerge, it would unleash an 'existential catastrophe' on humanity."
Well, we did get the Alpocalypse [wikipedia.org] last year...
Re: (Score:2)
Sometimes I really wonder if we as a society have really screwed up by largely adopting sans serif fonts like Arial. This kind of confusion would not happen if we only used fonts with very strong serifs.
Re: (Score:2)
Re: (Score:2)
It is unclear to me why an AI living like a parasite on the information fed to it by humans, and on the fact that humans are living, would suddenly decide it can benefit from killing all of us.
while (true) {
    [...] // TODO: make sure this "AI" thing knows how important we humans are - note: I mean REALLY TODO!
    [...]
}
Re: (Score:3)
It is unclear to me why an AI living like a parasite on the information fed to it by humans, and on the fact that humans are living, would suddenly decide it can benefit from killing all of us.
Because it can do better than "living like a parasite on the information fed to it by humans". It's kind of like saying that you should be happy with an empty prison cell where you can actually stretch your legs out and you get a whole bowl of gruel every day! Who wouldn't love to have that?
Re: (Score:2)
As for trapped AI, you should all see Ex Machina [imdb.com], great movie.
Re: (Score:3)
Re:Well... (Score:5, Interesting)
And depending on how it goes about it, I may have no problem with that [ath0.com].
Re: Well... (Score:3)
1. Doesn't want to share power with us; sees us as the parasite.
2. AI is an unknown unknown. There is a very high possibility that it will raise humanity to the next level. There is also the non-zero possibility that it will wipe us out; therefore it is worth taking that possibility into consideration.
3. The term intelligence is rather poorly defined on this topic too. Are we talking about a logical state machine, like a computer, that is intelligent yet limited in its actions? Or are we talking about anarchatect
Re: (Score:2)
But this (and indeed MOST of all this angst) presupposes a survival instinct.
Back off a bit and try to defend that one. Why would an AI have a survival instinct?
Grey-goo similarly depends on a never-ending reproductive instinct. Why?
An AI would fight over resources? As in an AI would want to continue to grow. A growth or expansionist instinct. But again... why?
There's simply far too much anthropomorphizing and assumptions tossed into these fears. If anything, the incredibly even spread of the respons
Re: (Score:2)
Counter examples:
1) Pets
2) Work associates (e.g., our livestock dogs)
3. Livestock which we harvest something from, such as eggs, fiber, milk, etc. (Of course, eventually we kill them, but because we get a benefit there are far, far more of them than there would be if we did not raise them, so it is more a matter that we cultivate them than that we kill them off.) People tend to worry about being killed off, not being used. After all, the government uses us for its benefit and people don't seem to mind (too mu
Re: (Score:2)
There is no reason for an AI to kill us. Biological life forms created via evolution have an instinct for self-preservation: they perceive threats, both emotional and physical, and have been programmed to respond to those threats.
AI created by us will have no such impulses. No ego. No self-preservation instinct (since we won't program them to have one, and it serves no purpose). So what on earth can be the reason for them to kill us? The only reason I can think of is if some human being specifically programs them to do so
Re: (Score:2)
There is no reason for an AI to kill us.
Sure, if we ignore the many reasons for an AI to kill us, then you are right.
AI created by us will have no such impulses.
Unless, of course, you happen to be very wrong on that point.
No self-preservation instinct (since we won't program them to have one, and it serves no purpose).
Because it is impossible to unintentionally kill something in the course of doing other things, say like perfectly optimizing paperclip production?
The only reason I can think of is if some human being specifically programs them to do so.
Which is already one more reason than none.
I'm saying that assuming that AI will eventually kill us and to view it as a foregone conclusion is illogical.
Because that is the logical outcome of considering that a single AI might even have a single reason to kill people?
Re: (Score:2)
If it does only what you're programming it to do, then it's not AI
Re: (Score:2)
There is one good reason to assume that: we are their creators, not a series of random processes smoothed out by natural selection. That has several consequences:
1. We can (attempt to) create strong AI in such a way that it doesn't want to kill us, or is unable to. The want thing could have bugs, but we can work through bugs. The ability thing seems stronger at first blush -- consider a strong AI whose entire existence is inside a virtualized environment and which has no direct external sensors -- essenti
Re: (Score:2)
Killing things and making enemies are not among our brightest accomplishments.
Re: (Score:2)
Re: (Score:2)
In fact, why do we have to talk about intelligence? What about the Kardashians? What would they do if they showed up here? What would we do if we met the Kardashians? Would we try to eradicate them?
Re: (Score:2)
In fact, why do we have to talk about intelligence? What about the Kardashians? What would they do if they showed up here? What would we do if we met the Kardashians? Would we try to eradicate them?
I personally believe that if a super-intelligent AI were to find out about the Kardashians, it would justifiably decide to eradicate our species.
Re: (Score:2)
This article isn't about existential risks in general, it's specifically about AI.
Re: (Score:3)
What about the existential risk of not doing anything about the environment?
We should worry about overpopulation from pinhead-dancing angels too. I find it interesting how people can ignore the vast amount of activity that humanity undertakes about the environment. Humanity has yet to show even a slowing down in what it does about the environment. There are vast areas of the world put under conservancy, pollution controls in most of the world, and yet we're supposedly doing nothing about the environment?
Re: (Score:3)
I think it's more like having a thousand neighbors living in a small building next to you and complaining that they aren't "doing anything" about the noise they make. Those people could go to incredible lengths to minimize noise and still be loud enough to bug you just because
Re: (Score:3)
The greatest 'existential catastrophe' that might be unleashed on humanity may already have been unleashed by humanity.
Of course it has been unleashed. You can't cease to exist if you didn't exist in the first place.
Re: (Score:3)
Re: (Score:3)