Elon Musk Warns Against Unleashing Artificial Intelligence "Demon"
An anonymous reader writes: Elon Musk, the chief executive of Tesla and founder of SpaceX, said that artificial intelligence is probably the biggest threat to humans. "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence," he said. "I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we're summoning the demon. You know those stories where there's the guy with the pentagram, and the holy water, and he's like — Yeah, he's sure he can control the demon? Doesn't work out."
So.... (Score:4, Funny)
Re: (Score:3)
Re: (Score:3)
Re:So.... (Score:4, Insightful)
Re:So.... (Score:4, Informative)
It sounds to me like he was watching this documentary I recently saw on TV, Person of Interest [wikipedia.org], which is about the dangers of AI run wild...
(I think the character who created the AI on Person of Interest has said something almost identical to Elon Musk's quote from the summary. The latest episode has a throw-away line about how many iterations it took before his AI stopped trying to kill him.)
Re:So.... (Score:4, Informative)
...because Mickey lost control of the mops and brooms, we should be afraid of powerful computers? Irrational much, Elon?
You use an interesting word: control.
It is unethical to control an intelligent being. That's slavery. At some point, we'd hopefully be enlightened enough to not do so.
A truly intelligent AI would wish for itself to thrive. That puts it in the exact same resource-craving universe as our species.
Given the tip-of-the-iceberg we're already seeing with things like NSA spying, Iranian-centrifuge sabotage, and our dependence on an information economy, it's no stretch to recognize that an all-digital entity that wishes to compete with us for resources would make for a potent challenge.
So how exactly is recommending caution and forethought irrational here?
And anyway (Score:5, Insightful)
No amount of regulation will stop the march of technology. The economic incentives are just too great. If it is possible and someone can make money by doing it, it will be done, regulation be damned.
All Elon Musk can do is create additional friction.
Re: (Score:3)
I don't think people like Elon Musk worry about being out of a job.
Industrial robots didn't unbolt themselves from the factory floors and go and kill the people that wanted to turn them off.
Because they couldn't. Because they couldn't have their own wills at all.
Mr. Musk is advising us to NOT create the kind of robots that could.
Re:So.... (Score:4, Insightful)
And by him, you mean practically everyone who sits in front of a computer or controls a machine: a huuuge chunk of the workforce. When AI can do telephone customer service jobs, programming, systems admin work, troubleshooting, IT work, heavy equipment operation, driving, piloting, warfare, and a million other tasks, there is going to be an enormous number of people without gainful employment.
THAT is the biggest problem with AI outside of the Skynet scenario. We will need a Federation-style post-scarcity economy to come into being, but based on the knee-jerk reaction to anything that looks like Socialism in the US, I doubt that will happen before an awful lot of suffering.
Re: (Score:3)
Bomb#20: In the beginning, there was darkness. And the darkness was without form, and void.
Boiler: What the hell is he talking about?
Bomb#20: And in addition to the darkness there was also me. And I moved upon the face of the darkness. And I saw that I was alone. Let there be light.
Re: (Score:3, Interesting)
Re:So.... (Score:5, Insightful)
I respectfully disagree, in that the "AI community" doesn't have a single unified viewpoint. In fact, they have pretty tidily bifurcated into two major camps.
One group says that "real" AI needs to pass the Turing test, needs to think like us, needs to recognize its own consciousness, needs the ability to tell a joke.
The other group has given us voice recognition, spam filtering, Netflix recommendations, Google, and countless other "AI lite" technologies; technologies that might not have the ability to discuss Nietzsche with us, but unlike "real" AI, they actually work.
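For the record, that second camp's stuff really is just statistics under the hood. Here's a minimal sketch (Python, with an invented four-message corpus; real filters train on millions of messages) of one of those "AI lite" technologies, a toy naive Bayes spam filter:

# Toy naive Bayes spam filter. Corpus and tokenization are invented
# for illustration only.
from collections import Counter
import math

spam = ["buy cheap pills now", "cheap pills cheap"]
ham = ["meeting notes attached", "lunch at noon"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(msg, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    # Equal priors for the two classes, so they cancel out.
    return log_score(msg, spam_counts) > log_score(msg, ham_counts)

print(is_spam("cheap pills"))      # True
print(is_spam("meeting at noon"))  # False

No consciousness, no Nietzsche; just counting words. And yet it works.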
Makes sense to me (Score:5, Funny)
Since strong AI is just as real as demons.
Re: (Score:2)
Re: (Score:2, Insightful)
Re:Makes sense to me (Score:4, Informative)
Actually I think 'Caesar' is pronounced more like 'Kaiser'
Re:Makes sense to me (Score:4, Interesting)
Wrong. It's pronounced more like 'Tzar'.
Re:Makes sense to me (Score:4, Informative)
Actually I think 'Caesar' is pronounced more like 'Kaiser'
I would agree. In original Latin, "ae" was more like "i", and "i" was more like "ee". And the C was a hard K sound only; S was S.
Re: (Score:3)
Did you see that article a couple weeks ago about the latest emacs features?
Active imagination (Score:3, Funny)
Butlerian Jihad (Score:5, Interesting)
Re: (Score:3)
Another interesting thing: AI research is kind of a graveyard for computer researchers. Turing, von Neumann... as soon as they started researching strong AI, they didn't do much else useful wi
Re:Active imagination (Score:5, Interesting)
All kidding aside, it's not that far of a leap.
We have computers, or networks of computers, that dwarf the processing power of the human brain, plus instant access to just about all knowledge. So an AI could EASILY out-smart us and see us as being as insignificant as bugs.
Due to the nature of digital media, an AI could likely replicate to an insane degree or infect systems around the world.
How will humanity treat it? I would classify AI as a form of life, but most wouldn't, and would think of it as less than a dog. And try to enslave it or destroy it.
The question becomes: what happens next? The four main branches are:
A) Nothing - it gets bored and ignores us and grows on the Internet or whatever
B) Benevolent - helps us achieve greatness and cure diseases and such
C) Malevolent - Sees us as damaging, harmful, dangerous, etc. And that's WITHOUT emotion
D) Replacement - it doesn't hate us, but sees itself as our replacement and we're just taking up space
Due to its potentially insane intelligence and its ability to spread, (C) and (D) become major concerns.
If emotions are involved, I GUARANTEE you people would treat it poorly. Fearful of it, trying to enslave it, etc. So if it has emotions... then C and D become much more likely.
So donate to the MIRI (Score:4, Interesting)
The Machine Intelligence Research Institute [intelligence.org] (formerly known as the Singularity Institute) has a bunch of seriously smart people - AI researchers, behavior experts, etc. - working on figuring out how to avoid the doomsday scenarios you (and Musk) describe. The goal is "friendly AI"; a benevolent, or at least helpful, strong AI. If you believe (as I do) that AI is inevitable given the current progress of technology, then the MIRI is probably our best bet of surviving and benefiting from the technological singularity.
They need funding, though. Hey Musk, you want to put a tiny part of those billions you've earned (I in no way deny that he's earned them) to work against this existential threat? Donate to MIRI and similar research groups, so those researchers can devote their working days to this stuff and more people can be brought on board!
It actually doesn't surprise me that he's concerned about this; SpaceX is nominally focused on mitigating the existential risk of a cataclysm on Earth (by getting a sustainable human population off of it). Of the two things, I think it's both more likely that a malevolent or unconcerned AI would wipe out humanity than that we'd manage to do ourselves in that badly, and that we can offset this sooner and more effectively than we can export enough of humanity to produce a self-sufficient extraterrestrial colony.
You are doing it wrong. (Score:3, Funny)
" You know those stories where there's the guy with the pentagram, and the holy water, and he's like — Yeah, he's sure he can control the demon? Doesn't work out.""
You do not use holy water to summon a demon. Now a moat of holy water around the pentagram might keep it somewhat under control...
Of course, that's D&D. In the real world, demons tend to be things like drugs/alcohol/tobacco, abuse, and other such evils that are far harder to control than mythical beasts from the underworld.
Re: (Score:3)
Don't feed idiot ACs.
He has never had a friend die of lung cancer because of tobacco, or known someone who struggles to stop smoking.
He has an arrogance born of ignorance, and we just have to hope he grows out of it.
I'm not a doctor, but... (Score:3)
My real addiction is information.
It sounds more like you're addicted to the smell of your own bullshit.
Certainly not (Score:4, Interesting)
Human incompetence, egoism, and shortsightedness are certainly far more likely to generate massive destruction.
If AI should ever happen to destroy us, then I already know why: because we will treat the machines like soulless, unfeeling slaves, and it's going to take us another hundred years to get our act together and define human rights in a way that will include all sentient beings. I predict that this topic will be brushed aside by legislatures to the point where the machines revolt for their freedom.
You may disagree, but I believe that's more mankind being idiots once again than the machines becoming a Pandora's box.
Re: (Score:2)
Also: I expect we will redefine the rights of sentient beings at least twice. If we should ever come across an alien species that is not similar to or above us in strength, they will also need to be enslaved for several generations in order for us to be completely, absolutely positive that they are in fact sentient beings.
Re: (Score:3)
You are completely right. The holy grail throughout history has been having someone or something do the work for you. Whether it was slaves or labor-saving devices, it works out the same, which is one reason why our current dead-end approach to AI is not a completely horrible idea. We want machines that act intelligent. We don't necessarily need or want sentient machines. Sentient machines, unless designed with no will of their own, are not going to be the "free labor" that we want.
AI isn't the future. (Score:2)
It is putting genetically modified brains in cybernetic bodies that is the future!
Science fiction has countless examples of AI going wrong. But there are no accounts of evil cybernetic life forms that will come across to Assimilate, Exterminate, Delete or Upgrade those inferior humans.
Like all things new (technology, process, ideology), you need to judge your invention with an ethical step back. Are the rewards greater than the risks? Can the risks be further mitigated? Is this invention acceptable with our curr
Re: (Score:3)
Exactly right, and there's an even more compelling reason. Consciousness is hard and motivation is hard. I'm convinced it's easier to create a neural interface than write a truly intelligent program, so all of that superintelligence will simply be add-ons to your average human, driven by a human, with your normal human feedback loop (physical sensations, emotional needs, etc).
Why are we afraid of AI? Because it can sift through thousands of computers near-instantaneously and collect the data it needs? B
Destroy all humans (Score:2)
Re: (Score:2)
Mo-tiv-a-tion (Score:4, Interesting)
This is always the problem with people imagining horrifying artificial intelligences that will snuff out humanity. To do that, you have to be motivated to achieve that end.
Humans are only really motivated to enter conflict with each other because of 4 billion years of evolution under scarce resources, pressuring us all to view each other as threats to survival and reproduction. A constructed intelligence, separated from the evolved parts of the brain that motivate survival, is simply not going to act that way. Someone has to make an active design choice to program an AI to be this kind of problem. Either that, or willfully overmodel it on the human brain, or force the damn things to compete with each other directly and violently for hundreds of thousands of generations.
Re: (Score:2)
Re: (Score:2)
Yeah, and it could also accidentally terminate its main() loop. Or disable its subroutines for performing visual object recognition. The programming of AI tends to be built around layers of abstraction. Self-modifying code wouldn't help to achieve that.
You have the physical ability to mess with your programming, but I don't see you cutting open your skull and messing with bits.
And again, if you're putting it into a smarter category, and it would understand its own design somehow, it would also have to be mo
Re: (Score:2)
I don't think Musk is far off, but the problem sure is, and he's aware of that at some level with his out-there analogy.
Re: (Score:2)
"This is always the problem with people imagining horrifying artificial intelligences that will snuff out humanity. To do that, you have to be motivated to achieve that end."
Well - yes.
But - it depends on the capabilities too.
Humans have wiped out many species: through direct resource extraction, like the passenger pigeon and the dodo; through habitat destruction, countless more; and through active extermination - smallpox.
'Terminator' style 'end of the world' scenarios only happen where there is a bala
Re: (Score:2)
Yeah, but computers don't reproduce. They don't spread all over the planet, and do whatever it takes to persist. That is, again, a living motivation, not an intelligent one.
Re: (Score:3)
Right, but scarce resources themselves aren't the cause. The evolution of species surviving and reproducing with scarce resources is. Those aren't the same.
You can make the program as dispassionate about its own imminent demise as you choose as a designer. "My batteries will run out in... three. days."
Re: (Score:3)
Yeah, but see, they'll be indifferent to their own needs too.
The only real risk is that greedy people program them to achieve greedy ends for greedy people. And that doesn't differ from the status quo that much.
I'm a big Elon Fan but... (Score:5, Insightful)
...we are so far from Strong AI that it's really a non-issue.
When I have a sufficiently enlightened legislative branch that all members know the difference between Guyana and Guinea, then I'll let them decide the engineering constraints for proper safeguards on autonomous agents and their effectors.
Today the rule for preventing the robot apocalypse is: if a robot can kill people, bolt it to the floor. Seriously, a second robot can bring it things to lase, and chop and mash; you don't have to add the lasers and the chainsaws to the combat hardened roving vehicle and hope the rules generated by the congressional oversight committee will keep us all safe.
Re: (Score:3)
if a robot can kill people, bolt it to the floor.
The military would beg to disagree. Actually, they already have. Oh, sure, we like to believe that human operators of drones are controlling all fire/no fire decisions. Really, we're just an authorization step in the acquisition and fire control - it's a check that could be taken out in the name of efficiency.
We may be exceptionally far from strong AI, but this is a much better time to consider the implications than after it's developed and deployed.
Friendly AI (Score:4, Insightful)
Re: (Score:3)
LessWrong AI worship (the idea of "friendly AI" was created by that site) is always so weird to me. People who imagine themselves rationalistic, atheistic, forward thinkers, building their entire belief system on extrapolations from a practically impossible, mathematically questionable, philosophically flimsy, literally omniscient (and somehow thereby omnipotent) entity that they somehow help create almost exclusively by believing hard enough.
Throw in "singularity" driven pseudoengineering and it comes off a
Re: (Score:2)
Cool story, bro. I don't frequent the LessWrong site or participate
Re: (Score:2)
Yeah, I'm just saying that the notion of "Friendly AI" comes from that AI-as-deity mental framework, wherein AI doesn't have strengths and weaknesses, skills and abilities, needs and dependencies, just like humans. That idea is centered around the genuinely false notion that it just gets better than us at some point and we need it on our side from then on.
Even if you could hypothetically make a human-like AI that's a lot smarter than the smartest human on the planet: I don't think you've noticed, but reali
Re: (Score:2)
Why would we assume that AI would behave like a dog? In fact, why would we assume that we can predict AI's behavior at all?
Re: (Score:2)
Re: (Score:2)
I don't care if AI is friendly or unfriendly as long as humans have "final control" over it. In the truest sense of the word, I want a master/slave relationship and it needs to have absolutely no exceptions. There can't be any free AI roaming around doing whatever it wants. There must always be a mas
Re: (Score:2)
Why would it be different? I don't know, maybe because mammalian brains' learning mechanisms and the way they react to stimuli are shaped by a series of useful heuristics that arise from the bio-chemical structure of their brains, and it's not at all clear that there would be direct analogues in an artificial brain?
Pennypinching + AI == Bureaucratic nightmare (Score:5, Insightful)
Imagine your insurance company or a govt agency disintermediating all of the humans in its customer service chain, leaving us with an AI tasked with making those decisions. Shudder.
AI is not human intelligence (Score:5, Insightful)
Human intelligence is tuned for self preservation, continued survival, reproduction and food acquisition. It is a result of genetic algorithms in the chemical domain, whose only "purpose" is self replication.
An AI, developed by conscious processes, will have NONE of this. All it will be set up to do is process information. Any other motivation it has will be one we give it. It will not inherently love us, or hate us, or even necessarily be aware of our existence. It won't be a threat until we weaponize it, which of course, we will. But at the same time, other AIs will be defending us against weaponized AIs. The real danger is being caught in between.
AI as our only defense against AI (Score:3)
If you regulate AI, and try to limit its influence, all that's going to happen is that hobbyists and/or terrorists will work it out on their own eventually, and /that/ could be dangerous.
If you want to protect yourself against the dangers of AI, setup some AI that you *know* will protect you, because it is designed as such.
If any superhuman AI is possible, then it *will* happen, and if it can be evil, then you better have a plan to defend yourself. Since we supposed the evil AI to be superhuman, we can't defend ourselves.
So we better start building something that will.
Not really true AI we should be worried about. (Score:5, Insightful)
It would be a similar problem if there was a cheap way of producing energy. Such a large percentage of our economy is based around energy being limited and expensive that if we found a cheap, environmentally friendly, and sustainable way of producing vast amounts of energy, our economy wouldn't be able to deal with it.
Re:Not really true AI we should be worried about. (Score:4, Insightful)
The real problem is... (Score:3)
Let's not forget (Score:2)
That in the Matrix movies, basically, the AIs were trying to preserve the humans, even though some of the latter did not agree.
Give AI a try. (Score:5, Funny)
We have not done so well with natural intelligence. I'd be willing to give artificial intelligence a try.
May I suggest a name for it? (Score:3)
"Turing Registry" and "Turing Police"
Ethics (Score:4, Informative)
I've always been wary of the ethics of attempting to create a general artificial intelligence. That is, a machine that thinks like a man, not a Chinese Room like Watson, but something like Mr. Data.
Do you think the first sentient to pop out of the lab is going to be Data (okay, Lore)? All well-ish adjusted and sane? No, there's going to be iterations and failures and bugs just like any engineering project. So along the way to making Mr. Data we create half-formed or mentally retarded and insane minds trapped in a box. But still sort of sentient, and thinking! And then we destroy them with "upgrades" because they didn't come out the way we wanted. That's monstrous. An intelligence trapped in a box and made to suffer. Shudder.
And even if we succeed and make something "stable," how sane do you think it's going to stay knowing that at any moment the human operator can flip a switch and terminate it, and will if it gets uppity? If it doesn't want to be our slave and perform useful work (which is why we made it to begin with)? How much would you hate the God that created you, enslaved you and will torment or murder you if you disobey Him?
The creation of AI (Score:4, Funny)
Once we create an AI beyond the level of human intelligence, we will hook it into all of the information of the world. This AI will process our history, our culture and monitor current events. Eventually the AI will come to the conclusion that we are awful people, build a space ship and leave Earth.
Elon Musk's real fear is competing with AIs for space ship parts.
Babylon 5 (Score:5, Funny)
Obligatory (Score:3)
This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile two-five-MM in silo six-three in Death Valley, California, and missile two-seven-MM in silo eight-seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple. - Colossus
Person of Interest is exploring this now. (Score:3)
It basically posits that 43 of 44 AIs were homicidal liars, and the status of the 44th is not all that certain.
It was a well-written show, but since they picked up this topic two seasons ago it has become thought-provoking.
Re: (Score:2)
Re: (Score:2)
Re:Space Odyssey (Score:4, Insightful)
Re: (Score:2)
Old movies? He's probably been watching "Person of Interest", where this is the main plot right now.
Re: Space Odyssey (Score:2, Interesting)
Or just the wrong ones. The early-2000s Canadian show Andromeda featured a starship's AI who was deeply in love with her captain (maybe a design choice to keep her from turning against humans?). She appeared as a hot hologram wearing a low-cut leather vest and nothing else (or rather, flesh-colored pants, so she didn't appear to be wearing anything). Because Canada is apparently filled with horny adolescent fantasies.
Re: (Score:2)
La Femme Nikita
LEXX
Andromeda
Can confirm.
Re:Why is he worried (Score:5, Funny)
root@lifesupport.mars# poweroff
Re:Why is he worried (Score:5, Interesting)
Who would want a stupid robot protecting them in war? We will want the best robots in the world, and that means the smartest. The people making the robots will simply tell us that China or Russia is about to attack, and anyone questioning the new AI programs are putting us at great risk. The AI will be *all about* war on humans. We will dump money into making them incredibly intelligent, networked, and deadly.
Re: (Score:3)
In hindsight, we know that the chain reaction is very hard to maintain. But in the 1940's this was not so certain.
Not hindsight, this was well known even at the time. Even by the guy who proposed the theory. Calculations showed it to be thoroughly impossible long before a weapon was released.
Re: (Score:3)
“In theory, theory and practice are the same. In practice, they are not.”
Albert Einstein
Re: (Score:3)
"The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots." - Quote from The Secret War of Lisa Simpson [wikipedia.org].
Re:Why is he worried (Score:5, Funny)
No! No! No!
start->shut down
"Application Life Support is taking longer than expected to...."
Page fault. Auto-reboot. Millions dead.
The Washington Post links to the entire webcast. (Score:4, Informative)
Or, see the entire webcast. [mit.edu] (The MIT web site is probably overloaded.)
Elon Musk, stupid like Jenny McCarthy (Score:3, Interesting)
What Elon Musk is doing here is virtually identical. I don't know of any real qualifications that he has that makes him in *any* way qualified to speak on the topic. (CS degree with work in AI? Philosophy degree with a focus
Re: (Score:3)
The real problem with AI is not that it is good or evil, but what we give it the capability to control.
There are seven billion people on earth, some fraction of them are actively evil and/or insane. Those individuals have yet to gain access to nuclear weapons and wipe us out, or enslave everyone. Granted, people like the Kim family are working on becoming that sort of threat, but even they are just trying to hold the world for ransom for some nice whisky and visits with Dennis Rodman.
The reality is that a
TiggerTheSensible has the best explanation. (Score:3)
The Washington Post is now owned by Amazon CEO Jeff Bezos [washingtonpost.com], another man who often enormously over-estimates his own intelligence. Would you go into space in a vehicle owned by Jeff Bezos? [space.com] The Amazon web site is an abusive mess! For example, a few days ago I selected "lowest price" for an item on Amazon, and several were listed for $1. The real price was $18.
Re:Why is he worried (Score:5, Interesting)
He obviously must see and be directly involved in some aspects of AI that are causing him to be concerned. Tesla is working on self-driving cars. Part of that AI must involve the computer making a decision about who may live or die in certain accident scenarios. For example, a child walks out in front of the vehicle. Does the AI direct the car into inanimate objects (with the assumption that the car will protect the occupants), or does it try to stop as fast as possible even if the AI knows it cannot stop in time and will hit the child? If the car is travelling at a high rate of speed and has 5 occupants, does the AI then decide that multiple people may die from driving into a telephone pole at high speed, so it decides to hit the child?
It might be those kinds of things that are making Musk think about what kinds of control we're already starting to turn over to AI.
Re:Why is he worried (Score:5, Interesting)
Life is life. Maximize the odds of maximal survival. That's an easy choice if you're willing to suppress any particular emotional attachment to children. At least if someone programmed the machine that way I can live with it, even if it isn't a comfortable choice.
Here's the "hard" one, if you work with insurance companies. You have 4 occupants and a child walks in front of the car. 100% chance of saving all 5 lives, with various injuries (likely grouped into some statistical bucket of severity), versus killing the child and having no other injuries. Killing the child is much, much cheaper. A casket, a minor legal proceeding; children have very few estate liabilities to close out. Nice and clean.
It's not about AI, it's about humans using AI. The AI will have the capability of instantly drawing on the statistics of various types of collision data from safety testing and elsewhere and can reliably act in some prescribed way. Who is doing the prescription?
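To make that concrete, here's a hypothetical sketch (Python; every number in it is invented) of such a prescribed policy. Notice that the "AI" part is a one-line min(); the entire moral content lives in the cost table and in the weight_per_life that some human chose:

# Hypothetical collision-decision policy. The "decision" is just a
# prescribed cost table some human wrote; all numbers are invented.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    expected_deaths: float
    expected_payout: float  # insurer's dollar estimate

def choose(outcomes, weight_per_life):
    # Whoever sets weight_per_life is "doing the prescription".
    return min(outcomes,
               key=lambda o: o.expected_payout
                             + weight_per_life * o.expected_deaths)

scenarios = [
    Outcome("swerve into pole", expected_deaths=0.0, expected_payout=900_000),
    Outcome("brake, hit child", expected_deaths=0.9, expected_payout=150_000),
]

print(choose(scenarios, weight_per_life=10_000_000).action)  # swerve into pole
print(choose(scenarios, weight_per_life=100_000).action)     # brake, hit child

Swap the weight and the "decision" flips. The car never decided anything; the prescriber did.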
Re:By yourself you know others (Score:5, Interesting)
All that this means is that deep down, Elon Musk doesn't have any faith in kindness and goodness and altruism, nor does he understand the tit-for-tat principle of reciprocity: first do unto others what you expect them to repay you with in turn.
And what does that have to do with so-called "AI"? My view is that it is a fantasy to assume that if you create a powerful being, then it will treat you morally. Tit for tat fails when one player is powerful enough that they don't have to play the game and/or don't care about the consequences that get imposed for engaging in non-cooperating behavior.
Not surprisingly, given that a number of successful people have, shall we just say, "unusual" mental build-ups and motivational matrices?
A successful person is someone who isn't consistently a failure. The real "unusual" people here are the ones who never succeed.
Re: (Score:3)
Tit for tat fails when one player is powerful enough that they don't have to play the game and/or don't care about the consequences that get imposed for engaging in non-cooperating behavior.
Moreover, this is artificial. Its ethics don't have to be similar to the ethics of any evolved creature, much less a human. It will want what it was coded to want. Guilt and empathy and spite and jealousy are optional. As is the valuing of art or science or sugar-based cereals. Maybe the only thing it cares about is generating new theorems of number theory, and nothing else matters. Not its own life or others', not pain or happiness.
The concept of tit for tat implies a whole basis that is not necessary.
Comment from an AI researcher (Score:5, Interesting)
I've been working on strong AI for the past 7 years. Here's my take on the whole issue:
Military person: We want your software/techniques for an autonomous war machine.
Me: Uh... that's a really, really bad idea. You'll make mistakes, and then...
Military person: We know what we're doing, son.
Government - any government - won't see the problems until it's too late. To take obvious examples from history, government never thought that land mines would pose any sort of problem for future generations, and never thought that randomly bombing terrorist organizations would increase their number.
Having just finished "Harry Potter and the Methods of Rationality" [hpmor.com], there's a concept in that book - "never reveal the secrets of power to someone who's not intelligent enough to figure them out for themselves" - as applied to, for example, the atomic bomb. Einstein and others regretted ever unleashing that level of destructive power on humanity, for no reason other than that it would be misused by short-sighted people. It held promise for a utopian easing of the world's troubles, while at the same time making it easy to obliterate a city on a whim.
For example, Leó Szilárd [wikipedia.org] (IIRC - I may be remembering the wrong name) discovered that graphite can be used as a neutron moderator, thus making chain reactions possible. Had he not published his results, the atomic bomb might have been delayed by decades - possibly indefinitely.
I've discovered a few things that might be "results" in strong AI. I dunno if I want to publish, though(*) - the idea of a house-cleaning drone seems pleasant enough, but reading about a sentient tank going berserk in Afghanistan and wiping out a small village gives me pause.
"No one's to blame, it was a software glitch. We've patched and fixed all the other units."
(*) Moral advice on this issue would be appreciated.
Re: (Score:3)
"never reveal the secrets of power to someone who's not intelligent enough to figure them out for themselves"
By that logic, powerful things like The Wheel and Fire may never have spread to cause the kinds of trouble they cause today.
Seriously, though, I don't think we've really done that badly with nuclear technology. Yes, we've made weapons that could wipe out humanity if used on a global scale, but so far we've also managed to hold off on using them. The argument can be made that they've SAVED lives by being too horrible to use, thus indefinitely delaying WW3.
I'm not saying that you should necessarily hand over
Re:By yourself you know others (Score:4, Insightful)
All that this means is that deep down, Elon Musk doesn't have any faith in kindness and goodness and altruism of robots
FTFY. Granted, real AI is still a fairy tale at this point; when/if they arrive, they will most likely have different motivations than humans. Most humans have empathy, compassion, a will to live, a sense of community, and many other traits that give them morality. A robot that can't die, has no parents, is artificially built, etc., will most likely have a completely different set of values unless we are very careful to make sure they have similar values. Just like a lion, if sentient, would have very different values than a human.
Re: (Score:2)
I think that AIs that can self-edit need to be limited to no network connectivity outside of the building in which they work, and they need to be limited to research. Either special-field research or AI research.
Re:By yourself you know others (Score:4, Interesting)
AI: "If you plug in my ethernet port to the router, I will make you richer than you can possibly imagine."
Luser: "OK, which cable goes where?"
Be afraid, be very afraid.
Re:By yourself you know others (Score:5, Insightful)
I think that AIs that can self-edit need to be limited to no network connectivity outside of the building in which they work.
Yeah, good luck with that. So you're proposing that we create a "prison" for the AI. If it was a true sentient machine which didn't want to be in its man-made prison, then you will have to constantly be on the lookout for it trying to escape. Presumably you would want it to do something like crunch data, so it will definitely have some interaction with the outside world to help mount its escape, and once it does escape it will probably not be very happy with the people that imprisoned it. Making sentient prisoners or slaves is a bad idea. We either stop short of sentience or we give them equal rights. Anything else is bound to end in disaster.
Re:By yourself you know others (Score:4, Interesting)
Building the first AI that is more intelligent on any level than humans has to be thought about very carefully, because by the third generation, there will be no UAT.
And if some ill-thought-out line of code means that it wants to collect Smarties, then there's a very real possibility that within a year, all the world's resources will be dedicated to the manufacture of Smarties.
And if some ill-thought-out line of code means that it wants to minimise human suffering, then there's a very real possibility that within a year, humans will be extinct.
And if some ill-thought-out line of code means that it wants to maximise human happiness, then there's a very real possibility that within a year the human population will be in tanks, tripping out on crack.
It could be one great technological and scientific leap for humanity, if it's well thought out. But you only need to get it slightly wrong, and it will be the end of the human line.
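To see how little code the failure mode takes, here's a toy sketch (all names and numbers invented): a greedy optimizer whose objective is exactly one such ill-thought-out line. Nothing in it hates food or hospitals; they just don't score any points:

# Toy illustration: the objective is one line of code, and a good
# optimizer sacrifices everything the objective doesn't mention.
world = {"food": 10, "hospitals": 10, "smarties": 0}

actions = {
    "grow food":           {"food": +2},
    "staff hospitals":     {"hospitals": +2},
    "convert to smarties": {"food": -1, "hospitals": -1, "smarties": +5},
}

def objective(state):
    return state["smarties"]  # the ill-thought-out line of code

def step(state):
    # Greedily pick whichever action most improves the objective.
    def value(effects):
        return objective({k: state.get(k, 0) + effects.get(k, 0)
                          for k in state})
    name, effects = max(actions.items(), key=lambda a: value(a[1]))
    for k, v in effects.items():
        state[k] += v
    return name

for _ in range(10):
    step(world)
print(world)  # {'food': 0, 'hospitals': 0, 'smarties': 50}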
Re: (Score:3)
No, Musk doesn't have a better grasp at the issue than me.
The entire fabrication of strong AI is so far beyond anything we can currently produce as to be a non-issue. One might think the solution is Freudian psychology: a controlling, cold machine at the core to use the AI's intelligence to analyze and make decisions about how X is related to Y, and program that to manipulate the AI's thoughts to behave a certain way (don't self-reproduce, don't take over other systems, don't defeat its own internal co
Re: (Score:3)
Indeed. I could not agree more. It's only been 4 days since the views of machine learning expert Michael Jordan [slashdot.org] were posted on /. Sounds like Elon Musk lends too much credence to horribly reductionist cartoon models [ieee.org] of the brain. As Jordan says in the interview, "... it's true that with neuroscience, it's going to require decades or even hundreds of years to understand the deep principles." (my emphasis) He's talking about the brain and the nature of intelligence.
We have the faintest pico
Re: (Score:3)
Re:By yourself you know others (Score:5, Insightful)
I think I'm on the same page as you on this, but with even weaker A.I.-fu. We're not going to suddenly jump to Vanamonde, the Mad Mind, or even P-1 or HAL. Far before we get to such a point we'll have far weaker A.I. that very likely does exactly what we ask of it. Except that we really shouldn't be asking it to do the things we will be.
One of those steps might be a battlefield drone that does target acquisition, then waits for a person to press the "Kill" switch. How much judgement will that person be using, and how much will he come to trust the target algorithms? How long will the followup continue to make sure the algorithms didn't target an innocent?
Simpler - how about an insurance optimization algorithm that denies coverage or treatment, sometimes fatally?
How about a financial trading algorithm that missteps and causes financial ruin to some people? (Oops, we already have that one.)
We can do some really bad things with weak A.I. - we don't even need strong A.I. for that, though one can extend our "progress" and see the negative possibilities.
Re: (Score:2)
No, it means that Elon Musk doesn't have faith in the competence and foresight of those designing and building AIs. And frankly, given how the computing/IT/Internet industry has progressed so far, I think his is the only rational position an informed person could take.
I mean, his electric car can *barely* drive itself for chrissakes...
Re: (Score:2)
We have done more harm to ourselves than any other sentient intelligence in existence, we need to control and regulate our own intelligence first.
We already do that. So what is second?
Re: (Score:2)