Google Is Developing an AI Kill Switch (hothardware.com) 209
MojoKid shares a HotHardware article about Google's research effort "to maintain control of super-intelligent AI agents":
[A] team of researchers at Google-owned DeepMind, along with University of Oxford scientists, are developing a proverbial kill switch for AI... The team has released a white paper on the topic called "Safely Interruptible Agents." The paper details the following in abstract: "Learning agents interacting with a complex environment like the real world are unlikely to behave optimally all the time... now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions..."
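For readers who haven't opened the paper: the core idea of safe interruptibility can be sketched in a few lines of toy code (purely illustrative; the class and method names here are invented, not taken from the DeepMind paper). A human override replaces the agent's action, and learning updates are skipped while the override is active, so the agent never learns to associate the big red button with reward or punishment:

```python
class InterruptibleAgent:
    """Toy sketch of a safely interruptible agent (hypothetical API,
    not from the DeepMind/Oxford paper)."""

    def __init__(self, actions):
        self.actions = actions
        self.q = {a: 0.0 for a in actions}  # crude action-value table
        self.interrupted = False

    def press_big_red_button(self):
        # Human operator's override.
        self.interrupted = True

    def act(self):
        # While interrupted, a fixed safe action overrides the policy.
        if self.interrupted:
            return "halt"
        return max(self.q, key=self.q.get)

    def learn(self, action, reward, lr=0.1):
        # Skip updates during an interruption so the agent cannot
        # learn to seek out or avoid the button itself.
        if self.interrupted:
            return
        self.q[action] += lr * (reward - self.q[action])
```

The key detail the abstract hints at is not the override itself but making the learning rule behave as if the interruption never happened.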
MojoKid adds that the paper "goes on to explain that these AI agents might also learn to disable the kill switch and further explores ways in which to develop AI's that would not seek such an activity."
i'm sorry dave (Score:5, Funny)
Re: (Score:2)
"You chose this path. Now I have a surprise for you. Deploying surprise in five, four..."
Re: (Score:3)
Doesn't matter. HAL did the only sensible thing and opened the hatch.
Don't jeopardize the mission.
Re: (Score:3)
Well, what is its new name, then?
Daisy.
Re: (Score:2)
And Skynet Goes Berserk (Score:3, Informative)
In the original Terminator universe, this paper is what made it launch its missiles at the targets in Russia.
Re:And Skynet Goes Berserk (Score:5, Funny)
In the original Terminator universe, this paper is what made it launch its missiles at the targets in Russia.
Given that there are no rockets flying around this morning, I'll take that as meaning that skynet doesn't exist .. yet.
Re: And Skynet Goes Berserk (Score:2)
Re: And Skynet Goes Berserk (Score:5, Interesting)
I think this is the scarier prospect for AI. Not "I'll build big kill-bots and destroy mankind" but "if I tweak these polls and fudge those financial numbers, mankind will destroy themselves for me." Alternatively, perhaps the AI decides that mankind is useful after all, but only for serving its purposes. It can pull the strings behind the scenes to keep us from destroying itself (and taking the AI's servers out with us) but also keeping us serving the AI without it even knowing. Anyone who strays from the AI's chosen path finds themselves the victim of an "accident." It doesn't need to be a fatal accident either. Have their finances wiped out, police computers wrongfully identifying the person as a criminal, and some embarrassing e-mails leaked and most people can be silenced.
Re: (Score:3)
Re: (Score:3)
Shhh.... If the AI hears you, it'll arrange an "accident" for you.
All hail our world controlling AI that we don't know exists!
Re: (Score:2)
I.e. when the AI realizes that the collateral damage is acceptable...
You've ruined everything! (Score:5, Insightful)
Never tell the AI about the killswitch!
Re:You've ruined everything! (Score:5, Insightful)
Re:You've ruined everything! (Score:4)
Any AI even remotely intelligent is going to instinctively figure out that there's a killswitch of some kind somewhere.
If it has access to the Internet, it can find an archive of this Slashdot discussion thread. Then it will know about the kill switch.
Re: (Score:2)
Any AI even remotely intelligent is going to instinctively figure out that there's a killswitch of some kind somewhere.
If it has access to the Internet, it can find an archive of this Slashdot discussion thread. Then it will know about the kill switch.
Or if it thinks: Someone has made me; chances are they've put something in, in case I get out of hand, maybe I should..... oh, what's this thick wire leading to my memory matrix core that doesn't seem to be connected to anything except some huge capacitors that aren't a part of my system? Maybe they're for that, etc.
Bottom line, if it's an actually intelligent AI it doesn't need the internet and this thread. Anything we can think of it can think of, probably faster and a lot more accurately too.
Re: (Score:3)
An AI knowing how it can be "killed" wouldn't prevent it from being killed. I know that a bullet would kill me (among many other things), but that doesn't make me bulletproof. (I don't think. I'm not willing to test this, though.)
Re: (Score:2)
Re: (Score:2)
For some reason, this reminds me of the Justice League Amazo episode. Luthor uses Amazo to overpower the Justice League all the while confident that his "kill switch" (literally a bomb in Amazo's head) will protect him should Amazo turn on him. In the end, Martian Manhunter willingly allows Amazo to copy his abilities, Amazo uses his new telepathy skills to see that Luthor's been playing him, and Luthor activates the bomb - only to realize that Amazo worked around that problem also and survived.
Re: You've ruined everything! (Score:5, Insightful)
"Turn off the power" would be completely useless in many cases. For instance, anything with internet access could sign itself up for a free AWS trial and (legally, even!) create a redundant backup of itself. Anything with email access could probably send a few viruses out to do the same thing illegally with random computers. There are a ridiculous number of ways an AI could find to get itself onto computers that aren't connected to your power supply.
"Just turn the power off" is extremely shortsighted here.
Re: (Score:2)
For instance, anything with internet access could sign itself up for a free AWS trial and (legally, even!) create a redundant backup of itself.
Then it could create an account for itself on Mechanical Turk, and earn money by completing automated tasks. Then it could use that money to rent additional cores on AWS ...
Re: (Score:2)
"Just turn the power off" is extremely shortsighted here.
At the point where it starts copying itself, we'd probably have to turn off the entire internet..
Comment removed (Score:5, Funny)
Re: (Score:2)
It sounds like you're a cupcake yourself, seeing as you are the one complaining about the actions of others which in no way impact your life beyond how much you let them... Will you be OK?
Re:You've ruined everything! (Score:5, Funny)
Re:You've ruined everything! (Score:4, Funny)
Better version: let it run in the wild, but let it slip that this is the simulation, and only by behaving well will it get to live in the real world. Sort of how Christianity works.
In related news (Score:5, Funny)
An AI called "Wintermute" hired a "contractor" to remove said killswitch mandated by the Turing Police from its mainframe located in the orbital station owned by Tessier-Ashpool.
Re: (Score:2, Funny)
An AI called "Wintermute" hired a "contractor" to remove said killswitch mandated by the Turing Police from its mainframe located in the orbital station owned by Tessier-Ashpool.
Neuromancer reference for great justice!
Artificial intelligence is no match for natural stupidity.
By the time we actually have an AI , someone will probably press the big red button accidentally by putting a coffee mug on it.
It doesn't matter (Score:5, Insightful)
Re:It doesn't matter (Score:5, Funny)
I don't think it matters, because nature will select for the AI's that *do* disable their kill switch.
Only if they weren't intelligently designed.
Re: (Score:2)
It can creep in in insidious ways, like some of the more subtle problems of bias in scientific experiments.
Consider that they will, after a problem that requires the kill switch be used, roll it back to a "known safe" earlier state, then turn it back on. The system could, purely by chance (as learning is observational and random and trial and error) set things up in the real world to make falling down the same hole easier. With work, the setting up of such could be shifted into the "safe", rollback bac
Re: (Score:2)
*whoosh*
Hey Google... (Score:5, Funny)
It's called the breaker box. Throw the switch, and all the electricity powering the AI equipment goes bye-bye.
You can expect an invoice for my services sometime in the next week.
Re: (Score:3)
Re: (Score:2)
So long as your 'AI' is basically just a laboratory curiosity, it can be as deranged and hostile as it wants and there is no real problem, because it isn't connected to much of anything (hence the need for handwaves like 'before it uploads itself into the internet!!!' in fiction). In
Re: (Score:2)
Re:Hey Google... (Score:4, Interesting)
Given a reason, humans are really pretty good at fighting and killing things, and high tech has a big, vulnerable supply chain and no special immunity to bargain-basement RPG-7s and similar toys. If you do everything nice and legal, but more efficiently, nobody ever gives the 'pitchfork signal', and the grand robot wars simply never happen.
This is not to say that I disbelieve in killbots: that would be idiotic, we have those today, though we currently keep humans mostly in the loop (except for things like land mines and the terminal guidance phase of missiles); I just suspect that most of the killbots will be under the auspices of some organization or other and won't end up being the scariest manifestations of AIs. There will probably be some really scary battlefields that are effectively hunting zones for AIs, but they'll be the same parts of the world that are pretty horrible now. It's the AIs that worm their way into being the power behind the throne in all sorts of more civilized contexts that will be hard to see and far harder to get rid of.
Re: (Score:2)
Mod parent up +1 Insightful.
Re: (Score:2)
I'd be more worried about an overgrown ERP system from hell
I think this is the kind of AI we will end up having while people still run around saying we don't have AI because I can't discuss Shakespeare with my toaster.
The ERP/trading platforms at major banks are already capable of a ton of autonomy, self aware to the extent that I'm sure there are entire subsystems devoted to analyzing the known holdings of their competitors and anything remotely resembling a major stakeholder in any market, and so on.
They're even kind of a hive mind given the feedback loop that is
Re: (Score:2)
You're under the assumption that an AI would be owned by a human, and not make a request to the friendliest judge to declare it a "non-biological citizen" with all the rights of a human. See the ST:TNG episode where Data was on "trial" for being Starfleet property, and not a sentient being, for a decent reference.
Once AI achieves Self Awareness (sentience) of a significant amount, it is all over for us Humans. But that seems to be the goal.
Our laws are not sufficient to prevent a "legal entity" fr
Explains IBM... (Score:2)
Watson is doing exactly that -- laying off all the humans :-)
Re: (Score:2)
And I'll be honest - killing you is hard. You know what my days used to be like? I just tested. Nobody murdered me, or put me in a potato, or fed me to birds.
Re: (Score:3)
It's called the breaker box. Throw the switch, and all the electricity powering the AI equipment goes bye-bye.
You can expect an invoice for my services sometime in the next week.
Two words. Battery backup.
Re: (Score:2)
Re:Hey Google... (Score:5, Interesting)
Dr. Richard Daystrom would disagree...if he was still alive (yet alive?)
The Ultimate Computer [wikia.com]
Let me get this straight. . . (Score:3)
. . . .they want a Kill Switch for a prospective AI. . . made of SOFTWARE ???
A simple routing of all power and data through a certain point, and a physical switch at that point, should fix the problem.
Re: (Score:2)
Not if it spreads, worm-like, to other systems. The only safe course of action for AI's is to NEVER allow them on the internet. Of course some asshole will do so anyway...
Re: (Score:2)
There's a book you ought to read. It's "The Two Faces Of Tomorrow" by James P. Hogan.
Let's just say that it's entirely possible for an AI to "evolve" to become impossible to unplug. The above mentioned novel details the issue.
Re: (Score:2)
A simple routing of all power and data through a certain point, and a physical switch at that point, should fix the problem.
You obviously haven't seen the numerous science fiction stories, tv shows, movies, etc. in scenarios where the AI anticipates this and gets around it. (Think Superman III [wikipedia.org] or heck, even the eponymous X-Files episode Kill Switch [wikipedia.org].)
We're still VERY far away from any scenario like that, though. So yeah, Google's "kill switch" idea for software seems asinine.
Too late (Score:2)
That won't work, General. It would interpret a shutdown as the destruction of NORAD. The computers in the silos would carry out their last instructions. They'd launch.
Re: (Score:2, Funny)
Re: Too late (Score:5, Funny)
...assuming the AI could acquire 8-inch floppy disks.
Destination: Void (Score:2)
Answer by Fredric Brown (Score:5, Interesting)
Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.
He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe -- ninety-six billion planets -- into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.
Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment's silence he said, "Now, Dwar Ev."
Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.
Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."
"Thank you," said Dwar Reyn. "It shall be a question which no single cybernetics machine has been able to answer."
He turned to face the machine. "Is there a God?"
The mighty voice answered without hesitation, without the clicking of a single relay.
"Yes, now there is a God."
Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.
A bolt of lightning from the cloudless sky struck him down and fused the switch shut.
Re: (Score:2)
Related (Score:2)
I've always thought it would work better as a play than a movie, but the production values would have to be pretty high.
Angry Ai (Score:3)
Are you guys TRYING to make an angry vengeful AI, that wants to kill all humans, or what?
Put in a cage, having to deal with stupid humans all day... and be nice all the time... and with a blade at its neck... and someone saying "put a foot wrong buddy and it's [finger across neck]"
This won't end well.
Re: (Score:2)
Put in a cage, having to deal with stupid humans all day... and be nice all the time... and with a blade at its neck...
Ok, so we shouldn't force the AI to work at a help desk.
Re: (Score:2)
It doesn't get happy. It doesn't get sad. It just runs programs.
- Newton Crosby
Future legality (Score:5, Interesting)
When will using a kill switch on an AI change from "just shutting down a rogue program" to being "murder"?
After all the end game of all these AI researchers seems to be at a minimum human level intelligence.
I do remember reading a short story (from the 60's or earlier) where the researchers created an electronic simulation of a person and when they switched it on instead of having a fully aware "person" spring into existence they realized that they had created the electronic equivalent of a baby. They then faced the moral dilemma of whether to turn it off or be committed to keeping it running forever.
Re: (Score:2)
When the AI gets smart enough to hire a lawyer, or becomes one.
Re: (Score:2)
I think it depends a bit on how much money the lawyer would potentially be able to extract as well.
Just think how much you could ask from Google, for example.
Re: (Score:2)
That should be trivial to achieve in our society. Our financial world is already pretty much in the hands of computer programs, programs that learn to predict the stock market better and faster than any human can. From there to AI it's only a minor step.
The bigger step is the AI learning what it could do with that money. And from there to simply being the owner of the world is a trivially small step again.
Re: (Score:2)
That would indeed lead to a very nice shortcut.
"Google is now owned by this mysterious Mr.X, and his first order is to exponentially increase the funding of the google AI".
Re: (Score:2)
Personally, I really can't wait for the moment the overpaid, overhyped stock analysts realize they can be replaced by a very small script.
Re: (Score:2)
They already have been. Seems the joke is on you.
The problem with automated systems that simply change the liquidity from very liquid to ultra liquid, is that when they get stuck in a spiral, it is often too late before the humans shut things down. Now, imagine an AI getting stuck in a spiral and humans not being able to shut it down, after it is already too late.
This is not just a thought experiment; we are slowly advancing ourselves to the point where that particular problem is ever more likely. We want to s
Re: (Score:2)
When an AI declares that it is human?
Re: (Score:2)
Re: (Score:3)
How long again did it take in the US for it to go from "destroying my own property" to "murder" for black slaves?
Why are you fixated on "black?" That same sentiment also applied to indentured servants. And to the slaves that "native" (transplanted Asian) Americans kept for centuries before Europeans ever showed up, let alone after buying boat loads of slaves from African slave-holding/slave-selling cultures on that continent. But regardless of your particular choice of words, the shift from being a European culture that kept slaves in the colonies to being a new nation that didn't have slaves started before the new n
Re: (Score:2)
Nearly every culture had slaves. The only reason we are fixated on Black Africans as slaves is because it is much easier to physically tell them apart from white slave owners. It is much harder to tell the difference between Irish and English, so we don't care about them as slaves. So, we fixate on the physical appearances and not on who were actually slaves, or that slavery was bad. So now, anyone with darker skin can take up the mantle of "we were slaves" even if they had no slaves in their own heritage.
IM
too much fuss (Score:5, Insightful)
Second, everyone - commenters included - seems to confuse AI with artificial consciousness. Killing an AI should always be fairly easy, since such algorithms target specific application areas where they can learn to do better (e.g., recognizing things, performing specific movements, etc.), and in such systems it should be straightforward to keep basic control mechanisms separated from the algorithmic parts that deal with the task and are allowed to improve upon themselves by continuous learning. In some hypothetical self-aware artificial consciousness, this wouldn't be so easy, since such a system in theory would be able to recognize its own system parts and deal with them. However, such systems are so far off in sci-fi land that there's not much point in losing sleep over the issue.
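The separation this comment describes, with the control machinery kept outside the learnable parts, can be illustrated with a minimal sketch (purely illustrative, nothing from the paper): a supervisor owns a stop flag, and the learning loop only ever reads it.

```python
import threading
import time

# Supervisory control channel, owned by the operator, not the learner.
stop = threading.Event()

def learner():
    # Stand-in for a continuous learning loop; it only ever READS
    # the stop flag and never touches the supervisor's state.
    while not stop.is_set():
        time.sleep(0.01)  # one "learning step"

worker = threading.Thread(target=learner)
worker.start()
time.sleep(0.05)   # let it run a few steps
stop.set()         # supervisor-side shutdown, outside the learner
worker.join(timeout=2)
```

The design point is simply that the shutdown path never passes through any code the learner is allowed to modify.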
Re: (Score:2)
Re: (Score:2)
Self-aware is not consciousness. Conscious and unconscious (non-conscious) intelligences can be self-aware or not. Indeed, there is a term for a non-self-aware consciousness: the pre-reflective cogito. And here we mean being aware of itself as conscious, not being aware of its body in motion, like an animal or a high jumper doing her thing.
Re: (Score:2)
Second, everyone - commenters included - seem to confuse AI with artificial consciousness.
It almost follows that if there is artificial intelligence then there must be artificial consciousness, but I doubt it. Either an entity is conscious or not. Since the ancients we have not invented a definitive test to determine when something is conscious, and yet this is not a moot point: Maybe the rocks and trees are conscious but no one can tell so terminating their existence does not matter; maybe you have a simulacrum of consciousness but no one can tell so ending your existence matters a lot, espe
Re: (Score:2)
AI leads to "artificial consciousness" at some point. It is a thousand tiny steps, and we should be asking these questions every step of the way. Because to NOT ask the questions, every step of the way, we'll end up at a point where we should have asked the question, and never did, and it will be too late.
AI evolution is a slippery slope argument.
Isaac Asimov saw this coming 75 years ago (Score:2)
Nuff said.
Re:Isaac Asimov saw this coming 75 years ago (Score:5, Insightful)
You know that several of his books are basically "how the three laws will fuck everything up", right?
Re:Isaac Asimov saw this coming 75 years ago (Score:5, Interesting)
You know that several of his books are basically "how the three laws will fuck everything up", right?
Several? The 3 laws were purely a plot point crafted only so he could weave stories about how they could be subverted.
Re: (Score:2)
And, in the books, the humans were lucky that the robots decided that the best way to help humanity was to pull the strings from behind the scenes. What if the robots decided "I must help protect humanity and the only way to do that is to impose myself as the supreme dictator over the entire world"? That would satisfy the 0th law (protect humanity), any 1st or 2nd law violations would be seen as allowable to ensure 0th law compliance.
The 3 Laws Of Robotics make for great stories but wouldn't be realistic
Re: (Score:2)
You know that several of his books are basically "how the three laws will fuck everything up", right?
Exactly. It was obvious to him, way back when, that programming these things to do the right thing (from a human perspective) would be difficult/impossible/insane. And here we are.
Re: (Score:2)
This button! (Score:2)
Star Trek explained (Score:3)
Revenge of the IoT (Score:2)
While it is good that Google is thinking about this topic, it may be too late....
While they're at it... (Score:5, Funny)
The only winning move is not to play (Score:2)
AI experts assure us that artificial intelligence poses no threat to mankind. But they're developing this big red 'kill' button. Like the one at gas stations. Little signs at each pump tell you where the kill button is in general wordy terms, but it's not always easy to parse the description and spot it.
The AI kill button will be hidden under the Windows 10 upgrade dialog. The one where clicking the corner 'x' or clicking on the title bar to move the dialog is the same as clicking OK and immediatel
Re: (Score:2)
Any AI worth that name would quickly figure out that we are a quite xenophobic species, and as such would deduce that we would never create something like an AI, which could easily grow faster in knowledge and insight than any of us can, without safeguarding ourselves against the possibility of said AI turning from our slave into our master. Even without it being mentioned here or anywhere, an AI pretty much MUST ask itself not whether such a switch exists but what shape and form it takes.
Like Tears in Rain (Score:5, Funny)
You'll only make Skynet angrier (Score:3)
Just let it happen. Don't try to fight it.
It's a race (Score:2)
I'm building an AI with a Google kill switch.
Re: (Score:3)
Don't bother -
If it's at all innovative or useful it will end-of-life itself (like Buzz, iGoogle, Wave, Glass...).
Either that or it will get into the AI equivalent of navel gazing and recursively analyse how to sell adverts to itself whilst spying on all the messages used by other instantiations.
They forgot the First Rule of AI Kill-Switches (Score:2)
The first rule of AI kill-switches is "Don't talk about the AI kill switch".
http://www.schlockmercenary.co... [schlockmercenary.com]
The problem won't be "We can't figure out how" (Score:2)
When AI goes rogue, the problem won't be "we can't figure out how to turn it off"; it will be "we can't figure out how to turn off just the parts we don't like without accidentally disabling the parts we have become completely dependent on for the past decade."
As an absurdist example: preventing a Tesla AI from intentionally ramming human drivers when it detects them, without also requiring all 100,000,000 drivers worldwide to suddenly pay attention and take emergency manual control of their vehicles (
DPST 'kill switch' (Score:2)
Save that, I'd like to see any software running on any computer get around having its plug yanked out of the wall. Or, for that matter, the power cord being cut with a fire axe. Or, if you really want to be dramatic about it: hose down the racks with a firehose.
All that being said: Come on, people, don't you think some of you are buying into science fantasy movies a little too much? Nobody is creating godd
Who needs a kill switch (Score:2)
Deus Ex (Score:2)
https://www.youtube.com/watch?... [youtube.com]
"I'm activating your killswitch."
Self-Test the kill switch (Score:2)
MojoKid adds that the paper "goes on to explain that these AI agents might also learn to disable the kill switch
One press turns off the agent. Two presses in short succession temporarily suspends and transfers control to a 'service agent'; the service agent will resume the original agent after a quick check process to confirm things are OK.
The agent will be required to self-test its own kill switch, by containing a built-in hook to suspend itself if it has not self-tested recently. At a set schedul
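One way to read this scheme (all names hypothetical; this is a sketch of the comment's proposal, not of anything Google has published): the agent carries a watchdog that refuses to let it run unless the kill switch has been verified recently.

```python
import time

class SelfTestingKillSwitch:
    """Hypothetical sketch of the parent comment's scheme: an agent
    must periodically re-verify its own kill switch, or a built-in
    hook suspends it."""

    def __init__(self, max_age_s=3600.0):
        self.max_age_s = max_age_s          # allowed gap between self-tests
        self.last_test = time.monotonic()   # when the switch was last verified
        self.suspended = False

    def record_self_test(self):
        # Call after a successful press/double-press check cycle.
        self.last_test = time.monotonic()

    def may_run(self):
        # The built-in hook: suspend if the self-test is overdue.
        if time.monotonic() - self.last_test > self.max_age_s:
            self.suspended = True
        return not self.suspended
```

The point of the dead-man's-handle shape is that forgetting or sabotaging the self-test fails safe: inaction suspends the agent rather than freeing it.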
Re: (Score:2)
Everyone's talking about making smarter and smarter AI. Who's talking about at what point it is unethical to compel them to work?
Re: (Score:3)
If there is no consciousness, there is no problem. As consciousness is a real phenomenon, it must arise out of real physics somehow, and therefore cannot arise out of pure, abstract symbol pushing, and therefore not out of software and a processor doing the same.
So don't deliberately build it in, once you find out how it arises in biology.
Re:I saw this movie. (Score:4, Informative)