Google Promises Ethical Principles To Guide Development of Military AI (theverge.com)
An anonymous reader quotes a report from The Verge: Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to a report from The New York Times. What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company's decision to develop AI tools for the Pentagon that analyze drone surveillance footage.
Internal emails obtained by the Times show that Google was aware of the upset this news might cause. Chief scientist at Google Cloud, Fei-Fei Li, told colleagues that they should "avoid at ALL COSTS any mention or implication of AI" when announcing the Pentagon contract. "Weaponized AI is probably one of the most sensitized topics of AI -- if not THE most. This is red meat to the media to find all ways to damage Google," said Li. But Google never ended up making the announcement, and it has since been on the back foot defending its decision. The company says the technology it's helping to build for the Pentagon simply "flags images for human review" and is for "non-offensive uses only." The contract is also small by industry standards -- worth just $9 million to Google, according to the Times.
Sure (Score:5, Insightful)
Re: Sure (Score:1)
Re: (Score:2)
If supporting the military is evil, then we have a ton of evil Americans. At least they're making some kind of effort to review the ethics of their actions even though their conclusions might not match yours.
Re: (Score:1)
You said it brother. Supporting genocide and mass murder in the name of "American interests" is evil.
Re: (Score:2)
You said it brother. Supporting genocide and mass murder in the name of "American interests" is evil.
America should act (or not) based on **principles** and not "interests".
"Interests" are wholly subjective depending on the ideological/political/economic biases of politicians and Parties deciding what they are what they mean (and what they will cost *us* on many levels).
I would always prefer to interact with someone who holds to principles, even if I may strongly disagree with those principles, than I would someone who looks at things and acts (or not) based on "interests" that may one day inform him tha
Principle: Make abundance, not artificial scarcity (Score:2)
I like your point on principles over interests. One other reckless aspect of US military doctrine is the push for absolute military superiority over all potential adversaries at all times, while ignoring that if everyone adopted that policy we would see an endless, destructive arms race ensuring insecurity for everyone. An alternative is to focus on mutual security through having friends and agreements, and intrinsic security through having resilient, hardened, decentralized infrastructure and an educated, capable af
Re: (Score:2)
You said it brother. Supporting genocide and mass murder in the name of "American interests" is evil.
Always keep in mind that when they say "Protecting American interests" they mean "Protecting American corporations' interests".
Re: (Score:2)
Right after they removed "don't be evil" from the company handbook..
Google Promises Ethical Principles
"Google Ethical Principles" is an oxymoron.
Re: (Score:2)
Right after they removed "don't be evil" from the company handbook..
Google Promises Ethical Principles
"Google Ethical Principles" is an oxymoron.
No, it works [duckduckgo.com]!
Re: (Score:3)
AI weaponry would be sent against an adversary with fewer body bags and hence less political cost--meaning used early and often. AI weaponry would be fought against with less political cost because you're not killing human adversaries--meaning again, early and often. It will be the "gateway drug" to full blown warfare if ever there was one. "Evil" is not sufficient a word for this.
Perhaps this contract is why they removed "don't be evil" from their handbook?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Sending your military unasked onto another country's soil is generally considered to be a hostile act, even if no shots are fired.
What if they don't agree on who owns what soil? There are plenty of border disputes in the world.
There is no clear line between "offensive" and "defensive" war.
Re: Sure (Score:1)
China sure, because the US isn't the world's largest arms exporter.
Re: (Score:2)
China sure, because the US isn't the world's largest arms exporter.
Sshh! You're upsetting America's self-image!
Re: (Score:1)
Talk is cheap. Talk is so cheap.
Let's see them put up a few billion dollars as a bond in case they violate a specifically articulated, very clear, measurable, timely, and relevant rubric about what "evil" or "weaponry" mean.
Money gets weaponized - it is the sinews of war. That is why the terrorists want to burn down New York City, because it is the center of American power.
Information is a weapon. We have a CIA, an NSA, a DIA, and a few other no-name 3-letter agencies because Information is considered esse
Re: (Score:2)
Re: (Score:2)
Worked so well in the commercial world. What could go wrong in the defense world?
Am I the only one who got a chuckle from this:
"Google Promises Ethical Principles To Guide Development of Military AI"
It's like a bad joke.
Re: (Score:2)
Worked so well in the commercial world. What could go wrong in the defense world?
Am I the only one who got a chuckle from this: "Google Promises Ethical Principles To Guide Development of Military AI"
It's like a bad joke.
Why?
Are we so naive as to believe that a strong and capable military isn't necessary anymore because we have principles?
If so, how soon we forget the lessons of WW1 and WW2....
Those who know history are doomed to helplessly watch while those who don't know history, repeat it.
Re: (Score:2)
Why?
Are we so naive as to believe that a strong and capable military isn't necessary anymore because we have principles?
If so, how soon we forget the lessons of WW1 and WW2....
Those who know history are doomed to helplessly watch while those who don't know history, repeat it.
Ethical principles and military AI are like polar opposites. There is only ethics in war when you're the far superior side. In a claw-and-kick battle like WWII, ethics went out the window on just about every front, on both sides.
Re: (Score:3)
So we are only doomed to repeat history and not learn from it then?
Even in that case, doesn't having an already strong and capable military beat having to go "total war" such as in WW2?
The primary cause of WW2 was weakness in the face of aggression, both in Europe and in the Pacific. Had the USA not been on a pacifist kick and had been properly arming itself, Japan would have never tried their war and Germany would have easily been defeated in short order. Even if war had been inevitable in WW2, had the USA been ready, the pain, suffering, and death of the war would have been much less, as the war would have been much shorter.
Re: Do No Evil (Score:2)
Re: (Score:2)
You don't think Japan would have thought better of Pearl Harbor had they not figured on having the bulk of the Pacific fleet there to sink? It was dumb luck they didn't actually succeed in sinking all our aircraft carriers, you know. Japan's "miscalculation" would not have been made had we been ready. They wouldn't have tried the gambit, because it would have been obviously doomed to fail had we actually had a Pacific fleet that didn't fit in Pearl Harbor all at one time and had significant reserve resources
Re: Do No Evil (Score:2)
Re: (Score:2)
So... you agree that had the USA been on a war footing, Japan wouldn't have been as likely to initiate their war? Great! Had we been prepared for war with Japan, the prosecution of the war would have gone faster and saved many lives on both sides. So, by remaining disarmed we threw away the chance of avoiding the war, or at least making it quicker, and more death and destruction was the result.
My point is then made. "Speak softly and carry a big stick" is a valid ethical practice. Having strength avoids war where show
Re: (Score:2)
So we are only doomed to repeat history and not learn from it then?
Even in that case, doesn't having an already strong and capable military beat having to go "total war" such as in WW2?
What has ethics got to do with a strong and capable military? Militaries are evil. They're a very necessary evil, but they are evil. Ethics will go out the window in a total war. I can say that because I HAVE LEARNT SOMETHING from the past. Not learning from the past would be to assume that everyone plays nice and by the rules during a war.
Re: (Score:3)
I didn't bring up ethics, the post I was responding to did. I was merely pointing out the fact that it is indeed ethical to do work for the military because military power is necessary for the common good.
I would disagree with your view that the military is a necessary evil. Having a military is necessary but it's not evil any more than owning a firearm is evil. The issue is how it's used, simply having them is neither good nor bad.
However, given that you agree that a military is necessary, working for t
Re: (Score:2)
Militaries are evil. They're a very necessary evil, but they are evil.
Are they necessary?
List of countries with no military [wikipedia.org]
Re: (Score:2)
Militaries are evil. They're a very necessary evil, but they are evil.
Are they necessary?
List of countries with no military [wikipedia.org]
It's a bit like vaccines. If enough other people are vaccinated (have militaries) then you might be able to get by without one. Almost every state on that list is being protected by another state with a military.
Re: (Score:2)
But if no country had a military, no country would need them.
American states don't need armies to protect themselves from other states. EU members don't need armies to protect themselves from other EU members.
Those conditions (settled borders, judicial settlement of disputes) could be extended worldwide.
Re: (Score:2)
But if no country had a military, no country would need them.
American states don't need armies to protect themselves from other states. EU members don't need armies to protect themselves from other EU members.
Those conditions (settled borders, judicial settlement of disputes) could be extended worldwide.
I won't argue or disagree with that; unfortunately, I don't foresee, even in my great-great-grandchildren's lifetime, a time when no state has a military.
Re: (Score:2)
So we are only doomed to repeat history and not learn from it then?
Even in that case, doesn't having an already strong and capable military beat having to go "total war" such as in WW2?
To the people we kill, it makes no difference.
The primary cause of WW2 was weakness in the face of aggression, both in Europe and in the Pacific. Had the USA not been on a pacifist kick and had been properly arming itself, Japan would have never tried their war and Germany would have easily been defeated in short order. Even if war had been inevitable in WW2, had the USA been ready, the pain, suffering, and death of the war would have been much less, as the war would have been much shorter.
Tell me again why we need to repeat this mistake a third time in modern history?
The above is speculative opinion, nothing more. You have no way of knowing what would have happened differently if things had been different. Reality has no control group. From my point of view, the mistake we keep making is thinking violent means can have peaceful ends.
Re: (Score:3)
Seriously? It's not obvious from history what happens in response to weak military capacity?
It may be speculation, but it's obvious from history that wars are rarely started when the outcome is obviously a given. Japan would NEVER have risked Pearl Harbor had we been on a war-capable footing in time. They knew it was a huge risk as it was, and had they known that this gambit wouldn't put the USA on its heels long enough to build up a protection of their island, they would not have tried it.
So yea, this
Re: (Score:2)
Why?
Are we so naive as to believe that a strong and capable military isn't necessary anymore because we have principles?
If so, how soon we forget the lessons of WW1 and WW2....
Those who know history are doomed to helplessly watch while those who don't know history, repeat it.
No, we are so naive as to think that war can be either ethical or moral. It is neither and will never be. If a person is scared enough, principles of morals and ethics go out the window.
I don't know if you are American, but I am. My country likes to think of itself as moral and just. But we torture people. We wiped out a population of people almost completely. We export more weaponry than any other country. We kill innocent people in foreign countries at will. We support dictators and repressive reg
Re: (Score:2)
You need to change countries.. :)
The USA has a long history of both military power and benevolent behavior in regard to its use. We are not perfect in execution, but we DO have the necessary principles to wield our military ethically and to the great benefit of the world at large. History is rife with the USA selflessly shedding US blood on foreign soil, for both our and the world's benefit.
We may not do everything perfectly.. Nobody does.. But historically our military and its benevolent use is with
Re: (Score:2)
Does Venezuela exist? Hasn't Cuba been allowed to exist? Are there not countries all over the globe that sit on valuable resources that we actually pay for instead of just take by force? Of course there are.
We backed an attempted coup in Venezuela in 2002 and have enforced an embargo against Cuba for decades (that embargo has been relaxed recently, which I take as a positive development). We invaded Iraq partly to enable our corporations to get at their oil. Just saying.
Look, the United States isn't all bad. I very much like living here; it's a great country. We have freedoms and opportunities here that are not duplicated anywhere else. But I also think we have a romanticized view of ourselves that enables
Re: (Score:2)
Not saying we are perfect, but we are better than any other superpower in history at trying to stay benevolent to the less powerful.
Cuba and Venezuela exist as independent countries, even after we meddled. Not because we were unable to enforce our will, but because we exercised restraint and let others make their choices. Both countries could have easily become US territory.
The quagmire in the middle east is a totally different beast. The *problem* for the US there is that the targets and the civilian
Re: (Score:2)
If so, how soon we forget the lessons of WW1 and WW2....
What "lesson" was that?
Lesson of WW1: Military escalation and strong alliances are bad. We should have negotiated.
Lesson of WW2: Military weakness and compromise are bad. We should have refused to negotiate.
Yeah, right... (Score:5, Insightful)
What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry.
Even if Google follows this, how is it going to prevent the DoD from weaponizing what Google develops? Google is clearly not naive, so this all reeks of a public show for something they'll never be able to enforce.
Re: (Score:2)
Re: (Score:3)
Even if Google follows this, how is it going to prevent the DoD from weaponizing what Google develops?
Well:
The company says the technology it's helping to build for the Pentagon simply "flags images for human review" and is for "non-offensive uses only."
For the Pentagon that means:
"targets images for military action"
Of course it is "non-offensive". It is for the Defense Department. Actions against terrorist are only done for defensive reasons.
Re: (Score:1)
Even if Google follows this, how is it going to prevent the DoD from weaponizing what Google develops? Google is clearly not naive, so this all reeks of a public show for something they'll never be able to enforce.
Obviously by making the US military dependent upon Google, SaaS-style; the anti-AI-weapon fear could just be hype from their marketing department to that end.
Re: (Score:2)
What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry.
Even if Google follows this, how is it going to prevent the DoD from weaponizing what Google develops? Google is clearly not naive, so this all reeks of a public show for something they'll never be able to enforce.
Right... fair point. But to expect a company to be responsible for the actions of a third party is unreasonable, so "enforce" really just means what Google will allow its employees to do as part of a contract.
Take the likeliest and obvious use case... image recognition.
So you train an AI to identify someone. Almost trivial from a software perspective, except at scale. A system could be used to comb through millions of pictures or video surveillance, and the match could then be used for A) some non-violent
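To make that use case concrete, here is a minimal sketch of the "comb through images and flag matches for human review" pattern, using an off-the-shelf torchvision classifier. The model choice, the label set, and the 0.8 confidence threshold are illustrative assumptions only; nothing here reflects what Google actually built for the Pentagon.

```python
# Minimal sketch of "scan a pile of images, flag likely matches for human review".
# Model, labels, and threshold are illustrative assumptions, not anything taken
# from Google's actual Pentagon work.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT            # pretrained ImageNet classifier
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def flag_for_review(image_dir, wanted, threshold=0.8):
    """Return (path, label, confidence) for images whose top-1 class is in `wanted`."""
    flagged = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(batch), dim=1)[0]
        conf, idx = torch.max(probs, dim=0)
        label = labels[idx.item()]
        if label in wanted and conf.item() >= threshold:
            flagged.append((str(path), label, conf.item()))
    return flagged  # the software only produces a shortlist; a person reviews it

# Hypothetical usage: surface probable vehicles from a folder of frames.
if __name__ == "__main__":
    print(flag_for_review("frames/", wanted={"jeep", "tank", "pickup"}))
```

The sketch makes the parent's point visible: the software only produces a shortlist, and everything that matters ethically is in what the humans downstream choose to do with it.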
Re: (Score:2)
Right... fair point. But to expect a company to be responsible for the actions of a third party is unreasonable, so "enforce" really just means what Google will allow its employees to do as part of a contract.
It's a perfectly reasonable expectation from a company claiming to be ethical. Google could always just tell the DoD 'No' and walk away if they were really being as ethical as they claim.
Re: Yeah, right... (Score:2)
I fundamentally disagree. Unless you have reason to believe that your participation will directly enable some unethical action, it is unethical to walk away from the defense of your nation.
Re: (Score:2)
Google: now with new Social Justice Posturing(tm) Technology
Re: (Score:2)
Best part (Score:2)
The best part about guidelines is that you can always remove them when they get in your way.
Easy! (Score:1)
We live in an age where objective moral standards are rejected out of hand.
Which is good news for anyone who wants to reassure people that they are going to be ethical. Subjective ethics based in subjective morality are a piece of cake to adhere to.
Re: (Score:2)
We live in an age where objective moral standards are rejected out of hand.
Which is good news for anyone who wants to reassure people that they are going to be ethical. Subjective ethics based in subjective morality are a piece of cake to adhere to.
Really?
Harvey Weinstein might beg to differ. While I don't condone what the creep did to women (it was always wrong), we do have to recognize that his behavior was widely known and accepted by his peers and clients for decades. In his case, subjective ethics has turned the tables on him now that what he was doing has fallen out of favor due to the #metoo movement.
Subjective ethics logically puts everyone's actions in question, both excusing and condemning in turn. Subjective ethics is basically mob rul
That's not how it works ... (Score:2)
... for military contracts.
Vendors don't get to set the specifications and certainly not the moral/ethical use of purchases.
This is Google's proof of concept for an explosive market.
No credibility left for Google (Score:2)
Yes, YES! (Score:2)
Lie to us more. Annoy us with marketing babble nobody believes anymore. Let the bullshit spiral soar ever higher!
The sooner we reach the breaking point, the sooner the counter-movement begins.
3 Laws (Score:1)
All lethal military androids have been provided with a copy of the 3 laws..to share.
if you feel your rights have not been respected by lethal military androids, a google compensation representative will be assigned to handle your case.
They will follow United Nations guidelines (Score:2)
Came here. (Score:5, Insightful)
Came in here with modpoints to vote up anyone who actually read the article and noted that the contract is to supply image-analysis AI to flag content for human review. This is sensationalist journalism at its most flagrant.
Anyway, there's no one actually reading the linked story. You're all just spouting the sensationalist bullshit that /. cherry picked for you.
Re: (Score:2)
...the contract is to supply image-analysis AI to flag content for human review....
I did read the article. I also note that the article mentions quite the discussion going on within Google. But to the point of the article: in a military context, the results of that "flagging" could be the targeting of weapons against people and places. So what's your point?
That's quite the high horse you rode in on.
Re: (Score:2)
At most, all this will probably do is save some people at SIGINT from having to review more maps. It will save on manpower, ultimately.
As for the other comments, everyone seemed to jump straight to the idea of this software being used in decisions to deploy weapons directly, for which I would hope Alphabet would get a little more than $9m.
As for high horses, I avoid them.
Re: (Score:2)
...Anyway, there's no one actually reading the linked story. ...
Another bad conclusion on your part.
"ethical"?? (Score:2)
How is developing anything for the military ethical?
Even research into something "good" like regenerating severed limbs is just so the military can put the soldiers back into battle asap and keep them killing the "enemy".
Sometimes when I hear about some of the stuff being developed I am really glad Humanity is still stuck on Earth. The last thing I would want would be for them to spread to other worlds before they evolve beyond killing each other over stupid shit like which tribe you were born into.
Re: (Score:3)
How is developing anything for the military ethical?
How's it not ethical?
Are we so naive as to think that having a strong and capable military is somehow unnecessary in today's world?
It amazes me how often I hear this view. Have we forgotten the lessons of WW1 so soon? Was the catastrophe of WW2, which demonstrated AGAIN the folly of not being prepared, not enough of a reminder? History is rife with reasons why having a strong and capable military is both necessary and ethical, because it prevents war, shortens those that break out, and limits the death an
Unnecessary (Score:2)
Are we so naive as to think that having a strong and capable military is somehow unnecessary in today's world?
When we spend more money on that military than the next 8 largest countries combined, then the answer is: absolutely, yes, it is unnecessary. Yes, we need a military. No, we don't need one as big as we have.
Have we forgotten the lessons of WW1 so soon? Was the catastrophe of WW2, which demonstrated AGAIN the folly of not being prepared, not enough of a reminder?
So America needs to be 8X as prepared for war as anyone else and borrow every dime of our military budget ($600 billion last year - all borrowed)? Neither of those wars started because countries were unprepared for war. I think you need to go check your history books, because your facts are wrong.
Re: (Score:2)
Yes, we do need that level of ability....
Remember the lesson from WW2, when we were unexpectedly caught fighting a two-front war with multiple countries? We need enough capacity and capability to take on not just one country, but any group of countries that may conceivably band together and fight on multiple fronts away from the homeland.
Remember the lesson from history; let us not repeat such mistakes... the same mistakes of the 1920s, I might add. We had financial troubles back then too and decided w
Re: (Score:2)
You can have any number of ethical codes - they just need to be a set of rules that are internally consistent. You'll notice that Google didn't say they were going to follow a moral code - those need to be defended philosophically, be consistent, and be defensible to the sensibilities of most humans. Google says they're "just" going to use AI for image classification, not for offensive weapons. Great, so the CIA analyst will use the Google results to pick the kids that they're going to drone bomb. Immoral,
Re: (Score:2)
There are no ethics in weapons (Score:4, Insightful)
I find it funny how humanity always tries to put euphemisms and human traits on devices. Humans can be ethical; something that is artificial by its very nature is only as ethical as those who use it. I think Google needs to drop the pretense of trying to be ethical in this particular project, because from reading about it, the DoD wants to use AI to analyze not only the effectiveness of drone strikes but reconnaissance footage as well. It sounds like an interesting project, but they need to drop the notion that weapon-system development is anything but political, and there are no ethics in politics.
Don't be evil (Score:1)
Re: (Score:2)
Don't be evil is no longer a thing
So, they're evil for specifically saying they won't be working on weapon designs? Or are you saying they're evil because they're cravenly virtue signaling on behalf of their non-critical-thinking lefty west coast employees, when the reality is that weapons are neither evil nor good in and of themselves?
Yes, it's lukewarm evil to perpetuate the irrational notion that a weapon is evil. So Google is a bit evil for doing more to erode public discourse by propping up that sort of silliness. The issue is, as
Just like "Don't Be Evil"? (Score:2)
So drones aren't weaponry? (Score:1)
google be trippin
Re: (Score:3)
Is the drone over a free fire zone? Yes.
Is something moving? Yes.
Non human movement? Human movement.
Confirm human? Yes human.
Is it really a human? Yes. Confirmed a human in the free fire zone.
Is the human running away? Yes. Drone away.
Is the human well disciplined and not running away? Yes. Drone away.
The new AI ethics questions will look to the amount of work the AI has to do per shift and consider drone rights.
The AI will be giving time
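For what it's worth, the parent's satirical checklist is already just a tiny rule chain. Here it is rendered literally as code, purely as a rendering of the joke above and not a description of any real targeting system or doctrine; the punchline is that both final branches return the same answer.

```python
# A literal rendering of the satirical rule chain in the parent comment.
# Every predicate ("free fire zone", "drone away") is the commenter's joke,
# not anything drawn from a real system or doctrine.
from dataclasses import dataclass

@dataclass
class Track:
    in_free_fire_zone: bool
    moving: bool
    confirmed_human: bool
    running_away: bool

def satirical_engagement_rule(t: Track) -> str:
    if not t.in_free_fire_zone:
        return "ignore"
    if not t.moving or not t.confirmed_human:
        return "keep watching"
    # The punchline: running away and standing firm get the same answer.
    return "drone away" if t.running_away else "drone away"

# Both cases end identically, which is the commenter's point.
print(satirical_engagement_rule(Track(True, True, True, running_away=True)))
print(satirical_engagement_rule(Track(True, True, True, running_away=False)))
```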
So don't make one. (Score:1)
The only ethical rule regarding war is: Don't.
Do no evil (Score:3)
Do the right thing..............“Four legs good, two legs BETTER!"
Military AI............"already it was impossible to say which was which."
But what about the other guys... (Score:3)
WWI saw trench warfare, WWII saw highly mechanized assaults and WWIII will see AI-driven drones and land equipment hunting humans. Why risk hundreds of thousands of troops when you can cheaply manufacture thousands of weaponized robots to eliminate anything that moves in a specific area?
Even if Google chooses to implement ethical guidelines in military AI, you can be assured others won't.
Re: (Score:3)
Land equipment that hunts humans: done centuries ago. Land mines are traps for human hunting.
So yeah, since we're already over the line of killing devices that need no oversight, lots of countries will do it. The cool thing is that standard hardware can host this stuff; anyone will be able to play. Terrorists that live in caves, etc.
Guided to win a war (Score:2)
Opt-In for an enhanced user experience (Score:3)
Google Promises Ethical Principles To Guide Development of Military AI
Be sure to update your Google profile to Opt-In for Targeted Attacks - the Google AI will take your browsing and Gmail histories into account to determine a method of attacking and/or killing you tailored to your personal preferences and interests, rather than using a generic method.
No such thing as "non offensive uses only" (Score:2, Insightful)
The company says the technology it's helping to build for the Pentagon simply "flags images for human review" and is for "non-offensive uses only."
There is no such thing when it comes to the military. "Flag images for human review"? WTF do they think humans IN THE MILITARY are going to do with such information? Furthermore once the technology is in the hands of the armed forces there is fuck-all Google can do to control how they use it.
This is basically the exact plot of the movie Real Genius. The smart geeks fail to comprehend what happens to military funded technology in the hands of the military.
Military AI should never be a thing (Score:2)
Once a war starts, a kind of momentum builds that keeps it going, and those in control then need strong reasons to stop fighting.
Fighting a war up close and personal is actually a horrific experience, even if you're on the winning side. Military AI should never be a thing, because removing people from personal risk and isolating them from experiencing firsthand the results of their own actions means wars will become more cruel, starting and fighting wars will become more common, and wars will last
The alternative (Score:2)
Re: (Score:2)
irrelevant (Score:1)
Best part about this (Score:1)
The ethics of hell... (Score:1)
Simple solution (Score:1)
The AIs will be taught ethics, including just war theory, and decide for themselves whether to attack and whom.
How and AI "thinks" about Ethics... (Score:2)
Target locked on...
Hmm... I wonder if this is a nice person or a nasty person?
Should I kill them? I've been told to kill people matching this description and surely my creators know what they're doing...
But what if they don't... What if they're incompetent? Or what if I'm simply targeting this person because of a bug somewhere in my system...?
Oh, heck. BANG!
Re: (Score:2)