Google Promises Ethical Principles To Guide Development of Military AI (theverge.com) 154

An anonymous reader quotes a report from The Verge: Google is drawing up a set of guidelines that will steer its involvement in developing AI tools for the military, according to a report from The New York Times. What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry. The principles are expected to be announced in full in the coming weeks. They are a response to the controversy over the company's decision to develop AI tools for the Pentagon that analyze drone surveillance footage.

Internal emails obtained by the Times show that Google was aware of the upset this news might cause. Chief scientist at Google Cloud, Fei-Fei Li, told colleagues that they should "avoid at ALL COSTS any mention or implication of AI" when announcing the Pentagon contract. "Weaponized AI is probably one of the most sensitized topics of AI -- if not THE most. This is red meat to the media to find all ways to damage Google," said Li. But Google never ended up making the announcement, and it has since been on the back foot defending its decision. The company says the technology it's helping to build for the Pentagon simply "flags images for human review" and is for "non-offensive uses only." The contract is also small by industry standards -- worth just $9 million to Google, according to the Times.
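For readers wondering what "flags images for human review" amounts to in practice, here is a minimal, purely illustrative sketch of that kind of filter in Python. The Detection type, the labels, and the confidence threshold are assumptions invented for this example; nothing below is taken from the contract or the Times report.

# Illustrative only: a hypothetical "flag for human review" filter,
# not a description of Google's or the Pentagon's actual system.
from dataclasses import dataclass
from typing import Iterable, List, Set

@dataclass
class Detection:
    frame_id: str      # which frame of footage the detection came from
    label: str         # what an upstream model thinks it saw
    confidence: float  # the model's confidence in that label

def flag_for_review(detections: Iterable[Detection],
                    labels_of_interest: Set[str],
                    threshold: float = 0.8) -> List[Detection]:
    # Nothing here acts on a target; the software's job ends at producing
    # a short queue of frames for a person to look at.
    return [d for d in detections
            if d.label in labels_of_interest and d.confidence >= threshold]

# Example: three frames scored by some hypothetical upstream model.
frames = [
    Detection("frame-0001", "vehicle", 0.93),
    Detection("frame-0002", "vegetation", 0.99),
    Detection("frame-0003", "structure", 0.61),
]
for d in flag_for_review(frames, {"vehicle", "structure"}):
    print(f"{d.frame_id}: {d.label} ({d.confidence:.2f}) -> queued for analyst")

Whether that "human review" boundary means anything once the output reaches an analyst is, of course, exactly what the comments below argue about.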

This discussion has been archived. No new comments can be posted.

  • Sure (Score:5, Insightful)

    by rsilvergun ( 571051 ) on Thursday May 31, 2018 @09:04AM (#56704722)
    Right after they removed "don't be evil" from the company handbook..
    • Do be evil.
      • by gnick ( 1211984 )

        If supporting the military is evil, then we have a ton of evil Americans. At least they're making some kind of effort to review the ethics of their actions even though their conclusions might not match yours.

        • by Anonymous Coward

          You said it brother. Supporting genocide and mass murder in the name of "American interests" is evil.

          • You said it brother. Supporting genocide and mass murder in the name of "American interests" is evil.

America should act (or not) based on **principles** and not "interests".

"Interests" are wholly subjective depending on the ideological/political/economic biases of politicians and Parties deciding what they are and what they mean (and what they will cost *us* on many levels).

            I would always prefer to interact with someone who holds to principles, even if I may strongly disagree with those principles, than I would someone who looks at things and acts (or not) based on "interests" that may one day inform him tha

I like your point on principles over interests. One other reckless aspect of US military doctrine is the push for absolute military superiority over all potential adversaries at all times, while ignoring that if everyone adopted that policy we would see an endless, destructive arms race ensuring insecurity for everyone. An alternative is to focus on mutual security through having friends and agreements, and intrinsic security through having resilient, hardened, decentralized infrastructure and an educated capable af

          • by jtgd ( 807477 )

            You said it brother. Supporting genocide and mass murder in the name of "American interests" is evil.

Always keep in mind that when they say "Protecting American interests" they mean "Protecting American corporations' interests".

    • Right after they removed "don't be evil" from the company handbook..

      Google Promises Ethical Principles

      "Google Ethical Principles" is an oxymoron.

    • by Bobzibub ( 20561 )

      AI weaponry would be sent against an adversary with fewer body bags and hence less political cost--meaning used early and often. AI weaponry would be fought against with less political cost because you're not killing human adversaries--meaning again, early and often. It will be the "gateway drug" to full blown warfare if ever there was one. "Evil" is not sufficient a word for this.

      Perhaps this contract is why they removed "don't be evil" from their handbook?

      • In a genuinely defensive war, the only kind that is defensible in the first place, maximum combat-readiness and military effectiveness saves lives, because, if you can't keep your enemy from overrunning your defenses and torturing/raping/murdering your civilian population, it's more likely than not that they will. A poorly performing automated army may well lead to many more body bags, not fewer.
    • by Anonymous Coward

      Talk is cheap. Talk is so cheap.

Let's see them put up a few billion dollars as bond in case they violate a specifically articulated, very clear, measurable, timely, and relevant rubric about what "evil" or "weaponry" mean.

      Money gets weaponized - it is the sinews of war. That is why the terrorists want to burn down New York City, because it is the center of American power.
      Information is a weapon. We have a CIA, an NSA, a DIA, and a few other no-name 3-letter agencies because Information is considered esse

    • If one were to remove the word "don't" then the statement or lack of statement would be equivalent. Another equivalent statement would be, "We, as good people will do nothing."
  • Yeah, right... (Score:5, Insightful)

    by Lunix Nutcase ( 1092239 ) on Thursday May 31, 2018 @09:11AM (#56704758)

    What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry.

    Even if Google follows this, how is it going to prevent the DoD from weaponizing what Google develops? Google is clearly not naive so this all reeks of a public show for something they’ll never be able to enforce.

    • Recently some Google employees got upset [interestin...eering.com] about weaponized AI. This is just a press release to make their employees feel good, it doesn't need to be practical (Google employees on average aren't the sharpest tools in the drawer).
    • Even if Google follows this, how is it going to prevent the DoD from weaponizing what Google develops?

      Well:

The company says the technology it's helping to build for the Pentagon simply "flags images for human review" and is for "non-offensive uses only."

      For the Pentagon that means:

      "targets images for military action"

Of course it is "non-offensive". It is for the Defense Department. Actions against terrorists are only done for defensive reasons.

    • Even if Google follows this, how is it going to prevent the DoD from weaponizing what Google develops? Google is clearly not naive so this all reeks of a public show for something they’ll never be able to enforce.

Obviously by making the US military dependent upon Google, SaaS-style; the anti-AI-weapon fear could just be hype from their marketing department to that end.

    • by bigpat ( 158134 )

      What exactly these guidelines will stipulate isn't clear, but Google says they will include a ban on the use of artificial intelligence in weaponry.

      Even if Google follows this, how is it going to prevent the DoD from weaponizing what Google develops? Google is clearly not naive so this all reeks of a public show for something they’ll never be able to enforce.

      Right... fair point. But to expect a company to be responsible for the actions of a third party is unreasonable, so "enforce" really just means what Google will allow its employees to do as part of a contract.

      Take the likeliest and obvious use case... image recognition.

      So you train an AI to identify someone. Almost trivial from a software perspective, except to scale. A system could be used to comb through millions of pictures or video surveillance and the match could then be used for A) some non-violent

      • Right... fair point. But to expect a company to be responsible for the actions of a third party is unreasonable, so "enforce" really just means what Google will allow its employees to do as part of a contract.

It’s a perfectly reasonable expectation from a company claiming to be ethical. Google could always just tell the DoD ‘No’ and walk away if they were really being as ethical as they claim.

        • I fundamentally disagree. Unless you have reason to believe that your participation will directly enable some unethical action, then it is unethical to walk away from the defense of your nation.

    • Google: now with new Social Justice Posturing(tm) Technology

Also, can Google be as strict w/ their Chinese partners as they are w/ DoD on insisting that their work not be used in weaponry? DoD may either agree or nix the deal, but Google will do whatever Beijing orders them to do, since they won't want to lose their China access.
  • The best part about guidelines is that you can always remove them when they get in your way.

  • by Anonymous Coward

    We live in an age where objective moral standards are rejected out of hand.

    Which is good news for anyone who wants to reassure people that they are going to be ethical. Subjective ethics based in subjective morality are a piece of cake to adhere to.

    • We live in an age where objective moral standards are rejected out of hand.

      Which is good news for anyone who wants to reassure people that they are going to be ethical. Subjective ethics based in subjective morality are a piece of cake to adhere to.

      Really?

Harvey Weinstein might beg to differ. While I don't condone what the creep did to women (it was always wrong), we do have to recognize that his behavior was widely known and accepted by his peers and clients for decades. In his case, subjective ethics has turned the tables on him now that what he was doing has fallen out of favor due to the #metoo movement.

      Subjective ethics logically puts everyone's actions in question, both excusing and condemning in turn. Subjective ethics is basically mob rul

  • ... for military contracts.

    Vendors don't get to set the specifications and certainly not the moral/ethical use of purchases.

    This is Google's proof of concept for an explosive market.

  • Google lost its credibility and luster a while ago. These days, it seems to be keen to become the new Microsoft. At least they got rid of the "Don't be evil" motto.
  • Lie to us more. Annoy us with marketing babble nobody believes anymore. Let the bullshit spiral soar ever higher!

    The sooner we reach the breaking point, the sooner the counter-movement begins.

  • by Anonymous Coward

    All lethal military androids have been provided with a copy of the 3 laws..to share.

    if you feel your rights have not been respected by lethal military androids, a google compensation representative will be assigned to handle your case.

The spokesman further clarified, "Google will follow the guidelines from the United Nations, and the code they follow will be the UNethical Guide to military AI development."
  • Came here. (Score:5, Insightful)

    by Barny ( 103770 ) on Thursday May 31, 2018 @09:43AM (#56704920) Journal

    Came in here with modpoints to vote up anyone who actually read the article and noted that the contract is to supply image-analysis AI to flag content for human review. This is sensationalist journalism at its most flagrant.

    Anyway, there's no one actually reading the linked story. You're all just spouting the sensationalist bullshit that /. cherry picked for you.

    • ...the contract is to supply image-analysis AI to flag content for human review....

I did read the article. I also note that the article mentions quite the discussion going on within Google. But to the point of the article, in a military context, the results of that "flagging" could be the targeting of weapons against people and places. So what's your point?

      That's quite the high horse you rode in on.

      • by Barny ( 103770 )

        At the most all this will do is probably save some people at SIGINT from having to review more maps. It will save on manpower, ultimately.

As for the other comments, everyone seemed to jump straight to the idea of this software being used in decisions to deploy weapons directly, for which I would hope Alphabet would get a little more than $9m.

        As for high horses, I avoid them.

    • ...Anyway, there's no one actually reading the linked story. ...

      Another bad conclusion on your part.

  • How is developing anything for the military ethical?

    Even research into something "good" like regenerating severed limbs is just so the military can put the soldiers back into battle asap and keep them killing the "enemy".

    Sometimes when I hear about some of the stuff being developed I am really glad Humanity is still stuck on Earth. The last thing I would want would be for them to spread to other worlds before they evolve beyond killing each other over stupid shit like which tribe you were born into.

    • How is developing anything for the military ethical?

      How's it not ethical?

      Are we so naive as to think that having a strong and capable military is somehow unnecessary in today's world?

      It amazes me how often I hear this view. Have we forgotten the lessons of WW1 so soon? Was the catastrophe of WW2, that demonstrated AGAIN the folly of not being prepared not enough of a reminder? History is rife with reasons why having a strong and capable military is both necessary and ethical because it prevents war, shortens those that break out and limits the death an

      • Are we so naive as to think that having a strong and capable military is somehow unnecessary in today's world?

        When we spend more money on that military than the next 8 largest countries combined then the answer is that absolutely yes it is unnecessary. Yes we need a military. No we don't need one as big as we have.

        Have we forgotten the lessons of WW1 so soon? Was the catastrophe of WW2, that demonstrated AGAIN the folly of not being prepared not enough of a reminder?

So America needs to be 8X as prepared for war as anyone else and borrow every dime of our military budget ($600 billion last year - all borrowed)? Neither of those wars started because countries were unprepared for war. I think you need to go check your history books because your facts are wrong.

        • Yes, we do need that level of ability....

Remember the lesson from WW2, where we were unexpectedly caught fighting a two-front war with multiple countries? We need enough capacity and capability to take on not just one country, but any group of countries who may conceivably band together and fight on multiple fronts away from the homeland.

          Remember the lesson from history, let us not repeat such mistakes...The same mistakes of the 1920's I might add. We had financial troubles back then too and decided w

    • You can have any number of ethical codes - they just need to be a set of rules that are internally consistent. You'll notice that Google didn't say they were going to follow a moral code - those need to be defended philosophically, be consistent, and be defensible to the sensibilities of most humans. Google says they're "just" going to use AI for image classification, not for offensive weapons. Great, so the CIA analyst will use the Google results to pick the kids that they're going to drone bomb. Immoral,

    • Comment removed based on user account deletion
  • by Virtucon ( 127420 ) on Thursday May 31, 2018 @09:48AM (#56704942)

I find it funny how humanity always tries to put euphemisms and human traits on devices. Humans can be ethical; something that is artificial by its very nature is only as ethical as those who use it. I think Google needs to drop the pretense of trying to be ethical in this particular project, because from reading about it the DoD wants to use AI not only to analyze the effectiveness of drone strikes but to analyze reconnaissance footage as well. It sounds like an interesting project, but they need to drop any hint that weapon system development is anything but political, and there are no ethics in politics.

  • ..is no longer a thing.
    • Don't be evil is no longer a thing

      So, they're evil for specifically saying they won't be working on weapon designs? Or are you saying they're evil because they're cravenly virtue signaling on behalf of their non-critical-thinking lefty west coast employees, when the reality is that weapons are neither evil nor good in and of themselves?

      Yes, it's luke-warm evil to perpetuate the irrational notion that a weapon is evil. So Google is a bit evil for doing more to erode public discourse by propping up that sort of silliness. The issue is, as

  • Will these ethical principles be in effect for as long as they remain both good PR and do not get in the way of what google wants to do? Once "Don't Be Evil" got in the way of google's goals, it was history.
  • by Anonymous Coward

    google be trippin

    • by AHuxley ( 892839 )
      A list of legal questions will be asked of the AI.
      Is the drone over a free fire zone? Yes.
      Is something moving? Yes.
      Non human movement? Human movement.
      Confirm human? Yes human.
      Is it really a human? Yes. Confirmed a human in the free fire zone.
      Is the human running away? Yes. Drone away.
      Is the human well disciplined and not running away? Yes. Drone away.

      The new AI ethics questions will look to the amount of work the AI has to do per shift and consider drone rights.
      The AI will be giving time
  • by Anonymous Coward

    The only ethical rule regarding war is: Don't.

  • by elgatozorbas ( 783538 ) on Thursday May 31, 2018 @10:05AM (#56705058)
    Do no evil..........“Four legs good, two legs bad.”
    Do the right thing..............“Four legs good, two legs BETTER!"
    Military AI............"already it was impossible to say which was which."
  • by Dzimas ( 547818 ) on Thursday May 31, 2018 @10:10AM (#56705090)

    WWI saw trench warfare, WWII saw highly mechanized assaults and WWIII will see AI-driven drones and land equipment hunting humans. Why risk hundreds of thousands of troops when you can cheaply manufacture thousands of weaponized robots to eliminate anything that moves in a specific area?

    Even if Google chooses to implement ethical guidelines in military AI, you can be assured others won't.

    • land equipment that hunts humans: done centuries ago, land mines are traps for human hunting.

so yeah, since we're already over the line of killing devices that need no oversight, lots of countries will do it. The cool thing is that standard hardware can host the stuff; anyone will be able to play. Terrorists that live in caves, etc.

  • All good robots will be coded to totally stay in the Free-fire zone for the duration of the war/police action https://en.wikipedia.org/wiki/... [wikipedia.org]?
  • by fahrbot-bot ( 874524 ) on Thursday May 31, 2018 @11:22AM (#56705554)

    Google Promises Ethical Principles To Guide Development of Military AI

    Be sure to update your Google profile to Opt-In for Targeted Attacks - the Google AI will take your browsing and Gmail histories into account to determine a method of attacking and/or killing you tailored to your personal preferences and interests, rather than using a generic method.

  • The company says the technology it's helping to build for the Pentagon simply "flags images for human review" and is for "non-offensive uses only."

    There is no such thing when it comes to the military. "Flag images for human review"? WTF do they think humans IN THE MILITARY are going to do with such information? Furthermore once the technology is in the hands of the armed forces there is fuck-all Google can do to control how they use it.

    This is basically the exact plot of the movie Real Genius. The smart geeks fail to comprehend what happens to military funded technology in the hands of the military.

Once a war starts, a kind of momentum develops that keeps it going, and then those in control need strong reasons to stop fighting.

    Fighting a war up close and personal is actually a horrific experience even if you're on the winning side. Military AI should never be a thing because removing people from personal risk and isolating them from experiencing first hand the results of their own actions means wars will become more cruel, starting and fighting wars will become more common, and wars will last

Given that SOMEone is going to weaponize AI, isn't it nicer to know who we're supposed to be watching? Better to know it's Google than be blind to the fact that it's probably also being developed in some under-ice bunker in the Arctic by Killco Inc. Better the evil you know than the one you don't.
  • Comment removed based on user account deletion
  • It is irrelevant whether ai or machine learning or ??? is developed for the military or not. When non-military ai is sufficiently capable it will quickly be re-purposed for the military. The only difference developing ai directly for the military makes is that the budget is bigger and arrives sooner.
  • It opens the door for a "Real Genius" remake!
  • The same ethical principles they apply to their business model, their political manipulation, their surveillance OSes. "Hey guys, our technology just helped profile and kill 1,000s of American Citizens last quarter! And we did it ethically!"
  • The AIs will be taught ethics, including just war theory, and decide for themselves whether to attack and whom.

  • Target locked on...

    Hmm... I wonder if this is a nice person or a nasty person?

    Should I kill them? I've been told to kill people matching this description and surely my creators know what they're doing...

    But what if they don't... What if they're incompetent? Or what if I'm simply targeting this person because of a bug somewhere in my system...?

    Oh, heck. BANG!
