US Issues Declaration on Responsible Use of AI in the Military (reuters.com) 33

The U.S. government on Thursday issued a declaration on the responsible use of artificial intelligence (AI) in the military, which would include "human accountability." From a report: "We invite all states to join us in implementing international norms, as it pertains to military development and use of AI" and autonomous weapons, said Bonnie Jenkins, Under Secretary of State for Arms Control.
  • by Joe_Dragon ( 2206452 ) on Thursday February 16, 2023 @01:42PM (#63299045)

we will take the men out of the loop at the silos and move all control to NORAD

    • by cayenne8 ( 626475 ) on Thursday February 16, 2023 @01:49PM (#63299065) Homepage Journal

      we will take the men out of the loop at the silos and move all control to NORAD

      Wouldn't you rather play a nice game of....chess?

    • by Tablizer ( 95088 ) on Thursday February 16, 2023 @02:25PM (#63299199) Journal

      The silo scene from War Games [youtube.com] still sends chills up my spine. There have been somewhat similar close calls on both sides. I don't think we are lucky, but merely the product of the Anthropic Principle: we are only here to ponder close calls because they didn't go through. If they had, we wouldn't be pondering the failures. (There may have been a few failure-ponderers left, but not us.)

      The scene is scarier than any chainsaw zombie flick because something very like it almost happened during the Cold War.

      Silo Captain: "We have a valid message, stand by to authenticate...enter launch code...launch order confirmed." (paraphrased)

      Soldier: "Holy shit [to self]"

      Silo Captain: "Insert launch key."

      Soldier: "Okay, they're set. Switch 1 on, switch 2 on, switch 3 on...All missiles enabled."

      Silo Captain: "I want somebody on the goddam phone."

      Soldier: "That's NOT the correct procedure, Captain!"

      Silo Captain: "Screw the procedure, I want somebody on the goddam phone before I kill 20 million people! Get headquarters on the HF!"

      Soldier: "I got nothing, they may have been knocked out already."

      Silo Captain: "Right. On my mark, rotate launch key..."

      Soldier: "Roger, ready to go to launch..."

      Silo Captain slowly moves hand away from key, sweating profusely.

      Soldier: "Turn Your Key, Sir!!!"

      Silo Captain: "I'm sorry."

      Soldier pulls out a gun, aims it at Captain...

      • I always remembered that scene, particularly where he pulls out the gun and says "turn your key, sir" as the other guy puts his hand down, but I completely forgot which movie it was from. Then about 10 years ago I watched War Games again and: "Ahhh, eureka!" :)

    • we will take the men out of the loop at the silos and move all control to NORAD

      Lunch will be provided. Have a WOPR.

  • Apparently she only said that while attending an international summit in The Hague, Netherlands. Not quite an official statement or position of the United States, and I can see why news outlets / censors would want people to believe it was. Perhaps something official from the Pentagon or federal government would make a better gesture?
  • Someone just leaked the whole guidelines:
    Rule 1: only blow up the bad guys
    Rule 2: there is no rule 2
    Absolutely bulletproof
  • Terminator jokes (Score:4, Interesting)

    by timeOday ( 582209 ) on Thursday February 16, 2023 @02:29PM (#63299213)
    Terminator jokes aside, autonomy may actually decrease unwarranted killing. It's not wrong to worry about killer bots. But also consider the status quo: My Lai, the rape of Nanjing, the firebombing of Tokyo. When you put human combatants into mortal danger, after watching enough of their friends die, the cycle of recrimination can rise to unimaginable levels; not only can it, but it always does. Somebody controlling a drone from a safe distance might make more mistakes by accident, but they also might commit fewer intentional atrocities.
    • A drone being controlled by a human isn't the kind of autonomous weapon we need to be worried about at an existential level.

  • I do not trust that ANY military use of AI will not come to humanity's detriment.

    We still have war crimes.
    The only thing preventing the use of AI in warfare (responsibly or not) is the maturity of AI technology.

    • Once the "bad guys" use it to its fullest potential, the "good guys" will have to do the same or disappear. I am not sure how it is actually possible to regulate it without compromising the ability to stay competitive. And the "bad guys", whoever they are, have never promised to keep their military AIs ethical.
      • by dargaud ( 518470 )
        That. In Ukraine, some tiny killer drones have been found crashed after they ran out of juice. They were just loitering around the presidential palace with 1 kg of C4 on board, ready to pounce on you-can-easily-guess-who. Fortunately their endurance was only 30 minutes. Once you have cheap solar drones that can stay up 24/7 with facial recognition, the game is over for some people, unless drone detection and kill capabilities improve dramatically.
  • It's an important discussion to have. There's an emerging field of machine ethics, especially in military use. Questions arise around things such as the chain of command. A system of rules will never be enough, because there may be situations outside the parameters of its programming. With unmanned autonomous weapons (UAW), the machine becomes judge, jury, and executioner.

    I have argued that the ability to disobey an order is intrinsic to respecting the sanctity of human life. It cannot just be a cold blooded killer. The ability to NOT take the shot is just as important as its precision to do so.

    Then the next part is how do we hold these systems accountable? When it's a human we can punish the individual, or those responsible along the chain of command. But how do we punish a UAW? It needs the ability to explain and rationalize to OTHERS the decision that it made. We need a system in place to be able to judge, and a system of punishment and enforcement.

    We need experts in these fields to be able to voice their opinions and help guide the regulations that will come out. We can't go in naively thinking that banning the development of the technology will suffice. If we don't build it, someone else will, and without international norms being set, much like the Geneva Conventions on war crimes, it will be an unregulated mess that will be hard to rein in.

    • With unmanned autonomous weapons (UAW) the machine becomes judge, jury, and executioner.

      Yes, we have those. They're called land mines.

      I have argued that the ability to disobey an order is intrinsic to respecting the sanctity of human life.

      To the machine, there is no such sanctity. To the people deploying the machine, said sanctity only applies to the "right" people (meaning themselves).

      Then the next part is how do we hold these systems accountable?

      This part is sheer fantasy, and will remain so for the foreseeable future. The machines are not sapient. They won't be any time soon. Morally, they are exactly identical to a land mine. Mobility and active targeting don't change that. Accountability attaches to the people deploying them, not the machines t

    • Anyone here who has an opinion on military use of AI: do you watch any videos from the war in Ukraine?

      Watching those videos is the closest nearly any of you will ever get to being in a war; it's the first GoPro war. If you don't watch because it's gross or barbaric or you don't condone violence, that's fine, I just think your opinion on AI in combat doesn't mean anything if you don't know what fighting actually looks like. I'm not even saying you'll know by watching GoPro videos either, because you probably stil

  • The US making this declaration is great... except that it's painfully obvious that countries such as China would never make a similar declaration, or even if they did, it would be only for show. The bigger threat would be to fall behind in the "AI Arms Race" and give a totalitarian dictatorship (aka China) the lead.
    • What is somewhat paradoxical about this discussion in the nuclear era is that all major world powers already do possess and will not relinquish the ability to kill everybody indiscriminately at the push of a button. So, there haven't been any wars directly between major powers in 80 years. The concern over the US and China unleashing killer bots on each other presupposes we could enter a conventional war with them, instead of just immediately annihilating each other on day 1 for fear of the other side d
    • by kaoshin ( 110328 )

      The US making this declaration is great... except that it's painfully obvious that countries such as China would never make a similar declaration, or even if they did, it would be only for show.

      "At the conclusion of a two-day conference in The Hague, the United States, the Netherlands, China and dozens of other nations signed off on a 25-point call to action, asking countries to ensure the safe and responsible use of a variety of machine intelligence applications." https://www.courthousenews.com... [courthousenews.com]

  • What's the big deal about the use of AI in the military, advertising, policing, or other applications? If AI is a problem, why isn't the use of algorithms based on statistics (e.g., big data), heuristics, or other rules a problem? I think the real issue is the input data and the privacy issues related to the input data. After all, if statistical instead of AI algorithms were used to profile people, would the complaints go away?
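    The point above can be illustrated with a toy sketch (all names, fields, and thresholds below are hypothetical): a hand-written statistical rule and a trivially "learned" linear model flag the same people from the same input data, suggesting the privacy concern attaches to the data being collected, not to whether the algorithm is labeled AI.

    ```python
    # Toy illustration: a hand-coded statistical rule and a minimal linear
    # "model" both profile people from the same (hypothetical) input data.

    people = [
        {"name": "A", "daily_logins": 40, "night_activity": 0.9},
        {"name": "B", "daily_logins": 3,  "night_activity": 0.1},
        {"name": "C", "daily_logins": 35, "night_activity": 0.8},
    ]

    def heuristic_profile(p):
        # Plain statistical rule: flag heavy night-time users.
        return p["daily_logins"] > 20 and p["night_activity"] > 0.5

    def learned_profile(p, weights=(0.02, 1.0), bias=-1.0):
        # A minimal linear model; the weights are set by hand here,
        # but they could equally have been fitted from data.
        score = weights[0] * p["daily_logins"] + weights[1] * p["night_activity"] + bias
        return score > 0

    flagged_rule  = [p["name"] for p in people if heuristic_profile(p)]
    flagged_model = [p["name"] for p in people if learned_profile(p)]
    print(flagged_rule, flagged_model)  # prints ['A', 'C'] ['A', 'C']
    ```

    Either way, the same individuals end up profiled; swapping the classifier for a simple rule changes nothing about what data was gathered or how it is used.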

  • If killing as many as you can looks like a win, then guess what the AI will do
  • Fog of War is a real thing; during major battles or engagements, having tactical knowledge organized for commanders is critical. AI brought in to analyze battlefield information will provide a big advantage, and that's a scary thought, because nations will be more emboldened to use it if their AI is robust and efficient.

  • Technically every booby trap since the beginning of time has been an autonomous weapon. Surely the existing rules would be sufficient?

  • I know they think we're all idiots, but come on.

    AI is a Military / Government / Surveillance wet dream.

    Let me throw this out there for you.

    When a powerful and / or wealthy entity gets caught doing something blatantly unethical and / or illegal what do they usually do ?

    "Oh, it was a computer bug."
    "It was a glitch"
    "It was leftover developer code that made it into production"

    They blame it on the damn computer. It's never their fault, it's the computer's fault.
    The only thing that happens is they try to be more

  • It's all bullshit.
    Any promises made by the government are already compromised.

