US Issues Declaration on Responsible Use of AI in the Military (reuters.com) 33
The U.S. government on Thursday issued a declaration on the responsible use of artificial intelligence (AI) in the military, which would include "human accountability." From a report: "We invite all states to join us in implementing international norms, as it pertains to military development and use of AI" and autonomous weapons, said Bonnie Jenkins, Under Secretary of State for Arms Control.
we will take the men out of the loop at the silos (Score:3)
we will take the men out of the loop at the silos and move all control to NORAD
Re:we will take the men out of the loop at the silos (Score:5, Informative)
Wouldn't you rather play a nice game of....chess?
Re:we will take the men out of the loop at the silos (Score:4, Interesting)
The silo scene from War Games [youtube.com] still sends chills up my spine. There have been somewhat similar close calls on both sides. I don't think we are lucky, but merely the product of the Anthropic Principle: we are only here to ponder close calls because they didn't go through. If they had, we wouldn't be pondering the failures. (There may have been a few failure-ponderers left, but not us.)
The scene is scarier than any chainsaw zombie flick because it's almost happened during the cold war.
Silo Captain: "We have a valid message, stand by to authenticate...enter launch code...launch order confirmed." (paraphrased)
Soldier: "Holy shit [to self]"
Silo Captain: "Insert launch key."
Soldier: "Okay, they're set. Switch 1 on, switch 2 on, switch 3 on...All missiles enabled."
Silo Captain: "I want somebody on the goddam phone."
Soldier: "That's NOT the correct procedure, Captain!"
Silo Captain: "Screw the procedure, I want somebody on the goddam phone before I kill 20 million people! Get headquarters on the HF!"
Soldier: "I got nothing, they may have been knocked out already."
Silo Captain: "Right. On my mark, rotate launch key..."
Soldier: "Roger, ready to go to launch..."
Silo Captain slowly moves hand away from key, sweating profusely.
Soldier: "Turn Your Key, Sir!!!"
Silo Captain: "I'm sorry."
Soldier pulls out a gun, aims it at Captain...
Re: (Score:2)
I always remembered that scene, particularly where he pulls out the gun and says "turn your key, sir" as the other guy puts his hand down, but I completely forgot it was from War Games and didn't even consider it was from that movie. Then about 10 years ago I watched War Games again and "Ahhh, Eureka!" :)
Re: (Score:2)
we will take the men out of the loop at the silos and move all control to NORAD
Lunch will be provided. Have a WOPR.
Declaration? (Score:1)
Re: (Score:2)
this just in (Score:1)
Rule 1: only blow up the bad guys
Rule 2: there is no rule 2
Absolutely bulletproof
Terminator jokes (Score:4, Interesting)
Re: (Score:1)
A drone being controlled by a human isn't the kind of autonomous weapon we need to be worried about at an existential level.
We are doomed (Score:2)
I do not trust that ANY military use of AI will not come to humanity's detriment.
We still have war crimes.
The only thing preventing the use of AI in warfare (responsibly or not) is the maturity of AI technology.
Re: (Score:1)
Re: (Score:2)
An Important Discussion (Score:3)
It's an important discussion to have. There's an emerging field of machine ethics, especially in military use. Questions arise around things such as the chain of command. A system of rules will never be enough, because there may be things outside the parameters of its programming. With unmanned autonomous weapons (UAW) the machine becomes judge, jury, and executioner.
I have argued that the ability to disobey an order is intrinsic to respecting the sanctity of human life. It cannot just be a cold blooded killer. The ability to NOT take the shot is just as important as its precision to do so.
Then the next part is how do we hold these systems accountable? When it's a human we can punish the individual, or those responsible along the chain of command. But how do we punish a UAW? It needs the ability to explain and rationalize to OTHERS the decision that it made. We need a system in place to be able to judge, and a system of punishment and enforcement.
We need experts in these fields to be able to voice their opinions and help guide the regulations that will come out. We can't go in naively thinking that banning the development of the technology will suffice. If we don't build it, someone else will, and without international norms being set, much like the Geneva Conventions on war crimes, it will be an unregulated mess that will be hard to rein in.
Re: (Score:3)
With unmanned autonomous weapons (UAW) the machine becomes judge, jury, and executioner.
Yes, we have those. They're called land mines.
I have argued that the ability to disobey an order is intrinsic to respecting the sanctity of human life.
To the machine, there is no such sanctity. To the people deploying the machine, said sanctity only applies to the "right" people (meaning themselves).
Then the next part is how do we hold these systems accountable?
This part is sheer fantasy, and will remain so for the foreseeable future. The machines are not sapient. They won't be any time soon. Morally, they are exactly identical to a land mine. Mobility and active targeting doesn't change that. Accountability attaches to the people deploying them, not the machines themselves.
Re: An Important Discussion (Score:2)
Anyone here that has an opinion on military use of AI, do you watch any videos from the war in Ukraine?
Watching those videos, the first GoPro war, is the closest nearly any of you will ever get to being in a war. If you don't watch them because it's gross or barbaric or you don't condone violence, that's great, I just think your opinion on AI in combat doesn't mean anything if you don't know what fighting actually looks like. I'm not even saying you'll know by watching GoPro videos either, because you probably stil
CHINA (Score:2)
Re: (Score:2)
Re: (Score:2)
The US making this declaration is great... except that it's painfully obvious that countries such as China would never make a similar declaration, or even if they did, it would be only for show.
"At the conclusion of a two-day conference in The Hague, the United States, the Netherlands, China and dozens of other nations signed off on a 25-point call to action, asking countries to ensure the safe and responsible use of a variety of machine intelligence applications." https://www.courthousenews.com... [courthousenews.com]
Re: (Score:2)
No country on the planet is involved in more conflicts than the U.S.
This is provably false. Whether you are looking at current data such as the GPI, commonly cited information sources on current or past data such as Wikipedia, or history books, the United States, while involved as a primary participant in many wars and conflicts, has neither been the most warlike nor has it instigated most conflicts. I'm interested in whether you actually learned this crap somewhere, or if it is just anti-US sentiment, as you've expressed in your prior posts.
No country spends so much on Arms.
The US for the last couple of years h
Re: Propagandising by Reuters - as usual (Score:2)
tricking some poor Ukrainians into sacrificing themselves for the financial and geopolitical aims of the US
They're defending their country. They asked for help.
You can't possibly spin that. Their country was invaded by a hostile power. If your country was invaded, what would you do, be a coward?
Is AI or the input data the issue? (Score:2)
What's the big deal about the use of AI in the military, advertising, policing, or other applications? If AI is a problem, why isn't the use of algorithms based on statistics (e.g., big data), heuristics, or other rules a problem? I think the real issue is the input data and the privacy issues related to the input data. After all, if statistical algorithms instead of AI were used to profile people, would the complaints go away?
Payoff (Score:2)
When the military eliminates the fog of war (Score:2)
Fog of war is a real thing: during major battles or engagements, having tactical knowledge organized for commanders is critical. AI brought in to analyze battlefield information will provide a big advantage, and that's a scary thought, because nations will be more emboldened to act if their AI is robust and efficient.
Anti-tank mines are autonomous (Score:2)
Technically every booby trap since the beginning of time has been an autonomous weapon. Surely the existing rules would be sufficient?
Oh spare us the bullsh*t (Score:2)
I know they think we're all idiots, but come on.
AI is a Military / Government / Surveillance wet dream.
Let me throw this out there for you.
When a powerful and/or wealthy entity gets caught doing something blatantly unethical and/or illegal, what do they usually do?
"Oh, it was a computer bug."
"It was a glitch"
"It was leftover developer code that made it into production"
They blame it on the damn computer. It's never their fault, it's the computer's fault.
The only thing that happens is they try to be more
"Declaration of responsibility". (Score:1)
It's all bullshit.
Any promises made by the government are already compromised.
Interesting to see what happens next. (Score:1)