The US Military's AI 'Swarm' Initiatives Speed Pace of Hard Decisions About Autonomous Weapons (apnews.com)
AI employed by the U.S. military "has piloted pint-sized surveillance drones in special operations forces' missions and helped Ukraine in its war against Russia," reports the Associated Press.
But that's the beginning. AI also "tracks soldiers' fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space." Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative — dubbed Replicator — seeks to "galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many," Deputy Secretary of Defense Kathleen Hicks said in August. While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy — including on weaponized systems.
There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles. That's especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them — and none of China, Russia, Iran, India or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.
But surely, (Score:2)
AI written by the military would _never_ adopt a "kill them all and let god sort them out" attitude?
Hopefully AI will take all the troublemakers out via rapture and leave the rest of us to a hundred years of peace...
What could possibly go wrong?
war plan selection USA 1st strike! (Score:2)
war plan selection USA 1st strike!
What could possibly go wrong? War Irony... (Score:2)
From an essay I wrote on this in 2010: https://pdfernhout.net/recogni... [pdfernhout.net]
"There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecur
Re: (Score:2)
You can't make predictions 10 years out with a wrong theory: "Why China Cannot Rise Peacefully" https://www.youtube.com/watch?... [youtube.com]
Don't want to watch the video? TL;DR: it is basically the same argument as in Batman v Superman.
Batman = USA who fears that China will become too powerful. Though they are currently not breaking any international rules, they could in the future.
Re: (Score:2)
Lex Fridman had an in-depth interview with the dude a few days ago, with fresh takes on the latest geopolitical events. Well worth a watch. There are also nuggets of short clips on the Lex Clips channel.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Lex Luthor hates superman.
Re: (Score:1)
Certainly you can find that attitude within the ranks of any military, but it doesn't always govern the high-level strategic decisions, or weapons development. Nations, and by extension their armies, can have goals other than mass slaughter.
Take, for example, the US invasion of Afghanistan. The first part was a large-scale military operation with the goal of regime change. They achieved it with 10,000 Taliban casualties and 2,000 civilian casualties. Yes, there were a few "kill them all" types who deliberately t
The problem is that our enemies (Score:3)
...may skip safety to gain an advantage or to save a buck. Suppose they send a swarm after us, but our counter-swarm is remote controlled. They set off an EMP burst to screw up all radio, and bots are left to fight on their own. The more autonomous ones would have a big advantage.
Re: (Score:2)
move to def con 2!
Re:The problem is that our enemies (Score:5, Insightful)
you will find little sympathy with this line of thought around here. they literally define themselves as the saviors, and some actually believe that.
then of course if you are convinced that you are part of the good guys, anything goes, and everything is the other's fault.
the us does this not because it is bad, but simply because it can. it's the way power works, and the us harbors a ruthless and very powerful elite that can afford to fuck
up here and there again and again and nothing ever happens, except lots of people die and they continue making money. so if you live today in an area some of these motherfuckers want to fuck around with, you're fucked, simple as that.
but that's still localized risk, bad luck living just there. the principal risk to world peace is that china is getting close to being a stopping force to that, and this the us will not tolerate, and all hell will break loose. rather soonish, i would expect, and i hope to be wrong.
Re: The problem is that our enemies (Score:1, Troll)
Hamas is the product of 75 years of occupation and Apartheid.
Putin attacked Ukraine because NATO wanted to put nukes there right on their border, and they reacted as strongly as the USA did when the USSR wanted to put nukes in Cuba.
North Korea's regime is the product of the US war in Korea.
So, I guess your sarcastic question could be answered "yes, they are all made to do bad things by foreign interference".
Re: The problem is that our enemies (Score:4, Insightful)
Putin attacked Ukraine because NATO wanted to put nukes there right on their border, and they reacted as strongly as the USA did when the USSR wanted to put nukes in Cuba.
Taking Ukraine would simply put their border closer to NATO's nukes. It is a nonsensical argument.
Re: The problem is that our enemies (Score:1)
lol yea coz that's an accurate analogy.
Re: (Score:2)
North Korea's regime is the product of the US war in Korea.
I'm confused, didn't South Korea suffer the same war?
Re: The problem is that our enemies (Score:1)
Ok so sit down this may come as a shock:
The two sides in war often don't come out with the same result.
Re: (Score:2)
So what you mean is that the side that aligns with the USA gets a much better future. I get it. So the USA are the goodies. I find your position a bit Manichean, but I cannot really argue with it. I mean, Japan, West Germany as opposed to East Germany... yup, you are mostly right.
Re: (Score:1)
What if both sides were doing evil shit, would that blow your mind? The evils committed by one side do not erase the evils committed by the other. Also the evils each side is committing are justified by pointing to the evils of the other. Whole world blind, etc. In fact apartheid is being committed against the Palestinian people by Israel and THAT evil also must stop.
Re: (Score:1)
Haha OMG posting a Ben Shapiro video as a "lesson in Middle Eastern history" has got to be the funniest thing I've seen this year.
Re: (Score:1, Troll)
STOP MAKING ENEMIES. problem solved.
Don't you guys realise that the U.S. is the primary driver of conflict the world over?
$$$$ before humanity.
The USA is creating enemies by allowing women to go out in public without a bag over their head, to hold a job, learn to read and write, go see a physician without a male relative escorting them, or just generally have a life outside of their home. I won't say that the USA is the primary driver of international conflict but I can agree that it is a large contributor. If the USA stopped making enemies then that would not likely do much to end conflict because then these women hating assholes that have been
Re: (Score:2)
I wish that would work.
Yes, the US has often acted in such a way as to make enemies. Often the alternative was isolationism, but by no means always.
That said, the domination of "the petro dollar" has caused the US to act in many ways that have created enemies. And so has an earlier paranoid hatred of anything that called itself communist. (I still see no reason for the "Vietnam War" other than the US unilaterally breaking the Geneva agreement.)
But not making enemies isn't sufficient. Some folks will cha
Re: (Score:1)
> Don't you guys realise that the U.S. is the primary driver of conflict the world over?
USA is far from perfect, but without us, dictators would have their way with the world. We let Germany and Japan be independent democracies after they lost. Dictators wouldn't do this.
That being said, funding Israel is a big mistake. Israel swipes land and has about 30 go-to excuses for doing such. Israel wants land more than peace, and that's a recipe for perpetual conflict. (Giving the land back won't create instant
Re: (Score:2)
That's not being a hypocrite, that's being self-centered. Just like all the countries that *do* decide to develop nuclear weapons anyway are self-centered. The world would be a lot safer if nobody had nuclear weapons. (OTOH, there would probably be lots more wars.) But it's really hard to stuff a genie back into the bottle.
Re: (Score:3)
We are seeing remote controlled drones used in Ukraine to great effect, and Russia seems to be able to do very little about them. The smaller ones are almost impossible to detect and shoot down, and very cheap to procure. The Ukrainians are literally buying them from AliExpress, strapping grenades to them, and sending them in through windows and doors.
When they become autonomous, swarms of suicide drones will be deployed in a similar fashion.
Well equipped militaries will simply flatten buildings like Israel
Re: (Score:2)
Ah yes. Our enemies might do this, therefore we must do it before they do!
The warmonger's motto.
How is this not a solved problem? (Score:5, Informative)
We already have autonomous weapons, and we've been doing a pretty good job keeping them in check so far. It's not a perfect record, but it's far from what I'd consider a problem.
Take the Phalanx CIWS system as an example. This system doesn't have an IFF system: anything moving fast enough toward a friendly asset to be a threat is fired upon. There are exceptions written in so friendly aircraft can approach safely, typically by following a path designated for a safe approach, a path kept guarded so an enemy can't use it to send in missiles or something.
In the airspace over Ukraine it's just generally a "no fly zone" and anything airborne that can't be ruled out as a bird is shot down automatically. This has resulted in some "blue on blue" incidents, mostly on the Russian side, but that's a risk taken even without automated systems.
The primary rule that protects friendly forces and noncombatants from automated killing machines is that we don't use them where the risks of "blue on blue" or "blue on green" events could happen. In that case a human is put in the decision loop. That's not foolproof because not every human will get it right, but it does mean that we have a person that is capable of more complex decision making than some rigid algorithm.
What is making automated systems important is that weapons can move at much greater speeds than in the past, meaning that if a person is in the loop they may not be able to process the threat quickly enough to respond. If they do respond in time, it may be because the human is using an overly simplistic decision tree on whether to fire or not. An example of that is telling a sailor on a ship to fire upon any radar contact that comes from shore. That's going to be effective in protecting against enemy forces on land and won't put friendly aircraft coming from other ships at risk, but it could mean someone fleeing the war in a Cessna ends up getting shot to pieces. Crazy things like this have happened.
https://en.wikipedia.org/wiki/... [wikipedia.org]
One of the more notable events occurred on Midway when the pilot of an RVNAF Cessna O-1 dropped a note on the deck of the carrier. The note read "Can you move these helicopter to the other side, I can land on your runway, I can fly 1 hour more, we have enough time to move. Please rescue me. Major Buang, Wife and 5 child." Midway's commanding officer, Captain L.C. Chambers, ordered the flight deck crew to clear the landing area; in the process an estimated US$10 million worth of UH-1 Huey helicopters were pushed overboard into the South China Sea. Once the deck was clear, Major Buang approached the deck, bounced once and then touched down and taxied to a halt with room to spare.[24] Major Buang became the first RVNAF fixed-wing pilot to ever land on a carrier. A second Cessna O-1 was also recovered by USS Midway that afternoon.[6]
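The tradeoff described above — keep a human in the loop only when there's time to use one, and carve out guarded safe-approach corridors — can be sketched as a toy decision rule. Everything here is hypothetical for illustration (the function name, the corridor flag, the 8-second threshold); it's not how any real fire-control system works:

```python
# Toy sketch of a "human in the loop when time permits" engagement gate.
# All names and thresholds are made up for illustration.

HUMAN_REACTION_TIME_S = 8.0  # assumed minimum time a person needs to assess a contact


def engagement_decision(time_to_impact_s: float, inside_safe_corridor: bool) -> str:
    """Decide who makes the fire/no-fire call for an inbound contact."""
    if inside_safe_corridor:
        # Friendly aircraft follow a guarded approach path and are never engaged.
        return "hold"
    if time_to_impact_s < HUMAN_REACTION_TIME_S:
        # Too fast for a person to process: the automated system engages on its own.
        return "auto-engage"
    # Otherwise there's time, so a human operator stays in the decision loop.
    return "refer-to-human"


print(engagement_decision(3.0, False))   # fast inbound threat
print(engagement_decision(30.0, False))  # slow contact, human decides
print(engagement_decision(3.0, True))    # inside the safe corridor
```

The point of the sketch is that autonomy enters exactly where the time budget excludes a person, which matches the parent's observation that faster weapons push humans toward supervisory roles.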
Re: (Score:2)
This is why China doesn't build large aircraft carriers, because an aircraft carrier is a big fat target, and there is a lot of symbolic value if one is sunk, rendered inoperable, or the flight deck rendered unusable.
What are you talking about? China wants aircraft carriers badly, and apparently has, or had in the past, plans to have 6 "large" carriers and 8 "light" carriers.
https://en.wikipedia.org/wiki/... [wikipedia.org]
China doesn't have many aircraft carriers now because it takes a lot of money and time to build one, especially for someone that hasn't built any before. Another problem is getting enough suitable aircraft to make the carrier effective, and enough competent pilots to fly them.
There will come a time when a fleet of ships with drones on them has more firepower than a present-day carrier group, even bringing back the SLAM as a deterrent option.
An aircraft carrier that has unmanned air
Re: (Score:2)
This is not a new idea. The US has about a hundred frigates and cruisers of this kind and has had for fifty years. A bunch of submarines too. They're pretty handy, but the US still builds carriers. Shockingly, they launch more capable drones off of those.
Also, China has bought several carriers over the years, then they started building their own. They're currently on version 4.
You say "drone" and "swarm" and geeks lose their minds. Sudden
Lies (Score:2)
Southern US racist cops vs military AI (Score:2)
I'd rather have a military AI make a fire decision than a cop in the South.
Those racist "thin blue line" bastards just want to kill minorities.
Military AIs take out actual threats.
The "thin blue line" is a way to make "us vs them" seem legitimate. It's not. Cops that live this way should be fired or shot by military AIs.
If you're an "us vs them" person in law enforcement, just shoot yourself. Nobody will cry.
Re: (Score:2)
Community policing is better. Not much better, but better. But we're doing our best to reverse those gains and remove the educational requirement for cops so we can start hiring high school bullies again.
Whataboutisms galore (Score:2)
"Replicator" (Score:2)
So somebody involved is a Stargate fan and has a sense of humour.
Because we're all looking to create an unstoppable swarm of tiny machines bent on destroying all biological life in the galaxy.
"There is little dispute"... Yeah, there is. (Score:1)
> There is little dispute among scientists, industry experts and Pentagon officials
Scientists are people who follow the principles of science.
Industry "experts" are just people employed who claim qualifications.
Pentagon officials are dipshits with a starched shirt and some medals.
NONE of them are qualified to opine on anything much. The so-called "scientists" might IF and ONLY IF that science field was related to what they opine about.
AI is not a thing. There's no "intelligence" in AI. All there is is
Re: (Score:2)
So, from your argument, you are not qualified to opine about intelligence.
AI is a name. Names are always valid. A black cat can be named Grey, and that's still a valid name.
The real problem with AI is that it's not a good clade. Very different things are given the same name. This is still valid English use, but it *is* confusing.
As for "intelligence", there is no commonly accepted meaning. People just use the "I know it when I see it" test. (And if you mean IQ, there are AIs with a higher one than you
Land mines are autonomous weapons (Score:2)
Land mines, anti-tank mines, and anti-shipping mines are fully autonomous and have been for over a century.
Their "AI" is simple: if someone passes nearby, explode.
Although modern anti-shipping mines are often designed to ignore decoys and take out high-value assets.
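The parent's point — that a mine is already a fully autonomous weapon with a trivial trigger — can be written down in a few lines. This is a toy illustration only; the proximity range, the "signature" threshold, and the decoy filter are all invented to mirror the "ignore decoys, wait for high-value assets" behavior described above:

```python
# Toy model of a mine's "AI": if something passes nearby, explode.
# The decoy filter is a hypothetical stand-in for modern target discrimination.

DETONATION_RANGE_M = 50.0       # assumed trigger radius
MIN_TARGET_SIGNATURE = 1000.0   # assumed threshold separating decoys from real targets


def mine_should_detonate(distance_m: float, signature: float) -> bool:
    if distance_m > DETONATION_RANGE_M:
        return False  # nothing close enough to trigger
    if signature < MIN_TARGET_SIGNATURE:
        return False  # looks like a decoy; hold out for a high-value asset
    return True       # "if someone passes nearby, explode"
```

Two constants and two comparisons: a century-old level of autonomy, which is why "fully autonomous lethal weapons" are less of a new category than the headline suggests.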
Re: (Score:2)
Land mines, anti-tank mines, and anti-shipping mines are fully autonomous and have been for over a century.
Their "AI" is simple: if someone passes nearby, explode.
Although modern anti-shipping mines are often designed to ignore decoys and take out high-value assets.
This reminds me of a couple things, things that might not exactly follow where you were going.
First thing this brings to mind is something of a joke, which may have some basis in reality. An admiral is in his office on the flagship of a navy flotilla when the relative calm is broken by a large explosion. He gets up to run to the bridge to see what has happened. A mine had exploded off to the side of the ship but did only superficial damage to the thick hull, even so this was a considerable danger since h
Replicator? (Score:2)
This is how you get an AI overlord. Naming it fucking "Replicator"... the fuck.
If there were ethics in war... (Score:3)
ofcourse... (Score:2)
Is this skynet? (Score:1)
Autocomplete will kill us all (Score:2)
OpenAI: Behold.. ChatGPT! .no.. AGI... oh no .. LLM.. oh no ...
People: OMG.. autocomplete will destroy civilization
Pundits: ChatGPT is an existential threat to humanity
People: F-35 F*** yeah!
People: Oh..
People: F-22 shoot them out of the sky!
Meanwhile in the military: let's create swarms of AI robots to destroy civilization
Military laughing at us: haha.. they're scared of autocomplete, but don't even blink an eye at the literal civilization-ending weapons we routinely create. They even cheer us on! Haha.