US Army Says It Could Acquire Targets Faster With 'Advanced AI' (404media.co)

The U.S. Army told the government it had a lot of success using AI to "process targets" during a recent deployment. It said that it had used AI systems to identify targets at a rate of 55 per day but could get that number up to 5,000 a day with "advanced artificial intelligence tools in the future." 404 Media: The line comes from a new report from the Government Accountability Office -- a nonpartisan watchdog group that investigates the federal government. The report is titled "Defense Command and Control" and is, in part, about the Pentagon's recent push to integrate AI systems into its workflow.

Across the government, and especially in the military, there has been a push to add or incorporate AI into various systems. The pitch is that AI systems would help the Pentagon identify targets on the battlefield and allow those systems to help determine who lives and who dies. The Ukrainian and Israeli militaries are already using similar systems, but the practice is fraught and controversial.

US Army Says It Could Acquire Targets Faster With 'Advanced AI'

Comments Filter:
  • Accurately (Score:5, Insightful)

    by Retired Chemist ( 5039029 ) on Wednesday April 09, 2025 @05:16PM (#65293527)
    I am sure that an AI system can identify targets more rapidly than a human; the question is whether it can identify them accurately. An increased rate of friendly fire or noncombatant incidents is not going to be acceptable, at least to the US population. I am not sure the same would apply to the Russians or the Chinese.
    • Yeah, this information is useless without details.

      Even if it identified false positives at the same rate, the higher volume would mean more false-positive targets in absolute terms; it would need to be much better at avoiding false positives. And for the false positives that do occur, are they the same severity as human targeting errors, or worse (a school instead of a house, etc.)?
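
      A back-of-the-envelope sketch of that point in Python; the 10% false-positive rate is purely illustrative, not a figure from the report:

          # Holding the false-positive rate fixed, 100x the daily volume means
          # ~100x the absolute number of wrongly flagged targets.
          for per_day in (55, 5000):
              fp_rate = 0.10  # assumed for illustration only
              print(f"{per_day:>5} targets/day -> {per_day * fp_rate:6.1f} false positives/day")
          #    55 targets/day ->    5.5 false positives/day
          #  5000 targets/day ->  500.0 false positives/day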

      • To be clear, quantity should never be the project's first goal; accuracy should be, which would then make it possible to increase quantity.

      • What does "identify target" even mean? "That's Bob, and Tom behind him, and the guy to his left looks a lot like Dave, although he never had a moustache before".

        Sounds more like "we don't want to miss out on the latest buzzword".

        • What does "identify target" even mean? "That's Bob, and Tom behind him, and the guy to his left looks a lot like Dave, although he never had a moustache before".

          Let's say you're going after some terrorists who have embedded themselves in the local general populace... innocent civilians.

          It would bode well for targeted, precise attacks if you could identify who the good guys and bad guys are... and when you see the bad guys all together in a meeting, you hit them with a rocket.

          That's an example...

          • And the magic of "AI" is going to let you identify 5,000 terrorists a day instead of 55?

            TFA is paywalled so no idea what they're after, but it sounds like they're just jumping on the "AI" bandwagon because all the other kids are doing it too.

    • by Anonymous Coward

      An increased rate of friendly fire or noncombatant incidents is not going to be acceptable, at least to the US population.

      Assuming they find out about it. In the Signalgate attacks the US destroyed an entire apartment block just to hit one person. Did you hear about that?

    • Re:Accurately (Score:5, Insightful)

      by belthize ( 990217 ) on Wednesday April 09, 2025 @05:54PM (#65293601)

      I'm not entirely sure I agree with the sentiment that the average US citizen would be concerned about increased noncombatant incidents.

      Just look at Gaza and Yemen: in many cases folks are either oblivious to what's happening or feel it's OK in an 'ends justify the means' sort of way.

      As long as US soldiers aren't getting killed the citizenry is, for the most part, pretty ok with whatever.

      • They do not care if a third party is doing it. They sure will care if it is done to them.
        • They do not care if a third party is doing it. They sure will care if it is done to them.

          They will surely be dead if^Wwhen it is being done to them, and then they won't complain.

        • The point of having a capable military is so that "they" can't do to us what we can do to them, and they know it.

          This concept is known as "deterrence" if one is writing a dry technical document, or "peace through strength" if one is up on a soapbox and invoking Mom, Baseball, and Apple Pie.

      • I'm not entirely sure I agree with the sentiment that the average US citizen would be concerned about increased noncombatant incidents.

        Just look at Gaza and Yemen: in many cases folks are either oblivious to what's happening or feel it's OK in an 'ends justify the means' sort of way.

        As long as US soldiers aren't getting killed the citizenry is, for the most part, pretty ok with whatever.

        It's more complicated than that.
        I don't pull triggers or launch rockets, but I sure feel for anyone who's on the wrong end of one... even if they "deserve" it. Anger takes over when the deserving hide behind innocents. I also ask questions such as... why are the innocents not rising up against the "bad guys". And that's just scratching the surface of my stream of thought... because it's complicated.

        • Re:Accurately (Score:4, Insightful)

          by sound+vision ( 884283 ) on Wednesday April 09, 2025 @08:19PM (#65293801) Journal

          The reason the population voted for the "bad guys" in 2007 is that the 50 preceding years of voting for "good guys" had failed to stop the slow-motion ethnic cleansing they were being subjected to. Neither did appealing to the international community for help. Playing the nice guy while getting punched in the face was starting to get old.

          It should be noted they have not held elections since then. So we can't gauge how voter sentiment may have changed.

          The reason they don't rise up is probably the same reason why the people in Russia don't, the people in North Korea don't, the people in China don't, the people in America don't, the people in India don't... it's not particularly easy. Even if they did "rise up", all that means at this point, is the IDF can kill them faster.

          Israel already employs the blanket excuse that any body found in Gaza is axiomatically a Hamas militant. If Hamas disappeared tomorrow, Israel would keep right on with the killing. The goal was always acquiring territory and removing the residents. They've been at it for 80 years; it's slowed at points but never stopped.

          A lot of the conflict in this region that "just doesn't make sense" to the American mind makes a whole bunch of sense once you realize this.

          • They weren't exactly the bad guys when they got elected in 2007. They did the whole right-wing, hide-your-power-level bullshit where they positioned themselves as moderates. They were lying, of course, but it got them into power, and by then it was too late.

            It's a trick the right wing uses basically everywhere. They pretend to be conservative and safe and moderate and then once they've got control the knives come out.
            • Hamas has always been a terror organization.

              They have always been committed -- in writing -- to the destruction of an entire country and the killing of everyone "from the river to the sea."

              I know you have no idea which river (the Jordan River) or which sea (the Med) but that's their thing.

              Don't take your fat ass white pompous rightwing nazi attitude and whitewash Hamas.

              Terrorists since day 1. Still terrorists. October 7th.

              Fuck you for pretending they're just not "exactly the bad guys".

              Yeah, they are. And

              • by jvkjvk ( 102057 )

                So I don't support Hamas.

                But you know what I also don't support - Genocide of Palestinians by Israel.

                 That's happening, you know it. It has been going on for 80 years now: slowly at first, but very quickly of late.

                I also don't support that, and I don't think the US should either.

      • As you know, what goes around comes around.

      • Just look at Gaza and Yemen: in many cases folks are either oblivious to what's happening or feel it's OK in an 'ends justify the means' sort of way.

        Actually, their outrage depends only on who is on the receiving end of the bombs. Moscow bombing civilians in Ukraine definitely did cause outrage, but Gazans are the "bad guys" in the Israel-Gaza war, so they had it coming, according to general opinion.

    • I am sure that an AI system can identify targets more rapidly than a human; the question is whether it can identify them accurately. An increased rate of friendly fire or noncombatant incidents is not going to be acceptable, at least to the US population.

      The real question is whether AI is more accurate than a human. A quick (AI-based) Google search suggests 10-15% of American casualties in the 20th century were friendly fire. If AI could lower that to 5-10%, wouldn't that be something the public could get behind? Not perfection, but an improvement? Not "we are done," but "we are working on it"?

      Then again, the AI could be lying about the 10-15% number. Picking something worse than itself to make AI look better. :-)

      • Slashdot is still trying to convince themselves none of this actually works so don't expect a thoughtful reply.
      • People are more accepting of human error than system error.

        • by drnb ( 2434720 )

          People are more accepting of human error than system error.

          True, there is more empathy for the human who made the mistake and is remorseful.

          However, the military will also be working to create public acceptance. There will be plenty of "I was about to pull the trigger when the AI assistant ID'd the target as a friendly. OMG, what a tragic mistake I nearly made. Thank God for that AI."

    • I'm not sure that accuracy is the area of most concern here.

      Let's imagine that the AI system is very accurate. So now we have a technology where machines can effectively choose human targets and kill them.
      As time goes on, this will be employed by the most powerful nations. And rather than being deployed for short periods, they may - heaven forbid - be "let loose" to patrol on their own.

      At some point, perhaps during a period of low nationalistic fervour, will we think of ourselves as humans rather than countrymen?

      • I don't think anyone is suggesting that the AI gets to take the shot. We are just talking about AI identifying a potential target as friend, foe, or non-combatant for the human operator of the weapon system.
        • As systems are integrated, all indications are that Boston Robotix or the next co. will get to RoboCop. I can't see how there is much doubt they would be working on that... just testing, of course... we'll be ready when someone wants one.
        • by N1AK ( 864906 )
          For brevity's sake I'll be direct. Putting aside arguments about inevitable developments, if a person can accurately assess if a potential target is legitimate or not and has time to do so safely then there is no need for an AI, thus the core advantage of AI will be in determining things that people can't determine accurately or doing so faster than people can. If what you end up with is an AI looking at data and saying shoot to a human operator who can't determine accurately for themselves OR doesn't have
    • This won't increase incidences of friendly fire, civilian deaths, or collateral damage.

      In fact, these events will be drastically reduced by AI via their recategorization as "hallucinations."
    • Identifying targets and engaging with targets are different things.

    • It can probably do that better than most humans as well, particularly at longer ranges, where humans are worst. It can certainly supplement the capabilities of soldiers, in the same way computers were used to do ballistics calculations faster and more accurately than humans in the past. I'm fine with it as long as there's a human being behind the trigger of whatever hell is about to be unleashed on that target. Humans aren't great at judgement, but we can at least hold them accountable for exercising poor judgement.
    • by rsilvergun ( 571051 ) on Wednesday April 09, 2025 @09:54PM (#65293893)
      I do not want to hand autonomous killing machines to our ruling class. It should be painfully obvious why that's a bad idea. Literally the only thing that keeps those fuckers under any semblance of control is the threat that the military could depose them.

      I've said it before and I will say it again: the ruling class has had enough of us filthy peasants, their dependency on us, and their fear of us. They are working towards a future where they can leave us wallowing in their garbage while they go off to live like God Kings, and anytime we step out of line we're put down hard.
    • by AvitarX ( 172628 )

      We seem pretty good at not caring about noncombatant incidents in the US.

      Friendly fire wouldn't go over well if it more than doubled (I suspect a doubling would still look like not that much).

    • by Rujiel ( 1632063 )

      " An increased rate of friendly fire or noncombatant incidents is not going to be acceptable, at least to the US population."

      No offense, but have you been in a fucking coma for the last 25 years?

    • I designed a $300, 100 km-range, nearly silent, EM-shielded autonomous drone that could fly formations using visual tracking of its peers. It identifies targets through heat signatures, up to 12 at a time. It aims for non-shielded body parts and fires several rounds of slow-acting tranquilizer darts in rapid succession.

      It can be used to remove enemies from battle, 1000 drones should be able to effectively demobilize 500-2000 targets. With additional training, they should be able to more effectively target and
      • We need to as quickly as possible remove lethality from war.

        That kind of high-minded ideal is all well and good until you're in an existential fight (e.g., Ukraine) and start weighing the costs of transporting, feeding, and housing hundreds of thousands of POWs.

      • This sounds very good, but it will not happen. Non-lethal combat would remove the strongest reason to avoid war in the first place. Anyway, one side or the other would quickly switch to lethal combat to gain an advantage.
    • With Google forcing Gemini on us, with the result that we can all pretty much see that LLMs "get it right and relevant" maybe 30% of the time, I'm struggling to see why everyone from Shopify to the US Fucking Army is still enamored with the technology.

      At least when everyone was obsessed with adding blockchain to things, most of the people saying that had no idea what a blockchain was.

      • by jvkjvk ( 102057 )

        AI bubble. Got to have *some* kind of tech bubble, don't you know?

        If not, what ever will happen to tech companies and stocks? They won't show any signs that they can grow. Death knell.

    • Humans have not been particularly accurate; I would be surprised if AI targeting were not more accurate.

        Humans have not been particularly accurate; I would be surprised if AI targeting were not more accurate.

        If for no other reason than the AI won't get stressed and scared and err towards the side of shooting everything the way humans often do. The AI's behavior will be the same in training and in combat, while human accuracy will degrade under stress.

    • by 1369IC ( 935113 )
      "Target" might be a loaded word in this context, because sometimes there's a decision-making process even after a bad guy is positively IDed. Several years ago the military was testing AI to find the best available weapon to fire on a target by skipping the traditional "platoon at the front line calls its company HQ for artillery support, company calls the battalion HQ..." process. The front line folks would ID the target, and the AI system would present the appropriate commander with the right available we
    • by dillee1 ( 741792 )

      The problem is easier to solve than you think.
      The AI attacker knows where its friendly forces are in the first place. Just designate kill zones and no-fire zones, and instruct friendly forces never to enter a kill zone. Everything that moves in a kill zone is a "valid military target," for lulz.
      Alternatively, one can use various electronic friend-or-foe identification schemes, but that leads to communication complexity and spoofing issues.
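
      A toy sketch of that geofence idea in Python; the polygon test is standard ray casting, and every name and coordinate here is illustrative rather than from any real system:

          # Hypothetical kill-zone check: friendlies are ordered to stay out of a
          # designated polygon; anything detected moving inside it gets flagged.
          def point_in_polygon(p, polygon):
              # Ray casting: count how many polygon edges a horizontal ray from p crosses.
              x, y = p
              inside = False
              n = len(polygon)
              for i in range(n):
                  x1, y1 = polygon[i]
                  x2, y2 = polygon[(i + 1) % n]
                  if (y1 > y) != (y2 > y):  # edge straddles the ray
                      if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                          inside = not inside
              return inside

          def classify(tracks, kill_zone):
              # Flag tracks inside the zone; everything else goes to human review.
              return {tid: ("engageable" if point_in_polygon(pos, kill_zone) else "needs-review")
                      for tid, pos in tracks.items()}

          zone = [(0, 0), (10, 0), (10, 10), (0, 10)]           # a square kill zone
          print(classify({"t1": (5, 5), "t2": (15, 5)}, zone))  # t1 flagged, t2 not

      Even as a toy, it shows where the scheme is brittle: the output is only as trustworthy as the zone geometry and the position data behind it.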

      • History shows that the military rarely knows where its forces really are, hence the common occurrence of friendly-fire incidents. Also, this does nothing for the problem of distinguishing military from non-military objects in the combat zone.
  • Speed is great, but the article isn't exactly reassuring about the system's accuracy.
  • by nightflameauto ( 6607976 ) on Wednesday April 09, 2025 @05:22PM (#65293541)

    The U.S. Army told the government it had a lot of success using AI to "process targets" during a recent deployment. It said that it had used AI systems to identify targets at a rate of 55 per day but could get that number up to 5,000 a day with "advanced artificial intelligence tools in the future." 404 Media:

    This illustrates the problem with the current obsession with AI prophecy. "could get that number up to" but that will require "advanced artificial intelligence tools in the future." I'm sorry, but I don't believe the prophets because they've been fairly consistently lying about what these tools are capable of right now. Why should we believe that they have any sort of grip whatsoever on what they'll be capable of in the future?

    On top of that, do we really want to automate our killing capabilities this way? I know we're obsessed with finding new and more efficient ways to kill other humans, but this seems a step too far even for us Americans.

    • I think you miss its purpose as a propaganda boondoggle. It is being used this way by Israel and its army to manufacture consent for ethnic cleansing.

      It gives people something to point to and say, "Look, here's evidence these were all terrorists. We're not just killing people at random." It gives the commanders something to point to and say, "The computer made me do it."

      It's the automated equivalent of "faulty intelligence," which the US has already used to (for example) justify invading Iraq. It's not

      • I'm pretty sure it was mainstream media reporting the use of AI for target acquisition in some of their high-profile... ummm... hijinks... they mysteriously seem to know exactly where these Hamas commanders are.

        Gee, I wonder what Bibi is doing lunch in Washington for?
  • Accuracy? (Score:4, Insightful)

    by fuzzyfuzzyfungus ( 1223518 ) on Wednesday April 09, 2025 @05:47PM (#65293589) Journal
    Raw quantity/day seems like a really weird metric to be assessing target processing on; and one that was probably at its highest during some barely transistorized period when it was still slide rules and saturation bombing with periodic estimates written up into reports on some early IBM dinosaur. The hit rate was probably atrocious; but if enough fragments go out small percentages add up.

    If you are even talking about sensors, much less 'AI' you are implying some set of accuracy and prioritization metrics you have in mind: correctly identifying hostiles, including by type if they differ in value or significance, distinguishing between hostiles and technically hostile but marginal value decoys, hitting what you intended to hit vs. hitting nothing of interest vs. accidentally hitting something neutral or friendly; and so on. Unless you talk about your criteria there "55 per day" or "could reach 5000 per day" are almost wholly meaningless numbers. You'd also want greater detail on what opportunities you expect to enjoy by going faster and how much faster you need to get there.

    For the very specific case of point defense it's fairly safe to assume that the answer is 'as fast as you can; but faster' given the speed of certain contemporary missiles; but are the vaguer improvements in 'targets' active combatants currently not getting air or artillery support used on them because target acquisition is backed up? Low priority discretionary targets who we are modestly sure about but don't have the reaper operator budget to bother following up on? Just blips on satellite photos that we were never planning to do anything about; but could automatically annotate before filing rather than not annotating before filing if we let the bot do it?
  • by glum64 ( 8102266 ) on Wednesday April 09, 2025 @05:48PM (#65293591)

    ... is a requirement for modern wars. Anti-swarm countermeasures. Imagine your AA system being overwhelmed by a coordinated, simultaneous assault of 100+ AI-driven drones. Most of the drones are cheap fakes that carry no weapons. You have no idea which is which; you need to acquire and shoot down as many drones as you can, preferably all of them (see the toy triage sketch below).

    Allowing "those systems to help determine who lives and who dies" is never the goal.

    • Ukraine already uses drones with AI to fly the last 100m or so: the operator designates the target and the drone flies into it by itself, overcoming poor reception at low altitude and jamming from the target. From there, it's a very small step to autonomous loitering or swarming munitions. If you're running a swarm to attack multiple targets, you will also need to acquire and attack your targets quickly, preferably automatically. And that's where the "live & die" decision is made, by an AI.
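
      A minimal sketch of what that last-100m loop might look like; the proportional gains are assumptions for illustration, and the tracker that produces the box is stubbed out entirely:

          # Hypothetical terminal guidance: an onboard tracker keeps a bounding box
          # on the designated target, and a P-controller steers to keep it centered.
          def steering_command(box_cx, box_cy, k_yaw=2.0, k_pitch=2.0):
              # box_cx, box_cy: tracked box center, normalized to [0, 1] in the image.
              yaw_rate = k_yaw * (box_cx - 0.5)      # positive -> turn right
              pitch_rate = k_pitch * (box_cy - 0.5)  # positive -> nose down
              return yaw_rate, pitch_rate

          # Target sits up and to the left of image center: steer left and up.
          print(steering_command(0.4, 0.45))  # approximately (-0.2, -0.1)

      The point of the comment stands: nothing in this loop needs a radio link, which is exactly why it defeats jamming at low altitude.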
  • China (Score:2, Informative)

    by backslashdot ( 95548 )

    China announces their AI system can present over 5000 targets a day.

  • Today we can do 50 targets a day. Tomorrow we'll do 5,000 targets a day. We still shot down two of our aviators in the Red Sea.

    What could POSSIBLY go wrong with a 100x increase in targeting? We'll go from two aviators in the sea to 200?

    There's a reason fire authorization comes from a human. IFF has failed, and we've had it for almost a century. GPS and the like are easily blocked, as is IFF.

    Orange-soda-can-face is a moron.

    There's your summary for the day. You're welcome.

    • I remember hearing about a friendly fire incident during... I think... Desert Storm. An Apache helicopter fired upon and killed friendlies, even after approval from higher up. It turned out the wind had caused the helicopter to drift, and after arriving at the destination they shot the people in front of them, not realizing the actual enemies were farther away, "upwind."

      Even humans goof up.

      • A Patriot crew based at an airport shot down a Tornado. Or was it even two?
        I forget.

        How you can fail to visually identify a Tornado, shooting down a friendly that has its landing gear out and is obviously about to land on your base...

        No idea.

        A Tornado is such a unique aircraft, it can't be mixed up with anything else.

        Perhaps they had a "does not look American - shoot it" philosophy?

        A friendly based at that airbase got shot down, two crew killed, by a guy at a Patriot missile battery who panicked.

    • Theaterwide Biotoxic and Chemical Warfare will get targets with less ammo

  • US Army Says It Could Acquire Targets Faster With 'Advanced AI'

    Until someone replaces the targeting chips [fandom.com] in the Helicarriers ...

  • So AI is going to increase target identification from 55 per day to 5,000 per day. Brilliant! Of course, whether or not those are the right targets...well, that's a story for another day.

    • So AI is going to increase target identification from 55 per day to 5,000 per day. Brilliant! Of course, whether or not those are the right targets...well, that's a story for another day.

      What's that saying? "Shoot first, ask questions later (or never)."

      • Just a quick thumbs up to you, my friend. I wonder whether there's a single dystopian science fiction novel that isn't being used right now as a blueprint.

  • The way they like it.
    • Did they buy tech from Israel, which was using AI first? That is how the World Central Kitchen workers got killed: it doesn't READ the tops of cars, and they put it directly into targeting, so no human reviewed it...

  • I have friends in all branches of the real services, so, you know, no Coast Guard or space farce. They are upright, outstanding people -- no two ways about it.

    But the "US Army" as a whole has never won a war. Ever. Anywhere. They're encumbered by bureaucracy, politics, and pricky-shits.

    Could they "acquire targets" 100x faster with nonexistent tech? Of course they can't. Could they not put out stupid PR like they're going for VC or PE money? Sure.

    But they won't. We have a messed up system, folks.

    Love my frie

    • Funny how all those other armies that are "encumbered" even more by "bureaucracy, politics and pricky-shits" have a better win-loss record. Say what you want about the US armed services. My dad served in WWII in the RAF. Once the Americans finally got involved, they quickly earned a reputation, one that led to this saying:

      "When Germany flies, England ducks. When England flies, Germany ducks. When America flies, everybody ducks".

      So I hope you and your "upright outstanding" friends will forgive the rest

  • The US was doing rapid target identification during Gulf War 2 using a computer: many times, US allies cancelled the mission because the target did not have military value and, as members of the International Court of Justice, killing its occupants would be a war crime.

    Compare that with recent news, where the US bombed a residential building, killing 50 civilians: no news channel in the world called it a war crime.

    The US has been sending people to foreign prisons for many years, also. It's only recentl

  • ... and I have acquired plenty of targets. No AI needed. They just pop up in my feed.

    • ... and I have acquired plenty of targets. No AI needed. They just pop up in my feed.

      I wish I had mod points to push this one up. It's deep, man. And really encapsulates the main purpose of social media.

  • This is uncannily similar to the 2014 Robocop remake. It covers this topic pretty early on. Spoilers: They have AI cops and they are basically perfect at identifying targets, which is scaring the public. So they put a human brain into one, but the government complains that it is too slow: all this moral judgement stuff is getting in the way. It was really interesting.

    • There's a TV show that takes stories from great sci-fi authors and modernizes them a bit: Masters of Science Fiction, narrated by Stephen Hawking. The last episode of the series covers a story about a guy who builds an AI "flock" of robotic birds that decide targets in the field, and they have such a great track record in combat zones that of course the government requires them as police forces back home. They eventually arrive at "Stop any attempt to kill, with deadly force if necessary." They start kil

    • This is uncannily similar to the 2014 Robocop remake.

      Look at Kill Decision by Daniel Suarez.

  • Drone strikes weren't AI-powered and their civilian kill rate was 90%... so just imagine how atrocious this will be. I'm sure the army and this news outlet have high regard for Israel's automated killing systems trained up on civilians, like Lavender and "Where's Daddy?". Maybe Palantir's stock will peak right before their software kills us all! Their investors will be thrilled at the killing they've made, even when the victims are themselves.

    The current tech landscape makes me sick to my stomach.

  • AI-targeting weapons feel like... a bad thing. Right? Like a step too far? Crossing a line, maybe?
  • by ledow ( 319597 )

    I'm not a fan of AI, but maybe it'll do a better job at identifying people with actual weapons instead of journalists with cameras, or which buildings are clearly marked hospitals, or which troops are actually on their own side.

    Pretty sure I'd prefer it if I had an AI rather than an American coming to my aid, in fact.

    • by N1AK ( 864906 )
      This is what is depressing about the current situation. AI target acquisition could help avoid innocents being killed, but that would depend on the people employing it caring about that, and it's clear at the moment that they don't.
    • I'm not a fan of AI, but maybe it'll do a better job at identifying people with actual weapons instead of journalists with cameras, or which buildings are clearly marked hospitals, or which troops are actually on their own side.

      Pretty sure I'd prefer it if I had an AI rather than an American coming to my aid, in fact.

      What you'll get is an AI under American command, where the term "kill zone" is usually a radius of several meters or even a kilometer or more for acquisition of a single target. Do we really want to make that more "efficient?"

  • Remember, they're "targets"; for God's sake, don't ever think of them as "people." An AI is just what you need to forget that what you're doing is killing living, breathing people with wives/husbands, mothers, fathers, sons, and daughters who, if you actually met them, you might just get on with.
  • Before we "Target faster with AI" we need to be able to automatically generate Mea Culpa press releases as fast as the mistakes will occur.
