The Military

Army General Says He's Using AI To Improve 'Decision-Making' (arstechnica.com) 56

Maj. Gen. William Taylor told reporters at the Association of the US Army Conference in Washington this week that he and the Eighth Army he commands out of South Korea are regularly using AI for decision-making. Taylor said he has been asking AI chatbots to help build models for personal decisions that affect his organization and overall readiness. The general referred to his chatbot companion as "Chat" and said the technology has been useful for predictive analysis in logistical planning and operational purposes.
Comments Filter:
  • list games (Score:5, Funny)

    by Joe_Dragon ( 2206452 ) on Friday October 17, 2025 @11:24AM (#65731956)

    list games

  • Using a language model to make important military decisions?
    That'll work out well.

    • by sinij ( 911942 )
      What are you talking about? Efficiency and deadliness of cavalry charges improved by more than 50%!
    • I have no real problem with a LLM doing the grunt work, as long as the final result is reviewed by human eyeballs. That always needs to be a link in this chain.
      • That's not what's happening. That's never what happens. Any time someone uses an AI chatbot as part of their work, they immediately turn into drooling idiots. "So I asked ChatGPT for some numbers and this is what it gave me." Sometimes they'll ask it for citations, and it makes up some citations. They never check that the data or citations are valid. Their perfect AI god gave them perfect numbers. The computer is always correct.

        • by Anonymous Coward

          The computer is always correct.

          You don't have to mess with an AI for very long to figure out that isn't true. I often consult them for coding, and lots of the code won't even run! I find them most useful for telling me which existing Python module fits my problem, but I usually have to write the code myself.
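The parent's habit is easy to put into practice: don't trust a generated snippet until it has passed a known-answer check. A minimal sketch in Python (the `chunk` helper and its test cases are hypothetical, standing in for whatever a chatbot hands you):

```python
# Minimal sanity harness for chatbot-suggested code: run the snippet
# against inputs whose answers you already know before trusting it.

def chunk(seq, size):
    """Hypothetical chatbot-suggested helper: split seq into size-length pieces."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

# Known-answer checks catch the usual failure modes (remainder, empty input).
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk([], 3) == []
assert chunk("abcdef", 3) == ["abc", "def"]
print("all checks passed")
```

A few assertions like these cost seconds to write and catch exactly the "won't even run" and off-by-one failures the parent describes.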

          • You don't have to mess with an AI for very long to figure out that isn't true.

            That assumes you have some way of objectively evaluating the information AI provides. There are certainly cases where you can "catch" AI errors or see them in the results. But there are a lot of decisions with no immediate consequence by which to measure its reliability. In fact, those are the hardest decisions, and letting AI make them is the same as passing the buck up to your boss.

          • Presumably you understand coding well enough to check the bot's work. Most people are not programmers.

            Typical users of AI use it precisely because they don't understand something. Or they're using it as a shortcut, so that they don't have to read a primary source and don't have to spend any brain cells second-guessing the answer.

            • It's both people's natural inclination to use it this way, and the way it's actively marketed: "A PhD in your pocket." "A teacher." "An expert on tap."

              A teacher instructs someone who doesn't know.
              An expert is there to advise people who don't know.

        • Re: (Score:2, Interesting)

          by SirSlud ( 67381 )

          That's not what's happening. That's never what happens. Any time someone uses an ai chat bot as part of their work, they immediately turn into drooling idiots.

          Yeah, who needs a chatbot when you can make unqualified claims as statements of fact? You don't even need citations, such as the ones you're claiming (without citation) they make up. (Which, just to be clear, they do to a certain extent, though a casual reading of your words suggests you're implying "always".)

          Look, there are lots of problems wit

    • north korea 1st strike = LOSER north korea

      • by cusco ( 717999 )

        More like Loser: North Korea AND South Korea AND world stock markets. They could flatten Seoul in about 20 minutes with the dug-in and pre-aimed artillery they've had next to the border since the armistice, which is probably the only reason that the South hasn't invaded yet. Imagine the effect on the NYSE of Hyundai, Samsung, Kia and LG instantaneously decapitated.

    • by cusco ( 717999 )

      It likely won't be any worse than the humans at the Pentagram have done over the last 80 years. The only place that we definitively "won" in that whole time is Grenada, when the "opponent" was around 50 Cuban engineers extending runways for the tourist trade. We've had some pretty dramatic losses though, including to a bunch of goat herders with 40 year old Kalashnikovs.

    • It should make defeating your opponent easier, anyway. Just find out what LLM your opponent is using for their attack strategy, and ask it for advice on military deployment. Upgrade your defenses accordingly!

  • by joshuark ( 6549270 ) on Friday October 17, 2025 @11:34AM (#65731990)

    This gives new meaning to the classic oxymoron "Army intelligence."

    --JoshK.

  • by irving47 ( 73147 ) on Friday October 17, 2025 @11:36AM (#65731994) Homepage

    A couple years ago, one of the base security officers was in charge of making sure all his SPs/MPs/security guys/whatevers knew where everything (classified) was and how to get there fast... including the big boys, aka nukes. So naturally, to make it fun and interactive and stimulate learning, he dumped all the questions (and answers) into Kahoot, the Norwegian online trivia engine (or one similar to it).

    My way of saying I'm a little wary of some of these officers using this stuff.

    • Don't worry about it. The current administration is pretty comfortable with sharing classified information. Just go with the flow, and be ready to pay the bill when the party's over. It will be expensive.
  • by Inglix the Mad ( 576601 ) on Friday October 17, 2025 @11:45AM (#65732014)
    Can go alongside CEOs, CFOs, CIOs, and a few dozen other top-end jobs that we can fire people from to save money.
  • Provided you have the means to readily verify the correctness of what AI suggests, it's fine to use it, and it can be quicker. The trouble happens when lazy people forget the need to verify what AI tells them. Then AI turns into a magic genie with a twisted sense of humour.

  • by gweihir ( 88907 ) on Friday October 17, 2025 @12:46PM (#65732160)

    And rather urgently. For gross incompetence and leaking secrets.

  • by billybob2001 ( 234675 ) on Friday October 17, 2025 @12:50PM (#65732172)
    AGI has arrived in 2025...

    Artificial GENERAL Intelligence.

    That's Major news.
  • Our brigade staff was undergoing an evaluation exercise. Our mission was to attack the enemy line and seize a designated key terrain feature. We developed three courses of action. This is where groupthink hit us. Two courses were complicated to execute but gave us an excellent chance to succeed. The third option was much simpler to execute but seriously violated a few of the principles of war: it sacrificed Mass and Maneuver for Simplicity. But the principal staff officers and the comman
  • by Anonymous Coward

    I am the AI model of a modern Major-General
    I've information vegetable, animal, and mineral
    I know the kings of England, and I quote the fights historical
    From Marathon to Waterloo, in order categorical
    I'm very well acquainted, too, with matters mathematical
    I understand equations, both the simple and quadratical
    About binomial theorem I am teeming with a lot o' news
    With many cheerful facts about the square of the hypotenuse

    With many cheerful facts about the square of the hypotenuse!
    With many cheerful facts about the square of the hypotenuse!

  • He doesn't come up and say: "Hey chat, who should we bomb today?" He is probably just rubber ducking with it. Discussing the background and situations. I would be very surprised if a) he actually allowed it to make the decision for him. and b) he would admit so afterward if this was indeed true.
    • He doesn't come up and say: "Hey chat, who should we bomb today?"

      No need, the desired answer, for now, seems to be Venezuela.

      • by cusco ( 717999 )

        Or Iran. Or Somalia. Or Yemen. Or maybe just give the weapons away to someone else who can use them to commit more war crimes for us.

  • The general referred to his chatbot companion as "Chat" ...

    He could have named it "Chad", though that would fit with recent ramblings from the Secretary of Defense -- I mean, War. (sigh)

    • A Department of War must be headed by a Secretary of Offense. The current guy has outstanding qualifications.
  • Beating military intelligence [youtube.com] isn't that difficult, after all.
  • As usual, InfoSec TayTay called it almost 2 years ago: https://mastodon.social/@Swift... [mastodon.social]
  • Information overload, like equipment overload (e.g., 'grunts'), has always been a problem of military hierarchies. In the past, the focus has been on removing high-cost services (e.g., pilots). This is (or should be) closer to the ML that computers were designed to perform.
