Sam Altman Says OpenAI Shares Anthropic's Red Lines in Pentagon Fight (axios.com)

An anonymous reader shares a report: OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons. If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work. It would also be the first time the nation's top AI leaders have taken a collective stand about how the U.S. government can and can't use their technology.

Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts. Despite the show of solidarity, such a deal could see OpenAI replace Anthropic if the Pentagon follows through with its plan to declare the latter a "supply chain risk."

  • What a win for xAI (Score:5, Insightful)

    by rickrickles ( 10431830 ) on Friday February 27, 2026 @12:51PM (#66013854)
    Pretty sure xAI won't feel the same way and would prefer a complete hands-off approach as long as the money comes rolling in.
    • by Whateverthisis ( 7004192 ) on Friday February 27, 2026 @12:52PM (#66013860)
      Yep. Just what we need: an AI capable of making kill decisions for autonomous weapons that is deep down deeply racist, fascist, and used primarily for NSFW images.

      America!

• Kill decisions are simple in comparison: stay within your predefined geofence, kill anything that moves that isn't transmitting a Friend beacon. We don't need AI for that; I coded a form of it in both Basic and Forth back in the 1990s.

        • by jd ( 1658 )

          The Gulf War had a lot of friendly fire deaths from the US ignoring any beacon signal and just killing anything they saw.

          Do you seriously think their automated systems (built by people of precisely this mentality) will be any better?

          • He doesn't care about that, he's just happy to present a scenario in which our fascist killbot-wanting regime isn't as abhorrent or alarming.
            • by dfghjk ( 711126 )

              "he's just happy to present a scenario ..."
              And to brag that he solved the problem in middle school. No doubt as good as Musk's self driving and cost effective Falcon 1.

              • I was actually in college in the 1990s, but yes, a middle schooler today with python on a raspberry pi and a pretty simple GPS module could do this.

            • I didn't say it wasn't abhorrent or alarming. I'm presenting the scenario that this task of "defend this three dimensional coordinate box" doesn't require AI.

• Why send friends there at all? Just send weapons that kill everything. What can 1 billion drones with .5lbs of explosives and facial recognition cost? Fly over, drop, profit.

          • Yes, it did. The beacon signals weren't that good back then, neither were the sensors. I had the same problem in the fake robot battles I was involved in.

            The answer turned out to be a solution not from Defense industries, but from Genie Garage Door Openers.

        • by dfghjk ( 711126 )

          "Stay within your predefined geofence..."

          LOL for what country? And what administration?

          • The robot doesn't care. The robot's job isn't foreign policy. The robot's job is "here's a box defined by this coordinate cloud, defend it"

        • by amosh ( 109566 )

          Ahh, for the confidence of an engineer when faced with a problem they have no familiarity with. Yep, I bet it's that easy and you're the only one brilliant enough to have cracked the code!

          • Like I said, I programmed it for a fighting robot back in the 1990s. It ain't that complex, and with today's drone factory ships, the Navy can now output this level of AI in killbots at a rate of 10,000 a day.
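[Editor's note: the "geofence plus friend beacon" rule described in this subthread is simple enough to sketch. This is an illustration only; all names here (`Box`, `should_engage`, etc.) are invented for the sketch, and as other commenters note, real systems fail on exactly the parts this elides (sensor noise, unreliable beacons, spoofing).]

```python
# Minimal sketch of "defend this 3-D coordinate box": engage anything
# moving inside the box that is not transmitting a friend beacon.
from dataclasses import dataclass

@dataclass
class Box:
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

    def contains(self, p):
        # True if every coordinate of p lies within [lo, hi]
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

def should_engage(box, position, is_moving, friend_beacon):
    """Engage only moving, beacon-less targets inside the box."""
    return box.contains(position) and is_moving and not friend_beacon

fence = Box(lo=(0, 0, 0), hi=(100, 100, 50))
print(should_engage(fence, (10, 20, 5), True, False))   # inside, moving, no beacon
print(should_engage(fence, (10, 20, 5), True, True))    # friend beacon present
print(should_engage(fence, (500, 20, 5), True, False))  # outside the fence
```

The three-predicate check is the easy part; the Gulf War friendly-fire point made above is that `friend_beacon` and `is_moving` are noisy sensor estimates, not clean booleans.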

      • by ceoyoyo ( 59147 )

Currently the kill decisions are entrusted to sleep-deprived teenagers on speed, so maybe not so much of a difference. Mass surveillance by something trained on edgelord Twitter should be interesting, though.

      • Trump going with xAI would just be one more example of Trump choosing loyalty over competence. If he's going to use AI, it's best that he uses the worst.

• To be fair, there is a long tradition in the DoD, predating Trump, of using the worst-performing system for defense contracting.
    • And if they don't, some other startup will.

      • One of the consistent problems is that extremely intelligent people capable of working on advanced programs like this often don't like working for the Department of defense. The guys who worked on the nuclear bomb didn't fully understand what they were doing. These days after decades of watching American weapons being used against civilian populations pretty much everybody knows.

        So yeah the government will build military AIs but it'll be a little while before the big boys have enough control of the mark
    • Wait, what? The deal went to OpenAI, not xAI.

      I wonder why. Maybe the deal went to the company that was willing to provide the biggest kickback.

• The fight over the Pentagon's demands, however stupidly Hickseth made them, isn't about right or wrong or the safety of humanity. It's about who controls the product, and the badass AI bros and their investors have no plans to relinquish any amount of that to the government, plain and simple.

  • by Anonymous Coward

    The pentagon should just do a deal with DeepSeek.

  • by greytree ( 7124971 ) on Friday February 27, 2026 @01:01PM (#66013876)
    A shame Altman didn't have redlines about staying Open and Non-Profit, the money grubbing scumbag.

    Still, no-one will care when OpenAI go bust.
    • Something along those lines, yes.

    • by dfghjk ( 711126 )

      LOL if only he would follow the lead of his mentor, Elon Musk. Definitely no money grubbing there!

      "Still, no-one will care when OpenAI go bust."
      I think plenty will care, on both sides. But if he could take Elon with him there would be universal celebration, except for MAGA like you.

  • by nightflameauto ( 6607976 ) on Friday February 27, 2026 @01:12PM (#66013898)

    This coming from Altman sounds very much like a plea for a bigger cash infusion. A moral stance from that man would die of loneliness. Which makes me think this is an immoral plea, and the most obvious logical conclusion is that he's taking this approach until the right dollar amount is attached to the potential contract.

    In fact, seeing Altman join this particular fight makes me think it's entirely possible they're all playing that same game. "We have morals, until you pay us not to," seems to be something this particular administration uses itself, so it wouldn't shock me to find out others are trying their hand at the same tactic.

• I guess good on them for sticking to their guns (for now). But in light of their other changes to policy, this seems like it's just CYA. If the military used AI to control drones with guns, and a US soldier got wounded, you can bet they'd blame the AI company. I wouldn't want to risk that either, no matter how much they offered. Imagine the hit your stock would take after the inevitable headlines.
• Well, of course he had to blab, because the fact that his company didn't give two shits is why he felt the pressure to blab again.

    It is completely irrelevant what he thinks if he already signed the deal his competitor is objecting to. Basically saying "Well I didn't welly wanna sign it, promise."

    Bullshit. Your name is there. You agreed. You sold out. You now don't like how it is giving favorable light to your competition, so here you are trying to slide into the picture from the corner like some wann

  • China will do it and we will be at a disadvantage. But I don't agree with surveillance, especially AI surveillance of American citizens.

  • If they don't play ball, they'll all be forced to via DPA
  • The time to consider the possible consequences of building a highly scalable multipurpose weapon that can be readily deployed by (almost) any country on this planet was before building it. Now? It's too late. Much, much too late. There's no putting this toothpaste back in the tube. Governments want it, and one way or another, they're going to get it.

    And a lot of innocent people are going to suffer as a consequence.

    Not that sociopathic psychopath Sam Altman understands this, or would care even if
  • by whitroth ( 9367 )

    They know just how hallucinatory the chatbots are, and likely to kill their own people.

I mean, Kegsbreath thinks no US military ever retreats for any reason... (see "Waist Deep in the Big Muddy" https://www.youtube.com/watch?... [youtube.com])

  • 1) Publicly attach oneself to someone on the moral high ground.

    Next steps are to:
    2) Backpedal from the principled stand - perhaps after appropriate inducements and of course everyone's favourite step,
    3) Profit!!!

  • Because then our adversaries will do the same. Yeah, that sounds monumentally stupid too.

    • Because then our adversaries will do the same. Yeah, that sounds monumentally stupid too.

      What looks stupid can depend on perspective. If you are one of the major nuclear powers, then what you say seems correct. On the other hand, if you were an observer looking at earth as a whole, it might not.

    • by amosh ( 109566 )

      I don't know if you know what a nuke is. It's a bomb with enormous destructive power, capable of leveling a city.

      I don't know if you know what OpenAI makes. It's a computer program that spits out text based on probabilities. It is not capable of leveling a city, but it CAN also make terrible porn, and get lawyers sanctioned for making up bullshit.

      So... um... what are you talking about?

    • by fjo3 ( 1399739 )
      It worked out well for Ukraine!
• So right now they are competing for talent, and they are worried that some of the talented would be uncomfortable working on weapons that kill people, especially given the history of American weapons being used on civilian populations.

    Also the obvious we don't want the bad press from it.

    But make no mistake none of these fuckers will turn down money no matter where it comes from if they can get it without taking a hit somewhere else.
  • Trump orders government to stop using Anthropic in battle over AI use [bbc.com]

    "We don't need it, we don't want it, and will not do business with them again!" Trump wrote in a Truth Social post.
    Trump made the latest comments just before a deadline the Pentagon had given Anthropic to grant it unfettered access to the firm's AI tools.
    As the Pentagon's deadline approached Trump released the barrage of messages on the Truth Social platform, saying Anthropic "better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
    The President also called Anthropic "woke" and accused it of being an "out-of-control, Radical Left AI company [...]

    It's like The Godfather, without the civility.

  • Honest Sam strikes again:

        "OpenAI strikes deal with Pentagon hours after Trump admin bans Anthropic"

    What a scumbag.

    https://cnn.com/2026/02/27/tech/openai-pentagon-deal-ai-systems
• This is good news, because I'm sure China, Russia, North Korea, and all the other countries have drawn the same red lines too. They would never use AI as a weapon. Never!
