OpenAI Amends Pentagon Deal As Sam Altman Admits It Looks 'Sloppy' (theguardian.com) 29
OpenAI is amending its Pentagon contract after CEO Sam Altman acknowledged it appeared "opportunistic and sloppy." On Monday night, Altman said the company would explicitly restrict its technology from being used by intelligence agencies and for mass domestic surveillance. The Guardian reports: OpenAI, which has more than 900 million users of ChatGPT, made the deal almost immediately after the Pentagon's existing AI contractor, Anthropic, was dropped. [...] The deal prompted an online backlash against OpenAI, with users of X and Reddit encouraging a "delete ChatGPT" campaign. One post read: "You're now training a war machine. Let's see proof of cancellation."
In a message to employees reposted on X, the OpenAI CEO said the original deal announced on Friday had been struck too quickly after Anthropic was dropped. "We shouldn't have rushed to get this out on Friday," Altman wrote. "The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." Upon announcing the deal, OpenAI had said the contract had "more guardrails than any previous agreement for classified AI deployments, including Anthropic's."
[...] However, observers including OpenAI's former head of policy research, Miles Brundage, have queried how OpenAI has managed to secure a deal that assuages ethical concerns Anthropic believed were insurmountable. Posting on X, he wrote: "OpenAI employees' default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them." Brundage added: "To be clear, OAI is a complex org, and I think many people involved in this worked hard for what they consider a fair outcome. Some others I do not trust at all, particularly as it relates to dealings with government and politics."
In his X post, he also wrote that he would "rather go to jail" than follow an unconstitutional order from the government. "We want to work through democratic processes," Brundage wrote. "It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty."
Altman (Score:5, Insightful)
I don't believe a single thing this guy says. It's all BS and greed.
Re:Altman (Score:5, Insightful)
It is kind of funny watching him attempt damage control, though - because he's SO BAD at it.
Not that he cares, really. In the end, it's all about more potential billions in his bank account.
Re:Altman (Score:5, Interesting)
I don't believe a single thing this guy says. It's all BS and greed.
This 14 min post [instagram.com] from More Perfect Union talks about that and Altman's history of (basically) using "Trust me bro" (which we really shouldn't) to get where he is today. It starts off with Altman getting annoyed when an interviewer asks how OpenAI is going to meet $1T in spending commitments with only $13B in revenue. I thought the post was pretty interesting and illuminating.
Re: (Score:2)
It doesn't look sloppy at all (Score:5, Insightful)
Re: (Score:2)
It only looks sloppy to people who took Altman's statement at face value (if that was you, please stop eating glue, it's not good for you). I think to most of us it was a clear and decisive move to leverage OpenAI's relative lack of morales for some big bucks. Everyone in America realizes at this point that hiding behind "the government promises to obey the law" was never particularly believable but has become farcical in 2026.
Altman's motives are so transparent he might as well be a ghost at this point.
I'm hardly clairvoyant, but I stated plainly what Altman's backing of Anthropic's stance meant when it was announced here. [slashdot.org] Anybody who believed him should, as you say, stop eating glue, or using it as a dip for their leaded paint chips.
Re: (Score:2)
"Everyone in America realizes at this point that hiding behind "the government promises to obey the law" was never particularly believable but has become farcical in 2026."
Cannot say it any better. Under the best of circumstances we should be concerned about the government. These are the worst of circumstances, though.
Re: (Score:3)
Sorry, but these are not (yet?) the worst of circumstances.
Re: (Score:3, Funny)
OpenAI's relative lack of morales
A lot of them were deported [lawenforcementtoday.com].
Re: (Score:3)
Re: (Score:2)
"Everyone in America", while I agree with the sentiment, I have my doubts that most people in America know what OpenAI is, or is behind ChatGPT, or what ChatGPT is, much less who Altman is.
Re: (Score:2)
I first read that as "facial" then realized it worked just the same as farcical.
Incompetency all around (Score:5, Insightful)
The government is completely incompetent for letting petty personal grievances by the DOD negotiator subvert the Anthropic deal. (NYT has a good article about what happened).
OpenAI is also completely incompetent. Somebody who rushes into sloppy decisions like this is not the kind of person you want in charge of important technology like this.
Re:Incompetency all around (Score:5, Insightful)
The government is completely incompetent for letting petty personal grievances by the DOD negotiator subvert the Anthropic deal. (NYT has a good article about what happened).
OpenAI is also completely incompetent. Somebody who rushes into sloppy decisions like this is not the kind of person you want in charge of important technology like this.
Incompetent doesn't quite hit the mark. Immoral. Unethical. Looking for the quickest path to the biggest payout. Incompetence implies lack of intelligence. OpenAI has some intelligence within their leadership, but all of that intelligence is put into monetization efforts, rather than attempting to do the right thing. Which may appear as incompetence to those who are wired to try to do the right thing, but is, in fact, much more dangerous than incompetence. Incompetence can't move nearly so quickly toward dangerous end results as competence and immorality teaming up together.
Gov't should sign with Grok (Score:3)
...as Musk would sell his children to Satan for an advantage. That's right up Donald's alley. (At least the kids Elon doesn't like.) [usatoday.com]
Riiiiiiiggghht..... (Score:2)
Well, if there's anyone who would know about slop...
Un-fucking someone (Score:2)
Sorry, you can't un-fuck someone. It happened. The deed is done, and Altman's true colors have been shown.
It's all just propaganda, anyway. "sloppy" is code for "we are going to keep doing it, but not publicly disclose it".
Re: (Score:2)
It *is* more complicated than that. A private company is allowed to censor, and if the government "requests" rather than orders it, then I'm not sure it's unconstitutional. The problem is if the company is a public accommodation, then it's NOT supposed to be allowed to censor...but practically, that's really necessary. Newspapers use "editorial judgement" as to which "letters to the editor" they print, after all. And most of them won't print things that the government finds too offensive. (The ones that
WOAWOAWOA (Score:2)
HEY! Where's everybody going? We're the same as Anthropic, HONEST! WAIT! DONT GO!
-Sam A, probably
So, I'm not a lawyer...... (Score:2)
But doesn't the amendment, and the reasoning after the fact, make the entire statement LESS relevant and binding?
This is only the half of it (Score:5, Insightful)
While OpenAI may have modified their contract to remove mass surveillance on US citizens, there is curiously no mention of the other reason Anthropic was dropped by the Pentagon - using AI for autonomous lethal weapons. So it looks like they're still going to do that part. What could possibly go wrong?
Re: This is only the half of it (Score:2)
Reminds me of a quote (Score:2)
What really happened (Score:2)
Anthropic: We know our AI is not reliable enough to be used in military action.
Trump/Hegseth: What should we say?
Altman: Tell them you're smarter than they are, and you know it will work.
Trump/Hegseth: Hey, Anthropic, we're smarter than you are. Trust us!
Amazing (Score:2)
It amazes me to realize how many people, especially with the mental skills expected here, who are furious if the US snoops into their lives or plots to nuke somebody, are shrugging off or not even thinking of the consequences of some other less benign country (Russia, Iran, China) doing the same thing. It's gonna happen. How are you gonna like it?
{O.O}