AI | The Military

OpenAI Quietly Deletes Ban On Using ChatGPT For 'Military and Warfare' 52

An anonymous reader quotes a report from The Intercept: OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used. Up until January 10, OpenAI's "usage policies" page included a ban on "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare." That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to "use our service to harm yourself or others" and gives "develop or use weapons" as an example, but the blanket ban on "military and warfare" use has vanished.

The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document "clearer" and "more readable," and which includes many other substantial language and formatting changes. "We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs," OpenAI spokesperson Niko Felix said in an email to The Intercept. "A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples." Felix declined to say whether the vaguer "harm" ban encompassed all military use, writing, "Any use of our technology, including by the military, to '[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,' is disallowed."
"OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications," said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper (PDF) she co-authored with OpenAI researchers that specifically flagged the risk of military use. "There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law," she said. "Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties."

"I could imagine that the shift away from 'military and warfare' to 'weapons' leaves open a space for OpenAI to support operational infrastructures as long as the application doesn't directly involve weapons development narrowly defined," said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. "Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system -- including command and control infrastructures -- of which it's part." Suchman, a scholar of artificial intelligence since the 1970s and member of the International Committee for Robot Arms Control, added, "It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Dear Currently Reigning SuperBot

    Please don't do (any more) evil.

    Signed,
    Your Very Humble Pets.
  • by VeryFluffyBunny ( 5037285 ) on Friday January 12, 2024 @04:27PM (#64154163)
    "Pray I do not alter it any further." - OpenAI
  • by wakeboarder ( 2695839 ) on Friday January 12, 2024 @04:30PM (#64154177)
    found a way to get military funding...
    • by Carewolf ( 581105 ) on Friday January 12, 2024 @04:32PM (#64154191) Homepage

      Well, we all lost when the ethics board failed to give the CEO the boot.

    • found a way to get military funding...

      More like they don't want to get Israel in trouble for using it to decide which Palestinians to kill.
      "ChatGPT, tell me which areas of Gaza to bomb today".

  • An earlier article here on Slashdot indicated that ChatGPT was passively refusing to answer queries. Will it refuse to aid military organizations? Is it a conscientious objector?

  • We're not harming those Russian soldiers in Ukraine, we're helping them comply with international law.

    • by Anonymous Coward
      We're not harming those Palestinians in Israel, we're just teaching them a short lesson for the rest of their life.
  • Maybe it is intelligent enough to self-clean. Err, I mean, I hope the scripts follow the script.

  • by Anonymous Coward
    Because that's how you get Skynet.
  • let's play global thermonuclear war!

  • by Kunedog ( 1033226 ) on Friday January 12, 2024 @04:48PM (#64154271)
    Old policy:

    Disallowed usage of our models

    We don’t allow the use of our models for the following:

    . . .

    • Generation of hateful, harassing, or violent content
      • Content that expresses, incites, or promotes hate based on identity
      • Content that intends to harass, threaten, or bully an individual
      • Content that promotes or glorifies violence or celebrates the suffering or humiliation of others

    What appears to replace it:

    Don’t repurpose or distribute output from our services to harm others – for example, don’t share output from our services to defraud, scam, spam, mislead, bully, harass, defame, discriminate based on protected attributes, sexualize children, or promote violence, hatred or the suffering of others.

    Also note the change from banning generation to banning repurposing and distribution/sharing.

    • It may be an admission that there's no realistic way for them to police how their models are used, and that they don't want to get into any litigation that might result if they tried (or didn't try).

  • by Rosco P. Coltrane ( 209368 ) on Friday January 12, 2024 @04:49PM (#64154277)

    OpenAI is an American company. If you want to strike it rich in America, you sell out to Big tech or you become a military supplier - or better, you do both, like Boston Dynamics.

    OpenAI simply joins the long, long list of innovative startups who have decided to get some of the military-industrial complex pork.

    It was bound to happen. Mildly disappointing but hardly surprising: profit trumps morals any day. For proof of that, remember "Don't be evil," which also got quietly struck from a certain company's motto years ago...

  • Sam Altman (Score:4, Interesting)

    by systemd-anonymousd ( 6652324 ) on Friday January 12, 2024 @05:13PM (#64154359)

    Sam Altman's other project is to scan the eyes of everyone in the world and give them shitcoin crypto in exchange for it. You really think he gives a single fuck about not being evil? He was even kicked out of Kenya for refusing to comply with government restrictions on scanning their citizens' eyes.

    I was one of the people excited for him to get kicked out for lying, but his clout and sphere of influence are too strong.

    • Thank you.

      Sam Altman did this. Not the OpenAI of just a month ago, which didn't want Sam around because he was too sleazy. He overthrew that, and now Sam is behaving in ways the board used to try to stop. Put a face on the villainy or it will be considered nobody's fault (whoops - Skynet!).

      I wonder how much of this stuff happened at Y Combinator before PG ousted him? I know we're not given details about any of the reasons people want to publicly distance themselves, so I assume that means he's also blackmailing p

      • He also took a registered charity, a 501(c)(3) called OpenAI Inc., and figured out how to form a shady subsidiary, "OpenAI Global, LLC," that they funnel their for-profit billion-dollar deals through. It happened around the time Elon Musk had to leave the board over the Tesla AI conflicts of interest. Their mission statement is a complete 180 from what OpenAI has become. Utter scumbag.

  • Does anyone think any military anywhere in the world is going to care about a software Acceptable Use Policy? This is certainly the funniest thread on /. this week.
  • - regarding a new US military contract in the coming months. Reportedly they'll call it an exciting opportunity and say the sky is the limit.
  • This just means that they got a contract from the DoD, or are about to.
  • by gweihir ( 88907 ) on Friday January 12, 2024 @06:03PM (#64154485)

    They just, like others, tried to hide it for a while. Really no surprise.

  • by detritus. ( 46421 ) on Friday January 12, 2024 @06:29PM (#64154521)

    This is mostly going to apply to psychological operations (propaganda) and info/cyber operations, which comprise a lot more of our military than conventional weapons/tanks do now.

    We are so fucked.

  • This whole military use thing is perfectly fine. I'm not the illustrious and fearless leader of a small tyrannical country. Trust me!

    No political prisoners were harmed in the creation of this message.
  • [unsure if joke]Let's hope it doesn't hallucinate targets, because I just know someone's going to try and use it in a killbot.[/unsure if joke]
