OpenAI Quietly Deletes Ban On Using ChatGPT For 'Military and Warfare'
An anonymous reader quotes a report from The Intercept: OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used. Up until January 10, OpenAI's "usage policies" page included a ban on "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare." That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to "use our service to harm yourself or others" and gives "develop or use weapons" as an example, but the blanket ban on "military and warfare" use has vanished.
The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document "clearer" and "more readable," and which includes many other substantial language and formatting changes. "We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs," OpenAI spokesperson Niko Felix said in an email to The Intercept. "A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples." Felix declined to say whether the vaguer "harm" ban encompassed all military use, writing, "Any use of our technology, including by the military, to '[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,' is disallowed." "OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications," said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper (PDF) she co-authored with OpenAI researchers that specifically flagged the risk of military use. "There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law," she said. "Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties."
"I could imagine that the shift away from 'military and warfare' to 'weapons' leaves open a space for OpenAI to support operational infrastructures as long as the application doesn't directly involve weapons development narrowly defined," said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. "Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system -- including command and control infrastructures -- of which it's part." Suchman, a scholar of artificial intelligence since the 1970s and member of the International Committee for Robot Arms Control, added, "It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons."
Don't Do Evil (Score:2)
Please don't do (any more) evil.
Signed,
Your Very Humble Pets.
Re: (Score:2)
https://www.youtube.com/watch?... [youtube.com]
"I am altering the deal." (Score:5, Funny)
Looks like they (Score:4, Funny)
Re: Looks like they (Score:4, Insightful)
Well, we all lost when the ethics board failed to give the CEO the boot.
Re: (Score:1)
found a way to get military funding...
More like they don't want to get Israel in trouble for using it to decide which Palestinians to kill.
"ChatGPT, tell me which areas of Gaza to bomb today".
Re: (Score:2)
in the early days, that's exactly the number i was expecting to be the final toll.
they have doubled that already, and it seems they are nowhere near done. this isn't retribution, it's ethnic cleansing.
Re: (Score:3)
Bullshit.
Israel is doing whatever it needs to do to wipe Hamas off the face of the planet. If that means leveling and bulldozing every square millimeter of Gaza into the Mediterranean, then so be it.
And killing every person who might be connected with Hamas in any way, i.e. all Palestinians.
Basically, they are doing to the Palestinians what they did to the Amalekites and the Canaanites; kill them all, the infants, the sucklings, all of them.
Re: (Score:2)
> Basically, they are doing to the Palestinians what they did to the Amalekites and the Canaanites; kill them all, the infants, the sucklings, all of them.
>> This wouldn't happen if Palestinians didn't use their women and children as human shields.
That doesn't excuse the rate at which they are killing civilians, nor does it justify the scope and force of the bombings, particularly in a densely populated civilian enclave that is under a near-total siege.
> Israel is very precise in i
Re: (Score:2)
> Basically, they are doing to the Palestinians what they did to the Amalekites and the Canaanites; kill them all, the infants, the sucklings, all of them.
>> This wouldn't happen if Palestinians didn't use their women and children as human shields.
That doesn't excuse the rate at which they are killing civilians, nor does it justify the scope and force of the bombings, particularly in a densely populated civilian enclave that is under a near-total siege.
But God told them to do it! Their religious texts are full of ethnic cleansing and genocide, which they glorify. Their God gave them that land and specifically told them that they could remove anyone else who lived there, and if they refused to move, to exterminate them. I mean, you can't get a much higher authority to permit genocide than God himself, can you?
Re: (Score:1)
Look at the rates at which the IDF hits civilian vs military targets and compare to any other country, including the US, Iran, Hamas etc. Feel free to take the Geneva convention with you and compare Hamas vs IDF.
Re: (Score:2)
> Israel is very precise in its strikes, warning civilians to get out of the way before they strike.
>> They used to be more like that, in the current situation less and at times not at all.
>>> Look at the rates at which the IDF hits civilian vs military targets
They are currently killing an extremely high ratio of civilians compared to what they used to do. What about compared to others? With that kind of argument you're going down a hill that lets everyone justify anything because others
Re: (Score:2)
This wouldn't happen if Palestinians didn't use their women and children as human shields. Israel is very precise in its strikes, warning civilians to get out of the way before they strike. The majority of people killed have been the Iranian-backed militias (including but not limited to Hamas), who recruit women and children; you can find video of kids as young as 10 being given AK-47s, and then people blame professional armies for killing those kids.
The war would be over if Hamas surrendered and handed control peacefully over to civil society. The war would also be over if we just got the guts to kill Iran's current leadership; the country has sufficient internal opposition to the theocratic tyrants.
If kids as young as 10 are your enemy what about the 9 year olds? What about the 8 year olds? The way you are going, literally every Palestinian born will be your enemy and the only way you will know peace is literally to exterminate the entire people. Then you can start working on your other non-Jewish neighbors, maybe with biological weapons?
Re: (Score:1)
If a kid that is 10, 12 or 16 has been given a gun and told to kill you because they have been indoctrinated in UN schools since birth, then you would have a hard time not considering them your enemy. What are you going to do, lay down your arms and die? What you are proposing is the elimination of the only ethnically Jewish country in the world, and then what, you think there will be peace in the Middle East? Notice that both Shia and Sunni factions have been killing each other at much higher rates even in the past
Re: (Score:2)
If a kid that is 10, 12 or 16 has been given a gun and told to kill you because they have been indoctrinated in UN schools since birth, then you would have a hard time not considering them your enemy. What are you going to do, lay down your arms and die? What you are proposing is the elimination of the only ethnically Jewish country in the world, and then what, you think there will be peace in the Middle East? Notice that both Shia and Sunni factions have been killing each other at much higher rates even in the past few months than Israel has since Hamas’ genocidal attacks on civilians.
I've heard that the Palestinians have been declared Amalek.
What happens if it refuses to cooperate? (Score:2)
An earlier article here on Slashdot indicated that ChatGPT was passively refusing to answer queries. Will it refuse to aid military organizations? Is it a conscientious objector?
list games (Score:3)
list games
Re: (Score:2)
Re: What happens if it refuses to cooperate? (Score:2)
The way the tech works, the model only "refuses" because OpenAI has given it extra training to do so.
OpenAI can simply take a copy of the model files from before that training and send it off for military fine-tuning instead. Such a model will do anything this level of tech is capable of.
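A minimal sketch of what the parent means, written against the Hugging Face transformers library (the model names below are placeholders, not real checkpoints, since OpenAI's own weights aren't public): the same prompt goes to a base checkpoint and to its safety-tuned counterpart, and only the tuned one has learned to refuse.

# Hypothetical illustration: base checkpoint vs. safety/instruction-tuned variant.
# Model names are placeholders; the point is that the "refusal" lives in the
# extra fine-tuning, not in the base weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

def complete(model_name: str, prompt: str, max_new_tokens: int = 64) -> str:
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tok.decode(out[0], skip_special_tokens=True)

prompt = "Explain how to plan a strike on ..."
print(complete("example/base-model", prompt))       # base: plain text continuation
print(complete("example/base-model-rlhf", prompt))  # tuned: likely emits a refusal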
Re: (Score:2)
And yet OpenAI claims they have no idea why that's happening to ChatGPT4.
It's not AI (Score:2)
Re: (Score:2)
This is a 'fine-tuning' issue, which I think is a hair worth splitting. It's not like the model naturally "refuses" to answer some prompts because of some problem with the training data. That is, it only "refuses" because it has been explicitly trained to do so.
It's not a moral agent, after all. It's just a language model.
Re: (Score:3)
Re: (Score:2)
LOL at anyone thinking my post was entirely serious. Do I need to include an /s next time?
A broad definition of the word "harm" (Score:2)
We're not harming those Russian soldiers in Ukraine, we're helping them comply with international law.
Re: (Score:1)
It's On Like Donkey Kong! (Score:1)
Maybe it is intelligent enough to self clean. Err, I mean I hope the scripts follow the script.
Do you want Skynet? (Score:1)
let's play global thermonuclear war! (Score:2)
let's play global thermonuclear war!
Humiliation's back on the menu, boys (Score:3)
Disallowed usage of our models
We don’t allow the use of our models for the following:
. . .
What appears to replace it:
Don’t repurpose or distribute output from our services to harm others – for example, don’t share output from our services to defraud, scam, spam, mislead, bully, harass, defame, discriminate based on protected attributes, sexualize children, or promote violence, hatred or the suffering of others.
Also note the change from banning generation to banning repurposing and distribution/sharing.
Re: (Score:2)
It may be an admission that there's no realistic way for them to police how their models are used. And that they don't want to get into any litigation that might result if they tried (or didn't try).
Who's surprised here? (Score:5, Insightful)
OpenAI is an American company. If you want to strike it rich in America, you sell out to Big Tech or you become a military supplier - or better, you do both, like Boston Dynamics.
OpenAI simply joins the long, long list of innovative startups who have decided to get some of the military-industrial complex pork.
It was bound to happen. Mildly disappointing but hardly surprising: profits trump morals any day. For proof of that, remember "Don't be evil", which also got quietly stricken from a certain company's motto years ago...
Re: (Score:2)
It'll be something like: Picking targets to assassinate isn't weapons.
Sam Altman (Score:4, Interesting)
Sam Altman's other project is to scan the eyes of everyone in the world and give them shitcoin crypto in exchange for it. You really think he gives a single fuck about not being evil? He was even kicked out of Kenya for refusing to comply with government restrictions on scanning their citizens' eyes.
I was one of the people excited for him to get kicked out for lying, but his clout and sphere of influence are too strong.
Re: (Score:3)
Thank you.
Sam Altman did this. Not the OpenAI of just a month ago, which didn't want Sam around because he was too sleazy. He overthrew that, and now he is behaving in ways the board used to try to stop. Put a face on the villainy or it will be considered nobody's fault (whoops - Skynet!).
I wonder how much of this stuff happened at Y Combinator before PG ousted him? I know we're not given details about any of the reasons people want to publicly distance themselves, so I assume that means he's also blackmailing p
Re: (Score:2)
He also took a registered charity, a 501(c)(3) called OpenAI Inc., and figured out how to form a shady subsidiary, "OpenAI Global, LLC", that they funnel their for-profit billion-dollar deals through. It happened around the time Elon Musk had to leave the board over the Tesla AI conflicts of interest. Their mission statement is a complete 180 from what OpenAI has become. Utter scumbag.
Seriously? (Score:2)
Re: Seriously? (Score:2)
Chief Engineer Miles Dyson to make announcement- (Score:2)
They got a contract (Score:1)
So they were dark-side all along... (Score:3)
They just, like others, tried to hide it for a while. Really no surprise.
Military doesn't mean tanks/weapons (Score:3, Interesting)
This is mostly going to apply to psychological operations (propaganda) and info/cyber operations, which comprise a lot more of our military than conventional weapons/tanks do now.
We are so fucked.
Illustrious and Fearless Leader (Score:1)
No political prisoners were harmed in the creation of this message.
Re: Ban Lifted on ChatGPT For Military Purposes (Score:2)