
Free Software Foundation Says 'Responsible AI' Licenses Which Restrict Harmful Uses are Unethical and Nonfree (fsf.org) 39

The Free Software Foundation's Licensing and Compliance Manager published a blog post this week to explicitly state that "Responsible AI" Licenses (RAIL) are nonfree and unethical. The licenses restrict AI and ML software "from being used in a specific list of harmful applications," according to the licenses' website, "e.g. in surveillance and crime prediction." (The licenses' steering committee consists of volunteers from multiple academic institutions.)

But even though Responsible AI licenses are marketed as addressing ethical challenges, the FSF argues "they do not require anything that is really necessary for users to control their computing done with machine learning, including: complete training inputs, training configuration settings, trained model, or — last, but not least — the source code of software used for training, testing, and running tools based on machine learning." Thus, RAILed machine learning can be, and most probably will be, unethical. Use restrictions do not prevent these licenses from being used to exercise power over users...

RAIL contribute to unethical marketing of machine learning, again under the disguise of morally-loaded restrictions they purport to enforce. If we want software to help decrease social injustice, we should oppose licenses that restrict how software can be used. We should focus on effective ways of addressing injustices: government and community support for freedom-respecting tools and services; releasing programs under strong copyleft licenses; and entrusting copyrights to organizations that have the resources to enforce copyleft.

Software freedom must be defended, not denied. More specifically, the more free software is out there, the more likely people will collaborate on tools and services that do not pose moral dangers and help solve existing ones. Free software also makes it more likely that users have real choices when looking for freedom-respecting ethical programs and tools based on machine learning. Denying people the freedom to a particular program, as RAIL or similar licenses would have it, prevents them from using such program for the common good.


Comments Filter:
  • "Harmful" is subjective and debatable.

    If we want software to help decrease social injustice, we should oppose licenses that restrict how software can be used.

    Yep. Software freedom is software freedom. Even if it lets those smelly other people who dare to have different opinions use the software ...

    • Ironically the Free Software Foundation is historically in favor of licenses that restrict actions they consider "harmful".
      • by HiThere ( 15173 )

        Please provide examples. Otherwise I'll consider that your handle is missing an appropriate "u".

        • For example :
          Letting companies use the software without allowing the end user to modify it.
          Anti-tivoization clause which prevents an unalterable chain of trust managed by the vendor for every binary on the device.
          Requiring service providers to provide sources to software not even running on the user's device. (AGPL)

          /s - just to be sure

        • by drnb ( 2434720 )

          Please provide examples. Otherwise I'll consider that your handle is missing an appropriate "u".

          You are superficial in both your understanding of licenses and your reading of my handle, and lacking knowledge in both cases as a result.

          With respect to licenses, the viral nature of GNU should have come to mind if you had thought for a moment.

        • by ceoyoyo ( 59147 )

          They say it right in the summary: copyleft. The GPL has a whole list of restrictions. The point of a license is to impose restrictions. Even the MIT license has a (short) list of obligations.

          The FSF and other organizations that write licenses have a list of restrictions they think are justified for one reason or another. You may agree or disagree with them, but that's what they do. If you don't want to impose any restrictions on the end user you simply release it into the public domain.

          • by HiThere ( 15173 )

            I think we disagree, but perhaps you should stick to BSD-licensed code. There's always been a disagreement as to which license counts as free. With the GPL the freedom resides in the code; with BSD the freedom resides in the user.

    • by dgatwood ( 11270 )

      Yep. Software freedom is software freedom. Even if it lets those smelly other people who dare to have different opinions use the software ...

      No, full stop.

      The problem with software freedom, at least when it comes to sufficiently powerful entities like governments, is that they have access to treasure troves of data that the general public does not. They have the power to massively abuse privacy in ways that, once done, cannot ever truly be undone. They have the ability to stick AI technology onto a drone with machine guns and use it to assassinate random people anywhere in the world from miles away.

      Absolute software freedom can, in a very real

  • I sometimes get annoyed or angry at the evil doings of various people, and I make comments about beating them up or killing them, but I have never followed through because it is just to let off steam. I don't threaten people, and I only say crazy stuff like that when I am alone, and I am sure my spyware-infected smartphone is always listening (all smartphones are spying on their users at all times, thanks to Apple & Google).
    • My neighbor was having a small event recently. I appeared outside to check the weather. He asked me if they were bothering me with noise. I said no, and if they heard me yelling obscenities, it was because I was watching the news on TV.
    • If unhinged rants on the internet were a good predictor of crime they wouldn't be able to build prisons fast enough to house the criminals. The majority of people posting crap on the Internet are completely harmless. Like dogs safely behind a gate they bark loudly, but open the gate and they aren't as ferocious as they were moments before. Besides, everyone knows that it's the quiet ones you need to watch out for.
  • by david.emery ( 127135 ) on Saturday April 25, 2026 @12:13PM (#66111602)

    I remember Java for years had a license restriction against using it for safety-critical applications.

    It's interesting to consider restrictive licenses as a legal liability measure (as I suspect was true for Java), versus a technical or moral measure (i.e. 'we don't trust this well enough to use in some circumstances.')

    But I wonder if the FSF position will change if/when AI vibe-codes non-open-license replacements for key OSS projects.... Would they claim that the LLM 'inhaling' GPL licensed software inherits the license terms of the input?

    • That's my biggest worry about the Linux distros out there: lots of distros start using AI to write code or assemble their distribution, and lots of AI-enshittified Linux distros get released, flooding DistroWatch with bad, buggy distros, or worse, distros that have malware in them.
    • by AmiMoJo ( 196126 )

      We had all this with the GPL. Code licensed under it can be used for military applications, by North Korea, that sort of thing. You either accept that or use a different licence. You can't add additional restrictions to it.

      As for AI ripping off open source projects, it would be covered by existing copyright law. The AI slop would be considered a derivative work, even if it wasn't actually copied line for line from the original code. It could maybe be circumvented with a clean room implementation.

      • The AI slop would be considered a derivative work, even if it wasn't actually copied line for line from the original code. It could maybe be circumvented with a clean room implementation.

        Why not? One AI session creates a spec, a fully API-compatible spec. A different AI session implements that spec. How is the clean room principle violated?

        You seem to be assuming that an ML model somehow has a copy of the original source code, rather than perhaps concepts extracted from something it read. Not to mention assuming that the ML implementing the spec was trained on the original code. There could easily be two different ML models. One for generating a specification from source code, and one for gener
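
        A minimal sketch of that two-session separation, in Python. call_model() here is a hypothetical stub, not any real model API; the point is only that each session keeps its own private history and never sees the other's:

        def call_model(history, prompt):
            # Stub standing in for a real model backend: it records the
            # prompt in this session's private history and returns a
            # placeholder string.
            history.append(prompt)
            return "<model output for: " + prompt[:40] + "...>"

        def write_spec(original_source):
            # Session A reads the original code and emits only a
            # behavioral, API-level specification, never code.
            session_a = []
            return call_model(session_a, "Describe the public API and "
                              "observable behavior, without code:\n" + original_source)

        def implement_spec(spec):
            # Session B sees only the spec, never the original source.
            session_b = []
            return call_model(session_b, "Implement this specification:\n" + spec)

        def clean_room_clone(original_source):
            return implement_spec(write_spec(original_source))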

        • That's not only possible, it's very likely to be implemented in the near future, if it hasn't been already. However, it's also very probable that there will be unscrupulous companies out there training one AI on OSS source code, using it to produce a clone of that package and claiming to have used that clean room technique. And when (not if) it happens, how will people tell the difference?
          • by HiThere ( 15173 )

            By the license it's released under. If it's released under GPL, then it's free software. If it's released under a restrictive license then it isn't. And if it can be proven to have been AI generated, then it's public domain.

            The main purpose of Free Software licenses is not to restrict others from implementing it, it's to ensure that the code remains open.

            • by drnb ( 2434720 )

              By the license it's released under. If it's released under GPL, then it's free software. If it's released under a restrictive license then it isn't. And if it can be proven to have been AI generated, then it's public domain.

              Nope. If the source code is not disclosed it remains a protected trade secret.

              • by HiThere ( 15173 )

                If the code is GPL, and the binary is released, then it is required that the source is released. But you might prefer the AGPL.

          • by ceoyoyo ( 59147 )

            Has been already. There was a story about it a few days ago, which is pretty clearly what drnb is referring to.

            claiming to have used that clean room technique. And when (not if) it happens, how will people tell the difference?

            You don't need an AI for this. I expect there's a lot of "clean room" reverse engineering that's nothing of the sort.

        • by AmiMoJo ( 196126 )

          You would probably want two completely isolated AIs, not just separate sessions. Local only, on an air-gapped machine.

          But even then you might have problems because whoever trained the AI may have fed it the open source code being cloned.

          You would probably want a third AI to check the generated code to make sure it wasn't too close to the original.
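
          As a rough stand-in for that third check, even Python's standard difflib can flag generated code that tracks the original too closely. A sketch only; the 0.6 threshold is arbitrary, and a real review would also compare structure and identifiers, not just raw text:

          import difflib

          def similarity(original, generated):
              # Ratio in [0, 1]; 1.0 means the two texts are identical.
              return difflib.SequenceMatcher(None, original, generated).ratio()

          def too_close(original, generated, threshold=0.6):
              return similarity(original, generated) >= threshold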

          • An ML system is not a cut and paste machine. It may extract a concept. Reimplementing that concept is not inherently a derivative. Nor is similar code necessarily derivative work. The courts have recognized in copyright cases that some solutions are trivial, obvious, or otherwise the main reasonable way to implement something.

            If there is no history, nothing carried over from one session to another, what is the problem? The problem with a single human being is that it is inevitable that something will be
            • Reimplementing a concept for code you have already seen is derivative. You seem to live in a fairy world where these magical AIs are independent of human reality, and cleanly recreate universal software principles from scratch. That is not the reality.

              The reality is that open software is trained on, and then recreated imperfectly by machines. The fruit of the forbidden tree has been eaten long before you get your hands on a prompt. The onus is on whoever wants to pretend clean room is happening to *prove*

              • by drnb ( 2434720 )

                Reimplementing a concept for code you have already seen is derivative.

                Not in the context of copyright.

                The reality is that open software is trained on, and then recreated imperfectly by machines.

                And by humans. That is what happens when you read something in a textbook or academic paper, including the sample code - be it copyrighted or not, and then implement something based on your knowledge, your understanding of what was read. That's not copyright infringement, whether human or machine.

                There are already plenty of documented examples of AI outputs that contain signatures of artists, comments from code projects, and trademarked brand products.

                You are conflating how a

                • Not in the context of copyright.

                  Copyright applies (amongst other things) because the copyrighted code was copied, processed, and represented in the model with the ability to output it verbatim. That's what those signatures in AI generated images are: verbatim copies of the source material. That's what those quirky little phrases are when you say "write my homework in the style of X".

                  So when you ask for some code to do X, it's a derivative of the open source code the foundation model trained on.

                  And by h

                  • Copyright applies (amongst other things) because the copyrighted code was copied, processed, and represented in the model with the ability to output it verbatim.

                    That is the problem here: I do not believe that this is a given. I believe that the original data can be processed to a degree that its representation is more conceptual. That the original human structures, sequence, and organization can be lost. That what the ML has would be closer to a good comment functionally explaining what a block of code does. Not the actual code.

                    That's what those signatures in AI generated images are: verbatim copies of the source material. That's what those quirky little phrases are when you say "write my homework in the style of X".

                    I am not referring to LLMs that just go out and search the internet. I am referring to an ML system with two parts. One that processes sour

          • You missed my point: an unscrupulous company could use only one AI, trained in part on the software in question, to make a clone of it and claim falsely that they'd used a clean room to create their own version of the package.
            • by AmiMoJo ( 196126 )

              They could, but they could also have humans do something similar. It's just basic lying. They would need to produce evidence if they were sued.

        • If the LLM-AI can spew forth a word-for-word exact copy of a page from any book, why wouldn't it just swallow up OpenOffice's full code and modify a couple things and repackage it as almost-OpenOffice?
          If it's open source, the code to compile a working dingus from it is online someplace, and with ML copying and reading everything online, there's no doubt that a request for 'code to make a word processor' will result in the exact code from a bunch of FOSS word processors mashed together, which might work.

          It'd

          • by drnb ( 2434720 )

            If the LLM-AI can spew forth a word-for-word exact copy of a page from any book, why wouldn't it just swallow up OpenOffice's full code and modify a couple things and repackage it as almost-OpenOffice?

            Because in your example the book is not embedded into the "AI". The "AI" is grabbing a copy off the internet as needed.

            And the clean room phase, where someone writes a spec and then someone else writes an implementation, makes such copies of the original work irrelevant. That is the entire point of the clean room process.

            • "Someone" implies a human being (y'know... blood and flesh) doing both stages of it, which eliminates the AI (which wants access to the outside world to scan for changes and updates to its database), so an AI is irrelevant.
              The AI downloads the book, and assimilates it into its reference database... a prompt can make it regurgitate the entire book (maybe even without errors). I assume an AI, trapped in a cleanroom (I assume you mean like a Faraday cage, so it can't touch outside the room)... isn't gonna b

              • by drnb ( 2434720 )

                "Someone" implies a human being (y'know... blood and flesh) doing both stages of it, which eliminates the AI (which wants access to the outside world to scan for changes and updates to it's database), so an AI is irrelevant.

                You misunderstand. We are discussing AI applying the same legally established process.

                The AI downloads the book, and assimilates it into its reference database... a prompt can make it regurgitate the entire book (maybe even without errors).

                We are not discussing an LLM just going out to the internet; we are discussing ML models trained for a specific purpose. Different beasties.

            • (And, for the people who think adding a 'u' to your username would make it more 'in line with what you post': it only looks that way on the comments page, or on the article page when you comment; on the Messages page it shows as "d r n b"... just so you know.)

              Our posts are not "dumb" (wonder what their qualifications for "dumb" are?), and we both tend to function more in the 'real' than most.
              And, we don't fall into that hole where we resort to calling each other dumb or idiots.

              A human at the keyboard would

    • by Improv ( 2467 )

      They were built on principle and likely will stand on it.

  • by guygo ( 894298 ) on Saturday April 25, 2026 @12:33PM (#66111616)

    without consequences. Bah, useless drivel.
