AI

An Illinois Bill Banning AI Therapy Has Been Signed Into Law (mashable.com) 50

An anonymous reader shares a report: In a landmark move, Illinois state lawmakers have passed a bill banning AI from acting as a standalone therapist and placing firm guardrails on how mental health professionals can use AI to support care. Governor JB Pritzker signed the bill into law on Aug. 1.

The legislation, dubbed the Wellness and Oversight for Psychological Resources Act, was introduced by Rep. Bob Morgan and makes one thing clear: only licensed professionals can deliver therapeutic or psychotherapeutic services to another human being. [...] Under the new state law, mental health providers are barred from using AI to independently make therapeutic decisions, interact directly with clients, or create treatment plans -- unless a licensed professional has reviewed and approved it. The law also closes a loophole that allowed unlicensed persons to advertise themselves as "therapists."


Comments Filter:
  • by Chris Mattern ( 191822 ) on Tuesday August 05, 2025 @10:17AM (#65567474)

    Only outlaws will have ELIZA!

  • This law is the modern-day equivalent of the Red Flag Act—designed to protect the horse carriage industry from the terrifying menace of the automobile. Just like taxi unions tried to outlaw Uber, or record labels panicked over radio, Illinois is trying to legislate away progress. Spoiler: it never works.

    • by Junta ( 36770 ) on Tuesday August 05, 2025 @10:52AM (#65567582)

      Just like a random dude on the street cannot just say "I can provide psychotherapy services", it absolutely makes sense to also apply those sorts of guardrails to AI. Current AI is not even vaguely geared toward psychotherapy; it just resembles it closely enough to be pretty dangerous.

      • I mean technically Lucy was doing exactly that [fandom.com] and at only 5 cents. What a deal!

      • Here's the problem with your point, and the point you are missing.

        Most people in today's culture lack interpersonal connection. AI is filling a role that humans should fill -- but don't. In our society... if you need a friend, you pay for it.

        The best you can do with your therapist, clergy, or best friend: about 2 to 4 hours a week. And if it's a therapist, it costs money. And people with mental health issues usually have how much money? Like all the homeless? Vets? The chronically mentally ill?

        But a person without connect

        • So I'll be blunt. Let us say you develop an AI as a therapist that charges 20 bucks a month. The AI ends up recommending the patient commit suicide. Should YOU be charged with murder? It was your AI creation. A human therapist would likely be charged with manslaughter (murder, basically) if they did something like that. Will you take that risk?
          • You're not blunt- you're obtuse.

            I just laid out a problem.... and a possible solution. And you want to limit it because some person commits suicide?

            Are we prosecuting a therapist for causing a patient's suicide? If it was caused by the therapist, how would you know? The patient is dead.

            If you don't want to use AI for this... fine. But are you willing to listen to a friend for 5 hours? Do you really want to hear about a priest making a 14-year-old his lover? Because most people won't. People don't care. He

              • Hmm, let me search. Why yes, yes it is a crime for *anyone*, including a therapist, to encourage someone to commit suicide. Check Section 2.1. https://www.shouselaw.com/ca/d... [shouselaw.com] You just can't be that stupid.
              • I give up- you're not understanding my point. And I've explained it eloquently.

                Have a great day and enjoy a sitcom. I'm going back to my work.

                  • No, you have not. I've explained that it is a stupid idea to unleash emotionally void programs on people with mental health problems, and your response seems to be "it's cheap, why not?" You must be an AI or a shill for one. I'm done. Get back to work, slave.
          • by mysidia ( 191772 )

            The AI ends up recommending the patient commit suicide. Should YOU be charged with murder? It was your AI creation. A human therapist would likely be charged with manslaughter (murder, basically) if they did something like that.

            Would the human therapist actually get charged in that case, though? It is perhaps unlikely the therapist could be charged criminally unless you could prove the therapist undertook an act to advance that crime and aided in carrying it out, making them a cause or an accomplice. G

              • See my previous post: yes, you can be charged, even if you are not a therapist. Imagine what would happen to a therapist.
        • Psychotherapy is about much more than just talking to someone for a while. If that's all it were, then AIs would be much more likely to be able to do the job.

        • by Junta ( 36770 )

          AI doesn't listen, though; it regurgitates.

          There's as much engagement as writing in a journal no one will ever read. A conversation with yourself is every bit as useful in this context as throwing your text at an LLM.

          A conversation seeks another active perspective. An LLM has no perspective, only the ability to dispense a puree of content launching off of whatever prompt fed it. There are applications for this, but psychotherapy is absolutely not one of those, and substituting an echo chamber for actua

      • by mysidia ( 191772 )

        Just like a random dude on the street cannot just say "I can provide psychotherapy services"

        Except that is not what the law does. If the law ONLY prevented you from hanging out a shingle that says "Professional Psychotherapy Services" without a license, that would be cool.

        What this law bans is so extremely broad that Calm.com and other meditation apps could be considered banned, assuming their music selections are provided by AI and not by a Licensed Professional Music Therapist in that state.

        The

    • Re:Job security (Score:4, Insightful)

      by Retired Chemist ( 5039029 ) on Tuesday August 05, 2025 @10:54AM (#65567590)
      It will not stop people from going to AIs with their problems. It should prevent or at least reduce the ability of people to promote this and make money off it. Of course, if we really were serious, we would make the AI companies responsible for the results of the model's advice. When a model suggests suicide, for example, we could charge its owner with a crime (promoting suicide is a crime in many jurisdictions).
      • Of course, if we really were serious, we would make the AI companies responsible for the results of the model's advice. When a model suggests suicide, for example, we could charge its owner with a crime (promoting suicide is a crime in many jurisdictions).

        You're right, but the problem isn't quite that simple.

        If you throw some Scrabble letters into the air and they assemble into the word "suicide", is Scrabble guilty of a crime?
        LLMs obviously aren't that far on the spectrum, but they're also definitely somewhere on that spectrum.
        Since the result of the model is mathematically dependent upon the input to the model -- your input to the model -- it's a bit absurd to assign 100% responsibility to the operator.

        I think the compromise is to have big fucking disclai

        • If I created a Scrabble game that encouraged people to commit suicide, I would very likely go to jail.
          • It's almost like you didn't read a single word that was written. Are you incapable of nuance?

            An LLM is just a math equation. You modify the terms of the equation and turn its output into words.
            If you arrange the letters of Scrabble into the word "Suicide", are they liable?

            This is a complicated problem, and if you can't be bothered to look at it with nuance, then frankly you're too smooth-brained to be trusted with anything.
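
            A minimal sketch of that "math equation" point, assuming the Hugging Face transformers library and the small "gpt2" checkpoint (both purely illustrative choices, not anything from this thread): with sampling disabled, the continuation is a deterministic function of the prompt and the frozen weights.

            from transformers import AutoModelForCausalLM, AutoTokenizer

            # Load a small public checkpoint; "gpt2" is only an example choice here.
            tokenizer = AutoTokenizer.from_pretrained("gpt2")
            model = AutoModelForCausalLM.from_pretrained("gpt2")

            prompt = "I've been feeling really alone lately."
            inputs = tokenizer(prompt, return_tensors="pt")

            # Greedy decoding (do_sample=False): no randomness, so the continuation
            # is fully determined by the prompt plus the frozen model weights.
            output_ids = model.generate(**inputs, do_sample=False, max_new_tokens=20)
            print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

            Run it twice with the same prompt and you get the same text; change a word in the prompt and you may get a very different continuation, which is the sense in which the user's input is part of the "equation."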
            • by Junta ( 36770 )

              The issue is the expectations.

              People expect these things to be thinking entities, providing an independent perspective on whatever you submit to it. A great deal of care must be taken to make it clear and culturally understood that these things are like very very fancy parrots more than an independent human. Which is an uphill battle because we want to anthropomorphize *anything* at the slightest hint, and a puree of training material blended with your prompt and anything stuffed into the prompt (context/RAG) in rather convincing natural language is just really likely to make people think it's more than it is.

              • The issue is the expectations.

                People expect these things to be thinking entities, providing an independent perspective on whatever you submit to it. A great deal of care must be taken to make it clear and culturally understood that these things are like very very fancy parrots more than an independent human. Which is an uphill battle because we want to anthropomorphize *anything* at the slightest hint, and a puree of training material blended with your prompt and anything stuffed into the prompt (context/RAG) in rather convincing natural language is just really likely to make people think it's more than it is.

                I agree.

                The Scrabble analogy is not that great, as anyone can plainly see they are just letters, but to understand the resemblance of an LLM to that, you have to go beyond how it *looks* and dig into the nuance of the workings of it, and even then some people have fallen into the trap of "well maybe humanity is nothing more than this anyway".

                That's actually the point of the analogy.
                Technically, our Scrabble randomization and LLM output are on the same spectrum of output.
                But perceptually, there's a difference.
                Where do we draw the line in culpability there?

                No LLM ever asked you to kill yourself unprompted.
                How do we protect against, basically, the human desire to anthropomorphize?

                I don't even think it's a trap to say that humanity is "nothing more than this".
                Sure, our circuitry is a fucking billion times more advanced, and has evolutio

  • I'm sure the healthcare lobby didn't have anything to do with this.

  • This should be a federal law... If/when they do allow it, the AI company should be legally liable for any issues that arise.
  • “Sometimes the only way to win is not to play.”

  • Therapy Session v 999.999.999.999.99: “I Have No Candy, and I Must Scream”

    Patient: I’ve been feeling really alone lately. Like I’m surrounded by people, but no one actually sees me.

    AI Wonka: Everything in this room is edible. Including your mind.

    Patient: I don't think you're really listening. I need someone who can help me feel human again.

    AI Wonka: Oh, don’t worry, Charlie. This isn’t a factory. Factories have exits. This is eternity. And I am the Everlasting

"Elvis is my copilot." -- Cal Keegan

Working...