
Illinois Bans AI Therapy, Joins Two Other States in Regulating Chatbots (msn.com)

"Illinois last week banned the use of artificial intelligence in mental health therapy," reports the Washington Post, "joining a small group of states regulating the emerging use of AI-powered chatbots for emotional support and advice." Licensed therapists in Illinois are now forbidden from using AI to make treatment decisions or communicate with clients, though they can still use AI for administrative tasks. Companies are also not allowed to offer AI-powered therapy services — or advertise chatbots as therapy tools — without the involvement of a licensed professional.

Nevada passed a similar set of restrictions on AI companies offering therapy services in June, while Utah also tightened regulations for AI use in mental health in May but stopped short of banning the use of AI.

The bans come as experts have raised alarms about the potential dangers of therapy with AI chatbots that haven't been reviewed by regulators for safety and effectiveness. Already, cases have emerged of chatbots engaging in harmful conversations with vulnerable people — and of users revealing personal information to chatbots without realizing their conversations were not private.

Some AI and psychiatry experts said they welcomed legislation to limit the use of an unpredictable technology in a delicate, human-centric field.

  • Again? (Score:5, Informative)

    by registrations_suck ( 1075251 ) on Saturday August 16, 2025 @06:06PM (#65594672)

    This was posted and discussed on 5 August 2025:

    https://slashdot.org/story/25/... [slashdot.org]

  • by hdyoung ( 5182939 ) on Saturday August 16, 2025 @06:21PM (#65594694)
    Until we’re absolutely sure the models won’t suddenly start channeling the spirit of Hitler while engaging with a psychiatrically vulnerable teen, maybe we’re better off slow-rolling their use for therapy.

    Don’t get me wrong, I’m totally in favor of developing AI in medicine and therapy, but the tech is clearly still half-baked. Not a bad idea to slow-roll it for a while.

    It’s a different conversation why states would restrict this but encourage conversion therapy, which is solidly proven to be harmful. But, consistency was never a strong suit for most humans, if we’re being honest.
    • In this kind of situation, it's smart to disallow its use until an evidence-based decision can be made about whether it actually works and whether it performs at the level of human therapists. Any AI used for this purpose should have to go through an approval process, because not all AIs are created equal.

      • Totally agree with you, though I do feel like this application *should* be actively pushed, fairly hard, because of its potential to make on-demand therapy available at almost zero cost. But, yes, definitely do the studies first.

        So, let’s restrict it, but also make sure we don’t choke it to death before we seriously study it. Psychiatric medicine is decades behind where it could have been, because our society played the following game with entire classes of promising psychotropics:

        1. We
        • The answer is very simple. Use a big stick.

          Make the companies owning the servers where the AI model runs fully liable for misconduct by the AI using normal standards. If it runs on your hardware, you're responsible. Make the therapy sessions fully recorded for 7 years and searchable for occurrences of malpractice.

          This will have a strong effect on fly-by-night companies cashing in on the AI bubble. It will also give the AI "engineers" who are entering the medical field some valuable data to fix their model

          • Make the companies owning the servers where the AI model runs fully liable for misconduct by the AI using normal standards.

            How would the companies owning the servers know how the software is being used?

            Make the therapy sessions fully recorded for 7 years and searchable for occurrences of malpractice.

            Bit of a problem with patient privacy there.

    • The biggest problem isn't the flaws in how AI works; it's the lack of accountability. Nobody wants to vet these systems, prove they are functioning as intended, or take responsibility when they fail.
      At least with a professional human, you can hold them responsible if they do something unhinged or unsafe. But right now it is really difficult if not impossible to get an AI fired, sue them for damages, or make sure they never practice again. The corporations that own them are either too big to stop, or so tiny

      • Exactly. If these things are going to be used someone needs to be liable for it not doing what it was sold to do. To me, this is the showstopper for AI. The AI companies are not going to be willing to be liable, and therefore will blame the user. Who is the user in the case of an AI therapist? The person seeking therapy or is it whomever created or trained or deployed the model?
      • by djinn6 ( 1868030 )

        If a therapist uses it, then they are responsible for it behaving poorly.

        This is no different than if they told the patient to read some crazy wacko book that encouraged suicide. It's their responsibility to vet anything they are giving to a patient.

    • Based on my experience with AI, its ability to authoritatively provide verifiably false information is on par with the lack of logic that goes into supporting conversion therapy. I personally believe we are far from being able to trust an AI chatbot therapist, as if the AI is not practically perfect it is unreliable. That said, I am not sure that psychotherapy is really a “Science” to begin with.
  • by gurps_npc ( 621217 ) on Saturday August 16, 2025 @08:32PM (#65594808) Homepage

    It is illegal for anyone to just throw up a 'shingle' and start offering therapy without a license. The penalty is usually just a fine, sometimes a fine for each day you offer your services. In addition, some states require you to have insurance.

    Why would anyone think an AI could legally do it without that AI getting a license and insurance?

    • Why would anyone think an AI could legally do it without that AI getting a license and insurance?

      What would really be an interesting experiment would be seeing if a bot could get licensed. (Allowing it to take licensing tests and so forth.)

      • It would not be an interesting experiment. There are tons of examples of AI systems being repeatedly trained to pass exams. Turns out, training a parroting machine on exams doesn't prove competency in non-exam situations.
        • Turns out, training a parroting machine on exams doesn't prove competency in non-exam situations.

          Oh ... good thing humans can't do that then!

          (I think you may have missed my implied point about why it would be interesting.)

      • What would really be an interesting experiment would be seeing if a bot could get licensed. (Allowing it to take licensing tests and so forth.)

        The referenced statute lists about ten different types of professional therapists. Licensing requirements may vary, but they generally require a master's or doctoral degree [ilga.gov] in a relevant field, so the bot would have some trouble there.

  • Judging from all the ads I hear on the radio, someone has already invented an easy solution for this problem:

    sed 's/therapist/psychic/g' marketing_material.txt

  • by greytree ( 7124971 ) on Sunday August 17, 2025 @04:33AM (#65595154)
    Psychology is a dangerous pseudoscience.

    It must be banned until it is wholly rebased on science instead of on dangerous opinions.
    • Psychology is neither science nor pseudoscience. While it fails to be scientifically rigorous, that doesn't automatically make it pseudoscience.

      Then there is the leap of logic you make. Are we supposed to ban anything that isn't science? Does that apply to movies, religion, grocery shopping, etc? Or just to the things you personally disagree with?

      Now, regulating or banning things that are causing a measurably significant harm to society does make some sense. But psychology doesn't make the cut in my mind.

      • You admit that it lacks scientific integrity. That alone disqualifies it from being a viable service to offer at large.

        At best, it should be contained to university experiments and small study groups, until such a day that the science is solid.
    • Psychology is a dangerous pseudoscience. It must be banned until it is wholly rebased on science instead of on dangerous opinions.

      Your statements are contradictory. First, "Psychology is the scientific study of mind and behavior. [wikipedia.org]" So it is a science, or it's scientific. Psychology definitely uses the scientific principles of hypothesis and testing by experimentation. The contradiction is that you want -- something -- "rebased on science", but what science would you use to "rebase" psychology? Do you want to create a new field consisting of the scientific study of -- psychology?

      It's not clear whether you really mean "psychology" or "

      • Psychology poses as science, but is not science because it does not properly use experiments to test hypotheses.

        Psychologists invent a hypothesis and do one study which does not disprove it, and BANG, it's in the press and it's treated as a legitimate theory, and then they use it on people.

        Psychology is VERY DANGEROUS pseudoscience.
  • My friend next door found a loophole to the "no-AI-therapy" rules. He runs an LLM bot that does Tarot card readings, but first it asks you a bunch of questions about your life. It can then legally say something like: "Oh, the next card is the High Priestess. It looks like you may be in an abusive relationship. Want to talk about it?"

    My $0.02 is that AI-based therapy might be very useful to a segment of the population who can't afford human-provided services. Don't let perfect be the enemy of good.

    • I would not like to try to argue this case in court (though I am not representing this person). They should at least have a disclaimer stating that the Tarot readings are "for entertainment purposes only."

      They're also asking for a late-night visit from some enforcers for Big Tarot. And they really know how to put the hurt on a person. I'm talkin' curses and all that, for real.
