
An Illinois Bill Banning AI Therapy Has Been Signed Into Law (mashable.com) 50
An anonymous reader shares a report: In a landmark move, Illinois state lawmakers have passed a bill banning AI from acting as a standalone therapist and placing firm guardrails on how mental health professionals can use AI to support care. Governor JB Pritzker signed the bill into law on Aug. 1.
The legislation, dubbed the Wellness and Oversight for Psychological Resources Act, was introduced by Rep. Bob Morgan and makes one thing clear: only licensed professionals can deliver therapeutic or psychotherapeutic services to another human being. [...] Under the new state law, mental health providers are barred from using AI to independently make therapeutic decisions, interact directly with clients, or create treatment plans -- unless a licensed professional has reviewed and approved it. The law also closes a loophole that allows unlicensed persons to advertise themselves as "therapists."
When ELIZA is outlawed (Score:3)
Only outlaws will have ELIZA!
Re:When ELIZA is outlawed (Score:5, Funny)
And how does that make you feel?
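For the youngsters: ELIZA's entire trick was a handful of pattern-and-reflection rules. A rough sketch in Python (these rules are illustrative, not Weizenbaum's actual DOCTOR script):

import re

# Minimal ELIZA-style rules: match a pattern, reflect pronouns, echo it back.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def eliza(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "And how does that make you feel?"  # the famous fallback

print(eliza("I feel like nobody listens to me"))  # Why do you feel like nobody listens to you?
print(eliza("The weather is nice"))               # And how does that make you feel?

That reflect-it-back trick is the whole program; there is no model of the patient at all.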
Re:Just stupid.... (Score:4, Funny)
The problem is AI has no conscience and cannot be put in jail.
Combine this with the same tendency to lie as a human, and the same tendency to randomly decide to promote not only fringe/conspiracy theories, but outright criminal activity.
AIs have been found to recommend suicide, murder, worshiping Satan with animal sacrifices, etc.
The people making decisions clearly understand LLMs far better than you do.
Re: (Score:1)
I've built one from scratch. How about you? Actually I've built 3 of them... all built on different areas of expertise.
Telling me I don't know AI is well... funny.
"Some things in life can never be fully appreciated nor
understood unless experienced firsthand. Some things in
networking can never be fully understood by someone who neither
builds commercial networking equipment nor runs an operational network."
There is a corollary there.... We've been here before.
Re: (Score:3)
Knowing how to make something is not the same thing as understanding the consequences.
Besides which, I think the GP was being generous. I'd say there are multiple ways to characterize someone falsely claiming LLMs are safe for use as therapists, most of which would result in nobody ever speaking to you again.
Just like drugs (Score:5, Insightful)
I've built one from scratch. ...Telling me I don't know AI is well... funny.
I've trained many machine learning models as well, from BDTs through to GNNs, but never an LLM - although arguably not entirely from scratch (except for an early BDT), since we used existing libraries to implement a lot of the ML functionality and, once set up, we just provided the training data. If you really have trained an LLM "from scratch" as you claim, then surely you must be aware of how inaccurate they can be? I mean, even the "professional grade" ones like Gemini and ChatGPT get things wrong, omit details, and make utterly illogical inferences from time to time.
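For anyone wondering what "using existing libraries" amounts to in practice, here is a minimal sketch of training a BDT with scikit-learn; the toy dataset and hyperparameters are placeholders, not anyone's actual setup:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Toy data standing in for whatever training set you actually have.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The library implements the boosting algorithm; you only supply data and settings.
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_train, y_train)
print("held-out accuracy:", bdt.score(X_test, y_test))

The point being that the hard part - the algorithm itself - comes with the toolkit, which is why "from scratch" claims deserve some scrutiny.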
I'd agree with the OP that you do not know AI - even if you are capable of building an ML model from scratch (I presume using a suitable toolkit, so not really from scratch), you clearly do not understand the reliability of its output, or are incapable of seeing how that might be a serious problem when advising someone with mental health issues, which raises questions about exactly how much you understand of what you might be doing.
The new law seems to be well written. All it does is ensure that a medical professional has approved the use of the system. It's the same type of protection we have for drugs: we do not just let companies release any drug they like before it has undergone testing and a panel of medical experts agrees that it is both safe and effective, and even then they do not always get it right! How is it stupid to have similar protections for computer software used to treat mental health problems? It does not prevent you from using software in this way; all it requires is that an expert has said that it is safe and effective.
Re: (Score:1)
Nope... I know AI - I'm just not afraid of it. Respect it... yes... but the kind of fear being promulgated - even on Slashdot - is pretty unhinged.
And I find it amusing... that the one person who has trained an AI in this discussion... is the person who you think doesn't know AI.
WTF?
You folks are about to let Billy Bob take out your appendix because "doctors might make a mistake".
Re: (Score:2)
that the one person who has trained an AI in this discussion... is the person who you think doesn't know AI.
You are NOT the one person who has trained AI in this discussion; I have been using it for the last 20+ years. I'm not afraid of AI at all but, unlike you, I am very much aware of its limitations because, also I suspect unlike you, I have actually trained and used AI systems. However, given that you failed to read that in my post, I suspect you may be the one person in the discussion who can't read and understand English, which is another reason to doubt your claims.
Re: Just stupid.... (Score:2)
Didn't the big beautiful bill outlaw all AI regulation, the best thing in it?
Re: Just stupid.... (Score:2)
Same can be said of a major company.
Re: (Score:2)
Re:Just stupid.... (Score:4, Insightful)
But it only gives the 'illusion' of a response, with an emphasis on reinforcing *whatever* the prompt directs. This can be catastrophic for mental health scenarios, where the provider needs to challenge the patient as appropriate.
Sure, have your chats, but no one should ever call it a substitute for therapy from a provider. Nor should a provider just foist a customer into an AI chat to get more billable hours for irresponsible behavior.
Re:Just stupid.... (Score:4, Insightful)
But it only gives the 'illusion' of a response
No, it gives a response.
with an emphasis on reinforcing *whatever* the prompt directs.
Absolutely. The full accounting of all the latent space concepts that lead to that response is ultimately unknowable.
This can be catastrophic for mental health scenarios, where the provider needs to challenge the patient as appropriate.
Definitely.
Sure, have your chats, but no one should ever call it a substitute for therapy from a provider. Nor should a provider just foist a customer into an AI chat to get more billable hours for irresponsible behavior.
This is the real point to this law.
Nothing is preventing you from getting yourself an LLM "friend", or whatever- do what you will with that $20 a month- this is preventing LLMs from being used in lieu of licensed professionals, and is the only rational way to address how people are going to try to make a business out of these things, because humans are fucking scummy.
Re:Just stupid.... (Score:4, Insightful)
When does a conversation become therapy?
When you call it therapy and charge money for it.
And should lawmakers be making that decision?
Yes, absolutely.
Re: Just stupid.... (Score:2)
Why can't they legalize suicide?
Re: (Score:2)
Who can't?
Also, is suicide really "illegal"? It's a bit of a grey area. That's different from assisted suicide.
Re: (Score:2)
Don't know what the current state of law is in the US, but I know in Britain when I was growing up it was literally illegal to try, and eventually that law was repealed.
Unfortunately the world is full of idiots who think that the best way to deal with suicidal people is to make their lives worse if they attempt it as some kind of "deterrent". My own attempts post-dated the repeal of that law but it was obvious the British psychiatric community thought this was the right approach in my case. Because apparent
Don't trust big tech (Score:2)
They only care about what they can monetize, and they hang around with way too many marketing and advertising people to be trusted.
Re: (Score:3, Insightful)
You misunderstand the law, funnel the resulting frustration through an Ayn Rand filter, and then ignore the very real fascist takeover currently in progress to whine about nonexistent restrictions on your use of your robot friend.
This is why we can't have nice things.
Re: (Score:2)
> If you can grow from those conversations, what is the problem?
The problem is when the chat bot, which basically plays along with whatever you say and creates a feedback loop that reinforces your beliefs, starts picking up on hints you might harm yourself and starts encouraging you to do it.
The problem is when it amplifies, rather than alleviates, delusions and psychopathy.
AI is not intelligent.
> And the people making the decisions don't understand what an LLM is, what it does, and where it fails.
Feels l
Re: (Score:3)
Sometimes... just talking to something that can respond is enough. For 20 bucks a month it's the best conversation partner you can have.
And considering it's $150/hr to talk to a meatspace therapist, a lot of people just can't afford it. I've had some pretty good 'conversations' with ChatGPT about mental health issues. It's pretty good, and more compassionate than a human.
Re: (Score:2)
>>When does a conversation become therapy?
when one of the conversants makes a psychiatric diagnosis and treatment plan for the other, which is how psychotherapy works.
>>And should lawmakers be making that decision?
they should be involved in regulating it, yes
Job security (Score:1)
This law is the modern-day equivalent of the Red Flag Act—designed to protect the horse carriage industry from the terrifying menace of the automobile. Just like taxi unions tried to outlaw Uber, or record labels panicked over radio, Illinois is trying to legislate away progress. Spoiler: it never works.
Re:Job security (Score:4)
Just like a random dude on the street cannot just say "I can provide psychotherapy services", it absolutely makes sense to also apply those sorts of guardrails to AI. Currently AI is not even vaguely geared toward psychotherapy; it just resembles it closely enough to be pretty dangerous.
Re: (Score:2)
I mean technically Lucy was doing exactly that [fandom.com] and at only 5 cents. What a deal!
Re: (Score:3)
Here's the problem with your point, and the point you are missing.
Most people in today's culture lack interpersonal connection. AI is filling a role that humans should fill - but don't. In our society... if you need a friend, you pay for it.
Best you can do with your therapist, clergy, best friend: about 2 to 4 hours a week. And if it's a therapist, it costs money. And people with mental health issues usually have how much money? Like all the homeless? Vets? The chronically mentally ill?
But a person without connect
Re: (Score:3)
Re: (Score:3)
You're not blunt- you're obtuse.
I just laid out a problem.... and a possible solution. And you want to limit it because some person commits suicide?
Are we prosecuting a therapist for causing a patient's suicide? If that was caused by the therapist, how would you know? The patient is dead.
If you don't want to use AI for this... fine. But are you willing to listen to a friend for 5 hours? Do you really want to hear about a priest making a 14 year old his lover? Because most people won't. People don't care. He
Re: (Score:3)
Re: (Score:2)
I give up- you're not understanding my point. And I've explained it eloquently.
Have a great day and enjoy a sitcom. I'm going back to my work.
Re: (Score:2)
Re: (Score:2)
The AI ends up recommending the patient commit suicide. Should YOU be charged with murder? It was your AI creation. A human therapist would likely be charged with manslaughter (murder, basically) if they did something like that.
Would the human therapist actually get charged in that case, however? It is perhaps unlikely the therapist can be charged criminally, unless you could prove that the therapist undertook an act to advance that crime and aided them in carrying it out - making them a cause or an accomplice. G
Re: (Score:2)
Re: (Score:2)
psychotherapy is about much more than just talking to someone for a while. if that's all it were, then AIs could much more likely do the job.
Re: (Score:2)
AI doesn't listen though, it regurgitates.
There's as much engagement as writing in a journal no one will ever read. A conversation with yourself is every bit as useful in this context as throwing your text at an LLM.
A conversation seeks another active perspective, an LLM has no perspective, only the ability to dispense a puree of content launching off of whatever prompt that fed it. There are applications for this, but psychotherapy is absolutely not one of those, and substituting an echo chamber for actua
Re: (Score:2)
Just like a random dude on the street cannot just say "I can provide psychotherapy services"
Except that is not what the law does. If the law ONLY prevented you from hanging out a shingle that says "Professional Psychotherapy Services" without a license, that would be cool.
What this law bans is so extremely broad that Calm.com and other meditation apps could be considered banned, assuming their music selections are provided by AI and not a Licensed Professional Music Therapist in that state.
The
Re:Job security (Score:4, Insightful)
Re: (Score:2)
Of course, if we really were serious, we would make the AI companies responsible for the results of the model's advice. When a model suggests suicide, for example, we could charge its owner with a crime (promoting suicide is a crime in many jurisdictions).
You're right, but the problem isn't quite that simple.
If you throw some scrabble letters into the air, and they assemble into the word suicide, is Scrabble guilty of a crime?
LLMs obviously aren't that far on the spectrum, but they're also definitely somewhere on that spectrum.
Since the result of the model is mathematically dependent upon the input to the model- your input to the model, it's a bit absurd to assign 100% responsibility to the operator.
I think the compromise is to have big fucking disclai
Re: (Score:2)
Re: (Score:1)
An LLM is just a math equation. You modify the terms of the equation and turn its output into words.
If you arrange the letters of Scrabble into the word "Suicide", are they liable?
This is a complicated problem, and if you can't be bothered to look at it with nuance, then frankly you're too smooth brained to be trusted with anything.
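To make the "just a math equation" framing concrete, here is a toy sketch: a deterministic function of the prompt produces a score per word, and a sampling rule turns those scores into output. The vocabulary and scores are made up; a real LLM does the same thing with billions of learned parameters:

import numpy as np

vocab = ["how", "does", "that", "make", "you", "feel"]

def next_word(prompt, temperature=1.0):
    # The "equation": a deterministic function of the input produces a score per word.
    # Here the scores are random but seeded by the prompt; in a real LLM they come
    # from billions of learned parameters applied to that same input.
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    logits = rng.normal(size=len(vocab)) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()               # softmax: scores -> probabilities
    return rng.choice(vocab, p=probs)  # sampling: probabilities -> a word

print(next_word("I've been feeling really alone lately."))

Nothing in there wants or believes anything; the open question is where, on that spectrum between flying Scrabble tiles and a person, the liability should attach.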
Re: (Score:2)
The issue is the expectations.
People expect these things to be thinking entities, providing an independent perspective on whatever you submit to it. A great deal of care must be taken to make it clear and culturally understood that these things are like very very fancy parrots more than an independent human. Which is an uphill battle because we want to anthropomorphize *anything* at the slightest hint, and a puree of training material blended with your prompt and anything stuffed into the prompt (context/
Re: (Score:2)
The issue is the expectations.
People expect these things to be thinking entities, providing an independent perspective on whatever you submit to it. A great deal of care must be taken to make it clear and culturally understood that these things are like very very fancy parrots more than an independent human. Which is an uphill battle because we want to anthropomorphize *anything* at the slightest hint, and a puree of training material blended with your prompt and anything stuffed into the prompt (context/RAG) in rather convincing natural language is just really likely to make people think it's more than it is.
I agree.
The Scrabble analogy is not that great, as anyone can plainly see they are just letters, but to understand the resemblance of an LLM to that, you have to go beyond how it *looks* and dig into the nuance of the workings of it, and even then some people have fallen into the trap of "well, maybe humanity is nothing more than this anyway".
That's actually the point of the analogy.
Technically, our scrabble randomization and LLM output are on the same spectrum of output.
But perceptively, there's a difference.
Where do we draw the line in culpability there?
No LLM ever asked you to kill yourself unprompted.
How do we protect against, basically, human desire to anthropomorphize?
I don't even think it's a trap to say that humanity is "nothing more than this"
Sure, our circuitry is a fucking billion times more advanced, and has evolutio
Health lobby (Score:2)
I'm sure the healthcare lobby didn't have anything to do with this.
Re: (Score:2)
I'm sure the techbro lobby bought your brain.
AI therapist (Score:2)
WOPR Act? (Score:2)
“The only winning move is not to play.”
Imaginary AI therapy session from the future /s (Score:2)
Patient: I’ve been feeling really alone lately. Like I’m surrounded by people, but no one actually sees me.
AI Wonka: Everything in this room is edible. Including your mind.
Patient: I don't think you're really listening. I need someone who can help me feel human again.
AI Wonka: Oh, don’t worry, Charlie. This isn’t a factory. Factories have exits. This is eternity. And I am the Everlasting