AI | The Courts | The Almighty Buck

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com)

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," she said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her client's testimony, citing C.AI terms that suggested the company's liability was limited to either $100 or the amount Doe's son paid for the service, whichever was greater.


  • by karlandtanya ( 601084 ) on Wednesday September 17, 2025 @08:55PM (#65667338)

    "C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms"
    Seems like they just admitted there is no contract.

    • they should of done an bigger payout if they did not want to have that be on rerecord

      • by XanC ( 644172 )

        should of done?

        • should of done?

          It's actually a grammatical error with a long history. Leslie Charteris, author of "The Saint" novels, used that construction starting in the 1920s or '30s. He was articulate and smart, and clearly not a slouch when it came to writing in English, so I don't know whether it was an intentional error.

          For people who aren't native English speakers, it's an easy mistake to make; the contraction "should've", when spoken, sounds like "should of".

    • by evanh ( 627108 ) on Wednesday September 17, 2025 @09:04PM (#65667354)

      Age isn't the problem. The company is clearly predatory; greed is the root cause.

      • by schwit1 ( 797399 ) on Wednesday September 17, 2025 @09:12PM (#65667370)

        How is it predatory? Age is the problem.

        People under 18 should not be permitted to enter into contracts without parental permission.

        • by geekmux ( 1040042 ) on Wednesday September 17, 2025 @10:14PM (#65667464)

          How is it predatory? Age is the problem.

          People under 18 should not be permitted to enter into contracts without parental permission.

          If a EULA is technically a contract, every damn thing done online today requires one.

          Perhaps the obvious answer is to make the internet for adults only. We’ve already proven how fucked up we can make the kids with social media. You really want to play the wait-and-see game regarding what damage AI can and will do?

          Fuck that.

          • by serviscope_minor ( 664417 ) on Thursday September 18, 2025 @04:56AM (#65667816) Journal

            Perhaps the obvious answer is to make the internet for adults only.

            How's that obvious, though? Everything requires access to the internet now. So either you have age verification, which is a massive invasion of privacy, or you need parents to supervise 17.9999-year-olds basically all the time on the internet, which they need even for schoolwork. For any family where both parents work (i.e. almost all of them now), it's functionally impossible to supervise kids all the time when they are at home (never mind that being monitored 24/7 is bad for the kids). And what can you do if they are ever out of your sight?

            Companies are abusing contract law, abusing arbitration, producing dubious products, and trying to push the blame and the work onto everyone else.

        • >"People under 18 should not be permitted to enter into contracts without parental permission."

          Parents and their agents should not give unrestricted internet-connected devices to their children, or allow them access to such devices, without direct supervision.

          It isn't about not allowing a contract with minors, which would encumber every site out there to force every user (adult) to "ID" themselves and then be tracked, stripping everyone of their privacy. The parent(s) gave a minor a device on which he co

        • In the UK that is the case. Under-18s are not able to enter a contract, and if they do, they can void it at any time.

          However, there are certain exemptions, such as those regarding the provision of food and clothing, and other things including contracts to provide a service.

          Perhaps it's similar here; as the chatbot is a service, the contract could be binding.

        • Age is the problem.

          People under 18 should not be permitted to enter into contracts without parental permission.

          Contracts involving minors are legal, but they are special.

          Minors may enter into a contract, but they are not bound by the contract. The minor can back out and void the contract at will. The adult is bound to honor the contract terms (unless the minor has voided the contract, obviously). There are limited exceptions (e.g., ordering food in a restaurant: you are expected to pay for it even if you are a minor). A parent or guardian may provide consent for a binding contract (essentially taking on respons

    • by dbialac ( 320955 )
      Sounds like he was developing signs of what is conventionally known as schizophrenia. Based on the restrictions they placed on him, this probably wasn't the chatbot so much as mom and dad at work throughout his life. The age at which this happened is in line with this as well.
  • by Anonymous Coward on Wednesday September 17, 2025 @08:58PM (#65667340)

    ...then I'm not surprised this is what you get.

    • by Anonymous Coward

      #MechaHITLER

    • by PPH ( 736903 ) on Wednesday September 17, 2025 @09:41PM (#65667412)

      So true. [wikipedia.org]

      • by Anonymous Coward on Wednesday September 17, 2025 @09:48PM (#65667424)

        The thing is, anyone who knew how Markov bots work could have told them what would happen, and many did. They ignored every warning, and after they were forced to pull it down the first time, they claimed they could "fix" the problem... which many people told them wouldn't work.

        Business guys always seem to think they have the power to overrule computer science, as if bullying computer scientists were the key to understanding things. There's no way they could have built the strategies they did if they actually comprehended the technology; they'd have understood the risks too well.

        • by PPH ( 736903 )

          anyone who knew how markov bots work

          There's nothing wrong with that technology, per se. It's the training corpus that dictates success/failure. Was the plan to blindly crawl the cesspool that is the Internet for that raw data a bad business decision? Most decidedly so.

          It turns out that the most expensive part of AI is training, both in terms of resources (power and equipment) and the labor involved in validating the inputs. I did some fiddling around with semantic nets a few decades ago. But I was restricted to a library of engineering

    • ...then I'm not surprised this is what you get.

      Yeah, the idea of "training" anything or anyone on YouTube comments, Reddit comments, Quora, and social media in general, then expecting anything other than a slightly better-written version of "Chug bleach and save your parents the trouble," is seriously deluded. (The sketch below makes that point concrete.)
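To make the garbage-in/garbage-out point in this thread concrete, here is a minimal sketch of a Markov-style text bot in Python. It is purely illustrative: neither Tay nor Character.AI actually works this way (modern chatbots are large language models, not Markov chains), and the one-line toxic corpus is just the example quoted in the comment above. What it shows is that such a bot can only ever emit word transitions it has already seen in its training data.

    import random
    from collections import defaultdict

    def train(words):
        # Map each word to the list of words observed immediately after it.
        chain = defaultdict(list)
        for cur, nxt in zip(words, words[1:]):
            chain[cur].append(nxt)
        return chain

    def generate(chain, start, length=20):
        # Walk the chain; every emitted transition was present in the corpus.
        word, out = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break  # dead end: the corpus never continued past this word
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    # Train on a toxic corpus and the bot parrots the toxicity back verbatim.
    corpus = "chug bleach and save your parents the trouble".split()
    print(generate(train(corpus), "chug"))

No post-hoc filter changes what such a model learned from; the only real lever is the corpus itself, which is exactly the commenters' complaint about blindly crawling the Internet for raw data.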

  • It's pretty clear the parent should have known better than to expect the Internet to do her parenting for her, but she feels entitled to a payday. The kid clearly has some mental health problems; she should bear the responsibility for not protecting him.
  • by dlarge6510 ( 10394451 ) on Thursday September 18, 2025 @07:14AM (#65667990)

    The experiment is over.

    > It wasn't until her son attacked her for taking away his phone

    DO NOT LET KIDS HAVE ONE.

    They should be using the internet in full view of the parents, for homework research only. It is an educational tool, not a pre-teen's playground anymore.

    When we all did such things as 90s teens, the internet was very different from today's: it was new, innocent (mostly), and pretty geeky, so the only people really using it were us geeks and hackers and academics. Most "normal" people only used it for email, to replace the home fax machines they never really needed but got anyway for the novelty. Later, kids were using instant messaging, and that's when it all slowly started. Those kids have kids now, and the parents never saw the real issue that is today's net, remembering the days of AOL and Yahoo Messenger as the worst it can get. Now social media and "A.I" have ballooned, and the parents literally have no idea what it is like to be a kid online now.

    A prime example of this is a UK documentary series following an experiment in a school where a class of kids were to do without phones/tablets etc. for 4 weeks; the teacher and parents joined in as well. Two of the parents acted as presenters (it was their idea, after all), and they decided to get brand new phones and sign up to TikTok to see what would happen if the kids did it. We can all guess their sheer terror and shock on camera as their brand new TikTok accounts, posing as a boy and a girl the same ages as their own boy and girl, were within MINUTES being pushed content of explicit sex acts and bodily harm, as well as fashionistas telling the virtual girl to learn to make herself throw up after dinner to be thin and sexy.

    After 4 weeks, with some kids having actual withdrawal symptoms that the show's doctor confirmed were what he would expect (he told the adults that they themselves showed signs of substance-style addiction regarding their phones), the kids all noticed significant changes. The most common was that they felt much more connected and satisfied in their friendships at school, even when those friends were not part of the experiment and still had their phones. Several of the kids reported significantly reduced anxiety, especially one girl who saw her social media as a "game" in which she had to stay up late at night beating her mates' stats with shares and likes. She admitted she had panic attacks, poor sleep, and constant anxiety before the experiment; although she was happy to get her phone back after the 4 weeks, she said she was going to severely restrict her use of it, as she finally felt more normal.

    One boy was a terror in class, always getting into trouble, bullying, etc. During just those 4 weeks his parents and teachers saw a massive shift in him: he was listening, was more agreeable, and was even helpful and polite to others in class.

    Another couple of boys, when given their phones back, both said they didn't think they needed them anymore.

    This experiment has run for 15-20 years. No kid before 2000 or so had such immediate and unfiltered access, with social pressures from people right next to them plus people in countries halfway across the world with no shared culture and different politics... We have all seen the consequences: kids found dead at home, having died live on camera performing a social media challenge.

    The next generation can be saved. Until they are 16 they should clearly be on what is basically a curated, parent-controlled and monitored intranet. Some say 16 is too young; even some of the kids in that school experiment said so.

  • If she had properly monitored her child AS SHE SHOULD HAVE DONE, the kid would be fine.

    If you aren't willing to monitor your child's activities on the internet, you have two options:

    1. Turn off your internet and give the kid a phone with no data plan.

    2. Surrender the kid to the state.

    If you don't want to do the job, DON'T FUCKING BREED.

    • by uncqual ( 836337 )

      And that goes double if your child has problems like this child did. It is the parent's job to accommodate such problems. The world can't be dumbed down to protect the most disabled members of society, children or otherwise.

      Although I suspect the "kid would have been fine" part wouldn't hold in the long term no matter what the mother did, as he's quite likely severely disabled and would eventually have needed a conservator for life - i.e., option 2.

      • True; "relatively fine." At least not damaged by the internet and by the failure of parents to do what they're supposed to do.

        It's that fucking "ME, TOO" shit all over again, except now AI is the whipping boy.

"Truth never comes into the world but like a bastard, to the ignominy of him that brought her birth." -- Milton

Working...