China Drafts World's Strictest Rules To End AI-Encouraged Suicide, Violence (arstechnica.com) 34
An anonymous reader quotes a report from Ars Technica: China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China's Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or "other means" to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the "planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics" at a time when companion bot usage is rising globally.
[...] Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register -- the guardian would be notified if suicide or self-harm is discussed. Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as attempts to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned are what are termed "emotional traps," -- chatbots would additionally be prevented from misleading users into making "unreasonable decisions," a translation of the rules indicates.
Perhaps most troubling to AI developers, China's rules would also put an end to building chatbots that "induce addiction and dependence as design goals." [...] AI developers will also likely balk at annual safety tests and audits that China wants to require for any service or products exceeding 1 million registered users or more than 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could mess with AI firms' hopes for global dominance, as China's market is key to promoting companion bots, Business Research Insights reported earlier this month.
How many suicides? (Score:3)
Have there been widespread suicides in China exacerbated by the usage of LLM chatbots?
Re: (Score:3)
Re:How many suicides? (Score:4, Insightful)
We've had the LLMs encourage suicide here, and they have more people. It would be surprising if it hadn't happened there.
Re:How many suicides? (Score:5, Interesting)
There likely have been suicides. The 996 work-culture stupidity alone will see to that. Restricting LLMs may just be to show "something is being done" (and western governments like this one too...), or there may be a real connection, or it is because LLMs usually do mostly report facts, and some of those facts do not look too good for dear leader and his party and politics. My money is on the last one as most likely, as there have been some stories about that happening.
And as soon as you have any kind of monitoring infrastructure in place (in the West pushed just the same way, with lies and FUD), you can use that infrastructure nicely for mass surveillance. Many politicians and authoritarian assholes really loooooove that. Cannot have people have privacy, can we? They may think THINGS! Or even do THINGS!
Re: (Score:2)
The foundational question for this age is better stated in the original Latin, though: "Quis custodiet ipsos custodes?"
Re: (Score:1)
You picked dictatorship over someone making money. Think about that.
Re: (Score:3)
Maybe they are just getting ahead of things. In the US you rely on lawsuits; in the EU it takes a long time to regulate.
Honestly it's a pretty basic requirement for any reasonable AI that it doesn't talk people to death. Ironic how Star Trek thought it would be Kirk talking robots to death, not the other way around.
Re: (Score:2)
Everyone relies on lawsuits. That's part of the rule of law. We write legislation just like you do. The way the courts function (or don't) is different, but in both cases the courts are where how the law actually functions is supposed to be decided.
Re: How many suicides? (Score:1)
Only in people with months or years of neglect, including by parents and teachers. An AI is not to blame, but our society enables abandonment of personal responsibility.
Re: (Score:2)
In China? No, the CCP can create whatever sort of regulation they like. But if not to address actual suicides, it is valid to question why.
Re: (Score:3)
Have there been widespread suicides in China exacerbated by the usage of LLM chatbots?
Doesn't matter - what matters is to establish technology/processes that can then also be used to prevent any form of dissent from the ruling party line. Just like prevention of a few rare crimes is also used in the West as a pretense for culling freedom.
Re: (Score:3)
Re: (Score:2)
The motivation is probably more the part about banning "misinformation" (about Chinese politics).
Re: (Score:2)
Have there been widespread suicides in China exacerbated by the usage of LLM chatbots?
No, it's just that the government wants to be the only one who tells people they should die.
Re: (Score:2)
Life was already pretty bleak for young Chinese before LLMs came along. Officially the young adult unemployment rate is 19%, but in reality it's a lot higher than that. Some estimate it as high as 40%. The CCP doesn't have a lot to offer them. Also, the CCP-caused gender imbalance is demoralizing to their young men, as the odds of starting a family are low.
While China watchers such as China Uncensored are unnecessarily prone to hyperbole in their rhetoric, it's very true that some aspects of Chinese society a
Missed opportunity? (Score:3)
The CCP could use AI to predict those suicides and get a prison surgeon there in time to harvest the organs. Waste not, want not.
Re: (Score:2)
Even better, instruct people in suiciding in a way that will preserve their organs. Get concerned when the AI tells you to fill your bathtub with ice.
That's how marketing works, though. (Score:3)
If AI isn't allowed to emotionally manipulate people, how will the glorious AI-ad-filled future be realized in the world's fastest-growing consumer market?
Sounds like actually good rules (Score:5, Insightful)
Sure, it is China, so they likely do not want their own propaganda countered. But apart from that, these rules do make a lot of sense. The AI pushers have aggressively put all known manipulation techniques into LLM chatbots, and that is not good at all.
Functional Government (Score:2)
Sometimes I think it would be nice to have a functional government. Oh, well.
Re: (Score:2)
You do not want that.
Re: (Score:2)
Hey Trump is only doing what the people elected him to do! If the people want it, therefore it must be constitutional (you must acquit!). America needs a decisive leader who acts quickly on right things, rather than depend on the glacial, democratic process of congressional law making. Such as stopping those unamerican off-shore wind projects. And funneling subsidies back to god-fearing oil barons instead of those godless hippy electric car and solar energy pushers. And to stand up against those epstei
Re: (Score:2)
I miss when the cuckservatives were demanding the release of the Epstein files.
I'm not sad they've recently gotten a bit quieter about certain things they were always hypocritical about, though.
Re: (Score:2)
China is not a dictatorship. That's not to say it isn't authoritarian. A dictatorship is a specific thing and China isn't that thing. Taiwan was a dictatorship until this century, and might just slide back into it given recent political unfoldings on the island. China is a single party authoritarian communist state. They have non-hereditary peaceful transfers of power through elections. Admittedly only party members can vote in those elections, but the elections are still meaningful, and no nation allows al
Will that cover the export versions? (Score:2)
Re: (Score:2)
That is correct. You will be wrong. Report to your nearest CCP police station. Now with convenient locations in all major countries.