Pinterest Says AI Reduced Reported Self-Harm Content By 88% (venturebeat.com)
Pinterest says it's using machine learning techniques to identify and hide content that displays, rationalizes, or encourages self-injury. The company says it has achieved an 88% reduction in reports of self-harm content by users and that it's now able to remove such content 3 times faster. From a report: Additionally, over 4,600 search terms and phrases related to self-harm have been removed from the platform, Pinterest says, and links to free and confidential support from expert resources are now more prominently displayed to members who search for those keywords. People showing signs of distress now see the resources directly in their boards (i.e., home screens), an approach Pinterest says was developed with guidance from outside emotional health experts at the National Suicide Prevention Lifeline, Vibrant Emotional Health, and Samaritans. Elsewhere, Pinterest this morning broadened the rollout of the emotional well-being interactive practices and exercises it introduced in the U.S. through its iOS app earlier this year.
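The search-intercept behavior described above (blocked terms trigger support resources instead of results) can be sketched roughly as follows. This is a minimal illustration, not Pinterest's actual implementation; the function names, the tiny blocklist standing in for the ~4,600 real terms, and the support message are all hypothetical.

```python
# Hypothetical sketch: if a query matches a blocklist of self-harm
# terms, suppress results and surface support resources instead.
# Everything here is illustrative, not Pinterest's real code.

BLOCKED_TERMS = {"selfharm", "self harm"}  # stand-in for ~4,600 real terms

SUPPORT_MESSAGE = (
    "You're not alone. Free, confidential support is available: "
    "National Suicide Prevention Lifeline, Samaritans."
)

def search_index(query: str) -> list:
    # Placeholder for the real search backend.
    return [f"result for {query!r}"]

def handle_search(query: str) -> dict:
    normalized = query.strip().lower()
    if any(term in normalized for term in BLOCKED_TERMS):
        # Hide results and show expert resources instead.
        return {"results": [], "support": SUPPORT_MESSAGE}
    return {"results": search_index(normalized), "support": None}
```

A blocked query returns no results and a support message; anything else passes through to normal search.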
pffft - useless goody-goodies (Score:2)
people into it will just post somewhere else
No more home surgery (Score:3)
How am I supposed to find tutorials for performing backyard surgery now?
Re: (Score:1)
Seems you're already a master; your self-lobotomy did the job.
All tutorials involving hot glue... (Score:2)
Have been removed. You're welcome.
Pinterest? (Score:4, Insightful)
You mean the site that clogs up my image search results?
Not a surprise (Score:2)
Their user base probably declined by 90% or so.
Great (Score:2, Informative)
Re: (Score:2)
Do you have alternatives to suggest?
I do. Identify the people and then show their images only to people who will say positive things to them. If they're gonna use AI, why not use it for good?
Re: Great (Score:2)
Good idea. Let's make this happen, people.
I wonder if with a large enough corpus of posting some standardized psychometrics can be automatically derived.
Re: (Score:2)
Talk to Cambridge Analytica
Re: (Score:2)
You seem to be opposed to people who complain about other people complaining, yet here you are complaining about people who complain about people who complain.
Apparently irony can be recursive.
Now missing from Pinterest (Score:2)
X-Acto knife safety videos.
Superglue removal techniques.
How to thread a needle without accidentally tattooing yourself, or sewing your fingers together. /s
Screw Pinterest (Score:2)
What about the "genuine" cries for help? (Score:2)
Silencing the troubled? (Score:3)
Re: (Score:2)
Someone posts some cry for help, and then an unthinking moronic AI decides to algorithmically flag it as self-harm-flavoured and so their speech is suppressed automatically. How is that making the world a better place?
The summary states that in such a case, the AI would recommend "links to free and confidential support from expert resources". In what way do you anticipate this support to be inadequate?
Re: (Score:2)
Apparently their "peers" on Pinterest are only encouraging them to harm themselves more. It's likely the AI is targeting "pro-Mia" and "pro-Ana" groups, as well as groups that celebrate forms of self-mutilation.
Story makes me want to kill myself (Score:1)
What about actual self-harm? (Score:2)
Or even just unreported self-harm content?
This reminds me of that rape victim who said the (now formerly) proposed DNS blocking of child abuse content was a bad idea, because the abusers would just continue, shielded by the blocking, while nobody would *see* it anymore. Effectively *protecting* child rapists from being reported, as a result.
Re: (Score:2)
Yep. Exactly like the penalties for promoting Nazism or denying the Holocaust in public in Germany, except, you know, the opposite. But the result is precisely the same: make them invisible so you can deny the extent of the problem, and do nothing about it because that would take work.
What's the false positive rate? (Score:5, Insightful)
Re: (Score:1)
Indeed! YouTube trained AI to detect (illegal) animal combat, but it suppressed robotic combat ("battle-bots") as a side effect.
It's unclear whether they "solved" it by tuning the bots, or by requiring a human inspector to verify take-downs.
In other news, (Score:1)
Success! (Score:2)
We used our algorithm to identify and remove "self-harm" content, after which our algorithm for identifying "self-harm" content told us we'd gotten almost all of it!
Can you lift up that rug? (Score:2)
If it's not public, it's not a problem... (Score:2)
That is so stupid. You will help no one by hiding their cries for help.
You just don't have to look at them any more.
But, let's face it: that is all you wanted anyway.
That 88% figure was no doubt provided by... (Score:2)
AI.