Meta Loses Trial After Arguing Child Exploitation Was 'Inevitable' (arstechnica.com) 45
Meta lost a child safety trial in New Mexico after a court found that its platforms failed to adequately protect children from exploitation and misled parents about app safety. According to Ars Technica, the jury on Tuesday "deliberated for only one day before agreeing that Meta should pay $375 million in civil damages..." While the jury declined to impose the maximum penalty New Mexico sought, which could have cost the company $2.2 billion, Meta may still face additional financial penalties and could be forced to make changes to its apps. From the report: The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raul Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez's office then conducted an undercover investigation codenamed "Operation MetaPhile," in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were "simply inundated with images and targeted solicitations" from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta's social networks to prey on children. At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that "harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company's platforms due to their vast user bases," The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico's AG successfully argued.
Perhaps most troubling to the jury, law enforcement and the National Center for Missing and Exploited Children also testified that Meta's reporting of crimes against children on its apps -- including child sexual abuse materials (CSAM) -- was "deficient," The Guardian reported. Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta "generated high volumes of 'junk' reports by overly relying on AI to moderate its platforms." This made its reporting "useless" and "meant crimes could not be investigated," The Guardian reported.
Celebrating the win as a "historic victory," Torrez told CNBC that families had previously paid the price for "Meta's choice to put profits over kids' safety." "Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew," Torrez said. "Today the jury joined families, educators, and child safety experts in saying enough is enough." Meta said the company plans to appeal the verdict. "We respectfully disagree with the verdict and will appeal," Meta's spokesperson said. "We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."
Exploitation of children is inevitable??? (Score:5, Funny)
Re: (Score:1, Redundant)
Re: (Score:2)
I am now more encouraged by the fact that any service/product is not inherently constitutional just because the aforementioned self-labeled "industrialists" providing said service say, in their defence, that people are "too stupid" to reject it.
Re:Exploitation of children is inevitable??? (Score:4, Insightful)
The difference is that if they really did their best to prevent it and someone slipped through the cracks anyway they could honestly say that they did their best.
They didn't do their best, though. They just didn't care. Or worse, they deemed it not worth the expense to even try to protect kids. That's different.
Re: (Score:2)
or even worse... it's a feature, not a bug
Re: (Score:2)
It is legitimate for any service that constitutes a "common carrier" to be free of consequences for what it carries. But Meta does not claim to be a "common carrier", and that changes the nature of the playing field substantially. As soon as a service can inspect messages and moderate them, it is no longer eligible to claim that it is not responsible for what it carries.
Your counter-argument holds some merit, but runs into two problems.
First, society deems any service that monitors to be liable. That may well be u
Re: Exploitation of children is inevitable??? (Score:2)
Uhhh... welcome to 1996?
https://www.law.cornell.edu/us... [cornell.edu]
(c) Protection for "Good Samaritan" blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of -
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
Re: (Score:2)
Darn, why didn't Epstein's lawyers use that excuse?
He did. They long argued that exploitation was inevitable, even constitutional, and that it was obvious a person's influence in this civilization was chained to how willing you were to exploit Epstein-like situations. I'm just going by the following: the names dropped as holding the baton tended to be of a certain financial security, while the people receiving said baton were below that ceiling.
Re: (Score:2)
and completely off topic.
but does this website have the right to monitor your browsing and remove threads they do not want you to see?
just asking?
Re: (Score:2)
and completely off topic.
but does this website have the right to monitor your browsing and remove threads they do not want you to see?
just asking?
and again not the topic of the article.
but can they deny others the ability to moderate one comment as opposed to another?
just asking again.
Re: (Score:2)
Re: (Score:2)
by "others" I mean "anyone"'s ability to moderate and I guess see, to clarify the references in both posts. and I did not mention that any flagging action by the viewer or viewers was initiated to cause this assuming no action was requested. but thank you for your reply.
Re: (Score:2)
\o/ (Score:1)
Wow, if that's the best they can think of to surface to others (on trial no less), one can only wonder what lurks in the deep dark depths of their secret intentions.
Re: (Score:1)
I forgot to say that clearly the answer is encryption back-doors for law enforcement. Oh wait.
Re: (Score:2)
Meta? (Score:5, Informative)
Re: (Score:2)
I've seen this one before (Score:1)
Re: I've seen this one before (Score:2)
"First they came for the amoral megacorporations run by lizard people!"
Re: (Score:2)
First they came for the amoral megacorporations run by lizard people!
Are you claiming that Zuck is a person (even if of the lizard variety)? I do not buy it.
Re: (Score:2)
Re: (Score:2)
What constitutional rights of Meta were violated?
Re: (Score:3)
Re: (Score:2)
What constitutional rights of Meta were violated?
I would like to know also?
Re: I've seen this one before (Score:1)
The right to unaccountable profit.
Accountability is tantamount to socialism. A few potential child penetrations is a small price to pay for FREEDOM OF CHOICE. (besides they were probably crisis actors planted by Soros.)
Re: (Score:1)
Re: (Score:2)
You sound like a poorly written perl script.
Re: (Score:1)
family (Score:1)
Re: (Score:2)
You know, 18 is a teenager AND an adult who could sign up for e-harmony.
priorities (Score:2)
"harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company's platforms due to their vast user bases"
Zuck and his lawyer are admitting that the very nature of their business is detrimental to society. What is the upside? They make hundreds of billions of dollars while firing half of their staff because their agentic AI tool is supposed to be good enough to replace some 40k employees. It seems like working in any corporation there's always a big part of the
AI moderation... what are the alternatives? (Score:2)
Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta "generated high volumes of 'junk' reports by overly relying on AI to moderate its platforms." This made its reporting "useless" and "meant crimes could not be investigated," The Guardian reported.
What, exactly, do they think the alternatives are?
Facebook has over 3 billion users. If they output an average of twenty artifacts (posts, replies, direct messages, or images/videos) per day, that's 60 billion outputs. If 1% of those are videos that are an average of three minutes long, that's 1.8 billion minutes of video, and if the other 99% take thirty seconds to moderate, that's another 29.7 billion minutes, for a total of 31.5 billion minutes per day to moderate.
That's 65.6 million workdays of conten
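A quick back-of-the-envelope sketch of that math in Python (the user counts, item mix, and per-item review times are the hypotheticals above; the 8-hour workday is my own assumption):

# Back-of-the-envelope moderation workload, using the hypothetical figures above.
users = 3_000_000_000               # ~3 billion Facebook users
items_per_user_per_day = 20         # posts, replies, DMs, images/videos
items = users * items_per_user_per_day            # 60 billion items/day

video_share = 0.01                  # 1% of items are videos
video_minutes = items * video_share * 3.0          # 3-minute videos -> 1.8 billion minutes
other_minutes = items * (1 - video_share) * 0.5    # 30 seconds each -> 29.7 billion minutes
total_minutes = video_minutes + other_minutes      # 31.5 billion minutes/day

workday_minutes = 8 * 60            # assumed 8-hour moderator shift
workdays = total_minutes / workday_minutes         # ~65.6 million moderator-workdays per day
print(f"{total_minutes / 1e9:.1f}B minutes/day -> {workdays / 1e6:.1f}M workdays/day")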
Re: (Score:1)
What about parenting? (Score:3)
I'm no fan of Zuck and Meta. However, the parents allowed their child to have the phone and let her become obsessed with social media for most of her waking hours. Where is the accountability for the parents?
Re: (Score:2)
Those parents are also products of a shitty US education system where religious nutters prevent things like sex education, which amusingly has better results than purity culture and produces far fewer pedophiles (which churches seem to excel at).
But don't worry, the world is rapidly shifting away from the USA,
Re: (Score:2)
Zuck and co. are also funding the fights pushing for it.
It's to help verify the user is a human so they... you know... can mark the account to have all its data sold.
Re: (Score:2)
They are against anything that could reduce engagement, which is why they deliberately use addictive methods to keep people coming back.
shocked shocked i say (Score:1)
Yes, Mark, with the way you ran the platform, the things you built it to do and the way you implemented it, exploitation of all kinds WAS inevitable.
If you were a literate person you'd already know why. On the other hand, you're a hero for proving that ignorance really IS an excuse, I guess?