AI Facebook Social Networks

New Internal Documents Contradict Facebook's Claims that AI Can Enforce Its Rules (livemint.com) 71

Today in the Wall Street Journal, Facebook's head of integrity, Guy Rosen, admitted that from April to June of this year, one in every 2,000 content views on Facebook still contained hate speech. (Alternate URL here, with shorter versions here and here.)

Rosen called that figure an improvement over mid-2020, when one in every 1,000 content views on Facebook was hate speech. But at that same time, Mark Zuckerberg was telling the U.S. Congress that "In terms of fighting hate, we've built really sophisticated systems!" "Facebook Inc. executives have long said that artificial intelligence would address the company's chronic problems keeping what it deems hate speech and excessive violence as well as underage users off its platforms," reports the Wall Street Journal.

"That future is farther away than those executives suggest, according to internal documents reviewed by The Wall Street Journal. Facebook's AI can't consistently identify first-person shooting videos, racist rants and even, in one notable episode that puzzled internal researchers for weeks, the difference between cockfighting and car crashes." On hate speech, the documents show, Facebook employees have estimated the company removes only a sliver of the posts that violate its rules — a low-single-digit percent, they say. When Facebook's algorithms aren't certain enough that content violates the rules to delete it, the platform shows that material to users less often — but the accounts that posted the material go unpunished.

The employees were analyzing Facebook's success at enforcing its own rules on content that it spells out in detail internally and in public documents like its community standards. The documents reviewed by the Journal also show that Facebook two years ago cut the time human reviewers focused on hate-speech complaints from users and made other tweaks that reduced the overall number of complaints. That made the company more dependent on AI enforcement of its rules and inflated the apparent success of the technology in its public statistics.

According to the documents, those responsible for keeping the platform free from content Facebook deems offensive or dangerous acknowledge that the company is nowhere close to being able to reliably screen it. "The problem is that we do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas," wrote a senior engineer and research scientist in a mid-2019 note. He estimated the company's automated systems removed posts that generated just 2% of the views of hate speech on the platform that violated its rules. "Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term," he wrote.

This March, another team of Facebook employees drew a similar conclusion, estimating that those systems were removing posts that generated 3% to 5% of the views of hate speech on the platform, and 0.6% of all content that violated Facebook's policies against violence and incitement.

Facebook also takes additional steps beyond AI screening to reduce views of hate speech, the company told the Journal, while arguing that the internal Facebook documents the Journal had reviewed were outdated. But one of those documents showed that in 2019 Facebook was spending $104 million a year to review suspected hate speech, with a Facebook manager noting that "adds up to real money" and proposing "hate speech cost controls."

Facebook told the Journal the saved money went toward improving its algorithms. But the Journal reports that Facebook "also introduced 'friction' to the content reporting process, adding hoops for aggrieved users to jump through that sharply reduced how many complaints about content were made, according to the documents."

Facebook told the Journal that "some" of that friction has since been rolled back.
This discussion has been archived. No new comments can be posted.

  • by cats-paw ( 34890 ) on Sunday October 17, 2021 @10:53PM (#61901871) Homepage

    It's not even remotely close. We're going to lose this battle just like we lost the hacker vs cracker battle. Correction, it's already been lost.

    AI doesn't have the faintest amount of common sense of any sort.

    I think Machine Learning is a pretty good moniker, although still inaccurate, because these systems also lack any real introspection.

    A real AI, or even something genuinely capable of "learning", would at least get some idea that things don't make sense when they don't actually make sense.

    However ML will do a bad job with bad training sets, sometimes a bad job with good training sets, and it's certainly not capable of the kind of decision making that's needed. But FB doesn't actually need it, now do they?

    They need to make money, lots of money. Claiming that they have "systems" in place to do the work gives them plausible deniability. Look, see, "we're doing something. Don't worry about the fact that the AI doesn't work, we'll tweak it to make it better".

    It will never be good enough.
    - it's not really an AI
    - FB wants it to allow those things that make it the most money, and it's guaranteed that rule is built into whatever "learning" it is capable of.

    • by sg_oneill ( 159032 ) on Monday October 18, 2021 @12:14AM (#61901991)

      It's shocking. I got a 30 day ban for "hatespeech" for talking about Australian slang around cigarettes.

      Basically a decade and a half ago a local entrepreneur, Dick Smith, launched a protest about the sale of RedHead matches to a foreign multinational by launching his own brand, "Dick Heads". In my post I also mentioned how changing sensibilities moved the preferred Australian slang for cigarettes from "Fa gs" to "Durries". (And yes, I'm putting that space in here because I still don't know the lameness filter's current arbitrary rules.)

      Within minutes I was on a 30 day ban for "Dick heads" and "Fa gs".

      Bots have no capability of understanding nuance. But now I have on my permanent record that I'm a hatespeech person, and anyone who knows me knows nothing could be further from the truth. I wouldn't dare use the N word, or call a gay person the F word. And I don't even use the R word for disabled folks. And none of this is about political correctness; I was just raised by my mother to be polite, and not be a dick to people who've done me no harm. But the Bot does not know this. It just sees $banned_word and now here we are.

      Meanwhile literal neo-nazis with swastika profiles are on there sending death threats and going hog wild with no repercussions.

      Go figure.

      • australian slang around cigarettes. I was on the piss with me mate and the cunt was bumming fags all night. I had to tell him to get onto Cenno and get on the dole and score his own fags and stop being a slack cunt. But the cunt won't listen.

      • Re: (Score:2, Insightful)

        > And none of this is about political correctness, I was just raised by my mother to be polite, and not be a dick to people who've done me no harm.

        That's why she probably taught you to say 'retarded' instead of moron, imbecile, or idiot. It was the polite term, to indicate that perhaps their learning was delayed.

        But a very sick part of our society wants to take offense at anything possible because some fools give them comfort and power for it. Resist this at all costs.

        They consider 'handicapped' to be a

        • by jbengt ( 874751 ) on Monday October 18, 2021 @07:58AM (#61902513)

          That's why she probably taught you to say 'retarded' instead of moron, imbecile, or idiot. It was the polite term, to indicate that perhaps their learning was delayed.

          The fun part is that moron, imbecile, and idiot were, each in their own turn, once the scientific or polite term for the mentally disabled. But it's only natural to use those terms as insults to those they don't really apply to, and as a consequence they acquire connotations of offensiveness and become the new "bad" word to use. It becomes a never-ending cycle.

          They consider 'handicapped' to be a form of mockery now, despite all the parking spots

          To be fair, the current term is "accessible parking space", which actually is a better description of the parking space than "handicapped" would be.

          • To be fair, the current term is "accessible parking space", which actually is a better description of the parking space than "handicapped" would be.

            I agree that "accessible" is a better term for a parking space for those who are handicapped. But I resist calling the people who need them anything other than "handicapped".

            First, look at the supposedly politically-correct "sensitive" alternatives that I've heard. "Disabled"? Hell no! A non-functional vending machine or a booted vehicle or an interrupt pin on a micro are "disabled"; somebody with only one leg, for example, may still be very functional in multiple ways.

            "Differently abled"? Shit - the needle

          • The fun part is that moron, imbecile, and idiot were, each in there own turn, once the scientific or polite term for the mentally disabled. But it's only natural to use those terms as insults to those they don't really apply to, and as a consequence they acquire connotations of offensiveness and become the new "bad" word to use. It becomes a never-ending cycle.

            Ok, for a start, the reason I don't use the word is because I've worked with people with Down Syndrome, and they'll break down in tears every damn ti

        • But a very sick part of our society wants to take offense at anything possible because some fools give them comfort and power for it. Resist this at all costs.[...]No, the only winning move is not to play these idiots' games. Being offended is a sign of emotional instability, not an argument.

          The reality is that people will use any term for mental disability as a slur in a wider context to the point where the term commonly becomes accepted as a slur. People with such disabilities and those that know them in

        • Being offended is a sign of emotional instability

          Well...you'd better stop being offended by changing language, then.

        • But a very sick part of our society wants to take offense at anything possible because some fools give them comfort and power for it. Resist this at all costs... Your mama taught you to be kind but your daddy taught you to be firm.

          Ah, a man who demonstrates the courage of his convictions by being openly sexist... ;-) I was, of course, kidding - your post was terrific and should be read by everyone who considers politically correct censored speech to be the salvation of society. Especially, it should be re-read by whoever modded you down.

      • Bots have no capability of understanding nuance

        Yep. I recently got banned for "spamming." The bot had decided my thread comment about the good qualities in Star Trek III was "spam."

        I appealed, if for no other reason than to further clog up their system by demonstrating how broken it is.

        • I appealed, if for no other reason than to further clog up their system by demonstrating how broken it is.

          If you are talking about Facebook, the only meaningful "appeal" is to cancel your membership and encourage others to do the same.

      • Comment removed based on user account deletion
      • by whitroth ( 9367 )

        Yep. Their so-called algorithms appear to primarily be searching for naughty words. I got a week for using "You're a slut, Jane" from 1970's SNL.

        There's no difference between this and Prodigy Europe's attempt in the '80s to ban naughty words. That effort, since people paid and had the right to complain, lasted a week: it banned "breast", and the breast cancer survivors were unhappy, as was the French port of Brest, and on and on.

        Now, I'm seeing reports from folks that it banned due to naughty words, ju

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      > AI doesn't have the faintest amount of common sense of any sort.

      Neither do most people, and certainly fish, animals, and insects don't. It doesn't mean they're not forms of intelligence.

      > I think Machine Learning is a pretty good moniker, although still inaccurate, because it also lacks complete introspection.

      It doesn't need it. Learning is possible without introspection, again, look to the varied world of fauna.

      > A real AI, or even something capable of "learning", would be able to at least get s

    • by ranton ( 36917 ) on Monday October 18, 2021 @09:58AM (#61902775)

      FB wants it to allow those things that make it the most money, and it's guaranteed that rule is built into whatever "learning" it is capable of.

      This comment really gets to the core problem preventing real progress at Facebook. They won't fix this until they are convinced their company will be fined out of existence if they don't fix it. Just pass a law where Facebook will be fined 10x their current gross revenue in 2024 if they have the same number of hateful posts they did in 2020. Now the company can choose to either fix it or go away. Obviously that is a bit of hyperbole, but as long as Facebook knows they won't be allowed to operate like this and that the laws don't care how hard it is to implement, Facebook has about $40 billion per year in net profits to find a solution. If that takes hiring 500,000 moderators and 25,000 therapists and counselors for those moderators then so be it. If they can improve their "AI" enough to not need so much manpower, even better.

    • by mark-t ( 151149 )

      AI *IS* artificial intelligence.

      What I think you might have meant to say is that nobody has apparently actually successfully made anything that should qualify as AI yet.

      But answer this question: What *IS* intelligence, exactly? We can observe intelligence not just in humans, but in many other creatures as well. Can you define what intelligence actually is? How do we ascertain that something is, or is not, intelligent in the first place if not simply by its outward behavior? Should knowing or under

  • Suspicious (Score:5, Insightful)

    by Jiro ( 131519 ) on Sunday October 17, 2021 @10:59PM (#61901885)

    I mean, I doubt this is actually wrong. I have no doubt that the leaked information is correct--that is, Facebook's AI really is terrible at censoring. And if the point had been "Facebook censorship is terrible, maybe they should stop it", it'd be okay.

    But that's not the point of the discourse surrounding the leaks, and probably not the point of leaking it in the first place. The point is that since Facebook's censorship is terrible, we need to get someone else in there to really make sure Facebook is getting rid of all that hate speech and misinformation. That someone is going to be either the government or a cancel mob, and that's a lot scarier than even malfunctioning Facebook algorithms.

    • by AmiMoJo ( 196126 )

      Why should Facebook stop censoring? It's their private platform, they want to provide a certain user experience to keep the herd happy, why shouldn't they be allowed to do that?

      If you want less censorship then there are plenty of other websites that offer what you want, or you could start your own.

      • Re:Suspicious (Score:5, Insightful)

        by fafalone ( 633739 ) on Monday October 18, 2021 @04:38AM (#61902279)
        The argument that people can just make their own site is undermined by the fact that when they do, you'll simply go after their hosting provider and DDOS protection, then when that doesn't work, the upstream provider to those. At this point you might as well just tell people to forget being online at all and rely on phone calls and mail. Because even if they went through the lengths of creating their own backbone NOC for their own hosting service, you'd demand all the others refuse to peer with them, and they can't build their own internet, because you'll demand governments not allow it.
        • by AmiMoJo ( 196126 )

          8chan and the Daily Stormer are still up, so clearly it is perfectly possible to create sites like that and keep them online.

          Where other sites go wrong is that their business model relies on forcing other people to give them money against their will. Advertisers must be forced to advertise on their sites, banks must be forced to process payments for them.

          Don't misunderstand me here, I actually support positive rights where the *government* is obligated to provide people with basic services, as is the norm in E

          • They might be back up, I don't visit either so I can't say, but it's temporary. It just happens they aren't the center of a media storm at the moment; next time their current host gets smeared for "supporting hate groups and pedophiles" they'll get kicked off and the cycle will repeat.

            The point is that because the internet only works by association, "make your own site" is essentially impossible to do when a Cancel Mob is trying to take you down.

          • Comment removed based on user account deletion
        • Comment removed based on user account deletion
      • Re:Suspicious (Score:5, Insightful)

        by AleRunner ( 4556245 ) on Monday October 18, 2021 @04:44AM (#61902283)

        Why should Facebook stop censoring? It's their private platform, they want to provide a certain user experience to keep the herd happy, why shouldn't they be allowed to do that?

        Facebook also claim immunity from responsibility for the comments under legislation such as the DMCA, claiming that "it's just a user comment" and they also actively engineer the engagement with the content, including allowing such things as hiding the content from people that would react badly to it or correct it.

        If any one of those things wasn't true - e.g. a forum which allows anonymous hate speech but doesn't control and push it (Slashdot) or a forum which actively took responsibility for the anonymous views and deleted or responded to them (most newspapers) or a simple forum where everything was reliably visible to everyone if they want it (again Slashdot or Reddit) then this wouldn't be the same.

        As it is, Facebook is almost perfectly designed to eliminate freedom of expression by avoiding certain views, inconvenient to Facebook, getting to certain people whilst at the same time maximising the spread of disinformation.

        • by AmiMoJo ( 196126 )

          Facebook doesn't claim immunity from the DMCA, they respond to take-down requests as legally required.

          Maybe you mean Section 230. S230 explicitly allows what they are doing.

          (2) Civil liability
          No provider or user of an interactive computer service shall be held liable on account of—

          (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

          (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

          That seems pretty clear cut, they can censor all day and all night on any criteria they like and still not be held liable.

          You will also note that Slashdot does in fact censor comments. There is the lameness filter and moderation effectively buries things.

          I hate Facebook as much as the next guy, but legally speaking they are on solid grou

          • Re:Suspicious (Score:4, Interesting)

            by AleRunner ( 4556245 ) on Monday October 18, 2021 @06:15AM (#61902377)

            Facebook doesn't claim immunity from the DMCA, they respond to take-down requests as legally required.

            Maybe you mean Section 230. S230 explicitly allows what they are doing.

            That's not what I said. I said they claim immunity via laws like the DMCA, not immunity from those laws. Normally, if you post lies about someone, especially someone private, which cause real damage, even in the USA there are laws under which they can sue. Especially for interfering with commercial interests. If you do that widely, but in secret so that the person you libel/slander cannot react, that makes it even worse, and when they find out they are much more likely to get damages.

            With Facebook, they do that all the time, but when you find out, even though Facebook was spreading the lie, Facebook was controlling access to it and Facebook was profiting, they simply claim it's a user comment, delete it and avoid the responsibility they should normally legally have. The fact that they control the access and select the audiences where it will do the most damage whilst they can avoid the ones that might fix the problem makes this different from normal publication.

            Facebook is not a true publication putting other people's views out into the public domain as envisioned by the DMCA. They are a mechanism which uses the texts of others to push their own preferred ideas and since the ideas that they push are the ones which make people the most angry ("increase engagement") their use of the DMCA and other similar laws is an abuse which doesn't match the envisioned use of that law.

            • by AmiMoJo ( 196126 )

              So what is your proposed solution to this? Sounds like you would have to do away with S230 so that Facebook takes on some liability.

              • So what is your proposed solution to this? Sounds like you would have to do away with S230 so that Facebook takes on some liability.

                I don't have a fully thought-out idea yet. I think that things like Slashdot should be allowed, even with some of the offensive stuff that's published here. At a guess, after study and consideration, I think I would limit S230 to material which is openly published and subject to open peer review. I'm not sure how that would interact with e.g. charging for access.

                One of the things I noticed in your quote is that facebook is required to do their publication "in good faith". I'd consider them to be i

      • Hey everyone: this is a dishonest argument. It's a hard left account that's arguing an extreme libertarian point of view. Pay it no attention, it's just here to disrupt the flow of posting.
        • by AmiMoJo ( 196126 )

          It's literally the law in the United States. I guess that makes the United States "hard left" and "extreme libertarian" too.

          Also, libertarian, LOL. I'm always espousing the benefits of big government.

    • Absolutely right. Watch the old Kirk Douglas movie "Saturn 3" for an excellent, though primitive, example of this played out. And that's why speech shouldn't be classified as "hate speech". Let people decide for themselves what is tolerated, and if enough people decide not to tolerate certain speech (like racial epithets), it, like those who use it, will be ostracized organically.

      Also, hate speech laws ALWAYS lead to hate crimes laws, which value the lives of some people over the lives of others. That's

  • by zemicrofilm ( 3033205 ) on Sunday October 17, 2021 @11:03PM (#61901899)
    Seriously, who defines hate speech? Is saying Trump is an orange buffoon hate speech? Is saying Biden is senile hate speech? Is saying Fauci is linked to funding gain-of-function research in Wuhan hate speech? Is saying Zuckerberg is evil hate speech? My guess is the people who are going to define what is hate speech are probably the last people we would want to be given that power.
    • by Rosco P. Coltrane ( 209368 ) on Sunday October 17, 2021 @11:11PM (#61901911)

      Censorship is arbitrary by definition. That's not the issue here. What TFA says is, whatever FB's own criteria for what constitutes hate speech are, apparently they're unable to enforce them.

      But of course the real issue isn't that they're unable to enforce them, it's that they're unwilling.

      • "A word means exactly what I say it means. No more and no less." -- Humpty Dumpty.
      • by AmiMoJo ( 196126 )

        Even if they were willing I don't think there is a good solution. AI doesn't work very well and humans doing the job tend to develop mental health problems.

        • by ranton ( 36917 )

          Even if they were willing I don't think there is a good solution. AI doesn't work very well and humans doing the job tend to develop mental health problems.

          My guess is that most attempts to have humans do this have spent more time on minimizing costs than on creating an environment where the workers could perform the task with minimal stress and mental health complications. What would happen if each employee had (mandatory) access to a couple hours of therapy each week, and their workload were limited after exposure to a certain number of banned posts per week? Sure, it could increase the cost per moderated post by 50% or more, but now there is a solution.

          Wha

    • Seriously who defines hate speech?

      On Facebook, Facebook should define what is hate speech.

      It is their site, they can do what they want. It is none of my business because it isn't my site.

      I have a Facebook account. I occasionally use it to keep in touch with friends and family.

      I have never, not once, seen any hate speech on Facebook.

      I presume the people who do see hate speech are seeing it because they are looking for it, or having silly arguments with strangers.

    • That's an easy answer. Read this whole link, but here's the gist of it: [newdiscourses.com]

      In the most extreme forms, demands for suppressing or "banning" hate speech stem from claims that it can, in that way, create fascist [newdiscourses.com] and other potentially genocidal [newdiscourses.com] "hate" movements that are unstoppable except through extreme applications of military force (perhaps including civil or world war). These more extreme interpretations can be traced rather directly to

  • by Canberra1 ( 3475749 ) on Monday October 18, 2021 @01:49AM (#61902093)
    Both Facebook and Google pretend to have AI, but the rules are never clear (or published). Fairness demands an appeal mechanism, with the reason stated specifically. It boils down to 'potentially offensive' being policed harder than most magazines sold at the newsagent, or clips on the nightly news. Just like how Bugs Bunny and Road Runner cartoons had to be censored for violence. Not sure how Groucho Marx would fare nowadays. Ask any content creator: neither Facebook nor YouTube has published rules or policy that others can read. Taking Google: people with videos sometimes get warnings but are not told why, or what portion of their video (say, minutes and seconds in) triggered it. Then there is retrospective censorship, where 3-year-old videos are taken down because the rules have changed over time. Recently stereotyping has sometimes been flagged as hateful. AI is also a poor word to describe keyword triggers. Plenty of people adjust posts to avoid high-scoring or 'banned' words that may trigger the filter. The world is not a box of chocolates.
  • What AI. Bulsh*t. AI some other time. Regexps and substring matching.

    The same way it was denying accounts to people from the city of Scunthorpe, it is now marking as hate speech any post about Hohloma (because it contains the word Hohol - an insult for Ukrainians).

    That is the level of Facebook AI. Almost as good as their DNS.
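    The "Scunthorpe problem" this comment alludes to is easy to reproduce. Here is a minimal sketch (hypothetical banned list, not Facebook's actual filter) contrasting naive substring matching with word-boundary matching:

```python
import re

# Hypothetical banned list, for illustration only.
BANNED = ["ass"]

def naive_filter(text: str) -> bool:
    """Substring match: flags a banned string anywhere, even inside other words."""
    lower = text.lower()
    return any(word in lower for word in BANNED)

def boundary_filter(text: str) -> bool:
    """Word-boundary match: flags a banned term only as a standalone word."""
    return any(re.search(rf"\b{re.escape(w)}\b", text, re.IGNORECASE)
               for w in BANNED)

print(naive_filter("a classic mistake"))     # True: false positive on "classic"
print(boundary_filter("a classic mistake"))  # False: boundary check avoids it
print(boundary_filter("you ass"))            # True: a genuine match is still caught
```

    Word boundaries cut down the false positives, but neither approach understands context, which is the commenter's larger point.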

  • AI might be able to pick up on someone being a potential bot, or a swarm of them, or an account for their prodigious amount of content they're posting, or flag content that multiple people are tossing around. But a human still needs to review the content to see what is going on, especially if it is images or video. And even if the outcome of that review gets fed back into the AI, the AI *still* can't tell what's going on.
  • With 36% of Americans on social media getting their news from Facebook (https://www.omnicoreagency.com/facebook-statistics/) and 1 in 2,000 views being hate speech, I understand how we got where we are. I think regulation should be implemented. Facebook itself says fighting hate speech hurts the bottom line, so it will never do it.
  • by nospam007 ( 722110 ) * on Monday October 18, 2021 @06:21AM (#61902381)

    Must be those AIs who became racist, xenophobic Nazis after 1 hour of reading the internets.

  • I've been temporarily suspended several times now for "bullying" when it wasn't. One time a person posted a picture of a frog they found in their yard that was quite plump and I responded "Fat boi!" and received a 24 hour suspension for online bullying. When I appealed it, they noted that it was reviewed and they stood by the ruling, and my suspension was increased to 3 days! Another instance was a screenshot of some girl's Twitter griping about something and my response was she probably hates all men. An
  • in 2019 Facebook was spending $104 million a year to review suspected hate speech

    Can they find no better application for that money? Just let it be — people can block (those they consider to be) assholes on their own, what's the push to hound them off for everyone else?

    Other than, of course, Democrats using Facebook to circumvent the First Amendment [washingtonpost.com]...

  • by jenningsthecat ( 1525947 ) on Monday October 18, 2021 @10:44AM (#61902925)

    Facebook has a head of integrity? Isn't that kind of like Hell having a head of divinity?

    Oh, I get it now - that must be part of Facebook's "know thine enemy" strategy!

  • by account_deleted ( 4530225 ) on Monday October 18, 2021 @10:49AM (#61902939)
    Comment removed based on user account deletion
  • I literally was banned for life by Twitter. I didn't break any rules. I simply stated that people should stop trying to ruin other people's lives over imaginary mind crimes and be civil. Twitter said that was targeted harassment and hate speech. Yes, these AI's are such wonderful overlords. The mods that manually review are even worse when it involves politics. We're at a point that public discussion isn't tolerated because people get offended by everything and invoke hate speech rules or whatever in m
  • My son, tell me your ailment? Aha, I see. AI-AI-AI, I call on you, cure my son! ...My son, you are cured by the power of AI!

  • Can anyone ever believe a press release from Facebook?
