Google Youtube AI

Google Returns to Using Humans (Instead of AI) to Moderate Content on YouTube (digitaltrends.com) 88

"Google is bringing back human moderators to oversee YouTube content, taking over from automated systems that were given more responsibilities at the onset of the COVID-19 pandemic," reports Digital Trends: YouTube revealed in late August that in the three months prior, 11.4 million videos have been removed from the platform for violating its Community Guidelines. This is the highest number of videos taken down from YouTube over a three-month period since the service was launched in 2005, and it was attributed to the higher reliance on A.I. as the pandemic prevented human reviewers from going to work. YouTube admitted, however, that some of the videos would have been erroneously removed...
Mashable reports: According to the Financial Times, YouTube reversed content moderation decisions on 160,000 videos. Usually, YouTube reverses its rulings on less than 25 percent of appeals; under AI moderation, half of the total number of appeals were successful...

Now, the company is able to reassign some of that work back to humans who can make more nuanced decisions.

  • Let me know when the humans can do a decent job of moderating. You can start by detecting political bias via a second moderation layer. Also by using unbiased and true-to-public-policy moderation as a metric, not speed. Oh, and by not favoring the big media companies. And I want a pony!
    • Re:Nuanced decisions (Score:5, Interesting)

      by bill_mcgonigle ( 4333 ) * on Monday September 21, 2020 @05:49AM (#60526944) Homepage Journal

      These are $12/hr sub-sub contractors in a cube farm in Texas who only take the job to exercise some political power.

      See the Veritas undercover video for a sense of how they actually think. It's your stupid neighbor who can't quite handle customer service at the supermarket due to personality traits.

      • Share the video link. A quick search for "YouTube veritas video" turns up only vague results.
      • "It's your stupid neighbor who can't quite handle customer service at the supermarket due to personality traits." This is freaking hilarious! You have a link to this Veritas video?
      • Hey, it is a dream job for some. Get paid to correct people who are wrong on the internet!
        But as I have been ranting for a while now, it isn't the idea or the ideology; the implementation is the difference between good and evil.

        Having people moderate large sites is a good idea. Implemented well, it is a real improvement; implemented poorly, it is a disaster worse than before.

        I would much rather see a bad idea implemented well than a good idea implemented poorly.

      • by Deef ( 162646 )

        In any discussion of Project Veritas [wikipedia.org], it should be noted that they have repeatedly been caught lying and doctoring videos [theguardian.com] to further their very clear right-wing agenda. Project Veritas is by no means an unbiased observer of events, and when someone quotes them as a supposed standard of truth, it should raise immediate red flags.

      • These are $12/hr sub-sub contractors in a cube farm in Texas who only take the job to exercise some political power.

        See the Veritas undercover video for a sense of how they actually think. It's your stupid neighbor who can't quite handle customer service at the supermarket due to personality traits.

        The only sense in which Project "Veritas" is truthful is that they tell us exactly what unethical or illegal behavior they're engaging in by accusing others of it first. You know... kinda like Trump.

      • Diiiiiiiid someone say Project Veritas [sourcewatch.org]?

        > a group affiliated with James O'Keefe, a right-wing provocateur known for a series of deceptive videos attacking targets like Planned Parenthood and ACORN. As of 2017, Project Veritas's main targets were the mainstream news media and left-leaning groups via "undercover 'stings' that involve using false cover stories and covert video recordings meant to expose what the group says is media bias."

    • Humans can do a better job than AI can. For example, you may need to bring up Trump or some other divisive political figure in a non-partisan, apolitical discussion. An AI might just flag the name as political, then try to gauge the tone of the comment and decide whether it was positive or negative. Not all things that involve government are political. Humans on a diverse team can handle this; any individual may just let their personal belief system come into play (up-vote or down-vote because such message cla

      • "Humans can do a better job than what AI can do"

        It's not Artificial Intelligence, it's Artificial Insanity [youtube.com].

        I doubt Google will be able to recruit enough people to do this job adequately, perhaps we've reached "peak YouTube" and the whole system has just become too large to be viable in its present form. To say nothing of the endless ads that now dilute the content to an unacceptable degree.

  • 25% (Score:4, Interesting)

    by AmiMoJo ( 196126 ) on Monday September 21, 2020 @03:49AM (#60526778) Homepage Journal

    A 25% failure rate is terrible! It's no wonder they are banning a lot of stuff that doesn't break their rules.

    Just as bad is the de-monetization. Videos about make-up or dating tips for LGBT people get flagged and can't make money from ads, which is clearly illegal.

    • Re:25% (Score:5, Insightful)

      by Luckyo ( 1726890 ) on Monday September 21, 2020 @03:59AM (#60526798)

      Far left: "We're going to pressure advertisers to not advertise on anything that isn't universally acceptable and completely morally unambiguious".

      Also far left: "What do you mean that many of our favourite things aren't universally acceptable and completely morally unambiguious?"

      • Re: (Score:2, Insightful)

        by AmiMoJo ( 196126 )

        I think you mean "far right". Remember GamerGate and when they tried to get Intel to stop advertising on gaming news sites? Or when they tried to get Kaepernick's sponsorships taken away? Or got L'Oreal to cancel Munroe Bergdorf's contract (they later apologised)?

        • Or got L'Oreal to cancel Munroe Bergdorf's contract (they later apologised)?

          Well I sure hope that Bergdorf apologized for vilifying an entire race of people (a stunt that few would dare to perform in this day and age). But what does it have to do with this?

          • by AmiMoJo ( 196126 )

            She didn't. She neither said that nor apologised for (not) saying it. Once that was pointed out to L'Oreal, they apologised to her.

            The original attack on her was actually much worse. Not only were her comments misrepresented, they tried to smear her as a "porn star". For a bunch of people who claim to hate cancel culture they sure tried hard to cancel her.

            Fortunately the truth usually does come out in these instances and brands have discovered that simply trying to wash their hands of it or not get involved is no

            • She didn't.

              If she didn't apologize for saying those incredibly vile things that she said, then let me play the world's smallest violin for her silly complaints. And if you don't see how telling all people of one race that they "inherited racism", that their "entire existence" is "drenched" in it, and that their race is "the most violent" is vile, then feel free to GFY.

      • by gmack ( 197796 )

        I don't know if anyone has noticed, but the left and right both engage in cancel culture. They only disagree on what should be cancelled.

        • by Luckyo ( 1726890 )

          That's a nice way of saying "far left invented it and engages in it as a matter of routine, whereas the opposite side (which is not right, but everyone else, including centrists and leftists) does it exceedingly rarely and finds it a generally disdainful tactic that they are forced to use because far left uses it so liberally".

          • by gmack ( 197796 )

            That is a load of crap.

            In the 80s, Dungeons & Dragons was going to turn us all into witches or maniac killers and needed to be banned. Violent movies and nudity needed banning. Certain books were protested if found in a school library. Then from the 90s until today, it was violent video games that would turn us all into school shooters. I haven't forgotten that Trump wanted to have a talk with the video game industry about school violence levels. Online gambling under GW Bush got a total ban on credit card transactions.

            • by Luckyo ( 1726890 )

              Ah, you're retroactively pretending that "cancel culture" isn't referring to the targeting of individuals, but all moral panics of the past.

              No. I do not accept your rewriting of history.

              • by gmack ( 197796 )

                Oh! you mean like Amy Grant or Jars of Clay during the 90s?

                • by Luckyo ( 1726890 )

                  No, I mean like "gulags were just work camps", "cancel culture existed before tumblr and twitter" and other far left attempts at rewriting history.

    • A 25% failure rate is terrible! It's no wonder they are banning a lot of stuff that doesn't break their rules.

      Just as bad is the de-monetization. Videos about make-up or dating tips for LGBT people get flagged and can't make money from ads, which is clearly illegal.

      11.4 million videos removed, 0.16 million (160K) reversals on appeal. Doesn't seem bad at all to me, but if a convincing case can be made for why a large proportion aren't appealing I might take a different view.

      • by AmiMoJo ( 196126 )

        "Usually, YouTube reverses its rulings on less than 25 percent of appeals"

        So between 0 and 24.999%, presumably very close to 25% or they would have said "less than 20%". That's an extremely bad false positive ratio.

        • "Usually, YouTube reverses its rulings on less than 25 percent of appeals"

          So between 0 and 24.999%, presumably very close to 25% or they would have said "less than 20%". That's an extremely bad false positive ratio.

          I don't doubt that it's very close to 25% on appeals, but I believe that it's reasonable to count from the total number of removed videos which gives a ratio (0.16/11.4) of about 1.4%. Since appeals carry very little cost I think it's fair to assume that a negligible number in the category that wasn't appealed would have been reversed but I'm of course willing to be convinced otherwise.
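
          For concreteness, here is that arithmetic as a minimal sketch in Python, using only the figures quoted in the summary (11.4 million removals, 160,000 reversals, and a roughly 50% appeal success rate under AI moderation); the implied number of appeals is a derived estimate, not a figure YouTube has reported:

          # Back-of-envelope arithmetic from the figures quoted in the summary.
          # The implied appeal count is derived, not reported by YouTube.
          removed = 11_400_000          # videos removed over three months
          reversed_on_appeal = 160_000  # reversals reported by the Financial Times
          appeal_success_rate = 0.50    # ~half of appeals succeeded under AI moderation

          # Reversals as a share of *all* removals (the ~1.4% reading above):
          print(f"reversals / removals = {reversed_on_appeal / removed:.1%}")  # 1.4%

          # Implied number of appeals, if roughly half of them succeeded:
          implied_appeals = reversed_on_appeal / appeal_success_rate
          print(f"implied appeals = {implied_appeals:,.0f}")  # 320,000

          # Share of removals that were appealed at all:
          print(f"appeal rate = {implied_appeals / removed:.1%}")  # 2.8%

          On those numbers only about 2.8% of removals were appealed at all, which is how the ~1.4% overall reversal figure and the ~50% per-appeal reversal figure can both be true at once.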

          • by flink ( 18449 )

            Since appeals carry very little cost I think it's fair to assume that a negligible number in the category that wasn't appealed would have been reversed but I'm of course willing to be convinced otherwise.

            Most of the money a video is going to make, unless it is one of the outliers that goes viral, will be in the first few days to a week after it is posted. Once a video is demonetized, by the time it can be appealed and the decision reversed, the damage to the creator is done. It's not like YouTube is setting aside the money the video would have made and handing over a check along with the reversal.

              Most of the money a video is going to make, unless it is one of the outliers that goes viral, will be in the first few days to a week after it is posted. Once a video is demonetized, by the time it can be appealed and the decision reversed, the damage to the creator is done. It's not like YouTube is setting aside the money the video would have made and handing over a check along with the reversal.

              I can see how that would make someone do a quick edit and re-upload rather than bother with an appeal, but then AFAIK there's a limited number of strikes your channel can have before it gets demonetized completely, so isn't there an incentive to always appeal?

              • by flink ( 18449 )

                I can see how that would make someone do a quick edit and re-upload rather than bother with an appeal, but then AFAIK there's a limited number of strikes your channel can have before it gets demonetized completely, so isn't there an incentive to always appeal?

                A copyright strike is different from being demonetized by the algorithm. E.g. if you use a copyrighted piece of music, the owner of the music chooses to claim your video, and you choose to contest and lose, that's a strike. 3 strikes and your channel can be deactivated altogether. If you are an LGBT vlogger that discusses current issues affecting your community, then all your videos might get demonetized for "inappropriate content" by YouTube's algorithm, but that doesn't constitute a strike, YT wil

    • Just as bad is the de-monetization. Videos about make-up or dating tips for LGBT people get flagged and can't make money from ads, which is clearly illegal.

      They don't care if it's illegal. They just care because when content is demonetized, they can't make money from ads on it either.

  • 80% of my comments vanish in the span of seconds.

    Seems I'm not compatible with Google's idea of what constitutes an appropriate opinion.

    granted, I am a bit of a dick too, so this may not be entirely their fault :D.

  • Context is important (Score:5, Interesting)

    by teg ( 97890 ) on Monday September 21, 2020 @05:36AM (#60526930)

    A human can understand the context better, and so - hopefully - avoid some of the errors of the AI. One example: this summer, a popular YouTube chess channel posted a game analysis, as it usually does. This time, however, the video was rejected because it contained "harmful and dangerous" content. [thesun.co.uk] An appeal was also rejected [twitter.com], seemingly automatically (as it was instantaneous). Fortunately, the situation was eventually resolved after further appeals.

    No real reason has ever been given AFAIK, but the best guess is that it was rejected because it contains discussion of black and white and statements like "if black goes to B6 instead of C6, white will always be better". An algorithm might think this is racism; a knowledgeable human should conclude otherwise.

    • Science (Score:4, Insightful)

      by JBMcB ( 73720 ) on Monday September 21, 2020 @07:21AM (#60527086)

      A science channel I sometimes watch made a video debunking another video claiming that wearing a mask to prevent COVID-19 is dangerous. It was flat-out, not ambiguous at all, a debunking video showing that masks *are* safe. It was taken down because it "spread misinformation" about wearing face masks. The thing is, the email said that it was reviewed by a human. A long email chain ensued where, ostensibly, a human said that it violated YouTube guidelines on misinformation, never giving the exact type of misinformation contained in the video, just that it was COVID related.

      There are two major issues, overall, with the YouTube moderation system. One is that the AI system is very inaccurate. Two is that the humans moderating the videos are lazy or inept and don't actually watch the videos they are moderating. In the case of this video, a cursory watch would show that it was not spreading misinformation.

      • I think I watch the same lightning toe channel, and he has had videos removed previously. The first that I am aware of had the word "naked" in the title, so it was taken down for that. The issue with the take-down is that the video was all about naked _electrons_ and chemistry. My position is clear: if seeing naked electrons excites you in a sexual manner, there is something seriously wrong with you. Eventually YT seems to have understood that the "naked" in this context was not in any way erotic/sexual.
      • A series of videos on COVID from a well-respected authority (MedCram), aimed at medical professionals and whose videos qualify for continuing medical education credits, were taken off YouTube for spreading misleading information. It took weeks to get them back and almost ended the channel on YT.
    • by Evtim ( 1022085 )

      What would probably happen if a human was moderating the case you cite is that they'd initiate Twitter rage to remove the terms black and white from chess. Then ban it.

    • Reminds me of South-African chess, only white can capture...
  • This is a call to arms Christians! You must all become YouTube moderators and clean out all the filth that doesn't align with your particular brand and flavor of religion.
  • Good start (Score:4, Insightful)

    by onyxruby ( 118189 ) <onyxruby@@@comcast...net> on Monday September 21, 2020 @07:10AM (#60527050)

    That is a good start; now they just have to do something about the blatant bias their human censors have long practiced. Hate speech has long since become a euphemism for any speech that is right of center. Nobody is fooled by big tech's claims of neutrality; over 70% of Americans believe:

    that it is somewhat or very likely that social media sites intentionally censor political viewpoints they find objectionable

    https://www.pewresearch.org/fa... [pewresearch.org]

    Example after example has been posted online by people showing that threats of violence, sexual assault, racism, sexism, religious bigotry and even genocide are allowed to stand - so long as the victim is acceptable. Condoning or condemning something for one group and not another is by definition bias and bigotry. Is it any wonder that half of Americans don't trust fact checkers?

    https://www.mysuncoast.com/201... [mysuncoast.com]

    It doesn't much matter if it's an algorithm or a human doing the censorship when they're both being trained to favor the same bias. Only the most partisan of people can deny the favorable view of progressive speech and the hard line drawn against conservative speech. I don't think any intellectually honest person can claim that big tech is unbiased.

    Frankly, I don't think there is any way for Google to regain the trust of the world's citizens short of Federal oversight of their monopoly. Everything that impacts search rank and autocomplete across all of their products, including algorithms, blacklists, whitelists and manual manipulations of all kinds, needs to be overseen for bias.

    • Frankly, I don't think there is any way for Google to regain the trust of the world's citizens short of Federal oversight of their monopoly. Everything that impacts search rank and autocomplete across all of their products, including algorithms, blacklists, whitelists and manual manipulations of all kinds, needs to be overseen for bias.

      Seems like a good opportunity for an unbiased competitor to take their market.

      • They have a monopoly. They have used their monopoly to quash competitors again and again. I do recall the EU finding them guilty of this behavior and fining them massive amounts of money over the matter. Last time they were fined $1.7 billion over their behavior in the EU alone over this.

        https://ec.europa.eu/commissio... [europa.eu].

        They've been investigated all over the world for abusing their monopoly, including by all 50 state attorneys general - and the feds? Microsoft has literally spent billions of dollars trying to co

        • Sigh. I debated whether I want to wade into this with you, but I decided it's not worth my time. After we drill down into each of the major points we'll end up disagreeing over unprovable interpretations of actions, and there will likely be innumerable other distractions along the way... assuming we even both put in the effort to fully explore the questions.

          So, I'll let you have the last word (since I'm obviously not providing any sort of substantive response here, just explaining why I'm not).

          Have a nice day.

          • An agreement to disagree and let's keep things civil - we need more of that. I shall also wish you a good day.

    • now they just have to do something about the blatant bias their human censors have long practiced. Hate speech has long since become a euphemism for any speech that is right of center

      That is exactly why Google is going back to humans, because the AI was not just blocking anything right of center, but also some left of center content as well.

      Humans should be much more reliable in banning all right of center content, and allowing anything to be posted by those on the left, especially important as the election

      • Good point, I recall hearing about some gay dating videos getting censored lately as well. There is no question it would have been inadvertent.

  • by PingSpike ( 947548 ) on Monday September 21, 2020 @07:14AM (#60527064)

    This is the highest number of videos taken down from YouTube over a three-month period since the service was launched in 2005, and it was attributed to the higher reliance on A.I. as the pandemic prevented human reviewers from going to work.

    Just to be clear: One of the largest and richest tech companies in the world couldn't figure out a way to have their employees review youtube videos from home.

    • This is the highest number of videos taken down from YouTube over a three-month period since the service was launched in 2005, and it was attributed to the higher reliance on A.I. as the pandemic prevented human reviewers from going to work.

      Just to be clear: One of the largest and richest tech companies in the world couldn't figure out a way to have their employees review youtube videos from home.

      What makes that especially interesting is that all of Google's R&D staff is working from home, and that transition was done basically over a weekend.

      My guess is that YouTube reviewers are contractors, not employees, and security policy baked deeply into systems barred contractors from accessing sensitive internal systems (like the review interface) except from Google-owned hardware connected to the Google corporate network, or from contract company-owned hardware/network, which is what they probably u

  • I'm not sure this applies to YouTube, but wasn't this job supposed to be causing PTSD and breakdowns on a regular basis because the videos they have to watch are so awful? That might have been Facebook.
    • I guess that goes for content moderation on every platform, since, by definition, things are unfiltered until they get to the moderators.
  • by RightwingNutjob ( 1302813 ) on Monday September 21, 2020 @08:17AM (#60527262)
    is like boasting about removing more words in the newest edition of the Newspeak dictionary.
  • The article says that more moderation was done by AI because of COVID-19 restrictions, which implies that Google viewed this as a job that couldn't be done from home.

    That seems like an odd position to take. It's a job that requires no intensive interaction with others, and no special tools. Just a computer with a VPN, and off you go.

  • "A method for reviewing content using biological entities raised and educated in Homo Sapien societies."

  • The AI having false positives isn't the issue; the problem lies with the appeal process. Having enough humans use their brains to evaluate appeals in a timely manner would certainly mitigate the problem. I don't think the AI is bad enough that you'd need more people to check appeals efficiently than you'd need to do the moderating efficiently in the first place.
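
    As a rough sanity check on that claim, here is a minimal sketch in Python; the reviewer throughput and quarter length are illustrative assumptions (nothing Google has published), while the removal count comes from the summary and the appeal rate is the ~2.8% implied by its figures:

    # Capacity estimate: reviewing only appeals vs. moderating every removal.
    # Throughput and quarter length are illustrative assumptions, not published
    # numbers; removals and the implied appeal rate come from the summary.
    removals_per_quarter = 11_400_000  # from the summary
    appeal_rate = 0.028                # implied: ~320k appeals / 11.4M removals
    reviews_per_person_day = 200       # assumed reviewer throughput
    work_days_per_quarter = 60         # assumed working days in a quarter

    def reviewers_needed(items: float) -> float:
        """People needed to review `items` items within one quarter."""
        return items / (reviews_per_person_day * work_days_per_quarter)

    print(f"moderate every removal: {reviewers_needed(removals_per_quarter):.0f} people")
    print(f"review only appeals:    {reviewers_needed(removals_per_quarter * appeal_rate):.0f} people")

    Under those assumptions it works out to roughly 950 reviewers to re-check every removal versus about 27 to handle just the appeals, which supports the point that staffing the appeal queue is the cheap part.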
  • Already my fantasy clock wakes me every morning with "Awaken, Human! Rise!"

    Perhaps it will add, "Do some YouTube moderation, Human!"

  • Because we are all aware that Conservanazis are not people
