
Crowdsourcing the Censors: A Contest

Frequent contributor Bennett Haselton is back with an article about how sites with huge amounts of user-generated content struggle to deal with abuse complaints, and could benefit from a crowd-sourced policing system similar to Slashdot's meta-moderation. He writes "In The Net Delusion, Evgeny Morozov cites examples of online mobs that filed phony abuse complaints in order to shut down pro-democracy Facebook groups and YouTube videos criticizing the Saudi royal family. I've got an idea for an algorithm that would help solve the problem, and I'm offering $100 (or a donation to a charity of your choice) for the best suggested improvement, or alternative, or criticism of the idea proposed in this article." Hit the link below to read the rest of his thoughts.

Before you get bored and click away: I'm proposing an algorithm for Facebook (and similar sites) to use to review "abuse reports" in a scalable and efficient manner, and I'm offering a total of $100 (or more) to the reader (or to some charity designated by them) who proposes the best improvement(s) or alternative(s) to the algorithm. We now proceed with your standard boilerplate introductory paragraph.

In his new book The Net Delusion: The Dark Side of Internet Freedom, Evgeny Morozov cites examples of Facebook users organizing campaigns to shut down particular groups or user accounts by filing phony complaints against them. One Hong Kong-based Facebook group with over 80,000 members, formed to oppose the pro-Beijing Democratic Alliance for the Betterment and Progress of Hong Kong, was shut down by opponents flagging the group as "abusive" on Facebook. In another incident, the Moroccan activist Kacem El Ghazzali found his Facebook group Youth for the Separation between Religion and Education deleted without explanation, and when he e-mailed Facebook to ask why, his personal Facebook profile got canned as well. Only after an international outcry did Facebook restore the group (but, oddly, not El Ghazzali's personal Facebook account), but they refused to explain the original removal; the most likely cause was a torrent of phony "complaints" from opponents. In both cases it seemed clear that the groups did not actually violate Facebook's Terms of Service, but the number of complaints presumably convinced either a software algorithm or an overworked human reviewer that something must have been inappropriate, and the forums were shut down. The Net Delusion also describes a group of conservative Saudi citizens calling themselves "Saudi Flagger" that coordinates filing complaints en masse against YouTube videos which criticize Islam or the Saudi royal family.

A large number of abuse reports against a single Facebook group or YouTube video stands a good chance of triggering a takedown; with 2,000 employees managing 500 million users, Facebook surely doesn't have time to review every abuse report properly. About once a month I still get an email from Facebook with the subject "Facebook Warning" saying:

You have been sending harassing messages to other users. This is a violation of Facebook's Terms of Use. Among other things, messages that are hateful, threatening, or obscene are not allowed. Continued misuse of Facebook's features could result in your account being disabled.

I still have no idea what is triggering the "warnings"; the meanest thing I usually say on Facebook is to people who write to me asking for tech support (usually with the proxy sites to get on Facebook at school), when they say "It gives me an error", and I write back, "TELL ME THE ACTUAL ERROR MESSAGE THAT IT GIVES YOU!!" (Typical reply: "It gave me an error that it can't do it." If you work in tech support, I feel your pain.) I suspect the "abuse reports" are probably coming from parents who hack into their teenagers' accounts, see their teens corresponding with me about how to get on Facebook or YouTube at school, and decide to file an "abuse report" against my account just for the hell of it. If Facebook makes it that easy for a lone gunman to cause trouble with fake complaints, imagine how much trouble you can make with a well-coordinated mob.

But I think an algorithm could be implemented that would enable users to police for genuinely abusive content, without allowing hordes of vigilantes to get content removed that they simply don't like. Taking Facebook as an example, a simple change in the crowdsourcing algorithm could solve the whole problem: use the votes of users who are randomly selected by Facebook, rather than users who self-select by filing the abuse reports. This is similar to an algorithm I'd suggested for stopping vigilante campaigns from "burying" legitimate content on Digg (and indeed, stopping illegitimate self-promotion on Digg at the same time), and as a general algorithm for preventing good ideas from being lost in the glut of competing online content. But if phony "abuse reports" are also being used to squelch free speech in countries like China and Saudi Arabia, then the moral case for solving the problem is all the more compelling.

Here's how the algorithm would work: Facebook can ask some random fraction of their users, "Would you like to be a volunteer reviewer of abuse reports?" (Would you sign up? Come on. Wouldn't you be a little bit curious what sort of interesting stuff would be brought to your attention?) Wait until they've built up a roster of reviewers (say, 20,000). Then suppose Facebook receives an abuse report (or several abuse reports, whatever their threshold is) about a particular Facebook group. Facebook can then randomly select some subset of its volunteer reviewers, say, 100 of them. This is tiny as a proportion of the total number of reviewers (with a "jury" size of 100 and a "jury pool" of 20,000, a given reviewer has only a 1 in 200 chance of being called for "jury duty" for any particular complaint), but still large enough that the results are statistically significant. Tell them, "This is the content that users have been complaining about, and here is the reason that they say it violates our terms of service. Are these legitimate complaints, or not?" If the number of "Yes" votes exceeds some threshold, then the group gets shuttered.
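The jury mechanics described above can be sketched in a few lines of Python. The pool size, jury size, and approval threshold are the illustrative numbers from the text, and all function names are hypothetical; this is a sketch of the idea, not Facebook's actual system:

```python
import random

JURY_SIZE = 100           # illustrative jury size from the text
APPROVAL_THRESHOLD = 0.5  # hypothetical fraction of "Yes" votes to uphold a report

def convene_jury(volunteer_pool, jury_size=JURY_SIZE):
    """Randomly select a jury from the full pool of volunteer reviewers."""
    return random.sample(volunteer_pool, jury_size)

def review_report(jury_votes, threshold=APPROVAL_THRESHOLD):
    """Uphold the abuse report only if enough jurors agree it's legitimate."""
    yes_votes = sum(1 for v in jury_votes if v == "yes")
    return yes_votes / len(jury_votes) >= threshold

# Example: a pool of 20,000 volunteers, jury of 100
pool = list(range(20_000))
jury = convene_jury(pool)
print(len(jury))  # 100
```

The key design choice is that the jurors are drawn by the site, not self-selected by the complainants, so a complaint campaign has no control over who votes.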

It's much harder to cheat in this system than in an "abuse report" system in which users simply band together and file phony abuse reports against a group until it gets taken down. If the 200 members of "Saudi Flagger" signed up as volunteer reviewers, then they would comprise only 1% of a jury pool of 20,000 users, and on average would get only one vote on a jury of 100. You'd have to organize such a large mob that your numbers would comprise a significant portion of the 20,000 volunteer reviewers, so that you would have a significant voting bloc in a given jury. (And my guess is that Facebook would have a lot more than 20,000 curious volunteers signed up as reviewers.) On the other hand, if someone creates a group with actual hateful content or built around a campaign of illegal harassment, and the abuse reports start coming in until a jury vote is triggered, then a randomly selected jury of reviewers would probably cast enough "Yes" votes to validate the abuse reports.
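Because jury seats are drawn without replacement from the volunteer pool, a bloc's odds of capturing a jury follow the hypergeometric distribution, and can be computed exactly. The function below is a hypothetical sketch using the numbers from the text (a 200-member bloc, a 20,000-member pool, a 100-seat jury):

```python
from math import comb

def prob_bloc_seats_at_least(pool, bloc, jury, k):
    """P(a coordinated bloc wins at least k of the jury's seats),
    with seats drawn without replacement from the volunteer pool
    (hypergeometric distribution)."""
    total = comb(pool, jury)
    return sum(comb(bloc, i) * comb(pool - bloc, jury - i)
               for i in range(k, min(bloc, jury) + 1)) / total

# A 200-member bloc in a pool of 20,000 holds, on average,
# 100 * 200 / 20,000 = 1 seat on a 100-member jury.  Its chance of
# ever holding half the jury is astronomically small:
p = prob_bloc_seats_at_least(20_000, 200, 100, 50)
print(f"{p:.3g}")  # vanishingly small
```

This is the quantitative version of the argument: to swing verdicts, the mob must be large relative to the whole volunteer pool, not just large in absolute terms.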

Jurors could in fact be given three voting choices:

  • "This group really is abusive" (i.e. the abuse reports were legitimate), or;
  • "This group does not technically violate the Terms of Service, but the users who filed abuse reports were probably making an honest mistake" (perhaps a common choice for groups that support controversial causes, or that publish information about semi-private individuals); or
  • "This group does not violate the TOS, and the abuse reports were bogus to begin with" (i.e. almost no reasonable person could have believed that the group really did violate the TOS, and the abuse reports were probably part of an organized campaign to get the group removed).

This strongly discourages users from organizing mob efforts against legitimate groups; if most of the jury ends up voting for the third choice, "This is an obviously legitimate group and the complaints were just an organized vigilante campaign", then the users who filed the complaints could have their own accounts penalized.
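The three-way ballot and the reporter-penalty rule could be tallied as follows; the vote labels, thresholds, and function names here are all hypothetical choices for illustration:

```python
from collections import Counter

# Hypothetical labels for the three voting choices
ABUSIVE, HONEST_MISTAKE, BOGUS = "abusive", "honest_mistake", "bogus"

def adjudicate(votes, takedown_threshold=0.5, penalty_threshold=0.5):
    """Return (remove_group, penalize_reporters) from a jury's votes.

    The group comes down only if enough jurors call it abusive; the
    complainants are penalized only if enough jurors call the reports bogus.
    """
    tally = Counter(votes)
    n = len(votes)
    remove_group = tally[ABUSIVE] / n >= takedown_threshold
    penalize_reporters = tally[BOGUS] / n >= penalty_threshold
    return remove_group, penalize_reporters

# A vigilante campaign against a clearly legitimate group:
votes = [BOGUS] * 70 + [HONEST_MISTAKE] * 20 + [ABUSIVE] * 10
print(adjudicate(votes))  # (False, True): keep the group, penalize reporters
```

Note that the middle choice deliberately triggers neither outcome, so honest-but-mistaken reporters are left alone.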

What I like about this algorithm is that the sizes and thresholds can be tweaked according to what you discover about the habits of the volunteer reviewers. Suppose most volunteer reviewers turn out to be deadbeats who don't respond to "jury duty" when they're actually called upon to vote in an abuse report case. Fine — just increase the size of the jury, until the average number of users in a randomly convened jury who do respond is large enough to be statistically significant. Or, suppose it turns out that people who sign up to review reported content are a more prudish bunch than average, and their votes tend to skew towards "delete it now!" in a way that is not representative of the general Facebook community. Fine — just raise the threshold for the percentage of "Yes" votes required to get content deleted. All that's required for the algorithm to work is that content which clearly does violate the Terms of Service gets more "Yes" votes on average than content that doesn't. Then make the jury size large enough that the voting results are statistically significant, so you can tell which side of the threshold you're on.
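The "large enough to be statistically significant" requirement can be made concrete with the standard normal approximation for a binomial proportion, inflated by the expected response rate. This sketch uses the conservative worst case (true "Yes" rate of 0.5) and a 95% confidence level; the numbers and function name are my own illustration, not anything from the article:

```python
from math import ceil

def required_jury_size(margin=0.05, response_rate=1.0, z=1.96):
    """Smallest jury whose observed "Yes" fraction lands within `margin`
    of the true rate with ~95% confidence (worst case p = 0.5),
    inflated to account for jurors who never respond."""
    respondents_needed = (z ** 2) * 0.25 / (margin ** 2)
    return ceil(respondents_needed / response_rate)

print(required_jury_size())                    # 385 respondents needed
print(required_jury_size(response_rate=0.25))  # 4x as many invitations if only 25% vote
```

So "just increase the size of the jury" translates directly into dividing the required number of respondents by the observed response rate.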

Another beneficial feature of the algorithm is that it's scalable — there's no bottleneck of overworked reviewers at Facebook headquarters who have to review every decision. (They should probably review a random subset of the decisions to make sure the "juries" are reaching what seem to be the right answers, but they don't have to check every one.) If Facebook doubles in size — and the amount of "abusive content" and the number of abuse reports doubles along with it — then as long as the pool of volunteer reviewers also doubles, each reviewer has no greater workload than before. But the workload of the abuse department at Facebook doesn't double.

Now, this algorithm ducks the question of how to handle "borderline" content. If a student creates a Facebook group called "MR. LANGAN IS A BUTT BRAIN," is that "harassment" or not? I would say no, but I'm not confident that a randomly selected pool of reviewers would agree. However, the point of this algorithm is to make sure that if content is posted on Facebook that almost nobody would reasonably agree is a violation of their Terms of Service, then a group of vigilantes can't get it removed by filing a torrent of abuse reports.

Also, this proposal can't do much about Facebook's Terms of Service being prudish to begin with. A Frenchman recently had his account suspended because he used a 19th-century oil painting of an artistic nude as his profile picture. Well, Facebook's TOS prohibits nudity -- not just sexual nudity, but all nudity, period. Even under my proposed algorithm, jurors would presumably have to be honest and vote that the painting did in fact violate Facebook's TOS, unless or until Facebook changes the rules. (For that matter, maybe this wasn't a case of prudishness anyway. I mean, we know it's "artistic" because it's more than 100 years old and it was painted in oils, right? Yeah, well check out the painting that the guy used as his profile picture. It presumably didn't help that the painting is so good that the Facebook censors probably thought it was a photograph.)

But notwithstanding these problems, this algorithm was the best trade-off I could come up with in terms of scalability and fairness. So here's the contest: Send me your best alternative, or best suggested improvement, or best fatal flaw in this proposal (even if you don't come up with something better, the discovery of a fatal flaw is still valuable) for a chance to win (a portion of) the $100 -- or, you can designate a charity to be the recipient of your winnings. Send your ideas to bennett at peacefire dot org and put "reporting" in the subject line. I reserve the right to split the prize between multiple winners, or to pay out more than the original $100 (or give winners the right to designate charitable donations totalling more than $100) if enough good points come in (or to pay out less than $100 if there's a real dearth of valid points, but there are enough brainiacs reading this that I think that's unlikely). In order for the contest not to detract from the discussion taking place in the comment threads, if more than one reader submits essentially the same idea, I'll give the credit to the first submitter -- so as you're sending me your idea, you can feel free to share it in the comment threads as well without worrying about someone re-submitting it and stealing a portion of your winnings. (If your submission is, "Bennett, your articles would be much shorter if you just state your conclusion, instead of also including a supporting argument and addressing possible objections", feel free to submit that just in the comment threads.)

In The Net Delusion, Morozov concludes his section on phony abuse reports by saying, "Good judgment, as it turns out, cannot be crowdsourced, if only because special interests always steer the process to suit their own objectives." I think he's right about the problems, but I disagree that they're unsolvable. I think my algorithm does in fact prevent "special interests" from "steering the process", but I'll pay to be convinced that I'm wrong. Today I'm just choosing the "winners" of the contest myself; maybe someday I'll crowdsource the decision by letting a randomly selected subset of users vote on the merits of each proposal... but I'm sure some of you are dying to tell me why that's a bad idea.

Comments:
  • by yakatz ( 1176317 ) on Friday April 15, 2011 @12:32PM (#35829980) Homepage Journal

    This sounds a lot like the slashdot moderation scheme...

    For those who did not know, you can get the source code behind slashdot here [slashcode.com]

  • by JMZero ( 449047 ) on Friday April 15, 2011 @12:56PM (#35830284) Homepage

...but I have to say it's ironic that you're posting about this algorithm on Slashdot, a site whose moderation system has incorporated the best of your ideas for years, and yet that doesn't seem to come up when you're asking for ideas.

    I like the Slashdot system. Moderators are assigned points at times beyond their control, to prevent just the kind of abuses you mention. There's appropriate feedback control on how moderators behave. The job of moderating (and meta-moderating) is presented and appreciated in such a way that people actually do it. People are picked to do moderation in a reasonable way. The process is transparent, and the proof that it works is that the Slashdot comments you typically see are actually not horrible (usually) and sometimes are quite informative.

  • Re:Meta (Score:2, Informative)

    by raddan ( 519638 ) * on Friday April 15, 2011 @12:58PM (#35830314)
    People have been doing that on Mechanical Turk for awhile now. It makes sense. If you think of a human as a very slow, error-prone CPU, the solution is obvious:
    1. Get more people
    2. Have them check each other's work

    The second part relies on the independence of the people -- i.e., they are not colluding to distort your "computation". But crowdsourcing sites like MTurk and Slashdot effectively mitigate this by 1) having a large user base from which they 2) sample randomly. MTurk allows you to do crowdsourced checking of crowdsourced content by exposing the worker_id, so that you can exclude a worker who participated in one step from participating in another. Slashdot's moderation system prevents people from posting and moderating in the same discussion.

    This paper [acm.org] coins the term "inter-annotator agreement" for this idea for NLP-type tasks. There are other papers, too, but I have to get back to work.
