
Crowdsourcing the Censors: A Contest

Posted by Soulskill
from the people-love-to-vote dept.
Frequent contributor Bennett Haselton is back with an article about how sites with huge amounts of user-generated content struggle to deal with abuse complaints, and could benefit from a crowd-sourced policing system similar to Slashdot's meta-moderation. He writes "In The Net Delusion, Evgeny Morozov cites examples of online mobs that filed phony abuse complaints in order to shut down pro-democracy Facebook groups and YouTube videos criticizing the Saudi royal family. I've got an idea for an algorithm that would help solve the problem, and I'm offering $100 (or a donation to a charity of your choice) for the best suggested improvement, or alternative, or criticism of the idea proposed in this article." Hit the link below to read the rest of his thoughts.

Before you get bored and click away: I'm proposing an algorithm for Facebook (and similar sites) to use to review "abuse reports" in a scalable and efficient manner, and I'm offering a total of $100 (or more) to the reader (or to some charity designated by them) who proposes the best improvement(s) or alternative(s) to the algorithm. We now proceed with your standard boilerplate introductory paragraph.

In his new book The Net Delusion: The Dark Side of Internet Freedom, Evgeny Morozov cites examples of Facebook users organizing campaigns to shut down particular groups or user accounts by filing phony complaints against them. One Hong Kong-based Facebook group with over 80,000 members, formed to oppose the pro-Beijing Democratic Alliance for the Betterment and Progress of Hong Kong, was shut down by opponents flagging the group as "abusive" on Facebook. In another incident, the Moroccan activist Kacem El Ghazzali found his Facebook group Youth for the Separation between Religion and Education deleted without explanation, and when he e-mailed Facebook to ask why, his personal Facebook profile got canned as well. Only after an international outcry did Facebook restore the group (but, oddly, not El Ghazzali's personal account), and it refused to explain the original removal; the most likely cause was a torrent of phony "complaints" from opponents. In both cases it seemed clear that the groups did not actually violate Facebook's Terms of Service, but the number of complaints presumably convinced either a software algorithm or an overworked human reviewer that something must have been inappropriate, and the forums were shut down. The Net Delusion also describes a group of conservative Saudi citizens calling themselves "Saudi Flagger" that coordinates filing complaints en masse against YouTube videos which criticize Islam or the Saudi royal family.

A large number of abuse reports against a single Facebook group or YouTube video probably has a good chance of triggering a takedown; with 2,000 employees managing 500 million users, Facebook surely doesn't have time to review every abuse report properly. About once a month I still get an email from Facebook with the subject "Facebook Warning" saying:

You have been sending harassing messages to other users. This is a violation of Facebook's Terms of Use. Among other things, messages that are hateful, threatening, or obscene are not allowed. Continued misuse of Facebook's features could result in your account being disabled.

I still have no idea what is triggering the "warnings"; the meanest thing I usually say on Facebook is to people who write to me asking for tech support (usually about the proxy sites they use to get on Facebook at school), when they say "It gives me an error," and I write back, "TELL ME THE ACTUAL ERROR MESSAGE THAT IT GIVES YOU!!" (Typical reply: "It gave me an error that it can't do it." If you work in tech support, I feel your pain.) I suspect the "abuse reports" are probably coming from parents who hack into their teenagers' accounts, see their teens corresponding with me about how to get on Facebook or YouTube at school, and decide to file an "abuse report" against my account just for the hell of it. If Facebook makes it that easy for a lone gunman to cause trouble with fake complaints, imagine how much trouble you can make with a well-coordinated mob.

But I think an algorithm could be implemented that would enable users to police for genuinely abusive content, without allowing hordes of vigilantes to get content removed that they simply don't like. Taking Facebook as an example, a simple change in the crowdsourcing algorithm could solve the whole problem: use the votes of users who are randomly selected by Facebook, rather than users who self-select by filing the abuse reports. This is similar to an algorithm I'd suggested for stopping vigilante campaigns from "burying" legitimate content on Digg (and indeed, stopping illegitimate self-promotion on Digg at the same time), and as a general algorithm for preventing good ideas from being lost in the glut of competing online content. But if phony "abuse reports" are also being used to squelch free speech in countries like China and Saudi Arabia, then the moral case for solving the problem is all the more compelling.

Here's how the algorithm would work: Facebook can ask some random fraction of their users, "Would you like to be a volunteer reviewer of abuse reports?" (Would you sign up? Come on. Wouldn't you be a little bit curious what sort of interesting stuff would be brought to your attention?) Wait until they've built up a roster of reviewers (say, 20,000). Then suppose Facebook receives an abuse report (or several abuse reports, whatever their threshold is) about a particular Facebook group. Facebook can then randomly select some subset of its volunteer reviewers, say, 100 of them. This is tiny as a proportion of the total number of reviewers (with a "jury" size of 100 and a "jury pool" of 20,000, a given reviewer has only a 1 in 200 chance of being called for "jury duty" for any particular complaint), but still large enough that the results are statistically significant. Tell them, "This is the content that users have been complaining about, and here is the reason that they say it violates our terms of service. Are these legitimate complaints, or not?" If the number of "Yes" votes exceeds some threshold, then the group gets shuttered.
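As a rough sketch of the mechanism described above (in Python; the function names, jury size, and threshold are all illustrative choices of mine, not anything Facebook actually exposes):

```python
import random

def convene_jury(volunteer_pool, jury_size=100):
    """Randomly select a jury from the roster of volunteer reviewers."""
    if jury_size > len(volunteer_pool):
        raise ValueError("jury cannot be larger than the pool")
    return random.sample(volunteer_pool, jury_size)

def review_report(jury_votes, threshold=0.5):
    """Shut the group down if the fraction of 'yes' votes
    (i.e. 'the complaints are legitimate') exceeds the threshold."""
    yes = sum(1 for v in jury_votes if v == "yes")
    return yes / len(jury_votes) > threshold
```

The key property is that the jury is drawn at random by the site, rather than self-selected by whoever filed the complaints.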

It's much harder to cheat in this system than in an "abuse report" system in which users simply band together and file phony abuse reports against a group until it gets taken down. If the 200 members of "Saudi Flagger" signed up as volunteer reviewers, they would comprise only 1% of a jury pool of 20,000 users, and on average would get only one vote on a jury of 100. You'd have to organize such a large mob that your numbers would comprise a significant portion of the 20,000 volunteer reviewers, so that you would have a significant voting bloc on a given jury. (And my guess is that Facebook would have a lot more than 20,000 curious volunteers signed up as reviewers.) On the other hand, if someone creates a group with actual hateful content or built around a campaign of illegal harassment, and the abuse reports start coming in until a jury vote is triggered, then a randomly selected jury of reviewers would probably cast enough "Yes" votes to validate the abuse reports.
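The arithmetic behind the "Saudi Flagger" example is hypergeometric: a mob of 200 in a pool of 20,000 expects one seat on a 100-person jury, and its chance of capturing a majority is vanishingly small. A quick sketch (the numbers come from the article; the function names are mine):

```python
from math import comb

def expected_mob_seats(pool_size, mob_size, jury_size):
    """Average number of jury seats a mob of flaggers can expect."""
    return jury_size * mob_size / pool_size

def prob_mob_at_least(pool_size, mob_size, jury_size, seats):
    """Hypergeometric probability that the mob lands at least
    `seats` members on a randomly drawn jury."""
    total = comb(pool_size, jury_size)
    return sum(
        comb(mob_size, k) * comb(pool_size - mob_size, jury_size - k)
        for k in range(seats, min(mob_size, jury_size) + 1)
    ) / total
```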

Jurors could in fact be given three voting choices:

  • "This group really is abusive" (i.e. the abuse reports were legitimate), or;
  • "This group does not technically violate the Terms of Service, but the users who filed abuse reports were probably making an honest mistake" (perhaps a common choice for groups that support controversial causes, or that publish information about semi-private individuals); or
  • "This group does not violate the TOS, and the abuse reports were bogus to begin with" (i.e. almost no reasonable person could have believed that the group really did violate the TOS, and the abuse reports were probably part of an organized campaign to get the group removed).

This strongly discourages users from organizing mob efforts against legitimate groups; if most of the jury ends up voting for the third choice, "This is an obviously legitimate group and the complaints were just an organized vigilante campaign", then the users who filed the complaints could have their own accounts penalized.
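A tally over the three voting choices might be sketched like this (the vote labels and both thresholds are illustrative; the article leaves them open):

```python
from collections import Counter

def tally(votes, delete_threshold=0.5, bogus_threshold=0.5):
    """votes: list of 'abusive', 'honest_mistake', or 'bogus'.
    Returns (take_group_down, penalize_complainers)."""
    counts = Counter(votes)
    n = len(votes)
    take_down = counts["abusive"] / n > delete_threshold
    penalize = counts["bogus"] / n > bogus_threshold
    return take_down, penalize
```

Note that "honest mistake" votes count against both outcomes: the group stays up, but the complainers go unpunished.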

What I like about this algorithm is that the sizes and thresholds can be tweaked according to what you discover about the habits of the Facebook content reviewers. Suppose most volunteer reviewers turn out to be deadbeats who don't respond to "jury duty" when they're actually called upon to vote in an abuse report case. Fine — just increase the size of the jury until the average number of users in a randomly convened jury who do respond is large enough to be statistically significant. Or suppose it turns out that people who sign up to review flagged content are a more prudish bunch than average, and their votes tend to skew towards "delete it now!" in a way that is not representative of the general Facebook community. Fine — just raise the threshold for the percentage of "Yes" votes required to get content deleted. All that's required for the algorithm to work is that content which clearly does violate the Terms of Service gets more "Yes" votes on average than content that doesn't. Then make the jury size large enough that the voting results are statistically significant, so you can tell which side of the threshold you're on.
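The two tuning knobs described above — inviting extra jurors to compensate for deadbeats, and checking that a jury is big enough to be statistically meaningful — can be sketched as follows (the margin-of-error formula is the standard worst-case normal approximation, my addition rather than anything from the article):

```python
from math import ceil, sqrt

def invitations_needed(target_responses, response_rate):
    """Invite enough jurors that the expected number who actually
    respond reaches the target jury size."""
    return ceil(target_responses / response_rate)

def margin_of_error(n_votes, z=1.96):
    """Worst-case 95% margin of error on the estimated 'yes' fraction
    from n_votes responses (normal approximation at p = 0.5)."""
    return z * sqrt(0.25 / n_votes)
```

So if only a quarter of invited jurors respond, you invite 400 to seat 100, and 100 responses pin the "yes" fraction down to within about ten percentage points — enough to tell clear violations from clear non-violations, though not to settle close calls.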

Another beneficial feature of the algorithm is that it's scalable — there's no bottleneck of overworked reviewers at Facebook headquarters who have to review every decision. (They should probably review a random subset of the decisions to make sure the "juries" are getting what seems to be the right answer, but they don't have to check every one.) If Facebook doubles in size — and the amount of "abusive content" and the number of abuse reports doubles along with it — then as long as the pool of volunteer reviewers also doubles, each reviewer has no greater workload than they had before. But the workload of the abuse department at Facebook doesn't double.

Now, this algorithm ducks the question of how to handle "borderline" content. If a student creates a Facebook group called "MR. LANGAN IS A BUTT BRAIN," is that "harassment" or not? I would say no, but I'm not confident that a randomly selected pool of reviewers would agree. However, the point of this algorithm is to make sure that if content is posted on Facebook that almost nobody would reasonably agree is a violation of their Terms of Service, then a group of vigilantes can't get it removed by filing a torrent of abuse reports.

Also, this proposal can't do much about Facebook's Terms of Service being prudish to begin with. A Frenchman recently had his account suspended because he used a 19th-century oil painting of an artistic nude as his profile picture. Well, Facebook's TOS prohibits nudity -- not just sexual nudity, but all nudity, period. Even under my proposed algorithm, jurors would presumably have to be honest and vote that the painting did in fact violate Facebook's TOS, unless or until Facebook changes the rules. (For that matter, maybe this wasn't a case of prudishness anyway. I mean, we know it's "artistic" because it's more than 100 years old and it was painted in oils, right? Yeah, well check out the painting that the guy used as his profile picture. It presumably didn't help that the painting is so good that the Facebook censors probably thought it was a photograph.)

But notwithstanding these problems, this algorithm was the best trade-off I could come up with in terms of scalability and fairness. So here's the contest: Send me your best alternative, or best suggested improvement, or best fatal flaw in this proposal (even if you don't come up with something better, the discovery of a fatal flaw is still valuable) for a chance to win (a portion of) the $100 -- or, you can designate a charity to be the recipient of your winnings. Send your ideas to bennett at peacefire dot org and put "reporting" in the subject line. I reserve the right to split the prize between multiple winners, or to pay out more than the original $100 (or give winners the right to designate charitable donations totaling more than $100) if enough good points come in (or to pay out less than $100 if there's a real dearth of valid points, but there are enough brainiacs reading this that I think that's unlikely). In order for the contest not to detract from the discussion taking place in the comment threads, if more than one reader submits essentially the same idea, I'll give the credit to the first submitter -- so as you're sending me your idea, you can feel free to share it in the comment threads as well without worrying about someone re-submitting it and stealing a portion of your winnings. (If your submission is, "Bennett, your articles would be much shorter if you just stated your conclusion, instead of also including a supporting argument and addressing possible objections," feel free to submit that just in the comment threads.)

In The Net Delusion, Morozov concludes his section on phony abuse reports by saying, "Good judgment, as it turns out, cannot be crowdsourced, if only because special interests always steer the process to suit their own objectives." I think he's right about the problems, but I disagree that they're unsolvable. I think my algorithm does in fact prevent "special interests" from "steering the process", but I'll pay to be convinced that I'm wrong. Today I'm just choosing the "winners" of the contest myself; maybe someday I'll crowdsource the decision by letting a randomly selected subset of users vote on the merits of each proposal... but I'm sure some of you are dying to tell me why that's a bad idea.


Comments Filter:
  • Trying for the $100 (Score:4, Interesting)

    by xkr (786629) on Friday April 15, 2011 @11:56AM (#35830290)

    I have two algorithms, and I suggest that they are more valuable if used together, and indeed, if all three including your algorithm are used together.

    (1) Identify "clumps" of users by who their friends are and by their viewing habits. Facebook has an app that will create a "distance graph," using a published algorithm. It is established that groups of users tend to "clump," and the clumps can be identified algorithmically. For example, for a given user, are there more connections back into the clump than there are to outside the clump? Another way to determine such a clump is by counting the number of loops back to the user. (A friends B, B friends C, C friends A.) Traditional correlation can be used to match viewing habits. This is probably improved by including a time factor in each correlation term. For example, if two users watch the same video within 24 hours of each other, that correlation term gets more weight than if they watched it a week apart.

    Now that you have identified a clump -- which you do not make public -- determine what fraction of the abuse reports come from one or a small number of clumps. That is very suspicious. Also apply a "complaint" factor to the clump as a whole. Clumps with high complaint factors (i.e., that complain frequently) have their complaints de-weighted appropriately. Rather than "on-off" determinations (e.g. "banned"), use variable weightings.

    In this way groups of like-minded users who try to push a specific agenda through abuse complaints would find their activities less and less effective. The more aggressive the clump, the less effective. And, the more the clump acts like a clump, the less effective.
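xkr's clump heuristics could be sketched roughly like this (in Python; the friend-graph shape, the triangle count as a stand-in for "loops back to the user," and the linear de-weighting curve are all my own illustrative choices):

```python
from itertools import combinations

def loops_back(user, friends):
    """Count loops of the form 'A friends B, B friends C, C friends A',
    i.e. triangles through `user`. `friends` maps a user to a set of
    that user's friends."""
    mine = friends.get(user, set())
    return sum(1 for b, c in combinations(sorted(mine), 2)
               if c in friends.get(b, set()))

def complaint_weight(reporters, clump, max_share=0.5):
    """De-weight a batch of abuse reports when too many of them come
    from a single clump. Returns a weight from 1.0 (full weight) down
    to 0.0 -- a variable weighting rather than an on-off ban."""
    from_clump = sum(1 for r in reporters if r in clump)
    share = from_clump / len(reporters)
    if share <= max_share:
        return 1.0
    return 1.0 - (share - max_share) / (1.0 - max_share)
```

The more the reports concentrate in one clump, the less they count — exactly the "more aggressive the clump, the less effective" property described above.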

    (2) Use Wikipedia-style "locking." There is a sequence of locks, from mild to extreme. Mild locks require complaining users to be in good standing and to have been users for a while for the complaint to count. Medium locks require a more detailed look, say, by your set of random reviewers. Extreme locks mean that the item in question has been formally reviewed and the issue is closed. In addition, complaints filed against a locked ("valid") item hurt the credibility score of the complainer.
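The escalating-lock scheme might route complaints like this (the lock names come from the comment; the age threshold and return values are illustrative):

```python
def route_complaint(lock_level, account_age_days, in_good_standing,
                    min_age_days=30):
    """Decide what to do with a complaint against an item, based on
    the item's lock level: 'none', 'mild', 'medium', or 'extreme'."""
    if lock_level == "extreme":
        # Formally reviewed and closed; the complaint may instead
        # hurt the complainer's credibility score.
        return "ignore"
    if lock_level == "medium":
        return "jury"  # route to a set of random reviewers
    if lock_level == "mild":
        ok = in_good_standing and account_age_days >= min_age_days
        return "count" if ok else "ignore"
    return "count"
```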

    I hope this helps.

  • by dkleinsc (563838) on Friday April 15, 2011 @01:13PM (#35831220) Homepage

    Thing is, this problem isn't one of mere trolls. Trolls, spammers, and other forms of lesser life are relatively easy to recognize.

    No, these are paid shills and organized groups with an agenda. And that's much much harder to stop, because they will have 'spies' trying to infiltrate and/or control your jury selection, 'lawyers' looking for loopholes in your system, and a semi-disciplined mob who will be happy to carry out their plans carefully.

    An example of what they might do if they were trying to take over /. :
    1. See if they could find and crack old accounts that haven't been used in a while, so they could have nice low UIDs. These are your 'pioneer' accounts. If you aren't willing or able to pull that off, make some new accounts, but expect the takeover to take longer.
    2. Have the 'pioneers' post some smart and funny comments about stuff unrelated to your organization's angle to build karma and become moderators.
    3. Have your larger Wave 2 come in, possibly with new accounts. Still be reasonably smart and funny on stuff unrelated to the organization's angle. Have your pioneers mod up the Wave 2 posts.
    4. Repeat steps 2 and 3 until your group has a large enough pool of mods so that you can have at least 5 moderators ready whenever a story related to the organization's ideology comes up.
    5. Now let your mob in. Have your moderators mod up the not-totally-stupid mob posts in support of your organization's ideological position, and possibly mod down as 'Overrated' (because that's not metamodded) anything that would serve to disprove it.
    You now have the desired results: +5 Insightful on posts that agree with $POSITION, -1 Overrated on posts that disagree with it, and an ever-increasing pool of moderators who will behave as you want them to with regards to $POSITION.

    I have no knowledge of whether anyone has carried out this plan already, but it wouldn't surprise me if they had. The system on /. is considerably more resilient than, say, the New York Times comment section or Youtube, but still hackable.
