
Crowdsourcing the Censors: A Contest

Posted by Soulskill
from the people-love-to-vote dept.
Frequent contributor Bennett Haselton is back with an article about how sites with huge amounts of user-generated content struggle to deal with abuse complaints, and could benefit from a crowd-sourced policing system similar to Slashdot's meta-moderation. He writes "In The Net Delusion, Evgeny Morozov cites examples of online mobs that filed phony abuse complaints in order to shut down pro-democracy Facebook groups and YouTube videos criticizing the Saudi royal family. I've got an idea for an algorithm that would help solve the problem, and I'm offering $100 (or a donation to a charity of your choice) for the best suggested improvement, or alternative, or criticism of the idea proposed in this article." Hit the link below to read the rest of his thoughts.

Before you get bored and click away: I'm proposing an algorithm for Facebook (and similar sites) to use to review "abuse reports" in a scalable and efficient manner, and I'm offering a total of $100 (or more) to the reader (or to some charity designated by them) who proposes the best improvement(s) or alternative(s) to the algorithm. We now proceed with your standard boilerplate introductory paragraph.

In his new book The Net Delusion: The Dark Side of Internet Freedom, Evgeny Morozov cites examples of Facebook users organizing campaigns to shut down particular groups or user accounts by filing phony complaints against them. One Hong Kong-based Facebook group with over 80,000 members, formed to oppose the pro-Beijing Democratic Alliance for the Betterment and Progress of Hong Kong, was shut down by opponents flagging the group as "abusive" on Facebook. In another incident, the Moroccan activist Kacem El Ghazzali found his Facebook group Youth for the Separation between Religion and Education deleted without explanation, and when he e-mailed Facebook to ask why, his personal Facebook profile got canned as well. Only after an international outcry did Facebook restore the group (but, oddly, not El Ghazzali's personal account), and they refused to explain the original removal; the most likely cause was a torrent of phony "complaints" from opponents. In both cases it seemed clear that the groups did not actually violate Facebook's Terms of Service, but the number of complaints presumably convinced either a software algorithm or an overworked human reviewer that something must have been inappropriate, and the forums were shut down. The Net Delusion also describes a group of conservative Saudi citizens, calling themselves "Saudi Flagger," who coordinate filing complaints en masse against YouTube videos that criticize Islam or the Saudi royal family.

A large number of abuse reports against a single Facebook group or YouTube video probably has a good chance of triggering a takedown; with 2,000 employees managing 500 million users, Facebook surely doesn't have time to review every abuse report properly. About once a month I still get an email from Facebook with the subject "Facebook Warning" saying:

You have been sending harassing messages to other users. This is a violation of Facebook's Terms of Use. Among other things, messages that are hateful, threatening, or obscene are not allowed. Continued misuse of Facebook's features could result in your account being disabled.

I still have no idea what is triggering the "warnings"; the meanest thing I usually say on Facebook is to people who write to me asking for tech support (usually with the proxy sites to get on Facebook at school), when they say "It gives me an error", and I write back, "TELL ME THE ACTUAL ERROR MESSAGE THAT IT GIVES YOU!!" (Typical reply: "It gave me an error that it can't do it." If you work in tech support, I feel your pain.) I suspect the "abuse reports" are probably coming from parents who hack into their teenagers' accounts, see their teens corresponding with me about how to get on Facebook or YouTube at school, and decide to file an "abuse report" against my account just for the hell of it. If Facebook makes it that easy for a lone gunman to cause trouble with fake complaints, imagine how much trouble you can make with a well-coordinated mob.

But I think an algorithm could be implemented that would enable users to police for genuinely abusive content, without allowing hordes of vigilantes to get content removed that they simply don't like. Taking Facebook as an example, a simple change in the crowdsourcing algorithm could solve the whole problem: use the votes of users who are randomly selected by Facebook, rather than users who self-select by filing the abuse reports. This is similar to an algorithm I'd suggested for stopping vigilante campaigns from "burying" legitimate content on Digg (and indeed, stopping illegitimate self-promotion on Digg at the same time), and as a general algorithm for preventing good ideas from being lost in the glut of competing online content. But if phony "abuse reports" are also being used to squelch free speech in countries like China and Saudi Arabia, then the moral case for solving the problem is all the more compelling.

Here's how the algorithm would work: Facebook can ask some random fraction of their users, "Would you like to be a volunteer reviewer of abuse reports?" (Would you sign up? Come on. Wouldn't you be a little bit curious what sort of interesting stuff would be brought to your attention?) Wait until they've built up a roster of reviewers (say, 20,000). Then suppose Facebook receives an abuse report (or several abuse reports, whatever their threshold is) about a particular Facebook group. Facebook can then randomly select some subset of its volunteer reviewers, say, 100 of them. This is tiny as a proportion of the total number of reviewers (with a "jury" size of 100 and a "jury pool" of 20,000, a given reviewer has only a 1 in 200 chance of being called for "jury duty" for any particular complaint), but still large enough that the results are statistically significant. Tell them, "This is the content that users have been complaining about, and here is the reason that they say it violates our terms of service. Are these legitimate complaints, or not?" If the number of "Yes" votes exceeds some threshold, then the group gets shuttered.
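As a minimal sketch of that selection-and-vote step (the function names, pool size, and threshold are my own illustrative choices, not anything Facebook actually exposes):

```python
import random

def convene_jury(volunteer_pool, jury_size=100):
    """Randomly select a jury from the volunteer reviewer pool.

    A pool of ~20,000 and a jury of ~100 are the article's example numbers.
    """
    if len(volunteer_pool) < jury_size:
        raise ValueError("not enough volunteers to convene a jury")
    return random.sample(volunteer_pool, jury_size)

def verdict(votes, yes_threshold=0.5):
    """Shutter the group only if the fraction of 'yes' votes exceeds the threshold."""
    yes = sum(1 for v in votes if v == "yes")
    return yes / len(votes) > yes_threshold
```

The key design choice is that the site draws the jurors, so the people filing the complaints never get to choose who judges them.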

It's much harder to cheat in this system than in an "abuse report" system in which users simply band together and file phony abuse reports against a group until it gets taken down. If the 200 members of "Saudi Flagger" signed up as volunteer reviewers, they would comprise only 1% of a jury pool of 20,000 users, and on average would get only one vote on a jury of 100. You'd have to organize such a large mob that your numbers would comprise a significant portion of the 20,000 volunteer reviewers, so that you would have a significant voting bloc in any given jury. (And my guess is that Facebook would have a lot more than 20,000 curious volunteers signed up as reviewers.) On the other hand, if someone creates a group with actual hateful content or built around a campaign of illegal harassment, and the abuse reports come in until a jury vote is triggered, then a randomly selected jury of reviewers would probably cast enough "Yes" votes to validate the abuse reports.
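That "one vote on average" claim can be checked directly: the number of mob members who land on a randomly drawn jury follows a hypergeometric distribution. A quick stdlib sanity check, using the article's example numbers:

```python
from math import comb

def mob_seat_probs(pool=20000, mob=200, jury=100):
    """P(exactly k mob members seated on the jury): hypergeometric distribution."""
    return [comb(mob, k) * comb(pool - mob, jury - k) / comb(pool, jury)
            for k in range(jury + 1)]

probs = mob_seat_probs()
expected_seats = sum(k * p for k, p in enumerate(probs))
# expected seats = jury * mob / pool = 100 * 200 / 20000 = 1.0
```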

Jurors could in fact be given three voting choices:

  • "This group really is abusive" (i.e., the abuse reports were legitimate);
  • "This group does not technically violate the Terms of Service, but the users who filed abuse reports were probably making an honest mistake" (perhaps a common choice for groups that support controversial causes, or that publish information about semi-private individuals); or
  • "This group does not violate the TOS, and the abuse reports were bogus to begin with" (i.e., almost no reasonable person could have believed that the group really did violate the TOS, and the abuse reports were probably part of an organized campaign to get the group removed).

This strongly discourages users from organizing mob efforts against legitimate groups; if most of the jury ends up voting for the third choice, "This is an obviously legitimate group and the complaints were just an organized vigilante campaign", then the users who filed the complaints could have their own accounts penalized.
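A sketch of how the three-way tally might be acted on (the vote labels and both thresholds are hypothetical, chosen only to make the logic concrete):

```python
from collections import Counter

def adjudicate(votes, remove_threshold=0.5, penalize_threshold=0.5):
    """Decide a case from jurors' three-way votes.

    Illustrative vote labels: 'abusive', 'honest_mistake', 'bogus'.
    """
    tally = Counter(votes)
    n = len(votes)
    if tally["abusive"] / n > remove_threshold:
        return "remove group"
    if tally["bogus"] / n > penalize_threshold:
        # complaints were judged plainly illegitimate: sanction the filers
        return "keep group, penalize complainants"
    return "keep group"
```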

What I like about this algorithm is that the sizes and thresholds can be tweaked according to what you discover about the habits of the Facebook content reviewers. Suppose most volunteer reviewers turn out to be deadbeats who don't respond to "jury duty" when they're actually called upon to vote in an abuse report case. Fine — just increase the size of the jury until the average number of users in a randomly convened jury who do respond is large enough to be statistically significant. Or, suppose it turns out that people who sign up to review content for deletion are a more prudish bunch than average, and their votes tend to skew towards "delete it now!" in a way that is not representative of the general Facebook community. Fine — just raise the threshold for the percentage of "Yes" votes required to get content deleted. All that's required for the algorithm to work is that content which clearly does violate the Terms of Service gets more "Yes" votes on average than content that doesn't. Then make the jury size large enough that the voting results are statistically significant, so you can tell which side of the threshold you're on.
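The "large enough to be statistically significant" step can be made concrete with a standard sample-size calculation. This sketch uses a normal approximation and made-up rates; the point is only that the jury size falls out of the gap you need to detect and the response rate you observe:

```python
from math import ceil, sqrt

def jury_size_needed(p_yes, gap, response_rate=1.0, z=1.96):
    """Smallest jury whose ~95% margin of error on the observed 'Yes'
    fraction is below half the gap between clearly violating and clearly
    legitimate content, inflated for jurors who never respond.
    All parameter values here are illustrative assumptions."""
    n = (z * sqrt(p_yes * (1 - p_yes)) / (gap / 2)) ** 2
    return ceil(n / response_rate)

# e.g. worst-case p_yes = 0.5 and a 20-point gap needs a jury of ~97;
# if only half the summoned jurors respond, summon about twice as many
```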

Another beneficial feature of the algorithm is that it's scalable — there's no bottleneck of overworked reviewers at Facebook headquarters who have to review every decision. (They should probably review a random subset of the decisions to make sure the "juries" are reaching what seems to be the right answer, but they don't have to check every one.) If Facebook doubles in size — and the amount of "abusive content" and the number of abuse reports doubles along with it — then as long as the pool of volunteer reviewers also doubles, each reviewer has no greater workload than before. But the workload of the abuse department at Facebook doesn't double.

Now, this algorithm ducks the question of how to handle "borderline" content. If a student creates a Facebook group called "MR. LANGAN IS A BUTT BRAIN," is that "harassment" or not? I would say no, but I'm not confident that a randomly selected pool of reviewers would agree. However, the point of this algorithm is to make sure that if content is posted on Facebook that almost nobody would reasonably agree is a violation of their Terms of Service, then a group of vigilantes can't get it removed by filing a torrent of abuse reports.

Also, this proposal can't do much about Facebook's Terms of Service being prudish to begin with. A Frenchman recently had his account suspended because he used a 19th-century oil painting of an artistic nude as his profile picture. Well, Facebook's TOS prohibits nudity -- not just sexual nudity, but all nudity, period. Even under my proposed algorithm, jurors would presumably have to be honest and vote that the painting did in fact violate Facebook's TOS, unless or until Facebook changes the rules. (For that matter, maybe this wasn't a case of prudishness anyway. I mean, we know it's "artistic" because it's more than 100 years old and it was painted in oils, right? Yeah, well check out the painting that the guy used as his profile picture. It presumably didn't help that the painting is so good that the Facebook censors probably thought it was a photograph.)

But notwithstanding these problems, this algorithm was the best trade-off I could come up with in terms of scalability and fairness. So here's the contest: Send me your best alternative, or best suggested improvement, or best fatal flaw in this proposal (even if you don't come up with something better, the discovery of a fatal flaw is still valuable) for a chance to win (a portion of) the $100 -- or, you can designate a charity to be the recipient of your winnings. Send your ideas to bennett at peacefire dot org and put "reporting" in the subject line. I reserve the right to split the prize between multiple winners, or to pay out more than the original $100 (or give winners the right to designate charitable donations totalling more than $100) if enough good points come in (or to pay out less than $100 if there's a real dearth of valid points, but there are enough brainiacs reading this that I think that's unlikely). In order for the contest not to detract from the discussion taking place in the comment threads, if more than one reader submits essentially the same idea, I'll give the credit to the first submitter -- so as you're sending me your idea, you can feel free to share it in the comment threads as well without worrying about someone re-submitting it and stealing a portion of your winnings. (If your submission is, "Bennett, your articles would be much shorter if you just state your conclusion, instead of also including a supporting argument and addressing possible objections", feel free to submit that just in the comment threads.)

In The Net Delusion, Morozov concludes his section on phony abuse reports by saying, "Good judgment, as it turns out, cannot be crowdsourced, if only because special interests always steer the process to suit their own objectives." I think he's right about the problems, but I disagree that they're unsolvable. I think my algorithm does in fact prevent "special interests" from "steering the process", but I'll pay to be convinced that I'm wrong. Today I'm just choosing the "winners" of the contest myself; maybe someday I'll crowdsource the decision by letting a randomly selected subset of users vote on the merits of each proposal... but I'm sure some of you are dying to tell me why that's a bad idea.

Comments:
  • How about this... (Score:5, Insightful)

    by geminidomino (614729) on Friday April 15, 2011 @12:18PM (#35829798) Journal

    Don't rely on the cooperation of self-serving and outwardly evil companies to send your message.

    I'll take my prize in zorkmids, thanks.

  • Meta (Score:3, Funny)

    by PhattyMatty (916963) on Friday April 15, 2011 @12:26PM (#35829900)

    So he's crowd-sourcing the crowd-sourcing solution. One more level and we'll make a black hole!

    • Re: (Score:1, Funny)

      by Anonymous Coward

      Yo dawg, we heard you like crowdsourcing so we put a moderation system in your moderation system so we can all crowdsource while we crowdsource.

      M O D E R A T I O N

      • And let me say this: Extremism in defense of crowdsourcing is no vice. And moderation in pursuit of moderation is no virtue.

    • Meta-meta-minutiae? Just the thing for keeping trivial minds occupied, apparently.
    • Re: (Score:2, Informative)

      by raddan (519638) *
People have been doing that on Mechanical Turk for a while now. It makes sense. If you think of a human as a very slow, error-prone CPU, the solution is obvious:
      1. Get more people
      2. Have them check each other's work

      The second part relies on the independence of the people-- i.e., they are not colluding to distort your "computation". But crowdsourcing sites like MTurk and Slashdot effectively mitigate this by 1) having a large user base from which they 2) sample randomly. MTurk allows you to do crowdsourced c

  • by Anonymous Coward

They want you to snitch on others, and if you can't do that, they make you snitch on yourself.

  • by Anonymous Coward

Your idea doesn't take into account the number of people who would sign up just so they could flag everything as "Abusive". You need levels of meta-moderation for this to succeed.

    • Re: (Score:3, Insightful)

      by Garridan (597129)
      Actually, you can use moderation (and a bit of graph theory) to do just that. At first, you shouldn't put stock in any user. But, when you have a large group who usually agree, and (here's the key point) usually agree with the professional moderators, you should trust that large group. This can easily be reduced to an eigenvalue problem, similar to PageRank.
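A toy version of this eigenvalue idea (the agreement matrix below is invented; a real system would build it from moderation logs): trust scores emerge as the dominant eigenvector of the pairwise-agreement matrix, found here by plain power iteration.

```python
# Pairwise agreement rates between four moderators (made-up numbers).
# Moderator 3 rarely agrees with the other three.
A = [
    [1.0, 0.9, 0.8, 0.1],
    [0.9, 1.0, 0.8, 0.1],
    [0.8, 0.8, 1.0, 0.2],
    [0.1, 0.1, 0.2, 1.0],
]

def trust_scores(agreement, iters=100):
    """Power iteration: converges to the dominant eigenvector, which
    concentrates weight on the mutually agreeing cluster of moderators."""
    n = len(agreement)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(agreement[i][j] * v[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        v = [x / total for x in v]
    return v

scores = trust_scores(A)
# the outlier moderator ends up with the lowest trust score
```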

      The problem I see with this idea as a whole is teens posting naked pictures of themselves and others. Then, this moderation scheme turns into a
  • by yakatz (1176317) on Friday April 15, 2011 @12:32PM (#35829980) Homepage Journal

    This sounds a lot like the slashdot moderation scheme...

    For those who did not know, you can get the source code behind slashdot here [slashcode.com]

    • by magarity (164372)

      This sounds a lot like the slashdot moderation scheme...

      Speaking of which, it used to throw up a 'please take your turn at metamoderation' link every once in a while. I haven't seen that for quite a while now. Did the new version leave it out?

      • by dreampod (1093343)

I think it just reduced the frequency of the metamod reminder. I had been thinking the same thing lately, and then yesterday I got the reminder link. It was 2 days after I got regular mod points, though I don't know if that's relevant.

  • Deputize (Score:5, Insightful)

    by DanTheStone (1212500) on Friday April 15, 2011 @12:34PM (#35830000)
    I'd be more likely to deputize to people who you find are more reliable (basically, trusted moderators chosen from your randomly-selected pool after reviewing their decisions). Your system assumes that most people will be reasonable. I think that is an inherently flawed assumption, including for the very situations listed above. You can't trust that only a minority will think you should remove something that is against the mainstream view.
    • Re:Deputize (Score:4, Insightful)

      by owlnation (858981) on Friday April 15, 2011 @12:39PM (#35830068)

      "I'd be more likely to deputize to people who you find are more reliable (basically, trusted moderators chosen from your randomly-selected pool after reviewing their decisions). Your system assumes that most people will be reasonable. I think that is an inherently flawed assumption, including for the very situations listed above. You can't trust that only a minority will think you should remove something that is against the mainstream view."

In theory, that's definitely a better way. The problem is -- as Wikipedia proves conclusively -- if you do not choose those moderators wisely, or you are corrupt in your choice of moderators, you end up with a completely failed system very, very quickly.

    • by gknoy (899301)

      Deputizing doesn't sound like it would work well versus a dedicated mob of abusive flaggers.

      • by Brucelet (1857158)
        Why not? You wouldn't deputize the abusers.
        • by dreampod (1093343)

Wikipedia takes this particular approach. The problem encountered with this is that if they are honest in their moderation most of the time, it dramatically enhances their reputability when they are abusively modding their particular special interest. Most flaggers trying to cover up criticism of the Saudi royal family are going to be perfectly content properly rating content for nudity, offensive language, etc. However, when their particular hobbyhorse shows up they will falsely rate it as being offensive

  • by Geeky (90998) on Friday April 15, 2011 @12:35PM (#35830010)

    The painting mentioned as a profile picture is Courbet's Origin of the World.

    Probably best not check it out at work.

    Although, of course, it is on the wall of a major gallery where anyone can see it.

    • by H0p313ss (811249)

That would indeed make an awesome desktop background to set for one's pointy-haired boss...

  • by Anonymous Coward

Suppose an atheist created a page about Islam in the Turkish language, and Islamist Turks found it abusive -- because the person denies Allah -- and created a mass campaign of abuse complaints.

Let's assume Turkey is 99% Muslim. (https://secure.wikimedia.org/wikipedia/en/wiki/Demographics_of_Turkey)

The randomly picked people would statistically match that group, and they'd also approve the ban.

So, what was all the trouble for?

  • It is jarring for me to realize that pages are being taken down because they merely *offend* others. These aren't kiddie porn or drug dealer pages, it's just people talking about stuff. They talk about their friends, their enemies, their schools, their governments. It's not all flowers and happiness. If they want real people on facebook, they need to realize that some people are going to say unpleasant things.

    Maybe have a counter at the top of the pages that says "this page has received N complaints" bu

  • There's actually been a lot of research on this topic the last decade. No great solutions imo, but a lot of research. Most of it better than random=trustworthy. Here's the problem. Say you have N users and M are a fake mob. F=M/N is your ratio of fake users. Now select 0.01 of your users at random. F=0.01*M/0.01*N ... the ratio is the same, so the mob still works. Of course, I'm assuming you don't know which users are mob users. If you did, why would you bother with randomly selecting some? This i
    • by suutar (1860506)
      True, the percentage won't change. But the original problem isn't a percentage problem; it's saying "we have X complaints" instead of saying "X out of Y people think this is a problem", because they don't know what Y is. They can't really assume that everyone on FaceBook has seen the material in question and only X think it's bad. The proposed solution is intended to turn the number of folks who have a problem with the content from an absolute figure to a measured percentage, which is (or at least appears t
  • I'm really hoping for a decentralized social networking system, where everything is not controlled by the Big Head.

Diaspora does not solve the problem, because you either rely on outside hosting (in which case you're prone to excessive notice-and-takedown and screwed anyway) or serve everything from your own machine, in which case you are even more vulnerable to an angry DDoS mob (vide: anon).
  • by JMZero (449047) on Friday April 15, 2011 @12:56PM (#35830284) Homepage

    ..but I have to say it's ironic that you're posting about this algorithm on Slashdot, a site whose moderation system has incorporated the best of your ideas for years, and yet that doesn't seem to come up when you're asking for ideas.

    I like the Slashdot system. Moderators are assigned points at times beyond their control, to prevent just the kind of abuses you mention. There's appropriate feedback control on how moderators behave. The job of moderating (and meta-moderating) is presented and appreciated in such a way that people actually do it. People are picked to do moderation in a reasonable way. The process is transparent, and the proof that it works is that the Slashdot comments you typically see are actually not horrible (usually) and sometimes are quite informative.

    • When I look at slashdot and the way it gets moderated, I feel that either the culture of slashdot has changed a lot over the past decade or else I've changed a lot (it's sort of hard for me to tell objectively). I realize that communities and their biases are not a constant but there are a few topics where the slashdot moderation lately feels so alien to me that it has raised my internal astroturf alarm. Admittedly, I'm part of the problem for letting my mod points expire more often than I spend them, b

      • by Ltap (1572175)
        I agree. If you find someone who has constructed a well-written argument that disagrees with you (some of the anti-privacy, anti-filesharing, and pro-censorship commenters come to mind), the best (or at least most constructive) approach is to respond to them, not to try to mod them into oblivion.
  • Trying for the $100 (Score:4, Interesting)

    by xkr (786629) on Friday April 15, 2011 @12:56PM (#35830290)

    I have two algorithms, and I suggest that they are more valuable if used together, and indeed, if all three including your algorithm are used together.

(1) Identify "clumps" of users by who their friends are and by their viewing habits. Facebook has an app that will create a "distance graph," using a published algorithm. It is established that groups of users tend to "clump," and the clumps can be identified algorithmically. For example, for a given user, are there more connections back into the clump than there are to outside the clump? Another way to determine such a clump is by counting the number of loops back to the user. (A friends B, B friends C, C friends A.) Traditional correlation can be used to match viewing habits. This is probably improved by including a time factor in each correlation term. For example, if two users watch the same video within 24 hours of each other, this correlation term gets more weight than if the views were a week apart.

Now that you have identified a clump -- which you do not make public -- determine what fraction of the abuse reports come from one or a small number of clumps. That is very suspicious. Also apply a "complaint" factor to the clump as a whole. Clumps with high complaint factors (ones that complain frequently) have their complaints de-weighted appropriately. Rather than "on-off" determinations (e.g. "banned"), use variable weightings.

    In this way groups of like-minded users who try to push a specific agenda through abuse complaints would find their activities less and less effective. The more aggressive the clump, the less effective. And, the more the clump acts like a clump, the less effective.
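The de-weighting in (1) could be sketched like this (the sqrt discount, the clump map, and the history table are all illustrative assumptions; the community-detection step that produces the clumps is not shown):

```python
from collections import Counter
from math import sqrt

def weighted_complaint_score(complainants, clump_of, bogus_rate):
    """Score a batch of abuse reports, discounting clumped complainers.

    complainants: user ids filing reports
    clump_of:     user id -> clump id (output of some community detection)
    bogus_rate:   clump id -> fraction of its past complaints judged bogus
    """
    per_clump = Counter(clump_of[u] for u in complainants)
    score = 0.0
    for clump, n in per_clump.items():
        # n reports from one clump count like sqrt(n) independent ones,
        # scaled down further by the clump's record of bogus complaints
        score += sqrt(n) * (1.0 - bogus_rate.get(clump, 0.0))
    return score
```

Under this weighting, 100 complaints from one tight clump score only 10, while 100 unrelated complainers score 100.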

(2) Use Wikipedia-style "locking." There is a sequence of locks, from mild to extreme. Mild locks require complaining users to be in good standing and to have been users for a while for the complaint to count. Medium locks require a more detailed look, say, by your set of random reviewers. Extreme locks mean that the item in question has been formally reviewed and the issue is closed. In addition, complaints filed against a locked ("valid") item hurt the credibility score of the complainer.

    I hope this helps.

    • by dkleinsc (563838)

      If I'm an organized group with an ax to grind, I can get around your idea 1 fairly easily - organize my group off $SOCIAL_NETWORK, and instruct my loyal group members to specifically avoid detection by not friending or viewing each other's stuff on $SOCIAL_NETWORK. Your distance graph no longer shows these folks as connected in any way, problem solved.

      Offline astroturfers have been doing that kind of thing for years.

      • by xkr (786629)
        True, but my guess is that (1) this is far too much trouble for them; and (2) they were a "clump" with common interests long before they decided to attack postings they don't like. You are suggesting that people leave their own church before promoting the church's views?
      • by dreampod (1093343)

At that point the profiles complaining are going to be quite distinctive due to their inactivity except when flagging something as abusive. They are still going to want to use their real profile most of the time, because using a fake profile prevents them from interacting with their friends and groups. This means that fairly simple behavioural analysis can help exclude these reports from the system, or treat them as a distinct "loners who like flagging content" clump.

        The real difficulty would be determinin

  • At its core, this sounds like a blend of Slashdot's moderation and meta-moderation processes.

  • Hello everyone,

    This is a much more difficult problem than it seems at first glance. Some other posters have already pointed out the problem of the "jury of your peers" concept with the example of the country Turkey. A similar problem arises if it is simply approached as "what is considered offensive in the host country" (in this case, the USA, since Facebook is based in the USA). Heck, there are pictures of my daughter in her soccer uniform that would be banned in Saudi Arabia because you can see her knees

    • I think you'd be hard-pressed to find a group of people who would familiarize themselves with the Facebook TOS well enough to actually enforce it. I'm afraid that what you'd actually get is a group of people who vote Offensive/Inoffensive based on whether they agree with whatever controversial topic is at hand. This puts any minority group (LGBT, religious organizations, etc.), as well as any controversial groups (pro-Life/pro-Choice, political groups, etc.) at a much higher risk than they are now. You need
      • by querist (97166)
        That is why the metamoderation is done by Facebook employees, who should be familiar with the TOS. It should work itself out eventually, with obvious abusers being given low reputations so that they are never asked to moderate again.
        • by Ruke (857276)

          A system like this could work out, but by introducing Facebook employees, you make the system non-scalable. Instead of an employee reviewing 1000 abuse reports per day, he reviews 1000 moderations per day, and that's only meta-moderating 1 out of every <jury size> moderations.

          I like the system I saw posted earlier, where instead of asking "Is there abuse?", you ask "Which kind of abuse is there?" and give them a list. People who choose incorrectly are given a lower confidence rating. However, this sti

  • by Umuri (897961) on Friday April 15, 2011 @01:09PM (#35830438)

I like the idea; however, your problem is that you will always come across trolls on the internet, or people who just like screwing up systems. I would say this percentage on Facebook is quite sizable, so I would propose these alterations (to be taken individually, all together, or mixed and matched):

Assign a trustability value to each juror that is hidden and modified in one of two ways (or both):

Keep a pool of pre-existing cases (I'm sure Facebook has tons of examples stored in its history banks), where Facebook already knows what the outcome should be according to its standards. Have any prospective juror review a mix of "real" cases and these pre-existing cases for a trial period, say the first 20 cases they review are an unknown mix. That way they can't guess which ones are the tests.

Use their verdicts on these existing cases to assign each juror a "reliability" factor for their verdicts on the non-example cases in their batch. That way jurors who don't quite get the rules, or are causing problems, are easily weeded out, and their votes count less in the total verdict weight on their real cases.

Alternatively:
Trustability starts at 50%, so new jurors get half votes.
Whenever a juror disagrees with the majority opinion by voting the polar opposite choice, lower their trustability rating.
Likewise, when they are in the majority and the case is not a middle-ground one, increase their trustability.

Both of these improvements will lower the odds of troll or mob success, even if they control a decent share of the juror pool, because their individual votes will be worth less, while being invisible enough to the end user that they won't be able to tell they aren't being effective.
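A sketch of the "half votes" variant (the vote labels and the step size are my own illustrative choices):

```python
def update_trust(trust, vote, majority_vote, step=0.05):
    """Nudge a juror's hidden trust score after one case.

    New jurors start at 0.5 ('half votes'). Voting the polar opposite of
    the majority costs trust; joining a clear, non-middle-ground majority
    earns it. Labels 'abusive'/'honest_mistake'/'bogus' are assumptions.
    """
    polar_opposites = {("abusive", "bogus"), ("bogus", "abusive")}
    if (vote, majority_vote) in polar_opposites:
        trust -= step
    elif vote == majority_vote and majority_vote != "honest_mistake":
        trust += step
    return min(1.0, max(0.0, trust))
```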

    • by querist (97166)
      This seems like a good idea, except that it could still allow a "mob rule" effect. I would contend that having Facebook employees metamoderate, at least in disputed cases, would be a more effective approach overall. Your approach, however, could easily seed the system.
    • by dkleinsc (563838) on Friday April 15, 2011 @02:13PM (#35831220) Homepage

      Thing is, this problem isn't one of mere trolls. Trolls, spammers, and other forms of lesser life are relatively easy to recognize.

      No, these are paid shills and organized groups with an agenda. And that's much much harder to stop, because they will have 'spies' trying to infiltrate and/or control your jury selection, 'lawyers' looking for loopholes in your system, and a semi-disciplined mob who will be happy to carry out their plans carefully.

      An example of what they might do if they were trying to take over /. :
      1. See if they could find and crack old accounts that haven't been used in a while, so they could have nice low UIDs. These are your 'pioneer' accounts. If you aren't willing or able to pull that off, make some new accounts, but expect the takeover to take longer.
      2. Have the 'pioneers' post some smart and funny comments about stuff unrelated to your organization's angle to build karma and become moderators.
      3. Have your larger Wave 2 come in, possibly with new accounts. Still be reasonably smart and funny on stuff unrelated to the organization's angle. Have your pioneers mod up the Wave 2 posts.
      4. Repeat steps 2 and 3 until your group has a large enough pool of mods so that you can have at least 5 moderators ready whenever a story related to the organization's ideology comes up.
      5. Now let your mob in. Have your moderators mod up the not-totally-stupid mob posts in support of your organization's ideological position, and possibly mod down as 'Overrated' (because that's not metamodded) anything that would serve to disprove it.
      You now have the desired results: +5 Insightful on posts that agree with $POSITION, -1 Overrated on posts that disagree with it, and an ever-increasing pool of moderators who will behave as you want them to with regards to $POSITION.

      I have no knowledge of whether anyone has carried out this plan already, but it wouldn't surprise me if they had. The system on /. is considerably more resilient than, say, the New York Times comment section or YouTube, but still hackable.

  • by petes_PoV (912422) on Friday April 15, 2011 @01:11PM (#35830472)
    The system as described does not appear to cater for situations where a post/article is grossly offensive to an identifiable group or minority, but is meaningless to the majority. So if something is flagged that honks off a lot of people in Uzbekistan (for example) or america (for another example) should the "judges" not also come from that cultural group (the honkees?)? Without that filter, most people who knew nothing about the circumstances of the article would not be in a position to make a considered judgement - or they might even vote the complaint down for their own political reasons.

    Although you can't expect people to identify themselves as being knowledgeable about every conflict, argument, religious view, political wrangling or moral panic you could choose individuals from the same timezone and hemisphere that the complaints originate from (and maybe only ban the offending piece in that geography - unless more complaints are received from outside).

    • by querist (97166)
      Your proposal is interesting, but I can see some potential problems with it with regard to the overall concept of free expression.
      Let us consider a page on Facebook that is critical of Islam. Who would be considered appropriate to moderate that page? Most (if not all) Muslims would mark it inappropriate or offensive because it offends their beliefs, yet to Christians or others it may be considered informative and appropriate.
      As a conservative Christian (I am not saying you are), would you want your 13-y
  • I like your idea of crowd-sourcing, and I came up with a few ideas while reading yours:

    Test the judgement of the moderators.

    When mods are called upon to moderate something, make sure they have no way of knowing whether it's real or not. This way you can, for example, ask a mod whether "MR. LANGAN IS A BUTT BRAIN," or any other post whose correct verdict is already known, should be modded or not. Since you know beforehand whether the content should be modded, you can test the modding ability of the moderator.

    Describe what how th

  • Most people are censorious by nature to one extent or another. People tend to group together with like-minded individuals. Those two factors, plus a system that lets them trash those they dislike is an inherent recipe for disaster.

    The solution, if there is one, is a simple filter process:

    1. One complaint per item.
    2. Individual review of complaint.
    3. If your complaint is blatantly unreasonable, you're banned.

    Maybe if a member of "Saudi Flagger" got permabanned the first time they filed a false report, things w

    • This reminds me of the video appeal in pro tennis. If your appeal was bogus you lose it, otherwise you get to appeal again. This makes you think twice before appealing, and will therefore reduce the review load.

      Minor possible improvement: you get the right to enter a complaint only after some time since the creation of the account has passed, or if you have reached a certain amount of activity; this will deter the creation of shill accounts, or at least it will increase the friction/cost (time, energy) for

    • by Sigma 7 (266129)

      You missed a step:

      4. If there's a large number of false complaints against a user (whether all at once or spread over time), it becomes harder to complain against that user. For example, you may need minimum community participation (e.g. been around for at least 4 days and made 10 posts, etc.) or otherwise managed to get yourself trusted enough by the community.
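      Those gates are simple enough to sketch in Python (the 4-day, 10-post, and three-strikes thresholds are just the examples above plus my own guesses):

```python
# Sketch of a complaint-eligibility filter: one complaint per item,
# minimum account age and participation, and repeat filers of
# rejected complaints lose the privilege.
from datetime import datetime, timedelta

def may_complain(account, item_id, now=None):
    now = now or datetime.utcnow()
    if item_id in account["complained_items"]:
        return False  # one complaint per item
    if now - account["created"] < timedelta(days=4):
        return False  # account too new
    if account["posts"] < 10:
        return False  # not enough community participation
    if account["rejected_complaints"] >= 3:
        return False  # history of blatantly unreasonable complaints
    return True
```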

  • Make people participate in your whole system, like Stack Overflow. Gradually, as they build up reputation, give them more power. Then if one of these guys does something odious, you can yank his/her privs.
  • There is "prior art" for this idea, if you know where to look. OKCupid.com has had a crowd-sourced "flagmod" [okcupid.com] system for its Web site for years.

  • Require that abuse reports include a freeform description of why the suspected content violates the rules.

    Then, don't ask the jurors whether the content violates the rules. Instead, ask them: Is (the freeform description) a true statement about the suspected content?

    In other words: someone reports content for violation of TOS. Reason: "This content contains nudity."

    The juror then gets the moderation request, and they answer a single question. Is: "This content contains nudity" true about the suspected

  • Let's say the offending material is extremely and obviously offensive. You want to send this image to 100 volunteers to be viewed and voted on? What if I have my pictorial guide to gutting and skinning a pig uploaded to my profile? 1000 pictures... 100,000 volunteers just vomited a little in their throats. My account gets deleted, I create a new email address, and I'm back! :D

    Exposing volunteers to REAL abusive images/videos/text is IMHO worse than trying to catch false positives. If you want to be a f
  • by curril (42335) on Friday April 15, 2011 @02:14PM (#35831234)

    This is a well-studied "Who watches the watchers?" web-of-trust type issue. While there is no perfect solution, there are a number of good approaches. This page on Advogato [advogato.org] describes a good trust metric for reducing the impact of spam and malicious attacks. It wouldn't be that big of a deal for FaceBook to incorporate some such system. However, it would require FaceBook to actually care about being fair to its users, which it doesn't. FaceBook exploits for financial gain the tribal desires of people to band together and be part of a group. So FaceBook really uses its abuse policy as a way to force people to follow the rules of the bigger and more aggressive tribes. Such battles actually help FaceBook to be successful because they strengthen the tribal behaviors that benefit FaceBook's bottom line.

    So all in all, no matter what brilliant, cost-effective, robust moderation/abuse system you design or crowd source, the very, very best that you can hope for is that somebody at FaceBook might pat you on the head and thank you for your efforts and say that they aren't interested in your contribution at this time.

  • So you want free content provided by your customers... but that led to low-quality content. So you wanted free moderation provided by your customers... but that led to poor moderation. So you want free code and methodology to improve your moderation, provided by your customers... at some point the obvious has to just smack you upside the head.
  • by twistedsymphony (956982) on Friday April 15, 2011 @03:02PM (#35831770) Homepage
    Why not "test" the jurors every so often to determine if they're really effective jurors?

    It would work something like this: you would have a small group (employees of Facebook, or wherever) that takes selected (actual) complaints and determines how their "ideal" juror would handle each one. Feed these at random to the jury pool; if a juror isn't voting the way they should, reduce (or remove) their voting power in deciding the outcome. Alternatively, if they have a strong history of voting exactly the way they should, their votes would carry more weight in non-test cases.

    I wouldn't necessarily "kick out" jurors, but their voting power could be diminished to nothing if they have a very poor track record... I also don't think jurors should know that they're being tested, what their voting power is, or whether their votes carry more or less weight than anyone else's.
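    Deriving a hidden voting weight from those test cases might look like this (a Python sketch; the linear scaling from accuracy to weight is my own choice):

```python
# Sketch of gold-standard calibration: a juror's weight on real cases
# is derived only from their accuracy on hidden test cases whose
# correct outcome is already known.

def voting_weight(gold_history):
    """gold_history: list of (juror_verdict, correct_verdict) pairs
    for hidden test cases only. Returns a weight in [0.0, 1.0]."""
    if not gold_history:
        return 0.5  # unproven jurors start at half weight
    accuracy = sum(v == c for v, c in gold_history) / len(gold_history)
    # Coin-flip accuracy (50%) or worse diminishes the weight to
    # nothing; the juror is never told.
    return max(0.0, 2.0 * accuracy - 1.0)
```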
  • Totally agree that people like Morozov write off crowdsourcing without understanding it. One of the things that's fascinating to me is that crowdsourcing systems in general haven't learned from Slashdot's success with meta-moderation. Evaluating abuse reports seems like a great application.
  • by kelemvor4 (1980226)

    He's helping kids circumvent security systems at their school to access banned sites and doesn't understand why he's getting complaints?

    Here's your sign...

  • Your basic approach does seem to be vulnerable to someone registering a large number of "sleeper" accounts that wait to be called in as jurors on something they care about (perhaps an upcoming attack). To help counter this: 1. An account can't be selected as a juror unless it's been active for a minimum amount of time, with actual activity. (Say, a month.) 2. Jurors who consistently ignore their "duty" get dropped from the list.

    You would also want to attempt to weed out vandals from your juror
  • The most serious flaw in your algorithm is that you assume the average user actually knows the Facebook TOS. They don't. I'm not sure it would necessarily be a bad thing, but most "jurors" would certainly end up just making a judgment based on their own values
  • To further improve upon this, I have a few suggestions. #1: Instead of the "Jury" making the final decision, have the jury serve as the initial buffer before the official complaint is registered for final review by Facebook themselves. This should reduce the amount of work they have to do and thus leave more time to properly investigate the claims and the corresponding group/user, etc. In this manner Facebook makes the official decision based on their ToS instead of randomly selected people and their in
  • I think your system would work quite well. I don't think even 100 jurors are needed. I would suggest an initial pool of 20. Within that 20, if 17 say A and 3 say B, go with A. If it's closer, say 8-12, then enlarge the pool to 50. If it's still close, expand the pool again.
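    That escalating pool could be sketched as follows (Python; the 75% margin is my own reading of "17 of 20", and the final escalation step is an assumption):

```python
# Sketch of adaptive jury sizing: accept a lopsided verdict from a
# small pool, widen the pool when the vote is close, and escalate
# (e.g. to a human reviewer) if it stays close at the largest size.

def decide(collect_votes, pool_sizes=(20, 50, 100), margin=0.75):
    """collect_votes(n) simulates polling n fresh jurors and returns
    a list of 'A'/'B' verdicts. Returns 'A', 'B', or 'escalate'."""
    for size in pool_sizes:
        votes = collect_votes(size)
        a = votes.count("A")
        if a / size >= margin:
            return "A"
        if (size - a) / size >= margin:
            return "B"
    return "escalate"  # still close after the largest pool
```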

    One of the keys is that people should NOT be able to volunteer for jury duty. This will keep the deck from being stacked by the self-righteous.
    I would suggest that jurors be selected from people who have:
    * Active accounts --

    • by _0xd0ad (1974778)

      They should not lose their account or have any obvious indication that the account's ability to report has been revoked, because that's just telling them to create another account. The report interface should still appear to work normally even if the user's "credibility" multiplier is zero.
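      Something like this (a Python sketch; the hidden "credibility" multiplier and the dict-based store are my own illustration):

```python
# Sketch of a shadow-limited report interface: the call always looks
# successful to the reporter, but a zero credibility multiplier means
# the report contributes nothing, so the abuser gets no signal to go
# create a fresh account.

def submit_report(account, item_id, reports):
    weight = account.get("credibility", 1.0)
    if weight > 0:
        reports[item_id] = reports.get(item_id, 0.0) + weight
    return {"status": "ok"}  # always appear to work normally
```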

  • I'd improve it in a few ways:
    The jury's decision to remove an item should not be final. It should be a site employee with the final say.

    Don't remove content in the first place. Put up a "this contains potentially offensive content" warning and let people click through if they want.

    Once a complainer reports a link/image as offensive, remove it from their access so they don't see it any more.

  • A major piece of the puzzle required is the proper selection of the jury members. Doing this jury selection completely at random may not be the best option due to previous emotional baggage.

    One method of pre-screening potential jurors might be to process their own published Facebook content and analyse it for irrational thoughts or extreme positions on topics related to what is being voted on. Let's suppose you are looking at a potential takedown of someone's web profile because of it being accuse

  • I still believe that the best solution is to leave the censorship to professional mods, who at least know what's really forbidden and what's not.

    If you accept my assumption then the answer to your question lies in pattern analysis. If the people reporting are very tightly clustered, as in being friends with one another, having the same interests, liking the same pages or belonging to the same groups then the likelihood of mob behavior increases. The relevancy of such analysis can be determined by looking at
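    A first cut at that clustering check (a Python sketch; in practice the friendship graph would come from the site's social graph rather than a plain dict):

```python
# Sketch of reporter-clustering analysis: measure what fraction of
# the accounts filing a complaint are friends with one another. A
# value near 1.0 (a near-clique) suggests coordinated mob behavior;
# near 0.0 suggests independent strangers.
from itertools import combinations

def reporter_clustering(reporters, friends):
    """friends: dict mapping account -> set of friend accounts.
    Returns the fraction of reporter pairs that are linked."""
    pairs = list(combinations(reporters, 2))
    if not pairs:
        return 0.0
    linked = sum(b in friends.get(a, set()) for a, b in pairs)
    return linked / len(pairs)
```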
