Crowdsourcing the Censors: A Contest
Before you get bored and click away: I'm proposing an algorithm for Facebook (and similar sites) to use to review "abuse reports" in a scalable and efficient manner, and I'm offering a total of $100 (or more) to the reader (or to some charity designated by them) who proposes the best improvement(s) or alternative(s) to the algorithm. We now proceed with your standard boilerplate introductory paragraph.
In his new book The Net Delusion: The Dark Side of Internet Freedom, Evgeny Morozov cites examples of Facebook users organizing campaigns to shut down particular groups or user accounts by filing phony complaints against them. One Hong Kong-based Facebook group with over 80,000 members, formed to oppose the pro-Beijing Democratic Alliance for the Betterment and Progress of Hong Kong, was shut down by opponents flagging the group as "abusive" on Facebook. In another incident, the Moroccan activist Kacem El Ghazzali found his Facebook group Youth for the Separation between Religion and Education deleted without explanation, and when he e-mailed Facebook to ask why, his personal Facebook profile got canned as well. Only after an international outcry did Facebook restore the group (but, oddly, not El Ghazzali's personal Facebook account), but they refused to explain the original removal; the most likely cause was a torrent of phony "complaints" from opponents. In both cases it seemed clear that the groups did not actually violate Facebook's Terms of Service, but the number of complaints presumably convinced either a software algorithm or an overworked human reviewer that something must have been inappropriate, and the forums were shut down. The Net Delusion also describes a group of conservative Saudi citizens calling themselves "Saudi Flagger" that coordinates filing en masse complaints against YouTube videos which criticize Islam or the Saudi royal family.
A large number of abuse reports against a single Facebook group or YouTube video probably has a good chance of triggering a takedown; with 2,000 employees managing 500 million users, Facebook surely doesn't have time to review every abuse report properly. About once a month I still get an email from Facebook with the subject "Facebook Warning" saying:
You have been sending harassing messages to other users. This is a violation of Facebook's Terms of Use. Among other things, messages that are hateful, threatening, or obscene are not allowed. Continued misuse of Facebook's features could result in your account being disabled.
I still have no idea what is triggering the "warnings"; the meanest thing I usually say on Facebook is to people who write to me asking for tech support (usually with the proxy sites to get on Facebook at school), when they say "It gives me an error", and I write back, "TELL ME THE ACTUAL ERROR MESSAGE THAT IT GIVES YOU!!" (Typical reply: "It gave me an error that it can't do it." If you work in tech support, I feel your pain.) I suspect the "abuse reports" are probably coming from parents who hack into their teenagers' accounts, see their teens corresponding with me about how to get on Facebook or YouTube at school, and decide to file an "abuse report" against my account just for the hell of it. If Facebook makes it that easy for a lone gunman to cause trouble with fake complaints, imagine how much trouble you can make with a well-coordinated mob.
But I think an algorithm could be implemented that would enable users to police for genuinely abusive content, without allowing hordes of vigilantes to get content removed that they simply don't like. Taking Facebook as an example, a simple change in the crowdsourcing algorithm could solve the whole problem: use the votes of users who are randomly selected by Facebook, rather than users who self-select by filing the abuse reports. This is similar to an algorithm I'd suggested for stopping vigilante campaigns from "burying" legitimate content on Digg (and indeed, stopping illegitimate self-promotion on Digg at the same time), and as a general algorithm for preventing good ideas from being lost in the glut of competing online content. But if phony "abuse reports" are also being used to squelch free speech in countries like China and Saudi Arabia, then the moral case for solving the problem is all the more compelling.
Here's how the algorithm would work: Facebook can ask some random fraction of their users, "Would you like to be a volunteer reviewer of abuse reports?" (Would you sign up? Come on. Wouldn't you be a little bit curious what sort of interesting stuff would be brought to your attention?) Wait until they've built up a roster of reviewers (say, 20,000). Then suppose Facebook receives an abuse report (or several abuse reports, whatever their threshold is) about a particular Facebook group. Facebook can then randomly select some subset of its volunteer reviewers, say, 100 of them. This is tiny as a proportion of the total number of reviewers (with a "jury" size of 100 and a "jury pool" of 20,000, a given reviewer has only a 1 in 200 chance of being called for "jury duty" for any particular complaint), but still large enough that the results are statistically significant. Tell them, "This is the content that users have been complaining about, and here is the reason that they say it violates our terms of service. Are these legitimate complaints, or not?" If the number of "Yes" votes exceeds some threshold, then the group gets shuttered.
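The core of the proposal — convene a small random jury from a large volunteer pool, and shutter the group only if the "Yes" fraction clears a threshold — can be sketched in a few lines of Python. This is purely illustrative; the function names and the 50% threshold are my assumptions, not anything Facebook actually runs:

```python
import random

def convene_jury(reviewer_pool, jury_size=100):
    """Randomly select jurors from the volunteer reviewer pool,
    so complainants have no say in who judges their complaint."""
    return random.sample(reviewer_pool, jury_size)

def verdict(votes, threshold=0.5):
    """Shutter the reported group only if the fraction of 'Yes'
    (i.e. 'these complaints are legitimate') votes exceeds the threshold."""
    yes = sum(1 for v in votes if v == "yes")
    return yes / len(votes) > threshold
```

The key property is in `convene_jury`: because the sample is drawn by the platform rather than self-selected by the complainants, filing more complaints never buys a group's opponents more votes.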
It's much harder to cheat in this system than in an "abuse report" system in which users simply band together and file phony abuse reports against a group until it gets taken down. If the 200 members of "Saudi Flagger" signed up as volunteer reviewers, then they would comprise only 1% of a jury pool of 20,000 users, and on average would only get one vote on a jury of 100. You'd have to organize such a large mob that your numbers would comprise a significant portion of the 20,000 volunteer reviewers, so that you would have a significant voting bloc in a given jury pool. (And my guess is that Facebook would have a lot more than 20,000 curious volunteers signed up as reviewers.) On the other hand, if someone creates a group with actual hateful content or built around a campaign of illegal harassment, and the abuse reports start coming in until a jury vote is triggered, then a randomly selected jury of reviewers would probably cast enough "Yes" votes to validate the abuse reports.
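The arithmetic behind that claim is ordinary sampling without replacement: the number of bloc members seated on a random jury follows a hypergeometric distribution. With the article's (hypothetical) numbers — a 200-member bloc, a 20,000-member pool, a 100-seat jury — the bloc's expected representation is exactly one seat:

```python
from math import comb

def expected_seats(pool=20000, bloc=200, jury=100):
    """Expected number of jury seats a colluding bloc captures."""
    return jury * bloc / pool

def bloc_seat_distribution(pool=20000, bloc=200, jury=100):
    """Hypergeometric P(exactly k bloc members are seated), for k = 0..jury."""
    return [comb(bloc, k) * comb(pool - bloc, jury - k) / comb(pool, jury)
            for k in range(jury + 1)]
```

Running the distribution shows the bloc almost never seats more than three or four jurors, nowhere near enough to swing a 100-person vote past any reasonable threshold.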
Jurors could in fact be given three voting choices:
- "This group really is abusive" (i.e. the abuse reports were legitimate), or;
- "This group does not technically violate the Terms of Service, but the users who filed abuse reports were probably making an honest mistake" (perhaps a common choice for groups that support controversial causes, or that publish information about semi-private individuals); or
- "This group does not violate the TOS, and the abuse reports were bogus to begin with" (i.e. almost no reasonable person could have believed that the group really did violate the TOS, and the abuse reports were probably part of an organized campaign to get the group removed).
This strongly discourages users from organizing mob efforts against legitimate groups; if most of the jury ends up voting for the third choice, "This is an obviously legitimate group and the complaints were just an organized vigilante campaign", then the users who filed the complaints could have their own accounts penalized.
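The three-way ballot and its consequences might be tallied like this (a sketch; the vote labels and the two 50% thresholds are invented for illustration):

```python
def resolve(votes, shutter_threshold=0.5, bogus_threshold=0.5):
    """Decide the outcome of a jury vote with three ballot choices:
    'abusive', 'honest_mistake', or 'bogus_report'."""
    n = len(votes)
    if votes.count("abusive") / n > shutter_threshold:
        return "shutter group"       # the abuse reports were legitimate
    if votes.count("bogus_report") / n > bogus_threshold:
        return "penalize reporters"  # organized vigilante campaign
    return "no action"               # honest mistake; the group stays up
```

The third branch is what gives the scheme teeth: a mob that files complaints against an obviously legitimate group risks having its own accounts penalized, so bogus reporting carries a cost instead of being free.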
What I like about this algorithm is that the sizes and thresholds can be tweaked according to what you discover about the habits of the Facebook content reviewers. Suppose most volunteer reviewers turn out to be deadbeats who don't respond to "jury duty" when they're actually called upon to vote in an abuse report case. Fine — just increase the size of the jury, until the average number of users in a randomly convened jury who do respond is large enough to be statistically significant. Or, suppose it turns out that people who sign up to review content to be deleted are a more prudish bunch than average, and their votes tend to skew towards "delete it now!" in a way that is not representative of the general Facebook community. Fine — just raise the threshold for the percentage of "Yes" votes required to get content deleted. All that's required for the algorithm to work is that content which clearly does violate the Terms of Service gets more "Yes" votes on average than content that doesn't. Then make the jury size large enough that the voting results are statistically significant, so you can tell which side of the threshold you're on.
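The statistical tuning described above is just binomial estimation: a larger jury shrinks the margin of error on the observed "Yes" fraction, and a low response rate simply means summoning proportionally more jurors. Both knobs can be sketched as follows (the formulas are standard; the parameter names are mine):

```python
from math import ceil, sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error on the observed 'Yes' fraction
    from n responding jurors (worst case is at p = 0.5)."""
    return z * sqrt(p * (1 - p) / n)

def required_jury_size(needed_responders, response_rate):
    """How many jurors to summon so that, on average, enough respond."""
    return ceil(needed_responders / response_rate)
```

For example, if clearly abusive content averages 70% "Yes" and clearly legitimate content 30%, then 100 responses (a margin of error under ±10 points) is comfortably enough to tell which side of a 50% threshold you're on; if only 40% of summoned jurors respond, summon 250.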
Another beneficial feature of the algorithm is that it's scalable — there's no bottleneck of overworked reviewers at Facebook headquarters who have to review every decision. (They should probably review a random subset of the decisions to make sure the "juries" are getting what seems to be the right answer, but they don't have to check every one.) If Facebook doubles in size — and the amount of "abusive content" and the number of abuse reports doubles along with it — then as long as the pool of volunteer reviewers also doubles, each reviewer has no greater workload than they had before. But the workload of the abuse department at Facebook doesn't double.
Now, this algorithm ducks the question of how to handle "borderline" content. If a student creates a Facebook group called "MR. LANGAN IS A BUTT BRAIN," is that "harassment" or not? I would say no, but I'm not confident that a randomly selected pool of reviewers would agree. However, the point of this algorithm is to make sure that if content is posted on Facebook that almost nobody would reasonably agree is a violation of their Terms of Service, then a group of vigilantes can't get it removed by filing a torrent of abuse reports.
Also, this proposal can't do much about Facebook's Terms of Service being prudish to begin with. A Frenchman recently had his account suspended because he used a 19th-century oil painting of an artistic nude as his profile picture. Well, Facebook's TOS prohibits nudity -- not just sexual nudity, but all nudity, period. Even under my proposed algorithm, jurors would presumably have to be honest and vote that the painting did in fact violate Facebook's TOS, unless or until Facebook changes the rules. (For that matter, maybe this wasn't a case of prudishness anyway. I mean, we know it's "artistic" because it's more than 100 years old and it was painted in oils, right? Yeah, well check out the painting that the guy used as his profile picture. It presumably didn't help that the painting is so good that the Facebook censors probably thought it was a photograph.)
But notwithstanding these problems, this algorithm was the best trade-off I could come up with in terms of scalability and fairness. So here's the contest: Send me your best alternative, or best suggested improvement, or best fatal flaw in this proposal (even if you don't come up with something better, the discovery of a fatal flaw is still valuable) for a chance to win (a portion of) the $100 -- or, you can designate a charity to be the recipient of your winnings. Send your ideas to bennett at peacefire dot org and put "reporting" in the subject line. I reserve the right to split the prize between multiple winners, or to pay out more than the original $100 (or give winners the right to designate charitable donations totalling more than $100) if enough good points come in (or to pay out less than $100 if there's a real dearth of valid points, but there are enough brainiacs reading this that I think that's unlikely). In order for the contest not to detract from the discussion taking place in the comment threads, if more than one reader submits essentially the same idea, I'll give the credit to the first submitter -- so as you're sending me your idea, you can feel free to share it in the comment threads as well without worrying about someone re-submitting it and stealing a portion of your winnings. (If your submission is, "Bennett, your articles would be much shorter if you just state your conclusion, instead of also including a supporting argument and addressing possible objections", feel free to submit that just in the comment threads.)
In The Net Delusion, Morozov concludes his section on phony abuse reports by saying, "Good judgment, as it turns out, cannot be crowdsourced, if only because special interests always steer the process to suit their own objectives." I think he's right about the problems, but I disagree that they're unsolvable. I think my algorithm does in fact prevent "special interests" from "steering the process", but I'll pay to be convinced that I'm wrong. Today I'm just choosing the "winners" of the contest myself; maybe someday I'll crowdsource the decision by letting a randomly selected subset of users vote on the merits of each proposal... but I'm sure some of you are dying to tell me why that's a bad idea.
How about this... (Score:5, Insightful)
Don't rely on the cooperation of self-serving and outwardly evil companies to send your message.
I'll take my prize in zorkmids, thanks.
Meta (Score:3, Funny)
So he's crowd-sourcing the crowd-sourcing solution. One more level and we'll make a black hole!
Re: (Score:1, Funny)
Yo dawg, we heard you like crowdsourcing so we put a moderation system in your moderation system so we can all crowdsource while we crowdsource.
M O D E R A T I O N
Re: (Score:2)
And let me say this: Extremism in defense of crowdsourcing is no vice. And moderation in pursuit of moderation is no virtue.
Re: (Score:3)
Re: (Score:2, Informative)
The second part relies on the independence of the people-- i.e., they are not colluding to distort your "computation". But crowdsourcing sites like MTurk and Slashdot effectively mitigate this by 1) having a large user base from which they 2) sample randomly. MTurk allows you to do crowdsourced c
in soviet russia (Score:1)
they want you to snitch on others, and if you can't do that then they make you snitch on yourself.
Asshole jurors (Score:1)
Your idea doesn't take into account the number of people who would sign up, just so that they could hit "Abusive" at everything. You need levels of meta moderation for this to succeed.
Re: (Score:3, Insightful)
The problem I see with this idea as a whole is teens posting naked pictures of themselves and others. Then, this moderation scheme turns into a
Re: (Score:1)
Yes. The basic problem in this problem is that at present, mobs can expend X units of work to cause FB to expend Y > X work to investigate the complaint, or to cause the Z > X work to be further expended to fight the complaint (for Y and Z being not more than a few times, say 10, larger than X). The solution proposes that FB avoids spending that 10X work by enabling mobs to expend 100X units of work to cause FB to incur some exponential of X work through community moderators. (The amount of work the j
Facebook to switch to SlashCode? (Score:4, Informative)
This sounds a lot like the slashdot moderation scheme...
For those who did not know, you can get the source code behind slashdot here [slashcode.com]
Re: (Score:2)
This sounds a lot like the slashdot moderation scheme...
Speaking of which, it used to throw up a 'please take your turn at metamoderation' link every once in a while. I haven't seen that for quite a while now. Did the new version leave it out?
Re: (Score:2)
I think it just reduced the frequency of the metamod reminder. I know I had been thinking the same thing lately, and then yesterday I got the reminder link. It was 2 days after I got regular mod points, though I don't know if that's relevant.
Re: (Score:2)
Deputize (Score:5, Insightful)
Re:Deputize (Score:4, Insightful)
In theory, that's definitely a better way. The problem is -- as Wikipedia proves conclusively -- if you do not choose those moderators wisely, or you are corrupt in your choice of moderators, you end up with a completely failed system very, very quickly.
Re: (Score:2)
Deputizing doesn't sound like it would work well versus a dedicated mob of abusive flaggers.
Re: (Score:1)
Re: (Score:2)
Wikipedia takes this particular approach. The problem encountered with this is that if they are honest in their moderation most of the time, it dramatically enhances their reputability when they are abusively modding their particular special interest. Most flaggers trying to cover up criticism of the Saudi royal family are going to be perfectly content properly rating content for nudity, offensive language, etc. However, when their particular hobbyhorse shows up they will falsely rate it as being offensive
Do not check out the painting at work (Score:3)
The painting mentioned as a profile picture is Courbet's Origin of the World.
Probably best not check it out at work.
Although, of course, it is on the wall of a major gallery where anyone can see it.
Re: (Score:2)
That would indeed make an awesome desktop background to set for one's pointy-haired boss...
Sheep of god (Score:1)
Suppose an atheist created a page about Islam in the Turkish language and Islamist Turks find it abusive - hey, the person denies Allah - and they create a mass campaign of abuse complaints.
Let's assume Turkey is 99% Muslim. (https://secure.wikimedia.org/wikipedia/en/wiki/Demographics_of_Turkey)
The randomly picked reviewers would statistically mirror that population, and they'd also approve the ban.
So what was all the trouble for?
Re: (Score:1)
Or is genuinely free speech something you're trying to avoid?
Yes.
Because when speech is completely free without any sort of consequences, people become cruel, abusive, small minded, lose all sense of rationality, etc....
The GNAA posts that used to show up here are a perfect example. And then you have the spam from mostly scam sites.
No thank you, as much as I love free speech, there needs to be some sort of consequence on the web, because otherwise not-so-popular speech gets drowned out by the crap. Sure it's not perfect - case in point were the posts here that were
Re: (Score:2)
Re: (Score:2)
If we remove moderation, can we remove anonymity too, and force people to post with their real names if they wish to participate?
Or is taking real-world responsibility for the actual content of your speech something you're trying to avoid?
Re: (Score:2)
You cannot have free speech without anonymous speech. Let's say I want to be able to say something bad about the Saudi royal family or the Israeli government, think I got both sides there, do I really want their nutbag followers to be able to find me?
Since I probably would rather continue to breathe than make my point known I will never speak out. That hurts free speech. If we did not have to worry about never being able to find a job, or even being killed for our speech that would not be an issue.
Re: (Score:2)
Here in the US, it isn't that bad where people fear for their lives (yet), but SLAPP cases are on the rise here. I wouldn't be surprised when companies start having bots which periodically check Google or traverse sites themselves and automatically file lawsuits against anyone for libel who complains. This is cheap for an organization which has a large law arm, but defending against these would be cost prohibitive for individuals.
So, in the US, having the ability to separate a userID from the real person
Re: (Score:2)
Indeed, anonymity is important, but when we're talking about private platforms operated by private, for-profit enterprise, we're not talking about "free speech." Nobody's obligated to give others a platform from which to speak.
Suggesting - as the AC above did - that you have a right to un-moderated, unrestricted speech on somebody else's dime is rather disingenuous.
Re: (Score:2)
Or is taking real-world responsibility for the actual content of your speech something you're trying to avoid?
You bet it is, when that might mean being killed/fired/beaten up/expelled/etc.
Re: (Score:1)
Maybe you should start your own site which will allow you to publish your speech in a truly anonymous fashion, then, instead of demanding other people provide one for you, or pretending to be outraged that somebody would set terms of use that govern what you say and how you say it on a site they have built & continue to operate.
Re: (Score:2)
You're either a moron who can't read, an asshole who doesn't care what post he attaches his rants to, or a troll who misrepresents people to wind them up.
Re: (Score:2)
Apparently my initial point wasn't expressed clearly for you:
"Stop moderation" is simply not an option on a private platform. No private platform is obligated to provide - or interested in providing - for its members a platform which allows them to disseminate consequence-free speech. Do away with moderation? You'll see platforms require that you use your real name, or they'll simply disallow comments and posting altogether. Any expectation of "anonymous" speech on Facebook is ridiculous.
It does not mat
Re: (Score:2)
Well, there's the one we don't talk about.
Re: (Score:2)
Apparently my initial point wasn't expressed clearly for you
And it wasn't your "initial point" I was responding to or complaining of, but your assertion that I was "pretending to be outraged".
Re: (Score:2)
There is a difference between responsibility to not troll/spam/spew on a board, and being sued/arrested/tortured/killed/family tortured/family killed for a statement.
It would be nice that there would be a way to have a system that dealt with trolls and spammers, but wouldn't affect people who have unpopular opinions, either unpopular in their country or unpopular in general.
A system also would have to deal with grey areas: Lets say there is a person who says that iOS and OS X are 100% secure. Would this p
censorshipbook (Score:2)
It is jarring for me to realize that pages are being taken down because they merely *offend* others. These aren't kiddie porn or drug dealer pages, it's just people talking about stuff. They talk about their friends, their enemies, their schools, their governments. It's not all flowers and happiness. If they want real people on facebook, they need to realize that some people are going to say unpleasant things.
Maybe have a counter at the top of the pages that says "this page has received N complaints" bu
random != win (Score:2)
Re: (Score:1)
I'm hoping for Diaspora (Score:2)
I'm really hoping for a decentralized social networking system, where everything is not controlled by the Big Head.
Re: (Score:1)
I'm sure someone's already mentioned Slashdot... (Score:4, Informative)
..but I have to say it's ironic that you're posting about this algorithm on Slashdot, a site whose moderation system has incorporated the best of your ideas for years, and yet that doesn't seem to come up when you're asking for ideas.
I like the Slashdot system. Moderators are assigned points at times beyond their control, to prevent just the kind of abuses you mention. There's appropriate feedback control on how moderators behave. The job of moderating (and meta-moderating) is presented and appreciated in such a way that people actually do it. People are picked to do moderation in a reasonable way. The process is transparent, and the proof that it works is that the Slashdot comments you typically see are actually not horrible (usually) and sometimes are quite informative.
The hivemind has moods (Score:2)
When I look at slashdot and the way it gets moderated, I feel that either the culture of slashdot has changed a lot over the past decade or else I've changed a lot (it's sort of hard for me to tell objectively). I realize that communities and their biases are not a constant but there are a few topics where the slashdot moderation lately feels so alien to me that it has raised my internal astroturf alarm. Admittedly, I'm part of the problem for letting my mod points expire more often than I spend them, b
Re: (Score:2)
Trying for the $100 (Score:4, Interesting)
I have two algorithms, and I suggest that they are more valuable if used together, and indeed, if all three including your algorithm are used together.
(1) Identify "clumps" of users by who their friends are and by their viewing habits. Facebook has an app that will create a "distance graph," using a published algorithm. It is established that groups of users tend to "clump" and the clumps can be identified algorithmically. For example, for a given user, are there more connections back to the clump than there are to outside the clump? Another way to determine such a clump is by counting the number of loops back to the user. (A friends B friends C friends A.) Traditional correlation can be used to match viewing habits. This is probably improved by including a time factor in each correlation term. For example, if two users watch the same video within 24 hours of each other this correlation term has more weight than if they were watched a week apart.
Now that you have identified a clump -- which you do not make public -- determine what fraction of the abuse reports come from one or a small number of clumps. That is very suspicious. Also apply an "complaint" factor to the clump as a whole. Clumps with high complaint factors (complain frequently) have their complaints de-weighted appropriately. Rather than "on-off" determinations (e.g. "banned"), use variable weightings.
In this way groups of like-minded users who try to push a specific agenda through abuse complaints would find their activities less and less effective. The more aggressive the clump, the less effective. And, the more the clump acts like a clump, the less effective.
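The de-weighting step this commenter describes could be sketched as follows. Everything here is hypothetical: the clump assignments and "aggressiveness" factors would come out of the graph analysis described above, and the function names are mine:

```python
from collections import Counter

def deweight_complaints(complainants, clump_of, aggressiveness):
    """Weight abuse reports so a flood from one complaint-happy clump
    counts for less than the same number of independent reports.

    complainants:   list of user ids who filed reports
    clump_of:       user id -> clump id (from the friend/viewing-habit graph)
    aggressiveness: clump id -> historical complaint-rate factor (>= 1.0)
    Returns a weighted complaint score to compare against a takedown threshold.
    """
    per_clump = Counter(clump_of[u] for u in complainants)
    return sum(count / aggressiveness[c] for c, count in per_clump.items())
```

Because the score divides each clump's report count by that clump's historical complaint rate, the more aggressively a clump flags content, the less each of its reports moves the needle, which is exactly the "less and less effective" behavior described above.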
(2) Use Wikipedia style "locking." There are a sequence of locks, from mild to extreme. Mild locks require complaining users to be in good standing and be a user for while for the complaint to count. Medium locks require a more detailed look, say, by your set of random reviewers. Extreme locks means that the item in question has been formally reviewed and the issue is closed. In addition, complaints filed against a locked ("valid") item hurt the credibility score of the complainer.
I hope this helps.
Re: (Score:2)
If I'm an organized group with an ax to grind, I can get around your idea 1 fairly easily - organize my group off $SOCIAL_NETWORK, and instruct my loyal group members to specifically avoid detection by not friending or viewing each other's stuff on $SOCIAL_NETWORK. Your distance graph no longer shows these folks as connected in any way, problem solved.
Offline astroturfers have been doing that kind of thing for years.
Re: (Score:2)
Re: (Score:2)
At that point the profiles complaining are going to quite distinctive due to their inactivity except for when flagging something as abusive. They are still going to want to use their real profile most of the time because using a fake profile prevents them from interacting with their friends and groups. This means that fairly simple behavioural analysis can help exclude these reports from the system or treat them as a distinct 'loners who like flagging content' clump.
The real difficulty would be determinin
Re: (Score:2)
When he said "locking" he didn't mean locking the group or its discussion board, he meant "locking" the Report feature. It'll still appear to work, of course - you can still "report" the group - but unbeknownst to you, your report goes straight to /dev/null and your reporting credibility actually gets automatically decreased.
Slashdot moderation/meta-moderation??? (Score:1)
At its core, this sounds like a blend of Slashdot's moderation and meta-moderation processes.
This is much more difficult than it sounds (Score:2)
This is a much more difficult problem than it seems at first glance. Some other posters have already pointed out the problem of the "jury of your peers" concept with the example of the country Turkey. A similar problem arises if it is simply approached as "what is considered offensive in the host country" (in this case, the USA, since Facebook is based in the USA). Heck, there are pictures of my daughter in her soccer uniform that would be banned in Saudi Arabia because you can see her knees
Vigilantism (Score:3)
Re: (Score:2)
Re: (Score:2)
A system like this could work out, but by introducing Facebook employees, you make the system non-scalable. Instead of an employee reviewing 1000 abuse reports per day, he reviews 1000 moderations per day, and that's only meta-moderating 1 out of every <jury size> moderations.
I like the system I saw posted earlier, where instead of asking "Is there abuse?", you ask "Which kind of abuse is there?" and give them a list. People who choose incorrectly are given a lower confidence rating. However, this sti
Jury Qualification Improvement (Score:3)
I like the idea; however, your problem is that you will always come across trolls on the internet, or people who just like screwing up systems. I would say this percentage on Facebook is quite sizable, so I would propose these alterations (to be taken individually, all together, or mix-and-match):
Assign a trustability value to each juror, hidden and modified in one of two ways (or both):
- Have a pool of pre-existing cases (I'm sure Facebook has tons of examples stored in their history banks). In this situation Facebook knows what the outcome should be according to their standards.
- Have any prospective juror review a mix of "real" cases and these pre-existing cases for a trial period; say the first 20 cases they review are an unknown mix. This way they can't guess which ones are the tests.
- Use their verdicts on these known cases to assign the juror a "reliability" factor on their verdicts on the non-example cases in their batch.
That way jurors who don't quite get the rules, or are causing problems, are easily weeded out, and their vote counts less in the total verdict weight on their real cases.
Alternatively:
- Trustability starts at 50%, so new jurors get half votes.
- Whenever a juror disagrees with the majority opinion by the polar opposite choice, lower their trustability rating.
- Likewise, when they are in the majority and it is not a middle-ground case, increase their trustability.
Both of these improvements will lower the odds of troll or mob mentality succeeding, even if they control a decent share of the juror pool, because their individual votes will be worth less, while being invisible enough to the end user that they won't be able to tell they aren't being effective.
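The weighted-vote variant of this suggestion might look like the following sketch (the 50% starting trust and the 0.05 adjustment step are arbitrary illustration values, not anything from the original proposal):

```python
def weighted_verdict(votes, trust, threshold=0.5):
    """votes: juror -> 'yes' / 'no'; trust: juror -> weight in [0, 1].
    Low-trust jurors (new accounts, habitual contrarians) count for less."""
    total = sum(trust[j] for j in votes)
    yes = sum(trust[j] for j, v in votes.items() if v == "yes")
    return yes / total > threshold

def update_trust(trust, juror, agreed_with_majority, step=0.05):
    """Nudge a juror's hidden trustability after each verdict,
    clamped to the [0, 1] range."""
    delta = step if agreed_with_majority else -step
    trust[juror] = min(1.0, max(0.0, trust[juror] + delta))
```

Keeping the trust value hidden is the point of the last paragraph above: a mob whose votes are being down-weighted gets no signal telling it to create fresh accounts.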
Re: (Score:2)
Re:Jury Qualification Improvement (Score:4, Interesting)
Thing is, this problem isn't one of mere trolls. Trolls, spammers, and other forms of lesser life are relatively easy to recognize.
No, these are paid shills and organized groups with an agenda. And that's much much harder to stop, because they will have 'spies' trying to infiltrate and/or control your jury selection, 'lawyers' looking for loopholes in your system, and a semi-disciplined mob who will be happy to carry out their plans carefully.
An example of what they might do if they were trying to take over /. :
1. See if they could find and crack old accounts that haven't been used in a while, so they could have nice low UIDs. These are your 'pioneer' accounts. If you aren't willing or able to pull that off, make some new accounts, but expect the takeover to take longer.
2. Have the 'pioneers' post some smart and funny comments about stuff unrelated to your organization's angle to build karma and become moderators.
3. Have your larger Wave 2 come in, possibly with new accounts. Still be reasonably smart and funny on stuff unrelated to the organization's angle. Have your pioneers mod up the Wave 2 posts.
4. Repeat steps 2 and 3 until your group has a large enough pool of mods so that you can have at least 5 moderators ready whenever a story related to the organization's ideology comes up.
5. Now let your mob in. Have your moderators mod up the not-totally-stupid mob posts in support of your organization's ideological position, and possibly mod down as 'Overrated' (because that's not metamodded) anything that would serve to disprove it.
You now have the desired results: +5 Insightful on posts that agree with $POSITION, -1 Overrated on posts that disagree with it, and an ever-increasing pool of moderators who will behave as you want them to with regards to $POSITION.
I have no knowledge of whether anyone has carried out this plan already, but it wouldn't surprise me if they had. The system on /. is considerably more resilient than, say, the New York Times comment section or Youtube, but still hackable.
Cultural sensitivities (Score:3)
Although you can't expect people to identify themselves as being knowledgeable about every conflict, argument, religious view, political wrangle, or moral panic, you could choose individuals from the same timezone and hemisphere that the complaints originate from (and maybe only ban the offending piece in that geography, unless more complaints are received from outside it).
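A minimal sketch of that timezone-local jury selection; the juror records and the UTC-offset field are invented for this example:

```python
# Illustrative sketch: prefer jurors geographically/culturally near
# the region the complaints came from, falling back to anyone.
import random

def pick_local_jury(jurors, complaint_utc_offset, size=12, window=3):
    """jurors: list of dicts with a 'utc_offset' field (an assumption).
    Picks jurors within `window` hours of the complaint's region."""
    local = [j for j in jurors
             if abs(j["utc_offset"] - complaint_utc_offset) <= window]
    pool = local if len(local) >= size else jurors
    return random.sample(pool, min(size, len(pool)))
```

The fallback to the full pool matters: a complaint from a region with few registered jurors should still get reviewed rather than stall.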
Re: (Score:2)
The thing we must consider is that China is currently the single largest user base on the planet. Indi
Re: (Score:3)
Let us consider a page on Facebook that is critical of Islam. Who would be considered appropriate to moderate that page? Most (if not all) Muslims would mark it inappropriate or offensive because it offends their beliefs, yet to Christians or others it may be considered informative and appropriate.
As a conservative Christian (I am not saying you are), would you want your 13-y
A few ideas. (Score:1)
I like your idea of crowd-sourcing, and I came up with a few ideas while reading yours:
Test the judgement of the moderators.
When mods are called upon to moderate something, make sure they have no way of knowing whether it's a real case or not. For example, you can ask a mod whether "MR. LANGAN IS A BUTT BRAIN," or any other post whose correct disposition you already know, should be modded or not. Since you know beforehand how the content should be handled, you can test the modding ability of the moderator.
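This "seeded test case" idea can be sketched as follows; the queue format, rate, and function names are assumptions for illustration:

```python
# Sketch of honeypot calibration: mix cases with a known correct
# verdict into a juror's queue so the juror can't tell tests apart.
import random

def build_queue(real_cases, honeypots, honeypot_rate=0.2):
    """honeypots: list of (case, known_verdict). Real cases carry
    None as their 'known verdict'; the juror sees neither marker."""
    queue = [(c, None) for c in real_cases]
    n = max(1, int(len(real_cases) * honeypot_rate))
    queue += random.sample(honeypots, min(n, len(honeypots)))
    random.shuffle(queue)
    return queue

def score_juror(answers):
    """answers: list of (juror_verdict, known_verdict_or_None).
    Accuracy is computed on the honeypots only."""
    graded = [(a, k) for a, k in answers if k is not None]
    if not graded:
        return None
    return sum(a == k for a, k in graded) / len(graded)
```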
Describe what how th
Give this guy $100! (Score:2)
This reminds me of the video appeal in pro tennis. If your appeal was bogus you lose it, otherwise you get to appeal again. This makes you think twice before appealing, and will therefore reduce the review load.
Minor possible improvement: you get the right to enter a complaint only after some time since the creation of the account has passed, or if you have reached a certain amount of activity; this will deter the creation of shill accounts, or at least it will increase the friction/cost (time, energy) for
Re: (Score:2)
You missed a step:
4. If there's a large number of false complaints against a user (whether all at once or spread over time), it becomes harder to complain against that user. For example, you may need minimum community participation (e.g. been around for at least 4 days and made 10 posts, etc.) or otherwise managed to get yourself trusted enough by the community.
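The tennis-challenge model plus the account-age friction above could be sketched like this; every threshold and name here is an invented assumption:

```python
# Rough sketch of the "video appeal" model: each bogus report costs
# credibility, upheld reports restore a little, and brand-new or
# inactive accounts can't report at all (anti-shill friction).
class Reporter:
    def __init__(self, min_age_days=4, min_posts=10):
        self.credibility = 1.0
        self.min_age_days = min_age_days
        self.min_posts = min_posts

    def may_report(self, account_age_days, post_count):
        return (account_age_days >= self.min_age_days
                and post_count >= self.min_posts
                and self.credibility > 0.0)

    def record_outcome(self, upheld):
        # Losing a challenge costs far more than winning one gains,
        # so serial false reporters burn out quickly.
        if upheld:
            self.credibility = min(1.0, self.credibility + 0.1)
        else:
            self.credibility = max(0.0, self.credibility - 0.5)
```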
look at stack overflow model (Score:1)
"Prior art" (Score:2)
There is "prior art" for this idea, if you know where to look. OKCupid.com has had a crowd-sourced "flagmod" [okcupid.com] system on its Web site for years.
Don't judge the rules; judge the evidence (Score:2)
Require that abuse reports include a freeform description of why the suspected content violates the rules.
Then, don't ask the jurors whether the content violates the rules. Instead, ask them: Is (the freeform description) a true statement about the suspected content?
In other words: someone reports content for violation of TOS. Reason: "This content contains nudity."
The juror then gets the moderation request, and they answer a single question. Is: "This content contains nudity" true about the suspected
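The claim-verification vote described above might be tallied like this; the quorum and threshold values are assumptions, not anything the commenter specified:

```python
# Sketch of the "judge the evidence" flow: jurors answer only whether
# the reporter's stated reason is true of the content.
def verdict(claim_is_true_votes, quorum=5, threshold=0.6):
    """claim_is_true_votes: booleans from jurors answering
    'Is the reporter's description true of this content?'"""
    if len(claim_is_true_votes) < quorum:
        return "need more jurors"
    share = sum(claim_is_true_votes) / len(claim_is_true_votes)
    return "uphold" if share >= threshold else "reject"
```

Asking a narrow factual question ("does this contain nudity?") rather than a judgment question ("does this violate the rules?") keeps jurors from substituting their own standards for the site's.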
User Accountablility (Score:1)
Exposing volunteers to REAL abusive images/videos/text is IMHO worse than trying to catch false positives. If you want to be a f
FaceBook doesn't care (Score:3)
This is a well-studied "Who watches the watchers?" web of trust type issue. While there is no perfect solution, there are a number of good approaches. This page on Advogato [advogato.org] describes a good trust metric for reducing the impact of spam and malicious attacks. It wouldn't be that big of a deal for FaceBook to incorporate some such system. However, it would require FaceBook to actually care about being fair to its users, which it doesn't. FaceBook exploits for financial gain the tribal desires of people to band together and be part of a group. So FaceBook really uses its abuse policy as a way to force people to follow the rules of the bigger and more aggressive tribes. Such battles actually help FaceBook to be successful because they strengthen the tribal behaviors that benefit FaceBook's bottom line.
So all in all, no matter what brilliant, cost-effective, robust moderation/abuse system you design or crowd source, the very, very best that you can hope for is that somebody at FaceBook might pat you on the head and thank you for your efforts and say that they aren't interested in your contribution at this time.
I see... (Score:2)
My Suggestion... (Score:3)
It would work something like this: you would have a small group (employees of Facebook, or wherever) that takes a selection of (actual) complaints and determines how their "ideal" juror would handle each one. Feed these at random to the jury pool, and if jurors are not voting the way they should, reduce (or remove) their voting power to affect the outcome of the decision-making process; alternatively, if they have a strong history of voting exactly the way they should, their votes would carry more weight in non-test cases.
I wouldn't necessarily "kick out" jurors, but their voting power could be diminished to nothing if they have a very poor track record. I also don't think that jurors should know that they're being tested, what their voting power is, or even that their voting power carries more or less weight than anyone else's.
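The hidden-weight vote this comment describes could look something like this sketch; the weight values and representation are illustrative assumptions:

```python
# Sketch of hidden-weight voting: each juror's vote is scaled by a
# private weight earned on seeded test cases; jurors never see it,
# and a zero-weight juror still appears to vote normally.
def weighted_outcome(votes, weights):
    """votes: juror_id -> 'remove' | 'keep'
    weights: juror_id -> float (default 1.0 for untested jurors)"""
    totals = {"remove": 0.0, "keep": 0.0}
    for juror, vote in votes.items():
        totals[vote] += weights.get(juror, 1.0)
    return max(totals, key=totals.get)
```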
Excellent use of meta-moderation! (Score:1)
eh (Score:1)
He's helping kids circumvent security systems at their school to access banned sites and doesn't understand why he's getting complaints?
Here's your sign...
Improvements (Score:1)
Overestimating average Facebook user (Score:1)
Improvements (Score:1)
Tweaking the Algorithm (Score:1)
I think your system would work quite well. I don't think even 100 jurors are needed; I would suggest an initial pool of 20. Within that 20, if 17 say A and 3 say B, go with A. If it's closer, say 8-12, then enlarge the pool to 50. If it's still close, expand the pool again.
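The escalating-pool rule can be sketched as follows; the pool sizes are the commenter's own numbers, while the 70% margin and the fall-through to staff are my assumptions:

```python
# Sketch of adaptive jury sizing: start small, expand only when the
# split is close, and hand genuinely contested cases to a human.
def decide(poll, pools=(20, 50, 100), margin=0.7):
    """poll(n) -> (votes_for_A, votes_for_B) from a jury of size n.
    Returns 'A', 'B', or 'escalate to staff' if every pool is close."""
    for n in pools:
        a, b = poll(n)
        share = max(a, b) / (a + b)
        if share >= margin:
            return "A" if a > b else "B"
    return "escalate to staff"
```

Most cases resolve with 20 jurors, so the expensive large pools are only spent on the contested minority.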
One of the keys is that people should NOT be able to volunteer for jury duty. This will keep the deck from being stacked by the self-righteous.
I would suggest that Jurors be selected from people who have:
* Active accounts --
Re: (Score:2)
They should not lose their account or have any obvious indication that the account's ability to report has been revoked, because that's just telling them to create another account. The report interface should still appear to work normally even if the user's "credibility" multiplier is zero.
the jury idea is a good one (Score:2)
The jury's decision to remove an item should not be final; a site employee should have the final say.
Don't remove content in the first place. Put up a "this contains potentially offensive content" warning and let people click through if they want.
Once a complainer reports a link/image as offensive, remove it from their access so they don't see it any more.
Jury Selection Algorithm needed? (Score:2)
One method of pre-screening potential jurors might be to process their own published Facebook content and analyse it for irrational thoughts or extreme positions on topics related to what is being voted on. Let's suppose you are looking at a potential takedown of someone's web profile because of it being accuse
Mob Behavior Has Its Own Patterns (Score:2)
I still believe that the best solution is to leave the censorship to professional mods, who at least know what's really forbidden and what's not.
If you accept my assumption then the answer to your question lies in pattern analysis. If the people reporting are very tightly clustered, as in being friends with one another, having the same interests, liking the same pages or belonging to the same groups then the likelihood of mob behavior increases. The relevancy of such analysis can be determined by looking at
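One simple version of the clustering check described above; the friends-graph representation is an assumption for illustration:

```python
# Sketch of the mob-pattern signal: if the accounts filing reports
# are densely connected to one another, coordination is more likely.
from itertools import combinations

def reporter_cluster_density(reporters, friends):
    """friends: dict user -> set of friend ids.
    Returns the fraction of reporter pairs who are friends (0..1);
    values near 1 suggest a tightly knit, possibly coordinated group."""
    pairs = list(combinations(reporters, 2))
    if not pairs:
        return 0.0
    linked = sum(1 for a, b in pairs if b in friends.get(a, set()))
    return linked / len(pairs)
```

A high density would not prove abuse by itself, but it could flag a complaint wave for human review instead of automatic takedown.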