Ask Slashdot: How Should User-Generated Content Be Moderated? (vortex.com)
"I increasingly suspect that the days of large-scale public distribution of unmoderated user generated content on the Internet may shortly begin drawing to a close in significant ways..." writes long-time Slashdot reader Lauren Weinstein.
And then he shares "a bit of history": Back in 1985 when I launched my "Stargate" experiment to broadcast Usenet Netnews over the broadcast television vertical blanking interval of national "Superstation WTBS," I decided that the project would only carry moderated Usenet newsgroups. Even more than 35 years ago, I was concerned about some of the behavior and content already beginning to become common on Usenet... My determination for Stargate to only carry moderated groups triggered cries of "censorship," but I did not feel that responsible moderation equated with censorship — and that is still my view today. And now, all these many years later, it's clear that we've made no real progress in these regards...
But as it stands now, Weinstein believes we're probably headed to "a combination of steps taken independently by social media firms and future legislative mandates." [M]y extremely strong preference is that we deal with these issues together as firms, organizations, customers, and users — rather than depend on government actions that, if history is any guide, will likely do enormous negative collateral damage. Time is of the essence.
Weinstein suggests one possibility: that moderation at scale "may follow the model of AI-based first-level filtering, followed by layers of human moderators."
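As a purely illustrative sketch of what such a tiered pipeline could look like (the risk model, thresholds, and queue here are all hypothetical assumptions, not anything Weinstein specifies):

```python
# Hypothetical sketch of AI-first, human-second moderation triage.
# The risk model and thresholds are illustrative assumptions, not a real system.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    post_id: int
    text: str

def triage(post: Post, risk_model: Callable[[str], float],
           auto_remove_at: float = 0.95, auto_publish_below: float = 0.10) -> str:
    """First-level AI filtering: only uncertain cases reach human moderators."""
    risk = risk_model(post.text)
    if risk >= auto_remove_at:
        return "remove"
    if risk < auto_publish_below:
        return "publish"
    return "human_review"   # escalate to the human layers

# Toy stand-in for an ML risk model: scores posts by blocklisted terms.
def toy_risk_model(text: str) -> float:
    blocklist = {"doxx", "kill yourself"}
    return min(1.0, 0.5 * sum(term in text.lower() for term in blocklist))

human_queue: List[Post] = []
post = Post(1, "someone should doxx him")
if triage(post, toy_risk_model) == "human_review":
    human_queue.append(post)   # second layer: human moderators decide
```

The design point is that the expensive (and, as commenters note below, traumatic) human layer only sees the cases the model is genuinely unsure about.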
But what's the alternative? Throngs of human moderators? Leaving it all to individual users? Limiting the amount of user-generated content? No moderation whatsoever?
Share your own thoughts and ideas in the comments. How should user-generated content be moderated?
Get rid of anonymity (Score:5, Funny)
The trouble with everyone keeping their identity secret is that no-one needs to take responsibility for what they say.
I'm posting this anonymously 'cos I'm afraid of the consequences.
Re:Get rid of anonymity (Score:5, Insightful)
Basically you'd probably also kill the Internet in general; it would become 'read-only', just Cable TV 2.0. Without user-generated content, there is no 'Internet' anymore.
Re: (Score:2)
Even in what we might call liberal democracies, most people's freedom to do and say what they want is severely reduced by the simple fact that they're forced to spend most of their waking hours under the dictate of an employer. Depending on the specifics of that relationship, posting something, anything, online within working hours may already be a sanctionable violation of contractual duties, which can ultimately lead to loss of job and livelihood.
Re: (Score:3)
Free speech without consequences is a bad idea.
Knowing that what you say can have consequences will often give a person pause to rethink what they are saying and the value of what they are saying.
Our freedom of speech isn't about avoiding consequences, but about the government not imprisoning me for saying what I feel is right. I spent the last 4+ years saying how much I dislike Trump, often publicly. This has turned some of the Trump supporters against me, but I haven't been arrested by the US Government.
Re:Get rid of anonymity (Score:5, Insightful)
Studies into the ability of systems to survive long term all tend to find a particular value that's a balance between order and chaos: enough rules that are followed to ensure the system doesn't just degrade and destroy itself, and enough freedom to actually grow, change, and adapt.
Where social media is today is blatantly nowhere near this value; the prevailing concept is "I can say what I want, with absolutely no accountability" (or near enough, given the vitriol that's spewed by both sides of the political spectrum).
So, having a chilling effect would actually be beneficial to the entire system as a whole. It needs to be "slowed" from this absolutely insane headlong rush to chase the next dopamine hit that seems to be current day Social Media.
On the subject of doxxing, it's a bit late for that. Social media is already used to doxx people. I've seen it many times, to grand cheers from "bystanders," because it's the latest political crusade and groups have found someone they can harass. The reason they can do that with impunity is often because it's not easy for the general bystander to work out who they are (they face no risk in doing this, or at least little risk).
Again, this starts to go back to the concept of privacy. Just because it's "Social Media", why on earth would you want to be broadcasting to the world, and a whole load of random strangers about what you're up to, with a megaphone?
What you'd find is that most people will go back to groups of real life friends, with no information spilling out of those areas, or very little. This is very analogous to how it works in a "regular" society, just faster across distance.
As for killing the internet in general, where on earth did you get that idea? Social media is a latecomer. The internet ran just fine before the "Eternal September," and so much of what it's about these days is not about that newcomer upstart; the loss of social media entirely would just be routed around, and people would come to think of it as another failed experiment. Remember email and instant messenger? They still work nicely.
The internet is still there, still humming along nicely, still doing business and connecting people, just a lot of the shouty mob no longer has an amplifier.
What's the problem with that? So far, I'm seeing no merit, or grounding for your assertions.
You seem to believe that Facebook and Twitter have some magical reason they must survive, no matter what. Since when was that mandatory? They're businesses. They grow, shrink, adapt, and most eventually fail as the world overtakes them (being superseded by something fitter for the world they're in). That's the nature of change and progress.
Re: (Score:3)
We should also consider that having a receptacle, a "tip," for the shouty mob to mouth off in, walled off from the rest of us, is a good idea.
But for that, those sites need free speech.
Further, there's a feedback loop between Old Media and New Media. Many news articles cover something "important" because a bunch of baristas on Reddit or Apple employees on F
Re:Get rid of anonymity (Score:4, Insightful)
Social media has been used for quite a while as an extension of US foreign policy: allow people in the countries of adversaries to mobilize to the maximum extent possible, with absolute disregard for the validity of whatever they are saying.
Domestically the attitude is entirely opposite. Only the right type of mobilisation is allowed. Only the right type of things is allowed to be said. It is an extremely antidemocratic attitude.
If you look at the demonstrations which triggered all this (or reinvigorated the ongoing trend), there was little illegal in them.
Information which is wrong and which mobilizes people is not illegal. Conspiracy thinking is not illegal. The demonstrations were not illegal. As soon as a riot element comes in there are illegal elements, but the amount of violence in demonstrations should be handled in a proportionate manner. In this case the police presence was clearly inadequate, in stark contrast to what happened at the BLM protests.
Hate groups and all kinds of extremism are more difficult. We tend to draw a distinction between public proselytizing of extremist ideas and closed-group extremist ideas: as long as there are no illegal acts, preventative measures against future illegal acts should be kept to a minimum. When people meet online there is a shift towards more public behavior (larger groups), but there is certainly also a shift towards 'oh, wouldn't it be great if we could monitor all that and intervene much earlier'.
You are not supposed to intervene early. You are not supposed to monitor closely what people say and do. You are not supposed to sabotage social mobilisation. You are not supposed to make it impossible to rebel violently against the state. You're supposed to offer democratic means for change so people do not see violence as the only way out.
Re: (Score:3)
Democracy's success needs to be tied to accurate and truthful information.
If authoritative sources are spreading falsehoods about a political party or a person running for election, that is harmful to democracy, because we have to decide how to deal with conflicting information from multiple authoritative sources.
We don't have a ministry of truth that can dictate whether a source is authoritative or not. But if enough people believe in a source, then it is authoritative.
There are a lot of Con
Re: (Score:2)
scary.
russian hackers beware
Re: (Score:2)
This used to be the case - e.g. back in 2004 there was the famous Greater Internet Fuckwad Theory
https://www.penny-arcade.com/c... [penny-arcade.com]
These days however, not so much. The fact that white supremacists were quite recently more than happy to livestream their insurrection, under their own names, really goes to show that many people just don't care.
Anonymity definitely encourages others to say all kinds of crap, but lack of anonymity really doesn't stop a lot of people.
Re:Get rid of anonymity (Score:5, Interesting)
How's the opposite working out for Facebook?
Anonymity allows people to speak freely. Allowing people to speak freely is the opposite of censorship. But when people can speak freely some people let rip with trolls so vicious that they'd hound good people until they committed suicide. So moderation is needed otherwise it becomes a total shitfest. Some people want a total shitfest so that they can hound people to death. Some people will say it's ok to have a shitfest where you can hound people to death and that is the price to pay for being able to hound people to death, I mean anything else is censorship right.
Some censorship is a good thing. Everyone censors themselves outside of the internet; if everyone spoke everything that came into their head all the time, life would be complete insanity, like YouTube comments.
Re: (Score:3)
And they should go by that name in real life too, to avoid mixups!
Re:Get rid of anonymity (Score:4, Interesting)
The trouble with everyone keeping their identity secret is that no-one needs to take responsibility for what they say.
I'm posting this anonymously 'cos I'm afraid of the consequences.
I prefer to comment anonymously in most contexts, although I'm not anonymous here. My reason is that while I have no issue with someone looking up the person behind a comment they see in a specific context, I don't like it going the opposite way: looking me up and having Google provide everything, contextless.
Re: (Score:2)
You're not posting anonymously but twice anonymously.
When you create a user id, especially in a small online community, it takes on a reputation of its own, formally (score) as well as informally. You care about it a bit like in real life, even if it is not formally attached to your real-life identity.
This weakens a bit if you scale up things and interactions become more random. So scale matters.
Relativity (Score:5, Insightful)
One man's moderation is another man's censorship. Compare US, UK, Germany, Russia, China, each with different views on the subject. No set of rules can accommodate them all.
Then recall that all these differing views are present, in varying degrees, among people in a single country.
I see no perfect solution to this conundrum.
One option might be total freedom of speech on the sender's side and advanced automated filtering on the reader's terminal. Except such filtering is probably an AI-complete problem.
Re:Relativity (Score:4, Interesting)
A perfect solution would be the ability to subscribe to a moderator that aligns with your interpretation of the rules. Any posts he purges are purged for all his subscribers. This also means if he becomes "corrupt" you can put him out of sight and out of mind with the click of a button.
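A minimal sketch of how such subscribable moderators might work, with all names and structure invented for illustration:

```python
# Hypothetical sketch of subscriber-chosen moderation: each user sees the
# feed minus anything purged by the moderators they currently subscribe to.
from typing import Dict, List, Set

class Moderator:
    def __init__(self, name: str):
        self.name = name
        self.purged: Set[int] = set()   # post ids this moderator has removed

    def purge(self, post_id: int) -> None:
        self.purged.add(post_id)

class User:
    def __init__(self):
        self.subscriptions: List[Moderator] = []

    def unsubscribe(self, mod: Moderator) -> None:
        self.subscriptions.remove(mod)   # one click: the "corrupt" mod is gone

    def visible_feed(self, feed: Dict[int, str]) -> Dict[int, str]:
        hidden: Set[int] = set()
        for m in self.subscriptions:
            hidden |= m.purged
        return {pid: text for pid, text in feed.items() if pid not in hidden}

feed = {1: "useful post", 2: "spam", 3: "flamebait"}
strict = Moderator("strict")
strict.purge(2)
strict.purge(3)

alice = User()
alice.subscriptions.append(strict)
assert alice.visible_feed(feed) == {1: "useful post"}
alice.unsubscribe(strict)               # moderator out of sight, out of mind
assert alice.visible_feed(feed) == feed
```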
Re: (Score:3)
A perfect solution would be the ability to subscribe to a moderator that aligns with your interpretation of the rules. Any posts he purges are purged for all his subscribers. This also means if he becomes "corrupt" you can put him out of sight and out of mind with the click of a button.
+1000 if I had the mod points
We need a system which maximizes individual liberty. My ideal platform to use would definitely be one where I am in control of what to see and what not to see. I don't want anyone to decide what content is good for me or not.
This of course does not include "criminal speech" such as incitement to violence (illegal in many countries).
Re: (Score:3)
To me, it seems that this would make the problem of echo chambers, with misinformation you like receiving, even worse than it is today. Society is faced with a huge issue. Yes, censorship is a way to facilitate misinformation spread by the authorities. However, the ability for essentially closed groups to be able to radicalise each other by circulating dangerous propaganda is even more dangerous. It has already led to genocide in Myanmar, and looks likely to do so in India. It has facilitated radicalisation
so an automated echo chamber (Score:3)
Re: (Score:2)
Won't work, the freeze peach warriors will be upset by the default moderator. They want to be on the front page, to be in the recommendations. They get very upset when they are "shadow banned", which in this case would mean the default moderator blocks them.
Also this would create bubbles. Some moderators would abuse their power to filter out anything running contrary to their preferred narrative. We should be looking for ways to break people out of bubbles so we don't end up with another QAnon.
In the end some of the moderators would themselves get banned for not removing illegal material or actively promoting it. The job itself would be hell, being exposed to the worst of humanity and relentlessly doxed and harassed by people who disagree with their policies.
Re: (Score:2)
This is true, things like 230 need to be amended to say that removal of illegal content is the sole responsibility of paid staff. Paid with money.
Re: (Score:2)
It's like cleaning up toxic waste. Regardless of how much you pay someone they still need a hazmat suit. Moderating Facebook is like that, there is no safe dosage, no way to wash away the horrors you are exposed to.
Re: (Score:2)
You misspelled "All" at the beginning of the sentence there.
Unfortunately, there is no good solution to the problem: people will naturally think of other people who agree with them as "smart" (or "wise") people. This applied when the only source of news was the newspapers. It still applied when people got their news by radio. When TV was the way people got news, same. And now it's the same with i
Re: (Score:2)
Won't work, the freeze peach warriors will be upset by the default moderator. They want to be on the front page, to be in the recommendations. They get very upset when they are "shadow banned", which in this case would mean the default moderator blocks them.
Also this would create bubbles. Some moderators would abuse their power to filter out anything running contrary to their preferred narrative. We should be looking for ways to break people out of bubbles so we don't end up with another QAnon.
In the end some of the moderators would themselves get banned for not removing illegal material or actively promoting it. The job itself would be hell, being exposed to the worst of humanity and relentlessly doxed and harassed by people who disagree with their policies.
In my opinion there should not be any default moderator. Let users initially select from a list of diverse popular moderators instead - or search for some less popular ones which fit them.
Yes, some moderators will filter stuff running against their preferred narrative. But then you can leave them with the click of a button and subscribe to other moderators that give you a more balanced viewpoint (I don't think you should be limited to one moderator btw. - just like you can subscribe to multiple YouTube chan
Re: (Score:2)
Still got the same problem though. Who is on the default list? Why isn't my preferred truth teller on there? Why is QAnon9234 not listed?
How is it ranked, pure popularity contest? How are users supposed to make an informed choice, does each candidate get to make a pitch?
It will end up like YouTube, with curated channels that create a pipeline to bubbles and extremism. There is a reason the Christchurch terrorist told people to subscribe to PewDiePie.
Re: (Score:2)
Still got the same problem though. Who is on the default list? Why isn't my preferred truth teller on there? Why is QAnon9234 not listed?
How is it ranked, pure popularity contest? How are users supposed to make an informed choice, does each candidate get to make a pitch?
It will end up like YouTube, with curated channels that create a pipeline to bubbles and extremism. There is a reason the Christchurch terrorist told people to subscribe to PewDiePie.
I don't see that as a big problem as long as the algorithm for the initial list is transparent (could simply be the 30 most popular moderators in your country). QAnon9234 is not listed because he does not have sufficient subscribers. Yes, they would of course need to describe their policy for content moderation. Preferably in a systematic manner.
And no, I don't think it will generally create a pipeline to bubbles and extremism but rather to freedom and choice.
Re:Relativity (Score:5, Insightful)
A perfect solution would be the ability to subscribe to a moderator that aligns with your interpretation of the rules. Any posts he purges are purged for all his subscribers. This also means if he becomes "corrupt" you can put him out of sight and out of mind with the click of a button.
I'd ask perfect for whom? That sounds like a perfect echo chamber, which while pleasing to the ear is not healthy. Imagine as a thought experiment a teacher that only ever gave lessons that pleased their pupils. I'd still be playing with the giant wooden blocks from Kindergarten... I wonder if Amazon sells those... I'll be... NO CARRIER
Privately-owned companies can decide (Score:3, Insightful)
If you want basically NO rules other than 'blatantly illegal gets you banned' then go use something like 4chan and take the bad with the good.
For what it's worth I don't use Facebook, Twitter, or any of these other so-called 'social media' sites because I think they're ridiculous and a waste of time and a cancer on our society, and we'd be better off without them. But if you're going to use them, you play by the rules they set down, you agreed to them.
Re:Privately-owned companies can decide (Score:5, Insightful)
This is "roll your own".
And we see how that's working out.
Say something innocuous.
You can't be here. Roll your own.
Goes somewhere else.
Colludes to get you booted THERE too. Roll your own!
Build your own service.
Colludes to destroy your service and business relations. Roll your own!
What? Roll my own INTERNET? Just to send a fucking tweet?
Be serious and sane for just ONE second in your life.
Re: (Score:2)
Well, he could have just hosted it on colo'd servers owned by himself privately (or his own companies), on network connections provisioned directly by his company from the backbone provider. Then, if he'd just refrained from obvious treason and hadn't killed off net neutrality he could have continued to lie, spam, and antagonize to his heart's content, and it would have been actually illegal for anyone to stop him. I think the problem here is you keep thinking the other half of the argument is the one tha
Re: (Score:3)
It's not shadowy forces destroying your homebrew Twitter clone, it's lack of a viable business plan.
Look at Voat. Didn't get booted off hosting or anything like that. It just ran out of money because there was no viable way to make any and the users were too cheap to pay for it.
That's just the normal way startups work, if you hadn't noticed. Twitter was one of many and just happened to get big enough with enough investment to survive, but 100 others didn't.
Besides which there are examples of successful righ
Re: (Score:3)
I would argue very strenuously that private organizations working together to set expectations of behavior in "the commons" is our system working as intended.
If that strikes you as inherently evil, you probably shouldn't live in a representative capitalist society, but some other form of government where social norms are not driven by a profit motive.
If you'd like a perfect libertarian society where free speech means no private limitations on what you can say, then you're talking about a society where peopl
Re: (Score:2)
Actually, privately owned companies have taken over public space, and they coordinate closely with dominant political players to control speech.
If you don't like that then go start Gab. Or Parler.
Select mature people... (Score:5, Insightful)
... as super moderators that are knowledgeable and ruthlessly skeptical of everything.
I mean the george carlin types, that know humanity is one giant race of bullshit artists.
https://www.youtube.com/watch?... [youtube.com]
If there's anything I've seen, it's the infinite capacity of most people to deceive themselves. Lots of bs gets upvoted or downvoted as generations change, because the new generation doesn't like the facts of the old generation (aka anything related to DRM and World of Warcraft, and Steam, since most modern Slashdotters are tragically DRM lovers).
So you get bullshit as one demographic outpopulates another, demographics shift, and old Slashdotters become a minority.
Re: (Score:2)
They will just be shredded by accusations of bias, doxed and torn down.
Who would want that job anyway, sifting through the absolute worst of humanity 8 hours a day? It's the kind of thing you do because you can't find anything better, like working at McDonalds, or because you mistakenly believe it might get you into Facebook.
A two-tier identity system (Score:5, Interesting)
We need a strong way to guarantee our identities. Government has to get involved with that, sorry. Most modern governments have one already, to file your taxes or similar.
However, most sites should not use this system directly. Instead, with some cryptographic cleverness, it should be possible for the gov-ID service to guarantee that the person making an account on your twitter-clone now is a real physical person who has no other accounts on your twitter-clone.
The twitter-clone (or whatever site it is) does not know which physical person is registering an account, only that they are a real one (and just one). The gov-ID service, on the other hand, knows that a certain physical person registered an account on the twitter-clone - but not what their pseudonym there is.
I think that with pseudonymity, but without unlimited throwaway accounts, a lot of dysfunction goes away. Forcing people to operate under full name is counterproductive - as we have seen, when that's the case, people with nothing socially to lose are overrepresented.
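One classic building block for exactly this "the issuer can't link, the site can't identify" property is a blind signature. Below is a minimal textbook RSA-blinding sketch of the idea; it is illustrative only, since a real deployment would need full-domain hashing, per-service one-token bookkeeping, and a vetted protocol, none of which this toy shows:

```python
# Toy RSA blind-signature sketch of the two-tier idea: the government signs a
# per-site token without seeing it, and the site verifies "real, single person"
# without learning who you are. Textbook crypto for illustration, NOT production-safe.
import hashlib
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa

# Government identity service key (in reality: long-lived and audited).
gov_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = gov_key.public_key().public_numbers()
n, e = pub.n, pub.e
d = gov_key.private_numbers().d

# Citizen derives a per-site token; the secret keeps it unlinkable across sites.
user_secret = secrets.token_bytes(32)
token = int.from_bytes(hashlib.sha256(b"twitter-clone" + user_secret).digest(), "big")

# Citizen blinds the token before sending it to the government.
# (r is coprime to n with overwhelming probability for an RSA modulus.)
r = secrets.randbelow(n - 2) + 2
blinded = (token * pow(r, e, n)) % n

# Government signs blindly (it sees only `blinded`, never `token`), after
# checking, out of band, that this citizen hasn't already drawn a token
# for this particular service.
blind_sig = pow(blinded, d, n)

# Citizen unblinds; the result is a valid signature on the original token.
sig = (blind_sig * pow(r, -1, n)) % n

# The twitter-clone verifies: a real, single person — and no name attached.
assert pow(sig, e, n) == token
```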
Re: (Score:2)
Good ideas but let's take it a step further. Because sensational propaganda tends to crowd out facts, stripping everybody of their credentials tilts the playing field in favor of whoever can make the most outlandish but believable claim, and that puts us right back at square one. So let's provide the option for users to verify their credentials in order to give the experts a little more ammo against disinformation. But like you said, do it without the twitter-clone knowing who the physical person is.
Re: (Score:2)
Yes, I agree. You should be able to use the government's system to prove who you are, if YOU want to. That's the least they owe you for keeping track of you.
Right now many governments have strong ID systems, but they only allow them to be used on their own sites and maybe a few privileged sites (banks, etc.) It should be possible to instead put the citizen firmly in charge here.
Re: (Score:2)
Some CAs already do identity checks [wikipedia.org], I don't think we need to involve governments in this.
Re: (Score:2)
He did suggest it so that a) the platform only gets the information that the user is a real person but not their name, age or any other information about them and b) the government only gets to know that the user registered an account on a service, not their account name or anything else. Not saying I would support this (or not), but if you nevertheless think this would cause problem if implemented the way suggested here then please explain how? Not saying it wouldn't cause problems, just interested if you
Re: (Score:2)
the government only gets to know that the user registered an account on a service, not their account name or anything else
That seems sufficient to cause problems. If the government has a list of every citizen with an account on www.IDon'tLikeTheCurrentGovernment.com it could trivially round up opposition. Imagine McCarthyism, but the government already has ready-made lists of people who are sympathetic to "dangerous" political ideologies.
Re: (Score:2)
it wouldn't be "www.IDon'tLikeTheCurrentGovernment.com".
It'd just be "HereWeDiscussPolitics.com", where there will be a subforum on "I don't like the gov.t".
All of us will be on HereWeDiscuss, and the government won't know who goes to the subforum.
That sounds very reasonable to me.
Repeal Section 230 (Score:4, Interesting)
I'll say something unpopular: Section 230 should be repealed.
Giving websites immunity from 3rd party content has led to forums where no one is accountable for the content that is produced on them, and the consequence is that online discussions are easily hackable by trolls or third parties/foreign actors with less than honorable interests. The exploit is confirmation bias. People are being duped left and right by total bullshit that sounds plausible and confirms their biases, and in just 25 short years the internet has brought democratic discourse in the United States to its knees.
If we make content providers accountable for what they publish, then fact checking becomes necessary, and it will make it a lot harder for online discourse to be hacked and manipulated in the way we are seeing now. Yes, that means forums like Slashdot, Reddit, Hacker News, and anything with a real-time commenting system will essentially be impossible. So be it. I just saw the U.S. Capitol invaded by people who think the 2020 election was stolen and Democrats keep child sex slaves in the basement of pizza shops. Something has to give.
Re:Repeal Section 230 (Score:5, Insightful)
Section 230 needs to be REWRITTEN.
Platforms should be limited to the legal standard in what they can and cannot remove.
Because if they're going to editorialize on anything beyond that, they're a publisher.
Re: (Score:3)
How are they going to make any money or block spam if they can only remove illegal stuff?
How will YouTube avoid becoming a porn site?
And will conservatives stop complaining that the algorithm isn't promoting their content? Because right now just being moved down the search results is censorship according to them. Or does the algorithm have to be replaced by rand() now?
Re: (Score:2)
Giving websites immunity from 3rd party content has led to forums where no one is accountable for the content that is produced on them
In principle, the individual who posted the content is accountable for it. If I post a specific threat of violence against a politician on Slashdot along with my name and address, the FBI/Secret Service should show up at my door, not at Slashdot HQ. In practice, of course, it's trivial to anonymize yourself on most internet services, so the ability to hold posters accountable for their content falls apart quickly.
Given the choice between holding content providers (ie Facebook, Slashdot, Twitter) account
Re: (Score:3)
You realize this also destroys sites like GitHub, stackoverflow etc. right? This would also destroy any topic specific forums because section 230 is what allowed them to remove off-topic posts. Section 230 was passed because it was desperately needed at the time.
I do agree it needs to be rewritten and I don't know exactly how it should be rewritten but just destroying it would be devastating.
Re: (Score:3)
I'll say something unpopular: Section 230 should be repealed.
Giving websites immunity from 3rd party content has led to forums where no one is accountable for the content that is produced on them, and the consequence is that online discussions are easily hackable by trolls or third parties/foreign actors with less than honorable interests. The exploit is confirmation bias. People are being duped left and right by total bullshit that sounds plausible and confirms their biases, and in just 25 short years the internet has brought democratic discourse in the United States to its knees.
While I sympathize with your viewpoint, repealing 230 would just replace one bad situation with another. While conservatives, for example, focus on Facebook and Twitter and want to get back at them for perceived slights, there will be plenty of liberals gunning for Fox, Parler et al. It's a two-edged sword that will cut both ways. It'll impact not just the big players either; a whole cottage industry could spring up threatening to sue unless a small website settled for a nominal amount when a user pos
Re: (Score:2)
Umm, repealing Section 230 won't encourage fact checking. How much fact checking do you think occurs in the right-wing media on radio and other media?
Repealing Section 230 just means you can't publish slander. You can still lie about facts. You can say the moon is purple. You can even call someone a fat cow, as long as you can show you actually thought he was a fat cow. Hell, in the case of public figures such as celebrities or politicians, you can even publish slander as long as you don't do it knowingly.
Re:Repeal Section 230 (Score:4, Insightful)
The problem with repeal is that fact checking becomes a matter of establishing consensus, which would prevent any "inconvenient" truths from being heard. Imagine Al Gore being deplatformed in 2005 because the "majority of scientists at oil companies do not believe Global Warming to be happening."
The solution is not technical, but social. People are outraged because they cannot articulate their disagreement.
I have seen this before on the elementary school playground: an inarticulate bully knows only how to shove and hit because he can't adequately explain his feelings/problems/etc. to his peers. This is what we are seeing in cancel culture: people who are good at sloganeering, but absolutely inadequate when it comes to developing an understanding of opposing points of view.
Free speech and public civil discourse are for a morally upright, well-informed society whose participants are seeking to develop ideas which would further human progress. Conversations on Twitter and FB cannot convey the nuance and context of personal interaction which renders harmless the off-color joke, or contextualizes words which, in other contexts, would seem quite offensive, e.g. "I want them the whitest" when discussing toothpaste. Because the media is so willing to de-contextualize a statement in order to sell outrage, those that do remain on these platforms are constrained to mere agreement or disagreement with the popular narratives.
If we are ever to return to a place where everyone has a voice, and can be heard, and feels free to express themselves, we have to consign outrage culture to the dustbin of history. I, quite frankly, couldn't care less how racist, homophobic, misogynist, etc... you consider someone's words - if you won't let me hear them, you are the problem, not them. Even if I can't change the minds of someone who holds reprehensible views, I may in fact be able to help them understand mine. Censorship prevents this dialogue process from occurring, and quite frankly, protecting your feelings is less important than preserving democracy.
If we are going to effect change, we have to use "cancel culture" like an epithet, and regard censorship with the same opprobrium as racism; it doesn't matter why you did it, you're still undermining the democratic process. Yes, people will abuse their freedom to say awful things, but if you can't handle the truth, you can't be trusted with the power to vote. The preservation of democracy is more important than your feelings.
Re: (Score:2)
Because people with half a brain have better things to do than address the incoherent babble of total nutjobs on the internet.
I say let 'em have it! We'll make our own internet. With blackjack and hookers!
Moderation as in (moderate vs extreme) (Score:4, Insightful)
It kind of feels like the answer is in the word itself - we're aiming to prevent the extreme views from propagating, while embracing the "moderate" view - one that considers arguments from more than one side, and draws their own conclusions.
Wiki (Score:2)
Wiki-style self-moderation seems ok
Usage of content vs content (Score:5, Insightful)
Moderation and censorship are different (Score:2)
Re: (Score:2)
Moderation recently has been used as a code word for censorship. I suggest returning to the original usage. If a forum has rules (such as the topic), moderation should be used transparently to enforce the rules.
Define transparency. Take that little low-traffic phpBB forum dedicated solely to the crocheting of kitten bootees, which has basically one moderator/owner/admin. What precise legal requirements do you propose putting on that person wanting to delete holocaust denial posts, such that they can't get sued in
What large scale unmoderated content? (Score:5, Insightful)
The internet does not have large scale distribution of unmoderated content. There are very few sites without moderation. There always have been few sites without moderation. Even the likes of 4chan has moderation and Slashdot employs content filters along with a moderation system that generally hides content by default.
This fantasy that the internet was some mythical wild west where companies would set up endless free-speech paradises just never existed. It doesn't exist now that AWS is not hosting Parler. It didn't exist when Voat was formed because Reddit got sick of alt-right bullshit. It didn't exist when 4chan started banning its first content back in 2004, less than a year after starting. It didn't exist when Geocities published their terms of service back in the mid '90s.
The internet has a pipe. If you want to post something without moderation then your only option has always been to connect your own system to that pipe. At any point where a user of another service or platform has existed there has been some level of moderation and the services which were truly unmoderated have always been few and far between.
Re: (Score:3)
The problem with this idea is that it is wrong. Posts or even whole groups could be killed on USENET. There were even DMCA requests against USENET binary posts. Operators would kill enough of the parts of a message that you couldn't recover from the PAR files.
Re:What large scale unmoderated content? (Score:5, Informative)
I still use Usenet heavily, for text. It's still unmoderated, with all the good and bad that that brings.
Re:What large scale unmoderated content? (Score:4, Interesting)
Err..Usenet.
There were PLENTY of moderated Usenet groups.
There was also a big difference in social class, one which saw like-minded nerds partaking in a somewhat underground system that generally wasn't accessible to every single-braincelled organism with a phone and a twitter account. Usenet wasn't so much a free-speech wild west as it was a 1900s-era gentlemen's club.
Community moderation with ban triggers (Score:5, Interesting)
So I would want better ways to throttle / size-limit anonymous coward posts to deter screeds (e.g. 280 chars max, 30 minute timeout), implement size / frequency limits on new users, and a community mechanism to punish or timeout accounts / IP addresses engaging in abuse. And for moderation I would like a way to flag a screed / troll separate from the scoring mechanism, so that if other mods agree, that post gets marked down to -2, which effectively deletes it and autopunishes the poster.
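A minimal sketch of the throttling part, using the limits suggested above (the structure and names are assumptions):

```python
# Hypothetical sketch of the throttles proposed above: anonymous posts capped
# at 280 chars with a 30-minute cooldown; the numbers are the poster's suggestions.
import time

ANON_MAX_CHARS = 280
ANON_COOLDOWN_S = 30 * 60

last_anon_post: dict[str, float] = {}   # ip -> timestamp of last anonymous post

def may_post_anonymously(ip: str, text: str, now: float | None = None) -> bool:
    now = time.time() if now is None else now
    if len(text) > ANON_MAX_CHARS:
        return False                                  # deter screeds
    if now - last_anon_post.get(ip, float("-inf")) < ANON_COOLDOWN_S:
        return False                                  # still in timeout
    last_anon_post[ip] = now
    return True

assert may_post_anonymously("198.51.100.7", "short comment", now=1000.0)
assert not may_post_anonymously("198.51.100.7", "again", now=1010.0)     # cooldown
assert may_post_anonymously("198.51.100.7", "ok now", now=1000.0 + 1801)
```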
Re: (Score:3)
The Slashdot model is rather specific. I can't see newspapers, bug trackers and what not setting up a moderation system.
The model we have ended up with is that someone creates and pays for a site that anyone can post to, for free no less. But the owner of the site gets to decide what posts are allowed. The only legal constraint the owner seems to be under is that they must delete illegal posts.
Those rules have ended up creating a place were we users have thousands, if not millions of places on the internet we
Human editors (Score:2)
The question is, I think, based on the false premise that we need to save "social media". We don't.
Re: (Score:3)
And that's the bullshit. I don't publish on their platform
If you don't have to wait for a moderator to approve your content, you 100% in fact do publish on their platform.
If the content is moderated in the sense that it does not appear until approved, there's an argument to be made that they do in fact publish the content. I could go either way on that. But if your content just appears on the site when you click submit, you are the publisher.
Not an answer (Score:2)
This is not an answer at all, but just an observation that there is not going to be an easy solution to this. I think what we're facing is global culture at its most powerful, all of our human flaws and frailties exposed and exploded many times over. It's kind of like asking "how can we talk to each other?" which is a fair question in a kind of broad philosophical way but also kind of nonsensical. As it turns out, the great challenge of the age of information is how to manage all of the information that com
Jurisdictions problem (Score:2)
Make everyone a moderator (Score:2)
If everyone is required to give quick feedback after reading a post, the system would have a lot of data. Bit like the moderation here on slashdot. The system could learn to predict your opinion of a post based on who has recommended or disliked it. You could set your reading threshold according to your mood. This could easily lead into bubbles of like-minded people reading only the same kind of stuff, but the system could add some opposing views that have got very positive ratings.
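A toy sketch of that prediction step, using classic user-user collaborative filtering over hypothetical data:

```python
# Hypothetical sketch of predicting a reader's opinion of a post from the
# votes of readers who tend to agree with them (user-user collaborative filtering).
# ratings[user][post] = +1 (recommended) or -1 (disliked); all data is toy data.
ratings = {
    "alice": {"p1": 1, "p2": -1, "p3": 1},
    "bob":   {"p1": 1, "p2": -1},
    "carol": {"p1": -1, "p2": 1, "p3": -1},
}

def similarity(a: str, b: str) -> float:
    """Average agreement on the posts both users rated (in [-1, 1])."""
    common = ratings[a].keys() & ratings[b].keys()
    if not common:
        return 0.0
    return sum(ratings[a][p] * ratings[b][p] for p in common) / len(common)

def predict(user: str, post: str) -> float:
    """Similarity-weighted average of other users' votes on the post."""
    votes = [(similarity(user, other), r[post])
             for other, r in ratings.items()
             if other != user and post in r]
    weight = sum(abs(s) for s, _ in votes)
    return sum(s * v for s, v in votes) / weight if weight else 0.0

print(predict("bob", "p3"))   # 1.0: bob agrees with alice, who liked p3
```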
Etiquette and working for your reputation. (Score:5, Interesting)
I suspect much of the solution is getting away from "instant gratification" and the focus on the self to the exclusion of all else.
It brings out the worst in people.
Back in the earlier days of the net, there was the concept of the September syndrome. This was a yearly thing (when the Internet was still largely restricted to the academic world) when a whole load of new students came online. It was absolute chaos, with flame wars running rampant and shouty voices everywhere.
However, over the course of the year, people learned that if they wanted to be taken seriously and get assistance on things that mattered, they needed to be pleasant to other people, and generally follow established respect patterns and behaviours. That baseline was what it took to even have you engaging with the group.
To be taken seriously took the building of a reputation. This took time and actual work. It's not something I think an AI would be good at judging (at least in its current primitive form), due to the possibility of simply gaming inputs to outputs. People valued the hard-earned reputations they worked for.
So in current world. I think as a baseline we'd need:
* Scarcity of methods to build alternate accounts. I think this is something that Tim Berners-Lee was looking at in his "one identity" environment. When something's not easily replaceable, it starts to have value.
* A reputation metric. Not a "like" that can be clicked randomly, but something that's difficult to build. For example, restricting those 'scarce' accounts to one like per week, and then having that only in particular categories. Just because someone's a great physicist doesn't mean they know anything at all about baking cakes, and vice versa. This would allow for a slow build of reputation in areas over time. Irrelevant to people that really know you, useful when you're discussing particular subject matter (see the sketch after this list).
* Possibly most difficult would be an accuracy metric. This would be a tough one to come up with, as it'd require the concept of strength of proof, as exists in science. Strength of evidence would have to be factored in (there are many scientific papers that are plain junk, for example, and many, many cases of very strong assertions being made from extremely weak evidence and study types). It would also need to take into account subjective vs. objective, which a huge swathe of people don't get a lot of the time.
What people need to come to grips with (again) is that reputations have value, and they're not easily replaced.
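A rough sketch of the reputation-metric bullet above: one deliberately scarce endorsement per category per week. Everything here is a hypothetical illustration:

```python
# Hypothetical sketch of a scarce, per-category reputation metric: each
# account may grant one endorsement per category per week, so reputation
# accrues slowly and can't be click-farmed.
from collections import defaultdict

WEEK_S = 7 * 24 * 3600

class ReputationLedger:
    def __init__(self):
        self.reputation = defaultdict(lambda: defaultdict(int))  # user -> category -> score
        self.last_endorsed = {}   # (endorser, category) -> timestamp

    def endorse(self, endorser: str, target: str, category: str, now: float) -> bool:
        key = (endorser, category)
        if now - self.last_endorsed.get(key, float("-inf")) < WEEK_S:
            return False          # this week's endorsement already spent
        self.last_endorsed[key] = now
        self.reputation[target][category] += 1
        return True

ledger = ReputationLedger()
assert ledger.endorse("alice", "bob", "physics", now=0)
assert not ledger.endorse("alice", "bob", "physics", now=3600)   # same week
assert ledger.endorse("alice", "bob", "baking", now=3600)        # separate category
assert ledger.reputation["bob"]["physics"] == 1
```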
Fuck AI (Score:2)
AI is that magic box which is designed from the ground up to be responsive to human intervention and yet avoid human responsibility. AI moderation is at its best when you want to avoid topics, not spam. Fuck that. That is censorship.
Instead of complicated NLP models they can have a list of patterns to avoid. That list can be converted into statistical models and even made public. If each post then requires a bit of effort, so be it.
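If I read the pattern-list idea right, a minimal transparent version might look like this sketch (the patterns and behavior are assumptions for illustration):

```python
# Hypothetical sketch of a published pattern list instead of an opaque NLP
# model: the rules are plain regexes anyone can read, audit, and avoid.
import re

# This list would be public; the entries here are placeholders for illustration.
PUBLISHED_PATTERNS = [
    r"(?i)\bbuy\s+followers\b",     # spam
    r"(?i)\bfree\s+crypto\b",       # scam bait
]
COMPILED = [re.compile(p) for p in PUBLISHED_PATTERNS]

def flags(post: str) -> list[str]:
    """Return the public patterns a post trips, so rejection is explainable."""
    return [p.pattern for p in COMPILED if p.search(post)]

assert flags("Click here for FREE crypto!!!") == [r"(?i)\bfree\s+crypto\b"]
assert flags("a normal comment") == []
```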
Moderation needs to be separate from hosting (Score:2)
There's no solution that's going to satisfy everybody, so the only way to solve it would be if, instead of opaque algorithms, sites implemented an API that would give third parties a chance to implement their own aggregation of user mods, hopefully in an open-source way. To prevent privacy problems, the mods of a user would not be tied to their identity but to some generated id. Of course, users would be free to announce their id, and aggregators could choose to only use the mods of select users, which is equival
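A sketch of the API boundary being described, with all interface names invented for illustration: the host serves raw posts plus anonymized mods, and an independent, open-source aggregator decides what to hide:

```python
# Hypothetical sketch of moderation decoupled from hosting: the host exposes
# raw posts plus anonymized user mods; third-party aggregators rank or hide.
from typing import Protocol

class HostAPI(Protocol):
    def get_posts(self) -> dict[int, str]: ...
    def get_mods(self, post_id: int) -> list[tuple[str, int]]: ...
    # mods are (generated_user_id, vote) pairs, never tied to real identity

class OpenAggregator:
    """Open-source aggregation policy: hide posts below a score threshold."""
    def __init__(self, host: HostAPI, threshold: int = 0):
        self.host = host
        self.threshold = threshold

    def view(self) -> dict[int, str]:
        visible = {}
        for pid, text in self.host.get_posts().items():
            score = sum(vote for _uid, vote in self.host.get_mods(pid))
            if score >= self.threshold:
                visible[pid] = text
        return visible

class FakeHost:
    def get_posts(self): return {1: "good", 2: "troll"}
    def get_mods(self, pid): return {1: [("u9", 1)], 2: [("u9", -1), ("u4", -1)]}[pid]

assert OpenAggregator(FakeHost()).view() == {1: "good"}
```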
Re: (Score:2)
And which set of weights do the search spiders use? It's not sufficient to merely sweep things under the rug if people -- or bots -- are actively looking for them, and the big G is always looking.
social media is publishing, organize it so (Score:2)
When the printing press was invented, for a very long time it was unclear how the publication process (content production, editing, financing, production, distribution) would be organized; how the labor would be differentiated; and who would play a leading role in the process.
Over time it turned out that printing is a craft, with its own beautiful challenges (such as typography, typesetting, design, production quality), but they are not in charge.
It has also been clarified that it is the publisher w
Re: (Score:3)
Social media publishers should have the same responsibilities and legal liabilities for the content they publish as newspaper publishers.
They do. Section 230 offers protection for the content that others post to their platforms without moderation, even if they choose to moderate it after the fact. It does not offer them protection for content they themselves choose to publish, for which they retain responsibility. Likewise, if you choose to upload a comment to someone else's website, you remain liable for its contents. This places the burden of responsibility on those who actually create the content. This is obviously and clearly the most fa
Solution: No moderation, increased education (Score:2)
Anyone can post, anyone can decide not to listen.
Educated people can think critically when information is available. There will always be people who choose not to think.
In the fullness of time, online users will, as a matter of course, not accept anything they read online at face value. In the majority they will question and they will reason. There will always be a minority that will not question and will not reason.
Censorship, and in particular automated censorship, is wrong.
Just do it like we did it the 200 years before (Score:2)
Post a letter to the editor of a newspaper: some people read it, and if you're not a nutjob you might have the chance to be one of the 2-3 published, out of the 2,000 sent in by crazy people every day.
If you want more, buy a newspaper company or a TV-Station.
The way we do it in real life: peer-to-peer (Score:3)
Education focused on critical thinking (Score:4, Interesting)
It's not a quick fix, so won't be popular... but the real "fix" is to put "critical thinking" back in the curriculum. I (and others) have postulated that a significant contributor to modern societal woes is the No Child Left Behind act, at least in the USA.
That structural change to the US education system put emphasis on achieving scheduled academic milestones - making education focused on ticking a box, rather than on education.
Now, some 20 years later, you have an adult population that is largely incapable of weighing viewpoints critically and instead just weighs them emotionally. Most of that population isn't even capable of recognizing the difference between critical and emotional evaluation.
It may stem the tide a bit if you basically parent online communication, because, essentially, even the adults are children; we have to wait a generation to get adults back in the room, capable of moderating themselves.
Caution is necessary though - you want the "moderation" to be instructive, not punitive. If you don't educate the masses on how to speak to each other, you will just alienate them and cause them to act childishly. Which is what is happening - people are not getting their way, so they throw a tantrum.
Tear down major social networks (Score:3)
Moderation at scale does not work. Communities are also incapable of being healthy at scale.
Humans evolved to have small social networks. Modern humans have existed for roughly 300,000 years. Back then we were in small tribes on the African savannah. An ancient human would be lucky to know more than 100 people over their life. Take that evolutionary baseline and apply it to Facebook, or any other social media platform and it just doesn't work. We had already well and truly exceeded it a long time ago with just the size of our real world communities.
The difference is, in a city I don't have to have meaningful interactions with every single person on the side of the street. Most of them are just faceless shapes walking by. My social network is still small and within my direct control. With social media it really isn't. Sure, you can choose who to friend and who to block, but you can only do so much to limit the meaningful interactions. A friend of a friend's rant about whatever is still going to show up in your feed. To make things worse, none of it happens organically. Social media companies tailor the content shown to you in order to increase the time you spend on their platform. They don't train the ML system to favor other things, like how rewarding the content may be or how it may make you feel beyond just consuming more. This isn't healthy and results in addiction to the platform. Often, the content that makes you stay around isn't informative or uplifting. It's drama.
We've also seen how moderation at these scales doesn't work. YouTube gets thousands of hours of content added each minute. It's simply impossible for them to hire enough humans to review it all and make fair judgment calls. So we again turn to an ML algorithm to do it for us. Except AI is a scam; it isn't nearly as clever as the salespeople say. I'm sure everyone can think of several instances off the top of their head where various automated moderation systems have failed: blocking the wrong thing, letting through content they shouldn't, or being trivially easy to abuse by bad actors.
So shut it down. Smaller networks are both more compatible with humans and are much easier to moderate with real people. They won't be perfect, nothing ever is. But they will be better. Smaller, more focused online communities with dedicated moderation are healthier places. They are also far harder to manipulate.
Re:Get rid of algorithms (Score:5, Interesting)
"Make it hard to spread abuse."
You may as well just shut down the Internet because that's COMPLETELY subjective.
If I tell a joke, and someone gets offended and calls it "abusive", why should THEY decide what I've said should go away?
"Make it hard to create silos and echo chambers."
Unfortunately the platforms themselves do this with their arbitrary and uneven enforcement.
And there is NO way to force them to not be arbitrary.
At least in the US, the moderation should stop where the law ends.
And if someone is still sandy about content on the platform after that, including the platform owners, they can BE ADULTS and CHANGE THE FUCKING CHANNEL.
But this will never happen. The terminal altruism and will to power being exhibited show that, with great power comes great crazy and great abuse.
Re: (Score:2)
1. No, it isn't.
2. You're not telling your joke to everybody.
You don't know who I am, so either you are saying you find the breathing of all humans offensive or you are lying. If you have to lie to make a point then maybe your point is wrong.
Re: (Score:2)
We're going round in circles here, why would you want to tell a joke that's not funny? Or why would you intentionally tell a joke if you know it's going to offend the people you're telling it to?
For what audience (Score:2)
Frankly, a lot of my jokes are restricted to a small audience, not because of some inherent meanness but because there is a lot of experimental humor and implied irony which gets interpreted as the complete opposite if you're not steeped in irony.
So that would exclude Americans I am afraid. And Germans. French. And people who generalize too much.
Re: (Score:2)
You've never heard of "punching up" in comedy, making fun of the powerful or puffed-up? Sometimes that makes people uncomfortable. And sometimes a joke lands badly. Neither of those mean the joke should be erased or the teller silenced.
And sometimes people go around looking for reasons to be offended. That usually means there's a reason to keep the joke.
Re: (Score:3)
They're free to ignore it.
They're also free to not host it.
We don't want you here. Roll your own!
Moves to another service.
Colludes to get you booted there too. Roll your own!
Builds own service.
Colludes to get you deplatformed. Roll your own!
If the fucking American Nahtzee Party can manage to keep their website up, then the problem isn't deplatforming, it's that you're shit at coding. The constitution doesn't obligate anyone to make up for your obvious deficiencies.
You want to see echo chambers? (Score:4)
https://www.pewresearch.org/fa... [pewresearch.org]
Check for the table with Fox News: all red, and MSNBC: all blue.
Those are massive echo chambers. This certainly does a lot to amplify the partisan divide.
Re: (Score:2)
BINGO!
Re: Every damn time (Score:2)
Uuum, Germany openly opposed Trump being censored. Even though we despise the guy.
Newsflash: We're not in the beginning of the 20th century anymore. You may be, and your -bergs and -steins may be, but we aren't.
Re: The paradox of tolerance (Score:2)
Define "hate".
Define "intolerance".
Define "harm".
You will realize those are all entirely relative.
No, nobody has the right to call his definitions more equal or even absolute. That is where dictatorship begins.
Re:"Moderation" = newspeak for censorship (Score:4, Insightful)
Right, that's exactly why users should be free to go to a different site that is aligned with their beliefs. It's never a reason why a site should be forced to carry content with which the owners, operators, and/or shareholders do not agree. That interferes with their freedoms of association, speech, and expression. Why so many people don't want to protect those freedoms is easily explained: they aren't getting what they want out of them, and don't care about anyone else's rights.
The like button - destroyer of civilizations (Score:3)
If the culture of respectfully disagreeing is not prevalent in the population, or if the cultural incentives are stacked against behaving that way, then no amount of technological effort would be able to change that.
I want to highlight this aspect of my previous post. There is a huge incentive to misbehave that can be attributed to the like button. While it is not immediately obvious, the ability to effortlessly give and receive likes results in the promotion and encouragement of extreme behavior. Humans are hard-wired for positive social reinforcement, and the like system hijacks this hardwired wetware. This hijack results in unprecedented and systemic attention whoring, where people one-up each other for the l