Social Networks

Ask Slashdot: How Should User-Generated Content Be Moderated? (vortex.com) 385

"I increasingly suspect that the days of large-scale public distribution of unmoderated user generated content on the Internet may shortly begin drawing to a close in significant ways..." writes long-time Slashdot reader Lauren Weinstein.

And then he shares "a bit of history": Back in 1985 when I launched my "Stargate" experiment to broadcast Usenet Netnews over the broadcast television vertical blanking interval of national "Superstation WTBS," I decided that the project would only carry moderated Usenet newsgroups. Even more than 35 years ago, I was concerned about some of the behavior and content already beginning to become common on Usenet... My determination for Stargate to only carry moderated groups triggered cries of "censorship," but I did not feel that responsible moderation equated with censorship — and that is still my view today. And now, all these many years later, it's clear that we've made no real progress in these regards...
But as it stands now, Weinstein believes we're probably headed toward "a combination of steps taken independently by social media firms and future legislative mandates." [M]y extremely strong preference is that we deal with these issues together as firms, organizations, customers, and users — rather than depend on government actions that, if history is any guide, will likely do enormous negative collateral damage. Time is of the essence.
Weinstein suggests one possibility: that moderation at scale "may follow the model of AI-based first-level filtering, followed by layers of human moderators."
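As a sketch, a first pass of that kind could look like the following, where a cheap classifier score routes each post to auto-removal, a human review queue, or publication (the thresholds and the toy keyword "model" are invented placeholders, not anything Weinstein describes in detail):

```python
# Two-stage triage: an automated score handles the clear cases, and the
# uncertain middle band is escalated to human moderators.
from dataclasses import dataclass, field

def toxicity_score(text: str) -> float:
    """Stand-in for a real ML classifier; returns a score in [0, 1]."""
    flagged = {"spam", "threat"}            # toy keyword "model"
    words = text.lower().split()
    return min(1.0, 5 * sum(w in flagged for w in words) / max(len(words), 1))

@dataclass
class ModerationPipeline:
    auto_remove_above: float = 0.9          # near-certain violations
    human_review_above: float = 0.5         # uncertain: humans decide
    review_queue: list = field(default_factory=list)

    def triage(self, post: str) -> str:
        score = toxicity_score(post)
        if score >= self.auto_remove_above:
            return "removed"
        if score >= self.human_review_above:
            self.review_queue.append(post)  # layered human moderation
            return "pending"
        return "published"

pipeline = ModerationPipeline()
print(pipeline.triage("nice history of usenet moderation"))  # published
print(pipeline.triage("spam threat spam threat spam"))       # removed
```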

But what's the alternative? Throngs of human moderators? Leaving it all to individual users? Limiting the amount of user-generated content? No moderation whatsoever?

Share your own thoughts and ideas in the comments. How should user-generated content be moderated?
  • by Anonymous Coward on Monday January 18, 2021 @03:39AM (#60958510)

    The trouble with everyone keeping their identity secret is that no-one needs to take responsibility for what they say.

    I'm posting this anonymously 'cos I'm afraid of the consequences.

    • by Rick Schumann ( 4662797 ) on Monday January 18, 2021 @03:45AM (#60958522) Journal
      If you do that you may as well just remove all ability for the general public to post any content whatsoever. What you suggest would have an incredibly chilling effect on freedom of speech. There are countries where, if it can be traced back to you, speaking your mind about, say, your government, can get you arrested and/or just killed outright. Here in the U.S. having your real name plastered all over everything will get you doxxed, harassed, your life ruined, and maybe killed. So, again: you may as well just shut down all ability to post anything anywhere; no sane person would do it, and anyone who thinks nothing bad could happen would soon find out how wrong they are. Facebook, Twitter, and all so-called 'social media' would lose users in droves and have to shut down just for that reason alone.
      Basically you'd probably also kill the Internet in general; it would become 'read-only', just Cable TV 2.0. Without user-generated content, there is no 'Internet' anymore.
• Even in what we might call liberal democracies, most people's freedom to do and say what they want is severely reduced by the simple fact that they're forced to spend most of their waking hours under the dictate of an employer. Depending on the concrete employment relationship, posting something, anything, online within working hours may already be a sanctionable violation of contractual duties, which can ultimately lead to loss of job and livelihood.

      • by malkavian ( 9512 ) on Monday January 18, 2021 @05:54AM (#60958930)

Studies into the long-term survivability of systems tend to find a particular balance point between order and chaos: enough rules, actually followed, that the system doesn't just degrade and destroy itself, and enough freedom that it can still grow, change, and adapt.
Where Social Media is today is blatantly nowhere near that balance point, built as it is on the concept of "I can say what I want, with absolutely no accountability" (or near enough, given the vitriol that's spewed by both sides of the political spectrum).
So, having a chilling effect would actually be beneficial to the entire system as a whole. It needs to be "slowed" from the absolutely insane headlong rush to chase the next dopamine hit that seems to define current-day Social Media.

On the subject of Doxxing, it's a bit late for that. Social Media is already used to Doxx people. I've seen it many times, to grand cheers from "bystanders", because it's the latest political crusade and groups have found someone they can harass. The reason they can do that with impunity is often because it's not easy for the general bystander to work out who they are (they run little or no risk in doing this).
Again, this goes back to the concept of privacy. Just because it's "Social Media", why on earth would you want to broadcast what you're up to, with a megaphone, to the world and a whole load of random strangers?
What you'd find is that most people would go back to groups of real-life friends, with little or no information spilling out of those circles. This is very analogous to how it works in a "regular" society, just faster across distance.

As for killing the internet in general, where on earth did you get that idea? Social Media is a latecomer. The internet ran just fine before the "Eternal September", and so much of what it's about these days has nothing to do with that newcomer upstart; the loss of Social Media entirely would just be routed around, and people would come to think of it as another failed experiment. Remember email and instant messaging? They still work nicely.
        The internet is still there, still humming along nicely, still doing business and connecting people, just a lot of the shouty mob no longer has an amplifier.
What's the problem with that? So far, I'm seeing no merit or grounding for your assertions.

        You seem to believe that Facebook and Twitter have some magical reason they must survive, no matter what. Since when was that mandatory? They're businesses. They grow, shrink, adapt, and most eventually fail as the world overtakes them (being superseded by something fitter for the world they're in). That's the nature of change and progress.

        • The internet is still there, still humming along nicely, still doing business and connecting people, just a lot of the shouty mob no longer has an amplifier.

          We should also consider that having a "receptacle tip" for the shouty mob to mouth off in, walled off from the rest of us, is a good idea.

          But for that, those sites need free speech.

          Further, there's a feedback loop between Old Media and New Media. Many news articles cover something "important" because a bunch of baristas on Reddit or Apple employees on F

      • by tinkerton ( 199273 ) on Monday January 18, 2021 @05:58AM (#60958946)

Social media have been used for quite a while as an extension of US foreign policy: allow people to mobilize in the countries of adversaries to the maximum extent possible, with absolute disregard for the validity of whatever they are saying.
Domestically the attitude is entirely opposite. Only the right type of mobilisation is allowed. Only the right types of things are allowed to be said. It is an extremely antidemocratic attitude.

If you look at the demonstrations which triggered all this (or reinvigorated the ongoing trend), there was little illegal in them.

        Information which is wrong and which mobilizes people is not illegal. Conspiracy thinking is not illegal. The demonstrations were not illegal. As soon as a riot element comes in there are illegal elements but the amount of violence in demonstrations should be handled in a proportionate manner. In this case the police presence was clearly inadequate and in stark contrast to what happened at the BLM protests.

Hate groups and all kinds of extremism are more difficult. We tend to draw a distinction between public proselytizing of extremist ideas and extremist ideas kept within closed groups: as long as there are no illegal acts, preventative measures against future illegal acts should be kept to a minimum. When people meet online there is a shift towards more public behavior (larger groups), but there is certainly also a shift towards 'oh, wouldn't it be great if we could monitor all that and intervene much earlier'.

        You are not supposed to intervene early. You are not supposed to monitor closely what people say and do. You are not supposed to sabotage social mobilisation. You are not supposed to make it impossible to rebel violently against the state. You're supposed to offer democratic means for change so people do not see violence as the only way out.

    • This used to be the case - e.g. back in 2004 there was the famous Greater Internet Fuckwad Theory
      https://www.penny-arcade.com/c... [penny-arcade.com]

      These days however, not so much. The fact that white supremacists were quite recently more than happy to livestream their insurrection, under their own names, really goes to show that many people just don't care.

      Anonymity definitely encourages others to say all kinds of crap, but lack of anonymity really doesn't stop a lot of people.

    • by MrL0G1C ( 867445 ) on Monday January 18, 2021 @04:52AM (#60958716) Journal

      The trouble with everyone keeping their identity secret is that no-one needs to take responsibility for what they say.

      How's the opposite working out for Facebook?

Anonymity allows people to speak freely. Allowing people to speak freely is the opposite of censorship. But when people can speak freely, some let rip with trolling so vicious that they'd hound good people until they committed suicide. So moderation is needed, otherwise it becomes a total shitfest. Some people want a total shitfest so that they can hound people to death. Some people will say it's ok to have a shitfest where you can hound people to death, and that that is the price to pay for being able to hound people to death; I mean, anything else is censorship, right?

Some censorship is a good thing. Everyone censors themselves outside of the internet; if everyone spoke every thought that came into their head, all the time, life would be complete insanity, like YouTube comments.

    • by fennec ( 936844 )
      What would it change? If your name is John Smith you can shitpost as much as you want, whereas Khaleesi Furthoidla is a bit more unique...
    • by teg ( 97890 ) on Monday January 18, 2021 @05:05AM (#60958754)

      The trouble with everyone keeping their identity secret is that no-one needs to take responsibility for what they say.

      I'm posting this anonymously 'cos I'm afraid of the consequences.

I prefer to comment anonymously in most contexts, although I'm not anonymous here. My reason is that while I have no issue with someone looking up the person behind a comment they see in a specific context, I don't like the opposite direction: looking me up and having Google provide everything, contextless.

• You're not posting anonymously so much as doubly anonymously.
  When you create a user id, especially in a small online community, it takes on a reputation of its own, formally (score) as well as informally. You care about it much as you do in real life, even if it is not formally attached to your real-life identity.
  This weakens a bit as things scale up and interactions become more random. So scale matters.

  • Relativity (Score:5, Insightful)

    by Meneth ( 872868 ) on Monday January 18, 2021 @03:50AM (#60958530)

    One man's moderation is another man's censorship. Compare US, UK, Germany, Russia, China, each with different views on the subject. No set of rules can accommodate them all.

    Then recall that all these differing views are present, in varying degrees, among people in a single country.

    I see no perfect solution to this conundrum.

    One option might be total freedom of speech on the sender's side and advanced automated filtering on the reader's terminal. Except such filtering is probably an AI-complete problem.

    • Re:Relativity (Score:4, Interesting)

      by LenKagetsu ( 6196102 ) on Monday January 18, 2021 @04:19AM (#60958620)

      A perfect solution would be the ability to subscribe to a moderator that aligns with your interpretation of the rules. Any posts he purges are purged for all his subscribers. This also means if he becomes "corrupt" you can put him out of sight and out of mind with the click of a button.
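A minimal sketch of the mechanics this implies, with invented names and an in-memory stand-in for real storage; each moderator publishes a purge list, and a reader's client hides the union of the lists they subscribe to:

```python
# Toy "subscribe to a moderator" filter: purges apply only to subscribers,
# and unsubscribing (one click) makes the purged posts visible again.
posts = {1: "on topic", 2: "flamebait", 3: "spam"}

purges = {                  # moderator -> ids of posts they have purged
    "mod_strict": {2, 3},
    "mod_lenient": {3},
}

def view(subscriptions):
    """Return the posts visible to a reader with these subscriptions."""
    hidden = set().union(*(purges[m] for m in subscriptions))
    return [text for pid, text in posts.items() if pid not in hidden]

print(view({"mod_strict"}))   # ['on topic']
print(view({"mod_lenient"}))  # ['on topic', 'flamebait']
print(view(set()))            # unsubscribed from everyone: see everything
```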

      • by jlar ( 584848 )

        A perfect solution would be the ability to subscribe to a moderator that aligns with your interpretation of the rules. Any posts he purges are purged for all his subscribers. This also means if he becomes "corrupt" you can put him out of sight and out of mind with the click of a button.

        +1000 if I had the mod points

        We need a system which maximizes individual liberty. My ideal platform to use would definitely be one where I am in control of what to see and what not to see. I don't want anyone to decide what content is good for me or not.

This of course does not include "criminal speech" such as incitement to violence (which is illegal in many countries).

• To me, it seems that this would make the problem of echo chambers, with misinformation you like receiving, even worse than it is today. Society is faced with a huge issue. Yes, censorship is a way to facilitate misinformation spread by the authorities. However, the ability of essentially closed groups to radicalise each other by circulating dangerous propaganda is even more dangerous. It has already led to genocide in Myanmar, and looks likely to do so in India. It has facilitated radicalisation

      • As if we did not have enough of those.
      • by AmiMoJo ( 196126 )

        Won't work, the freeze peach warriors will be upset by the default moderator. They want to be on the front page, to be in the recommendations. They get very upset when they are "shadow banned", which in this case would mean the default moderator blocks them.

        Also this would create bubbles. Some moderators would abuse their power to filter out anything running contrary to their preferred narrative. We should be looking for ways to break people out of bubbles so we don't end up with another QAnon.

        In the end so

        • This is true, things like 230 need to be amended to say that removal of illegal content is the sole responsibility of paid staff. Paid with money.

          • by AmiMoJo ( 196126 )

            It's like cleaning up toxic waste. Regardless of how much you pay someone they still need a hazmat suit. Moderating Facebook is like that, there is no safe dosage, no way to wash away the horrors you are exposed to.

        • Some moderators would abuse their power to filter out anything running contrary to their preferred narrative.

You misspelled "All" at the beginning of the sentence there.

Unfortunately, there is no good solution to the problem - people will naturally think of other people who agree with them as "smart" (or "wise") people. This applied when the only source of news was the newspapers. It still applied when people got their news by radio. When TV was the way people got news, same. And now it's the same with i

        • by jlar ( 584848 )

          Won't work, the freeze peach warriors will be upset by the default moderator. They want to be on the front page, to be in the recommendations. They get very upset when they are "shadow banned", which in this case would mean the default moderator blocks them.

          Also this would create bubbles. Some moderators would abuse their power to filter out anything running contrary to their preferred narrative. We should be looking for ways to break people out of bubbles so we don't end up with another QAnon.

          In the end some of the moderators would themselves get banned for not removing illegal material or actively promoting it. The job itself would be hell, being exposed to the worst of humanity and relentlessly doxed and harassed by people who disagree with their policies.

          In my opinion there should not be any default moderator. Let users initially select from a list of diverse popular moderators instead - or search for some less popular ones which fit them.

          Yes, some moderators will filter stuff running against their preferred narrative. But then you can leave them with the click of a button and subscribe to other moderators that give you a more balanced viewpoint (I don't think you should be limited to one moderator btw. - just like you can subscribe to multiple YouTube chan

          • by AmiMoJo ( 196126 )

            Still got the same problem though. Who is on the default list? Why isn't my preferred truth teller on there? Why is QAnon9234 not listed?

            How is it ranked, pure popularity contest? How are users supposed to make an informed choice, does each candidate get to make a pitch?

            It will end up like YouTube, with curated channels that create a pipeline to bubbles and extremism. There is a reason the Christchurch terrorist told people to subscribe to PewDiePie.

            • by jlar ( 584848 )

              Still got the same problem though. Who is on the default list? Why isn't my preferred truth teller on there? Why is QAnon9234 not listed?

              How is it ranked, pure popularity contest? How are users supposed to make an informed choice, does each candidate get to make a pitch?

              It will end up like YouTube, with curated channels that create a pipeline to bubbles and extremism. There is a reason the Christchurch terrorist told people to subscribe to PewDiePie.

              I don't see that as a big problem as long as the algorithm for the initial list is transparent (could simply be the 30 most popular moderators in your country). QAnon9234 is not listed because he does not have sufficient subscribers. Yes, they would of course need to describe their policy for content moderation. Preferably in a systematic manner.

              And no, I don't think it will generally create a pipeline to bubbles and extremism but rather to freedom and choice.

      • Comment removed based on user account deletion
      • Re:Relativity (Score:5, Insightful)

        by Minupla ( 62455 ) <minupla@gmail.PASCALcom minus language> on Monday January 18, 2021 @08:06AM (#60959282) Homepage Journal

        A perfect solution would be the ability to subscribe to a moderator that aligns with your interpretation of the rules. Any posts he purges are purged for all his subscribers. This also means if he becomes "corrupt" you can put him out of sight and out of mind with the click of a button.

        I'd ask perfect for whom? That sounds like a perfect echo chamber, which while pleasing to the ear is not healthy. Imagine as a thought experiment a teacher that only ever gave lessons that pleased their pupils. I'd still be playing with the giant wooden blocks from Kindergarten... I wonder if Amazon sells those... I'll be... NO CARRIER

  • by Rick Schumann ( 4662797 ) on Monday January 18, 2021 @03:50AM (#60958532) Journal
Privately-owned companies can decide what is and is not acceptable content according to the rules you agree to when you sign up to use a site. If you don't like a site's rules then don't agree to them, don't have an account there, and find a site whose rules you do agree with.
If you want basically NO rules other than 'blatantly illegal gets you banned' then go use something like 4chan and take the bad with the good.
For what it's worth I don't use Facebook, Twitter, or any of these other so-called 'social media' sites, because I think they're ridiculous, a waste of time, and a cancer on our society, and we'd be better off without them. But if you're going to use them, you play by the rules they set down; you agreed to them.
    • by Chas ( 5144 ) on Monday January 18, 2021 @04:23AM (#60958630) Homepage Journal

      This is "roll your own".

      And we see how that's working out.

      Say something innocuous.
      You can't be here. Roll your own.
      Goes somewhere else.
      Colludes to get you booted THERE too. Roll your own!
      Build your own service.
      Colludes to destroy your service and business relations. Roll your own!
      What? Roll my own INTERNET? Just to send a fucking tweet?

      Be serious and sane for just ONE second in your life.

      • Well, he could have just hosted it on colo'd servers owned by himself privately (or his own companies), on network connections provisioned directly by his company from the backbone provider. Then, if he'd just refrained from obvious treason and hadn't killed off net neutrality he could have continued to lie, spam, and antagonize to his heart's content, and it would have been actually illegal for anyone to stop him. I think the problem here is you keep thinking the other half of the argument is the one tha

      • by AmiMoJo ( 196126 )

        It's not shadowy forces destroying your homebrew Twitter clone, it's lack of a viable business plan.

        Look at Voat. Didn't get booted off hosting or anything like that. It just ran out of money because there was no viable way to make any and the users were too cheap to pay for it.

        That's just the normal way startups work, if you hadn't noticed. Twitter was one of many and just happened to get big enough with enough investment to survive, but 100 others didn't.

        Besides which there are examples of successful righ

      • I would argue very strenuously that private organizations working together to set expectations of behavior in "the commons" is our system working as intended.

        If that strikes you as inherently evil, you probably shouldn't live in a representative capitalist society, but some other form of government where social norms are not driven by a profit motive.

        If you'd like a perfect libertarian society where free speech means no private limitations on what you can say, then you're talking about a society where peopl

• Actually, privately owned companies have taken over public space, and they coordinate closely with dominant political players to control speech.
      If you don't like that then go start Gab. Or Parler.

  • by blahplusplus ( 757119 ) on Monday January 18, 2021 @04:00AM (#60958558)

    ... as super moderators that are knowledgeable and ruthlessly skeptical of everything.

I mean the George Carlin types, who know humanity is one giant race of bullshit artists.

    https://www.youtube.com/watch?... [youtube.com]

If there's anything I've seen, it's the infinite capacity for most people to deceive themselves. Lots of bs gets upvoted or downvoted as generations change, because the new generation doesn't like the facts of the old one (e.g. anything related to DRM, World of Warcraft, and Steam, since most modern slashdotters are tragically DRM lovers).

So you get bullshit as one demographic outpopulates another and old slashdotters become a minority.

    • by AmiMoJo ( 196126 )

      They will just be shredded by accusations of bias, doxed and torn down.

      Who would want that job anyway, sifting through the absolute worst of humanity 8 hours a day? It's the kind of thing you do because you can't find anything better, like working at McDonalds, or because you mistakenly believe it might get you into Facebook.

  • by Vintermann ( 400722 ) on Monday January 18, 2021 @04:06AM (#60958578) Homepage

We need a strong way to guarantee our identities. Government has to get involved with that, sorry. Most modern governments have one already, for filing your taxes or similar.

However, most sites should not use this system directly. Instead, with some cryptographic cleverness, it should be possible for the gov-ID service to guarantee that the person making an account on your twitter-clone now is a real physical person who has no other accounts on your twitter-clone.

    The twitter-clone (or whatever site it is) does not know which physical person is registering an account, only that they are a real one (and just one). The gov-ID service, on the other hand, knows that a certain physical person registered an account on the twitter-clone - but not what their pseudonym there is.

I think that with pseudonymity, but without unlimited throwaway accounts, a lot of dysfunction goes away. Forcing people to operate under their full names is counterproductive - as we have seen, when that's the case, people with nothing socially to lose are overrepresented.
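One way to get the unlinkability described above is a blind signature: the gov-ID service certifies a one-account-per-site token without ever seeing the pseudonym it certifies. A toy sketch with textbook RSA (the tiny hardcoded key, names, and the omitted one-token-per-citizen bookkeeping are all illustrative; a real system would need vetted cryptography):

```python
# Toy RSA blind signature: the service signs blindly, the site verifies,
# and neither party can link citizen to pseudonym on its own.
import hashlib
import secrets
from math import gcd

# Tiny hardcoded RSA key, for illustration only (real keys are 2048+ bits).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))          # service's private exponent

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# User: blind the pseudonym before sending it to the gov-ID service.
pseudonym = b"twitter-clone-handle-4242"
m = digest(pseudonym)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n           # all the service ever sees

# Gov-ID service: after checking "this citizen has no token for this site
# yet" (bookkeeping omitted here), it signs the blinded value.
blind_sig = pow(blinded, d, n)

# User: unblind, yielding an ordinary signature on the pseudonym digest.
sig = (blind_sig * pow(r, -1, n)) % n

# Twitter-clone: one valid token means one real person, identity unknown.
assert pow(sig, e, n) == m
print("account accepted")
```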

• An interesting idea & certainly worth considering. However, who'd administer such a register? The internet is global, e.g. how many countries are slashdotters from? Perhaps the UN would be a suitable administrator? I'd like to hear from civil rights organisations, international journalists, diplomats, etc., about this first. What are the threats to personal privacy, freedom of speech, & accountability if we were to adopt such a register? Also, how would it replace existing accounts on websites or woul
    • by Ichijo ( 607641 )

      Good ideas but let's take it a step further. Because sensational propaganda tends to crowd out facts, stripping everybody of their credentials tilts the playing field in favor of whoever can make the most outlandish but believable claim, and that puts us right back at square one. So let's provide the option for users to verify their credentials in order to give the experts a little more ammo against disinformation. But like you said, do it without the twitter-clone knowing who the physical person is.

      • Yes, I agree. You should be able to use the government's system to prove who you are, if YOU want to. That's the least they owe you for keeping track of you.

        Right now many governments have strong ID systems, but they only allow them to be used on their own sites and maybe a few privileged sites (banks, etc.) It should be possible to instead put the citizen firmly in charge here.

    • by Hentes ( 2461350 )

Some CAs already do identity checks [wikipedia.org]; I don't think we need to involve governments in this.

  • Repeal Section 230 (Score:4, Interesting)

    by tmmagee ( 1475877 ) on Monday January 18, 2021 @04:11AM (#60958600)

    I'll say something unpopular: Section 230 should be repealed.

    Giving websites immunity from 3rd party content has led to forums where no one is accountable for the content that is produced on them, and the consequence is that online discussions are easily hackable by trolls or third parties/foreign actors with less than honorable interests. The exploit is confirmation bias. People are being duped left and right by total bullshit that sounds plausible and confirms their biases, and in just 25 short years the internet has brought democratic discourse in the United States to its knees.

If we make content providers accountable for what they publish, then fact checking becomes necessary, and it will make it a lot harder for online discourse to be hacked and manipulated in the way we are seeing now. Yes, that means forums like Slashdot, Reddit, Hacker News, and anything with a real-time commenting system will essentially be impossible. So be it. I just saw the U.S. Capitol invaded by people who think the 2020 election was stolen and Democrats keep child sex slaves in the basement of pizza shops. Something has to give.

    • by Chas ( 5144 ) on Monday January 18, 2021 @04:27AM (#60958642) Homepage Journal

      Section 230 needs to be REWRITTEN.

      Platforms should be limited to the legal standard in what they can and cannot remove.
      Because if they're going to editorialize on anything beyond that, they're a publisher.

      • by AmiMoJo ( 196126 )

        How are they going to make any money or block spam if they can only remove illegal stuff?

        How will YouTube avoid becoming a porn site?

        And will conservatives stop complaining that the algorithm isn't promoting their content? Because right now just being moved down the search results is censorship according to them. Or does the algorithm have to be replaced by rand() now?

    • Giving websites immunity from 3rd party content has led to forums where no one is accountable for the content that is produced on them

      In principle, the individual who posted the content is accountable for it. If I post a specific threat of violence against a politician on Slashdot along with my name and address, the FBI/Secret Service should show up at my door, not at Slashdot HQ. In practice, of course, it's trivial to anonymize yourself on most internet services, so the ability to hold posters accountable for their content falls apart quickly.

      Given the choice between holding content providers (ie Facebook, Slashdot, Twitter) account

• You realize this also destroys sites like GitHub, Stack Overflow, etc., right? This would also destroy any topic-specific forums, because Section 230 is what allowed them to remove off-topic posts. Section 230 was passed because it was desperately needed at the time.

  I do agree it needs to be rewritten, and I don't know exactly how it should be rewritten, but just destroying it would be devastating.

    • I'll say something unpopular: Section 230 should be repealed.

      Giving websites immunity from 3rd party content has led to forums where no one is accountable for the content that is produced on them, and the consequence is that online discussions are easily hackable by trolls or third parties/foreign actors with less than honorable interests. The exploit is confirmation bias. People are being duped left and right by total bullshit that sounds plausible and confirms their biases, and in just 25 short years the internet has brought democratic discourse in the United States to its knees.

While I sympathize with your viewpoint, repealing 230 would just replace one bad situation with another. While conservatives, for example, focus on Facebook and Twitter and want to get back at them for perceived slights, there will be plenty of liberals gunning for Fox, Parler et al. It's a two-edged sword that will cut both ways. It'll impact not just the big players either; a whole cottage industry could spring up threatening to sue unless a small website settled for a nominal amount when a user pos

• Umm, repealing Section 230 won't encourage fact checking. How much fact checking do you think occurs in the right-wing media on radio and other media?

  Section 230 doesn't mean you can publish slander. You can lie about facts. You can say the moon is purple. You can even call someone a fat cow, as long as you can show you actually thought he was a fat cow. Hell, in the case of public figures such as celebrities or politicians you can even publish slander, as long as you don't do it knowingly.

    • by gillbates ( 106458 ) on Monday January 18, 2021 @02:43PM (#60960978) Homepage Journal

      The problem with repeal is that fact checking becomes a matter of establishing consensus, which would prevent any "inconvenient" truths from being heard. Imagine Al Gore being deplatformed in 2005 because the "majority of scientists at oil companies do not believe Global Warming to be happening."

      The solution is not technical, but social. People are outraged because:

1. They've been indoctrinated, but never taught to think for themselves; and/or
2. They have been taught how to express how they feel about something, rather than how to defend their ideas; and/or
3. They have been taught that reason and logic are white-supremacist affirming, and feel that they cannot win arguments except by shouting down or censoring their opposition.

      I have seen this before on the elementary school playground: an inarticulate bully knows only how to shove and hit because he can't adequately explain his feelings/problems/etc... to his peers. This is what we are seeing in cancel culture: people who are good at sloganeering, but absolutely inadequate when it comes to developing an understanding of opposing points of view.

Free speech and public civil discourse are for a morally-upright, well-informed society whose participants seek to develop ideas that further human progress. Conversations on Twitter and FB cannot convey the nuance and context of personal interaction which renders harmless the off-color joke, or contextualizes words which, in other contexts, would seem quite offensive, e.g.: "I want them the whitest" when discussing toothpaste. Because the media is so willing to de-contextualize a statement in order to sell outrage, those that do remain on these platforms are constrained to mere agreement or disagreement with the popular narratives.

      If we are ever to return to a place where everyone has a voice, and can be heard, and feels free to express themselves, we have to consign outrage culture to the dustbin of history. I, quite frankly, couldn't care less how racist, homophobic, misogynist, etc... you consider someone's words - if you won't let me hear them, you are the problem, not them. Even if I can't change the minds of someone who holds reprehensible views, I may in fact be able to help them understand mine. Censorship prevents this dialogue process from occurring, and quite frankly, protecting your feelings is less important than preserving democracy.

If we are going to effect change, we have to use "cancel culture" like an epithet, and regard censorship with the same opprobrium as racism; it doesn't matter why you did it, you're still undermining the democratic process. Yes, people will abuse their freedom to say awful things, but if you can't handle the truth, you can't be trusted with the power to vote. The preservation of democracy is more important than your feelings.

  • by Gollum ( 35049 ) on Monday January 18, 2021 @04:15AM (#60958606)

    It kind of feels like the answer is in the word itself - we're aiming to prevent the extreme views from propagating, while embracing the "moderate" view - one that considers arguments from more than one side, and draws their own conclusions.

  • by pele ( 151312 )

    Wiki-style self-moderation seems ok

  • by ElGraz ( 879347 ) on Monday January 18, 2021 @04:17AM (#60958616)
When the main goal of a business is to find ways to keep users glued to its platform, creating flaming echo chambers is the natural step. If you want to profit today you will need to do this. News networks have also had to lower their standards to compete with the social networks' level of meme-based "information". If we cannot find a way to limit how social-media companies leverage our human weaknesses and lower psychology, mere moderation of the content will change next to nothing.
• Moderation has recently been used as a code word for censorship. I suggest returning to the original usage. If a forum has rules (such as staying on topic), moderation should be used transparently to enforce the rules. However, I don't believe a group of users talking amongst themselves should have "moderation" imposed on them that they don't want. That crosses the line into censorship. And before some npc says to me "it's a private platform", remember that legal censorship is still censorship. And just because
Moderation has recently been used as a code word for censorship. I suggest returning to the original usage. If a forum has rules (such as staying on topic), moderation should be used transparently to enforce the rules.

Define transparency. Take that little low-traffic phpbb forum dedicated solely to the crocheting of kitten bootees, which has basically one moderator/owner/admin. What precise legal requirements do you propose putting on that person wanting to delete holocaust-denial posts, such that they can't get sued in

  • by thegarbz ( 1787294 ) on Monday January 18, 2021 @04:25AM (#60958634)

    The internet does not have large scale distribution of unmoderated content. There are very few sites without moderation. There always have been few sites without moderation. Even the likes of 4chan has moderation and Slashdot employs content filters along with a moderation system that generally hides content by default.

This fantasy that the internet was some mythical wild west where companies would set up endless free-speech paradises just never existed. It doesn't exist now that AWS is not hosting Parler. It didn't exist when Voat was formed because Reddit got sick of alt-right bullshit. It didn't exist when 4chan started banning its first content back in 2004, less than a year after starting. It didn't exist when Geocities published their terms of service back in the mid 90s.

    The internet has a pipe. If you want to post something without moderation then your only option has always been to connect your own system to that pipe. At any point where a user of another service or platform has existed there has been some level of moderation and the services which were truly unmoderated have always been few and far between.

  • by DrXym ( 126579 ) on Monday January 18, 2021 @04:35AM (#60958678)
    Slashdot has a fairly reasonable model - users (who meet some threshold of site age / engagement) get assigned as mods to a story and can spend points to mod comments up and down. Anything below a 0 gets filtered out by default. Works fairly well. What doesn't work on Slashdot is anonymous users posting abuse or screeds to every single story. Obviously modding that crap reduces the points mods have to spend elsewhere. Even scoring the post down to a -1 is a victory for the troll who knows that garbage is still there.

So I would want better ways to throttle / size-limit anonymous coward posts to deter screeds (e.g. 280 chars max, 30 minute timeout), size / frequency limits on new users, and a community mechanism to punish or time out accounts / IP addresses engaging in abuse. And for moderation I would like a way to flag a screed / troll separately from the scoring mechanism, so that if other mods agree, the post gets marked down to -2, which effectively deletes it and auto-punishes the poster.
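A minimal sketch of those anonymous-post throttles, using the numbers proposed above (280 chars, 30 minutes) and a simple in-memory per-IP table; a real deployment would need persistent storage and care around shared IPs:

```python
# Size and frequency limits for anonymous posts, keyed by IP address.
import time

MAX_LEN = 280           # proposed size cap for anonymous coward posts
COOLDOWN = 30 * 60      # proposed 30-minute timeout, in seconds
last_post_at = {}       # ip -> unix time of the last accepted post

def accept_anonymous_post(ip, text, now=None):
    now = time.time() if now is None else now
    if len(text) > MAX_LEN:
        return False    # size limit deters screeds
    if now - last_post_at.get(ip, float("-inf")) < COOLDOWN:
        return False    # frequency limit per IP
    last_post_at[ip] = now
    return True

print(accept_anonymous_post("203.0.113.7", "short comment"))     # True
print(accept_anonymous_post("203.0.113.7", "again right away"))  # False
```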

    • by ras ( 84108 )

      The Slashdot model is rather specific. I can't see newspapers, bug trackers and what not setting up a moderation system.

The model we have ended up with is that someone creates and pays for a site that anyone can post to, for free no less. But the owner of the site gets to decide what posts are allowed. The only legal constraint the owner seems to be under is that they must delete illegal posts.

Those rules have ended up creating a world where we users have thousands, if not millions of places on the internet we

  • The question is, I think, based on the false premise that we need to save "social media". We don't.

  • This is not an answer at all, but just an observation that there is not going to be an easy solution to this. I think what we're facing is global culture at its most powerful, all of our human flaws and frailties exposed and exploded many times over. It's kind of like asking "how can we talk to each other?" which is a fair question in a kind of broad philosophical way but also kind of nonsensical. As it turns out, the great challenge of the age of information is how to manage all of the information that com

• The flipside of moderation is jurisdiction: that of the poster, the host, and the reader. On a global internet you can tread on the toes of lots of folk, but the law varies across those jurisdictions. As time goes on, repressive countries will get smarter at prosecuting breaches of censorship laws online.
• If everyone were required to give quick feedback after reading a post, the system would have a lot of data. A bit like the moderation here on slashdot. The system could learn to predict your opinion of a post based on who has recommended or disliked it. You could set your reading threshold according to your mood. This could easily lead into bubbles of like-minded people reading only the same kind of stuff, but the system could add some opposing views that have got very positive ratings.
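A sketch of that prediction step, as simple user-based collaborative filtering over recommend (+1) / dislike (-1) votes; the data, names, and similarity measure are invented for illustration:

```python
# Predict a reader's reaction to a post from the votes of users whose past
# votes agree (or systematically disagree) with theirs.
ratings = {                 # user -> {post_id: +1 recommend / -1 dislike}
    "alice": {1: +1, 2: -1, 3: +1},
    "bob":   {1: +1, 2: -1},
    "carol": {1: -1, 2: +1, 3: -1},
}

def similarity(a, b):
    """Mean vote agreement on commonly rated posts, in [-1, 1]."""
    common = ratings[a].keys() & ratings[b].keys()
    if not common:
        return 0.0
    return sum(ratings[a][p] * ratings[b][p] for p in common) / len(common)

def predict(user, post):
    """Similarity-weighted average of other users' votes on the post."""
    votes = [(similarity(user, other), r[post])
             for other, r in ratings.items()
             if other != user and post in r]
    weight = sum(abs(s) for s, _ in votes)
    return sum(s * v for s, v in votes) / weight if weight else 0.0

print(predict("bob", 3))    # 1.0: alice (like-minded) liked post 3,
                            # carol (opposite-minded) disliked it
```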

  • by malkavian ( 9512 ) on Monday January 18, 2021 @06:25AM (#60959028)

    I suspect much of the solution is getting away from "instant gratification" and the focus on the self to the exclusion of all else.
    It brings out the worst in people.

Back in the earlier days of the net, there was the concept of the September syndrome. This was a yearly thing (when the Internet was still largely restricted to the academic world), when a whole load of new students came online. It was absolute chaos, with flame wars running rampant and shouty voices everywhere.
    However, over the course of the year, people learned that if they wanted to be taken seriously and get assistance on things that mattered, they needed to be pleasant to other people, and generally follow established respect patterns and behaviours. That baseline was what it took to even have you engaging with the group.

To be taken seriously took the building of a reputation. This took time, and actual work. It's not something I think an AI would be good at judging (at least in its current primitive form), given how easily its inputs can be gamed to produce a desired output. People valued the hard-earned reputations they worked for.

So, in the current world, I think as a baseline we'd need:

    * Scarcity of methods to build alternate accounts. I think this is something that Tim Berners-Lee was looking at in his "one identity" environment. When something's not easily replaceable, it starts to have value.
    * A reputation metric. Not a "like" that can be clicked randomly, but something that's difficult to build. For example, restricting those 'scarce' accounts to one like per week. And then having that only in particular categories. Just because someone's a great physicist, doesn't mean they know anything at all about baking cakes, and vice versa. This would allow for a slow build of reputation in areas over time. Irrelevant to people that really know you, useful when you're discussing particular subject matter.
* Possibly most difficult would be an accuracy metric. This would be a tough one to come up with, as it'd require the concept of strength of proof, as exists in science. That requires strength of evidence (there are many scientific papers that are plain junk, for example, and many, many cases of very strong assertions being made from extremely weak evidence and study types) to be factored in, and so on. It would also need to take into account the distinction between subjective and objective, which a huge swathe of people don't get a lot of the time.

    What people need to come to grips with (again) is that reputations have value, and they're not easily replaced.
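A toy rendering of the scarce, per-category endorsement idea above (the one-like-per-week budget and the category names are the poster's own examples):

```python
# Reputation accrues per category, and endorsements are deliberately scarce.
import time

WEEK = 7 * 24 * 3600

class Account:
    def __init__(self, name):
        self.name = name
        self.reputation = {}            # category -> endorsement count
        self._last_like = float("-inf")

    def endorse(self, target, category, now=None):
        now = time.time() if now is None else now
        if now - self._last_like < WEEK:
            return False                # weekly budget spent: likes stay scarce
        self._last_like = now
        target.reputation[category] = target.reputation.get(category, 0) + 1
        return True

physicist, baker = Account("alice"), Account("bob")
physicist.endorse(baker, "baking")
print(baker.reputation)   # {'baking': 1}; says nothing about physics
```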

• AI is that magic box which is designed from the ground up to be responsive to human intervention and yet avoid human responsibility. AI moderation is best when you want to avoid topics, not spam. Fuck that. That is censorship.

Instead of complicated NLP models, they can have a list of patterns that one can avoid. That list can be converted into statistical models and even made public. If that means each post requires effort, so be it.

  • There's no solution that's going to satisfy everybody, so the only way to solve it would be if instead of opaque algorithms sites implemented an API that would give third parties a chance to implement their own aggregation of user mods, hopefully in an open source way. To prevent privacy problems, the mods of a user would not be tied to their identity but some generated id. Of course, users would be free to announce their id and aggregators could choose to only use the mods of select users, which is equival

    • by Mal-2 ( 675116 )

      And which set of weights do the search spiders use? It's not sufficient to merely sweep things under the rug if people -- or bots -- are actively looking for them, and the big G is always looking.

• When the printing press was invented, for a very long time it was unclear how the publication process - from content production through editing, financing, production, and distribution - would be organized; how the labor would be differentiated; and who would play a leading role in the process.
    Over time it turned out that printing is a craft, with its own beautiful challenges (such as typography, typesetting, design, production quality), but printers are not in charge.
    It has also been clarified that it is the publisher w

    • Social media publishers should have the same responsibilities and legal liabilities for the content they publish as newspaper publishers.

      They do. Section 230 offers protection for the content that others post to their platforms without moderation, even if they choose to moderate it after the fact. It does not offer them protection for content they themselves choose to publish, for which they retain responsibility. Likewise, if you choose to upload a comment to someone else's website, you remain liable for its contents. This places the burden of responsibility on those who actually create the content. This is obviously and clearly the most fa

  • Anyone can post, anyone can decide not to listen.

    Educated people can think critically when information is available. There will always be people who choose not to think.

In the fullness of time, online users will as a matter of course not accept anything they read online at face value. In the majority they will question, they will reason. There will always be a minority that will not question and will not reason.

    Censorship, and in particular automated censorship, is wrong.

• Post a letter to the editor of a newspaper: some people read it, and if you're not a nutjob you might have the chance to be one of the 2-3 published out of the 2000 sent in by crazy people every day.

    If you want more, buy a newspaper company or a TV-Station.

  • by martynhare ( 7125343 ) on Monday January 18, 2021 @07:27AM (#60959202)
Peer-to-peer swarm distribution puts a cap on how much content can be out there at any given point per individual. The more popular the content, the larger the swarm; the less popular the content, the fewer people will grab it and the less bandwidth will be used. This model can't easily be monetised in any predatory way (since the free option will always be there), and once the data is out there, it can be infinitely reshared to prevent censorship. If people want to listen to you, that's great! However, if nobody wants to listen to you, that is also just fine, as your right to speak remains fully intact, regardless of the handle you go by.
    This method requires no terms and conditions, no cloud providers, and your "identity" is backed by cryptography, so if you change as a person or want to distance yourself from your past, you have the ability to start from scratch. After all, this is the Internet, a dinky little research network released for the public good which has now been hijacked by large media organisations, egotistical millionaires and morally questionable technology firms.
    It should be up to us to decide what we do or don't want to engage with; therefore, it should be us who individually wield the banhammer to block peers we want nothing to do with, or to create "safe space" hugboxes as we see fit - without interference from third parties. Likewise, if an individual wants a personal philosophy of unrestricted free speech for all, they can block nobody and listen to everybody. Everyone is happy, nobody is censored.
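A sketch of the cryptography-backed identity and the individually wielded banhammer, assuming PyNaCl for Ed25519 signatures (an assumed dependency; the swarm transport itself is out of scope, and discarding the signing key is the "start from scratch" option):

```python
# Posts are signed by a keypair; each peer verifies signatures and keeps
# its own local blocklist instead of relying on any central moderator.
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

sk = SigningKey.generate()      # your identity; throw it away to start over
my_key = bytes(sk.verify_key)   # public fingerprint shared with peers
signed = bytes(sk.sign(b"my post, reshared by whoever cares to listen"))

blocked = set()                 # this peer's personal banhammer

def accept(sender_key, signed_msg):
    """Return the message if the sender is unblocked and the signature holds."""
    if sender_key in blocked:
        return None             # we chose not to listen; nobody censored
    try:
        return VerifyKey(sender_key).verify(signed_msg)
    except BadSignatureError:
        return None             # forged or tampered content is dropped

print(accept(my_key, signed))   # b'my post, ...'
blocked.add(my_key)
print(accept(my_key, signed))   # None: blocked locally, visible elsewhere
```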
  • by ThosLives ( 686517 ) on Monday January 18, 2021 @08:16AM (#60959320) Journal

    It's not a quick fix, so won't be popular... but the real "fix" is to put "critical thinking" back in the curriculum. I (and others) have postulated that a significant contributor to modern societal woes is the No Child Left Behind act, at least in the USA.

That structural change to the US education system put the emphasis on achieving scheduled academic milestones - making schooling about ticking boxes rather than about education.

Now, some 20 years later, you have an adult population that is largely incapable of weighing viewpoints critically, and instead just weighs them emotionally. Most of that population isn't even capable of recognizing the difference between critical and emotional evaluation.

    It may stem the tide a bit if you basically parent online communication because, essentially, even the adults are children; we have to wait a generation to get adults back in the room, capable of moderating themselves.

    Caution is necessary though - you want the "moderation" to be instructive, not punitive. If you don't educate the masses on how to speak to each other, you will just alienate them and cause them to act childishly. Which is what is happening - people are not getting their way, so they throw a tantrum.

  • by thereddaikon ( 5795246 ) on Monday January 18, 2021 @09:05AM (#60959452)

    Moderation at scale does not work. Communities are also incapable of being healthy at scale.

    Humans evolved to have small social networks. Modern humans have existed for roughly 300,000 years. Back then we were in small tribes on the African savannah. An ancient human would be lucky to know more than 100 people over their life. Take that evolutionary baseline and apply it to Facebook, or any other social media platform and it just doesn't work. We had already well and truly exceeded it a long time ago with just the size of our real world communities.

The difference is, in a city I don't have to have meaningful interactions with every single person on the side of the street. Most of them are just faceless shapes walking by. My social network is still small and within my direct control. With social media it really isn't. Sure, you can choose who to friend and who to block, but you can only do so much to limit the meaningful interactions. A friend of a friend's rant about whatever is still going to show up in your feed. To make things worse, none of it happens organically. Social media companies tailor the content that is shown to you in order to increase the time you spend on their platform. They don't train the ML system to favor other things, like how rewarding the content may be or how it may make you feel beyond just consuming more. This isn't healthy and results in addiction to the platform. Often, the content that makes you stay around isn't informative or uplifting. It's drama.

We've also seen how moderation at these scales doesn't work. YouTube gets thousands of hours of content added each minute. It's simply impossible for them to hire enough humans to review it all and make fair judgement calls. So we again turn to an ML algorithm to do it for us. Except AI is a scam; it isn't nearly as clever as the salespeople say. I'm sure everyone can think of several instances off the top of their head where various automated moderation systems have failed: either blocking the wrong thing, letting through content it shouldn't, or being trivially easy to abuse by bad actors.

    So shut it down. Smaller networks are both more compatible with humans and are much easier to moderate with real people. They won't be perfect, nothing ever is. But they will be better. Smaller, more focused online communities with dedicated moderation are healthier places. They are also far harder to manipulate.
