
The Case Against Section 230: 'The 1996 Law That Ruined the Internet' (theatlantic.com) 259

Writing in the Atlantic, programmer/economics commentator Steve Randy Waldman explains "Why I changed my mind" about the Communications Decency Act's Section 230: In the United States, you are free to speak, but you are not free of responsibility for what you say. If your speech is defamatory, you can be sued. If you are a publisher, you can be sued for the speech you pass along. But online services such as Facebook and Twitter can pass along almost anything, with almost no legal accountability, thanks to a law known as Section 230.

President Donald Trump has been pressuring Congress to repeal the law, which he blames for allowing Twitter to put warning labels on his tweets. But the real problem with Section 230, which I used to strongly support, is the kind of internet it has enabled. The law lets large sites benefit from network effects (I'm on Facebook because my friends are on Facebook) while shifting the costs of scale, like shoddy moderation and homogenized communities, to users and society at large. That's a bad deal. Congress should revise Section 230 — just not for the reasons the president and his supporters have identified.

When the law was enacted in 1996, the possibility that monopolies could emerge on the internet seemed ludicrous. But the facts have changed, and now so must our minds... By creating the conditions under which we are all herded into the same virtual space, Section 230 helped turn the internet into a conformity machine. We regulate one another's speech through shame or abuse, but we have nowhere to go where our own expression might be more tolerable. And while Section 230 immunizes providers from legal liability, it turns those providers into agents of such concentrated influence that they are objects of constant political concern. When the Facebook founder Mark Zuckerberg and the Twitter founder Jack Dorsey are routinely (and justifiably!) browbeaten before Congress, it's hard to claim that Section 230 has insulated the public sphere from government interference...

If made liable for posts flagged as defamatory or unlawful, mass-market platforms including Facebook and Twitter would likely switch to a policy of taking down those posts automatically.... Vigorous argument and provocative content would migrate to sites where people take responsibility for their own speech, or to forums whose operators devote attention and judgment to the conversations they host. The result would be a higher-quality, less consolidated, and ultimately freer public square.

  • Why not? (Score:4, Insightful)

    by nospam007 ( 722110 ) * on Saturday January 09, 2021 @02:36PM (#60916354)

    The Non-Online Credit-Card companies blocked payments to Pornhub because somebody wrote an article in a newspaper.
    Where's the difference?

    • by shanen ( 462549 ) on Saturday January 09, 2021 @03:10PM (#60916514) Homepage Journal

      Really? That's the best you can do? Or just in a rush to FP? What were you trying to say? That you like porn?

      For what it's worth (= very little on Slashdot) I recently read Mike Godwin's new book, which is largely about Section 230. Memories getting fuzzier, but I even think he helped defend Section 230 in the Reno case. Kind of a fanatic, but he used to be a nice guy when we were much younger. Or maybe I was the one who was a nicer guy back then? Here's a piece he wrote way back in 1997: https://www.wired.com/1997/09/... [wired.com] (Disclaimer: I am NOT a lawyer and Mike is. (But I still wonder if my LSAT score is higher than his.))

      My take is sideways, as usual. I don't really care what anyone says. I care whether or not that person is worth listening to. If someone has a track record of spewing lots of lies, then I don't want to waste my time figuring out if he might have said something true. Plenty of honest people with true stuff to read or listen to. However, if someone has a track record of telling the truth and says something that I disagree with or don't understand, then there might be something important to learn there. This is actually a gross oversimplification of a much more complicated and multidimensional reality, but I just did the MEPR thing again. Pass for now.

      Makes more sense to go peruse the full story. Maybe I'll have a substantive reaction for Slashdot? That FP sure wasn't.

      • by ChatHuant ( 801522 ) on Saturday January 09, 2021 @03:40PM (#60916688)

        I don't really care what anyone says. I care whether or not that person is worth listening to. If someone has a track record of spewing lots of lies, then I don't want to waste my time figuring out if he might have said something true.

        I think it's pretty much this kind of attitude that led us to the current situation. It's troubling in multiple ways. First, the "track record of spewing lies" can easily become "they once said or did something I don't like", or "I heard they said/did something bad", or "they were friends with somebody else I don't like", or "they're from the wrong social class, they have the wrong color, nationality, religion". This is how extremists think. And this is not ludicrous exaggeration -- that's exactly how today's cancel culture works.

        Second, for many, the immediate corollary is that they'll also listen to people they deem trustworthy without "wasting time figuring out if he might have said something true". This is exactly why we had the Capitol events.

        • by shanen ( 462549 ) on Saturday January 09, 2021 @04:10PM (#60916878) Homepage Journal

          Good points, but I do feel somewhat misinterpreted. I think one part of the problem is that we can't stop people from having favorite sources. Yes, I think someone has to be nuts to pick Trump as a favorite source, but until a person becomes a dangerous nut it isn't other people's business. (Actually I think all of us are more or less nutty.)

          On the flip side, I don't like cancel culture. People shouldn't be "cancelled" for any single mistake, and that is what is happening much too often these days. Even worse when the mistakes are contrived or faked and amplified by hypocrites and liars. I actually think public reputation should be based on LOTS of data, accumulated over time, though weighted in favor of the more recent data. For example, Bill Cosby earned his cancellation.

          But there has to be filtering. The amount of information already available is more than anyone can absorb, and the amount available is increasing more rapidly every day. That's why I want filtering tools that support a proper mix, but I think the settings of those tools should be up to each person. I'm pretty sure that I favor more new material than most people, but it's going too far to say that my preference for novelty is a good thing and that someone else is wrong because they prefer to hear the same old things over and over again. However when those old things are lies that lead to violent behavior that hurts other people, then we are clearly in the area of "That is wrong." (And I even hope they impeach Trump for his latest terrible lies. Not the first people his lies have killed, but the causal chain is too clear this time. Sins of omission are usually debatable, but this time he committed.)

          As it would apply on Facebook or Slashdot, I think newbies should start with relatively lower visibility. I don't really like the idea of personally selecting people for higher or lower visibility, but I would like my preferences to be considered in forming favored pools of favored and disfavored identities, but with a certain percentage of ideas coming from identities outside those pools. And the pools should evolve over time, too. Another user setting, though my preference might be for slower evolution than most folks. But the available time is always the ultimate filter. These days reading is the biggest consumer of my time.
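          To make that concrete, here is a toy sketch in Python of the kind of pooled filtering I have in mind. Every name and number in it is hypothetical; this is me thinking out loud, not a description of any existing Slashdot or Facebook mechanism.

              import random

              def build_feed(posts, favored, disfavored, novelty=0.2, limit=50):
                  # Split posts by whether the author sits in my favored or
                  # disfavored pool; everyone else is an "outside" identity.
                  liked, disliked, outside = [], [], []
                  for post in posts:
                      if post["author"] in favored:
                          liked.append(post)
                      elif post["author"] in disfavored:
                          disliked.append(post)
                      else:
                          outside.append(post)
                  # Reserve a user-tunable share of the feed for outside
                  # voices, so the pools never become a perfect echo chamber.
                  n_outside = min(int(limit * novelty), len(outside))
                  feed = liked[:limit - n_outside]
                  feed += random.sample(outside, n_outside)
                  # Disfavored identities fill any leftover space: demoted,
                  # not erased.
                  feed += disliked[:max(0, limit - len(feed))]
                  return feed

          The novelty knob is the user setting I mean: I would turn it up, and someone who prefers the same old things would turn it down.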

          Which reminds me to recommend Zucked by Roger McNamee. Highly relevant to these topics.

      • by Kisai ( 213879 )

        The problem in hindsight for everyone is that if section 230 didn't exist, no user-generated-content site would be able to exist, and sites like 4chan, deviantart, twitter, and chat systems like Skype and Discord would be unviable, because the operators can't prevent them from becoming private sewers of hate. The absence of Section 230 would also have ended aol.com as soon as it got on the internet, because up to that point AOL's walled garden was moderated.

        How do we fix it?

        1. Require sites to have self-moderation

        • I appreciate your thoughtful and reasoned post. However, I disagree with your premise:

          if section 230 didn't exist, no user-generated-content site would be able to exist

          Why do you think user-generated content could not exist online without the host (= the publisher in the digital world) being accountable for that content? I could agree that there might be less of it (it would have to be reviewed before posting), or that risky content might be less frequent -- but I do not agree that user-generated-content sites could not exist. Not any more than book, newspaper, magazine, music, etc publis

  • Fantasy (Score:5, Interesting)

    by MrL0G1C ( 867445 ) on Saturday January 09, 2021 @02:43PM (#60916382) Journal

    Vigorous argument and provocative content would migrate to sites where people take responsibility for their own speech, or to forums whose operators devote attention and judgment to the conversations they host. The result would be a higher-quality, less consolidated, and ultimately freer public square.

    Would I run an online forum if I could be sued for what forum users post? Hell no. The result would

    But what do I know. I don't know how it's done in the UK; AFAIK here you simply sue the person on the forum who said the bad thing, and I don't see why it should be any other way.

    I don't like Facebook's echo chamber, but realistically that's not a problem of the law; it's a problem of having the world's biggest social media company run by a total ass-hat who makes it a bad place by design and refuses to fix that design when we now know from insiders that it is possible to fix it.

    • Would I run an online forum if I could be sued for what forum users post? Hell no.

      Would that be such a bad thing? The death of fora, I mean.

      I was too poor in the 90s to be able to afford internet, so I was late to the party with regards to usenet. But the little I had was more fun: you could use your favorite client software, and you could search for, or discover on your own, new things you didn't even know existed. And most importantly: by posting there you didn't empower some giant corporation. You had a contract with your upstream provider, nothing else.

      If most of what came after what

      • Re: Fantasy (Score:4, Insightful)

        by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Saturday January 09, 2021 @03:11PM (#60916520)

        Would that be such a bad thing? The death of fora, I mean.

        Wouldn't you miss Slashdot?

        (This is the third time I've written pretty much the same reply to similar suggestions.)

        • Yes, I would.

          But let's face it, Slashdot is atypical. Most of the internet isn't like that. In fact, none of the internet is like that.

          The current rules don't breed Slashdots, they breed Facebooks and Twitters.

          • Re: Fantasy (Score:3, Funny)

            by BAReFO0t ( 6240524 )

            Says the kid who can't tell the Internet from the web.

          • Comment removed (Score:5, Insightful)

            by account_deleted ( 4530225 ) on Saturday January 09, 2021 @03:43PM (#60916716)
            Comment removed based on user account deletion
            • Idiot yourself.

              I did consider the impact. I'm pretty sure I've made it clear that I'd like to lose all of that. There's no reason why that couldn't be reinvented in a way that sucks less.

            • Re: Fantasy (Score:5, Interesting)

              by ghoul ( 157158 ) on Saturday January 09, 2021 @04:50PM (#60917100)
              The difference is Facebook insists on true ID so a bad FB post can ruin your entire life. A bad Slashdot post only affects my Karma.
      • by Zxern ( 766543 )

        Trolls....

        One troll could bring your site down and make you face potential lawsuits with ease.

        • There wouldn't be a site. It's not like, currently, one troll can bring down your email server or something.

          • There wouldn't be a site. It's not like, currently, one troll can bring down your email server or something.

            But would this be limited to web sites? If we go back to basics, a web site/forum is just a channel of communication between people. If we make the channel responsible for the content, why would this stop at web sites, and not also apply to regular mail, usenet, e-mail, instant messages and so on?

            That may lead to all kinds of changes, most of them unpleasant - at the minimum, loss of anonymity; at the maximum, all communication channels would implement some form of validation/censorship. And yes, this sounds ludicrous

          • by sjames ( 1099 )

            That's because you can delete the troll's emails and move on. In a world without 230, the troll shitposts all over your forum and if you dare delete his crap you magically become responsible for anything any idiot chooses to post, even if you don't notice it.

            Arguing against 230 is essentially saying you think people should be responsible for things that people they don't even know might choose to say.

            Simple analogy, one day some idiot stands in your driveway shouting obscenities and you shoo him away. The next d

      • by AuMatar ( 183847 )

        Um, if forums went, so would usenet. There's no difference between a usenet provider and a forum provider; they'd both be affected by the change in law. This would basically mean every message displayed would need to be preapproved by moderators, or the site would shut down.

          I understand that, but still, email works. So does an individual website. Nobody but the owner is required to moderate that. So between fora and email, or a website, there is the step that says "legal trouble for the organizer."

          That's what we need to get to, the configuration prior to that step.

          Part of the problem with Social Media and the Forum Culture is that people get widespread attention (as in: world wide) even if they did nothing of value to deserve that. Posting an inflammatory comment below a New Yo

        • Comment removed based on user account deletion
      • by sjames ( 1099 )

        Ironic that you're asking that question on a forum that might not be able to exist without 230.

    • Re:Fantasy (Score:5, Insightful)

      by Joce640k ( 829181 ) on Saturday January 09, 2021 @03:05PM (#60916492) Homepage

      Would I run an online forum if I could be sued for what forum users post? Hell no.

      And that is the reason why it shouldn't be repealed and why the person who wrote that article is a total dumbass.

      TLDR version: If you repeal section 230 then only the people with large legal teams will be able to host any kind of user content on the Internet, e.g. Facebook and Twitter, the exact same people the author wants to be taken down.

      Also: "Automatically removing bad content"? Not a chance. People have evaded spam email filters for decades now; Facebook content is no different.

      • You don't have to automatically remove the bad content. You can let the posters of the bad content make it invisible for you. Short version is that someone who acts like a troll should earn invisibility relative to all the people who don't act like trolls. If a troll screams in the forest but no one can see the tree, then who cares?

        The tricky case is a troll who pretends to be nice some of the time. You need to consider at least two dimensions to cover that case. One is the age dimension. An old identity is
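        (A toy Python sketch of those two dimensions, with every weight invented for illustration rather than taken from any real system:)

            from datetime import datetime, timezone

            def visibility_score(created, flags, kudos):
                # created: timezone-aware datetime when the identity was made.
                # Dimension one: identity age. A fresh sock puppet starts
                # near-invisible no matter how it behaves.
                age_days = (datetime.now(timezone.utc) - created).days
                age_factor = min(1.0, age_days / 90)
                # Dimension two: behaviour. Acting like a troll costs double
                # whatever niceness was banked by pretending to be nice.
                return age_factor * (kudos - 2 * flags)

        Readers browsing above some threshold then simply never see authors whose score has sunk below it; nothing is deleted, the troll just earns invisibility.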

        • You can...

          I'll stop there, as with a lawsuit you won't get the courtesy of "but I was going to do something about it." You've heard of Google? The search engine thing? You know how SEO works, or doesn't work, or used to work, or might work tomorrow? Google, where money and expertise have no realistic limits, goes to an awful lot of trouble to keep bad actors from gaming its organic search result algorithms and has done so since its inception. Do you really want each and every forum-like site to have to do the same fo

          • In this post, the word Mazi should start with an N, but you know, lameness filter. It discusses freedom of speech.

            Pretty much this. With trillions of dollars in stock value, there are megayachts yet unborn just waiting for lawyers to buy them with the winnings from unending lawsuits against them.

            Our current issues with free speech and the tech giants revolve around them censoring harassment for fear that politicians, who want that, will alter or repeal 230. Trump's threat against it was for the exact opposite reason -- because the

        • Aaand that is how you get the filter bubbles that caused this in the first place.

          Oh, you thought your definition of "troll" is somehow more equal?

          Remember being the "troll" when Trump won, four years ago?
          Don't be so confident. Reality is relative. If there is an absolute reality, no human can ever tell. You might not always be dominant. And dominance is no logical reason for thinking you're more right.

        • Comment removed based on user account deletion
      • Re:Fantasy (Score:4, Interesting)

        by chuckugly ( 2030942 ) on Saturday January 09, 2021 @03:32PM (#60916642)

        If 230 were to be repealed, there would be a date in the future when that would take effect, and a mad scramble to reformulate something better to take its place before it sunset. I think the vagueness of 230 lends itself to abuse, and the rewrite that repealing it would force would be a good thing. It could of course be rewritten without being repealed, but much like rewriting the ACA, I think it's going to be very hard to get enough people motivated enough to do a rewrite while v1.0 still stands.

    • Re:Fantasy (Score:5, Informative)

      by Koby77 ( 992785 ) on Saturday January 09, 2021 @03:43PM (#60916710)

      Would I run an online forum if I could be sued for what forum users post? Hell no. The result would

      Before Section 230 existed, online forums were safe because of a case called Cubby v. CompuServe, from 1991. It established that CompuServe was not liable because they didn't moderate their forums, and so they had no reason to know about the speech that was occurring. Another key lawsuit was Stratton Oakmont v. Prodigy, from 1995. In that case, Prodigy was found liable for defamation because they were moderating. You are not automatically liable simply for having a comments section or a forum. It's a similar idea to how the phone company isn't liable if you say bad things to someone else over the phone. The carrier (probably!) isn't listening in (I kindly ask that we leave the Edward Snowden side of the argument for another day).

    • Would I run an online forum if I could be sued for what forum users post? Hell no. The result would

      Well maybe not. While I am fully in support of section 230, I hate conspiracy theories almost as much as I hate censorship -- and I really hate censorship.

      I could see, in perhaps some way, that if it can be proved in a court of law that somebody actually engaged in physical violence as a result of influence from social media, then the platforms could be held liable, possibly even criminally so in the case of what happened in DC with both the pizzagate incident and January 6th. If it were crafted in such a way

    • The result would be a higher-quality, less consolidated, and ultimately freer public square.

      No. They will simply move abroad the same way Trump has moved from Twitter to Telegram.

    • by sjames ( 1099 )

      Exactly. Making running a forum a legal minefield, where you might be held responsible for whatever anyone says if you make any effort at all to keep trolls out, will only entrench the monopoly. Only a very large entity can afford the legal risks.

      If the problem is a few large players holding a near monopoly, we have other laws to deal with that, but they can't help either if we don't enable smaller players to move in after clearing the way.

      Personally, I am not in favor of de-platforming alternative

  • by cascadingstylesheet ( 140919 ) on Saturday January 09, 2021 @02:44PM (#60916384) Journal

    The law lets large sites benefit from network effects (I'm on Facebook because my friends are on Facebook) while shifting the costs of scale, like shoddy moderation and homogenized communities, to users and society at large.

    "homogenized communities" are exactly what al the censorship-happy people want.

    The whole labeling and disappearing cancel culture stuff is designed to create homogeneity of thought, which is way worse than any homogeneity of skin color or other such trivialities.

    • by Nrrqshrr ( 1879148 ) on Saturday January 09, 2021 @03:02PM (#60916476)

      It's censorship if the government does it, but it's free market if a company does it.

      The truth is that "Public Space" doesn't exist anymore. It's owned by companies and though you are free to create your own, anti-monopoly laws exist for a reason. Monopoly is bad for everyone, even itself.

      • by amorsen ( 7485 )

        When did this "Public Space" you decry the loss of exist? Newspapers have always been very selective about publishing letters to the editor.

        You are free to distribute flyers or get on a soap box on the corner.

        • When did this "Public Space" you decry the loss of exist?

          The tradition of speaker's corners [wikipedia.org] has existed for a very long time now. And here's a relevant quote on the content of the speech allowed in speaker's corners in the UK, such as the one in Hyde Park (from the Wikipedia article):

          The ruling famously established in English case law that freedom of speech could not be limited to the inoffensive but extended also to "the irritating, the contentious, the eccentric, the heretical, the unwelcome, and the provocative, as long as such speech did not tend to provoke v

        • You are free to distribute flyers or get on a soap box on the corner.

          Are you? Try it for an unapproved cause and see what happens.

          You can burn a city down if you are protesting imaginary racism, and little will happen to you. Try protesting the killing of babies and you'd better do it across the street. WAY across the street. Possibly another street. Or block.

      • It's censorship if the government does it, but it's free market if a company does it.

        No. It's government censorship if the government does it, otherwise it's just censorship.

    • Well, that mob was a quite homogenized community. ;)

      Just not the smile-or-die p.c. goosestepping kind.

      So the author clearly is too dumb to even argue about this.

  • If a person can be sued for what they said on Facebook, why do we need to change the rules of/for Facebook? I'm not being facetious, I'm trying to understand both sides here.

    • by Koby77 ( 992785 )

      If a person can be sued for what they said on Facebook, why do we need to change the rules of/for Facebook? I'm not being facetious, I'm trying to understand both sides here.

      Facebook is acting as both a platform and a publisher at the same time. The reality is that the little guy isn't going to face a lawsuit. And with the big tech corporation immune, this leaves zero accountability. The corporate tech oligarch reaps the vast majority of the benefits, while paying no price.

    • Why? Why must we always be "fair" to nutty sides?
      And why just two sides? (Sides that on many parameters lie on the same side, and equally far out in nutty space.)

  • by RyanFenton ( 230700 ) on Saturday January 09, 2021 @02:47PM (#60916400)

    As a longtime follower of the Skeptic movement - I've seen a lot of scams. I've also seen a lot of lawsuits against people investigating and debunking scams.

    Why is this important here?

    Because I've seen a lot of scammers get away with some horrible things in Europe - because you're not allowed to say anything in public against a large company in most European nations.

    Because they consider it legal slander of various sorts to complain publicly against their business practices.

    Well... repealing 230 is a super-charged version of what they have.

    Your often reasonably critical comments will now be the legal liability of the company with the servers hosting the websites.

    In the US, you can sue anyone for anything, with generally few drawbacks. Websites being on the hook for comments means basically the same as all the worst slander laws - because it costs far too much to defend against 1000 baseless lawsuits to be worth keeping any set of comments up.

    If you're going to repeal 230 - you have to replace it with something else - or else we end up with companies freely scamming to an extent far beyond what we see even now.

    And no one will be (de-facto) permitted to complain about it on US servers.

    Ryan Fenton

    • The courts are terminally backed up, another year behind the normal civil docket; all work has stopped in my open state and county. A tremendous number of items are going beyond statutes of limitations in civil court. Criminal court is reserved for the people it should have always been reserved for, the insane or professional. The teen dope smoker and the bar fight are not in court these days.

      Getting sued isn't even a thing in the US anymore, I invite it, I would love discovery process against the people
    • Bullshit. (Score:2, Insightful)

      by BAReFO0t ( 6240524 )

      Because you're not allowed to say anything in public against a large company in most European nations.

      I stopped reading right there.

      Seriously, you clearly have never been on the continent. Nor speak any of the languages. I bet you don't even know the difference between Europe and the EU, and believe Africa is a country.

    • Aside from some Eastern European countries that are becoming increasingly authoritarian (Poland, Belarus, Hungary), I do not think your depiction of European slander liability is accurate. I am no expert in that regard myself, but I just wanted to point out that it sounds like you're exaggerating, and if you're not, I would be interested in reading citations (genuinely interested, I'm not just saying that to be a dick).

      Well... repealing 230 is a super-charged version of what they have.

      I actually believe that if Section 230 is repealed, only one major even

    • by The Wily Coyote ( 7406626 ) on Saturday January 09, 2021 @09:34PM (#60918184)

      Because I've seen a lot of scammers get away with some horrible things in Europe - because you're not allowed to say anything in public against a large company in most European nations.

      Because they consider it legal slander of various sorts to complain publicly against their business practices.

      Umm, having lived in Europe for several years, I call BS on this claim.

  • by t0qer ( 230538 ) on Saturday January 09, 2021 @02:48PM (#60916408) Homepage Journal

    One of the major points brought up about the slash system was that it was user moderated, not owner/administrator moderated. That was what allowed it to operate under 230. Twitter and Facebook are actively employing people to moderate.

    Twitter and Facebook do have a system in place to allow user moderated content, you can follow or unfollow someone. That is fair. What's been going on lately though is these platforms have been choosing who you can and cannot follow. Who is and isn't allowed on the system. From a purely technical standpoint, TD has not ddos'd these sites. He has not done anything to hinder the sites themselves. He's just posted things these sites disagree with. I don't think 230 should be repealed, but at the same time sites that are actively engaged in who you can follow, and what content gets posted should not get the protections 230 affords.

    Even to this day on /. we have our trolls and shitposters. I don't agree with their page-long ASCII swastikas, and they pale in comparison to some of our older glorious trolls like GNAA and The Turd Report, but I take some comfort in knowing that as a user I can choose to filter them, or not, and that's not decided by some admin with an axe to grind.

    Unfortunately society as a whole is starting to welcome powers that be to think for them, deciding what they see and what is allowed to be seen, and that's scary.

    • by malkavian ( 9512 ) on Saturday January 09, 2021 @03:12PM (#60916528)

      That seems to be along with what I'm thinking at the moment.
      What actually seems to be happening at the moment is more and more people yelling that they need safe spaces for their echo chambers, where they can say what they want to people who believe the same things they do, and never hear anything else. For people who disagree with them, they want them isolated into other equivalents of their "safe spaces" (which they can then later attack from a distance, and attempt to get removed altogether). And they expect more and more tech isolation to cover their own weakness: the inability to hear things that don't align with their own beliefs, rightly or wrongly, and to actually make informed calls as to whether there's validity anywhere in opposing statements.

      What should be occurring is making sure that the mix of opposing "beliefs" actually meet, and we need to change the mindsets that are so entrenched and unbending. The problem isn't in the tech, or isolation, it's in the actual people who have grown so inflexible and convinced of their own frames of reference that they're entirely intolerant of others.

      And the nature of the Slashdot trolls has somewhat changed. Yep, we still have the trolls you mention; there are a few cleverer ones who will troll threads by pretense at authority to derail interesting conversation progression, and a lot of what I see now is the "cancel trolls", who actually have mod points and will attempt to mod out interesting and often very accurate pieces of information, presumably because "they don't like what it says, and don't want anyone else to see it". Those are probably the most dangerous, for exactly the reasons you've given.

      • by t0qer ( 230538 ) on Saturday January 09, 2021 @03:22PM (#60916594) Homepage Journal

        Totally agree with you on all points. On cancel trolls, this is one reason I *really* hate reddit. Their moderation system is complete trash since anyone can create a bunch of accounts to moderate comments how they see fit. Reddit admins have no desire to actually keep this in check, in fact I think it's one of the reasons reddit has gotten such a draw.

        At least with /. even if they farmed a bunch of accounts, the ability to moderate is random, and there is meta-moderation that can RTLSB (forgot what it's called) but it will basically take away any ability to moderate if the majority of users think your moderation is BS.
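        (A toy Python sketch of that meta-moderation rule; the numbers are invented, not Slashdot's actual thresholds:)

            def may_moderate(metamod_votes, min_votes=10, min_fair=0.5):
                # Each vote is True if a reviewer judged one of this user's
                # past moderations fair. Farmed accounts lose mod eligibility
                # once most reviewers call BS on their moderation.
                if len(metamod_votes) < min_votes:
                    return True  # too little history to judge yet
                fair = sum(1 for vote in metamod_votes if vote)
                return fair / len(metamod_votes) >= min_fair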

    • by Cylix ( 55374 )

      It’s time to kill the means that got them to power.

      I fully expect a slew of articles from left wing sources to vilify 230 now.

      What it really needed was a way to challenge those who behave as publishers, i.e. FB and Twitter have been hiding behind it and acting in bad faith.

      Anyhow, they were complicit in their own demise.

    • by shanen ( 462549 )

      Good comment, though the Subject is weak. However I'm not sure what you mean by "filter them". Does Slashdot have a killfile somewhere?

      However the killfile approach doesn't work well when identity is cheap. An attention-seeking troll can always spawn a fresh sock puppet.

      For such reasons I would like to see self-censorship tools, but with visibility, so that everyone can see exactly what they are doing that is making their comments less visible. That could include automatically triggered meta-commentary, but

      • by t0qer ( 230538 )

        By filter I mean we can choose what comment score to browse at. Most of the time I'm looking at 1+ comments. Sometimes I'll change that to -1 or 0 depending on my mood because occasionally you can find some diamonds in that rough.
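        That reader-side filter is trivial to express; a sketch in Python, with the comment structure assumed for illustration:

            def browse(comments, threshold=1):
                # Nothing is deleted server-side; each reader just picks
                # the minimum score they are willing to see.
                return [c for c in comments if c["score"] >= threshold]

            # browse(comments, 1) for the default view;
            # browse(comments, -1) to dig for diamonds in the rough.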

    • by stikves ( 127823 )

      Believe me, user moderation does not work. At least not at Twitter / Facebook scales.

      Yes, it would work here on Slashdot, since there are maybe 10-20 articles per day, and bad comments are highly visible (i.e.: everybody is subscribed to all content).

      But, no, on Facebook, Twitter there are literally more comments than people on the services. You would need to run schemes like "welcome to Facebook, please moderate these 3-4 posts before you can login" instead of the CAPTCHA of today.

      What is more, there are ec

    • One of the major points brought up about the slash system was it was user moderated, not owner/administrator moderated. That was what allowed it to operate under 230. Twitter, Facebook are actively employing people to moderate.

      When automated scripts remove content the millisecond someone attempts to post it, that is not "people" moderation.

      People often have the ability to make a judgment call. A filter cannot.

      This is why political bias, implemented as filters bad enough to be deemed censorship, keeps being dragged in front of Congress.

    • Precisely.

      All that Slashdot is missing is the ability to *inherit* trust relationships from other users. (Like trusting user X on who to hide and who to highlight. Maybe even parametrized by areas of expertise via topic tags on posts.)
      And to face the backlash from your moderation, by being forced to explain it in the form of a comment, instead of being forced to be an anonymous coward just so people don't get their own replies on top.
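      A toy sketch of such inherited trust in Python; the data structures are hypothetical, since nothing like this exists on Slashdot today:

          def inherited_weight(me, author, topic, trusts, decay=0.5, depth=2):
              # trusts[(user, topic)] is the set of users whose moderation
              # 'user' trusts for that topic tag. Follow the edges a few
              # hops, decaying the weight, until we reach the author.
              frontier, seen, weight = {me}, {me}, 1.0
              for _ in range(depth):
                  weight *= decay
                  nxt = set()
                  for user in frontier:
                      for other in trusts.get((user, topic), set()):
                          if other == author:
                              return weight  # found a trust path
                          if other not in seen:
                              seen.add(other)
                              nxt.add(other)
                  frontier = nxt
              return 0.0  # no path: fall back to the plain score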

  • Repealing 230 would make any platform provider legally liable for *everything* its users post, if they do any moderation whatsoever.
    Only huge companies will be able to perform the 24/7 content monitoring required.
    This is the opposite of what the commentator tries to convince you of.

  • How would you moderate your site to make sure people stay on topic and don't spam? It'll be impossible. Slashdot will be impossible without Section 230.

    • The Slashdot text-only mod-points system works well. Photos of children do not belong on the same site as tech talk or political postings. The mixing on YouTube and Twitter makes moderation necessary. You do not take your cat, your dog, your sister and your grandmother to a rally for freedom; Facebook always does.

      I am about to fire up an NNTP and IRC server on a cloud provider with zero morals for my little group of miscreants, to talk the way we learned to talk politics during the endless Septembers of the 1990s
  • CDA 230 is not really needed. Those sites are mediums, not publishers, by any reasonable definition. CDA 230 just codifies that to make everyone sure about it.

    A reasonable court and a reasonable jury would find that someone e.g. libeling someone else on twitter was responsible, not twitter. It's the same as suing a firearms manufacturer because their legal product, working exactly as described, was used in a shooting. In a reasonable world laws preventing this would not be required, but alas the lack of

    • by nagora ( 177841 )

      CDA 230 is not really needed. Those sites are mediums, not publishers, by any reasonable definition

      Nope. They are publishers who push content to you in the hope of making money from advertising.

      If they didn't target advertising based on the user (rather than the content of the page) then there would be a much stronger case for not being publishers.

      • If they paid tweeters for their insipid attempts to curry favor then yes, I would agree. But they don't pay you, people just go on there and bloviate for free.
    • Or suing the construction company for a murder that happened on/in something they built, for that matter.

      It is literally as stupid as: "Hey building owner! We sue you because you did not wipe away that blood and hide the body when somebody completely unrelated murdered somebody in your building! We don't want to see any of that shit! We want to wait, and wait, until it becomes a popular murder spot and all of your resources don't suffice to hide it all, and they form a murder mob and spill to Capitol Hill!"

  • by bjwest ( 14070 ) on Saturday January 09, 2021 @02:56PM (#60916452)
    It does need revision, but you can't just repeal it with nothing ready to replace it, which is what Trump tried to do. That would most definitely break the internet. It would require someone to read each and every post, comment and tweet created before it's made public, delaying the posting for days while it sits in the moderation queue.
  • by LagDemon ( 521810 ) on Saturday January 09, 2021 @02:57PM (#60916456) Homepage
    This article is wrong on so many levels, I honestly don't know where to start.

    First, it advocates fixing a monopoly problem (all speech concentrated in Facebook and Twitter) by trying to change liability laws. There is an entire section of our legal code devoted to fixing monopolies. Use the right tool for the job.

    Second, it rather naively assumes that:

    A) Facebook and Twitter will only remove the most controversial speech.

    In reality, once you open the door for liability, they become liable for literally anything anyone gets upset about enough to sue over. They may win most of the court cases, but they are at the mercy of anyone who can afford to file court papers, nationwide. The costs of the cases and the risks of losing even one case will mean that their only option will be to ban anything more controversial than "I had a salad for lunch".

    B) Other social media companies will be willing to take risks that Facebook and Twitter won't.

    If a company the size of Facebook, with its thousands of moderators, can't even slow down the deluge of bad content, let alone stop it, what makes anyone think that a smaller company will have better luck? The worst actors aren't going to suddenly go quiet if Facebook bans a few of them; they will simply go flood some other, smaller provider that has even less capability to deal with the problem. Said provider will quickly be sued into oblivion.

    C) That an open internet forum is even possible if the provider becomes legally liable for user posts or for failing to moderate said posts.

    On any given day, there are literally tens of millions of posts on a forum like Facebook. It is quite literally impossible to have a human review all, or even a fraction, of those posts. You can try to have a user reporting system, but experience has shown that any system like that is vulnerable to brigading and mass abuse by people who want their opponents silenced. Practically speaking, moderating any large internet forum either requires that every single post be hidden until approved by a moderator, or that the moderators are protected by the equivalent of Section 230.

    In other words, the internet exists in its current form because of Section 230. You can try removing it, but what you get as a result won't be an open internet forum. It will either be a completely locked-down site where all but the most uncontroversial posts are deleted, or a vile cesspool of a site (like what happened to Parler) where people feel free to vent the worst impulses of humanity directly onto your screen with no moderation.
    • by doom ( 14564 )

      First, it advocates fixing a monopoly problem (All speech concentrated in Facebook and Twitter) by trying to change liability laws. There is an entire section of our legal code devoted to fixing monopolies. Use the right tool for the job.

      So, if Facebook is forced to sell off Instagram and Whatsapp and whatever else, you think that will improve the quality of discourse on Facebook?

      I think any effect like that will be very indirect and weak at best... you need to believe that better competition will creat

    • by doom ( 14564 )

      Facebook and twitter will only remove the most controversial speech. In reality, once you open the door for liability, they become liable for literally anything anyone gets upset about enough to sue over. They may win most of the court cases, but they are at the mercy of anyone who can afford to file court papers, nationwide.

      Here I think you're on point: people are dreaming of a return to the days of the Fairness Doctrine and three main tightly managed broadcast news sources, and if you're going to get t

    • by doom ( 14564 )

      You can try to have a user reporting system, but experience has shown that any system like that is vulnerable to brigading and mass abuse by people who want their opponents silenced.

      Indeed, almost every internet moderator I've encountered does such a shallow job of looking into what's going on that they're very easily manipulated. E.g. post "polite" flame-bait until someone flares up, then report them for being rude.

      The only internet moderator I've found who's worth anything runs https://www.reddit.com/r/p [reddit.com]

    • it advocates fixing a monopoly problem (All speech concentrated in Facebook and Twitter) by trying to change liability laws.

      Yeah that's a nonsense approach.

  • If made liable for posts flagged as defamatory or unlawful, mass-market platforms including Facebook and Twitter would likely switch to a policy of taking down those posts automatically.... Vigorous argument and provocative content would migrate to sites where people take responsibility for their own speech, or to forums whose operators devote attention and judgment to the conversations they host. The result would be a higher-quality, less consolidated, and ultimately freer public square.

    No, what you end up with is group think. You can see it already, today, on many sites which have user moderated forums. Speak against the popular opinion and get moderated into oblivion.

  • "When the law was enacted in 1996, the possibility that monopolies could emerge on the internet seemed ludicrous."

    Why was that ludicrous in 1996? Monopolies can exist anywhere. If anything with the Internet the ability to maintain monopolies is reduced as people can more easily change their choices. For example, people have left Facebook in droves over the years for a multitude of reasons. If I can't stand slashdot anymore I can comment on reddit, etc.

  • A publication that doesn't allow user comments publishes an opinion piece against the law that allows other websites to publish user comments. Yawn.

    This is just another old media opinion stating that new media is bad. They want to go back to the old model where newspapers and TV were the final arbiters of what is published and what people should consider factual.

    Section 230 protects moderation.

    What are the alternatives: Complete repeal: websites shut down comments because they can't do anything about user commen

  • by BAReFO0t ( 6240524 ) on Saturday January 09, 2021 @03:40PM (#60916692)

    You sound like a child that thinks putting its head in the pillows means it's all not happening anymore. It is the most predictable knee-jerk reaction of somebody who is too lazy to actually think about it and just wants to whack something to vent his anger.

    A moron's posts should stand there for everyone to laugh at for all eternity.

    That people don't laugh but eat it up won't improve one bit with your censorship.

    What *will* "improve" is that evil will enjoy an even bigger surprise effect. Unless you want that, to keep having a convenient enemy to make life simple... Guess who else ticks that way... ;)

    I suggest we ask what honestly desperate and sad conditions, for a first-world country, drove people to want to believe this, of all things, in the first place... and fix that!
    Like livable wages and being treated like a human on your job.
    Or is that too hard for you?

    • by t0qer ( 230538 )

      Too many people these days think the population is too stupid to think for themselves, and they must be protected from bad influences.

  • From TFS:

    The result would be a higher-quality, less consolidated, and ultimately freer public square.

    Dude, it's the Internet.

    Go to Parler. Then masturbate all you want.

    Caution:

    You must enter a phone number. You know, the one you use on your Amazon account.

    • Go to Parler. ...
      Caution: You must enter a phone number.

      And to be a "Verified" user [wikipedia.org], you must provide your SSN or Taxpayer ID -- yikes!

      • by PPH ( 736903 )

        And to be a "Verified" user, you must provide your SSN or Taxpayer ID

        That's for their "influencer network". So it looks like they are set up to support a tiered system, possibly to allow these account holders to monetize their blogs.

        On any Internet board, I'll choose who to follow based on my perception of their credibility. Not whether they paid extra for the credentials.

  • by fahrbot-bot ( 874524 ) on Saturday January 09, 2021 @04:06PM (#60916850)

    In the United States, you are free to speak, but you are not free of responsibility for what you say. If your speech is defamatory, you can be sued. If you are a publisher, you can be sued for the speech you pass along. But online services such as Facebook and Twitter can pass along almost anything, with almost no legal accountability, thanks to a law known as Section 230

    You, yourself, are always responsible for what you say, regardless of the medium. A publisher can be held liable for what you say because they (presumably) reviewed, edited and approved your words prior to publication -- also you and they are in a (probably paid) contractual relationship to publish your words. The same relationship does NOT exist between you and Facebook, Twitter, etc...

    As for TFA, I read it but am not sure what he's actually advocating for... Perhaps he was paid by the word, but this seems to be his core message buried in there, though any elaborations are unclear:

    A new legal standard should encourage websites to moderate content posted by users (as Section 230 was intended to do), and it should recognize that forums for mixed-martial-arts fans and trauma survivors might apply different norms. But a new standard should not immunize hosts from risk of liability (as Section 230 does) even after notice that material they are hosting is defamatory, threatening, or otherwise unlawful.

  • by Dagmar d'Surreal ( 5939 ) on Saturday January 09, 2021 @04:09PM (#60916868) Journal

    We regulate one another's speech through shame or abuse, but we have nowhere to go where our own expression might be more tolerable.

    This is simply not true on so many levels it should actually fall under the heading of not even wrong [wikipedia.org].

    "Shame" is not regulation of speech. It's sometimes pushback for unacceptable behaviour, sometimes bullying, and sometimes just plain abuse, but it should never be confused with "regulation". ...and "abuse" is simply abuse.

    When the law was enacted in 1996, the possibility that monopolies could emerge on the internet seemed ludicrous. But the facts have changed, and now so must our minds... By creating the conditions under which we are all herded into the same virtual space, Section 230 helped turn the internet into a conformity machine.

    There are plenty of virtual echo chambers on the internet where people with fringe beliefs are allowed to do whatever they like, short of anything that clearly contravenes the law. Those places exist because of section 230.

    Without section 230, in a short span of time there would be almost no free (as in money) speech on the internet, and its democratizing effects would all but disappear. All content platforms would wind up charging for people to use them, rather like the "free website" people were given by the ISP to run in 1996 by being handed a public_html/ directory and told "Have fun!". Not only is that a fairly high barrier to entry (because it's still not quite that trivial to write a web page), it would burden users with the problem of dealing with their ISP deciding to change branding or domain names and, if the last 20 years have taught us anything, with what to do when their ISP invariably begins trying to insert advertisements into their webpages. ...or, you know, gets purchased and the service subsequently relocated/rebranded so that all the previous URLs stop working.

    Have you ever wondered why you were never charged for allowing search engines to spider your website? Mainly it was because your ISP never had quite enough leverage to get away with doing so. Do you think they wouldn't if they could? Look at your cable bill and explain why you're paying monthly fees for events you don't care a single bit about, and then look at your cell phone provider's service agreement and ask yourself why you're charged money for "excess bandwidth", but an additional fee if you need to connect your laptop to the internet through your cell phone for even five minutes (an idea the telcos couldn't get away with back in the early nineties).

    One might think, "well, small niche websites would spring up to fill that need", but that would show you weren't around prior to these changes to the legal landscape. Those smaller sites would be destroyed, one by one, by almost any lawsuit filed, because without section 230 those sites would be liable for content the very second it's been posted. Political winds shift rapidly and often, and what is an offhand comment one day could easily become legally actionable (and financially catastrophic) the next. Without section 230, sites would have to pre-screen whatever message their users wish to send, which would result in only conservative viewpoints being aired (and I'm not talking about Trump supporters' idea of "conservative") as content providers try ever harder to stay out of the way of SLAPP suits. (Did we perhaps forget those exist?)

    ...and those "costs of scale" are already benefitting the established players. They will continue to benefit the large content houses, who would, without section 230, be able to file suits against competitors, even if the individual poster was someone they hired to slip a specific, legally actionable statement through the overworked content pre-screening system.

  • by Rick Schumann ( 4662797 ) on Saturday January 09, 2021 @04:17PM (#60916930) Journal
    If you want to allow the law to hold Internet sites liable for content that users post there, then there are only a few ways you can do that and still have those sites.
    1. All users would have to be required to use their real, legal names, legally verified, and the sites they post on would have to have them sign a legally binding agreement that they take 100% responsibility, legally, in both criminal and civil law, for everything they post, and they agree the site itself is not in any way, shape or form responsible for said user content. Overall effects of this: non-U.S. users can't be verified and therefore are excluded; U.S. users cannot have any sort of anonymity or protection of their identity, possibly with dire consequences to their property and persons; overall, chilling effects on Freedom of Speech/Freedom of Expression on the Internet. Furthermore: some Bad Actors would slip through the cracks regardless and create chaos and mayhem.

    2. All Internet sites would have to sequester all user content pending moderation by human moderators, who would be required to fact-check, quality-control, and judge content for possible risk of liability for the site in question (see the sketch after this list). Nearly all 'social media' sites would go the way of the dinosaurs due to the impossibility of this task. At best it would take days, weeks, even months for user content to be made 'live' for others to view. Impractical. Would allow for anonymity, but again, chilling effects on freedom of speech and expression due to the massive lag between posts being made and posts going 'live'.
    3. All Internet sites close down the ability of users to post content of any kind whatsoever. Essentially, the World Wide Web becomes 'read-only'; the only 'user content' that could be transmitted over the Internet would be private email -- and I could make an argument for someone deciding that even that would have to be 'moderated' in some way by the systems a given email passes through. Things like in-game chat functions would have to be removed. I could also make an argument in this case for all user-generated communications over the internet having to be shut down (chat clients, video conferencing, etc) because how do you moderate live streaming content in realtime? You don't, that would be the problem. The Internet would essentially become entirely read-only, just another version of Cable TV, switching URLs instead of switching channels.
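    As a sketch of why option 2 collapses under its own weight, here is the queue it implies, in Python; the throughput numbers below are back-of-envelope guesses, not measurements:

        from collections import deque

        class PreModerationQueue:
            # Nothing goes live until a human approves it, so publish
            # latency is queue depth divided by reviewer throughput.
            def __init__(self):
                self.pending = deque()
                self.live = []

            def submit(self, post):
                self.pending.append(post)  # sequestered, invisible to all

            def review_one(self, is_acceptable):
                if not self.pending:
                    return
                post = self.pending.popleft()
                if is_acceptable(post):  # the human judgment call
                    self.live.append(post)

        # At 10,000,000 posts/day and ~500 reviews per moderator per day,
        # roughly 20,000 full-time moderators are needed just to keep the
        # queue (and thus the days-to-months lag) from growing.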

    All the above would essentially destroy the Internet once and for all.
    Meanwhile, I and others can make the argument that the only people wanting to remove the ability of websites to moderate what is posted by users on their sites as they see fit, are right-wing extremists who are complaining about so-called 'liberals' and so-called 'leftists' running sites like Facebook and Twitter 'censoring' them and depriving them of their 1st Amendment rights -- when in reality what is being moderated by these sites are 'alternate facts', propaganda, outright lies, misinformation, racism, and thinly-veiled calls for violence.
    The fact of the matter, whether these extremists scream and yell about it or not, is that all these sites they complain won't allow them to just say whatever they want are not part of the U.S. Government; they are private companies, and they have their own Terms of Service that all these people agree to when they sign up for an account, and if they violate that ToS then they will have their content moderated and/or have their accounts suspended or deleted. If you don't like it, go somewhere else where whatever it is you want to post is welcomed! That's your actual 1st Amendment rights in action right there: the Government doesn't dictate what websites can and cannot exist, so go find one that welcomes whatever extremist commentary you want to post -- or go start your own website allowing whatever content you want! The only restriction on this is blatantly illegal content, like planning/coordinating acts of violence, for instance.
    • One more thing I'd like to add to the above:
      If you'd like to see what a 'free and open internet', in the style 'certain people' (i.e. these extremists) would like, go browse through 4chan for a few days. What 'moderation' happens there is spotty at best due to the fact that moderators are volunteers, unpaid, and there's no 'user accounts', so no one knows who anyone is, everything is anonymous, and you can say whatever outrageous bullshit you want there. Even posting outright illegal content isn't necessar
  • Fuck right off (Score:4, Insightful)

    by Graymalkin ( 13732 ) * on Saturday January 09, 2021 @04:55PM (#60917134)

    The idea that repealing Section 230 will do anything beneficial is beyond absurd. The only thing it would accomplish is to consolidate more communication in the silos of companies like Facebook. Typically, vague laws are a detriment to citizens; for instance, the "indecency" portions of the CDA got struck down because "indecent" and "offensive" were not given definitions in the law. Section 230's vagueness is actually a boon for the citizenry.

    While everyone wants to reference Facebook et al. with regard to Section 230, it's really everyone else who benefits. Because you don't personally own a bunch of Internet infrastructure, you have to rely on third parties to host pretty much anything you put online. Neither centrally hosted nor peer-to-peer hosted material would be viable without Section 230 protections.

    If you want to put up a website, it's either going to be hosted at a service provider or going through a service provider if you try self hosting it. In either case, the ISP is not the publisher of your site, you're just using their infrastructure. Without Section 230 protections no ISP is going to want to do that because they would then be liable for whatever you did with your site. Peer-to-peer hosting isn't safe either, one successful lawsuit against a last-mile ISP would lead to all of them actively blocking upstream services.

    This ends up spilling over into any content not generated by the hosting entity. So no more GitHub, mailing lists, help forums, or even wikis. Even product/service ratings on websites will disappear. For all the bad shit that might get removed from the web, an untold number of net-positive things will also end up going away.

  • by DERoss ( 1919496 ) on Saturday January 09, 2021 @04:59PM (#60917150)

    Making Internet platforms liable for what their users say could eventually eliminate platforms. They would go bankrupt because they would be seen as "deep pockets" and sued for vast sums. Instead, Section 230 should be modified.

    The terms of service and acceptable-use policies of a platform should be required by law to apply uniformly to all of its users. There should be no exceptions for any individual, organization, or business. There should be no exceptions for any politician or government agency.

    Then, if a platform fails to enforce its policies and terms uniformly, the exemption from liability in Section 230 should be presumed by law to have been waived. This would create a strong incentive for platforms to go beyond lip service in decreeing their policies.
