
Google Says Supreme Court Ruling Could Potentially Upend the Internet (wsj.com) 221

Speaking of Google, the company says in a court filing that a case before the Supreme Court challenging the liability shield protecting websites such as YouTube and Facebook could "upend the internet," resulting in both widespread censorship and a proliferation of offensive content. From a report: In a new brief filed with the high court, Google said that scaling back liability protections could lead internet giants to block more potentially offensive content -- including controversial political speech -- while also leading smaller websites to drop their filters to avoid liability that can arise from efforts to screen content. [...] The case was brought by the family of Nohemi Gonzalez, who was killed in the 2015 Islamic State terrorist attack in Paris. The plaintiffs claim that YouTube, a unit of Google, aided ISIS by recommending the terrorist group's videos to users. The Gonzalez family contends that the liability shield -- enacted by Congress as Section 230 of the Communications Decency Act of 1996 -- has been stretched to cover actions and circumstances never envisioned by lawmakers. The plaintiffs say certain actions by platforms, such as recommending harmful content, shouldn't be protected.

Section 230 generally protects internet platforms such as YouTube, Meta's Facebook and Yelp from being sued for harmful content posted by third parties on their sites. It also gives them broad ability to police their sites without incurring liability. The Supreme Court agreed last year to hear the lawsuit, in which the plaintiffs have contended Section 230 shouldn't protect platforms when they recommend harmful content, such as terrorist videos, even if the shield law protects the platforms in publishing the harmful content. Google contends that Section 230 protects it from any liability for content posted by users on its site. It also argues that there is no way to draw a meaningful distinction between recommendation algorithms and the related algorithms that allow search engines and numerous other crucial ranking systems to work online, and says Section 230 should protect them all.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Retired ICS ( 6159680 ) on Friday January 13, 2023 @01:51PM (#63206354)

    If they want protection from liability, they should seek to be common carriers. If they wish not to be common carriers and to exercise editorial control, then they should be subject to liability, the same as any other publisher. Mutatis mutandis for ISPs.

    • Re: (Score:3, Funny)

      by Anonymous Coward
      No, 230 should protect all of the content moderation that Google performs.

      Who is a better source on this than Google? Only Google knows what the impact of this sort of thing would be, and Google looks out for the best interests of The Internet and all of us.
      • "At the apex of the pyramid comes Big Brother. Big Brother is infallible and all-powerful. Every success, every achievement, every victory, every scientific discovery, all knowledge, all wisdom, all happiness, all virtue, are held to issue directly from his leadership and inspiration."

        • The problem is you're using that quote to try to justify forcing Google to censor along government guidelines

          • Re: (Score:3, Interesting)

            by quonset ( 4839537 )

            The problem is you're using that quote to try to justify forcing Google to censor along government guidelines

            Don't worry, Twitter is doing quite well censoring what its owner [imgur.com] doesn't like [imgur.com] without the government.

            • The problem is you're using that quote to try to justify forcing Google to censor along government guidelines

              Don't worry, Twitter is doing quite well censoring what its owner doesn't like without the government.

              I'm not worried about that, because the two things are not remotely the same.

              • Re: (Score:3, Interesting)

                by cayenne8 ( 626475 )
                You know....how about 230 covers it IF they turn off and stop using algorithms to push content to people.

                This way people would just search for what they want and not have things they don't want to see pushed at them.

                This takes away much of the editorialization factor.

                Treat it like USENET....sure, there's content you don't want to see and by searching, etc....you can generally avoid it.

                I lean towards what the OP of this thread mentioned, let them have full 230 protection IF they act more like a common ca

                • Re: (Score:2, Troll)

                  by drinkypoo ( 153816 )

                  You know....how about 230 covers it IF they turn off and stop using algorithms to push content to people.

                  Showing people new content without showing them things they have stated they don't want to see is done with an algorithm. Congratulations, you just proved you know how nothing works.

            • The problem is you're using that quote to try to justify forcing Google to censor along government guidelines

              Don't worry, Twitter is doing quite well censoring what its owner [imgur.com] doesn't like [imgur.com] without the government.

              For those who haven't seen it, here are [imgur.com] the two videos [imgur.com] Ken Klippenstein posted showing a self-driving Tesla pulling over in a tunnel and coming to a stop without any reason to do so. This occurred right after Musk announced the feature. The resulting pileup damaged eight cars and injured nine people.

              • by dgatwood ( 11270 )

                I had no trouble searching for him. Maybe the person posting that made a typo? Or maybe a partial outage? It is post-Elon Twitter, after all.

            • I just tried that right now and @kenklippenstein is the first result, blue checkmark and everything. Seems like he's claiming it has been removed [twitter.com] but right now I have no evidence it even happened except a few clips from imgur.

          • “Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.”

            “Power is not a means; it is an end. One does not establish a dictatorship in order to safeguard a revolution; one makes the revolution in order to establish the dictatorship. The object of persecution is persecution. The object of torture is torture. The object of power is power.”

            George Orwell, 1984
      • > Google looks out for the best interests of The Internet and all of us.

        I think my sarcasm detector just got triggered

    • Re: (Score:3, Insightful)

      by backslashdot ( 95548 )

      Allowing and disallowing content isn't "editorializing." What you are talking about would eliminate sites like Slashdot, because each site would either become full of trolls and spam, or getting even one comment up would become a long process since every post would have to be checked by staff -- Slashdot can't afford that.

      • by jwhyche ( 6192 )

        Allowing and disallowing content isn't "editorializing."

        Yes it is. Changing any form of content in any manner is the very definition of being an editor.

        • by dfm3 ( 830843 ) on Friday January 13, 2023 @03:32PM (#63206656) Journal
          The word you are looking for is "moderating". Editorializing is when one posts their own opinions, one example being newspaper articles that feature commentary and analysis of an event. On a smaller scale, one can editorialize by injecting their own opinion when recounting a story, say if I were to tell you about something I observed but I include my own opinions instead of just reporting the facts.

          When a website operator deletes content for violating their TOS, that's moderating.

          When a website operator screens content from users before allowing select material to be posted, that's moderating.

          When a site like /. allows select users to act as moderators to vote up and down content, that's moderating.
          • I'm going to say I agree with this, I do. However.. How does one avoid crossing over from assurances and hand-waving to full head-in-the-sand denial?

            Is there some point where we could truthfully acknowledge the signs of a slippery slope? Where is the line drawn?

            When a website operator deletes content because an interested 3rd party says so, that's ... ?
            When a website operator screens content pending judgement by the Ministry of Truth, that's ... ?
            When a website operator promotes controversial/inconvenien

      • by wiggles ( 30088 ) on Friday January 13, 2023 @02:38PM (#63206512)

        One person's "troll" is another person's fellow political traveller.

        We have had a serious problem with people labeling people whose political speech they don't like as "trolls" to be silenced.

        • by ceoyoyo ( 59147 )

          Don't worry, without moderation you won't be able to see any real posts OR troll posts. They'll all just get lost in a flood of automatically generated penis pill spam.

        • Re: (Score:2, Insightful)

          by CAIMLAS ( 41445 )

          I basically have given up posting on Slashdot for this reason. If anything is even slightly within the realm of political "affinity" and I express an opinion, or simply ask questions contrary to the prevailing narrative, it goes badly.

          Nobody engages, like they used to here... it's just a -1 STFU TROLL. Went from 25 years of "Excellent" karma (and frequent +5 posts) to effectively being banned from posting for months (with Terrible karma) in the course of less than a week (after a long hiatus from being away from /.).

          This p

          • Out of curiosity, I went and read some of your more recent posts, and have good news - I don't think "political affinity", "express[ing] an opinion", or "asking questions contrary to prevailing narrative" is actually the problem. I think you just need to save your non-tech thoughts for the time you presumably spend yelling incoherently down by the bus stop, and you should be fine.
        • We have had a serious problem with people labeling people whose political speech they don't like as "trolls" to be silenced.

          Yes, I know. It happens to me constantly here on Slashdot when I complain about the abusive aspects of capitalism, a living minimum wage, labor rights... I get modded "troll" damned near every time, despite the fact that I'm sincere as all hell. Citation: my posting history, see google. But wait, does google read Slashdot at -1?

        • by Tablizer ( 95088 )

          > We have had a serious problem with people labeling people whose political speech they don't like as "trolls" to be silenced.

          That is indeed true, but it's also true that trolls have proven very capable of inciting violence, riots, and medical death via BS, exaggeration, and doxing.

          We have to somehow find decent compromises. For example, one compromise is for a social network to put warning markers/tags on questionable content rather than outright ban it, maybe even with links to alternative opinions/sou

      • by DarkOx ( 621550 )

        A distinction without a difference. Sure, it might not literally be drafting your own editorial piece and putting it on blast, but crowdsourcing content with the same sentiment from a thousand monkeys at a thousand keyboards, while stifling critical content, has the same result.

      • Allowing and disallowing content isn't "editorializing."

        The case is not about allowing or blocking content it's about promoting content. Slashdot should be well protected from that because here "promoting" comments via moderation is also left entirely up to us, the site users and not Slashdot itself. On YouTube where the algorithm is complex and secret and there are employees who get involved in choosing what (or what not) to promote as well that's a much harder argument to make.

        • What about hobbyist niche websites that don't have user moderation, or that lack the number of users Slashdot has? You've never run a website with a comment forum and watched it get inundated with spam/bots.

        • Can you be sued for moderating?

          • by sfcat ( 872532 )

            Can you be sued for moderating?

            No, user action is different from editorial/administrative action. This case is about what is promoted, not what is censored. The question is whether the site has liability for things that advertisers promote and/or that it promotes itself. This is different from having liability for anything that is hosted and/or posted, and it is different from having liability for censoring or not censoring specific content.

    • Or... hear me out... they can have both, as the law allows, by following the law that was specifically written to prevent internet sites from turning into shitshows.

    • by Sloppy ( 14984 )

      If they wish to not be common carriers and exercize editorial control, then they should be subject to liability, the same as any other publisher.

      Why would we want to make such a destructive change to the law? Doesn't your idea just cause problems, without being mitigated by helping anyone? Your proposed policy change looks like a lose/lose idea, where everyone comes out behind (except for litigious motherfuckers).

      If you allow commenting on your personal Wordpress site, you shouldn't lose everything you own

    • by grmoc ( 57943 )

      In the past, providers were liable both for doing moderation /and/ for not doing moderation. We'd not have user-generated-anything (including forums like this one) if that was still (or newly again) the case.

      ISPs, OTOH, should certainly be common carriers, since they get to prioritize one thing over another, gather your data and sell it to whomever...

    • by ras ( 84108 ) <russell+slashdot ... stuart...id...au> on Friday January 13, 2023 @04:12PM (#63206744) Homepage

      If they want protection from liability, they should seek to be common carriers.

      As a rule, common carriers allow two individuals to communicate. The rule is effectively "thou shalt not fuck with a private conversation." This case is about broadcasting: saying things in public. There have always been restrictions on what you can say in public. You can't broadcast a bomb threat in an airport, you can't lie in advertising, and you can't publish other people's private details you earlier promised to keep secret. You are confused if you think the blanket "common carrier" concept has a role here.

    • If they want protection from liability, they should seek to be common carriers. If they wish to not be common carriers and exercize editorial control, then they should be subject to liability, the same as any other publisher. Mutatis Mutandis ISPs.

      Common carrier status has absolutely nothing to do with this. Common carriers can and do ban/drop/expel users of their services and block users from using their service for objectionable activities. Just the most obvious case is if you use your phone to make death threats, you can be dropped in a hot minute.

      What being a common carrier means is simply that you are generally open to any member of the public as long as they pay the standard fees that anyone else does and haven't individually done something t

    • Re: (Score:3, Interesting)

      Comment removed based on user account deletion
    • but they want to have their cake and eat it too
    • Section 230 of the Communications Decency Act specifically defines online services as their own category to be regulated -- they are not common carriers, nor are they publishers.

      Section 230 states that online services are:
      1. not liable for what anyone else posts on their service, and
      2. encouraged to censor anything they find objectionable
      -a. whether or not they are successful in blocking what they intend to block, and
      -b. even if they block what would oth

  • Get it right (Score:2, Interesting)

    "resulting in both widespread censorship and a proliferation of offensive content"

    You can't have it both ways. This sounds like FUD.

    • Re:Get it right (Score:5, Informative)

      by Knightman ( 142928 ) on Friday January 13, 2023 @02:15PM (#63206434)

      You should look up the background of why Section 230 came to be. In short, a site that moderated could be held liable as a publisher for content it missed, while a site that did no moderation at all escaped liability as a mere distributor.

      See Cubby v. CompuServe, where CompuServe escaped liability for defamatory content precisely because it did not moderate, and Stratton Oakmont v. Prodigy Services, where Prodigy was held liable precisely because it did moderate. This created the untenable situation where moderating at all exposed a service to lawsuits, so the only safe course was not to moderate anything.

      The solution was Section 230, which simply says that each party is liable for its own speech and that a service is allowed to moderate content it doesn't think belongs on its service.

    • by ranton ( 36917 )

      "resulting in both widespread censorship and a proliferation of offensive content"

      You can't have it both ways. This sounds like FUD.

      You can have it both ways if different companies respond to new legislation differently. Some would choose to increase censorship, and others would remove it. Even the summary describes this scenario.

    • Re:Get it right (Score:4, Insightful)

      by thegarbz ( 1787294 ) on Friday January 13, 2023 @02:41PM (#63206518)

      Of course you can. The law was specifically written to allow both, because it realised (which I know is mind-blowing for anything produced by American politics) that the world isn't black and white and that neither full liability nor a total lack of control is in any way desirable.

    • The entire goal of repealing the kind of protections you find in Section 230 of the CDA in America is for the right wing to seize control of the internet, because they're backed by billionaires and they're the only ones who have the money to buy exceptions and circumvent the legal system.

      And the right wing has always been buddy-buddy with offensive content and racism. That's because the right wing is all about hierarchies, and racism and bigotry are how you create those hierarchies. It's about punching down an
      • by DarkOx ( 621550 )

        Utter fucking nonsense - big media (wildly left-leaning) is, well, big, with tons of money, and already controls a lot of publishing. Big tech is pretty well left-leaning too, and Twitter really isn't in the same conference as Google, Meta, and Microsoft.

        Really, 230 going away will be a DISASTER for the far right. Of course it will also be a DISASTER for the very far left like you, rsilvergun, as well as all the other blue-dyed-hair Marxist academics. It will be a very good thing for the moderate left - people who vote for

        • Big media (wildly left leaning)

          Oh no, child, no. They are perhaps socially left, but they are fiscally right. They still support corporatism, which is really just fascism. But money doesn't care about your genitals or your labels or whatever, so long as it can squeeze some money out of you. Money is libertarian, it doesn't care about anyone either way, it just doesn't want to be taxed.

          230 going away will be a DISASTER for the far-right. Of course it will also be a DISASTER for the very far left

          It's going to be a DISASTER for everyone who doesn't participate in the globalist, corporatist groupthink that you claim to despise.

        • Big media is moderate right-wing or moderate left. They're not extreme anything. You don't maximise sales by being extreme.

          If you consider them far left, it's only because you're far right.

    • Comment removed based on user account deletion
  • by magzteel ( 5013587 ) on Friday January 13, 2023 @01:53PM (#63206368)

    They have enjoyed the protection of a platform and the editorial freedom of a publisher.
    The only reason this is in court is they have abused their enviable position and power.

    • This is all well and good, but the result is that you will be stuck with the same tech companies you already hate.
      What startup is going to want to, or be able to afford, the legal fees necessary to run anything?
      • by DarkOx ( 621550 )

        Really? I doubt it. Meta won't last a week when every ambulance chaser in the nation is suddenly able to encourage everyone to file a couple of $100k suits over libelous content Facebook let their ex publish.

        OK, Meta the company might survive because of its massive cash pile - but it's 100% certain Facebook gets locked down, with little if any existing content shared "public".

    • The only reason they're in court is because this family wants to blame anyone they can think of for ISIS. They should have started with the US govt.
    • The public has enjoyed having it both ways for a while too. We have gotten instant access to the best information on the Internet, and had large tech companies do a decent job of moderating some of the worst content. They don't do a great job, because it would take too much effort, but they generally do enough to make the Internet a more useful place.

      • Re: (Score:2, Informative)

        by cayenne8 ( 626475 )
        In the past, I found USENET to be quite useful and a great place for info.

        Change 230 protection to be more like common carrier and prohibit algorithms pushing content....

        This way, people are free to look for what they want...post what they want and not really be at risk for having content they don't want shoved in their face.

        Really change it to make 230 protection be more common carrier than it is.

        Hell, even allow companies to have 2 sections...one without 230 where they are publishers and moderate....a

    • The only reason this is in court is they have abused their enviable position and power.

      No. The only reason this is in court is that, despite the average age of Congress being 58 years old, they are all a bunch of children.

  • So who exactly will be responsible for defining what 'harmful content' is?

    The only sane response to a change like that to Section 230 would be for social media platforms to simply restrict / exclude ANY content that has even a hint of being questionable so they aren't open to legal ramifications. What's wrong with the existing system of relying on users to flag content they believe to be harmful or appears to instigate violence or terrorism?
  • Actually good point (Score:4, Interesting)

    by peterww ( 6558522 ) on Friday January 13, 2023 @01:59PM (#63206384)

    > It also argues that there is no way to draw a meaningful distinction between recommendation algorithms and the related algorithms that allow search engines and numerous other crucial ranking systems to work online

    That's true. There is basically no difference between a search engine saying "we recommend these web pages to you based on your search terms", and YouTube saying "we recommend these videos to you based on your view history". Both involve the user looking for content and the search engine trying to divine what the user might want to see. You could say that YouTube Recommended is not "answering an explicit query from a user" or some such thing, but that's a pretty fine line, as what constitutes an explicit query will also get vague depending on the service.

    • by evanh ( 627108 )

      There are two rather large differences:
        - A search query has to be entered/requested.
        - There is no history/tracking behind a search query.

      Those two facts make a world of difference to what is supplied to users. Without the pushed content/links there just isn't the same engagement by users, and without the tracking the type of content is restricted to just the search terms.

  • Hot air (Score:5, Insightful)

    by Retired Chemist ( 5039029 ) on Friday January 13, 2023 @02:09PM (#63206412)
    This is a bunch of hot air. All that would happen is that they would have to remove or modify their recommendation systems. They should be liable if they actively support statements that advocate violence, terrorism, or are libelous. They should not be liable for what people post, but when they make recommendations, they are no longer a passive participant in the process and should be held accountable.
    • They absolutely should not be liable. Removing it is fine but liability makes the platform impossible to maintain. It means you'll have a massive crackdown on any discussion except the most obviously advertiser friendly. It means that the internet becomes cable TV owned and operated by a handful of billionaire content creators instead of the current system where anyone can speak their mind in the public square.

      This is wealthy elites like Rupert Murdoch and likely the Indian equivalent trying to take bac
    • by dgatwood ( 11270 )

      This is a bunch of hot air. All that would happen is that they would have to remove or modify their recommendation systems. They should be liable if they actively support statements that advocate violence, terrorism, or are libelous. They should not be liable for what people post, but when they make recommendations, they are no longer a passive participant in the process and should be held accountable.

      How do you define "recommendation", though? Facebook orders your friends' posts based on how likely they think it is that you would want to view them. Is that a recommendation? When you search for something and it chooses the order, is that a recommendation? Where's the dividing line between promotion and filtering? And so on.

  • by jddj ( 1085169 ) on Friday January 13, 2023 @02:10PM (#63206414) Journal

    That's bullshit. The internet worked fine before there was Google, and while it's showing its age, it'll continue to work fine if Google and Facebork disappear (but we can dream...)

    • by Bodhammer ( 559311 ) on Friday January 13, 2023 @02:18PM (#63206450)
      "Upend our political manipulations, rent-seeking, and grossly exorbitant profits."

      TFTFY
    • That's bullshit. The internet worked fine before there was Google

      The internet before Google is completely unlike the internet today. Comparing the two is like saying we were able to design buildings before computers were invented. While true it completely misses the point and is completely irrelevant in the modern context.

    • by flink ( 18449 )

      The internet before the CDA (1996) was just starting to get on people's RADAR. The pre-230 internet was only possible because most sites were static without much user-generated content. So if you found something libelous or illegal, chances are the person who owned the site put it there. For the rare exception like geocities, the internet was small enough that they could handle the occasional complaint on an ad hoc basis.

      The modern web would be utterly impossible without a liability shield.

      • Web != Internet.

    • That's bullshit. The internet worked fine before there was Google, and while it's showing its age, it'll continue to work fine if Google and Facebork disappear (we but can dream...)

      Your statement is bullshit. Search before Google was painful beyond belief. I changed half a dozen search engines before I switched to Google.

      • by PPH ( 736903 )

        And now Google is painful (and practically worthless). I want AltaVista back.

        To be fair, some advertisers would figure out how to game that site as well and load up the first pages with junk.

      • Google is a web search engine, as was AltaVista before it. Without an Internet, those search engines couldn't do anything. It was working. It is working. Apart from a few innovations like, what, DNSSEC, and a certain amount of fragmentation into private backbones, what's so different?

    • The internet worked fine before there was Google

      The internet worked fine before the Communications Decency Act, which made people responsible for the content they served on the internet, except for content posted to their service by someone else. The second part in bold there is what makes it possible for Slashdot to let comments hit the internet without being evaluated first, given the part in between the bold parts.

      You cannot have open fora on the internet with the CDA without Section 230 of the CDA.

      I'm all for elimination of the entire CDA,

  • Sure there is (Score:5, Insightful)

    by DarkOx ( 621550 ) on Friday January 13, 2023 @02:12PM (#63206424) Journal

    "argues that there is no way to draw a meaningful distinction between recommendation algorithms and the related algorithms that allow search engines and numerous other crucial ranking systems"

    That has to be about the most farcical assertion I have ever read. Search is pull, recommendation is push. It's that simple.

    At the algorithm level, the question is whether the search terms come from a human at a keyboard DIRECTLY, or are themselves machine-generated by some kind of machine correlation of related content.
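    The pull/push distinction (and Google's counter-claim that the two share the same machinery) can be sketched in a few lines of Python. Everything here is a made-up toy - the catalog, tag sets, and function names are illustrative, not anyone's actual algorithm: both paths share one ranking core and differ only in where the query terms come from.

```python
# Toy sketch (hypothetical names/data): one shared ranking core,
# two different sources of query terms.

CATALOG = {
    "intro to cooking": {"cooking", "intro"},
    "advanced baking": {"cooking", "baking"},
    "car repair basics": {"cars", "repair"},
}

def rank(terms):
    # Shared core: score every item by how many terms it matches.
    scored = sorted(
        ((len(terms & tags), title) for title, tags in CATALOG.items()),
        reverse=True,
    )
    return [title for score, title in scored if score > 0]

def search(query):
    # Pull: terms come directly from a human typing a query.
    return rank(set(query.split()))

def recommend(history):
    # Push: terms are machine-derived from tracked viewing history.
    derived = set()
    for title in history:
        derived |= CATALOG.get(title, set())
    return rank(derived)
```

    Whether that single differing line - a typed query versus tracked history - is a "meaningful distinction" is exactly what the two sides dispute.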

    • No kidding. That quote goes a long way toward explaining the complete shit search results I get of late from Google and others, where straightforward word searches are completely overridden to the point that the results are nonsensical. No apparent meaningful distinction between search and recommend means all your searches are recommendations, probably paid for or ranked with ROI in mind rather than what you wanted as first priority.

  • Anyone arguing for these changes is sucking off Apple, Google, Microsoft, and every other company that wants to lock you into a walled garden.
    This will do it for sure. No way to create a platform. No one can afford the legal fees unless they are rich.
  • Google said that scaling back liability protections could lead internet giants to block more potentially offensive content -- including controversial political speech -- while also leading smaller websites to drop their filters to avoid liability that can arise from efforts to screen content.

    That should be the outcome we all WANT! Rather than having a small oligarchy of big-tech leaders who essentially decide what we all get to see and are allowed to say, we would have real diversity - AND, importantly, some friction against publishing the aggressively stupid.

    Right now there is no point in posting anything except on the platforms the big-tech oligopoly controls. Even a fairly popular info-tainment program like, say, Louder with Crowder is pretty much forced onto YouTube because nobody will both w

  • by jbarr ( 2233 ) on Friday January 13, 2023 @02:48PM (#63206548) Homepage

    (In the U.S. at least) freedom of speech does not, nor was it intended to, protect you from being offended.

  • by jonniesmokes ( 323978 ) on Friday January 13, 2023 @02:56PM (#63206566)

    If all these companies did was some sort of public-service recommendation without a profit motive, I would say they might have some reason to get special treatment. But they purposely write algorithms that spread the most incendiary (and often false) messages and hate speech, all to drive advertising revenue. If a newspaper did that, it'd get sued. Why can Google and Facebook get away with it? Curating and recommending sites, whether it's search or a playlist, is not passive, and those recommendations are what allow lunatics to spread their messages of hate. These giant companies should be at least as liable for their content as a newspaper is.

  • The people who want this are, oddly enough, the same ones who posted things like telling the Proud Boys to stand back and stand by, plus the other stuff he posts on Truth Social, along with Alex Jones, QAnon, and other purveyors of far-right violence.

    I trust these geniuses know that this goes beyond killing the news they don't like: owning the libs will expose everyone to liability, even Fox News and OAN. I'm seeing them with the shocked-Pikachu face right now.

  • has been stretched to cover actions and circumstances never envisioned by lawmakers

    Like semi-automatic firearms under the 2nd Amendment. Guns were slow and clunky when the 2nd was written, and mass shootings by lone loons were unheard of.
