
Meta, Twitter, Microsoft and Others Urge Supreme Court Not To Allow Lawsuits Against Tech Algorithms

A wide range of businesses, internet users, academics and even human rights experts defended Big Tech's liability shield in a pivotal Supreme Court case about YouTube algorithms, with some arguing that excluding AI-driven recommendation engines from federal legal protections would cause sweeping changes to the open internet. From a report: The diverse group weighing in at the Court ranged from major tech companies such as Meta, Twitter and Microsoft to some of Big Tech's most vocal critics, including Yelp and the Electronic Frontier Foundation. Even Reddit and a collection of volunteer Reddit moderators got involved. In friend-of-the-court filings, the companies, organizations and individuals said the federal law whose scope the Court could potentially narrow in the case -- Section 230 of the Communications Decency Act -- is vital to the basic function of the web. Section 230 has been used to shield all websites, not just social media platforms, from lawsuits over third-party content.

The question at the heart of the case, Gonzalez v. Google, is whether Google can be sued for recommending pro-ISIS content to users through its YouTube algorithm; the company has argued that Section 230 precludes such litigation. But the plaintiffs in the case, the family members of a person killed in a 2015 ISIS attack in Paris, have argued that YouTube's recommendation algorithm can be held liable under a US antiterrorism law. In their filing, Reddit and the Reddit moderators argued that a ruling enabling litigation against tech-industry algorithms could lead to future lawsuits against even non-algorithmic forms of recommendation, and potentially targeted lawsuits against individual internet users.
  • by Petersko ( 564140 ) on Friday January 20, 2023 @11:50AM (#63225396)

    I find Google, YouTube, etc. do a pretty good job of not suggesting anything shady to me until I demonstrate a pretty clear interest in it. I've never seen nor watched anything terrorist related on YouTube because I've never gone looking for it. You can look for jobs for days, and Google won't send you anywhere dirty... until you cross the threshold and ask about handjobs - then, "Whoa, Nelly."

    It's tricky. But for my whole life, the classifieds in my local papers included thinly veiled prostitution ads. I don't recall lawsuits about that.

    • It's tricky. But for my whole life, the classifieds in my local papers included thinly veiled prostitution ads. I don't recall lawsuits about that.

      Well actually... [google.com]

      • I was not talking about Craigslist. I specifically said, "classifieds in my local papers". Perhaps I should have been more specific. After all, I'm over 50 now... newspapers might be an anachronism too obscure for some.

    • ...with some arguing that excluding AI-driven recommendation engines from federal legal protections would cause sweeping changes to the open internet.

      Good.

      Hey, just limit it to algorithms on social media that try to steer your attention, either just for more clicks or to push you toward thinking the way the CEO wants you to think.

      That should protect the "open internet"... most of it isn't using behaviorally directed algorithms.

  • by SuperKendall ( 25149 ) on Friday January 20, 2023 @11:52AM (#63225412)

    To me it's mystifying they think they should not be responsible - you (as a company or person) should be responsible for what you put out in the world.

    If underage children do something terrible, parents can be forced to take accountability, and a rogue AI from a giant corporation seems no different.

    It's not even like recommendation engines would go away (as much as that might be wished for by all); they would just have to be much more carefully monitored.

    I feel like tech companies are fighting to keep something around that not a lot of people like to begin with and is already doing humanity a disservice in aggregate.

    • by Petersko ( 564140 ) on Friday January 20, 2023 @12:08PM (#63225478)

      Regarding the parent's responsibility, a rather interesting nuance now is that for most of history, the child was raised in the home, and the parents were the gatekeepers between the world and the child. Now every child has the greatest bypass system ever built - the world is at their fingertips, and there is precious little a parent can do other than lock the child in a crate and slip food and water through the cracks until they're 18. Holding the parent responsible is less and less reasonable.

      I'm all for accountability, but it should come with control.

      • Regarding the parent's responsibility, a rather interesting nuance now is that for most of history, the child was raised in the home, and the parents were the gatekeepers between the world and the child. Now every child has the greatest bypass system ever built - the world is at their fingertips, and there is precious little a parent can do other than lock the child in a crate and slip food and water through the cracks until they're 18.

        Yet another reason to make access to social media adult only, much like

        • People, including children, are not mindless programmable automatons.

          Not yet, anyway. Your suggestion will go a long way towards making it happen.

          • People, including children, are not mindless programmable automatons.

            Not yet, anyway. Your suggestion will go a long way towards making it happen.

            How so?

            Not quite sure exactly what you are driving at.

            Can you give examples how making social media access adult only will turn children into "mindless programmable automatons"?

            Hell, it seems that something akin to "mindless automatons" is exactly what social media is turning them into....

            I think of that when I see kids mindlessly staring into a smart

            • Can you give examples how making social media access adult only will turn children into "mindless programmable automatons"?

              If you teach them that information is dangerous, the risk is that they'll believe you.

          • by lsllll ( 830002 )
            I wonder if all children who grew up before the age of social media became mindless programmable automatons.
    • To me it's mystifying they think they should not be responsible

      So then you agree, web sites should have complete control over what is posted/listed/distributed on their site, and can remove whatever content they want and ban whomever they want.
      • by lsllll ( 830002 )
        Sure, as long as they're responsible and liable for what they leave behind.
    • Sue YouTube's AI because you don't like what it recommends? Absolutely. You should ask for your money back, like right now.
      • by lsllll ( 830002 )
        First of all, unless you have an ad blocker, YouTube is not free. Your time is money, and being forced to watch ads is how the site monetizes you. But even if there were no ads and it was completely free, they could still be liable. Are you not liable if you give away free fried chicken on the corner of the street and someone gets food poisoning?
    • by nasch ( 598556 )

      The problem is what will happen. Firstly, if "algorithm" is not given some special legal definition, it will include any manner in which a platform/web site decides to show content. Displaying content in chronological order is an algorithm. Displaying content determined to be relevant to a search term is an algorithm. There is no way to show anything at all other than by employing an algorithm to do so.

      Secondly, if sites are held liable for anything they decide to display to users, there are two likely
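      To make the parent's point concrete, here is a minimal, hypothetical Python sketch (data and names invented for illustration): even the "neutral" display orders people propose are algorithms in the plain sense of the word.

        from datetime import datetime

        posts = [
            {"title": "Gardening tips", "posted": datetime(2023, 1, 19, 9, 0)},
            {"title": "Paris attack explainer", "posted": datetime(2023, 1, 20, 8, 30)},
        ]

        # "Just show things in chronological order" is itself an algorithm:
        chronological = sorted(posts, key=lambda p: p["posted"])

        # So is "show whatever matches the search term":
        def search(posts, term):
            return [p for p in posts if term.lower() in p["title"].lower()]

        # Any rule a server uses to pick and order content is an algorithm,
        # which is why an unqualified carve-out for "algorithms" sweeps in everything.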

      • If this causes massive sites to implode and the internet to go back to a million tiny little forums and communities hyper-specialized and completely anonymous, it wouldn't be such a bad thing.
        The internet as we know it right now exists thanks to algorithmic recommendations, and these big companies made their fortunes this way and are scared sh*tless of losing what built them, but let's not play their own game. The internet was a pretty good place before facebook, twitter and tiktok, and I would venture that

        • by nasch ( 598556 )

          If this causes massive sites to implode and the internet to go back to a million tiny little forums and communities hyper-specialized and completely anonymous, it wouldn't be such a bad thing.

          It won't. If anything, those tiny sites would be the first to shut down, because there's no way they could afford the legal liability. The only ones with even a chance of survival with user generated content are the billion dollar companies.

  • Just like a child, sometimes it slips and does stupid things. Well, just like real parents, Google is responsible for what their stupid child does.

    If they can prove their "child" is mature enough to behave like a human adult, they might have a point. Until such time, the responsibility for what an AI says or does rests squarely on the shoulder of whoever runs the AI.

    • Given your wild hypothetical, we would expect the law to impose consequences directly on the misbehaving AI.

      • If (or rather when) AIs become so advanced as to match humans in complexity, creativity and ethical decision-making, then yes, they should be expected to face consequences for their actions, just like humans. I assume such an advanced AI would be receptive to the threat of being turned off.

        My point was that AIs are still dumb as bricks by human standards, and so Google, Microsoft and co. should be held responsible because they run them - and by proxy, inflict whatever fuckery the AI decides to do unto other

        • by wed128 ( 722152 )

          > I assume such an advanced AI would be receptive to the threat of being turned off.

            such an advanced AI will mimic something that is receptive to...

          it's very important that we all remember that these "AI" networks are just a big list of numbers to multiply. They have no consciousness, and no thought. Without some major breakthrough (no such breakthrough is on the horizon...), that will be true for the foreseeable future.

          When we all forget that, we're in big big trouble

    • Just like a child, sometimes it slips and does stupid things. Well, just like real parents, Google is responsible for what their stupid child does.

      The modern problem is: so many people claim to be offended or triggered by, well, anything. Lawsuits would be endless, because you cannot please everyone.

      Better would be to simply remind people that they do not need to visit web properties that they find objectionable. Kids are only slightly more difficult - we have two, and somehow manage to raise them despite the internet and smartphones.

    • by dstwins ( 167742 )
      Not really.. their algorithm isn't really AI.. it's a matter of content weighting based on tags and paid rates..

      It goes like this (with a scale from 0 (no ranking) to 100 (bombard them with recommendations)):

      Everything is weighted at 0 (except for some content/tags that are paid to raise their rankings, which makes them 5).
      Then the tags on the content you watch (describing the types of content it is) are included.. if you see one video, it may raise the ranking of that tag by 0.1 for each instance you watch and 0
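      If that description is roughly right, a toy version might look like the hypothetical Python sketch below (the paid boost of 5 and the 0.1-per-view bump come from the comment above; the cap at 100 and all names are invented for illustration):

        # Hypothetical tag-weighting recommender, per the description above.
        # Weights run from 0 (no ranking) to 100 (bombard with recommendations).
        PAID_BOOST = 5    # assumed starting weight for paid tags
        WATCH_BUMP = 0.1  # assumed bump per watched video carrying a tag

        def update_weights(weights, watched_tags, paid_tags=()):
            for tag in paid_tags:
                weights[tag] = max(weights.get(tag, 0), PAID_BOOST)
            for tag in watched_tags:
                weights[tag] = min(100, weights.get(tag, 0) + WATCH_BUMP)
            return weights

        def recommend(videos, weights, n=5):
            # Rank videos by the summed weight of their tags, highest first.
            score = lambda v: sum(weights.get(tag, 0) for tag in v["tags"])
            return sorted(videos, key=score, reverse=True)[:n]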
      • > Not really.. their algorithm isn't really AI.. it's a matter of content weighting based on tags and paid rates..

        That is, indeed, a lightweight AI algorithm.

        > It's not AI, it's simple math..

        All computation is math. It's as complex as we wish it to be, including the integration of "random" algorithms and even quantum computing. Being mathematical does not divorce it from the responsibilities or limitations tied to using math and data for choices.

  • ban Tech Algorithms from labor issues!

  • by OrangeTide ( 124937 ) on Friday January 20, 2023 @12:05PM (#63225466) Homepage Journal

    And even patent them.

    But we don't want to be held responsible for these mindless overlords we have running our business.

  • Robber barons (Score:4, Insightful)

    by ozzymodus12 ( 8111534 ) on Friday January 20, 2023 @12:06PM (#63225470)
    The tech media companies are the new robber barons. The sheer amount of power a tiny group of people have over social media and technology is getting out of hand. Why should they now be above the law? If Google decides to screw you, you are screwed. It's that simple. They are a monolith. You are nothing unless you have a billion dollars and a name. These protections were for fledgling internet companies. When you realize that these companies were directing "narratives" written up by special interests, the government and their own agendas, without any transparency or accountability, it's not looking good for us peasants. Just look at the Twitter emails in which drug corporations wanted certain people silenced. Accountability won't happen if you have permanent immunity under the law. You still won't win in the end, but I'd still like to have some hope.
  • Abolish 230 (Score:3, Interesting)

    by Anonymous Coward on Friday January 20, 2023 @12:15PM (#63225502)

    I hope 230 gets abolished. As a non American, I would love nothing more than to see the look on everyone's face after they've stabbed their nose repeatedly in a fit of rage against their own face.

    Everyone against Section 230 thinks it is somehow an affront to free speech because their own precious ideas aren't able to compete in the free market. But nothing in the US hurts free speech more than civil liability. And so exposing American companies to that liability will only make them curtail user-generated content on their sites even more to avoid that liability. But instead of these people going off to form their own company, with blackjack and hookers, they too will face the wrath of lawyers and SLAPP suits and so on. And of course, the irony in all of this is that the only companies that will be able to afford the burdens are the ones the anti-230 crowd were trying to hurt in the first place, concentrating the power of Silicon Valley even more. And now that all public discourse on media platforms has to be clean and pure, the future of the US will start to look like some combination of Idiocracy and Demolition Man.

    Anyways, how about that flat 30% VAT instead of marginal income tax? I'm sure that will put more money into the average American's hands. LOL

    • But nothing in the US hurts free speech more than civil liability. And so exposing American companies to that liability will only make them curtail user-generated content on their sites even more to avoid that liability

      Bingo. And a big factor is the uncertainty regarding those civil suits, both in applicability and in the penalty. Here in Europe, civil liability is not really a thing (mostly limited to actual damages), but the EU has tried to recreate that element of uncertainty in criminal laws around illegal content. They did so by proposing rules that are rather vague in applicability, but carry a very hefty penalty, thus achieving the same thing: scaring tech firms into self-censoring their content. Though I don't

    • I hope 230 gets abolished. As a non American...

      Everyone against Section 230 thinks it is somehow an affront to free speech because their own precious ideas aren't able to compete in the free market. But nothing in the US hurts free speech more than civil liability. And so exposing American companies to that liability will only make them curtail user-generated content on their sites even more to avoid that liability.

      We don't need to do away with 230...we need to modify it to fit the times.

      It had its p

      • by lsllll ( 830002 )
        This is what I've been saying all along. The way Section 230 is written allows companies to have their cake and eat it, too. They need to choose one or the other. But whatever law Congress passes to modify 230 also needs to take size into consideration. Large corporations with substantial userbases, like Twitter and Meta, need to be treated the way common carriers are, yet a small forum with a few thousand users can still be moderated and not held liable, because it's very easy for someone else
    • by nasch ( 598556 )

      But instead of these people going off to form their own company, with blackjack and hookers, they too will face the wrath of lawyers and SLAPP suits and so on. And of course, the irony in all of this is that the only companies that will be able to afford the burdens are the ones the anti-230 crowd were trying to hurt in the first place, concentrating the power of Silicon Valley even more. And now that all public discourse on media platforms has to be clean and pure, the future of the US will start to look like some combination of Idiocracy and Demolition Man.

      This is the outcome you're hoping for?!

  • "In friend-of-the-court filings, the companies, organizations and individuals said the federal law whose scope the Court could potentially narrow in the case -- Section 230 of the Communications Decency Act -- is vital to the basic function of the web".

    The "basic function of the Web" depends only on suitable hardware and software. Government and its laws can do nothing whatsoever to assist the basic function of the Web. They can only obstruct and prevent it.

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Friday January 20, 2023 @12:22PM (#63225514)
    Comment removed based on user account deletion
    • by nasch ( 598556 )

      They want to have it both ways, protection as a common carrier AND as an editor.

      None of these companies are asking for common carrier status.

  • Litigation against the act of recommending the videos seems an odd route to me.
    - If the videos on YouTube were not permissible* they should be, or should have been, taken down promptly.
    - However, if the videos were permissible, how can there be any case to answer in recommending them?

    * I'm using not permissible to cover the range of "in breach of YouTube's conditions" and/or illegal**
    ** Yes, that raises the question of "illegal where, exactly?"
    • by irving47 ( 73147 )

      I think the real story is they want precedents set on one side so they can try to use them on the flip side of the coin. A lot of content creators that don't fall in with the Google/Youtube "group think/party line" are sick of seeing their videos shit-holed/shadow-banned/sent to the bottom of the pit because they might reflect "conservative" viewpoints. I know there are terms of service and probably arbitration clauses, but I'm just waiting for them to ban the wrong channel (again) (especially a particular

      • I think the real story is they want precedents set on one side so they can try to use them on the flip side of the coin.

        Intriguing; I hadn't considered that this may be all about setting a precedent to flip - "If you don't recommend my channel I'll sue you!".

  • by The Evil Atheist ( 2484676 ) on Friday January 20, 2023 @12:30PM (#63225540)
    "Recommendations" probably are responsible for the degradation of the whole internet experience. It induces and reinforces echochambers, and boosts only the loudest, richest voices.

    Perhaps fauxtistic nerds shouldn't be designing and implementing recommendation algorithms, since they come up with stupid shitty criteria like "the best thing to watch/read/buy is what you've watched/read/bought before."

    On that note, why do shopping websites have a "sort by popularity" anyway? Give me the Asian-sort as default: "sort lowest to highest per unit price". I don't want to buy things just because many other people bought it.
    • by lsllll ( 830002 )

      On that note, why do shopping websites have a "sort by popularity" anyway? Give me the Asian-sort as default: "sort lowest to highest per unit price". I don't want to buy things just because many other people bought it.

      I didn't know there was a name for that kind of sort. You have no idea how much of my life I've wasted looking at all the options and doing divisions in my head.

      • In Australia, the Woolworth's website has a sort called "sort by unit price", which I affectionately call Asian-sort because my parents grew up poor, like many in their generation, and would look only for the cheapest things.
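        For what it's worth, that sort is trivial to implement; a minimal hypothetical Python sketch (data invented):

          # Toy "sort by unit price" (lowest cost per unit first).
          items = [
              {"name": "Rice 1 kg", "price": 2.50, "units": 1.0},
              {"name": "Rice 5 kg", "price": 10.00, "units": 5.0},
          ]

          def unit_price(item):
              return item["price"] / item["units"]

          for item in sorted(items, key=unit_price):
              print(f"{item['name']}: ${unit_price(item):.2f} per unit")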
  • If you have a piece of software, obviously you'd have to sue the person who is running the software. It's not like you can ban a tool; if you could, we could ban all malware and be done with it. Seems to me like this is a non-issue.
    • It's not like you can ban a tool

      Why not?

      if you could, we could ban all malware and be done with it.

      Malware is banned. That's not the point.

    • by Bumbul ( 7920730 )
      My Phalanx CIWS setup in my backyard accidentally took out half of the neighborhood. Luckily its targeting system is controlled by a tech algorithm...
  • by dark.nebulae ( 3950923 ) on Friday January 20, 2023 @12:46PM (#63225578)

    The YouTube algorithm, in pushing pro-ISIS content to a susceptible person, should absolutely be held accountable. In pushing that content, it contributed to the radicalization of the participants.

    It's that same algorithm that pushes anti-vax content to anti-vaxxers, conspiracy theory posts to conspiracy theorists, and generally left-leaning content to the left and right-leaning content to those on the right.

    The big companies are right, too, in that the general removal of Section 230 would absolutely cripple and devastate the internet. For example, Slashdot would likely stop taking comments because of folks pushing (often unrelated) off-color messages. They'd have to, because they couldn't afford to fight all of the legal cases that would be thrown against them. If I posted that Sandy Hook was staged, then the folks who took down Alex Jones could take down Slashdot too. So the removal of the general protections of 230 would be a disaster.

    However, I don't believe the algorithms for suggesting content should carry the same exemption. The large providers should not be accountable for what someone posts, but once they start promoting or pushing that content, they absolutely carry the responsibility for that. We could exclude the algorithms from the exemption and this, too, would likely change the internet, but in my opinion it would only change for the better.

    The removal of the exemption on the algorithms could actually do a lot to bust up the various social media bubbles we end up in because the algorithms keep recommending things we already believe in, removing the self-reinforcing echo chambers...

    • by lsllll ( 830002 )

      The YouTube algorithm, in pushing pro-ISIS content to a susceptible person, should absolutely be held accountable. In pushing that content, it contributed to the radicalization of the participants.

      What makes a person susceptible? And how would Google know it? He could have been a columnist, or writing a paper on ISIS, or could simply sit back and eat popcorn while watching the videos and every once in a while chuckle and say "These dimwits."

      • You can't stop someone who is intent on finding videos on ISIS. They can just search for them, over and over again. So a columnist or a person wanting to chuckle at them can still search to find what they need.

        But if all youtube recommends is pro-ISIS stuff every time you come back, and they feed it to you more and more, I would argue that anyone could be susceptible because it can amount to brainwashing...

        If you were captured by ISIS and they forced you to watch ISIS materials day after day, don't y

    • by nasch ( 598556 )

      However, I don't believe the algorithms for suggesting content should carry the same exemption.

      This is a false distinction. There is no way for YouTube (for example) to show its users anything other than via an algorithm. So holding algorithmic presentation of content liable is the same as holding YouTube liable for everything that appears on the site. Holding Google accountable for bad effects from its algorithms sounds desirable, but I have not yet seen a proposal for how to do so (even at the napkin sketch level of detail) that would not kneecap the internet as we know it. Also, in the US all o

      • No other way? Like not even "show 5 videos with most requests in last 24 hours", etc? Something that is simple yet avoids trying to show you the same videos about the same content?

        Come on, let's be honest. The algorithms are meant to keep you on the site, keep you from going to another video service. Meta, Tiktok, ..., they all do this, they want to keep your eyeballs on their site by any means necessary, so they push content they think will do this. Ask for one ISIS video because you want to see what they

        • by nasch ( 598556 )

          No other way? Like not even "show 5 videos with most requests in last 24 hours", etc?

          That's an algorithm. Other approaches that would also be algorithms:

          - chronological order
          - random selection
          - things your friends have seen
          - any other possible method because any way a computer makes a decision about something is an algorithm

  • by account_deleted ( 4530225 ) on Friday January 20, 2023 @12:47PM (#63225580)
    Comment removed based on user account deletion
    • I think the problem here is what the hell is an algorithm.

      In this case, I think we can narrow it down to code (or even manual manipulation) that pushes or promotes content to a user based on data gathered about them, or for any reason other than content the user was directly searching for.
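      One way to operationalize that definition, purely as a hypothetical Python sketch (the names and the rule are invented, not anything from the case or the filings): only the second function would lose protection.

        # Hypothetical: separating user-requested results from pushed content.

        def search(videos, query):
            # User-driven: returns only what directly matches the user's query.
            return [v for v in videos if query.lower() in v["title"].lower()]

        def push(videos, user_profile):
            # Platform-driven: picks content from data gathered about the user,
            # not from anything the user asked for.
            score = lambda v: sum(user_profile.get(tag, 0) for tag in v["tags"])
            return sorted(videos, key=score, reverse=True)[:5]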

    • by nasch ( 598556 )

      And so it can be reasonably argued that at some point, the user is NOT the person deciding what's being put on their screen. Instead the computer programmers who developed the software have made those decisions

      The user has never been and never can be the one who decides what the server sends to the client. The server may use user input as part of the determination of how to respond, but the response is always up to the server.

  • This is nothing but the powers that be butthurt that content criticizing them is uploaded.
    If they are mad about the recommendation engines, regulate it.
    But you can't. You gave companies free speech. Sucks to be you guys. Fucking deal with it like I have to deal with you.
  • Big Tech hides their $ABUSE of $PII, $PERSONAL_DATA, $LIFE_TRAUMA_EVENTS, $RUMORS_BEHIND_YOUR_BACK, $REALTIME_SIGINT_GEOINT_XXXINT to perform $PSYCHOLOGICAL_OPERATIONS $EXPERIMENTS that are often $MALICIOUS with the intent to refine a $WEAPON that harms and $KILLS. Please let me know if there are any questions, comments, or concerns.
  • by sjames ( 1099 ) on Friday January 20, 2023 @01:28PM (#63225700) Homepage Journal

    They need to go after the various copyright strike and community standards algos instead. Not necessarily for the algo itself, but for the near impossibility of getting an actual human to tell you what was wrong and to really review it (rather than apply a rubber stamp) when it seems to have gone wrong.

    No more copyright strikes triggered by wild bird songs. I saw one from Louis Rossmann where a short of his cat making an annoyed meow was dinged for violating community standards. I don't speak cat, and I didn't know YouTube's algorithm did either. Naturally there is no human available to explain why that was a violation.

    More generally, if you delegate to an algorithm, its actions are YOUR actions and you must accept responsibility for them.

  • Oh, no, sure it's bad, but DON'T WORRY, we'll self-regulate.

    No one wants the law telling them what to do; it's best to just sweep the bad stuff under the rug. What do you think "professional" bodies do? Law Societies, College of Physicians, etc.

    If you do bad things and get caught you have to answer to society via the legal system. Why is it not a problem when it's done at global scale by a corporation?
  • by cyberfunkr ( 591238 ) on Friday January 20, 2023 @02:28PM (#63225880)

    Say I run a simple forum about gardening. Everything is on the up and up and we all get along talking about plants.

    Joe Simpleton creates an account because they, too, like gardening. At the bottom of the main page, I show lists of recent new members, who is currently on, and popular posters. Joe clicks on the link of a popular poster, Elphaba Greenthumb. Turns out Elphaba's profile and some of their posts talk about using herbs for treating ailments, folklore about wolfsbane, and ingredients to brew a tea to help someone sleep.

    My "recommendation engine" is promoting *witchcraft*!

    If 230 is repealed, I can now be sued because I put up a link to content that the viewer did not deem appropriate and that may even be illegal in the region where they live. As the owner, it was my responsibility to filter all content to make sure that only people who want to see information on "witchcraft" are exposed to it.

    I would be forced to geofence anyone logging in, know the laws of every region of the US, make sure every profile is tagged correctly (because I would also be responsible if the member incorrectly tagged themselves), and build up filters so that I don't let any witches show up in those areas. And that also becomes retroactive, so I would not only have to deal with new members coming in, but go back through all my existing members and "correct" their content, plus any changes anyone makes.

    Even after all that, I could still face frivolous lawsuits. While the case will likely be thrown out, as the owner, I still have to take the time and money to defend my website/company.

    The overhead required to manually verify every account, every biography, every photo, every forum post, every everything for validity, content, and filters to apply, is financially crushing to any business that allows user entered information to be shown. The manpower to monitor it all, the lawyers that need to be on retainer, the constant vigilance of watching for laws to change, the backup savings required to handle any suits that get through and require my company to take action... It's not worth it.

    People are judging this based on Google, Facebook, and YouTube who have the resources. But for every one of those evil empires, there are hundreds of thousands of "little guys" that would be forced to live by the same rules and be crushed out of existence, leaving you with JUST the empires.

    • by nasch ( 598556 )

      Thanks for this, great comment. In addition, 230 protects users too. It protects you from liability for retweeting, forwarding an email, etc. because you aren't liable for something someone else did.

    • Or you could just burn them.
  • by NotEmmanuelGoldstein ( 6423622 ) on Friday January 20, 2023 @06:41PM (#63226612)

    ... third-party content.

    A third party isn't telling people to watch more QAnon or more ISIS propaganda; YouTube's intellectual property is. They're simultaneously claiming it's theirs but they're not responsible. This is like an alcoholic claiming it's the wine bottle's fault he was driving drunk.

    ... precludes such litigation.

    If YouTube showed kiddie porn, how many politicians would accept the claim "it's not our fault, we can't do anything"? The fact that YouTube doesn't show kiddie porn means it is possible to censor images. It was only a few months ago that Google destroyed a digital identity because the subscriber sent photos of a child to a doctor. That reveals that Google has the power to detect things it doesn't like. It suggests this lack of action is a choice and Section 230 is not the issue.
