Google Says Supreme Court Ruling Could Potentially Upend the Internet (wsj.com)
Speaking of Google, the company says in a court filing that a case before the Supreme Court challenging the liability shield protecting websites such as YouTube and Facebook could "upend the internet," resulting in both widespread censorship and a proliferation of offensive content. From a report: In a new brief filed with the high court, Google said that scaling back liability protections could lead internet giants to block more potentially offensive content -- including controversial political speech -- while also leading smaller websites to drop their filters to avoid liability that can arise from efforts to screen content. [...] The case was brought by the family of Nohemi Gonzalez, who was killed in the 2015 Islamic State terrorist attack in Paris. The plaintiffs claim that YouTube, a unit of Google, aided ISIS by recommending the terrorist group's videos to users. The Gonzalez family contends that the liability shield -- enacted by Congress as Section 230 of the Communications Decency Act of 1996 -- has been stretched to cover actions and circumstances never envisioned by lawmakers. The plaintiffs say certain actions by platforms, such as recommending harmful content, shouldn't be protected.
Section 230 generally protects internet platforms such as YouTube, Meta's Facebook and Yelp from being sued for harmful content posted by third parties on their sites. It also gives them broad ability to police their sites without incurring liability. The Supreme Court agreed last year to hear the lawsuit, in which the plaintiffs have contended Section 230 shouldn't protect platforms when they recommend harmful content, such as terrorist videos, even if the shield law protects the platforms in publishing the harmful content. Google contends that Section 230 protects it from any liability for content posted by users on its site. It also argues that there is no way to draw a meaningful distinction between recommendation algorithms and the related algorithms that allow search engines and numerous other crucial ranking systems to work online, and says Section 230 should protect them all.
Protection from Liability (Score:4, Insightful)
If they want protection from liability, they should seek to be common carriers. If they wish not to be common carriers and to exercise editorial control, then they should be subject to liability, the same as any other publisher. Mutatis mutandis, ISPs.
Re: (Score:3, Funny)
Who is a better source on this than Google? Only Google knows what the impact of this sort of thing would be, and Google looks out for the best interests of The Internet and all of us.
Re: Protection from Liability (Score:2, Offtopic)
"At the apex of the pyramid comes Big Brother. Big Brother is infallible and all-powerful. Every success, every achievement, every victory, every scientific discovery, all knowledge, all wisdom, all happiness, all virtue, are held to issue directly from his leadership and inspiration."
Re: (Score:2)
The problem is you're using that quote to try to justify forcing Google to censor along government guidelines
Re: (Score:3, Interesting)
The problem is you're using that quote to try to justify forcing Google to censor along government guidelines
Don't worry, Twitter is doing quite well censoring what its owner [imgur.com] doesn't like [imgur.com] without the government.
Re: (Score:2)
The problem is you're using that quote to try to justify forcing Google to censor along government guidelines
Don't worry, Twitter is doing quite well censoring what its owner doesn't like without the government.
I'm not worried about that, because the two things are not remotely the same.
Re: (Score:3, Interesting)
This way people would just search for what they want and not have things they don't want to see pushed at them.
This takes away much of the editorialization factor.
Treat it like USENET....sure, there's content you don't want to see and by searching, etc....you can generally avoid it.
I lean towards what the OP of this thread mentioned: let them have full 230 protection IF they act more like a common carrier.
Re: (Score:2, Troll)
You know....how about 230 covers it IF they turn off and stop using algorithms to push content to people.
Showing people new content without showing them things they have stated they don't want to see is done with an algorithm. Congratulations, you just proved you know how nothing works.
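The parent's point can be made concrete with a toy sketch (hypothetical names and data, not any platform's actual code): even a bare "newest first, minus what the user opted out of" feed is itself an algorithm that filters and orders content.

```python
# Toy sketch: even this minimal "no recommendations" feed is an algorithm.
# It filters out topics the user said they don't want, then orders by recency.
# All names here are hypothetical, for illustration only.

def plain_feed(posts, blocked_topics):
    """Show newest first, minus topics the user explicitly opted out of."""
    allowed = [p for p in posts if p["topic"] not in blocked_topics]
    return sorted(allowed, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"id": 1, "topic": "cats", "timestamp": 100},
    {"id": 2, "topic": "politics", "timestamp": 200},
    {"id": 3, "topic": "cats", "timestamp": 300},
]

feed = plain_feed(posts, blocked_topics={"politics"})
print([p["id"] for p in feed])  # newest-first, with "politics" filtered out
```

Any rule that decides which content appears and in what order, even one this trivial, is "an algorithm pushing content" under a broad reading.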
Re: (Score:2)
If they don't want the recommendations, they can just not go to the site.
Re: (Score:2)
Those two things are very different, but they both involve algorithms.
Words matter.
Re: (Score:2)
We know about a lot of objectionable use of algorithms and haven't done anything about it in general, but I still think stripping Sec. 230 protections is way off-base. Let's just treat it as its own problem.
Re: (Score:3)
You have no inherent *right* to use Twitter. It is Musk's site and he can do what the fuck he wants. If you don't like it, make your own, competing site and "let the free market decide"
Re: (Score:2)
The problem is you're using that quote to try to justify forcing Google to censor along government guidelines
Don't worry, Twitter is doing quite well censoring what its owner [imgur.com] doesn't like [imgur.com] without the government.
For those who haven't seen it, here are [imgur.com] the two videos [imgur.com] Ken Klippenstein posted showing a self-driving Tesla pulling over in a tunnel and coming to a stop without any reason to do so. This occurred right after Musk announced the feature, and the resulting pileup damaged eight cars and injured nine people.
Re: (Score:2)
I had no trouble searching for him. Maybe the person posting that made a typo? Or maybe a partial outage? It is post-Elon Twitter, after all.
He's now claiming the block has been removed (Score:2)
I just tried that right now and @kenklippenstein is the first result, blue checkmark and everything. Seems like he's claiming it has been removed [twitter.com] but right now I have no evidence it even happened except a few clips from imgur.
Re: (Score:2)
“Power is not a means; it is an end. One does not establish a dictatorship in order to safeguard a revolution; one makes the revolution in order to establish the dictatorship. The object of persecution is persecution. The object of torture is torture. The object of power is power.”
George Orwell, 1984
Re: (Score:2)
> Google looks out for the best interests of The Internet and all of us.
I think my sarcasm detector just got triggered
Re: (Score:3, Insightful)
Allowing and disallowing content isn't "editorializing." What you are talking about would eliminate sites like slashdot, because they would either become full of trolls and spam, or getting even one comment up would be a slow process since every comment would have to be checked by staff -- slashdot can't afford that.
Re: (Score:2)
Allowing and disallowing content isn't "editorializing."
Yes it is. Changing any form of content in any manner is the very definition of being an editor.
Re:Protection from Liability (Score:4, Insightful)
When a website operator deletes content for violating their TOS, that's moderating.
When a website operator screens content from users before allowing select material to be posted, that's moderating.
When a site like
Re: (Score:2)
I'm going to say I agree with this, I do. However... how does one avoid crossing over from assurances and hand-waving to full head-in-the-sand denial?
Is there some point where we could truthfully acknowledge the signs of a slippery slope? Where is the line drawn?
When a website operator deletes content because an interested 3rd party says so, that's ... ? ... ?
When a website operator screens content pending judgement by the Ministry of Truth, that's
When a website operator promotes controversial/inconvenien
Re:Protection from Liability (Score:5, Insightful)
One person's "troll" is another person's fellow political traveller.
We have had a serious problem with people labeling people whose political speech they don't like as "trolls" to be silenced.
Re: (Score:2)
Don't worry, without moderation you won't be able to see any real posts OR troll posts. They'll all just get lost in a flood of automatically generated penis pill spam.
Re: (Score:2, Insightful)
I basically have given up posting on slashdot for this reason. If anything is even slightly within the realm of political "affinity" and I express an opinion, or simply ask questions contrary to the prevailing narrative, nobody engages like they used to here... it's just a -1 STFU TROLL. I went from 25 years of "Excellent" karma (and frequent +5 posts) to effectively being banned from posting for months (with Terrible karma) in the course of less than a week (after a long hiatus away from /.).
This p
Re: (Score:2)
We have had a serious problem with people labeling people whose political speech they don't like as "trolls" to be silenced.
Yes, I know. It happens to me constantly here on Slashdot when I complain about the abusive aspects of capitalism, a living minimum wage, labor rights... I get modded "troll" damned near every time, despite the fact that I'm sincere as all hell. Citation: my posting history, see google. But wait, does google read Slashdot at -1?
Re: (Score:2)
> We have had a serious problem with people labeling people whose political speech they don't like as "trolls" to be silenced.
That is indeed true, but it's also true that trolls have proven very capable of inciting violence, riots, and deaths via medical BS, exaggeration, and doxing.
We have to somehow find decent compromises. For example, one compromise is for a social network to put warning markers/tags on questionable content rather than outright ban it, maybe even with links to alternative opinions/sou
Re: (Score:2)
A distinction without a difference. Sure, it might not literally be drafting your own editorial piece and putting it on blast, but crowdsourcing content with the same sentiment from 1,000 monkeys at 1,000 keyboards, while stifling critical content, has the same result.
Slashdot should be Fine (Score:2)
Allowing and disallowing content isn't "editorializing."
The case is not about allowing or blocking content; it's about promoting content. Slashdot should be well protected there, because here "promoting" comments via moderation is left entirely up to us, the site users, and not Slashdot itself. On YouTube, where the algorithm is complex and secret, and where employees also get involved in choosing what (or what not) to promote, that's a much harder argument to make.
Re: (Score:2)
What about hobbyist niche websites that don't have user moderation, or lack the number of users slashdot has? You've never run a website with a comment forum and seen it get inundated with spam/bots.
Re: (Score:2)
Can you be sued for moderating?
Re: (Score:2)
Can you be sued for moderating?
No, user action is different from editorial/administrative action. This case is about what is promoted, not what is censored. The question is: does the site have liability for things that advertisers promote and/or it itself promotes? This is different from having liability for anything that is hosted and/or posted, and it is different from having liability for censoring or not censoring specific content.
Re: (Score:2)
Or... hear me out... they can have both, as the law allows, and follow that law, which was specifically written to prevent internet sites from turning into shitshows.
Re: (Score:2)
Why would we want to make such a destructive change to the law? Doesn't your idea just cause problems without helping anyone? Your proposed policy change looks like a lose/lose idea, where everyone comes out behind (except for litigious motherfuckers).
If you allow commenting on your personal WordPress site, you shouldn't lose everything you own.
Re: (Score:2)
In the past, providers were liable both for doing moderation /and/ for not doing moderation. We'd not have user-generated-anything (including forums like this one) if that was still (or newly again) the case.
ISPs, OTOH, should certainly be common carriers, since they get to prioritize one thing over another, gather your data and sell it to whomever...
Re:Protection from Liability (Score:4, Informative)
As a rule, common carriers allow two individuals to communicate. The rule is effectively "thou shalt not fuck with a private conversation." This is about broadcasting; saying things in public. There have always been restrictions on what you can say in public. You can't broadcast a bomb threat in an airport, you can't lie in advertising, you can't publish other people's private details you earlier promised to keep secret. You are confused if you think the blanket "common carrier" has a role here.
Re: (Score:3)
If they want protection from liability, they should seek to be common carriers. If they wish to not be common carriers and exercize editorial control, then they should be subject to liability, the same as any other publisher. Mutatis Mutandis ISPs.
Common carrier status has absolutely nothing to do with this. Common carriers can and do ban/drop/expel users of their services and block users from using their service for objectionable activities. Just the most obvious case is if you use your phone to make death threats, you can be dropped in a hot minute.
What being a common carrier means is simply that you are generally open to any member of the public as long as they pay the standard fees that anyone else does and haven't individually done something t
Re: (Score:2)
You realize nobody's going to let you use their service to say fucknut over and over again if they're responsible for obscene content on their websites, right? You're literally asking for it to be illegal for websites to carry your comments publicly.
Re: (Score:3)
Section 230 of the Communications Decency Act specifically defines online services as their own category to be regulated - they are not common carriers, nor are they publishers.
Section 230 states that online services are:
1. not liable for what anyone else posts on their service, and
2. that they are encouraged to censor anything they find objectionable
-a. whether or not they are successful in blocking what they intend to block, and
-b. even if they block what would oth
Re: That's just a excuse racist to use (Score:2)
The case in question is over whether Google should have immunity because it recommended extremist content that got somebody killed. I fail to see how getting rid of that form of editorializing would support racism.
Why the hell do you progressive morons always like to place context that isn't there? What is this, an inquisition?
Re: (Score:2)
And as a matter of fact, the extremists they recommended were involved in the most extreme form of discrimination possible: they kill people who don't agree with them.
Re: (Score:3)
It is not a public square, you fool, and size isn't what makes something a public square. In order to be subjected to the same rules that governments operate under you literally have to effectively be a government, performing government functions that only governments do.
Running a search engine or social media site is not a government function. Thus, search engine sites and social media sites are not public squares.
Get it right (Score:2, Interesting)
"resulting in both widespread censorship and a proliferation of offensive content"
You can't have it both ways. This sounds like FUD.
Re:Get it right (Score:5, Informative)
You should look up the background to why Section 230 came to be. In short, a site that moderated was treated as a publisher and could be held liable for anything it missed, while a site that moderated nothing could not.
See Cubby v. CompuServe, where CompuServe escaped liability because it didn't moderate at all, and Stratton Oakmont v. Prodigy Services, where Prodigy was held liable precisely because it did moderate and was therefore treated as a publisher. This led to the untenable situation where a service invited liability by moderating and could avoid it only by not moderating anything.
The solution was Section 230 that simply says that each party is liable for its own speech and a service is allowed to moderate content they don't think belong on their service.
Re: (Score:2)
"resulting in both widespread censorship and a proliferation of offensive content"
You can't have it both ways. This sounds like FUD.
You can have it both ways if different companies respond to new legislation differently. Some would choose to increase censorship, and others would remove it. Even the summary describes this scenario.
Re:Get it right (Score:4, Insightful)
Of course you can. The law was specifically written to allow both, because it realised (which I know is mind-blowing for anything produced by American politics) that the world isn't black and white, and that neither full liability nor a total lack of control is in any way desirable.
You absolutely can (Score:2)
And the right wing has always been buddy-buddy with offensive content and racism. That's because the right wing is all about hierarchies, and racism and bigotry are how you create those hierarchies. It's about punching down an
Re: (Score:2)
Utter fucking nonsense - Big media (wildly left leaning) is plenty big, with tons of money, and already controls a lot of publishing. Big tech is pretty well left leaning too, and Twitter really isn't in the same conference as Google, Meta, and Microsoft.
Really, 230 going away will be a DISASTER for the far-right. Of course it will also be a DISASTER for the very far left like you, rsilvergun, as well as all the other blue-dyed-hair Marxist academics. It will be a very good thing for the moderate left - people who vote for
Re: (Score:3)
Big media (wildly left leaning)
Oh no, child, no. They are perhaps socially left, but they are fiscally right. They still support corporatism, which is really just fascism. But money doesn't care about your genitals or your labels or whatever, so long as it can squeeze some money out of you. Money is libertarian, it doesn't care about anyone either way, it just doesn't want to be taxed.
230 going away will be a DISASTER for the far-right. Of course it will also be a DISASTER for the very far left
It's going to be a DISASTER for everyone who doesn't participate in the globalist, corporatist groupthink that you claim to despise.
Re: You absolutely can (Score:3)
Big media is moderate right-wing or moderate left. They're not extreme anything. You don't maximise sales by being extreme.
If you consider them far left, it's only because you're far right.
They've had it both ways for a while (Score:5, Insightful)
They have enjoyed the protection of a platform and the editorial freedom of a publisher.
The only reason this is in court is they have abused their enviable position and power.
Re: (Score:2)
What startup is going to want to, or be able to afford, the legal fees necessary to run anything?
Re: (Score:2)
Really - I doubt it. Meta won't last a week when every ambulance chaser in the nation is suddenly able to encourage everyone to file a couple of $100k suits over libelous content Facebook let their ex publish.
OK, Meta the company might survive because of its massive cash pile - but it's 100% certain Facebook gets locked down, with little if any existing content shared 'public'.
The public also had it both ways for a while (Score:3, Insightful)
The public has enjoyed having it both ways for a while too. We have gotten instant access to the best information on the Internet, and had large tech companies do a decent job of moderating some of the worst content. They don't do a great job, because it would take too much effort, but they generally do enough to make the Internet a more useful place.
Re: (Score:2, Informative)
Change 230 protection to be more like common carrier and prohibit algorithms pushing content....
This way, people are free to look for what they want...post what they want and not really be at risk for having content they don't want shoved in their face.
Really change it to make 230 protection be more common carrier than it is.
Hell, even allow companies to have 2 sections...one without 230 where they are publishers and moderate....a
Re: (Score:2)
It was carried, back in the day, by most every provider and even though it had more than questionable content...the providers were not held liable for it.
They had questionable content because reality and the copyright mafia hadn't caught up with the internet yet in USENET's heyday. And perhaps you should talk to Time Warner, Verizon and Sprint - because NY AG Andrew Cuomo went after them in 2008 to shut down access to possible CSAM available on USENET. AOL and AT&T quickly stopped providing USENET access after that.
The providers did not moderate it or editorialize it....that's how social media should behave now....if they want 230 protections.
You never ran a USENET server, did you? Every USENET admin chose what was on their server or not, i.e. they moderated the content. So
Re: (Score:2)
The only reason this is in court is they have abused their enviable position and power.
No. The only reason this is in court is because, despite the average age of Congress being 58 years old, it turns out they are all a bunch of children.
Where to draw the line... (Score:2)
The only sane response to a change like that to Section 230 would be for social media platforms to simply restrict / exclude ANY content that has even a hint of being questionable, so they aren't open to legal ramifications. What's wrong with the existing system of relying on users to flag content they believe to be harmful or that appears to instigate violence or terrorism?
Re: (Score:2)
Companies that rely on ad revenue, and thus on keeping viewership numbers high, do. Yes, you can "change the channel," but even on TV you are changing between channels whose programming is decided by what gets viewers and what advertisers want. Even a subscription service like, say, HBO back in the day decided its programming based on what was going to grow the subscriber base. If there was content that was going to drive viewers away, you can damn well believe they were going to ax it or never air it in the
Re: (Score:3, Insightful)
Yes. HBO is a publisher, not a common carrier. As such they are (a) able to choose what they publish and (b) liable for what they choose to publish.
Bell Telephone is a common carrier. As such they are (a) unable to choose what they carry, provided that the fee for carriage is paid, and (b) are not responsible for what they carry -- such liability falls on the person paying for carriage.
The problem is that Google et al want to (a) choose what they publish and (b) have no liability for what they publish.
Re: (Score:2)
You can't have both, though. If you are going to make Google/YouTube liable for content on their privately owned platforms, you cannot also remove their ability to moderate it. If Google is going to be opened up to liability with a 230 repeal, they are going to clamp down even harder on content, because they do not want to be sued at every turn.
It's either that, or you're going to have to codify "acceptable" content in law, and that personally is not an area I want the government involved in more than it has to be.
Re: (Score:2)
You would have a point if anyone could get their content on TV, or print media for that matter.
Re: (Score:2)
Ironic, given that TV content is fairly heavily regulated in most places. They even made TV shows about it: https://en.wikipedia.org/wiki/... [wikipedia.org]
Actually good point (Score:4, Interesting)
> It also argues that there is no way to draw a meaningful distinction between recommendation algorithms and the related algorithms that allow search engines and numerous other crucial ranking systems to work online
That's true. There is basically no difference between a search engine saying "we recommend these web pages to you based on your search terms", and YouTube saying "we recommend these videos to you based on your view history". Both involve the user looking for content and the search engine trying to divine what the user might want to see. You could say that YouTube Recommended is not "answering an explicit query from a user" or some such thing, but that's a pretty fine line, as what constitutes an explicit query will also get vague depending on the service.
Re: (Score:2)
There are two rather large differences:
- A search query has to be entered/requested.
- There is no history/tracking to a search query.
Those two facts make a world of difference to what is supplied to users. Without the pushed content/links there just isn't the same engagement by users. Without the tracking the type of content is restricted to just the search terms.
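The pull-vs-push distinction being argued in this thread can be sketched as a toy example (hypothetical scoring functions, not any real search engine's or platform's algorithm): one function ranks purely on an explicit query the user typed, the other purely on tracked history with no query at all.

```python
# Toy sketch of "pull" (search) vs "push" (recommendation).
# Hypothetical code for illustration; real systems are far more complex.

def search(items, query):
    """Pull: ranking is driven entirely by the user's explicit query terms."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(i["tags"])), i["title"]) for i in items]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

def recommend(items, history):
    """Push: ranking is driven by stored history; the user asked for nothing."""
    seen_tags = {t for i in history for t in i["tags"]}
    unseen = [i for i in items if i not in history]
    scored = [(len(seen_tags & set(i["tags"])), i["title"]) for i in unseen]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

catalog = [
    {"title": "cat video", "tags": ["cats", "funny"]},
    {"title": "news clip", "tags": ["news", "politics"]},
    {"title": "kitten compilation", "tags": ["cats", "cute"]},
]

print(search(catalog, "funny cats"))    # ranked by the query the user entered
print(recommend(catalog, [catalog[0]])) # ranked by tracked history, no query
```

The two functions share the same scoring machinery; the only difference is whether the ranking signal comes from an explicit request or from tracking, which is precisely the line the parties are arguing over.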
Re: (Score:2)
While google seems to work okay with a blank landing page, I'm not sure Meta/Twitter could do as well with you logging in and seeing nothing until you search.
Re: (Score:2)
Pick a side. Either they police it or don't.
Hot air (Score:5, Insightful)
Re: (Score:2)
This is wealthy elites like Rupert Murdoch and likely the Indian equivalent trying to take bac
Re: (Score:2)
This is a bunch of hot air. All that would happen is that they would have to remove or modify their recommendation systems. They should be liable if they actively support statements that advocate violence, terrorism, or are libelous. They should not be liable for what people post, but when they make recommendations, they are no longer a passive participant in the process and should be held accountable.
How do you define "recommendation", though? Facebook orders your friends' posts based on how likely they think it is that you would want to view them. Is that a recommendation? When you search for something and it chooses the order, is that a recommendation? Where's the dividing line between promotion and filtering? And so on.
"Upend the Internet" (Score:5, Insightful)
That's bullshit. The internet worked fine before there was Google, and while it's showing its age, it'll continue to work fine if Google and Facebork disappear (but we can dream...)
Re:"Upend the Internet" (Score:5, Insightful)
TFTFY
Re: (Score:2)
That's bullshit. The internet worked fine before there was Google
The internet before Google is completely unlike the internet today. Comparing the two is like saying we were able to design buildings before computers were invented. While true, it misses the point and is completely irrelevant in the modern context.
Re: "Upend the Internet" (Score:2)
I think you're probably thinking of web 2.0, not the internet.
Re: (Score:2)
The internet before the CDA (1996) was just starting to get on people's radar. The pre-230 internet was only possible because most sites were static, without much user-generated content. So if you found something libelous or illegal, chances are the person who owned the site put it there. For the rare exception like GeoCities, the internet was small enough that they could handle the occasional complaint on an ad hoc basis.
The modern web would be utterly impossible without a liability shield.
Re: "Upend the Internet" (Score:2)
Web != Internet.
Re: (Score:2)
Then maybe the modern web as you see it shouldn't be possible.
The part that's going to be impossible is leaving comments and having them read by other parties without their content first being vetted by a human. In other words, exactly the thing you say you don't want to see happen — censorship to meet government standards. The parts of the CDA you're not complaining about make you responsible for government-defined obscenity on the internet. Nobody is going to take the chance that you're going to post some of that on their site.
Re: (Score:2)
That's bullshit. The internet worked fine before there was Google, and while it's showing its age, it'll continue to work fine if Google and Facebork disappear (but we can dream...)
Your statement is bullshit. Search before Google was painful beyond belief. I went through half a dozen search engines before I switched to Google.
Re: (Score:2)
And now Google is painful (and practically worthless). I want AltaVista back.
To be fair, some advertisers would figure out how to game that site as well and load up the first pages with junk.
Re: "Upend the Internet" (Score:2)
Google is a web search engine, as was AltaVista before it. Without an Internet, those search engines couldn't do anything. It was working. It is working. Apart from a few innovations like, what, DNSSEC, and a certain amount of fragmentation into private backbones, what's so different?
Re: (Score:2)
The internet worked fine before there was Google
The internet worked fine before the Communications Decency Act, which made people responsible for the content they served on the internet, except for content which was posted to their service by someone else. The second part in bold there is what makes it possible for Slashdot to let comments hit the internet without being evaluated first, given the part in between the bold parts.
You cannot have open fora on the internet with the CDA without Section 230 of the CDA.
I'm all for elimination of the entire CDA,
Re: "Upend the Internet" (Score:2)
Sure but those are all web sites, unless you imagine Usenet is coming back big time.
Sure there is (Score:5, Insightful)
"argues that there is no way to draw a meaningful distinction between recommendation algorithms and the related algorithms that allow search engines and numerous other crucial ranking systems"
That has to be about the most farcical assertion I have ever read. Search is pull, recommendation is push. It's that simple.
At the algorithm level, the question is: are the search terms coming from a human at a keyboard DIRECTLY, or are they themselves machine-generated by some kind of machine correlation of related content?
Re: (Score:2)
No kidding. That quote goes a long way toward explaining the complete shit search results I get of late from Google and others, where straightforward word searches are completely overridden to the point that the results are nonsensical. No apparent meaningful distinction between search and recommend means all your search are recommend, and are probably paid for or ranked with ROI in mind and not what you wanted as first priority.
Walled Gardens (Score:2)
This will do it for sure. No way to create a platform. No one can afford legal fees unless they are rich.
GREAT! (Score:2)
Google said that scaling back liability protections could lead internet giants to block more potentially offensive content -- including controversial political speech -- while also leading smaller websites to drop their filters to avoid liability that can arise from efforts to screen content.
That should be the outcome we all WANT! Rather than having a small oligarchy of big-tech leaders who essentially decide what we all get to see and are allowed to say, we would have real diversity - AND - importantly, some friction to publishing the aggressively stupid.
Right now there is no point in posting anything except on the platforms the big tech oligopoly controls. Even a fairly popular info-tainment program like, say, Louder with Crowder is pretty much forced onto YouTube because nobody will both w
Freedom of Speech is not Freedom From Offense (Score:3)
(In the U.S. at least) freedom of speech does not, nor was it intended to, protect you from being offended.
if you profit, you should have some liability (Score:3)
If all these companies did was some sort of public-service recommendation without a profit motive, I would say they might have some reason to get special treatment. But they purposely write algorithms that spread the most incendiary (and often false) messages and hate speech, all to drive advertising revenue. If a newspaper did that, they'd get sued. Why can Google and Facebook get away with it? Curating and recommending sites, whether it's search or a playlist, is not passive, and those recommendations are what allow lunatics to spread their messages of hate. These giant companies should be at least as liable for their content as a newspaper is.
Logical disconnect (Score:2)
I trust these geniuses know that this is about more than just killing the news they don't like and owning the libs; it will expose everyone to liability, even Fox News and OAN. I'm seeing them with the shocked Pikachu face right now.
"never envisioned..." (Score:2)
Like semi-automatic firearms per the 2nd Amendment. Guns were slow and clunky when the 2nd was written, and mass shootings by lone loons were unheard of.
Re: (Score:2, Informative)
At the time the first amendment was drafted, muzzle loading rifles were frequently .75 caliber. Now it's illegal to own .50 caliber and above in many states. Muzzle loading rifles were the weapons of war of the day, and I believe this is what the founders intended.
If the second amendment was interpreted as liberally as the first, background checks would be illegal, and a firearm would have to be used in a crime before it could be seized by the government.
If the first amendment was interpreted like th
Re: (Score:2)
They wrote the law based on muzzle loading muskets and stuff.
Bullshit. They wrote the law being aware of Kalthoff repeaters [wikipedia.org] and other similar designs. And civilian owned cannons and warships.
Re: (Score:3)
Sure it can.
Just turn 230 into more of a true "common carrier" thing like with the phone companies.
Common carrier rules require zero moderation or monitoring by the carrier. So literally you're asking for a free-for-all.
The problem is, that can't realistically be monetized. If you don't have moderation, nobody wants to run ads over what might turn out to be child porn. Without a way to monetize, that means all services will have to cost money. That means anything involving user-generated content will be a pay service that costs enough money to fund its operation. (And that's assuming the credit card
Re: (Score:2)
And now you've completely broken the Web almost irreparably, and the likely end result would be everything eventually becoming a Facebook group. Searchability goes into the toilet, public access goes away, etc., and the Web reverts to being a collection of independent bulletin board systems like back in the CompuServe/AOL/GEnie days.
And not to state the obvious here, but want to guarantee that nobody will be able to compete with Big Tech ever again? That's the fastest way to reach that point, with a few companies controlling everything, and nobody else having a prayer of being able to compete, because all the consumers get all of their content from a small number of big sites to avoid paying for smaller ones.
The irony, of course, is that many companies who would probably benefit from 230 going away (because of their size, ability to t