Meta, Twitter, Microsoft and Others Urge Supreme Court Not To Allow Lawsuits Against Tech Algorithms
A wide range of businesses, internet users, academics and even human rights experts defended Big Tech's liability shield in a pivotal Supreme Court case about YouTube algorithms, with some arguing that excluding AI-driven recommendation engines from federal legal protections would cause sweeping changes to the open internet. From a report: The diverse group weighing in at the Court ranged from major tech companies such as Meta, Twitter and Microsoft to some of Big Tech's most vocal critics, including Yelp and the Electronic Frontier Foundation. Even Reddit and a collection of volunteer Reddit moderators got involved. In friend-of-the-court filings, the companies, organizations and individuals said the federal law whose scope the Court could potentially narrow in the case -- Section 230 of the Communications Decency Act -- is vital to the basic function of the web. Section 230 has been used to shield all websites, not just social media platforms, from lawsuits over third-party content.
The question at the heart of the case, Gonzalez v. Google, is whether Google can be sued for recommending pro-ISIS content to users through its YouTube algorithm; the company has argued that Section 230 precludes such litigation. But the plaintiffs in the case, the family members of a person killed in a 2015 ISIS attack in Paris, have argued that YouTube's recommendation algorithm can be held liable under a US antiterrorism law. In their filing, Reddit and the Reddit moderators argued that a ruling enabling litigation against tech-industry algorithms could lead to future lawsuits against even non-algorithmic forms of recommendation, and potentially targeted lawsuits against individual internet users.
My youtube experience (Score:4, Insightful)
I find Google, YouTube, etc. do a pretty good job of not suggesting anything shady to me until I demonstrate a pretty clear interest in it. I've never seen nor watched anything terrorist-related on YouTube because I've never gone looking for it. You can look for jobs for days, and Google won't send you anywhere dirty... until you cross the threshold and ask about handjobs - then, "Woah, nelly."
It's tricky. But for my whole life, the classifieds in my local papers included thinly veiled prostitution ads. I don't recall lawsuits about that.
Re: (Score:1)
It's tricky. But for my whole life, the classifieds in my local papers included thinly veiled prostitution ads. I don't recall lawsuits about that.
Well actually... [google.com]
Re: (Score:3)
I was not talking about Craigslist. I specifically said, "classifieds in my local papers". Perhaps I should have been more specific. After all, I'm over 50 now... newspapers might be an anachronism too obtuse for some.
Re: (Score:1)
Good.
Hey, just limit it to algorithms on social media that try to steer your attention, either just for more clicks or to push you into thinking whatever way the CEO wants you to think.
That should protect the "open internet"... most of it isn't using behaviorally directed algorithms.
Parents sometimes tried for children's actions (Score:5, Insightful)
To me it's mystifying they think they should not be responsible - you (as a company or person) should be responsible for what you put out in the world.
If children not of legal age do something terrible, parents can be forced to take accountability, and a rogue AI from a giant corporation seems no different.
It's not even like recommendation engines would go away (as much as that might be wished for by all); they would just have to be much more carefully monitored.
I feel like tech companies are fighting to keep something around that not a lot of people like to begin with and is already doing humanity a disservice in aggregate.
Re:Parents sometimes tried for children's actions (Score:5, Interesting)
Regarding the parent's responsibility, a rather interesting nuance now is that for most of history, the child was raised in the home, and the parents were the gatekeepers between the world and the child. Now every child has the greatest bypass system ever built - the world is at their fingertips, and there is precious little a parent can do other than lock the child in a crate and slip food and water through the cracks until they're 18. Holding the parent responsible is less and less reasonable.
I'm all for accountability, but it should come with control.
Re: (Score:2)
Yet another reason to make access to social media adult only, much like other age-restricted activities.
Re: (Score:2)
People, including children, are not mindless programmable automatons.
Not yet, anyway. Your suggestion will go a long way towards making it happen.
Re: (Score:1)
How so?
Not quite sure exactly what you are driving at.
Can you give examples how making social media access adult only will turn children into "mindless programmable automatons"?
Hell, it seems that something akin to "mindless automatons" is exactly what social media is turning them into....
I think of that when I see kids mindlessly staring into a smartphone.
Re: (Score:3)
Can you give examples how making social media access adult only will turn children into "mindless programmable automatons"?
If you teach them that information is dangerous, the risk is that they'll believe you.
Re: (Score:2)
Re: (Score:2)
So then you agree, web sites should have complete control over what is posted/listed/distributed on their site, and can remove whatever content they want and ban whomever they want.
Re: (Score:2)
The law disagrees with you.
Re: (Score:2)
SHOULD.
I feel that they should be responsible for what they recommend. Much like advertisers should be responsible for what they recommend.
Section 230 (part of the Communications Decency Act) was put in place with the intention of encouraging censorship of obscene and harmful materials. It exists to shield services from errors in censorship by protecting them from responsibility for what they fail to censor, and for what they censor that anyone feels they should not have censored. It does not say anything about recommendations.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
The problem is what will happen. Firstly, if "algorithm" is not given some special legal definition, it will include any manner in which a platform/web site decides to show content. Displaying content in chronological order is an algorithm. Displaying content determined to be relevant to a search term is an algorithm. There is no way to show anything at all other than by employing an algorithm to do so.
Secondly, if sites are held liable for anything they decide to display to users, there are two likely responses: censor far more aggressively than necessary, or stop hosting user-generated content altogether.
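To make the definitional problem concrete, here is a minimal sketch (with made-up post fields; no real site's code is implied) showing that even the "neutral" orderings named above are algorithms in the plain technical sense:

```python
# A minimal sketch with made-up Post fields; both "neutral" display
# strategies below are algorithms in the plain technical sense.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    timestamp: float  # seconds since the epoch

def chronological(posts):
    # "Just show the newest first" is still an algorithm.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def search_relevant(posts, term):
    # So is "show what matches the search term".
    return [p for p in posts if term.lower() in p.title.lower()]
```

Any liability rule keyed to the bare word "algorithm" would sweep in both of these functions just as surely as an engagement-maximizing ranker.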
Re: (Score:2)
If this causes massive sites to implode and the internet to go back to a million tiny little forums and communities hyper-specialized and completely anonymous, it wouldn't be such a bad thing.
The internet as we know it right now exists thanks to algorithmic recommendations, and these big companies made their fortunes this way and are scared sh*tless of losing what built them, but let's not play their game. The internet was a pretty good place before Facebook, Twitter and TikTok, and I would venture that it could be again without them.
Re: (Score:2)
If this causes massive sites to implode and the internet to go back to a million tiny little forums and communities hyper-specialized and completely anonymous, it wouldn't be such a bad thing.
It won't. If anything, those tiny sites would be the first to shut down, because there's no way they could afford the legal liability. The only ones with even a chance of survival with user generated content are the billion dollar companies.
Youtube is Google's child (Score:2, Interesting)
Just like a child, sometimes it slips and does stupid things. Well, just like real parents, Google is responsible for what their stupid child does.
If they can prove their "child" is mature enough to behave like a human adult, they might have a point. Until such time, the responsibility for what an AI says or does rests squarely on the shoulders of whoever runs the AI.
Re: (Score:1)
Given your wild hypothetical, we would expect the law to impose consequences directly on the misbehaving AI.
Re: (Score:2)
If (or rather when) AIs become so advanced as to match humans in complexity, creativity and ethical decision-making, then yes, they should be expected to face consequences for their actions, just like humans. I assume such an advanced AI would be receptive to the threat of being turned off.
My point was that AIs are still dumb as bricks by human standards, and so Google, Microsoft and co. should be held responsible because they run them - and by proxy, inflict whatever fuckery the AI decides to do unto others.
Re: (Score:1)
> I assume such an advanced AI would be receptive to the threat of being turned off.
such an advanced AI will mimic something that is receptive to...
it's very important that we all remember that these "AI" networks are just a big list of numbers to multiply. They have no consciousness, and no thought. Without some major breakthrough (no such breakthrough is on the horizon...), that will be true for the foreseeable future.
When we all forget that, we're in big big trouble
Re: (Score:2)
They have no consciousness, and no thought.
Define consciousness.
Define thought.
Re: (Score:1)
I can't. I suspect you can't either. So how can we possibly reproduce these things if we can't *define* them?
Re: Youtube is Google's child (Score:2)
Just like a child, sometimes it slips and does stupid things. Well, just like real parents, Google is responsible for what their stupid child does.
The modern problem is: so many people claim to be offended or triggered by, well, anything. Lawsuits would be endless, because you cannot please everyone.
Better would be to simply remind people that they do not need to visit web properties that they find objectionable. Kids are only slightly more difficult - we have two, and somehow manage to raise them despite the internet and smartphones.
Re: (Score:2)
It goes like this (with a scale from 0 (no ranking) to 100 (bombard them with recommendations)):
Everything is weighted at 0 (except for some content/tags that are paid to raise their rankings, which makes them 5).
Then the tags on the content you watch (describing the types of content it is) are factored in: watching a video may raise the ranking of each of its tags by 0.1 per instance watched...
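Read literally, the scheme described above amounts to something like the following sketch; the constants, tag names and structure are illustrative assumptions, not YouTube's actual ranking system:

```python
# Hypothetical sketch of the tag-weighting scheme described above; the
# constants and names are illustrative assumptions, not YouTube's system.
from collections import defaultdict

PAID_BOOST = 5.0    # paid content/tags start at 5 instead of 0
WATCH_BONUS = 0.1   # each watched video nudges its tags upward
MAX_WEIGHT = 100.0  # 100 = "bombard them with recommendations"

class TagRecommender:
    def __init__(self, paid_tags=()):
        self.weights = defaultdict(float)  # everything else starts at 0
        for tag in paid_tags:
            self.weights[tag] = PAID_BOOST

    def record_watch(self, video_tags):
        for tag in video_tags:
            self.weights[tag] = min(MAX_WEIGHT, self.weights[tag] + WATCH_BONUS)

    def rank(self, videos):
        # Score each video by the summed weights of its tags, highest first.
        return sorted(videos,
                      key=lambda v: sum(self.weights[t] for t in v["tags"]),
                      reverse=True)

rec = TagRecommender(paid_tags=["sponsored"])
rec.record_watch(["gardening", "diy"])  # gardening/diy now outrank cold tags
```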
Re: (Score:2)
> Not really.. their algorithm isn't really AI.. its a matter of content weighting based on tags and paid rates..
That is, indeed, a lightweight AI algorithm.
> Its not AI, its simple math..
All computation is math. It's as complex as we wish it to be, including the integration of "random" algorithms and even quantum computing. Being mathematical does not divorce it from the responsibilities or limitations tied to using math and data for choices.
ban Tech Algorithms from labor issues! (Score:2)
ban Tech Algorithms from labor issues!
we make the algorithms (Score:4, Funny)
And even patent them.
But we don't want to be held responsible for these mindless overlords we have running our business.
Robber barons (Score:4, Insightful)
Re: (Score:2)
These protections were for the fledgling internet companies.
They are for everyone, not just the big guys.
without any transparency and accountability
The tech companies are accountable for what the tech companies do, and the users are accountable for what the users do, as they should be.
Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act [techdirt.com]
Re: (Score:2)
There never were any 'robber barons' except for the nobility - that is, government robber barons. You and the likes of you call people who were captains of industry 'robber barons' to invoke a connotation that matches your agenda. I would rather have Standard Oil than any government.
Abolish 230 (Score:3, Interesting)
I hope 230 gets abolished. As a non American, I would love nothing more than to see the look on everyone's face after they've stabbed their nose repeatedly in a fit of rage against their own face.
Everyone against Section 230 thinks it is somehow an affront to free speech because their own precious ideas aren't able to compete in the free market. But nothing in the US hurts free speech more than civil liability. And so exposing American companies to that liability will only make them curtail user-generated content on their sites even more to avoid that liability. But instead of these people going off to form their own company, with blackjack and hookers, they too will face the wrath of lawyers and SLAPP suits and so on. And of course, the irony in all of this is that the only companies that will be able to afford the burdens are the ones the anti-230 crowd were trying to hurt in the first place, concentrating the power of Silicon Valley even more. And now that all public discourse on media platforms has to be clean and pure, the future of the US will start to look like some combination of Idiocracy and Demolition Man.
Anyways, how about that flat 30% VAT instead of marginal income tax? I'm sure that will put more money into the average American's hands. LOL
Re: (Score:2)
But nothing in the US hurts free speech more than civil liability. And so exposing American companies to that liability will only make them curtail user-generated content on their sites even more to avoid that liability
Bingo. And a big factor is the uncertainty regarding those civil suits, both in applicability and in the penalty. Here in Europe, civil liability is not really a thing (it's mostly limited to actual damages), but the EU has tried to recreate that element of uncertainty in criminal laws around illegal content. They did so by proposing rules that are rather vague in applicability but carry a very hefty penalty, thus achieving the same thing: scaring tech firms into self-censoring their content.
Re: (Score:2)
We don't need to do away with 230... we need to modify it to fit the times.
It had its purpose...
Re: (Score:1)
Re: (Score:2)
But instead of these people going off to form their own company, with blackjack and hookers, they too will face the wrath of lawyers and SLAPP suits and so on. And of course, the irony in all of this is that the only companies that will be able to afford the burdens are the ones the anti-230 crowd were trying to hurt in the first place, concentrating the power of Silicon Valley even more. And now that all public discourse on media platforms has to be clean and pure, the future of the US will start to look like some combination of Idiocracy and Demolition Man.
This is the outcome you're hoping for?!
Obviously untrue (Score:2)
"In friend-of-the-court filings, the companies, organizations and individuals said the federal law whose scope the Court could potentially narrow in the case -- Section 230 of the Communications Decency Act -- is vital to the basic function of the web".
The "basic function of the Web" depends only on suitable hardware and software. Government and its laws can do nothing whatsoever to assist the basic function of the Web. They can only obstruct and prevent it.
Comment removed (Score:5, Insightful)
Re: (Score:3)
They want to have it both ways, protection as a common carrier AND as an editor.
None of these companies are asking for common carrier status.
Seems an odd route... (Score:2)
- If the videos on YouTube were not permissible* they should be, or should have been, taken down promptly.
- However, if the videos were permissible, how can there be any case to answer in recommending them?
* I'm using not permissible to cover the range of "in breach of YouTube's conditions" and/or illegal**
** Yes, that raises the question of "illegal where, exactly?"
Re: (Score:2)
I think the real story is they want precedents set on one side so they can try to use them on the flip side of the coin. A lot of content creators that don't fall in with the Google/YouTube "group think/party line" are sick of seeing their videos shit-holed/shadow-banned/sent to the bottom of the pit because they might reflect "conservative" viewpoints. I know there are terms of service and probably arbitration clauses, but I'm just waiting for them to ban the wrong channel (again).
Re: (Score:1)
I think the real story is they want precedents set on one side so they can try to use them on the flip side of the coin.
Intriguing; I hadn't considered that this may be all about setting a precedent to flip - "If you don't recommend my channel I'll sue you!".
Recommendations (Score:3)
Perhaps fauxtistic nerds shouldn't be designing and implementing recommendation algorithms, since they come up with stupid shitty criteria like "the best thing to watch/read/buy is what you've watched/read/bought before."
On that note, why do shopping websites have a "sort by popularity" anyway? Give me the Asian-sort as default: "sort lowest to highest per unit price". I don't want to buy things just because many other people bought it.
Re: (Score:2)
On that note, why do shopping websites have a "sort by popularity" anyway? Give me the Asian-sort as default: "sort lowest to highest per unit price". I don't want to buy things just because many other people bought it.
I didn't know there was a name for that kind of sort. You have no idea how much of my life I've wasted looking at all the options and doing divisions in my head.
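For what it's worth, the sort being asked for is trivial to express; a minimal sketch, assuming a toy item list with made-up fields rather than any real shop's schema:

```python
# A minimal sketch of the "sort by unit price" default; the item fields
# are illustrative assumptions, not any real shop's schema.
items = [
    {"name": "Paper towels, 12-pack", "price": 13.99, "units": 12},
    {"name": "Paper towels, 6-pack", "price": 7.49, "units": 6},
    {"name": "Paper towels, single", "price": 1.59, "units": 1},
]

# Lowest price per unit first; popularity plays no part.
for item in sorted(items, key=lambda i: i["price"] / i["units"]):
    print(f'{item["name"]}: {item["price"] / item["units"]:.3f} per unit')
```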
Re: (Score:2)
How can you enforce it? (Score:2)
Re: (Score:2)
It's not like you can ban a tool
Why not?
If you could, we could ban all malware and be done with it.
Malware is banned. That's not the point.
Re: (Score:2)
Re: (Score:1)
They're both right... (Score:3)
YouTube's algorithm, in pushing pro-ISIS content to a susceptible person, should absolutely be held accountable. In pushing that content, it contributed to the radicalization of the participants.
It's that same algorithm that pushes antivax content to anti-vaxers, conspiracy theory posts to conspiracy theorists, and generally left-leaning content to the left and right-leaning content to those on the right.
The big companies are right too in that the general removal of Section 230 would absolutely cripple and devastate the internet. For example, Slashdot would likely stop taking comments because of folks pushing (often unrelated) off-color messages. They'd have to, because they can't afford to fight all of the legal cases that would be thrown against them. If I posted that Sandy Hook was staged, then the folks who took down Alex Jones could take down Slashdot too. So the removal of the general protections of 230 would be a disaster.
However, I don't believe the algorithms for suggesting content should carry the same exemption. The large providers should not be accountable for what someone posts, but once they start promoting or pushing that content, they absolutely carry the responsibility for that. We could exclude the algorithms from the exemption and this, too, would likely change the internet, but in my opinion it would only change for the better.
The removal of the exemption on the algorithms could actually do a lot to bust up the various social media bubbles we end up in because the algorithms keep recommending things we already believe in, removing the self-reinforcing echo chambers...
Re: (Score:2)
The youtube algorithm, in pushing pro-isis content to a susceptible person should absolutely be held accountable. In pushing that content, it contributed to the radicalization of the participants.
What makes a person susceptible? And how would Google know it? He could have been a columnist, or writing a paper on ISIS, or could simply lie back and eat popcorn while watching the videos and every once in a while chuckle and say "These dimwits."
Re: (Score:2)
You can't stop someone that is intent on finding videos on ISIS. They can just search for them, over and over again. So a columnist or a person wanting to chuckle at them, they can still search to find what they need.
But if all YouTube recommends is pro-ISIS stuff every time you come back, and they feed it to you more and more, I would argue that anyone could be susceptible, because it can amount to brainwashing...
If you were captured by ISIS and they forced you to watch ISIS materials day after day, don't you think it would eventually affect you?
Re: (Score:2)
However, I don't believe the algorithms for suggesting content should carry the same exemption.
This is a false distinction. There is no way for YouTube (for example) to show its users anything other than via an algorithm. So holding algorithmic presentation of content liable is the same as holding YouTube liable for everything that appears on the site. Holding Google accountable for bad effects from its algorithms sounds desirable, but I have not yet seen a proposal for how to do so (even at the napkin-sketch level of detail) that would not kneecap the internet as we know it.
Re: (Score:2)
No other way? Like not even "show 5 videos with most requests in last 24 hours", etc? Something that is simple yet avoids trying to show you the same videos about the same content?
Come on, let's be honest. The algorithms are meant to keep you on the site, keep you from going to another video service. Meta, TikTok, ..., they all do this; they want to keep your eyeballs on their site by any means necessary, so they push content they think will do this. Ask for one ISIS video because you want to see what they're about, and the recommendations keep coming.
Re: (Score:2)
No other way? Like not even "show 5 videos with most requests in last 24 hours", etc?
That's an algorithm. Other approaches that would also be algorithms:
- chronological order
- random selection
- things your friends have seen
- any other possible method because any way a computer makes a decision about something is an algorithm
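Even the quoted proposal, "show 5 videos with most requests in last 24 hours", is a few lines of code once written down; a minimal sketch, with an assumed request-log format:

```python
# Even "show the 5 videos with most requests in the last 24 hours" is an
# algorithm once written down; the request-log format here is an assumption.
import time

def top_five_last_24h(requests):
    """requests: iterable of (video_id, unix_timestamp) pairs."""
    cutoff = time.time() - 24 * 3600
    counts = {}
    for video_id, ts in requests:
        if ts >= cutoff:
            counts[video_id] = counts.get(video_id, 0) + 1
    # Rank by request count, highest first, and keep five.
    return sorted(counts, key=counts.get, reverse=True)[:5]
```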
Comment removed (Score:3)
Re: (Score:1)
In this case, I think we can narrow it down to code (or even manual manipulation) that pushes or promotes content to a user based on data gathered about them, or for any reason other than it being content the user was directly searching for.
Re: (Score:2)
And so it can be reasonably argued that at some point, the user is NOT the person deciding what's being put on their screen. Instead the computer programmers who developed the software have made those decisions
The user has never been and never can be the one who decides what the server sends to the client. The server may use user input as part of the determination of how to respond, but the response is always up to the server.
Re: (Score:2)
Re: (Score:2)
I disagree. You're misunderstanding the issues both technically and legally if you think the user is in any way in charge of what the server is doing.
Idiots. All of them (Score:2)
If they are mad about the recommendation engines, regulate them.
But you can't. You gave companies free speech. Sucks to be you, guys. Fucking deal with it like I have to deal with you.
S230 == PsyOps Cover (Score:1)
They're going after the wrong algorithms (Score:3)
They need to go after the various copyright-strike and community-standards algos instead. Not necessarily for the algo itself, but for the near impossibility of getting an actual human to tell you what was wrong and to really review it (rather than apply a rubber stamp) when it seems to have gone wrong.
No more copyright strikes triggered by wild bird songs. I saw one from Louis Rossmann where a short of his cat making an annoyed meow was dinged for violating community standards. I don't speak cat, and I didn't know YouTube's algorithm did either. Naturally there is no human available to explain why that was a violation.
More generally, if you delegate to an algorithm, its actions are YOUR actions and you must accept responsibility for them.
I'd say that too (Score:2)
No one wants the law telling them what to do; it's best to just sweep the bad stuff under the rug. What do you think "professional" bodies do? Law societies, colleges of physicians, etc.
If you do bad things and get caught you have to answer to society via the legal system. Why is it not a problem when it's done at global scale by a corporation?
Let me give you a non-"Evil Big Business" example (Score:3)
Say I run a simple forum about gardening. Everything is on the up and up and we all get along talking about plants.
Joe Simpleton creates an account, because he, too, likes gardening. At the bottom of the main page, I show lists of recent new members, who is currently online, and popular posters. Joe clicks on the link of a popular poster, Elphaba Greenthumb. It turns out Elphaba's profile and some of her posts talk about using herbs for treating ailments, folklore about wolfsbane, and ingredients to brew a tea to help someone sleep.
My "recommendation engine" is promoting *witchcraft*!
If 230 is repealed, I can now be sued because I put up a link to content that the viewer did not deem appropriate and that may even be illegal in the region where they live. As the owner, it was my responsibility to filter all content to make sure that only people who want to see information on "witchcraft" are exposed to it.
I would be forced to geofence anyone logging in, know the laws of every region of the US, make sure every profile is tagged correctly (because I would also be responsible if the member incorrectly tagged themselves), and build up filters so that I don't let any witches show up in those areas. And that also becomes retroactive, so I would not only have to deal with new members coming in, but go back through all my existing members and "correct" their content, plus any changes anyone makes.
Even after all that, I could still face frivolous lawsuits. While the case will likely be thrown out, as the owner, I still have to take the time and money to defend my website/company.
The overhead required to manually verify every account, every biography, every photo, every forum post, every everything for validity, content, and filters to apply, is financially crushing to any business that allows user entered information to be shown. The manpower to monitor it all, the lawyers that need to be on retainer, the constant vigilance of watching for laws to change, the backup savings required to handle any suits that get through and require my company to take action... It's not worth it.
People are judging this based on Google, Facebook, and YouTube who have the resources. But for every one of those evil empires, there are hundreds of thousands of "little guys" that would be forced to live by the same rules and be crushed out of existence, leaving you with JUST the empires.
Re: (Score:2)
Thanks for this, great comment. In addition, 230 protects users too. It protects you from liability for retweeting, forwarding an email, etc. because you aren't liable for something someone else did.
Re: (Score:2)
A choice (Score:3)
A third party isn't telling people to watch more QAnon or more ISIS propaganda; YouTube's intellectual property is. They're simultaneously claiming it's theirs but they're not responsible for it. This is like an alcoholic claiming it's the wine bottle's fault he was driving drunk.
If YouTube showed kiddie porn, how many politicians would accept the claim "it's not our fault, we can't do anything"? The fact that YouTube doesn't show kiddie porn means it is possible to censor images. It was only a few months ago that Google destroyed a digital identity because the subscriber sent photos of a child to a doctor. It reveals that Google has the power to detect things it doesn't like. It suggests this lack of action is a choice, and Section 230 is not the issue.