The Internet | Social Networks | Technology

Tech Liability Shield Has No Place in Trade Deals, Groups Say (bloomberg.com)

A coalition of internet accountability groups is warning the Biden administration against including liability protections for tech companies in future trade agreements, saying that could hamstring efforts to hold platforms responsible for user content. From a report: In a letter sent to President Joe Biden on Thursday, the organizations said including a legal shield in trade deals like the 2018 U.S.-Mexico-Canada accord "reflects a broad effort by the big tech platforms to use 'trade negotiations' to limit domestic policy options."

The letter was signed by 16 public interest groups focused on issues such as civil rights, democracy and the market power of tech platforms, including Public Citizen, Color of Change and the Center for Digital Democracy. The coalition came together as the advocates observed how a ratified trade deal could bake in -- and export -- increasingly controversial legal protections for internet companies, said Morgan Harper, a policy director at the American Economic Liberties Project, which also signed the letter. The groups are "sounding the alarm about this tactic by Big Tech to undermine the inevitability of domestic regulation that's coming their way," Harper said. "We expect that this will be a priority for the Biden administration."

Comments Filter:
  • Translation: (Score:4, Insightful)

    by memory_register ( 6248354 ) on Thursday May 27, 2021 @06:42PM (#61429708)
    We want to bully companies into following our political slant. These trade deals treat them too fairly and must stop.
    • Re:Translation: (Score:5, Insightful)

      by kot-begemot-uk ( 6104030 ) on Friday May 28, 2021 @02:07AM (#61430468) Homepage
      Sort-a.

      The liability shield provisions are essentially a forced export of US legislation into other countries. They take Section 230 and several similar laws and force-feed them down the savage's throat, assisted by the butt of an M16 to ensure they are eaten.

      Let's leave aside the outright colonialism aspect and have a look at it from a purely legal standpoint:

      1. Even if the country has a pretty well-established regime for libel, hate speech, etc. (e.g. Germany), it is pushed aside and replaced with an inferior version. Specifically, the criminal aspects of libel and hate speech are replaced with a USA/UK-style civil regime.

      2. The country's sovereignty and heritage are violated, raped and thrown in a ditch. The definition of hate speech is an essential part of each country's history and heritage. What an Englishman will and can say about Germans and the French is hate speech in those countries, and vice versa. It belongs in the local courts, with the country's constitutional court as the highest authority. It does not belong on the desk of a judge in California.

      3. Most countries around the world have a reasonably well-defined and rigid legal system. It is rule-based and not subject to interpretation. That is an essential feature of Napoleonic law compared to common law. You cannot break a chunk out of it, duct-tape a Google Translate version of Section 230 onto it, and expect it to function correctly.

      So while I am not a fan of most of the groups that signed it, this time they got it right - a classic case of a broken clock showing the right time twice a day.

    • Re:Translation: (Score:5, Insightful)

      by Baki ( 72515 ) on Friday May 28, 2021 @02:07AM (#61430472)

      We want to keep our sovereignty as a nation, and don't want to be bullied into giving it up due to economic blackmail.

      E.g. political donations by companies are banned in most democracies, and for good reason. It would be disastrous if the reasonable balance between free speech and money/advertising were disturbed by the forced import of US standards (which are not always the best in the world, to put it mildly).

    • No, it's like an international version of section 230 liability protections.

      Do you need to go consult with your masters and find out if you're supposed to be supporting 230-like protections today? Yesterday was all about revoking 230. It'll probably be on the Rush rerun tomorrow. Do you want to flip a coin, will that help?

      I'll tell you a big secret, plenty of liberals want to get rid of 230's liability shield, which is what this is about, and if a bunch of cancelled conservatives think getting rid of it

    • It's worse than that. This is how they get us to cheerlead for our own censorship... by framing it as holding those big, bad, evil tech companies responsible.
  • "Hold platforms accountable for user content".

    The reason US companies dominate the Internet is that they are safe from lawsuits about this. This will turn it into a legal feeding frenzy on trillion-dollar corporations.

    Rest assured these other nations do not have free speech in mind. So, sorry. I have no patients for things restricting speech one way or the other. It's couched in noble-sounding terms but is anything but.

    • Re: Lawsuits R Us (Score:5, Insightful)

      by dwater ( 72834 ) on Thursday May 27, 2021 @07:16PM (#61429796)

      You think you have free speech? That's so adorable.

    • I have no patients for things

      So, you're not a doctor, then? Or did you mean "patience"?

      Though I suppose you could be a doctor, trying to simulate the average prescription written by an average doctor - illegible....

    • by Ichijo ( 607641 )

      "Hold platforms accountable for user content".

      The reason US companies dominate the Internet is because they are safe from lawsuits about this.

      So if we held platforms accountable for user content, Facebook would die?

      Is there a downside?

    • Re: (Score:2, Troll)

      by lsllll ( 830002 )
      The large companies are having their cake and eating it, too, right now, because they're not liable for user content and they can also remove any content they wish (yeah, don't bring up that they're "private" companies. When they become that large, they become de facto town squares). If a company is going to curate user content, then the company should be on the hook for the comments it leaves behind. If it wants liability protection for user content, then it needs to leave user content alone (except in
  • I thought "Tech Liability Shield" meant you can't say "our computers or software failed, which is why our systems broke contracts or laws, so it's not our fault." Ah well.
  • The thing about the inevitable is it's gonna happen no matter what. Unless it isn't, in which case it's not an inevitability.

    Just trying to figure out what all of this is really about, if anything. I always look for the buried lede in any sloppily written news story.

    Is there any major group that isn't trying to influence the law in its own favor? Because if there isn't, then there's no point mentioning such a thing.
  • by Traverman ( 4909095 ) on Thursday May 27, 2021 @07:28PM (#61429816)

    To limit the spread of disinformation is to limit the accountability of individual users for applying standards of critical thinking based on evidence, or, more accurately, a probabilistic model of the relative credibility of evidence. Please do not protect me from disinformation. Rather, protect me from those who would remove the volumes of information, disinformation, and misinformation (jointly, "alleged information") upon which I seek to train myself, just as AI trains itself. I alone am responsible for how much faith I choose to place in any given piece of alleged information, and for the actions that I may take as a result. I would rather make mistakes and learn from them than be "shielded" from "disinformation" or "unproven claims" which, on rare occasion, prove both true and valuable.

    Too many times in recent history, we have treated "unproven" as equivalent to "false". And "fact checkers" test the veracity of claims against "established facts" which, given the abundance of nonreproducible "science", are often wrong. What they generally don't do, in any event, is analyze the totality of evidence available and weigh it by credibility in order to obtain the most likely of a multitude of unlikely explanations, to be compared against the claim or assertion in question (a toy sketch of such weighting follows this comment). AI does do that with varying degrees of competency, which is why it often "miraculously" makes progress faster than the scientific method would usually afford, with respect to the optimization of a certain target parameter.

    I don't need you to check Wikipedia, PubMed, or Nature for me in order to "fact check" someone's outlandish claims because you're afraid of being sued. To the contrary, I need to ensure that you are vulnerable to lawsuits for deleting alleged information, especially if it is known to have originated with a human as opposed to a bot.

    The case could be made that humans cannot possibly replicate the evidence-weighting process of AI, especially within narrow domains where the latter excels. That's certainly true, but the most productive claims-analysis processes necessarily leverage AI (to an extent that is sure to increase as AI becomes more sophisticated) because, in our dynamic world, we often need to make an assessment and decide on courses of action before the scientific method can settle on a rigorous conclusion. The default is simply inaction, which is a choice in itself and usually not the optimal one. None of that works if social media platforms are compelled to delete alleged information because they fear liability arising from the subset of it which turns out to be false, or worse, merely unproven.

    Yes, perhaps such liability protections for hosting disinformation don't belong in a trade deal, but they belong somewhere in the laws of a country which claims to support freedom of expression and thought.

    Stupid people on both political extremes may act on a whim, avoiding the mental tax of critical thinking. There is no cure for such laziness other than evolution itself. We cannot afford to embolden the thought police in order to save those fools from themselves.
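
    As a toy illustration only (nothing below comes from the article or the comment itself; the field names and numbers are invented), the credibility-weighted evidence aggregation gestured at above might look roughly like this in Python:

      import math
      from dataclasses import dataclass

      # Toy sketch: each piece of "alleged information" either supports or disputes
      # a proposition and carries a subjective credibility weight in (0, 1].
      @dataclass
      class Evidence:
          supports_claim: bool
          credibility: float

      def weighted_belief(items: list[Evidence], prior: float = 0.5) -> float:
          """Crude log-odds accumulation: each item nudges belief up or down in
          proportion to its credibility. An illustration, not a rigorous model."""
          log_odds = math.log(prior / (1.0 - prior))
          for item in items:
              direction = 1.0 if item.supports_claim else -1.0
              log_odds += direction * item.credibility
          return 1.0 / (1.0 + math.exp(-log_odds))  # back to a probability-like score

      evidence = [
          Evidence(supports_claim=True, credibility=0.9),   # e.g. a reproduced study
          Evidence(supports_claim=True, credibility=0.3),   # e.g. an anecdote
          Evidence(supports_claim=False, credibility=0.6),  # e.g. a contested rebuttal
      ]
      print(f"belief in the claim: {weighted_belief(evidence):.2f}")

    The point of the sketch is that low-credibility items get discounted rather than deleted, which is roughly the behaviour the comment argues platforms should be free to offer without incurring liability.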

  • So... (Score:5, Insightful)

    by stikves ( 127823 ) on Thursday May 27, 2021 @08:46PM (#61429978) Homepage

    It's not the user's fault when they post something on Facebook; no, go after the platform instead, because it has deep pockets. Bonus points: the government also gets to dictate what is "acceptable" and what is not.

    Yes, I know we can do better than our current state. But look at Slashdot as a medium-sized online platform: if you browse at -1, you will see "the scum of the Earth"; if you browse at 2, you will see very civil, level-headed discussions.

    The same goes for most other platforms, be it Reddit, Twitter, or even 4chan. Users can choose which content they are exposed to by subscribing to different communities or following other users. If you just jump to "latest", yes, the vile stuff will be there. Caveat emptor. (A minimal sketch of this kind of threshold filtering follows below.)
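
    The browse-at-a-threshold behaviour described above boils down to a simple score filter. Here is a minimal sketch in Python; the Comment fields and the -1..5 score range follow Slashdot's familiar conventions, but this is illustrative code, not Slashdot's actual implementation:

      from dataclasses import dataclass

      @dataclass
      class Comment:
          author: str
          score: int   # moderated score, conventionally clamped to -1..5
          body: str

      def browse(comments: list[Comment], threshold: int) -> list[Comment]:
          """Return only the comments at or above the reader's chosen threshold."""
          return [c for c in comments if c.score >= threshold]

      thread = [
          Comment("troll", -1, "vile stuff"),
          Comment("ac", 1, "drive-by remark"),
          Comment("regular", 3, "civil, on-topic discussion"),
      ]

      print([c.author for c in browse(thread, threshold=-1)])  # everything, including the worst
      print([c.author for c in browse(thread, threshold=2)])   # only the civil discussion

    Readers, not the platform, pick the threshold, which is the commenter's point about choosing one's own exposure.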
