
Google Develops Free Terrorism-Moderation Tool For Smaller Websites (arstechnica.com)

Google is developing a free moderation tool that smaller websites can use to identify and remove terrorist material, as new legislation in the UK and the EU compels Internet companies to do more to tackle illegal content. From a report: The software is being developed by the search giant's research and development unit Jigsaw in partnership with Tech Against Terrorism, a UN-backed initiative that helps tech companies police online terrorism. "There are a lot of websites that just don't have any people to do the enforcement. It is a really labor-intensive thing to even build the algorithms [and] then you need all those human reviewers," said Yasmin Green, chief executive of Jigsaw. "[Smaller websites] do not want Isis content there, but there is a ton of it all over [them]," she added.

The move comes as Internet companies will be forced to remove extremist content from their platforms or face fines and other penalties under laws such as the Digital Services Act in the EU, which came into force in November, and the UK's Online Safety bill, which is expected to become law this year. The legislation has been pushed by politicians and regulators across Europe who argue that Big Tech groups have not gone far enough to police content online. But the new regulatory regime has led to concerns that smaller start-ups are not equipped to comply and that a lack of resources will limit their ability to compete with larger technology companies.
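
The workflow Green describes, an algorithm that flags likely terrorist material plus human reviewers who make the final call, is easy to picture in code. The sketch below is purely illustrative: Jigsaw has not published the tool's interface, so every name here (Post, ReviewQueue, the classifier and its 0.8 threshold) is a hypothetical stand-in, not the actual API.

    # Purely illustrative sketch of a flag-then-review pipeline; none of
    # these names come from the actual Jigsaw/Tech Against Terrorism tool.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Post:
        post_id: str
        text: str

    @dataclass
    class ReviewQueue:
        """Flagged posts wait here for a human moderator; nothing is auto-deleted."""
        pending: List[Post] = field(default_factory=list)

        def enqueue(self, post: Post) -> None:
            self.pending.append(post)

    def moderate(posts: List[Post],
                 classifier: Callable[[str], float],
                 queue: ReviewQueue,
                 flag_threshold: float = 0.8) -> None:
        """Score each post; anything at or above the threshold goes to human review."""
        for post in posts:
            score = classifier(post.text)  # 0.0 (benign) .. 1.0 (likely terrorist)
            if score >= flag_threshold:
                queue.enqueue(post)  # a human reviewer makes the final call

    # Stand-in classifier for demonstration; a real deployment would call a model.
    queue = ReviewQueue()
    moderate([Post("1", "harmless chatter"), Post("2", "recruiting propaganda")],
             classifier=lambda text: 0.9 if "propaganda" in text else 0.1,
             queue=queue)
    print([p.post_id for p in queue.pending])  # -> ['2']

The point of such a design is that the classifier only routes content to a queue; the labor-intensive step Green mentions, human review, is precisely the part the tool does not automate.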

This discussion has been archived. No new comments can be posted.

  • by bradley13 ( 1118935 ) on Tuesday January 03, 2023 @10:41AM (#63176314) Homepage

    I don't know the state of the legislation in various places. However, it seems to me that penalties should only be assessed if illegal material is not removed after the operator has been officially notified of its existence by a responsible government agency.

    Small operators don't have moderation departments that can inspect everything. Nor do they have legal departments that can determine what is legal and what is not.

    • by twms2h ( 473383 )

      I don't know the state of the legislation in various places. However, it seems to me that penalties should only be assessed if illegal material is not removed after the operator has been officially notified of its existence by a responsible government agency.

      I agree that this should be the case, but some jurisdictions (including the EU) see it differently and want website operators to be judge and enforcer for their own sites at the same time. They also demand that a dispute system be in place, because they still pay lip service to "free speech".

    • I have a contrarian view. A web service provider should take responsibility for its property proactively, not hide behind the "I didn't know" excuse. My frustration is not so much with the speech aspect as with the malicious and scam content (particularly advertisements) hosted by web service providers. If someone were defecating on your front lawn, you would take action without waiting for a responsible government entity to notify you. Before someone trots out the common carrier argument...
    • by taustin ( 171655 )

      There are two reasons that won't happen, at least in the US:

      First, ISPs can censor you in ways the government legally can't. It is, after all, "our network, our rules," never mind that those rules are based on legal requirements. It's a shell game, but one that works.

      And second, it would cost money that is better spent (in the opinion of the powers that be) on graft and corruption.

      (And, of course, it's an impossible goal anyway, and they know it, but this isn't about controlling dangerous speech, it's just about...)

    • by AmiMoJo ( 196126 )

      "We don't have the resources to stop terrorism on our platform" probably isn't going to be seen as a very strong argument for an exemption.

      Kinda like "our staff are too busy to check ID, but if you tell us when kids are buying booze we'll stop them."

  • I believe this violates Texas law against social media censorship.
  • Nope (Score:5, Insightful)

    by ocean_soul ( 1019086 ) <tobias DOT verhulst AT gmx DOT com> on Tuesday January 03, 2023 @11:26AM (#63176440)

    No website operator in their right mind should even remotely consider using something like this. Especially not if it is produced by Google. That is just a recipe for utter disaster.

    • by AmiMoJo ( 196126 )

      Nonsense. Let it flag stuff for human review. Most services already use commercial spam filters and the like anyway.

  • Neat! (Score:5, Insightful)

    by Errol backfiring ( 1280012 ) on Tuesday January 03, 2023 @11:48AM (#63176502) Journal
    Given that there is no official definition of terrorism, and almost all countries refuse to come up with one, I am astounded that somebody solved this undefined problem.
    • by taustin ( 171655 )

      The official definition of terrorism is like the official definition of spam: "That which I do not like," with a big helping of "that which I do not do" mixed in.

      It's a tool to suppress inconvenient opinions (and facts).

  • OBVIOUS PROBLEM IS OBVIOUS.

  • Which is what happens the first time some Republican makes terrorist threats and is targeted.

    • So you belong to the "anything anyone says that I do not like" club. Remember, Antifa went on the record that "Free speech is violence!" and, years later, that "Silence is violence!" Why do I fear that this will be heavily abused in clown world, and why would you let it end up in the hands of people who might just use it against you later? Always side with liberty. Censorship never benefits a free and fair society. The Brown Shirts religiously hounded any public speaker...
  • A centralized database run by Big Tech, used to deplatform people? Yeah, this totally will never be used to legislate morality and silence anyone who refuses to comply with the party line.

    Incoming: "Private corporations aren't bound by free speech, just make yourself a new _______ if you don't like it." this iteration the missing word will be "internet".

  • Remember when our media companies were our friends? Then Google taught them how to bite. Twitter started with a simple kit to stop illegal content. Those tools turned into an all-in-one censorship monster that put a smile on every control freak's face. We went from sharing ideas and discussions to people wanting to control, at the push of a button, who can criticize and what the narrative is. This has little to do with stopping terrorism. It will quietly expand into other subjects.
    • Media companies were never our friends -- they wanted eyeballs, control, and money as they sold your public data to private companies.

  • No problems with offloading your moderation to an actor with ulterior motives. Bonus points if it's closed-source.

    Like, at that point you might as well host your website in China.
