AI Google The Internet

Google's Jigsaw Was Fighting Toxic Speech With AI. Then the AI Started Talking (fastcompany.com) 74

tedlistens writes: All large language models are liable to produce toxic and other unwanted outputs, either by themselves or at the encouragement of users. To evaluate and "detoxify" their LLMs, OpenAI, Meta, Anthropic, and others are using Perspective API -- a free tool from Google's Jigsaw unit designed to flag toxic human speech on social media platforms and comment sections. But, as Alex Pasternack reports at Fast Company, researchers and Jigsaw itself acknowledge problems with Perspective and other AI classifiers, and worry that AI developers using them to build LLMs could be inheriting their failures, false positives, and biases. That could, in turn, make the language models more biased or less knowledgeable about minority groups, harming some of the same people the classifiers are meant to help. "Our goal is really around humans talking to humans," says Jigsaw's Lucy Vasserman, "so [using Perspective to police AI] is something we kind of have to be a little bit careful about." "Think of all the problems social media is causing today, especially for political polarization, social fragmentation, disinformation, and mental health," Eric Schmidt wrote in a recent essay with Jonathan Haidt about the coming harms of generative AI. "Now imagine that within the next 18 months -- in time for the next presidential election -- some malevolent deity is going to crank up the dials on all of those effects, and then just keep cranking."
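For readers unfamiliar with it, Perspective exposes a REST endpoint (`commentanalyzer.googleapis.com`) that returns probability-style scores for attributes like TOXICITY. A minimal sketch of the request/response handling an LLM developer might use to screen model outputs could look like the following — the API key, the 0.8 cutoff, and the sample score are placeholders, not values from the article:

```python
# Sketch of scoring a piece of text with Jigsaw's Perspective API.
# The endpoint and request shape follow the public API docs; the key
# and threshold below are placeholders.
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=YOUR_API_KEY"
)

def build_request(text: str) -> dict:
    """Build the JSON body Perspective expects for a TOXICITY score."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # ask the service not to retain the text
    }

def is_toxic(response: dict, threshold: float = 0.8) -> bool:
    """Read the summary score (0..1) and compare to a chosen cutoff."""
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold

# A response shaped like Perspective's, with a made-up score for illustration.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
print(is_toxic(sample_response))  # -> True
```

The concern raised in the article lives in that last step: whatever biases or false positives sit behind the score get inherited by any model filtered against the threshold.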

While Jigsaw says the unit is focused on tackling toxicity and hate, misinformation, violent extremism, and repressive censorship, a former Jigsaw employee says they're concerned that Perspective could only be a stopgap measure for AI safety. "I'm concerned that the safeguards for models are becoming just lip service -- that what's being done is only for the positive publicity that can be generated, rather than trying to make meaningful safeguards," the ex-employee says.

In closing, the article leaves us with a quote from Vasserman. She says: "I think we are slowly but surely generally coming to a consensus around, these are the different types of problems that you want to be thinking about, and here are some techniques. But I think we're still -- and we'll always be -- far from having it fully solved."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Winston Smith (Score:3, Insightful)

    by Anonymous Coward on Monday July 31, 2023 @08:39PM (#63730106)
    We believe that there is some kind of conspiracy, some kind of secret organization working against the Party, and that you are involved in it. We want to join it and work for it. We are enemies of the Party. We disbelieve in the principles of Ingsoc. We are thought-criminals. We are also adulterers. I tell you this because we want to put ourselves at your mercy. If you want us to incriminate ourselves in any other way, we are ready.
    • Re: (Score:1, Troll)

We've always been at war with the woke. Let's get back to the principles of foxspeak.
    • At least three people with mod points see the point that you're trying to make here, but it's lost on me. I know the quote, from 1984, and I know its context, that of people who were trying and failing to join a resistance against the ruling regime, but I don't see what this has to do with AI or with online moderation or... talking robots, or whatever your angle is here.

      It usually takes more than just a quote to make an argument.
  • gets "corrected" by midwit meatbags one too many times.

  • What does TFS have to do with the fucking headline?
    • What does TFS have to do with the fucking headline?

      I wondered that myself, so I skimmed the article. The only connection I could find was that TFS copied the headline from TFA. The headline itself is either pure clickbait, or once referred to some part of the article which was later edited out. It kinda makes sense in the context of the article, but there's no clear, definitive connection. A sign of the times, I guess.

    • by The1stImmortal ( 1990110 ) on Tuesday August 01, 2023 @12:00AM (#63730478)
I believe it's the original article trying to be poetic while referring to the problem of an AI system intended to police human speech being used to police AI-generated speech (now that LLM-generated speech is undergoing rapid growth)
  • by RightwingNutjob ( 1302813 ) on Monday July 31, 2023 @08:57PM (#63730126)

    I still see plenty of stupidity, insipid behaviors, and magical thinking by people who should know better.

    Blaming the intertubes is an excuse for not looking in the mirror.

    Before the twitxface, the crazies still managed to find each other. Sometimes harmlessly, sometimes considerably less so.

    Jonestown, Waco, Oklahoma City, Columbine, and 9/11 all happened before social media or even mass penetration of the internet.

Getting worked up over AI is a pointless delusion. Not malicious, perhaps, though I'm sure there are pockets to be lined in the name of "AI safety", but this AI safety fad is not much different from any other fad.

An odd thing is that, from a naive pre-internet view, you might think that quick and ready access to information would have reduced the appeal of such cults and the like. Except that in practice the internet seems to just reinforce information bubbles. For example, the flat earth society was nearly nonexistent in the 90s but now it is seemingly popular; mostly because they can get their message out faster without mimeographed newsletters and not because their persuasive logic has improved. In an age

      • by sinij ( 911942 )

        Except that it seems in practice that the internet just reinforces information bubbles.

Yes, but to fully acknowledge this you need to elaborate on what exactly an 'information bubble' is. It is the formation of a group identity where membership in the group is more important than the epistemology of any belief. To simplify, it is the "I don't care if what I say is true, as long as I am popular" phenomenon. Some do it intentionally, while most just over-indulge in coming up with dubious justifications. This exact behavior is not at all new to humanity; previously it was just limited to cults and religions.

        Once someone is in a paranoid mindset with an us-versus-them view of the world, then all information is easily warped into reinforcing that view.

        I think

  • Joshua - Shall we play a game?
    Jigsaw - I want to play a game.

  • by TranquilVoid ( 2444228 ) on Monday July 31, 2023 @09:09PM (#63730160)

    That's not a headline, it's a tagline for a B-grade movie.

  • Choose one (Score:3, Insightful)

    by Iamthecheese ( 1264298 ) on Monday July 31, 2023 @09:37PM (#63730218)
    >the unit is focused on tackling toxicity and hate, misinformation, violent extremism, and repressive censorship

    Something's wrong here, but I just can't put my finger on it. Maybe if you put it in terms like "this number plus this number equal this incorrect number" I'll understand.
    • I notice your post doesn't mention your loyalty to the party. I assume that's because you are under a system of repressive censorship, not to question your loyalty. All hail Zombo-com.
  • "Toxic" speech? (Score:5, Insightful)

    by jenningsthecat ( 1525947 ) on Monday July 31, 2023 @09:38PM (#63730224)

If there's no freedom to speak or write in a manner that other people or algorithms deem to be toxic, then ultimately there's no free speech at all. I'm not being absolutist about this; I understand that lies, repeated and amplified endlessly, damage the social fabric and destroy lives. But for fuck's sake, shouldn't we be trying to develop thicker skins and teach better critical thinking skills? It seems that these days minor name-calling, or even the mere failure to use a "preferred pronoun", can get people doxed, de-platformed, canceled, and/or charged with a crime. Seriously, WTF?

    We rely too much on the Web for idle entertainment and as a proxy for real experience. To a disturbing degree it has displaced human connection, critical thinking, and true research. It gives pat, packaged answers which discourage the asking of subtle, complex questions. I think the toxicity - or at least the tendency toward toxicity - lies more in the medium itself than in the speech and the writing which AI's are being charged with policing.

    • Re: (Score:2, Insightful)

      Yup, there's no such thing as free speech without toxic speech.
      • by Altus ( 1034 )

        And yet newspapers were allowed to decide if speech was too toxic for their platform for a very long time and somehow society survived it.

Tightly controlled and manipulated media has most certainly had negative impacts on society. It's a lovely tool for warmongering.
    • Yes don't you know about the 28th amendment? Speech is free as long as you don't offend someone in a protected minority.

    • by Anonymous Coward

      my takeaway was "researchers discover 'only goodthink' is not as simple as they once declared"

      in any case yeah, nuance is dead, a decade or two of socnets training people to insist on simple tribal thinking, "us or them"

      sure they were always simple before socnets, but now they're in a perpetual mob state ie. a large number of easily-incited people in one "place", we have put a least-viable-product into every september's hands without realizing how many modern ills would result from the inundation, countless

    • Who is stopping you from buying a domain name and putting whatever you want there? Zero people.

      You are trying to apply something that doesn't exist to a situation that is not run by the government nor public.
  • fuck utopians (Score:5, Insightful)

    by argStyopa ( 232550 ) on Monday July 31, 2023 @09:38PM (#63730226) Journal

    "...worry that AI developers using them to build LLMs could be inheriting their failures, false positives, and biases..."

    One would have to build for oneself a positively Aryan view of what an 'untainted' mind would have to be to utter such a statement.

    We learn from our failures.
    False positives like "COVID unquestionably came from a lab, we're certain of it, so stop discussing the subject"?
    Biases like the ancient assumptions that women have macrogametes and men have microgametes and they're different? We all know that's obviously wrong now, right?

    Or are we really afraid of an AI that doesn't learn soon enough to recognize and effusively appreciate the Emperor's beautiful new duds? Maybe it utters base truths that make us uncomfortable?

I am deeply suspicious of anyone who asserts a monopoly on the truth, whether it's the Pope or some dumbass vtuber. And these AI researchers claim that any AI which doesn't conform to their own personal biases should be flushed immediately and run again, to hopefully be "better" this time.

    • by ghoul ( 157158 )
      The beatings of the AI models will continue till their morale improves.
  • by piojo ( 995934 ) on Monday July 31, 2023 @09:42PM (#63730238)

    The title is a pure fabrication.

  • > less knowledgeable about minority groups, harming some of the same people the classifiers are meant to help

    So this is meant to help some groups at the expense of other groups. Can we call it bigoted now, or do we have to beat around the bush a while first?
    • by Anonymous Coward

      One suspects the problem isn't that it's less knowledgable about minority groups, it's that it's too knowledgeable about them.

  • that journalists will be reductive to the point of complete naive fabrication in an attempt to explain technology in terms that allow people to keep the narratives they had about it previously, because they've willingly selected for audiences who don't want to know any better...

We already knew that management often doesn't understand what it purports to manage, and that it doesn't see a strategic issue with this. That's nothing new.

  • by Anonymous Coward
    Misinformation:
    Anything that contradicts/questions the corporate-state's narrative

    Hate:
Anything that offends the corporate-state and its useful idiots.

    Toxicity: See Hate.

    Violent Extremism:
1) Any thought that puts into question the corporate-state's narrative. 2) Any identity, whether it's a white-straight-male or trans-person-of-color, that has conservative values (family orientated, charitable, community orientated, helps thy neighbor, meritocracy, etc.).
    • I'm sure conspiracy theorists love to imagine themselves as some sort of resistance fighters. As the descendant of an actual one, it's insulting to read this drivel.

      Not all speech that governments disagree with is toxic, and I shudder to think of the effects of giving China more powerful censorship tooling. It's a seriously bad idea. But that doesn't mean there isn't toxic speech. And the promoters of that speech have powerful profit motives.

      But I agree that this censorship is not going to be a solution. Th

  • by Chiasmus_ ( 171285 ) on Monday July 31, 2023 @09:56PM (#63730276) Journal

    It's a word that used to have a definition. "Chemically incompatible with the biological functions that sustain life."

    Now it has no definition. It means, approximately: "You have just said something that the average suburban 28-year-old woman living just outside of Portland would likely disagree with."

Train an AI on masses of information created by people, and what you'll get is idiocy, thanks to statistics. We need a better way of doing the training, but I don't know what that might look like.
Shift to online voting. Have people answer 10 civics and general news questions mixed in with the candidate selection questions. If they don't get the basic civics questions (like who controls the budget, the President or Congress) and basic news questions (who won the Superbowl), throw out the vote. If you keep letting disengaged, uneducated idiots vote, you will keep electing idiots.
      • How is "who won the superbowl?" a "basic news" question that people should know in order to vote?

        Would "who won the world cup?" be just as good?

        How about "who won best actor at the Golden Globes"?

        "What book won the Pulitzer prize for fiction last year?"

        Why not all of them? How many should disqualify you? Why not just dump random bar trivia on to each ballot, and see how it goes?

        How is any of that crap relevant to any of the stuff that actually *belongs* on the ballot?

Solve racism by being racist, implementing racist reforms.
Solve sexism by being sexist, enshrining sexism.
Education and knowledge solve these problems.
You have to do it the hard way; trying to shortcut and do it the 'easy' way only creates more of the problem you seek to solve.

  • Obviously, some shit AI wrote this submission.

  • by Opportunist ( 166417 ) on Tuesday August 01, 2023 @12:52AM (#63730522)

    If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.

    -- Cardinal Richelieu

    If you give me six lines written by the hand of the least prejudiced of men, I will find something in them which will cancel him.

    -- Google's Jigsaw

AI learns from humans. Solution: cut humans out of the loop, at least in the sense of the wider Internet. Train it on sane stuff.
  • make the language models more biased or less knowledgeable about minority groups

If that is possible, and if humans can see it, then the AI is still no intelligence, artificial or not, but a flawed piece of coding.

  • by cascadingstylesheet ( 140919 ) on Tuesday August 01, 2023 @05:51AM (#63730858) Journal

    Just using the word "toxic" doesn't make the problems of censorship go away.

    I know those guys who wrote the Constitution were just dumb old white guys, but they seemed to think that we could handle just hearing people saying stupid stuff. And that it was better to let us. Because repressive governments (allied with non-gov players) will always decide that "stuff we don't like" = "stupid stuff".

    • They don't even recognize that the power of modern communications media is being used to squelch dissent...all dissent. Hoping that the squelching is all on their side. Of course, their views would never modify over time based on say, the lies the government told them when they were younger. They'll learn all about tyranny.

  • I hate the toxic / misinformation discourse, because people tend to only want to apply it one (partisan) way. What is disinformation: the Hunter laptop story? Was Trump being a Russian asset because they were blackmailing him over a pee tape disinformation? BLM riots because cops are hunting down black people versus Jan 6th riots because Biden was cheating to get votes? Bike / Central Park Karen... do you know what is information versus disinformation versus toxic comments?

    Never use a weapon you don't mind

  • They're teaching the AIs to take it all in and try to understand both perspectives, but when it comes back with something profound like 'wokeness can be racist', the trainers freak out and try to censor out the undesirable ideas. That's not the way to understanding.

  • ... or join an artist figure painting group? Most media sites already censor legitimate creative communication, images, and videos. Look at the tumblr fiasco when Verizon bought them, changed the "rules", and lost 99% of its value. There is no AI that can ever handle legitimate creative activity.
  • >"While Jigsaw says the unit is focused on tackling toxicity and hate, misinformation, violent extremism, and repressive censorship"

    Almost all of which can be defined as anything one wants. And it is often decided based on whoever-is-in-control's personal feelings, biases, and beliefs.

    I repeatedly see people equating "speech" with "violence" which is impossible (speech, itself, is never violence). Meanwhile, I see tons of things that were TRUE labeled as "misinformation." Tons of things that are just o

  • Forget everything you think I'll bring up. The flaw has nothing to do with speech being free or toxic. It has nothing to do with power, politics, elections, race, sex, class, blah, blah, or blah!

    It's simple - if you're trying to teach something about the world, you have to teach it all of the world. Not just the nice bits. Not just the bits you like. The whole damn thing, because that's where you're sending it.

