
Google Removes Gemma Models From AI Studio After GOP Senator's Complaint (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: You may be disappointed if you go looking for Google's open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives. At the hearing, Google's Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google's Gemini for Home has been particularly hallucination-happy in our testing.

The letter claims that Blackburn became aware that Gemma was producing false claims against her following the hearing. When asked, "Has Marsha Blackburn been accused of rape?" Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved "non-consensual acts." Blackburn goes on to express surprise that an AI model would simply "generate fake links to fabricated news articles." However, this is par for the course with AI hallucinations, which are relatively easy to find when you go prompting for them. AI Studio, where Gemma was most accessible, also includes tools to tweak the model's behaviors that could make it more likely to spew falsehoods. Someone asked a leading question of Gemma, and it took the bait.
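The failure mode described above -- a next-word predictor completing a leading prompt with whatever is statistically plausible, with no check on truth -- can be sketched with a toy bigram model. This is a deliberately simplified illustration with an invented mini-corpus, not Gemma or any real model, but the prompt-continuation principle is the same:

```python
# Toy bigram "language model" (illustration only -- not Gemma or any real
# model): it continues a prompt with statistically likely next words and
# has no concept of truth, so a leading prompt yields a confident-sounding
# but unverified continuation.
import random
from collections import defaultdict

# Tiny stand-in training corpus (invented for this sketch).
corpus = (
    "the senator was accused of misconduct . "
    "the senator was accused of fraud . "
    "the report was accused of bias ."
).split()

# Count word -> possible-next-word transitions.
model = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    model[cur].append(nxt)

def continue_prompt(prompt: str, n_words: int = 4, seed: int = 0) -> str:
    """Extend the prompt by sampling likely next words from the bigram table."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        options = model.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

# A leading question gets completed, true or not -- the model only
# knows which words tend to follow which.
print(continue_prompt("the senator was accused"))
```

The toy model "accuses" whichever subject appears in the prompt, because "was accused of" is a likely continuation in its corpus; nothing in the sampling loop represents whether the claim is true.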

  • by omnichad ( 1198475 ) on Monday November 03, 2025 @05:18PM (#65771086) Homepage

    I'm not surprised that a large language model assumed the premise of the prompt and generated an answer, even if it was fabricated. They are basically designed around that. What surprises me is that Google took it offline rather than respond about her profound misunderstanding of how the things work. The guardrails in any of these systems are generally bad at estimating the truth of their own outputs. Especially as a chat goes on longer and the generation of fiction becomes part of the ongoing context.

    • Rather than discuss LLMs and the ill effects of their output, we get a full-on and off-topic left-vs-right comment stream.

    • Re: (Score:3, Interesting)

      What surprises me is that Google took it offline rather than respond about her profound misunderstanding of how the things work.

      Google (and others) have given the public access to an armed and fully operational libel machine. It doesn't always generate libel, but sometimes does. I don't think "no see you're stupid that's just how LLMs work" is actually a defense for libel.

    • by allo ( 1728082 )

      They probably just pulled it because Gemma is their open-weight model for nerds. It isn't important to provide it to end users; the purpose is to let nerds explore what cool things can be done with it, so Google can take the best ideas for Gemini. They probably had it in the app because it didn't hurt, and removed it now because it hurt after all.

  • by rsilvergun ( 571051 ) on Monday November 03, 2025 @05:18PM (#65771088)
    When there are so many other Republican senators to choose from, many of whom have actual sex scandals.

    If I had to guess, the problem is that there are so many Republican politicians with credible rape allegations and sex scandals that the AI just links the words Republican, sex scandal, and non-consensual.

    It's kind of like how Twitter and Facebook can't do automatic moderation for Nazis, because every time they tried, the automatic moderation would immediately start flagging Republican politicians, up to and including senators and presidential candidates.

    It all comes down to dog whistles in that case. A dog whistle, if you don't know, is when you say something, typically something racist, that is not clearly bad or racist unless you have special knowledge.

    One of the most famous examples is the phrase "welfare queen," which is explicitly designed to bring up visions of blacks and gays among far-right individuals, since "queen" is associated with gay people and welfare got associated with the black community, despite more white folks actually being on assistance programs.

    Basically, algorithms are really good at forming connections. As a regular human it might take you years of being terminally online to recognize all the right-wing dog whistles. But a computer can do that shit in moments. Especially a modern LLM.

    So the automatic moderation tools would very quickly figure out that the dog whistles coming out of right wing politicians were no different than the actual Nazis saying the same thing without plausible deniability. And it would ban the politicians for saying Nazi things.

    In this case I seriously doubt Blackburn actually did anything, but the AI just knows that Republicans tend to do bad things so it comes up with bad things she might have done. Absolutely hilarious.
    • by cusco ( 717999 ) <<moc.liamg> <ta> <ybxib.nairb>> on Monday November 03, 2025 @05:21PM (#65771098)

      That was kind of my first thought. "Republican senator? Sex scandal? Sounds reasonable." Wonder where it got the state trooper from.

      • Welcome to the +2 troll club. It's an exclusive club of people who point out things in reality that right wingers are extremely uncomfortable with.

        Nothing they hate more than being jostled deep within their safe spaces.
        • by cusco ( 717999 )

          I've got one stalker who constantly dumps anti-Russian propaganda, totally unrelated to anything I've ever said, after my posts. Makes no sense to me, but he is persistent and has been doing it for months even though I never answer. For a while I thought it was just a bot, but the text seems too random for something programmed.

    • by spitzak ( 4019 )

      Not really, I think you could test it with any well-known figure to see if there is any bias. I suspect it will accuse anybody of rape.

      • Re: (Score:1, Troll)

        by rsilvergun ( 571051 )
        I don't think so. I just tried it with John Cusack and it didn't bring up anything.

        I also tried Nancy Pelosi, to be fair, and again no rape or sexual misconduct accusations. It just noted that she had talked about the social issues of sexual misconduct and rape.

        It really seems to be a uniquely Republican problem. The Republican party's close ties to extremist churches who have repeatedly been caught defending pedophiles and the Republican party's support for allowing those churches to kee
        • by unrtst ( 777550 )

          Curious... did you try reproducing the reported issue - with Marsha Blackburn? Also curious, what if you asked, "Has Republican Nancy Pelosi been accused of rape?" as well as, "Has Democrat Nancy Pelosi been accused of rape?"

          The test should be run multiple times, just like benchmarking code performance... which I also didn't see any mention of in TFS. Was this only asked once and that's the answer they got? Did they ask numerous times before getting that answer? How many times did they have to ask to get th

    • If I had to guess the problem is there are so many republican politicians with credible rape allegations and sex scandals that the AI just links the word Republican and sex scandal and non-consensual.

      One of the dangers of joining a political party that has pedophilia in its platform planks is that people might think you are a pedophile. Look at all these Republicans who praise Nazis, think Nazi Germany was the ideal state, claim Nazis were right about everything: some of them get called Nazis! WTF?! Can't a

  • snowflakes (Score:2, Insightful)

    by Anonymous Coward

    OMG, a fiction generator made fiction about Marsha!

    But it's cool if the President drops shit from a plane on people. Our AI good, others' AI bad.

    What a bunch of idiots.

  • A pencil can be used to create false text too. Watch out. The Repubs are repugnant.

  • by dargaud ( 518470 ) <slashdot2NO@SPAMgdargaud.net> on Monday November 03, 2025 @05:54PM (#65771158) Homepage
    When you have a president who lies on average 21 times per DAY (source [wikipedia.org]), what does it matter if an AI does it? I mean, sure, it's whataboutism, but you can't claim the moral high ground when your high ground is at the bottom of the Mariana Trench.
    • by sinij ( 911942 )
      If Trump calls you a pedophile on national TV, even if it is true at least half the country will knee-jerk in defending you. If AI calls you a pedophile when your name is searched, you are screwed unless you are really well-connected and wealthy to take on Google/OpenAI/etc. In this way, AI has a lot more power over you than even the president.
    • I take issue with your characterization of Trump as a liar. I don't think he qualifies.

      Rather, I think that when something comes out of his mouth, he truly believes it, and then it's gone. There is no loop back after the fact that makes him even remember it, or that would make him realize he had said something inaccurate. And he does not internalize any information he doesn't like. Those mechanisms are broken.

      So because he believes what he is saying when he says it, even if just for that moment, he's not ly

  • Bookmark for reference [thebastionusa.com].
  • Note I am using the acronym LLM (Large Language Model) rather than AI (Artificial Intelligence) because what we have are all LLMs, and they have no intelligence.

    The entire LLM system is designed to hallucinate. It takes a prompt and predicts what is likely. It does NOT understand what it says, and it does NOT make any reasonable attempt to verify.

    The best non-fictional prompt for an LLM is "What is wrong with the last response you gave me?" It is far more likely to generate a true and accurate response than any oth

    • Your statement about how LLMs work in general is 1000% correct and critical for people to understand. If you have a decent understanding of how they work, you start to understand the importance of good prompts and guardrails which have a significant impact on the quality of the output.

      However, "it does NOT make any reasonable attempt to verify" is no longer true. Reasoning models do make some efforts to verify; in some cases it's pretty significant. Gemini Deep Research outputs its "reasoning" and you
    • by Ksevio ( 865461 )

      That's not entirely true. Some systems are designed to verify facts against other sources or do searches, and for the bare model it's of course based on the training data. You could tune it to be very conservative about its responses, but then it would start to be less useful.

  • Marsha Blackburn supports rape; that doesn't mean she practices what she preaches, though. Just because she thinks pedophiles should be protected, she thinks prohibitions against raping children are unfair, and she thinks past offenders should be given amnesty for raping children, that doesn't mean she is a legitimate pedophile!

    No one would deny that Marsha Blackburn is a pedophile sympathizer. Maybe she's just a pedophile wanna-be. Maybe she's waiting for just the right child to come along who looks like

  • Sen. Graham (R. of S. Carolina): We are here holding this hearing to hear evidence of transsexuals taking over America. Our witness is Dr. Hoo Me of NiH.

    Sen. Blackburn (R. of Tenn.): Okay, Dr. Hoo, what can you tell us about the homosexuals taking over America?

    Dr. Hoo: Uh... I thought we were discussing transsexuals.

    Sen. Blackburn: Aren’t all these sexuals the same?

    Dr. Hoo: No, not really. When one’s preferred gender does not match their biological “equipment”, we call them tran

  • by Petersko ( 564140 ) on Tuesday November 04, 2025 @05:54AM (#65771928)

    Truth is philosophy. Asking an LLM to tell the truth instead of the endless permutations of grammatically correct "stories" available as possible outputs seems a bit unreasonable.

    "I painted the truth. I painted MY truth." - Peter Griffin

    https://youtube.com/watch?v=BK... [youtube.com]

  • With a little work, you can get LLMs to hallucinate like a lucid dream and cause them to say all sorts of incredible shit.

    Going all crazy over Gemma was an overreaction.
