AI

Gavin Newsom Signs First-In-Nation AI Safety Law (politico.com) 35

An anonymous reader quotes a report from Politico: California Gov. Gavin Newsom signed a first-in-the-nation law on Monday that will force major AI companies to reveal their safety protocols -- marking the end of a lobbying battle with big tech companies like ChatGPT maker OpenAI and Meta and setting the groundwork for a potential national standard.

The proposal was the second attempt by the author, ambitious San Francisco Democrat state Sen. Scott Wiener, to pass such legislation after Newsom vetoed a broader measure last year that set off an international debate. It is already being watched in Congress and other states as an example to follow as lawmakers seek to rein in an emerging technology that has been embraced by the Trump administration in the race against China, but which has also prompted concerns about its potential to cause harm.


Comments Filter:
  • Based on the Governor's website, this is being done to put guardrails on AI so that it complies with "national and international standards," while simultaneously requiring a government-funded consortium to build a public AI. This is all being done in the name of "safety," but it doesn't specify what it is protecting us from. If you follow 4-5 links, you get to California's report I linked below; basically it seems to focus on 'transparency,' which of course is good for any government organization, but why do we need it for private AI models?
    • by abulafia ( 7826 )
      This is basically step 1.

      If you're doing this sort of thing responsibly, you're going to do it slowly, based on what's happening in the real world. This sort of legislation starts doing that - you place markers on where you think you're going, get impacted companies to hire a government affairs person and start generating data, and so on.

      Assuming you think government has any role at all here, this is more or less what you should want them to do. Doesn't do much right now, and it definitely doesn't me

      • If you're doing this sort of thing responsibly, you're going to do it slowly, based on what's happening in the real world.

        This just moves the question from "What are the consequences of AI being used in an unsafe manner?" to "What are the consequences of AI being used in an irresponsible manner?".

        OP's question remains: what does this law protect us from?

        • by abulafia ( 7826 )
          And I answered. Nothing.

          The question misses the point, and leads me to believe you'll continue missing it, so please continue.

    • Safety from competition perhaps?
      • by kenh ( 9056 ) on Tuesday September 30, 2025 @01:10AM (#65692018) Homepage Journal

        From the linked-to report mentioned in the summary:

        The law includes whistleblower protections for AI workers and lays the groundwork for a state-run cloud computing cluster dubbed CalCompute.

        Unclear why, exactly, the AI industry needs a "state-run cloud computing cluster" - how much money will CA taxpayers be funneling into this venture to, you know, prop up these struggling AI companies?

          how much money will CA taxpayers be funneling into this venture to, you know, prop up these struggling AI companies?

          I read this as being a publicly owned competitor to the for-profit AI companies. And I trust that more than I do them.

    • ... basically it seems to focus on 'transparency,' which of course is good for any government organization, but why do we need it for private AI models?

      Transparency around the training process and sources. Transparency around guardrails, the history of successes and failures thereof, bad outcomes that might otherwise be swept under the rug, and specific details that allow comprehensive testing by third parties. Disclosure of all 'hallucinations' so that independent parties can look for repeat misbehaviour, repeated patterns, etc. I think all of these, and probably more, would be useful from a safety point of view.

      Also, unfortunately, the way these models work doesn't allow for any transparency; it's basically a black box that does statistical trial and error, to vastly oversimplify.

      Although they're "private AI models", they

      • Transparency around process and sources for private companies would limit freedom and competitive advantages. It's like requiring Coca-Cola to release their recipe. You could say the exact same things about Google's algorithm, but there was never a requirement for transparency around search, and search existed for 25 years without a law requiring Google to publish its frameworks.
        • by narcc ( 412956 )

          Transparency around process and sources for private companies would limit freedom

          Nonsense. Whose freedom would be "limited"? What ways would it be "limited"?

          and competitive advantages.

          So what? Why should I care if Google loses some nebulous "competitive advantage" to Meta? Transparency is good for the public. You know, the people that our government was created to serve. If something is good for us, the people, why should anyone care if it's inconvenient for some giant corporation?

          You could say the exact same things about Google's algorithm, but there was never a requirement for transparency around search

          A serious oversight that led to Google's near-monopoly on search, dramatically reducing competition in that space and allowing

          • Thanks - you saved me the trouble, and you said it better than I would have.

          • Nonsense. Whose freedom would be "limited"? What ways would it be "limited"?

            Presumably the freedom of private companies to keep trade secrets; they are now forced to reveal their sources, including purpose-built synthetic sources.

            So what? Why should I care if Google loses some nebulous "competitive advantage" to Meta? Transparency is good for the public. You know, the people that our government was created to serve. If something is good for us, the people, why should anyone care if it's inconvenient for some giant corporation?

            Why should anyone care about anyone else? If transparency is good for the public, would you oppose the installation of cameras in every room of your residence and live-streaming them to the public?

            It would have been one thing to have articulated a specific justification for a course of action, along the lines of: on a balancing of interests, I think x is more imp

            • by narcc ( 412956 )

              So ... all you have is some obviously bullshit false equivalence. Why am I not surprised?

              You seem to need a reminder:
              Corporations are not people. Corporations are not your friends. Corporations will never act in the public interest unless compelled to do so by force of law ... and even that isn't always enough. Everything they do is in their own interest, the public be damned. They'll happily kill [pestakeholder.org] children [fa-mag.com] if it's more profitable to do so than not.

              We know what happens when corporations are allowed to

              • So ... all you have is some obviously bullshit false equivalence. Why am I not surprised?

                What makes my remarks a false equivalence? What is the limiting constraint in your statements I have violated?

                Corporations are not people. Corporations are not your friends. Corporations will never act in the public interest unless compelled to do so by force of law ... and even that isn't always enough. Everything they do is in their own interest, the public be damned. They'll happily kill children if it's more profitable to do so than not.

                We know what happens when corporations are allowed to do whatever they'd like. Markets are flooded with spoiled and contaminated food. Rivers become so polluted that they actually catch fire. Whole mountains are blasted away. That doesn't even scratch the surface.

                What the hell does any of this have to do with AI legislation?

                The libertarians like to pretend that "market forces" would keep them in line, but that's obviously not true. It wasn't true historically, and it's not true today.

                Governments exist to serve the public. The public interest very rarely aligns with corporate interest. If something is good for the public but inconvenient for corporations, so much the worse for corporations. All that matters is the public interest.

                If you believe otherwise, that corporate interests should be given any consideration at all, you are the one who needs to justify that. You can't, of course, because it's nonsense.

                More generic blah blah blah. Nothing having jack to do with AI.

                People like to complain about waste and fraud in government, but virtually all fraud and so-called "government waste" comes from the private side of public-private partnerships. This is indisputable. Every dollar in profit is a dollar wasted on not providing the product or service we've foolishly contracted for. The reality is that government is terrifyingly efficient. It should come as no surprise that when your only interest is delivering services, you can do so more efficiently than when your interest is in maximizing profit while delivering as little as possible.

                Don't be such a corporate tool. It's not in your best interest either.

                Yet more generic blabber that says nothing substantive or objective about AI and corporate regulation.

                • by narcc ( 412956 )

                  Yet more generic blabber that says nothing substantive or objective about AI and corporate regulation.

                  You clearly have trouble reading. Find a trusted adult to help you.

      • LLMs are sophisticated search engines regurgitating data that has been fed into them. Government making them "safer" is another word for censorship. The DMCA was sold as an Internet "safety" law, after all.
    • by narcc ( 412956 ) on Monday September 29, 2025 @08:16PM (#65691628) Journal

      why do we need it [transparency] for private AI models?

      You can't possibly be serious.

      unfortunately the way these models work doesn't allow for any transparency

      Nonsense. Sources of training data, for example, can be provided no matter how the model operates internally.

      it's basically a black box

      Nonsense. They're hardly some impenetrable mystery. If that were true, things like attention pruning would be impossible. They're a lot more open than OpenAI's marketing department and years of bad tech reporting have led you to believe (see the sketch after this comment).

      does statistical trial and error to vastly oversimplify

      Nonsense word salad. Where did you get such a ridiculous notion?
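
      A minimal sketch of the attention-pruning point above, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (neither is named in the thread): per-head attention weights can be read straight out of a released model and individual heads can be permanently removed, which would be impossible if these systems were truly impenetrable black boxes.

        # Sketch only: inspect and prune attention heads in an open-weight model.
        # Assumes `pip install torch transformers`; the checkpoint choice is illustrative.
        import torch
        from transformers import BertModel, BertTokenizer

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        model = BertModel.from_pretrained("bert-base-uncased")
        model.eval()

        inputs = tokenizer("Transparency is not impossible.", return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_attentions=True)

        # out.attentions holds one (batch, heads, seq_len, seq_len) tensor per layer.
        print("layers:", len(out.attentions), "heads per layer:", out.attentions[0].shape[1])
        print(out.attentions[0][0, 0])   # full attention matrix for layer 0, head 0

        # Attention pruning: permanently drop heads 2 and 5 from layer 0.
        model.prune_heads({0: [2, 5]})

      None of this exposes the training data, of course, which is why the transparency asks earlier in the thread are about sources and guardrails rather than model internals.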

    • Gavin Newsom is trying to raise his profile in preparation for a run for president next election.
  • Didn't Benjamin Franklin warn us about trying to obtain safety in the world? The world isn't safe, and the government should not try to make it safe. If we want safety then we need to arm ourselves with knowledge and an education. It wouldn't hurt to arm yourself with other things.

    Also, that image of Gavin Newsom at the top of the fine article is dangerous. Someone is going to flip that image horizontally, crop out his wedding band, then post it somewhere to show how he's buddies with Elon Musk or something.

  • Gavin Newsom signed a first-in-the-nation law on Monday that will "force" major artificial "intelligence" companies to "reveal" their "safety" protocols.

    FTFY.

  • All it is is transparency. But it's hardly "safety", except with very thick scare quotes.

    • by evanh ( 627108 )

      It'll reveal the smoke and mirrors. That's always a good thing. Whether there ever can be substantive guard rails for LLMs, I wouldn't know.

  • It's a statistical guessing machine with lots of data. Doesn't that mean it's going to do something stupid, at least some small percentage of the time?
  • So bad actors will know what safeguards are in place, and therefore how to get around them? What could possibly go wrong?

  • You can use AI to steal whatever you want, use it to influence the idea that AI is great, but there is a catch...somewhere toward the bottom of the page.
  • Queue the "AI Safety Department" layoffs and statements from the AI companies that they have no safety protocols to monitor.

  • One man's safety is another man's censorship and persecution.
