AI Technology

California AI Bill Sparks Backlash from Silicon Valley Giants (arstechnica.com) 59

California's proposed legislation to regulate AI has sparked a backlash from Silicon Valley heavyweights, who claim the bill will stifle innovation and force AI start-ups to leave the state. The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, passed by the state Senate last month, requires AI developers to adhere to strict safety frameworks, including creating a "kill switch" for their models. Critics argue that the bill places a costly compliance burden on smaller AI companies and focuses on hypothetical risks. Amendments are being considered to clarify the bill's scope and address concerns about its impact on open-source AI models.
This discussion has been archived. No new comments can be posted.

  • Just point at the big red button on the wall in the DC labeled "EPO" (which the electrical code no doubt already requires) and say: see, kill switch for the AI model, right there.

    • Re: (Score:2, Troll)

      right next to the big red button on the wall that says "Coffee", which is right next to the other big red button on the wall that says "Launch all Nukular Missiles", the one a former president had installed 20 years or so ago...
    • Given the number of stories I've heard about the huge number of blackouts in California I'd have thought that simply being connected to the California power grid would satisfy the "kill switch" condition.
      • Re: (Score:2, Troll)

        by Darinbob ( 1142669 )

        No huge number of blackouts for a couple of decades. Are you still listening to Fox and its "bash all blue states as shitholes" editorials?

  • by Anonymous Coward
    The day when everyone realizes what complete and utter bullshit all this fake-ass 'AI' garbage is and they abandon it once and for all can't come soon enough.
    • by Shaitan ( 22585 ) on Friday June 07, 2024 @12:09PM (#64530825)

      I don't like the term "AI" being used to refer to learning systems with no intention or possibility of attaining sentience either, but the technology isn't garbage.

      I had an idea to make a system to detect arousal [don't ask]. I know enough about different technologies to have an idea of what would be involved in building something like that, and normally I'd dismiss it because there are too many pieces involved that I'd need to learn or relearn, but I decided to take a whirl at using ChatGPT as my personal Jarvis instead.

      Within a day I'd gotten it to produce an Arduino sketch and a simple circuit, and to calculate the right components to best utilize the resolution available on the ESP32's ADCs. While waiting for those parts to arrive, I got ChatGPT to whip up a Node.js instance to receive and record data and also to receive tags, and I had it build a simple web application that can be used to define arbitrary tags and send them in real time. That way a user running the sensor can load the app on their phone and tap labels to tag events on the recorded data as they happen. The parts then arrived, and I was able to build the basic sensor prototype, work out its issues, and verify it worked on the serial console.

      Next, ChatGPT and I will collaborate to get the aforementioned app receiving the data properly, and then to massage that data into the appropriate framework for the later processing and machine learning steps. Now, was any of this as simple as it sounds? No. ChatGPT made plenty of errors, and I had to fill in lots of blanks, spot simple mistakes, and probe it correctly to get it to fix them. But it also saved me from having to learn or relearn many components along the way, and that has taken this hobby project from a non-starter to well on its way to being a reality.
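As an illustration of the sensor loop described above: the commenter had ChatGPT produce an Arduino sketch, but a rough MicroPython equivalent for the ESP32 might look like the following. The ADC pin, sample rate, and server address are hypothetical placeholders, Wi-Fi setup is omitted, and the placeholder URL stands in for the Node.js receiver; this is a sketch of the approach, not the commenter's actual code.

```python
# Hypothetical MicroPython sketch for an ESP32: sample one ADC channel
# and POST each reading to a logging server (Wi-Fi assumed configured).
import time
import urequests
from machine import ADC, Pin

adc = ADC(Pin(34))          # GPIO34 is input-only, a common ADC choice
adc.atten(ADC.ATTN_11DB)    # widest attenuation: ~0-3.3 V input range
adc.width(ADC.WIDTH_12BIT)  # 12-bit resolution: readings 0-4095

SERVER = "http://192.168.1.50:3000/readings"  # placeholder address

while True:
    sample = adc.read()
    try:
        # The receiving service records each timestamped sample; tags
        # sent from the phone app are merged with this stream later.
        urequests.post(SERVER, json={"t": time.time(), "v": sample}).close()
    except OSError:
        pass  # drop the sample if the network hiccups
    time.sleep_ms(100)  # ~10 samples per second
```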

    • The day when everyone realizes what complete and utter bullshit all this fake-ass 'AI' garbage is and they abandon it once and for all can't come soon enough.

      The day, you say?

      It’s almost comical assuming humans have learned a fucking thing since Greed was pimping vaporware.

      You'd better start measuring in millennia.

  • by DeplorableCodeMonkey ( 4828467 ) on Friday June 07, 2024 @10:34AM (#64530509)

    Let it sink in that these firms are REEEEEEEing over a "kill switch": a political label for the ability to turn off a model quickly and completely.

    The absolute horror.

    How is life supposed to imitate art if we demand common sense architectural decisions like giving human controllers absolute and reliable power to kill an AI system?

    • Technical (Score:4, Interesting)

      by JBMcB ( 73720 ) on Friday June 07, 2024 @10:48AM (#64530551)

      How is life supposed to imitate art if we demand common sense architectural decisions like giving human controllers absolute and reliable power to kill an AI system?

      It's not the demand for a "kill switch" in and of itself. It's the lack of fundamental understanding of technology that the requirement shows. All AI models respond to input; if you shut off the input, the model does nothing. A "kill switch" means commenting out a call to a REST service, or, heck, shutting down the Docker container or Kubernetes cluster the model runs in. If you don't send the model input, there is nothing to "kill." It's a nonsensical requirement and likely emblematic of the other issues in the bill.
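To make the comment's point concrete: if a "kill switch" just means refusing to route input to the model, it is a trivial gate in the serving code. Below is a minimal sketch under that assumption; the flag file path and the run_model stand-in are hypothetical, not anything from the bill or the comment.

```python
# Minimal sketch of a "kill switch" as an input gate in front of a model.
# The weights themselves are inert data; all this does is stop feeding
# the model requests.
import os

KILL_FLAG = "/etc/ai/killswitch"  # hypothetical path; create it to "kill" the model

def run_model(prompt: str) -> str:
    # Stand-in for the real inference call (REST service, local weights, etc.)
    return f"model output for {prompt!r}"

def handle_request(prompt: str) -> str:
    if os.path.exists(KILL_FLAG):
        # Operator flipped the switch: refuse all input. With no input,
        # the model does nothing, which is the comment's whole point.
        return "service disabled by operator"
    return run_model(prompt)

if __name__ == "__main__":
    print(handle_request("hello"))
```

Flipping the switch here touches nothing but the request path: the "kill" happens at the input, not inside the model.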

      • Re:Technical (Score:5, Insightful)

        by UnknowingFool ( 672806 ) on Friday June 07, 2024 @11:16AM (#64530651)
        But you are limiting the problem to how AI exists now. The bill probably envisions a future in which AI does not have the limitations present today. At that point, do lawmakers propose new legislation only after it is too late?
        • This.

        • They are thinking of a time a couple of months in the future when "AI" is built into my Roomba and it comes alive and goes on a murderous spree, uh... vacuuming us to death?
        • Last I checked, the AI would be powered by physics, not magic. Wanna emergency-kill the AI? Cut the fucking power. The servers aren't going to magically stay running.

          Even battery backup has breakers/fusing upstream of the servers that can cut power. If it doesn't, the datacenter burns down.

        • But you are limiting the problem to how AI exists now.

          Do you really see a future where we are going to have computer hardware without a power switch? Unless you do, you have a "kill switch" right there: just turn off the machine it is running on.

        • by JBMcB ( 73720 )

          But you are limiting the problem to how AI exists now.

          That's great, but we are nowhere near having a problem with rogue sentient AIs currently, and passing legislation before we even know what a sentient machine would look like is dumb.

          Let's pass legislation setting up zoning rules for Mars and protocols for interacting with alien species when we land on their planet while we're at it.

      • If you don't send the model input, there is nothing to "kill." It's a nonsensical requirement and likely emblematic of the other issues in the bill.

        Your ramblings are nothing but semantic nonsense. Everyone knows what a kill switch is: it fully and completely stops something from working, running, or doing whatever it does. It doesn't matter what form that switch takes.

        The term in the legislation may or may not be defined, but everyone knows what it means. Being a pedantic prick won't change that.
        • semantic

          To be fair, semantics and nuance do matter when it comes to law and legislating; hence why not all killing is considered "murder," with involuntary manslaughter, just plain old homicide, and self-defense as a defense, for instance.

      • Now, relax. Just remember that all elected officials in the USA today are just people who won a popularity contest; almost none are elected because they have specific knowledge in particular fields, and most at the state level or above seem to be lawyers, so this stupidity should be expected.

        I mean, I know the BOFH is satire, but after having worked the hell desk before, I know for a fact that each satirical statement he makes about users (lusers in BOFH parlance) has at least some basis in reality.

        • by caseih ( 160668 )

          I knew at least one CS professor who was functionally computer illiterate. It was pretty amazing to see.

          • Wait, did I ever teach in California? Nope, that wasn't me... Seriously though, when I was a physics prof, someone in the CS department asked me to teach a class in C. I was dumbfounded. Yes, one of my classes included a lab (a 4-credit class, not a 3-credit class with a separate 1-credit lab), and in it I did teach a short intro to C, but that was in addition to a short intro to statistics and the actual labs, all in one semester, so nothing like a "real" intro to C. I declined.
      • How is life supposed to imitate art if we demand common sense architectural decisions like giving human controllers absolute and reliable power to kill an AI system?

        It's not the demand for a "kill switch" in and of itself. It's the lack of fundamental understanding of technology that the requirement shows. All AI models respond to input; if you shut off the input, the model does nothing. A "kill switch" means commenting out a call to a REST service, or, heck, shutting down the Docker container or Kubernetes cluster the model runs in. If you don't send the model input, there is nothing to "kill." It's a nonsensical requirement and likely emblematic of the other issues in the bill.

        You do realize it’s this EXACT kind of blind human ignorance, the kind that simply assumes Greed N. Corruption cannot create Skynet, that will ultimately and ironically create Skynet, right?

        Read up on your history and, for fuck’s sake, STOP assuming humans aren’t that stupid. We certainly are.

      • It's not the demand for a "kill switch" in and of itself. It's the lack of fundamental understanding of technology that the requirement shows. All AI models respond to input; if you shut off the input, the model does nothing. A "kill switch" means commenting out a call to a REST service, or, heck, shutting down the Docker container or Kubernetes cluster the model runs in. If you don't send the model input, there is nothing to "kill." It's a nonsensical requirement and likely emblematic of the other issues in the bill.

        Maybe go back and watch

    • Let's say you're building a company that uses AI to automate the process of doing your taxes.

      Having a "kill switch" for those models means one of two things

      - Either you need to hire a bunch of humans to have on staff, just in case you need to switch off the AI. This would eliminate the entire value proposition of the company.

      - Or, you need to be prepared to shut the entire company down when the "kill switch" is pressed. Not really something investors would like.

      • But then it's good to have this stuff out there as soon as possible, so investors and companies can make that decision with the proper information and risk assessment. If your company was already in that position, I would say you jumped the gun, and early adopters carry risks.

        This is hyperbole, but entertain my analogy: it would be an excellent value proposition if I could pay my workers $2.50 per hour and work them 60 hours a week. That's great value, but I know up front that's off the table.

        • by brunes69 ( 86786 )

          It's not the job of the government to decide whether or not AI is a good value proposition.

          The point I am making here is simple: There are endless applications for Generative AI. 99.99% of them are not dangerous to humans and should not need a "kill switch". Subjecting all companies using AI to that is ridiculous.

          • It's not the job of the government to decide whether or not AI is a good value proposition.

            Yes? That's the point I am making. That should play no part in the regulations, but if there are potential negative outcomes, we should be aware and act, or at least be ready to act.

            There are endless applications for Generative AI. 99.99% of them are not dangerous to humans and should not need a "kill switch". Subjecting all companies using AI to that is ridiculous.

            Maybe, but as in my post, I think we do have to draw a line between these massive tools on the open internet and things that are internal to a company. The internet is like society in general: yes, it's made up of a bunch of private actors in private spaces, but altogether that creates a shared space much bigger than the individual pieces.

            • There are going to be MANY negative outcomes to AI, namely that 50% of the population may be unemployed in 5 years, and if there isn't a massive overhaul to the foundational underpinnings of the economy, and capitalism itself, then there will be enormous civil unrest and the outcome may end up looking like Mad Max.

              But the government is more concerned with stupid Terminator scenarios, which aren't the immediate problem.

              • massive overhaul to the foundational underpinnings of the economy, and capitalism itself

                You know what, I think you and I are mostly on the same page after all, because I do agree with this. But I also think this type of thing has to happen regardless of AI; it's been slowly happening for a while now.

          • A kill switch is not hard.

            Therefore it will not be difficult to comply with this regulation.

            Therefore you are going all reeeee. Never go full reeeee.

      • Let's say you're building a company that uses AI to automate the process of doing your taxes.

        Right, I want my tax forms to claim I have 10,000 dependents that I support through a non-profit organization for selling mutated newts.

        Are you familiar with "AI" at all? It has no concept of reality and can't have one. You don't want it to do anything more important than drawing you pretty pictures where, unfortunately, some of the people have fifteen fingers (so helpful for counting in hex).

        • by brunes69 ( 86786 )

          The efficacy, or lack thereof, is not relevant to this regulation discussion. If the product is not effective, then the market will figure that out.

        • >Are you familiar with "AI" at all?

          You obviously aren't. You've obviously only heard about GPT and Stable Diffusion base models and pretty much nothing else, models that don't even have accuracy in any single subject as a goal. They aren't accurate in any given subject because they *are not meant to be*. No, GPT can't count reliably, because it was never meant to be able to count, or to do any other sort of math.

          The X-ray-diagnosing AIs? Those catch more cases than human doctors, with fewer false positives.

    • It's not the kill switch that is the problem; it's the (specific) security framework that makes it a big problem. And they can amend the bill with regard to open source all they want, but they don't have any legal clout outside California, so no open-source AI has to abide by California's laws.
    • by WaffleMonster ( 969671 ) on Friday June 07, 2024 @11:43AM (#64530739)

      Let it sink in that these firms are REEEEEEEing over a "kill switch": a political label for the ability to turn off a model quickly and completely.
      The absolute horror.

      Any thoughts on how one would go about killing a bag of weights? I have a folder with about a TB of AI models on my PC. Do I now require persistent permission from the government to run models? Please explain how you would implement this short of outlawing open source entirely.

      This legislation is obviously malicious. Beyond the kill switches, it creates impossible standards, like holding model creators responsible for shit enabled by additional training and augmentations done by others. It effectively outlaws models that are not centrally controlled and guarded.

      Heck, they even threw in the infamous 10^26-operations threshold for good measure, to reward the industry for the billion dollars it spent on lobbying in recent years.

      How is life supposed to imitate art if we demand common sense architectural decisions like giving human controllers absolute and reliable power to kill an AI system?

      Whatever dangers people think AI will bring, allowing corporations and governments to hoard the technology is a far worse and far more immediate threat to society.
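For scale on the 10^26 figure: taken at face value as a training-compute threshold, a rough back-of-the-envelope shows why only frontier-scale training runs cross it. The 10^15 operations per second per GPU and the cluster size below are assumptions for illustration, not numbers from the bill or the comment.

```python
# Back-of-the-envelope: how much GPU time is 1e26 operations?
THRESHOLD_OPS = 1e26      # the bill's compute threshold
GPU_OPS_PER_SEC = 1e15    # assumed sustained throughput per GPU
SECONDS_PER_YEAR = 3.15e7

gpu_seconds = THRESHOLD_OPS / GPU_OPS_PER_SEC     # 1e11 GPU-seconds
gpu_years = gpu_seconds / SECONDS_PER_YEAR        # ~3,200 GPU-years

CLUSTER = 10_000                                  # hypothetical cluster size
days_on_cluster = gpu_seconds / CLUSTER / 86_400  # ~116 days

print(f"{gpu_years:,.0f} GPU-years, or ~{days_on_cluster:.0f} days on {CLUSTER:,} GPUs")
```

Under these assumptions, only multi-thousand-GPU runs lasting months hit the threshold; merely running a folder of already-trained weights involves no training compute at all.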

      • by Anonymous Coward
        The bill was introduced by Scott Wiener, a thin vain man who wears red nail polish, notable for homosexual issues advocacy and support of aggressive infill real estate development.
    • by znrt ( 2424692 )

      Dunno if I would call it "absolute horror"; this is just the usual nonsense from legislators pretending they have a clue and actually care about stuff.

      Then again, if they asked for a kill switch installed on certain CEOs... that would start to make some sense. Of course, I'm just joking.

    • by reanjr ( 588767 )

      That's the feature that's mentioned in the summary, but it doesn't say that's the only thing being required. Presumably, there are some more onerous requirements in addition to the kill switch.

  • At least that's better than the new law that Colorado governor Polis just signed: https://leg.colorado.gov/bills/sb24-205 [colorado.gov]. They didn't even bother to define the term "artificial intelligence". In the classic AI textbook, "Artificial Intelligence: A Modern Approach", one can see from the table of contents https://aima.cs.berkeley.edu/3rd-ed/ [berkeley.edu] that chapters 8 and 9 cover inference over first-order logic, i.e. if/then statements. E.g. a blanket mortgage business rule like "disqualify if FICO < 680" will be illegal, if it happens to result in racial disparities, once the law as written takes effect in 2026 (a minimal sketch of such a rule follows this sub-thread).
    • To be fair, the industry hasn't defined the term either. AI is just a buzzword you attach to any tech project you want investors to pour money into.
      • To be fair, my if/then statement can loosely be described as intelligent, and it's definitely artificial, since it's not organic.

    • That is more likely an example of query logic or query syntax, not first-order logic processing. First-order logic was defined early in computing history, along with machine learning. Syntax is just cross-referencing a database for the most likely matches. You can't regulate something you can't define; the misrepresentation of the word "crypto" cost the whole world incalculable losses, to give an example.
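To ground this sub-thread's example: the "blanket mortgage business rule" above is a single if/then statement. A minimal sketch, with the field name and the 680 cutoff purely illustrative:

```python
# A "blanket mortgage business rule" of the kind discussed above:
# one if/then statement. Names and cutoff are illustrative only.
from dataclasses import dataclass

@dataclass
class Application:
    applicant: str
    fico: int

def disqualify(app: Application) -> bool:
    # The entire "AI": a single first-order-logic-style rule.
    return app.fico < 680

for app in [Application("A", 705), Application("B", 662)]:
    print(app.applicant, "disqualified" if disqualify(app) else "qualified")
```

Whether a rule this trivial counts as "artificial intelligence" is exactly the definitional gap the sub-thread is pointing at.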
  • This is the problem with politicians and technically illiterate individuals setting policy based on the pop-culture representation of AI in movies, television, video games and so on... O_o

    The fact that these AI models were recommending that someone put glue on their pizza solely because someone mentioned it on Reddit really shows the limitation of the "intelligence" aspect of this software. They are highly sophisticated and capable of interpreting queries in ways that can sound human, but are utterly lacking in reasoning.

  • by schwit1 ( 797399 ) on Friday June 07, 2024 @10:42AM (#64530533)

    https://x.com/psychosort/statu... [x.com]

    In a sentence

    If passed, SB1047 will entrench incumbents over startups, target open source, devastate California’s lead in AI, and cede rulemaking to unelected decels.

    In a paragraph

    SB1047 burdens developers with a mountain of compliance which will prevent startups from competing with legacy companies. It creates a new regulatory agency, the frontier model division, which will impose fees on AI developers while obstructing their research. These unelected bureaucrats will have broad powers to direct criminal liability and change regulatory standards at will. The bill’s co-sponsor, Center for AI Safety, is an extremist organization which believes AI research is likely to lead to human extinction. Consequently, the bill is designed to harm AI research itself, instead of focusing on malicious use, all while going out of its way to target open source using its derivative model standard. California owes its AI research lead to a vibrant startup economy. If we wish to keep it, California must block SB1047.

    • While you raise a valid point, the easy answer is for the new start-ups to just leave CA and set up in another state. They may also find that cheaper to do, which is always good for a start-up. There are other good people out there who don't live in CA and don't want to.
      • Considering that all AI research can be, and most likely is, done on a computer, you hardly need to be based in any particular location. Go set up in the most tax-friendly region and hire everyone as remote workers. Problem solved.

      • That doesn't address anything. Sure, you could move to upstate Idaho and do what you want, killbots or whatever. But if such a product is illegal for sale in California, no distributor will touch it (if it's a consumer play), and no major corporation will buy it (if it's a B-to-B play).

        Consider how California passes stringent (even extreme) environmental bills (e.g. auto emissions), and every manufacturer worldwide will (reluctantly) adopt it once it's a fait accompli.

    • Glad I do fancy automation and not AI.
    • by ShanghaiBill ( 739463 ) on Friday June 07, 2024 @11:17AM (#64530657)

      The bill’s co-sponsor, Center for AI Safety, is an extremist organization which believes AI research is likely to lead to human extinction.

      CAIS is backed by Elon Musk.

  • A kill switch... really??? OpenAI is not Cyberdyne Systems, and they did not invent fucking Skynet! ChatGPT is a neat toy that's turning into a useful tool. But it's not sentient. Hell, it doesn't even pass a Turing test once you play with it for a few hours and learn to recognize its tells... maybe not even before you work with it much; humans just do not chat like that.

    Also, Judgment Day won't happen, for reasons beyond "AI"'s inadequacy. For one, nuclear weapons are air-gapped and if anyone i

  • But, but, we were running with scissors, just yesterday?
  • You have no right to information. Not in California, anyway.

  • Safety assessment requirement: Under the Act, developers would be required to assess safety before training a "covered AI model," ensuring that the models are safe and do not pose a risk to public safety or welfare. This involves making a positive safety determination that the model does not have the capability to enable certain harms, such as the creation of chemical or biological weapons that would result in mass casualties, or a cyberattack on critical infrastructure resulting in at least $500 million in damage.

  • Don't we have built in 'off switches' in everything?

    Oh, you mean a part of the software that will block it. But the software is adaptive, so it'll adapt to not be bothered by the kill switch. Right? At least in their fevered nightmares.

    Stop letting stupid people control things they don't understand. Or start measuring reality accurately.

  • California should require backronyms or iambic meter for the names of all future bills.

"A child is a person who can't understand why someone would give away a perfectly good kitten." -- Doug Larson

Working...