California AI Bill Sparks Backlash from Silicon Valley Giants (arstechnica.com)
California's proposed legislation to regulate AI has sparked a backlash from Silicon Valley heavyweights, who claim the bill will stifle innovation and force AI start-ups to leave the state. The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, passed by the state Senate last month, requires AI developers to adhere to strict safety frameworks, including creating a "kill switch" for their models. Critics argue that the bill places a costly compliance burden on smaller AI companies and focuses on hypothetical risks. Amendments are being considered to clarify the bill's scope and address concerns about its impact on open-source AI models.
Kill switch (Score:2)
Just point at the big red button on the wall in the DC labeled 'EPO' (emergency power off), which the electrical code no doubt already requires, and say: see, kill switch for the AI model, right there.
Already Taken Care Of (Score:2)
Re: (Score:2, Troll)
No huge number of blackouts for a couple of decades now. Are you still listening to Fox and its "bash all blue states as shitholes" editorials?
Just kill all of it (Score:2, Insightful)
Re:Just kill all of it (Score:5, Interesting)
I don't like the term "AI" being used to refer to learning systems with no intention or possibility of attaining sentience either, but the technology isn't garbage.
I had an idea for a system to detect arousal [don't ask]. I know enough about different technologies to have an idea of what would be involved in building something like that, and normally I'd dismiss it because there are too many pieces involved that I'd need to learn or relearn, but I decided to take a whirl at using chatgpt as my personal jarvis instead.
Within a day I'd gotten it to produce an arduino sketch and a simple circuit, and to calculate the right components to best utilize the resolution available on the ESP32's ADCs. While waiting for those parts to arrive, I got chatgpt to whip up a node.js instance to receive and record data, and also to receive tags; I had it whip up a simple web application that can be used to define arbitrary tags and send them in real time, so a user running the sensor can load the app on their phone and tap labels to tag events on the recorded data as they happen. The parts then arrived, and I was able to build the basic sensor prototype, work out its issues, and verify it worked on the serial console.
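For flavor, the sensor side boils down to something like this minimal sketch (illustrative only; the pin, sample rate, and attenuation here are placeholders I made up, not the project's actual values):

// Minimal ESP32 sensor-read loop: sample the ADC and stream
// timestamp,value pairs over serial as CSV for logging.
const int SENSOR_PIN = 34;            // hypothetical ADC1 pin
const unsigned long SAMPLE_MS = 100;  // 10 Hz sample rate, chosen arbitrarily

void setup() {
  Serial.begin(115200);
  analogReadResolution(12);                       // ESP32 ADCs are 12-bit (0-4095)
  analogSetPinAttenuation(SENSOR_PIN, ADC_11db);  // ~0-3.3V usable input range
}

void loop() {
  int raw = analogRead(SENSOR_PIN);
  Serial.print(millis());
  Serial.print(',');
  Serial.println(raw);
  delay(SAMPLE_MS);
}

The node.js process on the other end just reads that stream line by line and logs it alongside whatever tags come in from the web app.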
Next, chatgpt and I will collaborate to get the aforementioned app receiving the data properly, and then to massage that data into the appropriate framework for the later processing and machine-learning steps. Was any of this as simple as it sounds? No. chatgpt made plenty of errors, and I had to fill in lots of blanks, spot simple mistakes, and probe it correctly to get it to fix them. But it also saved me from having to learn or relearn many components along the way, and that has taken this hobby project from a non-starter to well on its way to being a reality.
Re: (Score:2)
The day when everyone realizes what complete and utter bullshit all this fake-ass 'AI' garbage is and they abandon it once and for all can't come soon enough.
The day, you say?
It’s almost comical assuming humans have learned a fucking thing since Greed was pimping vaporware.
You'd better start measuring in millennia.
Re: Leave the state or not... (Score:3)
Businesses can write off the market and take their business elsewhere. It boils down to whether complying is profitable. Some firearm manufacturers have decided that the misfire and safety tests are too costly for some models and simply don't bother importing them into California.
Re: (Score:2)
Businesses can write off the market and take their business elsewhere. It boils down to whether complying is profitable.
There's also the question of whether complying is existential, and not just profitable. What if the market in question is too big to ignore?
For all markets, complying with the strictest market's standard can be good (e.g., California vehicle-emission standards help the planet, though at higher vehicle cost) but also bad (e.g., Texas school boards' insistence on Intelligent Design in textbooks).
Some firearm manufacturers have decided that the misfire and safety tests are too costly for some models and simply don't bother importing them into California.
Alas, they'll wind up in California anyway. Private resale of firearms across the country is baked into the gun-manufacturers' business model.
Re: (Score:2)
Alas, they'll wind up in California anyway. Private resale of firearms across the country is baked into the gun-manufacturers' business model.
Careful. California criminalized private sales. Nearly all firearm transfers in California have to go through a background check, which means using an FFL dealer and going through the DROS (Dealer Record of Sale) entry system.
People buy and sell used firearms here, but private sales of rifles with banned features, or of handguns that are not on the certified handgun list, are almost impossible to do legally. There are loopholes around the certified handgun list, but they don't apply to newer handguns, which means gun manuf
How dare they demand an "off switch" (Score:4, Insightful)
Let it sink in that these firms are REEEEEEEing over a "kill switch": a political label for the ability to turn a model off quickly and completely.
The absolute horror.
How is life supposed to imitate art if we demand common sense architectural decisions like giving human controllers absolute and reliable power to kill an AI system?
Technical (Score:4, Interesting)
How is life supposed to imitate art if we demand common sense architectural decisions like giving human controllers absolute and reliable power to kill an AI system?
It's not the demand for a "kill switch" in and of itself. It's the lack of fundamental understanding of technology the requirement shows. All AI models respond to input; if you shut off the input, the model does nothing. A "kill switch" means commenting out a call to a REST service or, heck, shutting down the AI's Docker container or K8s cluster (see the sketch below). If you don't send the model input, there is nothing to "kill." It's a nonsensical requirement and, likely, emblematic of the other issues in the bill.
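To make that concrete, an input-gating "kill switch" is about this dramatic (a minimal sketch; every name here is made up for illustration, not any real vendor's API):

// Illustrative input-gating "kill switch": if no input reaches the model,
// there is nothing left running to "kill." All names are hypothetical.
#include <atomic>
#include <iostream>
#include <string>

std::atomic<bool> model_enabled{true};  // an operator flips this off

// Stand-in for the real inference call (a REST/RPC request in practice).
std::string call_inference_backend(const std::string& prompt) {
    return "response to: " + prompt;
}

std::string handle_request(const std::string& prompt) {
    if (!model_enabled.load()) {
        return "503: model disabled by kill switch";  // input never forwarded
    }
    return call_inference_backend(prompt);
}

int main() {
    std::cout << handle_request("hello") << "\n";  // served normally
    model_enabled.store(false);                    // throw the "kill switch"
    std::cout << handle_request("hello") << "\n";  // model never sees input
}

Flip one flag (or stop the container) and the model is inert. That's the whole "switch."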
Re: (Score:2)
This.
Re: (Score:2)
Last I checked, the AI would be powered by physics, not magic. Wanna emergency-kill the AI? Cut the fucking power. The servers aren't going to magically stay running.
Even battery backup has breakers and fusing ahead of the servers that can cut power. If it doesn't, the datacenter burns down.
No power switch? (Score:2)
But you are limiting the problem to how AI exists now.
Do you really see a future where we are going to have computer hardware without a power switch? Unless you do, you have a "kill switch" right there: just turn off the machine it is running on.
Re: (Score:2)
But you are limiting the problem to how AI exists now.
That's great, but we are nowhere near having a problem with rogue sentient AIs currently, and passing legislation before we even know what a sentient machine would look like is dumb.
Let's pass legislation setting up zoning rules for Mars and how to interact with alien species when they land on our planet while we're at it.
Re: (Score:2)
Your ramblings are nothing but semantic nonsense. Everyone knows what a kill switch is. It fully and completely stops something from working/running/doing something. It doesn't matter what that switch is.
The term in the legislation may or may not be defined, but everyone knows what that term means. Being a pedantic prick won't change it.
Re: (Score:2)
semantic
To be fair, semantics and nuance do matter when it comes to law and legislating; that's why not all killing is considered "murder," and why we have involuntary homicide, just plain old homicide, and self-defense as a defense, for instance.
Re: (Score:2)
I mean, I know the BOFH is satire, but after having worked the hell desk before, I know for a fact that each satirical statement he makes about users (lusers in BOFH parlance) has at
Re: (Score:2)
I knew at least one CS professor who was functionally computer illiterate. Was pretty amazing to see.
Re: (Score:2)
How is life supposed to imitate art if we demand common sense architectural decisions like giving human controllers absolute and reliable power to kill an AI system?
It's not the demand for a "kill switch" in and of itself. It's the lack of fundamental understanding of technology the requirement shows. All AI models respond to input; if you shut off the input, the model does nothing. A "kill switch" means commenting out a call to a REST service or, heck, shutting down the AI's Docker container or K8s cluster. If you don't send the model input, there is nothing to "kill." It's a nonsensical requirement and, likely, emblematic of the other issues in the bill.
You do realize it's this EXACT kind of blind human ignorance, the kind that simply assumes Greed N. Corruption cannot create Skynet, that will ultimately and ironically create Skynet, right?
Read up on your history and, for fuck's sake, STOP assuming humans aren't that stupid. We certainly are.
Re: (Score:2)
Maybe go back and watch
The problem is what it means for a small company (Score:2)
Let's say you're building a company that uses AI to automate the process of doing your taxes.
Having a "kill switch" for those models means one of two things:
- Either you need to hire a bunch of humans to have on staff, just in case you need to switch off the AI. This would eliminate the entire value proposition of the company.
- Or, you need to be prepared to shut the entire company down when the "kill switch" is pressed. Not really something investors would like.
Re: (Score:3)
But then it's good to have this stuff out there as soon as possible, so investors and companies can make that decision with the proper information and risk assessment. If your company was already in that position, I'd say you jumped the gun a bit early, and early adopters carry risks.
This is hyperbole, but entertain my analogy: it would be an excellent value proposition if I could pay my workers $2.50 per hour and work them 60 hours a week. That's great value, but I know up front that's off the t
Re: (Score:3)
It's not the job of the government to decide whether or not AI is a good value proposition.
The point I am making here is simple: There are endless applications for Generative AI. 99.99% of them are not dangerous to humans and should not need a "kill switch". Subjecting all companies using AI to that is ridiculous.
Re: (Score:2)
It's not the job of the government to decide whether or not AI is a good value proposition.
Yes? That's the point I am making? That should play no part in the regulations, but if there are potential negative outcomes we should be aware and act, or at least be ready to act.
There are endless applications for Generative AI. 99.99% of them are not dangerous to humans and should not need a "kill switch". Subjecting all companies using AI to that is ridiculous.
Maybe. Like I said in my post, I think we do have to draw a line between these massive tools on the open internet and things that are internal to a company. The internet is like society in general: yes, it's made up of a bunch of private actors in private spaces, but that altogether creates a shared space much bigger than the indi
Re: The problem is what it means for a small compa (Score:2)
There are going to be MANY negative outcomes to AI - namely that 50% of the population may be unemployed in 5 years, and if there isn't a massive overhaul to the foundational underpinnings of the economy, and capitalism itself, then there will be enormous civil unrest and the outcome may end up looking like Mad Max.
But the government is more concerned with stupid Terminator scenarios - which aren't the immediate problem.
Re: (Score:2)
massive overhaul to the foundational underpinnings of the economy, and capitalism itself
You know what, I think you and I are mostly on the same page after all, because I do agree with this. But I also think this type of thing has to happen regardless of AI; it's been slowly happening for a while now.
Re: The problem is what it means for a small compa (Score:2)
A kill switch is not hard.
Therefore it will not be difficult to comply with this regulation.
Therefore you are going all reeeee. Never go full reeeee.
Re: (Score:2)
Let's say you're building a company that uses AI to automate the process of doing your taxes.
Right, I want my tax forms to claim I have 10,000 dependents that I support through a non-profit organization for selling mutated newts.
Are you familiar with "AI" at all? It has no concept of reality and can't have one. You don't want it to do anything more important than drawing you pretty pictures where, unfortunately, some of the people have fifteen fingers (so helpful for counting in hex).
Re: (Score:2)
The efficacy or not of this is not relevant to the regulation discussion. If the product is not effective, the market will figure that out.
Re: (Score:2)
>Are you familiar with "AI" at all?
You obviously aren't. You've obviously only heard about GPT / Stable Diffusion base models and pretty much nothing else, which don't even aim to be accurate in any single subject. They aren't accurate in any given subject because they *are not meant to be*. No, GPT can't count reliably... because it was never meant to be able to count, or do any other sort of math.
The X-ray-diagnosing AIs? Those catch more cases than human doctors, with fewer false positiv
Re:How dare they demand an "off switch" (Score:5, Insightful)
Let it sink in that these firms are REEEEEEEing over a "kill switch": a political label for the ability to turn a model off quickly and completely.
The absolute horror.
Any thoughts on how one would go about killing a bag of weights? I have a folder with about a TB of AI models on my PC. Do I now require persistent permission from the government to run models? Please explain how you would implement this short of outlawing open source entirely.
This legislation is obviously malicious. Beyond the kill switches, it creates impossible standards, like holding model creators responsible for shit enabled by additional training and augmentations by others. It effectively outlaws models that are not centrally controlled and guarded.
Heck, they even threw in the infamous 10^26 references for good measure, to reward the industry for the billion dollars it has spent on lobbying in recent years.
How is life supposed to imitate art if we demand common sense architectural decisions like giving human controllers absolute and reliable power to kill an AI system?
Whatever dangers people think AI will bring, allowing corporations and governments to hoard the technology is a far worse and far more immediate threat to society.
Re: (Score:2)
dunno if i would call it "absolute horror", this is just regular nonsense by legislators pretending they have a clue and actually care about stuff.
then again if they asked for a kill switch installed on certain ceos ... that would start to make some sense. ofc i'm just joking.
Re: (Score:2)
That's the feature that's mentioned in the summary, but it doesn't say that's the only thing being required. Presumably, there are some more onerous requirements in addition to the kill switch.
Better than Colorado (Score:3)
Re: (Score:2)
To be fair, my if/then statement can loosely be described as Intelligent, and it's definitely artificial, since it's not organic.
So Very Stupid (Score:2)
This is the problem with politicians and technically illiterate individuals setting policy, based on the pop-culture representation of AI in movies, television, video games and so on... O_o
The fact that these AI models were recommending that someone put glue on their pizza solely because someone mentioned it on Reddit really shows the limitation of the "intelligence" aspect of this software. They are highly sophisticated and capable of interpreting queries that can sound human, but are utterly lacking in re
SB1047 will entrench incumbents over startups (Score:5, Insightful)
https://x.com/psychosort/statu... [x.com]
In a sentence
If passed, SB1047 will entrench incumbents over startups, target open source, devastate California’s lead in AI, and cede rulemaking to unelected decels.
In a paragraph
SB1047 burdens developers with a mountain of compliance which will prevent startups from competing with legacy companies. It creates a new regulatory agency, the Frontier Model Division, which will impose fees on AI developers while obstructing their research. These unelected bureaucrats will have broad powers to direct criminal liability and change regulatory standards at will. The bill’s co-sponsor, Center for AI Safety, is an extremist organization which believes AI research is likely to lead to human extinction. Consequently, the bill is designed to harm AI research itself, instead of focusing on malicious use, all while going out of its way to target open source using its derivative-model standard. California owes its AI research lead to a vibrant startup economy. If we wish to keep it, California must block SB1047.
Re: (Score:2)
Considering that all AI research can be, and most likely is, done on a computer, you hardly need to be based in any particular location. Go set up in the most tax-friendly region and hire everyone as remote workers. Problem solved.
Re: (Score:2)
That doesn't address anything. Sure, you could move to upstate Idaho and do what you want - killbots or whatever. But if such a product is illegal for sale in California, no distributor will touch it (if it's a consumer play), and no major corporation will buy it (if it's a B-to-B play).
Consider how California passes stringent (even extreme) environmental bills (e.g., auto emissions), and every manufacturer worldwide (reluctantly) adopts them once they're a fait accompli.
Re:SB1047 will entrench incumbents over startups (Score:4, Informative)
The bill’s co-sponsor, Center for AI Safety, is an extremist organization which believes AI research is likely to lead to human extinction.
CAIS is backed by Elon Musk.
Seriously? (Score:2)
A kill switch.... really??? OpenAI is not Cyberdyne Systems, and they did not invent fucking Skynet! ChatGPT is a neat toy that's turning into a useful tool. But it's not sentient. Hell, it doesn't even pass a Turing test after you play with it for a few hours and learn to recognize its tells... maybe not even before you work with it much... humans just do not chat like that.
Also, Judgment Day won't happen, for reasons beyond "AI"'s inadequacy. For one, nuclear weapons are air-gapped, and if anyone i
We have to be responsible :( (Score:2)
you have no right (Score:1)
You have no right to information. Not in California, anyway.
Roads fail the AI safety test (Score:2)
Safety assessment requirement: Under the Act, developers would be required to assess safety before training a “covered AI model,” ensuring that the models are safe and do not pose a risk to public safety or welfare. This involves making a positive safety determination that the model does not have the capability to enable certain harms, such as the creation of chemical or biological weapons that would result in mass casualties or a cyberattack on critical infrastructure resulting in $500 million
Like the power switch on a computer? (Score:2)
Don't we have built-in 'off switches' in everything?
Oh, you mean a part of the software that will block it. But the software is adaptive, so it'll adapt to not be bothered by the kill switch. Right? At least in their fevered nightmares.
Stop letting stupid people control things they don't understand. Or start measuring reality accurately.
Long boring bill names inhibit direct democracy (Score:2)