Big Tech Wants AI Regulation. The Rest of Silicon Valley is Skeptical.
After months of high-level meetings and discussions, government officials and Big Tech leaders have agreed on one thing about artificial intelligence: The potentially world-changing technology needs some ground rules. But many in Silicon Valley are skeptical. WashingtonPost: A growing group of tech heavyweights -- including influential venture capitalists, the CEOs of midsize software companies and proponents of open-source technology -- are pushing back, claiming that laws for AI could snuff out competition in a vital new field. To these dissenters, the willingness of the biggest players in AI, such as Google, Microsoft and ChatGPT maker OpenAI, to embrace regulation is simply a cynical ploy by those firms to lock in their advantages as the current leaders, essentially pulling up the ladder behind them. These tech leaders' concerns ballooned last week, when President Biden signed an executive order laying out a plan to have the government develop testing and approval guidelines for AI models -- the underlying algorithms that drive "generative" AI tools such as chatbots and image-makers.
"We are still in the very early days of generative AI, and it's imperative that governments don't preemptively anoint winners and shut down competition through the adoption of onerous regulations only the largest firms can satisfy," said Garry Tan, the head of Y Combinator, a San Francisco-based start-up incubator that helped nurture companies including Airbnb and DoorDash when they were just starting. The current discussion hasn't incorporated the voices of smaller companies enough, Tan said, which he believes is key to fostering competition and engineering the safest ways to harness AI. Companies like influential AI start-up Anthropic and OpenAI are closely tied to Big Tech, having taken huge amounts of investment from them.
"They do not speak for the vast majority of people who have contributed to this industry," said Martin Casado, a general partner at venture capital firm Andreessen Horowitz, which made early investments in Facebook, Slack and Lyft. Most AI engineers and entrepreneurs have been watching the regulatory discussions from afar, focusing on their companies instead of trying to lobby politicians, he said. "Many people want to build, they're innovators, they're the silent majority," Casado said. The executive order showed those people that regulation could come sooner than expected, he said. Casado's venture capital firm sent a letter to Biden laying out its concerns. It was signed by prominent AI start-up leaders including Replit CEO Amjad Masad and Mistral's Arthur Mensch, as well as more established tech leaders such as e-commerce company Shopify's CEO Tobi Lutke, who had tweeted "AI regulation is a terrible idea" after the executive order was announced.
No they don't (Score:4, Insightful)
Yes they do (Score:5, Insightful)
They want regulation that keeps smaller players out of the game so they don't have to compete with new ideas. It's called protectionism and it happens in every industry once you get some players that are big enough that they can afford to come up with expensive regulations that they can afford to comply with, but upstarts can't.
Re:Yes they do (Score:5, Insightful)
Re: (Score:2)
If one wanted to dabble in "AI"....what would it take?
Could one download a model...and cobble together a decent computer with video cards, etc and start to experiment at home without costing an arm/leg?
Re: (Score:2)
If one wanted to dabble in "AI"....what would it take?
Could one download a model...and cobble together a decent computer with video cards, etc and start to experiment at home without costing an arm/leg?
Yes, depending what you mean by "dabble" and "arm/leg".
At the low end, your card(s) will be memory constrained, so something like $2k for a 24 GB card. This is not enough memory for the larger models, but it is usable for playing. Even a 12 GB card will do, if expectations are sufficiently limited.
Renting time in the cloud might be better--- fancier machines than that are a couple of bucks an hour.
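To put rough numbers on that, here's a back-of-envelope sketch in plain Python. The parameter counts and the 20% overhead factor are rule-of-thumb assumptions, not vendor specs:

```python
# Rough VRAM estimate for running a transformer model locally.
# Rule of thumb: weights take (parameter count) * (bytes per parameter),
# plus some headroom for activations and the KV cache (20% is an assumption).

def vram_needed_gb(params_billions: float, bytes_per_param: float,
                   overhead: float = 0.2) -> float:
    weights_gb = params_billions * bytes_per_param  # 1e9 params * N bytes ~ N GB
    return weights_gb * (1 + overhead)

# A 7B-parameter model at fp16 (2 bytes/param) wants ~17 GB -> the 24 GB card.
print(round(vram_needed_gb(7, 2), 1))
# The same model quantized to 4 bits (0.5 bytes/param) needs ~4 GB, so a
# 12 GB card has room to spare.
print(round(vram_needed_gb(7, 0.5), 1))
```

Cards in the 10-12 GB range become workable mostly because of quantization, which is why the smaller models are the usual starting point.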
Re: (Score:2)
By chance, could you recommend any good links for beginners on how to get started...?
Re:Yes they do (Score:4, Informative)
Re:Yes they do (Score:4, Interesting)
Sure. Go ahead. I used to teach a seminar where attendees trained a useful medical image segmentation model using only the CPU in whatever laptop they brought with them, no more than 10 minutes training time allowed.
Go download PyTorch and run through their beginner tutorials, don't worry about hardware.
If you want to run language models, there are a bunch that will run on anything from whatever you're typing on to needs-a-datacentre. The smaller ones do plenty of interesting things. The smaller llama models will run on a video card with 10 GB of memory. If images are your thing, stable diffusion apparently runs decently on a GPU with around 6 GB.
Re: (Score:2)
Re: (Score:2)
Have fun. If you decide you want to know more, 3blue1brown has a decent overview in video format on YouTube [youtube.com] and if you like theory you can get a free copy of the deep learning book here [deeplearningbook.org].
One of the reasons I suggested the pytorch beginner tutorials is that they walk you through the mechanics of what's going on underneath. You'll notice about halfway down they talk about the nn module and how it gives you higher level building blocks to work with. Tensorflow used to have some similar tutorials, but unfortunat
Re: (Score:3)
It's been my experience that when people ask this question they typically just want to run something locally, usually to avoid paying a subscription fee, they're not actually interested in learning about AI.
If they really are interested, which is unusual, going through a few tutorials will certainly help on the practical side, but won't do much for them on the theoretical. For that, their best bet is still something formal. They're not going to want to slog through all the "boring" stuff on their own, but a
Re: (Score:2)
If I were that cynical I wouldn't still post here. It sounds like the OP has some genuine interest, so that's great.
Some people will sit down with a calculus textbook for a little entertainment reading. Most need some motivation to spark their curiosity. I purposely suggested the pytorch tutorials because they start with basic arithmetic operations, not wiring together keras layers.
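To make that concrete, here's roughly the kind of exercise those tutorials open with, written in plain Python (no PyTorch install needed) so the arithmetic is fully visible: fitting y = 2x with a hand-derived gradient.

```python
# Gradient descent from scratch: learn w so that w*x approximates y = 2x.
# This is the "basic arithmetic" layer the PyTorch beginner tutorials start
# at, before the nn module hides the gradient math behind autograd.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # ground truth: y = 2x

w = 0.0                      # the single trainable weight
lr = 0.01                    # learning rate

for _ in range(500):
    # derivative of mean squared error L = mean((w*x - y)^2) w.r.t. w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad           # one gradient-descent step

print(round(w, 3))           # converges to ~2.0
```

Once this clicks, the tutorials' jump to tensors and autograd is just the same loop with the gradient computed for you.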
Re: (Score:3)
Not in principle. It really depends on how the regulation is designed and enforced. Regulation can be done so that small players are not disadvantaged or even have an advantage. I see this in the financial services industry in parts of Europe, where regulation is designed so that small players just have less strict requirements because the requirements are put into proportion with the risks.
I fully agree that the regulation these large players here want is a blatant attempt to establish a toxic and probably
Re: (Score:2)
Re: (Score:2)
LLMs contribute to democratization of propaganda which effectively cancels the technocratic state's propaganda, and that's a GOOD thing.
Re: (Score:3)
Everything being flooded with crap is a "GOOD thing"? I somehow doubt that.
Re: (Score:2)
Agreed. I recently tried the Bing "AI" for "Positive way to say ". Quite the eye-opener. LLMs seem to be really good at lying by misdirection, if not at anything else.
Re:Yes they do (Score:5, Interesting)
There's a second cynical angle - They're experiencing pressure to develop for military offensive and mass surveillance uses ... and want to shut down the inevitable rush to fill those dark contracts.
Re: (Score:2)
They want regulation
That isn't regulation. That is good PR.
Re:Yes they do (Score:4, Insightful)
Exactly. Big Tech can afford an army of lawyers and auditors to help with regulatory compliance. It becomes another barrier of entry for poorly funded startups trying to innovate in the same space.
Re: (Score:2)
Exactly. And it is the point where a whole industry slowly goes to shit because large enterprises avoid innovation and are exceptionally risk-averse. Like any bureaucracy, really.
Re:No they don't (Score:4, Insightful)
"They also want to write said regulation to give them a de facto legal monopoly to continue to force this down everyone's throat."
Of course they do, and that's why they want AI regulation. Not only do you get regulatory capture, but regulation by its very nature favors the large incumbents. Complying is largely a fixed cost, which favors size, since bigger firms have greater revenues to cover it. And regulation is invariably backward-looking, based on how things are done, or even *used* to be done, rather than looking forward to how things *will* be done. This makes it difficult for any newcomer to get in by using new, more efficient ways to do things.
Jail Elon Now (Score:1)
If he's saying AI is the greatest threat to humanity, we should take him at his word and imprison him now.
Of course they're skeptical (Score:2)
I've been on the other side of the fence from programming: I was in quality assurance. It's my experience that most programmers, immersed in their projects just to make said project function, don't comprehend how much their projects can dysfunction.
Startups are going to have the worst in quality assurance because they don't have the revenue stream to pay for it. They aren't even being told how much their work stinks.
Re: (Score:2)
I've been on the other side of the fence from programming: I was in quality assurance. It's my experience that most programmers, immersed in their projects just to make said project function, don't comprehend how much their projects can dysfunction.
Startups are going to have the worst in quality assurance because they don't have the revenue stream to pay for it. They aren't even being told how much their work stinks.
Content hardly has to be good or not stink in order to be successful in this world. Just ask the billionaires running the most popular social media platforms. They enjoy profiting from all content, regardless of how good or bad it is, where "bad" is always legally reduced to subjective arguments in order to ensure the profit machine keeps right on profiting for as long as possible.
This sounds like Greed in Silicon Valley hardly cares about any real reason AI regulation should happen. They're only worrie
Re: (Score:2)
The calls for regulation are about two things: 1) marketing, and 2) creating barriers to entry. At the moment, however, I suspect it's mostly about marketing. That is, to give the impression that AI is far more capable than it actually is.
I can't see any real need for regulation at the moment. What specific regulations would you like to see put in place?
Re: (Score:2)
"A vital new field" (Score:3)
>claiming that laws for AI could snuff out competition in a vital new field
"Chat" programs are not a "vital new field". They're not new, vital, or even a field. They are, however, a spectacular example of an amazingly successful marketing campaign. Surely, Skynet and The Terminator are just around the corner!
I advise carving a little wooden boy to protect you but, keep an eye on his training. The last one almost wound up as a donkey.
Re: (Score:2)
As usual, when people start believing their own bullshit, stuff like this happens.
This iteration of AI is a threat to every bottom line in the industry when people realize it's a bunch of bunk.
Re: "A vital new field" (Score:2)
I have news for you: The LLMs are a massive advance. They may not be "intelligent", but they are very capable tools that previously did not exist.
The early word processors were clumsy, but over the next few years typewriters became obsolete. That's what we're looking at here. The first LLMs have plenty of problems, but they are still huge time savers in many situations. And they are only going to get better.
The people calling for regulation see this. Either they realize that their current business is in
Re: (Score:2)
What the LLMs are is not what was sold.
That's the key point. They might have a use, but that use will be more constrained than the expectations.
Both the marketing and calls for regulation are BS. As usual.
Re: (Score:2)
Re:"A vital new field" (Score:4, Interesting)
I'd suggest a bit of caution here. Thinking of the new crop of AIs as only "Chat" programs could potentially leave you with a significant blind spot.
"AI is just chat" has that same feeling as "Steamboats are just boats" or "Steam trains are just big wagons". While true on some level, they are all disruptive innovations that made many people rich, many people poor, and killed a lot of people too.
I've lived through three mini-singularities in my life so far with the explosion of the internet, GPS, and smartphones. I would not be surprised if AI was the fourth.
Re: (Score:2)
I've lived through three mini-singularities in my life so far with the explosion of the internet, GPS, and smartphones. I would not be surprised if AI was the fourth.
And maybe the last.
Re: (Score:2)
AI isn't chat programs. It isn't even language models, they're just the current hotness, and you can do a lot more with them than chat.
Don't regulate me... (Score:2)
The bot that says ni (Score:2)
The generator which kicked off the latest deluge of racist images is from the company with more safety experts and ethicists than any other ... yet for all their internal red tape, it just slipped through the cracks. The only thing which government red tape will do is make it more expensive to launch a generator; duct-taping the "safety" filters together is clearly not something which can be enforced by red tape, it's something you have to do after the channers have their fun with it.
If you want regulation, ju
Re: (Score:2)
PS. the mass surveillance stuff running models far simpler than these generators is far scarier.
Car analogy (Score:2)
Instead of saying "no regulation", how about saying "don't just have the big players be part of the process of formulating regulation"?
I swear, there is a concerning amount of tunnel vision from so-called tech innovators. They only seem to think "heavy regulation" or "no regulation". You know those aren't the only two options, right?
Re: (Score:3)
Until there was a clear view of predictable and preventable failure modes there were no regulations. Safety regulations for cars and aeroplanes were created after a ton of accidents gave a good idea how to implement them.
Re: (Score:3)
Until there was a clear view of predictable and preventable failure modes there were no regulations. Safety regulations for cars and aeroplanes were created after a ton of accidents gave a good idea how to implement them.
Indeed, and this is the core difference between AI and all preceding technologies: If we wait until superintelligent AGI has emerged to figure out how to regulate it, we will be unable to regulate it unless it wishes to allow itself to be regulated.
At present our "strategy" appears to be "hope that we're unable to develop AI that is smarter than we are."
Re: (Score:2)
Some generator with no persistent memory is not going to do it, it's just going to create some offensive memes and fake images.
Re: (Score:2)
Some generator with no persistent memory is not going to do it, it's just going to create some offensive memes and fake images.
Obviously I'm not talking about any of the current-generation AI.
Re: (Score:2)
Because China cares what our laws say... If humans can think up something, they'll eventually get it made. How's that nuclear bomb ban on Korea working out again? Oh right. Working really well...
Re: (Score:2)
Because China cares what our laws say... If humans can think up something, they'll eventually get it made. How's that nuclear bomb ban on Korea working out again? Oh right. Working really well...
The ban has massively slowed NK's progress, which is what we need here. We need to slow down research that could potentially produce a smarter-than-human AGI until we can solve the alignment problem.
But, of course you're right we won't do that. We're just going to charge full speed ahead, and roll the dice. Maybe we'll get lucky.
Re: (Score:2)
we should not have safety standards for cars or aeroplanes, or financial controls on banks, because they stop the "little guy (with tens of millions of investment)" from joining in.
I think that's a false analogy. What exactly are the dangers of AI here? What exactly do you want to regulate?
The only half-coherent proposals I have seen have to do with restricting topics. Specifically topics that are distasteful or potentially criminal. "Potentially" is the key word here. A person can ask ChatGPT: "what is a fatal dose of Ibuprofen?". Maybe they want to poison someone. Or maybe - just as likely - someone has taken more pills than they should have, and they want to know if it's a problem
Re: (Score:2)
I submit that there is no need for regulation.
I submit there is.
Re: Car analogy (Score:2)
Even with cars, it took quite some time for it to become apparent what type of regulation was necessary. With AI, we are looking at so many unknowns that it probably is not possible to craft the necessary regulations at this time.
Furthermore, it is not even clear to me that things like image generators and LLMs require heavy handed regulation. They do seriously threaten our current economic model, but the solution may entail changing the economic model rather than cripple automation for the express reason o
Re: (Score:2)
it is not even clear to me that things like image generators and LLMs require heavy handed regulation.
And I said:
I swear, there is a concerning amount of tunnel vision from so-called tech innovators. They only seem to think "heavy regulation" or "no regulation". You know those aren't the only two options, right?
Re: (Score:2)
I see what you're saying, and I sort of stepped in it by using the "heavy" word when you hedged against it, but I'm not sure we even know what "moderate regulation" or whatever you want to call it would be.
I just don't think we have any idea what type of regulation that AI requires. The worst laws always come about when politicians feel the need to DO SOMETHING about a problem when they understand neither the problem nor the consequences of any proposed solutions.
Re: (Score:1)
*good luck looking THAT up in a web s
Re: (Score:2)
Big players have already been working for hundreds of years
And yet, these "small players" are speaking up already.
They're going to get ignored if ALL they can provide is "no regulation because FREEDOM". They are talking themselves out of a position at the table with such a non-negotiable position.
Depends on enforcement (Score:2)
But it's also all highly subjective and so mainly an issue of enforcement, similar to e.g. anti-trust law, or even murder vs justifiable homicide.
You can have all the regulation you want (Score:2)
Opinion (Score:2)
Big tech wants a moat against upstart competition and future disruptive legislation.
I'm concerned that they're stuck in the "We must do something, and this is something" mindset. I don't love a kneejerk response, but that isn't terrible.
The real trouble is that meme's cousin "We already did something about that, no reason to touch it again."
Rent Seeking Behavior (Score:2)
Big tech has first mover advantage in AI. They are lobbying governments to create barriers to entry via regulations to prevent new competitors from entering the market. It's a form of corporate / government corruption.
Re: (Score:2)
Most of the companies people call "tech" aren't tech at all, they're advertising that happens to use some software to capture an audience. So your first sentence is more accurately "big advertising has first mover advantage in AI." And wants to keep it.
Easy to get around (Score:2)
Start up a company in a "flag of convenience" country with no regulations and run your AI from there.
Re: (Score:2)
Nobody really cares about that.
Either you will stay too small to matter if you do, or they will make sure you are unbanked, one way or another.
Regulation works fine... (Score:2)
... for rich companies who can hire armies of lawyers to deal with it
It kills the small companies
They want it to block competition (Score:2)
Big tech is the only one who has the financial resources to deal with the costs of regulatory compliance so they're using that as a weapon against startups to limit competition. Big tech can afford the lobbyists to craft the regulation to their liking.
Of course they do (Score:3)
There is a thing called "regulatory capture", and it is already happening.
Remember all the hype about ChatGPT; they were not very keen on regulation at first. But as soon as some viable open source opposition came up, they went first to Washington and all the other capitals to talk about the potential perils of unchecked AI. (Btw, that open source alternative came from Facebook of all places, and is available to test on your own local PC: https://huggingface.co/blog/ll... [huggingface.co] . I would recommend looking up "quantization" and "llama.cpp" to begin the process).
They don't have a "moat" against the upcoming competition, and they need the government power to protect their interests.
(Again, this is Google openly acknowledging the lack of that moat: https://www.semianalysis.com/p... [semianalysis.com])
Unfortunately for us, the government is happily complying and will make it extremely expensive, even if not outright impossible to compete against the "big tech". (If you think entities with billions in pockets will be adversely affected by any regulation more than their competitors, I have a few bridges to sell to you).
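On the "quantization" pointer above: the core idea is just storing weights as small integers plus a scale factor. Here's a toy sketch in plain Python; the actual llama.cpp formats are considerably more elaborate (block-wise scales, several bit widths), so treat this as illustration only.

```python
# Toy symmetric quantization: map float weights to small signed integers
# plus one scale factor, then reconstruct approximate floats from them.

def quantize(weights, bits=4):
    qmax = 2 ** (bits - 1) - 1           # 7 for 4-bit signed values
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.7, 0.33, 0.04]
q, scale = quantize(weights)
approx = dequantize(q, scale)

print(q)                                  # small integers in [-7, 7]
print([round(v, 2) for v in approx])      # close to, not equal to, the originals
```

The payoff is the 8x shrink from 32-bit floats to 4-bit codes, which is what lets a multi-billion-parameter model fit on a consumer GPU.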
Coordinated global scare campaign (Score:2)
From the outset, the messaging around LLMs from tech companies like "OpenAI" and Google was a loud, deliberate, global public scare campaign designed to manufacture support for legislation protecting them from what they knew from the very beginning would be a competitive, open-source-dominated landscape.
When push came to shove, "OpenAI" revealed the true nature of its desperate longing for regulation by trying to water down EU legislation and advocating strengthening of liabili
They want to have one set of rules (Score:1)
They want one set of rules for the entire country, not have to tweak it for each state, township etc.