AI Businesses Technology

Big Tech Wants AI Regulation. The Rest of Silicon Valley is Skeptical. 68

After months of high-level meetings and discussions, government officials and Big Tech leaders have agreed on one thing about artificial intelligence: The potentially world-changing technology needs some ground rules. But many in Silicon Valley are skeptical. WashingtonPost: A growing group of tech heavyweights -- including influential venture capitalists, the CEOs of midsize software companies and proponents of open-source technology -- are pushing back, claiming that laws for AI could snuff out competition in a vital new field. To these dissenters, the willingness of the biggest players in AI, such as Google, Microsoft and ChatGPT maker OpenAI to embrace regulation is simply a cynical ploy by those firms to lock in their advantages as the current leaders, essentially pulling up the ladder behind them. These tech leaders' concerns ballooned last week, when President Biden signed an executive order laying out a plan to have the government develop testing and approval guidelines for AI models -- the underlying algorithms that drive "generative" AI tools such as chatbots and image-makers.

"We are still in the very early days of generative AI, and it's imperative that governments don't preemptively anoint winners and shut down competition through the adoption of onerous regulations only the largest firms can satisfy," said Garry Tan, the head of Y Combinator, a San Francisco-based start-up incubator that helped nurture companies including Airbnb and DoorDash when they were just starting. The current discussion hasn't incorporated the voices of smaller companies enough, Tan said, which he believes is key to fostering competition and engineering the safest ways to harness AI. Companies like influential AI start-up Anthropic and OpenAI are closely tied to Big Tech, having taken huge amounts of investment from them.

"They do not speak for the vast majority of people who have contributed to this industry," said Martin Casado, a general partner at venture capital firm Andreessen Horowitz, which made early investments in Facebook, Slack and Lyft. Most AI engineers and entrepreneurs have been watching the regulatory discussions from afar, focusing on their companies instead of trying to lobby politicians, he said. "Many people want to build, they're innovators, they're the silent majority," Casado said. The executive order showed those people that regulation could come sooner than expected, he said. Casado's venture capital firm sent a letter to Biden laying out its concerns. It was signed by prominent AI start-up leaders including Replit CEO Amjad Masad and Mistral's Arthur Mensch, as well as more established tech leaders such as e-commerce company Shopify's CEO Tobi Lutke, who had tweeted "AI regulation is a terrible idea" after the executive order was announced.
  • No they don't (Score:4, Insightful)

    by DarkRookie2 ( 5551422 ) on Thursday November 09, 2023 @09:44AM (#63992861)
    They want good PR. They also want to write said regulation to give themselves a de facto legal monopoly so they can continue to force this down everyone's throat.
    • Yes they do (Score:5, Insightful)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday November 09, 2023 @09:47AM (#63992869) Homepage Journal

      They want regulation that keeps smaller players out of the game so they don't have to compete with new ideas. It's called protectionism, and it happens in every industry once some players get big enough that they can afford to come up with expensive regulations they can afford to comply with, but upstarts can't.

      • Re:Yes they do (Score:5, Insightful)

        by thrasher thetic ( 4566717 ) on Thursday November 09, 2023 @10:00AM (#63992907)
        This. Regulation functions as a barrier to entry which always hurts the small player first. Combine that with a little regulatory capture down the line, and you wind up with a government-imposed monopoly in all but name.
        • Just curious....

          If one wanted to dabble in "AI"....what would it take?

          Could one download a model...and cobble together a decent computer with video cards, etc and start to experiment at home without costing an arm/leg?

          • by Tupper ( 1211 )

            If one wanted to dabble in "AI"....what would it take?

            Could one download a model...and cobble together a decent computer with video cards, etc and start to experiment at home without costing an arm/leg?

            Yes, depending what you mean by "dabble" and "arm/leg".

            At the low end, your card(s) will be memory constrained, so expect something like $2K for a 24 GB card. That's not enough memory for the larger models, but it is usable for playing. Even a 12 GB card will do, if expectations are sufficiently limited.

            Renting time in the cloud might be better: fancier machines than that are a couple of bucks an hour.
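            As a rough back-of-the-envelope sketch of the memory math (the parameter count and the 1.2x runtime-overhead factor here are illustrative assumptions, not measured figures):

```python
# Rough VRAM estimate for LLM inference: weights = parameters x bytes each,
# plus some headroom for activations and the KV cache (assumed 20% here).
def vram_gb(params_billion: float, bytes_per_param: float,
            overhead: float = 1.2) -> float:
    return params_billion * bytes_per_param * overhead

# A 13B-parameter model in 16-bit floats wants roughly 31 GB,
# which overflows even a 24 GB card...
print(f"13B @ fp16:  ~{vram_gb(13, 2.0):.0f} GB")
# ...while 4-bit quantization shrinks it to about 8 GB,
# within reach of a 12 GB card.
print(f"13B @ 4-bit: ~{vram_gb(13, 0.5):.0f} GB")
```

            This is why quantized models are the usual starting point for home experimentation: the bytes-per-parameter term dominates everything else.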

            • Thank you for the reply.

              By chance, could you recommend any good links for beginners on how to get started....?

              • Re:Yes they do (Score:4, Informative)

                by sfcat ( 872532 ) on Thursday November 09, 2023 @06:10PM (#63994359)
                Um, so it depends on what you want to do. But to put it in perspective, what you are asking for is akin to asking for a link to "nuclear engineering for dummies." If you have to ask, you are probably woefully unprepared to do anything complex with AI. But I will answer your question anyway. The place to start is a book written by Norvig and Russell [berkeley.edu]. If you can make it through that, then you are ready for the practical realities of using AI/ML. Oh, and for reference, it takes $100,000,000+ to train an LLM from scratch. A good graphics card is great for experimenting with small datasets, but it is to ChatGPT what a graphing calculator is to the cloud.
          • Re:Yes they do (Score:4, Interesting)

            by ceoyoyo ( 59147 ) on Thursday November 09, 2023 @12:04PM (#63993327)

            Sure. Go ahead. I used to teach a seminar where attendees trained a useful medical image segmentation model using only the CPU in whatever laptop they brought with them, no more than 10 minutes training time allowed.

            Go download PyTorch and run through their beginner tutorials, don't worry about hardware.

            If you want to run language models, there are a bunch that will run on anything from whatever you're typing on to needs-a-datacentre. The smaller ones do plenty of interesting things. The smaller Llama models will run on a video card with 10 GB of memory. If images are your thing, Stable Diffusion apparently runs decently on a GPU with around 6 GB.
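            To make the "no special hardware needed" point concrete, here is roughly what such a beginner exercise boils down to once you strip away the framework (a toy sketch, not taken from any tutorial): fitting y = 2x + 1 by gradient descent in plain Python, on any CPU.

```python
# Fit a single weight and bias to y = 2x + 1 by gradient descent.
# This is the arithmetic a framework like PyTorch automates for you.
xs = [i / 10 for i in range(-10, 11)]
ys = [2 * x + 1 for x in xs]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    # Gradients of mean-squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges to roughly 2.0 and 1.0
```

            A "real" network is this same loop with millions of weights and automatic differentiation, which is exactly the mechanics the beginner tutorials walk through.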

            • Thank you for the info!!!
              • by ceoyoyo ( 59147 )

                Have fun. If you decide you want to know more, 3blue1brown has a decent overview in video format on YouTube [youtube.com] and if you like theory you can get a free copy of the deep learning book here [deeplearningbook.org].

                One of the reasons I suggested the pytorch beginner tutorials is that they walk you through the mechanics of what's going on underneath. You'll notice about halfway down they talk about the nn module and how it gives you higher level building blocks to work with. Tensorflow used to have some similar tutorials, but unfortunat

            • by narcc ( 412956 )

              It's been my experience that when people ask this question they typically just want to run something locally, usually to avoid paying a subscription fee; they're not actually interested in learning about AI.

              If they really are interested, which is unusual, going through a few tutorials will certainly help on the practical side, but won't do much for them on the theoretical. For that, their best bet is still something formal. They're not going to want to slog through all the "boring" stuff on their own, but a

              • by ceoyoyo ( 59147 )

                If I were that cynical I wouldn't still post here. It sounds like the OP has some genuine interest, so that's great.

                Some people will sit down with a calculus textbook for a little entertainment reading. Most need some motivation to spark their curiosity. I purposely suggested the pytorch tutorials because they start with basic arithmetic operations, not wiring together keras layers.

        • by gweihir ( 88907 )

          Not in principle. It really depends on how the regulation is designed and enforced. Regulation can be done so that small players are not disadvantaged or even have an advantage. I see this in the financial services industry in parts of Europe, where regulation is designed so that small players just have less strict requirements because the requirements are put into proportion with the risks.

          I fully agree that the regulation these large players here want is a blatant attempt to establish a toxic and probably

          • by sfcat ( 872532 )
            It's also a bit unfocused. The real risk with AI isn't the things found in sci-fi books. And part of my worry is that those are the kind of regulations we will get. The real risk of LLMs is the automation of propaganda and the impact of that on liberal societies (especially by authoritarian ones). And given that these are politicians making these regulations, they probably are not well incentivized to limit that. So we are probably going to get regulations that make it hard to compete (like you say)
            • LLMs contribute to democratization of propaganda which effectively cancels the technocratic state's propaganda, and that's a GOOD thing.

            • by gweihir ( 88907 )

              Agreed. I recently tried the Bing "AI" for "Positive way to say ". Quite the eye-opener. LLMs seem to be really good at lying by misdirection, if not at anything else.

      • Re:Yes they do (Score:5, Interesting)

        by evanh ( 627108 ) on Thursday November 09, 2023 @10:04AM (#63992925)

        There's a second cynical angle - They're experiencing pressure to develop for military offensive and mass surveillance uses ... and want to shut down the inevitable rush to fill those dark contracts.

      • They want regulation

        That isn't regulation. That is good PR.

      • Re:Yes they do (Score:4, Insightful)

        by leonbev ( 111395 ) on Thursday November 09, 2023 @10:06AM (#63992931) Journal

        Exactly. Big Tech can afford an army of lawyers and auditors to help with regulatory compliance. It becomes another barrier to entry for poorly funded startups trying to innovate in the same space.

      • by gweihir ( 88907 )

        Exactly. And it is the point where a whole industry slowly goes to shit because large enterprises avoid innovation and are exceptionally risk-averse. Like any bureaucracy, really.

    • Re:No they don't (Score:4, Insightful)

      by Chris Mattern ( 191822 ) on Thursday November 09, 2023 @09:59AM (#63992905)

      "They also want to write said regulation to give them a legal de facto molopoly to continue to force this down everyones throat."

      Of course they do, and that's why they want AI regulation. Not only do you get regulatory capture, but regulation by its very nature favors the large incumbents. Complying is largely a fixed cost, which favors size: larger firms have greater revenues to cover it. And regulation is invariably backward-looking, based on how things are done, or even *used* to be done, rather than looking forward to how things *will* be done. This makes it difficult for any newcomer to get in by using new, more efficient ways to do things.

    It definitely seems suspicious that much of this hype came out of Elon Musk just prior to his release of Grok.

      If he's saying AI is the greatest threat to humanity, we should take him at his word and imprison him now.
  • I've been on the other side of the fence from programming, I was in quality assurance. It's my experience that most programmers, immersed in their projects just to make said project function, don't comprehend how much their projects can dysfunction.

    Startups are going to have the worst in quality assurance because they don't have the revenue stream to pay for it. They aren't even being told how much their work stinks.

    • I've been on the other side of the fence from programming, I was in quality assurance. It's my experience that most programmers, immersed in their projects just to make said project function, don't comprehend how much their projects can dysfunction.

      Startups are going to have the worst in quality assurance because they don't have the revenue stream to pay for it. They aren't even being told how much their work stinks.

      Content hardly has to be good or not stink in order to be successful in this world. Just ask the billionaires running the most popular social media platforms. They enjoy profiting from all content, regardless of how good or bad it is, and "bad" is always legally reduced to subjective arguments in order to ensure the profit machine keeps right on profiting for as long as possible.

      This sounds like Greed in Silicon Valley hardly cares about any real reason AI regulation should happen. They're only worrie

      • by narcc ( 412956 )

        The calls for regulation are about two things: 1) marketing and 2) creating barriers to entry. At the moment, however, I suspect it's mostly about marketing. That is, to give the impression that AI is far more capable than it actually is.

        I can't see any real need for regulation at the moment. What specific regulations would you like to see put in place?

        • by sfcat ( 872532 )
          It should be about propaganda, which is just marketing for governments and ideologies.
  • by zephvark ( 1812804 ) on Thursday November 09, 2023 @09:55AM (#63992889)

    >claiming that laws for AI could snuff out competition in a vital new field

    "Chat" programs are not a "vital new field". They're not new, vital, or even a field. They are, however, a spectacular example of an amazingly successful marketing campaign. Surely, Skynet and The Terminator are just around the corner!

    I advise carving a little wooden boy to protect you but, keep an eye on his training. The last one almost wound up as a donkey.

    • by HBI ( 10338492 )

      As usual, when people start believing their own bullshit, stuff like this happens.

      This iteration of AI is a threat to every bottom line in the industry when people realize it's a bunch of bunk.

      • I have news for you: The LLMs are a massive advance. They may not be "intelligent", but they are very capable tools that previously did not exist.

        The early word processors were clumsy, but over the next few years typewriters became obsolete. That's what we're looking at here. The first LLMs have plenty of problems, but they are still huge time savers in many situations. And they are only going to get better.

        The people calling for regulation see this. Either they realize that their current business is in

        • by HBI ( 10338492 )

          What the LLMs are is not what was sold.

          That's the key point. They might have a use, but that use will be more constrained than the expectations.

          Both the marketing and calls for regulation are BS. As usual.

          • by sfcat ( 872532 )
            The worry is about automating propaganda (troll farms from the IRA/Kremlin or CCP). That's something real to worry about. The rest is as you say, mostly BS.
    • by ElizabethGreene ( 1185405 ) on Thursday November 09, 2023 @10:48AM (#63993049)

      I'd suggest a bit of caution here. Thinking of the new crop of AIs as only "Chat" programs could potentially leave you with a significant blind spot.

      "AI is just chat" has that same feeling as "Steamboats are just boats" or "Steam trains are just big wagons". While true on some level, they are all disruptive innovations that made many people rich, many people poor, and killed a lot of people too.

      I've lived through three mini-singularities in my life so far with the explosion of the internet, GPS, and smartphones. I would not be surprised if AI was the fourth.

      • I've lived through three mini-singularities in my life so far with the explosion of the internet, GPS, and smartphones. I would not be surprised if AI was the fourth.

        And maybe the last.

    • by ceoyoyo ( 59147 )

      AI isn't chat programs. It isn't even language models, they're just the current hotness, and you can do a lot more with them than chat.

  • Don't regulate me, all my thing wants to do is make paperclips. Where's the harm in that?
  • The generator which kicked off the latest deluge of racist images is from the company with more safety experts and ethicists than any other ... yet for all their internal red tape, it just slipped through the cracks. The only thing which government red tape will do is make it more expensive to launch a generator; duct-taping the "safety" filters together is clearly not something which can be enforced by red tape, it's something you have to do after the channers have had their fun with it.

    If you want regulation, ju

  • So by their reasoning... we should not have safety standards for cars or aeroplanes, or financial controls on banks, because they stop the "little guy (with tens of millions of investment)" from joining in.

    Instead of saying "no regulation", how about saying "don't just have the big players be part of the process of formulating regulation"?

    I swear, there is a concerning amount of tunnel vision from so-called tech innovators. They only seem to think "heavy regulation" or "no regulation". You know those
    • Until there was a clear view of predictable and preventable failure modes there were no regulations. Safety regulations for cars and aeroplanes were created after a ton of accidents gave a good idea how to implement them.

      • Until there was a clear view of predictable and preventable failure modes there were no regulations. Safety regulations for cars and aeroplanes were created after a ton of accidents gave a good idea how to implement them.

        Indeed, and this is the core difference between AI and all preceding technologies: If we wait until superintelligent AGI has emerged to figure out how to regulate it, we will be unable to regulate it unless it wishes to allow itself to be regulated.

        At present our "strategy" appears to be "hope that we're unable to develop AI that is smarter than we are."

        • Some generator with no persistent memory is not going to do it, it's just going to create some offensive memes and fake images.

          • Some generator with no persistent memory is not going to do it, it's just going to create some offensive memes and fake images.

            Obviously I'm not talking about any of the current-generation AI.

        • Because China cares what our laws say... If humans can think up something, they'll eventually get it made. How's that nuclear bomb ban on Korea working out again? Oh right. Working really well...

          • Because China cares what our laws say... If humans can think up something, they'll eventually get it made. How's that nuclear bomb ban on Korea working out again? Oh right. Working really well...

            The ban has massively slowed NK's progress, which is what we need here. We need to slow down research that could potentially produce a smarter-than-human AGI until we can solve the alignment problem.

            But, of course you're right we won't do that. We're just going to charge full speed ahead, and roll the dice. Maybe we'll get lucky.

    • we should not have safety standards for cars or aeroplanes, or financial controls on banks, because they stop the "little guy (with tens of millions of investment)" from joining in.

      I think that's a false analogy. What exactly are the dangers of AI here? What exactly do you want to regulate?

      The only half-coherent proposals I have seen have to do with restricting topics. Specifically, topics that are distasteful or potentially criminal. "Potentially" is the key word here. A person can ask ChatGPT: "what is a fatal dose of ibuprofen?". Maybe they want to poison someone. Or maybe - just as likely - someone has taken more pills than they should have, and they want to know if it's a problem

    • Even with cars, it took quite some time for it to become apparent what type of regulation was necessary. With AI, we are looking at so many unknowns that it probably is not possible to craft the necessary regulations at this time.

      Furthermore, it is not even clear to me that things like image generators and LLMs require heavy handed regulation. They do seriously threaten our current economic model, but the solution may entail changing the economic model rather than crippling automation for the express reason o

      • it is not even clear to me that things like image generators and LLMs require heavy handed regulation.

        And I said:

        I swear, there is a concerning amount of tunnel vision from so-called tech innovators. They only seem to think "heavy regulation" or "no regulation". You know those aren't the only two options, right?

        • I see what you're saying, and I sort of stepped in it by using the "heavy" word when you hedged against it, but I'm not sure we even know what "moderate regulation" or whatever you want to call it would be.

          I just don't think we have any idea what type of regulation that AI requires. The worst laws always come about when politicians feel the need to DO SOMETHING about a problem when they understand neither the problem nor the consequences of any proposed solutions.

    • Big players have already been working for hundreds of years to assure that there is no real way to stop them from working in politics and therefore in regulation. In the US you have superpacs, freezers full of cash, third party advertisements, conversations in bathrooms, politicians allowed to do inside trading, that law in the 70s which made the voting record for senators public* and a population brainwashed to support a two party system. That's off the top of my head.

      *good luck looking THAT up in a web s
      • Big players have already been working for hundreds of years

        And yet, these "small players" are speaking up already.

        They're going to get ignored if ALL they can provide is "no regulation because FREEDOM". They are talking themselves out of a position at the table with such a non-negotiable position.

  • I think it's unnecessary, since I'm unconvinced that AI has created any new way of hurting people that wasn't already illegal.

    But it's also all highly subjective and so mainly an issue of enforcement, similar to e.g. anti-trust law, or even murder vs justifiable homicide.

  • But 4chan will ignore it.
  • Big tech wants a moat against upstart competition and future disruptive legislation.

    I'm concerned that they're stuck in the "We must do something, and this is something" mindset. I don't love a kneejerk response, but that isn't terrible.

    The real trouble is that meme's cousin "We already did something about that, no reason to touch it again."

  • Big tech has first mover advantage in AI. They are lobbying governments to create barriers to entry via regulations to prevent new competitors from entering the market. It's a form of corporate / government corruption.

    • by ceoyoyo ( 59147 )

      Most of the companies people call "tech" aren't tech at all, they're advertising that happens to use some software to capture an audience. So your first sentence is more accurately "big advertising has first mover advantage in AI." And wants to keep it.

  • Start up a company in a "flag of convenience" country with no regulations and run your AI from there.

    • by DarkOx ( 621550 )

      Nobody really cares about that.

      Either you will stay too small to matter if you do, or they will make sure you are unbanked, one way or another.

  • ... for rich companies who can hire armies of lawyers to deal with it
    It kills the small companies

  • Big tech is the only one who has the financial resources to deal with the costs of regulatory compliance so they're using that as a weapon against startups to limit competition. Big tech can afford the lobbyists to craft the regulation to their liking.

  • by stikves ( 127823 ) on Thursday November 09, 2023 @12:28PM (#63993417) Homepage

    There is a thing called "regulatory capture", and it is already happening.

    Remember all the hype about ChatGPT? They were not very keen on regulation at first. But as soon as some viable open-source opposition came up, they went to Washington and all the other capitals to talk about the potential perils of unchecked AI. (Btw, that open-source alternative came from Facebook of all places, and is available to test on your own local PC: https://huggingface.co/blog/ll... [huggingface.co] . I would recommend looking up "quantization" and "llama.cpp" to begin the process.)

    They don't have a "moat" against the upcoming competition, and they need the government power to protect their interests.
    (Again, this is Google, openly admitting the lack of that moat: https://www.semianalysis.com/p... [semianalysis.com])

    Unfortunately for us, the government is happily complying and will make it extremely expensive, even if not outright impossible to compete against the "big tech". (If you think entities with billions in pockets will be adversely affected by any regulation more than their competitors, I have a few bridges to sell to you).
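    For the curious, the core idea behind that "quantization" keyword is simple enough to sketch in a few lines. (llama.cpp's actual formats are block-wise and considerably more elaborate; this toy int8 version only illustrates the principle.)

```python
# Quantization in miniature: store each float weight as a small integer
# plus one shared scale factor, trading a little precision for 4x less memory.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127  # map the largest weight to 127
    q = [round(w / scale) for w in weights]     # each entry fits in one byte
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.45, 0.0, 0.77]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Worst-case rounding error is bounded by half the scale step.
print(max(abs(a - b) for a, b in zip(weights, restored)))
```

    The punchline is that a model's weights tolerate this rounding surprisingly well, which is what lets billion-parameter models squeeze onto consumer GPUs and even CPUs.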

  • Out of the gate, the messaging regarding LLMs from tech companies like "OpenAI" and Google was a loud, deliberate, global public scare campaign designed to manufacture public support for legislation protecting them from what they knew from the very beginning would be a competitive, open-source-dominated landscape.

    When push came to shove "OpenAI" has revealed the true nature of its desperate longing for regulation by trying to water down EU legislation and advocating strengthening of liabili

  • They want one set of rules for the entire country, not have to tweak it for each state, township etc.
