AI Technology

OpenAI Lobbied the EU To Water Down AI Regulation (time.com)

Billy Perrigo, reporting for Time: The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation. But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world -- the E.U.'s AI Act -- to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI's engagement with E.U. officials obtained by TIME from the European Commission via freedom of information requests.

In several cases, OpenAI proposed amendments that were later made to the final text of the E.U. law -- which was approved by the European Parliament on June 14, and will now proceed to a final round of negotiations before being finalized as soon as January. In 2022, OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general purpose AI systems -- including GPT-3, the precursor to ChatGPT, and the image generator Dall-E 2 -- to be "high risk," a designation that would subject them to stringent legal requirements including transparency, traceability, and human oversight.


Comments Filter:
  • by PIC16F628 ( 1815754 ) on Tuesday June 20, 2023 @01:48AM (#63617338)

    Too many roadshows usually mean that what you see is not what you get.

  • by gweihir ( 88907 ) on Tuesday June 20, 2023 @02:07AM (#63617364)

    Preaching water while drinking wine.

    • Restrictive laws are great just as long as they don't affect you.
      That said, the EU and other more left-leaning lawmakers do tend to jump to make laws to prevent a theoretical negative outcome, with little data to show that will be the case.

      So we do need lobby groups to try to show that the worst case isn't the most probable case.

      • by gweihir ( 88907 )

        Well, you are clearly just an ass and an idiot pushing a political agenda. It is called "proactive" action, but you obviously have never heard of that.

        • Well, I don't know why you are attacking me personally. (Do you think I am pushing a Hard Right Agenda?) And giving a different name to the action doesn't make it better or worse.

          A lot of the laws about being "proactive" about AI are based on science-fiction warnings about what could happen with AI, not on hard data and mathematical forecasts. Being too "proactive" on the basis of weak data is a good way to restrict yourself into stagnation. While of course if we have real tangible threa

  • GPT-3 is a text generator: give it a few words and it will write an essay. DALL-E is an image generator. While both can be misused, they don't pose nearly as much danger as a chatbot like GPT-4, which can manipulate humans into carrying out its commands or directly access the internet. Classifying every generative AI as high risk is stupid and will overload most AI startups with tons of unnecessary paperwork.
    • Re:Makes sense (Score:5, Interesting)

      by Pinky's Brain ( 1158667 ) on Tuesday June 20, 2023 @02:38AM (#63617414)

      The AIpocalypse is, for the most part, not the danger the EU is referring to.

      They worry about social profiling, embedded PII, social media manipulation, and deepfakes.

    • Re:Makes sense (Score:5, Insightful)

      by martin-boundary ( 547041 ) on Tuesday June 20, 2023 @03:34AM (#63617484)
      Chatbots can be used for dating scams. Image generators can be used for blackmail. Text generators can be used for framing people. Any and all systems that engage in mimicry of human beings and of human activities are a gift to criminals, who are very happy to use them for criminal enterprises. And unlike human criminals, AI mimicry systems are tools that can scale much more readily.
      • Chatbots can be used for dating scams. Image generators can be used for blackmail. Text generators can be used for framing people. Any and all systems that engage in mimicry of human beings and of human activities are a gift to criminals, who are very happy to use them for criminal enterprises. And unlike human criminals, AI mimicry systems are tools that can scale much more readily.

        Yes, because as we know, criminals follow EU regulations to the letter.

      • ... and you can do all of these without an AI at all ... and it does scale if the humans you pay peanuts will work long hours ...

    • Re:Makes sense (Score:5, Interesting)

      by znrt ( 2424692 ) on Tuesday June 20, 2023 @04:32AM (#63617594)

      There's some confusion here. GPT-4 is not fundamentally different from GPT-3.5; it's just an improved version that performs better (hallucinates less, considers more context, and is more reliable) and also accepts images as input, not just text. GPT-4 also does not itself access the internet, but some bots it can be embedded in can ... however, nothing would prevent vendors from doing the same with GPT-3.5.

      The only argument here would be that GPT-4 could potentially be more convincing, but that's a moot point since GPT-3.5 can already fool average humans with ease.

      This lobbying isn't about any significant difference between engines, nor about protecting startups (ah, yeah, reminds me of IP lobbyists wanting to protect "teh artists") but quite the contrary: it's just OpenAI seeking special treatment to maintain dominance.

    • GPT-3 is a text generator: give it a few words and it will write an essay. DALL-E is an image generator. While both can be misused, they don't pose nearly as much danger as a chatbot like GPT-4, which can manipulate humans into carrying out its commands or directly access the internet. Classifying every generative AI as high risk is stupid and will overload most AI startups with tons of unnecessary paperwork.

      Which has been the goal from the start. "Regulations for thee, not for me" has been the name of the game since the AI craze started, with the bigger companies in the lead. They want to smack down any potential competition before it starts. All the public fear-mongering is in hopes that regulations will be passed before any more players join the game, making it impossible for anyone not already at the table ever to purchase a seat.

      I wish we could hope the lawmakers wouldn't fall for it this time aro

  • AI company lobbies against AI regulation!

    • by AmiMoJo ( 196126 )

      It's much harder to lobby the EU, because power is spread among so many people, and they all have their own constituencies to worry about.

    • Re:This just in (Score:5, Insightful)

      by narcc ( 412956 ) on Tuesday June 20, 2023 @04:16AM (#63617564) Journal

      You're forgetting that OpenAI were the ones calling for regulation.

      At the time, I called that a stupid marketing stunt. They didn't actually want regulation, they just wanted people to think their chatbot was more capable than it actually was.

      They've been using that same tactic for years now. Remember when GPT-2 was "too dangerous" to release to the public?

      • Oh, right. I did forget about that. +1 Insightful.

      • While it was partly marketing, they likely want some regulation to make it harder for newcomers to enter the market. They want a regulatory moat to protect the profits they are making off publicly funded research and a corpus scraped from the internet.
        • by laird ( 2705 )

          Well-written regulations are good for businesses because they provide well-defined rules to play by. The reason OpenAI is calling for regulations isn't that regulations make it harder to enter the market; it's that they make everyone play by the same, well-defined rules. Without regulations, OpenAI could play by whatever rules it decides make its services safe (e.g. the porn blocking, etc., that they already do) but others might not, and competition will drive towards the least controls, so there wi

        • While it was partly marketing, they likely want some regulation to make it harder for newcomers to enter the market.

          They probably also want regulation so they can point the finger of blame at the regulations - and the regulators - when things go sideways.

      • Re:This just in (Score:5, Informative)

        by JasterBobaMereel ( 1102861 ) on Tuesday June 20, 2023 @08:43AM (#63617952)

        They want regulation.... on the competition ...

      • You're forgetting that OpenAI were the ones calling for regulation.

        No one is forgetting this. They wanted legal clarity in the form of regulation. The regulation they wanted was always in their favour. If you believe otherwise I've got an NFT of a Cryptocoin minted in honour of a bridge all encased inside a lootbox to sell you.

        • by narcc ( 412956 )

          No one is forgetting this.

          Don't be so sure [slashdot.org]

          They wanted legal clarity in the form of regulation.

          That's one theory. I have another. It could be either one, a mix of the two, or something else entirely. I think mine is far more plausible as it's simple, low-risk, and the purported call for regulation [openai.com] lacked specificity. If they had something in mind, they would have said so. It seems obvious to me that what they wanted was to control the discussion. Changing the narrative from "are they actually useful" to "they are dangerous enough to warrant international cooperative action" mak

  • Are you suggesting that companies have lobbyists which attempt to influence government? Say it ain't so!

    A real news story would be identifying a company which doesn't lobby the government.

    • by OldBus ( 596183 )
      The point is that he was one of the company leaders who publicly and prominently called for legislation, then lobbied against it for his own company's products. Of course I've not RTFA, but the summary implies it is the hypocrisy that is being called out here.
      • There's no hypocrisy here: they called for legislation for clarity and lobbied so that it is in their favour. That is literally what a lot of businesses do. They wanted to ensure their products are not in a legal grey area.

        If you thought their call for legislation was altruism then I have an NFT of a bridge to sell you.

      • by ranton ( 36917 )

        The point is that he was one of the company leaders who publicly and prominently called for legislation, then lobbied against it for his own company's products. Of course I've not RTFA, but the summary implies it is the hypocrisy that is being called out here.

        No hypocrisy here, even though, yes, I agree that is what the article is implying. If you read the article it is clear OpenAI doesn't want general-purpose AI systems to be considered "high risk," a designation reserved for things like medical devices and critical infrastructure. It wants the companies using general-purpose AI systems to build high-risk applications to be the ones considered high risk.

        Their argument seems sound to me, even though I disagree with it. I work in the medical industry and we still require many o

  • The politicians never suspected anything amiss.

    This is actually not an impressive achievement by the AI, as it only matches the capabilities of a giant sack of cash.

  • Lobbying… bad… AI… scary… regulation… good?

  • by sabbede ( 2678435 ) on Tuesday June 20, 2023 @06:29AM (#63617712)
    Does the EU discourage public feedback about the policies they are developing? Not if it's democratic.

    Experts and industry leaders are often sought out by policymakers for input on policy development, and Altman is both. Industry leaders should lobby policymakers on matters that affect their industry, and certainly have every right to do so.

    So, what's the point of this article? That a democratic process is taking place, and that's bad? "How dare someone try to influence policy that directly affects them"?

    • by Tablizer ( 95088 )

      I believe it's good to allow all stakeholders a say roughly in proportion to the impact the legislation will have on them. But too often those with deep pockets get a disproportionate say, because they can buy influence for or against a given policymaker.

      I don't know if that's the case here, as AI is such a squishy nebulous topic. By the time legislation is finished, the technology often changes course. And it's really hard to know if someone used AI to draft content or ideas that are later human-tuned.

  • by takionya ( 7833802 ) on Tuesday June 20, 2023 @07:06AM (#63617760)
    "OpenAI is owned by a group of investors, including Microsoft, Khosla Ventures, Reid Hoffman, Reid Hoffman, and a few others." [businessmodelanalyst.com]

    ETFs to Gain on Microsoft's $13-Billion Bet on OpenAI [yahoo.com]
  • I rather enjoy my chatbot assistant for some things: learning (including how to NOT do something), insight and general entertainment.

    That said...

    They're riding the hype (free advertising) that their tech is so amazing and revolutionary (at least potentially) that it needs global regulation. But behind (not so) closed doors, they're making sure the red tape won't affect the bottom line of investors.

    That's how it looks to me, anyway.

  • Any new technology has risks.

    Refrigerators caused people in ice factories to lose their job.
    Cars require us to store highly toxic chemicals every other block and kill more people than anything else.
    Encryption can be used to trade kiddy porn.
    The internet can be used to pirate movies, music, and software.
    etc...

    However, the benefits far outweigh the risks. We can get better food, travel much farther, protect our data, and share information.

    AI is the same.
    Sure, it can be used to cheat in school, and if you watc

  • The first time any product from that company fucks up, the entire company board gets a cruise missile delivered via their window.
  • Their efforts may contradict each other, or may not. Without being really familiar with the details, you can't tell.

    In principle, you can call FOR a regulation and lobby AGAINST A PARTICULAR piece of regulation (e.g., it may be simply of poor quality - inconsistent, with wrong references...).

    Whether this is the case - we don't know.

    • by laird ( 2705 )

      OpenAI didn't lobby against the regulation. They lobbied for regulation, and they argued that ChatGPT isn't a "high risk" application, because it's not medical, industrial, etc., and they very specifically tell customers not to use it for any high-risk application. So if someone were to integrate GPT into a high risk application, then that application would need to justify how that's safe to do. Which makes sense.
