Technology

FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group (theverge.com)

An artificial intelligence-focused tech ethics group has asked the Federal Trade Commission to investigate OpenAI for violating consumer protection rules, arguing that the organization's rollout of AI text generation tools has been "biased, deceptive, and a risk to public safety." From a report: The Center for AI and Digital Policy (CAIDP) filed its complaint today following the publication of a high-profile open letter calling for a pause on large generative AI experiments. CAIDP president Marc Rotenberg was one of the letter's signatories, alongside a number of AI researchers and OpenAI co-founder Elon Musk. Like that letter, the complaint calls for slowing the development of generative AI models and implementing stricter government oversight.

The CAIDP complaint points out potential threats from OpenAI's GPT-4 generative text model, which was announced in mid-March. They include ways that GPT-4 could produce malicious code and highly tailored propaganda, as well as ways that biased training data could result in baked-in stereotypes or unfair race and gender preferences in things like hiring. It also points out significant privacy failures with OpenAI's product interface -- like a recent bug that exposed users' ChatGPT histories and possibly payment details to other users.

Comments:
  • by wakeboarder ( 2695839 ) on Thursday March 30, 2023 @12:42PM (#63411892)
    People who want to generate malicious code can do so anyway; AI is a tool. Yeah, it probably enables script kiddies, but you still have to know what you are doing.
    • This is so dumb. Yeah, let's just say don't do it in America, so people go build this stuff in China or some other country. So dumb. AI is a tool, yes. Harming others with a tool is illegal, not the tool itself.
    • by mysidia ( 191772 )

      You know... if the generative AI is smart enough to yield malicious code, then it can probably also generate automated analysis of malicious code.

      System security software should also evolve using AI techniques, and malicious code cannot do much in modern sandboxed environments such as mobile phones in the first place without also discovering a security bug and a means of exploiting it.

      Basically, it seems like the facility to generate malicious code should be useless,
      because malicious code cannot complete it

    • by gweihir ( 88907 )

      That is the flawed anti-gun-control argument: it is just the tool. NO. If the tool is an _amplifier_ and/or an _accelerator_, it is NOT "just a tool". It extends capabilities, and may well extend them over a threshold below which people would have chosen another course of action. Do you think people would go on a killing spree in a school using just a kitchen knife or a sharp stick? In principle that is possible. But do they do it? No, they would not, because the tooling makes their chances of doing s

      • Do you think that banning or limiting AI is going to stop it? Hardly.
        • by gweihir ( 88907 )

          Do you think that banning or limiting AI is going to stop it? Hardly.

          No, but it may limit it, and it may outlaw some applications. I see this as somewhat similar to privacy laws. If you are in the US, you will not have seen what the GDPR achieved. It is not perfect, but it reduces abuse significantly, and there is now recourse. Sure, the "authorities" often do not care, but as soon as there is a law, you can sue. And you can organize. That is one of the reasons I am a noyb member.

  • by groobly ( 6155920 ) on Thursday March 30, 2023 @12:54PM (#63411934)

    What we need to do is outlaw stupid people who think AI speaketh the word of god.

      Actually that's exactly what they are attempting to do. The people proposing the restrictions are, of course, way too smart to be suckered in themselves... but they don't think so highly of others, such as the companies who will presumably assume AI can tell them who to hire, even if the AI is actually discriminatory.

      This is the basis of all AI luddism - 'I know better, but I'm not so sure about you.'

    • Are these the same people that think it speaks late middle english?
    • by gweihir ( 88907 )

      You know, ChatGPT is a lot more coherent than "God"...
      Still does not make it a good source of information.

  • by sinij ( 911942 ) on Thursday March 30, 2023 @12:56PM (#63411940)
    Regardless of what you think about OpenAI, unless there is evidence of some specific deceptive and unfair business practices [ftc.gov], the FTC does not have the authority to intervene. Should it have such authority? Perhaps, but what would be the objective criteria? It cannot be public opinion.
    • by Dwedit ( 232252 )

      AI chatbots are very good at generating false information. This could possibly be spun as deceiving or misleading people about the abilities and powers of the chatbots.

      • by Bahbus ( 1180627 ) on Thursday March 30, 2023 @02:16PM (#63412192) Homepage

        That is a user deceiving or misleading people, not a deceptive or unfair business practice by OpenAI. The best the FTC could (or should) do in this case is demand OpenAI consistently update their models to combat its usage in that manner. Maybe pull the developer licenses of devs who are trying to create versions that are better at generating false information and such. But that's about it.

        A hammer is a great tool for driving in nails. But it also makes for a deadly melee weapon in the hands of a bad actor. That doesn't mean we need the government to intervene in the sale or manufacture of hammers.

      • AI chatbots are very good at generating false information.

        Not as good as politicians and we don't ban them.

  • ... and it's the same story. Money, not ethics - is the driver.

    • by sinij ( 911942 )
      Absolutely. So should we give the FTC authority to shut down almost any business, only to watch it do so arbitrarily based on political expediency? An AI-controlled reservation for Humanity isn't the only dystopian scenario we ought to worry about.
    • Money, not ethics - is the driver.

      I agree, and if you look at the signatories you'll see a lot of people working on rival AI products who, I suspect, have a strong financial incentive to see OpenAI slowed down so that they can catch up.

    • by gweihir ( 88907 )

      Money, not ethics or even species survival, is the driver for far too many things. I think we are heading towards a critical threshold where a massive price will come due.

  • Let's be honest... (Score:5, Interesting)

    by Last_Available_Usern ( 756093 ) on Thursday March 30, 2023 @12:57PM (#63411946)
    This isn't about stopping small individuals/parties with malicious intent from fast-tracking development of code/propaganda, this is about isolating that capability within the authority of the governments and elite corporations that already have it. It's the whole, "We have nukes, but for everyone else's safety we don't want anyone else to have nukes", mentality all over again.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I don't think it's even that. If you look at the list of signatories it's literally just a list of also-ran AI companies who want to see OpenAI slowed down so they can catch up.

      The list is almost entirely made up of the failures of the AI world, it's not even about governments but just a pathetic play by inept competition to try and cripple the lead OpenAI has.

      Be under no illusion; if OpenAI is forced to stop for 6 months, these guys signing sure as hell won't, they'll be spending every minute desperately t

      • by dbialac ( 320955 )
        So the co-founder of the company in the lead needs 6 months to catch up with itself? Everybody is "behind" except for the one in the lead. In this case, the point is legitimate and the people in the lead are also on the list. We need to analyze what we've built to understand it better. Part of the problem that needs to be solved is stopping smug people from assuming they know it all when in fact they know nothing. Smug people knew for certain that retrograde motion was the way planets worked, until they fou
  • by mysidia ( 191772 ) on Thursday March 30, 2023 @12:59PM (#63411954)

    The FTC's purpose is to regulate business practices and prevent deceptive or unfair ones that break the law by hindering competition or deceiving consumers.

    The FTC is not there to prohibit products you are fearful of due to worries about them potentially being abused or falling into "the wrong hands".

    There are of course legitimate concerns, but generally speaking, regulators require more to ban a product than fears about how criminals could abuse it. After all, most products have beneficial purposes. Imagine lobbying the US Industrial Commission in 1890 to ban anyone from making new automobiles shortly after the first one came to market, out of fear that they would cause deaths in vehicle accidents... I guess you could also rely on the support of the horse taxi companies and railroads to make sure that the threat of possible automated transportation gets vanquished as well.

    • by Powercntrl ( 458442 ) on Thursday March 30, 2023 @02:59PM (#63412332) Homepage

      Imagine lobbying the US Industrial Commission in 1890 to ban anyone from making new automobiles shortly after the first one came to market, out of fear that they would cause deaths in vehicle accidents.

      Or suburban sprawl, a reliance on fossil fuels, roadways clogged with commuters, and significantly reduced support of public transportation?

      Spend enough of your life sitting in traffic and you might wish they had banned automobiles. Of course, most people imagine that scenario as their big fancy SUV disappearing from their driveway and being stuck in suburban hell with no way to get around, but in the "cars were banned" timeline, if the suburbs still existed they'd have excellent public transportation.

      This AI technology is going to kill a lot of jobs. In hindsight, it might end up looking quite a bit like the situation with the automobile - it made the creators of the technology an ungodly amount of money, but didn't necessarily lead to a better world. And just like the automobile, once you've integrated the technology too deeply into your society, it's nearly impossible to say "oops, we kind of screwed up" and get rid of it after the fact.

      • Don’t forget vehicular homicides/traffic accidents. Not a day goes by in my state where there isn’t a car crash clogging up one of the highways I drive on.

        The thing with progress is that in reality it’s a series of trade-offs and costs. Societies have to seriously consider how new developments will impact their culture, economy, and health. And more importantly, how to mitigate the negative impacts. A lot of “progress” shills tend to be people who are either not going to be neg

  • Company Name (Score:5, Insightful)

    by John Smith 2294 ( 5807072 ) on Thursday March 30, 2023 @01:00PM (#63411958)
    I'd be happy if they were forced to change their name to ClosedAI. There is nothing open about OpenAI. It is plainly misleading and would not be allowed in most product sectors.
    • Yeah, same here. That's the main thing I got arse-burn about, really.
    • by Bahbus ( 1180627 )

      Not really. You're just applying one definition to the word "open" because of its technological origin. I assume by open you mean open source. But how I see their name, OpenAI, is more like Open( up )AI( to be more accessible to anyone ). By that definition of open, it's not only not misleading at all, but also very true. Because of them, more people are using AI in some way, shape, or form than ever before.

      • Because of them, more people are using an AI in some way, shape, or form than ever before.

        Probably not, everyone was using Siri or Google search long before OpenAI came along.

        • by Bahbus ( 1180627 )

          Barely anybody actually uses Siri or Google like they do ChatGPT, because Siri and Google just aren't capable. They aren't even playing in the same sport.

          • Barely anybody uses ChatGPT.

            People use Siri to answer questions and do tasks. There are some creative souls who use ChatGPT for more than that, but it's still mostly a curiosity. Siri is more accurate at answering questions, and does more useful things.
    • Hey, are you aware of a sport named "Modern Pentathlon"?
      One of its events is show jumping, an equestrian sport hardly labelled as "modern"...

      • How about some Organic Lettuce, grown with only the best synthetic pesticides in copious quantities?
        • In the civilised world, this shouldn't be possible.
          Here in the EU, each country has legislation with strict requirements for any food labelled "bio-".

          But back to the original post: The problem is that closed-minded fans of open source automatically replace any "open" with "open-source".
          Sure, OpenAI is not open-source. BUT: I can use it, for free - which is a reason for labelling it open, no?
          And they provide an API to access it - again, a sign of "openness".

          Just open your mind... :-)

          • The original meaning of the name OpenAI was that they would make their technology and patents open to the world. Over time they transitioned (intentionally) to a closed model.
    • There is also nothing intelligent about OpenAI. They should just call their company 'A'
  • The same crap comes out; this is not a fault of the underlying algorithms, just the data used to train them. But the only data set big enough is the internet, which is full of garbage, advertising, and false information...

    It's the same nature vs. nurture debate, just with AI and data.
  • by larryjoe ( 135075 ) on Thursday March 30, 2023 @01:11PM (#63411984)

    The Center for AI and Digital Policy (CAIDP) complaint [caidp.org] claims that ChatGPT is

    biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment. [...] The Federal Trade Commission has declared that the use of AI should be
    “transparent, explainable, fair, and empirically sound while fostering accountability.”

    It seems like these general complaints can be made against basically every search engine, social media site, etc. on the internet. Ironically, the same complaints could also be made against human outputs, and the human outputs (especially for some individuals) are arguably much more "biased, deceptive, and a risk to privacy and public safety."

    • by mysidia ( 191772 )

      Yes, and... it seems like the FTC's comments they are quoting are reasonable but misapplied, and are being misused by that anti-AI political/lobbyist group.

      “The Federal Trade Commission has declared that the use of AI should be ‘transparent, explainable, fair, and empirically sound while fostering accountability.’

      Specifically the FTC says that here [ftc.gov]. This is regarding businesses making claims about their AI-based technology -- to make predictions, recommendations, or decisions.

      OKAY, B

  • from the large tech corps etc!
  • The thing lives in a computer; it can’t take over the world. A lot of the crap it spews is just wrong. It is slightly faster than googling certain things. This is 100 percent about keeping AI in the hands of people like Elon Musk, who is using AI in his robots and cars. Pretty sure he does not want competition; that is why he is signing on to this stuff. If he wants to ban the rest of us from developing and using AI, he needs to open source his stuff and give it away.

    • "The thing lives in a computer it can’t take over the world."

      Crashing a few banks, ruining the right stocks, and sending malicious emails would be a good start.
      No need for tentacles for that.

      " A lot of the crap it spews is just wrong. It is slightly faster than googling certain things. "

      Simple things, yes.

      Suppose you want to know the relationship between two specific genes and their expression levels in a particular cell type. While you could search for this information on Google, the answer might not be

  • Throughout time, companies have invented new things that caused changes in industries. Other companies always claim "For the children! For the workers! For safety!" etc. to stop it. It really comes down to, "We need time to catch up with our own products so we're not left out on making money on this or miss out on chances to maximize profits".

    If it was really a safety concern, then why not advocate that they should design it to focus on patching security holes and preventing exploits so people can patch their o

  • by Huitzil ( 7782388 ) on Thursday March 30, 2023 @01:36PM (#63412054)
    Really hard to understand what this group is asking for, particularly since a good portion of the folks signing these letters are doing AI research themselves - but also aligning themselves with less-than-ethical personalities.

    I am more of the perspective that this is actually the type of bad press that is good for AI, and the myriad of startups popping up all over the place will get funding because people are buying into the narrative that above-human intelligence is coming soon (TM)!

    I love ChatGPT, I use it every day. But it is still constrained by corpus and compute - yes, it will be able to automate stuff and make pseudo-decisions soon - but we can pull the electricity out of it, or just starve it of corpus.

    Also, at the end of the day, the only breakthrough in AI research we are seeing right now is that an ever-increasing amount of compute results in better model performance - we haven't found the equivalent of the 'god particle' that explains human reasoning. We are just becoming waaaay more sophisticated at pseudo-intelligence through automation.
    • It's the tech sector: grifters, cranks, and VC bros trying to score some cheap PR with the public while also playing off the public's lack of understanding of LLMs. At the end of the day, it's basically (albeit very complex) statistics being posed as the coming of Skynet.

      It's also a good way of distracting people from the problems this tech is already causing in the public due to being rushed into the market without safeguards. Issues regarding security (hello prompt inje

  • There is no regulation covering AI, so there is no role for the FTC - and who the hell thinks it's okay to tell a company it should not be allowed to create/release a new version of its product?

    And who's in this CAIDP? Is it google? Is it other competitors? Are they wanting to cause OpenAI to halt progress so they can have time to play catch up?

  • Current A.I. offerings are just a small bite. Think of the Adjutant in StarCraft. I would like to be in possession of one unit, for personal use.
  • by nuckfuts ( 690967 ) on Thursday March 30, 2023 @02:38PM (#63412276)
    Yes - a ban! This sounds like something Google would love to see, given how they were blindsided by the release of ChatGPT and are scrambling to produce anything that competes.
  • Who do they think the FTC is?

    They can't just tell businesses what they can and can't do in their industry because competitors want them to stop so they can catch up, or because some random people are panicking about advancing technology.

    They're working against anti-competitive business practices, not for them.
  • It's a clear threat to the FoxNews business model.
  • Sounds like a great idea. Let's stop all US research into AI, because it's so ethically dubious and the danger is that people will use it to generate harmful content.

    Then only the Chinese, Russians, and North Koreans will be able to generate this harmful content, and we know they will be so impressed by the US gesture that they will follow the US example and desist.

    Just wait for the next election to see how restrained they will be.

  • It was founded and is run by a lawyer - they are not experts, and have no idea what they are talking about...

  • Since AI is the big new thing, everybody seems to want to embrace it. While it can look up everything on the Internet, that doesn't mean it can draw the best conclusions, since the Internet is full of misinformation and the opinions of less-than-smart people. Elon Musk was an early adopter of AI for self-driving cars, and it failed. A number of people died. Now he is warning us all to be careful with this so-called technology, and for good reasons, which he has outlined.
