FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group (theverge.com)
An artificial intelligence-focused tech ethics group has asked the Federal Trade Commission to investigate OpenAI for violating consumer protection rules, arguing that the organization's rollout of AI text generation tools has been "biased, deceptive, and a risk to public safety." From a report: The Center for AI and Digital Policy (CAIDP) filed its complaint today following the publication of a high-profile open letter calling for a pause on large generative AI experiments. CAIDP president Marc Rotenberg was one of the letter's signatories, alongside a number of AI researchers and OpenAI co-founder Elon Musk. Like that letter, the complaint calls for slowing the development of generative AI models and implementing stricter government oversight.
The CAIDP complaint points out potential threats from OpenAI's GPT-4 generative text model, which was announced in mid-March. They include ways that GPT-4 could produce malicious code and highly tailored propaganda as well as ways that biased training data could result in baked-in stereotypes or unfair race and gender preferences in things like hiring. It also points out significant privacy failures with OpenAI's product interface -- like a recent bug that exposed OpenAI ChatGPT histories and possibly payment details to other users.
This isn't an issue (Score:3)
Re: (Score:3)
Luddites (Score:2)
Re: (Score:3)
You know... If generative AI is smart enough to yield malicious code, then it can probably also generate automated analysis of malicious code.
System security software should also evolve using AI techniques, and malicious code cannot do much in modern sandboxed environments such as mobile phones in the first place without also discovering a security bug and a means of exploiting it.
Basically, it seems like the facility to generate malicious code should be useless, because malicious code cannot complete it
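To make the "AI can also analyze malicious code" point concrete, here is a minimal sketch assuming the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the environment; the model name, prompt, and suspicious snippet are illustrative assumptions, not a tool OpenAI actually ships for this purpose.

# Hedged sketch: the same class of model that can write dubious code can be
# asked to explain it. Model name, prompt, and snippet are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

suspicious_snippet = """
import os
for root, _, files in os.walk('/home'):
    for f in files:
        print(os.path.join(root, f))
"""

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name for illustration
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Describe what this code does "
                    "and flag anything that looks malicious."},
        {"role": "user", "content": suspicious_snippet},
    ],
)
print(response.choices[0].message.content)

The point is only that the same API call works in both directions; whether the analysis keeps pace with generation is the open question.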
Re: (Score:3)
That is the flawed anti-gun control argument: It is just the tool. NO. If the tool is an _amplifier_ and/or an _accelerator_, it is NOT "just a tool". It extends capabilities and may well extend capabilities over a threshold below which people would have chosen another course of action. Do you think people would go on a killing-spree in a school just using a kitchen knife or a sharp stick? In principle that is possible. But do they do it? No, they would not because the tooling makes their chances of doing s
Re: (Score:2)
Re: (Score:2)
Do you think that banning or limiting AI is going to stop it? Hardly.
No, but it may limit it, and it may outlaw some applications. I see this as somewhat similar to privacy laws. If you are in the US, you will not have seen what the GDPR achieved. It is not perfect, but it reduces abuse significantly, and there is now a recourse. Sure, the "authorities" often do not care, but as soon as there is a law, you can sue. And you can organize. That is one of the reasons I am a noyb member.
there oughta be a law (Score:3)
What we need to do is outlaw stupid people who think AI speaketh the word of god.
Re: (Score:2)
This is the basis of all AI luddism - 'I know better, but I'm not so sure about you.'
Re: (Score:2)
Re: (Score:2)
You know, ChatGPT is a lot more coherent than "God"...
Still does not make it a good source of information.
FTC does not have that power (Score:5, Interesting)
Re: (Score:2)
AI chatbots are very good at generating false information. This could possibly be spun as deceiving or misleading people about the abilities and powers of the chatbots.
Re:FTC does not have that power (Score:4, Informative)
That is a user deceiving or misleading people, not a deceptive or unfair business practice by OpenAI. The best the FTC could (or should) do in this case is demand OpenAI consistently update their models to combat their use in that manner. Maybe pull the developer licenses of devs who are trying to create versions that are better at generating false information and such. But that's about it.
A hammer is a great tool for driving in nails. But it also makes for a deadly melee weapon in the hands of a bad actor with bad intentions. That doesn't mean we need the government to intervene in the sale or manufacture of hammers.
Politicians (Score:2)
AI chatbots are very good at generating false information.
Not as good as politicians and we don't ban them.
Substitute Social Media ... (Score:2)
... and it's the same story. Money, not ethics, is the driver.
Re: (Score:2)
Re: (Score:2)
Money, not ethics, is the driver.
I agree, and if you look at the signatories you'll see a lot of people working on rival AI products who, I suspect, have a strong financial incentive to see OpenAI slowed down so that they can catch up.
Re: (Score:2)
Money, not ethics or even species survival, is the driver for far too many things. I think we are heading towards a critical threshold where a massive price will come due.
Let's be honest... (Score:5, Interesting)
Re: (Score:2, Interesting)
I don't think it's even that. If you look at the list of signatories it's literally just a list of also-ran AI companies who want to see OpenAI slowed down so they can catch up.
The list is almost entirely made up of the failures of the AI world, it's not even about governments but just a pathetic play by inept competition to try and cripple the lead OpenAI has.
Be under no illusion; if OpenAI is forced to stop for 6 months, these guys signing sure as hell won't, they'll be spending every minute desperately t
Re: (Score:2)
Doesn't seem like a FTC issue (Score:4, Insightful)
The FTC's purpose is to regulate business practices and prevent deceptive or unfair ones that break the law by hindering competition or deceiving consumers.
The FTC is not there to prohibit products you are fearful of due to worries about them potentially being abused or falling into "the wrong hands".
There are of course legitimate concerns, but generally speaking regulators require more to ban a product than fears about how criminals could abuse it. After all, most products have beneficial purposes. Imagine lobbying for the US Industrial Commission in 1890 to ban anyone from making new automobiles shortly after the first one came to market, out of fear that they would cause deaths in vehicle accidents... I guess you could also rely on the support of the horse-taxi companies and railroads to make sure that the threat of possible automated transportation gets vanquished as well.
Re:Doesn't seem like a FTC issue (Score:4, Interesting)
Imagine lobbying for the US Industrial Commission in 1890 to ban anyone from making new automobiles shortly after the first one came to market, out of fear that they would cause deaths in vehicle accidents.
Or suburban sprawl, a reliance on fossil fuels, roadways clogged with commuters, and significantly reduced support of public transportation?
Spend enough of your life sitting in traffic and you might wish they had banned automobiles. Of course, most people imagine that scenario as their big fancy SUV disappearing from their driveway and being stuck in suburban hell with no way to get around, but in the "cars were banned" timeline, if the suburbs still existed they'd have excellent public transportation.
This AI technology is going to kill a lot of jobs. In hindsight, it might end up looking quite a bit like the situation with the automobile - it made the creators of the technology an ungodly amount of money, but didn't necessarily lead to a better world. And just like the automobile, once you've integrated the technology too deeply into your society, it's nearly impossible to say "oops, we kind of screwed up" and get rid of it after the fact.
Re: (Score:1)
Don’t forget vehicular homicides/traffic accidents. Not a day goes by in my state where there isn’t a car crash clogging up one of the highways I drive on.
The thing with progress is that in reality it’s a series of trade-offs and costs. Societies have to seriously consider how new developments will impact their culture, economy, and health. And more importantly, how to mitigate the negative impacts. A lot of “progress” shills tend to be people who are either not going to be neg
Company Name (Score:5, Insightful)
Re: (Score:1)
Re: (Score:2)
Not really. You're just applying one definition to the word "open" because of its technological origin. I assume by open you mean open source. But how I see their name, OpenAI, is more like Open( up )AI( to be more accessible to anyone). By that definition of open, it's not only not misleading at all, but also very true. Because of them, more people are using an AI in some way, shape, or form than ever before.
Re: (Score:2)
Because of them, more people are using an AI in some way, shape, or form than ever before.
Probably not; everyone was using Siri or Google search long before OpenAI came along.
Re: (Score:2)
Barely anybody actually uses Siri or Google like they do ChatGPT, because Siri and Google just aren't capable. They aren't even playing in the same sport.
Re: (Score:2)
People use Siri to answer questions and do tasks. There are some creative souls who use ChatGPT for more than that, but it's still mostly a curiosity. Siri is more accurate at answering questions, and does more useful things.
Re: (Score:1)
Hey, are you aware of a sport named "Modern Pentathlon"?
One of its events is show jumping, an equestrian sport hardly labelled as "modern"...
Re: (Score:1)
Re: (Score:1)
In the civilised world, this shouldn't be possible.
Here in the EU, each country has legislation with strict requirements for any food labelled "bio-".
But back to the original post: The problem is that closed-minded fans of open source automatically replace any "open" with "open-source".
Sure, OpenAI is not open-source. BUT: I can use it, for free - which is a reason for labelling it open, no?
And they provide an API to access it - again, a sign of "openness".
Just open your mind... :-)
Re: (Score:2)
Re: (Score:2)
Garbage, Advertising, and False Information in... (Score:2)
It's the same nature vs. nurture debate, just with AI and data.
These "threats" true for most internet sites (Score:5, Insightful)
The Center for AI and Digital Policy (CAIDP) complaint [caidp.org] claims that ChatGPT is
biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment. [...] The Federal Trade Commission has declared that the use of AI should be
“transparent, explainable, fair, and empirically sound while fostering accountability.”
It seems like these general complaints can be made against basically every search engine, social media site, etc. on the internet. Ironically, these same complaints could also be made against human outputs, and the human outputs (especially for some individuals) are arguably much more "biased, deceptive, and a risk to privacy and public safety."
Re: (Score:2)
Yes, and... It seems like the FTC's comments they are quoting are reasonable but misapplied, and are being misused by that anti-AI political/lobbyist group.
“The Federal Trade Commission has declared that the use of AI should be ‘transparent, explainable, fair, and empirically sound while fostering accountability.’
Specifically the FTC says that here [ftc.gov]. This is regarding businesses making claims about their AI-based technology -- to make predictions, recommendations, or decisions.
OKAY, B
but we cant get in the way of commercial profit (Score:1)
What’s the problem? (Score:2)
The thing lives in a computer; it can't take over the world. A lot of the crap it spews is just wrong. It is slightly faster than googling certain things. This is 100 percent about keeping AI in the hands of people like Elon Musk, who is using AI in his robots and cars. Pretty sure he does not want competition; that is why he is signing on to this stuff. If he wants to ban the rest of us from developing and using AI, he needs to open source his stuff and give it away.
Re: (Score:2)
"The thing lives in a computer it can’t take over the world."
Taking down a few banks, ruining the right stocks, and sending malicious emails could be a good start.
No need for tentacles for that.
" A lot of the crap it spews is just wrong. It is slightly faster than googling certain things. "
Simple things, yes.
Suppose you want to know the relationship between two specific genes and their expression levels in a particular cell type. While you could search for this information on Google, the answer might not be
Same old company sabotage (Score:2)
Throughout time, companies have invented new things that changed industries. Other companies always claim "For the children! For the workers! For safety!" etc. to stop them. It really comes down to "We need time to catch up with our own products so we're not left out of making money on this or miss out on chances to max profits."
If it was really a safety concern, then why not advocate that they should design it to focus on patching security holes and preventing exploits so people can patch their o
STOP IT SO WE CAN CATCH UP! (Score:4)
My perspective is that this is actually the type of bad press that is good for AI, and the myriad of startups popping up all over the place will be getting funding because people are buying into the narrative that above-human intelligence is coming soon (TM)!
I love ChatGPT, I use it every day. But it is still constrained by corpus and compute - yes, it will be able to automate stuff and make pseudo-decisions soon - but we can pull the electricity out of it, or just starve it of corpus.
Also, at the end of the day, the only breakthrough in AI research we are seeing right now is that an ever-increasing amount of compute results in better model performance - we haven't found the equivalent of the "god particle" that explains human reasoning. We are just becoming waaaay more sophisticated at pseudo-intelligence through automation.
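To put that "more compute, better performance" observation in concrete terms, here is a minimal sketch of fitting the usual power-law-shaped scaling curve; the compute and loss numbers are invented purely for illustration and are not from OpenAI or any published scaling-law paper.

# Illustrative only: fit loss ~= a * compute**slope (slope < 0) to made-up
# data points, to show the shape of the "more compute, lower loss" claim.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # hypothetical training FLOPs
loss = np.array([3.2, 2.8, 2.45, 2.15, 1.9])         # hypothetical eval loss

# Fit a straight line in log-log space: log(loss) = slope * log(compute) + log(a)
slope, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)
print(f"loss ~= {a:.2f} * compute^{slope:.3f}")  # slope comes out negative

The exercise only shows that reported gains track added compute; it says nothing new about how reasoning works, which is the commenter's point.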
Re: STOP IT SO WE CAN CATCH UP! (Score:2)
It's the tech sector, grifters, cranks, and VC bros trying to score some cheap PR with the public while also playing off the public's lack of understanding of LLMs. At the end of the day, it's basically (albeit very complex) statistics being posed as the coming of Skynet.
It's also a good way of distracting people from the problems this tech is already causing in the public due to being rushed into the market without safeguards. Issues regarding security (hello prompt inje
CAIDP should be told to mind their own business... (Score:3)
There is no regulation over AI, there is no involvement for the FTC, and who the hell thinks it's okay to tell a company they should not be allowed to create/release a new version of their product?
And who's in this CAIDP? Is it google? Is it other competitors? Are they wanting to cause OpenAI to halt progress so they can have time to play catch up?
Bollocks! I strive for even more AI development. (Score:1)
Is Google behind this? (Score:3)
Wait, What? (Score:2)
They can't just tell businesses what they can and can't do in their industry because competitors want them to stop and let them catch up or some random people are panicking about advancing technology.
They're working against anti-competitive business practices, not for them.
Business threat (Score:1)
Sounds like a great idea (Score:2)
Sounds like a great idea. Let's stop all US research into AI, because it's so ethically dubious and the danger is that people will use it to generate harmful content.
Then only the Chinese, Russians, and North Koreans will be able to generate this harmful content, and we know they will be so impressed by the US gesture that they will follow the US example and desist.
Just wait for the next election to see how restrained they will be.
CAIDP is a pressure group (Score:2)
It was founded and is run by a Lawyer - they are not experts, and have no idea what they are talking about ...
AI is artificial but not intelligent (Score:1)