

OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance (engadget.com)
OpenAI has banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool. Engadget reports: The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program those documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.
"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."
According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times. Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review where it claims to have written phishing emails on behalf of clients in China.
Is it really "open" (Score:1)
Re: (Score:3)
China couldn't care less at this point.
They have their own Deepthroat language model, one step ahead of the game.
Re: (Score:2)
Re: (Score:2)
OpenAI has not been open since they refused to open-source GPT-3. They got dollar signs in their eyes when they saw people paying money to use LLMs.
Ignoring the enemy within (Score:5, Insightful)
Can we expect them to take similar action when the US government wants to monitor social media?
If not, does anyone care to flail around trying to defend the double standard?
No privacy from OpenAI products (of course) (Score:1)
We all know that use of ChatGPT is monitored, analyzed, etc. We know (I know) that Intelligence has a hook into it as well. Now they are pretty much admitting it. I did a few tests, querying on a specific subject, then things started failing, breaking -- even the AI intimated there may be interference (go figure!).
We have big players like OpenAI and Microsoft vying for "regulation" -- which is really so they can control it, and we have bread crumbs. Now, Musk is involved (gag). This is another reason
Re: No privacy from OpenAI products (of course) (Score:2)
Redundant (Score:2)
> social media surveillance tool
Why are you repeating yourself?
Nothing wrong. (Score:1)
Why shouldn't they do that? Such things are a clear threat, so it makes sense that they want to know about them.
Wasted opportunity (Score:2)
It would have been so much nicer to mess with the results to subvert the operation.
Perhaps that would have been too big a business risk, though. Maybe they could've brought the CIA along, or just blamed them anyway.
were they refunded? (Score:1)
Always give an adversary something to find (Score:3)
I have little doubt that OpenAI found it because they were meant to find it. Consider: do you really think the mere novices at OpenAI are any match for the well-trained, highly educated, seasoned professionals working for Chinese intelligence?
This was a head fake, and OpenAI fell for it.