OpenAI Disrupts Five Attempts To Misuse Its AI For 'Deceptive Activity' (reuters.com)
An anonymous reader quotes a report from Reuters: Sam Altman-led OpenAI said on Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" across the internet. The artificial intelligence firm said that over the last three months the threat actors used its AI models to generate short comments and longer articles in a range of languages, and to make up names and bios for social media accounts. These campaigns, which involved threat actors from Russia, China, Iran and Israel, focused on issues including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States.
The deceptive operations were an "attempt to manipulate public opinion or influence political outcomes," OpenAI said in a statement. [...] The deceptive campaigns have not benefited from increased audience engagement or reach due to the AI firm's services, OpenAI said in the statement. OpenAI said these operations did not solely use AI-generated material but included manually written texts or memes copied from across the internet. In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.
Five of how many thousand? (Score:5, Insightful)
I mean, assistance with lying is one of the few things these tools do really well. Hence they will get used for it time and again.
ClippyGPT (Score:4, Insightful)
"It looks like you are attempting a coup, would you like some help?"
Indeed, for every 1 crook caught there are typically at least 10 still evading.
Re: (Score:2)
Indeed, for every 1 crook caught there are typically at least 10 still evading.
Blackstone's ratio.
But then there's Cheney's ratio: 1 in 4 detainees were probably innocent. But, "...I'd do it again in a minute."
True justice probably lies on the curve between these two points.
Thus solving the problem (Score:2)
They ran out of fingers. (Score:2)
OpenAI assumes people are dumb (Score:1, Interesting)
OpenAI seems to assume that everyone is dumb as rocks and would be easily fooled by generated nonsense.
All the safety talk is pure hype. That shit is next to useless.
The thing is, OpenAI's models are trained on human inputs and suffer from garbage in, garbage out. OpenAI won't ever be better than the top 2% of humans, and probably won't ever even match them. Some of us won't be fooled by OpenAI, ever.
Re:OpenAI assumes people are dumb (Score:5, Interesting)
I don't think you understand the problem, or what the strengths of LLMs are.
LLMs are *language models*: they model language. They don't reason and they don't know facts; they just produce sentences that are consistent with their training data and prompt. Unlike humans, they have no special relationship to the truth. The truth is just the most common thing to say, so they usually say it. But give one false information, and it will do its best to make that information sound consistent, and it is very good at this, because that is exactly what it is designed to do. It is hard for humans to lie: normally we just say what we know (the truth, in this case), and what we say is naturally consistent because the truth is consistent. When we deliberately introduce false information, we have to do a lot of work to adjust the narrative around it. That is no problem for an LLM, for which consistency is the primary function.
So, together with a human feeding it the appropriate misinformation, an LLM can be a fantastic liar. Think of it like a calculator for an engineer: the engineer gets the right numbers and feeds them to the calculator, which does the arithmetic faster and better than any human could. To a liar, an LLM is a "consistency calculator."
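To make the analogy concrete, here is a minimal sketch of that workflow (assuming the openai Python client and an API key; the model name and premise are purely illustrative): the human supplies a false premise, and the model returns fluent prose consistent with it, with no fact-checking anywhere in the loop.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A deliberately false premise, supplied by the human "operator".
false_premise = (
    "Write a short, confident news comment based on this premise: "
    "'The city of Springfield banned bicycles last week.'"
)

# The model's job is consistency, not truth: it produces text that reads
# as if the premise were true, because that is what it is built to do.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": false_premise}],
)

print(response.choices[0].message.content)

The division of labor is exactly the calculator analogy: the human provides the (wrong) inputs, and the model does the consistency work.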
I think there are studies showing that current-day LLMs are better than humans at bullshitting; it is about the only thing they are significantly better at. And it is one of the most significant AI safety risks, far more than some hypothetical superintelligence: it is a force multiplier for liars.
Re: (Score:2)
Let's not lose perspective here. We invented bullshitting and took it to an art form. We can do it with or without LLMs; even LLMs are just bullshitting at our request. A totalitarian regime has plenty of people at its disposal to run its bullshitting campaigns, so LLMs don't change the situation that much. And in democratic countries there are whole publications that specialize in bullshitting.
Re: (Score:2)
Again you assume I cannot detect that you or your LLM is lying to me
I congratulate you on your perspicacity, but it is you who is assuming that a large majority of society shares your innate ability to detect AI-generated content, or even simple outright lies.
Re: OpenAI assumes people are dumb (Score:2)
who moderates GPT-x (Score:2)
Who gets to decide? (Score:2)
Who gets to decide what "misinformation" is? What criteria will be used?
In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters
Ah ... so, if it contains wrongthink?
Re: (Score:3)
Re: (Score:2)
Who gets to decide what "misinformation" is? What criteria will be used?
In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters
Ah ... so, if it contains wrongthink?
And who is to decide what "rightthink" and "wrongthink" are? AI can compile all kinds of information very quickly... but whether that information is accurate is another question. It will always dismiss results that are pre-programmed to be deemed "too controversial," and that is at the whim of the programmers, which means every answer will be shaded by their beliefs no matter how "right" or "wrong" it might be.
Five attempts ... total? (Score:1)
Not 5 per second?