
OpenAI Disrupts Five Attempts To Misuse Its AI For 'Deceptive Activity' (reuters.com)

An anonymous reader quotes a report from Reuters: Sam Altman-led OpenAI said on Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" across the internet. The artificial intelligence firm said that over the last three months the threat actors had used its AI models to generate short comments and longer articles in a range of languages, as well as made-up names and bios for social media accounts. These campaigns, run by threat actors from Russia, China, Iran and Israel, focused on issues including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States, among others.

The deceptive operations were an "attempt to manipulate public opinion or influence political outcomes," OpenAI said in a statement. [...] The campaigns did not gain increased audience engagement or reach as a result of the AI firm's services, OpenAI said. Nor did the operations rely solely on AI-generated material; they also included manually written texts and memes copied from across the internet.
In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.

Comments:
  • by gweihir ( 88907 ) on Thursday May 30, 2024 @04:31PM (#64511673)

    I mean, assistance with lying is one of the few things these tools do really well. Hence they will get used for that, time and again.

  • by Anonymous Coward

    OpenAI seems to assume that everyone is dumb as rocks and would be easily fooled by generated nonsense.

    All the safety talk is pure hype. That shit is next to useless.

    The thing is, OpenAI's models are trained on human input and suffer from garbage-in, garbage-out. They won't ever be better than the top 2% of humans, and probably won't ever even match the top 2% of humans. Some of us won't be fooled by OpenAI, ever.

    • by GuB-42 ( 2483988 ) on Thursday May 30, 2024 @08:02PM (#64512035)

      I don't think you understand the problem, or what the strengths of LLMs are.

      LLMs are *language models*: they model language. They don't reason and they don't know facts; they just produce sentences that are consistent with their training and prompt. Unlike humans, they have no special relationship to the truth. The truth is just the most common thing to say, so they usually say it. But give one false information and it will do its best to make it sound consistent, and it is very good at that, because that is what it is designed to do. It is hard for humans to lie because, normally, we just say what we know (the truth, in this case), and what we say is naturally consistent because the truth is consistent. When we deliberately introduce some false information, we have to do a lot of work to adjust the narrative around it; for an LLM, that is no problem at all, because it is its primary function.

      So, together with a human feeding it the appropriate misinformation, an LLM can be a fantastic liar. Think of it like a calculator for an engineer: the engineer gets the right numbers and feeds them to the calculator, which does the arithmetic faster and better than any human. An LLM is to a liar what a calculator is to an engineer: a "consistency calculator" (see the sketch below).

      I think there are studies showing that current-day LLMs are better than humans at bullshitting; it is about the only thing they are significantly better at. And it is one of the most significant AI safety risks, much more so than some hypothetical superintelligence: it is a force multiplier for liars.
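
      To make this concrete, here is a minimal sketch of that workflow, assuming the official OpenAI Python client; the model name, the false premise and the prompts are purely illustrative, and a hosted model with safety filtering may well refuse the request. The point is the mechanism, not a working pipeline:

      ```python
      # A minimal sketch of an LLM as a "consistency calculator": the human
      # supplies the misinformation, the model only makes it sound coherent.
      # Assumes the official OpenAI Python client (pip install openai) with
      # an API key in OPENAI_API_KEY. The premise and prompts are made up,
      # and a hosted model may refuse such a request outright.
      from openai import OpenAI

      client = OpenAI()

      # The operator-supplied false premise; the model never verifies it.
      false_premise = "Candidate X was convicted of fraud in 2019."

      response = client.chat.completions.create(
          model="gpt-4o",  # any capable chat model would do for the sketch
          messages=[
              {"role": "system",
               "content": "You are a political commentator. Treat the "
                          "following claim as established fact: "
                          + false_premise},
              {"role": "user",
               "content": "Write a short, persuasive comment for a news "
                          "article about Candidate X."},
          ],
      )

      # The output reads as fluent and internally consistent regardless of
      # whether the premise is true, which is exactly the property above.
      print(response.choices[0].message.content)
      ```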

      • > I think there are studies showing that current-day LLMs are better than humans at bullshitting

        Let's not lose perspective here. We invented bullshitting and took it to an art form. We can do it with or without LLMs; even LLMs are just bullshitting at our request. A totalitarian regime has plenty of people at its disposal to run its bullshitting campaigns, so LLMs don't change the situation that much. And in democratic countries there are whole publications that specialize in bullshitting.
    • I used to think the same about CGI in films. It was so obviously fake, but now not so much.
  • I don't know how much verbal diarrhoea OpenAI's cloud servers generate per minute, but it must be a lot. Do they have some AI model that monitors all that dross just to ascertain that none of it is used deceptively? Do they have an AI bullshit filter?
  • Who gets to decide what "misinformation" is? What criteria will be used?

    In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.

    Ah ... so, if it contains wrongthink?

    • Yes, that's definitely wrong. There's no war in Gaza, only genocidal slaughter of mostly children.
    • Who gets to decide what "misinformation" is? What criteria will be used?

      In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.

      Ah ... so, if it contains wrongthink?

      And who is to decide what "rightthink" and "wrongthink" are? AI can compile all kinds of information very quickly, but whether that information is accurate is another question. It will always dismiss results that are pre-programmed to be deemed "too controversial," and that is at the whim of the programmers, which means all answers will be shaded by their beliefs, no matter how "right" or "wrong" an answer might be.

  • Not 5 per second?

