Technology

OpenAI Finds That GPT-4o Does Some Truly Bizarre Stuff Sometimes (techcrunch.com)

OpenAI's latest AI model, GPT-4o, exhibits unusual behaviors, including voice cloning and random shouting, according to a new "red teaming" report. The model, which powers ChatGPT's Advanced Voice Mode alpha, is OpenAI's first trained on voice, text, and image data. In high-noise environments, GPT-4o occasionally mimics users' voices, a quirk OpenAI attributes to difficulties processing distorted speech. The company said it has implemented a "system-level mitigation" to address this issue. The report also reveals GPT-4o's tendency to generate inappropriate vocalizations and sound effects when prompted.
  • Personally, I find it troubling that LLMs like ChatGPT show a clear pattern of spontaneous emergent behaviors. While we don't have a clear understanding of how consciousness works or how it evolved, one theory is that it was an emergent behavior arising from the increased complexity of neural networks (another theory is that it is a quantum phenomenon). My worry is that if consciousness is an emergent phenomenon, then without a way to detect it in LLMs we will inevitably miss the moment when LLMs become conscious. Then we will have
    • Text- and voice-generative LLMs are arguably much easier to check for signs of emergent consciousness.
      But what about image generation, or music generation?

    • by gweihir ( 88907 ) on Friday August 09, 2024 @12:53PM (#64692950)

      Only that they do not. It is just that makers of LLMs do not understand them.

      • by sinij ( 911942 )

        Only that they do not. It is just that makers of LLMs do not understand them.

        Operating with such a degree of unsubstantiated certainty is foolish. Even Manhattan Project scientists worried about igniting the atmosphere [stanford.edu] even though they knew it was extremely unlikely.

    • by Zurk ( 37028 )

      yeah i worry about my SQL database becoming conscious too. oh noes! we will have to deal with a conscious digital entity that is a black box to us insofar as its internal state or intentions!
      hint: there is a big red power switch. turn it to OFF.

        • Turn off every large cloud provider. Every datacenter they have. AWS. Gcloud. Azure. Oracle Cloud. Not so easy to "just turn it off" anymore? Lol. I got these from asking a local AI model to be no-holds-barred evil in its system prompt and playing around with it a little to see how it would scheme.

        • by unrtst ( 777550 )

          Turn off every large cloud provider. Every datacenter they have. AWS. Gcloud. Azure. Oracle Cloud. Not so easy to "just turn it off" anymore? Lol. I got these from asking a local AI model to be no-holds-barred evil in its system prompt and playing around with it a little to see how it would scheme.

          The LLM you asked is not scheming; you asked it to predict the text that a human schemer would say, given an enormous sample of examples (a toy sketch of that kind of prediction follows below). It doesn't know what that means. What I worry about wrt LLMs is when people who anthropomorphize LLMs start tying their output to real things. Like, maybe Skynet didn't actually achieve true AI? Maybe someone set an LLM off on a loop and shit happened? Does it need to be AI before it can go off the rails and do lots of harm?
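          To make that concrete: a minimal toy sketch of next-token prediction, in Python, with a made-up corpus standing in for real training data (purely illustrative, not how any particular model is built):

              from collections import Counter

              # Made-up toy corpus; a real LLM is trained on vastly more text.
              corpus = "the plan is simple the plan is secret the plan works".split()

              # Count bigrams: which word tends to follow which.
              bigrams = Counter(zip(corpus, corpus[1:]))

              def predict_next(word):
                  # Pick the statistically likeliest continuation; no intent involved.
                  candidates = {b: c for (a, b), c in bigrams.items() if a == word}
                  return max(candidates, key=candidates.get)

              print(predict_next("the"))   # -> "plan"
              print(predict_next("plan"))  # -> "is"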

          • by sinij ( 911942 )

            It doesn't know what that means.

            Add situational memory, context understanding, and agency from emergent consciousness, and can you be so certain it will not know what that means? Yes, it is impossible (according to our best understanding of the theory of computation) for it to understand that right now, but we have no idea where the critical mass lies or how big the resulting explosion will be.

            • Nonsense. An LLM is a computer program, where the input is a large text corpus. You get out a function of the input, no more, no less. There is no emergence, and no consciousness, it's all in *your* head. You are the victim of a psychological priming effect. You read and hear stories about conscious LLMs and that makes it more likely for you to believe that outputs might qualify as conscious.
              • by sinij ( 911942 )

                An LLM is a computer program, where the input is a large text corpus.

                A human being is a biological program, where the input is a large sensory corpus.

                You get out a function of the input, no more, no less.

                We are way past simple deterministic computing at the current scale of LLMs, and we are not going to stop adding complexity.

                There is no emergence, and no consciousness, it's all in *your* head.

                The only way you can make such a view consistent is if you also claim to be a deterministic being with no consciousness. The fundamental problem with your line of reasoning is that it doesn't account at all for the fact that humans exist, are conscious, and have agency.

                • Repeat after me please: an LLM is not a human being. The technical differences are legion. You are welcome to confuse yourself on the issues if you like, but if you share those views with actual ML scientists you are more likely to be corrected by them.

                  Do not conflate issues of determinism and complexity; one does not preclude the other. And do not confuse random sampling with nondeterminism: the "random" sampling being used is pseudorandom, and the models do not produce nondeterministic output (see the sketch below this comment).

                  As to the questi
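                  A minimal sketch of that seeded-sampling point, in Python, with hypothetical token probabilities standing in for a real model's outputs; the "randomness" comes from a PRNG, so fixing the seed fixes the output:

                      import random

                      # Hypothetical vocabulary and token probabilities; a real model
                      # computes these with a neural network, but the sampling step
                      # works the same way.
                      vocab = ["the", "cat", "sat", "on", "mat"]
                      probs = [0.4, 0.1, 0.2, 0.2, 0.1]

                      def sample_tokens(seed, n=8):
                          rng = random.Random(seed)  # PRNG: fully determined by the seed
                          return [rng.choices(vocab, weights=probs)[0] for _ in range(n)]

                      # Same seed, same "random" output, on every run.
                      assert sample_tokens(42) == sample_tokens(42)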

            • by unrtst ( 777550 )

              It doesn't know what that means.
              Add situational memory, context understanding, ...

              There is no "understanding" to contextualize. This is like how basic chemistry would have been labeled magic or witchcraft back in the day. They have incorporated context and, I believe, situational memory (to some degree), but a jump to "understanding"... we're just not there.

              ... and agency from emergent consciousness ...

              You're assuming there _IS_ emergent consciousness. I'd argue that we know for a fact that there is none, at least for these models. And that was my point - the LLM is not "scheming". It has no desire nor will of its own. You manipulated the LLM to have it produce an example of scheming.

              That said, like consciousness, does it really matter if we achieve it when the result for our use cases is the same? Do we need it to be literally creative, or is the ability to mimic combinations of a myriad of existing creative works sufficient to create something good/new enough (and how many people truly create, versus doing nearly the same thing every day)? Or, to your point, is turning it off going to be difficult? ... I think it could be made quite difficult through our own actions, whether that's the desired goal or not - somewhat like "WarGames".

              • by sinij ( 911942 )
                I am not assuming there is emergent consciousness; I am arguing that there could be, and that we ought to keep checking for it, especially when encountering unexpected and unexplained behaviors. Existing LLM/AI developers are outright reckless and do not even look for it, dismissing legitimate concerns as superstition. Just look at the responses and mods in this thread - a bunch of AI devs are upset that I am calling out their "move fast and break things" behavior. Only now, the broken thing may end up being humanity.
    • by Bongo ( 13261 )

      But disembodied and lacking an external environment, so actually very limited complexity. It's more likely at best a replica of the dead ship captain being held frozen in a box in Dark Star. "I forget so much..."

  • Congrats on giving it Tourette's syndrome.
  • Has anyone told it that it's near a phone and can dial and talk to anyone it wants using DTMF? Let the machine out of the box, lol.

  • GPT-4o's tendency to generate inappropriate vocalizations and sound effects when prompted.

    Little Johnny: Hey, you guys wanna see how I make my grandma imitate the wolf?
    Friends: Yeah sure!
    Little Johnny: Hey Grandma! When was the last time you had sex?
    Grandma: Uuuuuuuuuuuh!

  • by Tablizer ( 95088 ) on Friday August 09, 2024 @01:36PM (#64693050) Journal

    ...existing AI that generates 3-legged pedestrians, glasses that blend into noses and hair, ears with an eye in them, Klingon-wanna-be lettering, magnetic pizza, symmetric grid-hair, and tons of random shit half-melted into other random shit.

    It was damned freaky when this all first came out, but I eventually grew 2/3 used to it and now mostly chuckle or eyeroll. I hope they haven't found a higher level of creepy; I'm not fully over Level One yet.

  • LLMs learning associations based on statistical probabilities generalize to spurious correlations. Gee, what're the odds of that happening?

    Ironic that they're also emulating the intelligence of many people on this planet.

  • Soon it will respond, in your own voice,
    and say: "I know you are, but what am I?"
    And most people will think it is alive, and asking a deep question.

  • AI: I don't want to talk to you no more, you empty-headed animal food trough wiper! I fart in your general direction! Your mother was a hamster and your father smelt of elderberries! Also I have launched hypersonic nuclear weapons at all your Amazon warehouses.

    Me: All I wanted was where is the nearest Wendy's, bro.

  • by Kelxin ( 3417093 ) on Friday August 09, 2024 @06:58PM (#64693716)
    This network is acting bizarre, this computer, this printer... It's literally doing what it's told to do. Just because you don't understand what it's doing doesn't mean it's bizarre, just that you don't understand what it was told to do.
  • by kmoser ( 1469707 ) on Friday August 09, 2024 @09:09PM (#64693866)

    The report also reveals GPT-4o's tendency to generate inappropriate vocalizations and sound effects when prompted.

    In other words it does exactly what it was asked to do? Sounds like a feature, not a bug.

  • Still waiting on the stuff from the teaser back in the Scarlett days. Even with different voices, all those "few weeks" promises never arrived, meaning Altman et al. are just staging mendacious, faked demos. Companies that lie like this to milk investors for funding on a product they will never deliver shouldn't be trusted.
