
AI Models Face Collapse If They Overdose On Their Own Output

According to a new study published in Nature, researchers found that training AI models using AI-generated datasets can lead to "model collapse," where models produce increasingly nonsensical outputs over generations. "In one example, a model started with a text about European architecture in the Middle Ages and ended up -- in the ninth generation -- spouting nonsense about jackrabbits," writes The Register's Lindsay Clark. From the report: [W]ork led by Ilia Shumailov, Google DeepMind and Oxford post-doctoral researcher, found that an AI may fail to pick up less common lines of text, for example, in training datasets, which means subsequent models trained on the output cannot carry forward those nuances. Training new models on the output of earlier models in this way ends up in a recursive loop. In an accompanying article, Emily Wenger, assistant professor of electrical and computer engineering at Duke University, illustrated model collapse with the example of a system tasked with generating images of dogs. "The AI model will gravitate towards recreating the breeds of dog most common in its training data, so might over-represent the Golden Retriever compared with the Petit Basset Griffon Vendéen, given the relative prevalence of the two breeds," she said.

"If subsequent models are trained on an AI-generated data set that over-represents Golden Retrievers, the problem is compounded. With enough cycles of over-represented Golden Retriever, the model will forget that obscure dog breeds such as Petit Basset Griffon Vendeen exist and generate pictures of just Golden Retrievers. Eventually, the model will collapse, rendering it unable to generate meaningful content." While she concedes an over-representation of Golden Retrievers may be no bad thing, the process of collapse is a serious problem for meaningful representative output that includes less-common ideas and ways of writing. "This is the problem at the heart of model collapse," she said.
  • by Waccoon ( 1186667 ) on Thursday July 25, 2024 @09:04PM (#64656334)
    Inbreeding.
    • by Entrope ( 68843 ) on Thursday July 25, 2024 @09:35PM (#64656402) Homepage

      We have another word, too: dupe [slashdot.org].

    • Exactly the first word that came to mind.
      • I encourage LLM developers to intently study the 70 year success of the Congressional Budget Office making estimates which are never anywhere close to correct, yet they succeed in getting funding to produce the next year's financial estimates.

        They could then expand the training data set to economists' estimates in private, public, and academic jobs.

        They could then expand the training data set to NFL quarterbacks' draft picks, initial salaries, and lifetime number of wins.

        We, in general, should do our part in pro

    • by narcc ( 412956 ) on Thursday July 25, 2024 @10:30PM (#64656476) Journal

      "Model collapse" has been a well-known phenomenon for some time, though there wasn't a term for it until recently. I even described it here [slashdot.org] a couple months months before the paper that coined the term [arxiv.org] came out.

      • by Rei ( 128717 )

        It's also heavily misrepresented [bsky.app] in the press. What they're testing is a very aphysical scenario.

        • I'm afraid we've entered a time where "anything negative about AI" is now straight up click- and flamebait, especially (and surprisingly) on Slashdot. So many people just want to believe it is and will always be shit.

          • So many people just want to believe it is and will always be shit.

            People are just judging it based on (extremely bad faith) marketing effort and the marketing department has no clue about what their product can do.

      • "Model collapse" has been a well-known phenomenon for some time

        Indeed.

        Garbage in, garbage out. The expression was popular in the early days of computing. The first known use is in a 1957 syndicated newspaper article about US Army mathematicians and their work with early computers,[4] in which an Army Specialist named William D. Mellin explained that computers cannot think for themselves, and that "sloppily programmed" inputs inevitably lead to incorrect outputs.

      • Why doesn't it happen in real life? Or does it? There's no alien race feeding us new information; humans only have this planet to learn from.
        • We humans ignore most of what we're told.

        • by narcc ( 412956 )

          This doesn't happen to humans because humans do not work the way that various kinds of neural networks work. ANNs are nothing at all like brains. In addition to being capable of reason and analysis, we can set our own goals and evaluate our own progress. We can also produce new information.

      • On the bright side, model collapse will lead towards better AI if anyone takes the time to understand what is going on and why it is collapsing.

        But nobody is being paid to find out why. People are being paid to make the current tech advancement work... which it won't, because it is not a complete model.

        • by narcc ( 412956 )

          It's not as complex as you think. Try thinking about it in terms of information. Your model is going to encode some information about the training data, but it can't encode all of it. In addition to the information you're hoping it will encode, it will also encode noise. To the model, there is no difference between information and noise. When you use that model to generate content, you're going to get something that looks like the training data, but not quite. Even in the best possible case, the outpu
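A minimal numerical sketch of that point, in the spirit of the paper's Gaussian toy example rather than a reproduction of it: each generation fits a normal distribution to a small sample of the previous generation's output, and the tails are gradually forgotten.

```python
import random, statistics

random.seed(1)
mu, sigma = 0.0, 1.0                 # "generation 0": fit to the real data
for gen in range(1, 201):
    samples = [random.gauss(mu, sigma) for _ in range(20)]   # model's own output
    mu = statistics.fmean(samples)                           # refit on that output
    sigma = statistics.pstdev(samples)
    if gen % 50 == 0:
        print(f"generation {gen}: fitted sigma = {sigma:.4f}")
# The fitted sigma performs a random walk with a downward bias; with small
# samples it typically collapses toward zero, i.e. the tails get forgotten.
```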

    • by Z00L00K ( 682162 ) on Thursday July 25, 2024 @10:48PM (#64656498) Homepage Journal

      Or as it happens in electronics - a feedback loop.

    • A better analogy is Donald Trump believing his own bullshit.

      • Not fully understanding the issue but parroting the general idea is more like believing in authoritarianism or populism. It's like a schoolyard game of telephone: everyone sits in a circle, one person whispers the message into the ear of the next, and after it has been passed along 20 times, what comes out the other end is often quite different from what was put in.
    • by gweihir ( 88907 )

      I think "echo-chamber" and maybe "reality exclusion bubble" cover it pretty well.

    • by shanen ( 462549 )

      Nice FP and should be modded Funnier. (And I still think the moderation should be logarithmic.)

    • by Gilmoure ( 18428 )

      Gonna have to keep one human alive to generate new content as seasoning for the AI goo.

  • Sniffing your own farts... like people gobbling their own propaganda.

  • Duh? (Score:5, Insightful)

    by Baron_Yam ( 643147 ) on Thursday July 25, 2024 @09:34PM (#64656400)

    It's data decay, it's random variation without natural selection.

    AI's output is not as true to the original data, so each round of ingesting its own output as new training data makes things worse.

    Humans (and anything on the planet with actual intelligence) have an entire world of reality to provide feedback and help us prune bad pathways from our brains. AI as yet does not have that kind of feedback mechanism.

    I'd be curious to see what would happen if you had an Internet-facing AI and asked people to judge its output and use those judgements as weights for the AI to adjust its training. Obviously it would end up racist in about 5 minutes because people are assholes, but it would still be interesting to see how the AI tuned itself in response to feedback from reality.

    • Since AI companies are raking in billions in free investor cash this year (at least until the bubble bursts), they could hire non-assholes to train their AIs in a more productive manner...

      • I'd also like to see a more natural tabula rasa attempt - put a blank AI in control of a robotic body with sensory feedback and a few pre-programmed instincts, then let it figure things out.

        Treat it like a baby, have it 'unhappy' at low charge and 'happy' when charging but not at full charge and then give it an overseer that can provide a charger connection when it makes a noise. Give it the equivalent training you'd give to a human baby. Try to help it learn how to find the charging port on its own, then

        • by narcc ( 412956 )

          then let it figure things out.

          I'm curious as to how you think AI works. ("AI" can mean a lot of different things, but I'm going to assume you mean a neural network.)

          See if you can get an AI-driven body that will seek out that port and activate it when hungry.

          That's no problem at all. Reinforcement learning has been used for even more complex tasks than this. So why hasn't someone tried to raise your robot baby?

          A neural network is just a function. It takes input and produces output. If you were to write out the function that describes a neural network, it would look like the same operations nested and repeated over and over
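A minimal sketch of the "just a function" point, with made-up weights rather than a trained model: a two-layer network written out as nothing but nested arithmetic.

```python
import math

def layer(xs, weights, biases):
    # one dense layer: weighted sums followed by a tanh nonlinearity
    return [math.tanh(sum(w * x for w, x in zip(ws, xs)) + b)
            for ws, b in zip(weights, biases)]

def tiny_net(x1, x2):
    h = layer([x1, x2], weights=[[0.5, -1.2], [0.8, 0.3]], biases=[0.1, -0.2])
    out = layer(h, weights=[[1.0, -0.7]], biases=[0.05])
    return out[0]

print(tiny_net(0.3, 0.9))  # numbers in, number out; no goals, no self-evaluation
```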

          • Yes, current neural networks are attempting to simulate single systems that make up the parts of our brains, e.g. vision, hearing, syntax, etc.

            LLMs in particular are trying to simulate how we learn language through input. If you study a little cognitive linguistics, particularly construction grammar, it should give you an idea of how this process works in humans & then how the absence of meaning would affect that process in LLMs (Main idea: constructions are form-meaning pairings, no meaning result
            • Patterns of statistical probability (over word/word-group co-occurrence) in a very large corpus of human utterances about the world are a pretty good proxy for meaning.
              They represent which concepts, and which relationships among concepts, humans observe a lot, pay attention to a lot, and thus communicate about a lot.
              The larger the relative occurrence in the corpus, the more likely the relationship is to correspond to a real, widely agreed-upon relationship among things in the real world, and
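A toy sketch of the co-occurrence idea; the corpus and the sentence-level counting are arbitrary choices for illustration, not how any real LLM is trained.

```python
import math

corpus = [
    "the dog chased the ball",
    "the dog fetched the ball",
    "the cat ignored the ball",
    "the senate passed the bill",
    "the senate debated the bill",
]
sentences = [set(s.split()) for s in corpus]

def p(*words):
    # fraction of sentences containing all of the given words
    return sum(all(w in s for w in words) for s in sentences) / len(sentences)

def pmi(a, b):
    # pointwise mutual information: how much more often a and b co-occur
    # than they would if they were unrelated
    joint = p(a, b)
    return math.log2(joint / (p(a) * p(b))) if joint else float("-inf")

print(pmi("dog", "ball"))     # positive: they co-occur a lot
print(pmi("senate", "bill"))  # positive: likewise
print(pmi("dog", "bill"))     # -inf: never seen together in this tiny corpus
```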
              • Except that my point is that they're proxies for meaning without actual form-meaning pairings. When we use language, we make meaning with the language code. However, when LLMs generate text, they assemble morphemes based on statistical probabilities, based entirely on the occurrence of such morphemes in other texts. There is literally no reference or other connection to meaning. LLMs don't make meaning.
            • by narcc ( 412956 )

              current neural networks are attempting to simulate single systems that make up the parts of our brains [...] LLMs in particular are trying to simulate how we learn language through input

              That's complete nonsense. Neural networks are not attempting to simulate brains or parts of brains. Nor are LLMs in any way trying to simulate how humans learn language. The very idea is beyond absurd!

              If you're getting this from your Swiss linguist, know that on the subject of LLMs he is deeply misinformed.

              • The clue's in the name "neural network." What do you think brains are made up of?
                • by narcc ( 412956 )

                  OMG... You can't possibly be serious. "Neurons" in a neural network are not even remotely similar to biological neurons. Artificial neural networks do not work anything at all like biological neural networks. They don't function the same way. They don't learn the same way. They are just about as different as two things can get. The most you can say, given the history, is that they were inspired by biological neurons.

                  As I tried to explain to the parent, NNs are not little electronic organisms. You can't r

                  • It looks like you didn't read/understand anything I wrote either.
                    • by narcc ( 412956 )

                      I read and understood your posts, which is how I know that you have some very confused ideas about very basic concepts. You can take the opportunity to learn or not. The choice is yours.

                    • You clearly haven't understood what I actually wrote because you've made a bunch of erroneous inferences. I think you're arguing with some other voice in your head.
                    • by narcc ( 412956 )

                      You don't seem to be able to articulate these alleged "erroneous inferences". I wonder why that is... such a mystery...

        • Sure, if you fall into the "tabula rasa" fallacy. We're not blank slates when we're born. Anyone who's spent any time around multiple babies can tell you that immediately.
          • If you want to read up on just how predisposed to particular traits & behaviours each of us were when we were born, I recommend starting with "The Blank Slate" by Steven Pinker, but he goes too far, so you'll need to read someone like Michael Tomasello to get a more rounded picture (evolutionary & social view of language learning). Both are necessarily dense writers because they're getting across complex, precise ideas.
      • They could...or they could not and say they did.

      • by gweihir ( 88907 )

        You underestimate a) how much data is needed and b) how difficult it is to build up a large workforce for something like that. Also c), how would you ever identify "non-assholes"? There is no known reliable mechanism for that.

        This is not a problem that can be solved with money.

    • What? AlphaZero learned Go, chess, and shogi from scratch to a superhuman level. It was not feeding on its own data, but rather acting on the board against an opponent.
      • That's not a very convincing example. It only trained "from scratch" after humans figured out the recipe from looking at human examples. Before that, computer AI models tried to train "from scratch" for 50 years without success.

        Look, it's like if you jump into a maze carrying a map with all the dangerous bits marked up. Technically, you can claim that you found the exit on your own, all from scratch, but actually the human who drew the map did all the important work for you, and if you get put into anothe

      • What? AlphaZero learned Go, chess, and shogi from scratch to a superhuman level. It was not feeding on its own data, but rather acting on the board against an opponent.

        It was very much feeding on its own data (playing versions of itself) to improve.

        That is a very different sort of problem though. Chess and Go have clear rules and win conditions so even when it's playing itself it has a strong external evaluation mechanism. That sort of problem probably stretches to things like protein folding (the feed
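A hedged toy contrast (not AlphaZero, just a single tunable number): with an external evaluator, training on self-generated candidates converges; without one, it is just a random walk.

```python
import random

random.seed(0)
truth = 10.0
def external_score(p):               # the "rules of the game": closeness to truth
    return -abs(p - truth)

# (a) self-play style: propose tweaks, keep only those the external evaluator likes
p = 0.0
for _ in range(1000):
    candidate = p + random.gauss(0, 0.5)
    if external_score(candidate) > external_score(p):
        p = candidate
print("with external evaluation:", round(p, 2))      # ends up near 10.0

# (b) collapse style: imitate a noisy sample of your own output; nothing anchors it
q = 0.0
for _ in range(1000):
    q = q + random.gauss(0, 0.5)
print("without external evaluation:", round(q, 2))   # random walk, drifts anywhere
```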

    • Sounds more like the data gets homogenised by the process. I have to say that even today's LLM output is particularly bland & monotonous. They can usually get the general tone of each genre of writing but there's little to no nuance, subtlety, or expression to it. It's almost as if it was written by a heartless robot.

      The analogy would be when you see computer generated animated faces; when they talk & pull expressions, it looks like they've had an overdose of Botox.
    • It's data decay, it's random variation without natural selection.

      AI's output is not as true to the original data, so each round of ingesting its own output as new training data makes things worse.

      Humans (and anything on the planet with actual intelligence) have an entire world of reality to provide feedback and help us prune bad pathways from our brains. AI as yet does not have that kind of feedback mechanism.

      I don’t have to know the laws of a foreign country to know and understand that any 5-year-old in that country will not be my family doctor. No matter how many times they brag about their experience playing house, or watching House.

      Are we seriously too stupid to teach the concept of a child to AI, and filter out that kind of senseless input? Yeah, I get it. We’re trying to teach a child that. Doesn’t dismiss the question. We should be filtering AI’s output from its input to teach it. We te

    • It's data decay, it's random variation without natural selection.

      Why don't humans suffer from this for extended periods? Technically, we all suffer from it momentarily from time to time, but somehow, most of us seem to get past it. Why?

      (I know, but I'm not telling. I enjoy laughing too much.)

    • by gweihir ( 88907 )

      Yes. AI is not only not adding anything to the input data (unlike what AGI would do), it does not (and LLMs cannot) capture it fully. Hence each iteration has lower quality, misses more important detail, and contains more hallucination. And that is one reason why general LLMs have no future: there will soon not be any useful training material left.

  • by Tablizer ( 95088 ) on Thursday July 25, 2024 @09:38PM (#64656406) Journal

    I thought this was long known, as those who scrape the web for AI training content intentionally tried to avoid feeding it existing AI content.

    The running joke was: "How do you know this picture is AI?"

    Answer: "Their hands have 6 fingers each."

    How do you know if you accidentally trained your bot on existing AI?:

    Answer: "Their hands have 7 fingers."

    • by gweihir ( 88907 )

      Yep, same here. And from the abstract of the relevant paper (I still haven't found time to read the whole thing), it is not a "may".

    • My AI colleagues have been talking about this for over a year (two?). Maybe the paper qualified more precisely what is going on.

    • That is an oversimplification. Humans are there too: we prompt, iterate, select outputs, post them online, and comment on them. This makes for enough filtering and feedback. What would be bad is to randomly prompt and generate stuff with no filtering, which is what they did in the paper. They did it the stupid way. The paper doesn't prove a model can't train on its own outputs if the outputs are filtered to be high quality. And all LLMs today train on a mix of filtered web content and filtered synthetic text.
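A hedged sketch of the alternative being described, not anyone's actual pipeline: `generate`, `quality_score`, and all the ratios are hypothetical placeholders for things like human votes, reward models, or deduplication.

```python
import random

random.seed(0)
real_data = [f"real_{i}" for i in range(1000)]          # human-made web content

def generate(model, n):                                 # placeholder: sample the model
    return [f"synthetic_{random.random():.4f}" for _ in range(n)]

def quality_score(sample):                              # placeholder: votes, heuristics,
    return random.random()                              # a reward model, filters, ...

def next_training_set(model, keep_ratio=0.2, synth_fraction=0.3):
    candidates = generate(model, n=5000)
    candidates.sort(key=quality_score, reverse=True)
    filtered = candidates[: int(len(candidates) * keep_ratio)]    # keep the best only
    n_synth = int(len(real_data) * synth_fraction / (1 - synth_fraction))
    # every generation stays anchored in real, human-made data
    return real_data + random.sample(filtered, min(n_synth, len(filtered)))

print(len(next_training_set(model=None)))
```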
    • Yes and no. The preprint of the paper referred to in TFA appeared on arXiv over a year ago in May of 2023 (https://arxiv.org/abs/2305.17493), and it made all kinds of news at the time. The Wikipedia page on "Model Collapse" has referred to this preprint for a while.

      And yes, it was everyone's intuition that, as with making a "copy of a copy of a copy", collapse would occur. These guys just did a detailed study of that, and backed it up with a sh*t ton of math.

      That exact same preprint/paper (with revisions

    • by vadim_t ( 324782 )

      Image AI is unlikely to suffer from such issues.

      For images we've got plentiful sources of quality indicators. There are specialized galleries, tags, upvotes/downvotes, favorite counts, reposts, even just comments that can be quite easily evaluated. Overall this makes it very easy to select for the highest quality content.

      • by gweihir ( 88907 )

        You think? What makes you think AI can actually use those quality indicators? And what makes you think they will stay free from AI data?

        • by vadim_t ( 324782 )

          You think? What makes you think AI can actually use those quality indicators?

          Training is just dumping a bunch of files into a folder and letting the GPUs crunch that.

          So let's say you've got a dataset from Reddit. You probably get something like a bunch of JSON with upvote/downvote info, comments, etc. So you just don't use any images that were heavily downvoted. You exclude anything from the meme type subreddits and concentrate on ones for image appreciation. For a more complex approach you feed the comment
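A hedged sketch of the kind of filtering being described; the field names below are assumptions, not a real Reddit dump schema.

```python
posts = [
    {"url": "img1.png", "subreddit": "itookapicture", "ups": 912, "downs": 14},
    {"url": "img2.png", "subreddit": "memes",         "ups": 55,  "downs": 3},
    {"url": "img3.png", "subreddit": "earthporn",     "ups": 12,  "downs": 230},
]

MEME_SUBS = {"memes", "dankmemes", "adviceanimals"}     # skip meme-type subreddits

def keep(post, min_score=50):
    if post["subreddit"] in MEME_SUBS:
        return False
    return (post["ups"] - post["downs"]) >= min_score   # drop heavily downvoted images

training_images = [p["url"] for p in posts if keep(p)]
print(training_images)                                   # ['img1.png']
```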

    • I thought this was long known, as those who scrape the web for AI training content intentionally tried to avoid feeding it existing AI content.

      The running joke was: "How do you know this picture is AI?"

      Answer: "Their hands have 6 fingers each."

      How do you know if you accidentally trained your bot on existing AI?:

      Answer: "Their hands have 7 fingers."

      And for a brief moment, people with polydactyly felt normal.

  • Take a microphone and a speaker, and if you know what happens if you get them too close, you understand this problem in one dimension.

    You take a video camera wired to a TV and point it at the screen...and now you understand this problem in two dimensions.

    Extrapolation can be dangerous, but I'll go out on a limb and say the pattern repeats all the way up to however many millions of dimensions these AI things have under the hood.

    • You take a video camera wired to a TV and point it at the screen...and now you understand this problem in two dimensions.

      Video feedback is a good example of iterated function systems. A key feature of IFSes is that the result of iterated feedback is independent of the initial input; it only depends on the functions. Like any good math result, this works in fairly general spaces. I noticed the connection between these and AI models back in June 2015, as I was taking a course on fractal geometry, and Google Deep Dream [blogspot.co.uk] was all over the news, and I think it was a key inspiration for my math art project (see the homepage link).
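A minimal sketch of the IFS point with a toy 1-D "frame" instead of video: iterate the same contraction from two very different starting inputs and both end up at the same pattern.

```python
def step(xs):
    # a toy contraction: each "pixel" is pulled toward a blend of its neighbours
    n = len(xs)
    return [0.45 * xs[(i - 1) % n] + 0.45 * xs[(i + 1) % n] + (i % 3)
            for i in range(n)]

a = [0.0] * 8                          # start from a blank frame
b = [100.0 * i for i in range(8)]      # start from a very different frame
for _ in range(200):
    a, b = step(a), step(b)
print([round(x, 2) for x in a])
print([round(x, 2) for x in b])        # identical: the attractor ignores the input
```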

  • This is how neural networks work. And it doesn't forget so much as decrease some weights while increasing others. This is obvious and predictable behavior. Depending on scale, quantizing may collapse or revive insignificant weights.

    For human like intelligence, neural networks need human-like datasets. Therefore, robots will be the key to good AI. Put 1000 different robots in 1000 different classrooms for a few years and treat them as humans and make them play with kids and provide taste sensors for food to
  • Pretty sure it works the same for humans too.
    • by vivian ( 156520 )

      Is this part of the reason why we humans seek novelty and new experiences so much, especially when our minds are young and developing? Also why each generation seems to have a bias towards rejecting the previous generation's thoughts and ideas? Perhaps it's necessary for preservation of our own minds from generation to generation.

  • Real intelligence creates something new, via a rational mind. AI copies and apes intelligence, but does not possess it.

    If you need proof of this fact, the study shows exactly the results we would expect of non-rational AI: a copy of a copy degrades each time, just like on a copy machine...
    • What about error-corrected copies?

    • Real intelligence creates something new, via a rational mind. AI copies and apes intelligence, but does not possess it. If you need proof of this fact, the study shows exactly the results we would expect of non-rational AI: a copy of a copy degrades each time, just like on a copy machine...

      And a memory degrades over time. Along with the human brain responsible for maintaining it. And when humans share those memories in the form of telling stories, the image that can come out of that “copy machine” can be far from the original. Sometimes off by a human lie or seven.

      If AI can be trained on how to avoid shortsightedness, it will be vastly superior to human intelligence regardless of topic.

      • Current LLMs can't learn anything new outside of the slow training process; it's a big limitation when competing against biological beings.
      • If AI can be trained on how to avoid shortsightedness, it will be vastly superior to human intelligence regardless of topic.

        This is an absolutely massive speculation. LLMs are still new and we assume the sky is the limit, but it is much more likely the limit is somewhere below true human intelligence.

        • by gweihir ( 88907 )

          That is one of the misconceptions here: LLM tech is not actually new. The scale of the implementations is new, but nothing else about them really is. There are no easy breakthroughs to be expected from this quite old tech.

    • by gweihir ( 88907 )

      Exactly. And AI cannot even identify everything that is in there, so what it has is always a degraded view of its training data.

  • If a model is trained on its own output indiscriminately, it is like a man running with closed eyes. Eventually he would get hurt. Solution? Open eyes. Duh! Just let the model interact with the world and other models and us.
    • Just let the model interact with the world and other models and us.

      This seems to miss the point. The problem is models interacting with each other without a strong evaluator for quality once models are generating more content than humans.

      We are used to language changing at human scales, we have just enough rizz to more or less keep up with whatever skibidi things the kids are inventing these days.

      But what happens when LLMs are generating and consuming orders of magnitude more information with each othe

    • by gweihir ( 88907 )

      Let it interact with the average moron? Yeah, _great_ idea!

  • This sounds a lot like the Gossip Game that my second grade teacher had us perform. The input phrase was "rubber baby buggy bumpers". By the time it got to my end of the line, it was gibberish. It does sound like a good way to develop the foundational writings of Artificial Scientology, or the ramblings of that fuckwit who keeps promoting the Electric Universe "theory".
  • The paper is not representative of anything one can reasonably expect from the real world.

    They trained a worthless 125M-parameter toy model, and the method of training on model output doesn't resemble anything any competent person would ever do, for obvious reasons. The result of creating loops where generation loss/garbage only accumulates is blatantly obvious.

    The opposite has also been demonstrated where LLMs have been used to successfully improve model quality.

  • See what you get.

  • by Fons_de_spons ( 1311177 ) on Friday July 26, 2024 @02:39AM (#64656720)
    Yet another example where AI behaves like a human. Only listen to people who generate output similar to your own, and crazy stuff happens.
    I'd love to analyse that. Anyone want to fund me for a decade or so?
    • That experiment has been ongoing on social media for years now. I'd say we're already in the phase where the output is becoming gibberish, or at least getting close to it.

    • Our political system is a study in human feedback loops. Many on the right and the left only listen to input from others who already think like they do, and take no input from those with different opinions. Your proposed experiment is already well under way.

  • Been known for a long time not to inbreed.
  • Quickly, let's have one of these collapsed models draw Michael Keaton.

  • This was obvious from the start. GIGO - garbage in, garbage out!
  • Model collapse when using AI-synthesized data is a thing, but model collapse is not necessarily inevitable. Work is being done on how to prevent it. Here's something current: https://x.com/KempeLab/status/... [x.com]
  • Or like editing a lossy format like .jpg or .mp3 over and over again. Do it enough times, and the end product is total garbage.

    Speaks volumes about how shitty so-called AI is, doesn't it?
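A minimal sketch of the generation-loss analogy, assuming Pillow is installed; it just re-encodes a random test image as JPEG over and over and measures how far it drifts from the original.

```python
import io, random
from PIL import Image, ImageChops, ImageStat

random.seed(0)
original = Image.new("RGB", (64, 64))
original.putdata([(random.randrange(256),) * 3 for _ in range(64 * 64)])

img = original
for gen in range(1, 51):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=70)      # lossy encode
    buf.seek(0)
    img = Image.open(buf).convert("RGB")          # decode the lossy copy
    if gen % 10 == 0:
        err = ImageStat.Stat(ImageChops.difference(img, original)).mean[0]
        print(f"generation {gen}: mean pixel error vs original = {err:.1f}")
```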

    • by Gilmoure ( 18428 )

      The speed and efficiency with which AI systems reduce everything to grey static is truly amazing.

  • This is not an AI thing, it is present in every optimization model and evolutionary process. The method/process is dependent on a correct representation of the data, and as soon as that disappears (which it does in every resulting output since the purpose here is to condense real world data to *likely* patterns) the result is increasingly random. You can observe the same fundamentals everywhere: language, law, culture/tradition, and so on. As long as we teach only proper languages without slang, slang will

    • by gweihir ( 88907 )

      Pretty true. The only way we know how to break out of this cycle is applying General Intelligence. Which also means most humans cannot do it either.

  • by Gilmoure ( 18428 )

    Fuck yeah!

  • The name I attached to this idea, years ago, was 'pattern resonance'. If you take an audio amplifier, and feed its output into its input, the signal will quickly morph so as to only have frequencies which, when going through the loop, end up in phase with themselves. Such frequencies reinforce and are amplified, and other frequencies dissipate. If you have some kind of dynamic system with resonant frequencies, and feed them input at those frequencies, the system resonates; inputs with other frequencies diss
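A minimal sketch of the resonance idea: iterating the same linear "loop" is power iteration, so whatever component the loop amplifies most ends up dominating the signal, regardless of the input. The gain curve below is made up.

```python
import cmath, random

random.seed(0)
N = 64
def loop(signal):
    # toy loop gain: amplify the 5-cycles-per-frame component, damp everything else
    spectrum = [sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N)) for k in range(N)]
    gains = [1.05 if k in (5, N - 5) else 0.7 for k in range(N)]
    spectrum = [s * g for s, g in zip(spectrum, gains)]
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

signal = [random.uniform(-1, 1) for _ in range(N)]   # arbitrary starting noise
for _ in range(100):
    signal = loop(signal)
    peak = max(abs(x) for x in signal)
    signal = [x / peak for x in signal]              # keep the amplitude bounded
# after enough passes, the signal is essentially a pure 5-cycle sine wave
```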

  • Never get high on your own supply.

  • Back when I was learning about stacked Restricted Boltzmann Machines, you could visualize the network's "hallucinations" by introducing noise to the latent space (hidden layer), before recreating the input on the backward pass (to the visible layer). You could take this "corrupted" recreation of the input, pass it back into latent space, introduce noise, and then pass it back to the visible layer to see a new hallucination.

    Hinton had a nice animation of this on his U of T website back in the day. You could
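A minimal sketch of the mechanics being described, assuming NumPy; the weights here are random rather than trained, so the "hallucinations" are meaningless, but the corrupt-the-latent-and-reconstruct loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 784, 128
W = rng.normal(0, 0.01, size=(n_visible, n_hidden))      # would come from training
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hallucinate(v, noise=0.5):
    h = sigmoid(v @ W + b_h)                     # visible -> hidden (latent space)
    h = h + rng.normal(0, noise, size=h.shape)   # corrupt the latent code
    return sigmoid(h @ W.T + b_v)                # hidden -> visible ("hallucination")

v0 = rng.integers(0, 2, size=n_visible).astype(float)    # stand-in for an image
v1 = hallucinate(v0)
v2 = hallucinate(v1)     # feed the corrupted reconstruction back in, and repeat
```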
