The Collapse of GPT
Will AI systems perform poorly due to AI-generated material in training data?

Original link: https://cacm.acm.org/news/the-collapse-of-gpt/

The large language model (LLM) field is paying growing attention to the problem of "model collapse." It occurs when an LLM is trained on data that contains its own generated text, and performance degrades because the training data drifts away from real-world, human-generated content: the distribution of tokens (words or parts of words) in the data no longer accurately reflects natural language. While some initially feared that models would collapse quickly, in practice data tends to accumulate: synthetic data is mixed with real data, which slows the decline in performance. That mixing, however, can also hold back improvement unless it is managed carefully. Researchers are exploring ways to mitigate model collapse, including curating synthetic data for quality, using LLMs themselves to assess their own outputs, and incorporating human feedback. Curation aims to narrow the gap between the distributions of real and synthetic data, thereby improving LLM output. Improving data quality and diversity is key to preventing model degradation and preserving fair, comprehensive language generation.

Hacker News discussion (submitted by pseudolus, 2 comments):

behnamoh: I've heard that OpenAI and many AI labs add watermarks [0] to their LLM outputs to detect AI-generated content and filter it out. [0] For example, statistical patterns of word usage and the like.

Rabbit_Brave: These companies have a constant stream of human-created data flowing in. What do you think happens to your conversations and other interactions with the AI? Though the quality may be a bit questionable.
Related Articles
  • (comments) 2024-09-19
  • Deep Learning, Deep Scandal 2025-04-08
  • (comments) 2025-04-08
  • (comments) 2024-02-21
  • (comments) 2024-08-28

  • Original article

    Ever since ChatGPT was released to the public in November 2022, people have been using it to generate text, from emails to blog posts to bad poetry, much of which they post online. Since that release, the companies that build the large language models (LLMs) on which such chatbots are based—such as OpenAI’s GPT 3.5, the technology underlying ChatGPT—have also continued to put out newer versions of their models, training them with new text data, some of which they scraped off the Web. That means, inevitably, that some of the training data used to create LLMs did not come from humans, but from the LLMs themselves.

    That has led computer scientists to worry about a phenomenon they call model collapse. Basically, model collapse happens when the training data no longer matches real-world data, leading the new LLM to produce gibberish, in a 21st-century version of the classic computer aphorism “garbage in, garbage out.”

    LLMs work by learning the statistical distribution of so-called tokens—words or parts of words—within a language by examining billions of sentences garnered from sources including book databases, Wikipedia, and the Common Crawl dataset, a collection of material gathered from the Internet. An LLM, for instance, will figure out how often the word “president” is associated with the word “Obama” versus “Trump” versus “Hair Club for Men.” Then, when prompted by a request, it will produce words that it reasons have the highest probability of meeting that request and of following from previous words. The results bear a credible resemblance to human-written text.
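
    A minimal toy sketch makes that mechanism concrete (a bigram counter stands in for a real model's learned distribution; the corpus and function names below are invented for illustration):

        import random
        from collections import Counter, defaultdict

        # Tiny stand-in corpus; a real LLM learns from billions of sentences.
        corpus = "the president met the president and the press asked the president".split()

        # Count how often each token follows each other token (a bigram model).
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def next_token(prev):
            """Sample the next token in proportion to how often it followed `prev`."""
            counts = following[prev]
            tokens, weights = zip(*counts.items())
            return random.choices(tokens, weights=weights)[0]

        print(next_token("the"))  # most often "president", occasionally "press"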

    Model collapse is basically a statistical problem, said Sanmi Koyejo, an assistant professor of computer science at Stanford University. When machine-generated text replaces human-generated text, the distribution of tokens no longer matches the natural distribution produced by humans. As a result, the training data for a new round of modeling does not match the real world, and the new model’s output gets worse. “The thing we’re worried about is that the distribution of your data that you end up with, if you’re trying to fit your model, ends up really far from the actual distribution that generated the data,” he said.
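
    One hedged way to picture that gap is to compare token frequencies in the two datasets directly; the toy distributions and the use of KL divergence as the gap measure below are illustrative assumptions, not measurements from the article:

        import math

        def kl_divergence(p, q, eps=1e-9):
            """KL(p || q) over a shared vocabulary; eps guards against log(0)."""
            vocab = set(p) | set(q)
            return sum(p.get(t, eps) * math.log(p.get(t, eps) / q.get(t, eps))
                       for t in vocab)

        # Toy, normalized token distributions: the synthetic one has lost tail mass.
        real      = {"the": 0.50, "president": 0.30, "obama": 0.15, "rare_word": 0.05}
        synthetic = {"the": 0.55, "president": 0.35, "obama": 0.0999, "rare_word": 0.0001}

        print(kl_divergence(real, synthetic))  # larger value = training data drifted further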

    The problem arises because whatever text the LLM generates would be, at most, a subsample of the sentences on which it was trained. “Because you generate a finite sample, you have some probability of not sampling them,” said Yarin Gal, an associate professor of machine learning at Oxford University. “Once you don’t sample, then they disappear. They will never appear again. So every time you generate data, you basically start forgetting more and more of the tail events and therefore that leads to the concentration of the higher probability events.” Gal and his colleagues published a study in Nature in July that showed indiscriminate use of what they called ‘recursively generated data’ caused the models to fail.
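
    Gal's point about vanishing tails can be reproduced with a short simulation, sketched here for an invented three-token "language" rather than a real model: each generation is refit on a finite sample drawn from the previous one.

        import random
        from collections import Counter

        random.seed(0)
        dist = {"common": 0.70, "uncommon": 0.25, "rare": 0.05}
        sample_size = 200

        for generation in range(10):
            tokens, weights = zip(*dist.items())
            sample = random.choices(tokens, weights=weights, k=sample_size)
            counts = Counter(sample)
            # Refit: the next "model" is just the empirical distribution of its own sample.
            dist = {t: counts.get(t, 0) / sample_size for t in tokens}
            print(generation, dist)
        # The weight of "rare" drifts generation to generation; once it draws zero hits,
        # its probability becomes 0 and it can never be sampled again, which is the
        # tail-forgetting that drives collapse.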

    The problem is not limited to LLMs. Any generative model that is iteratively trained can suffer the same fate if it starts ingesting machine-produced data, Gal says. That includes diffusion models that create images, such as DALL-E and Stable Diffusion. The issue also can affect variational autoencoders, which create new data samples by producing variations of their original data. It can apply to Gaussian mixture models, a form of unsupervised machine learning that sorts subpopulations of data into clusters; they are used to analyze customer preferences, predict stock prices, and analyze gene expression.

    Collapse is not a danger for models that incorporate synthetic data but only do so once, such as neural networks used to identify cancer in medical images, where synthetic data was used to augment rare or expensive real data. “The main distinction is that model collapse happens when you have multiple steps, where each step depends on the output from the previous step,” Gal said.

    The theory that replacing training data with synthetic data will quickly lead to the demise of LLMs is sound, Koyejo said. In practice, however, not all human data gets replaced immediately. Instead, when the generated text is scraped from the Internet, it gets mixed in with human text. “You create synthetic data, you add that to real data, so you now have more data, which is real data plus synthetic data,” he said. What is actually happening, he said, is not data replacement, but data accumulation. That slows the degradation of the dataset.
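
    The accumulation regime can be sketched by extending the same kind of toy setup (the token pool and sizes below are invented for illustration): each round's synthetic sample is appended to a growing pool rather than replacing the human data.

        import random
        from collections import Counter

        random.seed(0)
        real_data = ["common"] * 140 + ["uncommon"] * 50 + ["rare"] * 10
        pool = list(real_data)

        for generation in range(10):
            counts = Counter(pool)
            tokens, weights = zip(*counts.items())
            # Generate synthetic text from the current pool and append it.
            synthetic = random.choices(tokens, weights=weights, k=len(real_data))
            pool.extend(synthetic)   # accumulate rather than replace

        print(Counter(pool))  # "rare" is diluted, but the real examples never leave the set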

    Simply accumulating data may stop model collapse but can cause other problems if done without thought, said Yunzhen Feng, a Ph.D. student at the Center for Data Science at New York University. As a rule, the performance of neural networks improves as their size increases. Naively mixing real and synthetic data together, however, can slow that improvement. “You can still obtain similar performance, but you need much more data. That means you’re using much more compute and much more money to achieve that,” he said.

    One challenge is that there is no easy way to tell whether text found on the Internet is synthetic or human-generated. Though there have been attempts to automatically identify text from LLMs, none have been entirely successful. Research into this problem is ongoing, Gal said.

    Solving with curation

    There are ways, however, to make the addition of synthetic data less of a problem.

    One approach is to curate the synthetic data to make sure it is of good quality. Some curation happens naturally, Gal said; people do not post everything their chatbot creates to the Internet, weeding out the material that contains false information or simply does not make sense, so that improves the training set.

    Curation can also be a deliberate process to make sure high-quality data goes into a training set. Feng, for instance, has experimented with asking the LLM to assess the quality of its own output. LLMs naturally select the words they think have the highest probability of fitting into a context. In doing so, they internally generate a score rating how confident they are that they are pairing the best words together. That same mechanism can be used to assess already generated text to rate its quality, with low-scoring results removed or the highest-scoring result of several attempts selected as the best. The idea is similar to a method used to fine-tune LLMs called reinforcement learning from human feedback (RLHF), in which people provide examples of good results, thereby pushing the models toward producing similar results. In this case, though, the LLM is generating its own feedback.
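
    A sketch of that curation step might look like the following, where the `curate` helper and the confidence scores are hypothetical stand-ins for the model's own scoring signal, such as an average token log-probability:

        def curate(candidates, score_fn, keep_fraction=0.5):
            """Keep only the highest-scoring fraction of generated samples."""
            ranked = sorted(candidates, key=score_fn, reverse=True)
            keep = max(1, int(len(ranked) * keep_fraction))
            return ranked[:keep]

        # Hypothetical generations and confidence scores (higher = more confident).
        generations = ["plausible answer", "nonsense output", "solid answer", "gibberish"]
        avg_logprob = {"plausible answer": -1.2, "nonsense output": -4.5,
                       "solid answer": -0.9, "gibberish": -5.1}

        kept = curate(generations, score_fn=lambda text: avg_logprob[text])
        print(kept)  # ['solid answer', 'plausible answer']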

    How well that works varies by case, Feng said. The feedback can be improved by having other LLMs assess the same text and combining the results from different models. Including human assessments also improves the outcomes, as does applying some pre-written rules about what the output should look like. Eliminating lower-quality results from the synthetic data makes the generated data more closely resemble original data, he said. “It’s like you have a distribution of the synthetic data, you have a distribution of the real data, and you want to close the gap between them as much as possible,” he said.

    Improving the quality of synthetic data could also help with another challenge LLMs are facing as they try to improve: a dearth of new data on which to train. Scientists from Epoch AI, a research institute that focuses on trends in AI, have predicted the world will run out of new text to train on sometime between 2026 and 2032. With no new data on which to train future generations of LLMs, progress could stagnate. “The interesting question is, can synthetic data lead to not just stagnation but actual improvement in the model?” asked Pablo Villalobos, a staff researcher at Epoch.

    With curation of high-quality synthetic data, he said, the question becomes “whether this can be done iteratively so that each model generates better data that is used to train another model in basically the opposite of model collapse, in some virtuous circle.” He is not yet sure whether such improvement is possible, but sees some signs it could be.

    Other issues arise from training new models on generated data that do not quite reach the level of model collapse. For instance, Koyejo said, synthetic data could increase the likelihood that LLMs will discriminate against people in minority groups. Because any minority is by definition a smaller part of the data distribution, losing the tails of the distribution could make minorities disappear entirely. “Data tends to anchor on majority subgroups,” he said. “It tends to be good at capturing the most popular themes and less good at capturing tails. So less represented demographics can get erased in various ways.”

    While such erasure is something that could happen, he added, the issue has not been well studied. His colleague Diyi Yang, an assistant professor in the natural language processing group at Stanford, said there has been very little research into the question of how model collapse affects diversity issues. “Part of the reason is that, if you think about any existing big models, a lot of the training dynamics or checkpoints of those models actually are not really transparent or publicly available,” she said.

    In the end, Gal argued, model collapse is an important consideration, but not the matter of imminent disaster that some news coverage has made it out to be. “It’s a matter for the tech companies who build these models to be aware of how the models are being used and how the models are being trained, in order to avoid training on synthetic data that they themselves generated.”

