
Training AI on machine-generated text could lead to ‘model collapse,’ researchers warn

“A good analogy for this is when you take a photocopy of a piece of paper, and then you photocopy the photocopy – you start seeing more and more artifacts”

Like an ouroboros – or snake eating its own tail – future AI models trained on the internet, where AI-generated content is expected to become ubiquitous, could end up devouring the problematic work of their predecessors (photo by Martin McCarthy/Getty Images)

Researchers in Canada and the U.K. are warning of a potential snag that could hamper the evolution of artificially intelligent chatbots: their own chatter may eventually drown out the human-generated internet data they devour as part of their training.

To make their pattern-based predictions, generative AI models – including large language models (LLMs) such as ChatGPT and art tools such as Stable Diffusion – draw from massive troves of data on the internet to learn about human text and images.

But the composition of the internet itself is poised to change as AI-generated content becomes increasingly ubiquitous, meaning future AI models will often be learning from the work of their predecessors.

This AI ouroboros – or snake eating its own tail – could spell trouble for future generations of AI chatbots by throwing off their predictions, suggests a new paper co-authored by researchers at the University of Toronto, University of Oxford, University of Cambridge, University of Edinburgh and Imperial College London.

The paper, which has yet to be peer-reviewed, says the effect could ultimately lead to what the researchers call “model collapse.”

""
Nicolas Papernot (supplied image)

“A good analogy for this is when you take a photocopy of a piece of paper, and then you photocopy the photocopy – you start seeing more and more artifacts,” says paper co-author Nicolas Papernot, an assistant professor in U of T’s Edward S. Rogers Sr. department of electrical and computer engineering in the Faculty of Applied Science & Engineering and the department of computer science in the Faculty of Arts & Science.

“Eventually, if you repeat that process many, many times, you will lose most of what was contained in that original piece of paper.”

Papernot and his collaborators constructed toy mathematical models to analyze how this degenerative learning process could theoretically play out.

Today’s AI chatbots are trained on internet-mined data that’s curated to capture a wide spectrum of human information – from the most likely occurrences to the outliers and everything in between.

But Papernot says the proliferation of AI-generated content could “pollute” the internet, so the data pool no longer reflects reality, but what LLMs predict reality to be. When this polluted data is fed into the next generation of chatbots, their predictions will be skewed to overrepresent probable events and underrepresent rare cases, raising concerns about fairness and accuracy.

“It’s kind of a reinforcing feedback loop where you only listen to the majority and you start forgetting whatever things were said less often,” he says. “There can be oddities where something that you start generating is actually not that common, and so it just starts reinforcing its own mistakes.”

These errors are compounded with each new iteration of the model, he says. In late-stage model collapse, the poisoned data from predecessors accumulate and converge around a warped representation of reality that bears little resemblance to our own, rendering a model’s predictions nearly worthless.
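The researchers’ analysis is built on simple statistical toy models. The sketch below is a rough analogue in Python – not the team’s actual code, and the vocabulary size, sample size and Zipf-like distribution are illustrative assumptions – showing how the “forgetting” Papernot describes can arise: each generation’s “model” is just the empirical distribution of a finite sample drawn from the previous generation’s output, and once a rare outcome fails to appear in a sample, it can never be generated again.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "vocabulary" with a few common outcomes and a long tail of rare ones.
vocab_size = 1_000
true_probs = np.array([1.0 / (i + 1) for i in range(vocab_size)])
true_probs /= true_probs.sum()

probs = true_probs.copy()   # generation 0: the "human" data distribution
sample_size = 10_000

for generation in range(1, 31):
    # Each generation "trains" on a finite sample of its predecessor's output...
    counts = rng.multinomial(sample_size, probs)
    # ...and its new "model" is simply the empirical distribution of that sample.
    probs = counts / counts.sum()
    if generation % 10 == 0:
        surviving = int((probs > 0).sum())
        print(f"gen {generation:2d}: {surviving} of {vocab_size} outcomes still generated")
```

Run over many generations, the number of outcomes the model can still produce shrinks: probable events are reinforced while the tail of rare events is permanently lost, which is the essence of the feedback loop described above.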

Papernot says the team’s findings cast doubt on predictions that the current pace of development in LLM technology will continue unabated.

“What we're seeing in the paper is, essentially, right now there is a fundamental issue with the way that models are trained, and that we won't be able to rely so heavily on data from the internet to continue scaling the training of these models,” he says.

One proposal to circumvent the issue could be to train models to distinguish content produced by humans from content produced by machines, but this could prove difficult as advances in the technology make the distinction blurry, Papernot says.

Another strategy would be to invest in curation of high-quality human-generated data, but Papernot says it could be a challenge to co-ordinate such an effort as competition between rival chatbots intensifies.

While he believes chatbots have access to enough human-generated data to continue their development for now, Papernot says early symptoms of LLM-induced data poisoning – such as information manipulation and amplification of bias against marginalized populations – might not be that far off. 

“We have to basically balance the different risks that machine learning is creating and figure out a way to allocate our resources to tackle both the short-term concerns … and how to handle machines that are more and more capable,” Papernot says.

“As we gain more certainty as to where the technology is going, we can better understand how much research to allocate to each of the problems.”
