- AI-generated videos often lose coherence over time due to a problem called drift
- Models trained on perfect data struggle when handling imperfect real-world inputs
- EPFL researchers developed an error-recycling training method to limit progressive degradation
AI-generated videos often lose coherence as sequences become longer, a problem known as drift.
This problem occurs because each new frame is generated based on the previous one, so any small errors, such as a distorted object or a slightly blurred face, are amplified over time.
Generative models trained exclusively on flawless datasets have difficulty handling imperfect inputs, which is why generated videos typically become unrealistic after a few seconds.
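The compounding effect described above can be sketched with a toy error model. This is purely illustrative: the function name and constants are assumptions chosen to show the shape of the problem, not measurements of any real video system.

```python
def rollout_error(n_frames, per_step_error=0.01, amplification=1.05):
    """Toy model of drift: each generated frame inherits the previous
    frame's error slightly amplified, plus a small fresh error of its own.
    The constants are illustrative assumptions, not measured values."""
    err = 0.0
    history = []
    for _ in range(n_frames):
        err = err * amplification + per_step_error
        history.append(err)
    return history

errors = rollout_error(200)
# Because each step multiplies the inherited error before adding new
# error, the total grows roughly geometrically with sequence length
# instead of staying bounded -- the longer the rollout, the worse the drift.
```

The key point is that the error is multiplicative across steps: even a tiny per-frame imperfection dominates after enough frames, which is why longer videos degrade faster than short ones.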
Recycling errors to improve AI performance
Generating videos that maintain logical continuity over long periods remains a major challenge in this field.
Now, researchers from EPFL’s Visual Intelligence for Transport (VITA) laboratory have introduced a method called error-recycling training.
Unlike conventional approaches that try to avoid errors, this method deliberately introduces the AI’s own errors into the training process.
By doing so, the model learns to correct errors in future frames, limiting progressive image degradation.
The process involves generating a video, identifying discrepancies between the frames the model produced and the frames it was expected to produce, and retraining the AI on these discrepancies to refine future generation.
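The generate/compare/retrain loop can be sketched in miniature. This is not the EPFL implementation: the "model" here is a single weight predicting the next frame value, and every name and constant is a hypothetical stand-in, chosen only to show how training on the model's own rollout errors counteracts drift.

```python
def predict(weight, prev):
    """One-step prediction: the next 'frame' from the previous one."""
    return weight * prev

def rollout(weight, start, n):
    """Autoregressive generation: each output is fed back as the next
    input, so any error in the weight compounds over the sequence."""
    seq = [start]
    for _ in range(n):
        seq.append(predict(weight, seq[-1]))
    return seq

def error_recycling_step(weight, target_seq, lr=0.005):
    """Generate from the model's OWN outputs, measure the discrepancy
    against each expected frame, and update on those discrepancies."""
    generated = rollout(weight, target_seq[0], len(target_seq) - 1)
    for t in range(1, len(target_seq)):
        err = generated[t] - target_seq[t]  # produced vs. expected frame
        # Squared-error gradient, using the model's own (possibly
        # drifted) previous frame as input -- the "recycled" error.
        weight -= lr * err * generated[t - 1]
    return weight

target = [1.0] * 20   # a stable sequence the ideal model would hold
w = 0.9               # a slightly wrong model: its rollouts decay (drift)
for _ in range(200):
    w = error_recycling_step(w, target)
# After repeatedly training on its own rollout errors, w approaches 1.0
# and long rollouts stay close to the stable target.
```

The design point the sketch captures is the contrast with conventional training: instead of always conditioning on clean ground-truth frames, the update is computed from the model's own drifted outputs, so the model learns to correct exactly the errors it actually makes at generation time.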
Current AI video systems typically produce sequences that remain realistic for less than 30 seconds before shapes, colors, and motion logic deteriorate.
By integrating error recycling, the EPFL team has produced videos that resist drift for longer durations, potentially relaxing the strict duration limits of generative video.
This advance allows artificial intelligence systems to create more stable sequences in applications such as simulations, animation, or automated visual storytelling.
Although this approach addresses drift, it does not eliminate all technical limitations.
Error recycling increases computational demand and may require continuous monitoring to avoid overfitting to specific errors.
Large-scale deployment may face resource and efficiency constraints, as well as the need to maintain consistency across diverse video content.
It remains uncertain whether feeding AI its own errors is actually a good idea, as the method could introduce unforeseen biases or reduce generalization in complex scenarios.
VITA Lab’s development demonstrates that AI can learn from its own mistakes, which could extend how long generated videos remain coherent.
However, it is still unclear how this method will perform in external controlled tests or in creative applications, suggesting caution before assuming that it can completely solve the drift problem.