
YouTube cracks down on low-quality AI videos 🤖📺
The video-sharing platform is tightening its grip on content quality. It has rolled out a new feature that makes it easier for users to flag videos they consider low quality—an update that comes as AI-generated content continues to flood the platform and raise concerns.
YouTube calls for vigilance ⚠️
A 2025 report by Le Journal de Montréal estimates that between 20% and 21% of videos recommended on YouTube may be generated by artificial intelligence. Often labeled as “AI slop,” these automated videos are typically described as repetitive and low-value.
While YouTube hasn’t confirmed these figures, and such estimates remain open to debate, they highlight the scale of a problem that’s becoming harder to ignore.
Users as the new gatekeepers 👥
Viewers are now prompted directly while watching, with questions like “Does this feel like a low-effort AI-generated video?” or “How would you rate the quality of this content?”
Responses range from “Not at all” to “Extremely,” effectively turning users into active participants in content moderation. With this move, YouTube is leaning on its audience to help assess and filter content quality.
This feature is part of a broader strategy. Since 2007, the platform has relied on a mix of human moderation and automated systems, gradually expanding its use of detection tools to support its moderation teams.
A growing wave of low-quality content 👎
The issue is far from marginal. According to an analysis cited by Digital Trends, 21% of the top 500 videos recommended to a new account are considered low quality, while 33% fall into repetitive or shallow content categories.
This trend underscores the downsides of automated content production—often optimized for views rather than value.
Monetization at stake for AI content 💰
YouTube isn’t banning AI-generated videos outright. However, it is tightening its monetization policies: content deemed overly automated or low quality may see reduced ad revenue—or lose monetization entirely.
A promising but imperfect solution ⚖️
While the initiative aims to raise overall content standards, it also introduces new risks. In an increasingly competitive creator ecosystem, the potential for abusive or malicious flagging remains a concern.
User participation, while essential, could quickly become a double-edged sword.
As AI continues to reshape content creation at a rapid pace, YouTube is trying to strike a delicate balance between innovation and control.
The key question remains: will this approach be enough to preserve content quality in the age of AI?





