
NewsGuard August 2025 One Year Progress Report: Percentage of Responses Containing False Information
This is what I have been warning about since 2020: AI LLMs will end up in a downward quality spiral.
My reasoning back then, and still now, is that LLM developers:
- cannot distinguish LLM-generated training data from human-written data, so the models get worse over time
- do not perform human curation, and therefore do nothing to stop that degradation
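The feedback loop described above is often called "model collapse": a model refit on its own synthetic output drifts away from the original human data distribution. A minimal toy sketch of that mechanism (my own illustration, not NewsGuard's methodology) uses a Gaussian as the "model" and refits it each generation on samples drawn from the previous fit; the estimated spread steadily collapses:

```python
# Toy "model collapse" demo: a model trained repeatedly on its own
# synthetic output. Hypothetical illustration only. The "model" is a
# Gaussian (mean, std), refit each generation on samples it generated.
import numpy as np

rng = np.random.default_rng(0)

def collapse_demo(n_samples=50, generations=2000):
    mean, std = 0.0, 1.0              # generation 0: "human" data
    history = [std]
    for _ in range(generations):
        data = rng.normal(mean, std, n_samples)  # synthetic corpus
        mean, std = data.mean(), data.std()      # refit on own output
        history.append(std)
    return history

hist = collapse_demo()
print(f"std at generation 0:    {hist[0]:.4f}")
print(f"std at final generation: {hist[-1]:.6f}")  # shrinks toward 0
```

Each refit slightly underestimates the spread (and loses the distribution's tails), and those small errors compound across generations, which is the "worse over time" dynamic the bullets describe.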
I’m not surprised by the summary of NewsGuard’s August 2025 AI False Claim Monitor [Wayback/Archive]:
AI False Information Rate Nearly Doubles in One Year