A growing number of fraudulent AI detection tools are mislabeling authentic content as machine-generated and then charging users to “humanise” it, raising concerns among experts about a new layer of online deception.
An investigation by Agence France-Presse (AFP) found that several widely accessible text detectors falsely identified human-written material — including news reports and literary works — as AI-generated. The tools then attempted to monetise these errors by offering paid services to “fix” the content.
Among the platforms examined were JustDone AI, TextGuard, and Refinely. In one case, JustDone AI flagged a human-written report on the US-Iran conflict as “88 per cent AI-generated” and then prompted users to pay up to $9.99 to remove the supposed AI traces.
AFP’s tests, conducted in multiple languages including Dutch, Greek, Hungarian, and English, showed that the tools consistently produced false positives — even when fed nonsensical text or passages from a 1916 literary classic. Researchers noted that some of the platforms appeared to return results without performing any actual analysis, suggesting scripted outputs rather than genuine AI detection.
Experts warn that such tools are not only misleading but potentially harmful. Debora Weber-Wulff, a researcher in AI detection technologies, described them as “scams” designed to sell flawed “humanising” services that often produce incoherent or distorted text.
The issue also highlights a broader challenge in the reliability of AI detection. Even legitimate systems cannot guarantee full accuracy, making it difficult to definitively determine whether content is AI-generated. Cornell University acknowledged that while generative AI poses risks in academic settings, detection tools are unlikely to provide a fully reliable solution.
Analysts say the misuse of such tools could contribute to the spread of disinformation by enabling what researchers call the “liar’s dividend” — the ability to dismiss genuine content as fabricated. This tactic has already been observed in political contexts, where misleading AI detection claims have been used to discredit opponents.
Misinformation monitoring groups such as NewsGuard warn that the trend threatens to erode trust in digital verification systems. As both AI-generated content and false detection claims proliferate, experts stress the need for critical evaluation and reliance on multiple verification methods.
They caution that in an increasingly complex information landscape, vigilance is essential to counter both fabricated content and false accusations of fabrication.
