There are several methods for detecting whether a piece of text was written by AI. They all have limitations – and probably always will.
Large language models have become extremely good at mimicking human writing. (Image credit: Robert Wicher/iStock via Getty Images)
People and institutions are grappling with the consequences of AI-written text. Teachers want to know whether students’ work reflects their own understanding; consumers want to know whether an advertisement was written by a human or a machine.
Writing rules to govern the use of AI-generated content is relatively easy. Enforcing them depends on something much harder: reliably detecting whether a piece of text was generated by artificial intelligence.
The problem of AI text detection
The basic workflow behind AI text detection is easy to describe. Start with a piece of text whose origin you want to determine. Then apply a detection tool, often an AI system itself, that analyzes the text and produces a score, usually expressed as a probability, indicating how likely the text is to have been AI-generated. Use the score to inform downstream decisions, such as whether to impose a penalty for violating a rule.
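In code, that workflow reduces to a few lines. The sketch below is schematic: `score_text` stands in for whatever detection tool is used, and the 0.9 threshold is an arbitrary choice for illustration.

```python
# A minimal sketch of the detection workflow described above.
# `score_text` stands in for any real detection tool; the 0.9
# threshold is an arbitrary, illustrative choice.

def score_text(text: str) -> float:
    """Placeholder for a detector returning P(text is AI-generated)."""
    raise NotImplementedError

def review_submission(text: str, threshold: float = 0.9) -> str:
    probability_ai = score_text(text)
    if probability_ai >= threshold:
        # A score alone should trigger review, not automatic penalties.
        return "flag for human review"
    return "no action"
```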
This simple description, however, hides a great deal of complexity. It glosses over a number of background assumptions that need to be made explicit. Do you know which AI tools might have plausibly been used to generate the text? What kind of access do you have to these tools? Can you run them yourself, or inspect their inner workings? How much text do you have? Do you have a single text or a collection of writings gathered over time? What AI detection tools can and cannot tell you depends critically on the answers to questions like these.
There is one additional detail that is especially important: Did the AI system that generated the text deliberately embed markers to make later detection easier?
These indicators are known as watermarks. Watermarked text looks like ordinary text, but the markers are embedded in subtle ways that do not reveal themselves to casual inspection. Someone with the right key can later check for the presence of these markers and verify that the text came from a watermarked AI-generated source. This approach, however, relies on cooperation from AI vendors and is not always available.
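To make the idea concrete, here is a toy sketch of one published approach, the "green list" scheme of Kirchenbauer and colleagues, in which generation is subtly biased toward a key-dependent half of the vocabulary. The secret key, the hash-based split and the toy sampler below are illustrative simplifications, not any vendor's actual design.

```python
# A toy sketch of watermark embedding during generation, loosely based
# on the "green list" scheme of Kirchenbauer et al. (2023). The secret
# key, hash-based split and toy sampler are illustrative only.
import hashlib
import math
import random

SECRET_KEY = "example-key"  # hypothetical secret held by the AI vendor

def is_green(prev_token: str, token: str) -> bool:
    """Key-dependent, deterministic split of the vocabulary in half."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def sample_next(prev_token: str, logits: dict[str, float], bias: float = 2.0) -> str:
    """Sample the next token after nudging green-list logits upward.

    The nudge subtly skews word choice; the text still reads normally,
    but over many tokens the excess of green words becomes detectable
    to anyone who holds the key."""
    boosted = {
        tok: score + (bias if is_green(prev_token, tok) else 0.0)
        for tok, score in logits.items()
    }
    total = sum(math.exp(s) for s in boosted.values())
    weights = [math.exp(s) / total for s in boosted.values()]
    return random.choices(list(boosted), weights=weights, k=1)[0]
```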
How AI text detection tools work
One obvious approach is to use AI itself to detect AI-written text. The idea is straightforward. Start by collecting a large corpus, or body of writing, made up of examples labeled as human-written or AI-generated, then train a model to distinguish between the two. In effect, AI text detection is treated as a standard classification problem, similar in spirit to spam filtering. Once trained, the detector examines new text and predicts whether it more closely resembles the AI-generated examples or the human-written ones it has seen before.
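A toy version of such a learned detector, assuming the scikit-learn library and a deliberately tiny stand-in for a labeled corpus, might look like this:

```python
# A toy learned detector, assuming scikit-learn is installed. A real
# system would train on a large, diverse labeled corpus; the two texts
# here are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "an example of human-written text ...",  # label 0: human
    "an example of AI-generated text ...",   # label 1: AI
]
labels = [0, 1]

# Word-frequency features feeding a simple linear classifier.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# predict_proba returns [P(human), P(AI)] for each new text.
print(detector.predict_proba(["text of unknown origin"])[0][1])
```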
The learned-detector approach can work even if you know little about which AI tools might have generated the text. The main requirement is that the training corpus be diverse enough to include outputs from a wide range of AI systems.
But if you do have access to the AI tools you are concerned about, a different approach becomes possible. This second strategy does not rely on collecting large labeled datasets or training a separate detector. Instead, it looks for statistical signals in the text, often in relation to how specific AI models generate language, to assess whether the text is likely to be AI-generated. For example, some methods examine the probability that an AI model assigns to a piece of text. If the model assigns an unusually high probability to the exact sequence of words, this can be a signal that the text was, in fact, generated by that model.
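One concrete version of this idea is to measure perplexity: how surprised a given language model is by the text. The sketch below assumes the Hugging Face transformers library and uses the small open GPT-2 model as the scorer; unusually low perplexity is one hint that the text came from that model or one like it.

```python
# A sketch of a probability-based signal, assuming PyTorch and the
# Hugging Face `transformers` library, with GPT-2 as the scoring model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels yields the average negative
        # log-likelihood per token; exponentiating gives perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Text the model finds unusually unsurprising (low perplexity) is more
# likely, though far from certain, to be machine-generated.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```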
Finally, in the case of text that is generated by an AI system that embeds a watermark, the problem shifts from detection to verification. Using a secret key provided by the AI vendor, a verification tool can assess whether the text is consistent with having been generated by a watermarked system. This approach relies on information that is not available from the text alone, rather than on inferences drawn from the text itself.
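Continuing the green-list sketch from earlier, verification reduces to a statistical test: count how many tokens fall on the key-dependent green list and ask whether that excess could plausibly arise by chance. As before, the hashing, the whitespace tokenization and the key below are illustrative simplifications.

```python
# A simplified sketch of green-list watermark verification, again
# loosely based on Kirchenbauer et al. (2023). Whitespace tokenization
# and the hash-based split are illustrative, not a real protocol.
import hashlib
import math

SECRET_KEY = "example-key"  # the same hypothetical secret used above

def is_green(prev_token: str, token: str) -> bool:
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """One-proportion z-test: without a watermark, each token should be
    green with probability 1/2, so a large positive z-score suggests
    the generator was biased toward the green list."""
    tokens = text.split()
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)
```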
Limitations of detection tools
Each family of tools comes with its own limitations, making it difficult to declare a clear winner. Learning-based detectors, for example, are sensitive to how closely new text resembles the data they were trained on. Their accuracy drops when the text differs substantially from the training corpus, which can quickly become outdated as new AI models are released. Continually curating fresh data and retraining detectors is costly, and detectors inevitably lag behind the systems they are meant to identify.
Statistical tests face a different set of constraints. Many rely on assumptions about how specific AI models generate text, or on access to those models’ probability distributions. When models are proprietary, frequently updated or simply unknown, these assumptions break down. As a result, methods that work well in controlled settings can become unreliable or inapplicable in the real world.
Watermarking shifts the problem from detection to verification, but it introduces its own dependencies. It relies on cooperation from AI vendors and applies only to text generated with watermarking enabled.
More broadly, AI text detection is part of an escalating arms race. Detection tools must be publicly available to be useful, but that same transparency enables evasion. As AI text generators grow more capable and evasion techniques more sophisticated, detectors are unlikely to gain a lasting upper hand.
Hard reality
The problem of AI text detection is simple to state but hard to solve reliably. Institutions with rules governing the use of AI-written text cannot rely on detection tools alone for enforcement.
As society adapts to generative AI, we are likely to refine norms around acceptable use of AI-generated text and improve detection techniques. But ultimately, we’ll have to learn to live with the fact that such tools will never be perfect.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.
Ambuj Tewari, Professor of Statistics, University of Michigan