The Exhaustion Problem: When Generative AI Demands Expertise From Novices
We have spent considerable energy debating whether humans can detect AI-generated content, whether synthetic information pollutes our knowledge ecosystems, and whether we can build reliable detection systems. These are important questions, but they obscure a more fundamental problem: the sheer cognitive exhaustion of working with generative AI output. Constantly having to work out if something came from a machine or a human being is frankly exhausting.
Generative AI produces an unpredictable mixture of polished rubbish and more or less coherent information, forcing users into a relentless, never-ending sorting process that requires precisely the expertise students are trying to develop. In other words, what we're witnessing is a fundamental transfer of epistemological responsibility from institutions and teachers onto individual learners, who are least equipped to handle it.
The traditional educational model, with all its inherent flaws, embedded quality assurance mechanisms throughout the knowledge transmission process. Textbooks underwent peer review, lectures were delivered by credentialed experts, and library acquisitions were curated by professionals. Students could reasonably trust that the materials placed before them met some baseline standard of reliability, freeing them to focus their cognitive energy on comprehension rather than verification.
Generative AI dismantles this infrastructure entirely. Every interaction becomes a vast minefield where facts and fabrication intermingle seamlessly, where logical coherence masks factual error, where confidence in delivery conceals fundamental misunderstandings. Consequently, students must now perform the work that entire institutional systems previously handled (more or less successfully): evaluating source reliability, cross-referencing claims, assessing logical validity, distinguishing between superficial plausibility and factual knowledge.
In this regard, as in many others, generative AI in education poses more problems than solutions. I can think of no other setting in education or elsewhere where we would want to outsource quality assurance to those least equipped to oversee it. Yet, this is what we are doing in universities right now.
The Paradox of Required Expertise
The gist of the problem is this: Novices lack the discernment to evaluate generative AI output effectively, and experts don’t need or want generative AI for basic information retrieval in their domain. In education, this leaves us with a curious paradox when we examine who can actually use these tools effectively.
The unpredictability of output quality prevents students from developing reliable heuristics for evaluation. If AI consistently produced garbage, students could just ignore it. If it consistently produced excellence, they could learn to use it appropriately. Instead, students have no way to predict which interaction will prove reliable, which means every interaction requires the same exhaustive verification process. Thus, mental energy and motivation that should be spent on mastering a subject instead get consumed by an endless audit cycle. Alternatively, as is the case in many classrooms, the energy is spent elsewhere entirely, as AI-generated text is handed in instead of actual student work.
The fluctuating quality of generative AI output amplifies these epistemological difficulties in ways we're only beginning to understand. Unlike a poorly written Wikipedia article whose awkward prose triggers skepticism (at least that used to be the case), AI delivers both obvious and less obvious errors wrapped in the rhetorical markers of authority. The grammar seems impeccable, the structure progresses logically, and the tone is certainly confident. Each sentence flows smoothly into the next, creating an aesthetic of expertise that novices struggle to question even when the underlying content is profoundly flawed.
The traditional pedagogical red flags that might alert a teacher to struggling students -- awkward phrasing, organizational problems, unclear arguments -- have been cosmetically eliminated while the underlying comprehension gaps remain or even widen. Worldwide, institutions are enabling conditions where students can appear to perform competently while learning very little, and where teachers must work much harder to identify where actual understanding breaks down.
From Productive Confusion to Endless Verification
This connects directly to the work Victoria Livingstone and I have done on what happens to student thinking in an age of instant answers. In the UNESCO piece The Disappearance of the Unclear Question, we argued that students need cognitive space to struggle with unclear thinking, to formulate inadequate questions and gradually refine them through iterative engagement with material and ideas. Instead, with generative AI, the mental energy that should power intellectual development gets consumed by quality assurance work that yields no educational benefit even when performed successfully. Verifying that an AI's explanation of a given topic is accurate doesn't teach you much, if anything at all; it merely confirms that you've received accurate information, which you could have gotten from a reliable source such as a textbook without the verification overhead.
In educational contexts, generative AI creates more confusion than actual benefit -- for now at least. I've written elsewhere about how this information is synthetic rather than human-created, and that clearly ties into the problem of spending time on pointless verification exercises. What's more, the epistemological demands the technology creates are fundamentally mismatched with the capabilities of its primary educational users. Generative AI requires expertise as a prerequisite for safe use, making it least reliable precisely when students need help the most. It transfers a significant cognitive burden onto learners while removing the productive struggle that actually builds understanding. It creates the appearance of democratized access to information while actually raising the bar for who can use it effectively.
Those who already possess significant expertise can navigate its inconsistencies. Everyone else -- which is to say, most students -- simply inherits layers of exhausting verification work atop an already demanding learning process, with little educational value to show for the effort.