The Hidden Defaults Reshaping Academic Reading
When you ask generative AI to explain a concept, or anything else for that matter, the system makes a series of consequential decisions on your behalf. It determines the appropriate length, complexity, and vocabulary for the output you receive - typically optimized for user engagement rather than what the subject requires.
Generative AI systems are trained to maximize user satisfaction by avoiding specialized terminology, complex sentence structures, and arguments requiring sustained attention. A question about Kierkegaard and a question about macroeconomics might both generate 350-word responses, regardless of whether these topics have equivalent explanatory needs. And a philosophy student asking about Wittgenstein will typically receive a response built around everyday language rather than the precise vocabulary of the field. Yes, you can change this through more sophisticated prompting - but doing so typically requires some measure of the very comprehension students are trying to develop.
In the case of Wittgenstein, a student may never encounter how philosophical arguments actually work - the deliberate precision of vocabulary, for example, or the recursive quality of arguments that circle back to refine earlier points. They will not necessarily learn the intellectual architecture that makes serious enquiry meaningful.
Compare this to what reading Wittgenstein directly once entailed, or to engaging with peer-reviewed texts on his work. Academic texts make demands that AI-generated explanations systematically avoid: they assume readers will tolerate specialized language because ordinary terms lack the necessary precision. Academic arguments unfold across dozens of pages because some ideas genuinely require extended treatment, and they leave certain ambiguities unresolved because not everything can be reduced to neat conclusions. Of course, these demands - specialized language, extended development, unresolved ambiguity - constitute only part of what makes academic reading distinctive.
Perhaps more importantly, academic texts require something AI-optimized prose eliminates: reading stamina. Following a complex argument through a 30-page journal article, hanging on for dear life, holding multiple threads simultaneously, returning to earlier sections to revise or reconfirm your understanding multiple times - this builds cognitive capacities that brief, accessible explanations cannot. And some books demand even more: pursuing academic ideas across hundreds of pages to experience how coherent and cogent thinking unfolds over chapters, rather than topic sentences. These processes build academic robustness.
All of this wouldn’t matter as much if generative AI remained a supplement to traditional reading. But it’s becoming primary.
When Students Lose Agency Over Complexity
When students encounter ideas primarily through AI, their sense of what constitutes reasonable intellectual engagement shifts. The semester-long exploration of a theoretical framework that would have seemed normal - challenging, granted, but normal - now feels impossibly demanding. They begin perceiving texts that deviate from generative AI’s defaults as deficient. A journal article requiring 15 pages to make its argument seems unnecessarily long. Prose using specialized vocabulary seems needlessly obscure. Ideas demanding patient attention seem poorly explained.
This contributes to what we might call “synthetic fluency”, a terminological twin to synthetic knowledge: the appearance of understanding without the cognitive work that produces genuine comprehension.
I have talked to faculty who notice this particularly in discussions of readings. Students who rely on AI-generated explanations can discuss ideas in general terms but struggle when asked for details about how arguments were constructed, what specific language choices reveal, or why certain distinctions matter. To an extent, this was also the case before generative AI. But students have become more persuasive without the underlying foundation. The understanding proves shallow because they never did the work of reading - of engaging with prose that doesn't immediately yield its meaning, of building understanding incrementally across pages.
What Academic Texts Demand - And Generative AI Eliminates
Academic texts deploy specialized language not merely as gatekeeping but as a precision instrument - one that can leave productive ambiguity intact while still advancing understanding. Not everything resolves cleanly. Some arguments remain contested; some concepts resist simple definition. Academic texts build tolerance for ambiguity rather than demanding premature clarity.
Most fundamentally, they force readers to develop judgment about when to slow down, when confusion is productive, when to reread. Generative AI makes all of these calibrations on students’ behalf, eliminating the very practice of reading that builds expert knowledge. So many disciplines rely on arguments that can’t be adequately compressed, where evidence accumulates across many pages, and where an author’s specific language choices are themselves objects of analysis.
But when most encounters with ideas can be mediated through conversational AI, providing just-in-time explanations optimized for immediate comprehension, what happens to the fundamental capacity for sustained reading? Well, students are losing practice with the cognitive architecture that extended texts require - not because they lack intelligence, but because they lack exposure to forms of writing that demand it.
Reading As Intellectual Work
What we’re witnessing right now affects the core of education. It’s a renegotiation of what counts as intellectual work, who decides how ideas should be communicated, and what cognitive capacities students develop.
When machines determine that explanations should be 300-400 words, that specialized vocabulary should be translated away, that complex arguments should be linearized or made into bulleted lists for easy consumption, that a follow-up question or an offer to simplify further should almost always be appended - these aren't neutral formatting choices. They're crucial decisions about what matters in the future of education.
A core principle of higher education has been that intellectual stamina and critical judgment are built from sustained engagement with difficult texts - the capacity to follow an extended scholarly argument and think within the complexity of ideas as they’re actually developed rather than as they’re algorithmically summarized.
Valuing that capacity isn't nostalgia for pre-digital learning. It's simply that sustained academic enquiry remains foundational to everything else we hope educated people can do, and a meaningful replacement for it is exceedingly difficult to imagine.
When we allow algorithms to determine what intellectual engagement should look like, we're not just changing reading habits. The larger problem is that we're largely caught unawares as the technology fundamentally alters what it means to think. Few academics (or students) have chosen this outcome. Yet here we are, building it by default.