The Reading Crisis We're Not Talking About
Writing gets the spotlight, but generative AI is reshaping how we read too.
Every time a student writes a prompt and then watches an AI-generated paragraph appear on the screen in seconds, something consequential happens. The lesson is not only about the generated content, which may or may not be helpful, but also about language as performance.
The machine communicates not just what to write, but how writing should feel and look: speed equals wisdom, fluency equals thought, completion equals comprehension. Unconsciously, these performative ideals reshape not only our expectations about writing, but also how we read. And sometimes, how we don’t read at all.
Picture a student in a college class on 20th-century American literature typing a prompt: “Summarize Dow Mossman’s The Stones of Summer.” Within seconds, three clean, persuasive paragraphs appear on the screen. She pastes them into her notes, barely skimming the text. The problem isn't just that she hasn’t read the book. It’s that the system doesn't require her to, and increasingly, neither does her own cognitive instinct.
When paragraphs are polished, we are led to assume they are trustworthy. When arguments are syntactically sound, it takes effort to stop and ask whether they are meaningful. There are other implications, some more obvious than others, but the main point is this: the debate around generative AI and higher education focuses almost entirely on writing. I believe reading, as a foundational skill, demands our attention too.
On Slow Reading
Reading has always been a fairly slow practice, although some people can obviously read remarkably fast - Harold Bloom comes to mind. But text is now produced before our eyes faster than we can read it. Surely this fundamentally affects our notion of reading - not skimming, but actual reading.
Reading requires attention and memory to succeed. It invites us to lose our place, to reread, to pause. And perhaps most importantly, it demands effort from us without any guarantee that we will get anything in return (although, of course, we often do). In this way, reading is not a retrieval task but a - dare I say - generative act. Absorbing content is one thing; reshaping it inside one's mind is another.
What’s at stake is not just retention, although that certainly matters. When language arrives already digested, both in expectation and in experience, it is very hard for the reader to avoid becoming passive. When every paragraph arrives pre-smoothed, friction disappears. And with it goes the ability to read for nuance, contradiction, and ambiguity - unless you have already been trained in how to read.
This is not a problem of content, but of choreography. We are being retrained in how to engage with language. Generative AI teaches us how quickly to consume it, how completely to trust it, and most importantly, how rarely to return to it. Sure, it is happy to produce endless rewrites for us, even when we don’t ask for them, ready for consumption - active or passive - at our choosing. We gradually adapt to a linear, pass-through model of reading. It’s reading as transaction.
Of course, real reading has never worked that way. It is iterative, recursive, sometimes circular. Attentive reading habits are delicate. And they are not reinforced by machines that constantly produce the next text rather than inviting us to explore what already exists.
Reading as Resistance
The result, I believe, is consequential: we are witnessing a shift in what reading means and, to a large extent, we’re not paying attention. Where reading once signaled interior construction of knowledge, it now begins to mean scanning, confirming, summarizing.
The appearance of fluency replaces the hard work of understanding.
To read deeply in an AI-saturated world will require not just willpower and dedication, but institutional imagination. It means carving out curricular space for slowness. It means teaching reading not as information intake, but as a form of resistance - a way of staying curious and alert while pushing back against the frictionless logic of automated language.
Reading against the grain of generative AI is not nostalgic, although in some ways it already seems so. If we fail to protect this kind of reading - or rather, this kind of reader - we risk creating a world where understanding is always second-hand and pre-processed. If that happens, we will live in a world where the most important ideas are never encountered, only paraphrased.
And we will call that comprehension.
I think if you have a college student studying literature who doesn't want to read, something has gone wrong much deeper down, and much longer ago, than AI can account for. AI might be the tool currently used to dodge the unappealing work, but something has already happened to that person for them to view reading as unappealing work to avoid in the first place.
I'd be looking back upstream - what in their educational journey so far has led them to this point? I don't think the convenience of the tool can really explain the desire to use it.
Imagine a machine that could really efficiently eat cake for you. Would it matter how cheaply and easily available it was? Would it matter how good a job you perceived it to be doing? No, of course it wouldn't - because the point of cake is the joy of eating it. If young people are coming out of our educational systems viewing reading as work instead of a delicious, joyful experience they don't want to give up, then something fundamental has gone wrong - something that can't be fixed by looking at the tool they use to avoid that work.
Thanks for this... It's so important. A few points to add...
First, I see this as one side of a coin, where the other side is the commoditization of writing. The very word "content", used to describe any and all published material, is poison. As an experiment, I asked DeepSeek when the term "content creator" came into use. The answer ran through some history, but effectively correlated the term with the rise of platforms that monetize "content". It all makes sense... "Content" exists to get clicks, and "content creators" get paid for engagement. Instead of writing for a message, this is writing for likes. I guess "content" describes it well. The point is, writing has become a commodity, where monetization of the product is the goal, and the VALUE of the product is secondary.
Second, this happened before LLMs. But... LLMs carry this effect to the extreme. Instead of having to craft some bit of outrage that will get clicks, you can just prompt for it. The goal here is, of course, scale. More "content" gets you more money.
Third, there's a flaw here... The logical extension is to produce "content" at a scale that exceeds consumption. And so the next step is to automate consumption... Use LLMs to summarize all the "content" in your feed. So then we have a machine take vague ideas and produce a big pile of (well...) stuff, and another machine to take that steaming pile of (well...) stuff and turn it back into vague ideas. All this at great expense in real terms. And the only benefit I can see in this is a transfer of wealth to the tech elite that's selling all this (well...) stuff.
And then there's your point about reading in itself. On a grand scale, we're training ourselves to accept all this as inevitable and necessary. We're transforming what we mean by human attributes. Hallucinate, know, understand, feel... We degrade these terms to make them fit the LLM worldview. Doing that, we degrade ourselves. We degrade attention, self, desire, conscience. We degrade the acts we perform to maintain our humanity, our attention, our desire, our understanding. As you so clearly point out, we degrade the act of reading.