Discussion about this post

Kytti

I think if you have a college student studying literature who doesn't want to read, something has gone wrong so much deeper down, so much longer ago, than AI can account for. AI might be the tool currently used to dodge the unappealing work, but something has already happened to that person for them to view reading as unappealing work to avoid in the first place.

I'd be looking back upstream - what in their educational journey so far has led them to this point? I don't think the convenience of the tool can really explain the desire to use it.

Imagine a machine that could really efficiently eat cake for you. Would it matter how cheaply and easily available it was? Would it matter how good a job you perceived it to be doing? No, of course it wouldn't - because the point of cake is the joy of eating it. If young people are coming out of our educational systems viewing reading as work instead of a delicious, joyful experience they don't want to give up, then something fundamental has gone wrong - something that can't be fixed by looking at the tool they use to avoid that work.

Chris Despopoulos

Thanks for this... It's so important. A few points to add...

First, I see this as one side of a coin, where the other side is the commoditization of writing. The very word "content", used to describe any and all published material, is poison. As an experiment, I asked DeepSeek when the term "content creator" came around. The answer passed over some history, but effectively correlated the term with the rise of platforms that monetize "content". It all makes sense... "Content" exists to get clicks, and "content creators" get paid for engagement. Instead of writing for a message, this is writing for likes. I guess "content" describes it well. The point is, writing has become a commodity, where monetization of the product is the goal, and the VALUE of the product is secondary.

Second, this happened before LLMs. But... LLMs carry this effect to the extreme. Instead of having to craft some bit of outrage that will get clicks, you can just prompt for it. The goal here is, of course, scale. More "content" gets you more money.

Third, there's a flaw here... The logical extension is to produce "content" at a scale that exceeds consumption. And so the next step is to automate consumption... Use LLMs to summarize all the "content" in your feed. So then we have a machine take vague ideas and produce a big pile of (well...) stuff, and another machine to take that steaming pile of (well...) stuff and turn it back into vague ideas. All this at great expense in real terms. And the only benefit I can see in this is a transfer of wealth to the tech elite that's selling all this (well...) stuff.

And then there's your point about reading in itself. On a grand scale, we're training ourselves to accept all this as inevitable and necessary. We're transforming what we mean by human attributes. Hallucinate, know, understand, feel... We degrade these terms to make them fit the LLM worldview. Doing that, we degrade ourselves. We degrade attention, self, desire, conscience. We degrade the acts we perform to maintain our humanity, our attention, our desire, our understanding. As you so clearly point out, we degrade the act of reading.

