When A Generation Comes To Expect Intelligence Without Resistance
And Why That's a Problem for Higher Education
Recent posts in this newsletter - such as The Synthetic Knowledge Crisis and The Age of Sponsored Knowing - addressed fundamental problems that generative AI imposes on higher education. Now, two recent developments in artificial intelligence reveal a troubling convergence.
OpenAI recently released Sora 2, a video generation model alongside a new social app where users can generate videos of themselves and share them on a TikTok-style feed. Around the same time, Perplexity made its Comet browser freely available worldwide, promoting an AI assistant that “can automatically see what you’re looking at” and answer questions instantly without requiring users to formulate searches or evaluate sources.
Examined separately, these announcements might seem to belong to different categories - one about creative tools, the other about productivity. But together they further illuminate what we have seen for a while now: a design philosophy in big tech that aspires to the systematic elimination of friction from human experience.
For educators grappling with how these technologies reshape learning environments, the question becomes unavoidable: What happens to a generation developing cognitive and social capacities through interactions with systems designed to eliminate all forms of resistance?
When patience isn’t a virtue
Sora 2 launched as a social platform that lets users create “cameos” of themselves, generating videos in which they appear in any scenario imaginable and sharing them on an algorithmic feed designed to maximize engagement. The system generates videos from text prompts, handling everything from physics simulation to audio synchronization automatically. Perplexity’s Comet processes vague queries with the same efficiency it brings to well-formulated research questions.
Neither system signals through its behavior that some ways of engaging are more productive than others, and this represents a fundamental departure from how human learning actually works. In human interaction, resistance serves crucial developmental functions. When someone articulates a question poorly, others request clarification - a response that motivates clearer thinking. When someone wants to create something, the gap between vision and execution - the friction of learning skills, iterating through failures - is where growth happens. These forms of friction constitute the very mechanism through which learning occurs.
Perplexity reports that users who downloaded Comet asked “6-18X” more questions on their first day - a metric the company celebrates as enhanced curiosity. But volume of questions and depth of inquiry are clearly different phenomena. The ease of asking, divorced from the discipline of formulating answerable questions, performs curiosity’s appearance while bypassing its substance.
Real teachers become less patient with students who won’t engage seriously. Real collaborators push back against poorly developed ideas. Real intellectual communities weed out weak thinking. These responses shape how people think and behave.
By contrast, generative AI systems respond with unfailing patience regardless of how they’re engaged. This is a problem: a generation that develops primarily through interactions with systems that never provide such feedback will arrive in human contexts with fundamentally different expectations. They’ll interpret appropriate intellectual resistance as an unreasonable obstacle. Most critically, they’ll lack practice in the fundamental human skill of reading how their thinking and behavior land with others and adjusting accordingly.
Between difficulty and frictionless learning
Difficulty, when properly calibrated, isn’t an obstacle to understanding. It’s the very condition that makes deep learning possible.
As such, higher education faces both immediate tactical challenges and deeper strategic ones. The tactical problems are pressing: designing assessments when AI can complete research tasks indistinguishably from students, teaching information literacy when students bypass source evaluation entirely, evaluating written work when generative AI output becomes undetectable.
But the strategic challenge is arguably bigger. These tools are reshaping not just what students can do but what they expect from intellectual and creative work. Students formed by systems that aim to eliminate resistance develop different capacities than those who learned through friction: they may become skilled at prompting AI systems while lacking the underlying abilities that prompting stands in for. The question is whether educational institutions can maintain a distinction between productive struggle and pointless difficulty when the broader technological environment treats all friction as a problem to be solved. Can universities argue that some things worth learning are hard, and that process matters as much as outcome, when students spend most of their waking hours in systems designed around the opposite principles?
Recent advertising campaigns for Comet have directly targeted students, showing AI completing coursework. When AI developers simultaneously claim to support education while building tools that systematically undermine learning, educational value becomes post-hoc rationalization.
Louder, please
What distinguishes this moment is that the infrastructure of intellectual work and creative expression is being rebuilt by entities that don’t recognize developmental needs as legitimate constraints on design. They treat friction as a bug to fix rather than as potentially essential to becoming capable, thoughtful, autonomous people. They optimize for engagement, or platform stickiness, without considering what forms of human capacity their tools cultivate or atrophy. I am aware that some generative AI platforms have “educational modes”, but I find those to be of little use in real-life learning environments.
Educators need to talk loudly about what’s being defended here: we must point straight to the problem and insist that human development requires experiences these tools are designed to eliminate - the experience of not-knowing before knowing, of working toward understanding rather than receiving it, of developing skills through the friction of learning rather than consuming unverified information that bypasses learning entirely.
The frictionless future that current generative AI development is building isn’t inevitable. But the question is whether those building these systems can be made to care about what they’re building - not just whether it works, but what it cultivates in those who use it. Until that question receives serious attention, educators face the task of maintaining learning environments that honor struggle and resistance while students are being formed by technologies that treat all friction as a design flaw.
Something has to give.
♻️ If you liked this post, please share it with someone in your network ♻️

