When The Proxies Break
And What They Meant To Higher Education In The First Place
Higher education was designed around proxies that sometimes worked, and when they did work, real learning happened. Students wrote essays and came to understand things through the writing process. In exams, pressed to retrieve what they knew under constraint, students consolidated knowledge they had genuinely built. Seminars, supervision, presentations - all of these occasionally did exactly what they were supposed to do.
But we mistook the occasional success of these mechanisms for structural success. We operated under the premise that exams were proof of learning when in fact they weren't. Exams are instruments that produce learning as a frequent, but not guaranteed, byproduct - and we have no reliable way to tell the difference.
This distinction matters because the critique of assessment in the age of generative AI is sometimes framed as a revelation: now students can game the system. But gaming the system is not a new possibility. Students have probably always been able to produce the appearance of learning, and it is hardly a surprise that the current system has been in need of change for decades.
What’s changed is that generative AI has made the illusion of learning so effortless, so fluent, and so indistinguishable from what we thought was the real thing that the proxies have stopped functioning entirely - even on their own terms. The proxies for education didn’t break with generative AI. They were simply exposed.
The Performance of Knowing
What generative AI has exposed, specifically, is the gap between fluent output and underlying understanding. This is particularly revealing: it is a gap that academic culture has always struggled to see clearly, having trained itself to treat articulation as evidence of comprehension. Written assignments and oral examinations rest on the assumption that if a student expresses the right things, they probably know them.
That assumption has always been vulnerable. Now it has become untenable.
It is well established by now that a student can co-produce a sophisticated argument on virtually any topic without having engaged with the ideas at all. What makes this particularly problematic is that the student may not be aware of whether or how they contributed to the work, often believing they played a larger role in creating the generative AI output than they actually did.
The simulation of understanding is a problem not just for assessors, but also for the person being assessed. We are producing a generation that is increasingly fluent in the performance of knowing, and increasingly out of practice with the thing itself.
Defending the Indefensible
The institutional response in many places has been to defend what remains. Consequently, for many students, higher education is now a series of artificial, time-pressured, unrepeatable high-stakes performances. This has arguably been the case for decades, but generative AI makes the problems significantly worse. Exams as we know them are the last proxy in education - and we are defending them not because they work, but because we have nothing credible to replace them with.
Continuous assessment is the most common proposed alternative. More checkpoints, more data, better measurement. But this is still thinking about the wrong problem, because the real problem with the current approach to higher education was never how closely or how often we measure. It was whether the conditions exist for learning to leave a trace with students at all - over time, in relevant contexts, and in ways that can’t be rehearsed or outsourced.
Those conditions are not technical. They are relational.
A Reckoning, Not a Reform
Perhaps this is where the influence of generative AI on higher education becomes most uncomfortable. Institutional architecture was not built to support the kind of trust that genuine assessment requires. The relationship between institution, teacher and student has been structured around compliance, credentialing, and the efficient management of large numbers of people through standardised processes. It was never primarily structured around unlocking individual potential in students and finding out what each student, regardless of their abilities, actually needs to make progress.
That question requires something entirely different from a logic built around thresholds and managing scale. It requires a relationship in which the teacher’s genuine purpose is to understand what the student knows - and in which the student understands that purpose and participates in it honestly. It also requires institutions that recognise and reward that kind of work, rather than treating it as an inefficiency to be optimised away.
None of this is easy to do. But before we reach for new tools, new formats, new assessment frameworks, we should be thinking about the type of education we are beginning to optimise for - with and without generative AI.
The proxies are gone, never to return. And the epistemic crisis in higher education is not really about generative AI. What comes after the proxies is not a better proxy. It is a reckoning with what we were using yesterday's proxies to avoid asking.


Great perspective! Reminds me of Pedagogy of the Oppressed by Paulo Freire. He argued for a collaborative relationship rather than a top-down one. We may end up there soon enough.
I agree with your analysis, and I do not see how we can sustain an educational system like this in the long run. A few students are calling it out already, and more will follow.
So what is the next level of education we need? I have tried to make education more relational, grounded in dialogue and reflection. I have removed books and replaced them with videos and AI tutors. I believe this is the right direction, but in my sixth-semester class, only half the students show up - telling me I am not there yet, that I have not created something meaningful and engaging enough for more students to show up.
My sense is that the existing educational system has taught them to learn superficially rather than to engage with each other in meaningful learning. They struggle with focus, depth, and critical thinking. They seem to think that agreement is the goal of discussion, and they want me to teach them and give them the answer rather than wrestle with ideas themselves.
At the same time, they are deeply concerned that AI will take learning away from them, allowing them to bypass the process and go straight to the output.
I am trying to change that, and I hope we are right about AI. I hope we do not all become dependent on it for thinking and problem-solving, and in the process lose the ability to think for ourselves. That is my hope, and that is why I continue to take part in this revolution in education.