When Generative AI Reframes Higher Education
What generative AI reveals when we stop trying to solve and start trying to see
Higher education today is built on inherent contradictions. It aspires to stability, yet is driven by inquiry. It standardizes learning through quality assurance, yet celebrates originality. It champions deep human development while operating under increasingly mechanistic pressures. Generative AI hasn’t introduced these tensions; it’s simply made them impossible to ignore. In doing so, it offers us an opportunity - not to solve everything at once, but to see our institutions more clearly, in all their competing logics and misaligned expectations.
For example, university structures rely on clear policies, procedures, and predictable workflows - what counts as plagiarism, how credit hours are awarded, who approves curriculum changes. But when generative AI makes it possible to write an essay, generate a dataset, or simulate a conversation in seconds, those structures begin to fray. Policy lags behind practice. Staff are told to uphold rules written for a different era, while students experiment in real time. Institutions that once prided themselves on consistency are now improvising enforcement, writing guidelines faster than they can be internalized, and quietly tolerating rule-bending that no one wants to name out loud.
What AI Makes Visible
At the same time, another, more human tension becomes visible: the gap between how institutions speak and act, and what the people inside them actually experience. Universities are fond of speaking in confident tones - mission statements, strategic frameworks, policies. But generative AI has unsettled many of the people working and studying within them. Professors aren’t sure what their expertise means when machines can replicate summaries, feedback, even lesson plans - some even suggest that education has become an illusion. Students, far from being the tech-native masters they are often assumed to be, are frequently unsure whether what they’re doing counts as cheating, innovating, or both. Underneath the official AI policies lies something harder to name: a collective unease, a renegotiation of what it means to learn, teach, or even think in an era of algorithmic assistance.
Layered over these practical and sometimes emotional disruptions is the messy reality of power. AI decisions - what tools to adopt, how to integrate them, which risks to tolerate as an institution - are rarely made democratically. The rhetoric is one of inclusion and classroom experimentation, but the reality often involves procurement offices, vendor partnerships, and high-level committees.
There’s nothing inherently sinister in this; universities are large, risk-averse systems that have long thrived on predictability and stable operations. But problems abound when decisions are shrouded in techno-optimistic language that glosses over real asymmetries of voice and influence. When AI adoption is framed as neutral, inevitable, or efficient, we should ask: efficient for whom? At what cost? And with whose values embedded in the tools themselves?
Yet perhaps the most disorienting tension generative AI surfaces is not about operations or politics at all. It’s about meaning. Higher education has always relied on shared symbols - graduation gowns, lecture halls, libraries - as ways of representing something larger: intellectual formation, civic purpose, the gradual build-up of understanding.
But generative AI plays havoc with symbols. It generates fluent language without experience or substance. It simulates insight without comprehension. It offers performance without process. When a student can generate a “reflection” on a novel they’ve never read, or when a research proposal is drafted by an LLM trained on past research proposals, what exactly are we assessing? What is the diploma a symbol of?
Still, amid all this, I believe there is hope.
The Work of Reframing
If generative AI makes these tensions abundantly clear, it also makes certain kinds of honesty possible. We can stop pretending that our structures are more coherent than they actually are. We can admit that learning was never just about content delivery. We can finally talk about the difference between showing learning and being changed by it. And we can begin to ask better questions - not just “how should we regulate generative AI use?” but also “what kind of educational experience do we want to protect, or reinvent, or let go of?”
The answer isn’t simply more AI policies or guidelines - though those matter, too. More fundamentally, we need to pause and reframe. We need to see our institutions not as impossible puzzles to be solved, but as layered systems shaped by overlapping logics. This calls for a broader conceptual shift - one that sees institutional life through multiple lenses, not only pedagogical, technological, or administrative ones.
Over three decades ago, Lee Bolman and Terrence Deal proposed four such lenses in their book Reframing Organizations: the structural, the human resource, the political, and the symbolic. Their insight was not only diagnostic, but developmental: complex institutions - especially those under pressure - require leaders to move fluidly between perspectives. To reframe, again and again.
For leaders in higher education, this reframing will not offer the comfort of clarity (sorry).
But it may offer something better: the intellectual honesty to hold complexity, the courage to act amid uncertainty, and the humility to recognize that wisdom lives not in answers, but in the quality of our questions.