When you hire an AI lead or appoint a task force to oversee AI strategy and transformation, you are most likely buying a lottery ticket.
There is no established playbook for generative AI in higher education. No long-standing experts with deep implementation experience. And while consultants will happily try to convince you of their three-step strategy, their eight levers of AI transformation, or whatever the case may be, leading institutional transformation with generative AI really means figuring it out on the fly.
No one knows exactly what they’re doing. And in a strange way, that’s actually a good thing.
The Case of NEXT Education Copenhagen
Of course, strategy isn't necessarily useless. But many early-stage AI strategies I have seen are aspirational by design. They don't chart a course so much as signal intent - or worse, they announce that the institution has decided to have an AI strategy without being fully able to communicate why.
I’ve seen schools spend six months drafting detailed strategy documents before making even the smallest pedagogical changes. Others barely have a slide deck to show for it - but have managed to create AI-informed classroom cultures through informal collaboration, light-touch leadership, and a willingness to let things unfold.
In these places, transformation doesn’t come from the top down. It emerges from within. In fact, some institutions have done little more than purchase a school-wide ChatGPT license and hold a few workshops.
One example is NEXT Education Copenhagen, Denmark’s largest combined vocational and upper secondary education institution. Here, adoption is booming - not because of an institutional roadmap, but because faculty started tinkering. Word spread. Prompts got shared. Practices evolved.
Currently, the school - one of the few in Denmark that has adopted ChatGPT rather than Microsoft Copilot as institutional policy - logs more than 60,000 prompts each month. And that number is on the rise.
So, why does it work? Peter Bruus, AI lead at the school, gives an astonishingly simple explanation:
“Sure, we’ve held workshops and introductory meetings, but mostly people have figured it out by themselves. And that’s probably the best part of it: it works. Because it makes sense. You can develop big strategies, or you give access to AI tools that actually work.”
The classroom as a safe laboratory
This example points to something we don’t talk about enough: it’s not the roadmap that makes people change. It’s the experience of trying something new and realizing that it works.
This bottom-up logic feels counterintuitive to many senior leaders, particularly those trained to deliver predictability and consistent student outcomes. But it fits the reality of how faculty adopt new tools. In my own school, I have recently talked to several colleagues who don't want to ask permission to test a simple idea involving generative AI in their teaching. They would rather try it and ask forgiveness afterwards - if needed. Yet in practice, the result is often that nothing happens: colleagues know not to cross the line, and that produces a kind of self-restriction we don't usually associate with educators.
In some schools, this creates a peculiar dynamic: the real AI transformation in higher education - or the undercurrent of aspiration that drives it - isn't always visible in institutional reporting structures or strategic frameworks. It happens under the radar, through semi-coordinated experiments, casual conversations, and teaching hacks.
Most of them aren’t documented. But they shape what actually happens in classrooms every day.
Loose maps and strong norms may get you there
If we accept this - that AI adoption will be messy, partial, and iterative - then the question shifts.
We need to insist on a safe environment for staff and students, yes - but we also need better conditions for safe experimentation.
This means creating environments where it’s okay to be uncertain. Where trying and failing isn’t punished. Where faculty who tinker aren’t quietly discouraged because they deviate from the standard way of doing things.
Loose maps, strong norms. In this sense, the job of AI leadership in higher education isn’t to direct a transformation. It’s to host it. That might involve collecting emerging practices and curating them into internal showcases. It might mean hosting brown-bag lunches where faculty demo small wins for each other - even without management present. It might involve setting guardrails for ethical or responsible use, but keeping those guardrails wide enough to leave room for exploration.
The AI lead, then, isn’t an oracle. They’re a field guide.
Where this leaves us
Hiring an AI lead can be a bold and useful move. But unless you also create an institutional culture that tolerates ambiguity, provides tools that deliver genuine value, and embraces trial and error, you are simply buying a very expensive lottery ticket and hoping it pays off.
The institutions making the most progress with AI in education aren’t necessarily the ones with the most robust strategies. They’re the ones that make it safe to experiment. They’re the ones that build confidence by letting practice lead policy - not the other way around.
They understand that real change doesn’t happen when everyone agrees on what to do.
It happens when enough people feel it’s okay to try.