Why Universities Shouldn't Always Listen to Their Stakeholders
Or Listen Differently

Stakeholder management in higher education often involves surveying employers about the skills they need, consulting industry leaders about curriculum design, asking alumni what prepared them for success, and building advisory boards that can guide strategic planning.
The list goes on, but you get the idea.
All of this probably sounds reasonable, even responsible, given that education should serve the needs of society and the future job market. But what if the job market no longer knows which skills and competencies are truly required in a future with generative AI in the workplace?
Higher education institutions should obviously still engage in meaningful conversation with their main stakeholders. But in this moment of unprecedented technological change, there are strong reasons to change the focus of that conversation.
The Fundamental Contradiction
In some subjects more than others, external stakeholders operate under a profound conflict of interest that’s rarely acknowledged in advisory meetings: they’re asking universities to train students for capabilities they’re simultaneously working to automate.
Consider the technology executive who joined a prominent computer science department advisory board in 2022, advocating for expanded coding bootcamps and data analysis curricula. That same year, his company invested $50 million in AI coding assistants designed to automate exactly those programming tasks. Or take the pharmaceutical giants on university advisory boards calling for more specialists in drug discovery while simultaneously deploying AI systems like DeepMind’s AlphaFold that can predict protein structures in hours rather than years. Or marketing executives who insist business students need advanced campaign analytics training while their companies have already purchased technological solutions that generate those insights automatically.
This isn’t necessarily malicious - it’s often unconscious. Yet these stakeholders sometimes become unwitting saboteurs of their own future talent pipeline, asking universities to optimize for skills they’re working to make obsolete while remaining silent about capabilities they’ll actually need.
The deeper issue is misaligned incentives, as business stakeholders operate under different pressures than educational institutions. Their primary obligation is to shareholders and short-term profitability. If generative AI can perform tasks faster and cheaper than humans, they have a fiduciary duty to pursue automation, regardless of the implications for the workforce they’re advising you to train. Educational institutions, by contrast, have a broader social responsibility: developing human potential and preparing graduates for careers that will span decades of technological change.
When stakeholders focus on current pain points and immediate skills gaps, they’re describing needs they’re simultaneously working to eliminate. By the time higher education produces graduates (a four- to six-year process from curriculum design to graduation), the landscape they described will be largely unrecognizable, transformed by the very automation initiatives and other forms of outsourcing they failed to mention.
The Blind Spot: What They Can’t See or Ask For
External stakeholders often gravitate toward concrete, measurable skills because these are what they recognize and can evaluate in their current context. But educational institutions - especially today - may want to focus more on the ability to form hypotheses in ambiguous situations, to synthesize connections across unrelated domains, to navigate ethical complexity, to collaborate meaningfully with both humans and AI systems.
Some of the most transformative educational programs have come from institutions that ignored immediate, short-term stakeholder demands in favor of developing students’ capacity for creative problem-solving, critical thinking, and adaptive learning. MIT’s emphasis on “learning how to learn” in the 1960s seemed impractical to industry partners focused on immediate engineering needs, yet it produced graduates who went on to create entirely new technology sectors. Similarly, liberal arts colleges that maintained broad-based curricula such as philosophy, despite corporate calls for narrow specialization, often produced leaders who could navigate complex, interdisciplinary challenges.
In my previous writing, I’ve argued extensively for process-oriented learning over output-focused education. But process-oriented capabilities are exactly what stakeholders (as well as educators) struggle to articulate, because workplaces traditionally evaluate people based on measurable deliverables. As AI handles more of those routine deliverables, the thinking process itself becomes even more important. Yet process-oriented capabilities and the act of thinking resist easy description or measurement.
Towards a New Conversation
For educational leaders, finding the right balance between taking stakeholder input seriously while remaining faithful to broader educational priorities and aspirations has always been challenging. This tension becomes acute when well-meaning advisors unknowingly advocate for obsolescence.
Today, educational leadership means having the courage to say: “We understand you want graduates who can use specific software, but we’re going to focus on developing their ability to rapidly learn and adapt to new tools, because the software you’re using today won’t exist in five years, and you’re probably working to replace it with AI anyway.”
Practical steps forward include reframing stakeholder engagement in several key ways:
First, supplement advisory conversations with questions such as: “Which of your current processes are you planning to outsource to technology?” This surfaces the contradiction directly and forces an honest discussion about future needs rather than present gaps.
Second, establish “future-proofing audits” in which stakeholders must explicitly identify which of the skills they are requesting they expect to automate within the typical student lifecycle. This creates accountability for the advice they provide.
Third, focus curriculum development on meta-skills: the ability to learn rapidly, synthesize across domains, work effectively with AI tools, and navigate ethical complexity. These complement rather than compete with automation.
Finally, and most importantly, leaders should trust their educational expertise. They understand learning and development in ways that external stakeholders, however accomplished, simply don’t. The greatest risk isn’t disappointing current stakeholders. It’s producing graduates perfectly prepared for a world that no longer exists.
This isn’t an argument for ignoring stakeholder input altogether. But in five to ten years, when your current students are establishing their careers, the employers who criticized your “impractical” or soft focus on critical thinking will be desperately seeking exactly those capabilities, having successfully automated away most of the skills they asked you to emphasize.
Higher education’s fundamental responsibility isn’t to current stakeholders but to future generations who will inherit a world we can barely imagine. Sometimes serving that responsibility requires the confidence to ignore well-meaning advice from people who are simultaneously working to eliminate the jobs they’re asking you to prepare students for.
If you found this valuable, please share it with someone in your network who might benefit from this perspective.
