Reflections On AI Policies in Higher Education
And Why First-Hand Generative AI Experience is Crucial for Leadership
In this edition, I want to direct attention to AI policies in educational institutions. This is not a new theme, of course, and there is already excellent advice on the subject (linked below).
So why write more? Because I want to talk about the elephant in the room: top management.
But we'll start on the fringes of the topic: I have yet to give a presentation, keynote, or workshop in higher education where participants have subsequently expressed that generative AI is immaterial. In fact, many experience eureka moments and come away surprised, inspired, enthused, or even anxious. Participants often spontaneously articulate the potential and the concrete applications they see for generative AI in their daily work, and things typically get really interesting when we start working with GPT-4 across text, images, data, diagrams, policies, process plans, sheets, tables, etc.
At the same time, I have observed a duller picture amongst leaders at educational institutions: broadly speaking, the higher up the organisational pyramid a leader sits, the less likely they are to have tried generative AI themselves beyond a superficial level. Ethan Mollick has expressed the same concern in a broader professional context.
Top management has many priorities in a busy leadership life. But the consequence of not paying personal attention to generative AI is that the whole AI agenda remains rather abstract – for leaders, as for anyone else who hasn't tried generative AI in practice – because they have not yet been confronted with that ‘what-on-earth-is-going-on-here’ eye-opener that is fundamental to understanding what generative AI is and what it implies for the education sector.
AI is already having far-reaching consequences for societies and educational institutions around the world. It is my contention that you cannot set a strategic direction for AI in higher education without having worked with the technology yourself. That first wave of overwhelming, profound surprise simply cannot be outsourced to other parts of the organisation.
I mention this because the need for both strategic and operational guidance on generative AI is growing rapidly in higher education institutions. Without the necessary – and quite basic – personal experience of generative AI, however, it becomes difficult for leadership to meaningfully direct and anchor AI in the organisation.
And without clear guidance in place, uncertainty arises for all internal stakeholders about expectations and appropriate uses of AI. This makes developing an institutional AI policy not just sensible, but necessary.
The Need for an AI Policy
Professors, admin staff, and students are increasingly seeking help and answers in well-intended attempts to comply with institutional guidelines – regarding teaching, assignments, and exams, for example – and the need for an AI policy seems rather obvious to me.
Not only are students and employees curious about what is expected of them in the institutional context they are part of, but they also express interest in discussing the ethical, environmental and legal aspects of using generative AI. Thankfully so.
As I write this, Google has launched Gemini, Elon Musk has Grok, and the Microsoft Office suite is moving ahead with Copilot. In the EU, the AI Act has dropped – although it will be years (!) before it takes full effect. Suffice it to say that AI development is still moving incredibly fast, and in the coming months generative AI will no doubt become an even stronger force in the education sector than it is today.
Were all AI development to stop tomorrow – and there is absolutely no indication of that; quite the contrary – the technology already has such far-reaching consequences for the education sector that institutional traffic rules are absolutely necessary going forward.
Inaction is not responsible educational leadership.
What Should an AI Policy Contain?
The ethical and legal perspectives are obviously essential, but first and foremost it is probably necessary to consider the academic and educational use of generative AI. Traditional academic integrity mechanisms are changing permanently, and fast.
For instance, you may need to consider:
Use of AI during exams and tests: Can your organisation defend an outright ban, or is generative AI allowed under certain conditions? And if so, which?
Use of AI in relation to students' written assignments: Should students explicitly indicate, for example, when AI tools have contributed to the work? And if so, how?
Etiquette regarding the use of chatbots like Claude, Bing, and ChatGPT: Should students disclose that they are using a chatbot to answer questions in class or in preparation for class? And if so, how?
Employees' use of AI at work: Can teachers use AI to develop instruction materials, cases, etc.? And if so, how is quality ensured?
These are just a few very basic examples, of course, but specific institutional guidelines are crucial to creating transparency and safe frameworks around the increasing use of generative AI.
As I mentioned at the beginning of this post, a number of people have written extensively on what an AI policy should contain. Here are a few links to great resources you may find useful:
Resources
Matthew Wemyss: Finding the Right Balance: Reflections on Writing a School AI Policy
Stefan Bauschard: AI Policy Considerations for Schools
Amanda Bickerstaff / AI for Education: AI Resources
//
If you found this post useful, please consider sharing it with your colleagues or someone in your network.