Writers are spilling a lot of ink on the myriad concerns about artificial intelligence in education, but we may still be overlooking one of the most insidious threats: not that AI will become superintelligent, but that it will become superhumanly persuasive without the underlying intelligence to match. I suspect semioticians would have a great deal to say about signifiers and signifieds here, but whichever way you look at it, the consequences are profoundly distressing.
For education specifically, the ability to generate compelling but fundamentally unsound arguments poses a unique challenge to one of higher education's cornerstone propositions: evidence-based reasoning.
The allure of AI-generated content lies not in its intelligence, but in its unprecedented ability to craft persuasive narratives. Storytelling has shaped human communication for millennia - but much like political figures who have mastered the art of persuasion without necessarily grounding their arguments in facts or logic, generative AI systems can produce text that resonates emotionally and appears authoritative while lacking substance and verifiable merit. This phenomenon is particularly concerning in educational contexts: we must be careful not to raise a generation of learners who mistake rhetorical flair for intellectual rigor and factual grounding.
Beyond the Turing Test: The Persuasion Trap
The traditional concern about generative AI passing the Turing test - essentially appearing human-like in conversation - rests on a false premise. The real challenge isn't whether AI can imitate human intelligence, but whether it can surpass human capability in persuasion while operating without the constraints of truth or logical consistency.
The real danger isn't whether generative AI will one day think, but that it is already persuading without thinking - and doing so at scale. This problem is compounded by social media algorithms, which already create echo chambers of confirmation bias and reinforce misleading narratives.
In higher education, this creates a perfect storm: students equipped with generative AI tools can produce persuasive arguments tailored to any viewpoint, regardless of its factual basis. This is happening every day, right now, and at scale.
In the longer run, we may see a kind of intellectual arms race in which the most convincing narrative, rather than the most well-researched or logically sound argument, wins the day. The consequence isn't just the spread of AI-generated arguments, but a more fundamental shift: the erosion of the link between argumentation and evidence - the assumption that rhetoric should be rooted in substance.
Implications for Higher Education
The implications for higher education are probably more radical than we might think. The true challenge isn't just that generative AI can produce persuasive content - it's that it forces us to confront uncomfortable questions about the nature of persuasion itself. When we can no longer assume that persuasive arguments emerge from careful reasoning, we must fundamentally rethink what we're teaching and why.
This inevitably forces us to rethink how we assess student work. If polished, persuasive, and deceptively convincing prose can now be generated with little intellectual effort, then coherence, fluency, and formal argumentation may no longer serve as reliable indicators of critical thinking.
We all know that higher education is transforming dramatically. In posts like The Imperative for Rethinking Higher Education Strategy and This Needs To Change In Higher Education Management, I explored both how higher education institutions must evolve and why this is tricky.
If universities are to retain and expand their relevance, they need to be able to imagine the future. If they can’t, it’s pretty hard to get there. To begin with, universities could take the lead in navigating this new rhetorical landscape. The most valuable skill we can teach may not be traditional critical thinking, but an awareness of why certain arguments persuade us - even, no, especially when they lack merit - in an age of generative AI.
The stakes couldn't be higher. In a world where AI can generate infinite variations of compelling but potentially hollow arguments, our students' success - and perhaps the future of academia and reasoned discourse itself - depends not on their ability to out-argue the machines, but on their capacity to understand and resist the seductive power of artificial persuasion.
The real threat isn't that AI will outsmart us, but that it may make us forget what genuine intelligence looks like altogether.