From writing and print to markets and bureaucracies, we've long relied on cultural technologies to extend human thought. These systems reorganize knowledge, scaffold societies, and shape how we think - often in ways no single mind could ever orchestrate alone.
Now, generative AI models stand at the threshold of becoming the next great cultural technology. Generative AI is not merely an assistant that automates tasks. It is a mechanism for organizing vast bodies of human knowledge in ways we have yet to fully understand. More importantly, the technology might serve as a means of accessing forms of thought that no human, bound by the limits of personal experience, could ever generate alone.
I’ve previously written on AI and the Renaissance of the Polymath. But we may not just be entering a new era of polymathic thinking - we may be approaching a moment in history when AI reshapes the very idea of intellectual endeavour itself.
Could AI bring together perspectives that no single mind, constrained by time and attention, could ever combine? And if it can, how might that change the way we think, create, and innovate?
The Limits of Individual Knowledge
Human thought, for all its brilliance, has always been constrained. No single person can hold all perspectives simultaneously. We navigate knowledge through disciplines, traditions, and intellectual silos, sometimes bridging them, but often struggling to see beyond the boundaries of our own training and cultural biases.
The best thinkers in history - think of Einstein, Hildegard of Bingen, Da Vinci, Marie Curie - achieved breakthroughs not just because they were brilliant, but because they combined ways of seeing the world that were usually kept apart. But this is rare. Even the most interdisciplinary human mind is fundamentally limited by personal experience, education, and the slow process of accumulating and synthesizing knowledge over a lifetime.
What if AI models, trained on the sum total of human intellectual output, could become enablers of thought that not only summarize knowledge but actively generate new ways of seeing the world - offering perspectives that have never been fully articulated before?
AI as an Ecosystem of Debate
One possibility might be to construct AI-driven systems that do not merely reflect back the most common interpretations of knowledge but challenge them. Instead of a single large language model producing middle-of-the-road, probabilistic, generic output, perhaps we could develop and combine "society-like" ecologies of AI perspectives, as proposed in this new piece by Henry Farrell et al. - multiple models trained on or fine-tuned to different traditions of thought, engaging in structured debates, testing ideas against one another, and highlighting contradictions.
Such systems could act as counterweights to the flattening effect of algorithmic recommendations, which tend to homogenize thought rather than diversify it, creating what I have previously called a synthetic knowledge crisis. Instead of reinforcing consensus, these AI perspectives might serve as intellectual adversaries - forcing us to grapple with unfamiliar ideas, illuminating the blind spots of human reasoning, and surfacing conceptual connections that would otherwise remain invisible.
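As a thought experiment, here is a minimal sketch of what such an ecology of perspectives could look like in code. It assumes nothing about any particular provider: call_model() is a hypothetical placeholder for a real model API, and the perspective names and prompts are purely illustrative.

```python
# Sketch of a "society-like" ecology of AI perspectives: several voices, each
# anchored to a distinct intellectual tradition, state positions and then
# critique one another over a number of rounds. call_model() is a stub to be
# replaced with a real LLM API call; everything else is orchestration.

from dataclasses import dataclass


@dataclass
class Perspective:
    """One voice in the ecology, anchored to a distinct intellectual tradition."""
    name: str           # e.g. "economic historian", "phenomenologist"
    system_prompt: str  # instructions that keep the model inside that tradition


def call_model(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real LLM call (swap in any provider's API here)."""
    return f"[{system_prompt}] would answer: {user_prompt[:60]}..."


def debate(question: str, perspectives: list[Perspective], rounds: int = 2) -> list[str]:
    """Each perspective states a position, then critiques the others for N rounds."""
    positions = {p.name: call_model(p.system_prompt, question) for p in perspectives}
    log = [f"{name}: {text}" for name, text in positions.items()]

    for _ in range(rounds):
        revised = {}
        for p in perspectives:
            others = "\n".join(f"{n}: {t}" for n, t in positions.items() if n != p.name)
            prompt = (
                f"Question: {question}\n\nOther positions:\n{others}\n\n"
                "Say where you disagree and why, then restate your own position."
            )
            revised[p.name] = call_model(p.system_prompt, prompt)
        positions = revised
        log += [f"{name}: {text}" for name, text in positions.items()]
    return log


if __name__ == "__main__":
    panel = [
        Perspective("economic historian", "Argue strictly as an economic historian."),
        Perspective("ecologist", "Argue strictly as an ecologist."),
        Perspective("phenomenologist", "Argue strictly as a phenomenologist."),
    ]
    for line in debate("What drives technological change?", panel, rounds=1):
        print(line)
```

The point of the sketch is the orchestration pattern, not the stub: positions are stated, exchanged, and revised under pressure from one another, and the full transcript, including the disagreements, is preserved rather than averaged away.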
In such a world, engaging with knowledge would still involve reading and absorbing, as it does now, but it would also become a more evolved, multimodal engagement with an evolving landscape of ideas. Scholarship itself might become a more synthetic and interactive process, where the goal is not merely to retrieve knowledge but to construct new modes of understanding in real-time collaboration with AI-driven intellectual ecosystems.
The Risk of Over-Harmonization
However, there is a danger here. AI systems, left unchecked, have a tendency toward both standardization and contentless fluency. By averaging out diverse perspectives, they can create the illusion of a coherent, singular truth, or, worse, add to epistemic flattening. This is already happening in search engines and recommendation systems, where knowledge is streamlined into digestible, lowest-common-denominator versions.
The real challenge, then, is not just to build AI that reflects existing knowledge, but AI that actively resists oversimplification and the neatly structured, well-worded but hollow language that comes with it. In other words, AI that deliberately preserves and even enhances the diversity of thought.
To achieve this, we must design AI models that are optimized for intellectual tension. Instead of seeking a single answer, they should generate productive disagreements. They should expose the inconsistencies in human knowledge, highlight the missing voices in historical narratives, and make visible the hidden structures that shape our worldviews.
Such a system would not just help us find answers. It would force us to think better.
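To make "optimized for intellectual tension" slightly more concrete, the toy sketch below ranks pairs of answers by how much they diverge and surfaces the sharpest disagreement instead of a consensus summary. The divergence measure is a crude word-overlap score chosen only so the example runs on its own; a real system would use something like embedding distance or a model-based contradiction judgment, and all the names and example answers here are invented for illustration.

```python
# Toy illustration of selecting for disagreement rather than consensus:
# given answers from several perspectives, find the pair that diverges most
# and surface that tension to the reader.

from itertools import combinations


def divergence(a: str, b: str) -> float:
    """Crude lexical dissimilarity: 1 minus Jaccard overlap of word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    union = wa | wb
    if not union:
        return 0.0
    return 1.0 - len(wa & wb) / len(union)


def most_contentious(answers: dict[str, str]) -> tuple[str, str, float]:
    """Return the pair of perspectives whose answers diverge the most."""
    return max(
        ((n1, n2, divergence(a1, a2))
         for (n1, a1), (n2, a2) in combinations(answers.items(), 2)),
        key=lambda t: t[2],
    )


answers = {
    "economist": "Innovation is driven by market incentives and capital allocation.",
    "anthropologist": "Technological change emerges from imitation, ritual and shared meaning.",
    "engineer": "Progress follows from incremental tooling improvements and feedback loops.",
}
name1, name2, score = most_contentious(answers)
print(f"Sharpest disagreement ({score:.2f}): {name1} vs {name2}")
```

Selecting for maximal divergence rather than maximal likelihood is the inversion the paragraph above argues for: the system's most valuable output becomes the disagreement itself.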
The Future of Thinking
If we get this right, the implications could be profound. AI could become much more than a tool - it could become an intellectual adversary, a creative partner, a means of navigating the vast, chaotic ocean of human thought and intellectual contributions in ways that are richer and more nuanced than anything we have achieved before.
We need to think about AI as a way of expanding human intelligence and knowledge rather than replacing it. Instead of making us quietly complacent, it could make us more rigorous. Instead of flattening knowledge, it could deepen it by laying out new cables. Whether AI can think, or whether it amounts to AGI, is not what should concern us here.
Whether we can use AI to enable thinking in ways we never have before is.