Higher education isn’t just about finding answers. It’s about understanding how we arrive at them. But what happens when AI-powered tools like Perplexity Deep Research can scan thousands of sources and generate a polished analysis in minutes, mimicking the depth and structure of a well-researched literature review?
For researchers, this technology can be a powerful asset - if used with caution. But for students, the implications are more troubling. Engaging with sources is fundamental to developing academic judgment. If AI takes over that process, what happens to the foundational skills higher education is meant to cultivate?
Is This Really “Research”?
Much of the debate around AI-driven research tools stems from confusion over what we actually mean by research.
If research is simply information gathering - locating hard-to-find facts, summarizing existing knowledge, and compiling references - then AI does a pretty remarkable job. But that is a narrow and misleading definition.
Research is not merely the act of collecting information. Proper research begins with a question to be answered or a problem to be solved and requires the collection, analysis, and interpretation of data. In other words, research is an intellectual endeavor, not just a retrieval exercise.
By that standard, AI-based research tools are astonishingly fast at assembling and summarizing knowledge but significantly weaker at the most essential elements of research - the intellectual heavy lifting of critically analyzing sources, synthesizing competing perspectives, and interpreting evidence in a meaningful way.
The Problem with AI as Research
Perplexity Deep Research claims to provide “in-depth research,” conducting extensive literature searches and delivering synthesized insights at a level that rivals human expertise. The platform boasts that it condenses “many hours of research” into just a minute or two. After testing it, I’d say that’s an understatement. It can replace days or even weeks of work - an astonishing leap forward, but one that presents serious challenges for education.
AI-powered academic search is efficient, but it is not research in the traditional sense. Like all LLM-based systems, it selects sources without transparency, summarizes without engaging alternative perspectives, and delivers conclusions with a confidence that should raise red flags. It lacks methodological curiosity, fails to seek out conflicting arguments, and does not reflect on why certain sources are prioritized over others. And as AI-generated content increasingly floods the internet, the quality of sources these models rely on becomes an open question.
It’s tempting to think of AI as just another research tool, like library databases or Google Scholar. But unlike traditional research methods, which require active engagement with sources, Perplexity Deep Research reduces the process to a single command. Gone is the intellectual friction. Gone is the need to evaluate why one source carries more weight than another.
When AI doesn’t just help us find sources but also dictates which ones are valuable, research shifts from an academic endeavor to a mere validation exercise. Of course, it is nothing new that generative AI can sound rhetorically authoritative while lacking real depth. But in academia, where research creates new knowledge, the stakes are high. If students aren’t explicitly taught to critically engage with AI-generated analyses, they may accept the conclusions as fact. That runs directly counter to the tradition of source evaluation that has long been a cornerstone of higher education.
From Source Criticism to Fast-Food Information
What happens when we can no longer trust that a student’s information gathering and source analysis are the result of their own intellectual effort? Universities have two options: ignore the issue or rethink how they teach research, critical thinking, and information literacy.
The solution isn’t to ban generative AI or pretend we can carry on as before. But neither is it to embrace AI-driven research as a neutral tool. Generative AI is never neutral - it operates without intent, but its effects are far from harmless.
If we allow AI to take over research and source evaluation, we won’t be training students to think critically - we’ll be training them to validate machine-generated content. And this raises a deeper question: What happens when students stop learning how to question?
I share your concerns about students losing - or never developing - the knowledge and skills required to be effective researchers through the use of tools like Deep Research. Supporters of AI in higher education often say we simply need to make sure students develop AI literacy, which of course requires them to know how to evaluate the output of AI tools. But to evaluate that output, you need to be something of a subject-matter expert in the topic, and I do not think you can develop that depth of knowledge without deep, reflective reading of original research. So there is a catch-22 in assuming students will be able to rigorously evaluate the output of AI research.