AI Deep Research is awesome. I am so impressed with ChatGPT's deep research that I have a Pro subscription that allows 120 deep research reports. To me, this is well worth the $200 per month subscription fee.
For those of you who might not be familiar with deep research, it's a relatively new generative AI tool that creates extensive research reports, complete with sources. These reports really are impressive, especially ChatGPT's. Gemini's deep research is also impressive, although I don't find it quite as good as ChatGPT's. The jury is still out on Perplexity's version, though.
Here’s how I described Deep Research in an earlier article:
As the name implies, Deep Research actually does research before preparing its response. It also thinks through its research before starting it and usually asks clarifying questions before getting to the real work. It's a bit eerie—it almost seems like you're texting with a human research assistant. The results of my early testing are beyond impressive. My first test was a pre-submission review of a journal article a colleague and I are planning to submit to a top journal. With a pretty simple prompt and a few answers to clarifying questions, Deep Research produced a 16-page (single-spaced) developmental review that will help us strengthen the paper before we submit it. This should be a big boost to our chances of having the paper accepted.
A couple of weeks ago, my doctoral students and I discussed deep research reports and how they might be useful to academic scholarship. Our conclusion was that they could help you learn the basics of research in a particular area, but they were not a substitute for "real" research. In other words, they could jump start deep understanding, but they weren't enough on their own. Real human thought and contemplation, and additional research, are still required.
This struck me as both insightful and interesting ... and indicative of many uses of AI. AI can get you started, but it can't take you all the way. Human effort is still required for true understanding and insight.
After thinking about this some more, I concluded that deep research is usually kind of shallow. Sure, these reports are often voluminous and dense with information. Many of my reports are over 30 pages long. Most aspects of the reports, in my experience, are reasonably well referenced. But they are, without exception, shallow. They have a "just the facts" flavor about them. They remind me of the sort of report a diligent graduate assistant might produce. They may be extensive, but they don't produce new insights. That's no knock on GAs, by the way. Few of us were capable of producing great insights early in our academic journeys.
Here's the thing: I'm perfectly fine with deep research reports. They really are awesome, as long as you understand what they are and what they are not.
Let me give you an example. I'm working on a paper that uses Service-Dominant Logic as a theoretical lens. I knew next to nothing about SDL other than that it exists and looked useful for my project. So, I asked ChatGPT to create a deep research report on SDL. In about 15 minutes, I had a well-sourced, cohesive, and comprehensive report on SDL that brought me up to speed on the basics. This saved me HOURS of work. I was able to quickly and efficiently determine whether SDL was suitable for my project. (It is.) Then, I was able to figure out where to zero in, where I needed a deep understanding of the theory.
Before AI deep research, getting to that point would have meant hours of Google Scholar searches and article scanning. The deep research report let me do that in a fraction of the time. Yet, all of this really is kind of shallow. I do not claim to have a deep understanding of SDL and its applications, but I'm off to a good start. Yes, there's still a ton of work to do, but I'm thrilled to save hours of time and effort.
Much of AI is this way. I've found very few tasks that AI can complete 100% of, but I use it daily to save me time and effort. This is what I call the 50% mindset, which just means that you should stop expecting AI to do entire tasks for you — instead, adopt a co-production mindset where AI helps you do the annoying parts faster, saving time and effort without replacing your role.
Perhaps more importantly, deep research is a good example of why human thought and imagination are still critical to any sort of knowledge work. AI gives the appearance of depth, but without the human element, it's really shallow. That's why the idea of co-production between humans and AI is so compelling. Like any good collaboration, partners in co-production should learn each other's capabilities and limitations, then produce in ways that best leverage each team member.
The key to this, of course, is knowing what AI is and is not good at. Deep research is great at compiling a lot of material quickly and comprehensively, but on its own, it's not great at producing true insights, although it can be a useful collaborator for a human trying to nail down fuzzy but interesting ideas.
The TL;DR of this is that when used effectively, AI, including deep research, can be a tremendous asset, but we still need humans to develop truly novel insights.
The evolution of AI tools like deep research presents both an opportunity and a challenge for academic scholarship. While these tools can dramatically accelerate the initial phases of research (gathering sources, synthesizing basic information, and identifying key concepts), they cannot replace the essential human elements of scholarship: critical thinking, theoretical innovation, and genuine insight. The true power lies in understanding this dynamic and embracing AI as a collaborative tool rather than a replacement for human intellect.
For academics and researchers, the path forward is clear. Leverage AI's capabilities for the heavy lifting of initial research and information gathering, but invest your intellectual energy where it matters most: developing novel theoretical frameworks, identifying unexpected connections, and advancing knowledge in meaningful ways. Deep research may not be as deep as its name suggests, but when combined with human expertise and critical thinking, it can be an invaluable tool in the modern scholar's toolkit.
Want to continue this conversation? I'd love to hear your thoughts on how you're using AI to develop critical thinking skills in your courses. Drop me a line at Craig@AIGoesToCollege.com. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It’s available at https://www.aigoestocollege.com/follow. Looking for practical guidance on AI in higher education? I offer engaging workshops and talks—both remotely and in person—on using AI to enhance learning while preserving academic integrity. Email me to discuss bringing these insights to your institution, or feel free to share my contact information with your professional development team.
In my discipline, much of the truly important and relevant material is behind a paywall (think Elsevier). Deep research that gained access to the iceberg below the waterline would be a game changer. But the "moat" owned by the journals is much wider and deeper than the "moatless" AI companies can currently cross. Still, one can hope.