What 81,000 People Told Anthropic About AI (And Why Higher Ed Should Be Paying Attention)
Last week, Anthropic published the results of what they believe is the largest qualitative study ever conducted. Over one week in December 2025, they invited every Claude.ai user to sit for a conversational interview with a specially prompted version of Claude. The topic: how do you use AI, what do you hope it could do, and what scares you about it? Over 80,000 people across 159 countries and 70 languages took them up on it. The sheer scale is remarkable, but so is the method: an AI interviewer conducted open-ended, adaptive conversations with each participant, then AI-powered classifiers categorized every response across multiple dimensions. It’s qualitative research at a scale that would have been flatly impossible even two years ago. (By the way, expect to see increasing use of AI interviewers. Cloud Research recently released Engage, which uses AI to interview participants based on a protocol and guidance you provide. I’m planning on testing this soon.)
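To make the two-stage method concrete (an AI interviewer collects open-ended responses, then classifiers tag each response across several dimensions), here is a minimal sketch of the pipeline shape. Everything in it is illustrative: the dimension names, labels, and the keyword-matching `classify()` are stand-ins for the LLM calls the real study would have used.

```python
# Illustrative sketch only: dimensions, labels, and the keyword-based
# classify() are hypothetical stand-ins for LLM-powered classifiers.
DIMENSIONS = {
    "theme": ["productivity", "relationships", "learning", "finance"],
    "sentiment": ["hope", "fear", "mixed"],
}

def classify(response: str, labels: list[str]) -> str:
    """Stand-in for an LLM classifier: return the first label whose
    keyword appears in the response. A real system would prompt a
    model to choose the best-fitting label instead."""
    for label in labels:
        if label in response.lower():
            return label
    return "other"

def tag_responses(responses: list[str]) -> list[dict]:
    """Tag every response across every dimension, the way the study's
    classification layer categorized each interview answer."""
    return [
        {dim: classify(r, labels) for dim, labels in DIMENSIONS.items()}
        | {"text": r}
        for r in responses
    ]

sample = [
    "AI boosts my productivity, and that gives me hope for more family time.",
    "I fear becoming dependent on it for learning.",
]
for tagged in tag_responses(sample):
    print(tagged)
```

The point of the sketch is the architecture, not the classifier itself: once tagging is automated, an 80,000-interview corpus becomes something you can aggregate and cross-tabulate like survey data.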
There are A LOT of interesting findings, so I encourage you to browse through the study report. Here are a few of the findings that stood out to me. One was interesting. One genuinely surprised me. And one I think is critically important for anyone working in higher education. (How’s that for a teaser?)
People Don’t Want AI to Work Faster, They Want AI to Help Them Live Better
If you’ve been paying attention to the AI discourse, it would be easy to think that the story is mostly about productivity. Although productivity was the single most common experience people reported (32% of respondents described AI dramatically speeding up their work), the AI story isn’t entirely about efficiency. Many users started the conversation by talking about productivity, but when the AI interviewer pushed them on the underlying reasons they wanted increased productivity, the responses were surprising. Here’s a snippet from the report:
Many others similarly started the interview talking about productivity, but after Anthropic Interviewer asked about their underlying hope behind it—what realizing this vision would enable for them—other priorities surfaced. It wasn’t about doing better work, but increasing their quality of life outside of it. Using AI to automate e-mails became, in actuality, a desire to spend more time with family.
“With AI I can be more efficient at work... last Tuesday it allowed me to cook with my mother instead of finishing tasks.” (White-collar worker, Colombia)
“I want to use less brain power on client problems... have time to read more books.” (Freelancer, Japan)
11% of users viewed AI’s productivity gains as a way to free up time for more rewarding pursuits, such as personal relationships and leisure. I’ve been talking about this for several years. 10% want to use AI to create financial independence, and 14% want “AI to help them manage the logistics and administrative burden of modern life’s quotidian tasks.” (I had to use a direct quote here to justify the use of “quotidian,” which is a fun word.)
To me, the great hope of AI is that it can free humans from mundane tasks, creating more time and space for more meaningful endeavors. Think about it. How great would life be if you didn’t have to spend time on the boring, unrewarding tasks we have to tackle? Imagine a world in which AI would simply write your annual report, grade routine assignments, or draft the reports that nobody will actually read. Glorious.
Those of us in higher education should keep this in mind. We tend to frame AI as a tool for academic productivity or a threat to academic integrity. Our students, colleagues, and staff may be thinking about it in entirely different terms. They may be less interested in writing better papers and more interested in reclaiming Wednesday evenings.
The “Light and Shade” Are Tangled Together Inside the Same People
Here’s a result that was surprising to me (at first). Many of us seem to think that AI optimists and AI pessimists are in separate camps. To an extent, that’s true. However, what people want and fear from AI are tightly coupled. Anthropic called this the “light and shade” of AI: “the same capabilities that lead to benefits also produce harms.” They identified five tensions that recurred throughout the report:
There is a tension between using AI to learn and growing so reliant on it that you cease thinking for yourself; between being impressed by AI’s judgment but also burned by its mistakes. People find solace in AI but fear a time when its companionship stands in for human connection. They save time on some tasks only for the treadmill to speed up on others, and they dream of economic freedom at the same time they dread potential job displacement.
As I said, at first, this surprised me. But the more I thought about it, the more the tensions made sense. In fact, I’ve experienced many of them; you probably have too. I especially feel the first tension. AI has been amazing for helping me explore ideas and gain knowledge, but I do worry about becoming over-reliant on it. When my favorite chatbot (currently Claude) is down, I feel a mild sense of panic before remembering that I’m actually capable of doing things without AI.
There’s a particularly interesting finding related to this dichotomy. According to Anthropic, the benefit side tends to be grounded in experience, while the harm side tends to be more hypothetical.
Across most tensions, the benefit side is more grounded in experience, while the harm leans hypothetical. For example, 33% of people mentioned AI’s benefits for learning, while 17% expressed worry about cognitive atrophy from AI use. 91% of those who mentioned learning benefits mentioned realizing those gains in some way, but 46% of those worried about atrophy had seen it firsthand. Students raised this particular tension the most—more than half had experienced learning benefits, but 16% also noted signs of cognitive atrophy, a rate exceeded only by their teachers (24%) and academics (19%). Troublingly, educators were 2.5-3 times more likely than average to report having witnessed cognitive atrophy firsthand, presumably in their students.
On the surface, this might seem to indicate that we just need to help people gain more experience with AI. Although that’s a good thing overall (in my opinion), it’s overly simplistic. The study’s data suggest that the context and the way in which you use AI shape whether it helps or harms you. If AI substitutes for your thinking, cognitive atrophy becomes reality. But if you collaborate with AI to push your thinking harder, learning and cognitive improvement result. And not every situation calls for the same approach; if I just want to crank out a routine email, having AI write it for me might be fine. But if I want to understand something deeply, I can’t outsource the cognitive work. Exactly how to help people understand these distinctions is messy, but these results seem to indicate that we need to start figuring it out.
Cognitive Atrophy Is Real, and Educators See It First
Speaking of cognitive atrophy, the study found that 16.3% of all respondents expressed concern about cognitive atrophy: the worry that over-reliance on AI causes skill loss, intellectual passivity, or a decline in critical thinking. That’s notable on its own. But the occupational breakdown is what should give us pause.
Educators were 2.5 to 3 times more likely than the average respondent to report having witnessed cognitive atrophy firsthand. They’re not just afraid of or anticipating cognitive atrophy; they’ve witnessed it. Among students, more than half reported experiencing learning benefits from AI, but 16% also noted signs of cognitive atrophy in themselves. Let that sit for a second. 16% is a big number considering that these users had to have sufficient self-awareness to recognize cognitive atrophy. Almost a quarter (24%) of teachers and professors had seen cognitive atrophy in their students. Again, that number doesn’t tell the whole story, since many educators may not have sufficient opportunities to observe cognitive atrophy in their students. So the actual rate of cognitive atrophy may be much higher.
This validates something many of us have been sensing in our classrooms but haven’t had good data to support. A South Korean student captured it with uncomfortable honesty: “I got excellent grades using AI’s answers, not what I’d actually learned. I just memorized what AI gave me... That’s when I feel the most self-reproach.” That’s not a student gaming the system without a care. That’s a student who recognizes something is being lost.
But here’s the crucial nuance, and I think this is where it gets really interesting for curriculum design. The study also found that tradespeople were among the most enthusiastic about AI for learning (45% reported experiencing learning benefits, second only to students), yet almost none had witnessed cognitive atrophy (4%). Self-employed researchers and people not currently working showed a similar pattern. The study’s authors suggest that AI’s learning benefits may be strongest when learning is volitional, rather than within institutional structures where AI is more likely to be used as a shortcut.
Read that again. When people choose to learn with AI, they learn. When they’re required to produce academic outputs in a system that rewards completion, they shortcut. This isn’t a story about AI being good or bad for learning. It’s a story about what happens when AI meets institutional incentive structures that reward products over processes. And that should make all of us in higher education deeply uncomfortable, because those incentive structures are ones we built and maintain. (See my article on the grade economy for more of my thoughts about the incentive system we’ve created.)
Before closing this article, I want to point out a quote that should scare us … a lot: “… learning deeply is of no use—ultimately I can just use AI.”
As educators, we need to make it a priority to counter this sort of thinking in our students. It’s an extension of “Why memorize anything? I can just Google it.” Although the ability to Google facts and information is certainly useful, the black-and-white thinking is misguided, and so is the idea that learning deeply no longer matters. This attitude is out there. Even if they don’t say it, many of your students are thinking the same thing. We need to help students see the shortsightedness of this mindset before it’s too late.
What This Means for Us
There are a few caveats to the study. It’s not a peer-reviewed academic paper, and there are important methodological limitations. The sample is entirely Claude users, likely biased toward early adopters. The interview asked about positive visions first, which may have primed responses. And the AI-powered classification system, while impressive in scale, introduces its own interpretive layer that deserves scrutiny. (The study’s authors, to their credit, acknowledge these limitations and provide a detailed methodological appendix.)
Still, 80,000 qualitative interviews across 159 countries represent something genuinely new in social science methodology. And the patterns that emerge are too consistent and too intuitively resonant to dismiss.
Here are a few things we should keep in mind, based on the study:
Our students’ relationships with AI are probably more complicated than our policies assume (assuming you HAVE an AI policy). Students aren’t simply cheating or simply learning. Many are doing both, and they’re aware of the tension. Institutional responses that treat AI use as a binary (allowed or prohibited) miss the psychological reality.
The cognitive atrophy finding demands serious attention (not panic). If nearly a quarter of educators are already witnessing it, we need to understand the mechanism. The volitional learning finding suggests the problem isn’t AI itself but AI combined with assessment structures that incentivize shortcuts. That’s a design problem that has a design solution.
We should pay attention to what people actually want from AI, which is overwhelmingly about living better lives, not optimizing performance (academic or otherwise). If our AI strategies focus exclusively on pedagogical applications, we’re missing the larger human story. Our students are using AI to manage their schedules, process their emotions, explore career paths, and cope with the cognitive load of modern life. Whether we think that’s healthy or not, ignoring it seems unwise.
The report is dense and full of interesting findings, so I encourage you to take some time exploring the results and the quotes. The big conclusion for me is that, for these early adopters, AI has become intertwined with their daily lives. In this sense, AI is edging towards becoming like mobile phones and the Internet … it’s just part of how modern life is lived. Just like those other technologies, the question isn’t whether AI is good or bad; it’s what we, as educators, can do to nudge the outcome away from the bad and towards the good.
Want to continue this conversation? I’d love to hear your thoughts on how you’re using AI. Drop me a line at Craig@AIGoesToCollege.com. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It’s available at https://www.aigoestocollege.com/follow.
Looking for practical guidance on AI in higher education? I offer engaging workshops and talks—both remotely and in person—on using AI to enhance learning while preserving academic integrity. Email me to discuss bringing these insights to your institution, or feel free to share my contact information with your professional development team.




So much interesting stuff in this study. Being able to do a qualitative study like this in real time, and analyze it, is phenomenal in and of itself.
So much to unpack here but one of the first things that hit me was the student who recognized it was a means to an end. I would argue that existed long before AI - it was just inefficient. I can't tell you how many classes I had where I crammed the material in and right after the test it was all gone. So many of my classes as an undergrad and even some as a grad student in my master's program were multiple choice tests.
And then the one that I keep coming back to over and over again is that when the learning is out of curiosity or interest, the metacognition with AI can be an incredibly exciting experience.
I just wrote an article today about Pavlov because a study just came out that might flip the idea of repetition and learning completely on its head.