Recently, I ran across two interesting reports on AI in higher education. The first is EDUCAUSE’s 2025 AI Landscape Study. EDUCAUSE is a well-established nonprofit focused on “advancing higher education through the use of IT.” They’re a legit organization, widely considered the leading IT organization in higher ed. The second is Tyton Partners’ Time for Class 2025 report. Tyton Partners is an advisory firm (consultants) that focuses on the intersection of education, media, and information markets, which they call the “Global Knowledge Sector.”
The EDUCAUSE report has a nice navigation structure that makes it easy to get to the portions that interest you. The Tyton report is a long PDF. Although the full report is worth reading, Inside Higher Ed has a nice summary that gives you the highlights.
As I perused the reports, two big conclusions struck me. First, higher ed is still behind in responding to AI. Second, the response is still (mostly) on faculty.
Fragmented Response
The EDUCAUSE report found that while AI is a strategic priority for most institutions (57%), only 22% have an institution-wide approach to AI strategy. For most schools (55%), AI strategy is happening in pockets across campus. The Tyton report had similar findings: only 28% of institutions had formal AI policies in place, although another 32% were developing them.
There’s also a lack of institution-wide response when it comes to tools. Very few schools offer campus-wide licenses for AI tools, although there are exceptions. Faculty and staff are largely left to their own devices to use whatever AI chatbot or other AI tools they happen to find, like, and can afford. I’m a little surprised that AI integration with learning management systems is still limited. The Tyton report found that only 1 in 10 faculty reported currently using embedded AI features in their LMS and courseware.
The fragmented institutional response to AI isn’t really that surprising to higher ed veterans. Universities are odd institutions in many ways. Even the most powerful president, provost or dean has relatively limited power compared to their counterparts in other industries. Very few higher ed leaders can just snap their fingers and dictate changes. Although that’s a good thing overall, it does lead to situations like the current response to AI. But it’s not only institutional inertia that’s hindering higher ed’s response, it’s also funding. Do a little math: at $20 per month per person, providing an AI chatbot to everyone at even a small school is expensive. Tack on support staff, training programs and the like and the bill gets huge very quickly. So, for now we’re left with a fragmented AI response that will gradually become more coordinated over time. (By the way, AI is just one of many forces higher ed leaders are dealing with. Higher ed leadership is NOT easy.)
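To make the back-of-the-envelope math concrete, here’s a quick sketch. The headcount, salary, and training figures are hypothetical illustrations I made up for this example, not numbers from either report or from any vendor; only the $20-per-month license price comes from the paragraph above.

```python
# Back-of-the-envelope cost of campus-wide AI chatbot licenses.
# All figures below are hypothetical illustrations.

def annual_license_cost(headcount: int, per_seat_monthly: float = 20.0) -> float:
    """Annual licensing cost at a flat per-seat monthly rate."""
    return headcount * per_seat_monthly * 12

# A small school: roughly 3,000 students plus 500 faculty and staff.
headcount = 3_000 + 500
licenses = annual_license_cost(headcount)  # 3,500 x $20 x 12 = $840,000/year
support = 2 * 75_000                       # two support staff (rough loaded cost)
training = 50_000                          # workshops, materials, release time

total = licenses + support + training
print(f"Licenses: ${licenses:,.0f}")
print(f"Support:  ${support:,.0f}")
print(f"Training: ${training:,.0f}")
print(f"Total:    ${total:,.0f}")
```

Even with these modest assumptions, the bill tops a million dollars a year, which helps explain why so few institutions offer campus-wide licenses.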
It’s Up to Faculty
So, currently the response to AI is largely up to faculty. Both reports hint at this in their calls for a strong focus on AI training for faculty and staff (EDUCAUSE) and in holding up faculty as the key to AI literacy (Tyton). These are reasonable conclusions, but to some extent they fail to recognize that many faculty are already overwhelmed and under-resourced. Many of the faculty I talk to are already doing everything they can to stay afloat. AI is just one more thing weighing them down, even if they’re AI optimists. One interesting finding: while faculty in general reported that AI increased their workload, 36% of faculty who use AI daily experienced a decrease in workload, compared with 26% of daily users who experienced an increase.
Nowhere is the AI-induced load on faculty more apparent than in the area of academic integrity. Both reports indicate an ongoing struggle to respond to academic integrity challenges related to AI. This is increasing faculty workloads, not only to monitor cheating but also to redesign assessments to either leverage AI or reduce the potential for inappropriate AI use. AI detectors don’t work, especially when it comes to clever students. Redesigning assignments takes considerable time and effort. Figuring out how to adapt programs to account for workforce changes brought on by AI is an even more daunting challenge. It’s a lot to deal with. The bad news is that none of this is going to get any easier in the short term. The good news is that AI has the potential to make higher ed more effective AND more efficient, even for faculty. It will take time and effort, but I firmly believe the day will come when AI is a friend to faculty and not the current irritation.
In closing, I urge you to take some time to scan through the reports. I’ve only scratched the surface. Taken together, they paint a picture of interesting times ahead.
Want to continue this conversation? I'd love to hear your thoughts on how you're using AI to develop critical thinking skills in your courses. Drop me a line at Craig@AIGoesToCollege.com. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It's available at https://www.aigoestocollege.com/follow.
Looking for practical guidance on AI in higher education? I offer engaging workshops and talks—both remotely and in person—on using AI to enhance learning while preserving academic integrity. Email me to discuss bringing these insights to your institution, or feel free to share my contact information with your professional development team.
This is a problem across the board, not just in higher ed, but K-12 as well. Secondary schools have a little more ability to centralize and create a more coherent strategy but the independent nature, especially of tenured faculty at colleges and universities, still means fragmentation will be a problem going forward. If you follow the debate, it’s clear there are still lots of skeptics out there who want to double down - futile in my opinion - on AI detection. With GPT-5 on the horizon (the advance buzz is strong but we’ll see if the hype holds up - regardless, these frontier models seem to be going only in one direction), I don’t see how faculty who opt out can possibly keep up. The things AI can do now are extraordinary.
Very interesting; thanks, Craig. Most revealing of the nature of the challenge presented by AI are the survey elements that measured concerns and opportunities. Nearly every possible concern scored a strong response and so did the hopes and possibilities! That about sums up where academia stands at the moment, and most other folks who have given AI serious thought. Somewhat concerning is that so many are not really sure about where their institution stands, what is going on in the administrative halls, or where the time and money will come from to bring on this technology in a coherent and helpful way.