AI in Higher Education: Looking Ahead to 2025
A Year of Change: Reflecting on 2024
2024 brought significant developments in generative AI: new models, new tools, and more and more students using AI, both appropriately and inappropriately. The AI arms race among the big players accelerated, with new, more capable models released almost constantly. New tools, especially from Google, helped us push the boundaries of what's possible. As student awareness grew, so did AI-enabled academic dishonesty, although cheaters remain a small minority by a large margin. All of this took place in an environment in which the public wanted schools to embrace AI despite dwindling funds. In short, 2024 was an eventful year for AI and higher education, and there was no shortage of topics to write about.
The Rise of AI Agents: A Game-Changer for Education
It’s time to look forward. In that spirit, here is what I see on the horizon for 2025.

First, I think 2025 may actually be the year of the AI agent (also known as agentic AI). 2024 saw plenty of hype around agents, but they haven’t really taken hold. That could change in 2025. Groups are working on standards and protocols that will allow AI agents to work together. Although these standards won’t be fully implemented in 2025, there may be enough progress to spur significant development.

AI agents are great in concept, but they’re tricky to implement. That’s why I think we’ll see agents offered as add-ons to learning management systems and enterprise systems. Imagine an AI agent that you can train to reliably apply a rubric to an assignment, provide developmental feedback, and post the grade, all without professor intervention. The agent could even flag submissions that require human review. This is well within the capabilities of current AI tools, but few of us have the time or expertise to build and test such agents. The big providers do, and they have incentives to develop these agents as add-on services. A parallel opportunity exists on the administrative side, though it will operate through enterprise system providers. There are technical and ethical challenges, but they can be addressed. I may be overly optimistic about how quickly AI agents will take hold, but I remain confident that they will become a reality.
Universities Still Playing Catch-Up
2025 will also see continued slow responses from universities. In my experience, most schools still don’t have coordinated responses to AI; most don’t even have institutional policies. Some schools are making good progress, but that progress often comes down to a small number of motivated individuals. When those individuals include someone in central administration, things can happen. Otherwise, progress occurs in isolated pockets, without coordination. This has to change, and soon.

The slow response isn’t surprising, and it’s not entirely unreasonable. An AI entrepreneur once told me, “Trying to build an AI app is like building a house in a hurricane.” The constant progress means you’re always aiming at a moving target. Universities face an equally turbulent environment, trying to build sustainable AI policies while the technology constantly evolves. That’s a tall order. Many decisions have to be considered and negotiated among parties with divergent interests and opinions. Anyone who has ever served on a university task force knows exactly what I mean.
The Challenge of "Unknown Unknowns"
One of the biggest challenges universities face with respect to AI is the huge number of “unknown unknowns.” It’s hard to fathom, but it’s been only a little over two years since the release of ChatGPT. In that time, there’s been a constant torrent of new tools and increased capabilities. To use a sports metaphor, it’s hard enough to skate to where the AI puck is; it’s almost impossible to skate to where the AI puck is going to be. (Apologies to Wayne Gretzky.) Schools are trying to figure all of this out against a backdrop of declining resources while still dealing with the aftermath of the COVID days.
Cross-Institutional Response: A Missing Piece
Coordinated responses across institutions are even harder to find. Accreditors are making noise about AI, but I haven’t seen any coordinated action, although I might just be out of the loop regarding the accreditors (and I’m thankful for that). Some disciplinary associations are trying to help; the Decision Sciences Institute is one example. But these responses are sporadic when they exist at all. Many (most?) faculty are on their own. We desperately need to share our experiences and responses to AI.
The AIGTC Learning Lab: A Collaborative Solution
This widespread need for coordination and shared resources has inspired a new initiative. In a few months, my AI Goes to College podcast co-host, Rob Crossler, and I will launch the AI Goes to College Learning Lab—a free, crowdsourced repository of learning activities and assessments. This repository will focus on two key areas: activities that effectively leverage AI and assessments designed to maintain academic integrity in an AI-enabled world.
The idea emerged during a recent podcast episode when Rob highlighted the critical need for faculty to share their AI-related teaching strategies. His suggestion of crowdsourcing these resources resonated strongly, and we decided to turn this vision into reality.
The details are still being worked out, but the basic plan is this: we’ll solicit learning activities and assessments from faculty, then make them available in a searchable repository. To be successful, the repository will need to reach critical mass. That’s where you come in. In a couple of months, we’ll send out a call for contributions. If nobody contributes, the project will die a swift death. But if you and enough of your colleagues contribute, we’ll all have a wonderful resource that can ease a lot of AI anxiety and effort. (To stay informed about Learning Lab developments, be sure to subscribe to AI Goes to College.)
The Reality Check: AI is Here to Stay
Perhaps most significantly, 2025 will be the year when higher education fully accepts that AI is here to stay. While some faculty remain in denial, viewing AI as just another educational fad, the reality is undeniable: AI’s impact on education will only intensify. So, in 2025, I think many faculty and administrators will realize that it’s time to suck it up and deal with the situation.

This will be very hard for many. Higher ed professionals, faculty and staff alike, are often overworked and overcommitted, and AI is just one more item on a never-ending to-do list. In the long run, AI may save us time, but that’s not going to happen right away. We have to make the initial investment in time, thought, and energy before we see the payoffs. One sad consequence is that I think we’ll see more than a few faculty leave higher ed over the stress and aggravation of dealing with AI. Based on what I’m seeing on social media, this is already happening. For those who can tough it out, though, AI will eventually pay off. How soon that will happen is anyone’s guess.

At the risk of being redundant, this is another reason we need coordinated action and resource sharing. Too many of us are reinventing the AI wheel over and over again. That has to change (and initiatives like the Learning Lab can help), or 2025 might be an even rockier ride than 2024.
Well, that’s all for this time. If you have any questions or comments, you can leave them below or email me at craig@AIGoesToCollege.com. I’d love to hear from you. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It’s available at https://www.aigoestocollege.com/follow. Thanks for reading!