The Big Announcement
Imagine half a million students, faculty, and staff being handed a shiny new AI tool and told “go learn” with no map and no compass. That’s where higher education might be heading. A few weeks ago, the California State University system announced a deal with OpenAI to provide ChatGPT to over 500,000 students, faculty, and staff. Unsurprisingly, the announcement got a lot of attention.
Although I have no inside knowledge, my guess is that there will be many similar announcements over the coming year. In fact, I'm surprised we haven't heard more from Google about these sorts of deals, given Google's entrenchment in higher education and in education generally.
There's a real danger in deals like the one between CSU and OpenAI. Are schools and systems investing too much without really understanding the implications of generative AI (GAI) for higher ed? In my view, the clear answer is yes, in most cases. The clearest indication is a CSU spokesperson's response to a question about what other expenses, beyond the $17 million going to OpenAI, CSU would incur. The response is telling:
“Additional costs have not been quantified.”
Yikes! This makes the CSU deal sound very much like ready-fire-aim: the investment seems to have been made without clear, actionable goals and, more importantly, without a reasonable implementation plan. Seasoned faculty like me have seen this before. Administration becomes enamored with some new thing (a technology, a teaching method, a curriculum) and imposes it on faculty without thinking through how to actually implement it to accomplish some goal. In part, this happens because there aren't clear goals. I'm not going to go into why this happens, beyond saying that it's easy to get caught up in the latest trendy thing, especially if you think your peer institutions are using it. We can't fall behind, can we?
If CSU has clear goals and a well-thought-out implementation plan, then good on them. But decades of experience lead me to bet that this isn't the case. Here's how this often plays out.
The Rush to Adopt: A Familiar Pattern
Universities get a new thing. There are lots of proud announcements, followed by a cascade of meetings. Higher-level administrators meet with their charges and tell them to use the new thing. Deans and department chairs meet with faculty and tell them to figure out how to use the new thing. Fast forward a few months and the criticisms begin: mostly criticisms of faculty for not using the new thing enough, along with the typical complaint that faculty aren't open to new ideas.
The same sort of thing happens on the staff side. I've seen it, and been part of it, more than once. It's really pretty simple. When changes are well thought out, competently planned and executed, and tied to clear goals, the change usually meets those goals. When any of those elements is missing, failure and recriminations follow.
We're on the cusp of this happening with generative AI, "this" being the poorly thought out, knee-jerk actions, not the carefully planned, thoughtful variety. I can smell it in the air. Other schools are going to point to CSU, Arizona State University (which inked the first major OpenAI-higher ed deal), and those that follow as evidence that they too should take a similar path. In most cases, this will not go well.
I'm painting with a broad brush here. In reality, it will be a relatively small number of institutions that go down this path. Most of the administrators I've known have been thoughtful people who cared about their schools and the people who work and learn in them. But more than a few schools will be caught up in the frenzy and will act before thinking sufficiently. The results will not be pretty.
Faculty will be blamed when these efforts go wrong, and faculty WILL be responsible, but not at fault. Unless administrations provide the right leadership, guidance, and resources, generative AI will not live up to its potential to increase learning effectiveness and efficiency, yet faculty will be blamed for those failures.
Beyond Cheerleading: What Faculty Really Need
The current landscape is filled with AI cheerleaders but lacks practical guidance. Instead of just showcasing "cool new tools," institutions need to demonstrate how GAI can address what faculty actually care about: saving time and improving student learning.
Successful GAI initiatives require:
Clear leadership and specific goals
Recognition of faculty expertise
Adequate time for learning and implementation
Practical, hands-on training and support
Simply providing access to GAI tools without these elements ignores the reality of already-overstretched faculty schedules.
What does this all mean? What's my goal here? It's obvious that I'm a huge proponent of generative AI in higher ed. But I'm also a pragmatist with more than a few battle scars. I can see a future in which GAI brings significant benefits to higher ed, but splashy announcements and "here, go use this new expensive thing" approaches won't get us there. We shouldn't slam on the brakes when it comes to GAI, but we shouldn't push the accelerator to the floor either. Steady progress through well-thought-out plans will bring the future we need and deserve. Will higher ed steer AI toward real progress, or just more headlines? Time will tell.
Want to continue this conversation? I'd love to hear your thoughts on how you're using AI to develop critical thinking skills in your courses. Drop me a line at Craig@AIGoesToCollege.com. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It’s available at https://www.aigoestocollege.com/follow.
Looking for practical guidance on AI in higher education? I offer engaging workshops and talks—both remotely and in person—on using AI to enhance learning while preserving academic integrity. Email me to discuss bringing these insights to your institution, or feel free to share my contact information with your professional development team.