We need to help students play the long game with AI
Short-term thinking can short-circuit learning
The short-term thinking problem
Almost since the introduction of ChatGPT, there's been a lot of interest and (justifiable) concern about students using generative AI to cheat on their coursework. Leaving aside the question of what it means to cheat with generative AI, which I'll write about in a future issue, I think the key to dealing with this really isn't all that different from how we should approach cheating in general.
I'm going to way overgeneralize here, but in my experience there are three approaches to academic dishonesty: prevention, appeals to integrity, and fear of punishment. Faculty take steps to prevent cheating through measures like proctoring exams, creating different versions of exams and assignments, and the like. Honor codes and academic integrity documents appeal to students' sense of integrity. Finally, fear of punishment involves specifying the sanctions students face when they violate academic integrity.
These approaches are also being applied, to varying degrees, to dishonesty that involves generative AI, which is fine. We need to adapt them to a generative AI world, but that's a topic for another day. My message today is that we may want to consider an additional approach, one that's been underutilized in my experience: we need to convince students that academic dishonesty is contrary to their long-term self-interest.
You've probably heard (or even said) something like, "You're only cheating yourself." This cliché is often met with a roll of the eyes, but like many clichés, there's more than a grain of truth to the statement. When it comes to learning, cheating really does cheat oneself (although I don't think it only hurts the cheater). The problem is that educators (including me) have largely botched the job of helping students understand that learning is work and short-circuiting that work hurts them in the long run.
This is not entirely our fault, though. Higher education certainly has important long-term goals, but the system pushes students and faculty to focus on very short-term ones, such as grades on assignments and exams. So the system itself works against getting students to see long-term effects.
Most people naturally focus on short-term effects, and that general tendency is another barrier to getting students to understand the long-term consequences of cheating. When students cheat, they may believe they're acting in their best interests, and in some respects they may be serving their short-term interests. But that excessive focus on the short term blinds them to the long-term consequences.
There's an additional problem, one with a fancy name -- hyperbolic discounting. Hyperbolic discounting is our human tendency to prefer smaller immediate rewards over larger future rewards. The benefits of cheating are immediate, but the costs are off in the hazy future. So, in some respects, it's natural for students to think about shortcutting the learning experience. This situation is exacerbated (I just love that word!) by the fact that some of the potential negative consequences may seem unlikely to come to pass. We don't catch every cheating incident, so students sometimes figure they'll get "lucky" and get away with it -- and they often do.
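To make that concrete, here's a minimal sketch of the standard one-parameter hyperbolic discounting model, where the subjective value of a reward is V = A / (1 + kD). The discount rate and the reward and delay numbers are illustrative assumptions I've picked to show the shape of the effect, not empirical estimates:

```python
# A minimal sketch of one-parameter hyperbolic discounting (Mazur's model):
# the felt value of a reward shrinks with delay as V = A / (1 + k * D).
# The discount rate k and the reward/delay numbers below are illustrative
# assumptions, not measurements of real students.

def discounted_value(amount: float, delay_days: float, k: float = 0.05) -> float:
    """Subjective present value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

# A small, immediate payoff: a completed assignment tonight.
cheat_now = discounted_value(amount=10, delay_days=0)

# A much larger payoff -- actual skills -- but years away.
learn_later = discounted_value(amount=100, delay_days=4 * 365)

print(f"Cheating tonight feels worth:    {cheat_now:.1f}")
print(f"Skills in four years feel worth: {learn_later:.1f}")
```

Under these made-up numbers, a payoff ten times larger but four years away feels less valuable than a small payoff tonight. That's the trap: the math of how we feel about delayed rewards quietly favors the shortcut.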
Another problem is that many students haven't gained enough life experience to really understand long-term consequences. (I was in my mid-thirties before I realized the error of my younger ways.) It's just the nature of life to become more aware of long-term consequences when you've experienced more of them. Sure, there are young folks who are exceptionally insightful, and some of us old ones are pretty clueless, but the general tendency is for younger people to be less aware of long-term consequences.
Addressing the problem
So far, I've painted a pretty bleak picture, but I think it's accurate and it's been the reality for a long time. Generative AI just makes the situation more complicated for several reasons:
Generative AI use is difficult to detect reliably at scale
Generative AI is widely available and relatively easy to use inappropriately
Students often don't understand the line between ethical and unethical AI use
Students are getting mixed messages about AI use
Yeah, I know, that doesn't really tell faculty what should be done. I'm calling for a mindset shift away from detection and punishment and toward helping students understand the personal costs of cheating. This isn't easy, though. It involves several components.
Consistent messaging around the long-term consequences of taking learning shortcuts. Learning is often hard work, especially at more advanced levels, but that work is an investment that can pay huge dividends down the road.
An honest assessment of whether our learning activities really do help students learn. A number of years ago, there was a shift toward having lots of small, low-stakes assessments. There seemed to be some sound thinking behind that shift, but I wonder if it's time to reconsider. Students often see these activities as busywork, and sometimes they might be right.
Clear explanations of the learning value of various activities. In my long experience, when students buy into the relevance of an activity, they tend to be much more engaged and less likely to act dishonestly. If we've honestly assessed the value of our learning activities, we should be able to communicate that value to students.
Changing attitudes isn't easy, but I'm convinced that helping students take a long-term view of the work of learning will benefit them and us ... in the long run.
What do you think? Do you agree, disagree, or have a different idea? Let me know: craig@AIGoesToCollege.com.