Time to end the “grade economy”
Like it or not, AI is forcing higher education to take a hard look at itself. Students are routinely using AI to help with their coursework, often inappropriately and in ways that hurt their learning. So far, much of the rhetoric has been focused on blaming the students. It’s time to knock that off and acknowledge that we (higher ed professionals) have created the conditions that led to our current state. We also need to stop pretending that AI created cheating. Cheating has been around as long as grades. Just a few years ago, there was a Chegg crisis. AI just accelerated what was already occurring.
The core problem is that we’ve created a grade economy based on trading effort for grades. Work harder, get better grades. Sure, some students are more academically talented than others, so costs are not equally distributed, but for decades our core message has boiled down to trading effort for grades. Effort is the currency and the grade is the good. As I’ve written about before, this has created a transactional mindset among students. Once you set up that exchange, a rational economic actor will minimize the input (effort) to get the desired output (grade). I’m currently shopping for a new vehicle for my wife. I’m looking for the best deal … the price at which I exchange the lowest amount of currency for the vehicle I want. Why are we surprised that students are doing the same thing?
The transactional mindset brought on by the grade economy is unnatural, artificially constructed and contrary to true learning. In my younger days, I worked really hard to become a better basketball player. Nobody gave me grades. I learned proper technique, footwork, even strategy because I wanted to learn, not because of some artificial point system. Kids do this naturally. Just watch a child trying to figure something out. They learn because they are internally motivated to learn, not because they get a grade. Students who are motivated to learn are far less likely to use AI inappropriately, especially if we help them understand how to use AI in ways that enhance their learning.
I teach a combination of doctoral seminars and undergraduate core classes. Unsurprisingly, my seminars look very different from my undergrad courses. The seminars have weekly synthesis papers, a major paper, and an exam that is really practice for their comprehensive exams. I give extensive feedback on each of these work products but I DO NOT grade them. They don’t get points, they get feedback. Yes, I have a grading scheme in my syllabus and it gets applied for the final grade, but I don’t maintain a grading spreadsheet. This works because these are highly motivated students. They know that if they don’t learn what I’m trying to teach them, they won’t be as successful as scholars. (They’ll also have a rough time on their comprehensive exams.) Of course, I assign a course grade, but this is based on my professional evaluation of their learning, not a calculation based on a bunch of graded activities.
My undergrad course is very different. It’s much larger, typically 80-90 students instead of 3 or 4. Also, there are lots of components … assignments, projects, quizzes, exams … all of which are graded. I keep careful records, have strict grading criteria, all of the normal things professors are taught to do. I think most students learn the material reasonably well, but in many cases, the learning is a byproduct, not the main thing. The grade is the main thing. (I might be exaggerating a little here, but not by much.)
If you teach even moderately large classes, your arrangement is probably similar to mine. For the last 10-15 years, the higher ed mantra has been “many low stakes learning assessments.” So we have lots (LOTS) of little things for students to do; often students don’t really understand how these help their learning. (Perhaps because they often do not.)
Differences in how the students use AI
It’s no great surprise that my doctoral seminar and undergrad core class are set up differently, so why bring this up? In my doctoral seminars, I not only don’t police AI use, I actively encourage it, going so far as to spend time explaining how the students can use tools like Notebook LM and Lex.page to enhance their learning and skill sets, often by showing them how I use these tools. In my undergrad course, I spend quite a bit of time on AI, but about half of this is focused on appropriate and inappropriate use. Also, I specifically design some assignments to be AI resilient and am constantly on the lookout for inappropriate use. I don’t have to do that in my seminars. For the undergrads, I’m highly concerned that without my vigilance (or despite my vigilance), students will slip into inappropriate use, especially when they feel squeezed for time.
There are two interrelated core differences at play—differences in motivation and differences in structure. The doctoral students are highly motivated compared to many of the undergrads (especially since my class is a business core class). Grades are an afterthought and administrative necessity in the seminars, while in the undergrad courses grades are an integral part of the structure and philosophy of the course. (The doctoral students are also more mature, but I’m not sure that makes a huge difference here.)
These two forces lead to clear differences in how the students use AI. My doctoral students use AI as a tool to help them learn. Many of my undergrads use AI to reduce effort. The differences are stark. I’m sure more than a few of my undergrads are using AI to help them learn, but I’m just as sure that saving effort is more often the goal. Addressing the grade economy problem requires shifting undergrad students and courses towards the motivated, grades-light reality of my doctoral seminars. In this article, I’m going to focus on the grades aspect; the motivation issue also needs to be addressed, but we’ll save that for another day.
Grades serve legitimate purposes. They credential learning, provide feedback, create accountability to external stakeholders and motivate students. But somewhere in the pursuit of ‘fairness’ and ‘objectivity,’ we reduced these complex functions to a simple transaction: effort in, grades out. Points for participation. Points for attendance. Points for doing the reading. We built an elaborate system of exchange that treats learning as incidental to the accumulation of points. We do not need to abandon grades completely, but we do need to rethink how we leverage grades to improve learning. Otherwise, we’ll remain mired in the grade economy.
Start the conversation
Individual faculty tweaks won’t entirely solve the grade economy problem. I can remove grades from some activities, but I can’t change the fact that my students are still operating in a grade economy across all their other courses. Real change requires rethinking how we credential learning at the institutional level.
But now is the time for this conversation. AI has broken the fundamental exchange mechanism the grade economy depends on. Students can generate effort-equivalent outputs without the effort. The system simply does not work anymore. That creates the conditions for serious rethinking, but only if we’re willing to have the conversation.
Talk to your colleagues. Name what’s happening. Push your department chair and your dean to put this on the agenda. The grade economy is crumbling whether we address it intentionally or not. We can either design what comes next or watch it continue to collapse.
While we work towards systemic change, individual faculty can make changes that acknowledge the reality rather than pretending it doesn’t exist. I’m going to make two such changes to my undergrad class. First, I’m going to lean into AI by teaching students how to use it to learn. AI-enabled activities will replace my current “small stakes” activities. I’ll give feedback on the new activities, but I won’t grade them. Students will (hopefully) be motivated to put effort into these activities for two reasons: being able to use AI to learn is an important skill, one that will serve them well throughout their lives, and the activities will help prepare them for the remaining learning assessments. Second, grades will be based on a small number of higher-stakes assessments that are either in-class or AI-resilient. I’m betting that the combination of skill value and rational exam preparation outweighs the pull of the grade economy. Will this work? Frankly, I’m not sure. But it’s worth trying.
Want to continue this conversation? I’d love to hear your thoughts on how you’re using AI. Drop me a line at Craig@AIGoesToCollege.com. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It’s available at https://www.aigoestocollege.com/follow.
Looking for practical guidance on AI in higher education? I offer engaging workshops and talks—both remotely and in person—on using AI to enhance learning while preserving academic integrity. Email me to discuss bringing these insights to your institution, or feel free to share my contact information with your professional development team.