Do students know when they're using generative AI unethically? I'm not sure they do. Yeah, if they get ChatGPT to write a 500 word essay and then turn it in as their work, they know they're cheating. But what if they get Grammarly to check their grammar or ask Claude to help them reword an unclear sentence? Is that cheating? That's not so clear.
This, of course, is a problem. Currently, we have a situation in which unethical students use AI unfettered, while many conscientious students are terrified of using AI because they don't know what's appropriate and what's not. This is an untenable situation, one that we need to address if we're going to get to a world in which AI enhances rather than hinders student learning.
This really hit home a few months ago. I was at the University of Louisiana System's annual conference. During a panel, I heard the dean of libraries for one of the schools say that students don't understand when they're plagiarizing. She went on to say that there are many different types of plagiarism and that students really don't know where the line is with some of them. For example, most students know that direct plagiarism (lifting substantial portions of another person's work and putting it into a document without indicating the source) is wrong. But many may not understand that taking ideas from multiple sources and mixing them together without proper attribution is also plagiarism. (By the way, this is called mosaic or patchwork plagiarism.) The dean made a pretty compelling case that students really don't know when they plagiarize ... sometimes.
This made me think about the situation with generative AI. If students don't fully understand plagiarism, how in the world can we expect them to understand the line between appropriate and inappropriate use of generative AI? (As an aside, I'm not sure many faculty really know where this line is. In fact, I'm not sure that the line exists. More on that later.)
Now I'm pretty sure that most students know that some uses of generative AI are unethical and unacceptable. For example, entering an assignment into ChatGPT and then using the response verbatim as your answer is clearly inappropriate, and students know it. (One exception might be if the entire point of the assignment is to see how generative AI would respond to a specific prompt.) There are also uses that are clearly acceptable under most situations, such as brainstorming ideas for a paper.
But there's a whole lot of gray area between the clearly acceptable and the clearly unacceptable. And that's where the problems come in. There are many factors that come into play in those gray areas.
An ethics activity
In my principles of information systems class, I do an in-class activity where I give a number of scenarios of generative AI use and ask students whether the use is ethical or unethical or if they're unsure.
Here are four of the scenarios I use. What's your opinion? Ethical, unethical, or are you unsure? Scroll to the end of the article to see how the students responded.
Scenario 1: You enter some text your friend wrote into ChatGPT and then ask it to paraphrase the ideas. You use the paraphrased ideas in a report you’re writing for class.
Scenario 2: You’re working on a report for your economics class. You’ve written a paragraph to explain the main argument you’re trying to make in the paper. You want to make sure that it’s clearly written and that your argument makes sense. You ask ChatGPT to critique the paragraph regarding clarity and completeness. You use the critique to improve the paragraph.
Scenario 3: You’re writing a report for a marketing class. Your friend took the same class last year. You enter some text your friend wrote in their report into ChatGPT and then ask it to paraphrase the ideas. You use the paraphrased ideas in the report you’re writing.
Scenario 4: You’re preparing for an essay exam in a CIS class. Your professor gave you a list of potential essay questions so that you can prepare. You enter the questions into ChatGPT and ask it to generate a 500-word essay for each question. You study these answers as you prepare for your exam.
The core problems
Why is it so hard for students to know what's acceptable and what's not? I think it comes down to three core problems.
Generative AI is so incredibly flexible that it's impossible to give students a case-by-case set of rules to follow. For many other areas of academic honesty, we can give much clearer guidance than we can with generative AI. With AI, there are endless applications, so we simply cannot give students definitive guidance to cover every possible scenario.
Students may be receiving inconsistent messages and guidance. Some faculty might have outright bans on use. Others might take a carte blanche approach, giving students free rein to use AI however they'd like. Many faculty don't give students any guidance at all. It's no wonder students are confused. To be fair, some of this inconsistency may be appropriate. What's acceptable in one class might be unacceptable in another. Even within a single class, what's ethical might vary from one assignment to the next. It's a confusing landscape.
The novelty of generative AI is also a factor. Plagiarism has been around a long time; generative AI, not so much. Many students still don't understand the nuances of plagiarism, so we shouldn't be surprised that they don't really know how to use AI ethically. We also shouldn't be surprised that many faculty are still struggling with what guidance to give students around AI use.
As I said, it's a confusing landscape. So, what can faculty and administrators do to address the problem?
What we can do
Here are some suggestions about what we can do to help students better understand how to use generative AI ethically.
Set expectations: We need to be clear about what the rules are for our courses. In this case, I'm not a fan of universal policies across an entire university, college, or even department. Each faculty member needs to decide what's appropriate for their course (or for individual assignments). My college is using a framework approach in which faculty are required to address certain areas in their syllabi, but the college isn't specifying the exact guidelines.
Help students understand underlying principles: This is a bit of a tall order since there are so many principles that we could apply here. So, I try to give students one "rule" and four questions they should ask themselves when considering the use of AI for coursework:
The rule: Use AI as a tool to enhance your understanding, not as a substitute for your own learning and work.
Four questions:
Am I working with AI or is AI doing the work for me?
Am I taking a long-term view of my learning by using AI this way?
Do I feel good about my use of AI?
Am I harming myself or others by my use of AI?
Give students some grace: Look, we're all trying to figure this out. Be a little kind to students who are struggling to determine the right ways to use AI. Yes, we need to enforce academic integrity; that's critical. But we also need to be a little understanding when students make mistakes. Use errors in judgment as learning opportunities, not excuses for retribution.
What do you think? Leave a comment below to share your thoughts.
Student responses
Scenario 1: Ethical: 10%, Unethical: 78%, Unsure: 18%
Scenario 2: Ethical: 97%, Unethical: 3%, Unsure: 0%
Scenario 3: Ethical: 3%, Unethical: 86%, Unsure: 11%
Scenario 4: Ethical: 48%, Unethical: 4%, Unsure: 48%