Last week, I shared my thoughts about AI policies and why you need one. In this article, I discuss what makes a good policy and advocate a framework approach to institutional policies. Even if you're creating a policy for your own courses, understanding the framework will help you create a more effective one.
Characteristics of a good AI policy
What makes a good AI policy? I have some thoughts. The first characteristic of a good AI policy is that it should be clear. This should go without saying, but we've all read policies that are jargon-laden, incomprehensible messes. If you want students or workers to follow a policy, make it clear.
The policy should also be accessible. This is another one that shouldn't need to be said. But as policies proliferate, they become harder and harder to find. More than a few times, I've spent inordinate amounts of time searching for a policy. (And don't get me started on mythological policies that people say exist, but don't in reality.) If you're a faculty member creating a policy for your classes, be sure to feature it prominently on your learning management system site or whatever you use for such things. Also, go over the policy with your students to make sure they 1) know it exists, and 2) know where to find it. There are SO MANY policies in today's syllabi that many students don't even bother reading them. (Do you read every policy your school releases?)
A good AI policy should also balance clarity and flexibility. I'm not saying that your policy's rules should be flexible (although that's fine); I'm saying that you need to create your policy so that it can be adapted as AI advances and new uses emerge (and they will). You probably won't want to make a lot of changes during a term, but you want to be able to easily adapt your policy from term to term. The framework I'll discuss below should help with this.
A good policy also provides details about common use cases relevant to a course or department. For example, the university marketing department's policy should address whether it's acceptable to use AI to generate images. A policy for a class that requires writing should clearly specify whether it's OK to use AI to check grammar.
You should also make sure that your AI policy aligns with institutional policies, mission, and values. Conflicting policies are a nightmare for diligent students and can cause significant difficulties for staff members. So, if your institution has AI policies, make sure that yours doesn't contradict them. Misalignment with mission and values can be subtler to detect, but it can still cause problems, one of which is making your policy difficult to enforce. If your policy runs contrary to an institutional value, you may find that administrators or review committees are reluctant to back you up when violations occur.
The framework approach
What's the best way to put all of this together? My advice is to use a framework approach, especially if you teach multiple courses. Even if you're creating an AI policy for an administrative area, the elements of the framework below should be useful in developing a solid policy.
There are two big benefits to using a framework. First, the framework is adaptable not only to different contexts but, more importantly, to advances in AI. Regardless of what new AI tools and techniques emerge, it's likely that you'll still want to cover the areas included in the framework, so updating your policy is a matter of evaluating each part of the framework in light of new developments. Second, the framework gives you a solid structure for your policy and helps ensure that you cover the essentials. (Is that three reasons?)
Elements of a good framework
Here's my suggested framework. There may be others, or you may find it useful to adapt my framework. As long as you have a framework that meets your needs, you'll be fine.
Acceptable and unacceptable uses: This may be the most important part of your policy operationally. Clearly state what is acceptable and what is unacceptable. This is easy to say and challenging to do in practice. Just do the best you can to anticipate possible uses, but don't stress out too much over being absolutely comprehensive.
Disclosure requirements: Indicate what AI use disclosures you require. It can be useful to tie disclosure requirements to particular types of use. For example, you may not require students to disclose using AI to generate ideas, but you might require disclosure when they use AI to help structure a paper. I strongly suggest a catch-all statement along the lines of "All uses of AI not specifically addressed should be disclosed." Also indicate the required form of disclosure. Often, a simple statement is sufficient, but you may want something more formal. Just be clear.
Privacy and confidentiality: It's a good idea to specifically indicate how privacy and confidentiality should be protected. This is trickier than it sounds in some cases. Consider team projects. Is it OK to upload a team member's work to get feedback from AI? Maybe, maybe not. Consideration and consent are the keys here. Require students or workers to consider privacy implications before sharing any information. (Yes, this is hard to enforce.) If they are sharing any data about others, require that they get the other party's consent.
Responsibility/accountability: Include a clear statement about accountability, such as, "Remember that you are ultimately responsible for any work you produce with AI. AI is subject to error, so be sure to verify any information before using it." This is a good way to remind people that their work is THEIR work. "AI made me do it" is not a valid excuse.
The areas above are what I view as absolute requirements for a useful AI policy, but there are some other areas that you should consider to improve your policy.
Ethical considerations: AI raises substantial ethical issues, including bias, fairness, and equity. Depending on the situation, you may want to discuss some of these issues in your policy. For many administrative uses, your policy should address bias at a minimum. This is especially important in areas such as recruitment, admissions, and human resources.
Risk management: This is especially important for administrative policies. Inappropriate use of AI can expose the institution to significant risk, so I advise discussing risk management in administrative AI policies. Even if all you do is acknowledge that AI poses institutional risk, your policy will be stronger. You may also want to specify which uses require human review, and be sure to indicate who is responsible for conducting that review.
Governance structure: If you're working on an institutional or college-level policy, it should also discuss an AI governance structure. Often it is enough to charge an existing group, such as a curriculum committee, with AI governance. An important part of this governance is keeping the policy up to date and assigning clear responsibility for ongoing oversight.
Conclusions
Well, that's enough about policy. The coming academic year will bring increasing AI awareness and use, so if you don't have a policy, get to work! I'm happy to help. If you'd like my assistance or have comments on this article, email me at craig@AIGoesToCollege.com.
Remember, you can check out the AI Goes to College podcast, which I co-host with Rob Crossler at https://www.aigoestocollege.com/follow.