Policies are often tricky things. It can be hard to get the right balance of flexibility and control. AI policies are especially challenging because of the rapid pace of change coupled with the emerging nature of AI uses. AI policies are often behind the curve of AI development, sometimes by a lot.
But, as I'll discuss next, universities need AI policies (note the plural). And you need these policies now (if not sooner).
Why do you need AI policies?
There are several reasons you need AI policies. Before getting to these, I want to address the plural (policies, not policy). Universities are complex organizations with many different functions. Unless you want an inadvisable policy such as an outright ban on AI, what is appropriate and what properly manages AI risk will vary across campus. The academic folks need different policies than the marketing folks, although there certainly can be common elements. Now let's look at the reasons.
Confusion about what's ethical/acceptable and what's not: I don't know about your schools, but at many institutions there's considerable confusion about when it's OK to use AI and when it's not. This confusion isn't limited to students. Faculty and staff are also confused. Having sound AI policies can help clear up much of this confusion.
Risk management: AI carries risk (as do most things). If you're at a state-supported school, you've probably taken the dreaded annual training (EVERY year!). In Louisiana, it's the better part of a full day to get through all of the mandatory training. One of the big reasons for this training is risk management. Maybe you don't need to be reminded not to use a rolling chair as a ladder, but someone, somewhere, didn't think through the risks, so now we all get to be reminded of this and dozens of other risks EVERY YEAR. These sorts of risks are pretty well understood, and schools usually have well-developed policies around workplace safety, sexual harassment, data security, and the like. AI is new territory. So, institutions need to develop good AI policies to mitigate and manage risk. I will admit that most of us won't be involved in developing these aspects of AI policies, but it's a good idea to understand the need to manage risk when developing department and even course-level policies.
What's ethical and what's acceptable may differ: What's acceptable GAI use for one class or academic task may not be acceptable for others. For example, using GAI to check for grammatical problems may be fine for a management class but unacceptable for a freshman writing course.
Run dual risk of non-use and unacceptable use: This is a less obvious but still critical reason you need good AI policies. In the absence of policies, people are left to their own judgment. Some people (students, especially) may be so worried about using AI inappropriately that they don't use AI at all, and universities miss out on potentially useful applications of AI. Others may go the opposite direction and use AI in ways the institution would consider unacceptable. Good policies guard against both failure modes.
Potential legal compliance issues (e.g., FERPA, HIPAA, etc.): Higher ed institutions are required to comply with numerous regulations and laws, several of which are especially relevant to generative AI. In the United States, the most obvious of these is FERPA (Family Educational Rights and Privacy Act), which governs the use of student information (among other things). But other regulations may also be affected by GAI. For example, improper use of GAI without adequate human oversight may subject schools to Title IX issues related to sex-based discrimination. Sound policies may reduce these risks.
Match policy to governance need
My first bit of advice is to have multiple AI policies. Appropriate use and types of risk will vary across areas, as will what a policy should cover. An obvious example is that academic policies should address academic honesty issues, but an IT policy probably won't need to.
At a minimum, schools need four different policies:
Academic policies that address issues related to teaching and learning activities
Research policies that govern the acceptable use of GAI for scholarly activities
Administrative policies that focus on the vast array of activities that address the business aspects of the institution
IT governance policies that address issues specific to information privacy and security, among other IT-related topics
The goal of all of these policies should be to achieve a good balance between risk management and innovation while promoting ethical use of AI. Also, schools should ensure that the various policies align with one another, which can be challenging.
So, those are my thoughts on how schools should view AI policies. But what makes a good AI policy? That, my friends, is a topic for next week. So, be on the lookout.
By the way, if you find AI Goes to College useful, you might want to check out the podcast. My co-host, Dr. Robert E. Crossler (aka Rob), and I discuss how AI is reshaping higher ed and what you can do to not only cope with the changes but leverage them to make your life easier. (I swear, it's possible.) Go to https://www.aigoestocollege.com/follow/ and subscribe on your favorite podcast app.