Malleable metaphors for AI
"AI is like a colleague," I tell my students. "No, it's more like an intern," another professor counters. "Actually, it's just supercharged autocomplete," a tech skeptic insists.
The debate over the perfect AI metaphor rages on in faculty lounges and Twitter threads. But after months of experimenting with AI in my teaching and research, I've realized something: we're having the wrong conversation. Instead of searching for one perfect metaphor, we need to embrace the power of switching metaphors to match the moment.
When I'm developing complex ideas at midnight, AI is my ever-patient colleague. When I need research compiled, it becomes my diligent intern. And when I'm drafting routine emails, it's indeed that supercharged autocomplete. We need malleable metaphors—mental models that shift as fluidly as AI itself.
The metaphors we choose for new technologies shape how we use them. Think of how the "desktop" metaphor made personal computers approachable, or how calling the internet an "information superhighway" helped people grasp its potential. With AI, the stakes are even higher. The way we conceptualize AI—whether as tool, colleague, or something else entirely—directly influences how effectively we can harness its capabilities.
That's why the metaphor debate matters. But here's the twist: we might not need to choose just one.
Two metaphors
Recently, I’ve seen a metaphor gaining popularity: AI as an eager-to-please intern. I’ve heard or read this several times in the last few weeks. At first, I thought, “Nope, I’m sticking with my colleague metaphor.” But the more I thought about the intern metaphor, the more sense it made. A diligent, hardworking intern can be a huge help, but you have to know how to work with interns. Three keys to working with an intern successfully are understanding their capabilities and limitations, giving clear instructions, and providing oversight. Those same things are critical to using AI effectively, which is why the intern metaphor makes a lot of sense.
But I’m still not entirely comfortable with the intern metaphor. Much of the time I’m using AI, it feels more like a patient colleague. “Intern” just doesn’t fully capture how I use AI.
As I mentioned earlier, my dominant metaphor for AI is that of an infinitely patient, very knowledgeable polymath colleague. My AI colleague knows a lot about a lot. I actually have some human colleagues like this; they just seem to know about everything. What I don’t have is human colleagues who are willing to spend hours and hours helping me develop ideas regardless of the time. Even my most generous colleagues would not be pleased if I called them at 3 a.m. to get feedback on a sentence. Interns, on the other hand, lack the experience to help develop nuanced, complex ideas. Much of what I do is pretty complicated and beyond the understanding of even the most accomplished intern (in my experience at least). But I do offload grunt work to AI, which is something I might do with an intern, but not with my colleagues.
For some time, I struggled with which metaphor to use. Both metaphors make a certain amount of sense, but neither one perfectly fits how I use AI. Then it hit me — I was asking the wrong question. The great thing about AI is that it is insanely versatile. The range of tasks AI can help with is seemingly endless. Some tasks are mundane (intern) work and others require great expertise and nuanced reasoning (colleague). I even use AI as that funny friend to feed me new dad jokes. (OK, funny to me at least.) AI is adaptable, so our metaphors need to be as well. The solution is malleable metaphors (because I love alliteration). Use the metaphor that is the best fit for your current task. Sometimes it’s the intern, sometimes it’s the colleague. As you switch between tasks, switch between metaphors.
Malleable metaphors while writing
Let’s look at an example: writing an article. The process I describe below is typical of how I use AI when developing articles for AI Goes to College (and sometimes my scholarly articles as well).
The first way I use AI is to refine my article idea. Often, I’ll briefly describe my idea and the purpose of AI Goes to College, then ask Claude or ChatGPT what they think. Then the chatbot and I go back and forth as we collaboratively refine the idea. Sometimes I take this a step further and get AI’s help refining my outline. For these tasks, I’m solidly in my colleague metaphor. My prompts are often a little vague (literally, “What do you think?” or “Does that make sense?”), and we just iterate during the conversation. This is exactly how I interact with colleagues.
Once the idea and flow of the article are set, it’s time for research. I often want to fact-check a claim or find sources to back it up. For that, I turn to the intern metaphor. My prompts get more detailed, with more precise instructions, just as they would with an intern. I also resign myself to a few rounds of refinement, usually attributable to missing details in my instructions (prompts). And I make sure to carefully check anything my AI intern tells me by going to the original sources. This sounds like a lot of work, but in reality it’s pretty efficient. I don’t have to go out and find the sources myself, so I save a lot of searching and filtering.
As the article starts to take shape, I go back to my colleague metaphor. My AI colleague helps me polish the article by pointing out confusing sentences, circuitous flow, and leaps of logic. Lex.page is great for this phase, though I also use ChatGPT and Claude.
This is a bit of an idealized description. The reality is usually messier, with considerable back and forth and switching between chatbots, metaphors, and mental models. The good news is that all of this happens without much conscious thought about the right metaphor; the switching happens more or less automatically. Eventually the metaphors come without having to think about them, but early on you may need to put some deliberate effort into developing and choosing them.
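If you’re the scripting type, you can even make metaphor-switching literal. Here’s a minimal sketch in Python that gives each metaphor its own system prompt. I’m assuming the OpenAI Python SDK here, and the persona wording, model name, and example prompts are just illustrative choices on my part, not a prescription.

```python
# A minimal sketch of "malleable metaphors" as system prompts.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in your environment; the persona wording and
# model name below are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()

# Each metaphor becomes a different system prompt.
PERSONAS = {
    "colleague": (
        "You are a patient, knowledgeable colleague. Think out loud "
        "with me, ask clarifying questions, and push back on weak ideas."
    ),
    "intern": (
        "You are a diligent intern. Follow instructions precisely, "
        "cite your sources, and flag anything you are unsure about."
    ),
}

def ask(persona: str, prompt: str) -> str:
    """Send one prompt under the chosen metaphor."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Colleague for idea development, intern for grunt work.
print(ask("colleague", "Does this outline on AI metaphors hang together?"))
print(ask("intern", "Find three sources on how metaphors shape technology adoption."))
```

The code isn’t really the point. The point is that each “metaphor” boils down to a different set of expectations and instructions, whether you type them into a chat window or bake them into a script.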
Choosing the right metaphor
The most important thing about choosing the right metaphor is understanding the metaphors you’re already using. Admittedly, this can be tricky, since metaphors often operate at a subconscious level. The next time you’re using AI, reflect on how you’re thinking about it. What guides your prompting and the conversations that follow? Give it some thought and you’ll start to uncover your operating metaphor. Do this whenever you switch to a different kind of task, and you should start to see different metaphors emerge.
Once you’ve uncovered your basic metaphors, think about whether they’re serving you well. Maybe your metaphor is a super-intelligent guru, and that leads you to accept AI’s responses too uncritically (in other words, you’re not sufficiently skeptical). Maybe you need to adjust that metaphor.
My guess is that you won’t have to do this sort of conscious thinking about metaphors for very long. You’ll settle into a nice pattern of smoothly switching between metaphors as you move from task to task. That’s what happened with me. This article aside, I don’t think about metaphors much at the conscious level now; they just sort of flow, which is nice.
Final thoughts
I know this is kind of esoteric stuff, but I’m convinced that a little thinking about metaphors will improve your use of AI. Once you have your metaphors sorted out, help your colleagues or students boost their AI use by encouraging them to consider their own AI metaphors. An easy way to do this is to share this article with them (hint, hint).
If you have any questions or comments, you can leave them below or email me at craig@AIGoesToCollege.com. I’d love to hear from you. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It’s available at https://www.aigoestocollege.com/follow. Thanks for reading!