I told you so: It turns out I was right about prompting
Here’s a big secret about generative AI (GAI): it’s not hard. You know how to have a conversation, right? You know how to ask someone for help? (OK, that one’s not easy for some of us.) You know how to text? If so, you already have the basic skills to use GAI.
Validation from the experts
Professor Ethan Mollick, one of the leading voices in GAI, recently wrote a column for his newsletter, One Useful Thing, arguing that there’s no need to be a “prompt engineer” to use GAI effectively. I’ve been making the same argument for some time: in May, I wrote an article arguing against the term “prompt engineering.” (By the way, you should subscribe to One Useful Thing. It’s, well, useful.)
Why “engineering” is the wrong word
My beef is with the word “engineering.” Engineering has two characteristics that do NOT apply to GAI. First, engineering is precise and repeatable: run the same data through the same equations 1,000 times and you get the same answer every time. GAI is not like that; give it the same prompt twice and you may well get two different answers. For most of us, it’s much more of an art than engineering. Second, engineering is hard. Becoming an engineer takes years of hard study and practice; you can create something useful with GAI as an absolute beginner. If you need to automate some complex process with the help of GAI, you might need to be a bit of a prompt engineer. The rest of us just need to play around and figure out what the tools are good at and where they’re limited.
It’s good when the gurus agree with you, and Dr. Mollick and I are on the same page when it comes to learning GAI. He uses the term “good enough prompting”; I just call it playing around and paying attention. (More seriously, my colleague Dr. France Bélanger and I coined the term “application play” to express the idea of learning by playing around.)
Finding the right mental model
Dr. Mollick and I also agree on something else. He thinks we should “… treat AI like an infinitely patient new coworker who forgets everything you tell them each new conversation, one that comes highly recommended but whose actual abilities are not that clear.” (That’s a direct quote.)
As I’ve been saying for some time, one of the reasons I tend to be polite to GAI is that it reinforces my mental model of the AI chatbot as an infinitely patient, knowledgeable colleague. Ethan takes things a couple of steps further, but we’re on the same page. I’ve also heard others describe GAI as a hardworking intern who wants to please and has skills, but needs a lot of direction and review. Hey, whatever works. We all have to find the analogy or mental model that works for us.
Keep it simple
The bottom line is simple: learning and using GAI effectively is not hard. Over the last couple of years (ChatGPT just turned two years old, by the way), I’ve spoken to hundreds of people who were nervous about trying to learn and use GAI. My consistent message is that it really isn’t that hard. Yes, you need to pay attention to learn its capabilities and limitations, and there will be successes and failures along the way, but none of that makes it difficult. Play around, pay attention, review GAI’s output carefully, and you’ll be fine. The last section of Ethan’s article is headed “Don’t make this hard.” I couldn’t agree more.
Note: You really should read Dr. Mollick’s article. He goes into some depth on how to engage in “good enough prompting” and offers several useful bits of advice.
If you have any questions or comments, you can leave them below or email me at craig@AIGoesToCollege.com. I’d love to hear from you. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It’s available at https://www.aigoestocollege.com/follow. Thanks for reading!