The Problem with Prompt Engineering
Struggling with complex AI prompts? You're not alone. Many educators overcomplicate their AI interactions, but there's a better way: start simple and iterate. The most effective users of AI aren't prompt engineers following rigid formulas; they're educators and professionals who start with simple goals and iteratively refine their approach until they achieve the desired results. Prompts often fail not because they're bad, but because they're over-engineered. I'm not a fan of "prompt engineering," either as a term or as a process. Highly engineered prompts have their place, but most of us have little need for them. In fact, engineered prompts can lead to poor results, which is counterintuitive and counterproductive.
Understanding Gall's Law
One explanation for this comes from Gall’s Law:
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
I ran across Gall's Law on the Hacker Laws site via the Recomendo newsletter, both of which I highly recommend. Basically, Gall's Law implies that it's folly to try to fully specify a system in advance; it's much better to start simple and then allow the system to evolve. This is especially true for generative AI (GAI) because of its non-deterministic nature (results aren't regular and predictable).
The Power of Iteration
Writing "mega-prompts" often fails because it's so difficult to get everything right in the initial specification. You're much better off working iteratively, starting simple and building from there. One reason for this, especially with GAI, is that you don't know exactly what you want until you see it, or, more commonly, until you see what you DON'T want.
We’ve all had conversations like this. Consider a student meeting with their academic advisor about choosing a major. The student might say, "I want to do something that helps people." The advisor suggests education, and while the student realizes that's not quite right, it helps them articulate: "I want to help people, but more on the mental health side." This leads to a discussion of psychology, counseling, or social work as potential majors. The initial suggestion, though not the final answer, helped the student clarify what they were actually seeking.
GAI works the same way, and this sort of thing happens daily for me. I have an idea and ask GAI to help me refine it. The initial suggestions aren't quite right, but they start me down the correct path, eventually leading to a much more refined idea, typically through additional back-and-forth. In fact, I used iterative prompting to refine the paragraph above. My initial draft wasn't quite right, so I asked Lex.page for some help. Its first response wasn't what I was looking for, but after a couple of iterations, we (note the "we") landed on the advising example.
From Theory to Practice: Academic Examples
Consider these common scenarios in higher education. Designing a course? Don't try to create a perfect prompt for an entire syllabus; start by asking GAI to outline key learning objectives, then iterate to refine them and develop assessment strategies and the schedule. Want to use GAI to draft a recruiting letter? Start with the main message and expand, adding details and setting the tone. Refining a research idea? Start by asking for simple feedback, then iterate by asking for critiques of specific aspects of the idea, such as scope, feasibility, and novelty.
One thing that makes iterative prompting so effective is that it mimics how humans think and operate. We rarely do anything deep or creative in a single shot. We iterate, going down different paths, refining and expanding until we get the desired result. The early results help us better understand what we really want. Along the way, we add or relax constraints, add details, and tweak structure.
Putting It Into Practice
Let’s look at an example of how this might work. Suppose I want to create a class module on how to use GAI ethically for coursework. Here’s my initial prompt:
What's a ;tldr for college students who want to use generative AI (GAI) ethically?
;tldr is short for “too long, didn’t read,” which is a signal that I want something concise.
GAI (Claude in this case) replied with a list of seven “rules” for ethical GAI use.
Ethical GAI Use for College Students: TLDR
Understand AI's limitations: Know what GAI can and can't do reliably
Cite AI properly: Acknowledge when you use GAI in your work
Follow your institution's policies: Check your school's specific guidelines
Use GAI as a tool, not a replacement: Let it assist your thinking, not substitute for it
Verify information: AI can make mistakes or produce "hallucinations"
Maintain academic integrity: Don't use GAI to bypass learning or to plagiarize
Develop your own skills: Continue building critical thinking and writing abilities alongside AI use
Would you like more details on any of these points?
That's too many. But I didn't really know how many I wanted until I saw the list of seven. It seemed much too long, so I responded with this:
That's too many. Students won't remember them in the moment. Condense these into no more than three rules. Try to capture the spirit of the entire list.
Claude gave me a nice list:
Three Essential Rules for Ethical GAI Use
Cite and verify: Always acknowledge when you've used AI and verify the information it provides before submitting
Use as a tool, not a substitute: Let AI enhance your thinking process, not replace it
Follow institutional policies: Know and respect your school's specific guidelines on AI use
These rules cover transparency, maintaining academic integrity, and institutional compliance while supporting your learning journey.
This looked fine, so I moved on to the next part of the task: learning objectives. The first list of objectives was also too complex, so I asked for a smaller set. I'll stop the example here, but you can see how I could work back and forth with Claude to quickly create a solid learning module.
A Five-Step Iterative Prompting Process
You can apply iterative prompting to almost any situation. The key is to start simple, then iterate and refine until you get the desired result. Here's a five-step process.
Start small: Begin with a simple, well-scoped prompt that has one goal and one task.
Explain the concept of GAI academic integrity to college students in simple language.
Test and refine: Did the initial prompt work? What's off or missing? You can either revise the initial prompt or ask the AI for specific refinements or additions.
Provide short examples of GAI academic integrity violations that are relevant to college students.
Add constraints gradually: Adjust tone, format, complexity, etc. one layer at a time.
Explain GAI academic integrity in plain language, using college-student relevant examples, in a conversational tone.
Break big tasks into modules: Don't prompt for an entire course or even an entire class session. Prompt for the outline, then add details, refining and expanding as you go.
Create an outline for a class session on GAI academic integrity. It’s for a junior level business class.
Template what works: Once you complete your task, turn it into a reusable prompt using meta-prompting. Meta-prompting is getting GAI to help you create prompts. Only do this if you’re likely to do a similar task in the future. Also, keep in mind that you may have to refine the template using, you guessed it, iterative prompting. How’s that for meta!
Create a prompt template based on our conversation. The goal of the template is to create a class session on a specified topic. (Note: You can add elements to this meta-prompt such as tone or audience, but don’t get carried away. Remember, it’s usually better to iterate than to engineer.)
Common Pitfalls to Avoid
Here are a few things to avoid as you build your iterative prompting skills.
Over-engineering initial prompts:
Starting with complex, highly specified prompts often leads to rigidity and missed opportunities.
Failing to document successful iterations:
Keep track of what works so you can build on successful approaches.
Rushing to final results:
Remember that the iteration process itself often yields valuable insights. Focus on effectiveness over efficiency.
Ignoring context:
Even simple prompts need basic context about your goals and audience.
Moving Forward
The next time you have a prompt that isn't working, don't add complexity; simplify. An iterative prompting approach will save you time, making your interactions with GAI more natural, more productive, and less frustrating.
Want to continue this conversation? I'd love to hear your thoughts on how you're using AI to develop critical thinking skills in your courses. Drop me a line at Craig@AIGoesToCollege.com. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It's available at https://www.aigoestocollege.com/follow. Looking for practical guidance on AI in higher education? I offer engaging workshops and talks—both remotely and in person—on using AI to enhance learning while preserving academic integrity. Email me to discuss bringing these insights to your institution, or feel free to share my contact information with your professional development team.
Disclosure: I work closely with AI, especially Lex.page in crafting and refining AI Goes to College articles.