"How do I write the perfect prompt?" That's a common question from faculty trying to use AI effectively. Many feel intimidated by terms like "prompt engineering," imagining they need complex, carefully crafted instructions to get good results. The reality is far simpler: prompting generative AI (GAI) often requires nothing more than basic conversation skills. In fact, my most-used prompt is simply asking "What do you think?" You'd be amazed how much you can accomplish by treating AI like a knowledgeable colleague rather than a complex machine that needs precise programming.
For example, I just asked ChatGPT to give feedback on the original version of the paragraph above using this prompt:
Here's an opening paragraph for an article for my newsletter, AI Goes to College. What do you think?
Prompting generative AI (GAI) is not hard, despite the scary term “prompt engineering.” With a little practice and a few guidelines, almost anyone can learn to use GAI effectively. When people hear the term “prompt engineering” they envision carefully planned out, highly structured and detailed prompts, but in reality, you can be very effective with very simple prompts. As I’ve mentioned before, my most used prompt is a simple question or statement, followed by “What do you think?”
The prompt is remarkably simple and quite vague. This is an example of what I call open prompting: you just ask the chatbot an open question and see where it goes. I get a lot of mileage out of this approach. In this case, the response was some decent feedback, which I used to improve the paragraph.
One of the great things about open prompting is that you just let the chatbot go where it will, which often leads to unexpected, interesting results. That’s also the problem with open prompting: you never know exactly where it will go. Most of the time, though, you can redirect the AI through conversation and refinement. If the open prompt isn’t giving you what you want, prompt the chatbot to refine its response, guiding it until you get there, just as we do in conversations with other people.
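If you ever script this kind of exchange instead of working in the chat window, the same conversational pattern applies: keep the whole conversation in the message history and add a short redirecting prompt when the response drifts. Here's a minimal sketch assuming the OpenAI Python SDK; the model name and the prompts are illustrative placeholders, not a recommendation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Start with an open prompt and keep the running conversation in one list.
messages = [
    {"role": "user", "content": "Here's a draft opening paragraph for my newsletter. "
                                "What do you think?\n\n<paste draft here>"}
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# If the response drifts, redirect with a short follow-up rather than starting over.
messages.append({"role": "user", "content": "Helpful, but focus only on the opening "
                                            "sentence. Suggest two alternatives."})
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```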
Open prompting is simple and flexible, but it’s inefficient. That can be a huge problem when time is short, you have to do a task frequently, or there’s a high cost to iterative refinement, as is the case with ChatGPT Deep Research. Plus and Teams members only get 10 Deep Research queries a month, so you don’t want to waste them. In these situations, a detailed, focused prompt is in order. Here’s an example of a Deep Research prompt I used recently. By the way, I developed this prompt through meta-prompting, that is, by asking ChatGPT to write the prompt for me. (I redacted some portions of the prompt for confidentiality.)
Title: [redacted]: Theoretical Insights and Practical Implications
Objective:
This research report will examine [redacted] in two key domains: (1) generative AI and (2) the [redacted]. The report will explore how standpoint theory, the capabilities approach, and epistemic frame theory inform these biases and injustices, offering a conceptual foundation for understanding their role in knowledge production and dissemination. Additionally, the report will assess the potential of AI as both a perpetrator and a mitigator of epistemic bias and injustice.
Research Questions:
How do epistemic bias and [redacted] manifest in generative AI, particularly in model training, content generation, and decision-making?
How do [redacted] and [redacted] impact consumer services, especially in service design, personalization, and accessibility?
In what ways can standpoint theory, the capabilities approach, and [redacted] help explain and address these biases and injustices?
What are the ethical and policy implications of [redacted] in AI and consumer services from a global perspective?
How can AI be leveraged to mitigate [redacted] rather than exacerbate them?
Methodological Guidelines:
Scope: The research should be interdisciplinary, drawing from fields such as philosophy, AI ethics, information systems, consumer behavior, and business ethics.
Geographical Coverage: A global perspective should be taken, ensuring representation from both Western and non-Western contexts in AI and consumer services.
Scholarly Sources: The report must be based on peer-reviewed journal articles and academic books, with exceptions only for background material from reputable institutional sources.
Conceptual Framework:
Standpoint Theory: Discuss its role in identifying marginalized epistemic perspectives and how it applies to AI model development and consumer services.
Capabilities Approach: Analyze how access to knowledge and decision-making agency are affected by epistemic bias and injustice in AI-driven and human-mediated services.
[redacted]: Explore how structured ways of knowing influence AI outputs and consumer service interactions.
Expected Deliverables:
A critical literature review on [redacted] in generative AI and consumer services.
An analysis of key case studies demonstrating real-world manifestations of [redacted].
A theoretical synthesis of standpoint theory, the capabilities approach, and [redacted] as applied to epistemic bias and injustice.
Policy and ethical recommendations for addressing these issues, particularly in AI governance and consumer service equity.
A discussion of AI’s potential role in mitigating epistemic bias and injustice, including responsible AI design principles.
Intended Audience:
This report is intended for submission to the [redacted] and should align with its focus on business, technology, and ethical considerations in market systems. The writing should be accessible to scholars in business research, AI ethics, philosophy, and interdisciplinary studies on consumer services.
That is a 400+ word prompt with detailed context and instructions. Because of that detail, the output was highly focused and relevant. It was also long (over 30 pages), although the length wasn’t entirely due to the prompt; it also reflects the nature of ChatGPT Deep Research, which tends to produce lengthy reports.
OK, this is kind of interesting, but what’s the point? Which approach should you use? The answer is both, depending on your goal. Taking this a step further, sometimes you should use both within the same chat session. Here are some guidelines for choosing between open and structured prompts.
Open Prompts
Open prompts are useful for creativity, exploration and serendipitous discovery. I find this last one especially useful when asking for feedback on ideas or my writing. My mental model is to use open prompts when I’m thinking in terms of having a conversation.
Use open prompts when you:
Are exploring ideas or brainstorming
Want creative or unexpected insights
Don’t know exactly what you’re looking for
Need inspiration or to get “unstuck”
Are open to multiple interpretations and directions for the work
Are engaging in reflective or philosophical dialogues
Structured (Detailed) Prompts
Structured, detailed prompts are useful when you want precise, consistent, and highly relevant results from AI. My mental model here is giving instructions rather than having a conversation.
Use structured prompts when you:
Require specific and accurate information
Need detailed analysis or explanations
Want predictable and consistent outputs
Are working with well-defined criteria or guidelines
Want output in structured formats or frameworks
Need to minimize ambiguity
Hybrid Prompting Approach
Often, the most effective strategy is a blend of the two approaches. In the hybrid approach, you start with open prompts to gather ideas and inspiration, then follow up with detailed prompts to refine these ideas into clear, actionable steps or outputs. This approach is especially useful when you have a vague idea that you want to turn into a concrete output.
Here’s an example:
Open prompt: What are some interesting AI-enabled assignments for my principles of information systems class?
Detailed prompt: Take the best two suggestions and create detailed assignment descriptions, including objectives, required resources, and evaluation criteria. These should be ready to post to my learning management system. Be clear in the instructions. Also, provide an evaluation rubric for each assignment.
You could also provide the syllabus or lecture slides to give the first prompt more guidance and context. And rather than letting the chatbot pick the best two suggestions, you might select the assignments you want to develop further yourself.
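For readers comfortable with a little code, the same open-then-structured pattern can be scripted in a single session. The sketch below assumes the OpenAI Python SDK; the model name and prompt wording are illustrative only, not a prescribed workflow.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Step 1: open prompt to brainstorm ideas.
messages = [{"role": "user", "content": (
    "What are some interesting AI-enabled assignments for my principles of "
    "information systems class?"
)}]
ideas = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": ideas.choices[0].message.content})

# Step 2: structured follow-up in the same conversation, built from a reusable template.
structured_followup = """Take the best two suggestions and create detailed assignment descriptions.

Requirements:
- Objectives, required resources, and evaluation criteria for each assignment
- Clear student-facing instructions, ready to post to my learning management system
- An evaluation rubric for each assignment
"""
messages.append({"role": "user", "content": structured_followup})
assignments = client.chat.completions.create(model="gpt-4o", messages=messages)
print(assignments.choices[0].message.content)
```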
Bottom Line
The bottom line is that effective prompting doesn't require complex engineering. It requires understanding which approach fits your needs:
For creative tasks and exploration, start with open prompts and let the conversation flow naturally
When precision matters or time is limited, invest in crafting structured prompts
Keep a library of successful prompts for recurring tasks
Don't be afraid to mix approaches: start open and get more specific as needed
Remember that the "perfect" prompt is simply the one that helps you achieve your goal
Most importantly, treat prompting as a skill to develop rather than a formula to master. Experiment, learn from what works (and what doesn't), and develop your own style. The best approach to prompting isn't the most complex or the simplest; it's the one that works for you and your situation.
Want to continue this conversation? I'd love to hear how you're using AI in your courses and which prompting approaches are working for you. Drop me a line at Craig@AIGoesToCollege.com. Be sure to check out the AI Goes to College podcast, which I co-host with Dr. Robert E. Crossler. It's available at https://www.aigoestocollege.com/follow. Looking for practical guidance on AI in higher education? I offer engaging workshops and talks, both remotely and in person, on using AI to enhance learning while preserving academic integrity. Email me to discuss bringing these insights to your institution, or feel free to share my contact information with your professional development team.