Richard Batt
The 5-Part Prompt Framework That Gets Consistent AI Results
Tags: prompt engineering, AI implementation, business AI, productivity
Your team just started using ChatGPT. One person gets brilliant outputs. Another gets garbage. Same tool, wildly different results. The difference is not talent or luck; it is prompt structure.
After deploying AI across 120+ projects, I have watched the same pattern repeat. Teams that write free-form prompts get inconsistent, unreliable outputs. Teams that use a structured framework get results they can replicate, refine, and scale across the entire business.
Key Takeaways
- Unstructured prompts produce unpredictable results. A framework makes AI output consistent and repeatable.
- The CIRCRD framework has five components: Context, Instruction, Relevance, Constraint, and Demonstration. You pick only the ones your task requires.
- Adding constraints alone (output format, length, style) typically improves output quality by 40-60% over a bare instruction.
- Demonstrations (examples of what you want) are the single most effective way to get AI to match your exact standards.
Why Most Prompts Fail
The typical business prompt looks like this: "Write me a marketing email." That is the equivalent of telling a new hire "do marketing" on their first day. No context about the product, the audience, the tone, the goal, or what success looks like. The AI fills in every blank with its own assumptions, and those assumptions rarely match yours.
Research into prompt design has identified a consistent pattern: the more structural components you include, the more stable and useful the output becomes. Not because AI needs hand-holding, but because specificity eliminates ambiguity. Fewer assumptions means fewer surprises.
The CIRCRD Framework: Five Components, Use What You Need
CIRCRD stands for Context, Instruction, Relevance, Constraint, and Demonstration. It was developed as a standardised prompt design framework, and across my client work it has become the default structure I teach every team. Here is what each component does and when to use it.
1. Context: Set the Scene
Context tells the AI who it is and what situation it is operating in. This is where role prompting lives. Instead of asking a generic question, you establish the perspective you want the answer from.
Without context: "Write a report on our Q1 sales data."
With context: "You are a senior sales analyst at a B2B SaaS company with 200 customers. Our average deal size is $15K annually and our churn rate is 4.2% monthly."
The second version produces output that reflects your actual business reality instead of generic advice pulled from training data. Across client deployments, adding context alone typically shifts outputs from "technically correct but useless" to "actually relevant to our situation."
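If your team builds prompts in code rather than typing them by hand, the context component can live as a single reusable prefix so every prompt starts from the same business facts. A minimal sketch in Python; the `COMPANY_CONTEXT` constant and `with_context` helper are my own illustrative names, and the figures are the example values from this article, not real data.

```python
# Reusable context block: the business facts every prompt should open with.
# These figures are the worked example from the text, not real client data.
COMPANY_CONTEXT = (
    "You are a senior sales analyst at a B2B SaaS company with 200 customers. "
    "Our average deal size is $15K annually and our churn rate is 4.2% monthly."
)

def with_context(task: str) -> str:
    """Prefix any task with the shared context so outputs reflect our business."""
    return f"{COMPANY_CONTEXT}\n\n{task}"

prompt = with_context("Write a report on our Q1 sales data.")
```

Because the context lives in one place, updating a figure (say, the churn rate) updates every prompt the team runs.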
2. Instruction: State Exactly What You Want
The instruction is the task itself. Be specific. For complex tasks, break instructions into numbered steps. AI models handle sequential, well-defined instructions far better than vague requests.
Weak instruction: "Analyse this data."
Strong instruction: "Calculate the month-over-month growth rate for each product line. Flag any product that declined more than 10% in consecutive months. Summarise the three biggest trends in two sentences each."
3. Relevance: Point to What Matters
Relevance tells the AI which specific information, data, or reference material to use. This is where you paste in your actual data, link to reference documents, or specify which knowledge the AI should draw from.
When I set up client automations, the relevance component is where the real power sits. You are not asking the AI to guess; you are giving it your actual customer feedback, your real sales figures, your specific product descriptions.
4. Constraint: Define the Boundaries
Constraints tell the AI what it cannot do, what format to use, and what standards to meet. This is the component most teams skip and it is the one that makes the biggest immediate difference.
Examples of constraints that work:
- "Output as a markdown table with columns: Product, Growth Rate, Trend Direction"
- "Maximum 200 words"
- "Use British English and a professional but conversational tone"
- "Do not include any recommendations, only observations"
- "If the data is insufficient to draw a conclusion, say so explicitly"
Adding constraints typically improves output quality by 40-60% in my testing. The AI stops guessing about format, length, and tone, and focuses its processing power on the actual content.
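In practice, a constraints block is easiest to keep consistent if you append it mechanically rather than rewriting it each time. A minimal sketch, assuming a plain string-based prompt; the `add_constraints` helper and its layout are my own convention, with the constraint wording taken from the list above.

```python
# Append explicit constraints to an instruction so the model stops guessing
# about format, length, and tone. The layout (a labelled bullet list) is
# one simple convention, not a requirement of any particular AI tool.
def add_constraints(instruction: str, constraints: list[str]) -> str:
    lines = [instruction, "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = add_constraints(
    "Summarise the three biggest trends in our Q1 product data.",
    [
        "Output as a markdown table with columns: Product, Growth Rate, Trend Direction",
        "Maximum 200 words",
        "If the data is insufficient to draw a conclusion, say so explicitly",
    ],
)
```

Keeping the constraints as a Python list also makes it easy to share one standard set (tone, language, format rules) across every prompt the team maintains.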
5. Demonstration: Show What Good Looks Like
Demonstrations are examples of the input-output pattern you want. This is the most powerful component for getting AI to match your exact standards. Instead of describing what you want, you show it.
Example demonstration:
"Here is an example of how I want customer feedback categorised:
Input: 'The onboarding took three weeks and I still could not figure out the dashboard.'
Output: Category: Onboarding | Sentiment: Negative | Severity: High | Action: Review onboarding flow for dashboard training gaps"
When you provide two or three demonstrations, the AI pattern-matches against your examples and produces output that follows the same structure, tone, and level of detail. This is the technique behind few-shot prompting and it is the single fastest way to get consistent results from any AI tool.
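The few-shot pattern above can be assembled programmatically: store your demonstration pairs once, then stitch them into every prompt. A sketch under my own naming; the example pair is the one from this article, and in real use you would add two or three such pairs.

```python
# Few-shot demonstration block: show the model the exact input -> output
# pattern before giving it the new input. One pair is shown here; the
# article recommends two or three.
EXAMPLES = [
    (
        "The onboarding took three weeks and I still could not figure out the dashboard.",
        "Category: Onboarding | Sentiment: Negative | Severity: High | "
        "Action: Review onboarding flow for dashboard training gaps",
    ),
]

def few_shot_prompt(task: str, examples: list[tuple[str, str]], new_input: str) -> str:
    parts = [task, ""]
    for inp, out in examples:
        parts += [f"Input: '{inp}'", f"Output: {out}", ""]
    # End with the new input and a dangling "Output:" for the model to complete.
    parts += [f"Input: '{new_input}'", "Output:"]
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Categorise the customer feedback below using the demonstrated format.",
    EXAMPLES,
    "Support never replied to my ticket for five days.",
)
```

Ending the prompt on a bare `Output:` line nudges the model to continue the pattern rather than explain it.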
How to Combine the Components
You do not need all five components for every prompt. Match the complexity of your framework to the complexity of your task.
| Task Complexity | Components Needed | Example |
|---|---|---|
| Simple lookup | Instruction only | "List the top 5 CRM tools by market share in 2026" |
| Formatted output | Instruction + Constraint | "Summarise this article in 3 bullet points, max 20 words each" |
| Business-specific | Context + Instruction + Relevance | "As our customer success manager, analyse this NPS data and identify at-risk accounts" |
| Repeatable process | All five | "Categorise incoming support tickets using these examples, following our severity matrix, outputting in this table format" |
Building a Prompt Library for Your Team
The real payoff comes when you stop writing one-off prompts and start building a library. Here is the process I use with clients:
Step 1: Identify your team's 10 most repetitive tasks that involve writing, analysis, or categorisation.
Step 2: Write a CIRCRD prompt for each task. Test it 5 times and refine until it produces consistent quality.
Step 3: Document each prompt with the task name, when to use it, what to paste into the relevance section, and any constraints that must not be changed.
Step 4: Store them in a shared document (Notion, Google Doc, whatever your team already uses). Review monthly and update based on what is working.
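If someone on the team is comfortable with light scripting, the library itself can be a small piece of code rather than a document: each entry stores the tested template with a placeholder where the relevance data gets pasted in per run. A sketch under assumed names; the entry fields (`use_when`, `template`) and the ticket-triage example are illustrative, not a prescribed schema.

```python
# A minimal shared prompt library: each entry documents when to use the
# prompt and leaves a {relevance} slot for the data you paste in per run.
LIBRARY = {
    "ticket_triage": {
        "use_when": "Categorising incoming support tickets",
        "template": (
            "You are our support triage analyst.\n"
            "Categorise the ticket below using our severity matrix.\n"
            "Ticket: {relevance}\n"
            "Output as: Category | Severity | Suggested owner"
        ),
    },
}

def run_prompt(name: str, relevance: str) -> str:
    """Fill the relevance slot of a stored, tested template with this run's data."""
    return LIBRARY[name]["template"].format(relevance=relevance)

prompt = run_prompt("ticket_triage", "Login page returns a 500 error since 9am.")
```

The same dictionary doubles as the documentation from Step 3: the task name is the key, and `use_when` records when to reach for it.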
One client, a 30-person marketing agency, built a library of 25 CIRCRD prompts and cut their content drafting time by 62% in the first month. Not because the AI was doing their thinking. Because the prompts were structured well enough that the first draft from AI actually looked like something from their agency, not a generic template from the internet.
Common Mistakes to Avoid
Stuffing all five components into every prompt. A simple question does not need a demonstration section. Match framework complexity to task complexity.
Writing constraints that are too vague. "Make it professional" means different things in different industries. "Write in the style of our existing client proposals, formal, third person, no contractions, British English" gives the AI something it can actually follow.
Skipping the test-and-refine step. A prompt that works once might not work consistently. Run it 5 times with different inputs before you add it to your library. If it produces inconsistent results, your constraints or demonstrations are not specific enough.
Frequently Asked Questions
What is the CIRCRD prompt framework?
CIRCRD is a structured approach to writing AI prompts with five components: Context (who the AI is), Instruction (what to do), Relevance (what data to use), Constraint (what rules to follow), and Demonstration (examples of desired output). You select only the components your task requires: simple tasks might need just an instruction, while repeatable business processes benefit from all five.
Does prompt structure really matter that much?
Yes. In testing across 120+ AI implementations, structured prompts consistently produce more accurate, relevant, and useful outputs than unstructured ones. Adding constraints alone typically improves output quality by 40-60%. The difference compounds when you build a library of tested prompts that your entire team can reuse.
How many examples should I include in the demonstration section?
Two to three examples is the sweet spot for most business tasks. One example establishes a pattern but leaves room for misinterpretation. Two examples clarify the pattern. Three examples lock it in. More than five rarely adds value and increases token cost.
Can I use this framework with any AI tool?
Yes. CIRCRD works with ChatGPT, Claude, Gemini, Copilot, and any LLM-based tool. The principles are universal because they address how language models process instructions, not features specific to any one platform.
Where should I start if my team is new to prompting?
Start with your single most repetitive task. Write a CIRCRD prompt for it, test it 5 times, refine it, and document it. Once one prompt is working reliably, move to the next task. Building a library of 10 tested prompts will save more time than trying to train everyone on general prompting theory.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Put This Into Practice
I use versions of these prompting approaches with my clients every week. The full templates, prompt libraries, and implementation guides, covering the edge cases and variations you will hit in practice, are available inside the AI Ops Vault. It is your AI department for $97/month.
Want a personalised implementation plan first? Book your AI Roadmap session and I will map the fastest path from where you are now to working AI automation.