Richard Batt
How to Build a Prompt Library That Your Whole Team Uses
Tags: prompt engineering, team productivity, AI implementation, prompt library
A 40-person consulting firm had 15 people using AI daily. Every single one of them was writing prompts from scratch, every single time. Same types of tasks (proposal writing, research summaries, client emails), but zero shared knowledge about what worked.
We built a prompt library in a shared Notion workspace. Within three weeks, the average time to first draft dropped by 40% across the team. Not because the prompts were magic. Because the team stopped reinventing the wheel on every task.
Key Takeaways
- A prompt library is a shared, documented collection of tested prompts for your team's most common tasks: the AI equivalent of a standard operating procedures manual.
- Start with 10 prompts covering your team's most repetitive tasks. Quality and testing matter more than quantity.
- Each prompt needs four things documented: the task name, the prompt text, instructions for what to paste into the variable sections, and the expected output format.
- Assign an owner who reviews and updates the library monthly. Prompts that are not maintained become unreliable.
Why Individual Prompting Does Not Scale
When one person figures out that adding "respond in a markdown table with columns for..." produces clean, usable output, that knowledge stays in their head. The rest of the team keeps getting messy, inconsistent results because they have not discovered the same trick.
A prompt library solves the knowledge sharing problem. It captures what works, makes it available to everyone, and establishes a quality standard. When your best prompter writes a proposal-generation prompt that produces client-ready first drafts, the entire team gets access to that capability, not just the one person who cracked it.
Across my client work, teams with shared prompt libraries consistently outperform teams where everyone prompts individually. The difference is not skill, it is leverage. A prompt library turns one person's breakthrough into an organisational capability.
Step 1: Identify Your Top 10 Tasks
Survey your team or observe for one week. What are the tasks people use AI for most frequently? Focus on tasks that are repetitive, produce similar types of output, and currently take more than 10 minutes each.
Common candidates across the businesses I work with:
| Department | Common Prompt Library Tasks |
|---|---|
| Sales | Proposal drafts, follow-up emails, meeting prep briefs, competitor comparison notes |
| Marketing | Blog outlines, social media posts, email campaigns, ad copy variations |
| Operations | Process documentation, meeting summaries, status report generation |
| Customer Success | Response templates, account review summaries, onboarding checklists |
| Finance | Report narratives, variance explanations, budget commentary |
Pick the 10 tasks that consume the most collective time. These are your first library entries.
Step 2: Build Each Prompt Using CIRCRD
For each task, write a structured prompt using the Context-Instruction-Relevance-Constraint-Demonstration framework. Include variable sections marked with brackets where team members paste their specific data.
Example: Client Meeting Summary Prompt
Context: You are a senior consultant at our firm. You have been in a client meeting and need to produce a structured summary for internal records.
Instruction: Convert the raw meeting notes below into a structured meeting summary. Extract action items with owners and deadlines. Flag any risks or decisions that need escalation.
Relevance: [PASTE RAW MEETING NOTES HERE]
Constraint: Use our standard format: Meeting Overview (2-3 sentences), Key Decisions (bullet points), Action Items (table with Owner, Task, Deadline columns), Risks and Escalations (if any). Maximum 500 words total. Use British English.
Demonstration: [Include one example of a completed meeting summary in the exact format you want]
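The structure above translates naturally into a reusable template. Here is a minimal sketch in Python, assuming you store each library prompt as a template string with named variable slots; the function and variable names are illustrative, not part of any particular tool or framework.

```python
# Hypothetical sketch: the meeting-summary prompt stored as a template.
# The variable sections from the library entry become named placeholders.
MEETING_SUMMARY_PROMPT = """\
Context: You are a senior consultant at our firm. You have been in a client
meeting and need to produce a structured summary for internal records.

Instruction: Convert the raw meeting notes below into a structured meeting
summary. Extract action items with owners and deadlines. Flag any risks or
decisions that need escalation.

Relevance: {meeting_notes}

Constraint: Use our standard format: Meeting Overview (2-3 sentences),
Key Decisions (bullet points), Action Items (table with Owner, Task,
Deadline columns), Risks and Escalations (if any). Maximum 500 words total.
Use British English.

Demonstration: {example_summary}
"""

def build_prompt(meeting_notes: str, example_summary: str) -> str:
    """Fill the variable sections with task-specific data before sending."""
    return MEETING_SUMMARY_PROMPT.format(
        meeting_notes=meeting_notes,
        example_summary=example_summary,
    )
```

Storing prompts this way keeps the fixed parts (Context, Instruction, Constraint) identical for everyone while making the variable sections impossible to miss.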
Step 3: Test Each Prompt 5 Times
This is the step most teams skip, and it is the reason most prompt libraries fail. A prompt that works once might not work consistently. Run each prompt 5 times with different inputs and check:
- Does the output format stay consistent across all 5 runs?
- Is the quality acceptable in at least 4 out of 5 runs?
- Does it handle edge cases (messy input, short input, long input)?
If a prompt fails the consistency test, refine the constraints or add demonstrations until it passes. Only add prompts to the library that have passed 5-run testing.
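The 5-run check can be partly automated. Below is a rough sketch of the logic, assuming `call_model` is a placeholder for whatever model API your team actually uses (it is stubbed here so the example runs as-is); the section names and the crude length check stand in for your real format and quality criteria.

```python
# Sketch of the 5-run consistency test described above.
REQUIRED_SECTIONS = ["Meeting Overview", "Key Decisions", "Action Items"]

def call_model(prompt: str) -> str:
    # Placeholder: replace with your actual model API call.
    return "Meeting Overview: ...\nKey Decisions: ...\nAction Items: ..."

def has_expected_format(output: str) -> bool:
    """Format check: every required section heading must be present."""
    return all(section in output for section in REQUIRED_SECTIONS)

def passes_consistency_test(prompt: str, runs: int = 5, threshold: int = 4) -> bool:
    """Format must hold on all runs; quality on at least `threshold` runs."""
    results = [call_model(prompt) for _ in range(runs)]
    format_ok = all(has_expected_format(r) for r in results)
    # Quality is usually judged by a human reviewer; a crude length
    # check stands in here as an automatable first filter.
    quality_ok = sum(len(r) > 50 for r in results) >= threshold
    return format_ok and quality_ok
```

Automating the format check does not replace reading the outputs, but it catches the most common failure (the model dropping a section) without anyone having to eyeball five runs line by line.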
Step 4: Document and Share
Store the library where your team already works: Notion, Google Docs, Confluence, or even a shared spreadsheet. Each entry needs:
- Prompt name: A clear, descriptive name (e.g., "Client Meeting Summary Generator")
- When to use: One sentence describing the trigger (e.g., "After any client meeting where you took notes")
- The prompt: The full prompt text with variable sections clearly marked
- What to paste in: Instructions for what goes in each variable section
- Expected output: A brief description of what good output looks like
- Tips: Any tricks that improve results (e.g., "Works better with rough notes than polished notes; do not clean up your notes before pasting")
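If your library lives in a spreadsheet or feeds a simple internal tool, the same fields map directly onto structured data. A sketch, with abbreviated example values and hypothetical field names mirroring the checklist above:

```python
import json

# One library entry captured as structured data. Field names mirror the
# documentation checklist; values are abbreviated for illustration.
entry = {
    "prompt_name": "Client Meeting Summary Generator",
    "when_to_use": "After any client meeting where you took notes",
    "prompt": "Context: You are a senior consultant... "
              "Relevance: [PASTE RAW MEETING NOTES HERE] ...",
    "what_to_paste_in": {
        "[PASTE RAW MEETING NOTES HERE]": "Your raw, unedited meeting notes",
    },
    "expected_output": "Structured summary under 500 words with an action-item table",
    "tips": "Works better with rough notes than polished notes",
}

# Export the entry, e.g. for syncing into a shared workspace.
print(json.dumps(entry, indent=2))
```

The format matters far less than the consistency: every entry answering the same six questions is what makes the library scannable.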
Step 5: Assign an Owner and Review Monthly
Prompt libraries decay. AI models update. Team needs change. Business terminology shifts. Without maintenance, a prompt library becomes a graveyard of outdated prompts that produce mediocre results.
Assign one person to own the library. Their monthly review should cover: which prompts are being used most, which prompts have received complaints about quality, whether any new repetitive tasks have emerged that need a prompt, and whether any prompts need updating for new AI model capabilities.
The consulting firm I mentioned earlier appointed their most AI-savvy team member as "Prompt Librarian", a role that takes about 2 hours per month and saves the team roughly 60 hours a month.
Frequently Asked Questions
What is a prompt library?
A prompt library is a shared, documented collection of tested prompts for your team's most common AI tasks. Each entry includes the prompt text, instructions for what data to insert, the expected output format, and usage tips. It is the AI equivalent of a standard operating procedures manual, turning individual knowledge into an organisational capability.
How many prompts should we start with?
Start with 10 prompts covering your team's most repetitive tasks. Quality matters more than quantity. Each prompt should be tested 5 times with different inputs before being added to the library. Once those 10 are working reliably and being used regularly, expand based on team requests.
Where should we store our prompt library?
Store it where your team already works: Notion, Google Docs, Confluence, or a shared spreadsheet. The best tool is the one your team actually opens daily. Avoid creating a separate system that requires people to change their workflow to access prompts.
How do I get my team to actually use the prompt library?
Two things drive adoption: the prompts must save noticeable time (at least 5 minutes per use), and they must be easy to find. Organise by department or task type, not by technique. Run a 30-minute workshop showing 3 prompts in action. Once 2-3 team members see the time savings, adoption spreads through observation.
How often should prompts be updated?
Review the library monthly. Update prompts that are producing inconsistent results, remove prompts that are no longer being used, and add new prompts for emerging tasks. Assign one person as the prompt library owner; this takes about 2 hours per month and prevents the library from becoming a collection of outdated prompts.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Put This Into Practice
I use versions of these prompting approaches with my clients every week. The full templates, prompt libraries, and implementation guides, covering the edge cases and variations you will hit in practice, are available inside the AI Ops Vault. It is your AI department for $97/month.
Want a personalised implementation plan first? Book your AI Roadmap session and I will map the fastest path from where you are now to working AI automation.