
Richard Batt

How to Write an AI Usage Policy Your Team Will Actually Follow

Tags: AI Strategy, Leadership, Operations


Your team is using ChatGPT. You know this. You don't know what they're using it for, what they're feeding it, or what version of truth they're building.

Last week, an engineer in your company pasted a customer's API keys into ChatGPT to debug an issue. Nobody told them not to. You didn't have a policy. It just happened.

That's shadow AI. And it's costing you more than you think.

Key Takeaways

  • Shadow AI is rampant: more than 70% of employees use generative AI without telling their company
  • Most AI policies are 30-page documents nobody reads; a one-page policy works better
  • The best policies don't say "don't use AI." They say "here's how and where you can use it safely."
  • A "traffic light" system (green/amber/red use cases) is more usable than a policy full of rules
  • The policy only works if leadership uses it too, especially on the stuff the policy flags as risky

Why Shadow AI Is a Real Problem

I work with a lot of 20-100 person companies. And I've asked hundreds of them: "Do you have an AI usage policy?"

About 40% say yes. Of those, 90% admit nobody reads it.

The other 60% don't have one, so the AI usage is completely random. One person uses Claude for customer support copy. Another uses ChatGPT to summarise financial data. A third is feeding it internal documentation to debug code.

None of it is coordinated. None of it is measured. All of it is risky.

The risks are real:

Security: Someone pastes confidential information into a public AI. The data trains the next model. Your competitor sees it.

Compliance: Someone uses an AI tool that logs everything to a US server, and your company is GDPR-bound. You're now in breach.

Quality: Someone relies on an AI's output without fact-checking it. That output goes into a client deliverable. The client finds an error. Your reputation takes a hit.

Efficiency: Three different teams use three different tools. Nobody knows how to hand off. You're paying for ChatGPT, Claude, Gemini, and Perplexity. You could consolidate to one and save budget.

Liability: An AI hallucinates, your team doesn't catch it, and someone acts on false information. Who's liable?

Most 30-page AI policies try to prevent all of this by saying "just ask first" or "only use approved tools." And then nobody follows them, because they're all friction.

The one-page policy I'm about to show you prevents the real risks without creating friction.

The One-Page AI Usage Policy (Template)

Here's the exact structure I use with my clients. You can copy it, fill in your company details, and deploy it Monday.

Section 1: Why We Have This Policy

Example language:

"We use AI tools because they save time and improve quality. We have a policy because shadow AI creates security, compliance, and quality risks. This policy makes it safe to use AI by being clear about what's protected, what's not, and when you need approval."

That's it. Three sentences. You're not saying "AI is dangerous." You're saying "we want you to use it safely."

Section 2: The Traffic Light System

This is the core. Instead of 47 rules, you have three categories.

GREEN (Use Anytime, No Approval Needed)

  • Copy and marketing content (subject lines, blog ideas, email drafts)
  • Code debugging and documentation (explain what code does, suggest improvements)
  • Customer-facing brainstorms (ideation for features, campaigns, product improvements)
  • Non-sensitive research (market trends, competitor analysis on public websites, industry news)
  • Personal productivity (calendar management, meeting summaries, note organisation)

The pattern: nothing sensitive, nothing regulated, nothing that would harm customers or the company if it went public.

AMBER (Use With Care, Request Approval if Unsure)

  • Customer or client names in prompts (only if it helps you do your job, no full customer records)
  • Company confidential information (strategy, roadmaps, internal metrics, only if necessary)
  • Personal data (employee names, email addresses, locations in prompts)
  • Code that will be deployed to production (review the output extra carefully)
  • Clinical, financial, or legal advice (output must be reviewed by an expert before use)

The pattern: information that could cause harm if leaked, but might be necessary to use AI well.

RED (Never Do This)

  • Passwords, API keys, credentials (if you paste them into an AI, assume they're compromised forever)
  • Customer payment information or banking details
  • Health records or personally identifiable information (PII) that identifies someone by name, email, or location
  • Proprietary algorithms or trade secrets (if someone stole this, you'd be in trouble)
  • Anything you're not legally allowed to share with a third party

The pattern: information that is irreplaceable or illegal to leak.
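If your team wants to make the traffic light concrete, the categories above can even be sketched as a pre-send helper. This is an illustrative sketch only: the keyword lists are assumptions standing in for whatever your own policy flags, and a real deployment would need far richer matching than substring checks.

```python
# Traffic-light check as a pre-send helper.
# RED_TERMS and AMBER_TERMS are example keywords, not a real policy --
# replace them with the terms your own policy flags.
RED_TERMS = ["password", "api key", "credit card", "health record"]
AMBER_TERMS = ["customer", "roadmap", "internal metric", "employee"]

def traffic_light(prompt: str) -> str:
    """Return 'RED', 'AMBER', or 'GREEN' for a draft prompt."""
    text = prompt.lower()
    if any(term in text for term in RED_TERMS):
        return "RED"      # never send
    if any(term in text for term in AMBER_TERMS):
        return "AMBER"    # ask first if unsure
    return "GREEN"        # use anytime
```

Even if you never automate this, writing the categories down as data forces you to decide what actually belongs in each zone.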

Section 3: The Tools You Can Use

Pick the AI tools your company actually uses. If you're paying for it, list it. If people are asking to use something new, make them ask in writing so you can assess it.

Example:

"Approved tools: ChatGPT (paid), Claude (via web), Google Gemini (via web). If you want to use a new tool, ask [email address] first. We check it for security and compliance before approval."

This does two things. First, it tells people where to go. Second, it stops tool sprawl. You're not paying for 17 different AI subscriptions.

Section 4: The Two Rules (Yes, Just Two)

Rule 1: If your prompt contains RED information, don't send it. Full stop.

Rule 2: If your AI output is going to a customer or a decision-maker, fact-check it first. The AI hallucinates sometimes. You're responsible for catching it.

That's it. Two rules. Not 47. Anyone can remember two rules.
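Rule 1 can be partially automated. Here is a minimal sketch of a credential guard; the regex patterns are illustrative examples of common secret formats, not an exhaustive rule set, and dedicated secret scanners use far larger ones.

```python
import re

# Illustrative patterns for common credential formats.
# These are examples, not a complete secret-detection rule set.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                   # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)password\s*[:=]\s*\S+"),             # password assignment
]

def safe_to_send(prompt: str) -> bool:
    """Rule 1: if the prompt matches any secret pattern, don't send it."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)
```

A check like this catches the obvious cases; Rule 1 still relies on people knowing what RED means.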

Section 5: When to Ask First

Example:

"Not sure if something is safe to use? Ask [compliance/security lead] first. They have 48 hours to respond. If you don't hear back, assume it's okay as long as it's not RED category."

This removes friction. You're not asking for approval on everything. You're saying "if you're unsure, ask quickly and we'll tell you."

Section 6: This is a Living Policy

Example:

"We'll update this policy as new tools and risks emerge. Check back every quarter for changes. New approved tools or new restrictions will be announced in [channel]."

Technology moves fast. Your policy needs to move with it. But don't announce changes via email that nobody reads. Announce them in Slack or your team channel. Make it real.

Real Examples: Traffic Light in Action

Scenario 1: A Sales Rep

Sarah uses ChatGPT to draft customer emails. That's GREEN. No approval needed. She feeds it: "Customer name is Acme Corp. They use our product for reporting. Draft an email inviting them to a webinar on advanced features."

Is the customer name a problem? Only if Acme Corp is confidential (red flag). Otherwise, no.

Later, Sarah wants to use AI to analyse her top 10 customers' usage patterns and personalise outreach. That's AMBER. She needs to ask: "Can I put customer usage data into Claude to get suggestions for personalisation?" You say yes if you trust your AI tool's privacy (Claude doesn't train on inputs). You say no if you use a tool that does train on data.

Scenario 2: An Engineer

Tom uses ChatGPT to understand what a function does. That's GREEN. He pastes code, gets an explanation, learns something.

Tom wants to use AI to debug a production issue. That's GREEN as long as the code doesn't expose the data structure of a critical system. If it does, it's AMBER and he asks first.

Tom wants to paste a customer's error message into Claude to help diagnose a bug. That's AMBER if the error message could identify the customer. RED if it includes any credentials.

The rule: if you're in doubt, ask. It takes five minutes. It's faster than the security breach that might happen otherwise.

Scenario 3: Leadership

The CEO uses ChatGPT to draft a company announcement. GREEN.

The CEO wants to use AI to analyse employee feedback and find themes. That's AMBER because the feedback might contain personal information about employees. You ask your compliance lead first.

The CEO wants to use AI to brainstorm ways to optimise the sales process and reduce churn. GREEN as long as you're not pasting customer data. AMBER if you're pasting metrics about which customers are likely to churn (they might be identifiable).

The key: leadership has to follow the policy too. If the CEO is pasting confidential stuff into ChatGPT without asking, the team knows the policy is theatre. They'll do the same.

How to Actually Get People to Follow It

Writing the policy is 20% of the work. Getting people to use it is 80%.

1. Make It Dead Simple

Your policy should fit on one page. One page. If you need to print it and hand it out, it should be readable in two minutes. If people are scrolling through a document for five minutes to figure out if they can use AI, they won't.

2. Lead By Example

Use the policy yourself. Talk about it in meetings. "I wanted to use AI for this, but it touched on customer data, so I asked compliance first." Show that you're following the same rules.

3. Make Asking Easy

Create a Slack channel or email address where people can ask without feeling like they're admitting ignorance. "#ai-questions" not "#ai-violators." The tone matters. You want adoption, not compliance theatre.

4. Celebrate Green Zone Usage

In your team meeting, say: "Tom used AI to debug 14 issues last week. That's efficient. Here's what he did right." Celebrate the people who are using it safely. That encourages others.

5. Don't Wait for Perfect

You don't need a 100-page policy before you deploy this. You need the traffic light system and the two rules. Publish it. Use it for three months. Then refine based on what actually came up.

The Real Question: Are You Ready for AI in Your Company?

Here's what I've learned from 120+ implementations: the policy is less about preventing misuse and more about showing your team that you're intentional about AI.

A company that has a one-page policy uses AI 3x more than a company that has no policy. Why? Because the policy says "it's okay to use this, here are the guardrails." It removes fear.

A company that has a 30-page policy uses AI about the same as a company with no policy. Why? Because the policy is a barrier. Nobody reads it. It feels like management doesn't trust them.

If you want your team using AI safely and productively, pick a side. Either you have a simple, usable policy and you support it. Or you don't have a policy and you accept shadow AI.

FAQ

What if we're already using AI and we never had a policy?

Don't panic. Deploy the policy now. In the deployment message, say: "We're putting a policy in place starting Monday. It's designed to make it safe and clear when you can use AI. The goal is trust and clarity, not punishment. Check the RED zone, let's talk about anything AMBER, and go wild in GREEN." Then actually follow through on that tone.

What if an employee is caught sharing confidential data with ChatGPT before the policy?

That's a coaching moment, not a firing moment. They didn't know better. Now teach them. "Hey, I saw you pasted [data] into ChatGPT. That's RED zone. Here's why. Let's talk about how to do this safely going forward." If it's a pattern after the policy is clear, that's a different conversation.

What if we use an AI tool that trains on our data?

Then that tool cannot touch AMBER or RED zone information. Period. Either use a tool that doesn't train on your data (Claude doesn't; neither does ChatGPT Plus), or limit it to GREEN zone only. If leadership insists on using a training-based tool for sensitive work, that's a compliance decision, not a policy decision.

Do we need approval from legal or compliance before deploying this?

Probably not if this is your own internal policy. But if your company is GDPR-regulated or in healthcare or finance, run it by compliance first. They'll probably add a few AMBER or RED items. Then deploy it.

How do we measure if the policy is working?

Ask one question quarterly: "Are people asking before they use AI in AMBER situations?" If the answer is yes, the policy is working. If the answer is "we have no idea," you either need to make asking more visible or you have a communication problem, not a policy problem.

Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.

Frequently Asked Questions

How do I know if my business is ready for AI?

You are ready if you have at least one process that is repetitive, rule-based, and takes meaningful time each week. You do not need perfect data or a technical team. The AI Readiness Audit identifies exactly where to start based on your current operations, data, and team capabilities.

Where should a business start with AI implementation?

Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.

How do I calculate ROI on an AI investment?

Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
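The arithmetic above is simple enough to sanity-check in a few lines. The figures below (10 hours saved per week, £40/hour fully loaded cost, £200/month tool cost) are hypothetical examples, not client data.

```python
def annual_roi_percent(hours_saved_per_week: float,
                       hourly_cost: float,
                       tool_cost_per_month: float) -> float:
    """Net annual savings as a percentage of annual tool cost."""
    annual_savings = hours_saved_per_week * 52 * hourly_cost
    annual_cost = tool_cost_per_month * 12
    return (annual_savings - annual_cost) / annual_cost * 100

# Example: 10 hrs/week saved at £40/hr, tool costs £200/month
# -> £20,800 saved vs £2,400 spent, roughly 767% ROI in year one
print(round(annual_roi_percent(10, 40, 200)))
```

Run it with your own numbers; anything that lands inside the 300-1000% range above is a strong first automation.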

Put This Into Practice

I use versions of these approaches with my clients every week. The full templates, prompts, and implementation guides, covering the edge cases and variations you will hit in practice, are available inside the AI Ops Vault. It is your AI department for £97/month.

Inside the Vault you'll find the one-page policy template in multiple formats (Word, PDF, Notion), plus real examples of how different roles use the traffic light system. You can copy the template, fill in your company details, and deploy it by Friday.

Want a personalised implementation plan first? Book your AI Roadmap session and I will map the fastest path from where you are now to working AI automation.
