Richard Batt
Over 40% of AI Agent Projects Will Be Cancelled Without Governance
Tags: AI Strategy, Leadership
Gartner predicts that over 40% of agentic AI projects will be cancelled by 2027 without governance. Most governance frameworks? 50 pages. Nobody reads them. They check boxes. They don't govern anything.
Key Takeaways
- What Governance Actually Means in the Context of AI
- The Four Questions Your Governance Framework Must Answer (apply these before building anything)
- Question One: Who Owns the AI System?
- Question Two: Who Reviews and Approves Changes?
- Question Three: What Happens When the AI Makes a Mistake?
- Question Four: How Do You Audit AI Decisions?
I have implemented governance frameworks on 30+ AI projects. The frameworks that work are the ones that are simple, specific, and focused on real decisions. Not frameworks that document every possible risk. Frameworks that document the decisions that matter.
What Governance Actually Means in the Context of AI
Governance is about making decisions before you get to the point where you need to make a decision. It is about having a clear process for approving new AI use cases. It is about having rules for who can change an AI system and what they can change. It is about having audit trails so you can understand what the AI did and why.
Most companies approach governance as compliance. They want to make sure they do not violate regulations or expose themselves to legal risk. That is important. But it is not the main reason governance matters for AI.
The main reason governance matters is that AI systems make decisions. If you have no governance, different people make different assumptions about what the AI is supposed to do, when it is supposed to escalate to humans, what constitutes a mistake, and how to fix it. Those assumptions will conflict. The project will stall.
The Four Questions Your Governance Framework Must Answer
Practical tip: Your governance framework needs to answer four questions about every AI system you deploy.
- Who owns the AI system? Not who built it. Who owns the outcome? Who is accountable if something goes wrong?
- Who reviews the AI system and approves changes to it? What is the process for changing how the AI behaves? Who needs to sign off?
- What happens when the AI makes a mistake? Who detects it? Who fixes it? How long before the fix is deployed?
- How do you audit AI decisions? Can you show why the AI made a specific decision? Can you trace the data that fed into the decision?
If you have clear answers to these four questions, you have governance. Everything else is decoration.
Question One: Who Owns the AI System?
You need one person who is accountable for the AI system. Not a committee. Not a team. One person. That person owns the outcome. If the AI is not delivering the promised value, that person has to explain why and fix it. If the AI makes a bad decision, that person is responsible for understanding what went wrong.
In most organizations, that owner is either the head of the department where the AI is deployed or someone directly reporting to that person. In some cases, it is the Chief Data Officer or Chief AI Officer. The important thing is that this person has the authority to make decisions about the system without getting approval from five different people.
The owner's job is not to understand how the AI works technically. Their job is to understand what the AI is supposed to do, whether it is doing it, and whether it is delivering the promised business value.
Question Two: Who Reviews and Approves Changes?
The owner cannot make unilateral changes to the AI. You need a review process. That process should include the owner, someone technical, and someone from compliance or risk if the AI system has regulatory implications.
The review process should answer these questions:
- What is the change?
- Why is the change needed?
- What is the risk if something goes wrong?
- Is the change aligned with the purpose of the AI system?
- Can you reverse the change if it does not work?
This review process should happen before the change is deployed to production. Not after. If you are changing how the AI makes decisions, you need approval before that change goes live.
For small changes, the review can be informal. For large changes, it should be formal and documented. The key is consistency. You need a clear process that is applied every time.
Question Three: What Happens When the AI Makes a Mistake?
AI systems will make mistakes. The question is how you detect, escalate, and fix those mistakes. You need a process for this.
Who is responsible for monitoring the AI? How frequently do they check? What metrics do they monitor? If performance degrades, how do you know? If the AI starts making more errors, when do you detect it?
When a mistake is detected, who is responsible for investigating? Can they understand what happened? Can they fix it? Do they fix it immediately or do they document it and plan a fix?
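The monitoring step does not need heavy tooling. One simple approach is to track the error rate over a rolling window of decisions and alert when it crosses a threshold. A minimal sketch in Python; the class name, window size, and threshold are illustrative, not a prescribed standard:

```python
from collections import deque


class ErrorRateMonitor:
    """Track decision outcomes over a rolling window and flag degradation."""

    def __init__(self, window_size=200, alert_threshold=0.05):
        # True = correct decision, False = error; oldest entries drop off automatically
        self.outcomes = deque(maxlen=window_size)
        self.alert_threshold = alert_threshold

    def record(self, correct: bool) -> bool:
        """Record one decision outcome; return True if an alert should fire."""
        self.outcomes.append(correct)
        return self.error_rate() > self.alert_threshold

    def error_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return self.outcomes.count(False) / len(self.outcomes)


# Example: a healthy system, then a run of bad decisions trips the alert
monitor = ErrorRateMonitor(window_size=200, alert_threshold=0.05)
for _ in range(190):
    monitor.record(True)
alerted = False
for _ in range(15):
    alerted = monitor.record(False) or alerted
```

The point is not the specific numbers. The point is that "who detects it and when" becomes a concrete answer: the monitor detects it, within one window of decisions, and the alert routes to a named person.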
I worked with a financial services company that deployed an AI system to approve loans. The AI made a decision that violated company policy. Nobody caught it for three weeks. By then, it had made 200 decisions based on the wrong rules. The financial impact was significant. If they had had a process for monitoring key decisions, they would have caught the problem in hours instead of weeks.
Question Four: How Do You Audit AI Decisions?
You need to be able to answer this question: why did the AI make this decision? If you cannot answer that question, you cannot trust the AI and you cannot govern it.
This is where explainability becomes important. Not for academic reasons. For governance reasons. You need to understand what data the AI consumed, what weights the AI applied to that data, and what decision the AI made as a result.
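An audit trail starts with logging every decision alongside its inputs, the model version, and a stated reason. A lightweight sketch in Python; the field names and JSON-lines format are one reasonable choice, not a required schema:

```python
import json
import uuid
from datetime import datetime, timezone


def log_ai_decision(model_version, inputs, decision, reason,
                    log_file="decision_audit.jsonl"):
    """Append one AI decision to a JSON-lines audit log and return the record."""
    record = {
        "decision_id": str(uuid.uuid4()),                     # unique reference for review
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when the decision was made
        "model_version": model_version,                       # which model made the call
        "inputs": inputs,                                     # the data the decision used
        "decision": decision,                                 # what the AI decided
        "reason": reason,                                     # human-readable explanation
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record


# Example: a hypothetical loan decision
entry = log_ai_decision(
    model_version="loan-scorer-v2.3",
    inputs={"income": 42000, "credit_score": 710},
    decision="approved",
    reason="credit score above 700 threshold and income above minimum",
)
```

With a log like this, the audit question changes from "can anyone reconstruct what happened?" to "pull the record for this decision ID".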
For some AI systems, this is easy. Decision trees are explainable. Rule-based systems are explainable. For others, it is harder. Deep neural networks are largely opaque: you can see the inputs and the output, but not a human-readable reason in between. If you are deploying an AI system you cannot explain to make important decisions, you have a governance problem.
Practical tip: For high-stakes decisions, use explainable AI. For low-stakes decisions, complex models are fine. The stakes depend on the context. In financial services, loan decisions are high-stakes. Email classification is low-stakes. Choose your model based on the stakes.
CEO Oversight Is the Most Predictive Factor of Success
PwC research found that CEO and board-level oversight is the factor most strongly correlated with AI projects delivering bottom-line impact. Not the quality of the model. Not the sophistication of the data science. CEO oversight.
This makes sense. AI is a strategic asset. The CEO should be involved in deciding which AI projects to fund, what they should accomplish, and whether they are delivering results. The CEO does not need to understand how the AI works. The CEO needs to understand the business impact and whether the company is making smart choices about where to invest in AI.
If your board is not asking questions about AI governance, your company is not taking AI governance seriously. If your CEO is not asking about the ROI from AI investments, you are probably wasting money.
Your Next Step
Start with these four questions. Do you have clear answers? If not, you do not have governance. Build the framework to answer them. Do not build a 50-page document. Build a one-page process that your team can actually follow.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Frequently Asked Questions
How long does it take to build AI automation in a small business?
Most single-process automations take 1-5 days to build and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
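That arithmetic can be made concrete. A worked example in Python with assumed figures (10 hours per week saved, £35 fully loaded hourly cost, £200/month tool cost, 48 working weeks):

```python
def automation_roi(hours_saved_per_week, hourly_cost, tool_cost_per_month,
                   weeks_per_year=48):
    """First-year ROI of an automation, as a percentage of tool spend."""
    annual_savings = hours_saved_per_week * hourly_cost * weeks_per_year
    annual_tool_cost = tool_cost_per_month * 12
    net_benefit = annual_savings - annual_tool_cost
    return 100 * net_benefit / annual_tool_cost


# Assumed example: 10 hrs/week saved at £35/hr, £200/month tool
roi = automation_roi(hours_saved_per_week=10, hourly_cost=35,
                     tool_cost_per_month=200)
# annual savings = 10 * 35 * 48 = £16,800; tool cost = £2,400; ROI = 600%
```

Those assumed inputs land at 600%, squarely inside the 300-1000% range quoted above; plug in your own hours and rates.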
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
What Should You Do Next?
If you are not sure where AI fits in your business, start with a roadmap. I will assess your operations, identify the highest-ROI automation opportunities, and give you a step-by-step plan you can act on immediately. No jargon. No fluff. Just a clear path forward built from 120+ real implementations.
Book Your AI Roadmap: 60 minutes that will save you months of guessing.
Already know what you need to build? The AI Ops Vault has the templates, prompts, and workflows to get it done this week.