Richard Batt
Why the Best AI Implementations Start with Boring Process Work
Tags: AI Strategy, Process
The Demo Trap Costs £200k
A consultant shows a beautiful demo: Claude reads support tickets and drafts perfect responses in 30 seconds. Executives sign the contract. Six months later, the system is abandoned. Why? The demo didn't map to real processes. It was never tested against messy data. The magic doesn't survive contact with reality.
Key Takeaways
- A polished vendor demo in a vacuum tells you nothing about how a tool will behave inside your real process.
- Process mapping reveals hidden constraints, the real bottlenecks, and where humans genuinely add value.
- Before building anything, run the checklist: document the current state, map the data flow, find decision points, set the quality bar, and measure a baseline.
- Failed implementations share a pattern: they bought the tool first and discovered the workflow later.
- The demo comes after the boring work, run on your actual data under your actual constraints.
The demo was real. The tool worked. But nobody had mapped out the actual process the support team was using. Nobody asked: "How do your customers actually submit tickets? How do you currently route them? What's your quality control process? When do your support staff reject an AI suggestion and why? How would this integrate with your actual tools?" The demo existed in a vacuum. The implementation existed in the real world. They didn't match.
This is the demo trap. And I've seen it happen at least 15 times, probably more.
The companies that win with AI are not the ones that start with the demo. They're the ones that start with the boring work: understanding their actual processes, mapping workflows, identifying constraints, and then asking what AI can fix. The demo becomes a confirmation, not a starting point.
Why Process Mapping Actually Matters
When I work with a company on AI automation, I spend the first two weeks doing something that feels unproductive: I'm mapping their process. I'm asking questions like: "When a document comes in, who sees it first? What do they do with it? Does it go to a queue, or is it handled immediately? If there's a problem, where does it go? What's your quality bar? How do you measure whether something was done right?"
This is not exciting work. It's not going to make anyone's eyes light up. But here's what it does:
It reveals constraints you didn't know you had. I worked with a manufacturing company that thought they could automate their quality inspection reports. The demo showed Claude reading inspection notes and writing summaries. Looked great. But when we mapped the actual process, we discovered that the quality control team had a specific format they needed: specific fields, specific order, specific detail level. They'd send reports to customers, and customers were used to that format. Changing it would require renegotiating contracts. Suddenly the problem wasn't "can AI write a summary," it was "can AI write a summary in this very specific format." The answer was yes, but only because we understood the constraint.
It shows you where the actual pain is. Another company thought their bottleneck was customer onboarding. The CEO said "we need to speed up onboarding." But when we mapped the actual process, we found that 70% of the delay was waiting for customers to send information back. The company was sitting around waiting, not doing work that needed to be done faster. Automating the company's side of the process would have saved 15 minutes. Fixing the customer communication process would have saved days. A demo would have pointed in the wrong direction entirely.
It identifies where humans actually add value. Everyone assumes repetitive work is automatable. Usually it is. But sometimes it's not, or the automation isn't worth the cost. I looked at a law firm's document review process, expecting to find clear candidates for AI automation. We did find some. But we also discovered that one paralegal was doing something unusual: she was reading documents and flagging inconsistencies that the rest of the team missed. She had a mental model of the case that made her unusually good at spotting problems. Automating document review completely would have lost that capability. Instead, we automated the high-volume reading and flagging part, and gave her more time to focus on the inconsistency-spotting work she was exceptional at. That's a smarter automation.
It shows you the real implementation costs. Every company I've worked with underestimates implementation costs before we map the process. They think: "Plug in the AI tool, people use it, it saves time." Then we map the actual workflow and discover that integrating the AI tool requires changes to how data flows through three different systems. Or we discover that the quality control process needs to change because the AI output format is different. Or we find out that adoption requires training 50 people, not just the team lead. Knowing these things matters. They're the difference between a project that costs £30k and a project that costs £100k, and between a project that takes 4 weeks and a project that takes 16 weeks.
The Checklist for the Boring Work
If you're starting an AI initiative and you want to do the boring work right, here's what to actually do:
Document the current state. For the process you think could benefit from AI, write down: What are the inputs? What are the outputs? Who touches it at each step? How long does each step take? What goes wrong and how often? What's the quality metric? Are there edge cases? Write it down. With numbers if you can get them.
Map the data flow. Where does data come from? What systems store it? How does it move between systems? What data is sensitive? What's the format? Do you have to clean it before using it? This matters because AI tools work with clean, structured data better than messy, inconsistent data. If your data flow is a nightmare, that's a problem the tool won't solve.
Identify decision points. Where do humans make judgment calls? When does a ticket get routed to the senior team instead of the junior team? When does a proposal need special approval? These decision points are often where AI can actually add the most value: not by replacing humans, but by flagging edge cases and ensuring consistency. But you have to know where they are.
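To make the "flagging edge cases" idea concrete, here is a minimal sketch of a decision-point rule for ticket routing. Everything in it is illustrative: the keyword list, the confidence threshold, and the ticket shape are assumptions, not a recommendation for any specific tool.

```python
def needs_senior_review(ticket: dict, confidence: float, threshold: float = 0.8) -> bool:
    """Hypothetical decision-point rule: route to the senior team when the
    model is unsure, or when the ticket matches a known escalation trigger.

    `confidence` is whatever score your AI tool attaches to its own routing
    suggestion; the keywords and threshold here are placeholders you would
    replace with ones discovered during process mapping.
    """
    escalation_keywords = {"refund", "legal", "outage"}  # illustrative triggers
    low_confidence = confidence < threshold
    keyword_hit = bool(escalation_keywords & set(ticket.get("text", "").lower().split()))
    return low_confidence or keyword_hit

# An outage report gets escalated even when the model is confident:
print(needs_senior_review({"text": "Service outage since 9am"}, confidence=0.95))  # True
```

The point is not the code itself but where the rule comes from: you can only write it after mapping when, and why, humans currently escalate.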
Understand the quality bar. What does "good" look like for this process? Is it about accuracy? Speed? Cost? Customer satisfaction? Different answers lead to different automation strategies. If accuracy is paramount and you're willing to trade speed, the AI implementation looks different than if you're optimizing for cost and willing to accept some quality loss.
Find the constraints. What's the thing that makes this process hard? Is it volume (there's just too much work)? Is it variability (every case is different)? Is it skill requirements (only experts can do this well)? Is it tools (you're manually moving data between systems)? AI is fantastic at high-volume, repetitive work. It's not as good at handling infinite variability. Understanding your constraint helps you know if AI is the right answer.
Measure current performance. How long does the process take? How much does it cost? What's the error rate? What percentage of cases follow the happy path versus edge cases? You need a baseline, or you can't measure whether the AI actually helped. And you'll want to measure it, because the results often surprise people (usually in good ways, but not always).
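The checklist above can be captured as a simple audit record, which forces you to actually fill in the numbers. This is a sketch only: the field names and the example values are hypothetical, not a standard template.

```python
from dataclasses import dataclass

@dataclass
class ProcessAudit:
    """Illustrative record for the 'boring work' checklist; fields are hypothetical."""
    name: str
    inputs: list[str]
    outputs: list[str]
    steps: list[tuple[str, float]]  # (who/what touches it, minutes taken)
    error_rate: float               # fraction of cases that go wrong
    quality_metric: str             # what "good" means for this process
    edge_case_rate: float = 0.0     # fraction of cases off the happy path

    def total_minutes(self) -> float:
        """Baseline cycle time: the number you compare the AI version against."""
        return sum(minutes for _, minutes in self.steps)

audit = ProcessAudit(
    name="support ticket triage",
    inputs=["email", "web form"],
    outputs=["routed ticket"],
    steps=[("intake agent reads ticket", 3.0), ("routing decision", 2.0)],
    error_rate=0.05,
    quality_metric="correct routing on first pass",
    edge_case_rate=0.2,
)
print(audit.total_minutes())  # 5.0
```

If you can't fill in a field, that gap is itself a finding: it tells you what to measure before you buy anything.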
The Companies That Failed (And Why)
I tracked the outcomes of roughly 20 significant AI implementations at different companies over the last year. The ones that failed had a pattern:
They started with a demo from a vendor. They got excited. They bought the tool or hired someone to build something custom. Then they hit reality: the tool didn't integrate with their existing systems. The data was in the wrong format. The output didn't match their process. People weren't adopting it because it made their job harder, not easier. By month 4 or 5, it was basically abandoned.
Every single one of those failures would have been prevented by spending two weeks understanding the actual process before starting the implementation.
The successes had a different pattern. They spent time understanding the process first (sometimes internal, sometimes with consultants). They identified the right opportunity. They built or configured a solution. They trained people on how to use it. And then it actually got used because it solved a real problem in a way that fit the actual workflow.
The Demo Comes After, Not Before
Here's what a smart process looks like: Do the boring work (2 weeks). Figure out where AI could help (1 week). Then show a demo of the solution working on your actual data, with your actual constraints, in your actual workflow. That demo is worth seeing, because it's grounded in reality. It might show you that the solution only works 80% of the time, and you need a quality control process for the other 20%. That's useful information.
A demo in a vacuum that shows something working 100% of the time on sanitized, perfect data? That tells you nothing except that the demo was well-polished.
Boring is boring. Process documentation is not going to excite your board. Workflow mapping is not going to make headlines. But these are the things that actually determine whether your AI initiative succeeds or becomes a cautionary tale.
If you're building an AI strategy for your company and you want to avoid the demo trap, the path is clear: start with the boring work. Map the process. Understand the constraints. Identify the real opportunities. Then demo the solution on your actual problem. That's when you'll know if it's real.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Frequently Asked Questions
How long does it take to implement AI automation in a small business?
Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
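The arithmetic above can be written as a short function. The example figures are assumptions for illustration only; it also assumes a 52-week year, which you would adjust for your business.

```python
def annual_roi_pct(hours_saved_per_week: float,
                   hourly_cost: float,
                   tool_cost_per_month: float) -> float:
    """Year-one ROI as a percentage: (savings - cost) / cost * 100.

    Assumes 52 working weeks per year; hours and costs are whatever
    you measured during the process audit.
    """
    annual_savings = hours_saved_per_week * 52 * hourly_cost
    annual_cost = tool_cost_per_month * 12
    return (annual_savings - annual_cost) / annual_cost * 100

# Illustrative figures: 10 hours/week saved at £30/hour, £200/month tool cost
print(round(annual_roi_pct(10, 30.0, 200.0)))  # 550
```

A 550% year-one ROI on those example numbers sits comfortably inside the 300-1000% range quoted above; the sensitivity is almost entirely in the hours-saved estimate, which is why the baseline measurement matters.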
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
What Should You Do Next?
If you are not sure where AI fits in your business, start with a roadmap. I will assess your operations, identify the highest-ROI automation opportunities, and give you a step-by-step plan you can act on immediately. No jargon. No fluff. Just a clear path forward built from 120+ real implementations.
Book Your AI Roadmap, 60 minutes that will save you months of guessing.
Already know what you need to build? The AI Ops Vault has the templates, prompts, and workflows to get it done this week.