
Richard Batt

'Good Enough' AI Is Your Most Dangerous Competitor. Here's Why

Tags: AI, Strategy, ROI, Competitive Advantage


The Real Finding Nobody's Talking About

MIT researchers just finished testing 40+ AI models against 11,500 real business tasks. The headline everyone caught: AI is "good enough to pass but not good enough to impress." The headline they missed: this is exactly the finding that kills incumbents.

Fortune's coverage focused on AI quality gaps. But the actual study found something more dangerous: on 65% of text-based work (emails, reports, initial analysis, documentation, first drafts), AI hits "minimally sufficient." Not perfect. Not dazzling. Just adequate.

For a consulting business, that's a problem. For a business that needs 3x throughput at 1/10th the cost, that's a strategy.

The Reframe That Changes Everything

Here's what the study really showed: the question isn't whether AI is "good enough." The question is whether your business model can survive when your competition is 95% cheaper and available 24 hours a day.

I've watched this unfold across 120+ projects in 15+ industries. The pattern is always the same.

Six months ago, a client was reviewing compliance documentation for accuracy. Two analysts. Eight hours a day. They were catching 94% of issues. That's excellent work. Human-grade.

Then their competitor deployed AI on the same task. The AI caught 87% of issues. Worse by any standard. But their competitor's team now reviews 3x the volume because the AI runs overnight. And at that scale, 87% suddenly catches more problems in absolute numbers than the human team ever did.
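To make that concrete, here's a back-of-the-envelope sketch in Python. The volumes and catch rates are the illustrative numbers from the story above, not figures from the MIT study.

# Back-of-the-envelope: catch rate x volume, using the illustrative
# numbers from the story above (not figures from the MIT study).
# Assume, for simplicity, one issue per document on average.

docs_per_day_human = 100      # what two analysts get through in a day
docs_per_day_ai = 300         # 3x the volume, because the AI runs overnight

human_catch_rate = 0.94       # share of issues the human team catches
ai_catch_rate = 0.87          # share of issues the AI catches

issues_caught_human = docs_per_day_human * human_catch_rate   # ~94 per day
issues_caught_ai = docs_per_day_ai * ai_catch_rate            # ~261 per day

print(f"Human team:  ~{issues_caught_human:.0f} issues caught per day")
print(f"AI pipeline: ~{issues_caught_ai:.0f} issues caught per day")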

That's the reframe. You're not comparing "AI quality" to "human quality." You're comparing "AI throughput at 1% of the cost" to "human throughput at 100% of the cost."

The businesses getting hurt aren't the ones waiting for perfect AI. They're the ones waiting for perfect AI while someone else already shipped adequate AI to production.

What 120+ Projects Taught Me About the 80/20

Across all those projects, I've noticed something consistent about which tasks AI actually needs to be perfect for. Spoiler: it's a smaller list than most people think.

Break down the typical business workflow by volume:

About 80% of tasks are routine. Email responses, data entry, initial analysis, report formatting, documentation updates, compliance checks, meeting summaries. These are the ones where "good enough" is literally the job description. The goal isn't to impress. The goal is to move the work through the system.

For these tasks, an AI system that gets it right 85% of the time and runs at 1/100th the human cost isn't a gap. It's a business model.

The other 20% are judgment calls. Client strategy. Escalation decisions. Interpretation of ambiguous requirements. Creative problem-solving. These are the ones where 87% isn't enough: they need the human judgment that AI can't replicate yet. Maybe ever.

The mistake most businesses make: they're still trying to deploy "good enough" AI on the 20%. That fails, they declare "AI doesn't work for our business," and they never touch the 80% where it would actually move the needle.

The businesses winning right now? They're doing the opposite. They figured out which 80% is actually "good enough to pass" and deployed it there. Then they freed up their people to do the 20% that actually requires judgment.

How to Find Your "Good Enough" Tasks

Not every task fits. Here's the practical framework I use with clients to identify which ones do.

Step 1: Map your workflow by volume. What is your team actually doing all week? Not what's on the job description. What they actually spend 40 hours on. List the top 20 tasks by time spent. You'll find 3-5 that account for half the week.
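If you want to do that mapping with something more than a whiteboard, a minimal sketch like this will do: feed it rough hours per task and it tells you which few tasks cover half the week. The task names and numbers below are placeholders.

# Minimal sketch: rank tasks by hours spent and find the few that
# account for half the team's week. The task names and hours are
# placeholders; swap in your own time-tracking data.

weekly_hours = {
    "email responses": 11,
    "compliance doc review": 9,
    "report formatting": 7,
    "meeting summaries": 5,
    "data entry": 4,
    "client strategy": 3,
    "escalation decisions": 1,
}

total = sum(weekly_hours.values())
running = 0
top_tasks = []
for task, hours in sorted(weekly_hours.items(), key=lambda kv: kv[1], reverse=True):
    top_tasks.append(task)
    running += hours
    if running >= total / 2:
        break

print(f"{len(top_tasks)} tasks cover half the week: {', '.join(top_tasks)}")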

Step 2: For each task, ask: "What are the acceptance criteria?" This is the key question. Is the job "produce something without errors" or is the job "produce something that moves us forward"? The first kind of task is a good AI candidate. The second isn't.

A compliance documentation review? Acceptance criteria: "catch the issues we'd catch." That's a good AI task. AI gets 87%. You need 90% minimum. Deploy AI with a human check on the 10% you're least sure about. Problem solved.
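One way to implement that human check is a simple confidence threshold: the AI keeps what it's sure about, and the uncertain slice goes to a person. The sketch below makes that routing explicit; run_ai_review is a hypothetical stand-in for whatever model or review tool you actually use, and the threshold is something you'd tune against your own acceptance criteria.

# Sketch of a confidence-threshold router: the AI keeps the reviews it is
# confident about, and anything below the threshold goes to a human queue.
# run_ai_review() is a hypothetical stand-in for your real model call.

CONFIDENCE_THRESHOLD = 0.90          # tune against your acceptance criteria
human_review_queue = []              # feed this into whatever review workflow you already use

def run_ai_review(doc):
    # Placeholder: call your model or review tool here and return a finding
    # plus a confidence score between 0 and 1.
    return {"finding": f"no issues flagged in {doc}", "confidence": 0.84}

def route_document(doc):
    result = run_ai_review(doc)
    if result["confidence"] >= CONFIDENCE_THRESHOLD:
        return {"doc": doc, "finding": result["finding"], "reviewer": "ai"}
    # The uncertain slice (roughly 10% in this example) gets human eyes.
    human_review_queue.append({"doc": doc, "ai_suggestion": result["finding"]})
    return {"doc": doc, "finding": None, "reviewer": "human-pending"}

print(route_document("supplier_contract_0042.pdf"))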

A sales strategy for a new market? Acceptance criteria: "generate insights nobody's seen before." That's a bad AI task. AI can summarize industry trends (good enough), but it can't synthesize competitor behavior with your specific constraints and your founder's intuition. Don't force it.

Step 3: For good candidates, ask: "What's the cost of imperfection?" In compliance, missing 13% of issues means audit risk. That's a hard cost. Worth paying for the AI + human check.

In a first-draft email response, getting the tone slightly wrong means you revise in 10 minutes. The cost of imperfection is trivial. Ship the AI version.

Step 4: Prototype on one task. Don't overhaul your entire workflow at once. Pick one high-volume task where the acceptance criteria are clear and the cost of imperfection is low. Run the AI version for two weeks. Measure: how much time does it save, how much does your team have to rework it, and where does it systematically fail?

You'll get three outcomes:

First: the AI version works well enough. Ship it to the whole team.

Second: the AI version works if you tweak it slightly. Most common. Adjust the prompt or the tool. Re-test. Then ship.

Third: the AI version doesn't work for this task. Keep the human version. Move to the next task.

That's it. Four steps and a decision. You're not waiting for perfect AI. You're finding the 80% where adequate AI actually works for your business model.
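For the two-week prototype in Step 4, you only need three numbers per task: hours saved, rework rate, and systematic failures. Here's a minimal sketch of that measurement and the three-outcome decision; the thresholds are illustrative and should be set against your own acceptance criteria and cost of imperfection.

# Minimal sketch of the Step 4 prototype decision. The thresholds are
# illustrative; set them against your own acceptance criteria and the
# cost of imperfection for the task.

from dataclasses import dataclass

@dataclass
class PrototypeResult:
    hours_saved_per_week: float   # time the AI version freed up
    rework_rate: float            # share of outputs the team had to redo
    systematic_failures: int      # repeatable failure modes you spotted

def decide(result):
    if result.rework_rate <= 0.10 and result.systematic_failures == 0:
        return "works well enough: ship it to the whole team"
    if result.rework_rate <= 0.30:
        return "close: tweak the prompt or tool, re-test, then ship"
    return "doesn't work for this task: keep the human version, move on"

# Example: a first-draft email prototype after two weeks of measurement.
print(decide(PrototypeResult(hours_saved_per_week=6.5,
                             rework_rate=0.18,
                             systematic_failures=1)))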

The Danger Isn't AI Quality. It's Speed.

Here's the part that should actually keep you up at night.

The businesses that deployed "good enough" AI three months ago are already 300 tasks ahead. They're not waiting for AI to be perfect. They shipped it, learned from it, tuned it, and kept going.

Your competitor didn't hire three new analysts. They deployed AI to the compliance documentation task and freed up Sarah to actually negotiate contracts instead of reviewing documents. They deployed AI to the first-draft email responses and freed up the sales team to do actual selling.

They're doing more work with the same headcount. At lower cost. While you're still researching whether AI is "good enough."

The problem with waiting for perfect AI is simple: while you wait, your competition doesn't. And at scale, "good enough today" beats "perfect six months from now."

Key Takeaways

1. "Good enough" is the point. MIT's study found AI delivers "minimally sufficient" results on 65% of text-based tasks. For that 65%, "adequate" at near-zero cost beats "excellent" at high labor cost. The question isn't whether AI is perfect. It's whether it's better than the status quo.

2. Most of your work is "good enough" territory. About 80% of typical business workflows are routine execution. Email responses. Data entry. Report formatting. First drafts. These are exactly where "minimally sufficient" AI works. The other 20% (strategy, judgment calls, creative problem-solving) still needs humans.

3. Speed beats perfection in execution. The businesses deploying "good enough" AI right now are already doing 3x the volume with the same team. They're not waiting for AI to improve. They shipped it, learned from it, adjusted it, and moved on. Your competition is three months ahead.

4. Deploy the right task first. Don't force AI onto judgment work. Map your workflow by time spent, identify high-volume routine tasks, prototype on one, measure the savings and rework, then scale. The right "good enough" task can save 10+ hours a week with minimal quality risk.

Frequently Asked Questions

Is "good enough" AI actually good enough for business use?

Yes, but only for specific tasks. MIT's study found AI hits "minimally sufficient" on 65% of text-based work. The key is matching AI to tasks where "adequate" is actually the job description. Email responses, initial analysis, documentation, report formatting: these are good AI tasks because acceptable work doesn't need to be excellent. If the task requires judgment, creativity, or client interpretation, AI probably isn't good enough yet.

When should I use AI instead of hiring a person?

Use AI when: (1) the task is high-volume and repetitive, (2) the acceptance criterion is "produce something adequate," not "produce something novel," and (3) a human can quickly check or adjust the output. Don't use AI when: (1) the task requires judgment about what "good" even means, (2) the cost of error is high and the task is complex, or (3) the output needs to reflect your specific brand voice or client relationships. The hybrid model wins: deploy AI on the 80% of routine work, and free up your people to do the 20% that actually requires judgment.

How much money can AI actually save my business?

For a 20-person team where half the weekly work hours go to routine tasks, AI deployment on those tasks could save 200-400 hours per month. At an average loaded cost of £40/hour, that's £8,000-16,000 a month in labor freed up, either through reduced headcount or by reallocating people to higher-value work. Real clients I've worked with: one logistics firm saved £12K/month in invoice processing. A healthcare practice saved £8K/month in compliance documentation prep. A recruiting firm saved £6K/month in candidate screening summaries. Those aren't edge cases. That's what happens when you stop researching AI and start deploying it to the right 80%.
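The arithmetic behind those figures is straightforward; here it is as a quick sketch, using the assumed numbers from the answer above.

# Quick ROI arithmetic using the assumed figures above: 200-400 routine
# hours automated per month at a £40/hour loaded cost.

loaded_cost_per_hour = 40                      # GBP, fully loaded
hours_saved_low, hours_saved_high = 200, 400   # per month

monthly_saving_low = hours_saved_low * loaded_cost_per_hour    # £8,000
monthly_saving_high = hours_saved_high * loaded_cost_per_hour  # £16,000

print(f"Monthly labor freed up: £{monthly_saving_low:,} to £{monthly_saving_high:,}")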

What tasks should I automate first?

Start with high-volume, low-judgment tasks that take up significant team time: email response triage, meeting summaries, first-draft reports, compliance checklist preparation, data entry validation, customer intake documentation, initial research summaries. Pick one where the cost of a 10% error is tolerable (like a first draft, not a legal filing), prototype it for two weeks, measure the time savings and rework rate, then scale. Avoid starting with tasks that require deep client interpretation, complex judgment calls, or high regulatory stakes. AI isn't ready for those. Get wins on the easy 80% first.

Will AI quality improve enough that I don't need human review?

Probably not for most business-critical work. AI will get better, but the real improvement isn't "perfect AI." It's "AI good enough that humans can review 10x faster." A human reviewing an AI-drafted compliance summary is faster than drafting from scratch. A human checking an AI-generated report format is faster than building it from raw data. Don't wait for AI that doesn't need humans. Build workflows where AI handles the high-volume first-pass work and humans do the fast review. That's the winning model today, and it's likely the winning model in two years as well.

The Lesson in 120 Projects

I've watched two types of businesses emerge from the AI wave. One keeps waiting for perfect tools. The other ships with adequate ones.

The waiting businesses are building the perfect strategy. The shipping businesses are already three automations ahead.

You can't wait for the perfect moment. The MIT study confirms what I've seen in the field: "good enough" AI deployed today beats perfect AI coming in six months. The businesses that figured out which 80% of their work is actually "good enough" territory are already running at 2-3x their previous efficiency.

The question isn't whether AI is good enough. It's whether your business is fast enough to deploy something adequate before your competition does.

Ready to Deploy "Good Enough" AI?

If you're seeing that 80% of your work could actually benefit from "good enough" AI, the next move is identifying which specific tasks in your business fit the pattern. That's where the AI Readiness Audit comes in. Over a working session, we map your actual workflow, identify the 80% candidates, assess the implementation path, and walk out with a prioritized roadmap.

Or if you want the practical toolkit now (the prompts, decision frameworks, and governance templates I've battle-tested with 120+ clients), that's the AI Ops Vault. Members get the "good enough" deployment checklist, the 4-step task assessment framework, the prompt library for routine work, and access to the community of implementation-focused practitioners.

Both paths start with the same insight: your competition isn't waiting. Neither should you.

Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.

What Should You Do Next?

If you are not sure where AI fits in your business, start with a roadmap. I will assess your operations, identify the highest-ROI automation opportunities, and give you a step-by-step plan you can act on immediately. No jargon. No fluff. Just a clear path forward built from 120+ real implementations.

Book Your AI Roadmap: 60 minutes that will save you months of guessing.

Already know what you need to build? The AI Ops Vault has the templates, prompts, and workflows to get it done this week.
