Has AI Gone Too Far? An Honest Assessment in 2026
Tags: AI, Ethics
I work with AI every day. I build systems with it. I implement it for clients. I've built a consulting career on the idea that AI can solve real business problems. So when people ask me if AI has gone too far, I have perspective, but I'm also not neutral. I benefit from people adopting AI.
Key Takeaways
- The near-term risks are concrete: deepfakes, job displacement, privacy erosion, misinformation at scale, concentration of power, and autonomous weapons.
- The benefits are just as real: medical breakthroughs, accessibility gains, productivity improvements, and faster scientific discovery.
- The realistic danger isn't superintelligence; it's powerful systems deployed faster than our institutions can manage them.
- Responsible adoption means solving real problems, measuring outcomes, keeping humans in the loop, and being transparent with the people affected.
- Neither blanket optimism nor panic is justified; the honest answer sits in a careful middle ground.
That said, I think the honest answer is: AI has real risks, real benefits, and we're still in the phase where we're figuring out which is which. The people saying AI is purely beneficial or purely dangerous are both wrong. The truth is more careful and more important to get right.
The Real Risks (And Why They're Worth Taking Seriously)
Deepfakes and Synthetic Media
This is the one that worries me most right now. The ability to create convincing audio and video of people saying things they never said has moved from theoretical to mainstream. I can generate a video of a CEO announcing a fake acquisition or a politician making statements they never made. Both are getting easier and cheaper to do.
The risk: election interference, fraud, reputation destruction, false confessions, false evidence. These aren't theoretical. In the past year, we've seen deepfakes used to commit fraud, manipulate markets, and attempt to influence elections.
The honest part: there's no simple technological fix. Detection tools get better, but they're always one step behind generation tools. This is a societal problem that needs an institutional response: regulation, media literacy, verification standards. The AI industry can help but can't solve it alone.
Job Displacement
This is real and it's happening. Some jobs are getting eliminated or significantly reduced by AI. Customer service roles are shrinking because chatbots handle more of the work. Junior knowledge work (research, writing, analysis) is being consolidated because AI can do more with fewer people. Some creative roles are being disrupted by AI generation tools.
The number of people meaningfully displaced isn't huge yet (we're in 2026, not 2030), but I can see the trajectory. Some jobs will go away. Some will transform. Some new jobs will be created (AI training, maintenance, oversight, ethics). The distribution of who benefits and who loses isn't equal.
The honest part: I don't have a solution to this. I can tell you that throughout history, technological displacement has been painful and unequally distributed, and our social systems have struggled to handle it well. AI is not different. If anything, the speed is faster than prior technological shifts, which makes the adjustment harder.
Privacy and Data Erosion
Most AI systems need data. Lots of data. That means collecting information about people, training models on their behavior, and making inferences about their preferences. Some of this is useful (your email spam filter). Some of it is creepy (ad tracking). Some of it is actively harmful (surveillance that targets marginalized communities).
We're also in an era where companies collect data without meaningful consent and governments want access to AI models for surveillance purposes. Both are happening right now. The boundary between public and private is eroding.
The honest part: I haven't seen meaningful regulation that solves this. GDPR tried. It helped a little. It also made some valuable AI applications harder. The trade-off between privacy and capability is real and there's no obvious good answer.
AI-Generated Misinformation at Scale
In the 2024 election, we started seeing AI-generated content being used to create fake news, fake endorsements, and fake scandals at volume. Not just one deepfake. Dozens of pieces of false content created cheaply and spread widely. This is going to get worse in the 2026 election cycle. I'm certain of it.
What worries me: the marginal cost of generating false content is now near-zero. The verification cost for people consuming it is high and getting higher. That's a bad asymmetry for truth.
The honest part: I don't see this getting solved. The genie is out of the bottle. The answer probably involves media literacy (which is hard), institutional verification (which doesn't scale), and societal skepticism (which we're developing). But I'm not optimistic this gets fixed before it gets worse.
Concentration of Power
AI development is expensive. The companies building frontier models (OpenAI, Anthropic, Google, Meta, China's tech companies) have enormous resources and infrastructure. Smaller players can't compete at the frontier. This means a few organizations are making decisions about how the most capable AI systems work.
Is this inherently bad? Not necessarily. OpenAI, Anthropic, and others have people genuinely thinking about safety and harm reduction. But it's also true that a small number of companies making decisions about powerful technology that affects billions of people is a concentration of power that's worth worrying about.
The honest part: I don't see how this resolves without either regulation or continued competition. Regulation is hard to get right and can stifle development. Competition requires someone matching the resources of OpenAI, which is hard. We're probably stuck in this territory for a while.
Autonomous Weapons Systems
This is the one that gets the most hype, and I think some of the concern is overblown, but the core risk is real. AI-powered military systems that can select and engage targets without human approval are being developed. We're not at Terminator level yet, but we're moving toward weapons systems that make decisions faster than humans can oversee.
The humanitarian risk is real. The escalation risk is real. An AI system that makes a targeting error can kill innocent people faster than one that requires human approval. And once one country builds it, others follow.
The honest part: this is harder to observe from outside military and defense contractors. I see the concern articulated by serious people. I assume the concern is justified. I don't have insider knowledge of how advanced these systems are or what safeguards exist.
The Real Benefits (And Why They Matter)
I wouldn't spend my career on AI if the benefits weren't genuine. Here's what I actually see:
Medical and Scientific Breakthroughs
AI is accelerating drug discovery, genomic research, and disease diagnosis. Researchers are using AI to process massive amounts of biological data and identify patterns humans would miss. Some recent examples: AI models predicting protein folding (saving years of lab work), AI systems detecting early cancer in imaging with higher accuracy than human radiologists, AI models identifying promising drug candidates in weeks instead of months of research.
This isn't theoretical. This is saving lives and reducing suffering. If you had a family member whose cancer was detected early because an AI system flagged a subtle pattern in a scan, you'd understand why I don't think AI has simply gone too far.
Accessibility and Disability Support
AI-powered text-to-speech, speech-to-text, and language translation are genuinely changing accessibility. People who are blind can navigate the internet more easily. People with dyslexia have better tools. People who speak different languages can communicate. Deaf people have real-time speech-to-text transcription.
This is unglamorous work. It doesn't make headlines, but it deeply improves quality of life for millions of people.
Productivity and Economic Gain
I see this directly in my consulting work. A 12-person company saves 600 hours per year on routine customer support by implementing AI. A team of 5 people does the work that used to require 8. The company keeps more revenue and can hire more people elsewhere, or the owner makes more profit.
Multiply this across hundreds of thousands of businesses and you get genuine economic productivity gain. Productivity gains are how living standards improve over time. AI is contributing to that now.
Scientific Discovery Acceleration
AI is helping researchers analyze climate data, model disease spread, understand particle physics, and solve optimization problems. It's speeding up basic science in ways that will matter for human knowledge.
Solving Boring Problems
Nobody gets excited about AI automating invoicing or contract review, but thousands of hours per year are being freed up from boring work. That frees people to do more interesting work. That's a legitimate benefit even if it doesn't make headlines.
The Practitioner's Honest Assessment
I'm deep in the AI world and I see both the capability and the risks clearly. Here's my actual take:
We're not in danger from superintelligent AI destroying humanity. That's possible someday, but we're years or decades away from that mattering. The risks right now are more mundane: deception, job displacement, privacy erosion, misuse by bad actors, concentrated power. These are solvable problems if we take them seriously.
We're moving too fast on deployment without enough thinking about consequences. Some of this is necessary (you can't prevent technology, you have to manage it). Some of it is irresponsible (rolling out tools without thinking through how they'll be misused). I see both happening simultaneously.
The worst outcome is probably not AI being too powerful, but AI being powerful enough to do real harm and us not having the institutional capacity to manage it. That's where I'm actually worried. Not Skynet. Just systems that make millions of decisions that affect people, trained on flawed data, used in ways they weren't designed for, at a scale where humans can't oversee them individually.
The solution is not to stop AI development. The genie is out. You can't un-invent things. The answer is careful deployment, transparency where possible, regulation that matches the harm, and institutions that can oversee powerful systems. This is hard and I don't see it happening well yet.
What Responsible AI Adoption Looks Like
In my consulting work, I see the difference between responsible adoption and irresponsible adoption. Here's what good looks like:
Solving a real problem, not chasing novelty. You implement AI because something is actually broken or inefficient, not because AI is trendy. This filters out bad ideas.
Measuring the actual outcome, not assumptions. You implement, measure results, and adjust. Bad implementations get caught early.
Understanding what you're optimizing for. You think through what "success" means. If you optimize purely for cost reduction, you lose quality or customer satisfaction. If you optimize for speed, you sacrifice accuracy. Good adoption means being explicit about the trade-offs.
Keeping humans in the loop for important decisions. AI informs decisions. It doesn't make them autonomously. This is especially true for anything that affects people's rights, safety, or livelihood.
Planning for failure. What happens if the AI system breaks, gives bad advice, or is misused? You have plans for that. You don't just assume it will work.
Being transparent with people affected by the system. If an AI system is making decisions about whether someone gets a job, a loan, or medical treatment, they should know. Hiding AI in decision-making is usually a sign you're doing something you wouldn't be comfortable explaining.
The Honest Middle Ground
AI has probably not gone too far if we're responsible about it. AI will probably go too far if we're not. Right now, we're in a transition phase where some people are being responsible and some aren't. Some institutions are thinking about consequences and some aren't.
My job is to help the people thinking carefully move faster. My job is not to be neutral between good AI use and bad AI use. I believe there's a responsible way to build with AI that respects human agency, creates genuine value, and manages the real risks. I'm trying to do that.
For people asking whether they should adopt AI: the question isn't whether AI is good or bad in the abstract. The question is: am I using this tool responsibly for a real problem, am I being transparent about how it works, and am I prepared for it to be wrong? If the answer to all three is yes, it's probably worth doing.
For people worried about AI in general: justified concern is fine. Panic is less useful. We have real problems to solve and real risks to manage. Both are worth taking seriously without assuming the worst.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Frequently Asked Questions
How long does it take to implement AI automation in a small business?
Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
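For a sense of what that scripting tier looks like, here's a minimal Python sketch that pushes an invoice record into an automation workflow via a webhook trigger (a pattern Zapier, Make.com, and N8N all support); the URL and payload fields are hypothetical placeholders:

```python
# Minimal sketch of the "basic scripting" tier: pushing a record into an
# automation platform via a webhook trigger. URL and fields are placeholders.
import requests

WEBHOOK_URL = "https://hooks.example.com/invoice-intake"  # hypothetical endpoint

invoice = {
    "client": "Acme Ltd",
    "amount": 1250.00,
    "currency": "GBP",
    "due_date": "2026-03-31",
}

response = requests.post(WEBHOOK_URL, json=invoice, timeout=10)
response.raise_for_status()  # fail loudly if the endpoint rejects the call
print("Invoice queued:", response.status_code)
```

A dozen lines like this is usually the whole job: the no-code platform handles the routing, and the script handles the one system it can't reach natively.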
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
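If you want to make that audit systematic, here's a toy Python scoring sketch; the weights, example processes, and figures are illustrative assumptions, not a validated methodology:

```python
# Toy process-audit scoring: favour high-volume, rule-based, time-consuming
# work. All example processes and numbers are illustrative assumptions.

processes = [
    # (name, items_per_week, rule_based_0_to_1, hours_per_week)
    ("Customer onboarding", 25, 0.9, 8),
    ("Invoice processing", 60, 0.95, 6),
    ("Ad-hoc client research", 5, 0.3, 10),
]

def automation_score(volume, rule_based, hours):
    # Multiplying means a process must score on all three criteria to rank.
    return volume * rule_based * hours

for name, volume, rule_based, hours in sorted(
    processes, key=lambda p: automation_score(*p[1:]), reverse=True
):
    print(f"{name}: {automation_score(volume, rule_based, hours):.0f}")
```

In this toy ranking, invoice processing comes out on top precisely because it scores on all three criteria at once, which matches what I see in practice.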
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation and multiply by your fully loaded hourly cost to get the value of the time saved, then subtract the tool cost for net savings; divide net savings by the tool cost to get the ROI percentage. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
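To make that arithmetic concrete, here's a minimal sketch with hypothetical inputs (10 hours saved per week, a £30 fully loaded hourly cost, a £200/month tool); substitute your own measurements:

```python
# Minimal ROI sketch for a single automation. All figures are hypothetical
# placeholders; replace them with your own measurements.

hours_saved_per_week = 10        # measured before vs. after automation
fully_loaded_hourly_cost = 30.0  # salary plus overheads, GBP
tool_cost_per_month = 200.0      # subscription for the automation tool

annual_savings = hours_saved_per_week * 52 * fully_loaded_hourly_cost
annual_cost = tool_cost_per_month * 12
net_savings = annual_savings - annual_cost
roi_percent = net_savings / annual_cost * 100

print(f"Annual time value: £{annual_savings:,.0f}")   # £15,600
print(f"Annual tool cost:  £{annual_cost:,.0f}")      # £2,400
print(f"Year-one ROI:      {roi_percent:.0f}%")       # 550%
```

With these inputs the sketch lands at roughly 550% year-one ROI, comfortably inside the 300-1000% range above.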
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
What Should You Do Next?
If you are not sure where AI fits in your business, start with a roadmap. I will assess your operations, identify the highest-ROI automation opportunities, and give you a step-by-step plan you can act on immediately. No jargon. No fluff. Just a clear path forward built from 120+ real implementations.
Book Your AI Roadmap: 60 minutes that will save you months of guessing.
Already know what you need to build? The AI Ops Vault has the templates, prompts, and workflows to get it done this week.