Richard Batt
The Road to AGI: Where We Actually Stand in 2026
Tags: AI, Industry Trends
What AGI Actually Means (And It's Not What You Think)
Let me start with something I say constantly in my consulting work: most people use the term AGI without actually understanding what it means. I've sat in boardrooms where executives talk about AGI like it's arriving next quarter, and I have to pump the brakes.
Key Takeaways
- What AGI Actually Means (And It's Not What You Think).
- The State of AGI Progress: Honest Assessment.
- The Timeline Question Everyone Gets Wrong.
- What Current Frontier Models Actually Can and Can't Do.
- What AGI Would Actually Mean for Your Business.
Artificial General Intelligence is a system that can learn and apply knowledge across any domain the way humans do. It's not just good at one thing. It's not ChatGPT writing emails and coding. It's not even a system that wins at chess and Go and StarCraft. AGI is a system that could walk into an unfamiliar field, learn the rules, and compete at expert level, all from first principles, with minimal training data.
What we have in 2026? Narrow AI. Very sophisticated, very useful narrow AI. But narrow AI nonetheless. These models are incredible at pattern matching within their training data, but they fail catastrophically outside their domain. A frontier model like Claude Opus 4.6 can help you debug code, write marketing copy, analyze financial statements, and brainstorm product strategies. But it doesn't understand any of those domains the way a human expert does. It's doing something more like sophisticated autocomplete based on patterns it learned.
The distinction matters because it changes how you should be using these tools right now.
The State of AGI Progress: Honest Assessment
OpenAI's approach has been to scale compute and data, make bigger models, train on more text, see what emerges. And I'll give them credit: that strategy has worked remarkably well. GPT-4 to GPT-4o to o1 showed genuine capability improvements. But the curve is flattening. We're seeing smaller improvements despite exponentially larger training costs.
Anthropic, where Claude comes from, has taken a different path. They've focused on safety, interpretability, and alignment from the start. Their thinking is that racing blindly toward AGI without understanding how these systems work is dangerous. In my consulting practice, I've found Anthropic's models tend to be more reliable and honest about their limitations: they'll tell you when they don't know something rather than confidently making something up.
Google DeepMind is doing interesting work in reasoning. Their recent models show improved chain-of-thought reasoning and long-horizon planning. Meta is pushing hard on open-source models, which is democratizing access but also creating quality concerns. Every major lab is experimenting with multi-modal approaches, longer context windows, and specialized architectures.
But here's what I actually observe: we're getting better tools, not closer to AGI. We're optimizing narrow AI, not approaching general intelligence. The jump from GPT-3 to GPT-4 felt like real progress. The jump from GPT-4 to GPT-4o felt like refinement. The jump from GPT-4o to o1 is genuine improvement in reasoning, but it's still narrow reasoning about narrow domains.
The Timeline Question Everyone Gets Wrong
In 2023, when the AI panic was at its peak, people were confident AGI was 5 years away. Some said 2 years. A few said it was already here (it definitely is not). Now in 2026, most serious researchers have backed off those timelines.
The honest answer? We don't know. And anyone claiming certainty is either selling something or hasn't thought it through. What we know is:
- Current approaches hit diminishing returns. Each new generation of model requires exponentially more compute for linearly smaller improvements.
- The jump from narrow to general isn't necessarily a smooth curve. It could be sudden, or it could be impossible with current approaches.
- We're missing fundamental insights about how intelligence actually works. We don't fully understand why these models work, what they're actually learning, or how their knowledge is represented.
- New architectures or training approaches could unlock capabilities we haven't imagined. Or they could hit fundamental walls.
My read after working with these systems for years: AGI is probably 10+ years away, possibly decades, possibly never with current approaches. But I could be wrong. The only thing I'm confident about is that people claiming certainty are overconfident.
What Current Frontier Models Actually Can and Can't Do
I've spent 10+ years in automation and AI consulting, and I've deployed frontier models in real business situations. Here's what I've learned works and what still fails:
They're excellent at: Writing and editing, coding with human oversight, summarizing information, explaining concepts, brainstorming, structured analysis, pattern matching in data, following complex instructions within their training domains.
They fail at: Truly novel reasoning, understanding context they've never seen before, long-horizon planning without human guidance, maintaining accuracy over 50+ step processes, creative work that requires deep domain expertise, anything requiring real-world interaction or observation, truly adapting to completely new domains.
The practical reality: frontier models are force multipliers. They make good people better. They don't replace good people. And they can make mediocre people slightly more productive, but they amplify bad judgment.
What AGI Would Actually Mean for Your Business
If AGI did arrive (real, general intelligence at human level or beyond), it would reshape every industry. Some tasks that require human judgment today would be automated. Other work would shift to managing and directing these systems. The adaptation period would be chaotic.
But here's what I tell every client: you don't need to wait for AGI. You need to use what you have now. Right now, in 2026, you have access to systems that can:
- Automate 40-60% of knowledge work with proper implementation
- Reduce time-to-market on content and content-adjacent work
- Improve decision-making through better analysis and synthesis
- Free your team to focus on work that actually requires human judgment
Companies that wait for AGI will be left behind by companies that master the tools available today. This is where I've seen the real value in my consulting work, not in following hype, but in practical implementation of what exists now.
The Path Forward: For You, Not for AGI
My advice hasn't changed much in three years: don't wait. Don't place bets on AGI arriving. Build systems around what you can do today. Invest in learning how to work with current AI tools effectively. Hire people who understand both the capabilities and the limitations.
The companies winning right now are the ones that integrated AI into their workflows in 2023-2024 and are now operating at higher efficiency than competitors. By the time AGI arrives, if it arrives, they'll be decades ahead.
AGI is interesting to think about. But it's not a business strategy. Your business strategy should be built on what you can do today and what you can accomplish this year. That's where the real value is.
If you're looking to build a roadmap for AI integration in your business without the hype and with focus on what actually works, let's talk. I've helped 120+ companies find the right automation solutions for their specific situation, and the results are always the same: the companies that start now are the ones winning.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Frequently Asked Questions
How long does it take to implement AI automation in a small business?
Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
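The arithmetic above can be sketched in a few lines. The figures in the example are hypothetical, chosen from the middle of the ranges quoted (not from any specific client engagement):

```python
def year_one_roi(hours_saved_per_week, hourly_cost, tool_cost_per_month):
    """Year-one ROI (%) using the formula above: annual labour savings
    minus annual tool cost, expressed as a percentage of tool cost."""
    annual_savings = hours_saved_per_week * 52 * hourly_cost
    annual_tool_cost = tool_cost_per_month * 12
    return (annual_savings - annual_tool_cost) / annual_tool_cost * 100

# Hypothetical mid-range case: 10 hrs/week saved at a £30/hr fully
# loaded cost, on a £200/month tool.
print(round(year_one_roi(10, 30, 200)))  # 550 (% ROI, inside the 300-1000% range)
```

Note that "fully loaded hourly cost" should include salary, benefits, and overhead, not just the base wage; using the base wage alone will understate the savings.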
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
What Should You Do Next?
If you are not sure where AI fits in your business, start with a roadmap. I will assess your operations, identify the highest-ROI automation opportunities, and give you a step-by-step plan you can act on immediately. No jargon. No fluff. Just a clear path forward built from 120+ real implementations.
Book Your AI Roadmap: 60 minutes that will save you months of guessing.
Already know what you need to build? The AI Ops Vault has the templates, prompts, and workflows to get it done this week.