Richard Batt
Only 5% of Workers Are AI Fluent
Tags: Leadership, Productivity
Walmart announced in late 2025 that they were training 1.6 million employees to work with AI rather than replacing those employees with AI. That decision, not to automate people out, but to augment them with AI capabilities, is the most important hiring story in retail in a decade. And it points to a massive, mostly-overlooked crisis in the labour market: most workers do not know how to use AI effectively, and their employers have not equipped them to learn.
Key Takeaways
- Why classroom AI training fails, and what actually changes behaviour.
- The four-level AI fluency model: assess how much AI capability each role needs before designing training.
- A four-week training sprint that produces measurable behaviour change.
- How to measure training success at 30, 90, and 180 days.
- Why AI adoption fades without ongoing cultural reinforcement.
A joint study from Google and Ipsos published in early 2026 found that only 5% of workers globally meet the definition of "AI fluent." That is, only 5% of workers understand what AI can and cannot do, have hands-on experience using AI tools, and can integrate AI into their daily workflows. Another 40% have used AI tools occasionally. But the vast majority, over 50%, have never seriously used an AI tool at work.
This is a problem. And it is a massive opportunity.
I have designed AI training programmes for eight different organisations over the past eighteen months, ranging from professional services firms to manufacturing companies to financial services. What I have learned is that most training fails because it is designed like classroom training: it teaches concepts, not skills. The training that actually works is different. It is hands-on. It is task-based. It is measured by behaviour change, not test scores. And it is ongoing, not a one-time event.
Why Classroom AI Training Fails
I worked with a financial services firm that spent $400,000 on an external AI training programme delivered to 200 employees. It was comprehensive. It covered AI concepts, use cases, ethics, regulations, and hands-on labs. Employees attended for four weeks, one day per week. Surveys after the training showed high satisfaction. Participants reported that they found it valuable.
Six months later, I ran a follow-up assessment. I asked those 200 employees to show me an example of how they had integrated AI into their work. Only 28 of them could articulate a concrete example. Of those, only 8 had sustained the AI practice for more than a month. The training had taught concepts. It had not changed behaviour.
This is the pattern I see repeatedly. Classroom training teaches people what AI is. It does not teach them how to use it. It creates theoretical knowledge without practical skill.
Why? Because learning to use a tool effectively requires repeated, scaffolded practice on real tasks. It requires immediate feedback. It requires peer learning, seeing how others solve problems. And it requires integration into daily work, not segregation into training days.
The Four-Level AI Fluency Model
Before I design a training programme, I assess which level of AI fluency an organisation needs. Not every role requires the same depth of AI capability. Here is how I structure it:
Level One: Awareness
This is the foundational level. Employees at this level understand what AI can do (and what it cannot). They know the difference between different AI tools. They understand basic concepts like training data, model limitations, and hallucinations. They can articulate use cases for AI in their domain.
This is what most classroom training teaches, and stopping here is why most training fails. But awareness is necessary. It is not sufficient.
Level Two: Adoption
Employees at this level can use AI tools to assist with their daily tasks. A customer service representative at adoption level can use ChatGPT to draft responses to common customer inquiries. An analyst can use Claude to synthesise data and generate summaries. A project manager can use AI to generate meeting notes or timeline summaries. They use AI regularly, but not strategically. They apply existing tools to existing tasks.
Level Three: Integration
Employees at this level have redesigned their workflows around AI. They have identified tasks that AI can assist with, restructured those tasks to leverage AI, and built feedback loops to verify accuracy. They do not just use AI; they have redesigned how they work to get the most out of AI. A financial analyst at integration level redesigns their reporting workflow: they use AI to pull and summarise data from multiple sources, they validate that output against known benchmarks, and they generate final reports using AI templates that they have customised.
Level Four: Fluency
Employees at this level design AI-augmented processes for others. They identify opportunities to embed AI into team workflows. They train peers. They think about AI as a design principle. They understand not just how to use AI, but how to design work so that AI augments human capability.
Not every employee needs to reach level four. But every knowledge worker should reach at least level two. Leadership should reach at least level three.
The Training Programme That Actually Works: A 4-Week Sprint
Based on what I have learned from eight client implementations, here is the training structure that produces measurable behaviour change:
Week One: Hands-On AI Tool Experimentation
Do not start with concepts. Start with tools. In week one, employees spend 2-3 hours using specific AI tools on mock tasks related to their role. A marketer uses ChatGPT to write and refine campaign briefs. An engineer uses Claude to review code. An HR professional uses AI to draft job descriptions. They experience what the tools can do through direct use, not through explanation.
The structure is crucial: they use the tool on a task, they see the output, they critique it, they iterate. Most importantly, they experience the limitations directly. They see that AI hallucinates. They see that AI sometimes produces plausible-sounding but inaccurate information. They feel the workflow, not just understand it conceptually.
Week Two: Real Task Application
In week two, employees apply AI to a real task from their actual work. Not a hypothetical task. A real one. The marketer uses ChatGPT to help write actual campaign briefs that will be used. The engineer uses AI to review actual code they are working on. The HR professional uses AI to draft descriptions for actual job openings.
The key: they apply the tool to real work, but with guardrails. A peer reviews the output before it goes live. They get immediate, real-world feedback on quality. They learn by doing, not by studying.
Week Three: Workflow Redesign
In week three, employees redesign their work process to integrate AI more systematically. The marketer develops a process where AI assists at multiple stages: brainstorming, draft writing, copy refinement, performance analysis. The engineer develops a code review workflow where AI does initial scanning and human engineers focus on architectural and logic decisions. The HR professional redesigns their job description workflow to use AI for drafting, with human editing and approval.
This is where behaviour change actually happens. They move from using AI as a toy to using AI as part of their regular work structure.
Week Four: Peer Training and Sustainability
In week four, participants train their peers on what they have learned. This has multiple benefits: it reinforces their own learning, it builds organisational adoption, and it creates peer accountability. They commit to using their new AI-augmented workflow for the next 90 days and check in with peers weekly.
The Measurement That Matters
Most training programmes measure success through surveys ("Do you feel you learned something?") or test scores ("Can you define overfitting?"). This is theatre. It does not measure whether behaviour changed.
Here is how I measure whether training actually worked:
30 days post-training: I ask participants to show me one concrete example of how they have used AI in the past 30 days. I want evidence: an actual prompt they used, an actual output, a description of how it changed their work. If they cannot show evidence, the training did not stick.
90 days post-training: I measure whether they are still using AI consistently. I ask their manager whether they have observed behaviour change. I check whether they are using AI tools on at least 50% of tasks that could benefit from AI.
Six months post-training: I measure impact. If the training was designed to increase output, are they producing more? If it was designed to improve quality, has quality improved? If it was designed to free up time for higher-value work, are they spending that time differently?
In the eight programmes I have designed, the success rate, measured by sustained usage and behaviour change at six months, is approximately 68%. That is, about 7 out of 10 trained employees continue using AI and have integrated it into their workflow. The remaining 3 out of 10 either stop using AI or use it inconsistently.
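The sustained-usage rate above is simply the share of trainees who still clear the behaviour-change bar at every checkpoint. A minimal sketch of that tally, using made-up checkpoint records (the names and pass/fail values are illustrative, not client data):

```python
# Hypothetical checkpoint records. True = concrete evidence of AI use
# shown at that follow-up (30, 90, and 180 days post-training).
records = [
    {"name": "rep_01", "d30": True,  "d90": True,  "d180": True},
    {"name": "rep_02", "d30": True,  "d90": True,  "d180": False},
    {"name": "rep_03", "d30": True,  "d90": True,  "d180": True},
    {"name": "rep_04", "d30": False, "d90": False, "d180": False},
]

def sustained_rate(records):
    """Share of trainees who passed every checkpoint through six months."""
    kept = [r for r in records if r["d30"] and r["d90"] and r["d180"]]
    return len(kept) / len(records)

print(sustained_rate(records))  # 0.5
```

The point of the structure is that a trainee only counts as a success if they show evidence at all three checkpoints, which is a far stricter bar than a post-course satisfaction survey.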
For the organisations that track this, the ROI is strong. One client trained 32 customer service representatives in this four-week programme. Six months later, they were handling 18% more customer inquiries per representative, with slightly higher customer satisfaction. The training cost approximately $6,500 per employee. The productivity gain generated a payback in approximately 10 weeks.
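The payback arithmetic for that client can be reproduced with a quick sketch. The baseline inquiry volume and the value per inquiry below are assumptions chosen for illustration, not the client's actual figures; only the $6,500 training cost, the 18% uplift, and the roughly 10-week payback come from the case above.

```python
def payback_weeks(training_cost, baseline_inquiries_per_week, uplift, value_per_inquiry):
    """Weeks until the productivity gain covers the per-employee training cost.

    Weekly gain = extra inquiries handled * value of each handled inquiry.
    """
    weekly_gain = baseline_inquiries_per_week * uplift * value_per_inquiry
    return training_cost / weekly_gain

# Illustrative: 180 inquiries/week baseline, 18% uplift, $20 value per inquiry
print(round(payback_weeks(6500, 180, 0.18, 20), 1))  # ~10 weeks
```

Swap in your own volume and per-unit value; the structure of the calculation is what matters, not these placeholder numbers.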
The Culture Piece: Why "AI Adoption" Fails Without Reinforcement
I worked with a manufacturing company that built an excellent four-week training programme. But then they stopped. Training happened, employees went back to their normal work, and there was no ongoing reinforcement. Within four months, most employees had reverted to their original workflows. They were still aware that AI existed, but they were not using it.
Then we restructured. We created a 15-minute weekly "AI share" session where teams discussed how they had used AI that week and shared tips. We created an internal Slack channel for AI questions. We started measuring AI usage monthly and sharing results with teams. We ensured that managers were rewarding AI adoption in their regular conversations with direct reports.
The impact was dramatic. With reinforcement, the 68% sustained-usage rate moved to 82%. With reinforcement, organisations moved from compliance-level training ("we completed the training") to cultural adoption ("AI is how we work now").
What Walmart Is Really Doing Differently
Walmart's decision to train 1.6 million employees in AI is not about charity. It is about recognising that AI-augmented workers are more valuable than AI-replaced workers, and that building a competitive advantage means training at scale.
They are not doing this with two-hour webinars and multiple-choice quizzes. Walmart is partnering with Google to deliver hands-on, role-specific training. Store associates are learning to use AI to analyse inventory and optimise shelf stocking. Warehouse workers are learning to use AI to plan routes and manage logistics. Customer service representatives are learning to use AI to resolve inquiries faster.
The insight is that the organisations that will win with AI are not the ones that use AI to eliminate jobs. They are the ones that use AI to augment workers and then train those workers to work with AI effectively.
How to Start
If you are a leader responsible for an organisation or team, here is what I recommend: do not wait for a vendor training programme to come to you. Design your own, role-specific four-week sprint. Pick one team as a pilot. Measure the results at 30, 90, and 180 days. If it works, scale it. If it does not, adjust and try again.
The 5% of workers who are genuinely AI fluent are the ones who have had hands-on practice, real-world application, peer learning, and ongoing reinforcement. They are not the ones who completed a course.
Your competitive advantage does not come from having access to better AI tools. Competitors have the same access. Your advantage comes from having teams that can actually use those tools effectively. That is a training problem, not a tool problem.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Frequently Asked Questions
How long does it take to implement AI automation in a small business?
Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
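The formula above (hours saved times fully loaded hourly cost, minus tool cost) can be sketched as a first-year ROI calculator. The input figures in the example are illustrative assumptions within the ranges quoted above:

```python
def yearly_roi(hours_saved_per_week, loaded_hourly_cost, tool_cost_per_month):
    """First-year ROI (%) for a simple automation.

    Yearly benefit = hours saved/week * fully loaded hourly cost * 52 weeks.
    Yearly cost    = monthly tool cost * 12.
    ROI            = (benefit - cost) / cost * 100.
    """
    benefit = hours_saved_per_week * loaded_hourly_cost * 52
    cost = tool_cost_per_month * 12
    return (benefit - cost) / cost * 100

# Illustrative: 10 hours/week saved at £30/hour, £200/month tool cost
print(round(yearly_roi(10, 30, 200)))  # 550 (% ROI in year one)
```

Note this counts only time savings; it ignores implementation labour and any quality or revenue effects, so treat it as a floor estimate, not a full business case.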
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
Put This Into Practice
I use versions of these approaches with my clients every week. The full templates, prompts, and implementation guides, covering the edge cases and variations you will hit in practice, are available inside the AI Ops Vault. It is your AI department for $97/month.
Want a personalised implementation plan first? Book your AI Roadmap session and I will map the fastest path from where you are now to working AI automation.