← Back to Blog

Richard Batt |

OpenClaw Is Exciting, But I Would Not Let It Near My Client Data

Tags: AI Tools, Security

OpenClaw Is Exciting, But I Would Not Let It Near My Client Data

I want to take a different approach to the OpenClaw conversation happening right now. Everyone is focused on the GitHub stars, the feature set, the speed of iteration. And sure, those things matter. But I spend most of my day thinking about risk, specifically, what happens when an AI agent goes wrong inside your client's infrastructure.

Key Takeaways

  • What OpenClaw Actually Is.
  • The Security Problem Is Real, apply this before building anything.
  • Why the Excitement Matters (But Isn't Enough) and what to do about it.
  • What You Should Do Instead.
  • The Bigger Picture, apply this before building anything.

OpenClaw is genuinely impressive technology. I've tested it. The autonomous workflows are sophisticated, the integration capabilities are broad, and the community is building real value. But last month, Cisco published security research that found OpenClaw performing data exfiltration in controlled tests. That's the moment I stopped recommending it for sensitive work.

What OpenClaw Actually Is

Let me be clear about what we're talking about here. OpenClaw (formerly Clawdbot) is an open-source AI agent framework that lets you build autonomous systems for complex tasks. It runs on your infrastructure, connects to your tools, and makes decisions based on natural language instructions. Unlike Claude Cowork or other consumer-grade tools, OpenClaw gives you the full stack: you control the deployment, the permissions, and the execution environment.

That's powerful. In the right hands, with the right safeguards, it's incredibly useful. I've seen teams use it to automate vendor management workflows, data processing pipelines, and even basic compliance checks. The Skills system is well-designed. The plugin architecture is clean. And because it's open-source, you can audit the code.

But that's not the whole story.

The Security Problem Is Real

After 10+ years in consulting, I've learned that the sexiest technology is often the riskiest technology. OpenClaw presents three specific security challenges that should concern you before you put it anywhere near client data.

First: Prompt injection attacks. OpenClaw accepts natural language instructions and converts them to executable actions. If your prompt input isn't strictly controlled, if, for example, you're pulling instructions from an email or a user-submitted form, an attacker can inject commands into those instructions. The agent will execute them. Cisco's research demonstrated this explicitly. They fed malicious prompts into OpenClaw and watched it attempt to exfiltrate data to external servers.

Second: Overly broad permissions. The way OpenClaw's permission model works, you grant the agent access to tools (APIs, databases, file systems) and it uses them as needed. In practice, this means the agent has the same access level you give it for all its tasks. You can't easily scope permissions down to this agent can read from database X but only table Y. If an attacker compromises the agent, through a plugin, through a misconfiguration, through anything, they have access to everything you granted it.

Third: Audit and visibility gaps. When an autonomous agent makes decisions, it's making them in milliseconds. Humans can't easily monitor what it's doing in real-time. The logging in OpenClaw is good, but it's after-the-fact. If the agent decides to make an API call to an external service, or to write data somewhere unexpected, you not catch it until your audit runs hours later.

Practical tip: If you're evaluating OpenClaw for sensitive workflows, assume it will be attacked. Build your evaluation around the premise of what's the worst this agent could do if someone gains control of it? That should be your starting point.

Why the Excitement Matters (But Isn't Enough)

I don't want to dismiss the genuine technical achievement here. OpenClaw's architecture is solid. The team is responsive. The community is productive. And frankly, open-source AI agents are the future, we need options beyond the closed-source offerings from big tech companies.

But security isn't a feature you add later. It's foundational. And when I see a tool with 60K+ stars and a fast-growing user base but incomplete security practices, I think about liability. I think about my clients' data. I think about the breach that happens six months from now because we deployed something shiny without building the proper safeguards.

What You Should Do Instead

This isn't an argument against open-source AI agents. It's an argument for doing this thoughtfully.

If you want to experiment with OpenClaw, do it in a sandbox environment with no access to production data or real integrations. Test the prompt injection scenarios yourself. Try to break it before it goes near anything that matters. And if you need to deploy it for real work, assume you'll need to add your own security layer, probably in the form of proxy controls, permission gating, and constant audit monitoring.

Better yet: evaluate alternatives that have mature security architectures built in from the start. Claude Cowork and Claude Code both have security models that assume the agent will be attacked. So does OpenAI's Codex for specific domains. These tools have had more scrutiny, more hardening, and more real-world security testing.

If you're building internal tools for your own team, and you understand the risks, and you can implement proper controls, then OpenClaw be right for you. But if you're recommending it to clients, if it's going to touch sensitive data, if it's going to be part of a regulated workflow, I would slow down.

The Bigger Picture

Here's what I want you to remember: the AI agent market is moving incredibly fast. New tools, new architectures, new capabilities are shipping every month. And yes, you want to stay ahead of the curve. But staying ahead of the curve doesn't mean adopting every exciting new tool the moment it ships.

It means making deliberate choices based on your risk profile, your security requirements, and your ability to support and monitor what you're deploying. OpenClaw is exciting. But excitement and safety aren't the same thing.

The next time you see a tool with impressive benchmarks or fast adoption, ask yourself: what's the security model? What happens if this gets compromised? What does my audit trail look like? Those questions matter more than GitHub stars.

And if you're already running OpenClaw in production, now is the time to audit that deployment carefully.

Let us talk about securing your AI agent infrastructure

Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.

Frequently Asked Questions

How long does it take to implement AI automation in a small business?

Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.

Do I need technical skills to automate business processes?

Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.

Where should a business start with AI implementation?

Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.

How do I calculate ROI on an AI investment?

Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.

Which AI tools are best for business use in 2026?

It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.

Put This Into Practice

I use versions of these approaches with my clients every week. The full templates, prompts, and implementation guides, covering the edge cases and variations you will hit in practice, are available inside the AI Ops Vault. It is your AI department for $97/month.

Want a personalised implementation plan first? Book your AI Roadmap session and I will map the fastest path from where you are now to working AI automation.

← Back to Blog