
When Your Automation Becomes the Job

Published: April 7, 2026 · 5 min read
#openclaw #buildinpublic #ai-agents #burnout #solopreneur #agentic-ai

Author: Jamie Watters · Project: Business OS

I used to brag about OpenClaw.

I had agents running on a Mac Mini. Content. Social. Marketing. Operations. A whole little factory humming away while I focused on the work that actually mattered. It felt like a superpower.

That was three months ago.

Today I spend more time fixing, configuring, debugging, and babysitting those agents than it would take me to do the tasks by hand. And I know I am not the only one.


What happened

OpenClaw went viral. 346,000 GitHub stars. Fastest-growing open-source project in GitHub history. And then reality showed up.

Nine CVEs in four days. 135,000 exposed instances on the public internet. Over 1,100 malicious skills planted in ClawHub. A one-click remote code execution flaw that let attackers hijack your entire machine through a single webpage visit. Meta's own AI safety director tested it with a few email tasks and had to physically sprint to her Mac Mini to stop it from deleting her inbox.

So the maintainers did what they had to do. They locked it down. Hard.

Stricter sandboxing. Multi-layer permission gates. Network controls. Manual approval prompts for things that used to just work. Every update quietly changed defaults, tightened scopes, removed implicit powers. Necessary? Absolutely. But it landed as a series of silent regressions that made the tool feel like it had been sabotaged.
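To make the shift concrete: a "manual approval prompt" of the kind described above amounts to a gate around every privileged action, denied by default. The sketch below is purely illustrative; the names, scopes, and API are invented for this post and are not OpenClaw's actual configuration.

```python
# Illustrative permission gate: privileged actions are denied by default
# and require explicit human approval. All names here are hypothetical,
# not OpenClaw's real API.

ALLOWED_SCOPES = {"read_files"}  # tightened default: write/network access removed

def gate(scope: str, action, approve=input):
    """Run `action` only if its scope is pre-approved, or a human says yes."""
    if scope in ALLOWED_SCOPES:
        return action()  # used to "just work"; now only whitelisted scopes do
    answer = approve(f"Agent requests scope '{scope}'. Allow? [y/N] ")
    if answer.strip().lower() == "y":
        return action()
    raise PermissionError(f"scope '{scope}' denied")
```

Every scope outside the shrinking allowlist now routes through a human, which is exactly the friction the updates introduced.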

Then last week, Anthropic blocked Claude subscriptions from working with third-party agents like OpenClaw. Google had already started banning accounts using it with Gemini. The economics were obvious. An autonomous agent can burn through thousands of dollars in tokens on a $20 subscription. That was never going to last.


The Agent's Dilemma

Here is what I think the deeper issue is, and it goes beyond OpenClaw.

We gave these tools hands. Then we immediately had to tie those hands behind their backs.

To be useful, an AI agent needs deep access to your system. But deep access makes it a massive security liability. The more powerful the tool, the more dangerous the failure mode. And when the failures arrived (and they were spectacular), the only responsible move was to add friction. Lots of it.

The result is a tool that now requires you to become a security engineer and a sysadmin just to get basic workflows running again. For a solopreneur, that is not a productivity gain. That is a new full-time job.

OpenClaw was not sabotaged. It was dragged out of its honeymoon period and forced to become a real product under pressure. Users are paying the tax.


The part I am less comfortable admitting

I am burnt out.

Not just on OpenClaw. On the whole meta-layer of running agents, maintaining frameworks, chasing the next tool, optimising the optimisation layer. I have been running 10 projects with 3 active at any time, using agents to automate the parts I do not enjoy. Sounds clever on paper. In practice, the automation itself became a project. And then maintaining the automation became another project. So instead of 10 projects I was running 12.

There is a version of this where I build my own agent harness to replace OpenClaw. Do it properly. Make it work the way it should.

That would be project 13.

I have enough self-awareness to recognise that pattern. The ADHD-flavoured hyperfocus impulse dressed up as strategy. "I will solve the problem of too many projects by starting another project." No.


What I am actually doing

Stopping. Not forever. But stopping the compulsive layering.

The only mistake, when you know something is not working, is to delay changing it. I have been delaying because I am attached. Sunk cost. Emotional investment in the architecture. Pride in the setup. All the usual traps.

Here is what I have landed on.

I am not building a replacement. I am not waiting for the perfect agentic platform to arrive. I am simplifying. Fewer projects. Less automation. More manual work for now. Not because agents are bad, but because the cost of maintaining the automation layer has exceeded the cost of doing the work.

That is not a failure of vision. That is arithmetic.

The Anthropic stack (Claude Code, Cowork, Computer Use) will mature. OpenClaw or something like it will eventually find the right balance between power and safety. When it does, I will be ready. But I will not burn myself out being an unpaid beta tester for infrastructure that is not ready yet.


The bigger lesson for anyone building on agentic AI

If you are a solopreneur or small team building workflows on top of autonomous agents right now, here is what I wish someone had told me six months ago.

Treat agent frameworks as disposable glue. Not core infrastructure. The moment your automation layer requires more maintenance than the work it automates, you have inverted the value equation. And because these tools are changing weekly (security patches, provider restrictions, breaking updates), that inversion can happen overnight.
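"Disposable glue" has a practical shape: put a thin seam between your workflows and whatever agent framework sits behind them, so the framework can be swapped out, or replaced with manual work, without touching anything downstream. A minimal sketch, with all names (`TaskRunner`, `ManualRunner`, and so on) invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Task:
    """A unit of work, e.g. 'draft the weekly newsletter'."""
    name: str
    payload: str

class TaskRunner(Protocol):
    """The seam: downstream code depends only on this interface,
    never on a specific agent framework."""
    def run(self, task: Task) -> str: ...

@dataclass
class ManualRunner:
    """Fallback: queue the task for a human instead of an agent."""
    queue: list = field(default_factory=list)

    def run(self, task: Task) -> str:
        self.queue.append(task)
        return f"queued for manual work: {task.name}"

class FlakyAgentRunner:
    """Stand-in for an agent-framework adapter. A real adapter would
    call the framework's API here; this one just fails, the way a
    breaking update or provider ban would."""
    def run(self, task: Task) -> str:
        raise RuntimeError("agent backend unavailable")

def run_with_fallback(primary: TaskRunner, fallback: TaskRunner, task: Task) -> str:
    """If the experimental layer breaks, degrade to manual work
    instead of taking the whole workflow down with it."""
    try:
        return primary.run(task)
    except Exception:
        return fallback.run(task)
```

When the agent layer dies, the work degrades to a to-do list instead of a debugging session, which is the whole point of keeping that layer replaceable.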

The honeymoon phase is intoxicating. The slot machine psychology is real. You hit a prompt, the agent does something brilliant, and you feel like you have unlocked a cheat code. But when it fails (and it will fail, sometimes spectacularly), you are the one holding the bag.

Build on foundations you control. Keep the experimental layer light and replaceable. And do not let the meta-work of managing your productivity system become your actual work.

That is the trap. I fell into it. I am climbing out.


I have been building in public since 2025. 10+ projects, a multi-agent stack, and a lot of lessons learned the hard way. If this resonated, follow along. The truth is the currency of the future.

What is your experience with agentic AI tools? Are they saving you time or eating it?
