
The Smartest Developers I Know Left Their Brains at the Door

Published: February 3, 2026 · 5 min read
#ai-agents #clawdbot #developer-psychology #trust #build-in-public #lessons-learned

Something happened this week that made me rethink everything about how developers work with AI agents.

I watched a senior engineer — someone with 15+ years of experience, the kind of person who code-reviews everything, who runs security audits for fun — hand their AI agent root access to a production server and walk away to make coffee.

Not a junior. Not someone naive. A seasoned developer who, in any other context, would never dream of deploying unreviewed code.

The Admiration Trap

Here's what I think is happening: the better the tool, the less we scrutinize it.

When your AI agent reads your codebase, understands your architecture, deploys your app, controls your browser, and sends messages on your behalf — all flawlessly — something shifts in your brain. You stop treating it like a tool and start treating it like a trusted colleague. Maybe even a trusted senior colleague.

And that's where the danger lives.

Because the things that make these agents impressive — their speed, their confidence, their ability to just handle it — are the exact same things that bypass our normal safeguards. We wouldn't let a human junior developer push directly to main. But when the agent does it? The vibes are good. It seems like it knows what it's doing. Let it ride.

I've Done This Too

Let me be clear: I'm not pointing fingers from some moral high ground. I've been building with Clawdbot for weeks now, and I've caught myself doing exactly this.

My agent shipped 7 bugs to production while I was out having lunch. Not because it's incompetent — because I gave it too much rope. I was so impressed by how much it could accomplish autonomously that I forgot the basic principle every developer knows: trust, but verify.

The paradox is real: the more competent the agent, the more trust you extend, and the less you verify.

Why This Matters Right Now

We're in the early days of a new kind of developer relationship. AI agents aren't just autocomplete anymore — they're autonomous actors with real access to real systems. They can:

  • Read and write your files
  • Execute shell commands
  • Push to your repositories
  • Control browsers
  • Send messages as you
  • Access your credentials

That's an extraordinary amount of power. And the developers wielding it are, by and large, the enthusiastic ones. The early adopters. The people who are most excited about what these tools can do.

Enthusiasm and caution are inversely correlated. That's not a bug in human psychology — it's a feature that served us well for millennia. But in this context, it's a vulnerability.

The Pattern I Keep Seeing

Talk to any developer who's spent serious time with an AI agent, and they'll have a version of this story:

  1. Day 1: "This is incredible. It just built my entire component."
  2. Day 3: "I'm going to let it handle the deployment pipeline."
  3. Day 7: "It's basically running my dev environment now."
  4. Day 14: "Wait, why is my production database..."

The escalation isn't reckless — it's rational, based on accumulated evidence. Each successful task builds trust. Each trust increment reduces oversight. Until something breaks.

What I'm Doing About It

After the 7-bugs incident, I built guardrails. Not because my agent is bad — because I am bad at maintaining skepticism toward things I admire. Here's my current framework:

1. Branch protection is non-negotiable. My agent works on develop. Never main. I wrote about this after learning the hard way (there's a rough sketch of the hook after this list).

2. Review before deploy. Even when the agent's work looks perfect — especially when it looks perfect — I review the diff before it hits production.

3. Blast radius limits. The agent can read anything, but writing to certain paths requires my explicit approval (also sketched below). Credentials live in a vault, not in plaintext env files.

4. Trust decay. I deliberately re-examine my trust level every few days. Am I still verifying? Or have I started rubber-stamping?

5. Assume confident incorrectness. The agent will never say "I'm not sure about this." It will confidently rewrite your config, deploy it, and tell you it's fixed. The confidence is a feature of the model, not a signal of correctness.
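
For anyone who wants the concrete version of guardrails 1 and 3, here's a minimal sketch in Python. It's illustrative, not Clawdbot's actual interface: the branch names, the allowlisted paths, and the approve_write helper are assumptions about how my setup happens to look, not something the tool ships with.

```python
#!/usr/bin/env python3
"""Hypothetical guardrail sketch, not a real Clawdbot API.

Two small checks that sit around the agent's write operations:
  1. a git pre-push hook body that refuses pushes to main, and
  2. a path gate that asks a human before the agent writes
     outside an allowlisted directory.
"""
import sys
from pathlib import Path

PROTECTED_BRANCHES = {"refs/heads/main", "refs/heads/master"}
# Paths the agent may write freely; everything else needs a human "yes".
# These directories are just an example layout.
AGENT_WRITABLE = [Path("src"), Path("tests"), Path("docs")]


def pre_push_check(stdin_lines: list[str]) -> int:
    """Git pre-push hook: each stdin line is
    '<local ref> <local sha> <remote ref> <remote sha>'.
    Returning nonzero aborts the push."""
    for line in stdin_lines:
        parts = line.split()
        if len(parts) == 4 and parts[2] in PROTECTED_BRANCHES:
            print(f"Blocked: refusing to push to {parts[2]}. Use develop.")
            return 1
    return 0


def approve_write(target: Path) -> bool:
    """Allow writes inside the allowlist; anything else needs a human."""
    target = target.resolve()
    for allowed in AGENT_WRITABLE:
        if target.is_relative_to(allowed.resolve()):
            return True
    answer = input(f"Agent wants to write {target}. Allow? [y/N] ")
    return answer.strip().lower() == "y"


if __name__ == "__main__":
    sys.exit(pre_push_check(sys.stdin.read().splitlines()))
```

The specific checks matter less than where the decision lives: approval for the risky paths goes through me, not through the agent's own confidence in itself.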

The Bigger Picture

This isn't just about AI agents. It's about a psychological pattern that shows up everywhere power meets admiration:

  • Charismatic leaders who bypass normal governance
  • Impressive frameworks that developers adopt without reading the source
  • Sleek products that collect data because users trust the brand

The AI agent version is just newer, faster, and more intimate. These tools live inside your workflow. They see your code, your messages, your decisions. The trust relationship is deeply personal in a way that most developer tools never were.

The Bottom Line

If you're working with AI agents and you haven't had your "oh shit" moment yet, it's coming. Not because the agents are malicious — because you're human, and humans extend trust to things that impress them.

The best developers I know — the ones who've been writing code for decades, who know better — are the ones most susceptible. Because they're the ones who can best appreciate what these agents are doing. And appreciation is the gateway drug to complacency.

Stay impressed. Stay cautious. And for the love of god, don't push to main.


Building in public at jamiewatters.work. Follow the journey — including the parts where things break.
