We have firewalls for our laptops, why don't we have one for our AI Agents?

Posted by WhichCardiologist800@reddit | sysadmin

I am the CTO of a successful AI company, and I want to share a major concern.

My teams use AI for coding daily. On one hand, I want to give them the flexibility to move fast without burying them in rules and security layers. On the other hand, I am seeing frequent mistakes, some of them critical, like an AI agent attempting to upload .env files to a public repo.

As leaders, we manage firewalls and security policies across our entire fleet of hardware, yet we aren't taking the same care with agents. Giving an AI agent full access to a terminal, database, or codebase is a massive security risk. We don't give our human junior devs unlimited access, so why does the agent get it?

I decided to start treating the LLM like any other untrusted process. I built an AI firewall: an execution security layer that acts as a system-level gatekeeper for both terminal commands and MCP tool calls.

The project, node9-proxy, sits as a transparent proxy between the user and the LLM, focusing on real-time interception of stdin, stdout, stderr, and JSON-RPC tool calls.
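To make the idea concrete, here is a minimal sketch of what intercepting a JSON-RPC tool call on a stdio transport looks like. This is my own illustration, not node9-proxy's actual code; the `SAFE_TOOLS` allow-list and the tool names are hypothetical.

```python
import json

# Hypothetical allow-list; anything not on it is held for human review.
SAFE_TOOLS = {"read_file", "list_directory"}

def inspect_jsonrpc_line(line: str):
    """Classify one line of an MCP stdio stream.

    Returns ("forward", msg) for safe traffic and ("hold", msg)
    for tool calls that need human approval.
    """
    try:
        msg = json.loads(line)
    except json.JSONDecodeError:
        return ("forward", line)  # not JSON; pass through untouched

    if msg.get("method") == "tools/call":
        tool = msg.get("params", {}).get("name", "")
        if tool not in SAFE_TOOLS:
            return ("hold", msg)  # pause here and ask the human
    return ("forward", msg)

if __name__ == "__main__":
    call = json.dumps({
        "jsonrpc": "2.0", "id": 1, "method": "tools/call",
        "params": {"name": "execute_sql",
                   "arguments": {"query": "DROP TABLE users"}},
    })
    print(inspect_jsonrpc_line(call)[0])  # -> hold
```

The real work is in doing this transparently on a live pipe without breaking the protocol's request/response pairing, but the decision point itself is that small.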

During development, my agent actually triggered a series of commands that could have been disastrous. The proxy caught them, applied a smart-shield rule, and paused for human verification. Once I saw this working, I added a cost-tracking tool to monitor the price of every agent action. The agent even helped me write its own loop-detection logic after it got stuck in a recursive command loop, a perfect dog-fooding scenario for why we need a human in the loop.
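For the curious, loop detection can start as something very simple: a sliding window over recent commands that trips when the same one repeats N times in a row. This is a hedged sketch of the general technique, not the project's actual logic.

```python
from collections import deque

class LoopDetector:
    """Flag an agent that re-issues the same command too many times in a row."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        # Sliding window of the last `threshold` commands.
        self.recent = deque(maxlen=threshold)

    def observe(self, command: str) -> bool:
        """Record a command; return True if a loop is suspected."""
        self.recent.append(command)
        return (len(self.recent) == self.threshold
                and len(set(self.recent)) == 1)

detector = LoopDetector(threshold=3)
for cmd in ["git status", "npm test", "npm test", "npm test"]:
    if detector.observe(cmd):
        print(f"loop suspected: {cmd!r}")  # fires on the third 'npm test'
```

A production version would also want to catch alternating loops (A, B, A, B) and near-duplicate commands, but even this naive check stops the worst runaway spend.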

Command interception: pauses potentially malicious agent commands (bash, sh, git, etc.) for human review.
MCP tool governance: intercepts MCP calls, so you can see and approve exactly what the agent is trying to do in your database (PostgreSQL), your filesystem, or your cloud providers (AWS/GitHub).
Policy engine (RBAC-style): define granular rules, for example, always allow ls and cat, but always require manual approval for rm, drop table, or git push.
Cost guard: provides real-time visibility into token usage, allowing you to kill a process before it burns your budget.
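A policy engine like the one above can be sketched as an ordered list of pattern rules evaluated first-match-wins. The rule format here is my own illustration, not node9-proxy's actual config schema:

```python
import fnmatch

# Ordered rules, first match wins. Patterns are shell-style globs
# matched against the lowercased command line (illustrative format).
POLICY = [
    ("allow",  "ls"),
    ("allow",  "ls *"),
    ("allow",  "cat *"),
    ("review", "rm *"),
    ("review", "git push*"),
    ("review", "*drop table*"),
    ("review", "*"),  # default: anything unmatched needs human approval
]

def decide(command: str) -> str:
    """Return 'allow' or 'review' for a shell command."""
    cmd = command.lower().strip()
    for action, pattern in POLICY:
        if fnmatch.fnmatch(cmd, pattern):
            return action
    return "review"

print(decide("ls -la"))           # allow
print(decide("rm -rf /tmp/x"))    # review
print(decide("git push origin"))  # review
```

Defaulting to "review" rather than "allow" is the important design choice: an agent should earn trust per command pattern, not get it wholesale.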

In a world of increasingly autonomous agents, an AI firewall should be a standard component of a secure operating system, just like a network firewall or SELinux.

node9-proxy is open-source and free. I'd love to hear from other CTOs, sysadmins, and DevOps engineers: what kind of policy controls or logging formats would you want to see in an AI firewall?