We have firewalls for our laptops, so why don't we have one for our AI agents?
Posted by WhichCardiologist800@reddit | sysadmin | 28 comments
I am the CTO of a successful AI company, and I want to share a major concern.
My teams use AI for coding daily. On one hand, I want to give them the flexibility to move fast without blocking them behind heavy rules and security layers. On the other hand, I am seeing frequent mistakes, some of them critical, like an AI agent attempting to upload .env files to a public repo.
As leaders, we manage firewalls and security policies across our entire hardware fleet, yet we aren't taking the same approach with agents. Giving an AI agent full access to a terminal, database, or codebase is a massive security risk. We don't give our human junior devs unlimited access, so why does the agent have it?
I decided to start treating the LLM like any other untrusted process. I built an AI firewall: an execution security layer that acts as a system-level gatekeeper for both terminal commands and MCP tool calls.
The project, node9-proxy, sits as a transparent proxy between the user and the LLM, intercepting stdin, stdout, stderr, and JSON-RPC tool calls in real time.
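To make the interception idea concrete: MCP traffic is JSON-RPC 2.0, and tool invocations arrive as `tools/call` messages. The sketch below is not node9-proxy's actual code (the repo's internals aren't shown here), and the tool names in `DANGEROUS_TOOLS` are hypothetical; it only illustrates how a proxy might classify a message before forwarding it.

```python
import json

# Hypothetical tool names an operator might flag; the real rule set would be configurable.
DANGEROUS_TOOLS = {"execute_sql", "write_file", "run_command"}

def inspect_rpc_message(raw: bytes) -> str:
    """Classify an MCP JSON-RPC message as 'allow' (forward) or 'review' (pause for a human)."""
    msg = json.loads(raw)
    # Non-tool-call traffic (initialization, tools/list, responses) passes through untouched.
    if msg.get("method") != "tools/call":
        return "allow"
    tool = msg.get("params", {}).get("name", "")
    return "review" if tool in DANGEROUS_TOOLS else "review" if not tool else \
           ("review" if tool in DANGEROUS_TOOLS else "allow")
```

A real proxy would sit on the agent's stdio pipe, buffer complete JSON-RPC frames, and only forward a `tools/call` once the decision is `allow` or a human approves it.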
During development, my agent actually triggered a series of commands that could have been disastrous. The proxy caught them, applied a smart-shield rule, and paused for human verification. Once I saw this working, I added a cost-tracking tool to monitor the price of every agent action. The agent even helped me write its own loop-detection logic after it got stuck in a recursive command loop, a perfect dog-fooding case for why we need a human in the loop.
Command interception: pauses potentially malicious agent commands (bash, sh, git, etc.) for human review.
MCP tool governance: intercepts MCP calls, so you can see and approve exactly what the agent is trying to do in your database (PostgreSQL), your filesystem, or your cloud providers (AWS/GitHub).
Policy engine (RBAC-style): define granular rules. For example, always allow ls and cat, but require manual approval for rm, DROP TABLE, or git push.
Cost guard: provides real-time visibility into token usage, letting you kill a process before it burns through your budget.
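The policy-engine idea from the list above can be sketched in a few lines. This is a minimal illustration assuming a simple allow-list/approval-list model with a default-deny fallback; node9-proxy's actual rule format may differ.

```python
import shlex

# Hypothetical rule tables; the real policy format would be operator-configurable.
ALWAYS_ALLOW = {"ls", "cat", "pwd", "echo"}
REQUIRE_APPROVAL = ("rm", "git push", "drop table")

def decide(command: str) -> str:
    """Return 'allow' or 'approve' (pause for a human) for a shell command."""
    tokens = shlex.split(command.lower())
    if not tokens:
        return "approve"
    normalized = " ".join(tokens)
    # Dangerous prefixes always require a human in the loop.
    if any(normalized.startswith(rule) for rule in REQUIRE_APPROVAL):
        return "approve"
    # Explicitly whitelisted base commands pass through.
    if tokens[0] in ALWAYS_ALLOW:
        return "allow"
    # Default-deny: anything unrecognized also waits for approval.
    return "approve"
```

The important design choice is the last line: unknown commands pause rather than pass, so forgetting to list a dangerous command fails safe instead of open.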
In a world of increasingly autonomous agents, an ai firewall should be a standard component of a secure operating system, just like a network firewall or SELinux.
node9-proxy is open-source and free. I’d love to hear from other CTOs, SysAdmins, and DevOps engineers, what kind of policy controls or logging formats would you want to see in an AI firewall?
ledow@reddit
Invent technology.
Give it full permissions.
Worry about security later.
That's how companies operate and, no matter how much they claim to have cybersecurity training, departments, experts, policies, that's what happens.
And they never realise that retro-fitting security just DOES NOT WORK.
There is no way that any AI app should have been approved for usage before the security and permissions were resolved.
Bolting on some third-party proxy to try to rein in this shite is the wrong solution.
WhichCardiologist800@reddit (OP)
So what is a good solution? Any ideas?
ledow@reddit
Step 1) Stop letting AI agents upload to your repos...
WhichCardiologist800@reddit (OP)
This is exactly what the proxy does. Now I have control to make sure all my teams act correctly.
ledow@reddit
No, the proxy adds a layer in between to determine if they should be able to, right?
The determination of that should be on the repo's permissions.
You're trying to "filter out the badness" which simply doesn't work for security, if there's anywhere along the path where you're just permitting that badness anyway.
It's like trying to filter out SQL statements for malicious access but still executing that SQL statement as a full database admin.
It's a nonsense. I know you're trying to peddle your product (free or not), but it's the wrong solution. The solution to AI stuff doing dangerous things is to NOT LET AI HAVE PERMISSION TO DO DANGEROUS THINGS. Not "try to guess if this is a dangerous thing that the AI wants to do".
WhichCardiologist800@reddit (OP)
Bro, I have been working with AI for 20 years... you will never know whether it is the correct approach unless you use it... this is open source, you can try it or not... it is your decision.
nuttertools@reddit
You are missing the fundamental point. The problem is not something a bolt-on is the solution for. The problem is an age-old, continually repeated one of new systems being granted exemptions to bypass existing security controls. Take the env file as the example: the security issues are A) the env file could be uploaded, fix that; B) the disclosure would not have been detected, fix that. Once the fundamental problem of bypassing security and compliance controls has been resolved, THEN bolt-ons can improve developer experience, never as a band-aid for a self-inflicted gunshot.
WhichCardiologist800@reddit (OP)
I have 100 team members, each using a different agent, plus 10 agents running in the cloud. How do you suggest managing that? I found the answer: one tool, one policy for all.
nuttertools@reddit
Basic security and compliance policies, and enforcement thereof. If the agents can do things the team members cannot, remove that exemption. If they cannot, there is a breakdown of policy enforcement; resolve that.
The tool itself isn't a problem/bad/not useful. You're just focusing on the one thing it specifically should not be used for (core security enforcement) as opposed to the many things it should be used for.
WhichCardiologist800@reddit (OP)
You are correct; the open question is how to do it.
nuttertools@reddit
The same way it’s always been done, nothing changes because an agent is doing it.
Take the env file as an example (because it's simple). Source control should deny permission on push. If the user gets creative, it should be immediately detected and raise a security incident, likely suspending the user at the same time. That's the security side of things. On the developer-experience side, it'd be pretty cool if an incompetent and/or lazy team member didn't continually cause problems; that's what your tool is for. It's the same for almost every feature you listed: the fundamental security side of this already exists, and this tool should NEVER be used as a replacement. It's an augment for improving the experience of using an agent.
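The "source control should deny permission on push" point maps to a server-side hook. Below is a minimal, hypothetical sketch of the check such a hook could run; a real pre-receive hook would obtain the changed paths from `git diff --name-only <old-rev> <new-rev>` and exit non-zero to reject the push. The blocklist patterns are illustrative, not a complete secrets policy.

```python
import fnmatch

# Filename patterns the server-side hook refuses to accept; adjust to your policy.
BLOCKED_PATTERNS = [".env", "*.env", "*.pem", "id_rsa"]

def _is_blocked(path: str) -> bool:
    """Match the final path component against the blocklist."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch.fnmatch(name, pat) for pat in BLOCKED_PATTERNS)

def push_is_allowed(changed_paths):
    """Return (ok, offending_paths) for the files touched by a push."""
    offending = [p for p in changed_paths if _is_blocked(p)]
    return (not offending, offending)
```

This enforces the boundary at the repo, so it holds regardless of whether a human, a script, or an agent issued the push, which is exactly the commenter's point.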
WhichCardiologist800@reddit (OP)
I am with you... and your approach is correct. The main challenge is that it means I need to go into many systems and define rules. I am suggesting doing it in one place...
nuttertools@reddit
Yes and that is exactly the problem. Not a new problem either, a continually repeated approach for companies that want to fudge compliance surveys and pray to never be audited.
You have systems that you pay for (in one form or another) that were selected, vetted, configured, audited, and deployed. Now you are describing taking those out of compliance because a new tool needs to go through the same process and you would rather not bother following basic policies.
It’s the equivalent of adding a proxy to sniff traffic for “password:” instead of enforcing secure authentication methods. The tool fundamentally does not create a security boundary, it’s a bolt-on that improves experience.
ledow@reddit
Again: Don't have 10 agents running in the cloud.
Did you audit their capabilities, their individual permissions? Did you work on a least-privilege principle? Did you determine the access they would require and grant only that, and audit all actions outside that permitted access to ensure compliance?
What you've done is the equivalent of "default allow" and then "deny this particular thing I just thought of".
That's NOT secure. But security is clearly secondary in your mind, and convenience, and even "we must have this tool", took priority. Like I implied happens, in my first post.
WhichCardiologist800@reddit (OP)
So what are you suggesting, that I manage the permissions for each agent individually?
ledow@reddit
I graduated my AI courses 26 years ago.
Kumorigoe@reddit
Sorry, it seems this comment or thread has violated a sub-reddit rule and has been removed by a moderator.
Do Not Conduct Marketing Operations Within This Community.
Your content may be better suited for our companion sub-reddit: /r/SysAdminBlogs
If you wish to appeal this action please don't hesitate to message the moderation team.
St0nywall@reddit
Traditional firewalls will block/allow network traffic. They can also inspect the contents of the traffic and block based on those contents.
You don't want a firewall for your AI agents; you need a way to prevent the agent from doing something it isn't supposed to do. You need to impose guardrails, and a component of that can be a firewall, but most of it should be incorporated into the base program layer of the agent framework.
That's my opinion.
WhichCardiologist800@reddit (OP)
This is a real issue, because you want strict rules, not to leave it to the AI to decide.
St0nywall@reddit
AI is equivalent to a very smart 8-year-old who has had no schooling or parenting, and as a result has no intuition it can use about what it is being asked to do.
WhichCardiologist800@reddit (OP)
Exactly, but like an 8-year-old, it has its own interpretation, and sometimes it's wrong.
SquashNo7817@reddit
The problem for others is that you don't trust the AI your team is using, but when you sell your own AI you want it to access everything.
Does your AI product have limits on access? Some companies could just as well ask why they should upload stuff to your product.
The real issue is: an agent is just code. It is the same as some dev running curl on the command line 10 years ago. Yes, they can upload stuff to some pastebin or so.
Or install some pypi/npm package and ruin everything.
If you block it, you spoil or delay development.
Education is the only option.
WhichCardiologist800@reddit (OP)
Exactly, education and transparency.
CanadianPropagandist@reddit
I'm genuinely worried about how some people are crafting agents. Like it sure sounds as if you're giving agents access to base level shells.
This is your real problem, you've mistaken "move fast" for "get real sloppy with it". Might be the whole industry's problem.
WhichCardiologist800@reddit (OP)
You're right, we all learn as we move fast. Sometimes it's better to take ten steps forward and one step back. By the way, I also added a git snapshot feature in case the AI shuffles the code.
AmazingHand9603@reddit
This is honestly what a lot of AI teams need right now. Seeing what the agent is doing before it does damage is a game-changer. I’d love to be able to tag actions for review so I can train both the AI and my team on what to look out for. Customizable alerts would be handy too. Thanks for sharing this.
WhichCardiologist800@reddit (OP)
many thanks!!
WhichCardiologist800@reddit (OP)
GitHub: https://github.com/node9-ai/node9-proxy