Two Linux kernel APIs from 1999 that fix credential theft in ssh-agent, gpg-agent, and every Unix socket daemon
Posted by GroundbreakingStay27@reddit | linux | View on Reddit | 17 comments
Built a credential broker for AI agents and found that ssh-agent, gpg-agent, and every UDS-based credential tool trusts the same boundary: the Unix UID. The assumption "if they're running as you, you've already lost" breaks when AI agents execute arbitrary code as your UID by design.
## The Exploit
SO_PEERCRED records who called `connect()`, but fds survive `fork()`+`exec()`. The attacker connects, forks, the child execs the legit binary, and the parent sends on the inherited fd. The daemon hashes the child's binary, and it matches. Token issued to the attacker.
Tried eight mitigations. All failed because attacker controls exec timing.
## The Fix
- **SCM_CREDENTIALS** (Linux 2.2, 1999): the kernel verifies the sender PID on every message, not just at connection time. Fork attack: sender != connector, rejected.
- **Process-bound tokens**: each token is tied to the attesting PID. A stolen token presented from a different PID is rejected.
~50 lines total. Two attack surfaces closed.
## What We Built With It
The tool (Hermetic) does something no other credential manager does: it lets AI agents USE your API keys without ever HAVING them. Four modes:
- **Brokered:** the daemon makes the HTTPS call; the agent gets only the response
- **Transient:** credential lives in an isolated child process, destroyed on exit
- **MCP Proxy:** sits between the IDE and any MCP server, injects credentials, scans every response for leakage, pins tool definitions against supply-chain tampering
- **Direct:** prints to the human terminal only; passphrase required
The agent never touches the credential in any mode. It's not a secret manager that returns secrets; it's a broker that uses them on your behalf.
Whitepaper with full exploit chain + 8 failed mitigations: https://hermeticsys.com
Source: https://github.com/hermetic-sys/Hermetic
The vulnerability class affects any daemon using SO_PEERCRED for auth. Happy to discuss.
Booty_Bumping@reddit
Ah yes, more idiotic security snake oil sold by an industry that has become infested by scams
GroundbreakingStay27@reddit (OP)
which part specifically do you think will fail? genuinely asking. the transient mode is just env_clear + inject + exec + exit; there's not much to go wrong. the leak scanner uses exact-match against vault-derived values, not regex pattern matching, so false negatives from obfuscation are a known limitation, but zero false positives.
happy to be proven wrong on specifics, that's how the last 3 exploits we fixed got found
Otherwise_Wave9374@reddit
That SO_PEERCRED + fork/exec detail is the kind of footgun that only shows up once you start running agent code under your own UID. Really nice writeup, and +1 on SCM_CREDENTIALS as the sane fix (message-level auth instead of connection-level assumptions).
The “agent can use creds without ever seeing them” angle is exactly where I think agent security is headed. We have been collecting patterns for tool brokering + least-privilege agent setups over at https://www.agentixlabs.com/ , this post is a great real-world example of why that matters.
GroundbreakingStay27@reddit (OP)
thanks! yeah the SO_PEERCRED thing was a real eye-opener. it's one of those assumptions that's been baked in for so long nobody questions it until the threat model changes. AI agents running as your UID is that change.
will check out agentixlabs, the least-privilege agent patterns space is going to be huge. the whole industry is still in the "just trust the agent with everything" phase.
gihutgishuiruv@reddit
If you two are going to jerk each other off, can you at least do it with your own hands rather than delegating even that to an LLM?
GroundbreakingStay27@reddit (OP)
We like LLMs ...it's the future..you can try stopping it ..but they jerk so well😅
gihutgishuiruv@reddit
I can tell from how they’ve convinced you that you actually know what you’re talking about
skccsk@reddit
"The assumption "if theyre running as you youve already lost" breaks"
No it's still true even when people voluntarily hand their systems over to someone else's control.
4xi0m4@reddit
The distinction matters though: with AI agents the attacker doesn't need to compromise the user first, they just need to send a malicious prompt. The agent is trusted to make outbound HTTPS calls, so a prompt injection can redirect those calls to an attacker-controlled endpoint without the user ever losing control of their account directly. That changes the threat model compared to someone physically sitting at your terminal.
skccsk@reddit
The distinction that matters is that installing any of these 'AI agents' is the point of compromise. They are malware with marketing hype.
You will not successfully harden the OS against this because the OS isn't the problem.
https://arstechnica.com/security/2026/04/heres-why-its-prudent-for-openclaw-users-to-assume-compromise/
GroundbreakingStay27@reddit (OP)
fair, the keys are still at risk either way. the difference is that before AI agents you had to get compromised first. now code execution as your UID is the default state every time you open Cursor or Claude Code. the threat model didn't change, the baseline did.
JamzTyson@reddit
Not true. It has always been and will always be possible to shoot yourself in the foot. Your example is nothing more than another way to shoot yourself in the foot.
skccsk@reddit
Weird my systems aren't vulnerable to this.
Zeda1002@reddit
You could at least have taken your time to actually make the formatting correct if you aren't willing to write this yourself
GroundbreakingStay27@reddit (OP)
Thanks for calling out my shortcomings--noted.
hermzz@reddit
Jesus, one of the worst things about AI output is the ridiculous word salad they like to create.
GroundbreakingStay27@reddit (OP)
which part confused you? happy to explain any of it