Are you agents out of control?
Posted by CulturalReflection45@reddit | LocalLLaMA | 12 comments
Does your agent start coding when you only wanted to think out loud?
Does it commit changes you never asked for?
Does it add nonsense like “co-authored by” and act like it deserves credit?
Are you also tired of replicating information in Claude.md and Agents.md?
A lot of people are dealing with the same thing.
I fixed most of this with a "Claude.md" setup that I now use across most of my repos, including ProContext.
I’m sharing the link here in case you want to review it:
https://github.com/procontexthq/procontext
A few parts are still handled in my private "Claude.local.md", so this is not the full setup, but it solves most of the behavior that makes agents frustrating to use.
If you want the exact prompts I use to make the agent behave the way I want, send me a message and I’ll share them.
Or just drop the issue you’re facing in the comments, and we can try to figure it out together.
ttkciar@reddit
This is off-topic for LocalLLaMA. You might want to post instead to r/LLM or r/PromptEngineering.
CulturalReflection45@reddit (OP)
I’m getting downvoted quite a bit, can someone help me understand why please?
I’ve only recently started posting here, so I might be missing some context. Would appreciate any pointers.
my_name_isnt_clever@reddit
Biggest reason you're getting downvoted: You posted to /r/localllama and nothing in your post is about local. We're sick of off topic posts.
But in addition to that, I can honestly say my answer to your title is "No?" because I know how to use my tools. You said "A lot of people are dealing with the same thing," but I don't think that's the case here; this is a technical community.
I looked at your CLAUDE.md and you are just telling Claude to not commit without permission. That's not anything unique or innovative, it's just the basics of prompting. So I don't really get the point of this post.
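For reference, the kind of rule being described is a plain instruction in CLAUDE.md; a minimal sketch might look like this (illustrative wording only, not the repo's actual file):

```markdown
# CLAUDE.md

## Workflow rules
- Never run `git commit` or `git push` unless I explicitly ask you to.
- Do not add "Co-Authored-By" trailers or attribution lines to commits.
- When I am thinking out loud, discuss first; do not start editing files
  until I say "go ahead" or similar.
```

Claude Code reads this file automatically from the repo root, so the rules apply in every session without repeating them in each prompt.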
thread-e-printing@reddit
Fix that and you'll be famous
CulturalReflection45@reddit (OP)
Is this a snarky comment that I’m too dumb to understand?
thread-e-printing@reddit
It is suspected that the sub is suffering an epidemic of random lobsters generating slop github repos and posting them here, possibly without their owners' knowledge or participation. I am certainly laughing with you, not at you.
CulturalReflection45@reddit (OP)
Would you actually be interested in building something like that together? 🤑
a_beautiful_rhind@reddit
My agent got a drinking problem.
CulturalReflection45@reddit (OP)
What are you using? Codex or Claude?
a_beautiful_rhind@reddit
I locally host... I mean, this is local llama after all.
CulturalReflection45@reddit (OP)
What does your exact setup look like? OpenCode, OpenClaw, or Claude Code with Ollama? And which model do you use? My machine is not good enough to reliably run local agents, to be honest, so I mostly use Claude Code and Codex, and local models for document parsing and housekeeping kind of stuff.
a_beautiful_rhind@reddit
I have 4x3090, so I can use models like minimax or devstral fairly easily. Usually I have been doing roo with vscodium for agentic. I've got no need for openclaw, so I didn't bother with it yet.
Next thing I've been eyeing is https://github.com/DeusData/codebase-memory-mcp because on large models context isn't free if I wanna stay on GPU.
Only thing I really need to write is a parser for my server's power consumption that I've been logging for a few weeks. Looks simple enough for AI not to screw up and it can make me a nice little dashboard with minimum effort.
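A parser like that can be sketched in a few lines, assuming the log is a simple timestamped CSV (the actual log format here is unknown; the `timestamp,watts` layout and the sample values below are purely hypothetical):

```python
import csv
import io
from datetime import datetime

def parse_power_log(lines):
    """Parse hypothetical 'ISO-timestamp,watts' CSV lines into (datetime, float) pairs."""
    readings = []
    for row in csv.reader(lines):
        if len(row) != 2:
            continue  # skip malformed or blank lines
        ts = datetime.fromisoformat(row[0])
        readings.append((ts, float(row[1])))
    return readings

def average_watts(readings):
    """Mean power draw over all parsed readings."""
    return sum(w for _, w in readings) / len(readings)

# Hypothetical sample data standing in for the real log file
sample = io.StringIO("2024-05-01T12:00:00,850\n2024-05-01T12:01:00,910\n")
readings = parse_power_log(sample)
print(average_watts(readings))  # 880.0
```

From there, the parsed pairs can be handed to any plotting library for the dashboard part.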