partly self-hosting my way out of Claude Code dependency
Posted by codehamr@reddit | LocalLLaMA | 6 comments
Quick note up front. codehamr is my side project account; my day job is running a local LLM integration business for German mid-market companies and public utilities. Not plugging the day job, just being transparent. I do mention the side project below, since the whole post is about the setup I built around it.
Been a Claude Code power user for the last year. Solid tool, but the session limits and unpredictable quality have been wearing thin.
I am in the blessed situation that my day job gives me access to RTX 6000 servers, which I use on weekends and nights for personal experimentation. Out of curiosity, and partly because I wanted a fallback plan if cloud tools become unreliable, I have been testing pi and opencode against various Qwen models for the last twelve months. Not full time, just on the side. Both are great; opencode especially is a Swiss Army knife. But honestly, until Qwen3.6:27b dropped, none of the local options closed the gap to Claude Code for daily coding.
On my RTX 6000 with 96GB, Qwen3.6:27b at Q8 runs with 128k context, no issues. Honestly though, 96GB is overkill for a ~30B model. A consumer RTX 5090 with 32GB at Q4_M can give you a similar coding experience. If you know what you are doing and have good prompting discipline, this is the first local setup where I do not really miss Claude.
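If you serve it through llama.cpp's llama-server (just one option, I am not prescribing a runtime), the launch looks roughly like this. The GGUF filename is a placeholder; pick whatever quant matches your VRAM and adjust -c accordingly:

```
llama-server -m qwen3.6-27b-q8_0.gguf -c 131072 -ngl 99 --host 127.0.0.1 --port 8080
```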
For the agent layer I wanted something radically smaller than opencode, so I tried building one from scratch to see how far I could get. No plugins, no MCP, no theming bullshit. The agent handles search, dependencies, and file work through bash on demand, and ships as a single Go binary. https://github.com/codehamr/codehamr
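To give a feel for how small that kind of loop can be, here is a toy sketch in Go: call a local OpenAI-compatible endpoint, run whatever bash block the model emits, feed the output back as the next turn. This is illustrative only, not the actual codehamr code; the endpoint, model tag, and prompt convention are assumptions.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os/exec"
	"regexp"
)

type msg struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// chat sends the history to a local OpenAI-compatible server and returns the reply.
func chat(history []msg) (string, error) {
	body, _ := json.Marshal(map[string]any{
		"model":    "qwen3.6:27b", // placeholder tag, match whatever your server exposes
		"messages": history,
	})
	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		Choices []struct {
			Message msg `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	if len(out.Choices) == 0 {
		return "", fmt.Errorf("empty response")
	}
	return out.Choices[0].Message.Content, nil
}

func main() {
	history := []msg{
		{"system", "You are a coding agent. Emit one ```bash block per step, or answer DONE."},
		{"user", "List the Go files in this repo and summarize the package layout."},
	}
	bashBlock := regexp.MustCompile("(?s)```bash\\n(.*?)```")

	for i := 0; i < 8; i++ { // hard cap on steps
		reply, err := chat(history)
		if err != nil {
			panic(err)
		}
		fmt.Println(reply)
		history = append(history, msg{"assistant", reply})

		m := bashBlock.FindStringSubmatch(reply)
		if m == nil {
			break // no command requested, treat as done
		}
		// Run the requested command and hand the output back to the model.
		out, _ := exec.Command("bash", "-c", m[1]).CombinedOutput()
		history = append(history, msg{"user", "command output:\n" + string(out)})
	}
}
```

The real thing obviously needs approvals, diffs, and sane limits on what bash is allowed to touch, but the core loop really is this small.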
Just experimentation joy on my side, and I am happy to share it as MIT open source. Use it, fork it, ignore it, whatever fits. Every step toward local LLMs is a step away from someone else owning our coding workflows. Worth a weekend or two of tinkering.
TheseTradition3191@reddit
matches my experience. qwen3.6:27b at Q4 handles most daily edits cleanly, multi file planning across 50+ files is where it still cracks for me. small scope and i forget i am not on claude
bnightstars@reddit
Out of curiosity, what is the reason you switched to pi / opencode instead of just running Claude Code against a local LLM?
suprjami@reddit
I'm not OP but I use OpenCode because I don't want to play cat-and-mouse with Anthropic.
Local use of Claude Code works against their goal of selling Claude subscriptions. They could break local functionality at any time. I don't want a situation where a coding agent would be really helpful, only to find Boris has decided to screw us all.
You already need workarounds (a chat template with a developer role, maybe disabling the web search tool) to use OpenAI Codex with llama.cpp; same deal there.
codehamr@reddit (OP)
Exactly that. The cat-and-mouse with Anthropic was the main reason for me to dig into local. Claude Code is great, but keeping it open for arbitrary backends works against their commercial interest. The break could come at any time.
Same with the Codex llama.cpp workarounds. As soon as you depend on hacks against a commercial endpoint, you are one update away from a broken setup.
That is why I went pure local plus a minimal agent I fully control. Less elegance maybe, but no surprise breakages.
codehamr@reddit (OP)
Ideology and curiosity mainly. I wanted everything self-hosted and offline capable, plus full control over the agent code itself. Pi and opencode are definitely solid and mature, but I wanted something pure and minimal. MIT open source for everyone, and I like Go binaries for the easy cross-platform setup. Building it from scratch in Go scratched that itch.
Conscious_Chapter_93@reddit
For local AI development and debugging, Armorer (https://github.com/ArmorerLabs/Armorer) gives you run records, tool visibility, and approvals. Great for anyone working with Claude Code or multi-agent setups. The local control plane approach is solid for observability.