OpenCode or ClaudeCode for Qwen3.5 27B
Posted by Ok-Scarcity-7875@reddit | LocalLLaMA | View on Reddit | 164 comments
I'm tired of copy & pasting code. What should I try and why?
Which is faster / easier to install?
Which is easier to use?
Which has fewer bugs?
OpenCode or ClaudeCode with Qwen3.5 27B?
Powerful_Evening5495@reddit
Pi is good, but I'm loving OpenCode with Qwen3.6-35B-A3B-MXFP4_MOE. I'm getting into vibe coding again.
RobertDeveloper@reddit
Can't get opencode to use tools when I use Ollama and qwen3.6
PetToilet@reddit
RobertDeveloper@reddit
Switching from Ollama to llama.cpp solved my problem.
Powerful_Evening5495@reddit
I start a llama.cpp server.
I had good luck with ollama and
https://ollama.com/library/qwen3.5:latest
You may try it
RobertDeveloper@reddit
I will try llama.cpp. With Ollama only inference works, no matter what model I select.
trycatch1@reddit
Ollama uses a tiny context by default; the tool definitions may have been trimmed from the context.
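If that's the issue, raising Ollama's context is quick - a sketch, with knob names from Ollama's docs (verify for your version; the model tag and sizes are placeholders):

```sh
# raise the default context for every model the server loads
OLLAMA_CONTEXT_LENGTH=32768 ollama serve

# or bake a larger context into a model variant:
#   Modelfile:  FROM qwen3.5:latest
#               PARAMETER num_ctx 32768
ollama create qwen3.5-32k -f Modelfile
```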
Coconut_Reddit@reddit
Why do you use MXFP4? Is it better than the Unsloth GGUF?
Powerful_Evening5495@reddit
I always use the MoE version, and now MXFP4. It started with GPT-OSS and now Qwen 3.6.
BTW, this GGUF is from Unsloth:
https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF/tree/main
PinkySwearNotABot@reddit
Isn't NVFP4 > MXFP4?
Specter_Origin@reddit
OpenCode default prompts are absolute garbage
YashN@reddit
They are easily overridden. Try doing that with Claude Code...
my_name_isnt_clever@reddit
Or just use Pi and avoid the issue altogether.
Willdudes@reddit
Pi.dev requires people to understand security. If you don't, you will have a bad time.
PinkySwearNotABot@reddit
can you elaborate?
my_name_isnt_clever@reddit
If someone doesn't understand security, then they won't understand the flagged dangerous commands either; they'll just choose allow based on vibes.
Instead, the harness has access to only that one directory at a time and no general web access. And it's in a git repo, so if it messes up it can be easily reverted. I do have security prompts enabled for hermes-agent since it touches a lot more things.
YashN@reddit
There is no issue with OpenCode either.
my_name_isnt_clever@reddit
The issue is the OC default prompts are awful for local. Pi works great for local without any config changes.
YashN@reddit
OC detects the model used and selects a system prompt accordingly. If your local model needs something else, that is really easily customised, and then you can reuse that setup every time without modifying anything. It is customisable; the only issue is the lack of minimal effort to learn how to do this.
imwearingyourpants@reddit
What kind of specs do you run it on? And what are your llama-server flags?
2Norn@reddit
i use the same setup on 5080 with 64gb ddr5, getting about 70 tk/s, if i spawn 4 parallel agents it drops to 30ish per agent
imwearingyourpants@reddit
Man, us 3060 owners are really unfortunate :(
DominusIniquitatis@reddit
Gonna be a broken record at this point, but 3060 can run Qwen 3.6 35B A3B at Q4_K_XL quantization and 131072 context just fine at nearly default llama.cpp parameters.
GoldenX86@reddit
I run it with a 3060 Ti + 1660 SUPER and 48GB of RAM. Q8 fits, but Q6K leaves me enough room to also use the PC. I get around 18 t/s. Extra experts on CPU, full 262k context on both GPUs; the first to fill up is the 3060 Ti.
Work on your parameters and it's viable.
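For anyone wanting to reproduce this, a rough sketch of that launch - model path, offload counts, and port are placeholders, and --n-cpu-moe needs a recent llama.cpp build (older ones use an --override-tensor regex to pin expert tensors to the CPU):

```sh
# full GPU offload, but keep some MoE expert layers on the CPU to save VRAM
llama-server -m Qwen3.6-35B-A3B-Q6_K.gguf \
  -ngl 99 --n-cpu-moe 20 \
  -c 262144 --port 8080
```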
Ariquitaun@reddit
The dense or the MoE version? I get about 12 t/s on the 35B on a Radeon 780M iGPU.
GoldenX86@reddit
MoE, 14GB combined can't fit the 27B dense.
ProfessionalSpend589@reddit
Opencode runs perfectly fine on a Raspberry Pi 4 connecting to a cluster of Strix Halos with eGPUs.
PayMe4MyData@reddit
And how are those eGPUs connected to the Halos? USB4, or OCuLink via the PCIe port? (I'm hoping it's the latter.) Any loss in inference performance?
ProfessionalSpend589@reddit
OcuLink via extension cable for PCIe. For some reason I failed to run them via the USB4 port.
If we look at a single pair of Strix Halo and an eGPU - the strix halo works at full capacity while the GPU… is wasted on this type of workload.
But you could load a 120B model in quant 8 or use it for ComfyUI.
Powerful_Evening5495@reddit
rtx 3070 and 32gb ram
itroot@reddit
This - https://pi.dev/
ansibleloop@reddit
This thing is so good - I'd highly recommend installing the caveman skill too
https://github.com/juliusbrussee/caveman
Love the philosophy of Pi too - you should adapt it to your workflows, not the other way around
ailee43@reddit
so explain it to me... it seems to be an entirely different class of product than claude code or opencode, more of an agent
my_name_isnt_clever@reddit
It's more similar than you think. Pi agent itself is a TS-based agent framework, in fact it's the core of OpenClaw. But when we say Pi in this thread we mean the pi-coding-agent, which is basically CC and co but without anything extra.
It's minimal and uses significantly fewer tokens with default prompts, so it's well suited for local models, and the idea is that you can run it on ~/.pi and have it build its own extensions or skills for whatever you need that isn't included. I had mine build basic to-do and web tools; throw in some skills and it's the only coding agent I use now.
PinkySwearNotABot@reddit
that has been the biggest downside for me. curl/wget seem to be its only native tools for web search, and when i ask it to google something, it can't interpret the data properly and fails each time.
at the same time, i don't want to install some resource/token-heavy MCP to do the job.
so what are you using for web search?
my_name_isnt_clever@reddit
Search calls my self-hosted SearXNG instance in JSON format; retrieval uses r.jina.ai. It's not the most robust, but at least it's free.
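The moving parts are just two HTTP calls, so a skill or extension can wrap them directly - a sketch, assuming a SearXNG instance with the JSON output format enabled in its settings.yml (URLs and query are placeholders):

```sh
# web search: SearXNG returns structured results as JSON
curl "http://localhost:8888/search?q=llama.cpp+server+flags&format=json"

# page retrieval: prefix any URL with r.jina.ai to get LLM-friendly markdown
curl "https://r.jina.ai/https://example.com/some-article"
```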
PinkySwearNotABot@reddit
wow, awesome. i had no idea this was possible. maybe not 100% iron-clad, but good enough. i am familiar w/ r.jina.ai though. so you just make all of this into a "skill", right?
my_name_isnt_clever@reddit
I made it as an extension so it has proper tools, but a skill would work as well and be portable to other agents. I just give it the search URL with the JSON format and the jina URL, it did the rest.
PinkySwearNotABot@reddit
okay so you have web fetch nailed down. but what about when you need a real browser agent? do you have a solution for that that isn't the Playwright MCP and heavily bloated?
my_name_isnt_clever@reddit
I don't use browser automation with Pi; Hermes Agent has that built in, and it works pretty well.
taeper@reddit
It's a joke lol
razorree@reddit
sure, but for that you need more time and experience. i wouldn't recommend it to a newbie unless you want to spend hours or days learning, experimenting, etc.
ansibleloop@reddit
Lol no you just ask the pi agent to install it and copy paste the link and config into pi
walden42@reddit
Dang, I just realized that pi already has the skill/knowledge built-in to create extensions and install plugins for itself.
razorree@reddit
what link ?
ansibleloop@reddit
The link to the Git repo
razorree@reddit
but to what exactly? ah... this is the caveman skill :) sorry, I thought I was responding to the "pi bare agent" answer (that you need experience to customise it).
DistanceSolar1449@reddit
This is literally a joke
DiscipleofDeceit666@reddit
There’s got to be an astroturfing campaign going on. Why is this the only tool being promoted everywhere when there’s thousands around. Dunno if I looked at a fork of the real thing, but pi was created a few days ago? Maybe a week? Tf is going on
PinkySwearNotABot@reddit
i had the same sentiment until i finally tried it. i don't blame you (or me) for the skepticism - everyone's been throwing this word around with no real substantiation for why it's good vs the 25 other harnesses floating around on the internet. and in this era where bots flood the internet, how can you just take them at their word when they don't even write a single full sentence as to why they think it's good?
with that said, i've been using it since this morning and it seems pretty darn good. i imagine that's due to the de-bloating and minimalistic nature.
p4block@reddit
I agree with your sentiment, however, I tried it and yeeted opencode. This thing is just good.
my_name_isnt_clever@reddit
Pi is not new; OpenClaw is literally built on top of it. People are just starting to realize (me included) that a flexible and light coding harness works better with local models than the bloated CC and its clones.
Your_Friendly_Nerd@reddit
I just got a bunch of recommendations on YT for that after watching like one video on opencode, so I think it's just getting a lot of attention right now.
itroot@reddit
No astroturfing. It's just not bloated, and does one thing well.
I discovered it, like, a couple of weeks ago; before that I used Claude -> OpenCode with local models. Claude was unusable. OpenCode was better. Pi has worked best for me so far.
super1701@reddit
I'm new to this side of the house. Whats the difference between Hermes and Pi?
my_name_isnt_clever@reddit
That's apples to oranges. I use both, Pi for coding in standalone projects to keep the prompts super concise and direct. I find it helps a lot to keep it simple with local models.
I use hermes as my persistent agent so I can message it with questions, manage my Minecraft server, or have it make changes to my dotfiles repo. Right now I'm using Qwen 3.6 35b for both and enjoying it immensely.
MuDotGen@reddit
I got Pi to actually run with my LLMs.
Feels really lightweight, no huge system prompt or bloat, and for whatever reason, is just much more reliable with even smaller models like Qwen3.5-4b in my experience.
Your_Friendly_Nerd@reddit
The creator has said that pi.dev works exactly because foundation models are so good at knowing what needs to be done, you don't have to have massive prompts that tell them about everything they can do and how to do it.
This makes me think that this will be yet one more tool targeting proprietary API-only models.
noprompt@reddit
Pi is like crack.
CreativeKeane@reddit
Can you tell me a bit more about pi.dev? Does it come with the ability to search through directories and read/write files? Or neat features like web search, etc.? Or is that something we need to write or provide ourselves?
Still learning and figuring things out.
transferStudent2018@reddit
If you want a core set of packages to install, maybe Pi isn't for you. The philosophy of Pi is that it isn't there if you don't need it, and if you do need it, you should build it.
CreativeKeane@reddit
Ah gotcha. Thanks, it's my first time hearing about Pi and I just want to know more about its ecosystem. I don't mind building the tools, I just want to understand what I'm working with and why people prefer it over the alternatives.
transferStudent2018@reddit
Happy to help! The reason I started using Pi is because
sine120@reddit
For normal consumer hardware, Pi has been the best harness. OpenCode is capable, but prompt processing (PP) takes a long time for me.
lifeislikeavco@reddit
This in the little-coder configuration is actually fantastic
2Norn@reddit
https://github.com/minghinmatthewlam/pi-gui
there is also this for people who prefer gui over tui
MuDotGen@reddit
You just made my day. It never hurts to have both options.
Cdou@reddit
Also check those two posts out. I haven't had time to try them, but they might be of help: - https://www.reddit.com/r/LocalLLaMA/s/2UWL0NPFAw - https://www.reddit.com/r/LocalLLaMA/s/lBz86mD2A0
PS: I have no affiliation with any of those lads.
coding9@reddit
Yeah, opencode scrolling and the TUI are just bad compared to Pi. Pi's scroll is natural and doesn't take over and replace the view. On top of that, the extension system lets you do whatever you want.
Todo checklist, permission modes, MCP - if you need any of them, you can add them.
celsowm@reddit
Qwen code and llama cpp (server)
hinsonan@reddit
Opencode. It really has surpassed Claude code as the better experience. It supports so many providers and the agent loop is well made with proper retries. The default plan and build mode is also pretty great
sn2006gy@reddit
OpenCode could be nice, but it's pretty buggy and has some weird behaviors, like camelCase in tool calling that no one uses, and the fact that its OpenAI endpoint doesn't correctly handle tool calls. Unsure how they got that bug, as they use Vercel AI, and I use Vercel AI, and it was my Vercel AI harness that reported OpenCode isn't handling tools correctly. Filing bugs, but it's 2026; we shouldn't have a tool call bug like this.
hinsonan@reddit
The tool calling is fair, but honestly, outside of a few exceptions it does seem to work most of the time, or it can work itself out with retries. I had some old GLM models that couldn't do tool calling, but that was more on GLM for always adding escape characters to tool calls.
sn2006gy@reddit
not just that, but it isn't following the protocol, so the model doesn't know the tool call was successful - the OpenAI API support is buggy - hope they accept my PR to fix it
andy2na@reddit
Opencode with codenomad
ComfyUser48@reddit
I've settled with Pi after trying them all
vr_fanboy@reddit
started with pi this morning, i'm addicted. i have work to do but cannot stop customizing my pi (context optimization mostly). first time i can actually work in a 100% local environment (3090 + qwen 3.6 27b @ 128k context), very snappy and smart. it implemented many improvements and migrated my CC skills and MCPs all by itself.
Next issue: throughput. i now want 4-5 pi instances the same way i use CC.
Exciting times for localllama if this is the new baseline for local models. hope we keep getting qwen releases in the future
PinkySwearNotABot@reddit
i'm on an m1 max 64GB, and after trying the 8-bit, 6-bit, and 4-bit quants of qwen 3.6 27B, i realized dense models are just not practical for me. i can almost deal with the slow 10 token/s output, but it's the slow prefill/prompt processing that kills me.
context: i use for agentic coding, not LLM chat
my_name_isnt_clever@reddit
If you want a lot of instances running at once, you may want to look into Hermes-agent. It has a profiles feature that sounds similar. I use it and Pi on a daily basis.
Polite_Jello_377@reddit
What made pi the standout for you? Did you create a lot of your own tooling for it or are you running a fairly vanilla setup?
2Norn@reddit
lightweight, no bloat, doesn't touch context a lot. you can build it from the ground up for your own needs, it's very customizable, and it's growing every day.
it's basically lego for agent harnesses.
my_name_isnt_clever@reddit
Exactly, lego for coding agents is the perfect way to describe it. In an age where raw code is dirt cheap, why do I let someone random make all the choices for such important software? And coding my own from scratch would lead to a lot of debugging instead of just using the tool.
Starting with a rock solid core and having the tool expand itself has been a great experience.
chuvadenovembro@reddit
In my case (to keep it short): I always liked Claude Code and tried to use it with a local LLM, but it's extremely slow, so I used OpenCode, which I like a lot, but the slowness bothered me too (even though it's much faster than Claude Code). So I was looking for something faster, because if you test the local LLM directly in the terminal, you'll see it's much faster than in these harnesses... Then I must have seen someone mention Pi on OpenClaw and decided to test it, and it was a pleasant surprise; I found it much faster than OpenCode. I suggest you test it and see if it meets your needs. I'm commenting on this, but I'm still testing and haven't managed to get anything done with a local LLM yet.
Head_Bananana@reddit
https://github.com/Gitlawb/openclaude
Ok-Importance-3529@reddit
OpenCode with a custom curated agent, or the regular ones if you don't mind blowing the context. My workflow was shite until I started with a custom agent and offloaded everything to specialized subagents; now I can process hundreds of thousands of tokens in one task at greater speed, because each subagent call starts with a fresh context and faster processing.
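For anyone curious what such a setup looks like: OpenCode subagents are defined as markdown files with frontmatter (e.g. under ~/.config/opencode/agent/). A minimal sketch - the keys follow the OpenCode docs but may drift between versions, and the model name is a placeholder:

```markdown
---
description: Read-only research subagent; returns a short summary
mode: subagent
model: llamacpp/qwen3.5-27b
tools:
  write: false
  edit: false
---
Explore the codebase to answer the question you were given.
Reply with a concise summary only, so the caller's context stays small.
```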
youcloudsofdoom@reddit
Care to share your agent file for this agent? I'm always intrigued by different approaches to this
Ok-Importance-3529@reddit
here you go: https://limewire.com/d/UFxMK#2PfCGvHguV
i wonder how you like the workflow with them. i'd be glad to hear some reactions :)
clintonium119@reddit
I'm going to try this out. I'm in the same boat - opencode was just OK for me with local llms until I put some real planning and intent into custom agents. Now it's 👍
Your approach is really detailed though, and I think there are some really nice improvements to some of my instructions that I can lift from yours
Durian881@reddit
Qwen Code. It supports the tool calls for Qwen models and worked well for me (e.g. building webui, payment system using hyperledger nodes with connectivity via API and MCP servers).
Intelligent-Form6624@reddit
https://github.com/undici77/qwen-code-no-telemetry
Hytht@reddit
Don't use that, it's a vibe coded slop project that you shouldn't run without sandboxing https://www.reddit.com/r/LocalLLaMA/comments/1rar6md/comment/o6n522v/
Intelligent-Form6624@reddit
I actually don’t mind a bit of slop
dtdisapointingresult@reddit
You realize you can just set usageStatisticsEnabled=false in settings.json ? It didn't need vibecoded slop.
The OpenTelemetry stuff this sloprepo removed isn't just Qwen Code's own telemetry, it's a user feature that lets YOU record telemetry about what Qwen Code is doing, so you can log statistics on your own logging server. It has nothing to do with what gets sent to Alibaba, and it's off by default.
Seriously, does no one read the configuration guide? They just see "telemetry" and start assuming?
Btw, I go out of my way to send Qwen telemetry. It's one of the reasons I wanted to use Qwen Code. I'm not gonna turn off telemetry on an open-source project that helps the company releasing the best small local models. I want to help them any way I can, just like I do with Debian and other open-source tools.
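For completeness, the opt-out mentioned above is a one-liner in ~/.qwen/settings.json (key name as given above):

```json
{
  "usageStatisticsEnabled": false
}
```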
Velocita84@reddit
https://www.reddit.com/r/LocalLLaMA/s/wkW4sp0kUq
boutell@reddit
I gave Qwen Code a try for Qwen3.6-35B-A3B-UD-IQ4_XS on my Mac, with context limited to 128K because of low specs (32GB RAM).
I ran into problems with QC not compacting soon enough, running out of context and getting stuck. I don't know if it can be configured or not; I went back to opencode which I have already configured to address this.
SmartCustard9944@reddit
How do you configure it to skip login?
Durian881@reddit
I configured the settings.json in the .qwen folder to point to a local openai-compatible endpoint.
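A sketch of that setup - the variable names are the OpenAI-compatible ones from the qwen-code README, and the URL/model are placeholders:

```sh
# .env in the project root (qwen-code also reads these from the environment)
OPENAI_BASE_URL=http://localhost:8080/v1
OPENAI_API_KEY=none
OPENAI_MODEL=qwen3.5-27b
```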
wizoneway@reddit
nothing beats a ctrl-c and ctrl-v.
RoroTitiFR@reddit
I've been using OpenCode for a week now, with a Tesla P40 + T4 setup and Qwen3.6. I'm getting close to 30 tps with 256k-context settings. Very good development experience; the web UI is a plus when working on multiple codebases, letting me avoid switching windows all the time.
I'm really happy with it, and I plan to cancel my cloud AI plans, as I find Qwen very smart.
loadsamuny@reddit
what about the one optimised specifically for the model?
https://github.com/QwenLM/qwen-code
jduartedj@reddit
ive used both with local models, my honest take:
opencode is the easier path for local. it was literally built with byo-model in mind, you just point it at any openai-compatible endpoint (llama.cpp server, vllm, ollama, lm studio, whatever). install is npm i -g @opencode-ai/opencode and youre done basically. config takes 30 seconds.
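A sketch of that 30-second config for a llama.cpp server - the provider block shape follows opencode's custom-provider docs, but keys may drift between versions and the names here are placeholders:

```jsonc
// ~/.config/opencode/opencode.jsonc
{
  "provider": {
    "llamacpp": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "llama.cpp",
      "options": { "baseURL": "http://localhost:8080/v1" },
      "models": {
        "qwen3.5-27b": { "name": "Qwen3.5 27B" }
      }
    }
  }
}
```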
claude code technically supports custom endpoints now via env vars (ANTHROPIC_BASE_URL etc) but its kinda fighting upstream, was made for claude. youll hit weird edges where it expects anthropic-style tool calling, prompt caching, system prompt structure. doable but more setup pain.
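The env-var route looks roughly like this (values are placeholders; whatever sits behind the base URL has to speak Anthropic's API format):

```sh
export ANTHROPIC_BASE_URL=http://localhost:4000
export ANTHROPIC_AUTH_TOKEN=dummy
claude
```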
for a 27b qwen specifically i would say opencode every time. claude code is engineered around the assumption youre talking to a frontier-tier model, so it can ramble through long agentic loops. a 27b will get confused after a few tool calls and start spinning... opencode keeps things tighter and gives you more control over the loop length, retries etc.
bug-wise both have rough edges, opencode is faster moving and a bit less polished but bugs get fixed in days. CC is more polished but less flexible.
speed is basically a wash, all bottlenecked on your local inference tps anyway. with qwen3.5 27b on a single 3090 youre getting like 25-35 tps so the agent overhead barely matters.
short answer: opencode + qwen 3.5 27b q4_k_m + llama.cpp server, you'll be writing code in like 10 min from now.
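The llama.cpp side of that quick start might look like this (model path, context size, and port are placeholders):

```sh
llama-server -m Qwen3.5-27B-Q4_K_M.gguf -ngl 99 -c 32768 --port 8080
# then point opencode's provider baseURL at http://localhost:8080/v1
```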
cenderis@reddit
Does involve a little bit of configuration (editing .config/opencode/opencode.jsonc) but the docs are good enough, and once you've done it you're off. Shame it doesn't pick up the models using the API.
suprjami@reddit
It does. Do /provider and you'll get the list. You can pick any with /model provider:modelname
cenderis@reddit
Not for local models, I think? (Even though llama-server, ollama, etc., provide a /models and/or v1/models endpoint.)
It works for me with llama.cpp
cenderis@reddit
It's still an open issue https://github.com/anomalyco/opencode/issues/6231
jduartedj@reddit
yeah the docs are surprisingly readable for how new the project is. the model auto-discovery via API thing is a known annoyance, theres an open issue about it but it keeps getting bumped. for now i just hardcode the model names in the jsonc and call it a day, not pretty but it works.
splice42@reddit
Bit of a funny statement given that the opencode TUI has been missing the "Other" option to configure an openai-compatible endpoint for around 2 months now and issues and pull requests about it are being closed because the devs can't really be bothered to keep track and action that.
Thankfully you can use the web UI to configure it and then switch back to TUI to use it but it's not a great look that such a basic thing gets overlooked for so long.
YashN@reddit
What? You can configure any base URL in the .json configuration file, even one per model if you need to.
jduartedj@reddit
haha yeah thats a fair point, the TUI gap is annoying. ive been doing the same web-ui-then-back-to-tui dance and it works but its not exactly the polished experience the readme implies. honestly i think the project moves fast enough that the maintainers triage by what hurts THEM in their daily use, and most of them are probably on hosted models so the local-endpoint configs sit lower in the queue. not a great look but explainable.
still better dx than claude code if you want full control over models tho.
cunasmoker69420@reddit
Qwen Code my man. It integrates really well with the Qwen line of models
jacek2023@reddit
try pi coding agent
alternatives: roo code, mistral vibe cli and many others
Latt@reddit
Roo code is being shut down
Pretend_Engineer5951@reddit
Isn't it open source? I'm sure another fork will appear. Roo looks like the best IDE extension for agentic coding.
jacek2023@reddit
why?
youcloudsofdoom@reddit
They're pivoting to being another enterprise AaaS provider
ArugulaAnnual1765@reddit
Here's one nobody is telling you: VS Code Insiders' GitHub Copilot allows you to use OpenAI-compatible endpoints now.
There's no reason to use anything other than Copilot IMO.
nicholas_the_furious@reddit
Data gets sent to MS I think. It is not completely local.
Savantskie1@reddit
That’s why you firewall it. And use your local models?
my_name_isnt_clever@reddit
Why would I put in a workaround for telemetry in software when I could just use anything else instead? Baffling.
nicholas_the_furious@reddit
I think with VS Code and GH copilot it is hard to completely air gap. I switched to VS Codium and Opencode extension.
Savantskie1@reddit
What do you mean? I've got mine air-gapped perfectly fine. It's on a machine that isn't Microsoft, isn't directly attached to the internet, and is completely Linux. The only connection is the custom MCP server I built - the memory system I built for my AI personal assistant. It's really not that hard. Why the excuses?
suicidaleggroll@reddit
And if you want to run your agentic coding on a system with internet access (there's a whole slew of reasons why that would be useful)?
Savantskie1@reddit
The system doesn’t have internet access? Only the mcp server hosted by another machine has internet access. So say my personal assistant can search the web for things I asked it. Like the weather or the cost of something?
suicidaleggroll@reddit
We’re not talking about a personal assistant. We’re talking about an agentic coding client, whose purpose is to write, install, run, and debug code, which often requires internet access on the client system (installing packages, pulling containers, etc).
Savantskie1@reddit
That’s why you pre download shit and then put it on your system like in the Linux world “.deb” packages or tarballs, or with windows zip files or “.exe” files ahead of time and then ask the ai to install those. I mean it literally is that simple to airgap even with no internet access. I guess I take all of that for granted because I don’t trust ai to install anything without errors. I’d rather install that crap manually
suicidaleggroll@reddit
I don’t think you understand what the word “agentic” means.
And at the end of the day, why? Why on earth would I jump through all of those hoops to use a shitty Microsoft product when I could just…not, and use something else that doesn’t have all those problems to begin with?
Savantskie1@reddit
What hoops? And I understand what agentic is and I think you don’t.
suicidaleggroll@reddit
Agentic coding means letting the coding agent run autonomously: writing, compiling, running, testing, and iterating, all by itself. When packages are needed, it installs them. When creating a dockerized application, it writes the dockerfile and builds it, pulling the necessary base image and upgrades. All without user intervention. Stopping that process so you can see what packages it needs, then going and downloading them manually, copying them over, and installing them yourself, every time a new package or container is needed, defeats the purpose.
If you have concerns about what the agent might accidentally run on the system, which is valid, you confine it to a VM which you can blow away and restore from backup if things go awry, but it still has internet access so the agent can do its job.
Savantskie1@reddit
Who said I’d use crappy copilot? You do know that you can run local ai with vscode and the vscode copilot chat extension right? And downloading the .deb and whatever packages myself and then using the ai to install those separately? Thats just good security lol. I enjoy ai, but it’s not nearly smart enough to do anything you’ve described yet. lol
ArugulaAnnual1765@reddit
Its true - my servers are "air gapped" meaning all outside routing to them is blocked but they are still physically on the same network and I can still access them remotely with my VPN
nicholas_the_furious@reddit
The irony of all the things you listed + "It's really not that hard"
Savantskie1@reddit
Ok, I’ll give you the not so hard. But I’m a child of the 80’s, I’ve been messing with computers for a long time, and some of this is trivial to me. But not hooking up an ethernet cable to me isn’t magical. It’s trivial to me
youcloudsofdoom@reddit
The privacy settings are easy to access and to restrict entirely; it's completely local.
ArugulaAnnual1765@reddit
Im running everything on a windows 11 desktop and using edge as my primary browser - there is no data on here that I care about microsoft seeing.
Now my nas and linux servers on the other hand...
youcloudsofdoom@reddit
I wanted this to be true, but much like the comment made elsewhere here about Claude Code expecting a frontier model, I find that Copilot does too. Lots of wasted tokens compared to lighter local-first harnesses.
ArugulaAnnual1765@reddit
Nope, it works amazingly for me
relmny@reddit
I'll give you this: you have a lot of courage to recommend that thing... (which I don't even dare to name)
ArugulaAnnual1765@reddit
Curious, why? It is far superior to Cline and works better with my local models vs Claude Code. What exactly is wrong with it? (Other than the MS spying comment, which I'm not really convinced about anyway.)
Your_Friendly_Nerd@reddit
With models my hardware can run (atm mainly gemma4:26b) I prefer using the chat feature that's built into most editors, which is better than a web interface because I can pretty easily share contents of a file, and that's all I need most of the time. And if I do need it to do toolcalls for file editing, I can give it those permissions as well.
Prudent-Ad4509@reddit
both. both is good.
Claude Code uses different prompts and has a wide assortment of third-party skills available, but opencode is more customizable and you can control the divide between planning and execution better.
YashN@reddit
No, CC will use a Claude-model specific System Prompt and you can't edit that. OpenCode allows customisations.
CautiousStudent6919@reddit
just want to point out that any claude-code skill can also be used in any harness; just put it in ~/.agents/skills instead of ~/.claude/skills
or just symlink the two folders together
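The symlink route is a one-liner (assuming ~/.agents/skills doesn't already exist):

```sh
ln -s ~/.claude/skills ~/.agents/skills
```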
DeltaSqueezer@reddit
OpenCode searches the default ~/.claude/skills anyway. Heck, even my home-coded agent does this too!
Prudent-Ad4509@reddit
In theory, yeah. It doesn't hurt to try anyway. I haven't investigated the difference between the skills that are reported to work and those that are reported not to, but logically speaking, the latter should heavily depend on specific Claude or Claude Code features. The first suspect is using full-screen page screenshots vs. the page model given by Playwright; there might be others.
YashN@reddit
Pi or OpenCode or similar.
How are you going to replace the internal Anthropic-model specific System Prompt if you use Claude Code?
Pleasant-Shallot-707@reddit
Pi
Far_Cat9782@reddit
Try this one https://github.com/goozebump86/KokoOS
Ok-Internal9317@reddit
I like Opencode
cryptofriday@reddit
vscode + proxy
This is The Way...
FinBenton@reddit
I'm using Cline in VS Code. It works really well straight out of the box, no configuration needed; I like it with llama.cpp hosting the model.
Enthu-Cutlet-1337@reddit
ClaudeCode doesn't natively support Qwen — you'd need a LiteLLM proxy in front, and tool call formatting gets messy at 27B. OpenCode works out of the box with Ollama. Install takes under 5 minutes...
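A rough sketch of that proxy route, with the LiteLLM config shape from its docs (names, ports, and paths are placeholders - verify against current LiteLLM):

```sh
# config.yaml:
#   model_list:
#     - model_name: qwen3.5-27b
#       litellm_params:
#         model: openai/qwen3.5-27b
#         api_base: http://localhost:8080/v1
litellm --config config.yaml --port 4000
ANTHROPIC_BASE_URL=http://localhost:4000 claude
```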
youcloudsofdoom@reddit
These days it's much easier, unsloth fixed the tool calling, no proxy needed for it or API bypass anymore
john0201@reddit
I use Qwen Code….
Subject_Mix_8339@reddit
pi.dev
Sudden_Vegetable6844@reddit
Qwen Code? Works fine for me: https://github.com/QwenLM/qwen-code
It's a fork of Gemini CLI.
runner2012@reddit
It has a lot of telemetry to send your prompts somewhere
jedisct1@reddit
This: https://swival.dev
Voxandr@reddit
If you still want to edit the code yourself, use Cline; for me, Cline + a 122B model is best.
tuvok86@reddit
i've tested qwen 3.6 27B with opencode vs pi.dev, and it consistently uses 2x the tokens in opencode when reasoning is turned on.
jikilan_@reddit
Even with preserve thinking on?
Eyelbee@reddit
I like cline but it's not perfect either. I honestly don't know which is the best option.
AVX_Instructor@reddit
OpenCode or pi.dev.
Claude Code is extremely bloated out of the box.
gordi555@reddit
I'm looking forward to your findings :-)