Is opencode the best free coding agent currently?
Posted by MrMrsPotts@reddit | LocalLLaMA | View on Reddit | 63 comments
I just started using it and it seems good. I was very surprised that it also gives free access to minimax 2.5 and glm 5 at the moment.
leepenkman@reddit
lee101/codex is a run-forever fork of codex; you can try that too. It supports other models as well.
Unlucky-Message8866@reddit
I've switched to https://buildwithpi.ai/ and loving it, https://github.com/knoopx/pi
Realistic-Ad5812@reddit
looks great but doesn’t support subagents?
Unlucky-Message8866@reddit
supports anything you want. pi is kind of a framework; either you install others' extensions or build your own
chawza@reddit
Doesn't OpenCode support plugins?
o0genesis0o@reddit
OpenCode is surprisingly demanding on CPU load on laptop even though it's just a TUI. I used a laptop with Ryzen AI 350 and I can see core spiking to 100% and fans ramping up whenever OpenCode is in the middle of a task. Even at idle, it still pushes the idle power of the laptop up from 5W to nearly 12W. It is inconsequential for desktop, but for laptop, every W counts. And this suggests some inefficiency in the way they run the event loop.
Personally, I use either Qwen Code or Gemini CLI (same thing), depending on whether I want to mooch off Google's grounding feature or not. CPU load stays stable at 5W, no spikes, and the agent's operation is very transparent: every step is right there in the terminal.
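For anyone curious what that kind of idle inefficiency can look like, here's a minimal Python sketch (not OpenCode's actual code, which I haven't read; this is just the general pattern) contrasting a busy-polling event loop with a blocking wait. The polling version wakes the CPU constantly while doing nothing, which is exactly the kind of thing that raises idle power:

```python
import queue
import time

def idle_iterations(poll: bool, duration: float = 0.2) -> int:
    """Count how many times an idle event loop wakes up in `duration` seconds."""
    q = queue.Queue()
    wakeups = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        wakeups += 1
        if poll:
            # Busy polling: check the queue and immediately loop again.
            try:
                q.get_nowait()
            except queue.Empty:
                pass
        else:
            # Blocking wait: the thread sleeps inside get() until an
            # event arrives or the timeout expires, burning no CPU.
            try:
                q.get(timeout=duration)
            except queue.Empty:
                pass
    return wakeups

polling = idle_iterations(poll=True)
blocking = idle_iterations(poll=False)
print(polling, blocking)  # polling wakes up thousands of times; blocking, once
```

Both loops accomplish the same (nothing), but only the blocking one lets the CPU drop into a low-power state between events.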
chawza@reddit
I know this is weird to point out, but I switched to the desktop version (Tauri) and it significantly reduced the RAM usage (3 GB to 400 MB).
It doesn't make any sense, since the desktop version allows us to create workspaces for parallel agents (which I often do).
I also need to point out that the desktop app is only a client that sends requests to the opencode server.
knigb@reddit
Try https://github.com/AntigmaLabs/ante-preview. There is a detailed comparison on exactly this
ryuno@reddit
It really depends on your configuration though; many plugins make it way more efficient and can even save you some tokens.
TomLucidor@reddit
OpenCode probably needs some kind of prompt cache optimization to match Qwen Code / Gemini CLI / whatever GLM or Mistral or Kimi is doing. What other agent has quality like this?
West-Chocolate2977@reddit
OpenCode isn't even close when it comes to agentic benchmarks. Have a look at this https://www.tbench.ai/leaderboard/terminal-bench/2.0
West-Chocolate2977@reddit
ForgeCode is open source, written in Rust, and leads public benchmarks on accuracy.
jacobgorm@reddit
I tried using ForgeCode, but the input processing when typing is dog-slow and I can't stand it. I can't imagine the people who built it are competent programmers if they never noticed and fixed that.
knigb@reddit
Yeah I think they cheated on terminal bench, you can check terminal bench’s discord
West-Chocolate2977@reddit
Could be a bug? ForgeCode is built on ZLE, which is perhaps the fastest editing experience you can get in the terminal.
plurch@reddit
Check out other repos in the same neighborhood as opencode
I have had block/goose on my radar personally.
No-Juggernaut-9832@reddit
Goose is trash... I can't believe Block fired all of those people for this PoS
wuu73@reddit
Honestly I always have problems every time I've tried to use OpenCode, but on Windows, so it might be one of those things with AI tools where the devs tend to have Macs and Windows/Linux is more of an "it's 11pm, shit, I forgot about Windows again, let's rush it out before bed" kind of thing. It says multiplatform, but since they're in the Mac bubble, most of the testing and experience is on the Mac. So, idk, after a few tries spread out over months, I just decided to skip this one forever lol
ps5cfw@reddit
I enjoy OpenCode but I've found Kilo CLI to work a lot better thanks to orchestrator mode.
It can actually produce good results paired with Qwen Code Next at Q5_K_L and 128K context!
Opencode may produce the same result (after all it's the same system under the hood) but the orchestrator mode makes things work a bit better and opencode does not have it by default.
ryuno@reddit
Weird, because Kilo CLI is a fork of opencode, and now the latest version of the extension is just a client
TomLucidor@reddit
Oh-My-OpenCode maybe?
ThomasLeonHighbaugh@reddit
opencode-swarm has worked better for me and doesn't require me to be neck deep in the abstractions and naming conventions that Oh My Opencode bundles together.
Plus, as a longtime zsh user I abhor oh-my-zsh (awful-looking themes, slow loading times, overwhelmingly useless plugin hell) and the name, so that bias is at play too, which isn't fair to Oh My Opencode.
Ill-Chart-1486@reddit
For me, definitely yes. But it needs a little tweaking at the start. The built-in agents don't have the best prompts, but you can override them.
The best part is that it works with Opus or Sonnet through a GitHub Copilot subscription.
The difference is that Copilot charges per prompt, not per token. You can tweak it to be more prompt-efficient.
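A toy cost model (all prices here are made-up illustration values, not Copilot's actual rates) shows why per-prompt billing rewards batching work into fewer, larger prompts:

```python
# Toy comparison of per-prompt vs per-token billing. The prices are
# invented for illustration only.

def per_prompt_cost(n_prompts: int, price_per_prompt: float) -> float:
    """Cost when the provider bills a flat rate per request."""
    return n_prompts * price_per_prompt

def per_token_cost(n_prompts: int, tokens_per_prompt: int, price_per_1k: float) -> float:
    """Cost when the provider bills by tokens consumed."""
    return n_prompts * tokens_per_prompt * price_per_1k / 1000

# Same total work, split differently: ten small prompts vs two large ones.
many_small = per_prompt_cost(10, 0.04)  # ten requests
few_large = per_prompt_cost(2, 0.04)    # two requests, 5x cheaper per-prompt
print(many_small, few_large)
```

Under per-token billing the two strategies would cost roughly the same, which is why prompt-efficiency tweaks matter specifically for per-prompt plans.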
Pristine-Woodpecker@reddit
What's out there worth trying? Opencode seems to be the most popular, and has fixed some of its most serious bugs. Crush? Kilo CLI I've never tried.
Qwen CLI, Mistral Vibe: not sure how well those will keep working with random models.
nuclearbananana@reddit
Probably pi, a minimal, modular agent that's designed to be hyper extensible. OpenClaw uses it as a base.
If you're willing to put in the effort, it's probably the best
ivanryiv@reddit
can vouch
NoGene4978@reddit
It has too many bugs. GitHub issues have piled up to 5k+ currently, with 1.7k open PRs... the maintainers cannot keep up, which is not a good sign.
IMHO the current version feels more like a prototype.
Chasmansp@reddit
I like openhands if you aren't bothered by using a CLI. Running it in uv gives it some good isolation features.
Impossible_Art9151@reddit
Does opencode require a local desktop installation? Or can I install it as an administrator with local LLMs and serve it to a department?
Dismal_Bit_9879@reddit
Yup, absolutely love OpenCode; they have come a long way in the past couple of months. You can also check out pi, which is the agent under the hood for OpenClaw.
peyloride@reddit
I don't know why no one mentions factory droid? BYOK available
nonerequired_@reddit
I don't know, droid feels uncomfortable. Is it that good?
peyloride@reddit
I think it's the best agent out there. I do test them all the time and my current favorites are pi and droid. Then opencode.
nuclearbananana@reddit
Is droid open source? What do you like about it?
peyloride@reddit
It's not open source, unfortunately. Other than that, I like its permission levels (low/med/high): it classifies commands into these buckets, and you can select which bucket level should auto-approve. It also gets updates almost daily, has a good plan feature, and I believe it more or less has feature parity with CC.
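droid is closed source, so here's just a hypothetical Python sketch of the low/med/high bucket idea as described above. All the rules and names are made up, none of this comes from droid's code:

```python
# Hypothetical command-risk classifier with an auto-approve threshold,
# mimicking the low/med/high buckets described above.

RISK_RULES = {
    "low": ("ls", "cat", "grep", "git status", "git diff"),
    "medium": ("git commit", "pip install", "npm install"),
    "high": ("rm", "git push", "curl", "chmod"),
}

def classify(command: str) -> str:
    """Put a shell command into a risk bucket; unknown commands default to high."""
    for level, prefixes in RISK_RULES.items():
        if command.startswith(prefixes):
            return level
    return "high"

def auto_approved(command: str, threshold: str) -> bool:
    """Auto-approve anything at or below the user's chosen risk threshold."""
    order = ["low", "medium", "high"]
    return order.index(classify(command)) <= order.index(threshold)

print(classify("git status"))                   # low
print(auto_approved("rm -rf build", "medium"))  # False: "rm" is high-risk
```

The appeal of this scheme is that the user picks one threshold instead of approving every command individually.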
Dhomochevsky_blame@reddit
It’s honestly impressive what you get for free right now. Access to GLM-5 alone makes it surprisingly competitive. GLM-5 shows strong coding consistency, good long-context handling, and solid reasoning for multi-file refactors and debugging. For a free coding agent, that’s hard to ignore at the moment
MrMrsPotts@reddit (OP)
Sadly GLM 5 is not there any more.
alokin_09@reddit
Open Code is great indeed, but I mostly use Kilo Code (I'm also helping their team out). I'm just a big fan of open-source agents in general.
InitialJelly7380@reddit
Yes, opencode has a free quota for advanced open-source models, which is enough for daily ops usage; but if you want to develop a full-featured app, you should bring your own API key or buy an advanced pricing plan with opencode zen.
PsychologicalSock239@reddit
I noticed that opencode does prompt reprocessing, wasting compute, which is not tenable when running local models.
Qwen Code works fine: no prompt reprocessing, and it does everything opencode does.
evnix@reddit
> no prompt reprocessing
what does this mean?
ps5cfw@reddit
on llama.cpp it means that context gets invalidated and must be reprocessed from scratch.
Which might be fine on a full CPU setup, but it's an absolutely atrocious experience on a hybrid CPU + GPU setup.
Even I, with 600+ pp tk/s, find the experience awful, because once you get north of 70k tokens (which doesn't take a lot in normal coding scenarios) it takes a minute at best for each thing it has to do.
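A small Python sketch of what's going on, assuming ordinary prefix-style KV caching (the general llama.cpp behavior, not any specific agent's implementation): the cache can only reuse the leading tokens that exactly match the previous prompt, so appending to a conversation is cheap, but an agent that rewrites anything near the top of the prompt forces a full re-read. Plugging in the ~600 tok/s and 70k-token figures above:

```python
def shared_prefix_len(prev_tokens, new_tokens):
    """How many leading tokens the KV cache can reuse between two prompts."""
    n = 0
    for a, b in zip(prev_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

# If the agent only appends to the conversation, almost everything is reused:
prev = list(range(70_000))
appended = prev + [1, 2, 3]
reused = shared_prefix_len(prev, appended)           # all 70,000 tokens reused

# But if it rewrites something at the top of the prompt (e.g. reorders a
# system message), the cache is invalid from that point on:
edited = [-1] + prev[1:]
reused_after_edit = shared_prefix_len(prev, edited)  # 0 tokens reused

# At ~600 prompt-processing tokens/s, re-reading a 70k-token context takes:
seconds = 70_000 / 600
print(reused, reused_after_edit, round(seconds))
```

That works out to roughly 117 seconds per full reprocess, i.e. close to two minutes of waiting each time the prefix is invalidated.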
TomLucidor@reddit
And it is OpenCode's fault not the inference engine!? WTF
Daniel_H212@reddit
Do you mean that it's fine on a full GPU setup?
ps5cfw@reddit
No, it's still slower than it should be.
Daniel_H212@reddit
I meant that you seem to have said CPU when you meant GPU.
ps5cfw@reddit
You're right! My bad, fixed the typo
evnix@reddit
would it save me tokens when used with an external API provider?
kweglinski@reddit
No idea how to check that one. Do you know if Kilo Code suffers from the same thing? It seems to be built on top of opencode, but looking at the model operations, it looks like it uses the cache.
HopePupal@reddit
looks like opencode might fix one of the reasons this happens https://github.com/anomalyco/opencode/issues/5224
Queasy_Asparagus69@reddit
Crush is trash. Droid is good but not open. Vibe prefers Mistral and lacks features. Opencode is good but sometimes has mystery tool-call issues.
Right now I use Oh-my-pi (omp). It's crazy fast but has tool-call issues with GLM 4.7, unless you create a todo list for your project, then it's fine 😝
jhov94@reddit
If you want something plug and play, no, it's not the best. If you want the best platform to build your own system of custom agents, there is nothing better that I've found.
MrMrsPotts@reddit (OP)
What is better for plug and play?
jhov94@reddit
I don't have experience with that, as I prefer to roll my own agents. But from the hype online, Claude Code.
MrMrsPotts@reddit (OP)
That's not free is it?
jhov94@reddit
It's free to use, but it's closed source. I've seen some people use it with local models, but I've not tried that.
roguefunction@reddit
I use opencode but Crush is pretty...https://github.com/charmbracelet/crush
Ambitious_Spare7914@reddit
I also like it. I like that there are some good add-ons such as superpowers, oh my opencode, and plan with files. I use it with poe.com as the provider to try different models with minimal fuss.
MrMrsPotts@reddit (OP)
I didn't know those. Thanks!
CC_NHS@reddit
Personally, I would agree that OpenCode is the best, or at least my favourite, free coding agent.
Not only does it generally have good recent models on its free Zen tier, but you can plug in an API quite easily, use custom agents, skills, all the latest things really.
Other worthwhile mentions:
Qwen CLI - not very customisable in terms of models and such, but Qwen is still surprisingly good and has a very generous free tier.
Gemini CLI - probably the best-quality 'free' tier, but only if you don't get rate-limited on their model, and if it's something the model is good at; honestly sometimes it's just luck, it can hallucinate strangely. Also the free quota is not that great even in ideal situations, so worth a try maybe if using it a little here and there...
Mistral Vibe CLI - underrated. Not sure how much you get for free, but it's got Devstral 2 on it and seems generous; I haven't used it enough to hit limits. My guess would be somewhere between Gemini and Qwen.
Also worth mentioning that the GLM coding plan seems to have an entry offer all the time, and it's very cheap, so if there isn't enough free out there, it's not a bad cheap option. (It can plug into Claude Code or opencode easily enough.)
Terminator857@reddit
opencode is the best, but it could be much better. It will be much better when plan mode is implemented. That's difficult to do while having to work with so many different models, and with models changing significantly almost every week.
synn89@reddit
It's likely going to be a personal preference thing, but there are a few reasons I like OpenCode. The people behind it seem to have a pretty good grasp of running an open-source project that feeds into their enterprise business. Too many open-source projects kill themselves by moving features into their commercial version; they seem to understand that.
They also do a really good job with updates and approving PRs. It also helps that they use/created https://models.dev/ to feed model data into OpenCode itself. It's a pretty fast turnaround getting things onto models.dev, which updates your local OpenCode when you start it.
The docs for OpenCode are also pretty good. It's just been an easy tool to settle into and slowly learn to master.
Dry_Yam_4597@reddit
qwen cli is decent, too