What's the smallest model you've gotten to work with OpenCode?
Posted by RecordFront4405@reddit | LocalLLaMA | 4 comments
Hey all,
I've been trying out OpenCode with some smaller open models, but even the ones tuned for tool calling don't seem to interface with it properly, or even attempt to use the tools given to them.
How low have you gotten in parameter count while keeping reliable output? 4B models seem to be a total failure, which is expected, to be fair.
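For context, OpenCode talks to local servers through an OpenAI-compatible chat API, so "failing at tool calling" usually means the model never emits a well-formed tool-call message. A minimal sketch of what a valid one looks like, and how a client might parse it (the tool name `read_file` and the payload here are illustrative, not OpenCode's actual tool schema):

```python
import json

# Example assistant message in the OpenAI-style tool-calling format that
# servers like llama.cpp and Ollama mimic. Small models typically fail by
# skipping tool_calls entirely or emitting invalid JSON in "arguments".
raw_response = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_0",
            "type": "function",
            "function": {
                "name": "read_file",  # hypothetical tool name
                "arguments": "{\"path\": \"src/main.py\"}",
            },
        }
    ],
}

def extract_tool_calls(message):
    """Return (name, parsed_args) pairs; raises ValueError on malformed JSON."""
    calls = []
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls

print(extract_tool_calls(raw_response))
```

A model that interleaves prose with the JSON, or hallucinates argument keys, breaks the `json.loads` step, which is why agent harnesses like OpenCode are a decent stress test for small models.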
My_Unbiased_Opinion@reddit
I have not tried it but apparently Qwen 3 4B 2507 Thinking is pretty good.
igorwarzocha@reddit
Qwen3 4B 2507 (thinking and instruct) can both call tools with no issues, and I've managed to chat with them about the results afterwards somewhat successfully. But I wouldn't trust them to actually write any code, so... meh.
I generally haven't had much luck with anything other than those two and GPT-OSS.
Any other recommendations, just for lolz? I'm sort of using OpenCode as a tool-calling benchmark for these small models.
DistanceAlert5706@reddit
Try Nvidia Nemotron; it's pretty good even at 9B. Seed OSS 36B is good too, but it's slower and dense. GPT-OSS is pretty much the only other viable one; I had no luck with the 30B Qwen3 models.
throwawayacc201711@reddit
Your best bet would be an MoE over a small dense model, IMO.