[Help] Gemma 4 26B: Reasoning_content disappears in Opencode when tool definitions are present

Posted by SomeoneInHisHouse@reddit | LocalLLaMA | 4 comments

I'm running into a strange discrepancy with Gemma 4 26B: its reasoning behaves differently depending on the client/implementation being used.

The Problem:
When using llama.cpp web UI, the model's reasoning works perfectly. Even for simple "Hi" prompts, it produces a reasoning block, and for complex tasks, the reasoning_content can be quite extensive.

However, when using Opencode (v1.4.1), the model seems to "stop thinking" whenever the request payload includes the full list of tools. In Opencode, reasoning_content is only populated during the separate call used to generate a conversation title; for all actual tool-use requests, the reasoning block is missing entirely.

What I've tested so far:

My Hypothesis:
It feels like including the tool definitions in the request might be interfering with the model's ability to trigger its reasoning phase, or perhaps the way Opencode structures the prompt is suppressing the chain-of-thought (CoT) block.
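One way to check this hypothesis while bypassing Opencode entirely would be an A/B request against the llama.cpp server itself: send the same prompt with and without a `tools` array and compare whether `reasoning_content` comes back. Below is a minimal sketch of that idea; the endpoint URL, the toy `get_weather` tool, and the helper names are all placeholders I'm assuming, not anything from the original setup.

```python
import json
import urllib.request

# Hypothetical tool definition, only there to make the payload include tools.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def extract_reasoning(response: dict) -> str:
    """Return reasoning_content of the first choice, or '' if absent."""
    message = response.get("choices", [{}])[0].get("message", {})
    return message.get("reasoning_content") or ""

def ask(prompt: str, tools=None) -> dict:
    """POST a chat completion to a local llama.cpp server (assumed URL)."""
    payload = {"messages": [{"role": "user", "content": prompt}]}
    if tools is not None:
        payload["tools"] = tools
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",  # assumed server address
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def compare(prompt: str = "Hi") -> None:
    """Print reasoning_content length with and without tool definitions."""
    for label, tools in (("no tools", None), ("with tools", TOOLS)):
        reasoning = extract_reasoning(ask(prompt, tools))
        print(f"{label}: reasoning_content length = {len(reasoning)}")
```

Calling `compare()` against a running server would show whether the `tools` field alone makes `reasoning_content` disappear, or whether Opencode's prompt structure is the culprit.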

Has anyone else encountered this behavior where tool definitions seem to "silence" the reasoning block in specific implementations?

TL;DR: Gemma 4 26B reasons perfectly in the llama.cpp web UI, but fails to output reasoning_content in Opencode whenever tool definitions are included in the request.