Is Gemma 4 any good for open claw?
Posted by Mean-Ebb2884@reddit | LocalLLaMA | View on Reddit | 11 comments
For reference, I've been writing an article over the past few weeks that explains how I set up OpenClaw for free: https://x.com/MainStreetAIHQ/status/2040498932091167136?s=20
But now that Gemma 4 has been released, I feel like I should switch over and just run that on my Mac mini.
what do you guys think?
ResponsibleTruck4717@reddit
I'm using the 26B with Claude Code, and the results are interesting.
I think we need better support.
QuotableMorceau@reddit
I've just failed to use it with Claude Code... it starts tasks but never finishes anything. What setup are you using?
I use llama.cpp (Gemma 4 26B, 128K context) -> LiteLLM (to translate the OpenAI API to the Claude API) -> Claude Code.
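A pipeline like the one described could be wired up with a LiteLLM proxy config along these lines. This is only a sketch: the model names, port, and API key value are assumptions, not details from the thread.

```yaml
# litellm_config.yaml (sketch; names/ports are assumptions)
model_list:
  - model_name: claude-proxy            # hypothetical alias Claude Code will request
    litellm_params:
      model: openai/gemma-4-26b         # route through the OpenAI-compatible adapter
      api_base: http://localhost:8080/v1 # llama-server's OpenAI-compatible endpoint
      api_key: "none"                   # llama.cpp doesn't check keys by default
```

Claude Code would then be pointed at the LiteLLM proxy (for example via an `ANTHROPIC_BASE_URL` environment variable) instead of Anthropic's API.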
ResponsibleTruck4717@reddit
llama.cpp with Q4_K_M, and that's it. I give it a task and it does it:
llama-server.exe -m gemma-4-26B-A4B-it-UD-Q4_K_M.gguf --temp 1 --top_p 0.95 --top_k 64 -np 1 -fa on -c 128000 --cache-type-k q8_0 --cache-type-v q8_0 --jinja
chibop1@reddit
Based on my tests, Qwen3.5-27B/35B do a much better job on OpenClaw than Gemma4-26B/31B.
Gemma is new, though, so it might do better once support across the different engines settles.
My setup is an isolated Docker container with a Chromium browser for the agent to use, so agents can't mess with my computer. I also mounted the .openclaw folder from the host, so the assets are persistent and I can access them easily.
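The isolation setup described could look roughly like this as a Compose file. A sketch only: the image name and container paths are assumptions; the one detail taken from the thread is the host-mounted .openclaw folder.

```yaml
# docker-compose.yml (sketch; image name and paths are assumptions)
services:
  openclaw:
    image: my-openclaw-image            # hypothetical image bundling OpenClaw + Chromium
    volumes:
      - ~/.openclaw:/root/.openclaw     # persist agent assets on the host
    # no host devices, no extra capabilities, no docker.sock mount:
    # the agent can only touch what lives inside the container
```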
stone_be_water@reddit
Do you get any parse errors using Qwen3.5-27B/35B with OpenClaw? I'm trying to find a way to fix that.
chibop1@reddit
What engine are you using? I'm not sure if it's a parsing issue, but I've seen this error with Ollama:
400 input[135]: json: cannot unmarshal array into Go struct field ResponsesFunctionCallOutput.output of type string
I haven't seen any errors since the latest update with the MLX engine on Ollama.
ttkciar@reddit
OpenClaw is a security catastrophe. Using a better model for it will just make for more spectacular security violations.
Mean-Ebb2884@reddit (OP)
I don't let it run wild; I keep it sandboxed so it doesn't fuck up everything.
JacketHistorical2321@reddit
Open claw is BS
Mean-Ebb2884@reddit (OP)
What do you use?
PermanentLiminality@reddit
It has a lot of potential. I think it may be a few more days before tools like llama.cpp adapt. They've been making a lot of releases to address Gemma 4, and I'm giving it a few more days before spending much time on it.