What is the current solution for running Gemma 4 locally?

Posted by mihirlifehacks@reddit | LocalLLaMA

Hi everyone,

I'm hearing very good things about Gemma 4, and I appreciate this community's posts about how it's still not perfect, with tool-call issues and plenty of other problems. But now that it's been about a week since its release, I'm curious whether anyone has had success running it, and how?

I'm hearing that ollama had issues up until v0.20.0-rc1, and even that release still had tool-call problems. Now I'm seeing ollama has newer release candidates like v0.20.6-rc1, and I'm not sure whether those fix everything.

And then there's a whole other camp saying it's better to use llama.cpp, but is that really problem-free?
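For anyone who wants to check the tool-call situation on their own setup rather than rely on hearsay, here's a minimal smoke test against a local OpenAI-compatible endpoint. Both llama.cpp's llama-server and ollama expose `/v1/chat/completions`; the base URL, port, model tag, and the `get_weather` tool below are all placeholders/assumptions, so adjust them to whatever your runtime actually serves.

```python
# Minimal tool-call smoke test against a local OpenAI-compatible endpoint.
# llama-server defaults to http://localhost:8080/v1; ollama uses
# http://localhost:11434/v1. URL and model tag are assumptions.
import json
import requests

BASE_URL = "http://localhost:8080/v1"  # assumption: llama-server on its default port
MODEL = "gemma-4"                      # assumption: whatever name/tag your server exposes

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, just to see if the model calls it
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "What's the weather in Berlin right now?"}],
        "tools": tools,
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

# A runtime with working tool calling should return a structured tool_calls
# list rather than (or alongside) plain text content.
if message.get("tool_calls"):
    for call in message["tool_calls"]:
        fn = call["function"]
        print("tool call:", fn["name"], json.loads(fn["arguments"]))
else:
    print("no tool call, plain reply:", message.get("content"))
```

If the model answers with pseudo-JSON in the plain `content` field instead of a structured `tool_calls` array, that's exactly the kind of breakage people have been reporting, and it tells you whether your particular runtime version is affected.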

And what CLI / coding client are y'all using to code with the model? I think OpenCode is quite popular, but are y'all having a better experience with the open-source Claude Code (https://github.com/anthropics/claude-code) or some other CLI/IDE?

...unless I'm super wrong and Gemma 4 is still a disaster to run locally :D

Thank you for your help, community!