gemma4 as a coding agent
Posted by AbbreviationsLoud182@reddit | LocalLLaMA | 13 comments
hi everyone
I am using Gemma 4 as a local LLM and the results are satisfying for me. I also want to use Gemma 4 for my coding projects instead of Claude for simple tasks.
Is there any way to add that to opencode or something like that?
SM8085@reddit
My basic opencode config looks something like:
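(The config block itself didn't survive the copy. As a rough sketch of what it likely contained, here is a minimal `opencode.json` custom-provider entry pointing at an OpenAI-compatible endpoint. The `local-model`/`llama-server` names and the `127.0.0.1:8080/v1` address come from the comment; the exact schema and the `@ai-sdk/openai-compatible` package are assumptions, so check the opencode docs for your version.)

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "local-model": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "local-model",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "llama-server": {
          "name": "llama-server"
        }
      }
    }
  }
}
```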
Where `127.0.0.1:8080/v1` is wherever your endpoint is hosted. Then I can select 'local-model llama-server' in the opencode model menu.
AbbreviationsLoud182@reddit (OP)
can I use tools with this method?
SM8085@reddit
Which tools? Any opencode tools will be available while in opencode.
You can add MCPs to the opencode config as well. I like having the browser-use MCP active in case the bot has to investigate something with a web browser, such as if you're having Gemma 4 construct a webUI.
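(For reference, an MCP server entry in the same config might look like the sketch below. The `mcp` key with `type`/`command`/`enabled` follows opencode's documented shape as I understand it, but the launch command for a browser-use MCP server is a placeholder; check that project's README for the real invocation.)

```json
{
  "mcp": {
    "browser-use": {
      "type": "local",
      "command": ["uvx", "browser-use", "--mcp"],
      "enabled": true
    }
  }
}
```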
AbbreviationsLoud182@reddit (OP)
cool then. currently my model runs on another pc, and i am sending requests with api calls. can I still integrate that with opencode?
SM8085@reddit
Yep, if you already have vllm set up to be accessible on your LAN, then you should swap out the `127.0.0.1:8080` in the config for your address/port. On my LAN I have a domain name like `llm-rig.lan`, so I would point it to `http://llm-rig.lan:8000/v1` (since I think the default vllm port is 8000).
AbbreviationsLoud182@reddit (OP)
do I have to use it on LAN? bc I want to use my local llm when i'm outside
SM8085@reddit
No, just using it as an example. As soon as you open things up to the outside you start having more security concerns to worry about. Like some people use a VPN to connect to their home servers, etc.
AbbreviationsLoud182@reddit (OP)
got it, thank you so much
Warm-Attempt7773@reddit
I use VSCode and Cline for my coding to access my Strix Halo machine (separate from my workstation/laptop)
marscarsrars@reddit
VS code has Roo Code. Hope it helps.
AbbreviationsLoud182@reddit (OP)
can I use tools with this method?
itsDitch@reddit
Hey, I'm new to this, but absolutely you can do this. How are you running the Gemma 4 model now? Ollama? LM Studio?
The answer will depend on how you set it up, but opencode has a JSON config file which you will need to edit; you can ask Claude how to do this, it's super easy.
AbbreviationsLoud182@reddit (OP)
I use vllm
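(If vllm is the backend, the main thing for the LAN setup described above is binding to all interfaces instead of localhost. A sketch of the launch command; the model id is a placeholder, the flags are standard vllm CLI options:)

```shell
# Bind to 0.0.0.0 so other machines on the LAN can reach the server;
# the OpenAI-compatible API then lives at http://<this-host>:8000/v1.
# "google/gemma-3-27b-it" is a placeholder -- use whatever checkpoint you run.
vllm serve google/gemma-3-27b-it --host 0.0.0.0 --port 8000
```

For agent tool use you may also need `--enable-auto-tool-choice` plus an appropriate `--tool-call-parser`; see the vllm tool-calling docs for which parser fits your model.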