Are Local LLMs good enough for Vibe Coding? Gemma4-26B-A4B vs Qwen3.5-35B-A3B
Posted by Interesting_Key3421@reddit | LocalLLaMA | View on Reddit | 2 comments
https://grigio.org/are-local-llms-good-enough-for-vibe-coding-gemma4-26b-a4b-vs-qwen3-5-35b-a3b/
sagiroth@reddit
Gemma needs to mature; the only medium-size models worth it right now are Qwen3.5 27B or the 9B omnicoder. Unless you can run bigger, denser models.
tommy_redz@reddit
For me, at the moment, Gemma4-26B-A4B is still buggy on tool calls. LM Studio doesn't work at all, and with llama.cpp tool calls fail after a few prompts even with all those fixes. Qwen is quite good and gives better explanations. (Both at 8-bit.)
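One way to reproduce tool-call bugs like the ones described above is to send the model an OpenAI-style `tools` payload (the schema llama.cpp's server accepts on its `/v1/chat/completions` route) and check whether the response contains well-formed `tool_calls`. A minimal sketch, assuming a hypothetical `read_file` tool; the payload builder and parser below are illustrative, not part of any of the mentioned projects:

```python
import json


def build_tool_call_request(model: str, user_msg: str) -> dict:
    """Build an OpenAI-style chat-completions request with one tool defined."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool for illustration only.
                    "name": "read_file",
                    "description": "Read a file from the workspace",
                    "parameters": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                },
            }
        ],
    }


def extract_tool_calls(response: dict) -> list:
    """Pull (name, parsed-arguments) pairs out of a chat-completions response.

    A json.JSONDecodeError here is one symptom of the buggy tool calls
    mentioned above: the model emits malformed argument JSON.
    """
    msg = response["choices"][0]["message"]
    calls = []
    for tc in msg.get("tool_calls") or []:
        fn = tc["function"]
        calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls


# A response shaped like what a tool-capable model should return.
sample = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "tool_calls": [
                    {
                        "id": "call_0",
                        "type": "function",
                        "function": {
                            "name": "read_file",
                            "arguments": '{"path": "README.md"}',
                        },
                    }
                ],
            }
        }
    ]
}

print(extract_tool_calls(sample))  # [('read_file', {'path': 'README.md'})]
```

Running this loop against a local endpoint over a few dozen prompts makes it easy to see whether tool-call output degrades after several turns, as reported.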