Experience of using OpenClaude and Gemma4 26b

Posted by nonekanone@reddit | LocalLLaMA | 12 comments

Hi guys,

I am relatively new to the local LLM scene, and today I downloaded my first local model, Gemma 4 26b. I am using Ollama on an M1 Max with 32GB of RAM. When I just use Gemma 4 inside of Ollama, it works like a charm. It takes up a good amount of memory, but that is to be expected with my limited hardware.

As soon as I start something like Open Claude, it fully breaks down: it took 5 minutes to write a simple Hello World C++ program (in a new folder, so it didn't have to interpret any files). Does anyone know why that's happening, and whether there's a fix to make it run better on my hardware? Thanks a lot.
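For context on the hardware numbers above, here is a rough back-of-envelope sketch of why 32GB gets tight with a 26b model once a coding agent is involved. This assumes a ~4-bit quantization (about 0.5 bytes per weight), which is a common Ollama default; the exact quant, KV-cache size, and runtime overhead on the poster's machine are assumptions, not measured values.

```python
# Rough memory estimate for a 26B-parameter model (assumed 4-bit quant).
params = 26e9           # 26 billion parameters
bytes_per_weight = 0.5  # ~4 bits per weight

weights_gb = params * bytes_per_weight / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")  # roughly 13 GB before any overhead

# On top of the weights, the KV cache grows with context length. A coding
# agent sends a long system prompt plus tool descriptions, which forces a
# much larger context than a plain chat session, so weights + cache + the
# OS can approach the 32 GB ceiling and push the machine into swapping,
# which would explain the dramatic slowdown described above.
```

If the slowdown is indeed swap pressure, the usual mitigations are a smaller or more aggressively quantized model, or a reduced context window.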