Trying to load Gemma 4. I'm getting this error
Posted by wbiggs205@reddit | LocalLLaMA | 6 comments
I'm trying to load Gemma 4 in LM Studio on a Windows Server 2026 box with an RTX 3090 (24 GB) and 512 GB of RAM. When I try to load it I get the error below. I'm not getting this error on any other model.
```
🥲 Failed to load the model
Failed to load model.
Failed to load model
```
ag789@reddit
In llama.cpp, I need to run a recent release that supports the model:
https://github.com/ggml-org/llama.cpp/releases
An older release that I use doesn't support it.
wbiggs205@reddit (OP)
thanks
alitadrakes@reddit
Select CUDA instead of CUDA 12 in the runtime settings, and load it with a lower context length.
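Lowering the context length helps because the KV cache grows linearly with it and has to fit in VRAM alongside the weights. A rough back-of-envelope sketch, using illustrative layer/head numbers (not Gemma's actual architecture):

```shell
# Rough KV-cache VRAM estimate:
#   2 (K and V) * layers * context * kv_heads * head_dim * bytes per element
# The layer/head/dim values below are illustrative, not Gemma's real config.
layers=32; ctx=8192; kv_heads=8; head_dim=128; bytes=2   # bytes=2 for fp16
kv=$(( 2 * layers * ctx * kv_heads * head_dim * bytes ))
echo "KV cache: $(( kv / 1024 / 1024 )) MiB"             # halving ctx halves this
```

With these numbers the KV cache alone is about 1 GiB at 8192 context, so cutting the context to 4096 frees roughly half of that on a 24 GB card.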
Exotic_Success1451@reddit
Doesn't work
wbiggs205@reddit (OP)
thanks
Gringe8@reddit
Can't help, not enough info.