Just tried Ollama for the first time. It runs terribly, at half the GPU power, on the default model it provides compared to one you add yourself. Any reason why?

Posted by dreamer_2142@reddit | LocalLLaMA | View on Reddit | 16 comments

My GPU power consumption is 250 W (undervolted RTX 3090) when I run Qwen3.5-27B-GGUF, which I added to Ollama using a template (a Modelfile written by GPT). I gave it three tasks to test it: build a snake game, build a Flappy Bird game, and make an interactive grid on a web page for a mouse visual effect. All were successful.
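For context, my Modelfile is roughly along these lines. This is a sketch, not the exact file GPT gave me: the path is a placeholder, and the ChatML-style template is what Qwen models generally use, as far as I know:

```
# Hypothetical sketch of the Modelfile, not the exact one I used
FROM ./qwen-model.gguf

# Qwen models use the ChatML prompt format
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

PARAMETER num_ctx 8192
PARAMETER stop <|im_end|>
```

I then loaded it with `ollama create my-qwen -f Modelfile` and ran it with `ollama run my-qwen`. Side note: `ollama ps` shows the GPU/CPU split while a model is loaded, which might explain the power difference if the built-in model is only partly offloaded to the GPU.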

But I don't know how good or bad my Modelfile is, since I couldn't find a template online. So I thought I'd try Qwen3.6 from inside the app. It downloaded 24 GB, and I was surprised that it failed the first two tasks. Isn't the app supposed to ship the best template and download the best model to give you a good result? And it consumes only 120 W of power.

I think most people get bad results because of the app, not the model.

prompts I've used:

1st task: build for me a snake game for html
--
2nd task: build for me a flappy bird game for html