Cannot even run the smallest model on system RAM?

Posted by FloJak2004@reddit | LocalLLaMA | 22 comments

I am a bit confused. I am trying to run small LLMs on my Unraid server inside the Ollama Docker container, using just the CPU and 16GB of system RAM.

Got Ollama up and running, but even when pulling the smallest models, like Qwen 3 0.6B with Q4_K_M quantization, Ollama tells me it needs far more RAM than I have to spare. Why is that? Shouldn't this model run on any potato? Does it have to do with context overhead?
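For anyone hitting the same wall: context overhead is a plausible culprit, since runtimes typically allocate a KV cache sized for the full configured context window, not just the tokens you actually use. Here is a hedged back-of-envelope sketch; the architecture numbers (layer count, KV heads, head dim) are illustrative assumptions, not verified Qwen 3 0.6B specs, so check the model card for real values:

```python
# Rough RAM estimate for running an LLM: quantized weights + KV cache.
# All architecture numbers below are assumptions for illustration only.

def estimate_ram_bytes(n_params, bits_per_weight,
                       n_layers, n_kv_heads, head_dim,
                       ctx_len, kv_bytes=2):
    """Approximate RAM: weights plus a KV cache that grows
    linearly with the configured context length."""
    weights = n_params * bits_per_weight / 8
    # One K and one V tensor per layer, per token (kv_bytes=2 -> f16)
    kv_per_token = 2 * n_layers * n_kv_heads * head_dim * kv_bytes
    return weights + kv_per_token * ctx_len

# Hypothetical ballpark figures for a 0.6B model at ~4.5 bits/weight:
gib = 1024 ** 3
small = estimate_ram_bytes(0.6e9, 4.5, 28, 8, 128, ctx_len=2048) / gib
big = estimate_ram_bytes(0.6e9, 4.5, 28, 8, 128, ctx_len=40960) / gib
print(f"~{small:.2f} GiB at 2k context, ~{big:.2f} GiB at 40k context")
```

Under these assumptions the weights alone are only ~0.3 GiB, but a long default context window can push the preallocated KV cache into multiple GiB, which would explain a "not enough RAM" error even for a tiny model. Lowering the context length (e.g. Ollama's `num_ctx` parameter) should shrink that allocation.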

Sorry if this is a stupid question; I am trying to learn more about this and cannot find the answer anywhere else.