Strix 4090 (24GB), 64GB RAM — what coder AND general-purpose LLM is best/newest for Ollama/Open WebUI (Docker)?

Posted by AcePilot01@reddit | LocalLLaMA | View on Reddit | 35 comments

Hello,

I was using Coder 2.5 but just decided to delete them all. I MAY move over to llama.cpp, but I haven't yet and frankly prefer the GUI (although running it in Docker sucks because of always having to log back in lmfao, might undo that too).
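If the constant re-login comes from the container losing its data when it's recreated, mounting a named volume for Open WebUI's data directory usually keeps accounts and sessions across restarts. A sketch, assuming the standard `ghcr.io/open-webui/open-webui` image and its default data path — adjust ports/paths if your setup differs:

```shell
# Sketch: run Open WebUI with a persistent named volume so logins survive
# container recreation (image/paths assume the stock Open WebUI setup)
docker run -d \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --add-host=host.docker.internal:host-gateway \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

The key part is the `-v open-webui:/app/backend/data` volume; without it, wiping or recreating the container wipes your accounts too.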

I am looking at Qwen3-Coder next, but not sure what others are thinking/using? Speed matters, but context is a close second, as is accuracy and "cleverness" so to speak, i.e. a good coder lol
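For trying Qwen3-Coder under Ollama, something like the below should work — the exact tag is an assumption (the 30B A3B variant is the one commonly quantized to fit in 24GB of VRAM), so check the Ollama model library for what's actually available:

```shell
# Sketch: pull and smoke-test Qwen3-Coder via Ollama
# (tag is an assumption — verify with the Ollama library / `ollama list`)
ollama pull qwen3-coder:30b
ollama run qwen3-coder:30b "Write a Python function that reverses a linked list."
```

Once pulled, it shows up automatically in Open WebUI's model dropdown as long as Open WebUI is pointed at your Ollama endpoint.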

The paid OpenAI one is fine, whatever their newest GPT is, but I'm not subscribed right now, and I WILL TELL YOU the free one is TRASH lol