High VRAM local coding model — still Qwen 3.6 27B?

Posted by Generic_Name_Here@reddit | LocalLLaMA | 87 comments

I’ve been using Qwen 3.6 27B and it’s amazing. It's not exactly an Opus replacement, but it's great for small tasks and checking work. But if you had 224GB of VRAM, would it still be your choice? Or is there something in the 100B+ range (GPT-OSS, DeepSeek, etc.) that you consider better but that just isn't talked about as much because fewer people can run it? I care more about intelligence than t/s.