I like my models dense. Can model makers please bring back or update the dense models from like 2 years ago? A nice 39b or 72b maybe?

Posted by Porespellar@reddit | LocalLLaMA

Seriously, Qwen3.6 27b is mopping the floor with models like 5 times its size right now. It doesn’t take a rocket scientist to figure out that maybe the whole A2B and A3B MoE thing isn’t the best solution after all. Sure, MoEs let you run a larger model really fast on a potato PC, but I think we’re learning that there is no free lunch.
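To put some rough numbers on the “no free lunch” part, here’s a back-of-envelope sketch (my own illustrative arithmetic, not anyone’s benchmark): if you assume ~4-bit quants, a ~100 GB/s consumer memory bandwidth, and bandwidth-bound decoding, speed scales with *active* params per token while memory footprint scales with *total* params. The MoE size below is made up for illustration.

```python
# Back-of-envelope sketch of the dense vs. MoE tradeoff.
# Assumptions (mine, adjust to taste): ~4-bit quantization (0.5 bytes/param),
# ~100 GB/s memory bandwidth, and decode speed limited by reading the active
# weights once per token.

BYTES_PER_PARAM = 0.5       # assumed ~Q4 quantization
MEM_BANDWIDTH_GBPS = 100    # assumed consumer GPU / Mac class bandwidth

def decode_tok_per_s(active_params_b: float) -> float:
    """Rough decode speed: each token streams all active weights once."""
    bytes_per_token = active_params_b * 1e9 * BYTES_PER_PARAM
    return MEM_BANDWIDTH_GBPS * 1e9 / bytes_per_token

models = [
    # (name, total params in B, active params in B per token)
    ("dense 27b",      27,  27),
    ("dense 72b",      72,  72),
    ("MoE 106b-A12b", 106,  12),  # hypothetical sizes, just for scale
]

for name, total_b, active_b in models:
    mem_gb = total_b * BYTES_PER_PARAM   # RAM/VRAM just for the weights
    tps = decode_tok_per_s(active_b)
    print(f"{name:>13}: ~{mem_gb:4.0f} GB of weights, ~{tps:4.1f} tok/s decode")
```

Under those assumptions the MoE decodes 2-6x faster than the dense models but still has to hold ~2-4x the weights somewhere, and it only spends ~12b params of “thinking” on each token. That’s the tradeoff in a nutshell.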

As a person who has been on this sub for well over 2 years, I can tell you that despite what the benchmarks say, the dense models we’ve shifted away from because we wanted fast models on shitty hardware, those old 35b’s and 72b’s, just seemed way smarter when you were talking with them than the benchmaxed crop we have now.

And yes, I know access to tools can offset knowledge density to a degree. I know we have tool chains now, and harnesses, and MCP, and web search, but giving a toddler access to Google search or handing it a bash shell doesn’t make it smarter if it doesn’t really know what to do with those tools or understand the output it gets back from them.

Anyways, I’ve tested a ton of models over the last 3 years or so, and I can say without a doubt that a lot of big MoEs with low active parameter counts don’t seem nearly as “smart” next to even a small to medium sized dense model. Sure, the speed of MoEs is great on low-resource hardware, but don’t act shocked when a well-trained 27b comes in and leapfrogs the whole pack, and don’t be mad that it’s slow AF either. Show that turtle some respect.

For real though, I would love to see more dense models back in the lineup; they’ve clearly shown their potential and value lately.