Qwen3 Coder Plus vs Grok Code Fast: which is the best free model?
Posted by Level-Dig-4807@reddit | LocalLLaMA | View on Reddit | 4 comments
Hello,
I have been using Qwen Code for a while and it has given me decent performance. Some people claim it is on par with Claude 4, which I'd dispute. Grok Code Fast was released recently and is free for a few weeks, so I have been using it as well; it seems pretty solid and is way faster.
I have tested both side by side. I find Qwen (Qwen3 Coder Plus) better for debugging (which is fairly obvious), but for code generation and building UIs, Grok Code Fast seems way better and also takes fewer prompts.
I'm a student, so I mostly work with free AI and only occasionally get a subscription when required. For day-to-day stuff I rely mostly on the free ones.
OpenRouter is great unless you make many requests, since they rate-limit you; maybe I can add $10 and get a higher limit.
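For reference, OpenRouter exposes an OpenAI-compatible chat completions endpoint, so switching models is just a slug change. A minimal sketch (the API key is a placeholder and the model slug is an assumption; check openrouter.ai for current free-tier model names and limits):

```python
# Build an OpenRouter chat completion request (OpenAI-compatible schema).
# The key and model slug below are placeholders/assumptions.
import json
import urllib.request

API_KEY = "sk-or-..."  # your OpenRouter key (placeholder)

payload = {
    "model": "qwen/qwen3-coder",  # assumed slug, verify on openrouter.ai
    "messages": [
        {"role": "user", "content": "Write a binary search in Python."}
    ],
}

req = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# resp = urllib.request.urlopen(req)  # uncomment with a real key
# print(json.load(resp)["choices"][0]["message"]["content"])
```

Free-tier rate limits apply per account, so the same snippet works whether or not you top up credits.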
Now my question for free users: which model is best for you, and what do you use?
Cultural-Arugula-894@reddit
Hey, what parameter count does the Qwen3 Coder Plus model have exactly? Have you tried GLM 4.5?
Eltipex@reddit
I'm really interested in this one. I agree that Grok Code Fast will only be available for a limited time, and I'm pretty sure I won't start paying once it stops being free. I recently started playing with Chutes and the Qwen/GLM/DeepSeek models and found very interesting strengths and weaknesses in all of them, but honestly I haven't had enough time yet to fully understand any of them, or even to decide which one is best in general. For the moment, I've concluded this:

Kimi K2 (especially the newest version) seems the best for frontend design and producing consistent frontend code. I haven't determined how good it is at backend work or debugging, but I *think* it's definitely not the best for general reasoning, orchestration, or architecture.

DeepSeek R1 has really surprised me with how solid and consistent its CoT is: pretty neat at understanding and following instructions (I don't know if it decays over longer runs), and it's really satisfying to watch it build its steps meticulously, logically, and coherently. I didn't expect it to be that "clever" or to comprehend complex user requests that well.

Qwen... I don't really know what to think. There are too many models, and I switched between them so fast that I never noticed anything remarkable. Coder, Next-Instruct, Next-Thinking: these are the biggest question marks in my head. Sometimes they fail terribly at tasks I initially expect to be "simple," but other times they run consistently and return consistent, accurate code files in larger-context tasks. I assume that's mostly my fault for getting frustrated over specific issues and flash-switching between versions, so I've never watched a full run.

What I'm sure I can say, at least for now, is that Qwen Code Pro (1M context) through Qwen Code seems the most consistent model at following the user's instructions and specific workflows during very long runs, maintaining its consistency until completion (at least for codebase debugging). The one I haven't played with enough to draw any conclusion is GLM; I used it a bit but found it pretty weak for the tasks I tried. Please correct any of my thoughts or conclusions if you've used any of these models for longer (which is easy), and add any new conclusions of your own; I'd really appreciate reading and answering your feedback.
this-just_in@reddit
Since you mentioned free and cloud models, I know that both Alibaba/Qwen and Google/Gemini have generous free tiers and you can use their agent harnesses. Grok Code Fast is free for a little while. At the end of the day you have a ton of available resources if you don’t mind your data being used for training. It seems like you have already identified what works best in what case for you, and nobody else’s experience will be better for your own use cases.
Timely_Rain_9284@reddit
For local inference, I'd recommend running Qwen3 Coder locally if you have the hardware. It's been solid for me on a 3090 with 4-bit quantization; it handles debugging and complex code generation well once you get the prompting right.
Grok Code Fast is impressive for speed, but since it's cloud-based and only temporarily free, I wouldn't build workflows around it.