Qwen released the API for Qwen3-Max-Preview (Instruct)
Posted by ResearchCrafty1804@reddit | LocalLLaMA | View on Reddit | 10 comments

Big news: Introducing Qwen3-Max-Preview (Instruct) — our biggest model yet, with over 1 trillion parameters! 🚀
Now available via Qwen Chat & Alibaba Cloud API.
Benchmarks show it beats our previous best, Qwen3-235B-A22B-2507. Internal tests + early user feedback confirm: stronger performance, broader knowledge, better at conversations, agentic tasks & instruction following.
Scaling works — and the official release will surprise you even more. Stay tuned!
Qwen Chat: https://chat.qwen.ai/
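Since the announcement says the model is available via the Alibaba Cloud API, a minimal call might look like the sketch below. It uses the OpenAI-compatible endpoint that Alibaba Cloud Model Studio (DashScope) exposes for Qwen models; the exact base URL, the model identifier `qwen3-max-preview`, and the `DASHSCOPE_API_KEY` environment variable are assumptions here, so check the official docs for the real identifiers.

```python
# Minimal sketch, assuming the Alibaba Cloud OpenAI-compatible endpoint.
# The base_url, model name, and DASHSCOPE_API_KEY env var are assumptions;
# verify them against the Alibaba Cloud Model Studio documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed env var name
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen3-max-preview",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Qwen3-Max-Preview announcement in one sentence."},
    ],
)

print(response.choices[0].message.content)
```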
ExcellentBudget4748@reddit
*Facepalm*
i already like this model :)))
ResearchCrafty1804@reddit (OP)
You should expect the non-preview version of this model to become open-weight when ready
Simple_Split5074@reddit
Based on what? 2.5 MAX weights never got released AFAIK.
ResearchCrafty1804@reddit (OP)
Because it was replaced by a better model by the time it was ready. You shouldn't doubt Qwen on their commitment to open-weight AI; they have reassured the community many times.
Even if this specific model is not released, a more performant one will definitely be released in its place by Qwen.
Utoko@reddit
They can also be committed to both open source and keeping their very best model closed. It is a business; they are committed to what makes sense to them from a strategic point of view, not from a commitment-to-open-source point of view.
Simple_Split5074@reddit
I don't doubt qwen but OTOH it would be totally understandable to keep a (potential, more benchmarks are needed) SOTA model in-house. Much like the US players try not to be distilled...
Pro-editor-1105@reddit
And it's closed source.
BoJackHorseman42@reddit
What will you do with a 1T parameter model?
MohamedTrfhgx@reddit
So other providers can serve it at cheaper prices.
Simple_Split5074@reddit
Impressive for non-thinking - if that is indeed the case, the web UI has a thinking button after all...