Qwen3 Max Thinking this week
Posted by ResearchCrafty1804@reddit | LocalLLaMA | View on Reddit | 54 comments
Affectionate-Hat-536@reddit
I want GLM 4.6 Air first ;)
drooolingidiot@reddit
Different companies.
Jayfree138@reddit
That's going to be a heck of an API bill 😂. Trillion parameter dense thinking model.
power97992@reddit
It is around a trillion parameter but it is probably not dense
LinkSea8324@reddit
For the latest thinking Qwen3 models (non-hybrid) I always find them overthinking to the point of being unusable; they'll throw out 5 minutes straight of reasoning
AvidCyclist250@reddit
Yeah, the payoff is too low
Bakoro@reddit
I have to do a better job of keeping track of the major papers, but I seem to recall one not long ago that basically said more thinking is not necessarily better. They found that when thinking models got questions correct, they'd put out a fair bit of tokens, but nothing excessive. When the models were out of their depth, they'd produce 5~10x more tokens. It was such a stark difference that they said you had a reasonable chance of telling whether the model was wrong just by the token count.
That one really made me wonder about the state of things, and I hope that's a paper the industry took note of. Thinking is good, but apparently there's a Goldilocks zone that depends on how good the base model is.
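The effect described above suggests a cheap heuristic: flag answers whose reasoning length is a large multiple of the typical budget. A minimal sketch, assuming you already have per-answer reasoning-token counts; the 5x multiplier is illustrative, not taken from the paper:

```python
def flag_suspect_answers(reasoning_token_counts, multiplier=5.0):
    """Return indices of answers whose reasoning-token count exceeds
    `multiplier` times the median count, suggesting the model may have
    been out of its depth on those questions."""
    counts = sorted(reasoning_token_counts)
    median = counts[len(counts) // 2]
    return [i for i, n in enumerate(reasoning_token_counts)
            if n > multiplier * median]

# Example: most answers use ~300 reasoning tokens; one balloons to 2400.
lengths = [280, 310, 295, 2400, 305]
print(flag_suspect_answers(lengths))  # -> [3]
```

This is only a screening signal, not a correctness check; a short answer can still be wrong.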
pigeon57434@reddit
i would imagine Qwen3-Max-Thinking would be a lot more efficient since it's 1T parameters and big models actually utilize their reasoning better, but it will probably still think more than closed reasoning models do
met_MY_verse@reddit
They said they purposely incorporated longer thinking times in their 2507 releases, but I agree, it’s more than excessive.
LinkSea8324@reddit
The hybrid release had enough thinking sauce to do fast and accurate tool calling, but long context was non native.
Sad we can’t have both.
jjsilvera1@reddit
Probably not open source tho 😥
Limp_Classroom_2645@reddit
I'm okay with it. Their other open source models are more than good enough for our local use cases, and their business model is what OpenAI should've done for all their models: small and medium models = open source, for personal use and small businesses; large models = API only, to make profit from professionals and enterprises.
basxto@reddit
I don’t think it’s just about profits. If they don’t run the model, they can’t use your prompts for training. Until DeepSeek R1, you even needed an account to use their AIs.
Charuru@reddit
Yes they have a great balance.
Low88M@reddit
Only Mistral was that smart and generous from nearly the beginning (after the first few great, generous steps). The middle, balanced way! OpenAI was only about money until they began to feel they had to « be generous » toward the localers community (and ok, gpt-oss is a good one…).
colin_colout@reddit
They never open sourced a Max model, so yeah this will certainly be closed source.
More frontier models (closed source or not) is still a good thing. It helps increase the diversity of synthetic data for open source model pretraining/fine tuning.
...plus the qwen team (for now) publishes a surprising amount of their secret sauce research. I assume that will change if they end up leading the pack (and can capitalize on their advantage)
...but for now it benefits the FOSS community so I'll take it!
nullmove@reddit
On a more relevant topic, thinking variant for Coder is also cooking.
Hope he meant the 30B-A3B one :/
Tasty_Lynx2378@reddit
No local, no interest.
Mysterious_Finish543@reddit
It would be great if Qwen3-Max-Thinking was open weight, but even if it wasn't, it would still be an interesting research artifact, since some next-generation Qwen models might be distilled from it, or it might be used to generate synthetic data for training other Qwen models.
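The distillation path mentioned above is simple in outline: query the large teacher model, collect (prompt, response) pairs, and fine-tune a smaller open model on them. A minimal sketch; `call_teacher` is a placeholder, since the actual API endpoint and fields for a closed-weight Max model are not public:

```python
def call_teacher(prompt: str) -> str:
    """Placeholder for a request to the hosted teacher model's API.
    In practice this would be an HTTP call to whatever endpoint
    serves the large closed-weight model."""
    return f"<teacher answer to: {prompt}>"

def build_distillation_pairs(prompts):
    """Collect (prompt, teacher_response) pairs, i.e. the synthetic
    dataset a smaller open model would later be fine-tuned on."""
    return [(p, call_teacher(p)) for p in prompts]

pairs = build_distillation_pairs(["Explain MoE routing briefly."])
print(len(pairs))  # -> 1
```

Real pipelines add filtering and deduplication of the teacher outputs before training, but the data-collection step is essentially this loop.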
Tasty_Lynx2378@reddit
Hope they do release open weights and fair enough about it still being an important release.
That said, there are other research and general LLM subs to discuss that side if it's closed.
I value this sub for focused discussion and news about local models and would prefer it stay focused.
petuman@reddit
Give some examples?
deleted_by_reddit@reddit
r/LLMDevs
"A space for Enthusiasts, Developers and Researchers to discuss LLMs and their applications." 116k members
r/ArtificialInteligence
"A subreddit dedicated to everything Artificial Intelligence. Covering topics from AGI to AI startups. Whether you're a researcher, developer, or simply curious about AI, Jump in!!!" 1.6mil members
r/LLM
"Your community for everything Large Language Models. Discuss the latest research, share prompts, troubleshoot issues, explore real-world applications, and stay updated on breakthroughs in AI and NLP. Whether you’re a developer, researcher, hobbyist, or just LLM-curious, you’re welcome here. Ask questions, share your projects, and connect with others shaping the future of language technology." 24k members
r/LargeLanguageModels
"Everything to do with Large Language Models and AI" 8.6k members
r/artificial
"Reddit’s home for Artificial Intelligence (AI)" 1.2 mil members
Some other small subs too.
There are already enough spaces to discuss closed weight models.
Please keep this sub Local, open weights focused.
EtadanikM@reddit
All of those are bad or inactive.
The reality is there are only two big communities that fixate on every new LLM release (as opposed to being brand based): this one and singularity.
But singularity is basically a cult and prioritizes hyping closed source models. That just leaves this one, really.
petuman@reddit
First two have post activity, but no post rating or comments -- so just bots posting, nobody is reading. Third one has 20 day old posts in 'hot', so even bots aren't spamming there.
I ask for a sub like r/hardware and you bring r/technology -- there's no technical discussion.
Thomas-Lore@reddit
Most of those subs are either barely used or spammed with anti ai articles.
makistsa@reddit
Why has this stupid comment shown up on every post for the last month?
The sub is more about local models, but a new release should be here as well.
Did anyone read the rules?
Posts must be related to Llama or the topic of LLMs.
Tasty_Lynx2378@reddit
Because the focus of the sub has been diluted a lot recently and many of us would prefer it stay local focused. See my other reply above for other subs.
Thomas-Lore@reddit
Then create your own sub, PureLocal or something like that. This sub was always fine with discussing SOTA closed models, but now gatekeepers have appeared and complain under every post. :/
Corporate_Drone31@reddit
To be fair, I wasn't aware of this particular rule until now.
Orolol@reddit
Yeah this sub is only for whining about closed source models.
Tasty_Lynx2378@reddit
There's plenty of other subs for closed models - which have been leaking more to here.
It's not whining to ask that this sub stay focused.
jacek2023@reddit
They upvote anything from China. Some of them are Chinese bots, some of them just hate the west and some of them just hype benchmarks.
tarruda@reddit
How do you reach the conclusion that someone announcing a Chinese LLM on reddit hates the west?
jacek2023@reddit
see number of upvotes in this discussion: "Even though it's closed weight, I'm hoping it beats the big US proprietary models just to see the fallout."
Watchforbananas@reddit
Pretty sure just hating the Wallstreet-AI-Hype-Bubble is enough for such a statement, no need to hate "the West" (however you define that). Quite frankly competitors leapfrogging each other is probably the best outcome for consumers almost everywhere.
Tasty_Lynx2378@reddit
The rush and competition will lead to more problems than benefits IMO - especially past a certain point.
nullmove@reddit
Continuously crying about "some" people is only one step removed from bot behaviour yourself. Saw you trying to shut down discussion of MiniMax M2, a model that was opened 2 days later.
Unless you think your list of "some" totals to an exhaustive match. In that case it's mental illness.
Creative-Struggle603@reddit
Fair enough. There are drones on all sides. Many posts on Reddit are seemingly only for LLM grooming. We are not even meant to be the primary consumers of some of the "news" anymore. AI has replaced us there, while consuming our water, air and electricity.
Rude-Television8818@reddit
Yep but won't probably be open source :/
Final_Wheel_7486@reddit
This could actually get uncomfortable for U.S. AI companies given the pure non-reasoning performance of Max approaching 235B A22B Thinking...
addandsubtract@reddit
What is it thinking about?
Ok_Warning2146@reddit
Aren't the Max models proprietary??? If so, this is off-topic for this forum.
buppermint@reddit
Even though it's closed weight, I'm kind of hoping it beats the big US proprietary models just to see the fallout.
Michaeli_Starky@reddit
It won't beat them.
bilalazhar72@reddit
Will beat GPT-5 and Gemini 2.5 for sure in coding; whether it can beat Sonnet at code benches, not sure.
Michaeli_Starky@reddit
It won't
Solarka45@reddit
Considering how good normal Max is, this could compete with the top end of proprietary models
WithoutReason1729@reddit
Your post is getting popular and we just featured it on our Discord! Come check it out!
You've also been given a special flair for your contribution. We appreciate your post!
I am a bot and this action was performed automatically.
infinity1009@reddit
I think it will be released by Qwen tomorrow.
anonynousasdfg@reddit
Just wondering if they will ever offer generous subscription options for their API models to use in IDEs and CLIs, like z.ai does.
s101c@reddit
This shit isn't local. It's in the cloud, restricted to one provider. Treat it like you treat all other cloud LLMs that don't respect your privacy.
No_Conversation9561@reddit
I wish we had sub 200B qwen coder model. 480B is too big.
onlymostlyguts@reddit
Tuned so that it actually just thinks about a guy named Max 24/7.
MrPecunius@reddit
We've reached singularity, you mean?