Ill_Yam_9994@reddit
That's like the same price as Claude Sonnet, better be good.
Different_Fix_2217@reddit
Localllama, local
entsnack@reddit (OP)
Qwen 3 Max is a closed-weight model and cannot be hosted locally.
I assume you've already commented on each of these:
- Qwen 3 Max
- Qwen 3 Max pricing
- Qwen 3 Max official benchmarks (possibly open sourcing later..?)
- Seems new model qwen 3 max preview is already available on qwen chat
- Qwen released API of Qwen3-Max-Preview (Instruct)
ttkciar@reddit
Your "whattaboutism" is seen and not appreciated.
Tangostorm@reddit
Why is this on LocalLLaMA?
Tangostorm@reddit
Damn, just downvoting without explanations. Dudes, are you having a bad moment?
xadiant@reddit
Probably because this is one of the very few sane LLM subreddits, and any related news is welcome.
ttkciar@reddit
Should we try to keep the sub "sane" by enforcing stricter policies about keeping content on-topic?
darkotic@reddit
They'll lose market share. Z.ai and Synthetic.new will capture it.
pigeon57434@reddit
It's more expensive than GPT-5-Thinking at long contexts, since GPT-5 doesn't have variable pricing; it's the same price at all context lengths.
Timely_Rain_9284@reddit
This is really not cheap at all. I checked, and even though they use a tiered pricing model, their lowest price is still higher than that of Kimi-K2.
mtmttuan@reddit
As expensive as other proprietary models. Even more expensive than Gemini 2.5 Pro for 32k input tokens or more.
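The pricing comparison in the comments above can be sketched as arithmetic. This is a minimal illustration of why tiered pricing can beat flat pricing on short prompts yet cost more at long contexts; all rates and tier boundaries below are hypothetical placeholders, not the actual prices of Qwen 3 Max, GPT-5, Gemini 2.5 Pro, or Kimi-K2, and the billing scheme (the whole request billed at the rate of the tier its size falls into) is an assumption about how such tiers commonly work.

```python
# Tiered vs. flat input-token pricing: a sketch with made-up numbers.

def tiered_input_cost(tokens: int, tiers: list[tuple[int, float]]) -> float:
    """Cost in USD for `tokens` input tokens under tiered pricing.

    `tiers` is a list of (upper_bound_tokens, usd_per_million_tokens)
    pairs sorted by ascending bound. Assumption: the entire request is
    billed at the rate of the tier its total size falls into.
    """
    for bound, rate in tiers:
        if tokens <= bound:
            return tokens / 1_000_000 * rate
    # Requests beyond the last bound are billed at the top tier's rate.
    return tokens / 1_000_000 * tiers[-1][1]

def flat_input_cost(tokens: int, usd_per_million: float) -> float:
    """Cost in USD under a single flat rate, regardless of context length."""
    return tokens / 1_000_000 * usd_per_million

# Hypothetical tiers: cheap below 32k tokens, pricier above.
tiers = [(32_000, 1.2), (128_000, 2.4), (1_000_000, 3.0)]

small = tiered_input_cost(10_000, tiers)   # short prompt, low tier
large = tiered_input_cost(200_000, tiers)  # long context, top tier
flat = flat_input_cost(200_000, 2.5)       # flat-rate model, same request
```

With these placeholder numbers, the tiered model is cheaper for the 10k-token prompt but more expensive than the flat-rate model for the 200k-token one, which is the shape of the complaint about long-context pricing above.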