Most people in this LocalLLaMA are hypocritical.
Posted by Ok_houlin@reddit | LocalLLaMA | View on Reddit | 38 comments
When posts about qwen max appear, there are a lot of comments saying that it shouldn't be discussed.

However, when Gemini 3 and gpt 5 were discussed, not a single comment objected to their being discussed.
218-69@reddit
Lol, literally they say the same shit for every non local model, this is not new or unique
RabbitEater2@reddit
Because who cares about qwen max, it's neither the best model overall nor is it open source. And chatgpt 4.1 and 4.5 weren't talked about as much either, since they weren't much of an improvement.
Gemini 3 and gpt 5 on the other hand, are pretty substantial leaps and a good preview of where models are headed.
No_Gold_8001@reddit
That is no reason to report or spread abuse in such topics. Little-known secret: you can just not open threads that you have no interest in.
People really should stop reporting and being annoying in such threads. The only thing worse than that was when people were posting non-stop bashing competitor models.
a_beautiful_rhind@reddit
Release good model. People will like it. Artificial shilling means it's not.
_nanite_@reddit
Bots gonna bot.
a_beautiful_rhind@reddit
Yea this op is some sort of shill account if you look at the post history. Probably comes with a nice upvote army.
The alibaba yandere arc? "You should be talking about qwen max!!" stares at kitchen knife
SrijSriv211@reddit
Some bots act rude and offended as well. I've seen some. LOL! The best thing to do is ignore them.
__JockY__@reddit
Most people ~~in this LocalLLaMA~~ are hypocritical. Yup.
ElectronSpiderwort@reddit
Also people are different. This sub isn't directed by one hive mind; some people actively talk about advances on the frontier and some people roll their eyes if there's nothing they can gain from it without a subscription. That's okay. (That's more than okay, it's beautiful. And that's rare.) <- proof I am a human that talks to local bots too much
a_beautiful_rhind@reddit
There's always a little cloud discussion but the idea is to discourage it so the sub doesn't become about those things.
Why are you butthurt about qwen though? Max just ain't that good. Plenty of discussion on kimi and deepseek not always on our machines.
Max had a post when it came out. Not many people were interested.
Awwtifishal@reddit
I downvoted the posts but I didn't feel the need to comment because it's obvious that it's not local (or at least I think it should be obvious), while some people may think that qwen max is local or open weights since most of their other models are.
hugo-the-second@reddit
If there were a button that let the user toggle between the more restrictive and the less restrictive version - would this give everybody what they want?
(of course this would depend on people tagging their posts correctly, so it probably wouldn't do away with all conflict)
131sean131@reddit
Legitimately we should have one day a week where the sub allows non local LLM discussions and then the rest of the week you can't post about it.
CtrlAltDelve@reddit
To be fair, the rules do not actually explicitly say you cannot talk about non-local LLMs: https://old.reddit.com/r/LocalLLaMA/about/rules/
Environmental-Metal9@reddit
I wish more people read the rules instead of vibe interpret what they feel the rules should be based on the sub name
10minOfNamingMyAcc@reddit
Yep, but then again, I find that it's important to talk about closed/proprietary models as well. Problem is, there are just so many posts like that a day that they kind of ruin the point of the sub sometimes...
cobbleplox@reddit
I see no rule against vibe interpreting the sub name though.
solartacoss@reddit
bro i’m just trying to vibesurvive
No_Gold_8001@reddit
Yeah, but threads about non-local models get a bunch of reports and get hidden until reviewed. So no matter the rules, if the gang don't like you, you are not showing up.
Leopold_Boom@reddit
This + maybe one post for every serious frontier moving model seems like a reasonable policy.
LamentableLily@reddit
Not everyone's logging on to keep tabs on the topics every hour. Some of us have lives and do things during the day, ya know.
jacek2023@reddit
Gemini, Claude and ChatGPT should be discussed but it's impossible to argue with barbarians who joined this sub. They want to discuss "what computer purchase for games and maybe AI" here and they are upvoted. They hype all "benchmarks". This is the world we are living in.
peculiarMouse@reddit
To be fair, it makes no sense to discuss "closed model a bit better than open", as it's natural that for LocalLLaMA users it's not even worth a consideration. On the other hand, ChatGPT/Gemini often set a new bar for the entire ecosystem and adopt ideas that could migrate into open source, so it makes sense to discuss SOTA closed models after their release. And it makes no sense to discuss "15% better for 4$ per 2kk tokens!" models.
swagonflyyyy@reddit
It only makes sense to discuss SOTA closed source models IF their advances trickle down to open source, otherwise I don't wanna hear it because I can't run them locally.
I think local-only in this sub should be strictly enforced and I'm surprised there isn't a rule for that. Sure, we can discuss closed source companies releasing open source models, or maybe some news article that may affect the open source AI community, but outside of that I don't think closed source models should have a discussion here.
And this only invites tons of shitty cloud-based self-promotion and shilling for closed source, anyway. Nevermind that I would very much like to keep this community under the radar now that open source AI is slowly drawing attention from regulators and a certain someone's files are being published recently.
And for the record, if you compiled and used AI to vibe-code a solution to comb over that particular dataset, great. I'm happy for you, but for the love of God, KEEP THAT SHIT TO YOURSELF. That's all I'm gonna say about the matter.
DunderSunder@reddit
of course it always trickles down. how do you think open models are trained?
entsnack@reddit
You're absolutely right. But it's a weird thing to write 3 paragraphs and bold-caps about ngl.
PotentiallySillyQ@reddit
Bad bot.
ab2377@reddit
qwen gets a lot of love from this community, which is deserved.
Theio666@reddit
We just hit the reality of the MoE era, where running a good model is way harder nowadays. And it's hard to justify when there are coding plans from almost every Chinese lab, plus chutes/nanogpt subs.
I expect this sub to evolve over the next few months into more and more discussions of "how to efficiently call cloud APIs locally": coding tools, deep research tools, RAG/graph-RAG, etc.
Inferencing a good LLM is too expensive compared to cloud for local running to stay that relevant. I have access to an A100 (even multiple), and even that is barely enough: to run GLM Air I have to use AWQ, and let's just say AWQ performance is quite bad for tool calling. llama.cpp is a bad fit for an agentic workflow.
I wish we would get more on multi-GPU setups and local running, but with 3090s slowly retiring and RAM prices increasing, I don't see that happening in the next half a year.
Tomatillo_Impressive@reddit
Corporate shills
Such_Advantage_6949@reddit
Nothing new here. Last year, I was advocating Chinese models like qwen, and people flamed me hard, talking about censorship and stuff, and how the llama models were the best. I advised people to avoid ollama, same result lol. Look where we are now.
pl201@reddit
Do we have a mod for this sub? Can we enforce discussion of local models only?
Monkey_1505@reddit
Qwen Max has more of a relationship to local models than OpenAI's or Google's APIs do, IMO. Although I'd personally prefer if non-open models were only discussed in terms of a meaningful relationship to open models.
AleksHop@reddit
qwen max is a really good one, to be honest. If they open source it, then everything changes again :)
teachersecret@reddit
Hey man, we're just here chattin' bout bots. Don't harsh the mood, dude!
dinerburgeryum@reddit
Maybe we should have a stickied thread for non-local stuff, maybe on a weekly basis? I guess we attract a subset of folks who really want to talk about the larger ecosystem, and that makes sense! But also it's not exactly what we do here...
DinoAmino@reddit
I object to Gemini 3 and GPT 5 being discussed here. Sorry I didn't speak up earlier.
SrijSriv211@reddit
Yes, I've seen such comments as well! Not just Qwen: if you talk about any non-American model (other than deepseek or the open qwen models), you'll get comments saying that this is not the place to discuss it, yet Gemini 3, GPT 5, Grok, and Claude models get discussed. I wouldn't say most people are like this; in fact, most people here are very kind and are just here to learn or share their knowledge. There are only some people who behave like this or even act rude.