spaceman_@reddit
In about 10 hours another Qwen is getting released, if the previous releases are anything to go by (they released the other 3.6 models on Wednesday, 5-7 AM UTC).
50% chance it's our day tomorrow OP!
spaceman_@reddit
I hate how wrong I was about this.
Makers7886@reddit
It was 7 days between the last two and we are on day 6. I'd guess within 24 hours.
spaceman_@reddit
Both releases were Wednesday morning UTC. We're likely to see the release of either 122B or 9B in the next 12 hours.
spaceman_@reddit
I hate how wrong I was about this.
Blues520@reddit
Inb4 24 hours
LocalLLaMA-ModTeam@reddit
Rule 3
NegotiationNo1504@reddit
What about the small tiny ones? 7-9b, 2-4b 🤤
eesnimi@reddit
I wish there were some cyber-magic to fit it inside 64GB of system RAM with q4_k_m instead of iq4_xs
Separate-Forever-447@reddit
have you tried the reap variants?
RazzmatazzReal4129@reddit
It's up. https://huggingface.co/Qwen/Qwen3.6-122B-A10B
2jul@reddit
Why is that link purple? TELL ME
standish_@reddit
https://i.redd.it/7v0xvphixzxg1.gif
HomsarWasRight@reddit
Oh man, this is like smarter than frontier models!
maglat@reddit
Already downloaded, works very great!
-dysangel-@reddit
never gonna give this one up
some_user_2021@reddit
It seems as good as Qwen3.6 397B-A17B!
VoiceApprehensive893@reddit
NO
synth_mania@reddit
Fucker
I opened that in class, volume up xd
spaceman_@reddit
You only have yourself to blame for that one.
spaceman_@reddit
Well played
Kerem-6030@reddit
🥀💔
Intrepid_Travel_3274@reddit
U won this one, I'll come back
Kodrackyas@reddit
3.6 9b ploz?
Hyphonical@reddit
3.6 0.1B for the desperate please 🥺
iLaux@reddit
0.0000001b for the gpu poor? 🥺😢
DocMadCow@reddit
You missed the news: everyone has a GPU now with the Intel driver change that allocates more memory to VRAM. Intel Celeron with 48GB of VRAM and large models GO GO GO.
Borkato@reddit
What lol
Kerem-6030@reddit
fr, or 8b, 4b, etc.
AppealSame4367@reddit
4b on q3.5 35b level plz...
jzn21@reddit
I am waiting on Qwen 3.6 397b. Can't wait, have high expectations.
Aggravating_Pinch@reddit
let me know too. saves me the trouble of doing this
-dysangel-@reddit
What if we're all refreshing localllama and we've not got anyone checking hf?
waitmarks@reddit
You think a subreddit about running local models doesn't have a few agents checking constantly, ready to automatically make a post as soon as something gets uploaded?
-dysangel-@reddit
I see my attempt at absurdist humour was not as obvious as it could have been
Cool-Chemical-5629@reddit
Jacek is most likely checking HF on a cron job and auto-posting whenever a new model drops.
SnooPaintings8639@reddit
From my experience here, I can tell that within 1 minute of HF showing this model, there will be at least 3 separate threads here with dozens of upvotes.
One minute.
seamonn@reddit
Can confirm, have made such a post
ciprianveg@reddit
me waiting on qwen 3.6 397b.....
LegacyRemaster@reddit
accurate
Blues520@reddit
Do you think it will be better than 27b in coding since it will be moe vs dense?
dkeiz@reddit
it will be faster, like 3 times faster.
onil_gova@reddit (OP)
given 3.5 27b and 3.5 122b are in the same ballpark, I would bet it will probably be about the same, maybe slightly better if they trained for longer, which is all I need. I care more about the performance uplift on Macs or similar large-memory systems you get from using the MoE instead of dense.
ga239577@reddit
Don't forget 9B
Juulk9087@reddit
Useless model
Medium_Chemist_4032@reddit
I'm only not panicking because of how good the 27b is.
illkeepthatinmind@reddit
Any bet as to whether a 3-4bit quant of 3.6 122B will be better than 27b?
yanoftheyinoftheyan@reddit
I have no idea what kind of system configuration it would require
VoiceApprehensive893@reddit
me waiting for 9b:
insulaTropicalis@reddit
They said that the line-up is complete with 35B-A3B and 27B. I would love the 122B but it seems unlikely.
Makers7886@reddit
receipts or you are dead to me
Terminator857@reddit
Link?
seamonn@reddit
where?
AppealThink1733@reddit
And Qwen3.6 9B? And OmniCoder 2 9B?
po_stulate@reddit
Same thing as logging in to a gacha game every day to complete daily missions and collect freebies, and feeling like the progress is too slow and you're not getting anything good.
Step away from the game for 2 weeks and suddenly so many new events and stuff waiting for you to claim for free.
Cleric07@reddit
Me asf
localizeatp@reddit
9B*