Qwen releases official MLX quants for Qwen3 models in 4 quantization levels: 4bit, 6bit, 8bit, and BF16
Posted by ResearchCrafty1804@reddit | LocalLLaMA | View on Reddit | 45 comments

🚀 Excited to launch Qwen3 models in MLX format today!
Now available in 4 quantization levels: 4bit, 6bit, 8bit, and BF16 — optimized for the MLX framework.
👉 Try it now!
X post: https://x.com/alibaba_qwen/status/1934517774635991412?s=46
Hugging Face: https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f
Educational-Shoe9300@reddit
Is YaRN possible with these MLX models? I am using LM Studio - how can I use these with context larger than 32K?
SnowBoy_00@reddit
I'd like to know that as well. The lack of documentation around YaRN is pretty sad.
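For reference, the Qwen3 model card documents extending context past 32K by adding a YaRN `rope_scaling` block to the model's `config.json`. Below is a minimal sketch of that edit; the local model path is hypothetical, and whether LM Studio's MLX engine actually reads the field is an assumption, since (as noted above) the documentation is thin.

```python
# Sketch of the YaRN config edit described in the Qwen3 model card.
# The model path is hypothetical; whether LM Studio's MLX runtime honors
# rope_scaling is an assumption, so verify against your runtime's docs.
import json
from pathlib import Path

config_path = Path("~/models/Qwen3-32B-MLX-4bit/config.json").expanduser()
config = json.loads(config_path.read_text())

config["rope_scaling"] = {
    "rope_type": "yarn",
    "factor": 4.0,                               # 32K native * 4 ≈ 128K context
    "original_max_position_embeddings": 32768,
}
config["max_position_embeddings"] = 131072

config_path.write_text(json.dumps(config, indent=2))
```

Some runtimes spell the key `type` instead of `rope_type`, so check which variant yours expects before relying on the longer context.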
kadir_nar@reddit
The quality of the Qwen models is amazing. It's great news that official MLX support has been released.
wapxmas@reddit
Qwen/Qwen3-235B-A22B-MLX-6bit is unavailable in LM Studio.
jedisct1@reddit
None of them appear to be visible in LM Studio
Felladrin@reddit
I've just created pull requests on all their MLX repositories so they are correctly marked as MLX models. [Example]
Once they accept the pull requests, we should be able to see them listed on LM Studio's model manager.
jedisct1@reddit
Nice, thank you for doing this!
Divergence1900@reddit
Is there a way to run MLX models apart from MLX in the terminal and LM Studio?
OriginalSpread3100@reddit
Transformer Lab supports training, evaluation and more with MLX models.
_hephaestus@reddit
Any way to integrate this into open-webui workflows?
Divergence1900@reddit
looks good. i’ll try it out. thanks!
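On the two questions above (running MLX models outside the terminal/LM Studio, and wiring them into Open WebUI): mlx-lm ships an OpenAI-compatible HTTP server, and Open WebUI can point at any OpenAI-compatible endpoint. A minimal sketch, with the model name and port chosen as examples:

```python
# Minimal sketch: serve an MLX quant over an OpenAI-compatible API and call
# it from Python. Open WebUI can be pointed at the same base URL as an
# "OpenAI API" connection. Model name and port are examples, not from the thread.
#
# Start the server first:
#   python -m mlx_lm.server --model Qwen/Qwen3-30B-A3B-MLX-4bit --port 8080
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="mlx")  # key is unused locally
response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B-MLX-4bit",
    messages=[{"role": "user", "content": "Explain MLX in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```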
ortegaalfredo@reddit
Is there any benchmark of batching (many simultaneous requests) using MLX?
EmergencyLetter135@reddit
It's a pity that Mac users with 128 GB of RAM weren't considered for the 235B model; to run the 4-bit version we're only about 3% short on RAM. Okay, alternatively there is a fine Q3 version from Unsloth. Thanks, Daniel!
jzn21@reddit
Is the Q3 also MLX? I find Unsloth MLX models to be scarce...
datbackup@reddit
Unsloth has mlx models? News to me…
yoracale@reddit
We don't, but we might work on them if they're popular.
EmergencyLetter135@reddit
No, the Q3 is a GGUF; MLX versions only come in fixed bit-widths. If you absolutely need an MLX version for a 128 GB Mac, you should use a 3-bit version from Hugging Face. In my tests, however, those were significantly worse than the GGUFs from Unsloth.
bobby-chan@reddit
Have you tried the mixed 3-4 or 3-6 bit quants?
hutchisson@reddit
how can one see that?
whoisraiden@reddit
You look at the size of the quant and compare it to your available ram.
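A rough back-of-envelope for the 235B discussion above. The extra ~0.5 bits per weight for group-wise scales and biases (default group size 64) is an assumption, so treat these as estimates rather than official file sizes:

```python
# Estimate weight memory for Qwen3-235B-A22B MLX quants. The +0.5 bits per
# weight of group-wise scale/bias overhead is an assumption (group size 64);
# real files also include higher-precision embeddings, and you still need
# room for the KV cache and the OS.
params = 235e9
for label, bits_per_weight in [("3-bit", 3.5), ("4-bit", 4.5), ("6-bit", 6.5), ("8-bit", 8.5)]:
    weights_gb = params * bits_per_weight / 8 / 1e9
    print(f"{label}: ~{weights_gb:.0f} GB of weights")
# ~132 GB at 4-bit is just over what a 128 GB Mac can hold, which is why the
# thread falls back to 3-bit or mixed 3/4-bit variants for that machine.
```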
Ok-Pipe-5151@reddit
Big W for Mac users. Definitely excited.
vertical_computer@reddit
Haven’t these already been available for a while via third party quants?
madaradess007@reddit
third party quant != real deal, a sad realization i had 3 days ago
dampflokfreund@reddit
How so? At least on the GGUF side, third-party GGUFs like Unsloth's or Bartowski's are a lot better than the official quants thanks to imatrix and the like.
Is that not the case with MLX quants?
DorphinPack@reddit
Look into why quantization-aware training helps mitigate some of the issues with post-training quantization.
The assumption here is that Alibaba is creating these quants with full knowledge of the model internals and training details, even if it isn't proper QAT.
segmond@reddit
that's a big assumption.
DorphinPack@reddit
Agreed my hard drive is 20% HF quants 🤪
cibernox@reddit
These are not QAT, apparently.
Because of that, and because third-party quants have in the past been as good as, if not better than, the official ones, I find this only moderately exciting.
Nothing suggests these are going to be significantly better than the versions we've had for a while.
Ok-Pipe-5151@reddit
Yes. But official support is still good to have.
Trvlr_3468@reddit
Anyone have an idea of the performance differences on Apple Silicon between the Qwen3 GGUFs on llama.cpp and the new MLX versions with Python?
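One way to get a number for the MLX side is mlx-lm's own generation stats; a fair llama.cpp comparison would need an equivalent llama-bench run on the same machine. A sketch, with the model name as an example:

```python
# Quick throughput check on the MLX side only (a sketch; the model name is
# an example). verbose=True makes mlx-lm print prompt/generation tokens per
# second and peak memory, which can be compared against llama.cpp's numbers.
from mlx_lm import load, generate

model, tokenizer = load("Qwen/Qwen3-8B-MLX-4bit")
generate(
    model,
    tokenizer,
    prompt="Write a haiku about Apple Silicon.",
    max_tokens=256,
    verbose=True,
)
```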
Spanky2k@reddit
Great that they're starting to offer this themselves. Hopefully they'll adopt DWQ soon too, as that's where the magic is really happening at the moment.
Account1893242379482@reddit
How do they compare to the GGUF versions? Are they faster? Are they more accurate? What are the advantages?
No_Conversation9561@reddit
If you already have a DWQ version, don't bother with this.
Zestyclose_Yak_3174@reddit
They should start using DWQ MLX quants. Much better accuracy, also at lower bits = free gains.
datbackup@reddit
It hurts a little every time someone uploads a new MLX model that isn't DWQ. Is there some downside or tradeoff I'm not familiar with? I'm guessing people simply aren't aware... or perhaps they lack the hardware to load the full-precision models, which, as I understand it, is an important part of the recipe for good DWQ models.
Zestyclose_Yak_3174@reddit
I guess it is still a bit experimental, but I can tell you from real-world use cases and experiments that the standard MLX quants are not so great compared to the SOTA GGUF ones made with good imatrix (calibration) data.
More adoption of, and innovation in, DWQ and AWQ is needed.
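For context on why calibration matters here: the standard MLX conversion path is plain group-wise quantization with no calibration data, which is the gap DWQ/AWQ-style methods aim to close. A sketch using mlx-lm's Python API; the argument names follow recent mlx-lm releases and should be checked against your installed version:

```python
# Sketch of the standard (non-DWQ) MLX quantization path: round-to-nearest,
# group-wise, no calibration set. Argument names are from recent mlx-lm
# releases; verify against your installed version.
from mlx_lm import convert

convert(
    hf_path="Qwen/Qwen3-8B",       # example source model
    mlx_path="qwen3-8b-mlx-4bit",  # output directory
    quantize=True,
    q_bits=4,
    q_group_size=64,
)
```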
Creative-Size2658@reddit
Today? That's weird. I was about to replace my Qwen3 32B model with the "new one" from Qwen, but it turns out I already have the new one from Qwen, and it's been up for 49 days.
Creative-Size2658@reddit
That's great! I wonder if it has anything to do with the fact that we can use any model in Xcode 26 (through LM Studio). Qwen2.5-coder was already my daily driver for Swift and SwiftUI, but this new feature will undoubtedly give LLM creators some incentive to train their models on Swift and SwiftUI. Can't wait to test Qwen3-coder!
Web3Vortex@reddit
Looking forward to it! Qwen3 is a good one
Mr_Moonsilver@reddit
Wen coder?
getmevodka@reddit
Qwen3 MLX in 235B too?
hainesk@reddit
Yep
AliNT77@reddit
Is it using QAT? If not what’s different compared to third party quants?
AaronFeng47@reddit
No, I asked Qwen team members and they said there is no plan for QAT.
EternalOptimister@reddit
Anyone been benchmarking these?