State of NVFP4 on mlx
Posted by Sea-Emu2600@reddit | LocalLLaMA | 5 comments
So I’m testing several models on macOS and I’d like to understand whether NVFP4 is the best option for running 4-bit quantized models with MLX. From my investigation, although it’s emulated in software since Apple Silicon doesn’t implement it in hardware, the current MLX implementation looks to be on par, supporting the dual scaling factors (micro-block and tensor level). So should I expect less loss compared to an fp16 model? Is my mental model right?
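To make the dual-scaling idea concrete, here's a rough NumPy sketch of NVFP4-style fake quantization (illustrative only, not MLX's actual implementation): one FP32 tensor-level scale, a per-16-element micro-block scale that the real format stores as FP8 E4M3, and values rounded to the E2M1 grid.

```python
import numpy as np

# E2M1 (FP4) representable magnitudes and the E4M3 max, per the NVFP4 format.
FP4_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
E4M3_MAX = 448.0
BLOCK = 16  # micro-block size

def round_to_fp4(x):
    """Round each element to the nearest E2M1 magnitude, keeping the sign."""
    idx = np.argmin(np.abs(np.abs(x)[..., None] - FP4_VALUES), axis=-1)
    return np.sign(x) * FP4_VALUES[idx]

def nvfp4_quant_dequant(w):
    """Fake-quantize a tensor with NVFP4-style dual scaling.

    Assumes w.size is a multiple of BLOCK. Block scales are kept in float
    here for clarity; the real format would quantize them to FP8 E4M3.
    """
    flat = w.reshape(-1, BLOCK)
    # Tensor-level FP32 scale: maps the largest block scale into E4M3 range.
    tensor_scale = np.abs(w).max() / (FP4_VALUES[-1] * E4M3_MAX)
    # Per-block scale so that each block's max lands on the largest FP4 value.
    block_scale = np.abs(flat).max(axis=1, keepdims=True) / FP4_VALUES[-1] / tensor_scale
    block_scale = np.maximum(block_scale, 1e-12)  # avoid divide-by-zero on all-zero blocks
    q = round_to_fp4(flat / (block_scale * tensor_scale))
    return (q * block_scale * tensor_scale).reshape(w.shape)

w = np.random.randn(64, 64).astype(np.float32)
w_hat = nvfp4_quant_dequant(w)
print("mean abs error:", np.abs(w - w_hat).mean())
```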
EffectiveCeilingFan@reddit
NVFP4 is not meaningfully better than plain old Q4_K_M in any of my testing. It’s just fast on Nvidia Blackwell. That’s about it.
Ok_Warning2146@reddit
It’s fast only on B200/B300 because of the hardware support.
CBW1255@reddit
I think MLX might be sunsetting now that the main (only?) dev quit and joined Anthropic.
llama.cpp is where it's at.
Do correct me if I'm wrong.
phoiboslykegenes@reddit
Awni left Apple, but it looks like Angelos (angeloskath, also working at Apple) has stepped up over the past few months and has been doing a great job IMO. The reality is that it doesn’t have as many maintainers or as much community engagement as llama.cpp, but new models are supported very quickly and things have always been stable for me.
retry51776@reddit
What? Where’s the source for this? Man, I hope not.