State of NVFP4 on mlx

Posted by Sea-Emu2600@reddit | LocalLLaMA | 5 comments

So I’m testing several models on macOS and I’d like to understand whether NVFP4 is the best option for running 4-bit quantized models with mlx. From my investigation, although it’s emulated in software (Apple Silicon doesn’t implement NVFP4 in hardware), the current mlx implementation looks to be on par, supporting the dual scaling factors (micro-block level and tensor level). So should I expect less quality loss compared to an fp16 model than with other 4-bit formats? Is my mental model right?
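For context on what the dual scaling means, here is a rough NumPy sketch of the NVFP4 idea: values are stored as FP4 (E2M1) magnitudes, each 16-element micro-block gets its own scale (E4M3 in the real format; kept as float here for simplicity), and a single tensor-level FP32 scale maps block scales into E4M3 range. This is an illustrative simulation, not the actual mlx kernel, and the function names are made up:

```python
import numpy as np

# Representable non-negative magnitudes of FP4 E2M1 (the NVFP4 element format).
FP4_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_nvfp4_sketch(x, block=16):
    """Simulate NVFP4's dual scaling: per-tensor FP32 scale plus a
    per-16-element micro-block scale (E4M3 in real NVFP4, float here)."""
    x = x.reshape(-1, block)
    # Tensor-level scale so the largest block scale fits E4M3's max (448).
    tensor_scale = max(np.abs(x).max() / (6.0 * 448.0), 1e-12)
    # Per-block scale relative to FP4's max magnitude (6.0).
    block_scale = np.maximum(
        np.abs(x).max(axis=1, keepdims=True) / 6.0 / tensor_scale, 1e-12
    )
    scaled = x / (block_scale * tensor_scale)
    # Round each scaled value to the nearest representable FP4 magnitude.
    idx = np.abs(np.abs(scaled)[..., None] - FP4_VALUES).argmin(axis=-1)
    q = np.sign(scaled) * FP4_VALUES[idx]
    return q, block_scale, tensor_scale

def dequantize_nvfp4_sketch(q, block_scale, tensor_scale):
    return (q * block_scale * tensor_scale).reshape(-1)

np.random.seed(0)
x = np.random.randn(64).astype(np.float32)
q, bs, ts = quantize_nvfp4_sketch(x)
x_hat = dequantize_nvfp4_sketch(q, bs, ts)
# Per-block scaling bounds the absolute error by (block amax) / 6.
err = np.abs(x - x_hat).max()
```

The point of the second (micro-block) scale is that outliers only inflate the error of their own 16-element block instead of the whole tensor, which is why NVFP4 tends to lose less than coarser 4-bit schemes.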