Users of Qwen3-Next-80B-A3B-Instruct-GGUF, How is Performance & Benchmarks?

Posted by pmttyji@reddit | LocalLLaMA | View on Reddit | 32 comments

It's been over a day since we got the GGUFs. Please share your experience. Thanks!

At first, I didn't believe we could run this model with just 30GB of RAM (yes, RAM only). Unsloth actually posted a thread about it, and then someone shared a stat there:

17 t/s with just 32GB RAM + 10GB VRAM using Q4
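For anyone curious how a split like that works in practice: since this is an MoE model with only ~3B active parameters, the usual trick is to keep the dense/attention layers on the GPU and override the expert tensors to stay in system RAM. A rough sketch with llama.cpp (the quant filename and exact tensor-name pattern here are assumptions, not from the thread):

```shell
# Hypothetical llama.cpp run for a 32GB RAM + 10GB VRAM box.
# -ngl 99 offloads all layers to the GPU by default, then
# -ot (--override-tensor) forces the large MoE expert tensors back to CPU RAM,
# so only the small active path lives in VRAM.
./llama-cli \
  -m Qwen3-Next-80B-A3B-Instruct-Q4_K_M.gguf \
  -ngl 99 \
  -ot ".ffn_.*_exps.=CPU" \
  -c 8192 \
  -p "Hello"
```

Speeds will vary a lot with RAM bandwidth and the quant you pick, so treat the 17 t/s figure as one data point, not a guarantee.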

Good news for the Poor GPU Club.