Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation

Posted by gvij@reddit | LocalLLaMA | 150 comments


Evaluated Qwen 3.6 27B across BF16, Q4_K_M, and Q8_0 GGUF quant variants with llama-cpp-python using Neo AI Engineer.
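To make the setup concrete, here is a minimal sketch of running the same prompts through each GGUF variant with llama-cpp-python. The file paths, context size, and sampling settings are my own assumptions, not the exact configuration from this run; the generator is injected into the harness so the comparison logic stays testable without the model files on disk.

```python
# Sketch: run identical prompts against several GGUF quant variants.
# Paths below are hypothetical placeholders, not the files used in the post.
from typing import Callable, Dict, List

QUANTS = {
    "BF16":   "models/qwen-27b-bf16.gguf",
    "Q4_K_M": "models/qwen-27b-q4_k_m.gguf",
    "Q8_0":   "models/qwen-27b-q8_0.gguf",
}

def run_suite(prompts: List[str],
              generate: Callable[[str, str], str]) -> Dict[str, List[str]]:
    """Run every prompt through every quant variant.

    `generate(model_path, prompt)` is injected so the harness itself can be
    exercised without loading a 27B model."""
    return {name: [generate(path, p) for p in prompts]
            for name, path in QUANTS.items()}

def llama_cpp_generate(model_path: str, prompt: str) -> str:
    """Greedy completion via llama-cpp-python (needs the GGUF on disk)."""
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=4096, verbose=False)
    out = llm(prompt, max_tokens=256, temperature=0.0)
    return out["choices"][0]["text"]
```

In practice you would pass `llama_cpp_generate` as the `generate` argument and feed in the benchmark prompts.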

Benchmarks used:

- HumanEval (code generation)
- HellaSwag (commonsense reasoning)
- BFCL (function calling)

Total samples:

Results:

BF16

Q4_K_M

Q8_0

What stood out:

Q4_K_M looks like the best practical variant here. It keeps BFCL almost identical to BF16, drops about 5.5 points on HumanEval, and is still only 4 points behind BF16 on HellaSwag.

The tradeoff is pretty good: a large memory and speed win for a small quality drop.

Q8_0 was a bit underwhelming in this run. It improved HumanEval over Q4_K_M by ~1.8 points, but used 42 GB of RAM vs 28 GB and was slower. It also scored lower than Q4_K_M on HellaSwag in this eval.

For local/CPU deployment, I would probably pick Q4_K_M unless the workload is heavily code-generation focused. For maximum quality, BF16 still wins.
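That decision rule can be written down as a toy helper. The RAM figures come from the numbers reported above (28 GB for Q4_K_M, 42 GB for Q8_0; no BF16 figure is given in the post, so it is omitted), but the selection logic itself is my own sketch of the recommendation, not the author's code.

```python
# Approximate RAM footprint per quant, taken from the post's reported numbers.
RAM_GB = {"Q4_K_M": 28, "Q8_0": 42}

def pick_quant(available_gb: float, code_heavy: bool = False) -> str:
    """Prefer Q8_0 only for code-heavy workloads when it fits in RAM,
    otherwise default to Q4_K_M (the best practical variant in this eval)."""
    if code_heavy and available_gb >= RAM_GB["Q8_0"]:
        return "Q8_0"
    if available_gb >= RAM_GB["Q4_K_M"]:
        return "Q4_K_M"
    raise ValueError("not enough RAM for any quant in this table")
```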

Evaluation setup:

This evaluation was done using Neo AI Engineer, which built the GGUF eval setup, handled checkpointed runs, and consolidated the benchmark results. I also manually reviewed the results.
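For readers unfamiliar with the "checkpointed runs" idea, here is a generic illustration: persist per-sample scores as you go so an interrupted eval can resume where it left off. This is my own sketch of the pattern, not Neo AI Engineer's actual mechanism.

```python
# Generic checkpointed-eval loop: scores are saved after every sample, so a
# crashed or interrupted run resumes instead of starting over.
import json
import os
from typing import Callable, List

def checkpointed_eval(samples: List[str],
                      score: Callable[[str], float],
                      path: str = "eval_ckpt.json") -> List[float]:
    done: List[float] = []
    if os.path.exists(path):
        with open(path) as f:
            done = json.load(f)            # resume from the prior run
    for sample in samples[len(done):]:     # skip already-scored samples
        done.append(score(sample))
        with open(path, "w") as f:
            json.dump(done, f)             # checkpoint after each sample
    return done
```

Running it a second time with the same checkpoint file returns the saved scores without re-scoring anything.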

Complete case study with benchmarking results, approach, and code snippets is in the comments below šŸ‘‡