Same 4 bits. Very different quality. (quant.cpp vs llama.cpp KV compression)

Posted by Suitable-Song-302@reddit | LocalLLaMA

Both use 4-bit KV quantization. One breaks the model, the other doesn't.

The difference is how you quantize. llama.cpp applies the same Q4_0 scheme to both keys and values. quant.cpp quantizes them independently — per-block min-max (128 elements) for keys, Q4 with per-block scales for values. Outliers stay local instead of corrupting the whole tensor.

Results on WikiText-2 (SmolLM2 1.7B):

What this means in practice: on a 16GB Mac with Llama 3.2 3B, llama.cpp runs out of KV memory around 50K tokens. quant.cpp compresses KV 6.9x and extends to ~350K tokens — with zero quality loss.

Not trying to replace llama.cpp — llama.cpp is still faster. But if context length is your bottleneck, this is the only engine that compresses the KV cache without destroying quality.

72K LOC of pure C, zero dependencies. Also ships as a single 15K-line header file you can drop into any C project.

Source: github.com/quantumaikr/quant.cpp