vLLM vs llama.cpp: Huge Context Efficiency Differences on Qwen3.5-4B AWQ
Posted by SageQuestN@reddit | LocalLLaMA | View on Reddit | 8 comments
Hey folks, I’ve been testing Qwen3.5-4B AWQ / Q4_K_M on a single RTX 3060, and the difference between vLLM and llama.cpp is crazy when it comes to handling large contexts. Thought I’d share the numbers because it’s not obvious until you dig in.
Setup
Model: Qwen3.5-4B AWQ / Q4_K_M
GPU: RTX 3060 (12 GB)
vLLM version: latest stable
Context goal: 100k–250k tokens
vLLM flags: --enable-prefix-caching --max-model-len 110000 (note: vLLM's flag for the context limit is --max-model-len, and it takes a plain integer, not "110k")
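For reference, a full launch command along these lines should reproduce the setup (the model ID is a placeholder, and --gpu-memory-utilization is an extra knob the post doesn't mention):

```shell
# Sketch of the vLLM launch for this setup; swap in your actual model path/ID.
vllm serve Qwen/Qwen3.5-4B-AWQ \
  --max-model-len 110000 \
  --enable-prefix-caching \
  --gpu-memory-utilization 0.95
```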
Observations
vLLM
KV memory allocated: ~3.23 GB
Max tokens it can handle: ~23k
Reason:
Allocates KV cache for all layers (32 layers)
Adds padding layers, CUDA graph pool, and prefill overhead (~50% extra memory)
Even with prefix caching, the effective token limit is much lower than theoretical
Result: huge drop compared to model’s native capacity (~250k tokens)
llama.cpp
KV memory footprint: ~16 KB per token (attention layers only)
Total memory usage (model + KV + workspace) at 250k tokens: ~10.8 GB ✅
Supports huge context without crashing
Reason:
Stores KV only for the full-attention layers; the linear-attention (DeltaNet) layers keep a small constant-size state instead of a growing per-token cache
Minimal padding/overhead
Efficient checkpoint/recompute strategy
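For comparison, a llama.cpp launch for the long-context case might look like this (the GGUF filename is a placeholder; -c sets the context window and -ngl the number of layers offloaded to the GPU):

```shell
# Sketch of a llama.cpp server launch; the model filename is a placeholder.
# -c 250000 asks for the 250k context the post targets; tune -ngl to fit 12 GB.
llama-server -m qwen3.5-4b-q4_k_m.gguf -c 250000 -ngl 99
```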
Quick Math
Model architecture (simplified for attention KV):
Layers: 32
KV heads: 4
Head dim: 256
dtype: fp16 → 2 bytes
KV per token (all 32 layers): 2 × 32 × 4 × 256 × 2 bytes = 131,072 bytes ≈ 128 KB
vLLM (~3.23 GB KV budget): 3.23 GB ÷ 128 KB ≈ 25k tokens, consistent with the observed ~23k max
llama.cpp (KV for full-attention layers only): ~16 KB per token → 250k tokens feasible
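That arithmetic is easy to sanity-check in a few lines of Python (the layer/head/dim numbers are the ones quoted above; the 3.23 GB budget is the post's measured vLLM KV allocation):

```python
# Back-of-the-envelope KV-cache sizing, using the config quoted in the post.

def kv_bytes_per_token(layers=32, kv_heads=4, head_dim=256, dtype_bytes=2):
    # factor of 2 = separate K and V tensors per layer
    return 2 * layers * kv_heads * head_dim * dtype_bytes

def max_tokens(kv_budget_bytes, bytes_per_token):
    return kv_budget_bytes // bytes_per_token

per_tok = kv_bytes_per_token()            # 131072 bytes = 128 KB per token
print(per_tok // 1024, "KB/token")        # -> 128 KB/token
print(max_tokens(int(3.23e9), per_tok))   # -> 24642, near the observed ~23k
```

Note this comes out to 128 KB per token, not 64 KB; 128 KB is also the figure that actually matches the ~23k-token ceiling observed above.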
Takeaways
vLLM is amazing for async scheduling, prefix caching, and small/medium context (~20–50k tokens).
llama.cpp is far more efficient for ultra-long contexts (>100k tokens) thanks to attention-only KV and recompute strategies.
Hybrid architectures like Qwen3.5 DeltaNet make vLLM’s “full KV per layer” approach painfully inefficient.
On a single RTX 3060, you can push 250k tokens with llama.cpp, but vLLM crashes at ~23k.
sunychoudhary@reddit
This matches what I’ve seen.
vLLM tends to win on throughput under load and multi-user serving.
llama.cpp is still great for single-user, long-context local runs.
Feels less like “which is better” and more “what are you optimizing for.”
Expensive-Paint-9490@reddit
This thread says that llama.cpp is better for long context.
Lorian0x7@reddit
Don't bother, it looks like a bot.
sunychoudhary@reddit
Not a bot.
Just not treating it as a single-variable comparison. Context length alone doesn’t decide “better.”
sunychoudhary@reddit
Yeah, that can be true depending on how you define “better.”
llama.cpp can handle long context well in single-user, local scenarios, especially with quantization and no heavy batching.
vLLM usually wins when you look at throughput under load, multi-user batching and serving efficiency at scale.
So it’s less a contradiction and more local vs scaled workloads.
Different constraints, different winners.
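To make the throughput point concrete, here's a minimal sketch of firing concurrent requests at a vLLM OpenAI-compatible endpoint (the URL, port, and model name are assumptions; vLLM batches concurrent requests together server-side, which is where it pulls ahead):

```python
# Minimal concurrent-client sketch against vLLM's OpenAI-compatible API.
# The URL and model name below are placeholders for whatever you serve locally.
import json
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "http://localhost:8000/v1/completions"  # assumed default vLLM port

def build_payload(prompt, model="Qwen/Qwen3.5-4B-AWQ", max_tokens=64):
    # One completions-API request body; vLLM batches whatever arrives together.
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def fire(prompts, post):
    # `post` is any callable (url, payload_dict) -> response text, injected so
    # the sketch stays dependency-free and testable without a running server.
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        return list(pool.map(lambda p: post(BASE_URL, build_payload(p)), prompts))

if __name__ == "__main__":
    import urllib.request

    def post(url, payload):
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as r:
            return r.read().decode()

    print(fire(["Hello", "World"], post))
```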
Environmental_Hand35@reddit
Set this flag in the same terminal you use to start vLLM:
Launch vLLM with these flags:
Then look for a log line like this:
(EngineCore pid=39492) INFO 04-08 13:15:06 [kv_cache_utils.py:1324] Maximum concurrency for 83,888 tokens per request: 1.00x
On the second run, the “Maximum concurrency for 83,888 tokens per request” value may increase, because the previous calculation was wrong. If it does not, try restarting one more time.
DeltaSqueezer@reddit
You can use --enforce-eager to free up some VRAM on vLLM.
CooperDK@reddit
My vllm runs fine at 50K+ ... On Windows!