DFlash speculative decoding on Apple Silicon: 85 tok/s, 3.3x on Qwen3.5-9B (MLX, M5 Max)

Posted by No_Shift_4543@reddit | LocalLLaMA | 36 comments

I'm building a native MLX implementation of DFlash (paper) for Apple Silicon. A small draft model generates 16 tokens in parallel via block diffusion, and the target model verifies them in a single forward pass. Output is bit-for-bit identical to the baseline (greedy decoding, exact argmax match).
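The verify step can be sketched in plain Python/NumPy: given the target's greedy logits over the draft block, accept the longest matching prefix and let the target's own argmax supply the token that ends the block. This is a simplified sketch of the general verification idea, not the actual DFlash code; `verify_block` and the toy logits are illustrative.

```python
import numpy as np

def verify_block(draft_tokens, target_logits):
    """Greedy speculative verification (simplified sketch).

    draft_tokens:  (K,) int array proposed by the draft model.
    target_logits: (K, V) target logits from one batched forward pass,
                   position i conditioned on the prompt + draft tokens < i.
    Returns the emitted tokens: the longest prefix of the draft matching
    the target's argmax, plus the target's own correction, so the output
    stream is bit-for-bit identical to baseline greedy decoding.
    """
    target_tokens = target_logits.argmax(axis=-1)
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:
            accepted.append(int(t))  # target's correction ends the block
            break
        accepted.append(int(d))      # draft token confirmed
    return accepted

# Toy example: draft gets the first 3 of 5 positions right,
# so 4 tokens are emitted this cycle (3 accepted + 1 correction).
V = 8
logits = np.full((5, V), -1e9)
for i, tok in enumerate([3, 1, 4, 2, 6]):  # target argmax sequence
    logits[i, tok] = 0.0
print(verify_block(np.array([3, 1, 4, 5, 6]), logits))  # → [3, 1, 4, 2]
```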

Setup: M5 Max, 64GB, MLX, no CUDA.

Results

Qwen3.5-9B bf16

| Gen length | DFlash | Baseline | Speedup |
|---|---|---|---|
| 1024 tokens | 85 tok/s | 26 tok/s | 3.3x |
| 2048 tokens | 80 tok/s | 26 tok/s | 3.1x |

Qwen3.5-4B bf16

| Gen length | DFlash | Baseline | Speedup |
|---|---|---|---|
| 1024 tokens | 109 tok/s | 41 tok/s | 2.7x |
| 2048 tokens | 133 tok/s | 42 tok/s | 3.2x |

The 4B actually gets faster at longer generation. The model is small enough that the draft/verify balance stays healthy as context grows.

Qwen3.5-27B quantized

| Quant | Gen length | DFlash | Baseline | Speedup |
|---|---|---|---|---|
| 8bit | 1024 tokens | 35 tok/s | 14 tok/s | 2.5x |
| 8bit | 2048 tokens | 26 tok/s | 11 tok/s | 2.3x |
| 4bit | 1024 tokens | 44 tok/s | 24 tok/s | 1.9x |
| 4bit | 2048 tokens | 40 tok/s | 23 tok/s | 1.7x |

8bit gives better speedup ratios than 4bit: int4 makes the verify pass so fast that the bf16 draft becomes the bottleneck, while with int8 the draft/verify balance stays healthier.

All numbers are generation-only (first token to last, no prefill). Acceptance rates are around 80-87% across all models.
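As a sanity check on how acceptance translates into throughput, here is a back-of-the-envelope model. It assumes (my simplification, not the post's analysis) that each drafted position independently matches the target's argmax with probability `p`; real acceptance varies by position.

```python
def expected_tokens_per_cycle(p, k):
    """Expected tokens emitted per draft/verify cycle, assuming each of
    the k drafted positions independently matches the target argmax with
    probability p (an illustrative simplification). The +1 is the token
    the target itself contributes each cycle (the correction, or the
    bonus token when the whole block is accepted)."""
    return sum(p ** i for i in range(1, k + 1)) + 1

# At ~85% acceptance with a 16-token block, roughly 6 tokens land per
# cycle; the observed ~3.3x then implies each cycle costs about two
# baseline token-times in draft + verify overhead.
print(round(expected_tokens_per_cycle(0.85, 16), 2))  # → 6.25
```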

What I built

No DFlash MLX implementation existed. I wrote the runtime from scratch. What actually moved the numbers:

head_dim=256 patch. Qwen3.5-9B uses head_dim=256, which MLX's steel_attention didn't support. A 2-line patch unlocked the fast SDPA path.

Sync elision. Restructured the pipeline from 2 GPU→CPU syncs per cycle to 1. At 80+ tok/s, each sync costs ~0.5ms.

Packed QKV projection. 3 matmuls → 1 matmul + split. Fewer kernel dispatches per layer.
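The packing trick is easy to show in NumPy (toy shapes below, not the actual Qwen3.5 projection sizes): concatenate the three weight matrices once at load time, do one matmul per forward pass, then split the result.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads, head_dim = 64, 4, 16   # toy sizes for illustration
d_q = n_heads * head_dim
d_kv = head_dim                           # e.g. a single KV head

Wq = rng.standard_normal((d_model, d_q)).astype(np.float32)
Wk = rng.standard_normal((d_model, d_kv)).astype(np.float32)
Wv = rng.standard_normal((d_model, d_kv)).astype(np.float32)
x = rng.standard_normal((8, d_model)).astype(np.float32)

# Unpacked: three matmuls, i.e. three kernel dispatches per layer.
q, k, v = x @ Wq, x @ Wk, x @ Wv

# Packed: concatenate along the output dim once at load time, then
# one matmul + a cheap split per forward pass.
W_qkv = np.concatenate([Wq, Wk, Wv], axis=1)
qkv = x @ W_qkv
q2, k2, v2 = np.split(qkv, [d_q, d_q + d_kv], axis=1)

# Identical results, fewer dispatches.
assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
```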

Lessons on Apple Silicon

On unified memory everything is bandwidth-bound, which changes the speculative decoding game:

Custom Metal kernels (batched GEMV, fused gated SiLU, custom SDPA) all came back at 0.5-0.8x the speed of stock MLX steel GEMM, so I ended up reverting all of them.

Verify cost is almost flat from 4 to 16 tokens (57ms vs 59ms). Weight loading dominates, not token count. "Verify fewer tokens when confidence is low" doesn't help here.
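A roofline-style cost model makes the flat verify curve intuitive: a decode-style forward pass must stream every weight once regardless of how many tokens it scores, so below the compute roof the cost barely depends on token count. All hardware numbers below are illustrative assumptions, not M5 Max specs or my measurements.

```python
def forward_time_ms(n_tokens, weight_gb, bw_gbps, tflops, params_b):
    """Roofline sketch of a batched verify pass (a simplification).

    mem_ms:     time to stream all weights once -- independent of n_tokens.
    compute_ms: ~2 FLOPs per parameter per token scored.
    The pass takes roughly the larger of the two."""
    mem_ms = weight_gb / bw_gbps * 1e3
    compute_ms = n_tokens * 2 * params_b * 1e9 / (tflops * 1e12) * 1e3
    return max(mem_ms, compute_ms)

# Illustrative 9B bf16 target: ~18 GB of weights, generic unified-memory
# bandwidth/compute figures (assumed). Verifying 16 tokens sits on the
# same memory roof as verifying 4, so the cost is flat.
t4 = forward_time_ms(4, weight_gb=18, bw_gbps=400, tflops=15, params_b=9)
t16 = forward_time_ms(16, weight_gb=18, bw_gbps=400, tflops=15, params_b=9)
print(t4 == t16)  # → True
```

This is also why "verify fewer tokens when confidence is low" buys nothing here: shrinking the block only reduces the compute term, which is not the binding one.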

On quantized models, the optimization landscape flips: the draft (bf16) becomes slower than the verify (int4/int8). This is the opposite of the bf16 case and is a structural limitation of speculative decoding on bandwidth-bound hardware with quantized targets.

Currently working on

Draft compression/distillation for the 27B to fix the bf16 draft bottleneck on quantized targets.

Long context stability. Speedup degrades past 2K tokens due to KV cache growth.

MoE models. DFlash drafts exist for Qwen3.5-35B-A3B (35B total, 3B active). Verify cost of a small model, quality of a large one.

Everything is still very much under construction. Will open source when ready.