Multi-Token Prediction (MTP) for Qwen on llama.cpp + TurboQuant

Posted by gladkos@reddit | LocalLLaMA

Implemented Multi-Token Prediction for Qwen in llama.cpp

+40% decoding throughput with a 90% draft-token acceptance rate, TurboQuant enabled
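To show what the acceptance rate means in practice, here is a minimal sketch of MTP-style speculative decoding: a draft head proposes k tokens ahead, the main model verifies them left to right, and the first mismatch truncates the draft. This is a toy illustration under simplifying assumptions (independent per-token acceptance; function names are illustrative), not the patch's actual implementation.

```python
def verify_draft(draft, target):
    """Count leading draft tokens that match the target model's own choices.

    The verifier accepts the longest matching prefix of the draft; the
    first mismatch and everything after it are discarded.
    """
    accepted = 0
    for d, t in zip(draft, target):
        if d != t:
            break
        accepted += 1
    return accepted


def expected_tokens_per_step(p, k):
    """Expected tokens emitted per verification step, assuming each of the
    k drafted tokens independently matches with probability p.

    The drafted prefix of length i survives with probability p**i, and the
    verifier always emits one token of its own on top.
    """
    return 1 + sum(p ** i for i in range(1, k + 1))


# Example: draft [1, 2, 3] against the target's [1, 2, 4] keeps two tokens.
print(verify_draft([1, 2, 3], [1, 2, 4]))
# With a 90% per-token acceptance rate and 2 drafted tokens per step:
print(expected_tokens_per_step(0.9, 2))
```

With high acceptance, each verification step emits several tokens for roughly the cost of one forward pass over the drafted span, which is where the throughput gain comes from.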

Running locally on a MacBook Pro M5 Max with 64 GB RAM

Patched llama.cpp with MTP and TurboQuant: https://github.com/AtomicBot-ai/atomic-llama-cpp-turboquant

Quantized Qwen 3.6 27B (and 35B) into GGUF with MTP: https://huggingface.co/collections/AtomicChat/qwen-36-udt-mtp

Local AI models app: Atomic.Chat