Kimi K2 Thinking 1-bit Unsloth Dynamic GGUFs

Posted by danielhanchen | LocalLLaMA

Hi everyone! You can now run Kimi K2 Thinking locally with our Unsloth Dynamic 1-bit GGUFs. We also collaborated with the Kimi team on a bug fix: K2 Thinking's chat template was not prepending the default system prompt, "You are Kimi, an AI assistant created by Moonshot AI.", on the first turn. 🥰

We also fixed llama.cpp's custom jinja separators for tool calling: Kimi emits compact JSON like {"a":"1","b":"2"}, not the spaced form {"a": "1", "b": "2"}.
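
For reference, this is the same compact-vs-spaced distinction Python's json.dumps makes with its separators argument. A quick illustration using the example values from above (not the actual template code):

    # Compact separators: what Kimi's template expects for tool-call arguments.
    python3 -c 'import json; print(json.dumps({"a": "1", "b": "2"}, separators=(",", ":")))'
    # -> {"a":"1","b":"2"}

    # Default separators: the spaced form that broke tool calling.
    python3 -c 'import json; print(json.dumps({"a": "1", "b": "2"}))'
    # -> {"a": "1", "b": "2"}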

The 1-bit GGUF will run on 247GB of RAM. We shrank the 1T-parameter model to 245GB (-62%, i.e., down from roughly 245GB / 0.38 ≈ 645GB), and the accuracy recovery is comparable to our third-party DeepSeek-V3.1 Aider Polyglot benchmarks.

All 1-bit, 2-bit and other bit-width GGUFs are at https://huggingface.co/unsloth/Kimi-K2-Thinking-GGUF
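
If you only want a single quant rather than the whole repo, something like this should work (a sketch: it assumes the huggingface_hub CLI is installed, and the --include pattern and --local-dir are our choices, not commands from the post):

    # Pre-download just the 1-bit (UD-TQ1_0) quant from the repo.
    pip install -U huggingface_hub
    huggingface-cli download unsloth/Kimi-K2-Thinking-GGUF \
        --include "*UD-TQ1_0*" \
        --local-dir unsloth/Kimi-K2-Thinking-GGUF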

The suggested sampling settings are temperature = 1.0 and min_p = 0.01. If you do not see <think> in the output, pass --special. The llama-cli command below offloads the MoE expert layers to CPU RAM and leaves the rest of the model in GPU VRAM:

# Cache downloads under this folder name, then run the 1-bit (UD-TQ1_0) quant.
export LLAMA_CACHE="unsloth/Kimi-K2-Thinking-GGUF"
# -ot ".ffn_.*_exps.=CPU" keeps every MoE expert tensor in CPU RAM,
# while --n-gpu-layers 99 puts the remaining layers on the GPU.
./llama.cpp/llama-cli \
    -hf unsloth/Kimi-K2-Thinking-GGUF:UD-TQ1_0 \
    --n-gpu-layers 99 \
    --temp 1.0 \
    --min-p 0.01 \
    --ctx-size 16384 \
    --seed 3407 \
    -ot ".ffn_.*_exps.=CPU"

Step-by-step guide + fix details: https://docs.unsloth.ai/models/kimi-k2-thinking-how-to-run-locally and the GGUFs are at https://huggingface.co/unsloth/Kimi-K2-Thinking-GGUF.

Let us know if you have any questions and hope you have a great weekend!