Extension idea: llama-server with custom samplers
Posted by DeProgrammer99@reddit | LocalLLaMA | 3 comments
Just an idea and a prototype (made by Qwen3.6-27B-UD-Q6_K_XL via OpenCode) for letting users add custom sampling logic to llama-server without maintaining an entire fork and without writing a wrapper that reimplements everything llama-server can do.
Included is an example extension that detects and breaks one kind of loop I've commonly seen with heavily quantized models, where they get stuck repeating the same 1-3 tokens (a rough sketch of the idea is below). Other sampling ideas that aren't in llama.cpp include:

- different sampling parameters during thinking, tool calling, and normal generation
- toggling grammars based on context
- non-GBNF grammars
- guaranteeing that only real tables are referenced in a generated SQL query
- redacting PII in the sampler itself
- other experimental general sampling approaches
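For a concrete picture of what such an extension involves, here's a minimal sketch of a loop-breaking sampler written against llama.cpp's public `llama_sampler_i`/`llama_sampler_init` API from llama.h. This is not the actual loop-detector.cpp from the branch linked below, and the exported `sampling_ext_create` entry point is a guess at what the fork's `--sampling-ext-path` loader might look for; treat it as an illustration of the idea:

```cpp
// Sketch of a loop-breaking sampler extension. llama_sampler_i,
// llama_token_data_array, and llama_sampler_init are the real public API in
// llama.h; the sampling_ext_create entry point is hypothetical.
#include "llama.h"

#include <vector>

struct loop_detector_ctx {
    std::vector<llama_token> history;     // recently accepted tokens
    static const size_t max_history = 64;
    static const int    min_repeats = 4;  // cycles needed before we intervene
};

static const char * loop_name(const llama_sampler * /*smpl*/) {
    return "loop-detector";
}

static void loop_accept(llama_sampler * smpl, llama_token token) {
    auto * ctx = (loop_detector_ctx *) smpl->ctx;
    ctx->history.push_back(token);
    if (ctx->history.size() > loop_detector_ctx::max_history) {
        ctx->history.erase(ctx->history.begin());
    }
}

// If the tail of the history is a 1-3 token cycle repeated min_repeats times,
// return the token that would continue the cycle; otherwise -1.
static llama_token find_loop_continuation(const std::vector<llama_token> & h) {
    for (size_t period = 1; period <= 3; ++period) {
        const size_t need = period * loop_detector_ctx::min_repeats;
        if (h.size() < need) continue;
        bool looping = true;
        for (size_t i = h.size() - need; i + period < h.size(); ++i) {
            if (h[i] != h[i + period]) { looping = false; break; }
        }
        if (looping) {
            return h[h.size() - period]; // next token in the cycle
        }
    }
    return -1;
}

static void loop_apply(llama_sampler * smpl, llama_token_data_array * cur_p) {
    auto * ctx = (loop_detector_ctx *) smpl->ctx;
    const llama_token bad = find_loop_continuation(ctx->history);
    if (bad < 0) return;
    for (size_t i = 0; i < cur_p->size; ++i) {
        if (cur_p->data[i].id == bad) {
            cur_p->data[i].logit = -1e9f; // effectively ban the looping token
            cur_p->sorted = false;        // logits changed; order may be stale
            break;
        }
    }
}

static void loop_reset(llama_sampler * smpl) {
    ((loop_detector_ctx *) smpl->ctx)->history.clear();
}

static llama_sampler * loop_clone(const llama_sampler * smpl);

static void loop_free(llama_sampler * smpl) {
    delete (loop_detector_ctx *) smpl->ctx;
}

static llama_sampler_i loop_iface = {
    /*.name   =*/ loop_name,
    /*.accept =*/ loop_accept,
    /*.apply  =*/ loop_apply,
    /*.reset  =*/ loop_reset,
    /*.clone  =*/ loop_clone,
    /*.free   =*/ loop_free,
};

static llama_sampler * loop_clone(const llama_sampler * smpl) {
    auto * ctx = new loop_detector_ctx(*(const loop_detector_ctx *) smpl->ctx);
    return llama_sampler_init(&loop_iface, ctx);
}

// Hypothetical entry point; the actual symbol the fork expects may differ.
extern "C" llama_sampler * sampling_ext_create() {
    return llama_sampler_init(&loop_iface, new loop_detector_ctx());
}
```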
This is based on the latest master branch (after MTP was merged); it also works with speculative decoding.
Posted for votes here: https://github.com/ggml-org/llama.cpp/discussions/23028
Branch: https://github.com/dpmm99/llama.cpp/tree/master-with-sampling-extensions
The example sampler extension is one fairly short file: https://github.com/dpmm99/llama.cpp/blob/master-with-sampling-extensions/examples/sampling-ext/loop-detector.cpp
Vulkan Windows x64 release build for convenience if you want to try it: https://github.com/dpmm99/llama.cpp/releases/tag/dpmm99-0.1. But here's your daily reminder not to trust random executables from the internet. ;)
Example command:

```
llama-server -np 1 -c 32768 --temp 0.1 -m Qwen3.6-27B-UD-Q6_K_XL-MTP.gguf --spec-type draft-mtp --spec-draft-n-max 3 --sampling-ext-path sampling-ext-loop-detector.dll
```
[Video: the extension working in llama-server with Qwen3.6-27B using MTP]
a_beautiful_rhind@reddit
Ahh, but who will come up with custom samplers? We already have lots of them. Someone at IK is already working on token-triggered ones.
DeProgrammer99@reddit (OP)
People like https://github.com/ggml-org/llama.cpp/discussions/21050, of course!
If an extension sees broad use, that would be a good signal to add it to the main repo, either directly or still as an extension. We'd just need a central repository for extensions that the llama.cpp readme/CLI help points to, such as a category on the discussion board; otherwise, with extensions downloaded from all sorts of different places, it's really hard to compare them (or even find them).
PixelSage-001@reddit
Custom samplers are exactly what we need to break out of the generic AI tone. Being able to write a custom grammar or logit processor that dynamically penalizes specific corporate buzzwords during the generation step would instantly improve the quality of every local model. The architecture is already there in llama.cpp.
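For what it's worth, a single-token version of that idea drops straight into the same `apply` callback sketched above. This assumes a precomputed set of token ids to discourage; building that set (e.g. by tokenizing a buzzword list against the model's vocab) is omitted, and the `buzzword_ctx` struct is hypothetical:

```cpp
// Sketch only: soft-penalize "buzzword" tokens in a llama_sampler_i apply
// callback. How `banned` is populated is left out; this struct is made up
// for illustration.
#include "llama.h"

#include <unordered_set>

struct buzzword_ctx {
    std::unordered_set<llama_token> banned; // token ids to discourage
    float penalty = 5.0f;                   // logit penalty; tune to taste
};

static void buzzword_apply(llama_sampler * smpl, llama_token_data_array * cur_p) {
    auto * ctx = (buzzword_ctx *) smpl->ctx;
    for (size_t i = 0; i < cur_p->size; ++i) {
        if (ctx->banned.count(cur_p->data[i].id)) {
            cur_p->data[i].logit -= ctx->penalty; // soft penalty, not a hard ban
            cur_p->sorted = false;                // logits changed; order may be stale
        }
    }
}
```

One caveat: per-token penalties won't catch multi-token phrases; for those you'd need to track recently accepted tokens via the accept callback, much like the loop detector does.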