Diffusion model support in llama.cpp.
Posted by fallingdowndizzyvr@reddit | LocalLLaMA | 13 comments
I was browsing the llama.cpp PRs and saw that Am17an has added diffusion model support in llama.cpp. It works. It's very cool to watch it do its thing. Make sure to use the --diffusion-visual flag. It's still a PR but has been approved, so it should be merged soon.
paryska99@reddit
I love seeing new directions people take LLMs. Diffusion sure seems like a good one to explore, considering it can refine output over a chosen number of steps.
Semi_Tech@reddit
Whenever I see this I wonder what would happen to benchmark results at 10/100/1000/10k steps.
It would take a lot to run, but it could be something that can be left overnight just to see what comes out.
wh33t@reddit
So you can generate images directly in llama.cpp now?
thirteen-bit@reddit
If I understand correctly, it's diffusion-based text generation, not image generation.
See e.g. https://huggingface.co/apple/DiffuCoder-7B-cpGRPO
And there's a cool animated GIF in the PR showing the progress of the diffusion:
https://github.com/ggml-org/llama.cpp/pull/14644
wh33t@reddit
Oh excellent!
xignaceh@reddit
Kinda, if you ask it to make ASCII art :)
shroddy@reddit
Or SVG
Minute_Attempt3063@reddit
No
There has been work to make diffusion-based text generation possible as well. Same concept as image generation, but instead of pixels, it's text.
In theory you could make more optimised models this way, and bigger ones, while using less space. In theory.
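Conceptually, the generation loop looks something like this toy sketch (not llama.cpp's actual code; predict_token is a made-up stand-in for a real model forward pass):

```python
# Toy sketch of masked-diffusion text generation, just to show the idea.
# predict_token() is a hypothetical stand-in, not a real model call.
import random

MASK = "<mask>"

def predict_token(seq: list[str], pos: int) -> tuple[str, float]:
    """Stand-in for the model: return a token guess and a confidence."""
    return f"tok{pos}", random.random()

def diffusion_decode(length: int = 16, steps: int = 8) -> list[str]:
    seq = [MASK] * length  # start from "pure noise": everything masked
    for _ in range(steps):
        masked = [i for i, t in enumerate(seq) if t == MASK]
        if not masked:
            break
        # Guess every masked position, then commit only the most
        # confident guesses; the rest stay masked for the next step.
        guesses = sorted(
            ((pos, *predict_token(seq, pos)) for pos in masked),
            key=lambda g: g[2],
            reverse=True,
        )
        for pos, tok, _conf in guesses[: max(1, len(masked) // 2)]:
            seq[pos] = tok
    return seq

print(" ".join(diffusion_decode()))
```

Every step refines the whole sequence at once, which is why more steps can mean better output.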
Zc5Gwu@reddit
I hope eventually there is a FIM (fill-in-the-middle) model. Imagine crazy fast and accurate code completion. No HTTP calls means you could complete large chunks of code in less than a couple hundred milliseconds.
muxxington@reddit
Nice. But how will this be implemented in llama-server? Will streaming still be possible with this?
Capable-Ad-7494@reddit
I imagine making this streamable in a rudimentary manner would just be sending the entire output of denoised tokens every time a new one gets denoised.
Then it would be up to the user client to interpret the stream properly.
harrro@reddit
I don't think this would work with the way the streaming (OpenAI-compatible) API works -- there's usually a delta text, and most clients just append that output to the previously received output (clients don't replace the entire text on every streamed piece).
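For reference, a typical OpenAI-style streaming client does something like this (simplified; `chunks` stands in for the parsed server-sent events):

```python
# Simplified OpenAI-style streaming client. Each chunk carries a delta
# that gets appended; earlier text is never revised. `chunks` stands in
# for the parsed server-sent events.
chunks = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
]
text = ""
for chunk in chunks:
    text += chunk["choices"][0]["delta"].get("content", "")
print(text)  # -> Hello
```

Since diffusion revises tokens that were already emitted, append-only deltas can't represent it.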
Capable-Ad-7494@reddit
That’s why I said it would be on the user client to interpret it properly.
There isn’t an established way to stream models like these yet, as far as I know. You can technically bundle positional info in the streaming API response, but it would also be on the user client to interpret that properly.
Just thinking of it as a frame of text and handling it like that is probably the easiest way to deal with it.
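Something like this, roughly (hypothetical frame format, not llama-server's actual API; the frames are made up for illustration):

```python
# Hypothetical frame-based stream: every event is a full snapshot of the
# partially denoised text, so the client replaces its buffer instead of
# appending.
frames = [
    "<mask> <mask> <mask> <mask>",
    "Hello <mask> <mask> <mask>",
    "Hello world <mask> <mask>",
    "Hello world from diffusion",
]
text = ""
for frame in frames:
    text = frame  # replace, don't append
    print("\r" + text.ljust(40), end="")  # redraw in place, like --diffusion-visual
print()
```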