AMD Lemonade Server Update: Ubuntu, llama.cpp, Vulkan, webapp, and more!
Posted by jfowers_amd@reddit | LocalLLaMA | 27 comments
Hi r/localllama, it’s been a bit since my post introducing Lemonade Server, AMD’s open-source local LLM server that prioritizes NPU and GPU acceleration.
GitHub: https://github.com/lemonade-sdk/lemonade
I want to sincerely thank the community here for all the feedback on that post! It’s time for an update, and I hope you’ll agree we took the feedback to heart and did our best to deliver.
The biggest changes since the last post are:
- 🦙Added llama.cpp, GGUF, and Vulkan support as an additional backend alongside ONNX. This adds support for: A) GPU acceleration on Ryzen™ AI 7000/8000/300, Radeon™ 7000/9000, and many other device families. B) Tons of new models, including VLMs.
- 🐧Ubuntu is now a fully supported operating system for llama.cpp+GGUF+Vulkan (GPU)+CPU, as well as ONNX+CPU.
(ONNX+NPU support on Linux and NPU support in llama.cpp are both works in progress.)
- 💻Added a web app for model management (list/install/delete models) and basic LLM chat. Open it by pointing your browser at http://localhost:8000 while the server is running.
- 🤖Added support for streaming tool calling (all backends) and demonstrated it in our MCP + tiny-agents blog post.
- ✨Polished the overall look and feel: new getting-started website at https://lemonade-server.ai, install in under 2 minutes, and the server launches in under 2 seconds.
With the added support for Ubuntu and llama.cpp, Lemonade Server should give great performance on many more PCs than it did 2 months ago. The team here at AMD would be very grateful if y'all could try it out with your favorite apps (I like Open WebUI) and give us another round of feedback. Cheers!
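For anyone who wants to poke at it from code rather than through an app, here's a minimal sketch of a streaming chat request against Lemonade Server's OpenAI-compatible API. The /api/v1 base path and the model id are assumptions for illustration; check the Lemonade docs and the web app's model list for the exact values on your install.
```python
# Minimal sketch: streaming chat against Lemonade Server's
# OpenAI-compatible endpoint. Base path and model id are assumptions;
# confirm them against the Lemonade Server docs for your install.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/api/v1",  # assumed local Lemonade endpoint
    api_key="lemonade",  # placeholder; a local server needs no real key
)

stream = client.chat.completions.create(
    model="Qwen2.5-0.5B-Instruct-CPU",  # hypothetical id; pick one from the web app
    messages=[{"role": "user", "content": "Say hello from my GPU!"}],
    stream=True,  # tokens arrive incrementally as they are generated
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```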
extremist@reddit
Is there a way to enable an info box like this within Open-WebUI? It appears when I'm using Ollama, but not Lemonade.
jfowers_amd@reddit (OP)
Thanks for the feedback! Lemonade provides a /stats endpoint that returns performance info, which Open-WebUI could use to populate that box. However, Open-WebUI itself would need to query that endpoint and display the data; that isn't something Lemonade controls. If you want, you could open an issue with Open-WebUI asking them to support Lemonade's /stats endpoint.
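If you want to experiment before any app adds support, a quick sketch of reading the stats yourself; the exact path is an assumption (the thread only says there's a /stats endpoint), so adjust it if your install serves it elsewhere:
```python
# Sketch: read performance stats (e.g., tokens/sec) from Lemonade Server.
# The path is an assumption; confirm it against the Lemonade docs.
import requests

resp = requests.get("http://localhost:8000/api/v1/stats", timeout=5)
resp.raise_for_status()
# Field names depend on the server; just dump whatever comes back.
print(resp.json())
```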
PlasticSoul266@reddit
Nice project! It seems to provide a similar feature set to Ollama. What are the advantages of using Lemonade over it? Is it "just" the inference backend that's different (Vulkan vs. ROCm)? Is there a plan to provide an official container image for those (like me) who like to deploy services as containers?
jfowers_amd@reddit (OP)
Ollama is great - I would encourage anyone to use Ollama if it meets their needs.
AMD is making Lemonade because we need to guarantee that all the AMD inference engines are fully supported from a single tool that makes it easy and fun to switch between them. This includes Ryzen AI Software (using OGA engine, for NPU) as well as llama.cpp (Vulkan and ROCm today, NPU in the future) on both Windows and Linux. And it should all be tested across the full range of Ryzen AI and Radeon products.
Lemonade is a relatively new product and doesn't have 100% of the above yet (see compatibility chart on the repo), but that is our committed roadmap.
Re: containers: that isn't on the roadmap at this time. Honestly, I don't know a lot about making containers. But if a lot of people need it, I'll learn!
fallingdowndizzyvr@reddit
Is the lemonade CLI available under Windows? If not, is there a way to get stats like tokens per second from lemonade server?
TheCTRL@reddit
Is it compatible with Debian too, or only Ubuntu (Debian-based)?
jfowers_amd@reddit (OP)
We are using the pre-compiled llama.cpp binaries from their releases page: Releases · ggml-org/llama.cpp
They are specifically labeled as Ubuntu builds, and after some brief searching there doesn't seem to be documentation one way or the other on whether they'd work on Debian.
In the future we'll probably need some kind of build-from-source option for llama.cpp+Linux to support the breadth of distros out there.
TheCTRL@reddit
Thank you. I was asking because you can sometimes run into different library versions across distros.
jfowers_amd@reddit (OP)
The easiest thing for us (the Lemonade team) would be if people could convince GGML to provide official binary releases for their Linux distro of choice. At that point it would be very easy to include in Lemonade.
Glittering-Call8746@reddit
So this is Vulkan and not ROCm?
jfowers_amd@reddit (OP)
Yes, we went for Vulkan first, but ROCm is high on the to-do list.
shiny_potato@reddit
Yes we did Vulkan first. ROCm is high on the to-do list, though.
Joshsp87@reddit
Dumb question, but does running Lemonade Server with llama.cpp utilize the NPU on an AMD Ryzen 395? If not, is it possible to make models that can?
jfowers_amd@reddit (OP)
Hey u/Joshsp87, not a dumb question at all! The compatibility matrix is a little complex right now, as the software matures, so we made this table here to help explain: https://github.com/lemonade-sdk/lemonade#supported-configurations
Right now, llama.cpp does not have access to the NPU (it's a work in progress).
But if you'd like to take your NPU for a spin, you can use the Hybrid models available via OnnxRuntime GenAI (OGA) in Lemonade Server on Windows.
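For reference, a quick sketch of what that looks like from code: you request a Hybrid (NPU+iGPU, OGA backend) model through the same OpenAI-compatible API. The base path and model id here are illustrative assumptions; pick a real Hybrid model from the web app's model list.
```python
# Sketch: run a Hybrid (NPU+iGPU, OGA backend) model on Windows via
# Lemonade Server's OpenAI-compatible API. Model id is hypothetical;
# choose an actual Hybrid model from the model list.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")
reply = client.chat.completions.create(
    model="Llama-3.2-1B-Instruct-Hybrid",  # illustrative; Hybrid models target the NPU
    messages=[{"role": "user", "content": "What hardware are you running on?"}],
)
print(reply.choices[0].message.content)
```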
xjE4644Eyc@reddit
One more question: is the NPU/GPU hybrid able to use the GGUF format as well, or only ONNX?
If ONNX is the only format that the NPU/GPU hybrid supports, I would love love love to have Qwen3-30B-A3B supported :)
jfowers_amd@reddit (OP)
GGUF support for NPU/GPU hybrid is a work in progress too.
One of the limitations of ONNX right now is that it doesn't support Qwen3-30B-A3B. The Lemonade team loves that model too! So that was part of the motivation to support GGUF in Lemonade, even though NPU+GGUF wasn't available yet.
I think all of this will converge in the fullness of time :)
Glittering-Call8746@reddit
Glad to hear ROCm is high on the to-do list. All the best!
Ok_Cow1976@reddit
I've got two Radeon VIIs. Are they supported, by any chance?
jfowers_amd@reddit (OP)
I don't see it on the current AMD Vulkan page, and I don't have one to try, but it could be worth a shot. Lemonade Server installation is quick and painless, and Vulkan has broad support in general. Let us know if it works for you!
Ok_Cow1976@reddit
Thanks a lot!
xjE4644Eyc@reddit
Really looking forward to NPU support in Ubuntu!
jfowers_amd@reddit (OP)
Same! We'll put it into Lemonade Server as soon as it's available at the next-highest level of the software stack (Ryzen AI SW).
GrndReality@reddit
Will this work on Rx 6000 gpus? I have an Rx 6800 and would love to try this out
jfowers_amd@reddit (OP)
I don't have a 6000 to try it on, but according to the AMD site the RX 6000 (and 5000) series are supported: AMD Software: Adrenalin Edition 25.10.03.01 for Expanded Vulkan Extension Support Release Notes
Please let me know how it goes!
GrndReality@reddit
Thanks! When I get back to my system I’ll give it a go and report back.
AlanzhuLy@reddit
Congrats! Are there any example applications for leveraging the NPU?
jfowers_amd@reddit (OP)
Thanks! And yes, we've put together 11 end-to-end guides in the apps section of our website here: Supported Applications - Lemonade Server Documentation