We built this project to increase LLM throughput by 3x. Now it has been adopted by IBM in their LLM serving stack!
Posted by Nice-Comfortable-650@reddit | LocalLLaMA | View on Reddit | 63 comments

Hi guys, our team built this open source project, LMCache, to reduce repetitive computation in LLM inference so that systems can serve more people (3x more throughput in chat applications), and it is now used in IBM's open source LLM inference stack.
In LLM serving, the input is computed into intermediate states called the KV cache, which are then used to generate answers. These states are relatively large (~1-2 GB for a long context) and are often evicted when GPU memory runs out. When that happens and a user asks a follow-up question, the engine has to recompute the same KV cache. LMCache is designed to combat that by efficiently offloading and loading these KV caches to and from DRAM and disk. This is particularly helpful in multi-round QA settings where context reuse matters but GPU memory is not enough.
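To make the idea concrete, here is a minimal conceptual sketch in PyTorch (illustrative only, not LMCache's actual API): evicted KV tensors are keyed by their token prefix, copied into pinned CPU memory, and copied back to the GPU instead of being recomputed.

```python
# Conceptual sketch of tiered KV offloading (illustrative, not LMCache code):
# keep evicted KV tensors in CPU RAM keyed by a hash of the token prefix,
# and copy them back to the GPU instead of recomputing the prefill.
import hashlib
import torch

class TieredKVStore:
    def __init__(self):
        self.cpu_cache = {}  # prefix hash -> KV tensor pinned in CPU RAM

    @staticmethod
    def _key(token_ids):
        return hashlib.sha256(str(token_ids).encode()).hexdigest()

    def offload(self, token_ids, kv_gpu: torch.Tensor):
        # pinned host memory allows async (non_blocking) copies that overlap compute;
        # a real system would synchronize the CUDA stream before trusting the buffer
        host = torch.empty(kv_gpu.shape, dtype=kv_gpu.dtype, device="cpu", pin_memory=True)
        host.copy_(kv_gpu, non_blocking=True)
        self.cpu_cache[self._key(token_ids)] = host

    def load(self, token_ids, device="cuda"):
        host = self.cpu_cache.get(self._key(token_ids))
        return None if host is None else host.to(device, non_blocking=True)
```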
Ask us anything!
GitHub: https://github.com/LMCache/LMCache
Nice-Comfortable-650@reddit (OP)
Btw, LMCache currently uses vLLM as the underlying inference engine as well!
okonemi@reddit
How easy would it be to integrate this with a running vLLM project? We just got our model of choice running quantized with vLLM and got great performance, but if we can easily upgrade and increase the performance it would be a no-brainer. We also plan to serve multiple users.
okonemi@reddit
Never mind, just read through the docs, this is amazing! 🤩
Nice-Comfortable-650@reddit (OP)
Thanks!
Altruistic_Heat_9531@reddit
It really does not like WSL2, yeah? Got a CUDA OOM for a 1.5B model on a 3090.
Nice-Comfortable-650@reddit (OP)
Maybe raise an issue for that?
Altruistic_Heat_9531@reddit
https://github.com/LMCache/LMCache/issues/259 someone already raised the issue. I mean for prod I have no issue since it is bare-metal Linux after all, but for testing I have to dual boot instead of just using WSL2.
Karyo_Ten@reddit
How does NVMe performance affect inference speed? Is PCIe gen4 enough to get a 3x perf improvement? Do we need PCIe gen5 disks or a RAID array?
Nice-Comfortable-650@reddit (OP)
I think the performance highly varies with workload and setup. I would suggest benchmarking it according to your own use case.
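For a rough sense of scale, a back-of-envelope estimate (my own assumed numbers, not the authors'; only the 1-2 GB KV cache figure comes from the post):

```python
# Back-of-envelope only -- rough assumptions, not benchmark results.
# Reloading a saved KV cache is mostly a bandwidth problem, while recomputing it
# is a prefill-compute problem, so gen4 NVMe is often already "fast enough".
kv_cache_gb = 2.0             # long-context KV cache size mentioned in the post
gen4_nvme_gbps = 7.0          # assumed peak sequential read, PCIe gen4 x4 NVMe
gen5_nvme_gbps = 14.0         # assumed peak sequential read, PCIe gen5 x4 NVMe
prefill_tokens = 32_000       # assumed long context
prefill_tok_per_s = 10_000    # assumed single-GPU prefill throughput

print(f"load from gen4 NVMe: {kv_cache_gb / gen4_nvme_gbps:.2f} s")       # ~0.29 s
print(f"load from gen5 NVMe: {kv_cache_gb / gen5_nvme_gbps:.2f} s")       # ~0.14 s
print(f"recompute prefill:   {prefill_tokens / prefill_tok_per_s:.2f} s") # ~3.20 s
```

Under these assumptions the bottleneck is prefill compute rather than disk bandwidth, but your workload may differ, hence the suggestion to benchmark.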
Karyo_Ten@reddit
Can you give examples of the hardware and models you have tested?
My workload is inference; I don't see where the difference would be.
ExplanationEqual2539@reddit
Congrats bruh. Happy to see your project being utilized. I know the feeling.
Nice-Comfortable-650@reddit (OP)
Thanks!
kadir_nar@reddit
Does it support speech models?
Nice-Comfortable-650@reddit (OP)
Working on it!
cantgetthistowork@reddit
Can this be expanded to caching entire models for fast switching when you have only enough VRAM for one model?
Nice-Comfortable-650@reddit (OP)
Sounds like an interesting idea, we should put it on our TODO list!
MaverickSaaSFounder@reddit
Very interesting idea by u/cantgetthistowork, could something like this work with a Fireworks AI or a Simplismart?
Lower_Tutor5470@reddit
Is cache blending working in this currently? Sounds exciting. Would this potentially allow caching a single long document as context, then blending it with different system prompts to process multiple smaller-scope tasks in parallel requests?
masc98@reddit
Why didn't you contribute this to vLLM directly? :/
Nice-Comfortable-650@reddit (OP)
Our team actually knows them very well!
For dev, vLLM has its own priorities, and maintaining such a big chunk of code inside vLLM would be more painful than expected. We currently integrate with vLLM through a connector; as long as vLLM maintains the connector interface, it will be fine.
For usage, we are trying to make LMCache a flag in vLLM that is enabled automatically, but the decision is not up to us.
Direspark@reddit
Maintainers of large open source projects like vLLM are rarely going to accept a PR for a feature like this.
masc98@reddit
ok but if it's worth it, one should at least try imo
CheatCodesOfLife@reddit
Sometimes it's just not worth the hassle if you're building features rapidly and don't have the time or patience to interact via pull requests, refactor when the upstream project requests it, etc.
It's also a burden/responsibility to maintain the code if you're the only team who understands it properly.
No idea why you're being downvoted for asking a question like this lol
masc98@reddit
I can see that. yeah this is reddit my dude
bullerwins@reddit
Is this already merged into the mainline vLLM pip package? Does it require any special parameter or does it work automatically? I would like to do some A/B testing.
Nice-Comfortable-650@reddit (OP)
You might need to install LMCache separately for now. We are trying to make it part of vLLM, but the decision is not up to us (we hope we can just make enable-lmcache a flag in vLLM).
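For reference, here is roughly how the LMCache examples wire this into vLLM through the KV-connector interface, as far as I understand them. Treat every class, connector, and environment-variable name below as an assumption that may differ between versions; check the LMCache docs for the current recipe.

```python
# Sketch of enabling LMCache in vLLM via the KV-transfer connector (assumed names).
import os
from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

# LMCache itself is configured through environment variables (or a YAML file);
# these key names are my best recollection of the docs, not gospel.
os.environ["LMCACHE_CHUNK_SIZE"] = "256"          # tokens per KV chunk
os.environ["LMCACHE_LOCAL_CPU"] = "True"          # enable CPU-RAM offloading
os.environ["LMCACHE_MAX_LOCAL_CPU_SIZE"] = "5.0"  # GB of CPU RAM to use

llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    kv_transfer_config=KVTransferConfig(
        kv_connector="LMCacheConnectorV1",  # LMCache's connector for vLLM
        kv_role="kv_both",                  # both store and load KV through LMCache
    ),
)

print(llm.generate(["Summarize this document ..."], SamplingParams(max_tokens=64)))
```

For A/B testing you would run the same prompts once with and once without the kv_transfer_config and compare time-to-first-token on repeated contexts.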
thigger@reddit
I can see how this would be really useful for me, as prompt processing is a major time-sink and I'm often throwing large chunks of the same text back in (but after other prompts, so the KV cache is gone).
Is there any chance this will come to SGLang? (And on WSL2, or even Windows‽)
Nice-Comfortable-650@reddit (OP)
SGLang integration is coming soon!
rorowhat@reddit
Does it work with vision models as well?
Nice-Comfortable-650@reddit (OP)
This is a top priority!
Away_Expression_3713@reddit
Can it be used with on-server LLMs?
Nice-Comfortable-650@reddit (OP)
Yes, on-server LLMs are the main targeted use case!
nomorebuttsplz@reddit
I wonder if this could be adapted into MLX for mac
Nice-Comfortable-650@reddit (OP)
Yeah, we just haven't had enough bandwidth for it (sadly). It would be really cool to see that
llordnt@reddit
I posted something with a similar idea for MLX a while ago. Obviously not as well engineered as this one, but it saves all your latest KV caches to disk. When you send another inference request, it searches with token prefix matching and loads the KV cache from disk. The package is called MLX Textgen; you can find it on GitHub. However, I have been updating the code lately for vision language model integration and to fix some old issues. The changes are not merged to main yet, but you can still play around with the current version.
teamclouday@reddit
Looks very cool! Does this work with llama.cpp?
Nice-Comfortable-650@reddit (OP)
Not right now! We currently support vLLM and are working on SGLang. Would love to see community contribution for this!
Mendon@reddit
Any interest in publishing ARM and CPU-only Docker images?
Nice-Comfortable-650@reddit (OP)
We have this on our TODO list, but it is not an urgent priority at the moment. Feel free to submit a PR!
r4in311@reddit
Don't we already have a KV cache in popular inference applications? Where exactly does your approach differ?
Nice-Comfortable-650@reddit (OP)
This one is an open source implementation of the KV cache component. By inference application, are you talking more about ChatGPT API calls?
V0dros@reddit
I think he means that most modern inference engines (vLLM, SGLang, llama.cpp, exllamav2, etc.) implement some form of KV caching. How is LMCache different?
Nice-Comfortable-650@reddit (OP)
u/deanboyersbike's explanation is pretty accurate on the difference!
deanboyersbike@reddit
I think LMCache supports many types of backends (not just CPU) and has some research papers on KV compression and blending (breaking the prefix problem in autoregression).
droptableadventures@reddit
Practically everything does have a KV cache implemented but it nearly always just compares the last query with the current one, and most of the time only does a prefix match. It doesn't save the KV cache permanently, just keeps it in memory and chops the end off.
It looks like this one saves chunks of KV cache to disk and can reload them when a query has some text in common with any previous query, anywhere in the query.
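A minimal sketch of that chunk-lookup idea (illustrative, not LMCache's actual scheme): each chunk's key folds in the previous chunk's key, which is exactly why plain prefix caching can only reuse a matching prefix.

```python
# Illustrative prefix-chunk lookup (not LMCache's actual code): a chunk's key
# hashes its tokens *and* the previous chunk's key, so a chunk only hits the
# cache if everything before it matched too -- the prefix limitation that
# "blending" techniques try to relax.
import hashlib

CHUNK = 4  # toy chunk size; real systems use something like 256 tokens

def chunk_keys(token_ids):
    keys, prev = [], ""
    for i in range(0, len(token_ids), CHUNK):
        chunk = token_ids[i:i + CHUNK]
        prev = hashlib.sha256(f"{prev}|{chunk}".encode()).hexdigest()
        keys.append(prev)
    return keys

store = {}  # chunk key -> KV tensors (held in CPU RAM or on disk elsewhere)

def reusable_prefix_chunks(token_ids):
    hits = 0
    for key in chunk_keys(token_ids):
        if key not in store:
            break  # the first miss ends the reusable prefix
        hits += 1
    return hits  # leading chunks whose KV can be loaded instead of recomputed
```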
Nice-Comfortable-650@reddit (OP)
We do have the ability to reuse other chunks through a technique that "blends" KV cache. But for the prefix caching functionality, our main difference is similar to what u/hainesk described above.
EstarriolOfTheEast@reddit
How would that work? The previous contexts of the phrases would still have to match, no?
Here is where I am: For standard (autoregressive) decoder attention (and not methods like infilling), the cache is restricted to prefix matching as a result of every token being dependent on/a function of all preceding tokens. This means if you have tokens 1..N-1, then token N's hidden state is derived from a weighted sum of the value vectors of tokens 1..N-1.
We can't just match phrases in isolation; if we have two documents where tokens 5..8 line up but tokens 1..4 do not, then the Key and Value vectors for 5..8 will differ. This is why KV caching is forced to stick to simple prefix matching.
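Written out (standard causal attention, nothing LMCache-specific):

```latex
% Token n's hidden state at layer \ell mixes the values of all earlier tokens,
% so its keys/values at layer \ell+1 depend on the entire prefix, not on token n alone.
h_n^{(\ell)} = \sum_{i \le n}
  \operatorname{softmax}_i\!\left(\frac{q_n^{(\ell)} \cdot k_i^{(\ell)}}{\sqrt{d}}\right) v_i^{(\ell)},
\qquad
k_n^{(\ell+1)} = W_K^{(\ell+1)} h_n^{(\ell)}, \quad
v_n^{(\ell+1)} = W_V^{(\ell+1)} h_n^{(\ell)}.
```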
Can you help me understand what is meant by arbitrary chunks of KV cache here?
droptableadventures@reddit
While the model decoding does rely on the state of the previous tokens, I believe the encoding of tokens as KV vectors to feed into this can be independently calculated for each input token - I think it's normally done in parallel.
EstarriolOfTheEast@reddit
You're right that for an initial input (ie prompt processing) the tokens are processed in parallel, however, each token's representation still derives from all preceding tokens, hence the context dependence.
But once the KV cache has been established, further processing becomes sequential. The key thing to realize is that in both cases (whether processed in parallel or sequentially), each token's representation is still calculated based on all preceding ones.
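A tiny runnable check of this, assuming PyTorch (a toy single-head causal layer, not a real model): changing token 1 changes the next layer's key for token 5, even though tokens 2..5 are identical.

```python
# Toy demonstration: deeper-layer K/V depend on the prefix, not just on the token itself.
import torch

torch.manual_seed(0)
d = 16
emb = torch.nn.Embedding(100, d)
wq, wk, wv = (torch.nn.Linear(d, d, bias=False) for _ in range(3))

def causal_layer(x):
    # single-head causal self-attention (no MLP / layernorm), for illustration only
    q, k, v = wq(x), wk(x), wv(x)
    scores = q @ k.T / d ** 0.5
    mask = torch.triu(torch.full_like(scores, float("-inf")), diagonal=1)
    return torch.softmax(scores + mask, dim=-1) @ v

doc_a = torch.tensor([1, 2, 3, 4, 5])  # tokens 2..5 shared
doc_b = torch.tensor([9, 2, 3, 4, 5])  # only token 1 differs

k_a = wk(causal_layer(emb(doc_a)))     # "layer 2" keys for doc A
k_b = wk(causal_layer(emb(doc_b)))     # "layer 2" keys for doc B
print(torch.allclose(k_a[4], k_b[4]))  # False: token 5's deeper key changed with the prefix
```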
hainesk@reddit
In normal inference applications, the KV cache is stored on the GPU in VRAM. When hosting multiple users, the KV cache may be evicted from VRAM to support inference for other users. If the original user then continues the same conversation, the KV cache needs to be rebuilt before a response can be made.
This takes time and computational resources. It sounds like this open source project creates a sort of "swap" for KV cache, to allow it to be stored in system RAM or even on disk so that instead of rebuilding the cache, it can just be copied back into VRAM for inference.
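To see why the "swap" is cheap, here is a toy microbenchmark (assumes PyTorch and a CUDA GPU; not LMCache code): copying ~1 GB between pinned CPU memory and VRAM over PCIe typically takes a small fraction of a second, far less than recomputing a long prefill.

```python
# Microbenchmark of the "swap" cost: VRAM <-> pinned CPU RAM for ~1 GB of data.
# Results depend entirely on your PCIe link and GPU.
import time
import torch

assert torch.cuda.is_available()
x_gpu = torch.empty(256 * 1024 * 1024, dtype=torch.float32, device="cuda")  # ~1 GB
x_cpu = torch.empty_like(x_gpu, device="cpu").pin_memory()

torch.cuda.synchronize()
t0 = time.perf_counter()
x_cpu.copy_(x_gpu, non_blocking=True)   # offload: VRAM -> pinned RAM
torch.cuda.synchronize()
t1 = time.perf_counter()
x_gpu.copy_(x_cpu, non_blocking=True)   # reload: pinned RAM -> VRAM
torch.cuda.synchronize()
t2 = time.perf_counter()

print(f"offload: {t1 - t0:.3f} s, reload: {t2 - t1:.3f} s for ~1 GB")
```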
Nice-Comfortable-650@reddit (OP)
Thanks a lot for the accurate explanation! We are also actively exploring optimizations to the KV cache itself (e.g. adding new compression methods).
jferments@reddit
Can you share some of the intuition behind how this works in terms of caching KV outside of just prefixes (which already exists in most major LLM servers)? Given the autoregressive nature of transformers, I'm curious to understand how you could be caching anything other than prefixes effectively. Or is this just storing old cached prefixes on disk to prevent recomputing them?
Nice-Comfortable-650@reddit (OP)
Hi, thanks a lot for the questions! I want to answer in two directions:
For non-prefix caching: we do support caching for RAG workloads. This relies on our KV cache blending technique, where the system partially recomputes the KV cache to enable non-prefix cache reuse. https://arxiv.org/abs/2405.16444
For prefix caching: we are targeting API server use cases where multiple users are involved and out-of-the-box prefix caching is not enough. We also optimize KV cache loading/offloading with custom CUDA kernels that efficiently overlap communication with computation. Think of LMCache as an extension to major LLM inference frameworks like vLLM and SGLang.
Chromix_@reddit
llama.cpp already supports this - yet you wouldn't use llama.cpp for serving multiple users, unless you don't have enough VRAM and need to do CPU offloading.
There are relevant CLI arguments and POST params for it.
Nice-Comfortable-650@reddit (OP)
Thanks for the info! LMCache is targeting use cases specifically when multiple users are served. In this case offloading to CPU and even disk can bring lots of advantages. Glad to see similar ideas are useful for llama.cpp as well.
Sorry-Hyena6802@reddit
This looks quite awesome, but a question as a hobbyist: how well does it support Windows WSL? vLLM doesn't support pinned memory in WSL and thus doesn't natively support offloading in any capacity afaik, beyond what NVIDIA's drivers enable in the barest sense. Also, is it possible to see this in a Docker container, for users who run vLLM's Docker container and would like a plug-and-play swap between the two to see how it compares for their needs?
AbortedFajitas@reddit
Hi, I created a distributed AI network and our workers use Kobold and Aphrodite as text-gen engines: https://docs.aipowergrid.io
Would this be something that could be made into a one-size-fits-all distributed setup? All we need is an OpenAI-compatible endpoint to become a worker.
azhorAhai@reddit
Very interesting! Which IBM project is it?
Nice-Comfortable-650@reddit (OP)
llm-d. We have our own version of it which offers seamless integration and SOTA performance as well! https://github.com/vllm-project/production-stack
Lanky_Doughnut4012@reddit
Oof, this can save me a lot of money. Incredible.
Nice-Comfortable-650@reddit (OP)
We have been saving hundreds of thousands of dollars for companies already ;) How much do you pay for inference now?
pmv143@reddit
Super interesting work. Curious: how does LMCache handle context reuse across multi-GPU or containerized setups? Especially in scenarios where memory constraints trigger frequent evictions, does the system proactively prefetch or just reload on demand? Would love to understand how you balance latency vs throughput under load churn.
Nice-Comfortable-650@reddit (OP)
We also maintain the vLLM production stack repository: https://github.com/vllm-project/production-stack, which is a K8s deployment with vLLM+LMCache across many nodes.
We have different storage backend options as well. For example, you can use Redis or Mooncake store, which is distributed already.
The last layer of storage is usually much bigger (you have access to TBs of SSD beyond the GPUs), so it can usually hold most of the KV cache. Prefetching is also possible in the production stack, enabled by an LLM router.
We are currently adding smarter logic here, but we are also looking forward to seeing what the community proposes!