Serve 100 Large AI Models on a single GPU with minimal impact on time to first token.

Posted by SetZealousideal5006@reddit | LocalLLaMA

I wanted to build an inference provider for proprietary AI models, but I did not have a huge GPU farm. I started experimenting with serverless AI inference, but found that cold starts were huge. I went deep into the research and put together an engine that loads large models from SSD to VRAM up to ten times faster than alternatives. It works with vLLM and transformers, with more integrations coming soon.

With this project, you can hot-swap entire large models (32B) on demand.
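To make the idea concrete, here is a minimal sketch of what an on-demand hot-swap loop could look like. The function and variable names (`fast_load_to_vram`, `evict`, `serve`, `_resident`) are illustrative placeholders, not the project's actual API; the sketch assumes a CUDA GPU with `torch` and `transformers` installed, and the accelerated SSD-to-VRAM path is stubbed with a plain `from_pretrained` load.

```python
"""Illustrative sketch of hot-swapping models on one GPU (not the project's real API)."""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

_resident = {}  # model_name -> (model, tokenizer) currently held in VRAM


def fast_load_to_vram(model_name: str):
    """Placeholder for the engine's accelerated SSD -> VRAM load path."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16
    ).to("cuda")
    return model, tok


def evict(model_name: str):
    """Drop a resident model so another one can take its VRAM."""
    model, _ = _resident.pop(model_name)
    del model
    torch.cuda.empty_cache()


def serve(model_name: str, prompt: str) -> str:
    """Hot-swap: evict whatever is resident, load the requested model, generate."""
    for name in list(_resident):
        if name != model_name:
            evict(name)
    if model_name not in _resident:
        _resident[model_name] = fast_load_to_vram(model_name)
    model, tok = _resident[model_name]
    inputs = tok(prompt, return_tensors="pt").to("cuda")
    out = model.generate(**inputs, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)
```

The whole point of the engine is that the load step inside `fast_load_to_vram` finishes fast enough that swapping a 32B model in on a cold request barely moves time to first token.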

It's great for:

And it's open source.

Let me know if anyone wants to contribute :)