I'm using llama.cpp to run models larger than my Mac's memory
Posted by tbaumer22@reddit | LocalLLaMA | 12 comments
Hey all,
Wanted to share something that I hope can help others. I found a way to optimize llama.cpp inference specifically for running models that normally wouldn't fit in local memory. It's called Hypura, and it places model tensors across GPU, RAM, and NVMe tiers based on access patterns, bandwidth costs, and hardware capabilities.
I've found it to work especially well with MoE models, since not all experts need to be loaded into memory at the same time: the inactive experts can be offloaded to NVMe until they're needed.
Sharing the Github here. Completely OSS, and only possible because of llama.cpp: https://github.com/t8/hypura
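The tiered-placement idea described above can be sketched roughly like this: greedily put the most frequently accessed tensors in the fastest tier that still has room. This is a hypothetical illustration, not Hypura's actual code; all names, sizes, and access frequencies are made up.

```python
def place_tensors(tensors, tiers):
    """tensors: list of (name, size_bytes, access_freq) tuples.
    tiers: list of (tier_name, capacity_bytes), fastest to slowest.
    Returns a dict mapping tensor name -> tier name."""
    placement = {}
    free = {name: cap for name, cap in tiers}
    # Hottest tensors first, so they land in the fastest tier with space.
    for name, size, _freq in sorted(tensors, key=lambda t: -t[2]):
        for tier_name, _cap in tiers:
            if free[tier_name] >= size:
                placement[name] = tier_name
                free[tier_name] -= size
                break
    return placement

# Illustrative workload: dense attention weights are hot every token,
# while individual MoE experts are only touched when the router picks them.
tensors = [
    ("attn.q", 2_000_000_000, 100),   # dense layer: touched every token
    ("expert.0", 4_000_000_000, 30),  # warm expert: routed to often
    ("expert.7", 4_000_000_000, 1),   # cold expert: rarely routed to
]
tiers = [("gpu", 3_000_000_000), ("ram", 4_500_000_000), ("nvme", 10**12)]
print(place_tensors(tensors, tiers))
# → {'attn.q': 'gpu', 'expert.0': 'ram', 'expert.7': 'nvme'}
```

A real placer would also weigh transfer bandwidth between tiers and re-evaluate as routing statistics change, but the greedy shape is the core of it.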

HolidayAd1613@reddit
Nice. My startup did this for Windows in our custom Chromium fork with a native llama.cpp runtime. Kind of like pseudo-unified memory. No CPU offload, just sysmem "spill over". For example, run a 31B-parameter 20GB qwen3.5-a3b model on a machine with only a 5080. Works great with a significant reduction in tok/s generation, but still around 26 tok/s. It's nice to let users select larger models if they want to. We did this before the NVIDIA driver offered the CUDA Sysmem Fallback Policy, so I guess everyone has it now. Ours is WDDM independent, but not sure anyone cares about that. Fun times.
fishhf@reddit
I thought llama.cpp can already run models larger than your memory via memory mapping?
tiffanytrashcan@reddit
This essentially manages and optimizes that setting. It's going to manage specific layers better and watch which ones actually matter.
fishhf@reddit
Then we want to see a performance comparison, not a statement saying llama.cpp crashes.
In fact I just ran qwen3.5 9b q8 in llama.cpp on a 2014 MacBook Air with 8GB of RAM, just to confirm whether mmap is broken on Mac. It's not.
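The mmap behavior being discussed here can be demonstrated in a few lines: a read-only, file-backed mapping lets a process address a file far larger than it ever reads into RAM, with the OS faulting pages in on demand and dropping clean pages under memory pressure. This is a generic sketch of the mechanism, not llama.cpp's loader; the file path and sizes are throwaway.

```python
import mmap
import os
import tempfile

# A 64 MiB sparse file standing in for model weights.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
with open(path, "wb") as f:
    f.truncate(64 * 1024 * 1024)

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    mapped_len = len(mm)          # whole file is addressable...
    probe = mm[32 * 1024 * 1024]  # ...but only touched pages get faulted in
    mm.close()

print(mapped_len, probe)
# → 67108864 0
```

This is why llama.cpp can load a GGUF bigger than physical RAM at all: the mapping costs address space, not resident memory, and the kernel manages which pages stay cached.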
tbaumer22@reddit (OP)
Appreciate the feedback. Updating the benchmarks/charts to show this. My original concern with a CPU-only benchmark comparison was that it would be unfair to compare llama.cpp's CPU-only mode to Hypura (since Hypura taps into more resources).
Ended up building and running one, and here are the results I've found:
fishhf@reddit
Hmmm, --fit should be enabled by default in llama.cpp now; is it causing the crash in your setup?
tiffanytrashcan@reddit
It's not like they're really screaming that it's "broken."
The GitHub page does imply issues with out-of-memory errors, but that's exactly what brings most people to a repo like this: users who haven't properly configured llama.cpp run into issues and look for a solution. This seems to provide that and quite a bit more.
fishhf@reddit
OP did mention that llama.cpp crashes when running models larger than available RAM and that their project solves the problem. The provided graph explicitly mentions llama.cpp crashing without making a performance comparison.
If there's some smart optimization being done, then a comparison should be provided, especially since operating systems already cache frequently accessed memory-mapped pages in RAM.
braydon125@reddit
Kind of like nvidia greenboost
tbaumer22@reddit (OP)
Yes exactly. NVIDIA greenboost for Metal 😄
srigi@reddit
Modern QLC SSDs guarantee something like 1,000 overwrites per memory cell; TLC around 10k, MLC around 100k.
Doing matmul ops on matrices stored on an SSD screams killing the SSD in a month.
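The wear concern above comes down to write volume, which can be estimated with back-of-the-envelope math. All numbers here are illustrative assumptions (drive size, daily writes), not measurements; the point is just that cell endurance only matters if inference actually writes weight data back to disk.

```python
# Back-of-the-envelope SSD wear estimate. All numbers are illustrative.
drive_tb = 2.0       # hypothetical 2 TB QLC drive
pe_cycles = 1_000    # ~1,000 P/E cycles per cell, as cited for QLC above

tbw = drive_tb * pe_cycles    # total terabytes written before wear-out
daily_writes_tb = 2.0         # e.g. rewriting a 20 GB model 100x per day
days_to_wear_out = tbw / daily_writes_tb

print(tbw, days_to_wear_out)
# → 2000.0 1000.0
```

Reads don't consume P/E cycles, so a read-only weight-streaming design sidesteps this entirely; only sustained writes eat into the TBW budget.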
tbaumer22@reddit (OP)
Appreciate this concern; it actually prompted me to do some research of my own. From what I've learned so far, there's no reason to be concerned: Hypura reads tensor weights from the GGUF file on NVMe into RAM/GPU memory pools, and compute happens entirely in RAM/GPU.
With this architecture, inference never writes to the SSD.