I tried running Gemma 4 on my phone. llama.cpp failed, LiteRT‑LM didn’t.

Posted by GeeekyMD@reddit | LocalLLaMA

I wanted Gemma 4 as a usable local model on my Android phone, not a benchmark screenshot.

If you’re thinking about running serious local models on phones, I wrote up the full experiment and open-sourced both the Android side and the Termux side.
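
The post itself doesn’t include code, but to give a feel for the Android side: a minimal on-device Gemma call in Kotlin looks roughly like the sketch below, using Google’s MediaPipe LLM Inference API (the Kotlin entry point in the same LiteRT family). The model path, file name, and option values here are my placeholders, not the author’s actual code or repo layout.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import com.google.mediapipe.tasks.genai.llminference.LlmInference.LlmInferenceOptions

// Minimal sketch: load a Gemma model bundle from local storage and run a
// single prompt. The path and option values are placeholders (assumptions),
// not taken from the post or its repo.
fun runGemmaOnce(context: Context, prompt: String): String {
    val options = LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/gemma.task") // assumed on-device location
        .setMaxTokens(512)                          // cap the response length
        .build()

    // Loading the model is the expensive step; in a real app you'd keep this
    // instance alive across prompts instead of recreating it every call.
    val llm = LlmInference.createFromOptions(context, options)
    return try {
        llm.generateResponse(prompt) // blocking, single-shot generation
    } finally {
        llm.close()
    }
}
```

For a quick smoke test that’s enough; anything interactive would hold onto the `LlmInference` instance, since model load dominates latency on a phone.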