Local AI on Mobile
Posted by TheGreatYeeter113@reddit | LocalLLaMA | 21 comments
Hey guys! I’m very new to running models locally, so please forgive my ignorance. I’m curious whether there are any decent and, more importantly, trustworthy local AI apps available on mobile (mainly iOS). I’ve seen quite a few such apps on the App Store, but most are published by a single person and don’t have any more than a few dozen reviews, so I’m not sure I can really trust them. I’m generally just looking for any trustworthy app that lets me run various models locally.
Traditional-Card6096@reddit
I am building Solair AI. It’s new, but it’s fully private and offline, with optional web search and many other features. There’s also a Hugging Face browser integration, so you can get any compatible model you want. Give it a try, it’s free :)
https://apps.apple.com/ch/app/solair-ai-local-ai/id6758450823?l=en-GB
External-Process6667@reddit
This app is incredible, great job! I am experiencing two bugs related to the camera on the iPhone 16 Plus:
1. When flipping the camera horizontally, the preview clips off-screen, so I can’t see the full viewport.
2. After uploading an image, it processes for a few seconds and then the app crashes.
I am going to Japan in a couple of months and would love to use this app while I’m out there. I gave a donation as well.
Traditional-Card6096@reddit
Thank you! I’ll look at every issue you mentioned. I’ve already improved some of them and should be able to fix the rest in the next update!
oblivion098@reddit
Thank you. How do you make money if it’s free? Who is investing in this project?
Traditional-Card6096@reddit
No money involved. I’m just doing it because it’s fun.
Witty_Ticket_4101@reddit
Man! You are my hero! Outstanding job!!!
FrostyMisa@reddit
I tried your app before and after the last update on an iPhone 16 Pro Max. I found one problem: the app doesn’t let me run 4B models around 3 GB in size (it tells me there’s not enough memory to run them), even though other apps let me run models up to 4 GB. For example, Locally AI and PocketPal let me run Qwen3.5 Q4_K_M without a problem.
Traditional-Card6096@reddit
Fixed for next update! Thanks!
Traditional-Card6096@reddit
I will test on the iPhone 16 Pro I have here. It should work, so I’ll fix it ASAP. Thank you for the feedback.
Sweet-Wall-4734@reddit
I've built Cortex, a local AI app, for exactly this reason. It's free on the App Store.
https://apps.apple.com/in/app/cortex-local-ai/id6757537386
Note that you'll only have a single default model. If you're looking to try out different models, this is not for you. But if you want fast local inference, including images & documents, you can try this out.
bdfortin@reddit
My preferred LLM app so far is Locally AI, mostly because its level of integration with Apple Shortcuts allows you to create a loop that asks multiple models the same question. Keep in mind that if you’re using a Shortcut to do that, it’ll be running while the app is in the background, where memory is limited, typically to ~2 GB, so don’t try to summon larger models.
magentswm@reddit
Hi! There's Private Mind, where you can use various models or upload your own :) https://privatemind.swmansion.com/
DIBSSB@reddit
Is voice mode supported in this?
Maxdme124@reddit
I have used Locally AI and for me it’s the best UX-wise: it has all the latest local models you may want to run, supports image and PDF uploads, and even has Shortcuts support and a local ChatGPT-like voice mode. There’s also PocketPal, which is a bit more tinkerer-friendly but lacks a lot of the polish and features of Locally AI.
maxton41@reddit
Does Locally AI allow the model you run to search the web for answers?
TheGreatYeeter113@reddit (OP)
That sounds great. I’m just mildly concerned/paranoid since both have so few reviews.
Maxdme124@reddit
Local AI apps, especially on iOS, are very niche, so I find that hardly surprising. I don't have any connections or incentives to promote any AI apps; go with whichever your gut tells you. But if you want a recommendation, I really do vouch for giving Locally AI a shot.
TheGreatYeeter113@reddit (OP)
Alright, I’ll give it a shot. I’m probably just being paranoid over nothing. Thanks!
themaxx2@reddit
Sorry about your platform restrictions; on Android I just compile llama.cpp, run llama-server, and use my browser.
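For anyone curious what that workflow looks like, here's a rough sketch of the Termux version (assumptions: Termux with its standard packages available; the model filename is a placeholder, not a specific recommendation):

    # Install build tools via Termux's package manager
    pkg install git cmake clang

    # Fetch and build llama.cpp
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp
    cmake -B build
    cmake --build build --config Release -j

    # Serve a GGUF model, then open http://127.0.0.1:8080 in the
    # phone's browser for the built-in web UI
    ./build/bin/llama-server -m ~/models/your-model-Q4_K_M.gguf --port 8080

llama-server also exposes an OpenAI-compatible API alongside the web UI, so other local clients can point at it too.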
Significant_Fig_7581@reddit
Use PocketPal. Go ahead and download the Liquid AI models, then try Qwen and Mistral. Liquid AI's inference is faster, which is why I suggest starting with it, and they have good models in the 1.2B range, so they're relatively smaller than others. After that you can just browse Hugging Face (HF), where all the models are, and start experimenting with them.

Be careful not to run big dense models, as they drain your battery quickly, and remember the RAM bottleneck: a Q4 quant needs very roughly 0.5-0.6 GB of RAM per billion parameters, plus context overhead. If you want to run something that is almost a ChatGPT, you'd need a GPU and a model of at least 30B, and even then you have to run quants of the model so it fits in your RAM (use Unsloth quants; Bartowski and others are great too, but try Unsloth first).
Abject-Tomorrow-652@reddit
Curious how this plays out… my friend was gonna build something like this.