Built a local AI that runs offline — looking for feedback
Posted by jimmy6929@reddit | LocalLLaMA | 5 comments
Hey everyone,
I’ve been building a local AI project over the past few days and just launched it today; I’d love some feedback.
It’s called Molebie AI.
The idea is to have a fully local AI that:
- runs on your machine
- works offline
- is private by default
- is optimized to run smoothly even on lower-RAM machines (8GB minimum, 16GB recommended)
- has different reasoning modes (instant / thinking / think harder)
- includes tools like CLI, voice, document memory, and web search
I mainly built it because I wanted something simple and fully under my control without relying on APIs.
It’s open-source, still early, and definitely rough in some areas.
Would really appreciate any thoughts or suggestions 🙏
If you like it, I’d also really appreciate an upvote on Product Hunt today!
GitHub: https://github.com/Jimmy6929/Molebie_AI?tab=readme-ov-file
Product Hunt: https://www.producthunt.com/products/molebie-ai
Status_Record_1839@reddit
Voice + RAG + web search all offline is a solid combo. What model are you using by default, and how does it hold up on 8GB RAM machines? That minimum spec will make or break adoption for a lot of people.
jimmy6929@reddit (OP)
So there are two models in this project: Qwen 3.5 4B (fast mode) and 9B (thinking and think-harder modes). If you only have 8 GB of RAM, you can only download the 4B model, but it still does the same job — the results might just be a bit weaker.
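The mode-to-model split described above could be sketched roughly like this (a hypothetical illustration — function and model names are made up and are not Molebie AI's actual code):

```python
# Hypothetical sketch of the reasoning-mode / model mapping described in the thread.
# Model identifiers are illustrative placeholders, not real file names.
MODE_TO_MODEL = {
    "instant": "qwen-4b",        # fast mode: small model
    "thinking": "qwen-9b",       # deeper reasoning: larger model
    "think_harder": "qwen-9b",   # same larger model, presumably a bigger reasoning budget
}

def pick_model(mode: str, ram_gb: int) -> str:
    """Pick a model for the given mode, falling back to 4B on 8 GB machines."""
    model = MODE_TO_MODEL[mode]
    if ram_gb < 16 and model == "qwen-9b":
        return "qwen-4b"  # the 9B model won't fit comfortably in 8 GB
    return model
```

On a 16 GB machine all three modes resolve as mapped; on an 8 GB machine everything falls back to the 4B model, matching the behavior the OP describes.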
Status_Record_1839@reddit
Nice, the 4B/9B split makes sense for that use case. CLI installer is a good call too, reduces friction a lot for first-time setup.
jimmy6929@reddit (OP)
Thanks a lot! You're welcome to fork it and make it your own.
jimmy6929@reddit (OP)
Also, this project comes with a CLI installer tool; it checks your system for you so you know which setup of this project is best for you.
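A system check like the installer describes could look something like this (a minimal sketch, assuming a POSIX system — `os.sysconf` with these keys works on Linux/macOS but not Windows; the recommendation tiers are inferred from the 8 GB minimum / 16 GB recommended spec in the post, not taken from the actual installer):

```python
import os

def total_ram_gb() -> float:
    """Approximate total physical RAM in GB (POSIX only; assumption, not portable)."""
    pages = os.sysconf("SC_PHYS_PAGES")
    page_size = os.sysconf("SC_PAGE_SIZE")
    return pages * page_size / (1024 ** 3)

def recommend_setup(ram_gb: float) -> str:
    """Map detected RAM to an install recommendation (tiers inferred from the post)."""
    if ram_gb >= 16:
        return "full install: 4B + 9B models"
    if ram_gb >= 8:
        return "light install: 4B model only"
    return "below minimum spec (8 GB required)"
```

Calling `recommend_setup(total_ram_gb())` would then print the tier a first-time user should pick.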