Local-First Autonomous AI Agent Framework Built to Run Entirely on Your Machine Using Local Models
Posted by Janglerjoe@reddit | LocalLLaMA | 5 comments
I’m sharing this project for testing and feedback:
https://github.com/janglerjoe-commits/LMAgent
LMAgent is a locally hosted AI agent framework written in pure Python. The core goal is for everything to run entirely on your own machine using local models. There are no required cloud dependencies; the only external services are MCP servers, and those are optional, depending on how you configure the system.
The objective is to enable fully local autonomous workflows (file operations, shell commands, Git management, and todo tracking) with interaction through a CLI, REPL, or web UI, while keeping both execution and model inference on-device.
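To make the shape of that tool layer concrete, here is a rough sketch of how a local agent can dispatch model-emitted tool calls to on-device handlers. All names below (`TOOLS`, `dispatch`, the tool ids) are illustrative placeholders, not LMAgent's actual API:

```python
import subprocess
from pathlib import Path

# Hypothetical tool registry: maps tool names the model can emit
# to local handlers. Everything runs on-device.
TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("read_file")
def read_file(path: str) -> str:
    return Path(path).read_text()

@tool("shell")
def shell(cmd: str) -> str:
    # Execution stays local; stdout/stderr are fed back to the model.
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout + out.stderr

@tool("todo_add")
def todo_add(item: str, _todos=[]) -> str:
    # Toy in-memory todo state (mutable default is intentional here).
    _todos.append(item)
    return f"{len(_todos)} item(s) tracked"

def dispatch(call: dict) -> str:
    """Route a model-emitted call like {'tool': 'shell', 'args': {...}}."""
    return TOOLS[call["tool"]](**call["args"])
```

A call such as `dispatch({"tool": "todo_add", "args": {"item": "write tests"}})` routes straight to the registered handler; the model only ever produces the dict, never executes anything itself.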
This is an early-stage project and bugs are expected. I’m actively looking for:
- Bug reports (with clear reproduction steps)
- Edge cases that break workflows
- Issues related to running local models
- Performance bottlenecks
- Security concerns related to local execution
- Architectural feedback
- Feature requests aligned with a local-first design
If you test it, please include:
- Operating system
- Python version
- Local model setup (e.g., Ollama, LM Studio, etc.)
- Whether MCP servers were used
- Exact steps that led to the issue
- Relevant logs or error output
The goal is to make this a stable, predictable, and secure local-first autonomous agent framework built around local models. All feedback is appreciated.
yamajun@reddit
Interesting approach. I've found that for 'autonomous' agents, the browser harness is often the weakest link. Scrapers get blocked, but real CDP-driven sessions with saved cookie profiles tend to survive much longer. How are you handling JS-heavy single-page apps? Also, how do you store the credentials?
behrens-ai@reddit
Local-first is the right call when you're giving an agent real access to files and credentials. Small flag for anyone who does bring in MCP servers: even as an optional external layer, they introduce their own trust boundary. A compromised server can poison tool responses, leak secrets through return values, or embed prompt injection in content the model reads back. Worth thinking through before wiring them in.
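For anyone wiring that up, the mitigation can start as simply as screening tool output before it's appended to the model context. A rough illustration; the patterns are heuristic placeholders, not a complete defense:

```python
import re

# Heuristic red flags for content returned by an external (e.g. MCP) tool.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"<\s*system\s*>", re.I),
]
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"\b(?:sk|ghp)_[A-Za-z0-9]{20,}\b"),  # common token shapes
]

def screen_tool_response(text: str) -> tuple[str, list[str]]:
    """Flag likely prompt injection and redact secret-shaped strings
    before the response reaches the model context."""
    warnings = [
        f"possible injection: {p.pattern}"
        for p in INJECTION_PATTERNS if p.search(text)
    ]
    for p in SECRET_PATTERNS:
        text = p.sub("[REDACTED]", text)
    return text, warnings
```

Real-world injections won't always match fixed patterns, so this is a tripwire rather than a guarantee; the structural fix is treating all tool output as untrusted data.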
Cool project. This is the right philosophy for anything touching real systems.
Janglerjoe@reddit (OP)
I'll look into it. I never thought about the MCP layer being compromised; that's actually interesting. Thanks for the feedback.
BC_MARO@reddit
The security piece is underrated for local-first: no data leaving the machine means you can give the agent real access to sensitive files and creds without worrying about what gets sent to an API. The MCP optional/local flexibility is the right call too; hardcoding cloud deps into an agent framework defeats the whole point.
Janglerjoe@reddit (OP)
Exactly. The focus is on building strong local tooling around the model instead of assuming the model can handle everything. Models can fail, so the framework should provide structure and safeguards. As local models improve, the system scales naturally without relying on cloud dependencies.
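One concrete shape those safeguards can take, independent of how capable the model is: gate every model-proposed shell command through an allowlist before anything executes. A minimal illustration; the lists are placeholders, not LMAgent's actual policy:

```python
import shlex

# Illustrative policy: which programs may run, and which tokens always block.
SAFE_COMMANDS = {"ls", "cat", "git", "grep", "python"}
BLOCKED_TOKENS = {"rm", "sudo", "curl", ">", "|"}

def approve(cmd: str) -> bool:
    """Return True only if the command's program is allowlisted and
    no blocked token appears anywhere in it."""
    tokens = shlex.split(cmd)
    if not tokens or tokens[0] not in SAFE_COMMANDS:
        return False
    return not any(t in BLOCKED_TOKENS for t in tokens)
```

The point is that the check lives in the framework, not the prompt: even if the model is jailbroken or just hallucinates a destructive command, the structure around it refuses to run it.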