Building a runtime layer for long-running local agents

Posted by agentspan@reddit | LocalLLaMA

This may be relevant if you're already running agents on top of Ollama or another local model stack.

We're building Agentspan, an open-source, model-agnostic orchestration tool.

The core idea is that agent code runs in worker processes, but execution state lives server-side. That lets us maintain useful abstractions like execution history, crash recovery, and a UI layer.
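To make the pattern concrete, here's a minimal sketch of worker-side execution with state checkpointed outside the worker. This is not Agentspan's actual API; all names (`checkpoint`, `resume`, `run_agent`) are hypothetical, and a plain dict stands in for the server:

```python
import json

# Hypothetical illustration of the pattern: the worker executes steps,
# but after each step it checkpoints state to a store that outlives
# the worker process (a dict here, standing in for the server).
server_state = {}

def checkpoint(run_id, state):
    # In a real deployment this would be an RPC/HTTP call to the server.
    server_state[run_id] = json.dumps(state)

def resume(run_id):
    raw = server_state.get(run_id)
    return json.loads(raw) if raw else {"step": 0, "history": []}

def run_agent(run_id, steps, crash_at=None):
    # Pick up where a previous (possibly crashed) worker left off.
    state = resume(run_id)
    while state["step"] < steps:
        if crash_at is not None and state["step"] == crash_at:
            raise RuntimeError("worker crashed")
        state["history"].append(f"step {state['step']} done")
        state["step"] += 1
        checkpoint(run_id, state)  # execution history survives the worker
    return state

# A crash mid-run loses nothing: a fresh worker resumes from the last checkpoint.
try:
    run_agent("demo", steps=5, crash_at=3)
except RuntimeError:
    pass
final = run_agent("demo", steps=5)
```

Because the history lives server-side, the same record that enables crash recovery can also back a UI layer showing each run's progress.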

We deliberately included Ollama as a supported provider because we want Agentspan to be (a) LLM-agnostic, and (b) ideal for rapidly prototyping on a local machine.

But mostly we just want to know whether this tool would be remotely useful for folks. You can see some of the example use cases we have in mind in our docs (which, transparently, are still a work in progress).