What tools are you using to give your LLM a persistent second brain / long-term memory?

Posted by AmphibianHungry2466@reddit | LocalLLaMA | 80 comments

I've been going down a rabbit hole trying to solve LLM memory: the problem where every session starts blank and your agent has no idea what it learned last week.

I put together a list of tools I found: https://github.com/fsaint/bestOfSecondBrainLLM

The ones I've come across so far:

- Tolaria: markdown vault manager with an MCP server for agents

- QMD: local BM25 + vector + reranking search engine for markdown docs

- Graphify: turns any folder into a queryable knowledge graph

- MarkItDown (Microsoft): converts anything (PDF, audio, YouTube, images) to markdown

- RAG-Anything: multimodal RAG pipeline built on LightRAG

- PARA Workspace: workspace framework for humans + agents with an inbox/archive structure

- Beads: graph-based task tracker with agent memory decay

- Obsidian Skills: agent skills for vault navigation + web-to-markdown via Defuddle
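For the search-oriented tools above, the "BM25 + vector" part is usually the keyword-ranking stage. As a rough illustration (not QMD's actual code, just textbook BM25 over a toy corpus), the core scoring looks like this:

```python
import math
from collections import Counter

# Toy corpus standing in for a folder of markdown notes.
docs = [
    "llm memory with markdown vaults",
    "vector search and reranking for notes",
    "bm25 keyword search over markdown docs",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)
avgdl = sum(len(d) for d in tokenized) / N

def bm25_score(query: str, doc: list[str], k1: float = 1.5, b: float = 0.75) -> float:
    """Standard Okapi BM25 score of one document against a query."""
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        # document frequency: how many docs in the corpus contain the term
        df = sum(1 for d in tokenized if term in d)
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        freq = tf[term]
        # length normalization penalizes long docs via b, saturates tf via k1
        score += idf * freq * (k1 + 1) / (freq + k1 * (1 - b + b * len(doc) / avgdl))
    return score

ranked = sorted(range(N), key=lambda i: bm25_score("bm25 markdown", tokenized[i]), reverse=True)
print(ranked[0])  # -> 2, the doc mentioning both "bm25" and "markdown"
```

In a hybrid setup these scores get fused (e.g. reciprocal rank fusion) with cosine similarities from the embedding index before the reranking pass.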

The conceptual anchor for a lot of this is Karpathy's LLM Wiki gist.

What I'm still figuring out:

- Entity extraction: NER vs LLM-assisted, cost vs quality tradeoff

- Local embeddings (nomic-embed, Ollama) vs API (OpenAI, Voyage)

- How to avoid the knowledge base becoming stale or bloated over time

What's working for you? Anything I'm missing? I'd love to add more tools to the repo, especially things people are actually using in production, or at least consistently in their day-to-day flow.