Agentic RAG: Learn AI Agents, Tools & Flows in One Repo
Posted by CapitalShake3085@reddit | LocalLLaMA | 4 comments
A well-structured repository to learn and experiment with Agentic RAG systems using LangGraph.
It goes beyond basic RAG tutorials by covering how to build a modular, agent-driven workflow with features such as:
| Feature | Description |
|---|---|
| 🗂️ Hierarchical Indexing | Search small chunks for precision, retrieve large parent chunks for context |
| 🧠 Conversation Memory | Maintains context across questions for natural dialogue |
| ❓ Query Clarification | Rewrites ambiguous queries or pauses to ask the user for details |
| 🤖 Agent Orchestration | LangGraph coordinates the full retrieval and reasoning workflow |
| 🔀 Multi-Agent Map-Reduce | Decomposes complex queries into parallel sub-queries |
| ✅ Self-Correction | Re-queries automatically if initial results are insufficient |
| 🗜️ Context Compression | Keeps working memory lean across long retrieval loops |
| 🔍 Observability | Track LLM calls, tool usage, and graph execution with Langfuse |
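A minimal sketch of the small-to-parent retrieval idea from the table above, with toy word-overlap scoring standing in for embeddings (illustrative only, not the repo's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    parent_id: int  # which parent chunk this small chunk came from

def split_into_chunks(parents: list[str], child_size: int = 40) -> list[Chunk]:
    """Split each parent chunk into small child chunks for precise matching."""
    children = []
    for pid, parent in enumerate(parents):
        for i in range(0, len(parent), child_size):
            children.append(Chunk(parent[i:i + child_size], pid))
    return children

def retrieve(query: str, parents: list[str], children: list[Chunk]) -> str:
    """Match the query against small chunks, but return the full parent chunk."""
    # Toy scoring: count overlapping words (a real system would use embeddings).
    def score(c: Chunk) -> int:
        return len(set(query.lower().split()) & set(c.text.lower().split()))
    best = max(children, key=score)
    return parents[best.parent_id]
```

The payoff is that matching happens on precise small spans, while the LLM still receives the surrounding parent context.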
Includes:
- 📘 Interactive notebook for learning step-by-step
- 🧩 Modular architecture for building and extending systems
pulse-os@reddit
solid resource, the hierarchical indexing (small chunks for precision, parent chunks for context) is underrated — most RAG tutorials skip that entirely and wonder why retrieval sucks.
one thing i'd add from building production agent memory systems: conversation memory alone isn't enough once you go multi-session. you need persistent memory that survives process restarts, not just in-memory state within a LangGraph run. we hit this wall hard. agent was brilliant within a session and completely amnesic the next day lol.
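to make that concrete, here's a minimal sketch of session state that survives restarts, backed by sqlite (`PersistentMemory` is a made-up class for illustration; in real langgraph you'd reach for its checkpointer mechanism instead of in-memory graph state):

```python
import json
import sqlite3

class PersistentMemory:
    """Conversation memory backed by SQLite so it survives process restarts."""

    def __init__(self, path: str):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memory (session TEXT PRIMARY KEY, state TEXT)"
        )

    def save(self, session: str, state: dict) -> None:
        # Upsert the full session state as JSON.
        self.conn.execute(
            "INSERT OR REPLACE INTO memory VALUES (?, ?)",
            (session, json.dumps(state)),
        )
        self.conn.commit()

    def load(self, session: str) -> dict:
        row = self.conn.execute(
            "SELECT state FROM memory WHERE session = ?", (session,)
        ).fetchone()
        return json.loads(row[0]) if row else {}
```

the point is just that the agent's memory lives outside the process, so "the next day" it picks up where it left off.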
the self-correction loop is interesting tho, we do something similar but at the knowledge level instead of the query level — if an agent retrieves a memory that turns out to be wrong, the confidence score on that memory gets penalized automatically so it surfaces less next time. basically reinforcement learning on your RAG results instead of just re-querying.
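a toy version of that confidence-penalty idea (names like `ScoredMemoryStore` are made up for illustration; a real system would rank by similarity times confidence, not confidence alone):

```python
class ScoredMemoryStore:
    """Memories carry a confidence score; negative feedback decays the score
    so bad memories surface less often on later retrievals."""

    def __init__(self, penalty: float = 0.5):
        self.memories: dict[str, tuple[str, float]] = {}  # id -> (text, confidence)
        self.penalty = penalty

    def add(self, mem_id: str, text: str) -> None:
        self.memories[mem_id] = (text, 1.0)

    def retrieve_top(self) -> str:
        # Rank purely by confidence here for simplicity.
        return max(self.memories, key=lambda k: self.memories[k][1])

    def feedback(self, mem_id: str, correct: bool) -> None:
        text, conf = self.memories[mem_id]
        # Reward slightly on success, penalize multiplicatively on failure.
        conf = min(1.0, conf * 1.1) if correct else conf * self.penalty
        self.memories[mem_id] = (text, conf)
```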
curious if the multi-agent map-reduce handles contradictory sub-query results? that's where things get spicy in production — two sub-agents return conflicting answers and you need a merge strategy that isn't just "pick the longer one" lol
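one possible merge strategy for conflicting sub-answers is confidence-weighted voting over normalized answers, so agreement across agents beats one loud outlier. a rough sketch (hypothetical policy, not from the repo):

```python
from collections import Counter

def merge_subquery_answers(answers: list[tuple[str, float]]) -> str:
    """Merge answers from parallel sub-agents.

    Each answer carries a retrieval-confidence score; identical answers
    (after normalization) pool their confidence, so two agents agreeing
    at 0.6 and 0.5 beat one agent alone at 0.9.
    """
    votes: Counter = Counter()
    for answer, confidence in answers:
        votes[answer.strip().lower()] += confidence
    winner, _ = votes.most_common(1)[0]
    return winner
```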
korino11@reddit
RAG = dead end. It has a lot of minuses...
draconisx4@reddit
Solid repo for agent workflows. Make sure to bake in runtime checks early to avoid surprises when things scale; agent memory can lead to unintended behaviors in production.
CapitalShake3085@reddit (OP)
Thank you :)