Exploring inspectable RAG pipelines on a fully local Ollama setup

Posted by HarinezumIgel@reddit | LocalLLaMA | 3 comments

I’ve been working on RAG‑LCC (Local Corpus & Classification), an experimental, offline‑first RAG lab built around a fully local Ollama setup.

The goal isn’t to ship a production framework, but to experiment with and inspect RAG behavior—document routing, filtering stages, and retrieval trade‑offs—without hiding decisions inside a black box.
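To make the "no black box" idea concrete, here is a minimal sketch (not the project's actual code, and the stage/field names are my own) of what an inspectable filter stage can look like: every keep/drop decision is recorded in a trace object instead of disappearing inside the pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str

@dataclass
class Trace:
    """Records every routing/filtering decision so nothing is hidden."""
    steps: list = field(default_factory=list)

    def log(self, stage, doc_id, decision, reason):
        self.steps.append({"stage": stage, "doc": doc_id,
                           "decision": decision, "reason": reason})

def keyword_filter(docs, query, trace):
    """Toy filter stage: keep documents sharing at least one term with the query."""
    terms = set(query.lower().split())
    kept = []
    for d in docs:
        hit = terms & set(d.text.lower().split())
        decision = "keep" if hit else "drop"
        trace.log("keyword_filter", d.doc_id, decision,
                  f"matched terms: {sorted(hit)}" if hit else "no overlap")
        if hit:
            kept.append(d)
    return kept

if __name__ == "__main__":
    docs = [Document("a", "ollama runs models locally"),
            Document("b", "unrelated cooking notes")]
    trace = Trace()
    kept = keyword_filter(docs, "local ollama setup", trace)
    for step in trace.steps:
        print(step)
```

The point is only the pattern: each stage returns its result *and* explains itself, so you can dump the trace after a run and see exactly why a document survived or was dropped.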

Current assumptions / constraints

What I’m exploring

For interactive use, the project can optionally start a local OpenAI‑compatible listener so Open WebUI can act as a front‑end; the UI is external, while all logic stays in the same local pipeline.
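For anyone curious what such a listener involves: a minimal stdlib-only sketch (again, not the project's implementation; `run_pipeline` is a hypothetical stand-in for the local RAG logic) of an OpenAI-compatible `/v1/chat/completions` endpoint that a front-end like Open WebUI can point at:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_pipeline(messages):
    # Hypothetical stand-in: a real handler would route, filter, retrieve,
    # and query a local Ollama model here.
    user = messages[-1]["content"]
    return f"(pipeline answer for: {user})"

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        reply = run_pipeline(body.get("messages", []))
        # Shape mirrors the OpenAI chat completion response so generic
        # clients can consume it unchanged.
        response = {
            "object": "chat.completion",
            "model": body.get("model", "local-rag"),
            "choices": [{"index": 0,
                         "message": {"role": "assistant", "content": reply},
                         "finish_reason": "stop"}],
        }
        payload = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# To serve: HTTPServer(("127.0.0.1", 8000), ChatHandler).serve_forever()
```

Because the wire format matches the OpenAI chat API, the UI needs no special adapter; all the interesting logic stays on the pipeline side.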

Screenshots illustrating the filter pipeline, prompt validation, and Open WebUI integration are available in the project’s README on GitHub.

I’m mainly interested in feedback from people running local LLM stacks:

Repo: https://github.com/HarinezumIgel/RAG-LCC

Happy to answer questions or adjust direction based on real‑world experience.