Workflow comparison: Running Llama 3.2 locally with LangChain vs n8n. Why I stopped coding my agents.

Posted by jokiruiz@reddit | LocalLLaMA

Hi everyone. Weekend project report!

I wanted to build a "Sports Analyst" agent completely locally using Ollama (Llama 3.2) via Docker.

I tried 3 approaches; the full comparison is in the video below.

The tricky part: connecting n8n (Docker) to Ollama (host). I wasted hours on `fetch failed` errors. The fix was setting `OLLAMA_HOST=0.0.0.0` and pointing n8n to `host.docker.internal:11434`.
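For anyone stuck on the same error, here's roughly what the working setup looks like. This is a sketch based on my config, not a copy of my exact commands; container names, ports, and the n8n image tag are the stock defaults, so adjust to your own setup:

```shell
# On the host: make Ollama listen on all interfaces instead of just
# 127.0.0.1, so requests from inside Docker containers can reach it.
export OLLAMA_HOST=0.0.0.0
ollama serve

# Run n8n in Docker. host.docker.internal resolves to the host machine.
# It works out of the box on Docker Desktop (Mac/Windows); the
# --add-host flag maps it to the host gateway so it also works on Linux.
docker run -it --rm \
  --add-host=host.docker.internal:host-gateway \
  -p 5678:5678 \
  docker.n8n.io/n8nio/n8n

# Then, in n8n's Ollama credentials, set the Base URL to:
#   http://host.docker.internal:11434
```

The key insight is that `localhost` inside the n8n container is the container itself, not your machine, which is why the default Ollama URL fails.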

I made a walkthrough video comparing the 3 builds. (The audio is in Spanish, but the code/config is universal.)

https://youtu.be/H0CwMDC3cYQ?si=7zsT2XT37tBgvG74

Has anyone else moved their local agents to n8n pipelines, or do you stick to Python scripts?