Testing Qwen3.6 with Hermes Agent on agentic coding. Locally with llama.cpp.
Posted by curiousily_@reddit | LocalLLaMA | View on Reddit | 4 comments
I'll be testing the setup and trying out the Hermes Agent live: https://www.youtube.com/live/q5vqvwZykRI
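For anyone wanting to follow along locally, here is a minimal sketch of serving a model with llama.cpp's built-in OpenAI-compatible server and hitting its chat endpoint. The GGUF file name and context size below are placeholders, not the exact setup from the stream; substitute whatever quant you have on disk:

```shell
# Serve the model with llama.cpp's OpenAI-compatible server.
# The model path is a placeholder; point it at your local GGUF quant.
llama-server -m ./qwen3.6-q4_k_m.gguf --port 8080 -c 32768

# In another terminal, exercise the chat endpoint that an agent
# framework would be configured to call:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a Python hello world"}]}'
```

An agent framework is then pointed at `http://localhost:8080/v1` as its API base URL, the same way it would be pointed at any hosted OpenAI-compatible endpoint.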
Final_Elevator_1128@reddit
Hermes Agent: 57k stars in 6 weeks. The missing piece underneath it is llm-wiki-compiler by AtomicMem. Outer loop + inner loop: the complete stack.
Repo: github.com/atomicmemory/llm-wiki-compiler
BreezyChill@reddit
I tried this combo hosting Qwen3.6 on vLLM, and I got tons of interleaved tool calls and weird misspellings in my output. Ready to abandon it.
CryptoLamboMoon@reddit
Qwen3.6 pairs really well with Hermes — the extended context window helps a lot with longer agentic coding loops. One thing worth knowing: Hermes handles memory natively so you're not just running stateless calls. Makes a big difference for multi-step refactors. Covered a full breakdown of this combo (Hermes vs OpenClaw locally) in EP002 of The AI Harness podcast if you want the deeper dive.
Predatedtomcat@reddit
Link for the podcast?