Fix: OpenClaw + Ollama local models silently timing out? The slug generator is blocking your agent (and 4 other fixes)
Posted by After-Confection-592@reddit | LocalLLaMA | 19 comments
I spent a full day debugging why Gemma 4 26B (and E4B) would never respond through OpenClaw on Telegram, even though ollama run gemma4 worked perfectly fine. Sharing everything I found.
Hardware: Mac Studio M4 Max, 128GB unified memory
Setup: OpenClaw 2026.4.2 + Ollama 0.20.2 + Gemma 4 26B-A4B Q8_0
The Symptoms
- /new works instantly, shows correct model
- Send "hi" and nothing happens. No typing indicator, no response
- No visible errors in the gateway log
- Model responds in <1s via direct ollama run
Root Cause #1: The Slug Generator Jams Ollama
This was the big one. OpenClaw has a session-memory hook that runs a "slug generator" to name session files. It sends a request to Ollama with a hardcoded 15s timeout. The model can't process OpenClaw's system prompt in 15s, so:
- OpenClaw times out and abandons the request
- Ollama keeps processing the abandoned request
- The main agent's request queues behind it
- Ollama is now stuck. Even curl to Ollama hangs
This is a known issue but the workaround isn't documented anywhere:
openclaw hooks disable session-memory
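A quick way to tell a wedged Ollama apart from a merely slow one is to probe a lightweight endpoint with a short timeout: /api/tags normally answers instantly, so a hang there means the queue is stuck. A minimal sketch (the URL and timeout are assumptions for a default local install):

```python
import urllib.request
import urllib.error

def ollama_responsive(base_url="http://localhost:11434", timeout=5):
    """Probe Ollama's lightweight /api/tags endpoint.

    Returns True if it answers within `timeout` seconds, False if it
    hangs, refuses, or errors. A timeout here usually means an
    abandoned request is holding the queue, not that the model is slow.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError, OSError):
        return False

if __name__ == "__main__":
    print("Ollama responsive:", ollama_responsive())
```

If this returns False while `ollama run` used to work, restart Ollama and disable the hook before retrying.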
Root Cause #2: 38K Character System Prompt
OpenClaw injects ~38,500 characters of system prompt (identity, tools, bootstrap files) on every request. Cloud APIs process this in milliseconds. Local models need 40-60s just for the prefill.
Fix: Skip bootstrap file injection to cut it in half:
{
"agents": {
"defaults": {
"skipBootstrap": true,
"bootstrapTotalMaxChars": 500
}
}
}
This brought the system prompt from 38K down to ~19K chars.
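As a back-of-the-envelope check on why prompt size dominates first-token latency: at roughly 4 characters per token, 38.5K characters is close to 10K tokens of prefill. The chars-per-token ratio and prefill speed below are illustrative assumptions, not measured values:

```python
def estimate_prefill_seconds(prompt_chars, chars_per_token=4.0, prefill_tok_per_sec=200.0):
    """Rough prefill-time estimate: characters -> tokens -> seconds."""
    tokens = prompt_chars / chars_per_token
    return tokens / prefill_tok_per_sec

# ~38,500 chars at an assumed ~200 tok/s prefill -> roughly 48s
print(f"{estimate_prefill_seconds(38_500):.0f}s")
# Halving the prompt roughly halves the wait
print(f"{estimate_prefill_seconds(19_000):.0f}s")
```

Whatever your hardware's actual prefill rate, the relationship is linear, which is why cutting the bootstrap injection in half pays off directly.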
Root Cause #3: Hidden 60s Idle Timeout
OpenClaw has a DEFAULT_LLM_IDLE_TIMEOUT_MS of 60 seconds. If the model doesn't produce a first token within 60s, it kills the connection and silently falls back to your fallback model (Sonnet in my case). The config key is undocumented:
{
"agents": {
"defaults": {
"llm": {
"idleTimeoutSeconds": 300
}
}
}
}
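The behavior described above is a first-token watchdog: the deadline only applies until the first streamed token arrives, after which the stream is trusted. A minimal sketch of that pattern (not OpenClaw's actual implementation, just the general technique):

```python
import queue
import threading
import time

def stream_with_idle_timeout(token_iter, idle_timeout_s):
    """Yield tokens from token_iter, but raise TimeoutError if the
    FIRST token doesn't arrive within idle_timeout_s seconds.

    A background thread pumps tokens into a queue so the consumer can
    enforce the deadline without blocking forever on a stalled model.
    """
    q = queue.Queue()
    DONE = object()  # sentinel marking end of stream

    def pump():
        for t in token_iter:
            q.put(t)
        q.put(DONE)

    threading.Thread(target=pump, daemon=True).start()
    try:
        first = q.get(timeout=idle_timeout_s)  # watchdog applies here only
    except queue.Empty:
        raise TimeoutError(f"no first token within {idle_timeout_s}s; falling back")
    if first is DONE:
        return
    yield first
    while (t := q.get()) is not DONE:  # after the first token, wait indefinitely
        yield t

def slow_model(first_token_delay):
    """Toy 'model' that sleeps before producing its first token."""
    time.sleep(first_token_delay)
    yield "hello"

if __name__ == "__main__":
    print(list(stream_with_idle_timeout(slow_model(0.1), idle_timeout_s=1.0)))
```

This is why raising the idle timeout only affects the wait before the first token; it doesn't slow down anything once generation starts.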
Root Cause #4: Ollama Processes Requests Serially
Even with OLLAMA_NUM_PARALLEL=4, abandoned requests from the slug generator hold slots. Add this to your Ollama plist/service config anyway:
OLLAMA_NUM_PARALLEL=4
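On macOS, environment variables set in a shell profile are not visible to the Ollama app; per Ollama's FAQ they need to go through launchctl, followed by a restart of Ollama. A config fragment to that effect:

```shell
# Make the variable visible to the Ollama app (not just your shell)
launchctl setenv OLLAMA_NUM_PARALLEL 4
# Then quit and relaunch Ollama so it picks up the change
```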
Root Cause #5: Thinking Mode
Gemma 4 defaults to a thinking/reasoning phase that adds 20-30s before the first token. Disable it:
{
"agents": {
"defaults": {
"thinkingDefault": "off"
}
}
}
Full Working Config
{
"agents": {
"defaults": {
"model": {
"primary": "ollama/gemma4:26b-a4b-it-q8_0",
"fallbacks": ["anthropic/claude-sonnet-4-6"]
},
"thinkingDefault": "off",
"timeoutSeconds": 600,
"skipBootstrap": true,
"bootstrapTotalMaxChars": 500,
"llm": {
"idleTimeoutSeconds": 300
}
}
}
}
Pin the model in memory so it doesn't unload between requests:
curl http://localhost:11434/api/generate -d '{"model":"gemma4:26b-a4b-it-q8_0","keep_alive":-1,"options":{"num_ctx":16384}}'
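To confirm the pin took effect, Ollama exposes /api/ps, which lists the models currently held in memory along with their expiry. A small helper that degrades gracefully when the server isn't running (the URL assumes a default local install):

```python
import json
import urllib.request
import urllib.error

def loaded_models(base_url="http://localhost:11434", timeout=5):
    """Return the list of models Ollama currently holds in memory,
    or None if the server is unreachable. Uses the /api/ps endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/ps", timeout=timeout) as resp:
            return json.load(resp).get("models", [])
    except (urllib.error.URLError, TimeoutError, OSError):
        return None

if __name__ == "__main__":
    models = loaded_models()
    if models is None:
        print("Ollama is not reachable")
    else:
        for m in models:
            print(m.get("name"), "expires:", m.get("expires_at"))
```

A model pinned with keep_alive:-1 should stay in this list between requests instead of unloading after the default five minutes.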
Result
- First message after /new: ~60s (system prompt prefill, unavoidable for local models)
- Subsequent messages: fast (Ollama caches the KV state)
- 31GB VRAM, 100% GPU, 16K context
- Fully local, zero API cost, private
The first-message delay is the tradeoff for running completely local. After that initial prefill, the KV cache makes it snappy. Worth it if you value privacy and zero cost.
Hope this saves someone a day of debugging.
Hopeful-Fix3130@reddit
As someone who is new and oblivious, where does this config go?
CuchulainnPrimus@reddit
You're a life saver! I have been struggling with this for days....
CuchulainnPrimus@reddit
I switched to llama.cpp directly instead of ollama. Takes some getting used to, but it works so much better!
nugentgl@reddit
This is great information, but I'm a little worried about #1. If I disable session-memory, then when I issue /new it won't save any session information before starting a new session. Is this an issue or not really?
OrchidBig7036@reddit
Thanks for sharing these debugging fixes, they helped me a lot.
therepfella@reddit
You're a lifesaver, I was going crazy trying to make this work until coming across your thread.
It's still slow for me on E4B, so I tried adding LiteLLM, which introduced its own problem: on boot, its background sync ignores upstream Ollama metadata (like num_ctx), defaults to a hardcoded llama3.3 state, and forces a 128,000-token context window onto whatever model you select. Since gemma4:e2b actually has a much smaller context limit (like 16K), the model's memory management completely collapses under the forced 128K assumption, causing the exact same stalling, hallucination, and instruction-forgetting.
Any suggestions?
UnderstandingFew2968@reddit
saved my day. thank you
Leaxpm@reddit
Have you come across this scenario? I applied some of the fixes you posted, but sometimes I still run into trouble where it starts compacting and then the new messages just come back blank instead of continuing from where it left off. Does anyone have any guidance?
ComfortableSafe7085@reddit
Thank you! I was struggling with the exact same issues for the past couple days and your fixes were exactly what I needed. I'm running a similar setup but on a mac mini 32GB.
Substantial-Dot-2916@reddit
This actually fixes it and it's working!
EmilyWong_LA@reddit
I tried this method, but it seems I can't manage the files on my computer anymore.
maxedbeech@reddit
yeah openclaw's billing model was always going to be a problem. for running scheduled jobs i've been using openhelm.ai, it's a local mac app that sits on top of claude code directly. no extra costs if you already have a subscription
Character_Split4906@reddit
Thanks, I noticed the same issue with the OpenClaw TUI as well, but isn't a 16K context window too small for OpenClaw? I'll try this out and see how it works for the TUI.
draconisx4@reddit
Solid troubleshooting. Keeping your local setup reliable directly supports better control and safety in AI ops, so props for sharing these fixes to help others avoid runtime surprises.
Even_Minimum_4797@reddit
This is really helpful, thanks for sharing.
After-Confection-592@reddit (OP)
Glad it helped!
Emotional-Breath-838@reddit
this was why i left openclaw for hermes
After-Confection-592@reddit (OP)
Fair enough. The defaults are definitely not local-friendly out of the box. Once you get past the config though, the tool calling and Telegram integration are solid.