OpenClaw LLM Timeout (SOLVED)

Posted by styles01@reddit | LocalLLaMA

Hey, this is the solution to a particularly nasty issue I spent days chasing down. With the help of my agents we finally fixed it, and there was pretty much no documentation of this fix anywhere online, so: you're welcome.

TL;DR: OpenClaw timing out at ~60 seconds while loading models? Use this fix (tested):

```json
{
  "agents": {
    "defaults": {
      "llm": {
        "idleTimeoutSeconds": 300
      }
    }
  }
}
```

## The issue

Cold-loaded local models would fail after about 60 seconds even though the general agent timeout was already set much higher. (This would also happen with cloud models, via Ollama and sometimes openai-codex.)

Typical pattern: the request goes out, the model starts cold-loading, and the call dies right around the 60-second mark no matter how high the general agent timeout is set.

## The misleading part

The obvious things are not the real fix here:

- `agents.defaults.timeoutSeconds`

- `.zshrc` exports

- `LLM_REQUEST_TIMEOUT`

- blaming LM Studio / Ollama immediately

Those can all send you down the wrong rabbit hole.

---

## Root cause

OpenClaw has a separate **embedded-runner LLM idle timeout** for the period before the model emits the **first streamed token**.

Source trace:

- `src/agents/pi-embedded-runner/run/llm-idle-timeout.ts`

with default:

```ts
DEFAULT_LLM_IDLE_TIMEOUT_MS = 60_000
```

And the config path resolves from:

```ts
cfg?.agents?.defaults?.llm?.idleTimeoutSeconds
```
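For intuition, here's a minimal sketch of how a first-token idle watchdog like this typically works. This is illustrative, not OpenClaw's actual implementation; only the constant name and config path are taken from the trace above:

```ts
// Default used when no config value is set (name from the trace above).
const DEFAULT_LLM_IDLE_TIMEOUT_MS = 60_000;

interface Config {
  agents?: {
    defaults?: {
      timeoutSeconds?: number;
      llm?: { idleTimeoutSeconds?: number };
    };
  };
}

// Resolve the idle timeout: the config value (in seconds) wins,
// otherwise fall back to the 60 s default.
function resolveLlmIdleTimeoutMs(cfg?: Config): number {
  const seconds = cfg?.agents?.defaults?.llm?.idleTimeoutSeconds;
  return seconds !== undefined ? seconds * 1000 : DEFAULT_LLM_IDLE_TIMEOUT_MS;
}

// Race the FIRST streamed token against the idle timer. If the model is
// still cold-loading when the timer fires, the request is aborted --
// exactly the ~60 s failure described above.
async function waitForFirstToken(
  tokens: AsyncIterable<string>,
  idleTimeoutMs: number,
): Promise<string> {
  const iterator = tokens[Symbol.asyncIterator]();
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`no token within ${idleTimeoutMs} ms`)),
      idleTimeoutMs,
    );
  });
  try {
    const first = await Promise.race([iterator.next(), timeout]);
    if (first.done) throw new Error("stream ended before first token");
    return first.value;
  } finally {
    clearTimeout(timer);
  }
}
```

The key point: this timer only covers the window before the first streamed token and is resolved independently of `agents.defaults.timeoutSeconds`, which is why raising the general timeout alone never helped.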

So the real config knob is `agents.defaults.llm.idleTimeoutSeconds`.

## The fix (tested)

After setting:

"agents": {
  "defaults": {
    "llm": {
      "idleTimeoutSeconds": 180
    }
  }
}

we tested a cold Gemma call that had previously died around 60 seconds.

This time the call ran straight through the old 60-second failure point and completed. That confirmed the fix was real.

We then increased it to 300 for extra cold-load headroom.
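If you want to see how close you are to the limit, a quick probe like this measures cold-load time-to-first-token. It's a sketch assuming an OpenAI-compatible streaming endpoint; the URL and model name below are examples for an Ollama setup, so adjust them for yours:

```ts
// Measures time-to-first-token against a local OpenAI-compatible endpoint.
// Assumes Node 18+ (global fetch). URL and model name are examples.
async function timeToFirstToken(): Promise<void> {
  const start = Date.now();
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma3", // example; use whatever model you cold-load
      stream: true,
      messages: [{ role: "user", content: "Say hi." }],
    }),
  });
  if (!res.ok || !res.body) throw new Error(`HTTP ${res.status}`);
  const reader = res.body.getReader();
  await reader.read(); // first streamed chunk ~= first token
  console.log(`first token after ${((Date.now() - start) / 1000).toFixed(1)} s`);
  await reader.cancel();
}

timeToFirstToken().catch(console.error);
```

If that number lands anywhere near your `idleTimeoutSeconds`, give it more headroom.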

## Recommended permanent config

```json
{
  "agents": {
    "defaults": {
      "timeoutSeconds": 300,
      "llm": {
        "idleTimeoutSeconds": 300
      }
    }
  }
}
```

Why 300?

Because local models are unpredictable, and false failovers are more annoying than waiting longer for a genuinely cold model.
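For a ballpark (illustrative numbers, not measurements): a ~16 GB quantized model read at ~1.5 GB/s from NVMe takes roughly 11 s just to come off disk, but at ~150 MB/s from a spinning disk or network share that's over 100 s, already past the 60 s default before any allocation or warmup has happened. 300 s covers the slow case with room to spare.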