Local coding models have reached a threshold where they're feasible for real work

Posted by Exciting-Camera3226@reddit | LocalLLaMA

We ran open-weight 27B–32B models on Terminal-Bench 2.0 (89 tasks, terminal-bench-2.git @ 69671fb) through our agent harness. The best result was Qwen 3.6-27B at 38.2% (34/89) under the default per-task timeout, the same constraint the public leaderboard uses (Qwen's official post uses a more relaxed config). We deliberately stuck with the official TB leaderboard's default setup because we wanted an apples-to-apples number against the verified leaderboard.

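For context on what "default per-task timeout" means mechanically: each task runs under a wall-clock limit, and a timeout scores as a failure. A minimal Python sketch of that scoring rule, where the `run_task.sh` entry point and the 600 s limit are assumptions rather than the real TB harness:

```python
import subprocess

TIMEOUT_S = 600  # assumed per-task timeout; TB 2.0's actual default may differ

def run_task(task_id: str) -> bool:
    """Run one benchmark task; a timeout counts as a failure, like the leaderboard."""
    try:
        result = subprocess.run(
            ["./run_task.sh", task_id],  # hypothetical entry point, not the real TB CLI
            timeout=TIMEOUT_S,
            capture_output=True,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def pass_rate(task_ids: list[str]) -> float:
    passed = sum(run_task(t) for t in task_ids)
    return passed / len(task_ids)

# 34 of 89 tasks passing gives 34/89 ≈ 0.382, i.e. the 38.2% above.
```
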
One interesting finding: MoE models still have an order-of-magnitude advantage in inference speed.

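A back-of-the-envelope check on that order-of-magnitude claim: decode is usually memory-bandwidth-bound, so tokens/s scales with how many weight bytes are read per token, which for an MoE is the active parameter count rather than the total. The numbers below are illustrative assumptions, not measurements:

```python
# Rough decode-speed ratio under a memory-bandwidth-bound assumption:
# time per token ≈ (active params × bytes/param) / memory bandwidth.

BANDWIDTH_GBS = 1000   # illustrative GPU memory bandwidth, GB/s
BYTES_PER_PARAM = 2    # fp16/bf16 weights

def tokens_per_sec(active_params_b: float) -> float:
    bytes_per_token = active_params_b * 1e9 * BYTES_PER_PARAM
    return BANDWIDTH_GBS * 1e9 / bytes_per_token

dense = tokens_per_sec(30.0)  # dense ~30B: every weight is read each token
moe = tokens_per_sec(3.0)     # MoE with ~3B active params (illustrative)

print(f"dense: {dense:.0f} tok/s, moe: {moe:.0f} tok/s, ratio ≈ {moe / dense:.0f}x")
# ratio ≈ 10x: the same order-of-magnitude gap we saw in practice
```
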
The interesting part isn't 38.2% in absolute terms; current verified SOTA is ~80% (GPT-5.5 / Opus 4.6 / Gemini 3.1 Pro). The interesting part is what 38.2% maps to in time.

Anchoring on the model release dates of verified leaderboard entries, today's best runnable-offline coding model lands roughly where the hosted frontier was in late 2025, about a 6–8 month lag. That's the first time the gap has been small enough to matter for real deployments (regulated environments, air-gapped networks, on-prem CI, batch workloads).

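That "map 38.2% to a date" step is just interpolation over dated leaderboard scores. A minimal sketch of the method, with made-up anchor points (we're not reproducing the real leaderboard entries here), looks like this:

```python
from datetime import date

# Hypothetical (release date, verified score) anchors; the real entries
# live on the TB leaderboard and will shift the exact answer.
anchors = [
    (date(2025, 6, 1), 0.25),
    (date(2025, 12, 1), 0.45),
    (date(2026, 4, 1), 0.80),
]

def date_for_score(score: float) -> date:
    """Linearly interpolate the date at which the frontier hit `score`."""
    for (d0, s0), (d1, s1) in zip(anchors, anchors[1:]):
        if s0 <= score <= s1:
            frac = (score - s0) / (s1 - s0)
            return d0 + (d1 - d0) * frac
    raise ValueError("score outside anchor range")

print(date_for_score(0.382))  # lands in late 2025 with these made-up anchors
```
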
more details on our blog: https://antigma.ai/blog/2026/04/24/offline-coding-models