Routerly 0.2.0 is almost out. Here is what I learned from the first benchmark campaign and what I changed.
Posted by nurge86@reddit | LocalLLaMA
Five days ago I posted the first Routerly benchmark campaign (MMLU / HumanEval / BIRD, 10 seeds, paired t-tests, semantic-intent routing vs direct Claude Sonnet 4.6). Today I published the full results write-up. Short recap for anyone who missed the first thread:
- MMLU: 83.5% vs 86.5% Sonnet, $0.00344 vs $0.01118 per run, 69% cheaper, delta not significant (p = 0.19)
- HumanEval: 95.0% vs 97.0% Sonnet Pass@1, $0.03191 vs $0.04889 per run, 35% cheaper, delta not significant (p = 0.40)
- BIRD (SQL): 44.5% vs 55.5% Sonnet, accuracy gap was significant (p = 0.02). Flagged as a backend pool failure, not a routing failure.
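For anyone who wants to sanity-check the significance calls in the recap above, this is roughly what the paired test across seeds looks like; the accuracy arrays here are placeholders, not the campaign data.

```python
# Paired t-test across seeds: each entry is one seed's accuracy for the same
# question draw under the two configurations. Values below are placeholders.
from scipy import stats

routed = [0.85, 0.80, 0.85, 0.80, 0.90, 0.85, 0.80, 0.85, 0.80, 0.85]  # Routerly, 10 seeds
sonnet = [0.90, 0.85, 0.85, 0.90, 0.85, 0.90, 0.85, 0.90, 0.85, 0.80]  # direct Sonnet, same seeds

t_stat, p_value = stats.ttest_rel(routed, sonnet)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # pairing by seed controls for the per-seed question draw
```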
Full write-up with the PDF audit is here: https://blog.routerly.ai/we-ran-200-questions-per-model
0.2.0 is the first release that directly reflects what that campaign told me. Releasing in the next few days. I wanted to share what is actually changing and why, because I think the reasoning is more interesting than the changelog.
What I changed
- SQL pool rebuild. The BIRD result was not acceptable and I did not want to hide it. The cheap tier on SQL tasks has been replaced. A BIRD re-run is in progress this week and will be published regardless of outcome.
- Routing decomposition is now observable per request. In the first campaign I found that the LLM-routing policy on MMLU was spending 80% of its total cost on the routing call itself. 0.2.0 exposes this breakdown in the response metadata, so you can see routing cost vs inference cost per call instead of guessing (example after this list).
- Semantic-intent policy is the new default. The embedding-based router (text-embedding-3-small, ~$0.000002 per query) matched or beat the LLM-routing policy on every benchmark while being roughly three orders of magnitude cheaper to run. Routing distribution on MMLU went from 96% DeepSeek under the LLM policy to a 76/24 DeepSeek/Sonnet split under semantic-intent, which is what closed the accuracy gap. LLM routing stays available for anyone who wants fully dynamic decisions, but the default changes (toy sketch below).
- Statistical rigor baked into the benchmark harness. The follow-up at 55 seeds (vs 10 in the original run) is now the standard campaign shape. 10 seeds of n=20 gave roughly 80% power to detect a ~7.7 pp gap, which is too coarse for honest claims on small deltas (power math below).
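On the observability point: here is the kind of per-request breakdown I mean. Treat the field names as illustrative, not the final 0.2.0 response schema.

```python
# Illustrative per-request cost decomposition (field names are NOT the final 0.2.0 schema).
response_metadata = {
    "routing": {"policy": "semantic-intent", "model": "text-embedding-3-small", "cost_usd": 0.000002},
    "inference": {"model": "deepseek-chat", "cost_usd": 0.00031},
}

routing_cost = response_metadata["routing"]["cost_usd"]
inference_cost = response_metadata["inference"]["cost_usd"]
total = routing_cost + inference_cost
print(f"routing share of total cost: {routing_cost / total:.1%}")
# Under the old LLM-routing policy on MMLU this share was ~80%; with semantic-intent it is close to noise.
```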
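And the semantic-intent policy itself is not much more than this. A minimal sketch assuming the OpenAI embeddings API; the intent labels, centroids, and pool mapping are made up for illustration.

```python
# Minimal embedding-based router: embed the query, cosine-match against intent
# centroids, map the winning intent to a backend. Labels and pool are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

INTENT_POOL = {
    "general_knowledge": "deepseek-chat",   # cheap tier
    "code_generation":   "deepseek-chat",
    "hard_reasoning":    "claude-sonnet",   # expensive tier
}
# Centroids would normally be precomputed from labeled example queries; placeholders here.
CENTROIDS = {intent: np.random.randn(1536) for intent in INTENT_POOL}

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def route(query: str) -> str:
    q = embed(query)
    best_intent = max(CENTROIDS, key=lambda i: cosine(q, CENTROIDS[i]))
    return INTENT_POOL[best_intent]

print(route("Prove that the sum of two odd integers is even."))
```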
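The 80% power figure falls out of a standard paired t-test power calculation. The per-seed standard deviation below is an assumption I picked to illustrate the math, not a measured number from the campaign.

```python
# Rough power check for the paired design. effect_size is the accuracy gap divided
# by the sd of per-seed deltas; the 7.7 pp sd is an illustrative assumption.
from statsmodels.stats.power import TTestPower

gap_pp = 7.7       # gap we want to detect, in percentage points
seed_sd_pp = 7.7   # assumed sd of per-seed accuracy deltas (illustrative)
analysis = TTestPower()

power_10 = analysis.power(effect_size=gap_pp / seed_sd_pp, nobs=10, alpha=0.05)
print(f"power at 10 seeds: {power_10:.2f}")  # ~0.80 under these assumptions

# At 55 seeds, the smallest effect detectable at 80% power shrinks to roughly:
d_55 = analysis.solve_power(nobs=55, power=0.8, alpha=0.05)
print(f"detectable gap at 55 seeds: ~{d_55 * seed_sd_pp:.1f} pp")  # ~3 pp under the same assumptions
```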
What I did not fix and why
Opus 4.6 as an always-on ceiling is still more accurate than any routed configuration on a handful of MMLU subjects (graduate-level physics, professional law). I am not pretending routing beats Opus on the hardest slice of the distribution. The pitch is that most production traffic is not that slice, and the savings on the rest pay for the few calls where you still want to hit Opus directly.
Release
0.2.0 drops in the next few days. I will post a second update with the 55-seed numbers and the rebuilt SQL pool results as soon as the campaign is complete. Expect the data to either confirm the first round or embarrass me publicly, which is the point of running it.
Full write-up of the first campaign (metrics, routing distributions, link to the PDF audit) is here: https://blog.routerly.ai/we-ran-200-questions-per-model
If you want to try Routerly on your own workload before 0.2.0 ships, everything else is at routerly.ai. Happy to answer anything in the comments, especially methodology critiques.