Added myself as a baseline to my LLM benchmark

Posted by Interesting_Fly_6576 | r/LocalLLaMA

Running a pipeline to classify WST problems in ~590K Uzbek farmer messages: 19 categories, sources spanning Telegram, government news, and focus groups, in a mix of Uzbek and Russian.
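For context, the per-message step is just a single classification call against a local OpenAI-compatible server. A minimal sketch, assuming a vLLM endpoint; the model name, category names, and prompt wording are illustrative placeholders, not the exact production setup:

```python
# Sketch of one classification call against a local OpenAI-compatible
# server (e.g. vLLM). Model name, categories, and prompt are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")

CATEGORIES = ["water_supply", "irrigation", "no_problem"]  # 19 in the real pipeline

def classify(text: str) -> str:
    prompt = (
        "Classify the farmer message into exactly one category from this list:\n"
        + ", ".join(CATEGORIES)
        + "\nMessage (Uzbek or Russian):\n" + text
        + "\nAnswer with the category name only."
    )
    resp = client.chat.completions.create(
        model="Qwen/Qwen2.5-14B-Instruct",  # placeholder; the post doesn't name the exact Qwen
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
        max_tokens=10,
    )
    return resp.choices[0].message.content.strip()
```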

Built a 100-text benchmark and ran 6 models on it, then decided to annotate it myself, blind. 58 minutes for all 100 texts.

Result: F1 = 76.9% against Sonnet as ground truth. Basically the same as Kimi K2.5.

Then I flipped it and used my labels as ground truth instead of Sonnet's. Turns out Sonnet was too conservative and missed ~22% of real problems against my annotations.
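Mechanically, the flip is just swapping which label set is passed as ground truth when scoring each model. A toy sketch with made-up labels; the real benchmark is 100 texts and 19 categories:

```python
from sklearn.metrics import f1_score

# Toy labels for illustration only; category names are made up.
my_labels = ["water", "irrigation", "none", "water"]   # my blind annotations
sonnet    = ["water", "none",       "none", "water"]   # Sonnet's labels
kimi      = ["water", "irrigation", "none", "none"]    # another model

# Original scoring: Sonnet as ground truth
print(f1_score(sonnet, kimi, average="macro"))

# Flipped: my annotations as ground truth, every model (Sonnet included) scored against them
print(f1_score(my_labels, sonnet, average="macro"))
print(f1_score(my_labels, kimi, average="macro"))
```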

Setup: RTX 5090, 32GB VRAM. Qwen runs at ~50 tok/s per request; the median text is 87 tokens, so ~1.8 s/text. Aggregate throughput is ~200-330 tok/s at concurrency 16-32.
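The latency and corpus-level numbers check out as straightforward arithmetic. A back-of-envelope sketch, assuming the median token count as a rough mean and the top-end aggregate throughput:

```python
# Back-of-envelope check of the stated numbers.
median_tokens = 87
per_request_tps = 50                     # tok/s for a single request
print(median_tokens / per_request_tps)   # ~1.74 s/text, matches ~1.8 s

aggregate_tps = 330                      # top end, at concurrency 32
texts_per_hour = aggregate_tps / median_tokens * 3600
print(texts_per_hour)                    # ~13.7K texts/hour -> ~590K in under 2 days
```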

Gemma 4 26B on vLLM was too slow for production, most probably a Triton problem, so I ended up using OpenRouter for it and cloud APIs for Kimi/Gemini/GPT.

The ensemble (Qwen screens → Gemma verifies → Kimi tiebreaks) runs 63% locally and hits F1 = 88.2%. That's 2 points behind Kimi K2.5 alone, with zero API cost for most of the volume.
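The cascade itself is simple control flow: the cheap local model sees everything, the verifier only sees its positives, and the expensive API model only breaks disagreements. A sketch of one plausible reading of the post's cascade; `qwen`, `gemma`, and `kimi` are placeholder stubs for the actual model calls:

```python
# Placeholder stubs standing in for the real model calls; each would wrap
# a local or API inference request and return a category label or "none".
def qwen(text: str) -> str: ...
def gemma(text: str) -> str: ...
def kimi(text: str) -> str: ...

def classify_cascade(text: str) -> str:
    """Screen -> verify -> tiebreak."""
    first = qwen(text)        # local screen: cheap, sees every text
    if first == "none":
        return "none"         # screened-out texts never leave the local box
    second = gemma(text)      # verifier (OpenRouter in this setup)
    if second == first:
        return first          # agreement: accept without the expensive call
    return kimi(text)         # disagreement: Kimi K2.5 breaks the tie
```

The economics follow from the structure: most texts terminate at the first or second step, so the priciest model only ever sees the small disagreement slice.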

Good enough. New local models are impressive!