Xiaomi Mimo-V2.5 Released, looks like today is big day for Open-Weight releases
Posted by Specter_Origin@reddit | LocalLLaMA | 34 comments

Qwen-27B, and now this!
solomars3@reddit
Bro, I don't really believe these benchmarks. Opus made some cool stuff for me, and when I try a local model it's like night and day, not even close.
Old_Stretch_3045@reddit
That’s right, the only real advantage of local models is anonymity; in every other way, it’s paying more for far less capable intelligence compared to Opus, Gemini, GPT, or Grok.
Nepherpitu@reddit
Nope. Some tasks are so big, yet only need a small 9B model, that it's cheaper to buy a few 3090s, run them 24/7 for 2-3 months, and then put them on the shelf, than to pay for even the cheapest model on OpenRouter.
NoParking19@reddit
Name literally any worthwhile task that fits this criterion
Nepherpitu@reddit
For the last 6 days I've been running 8x3090s across two systems to extract metadata from ~8TB of legal documents for use in a search engine.
Old_Stretch_3045@reddit
For the price of three 3090s and a junk 9B model, you could’ve had enough DeepSeek API tokens for life.
NoParking19@reddit
Congrats, you just dropped 8 grand to do a basic index search. You really showed them!
nullmove@reddit
The difference gets less and less stark the more intelligent the human in the loop is (and wider otherwise).
BulkyAd8059@reddit
Have you seen the price difference between Opus and these models? Have you even tried them? Because I have, and it's clear Xiaomi isn't wasting any time...
XCSme@reddit
Something is weird with MiMo-v2.5 and MiMo-v2.5-Pro.
They have almost identical results, BUT MiMo-v2.5-Pro is a lot more efficient in token usage, using only half the reasoning tokens of the non-Pro model.
This means that, in practice, MiMo-v2.5-Pro is cheaper than MiMo-v2.5.
9gxa05s8fa8sh@reddit
that only holds if they're priced the same per token... you can't compare models by token count like that
XCSme@reddit
Why is that? The cost is per token, so 2x reasoning tokens = 2x the cost
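The arithmetic being debated here can be sketched in a few lines. The point is that cost scales linearly with token count only when the per-token price is the same; if the Pro model were priced higher per token, halving reasoning tokens would not halve the bill. All prices and token counts below are hypothetical placeholders, not Xiaomi's actual rates:

```python
def completion_cost(reasoning_tokens: int, output_tokens: int, price_per_mtok: float) -> float:
    """Dollar cost of one completion at a flat price per million tokens."""
    return (reasoning_tokens + output_tokens) * price_per_mtok / 1_000_000

# Equal per-token pricing: half the reasoning tokens means a cheaper completion.
base = completion_cost(reasoning_tokens=8000, output_tokens=1000, price_per_mtok=2.0)
pro = completion_cost(reasoning_tokens=4000, output_tokens=1000, price_per_mtok=2.0)
print(pro < base)  # True

# Different per-token pricing: fewer tokens no longer implies lower cost.
pro_premium = completion_cost(reasoning_tokens=4000, output_tokens=1000, price_per_mtok=5.0)
print(pro_premium > base)  # True
```

So "2x reasoning tokens = 2x the cost" holds within one model's pricing, which is the comparison being made here, but not automatically across two differently priced models.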
XCSme@reddit
Doesn't seem much better than MiMo-v2-Pro in my tests, which likely means it's just further fine-tuned for coding/agentic use.
LocalLLaMA-ModTeam@reddit
Duplicate thread
lendo93@reddit
We have uploaded some initial, objective benchmarks at https://gertlabs.com/?mode=oneshot_coding
TL;DR: the open-weights one is severely benchmaxxed (possibly the most benchmaxxed we've tested), cannot adapt to unorthodox benchmarks, is verbose, and often runs out of output tokens before completing solutions to difficult problems. Probably not going to be your daily driver, trailing far behind other recent open-weights releases.
Long-context agentic work and other benchmarks will be folded into the combined scoring within 24 hours.
So far, the results are strange (especially for the proprietary Pro version): the open-source version is pretty weak in initial testing. The Pro version is definitely intelligent, but it struggles with formatting and has a low success rate in our tests, which drags down its score. This suggests a fair amount of overfitting. Basically, when it works (plays the eval without compilation failures, sandbox violations, or other errors), it's a smart model. But it's not at the level of Kimi K2.6 or GLM 5.1. Both the open-weights and free versions are extremely verbose.
Still, the quantity and quality of open weights options is incredible. Moats are closing.
PinkySwearNotABot@reddit
wow. didn't even know Xiaomi was in the running!
fsalaizai@reddit
Tbh this is hands down the best model after Opus. It manages to see things you can't see with Opus 4.7 (total vibe-coder perspective).
iamapizza@reddit
The long weight is over
paperbenni@reddit
They finished V2.5 before making V2 open weights? What? The official line is that they were still iterating on V2 before making it public. Is that what this is? If so, why offer paid API access at all if the model is apparently that experimental?
TKGaming_11@reddit
I don't see weights anywhere
Specter_Origin@reddit (OP)
They announced it as open source, so I bet the weights are coming but aren't out yet...
johnfkngzoidberg@reddit
Then why post? Hype spam.
Specter_Origin@reddit (OP)
Where is the hype spam? They announced a new model that the company says will be open-sourced.
JacketHistorical2321@reddit
You said "released". It's not released. "Soon" is not released, dude.
adeadbeathorse@reddit
If there were no indication it would be local, I would agree with you. But this post fits this sub if it will be open weights.
cr0wburn@reddit
Cool, anyone know the size?
R_Duncan@reddit
Is that really open source? They released only the Flash version of MiMo-V2, so there's room for doubt.
BulkyAd8059@reddit
I believe MiMo 2.5 is actually an improved version of Flash. I'm getting 189.0 tokens/second in testing, and the quality is very good, almost 3x faster than MiMo-v2 Flash.
Specter_Origin@reddit (OP)
We will just have to wait and watch
christianarg7@reddit
Does anyone know how big the processor is?
Specter_Origin@reddit (OP)
GufGuf when?
patricious@reddit
my tummy is hungry for some GufGuf